The Dark Mod Forums

Game Addiction


Aceyalone


I think the greatest deterrent to getting "addicted" per se to MMOs, for me, was the fact that I can't stay up at all hours of the night. I'm one of those few people (I know, it's rare) who actually needs a good night's sleep and can't function without one.

Me too. If I stay up for an entire night then I'm stuffed for days. If I only get 7 hours of sleep then I'll be quite tired the next day; any less than that and I'll be really tired.

 

On the other hand, if I sleep in too long then I also feel tired. So it's quite rare for me to actually feel truly rested.

My games | Public Service Announcement: TDM is not set in the Thief universe. The city in which it takes place is not the City from Thief. The player character is not called Garrett. Any person who contradicts these facts will be subjected to disapproving stares.
Link to comment
Share on other sites


So enough about being addicted to *playing* games...what about being addicted to working on a mod? :P


Not a problem; at least you're actually achieving something, and learning and improving your skills in whatever area you're contributing to.

Running around in an MMO pretending to be a dwarf, dispatching endless hordes of goblins by swinging your big Sword of Doom is just a pointless waste of time.

Ok, it may be fun for some people, but that's the problem: that sort of simplistic and repetitive 'fun' is addictive to most people.

Sure, not everyone ends up sitting up all night playing it, but I guarantee you that the vast majority of people who play WoW would admit they play it too much.

When I played EverQuest, I knew I was playing it too much, but I still did it. I think I tried to justify it by saying that I was getting more value for money if I played it more, since it cost the same every month no matter how much you played.

Civilisation will not attain perfection until the last stone, from the last church, falls on the last priest.

- Émile Zola

 

character models site


Evidence is possibly too strong a word. It is an indication that what people perceive as a conscious decision can in fact be a reaction to a stimulus occurring at a lower level of cognitive processing, and not in fact the wilful choice they believe it to be.

 

This does not, of course, disprove the existence of free will, but it provides evidence of an alternative process which could potentially be occurring in other cases of perceived volition.


What exactly is your definition of free will?

Since we're biological machines, there has to be, at some level, some technical or material limitation on our thought processes, and therefore we are not totally free thinkers.

We're limited by our hardware.

We may be free within those bounds, but only in the sense of a prisoner who is free to wander around within his cell.



I guess my definition would be something like "An exercise of free choice which cannot be explained by a purely mechanistic model of brain processes." I do not believe such a thing exists, and clearly you don't either -- but there are plenty of people who do, and who will claim that mechanistic brain models are flawed because they "cannot explain Free Will".


Even within our limited hardware I don't think we have free will: our will is driven either by natural impulses at a deeper level, or by natural logic.

All we do is choose from a limited list of options that are already set in stone, using our limited hardware.

If you want to call that free will, then so be it.



Evidence is possibly too strong a word. It is an indication that what people perceive as a conscious decision can in fact be a reaction to a stimulus occurring at a lower level of cognitive processing, and not in fact the wilful choice they believe it to be.

 

This does not, of course, disprove the existence of free will, but it provides evidence of an alternative process which could potentially be occurring in other cases of perceived volition.

 

 

Fair enough. It may be that in instances where the brain has been tampered with, where a command is planted on one side (the command to laugh, for example), the other side feels compelled to provide a reason for its actions ("I chose to laugh"), perhaps either out of the habit of thinking one's actions are the results of one's decisions ("of course I chose to laugh, why else would I laugh?") or because, in the realized absence of such a reason, an individual feels compelled to provide one anyway ("I didn't choose to laugh, but since I only laugh for reasons, I must have had one").

 

I think experiments like these are not necessarily useful for discussing freedom of the will. Yes, if you stick a wire in someone's brain you can stimulate certain reactions in the mind. I certainly would not say my will was free if every input was the product of some other agent, a mad scientist or a demon who has possessed me, and the choices I made were also the product of this agent, namely if it would not let me respond in any other way but the way it wanted. There's no doubt this is freedom-destroying.

 

But consider a related scenario. The mad scientist is sticking wires in my brain, making me think I'm in a room filled with hot girls. Now, I want to smooch these girls, but they are not there, so how can I be freely choosing to kiss them? Well, because the decision to smooch the girls is still mine. The girls are an illusion, but the desire to take the action of smooching is still the product of my reflective consciousness, even one that has been "compromised" with false data. My freedom of action has certainly been compromised, as I am taking actions that have no real-world correlate. But the decision to act has still been mine.

 

So now the mad scientist implants the desire to smooch the girls as well. This is still not affecting my freedom of will formation. I can desire that action, but remember that humans can reflect upon their own desires and from that process form volitions, desires that become our actions. So now I want to smooch the girls because the mad scientist has made me feel that way, but I can still reflect upon the desirability of that course of action and produce another. Say I have these feelings from the scientist, but my girlfriend walks into the lab. No way am I kissing them, because now I have a new desire not to be beaten unmercifully by my girl.

 

Now, if the mad scientist implants a desire (call it X) into one's head as well as a firm desire to want only to do X, we can say that our wills are no longer free. The crucial difference is in capturing the reflective hierarchy that allows us to have desires, desires about those desires, and most importantly desires to make other desires our actual actions. When this arrangement is compromised, only then can it be said that no will of our own can be produced because all of its components have been infiltrated.


All we do is choose from a limited list of options that are already set in stone, using our limited hardware. If you want to call that free will, then so be it.

 

I disagree that we just choose from a limited set of options. I think our hardware has the ability to generate random ideas, then we use our logic machinery to evaluate whether or not those ideas would help us in our current situation. This process is going on all the time: we look at a horse, and somehow the idea pops into our head to "glue a dinner plate to that horse", but we weed it out as a dumb idea and ignore it, almost subconsciously. Then we get the random idea of "maybe I can sit on top of that horse and ride it around", and this is selected as a good idea, because we realise it can help us accomplish our goals of finding food and shelter much faster.

 

Given how powerful evolution is at optimizing things, I wouldn't be surprised if our brains have evolved an evolution algorithm for ideas that occasionally constructs random ideas and leaves it up to our logic functions to select the good ones. Then, once an idea gets past an individual's selection logic, the person tries to implement it. If it works they keep implementing it, others see it, and good ideas are remembered and communicated to future generations. I guess that's the "meme" concept, but I'm not sure since I haven't read that book.
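The generate-and-filter picture above can be made concrete with a toy sketch (everything here -- the candidate ideas and their scores -- is hypothetical, invented just for illustration):

```python
import random

# Toy sketch of "blind idea generation + logical selection":
# candidate ideas are generated at random, then a crude evaluation
# function plays the role of our "selection logic" and weeds out
# the bad ones. All ideas and scores are made-up.

IDEAS = ["glue a dinner plate to the horse",
         "ride the horse around",
         "ask the horse for directions"]

def evaluate(idea):
    # Hypothetical scores for how much an idea helps with food/shelter.
    scores = {"glue a dinner plate to the horse": -5,
              "ask the horse for directions": 0,
              "ride the horse around": 10}
    return scores[idea]

def generate_and_select(n_candidates=50, keep=1):
    # Blind generation step: sample random candidate ideas.
    candidates = {random.choice(IDEAS) for _ in range(n_candidates)}
    # Selection step: keep only the highest-scoring survivors.
    return sorted(candidates, key=evaluate, reverse=True)[:keep]
```

The surviving ideas would then be tried in the world, and the ones that work get copied by others -- which is roughly the "meme" step the post gestures at.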


Then once an idea gets past an individual's selection logic, the person tries to implement it. If it works they keep implementing it, others see it, and good ideas are remembered and communicated to future generations. I guess that's the "meme" concept, but I'm not sure since I haven't read that book.

That's pretty much the basics of memes, yes.

 

I'm a bit hesitant to jump into arguments that are outside my domain, like psychology, but given that we are discussing things on a level that is clearly past everyone's domain, I think I'll throw in my two bits. I tend to think that arguing about whether free will exists on the grounds that proper electrical stimulation of the brain can trick or "force" the brain to do something is a red herring. Think about it like this: your average Windows user can do all sorts of things with the OS, but a trojan can do exactly those same things (and more) to the exact same computer. Does this mean that the user has no control over their computer in the absence of the trojan? By definition, a computer is incapable of telling the difference between a successful trojan and an actual user.

 

Don't think that I'm trying to posit the existence of a soul by talking about a user (I make no claims about the existence of a soul at all). We can easily replace the user with an application, for example, or even just an OS that is capable of reasoning about the source of the commands that it intends to execute. The only way a computer can determine whether a command is hostile or not is if a hostile command fails to break past the security measures in place. If the command does break past them, the computer must necessarily assume that it is legitimate. Just as the brain missing its corpus callosum attributes external commands to its own internal functioning (a conscious choice, etc.), a compromised computer assumes that commands from a hostile source are its own and treats them as such. You can't assume that a compromised computer will behave the same as an uncompromised one, nor can you reason that, because a compromised computer behaves in an insecure manner, there was no security to begin with.
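That point can be sketched in a few lines (the token scheme is invented purely for illustration): once a command presents valid credentials, the system has no information left with which to distinguish the user from the trojan.

```python
# Hypothetical command gate: "security" is nothing but a credential check.
VALID_TOKEN = "s3cret"  # made-up credential for the sketch

def execute(command, token):
    # The only hostility test available: does the command get past security?
    if token != VALID_TOKEN:
        return "rejected"
    # Past this point the source is invisible: a user's command and a
    # trojan's command with a stolen token are processed identically.
    return "executed: " + command
```

That the gate can be fooled tells you nothing about whether there was a gate; by the same logic, that a compromised system misbehaves doesn't show it had no security to begin with.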


I think you guys are generally on the right track with what the main issues are.

 

I didn't like the trend of some of Orb's previous posts, not because I disagreed -- I think he's generally right -- but because he's focused on the less interesting question, IMO. The debate over whether behavior has a "spiritual" or "mechanistic" origin was settled over a century ago for anyone interested in the right answer. But the most important question, IMO, remains a live one: does human "freedom" have real content, in the sense of real agency? Saying it's "mechanistic" isn't an answer at all. It's the question! We still don't know if our agency has real efficacy in the mechanism.*1

 

Well, anyway, I think the trend of this discussion has been on the right track. "Agency" seems to work when the whole brain system works like it's supposed to ... and we can see that when the system breaks down, so does its ability to have agency. So the question then is: how do all the pieces fit together to make a system that works properly? The sorts of thought experiments you guys are posing are good ones, because each is trying to isolate a piece: you need this piece, because if you take it out then the system wouldn't have agency, for this reason.

 

From my own studies, the field that's going to ultimately get closest to answering these sorts of questions, IMO, is called neuroeconomics, the combination of behavioral (evolutionary) economics and neurophysiology/cognitive science: the brain is first an optimization machine that tries to maximize the agent's utility at the least cost to it, in an uncertain world full of potential gains and risks. And it's on top of that base that you add things like "culture" and "personality" that color individual agents, with their individual commitments to this and that.

 

Since armchair science is what we seem to like around here, I'll pompously submit that there are three big pieces to making real agency, or "freedom", in humans (the first two also applying to animals). I think, Nyar, Ish, Max, oDD, Orb, all of the intuitions you guys just offered can either find their way some place in this model, or this model is my response where I might disagree (e.g., to oDD's apparent suggestion that decisionmaking is never narrative-based, but only either reflexive or by "natural logic" (utility-optimizing?), which doesn't mesh with some types of narrative- or identity-reinforcing behavior, IMO, at least in the short run -- it may in the long run, and we'd both be right). Ok, here we go:

 

(Assumption) First of all, I should mention that the background observation needed for these three things to make sense is that the brain is multi-modal in its decisionmaking.*2 There are different functional brain areas that are all working simultaneously with a measure of autonomy, in concert and doing a lot of crosstalk for sure, but making their own original decisions first, which are then circled around the system so that other systems can modify them, veto them, trump them with their own (e.g., emergency) strategy, etc. (Minsky's Society of Mind sums it up).

 

(1) The bedrock mechanism is that whatever brain area is active generally wants to find a (Nash) equilibrium strategy for whatever "game" it finds itself playing. (A Nash equilibrium strategy is one such that, once it is arrived at, if the agent were to do a single thing differently it could only lose utility over time.) A meme is just the symptom of an equilibrium being reached, but it isn't the mechanism really doing the work itself. People don't follow what other people are doing for its own sake, usually, but because doing so usually significantly lowers the cost of transactions.
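The equilibrium definition in parentheses can be checked mechanically. A minimal sketch, using the standard prisoner's dilemma payoffs purely as an example game: a pair of strategies is a Nash equilibrium exactly when neither player can gain by unilaterally deviating.

```python
# Prisoner's dilemma payoffs (example game, not from the post):
# PAYOFF[(row_strategy, col_strategy)] = (row player's payoff, col player's payoff)
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
STRATEGIES = ["cooperate", "defect"]

def is_nash(row, col):
    row_payoff, col_payoff = PAYOFF[(row, col)]
    # Row player: would any unilateral deviation pay strictly more?
    if any(PAYOFF[(r, col)][0] > row_payoff for r in STRATEGIES):
        return False
    # Column player likewise.
    if any(PAYOFF[(row, c)][1] > col_payoff for c in STRATEGIES):
        return False
    return True
```

Here mutual defection is the unique equilibrium: from (defect, defect), either player who "does a single thing differently" only loses utility, which is exactly the property described above.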

 

(2) The second key feature is that what counts as "utility" and what "games" are even available is defined by natural selection -- feeding, fighting, fucking, fleeing -- the things that helped the system perpetuate itself in the past which naturally selected the systems' sensitivity to just those things.

 

So these two pieces help explain how the system got designed the way it is, and you could go system by system that has to do with behavior -- vision; hearing; the limbic system and emotions of love, fear, hate; hunger; etc... -- and the parts start to fit together according to these two pieces. That describes how the "decisionmaking" system is put together.*3

 

(3) The third piece I'd say is (semantic) "adaptability" to environment. A person is sometimes put into novel games the wetware hasn't been rigged to play. This is where culture, language, and frontbrain logic processing (along with the elusive self-narrative I mentioned in footnote 2) get a lot more involved ... because the rules for getting payoffs are different; the agent has to piece together new rules based on what value even means in the new game. I'll have to save that for another post, though.

 

End of post.

---------------------------------------------

 

* footnote 1 (dear lord, I have to resort to footnotes to shorten my posts, sigh): Orb answered one of my posts by saying humans can still be held responsible because they are "rational agents" that respond (economically) rationally to their situation (such as being deterred from pursuing the gains of criminal behavior by the risk of the cost of jail time). It's a good answer, but where did this "rational agent" come from? I don't think it's been finally determined exactly how humans become rational agents, even if we can see that they normally are, if we trust the economists.

 

* Footnote 2: but not in its narrative-making about its decisions, which is handled by just one module: language/left temporal lobe. That's the important "self" part of us that acts as executive over all the agencies doing its bidding; its original and veto powers are particularly important. A whole lot of mechanisms are put in place to keep the executive established clearly at the top: the "feeling of agency", the "unity of consciousness", the "cohesiveness of behavior", etc. They may be constructions, but in normal practice they do their job pretty well in keeping the narrative-self focused on the task at hand and not on how it can hold its minions together.

 

* footnote 3: The best example I can think of for how this works is a brain area (LIP) which determines the focal point of our eyes. This may be a little too much of a detour (and I think I described it before), but it's too damn cool to pass up, because this is the closest look we've had. The brain area is basically a 2D matrix of the entire visual field, and when one point is "activated", it is rigged so the x- and y-axis eye muscles are tensed as a function of distance from the matrix origin (point 0,0), effectively moving both eyes' focus to the equivalent field point.

Now, overlaid on this matrix is a "relative expected utility (REU) matrix": essentially, every point on the matrix is given an REU rating which varies based on the pay-off that the system can expect relative to the pay-off of looking at any other point. The bigger the payoff the agent can expect relative to the others, the higher the rating. And as you can guess, the basic mechanism is that the point with the highest REU rating takes control of the muscle-controlling matrix and the eye moves accordingly.

The clincher is that the REU rating is sensitive to the traditional sorts of basic goods -- food, water, sex. That is, for example, in the appropriate situation the strength of the signal would be a function of the expected nutritional intake of a squirt of fruit juice for looking at the "right" dot (nutritional value * likelihood of getting it), relative to the expected intake from looking at other dots. Change the size of the fruit-juice squirt or the likelihood of getting it, and the REU ratings adjust themselves right alongside to (almost) exactly meet the new Nash equilibrium strategy for maximizing one's nutritional intake over time, just as you'd expect from Nash's theory (I say "almost" because it takes a number of trials to figure out the new payoff structure). It's astoundingly remarkable that they found such a clear 1-to-1 mapping, IMO.
The rating even tapers off after repeated "wins" (the more food you get, the less you want more) just like a real utility index would in economics.
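The REU mechanism described in footnote 3 can be sketched in a few lines. The field points, payoff values, and probabilities below are made-up numbers, not data from the actual LIP studies; the sketch just shows the rating-and-winner-take-all logic.

```python
# Each candidate field point gets a relative-expected-utility rating:
# (payoff value x probability of obtaining it), normalised against the
# other points. The highest-rated point "takes control" of the eyes.

def saccade_target(points):
    # points: {(x, y): (value, probability)} -- all numbers hypothetical
    expected = {p: value * prob for p, (value, prob) in points.items()}
    total = sum(expected.values())
    reu = {p: e / total for p, e in expected.items()}  # relative rating
    return max(reu, key=reu.get)

# A big juice squirt that is unlikely vs. a smaller, near-certain one:
points = {(0, 0): (10.0, 0.2),   # expected payoff 2.0
          (2, 2): (4.0, 0.9)}    # expected payoff 3.6
```

Shrink the likelihood of the squirt at (2, 2) and the ratings rebalance, so the target shifts back to (0, 0) -- the payoff-tracking behavior the footnote describes.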

 

...................

 

Ah damn, I should have a postscript. To me, these aren't just trivial questions, fun as they are to think about. We really want a clear picture of what human freedom means, because these aren't just esoteric thought-puzzles. Orb was pretty quick to answer why we need criminal sanction from an economic perspective, but from a moral perspective ... you can't just lock up or torture people because it will lead to a more socially efficient outcome (that is, gov't action being just about reconfiguring individuals' payoffs so that they do the "right" thing that maximizes everybody's interest overall). I don't think Orb thinks that ... I'm just pointing out that his short answer is just a starting place, as he'd probably admit as well.

 

The real nature of "freedom" and "agency" matters, in the details, for what you can imprison people for and for how long; what kinds of civil liberties people deserve; whether disadvantaged populations (ethnic minorities, women, the working class, etc.) are really "choosing" their own course in life, so that we should have a hands-off attitude, or are having it pushed on them to hold them down and keep more powerful interests in power, in which case it wouldn't be wrong to help empower them to make meaningful choices. All sorts of important questions are raised by what human freedom really amounts to, IMO. Even why we can't stop playing video games.

Edited by demagogue

What do you see when you turn out the light? I can't tell you but I know that it's mine.


I didn't like the trend of some of Orb's previous posts, not because I disagreed -- I think he's generally right -- but because he's focused on the less interesting question, IMO. The debate over whether behavior has a "spiritual" or "mechanistic" origin was settled over a century ago for anyone interested in the right answer. But the most important question, IMO, remains a live one: does human "freedom" have real content, in the sense of real agency? Saying it's "mechanistic" isn't an answer at all. It's the question! We still don't know if our agency has real efficacy in the mechanism.*1

 

This last part has been answered for me at least: it is the peculiar structure of our mechanism, our ability to abstract things like the idea "I want to do X", then "I want to want to do X", and most crucially "I want to want to do X and I want to make it what I actually do", that captures the free formation of our wills. Desires arise from elsewhere and flow through our consciousness, everything from hunger to the highest intellectual abilities, but even as they do they are subject to a scrutinizing second look in the mind as abstracted objects themselves. We think our thoughts, and we think thoughts about them, and some of those secondary thoughts are that some of our thoughts become our actions, what we actually want to do. It's a picture of the mind as a filter, a sieve that holds back some desires from becoming actions and lets others pass. The question is not whether we are conscious of this process or not; we are this process. To question whether we are in control of this process is to ask if we can be in two places at once.

 

From my own studies, the field that's going to ultimately get closest to answering these sorts of questions, IMO, is called neuroeconomics, the combination of behavioral (evolutionary) economics and neurophysiology/cognitive science: the brain is first an optimization machine that tries to maximize the agent's utility at the least cost to it, in an uncertain world full of potential gains and risks. And it's on top of that base that you add things like "culture" and "personality" that color individual agents, with their individual commitments to this and that.

 

I can see how neuroeconomics would have great value for animals and for some human behavior, but I'm curious about the line it draws between culture, personality, and these deeper functions. The arrangement you have described seems to imply that even the most irrational of actions would have to have some rational basis, as this "optimization" mode is the first, fundamental mode through which all behavior must pass. I would posit that it's more mixed up than that, with reason and unreason sometimes flowing into and informing one another. But maybe I'm reading too much into your post.

 

 

(Assumption) First of all, I should mention that the background observation needed for these three things to make sense is that the brain is multi-modal in its decisionmaking.*2 There are different functional brain areas that are all working simultaneously with a measure of autonomy, in concert and doing a lot of crosstalk for sure, but making their own original decisions first, which are then circled around the system so that other systems can modify them, veto them, trump them with their own (e.g., emergency) strategy, etc. (Minsky's Society of Mind sums it up).

 

I remember some of this from my brief neuroscience readings and it's very interesting, but I'll dash off two short points and then to bed. For one, I'm leery of economic explanations in general; in my experience they are abused as the sole source of understanding certain phenomena, much as science is. The three levels of function you describe are also of interest, but I wonder about the arrangement. Anyway, I have to get to sleep; I'll edit and re-respond tomorrow night.


Well, a "quick" clarification (sorry, I try!) ... I think you're interpreting my post the wrong way round. "Reason" and "economically rational behavior" are what we may see coming out, and the system may be sensitive to such variables, but the system itself can't be these things per se, because it's constructing them. I'll have to explain for that to make sense (I hope!).

 

One way to think about my understanding of it is that what we think of as "reason" and "rational behavior" are actually special kinds of emotional states ... so when you say "with reason and unreason sometimes flowing into and informing one another", that's actually an insight into just the way I think about it too.

 

The ground-level mechanism is an emotional one -- neither "reasonable" nor "unreasonable" per se; those are words that I tend to believe actually depend on the context of the situation. If it's a familiar game the system is playing, then the behavior will tend to be "rational"; if it's very unfamiliar, then the behavior can be quite irrational. They use the term "cognitive frame" for a mechanism doing its thing on familiar turf (typical-apeman situations ... trying to win over a mate, running from a wolf), and "cognitive bias" when the very same mechanism does idiotic things in non-apeman-typical situations ... e.g., we have strong emotional urges to hold on to things we have in our possession and to discount things we don't have yet but could obtain with some work (the endowment effect). In economic terms this can be irrational in, e.g., investment or credit situations where there are high opportunity costs that the agent isn't taking into account (we'd be much better off dropping our losing project -- investment, education, g/f, spouse, etc. -- right away and moving our resources to something more winnable ASAP; the longer we wait the more we lose, but we are often stricken by the deepest emotional ties to such things) ... although you could imagine that on the East African savanna a million years ago, by and large, it would lead to pretty low-cost, efficient outcomes.

 

Another way of explaining it: the LIP doesn't abstractly inform us of the value of looking at X. We "feel" the irresistible urge at a deep, emotional level to do so. Even many "reflexes" have this aspect at some level. Or, like a game (since so much of this is built on game theory): when we are making decisions, we do so as if we are playing a game, and we are pulled into the "flow" of it when we want to make the "right move". We don't play the rules like a cold equation ... we get "immersed" in the game, and the better immersed we are, the better decisions we make. To the extent we try to play by cold calculation, it rips us out of the game and our performance doesn't always benefit (it depends on the game). I guess this point, that "it's an emotional state", should have been my second assumption/observation about structure.

 

You've also rightly questioned how this "emotional state" wetware business on which "rational" behavior is constructed connects to culture, personality, and more "deliberative" decisionmaking where we are talking to ourselves and being committed to certain narratives of identity (being the kind of person we want to be).

 

What I described above is more like the underlying structure of our "emotional state of decisionmaking", what you interpreted as

as this "optimization" mode is the first, fundamental mode through which all behavior must pass.

 

I'll have to think about whether that's a good description. To me, it captures one idea: that at the very root of many important decisions, no matter how much you deliberate with yourself or try to justify your answer, it's a "gut feeling" choice at the end of the day ("optimization mode" is maybe a misleading term; maybe better is "decisional-emotional mode" -- optimization just describes its genesis and maybe its 'primeval meaning' to us, the "primeval fear of loss" and "primeval hope for gain"). With such choices, you may think about it for a while, think of different considerations on different sides (each of which has an emotional impact, a pang of fear of real loss or a spark of excitement from the opportunity, etc.), and in the end you take a deep breath, and the overall emotional mush hanging in your chest (the pangs and the sparks fighting for control) tries to resolve itself (which is most of what I think "deliberation" is really doing), and you are finally left with this little half-resolved spark of hope that it might really work, I can really do this, and suddenly the spark "blooms" in your limbic system, and you say "so I'll do X! It feels right, even though I know Y (a little pang of doubt comes back), but all things considered, there's Z to make it work (doubt re-squelched)". Most of the work here is being done on an emotional level. And sometimes we bury it with rationalizations that don't always capture the real pressures motivating us on that emotional level ("a person always has two reasons for doing something: a good reason and the real reason"), because of other pressures (which I'll get into next).

 

But even when a decision isn't very well explained by such "pangs and sparks" (and be honest, they often are), there is a clear moment when a person can feel they are pushing their instinctive emotions to the side to make the decision cold, according to the narrative they've drawn out (a similar case is unemotional, low-stakes, day-to-day decisions, which are often played out rote to a particular narrative, a "habit of obedience") ... although even these situations have deep emotional roots. A businessman follows the rules of business "cold" not really to get the answer right per se, but because it's part of his ethos of being a "respectable" person, someone his father wouldn't be ashamed of, the kind of person he is committed to being on a deep, emotional level.

 

I mean, it gets complicated from this point ... but I think you can see the direction of my thinking. I see your intuition, and I think it's on to something that makes sense in the way I think about it, too. I don't trust economic models per se to have purchase in explaining why we do what we do, since we aren't following models per se -- as far as we're concerned, we follow our gut; we don't have to reflect on what our gut is actually doing. But I do think that what our "gut" is really after very often has its roots in motivations we can't see but that are trying to act in our best interest anyway, which brings the models in through the back door more often than we think.

 

If I could sum it up, maybe: Whenever we're playing a "game" in life with real stakes involved, on an emotional level we play to win.

Edited by demagogue



Hmm, I wish you guys wouldn't write your posts like it's your final thesis for a degree.

It makes it rather too long-winded for easy consumption.

I know for a fact that you guys could easily sum up what you have to say a lot more concisely - and make it a lot more readable - than that.

In this sort of forum, I really do prefer bullet points rather than an essay.

I mean, since my last post last night, 4500 words have been written...

 

All you have to do here is: first, explain exactly what consciousness is; then explain exactly what free will is; then decide whether free will can only exist in the presence of consciousness; and then decide whether, even if the latter is correct, the relatively small conscious level of our brain (the part which is you) can ever act independently of the majority of lower levels, which run on autopilot much like the brains of other animals.

Simple.

Civilisation will not attain perfection until the last stone, from the last church, falls on the last priest.

- Émile Zola

 

character models site


I disagree that we just choose from a limited set of options. I think our hardware has the ability to generate random ideas, and we then use our logic machinery to evaluate whether or not those ideas would help us in our current situation. This process is going on all the time: we look at a horse, and somehow the idea pops into our head to "glue a dinner plate to that horse", but we weed it out as a dumb idea and ignore it, almost subconsciously. Then we get the random idea of "maybe I can sit on top of that horse and ride it around", and this one is selected as a good idea, because we realise it can help us accomplish our goals of finding food and shelter much faster.

 

Given how powerful evolution is at optimizing things, I wouldn't be surprised if our brains have evolved an evolutionary algorithm for ideas, one that occasionally constructs random ideas and leaves it up to our logic functions to select the good ones. Once an idea gets past an individual's selection logic, the person tries to implement it. If it works, they keep implementing it, others see it, and good ideas are remembered and communicated to future generations. I guess that's the "meme" concept, but I'm not sure since I haven't read that book.
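The generate-and-test loop described here can be sketched in a few lines. This is a deliberately toy Python model, and everything in it (the bit-string "ideas", the scoring rule, the threshold) is an illustrative assumption, not a claim about how brains actually work: candidates are produced blindly at random, and a separate evaluation step keeps only the useful ones.

```python
import random

# Toy model of "generate random ideas, let selection logic filter them".
# All names and the fitness measure are illustrative assumptions.

IDEA_LENGTH = 8          # each "idea" is just a string of 8 bits
USEFUL_THRESHOLD = 6     # ideas scoring at least this survive the filter

def random_idea():
    """Blindly generate a candidate idea."""
    return [random.randint(0, 1) for _ in range(IDEA_LENGTH)]

def usefulness(idea):
    """Stand-in for the 'selection logic': here we arbitrarily say an
    idea is better the more 1-bits it has."""
    return sum(idea)

def generate_and_test(rounds=1000):
    """Produce many random ideas; keep only those that pass the filter."""
    kept = []
    for _ in range(rounds):
        idea = random_idea()
        if usefulness(idea) >= USEFUL_THRESHOLD:  # weed out "dinner plate" ideas
            kept.append(idea)
    return kept

good_ideas = generate_and_test()
# Every surviving idea clears the threshold; most random ideas were discarded.
assert all(usefulness(i) >= USEFUL_THRESHOLD for i in good_ideas)
```

The key design point, matching the post, is that generation is dumb and random while all the intelligence lives in the selection step; a fuller evolutionary algorithm would also mutate and recombine the surviving ideas rather than starting from scratch each round.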

 

My point was that using logic doesn't require the presence of free will, just the opposite in fact.


But if logic is already an obvious truth that's just waiting to be seen, then there's really no choice to be made.

I don't have any evidence of course, but I don't see how our consciousness can make decisions totally independent of the rest of the brain; and if it can't, then we do not have free will.


Hmm, I wish you guys wouldn't write your posts like it's your final thesis for a degree.

 

I think it's because I want to write something like a final thesis on this stuff that I like using this forum as a testing bed ... I need the criticism. :unsure: But I realize it's not very fair ... But now that I've said the big gist of it - I'm happy - I can go bullet.

 

I think, true, "consciousness" is one module out of many, but it's the executive. It gets special "veto" and "trump" powers, and it's also the main one to deal with the "identity-enforcing" stuff. And often the emotions can be understood as signals from other modules that are "for" it, to help it make a decision. So it deserves a special status.

 

On "logic" ... well, our forebrain apparently deals with deductive/inductive (logical) inferences from our knowledge base. How considerations (and inferences) add up to recommend a way to behave is a slightly different kind of "moral logic", although it builds off of knowledge logic.

 

One prof said that "freedom" is just being able to cover all the relevant considerations that are important to the person, with each having real trump power over the others according to how they really advance the person's interests/goals, so that you know the person has really "covered all the bases" before making a decision. That's a good reason to think there's a link between moral logic and freedom.

Edited by demagogue


I think, true, "consciousness" is one module out of many, but it's the executive. It gets special "veto" and "trump" powers, and it's also the main one to deal with the "identity-enforcing" stuff. And often the emotions can be understood as signals from other modules that are "for" it, to help it make a decision. So it deserves a special status.

 

I suspect you are falling into the "mind as a computer" trap with all that talk of "modules". A lot of the brain structure is decidedly not modular -- such as the left hemisphere dealing with language AND the right side of the visual field, while the right hemisphere deals with "artistic stuff" AND the left side of the visual field (or whatever it is, the details aren't important).

 

The view of the brain as a series of independent modules talking to one another is an optimistic, software engineer's perspective that AFAIK does not correspond with reality. In particular, I believe the idea of a "seat of consciousness" that controls the rest of the brain has been thoroughly discredited through neuroscience.


Ok, short answers, right?

 

(background reading, on modularity pro and con)

 

Since I came into cogsci from the cog psychology experimental end (and some neuroanatomy, but not the CS end), I tend to distrust the "computational" track, too, for similar reasons to what you mentioned. "Modularity" means different things, functional, anatomical, etc. I was talking about a pretty mild form.*1

 

Anyway, short answer is, I don't like too much CS analogizing either, but I'm coming to the idea of modularity from a different path anyway, which has more support, and is a weaker version than the controversial kind, so it doesn't wander too far away from that support.

 

As for "'seat of consciousness' that controls the rest of the brain", you're right there isn't support for that view of it, and I wasn't trying to claim there was.*2 There, that's short.

 

End of post.

--------------------------------

Nothing down here but optional reading:

 

*1 The brand I was talking about above is the observation from cognitive psychology that, in certain categories of situations, decisions are handled under tailored frames that seem "custom made" (by natural selection) to explain the behavior we see (even when it gets the "wrong" answer): the entitlement bias, the availability bias, etc. This kind of modularity is pretty well established in psychology and behavioral economics. I was trying to be agnostic on whether the modularity needs to go any deeper than that (e.g., the massive modularity thesis, which is closer to what you're talking about) because it's not a consensus position yet. Also, I'm not sure the version I gave fits the description "a series of independent modules talking to one another." It could just be a connectionist neural net that correlates the right frame with the right situation, with all the work buried in the web.

 

As for modularity in anatomy, there are a number of good tests to connect anatomy and function (fMRI; double dissociation: take out the area and you lose the function, and vice versa; etc.), and based on them, the brain is pretty anatomically modular over a whole host of cognitive functions (here's a good basic view). The jury may still be out on how far you want to interpret this as supporting functional modularity in the strong sense.

 

*2 Consciousness, understood in its biological sense, has several different functions. It focuses attentional energy on specific, impending cognitive tasks; it houses an emotional "state of the being" (like general mood), so decisionmaking doesn't lose track of the "big picture"; it centralizes (and organizes) reflective access to the various sense and internal-status data; and it has a particular sensitivity to the "narratives" that the language areas construct, which have a top-down influence on other operations (when you understand a situation as "I'm getting bullied here", your emotions react). This gives it a kind of practical privilege over behavior, but it doesn't imply automatic "control" per se - certainly not over "unconscious" behaviors like reflexes - and even "conscious" decisions are more often than not just deferring to a "gut feeling" without mandating or "controlling" that feeling (at most, consciousness doesn't stop the deference). The point being that I don't think I'd disagree with you; I still think that aspects of consciousness are amenable to scientific investigation, and that it has a privileged status among brain functions by design, but it's not fundamentally different otherwise (e.g., it's not made of spirit-stuff).

Edited by demagogue


Yeah, that's the point. If you don't feel like reading it, you aren't missing anything. Like footnotes, they just elaborate on what's already been said without really adding anything new. The only reason I wrote them at all is for the perverse people like Max who seem to share my affinity for the "actual answer" to questions.

that's a joke, people. I actually don't like writing long posts, either, because it makes discussion impossible ... I just get suckered into it sometimes. So much for free will...

Edited by demagogue

