
Mole

Member
  • Posts: 185
  • Joined
  • Days Won: 4

Mole last won the day on February 25, 2018

Mole had the most liked content!

Profile Information
  • Gender: Male
  • Location: Australia

Recent Profile Visitors
  • 938 profile views

  1. I did indeed accept your syllogism as part of my original formulation. However, I could make an argument that the syllogism is false. Stefan actually made this argument, and it seems compatible with physicalism: the whole is something other than the sum of its parts. So while atoms are determined, free will is something else that is not determined. This does not contradict physical law, because (1) physics doesn't necessitate determinism, and (2) physics doesn't necessitate that everything reduces to particles; in fact, it necessitates the opposite. Physics also has laws that apply at one level of analysis but not at others. For example, you can't formulate the laws of electrical circuits by looking at individual electrons. Another, perhaps stronger, example is that you can't formulate theories of consciousness without considering the whole neural network. Electrical circuits and consciousness exist, so we know that studying particles with Newtonian, Einsteinian, quantum, or string-theory physics is inadequate for understanding the world. We have to use different levels of analysis. Hence we could understand single neurons as being determined in a closed system, yet understand the brain as having free will.

Now, this raises a question. The elements of consciousness do seem to depend on reduced parts. For example, emotions light up in one part of the brain, senses can be reduced to photons entering my retina, and movements are firings of neurons. It seems, then, that even if the brain has free will, any particular function of the brain must correlate with the functions of its parts, its neurons. It would not make sense that I could have a dream yet leave no trace of it in an MRI. Hence it must be that free will actually determines the positions of neurons; that is, there is a bidirectional relationship between the whole brain and its parts.

We are accustomed to believing that everything is reducible: that all phenomena are determined by their parts, those parts by their own parts, and so on. I don't see why this must be the case. It makes more metaphysical sense to say that the world consists of entities that act according to their nature, and that they have relations to other entities, including the very parts of the entity itself, but that the entity is the originator of its effects. This is indeed how a child would first understand the world. We may try to prove that everything is reducible, but we should at least recognise that any such conception must be compatible with what I just said, and also that it is possible to have metaphysics without reduction.

The only issue I have with what I have described in this reply is this: as I said, the brain must correlate with its parts even if it isn't determined by those parts. A brain cannot do anything unless something has changed in its parts. However, it also seems to follow that free will cannot be exercised without being correlated with those parts. If I decide to shoot a gun, those thoughts will be reflected in my neural structure. How, then, could I exercise free will such that it has an effect on the parts of my neural structure, unless free will itself was correlated with those parts? It seems that free will cannot affect parts of the brain, because any deviation from the determinism of particular neurons would itself need to be reflected in those determined neurons, meaning the deviation is not possible.
This is, I believe, a strong argument against what I have proposed in this reply, so I am not entirely convinced of what I have said. The only possible solution is that free will is not physical and has no correlate with particular neurons at all. This isn't as crazy as it sounds. After all, if we are going to conceive of free will at all, it seems to be something separate from my thoughts, because pure free will seems to have no content or mechanism whatsoever. It really does seem to be something that precedes my thoughts, and hence it could be non-physical. Furthermore, it may not exist in some other realm, but rather be a property of the whole brain. In this way it still belongs to the whole, and so the idea that the whole is something other than the sum of its parts still stands. This possibility is the most likely if free will does exist. Maybe in that case it is actually physical but simply doesn't correlate with its parts. Whether this is at all possible I am not sure, and it's something I have to think more about.
  2. Thanks for your perspective. I'll definitely consider what you said.
  3. When I perceive a cloud, the cloud is part of the causal chain that makes up my consciousness, in the sense that it led to my perception. But as you seem to agree, consciousness is another part of the entire chain. So what I mean by 'external' is external to the part that is conscious.
  4. "Sorry, I don't think I know what this means." It means that consciousness is one experience. It is not like separate body limbs; our experience is one thing, and that correlates with an endogenous system. The system is the neural network, and it is the entire neural network working endogenously that makes consciousness one thing.
  5. I don't see why truth is not an architectonic good. I should have been clearer in my last comment: both options there have truth as the architectonic good. Either actions are truth statements, or actions are automatic, but either way we strive towards the truth using our willpower. I believe the architectonic good can be nothing except the truth, because all that conscious deliberation is capable of is truth-seeking. All my emotions, and hence desires, are automatic responses to truth. That might raise the question of what good our emotions seek, and I would say that is entirely subjective and based on our biological nature. Whatever that nature is, however, it must be consistent with the truth; so, for example, murder could not be a good, because murder is not consistent with the truth of morality. The unfortunate thing is that people generally believe emotions are something to be fought against, and that we must strive towards some abstract ideal good regardless of how we feel, but that is a paradox: it would mean we would have to be unhappy to be happy. Emotions are responses to thoughts, and it could not be otherwise, because emotions can't be experienced without context. If you believe something, your emotions will automatically adjust to it. If you believe something contrary to your emotions, you don't really believe it deep down; rather, you are rationalising, and that is your false self.
  6. Technically it must, if "I should do X" is considered a truth statement. However, there is also the approach on which "I should do X" is not a truth statement; rather, a person will automatically act on knowing the truth. I presume the second approach is correct, because an ought cannot be derived from an is.
  7. I am not sure what you mean by "out of sync with reality"; I am not sure where I said or implied that. I believe these people are conscious, but they cannot retain memories, so they cannot remember what they did. I don't necessarily think physicalism must hold matter to be primary. I think it can hold matter to be identical to consciousness. In fact, I think it is crucial that it is identical, because if consciousness were an after-effect of matter, then consciousness would be determined by external elements, namely the brain, and that would be determinism. But if consciousness is the elements in the brain, then it is not determined by those elements; it simply is those elements. This means no external elements are necessarily determining it or its outputs, so it can be self-caused.
  8. My earlier questions: "Maybe I should say 'not contingent' instead of 'independent'? (1) As with the definition of free will, I mean 'not determined by'. (2) Also, what do you mean by a physically appropriate surrogate? (3)" Your reply: "(1) See my first comment in this reply above. (2) Ok. (3) Sure. Back then, IQ-G; now, additionally, living vs. dying. I was thinking back then about how IQ-G (biologically determined) isn't sufficient for free will, from personal observation and empirically too. Place two smart enough individuals side by side, and if one of them isn't capable of manifesting free will in a scenario where the other was capable of utilising it, it's fair to conclude that free will requires more than just sufficient bodies/brains... or 'whatnot' (the chemistry is really far too complicated here, for me). But now that I have revisited it, it's also likely that, since free will can be partially blocked or erased (i.e. in abuse, coercion, threat), it must either have been intrinsically present before as a capability and later developed into a more complex system, or, if it wasn't intrinsically there, maybe it was taught/learnt. Which made me ask: would an individual be able to survive on a deserted island (provided enough food, no predators, an endless supply of fdr podcasts...) by having to rely on the self only, to invent and then manifest free will? I think yes, because life's core is an endless chain and continuous manifestation of choices. Failing to make decisions on a continuous basis equals being very good at atrophy, at dying. Does what I have written make sense to you?"

Oh, I see what you mean by a physically appropriate surrogate now. You were just trying to explain how the 'will' is dependent on the physical world, particularly on the kind of thing in which it manifests (particular brains). Yes, that may be a problem for dualism. What I understand from the rest of what you said is that you are questioning the prerequisites of the 'will'. I would agree with Stefan here that free will is the ability to compare against an ideal standard. That entails particular relationships in the brain. It is essentially a pattern of consciousness.
  9. My argument doesn't necessarily need the 'will' to be an extra function, though it may be. The 'will' is a part of conscious experience, and I think my argument applies to our entire conscious experience, so the 'will' would be covered by that, whatever the 'will' is exactly. My argument is basically trying to defend the idea of self-causation. Consciousness is a unified experience that perfectly correlates with a multi-directional, unified causal chain. The chain causes human actions. Since the chain is unified and consciousness is unified, external causes cannot change the chain without the entire chain changing. Because the entire chain is changed, consciousness is changed so that it is aware of this external cause. However, if it is aware of this external cause, then the external cause does not determine the conscious experience, so it does not determine human action. Yes, every action is 'determined' by prior causes, but we are fully experiencing all those causes, so there are no unknown factors, if that makes sense. Rather than actions being determined by external causes, they are self-caused. We often think that if consciousness is contingent on some causal chain, it is determined by things outside of our control. But if consciousness is the causal chain, then we only have to worry about causes external to that chain.
  10. Your example seems to be a case against free will rather than against the will itself. I would think the will is self-evident, just as consciousness is self-evident.
  11. I have now replaced Cartesian dualism with mind-body dualism. Maybe I should say 'not contingent' instead of 'independent'? As with the definition of free will, I mean 'not determined by'. Also, what do you mean by a physically appropriate surrogate?

There are two different meanings of 'levels'. I am aware researchers are slicing up awareness, and even free will, to say that there are different degrees of free will operating at the same time. Another meaning is reflected in the paper you cited, which suggests different stages of awareness, much like the developmental stages a child goes through. I will evaluate both meanings against the original idea I posted.

First meaning (degrees of free will): It seems to me that if my argument is sound, then our awareness is fundamentally in control of the external causes affecting it. For example, I may be thirsty, but my thirst doesn't override my free will. Because awareness is unified (awareness is a whole package), it seems that if any external cause were to create an effect in my neural network, the neural network would act as a whole to integrate this effect with the rest of the network through causal links that run through the entire network. Indeed, this is exactly what we experience when we realise we are thirsty: the neurobiological changes reflect the phenomenological changes. For this reason, I doubt that there are degrees of free will.

Second meaning (stages of free will): Of course, responsibility depends on knowledge. It would seem that as more information is integrated, the person reaches higher stages of free will.
  12. I shouldn't have assumed that you knew what I meant; I apologise for that. By 'specific details', I just mean the actual things I have written down, outside of the context of the idea I'm trying to convey. I was not accusing you of doing this, just cautioning you in case you were. Someone can say something factually incorrect, but they say it for a purpose, and sometimes people ignore that purpose when responding. It doesn't really matter if what is said is factually incorrect if the purpose is still meaningful in the larger context. But now I know you weren't doing this; you just didn't understand the purpose of why I said it.