RestoringGuy
-
There is an interval, however short, during which the owner is dead and the will is yet to be executed. It takes a few seconds even to know the owner has died. Unless the heir is standing there wrapping their arms around the property during the owner's last breath, this new "owner" must rely on a piece of paper. It's an artifact, a memory, or some other manufactured tool of culture. There is always some process to claim the property, and that contract is now like the Bible. We are told to believe in it -- to obey it. Now I will say how this morality is faulty. I understand "the event of death shifts ownership", but only the paper might say that; the dead person does not. I think there's some agreement that consciousness is needed (a rock cannot own another rock). However the dead body, having once written the will, is no longer a conscious entity -- so morally it's a rock. To use its signature as a form of ownership proof -- well, how is that different from using a rock (or the Bible) as a form of proof? I guess you must sort it out only some time after death, and explain somehow why it's immoral to reject what the piece of paper says -- why people who did not sign it are somehow bound by it. Imagine the so-called "new owner" steps up two minutes (or maybe two decades) after death and says something like "I have this paper that proves the owner transferred the property!" Well, I can say "no, you are trying to retroactively stake a claim -- for a brief time there was no owner, there was no property, I was here first -- what you have is paper, no better than a contrary one I just wrote up." And this moment is where the dead must be believed to have an eternal soul -- they possess some kind of moral holding spot. Their little "will" artifact is to be accepted by faith alone.
-
It does not make sense to me to discuss the morality of treating a dead person's body while not discussing the artificial ghost-body we give them through estates, inheritance, wills, trusts, etc. The signature (audio recording, system of consent, or whatever you want to call it) means only that a living person agreed. What is the first-principles reason to care after they're dead? Why is their stuff (the body too) not once again in a state of nature? The contract too, just paper. To me it seems that when a person is dead we can either treat the signature/consent as a dead artifact having no moral weight, or we can say it has moral weight and invent an origin of ongoing morality (i.e. a spirit lives on through words, like the Bible). Even the words I write now probably have no worthwhile meaning to you unless I am able to respond, explain, and so on, proving I am presently conscious and not just an ancient echo from some dead guy. I mean, you are not obligated to accept something just because it's written down. I think truth has to resonate as an inescapable conclusion which a consciousness is capable of defending.
-
Some good observations. The scientific method, if valid before a person knows it, would seem to apply to everyone, including the first person to understand it, especially if UPB is asserted. By extension, before humans evolved, the scientific method was valid but remained undiscovered. Otherwise, if validity kicks in at some specific time, we are making understanding a key ingredient. In other words, if we discard the notion that understanding is a prerequisite for validity, then understanding should be unnecessary even for the first person to get the idea. The scientific method was essentially valid yet unknown even during the dinosaurs, I would imagine. For animals, my thinking is this: whether or not an animal thinks or knows UPB presently is inconsequential. A man in a deep sleep may be incapable of holding UPB presently in his mind. Yet it's his future ability, the capacity to wake up and evaluate behaviors, that supposedly makes UPB relevant. One may be quick to say, "but animals cannot do this and never will". I believe that kind of thinking disregards evolution and the indeterminacy of the world. At one time human ancestors were rodents, presumably with no future ability to grasp UPB. But apparently their descendants could do so, because we are here now. So I believe everybody endorsing the idea that animals have no rights is, in effect, a Creationist. I say that because there is a complete failure to address the transitional problem of "human UPB and animal non-UPB". On the one hand, a system is regarded as having no potential for understanding (the mammals long ago), and another system is regarded as having potential (the sleeping man), yet both arrive at the same result (awake humans, here we are); it's just over different timescales. If there is capacity for the transition to take place, even over a long timescale, I think there is a good argument that animals have rights, even if they somehow aren't quite as good as ours.
-
How to stop counterfeiting?
RestoringGuy replied to afterzir's topic in Libertarianism, Anarchism and Economics
I think the problem of counterfeiting is the same as the problem of less-than-expected goods. If I am sold a television that fails to show a picture after some period of time, all the while the same showroom model continues to work, then in my perception the unit I received was counterfeit relative to the store model (just as you may receive a counterfeit bill/coin/commodity meant to be seen as fully functional, but when you go to use it the effect is undesired). Product manufacturers, on the other hand, have co-opted the concept of counterfeiting, replacing the idea of function with authenticity of mere place-of-origin. If a handbag is counterfeit, it usually means the maker is not who you expect, regardless of how well the product performs. Even if it is functionally superior, it's counterfeit. On the other hand, I would not think of a gold coin as counterfeit if it were made of 100% real gold and merely the designer was fake. I think there is a concept of counterfeit-brand and counterfeit-function. The function idea is like the faulty television. When it comes to currency, there is mainly a need to care whether the currency received works without future objections, not who makes it or how or why. You could take anything, consider its atoms, and ask "at exactly which atom (from 1 to 10^26 or whatever) is this counterfeit item considered to exist?" Through a chain of institutional design -- don't look at this, don't listen to that, it's not your job -- ignorance is maintained, and that's why I think product defects and counterfeiting are quite the same. Maybe just a matter of degree. Stopping counterfeiting seems qualitatively the same as stopping bad goods.
-
No, it isn't consent. That was my point: liking it isn't consent either, unless you decide whose opinion is more worthy to listen to. That seems to require a principle, or property, to give preference to which consent is real. I'll use your definitions. Do not ask that I define the words you're using. I only demand you use each of your definitions consistently. But you can decide what definition you want.
-
It seems wrong to suggest ethics isn't about force. As an example, I did not consent for you to write those sentences. But ethically you don't need my consent; you're not using force. If a non-aggression principle is used, the aggressor is wrong not just by non-consent but also by physical imposition. Without a distinction of force, saying "some people like to be smacked" is exactly the same as saying "some people like to smack others", because their liking to smack is consent. Obviously you can say the victim did not like it, but now you must consider which person is on the receiving end of force. That seems to be the only way to tell which people you need consent from.
-
For stealing it could be only part of the story. While I hold my calculator, a thief must impose on me, damaging my hand however gently to free the calculator from my grasp. But when I leave it and walk away -- and I'll use words more literally -- it's me who's doing the imposing. The technology of calculators doesn't make one owned; it's just storytelling, the idea of mutual gain, and a fiction that my absent self is somehow still there possessing it. That's why I think at least absentee ownership is a contract, not a principle like the NAP, because force is being replaced by indirect ideas. It's done outside the thief's consent. In other words, I rig another person's reality in my absence so that when I return my calculator probably won't be gone, and I use this social narrative to defend my imposition. We are attacking the freedom of the thief, instead of trying to reverse the polarity of what it means to use force. Otherwise there will be the problem that physical force goes away whenever an attacker decides it's time to call the attack an external defense of their ideas.
-
The isolation is what I call a boundary: things must be inside and outside. To meaningfully say something "is a cup" seems to require implicitly defining what non-cups are. Eventually you can find a physical rule -- a cup is that which can do X -- and now it no longer requires an instance or example. Once X is decided, "cup" is maybe just a jargon handle, same as the letter X, a short representative of some bigger action. Anything constructed or 3D-printed that does X would seem to be a cup. In general, X might be distinguished by other methods besides recognizing it first-hand. Imagine you're running a coin-sorter machine, and now extend that idea to the many things around us, trees, etc. In a way, things are being mechanically sorted relative to your concepts, of course leaving room for uncertain results. I don't mean our naming of a concept does any controlling. The real concept is just a physical reference point, and the name of that concept is the mental reference point, an abbreviation. I think a concept which is "only in the mind" fails to even be a concept, because even saying "the mind" supposes some particular container-mind is an instance of a mind, and now we're just using the concept of "mind" to draw boundaries around all concept-makers. This seems to just postpone reality, and bury it a level. Physical conditions that distinguish a mind from a non-mind would fix the problem; almost nobody seems to be disputing that initial step. So we seem to be allowed a physical requirement applied to the concept of the mind, if for no other reason than to get things started. So why not skip that mysterious step and make all concepts essentially physical in nature? Suppose there's a vast periodic table of concepts with unconstructed instances. Not like Plato's ideal forms, but instead with their potential for instances as a metric. A concept is real just as oxygen is real, even when it may by chance be physically absent from some volume of space.
The concept is just physical parameters based on predictions, internal cycles, interaction with other potential things, inside or outside the mind, inside or outside past instances. Once some instance achieves your goal, the instance is truly new. The concept was simply valid all along, and instances are just testing grounds. Abstract without concrete just means there's BS around the corner.
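The coin-sorter idea above can be put in code: a concept becomes a physical test X, and anything that passes the test is mechanically sorted inside the boundary. This is only a toy sketch of that picture; all the names (`Thing`, `is_cup`, the capacity threshold) are my own illustrative inventions, not anyone's established definition of a cup.

```python
# A "concept" modeled as a physical test: a thing counts as a cup
# exactly when it can do X (here: hold some minimum volume of liquid).
from dataclasses import dataclass

@dataclass
class Thing:
    name: str
    holds_liquid_ml: float  # measured capacity; 0 if it leaks

def is_cup(thing: Thing, min_capacity_ml: float = 100.0) -> bool:
    """The concept 'cup' as a boundary test: inside if it can do X."""
    return thing.holds_liquid_ml >= min_capacity_ml

# Mechanically sort objects relative to the concept, coin-sorter style.
objects = [Thing("mug", 300.0), Thing("sieve", 0.0), Thing("thimble", 2.0)]
cups = [t.name for t in objects if is_cup(t)]
print(cups)  # → ['mug']
```

Note that `is_cup` never needs a reference example; once X is fixed, any newly constructed object that satisfies the test falls inside the boundary, which matches the "valid all along, instances are just testing grounds" point.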
-
I would like to mention that a confirmation process is taking place -- a filter on whatever kinds of things initially provide these decisions you describe. The computer needs some sort of generator of useful symbols; a keyboard is just one way. On the other hand, maybe it is dice, a hot cup of tea, or a direct line to the wetware of the human brain. But the origin of that wetware should not matter, as long as the process is rapid enough. The rational process we want to identify, by my reckoning, is a combined result of (1) an ability to generate new symbol structures which cannot be algorithmically generated, and (2) a computational filter that weeds out meaningless data and allows rational data to remain in place as what we can call a consciousness. This second part does not seem out of reach of today's computers. What I mean to illustrate is that lacking either one of these seems to cause some rational failure. The algorithmically pure robot with no random "imagination" will fail to generate certain patterns, as it has certain limitations and a finite number of states. Clever programming might make it able to recognize a good pattern if it ever sees one, but the same machine may simply lack the ability to produce a good pattern that it can subsequently evaluate. That machine is irrational on the basis of being too boring. We can call it a limited machine. On the other hand, so-called wetware without good enough pattern recognition will generate new results, but it can fail to be rational on the basis that false/useless/unproven patterns might be accepted just as readily as true/useful/proven patterns. I can get any rational solution or answer by drawing Scrabble pieces, but it will take a long time and allow for constant gibberish. If followed by a machine, such a process is irrational on the extreme basis of being symbolically too clumsy. It is not "limited"; it is too unlimited!
Now consider suitable random hardware plugged into a USB port, followed by algorithmic software to evaluate/filter such data so that it can answer a question or solve a problem with the correct mixture of these two approaches. Done right, I argue this can happen with adaptive power equal to that of human free will. This process allows for the responsibility to prefer what is right, while also requiring an imaginative generator that no deterministic algorithm can provide. This may not be the wetware of the brain, but it is something I would label "mechanical and asymmetrically indeterminate".
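The two-part architecture can be sketched in a few lines: a candidate generator plus a deterministic evaluation filter. In this toy, the standard library's `random` module stands in for the hardware entropy source on the USB port (a real build would read the device instead), and the goal function is an arbitrary illustrative choice.

```python
# Two-part rational process: (1) an unconstrained generator of symbol
# structures, and (2) an algorithmic filter that keeps only candidates
# passing an evaluation test. Neither part alone suffices: the generator
# without the filter drowns in Scrabble-piece gibberish; the filter
# without the generator may never see a candidate worth keeping.
import random
import string

def generator(length: int = 4) -> str:
    # Part (1): the "imagination" -- emits arbitrary symbols.
    # (Stand-in for a hardware entropy source.)
    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

def solve(goal, max_tries: int = 1_000_000):
    # Part (2): the computational filter -- cheap to check,
    # even when the answer is hard to construct directly.
    for _ in range(max_tries):
        candidate = generator()
        if goal(candidate):
            return candidate
    return None  # the generator never produced a passing candidate

# Illustrative goal: any 4-letter string whose letters are in sorted order.
result = solve(lambda s: list(s) == sorted(s))
print(result)  # varies per run, e.g. 'belt'
```

The asymmetry the post describes shows up here: checking a candidate is trivial and deterministic, while producing one is left entirely to the indeterminate source.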
-
I do not see the shame in conversing with a cleverly constructed robot. The cleverness may exceed your expectations. The common free will position seems overly negative toward technology and AI. While it may be the case that a pure algorithm has no free will (a position I agree with), that in itself does not establish machine impossibility with regard to free will. It may be the case that the computers you and I know will execute their zeros and ones with near perfection. As you say, they know no meaning. But they do fail and make computational mistakes at some extremely small rate. Much like evolution and genetic mutation eventually generating the human brain through a process of rare events, there is no law of physics that seems to prohibit us from building a machine with free will. I do not adhere to compatibilism and such plays on words. But by exploiting errors in a physical process where symbols get manipulated, simulation is not the only option. Atoms or bits, it should not matter, although the process they follow does matter. The meaning generated from a random process can be real, because intellectual challenges might be solved by "errors" that win out. It would seem to me that a machine might purposely allow non-algorithmic mistakes in its symbol manipulation, and take advantage of certain kinds of detectable errors that will be better than anything an algorithm might generate. It is merely the attempt at self-improvement allowed for by extreme trial-and-error instead of simple straight-line processing. That kind of robot might have a capacity for free will. It could be a construction clever enough to find a refined answer that you cannot predict it will find. You also could not have predicted that non-living matter in a puddle 4 billion years ago in a swamp would eventually grow into humans. So it would seem there is some leeway in what random patterns can do to make free will possible.
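The "exploit the errors that win out" mechanism is essentially an evolutionary hill-climb: let random mutations corrupt the symbols, but keep a mutation only when a detector judges it an improvement. Below is a minimal sketch of that loop; the target string, the fitness detector, and the mutation rate are all illustrative assumptions, and no claim is made that this simple process itself constitutes free will.

```python
# Self-improvement by extreme trial-and-error: random "errors" in symbol
# manipulation are retained only when a detector scores them as at least
# as good -- a simple (1+1)-style evolutionary climb, analogous to rare
# genetic mutations being filtered by selection.
import random
import string

TARGET = "freewill"  # the detector's notion of "better" (illustrative only)

def fitness(s: str) -> int:
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str) -> str:
    # A rare "error": one symbol flips to a random letter.
    i = random.randrange(len(s))
    return s[:i] + random.choice(string.ascii_lowercase) + s[i + 1:]

current = "".join(random.choice(string.ascii_lowercase) for _ in TARGET)
while fitness(current) < len(TARGET):
    candidate = mutate(current)
    if fitness(candidate) >= fitness(current):  # keep only helpful errors
        current = candidate
print(current)  # loop exits only when current equals TARGET
```

Straight-line processing would write the answer in one step; the point of the sketch is that the same result can emerge from detectable errors alone, with no step that directly constructs it.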
-
Free will seems to be discussed, in part, in terms of knowing what free will is and how it is manifested experimentally. I found an interesting idea of three viewpoints described by P.C.W. Davies http://arxiv.org/abs/quantph/0703041 as follows: A. laws of physics -> matter -> information. B. laws of physics -> information -> matter. C. information -> laws of physics -> matter. The conventional view is A. Davies writes "Matter conforms to the given laws, while information is a derived, or secondary property having to do with certain special states of matter". For view B, "Nature is regarded as a vast information processing system, and particles of matter are treated as special states which, when interrogated by, say, a particle detector, extract or process the underlying quantum state information so as to yield particle-like results." Davies says "The attractions of scheme C is that, after all, the laws of physics are informational statements." My own thinking is that understanding these relationships could be important before trying to decide exactly what free will is or how to test for it.
-
I meant two effects, each mutually exclusive, stemming from one and the same cause.
-
Stef's argument for self-ownership = Tu Quoque fallacy?
RestoringGuy replied to sdavio's topic in Philosophy
I think "you must eat" and "you must eat to live" are of the same logical character. Both seem to say you can't ever choose otherwise, as if it's absolutely pre-destined to be what you'll do. If I throw a large brick at a thin sheet of glass, I can say "the glass must break". But I say that only based on the certainty I have about how bricks and glass behave. There is a clear sense that "you must eat" could be assigned a truth value, just as "you must eat to live". If we see "you must eat to live" as somehow true by way of cause and effect, I discover I am not eating right now at this instant. Also, one can be kept alive for a long time by injecting the right nutrients. But aside from that loophole, there's some sense "you must eat" carries a meaning of inevitability (eventually you'll eat I predict), and "you must eat to live" (you'll eventually die otherwise, I predict). Those predictions could be wrong, but the sentences seem to have the same logical meaning that something is being predicted to happen.