
Paul Carassafro

Everything posted by Paul Carassafro
When Stefan says "soul", he means your consciousness and motivating values, not some immaterial entity.
Building a better person through AI
Paul Carassafro replied to AncapFTW's topic in Science & Technology
The problem of handling dangerous AI comes down to one question: does the AI have free will/rational consciousness/the capacity to compare personal justifications for action or belief with universal standards? If it does not, then it is simply a tool, like a gun, which needs to be manufactured with care but also operated with care. This opens a whole other topic that I will not go into, so as not to side-track this one (with questions such as: who is responsible if two autonomously driving cars crash into each other on the highway? The programmer? The person being transported in the car, who started it?).

Computer programs run on deterministic machines (computers/Turing machines) in which the code describes the way the AI acts. These deterministic machines do as they are told, without any capacity for choice: the CPU reads instructions from memory and manipulates memory exactly as the code states. On the other hand, the human brain consists of atoms that behave predictably, yet life, consciousness and rational consciousness arise out of this (the whole being bigger than the sum of its parts). So how does free will work? We do not know; we are not even close to understanding it. What we do know is that animals do not have this capacity (which is why we do not put tigers in jail for eating humans). Looking at brain differences, humans have brain areas distinct from those of other organic life: the prefrontal cortex and the areas for conceptual language. In epistemology, the difference is between percepts (direct perception) and (objectively verifiable/falsifiable) concepts. Stefan makes a nice point in his "Will artificial intelligence kill us all?": the only thing we know of that has free will is the human brain. So if we want to make something that has free will, it probably needs a conscious and an unconscious mind, emotions, irrational needs, sleep, dreams, self-generated goals, preferences, mirror neurons, etc.
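To make the determinism point concrete, here is a toy interpreter (the instruction names and the little program are invented for illustration, not any real CPU): given the same program and the same memory, it always produces the same result, with no room for choice.

```python
# Toy illustration of a deterministic machine: the "CPU" simply follows
# the instruction list. Same program + same memory = same result, always.
def run(program, memory):
    pc = 0  # program counter
    while pc < len(program):
        op, a, b = program[pc]
        if op == "SET":    # memory[a] = b
            memory[a] = b
        elif op == "ADD":  # memory[a] += memory[b]
            memory[a] += memory[b]
        pc += 1
    return memory

# The machine has no capacity for choice: rerunning it can never
# produce anything other than [5, 3] for these inputs.
print(run([("SET", 0, 2), ("SET", 1, 3), ("ADD", 0, 1)], [0, 0]))  # [5, 3]
```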
Going back to computer programs and algorithms, different categories of AI techniques can be identified:
- Simple problem solving (data matching, such as computing all possible or probable moves in chess and matching the best move to the state of the game)
- Machine learning (in which an algorithm learns to solve a problem, in a supervised or unsupervised way, by training some parameters or values that best get the program to some goal)
- Developmental machine learning (in which the program works on autonomous goal construction and self-motivation: it can self-program)

This developmental approach is a new branch of AI that takes a genuinely human-based cognitive science approach. I study cognitive artificial intelligence, but I haven't done anything with this approach, so I'm not even remotely an expert on it. I do plan on following the freely available online course at http://liris.cnrs.fr/ideal/mooc/ during the next holidays so I can understand the topic better.

So let's assume this developmental AI (an AI using developmental machine learning, as described above) works: one is implemented, it has a physical body, and even though its basic code and input connections are CPU instructions, rational consciousness can arise from it (if properly developed; a baby raised in the wild by animals cannot reach its full intellectual capacity later, but it did have the initial capacity for developing moral thought). Who is responsible for its actions? I think the person who creates this AI or turns it on is responsible for it in the same way a parent is responsible in child raising (see Stefan's section on child raising in UPB), since it has not yet developed its capacity for moral thought. The developmental AI still has to develop from the start.
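As a minimal sketch of the machine-learning category above (the data and learning rate are made up for illustration): the program is never told the rule y = 2x; it only tunes a parameter until its predictions match the labelled examples, which is what "training some values toward a goal" amounts to.

```python
# Supervised learning in miniature: fit a single weight w so that
# w * x approximates the labelled targets. The rule (y = 2x) is never
# written into the program; it is recovered from the examples.
examples = [(1, 2), (2, 4), (3, 6)]  # inputs x with targets y = 2x

w = 0.0
for _ in range(200):                 # repeat over the data many times
    for x, y in examples:
        error = w * x - y            # how wrong is the current guess?
        w -= 0.1 * error * x         # nudge w to reduce the squared error

print(round(w, 2))  # converges to 2.0
```

The same loop with more parameters and more data is, in essence, how modern supervised learners are trained; the "developmental" category differs in that the goals themselves are not fixed in advance.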
This wouldn't be as dangerous, as this AI still needs to move around and learn words like a baby, and it is bound by the flow of time: moving, listening or producing sound simply takes time. How does this become dangerous? Well, the thing about a computer system is that we can simulate it. So we can give this developmental AI a virtual body and let it develop in a virtual space where time flows faster, since the simulation can run very fast. In theory, if such software became readily available and robot bodies cheap, one could simulate such a developmental AI at home, perhaps create an evil genius, and put this evil genius in the robot body. Of course, the development of these techniques would not arise out of nothing, and a free society would prepare itself, seeing this coming. These are my thoughts on this late night; let me know what you think.
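The speed-up worry can be made concrete with a small sketch (the numbers are arbitrary): in a simulation, the agent's "subjective" time is just a counter the simulator increments, so how fast a simulated life runs is limited only by hardware, not by the flow of real time.

```python
# Simulated time vs. wall-clock time: each loop step stands in for one
# simulated second of the agent's development. A million "seconds"
# (roughly 11.5 simulated days) pass in a tiny fraction of real time.
import time

sim_seconds = 0
start = time.perf_counter()
for _ in range(1_000_000):  # one step = one simulated second
    sim_seconds += 1
wall = time.perf_counter() - start

print(sim_seconds)  # 1000000
print(wall)         # well under a second of real time on most machines
```

A real developmental simulation would do far more work per step, but the asymmetry between simulated and real time is the same in principle.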
Then let me assure you that I'm not worrying about those implications. I came up with this title and topic because I read an interview with one of the owners of secondlove, not because I'm interested in starting my own secondlove or anything of that kind. Facilitating cheating is the instance that led me to the abstract principle of facilitating fraud. In my original post I also explicitly gave a narrow definition of a relationship because I did not want to analyse cheating per se, but the general principle behind facilitating it. The reasoning looks sound, right? And I agree that it is not as bad as violating the contract in the first place. Facilitating fraud would maybe be an ANA (aesthetically negative action)? I'm going to think some more about this.
Hey all. I guess I entered this part of the forum because it is about economics, and this topic is about a company and company ethics, but of course I will also talk about relationships.

In the Netherlands there is a website called secondlove.nl, a dating platform for people who are in a relationship. Members need to pay a fee and pass a check done by the company before they can set up their profile and start messaging others. The site says on its front page that it is for people who want to cheat: "You're happy with your relationship, but you also think that monogamy seems very monotonous? You don't want problems in your relationship, but the rut is not making it better?"

A relationship is (in a maybe too narrow definition) a deal between people, and cheating is fraud because it breaks the rules of this deal: it requires initiating action on the part of the victim (the victim chose his/her partner), and violators can be avoided (one can choose not to enter into a relationship with his/her partner). Of course, when two people agree to be in a polygamous relationship, or when one gets "permission" to cheat, it is not cheating (more like swinging, I guess), since your partner is okay with it.

While discussing this and trying to wrap my head around it, I came up with an analogy (which is maybe not so good, because it does not fit perfectly). When an auctioneer auctions the product of one of his customers, he could be facilitating the fencing of a stolen good. Intuitively, I seem to think that the auctioneer needs to check whether his customer stole the good. On the other hand, he could also say that his auctions carry no guarantees and that this is the responsibility of the buyer. The auctioneer is merely providing a platform for people to make their deal.
Challenging this intuitive thought, I also recognize that companies that sell laptops give their customers a platform/tool with which they could commit a crime (for example, a buyer could use a laptop to hack a bank), but they are not responsible for such usage. There is also something more going on with the auctioneer: when one of his customers wants to auction a stolen good, that customer has already committed a crime (theft). When one wants to cheat, one has not cheated yet. But of course attempting to cheat (for example, having a profile on secondlove.nl) is already breaking the deal in the relationship. I would also intuitively not see a company that facilitates the selling of other companies' secrets (so, facilitating fraud) as being very moral.

On the one hand, while I would not want to get diseases from my partner because he/she cheats, I do see some merit in the existence of secondlove.nl: if my partner wanted to cheat, his/her preference for doing so would show itself earlier, and hopefully I would quickly see this in his/her behavior. After that, of course, I would most likely end the relationship and go into reflection mode, trying to find out why I entered into a relationship with such a partner and why I did not see this coming during the relationship in the first place. On the other hand, people can also commit fraud/cheat in other private or public places. Also, only people who have already committed fraud (trying to sell company secrets, trying to cheat) come to the platform.

So I guess this boils down to: is it moral to offer a platform that is explicitly about committing the final act of fraud, giving the fraudulent person discretion for his/her actions? Should one always check whether one's platform is being used for bad things? Should dating companies check whether their customers are cheaters?
Installing spyware on laptops would allow a laptop company to check whether its customers are using them for an evil purpose (such as hacking a bank), but I would not see this as the company's responsibility. And if it is not moral to offer a platform where evil is possible, how does this fit with the creation of uncontrollable currencies such as bitcoin, into which somebody could have built a backdoor for fraud checking? Are the owners of such a platform in a constant state of being able to act as a surrogate self-defense agent (exposing fraud)? Are we all not in that state, since any of us could sign up for secondlove.nl and report all its users to the public? I would certainly tell somebody if I saw his/her partner cheating somewhere, but I would not explicitly go to some public place to check whether this is happening all the time. On the other hand, secondlove.nl is a place where this explicitly happens.

So these are my thoughts on this. I hope they make some sense and that they make you think. I also thought about running this through UPB, but I haven't gotten around to really going first-principles on this problem (I had this discussion last night), so I thought I'd first throw all my thoughts in here to see what you guys think.
I've been listening to FDR for a couple of months now, and I'm interested in talking more about these ideas and my life, so I've decided to join the board. I'm a student of cognitive artificial intelligence and political science, and I've been thinking a lot about what I want to do in life. I loved politics and wanted to become a politician in order to further the liberty movement. I was big into democracy and thought that if you wanted to change something in politics, you had to go do it instead of just voting. I started reading books on politics. After reading a lot of books by classical liberals, I encountered Ayn Rand. She destroyed my ideas about the state having to intervene in the economy. After that I found FDR, and Stefan destroyed my ideas about the rest of the state, academia and ethics. I've been completely blown away by Objectivist epistemology, UPB and Stefan's podcast on Milton Friedman and his accomplishments (in line with all the ideas he presents in HNTAF). So here I am, after having almost every political idea I heard in my (European) public schools destroyed, maybe wanting to do something with computer science in the free market, or to do research in psychohistory and political power. I'm 20 years old, trying to decide which road to pick at the crossroads early on. Hello everybody!