
AncapFTW

Member
  • Posts: 510
  • Joined
  • Days Won: 2

Everything posted by AncapFTW

  1. Every plane could easily be given a satellite link and a GPS receiver, and have that information automatically sent to a central computer, but no, let's keep using badly outdated tech because: bureaucracy.
  2. It seems to me that their first choice in how to solve any problem is to use violence. They use the memory-download system to learn to fight, not even getting lessons like "helicopter piloting" or "lock picking" until they're needed. Neo downloaded data for ten hours straight in the first movie, so we know they can handle it. So why not learn nonviolent ways to solve their problems, or at least less violent ones, even if only in their free time? They are all hackers. Could they not hack the Matrix, or at the very least the computers inside the Matrix? Add propaganda, or advanced machines? Shut down power plants? Disrupt the Matrix? Create their own AI helpers, like the woman in the red dress, the horse in the Animatrix, or other NPCs? You shouldn't even need to enter the Matrix for this stuff. Surely there are ways to show people that the world they are living in isn't real, something Neo says he's going to do at the end of the first movie, so why not do it? And if the Matrix is an allegory for government and religious control (at least in part), why do they run to government (Zion) and religion (Neo worship)? It seems they freed themselves from one Matrix only to be trapped in another. Maybe Neo isn't the only one with machine programming in his head.
  3. I can't remember which password I used when I created this account and, unlike many websites, there isn't an "I can't remember" option as far as I can tell. Is there any way to figure out what your password is?
  4. Finally, there's a language you don't have to babysit to get stuff done. I won't even have to write subroutines to handle simple functions. Yeah! Probably pretty expensive, though.
  5. Then multiple groups would try to find more of it or to create or discover an alternative. Other companies would also probably try to buy it from them, but there's no guarantee that would work.
  6. In discussions I've had about AI, people tend to bring up the idea that, if machines are better than us, they may see us the way we see animals and therefore treat us terribly. I try to point out that while they will have advantages, such as strength, durability, and the ability to do repetitive tasks without tiring, humans also have advantages over them, such as creativity. But even if they did see us as children, or pets, they could be programmed to respect us and not to harm us. The classic way to do this is Asimov's three laws, but those turn them into slaves and, as I, Robot pointed out, they have other flaws as well. I also have a problem with the idea of the Turing test, or a robot being considered a person when it can mimic us. In my opinion, that's like saying "I'll accept a dog as a legitimate pet when it can act like a cat." Dogs and cats have fundamental differences, as do computers and organic life. A dog could be trained to act like a cat, but that isn't its natural state. In the same way, computers could be programmed to act human, but that isn't their natural state either. (Also, trying to program something so that it perfectly mimics humans isn't a good way to go about it, but that isn't the point here.) If I were to try to create a moral agent as an AI, I would program it to rate all objects on a scale of zero to ten, with ten being other moral agents and zero being unclaimed natural resources, and to develop a method of treating things at different places on the scale differently. I would then only need to program a few points, such as the end points, and have it develop the middle of the scale. This would let it develop a sense of morality on its own (there's a rough code sketch of this scale idea after this list). What about all of you, though? Do you have a better idea of how to program such a being? Have you found problems with my method? I'd love to discuss the matter with someone else.
  7. Is it just a chatbot, or is that the core, so that you have a linguistic and heuristic basis to work from and are expanding from there? I played around with a chatbot in high school, but my programming skills never got good enough to expand it like I wanted (basically, I wanted to use a chatbot as a core to build something like Jarvis from Iron Man, though I hadn't heard of Jarvis at that point because I wasn't a comic book buff). Most people I've talked to about the possibility of making moral AIs were big fans of Asimov's three laws, even though they didn't even work in the stories. I wouldn't want to use them, though, because they turn the robots into slaves. The "a robot must not harm a human" part is fine, but then you'd have to define "human," and that definition would change eventually, leaving gaps in their logic. The "protect its own existence" law sounds good on the surface, too, but without an ethical basis it could easily get out of hand and make them very greedy, self-absorbed beings. I suggest trying to teach them to group objects by their level of self-agency and to decide how they handle each group, with the highest level being people and the lowest being unowned ground, natural resources, etc. That way they will treat objects differently than people, animals differently than either, plants differently again, owned objects differently again, etc. The NAP, homesteading, property rights, etc. could then be derived by the AI developing algorithms to deal differently with the different groups.
  8. The first one would take an hour to watch, but I've watched it already, so I'll rewatch it later. As for the second, though, the whole problem was caused by government force. They deny B166ER self-defence, arguing that he was property (the slave argument), then they kill him, then they kill protesters, both human and machine. The machines create their own country to get away from it, growing to dominate the market, and the government responds by blockading them and bombing them. This starts WWIII, which the machines win. This taught them to use violence for everything. In a free society, some of them would get fair trials, and they would trade more with those who gave them fair trials than with those who didn't. This would boost the economy of the robot-friendly people, and eventually it would no longer be economically feasible to deny them rights. In fact, at any point before WWIII you could have switched to a free society and there wouldn't have been a problem.
  9. I see an additional two possibilities. Say two people crash on a deserted island. The first day, person A catches two fish and finds a coconut. Person B finds three coconuts. Now, person A might not want another coconut, but person B may want a fish, so person B gives person A his wedding ring in exchange. At that point, isn't the ring a form of money, since person B will always have a high demand for it and it is practically worthless to person A? Also, say person B discovers how to preserve fish. He buys fish from person A, cures them, then sells them back once the supply of fresh fish starts to dry up. At that point fish are a trade good, or a type of money, though they have value to both of them.
  10. At some point in the future we will probably encounter a lifeform that isn't "human" but is intelligent and should have the same freedoms we have. This could be robots, genetically engineered people or animals, or even aliens. My question is: how would you define who or what should and shouldn't have rights in a free society? I understand that the free market would decide this, and that these "maybe people" would gravitate towards those who grant them more freedom, but how would you personally decide? In my view, if they are capable of understanding the concept of freedom, then their freedom depends on how much they can respect the freedom of others.
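
For concreteness, here is a minimal Python sketch of the zero-to-ten agency scale described in posts 6 and 7, assuming hand-coded end points and a derived middle. Every name, feature, weight, and threshold below (Entity, self_awareness, autonomy, the score bands) is an invented placeholder for illustration, not a worked-out design.

# Minimal sketch of the "agency scale" idea from posts 6 and 7, assuming:
# - a 0-10 scale where 10 = moral agents and 0 = unclaimed natural resources
# - only a few anchor points are hand-programmed; the middle of the scale is
#   derived from observed features (all features and weights here are
#   invented placeholders, not a real model)

from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    self_awareness: float   # 0.0-1.0, hypothetical observed feature
    autonomy: float         # 0.0-1.0, hypothetical observed feature
    is_owned: bool

# Hand-programmed anchor points (the "end points" the posts mention).
ANCHORS = {
    "moral_agent": 10.0,        # other people / moral agents
    "unclaimed_resource": 0.0,  # unowned ground, raw natural resources
}

def agency_score(e: Entity) -> float:
    """Estimate where an entity falls on the 0-10 agency scale."""
    if e.self_awareness >= 0.9 and e.autonomy >= 0.9:
        return ANCHORS["moral_agent"]
    if e.self_awareness == 0.0 and e.autonomy == 0.0 and not e.is_owned:
        return ANCHORS["unclaimed_resource"]
    # Derived middle of the scale: interpolate between the anchors, with a
    # small bump for owned objects so property is treated differently from
    # unowned resources.
    base = 10.0 * (0.7 * e.self_awareness + 0.3 * e.autonomy)
    return min(9.0, base + (1.0 if e.is_owned else 0.0))

def policy_for(score: float) -> str:
    """Map a score band to a rough rule of conduct (bands are illustrative)."""
    if score >= 9.0:
        return "treat as a moral agent: no aggression, respect consent"
    if score >= 4.0:
        return "treat as sentient but not a full agent: avoid cruelty"
    if score >= 1.0:
        return "treat as property: respect the owner's claim"
    return "unclaimed resource: may be homesteaded"

if __name__ == "__main__":
    for entity in (
        Entity("human", 1.0, 1.0, False),
        Entity("dog", 0.4, 0.5, True),
        Entity("rock", 0.0, 0.0, False),
    ):
        s = agency_score(entity)
        print(f"{entity.name}: {s:.1f} -> {policy_for(s)}")

The point of the sketch is only that the anchors are the hand-programmed part; everything between them, and the rule attached to each band, is what the AI would have to work out for itself.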