Humanity needs to invent general AI to survive.



The philosopher Sam Harris is convinced that general AI is the greatest threat humanity faces. I disagree: humanity needs to invent general AI to survive.

 

Despite having built the most compassionate and prosperous society the world has ever seen, with technology barely distinguishable from magic, the governments of the world are on the verge of suicidal wars.

In recent history, governments around the world have killed more than 250,000,000 of their own citizens. It reveals a great, irrational faith in government to consider general AI the principal threat facing our species.

There is a good chance that the digital gods we build will be benign helpers of humanity, and there is also a chance that they may snuff us out. But there is a certainty that the evil of government, multiplied by the irrationality, out-group preference, and impulsiveness of people, will turn this planet into a radioactive wasteland.

 

Technology is rapidly replacing our jobs, a disturbing trend that foreshadows great social upset and violence. I can only hope that general AI will first replace the jobs of the utterly ineffective government bureaucrats and paper pushers who, in their organized irresponsibility, deprive the people of freedom. The waste, fraud, and abuse of government could be replaced by elegant algorithms that efficiently solve problems instead of perpetuating them.

 

Sam hypothesizes that general AI will squash us like a human stepping on a bug that crosses his path. What is not hypothetical is that, even now, the boot of government power stomps on the face of human liberty.

 

Despite having the accumulated knowledge of history, economics, and philosophy freely available in our pockets everywhere we go, the electorate still votes based on the pettiest scandals, visceral emotions, and base human biases. General AI could fairly restructure democracy to favor intelligent, philosophically robust policies and politicians, and it could eventually replace government entirely, the way democracy replaced aristocracy throughout the world.

As a government, an AI would likely not be a central planner; it would interpret all the data provided by the free market and make decisions unbiased by emotion, ego, nepotism, or political correctness.

 

There is a chance that the infinite intelligence we can divine from 1s and 0s will wrest power away from the homicidal institution of government for good, and that is worth taking a chance on!

 


1. You're presuming governments wouldn't co-opt "general AI" as soon as it is manufactured.

2. You're presuming the AI wouldn't have emotions, which is absurd, since all conscious thought is powered by emotions.

3. You're presuming that brain prostheses will not lead to widespread brain atrophy, just as widespread sedentary lifestyles have led to body atrophy.

4. You're presuming technological solutions to moral problems.


1. You're presuming governments wouldn't co-opt "general AI" as soon as it is manufactured.

You're presuming "general AI" would not co-opt government instead.

2. You're presuming the AI wouldn't have emotions, which is absurd, since all conscious thought is powered by emotions.

You're presuming AI would have emotions, which is more absurd, since a general AI would be entirely rational unless programmed with emotions.

3. You're presuming that brain prostheses will not lead to widespread brain atrophy, just as widespread sedentary lifestyles have led to body atrophy.

This is a valid concern.

4. You're presuming technological solutions to moral problems.

If you watched the last season of the TV program "Fringe," you'd understand (and likely not change your mind).

 


1. See 2, below.

2. Then it wouldn't be what it will be heralded as: a kind of person, a super-duper person. It would be just an advanced toaster, a giant Plinko game working itself out according to electronic laws, utterly unable to grasp the moral problems of man. It would, at best, be Scientological technology that lets humans read the minds of other humans. Sound like a good idea? To put toasters in charge?

3. Yes.

4. TV is a PSYOP I try to avoid.


When reading the OP's post, I wondered whether a free market is a kind of artificial intelligence in its own right, improving the lives of everyone.

 

As for AI being put in power: assuming the core of the AI is perfect, who gets to decide what information is fed to it? The type of information governments receive today also creates a lot of obstruction, pointlessness, and wasted resources. So I am tempted to say that the information supplied to the AI would be just as bad and corrupt, and would therefore have much the same effect.


1. See 2, below.

2. Then it wouldn't be what it will be heralded as: a kind of person, a super-duper person. It would be just an advanced toaster, a giant Plinko game working itself out according to electronic laws, utterly unable to grasp the moral problems of man. It would, at best, be Scientological technology that lets humans read the minds of other humans. Sound like a good idea? To put toasters in charge?

3. Yes.

4. TV is a PSYOP I try to avoid.

2: Understanding emotions and morals doesn't require having them. A general AI would be more like a science machine, able to construct and test hypotheses, so it can derive the best "path" to a goal state; you would need to tell it what that goal state is, though. Or, to use your analogy: an advanced toaster which can tell you what laws to implement, as long as you tell it what you want to accomplish with those laws.
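
To make the "goal state" point concrete, here is a minimal sketch in Python (purely illustrative; the state graph and names are made up, not taken from any real system): a uniform-cost search that derives the cheapest path to whatever goal state a human supplies. The machine optimizes the path; it never chooses the goal.

```python
from heapq import heappush, heappop

def best_path(start, goal, neighbors, cost):
    # Uniform-cost search: the machine derives the cheapest path,
    # but a human still has to supply the goal state.
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        total, state, path = heappop(frontier)
        if state == goal:
            return total, path
        if state in visited:
            continue
        visited.add(state)
        for nxt in neighbors(state):
            if nxt not in visited:
                heappush(frontier, (total + cost(state, nxt), nxt, path + [nxt]))
    return None  # no route to the requested goal state

# Toy "world": four states and their transitions (hypothetical example).
world = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(best_path("A", "D", lambda s: world[s], lambda a, b: 1))
# -> (2, ['A', 'B', 'D'])
```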

 

Any real AI at that level, though, would use implicit feedback from multiple sources to test whether its behavior is desirable or not, so people would "tell" the AI what to do simply by living their lives.

 

When reading the OP's post, I wondered whether a free market is a kind of artificial intelligence in its own right, improving the lives of everyone.

As for AI being put in power: assuming the core of the AI is perfect, who gets to decide what information is fed to it? The type of information governments receive today also creates a lot of obstruction, pointlessness, and wasted resources. So I am tempted to say that the information supplied to the AI would be just as bad and corrupt, and would therefore have much the same effect.

Current knowledge systems are already a few orders of magnitude too large for any organization to control (a few billion elements); by the time we have such an AI, they will be at least a few orders of magnitude larger. Information bias would be inherent to the AI: for example, if an AI judges every piece of information to be true, bias is bound to occur. Besides, the information would hold so many contradictions that the AI wouldn't be functional.

 

A general AI capable of being in power would need ways to acquire information on its own, from real actors in the real world.


Book recommendation: Jeff Hawkins - On Intelligence

It explains how the brain works, the difference between consciousness and intelligence, and why we'll NEVER create human-level AI: creating human-level AI literally requires creating a human. The movie Ex Machina also got it right. The guy didn't create an AI; he created an artificial person with the same needs and limitations as a normal human.

 

We cracked the AI problem decades ago, and we've been using AI in our day-to-day lives ever since (firewalls, auto-correct, video games, etc.).

 

Dominance over a group is a distinctively human trait; to say that a computer would develop human characteristics is just LOTR-style fantasy.


2: Understanding emotions and morals doesn't require having them. A general AI would be more like a science machine, able to construct and test hypotheses, so it can derive the best "path" to a goal state; you would need to tell it what that goal state is, though. Or, to use your analogy: an advanced toaster which can tell you what laws to implement, as long as you tell it what you want to accomplish with those laws.

Any real AI at that level, though, would use implicit feedback from multiple sources to test whether its behavior is desirable or not, so people would "tell" the AI what to do simply by living their lives.

 

2. A computer is going to be able to crunch logic at infinite speed, and that's good at parties, but this will only enhance the operations of its operator; it won't replace him. GIGO. If the operator is immoral, he is going to use the machine to rig the results. If the operator is incompetent, the machine isn't going to make him competent. Imagine a knife: its understanding of its own sharpness doesn't help it cut anything if a well-guided hand isn't there to wield it, and no degree of sharpness will matter, and may even prove dangerous, if the hand has a palsy.


What is your point? I already explained that the goal state must be defined beforehand in order to create an AI with any functionality, so your repeated arguing about it is wasted effort. You also keep abstracting AI into everyday tools, while I have already stated that AIs differ from everyday tools in a significant way: they are able to interpret implicit feedback from multiple users. So there is not one wielder but multiple wielders (millions, sometimes), and the ability of the AI is determined by those different wielders. The AI is also optimized for every wielder, and ideally it takes the wielder zero effort to operate the AI when pursuing a certain goal, unless he is incapable (brain-dead, for example).

 

AI enables hive-mind structures in which your results are determined by the feedback of others. For example, Google Search automatically updates its rankings on different topics according to implicit feedback received from users in the form of clicks, meaning your results page is determined by the users before you.
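
A minimal sketch of that click-feedback loop (illustrative Python only, with hypothetical queries and sites; this is not Google's actual algorithm): clicks nudge a per-query score upward, and later users are served the reordered list.

```python
from collections import defaultdict

# Per-(query, result) click scores; earlier users' clicks reorder
# the results that later users see.
scores = defaultdict(float)

def record_click(query, result):
    # Implicit feedback: a click nudges that result's score upward.
    scores[(query, result)] += 1.0

def rank(query, results):
    # Serve results ordered by accumulated click feedback.
    return sorted(results, key=lambda r: scores[(query, r)], reverse=True)

record_click("general ai", "arxiv.org")
record_click("general ai", "arxiv.org")
record_click("general ai", "wikipedia.org")
print(rank("general ai", ["wikipedia.org", "example.com", "arxiv.org"]))
# -> ['arxiv.org', 'wikipedia.org', 'example.com']
```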

 

As in how we can create AI and how human-level AI can be achieved.

I don't know which papers you are referring to, but the discovery of the necessary and sufficient requirements for AI is barely a "solution" to AI, let alone human-level AI (Newell and Simon, 1976, "Computer Science as Empirical Inquiry").


 

My point is that these machines will behave, at best, like suave savants, but they won't understand principle, just pattern and logical law, and so putting them in charge of policy will be very dangerous. They may drive our cars and fill our pharmaceutical prescriptions, they may mind our children and even surgically operate on our bodies, but they can't be in charge of policy, or we will end up with a dystopia or simply a plain old nuclear war. AI is a big sugar trap that it will be hard to resist putting our last egg into.

 

"Hooked into everything, trusted to run it all."

