
AIs create new language


ofd



There is nothing fake-looking about this story.

Nor is it scary, or surprising, or anything really worth noting outside the computer science world. It is a computer doing what it is programmed to do. Perhaps not what the programmers thought it would do, but what the code actually says. That's why computer programs have to be debugged and go through many versions and updates. Our meat-sack radiators only store so much RAM. And of course most programs today are built by many different people.


Quote

It is a computer doing what it is programmed to do. Perhaps not what the programmers thought it would do, but what the code actually says.

There isn't really any code that can be understood behind AI / expert systems, only nodes that have different weights.
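For instance, here is a minimal sketch (assuming scikit-learn; the XOR task is arbitrary) of what you are actually left with after training, i.e. arrays of learned numbers rather than readable code:

```python
# After training, the network's entire "program" is a set of weight
# matrices -- there is no if/else logic anywhere to read.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR

net = MLPClassifier(hidden_layer_sizes=(4,), max_iter=5000, random_state=1)
net.fit(X, y)

# Everything the network "knows" is in these numbers:
for layer, w in enumerate(net.coefs_):
    print(f"layer {layer} weight matrix:\n{w}")
```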


  • 3 weeks later...
On 10/25/2017 at 8:20 PM, Jsbrads said:

Yes, and those nodes were created by programmers

Hi Jsbrads,

A. The Turing test rests on the observer's perception of humanness; something that passes might be just a really good AI, right?

B. What other test would you recommend?

Barnsley

P.S.: In Interstellar, the robot TARS: "... plenty of slaves for my robot colony!" [cue light on]. Hilarious yet eerie, isn't it?!


7 hours ago, Jsbrads said:

Just because the programmers don't know what the heck they are doing doesn't mean they aren't setting up every decision's result.

If you run the same deep neural network over a given dataset several times, you will get similar but different results. You can test this by comparing the correlation of each run's output with the data's labels: one run will score 0.93, the next 0.94, then 0.92, and so on.

There is a degree of randomness, so the programmer cannot possibly set up every result.
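A minimal sketch of what I mean, assuming scikit-learn (the dataset and layer sizes are made up): train the identical architecture on identical data, varying only the random seed, and the scores come out similar but not equal.

```python
# Same network, same data -- only the random initialization changes,
# and the test accuracy drifts a little from run to run.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for seed in range(3):
    net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300,
                        random_state=seed)
    net.fit(X_train, y_train)
    print(f"seed={seed}: test accuracy = {net.score(X_test, y_test):.2f}")
```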


I can place a heavy object on an unstable support without knowing which way it will fall, or even whether it will fall. That imperfection of knowledge doesn't free me from responsibility.

The AI is that inanimate, overbalanced arrangement: all the power we give it, and no responsibility taken for its failure when it fails.


1 hour ago, Jsbrads said:

I can place a heavy object on an unstable support without knowing which way it will fall, or even whether it will fall. That imperfection of knowledge doesn't free me from responsibility.

The AI is that inanimate, overbalanced arrangement: all the power we give it, and no responsibility taken for its failure when it fails.

 

Hi @Jsbrads

To the uninitiated (like me), your parallel is very illustrative; thanks.

I'll nick it if you don't mind.

Barnsley


  • 2 weeks later...

That result doesn't seem surprising. There are, technically speaking, an infinite number of ways to communicate information using English words. If they don't constrain the learning algorithm to use proper English, it will surely drift and use another language.


Pretty sure this is just an optimization to maximize the information content per message. English is not optimized in this sense, because each successive character in a word (message) carries less and less uncertainty. For example: if I start a word (message) with the character "Q", you can be pretty certain what the next character is going to be. And because you can be pretty certain that the next character is going to be "U", the "U" does not contain as much information (because information is the resolution of uncertainty).
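A toy illustration of that point (the sample text is made up; any English prose would do): estimate the Shannon entropy of the letter that follows 'q' and compare it with the entropy of an average letter.

```python
# Information as "resolution of uncertainty": the letter after 'q' is
# almost always 'u', so it carries close to zero bits.
import math
from collections import Counter

text = ("the quick question required a quiet quorum "
        "frequently quoting quality quotes").lower()

def entropy(counts):
    total = sum(counts.values())
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

all_letters = Counter(ch for ch in text if ch.isalpha())
after_q = Counter(text[i + 1] for i, ch in enumerate(text[:-1]) if ch == "q")

print(f"entropy of an average letter:  {entropy(all_letters):.2f} bits")
print(f"entropy of the letter after q: {entropy(after_q):.2f} bits")  # 0.00
```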


Just because one AI is going "kdsjflkdjf dskjfd fsdkfjdsl fdskfjdf" and the other AI says "4354543 23krjkrr kj3 aaaaaa" doesn't necessarily mean they are communicating anything either. I have been a web developer for 5 years, and I think it's all bull excrement on 10 different levels.

On the philosophical side: there is no such thing as a truly sentient AI, because it must be programmed, so with perfect knowledge you would always know the outcome. I think most people would consider it AI once the knowledge was so far from perfect that it seemed like magic, kind of like how the sunrise seemed to cavemen. But it raises the question: if we understood the human brain with perfect knowledge, such that we could predict the outcome, would people cease to be considered sentient?


It would be interesting to see what AIs would do when presented with images of everyday objects resembling an abstraction, suggesting a meta-meaning. (Sorry, lack of vocabulary; the reach of my language.)

e.g. those images labelled 'fail' where, due to special circumstances, the visual information suggests contradictory meanings... and people find it funny.

https://ixquick-proxy.com/do/spg/show_picture.pl?l=english_uk&rais=1&oiu=http%3A%2F%2Fstarecat.com%2Fcontent%2Fwp-content%2Fuploads%2Fblack-man-wearing-cat-hat-cat-wearing-negro-cap.jpg&sp=197823a858e261f10b4660c510061742

It's at that point that I would suspect a high enough degree of resemblance to independent-ish cognitive functions, or a sense of humour.

Barnsley

 


On 29-11-2017 at 5:53 PM, lorry said:

Pretty sure this is just an optimization to maximize the information content per message. English is not optimized in this sense, because each successive character in a word (message) carries less and less uncertainty. For example: if I start a word (message) with the character "Q", you can be pretty certain what the next character is going to be. And because you can be pretty certain that the next character is going to be "U", the "U" does not contain as much information (because information is the resolution of uncertainty).

The gains from information compression are trivial compared to the elimination of the ambiguity that natural languages have. For example, a noun can have several meanings, but a meaning can also have several nouns. Figuring this out is computationally expensive. Besides the meanings of words, even grammar is ambiguous, and the only way we can make decent parsers is through the statistical approach, where we need (at least) tens of thousands of annotated sentences in order to train a parser to interpret sentences like a human would (~75% accuracy). So the problem is that there is information required to understand an English sentence which isn't encoded in the sentence itself, nor in the grammar rules, but is only apparent by observing the language as it is in use.
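A toy illustration of the grammar side of this (assuming the nltk library; the grammar is a made-up fragment): the classic sentence "I saw the man with the telescope" gets two valid parse trees, and nothing in the sentence or the grammar rules tells you which one is meant.

```python
# Two parses: either the seeing was done with the telescope, or the
# man is carrying one. The grammar alone cannot decide.
import nltk

grammar = nltk.CFG.fromstring("""
S  -> NP VP
VP -> V NP | VP PP
NP -> 'I' | Det N | NP PP
PP -> P NP
Det -> 'the'
N  -> 'man' | 'telescope'
V  -> 'saw'
P  -> 'with'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("I saw the man with the telescope".split()):
    print(tree)  # prints two distinct trees
```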

On 30-11-2017 at 12:56 AM, smarterthanone said:

Just because one AI is going "kdsjflkdjf dskjfd fsdkfjdsl fdskfjdf" and the other AI says "4354543 23krjkrr kj3 aaaaaa" doesn't necessarily mean they are communicating anything either. I have been a web developer for 5 years, and I think it's all bull excrement on 10 different levels.

You can easily test and measure the information exchanged between two entities without knowing what they communicated, provided you can communicate with them separately. If the agents hold memory, you can tell one agent a fact and see whether that information is transferred to the other agent. In the case of chatbots, it may be that a chatbot A, trained on a human data set or through interaction, causes behaviour changes in another chatbot B when they interact with each other.
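A toy sketch of that test (the agents here are hypothetical stand-ins built on plain dictionaries, not real chatbots): plant a fact in agent A, let A and B interact, then query B.

```python
# If B can answer a question only A was told the answer to, information
# was exchanged -- even if we never read their conversation.
class ToyAgent:
    def __init__(self):
        self.memory = {}

    def tell(self, key, value):
        self.memory[key] = value

    def chat(self, other):
        # Crude stand-in for a conversation: share facts one way.
        for k, v in self.memory.items():
            other.memory.setdefault(k, v)

    def ask(self, key):
        return self.memory.get(key, "unknown")

a, b = ToyAgent(), ToyAgent()
a.tell("capital_of_france", "Paris")  # only A is told the fact
a.chat(b)                             # the agents interact
print(b.ask("capital_of_france"))     # "Paris" => information transferred
```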

 


6 hours ago, Kikker said:

The gains from information compression are trivial compared to the elimination of the ambiguity that natural languages have. For example, a noun can have several meanings, but a meaning can also have several nouns. Figuring this out is computationally expensive. Besides the meanings of words, even grammar is ambiguous, and the only way we can make decent parsers is through the statistical approach, where we need (at least) tens of thousands of annotated sentences in order to train a parser to interpret sentences like a human would (~75% accuracy). So the problem is that there is information required to understand an English sentence which isn't encoded in the sentence itself, nor in the grammar rules, but is only apparent by observing the language as it is in use.

 

Thanks, Kikker. I never thought about that.


  • 1 month later...

Just heard of an AI that, after being given many pictures of dogs and wolves and then asked to identify whether a new sample was a dog or a wolf, said wolf. This AI could be queried as to why it chose wolf, and it replied that it decided the husky was a wolf because there was snow in the background of the picture, not because of any features of the dog itself.
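A toy, runnable illustration of that failure mode (everything here is synthetic): a "classifier" that secretly keys on overall brightness, i.e. snow, flips its answer as soon as the background is greyed out.

```python
# A shortcut learner: it answers "wolf" for bright (snowy) images,
# regardless of what the animal in the picture looks like.
import numpy as np

def shortcut_classifier(image):
    return "wolf" if image.mean() > 0.5 else "dog"

# Synthetic "husky photo": dark animal in the centre, bright snow around.
image = np.full((64, 64), 0.9)            # snowy background
image[20:44, 20:44] = 0.2                 # the dog itself
animal_mask = np.zeros_like(image, dtype=bool)
animal_mask[20:44, 20:44] = True

print(shortcut_classifier(image))         # "wolf" -- driven by the snow

masked = image.copy()
masked[~animal_mask] = 0.3                # grey out the background
print(shortcut_classifier(masked))        # "dog" -- the prediction flips
```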

