ofd Posted July 28, 2017

http://www.digitaljournal.com/tech-and-science/technology/a-step-closer-to-skynet-ai-invents-a-language-humans-can-t-read/article/498142

Looking forward to the first AI calling in.
Kikker Posted August 13, 2017

This article is probably fake, since the event itself is untraceable.
ofd Posted August 16, 2017

Good catch. Thanks for the info.
Jsbrads Posted October 24, 2017

There is nothing fake-looking about this story, nor is it scary, surprising, or really anything worth noting outside the computer science world. It is a computer doing what it is programmed to do. Perhaps not what the programmers thought it would do, but what the code actually says. That's why computer programs have to be debugged and go through many versions and updates. Our meat-sack radiators only store so much RAM. And of course most programs today are built by many different people.
ofd Posted October 24, 2017

Quote (Jsbrads): "It is a computer doing what it is programmed to do. Perhaps not what the programmers thought it would do, but what the code actually says."

There isn't really any code that can be understood behind AI / expert systems, only nodes that have different weights.
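A minimal sketch of ofd's point (the weights below are random stand-ins for learned ones): the "program" behind a trained network is nothing more than arrays of numbers, with no human-readable logic to inspect.

```python
import numpy as np

# Stand-ins for learned weights: a real trained network is "just"
# arrays like these, not auditable if/then logic.
W1 = np.random.randn(4, 8)
b1 = np.random.randn(8)
W2 = np.random.randn(8, 2)
b2 = np.random.randn(2)

def forward(x):
    h = np.maximum(0, x @ W1 + b1)  # hidden layer with ReLU activation
    return h @ W2 + b2              # output scores

print(forward(np.ones(4)))
```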
Jsbrads Posted October 25, 2017

Yes, and those nodes were created by programmers.
barn Posted November 10, 2017

On 10/25/2017 at 8:20 PM, Jsbrads said: "Yes, and those nodes were created by programmers."

Hi Jsbrads,

A. The Turing test rests on an observer's perception of "human", and the observer might just be fooled by a really good AI, right?
B. What other test would you recommend?

Barnsley

P.S.: Interstellar's robot TARS: "... plenty of slaves for my robot colony!" [cue light: on]. Hilarious yet eerie, isn't it?!
ofd Posted November 10, 2017

Quote (Jsbrads): "Yes, and those nodes were created by programmers."

Out of curiosity, have you ever set up a neural network?
Fashus Maximus Posted November 10, 2017

1 hour ago, ofd said: "Out of curiosity, have you ever set up a neural network?"

I have, using Google's Python library for it. I think what he's trying to say is that current neural networks do not yet have free will.
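For anyone who hasn't, a minimal sketch of what "setting up a neural network" involves, assuming the Google library meant is TensorFlow/Keras; the data here is random and purely illustrative.

```python
import numpy as np
import tensorflow as tf

# Random stand-in data: 100 samples, 8 features, binary labels.
X = np.random.rand(100, 8).astype("float32")
y = np.random.randint(0, 2, size=100)

# A small fully connected network: the "nodes" discussed above are
# units whose incoming weights get adjusted during training.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)

print(model.predict(X[:3]))  # predicted probabilities for 3 samples
```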
Jsbrads Posted November 13, 2017

Yes, and they never will, based on the technology we are using. Just because the programmers don't know what the heck they are doing doesn't mean they aren't setting up every decision's result.
Fashus Maximus Posted November 13, 2017

7 hours ago, Jsbrads said: "Just because the programmers don't know what the heck they are doing doesn't mean they aren't setting up every decision's result."

If you run the same deep neural network over a given dataset, you will get similar but different results. You can test this by comparing the results' correlations to the data's labels: one run gives, say, .93, the next .94, the next .92, and so on. There is a degree of randomness, so the programmer cannot possibly set up every result.
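A minimal sketch of that run-to-run variation, with scikit-learn's small MLP standing in for a deep network (the dataset is synthetic and the scores illustrative, not the ones quoted above): same architecture, same data, different random seeds, slightly different accuracy each time.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

# Only the random initialization and shuffling change between runs,
# yet the resulting scores differ slightly.
for seed in (1, 2, 3):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=seed)
    clf.fit(X_tr, y_tr)
    print(seed, round(clf.score(X_te, y_te), 3))
```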
Jsbrads Posted November 16, 2017

I never said the programmers set up the result. But they do create the result when they make the AI.
Fashus Maximus Posted November 16, 2017

11 hours ago, Jsbrads said: "I never said the programmers set up the result. But they do create the result when they make the AI."

I guess I don't understand the difference between creating and setting a result...
Jsbrads Posted November 17, 2017

I can place a heavy object on an unstable support without knowing which way it will fall, or even whether it will fall. That imperfection of knowledge doesn't free me from responsibility. The AI is that inanimate, overbalanced arrangement: all the power we give it, and no responsibility when it fails.
barn Posted November 17, 2017

1 hour ago, Jsbrads said: "I can place a heavy object on an unstable support without knowing which way it will fall, or even whether it will fall. That imperfection of knowledge doesn't free me from responsibility."

Hi @Jsbrads,

To the uninitiated (like me) your parallel is wonderfully illustrative, thanks. I'll nick it if you don't mind.

Barnsley
PeterZ Posted November 28, 2017

That result doesn't seem surprising. There are, technically speaking, an infinite number of ways to communicate information using English words. If they don't constrain the learning algorithm to use proper English, it will surely drift and use another language.
Jsbrads Posted November 29, 2017

A Turing test doesn't test for free will, merely an observer's inability to distinguish a machine's speech from a human's.
lorry Posted November 29, 2017

Pretty sure this is just an optimization to maximize the information content per message. English is not an optimized language in this sense, because each successive character in a word (message) carries less and less uncertainty. For example: if I start a word with the character "Q", you can be pretty certain what the next character is going to be. And because you can be pretty certain the next character is going to be "U", the "U" does not contain much information (because information is the resolution of uncertainty).
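A minimal sketch of measuring that effect (the text below is a tiny made-up stand-in for a real corpus): the entropy of the character following "q" is near zero, while the character following "t" is far less predictable.

```python
from collections import Counter, defaultdict
from math import log2

# Tiny stand-in corpus; any large English text shows the same pattern.
text = ("the quick brown fox jumps over the lazy dog "
        "quote queen quiet quality the that this there then")

# Count which letter follows each letter.
following = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    if a.isalpha() and b.isalpha():
        following[a][b] += 1

def next_char_entropy(ch):
    counts = following[ch]
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

print("after q:", next_char_entropy("q"))  # ~0 bits: always "u"
print("after t:", next_char_entropy("t"))  # noticeably higher
```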
smarterthanone Posted November 29, 2017

Just because one AI outputs "kdsjflkdjf dskjfd fsdkfjdsl fdskfjdf" and the other AI says "4354543 23krjkrr kj3 aaaaaa" doesn't necessarily mean they are communicating anything. I have been a web developer for 5 years and I think it's all bull excrement on 10 different levels.

On the philosophical side: there is no such thing as a truly sentient AI, because it must be programmed, so with perfect knowledge you would always know the outcome. I think most people would only consider it AI once the knowledge was so far from perfect that it seemed like magic, kind of like how the sun rising seemed to cavemen. But it raises the question: if we understood the human brain with perfect knowledge, such that we could predict the outcome, would people cease to be considered sentient?
ofd Posted December 4, 2017

And so it begins: http://www.sciencealert.com/google-s-ai-built-it-s-own-ai-that-outperforms-any-made-by-humans
barn Posted December 4, 2017

It would be interesting to see what AIs would do when presented with images of everyday objects resembling an abstraction, suggesting a meta-meaning (sorry, I'm reaching the limits of my vocabulary). I.e. those images labelled "fail" where, due to special circumstances, the visual information suggests contradictory meanings... and people find it funny.

https://ixquick-proxy.com/do/spg/show_picture.pl?l=english_uk&rais=1&oiu=http%3A%2F%2Fstarecat.com%2Fcontent%2Fwp-content%2Fuploads%2Fblack-man-wearing-cat-hat-cat-wearing-negro-cap.jpg&sp=197823a858e261f10b4660c510061742

It's at that point that I would suspect a high enough degree of resemblance to independent-ish cognitive functions, or a sense of humour.

Barnsley
Kikker Posted December 10, 2017

On 29-11-2017 at 5:53 PM, lorry said: "Pretty sure this is just an optimization to maximize the information content per message. English is not an optimized language in this sense [...]"

The gains from information compression are trivial compared to the elimination of the ambiguity that natural languages carry. For example, a noun can have several meanings, but a meaning can also have several nouns; figuring this out is computationally expensive. Beyond word meanings, even grammar is ambiguous, and the only way we can make decent parsers is the statistical approach, where we need (at least) tens of thousands of annotated sentences to train a parser to interpret sentences like a human would (~75% accuracy). So the problem is that some of the information required to understand an English sentence isn't encoded in the sentence itself, nor in the grammar rules, but is only apparent by observing the language as it is in use.

On 30-11-2017 at 12:56 AM, smarterthanone said: "Just because one AI is going 'kdsjflkdjf dskjfd fsdkfjdsl fdskfjdf' and the other AI says '4354543 23krjkrr kj3 aaaaaa' doesn't necessarily mean they are communicating anything [...]"

You can easily test and measure the information exchanged between two entities without knowing what they communicated, provided you can communicate with them separately. If the agents hold memory, you can tell one agent a fact and see whether that information is transferred to the other agent. In the case of chatbots, a chatbot A trained on a human dataset or on human interaction may cause behaviour changes in another chatbot B when the two interact with each other.
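A minimal sketch of the grammar-ambiguity point, using NLTK and its textbook toy grammar (not one of the trained parsers mentioned above): one sentence, two structurally different parses, and nothing in the string itself says which reading was meant.

```python
import nltk

# "In my pajamas" can attach to the verb phrase (I was wearing them)
# or to the noun phrase (the elephant was), so one sentence yields
# two valid parse trees under this grammar.
grammar = nltk.CFG.fromstring("""
    S -> NP VP
    PP -> P NP
    NP -> Det N | Det N PP | 'I'
    VP -> V NP | VP PP
    Det -> 'an' | 'my'
    N -> 'elephant' | 'pajamas'
    V -> 'shot'
    P -> 'in'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("I shot an elephant in my pajamas".split()):
    tree.pretty_print()
```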
lorry Posted December 10, 2017

6 hours ago, Kikker said: "The gains from information compression are trivial compared to the elimination of the ambiguity that natural languages carry. [...]"

Thanks, Kikker. I never thought about that.
Jsbrads Posted January 23, 2018

Just heard of an AI that, after being given many pictures of dogs and wolves and then asked to identify whether a new sample was a dog or a wolf, said "wolf". This AI could be queried as to why it chose "wolf", and it replied that it decided the husky was a wolf because there was snow in the background of the picture, not because of any features of the dog itself.
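This sounds like the husky-vs-wolf experiment from the LIME paper (Ribeiro et al., 2016, "Why Should I Trust You?"), where the explanation highlighted the snowy background rather than the animal. A minimal sketch of querying a model that way with the lime library; the classifier and image below are placeholders you would replace with a real trained model and photo.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(images):
    # Placeholder for a trained dog/wolf model: returns class
    # probabilities for a batch of images.
    return np.tile([0.2, 0.8], (len(images), 1))

image = np.random.rand(224, 224, 3)  # placeholder for a husky photo

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=2, num_samples=1000)

# Extract the image regions that pushed the model toward its top
# label; in the paper's experiment these were the snow, not the dog.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)
print(mark_boundaries(img, mask).shape)
```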