
AI mimicking your voice INDISTINGUISHABLY


barn


Hi, thinkers and the like,

Recently a tech giant unveiled their latest(?) product, an AI-like subroutine that effortlessly demonstrated its ability to:

° make a phone call as if it were a real person

° use language to seamlessly ask-respond-continue a conversation

° fool the human(?) on the other end of the line into treating the caller AS A REAL PERSON and changing their behaviour based on the contents of the call (!!!)

Some creepy stuff. You can find the demonstrations easily for yourself on pooTube; I'm sure you'll know which one I'm referring to once you see it.

Barnsley

Link to comment
Share on other sites

It's possible.

Not sure why you'd find it funny, though.

(Is there something wrong with me for not finding it humorous that people can't tell the difference between a real person and an AI subroutine programmed to mimic identity and curb/influence behaviour... both 100% successfully?)

p.s. - Lyrebirds can't be taught to identify/compose arguments while simultaneously using metadata in real time. Storing everything forever, down to the last bit of data, and interfacing with the other arms of the network is, I suppose, obviously understood as well.

Oh, and none of the available potential will be mishandled or can be misused. Big companies don't have to worry about intrusions or forced gov. cooperation... Suuure, right?!


Right after that I checked out Lyrebird and realized how simple it is compared to what you mentioned.

 

- I agree that there is something very scary about someone wielding that power within the state-defined corruption of our world, an inevitability that does make me a little uncomfortable. I think your fears are rational, and I share them, but I definitely think there is comedic potential in this technology. We can't subdue the technology, and hopefully but probably can't prevent the collapse of the West that this technology will contribute to, so I am going to get some laughs out of it.

Do you think my humor minimized your valid concerns? 

I did read you saying: "Oh, and I hate socializing or jokes with virtuous people... It's very tiring to be constantly impressed and feel truly inspired while partaking in 'connectedness', real-time. (At least, that's what I had concluded from seeking it continuously. I am being difficult... :-} )"

So, more specifically, is this a personal distaste for my humor, or is it a criticism? (Very fine if it is; it would be good self-knowledge.)

I can laugh at jokes about murder; that does not mean I think it's moral.

Thanks

 

 

 


Oh, don't worry. You're good, I have nothing against you seeing it as something to brighten yourself with...

Perhaps the main reason why I wouldn't is that it tends to transform the whole situation into a less serious matter... and I don't think that is a good idea. When such 'tools' in the world ARE NOT SEEN for what they really are, they tend to get overlooked, allowed to 'roam without any responsibility'... and 'once the genie is out, you'll have to pay precious sacrifices to (maybe) get it back in the lamp'.

I just think vigilance and prevention are a much better sacrifice than, let's say... more loss of freedoms, an even tighter grip of the already tight noose on the individual, thanks to more intrusive tech perks from a tech giant and its best buddy, the bully every citizen rightfully fears. In short.

1 hour ago, gavstone21 said:

Do you think my humor minimized your valid concerns? 

I do, yes. Although you minimised your own concerns for yourself, and that doesn't involve me.

Those reading you who are already aligned with how you see things will probably even enjoy the joke you created. Others, not so much.

Even so, you might have noticed I'm not the all-serious, killjoy type of person; maybe you've seen my 'funsies' here and there. It's just that this topic sent chills down my spine, maybe because of what I understand. Or perhaps because of what I don't. I just don't find it funny. (Maybe Benjamin Owen could change my mind in the blink of an eye.)

1 hour ago, gavstone21 said:

I did read you saying... [...]

Thumbs up! That's observant. You did research, looked it up.

I was trying to be sarcastic/ironic there, while acknowledging that I knew I was being sarcastic/ironic for the sake of a joke, a statement about my seeking out the virtuous. That's it. (Maybe I overdid it a bit.)

1 hour ago, gavstone21 said:

So, more specifically, is this a personal distaste for my humor, or is it a criticism? (Very fine if it is; it would be good self-knowledge.)

If you are asking whether I saw your comment as distasteful... No, pal. You're good so far!

Do I think it was a good joke? Personally, from my own unique-subjective-taste-opinion-preference liking... not really. (Which doesn't mean anything; please feel free to make up new jokes whenever you feel like it.)

1 hour ago, gavstone21 said:

I can laugh at jokes about murder; that does not mean I think it's moral.

In certain circumstances, me too. (If I got what you meant... dark humour, right?!)

I strongly uphold the idea that context, intent, the originator's responsibility... etc. matters.

Like the difference between telling a joke because you hope to induce laughter, and the same joke told as a form of mockery, where the recipient(s) won't laugh.

But even then,

'Sticks and stones may break my bones, but words will *NEVER harm me.'

*caveat = Well, this is originally about ignoring taunting/name-calling; I do think there's such a thing as verbal abuse, which is different... but yeah, I don't want to go any deeper here. I hope you 'get me' well enough already.


     What we just did there highlights exactly why I am on this site. Thank you very much. 

 

I think the reason for this development not causing much discomfort for me is the fact that I'm already pretty pessimistic about the West's whole situation, and I have not personally experienced the freedom people like you are unwaveringly defending, considering I'm still at the will of my parents and the public school. Then again, I can imagine myself in the future as a business guy getting screwed by some sadistic bureaucrat with the voice-tech. I still value the little freedoms I have enormously, so I definitely understand the shivers. 

 

Thanks

-Gavin

 


33 minutes ago, gavstone21 said:

What we just did there highlights exactly why I am on this site. Thank you very much.  

Hey, that's a really nice compliment! Agree and my pleasure, really!

33 minutes ago, gavstone21 said:

I think the reason for this development not causing much discomfort for me is the fact that I'm already pretty pessimistic about the West's whole situation, and I have not personally experienced the freedom people like you are unwaveringly defending, considering I'm still at the will of my parents and the public school. Then again, I can imagine myself in the future as a business guy getting screwed by some sadistic bureaucrat with the voice-tech. I still value the little freedoms I have enormously, so I definitely understand the shivers.  

In a sense, pessimism is the easiest of all; not much resistance there... I'd be cautious: things tend to speed up going down a slope.

Writing things off prematurely is exactly that: wasting a precious opportunity. And since one man's meat is another man's poison (sorry, today's my big proverb dump... haha :huh:, nevertheless they seem to encapsulate my meaning), when we lose out on an opportunity, someone else gains a chance to utilise that moment for their agenda.

->

Perhaps it would be a good idea to ask yourself: who benefits from you being/feeling/acting pessimistic?


Barnsley


There's some obvious potential for abuse with this. However, knowledge of it will surely result in some form of protection against it, like the "block-tech" magic that somehow makes sending and receiving information secure.

...I'm frankly more annoyed at the possibility of A.I. voices mimicking BAD voices. I'd rather hear a smooth, sweet, cool, or whatever voice being mimicked, not some whiny, bored, apathetic receptionist voice.

Like in that example Stefan Molyneux tweeted: horrendous voice, but terrifically convincing. I'd have assumed it was an annoying mouse squeaking rather than an A.I., and a fair number of people sound bad when they talk, so that's very realistic.

I think the value of face-to-face interactions would increase again as audio interaction would be treated more carefully or suspiciously than currently.


Talk to Alexa; she is freaking retarded. I am a developer. I don't believe this.

Talking to an AI and not being able to tell whether it's a human is not a single problem.

 

It is voice recognition, sound quality, databases (vocabulary, memories of events and such), AI, and probably another thing or two.

 

Voice recognition sucks. How many times does Alexa or Siri or any of them not understand you because you coughed, or there was some noise, or even just because? All the time. But a human can still understand you, even if you say it weird. Our brain is FARRRRRR better than a computer when it comes to pattern recognition, context, etc.
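That pattern-recognition gap can be sketched with a toy example (entirely my own, using only Python's standard library, nothing from any real recognizer): an exact matcher gives up on a slightly mangled utterance, while fuzzy matching, a crude stand-in for human tolerance, still recovers it.

```python
import difflib

COMMANDS = ["play music", "set a timer", "call mom"]

def rigid_match(utterance):
    # Exact-match recognition: any cough, noise, or slurring breaks it.
    return utterance if utterance in COMMANDS else None

def tolerant_match(utterance):
    # Fuzzy matching as a crude stand-in for human-style pattern
    # recognition: "close enough" still counts.
    hits = difflib.get_close_matches(utterance, COMMANDS, n=1, cutoff=0.6)
    return hits[0] if hits else None

garbled = "play musik"          # one word mangled by noise or accent
print(rigid_match(garbled))     # None - the rigid system gives up
print(tolerant_match(garbled))  # "play music" - recovered anyway
```

The real problem is of course vastly harder (audio, not strings), but the asymmetry is the same: humans degrade gracefully, exact matchers fail hard.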

Sound quality. I've heard some that are pretty good, but for real, they are still never going to be perfect. I say my name differently when I make a phone call to a business vs. a friend vs. a romantic partner. Nobody has solved this sort of problem yet in terms of the actual voice.

Databases. Say you have an AI pretend to be you and it calls me: I just have to ask, "When was the last time I saw you?" and it will be wrong, unless you type that info in every time you see me. And even that is too simplistic, because I could ask about any stupid detail that would not lend itself easily to a database format. This one will never be solved.

AI. If it really talks to you and you can't tell it's a computer, it has passed the Turing test, which would be big news, as I don't think anything has.
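The "databases" objection above lends itself to a toy sketch (my own, hypothetical fields and all): an impersonating agent can only answer from facts someone explicitly entered, so any question outside its stored schema exposes it.

```python
# Toy "impersonation" knowledge base: the AI only knows what was typed in.
profile = {
    "name": "Joe",
    "hometown": "Atlanta",
    "last_meeting": None,  # nobody logged this, as the post predicts
}

def impersonator_answer(question_key):
    """Answer only if the fact was explicitly stored; otherwise deflect."""
    fact = profile.get(question_key)
    return fact if fact is not None else "Sorry, what do you mean?"

print(impersonator_answer("hometown"))      # "Atlanta" - stored, so it passes
print(impersonator_answer("last_meeting"))  # deflection - the giveaway
```

Any question about an unanticipated detail ("what did we argue about last week?") falls into the deflection branch, which is exactly the tell being described.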


2 minutes ago, smarterthanone said:

Yes. Probably because you don't mention the company name or provide a link. :bunny:

I see.

Well, sure. That's within the set of possible explanations. (By the by... did you see the clues I supplied even so?)

Cute bunny you've got there.


I went and looked it up. I hope it's true. It is a huge asset that provides tons of value and costs practically nothing (presumably it can run off a smartphone and will be available to people, as most Google stuff is). So I, Joe Schmo, with the use of my computer and 10 phone lines, can run a business all by myself, selling products, providing customer support, or disseminating information to the public via telephone. This will replace cashiers and all sorts of do-nothing jobs held by the unskilled, uneducated, and unimaginative. But for entrepreneurs it's one more tool to build a business. Awesome.

I once had a business service that needed a massive customer support and sales staff; if I'd had this, I could have run it with NO employees. Instead I sold it, because I did not want to hire a massive number of employees. I wish this had been around then.

BTW: I think it's fake/staged and not nearly as good as they say. See Siri; see Alexa.


Currently, there are soundboards which take samples of various actors and put them on an instant touch screen.

Here is one example: http://www.crocopuffs.com/soundboard/arnold.html

These are often used on YouTube for making prank calls. Here is a simple one with a conversation between "Arnold" and a telemarketer: 

I'm rather amazed that anyone would fall for this, yet they do.

Now, the soundboards are made from actual voice samples. The more voice samples you have, the more complete you can make a soundboard and imitate someone. Just imagine what someone could do with all of Stefan's audio samples floating around out there. 

I don't think this would be easily accomplished without voice samples. Someone close to the person the AI was trying to imitate would be harder to fool than someone unfamiliar with the subject.
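Conceptually a soundboard is tiny; roughly speaking (file names here are hypothetical), it is just a lookup from a button label to a pre-recorded clip, and its power scales with how many samples you have:

```python
# Minimal soundboard model: each button maps a label to a recorded clip.
# File names are hypothetical; a real board would hand these to an audio player.
soundboard = {
    "greeting": "arnold_hello.wav",
    "question": "arnold_who_is_this.wav",
    "stall": "arnold_uh_huh.wav",
}

def press(label):
    # Returns the clip to play; more samples = more buttons = better imitation.
    return soundboard.get(label, "silence.wav")

print(press("greeting"))  # arnold_hello.wav
print(press("insult"))    # silence.wav - no sample, no imitation
```

This also makes the contrast with the AI under discussion concrete: a soundboard can only replay what was sampled, whereas the claimed system synthesizes new utterances from a voice profile.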


30 minutes ago, smarterthanone said:

I went and looked it up. I hope it's true. It is a huge asset that provides tons of value and costs practically nothing (presumably it can run off a smartphone and will be available to people, as most Google stuff is). So I, Joe Schmo, with the use of my computer and 10 phone lines, can run a business all by myself, selling products, providing customer support, or disseminating information to the public via telephone. This will replace cashiers and all sorts of do-nothing jobs held by the unskilled, uneducated, and unimaginative. But for entrepreneurs it's one more tool to build a business. Awesome.

I once had a business service that needed a massive customer support and sales staff; if I'd had this, I could have run it with NO employees. Instead I sold it, because I did not want to hire a massive number of employees. I wish this had been around then.

BTW: I think it's fake/staged and not nearly as good as they say. See Siri; see Alexa.

Sadly/worryingly (I think), lots of people think similarly. I mean the 'benefit without the actual cost' part.

cost here = vulnerability multiplied

Just out of curiosity...

Do you think there should be a limit to how much power we allow others to have over us?

p.s. (Is it too far-fetched to assume that if the lobby groups get their way (I haven't seen a-n-y pushback so far), it'll mean the establishment of an era in which just one company holds every key to every door? Unless they make a U-turn and reverse their efforts to get into people's heads.)


1 hour ago, barn said:

Sadly/worryingly (I think), lots of people think similarly. I mean the 'benefit without the actual cost' part.

cost here = vulnerability multiplied

Just out of curiosity...

Do you think there should be a limit to how much power we allow others to have over us?

p.s. (Is it too far-fetched to assume that if the lobby groups get their way (I haven't seen a-n-y pushback so far), it'll mean the establishment of an era in which just one company holds every key to every door? Unless they make a U-turn and reverse their efforts to get into people's heads.)

Who has power over me? I do not have an Alexa or Siri. I have an Android, but it's been jailbroken and encrypted, and I deleted a lot of the functionality beyond basic phone usage. I realize I do not inherently have a right to a phone, so if all phones become unacceptable I will have to do without. I do not need a phone to live.


On 05/17/2018 at 10:36 PM, smarterthanone said:

I have an Android, but it's been jailbroken and encrypted, and I deleted a lot of the functionality beyond basic phone usage.


I agree, custom-loaded phones have more advantages. Good for you!


Hi @MercurySunlight

I'm guessing you haven't seen the presentation... That's ok. Though it is hard to discuss it otherwise.

To take your soundboard example... imagine one that listens to you speaking and from that effortlessly rebuilds your complete voice profile. Once it's done, or rather 60 seconds or so later, it can speak naturally from any provided dataset and interact MEANINGFULLY.

Not like... 'Aww, isn't that cute?! Clumsy AI is trying its hardest to not give itself away, but it's obvious!'

 

But like... 'Wha...? Who...? Nooo... You mean I just got convinced to do X by a non-person? How would it know what I meant? What else does it know about the way I think? Who has access to all that data? No, I don't want this system to be allowed to function. It makes duping a walk in the park. Wait. Now you're telling me it interfaces with the multinational company with the biggest knowledge base and the greatest reach? You must be kidding me! Just imagine if it partnered with governments for cash and privileges. Imagine if it tried to filter who could see/access what on their platform based on an algorithm and started thought-policing.'


On 5/17/2018 at 3:35 PM, barn said:

Do you think there should be a limit to how much power we allow others to have over us?

Yes, but personally. Don't rely on some big organization if they are not doing the right things. I don't need the government, or to band together with people, to protect me from AI. I simply don't buy it, or don't use it in such a way that I can't live without it.


15 minutes ago, smarterthanone said:

Yes, but personally. Don't rely on some big organization if they are not doing the right things. I don't need the government, or to band together with people, to protect me from AI. I simply don't buy it, or don't use it in such a way that I can't live without it.

Right, those are generally optional. You can choose to not buy or not use certain things.

The problem I was trying to raise awareness of is when you are on the receiving end but you can't tell... if you can't tell, you can't influence it. (As in: nobody tries to step over a puddle they haven't thought to look for, or looked for but the illusion tricked them.)

Lastly, would you agree with me that the less power an entity/the individuals in that organisation have, the lower the risk of said power being abused?


6 hours ago, barn said:

Lastly, would you agree with me that the less power an entity/the individuals in that organisation have, the lower the risk of said power being abused?

I agree, but the more power an organization has, the more value it can produce. This is why capitalism: it must perform acceptably to accumulate said power. If nobody had any real power, we would still be playing with sticks instead of even talking about AI. Google provides tons of value, thus has lots of power, thus the ability to invent, build, and promote AI or anything else, even if people don't like it.


On 5/17/2018 at 3:29 PM, MercurySunlight said:

Currently, there are soundboards which take samples of various actors and put them on an instant touch screen.

Here is one example: http://www.crocopuffs.com/soundboard/arnold.html

These are often used on YouTube for making prank calls. Here is a simple one with a conversation between "Arnold" and a telemarketer: 

I'm rather amazed that anyone would fall for this, yet they do.

Now, the soundboards are made from actual voice samples. The more voice samples you have, the more complete you can make a soundboard and imitate someone. Just imagine what someone could do with all of Stefan's audio samples floating around out there. 

I don't think this would be easily accomplished without voice samples. Someone close to the person the AI was trying to imitate would be harder to fool than someone unfamiliar with the subject.

 

I'm a little late to this topic, but I think this is precisely what we're seeing. A lot of these AIs flow on assumptions based on patterns in human conversation. For example, the AI in the presentation definitely talks like the pre-recorded-message AI you get when you call places like PayPal that have a spoken directory of questions and answers. At the end of the Chinese restaurant call, for example, the AI definitely seemed convinced that the whole call had failed. I'm quite curious what information the AI relayed back to the user, and whether it even remotely reflected what the Chinese woman said. We're seeing pre-recorded cases, which are their best results, and you can hear the assumption-based flow working: if the AI doesn't hear what it expects, it automatically assumes "no," "fail," etc. Except in the Chinese restaurant call the flow assumption actually worked, unlike when you or I call about a specialty problem we know the bot can't handle and spend 30 minutes trying things like "operator," "tech support," "human," etc. to trigger an actual person on the other end of the line, who could have solved our issue in 5 minutes with a password reset, a service cancellation, or a yes-no question.

 

Case in point: when my girlfriend recently received a card from the power company after having canceled the service (money she was overcharged; they said she was getting a debit card, as they couldn't send cash, a check, or anything else), she fought with the AI for about an hour just to get ahold of someone, because the AI at the card company wouldn't let her "activate" the card (so she was dealing with two really bad AIs). Eventually, she got the first AI to work while she was on hold on another phone, after I managed to get the AI to give up and redirect me to a human. The most annoying part was that the AI at the card company kept hanging up on her while she was punching in the activation numbers.


8 hours ago, barn said:

Right, those are generally optional. You can choose to not buy or not use certain things.

The problem I was trying to raise awareness of is when you are on the receiving end but you can't tell... if you can't tell, you can't influence it. (As in: nobody tries to step over a puddle they haven't thought to look for, or looked for but the illusion tricked them.)

Lastly, would you agree with me that the less power an entity/the individuals in that organisation have, the lower the risk of said power being abused?

 

You can easily break these kinds of AI. I get robocalls all the time, sometimes intelligent ones that answer like a human. The smartest AI I've ever dealt with is Cleverbot, but you can break down this bot, too. They have a very hard time relating to logic, despite being based heavily on logic: they can't understand the values we attach to things, since they don't live lives like ours that feel pain and joy.


2 hours ago, smarterthanone said:

I agree, but the more power an organization has, the more value it can produce. This is why capitalism: it must perform acceptably to accumulate said power. If nobody had any real power, we would still be playing with sticks instead of even talking about AI. Google provides tons of value, thus has lots of power, thus the ability to invent, build, and promote AI or anything else, even if people don't like it.

Thanks for sharing your thoughts.


40 minutes ago, Kohlrak said:

 

You can easily break these kinds of AI. I get robocalls all the time, sometimes intelligent ones that answer like a human. The smartest AI I've ever dealt with is Cleverbot, but you can break down this bot, too. They have a very hard time relating to logic, despite being based heavily on logic: they can't understand the values we attach to things, since they don't live lives like ours that feel pain and joy.

(Sigh)... So, would I be wrong to hypothesise that you haven't had the opportunity to see the presentation in question?

If I'm accurately assessing... that's ok, it isn't a must. Perhaps you could see things in a different light, then. (I can only encourage you to see it for yourself)

Anyhow... let me ask you this:

Isn't the fact that the purpose of insurance is to 'cushion' the fall after an unexpected event proof that accidents occur without people being able to prevent them?

(as in: if they could, there'd be simply no need for insurance.)

Then it follows that when AIs are acting in the name of individuals, insurance would have to be on par with the worldwide reach of the largest multinational company, one with the full support of the many tentacles of several governments. (Actually, it's far greater, but I toned it down.)

 


2 hours ago, barn said:

(Sigh)... So, would I be wrong to hypothesise that you haven't had the opportunity to see the presentation in question?

If I'm accurately assessing... that's ok, it isn't a must. Perhaps you could see things in a different light, then. (I can only encourage you to see it for yourself)

I only saw part of it, specifically the bits where the AI made the two calls. I was watching my favorite BS news channel (Secureteam10, which actually picks up a legit news story once in a while, but he absolutely overreacts to everything and turns everything else into aliens, which I find hilarious). The AI reminded me of the robocall AIs we already know. I'd love to have my own shot with the AI.

 

2 hours ago, barn said:

Anyhow... let me ask you this:

Isn't the fact that the purpose of insurance is to 'cushion' the fall after an unexpected event proof that accidents occur without people being able to prevent them?

(as in: if they could, there'd be simply no need for insurance.)

Then it follows that when AIs are acting in the name of individuals, insurance would have to be on par with the worldwide reach of the largest multinational company, one with the full support of the many tentacles of several governments. (Actually, it's far greater, but I toned it down.)

 

 

I would expect the Google AI to admit it's a bot if asked. The biggest threat is impersonation for the benefit of governments (especially in fabricating the crimes of political enemies), but that can happen without consent ever being given. Meanwhile, I also doubt it's capable of picking up speech mannerisms such as word choices without a very large amount of training. Even then, I doubt that's even been programmed in. AI demos remind me of Criss Angel: it's not magic or quality you see and hear, it's illusion.

These human-interface AIs usually have scripts and subroutines to follow, especially regarding follow-up questions. Usually, if the follow-up question is answered with something outside the script, the AI won't respond appropriately. For example, if I told Siri (disclaimer: I don't have an iPhone to try this) to set the GPS to take me to Lewistown, it'd ask me whether I meant Pennsylvania or Montana; if I responded with "play Reise, Reise by Rammstein," it'd neither ask me again nor play that song. But if it wasn't expecting an answer, it'd play the song.

Go back to the Chinese restaurant call and listen to the AI's intonation: although the intonation overall is weird, you can hear the very thing I'm talking about, especially at the very end of the call. It seemed to me that the AI failed to understand the concept of walking in, but asked the question(s) about it anyway, based on the failure to reserve (which it assumed to be the case, since it didn't hear anything saying a reservation had been made). I get this all the time when dealing with robocalls: I'll answer appropriately, but because it didn't understand me, it responds as if I said something I didn't say (this is especially common due to my speech mannerisms): "Would you be willing to donate 25 dollars to the veterans' fund?" "Could you call back in a week?" (It works on the assumption that I said "no," or "I don't have the money right now," because I said something it wasn't expecting.) "Oh, I see. How about 5 dollars?"/"How about next Friday?" "Wait, wut? Are you a bot?" "No, I'm not a bot. How about next Friday?" (Or it otherwise repeats the previous statement.) "How much wood could a woodchuck chuck if a woodchuck could chuck wood?" "I understand. If you change your mind, you can get ahold of us at 1-800-###-###."

Trust me, this thing is waiting to be dissected like that, but I also expect it to be more genuine and admit it's an AI, or state so should something go wrong and some failsafe script detect that the conversation needs user interaction of some kind. I'd have to get my hands on it to be sure, and to find its exact weaknesses for calling it out, but I can hear exactly where to look.
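That fallback behaviour, where anything the bot can't parse gets treated as the nearest expected answer, can be sketched as a tiny scripted dialog (a toy of my own, not the real system's logic; the donation script loosely mirrors the exchange above):

```python
# Toy robocall script: each state expects certain phrases; anything else
# falls through to an assumed answer - exactly the failure mode described.
SCRIPT = {
    "ask_25": {
        "yes": ("done", "Thank you for your $25 donation!"),
        "no":  ("ask_5", "Oh, I see. How about 5 dollars?"),
    },
    "ask_5": {
        "yes": ("done", "Thank you!"),
        "no":  ("done", "If you change your mind, call us back."),
    },
}

def respond(state, utterance):
    expected = SCRIPT[state]
    # Unrecognized input is assumed to mean "no" - the bot never says
    # "I didn't understand"; it just barrels forward on its assumption.
    key = utterance if utterance in expected else "no"
    return expected[key]

state, reply = respond("ask_25", "could you call back in a week?")
print(reply)  # "Oh, I see. How about 5 dollars?" - the assumption at work
```

Probing with off-script input ("are you a bot?", the woodchuck question) keeps landing in the fallback branch, which is why these systems are so easy to dissect.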


Thank you for your extended response; I think I understand you much better now. (If not, do tell.)

12 hours ago, Kohlrak said:

I would expect the Google AI to admit it's a bot if asked.

That's certainly a possibility, it saying/warning you ahead of time. Sure, that can happen. (But at the same time, why would it? I'm thinking: 'I have been programmed to make you think I'm a real person, so I hope you can now treat me as a real person.')

12 hours ago, Kohlrak said:

Meanwhile, I also doubt it's capable of picking up speech mannerisms such as word choices without a very large amount of training

That's a good point. I can't tell how accurately the world's most advanced AI can mimic a person using the largest set of data & metadata... + gov. help... Aaand constantly learning, cross-referencing with other people's behaviour, from the long dead to the actively logged-in.

I just know that I don't want to find out.

Usually, when I need some perspective on this, I remember how subtitles used to have to be created manually, while nowadays speech-to-text is pretty easy. You can use, for example, a service where two people can have a live conversation without speaking the same language. Currently it supports 10 different languages.

12 hours ago, Kohlrak said:

Usually if the followup question is responded to with something other than... [...]

Yeah, I get you... Nor do they interrupt you mid-sentence to bridge to the next probable point of thought, and they are limited in intonation and in the fluctuation of 'natural' speech velocity... I know, I know, it's not fully fleshed out. As with any tech, we can find currently unresolved hiccups and weaknesses. As always.

I'm not sure you're seeing the perspective, though.

i.e. - The moon lander had a computer whose processing power today we can fit in an earpiece. Not only miniaturisation but also the availability of information has increased since then. Like a survey that used to be filled out by hand, taking a month to answer & process 10 questions from 300 people, compared to today's web crawlers doing millions of pages every second, automatically, without most people ever having imagined such things could exist.

How much have our morals evolved in comparison with all this pile of lead on one side of the scale?

Or should I rather ask: how much have we let them devolve/evolve towards the immoral?

If you ask me, we've got a huge amount of catching up to do. We've become the most ignorant humanity has ever been, while some of us have capitalised on it to an unimaginable extent. The price of NOT being vigilant.

12 hours ago, Kohlrak said:

Trust me, this thing is waiting to be dissected like that, but I also expect it to be more genuine and admit it's an AI, or state so should something go wrong and some failsafe script detect that the conversation needs user interaction of some kind.

I think you are being very naive, or uninformed... I'm trying to find a soft way of saying that there are countless examples of 'surely it would NOT do X; we'd be warned/informed ahead of time; we could stop it before it got out of hand'... but that never seems to be the case when there's no freedom of expression or transparency, when taking responsibility is deflected. HUMABIO

i.e. - Are people given answers as to how/why shadow-banning and arbitrary content tailoring are being done? Search results not reflecting the actually available set of information, companies selling your data (not just metadata) to third parties... etc.

I'm sorry, I don't see how any good can come out of this IN THIS SET OF circumstances.

On 05/19/2018 at 12:57 AM, barn said:

The problem I was trying to raise awareness of is when you are on the receiving end but you can't tell... if you can't tell, you can't influence it. (As in: nobody tries to step over a puddle they haven't thought to look for, or looked for but the illusion tricked them.)

1. Do you agree with the part, highlighted here?

2. Did they let the call-recipients know (in the video) that they were being duped/tricked/misled/used for an experiment... etc.? (I'm asking because I didn't see it happening. Isn't that worrisome?)


13 hours ago, barn said:

That's certainly a possibility, it saying/warning you ahead of time. Sure, that can happen. (But at the same time, why would it? I'm imagining: "I have been programmed to make you think I'm a real person, so I hope you can now treat me as a real person.")

You'd want it to do that because of the situations the AI can get the user into. Surely it should only do so after enough failures occur, or else people will just hang up on it.

13 hours ago, barn said:

That's a good point. I can't tell how accurately the world's most advanced AI can mimic a person using the largest set of data & metadata... + gov. help... Aaand constantly learning, cross-referencing with other people's behaviour, from the long dead to the actively logged-in.

I just know that I don't want to find out.

Usually, when I need some perspective on this, I remember how subtitles had to be created manually in the past, while nowadays speech-to-text is pretty easy. You can, for example, use a service where two people can have a live conversation without speaking the same language. Currently it supports 10 different languages.

Yet Siri has a hard time. To be fair, I hear Dragon's pretty good, but I don't know how good. Here's the thing: if the Yanny/Laurel thing doesn't show that the task is hard even for humans to pull off, I don't know what does. Speakers are usually heavily coached before appearing on TV. We don't usually catch this, but there actually is a "standard American dialect," just as there are standard dialects for other languages. I'm sure the thing would have trouble transcribing, say, 松本人志, who doesn't speak standard Japanese. Let's take Japanese, for the sake of example.

In Japanese, there are many words that seem to be homophones, even to the Japanese perspective. A famous example is the tongue twister "箸の橋で端から端まで走った橋本さん." The phonetic writing system registers "箸" (chopsticks) as "はし," "端" (end [of a bridge or road]) as "はし," and "橋" (bridge) as "はし." To the casual listener (especially someone who doesn't know Japanese) it sounds like they're saying "haw she" over and over again with absolutely no meaning, with a couple of other random words in between. To a Japanese person, it's just a tongue twister that can be confusing in certain contexts, but not as confusing as the "buffalo buffalo" thing that we have in English.

See, in 関西弁 (the Kansai dialect, of western Japan), 箸 is pronounced low with a slightly rising tone (though in their minds it doesn't rise at all), while 端 is a high pitch that drops slightly (the opposite of 箸), and 橋 has a high-to-very-low tonal pattern, with the same pitches that we would use with the word "kitty." So: "haw she noh HAW she deh HAW SHE KAW DDAW HAW SHE MAW DEH haw SHE [pause] taw HAW SHE MOH TOH sawn." Now, in standard Japanese (what the AI expects), 箸 is high to low instead, while 橋 is low to high with a rule that any sound afterwards returns to low, and 端 is low to high, but the following sound may be high or low. Let's just say that the AI would have a very hard time with this sort of thing.

(Japanese people usually use context to identify which word is which, and the tonal differences are more useful in identifying word borders, but they can be important in some contexts, which can lead to hilarious cross regional puns.)
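The disambiguation idea above can be sketched as a lookup keyed on reading plus pitch pattern. This is a toy: the pitch labels ("HL", "LH", "LH+") are simplified stand-ins for a real pitch-accent lexicon, following the standard-Japanese patterns described above.

```python
# Toy pitch-accent lexicon for the three Tokyo-dialect "hashi" words.
# "H" = high mora, "L" = low mora; "+" marks that the following
# particle may stay high. Real accent dictionaries are far richer.
ACCENTS = {
    ("はし", "HL"): "箸 (chopsticks)",   # high to low
    ("はし", "LH"): "橋 (bridge)",       # low to high, particle drops low
    ("はし", "LH+"): "端 (edge)",        # low to high, particle may stay high
}

def disambiguate(reading, pitch):
    """Resolve a homophone by its pitch pattern; None if unknown."""
    return ACCENTS.get((reading, pitch))

print(disambiguate("はし", "HL"))  # 箸 (chopsticks)
```

In practice, as the parenthetical above notes, context does most of the work and pitch mainly marks word borders, so a real recognizer would combine both signals rather than rely on a table like this.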

13 hours ago, barn said:

Yeah, I get you... Nor do they interrupt you mid-sentence to bridge over to the next probable point of thought; they are limited in intonation and in the fluctuation of "natural" speech velocity... I know, I know, it's not fully fleshed out. Same as with any tech, we can find currently unresolved hiccups and weaknesses. As always.

I think that was actually on purpose. One of the major concerns with Vocaloid technology, for example, was that the program could sing, and even be convincingly human. So earlier versions actually got restricted due to threats and complaints from unions in Japan over how the technology could replace human artists. The restrictions made it sound very inhuman, which 初音ミク ("Hatsune Miku" in English) was known for. On a side note, it's perfectly possible to get her to use natural intonation and fluctuation of speech; it's the way she makes certain sounds that makes her sound inhuman, because of how they edited the audio data. If you were to mix her tech with better recordings (rather than changing an algorithm), it's quite possible to make her sound human, which Japanese people have a hard time doing since they don't have a conscious understanding of the pitch patterns of their own language (for example, the minor rises and drops are important, but in their heads their "flat words" are flat).

13 hours ago, barn said:

I'm not sure if you are seeing the perspective though.

i.e. - The moon lander had a computer with a processing power that today we can fit in an earpiece. Not only did our miniaturisation but the availability of information increased since than. Like a survey that used to be filled out by hand and took a month to answer&process 10 questions by 300 people, compared to today's web-crawlers doing millions of pages every second automatically, without most people ever having imagined even such things could exist.

How much have our morals evolve in comparison with all this pile of led on one side of the scale?

Should I maybe ask rather, how much have we left them devolve/evolve towards the immoral?

If you ask me, we've got a huge catching up to do. We've become the most ignorant humanity has ever been, while some of us capitalised on it to unimaginable extents. The price for NOT being vigilant.

Honestly, I think that's where humility comes in. I used to believe that we were dumber then than we are now, but I later found out we really aren't. I also used to believe that we were more moral than we are now; we have been, to some degree, but I don't think we really lost as much as we like to believe. When you look at "80s music," how often do you hear bands like Yello and Talking Heads? Everyone remembers Survivor, Falco, Led Zeppelin, etc.

I think the same thing is to be said about history. We like to think that we're smarter, because we know about fleas carrying the plague. Yet how long did it take us to figure out that tobacco in almost any form causes cancer? Our "enlightenment" is vanity.

I think the same thing can be said about modern evil. Sure, you see 12-year-olds right now with massive amounts of cleavage in public. But we seem to forget that's probably how old Mary was when she was pregnant with Jesus. Oh, so they didn't show as much cleavage in, say, the '20s? Maybe, because it was taboo at the time, but don't think for one minute you didn't have women trying to find ways, such as wearing overly tight clothing, to show off the same things. And let us not even bother with desensitization to these things due to exposure forcing us to constantly "up the game." I don't think it's that we got worse, so much as that we never improved while technology did, giving us a larger capacity for showing the evil we already had for the entirety of human history. Yesterday, you slaughtered the women and children of the village; today, you nuke them. What's the difference, except now you can push a button instead of having to write out a fancy edict and crush some wax (or some other compound) with your signet ring?

That's not to say that we haven't abandoned the lessons of our past, but that's more than just "getting worse." We've always been immoral; it just seems that now it's a greater percentage of the population. Where before we had many, many thinkers who were arrested and killed for their ideas, today we have disinformation campaigns that don't result in the direct death of the thinker.

 

13 hours ago, barn said:

 

I think you are being very naive, or uninformed... I'm trying to find a soft way of saying, there are countless examples of 'certainly it'd NOT do X, we'd be warned/informed ahead of time, we could stop it before getting out of hand'... but that never seems to be the case when there's no freedom of expression or transparency, when responsibility taking is deflected. HUMABIO

 

I. e. - Are people given answers as to how/why shadow-banning, arbitrary content tailoring is being done? Search results not reflecting the actual available set of information, companies selling your data (not just meta) to third-parties... etc.

 

Sure, the tech can be used for other things, but that tech already existed before this. We've seen what the government could do already with the leaked tools and such, so doesn't it seem reasonable that if Google (who works with the government) is public with this, the government has had it for even longer? And the (not so) cool thing: everything it's doing, and everything we're afraid of, happens automatically. If the AI can sound like you at all (which is nothing new, as you can find on YouTube), all you have to do is write in a way that brings someone's speech mannerisms out in the text the AI says. Mix a little human with your AI, which you would be doing anyway if you're targeting an individual. In other words, the very thing that people are afraid of when the AI can sound like them (the idea that the AI could set you up for a crime or something) has already been available to the government for a while. Therefore, if the government wanted to use it, it would already have done so before we saw this presentation. Don't get me wrong, I don't believe that just because they haven't done so yet means they won't in the future, but it does seem that they haven't used it yet. That tells me it's not nearly as powerful a tool against us as we like to think it is, since a Donald Trump AI still hasn't been caught talking to a Putin AI about hacking the election. Obviously, that kind of attack isn't enough.

13 hours ago, barn said:

1. Do you agree with the part, highlighted here?

I agree completely, and that's why the Pakistani robocallers continue to scam people. The tech that we're meant to see here isn't the tech that everyone's afraid of. The tech that everyone's actually afraid of has been out for a long while, and it has been deployed by scammers.

13 hours ago, barn said:

2. Did they let the call-recipients know (in the video) that they were being duped/tricked/misled/used for an experiment... etc.? (I'm asking because I didn't see it happening. Isn't that worrisome?)

As far as I know, they didn't. I think it'd be pretty controversial if they did, especially if it happened BEFORE the demo. I am sure there are people out there suggesting that, but from the intonation of the people I get the impression that wasn't the case: they didn't sound scripted. To be fair, I think (though I could be wrong) that the hair salon girl figured out something was up, but I could just be hearing things. And while some would say that should be scary: why? Liar tech is old news, as I've said above. Still, the AI has a job to do, and it got the job done. Does it really matter to you whether you're talking to a human or an AI if the business was conducted properly? Sure, it can sound human, but that tech is old. They're showing off something a bit newer, and everyone is freaking out that it can pose as human. Frankly, that's old, it's been done, and no one should care. What's special here is that they've found a positive use for this technology: you can now give the AI a task, like scheduling an appointment, instead of it stealing someone's credit card by posing as a charity.


20 minutes ago, ofd said:

I have seen the part where this was tried out and the system failed.

That's nice. Interesting, but only just nice.

How about mentioning the part where the 'staff' treated this non-person as a real potential customer?


Hi @ofd

9 hours ago, barn said:

How about mentioning the part where the 'staff' treated this non-person as a real potential customer? 

 

54 minutes ago, ofd said:

How else would they treat the AI? 

Sorry, I can't really "get" why you would be asking that. Care to elaborate?
