The Far-Reaching Effects of AI - Mimicking the Human Voice

In summary, the article profiles the startup Lyrebird, whose AI can mimic your voice, or any voice, given only a snippet of spoken words. This technology has the potential to be used for bad purposes, but there are ways to prevent that from happening.
  • #3
Yes, although folks have done social engineering like this before with a not-so-perfect voice and gotten away with it.

In one case, years ago, a crafty lawyer created a fake company and forged a letter to steal a domain name from another guy, stating that he was an employee of the victim's company and that ownership was being transferred to a new company. The internet registrar did it, no questions asked, and it took several years and a long court fight to get the domain back, and many more years after that to get paid for the loss. It was done through official-looking letters rather than a fake voice, but you get the idea of how this could be used. (See the case of Kremen v. Cohen and the fight for an **redacted** domain name.)
 
  • #4
Presumably you could create a video of anybody saying anything you like, and it would be difficult to determine that it was fake. Imagine David Muir (ABC) breaking in and announcing "live" on site an alien invasion (H. G. Wells's "The War of the Worlds"). What will we be able to believe?
 
  • #5
Videos can be analyzed and debunked due to various artifacts found in them. Scientific American once posted an article about photo debunking in which they looked at how shadows were cast; in many fake photos there was a clear discrepancy not obvious to the casual observer. I figure a similar scheme is used to debunk fake videos.

https://www.scientificamerican.com/article/5-ways-to-spot-a-fake/
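As a toy illustration of the shadow-consistency check described in that article, one can compare the direction from each object's base to its shadow tip; under a single distant light source (like the sun) these directions should roughly agree across the whole photo. All the coordinates and the 10-degree tolerance below are invented for illustration:

```python
import math
from statistics import median

# Hypothetical (object_base, shadow_tip) pixel coordinates measured in an image.
observations = [
    ((120, 340), (180, 400)),   # shadow direction ~45 deg
    ((400, 310), (462, 372)),   # ~45 deg
    ((240, 500), (301, 559)),   # ~44 deg
    ((650, 355), (590, 420)),   # ~133 deg -- inconsistent, possibly pasted in
]

def shadow_direction(base, tip):
    """Direction in degrees from an object's base to the tip of its shadow."""
    return math.degrees(math.atan2(tip[1] - base[1], tip[0] - base[0]))

angles = [shadow_direction(b, t) for b, t in observations]
reference = median(angles)  # robust against a single doctored object

for (base, _), angle in zip(observations, angles):
    verdict = "SUSPECT" if abs(angle - reference) > 10 else "consistent"
    print(f"object at {base}: shadow at {angle:6.1f} deg -> {verdict}")
```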

Lack of resolution, though, causes big problems that are hard to debunk easily. There was a video of cars mysteriously jumping around on a roadway as if selective anti-gravity were at work. The resolution was too low to show the downed power-line cable, dragged by a street sweeper, that flipped the cars as it became taut.

https://www.cnn.com/videos/world/2015/11/30/china-levitating-cars-mystery-solved-orig-sdg.cnn
 
  • #6
That was 10 years ago. Maybe things have gotten a little more sophisticated.

Check this out:
 
  • #7
This brings up the dilemma of group specialization, where the folks who built the technology kick the ball down the line when it comes to the moral issue of using it. It's similar to gun makers who don't feel morally responsible for how their guns are used, or gun shops that sell the guns... each group refuses to take responsibility, so no one does, and the technology is used for bad things.

One inventor I knew loved to invent things he hated. Why? Because then he could patent them and prevent them from being made, at least for a while.

Perhaps we need something like that for technology.
 
  • #8
jedishrfu said:
Videos can be analyzed and debunked due to various artifacts found in them.

I heard that discussed on NPR. The expert being interviewed said that the problem is asymmetric warfare: one can create a fake video in an hour, but it takes 40 hours of skilled labor to debunk it. In addition, who funds the debunkers, and how are their conclusions disseminated?

But I see nothing new here. New technology has always been used for both good and bad, and it always will be. What else would you expect?
 
  • #10
jedishrfu said:
This brings up the dilemma of group specialization, where the folks who built the technology kick the ball down the line when it comes to the moral issue of using it. It's similar to gun makers who don't feel morally responsible for how their guns are used, or gun shops that sell the guns... each group refuses to take responsibility, so no one does, and the technology is used for bad things.

One inventor I knew loved to invent things he hated. Why? Because then he could patent them and prevent them from being made, at least for a while.

Perhaps we need something like that for technology.
What technology is exempt from bad use? Should we vilify farmers and grocers for feeding bad guys? Granted, some technologies are more readily adapted to harmful and wrongful use than others; however, the responsibility for wrongdoing lies primarily with the doer of the wrong. I think that the more potentially harmful a technology is, the more its purveyors should be called upon to be diligent that they do not knowingly provide it in aid of a harmful purpose; but it's no easy task to determine, and to put into practice, exactly the right measures by which that duty should be carried out.
 
  • #11
sysprog said:
What technology is exempt from bad use? Should we vilify farmers and grocers for feeding bad guys? Granted, some technologies are more readily adapted to harmful and wrongful use than others; however, the responsibility for wrongdoing lies primarily with the doer of the wrong. I think that the more potentially harmful a technology is, the more its purveyors should be called upon to be diligent that they do not knowingly provide it in aid of a harmful purpose; but it's no easy task to determine, and to put into practice, exactly the right measures by which that duty should be carried out.

Sure. In the end, all such questions reduce to judgments of good and evil, which are based on values, which are not universal, and to the question of to what degree the majority can impose its values on the minority. Blah blah. We loosely call it politics, or maybe religion. We discuss such things in the GD forum on PF, but not in the technical forums.
 
  • #12
anorlunda said:
Sure. In the end, all such questions reduce to judgments of good and evil, which are based on values, which are not universal, and to the question of to what degree the majority can impose its values on the minority. Blah blah. We loosely call it politics, or maybe religion. We discuss such things in the GD forum on PF, but not in the technical forums.
In this instance, a Staff member introduced the terms "moral issue", "morally responsible", "responsibility" and "bad things" into the topic; I responded accordingly.
 
  • #13
sysprog said:
In this instance, a Staff member introduced the terms "moral issue", "morally responsible", "responsibility" and "bad things" into the topic; I responded accordingly.

No problem. You did nothing wrong. But if this thread continues to go in that direction, I'll move it to General Discussion.
 
  • #14
anorlunda said:
No problem. You did nothing wrong. But if this thread continues to go in that direction, I'll move it to General Discussion.
Fair enough, Sir; the following, I hope, is back on topic:

This problem of fake human phone callers being used fraudulently seems to me similar in some ways to the problem of one-way authentication/validation/verification where two-way would be appropriate. Websites can use captchas to ensure the user is human and not a bot; humans should be able to do something similar to verify a caller.

An example of the one-way-only problem is the fake ATM that collects the mag-stripe data from a would-be user's card, prompts for the PIN, then displays something like "EID6049I LINK ERROR 02A3 EID6051I LOCAL SYSTEM RESET 012B" and re-displays the welcome screen. The fake ATM collects card data and PINs; the operator then removes the machine and uses the data to make counterfeit cards, which he can use, along with the PINs, to steal money.

A remedy for this would be a protocol by which your name is not encoded on the card; instead, the welcome screen displays your name by consulting the bank's records. If it doesn't display your name, you call the hotline number on the card and report it instead of entering your PIN.
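A toy sketch of that remedy, with the card number and name invented for illustration: only a terminal with a live link to the bank's records can greet the cardholder by name, so a generic greeting is the cue to walk away:

```python
# The cardholder's name is NOT on the card, so a skimmer cannot produce it.
BANK_RECORDS = {"4000123412341234": "A. N. CARDHOLDER"}  # hypothetical data

def genuine_atm_welcome(card_number: str) -> str:
    """A real terminal queries the bank before asking for a PIN."""
    name = BANK_RECORDS.get(card_number)
    if name is None:
        return "CARD NOT RECOGNIZED"
    return f"WELCOME, {name} -- PLEASE ENTER PIN"

def fake_atm_welcome(card_number: str) -> str:
    """A skimmer has no bank link, so it cannot know the cardholder's name."""
    return "WELCOME -- PLEASE ENTER PIN"

# The user's rule: no name on the screen means no PIN gets entered.
print(genuine_atm_welcome("4000123412341234"))  # greets you by name
print(fake_atm_welcome("4000123412341234"))     # generic greeting: walk away
```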

Similarly, to prevent machines from fraudulently pretending to be human, we could use ringback protocols in the reverse direction. The original use of ringback protocols was for a computer user connecting via modem from an offsite location: the user would call a number for the switch, the switch would present an authentication dialog, and then the switch would ring the authenticated user back, whereupon the user would repeat the authentication dialog, this time with the switch having made the outgoing call.
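Here's a toy walk-through of that ringback flow; the phone number, shared secret, and switch stub below are all invented for illustration:

```python
# Registered callback number -> shared secret (hypothetical data).
REGISTERED = {"555-0100": "correct horse battery"}

def place_outbound_call(number: str) -> bool:
    """Stub for the switch dialing out; assume the real subscriber answers."""
    return number in REGISTERED

def ringback_authenticate(claimed_number: str, inbound_secret: str,
                          callback_secret: str) -> bool:
    secret = REGISTERED.get(claimed_number)
    # Leg 1: authenticate the inbound call.
    if secret is None or inbound_secret != secret:
        return False
    # Leg 2: hang up and dial the number on file. An attacker who merely
    # spoofed caller ID never receives this call, so the repeat challenge fails.
    if not place_outbound_call(claimed_number):
        return False
    return callback_secret == secret

print(ringback_authenticate("555-0100", "correct horse battery",
                            "correct horse battery"))  # True: both legs pass
print(ringback_authenticate("555-0199", "correct horse battery",
                            "correct horse battery"))  # False: not on file
```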

A reverse example: if I get a call, ostensibly from a person who says he's an FBI field agent in Chicago, I can ask him which field office's published number I can call him back at, and the switchboard operator there can route the callback to him. That's two-way authentication: the FBI knows it's me because the agent called my listed number, and I know it's the FBI because I called back and got the same agent, who acknowledged having just called me.

That might seem a bit much, but before you give out your credit card numbers over the phone, you should at least be able to ensure that the caller is an authorized representative of the entity with which you're trying to do business. With bots able to successfully pretend to be human, and the attendant ramp-up in the possible number of phishing calls, we'll have to do something about it. Establishing two-way protocols is a reasonable stop-gap measure; devising human-presentable Turing tests that are very hard for machines to pass and easy for humans is something we may soon have to get used to.
 
  • #15
I see no reason that this will pose any particular type of risk. Nobody uses voice recognition for security anymore, and the threat of an AI being used to swindle someone is no different from someone doing an impression of them. How many of you have gotten calls from Microsoft or the IRS where the guy on the phone was named "Dave" but had a heavy New Delhi accent? We'll adjust our behavior as these technologies progress, and legitimate companies will likely go out of their way to make sure you know who you're talking to: "Hello, I am Cortana. To speak to a live person, please press 0; otherwise, tell me what you're calling about."
 
  • #16
newjerseyrunner said:
I see no reason that this will pose any particular type of risk. Nobody uses voice recognition for security anymore

See #9.
 
  • #17
CWatters said:
See #9.
Dave?
Dave's not here.
 
  • #18
I know I'm late to this party, and it may not actually matter much to the discussion in the thread, but this issue really has very little directly to do with AI. That seems like just a way to scare people into reading the article. It's essentially just high-quality, seamless audio editing and/or synthesis. A big step up from Ferris Bueller's implementation and a slightly smaller step up from Ethan Hunt's. Yes, it opens up new avenues for fraud by forgery, but that's not an AI issue (you just don't have to make a fool of yourself trying to get your mark to say "passport" anymore). On the upside, maybe it will make my GPS audio directions less irritating to listen to.

Just a pet peeve of mine, this constant use of "AI" as a slur.
 

Related to The Far-Reaching Effects of AI - Mimicking the Human Voice

1. How does AI mimic the human voice?

AI uses a technique called text-to-speech synthesis to mimic the human voice: written text is converted into spoken words by analyzing and reproducing the patterns of human speech. Mimicking a specific person, as Lyrebird does, additionally requires training the system on a snippet of that person's recorded speech.
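For contrast, here is a minimal sketch of ordinary (non-cloning) text-to-speech using the pyttsx3 Python library; the speaking rate chosen below is arbitrary:

```python
# Plain offline text-to-speech -- generic synthesis, not the voice cloning
# Lyrebird performs.
import pyttsx3

engine = pyttsx3.init()          # binds to the platform's speech engine
engine.setProperty("rate", 160)  # speaking rate in words per minute
engine.say("This sentence was synthesized, not recorded.")
engine.runAndWait()              # block until the utterance finishes playing
```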

2. Can AI mimic any human voice?

With advancements in AI technology, it is possible for AI to mimic almost any human voice with a high level of accuracy. However, the quality of the mimicry may vary depending on the complexity of the voice and the amount of training the AI has received.

3. What are the potential benefits of AI mimicking the human voice?

One potential benefit is the ability to create more natural and human-like interactions with AI assistants and chatbots, which can improve user experience and make interactions more efficient. Additionally, AI voice mimicking can aid in language translation and in accessibility for those with speech impairments.

4. Are there any ethical concerns with AI mimicking the human voice?

There are ethical concerns surrounding the use of AI mimicking human voices, especially in areas such as fake news and voice fraud. It is important for researchers and developers to consider the potential misuse of this technology and implement safeguards to prevent harm.

5. How can AI mimicking the human voice impact the job market?

AI voice mimicking has the potential to automate jobs that rely on the human voice, such as customer service representatives or voice actors. However, it can also create new job opportunities in areas such as AI development and voice-synthesis training.
