The dangers of voice fraud: We can’t detect what we can’t see
It’s hard to believe that deepfakes have been with us long enough that we don’t even blink at the sound of a new case of identity manipulation. But it hasn’t been so long that we’ve forgotten.
In 2018, a deepfake showing Barack Obama saying words he never uttered set the internet ablaze and prompted concern among U.S. lawmakers. They warned of a future where AI could disrupt elections or spread misinformation.
In 2019, a famous manipulated video of Nancy Pelosi spread like wildfire across social media. The video was subtly altered to make her speech seem slurred and her movements sluggish, implying that she was incapacitated or intoxicated during an official speech.
In 2020, deepfake videos were used to heighten political tension between China and India.
And I won’t even get into the hundreds, if not thousands, of celebrity deepfakes that have circulated online in the last few years, from the pornographic deepfakes targeting Taylor Swift to Mark Zuckerberg’s sinister speech about Facebook’s power.
Yet despite these concerns, a subtler and potentially more deceptive threat looms: voice fraud. At the risk of sounding like a doomer, it could very well prove to be the final nail in the coffin.
The invisible problem
Unlike high-definition video, the typical transmission quality of audio, especially in phone calls, is markedly low.
By now, we are desensitized to low-fidelity audio, from poor signal to background static to distortion, which makes it incredibly difficult to distinguish a genuine anomaly from ordinary noise.
The inherent imperfections of audio provide convenient cover for voice manipulation. A slightly robotic tone or a static-laden voice message can easily be dismissed as a technical glitch rather than an attempt at fraud. This makes voice fraud not only effective but also remarkably insidious.
Imagine receiving a phone call from a loved one’s number telling you they are in trouble and asking for help. The voice might sound a bit off, but you attribute this to the wind or a bad line. The emotional urgency of the call might compel you to act before you think to verify its authenticity. Herein lies the danger: Voice fraud preys on our readiness to ignore minor audio discrepancies, which are commonplace in everyday phone use.
Video, on the other hand, provides visual cues. Small details like hairlines and facial expressions offer clear giveaways that even the most sophisticated fraudsters have yet to slip past the human eye.
On a voice call, those warnings are not available. That’s one reason most mobile operators, including T-Mobile, Verizon and others, make free services available to block — or at least identify and warn of — suspected scam calls.
The urgency to validate anything and everything
One consequence of all of this is that, by default, people will scrutinize the source and provenance of information. That is a great thing.
Society will regain trust in verified institutions. Despite the push to discredit traditional media, people will place even more trust in verified entities such as C-SPAN. By contrast, they may grow more skeptical of social media chatter and of lesser-known outlets and platforms that lack an established reputation.
On a personal level, people will become more guarded about incoming calls from unknown or unexpected numbers. The old “I’m just borrowing a friend’s phone” excuse will carry much less weight as the risk of voice fraud makes us wary of any unverified claim. The same wariness will extend to caller ID and to claims of a trusted mutual connection. As a result, individuals might lean more toward using and trusting services that provide secure, encrypted voice communications, where the identity of each party can be unequivocally confirmed.
And tech will get better, and hopefully help. Verification technologies and practices are set to become significantly more advanced. Techniques such as multi-factor authentication (MFA) for voice calls and the use of blockchain to verify the origins of digital communications will become standard. Similarly, practices like verbal passcodes or callback verification could become routine, especially in scenarios involving sensitive information or transactions.
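To make the callback and passcode idea concrete, here is a minimal sketch in Python of a single-use verbal passcode check: the institution issues a short code through a channel the customer already trusts, such as its verified app, then asks the caller to read it back before anything sensitive is discussed. Everything here, from the VerbalPasscodeVerifier class to its issue and verify methods, is a hypothetical illustration under those assumptions, not a production implementation, which would also need rate limiting, audit logging and real telephony integration.

```python
# A minimal sketch of single-use verbal passcodes for callback verification.
# All names (VerbalPasscodeVerifier, issue, verify) are hypothetical.
import hmac
import secrets
import time


class VerbalPasscodeVerifier:
    """Issues short-lived, single-use codes spoken aloud to confirm a caller."""

    def __init__(self, ttl_seconds: int = 120):
        self.ttl = ttl_seconds
        # Maps a call identifier to (code, time issued).
        self._pending: dict[str, tuple[str, float]] = {}

    def issue(self, call_id: str) -> str:
        # Six digits is short enough to say clearly over a noisy line.
        code = f"{secrets.randbelow(1_000_000):06d}"
        self._pending[call_id] = (code, time.monotonic())
        return code

    def verify(self, call_id: str, spoken_code: str) -> bool:
        # Single use: the code is consumed on any attempt, right or wrong.
        entry = self._pending.pop(call_id, None)
        if entry is None:
            return False
        code, issued_at = entry
        if time.monotonic() - issued_at > self.ttl:
            return False
        # Constant-time comparison avoids leaking digits via timing.
        return hmac.compare_digest(code, spoken_code.strip())


# Usage: send the code out of band, then ask the caller to read it back.
verifier = VerbalPasscodeVerifier()
code = verifier.issue(call_id="call-42")
print(verifier.verify("call-42", code))   # True
print(verifier.verify("call-42", code))   # False: already consumed
```

The single-use and expiry rules are the point of the design: a cloned voice can repeat a code it has overheard, but it cannot produce a fresh one delivered moments earlier over a separate, verified channel.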
MFA isn’t just technology
But MFA isn’t just about technology. Effectively combating voice fraud requires a combination of education, caution, business practices, technology and government regulation.
For people: It’s essential that you exercise extra caution. Understand that the voices of your loved ones may have already been captured and potentially cloned. Pay attention; question; listen.
For organizations: It’s incumbent upon you to create reliable methods for consumers to verify that they are communicating with legitimate representatives. As a matter of principle, you can’t pass the buck; this applies to any business or media platform consumers interact with. And in certain jurisdictions, a financial institution may be at least partially liable, as a legal matter, for fraud perpetrated on customer accounts.
For the government: Continue to make it easier for tech companies to innovate, and continue to enact legislation that protects people’s right to safety online.
It will take a village, but it’s possible.
Rick Song is CEO of Persona.