Submitted by Friedman Nemecek
We face a new day that unfortunately brings new technological threats. Over my more than twenty-five years as a criminal defense lawyer, I have routinely been presented with cutting-edge schemes employed by those seeking to gain at the expense of innocent people. The effort and creativity of those with nefarious intent never cease to amaze me. This article highlights a very real threat and suggests an old-school solution.
The Dark Side of AI
In general, generative artificial intelligence refers to deep-learning models that can generate high-quality text, images, and other content based on the data on which they were trained. Generative AI systems learn underlying patterns and structures from their training data, which enables them to create new content. One subset of this technology is known as the audio deepfake, or voice cloning. These burgeoning tools emulate recognizable voices, allowing fraud to be perpetrated on unsuspecting targets, including family members. As a result, people must now be wary of a call from a familiar-sounding family member requesting an immediate transfer of money or access to a needed passcode.
The prevalence of artificial intelligence platforms adept at mirroring the human voice is rapidly expanding. This trend is concerning because criminals can simply take content that a person posts on social media and replicate that person's voice in an entirely different scenario. For instance, a video containing a son's or daughter's voice, posted to any popular social media site, can be analyzed, processed, and regenerated into an entirely different presentation. The resulting artificially generated voice is virtually indistinguishable from the original speaker. Meanwhile, reliable security tools and preventive applications remain in the development phase.
You Can Expect To Be a Victim of AI Technology and Cyber Crime
It is an undeniable fact that all people will, at some point, be directly or indirectly affected by some type of cybercrime. Even with great diligence in protecting your data and online identity, perpetrators are always pushing new boundaries. Time works in their favor as they craft new crimes and modify traditional criminal acts. People should be assured that both the public and private sectors are working tirelessly to minimize the threat of falling victim to deepfakes and voice cloning. A basic Google search yields countless companies and entities offering prevention tips and selling products intended to assist in the fight against this new wave of crime. These measures, however, can provide only so much comfort.
What Can You Do?
To reduce the risk of being personally victimized, I strongly recommend that families identify a unique password that only they will know. So that it cannot be discovered through a third-party intrusion (hack), do not store the password online or share it with each other electronically. Each family member should know that any substantial request made by telephone or voice message (e.g., for money, a combination, or a passcode) must be accompanied by the unique password. If it is not, immediately contact the supposed requesting family member to confirm the authenticity of the initial request. It is this time-tested, basic precaution that should give you the greatest confidence in combating the eager criminal.
Ian Friedman is the Managing Partner of Friedman Nemecek Long & Grant LLC, a criminal defense law firm based in Cleveland and Columbus. As a professor, he teaches Cybercrime at the Cleveland State College of Law. For more information or to consult with legal counsel, please visit www.fanlegal.com or call 216-928-7700.