AI Tricking Humans
- franciscorego7
- Apr 25
- 3 min read
History of AI Fooling Humans
In the Western world, the earliest known case of fraud was recorded in 300 BC.
Since then, scam artists, fraudsters and other bad actors have taken advantage of humans.
As our technology has advanced, so have our malicious schemes.
Now, artificial intelligence (AI) has joined the long list of human tricksters and con artists.
The use of AI and machine learning (ML) technologies is taking phishing and other cyber fraud to a whole new level, making it increasingly difficult for companies to detect and identify threats.
Algorithms are learning to exploit their victims at a larger scale, and more efficiently, than human cybercriminals can.
Top Phishing Figures
Using deep learning language models like GPT-3, hackers can easily launch mass spear phishing campaigns, attacks that involve sending personalized messages to specific targets.
At the Black Hat and Defcon 2021 security conferences in Las Vegas, a team from Singapore’s Government Technology Agency presented an experiment in which their colleagues received two sets of phishing emails, one written by humans and the other by an algorithm.
The AI won the contest, garnering more clicks on links in its emails than the human-written messages did.
With the proliferation of AI platforms, hackers can leverage AI as a low-cost service to target large groups of victims and personalize their emails down to minute details to make them appear as real as possible.
AI’s Long History of Scams
While AI-as-a-service attacks are a relatively new and still rare phenomenon, machines have been known to fool humans for decades.
In the 1960s, Joseph Weizenbaum wrote Eliza, the first program that could communicate in natural language while pretending to be a psychotherapist. Weizenbaum was surprised that his MIT lab staff treated what is now known as a “chatbot” as a real doctor.
They shared their personal information with Eliza and were startled to learn that Weizenbaum could examine the logs of their conversations with the “doctor.” Weizenbaum, in turn, was surprised that such a simple program could so easily trick users into revealing their personal data.
In the decades since, humans have not learned to distrust computers and have become increasingly vulnerable.
Recently, an engineer at Google surprised management and colleagues by announcing that the company’s AI chatbot LaMDA (Language Model for Dialogue Applications) had come to life. Blake Lemoine said he believed LaMDA was a sentient, human-like being.
LaMDA is able to mimic human conversation so well that even experienced engineers are fooled by it. These conversational models base their responses on patterns learned from social media, previous interactions, and human psychology; that in no way implies they understand the meaning of words or have feelings.
But this level of mimicry of human conversation and social behavior allows hackers to leverage tools similar to LaMDA to mine social media data and send or post spear phishing messages.
The Rise of Deepfake Social Engineering
AI is not just capable of mimicking conversations.
Both voice style transfer, also known as voice conversion (a digital clone of a target’s voice), and deepfakes (fake images or videos of a person or event) are very real cyber threats.
In 2019, in what many experts believe was the first documented case of such an attack, fraudsters used voice conversion to impersonate a CEO and demand an urgent transfer of funds to their account.
A year later, in 2020, another group of fraudsters used the technology to imitate a client’s voice and convince a bank manager to transfer $35 million to cover an “acquisition.”
Such AI-powered targeted social engineering attacks have long been anticipated, yet most companies are unprepared: in an Attestiv survey, 82% of business leaders acknowledged that deepfakes pose a risk, but fewer than 30% had taken any steps to minimize them.
Outsmarting AI Fraudsters
With so many different cyberattacks to watch out for 24×7, it’s no wonder companies are overwhelmed.
So far, AI-designed attacks have proven both effective and well targeted. AI technology continues to expand the cybersecurity threat landscape and has the potential to become too “smart” for humans to understand. And while history shows that humans can be easily fooled, AI tools can also help us.
Deepfakes can be combated with algorithms tuned to spot them, and advanced analytics tools can help as well.
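As a rough illustration of what such a detection algorithm might look like, the sketch below fine-tunes a pretrained image classifier to label face crops as real or fake. The frames/ directory layout, the choice of ResNet-18, and the training settings are all assumptions for the sake of example, not a description of any vendor's actual detector.

```python
# Minimal sketch: fine-tune a pretrained CNN as a binary "real vs. fake" frame classifier.
# Assumes a hypothetical directory ./frames/{fake,real}/ containing face crops from videos.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("frames", transform=transform)  # classes: fake=0, real=1
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the head: two classes, real vs. fake

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)  # learn visual artifacts that betray fakes
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```

In practice, detectors of this kind are only as good as the fakes they were trained on, which is why they must be retrained as generation techniques evolve.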
How Bitdefender Helps You with Machine Learning
Machine learning in the security industry has also proven to be very effective in finding new or unknown malware, based on the features that new malware has in common with previously known threats.
However, we need to train machine learning algorithms with a dataset consisting of known malware samples.
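As a rough sketch of that idea, the example below trains a classifier on static features extracted from labeled samples. The samples.csv file, its feature columns, and the random forest model are hypothetical placeholders chosen for illustration, not Bitdefender's actual pipeline.

```python
# Minimal sketch: learn which feature patterns known malware shares, then score new files.
# "samples.csv" and its columns (file size, entropy, imported-API count, label) are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("samples.csv")          # one row per file, label: 0 = benign, 1 = malware
X = df.drop(columns=["label"])           # static features extracted from each file
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)                # learn the traits known malware has in common

# Evaluate how well the model flags unseen samples based on those shared traits.
print(classification_report(y_test, clf.predict(X_test)))
```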
Bitdefender has been continuously developing and training machine learning algorithms for cybersecurity since 2008 and has been issued more than 30 patents for its machine learning technology. That deep knowledge of malware behavior, combined with machine learning algorithms, equips security researchers to make a world of difference in offering the best protection against malware.
