AI-driven cybercrime to reshape online trust by 2026
Cyber security company Gen has warned that 2026 will mark a turning point in online risk, as artificial intelligence reshapes how people judge identity, trust, and truth on the internet.
The company's Gen Threat Labs unit has released a set of forecasts that describe 2026 as the year the internet outpaces human intuition. The researchers expect AI tools to enable highly convincing digital deception, synthetic identities, and new forms of emotional manipulation.
Gen said criminals are moving from reacting to new technologies to actively steering how they develop.
"Cybercriminals are no longer adapting to technology - they're directing it," said Siggi Stefnisson, Cyber Safety CTO at Gen. "From identity to emotion to the browser itself, every corner of the internet is becoming a contested space. Our goal is to prepare people for the reality ahead and empower them with the habits and tools that can keep them safe."
The predictions outline five major shifts that the company expects will define digital risk next year. The themes cover identity, misinformation, scams, fraud, and the web browser.
The human test
Gen Threat Labs expects AI-driven impersonation to move from static content into live interactions. The team said it is now possible to clone a person's face, voice, and writing style in seconds.
It forecasts that synthetic personas will appear across daily life. These could mimic friends, colleagues, influencers, or romantic partners with a level of realism that is hard to distinguish from genuine contacts.
Deepfake technology is expected to move into real-time calls and video conversations, turning routine trust decisions into points of vulnerability.
The researchers said people will increasingly need to verify the identity of the person on the other end of a message or call through a separate channel. They expect human verification habits to become a standard safety practice.
Distorted information
The second trend centres on how AI will change the information that circulates online. Gen Threat Labs expects an AI feedback loop in which machine-generated content is repeatedly scraped, summarised, and republished by other AI systems.
The team said this process will erode accuracy and introduce large volumes of synthetic material into search results and social feeds. The prediction describes an internet where genuine information and AI-generated text blend into a single stream.
Tech and media organisations are expected to roll out authenticity markers and content-signing frameworks in response. These systems would signal when a piece of content comes from a verified source.
Gen expects adoption of such measures to trail the spread of AI-generated misinformation. It forecasts that users will need to cross-check important claims with multiple independent sources and rely more on official sites.
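The sign-then-verify idea behind such frameworks can be sketched in a few lines. Real provenance systems, such as the C2PA standard, bind content to a publisher's certificate using public-key signatures; the simplified sketch below uses a shared HMAC key only to show the principle, and the key and article text are invented for illustration:

```python
import hashlib
import hmac

# Illustrative sketch only: real content-signing frameworks use public-key
# certificates, not a shared secret. This HMAC version shows the basic
# sign-then-verify flow a reader-facing authenticity marker depends on.

SECRET_KEY = b"publisher-signing-key"  # hypothetical key for this sketch

def sign_content(content: bytes, key: bytes = SECRET_KEY) -> str:
    """Produce a hex signature binding the content to the key holder."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str, key: bytes = SECRET_KEY) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

article = b"Officials confirmed the report on Tuesday."
sig = sign_content(article)

print(verify_content(article, sig))                 # True: untampered
print(verify_content(article + b" (edited)", sig))  # False: content modified
```

The key property for readers is the second call: any change to the signed content, however small, invalidates the signature.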
Scams become emotional
The third forecast describes a shift in online fraud. Gen Threat Labs expects scam operations to move away from mass, generic messages toward what it calls emotional engineering.
Fraudsters are expected to use AI tools that run real-time sentiment analysis on conversations. These systems can detect signs of fear, uncertainty, guilt, or excitement in a person's responses.
The prediction states that scammers will use these signals to adapt their messages instantly. The scams would then mirror empathy, urgency, or reassurance in a way that feels personal.
Gen said this kind of "empathetic scam" will rely less on technical tricks and more on psychological pressure. The company said individuals will need to pay closer attention to sudden emotional shifts during interactions, not just spelling mistakes or unusual links.
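The kind of signal extraction the researchers describe can be illustrated with a deliberately crude sketch. Real operations would use trained language models running in real time; the keyword lists and messages below are invented for illustration, and show only how a reply can be mapped to emotional cues that a script could then play to:

```python
import re

# Toy illustration of sentiment-cue extraction. The cue lists are
# hypothetical; a real system would classify free text with a model
# rather than keyword matching.
EMOTION_CUES = {
    "fear":        {"worried", "scared", "afraid", "panic"},
    "uncertainty": {"unsure", "confused", "maybe"},
    "urgency":     {"immediately", "asap", "hurry", "deadline"},
    "excitement":  {"amazing", "thrilled", "excited"},
}

def detect_emotions(message: str) -> list[str]:
    """Return the emotional signals present in a single reply."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    return [emotion for emotion, cues in EMOTION_CUES.items()
            if words & cues]

print(detect_emotions("I'm worried and unsure about this"))
# -> ['fear', 'uncertainty']
```

A scam script that receives `['fear', 'uncertainty']` would pivot to reassurance; one that receives `['excitement']` would press urgency. The mechanism, not the vocabulary, is the point.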
Synthetic identities
The fourth theme focuses on identity fraud. Gen Threat Labs expects AI tools to assemble full identity kits that appear legitimate during standard checks.
These kits could include realistic identity documents, utility bills, selfies, and live video streams. The team said such packages may evade many existing verification controls.
Criminals are expected to use these synthetic identities to secure loans, open accounts, and move across platforms. The prediction warns that these attacks will affect financial services, tax systems, digital wallets, and online service providers.
Gen said static credentials, such as a single ID document or password, will become less reliable signals of a real person. It expects more organisations to combine multiple signals and monitoring tools when they assess identity risk.
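A multi-signal assessment of this sort might be sketched as a weighted score. The signal names and weights below are hypothetical, not any vendor's actual model; they only show why a forged document alone no longer settles the question:

```python
from dataclasses import dataclass

# Hypothetical sketch of multi-signal identity risk scoring.
# Signals and weights are illustrative, not a real product's model.

@dataclass
class IdentitySignals:
    document_check_passed: bool   # static ID document validation
    device_seen_before: bool      # device fingerprint matches history
    behavior_consistent: bool     # typing/navigation matches past sessions
    liveness_check_passed: bool   # live-video challenge, not a replay

def risk_score(s: IdentitySignals) -> float:
    """Combine failed signals into a 0.0 (low) to 1.0 (high) risk score."""
    weights = {
        "document_check_passed": 0.2,   # weakest signal: kits can forge docs
        "device_seen_before":    0.25,
        "behavior_consistent":   0.25,
        "liveness_check_passed": 0.3,
    }
    return round(sum(w for name, w in weights.items()
                     if not getattr(s, name)), 2)

applicant = IdentitySignals(True, False, False, True)
print(risk_score(applicant))  # -> 0.5: documents pass, context signals fail
```

In this framing, a synthetic identity kit that defeats the document check still has to defeat the device, behavioural, and liveness signals to look low-risk.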
Browser under fire
The final prediction places the web browser at the centre of user risk. Gen Threat Labs said attacks already concentrate heavily on browser sessions and will intensify in 2026.
The team expects wider use of AI-generated malvertising, fake retail sites, and deceptive pop-ups. These pages may copy the appearance of banks, retailers, or government agencies.
The researchers said the threat is shifting from traditional file-based malware toward code that runs inside the webpage itself. This change makes infections harder for users to spot because no obvious download takes place.
Session token theft is also expected to increase. This method targets the tokens that keep users logged into sites, which can allow attackers to hijack accounts without needing passwords.
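On the defensive side, the standard mitigations limit what a stolen token is worth. A minimal server-side sketch, using only the Python standard library, shows the cookie attributes involved; the cookie name and lifetime are illustrative:

```python
# Sketch of server-side session cookie hardening. HttpOnly keeps page
# scripts from reading the token, Secure restricts it to HTTPS, and
# SameSite limits cross-site sending -- each narrows the paths a token
# thief can use.
import secrets
from http import cookies

def session_cookie_header() -> str:
    jar = cookies.SimpleCookie()
    jar["session"] = secrets.token_urlsafe(32)  # unguessable session token
    jar["session"]["httponly"] = True           # invisible to in-page JavaScript
    jar["session"]["secure"] = True             # sent over HTTPS only
    jar["session"]["samesite"] = "Strict"       # withheld on cross-site requests
    jar["session"]["max-age"] = 1800            # expire idle sessions quickly
    return jar.output(header="Set-Cookie:")

print(session_cookie_header())
```

None of these flags stops in-page code from acting while the session is live, which is why the researchers point at the page itself as the new battleground; they do make an exfiltrated token harder to obtain and shorter-lived.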
Gen is advising users to rely on passkeys or two-factor authentication for sensitive accounts. It also points to browsers that embed stronger security controls at the design stage.
Gen Threat Labs said it will continue tracking these trends as AI tools spread into everyday products and services, and expects further shifts in online risk during and beyond 2026.