A few months ago, my mom was beaming as she showed us a photo that had popped up on her phone: two polar bears, mid-embrace, smiling against a glowing Arctic sunset.
"Isn't this wonderful?" she said, completely delighted.
Without missing a beat, my daughter-in-law gently replied, "Oh Grandma, that's AI."
The moment passed with a chuckle and a shrug. But it planted a seed.
A few weeks later, my mom asked me, genuinely and with a hint of concern:
"Son, how do I know what's real with all of this AI?"
I paused, then said: "The same way you've always responded to things like this: with skepticism. Welcome to my world. Trust but verify; nothing should be taken at face value."
As a CISO, I get asked about AI constantly, especially by early-stage security founders and startup teams. They want to know how it's going to change the game: Will it transform threat detection? Rewrite red teaming? Automate defenses?
All valid questions. But more often than not, I steer the conversation toward simpler concepts. I'm a strong believer that we often think we've invented something new when, in reality, it almost always maps back to basic, rudimentary human behaviors.
Take Jessica Clark, for example. In a live demonstration at a security conference years ago, she talked her way into a reporter's mobile phone account, not with advanced tools or hacking knowledge, but with empathy, urgency, and a convincing recording of a crying baby in the background.
Her motivation? In the moment, it was simple: to prove that someone armed with nothing but a few emotional cues and a believable script could bypass a telco's account protections. The deeper lesson was more profound: it revealed a blind spot in how much implicit trust we place in customer service channels and emotional appeals.
It's a reminder that the real vulnerabilities aren't always in code; they're in culture. And attackers have known this forever.
Just last week, during a routine security assessment, I spun up a fake email account on a free email service provider, posed as an internal user, and successfully triggered both a password reset and a multi-factor authentication reset. No exploits. No generative voice. Just patience and a few well-placed words.
So yes, AI is changing the game, but the game hasn't changed that much.
Because at its core, social engineering is ancient. It's the same hustle that tricked the Trojans with a wooden horse, conned emperors out of gold, and infiltrated kingdoms long before we ever worried about firewalls and phishing kits. Much of our modern security lexicon, words like Trojan, phishing, spoofing, and exfiltration, echoes centuries-old tactics of deception, disguise, and siege.
And when you look at the attackers themselves, what drives them, what they're after, it turns out we've seen all this before as well.
Technology changes. Motivation doesn't. It's time our industry starts focusing there.
For example, behind every phishing campaign, ransomware strain, or AI-generated impersonation lies a very human urge. Most modern attack motivations map disturbingly well onto the Seven Deadly Sins: timeless reflections of human weakness, now supercharged by AI.
But here's the good news: each of these motives can be understood, and countered.
Greed is the most familiar sin in cybersecurity, profit at any cost. These attackers are after one thing: money. Whether it's through wire fraud, ransomware payouts, stolen banking credentials, or hijacked crypto wallets, their play is simple. To counter this, organizations must implement layered financial controls, out-of-band verifications, and real-time anomaly detection that flags suspicious behavior before funds move.
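As a minimal sketch of what "real-time anomaly detection that flags suspicious behavior before funds move" can look like in practice (the function name, thresholds, and data are illustrative assumptions, not from any particular product), a simple statistical check can hold a wire transfer that deviates sharply from a payee's history:

```python
from statistics import mean, stdev

def flags_transfer(amount, history, new_payee, z_threshold=3.0):
    """Flag a wire transfer for out-of-band verification.

    Flags if the payee is new, or if the amount sits far outside
    the historical distribution of payments to that payee.
    """
    if new_payee:
        return True  # first payment to this account: always verify out-of-band
    if len(history) < 2:
        return True  # not enough history to model "normal" for this payee
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu  # identical past payments: flag any deviation
    return abs(amount - mu) / sigma > z_threshold  # simple z-score test

# A $250k wire to a payee who normally receives ~$5k gets held for review;
# a routine $5,050 payment sails through.
print(flags_transfer(250_000, [5_000, 5_200, 4_900, 5_100], new_payee=False))  # True
print(flags_transfer(5_050, [5_000, 5_200, 4_900, 5_100], new_payee=False))   # False
```

Real fraud controls are far richer (velocity checks, device fingerprints, beneficiary-change alerts), but the principle is the same: make the machine notice what a rushed human won't.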
Pride drives those who don't necessarily want your money; they want your headlines. These attackers seek notoriety, whether by defacing websites, dropping zero-day proof-of-concepts, or leaking data for attention. The defense here lies in reducing your public attack surface, proactively patching, and monitoring for signs of tampering. You starve them of oxygen by staying invisible and unshakable.
Envy motivates espionage. It's about taking what others have: source code, proprietary strategies, sensitive business plans. These are the actors who want your competitive advantage without doing the work. They can be stopped with strict data classification, encryption, and data loss prevention controls. Limit access based on business need and continuously audit how sensitive data is used.
Wrath is driven by emotion: disgruntled insiders, hacktivists, or attackers seeking revenge. Their aim is harm: destruction, disruption, or humiliation. You defend against wrath not just with monitoring, but with culture. Build insider threat programs, provide offboarding support, and listen to concerns before they metastasize into malicious behavior.
Lust, in this context, is about exploitation: of identity, reputation, and vulnerability. From sextortion to deepfake impersonation, it's rooted in power and control. Preventing this requires educating employees on manipulation tactics, using AI-detection tools for image and voice misuse, and protecting sensitive personal data across platforms.
Gluttony represents the insatiable appetite for data. These actors hoard everything: credentials, metadata, personal info, not necessarily to use immediately, but because it might be useful later. The solution? Data minimization. Only collect what's needed, encrypt it at rest and in transit, and apply strong data retention policies that keep your risk surface lean.
Sloth is perhaps the most modern of threats: low-effort, high-volume attacks launched with pre-packaged tools and automated scripts. These attackers don't innovate; they scale. Your best defense is to raise the bar: phishing-resistant MFA, rate-limiting, CAPTCHAs, and behavioral anomaly detection can turn their laziness into failure.
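To make the rate-limiting defense concrete, here is a minimal sliding-window limiter sketch (the class name and parameters are hypothetical, chosen for illustration): by capping attempts per account per time window, it turns an automated credential-stuffing script's volume against it.

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Sliding-window limiter: at most `limit` attempts per `window` seconds
    per account, raising the cost of high-volume, low-effort attacks."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(deque)  # account -> recent attempt timestamps

    def allow(self, account, now=None):
        now = time.monotonic() if now is None else now
        q = self.attempts[account]
        while q and now - q[0] > self.window:
            q.popleft()  # drop attempts that have aged out of the window
        if len(q) >= self.limit:
            return False  # throttled: require a CAPTCHA or back-off here
        q.append(now)
        return True

# Three attempts inside a minute pass; the fourth is throttled.
limiter = LoginRateLimiter(limit=3, window=60.0)
print([limiter.allow("alice", now=t) for t in (0, 1, 2, 3)])  # [True, True, True, False]
```

Production systems would persist this state and key it on more than the account name (IP ranges, device fingerprints), but even this much forces the lazy attacker to slow down, which is exactly the point.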
These are idealized attacker profiles, of course; real adversaries often act on several of these motives at once.
Understanding these motivations helps us shift the narrative: attackers aren't superhuman; they're just human, acting on base instincts with newer tools. AI hasn't changed the nature of the game. It just accelerated it.
The real challenge, and opportunity, lies with the builders. Whether you're working on the next AI platform, infrastructure tool, fintech startup, or SaaS product, founders today and tomorrow must design with attacker motivations in mind. The counter to greed is access control. The counter to pride is resilience. The counter to envy is strong data governance. Wrath is mitigated by culture and insider trust models. Lust? Privacy and dignity by design.
Every product has its own version of "crown jewels", whether it's user trust, financial flows, reputational integrity, or sensitive data. And every one of those assets needs to be protected by thoughtful defaults, not bolted-on policies. It's not about whether you use AI. It's about whether you build with a clear understanding of the human tendencies that will inevitably try to undermine what you've built.
So when my mom asks, "How do I know what's real?", the answer, still, is skepticism. Not cynicism, but deliberate doubt. In security, we aren't chasing novelty for novelty's sake. We're iterating, refining old solutions, adapting proven frameworks, and applying new tools like AI to age-old problems.
Because at the heart of every breach is a human impulse we've known for millennia. Greed, pride, envy: they're not new. Neither is deception. The tools may evolve, but the motivations don't. And so we keep building, testing, breaking, and rebuilding, still searching for better ways to protect people not just from the code, but from themselves.
We're not facing new threat models. We're facing new throughput. AI isn't changing the attacker's playbook; it's just letting them run it at scale.
Help me keep my mom's psyche safe and sound!
One Team!
About the Author: Mark Dorsi is a CISO, cybersecurity advisor, and investor helping organizations build secure, scalable systems. He believes that AI hasn't changed the fundamental nature of security threats; it has only accelerated them. By understanding the timeless human motivations behind attacks (the Seven Deadly Sins of cybersecurity), we can build defenses that address both ancient deception tactics and modern AI-powered scale.