Cybersecurity’s A.I. Problem Isn’t Technology. It’s Human.
Artificial intelligence is not new to cybersecurity. The sector is one of A.I.’s earliest adopters. For years, defensive cybersecurity has relied on machine learning to identify anomalies, detect patterns and respond to threats with speed and precision beyond any human capability. What is new is the speed, scale and accessibility of A.I., and the way it is reshaping not just our defenses, but the very nature of cyber risk itself.
On April 7, Anthropic sent shockwaves through the industry when it announced that its latest model, Claude Mythos, was too powerful to release publicly because of its exceptional ability to identify and exploit software vulnerabilities. The company instead opted to provide controlled access to select businesses, including JPMorgan, Apple, Nvidia and Google, to strengthen their cybersecurity defenses. The move underscored a growing reality: the same systems designed to protect can just as easily be weaponized.
The uncomfortable truth is that while A.I. is accelerating both offensive and defensive capabilities, threat actors are proving equally, if not more, innovative. Technology alone won’t bridge the capability gap. Leadership, talent development and training must keep pace with this revolutionary shift in technology.
The new offense: faster, smarter and more personal
For decades, cyberattacks followed a familiar pattern. Phishing emails were often clumsy, riddled with grammatical errors and relatively easy to spot. Think of the infamous “Nigerian prince” scams. That era is over.
A.I. has fundamentally changed the precision and economics of cybercrime. It allows bad actors to operate with unprecedented speed and sophistication. Cyberattacks have always been, in part, a numbers game—like trying every door and window in a neighborhood until one is unlocked. A.I. simply allows attackers to try exponentially more doors at near light speed. It also acts as a force multiplier for low-skilled actors, simplifying the creation of malware and phishing campaigns and enabling ransomware-as-a-service business models.
At the same time, cyberattacks are becoming far more sophisticated. Today’s phishing campaigns are no longer generic; they are highly personalized. A.I. tools scrape vast amounts of publicly available data from social media profiles, company websites and professional bios, and use that information to craft messages that convincingly mimic colleagues, family members or trusted institutions. The result is a new class of cyber threat: one that is more frequent, more scalable and far more believable.
Even small details matter. Recently, my husband received what appeared to be a legitimate fraud alert from his bank. Only upon closer inspection did he realize it used American spelling in a message from a Canadian bank. That subtle inconsistency was the only clue that it was a scam. He checked directly with the bank, and the suspicion was confirmed. But in an environment of increasingly polished deception, those clues are disappearing.
The friction problem
That near-miss points to another problem. Organizations have spent years optimizing for seamless, frictionless digital experiences. We are conditioned to click, approve and move quickly. But when it comes to cybersecurity, friction is frequently the safeguard rather than the enemy.
Multi-factor authentication, verification prompts and transaction delays are viewed as inconveniences. In reality, they are intentional pauses designed to interrupt automated or impulsive behavior and help users—both human and machine—verify actions. A.I.-driven threats exploit exactly what frictionless systems enable: speed without reflection.
As cyber risk evolves, individuals and organizations need to rethink their relationship with convenience. Slowing down, questioning a message, verifying a request and pausing before clicking are some of the most effective defenses we have.
The skills shift happening in real time
While much of the conversation around A.I. in cybersecurity focuses on threats, an equally important transformation is happening within the workforce. A.I. is rapidly being embedded into everyday workflows, changing not only how work is done but also what skills are required to do it.
In cybersecurity, this shift is particularly pronounced. Tasks that were once the domain of entry-level analysts—monitoring alerts, identifying patterns and triaging incidents—are increasingly being automated by A.I.-powered tools. On the surface, this is a positive development. It allows organizations to operate more efficiently and frees up talent to focus on higher-value work.
But it also raises a critical question: if foundational tasks are being automated, how do future professionals build the baseline skills needed to advance? Does the baseline itself need to shift? Do any of these skills need to be preserved? The question grows more urgent as organizations deploy ever more powerful systems, including those capable of identifying vulnerabilities at scale, as the Claude Mythos announcement highlighted. Cybersecurity offers a particularly clear lens into the challenge: when entire layers of entry-level experience begin to disappear, traditional career pathways break down.
Without deliberate intervention, we risk creating a generation of professionals who are fluent in tools but lack the foundational knowledge to use them critically.
The illusion of intelligence
Compounding this challenge is a growing tendency to overestimate what A.I. actually is. Much has been written about A.I. hallucinations, but to really understand this, we need to consider how these systems work. Today’s models are powerful, but they are not human. They are not reasoning the way we do; they are identifying patterns in data and predicting the most likely sequence of outputs based on what they have seen before.
They can assist, accelerate and augment, but they cannot replace human judgment. While A.I. systems may have all of the data related to a given situation, qualities such as context, empathy and nuanced understanding remain distinctly human. And yet, many organizations are deploying A.I. tools at scale without adequately preparing their workforce to use them critically.
This brings into sharp focus the issues surrounding safe and secure use of this technology. A.I. is not infallible. There are many examples of bias, model drift and deliberate prompt manipulation producing unintended, sometimes harmful outcomes. The result is an acute and urgent gap: we are more reliant on A.I. than ever, but not necessarily more informed about how to use it safely, securely and effectively.
A leadership gap in the making
These shifts point to a broader issue that has been part of the industry discourse for over a decade: cybersecurity is not just a technical function. It is a strategic, organizational and human challenge. And that requires a kind of leadership that goes beyond technical expertise alone.
While this is important, it is no longer sufficient on its own. Leaders must aim to understand A.I., systems thinking, data governance, human behavior, regulatory environments and organizational change simultaneously. They must be able to ask not only “Can we implement this technology?” but “Should we, and how do we do so securely and responsibly?”
At the same time, they must navigate a workforce that is both energized and overwhelmed. Employees are being introduced to a growing number of A.I. tools, often without clear guidance, while also grappling with fears of job displacement. Tool fatigue is real, and without proper support, it can lead to misuse or disengagement.
In many organizations, these leadership capabilities are still emerging. The result is a widening gap between the pace of technological change and the preparedness of those tasked with leading that change.
Rethinking talent for an A.I.-driven future
Addressing this gap will require a fundamental rethink of how we develop talent. First, we must embrace the idea of lifelong learning, not as a slogan but as a necessity. The pace of change in A.I. and cybersecurity means that static skill sets quickly become obsolete. Continuous education, upskilling and reskilling must be embedded into the fabric of organizations.
Second, we need to create new pathways for skill development. If traditional entry-level roles are evolving or disappearing, we must define a new baseline for foundational skills and design alternative ways for professionals to gain the experience they need. This could include learning in simulated environments, coaching and rotational programs, and work-integrated learning models that combine theory with practice.
Third, organizations must invest in enabling their people, not just deploying tools. This means providing clear policies, guidance, training and governance frameworks that help employees use A.I. securely, responsibly and effectively.
Finally, collaboration across sectors is critical. The challenges posed by A.I. and the current and emerging cyber risks do not neatly fit into any one domain. Partnerships between industry, government and academia will be essential to building the talent pipelines and knowledge ecosystems required to keep pace.
The path forward
None of this is an argument against A.I. Far from it. A.I. holds enormous potential to strengthen cybersecurity, improve efficiency and unlock new capabilities. But like any powerful tool, its impact depends on how we use it.
The organizations that succeed in this new landscape will not be those that adopt A.I. the fastest, but those that adopt it the smartest, balancing innovation with oversight, speed with intentionality and automation with human judgment.
Cybersecurity has always been about staying one step ahead of the threat. In the age of A.I., that step is no longer just technological; it’s human. Closing this gap will be crucial to preserving our future digital resilience.
Judith Borts is Senior Director at Rogers Cybersecure Catalyst—Toronto Metropolitan University’s national center for training, innovation and collaboration in cybersecurity.