
Cybersecurity’s A.I. Problem Isn’t Technology. It’s Human.

Artificial intelligence is not new to cybersecurity. The sector is one of A.I.’s earliest adopters. For years, defensive cybersecurity has relied on machine learning to identify anomalies, detect patterns and respond to threats with speed and precision beyond any human capability. What is new is the speed, scale and accessibility of A.I., and the way it is reshaping not just our defenses, but the very nature of cyber risk itself.
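To make the idea of machine-driven anomaly detection concrete, here is a deliberately toy sketch, not any vendor's product: learn a statistical baseline from event counts, then flag values that deviate sharply from it. The function name, data and threshold are all invented for illustration.

```python
import statistics

def flag_anomalies(counts, threshold=2.5):
    """Flag indices whose value sits more than `threshold` standard
    deviations from the mean -- a toy stand-in for the baseline-and-outlier
    pattern long used in defensive monitoring."""
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# Hypothetical hourly login counts for one account; the spike is suspicious.
hourly_logins = [4, 5, 3, 6, 4, 120, 5, 4]
print(flag_anomalies(hourly_logins))  # [5]
```

Real systems use far richer features and models, but the principle is the same: the machine watches more signals, faster, than any analyst could.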

On April 7, Anthropic sent shockwaves through the industry when it announced that its latest model, Claude Mythos, was too powerful to release publicly because of its exceptional ability to identify and exploit software vulnerabilities. The company instead opted to provide controlled access to select businesses, including JPMorgan, Apple, Nvidia and Google, to strengthen their cybersecurity defenses. The move underscored a growing reality: the same systems designed to protect can just as easily be weaponized.

The uncomfortable truth is that while A.I. is accelerating both offensive and defensive capabilities, threat actors are proving equally, if not more, innovative. Technology alone won’t bridge the capability gap. Instead, leadership, talent generation and training need to keep pace with this revolutionary shift in technology.

The new offense: faster, smarter and more personal

For decades, cyberattacks followed a familiar pattern. Phishing emails were often clumsy, riddled with grammatical errors and relatively easy to spot. Think the infamous “Nigerian prince” scams. That era is over.

A.I. has fundamentally changed the precision and economics of cybercrime. It allows bad actors to operate with unprecedented speed and sophistication. Cyberattacks have always been, in part, a numbers game—like trying every door and window in a neighborhood until one is unlocked. A.I. simply allows attackers to try exponentially more doors at near light speed. It also acts as a force multiplier for low-skilled actors, simplifying the creation of malware and phishing campaigns and enabling Ransomware-As-A-Service business models.

At the same time, cyberattacks are becoming far more sophisticated. Today's phishing campaigns are no longer generic; they are highly personalized. A.I. tools scrape vast amounts of publicly available data, from social media profiles, company websites and professional bios, and use that information to craft messages that convincingly mimic colleagues, family members or trusted institutions. The result is a new class of cyber threat: one that is more frequent, more scalable and far more believable.

Even small details matter. Recently, my husband received what appeared to be a legitimate fraud alert from his bank. Only upon closer inspection did he realize it used American spelling in a message from a Canadian bank. That subtle inconsistency was the only clue that it was a scam. He checked directly with the bank, and the suspicion was confirmed. But in an environment of increasingly polished deception, those clues are disappearing.

The friction problem

That near-miss points to another problem. Organizations have spent years optimizing for seamless, frictionless digital experiences. We are conditioned to click, approve and move quickly. But when it comes to cybersecurity, friction is frequently the safeguard rather than the enemy. 

Multi-factor authentication, verification prompts and transaction delays are viewed as inconveniences. In reality, they are intentional pauses designed to interrupt automated or impulsive behavior and help users—both human and machine—verify actions. A.I.-driven threats exploit exactly what frictionless systems enable: speed without reflection.
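The logic of an intentional pause can be sketched in a few lines. This is a hypothetical illustration, not any bank's actual control; the function name, limit and timing window are invented: small actions flow through, while high-risk ones are held until a fresh out-of-band confirmation exists.

```python
import time

def confirm_transfer(amount, confirmed_at=None, limit=1000, max_age_s=300):
    """Deliberate friction: transfers over `limit` are held unless an
    out-of-band confirmation newer than `max_age_s` seconds exists."""
    if amount <= limit:
        return "approved"  # low risk: keep the experience frictionless
    if confirmed_at is None or time.time() - confirmed_at > max_age_s:
        return "held: confirmation required"  # the intentional pause
    return "approved"

print(confirm_transfer(5000))  # held: confirmation required
```

The point is not the code but the design choice it encodes: the system refuses to let speed outrun verification for the actions that matter most.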

As cyber risk evolves, individuals and organizations need to rethink their relationship with convenience. Slowing down, questioning a message, verifying a request and pausing before clicking are some of the most effective defenses we have.

The skills shift happening in real time

While much of the conversation around A.I. in cybersecurity focuses on threats, an equally important transformation is happening within the workforce. A.I. is rapidly being embedded into everyday workflows, changing not only how work is done but also what skills are required to do it.

In cybersecurity, this shift is particularly pronounced. Tasks that were once the domain of entry-level analysts—monitoring alerts, identifying patterns and triaging incidents—are increasingly being automated by A.I.-powered tools. On the surface, this is a positive development. It allows organizations to operate more efficiently and frees up talent to focus on higher-value work.

But it also raises a critical question: if foundational tasks are being automated, how do future professionals build the baseline skills needed to advance? Does the baseline itself need to shift? Which of these skills still need to be maintained? The challenge grows more urgent as organizations deploy systems capable of identifying vulnerabilities at scale, as the Claude Mythos announcement highlighted, and cybersecurity offers a particularly clear lens into it: when entire layers of entry-level experience begin to disappear, traditional career pathways break down.

Without deliberate intervention, we risk creating a generation of professionals who are fluent in tools but lack the foundational knowledge to use them critically. 

The illusion of intelligence

Compounding this challenge is a growing tendency to overestimate what A.I. actually is. Much has been written about A.I. hallucinations, but to really understand this, we need to consider how these systems work. Today’s models are powerful, but they are not human. They are not reasoning the way we do; they are identifying patterns in data and predicting the most likely sequence of outputs based on what they have seen before.
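A vastly simplified sketch can make "predicting the most likely sequence" tangible. Real models are incomparably larger, but a toy bigram counter shows the core mechanism: the system has no understanding of the words, only statistics about which word most often followed which. All names and the tiny corpus here are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it across the corpus."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = [
    "reset your password now",
    "reset your account today",
    "verify your password now",
]
model = train_bigram(corpus)
print(predict_next(model, "your"))  # password (seen twice vs. once)
```

The model "knows" nothing about passwords or urgency; it simply reproduces the patterns it was shown, which is precisely why context, empathy and judgment remain human work.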

They can assist, accelerate and augment, but they cannot replace human judgment. While A.I. systems may have all of the data related to a given situation, qualities such as context, empathy and nuanced understanding remain distinctly human. And yet, many organizations are deploying A.I. tools at scale without adequately preparing their workforce to use them critically. 

This brings into sharp focus the issues surrounding safe and secure use of this technology. A.I. is not infallible. There are many documented examples of bias, model drift and deliberate prompt manipulation producing unplanned outcomes that cause more harm than good. The result is an acute and urgent gap: we are more reliant on A.I. than ever, but not necessarily more informed about how to use it safely, securely and effectively.

A leadership gap in the making

These shifts point to a broader issue that has been part of the industry discourse for over a decade: cybersecurity is not just a technical function. It is a strategic, organizational and human challenge. And that requires a kind of leadership that goes beyond technical expertise alone.

While this is important, it is no longer sufficient on its own. Leaders must aim to understand A.I., systems thinking, data governance, human behavior, regulatory environments and organizational change simultaneously. They must be able to ask not only “Can we implement this technology?” but “Should we, and how do we do so securely and responsibly?”

At the same time, they must navigate a workforce that is both energized and overwhelmed. Employees are being introduced to a growing number of A.I. tools, often without clear guidance, while also grappling with fears of job displacement. Tool fatigue is real, and without proper support, it can lead to misuse or disengagement.

In many organizations, these leadership capabilities are still emerging. The result is a widening gap between the pace of technological change and the preparedness of those tasked with leading that change.

Rethinking talent for an A.I.-driven future

Addressing this gap will require a fundamental rethink of how we develop talent. First, we must embrace the idea of lifelong learning, not as a slogan but as a necessity. The pace of change in A.I. and cybersecurity means that static skill sets quickly become obsolete. Continuous education, upskilling and reskilling must be embedded into the fabric of organizations.

Second, we need to create new pathways for skill development. If traditional entry-level roles are evolving or disappearing, we must define a new baseline for foundational skills and design alternative ways for professionals to gain the experience they need. This should include learning within simulated environments, coaching and rotational programs, and work-integrated learning models that combine theory with practice.

Third, organizations must invest in enabling their people, not just deploying tools. This means providing clear policies, guidance, training and governance frameworks that help employees use A.I. securely, responsibly and effectively.

Finally, collaboration across sectors is critical. The challenges posed by A.I. and the current and emerging cyber risks do not neatly fit into any one domain. Partnerships between industry, government and academia will be essential to building the talent pipelines and knowledge ecosystems required to keep pace.

The path forward

None of this is an argument against A.I. Far from it. A.I. holds enormous potential to strengthen cybersecurity, improve efficiency and unlock new capabilities. But like any powerful tool, its impact depends on how we use it.

The organizations that succeed in this new landscape will not be those that adopt A.I. the fastest, but those that adopt it the smartest, balancing innovation with oversight, speed with intentionality and automation with human judgment. 

Cybersecurity has always been about staying one step ahead of the threat. In the age of A.I., that step is no longer just technological; it’s human. Closing this gap will be crucial to preserving our future digital resilience. 

Judith Borts is Senior Director at Rogers Cybersecure Catalyst—Toronto Metropolitan University’s national center for training, innovation and collaboration in cybersecurity. 
