The agent, Codex Security, is now in research preview, the company said in a Friday (March 6) blog post. The agent was known as Aardvark when it entered private beta in October.
Codex Security is designed to flag real security risks and to help security reviews keep pace with the accelerated software development enabled by agents, according to the post.
“By combining agentic reasoning from our frontier models with automated validation, it delivers high-confidence findings and actionable fixes so teams can focus on the vulnerabilities that matter and ship secure code faster,” OpenAI said in the post.
OpenAI began rolling out Codex Security in research preview to ChatGPT Enterprise, Business and Edu customers via Codex web on Friday and will offer free usage for the next month, per the post.
PYMNTS reported in July that a new category of tools is emerging: AI-first threat prevention platforms that don’t wait for alerts but instead seek out weak points in code, configurations or behavior and take defensive action automatically.
The solutions are being developed as AI-enabled tools, such as agentic AI systems and polymorphic malware, are accelerating cyberattacks, lowering barriers to entry for fraudsters and exposing gaps in traditional incident response and forensic models.
The World Economic Forum said in January that AI is expected to be the most consequential factor shaping cybersecurity strategies this year, with 94% of surveyed executives citing the technology as a force multiplier for both defense and offense. The WEF also highlighted how generative AI technologies are expanding the attack surface.
In this environment, enterprises and investors are shifting toward autonomous remediation, PYMNTS reported in February. In some scenarios, defensive agents remove the need for human intervention on a specific class of vulnerability. In others, they compress triage and coordination, so engineers focus on higher-order judgment. In both cases, the common thread is speed, as human-speed remediation is no longer sufficient when AI-driven attackers operate in continuous loops.