Offensive security startup Armadin secured nearly $190 million in funding to expand a platform that uses AI agents to automate red-team operations. The technology ...
CodeWall says the threat landscape is shifting drastically in the AI era, with AI agents autonomously selecting and attacking ...
Red teaming is a powerful way to uncover critical security gaps by simulating real-world adversary behaviors. In practice, however, traditional red team engagements are hard to scale, usually relying ...
OpenAI acquires Promptfoo to embed AI red-teaming and security testing directly into its Frontier agent platform, signaling ...
Editor's note: Louis will lead an editorial roundtable on this topic at VB Transform this month. AI models are under siege. With 77% of enterprises already hit by adversarial model ...
AI red teaming — the practice of simulating attacks to uncover vulnerabilities in AI systems — is emerging as a vital security strategy. Traditional red teaming focuses on simulating adversarial ...
Relentless, persistent attacks on frontier models make them fail, with the patterns of failure varying by model and developer. Red teaming shows that it's not the sophisticated, complex attacks that ...
AI systems are becoming part of everyday life in business, healthcare, finance, and many other areas. As these systems handle more important tasks, the security risks they face grow larger. AI red ...
The Cloud Security Alliance (CSA) has introduced a guide for red teaming Agentic AI systems, targeting the security and testing challenges posed by increasingly autonomous artificial intelligence. The ...
Shellter Project, the vendor of a commercial AV/EDR evasion loader for penetration testing, confirmed that hackers used its Shellter Elite product in attacks after a customer leaked a copy of the ...
The acquisition points to rising demand for tools that test and secure LLMs before they are deployed in enterprise workflows.