AutoPatchBench: Meta’s new way to test AI bug fixing tools

AutoPatchBench is a new benchmark that tests how well AI tools can fix code bugs. It focuses on C and C++ vulnerabilities found through fuzzing, and includes 136 real bugs with their verified fixes, taken from the ARVO dataset.

[Figure: patch generation flowchart]

AutoPatchBench is part of Meta’s CyberSecEval 4, a benchmark suite designed to objectively evaluate and compare LLM-based auto-patching agents on vulnerabilities identified via fuzzing, a widely used method of finding such flaws.

This article originally appeared on Help Net Security.
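Since each benchmark entry pairs a fuzzing-found crash with a verified fix, a natural acceptance check is to re-run the original crash reproducer against a patched target. The sketch below illustrates that idea in Python; the function and names are illustrative assumptions, not Meta’s actual benchmark API.

```python
# Hedged sketch: verifying a candidate patch against the fuzzer's original
# crash reproducer. All names here are hypothetical, for illustration only.

from typing import Callable


def verify_patch(run_target: Callable[[bytes], bool], crash_input: bytes) -> bool:
    """Return True if the (patched) target survives the crash reproducer.

    run_target:  executes the program on one input; returns True if it ran
                 cleanly, False if it crashed.
    crash_input: the reproducer input the fuzzer originally found.
    """
    return run_target(crash_input)


# Toy stand-ins: a "buggy" target that crashes on one specific byte
# sequence, and a "patched" target that handles every input.
def buggy(data: bytes) -> bool:
    return data != b"\xde\xad"  # returns False (crash) on the bad input


def patched(data: bytes) -> bool:
    return True  # handles all inputs cleanly


crash = b"\xde\xad"
print(verify_patch(buggy, crash))    # False: the bug is still present
print(verify_patch(patched, crash))  # True: the reproducer no longer crashes
```

A real harness would also re-run the project’s test suite and a fuzzing campaign, since merely surviving one reproducer does not prove the root cause was fixed.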
News URL
https://www.helpnetsecurity.com/2025/05/21/autopatchbench-meta-test-ai-bug-fixing-tools/
Related news
- Windows 11 tests sharing apps screen and files with Copilot AI (source)
- Widely available AI tools signal new era of malicious bot activity (source)
- Fake AI Tools Used to Spread Noodlophile Malware, Targeting 62,000+ via Facebook Lures (source)
- AI vs AI: How cybersecurity pros can use criminals’ tools against them (source)