
AutoPatchBench: Meta’s new way to test AI bug fixing tools
2025-05-21 05:00

AutoPatchBench is a new benchmark that tests how well AI tools can fix code bugs. It focuses on C and C++ vulnerabilities found through fuzzing. The benchmark includes 136 real bugs and their verified fixes, taken from the ARVO dataset.

[Figure: Patch generation flowchart]

CyberSecEval 4

AutoPatchBench is part of Meta's CyberSecEval 4, a benchmark suite designed to objectively evaluate and compare various LLM-based auto-patching agents for vulnerabilities specifically identified via fuzzing, a widely used method of discovering software bugs.
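The article does not detail Meta's verification harness, but the general idea behind evaluating a patch for a fuzzing-found bug can be sketched: rebuild the target with the candidate patch applied, then re-run the original crashing input and check that the crash no longer reproduces. The build command, binary name, and file paths below are hypothetical placeholders for illustration, not part of AutoPatchBench itself.

```python
import subprocess
from pathlib import Path

def reproduces_crash(binary: Path, crash_input: Path, timeout: int = 30) -> bool:
    """Run the fuzz target on the original crashing input and report
    whether it still aborts (non-zero exit, e.g. a sanitizer failure)."""
    result = subprocess.run(
        [str(binary), str(crash_input)],
        capture_output=True,
        timeout=timeout,
    )
    return result.returncode != 0

def evaluate_patch(build_cmd: list[str], binary: Path, crash_input: Path) -> bool:
    """Rebuild the target with the candidate patch applied, then check
    that the original crashing input no longer triggers the bug."""
    subprocess.run(build_cmd, check=True)  # rebuild the patched fuzz target
    return not reproduces_crash(binary, crash_input)

if __name__ == "__main__":
    # Hypothetical paths and build command, for illustration only.
    fixed = evaluate_patch(
        build_cmd=["make", "fuzz_target"],
        binary=Path("./fuzz_target"),
        crash_input=Path("crash-poc.bin"),
    )
    print("crash resolved" if fixed else "crash still reproduces")
```

A real harness would also guard against patches that merely mask the crash (for example by checking the fix against other fuzz inputs or the verified reference patch), which is part of what distinguishes a benchmark of verified fixes from simple crash-retesting.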


News URL

https://www.helpnetsecurity.com/2025/05/21/autopatchbench-meta-test-ai-bug-fixing-tools/