AI Cybersecurity Approaches Differ from Proof of Work Models
The proof-of-work analogy is misleading in cybersecurity: it does not carry over to AI models hunting for bugs. The effectiveness of AI in cybersecurity will depend on the quality and intelligence of the models rather than on sheer computational power.
Proof of work involves searching for a hash output that meets a difficulty target (for example, a digest with enough leading zero bits); each attempt has a fixed, independent probability of success, so enough computational effort virtually guarantees a result. This does not translate to AI cybersecurity, where repeated executions of a large language model (LLM) on the same task can produce varied outcomes without ever converging on a correct one.
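To make the contrast concrete, here is a minimal proof-of-work sketch in Python. The `find_nonce` helper and the difficulty setting are illustrative, not drawn from any particular system; the point is that every nonce tried has the same independent chance of success, so expected time to a solution scales predictably with the number of hash attempts.

```python
import hashlib
import itertools

def find_nonce(data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose SHA-256 digest starts with
    `difficulty_bits` zero bits. Each attempt succeeds with
    probability 2**-difficulty_bits, independent of all others,
    so more attempts translate directly into a higher chance of
    success -- the property that makes proof of work scale with
    raw compute."""
    target = 1 << (256 - difficulty_bits)  # digests below this value qualify
    for nonce in itertools.count():
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

# With 16 difficulty bits, a solution takes ~2**16 attempts on average.
print(find_nonce(b"block header", difficulty_bits=16))
```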
As the number of samples grows, the binding constraint becomes the model's intelligence rather than the sample count: re-running a model that cannot comprehend the code does not help. The OpenBSD SACK bug illustrates this; weaker models fail to recognize the issue because they cannot follow the underlying code interactions, no matter how many times they are run.
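A back-of-the-envelope calculation, under the simplifying assumption that each LLM run independently finds the bug with some fixed probability p, shows why sampling only helps when the model can find the bug at all: if p is positive, the success probability 1 - (1 - p)^k climbs toward 1 as the sample count k grows, but if the bug lies beyond the model's comprehension (p = 0), no number of samples helps.

```python
def success_probability(p_per_sample: float, num_samples: int) -> float:
    """Probability that at least one of `num_samples` independent
    attempts succeeds, given each succeeds with `p_per_sample`."""
    return 1.0 - (1.0 - p_per_sample) ** num_samples

for k in (1, 10, 100, 1000):
    # Proof-of-work-like task: fixed positive per-attempt probability,
    # so more samples reliably buy more success.
    pow_like = success_probability(0.01, k)
    # Bug beyond the model's comprehension: per-sample probability is
    # effectively zero, so extra samples buy nothing.
    beyond_model = success_probability(0.0, k)
    print(f"k={k:4d}  pow-like={pow_like:.3f}  beyond-model={beyond_model:.1f}")
```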
Stronger models tend to hallucinate less, making them less likely to report bugs that are not there, while weaker models may flag issues without true comprehension. The future of cybersecurity will rely on developing better AI models and getting faster access to them, rather than simply scaling up computational resources.