
LLMs for Smart Contract Audits
Large Language Models have evolved significantly in recent years, but can they effectively audit smart contracts, or at least assist human auditors?
In this stream, Kirill Balakhonov from Nethermind explores the current capabilities and limitations of AI-driven smart contract auditing tools. He shares his personal experience creating AI tools for vulnerability detection and explains which methods deliver real results and which only provide a false sense of security.
- 🔹 How LLMs can identify bugs and vulnerabilities in Solidity code
- 🔹 Which issues LLMs detect better than static analyzers, and what they commonly miss
- 🔹 Why GPT-4 and GPT-4o differ significantly in their auditing effectiveness
- 🔹 Real-world use cases: how Nethermind’s AI auditor uncovered critical vulnerabilities missed by traditional auditors
- 🔹 The future of auditing: combining LLM-based agent systems with symbolic execution, fuzzing, and formal verification tools
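As a rough illustration of the first point, bug-finding in Solidity with an LLM typically starts by wrapping the contract source in an audit instruction. The sketch below is a minimal, hypothetical Python example of such a prompt builder; the function name, prompt wording, and sample contract are illustrative assumptions, not Nethermind's actual tooling.

```python
# Hypothetical sketch: wrap Solidity source in an audit prompt for an LLM.
# Prompt wording and function name are illustrative, not a real tool's API.

def build_audit_prompt(solidity_source: str) -> str:
    """Return an LLM prompt asking for a security review of the given code."""
    return (
        "You are a smart contract security auditor. Review the Solidity "
        "code below for vulnerabilities such as reentrancy, integer "
        "overflow, and access-control flaws. Report each finding with a "
        "severity rating and a short explanation.\n\n"
        "```solidity\n" + solidity_source + "\n```"
    )

# Example input: a contract with a classic reentrancy pattern (the external
# call happens before the balance is zeroed) -- the kind of bug an LLM
# reviewer is often able to flag from source alone.
vulnerable = """
pragma solidity ^0.8.0;
contract Vault {
    mapping(address => uint256) public balances;
    function withdraw() external {
        uint256 amount = balances[msg.sender];
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok);
        balances[msg.sender] = 0; // state update after external call
    }
}
"""

prompt = build_audit_prompt(vulnerable)
```

The resulting string would be sent to a model via whatever chat-completion API the team uses; agent-style systems discussed in the stream layer tool calls (static analysis, fuzzing) on top of this basic loop.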