The quiet revolution in embedded systems just got loud. According to a blistering new 2025 report from RunSafe Security, 80.5% of embedded engineers are now using AI tools to write code, and a staggering 83.5% have deployed that AI-generated code directly into production environments.
In any other industry, this would be a victory lap for productivity. But in the world of embedded systems, where the code controls insulin pumps, industrial armatures, and automotive braking systems, it represents a terrifying decoupling of velocity from validity.
The report, titled “AI in Embedded Systems: AI Is Here. Security Isn’t,” paints a picture of an industry at a dangerous inflection point. We are writing code faster than ever before, using languages that were never designed to be safe, and deploying it into hardware that cannot easily be patched.
Here is why the physics of this specific problem are different, and why the “move fast and break things” ethos is about to hit the hard wall of memory safety.
The Technical Deep Dive: Why “Good” Code Breaks
To understand why this report matters, you have to look past the “AI” buzzword and down to the metal. The core issue isn’t that AI writes “bad” code. In fact, Large Language Models (LLMs) are surprisingly good at writing syntactically correct C and C++.
The problem is that C and C++ are inherently unsafe languages that require perfect developer vigilance, something AI models, which are probabilistic token predictors, cannot structurally guarantee.
The Memory Safety Gap
In a modern high-level application (written in Python, Rust, or Go), the language runtime handles memory allocation. You create an object; the system finds space. You stop using it; the garbage collector (or, in Rust’s case, the ownership model) reclaims it.
In embedded C/C++, you are the garbage collector. You manually allocate memory (malloc) and manually free it (free). If you make a mistake, you don’t just get an error; you create a vulnerability.
The RunSafe report highlights that this “manual transmission” approach is colliding with AI velocity. When an AI generates a 50-line parser for a JSON stream, it often uses standard, efficient C patterns. But it rarely considers the context of the memory layout, which produces two classic failure modes (both are sketched in code after this list):
- Buffer Overflows: The AI writes data to a buffer without rigorously checking if the data fits. In a desktop app, this crashes the program. In an embedded controller without an MMU (Memory Management Unit), this overwrites the instruction pointer and gives an attacker control of the device.
- Use-After-Free (UAF): The AI correctly frees a pointer but leaves a dangling reference to it elsewhere in the code. Later, the logic tries to access that freed memory. If an attacker has sprayed the heap with malicious data, they now own execution.
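To make these failure modes concrete, here is a deliberately naive sketch in C. The handler, its name, and the buffer sizes are invented for illustration; it simply shows both bug classes in the kind of terse, “looks correct” code an LLM happily produces.

```c
#include <stdlib.h>
#include <string.h>

#define BUF_SIZE 64  /* illustrative fixed-size buffer */

/* Hypothetical message handler, written in the terse style an AI tends to emit. */
void handle_message(const char *payload, size_t len) {
    char buf[BUF_SIZE];

    /* Buffer overflow: nothing verifies that len <= BUF_SIZE before the copy.
     * On an MCU without an MMU, the clobbered stack can hand an attacker
     * control of the instruction pointer. */
    memcpy(buf, payload, len);

    char *record = malloc(len);
    if (record == NULL) {
        return;
    }
    memcpy(record, payload, len);

    free(record);  /* the heap block is released here... */

    /* Use-after-free: the dangling pointer is still dereferenced below.
     * If an attacker reallocates this heap slot first, they control what
     * the logic reads. */
    if (len > 0 && record[0] == 0x7E) {
        /* ... act on attacker-controlled, freed memory ... */
    }
}
```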
The Attack Surface Multiplier
The statistics from the report are alarming because of the surface area. 53% of respondents cited security as their top concern, and 91% are increasing investment in embedded security. They know the wave is coming.
Traditional development acts as a natural throttle on code volume. A human engineer can only write so many lines of C++ per day, and good teams review that code. AI removes the throttle. We are now flooding legacy codebases with vast quantities of new, unverified logic.
If 1 in 10,000 lines of human code has a critical memory flaw, and AI allows us to write 100,000 lines in the time it took to write 10,000, we haven’t just increased productivity; we’ve increased the number of latent vulnerabilities shipped in each release by an order of magnitude.
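A back-of-the-envelope calculation makes the scaling concrete (the 1-in-10,000 defect rate here is purely illustrative, not a figure from the report):

$$
\underbrace{10{,}000 \;\text{lines} \times \tfrac{1}{10{,}000}}_{\approx\,1\ \text{expected flaw per human-paced release}}
\quad\longrightarrow\quad
\underbrace{100{,}000 \;\text{lines} \times \tfrac{1}{10{,}000}}_{\approx\,10\ \text{expected flaws per AI-paced release}}
$$

The per-line defect density stays the same; the absolute number of exploitable bugs shipped does not.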
Contextual History: The Pattern of Negligence
We have seen this movie before, just on different screens.
In the 2000s and early 2010s, the “Connect Everything” phase of the IoT boom set the stage for the Mirai botnet of 2016. Manufacturers rushed to put IP stacks on cameras and DVRs without thinking about default passwords or open telnet ports. The result was a massive DDoS infrastructure built from compromised toasters and webcams.
In the 2010s, the automotive industry rushed to add infotainment and connectivity to cars. The result was the Jeep Cherokee hack, where researchers remotely killed the transmission of a vehicle on a highway because the entertainment system could talk to the CAN bus.
Now, in 2025, we are doing it again with code generation. The RunSafe report notes that 73% of engineers rate the risk of AI code as “moderate or higher,” yet the deployment numbers (83.5%) show they are doing it anyway.
The economic pressure to ship “smart” features (predictive maintenance, edge AI processing, voice interfaces) is overriding the engineering discipline required to secure them.
The Countermeasure: Load-time Function Randomization (LFR)
If we cannot trust the code (because there is too much of it) and we cannot rewrite 30 years of C++ into Rust overnight, what is the defense?
The report points toward Runtime Resilience. If you assume the bug exists, you must make it unexploitable.
One of the most effective techniques for this in the embedded space is Load-time Function Randomization (LFR).
How It Works
In a standard firmware compilation, every function lives at a static, known address. calculate_voltage() might always be at 0x08001234.
Attackers love this. To build an exploit (like Return-Oriented Programming, or ROP), they need to know exactly where to jump to execute the code they want. They chain together little snippets of existing code (gadgets) to build a malicious program.
LFR breaks this chain.
- Compile Time: The compiler emits code that doesn’t jump to absolute addresses. Instead, it jumps to a “stub” or a lookup table.
- Load Time: When the device boots, the secure loader shuffles the deck. It randomly assigns actual memory addresses to all the functions.
- Patching: The loader updates the lookup table or patches the binary in memory so the calls still work.
The result? Every time the device reboots (or every time the firmware is updated, depending on implementation), the memory map changes. An exploit that works on Device A will crash Device B. An exploit that worked yesterday won’t work after a reboot.
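As a rough mental model (this is not RunSafe’s implementation, and the addresses, slot counts, and function names are invented), imagine a loader that assigns each function a randomized placement at boot and patches the lookup table that every call site uses:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_FUNCS  3
#define SLOT_SIZE  0x100        /* hypothetical: each function fits in one slot */
#define CODE_BASE  0x08001000U  /* hypothetical base of the executable region   */
#define NUM_SLOTS  256          /* 256 candidate placements per function        */

static const char *func_names[NUM_FUNCS] = {
    "calculate_voltage", "parse_packet", "update_pwm"
};

/* Call sites are compiled to fetch the target address from this table
 * instead of baking in a hard-coded absolute address. */
static uint32_t lookup_table[NUM_FUNCS];

/* Toy "secure loader": pick a random slot for each function and patch the
 * table so legitimate calls still resolve. A real loader would also move the
 * machine code itself and avoid overlapping placements. */
void lfr_load(uint32_t boot_entropy) {
    srand(boot_entropy);  /* e.g. seeded from a hardware RNG at boot */
    for (int i = 0; i < NUM_FUNCS; i++) {
        uint32_t slot = (uint32_t)(rand() % NUM_SLOTS);
        lookup_table[i] = CODE_BASE + slot * SLOT_SIZE;
        printf("%-18s -> 0x%08" PRIX32 "\n", func_names[i], lookup_table[i]);
    }
}

int main(void) {
    lfr_load(0x00C0FFEE);  /* different entropy each boot => different layout */
    return 0;
}
```

An attacker who baked 0x08001234 into an exploit is now guessing: the address that worked on one boot points at something else entirely on the next.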
RunSafe’s proprietary implementation of this technology is gaining traction because it doesn’t require rewriting the source code. You apply it at the binary level. This is crucial for the 60% of respondents who are already trying to use runtime protections.
Forward-Looking Analysis: The 5-Year Outlook
The 2025 report is a snapshot of a transition period. We are currently in the “Wild West” phase of AI code generation.
Over the next five years, we expect three major shifts:
- The Rise of Silicon-Based Security: Software mitigations like LFR will effectively become mandatory. We will likely see regulations (similar to the EU’s Cyber Resilience Act) demanding that critical infrastructure devices possess binary randomization capabilities.
- The Rust Transition: While AI writes C++ easily, it also writes Rust easily. The friction for switching to memory-safe languages will drop as AI handles the boilerplate. However, this only protects new code. The billions of lines of legacy C/C++ remain.
- Liability Shift: As AI-generated code causes physical failures (e.g., a robotic arm swinging too fast, a battery management system failing), the legal conversation will shift from “software bugs” to “product liability.” If a manufacturer used AI to generate safety-critical code without human review or runtime protection, that is negligence.
The Bottom Line
The RunSafe Security 2025 report is not just a collection of surveys; it is a warning flare. The industry has uncorked the bottle on AI productivity, and there is no putting it back.
The sheer volume of code being produced means that manual review is effectively impossible at scale. We can no longer pretend that we can catch every bug. The only viable path forward is to assume the code is broken and build systems that refuse to let it break the machine.
For the embedded engineer in 2025, the job is no longer just writing C. It is architecting the containment fields that keep that C from hurting anyone.
Mathematical Appendix: The Probability of Exploit
To understand the value of LFR, consider the probability of a successful ROP chain exploit. A standard chain requires $k$ gadgets. In a static memory map, the probability of finding each gadget at a known location is 1; the exploit is deterministic.
With LFR, if there are $N$ possible locations (slots) for a function, the attacker has a $1/N$ chance of guessing the correct offset for each independent gadget (simplified model):

$$
P(\text{chain succeeds}) = \left(\frac{1}{N}\right)^{k} = N^{-k}
$$

Even with a modest entropy ($N = 256$ slots) and a short chain ($k = 3$ gadgets), the difficulty skyrockets from certainty to roughly one in 16 million ($256^{3} = 16{,}777{,}216$), and a wrong guess typically crashes the device rather than granting another attempt.
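For readers who want to check the arithmetic or plug in other parameters, a tiny computation (values chosen to match the illustrative example above) reproduces the figure:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double N = 256.0;  /* assumed slots per function (illustrative) */
    double k = 3.0;    /* assumed ROP chain length (illustrative)   */
    printf("P(success) = %.3e  (1 in %.0f)\n", pow(1.0 / N, k), pow(N, k));
    return 0;          /* prints: P(success) = 5.960e-08  (1 in 16777216) */
}
```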
Discuss on Bluesky