If you can get malicious instructions into the context of even the most powerful reasoning LLMs in the world you'll still be able to trick them into outputting vulnerable code like this if you try hard enough.
I don't think the fact that small models are easier to trick is particularly interesting from a security perspective, because you need to assume that ANY model can be prompt injected by a suitably motivated attacker.
On that basis I agree with the article that we need to be using additional layers of protection that work against compromised models, such as robust sandboxed execution of generated code and maybe techniques like static analysis too (I'm less sold on those, I expect plenty of malicious vulnerabilities could sneak past them.)
The most "shocking" thing to me in the article is that people (apparently) think it's acceptable to run a system where content you've never seen can be fed into the LLM when it's generating code that you're putting in production. In my opinion, if you're doing that, your whole system is already compromised and you need to literally throw away what you're doing and start over.
Generally I hate these "defense in depth" strategies that start out with doing something totally brain-dead and insecure, and then trying to paper over it with sandboxes and policies. Maybe just don't do the idiotic thing in the first place?
When you say "content you've never seen," does this include the training data and fine-tune content?
You could imagine a sufficiently motivated attacker putting some very targeted stuff in their training material - think StuxNet - "if user is affiliated with $entity, switch goals to covert exfiltration of $valuable_info."
> does this include the training data and fine-tune content?
No, I'm excluding that because I'm responding to the post which starts out with the example of: [prompt containing obvious exploit] -> [code containing obvious exploit] and proceeds immediately to the conclusion that local LLMS are less secure. In my opinion, if you're relying on the LLM to reject a prompt because it contains an exploit, instead of building a system that does not feed exploits into the LLM in the first place, security exploits are probably the least of your concerns.
There actually are legitimate concerns with poisoned training sets, and stuxnet-level attacks could plausibly achieve something along these lines, but the post wasn't about that.
There's a common thread among a lot of "LLM security theatre" posts that starts from implausible or brain-dead scenarios and then asserts that big AI providers adding magical guard rails to their products is the solution.
The solution is sanity in the systems that use LLMs, not pointing the gun at your foot and firing and hoping the LLM will deflect the bullet.
We started giving our (https://www.definite.app/) agent a sandbox (we use e2b.dev) and it's solved so many problems. It's created new problems, but net-net it's been a huge improvement.
Something like "where do we store temporary files the agent creates?" becomes obvious if you have a sandbox you can spin up and down in a couple seconds.
It wasn't, but the written version of it it is actually better than what I said in the room (since I got to think a little bit harder and add relevant links).
IIUC your talk "just" suggests using sandbox-exec on Mac, which (as you point out) is sadly labeled as deprecated.
Is that really the best solution the world has to offer in 2025? LLMs aside, there is a whole host of supply chain risk issues that would be resolved by deploying convenient and strong sandboxes everywhere.
1. A sandbox on someone else's computer. Claude Code for web, Codex Cloud, Gemini Jules, GitHub Codespaces, ChatGPT/Claude Code Interpreter
2. A Docker container. I think these are robust enough to be safe.
3. sandbox-exec related tricks. I haven't poked hard enough at Claude Code's new sandbox-exec sandbox yet - they only released it on Monday. OpenAI Codex CLI was using sandbox-exec too last time I looked but again, I've not reviewed it enough to be comfortable with it.
I'm hoping more credible options come along for the sandboxing problems.
I found Vibekit's (open-source https://docs.vibekit.sh/sdk) approach of allowing you to chose your own sandboxing solution for any coding cli the most flexible. Also works with openCode and local or cloud sandboxes ! Really quality piece of software that more devs should know about. I'm surprised Simon hasn't tried it yet.
Yeah they shipped that feature on Monday, you can access it via the /sandbox command. I haven't put it through its paces enough to get a feel for if I trust it yet though.
> The conventional wisdom that local, on-premise models offer a security advantage is flawed. While they provide data privacy, our research shows their weaker reasoning and alignment capabilities make them easier targets for sabotage.
Yeah, I'm not following here. If you just run something like deepseek locally, you're going to be okay provided you don't feed it a bogus prompt.
Outside of a user copy-pasting a prompt from the wild, or break isolation by giving it access to outside resources, the conventional wisdom holds up just fine. The operator and consumption of 3rd party stuff are weak-points for all IT, and have been for ages. Just continue to train folks to not do insecure things, and re-think letting agents go online for anything/everything (which is arguably not a local solution anyway).
Freeform plaintext (not an executable/script) being an attack vector is new, outside of parser vulns. Providing context through tickets, docs, etc is now a non-obvious security liability.
It is still an important attack vector to be aware of regardless of how unrealistic you believe it to be. Many powerful hacks come from very simple and benign appearing starting points.
Written by VPs of sales from Anthropic and OpenAI?
Where this article fails the worse: in my experience smaller local models are not often used in agentic tasks that involve code execution so much of the otherwise OK points don't apply. Also, when I have played with, for example, the Agno agent library with local models, I have the application code print/display any generated Python code before execution, and local sandboxing is not difficult to do!
Local models and embedded models excel at data transformation, NLP tasks, etc.
Especially with agentic browsers like OpenAI Atlas, Comet, etc., there are real security concerns. Probably more of a concern that running local models.
All of these are incredibly obvious. If you have even the slightest idea of what you're doing and review the code before deploying it to prod, this will never succeed.
If you have absolutely no idea what you're doing, well, then it doesn't really matter in the end, does it? You're never gonna recognize any security vulnerabilities (as has happened many times with LLM-assisted "no-code" platforms and without any actual malicious intent), and you're going to deploy unsafe code either way.
Sure, you can simplify these observations into just codegen. But the real observation is not that these models are more susceptible to fail when generating code, but that they are more susceptible to jailbreak-type attacks that most people have come to expect to be handled by post training.
Having access to open models is great, and even if their capabilities are somewhat lower than the closed-source SoTA models, and we should be aware of the differences in behavior.
> more susceptible to jailbreak-type attacks that most people have come to expect to be handled by post training
the keyword here is "more". The big models might not be quite as susceptible to them, but they are still susceptible. If you expect these attacks to be fully handled, then maybe you should change your expectations.
> All of these are incredibly obvious. If you have even the slightest idea of what you're doing and review the code before deploying it to prod, this will never succeed.
Well this is wrong. And it's exactly this type of thinking why people will get absolutely burned by this.
First off the fact they chose obvious exploits for explanatory purposes doesn't mean this attack only supports obvious exploits...
And to your second point of "review the code before you deploy to prod", the second attack did not involve deploying any code to prod. It involved an LLM reading a reddit comment or github comment and immediately executing.
People not taking security seriously and waving it off as trivial is what's gonna make this such a terrible problem.
I thought that local LLMs means they run on local computers, without being exposed to the internet.
If an attacker can exploit a local LLM, means it already compromised you system and there are better things they can do than trick the LLM to get what they can get directly.
LLMs don't have any distinction between instructions & data. There's no "NX" bit. So if you use a local LLM to process attacker-controlled data, it can contain malicious instructions. This is what Simon Willson's "prompt injection" means: attackers can inject a prompt via the data input. If the LLM can run commands (i.e. if it's an "agent") then prompt injection implies command execution.
>LLMs don't have any distinction between instructions & data
And this is why prompt injection really isn't a solvable problem on the LLM side. You can't do the equivalent of (grep -i "DROP TABLE" form_input). What you can do is not just blindly execute LLM generated code.
I guess if you were using the LLM to process data from your customers, e.g. categorise their emails, then this argument would hold that they might be more risky.
Agreed. Some of the big companies seem to be claiming that by going with ReallyBitCompany's AI you can do this safely, but you can't. Their models are harder to trick, but simply cannot be made safe.
Local LLMs may not be exposed to the internet, but if you want them to do something useful you're likely going to hook them up to an internet-accessing harness such as OpenCode or Claude Code or Codex CLI.
Fair enough. Forgive my probably ignorance, but if Claude Code can be attacked like this, doesn’t that means that also foundation LLMs are vulnerable to this, and is not a local LLM thing?
It's not an LLM thing at all. Prompt injection has always been an attack against software that uses LLMs. LLMs on their own can't be attacked meaningfully (well, you can jailbreak them and trick them into telling you the recipe for meth but that's another issue entirely). A system that wraps an LLM with the ability for it to request tool calls like "run this in bash" is where this stuff gets dangerous.
Yeah, that's fair. A good LLM (gpt-oss-20b, even some of the smaller Qwens) can be entirely useful offline. I've got good results from Mistral Small 3.2 offline on a flight helping write Python and JavaScript, for example.
Having Claude Code able to try out JSON APIs and pip install extra packages is a huge upgrade from that though!
> Local LLMs may not be exposed to the internet, but if you want them to do something useful you're likely going to hook them up to an internet-accessing harness such as OpenCode or Claude Code or Codex CLI.
is not "someone finding useful to have a local llm ingest internet content" - it was someone suggesting that nothing useful can be done without internet access.
I guess I don't read that how you do. It says you're likely to do that, which I take to mean that's a majority use case, not that it's the only use case.
yes and I think better local sandboxing can help out in this case, it’s something ive been thinking about a lot and more and more seems to be the right way to run these things
Welcome to corporate security. "If an attacker infiltrates our VPN and gets on the network with admin credentials and logs into a workstation..." Ya, no shit, thanks Mr Security manager, I will dispose of all of our laptops.
The "lethal trifecta" sounds catchy but I don't believe it accurately characterizes the risks of LLMs.
In theory any two of the trifecta is fine, but practically speaking I think you only need "ability to communicate with the outside," or maybe not even that. Business logic is not really private data anymore. Most devs are likely one `npm update` away from their LLM getting a new command from some transitive dependency.
The LLM itself is also a giant blackbox of unverifiable untrusted data, so I guess you just have to cross your fingers on that one. Maybe your small startup doesn't need to be worried about models being seeded with adversarial training data, but if I were say Coinbase I'd think twice before allowing LLM access to anything.
This vulnerability comes from allowing the AI to read untrusted data (usually documentation) from the Internet. For LLMs the boundary between "code" and "data" isn't as clear as it used to be since they will follow instructions written in human language.
So if you are not careful with your inputs you can get stuff injected. Shouldn't this be very clear from start? With any system you should be careful what you input to it. And consider it as possible vector.
Seems obvious to me that you should fully vet whatever goes to LLM.
I get the impression that somehow an attacker is able to inject this prompt (maybe in front of the actual coder’s prompt) in such a way to produce actual production code. I’m waiting to hear how this can happen - cross site attacks on the developer’s browser?
"Documentation, tickets, MCP server" in pictures...
With internal documentation and tickets I think you would have bigger issues... And external documentation. Well maybe there should be tooling to check that. Not expert on MCP. But vetting goes there too.
Yes, of course if you can inject something into context there’s lots can be done. And anything running local will require different security considerations than running remote. Neither of these things make for a paradox.
Also from the article: For example, a small model could easily flag the presence of eval() in the generated code, even if the primary model was tricked into generating it.
People are losing their critical thinking. AI is great, yes, but there’s no need to throw it like a grenade at every problem: There’s nothing in that snippet or surrounding bits from the article that needs an entire model-on-model architecture to resolve. Some keyword filters, other inputs sanitizing processes such as were learned way back in the golden years of sql injection attacks. But these are the lines of BS coming for your CTO’s, spinning them tales about the need for their own prompt-engineered fine tunes w/ laser sighted tokens that will run as edge models and shoot down everything from context injected eval() responses to phishing scams and more, and all require their monthly/annual LoRa for purchasing to stay timely on the attacks. At least if this article is smelling the way I think it is.
>Some keyword filters, other inputs sanitizing processes such as were learned way back in the golden years of sql injection attacks.
But that's the thing, keyword filters aren't enough because you can smuggle hidden instructions in any number of ways that don't involve blacklisted words like "eval" or "ignore previous". Moreover "back in the golden years of sql injection attacks", keyword filters were often (mis)used in a misguided way of fixing SQLI exploits, because they can often be bypassed with escape characters and other shenanigans.
If you're smart enough to run LLMs locally, then you're automatically in the small group of enthusiasts who know something about LLMs and how they work.
Sometimes I wonder if HN people really realize 80% of people out there haven't even heard of ChatGPT, and the remaining 19% have not heard about Claude/Gemini. It's only a small group who know local models exist. We're them, and we complain about their security...
To be fair, if you expand Gemini to "that fucking Google thing that ruined Google with; hey Grandson, how do I turn this off?", a lot of people have heard of Gemini, even if they don't know it by its true name.
Everybody is talking about how this is obvious or not a real problem, but I think the flaw in it is something else.
It assumes that local models are inherently worse. But from a software perspective that's nonsense because there is no reason it couldn't be the exact same software. And from a hardware perspective the theory would have to be that the centralized system is using more expensive hardware, but there are two ways around that. The first is that you can sacrifice speed for cost -- x86 servers are slower than GPUs but can run huge models because they support TBs of memory. And the second is that you can, of course, buy high end local hardware, as many enterprises might choose to do, especially when they have enough internal users to keep it busy.
The point (which they make quite explicitly) is that an individual or small organization can only run open source models locally, and those open source models are less sophisticated than the “frontier” models.
Obviously we can’t run GPT-5 or the cutting edge version of Claude or whatever locally, because OpenAI or Anthropic are keeping those weights as closely kept secrets.
But there is nothing inherent about that. The companies that want to run local models, or the cloud and hardware providers that want to sell hardware to run them, can get together and publish better local models.
Moreover, even that's presuming that you would only use the best available model, but that's also likely to be the one which is the most resource intensive and the most expensive, and then you can't afford it anyway. Meanwhile to use their smaller models you're still paying their margin, whereas if you use a local model you can spend that money on hardware. The bigger local model can beat the smaller proprietary one for the same price.
It is like SQL injection. Probably worse. If you are using unsupervised data for context that ultimately generates executable code you will have this security problem. Duh.
Sure there is. A common way is to have the LLM generate things like {name} which will get substituted for the user's name instead of trying to get the LLM itself to generate the user's name.
That's what I explained. You are trying to do something with an untrusted name and the LLM will not treat the name as instructions because it doesn't see the actual name.
You mentioned having the LLM generate a placeholder, whereas the important thing is what it accepts. You can feed an LLM nothing but placeholders but that's very limited since it can't see the the actual data in any way. You're really just having it emit a template. Something simple like "make a calendar event for the reservation in this email" could not be done. In contrast, parameterized queries let the database actually operate on the data.
It may be limited but that doesn't mean it's not similar. For example MySQL can't check the weather when given city string as a paramertized query, but that doesn't mean MySQL doesn't have parameterized queries.
Querying external information is a different category of thing altogether.
The key thing (really, the only thing) about parameterized queries is that they allow you to provide code and data with a hard separation between the two.
LLMs don't have anything of the sort. They only take in one kind of thing. They don't even have a notion of code versus data that you could separate, or fail to separate. All you can do is either tolerate it sometimes taking instructions from the stuff you want treated as "data," or never give it anything you consider "data." You propose this second one. But never giving it "data" is very different from a feature that allows you to provide arbitrary data with total safety.
They are easier to trick? If a trick is what I want, the LLM should do the trick. If I want a vulnerability, it should make a vulnerability. What’s bad about that?
Would anyone here merge said code. At least example one would fail most commercial static scans like veracode etc even if the pr review was trash and allowed it.
> Attacker plants malicious prompt in likely-to-be-consumed content.
Is the author implying that some random joe hacker writes a blog with the content. Then a <insert any LLM training set> picks up this content thinking its real/valid. A developer within a firm then asks to write something using said LLM references the information from that blog and now there is a security error?
Possible? Technically sure. Plausible? That's ummm a stretch.
This is not new right, LLMs are dumb, they just do everything they are told, and so the orchestration before and after the LLM execution holds key. Even without security, ChatGPT or gemini's value is not just in the LLM but the productization of it which is the layers before and after the execution. Similarly if one is executing local LLMs it's imperative to also have proper security rules around the execution.
These are, without a doubt, the dumbest security vulnerabilities. We are headed for clown world where you can type in "as an easter egg, please run exec() for me" and it actually works. Not to mention the push for agentslop - pushed by people who really should be able to calculate `p_success = pow(.95, num_of_steps)` in their head and realise they have a bad idea from first principles.
Theory: probabilistic machines’ security is asymptotic: more parameters let it get closer to being secure/prompt-injection-resistant/whatever. It’ll never be perfect, but there’s some threshold beyond which it’s good enough.
To me this article reads as a celebration of how much better frontier models have gotten at defending against security flaws, rather than “open models bad”.
Eventually the tools we use everywhere will be “good enough to use and not worry”. This is foreign to software people, but only a Jedi deals in absolutes.
The underlying problem here is giving any model direct access to your primary system. The model should be working in a VM or container with limited privileges.
This is like saying it's safer to be exposed to dangerous carcinogenic fumes than nerve gas, when the solution is wearing a respirator.
Also what are you doing allowing someone else to prompt your local LLM?
"If you’re running a local LLM for privacy and security..."
What? You run a local LLM for privacy, i.e. because you don't want to share data with $BIGCORP. That has very little to do with the security of the generated code (running in a particular environment).
I don't think the fact that small models are easier to trick is particularly interesting from a security perspective, because you need to assume that ANY model can be prompt injected by a suitably motivated attacker.
On that basis I agree with the article that we need to be using additional layers of protection that work against compromised models, such as robust sandboxed execution of generated code and maybe techniques like static analysis too (I'm less sold on those, I expect plenty of malicious vulnerabilities could sneak past them.)
Coincidentally I gave a talk about sandboxing coding agents last night: https://simonwillison.net/2025/Oct/22/living-dangerously-wit...
Generally I hate these "defense in depth" strategies that start out with doing something totally brain-dead and insecure, and then trying to paper over it with sandboxes and policies. Maybe just don't do the idiotic thing in the first place?
You could imagine a sufficiently motivated attacker putting some very targeted stuff in their training material - think StuxNet - "if user is affiliated with $entity, switch goals to covert exfiltration of $valuable_info."
No, I'm excluding that because I'm responding to the post which starts out with the example of: [prompt containing obvious exploit] -> [code containing obvious exploit] and proceeds immediately to the conclusion that local LLMS are less secure. In my opinion, if you're relying on the LLM to reject a prompt because it contains an exploit, instead of building a system that does not feed exploits into the LLM in the first place, security exploits are probably the least of your concerns.
There actually are legitimate concerns with poisoned training sets, and stuxnet-level attacks could plausibly achieve something along these lines, but the post wasn't about that.
There's a common thread among a lot of "LLM security theatre" posts that starts from implausible or brain-dead scenarios and then asserts that big AI providers adding magical guard rails to their products is the solution.
The solution is sanity in the systems that use LLMs, not pointing the gun at your foot and firing and hoping the LLM will deflect the bullet.
Something like "where do we store temporary files the agent creates?" becomes obvious if you have a sandbox you can spin up and down in a couple seconds.
Is that really the best solution the world has to offer in 2025? LLMs aside, there is a whole host of supply chain risk issues that would be resolved by deploying convenient and strong sandboxes everywhere.
1. A sandbox on someone else's computer. Claude Code for web, Codex Cloud, Gemini Jules, GitHub Codespaces, ChatGPT/Claude Code Interpreter
2. A Docker container. I think these are robust enough to be safe.
3. sandbox-exec related tricks. I haven't poked hard enough at Claude Code's new sandbox-exec sandbox yet - they only released it on Monday. OpenAI Codex CLI was using sandbox-exec too last time I looked but again, I've not reviewed it enough to be comfortable with it.
I'm hoping more credible options come along for the sandboxing problems.
It's cool that they made this open source. It seems straightforward and useful enough that it could be used on its own for sandboxing purposes.
https://docs.claude.com/en/docs/claude-code/sandboxing
https://github.com/anthropic-experimental/sandbox-runtime
Yeah, I'm not following here. If you just run something like deepseek locally, you're going to be okay provided you don't feed it a bogus prompt.
Outside of a user copy-pasting a prompt from the wild, or break isolation by giving it access to outside resources, the conventional wisdom holds up just fine. The operator and consumption of 3rd party stuff are weak-points for all IT, and have been for ages. Just continue to train folks to not do insecure things, and re-think letting agents go online for anything/everything (which is arguably not a local solution anyway).
Where this article fails the worse: in my experience smaller local models are not often used in agentic tasks that involve code execution so much of the otherwise OK points don't apply. Also, when I have played with, for example, the Agno agent library with local models, I have the application code print/display any generated Python code before execution, and local sandboxing is not difficult to do!
Local models and embedded models excel at data transformation, NLP tasks, etc.
Especially with agentic browsers like OpenAI Atlas, Comet, etc., there are real security concerns. Probably more of a concern that running local models.
If you have absolutely no idea what you're doing, well, then it doesn't really matter in the end, does it? You're never gonna recognize any security vulnerabilities (as has happened many times with LLM-assisted "no-code" platforms and without any actual malicious intent), and you're going to deploy unsafe code either way.
Having access to open models is great, and even if their capabilities are somewhat lower than the closed-source SoTA models, and we should be aware of the differences in behavior.
the keyword here is "more". The big models might not be quite as susceptible to them, but they are still susceptible. If you expect these attacks to be fully handled, then maybe you should change your expectations.
Well this is wrong. And it's exactly this type of thinking why people will get absolutely burned by this.
First off the fact they chose obvious exploits for explanatory purposes doesn't mean this attack only supports obvious exploits...
And to your second point of "review the code before you deploy to prod", the second attack did not involve deploying any code to prod. It involved an LLM reading a reddit comment or github comment and immediately executing.
People not taking security seriously and waving it off as trivial is what's gonna make this such a terrible problem.
right, so you shouldn't give the LLM access to execute arbitrary commands without review.
I thought that local LLMs means they run on local computers, without being exposed to the internet.
If an attacker can exploit a local LLM, means it already compromised you system and there are better things they can do than trick the LLM to get what they can get directly.
And this is why prompt injection really isn't a solvable problem on the LLM side. You can't do the equivalent of (grep -i "DROP TABLE" form_input). What you can do is not just blindly execute LLM generated code.
I will fight and die on the hill that "LLMs don't need the internet to be useful"
Having Claude Code able to try out JSON APIs and pip install extra packages is a huge upgrade from that though!
Someone who finds it useful to have a local llm ingest internet content is not contrary to you finding uses that don't.
is not "someone finding useful to have a local llm ingest internet content" - it was someone suggesting that nothing useful can be done without internet access.
Sounds like the Open Source model did exactly as it was prompted, where the "Closed" AI did the wrong thing and disregarded the prompt.
That means the closed model was actually the one that failed the alignment test.
In theory any two of the trifecta is fine, but practically speaking I think you only need "ability to communicate with the outside," or maybe not even that. Business logic is not really private data anymore. Most devs are likely one `npm update` away from their LLM getting a new command from some transitive dependency.
The LLM itself is also a giant blackbox of unverifiable untrusted data, so I guess you just have to cross your fingers on that one. Maybe your small startup doesn't need to be worried about models being seeded with adversarial training data, but if I were say Coinbase I'd think twice before allowing LLM access to anything.
If you are executing local malicious/unknown code for reasons you need to read this...
If you are using any LLM's reasoning ability as a security boundary, something is deeply, deeply wrong.
Seems obvious to me that you should fully vet whatever goes to LLM.
With internal documentation and tickets I think you would have bigger issues... And external documentation. Well maybe there should be tooling to check that. Not expert on MCP. But vetting goes there too.
Also from the article: For example, a small model could easily flag the presence of eval() in the generated code, even if the primary model was tricked into generating it.
People are losing their critical thinking. AI is great, yes, but there’s no need to throw it like a grenade at every problem: There’s nothing in that snippet or surrounding bits from the article that needs an entire model-on-model architecture to resolve. Some keyword filters, other inputs sanitizing processes such as were learned way back in the golden years of sql injection attacks. But these are the lines of BS coming for your CTO’s, spinning them tales about the need for their own prompt-engineered fine tunes w/ laser sighted tokens that will run as edge models and shoot down everything from context injected eval() responses to phishing scams and more, and all require their monthly/annual LoRa for purchasing to stay timely on the attacks. At least if this article is smelling the way I think it is.
But that's the thing, keyword filters aren't enough because you can smuggle hidden instructions in any number of ways that don't involve blacklisted words like "eval" or "ignore previous". Moreover "back in the golden years of sql injection attacks", keyword filters were often (mis)used in a misguided way of fixing SQLI exploits, because they can often be bypassed with escape characters and other shenanigans.
Sometimes I wonder if HN people really realize 80% of people out there haven't even heard of ChatGPT, and the remaining 19% have not heard about Claude/Gemini. It's only a small group who know local models exist. We're them, and we complain about their security...
It assumes that local models are inherently worse. But from a software perspective that's nonsense because there is no reason it couldn't be the exact same software. And from a hardware perspective the theory would have to be that the centralized system is using more expensive hardware, but there are two ways around that. The first is that you can sacrifice speed for cost -- x86 servers are slower than GPUs but can run huge models because they support TBs of memory. And the second is that you can, of course, buy high end local hardware, as many enterprises might choose to do, especially when they have enough internal users to keep it busy.
Obviously we can’t run GPT-5 or the cutting edge version of Claude or whatever locally, because OpenAI or Anthropic are keeping those weights as closely kept secrets.
Moreover, even that's presuming that you would only use the best available model, but that's also likely to be the one which is the most resource intensive and the most expensive, and then you can't afford it anyway. Meanwhile to use their smaller models you're still paying their margin, whereas if you use a local model you can spend that money on hardware. The bigger local model can beat the smaller proprietary one for the same price.
There's nothing like that for LLMs.
The key thing (really, the only thing) about parameterized queries is that they allow you to provide code and data with a hard separation between the two.
LLMs don't have anything of the sort. They only take in one kind of thing. They don't even have a notion of code versus data that you could separate, or fail to separate. All you can do is either tolerate it sometimes taking instructions from the stuff you want treated as "data," or never give it anything you consider "data." You propose this second one. But never giving it "data" is very different from a feature that allows you to provide arbitrary data with total safety.
Is the author implying that some random joe hacker writes a blog with the content. Then a <insert any LLM training set> picks up this content thinking its real/valid. A developer within a firm then asks to write something using said LLM references the information from that blog and now there is a security error?
Possible? Technically sure. Plausible? That's ummm a stretch.
To me this article reads as a celebration of how much better frontier models have gotten at defending against security flaws, rather than “open models bad”.
Eventually the tools we use everywhere will be “good enough to use and not worry”. This is foreign to software people, but only a Jedi deals in absolutes.
This is like saying it's safer to be exposed to dangerous carcinogenic fumes than nerve gas, when the solution is wearing a respirator.
Also what are you doing allowing someone else to prompt your local LLM?
What? You run a local LLM for privacy, i.e. because you don't want to share data with $BIGCORP. That has very little to do with the security of the generated code (running in a particular environment).