Deno Sandbox

(deno.com)

523 points | by johnspurlock 2 days ago

38 comments

  • simonw 2 days ago
    Note that you don't need to use Deno or JavaScript at all to use this product. Here's their Python client SDK: https://pypi.org/project/deno-sandbox/

      from deno_sandbox import DenoDeploy
      
      sdk = DenoDeploy()
      
      with sdk.sandbox.create() as sb:
          # Run a shell command
          process = sb.spawn("echo", args=["Hello from the sandbox!"])
          process.wait()
      
          # Write and read files
          sb.fs.write_text_file("/tmp/example.txt", "Hello, World!")
          content = sb.fs.read_text_file("/tmp/example.txt")
          print(content)
    
    Looks like the API protocol itself uses websockets: https://tools.simonwillison.net/zip-wheel-explorer?package=d...
    • rdhyee 3 hours ago
      Took this idea and ran with it using Fly's Sprites, inspired by Simon's https://simonwillison.net/2026/Feb/3/introducing-deno-sandbo.... Use case: Claude Code running in a sandboxed Sprite, making authenticated API calls via a Tokenizer proxy without credentials ever entering the sandbox.

      Hit a snag: Sprites appear network-isolated from Fly's 6PN private mesh (fdf:: prefix inside the Sprite, not fdaa::; no .internal DNS). So a Tokenizer on a Fly Machine isn't directly reachable without public internet.

      Asked on the Fly forum: https://community.fly.io/t/can-sprites-reach-internal-fly-se...

      @tptacek's point upthread about controlling not just hosts but request structure is well taken - for AI agent sandboxing you'd want tight scoping on what the proxy will forward.

    • koakuma-chan 1 day ago
      Because the sandbox is on their cloud, not on your local machine, which wasn't obvious to me.
      • sli 22 hours ago
        It's stated under the "Sandboxes?" heading.

        > Deno Sandbox gives you lightweight Linux microVMs (running in the Deno Deploy cloud) ...

    • ChatGPTBanger 1 day ago
      [dead]
  • emschwartz 2 days ago
    > In Deno Sandbox, secrets never enter the environment. Code sees only a placeholder

    > The real key materializes only when the sandbox makes an outbound request to an approved host. If prompt-injected code tries to exfiltrate that placeholder to evil.com? Useless.

    That seems clever.

    • motrm 2 days ago
      Reminds me a little of Fly's Tokenizer - https://github.com/superfly/tokenizer

      It's a little HTTP proxy that your application can route requests through, and the proxy is what handles adding the API keys or whatnot to the request to the service, rather than your application, something like this for example:

      Application -> tokenizer -> Stripe

      The secrets for the third party service should in theory then be safe if there is some leak or compromise of the application, since the application never knows the actual secrets itself.

      Cool idea!
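      The pattern can be sketched in a few lines of Python (purely illustrative; the host name, key value, and function names are made up, not Fly's actual tokenizer):

```python
# Sketch of the tokenizer pattern: the proxy alone holds the real
# secret and injects it into outbound requests, so the application
# never handles credentials. All names/values here are made up.

REAL_SECRETS = {"stripe": "sk_live_real_key"}  # known only to the proxy

def proxy_forward(upstream_host: str, headers: dict) -> dict:
    """Return the headers the proxy would actually send upstream."""
    out = dict(headers)
    if upstream_host == "api.stripe.com":
        # The application sent no credential; the proxy adds it.
        out["Authorization"] = f"Bearer {REAL_SECRETS['stripe']}"
    return out

# The application builds a credential-free request...
app_headers = {"Content-Type": "application/json"}
sent = proxy_forward("api.stripe.com", app_headers)
# ...so even a full compromise of the application leaks nothing:
assert "Authorization" not in app_headers
assert sent["Authorization"] == "Bearer sk_live_real_key"
```

      The key property: a leak of everything the application can see still yields no credential, because the secret exists only inside the proxy.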

      • tptacek 2 days ago
        It's exactly the tokenizer, but we shoplifted the idea too; it belongs to the world!

        (The credential thing I'm actually proud of is non-exfiltratable machine-bound Macaroons).

        Remember that the security promises of this scheme depend on tight control over not only which hosts you'll send requests to, but also which parts of the requests themselves you'll substitute into.

        • orf 1 day ago
          How does this work with more complex authentication schemes, like AWS?
          • solatic 1 day ago
              AWS has a more powerful abstraction already, where you can condition permissions such that they are only granted when the request comes from a certain VPC or IP address (i.e. a VPN exit). Malware could thus exfiltrate real credentials, but they'll be worthless.
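              For example, a hypothetical policy along these lines (the VPC id is a placeholder; this particular condition key applies to requests arriving via a VPC endpoint):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": "*",
    "Condition": {
      "StringNotEquals": { "aws:SourceVpc": "vpc-0example" }
    }
  }]
}
```

              With something like this attached, exfiltrated keys fail every call made from outside the approved network location.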
            • tptacek 1 day ago
              I'm not prepared to say which abstraction is more powerful but I do think it's pretty funny to stack a non-exfiltratable credential up against AWS given how the IMDS works. IMDS was the motivation for machine-locked tokens for us.
              • solatic 1 day ago
                There are two separate concerns here: who the credentials are associated with, and where the credentials are used. IMDS's original security flaw was that it only covered "who" the credentials were issued to (the VM) and not where they were used, but aforementioned IAM conditions now ensure that they are indeed used within the same VPC. If a separate proxy is set up to inject credentials, then while this may cover the "where" concern, care must still be taken on the "who" concern, i.e. to ensure that the proxy does not fall to confused deputy attacks arising from multiple sandboxed agents attempting to use the same proxy.
                • tptacek 1 day ago
                  There are lots of concerns, not just two, but the point of machine-bound Macaroons is to address the IMDS problem.
        • svieira 2 days ago
          Did the machine-bound Macaroons ever get written up publicly or is that proprietary?
      • pbowyer 1 day ago
        This reminds me of a SaaS that existed 15+ years ago for PCI-DSS compliance. It did exactly that: you had it tokenize and store the sensitive data, and then you proxied your requests via it, and it inserted them into the request. It was a very neat way to get around storing data yourself.

        I cannot remember what the platform was called, let me know if you do.

        • krab 1 day ago
          There are multiple companies doing that. I was using one a few years ago, also don't remember the name, haha.

          I guess it's an obvious thing to sell, if you go through the process of PCI-DSS compliance. We were definitely considering splitting the company into a part that can handle this data and the rest of the business. The first part could then offer the service to other businesses, too.

      • dtkav 1 day ago
        I've been working on something similar (with claude code).

        It's a sandbox that uses envoy as a transparent proxy locally, and then an external authz server that can swap the creds.

        The idea is extended further in that the goal is to allow an org to basically create their own authz system for arbitrary upstreams, and then for users to leverage macaroons to attenuate the tokens at runtime.

        It isn't finished but I'm trying to make it work with ssh/yubikeys as an identity layer. The authz macaroon can have a "hole" that is filled by the user/device attestation.

        The sandbox has some nice features like browser forwarding for Claude oauth and a CDP proxy for working with Chrome/Electron (I'm building an Obsidian plugin).

        I'm inspired by a lot of the fly.io stuff in tokenizer and sprites. Exciting times.

        https://github.com/dtkav/agent-creds

    • ptx 1 day ago
      Yes... but...

      Presumably the proxy replaces any occurrence of the placeholder with the real key, without knowing anything about the context in which the key is used, right? Because if it knew that the key was to be used for e.g. HTTP basic auth, it could just be added by the proxy without using a placeholder.

      So all the attacker would have to do then is find an endpoint (on one of the approved hosts, granted) that echoes back the value, e.g. "What is your name?" -> "Hello $name!", right?

      But probably the proxy replaces the real key when it comes back in the other direction, so the attacker would have to find an endpoint that does some kind of reversible transformation on the value in the response to disguise it.

      It seems safer and simpler to, as others have mentioned, have a proxy that knows more about the context add the secrets to the requests. But maybe I've misunderstood their placeholder solution or maybe it's more clever than I'm giving it credit for.
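      The worry can be sketched concretely (all values made up; this is a guess at how a naive two-way substitution would behave, not Deno's actual implementation):

```python
import base64

SECRET = "sk_real_secret_value"
PLACEHOLDER = "DENO_SECRET_PLACEHOLDER_abc123"

def proxy_echo(request_body: str) -> str:
    """Naive proxy: substitute the placeholder anywhere in the request,
    then scrub the raw secret out of the echoed response."""
    upstream_sees = request_body.replace(PLACEHOLDER, SECRET)
    response = f"Hello {upstream_sees}!"          # endpoint echoes input
    return response.replace(SECRET, PLACEHOLDER)  # reverse substitution

# A verbatim echo is caught by the reverse substitution:
assert SECRET not in proxy_echo(PLACEHOLDER)

def proxy_echo_b64(request_body: str) -> str:
    """Same proxy, but the endpoint applies a reversible transform."""
    upstream_sees = request_body.replace(PLACEHOLDER, SECRET)
    response = base64.b64encode(upstream_sees.encode()).decode()
    return response.replace(SECRET, PLACEHOLDER)  # scrub finds nothing

# The transform defeats the scrub; the attacker just inverts it:
leaked = proxy_echo_b64(PLACEHOLDER)
assert base64.b64decode(leaked).decode() == SECRET
```

      So scrubbing the response only blocks verbatim reflections; any endpoint that transforms its input reversibly reopens the hole.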

      • booi 1 day ago
        Where would this happen? I have never seen an API reflect a secret back but I guess it's possible? perhaps some sort of token creation endpoint?
        • saghm 1 day ago
          The point is that without semantic knowledge, there's no way of knowing whether the API actually considers it a secret. If you're using the GitHub API and have it listed as an approved host, but the sandbox doesn't predefine which fields may legitimately contain the token, a malicious application could put the placeholder in the body of an API request making a public gist or something, which then gets replaced with the actual secret. To avoid this, the sandbox would need some way of enforcing which fields in the API itself are safe. For a widely used API like GitHub's, this might be something built-in, but to support arbitrary APIs people might want to use, there would probably have to be some way of manually configuring the list of fields that are considered safe.

          From various other comments in this thread, though, it sounds like this is already well-established territory that past tools have explored. It's not super clear to me how much of this is actually implemented for Deno Sandboxes, but I'd hope they took into account the prior art that seems to have already come up with techniques for handling very similar issues.

        • ptx 1 day ago
          How does the API know that it's a secret, though? That's what's not clear to me from the blog post. Can I e.g. create a customer named PLACEHOLDER and get a customer actually named SECRET?
          • adastra22 1 day ago
            This blog post is very clearly AI generated, so I’m not sure it knows either.
        • mananaysiempre 1 day ago
          Say, an endpoint tries to be helpful and responds with “no such user: foo” instead of “no such user”. Or, as a sibling comment suggests, any create-with-properties or set-property endpoint paired with a get-property one also means game over.

          Relatedly, a common exploitation target for black-hat SEO and even XSS is search pages that echo back the user’s search request.

        • tptacek 1 day ago
          It depends on where you allow the substitution to occur in the request. It's basically "the big bug class" you have to watch out for in this design.
        • tczMUFlmoNk 1 day ago
          This is effectively what happened with the BotGhost vulnerability a few months back:

          https://news.ycombinator.com/item?id=44359619

        • Tepix 1 day ago
          HTTP Header Injection or HTTP Response Splitting is a thing.
      • sothatsit 1 day ago
        Could the proxy place further restrictions like only replacing the placeholder with the real API key in approved HTTP headers? Then an API server is much less likely to reflect it back.
        • tptacek 1 day ago
          It can, yes. (I don't know how Deno's work, but that's how ours works.)
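          A sketch of that restriction (hypothetical names, not Deno's or Fly's actual code): the placeholder is replaced only in header values and only for approved hosts, so a request body that reaches the upstream still contains just the inert placeholder.

```python
PLACEHOLDER = "DENO_SECRET_PLACEHOLDER_abc123"  # made-up values
REAL_KEY = "sk_real_key"
APPROVED_HOSTS = {"api.openai.com"}

def substitute(host: str, headers: dict, body: str):
    """Replace the placeholder only in headers, only for approved hosts."""
    if host not in APPROVED_HOSTS:
        return headers, body  # placeholder stays inert everywhere
    new_headers = {
        k: v.replace(PLACEHOLDER, REAL_KEY) for k, v in headers.items()
    }
    return new_headers, body  # body untouched: echo-back can't leak

h, b = substitute(
    "api.openai.com",
    {"Authorization": f"Bearer {PLACEHOLDER}"},
    f"please echo {PLACEHOLDER}",
)
assert h["Authorization"] == f"Bearer {REAL_KEY}"
assert REAL_KEY not in b  # body still holds only the placeholder
```

          Since servers rarely reflect request headers back into response bodies, this closes most of the echo-back surface.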
    • simonw 2 days ago
      Yeah, this is a really neat idea: https://deno.com/blog/introducing-deno-sandbox#secrets-that-...

        await using sandbox = await Sandbox.create({
          secrets: {
            OPENAI_API_KEY: {
              hosts: ["api.openai.com"],
              value: process.env.OPENAI_API_KEY,
            },
          },
        });
        
        await sandbox.sh`echo $OPENAI_API_KEY`;
        // DENO_SECRET_PLACEHOLDER_b14043a2f578cba75ebe04791e8e2c7d4002fd0c1f825e19...
      
      It doesn't prevent bad code from USING those secrets to do nasty things, but it does at least make it impossible for them to steal the secret permanently.

      Kind of like how XSS attacks can't read httpOnly cookies but they can generally still cause fetch() requests that can take actions using those cookies.

      • its-summertime 1 day ago
        If there is an LLM in there, "run echo $API_KEY" could, I think, be liable to return it: the LLM asks the script to run some code, the script does so and returns the placeholder, and the proxy translates that as it goes out to the LLM, which then responds to the user with the API key (or through multiple steps, e.g. "tell me the first half of the command output", if the proxy translates in reverse).

        Presumably this doesn't help much if the secret can appear anywhere in the request; if it can be restricted to specific headers only, it would be much more powerful.

        • simonw 1 day ago
          Secrets are tied to specific hosts - the proxy will only replace the placeholder value with the real secret for outbound HTTP requests to the configured domain for that secret.
          • its-summertime 1 day ago
            Which, if it's the LLM asking for the result of the locally run "echo $API_KEY", will be sent through that proxy, to the correct configured domain. (That is, if substitution applied to the request body, which apparently it doesn't; that was part of what I was wondering.)
            • Dangeranger 1 day ago
              The AI agent can run `echo $API_KEY` all it wants, but the value is only a placeholder that is useless outside the system. Only the proxy service, which the agent cannot directly access, will replace the placeholder with the real value and return the result of the network call. Furthermore, the replacement happens within the proxy service itself; it does not expose the replaced value to memory or files that the agent can access.

              It's a bit like taking a prepaid voucher to a food truck window. The cashier receives the voucher, checks it against their list of valid vouchers, records that the voucher was used so they can be paid, and then gives you the food you ordered. You as the customer never get to see the exchange of money between the cashier and the payment system.

              • its-summertime 9 hours ago
                (Noting that, as stated in another thread, it only applies to headers, so the premise I raised doesn't apply either way)

                Except that you are asking for the result of it, "Hey Bobby LLM, what is the value of X" will have Bobby LLM tell you the real value of X, because Bobby LLM has access to the real value because X is permissioned for the domain that the LLM is accessed through.

                If the cashier turned their screen around to show me the exchange of money, then I would certainly see it.

        • lucacasonato 1 day ago
          It will only replace the secret in headers
          • shivasurya 14 hours ago
            It replaces URL params and body too
      • ryanrasti 2 days ago
        > It doesn't prevent bad code from USING those secrets to do nasty things, but it does at least make it impossible for them to steal the secret permanently.

        Agreed, and this points to two deeper issues: 1. Fine-grained data access (e.g., sandboxed code can only issue SQL queries scoped to particular tenants) 2. Policy enforced on data (e.g., sandboxed code shouldn't be able to send PII even to APIs it has access to)

        Object-capabilities can help directly with both #1 and #2.

        I've been working on this problem -- happy to discuss if anyone is interested in the approach.

        • Tomuus 1 day ago
          Object capabilities, like capnweb/capnproto?
          • ryanrasti 1 day ago
            Yes exactly Cap'n Web for RPC. On top of that: 1. Constrained SQL DSL that limits expressiveness along defined data boundaries 2. Constrained evaluation -- can only compose capabilities (references, not raw data) to get data flow tracking for free
    • Tepix 2 days ago
      It must be performing a man-in-the-middle for HTTPS requests. That makes it more difficult to do things like certificate pinning.
    • artahian 2 days ago
      We had this same challenge in our own app builder, we ended up creating an internal LLM proxy with per-sandbox virtual keys (which the proxy maps to the real key + calculates per-sandbox usage), so even if the sandbox leaks its key it doesn't impact anything else.
    • jkelleyrtp 1 day ago
      @deno team, how do secrets work for things like connecting to DBs over a tcp connection? The header find+replace won't work there, I assume. Is the plan to add some sort of vault capability?
    • perfmode 2 days ago
      I was just about to say the same thing. Cool technique.
    • CuriouslyC 2 days ago
      This is an old trick that people do with Envoy all the time.
    • verdverm 2 days ago
      Dagger has a similar feature: https://docs.dagger.io/getting-started/types/secret/

      Same idea with more languages on OCI. I believe they have something even better in the works, that bundles a bunch of things you want in an "env" and lets you pass that around as a single "pointer"

      I use this here, which eventually becomes the sandbox my agent operates in: https://github.com/hofstadter-io/hof/blob/_next/.veg/contain...

    • linolevan 2 days ago
      It’s pretty neat.

      Had some previous discussion that may be interesting on https://news.ycombinator.com/item?id=46595393

    • syabro 1 day ago
      I don’t quite get how it’s being injected in https requests… do they inject their own https cert?
    • rfoo 2 days ago
      I like this, but the project mentioned in the launch post

      > via an outbound proxy similar to coder/httpjail

      looks like AI slop ware :( I hope they didn't actually run it.

      • lucacasonato 1 day ago
        We run our own infrastructure for this (and everything else). The link was just an illustrative example.
  • johnspurlock 2 days ago
    "Over the past year, we’ve seen a shift in what Deno Deploy customers are building: platforms where users generate code with LLMs, and that code runs immediately without review. That code frequently calls LLMs itself, which means it needs API keys and network access.

    This isn’t the traditional “run untrusted plugins” problem. It’s deeper: LLM-generated code, calling external APIs with real credentials, without human review. Sandboxing the compute isn’t enough. You need to control network egress and protect secrets from exfiltration.

    Deno Sandbox provides both. And when the code is ready, you can deploy it directly to Deno Deploy without rebuilding."

    • twosdai 2 days ago
      Like the emdash, whenever I read "this isn't x it's y" my dumb monkey brain goes "THAT'S AI", regardless of whether it's true or not.
      • aiahs 1 day ago
        For me it's the "why this matters", "why this works", etc
        • TheTaytay 1 day ago
          Ugh - yes. I’m seriously close to writing a chrome extension just to warn me or block pages that have that phrase…it’s irrational because there are so many legitimate uses, but they are dead to me.
          • FooBarWidget 1 day ago
            I don't know man, I feel emboldened to keep using emdash exactly because I want to protest against people equating emdash with "AI reply" even though there are very legitimate uses for emdash.
      • bangaladore 1 day ago
        Another common tell nowadays is the apostrophe type (’ vs ').

        I don't know personally how to even type ’ on my keyboard. According to find in chrome, they are both considered the same character, which is interesting.

        I suspect some word processors default to one or the other, but it's becoming all too common in places like Reddit and emails.

        • signal11 1 day ago
          If you work with macOS or iOS users, you won’t be super surprised to see lots of “curly quotes”. They’re part of base macOS, no extra software required (I cannot remember if they need to be switched on or they’re on by default), and of course mass-market software like Word will create “smart” quotes on Mac and Windows.

          I ended up implementing smart quotes on an internal blogging platform because I couldn’t bear "straight quotes". It’s just a few lines of code and makes my inner typography nerd twitch less.

        • deathanatos 1 day ago
          > According to find in chrome, they are both considered the same character, which is interesting.

          Browsers do a form of normalization in search. It's really useful, since it means "resume" will match résumé, unless of course you disable it (in Firefox, this is the "Match Diacritics" checkbox). (Also: itʼs, it's; if you want to see it in action on those two words.)

        • int_19h 1 day ago
          Word (you know, the most popular word processor out there) will do that substitution. And on macOS & iOS, it's baked into the standard text input widgets so it'll do that basically everywhere that is a rich text editor.
      • signal11 1 day ago
        I’ve been using em-dashes since high school — publishing the school paper and everything. I remain slightly bemused by people discovering em-dashes for the first time thanks to LLMs.

        Also, “em-dashes are something only LLMs use” comes perilously close to “huh, proper grammar, must’ve run this by a grammar checker”.

        • Latty 1 day ago
          I started using them when I discovered the compose key and it became easy to type them, but I've genuinely considered stopping using them for this reason.
      • yawnxyz 14 hours ago
        the problem with this is that people are adapting their REAL SPEECH to this pattern, so people are actually saying this in real conversations

        (we do this all the time; eg. a new popular saying lands in an episode of a tv show, and then other people start adopting it, even subconsciously)

      • pawelduda 1 day ago
        it's the <<<<gold-standard>>>> for spotting LLMs in the wild

        (that's what Gemini would say)

      • lucacasonato 2 days ago
        I can confirm Ryan is a real human :)
        • zamadatix 2 days ago
          Is there a chance you could ask Ryan if he had an LLM write/rewrite large parts of this blog post? I don't mind at all if he did or didn't in itself, it's a good and informative post, but I strongly assumed the same while reading the article and if it's truly not LLM writing then it would serve as a super useful indicator about how often I'm wrongly making that assumption.
          • bonsai_spool 1 day ago
            There are multiple signs of LLM-speak:

            > Over the past year, we’ve seen a shift in what Deno Deploy customers are building: platforms where users generate code with LLMs and that code runs immediately without review

            This isn't a canonical use of a colon (and the dependent clause isn't even grammatical)!

            > This isn’t the traditional “run untrusted plugins” problem. It’s deeper: LLM-generated code, calling external APIs with real credentials, without human review.

            Another colon-offset dependent paired with the classic, "This isn't X. It's Y," that we've all grown to recognize.

            > Sandboxing the compute isn’t enough. You need to control network egress and protect secrets from exfiltration.

            More of the latter—this sort of thing was quite rare outside of a specific rhetorical goal of getting your reader excited about what's to come. LLMs (mis)use it everywhere.

            > Deno Sandbox provides both. And when the code is ready, you can deploy it directly to Deno Deploy without rebuilding.

            Good writers vary sentence length, but it's also a rhetorical strategy that LLMs use indiscriminately with no dramatic goal or tension to relieve.

            'And' at the beginning of sentences is another LLM-tell.

            • jonny_eh 1 day ago
              > It’s deeper: LLM-generated code, calling external APIs with real credentials, without human review.

              This also follows the rule of 3s, which LLMs love, there ya go.

              • johnfn 1 day ago
                Yeah, I feel like this is really the smoking gun. Because it's not actually deeper? An LLM running untrusted code is not some additional level of security violation above a plugin running untrusted code. I feel like the most annoying part of "It's not X, it's Y" is that agents often say "It's not X, it's (slightly rephrased X)", lol, but it takes like 30 seconds to work that out.
                • jonny_eh 1 day ago
                  It's not just different way of saying something, it's a whole new way to express an idea.
            • r00f 1 day ago
              Can it be that after reading so many LLM texts we will just subconsciously follow the style, because that's what we are used to? No idea how this works for native English speakers, but I know that I lack my own writing style and it is just a pseudo-LLM mix of Reddit/IRC/technical documentation, as those were the places where I learned written English.
              • bonsai_spool 1 day ago
                Yes, I think you're right—I have a hard time imagining how we avoid such an outcome. If it matters to you, my suggestion is to read as widely as you're able to. That way you can at least recognize which constructions are more/less associated with an LLM.

                When I was first working toward this, I found the LA Review of Books and the London Review of Books to be helpful examples of longform, erudite writing. (edit - also recommend the old standards of The New Yorker and The Atlantic; I just wanted to highlight options with free articles).

                I also recommend reading George Orwell's essay Politics and the English Language.

              • nananana9 1 day ago
                Given that a lot of us actively try to avoid this style, and immediately disregard text that uses it as not worth reading (a very useful heuristic given the vast amount of LLM-generated garbage), I don't think that would make us more prone to write in this manner. In fact I've actively caught myself editing text I've written to avoid certain LLMisms.
            • twoodfin 1 day ago
              Great list. Another tell is pervasive use of second-person perspective: “We’ve all been there.” “Now you have what you need.”

              As you say, this is cargo cult rhetorical style. No purpose other than to look purposeful.

            • tadfisher 1 day ago
              It's unfortunate that, given the entire corpus of human writing, LLMs have seemingly been fine-tuned to reproduce terrible ad copy from old editions of National Geographic.

              (Yes, I split the infinitive there, but I hate that rule.)

          • javier123454321 2 days ago
            As someone who has a habit of maybe overusing em dashes to my detriment, and something I try to be mindful of in general: this whole thing of assuming it's AI generated now is a huge blow. It feels like a personal attack.
            • zamadatix 1 day ago
              "—" has always seemed like a particularly weak/unreliable signal to me, if it makes you feel any better. Triply so in any content where one would expect smart quotes or formatted lists, but even in general.

              RIP anyone who had a penchant for "not just x, but y" though. It's not even a go-to wording for me and I feel the need to rewrite it any time I type it out of fear it'll sound like LLMs.

              • zbentley 1 day ago
                > RIP anyone who had a penchant for "not just x, but y" though

                I felt that. They didn’t just kidnap my boy; they massacred him.

            • adastra22 1 day ago
              It’s about more than the emdash. The LLM writing falls into very specific repeated patterns that become extremely obvious tells. The first few paragraphs of this blog post could be used in a textbook as it exhibits most of them at once.
          • calebhwin 2 days ago
            [dead]
      • Bnjoroge 1 day ago
        couldnt agree more. It's frankly very fatiguing
  • chacham15 1 day ago
    I am so confused at how this is supposed to work. If the code, running in whatever language, does any sort of transform with the key that it thinks it has, doesn't this break? E.g. OAuth 1 signatures, JWTs, HMACs...

    Now that I think further, doesn't this also potentially break HTTP semantics? E.g. if the key is part of the payload, then a data.replace(fake_key, real_key) can change the body length without actually updating the Content-Length header, right?

    Lastly, this still doesn't protect you from other sorts of malicious attacks (e.g. 'DROP TABLE Users;')... Right? This seems like a mitigation, but hardly enough to feel comfortable giving an LLM direct access to prod, no?
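    The Content-Length concern applies to any proxy that substitutes inside the body, since the placeholder and the real key generally differ in length. A hypothetical sketch of the framing fix (made-up values; the signature/HMAC problem has no such fix, because the sandboxed code signed the wrong bytes):

```python
PLACEHOLDER = "DENO_SECRET_PLACEHOLDER_abc"  # 27 bytes, made up
REAL_KEY = "sk_live_0123456789"              # 18 bytes, made up

def rewrite(headers: dict, body: bytes) -> tuple[dict, bytes]:
    """Substitute in the body, then recompute Content-Length so the
    HTTP/1.1 framing stays consistent. A proxy that skips the second
    step sends a header that disagrees with the body it forwards."""
    new_body = body.replace(PLACEHOLDER.encode(), REAL_KEY.encode())
    new_headers = dict(headers)
    new_headers["Content-Length"] = str(len(new_body))
    return new_headers, new_body

body = f'{{"key": "{PLACEHOLDER}"}}'.encode()
headers = {"Content-Length": str(len(body))}
new_headers, new_body = rewrite(headers, body)
assert new_headers["Content-Length"] == str(len(new_body))
assert len(new_body) != len(body)  # naive pass-through would be wrong
```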

    • nusl 1 day ago
      My understanding is that it only surfaces the real keys when the request is actually sent under the hood, and doesn't make it available to the code itself, so that LLMs aren't able to query the key values. They have placeholder values for what seems to be obfuscation purposes, so that the LLM receives a fake value if it tries, which would help with stuff like prompt injection since that value is useless.
  • freakynit 1 day ago
    It's always the exorbitant price with such offerings.

    A 2 vCPU, 4 GB RAM, and 40 GB disk instance on Hetzner costs 4.13 USD per month.

    The same here is:

    $127.72 without pro plan, and $108.72 with pro plan.

    This means to break even, I can only use this for 4.13/127.72*730 = 23.6 hours every month, or, less than an hour daily.

    • nusl 1 day ago
      The article mentions that it's compute time spent deploying the code and not "wall clock" time, so I don't think it's quite this bad?
  • koolala 2 days ago
    The free plan makes me want to use it like Glitch. But every free service like this ever has been burned...
  • zenmac 2 days ago
    >Deno Sandbox gives you lightweight Linux microVMs (running in the Deno Deploy cloud)

    The real question is can the microVMs run in just plain old linux, self-hosted.

    • echelon 2 days ago
      Everyone wants to lock you in.

      Unfortunately there's no other way to make money. If you're 100% liberally licensed, you just get copied. AWS/GCP clone your product, offer the same offering, and they take all the money.

      It sucks that there isn't a middle ground. I don't want to have to build castles in another person's sandbox. I'd trust it if they gave me the keys to do the same. I know I don't have time to do that, but I want the peace of mind.

      • ushakov 1 day ago
        we have 100% open-source Sandboxes at E2B

        git: https://github.com/e2b-dev/infra

        wiki: https://deepwiki.com/e2b-dev/infra

        • codethief 5 hours ago
          This looks neat! How difficult would it be to run everything locally?
        • dizhn 1 day ago
          This is exactly what I am building for a friend in a semi-amateur fashion with LLMs. Looking at your codebase, I would probably end up with something very similar in 6 months. You even have an Air toml and use Firecracker, not to mention using Go. Great minds think alike I suppose :D. Mine is not for AI but for running unvetted data science scripts, simple stuff mostly. I am using rootless podman (I think you are using Docker? or perhaps Packer, which is a tool I didn't know about until now) to create the microVM images, and the images have no network access. We're creating a .ext4 disk image to bring in the data/script.

          I think I might just "take" this if the resource requirements are not too demanding. Thanks for sharing. Do you have docs for deploying on bare metal?

        • echelon 1 day ago
          This is what I like to see!

          Not sure what your customers look like, but I'd for one also be fine with "fair source" licenses (there are several - fair source, fair code, Defold license, etc.)

          These give customers 100% control but keep Amazon, Google, and other cling-on folks like WP Engine from reselling your work. It avoids the Docker, Elasticsearch, Redis fate.

          "OSI" is a submarine from big tech hyperscalers that mostly take. We should have gone full Stallman, but fair source is a push back against big tech.

          • ushakov 1 day ago
            we aren’t worried about that.

            when we were starting out we figured there was no solution that would satisfy our requirements for running untrusted code. so we had to build our own.

            the reason we open-sourced this is because we want everyone to be able to run our Sandboxes - in contrast to the majority of our competitors, whose goal is to lock you in to their offering.

            with open-source you have the choice, and luckily Manus, Perplexity, Nvidia choose us for their workloads.

            (opinions my own)

  • regisb 13 hours ago
    Is this Extism, but running as a service? https://extism.org/ It seems to me that a key feature of Extism is host functions (which can be called from the sandbox). But maybe I'm not comparing apples to apples?
  • yakkomajuri 1 day ago
    Secret placeholders seem like a good design decision.

    So many sandbox products these days though. What are people using in production and what should one know about this space? There's Modal, Daytona, Fly, Cloudflare, Deno, etc

    • ATechGuy 1 day ago
      These are all wrappers around VMs. You could DIY these easily by using EC2/serverless/GCP SDKs.
      • thundergolfer 1 day ago
        Modal engineer here. This isn’t correct. You can DIY this, but certainly not by wrapping EC2, which uses the Nitro hypervisor and is not optimized for startup time.

        Nearly all players in this space use gVisor or Firecracker.

        • sebmellen 1 day ago
          Do you know Eric Zhang by chance? I went to school with him and saw that he was at Modal sometime back. Potentially the smartest person I’ve ever met… and a very impressive technical mind.

          Super impressed with what you’ve all done at Modal!

          • thundergolfer 20 hours ago
            yeh of course I worked with him for a few years! Agree, smartest person I've ever worked with, and there's a smart crowd at Modal.
      • easton 1 day ago
        You can and can’t, at least in AWS. For instance, you can’t launch an EC2 instance to the point where you can SSH in within 8-10 seconds (and it takes a while for EBS to sync the entire disk from S3).

        Many a time I have tried to figure out a self-scaling EC2-based CI system, but could never get everything scaled and warm in less than 45 seconds, which is sucky when you're waiting on a job to launch. These microVM-as-a-service thingies do solve a problem.

        (You could use lambda, but that’s limited in other ways).

      • ATechGuy 1 day ago
        To the commenters here: thanks for correcting me! So AWS is losing the AI sandboxing market to GCP due to the high cold-start times of EC2... very interesting!
    • ushakov 1 day ago
      Factory, Nvidia, Perplexity and Manus are using E2B in production - we ran more than 200 million Sandboxes for our customers
  • e12e 2 days ago
    Looks promising. Any plans for a version that runs locally / is self-hostable?

    Looks like the main innovation here is linking outbound traffic to a host with dynamic variables - could that be added to deno itself?

  • earlence 20 hours ago
    Fun! Our work from 10 years ago introduced the secrets-protection technique used in Deno Sandbox and in Fly's Tokenizer: https://www.earlence.com/assets/papers/flowfence_sec16.pdf. We called it "opaque computation", and it did a lot more than secrets protection.
  • ttoinou 2 days ago
    What happens if we use Claude Pro or Max plans on them? It’ll always be a different IP connecting, and we might get banned from Anthropic as they think we’re different users.

    Why limit the lifetime to 30 mins?

    • lucacasonato 2 days ago
      We'll increase the lifetime in the next few weeks - just some tech internally that needs to be adjusted first.
    • mrkurt 2 days ago
      For what it's worth, I do this from about 50 different IPs and have had no issues. I think their heuristics are more about confirming "a human is driving this" and rejecting "this is something abusing tokens for API access".
      • ttoinou 2 days ago
        All the time with the same computer? Maybe it is looking at other metadata, for example local MAC addresses.
        • mrkurt 1 day ago
          All the time with a bunch of different sandboxes.
    • paxys 1 day ago
      What's the use case for this? Trying to get raw API access through a monthly plan? Or something else?
      • ttoinou 1 day ago
        Simply using your subscription in a sandbox?
  • nihakue 1 day ago
    Not sure if anyone from the deno team is monitoring this forum, but I was trying to stand up a dev-base snapshot and pretty quickly ran into a wall. Is it not currently possible to create a bootable volume from the CLI? https://docs.deno.com/sandbox/volumes/#creating-a-snapshot has an example for the JS API, but the CLI equivalent doesn't specify --from, and the latest version of the deno CLI installed fresh from deno.land has no --from option. Is the CLI behind here? Or is the argument provided some other way?
    • crowlKats 1 day ago
      could you try again? it should be available now (no need to update deno CLI)
  • _pdp_ 1 day ago
    Very interesting. Might copy it.

    We recently built our own sandbox environment backed by firecracker and go. It works great.

    For data residency, i.e. making sure the service is EU bound, there is basically no other way. We can move the service anywhere we can get hardware virtualisation.

    As for the situation with credentials, our method is to generate CLIs on the fly and expose them to the LLMs, and then they can shell-script them whichever way they want. The CLIs only contain scoped credentials to our API, which handles OAuth and other forms of authentication transparently. The agent does not need to know anything about this. All they know is that they can do

    $ some-skillset search-gmail-messages -q "emails from Adrian"

    In our own experiments we find that this approach works better and it just makes sense given most of the latest models are trained as coding assistants. They just love bash, so give them the tools.
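
    The pattern is roughly this (a minimal sketch with made-up names; the real CLIs, endpoint, and token format are specific to our setup):

```python
import json
import os
import urllib.request

# Hypothetical skeleton of one generated CLI: the binary ships with a
# narrowly scoped token for our API, which does the real OAuth dance.
# The agent never sees the underlying Gmail credentials.
SCOPED_TOKEN = os.environ.get("SKILLSET_TOKEN", "demo-scoped-token")
API_BASE = os.environ.get("SKILLSET_API", "https://api.example.internal")

def build_request(action: str, query: str) -> urllib.request.Request:
    # The token is only valid for this skillset's actions; the backend
    # maps it to real, transparently refreshed credentials.
    body = json.dumps({"action": action, "q": query}).encode()
    return urllib.request.Request(
        f"{API_BASE}/v1/run",
        data=body,
        headers={
            "Authorization": f"Bearer {SCOPED_TOKEN}",
            "Content-Type": "application/json",
        },
    )

# So `some-skillset search-gmail-messages -q "emails from Adrian"`
# boils down to build_request("search-gmail-messages", "emails from Adrian")
# plus a urlopen() and a print of the response.
```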

  • tracker1 1 day ago
    Not mentioned, but something I would like/expect would be to have some kind of editor integration... VS Code remote extensions, as an example even... You can be in a remote code server with your local editor and terminal tab(s) within said editor on the remote system.

    I realize this is using other interactions, but I'd like a bit more observability than just the isolated environment... I'm not even saying VS Code specifically, but something similar at the least.

  • ATechGuy 2 days ago
    > allowNet: ["api.openai.com", "*.anthropic.com"],

    How to know what domains to allow? The agent behavior is not predefined.

    • CuriouslyC 2 days ago
      The idea is to gate automatic secret replacement to specific hosts that would use them legitimately to avoid exfiltration.
    • falcor84 2 days ago
      Well, this is the hard part, but the idea is that if you're working with both untrusted inputs and private data/resources, then your agent is susceptible to the "lethal trifecta"[0], and you should be extremely limiting in its ability to have external network access. I would suggest starting with nothing beyond the single AI provider you're using, and only add additional domains if you are certain you trust them and can't do without them.

      [0] https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
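
      As a toy model of the allowlist semantics (this is just my reading of the wildcard syntax, not Deno's actual matcher):

```python
from fnmatch import fnmatch

def host_allowed(host: str, allow_net: list[str]) -> bool:
    # Exact entries must match the host verbatim; a wildcard entry
    # like "*.anthropic.com" matches any subdomain. (Approximation
    # only - not Deno Sandbox's actual implementation.)
    return any(fnmatch(host, pattern) for pattern in allow_net)

allow = ["api.openai.com", "*.anthropic.com"]
```

      Gating the automatic secret replacement to those same hosts (as noted elsewhere in the thread) is what keeps an exfiltration attempt to some other domain from ever seeing real credentials.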

  • nihakue 2 days ago
    See also Sprites (https://news.ycombinator.com/item?id=46557825) which I've been using and really enjoying. There are some key architecture differences between the two, but very similar surface area. It'll be interesting to see if ephemeral + snapshots can be as convenient as stateful with cloning/forking (which hasn't actually dropped yet, although the fly team say it's coming).

    Will give these a try. These are exciting times, it's never been a better time to build side projects :)

    • tomComb 1 day ago
      Yes, sprites looks great too – would certainly be interested in a comparison.
    • alooPotato 1 day ago
      what are the key architectural differences?
      • tptacek 1 day ago
        Sprites aren't ephemeral. They're like deli cups: "semi-disposable". You keep them around as long as you feel like, and you don't feel bad about throwing them away.
  • Tepix 2 days ago
    If you can create a deno sandbox from a deno sandbox, you could create an almost unkillable service that jumps from one sandbox to the next. Very handy for malicious purposes. ;-)

    Just an idea…

    • mrkurt 2 days ago
      This is, in fact, the biggest problem to solve with any kind of compute platform. And when you suddenly launch things really, really fast, it gets harder.
    • runarberg 2 days ago
      Isn’t that basically how zip-bombs work?
      • TheDong 1 day ago
        It's much closer to a fork-bomb.
      • kibibu 1 day ago
        Not really, no
  • sibellavia 1 day ago
    I just run a local microVM. I built a small CLI that wraps lima to make my life easier. With a few commands I have a VM running locally with all batteries included (CC/Codex, ssh, packages I need, ...). That said, I'm not saying Deno or Docker sandboxes are useless.
    • jrvarela56 1 day ago
      Just wrapped up my own module for this. Remixed my worktree workflow with a lima wrapper. I wanted to go head-first into giving Claude Code full autonomy, but realized capability and prevention need to go hand in hand.

      Next step for me is creating a secrets proxy, the way credit card numbers are tokenized, to remove the risk of exfiltrating credentials.

      Edit: It’s nice that Deno Sandbox already does this. Will check it out.
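
      The tokenization idea in miniature (all names hypothetical): the sandbox only ever holds an opaque placeholder, and the proxy swaps in the real secret on the way out - and only for the host the secret was scoped to.

```python
import secrets

class SecretVault:
    """Toy secrets proxy: placeholders in, real values out (scoped)."""

    def __init__(self) -> None:
        self._store = {}  # placeholder -> (real_secret, allowed_host)

    def tokenize(self, real_secret: str, allowed_host: str) -> str:
        # Hand this opaque placeholder to the sandboxed agent; on its
        # own it is useless.
        placeholder = f"$SECRET_{secrets.token_hex(8)}"
        self._store[placeholder] = (real_secret, allowed_host)
        return placeholder

    def rewrite(self, header_value: str, target_host: str) -> str:
        # The egress proxy calls this per outbound request: substitute
        # a placeholder only when the request targets the host the
        # secret was scoped to; otherwise leave it inert.
        for placeholder, (real, host) in self._store.items():
            if placeholder in header_value and target_host == host:
                header_value = header_value.replace(placeholder, real)
        return header_value
```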

  • arjan_sch 1 day ago
    This sandboxing solution list is getting long... created https://github.com/arjan/awesome-agent-sandboxes, PRs welcome :)
  • dangoodmanUT 1 day ago
    Love their network filtering, however it definitely lacks some capabilities (like the ability to do direct TCP connections to Postgres, or direct IP connections).

    Those limitations in other tools were exactly why I made https://github.com/danthegoodman1/netfence for our agents.

  • PeterStuer 1 day ago
    Never used Deno before, and searching through docs and their GitHub still leaves me with questions:

    Can you configure Deno Sandbox to run on a self-hosted installation of Deno Deploy (deployd), or is this a SaaS-only offering?

    • wsgeorge 1 day ago
      What I gather from the announcement: it's part of Deno Deploy (their SaaS offering). I too would love a self-hosted version.
  • mrpandas 2 days ago
    Where's the real value for devs in something like this? Hasn't everyone already built this for themselves in the past 2 years? I'm not trying to sound cheeky or poo poo the product, just surprised if this is a thing. I can never read what's useful by gut anymore, I guess.
    • slibhb 2 days ago
      > Hasn't everyone already built this for themselves in the past 2 years?

      Even if this were true, "everyone building X independently" is evidence that one company should definitely build X and sell it to everyone.

    • mrkurt 2 days ago
      Sandboxes with the right persistence and http routing make excellent dev servers. I have about a million dev servers I just use from whatever computer / phone I happen to be using.

      It's really useful to just turn a computer on, use a disk, and then plop its url in the browser.

      I currently do one computer per project. I don't even put them in git anymore. I have an MDM server running to manage my kids' phones, a "help me reply to all the people" computer that reads everything I'm supposed to read, a dumb game I play with my son, a family todo list no one uses but me, etc, etc.

      Immediate computers have made side projects a lot more fun again. And the nice thing is, they cost nothing when I forget about them.

      • messh 1 day ago
        This is exactly what I built shellbox.dev for.

        SSH in, it resumes where you left off, auto-suspends on disconnect. $0.50/month stopped.

        I have the same pattern - one box per project, never think about them until I need them.

      • simonw 2 days ago
        I'd love to know more about that "help me reply to all the people" one! I definitely need that.
        • mrkurt 1 day ago
          You will be astonished to know it's a whole lot of sqlite.

          Everything I want to pay attention to gets a token, the server goes and looks for stuff in the api, and seeds local sqlites. If possible, it listens for webhooks to stay fresh.

          Mostly the interface is Claude code. I have a web view that gives me some idea of volume, and then I just chat at Claude code to have it see what's going on. It does this by querying and cross referencing sqlite dbs.

          I will have claude code send/post a response for me, but I still write them like a meatsack.

          It's effectively: long lived HTTP server, sqlite, and then Claude skills for scripts that help it consistently do things based on my awful typing.

    • falcor84 2 days ago
      > Hasn't everyone already built this for themselves in the past 2 years?

      The short answer is no. And more so, I think that "Everyone I know in my milieu already built this for themselves, but the wider industry isn't talking about it" is actually an excellent idea generator for a new product.

      • ATechGuy 2 days ago
        In the last year, we have seen several sandboxing wrappers around containers/VMs, and they all target one use case: AI agent code execution. Why? Perhaps because devs are good at building (wrappers around VMs) and chase the AI hype. But how are these different, and what value do they offer over VMs? Sounds like a tarpit idea, tbh.

        Here's my list of code execution sandboxing agents launched in the last year alone: E2B, AIO Sandbox, Sandboxer, AgentSphere, Yolobox, Exe.dev, yolo-cage, SkillFS, ERA Jazzberry Computer, Vibekit, Daytona, Modal, Cognitora, YepCode, Run Compute, CLI Fence, Landrun, Sprites, pctx-sandbox, pctx Sandbox, Agent SDK, Lima-devbox, OpenServ, Browser Agent Playground, Flintlock Agent, Quickstart, Bouvet Sandbox, Arrakis, Cellmate (ceLLMate), AgentFence, Tasker, DenoSandbox, Capsule (WASM-based), Volant, Nono, NetFence

        • kommunicate 1 day ago
          don't forget runloop!
          • messh 1 day ago
            And shellbox.dev
        • ushakov 1 day ago
          why? because there’s huge market demand for Sandboxes. no one would be building this if no one were buying.

          disclaimer: i work at E2B

          • ATechGuy 1 day ago
            I'm not saying sandboxes aren't needed; I'm saying VMs/containers already provide the core tech and it's easy to DIY a sandbox. Would love to understand what value E2B offers over VMs?
            • kommunicate 1 day ago
              making a local sandbox using docker is easy, but making them work at high volume and low latency is hard
              • ATechGuy 1 day ago
                That's right. But they (E2B) rely on the underlying cloud infra to achieve high scalability. Personally, I'm still not sure about the value they add on top of cloud-hosted VMs. GCP/AWS already offer huge discounts to startups, which should be enough for VM-based sandboxing of agents in the MVP phase.
            • ushakov 1 day ago
              we offer secure cloud VMs that scale up to 100k concurrent instances or more.

              the value we sell with our cloud is scale, while our Sandboxes are a commodity that we have proudly open-sourced

              • ATechGuy 1 day ago
                > we offer secure cloud VMs that scale up to 100k concurrent instances or more.

                High scalability and VM isolation are what the Cloud (GCP/AWS, which E2B runs on) offers.

    • drewbitt 2 days ago
      Has everyone really built their own microVMs? I don’t think so.
      • zenmac 2 days ago
        Saw quite a bit of this on HN.

        A quick search this popped up:

        https://news.ycombinator.com/item?id=45486006

        If we can spin up microVMs so quickly, why bother with Docker or other containers at all?

        • drewbitt 2 days ago
          I think a 413-commit repo took a bit of time.
          • mrpandas 2 days ago
            That's just over one day worth of commits in a few friends' activity at this point. Thanks to Anthropic.
            • drewbitt 23 hours ago
              I'm not interested in integrating an unguided 400-commit single Ralph iteration as part of critical infrastructure at this point.
        • ushakov 1 day ago
          10 seconds is actually not that impressive. we spin up Sandboxes in around 50-200ms at E2B
  • snehesht 2 days ago
    50/200 GB free plus $0.50/GB egress seems expensive when scaling out.
  • WatchDog 1 day ago
    If you achieve arbitrary code execution in the sandbox, I think you could pretty easily exfiltrate the openai key by using the openai code interpreter, and asking it to send the key to a url of your choice.
  • ianberdin 2 days ago
    Firecracker VM with a proxy?
  • Bnjoroge 1 day ago
    Ignoring the fact that most of the blog post is written by an LLM, I like that they provide a Python SDK. I don't believe Vercel does for their sandbox product.
  • MillionOClock 2 days ago
    Can this be used on iOS somehow? I am building a Swift app where this would be very useful but last time I checked I don't think it was possible.
    • lucacasonato 1 day ago
      It’s a cloud service - so you can call out to it from anywhere you want. Just don’t ship your credentials in the app itself, and instead authenticate via a server you control.
  • latexr 1 day ago
    > evil.com

    That website does exist. It may hurt your eyes.

    • lucacasonato 1 day ago
      We honestly should have just linked to oracle.com instead of evil.com
  • LAC-Tech 1 day ago
    As a bit of an aside, I've gotten back into Deno after seeing Bun get bought out by an AI company.

    I really like it. Startup times are now better than node (if not as good as bun). And being able to put your whole "project" in a single file that grabs dependencies from URLs reduces friction a surprising amount compared to having to have a whole directory with package.json, package-lock.json, etc.

    It's basically my "need to whip up a small thing" environment of choice now.

  • eric-burel 1 day ago
    Can it be used to sandbox an AI agent, e.g. replacing Cursor's or Openclaw's sandboxing system?
  • eis 1 day ago
    What's with the pricing of these sandbox offerings recently? I assume just trying to milk the AI trend.

    It's about 10x what a normal VM would cost at a more affordable hoster. So you'd better have it run only 10% of the time, or you're just paying more for something more constrained.

    A full month of runtime would be about $50 for a 2vCPU / 1GB RAM / 10GB SSD mini-VM that you can easily get for $5 elsewhere.

  • EGreg 1 day ago
    We already have a pretty good sandbox in our platform: https://github.com/Qbix/Platform/blob/main/platform/plugins/...

    It uses web workers in a web browser. So is this Deno Sandbox like that, but for the server? I think Node has worker threads.

  • bopbopbop7 1 day ago
    Now I see why he was on twitter saying that the era of coding is over and hyping up LLMs, to sell more shovels...
  • andrewmcwatters 2 days ago
    [dead]
  • Soerensen 1 day ago
    [flagged]
    • rob 1 day ago
      I feel like this is a bot account. Or at least, everything is AI-generated. No posts at all since the account was created in 2024, and now suddenly in the past 24 hours there are dozens of detailed comments that all sort of follow the same pattern/vibe.