Coding Agents Could Make Free Software Matter Again

(gjlondon.com)

100 points | by rogueleaderr 2 hours ago

30 comments

  • apatheticonion 2 hours ago
    With over a decade of open source software I've written freely available online, I actually really appreciate the value that AI && LLMs have provided me.

    The thing that leaves a bad taste in my mouth is the fact that my works were likely included in the training data and, even if that doesn't violate my licenses (GPL v2/v3), it certainly feels against the spirit of what I intended when distributing my works.

    I was made redundant recently "due to AI" (questionable), and it feels like my works in some way contributed to my redundancy: they fed the profits made by these AI megacorps while I am left a victim.

    I wish I could be provided a dividend or royalty, however small, for my contribution to these LLMs but that will never happen.

    I've been looking for a copy-left "source available" license that allows me to distribute code openly but has a clause that says "if you would like to use these sources to train an LLM, please contact me and we'll work something out". I haven't yet found that.

    I'm guessing that such a license would not be enforceable because I am not in the US, but at least it would be nice to declare my intent and who knows what the future looks like.

    • advael 2 hours ago
      I think there's no meaningful case by the letter of the law that use of training data including GPL-licensed software in models that comprise the core component of modern LLMs doesn't obligate every producer of such models to make both the models and the software stack supporting them available under the same terms. Of course, it also seems clear in the present landscape that the law often depends more on the convenience of the powerful than on its actual construction and intent, but I would love to be proven wrong about that, and this kind of outcome would help.
      • BobbyJo 1 hour ago
        If the rise of Draft Kings and Polymarket/Kalshi have taught me anything, it's that the law becomes meaningless at scale. Sad.
        • advael 1 hour ago
          Sure, but that's more a result of policy decisions than an inevitable result of some natural law. Corporate lawlessness has been reined in before and it can be again
      • apatheticonion 1 hour ago
        I'm struggling to parse the double negative in that statement, haha.

        Are you saying you believe that, as an untested but technically sound reading, models trained on GPL sources need to distribute the resulting LLMs under the GPL?

        • advael 1 hour ago
          Yes. Double negative intended for emphasis here, but apologies if it's confusing
      • cogman10 44 minutes ago
        If there were going to be a case, it would be over derivative works. [1]

        What makes it all tricky for the courts is that there's no good way to identify which works a given piece of generated code is derivative of (except perhaps in some extreme examples).

        [1] https://en.wikipedia.org/wiki/Derivative_work

      • throwaway27448 47 minutes ago
        Intellectual property never made much sense to begin with. But it certainly makes no sense now, where the common creator has no protections against greedy corporate giants who are happy to wield the full weight of the courts to stifle any competition for longer than we'll be alive.

        Or, in the case of LLMs, recklessly swing about software they don't understand while praying to find a business model.

        • not_paid_by_yt 8 minutes ago
          Hey, just don't try to copy their LLM by distilling it, because that's "theft". If we weren't all doomed anyway, this industry would never have been allowed to exist in the first place; but I guess this is just what the last few decades of our civilization will look like.
      • not_paid_by_yt 10 minutes ago
        That's always what laws have existed for. A law is just a formal way of saying "we will use violence against you if you do something we don't like," and laws were always going to be written primarily by and for the people who already have the power to do that. It's not the worst arrangement; it's certainly better than kings just being able to do as they please.
      • hparadiz 1 hour ago
        Derivative work.
        • throwaway27448 45 minutes ago
          Let's cut the rot off at the root rather than pretending like the fruit is going to nourish us.
          • thfuran 6 minutes ago
            I’m not entirely clear on what you’re suggesting abolishing here, copyright, AI, the companies making the frontier models?
    • rpdillon 11 minutes ago
      My personal take is that LLMs are so transformative that they are likely not going to qualify under derivative works and therefore GPL wouldn't hold sway. There's already some evidence that courts will consider training on copyrighted material fair use, so long as it is otherwise obtained legally, which would be the case with software licensed under GPL.

      I realize this is an unpopular opinion on HN, but I believe it is best because it's a weaker interpretation of copyright law, which is overall a good thing in my view.

    • jonahss 1 hour ago
      I feel kind of good knowing that my code, design decisions, and styles are now part of the data shaping all software.
      • koolba 1 hour ago
        Reading this I hear The Roots playing The Seed 2.0[1] in my mind.

        It’s a wild thought to think that of all the things that will remain on this earth after you’re gone, it’ll be your GPL contributions reconstituting themselves as an LLM’s hallucinations.

        [1]: https://youtu.be/ojC0mg2hJCc

        • jaggederest 39 minutes ago
          To be clear, it's going to be a lot more than that.

          Our comments here on HN are almost certainly going to live in fame or infamy forever. The Twitter firehose is essentially a pathway to 140-character immortality.

          You can already summon an agent to ingest essentially an entire commenter's history, correlate it across different sites based on writing style or similar nicknames, and then chat with you as that persona, even more so with a fine-tune or LoRA. I can do that with my Gmail and text message history and it becomes eerily similar to me.

          History is going to be much more direct and personal in the future. We can also do this with historical figures who left voluminous personal correspondence; that's possible now.

          It's very interesting, because I think the era after digitization but before mass LLM usage is going to be the most intensely studied. We've lived through something that will be on the cusp of history, for better or worse.

      • gerdesj 1 hour ago
        Taken to a hallucinated but logical conclusion, we might define a word such as "cene" to riff off of "meme" and "gene".

        The c is for code. If adopted, we could spend forever arguing how the c is pronounced and whether the original had a cedilla, a circumflex, or rhymes with bollocks, which seems somehow appropriate. Everyone uses xene instead. x is chi, but most people don't notice.

      • achierius 1 hour ago
        I'm sure you'll feel that way so long as you have an income.
      • hparadiz 1 hour ago
        Me too! I'm glad I'm not the only one.
      • faksr 1 hour ago
        Everyone is entitled to their own opinion. The cannibalistic $trillion companies profit from it all and no one opted in.

        There are also people who want to be eaten by a literal cannibal. I say, no thanks.

      • apatheticonion 1 hour ago
        Me too, and I use LLMs often for personal and professional work. Knowing that colleagues are burning through $700/day worth of tokens, and a small fraction of those tokens were likely derived from my work while I get made redundant is a bit shite.
        • dbetteridge 40 minutes ago
          $700 a day of tokens can't possibly be sustainable, right?

          That's 2x the daily pay of a lot of the world's software developers

          • not_paid_by_yt 6 minutes ago
            and that's at the VC funded discount rate I would presume, not even true cost of those tokens without any profit.
          • apatheticonion 23 minutes ago
            AUD* - so $450 USD

            But yes, that's very expensive and surprising to me.

            • dbetteridge 17 minutes ago
              Ah a fellow Aussie, hi! Sorry to hear about the redundancy (Atlassian?).

              I did implicitly assume USD but yeah still crazy cash, that'd pay for 2 junior-mid level devs in aus D=

        • manwe150 1 hour ago
          I’d been hoping to finally train a replacement and code myself out of a job for years. I just didn’t know I was the replacement too, working with AI.
      • __loam 1 hour ago
        I can't eat good feelings
      • micromacrofoot 1 hour ago
        a comforting thought but I have bills to pay
    • otras 43 minutes ago
      The foreman had pointed out his best man - what was his name? - and, joking with the puzzled machinist, the three bright young men had hooked up the recording apparatus to the lathe controls. Hertz! That had been the machinist's name - Rudy Hertz, an old-timer, who had been about ready to retire. Paul remembered the name now, and remembered the deference the old man had shown the bright young men.

      Afterward, they'd got Rudy's foreman to let him off, and, in a boisterous, whimsical spirit of industrial democracy, they'd taken him across the street for a beer. Rudy hadn't understood quite what the recording instruments were all about, but what he had understood, he'd liked: that he, out of thousands of machinists, had been chosen to have his motions immortalized on tape. And here, now, this little loop in the box before Paul, here was Rudy as Rudy had been to his machine that afternoon - Rudy, the turner-on of power, the setter of speeds, the controller of the cutting tool. This was the essence of Rudy as far as his machine was concerned, as far as the economy was concerned, as far as the war effort had been concerned. The tape was the essence distilled from the small, polite man with the big hands and black fingernails; from the man who thought the world could be saved if everyone read a verse from the Bible every night; from the man who adored a collie for want of children; from the man who . . . What else had Rudy said that afternoon? Paul supposed the old man was dead now - or in his second childhood in Homestead.

      Now, by switching in lathes on a master panel and feeding them signals from the tape, Paul could make the essence of Rudy Hertz produce one, ten, a hundred, or a thousand of the shafts.

      Kurt Vonnegut, Player Piano

    • reactordev 44 minutes ago
      If you use GitHub, you’re automatically opted into having your code used for training. Private repo or not. You have to actually opt out and even then, will they honor that? No…
      • Waterluvian 20 minutes ago
        They’ll just keep “accidentally” resetting the option over time.
    • archagon 24 minutes ago
      > I've been looking for a copy-left "source available" license that allows me to distribute code openly but has a clause that says "if you would like to use these sources to train an LLM, please contact me and we'll work something out". I haven't yet found that.

      Personally, I want a viral (GPL-style) license that explicitly prohibits use of code for LLM training/tuning purposes — with the asterisk that while current law might view LLM training as fair use, this may not be the case forever, and blatant disregard of the terms of the license should make it easier for me to sue offenders in the future.

      Alternatively, this could be expressed as: the output of any LLM trained on this code must retain this license.

    • phendrenad2 53 minutes ago
    I wish Anthropic or someone would take a leadership role and re-train their models without any GPL code, or at least stop including it going forward.
    • bluefirebrand 2 hours ago
      > I've been looking for a copy-left "source available" license that allows me to distribute code openly but has a clause that says "if you would like to use these sources to train an LLM, please contact me and we'll work something out". I haven't yet found that

      Frankly do you think AI companies have even the remotest amount of respect for these licenses anyways? They will simply take your code if it is publicly scrapeable, train their models, exactly like they have so far. Then it will be up to you to chase them down and try to sue or whatever. And good luck proving the license violation

      I dunno. I just don't really believe that many tech companies these days are behaving even remotely ethically. I don't have much hope that will change anytime soon

      • apatheticonion 1 hour ago
        I wonder if there is a "loaded" lawsuit here that could be a win-win for license enforcement case law in LLMs.

        Take a litigious company like Nintendo. If one was to train an LLM on their works and the LLM produces an emulator, that would force a lawsuit.

        If Nintendo wins, then LLMs are stealing. If Nintendo loses, then we can decompile everything.

        • bluefirebrand 1 hour ago
          You're forgetting the option where the LLM companies pay Nintendo a silly amount of money for permission and Nintendo's executives take that as a win
      • archagon 15 minutes ago
        Traditionally, large corporations have taken very conservative legal stances with regard to integrating e.g. A/GPL code, even when there's almost no risk.

        If my license explicitly says "any LLM output trained on this code is legally tainted," I feel like BigAICorp would be foolish to ignore it. Maybe I couldn't sue them today, but are they confident this will remain the case 5, 10, 20 years from now? Everywhere in the world?

  • floathub 2 hours ago
    Free software has never mattered more.

    All the infrastructure that runs the whole AI-over-the-internet juggernaut is essentially all open source.

    Heck, even Claude Code would be far less useful without grep, diff, git, head, etc., etc., etc. And one can easily see a day where something like a local Claude Code talking to open-weight, open-source models is the core dev tool.

    • CobrastanJorji 2 hours ago
      It's not just that open source code is useful in an age of AI, it's that the AI could only have been made because of the open source code.
    • andoando 1 hour ago
      Why isn't LLM training itself open sourced? With all the compute in the world, something like Folding@home here would be killer.
      • DesaiAshu 1 hour ago
        Data bandwidth limits distributed training under current architectures. Really interesting implications if we can make progress on that.
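        To make the bandwidth limit concrete, here is a rough back-of-the-envelope sketch; the 7B-parameter model size, fp16 gradients, and 20 Mbit/s uplink are illustrative assumptions, and real distributed-training systems mitigate this with sharding and gradient compression:

```python
# Rough estimate of why home uplinks bottleneck Folding@home-style
# training: each data-parallel worker must ship a full gradient
# (one value per parameter) every synchronization step.

def sync_time_seconds(params: float, bytes_per_param: float, uplink_bps: float) -> float:
    """Seconds to upload one full gradient over the given uplink."""
    gradient_bits = params * bytes_per_param * 8
    return gradient_bits / uplink_bps

# Illustrative numbers: 7B parameters, fp16 gradients (2 bytes each),
# 20 Mbit/s residential upload bandwidth.
t = sync_time_seconds(params=7e9, bytes_per_param=2, uplink_bps=20e6)
print(f"~{t / 60:.0f} minutes per synchronization step")  # ~93 minutes
```

        Datacenter interconnects do the same exchange in well under a second, which is why naive data parallelism does not survive the public internet without architectural progress.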
    • TacticalCoder 2 hours ago
      > All the infrastructure that runs the whole AI-over-the-internet juggernaut is essentially all open source.

      Exactly.

      > Heck, even Claude Code would be far less useful without grep, diff, git, head, etc.

      It wouldn't even work. It's constantly using those.

      I remember reading a Claude Code CLI install doc and the first thing was "we need ripgrep" with zero shame.

      All these tools also basically run on top of Linux, with Claude Code actually installing a full Linux VM on Windows and macOS.

      It's all open-source command line tools, an open-source OS, and piping one program to the other. I've been on Linux on the desktop (and servers, of course) since the Slackware days... And I was right all along.

      • gesis 2 hours ago
        The primary selling point of unix and unix-like operating systems has always been composability.

        Without the ability to string together the basic utilities into a much greater sum, Unix would have been another blip.

  • Bockit 2 hours ago
    It’s such a fun time to have 1+ decade(s) of experience in software. Knowing what simple and good are (for me), and being able to articulate it has let me create so much personal software for myself and my family. It has really felt like turning ideas into reality, about as fast as I can think of them or they can suggest them. And adding specific features, just for our needs. The latest one was a slack canvas replacement, as we moved from slack to self-hosted matrix + element but missed the multiplayer, persistent monthly notes file we used. Even getting matrix set up in the first place was a breeze.

    $20/month with your provider of choice unlocks a lot.

    Edit: the underlying point being, yes to the article. Either building upon the foundations of open source to making personal things, or just modifying a fork for my own needs.

    • Crye 1 hour ago
      Couldn't agree more. I'm building open source software for the grid, contributing in a way that feels like it could truly make a difference, while building momentum for open standards. It doesn't feel like work, just creativity and problem solving. On top of that, I can just build stuff for fun. Kids want a Minecraft mod? Let's build it and learn a thing or two on the way.
  • woeirua 2 hours ago
    I’m not so sure… what I see as more likely is that coding agents will just strip parts from open source libraries to build bespoke applications for users. Users will be ecstatic because they get exactly what they want and they don’t have to worry about upstream supply chain attacks. Maintainers get screwed because no one contributes back to the main code base. In the end open source software becomes critical to the ecosystem, but gets none of the credit.
    • sigbottle 2 hours ago
      But the users would have to maintain their own forks then. Unless you stream back patches into your forks, which implies there's some upstream being maintained. Software doesn't interoperate and maintain itself for free - somebody's gotta put in the time for that.

      I think as long as AI isn't literal AGI, social pressures will keep projects alive, in some state. There is definitely something scary about stealing entire products as a means of new market domination: e.g. steal Linux, make a corporate Linux, and force everybody to contribute to corporate Linux only (many Linux contributors are paid by corporations, after all), making that the new central pointer. The worst case scenario might be that Microsoft, in collusion (which I admit is far-fetched, but definitely possible), completely adopts Linux for servers and headless compute while enforcing hardware restrictions so strict that only Windows works.

      • LegionMammal978 1 hour ago
        > But the users would have to maintain their own forks then.

        I suppose the idea would be, they don't have to maintain it: if it ever starts to rot from whatever environmental changes, then they can just get the LLM to patch it, or at worst, generate it again from scratch.

        (And personally, I prefer writing code so that it isn't coupled so tightly to the environment or other people's fast-moving libraries to begin with, since I don't want to poke at all of my projects every other year just to keep them functional.)

      • woeirua 1 hour ago
        Agents can clearly strip out functionality from libraries already. They can certainly backport patches to whatever parts you strip out.

        The advantage of decoupling from supply chain attacks is so large that I expect this to be standard practice as soon as later this year.

        • hparadiz 1 hour ago
          Agents can read the binary that makes up a compiled file and detect behavior directly from that. I've been doing it to inspect my own builds for the presence of a feature.
          • woeirua 12 minutes ago
            Yeah. I think people are deluding themselves as to how capable these models are now.
  • bustah 19 minutes ago
    This is a microcosm of a much larger problem. When AI writes code, reviews code, and now apparently manages its own git operations — who's actually in control of the codebase?

    The "dangerously-skip-permissions" flag getting blamed here is telling. We're building tools where the safe default is friction, so users disable the safety to get work done, and then the tool does something destructive. That's not a user error — that's a design pattern that reliably produces failures at scale.

    The broader data is concerning: AI-generated code has 2.74x more security vulnerabilities than human-written code, and reviewing it takes 3.6x longer. Now add autonomous git operations to that mix. The code review problem becomes a code ownership problem — if the AI is writing it, reviewing it, and managing the repository, what exactly is the human's role? We dug into this at sloppish.com/ghost-in-the-codebase

  • jaynate 37 minutes ago
    “Their relationship with the software is one of pure dependency, and when the software doesn’t do what they need, they just… live with it”

    Or, more likely, they churn off the product.

    The SaaS platforms that will survive are busy RIGHT NOW revamping their APIs, implementing OAuth, and generally reorganizing their products to be discovered and manipulated by agents. Failing in this effort will ultimately result in the demise of any given platform. This goes for larger SaaS companies, too; it'll just take longer.

    • jaynate 2 minutes ago
      And I think it’s less about letting agents modify the product source. That’s more of a platform capability which should also be a requirement for certain types of use cases. All comes back to listening to and / or innovating for customers.
  • est31 1 hour ago
    If I look around in the FLOSS communities, I see a lot of skepticism towards LLMs. The main concerns are:

    1. they were trained on FLOSS repositories without consent of the authors, including GPL and AGPL repos

    2. the best models are proprietary

    3. folks making low-effort contribution attempts using AI (PRs, security reports, etc).

    I agree those are legitimate problems but LLMs are the new reality, they are not going to go away. Much more powerful lobbies than the OSS ones are losing fights against the LLM companies (the big copyright holders in media).

    But while companies can use LLMs to build replacements for GPL licensed code (where those LLMs have that GPL code probably in their training set), the reverse thing can also be done: one can break monopolies open using LLMs, and build so much open source software using LLMs.

    In the end, the GPL is only a means to an end.

    • giancarlostoro 1 hour ago
      > 3. folks making low-effort contribution attempts using AI (PRs, security reports, etc).

      Meanwhile, people sleep on LLMs that could help them audit their code for security holes, or even on any security code auditing tools at all. Script kiddies don't care that you think AI isn't ready; they'll use AI models to scrape your website for security gaps. They'll use LLMs to figure out how to hack your employees and steal your data. We already saw hackers break into servers of the Mexican government, basically scraping every document of every Mexican citizen. Now is the time to start investing in security auditing, before you become the next news headline.

      AI isn't the future, it's already here, and hackers will use it against you.

  • theturtletalks 2 hours ago
    5 years ago, I set out to build an open-source, interoperable marketplace powered by open-source SaaS. It felt like a pipe dream, but AI has brought the dream to fruition. People are underestimating how much of a threat AI is to rent-seeking middlemen in every industry.
    • try-working 1 hour ago
      You never get rid of the middleman. You become them.
      • theturtletalks 1 hour ago
        If that middleman is open-source and simply interops with SaaS that itself is open-source, there simply is no moat to exploit.
  • elif 57 minutes ago
    Agree completely. When the megacorps are building hundreds of datacenters and openly talking about plans to charge for software "like a utility," there has never been a clearer mandate for FOSS, and IMO there has never been as much momentum behind it either.

    These are exciting times, and they are coming despite any pessimism rooted in our outdated software paradigms.

  • agentultra 1 hour ago
    I think it will wall people off from software.

    I don’t know what SaaS has to do with FOSS. The point of FOSS was to allow me to modify the software I run on my system. If the device drivers for some hardware I depend on are no longer supported by the company I bought it from, if it’s open source, I can modify and extend the software myself.

    Copyleft licenses ensure that I share my modifications back if I distribute them. It's a thing for the public good.

    Agent-based software development walls people off from that. Mostly by ensuring that the provenance of the code it generates is not known and by deskilling people so that they don’t know what to prompt or how to fix their code.

  • pdntspa 2 hours ago
    What's the chance this website is powered by postgresql?
    • girvo 2 hours ago
      Hopefully low, it's a static blog.

      (I know this isn't the actual point of your comment, apologies!)

      • keeler 1 hour ago
        It's a WordPress blog & backed by MySQL.
  • throwaw12 2 hours ago
    > SaaS scaled by exploiting a licensing loophole that let vendors avoid sharing their modifications.

    AI is going to exploit even more: "Given the repository -> Construct tech spec -> Build project based on tech spec"

    At this stage, I want everyone to just close their source and stop working on open source until this issue of licensing gets resolved.

    Any improvement you make to open source code will be leveraged in ways you didn't intend, eventually making you redundant in the process.

  • SchemaLoad 2 hours ago
    Maybe, but I don't really believe users can or want to start designing software, if it were even possible; today it isn't, really, unless you already have software dev skills.

    That would basically make users product managers and UX designers, roles they aren't really capable of filling currently. At most they will discover that what they think they want isn't what they actually want.

  • vicchenai 2 hours ago
    The real unlock here isn't users becoming devs, it's maintainers becoming 10x more productive. Most OSS projects die because the maintainer burned out fixing bugs nobody wants to fix. If agents can handle the boring parts (triage, repro, patch obvious stuff) the maintainer can focus on design decisions and reviewing PRs instead of drowning in issues. That changes the economics completely.
    • SchemaLoad 2 hours ago
      This feels like an AI-generated comment, but I'll reply anyway. AI has been a massive negative for open source, since every project is now drowning in AI-generated PRs that don't work, reports of issues that don't exist, and a general mountain of time-wasting automated slop.

      We are getting to the point where many projects may have to close submissions from the general public since they waste far more time than they help.

  • leandro-person 2 hours ago
    I’m impressed by how current times make us consider so many completely opposite scenarios. I think it can indeed foster progress, but it can also have negative impacts.
  • heliumtera 2 hours ago
    Oh yeah, sure, nothing screams freedom louder than following Anthropic and OpenAI suggestions without a second thought.
  • phendrenad2 37 minutes ago
    The debate in the comment section here really boils down to: upstream freedom vs downstream freedom.

    Copyleft licenses like the GPL and AGPL mandate upstream freedom: upstream has the "freedom" to use anything downstream, including anything written by a corporation.

    Non-copyleft FOSS licenses like MIT/BSD are about downstream freedom, which is more of a philosophically utilitarian view, where anyone who receives the software is free to use it however they want, including not giving their changes back to the community, on the assumption that this maximizes the utility of this free software in the world.

    If you prioritize the former goal, then coding agents are a huge problem for you. If the latter, then coding agents are the best thing ever, because they give everyone access to an effectively unlimited amount of cheap code.

  • zephen 2 hours ago
    The article makes zero sense to me.

    It compares and contrasts open source and free software, and then gives an example of how free software is better than closed software.

    But if the premise of the article, that the agent will take the package you pick and adapt it to your needs, is correct, then honestly the agent won't give a rat's ass whether the starting point was free software or open source.

  • we4a 2 hours ago
    First of all, free software still matters. And being a slave to a $200 subscription to an oligarch's application that launders other people's copyright is not what Stallman envisioned.

    The AI propaganda articles are getting more devious by the minute. It's not just propaganda; it's Bernays-level manipulation!

  • threethirtytwo 2 hours ago
    I think the opposite. It will make all software matter less.

    If trendlines continue... it will be faster for AI to vibe code said software to your customized specifications than to sign up for a SaaS and learn it.

    "Claude, create a project management tool that simplifies jira, customize it to my workflow."

    So a lot of apps will actually become closed source personalized builds.

    • SchemaLoad 2 hours ago
      And then you get a new hire who already knows the common SaaS products but has to relearn your vibe-coded version that no one else uses and about which no information exists online.

      There is a reason why large proprietary products remain prevalent even when cheaper better alternatives exist. Being "industry standard" matters more than being the best.

      • threethirtytwo 1 hour ago
        The new hire will just vibe code a new solution that translates your solution into something he prefers. Every new hire will have his own.
      • ares623 2 hours ago
        As the kids say: "let them cook"
    • coffeefirst 1 hour ago
      This isn’t going to happen.

      I can already build a ticket tracker in a weekend. I’ve been on many teams that used Jira, nobody loves Jira, none of us ever bothered to DIY something good enough.

      Why?

      Because it’s a massive distraction. It’s really fun to build all these side apps, but then you have to maintain them.

      I’m guessing a lot of vibeware will be abandoned rather than maintained.

      • threethirtytwo 1 hour ago
        Who said you’re building it? You’re telling your AI to build it while you go play golf or something.
        • hparadiz 1 hour ago
          The hard part has always been shipping, buttoning things up, doing the design. Not the idea per se. And then if any of it is successful and starts making money, guess who you're gonna call to maintain it?
    • zadikian 2 hours ago
      But then all your local stuff is based on open-source software, unlike the SaaS which is probably not all the way open.

      I've always preferred my stack to be on the thinner, more vanilla, less prebuilt side than others around me, and seems like LLMs are reinforcing that approach now.

    • nine_k 2 hours ago
      When you want reliable, battle-tested software, you will notice the difference.
      • wmeredith 47 minutes ago
        Implying that JIRA is reliable is... something.
    • uduni 2 hours ago
      There's too much value in familiar UX. "Don't make the user think" is the golden rule these days. People used to have mental bandwidth for learning new interfaces... But now people expect uniformity
      • ahartmetz 1 hour ago
        They expect low complexity. Uniformity has greatly declined in the last 30 years or so. How do you even tell what is or isn't clickable, ffs?
    • Joel_Mckay 1 hour ago
      Due to copyright laws and piracy bleed-through, one can't safely license "AI" output under some other use-case without the risk of getting sued or DMCA strikes. You can't make it GPL, or closed source... because it is not legally yours even if you paid someone for tokens.

      Like all the code generators that came before, the current LLMs will end up a niche product after the hype cycle ends. "AI" only works if the models are fed other people's real work, and the web is already >52% nonsense now. They add the Claude contributor flag to Git projects so the scrapers don't consume as much of their own slop. ymmv =3

  • FergusArgyll 2 hours ago
    tl-didn't finish but I absolutely do this already. Much of the software I use is foss and codex adjusts it to my needs. Sometimes it's really good software and I end up adding something that already exists. Whatever, tokens are free...
  • jongjong 2 hours ago
    Unfortunately for me, I believe that the algorithms won't allow me to get exposure for my work no matter how good it is so there is literally no benefit for me to do open source. Though I would love to, I'm not in a position to work for free. Exposure is required to monetize open source. It has to reach a certain scale of adoption.

    The worst part is building something open source, getting positive feedback, helping a couple of startups and then some big corporation comes along and implements a similar product and then everyone gets forced by their bosses to use the corporate product against their will and people eventually forget your product exists because there are no high-paying jobs allowing people to use it.

    With hindsight, Open Source is basically a con for corporations to get free labor. When you make software free for everyone, really you're just making it free for corporations to Embrace, Extend, Extinguish... They invest a huge amount of effort to suppress the sources of the ideas.

    Our entire system is heavily optimized for decoupling products from their makers. We have almost no idea who is making any of the products we buy. I believe there is a reason for that. Open source is no different.

    When we lived in caves, everyone in the tribe knew who caught the fish or who speared the buffalo. They would rightly get credit. Now, it's like; because none of the rich people are doing any useful work, they can only maintain credibility by obfuscating the source of the products we buy. They do nothing but control stuff. Controlling stuff does not add value. Once a process is organized, additional control only serves to destroy value through rent extraction.
