Will AI Kill Open Source?

Saw an argument today that AI will kill open source. The logic sounds compelling: Claude Code can build anything you think of—you have an idea in the morning, working code by afternoon. Think it, build it, ship it. If that's the future, why would anyone spend time digging through GitHub repos, reading documentation, submitting PRs? AI will drain all the vitality from open source.

At first glance, this sounds alarming. But look closer and there's a fatal circularity.

AI writes code quickly precisely because it trained on massive amounts of open source code from GitHub. Claude Code can write React components, lay out pages with Tailwind, call various open source APIs—all because it saw thousands of examples during training. If AI actually got strong enough that nobody wrote code or contributed to open source anymore, it would lose its training data.

It's like climbing to a rooftop and then pulling up the ladder. You might stand on the roof for a while, but you can't get down, can't climb higher. And nobody else can climb up either, because the ladder's gone.

This isn't theoretical—it happens every day.

Just the other day I asked Claude to write code using our company's internal UI component library. It got stuck. Why? Because that library is closed source—Claude never saw those APIs. It can write standard Material-UI or Ant Design because those are open source and sitting in the training data. But our heavily customized internal system? Sorry, can't help.
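To make the contrast concrete, here's a minimal sketch. The first component uses Ant Design's public API, the kind of code a model trained on public repos produces fluently. The second imports "@acme/design-system", a made-up stand-in for a closed-source internal library; its component names and props are invented here for illustration, and they're exactly the things a model ends up guessing at.

```tsx
// Public library: antd's API is all over GitHub, so a model reproduces it easily.
import { Button, Form, Input } from "antd";
// Hypothetical closed-source library: "@acme/design-system" and everything imported
// from it are invented for this example. A model has never seen this API.
import { AcmeForm, AcmeSubmit, AcmeTextField } from "@acme/design-system";

// Every call here appears thousands of times in public code; the model writes it on demand.
export function PublicLoginForm() {
  return (
    <Form layout="vertical" onFinish={(values) => console.log(values)}>
      <Form.Item name="email" label="Email" rules={[{ required: true }]}>
        <Input placeholder="you@example.com" />
      </Form.Item>
      <Form.Item>
        <Button type="primary" htmlType="submit">
          Sign in
        </Button>
      </Form.Item>
    </Form>
  );
}

// For the internal library there are no public examples, so the model has to guess at
// component names and props, and a human who has read the internal docs has to correct it.
export function InternalLoginForm() {
  return (
    <AcmeForm onSubmit={(values: Record<string, string>) => console.log(values)}>
      <AcmeTextField name="email" label="Email" required />
      <AcmeSubmit>Sign in</AcmeSubmit>
    </AcmeForm>
  );
}
```

The point isn't this particular library. The first half is cheap for the model because the web is full of it; the second half isn't, because nothing like it ever left the company.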

Small example, but it reveals something crucial: if high-quality content actually moved en masse into private spaces—internal corporate codebases, paid closed tools, proprietary knowledge bases—AI couldn't learn from it anymore. It would freeze at whatever knowledge existed in its last training run, while the world moved on.

So the prophecy "AI will kill open source" contains the conditions that prevent it from happening. If open source actually died, AI would stop evolving too.


But there's a more unsettling question here, and I've noticed most people arguing about "will open source die" don't even realize this question exists.

We don't actually know what "creation" means.

Is the red you see the same red I see? We can never verify. We can only align through massive repetition: point at an apple and say "this is red," point at blood and say "this is also red." Eventually we agree we're talking about the same thing. But are we really?

How do humans learn? Documentation, examples, tests, practice problems. We read other people's code, study existing projects, grind LeetCode, and then we say we've "learned." We train ourselves on massive inputs too—we just gave the process fancier names: learning, understanding, mastery.

What does an LLM achieve? Output-level alignment. You give it a problem, it produces an answer that looks right. But process-level alignment? What's actually happening inside? Does it truly "understand," or is it just doing ultra-high-dimensional pattern matching?

This question is hard to answer because we can't even clearly define whether what humans do counts as "true understanding."

Do we have consciousness? Of course—you're reading this right now, you feel what "understanding" means. But can you define "consciousness"? Explain the mechanism of "understanding"? Philosophers have tried for millennia, neuroscientists and cognitive psychologists for decades. We still don't have an answer.

So when AI exhibits something that looks like "creativity," we panic. We say: "No, that's not real creation, that's just imitation, recombination, statistical learning." But what's this judgment based on? Do we have a clear definition of "real creation"? Or are we just instinctively asserting that what humans do is sophisticated and what machines do is primitive?


If we're honest, we might have to admit an uncomfortable possibility: maybe what we call creativity is also sophisticated recombination of massive inputs.

When you write code, how many lines are truly "original"? Most of the time you're using patterns you learned, architectures you saw, ideas from open source projects you read—combining them into something new. When you write an essay, how many sentences did you think of completely independently? Or did books you read, podcasts you heard, conversations you had suddenly connect at some moment?

If "creation" means combining existing elements in novel ways, where's the boundary between what LLMs do and what humans do?

We might say: "But humans have emotions, values, subjective experience." Yes, those matter. But do they define "creation"? If someone lacks emotions, is the code they write not creative? If an AI someday had subjective experience (assuming we could define what "really having it" means), would what it does count as creation then?

We can't answer these questions because we never defined the rules of the game from the start.


This is why "will AI kill open source" ultimately points to a deeper anxiety.

People aren't really afraid GitHub will shut down or open source communities will go quiet. What they're really afraid of is that AI forces us to confront a fact: we've never truly understood ourselves.

We don't know what creation is, what learning is, what understanding is. We've just been using vague words to describe processes we can't clearly explain. AI's emergence exposes that vagueness.

When AI writes code identical to human code, we can't comfort ourselves with "it's just a machine, it doesn't really think." Because if we can't even define "really think," that statement is empty.

When AI passes the Turing test, writes moving poetry, designs elegant architectures, we have to either admit "okay, maybe that counts as creation too," or admit "okay, maybe we never knew what creation was all along."

Either option is unsettling.


Back to open source. I'm inclined to think it won't die, because that "pulling up the ladder" paradox is real. AI needs continuous, high-quality, public training data to evolve. If humans stop creating and sharing, AI stagnates.

But this argument has a premise: human wisdom hasn't hit the ceiling yet. If we can keep generating new knowledge, new patterns, new solutions, open source remains valuable because it's the channel through which this new knowledge flows.

Is this premise reasonable? I think in most domains, yes. But maybe in some areas we really are approaching the ceiling. Writing a standard CRUD app, for instance—maybe there really isn't much new to learn. In that case, AI consuming existing open source projects might be enough.

But even then, doesn't this show that humans define what "enough" means, what "standard" means, what the "ceiling" is? If humans stop defining these boundaries, how does AI know when to stop, when to continue?

So we've come full circle back to the same question: we don't know what creation is, but we seem to keep creating something. AI doesn't know what it's doing, but it seems to be doing something similar.

Maybe what really makes us anxious isn't whether open source will die or whether AI will replace humans, but this: AI held up a mirror, and we don't quite like what we see in it.

We see: a species that doesn't fully understand what it's doing, worrying that another system that doesn't understand what it's doing might take away something we don't understand.

It sounds absurd, but maybe that's the truth of 2025.
