I’m on the cautiously optimistic side when it comes to AI coding tools and open source.
I want to use this post to document my opinions on AI coding tools and OSS maintenance at this specific point in time.
Given how quickly AI models and coding tools improve, I’d be curious to see how many of these takes hold up, or change, after six months or a year.
About me
I’m a Ruby committer. I also maintain Ruby’s RDoc, IRB, and Reline libraries.
I don’t see myself as an AI expert, perhaps not even a power user. I use AI tools regularly at work and for OSS development, but haven’t explored many advanced features, such as custom skills. My primary setup is Claude Code with Opus 4.5, so my takes are largely shaped by that.
I’ve used AI to assist my contributions to ZJIT and the creation of Ruby’s new documentation theme. Without AI, I wouldn’t have attempted these—or not to the same extent, due to the upfront cost.
More developers will contribute to OSS using AI tools
I think a main reason is that they see these tools work well in their work and personal projects. Laziness or malicious intent could also be behind some contributions, but I want to believe that the majority genuinely believe AI is helping them contribute.
AI coding skills vary even more than traditional coding skills
Just like every developer’s coding skill can vary, sometimes a lot, our AI coding skills and our perceptions of coding with AI can vary a lot too. I’d argue that, as of now, the differences could be even bigger than traditional coding skill differences.
We have more variables now:
- The models people have access to (due to budget limits, company policies, regions, etc.)
- Usage limits
- The interfaces they use (CLI, IDE integration, chat)
- What projects they use those tools on daily
- The person’s moral compass
- …and many others
(I don’t want to get too deep into these variables here.)
AI is a multiplier, not a leveler
AI amplifies existing developer habits, good or bad. If you lack certain good traits in software development, such as curiosity, willingness to dig into root causes, or knowing when to ask for help, AI won’t fill that hole. It’ll just help you produce more of whatever you were already producing.
The maintainer’s dilemma
As an OSS maintainer, I don’t get to control what tools people use to “help” them contribute to the project, or how they use them.
I’ve seen more developers feel “enabled” by AI tools to start contributing to OSS projects; in other words, they would not have contributed without these tools. I’m one of them when it comes to contributing to ZJIT, as I’ve detailed in a previous post.
But this also means we’re seeing more low-effort, low-quality contributions.
So what distinguishes good-faith AI-assisted contributions from low-effort ones? I don’t have a good definition that’s worth sharing, but here’s what I look for:
- Did the contributor commit the changes themselves? (This hopefully indicates they at least did a final review.)
- Can they answer: what problem they’re solving, and why this specific approach?
The solution exploration doesn’t need to be exhaustive; it’s okay to make mistakes and ask questions. The point is that the contributor has good intent and stays engaged with their own work.
AI agents are changing the maintainer-contributor dynamic
Before AI tools, the contribution process involved two parties: maintainers and contributors (it can also involve community discussions, but let’s keep it simple for now). Now there are three: maintainers, contributors, and contributors’ agents.
This creates new communication channels:
- Maintainer → Contributor: CONTRIBUTING.md, PR reviews (unchanged)
- Maintainer → Contributor’s Agent: Agent instruction files like AGENTS.md, CLAUDE.md, etc. (new)
- Contributor → Their Agent: prompts, instructions
Agent instructions talk directly to agents, not necessarily to contributors. A contributor might not read your docs, but their agent is more likely to. This lets maintainers influence how contributors’ tools behave in their repo.
For example, maintainers can ask agents to:
- Care about commit hygiene
- Not commit anything that breaks tests
- Be concise when generating comments, or use a specific format
In the past, these practices were hard to enforce: you could document them, but contributors might not read or follow them. Now that agents are in the loop and tend to follow instructions, this may finally work in maintainers’ favor too.
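As a rough sketch, that kind of maintainer-to-agent guidance could live in an AGENTS.md like the one below. The layout, commands, and wording are hypothetical placeholders, not from any specific project:

```markdown
# Notes for coding agents

## Commits
- Keep commits small and focused, with imperative messages ("Fix ...", "Add ...").
- Never commit changes that leave the test suite failing; run it before committing.

## Comments and documentation
- Keep generated comments concise and explain the "why", not the "what".
- Match the existing documentation style; don't introduce a new format.
```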
But in my opinion, one channel should stay the same: Contributor → Maintainer communication should remain human-to-human. PRs and discussions should come from the contributor, not their agent.
This doesn’t mean you can’t use AI to help draft a PR description; just review it like you would review the code. The expectation is that you’ve reviewed and understood what you’re submitting.
What does this mean for maintainers
Given this new dynamic, I think projects should provide AI-related guidance via agent instruction files. This isn’t about preventing AI slop, as you can’t really stop bad contributions, with or without AI. It’s about empowering good-faith contributors, your fellow maintainers, and their agents to work more effectively with your project.
Yes, it will add more to the maintainer’s plate. But AI can help with that too—if maintainers have access to these tools.
AI companies should sponsor maintainers
Contributors now have access to powerful AI tools. But many maintainers don’t, and without these tools they only feel the negatives: more contributions to review, some of them low-quality, and no better means to keep up.
I personally think AI coding tools are the biggest developer productivity boost in recent memory. And the people maintaining our shared infrastructure should have access to them too.
Similar to how CDN and hosting companies sponsor usage credits to OSS projects, I think AI companies can sponsor access to their tools. For example, maintainers of popular OSS projects could get Claude Code Max free of charge.
The exact mechanism could vary—credits, free tiers, partnerships—but providing sponsorship hits multiple birds with one stone:
- It helps projects progress faster
- It allows maintainers to catch up with contributors’ tooling and respond accordingly (e.g., maintain good agent instructions)
- Publicly visible agent instructions (AGENTS.md, skills, etc.) enable sharing real-world agentic coding practices, which helps broader adoption
If these tools help developers ship faster, let’s make sure maintainers have access too.
To contributors
I encourage using AI to contribute to projects I maintain. The expectations I outlined earlier apply here too—review your own work, be able to explain what and why.
If you want to contribute but aren’t familiar with the codebase, use AI to help you learn. Ask it questions and verify the answers by digging into the code yourself. Treat AI as another person who’s also new to the codebase—have discussions together, run experiments together.
To maintainers who haven’t tried AI tools
If you’re still skeptical, I’d echo Armin Ronacher’s advice: give yourself a week to really try it. Not just a quick test—actually use it for tasks you’re already planning to do.
I recommend treating it as a second pair of eyes first. Let it help you with tasks you already understand well, so you can evaluate its output critically.
Once you’re comfortable, create an AI instruction file like AGENTS.md with AI’s help. At the very least, tell AI how to:
- Build your project
- Run tests
- Run linters
- Run end-to-end tests (if applicable)
The latest models should be able to help you generate these with minimal input. If you’re missing any of these instructions in your CONTRIBUTING.md, you can improve it together as well.
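For a typical Ruby gem, the result can be as small as the sketch below. The commands are placeholders for common tooling (Bundler, Rake, RuboCop); substitute whatever your project actually uses:

```markdown
# AGENTS.md

## Setup
- Install dependencies: `bundle install`

## Checks to run before committing
- Test suite: `bundle exec rake test`
- Linter: `bundle exec rubocop`
- Build the gem: `bundle exec rake build`
```

Even a file this small saves an agent from guessing at your toolchain, and it doubles as a quick sanity check that your human-facing docs cover the same steps.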
With these instructions in place, agents can do a lot with minimal intervention:
- Prototype a few solutions to an issue and summarize the results
- Identify and remove dead code, then run tests to verify
- Execute and test documented code examples
This is where I feel agents start to significantly increase my productivity.
Long-term optimism
I think in the long run, AI will help the community maintain and improve OSS projects.
RDoc’s new Aliki theme is one example—I wouldn’t have built it without AI. Beyond that, AI has helped me address markdown parsing issues, explore refactoring ideas, and more. It’s made project maintenance a bit more fun, instead of just extra debugging on weekends.
I’d be interested to see if AI tools will help revive unmaintained projects. And whether they’ll help raise a new generation of contributors—or even maintainers.