The following article was originally published on Angie Jones’s website and is being republished here with the permission of the author.
I’m seeing more and more open source maintainers fed up with AI-generated pull requests. Some have even stopped accepting PRs from external contributors.
If you’re an open source maintainer, you may have felt this pain. We’ve all been there. It’s disheartening to review a PR that not only ignores the project’s coding conventions but is also full of AI slop.
But y’all, what are we doing?! Closing the door on contributors is not the solution. Open source maintainers don’t want to hear this, but this is how people code now, and you need to do your part to prepare your repo for AI coding assistants.
I maintain goose, which has more than 300 external contributors. We felt this frustration early on, but instead of pushing away well-intentioned contributors, we worked to help them contribute with AI responsibly.
1. Tell humans how to use AI on your project
We created a HOWTOAI.md file as a straightforward guide for contributors on how to use AI tools responsibly when working on our codebase. It covers:
- What AI is good for (boilerplate, tests, documentation, refactoring) and what it is not good for (security-critical code, architectural changes, code you don’t understand)
- The expectation that you are accountable for every line you submit, whether AI-generated or not
- How to verify AI output before opening a PR: build it, test it, lint it, understand it
- Being transparent about your use of AI in your PR
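To make this concrete, here is a rough skeleton of what such a file could look like. The headings and wording below are illustrative, not a copy of the actual HOWTOAI.md:

```markdown
# How to use AI on this project

## Good uses of AI
Boilerplate, tests, documentation, refactoring.

## Where to be careful
Security-critical code, architectural changes, and any code you
don't fully understand yourself.

## You are accountable
You own every line you submit, AI-generated or not.

## Before opening a PR
Build it. Test it. Lint it. Understand it.

## Be transparent
Mention in your PR description how AI was used.
```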
It welcomes AI-assisted PRs but also sets clear expectations. Most contributors want to do the right thing; they just need to know what the right thing is.
And while you’re at it, take a fresh look at your CONTRIBUTING.md too. A lot of the problems people blame AI for are actually problems that were always there, just exacerbated by AI. Be specific. Don’t just say “follow the code style”; explain what the code style is. Don’t just say “add tests”; show what a good test looks like in your project. The better your docs are, the better both humans and AI agents will perform.
2. Tell agents how to work on your project
Contributors aren’t the only ones who need instructions. AI agents need them too.
We have an agents.md file that AI coding agents can read to understand our project conventions. It includes the project structure, build commands, test commands, linting steps, coding rules, and clear “never do this” guardrails.
When someone points their AI agent at our repo, the agent automatically picks up these conventions. It knows what to do and how to do it, what not to touch, how the project is structured, and how to run the tests to check its work.
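If you haven’t set one up yet, a minimal sketch might look like this. Every command and path below is a placeholder, not our real configuration:

```markdown
# agents.md

## Project structure
- src/        application code
- tests/      test suite

## Commands
- Build: `make build`
- Test:  `make test`
- Lint:  `make lint`

## Rules
- Follow the existing module layout; don't add new top-level directories.
- Run the full test suite before declaring a task done.

## Never do this
- Never edit generated files under `build/`.
- Never commit secrets or disable security checks.
```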
You can’t complain that an AI-generated PR doesn’t follow your conventions if you never told the AI what your conventions are.
3. Use AI to review AI
Investing in an AI code reviewer as the first touchpoint for incoming PRs has been a game changer.
I already know what you’re thinking…they suck too. LOL, fair. But again, you have to guide the AI. We added custom instructions so the AI code reviewer knows what we actually care about.
We told it our priority areas: security, correctness, architectural patterns. We told it what to skip: style and formatting issues that CI already catches. And we asked it to comment only when it’s confident there is a genuine issue, not to nitpick just to leave a comment.
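As a sketch, custom reviewer instructions along these lines capture the idea. The wording here is illustrative, not our exact file:

```markdown
# Code review instructions

Focus on, in this order: security issues, correctness bugs, and
violations of our architectural patterns.

Skip style and formatting feedback; CI already enforces those.

Only leave a comment when you are confident there is a genuine
issue. Do not nitpick for the sake of commenting.
```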
Now contributors get feedback before a maintainer even sees the PR. They can clean things up themselves. By the time a PR reaches us, the obvious issues have already been taken care of.
4. Write good tests
No, seriously. I’ve been telling you all this for years. Anyone who follows my work knows I’ve been on the test automation soapbox for a long time. And I want everyone to hear me when I say that a solid test suite matters now more than ever.
Tests are your safety net against bad AI-generated code. Your test suite can catch breaking changes from contributors, human or AI.
Without good test coverage, you’re left doing manual reviews on every PR and trying to reason about correctness in your head. That isn’t sustainable with five contributors, let alone 50 of them, half of whom are using AI.
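For illustration, here’s what a small behavior-focused test might look like in Python. The `slugify` function is a hypothetical stand-in for any little utility in your codebase:

```python
# Hypothetical example: test behavior, not implementation details.
# `slugify` stands in for any small utility in your project.

def slugify(title: str) -> str:
    """Convert a title to a lowercase, hyphen-separated URL slug."""
    return "-".join(title.lower().split())

def test_slugify_lowercases_and_collapses_whitespace():
    # The test name states the expected behavior, and the assertions
    # check observable output, so a refactor (human- or AI-written)
    # that breaks the contract fails loudly in CI.
    assert slugify("Hello   World") == "hello-world"
    assert slugify("  AI Coding  ") == "ai-coding"

test_slugify_lowercases_and_collapses_whitespace()
```

A suite of tests like this is exactly the safety net that lets you trust a green check mark instead of re-deriving correctness in your head on every review.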
5. Automate Boring Gatekeeping with CI
Your CI pipeline should do the heavy lifting on quality checks so you don’t have to. Linting, formatting, and type checking should all run automatically on every PR.
This isn’t new advice, but it matters more now. When you have clear, automated checks running on every PR, you create an objective quality bar. The PR either passes or it doesn’t, whether it was written by a human or an AI.
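As a sketch, a GitHub Actions workflow enforcing that bar could look like the following. The job name and `make` targets are placeholders for whatever your project actually uses:

```yaml
# Hypothetical CI gate: lint, type-check, and test every PR.
name: quality-gate
on: [pull_request]

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Lint and format check
        run: make lint        # placeholder for your linter/formatter
      - name: Type check
        run: make typecheck   # placeholder for your type checker
      - name: Tests
        run: make test        # placeholder for your test runner
```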
For example, in goose, we run a GitHub Action on any PR that touches reusable prompts or AI instructions to ensure they don’t contain prompt injections or anything malicious.
Think about what’s unique to your project and see whether you can add CI checks for it to keep the quality high.
I understand the impulse to lock things down, but y’all, we can’t give up what makes open source special.
Don’t close the doors to your projects. Raise the bar, then give people (and their AI tools) the information they need to clear it.
On March 26, join Addy Osmani and Tim O’Reilly at AI CodeCon: Software Craftsmanship in the Age of AI, where an all-star lineup of experts will delve deeper into orchestration, agent coordination, and the new skills developers need to create excellent software that creates value for all participants. Sign up for free here.