It’s the age of the agents. Everybody has them. Everybody writes, builds, and runs them. Everybody blogs about them (so do we). But few really know what they are or how to treat them. Like everyone else, we occasionally debate it.
As in our other posts (our recurring theme), we don’t have time to wait for the industry to settle so we can just cargo-cult it. So we decided to produce, store, and use our agents (at least the most useful ones) in one place: a common GitHub repository that loves all CLIs.
What actually happens without one place
When everyone on the team explores on their own, the cost is high. Someone finds a prompt that works for understanding the codebase; someone else invents a different one. The differences in the agent descriptions are subtle, and the outputs deviate slightly. Agents and skills multiply across people and Slack threads. Nobody can reuse what someone else already figured out. Nobody reads their own 120 lines of generated document, while expecting everyone else to go through it diligently (more on that later).
Searching for the right prompt or the right skill is only part of the problem. The bigger problem is that the exploration never compounds. Everyone pays the cost of discovery, trial, and error; almost nobody gets the return of standardisation. My engineering brain cannot survive a repo without a solid utility package. Now that entire teams and organisations are turning into executables with agents, my heart seeks the same.
So we simply wanted to reduce that cost and increase the return. That meant joining efforts and agreeing on some standardisation.
What we keep
Simply put, anything that contributes to our common SDLC process is kept here and used by the whole team, the agents, and our processes, so we have a common, ubiquitous system. Here are some of the most used ones:
- Commands
- research-codebase: analyses the codebase for domain concepts and patterns so you (or the agent) can reason about it
- generate-plan and spec: turn a ticket or idea into a structured implementation plan or specification
- implement-plan: runs a plan file and follows project rules
- deslop: reviews the diff and strips AI slop before commit
- review-issue: reviews a Linear issue, asks the poster relevant questions, reads the answers, and incorporates them back into the issue.
How we use them
Installation copies these into each agent’s directories. The npx installer detects Cursor and Claude on the machine, lets the user pick which to install for, and copies commands into ~/.cursor/commands/integral/ (or ~/.claude/commands/integral/) and skills into the matching skills/ tree. A small config file (e.g. ~/.integral-agents.json) records what was installed and when. Without a version pin, npx fetches the latest from the repo each time you run it, so re-running the installer pulls in updates.
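The layout logic can be sketched roughly as below. This is a hedged illustration, not the actual installer: the function names, the manifest shape, and the exact skills path are assumptions for the sake of the example.

```typescript
import * as path from "path";

// Hypothetical helper: given an agent's root dir (~/.cursor or ~/.claude),
// compute where the shared commands and skills would be copied.
// (Names and the skills subpath are assumed, not the real installer code.)
function installPaths(agentRoot: string) {
  return {
    commands: path.join(agentRoot, "commands", "integral"),
    skills: path.join(agentRoot, "skills"),
  };
}

// Hypothetical manifest written after install, i.e. roughly what a file
// like ~/.integral-agents.json might record: which agents were targeted
// and when the install ran.
function buildManifest(targets: string[]): string {
  return JSON.stringify(
    { installedAt: new Date().toISOString(), targets },
    null,
    2
  );
}

const cursor = installPaths("/home/me/.cursor");
console.log(cursor.commands); // /home/me/.cursor/commands/integral
console.log(buildManifest(["cursor", "claude"]));
```

Recording a manifest like this is what lets a re-run report what is already installed instead of blindly overwriting, while the unpinned npx fetch keeps the copied files tracking the repo’s latest state.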

Anyone on the team can (and should) update the agents; it is one PR away.
Agents can run locally, in our automation system (we’re early evaluators of tembo.io, and we have our own isolated agent execution environment), or be triggered from Slack.
Result
We’re exploring whether we can treat agents the same way we treat source code, and trigger them the same way we run CI/CD. You don’t let anyone on your team keep their own production version of the application and deploy it as they like. You let people experiment, change, and merge back. You don’t let anyone deploy to production on a whim; you orchestrate that with safeguards. We’re trying to establish the same for agents: one place, changes via PR, runs orchestrated rather than ad hoc. How we actually run and trigger agents (locally, in automation, from Slack) is a topic for another post. Stay tuned.
There’s also an organisational upside we didn’t expect. Collaboratively writing a simple, plain-English document (an agent description, a skill, a reference) turns out to be useful and oddly satisfying (dare I call it mob-agenting?). This post is one result of that. The repo is another.
This post is part of the SDLC2 series. [Previous post in series] | Series index or next post.