IaC exists for OSS package dev/release in the form of:
- pyproject.toml
- .github/workflows/ (any .yml/.yaml file under here)
- .github/dependabot.yml
This offers a control surface without a control plane: there is no single "command centre" with which to operate on the set of such automation configs across repos ("FleetOps", if you will).
These config files give declarative configuration of external systems, but they lack cross-repo visibility, a shared composition mechanism, and any way to query or enforce consistency.
Consider a template language that could be parameterised at the repo level, so that a given repo's configs could be validated as fully determined by that config, or flagged where they are not. Completely consistent config could be specified trivially, while deviations would be marked as distinct in some per-repo way.
- Note that this already suggests a design constraint: any such mechanism would ideally be something we could lint in a pre-commit config on every commit/PR.
For example, contrast a dependabot config with standard entries against bespoke CI job definitions with peculiar specs. Imagine a system that could produce both from some config and then check whether the output matches what's on file: it would model those parts of the repo.
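To make that concrete, here's a minimal sketch of the "render then check" loop for a dependabot config. The parameter schema (REPO_PARAMS, the template strings) is entirely hypothetical, just enough to show that an empty diff means "fully determined by the model":

```python
import difflib

# Hypothetical per-repo parameters -- names are illustrative, not a real schema.
REPO_PARAMS = {
    "my-lib": {"ecosystems": ["pip", "github-actions"], "interval": "weekly"},
}

TEMPLATE = """\
version: 2
updates:
{entries}"""

ENTRY = """\
  - package-ecosystem: "{ecosystem}"
    directory: "/"
    schedule:
      interval: "{interval}"
"""

def render_dependabot(params: dict) -> str:
    """Render the canonical dependabot.yml for one repo's parameters."""
    entries = "".join(
        ENTRY.format(ecosystem=eco, interval=params["interval"])
        for eco in params["ecosystems"]
    )
    return TEMPLATE.format(entries=entries)

def check(repo: str, on_disk: str) -> list[str]:
    """Diff the modelled config against what's on file.

    An empty diff means the repo's config is fully determined by the model;
    anything else is a per-repo deviation to either adopt or mark as distinct.
    """
    expected = render_dependabot(REPO_PARAMS[repo])
    return list(difflib.unified_diff(
        expected.splitlines(), on_disk.splitlines(),
        "modelled", "on-disk", lineterm="",
    ))
```

The same render/check shape would apply to workflow files; dependabot is just the easiest case because its entries are so uniform.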
So what?
The first thing this would achieve is the ability to parse and query, making it trivial to count and filter repos by particular config states.
To take a recent example, I wanted to know how far my new year's resolution to roll out Trusted Publishing across all my packages had got, and to identify the stragglers. This was part of what I achieved with ossify last week: a listing of all my repos and various aspects of their maintenance. The static site I made there was useful, but far from the best way of doing this.
It's worth noting that I then had to go and make the edits manually based on what I found. I had no ready-made way to operate in bulk on the subsets of repos the ossify software identified: it was essentially a read-only visibility layer I could annotate in a web app, with no connection back to the repos whose properties it surfaced.
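The Trusted Publishing audit itself is the kind of query such a layer would make trivial. A rough sketch, assuming local clones live under one directory and using crude text matching (the marker strings are my heuristic, not an official fingerprint; a real model layer would query parsed workflow structures):

```python
from pathlib import Path

# Heuristic markers for a Trusted Publishing release workflow: the OIDC
# permission plus the official publish action. Plain text search keeps this
# dependency-free, at the cost of being fooled by comments etc.
MARKERS = ("id-token: write", "pypa/gh-action-pypi-publish")

def uses_trusted_publishing(workflow_text: str) -> bool:
    return all(marker in workflow_text for marker in MARKERS)

def audit(clones_dir: Path) -> dict[str, bool]:
    """Map each repo clone to whether any workflow uses Trusted Publishing."""
    report = {}
    for repo in sorted(p for p in clones_dir.iterdir() if p.is_dir()):
        workflows = repo.glob(".github/workflows/*.y*ml")
        report[repo.name] = any(
            uses_trusted_publishing(wf.read_text()) for wf in workflows
        )
    return report

# stragglers = [name for name, ok in audit(Path("~/dev").expanduser()).items() if not ok]
```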
The second thing a deterministic model would achieve is replacing ad-hoc shell scripting for bulk actions on repos (reducing rework). Some of the repo maintenance tasks I do now are effectively just looping over GitHub API results to merge update PRs, and this sets a pretty restrictive limit on how sophisticated my release process can be.
Thirdly, I'd expect such a model layer to provide visibility on local development divergence: keeping local repo clones fresh, or else identifying any with unpushed changes hanging around which might block operations.
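The divergence check per clone is cheap with plain git plumbing; a minimal sketch (treating a missing upstream as divergent, which is a judgment call):

```python
import subprocess
from pathlib import Path

def repo_state(clone: Path) -> dict[str, bool]:
    """Classify a clone: dirty working tree and/or commits not on the upstream."""
    def git(*args: str) -> str:
        return subprocess.run(
            ["git", "-C", str(clone), *args],
            capture_output=True, text=True, check=True,
        ).stdout
    # Any tracked modifications or untracked files?
    dirty = bool(git("status", "--porcelain").strip())
    try:
        # Commits on HEAD that the upstream branch lacks; empty if fully pushed.
        unpushed = bool(git("log", "--oneline", "@{u}..").strip())
    except subprocess.CalledProcessError:
        unpushed = True  # no upstream configured counts as divergent here
    return {"dirty": dirty, "unpushed": unpushed}
```

Run over every clone under a dev directory, this gives the "safe to bulk-operate?" signal before any fleet-wide edit.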
Case study: bulk edits
To take a current concern that this idea would immediately benefit, I just read that CI test workflows should have --resolution lowest-direct as well as --frozen:
> --resolution lowest-direct will use the lowest compatible versions for all direct dependencies, while using the latest compatible versions for all other dependencies.
This helps ensure your pinned lower bounds actually work, whereas you'd expect the versions in the lockfile to be more recent. It's no different from testing the lower bound of your supported Python versions (which prevents you from accidentally using a language feature from a more recent Python that crashes when run on the older versions you still support).
The MCP Python SDK does this:
```yaml
test:
  name: test (${{ matrix.python-version }}, ${{ matrix.dep-resolution.name }}, ${{ matrix.os }})
  runs-on: ${{ matrix.os }}
  strategy:
    matrix:
      python-version: ["3.10", "3.11", "3.12", "3.13", "3.14"]
      dep-resolution:
        - name: lowest-direct
          install-flags: "--upgrade --resolution lowest-direct"
        - name: locked
          install-flags: "--frozen"
```
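For context on how those matrix flags get consumed, a job step along these lines would apply them (illustrative, not copied verbatim from that workflow):

```yaml
steps:
  - uses: astral-sh/setup-uv@v5
  - name: Install dependencies
    run: uv sync ${{ matrix.dep-resolution.install-flags }}
  - name: Run tests
    run: uv run pytest
```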
To put that in my repos I'd need to first identify which of my repos even have CI tests, then go and perform the edits. The only way this could feasibly be done on (let's guess) 40-50 repos at once would be to already have well-modelled inputs, on the basis of which to bulk edit, push, open PRs, and check CI status on those PRs.
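Even the "identify" step benefits from the model. Lacking one, a stopgap sketch over local clones (crude text matching again; the uv-invocation strings I grep for are assumptions about how my test workflows happen to look):

```python
from pathlib import Path

def needs_lowest_direct(clones_dir: Path) -> list[str]:
    """Repos that run tests with uv but don't yet exercise lowest-direct.

    Heuristic text matching -- a real model layer would answer this as a
    query over parsed workflow structures instead.
    """
    hits = []
    for repo in sorted(p for p in clones_dir.iterdir() if p.is_dir()):
        texts = [wf.read_text() for wf in repo.glob(".github/workflows/*.y*ml")]
        has_tests = any("uv sync" in t or "uv run pytest" in t for t in texts)
        has_lowest = any("--resolution lowest-direct" in t for t in texts)
        if has_tests and not has_lowest:
            hits.append(repo.name)
    return hits
```

With modelled inputs the subsequent edit would be a template change applied fleet-wide, rather than 40-50 hand edits driven by this list.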
Case study: informational operations
Another angle to motivate such a layer is identifying groupings. For example, I now have a few repos which publish bindings to Rust crates as Python packages. You can bump the version of the underlying crate dependency (the bound Rust code), but build tests won't necessarily give you the information you need to judge whether your work there is done.
A new crate version might introduce a new option in a config struct that you expose to Python, and your bindings might then no longer be complete. You really need to check more than just "does it build?", and to do that you'd need to review the relevant information (which you might find in release notes, or else in a cargo public-api diff; on the Python side it'd be griffe check).
The specifics here are less important than the general requirement it highlights: this needs some user interface to be made feasible as an automated task across repos.
Perhaps this is a niche concern; or perhaps, if this were pursued, we'd find various other situations where release notes are something we want to surveil and act on as a routine part of the software release process.