Locking Myself Out — By Design!
Building Continuous-Integration Gates Before Continuous-Deployment Pipelines
A week ago I published “How You Do Anything Is How You Do Everything,” and the closing list of “things to do next” mentioned this article’s topic with a specific commitment: “Every yaml file, every action, every pinned image — I intend to read carefully enough to write the equivalent on a blank editor next time.” This article is the receipt for that commitment.
The work it describes is small in line count and large in posture. Roughly fifty lines of yaml, twenty lines of JSON, a paragraph of documentation, and a few clicks in a settings page. From the outside it looks unremarkable. From the inside it is the moment I stopped trusting my own discipline and started trusting a structure I built to outlast my discipline. Both halves of that sentence are intentional; the first is not a confession of weakness so much as an admission that on a long enough timeline, every solo developer’s discipline meets a Tuesday at 11 p.m. that beats it. Better to have a structure than to bet on the Tuesday going your way.
That structure has a familiar name in the industry: continuous integration. The phrase has accreted enough meaning over the years to have stopped meaning anything specific without context, so let me ground it in mine.
The CI I Knew
In two previous jobs, spanning more than two decades, my exposure to CI was a build server that someone — sometimes me — kept alive. Code was committed to a central repository. The build server periodically built it. If the build was red, somebody got an email. If the build had been green for too long without anyone looking, somebody might still get an email, depending on which generation of the system we were on at the time.
The discipline that turned this into a useful practice lived entirely outside the build server. It lived in the engineers. We were the ones who remembered to check the latest build status before merging a feature. We were the ones who reviewed each other’s commits in person — over a shoulder, around a whiteboard — before anything controversial landed. We were the ones who held the trunk’s quality. The build server informed that discipline; it did not enforce it. Nothing prevented a developer from merging a feature whose tests had not yet run, or whose tests had run but were red, or whose code reviewer had stamped approval before reading the diff. The friction against doing those things was social, not mechanical. And yes, we sometimes merged code that had not passed its unit tests. At least once, we merged code that didn’t even build!
I do not say this with disdain. The teams I worked on were good and the discipline was real. But it was real because we made it real, day by day, person by person. There was nothing structural about the system that insisted on it. And in the moments when we were tired, or chasing a deadline, or new to a part of the code, the discipline could shave thinner without anyone noticing — until the symptom showed up in production a week later.
That is one shape of CI. The pipeline I built this week is a different one.
Gates as a Type of Memory
The shift I want to draw out is not about what CI does — both the old and the new build, test, and inform — but about whether the act of merging is conditional on the build’s verdict. In the old shape, I was the conditional logic: a human, looking at a status indicator, deciding whether to click Merge. In the new shape, the merge button itself is wired to the build’s verdict. If the build is red, the button does not light up. If the build has not run yet, the button does not light up. There is no condition for me to evaluate; the gate evaluates it.
Spelled out this way, it sounds like a small thing, and in line count it is. Its substance is that every gate is a piece of structural memory. Each one represents a class of mistake the team — in my case, the team of one — has decided is no longer acceptable to ship. The discipline of “I won’t merge until I run the tests” lives in my head and erodes when I am tired. The gate that reads “the test job must pass before merge is allowed” lives in a yaml file and erodes only when I rewrite the yaml file. That file is much more durable.
So what gates did I install? In sequence, every time a pull request opens or updates:
Restore dependencies.
Build the solution.
Run the tests.
Run a strict formatting check that fails on whitespace drift.
Publish each of the headless services for a non-Windows runtime even though I deploy on Windows.
Each one corresponds to a category of bug I no longer want to be capable of merging.
The build gate catches the obvious thing: I broke the compilation. The test gate catches the next-obvious thing: my change passed the compiler but tripped a behavior I had encoded as a unit test. The format gate catches whitespace, brace placement, and other style drift that I would otherwise let accumulate quietly until some massive, quarterly cleanup commit. None of those three is surprising; they are the canonical CI gates that most teams run by default.
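For concreteness, here is a minimal sketch of those gates as a GitHub Actions workflow. This is not the repo’s exact file: the job name, the solution-filter name, and the project path in the publish step are illustrative stand-ins, not copied from the repository.

```yaml
# Illustrative sketch of the gate workflow; names are placeholders.
name: CI
on:
  pull_request:
    branches: [master]

jobs:
  gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'   # .NET 8 only; see the SDK decision later
      # 1. Restore dependencies.
      - run: dotnet restore WxServices.slnf
      # 2. Build the solution (filtered to the cross-platform projects).
      - run: dotnet build WxServices.slnf --no-restore
      # 3. Run the tests.
      - run: dotnet test WxServices.slnf --no-build
      # 4. Strict formatting check; fails the job on any drift.
      - run: dotnet format WxServices.slnf --verify-no-changes
      # 5. Prove a headless service publishes cleanly for a Linux host.
      - run: dotnet publish src/MetarParser/MetarParser.csproj -r linux-x64
```

The real workflow repeats the fifth step once per headless service; one is shown here.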
The fifth gate is more interesting, and I want to dwell on it.
The Gate That Caught a Bug I Could Not See
The fifth gate publishes each of the headless services for linux-x64. I do not deploy to Linux. The four services run on Windows under Windows Service hosting; that is the only runtime they have ever known. The reason I gate on a Linux publish anyway is not because I intend to ship to Linux this week, but because the project is moving — slowly, a step at a time — toward a future where it might. Containerization is on the docket. Continuous deployment is on the docket. Both depend on the services being honestly buildable for a Linux host. The cheapest way to keep that future open is to make every pull request demonstrate, here and now, that the services can publish for Linux without errors.
I was not expecting that gate to find anything on its first run. The codebase had been building cleanly on my Windows machine for weeks. I was, frankly, expecting a shrug and a green check.
What it found, instead, was that the project’s Directory.Build.props — the file that overrides where MSBuild puts intermediate artifacts — hardcoded an absolute Windows path of the form C:\HarderWare\BuildCache\WxServices. This was a deliberate choice from many weeks ago: the project’s source tree lives inside Dropbox, and I did not want the build cache to sync to Dropbox along with the source. The path put the cache somewhere outside Dropbox’s reach. On Windows, that does the right thing. On the Linux runner, the build system did not recognize C: as a drive letter; it treated the absolute path as a relative one and spliced it into wherever the project was being built. Out came file paths like /home/runner/work/HarderWare/HarderWare/WxServices/src/MetarParser/C:/HarderWare/BuildCache/WxServices/MetarParser/obj/MetarParser.csproj.*.props — an unreal location whose lookup yielded sixteen MSBuild errors in a fraction of a second.
The fix took two minutes — wrap the relevant property group in a condition that fires only when MSBuild is running on Windows. On non-Windows hosts, fall back to MSBuild’s defaults. On Windows, keep the Dropbox-escape redirect that was the entire point. Two minutes to write, ten minutes to think about why I had not considered the case before, an hour or so to sit with the implication.
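The shape of the fix, sketched from memory rather than pasted from the repository (the property name in the real file may differ; the path is the one the error message exposed):

```xml
<!-- Directory.Build.props (sketch). Redirect intermediate build output
     outside the Dropbox-synced source tree, but only on Windows, where
     that absolute path actually exists. -->
<Project>
  <PropertyGroup Condition="$([MSBuild]::IsOSPlatform('Windows'))">
    <BaseIntermediateOutputPath>C:\HarderWare\BuildCache\WxServices\$(MSBuildProjectName)\obj\</BaseIntermediateOutputPath>
  </PropertyGroup>
  <!-- No else branch needed: on non-Windows hosts the group is skipped
       and MSBuild falls back to its default obj/ location. -->
</Project>
```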
The implication is this: if I had not built the Linux-publish gate, I would have shipped a project that did not honestly publish on Linux, and I would not have known it for another few weeks or months — until a future version of me asked Claude to show me how to put the services in a container, the container build failed in some confusing way at 11 p.m. on a Tuesday, and I spent an evening discovering the same bug under deadline pressure instead of under the calm conditions of a planned CI rollout. The category of bug — Windows-specific path leaking into a build configuration that is supposed to be portable — is invisible from inside a Windows-only development environment. It is visible only from somewhere else. The CI runner, which is somewhere else by construction, makes that category of bug visible by default.
This is why the named pattern that underlies CI runners is worth knowing: a hermetic build. Each run starts on a fresh, ephemeral virtual machine, with nothing on it except the toolchain and what the workflow file installs. Nothing carries over from a prior run. Nothing carries over from a developer’s laptop. No “but it works on my machine” — an uncommon but real problem in my latest job. Hermeticity is the property that lets you trust the build’s verdict because it would have come out the same way for anyone, anywhere, who ran the same workflow against the same commit. The Linux runner, in our case, was a hermetic build that exposed a Windows assumption I could not have seen from inside my own assumptions. There is a parable about this, most often associated now with David Foster Wallace’s This Is Water commencement speech (though the figure predates him): the fish does not see the water. The runner sees the water because it is a different fish.
Locking Myself Out
I have described the gates and what they catch. They are useful, however, only if I use them. A gate I can walk around is decoration. The piece of the system that turns the gates from decoration into structure is branch protection.
Branch protection is the GitHub configuration that says: master cannot receive changes except via pull requests whose required checks have all passed. That sentence sounds like a self-evidently good idea, and it is, but it is not a default. You have to enable it. And when you enable it, you also choose whether administrators — that would be me — can override the gate. The default is that administrators can. I unchecked that default. The setting is named, with admirable directness, Do not allow bypassing the above settings.
I want to say plainly why I unchecked it, because the reason matters and is not what you might first assume. It is not that I distrust myself in some general way. It is that I know exactly the moment at which I would override the gate, because I have been that engineer in past jobs and I know what that engineer looks like. It is the moment when something has gone wrong, the rhythm is broken, the day is late, and the cost of doing the right thing — figuring out why CI is failing, fixing the underlying issue, waiting for the gate to re-run — feels, for that moment only, larger than the cost of bypassing the gate. The override exists for that moment, which is also the single moment at which the discipline the gate encodes is most likely to be circumvented. The “do not allow bypassing” setting takes that override away. It says, structurally, no, the gate cost is the gate cost; pay it or do not merge.
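I made the change through the settings page, but the same lock can be expressed through GitHub’s REST API, which is a useful way to see exactly what the checkboxes mean. A hedged sketch with the gh CLI follows; the required-check name is a placeholder:

```bash
# Sketch: mirrors the settings-page configuration via the REST API.
# "enforce_admins": true is the API-side name for the behavior that
# the "Do not allow bypassing" checkbox controls.
gh api -X PUT repos/PaulHarder2/HarderWare/branches/master/protection \
  --input - <<'EOF'
{
  "required_status_checks": { "strict": true, "contexts": ["gate"] },
  "enforce_admins": true,
  "required_pull_request_reviews": null,
  "restrictions": null
}
EOF
```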
I tested the lock by writing a deliberately broken pull request — a single unit test that always fails — and watching what GitHub showed me. The required check went red in the CI runner. The merge button went grey in the pull request. Hovering over it produced the tooltip “merging is blocked due to failing merge requirements.” As repository owner, I had no override available. The button stayed grey for me, exactly the way it would have stayed grey for any other person. I closed the broken pull request without merging. Closing it was the verification: I was, structurally, on the outside of the door I had built.
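The broken pull request’s entire payload was one test of roughly this shape. The framework here is xUnit, which is an assumption on my part for illustration; any framework’s always-failing assertion serves the same purpose.

```csharp
using Xunit;

public class BranchProtectionProbeTests
{
    // Deliberately red. Exists only to confirm that a failing required
    // check greys out the merge button, even for the repository owner.
    [Fact]
    public void FailingCheckMustBlockMerge() =>
        Assert.True(false, "Intentional failure: verifying the merge gate.");
}
```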
Solo development is the place where that lock matters most, not least. A team has a second pair of eyes built into the social structure: someone else has to approve the change. A team has a Slack channel where the broken build gets noticed within an hour (on a good day). A team has weekly interactions where the bypass habits of one member get caught by another. Solo development has none of those informal structures. Whatever discipline my repository runs on, I have to install structurally — and the most important place to install it is the place where my own override would be the easiest path. That is precisely the place where I locked myself out.
CI Before CD
Earlier in this work I described continuous integration as staging — the bridge to continuous deployment. That bridge is where this article’s subtitle is pointing. Continuous deployment is the practice of letting the system itself decide, on the basis of evidence collected during the merge, that a new commit is fit to put in front of users. The user-facing change happens automatically, gated only on the evidence the merge produced. There is no human in the deploy loop.
That is a high-trust posture, and the trust is not earned by wishing for it. It is earned, every commit, by the gates that ran during the merge and the integrity of the structure that prevented those gates from being skipped. CI is the evidence-collection step; CD is the decision step that consumes the evidence. Without trustworthy evidence-collection — the gates installed, the lock enforced, the runner hermetic — the decision step is not making a decision; it is gambling. And gambling on user-facing changes is the thing CD is not supposed to be.
Building the deploy automation before installing the gates that earn its trust does not produce CD. It produces fast manual deployment with worse visibility — every deploy still needs a human watching it, looking for the failure that should have been caught upstream. The automation is real; the trust the automation depends on is not.
I have no near-term plan to deploy WxServices to a cloud environment. The system runs on a single Windows machine in my home office. But the path I am walking is one where, in some future state, the headless services live in containers, the containers live in a registry, and the registry pushes to a runtime somewhere that I do not log into. That posture only works if the container that lands in the runtime is one I can trust, and the trust starts with knowing it was built from a commit that passed every gate, on a hermetic runner, with no human able to override. Each commit that lands today, under that gate structure, is practice for the day that the deploy is automatic. The gates are how the trust CD requires gets built: by accumulation, one commit at a time.
The Receipt
I said in the prior article that I would build a CI pipeline I could write on a blank editor next time. The way I know I can do that is that I had to: the design of the workflow involved decisions, not transcription. Should the workflow build the entire solution, including the WPF GUIs, on Linux? No — the WPF projects are Windows-only by construction, and forcing them to compile on Linux through targeting-pack hacks would produce artifacts no one consumes. I excluded them, via a solution-filter file that lists only the cross-platform projects. Should the dotnet SDK be version 8 or 9? The projects target .NET 8; the .NET 9 SDK can compile against .NET 8, but the test runner needs the .NET 8 runtime to actually execute the tests, and installing both adds setup time for no benefit the workflow needs. I picked 8 only. Should the JavaScript actions run on Node 20 (today’s default) or Node 24 (the future default)? GitHub will flip the default in early June, with Node 20 entirely removed by mid-September; opting into Node 24 now, six weeks ahead, lets me find any incompatibility while there is still time to address it. I opted in.
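Of those decisions, the solution filter is the most concrete artifact. A solution filter is a small JSON file (.slnf) that names a subset of the solution’s projects; dotnet restore, build, and test all accept it in place of the .sln. A sketch follows; the solution name and the test-project path are illustrative, and only MetarParser’s name comes from the real repo.

```json
{
  "solution": {
    "path": "WxServices.sln",
    "projects": [
      "src/MetarParser/MetarParser.csproj",
      "src/MetarParser.Tests/MetarParser.Tests.csproj"
    ]
  }
}
```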
None of those decisions look like much in retrospect. None of them is the kind of decision a tutorial would walk you through. They are the small judgments a CI workflow embeds, and they are the ones that distinguish a workflow that ages well from one that breaks the moment something upstream nudges it. I would not have been able to read the workflow off a tutorial; I had to know the project well enough to make those judgments deliberately. That is what I committed to a week ago when I said I would read the yaml carefully enough to write it on a blank editor. The yaml itself is short; the understanding behind it is the actual deliverable.
One moment during the rollout revealed a secondary value of the workflow. After the first run failed on the Windows-path issue, I fixed it and pushed a corrective commit; the next run was green. At the bottom of that run’s log, I noticed a small yellow warning: the JavaScript actions I was using were going to be deprecated in a few months. The warning was not a failure — the build was green, the gate would not have stopped a merge — but it told me, free and unsolicited, that something I depended on was going to change. I had months of slack. I addressed it that afternoon with a Node 24 opt-in: about three minutes of calm work, versus many times that on the day Node 20 finally went away. CI is not only a pre-merge gate; it is also a pre-emergency early-warning system, and that secondary value is one I had not adequately appreciated.
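The opt-in itself is tiny. At the time of writing, GitHub’s changelog describes it as a workflow-level environment variable; the exact variable name below is my recollection of that changelog, so verify it against the current docs before copying.

```yaml
# Opt this workflow's JavaScript actions into Node 24 ahead of the default
# flip. Variable name per GitHub's changelog as I recall it; confirm
# against current documentation before relying on it.
env:
  FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: true
```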
What Mature Engineers Build Now
I close where I began. I am a mature engineer, building a personal project in 2026, with AI tools that have changed what one engineer can do in a week. The version of CI I knew from past employment was one in which the build server informed a discipline that lived in the people. The version I built this week is one in which the build server enforces a discipline that lives in the structure. The shift between those two postures is the same shift the industry has been making for fifteen years, and it is one I have until now only watched from the outside. Building it from scratch, on my own project, has taught me something the watching never did.
The take-home: CI that depends on discipline can break down at awkward times and be expensive to fix, even when the engineers are all highly trained and well-disciplined. CI that depends on structure — and where the structure embodies the right discipline — shifts the engineer’s job from remembering to do the right thing every time to occasionally auditing the structure to make sure it is still the right structure. The first job is exhausting and error-prone. The second is interesting. I prefer the second, and built it this week.
The next item on the list is containers. After that, deployment automation. After that, the door I locked myself out of opens onto a system I can update without ever logging into it. That is the journey continuous integration is staging. The article that describes the next leg will be the one I write after I have done the work — and after I have read every line of the next yaml file carefully enough to write it on a blank editor.
The Repo
The source code is available at https://github.com/PaulHarder2/HarderWare.

