
Waldo Jaquith

Open source, procurement, and gov tech.

rss en-US
15 posts · 1 narrative
Feed metadata
Generator https://wordpress.org/?v=6.9.4
Status active
Last polled Apr 29, 2026 01:35 UTC
Next poll Apr 30, 2026 01:35 UTC
Poll interval 86400s
ETag "951fbdaa5a4ce1ca07ae0fa898066f37-gzip"
Last-Modified Fri, 17 Apr 2026 22:05:11 GMT

Posts

Independent government cost estimates are essential.
GovernmentTech · procurement
Agencies preparing IGCEs for custom software projects nearly all make the same mistake: basing the cost of software projects on past, similar software projects.

In the federal government, when buying anything above the simplified acquisition threshold, contracting officers are obliged to ensure that they receive an independent government cost estimate (IGCE) for the thing being procured. The idea is for agencies to be able to budget for the project, and also to have some kind of a baseline to compare proposals to. Changing how this is done for government software projects is an essential component of making that software less bad.

If you’ve ever had a major repair expense for a car or your home, then you already understand the value of an IGCE. For example, my home has hard water, and it’s become a problem for my plumbing—I need a water softener. But how much will that cost? Is it $500? Or more like $5,000? With a bit of research, I learned that I’d want a salt-based softener, and for my home’s water needs, that’ll run about $500. I’ll need some additional plumbing parts to hook it up—let’s call that $100. And then there’s labor, maybe three hours. Based on past experience, I think that’ll run about $150/hour, or $450 in total. So my estimate comes to $1,050. Now I understand that I should not bother to pursue this until I’ve got about a grand set aside. And I also understand that a quote of $5,000 should either tell me that I’m missing some fundamental knowledge about this work, or, more likely, that I’m about to get ripped off. In short, I’m a more confident, more informed buyer, who can interrogate quotes to make a sensible choice between vendors.

Federal agencies prepare IGCEs as a matter of course, for software and otherwise, because the FAR requires it. Many states do the same, because the Model Procurement Code requires cost and price analysis in several contexts. But the agencies preparing those IGCEs nearly all make the same mistake: basing the cost of software projects on past, similar software projects. (Legislatures do this as well, which is an even more serious problem, given how far upstream it occurs.) From one perspective, it’s sensible and logical to base the cost of a project on past projects—it’s a reasonable assumption that this new project will not fare any better than those past projects. But from my perspective, this approach is simply a way to persist funhouse mirror project pricing, where prices have no connection to reality, but are instead a continuation of the decades-old price spiral that has resulted from pricing being untethered from any interrogatable logic.

I regret that there’s no better way for an agency to prepare an IGCE than employing people who actually know how software is made. I say “I regret” because creating new positions is actually quite difficult to do. When an agency comes to me for help procuring a custom software system, telling them “simply hire people who know how software is made” is basically telling them to go to hell. With that important caveat, there are a few ways to get at preparing an IGCE. One is to decompose the overall system need into its constituent software services, to figure out which are commercial and which are custom. They’re almost all commercial, which is great, because the prices of such software are readily available, so all the commercial parts can be priced out with certainty. But that doesn’t help with the custom parts, whose cost we can estimate a second way: basically running a really high-level story-pointing exercise, to figure out how many months of software development each major component will require, for an estimate in scrum-team years. And then there’s the third way: deciding that you’re simply going to scope down the work to the results that are possible over X years with Y scrum teams. An outsourced scrum team is going to run you about $2 million annually, so if you wanted a two-year contract for two scrum teams, that’s going to run you $8 million. Boom, you’ve got your IGCE.
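
The decomposition above lends itself to a back-of-the-envelope calculation. Here’s a minimal sketch combining the approaches: the $2 million/year outsourced-scrum-team figure comes from the text, but every component name and commercial price below is a hypothetical placeholder, not a real quote.

```python
# Rough IGCE sketch. The $2M/year scrum-team rate is from the text above;
# all component names and commercial prices are hypothetical placeholders.

SCRUM_TEAM_ANNUAL_COST = 2_000_000  # outsourced scrum team, per year

# First approach: the commercial services, priced from published rates.
commercial_services = {
    "identity management (SaaS)": 120_000,  # annual license, hypothetical
    "document storage (cloud)": 40_000,     # annual, hypothetical
}

# Second/third approach: express the custom work in scrum-team years
# (e.g., two teams working for two years).
custom_scrum_team_years = 4

commercial_total = sum(commercial_services.values())
custom_total = custom_scrum_team_years * SCRUM_TEAM_ANNUAL_COST
igce = commercial_total + custom_total

print(f"Commercial services: ${commercial_total:,}")
print(f"Custom development:  ${custom_total:,}")
print(f"IGCE total:          ${igce:,}")
```

The point isn’t precision; it’s that every line item can be interrogated and compared against a vendor’s proposal.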

Your IGCE doesn’t need to be right. It just needs to be coherent, interrogatable, based on facts about the project and the world, and comparable to vendors’ proposals. If my water softener runs me $850 or $1,200, that’s alright—I know what the price is based on, and I can make an informed decision that I’ve budgeted for responsibly. Without an IGCE, you’re potentially allowing yourself to get played by a market that has spent decades quoting $5,000 for water softeners (I’m looking at you, Culligan) in hopes of convincing people that’s a reasonable price tag. It’s unwise and even dangerous to issue a solicitation for a major software project without an IGCE.

https://waldo.jaquith.org/?p=10291
A standing disclaimer about my Agile procurement writing.
ShortLinks

For the past few years, the supermajority of my writing here has been about Agile procurement. It would be tedious to provide the same disclaimer every time, so I want to offer a blanket disclaimer here: this guidance is not for everybody. In fact, it’s for an extremely small audience!

My guidance is for employees of governments, in the United States, who are new at an Agile procurement model, and intend to outsource a substantial portion of a major software project.

Sometimes people respond to something I’ve written, telling me “this wouldn’t work well at my company,” or “this doesn’t really apply here in Spain,” or “our agency has been using an Agile procurement model successfully for a decade, and we’ve moved beyond the approach you describe,” or “this doesn’t work well for staff augmentation.” And I imagine they’re right! But none of these people are my audience! I don’t know anything about how corporations procure. I don’t know anything about procurement outside of the United States. Mature Agile procurement models are very interesting to me, but I have no generic advice to offer about them. And I’ve got very little to say about procurement approaches other than this one.

If you read what I write about here and think “this isn’t right for me,” that’s OK! It’s simply not for you!

https://waldo.jaquith.org/?p=10280
Delivery deadlines are a mistake.
GovernmentTech · procurement
Government must never outsource control, and that's exactly what you're doing by holding an Agile vendor to a deadline.

I wrote last week about how it’s a mistake to include detailed deliverables in an Agile software development contract, but let’s throw another mistake on the pile: including a delivery deadline in an Agile contract.

Yes, I understand it sounds absurd to say that a service contract shouldn’t have a delivery deadline. But hear me out.

A delivery deadline says “you must deliver X by date Y.” That is good and sensible for all manner of contracts. But not for Agile software development contracts.

Imagine that a contract says “you will have a functioning case management system up and running in 12 months,” tying payment to that deadline. And imagine that the product owner is creating user stories that cover much of the case management system, but creating absolutely none that have any bearing on the backend, putting off difficult decisions about the hosting environment. The system is using a stubbed backend that works for local development, but that couldn’t possibly be used in production. Nine months have gone by. The system needs to be in production in three months. The vendor has a choice: they can respect the sanctity of Scrum, or they can get paid. This is not a hard call.

Under an Agile model, the product owner creates and prioritizes the backlog of user stories. (Scrum tells us that the product owner can delegate the power of user story creation to the rest of the scrum team, but they’re not obliged to do that.) When outsourcing software development, the product owner must be a government employee. If a contract includes a delivery deadline, that means that the vendor may be obliged to ignore the backlog, and substitute their own judgment for what work should be done when, in order to meet that deadline. This disempowers the product owner, and eliminates many of the benefits of an Agile model.

Government must never outsource control, and that’s exactly what you’re doing by holding an Agile vendor to a deadline. An Agile control model makes the vendor solely responsible for exactly one thing: writing code. That team is an extension of the government product owner’s will—it is up to the product owner to create and prioritize user stories that will make it possible for the delivery of a product by a deadline, not the vendor.

https://waldo.jaquith.org/?p=10273
Detailed deliverables are a mistake.
GovernmentTech · procurement
Extensive lists of requirements provide agencies with an illusion of certainty. But they're just tying their own hands.

When procuring custom software, agencies commonly make the mistake of contracting for detailed deliverables. They think “our existing system has 16 workflows, and we want to support two new types of users, plus it has to integrate with our two SaaS vendors, so let’s put that in the contract, so that we’ll know we’ll get all that functionality.” Those requirements provide an illusion of certainty that tends to make leadership happy. But the agency is just tying their own hands.

The core proposition of an Agile contract is this: Instead of defining hundreds of requirements up front, those are instead introduced as user stories throughout the life of the project, based on constant user research, prioritized by a government product owner. Because when you define requirements up front, they’re going to be wrong. Even if you conducted lots and lots of user research, and those requirements were based on solid user needs, once you move into the development phase, the dev team and the users alike will learn new things as they go, discovering that those neatly prepared requirements don’t make sense anymore.

Have you ever read about how the cordyceps fungus infects ants’ brains? It turns them into zombies, making them leave their normal environment to climb onto the underside of a leaf, where they latch onto a vein with their mandibles and await their death. Upon dying, the fungus fruits out of the ant, growing tall to release spores that will infect more ants and start the cycle anew. This is what it’s like for a software development team to blithely execute a contract’s requirements, even though they know it’s not what the users need. Any team doing that is miserable, stripped of autonomy, reduced to doing work for the sake of doing work, their free will compromised by bad requirements.

Let’s go back to those 16 workflows. It would be a real victory if user research found that, actually, only nine workflows were needed. But only if the contract allows building just nine! If the vendor is contractually on the hook for 16, too bad, you’re getting seven extra workflows, never mind whether users want or need them. Let’s also return to the integration with the two SaaS vendors. What if user research finds that one of those vendors is providing a service that the users don’t want, or technical research finds that that service is now being provided by the second vendor? Well, too bad, you’re getting that second integration anyway.

But, wait, it gets worse! It is essential that every project be led by an empowered government product owner, who has the crucial job of preparing and prioritizing user stories for the vendor team to pull from at the beginning of each sprint. To put it simplistically, this is like preparing new requirements every two weeks. Putting a full-time, government-employed product owner at the helm of a software project restores control to government. Unless you also tell the vendor that they must complete particular objectives. Then you’ve just destroyed that model, because now the vendor has to ignore the product owner and substitute their own judgment about what work needs to be done when, and how, because they must answer to the higher power of the contract.

Again, every requirement is just tying your own hands. The statement of objectives should define the goal of the work, using a problem statement and a vision statement. There should be a few technical requirements, such as the acceptable programming languages, or the agency’s cloud vendor. And there should be a quality assurance surveillance plan to measure the quality of the vendor’s deliverables. But you should not include anything beyond that.

Requirements feel safe. Detailed deliverables are comforting. They provide a sense of assurance and certainty. I get that. But we’re 30 years into a contracting death spiral, where software projects have more and more requirements—thousands of them, in the case of major systems—and yet those projects are more likely to fail than ever. This model hasn’t worked. The solution is a radical reversal: the elimination of nearly all of those requirements. This is how we put government in charge again. This is how we succeed.

https://waldo.jaquith.org/?p=10265
Capital funding poisons software projects.
ShortLinks
Using capital funding instead of operational funding is one of the biggest causes of failures in government software projects.

There are a lot of things that I describe as “the biggest cause of failures in government software projects,” so here’s another one to throw on the pile: using capital funding instead of operational funding.

A state Medicaid agency goes to their legislature and says “we need $50 million for this big custom software project.” And the legislature hands over $50 million in capital funding. The Medicaid agency has to obligate that funding within some brief, defined period, sometimes within a single budget cycle (1–2 years). If they don’t spend the money, it’s clawed back. It’s tough to spend that much money that fast. In practice, the procurement action lead time is going to use up a big chunk of that time, so the agency will put all of that money on a single firm fixed price contract, just to get the money obligated before it evaporates. (In 2025, we saw lots of agencies have funding clawed back by DOGE, which was harder to do if that funding had already been obligated, and damned near impossible if it had already been spent.) But putting huge dollar values on a single contract is enormously risky, and leads to bad outcomes…and in this scenario, it’s the only sensible thing to do.

The problem here is treating custom software as a capital expense. Custom software should not be a capital expense! When you pay people to perform a service for you—the service of developing software, in which the vendor is paid for their time, government owning the software that is the output—that’s an operational expense.

It’s often said in the Agile space: fund teams, not projects. This is part of why that’s so.

This is a hard problem to solve! Unless you’re going to hire FTEs to develop that software—something that should be far more common than it is—how is the legislature supposed to provide operational funding? It would be foolish to expect a legislature to vote every year or two to sustain funding for an individual software project—a change in partisan control, or even just a change in a committee chair, would be enough to end that funding, leaving the project stranded. No legislature can bind any future legislature, so they generally can’t pass a bill saying “we will appropriate $10 million annually for the next five years, and that can’t be undone.” How can this problem be solved?

A common solution to this is an information technology investment fund. Instead of allocating $50 million to the Medicaid agency, the legislature allocates it to an agency that operates a revolving fund, such as a state budget agency or a state IT agency. They then dole out the money over the lifetime of the project, ideally pairing that with a level of oversight that’s tighter than what the legislature might be able to manage. Michigan’s IT Investment Fund is a good example of this.

People cleverer and more experienced than me tell me that there are other solutions here, and that things get tricky and interesting when bond revenue gets involved. I haven’t learned enough about that yet, but once I learn more, I’ll write more.

It’s worth exploring just about any mechanism available that will allow smaller, safer contracts—”OpEx, not CapEx” is the mantra here. As changes go, this one is pretty far upstream, and awfully hard to do, but if we can develop patterns and repeatable approaches here, I think it will have an enormous positive impact on the viability of major government software projects.

https://waldo.jaquith.org/?p=10245
The Intergovernmental Software Collaborative has a forever home.
GovernmentTech · agile · procurement

I’m delighted to announce that the State Software Collaborative, which Robin Carnahan and I started in 2020, is growing up and leaving its Beeck Center nest—the Council of State Governments is taking it on as a major new initiative, building and supporting shared intergovernmental software.

As I’ve spent the past decade preaching, government is utterly reliant on highly specialized software to deliver on its mission, and a lot of that highly specialized software is overpriced garbage. But there’s a strong exception: the software that agencies and departments team up to develop collaboratively. These software collaboratives are the beating heart of a lot of state agencies. State DMVs, vital records offices, departments of transportation, UI offices, libraries, etc. are all powered by software quietly built by state and local governments that teamed up because they were sick of the status quo.

The Council of State Governments is perhaps best known in my world as the state-owned NGO that puts together interstate compacts, the constitutionally defined agreements that substantially bind together the union. They’re how states agree to do stuff together when Congress can’t, won’t, or shouldn’t step up. CSG is better than any other organization in the country at getting states to agree to do a thing collectively. In a happy coincidence, that’s the single biggest obstacle for creating new interstate software collaboratives. As a result, there is no organization better suited to take on the State Software Collaborative (now the “Interstate Software Collaborative,” and the “Intergovernmental Software Collaborative” in the interim) than the Council of State Governments.

CSG isn’t just hypothetically going to build interstate software collaboratives. They’ve already done it. Over 18 months, they led a signal interstate project in establishing CompactConnect, open source software that allows licensed professionals to practice across state lines. CompactConnect was developed incrementally, based on constant user research, improvements delivered every two weeks, led by a CSG product owner, the process so transparent that video of every sprint review was posted on the website. That’s the very CSG team that will be building new interstate collaboratives.

I have to acknowledge the Beeck Center fellows who helped build up the Interstate Software Collaborative in the years after Robin and I moved on to the Biden administration: Aaron Snow, Shelby Switzer, and Dominic Campbell. We were around for the first 18 months, but those are the people who did the work for years afterward. I’m grateful to Kevin O’Neill at the Rockefeller Foundation who provided the funding that got this off the ground, and the Beeck Center (especially Sonal Shah and Cori Zarek) for turning Rockefeller’s funding into a whole lot more funding and providing our project with a home.

If you’re with a state government (or an NGO that supports states) and want to learn more about interstate software collaboratives…I used to say, “let me know,” but actually, no: let CSG know.

https://waldo.jaquith.org/?p=10255
How to foster better collaboration.
GovernmentTech · procurement
Successful government software projects have created an environment in which the team can collaborate successfully. No amount of procurement, budgeting, or oversight changes will bring that about.

I write a lot about how government should procure custom software, but all of my high-falutin’ theories are built atop some crucial prerequisites that I don’t often write about. I recently wrote about the enormous value of hiring people who understand how technology works, and in the spirit of that, I want to point you to Will Callaghan’s recent blog entry on the subject: “How to be a better collaborator.”

Though I want to paste in the whole thing, I’ll give you the seven major points he reviews:

  1. Consider other people’s needs as well as your own
  2. Share everything openly
  3. Acknowledge what you know and don’t
  4. Have a shared endeavor
  5. Find simple ways to stay aligned with others
  6. Always be iterating
  7. Work in an organization that supports all of the above

Will’s audience is individual collaborators, but I want you to think of this organizationally. Successful government software projects have created an environment in which all of these things are possible. And that’s hard! Sharing everything openly runs counter to many agencies’ instincts. Acknowledging what you don’t know can be scary, especially for anybody from a marginalized demographic. Working iteratively is a battle against the headwinds of long-term planning. In many organizations, it’s not on individuals to work in this way, but on leaders to foster an environment that provides the circumstances and psychological safety necessary to allow people to work in this way.

No amount of procurement, budgeting, or oversight changes will bring about a successful software project. At best, they can create the circumstances under which it’s possible for a project to succeed. For that to happen, the project should be run as Will describes.

https://waldo.jaquith.org/?p=10247
Why government software is so expensive.
GovernmentTech · procurement
In government, $100 million is not a shocking price tag for a large software project. The question I get all the time about this is, simply: Why?

When government buys custom software (whether built from scratch or having commercial software customized), it costs a lot of money. For both federal and state government, $100 million is not a shocking price tag for a large project. I’ve worked on software projects that top $1 billion. The question I get all the time about this is, simply: why?

There are dozens of reasons why government software is often both expensive and bad, but these are the biggest reasons:

  1. The price tag isn’t just for software—it’s everything. In fact, actual software development is a minority of the cost. Agencies toss everything up to and including the kitchen sink into these contracts: creating and staffing the program management office, running the help desk, providing tech support, hosting the software, monitoring performance and uptime, maintaining the software, outsourced oversight (IV&V), preparing innumerable reports and PowerPoint presentations, and just a smattering of actually creating software. Putting most or all of that on a single contract inevitably means that most or all of it gets outsourced, with associated markups and inefficiencies.
  2. We’re at least 30 years into a pricing death spiral. When the average price tag is 310% of the originally budgeted price, it stands to reason that you should expect the next project to cost more, and vendors adjust their bids accordingly. (This is adjacent to Hofstadter’s law: It always takes longer than you expect, even when you take into account Hofstadter’s law.) Prices are a one-way valve—they only ever go up.
  3. Price tags are a funhouse mirror. Neither program nor procurement staff know what software should cost. After a career of eye-popping price tags, staff have become inured to them. $10 million for a trivial change order is just normal. $300/hour for a junior software developer seems reasonable. $50 million as the all-in cost for a modernization of a major agency system feels like the going rate. Folks from the private sector balk at these prices, and they’re right to do so. These prices aren’t tethered to reality. But if those price tags are all you ever see, they just seem normal.
  4. Agencies’ solicitations aren’t set up for the best software developer to win—they’re set up for the best bidder to win. There are very few vendors equipped to write a 500-page proposal in response to one of these kitchen-sink solicitations—this is not a competitive market. The few vendors who can manage that are the ones who get contracts over and over again. Is their work any good? That’s beside the point. The important thing is that they can get contracts. They’re very good at that.
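
The compounding effect in point 2 is worth making concrete. Here’s a toy sketch: if each project’s budget is anchored to the previous project’s actual cost, and each project overruns at the 310% historical average cited above, prices ratchet up fast. The $10 million starting budget and three-project horizon are arbitrary illustrations.

```python
# Toy model of the pricing death spiral described in point 2: each
# project's budget is anchored to the previous project's actual cost,
# and each overruns its budget by the historical average (310%).
# The starting budget is a hypothetical illustration.

OVERRUN_FACTOR = 3.10  # average actual cost as a multiple of budget

budget = 10_000_000  # hypothetical starting budget
for generation in range(1, 4):
    actual = budget * OVERRUN_FACTOR
    print(f"Project {generation}: budgeted ${budget:,.0f}, actual ${actual:,.0f}")
    budget = actual  # the next project is budgeted off this one's actual cost
```

Three project generations in, the going rate has grown roughly thirtyfold—with no step in the chain where anyone asked what the software should actually cost.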

There are other reasons for the large price tags on government software—waterfall development, vendor lock-in, not conducting user research, diffuse responsibility, lack of in-house knowledge, and so on—but it’s my experience that these aren’t the major culprits. Want the biggest reduction in pricing? Attack those big problems.

https://waldo.jaquith.org/?p=10237
One neat trick for buying software that isn’t trash.
GovernmentTech · procurement
I have lots to say about how to ensure that major software projects go well, but it all boils down to this: hire people who understand how technology works.

My job is working with state and local agencies on the budgeting for, procurement of, and oversight of the major software projects that undergird every agency. I have lots to say about how to get that right, but it can really all be boiled down to one piece of advice: hire people who understand how technology works.

By hiring coders, designers, user researchers, product owners, etc., agencies can stop getting ripped off (and stop publishing RFPs in which they absolutely demand to be ripped off).

The mission of nearly every government agency is intermediated by software. If the software fails, the agency fails in their mission. If a state unemployment department’s UI system goes down, there’s really no sense in which they’re an unemployment department. If a state Medicaid agency’s Medicaid Management Information System is broken, there’s really no sense in which they provide Medicaid coverage.

And yet state agencies outsource this work. The thinking is that they’re not in the business of software, they’re in the business of unemployment insurance or health insurance or whatever. Anything to do with the software is seen as the domain of software vendors, up to and including the oversight of how and what those vendors are doing. But the software is the UI and the health insurance.

I’ve spent the past decade helping with this problem in a bunch of ways, but all of the solutions I propose are inferior to simply hiring software developers. These are the people who can evaluate agencies’ existing software, understand the needs of the users, perform market research, test software, help prepare solicitations, evaluate proposals, and oversee the work of vendors. Heck, with enough software developers, the agency can even write software themselves, instead of outsourcing it.

So, sure, you can get better vendors to bid, do a better job of checking vendors’ references, and do a better job preparing to outsource, but better than all of those is simply hiring people who understand how software is made.

https://waldo.jaquith.org/?p=10225
The past-performance trap.
GovernmentTech · procurement
When government agencies need to have custom software built, as a matter of course they require that the bidding vendors have built that exact thing before. This is a mistake.

When government agencies need to have custom software built, as a matter of course they require that the bidding vendors have built that exact thing before. I understand why this seems like a good idea, but it’s a mistake.

At best, state agencies are willing to say (for example), “our state’s higher education financial aid processes are unique, but we’d be willing to tolerate a vendor who has previously built financial aid systems for state higher education systems.” On its face, this seems sensible. Getting a vendor with extremely similar experience would seem to reduce risk.

Here’s the catch: there are very, very few vendors with experience building that extremely niche software. Like…three? And that’s not to say that they did it well. Were their past projects completed on time, within budget, and within spec? Probably not. Did the delivered systems address the needs of the end users well? Almost certainly not. This is not a competitive vendor pool. The odds of success are not good. Imagine finding a homebuilder by limiting your pool to contractors that had built the exact house that you want—the same layout of rooms, same siding and roofing material, the same fixtures. You could do that, but no reasonable person would.

The solution is to expand the definition of “past performance.” Has built exactly this thing before is too narrow. The way to expand the vendor pool, increase competition, and get more bids from competent vendors is to use a more common analog, something that actually exists broadly in the commercial market. To return to our example of a state higher education financial aid system, it’s better to think of that as a case management system. Students complete a form, the application is subject to a series of automated checks, some applications are queued for review by a state employee, and ultimately each application is either accepted or declined, with a dollar value attached to an acceptance. I appreciate that there’s more to financial aid systems than that, but the core of a financial aid system sounds like a straightforward case management system with a public-facing component.
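
The core workflow just described—submit a form, run automated checks, queue edge cases for human review, accept or decline with a dollar value attached—is a small state machine. Here’s a minimal sketch; the field names, eligibility threshold, and award formula are all hypothetical, standing in for whatever a real financial aid program would define.

```python
# Minimal case-management workflow sketch for the financial-aid example:
# submit -> automated checks -> (auto-decide | human review) -> accept/decline.
# Field names, the income ceiling, and the award formula are hypothetical.

def process_application(app: dict) -> dict:
    """Run an application through automated checks; return its disposition."""
    # Automated checks: completeness, then basic eligibility.
    if not app.get("form_complete"):
        return {"status": "declined", "reason": "incomplete form"}
    if app.get("income", 0) > 90_000:  # hypothetical eligibility ceiling
        return {"status": "declined", "reason": "over income limit"}
    # Edge cases are queued for a state employee to review by hand.
    if app.get("flagged_for_review"):
        return {"status": "queued_for_review"}
    # Otherwise accept, with a dollar value attached (hypothetical formula).
    award = max(0, 10_000 - app["income"] // 10)
    return {"status": "accepted", "award": award}

result = process_application({"form_complete": True, "income": 30_000})
print(result)  # an accepted application with a computed award
```

Nothing here is specific to financial aid—swap the check logic and award formula, and the same skeleton handles auto parts sourcing or travel reservations, which is exactly the point about the breadth of the vendor pool.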

Case management systems: now there’s something that lots of vendors have experience with. A huge percentage of government software systems are just case management systems in a $50 million trench coat. There are commercial case management systems that can perhaps be configured to serve the current need, there are open source case management systems that can perhaps be modified, and even writing a new case management system isn’t wildly difficult. There are hundreds of vendors with experience implementing case management systems. Some of those vendors probably have experience doing things that are functionally near-identical to financial aid systems, despite the purpose of the implementation being to source auto parts or handle customer complaints or manage travel reservations.

With few exceptions, specialized government software can be understood as a minor variant of extremely common software that’s widely used in the private sector. Agencies will be best served by identifying what common software their need is a minor variant of, and seeking vendors with experience building that. Changing the scope of “similar experience” from highly specific government experience to something broader—but equally relevant—will increase competition, lower costs, and will damned sure make success more likely.

https://waldo.jaquith.org/?p=10165
No, your agency’s developers cannot “help” your vendor’s devs.
GovernmentTech · procurement
The vendor team is a cohesive unit, working under a single organizational structure, under a single manager, under a shared HR system, using common tools, norms, and practices. Your agency's developers are part of none of that.

Agencies sometimes tell me that they employ some software developers and that they intend to assign those developers to supplement the labor of the vendor team that they have hired for an Agile software development project.

This is a terrible idea. Please don’t do this.

I sympathize with the underlying motivation! We’re contracting for developers, and we already employ developers who have a lot of existing knowledge about our systems, so surely it makes sense to have these groups work together. At that level of detail, yes, this is very sensible. But the sensibility ends there.

The vendor team is a cohesive unit, working under a single organizational structure, under a single manager, under a shared HR system, using common tools, norms, and practices. Your agency’s developers are part of none of that. Worse still, your agency’s developers are on the wrong side of the fence, the customer side of the fence, putting the vendor team in a really uncomfortable position if they disagree with a technical decision made by your devs. If the agency contracting officer believes that the vendor is not in compliance with the contract in some way, and the agency’s developers are involved in that non-compliant work, now you’ve got a mess on your hands.

So what can those agency developers do? Well, one of them can act as the technical lead for the agency, working under the government product owner to ensure that the code delivered by the vendor on a sprintly basis is in compliance with the quality standards laid out in the contract. (As a rule of thumb, this work requires ¼ of an FTE per scrum team.) And another can be that technical lead’s backup, so that the lead can take leave without creating trouble. Agency developers can also work in parallel with the vendor team, building or improving supporting infrastructure or related infrastructure, applying the knowledge learned by the vendor team as they conduct user research. Sometimes those developers can do all of the technical work within the agency’s IT boundaries, allowing the vendor to work exclusively within their environment. But whatever they do, they need to leave the vendor team to do their work in their own way under their own terms.

I love hearing that agencies employ developers. I love when agencies want those developers to support the work of vendor scrum teams. But it’s crucial to position that support correctly to avoid accidental self-sabotage.

https://waldo.jaquith.org/?p=10133
A GitHub pull request model for outsourced software projects.
GovernmentTech · procurement
This is a simple approach that protects both the agency and the vendor, allowing a proper review process, and keeping the contracting officer happy.

There’s a simple thing I preach that is worth writing down: a good model for a pull-request-based relationship between a government agency and its software development vendor. I didn’t create this approach—this was the model that we used on the projects I worked on at 18F, starting nearly a decade ago. Here’s the approach.

The agency creates a repository in GitHub.1 If this is for a completely new project, the repo may consist only of a README—that’s fine. If the repo is for something that already exists, then of course the existing code is put into the repo. The vendor forks the repo. The vendor is given no permissions for the agency’s original repo—they have no power to change anything there. The vendor does all of their work in their fork. This is safer for both the vendor and for the agency.

At the end of each sprint (or user story, or feature, or whatever milestone you want to use), the vendor files a pull request against the agency’s upstream repo. The pull request is the vehicle for the agency conducting a review of the work, to ensure that it meets the code quality requirements found in the contract, and to ensure that the completed stories meet the definition of done. Designated agency employees can comment, request changes, and do all of the normal things one does when reviewing a pull request. Once the pull request is judged to be satisfactory, the agency can accept it (I recommend having the technical lead sign off on it in the form of a comment, leaving it to the product owner to actually accept the PR). This is the point when the agency is saying “yes, this work meets our standards,” so it’s possible that the contracting officer’s representative may want to be notified or otherwise play a role here.
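The fork-and-PR flow above can be sketched with the `git` and GitHub CLI commands involved. This is a sketch under assumptions: the repo name `agency/benefits-portal`, the branch name, and the story numbers are all hypothetical, and the same flow works through the GitHub web interface instead of the `gh` CLI.

```shell
# Vendor forks the agency's upstream repo and clones the fork.
# The vendor has no write access to agency/benefits-portal itself.
gh repo fork agency/benefits-portal --clone
cd benefits-portal

# All sprint work happens on branches in the vendor's fork.
git checkout -b sprint-12
git commit -am "Story 123: applicant document upload"
git push origin sprint-12

# At the end of the sprint, open a pull request against the
# agency's upstream repo for the agency's review.
gh pr create --repo agency/benefits-portal --base main \
  --title "Sprint 12 delivery" \
  --body "Stories 118-124; ready for QASP review"
```

The merge itself then happens on the agency's side, by the product owner, after the technical lead signs off in a comment.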

(Relatedly, it’s important to use the agency’s repo for all user stories etc., not the vendor’s repo. Agencies need to be able to end a relationship with a vendor without losing their backlog.)

And that’s it, that’s the simple approach that protects both the agency and the vendor, allowing a proper review process, and keeping the contracting officer happy.

  1. I say GitHub, but I assume this works equally well in any Git-based repository, e.g. GitLab. As long as there’s a mechanism for a user to fork a repo and subsequently file a pull request to commit changes to that upstream repo, this approach should work. ↩
https://waldo.jaquith.org/?p=10104
Custom or COTS, either way it’s almost entirely open source software.
GovernmentTech · procurement
The truth is that nothing is purely custom software, and nothing is purely COTS. There is, instead, a continuum, and it’s a short one.

When embarking on a new software project in government, “custom or COTS” is presented as the initial dichotomous decision. (“COTS” meaning “commercial off-the-shelf” software.) But in terms of technology, that’s a false dichotomy, propped up by both vendors, for their benefit, and government employees, following a policy-prescribed process. The truth is that nothing is purely custom software, and nothing is purely COTS. There is, instead, a continuum, and it’s a short one.

Open Source Software is COTS

Let’s define our terms. COTS is any software that is commonly used in the private sector, can be purchased trivially, and is available for use as-is, whether licensed from a vendor or used under an open source license, whether purchased or acquired at no cost. In fact, let’s look at the Federal Acquisition Regulation’s definition of COTS, though note that it covers all manner of commercially available off-the-shelf products, and not just software:

Commercially available off-the-shelf (COTS) item

(1) Means any item of supply (including construction material) that is–

(i) A commercial product (as defined in paragraph (1) of the definition of “commercial product” in this section);

(ii) Sold in substantial quantities in the commercial marketplace; and

(iii) Offered to the Government, under a contract or subcontract at any tier, without modification, in the same form in which it is sold in the commercial marketplace; and

To go one step deeper, let’s look at the definition of “commercial product”:

(1) A product, other than real property, that is of a type customarily used by the general public or by nongovernmental entities for purposes other than governmental purposes, and–

(i) Has been sold, leased, or licensed to the general public; or

(ii) Has been offered for sale, lease, or license to the general public;

(2) A product that evolved from a product described in paragraph (1) of this definition through advances in technology or performance and that is not yet available in the commercial marketplace, but will be available in the commercial marketplace in time to satisfy the delivery requirements under a Government solicitation;

(3) A product that would satisfy a criterion expressed in paragraph (1) or (2) of this definition, except for-

(i) Modifications of a type customarily available in the commercial marketplace; or

(ii) Minor modifications of a type not customarily available in the commercial marketplace made to meet Federal Government requirements. “Minor modifications” means modifications that do not significantly alter the nongovernmental function or essential physical characteristics of an item or component, or change the purpose of a process. Factors to be considered in determining whether a modification is minor include the value and size of the modification and the comparative value and size of the final product. Dollar values and percentages may be used as guideposts, but are not conclusive evidence that a modification is minor;

(The definition goes on, but it won’t add to our understanding here.) We can see that, per the FAR, open source software is a “commercial product.” Because government had some rousing battles about this in the early ’10s, the Department of Defense put together an Open Source Software FAQ that eliminates any uncertainty that might be left by navigating a web of regulatory language: “[N]early all OSS [open source software] is ‘commercial computer software’ as defined in US law and the Defense Federal Acquisition Regulation Supplement, and if used unchanged (or with only minor changes), it is almost always COTS.”

Custom Software is Mostly COTS

The layperson may envision custom software as something completely bespoke, each underlying line of code lovingly hand-crafted by an artisan dev, using only organic, locally grown electrons. But custom software is more like building a Lego set of your own design, with each piece a separate open source software program. Some amount of the software will be written from scratch (casting Lego pieces in your own molds, to stretch the metaphor), but as measured by lines of code, even a big, complicated system will be overwhelmingly composed of open source (COTS) Lego bricks—some north of 90%.
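Back-of-the-envelope arithmetic makes the point. The line counts below are invented for illustration (real numbers vary by project), but the shape is typical: a modest first-party layer atop a far larger open source stack.

```python
# Hypothetical line-of-code counts for a "custom" web system and its
# open source dependency stack. These numbers are illustrative, not measured.
components = {
    "first-party application code": (40_000, False),   # the truly custom part
    "web framework":                (250_000, True),
    "ORM and database driver":      (80_000, True),
    "HTTP server":                  (120_000, True),
    "TLS and crypto libraries":     (500_000, True),
    "utility libraries":            (200_000, True),
}

def open_source_share(components: dict) -> float:
    """Fraction of total lines of code that come from open source components."""
    total = sum(loc for loc, _ in components.values())
    open_loc = sum(loc for loc, is_oss in components.values() if is_oss)
    return open_loc / total

print(f"{open_source_share(components):.0%} of lines are open source")
```

Even with a generous 40,000 lines of hand-written code, the open source share of this hypothetical stack lands well north of 90%.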

COTS is Made of COTS

It sounds like a truism to say that a COTS application is made up almost entirely of COTS, but what I mean is that, just like custom software, it too is composed overwhelmingly of open source Lego bricks. The major distinction between custom software and a COTS application is licensing—in the former, the custom portions are exposed to you, but in the latter, they’re custom to the vendor, not to you. Having passed through the layer of the licensing vendor, the custom bits are transformed into COTS. A 2024 report by Synopsys, based on its analysis of 950 commercial codebases, found that 96% contained open source software, and that 77% of the code within those codebases was open source. No doubt most of those commercial codebases contained a substantial amount of sub-licensed COTS components, too.

I occasionally encounter IT agencies that are stuck in the 1990s, who will tell me that “open source isn’t secure.” Not only is this profoundly wrong, but COTS is made almost entirely of open source! There is no escaping open source software. The world runs on it.

Custom or COTS is a False Technical Dichotomy

The decision of whether to buy a COTS system or build a custom system is legitimately important, but for reasons that have nothing to do with the software itself. The technical distinctions between the two are basically academic. There is no pure COTS, there is no pure custom, there is only an enormous stack of open source software with a thin layer of original code atop it.

“COTS versus custom” should be a decision based on matters of cost or support or stability or lock-in or risk. But not based on differences in the software. Because they’re pretty much the same.

https://waldo.jaquith.org/?p=10070
A model for IV&V that’s actually useful.
GovernmentTech · oversight · procurement · software · vendor
For custom software projects, IV&V is almost always useless. But there is a model where IV&V can be useful.

I recently wrote about how independent verification and validation (IV&V) is almost always useless for custom software projects, and nodded toward how it could actually be beneficial. Now I want to go into depth about what useful IV&V looks like.

When I wrote that earlier blog entry, there was a model for useful IV&V—18F. At the end of February, that federal team was shut down by a vindictive Trump administration, as a part of their project to demolish any part of government that is functioning well.

The ostensible purpose of IV&V is to ensure that a government procurement is going the way that it’s supposed to—that the vendor is adhering to the requirements of the contract and the deliverables are up to spec. IV&V comes into play when government lacks the in-house expertise to enforce the contractual requirements. Sometimes the agency decides to hire an IV&V vendor, but more often a funder imposes an IV&V requirement on the agency to guard their investment. IV&V doesn’t actually do that well for custom software projects, but it does provide an elaborate performance of doing that, which is apparently good enough.

18F never billed itself as providing IV&V, but that’s pretty much the process that we put together in my time there, under the umbrella of our acquisitions work. After helping agency partners through the procurement process, we’d help them in the post-award phase, teaching them how to work with an Agile vendor. (I think of this as the Cyrano de Bergerac problem—after whispering all of the right words in the partner’s ear throughout the procurement process, it’s important to stick around after the contract is awarded, lest the vendor find the agency’s high-falutin’ Agile talk isn’t supported by experience or ability.) We developed a standard process and set of services designed to help the agency ensure that the vendor was performing well, though also to ensure that the agency was able to keep up with a high-performing vendor. This involved somewhat more than verification and validation, but I’ll explain the whole suite of services, for the sake of completeness.

Help the Agency Keep Up with the Vendor

For a lot of agencies, contracting with a good Agile vendor is like dropping a Ferrari engine into a school bus. There’s going to be a lot of wasted potential. Very few agencies are set up to work with such a vendor at the pace that those vendors work. The vendor’s scrum team will have questions about the product, the business requirements, and the tech stack that the agency cannot respond to fast enough to avoid leaving the scrum team twiddling their thumbs.

At 18F, we’d show up before the procurement process, not just to walk them through the procurement, but also to prepare them to work at vendor speed. That often meant building a prototype with Agile methodologies, which would allow the agency’s new product owner to practice actually being a product owner and allow us to test the process for getting to production, and we could use that as an excuse to both experience and improve the process of providing vendors with access to their environment. This meant that, once a contract was awarded, vendors could get to work immediately, instead of spending months navigating onerous agency processes that are optimized for decade-long tech projects that are outsourced lock, stock, and barrel to a single enormous vendor.

Coach the Product Owner

New product owners should go through a training program (I generally recommend Scrum Alliance’s Product Owner training), but that classroom training only prepares somebody so much to actually be a product owner. At 18F, we’d provide them with ongoing coaching as they learned to do the job. That meant ensuring that they knew what it means to ruthlessly prioritize the MVP, helping them to maintain a backlog of user stories that are sized well and supporting a properly incremental development pattern, giving them the confidence to lead scrum ceremonies, and generally showing them what good looks like.

Enforce the Code Quality Requirements

The contract should (must, actually) prescribe standards of quality for the work, probably in the form of a quality assurance surveillance plan (QASP) that speaks to code quality, accessibility, documentation, security, etc. It’s rare for an agency to have anybody on staff whose job description encompasses reviewing vendors’ work for conformance with technical requirements…but it turns out that their IT shop is likely to have one or more people who have those skills, if unused from 9–5 Monday through Friday. At 18F, as with product owners, we’d coach these folks as they learned to play the role of technical lead. We’d train them on how to evaluate the sprint’s deliverables in the terms laid out in the QASP, how to perform a code review, and how to work respectfully and productively with the vendor team. It took a long time to get somebody in place to serve competently as the technical lead, so somebody from 18F would serve in this role on an interim basis, gradually replacing themselves with the agency’s technical-lead-in-training.
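A QASP review of a sprint’s deliverables amounts to running the delivery through a fixed checklist. The sketch below is hypothetical—the check names, thresholds, and metrics are invented for illustration, not drawn from any real QASP—but it shows the shape of what a technical lead is verifying each sprint.

```python
# A hypothetical QASP gate: each check is a named predicate over
# measurements taken from a sprint's delivery.
delivery = {
    "test_coverage": 0.87,          # fraction of code covered by tests
    "accessibility_violations": 0,  # e.g., from an automated WCAG scan
    "lint_errors": 3,
    "docs_updated": True,
}

QASP_CHECKS = {
    "coverage >= 80%":               lambda d: d["test_coverage"] >= 0.80,
    "zero accessibility violations": lambda d: d["accessibility_violations"] == 0,
    "zero lint errors":              lambda d: d["lint_errors"] == 0,
    "documentation updated":         lambda d: d["docs_updated"],
}

def review(delivery: dict) -> dict:
    """Evaluate every QASP check; the technical lead signs off only if all pass."""
    return {name: bool(check(delivery)) for name, check in QASP_CHECKS.items()}
```

In practice much of this can be automated in CI on the vendor’s pull request, leaving the technical lead to focus on the judgment calls—code readability, architecture, whether the stories meet the definition of done.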

Start Strong, Fade Out

18F’s work on these projects always started off strong, with a team of 2–4 people joining every scrum ceremony, holding multiple standing weekly meetings with the partner, and generally filling roles that would be better filled by the agency’s employees. But the goal was always to transition all of that work to the partner agency. At one point we formalized this, setting up a big spreadsheet with a row for every task that 18F was performing, and two column headers: one labelled “18F,” one labelled “Agency Partner.” Our job was to transition every one of those tasks from the 18F column to the agency partner column. That would take months, even years, but it meant that, one day, the partner could ask “so…what is it you do for us again?” And that was when 18F’s IV&V role was no longer needed.


I’m not sure that it makes sense for a commercial IV&V vendor to strive for uselessness. It’s financially irrational to work yourself out of a job. But IV&V contracts are often imposed by external forces (e.g., federal funders), who could hypothetically require that IV&V not merely provide (often useless) oversight, but also train grantee agencies in how to oversee vendors’ work themselves.

Of course, 18F was killed by the Trump administration, so that non-conflicted partner is no longer an option for agencies. And federal grants have also become somewhere between unreliable and mirages, so that force for IV&V is much reduced. But state and local agencies’ need for functioning software and project oversight is unchanged, and may actually become more urgent as they need to step up where the federal government is crumbling while lacking federal funding to do so. IV&V can be an important part of ensuring that systems are delivered on-time, on-budget, and within spec, but only if it’s IV&V that’s actually useful.

https://waldo.jaquith.org/?p=10054
Why I work in the open.
General · open
I've long worked in the open, overwhelmingly for one reason: it increases enormously the surface area for success.

I’ve long worked in the open, overwhelmingly for one reason: it increases enormously the surface area for success. That might mean thinking out loud on social media, documenting ideas as blog entries, or publishing software as open source. Here are some of the benefits.

My ideas improve by being shared freely. If I’ve hit on something good, more people will know about it if I share it openly. Or if I’ve come up with something that’s actually bad, then people will have the chance to tell me that, so I can fix it or reverse myself. I’m not aware of ever having had a good idea on my own—it’s only in conversation (literal or figurative) that my ideas mature.

I am often wrong about where my work applies. Something that I produce openly can spread beyond what I think its audience is. Often I’m wrong about the highest and best use of a concept, a program, an illustration, whatever. Perhaps I think I’m making a compelling case for agencies publishing software as open source, but in fact I’ve made a really great case for why the EU should support open science hardware standards (which I know nothing about).

I am often wrong about when my work applies. Sometimes, a thing I’ve produced isn’t wrong, I’ve just created it at the wrong time. It’s out of step with the zeitgeist, or maybe it solves a problem that people don’t know they have. But if I’m patient, circumstances will change and something I did years ago will become timely. Its ongoing availability means the work is always waiting to be useful.

My work can be improved upon. Sometimes I’ve created a platform that others can use to create something better still (if inadvertently), whether incorporating my work or replicating it with improvements. Sometimes that benefits me (by improving the thing I created), sometimes it doesn’t, but it’s yet to harm me.

An odd effect of working in this way is that I am often credited for doing things that I don’t feel that I deserve any credit for. I do 10% of the work, others do 90% of the work, and I get credit for the result. But those others will tell me that they wouldn’t have done the last 90% if I hadn’t done the first 10%, perhaps because they wouldn’t have thought of it, or perhaps because they couldn’t stand seeing something so obviously incomplete.

Working in the open means that whatever I’m trying to do is far more likely to happen, or that my inchoate thoughts are more likely to be turned into action by somebody else. I’ve been at it for my entire adult life, and don’t intend to stop. Recommended.

https://waldo.jaquith.org/?p=10041
