CASE STUDY 01

Winning: FBI Automated Fingerprint Identification System (AFIS) Contract

Before automation, fingerprint identification was slow.

A physical card would be collected, processed, and eventually compared against a database. The response might take weeks. In some cases, that meant a person could be released before identification was complete.

The goal was to change that — to move to a system that could respond in near real time.

The scale was significant. Tens of thousands of transactions per day. Dozens of different transaction types, each with its own urgency and processing requirements. A database of tens of millions of records. And strict constraints on hardware, cost, and performance.

At first glance, it looked like a computing problem. Faster machines, better algorithms, more capacity.

It wasn't.

The real problem was how the work moved through the system.

Each transaction was treated as a single unit. A request would come in, and the system would process it end-to-end. That works when the workload is uniform. It breaks when you have a mix of urgent and non-urgent requests, simple and complex operations, all competing for the same resources.
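
To make that failure mode concrete, here is a minimal sketch in Python. The job names, service times, and deadlines are illustrative assumptions, not the actual workload; the point is only what happens when an urgent request sits behind one large monolithic job in a strict first-in, first-out queue.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    name: str
    service_s: float   # end-to-end processing time for the whole transaction
    deadline_s: float  # response-time target

# One large batch job arrives just ahead of an urgent request.
queue = [
    Transaction("large batch match", service_s=3600.0, deadline_s=86400.0),
    Transaction("urgent identification", service_s=60.0, deadline_s=600.0),
]

clock = 0.0
for txn in queue:              # strict first-in, first-out, one transaction at a time
    clock += txn.service_s     # the urgent request cannot start until the batch finishes
    status = "met" if clock <= txn.deadline_s else "MISSED"
    print(f"{txn.name}: done at {clock:.0f}s, target {txn.deadline_s:.0f}s -> {status}")
```

The urgent request misses its target not because the machine is too slow, but because it cannot start until the job ahead of it finishes.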

The system couldn't meet its service-level requirements that way.

My role was to build a simulation of the system — something that could model how it would behave under real workloads before it was fully built. That meant modeling disk activity, compute time, algorithm performance, and the interactions between them.
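
The actual simulation was far more detailed, but the core modeling idea can be sketched in a few lines. Everything below is a simplified illustration: the resource names, step sequences, and timings are assumptions, not the real parameters.

```python
from collections import defaultdict

def simulate(transactions):
    """transactions: list of (name, arrival_s, [(resource, service_s), ...])."""
    free_at = defaultdict(float)   # when each shared resource next becomes idle
    response = {}
    for name, arrival, steps in sorted(transactions, key=lambda t: t[1]):
        clock = arrival
        for resource, service_s in steps:
            start = max(clock, free_at[resource])  # wait if the resource is busy
            clock = start + service_s
            free_at[resource] = clock
        response[name] = clock - arrival           # end-to-end response time
    return response

# Illustrative workload: each transaction is a sequence of steps on shared
# resources (disk reads, feature extraction on CPU, matcher comparisons).
workload = [
    ("criminal ten-print", 0.0, [("disk", 2.0), ("cpu", 1.0), ("matcher", 30.0)]),
    ("civil background",   1.0, [("disk", 2.0), ("cpu", 1.0), ("matcher", 300.0)]),
    ("latent search",      2.0, [("disk", 5.0), ("cpu", 4.0), ("matcher", 900.0)]),
]
print(simulate(workload))
```

Even a toy model like this exposes the interactions that matter: a slow step on one shared resource delays every transaction queued behind it.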

As I worked through it, it became clear that the unit of work itself was the problem.

Instead of treating a transaction as a single block, I broke it into smaller pieces — atomic work units. A large matching operation, for example, could be divided into hundreds of smaller comparisons. Those could be scheduled independently, interleaved with other work, and prioritized based on urgency.
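
A minimal sketch of that decomposition, assuming a simple priority queue; the transaction names, priorities, and unit counts are illustrative, and the production scheduler was more involved than this.

```python
import heapq
import itertools

_tie = itertools.count()  # unique tie-breaker so the heap never compares work units

def enqueue_transaction(queue, txn_id, priority, num_units):
    """Split a transaction into num_units atomic comparisons and enqueue them all."""
    for i in range(num_units):
        heapq.heappush(queue, (priority, next(_tie), {"txn": txn_id, "unit": i}))

queue = []
enqueue_transaction(queue, "civil-batch-42", priority=5, num_units=400)    # low urgency
enqueue_transaction(queue, "urgent-criminal-7", priority=1, num_units=50)  # high urgency

# A worker always pulls the most urgent remaining unit, so the urgent
# transaction's comparisons run ahead of the batch job's even though
# the batch job arrived first.
order = [heapq.heappop(queue)[2]["txn"] for _ in range(len(queue))]
print(order[:3], "...", order[-3:])
```

Because urgent work can cut in at unit boundaries rather than waiting for a whole transaction to finish, no single large job can monopolize the system.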

That changed everything.

Now the system could:

- Prioritize urgent requests ahead of lower-priority work
- Interleave the pieces of large matching operations with other transactions instead of blocking behind them
- Apply different response-time targets to different transaction types

The simulation showed that this approach would work. More importantly, it showed that it would work within the hardware footprint we were proposing. That mattered, because hardware cost was directly tied to whether we could win the contract.

Midway through, I realized the simulation itself needed to change. The initial version wasn't flexible enough to model what we were discovering. So we refactored it — rebuilt it to be more modular and extensible, even though that meant redoing work under time pressure.
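
The refactor was, in essence, a move toward pluggable component models. A rough sketch of that shape follows; the interfaces, names, and parameters are mine for illustration rather than the original code's.

```python
from abc import ABC, abstractmethod

class ResourceModel(ABC):
    @abstractmethod
    def service_time(self, work_unit: dict) -> float:
        """Seconds this resource needs to process one atomic work unit."""

class DiskModel(ResourceModel):
    def __init__(self, seek_s: float, transfer_mb_per_s: float):
        self.seek_s = seek_s
        self.transfer_mb_per_s = transfer_mb_per_s

    def service_time(self, work_unit: dict) -> float:
        return self.seek_s + work_unit.get("read_mb", 0.0) / self.transfer_mb_per_s

class MatcherModel(ResourceModel):
    def __init__(self, comparisons_per_s: float):
        self.comparisons_per_s = comparisons_per_s

    def service_time(self, work_unit: dict) -> float:
        return work_unit.get("comparisons", 0) / self.comparisons_per_s

# Each modeled component sits behind a small interface, so swapping in a
# different disk or matcher model is a one-line change and does not touch
# the scheduler or the rest of the simulation.
resources = {"disk": DiskModel(0.01, 50.0), "matcher": MatcherModel(2000.0)}
unit = {"read_mb": 4.0, "comparisons": 500}
print({name: r.service_time(unit) for name, r in resources.items()})
```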

The revised model matched reality much more closely.

As the engineering team built the actual system, the architecture followed the model. When the system was tested, its performance aligned with what we had predicted. That's the point of a simulation — but it only works if the model reflects the real problem.

The result was a system capable of processing tens of thousands of transactions per day, with response times ranging from near real time for urgent cases to longer windows for lower-priority work.

It also meant we didn't have to overbuild the hardware. The system met its requirements within the planned footprint, which kept the bid competitive and contributed directly to winning the program.

That work was recognized internally with a corporate award, but the more important outcome was that the system functioned as intended at scale.

The shift wasn't in the hardware or the algorithms. It was in how the work was defined and scheduled.