---
title: "Iterative Pentesting: Why We Run Three Engagements, Not One"
date: 2026-04-19
category: vitalchain
source: VitalChain Health Research
draft: true
---

# Iterative Pentesting: Why We Run Three Engagements, Not One

Most health-IT vendors treat penetration testing as a box-check. A firm gets hired, a report gets produced, findings get triaged, and the artifact goes into a sales folder. Twelve months later, the cycle repeats.

We don't think that model actually tells a buyer what they need to know. A single point-in-time audit captures what an outside tester could see in a narrow window, against a snapshot of the system, under one specific scoping model. It produces a document. It doesn't produce confidence.

Between late March and mid-April 2026, VitalChain ran three sequential penetration test engagements against the pilot environment. They were deliberately scoped differently. Each one surfaced real findings. Each one closed with remediation work, and the next one tested whether the remediations actually held. The final engagement closed with zero outstanding issues.

This post is about why we chose that cadence, what it cost us in engineering time, and what we think health-IT buyers should look for when they evaluate a vendor's security posture.

## One Test, Three Blind Spots

A single pentest has three structural blind spots, and no amount of tester skill can fully close them.

The first is the scoping problem. Every engagement runs under a defined model: black-box (the tester knows only what an outside attacker would know), gray-box (the tester gets credentials and some architectural context), or white-box (the tester sees the source). Each model surfaces a different class of issue. A black-box tester pressure-tests your external attack surface and your authentication boundaries. A gray-box tester pressure-tests what an authenticated user, a compromised credential, or a malicious insider could actually do. Those are different questions. If you only ever run one, you only ever answer one.

The second is the remediation problem. A report tells you what was broken on the day of the test. It doesn't tell you whether the fix you shipped afterward actually closed the issue. Remediation is where real security work happens, and remediation without a retest is just hope.

The third is the drift problem. Software changes. Between an engagement and the next scheduled one, dozens or hundreds of merges land. Each one is an opportunity to introduce regression. A twelve-month gap between tests is a twelve-month window in which the answer to "is this system secure" quietly changes.

## What Three Engagements Actually Looked Like

Our first engagement was black-box. The tester was given the production domain and nothing else. The goal was to see what a motivated outside party could find with public reconnaissance and standard tooling. The engagement surfaced several real issues, primarily in infrastructure hardening and service exposure. We fixed them over the following weeks.

The second engagement, roughly three weeks later, was gray-box. This time the tester received authenticated credentials at multiple role levels and some architectural context. The shift in scoping was deliberate: we wanted to know what an attacker with a foothold could do, not just what someone on the outside could see. This engagement found a different class of issue, primarily in the authentication and authorization layer. Again, we remediated.

The third engagement was a targeted retest. It specifically re-ran the attack paths that produced findings in engagement two, and it probed related code paths to see if the remediations had introduced new issues or left adjacent gaps. It surfaced one additional concern in a related subsystem, which we fixed during the engagement itself. At close, no outstanding issues remained.

At the level a buyer can evaluate, the categories of work were authentication hardening, transport and token handling, and input validation. We are not going to describe the specific findings in this post, and we don't think any vendor should. Publishing the mechanics of past vulnerabilities, even fixed ones, gives an attacker a starting point against the next system they look at. The value of a pentest history to a buyer is the pattern of find-fix-verify, not the inventory of what broke.

## Fix Cadence Matters More Than Finding Count

One of the most common metrics vendors publish is "number of findings resolved." It's a poor metric. A vendor who runs one pentest and resolves ten findings looks the same as a vendor who runs three pentests and resolves fifteen, and those are very different operational profiles.

We pay more attention to time-to-remediation. In all three engagements, the gap between a finding being reported and a fix landing on production was measured in days, not months. For the retest specifically, one issue was surfaced and remediated inside the same engagement window, which is only possible when the engineering team and the testing work are coupled tightly enough to move at the same speed.
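
To make that metric concrete, here is a rough sketch of how time-to-remediation can be computed from a findings log. The IDs and dates below are invented for illustration; they are not our findings data.

```python
from datetime import date
from statistics import median

# Hypothetical findings log: when each finding was reported and when the
# fix landed on production. IDs and dates are invented for this example.
findings = [
    {"id": "F-01", "reported": date(2026, 3, 24), "fixed": date(2026, 3, 27)},
    {"id": "F-02", "reported": date(2026, 3, 25), "fixed": date(2026, 3, 30)},
    {"id": "F-03", "reported": date(2026, 4, 10), "fixed": date(2026, 4, 12)},
]

# Days from report to fix, then the median across the engagement.
days_to_fix = [(f["fixed"] - f["reported"]).days for f in findings]
print(f"median time-to-remediation: {median(days_to_fix)} days")
```

The point of tracking the median rather than the total count is that it measures the loop, not the pile: a vendor can accumulate a long list of resolved findings while still leaving each one open for months.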

Fast remediation is not about heroics. It's about having the infrastructure to do it safely: a CI pipeline that catches regressions, a deployment process that doesn't require a weekend, a staging environment that matches production closely enough that fixes can be validated before they ship, and an audit trail that proves what changed and when. We put that plumbing in place before we ran the first test, specifically because we knew a test without a fast-fix loop produces a report, not a safer system.
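
To make the audit-trail piece of that plumbing concrete, here is a minimal sketch of the kind of append-only deploy record we mean. It is an illustration, not our actual tooling; the file name, field names, and the git invocation are assumptions for the example.

```python
import json
import subprocess
from datetime import datetime, timezone

def record_deploy(environment: str, log_path: str = "deploy-audit.jsonl") -> None:
    """Append one immutable record per deploy: what shipped, where, and when.

    Illustrative sketch only. Assumes it runs inside a git checkout so the
    deployed commit can be resolved with `git rev-parse HEAD`.
    """
    commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    entry = {
        "deployed_at": datetime.now(timezone.utc).isoformat(),
        "environment": environment,
        "commit": commit,
    }
    # Append-only: never rewrite earlier entries, so the history stays provable.
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

Whatever form it takes, the property that matters is that the record is written automatically on every deploy and cannot be edited after the fact, so "what changed and when" has a single, checkable answer during a retest.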

## What a Buyer Should Ask

If you're evaluating a health-IT vendor's security posture, the pentest question most procurement packets ask is "When was your last penetration test?" That's not the right question. Better questions:

- How many engagements have you run in the last twelve months, and how were they scoped differently?
- Between engagements, what was the median time from finding to remediation?
- Did you retest remediated findings with a separate engagement, or did you self-certify the fixes?
- What was the outcome of your most recent engagement, and what changed between that one and the one before it?

A vendor who can answer those questions directly is doing the work. A vendor who deflects to "we ran a pentest last year and it was clean" is telling you they did the box-check, not the practice.

## Where We Go From Here

The three-engagement pattern is not a one-time exercise for us. The next cycle is already scoped. We run more frequent, smaller engagements because that's the cadence that matches the rate at which the codebase actually changes. A pentest that happens annually is testing a system that no longer exists.

Security work is not a milestone. It's a posture. We think the way you demonstrate that posture is not by publishing clean reports, but by showing the pattern of iteration: scope, test, find, fix, retest, expand scope, repeat.

If you're evaluating VitalChain for a pilot or production deployment and you want to talk through our security program in more detail, including what our retest cadence looks like going forward, we're happy to have that conversation under NDA.
