Here is a number that should bother you. Across the industry, estimates consistently show that between 60 and 70 percent of penetration test findings are never remediated. Not slowly remediated. Never. The report gets filed, the compliance checkbox gets marked, and the vulnerabilities remain.

We see this in our own work. When we perform a retest for a client six months after their initial pentest, we routinely find that critical findings are still present. Often the same findings. Sometimes additional findings that emerged because the original issues were left unaddressed and the attack surface expanded.

The pentesting industry is very good at finding problems. It is structurally bad at ensuring they get fixed. And until we solve the follow-through problem, the entire exercise is security theater with a price tag.

Why Findings Do Not Get Fixed

The reasons vary by organization, but several patterns show up repeatedly in our client base of startups and growth-stage companies.

Priority displacement. Security findings compete with feature work for engineering time, and feature work almost always wins. This is not because engineering leaders do not care about security. It is because the consequences of shipping a feature late are immediate and visible, while the consequences of not fixing a vulnerability are abstract and deferred. The incentive structure favors the urgent over the important.

Report format failures. Many pentest reports are written for auditors, not for engineers. They describe vulnerabilities in the language of CVE catalogs and risk matrices. They do not provide enough context for an engineer who has never touched the vulnerable component to understand the issue, reproduce it, and fix it. The result is a finding that gets triaged, assigned, and then stalled because the assigned engineer cannot figure out what to actually do.

Ownership ambiguity. In fast-growing startups, code ownership is fluid. A vulnerability in a service that was written by an engineer who left three months ago often has no clear owner. Without someone accountable, the finding drifts through sprint backlogs indefinitely.

Severity inflation and fatigue. Some audit firms mark every finding as High or Critical, whether or not it warrants that rating. After seeing their fifth "critical" information disclosure in a row, teams start treating all findings as noise. When everything is critical, nothing is.

A pentest that identifies 47 findings and results in 12 fixes is not a success. It is a detailed map of your attack surface that you chose not to act on.

The Cost of Inaction

The obvious cost is that vulnerabilities remain exploitable. But the hidden costs are larger and compound over time.

Unfixed findings create a false sense of security. Leadership believes the organization has been tested and is therefore secure. The pentest becomes a proxy for security posture rather than a diagnostic tool. This is especially dangerous at startups, where the executive team may not have a security background and relies on the fact that a pentest was performed as evidence that security is under control.

Technical debt accumulates. Each unfixed vulnerability is a constraint on future development. The longer it persists, the more the codebase evolves around it, and the harder and more expensive it becomes to fix. What might have been a two-hour patch in January becomes a two-week refactor in October because the vulnerable pattern was copied and extended by other engineers who assumed it was acceptable.

Retest costs compound. If you pentest annually and never remediate, each new test finds the old vulnerabilities plus new ones. The report grows, the team's willingness to engage with it shrinks, and the cycle deepens.

Rethinking the Pentest Engagement Model

The core problem is that the pentest engagement model treats testing and remediation as separate activities performed by separate parties. The tester finds. The client fixes. There is no structural mechanism to ensure the second step happens.

This model dates back to an era when pentesting was primarily a compliance exercise. You hired a firm, they produced a report, you showed the report to your auditor, and everyone moved on. The assumption was that the client's internal security team would handle remediation. That assumption breaks down at startups, where there often is no internal security team, or where the security function is a single person wearing too many hats.

At ReguSec, we have moved toward a model that wraps remediation support into the engagement itself. This does not mean we write your code for you. It means we stay engaged during the fix phase in three specific ways: we provide remediation guidance written for the engineer who will do the work, not for the security professional who commissioned the test. We offer scheduled retesting windows within the engagement period so that fixes can be verified without a separate contract. And we participate in prioritization sessions with the engineering team, helping them understand which findings actually matter and which can be deferred safely.

This is not charity. It is self-interest. A pentest that leads to actual fixes is a pentest that demonstrates value. A pentest that leads to a PDF in a drawer is a pentest that does not get renewed.

The measure of a good pentest is not the number of findings. It is the percentage of findings that get fixed.

The Startup-Specific Problem

Startups face a version of this problem that is qualitatively different from what large enterprises experience. Large enterprises have dedicated security teams, change management processes, and compliance functions that enforce remediation timelines. Startups have none of these, and they also have less engineering capacity to allocate to fixes.

For the fintech and SaaS companies we work with, there is an additional pressure: regulatory expectations. SOC 2, PCI DSS, and similar frameworks require vulnerability remediation within defined timelines. A pentest report full of unfixed findings is not just a security risk. It is a compliance liability. We have seen startups lose deals because their auditor flagged a pattern of unremediated findings from previous pentests.

The solution is not more testing. It is better operational integration between testing and the development workflow that follows it.

What We Recommend

1. Treat remediation as part of the pentest scope, not an afterthought. When you procure a pentest, negotiate inclusion of remediation guidance, a retest window, and a prioritization session. If your vendor cannot provide these, ask why. The engagement should not end when the report is delivered.

2. Assign ownership before the report arrives. Before the pentest begins, establish who will own each category of finding. If the test covers three services, three engineers should be pre-assigned as remediation owners for their respective services. Ownership should be explicit, not emergent.

3. Require action-oriented reporting. Tell your pentest vendor that findings must include reproduction steps, attack scenarios, and specific remediation instructions written for an engineer unfamiliar with the vulnerable code. If the report reads like it was written for a CISO to hand to an auditor, it will not help the engineer who has to do the work.

4. Cap your finding backlog. Set a hard limit on the number of open findings your team will tolerate. We recommend zero criticals and no more than five high-severity findings open at any time. If you exceed the cap, remediation sprint capacity should be reallocated from feature work until the backlog is under control.

5. Track remediation as a security metric. Report fix rates to leadership, not just finding counts. A dashboard that shows "15 findings, 12 fixed, 3 in progress" tells a different story than "15 findings found." Make remediation velocity visible and tie it to team performance reviews.
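The backlog cap and fix-rate metric above can be computed from any structured findings export. Here is a minimal sketch in Python; the field names (`severity`, `status`) and the sample data are illustrative assumptions, not the schema of any particular pentest platform.

```python
from collections import Counter

# Hypothetical findings export. Field names and values are assumptions
# for illustration, not a specific vendor's report format.
findings = [
    {"id": "F-01", "severity": "critical", "status": "fixed"},
    {"id": "F-02", "severity": "high", "status": "fixed"},
    {"id": "F-03", "severity": "high", "status": "in_progress"},
    {"id": "F-04", "severity": "medium", "status": "open"},
    {"id": "F-05", "severity": "low", "status": "fixed"},
]

def remediation_summary(findings, max_open_critical=0, max_open_high=5):
    """Compute fix rate and check the open-finding caps recommended above
    (zero open criticals, at most five open highs)."""
    statuses = Counter(f["status"] for f in findings)
    fixed = statuses["fixed"]
    total = len(findings)
    # Anything not yet fixed counts against the severity caps.
    open_by_severity = Counter(
        f["severity"] for f in findings if f["status"] != "fixed"
    )
    return {
        "total": total,
        "fixed": fixed,
        "in_progress": statuses["in_progress"],
        "fix_rate": fixed / total if total else 1.0,
        "caps_ok": (
            open_by_severity["critical"] <= max_open_critical
            and open_by_severity["high"] <= max_open_high
        ),
    }

summary = remediation_summary(findings)
print(f"{summary['fixed']}/{summary['total']} fixed "
      f"({summary['fix_rate']:.0%}), caps ok: {summary['caps_ok']}")
# → 3/5 fixed (60%), caps ok: True
```

A report like this, generated on a schedule and surfaced to leadership, turns remediation velocity into a visible number rather than a buried detail in a PDF.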

Tired of pentest reports that go nowhere? We build remediation into every engagement. Get in touch with our team.