Sunday, March 1, 2026
Your Responsible Disclosure Policy Is a Dead Link and Everyone Knows It
A security researcher finds a vulnerability in your application. A real one. Authentication bypass, exposed customer records, something worth reporting.
They want to tell you about it.
So they Google your company name and “security” or “responsible disclosure.” Maybe they check your footer. Maybe they try yourcompany.com/security. If they’re thorough, they look for a security.txt file at /.well-known/security.txt.
What they find, more often than not, is one of three things:
A 404 page. A PDF last updated in 2019 with an email address that bounces. Or a form that submits to a void and returns no confirmation that anything happened at all.
They shrug. Close the tab. Move on.
You never find out about the vulnerability.
The Policy Isn’t the Problem
Most companies of any meaningful size have some version of a responsible disclosure or vulnerability disclosure policy. They checked the box. Legal signed off. It’s on the website.
But a policy isn’t a program. A PDF isn’t an inbox. And an email address is only as useful as the process behind it.
The dirty secret of vulnerability disclosure is that writing a policy takes an afternoon, while operationalizing it gets perpetually deprioritized. So you end up with a public-facing commitment that privately routes to a shared security@ mailbox that three people theoretically have access to and nobody actually owns.
Researchers know this. They’ve been burned before. They send a report and hear nothing for two weeks, then get a form response from someone in customer support who clearly has no idea what a CVSS score is. Or they get an aggressive reply accusing them of unauthorized access. Or — most commonly — they get nothing at all.
Trust erodes fast. The community talks. Your disclosure inbox develops a reputation before you even know it has one.
What Actually Happens When Someone Reports a Vulnerability
Let’s be specific, because the gap between policy and reality only becomes visible when you trace the actual path a report takes.
A researcher submits a finding to your security@ address on a Tuesday afternoon. Here’s what typically happens next:
The email lands in a shared inbox. It sits there. Someone sees it Thursday and isn’t sure if someone else already looked at it. They forward it to a Slack channel. The channel has twelve members, five of whom are relevant. Two of those five are traveling. One responds asking who owns this. Nobody answers definitively.
By the time the right engineer is looking at the actual report, it’s been six days. The researcher has already assumed you don’t care. If the vulnerability was serious, they’re now weighing other options.
Meanwhile, your audit trail for this interaction is: a forwarded email chain, a Slack thread, and whatever your security engineer wrote down in a personal doc before they closed their laptop.
That’s not a disclosure program. That’s organized chaos with a policy page in front of it.
What Researchers Actually Want
It’s not complicated. Researchers want to know three things when they submit a report:
That it was received. An immediate, automated acknowledgment goes a long way. Not a form letter — something that signals the report has been classified and is in a queue. Five seconds of confidence that it didn’t disappear.
That someone qualified is looking at it. They don’t need constant updates. They need one signal that a real human being with context has seen what they sent.
That there’s a process. Even if the answer is “we’re investigating and will update you in 14 days,” that’s enough. Silence isn’t neutral. Silence communicates that you don’t have your act together.
That’s it. The bar is low. Most companies still don’t clear it.
The researchers who do clear it — who get a fast acknowledgment, a coherent triage response, and a fair resolution — become advocates. They tell other researchers. They list you on their profile. They come back when they find something else.
The ones who hit a dead link or a silent inbox don’t.
The Compliance Angle Nobody Talks About
Here’s the thing that turns this from a reputation problem into a liability problem.
Depending on your regulatory environment — and if you’re handling customer data, you’re probably in one — your ability to demonstrate a documented, timely response to reported vulnerabilities matters. GDPR, ISO 27001, SOC 2: they all touch this in some form.
When an auditor or a regulator asks to see how a reported security issue was handled, “it came into our shared inbox and we dealt with it over Slack” is not a sufficient answer. Not anymore.
You need a timestamped record of when a report was received, how it was classified, who handled it, what actions were taken, and when the reporter was notified. That’s not bureaucracy — that’s defensibility.
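As a concrete sketch, those fields could live in a per-report record like the one below. The class and field names here are illustrative assumptions, not drawn from any particular standard or tool:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DisclosureEvent:
    """One timestamped action in the handling of a report."""
    at: str          # ISO 8601 timestamp
    actor: str       # who took the action
    action: str      # e.g. "classified", "reporter_notified"
    detail: str = ""

@dataclass
class DisclosureRecord:
    """Minimal audit trail for a single vulnerability report."""
    report_id: str
    received_at: str
    classification: str   # e.g. "vulnerability", "beg_bounty", "spam"
    owner: str            # the named owner accountable for response
    events: list = field(default_factory=list)

    def log(self, actor: str, action: str, detail: str = "") -> None:
        """Append a timestamped event; never overwrite history."""
        self.events.append(DisclosureEvent(
            at=datetime.now(timezone.utc).isoformat(),
            actor=actor, action=action, detail=detail,
        ))

# Reconstructing the timeline an auditor would ask for:
record = DisclosureRecord(
    report_id="VDR-2026-0042",
    received_at="2026-03-03T14:02:11+00:00",
    classification="vulnerability",
    owner="appsec-oncall",
)
record.log("triage-bot", "classified", "auth bypass, high severity")
record.log("appsec-oncall", "reporter_notified", "ack sent, 14-day update window")
```

The point is append-only events with timestamps and actors: from that, "who knew what, when" can be answered mechanically instead of from memory.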
A dead link on your disclosure page doesn’t just signal disorganization to researchers. To an auditor, it signals that your security communication processes aren’t real. That your policy is decorative.
What a Functioning Program Actually Looks Like
The good news: this isn’t a hard problem. It’s an operational one.
A functioning disclosure program has four things:
A live, working intake channel. One address, clearly communicated, that actually receives reports and immediately confirms receipt. A security.txt file published at /.well-known/security.txt. A footer link that works. A form that submits without errors.
Automatic classification. The report hits your inbox and something — a person or a system — immediately determines: is this a real vulnerability report? A beg bounty? Spam? That determination should take minutes, not days.
Defined ownership. Someone — a team, a role, a rotation — is responsible for disclosure reports. Not “whoever sees it.” Not “the security channel.” A named owner with a response SLA they’re accountable to.
An audit trail. Every action logged. Every response recorded. Not because you expect to be investigated, but because you will eventually need to reconstruct this timeline and you’d rather have the records than the memory.
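The intake channel starts with a security.txt that actually resolves. RFC 9116 defines the format; a minimal valid file (the addresses and URLs below are placeholders) looks like:

```txt
Contact: mailto:security@yourcompany.com
Expires: 2027-03-01T00:00:00.000Z
Policy: https://yourcompany.com/security/disclosure-policy
Preferred-Languages: en
Canonical: https://yourcompany.com/.well-known/security.txt
```

Only Contact and Expires are required. The Expires field is the one that quietly turns a working file into a dead one: set a reminder to refresh it before the date passes.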
None of this requires a massive tooling investment or a months-long implementation. It requires treating your security inbox like the operational channel it actually is, rather than an afterthought attached to a policy document.
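To make the classification step concrete: a first pass doesn't need anything sophisticated, just a deterministic rule that separates likely vulnerability reports from beg bounties and noise so the real ones start their SLA clock in minutes. A minimal sketch, where the keyword lists and category names are illustrative assumptions rather than any standard taxonomy:

```python
import re

# Illustrative signals only; a real filter would be tuned to your own inbox.
VULN_SIGNALS = re.compile(
    r"\b(xss|sqli|sql injection|csrf|ssrf|idor|rce|cve-\d{4}-\d+|"
    r"auth(entication)? bypass|privilege escalation|proof of concept|poc)\b",
    re.IGNORECASE,
)
BEG_BOUNTY_SIGNALS = re.compile(
    r"\b(spf|dmarc|dkim|clickjacking|missing security header|bounty amount|"
    r"how much (will|do) you pay)\b",
    re.IGNORECASE,
)

def classify_report(subject: str, body: str) -> str:
    """First-pass triage: 'vulnerability', 'beg_bounty', or 'needs_human'."""
    text = f"{subject}\n{body}"
    if VULN_SIGNALS.search(text):
        return "vulnerability"   # route to the named owner, start the SLA clock
    if BEG_BOUNTY_SIGNALS.search(text):
        return "beg_bounty"      # polite template response
    return "needs_human"         # never silently drop ambiguous mail
```

Note the fallback: anything ambiguous goes to a human rather than a spam folder, because the cost of misfiling a real report is far higher than the cost of reading one extra email.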
The Researcher Finds Something Tomorrow
You don’t get to control when a vulnerability gets discovered. You don’t get to control who finds it or how sophisticated they are or what they’ll do if they feel ignored.
You do get to control what they find when they try to report it.
A dead link is a choice. An unmaintained inbox is a choice. A disclosure policy with no process behind it is a choice. They’re just choices that usually get made by default, not by design.
Fix the link. Own the inbox. Build the trail.
The researcher is out there. Make it worth their time to tell you.
Fortworx gives your security inbox a process from day one — automatic classification, defined workflows, and a full audit trail for every report that comes in. Start for free or book a demo.