Curl project founder snaps over deluge of time-sucking AI slop bug reports

Curl project founder Daniel Stenberg is fed up with the deluge of AI-generated “slop” bug reports, and recently introduced a checkbox to screen out the low-effort submissions that are draining maintainers’ time.

Stenberg said the time maintainers spend triaging each AI-assisted vulnerability report submitted via HackerOne, only for it to be deemed invalid, is tantamount to a DDoS attack on the project.

Citing a specific recent report that “pushed [him] over the limit,” Stenberg said via LinkedIn: “That’s it. I’ve had it. I’m putting my foot down on this craziness.”

From now on, every HackerOne report claiming to have found a bug in curl, a command-line tool and library for transferring data with URLs, must disclose whether AI was used to generate the submission.

If the reporter ticks the box, they can expect a barrage of follow-up questions demanding proof that the bug is genuine before the curl team spends any time verifying it.
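For context, libcurl, the library side of the project, is embedded in countless applications to perform those URL transfers. As a minimal sketch of what that looks like, assuming a standard libcurl installation (this is the canonical fetch pattern from libcurl’s documentation, with a placeholder URL, not code tied to any report discussed here):

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    /* create an "easy" handle that holds the state of one transfer */
    CURL *curl = curl_easy_init();
    if (curl) {
        /* tell libcurl which URL to fetch (placeholder for illustration) */
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

        /* perform the transfer; by default the body is written to stdout */
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));

        /* release the handle and its resources */
        curl_easy_cleanup(curl);
    }
    return 0;
}

Compiled with something like gcc fetch.c -lcurl, those few calls are the surface that thousands of applications rely on, which is why every bogus vulnerability report against the project carries a real triage cost.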

“We now ban every reporter instantly who submits reports we deem AI slop,” Stenberg added. “A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time.”

He went on to say that the project has never received a single valid bug report generated with the help of AI, even as such reports arrive at an increasing rate.

“These kinds of reports did not exist at all a few years ago, and the rate seems to be increasing,” Stenberg said, replying to a follower. “Still not drowning us, but the trend is not looking good.”

These concerns are not new. Python’s Seth Larson raised the same alarm back in December, saying that responding to AI slop reports is expensive and time-consuming because, at face value, they appear legitimate and must be scrutinized by trained eyes before they can be confirmed as bogus.

“Security reports that waste maintainers’ time result in confusion, stress, frustration, and to top it off, a sense of isolation due to the secretive nature of security reports,” Larson wrote. “All of these feelings can add to burnout of likely highly trusted contributors to open source projects.

“In many ways, these low-quality reports should be treated as if they are malicious. Even if this is not their intent, the outcome is maintainers that are burnt out and more averse to legitimate security work.”

Stenberg’s decision to add an AI filter to HackerOne reports follows years of frustration with the practice. He raised the issue as far back as January 2024, saying reports made with Google Bard (as Gemini was known at the time) were “crap”, albeit better crap.

The comment foreshadowed the point Larson made almost a year later: AI-generated reports look legitimate at first, and it takes time to surface problems such as hallucinations.

The issue is especially damaging for open source software projects like curl and Python, which largely depend on the work of a small number of unpaid volunteer specialists to help improve them.

Developers come and go on these projects, often staying just long enough to fix a bug they reported or to add a feature before moving on. At the time of writing, curl’s website states that at least 3,379 people have contributed to the project since Stenberg founded it in 1998.

Curl offers bounty rewards of up to $9,200 for the discovery and report of a critical vulnerability in the project, and has paid $86,000 in rewards since 2019.

According to its HackerOne page, the project received 24 reports in the past 90 days, none of which led to payouts. And, as Stenberg noted in his LinkedIn post, none of the AI-assisted reports made in the last six years has uncovered a genuine bug.

Generative AI tools have made it easy for low-skilled individuals to churn out plausible-looking reports and file them with bug bounty programs in the hope of cashing in on the rewards.

However, Stenberg said that it is not just the newbies and grifters using AI to chance their luck on a bounty program – those with a degree of reputation are also getting in on the act.

The report that pushed the project founder over the edge was made two days ago and was a textbook AI-generated submission.

The report claimed that “a novel exploit leveraging stream dependency cycles in the HTTP/3 protocol stack was discovered, resulting in memory corruption and potential denial-of-service or remote code execution scenarios.”

Ultimately, though, it was found to refer to nonexistent functions.

Stenberg said: “What fooled me for a short while was that it sounded almost plausible, combined with the fact that the reporter actually had proper ‘reputation’ (meaning that this person has reported and have had many previous reports vetted as fine). Plus, of course, that we were preoccupied over the day with the annual curl up meeting.” ®

