Get Paid up to $20,000 for Finding ChatGPT Security Flaws

ChatGPT might be the coolest tech on the block right now, but it’s not immune to the issues all software faces. Bugs and glitches affect anything that runs code, and here the stakes are higher than a crummy user experience: The wrong bug could let bad actors compromise the security of OpenAI’s users, which is especially dangerous for a service that hit 100 million monthly active users in January. OpenAI wants to do something about it, and it’ll pay you up to $20,000 for your help.

OpenAI’s Bug Bounty Program pays out big time

OpenAI announced its new Bug Bounty Program on Tuesday, April 11, inviting “security researchers, ethical hackers, and technology enthusiasts” to comb through its products (including ChatGPT) for “vulnerabilities, bugs, or security flaws.” If you happen to find such a flaw, OpenAI will reward you in cash. Payouts scale with the severity of the issue you discover, from $200 for “low-severity” findings to $20,000 for “exceptional discoveries.”

Bug bounty programs are actually quite common. Companies across the industry offer them, outsourcing the work of searching for bugs to anyone who wants in. It’s a bit like beta testing an app: Sure, you can have developers look for bugs and glitches, but relying on such a limited pool of testers increases the chances that important issues slip through.

With bug bounties, the stakes are even greater, because companies are most interested in looking for bugs that leave their software—and, therefore, their users—vulnerable to security threats.

How to sign up for OpenAI’s Bug Bounty Program

OpenAI is running its Bug Bounty Program in partnership with Bugcrowd, a platform that crowdsources security testing. You can sign up for the program through Bugcrowd’s official site, where, as of this writing, 24 vulnerabilities have already been rewarded, with an average payout of $983.33.

OpenAI wants to make it clear, though, that model safety issues are not covered by this program. If, in testing one of OpenAI’s products, you find the model behaving in a way it shouldn’t, you should fill out the model behavior feedback form, not go through the Bug Bounty Program. For example, you shouldn’t try to claim a reward if the model tells you how to do something bad, or if it writes malicious code for you. Hallucinations are likewise outside the scope of the bounty.

OpenAI has a long list of in-scope issues on its Bugcrowd page, in addition to an even longer list of out-of-scope issues. Make sure to read the rules carefully before submitting your bugs.
