12 April, 2023 06:05 PM IST | New Delhi | IANS
Amid the growing popularity of ChatGPT, Microsoft-backed OpenAI, the developer of the AI chatbot, is now offering security researchers up to USD 20,000 to help the company distinguish between good-faith hacking and malicious attacks, after it suffered a security breach last month.
OpenAI has launched a bug bounty programme for ChatGPT and other products, saying the initial priority rating for most findings will use the 'Bugcrowd Vulnerability Rating Taxonomy'.
"Our rewards range from $200 for low-severity findings to up to $20,000 for exceptional discoveries," the AI research company said.
"However, vulnerability priority and reward may be modified based on likelihood or impact at OpenAI's sole discretion. In cases of downgraded issues, researchers will receive a detailed explanation," it added.
The security researchers, however, are not authorised to conduct security testing on plugins created by other people.
OpenAI is also asking ethical hackers to safeguard confidential OpenAI corporate information that may be exposed through third parties.
Some examples in this category include Google Workspace, Asana, Trello, Jira, Monday.com, Zendesk, Salesforce and Stripe.
"You are not authorised to perform additional security testing against these companies. Testing is limited to looking for confidential OpenAI information while following all laws and applicable terms of service. These companies are examples, and OpenAI does not necessarily do business with them," the company said.
Last month, OpenAI admitted that some users' payment information may have been exposed when it took ChatGPT offline owing to a bug.
According to the company, it took ChatGPT offline due to a bug in an open-source library which allowed some users to see titles from another active user's chat history.
OpenAI discovered that the same bug may have caused the unintentional visibility of "payment-related information of 1.2 per cent of the ChatGPT Plus subscribers who were active during a specific nine-hour window".
The bug was discovered in the Redis client open-source library called "redis-py".
This story has been sourced from a third-party syndicated feed from agencies. Mid-day accepts no responsibility or liability for the dependability, trustworthiness or reliability of the text or its data. Mid-day management/mid-day.com reserves the sole right to alter, delete or remove (without notice) the content in its absolute discretion for any reason whatsoever.