Even after OpenAI withdrew its services from China last year, concerns about data security have persisted. To address them, the company has taken preventive measures.
Last week, OpenAI issued an update regarding its verification process. As more people use its AI models, questions about privacy and ethical use have grown.
To what extent can AI's capabilities be put to work without crossing ethical or moral lines?
To head off potential misuse, the tech powerhouse will now require developers to submit a government-issued ID from a country supported by OpenAI's API. Developers who pass verification, earning "Verified Organization" status, will gain access to some of the company's most advanced AI capabilities.
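In practice, the gate surfaces as a permissions error on API calls to restricted models until verification completes. Here is a minimal Python sketch, assuming the official openai SDK, with "o3" used as a stand-in for whichever model happens to be verification-gated:

```python
from openai import OpenAI, PermissionDeniedError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

try:
    # "o3" is a placeholder here for any verification-gated model
    response = client.chat.completions.create(
        model="o3",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)
except PermissionDeniedError as err:
    # HTTP 403: the organization has not completed Verified Organization status
    print(f"Access denied; complete organization verification first: {err}")
```

Unverified organizations can continue calling non-gated models as before; only the restricted tier is affected.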
There is a crucial limitation to keep in mind, however: each ID can verify only one organization every 90 days, and not every organization will be eligible for verification.
According to OpenAI, this new process is designed to protect the company’s products from malicious actors and to prevent IP theft.
OpenAI’s mandatory ID verification follows a report in which the company said it had blocked ChatGPT accounts suspected of using the chatbot for online surveillance and spreading misinformation. The company added that some of this activity was potentially tied to North Korea and China.
Still, speculation persists about the transparency of the process. Will the IDs be stored in OpenAI’s databases, and for how long?
While this is a strategic move by OpenAI to safeguard its users, significant questions remain unanswered.