OpenAI may soon require organizations to complete an identity verification process to access certain future AI models through its API, a step aimed at tightening security and curbing unsafe use of its technology.
According to a support page published on OpenAI’s website, the verification process, called “Verified Organization,” is designed to unlock access to the most advanced models and capabilities on the OpenAI platform. The process requires a government-issued ID from one of the countries supported by OpenAI’s API. An ID can only verify one organization every 90 days, and not all organizations will be eligible for verification.
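For developers, the practical effect is that API calls to a gated model from an unverified organization would presumably fail with an authorization-style error. The sketch below, using OpenAI's official Python SDK (v1.x), shows one way an application might detect that case. The model name is a placeholder, and the assumption that unverified organizations would see a 403 or 404 is ours; the support page does not specify the exact error behavior.

```python
# Sketch: probing access to a verification-gated model via the official
# openai Python SDK. "future-gated-model" is a placeholder name, and the
# assumption that unverified orgs receive a 403 (PermissionDeniedError)
# or 404 (NotFoundError) is ours, not documented by OpenAI.
from openai import OpenAI, PermissionDeniedError, NotFoundError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

try:
    response = client.chat.completions.create(
        model="future-gated-model",  # placeholder for a gated model
        messages=[{"role": "user", "content": "ping"}],
    )
    print(response.choices[0].message.content)
except (PermissionDeniedError, NotFoundError) as err:
    # Depending on how OpenAI gates the model, an unverified org may see
    # a 403 (permission denied) or a 404 (model not visible to the org).
    print(f"Model unavailable to this organization: {err}")
```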
The verification process is intended to mitigate unsafe use of AI while keeping advanced models available to the broader developer community. OpenAI said that a small minority of developers intentionally use its APIs in violation of its usage policies, which prompted the new verification step. The company added that it takes seriously its responsibility to ensure AI is both broadly accessible and used safely.
The new verification process could be aimed at beefing up security around OpenAI's products as they become more sophisticated and capable. The company has published several reports on its efforts to detect and mitigate malicious use of its models, including by groups allegedly based in North Korea. Verification itself takes only a few minutes.
The move may also be intended to prevent IP theft. According to a Bloomberg report earlier this year, OpenAI was investigating whether a group linked to DeepSeek, the China-based AI lab, exfiltrated large amounts of data through its API in late 2024, possibly to train its own models, in violation of OpenAI's terms.
OpenAI blocked access to its services in China last summer, an earlier move in its ongoing effort to manage and secure how its API is used.