OpenAI is implementing stringent security measures to protect its intellectual property from corporate espionage, according to a Financial Times report. The company has overhauled its security operations, including "information tenting" policies that limit employee access to new algorithms under development. Employees now require fingerprint scans to enter certain rooms, and a "deny-by-default egress policy" blocks outbound internet connections from systems holding model weights unless a connection is explicitly approved.
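To illustrate the idea behind a deny-by-default egress policy, here is a minimal sketch in Python: outbound connections fail unless the destination is on an explicit allowlist. The host names and ports are invented for illustration; this is not OpenAI's actual implementation.

```python
# Hypothetical sketch of a deny-by-default egress policy.
# Everything is blocked unless the (host, port) pair is explicitly approved.
ALLOWED_EGRESS = {
    ("weights-store.internal", 443),  # illustrative approved internal endpoint
}

def egress_allowed(host: str, port: int) -> bool:
    """Deny by default: only explicitly allowlisted destinations pass."""
    return (host, port) in ALLOWED_EGRESS

# An unlisted destination such as ("example.com", 443) is denied,
# while the approved internal endpoint is permitted.
```

In practice this kind of policy is typically enforced at the network layer (firewalls, proxies) rather than in application code, but the allowlist logic is the same.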
These actions follow claims by OpenAI that Chinese AI startup DeepSeek copied its models using distillation techniques. Earlier reports indicated that Microsoft security researchers suspected DeepSeek-linked individuals were exfiltrating significant amounts of data via OpenAI's API, and OpenAI told the FT that it had observed "some evidence of distillation." The company had previously begun requiring government ID verification for developers accessing its advanced AI models. DeepSeek's open-source R1 reasoning model, which performs comparably to OpenAI's o1 at a lower cost, has fueled concerns about the competitive threat from Chinese AI models.
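For context on what "distillation" means here, the following is a hedged sketch of the classic knowledge-distillation loss: a student model is trained to match a teacher model's softened output distribution. The function names and logits are illustrative only; this does not reflect any code used by OpenAI or DeepSeek.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, optionally softened
    by a temperature > 1 to expose more of the teacher's relative preferences."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the softened teacher distribution to the student's.
    Zero when the student exactly matches the teacher; positive otherwise."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

Minimizing this loss over many prompts lets a smaller or cheaper model absorb a larger model's behavior, which is why API access alone can suffice for the kind of copying OpenAI alleges.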