Co-founded in 2021 by siblings Daniela and Dario Amodei, who previously worked at OpenAI and were later joined by other former OpenAI employees, Anthropic is today one of the leaders in generative AI and a serious competitor to OpenAI. Following an error made by one of its subcontractors, the company has been the victim of a data leak.
Anthropic’s goal is to make systems more reliable, steerable, and interpretable through constitutional AI, an approach to training language models that aims to instill specific values and principles into AI systems.
The start-up revealed Claude 2 in July 2023, presented Claude Instant 1.2, available to companies through its API, the following month, and launched the paid plan Claude Pro last September. Valued at over $18 billion, Anthropic is trusted by large companies such as Google, Amazon, Salesforce and the German publisher SAP.
Upon noticing a data breach caused by the unintentional sharing of a file containing customer information with a third party (a subset of customer names and open credit balances from 2023), the company immediately notified the customers involved, as shown in this e-mail released by the Windows Report media outlet:
Important alert about your account.
We would like to inform you that one of our subcontractors inadvertently passed on Anthropic customer account information to a third party. The information included your account name as maintained in our systems and customer account information as of December 31, 2023 – i.e. it was indicated that you were a customer with open credit balances at the end of the year. This information did not include sensitive personal data, such as banking or payment information, or requests/outputs. Based on our investigation to date, the contractor’s actions were an isolated error that did not result from, and did not result in, any breach of our systems. We are also not aware of any malicious behavior resulting from this disclosure.
Although this data is not sensitive, the company urged its customers to exercise caution, especially if contacted by someone claiming to be from Anthropic with a payment request or a change in payment instructions, and not to reveal their credentials or passwords.
Even though this data leak is linked not to its internal AI systems but to human error, as in the vast majority of cases, Anthropic will have to redouble its efforts to restore its customers’ trust and implement reinforced security measures to prevent such leaks in the future, especially since fear of exposing proprietary data is one of the main obstacles to the adoption of generative AI by companies.