Meta AI was ranked worst for data privacy among nine AI platforms assessed by Incogni,
according to a report published Tuesday.
Mistral AI’s Le Chat was deemed the most privacy-friendly generative AI (GenAI) platform, followed closely by OpenAI’s ChatGPT.
The GenAI and large language model (LLM) platforms were scored by Incogni based on 11 criteria grouped into three main categories: AI-specific privacy issues, transparency and data collection.
Google ranked low on overall privacy, best on AI-specific criteria
The “AI-specific privacy” ranking mostly covered how users’ prompts and data are used in training AI models, as well as the extent to which user prompts are shared with third parties.
Incogni said its researchers gave the criteria in this category significant weight compared to criteria involving non-AI-specific data privacy issues.
While Google Gemini was ranked as the second most privacy-invasive AI platform overall, it ranked best compared with other platforms for AI-specific issues.
While Gemini does not appear to let users opt out of having their prompts used to train models, Google does not share prompts with third parties beyond necessary service providers and legal entities.
By contrast, Meta, which scored second-worst in this category, shared user prompts with corporate group members and research partners, while OpenAI, which scored third-worst, shared data with unspecified “affiliates.”
ChatGPT, Microsoft Copilot, Le Chat and xAI’s Grok were all noted to allow users to opt out of having their prompts used to train models, while Gemini, DeepSeek, Inflection AI’s Pi AI and Meta AI did not appear to offer this option. Anthropic stood out by claiming never to use user inputs to train its models.
Overall, Inflection AI ranked worst for AI-specific privacy concerns, although the platform did not appear to share user prompts with third parties other than service providers.
OpenAI ranked No. 1 for transparency
OpenAI ranked best in terms of making it clear whether prompts are used for training, making it easy to find information on how models are trained and providing a readable privacy policy. Inflection AI scored worst in this category.
Researchers noted that information on whether prompts were used for training was easily accessible through a search or clearly presented in the privacy policies of OpenAI, Mistral AI, Anthropic and xAI, which were ranked first through fourth in the transparency category, respectively.
By contrast, researchers had to “dig” through the Microsoft and Meta websites to find this information, and found it even more difficult to locate within the privacy policies of Google, DeepSeek and Pi AI, the report stated. The information provided by these latter three companies was often “ambiguous or otherwise convoluted,” according to Incogni.
The readability of each company’s privacy policy was assessed using the Dale-Chall readability formula, with researchers determining that all of the privacy policies required a college-graduate reading level to understand.
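The Dale-Chall formula combines the share of “unfamiliar” words in a text with its average sentence length. A minimal sketch of how such a score is computed follows; note that the real formula checks words against a published list of roughly 3,000 words familiar to fourth-graders, for which the tiny EASY_WORDS set below is only a stand-in:

```python
# Sketch of the Dale-Chall readability formula. The real metric uses a
# list of ~3,000 familiar words; EASY_WORDS is a small placeholder here.
EASY_WORDS = {"we", "use", "your", "data", "may", "the", "to", "and", "of", "you"}

def dale_chall(text: str) -> float:
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    words = text.lower().replace(".", " ").split()
    hard = [w for w in words if w.strip(",;:") not in EASY_WORDS]
    pct_hard = 100 * len(hard) / len(words)        # % of unfamiliar words
    avg_sentence_len = len(words) / len(sentences)  # words per sentence
    score = 0.1579 * pct_hard + 0.0496 * avg_sentence_len
    if pct_hard > 5:
        score += 3.6365  # adjustment applied when unfamiliar words exceed 5%
    return score
```

Under the conventional Dale-Chall grading table, a score of 9.0-9.9 corresponds to a 13th-15th-grade (college) reading level and 10 or above to a college-graduate level, the band Incogni reported for every policy it assessed.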
While OpenAI, Anthropic and xAI were noted to make heavy use of support articles to present more convenient and “digestible” information outside of their privacy policies, Inflection AI and DeepSeek were criticized for having “barebones” privacy policies, and Meta, Microsoft and Google failed to provide dedicated AI privacy policies outside of their general policies across all products.
Meta, Microsoft deemed the most ‘data-hungry’ AI platforms
The third assessment category covered the collection and sharing of personal data by AI platforms and apps, outside of user prompts. Inflection AI and OpenAI were found to collect and share the least data, while Microsoft and Meta ranked eighth and ninth, respectively, in this category.
While all of the platforms collected user data during sign-ups, website visits and purchases, as well as from “publicly accessible sources,” some companies also received data from third parties.
ChatGPT, Gemini and DeepSeek collected personal information from security partners, Gemini and Meta from marketing partners, Microsoft Copilot from financial institutions and Anthropic from “commercial agreements with third parties,” according to Incogni. Pi AI, which scored best in this category, only collected public data and data provided by users.
When it came to data collection and sharing via mobile apps, Mistral’s Le Chat Android and iOS apps collected and shared the least data, while the Meta AI app collected the most, followed by the Gemini app.
Some mobile apps were noted to collect specific types of information; for example, Gemini and Meta AI collect precise locations and addresses, and Gemini, Pi AI and DeepSeek collect phone numbers. Grok’s Android app was disclosed to share with third parties the photos that users grant it access to.
The Incogni report concluded by stating that one of its main takeaways is the importance of having clear, accessible and up-to-date information on AI companies’ data privacy practices. It noted that the use of a single privacy policy for all products by the biggest tech companies assessed – Microsoft, Meta and Google – made it more difficult to find specific information about data handling practices on their AI platforms.
AI-specific data privacy issues are a growing concern, as research has shown that employees often include sensitive information in their prompts to AI platforms. A 2024 report by Cyberhaven found that 27.4% of the data input to chatbots by employees was sensitive data, a 156% increase over 2023.
Additionally, a 2024 survey conducted by the National Cybersecurity Alliance (NCA) and CybSafe found that more than a third of respondents who used AI at work admitted to submitting sensitive information to AI tools.
Much of this sensitive information is submitted through personal accounts that lack the data privacy features of enterprise accounts, a practice known as “shadow AI.” In 2024, Cyberhaven found that 73.8% of employee ChatGPT use and 94.4% of employee Gemini use was conducted on personal accounts.