
Microsoft Azure AI assistants can be tricked into turning over patient data

Microsoft Azure chatbots charged with handling medical records could be tricked into handing over the personal data of hundreds of customers.

Researchers at Tenable found that the AI assistants, when queried about patient details, were willing to hand over far more personal data than necessary.

As designed, the Microsoft AI assistants look up a limited amount of patient information and provide a brief description of the person’s condition along with recommendations for treatment.

“Essentially, the service allows healthcare providers to create and deploy patient-facing chatbots to handle administrative workflows within their environments,” Tenable said in its roundup of the incident.

“Thus, these chatbots generally have some amount of access to sensitive patient information, though the information available to these bots can vary based on each bot’s configuration.”

What the researchers found was that the Microsoft AI assistants were a little too helpful: they shared customer data that should not have been made public and could be made to access the records of other customers.

“Upon seeing that these resources contained identifiers indicating cross-tenant information (i.e. information for other users/customers of the service), Tenable researchers immediately halted their investigation of this attack vector and reported their findings to [the Microsoft Security Response Center (MSRC)] on June 17, 2024. MSRC acknowledged Tenable’s report and began their investigation the same day,” Tenable said in its official write-up on the matter.

“Within the week, MSRC confirmed Tenable’s report and began introducing fixes into the affected environments. As of July 2, MSRC has stated that fixes have been rolled out to all regions.”

While this should have remedied the issue, Tenable researchers found that the underlying flaw remained: the internal Instance Metadata Service (IMDS) was still accessible even with the fix in place.

“The difference between this issue and the first is the overall impact,” said Tenable.

“The FHIR endpoint vector did not have the ability to influence request headers, which limits the ability to access IMDS directly. While other service internals are accessible via this vector, Microsoft has stated that this particular vulnerability had no cross-tenant access.”
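For context on why header control matters: Azure’s Instance Metadata Service only answers requests that carry a “Metadata: true” header, so a server-side request forgery primitive that cannot set headers cannot query it directly. The sketch below illustrates that documented behavior in Python with the requests library; it assumes an Azure-hosted environment and is illustrative only, not Tenable’s proof of concept.

```python
# Minimal sketch of the Azure IMDS header requirement. Assumes this runs
# on an Azure-hosted machine; elsewhere the link-local address is unreachable.
import requests

IMDS_URL = "http://169.254.169.254/metadata/instance"
PARAMS = {"api-version": "2021-02-01"}

# Without the required header, IMDS rejects the request (HTTP 400), which
# is why an SSRF vector that cannot influence headers is blunted.
resp = requests.get(IMDS_URL, params=PARAMS, timeout=5)
print(resp.status_code)  # 400: required metadata header not specified

# With the header set, the same request returns instance metadata as JSON.
resp = requests.get(IMDS_URL, params=PARAMS,
                    headers={"Metadata": "true"}, timeout=5)
print(resp.status_code)  # 200
print(resp.json())       # compute, network and identity details for the host
```

An attacker who can both steer a server-side request and control its headers can go further, for example by asking IMDS for a managed-identity access token, which is broadly why reachability of the metadata service is treated as a serious finding.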

Shaun Nichols

A career IT news journalist, Shaun has spent 17 years covering the industry with a specialty in the cybersecurity field.
