COMMENTARY: During the COVID-19 pandemic, the rapid rollout of telemedicine kept much of healthcare alive for patients and providers alike. But in the rush to implement remote care, many hospitals skipped critical cybersecurity reviews. Unvetted apps, weak encryption, and unsecured endpoints opened doors to cyberattacks and patient data exposure.

Now, as hospitals race ahead with AI adoption, it’s worth asking: Are we opening ourselves to the same risks again?
As AI continues to advance the practice of medicine, from new diagnostic tests, imaging, and clinical decision support to documentation and workflow automation, many healthcare organizations are embracing it at breakneck speed. Yet beneath the innovation lies a familiar risk: deploying transformative technology faster than teams can properly secure it.
Both video conferencing and large language models entered healthcare under pressure: telemedicine to maintain access during a crisis, and AI to offset workforce shortages, burnout, and the demand for efficiency.

But speed often comes at a cost. Telemedicine implementations took place in a matter of weeks or months; hospital leadership now measures AI implementations in 90-day increments. During the telemedicine rollout, security oversight was often bypassed; today, AI gets introduced through fast-tracked prototypes built on partially developed models. Telemedicine ran on consumer-grade platforms; AI is driven largely by start-up ventures. On the regulatory front, HIPAA enforcement was temporarily relaxed during telemedicine rollouts, and regulation of AI keeps trying to play catch-up.

Both initiatives have relied on third parties and high levels of shadow IT. Hospitals that turn to external vendors often assume robust security without ever validating it. With AI today, some platforms are integrated without CISO involvement or formal governance, leading to unmonitored data flows and inconsistent access controls.

The regulatory frameworks for telehealth were insufficient during the early pandemic, and AI faces a similar gap. While organizations like the Food and Drug Administration and NIST are working on guidance, many hospitals are implementing AI without AI-specific risk assessments or clear audit trails. And where telemedicine introduced new transmission risks, AI introduces massive volumes of sensitive clinical data for training and real-time inference, data that must be secured, anonymized, and monitored across its lifecycle.

The challenges of the telemedicine boom offer a clear blueprint for what not to repeat with AI. Hospitals need to institute a few best practices for AI implementations:
Build security reviews into the design and procurement process.
Establish AI-specific data governance and consent frameworks.
Conduct thorough third-party AI risk assessments.
Educate clinical staff on AI limitations, risks, and exposures.
Log all AI interactions and outputs for auditability, as sketched below.
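To make the last point concrete, here is a minimal sketch of what audit logging for AI interactions might look like, assuming an append-only JSON-lines file with hash chaining for tamper evidence. The function and file names (log_ai_interaction, ai_audit.jsonl) are illustrative, not drawn from any specific product or hospital toolchain.

```python
import hashlib
import json
import time
from pathlib import Path

AUDIT_LOG = Path("ai_audit.jsonl")  # hypothetical log location


def _last_hash() -> str:
    """Hash of the most recent entry, or a fixed seed for the first record."""
    if not AUDIT_LOG.exists():
        return "0" * 64
    lines = AUDIT_LOG.read_text().splitlines()
    if not lines:
        return "0" * 64
    return json.loads(lines[-1])["entry_hash"]


def log_ai_interaction(user_id: str, model: str, prompt: str, output: str) -> None:
    """Append one AI interaction to the audit log with hash chaining,
    so tampering with an earlier record breaks every later entry."""
    record = {
        "timestamp": time.time(),
        "user_id": user_id,  # who invoked the model
        "model": model,      # model name/version for traceability
        # Log hashes rather than raw text so PHI never lands in the log;
        # auditors can still verify records against stored artifacts.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "prev_hash": _last_hash(),  # link to the previous entry
    }
    record["entry_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    log_ai_interaction(
        "clinician_42",
        "example-llm-v1",
        "Summarize this discharge note...",
        "Patient admitted for...",
    )
```

The design choice worth noting: logging digests instead of raw prompts and outputs keeps protected health information out of the audit trail itself, while the hash chain gives reviewers a cheap way to detect after-the-fact edits. A production deployment would route these records to centralized, access-controlled log storage rather than a local file.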
AI in medicine brings enormous promise, but also a new and significantly higher level of risk. Unlike telemedicine, which was mostly about providing communication channels, AI has the power to directly and indirectly influence clinical decisions. If an AI model is maliciously manipulated, poorly trained, or fed unvetted training data, it could result in a misdiagnosis, an inappropriate treatment, or worse.

Our industry has embraced AI-powered healthcare, and the potential benefits make the rush understandable. These new technologies can reduce administrative burdens, enhance diagnostic precision, and streamline workflows.

If we fail to secure AI as we go, just as many failed to initially secure telehealth, we risk undermining the very trust and care improvements that AI promises to deliver. Healthcare leaders must embed cybersecurity into every phase of AI adoption, from risk-based vendor selection to model monitoring, to avoid repeating the costly mistakes of the past. In healthcare, innovation should never come at the expense of cybersecurity.

Toby Gouker, chief security officer, First Health Advisory

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.
The former Provost of the SANS Technology Institute, Toby Gouker brings a breadth of privacy and security expertise to First Health Advisory’s cyber health practice. Coupled with years of experience in the federal healthcare IT industry, his expertise sits at the nexus of cybersecurity, health policy, and healthcare risk management. With over 30 years of industry experience and 10 years in education, Gouker is both a scholar and a practitioner, offering healthcare organizations guidance on business tools and techniques that help them protect IT and data assets.