AI/ML, AI benefits/risks

Inside Microsoft’s take on AI

Share
Inside Microsoft's AI strategy,

Artificial intelligence (AI) presents major opportunities in cybersecurity, including the ability to defeat cyberattacks at machine speed, advance threat intelligence, and close the skills gap in the cybersecurity workforce.

However, cybersecurity operates as an asymmetrical game, and both defenders and attackers can use AI. To secure the future with AI, defenders need to advance software engineering, work together across sectors and domains, and ensure that AI is both ethical and responsible.

Microsoft has been using AI to enhance cybersecurity in several ways. One of the advantages for our security teams is their view of the data field: they know how the infrastructure, user posture, and applications are set up before a cyberattack begins.

A very large data advantage: 65 trillion daily signals, expertise of global threat intelligence, monitoring more than 300 cyberthreat groups, and insights on cyberattack behaviors from more than 1 million customers and 15,000 partners, helps tip the scale for our defenders.

We anticipate that AI will evolve in the cybersecurity industry, both as a tool for defenders and attackers. Some of the emerging trends and threats include pre-ransomware detection, deepfake detection, generative AI fraud, and AI voice integration. We have also been harnessing the power of AI to boost Windows 11 security.

While there are challenges and obstacles in some organizations for implementing AI for cybersecurity, such as data refinement, prompt injection, and hallucination, to stay ahead of attackers using this technology AI cybersecurity implementation should be top-of-mind for organizations today and going forward.

“Prompt injection,” where an attack gets designed to mislead LLMs to cause the AI system to perform unintended actions is an area we are vigilantly monitoring. Commonly an attacker will embed a malicious prompt into a webpage designed to exploit security vulnerabilities in generative AI-powered chats. When the bot reads the tainted webpage, it can become compromised and initiate an insecure action.

These prompt injections are a major concern for the security industry, and preventing them has become a top focus. Fortunately, we are not seeing robust activity from threat actors and are continuing to monitor these developments.

There are many ways organizations can start using AI today to protect their systems. AI can help organizations with password protection and authentication, phishing detection and prevention, vulnerability management, network security, and behavioral analytics.

Microsoft advocates for responsible behavior and norms in the use of AI for cybersecurity, as well as providing guidance and training for its own employees and customers. Following the SD3+C approach of secure by design, secure by default, secure in deployment and communication, has also become important.

As we know, AI can help with threat detection, response, prevention, and automation, as well as the challenges of dealing with data volume, complexity, and adversarial attacks. That said, no matter how well AI performs, human-led AI featuring privacy and security with humans providing oversight, evaluating appeals, and interpreting policies and regulations will always remain an important layer to a robust security posture.

Vivek Vinod Sharma, senior security architect, Microsoft

Inside Microsoft’s take on AI

As the largest investor in ChatGPT and developers of Copilot, Microsoft has become an important AI stakeholder and innovator – the company aims to foster responsible use of AI led by humans.

Get daily email updates

SC Media's daily must-read of the most current and pressing daily news

By clicking the Subscribe button below, you agree to SC Media Terms of Use and Privacy Policy.