Empowering our employees with generative AI while keeping the company secure - Inside Track Blog (2024)

Empowering our employees with generative AI while keeping the company secure - Inside Track Blog (1)

Copilot for Microsoft 365 Deployment and Adoption Guide

Read our step-by-step guide on deploying Copilot for Microsoft 365 at your company. It’s based on our experience deploying it here at Microsoft:

Generative AI (GenAI) is rapidly changing the way businesses operate, and everyone wants to be in on the action. Whether it’s to automate tasks or enhance efficiency, the allure of what GenAI can do is strong.

However, for companies considering the adoption of GenAI, there are a multitude of challenges and risks that must be navigated. These range from data exposure or exfiltration where your company’s sensitive data can be accessed by unintended audiences to direct attacks on the models and data sources that underpin them. Not acting and waiting until the world of GenAI settles down poses its own risk. Employees eager to try out the latest and greatest will start using GenAI tools and products that aren’t vetted for use in your enterprise’s environment. It’s safe to say that we’re not just in the era of Shadow IT but Shadow AI, too.

Add to that the fact that threat actors have begun to use these tools in their activities, and you get a real sense that navigating the cyberthreat landscape of today and tomorrow will be increasingly difficult—and potentially headache-inducing!

Here at Microsoft, our Digital Security & Resilience (DSR) organization’s Securing Generative AI program has focused on solving this problem since day one: How do we enable our employees to take advantage of the next generation of tools and technologies that enable them to be productive, while maintaining safety and security?

Building a framework for using GenAI securely

At any given moment, there are dozens of teams working on GenAI projects across Microsoft and dozens of new AI tools that employees are eager and excited to use to boost their productivity or use to be more creative.

When establishing our Securing AI program, we wanted to use as many of our existing systems and structures for the development, implementation, and release of software within Microsoft as possible. Rather than start from scratch, we looked at processes and workstreams that were already established and familiar for our employees and worked to integrate AI rules and guidance into those processes, such as the Security Development Lifecycle (SDL), and the Responsible AI Impact Assessment template.

Successfully managing the secure roll-out of a technology of this scale and importance takes the collaboration and cooperation of hundreds of people across the company, with representatives from diverse disciplines ranging from engineers and researchers working on the cutting edge of AI technology, to compliance and legal specialists, through to privacy advocates.

Empowering our employees with generative AI while keeping the company secure - Inside Track Blog (2)

We work extensively with our partners in Microsoft Security, Aether (AI Ethics and Effects in Engineering and Research), the advisory body for Microsoft leadership on AI ethics and effects, and the extended community of Responsible AI. We also work with security champions who are embedded in teams and divisions across the enterprise. Together, this extended community helps develop, test, and validate the guidance and rules that AI experiences must adhere to for our employees to safely use them.

One of the most popular frameworks for successful change management is the simple three-legged stool. It’s a simple metaphor, emphasizing the need for even efforts across the domains of technology, processes, and people. We’ve focused our efforts to secure GenAI on strengthening and reinforcing the data governance for our technologies, integrating AI security into existing systems and processes, and addressing the human factor by fostering collaboration and community with our employees. The recent announcement of the Secure Future Initiative with its six security pillars emphasizes security as a top priority across the company to advance cybersecurity protections.

Incorporating AI-focused security into existing development and release practices

The SDL has been central to our development and release cycle at Microsoft for more than a decade, ensuring that what we develop is secure by design, by default, and secure in deployment. We focused on strengthening the SDL to handle the security risks posed by the technology underlying GenAI.

We’ve worked to enhance embedded security requirements for AI, particularly in monitoring and threat detection. Mandating audit logging at the platform level for all systems provides visibility into which resources are accessed, which models are used, and the type and sensitivity of the data accessed during interactions with our various Copilot offerings. This is crucial for all AI systems, including large language models (LLMs), small language models (SLMs), and multimodal models (MMMs) that focus on partial or total task completion.

Preventative measures are an equally important part of our journey to securing GenAI, and there’s no shortage of work that’s been done on this front. Our threat modeling standards and red teaming for GenAI systems have been revamped to help engineers and developers consider threats and vulnerabilities tied to AI. All systems involving GenAI must go through this process before being deployed to our data tenant for our employees to use. Our standards are under constant review and are updated based on the discoveries from our researchers and the Microsoft Security Response Center.

As GenAI and the types of risks and threats to models and systems are ever evolving, so too is our acceptance criteria for deploying AI to the enterprise. Here are some of the key points we take into consideration for our acceptance criteria:

Representatives from diverse disciplines: Our journey begins when a diverse team of experts. engineers, compliance teams, security SMEs, privacy advocates, and legal minds come together. Their collective wisdom ensures a holistic perspective.

Evaluate against enterprise standards: Every GenAI feature is subjected to rigorous scrutiny against our enterprise standards. This isn’t a rubber-stamp exercise, it’s a deep dive into ethical considerations, potential security, privacy, and AI risks, and alignment with the Responsible AI standard.

Risk assessment and management: The risk workflow starts in our system to amplify risk awareness and management across leadership teams. It’s more than a formality, it’s a structured process that keeps us accountable. Risks evolve, and so do our mitigation strategies, which is why we revisit the risk assessment of a feature every three to six months. Our assessments are a living guide that adapts to the landscape.

Phased deployment to companywide impact: We used a phased deployment to allow us to monitor, learn, and fine-tune.

Risk contingency planning: This isn’t about avoiding risks altogether; it’s about managing them. By addressing concerns upfront, we ensure that GenAI deployment is safe, secure, and aligned with our values.

By integrating AI into these existing processes and systems, we help ensure that our people are thinking about the potential risks and liabilities involved in GenAI throughout the development and release cycle—not only after a security event has occurred.

Improving data governance

While keeping Gen-AI models and AI systems safe from threats and harms is a top priority, this alone is insufficient for us to consider GenAI as secure and safe. We also see data governance as essential to prevent improper access, improper use, and to reduce the chance of data exfiltration—accidental or otherwise.

Empowering our employees with generative AI while keeping the company secure - Inside Track Blog (4)

At the heart of our data governance strategy is a multi-part expansion of our labeling and classification efforts, which applies at both the model level and the user level.

We set default labels across our platforms and the containers that store them using Purview Information Protection to ensure consistent and accurate tagging of sensitive data by default. We also employ auto-labeling policies where appropriate for confidential or highly confidential documents based on the information they contain. Data hygiene is an essential part of this framework; removing outdated records held in containers such as SharePoint reduces the risk of hallucinations or surfacing incorrect information and is something we reinforce through periodic attestation.

To prevent data exfiltration, we rely on our Purview Data Loss Prevention (DLP) policies to identify sensitive information types and automatically apply the appropriate policies at the controls at the application or service level (e.g. Microsoft 365), and Defender for Cloud Apps (DCA) to detect the use of risky websites and applications, and if necessary, block access to them. By combining these methods, we’re able to reduce the risk of sensitive data leaving our corporate perimeter—accidentally or otherwise.

Encouraging deep collaboration and sharing of best practices

So far, we’ve covered the management of GenAI technologies and how we ensure that these tools are safe and secure to use. Now it’s time to turn our attention to our people, the employees who work with and build with these GenAI systems.

We believe that anyone should be able to use GenAI tools confidently, knowing that they are safe and secure. But doing so requires essential knowledge, which might not be entirely self-evident. We’ve taken a three-pronged approach to solving this need with training, purpose-made resource materials, and opportunities for our people to develop their skills.

All employees and contract staff working at Microsoft must take our three-part mandatory companywide security training released throughout the year. The safe use of GenAI is comprehensively covered, including guidance on what AI tools to use and when to use them. Additionally, we’ve added extensive guidance and documentation to our internal digital security portal ranging from what to be mindful of when working with LLMs to the tools which are best suited to various tasks and projects.

With so many of our employees wanting to learn how to use GenAI tools, we’ve worked with teams across the company to create resources and venues where our employees can roll up their sleeves and work with AI hands-on in a way that’s safe and secure. Hackathons are a big deal at Microsoft, and we’ve partnered with several events including the main flagship event, which draws in more than 50,000 attendees. The Skill-Up AI presentation series hosted by our partners at the Microsoft Garage allows curious employees to learn the safe and secure way to use the latest GenAI technologies not only in their everyday work, but also in their creative endeavors. By integrating guidance into the learning journey, we help enable safe use of GenAI without stifling creativity.

Empowering our employees with generative AI while keeping the company secure - Inside Track Blog (5)

Here are our suggestions on how to empower your employees with GenAI while also keeping your company secure:

  • Understand the challenges and risks associated with adopting GenAI technology at your company. Good places to start are assessing the potential for data exposure, direct attacks on models and data sources, and the risks associated with Shadow AI.
  • Develop resources and guidance for your employees to educate them on the risks of using AI. Fostering collaboration and a strong community in support of secure use of GenAI.
  • If applicable, incorporate AI-focused security into existing development and release practices. Check out the Security Development Lifecycle (SDL) and the Responsible AI Impact Assessment template for inspiration.
  • Work to bolster your data governance policies. We strongly recommend starting with labeling and classification efforts, employing auto-labeling policies, and improving data hygiene. Consider tools such as Purview Data Loss Prevention (DLP) and Defender for Cloud Apps to prevent data exfiltration and limit improper data access.

Empowering our employees with generative AI while keeping the company secure - Inside Track Blog (6)

Learn more about our overall approach to GenAI governance internally here at Microsoft.

Empowering our employees with generative AI while keeping the company secure - Inside Track Blog (7)

  • Learn how we’re providing further transparency on our responsible AI efforts at Microsoft.
  • See how we’re staying ahead of threat actors in the age of AI.
  • Learn more about implementing a Zero Trust security model at Microsoft.
  • Start reducing your organization’s Shadow IT risk in 3 steps.

Empowering our employees with generative AI while keeping the company secure - Inside Track Blog (8)
Want more information? Email us and include a link to this story and we’ll get back to you.

Please share your feedback with us—take our survey and let us know what kind of content is most useful to you.

Tags: AI, security, Zero Trust

Empowering our employees with generative AI while keeping the company secure - Inside Track Blog (2024)

References

Top Articles
Latest Posts
Article information

Author: Errol Quitzon

Last Updated:

Views: 6157

Rating: 4.9 / 5 (59 voted)

Reviews: 82% of readers found this page helpful

Author information

Name: Errol Quitzon

Birthday: 1993-04-02

Address: 70604 Haley Lane, Port Weldonside, TN 99233-0942

Phone: +9665282866296

Job: Product Retail Agent

Hobby: Computer programming, Horseback riding, Hooping, Dance, Ice skating, Backpacking, Rafting

Introduction: My name is Errol Quitzon, I am a fair, cute, fancy, clean, attractive, sparkling, kind person who loves writing and wants to share my knowledge and understanding with you.