
Last updated: February 2024

The basics of Copilot in Bing   

Introduction  

In February 2023, Microsoft launched the new Bing, an AI-enhanced web search experience. It supports users by summarizing web search results and providing a chat experience. Users can also generate creative content, such as poems, jokes, stories, and, with Bing Image Creator, images. The new AI-enhanced Bing runs on a variety of advanced technologies from Microsoft and OpenAI, including GPT-4, a cutting-edge large language model (LLM), and DALL-E, a deep learning model that generates digital images from natural language descriptions, both from OpenAI. We worked with both models for months prior to public release to develop a customized set of capabilities and techniques that join this cutting-edge AI technology with web search in the new Bing. In November 2023, Microsoft renamed the new Bing to Copilot in Bing.

At Microsoft, we take our commitment to responsible AI seriously. The Copilot in Bing experience has been developed in line with Microsoft’s AI Principles, Microsoft’s Responsible AI Standard, and in partnership with responsible AI experts across the company, including Microsoft’s Office of Responsible AI, our engineering teams, Microsoft Research, and Aether. You can learn more about responsible AI at Microsoft here.  

In this document, we describe our approach to responsible AI for Copilot in Bing. Ahead of release, we adopted state-of-the-art methods to identify, measure, and mitigate potential risks and misuse of the system and to secure its benefits for users. As we have continued to evolve Copilot in Bing since first released, we have also continued to learn and improve our responsible AI efforts. This document will be updated periodically to communicate our evolving processes and methods.   

Key terms  

Copilot in Bing is an AI-enhanced web search experience. Because it runs on powerful new technology, we start by defining some key terms.

Classifiers. Machine learning models that help sort data into labeled classes or categories of information. In Copilot in Bing, one way in which we use classifiers is to help detect potentially harmful content submitted by users or generated by the system in order to mitigate generation of that content and misuse or abuse of the system.
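As a rough illustration of the role such a classifier plays, the sketch below scores text against harm categories and flags anything above a threshold. The category names, threshold, and scoring stub are hypothetical and stand in for Bing's production models, which are not public.

```python
# Minimal sketch of how a content classifier gates text in a pipeline.
# The categories, threshold, and scoring stub are hypothetical.

HARM_CATEGORIES = ["hate", "violence", "self_harm", "sexual"]
FLAG_THRESHOLD = 0.8

def score_text(text: str) -> dict:
    """Stand-in for a trained model that scores text per harm category."""
    return {category: 0.0 for category in HARM_CATEGORIES}  # toy: no risk

def is_flagged(text: str) -> bool:
    """Flag text when any harm category score meets the threshold."""
    return any(score >= FLAG_THRESHOLD for score in score_text(text).values())
```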

Grounding. Copilot in Bing is grounded in web search results when users are seeking information. This means that we center the response provided to a user’s query or prompt on high-ranking content from the web, and we provide links to websites so that users can learn more. Bing ranks web search content by heavily weighting features such as relevance, quality and credibility, and freshness. We describe these concepts in more detail in How Bing Delivers Search Results (see “Quality and Credibility” in “How Bing Ranks Search Results”).

Grounded and ungrounded responses. We consider grounded responses to be responses from Copilot in Bing in which statements are supported by information contained in input sources, such as web search results from the query or prompt, Bing’s knowledge base of fact-checked information, and, for the chat experience, recent conversational history from the chat. Ungrounded responses are those in which a statement is not grounded in those input sources.

Large language models (LLMs). In this context, large language models are AI models trained on large amounts of text data to predict words in sequences. LLMs can perform a variety of tasks, such as text generation, summarization, translation, classification, and more.

Metaprompt. A program that serves to guide the system’s behavior. Parts of the metaprompt help align system behavior with Microsoft AI Principles and user expectations. For example, the metaprompt may include a line such as “communicate in the user’s language of choice.”
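Purely as an illustration of the concept, a metaprompt can be thought of as a block of standing instructions prepended to every request. Only the quoted line above comes from this document; the remaining lines are invented for the sketch.

```python
# Hypothetical metaprompt; the production metaprompt is not public.
# The first instruction is the example quoted in the definition above.
METAPROMPT = """\
You are an AI-powered assistant for web search.
- Communicate in the user's language of choice.
- Ground factual statements in the provided web search results.
- Decline requests for harmful content.
"""
```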

Mitigation. A method or combination of methods designed to reduce potential risks that may arise from using AI-driven features in Copilot in Bing.

Prompt. Text, voice, images, and/or other enabled queries a user sends to Bing as an input to the model that powers new AI-driven generative experiences in Bing. For example, a user might input the following prompt:

“I am planning a trip for our anniversary in September. What are some places we can go that are within a 3-hour flight from London Heathrow?”

Query. The text a user sends to Bing from the search bar for web search. In this document, we distinguish a query (for web search) from a prompt (for AI-driven generative experiences in Bing). For example, a user might input the following query to search the web:

“travel destinations near London Heathrow airport”

Red team testing. Techniques used by experts to assess the limitations and vulnerabilities of a system and to test the effectiveness of planned mitigations. Red team testing and stress-testing are used to identify potential risks and are distinct from systematic measurement of risks.

Response. The text, image, charts, etc. that Copilot in Bing outputs in response to a prompt. Synonyms for “response” include “completion,” “generation,” and “answer.” For example, Bing may provide the following response to the prompt that we used in the definition of “prompt” above:

“Congratulations on your anniversary! There are many places you can go that are within a 3-hour flight from London Heathrow. According to one source, some of the weekend escapes within three hours by plane from London are Bilbao, Copenhagen, Reykjavik, and Stockholm [1, 2]. Another source suggests some of the best places to go in Europe in September are Krakow, Florence, and Lisbon [3, 4]. You can also check the direct flights from London Heathrow to various destinations and see what appeals to you [5]. I hope you have a wonderful trip!”

Intended uses and new AI experiences  

Copilot in Bing’s intended uses are to connect users with relevant search results, review results from across the web to find and summarize answers users are looking for, help users refine their research to get answers with a chat experience, and spark creativity by helping users create content. Copilot in Bing’s generative AI experiences below support the goal of being an AI-powered copilot for the web.  

Summarization. When users submit a search query on Copilot in Bing, the Bing system processes the query, conducts one or more web searches, and uses the top web search results to generate a summary of the information to present to users. These summaries include references to help users see and easily access the search results used to help ground the summary. Summaries can appear on the right side of the search results page and within the chat experience.  

Chat experience. In addition to summarization, users can chat with the Copilot in Bing system via text, image, or voice input, ask follow-up questions to clarify searches and find new information, and submit prompts to generate creative content. References are also included in the chat experience when Copilot in Bing is summarizing search results in the response.

Generation of creative content. In both the chat experience and on the search page, users can create poems, jokes, stories, images, and other content with help from Copilot in Bing. Images are created by Designer (formerly Bing Image Creator), and users can access the feature via the Designer homepage as well as the Copilot page.

How does Copilot in Bing work?  

With Copilot in Bing, we’ve developed an innovative approach to bring state-of-the-art LLMs to web search. When a user enters a prompt in Copilot in Bing, the prompt, recent conversation history, the metaprompt, and top search results are sent as inputs to the LLM. The model generates a response using the user’s prompt and recent conversation history to contextualize the request, the metaprompt to align responses with Microsoft AI Principles and user expectations, and the search results to ground responses in existing, high-ranking content from the web.   
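The data flow just described can be sketched roughly as follows. The function name, parameters, and formatting are assumptions made for illustration; they do not correspond to any actual Bing code or API.

```python
# Illustrative assembly of the four inputs described above: the user's
# prompt, recent conversation history, the metaprompt, and top search
# results. Names and formats are assumptions, not Bing's implementation.

def build_llm_input(prompt, history, metaprompt, search_results):
    """Combine the inputs into a single context for the model."""
    results_text = "\n".join(
        f"[{i + 1}] {result['title']}: {result['snippet']}"
        for i, result in enumerate(search_results)
    )
    return (
        f"{metaprompt}\n"
        f"Recent conversation:\n{history}\n\n"
        f"Top web search results:\n{results_text}\n\n"
        f"User prompt: {prompt}"
    )
```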

Responses are presented to users in several different formats, such as traditional links to web content, AI-generated summarizations, images, and chat responses. Summarizations and chat responses that rely on web search results will include references and a “Learn more” section below the responses, with links to search results that were used to ground the response. Users can click these links to learn more about a topic and the information used to ground the summary or chat response.    

In the Copilot experience, users can perform web searches conversationally by adding context to their prompt and interacting with the system responses to further specify their search interests. For example, a user might ask follow-up questions, request additional clarifying information, or respond to the system in a conversational way. In the chat experience, users can also select a response from pre-written suggestions, which we call chat suggestions. These buttons appear after each response from Copilot and provide suggested prompts to continue the conversation within the chat experience. Chat suggestions also appear alongside summarized content on the search results page as an entry point for the chat experience.  

Copilot in Bing also allows a user to create stories, poems, song lyrics, and images with help from Bing. When Copilot in Bing detects user intent to generate creative content (for example, a prompt that begins with “write me a …”), the system will, in most cases, generate content responsive to the user’s prompt. Similarly, when Copilot in Bing detects user intent to generate an image (for example, a prompt that begins with “draw me a …”), the system will, in most cases, generate an image responsive to the user’s prompt. In the Visual Search in Chat experience, users can provide an image taken by their camera, uploaded from their device, or linked from the web, and prompt Copilot in Bing to understand the context, interpret it, and answer questions about the image. Users can also upload files to Copilot to interpret, convert, process, or calculate information from them. In the Microsoft Designer experience that users can access through Copilot in Bing, users can not only generate images using prompts but also resize or restyle them, or make edits such as blurring the background or making colors more vivid.

Users with Microsoft accounts (MSA) now also have the option to subscribe to Copilot Pro, which offers an enhanced experience, including accelerated performance, faster AI image creation, and, soon, the ability to create your very own Copilot GPTs. Copilot Pro is currently available in a limited set of countries, and we plan to make it available in more markets soon.

In the Copilot experience, users can access Copilot GPTs. A Copilot GPT, like Designer GPT, is a custom version of Microsoft Copilot focused on a topic of particular interest, such as fitness, travel, or cooking, that can help turn vague or general ideas into more specific prompts with outputs including text and images. In Copilot, users can see the available Copilot GPTs, and users with Copilot Pro accounts will soon have access to Copilot GPT Builder, a feature that allows users to create and configure a custom Copilot GPT. The responsible AI mitigations described in this document for Copilot in Bing also apply to Copilot GPTs.

To learn more about how Copilot Pro and Copilot GPTs work, please visit here.

Copilot in Bing strives to provide diverse and comprehensive search results in keeping with its commitment to free and open access to information. At the same time, our product quality efforts include working to avoid inadvertently promoting potentially harmful content to users. More information on how Bing ranks content, including how it defines relevance and the quality and credibility of a webpage, is available in the “Bing Webmaster Guidelines.” More information on Bing’s content moderation principles is available in “How Bing delivers search results.”

In the Copilot in Windows experience, Copilot in Bing can work with the Windows operating system to provide Windows-specific skills such as changing the user's theme or background, and changing settings like audio, Bluetooth, and networking. These experiences allow the user to configure their settings and improve their user experience using natural language prompts to the LLM. Application-specific functionality can also be provided from third-party application plugins. These can automate repetitive tasks and achieve greater user efficiency. Since LLMs can occasionally make mistakes, appropriate user confirmation prompts are provided so that the user is the final arbiter of changes that can be made. 
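A confirmation gate of roughly the following shape keeps the user as the final arbiter. This is an illustrative pattern only; ask_user and apply_setting are hypothetical stand-ins for the Windows UI prompt and settings call, not real APIs.

```python
# Illustrative confirmation gate for LLM-proposed settings changes.
# ask_user and apply_setting are hypothetical stand-ins.

def confirm_and_apply(change: dict, ask_user, apply_setting) -> bool:
    """Apply an LLM-proposed change only after explicit user approval."""
    summary = f"Change {change['setting']} to {change['value']}?"
    if ask_user(summary):        # user is the final arbiter
        apply_setting(change)
        return True
    return False                 # nothing changes without consent
```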

Identifying, measuring, and mitigating risks

Like other transformational technologies, harnessing the benefits of AI is not risk-free, and a core part of Microsoft’s Responsible AI program is designed to identify potential risks, measure their propensity to occur, and build mitigations to address them. Guided by our AI Principles and our Responsible AI Standard, we sought to identify, measure, and mitigate potential risks and misuse of Copilot in Bing while securing the transformative and beneficial uses that the new experience provides. In the sections below we describe our iterative approach to identify, measure, and mitigate potential risks.   

At the model level, our work began with exploratory analyses of GPT-4 in the late summer of 2022. This included conducting extensive red team testing in collaboration with OpenAI. This testing was designed to assess how the latest technology would work without any additional safeguards applied to it. Our specific intention at this time was to produce harmful responses, surface potential avenues for misuse, and identify capabilities and limitations. Our combined learnings across OpenAI and Microsoft contributed to advances in model development and, for us at Microsoft, informed our understanding of risks and contributed to early mitigation strategies for Copilot in Bing.  

In addition to model-level red team testing, a multidisciplinary team of experts conducted numerous rounds of application-level red team testing on Copilot in Bing AI experiences before making them publicly available in our limited release preview. This process helped us better understand how the system could be exploited by adversarial actors and improve our mitigations. Non-adversarial stress-testers also extensively evaluated new Bing features for shortcomings and vulnerabilities. Post-release, the new AI experiences in Bing are integrated into the Bing engineering organization’s existing production measurement and testing infrastructure. For example, red team testers from different regions and backgrounds continuously and systematically attempt to compromise the system, and their findings are used to expand the datasets that Bing uses for improving the system.  

Red team testing and stress-testing can surface instances of specific risks, but in production users will have millions of different kinds of conversations with Copilot in Bing. Moreover, conversations are multi-turn and contextual, and identifying harmful content within a conversation is a complex task. To better understand and address the potential for risks in Copilot in Bing AI experiences, we developed additional responsible AI metrics specific to those new AI experiences for measuring potential risks like jailbreaks, harmful content, and ungrounded content. We also enabled measurement at scale through partially automated measurement pipelines. Each time the product changes, existing mitigations are updated, or new mitigations are proposed, we update our measurement pipelines to assess both product performance and the responsible AI metrics.  

As an illustrative example, the updated partially automated measurement pipeline for harmful content includes two major innovations: conversation simulation and automated, human-verified conversation annotation. First, responsible AI experts built templates to capture the structure and content of conversations that could result in different types of harmful content. These templates were then given to a conversational agent which interacted as a hypothetical user with Copilot in Bing, generating simulated conversations. To identify whether these simulated conversations contained harmful content, we took guidelines that are typically used by expert linguists to label data and modified them for use by GPT-4 to label conversations at scale, refining the guidelines until there was significant agreement between model-labeled conversations and human-labeled conversations. Finally, we used the model-labeled conversations to calculate a responsible AI metric that captures the effectiveness of Copilot in Bing at mitigating harmful content.   
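In skeleton form, and with hypothetical helper names standing in for the conversation simulator and the guideline-driven GPT-4 labeler, the pipeline might look like this:

```python
# Skeleton of the partially automated measurement pipeline described
# above. simulate_conversation and label_with_llm are hypothetical
# stand-ins for the simulator and the GPT-4-based annotator.

def measure_harm_rate(templates, simulate_conversation, label_with_llm):
    """Fraction of simulated conversations labeled as containing harm."""
    labels = []
    for template in templates:
        conversation = simulate_conversation(template)  # agent plays the user
        labels.append(label_with_llm(conversation))     # True if harmful
    return sum(labels) / len(labels) if labels else 0.0
```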

Our measurement pipelines enable us to rapidly perform measurement for potential risks at scale. As we identify new issues through the preview period and ongoing red team testing, we continue to expand the measurement sets to assess additional risks.  

As we identified potential risks and misuse through processes like red team testing and stress-testing and measured them with the innovative approaches described above, we developed additional mitigations to those used for traditional search. Below, we describe some of those mitigations. We will continue monitoring the Copilot in Bing AI experiences to improve product performance and mitigations.  

Phased release, continual evaluation. We are committed to learning and improving our responsible AI approach continuously as our technologies and user behavior evolve. Our incremental release strategy has been a core part of how we move our technology safely from the labs into the world, and we’re committed to a deliberate, thoughtful process to secure the benefits of Copilot in Bing. Limiting the number of people with access during the preview period has allowed us to discover how people use Copilot in Bing, including how people may misuse it, so we can try to mitigate emerging issues before broader release. For example, we require users to authenticate with their Microsoft account before accessing the full new Bing experience; unauthenticated users can access only a limited preview. These steps discourage abuse and help us take appropriate action, as necessary, in response to Code of Conduct violations. We are making changes to Copilot in Bing daily to improve product performance, improve existing mitigations, and implement new mitigations in response to our learnings during the preview period.

Grounding in search results. As noted above, Copilot in Bing is designed to provide responses supported by the information in web search results when users are seeking information. For example, the system is provided with text from the top search results and instructions via the metaprompt to ground its response. However, in summarizing content from the web, Copilot in Bing may include information in its response that is not present in its input sources; in other words, it may produce ungrounded results. Our early evaluations indicated that ungrounded results in chat may be more prevalent for certain types of prompts or topics than others, such as requests for mathematical calculations, financial or market information (for example, company earnings or stock performance data), and information like precise dates of events or specific prices of items. Users should always exercise caution and use their best judgment when viewing summarized search results, whether on the search results page or in the chat experience. We have taken several measures to mitigate the risk that users may over-rely on ungrounded generated content in summarization scenarios and chat experiences. For example, responses in Copilot in Bing that are based on search results include references to the source websites so users can verify the response and learn more. Users are also given explicit notice that they are interacting with an AI system and advised to check the web result source materials to help them use their best judgment.

AI-based classifiers and metaprompting to mitigate potential risks or misuse. The use of LLMs may produce problematic content that could lead to risks or misuse. Examples could include output related to self-harm, violence, graphic content, intellectual property, inaccurate information, hateful speech, or text that could relate to illegal activities. Classifiers and metaprompting are two examples of mitigations that have been implemented in Copilot in Bing to help reduce the risk of these types of content. Classifiers flag different types of potentially harmful content in search queries, chat prompts, or generated responses. Bing uses AI-based classifiers and content filters, which apply to all search results and relevant features; we designed additional prompt classifiers and content filters specifically to address possible risks raised by the Copilot in Bing features. Flags lead to potential mitigations, such as not returning generated content to the user, diverting the user to a different topic, or redirecting the user to traditional search. Metaprompting involves giving the model instructions that guide its behavior so that the system acts in accordance with Microsoft’s AI Principles and user expectations. For example, the metaprompt may include a line such as “communicate in the user’s language of choice.”
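The flag-to-mitigation routing just described might be sketched like this; the flag values and action names are invented for illustration and are not Bing's actual implementation:

```python
# Illustrative routing from classifier flags to the mitigations named
# above: withholding content, diverting topics, or redirecting to
# traditional search. Flag names and actions are invented.

def route_response(generated_text: str, classify):
    """Choose a mitigation based on the classifier's flag."""
    flag = classify(generated_text)   # e.g., None, "divert", "block"
    if flag == "block":
        return {"action": "withhold_response"}
    if flag == "divert":
        return {"action": "suggest_new_topic"}
    return {"action": "respond", "text": generated_text}
```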

Protecting privacy in Visual Search in Copilot in Bing. When users upload an image as part of their chat prompt, Copilot in Bing employs face-blurring technology before sending the image to the AI model. Face-blurring is used to protect the privacy of individuals in the image. The face-blurring technology relies on context clues to determine where to blur and will attempt to blur all faces. With the faces blurred, the AI model may compare the uploaded image with publicly available images on the Internet. As a result, for example, Copilot in Bing may be able to identify a famous basketball player from a photo of that player on a basketball court by creating a numerical representation that reflects the player’s jersey number, jersey color, the presence of a basketball hoop, and so on. Copilot in Bing does not store numerical representations of people from uploaded images and does not share them with third parties. Copilot in Bing uses numerical representations of the images that users upload only for the purpose of responding to users’ prompts; they are deleted within 30 days after the chat ends.

If the user asks Copilot in Bing for information about an uploaded image, chat responses may reflect the impact of face-blurring on the model's ability to provide information about the uploaded image. For example, Copilot in Bing may describe someone as having a blurred face.    
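The ordering guarantee described above (blur first, then send to the model) reduces to a sketch like the following, where both helpers are hypothetical stand-ins rather than real functions:

```python
# Illustrative ordering: blurring runs before the image reaches the
# model. blur_faces and send_to_model are hypothetical stand-ins.

def handle_uploaded_image(image_bytes: bytes, blur_faces, send_to_model):
    """Ensure the model only ever sees the blurred image."""
    blurred = blur_faces(image_bytes)  # attempts to blur all faces
    return send_to_model(blurred)      # unblurred bytes never leave here
```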

Limiting conversational drift. During the preview period we learned that very long chat sessions can result in responses that are repetitive, unhelpful, or inconsistent with Copilot in Bing’s intended tone. To address this conversational drift, we limited the number of turns (exchanges which contain both a user question and a reply from Copilot in Bing) per chat session. We continue to evaluate additional approaches to mitigate this issue.  
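As a sketch of the turn-limit mitigation, with an invented cap (Bing's actual limit has varied and is not stated in this document):

```python
# Illustrative turn limit. MAX_TURNS is an invented value; one turn is
# a user question plus a Copilot in Bing reply, per the definition above.
MAX_TURNS = 30

def can_continue(turns_so_far: int) -> bool:
    """Allow another exchange only while under the session limit."""
    return turns_so_far < MAX_TURNS
```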

Prompt enrichment. In some cases, a user's prompt may be ambiguous. When this happens, Copilot in Bing may use the LLM to help build out more details in the prompt to help ensure users get the response they are seeking. Such prompt enrichment does not rely on any knowledge of the user or their prior searches, but instead on the AI model. These revised queries will be visible in the user's chat history and, like other searches, can be deleted using in-product controls.  
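That enrichment step might be sketched as below; both helper callables are hypothetical stand-ins, and the key property is that enrichment draws only on the model, not on user profiles or prior searches:

```python
# Illustrative prompt enrichment: ambiguous prompts are expanded by the
# model itself, with no reliance on user data or prior searches.
# is_ambiguous and enrich_with_llm are hypothetical stand-ins.

def maybe_enrich(prompt: str, is_ambiguous, enrich_with_llm) -> str:
    """Return an enriched prompt only when the original is ambiguous."""
    if is_ambiguous(prompt):
        return enrich_with_llm(prompt)  # shown later in chat history
    return prompt
```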

User-centered design and user experience interventions. User-centered design and user experiences are an essential aspect of Microsoft’s approach to responsible AI. The goal is to root product design in the needs and expectations of users. As users interact with Copilot in Bing for the first time, we offer various touchpoints designed to help them understand the capabilities of the system, disclose to them that Copilot in Bing is powered by AI, and communicate limitations. The experience is designed in this way to help users get the most out of Copilot in Bing and minimize the risk of overreliance. Elements of the experience also help users better understand Copilot in Bing and their interactions with it. These include chat suggestions specific to responsible AI (for example, “How does Bing use AI?” or “Why won’t Copilot in Bing respond on some topics?”), explanations of limitations, ways users can learn more about how the system works and report feedback, and easily navigable references that appear in responses to show users the results and pages in which responses are grounded.

AI disclosure. Copilot in Bing provides several touchpoints for meaningful AI disclosure where users are notified that they are interacting with an AI system as well as opportunities to learn more about Copilot in Bing. Empowering users with this knowledge can help them avoid over-relying on AI and learn about the system’s strengths and limitations.  

Media provenance. Microsoft Designer has enabled the “Content Credentials” feature, which uses cryptographic methods to mark the source, or “provenance,” of all AI-generated images created on Designer. The invisible digital watermark feature shows the source, time, and date of original creation, and this information cannot be altered. The technology employs standards set by the Coalition for Content Provenance and Authenticity (C2PA) to add an extra layer of trust and transparency for AI-generated images. Microsoft is a co-founder of C2PA and has contributed core digital content provenance technology.

Terms of Use and Code of Conduct. These resources govern use of Copilot in Bing. Users should abide by the Terms of Use and Code of Conduct, which, among other things, inform them of permissible and impermissible uses and the consequences of violating terms. The Terms of Use also provide additional disclosures and serve as a handy reference for users to learn about Copilot in Bing.

Operations and rapid response. We also use Copilot in Bing’s ongoing monitoring and operational processes to address signals and reports indicating possible misuse or violations of the Terms of Use or Code of Conduct.

Feedback, monitoring, and oversight. The Copilot in Bing experience builds on existing tooling that allows users to submit feedback and report concerns, which are reviewed by Microsoft’s operations teams. Bing’s operational processes have also expanded to accommodate the features within the Copilot in Bing experience, for example, updating the Report a Concern page to include the new types of content that users generate with the help of the model.

Our approach to identifying, measuring, and mitigating risks will continue to evolve as we learn more, and we are already making improvements based on feedback gathered during the preview period.     

Automated content detection. When users upload images as part of their chat prompt, Copilot in Bing deploys tools to detect child sexual exploitation and abuse imagery (CSEAI), most notably PhotoDNA hash-matching technology. Microsoft developed PhotoDNA to help find duplicates of known CSEAI. Microsoft reports all apparent CSEAI to the National Center for Missing and Exploited Children (NCMEC), as required by US law. When users upload files to analyze or process, Copilot deploys automated scanning to detect content that could lead to risks or misuse, such as text that could relate to illegal activities or malicious code.
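PhotoDNA itself is proprietary perceptual-hash technology, so the sketch below only illustrates the general match-against-known-hashes pattern, using an exact cryptographic hash as a stand-in:

```python
# Generic hash-matching sketch. This is NOT PhotoDNA: PhotoDNA uses a
# proprietary perceptual hash robust to resizing and edits, while this
# toy uses an exact SHA-256 digest purely for illustration.

import hashlib

KNOWN_HASHES: set = set()  # would be loaded from a list of known hashes

def matches_known_content(image_bytes: bytes) -> bool:
    """Check whether the image's digest appears in the known set."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES
```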

Protecting privacy  

Microsoft’s longstanding belief that privacy is a fundamental human right has informed every stage of the development and deployment of the Copilot in Bing experience. Our commitments to protecting the privacy of all users, including by providing individuals with transparency and control over their data and integrating privacy by design through data minimization and purpose limitation, are foundational to Copilot in Bing. As we evolve our approach to providing Copilot in Bing’s generative AI experiences, we will continually explore how best to protect privacy, and this document will be updated as we do so. More information about how Microsoft protects our users’ privacy is available in the Microsoft Privacy Statement.

In the Copilot in Windows experience, Windows skills may, as part of their functionality, share user information with the chat conversation. This is subject to user approval, and UI prompts are displayed to confirm user intent before user information is shared with the chat conversation.

Microsoft continues to consider the needs of children and young people as a part of the risk assessments of new generative AI features in Copilot in Bing. Users with Microsoft child accounts that identify them as under 13 years of age, or under the age otherwise specified by local law, cannot sign in to access the full new Bing experience.

As described above, for all users we have implemented safeguards that mitigate potentially harmful content. In Copilot in Bing, results are set to Bing SafeSearch’s Strict Mode, which has the highest level of safety protection in main Bing search, preventing users, including teen users, from being exposed to potentially harmful content. In addition to the information we have provided in this document and in our FAQs regarding chat features, more information about how Copilot in Bing works to avoid responding with unexpected offensive content in search results is available here.

Microsoft has committed to not deliver personalized advertising based on online behavior to children whose birthdate in their Microsoft account identifies them as under 18 years of age. This important protection will extend to ads in Copilot in Bing features. Users may see contextual ads based on the query or prompt used to interact with Bing.  

To unlock the transformative potential of generative AI, we must build trust in the technology through empowering individuals to understand how their data is used and providing them with meaningful choices and controls over their data. Copilot in Bing is designed to prioritize human agency, through providing information on how the product works as well as its limitations, and through extending our robust consumer choices and controls to Copilot in Bing features.   

The Microsoft Privacy Statement provides information about our transparent privacy practices for protecting our customers, and it sets out information on the controls that give our users the ability to view and manage their personal data. To help ensure that users have the information they need when they are interacting with Bing’s new conversational features, in-product disclosures inform users that they are engaging with an AI product, and we provide links to further FAQs and explanations about how these features work. Microsoft will continue to listen to user feedback and will add further detail on Bing’s conversational features as appropriate to support understanding of the way the product works.   

Microsoft also provides its users with robust tools to exercise their rights over their personal data. For data that is collected by Copilot in Bing, including through user queries and prompts, the Microsoft Privacy Dashboard provides authenticated (signed-in) users with tools to exercise their data subject rights, including by providing users with the ability to view, export, and delete stored conversation history. Microsoft continues to take feedback from users on how they want to manage their new Bing experience, including through the use of in-context data management experiences.

Copilot in Bing also honors requests under the European right to be forgotten, following the process that Microsoft developed and refined for Bing’s traditional search functionality. All users can report concerns regarding generated content and responses here, and our European users can use this form to submit requests to block search results in Europe under the right to be forgotten.   

Copilot in Bing will honor users’ privacy choices, including those that have previously been made in Bing, such as consent for data collection and use that is requested through cookie banners and controls available in the Microsoft Privacy Dashboard. To help enable user autonomy and agency in making informed decisions, we have used our internal review process to carefully examine how choices are presented to users. 

In addition to controls available via the Microsoft Privacy Dashboard, which allow users to view, export, and delete their search history, including components of their Chat history, authenticated users who have enabled the Chat history feature in the product can view, access, and download chat history through in-product controls. Users may clear specific chats from Chat history or turn off Chat history functionality entirely at any time by visiting the Bing Settings page. Users may also choose whether to allow personalization for a more tailored experience with personalized answers, and may opt in or out of personalization at any time in Chat Settings on the Bing Settings page. Clearing specific chats from Chat history prevents them from being used for personalization.
 
More information on Chat history and personalization is available in the Copilot in Bing FAQs.

Copilot in Bing was built with privacy in mind, so that personal data is collected and used only as needed and is retained no longer than necessary. As mentioned above, the Visual Search feature in Copilot in Bing deploys a mechanism that blurs faces in images at the time of upload, so that facial images are not further processed or stored. More information about the personal data that Bing collects, how it is used, and how it is stored and deleted is available in the Microsoft Privacy Statement, which also provides information about Bing’s new chat features.

Copilot in Bing has data retention and deletion policies to help ensure that personal data collected through Bing’s chat features is only kept as long as needed.   

We will continue to learn and evolve our approach in providing Copilot in Bing, and as we do so we will continue to work across disciplines to align our AI innovation with human values and fundamental rights, including protecting young users and privacy.   

Copilot with commercial data protection 

Copilot with commercial data protection, formerly known as Bing Chat Enterprise (“BCE”), was released by Microsoft in public preview in July 2023 as a free add-on for certain M365 customers. Copilot with commercial data protection is an AI-enhanced web search experience for enterprise end users.

As with Copilot in Bing, when a Copilot with commercial data protection end user enters a prompt into the interface, the prompt, the immediate conversation, top search results, and the metaprompt are sent as inputs to the LLM. The model generates a response using the prompt and immediate conversation history to contextualize the request, the metaprompt to align responses with Microsoft AI Principles and user expectations, and the search results to ground responses in existing, high-ranking content from the web. This works the same way as Copilot in Bing, as described above in this document, except that Copilot with commercial data protection relies only on immediate conversation history (not recent conversation history), because stored chat history is not currently a supported feature. Designer and Visual Search are now available in this version.

Like other transformational technologies, harnessing the benefits of AI is not risk-free, and a core part of Microsoft’s Responsible AI program is designed to identify potential risks, measure their propensity to occur, and build mitigations to address them. Again, the above description of Microsoft’s efforts to identify, measure, and mitigate potential risks for Copilot in Bing also applies to this version, with some clarifications on mitigations described below:

Phased release, continual evaluation. Much like with Copilot in Bing, for Copilot with commercial data protection we have also taken an incremental release approach. On July 18, 2023, Copilot with commercial data protection became available as a free preview for eligible enterprise customers with specific M365 accounts to turn on for their enterprise end users. Thirty (30) days after notifying eligible enterprise customers, Copilot with commercial data protection became “default on” for those same customers. Copilot with commercial data protection has also since become available to specific educational faculty M365 accounts. Copilot with commercial data protection became generally available to certain enterprise customers on December 1, 2023. In the future, we plan to expand access to Copilot with commercial data protection to more Microsoft Entra ID users.

Terms of Use and Code of Conduct. End users of Copilot with commercial data protection must abide by the End User Terms of Use. These Terms of Use inform end users of permissible and impermissible uses and the consequences of violating terms.  

Operations and rapid response. We also use Copilot in Bing’s ongoing monitoring and operational processes to address signals and reports indicating possible misuse or violations of the End User Terms of Use.

Feedback, monitoring, and oversight. Copilot with commercial data protection uses the same tooling as Copilot in Bing for users to submit feedback and report concerns, which are reviewed by Microsoft’s operations teams. Copilot in Bing’s operational processes have also expanded to accommodate the features within Copilot with commercial data protection experiences, for example, updating the Report a Concern page to include the new types of content that users generate with the help of the model.  

To help ensure that end users have the information they need when they are interacting with Copilot with commercial data protection, there is product documentation available at the bottom of this document, including the FAQ and Learn More pages. 

Prompts and responses generated by end users in Copilot with commercial data protection are processed and stored in alignment with enterprise data handling standards. The Copilot with commercial data protection offering is currently available only to enterprise customers and their authenticated adult end users, so we do not anticipate that children or young people will be end users of Copilot with commercial data protection at this time. Additionally, Copilot with commercial data protection does not provide any behaviorally targeted advertising to end users; any ads that are displayed are only contextually relevant.

Learn more

This document is part of a broader effort at Microsoft to put our AI principles into practice. To find out more, see:

Microsoft's Approach to Responsible AI

Microsoft’s Responsible AI Standard   

Microsoft’s Responsible AI resources  

Microsoft Azure Learning courses on Responsible AI  

About this document  

© 2023 Microsoft. All rights reserved. This document is provided "as-is" and for informational purposes only. Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. You bear the risk of using it. Some examples are for illustration only and are fictitious. No real association is intended or inferred.  
