
What is Copilot in Viva Engage?

Copilot in Engage provides access to large language model (LLM) technology, which you can ask to perform language-based tasks for you, much as you might ask a person.

Copilot in Engage learns from your communities, campaigns, and interests to make personalized suggestions of what you might like to post and where you might benefit from engaging.

What can Copilot in Viva Engage do?

Copilot in Engage can help you:

  • Brainstorm ideas of what to post, where to post, and points to include in your post.

  • Get a template, draft content for a post, and make edits that improve writing quality and value.

  • Get feedback and advice about your post.

What is the intended use of Copilot in Viva Engage?

Copilot empowers you with the information and collaboration support you need to use Viva Engage to achieve your professional goals.

How was Copilot in Viva Engage evaluated? What metrics are used to measure performance?

We measured the performance of Copilot in Engage, which is powered by GPT, using these key metrics:

  • Precision and Recall: These metrics were crucial for evaluating the quality of suggestions. Precision quantified how many of the AI-generated suggestions were relevant, while recall determined how many of the relevant suggestions were retrieved.

  • User Satisfaction: To gauge user satisfaction, we conducted user surveys and collected feedback to assess how satisfied users were with the AI system's assistance.

  • Generalizability: To assess how well the system's results generalized across different use cases, we tested Copilot in Engage on a diverse set of data and tasks. This involved evaluating the system's performance on a range of scenarios and domains that were not part of the initial training data.
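To make the precision and recall definitions above concrete, here is a minimal sketch in Python. This is purely illustrative and not Microsoft's evaluation code; the function name and sample suggestion IDs are hypothetical.

```python
# Illustrative sketch: computing precision and recall for a batch of
# AI-generated suggestions against a hand-labeled set of relevant ones.

def precision_recall(generated, relevant):
    """Return (precision, recall) for two sets of suggestion IDs.

    precision = relevant suggestions generated / all suggestions generated
    recall    = relevant suggestions generated / all relevant suggestions
    """
    true_positives = len(generated & relevant)
    precision = true_positives / len(generated) if generated else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: 4 suggestions generated, 3 of them relevant,
# out of 5 relevant suggestions overall.
generated = {"s1", "s2", "s3", "s4"}
relevant = {"s1", "s2", "s3", "s5", "s6"}
p, r = precision_recall(generated, relevant)
print(p, r)  # 0.75 precision, 0.6 recall
```

A system can score high on one metric and low on the other, which is why both were tracked together when evaluating suggestion quality.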

We conducted red teaming exercises, inviting external experts and testers to find vulnerabilities or biases in the system. This process helped us identify potential issues and improve the system's robustness. Our evaluation process is ongoing, with continuous updates and improvements based on user feedback. By employing a combination of internal evaluation, user feedback, and external testing, we aim to ensure the accuracy, fairness, and generalizability of Copilot in Engage powered by GPT.

What are the limitations of Copilot in Viva Engage? How can users minimize the impact of these limitations when using the system?

The underlying model powering Copilot is trained on data from before 2021, so it can't provide relevant responses to questions that require knowledge of events after 2021.

What operational factors and settings allow for effective and responsible use of Copilot in Viva Engage?

Copilot in Engage is designed with a robust filter system that proactively blocks offensive language and prevents generating suggestions in sensitive contexts. We continue to improve its ability to detect and remove offensive content generated by Copilot in Engage and to address biased, discriminatory, or abusive outputs.

We encourage you to report any offensive suggestions you encounter while using Copilot in Engage.

Learn more

Get started with Copilot in Viva Engage

Collaborate with Copilot in Engage on posts and articles

Participate more with suggestions from Copilot in Viva Engage

Set up Copilot in Viva Engage | Microsoft Learn
