Artificial intelligence (AI) tools are revolutionizing how we learn, create, and work. As AI plays a larger role in our daily lives, it's more important than ever that this technology is built, deployed, and used responsibly.
This is why we are committed to creating AI systems that are human-centered and research-driven for the benefit of everyone. For the past six years, we've been working to ensure that our AI systems are responsible by design.
If you’re unfamiliar with our perspective on responsible AI, or not quite sure what it means for you, read on to learn more.
Responsible AI at Microsoft
Microsoft believes that when we create technologies that can change the world, we must also ensure that the technology is used responsibly. We are committed to creating responsible AI by design. Our work is guided by a core set of six principles and we are putting those into practice across the company to develop and deploy AI that will have a positive impact on society. We take a cross-company approach through cutting-edge research, best-of-breed engineering systems, and excellence in policy and governance.
Read more about our approach to responsible AI in this blog from our Chief Responsible AI Officer, Natasha Crampton.
Responsible AI Principles
Our six responsible AI principles lay the foundation for all of our AI efforts across the company.
Fairness – Microsoft AI systems are designed to provide a consistent quality of service and allocation of resources for everyone, and to minimize the potential for stereotyping based on demographics, culture, or other factors.
Reliability and safety – Microsoft AI systems are developed to operate reliably and safely, consistent with our design values and principles, so that they do not cause harm in the world.
Privacy and security – With an increased reliance on data to develop and train AI systems, we've established requirements to ensure that data is not leaked or improperly disclosed.
Inclusiveness – Microsoft’s AI systems should empower and engage communities around the world. To do this, we partner with underserved minority communities to plan, test, and build AI systems.
Transparency – People who create AI systems should be open about how and why they are using AI, and about the limitations of those systems. People who use or are affected by AI systems should also be able to understand their behavior.
Accountability – Everyone is accountable for how technology impacts the world. For Microsoft, this means we are consistently enacting our principles and taking them into account in everything that we do.
Microsoft's Responsible AI Standard
The Responsible AI Standard is the set of company-wide rules that help to ensure we are developing and deploying AI technologies in a manner that is consistent with our AI principles.
We are integrating strong internal governance practices across the company, most recently by updating our Responsible AI Standard. With this update, we sought to improve on our earlier Standard, released in the fall of 2019, making it more concrete and actionable, and easier to integrate into existing engineering practices.
We've taken a thoughtful, cross-discipline approach to this work, consulting experts within and beyond Microsoft to ensure we are being deliberately inclusive and forward-thinking. We believe our Responsible AI Standard is a durable framework for the maturing practice of responsible AI and evolving regulatory requirements.
Note: To view the complete guide, see Microsoft Responsible AI Standard.
Use of data
Our approach to privacy and data protection is grounded in our belief that customers own their own data and in our commitment to build every product and service with privacy by design from the ground up. We've defined clear privacy principles that include a commitment to be transparent in our privacy practices, to offer meaningful privacy choices, and to always responsibly manage the data we store and process.
To learn more, see responsible AI in action at Microsoft.