Summary
Copilot+ PCs are a new class of Windows PCs designed for the AI generation. These PCs run Windows 11 and come equipped with dedicated AI hardware to unlock industry-leading AI features right on your device.
Copilot+ PCs are the fastest, most intelligent Windows PCs ever. All Copilot+ PCs are Windows 11 PCs, but not all Windows 11 PCs are Copilot+ PCs.
To learn exactly what a Copilot+ PC is and how it differs from a traditional Windows PC, see Copilot+ PCs vs. Windows PCs: What’s the difference?
What is an execution provider?
An execution provider (EP) is a component that enables hardware-specific optimizations for machine learning (ML) operations. Execution providers abstract different compute backends (CPU, GPU, specialized accelerators) and provide a unified interface for graph partitioning, kernel registration, and operator execution. To learn more, see the ONNX Runtime docs.
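In ONNX Runtime, you choose an EP when you create an inference session. The following minimal sketch uses the ONNX Runtime Python API; the model path is a placeholder, and the provider list is just one possible priority order.

```python
import onnxruntime as ort

# List the EPs available in this ONNX Runtime build.
print(ort.get_available_providers())

# Create a session with an EP priority list. ONNX Runtime partitions the
# graph, assigns each node to the first EP that supports it, and falls
# back to the CPU EP for anything left over.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["QNNExecutionProvider", "CPUExecutionProvider"],
)
```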
MIGraphX (AMD)
The MIGraphX execution provider uses AMD’s deep learning graph optimization engine to accelerate ONNX models on AMD GPUs.
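As a minimal sketch (placeholder model path), the MIGraphX EP is requested through the standard providers list, and session.get_providers() confirms what was actually assigned:

```python
import onnxruntime as ort

# Prefer the MIGraphX EP on AMD GPUs, with CPU fallback.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["MIGraphXExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # shows which EPs the session is using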
NvTensorRtRtx (NVIDIA)
The NVIDIA TensorRT-RTX Execution Provider (EP) is an inference deployment solution designed specifically for NVIDIA RTX GPUs. It is optimized for client-centric use cases.
TensorRT RTX EP provides the following benefits:
- Small package footprint: Optimized resource usage on end-user systems at just under 200 MB.
- Faster model compile and load times: Leverages just-in-time compilation techniques to build RTX hardware-optimized engines on end-user devices in seconds.
- Portability: Seamlessly use cached models across multiple RTX GPUs.
The TensorRT RTX EP leverages NVIDIA’s new deep learning inference engine, TensorRT for RTX, to accelerate ONNX models on RTX GPUs. Microsoft and NVIDIA collaborated closely to integrate the TensorRT RTX EP with ONNX Runtime.
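If your ONNX Runtime build includes this EP, it is requested like any other provider. A sketch follows; the registration name NvTensorRTRTXExecutionProvider is an assumption here, so confirm it against ort.get_available_providers() on your build, and the model path is a placeholder.

```python
import onnxruntime as ort

# Assumed registration name for the TensorRT for RTX EP; verify with
# ort.get_available_providers() before relying on it.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["NvTensorRTRTXExecutionProvider", "CPUExecutionProvider"],
)
```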
OpenVINO (Intel)
The Intel OpenVINO™ Execution Provider accelerates ONNX models on Intel CPUs, GPUs, and NPUs. Refer to the System Requirements page for details on the supported Intel hardware.
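A minimal sketch of targeting a specific Intel device through the OpenVINO EP's device_type provider option (the model path is a placeholder; valid device types depend on your hardware and OpenVINO version):

```python
import onnxruntime as ort

# Route execution to the Intel NPU via the OpenVINO EP; "CPU" and "GPU"
# are other common device_type values.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{"device_type": "NPU"}],
)
```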
QNN (Qualcomm)
The QNN Execution Provider for ONNX Runtime enables hardware-accelerated execution on Qualcomm chipsets. It uses the Qualcomm AI Engine Direct SDK (QNN SDK) to construct a QNN graph from an ONNX model, which can then be executed by a supported accelerator backend library. The ONNX Runtime QNN Execution Provider can be used on Android and Windows devices with Qualcomm Snapdragon SoCs.
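The accelerator backend library is chosen through the EP's backend_path option. A minimal sketch for Windows on Snapdragon (placeholder model path):

```python
import onnxruntime as ort

# QnnHtp.dll targets the Hexagon NPU (HTP) backend on Windows;
# QnnCpu.dll is the reference CPU backend.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["QNNExecutionProvider"],
    provider_options=[{"backend_path": "QnnHtp.dll"}],
)
```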
VitisAI (AMD)
Vitis AI is AMD’s development stack for hardware-accelerated AI inference on AMD platforms, including Ryzen AI, AMD Adaptable SoCs and Alveo Data Center Acceleration Cards.
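A hedged sketch of selecting the Vitis AI EP from ONNX Runtime (placeholder model path; a Ryzen AI installation typically supplies any required EP configuration):

```python
import onnxruntime as ort

# Prefer the Vitis AI EP (e.g., the Ryzen AI NPU), with CPU fallback.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],
)
```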