Microsoft Azure integration
- Port 443 is used for HPC communication with Microsoft Azure nodes. In SP3, most HPC communication between the head node and Azure nodes, including communication related to deployment, job scheduling, service-oriented architecture (SOA) brokering, and file staging, is performed over port 443. This simplifies firewall settings when you add Azure nodes to your HPC cluster. Remote Desktop connections to your Azure nodes are not affected and continue to use port 3389. For more information about how to configure the network firewall, visit the following Microsoft website:
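Before you add Azure nodes, you can confirm that outbound TCP connections on port 443 are permitted by the firewall. The following is a minimal Python sketch of such a connectivity check; the host name in the usage comment is a placeholder, not a real deployment.

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder host name, not a real deployment):
# port_reachable("myservice.cloudapp.net", 443)
```

A result of False usually indicates that an intermediate firewall is blocking outbound traffic on that port.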
- Security updates in the use of management certificates for Azure nodes. SP3 supports a best-practices configuration of the Microsoft Azure management certificate on the head node and on client computers that need to connect to Microsoft Azure. Each cluster administrator now configures the management certificate and its private key in the Current User\Personal store. This restricts access to the private key and provides a more secure configuration than in earlier versions of Windows HPC Server 2008 R2, where the management certificate and its private key were configured in the computer's Trusted Root Certification Authorities store and were therefore available to any user on the computer. The certificate configuration used in Microsoft HPC Pack 2008 R2 Service Pack 2 (SP2) is still supported. However, we recommend that you move your management certificates to the stores now supported in SP3. For more information about how to configure the management certificate for Microsoft Azure, visit the following Microsoft website:
- Configure task-level preemption to minimize unnecessary task cancellation. In SP3, you can configure the immediate preemption policy so that preemption happens at the task level instead of at the job level. Preemption allows higher-priority jobs to take resources away from lower-priority jobs. With the default immediate preemption settings, the scheduler cancels a whole job if any of its resources are needed for a higher-priority job. When you enable task-level preemption, the scheduler cancels individual tasks instead. For example, if a Normal-priority job is running 100 tasks on one core each, and a High-priority job that requires 10 cores is submitted, task-level preemption cancels 10 tasks instead of canceling the whole job. This option can improve job throughput by reducing the rework that must be done because of preemption. For more information about policy configuration, visit the following Microsoft website:
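The difference between the two policies can be illustrated with a small sketch. This is not the actual HPC Job Scheduler algorithm, only an illustrative model in which each running task occupies one core, using the 100-task example from the text.

```python
def preempt(jobs, cores_needed, task_level):
    """Free `cores_needed` cores by preempting lower-priority work.

    jobs: list of dicts like {"priority": int, "tasks": int}, where each
    running task occupies one core. Returns the number of tasks canceled.
    Illustrative sketch only, not the actual scheduler implementation.
    """
    canceled = 0
    # Preempt the lowest-priority jobs first.
    for job in sorted(jobs, key=lambda j: j["priority"]):
        if cores_needed <= 0:
            break
        if task_level:
            # Task-level preemption: cancel only as many tasks as needed.
            take = min(job["tasks"], cores_needed)
        else:
            # Default immediate preemption: cancel the whole job.
            take = job["tasks"]
        job["tasks"] -= take
        canceled += take
        cores_needed -= take
    return canceled

# A Normal-priority job running 100 one-core tasks; a High-priority job needs 10 cores.
print(preempt([{"priority": 2, "tasks": 100}], 10, task_level=True))   # 10 tasks canceled
print(preempt([{"priority": 2, "tasks": 100}], 10, task_level=False))  # 100 tasks canceled
```

The 90 tasks that keep running under task-level preemption are exactly the rework that the job-level policy would have thrown away.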
- Harvest cycles from servers on the network. In SP3, you can harvest additional cycles from servers on the network that are running the Windows Server 2008 R2 operating system. These servers are not dedicated compute nodes and can still be used for other tasks; the process is much like adding workstation nodes to the cluster. The servers can automatically become available to run cluster jobs according to configurable usage thresholds or a weekly availability policy (for example, every night on weekdays and all day on weekends), or they can be brought online manually.
Note The edition previously known as HPC Pack 2008 R2 for Workstations has been renamed HPC Pack 2008 R2 for Cycle Harvesting. If you have already deployed HPC Pack 2008 R2 for Workstations, you can continue to use it, and updates and service packs still apply to it. Installation on a server requires new media: either download the new disc from your volume license website, or download the SP3 integration package and follow the instructions that accompany it.
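A weekly availability policy like the one described above (weeknights and weekends) amounts to a simple schedule check. The following sketch models it in Python; the 18:00 to 08:00 window is an assumption for illustration, since the actual hours are configured per cluster.

```python
from datetime import datetime

def node_available(now: datetime) -> bool:
    """Return True if a cycle-harvesting server may run cluster jobs.

    Assumed policy: all day on weekends, and 18:00-08:00 on weekdays.
    """
    weekday = now.weekday()  # Monday=0 .. Sunday=6
    if weekday >= 5:         # Saturday or Sunday: available all day
        return True
    return now.hour >= 18 or now.hour < 8  # weeknights only

print(node_available(datetime(2011, 11, 14, 22, 0)))  # Monday 22:00 -> True
print(node_available(datetime(2011, 11, 14, 12, 0)))  # Monday noon  -> False
print(node_available(datetime(2011, 11, 12, 12, 0)))  # Saturday     -> True
```

Usage thresholds (for example, local CPU or keyboard activity) would add a second condition on top of this schedule before a server is brought online.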
- Obtain node information through the HTTP web service APIs. SP3 adds new APIs to the HTTP web service that can be used to obtain information about nodes and node groups in the cluster. For more information about how to use the web service, visit the following Microsoft website:
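As a rough sketch of calling the HTTP web service from a client, the following Python code builds and issues a node-listing request. The URL path shown here is an assumption based on the general pattern of the HPC web service REST interface; check the web service reference for your installation before relying on it, and note that a real call requires valid credentials and a trusted TLS certificate.

```python
from urllib.parse import quote
import urllib.request

def nodes_url(head_node: str, cluster: str) -> str:
    """Build the URL for the node-listing operation.

    The /WindowsHpc/<cluster>/Nodes path is an assumed endpoint pattern,
    not a verified one; consult the web service documentation.
    """
    return f"https://{head_node}/WindowsHpc/{quote(cluster)}/Nodes"

def list_nodes(head_node: str, cluster: str) -> bytes:
    """Fetch the node list (requires credentials and TLS trust in practice)."""
    req = urllib.request.Request(nodes_url(head_node, cluster))
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# "headnode.contoso.com" and "MyCluster" are placeholder names.
print(nodes_url("headnode.contoso.com", "MyCluster"))
```

Because the service listens on port 443, these requests pass through the same simplified firewall configuration described earlier for Azure nodes.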
- Package your HPC applications as a Microsoft Azure service. The Microsoft Azure HPC Scheduler SDK enables developers to define a Microsoft Azure deployment that includes built-in job scheduling and resource management; runtime support for MPI, SOA, parametric sweep, and LINQ to HPC applications; web-based job submission interfaces; and persistent state management of the job queue and resource configuration. Applications that are built by using the on-premises job submission API in Windows HPC Server 2008 R2 can use very similar job submission interfaces in the Microsoft Azure HPC Scheduler. For more information about the Microsoft Azure HPC Scheduler, visit the following Microsoft website:
- Preview release of the LINQ to HPC programming model for data-parallel applications. LINQ to HPC and the Distributed Storage Catalog (DSC) help developers write programs that use cluster-based computing to manipulate and analyze very large data sets. LINQ to HPC and the DSC include services that run on a Windows HPC cluster and client-side features that are invoked by applications. Code samples are available in the SP3 SDK code sample download, and the programmer's guide is available in the LINQ to HPC section on MSDN. For more information, visit the following Microsoft website:
Important This is the final preview of LINQ to HPC, and we do not plan to move forward with a production release. In line with our announcement at the PASS conference in October, we will focus our efforts on bringing Apache Hadoop to both Windows Server and Microsoft Azure. For more information, visit the following Microsoft websites:
How to obtain this update
The update is available for download from the following Microsoft Download Center website:
Download the update package now.
For more information about how to download Microsoft support files, click the following article number to view the article in the Microsoft Knowledge Base:
Prerequisites
To apply this update, you must be running Windows HPC Server 2008 R2. Additionally, HPC Pack 2008 R2 Service Pack 2 (SP2) must be installed.
Installation instructions
To install this update, run it on the head node.
Note If you have a pair of high-availability head nodes, run this update on the active node first, and then run it on the passive node.
Restart requirement
You must restart the computer after you install this update.
Update replacement information
This update does not replace a previously released update.
Article ID: 2638616 — Last Review: June 20, 2014 — Revision: 1