[2010-09-16 09:51:31.909] CRITICAL indexer RTSearch: Shutting down. Reason: Failed to parse configuration
[2010-09-16 09:51:32.220] ERROR indexer file_hash_index: Could not create storage path 'f:\esp\data\data_fixml': No such file or directory
[2010-09-16 09:51:32.220] WARNING indexer file_hash_index: Failed to synch file hash index to disk: Could not open file 'f:\esp\data\data_fixml\itemcache.dat' for write: No such file or directory
[2010-09-16 09:51:32.220] ERROR indexer IDocIndex: Could not create storage path 'f:\esp\data\data_fixml': No such file or directory
Using network-attached storage (NAS) with FAST ESP
A network-attached storage system is a file-based storage system that can be attached to an ESP System through the network redirector by using a file-sharing protocol (such as Server Message Block [SMB], Common Internet File System [CIFS], or Network File System [NFS]). If access to a disk resource requires that a share be mapped, or if the disk resource appears as a remote server by means of a Universal Naming Convention (UNC) path (for example, \\server_name\share_name) on the network, the disk storage system is not supported as a location for an ESP Index and FIXML.
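The errors in the log excerpt above are typical of this unsupported configuration: the indexer cannot create its storage path because the mapped or remote volume is not available to the service. As a quick sanity check, the following Python sketch (a minimal illustration, assuming a Windows host; the path f:\esp\data is taken from the log excerpt and is illustrative) uses the Win32 GetDriveTypeW call to verify that a configured data path resolves to a locally attached fixed volume rather than a UNC path or a mapped network drive.

import ctypes
import os

DRIVE_FIXED = 3  # locally attached fixed disk (winbase.h)

def is_local_fixed_volume(path):
    """Return True only if path sits on a locally attached fixed volume."""
    # UNC paths (\\server\share) are network resources by definition.
    if path.startswith("\\\\"):
        return False
    root = os.path.splitdrive(os.path.abspath(path))[0] + "\\"
    return ctypes.windll.kernel32.GetDriveTypeW(root) == DRIVE_FIXED

# Illustrative path from the log excerpt above.
print(is_local_fixed_volume(r"f:\esp\data"))

A mapped network drive returns DRIVE_REMOTE (4) from GetDriveTypeW, so this check reports it as unsuitable for ESP Index and FIXML storage.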
The chance that problems will occur increases as disk I/O operations, requirements, and complexity increase. The level of risk and the loss of performance vary by device, protocol, network congestion, and configuration. As network bandwidth, latency, data access protocols, and storage technologies continue to evolve, the gap in performance and reliability between locally attached devices and network-attached devices continues to narrow.
However, this important principle remains: the disk system that is used to store the ESP Index data must be accessible with all the features, protocols, application programming interfaces (APIs), and access methods that are available on a locally attached, block-mode Microsoft Windows volume, regardless of physical disk location or the underlying disk access technologies and protocols.
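One way to gain confidence in this principle is to probe a candidate volume with access methods that database-class software depends on. The following Python sketch (an illustrative probe, assuming Python on the Windows host; the file name esp_io_probe.tmp and the target path are hypothetical) exercises a few representative methods: durable writes, byte-range locking, and memory-mapped file access. Passing this probe is necessary but not sufficient, because it does not cover the full Win32 API surface.

import mmap
import msvcrt
import os

def probe_volume(dir_path):
    """Exercise a few access methods that block-mode Windows volumes support."""
    test_file = os.path.join(dir_path, "esp_io_probe.tmp")
    with open(test_file, "w+b") as f:
        f.write(b"\0" * 4096)
        f.flush()
        os.fsync(f.fileno())                               # durable write
        f.seek(0)
        msvcrt.locking(f.fileno(), msvcrt.LK_NBLCK, 4096)  # byte-range lock
        msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, 4096)
        with mmap.mmap(f.fileno(), 4096) as m:             # memory mapping
            m[0:4] = b"test"
    os.remove(test_file)
    print("access-method probe passed for", dir_path)

probe_volume(r"f:\esp\data")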
Consider the following performance factors when selecting a disk system and disk access technology for FAST ESP or for any enterprise-level database management system (DBMS).
FAST ESP can place an extremely heavy load on the disk I/O subsystem. As in most large database programs, physical I/O configuration and tuning play a significant role in overall system performance.
There are three major I/O performance factors to consider:
- I/O bandwidth - The aggregate bandwidth, typically measured in megabytes per second, that can be sustained to a database device.
- I/O latency - The latency, typically measured in milliseconds, between a request for I/O by the database system and the point at which the I/O request completes.
- CPU cost - The host CPU cost, typically measured in CPU microseconds, for the database system to complete a single I/O.
Any of these I/O factors can become a bottleneck, and all of them must be considered when designing an I/O system for a database program.
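As a rough illustration, the sketch below (assuming Python on the host; the file name esp_bench.tmp, the target path, and the 64 KB block size are illustrative choices, and a dedicated benchmarking tool gives far more reliable numbers) measures all three factors at once by writing a series of synchronously flushed blocks to a candidate volume.

import os
import time

def benchmark_io(dir_path, block_size=64 * 1024, count=256):
    """Crude micro-benchmark of I/O bandwidth, latency, and CPU cost."""
    path = os.path.join(dir_path, "esp_bench.tmp")
    block = os.urandom(block_size)
    latencies = []
    cpu_start = time.process_time()
    wall_start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for _ in range(count):
            t0 = time.perf_counter()
            f.write(block)
            os.fsync(f.fileno())      # force each write to storage
            latencies.append(time.perf_counter() - t0)
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    os.remove(path)
    total_mb = block_size * count / (1024 * 1024)
    print(f"bandwidth: {total_mb / wall:.1f} MB/s")
    print(f"latency:   {1000 * sum(latencies) / count:.2f} ms per I/O")
    print(f"CPU cost:  {1_000_000 * cpu / count:.0f} us per I/O")

benchmark_io(r"f:\esp\data")

Run against both a locally attached volume and the network-attached candidate, the same test makes the relative bandwidth, latency, and CPU differences directly visible.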
If disk I/O is processed through the client network stack, the I/O is subject to the bandwidth limitations of the network itself. Even when overall bandwidth is sufficient, latency may be higher and CPU processing demands greater than with locally attached storage. Additionally, consider the availability of the network-attached storage when planning an ESP deployment in which the storage is attached through a network.
Supportability factors and recommendations
Incorrect use of FAST ESP software with a network-attached storage product might result in data loss, including total database loss.
Microsoft recommends protecting the FAST ESP System, the storage system, and the connecting network with an uninterruptible power supply (UPS).
Microsoft recommends contacting the vendor before deploying any storage solution for the FAST ESP System to obtain assurance that the end-to-end solution is designed for ESP Server use. Many vendors have best-practice recommendations for enterprise-level DBMS deployments.
Microsoft also recommends benchmarking I/O performance to make sure that none of the I/O factors that are described earlier are causing system bottlenecks.
Article ID: 2428517 - Last Review: 15-06-2011 - Revision: 1