File Server – File System Performance
The NTFS file system is the heart of the Windows OS, and its performance is directly
related to system reliability. However, NTFS writes files to the disk in scattered
pieces, even when the disk is almost empty. This multiplies the number of I/O operations
required to complete a task and quickly degrades performance, even on the newest
and most advanced systems.
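To see why scattered writes multiply I/O cost, consider a simple back-of-the-envelope model: a contiguous file costs one seek plus the transfer time, while a file split into N fragments costs roughly N seeks. The sketch below uses assumed, illustrative timings (not measured NTFS figures) to show how the seek count dominates.

```python
# Hypothetical model of how fragmentation multiplies I/O cost.
# AVG_SEEK_MS and TRANSFER_MS_PER_MB are assumed illustrative values.

AVG_SEEK_MS = 8.0          # assumed seek + rotational latency per fragment
TRANSFER_MS_PER_MB = 10.0  # assumed sequential transfer time per MB

def read_time_ms(file_size_mb: float, fragments: int) -> float:
    """Estimate time to read a file split into `fragments` pieces.

    Each fragment costs one seek; transfer time depends only on size.
    """
    return fragments * AVG_SEEK_MS + file_size_mb * TRANSFER_MS_PER_MB

contiguous = read_time_ms(10, 1)    # one seek for a contiguous 10 MB file
fragmented = read_time_ms(10, 200)  # 200 fragments -> 200 seeks
print(f"contiguous: {contiguous:.0f} ms, fragmented: {fragmented:.0f} ms")
```

Under these assumed numbers, the same 10 MB read goes from about 108 ms contiguous to about 1,700 ms when split into 200 fragments; the transfer time is unchanged, but the seek overhead dwarfs it.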
The result is slowdowns, crashes, and frequent downtime, and the effects are felt
throughout the company.
File server performance optimization achieves two important objectives: minimizing
downtime and increasing application speed and efficiency. The high traffic typical of most servers
will quickly cause a loss of performance when a server has an underlying NTFS
performance issue. The problem is that system management and monitoring tools may
only narrow the cause down to a range of possibilities. Suggested fixes often
omit NTFS file fragmentation, because the effects of fragmentation
are commonly underestimated.
Learn more about Diskeeper for faster file server performance »
How should I optimize my File Server performance?
File fragmentation is often the "straw that broke the camel's back" when diagnosing
stability or reliability issues. Heavy I/O activity, compounded by fragmentation,
can expose faulty device drivers or file filters that might otherwise operate effectively
in non-fragmented environments. The reliability of third-party applications is
highly dependent on how well those applications can accommodate bottlenecks,
such as those in disk subsystems.
System administrators have tested
Diskeeper® data performance software
and found the resulting file server performance to be so consistently maintained at peak levels
that major system and network problems disappear.
V-locity® VM accelerator
does the same with VMs. In two cases, SQL Server administrators
discovered that Average Disk sec/Read was very high on their database servers.
Per a Microsoft Knowledge Base article, any disk read taking longer than 50 ms represents
a serious bottleneck. The admins found 200 ms and 300 ms delays – well into the serious
end of the scale. The article suggested possible solutions, but the
one not explicitly stated was to defragment. Both administrators ran Diskeeper and
quickly reduced read latency to 15 ms and 30 ms – a tremendous performance improvement!
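Checking your own servers against that 50 ms threshold is straightforward once you have counter samples (collected, for example, with Performance Monitor or `typeperf`). The sketch below is a minimal classifier, assuming the "Avg. Disk sec/Read" values have already been gathered and converted to milliseconds; the intermediate "degraded" cutoff is an assumption for illustration, not from the Knowledge Base article.

```python
# Classify "Avg. Disk sec/Read" samples against the ~50 ms threshold
# cited in the Microsoft Knowledge Base article. The sample values
# below mirror the figures from the text; counter collection itself
# is out of scope for this sketch.

SERIOUS_BOTTLENECK_MS = 50.0  # threshold from the KB article

def classify_read_latency(avg_read_ms: float) -> str:
    """Return a rough health label for an average disk read latency."""
    if avg_read_ms > SERIOUS_BOTTLENECK_MS:
        return "serious bottleneck"
    if avg_read_ms > 20.0:  # assumed 'degraded' cutoff, not from the article
        return "degraded"
    return "healthy"

for sample_ms in (300.0, 200.0, 30.0, 15.0):
    print(f"{sample_ms:>5.0f} ms -> {classify_read_latency(sample_ms)}")
```

With the readings from the text, the 200 ms and 300 ms samples land in "serious bottleneck" territory, while the post-defragmentation 15 ms and 30 ms readings fall back under the 50 ms line.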
A clue that you should evaluate file I/O optimization is when you read or hear a recommendation
to increase I/O bandwidth, such as adding more disks. Example: your SAN vendor tells
you to add another controller or array. If there is substantial file fragmentation, Diskeeper
will typically provide better results, as it actually fixes the issue and prevents it from happening again
rather than masking it – and it’s a whole lot cheaper.
"Prior to installing Diskeeper some of our lengthy batch process jobs would often
fail at the end of the job with a ‘Delayed Write Fail’ message. I have not seen
that message EVER again once I installed Diskeeper on the file server hosting that
volume and that was over three years ago."
– Damon Young, System Administrator
Download a free trial of Diskeeper Server or V-locity to increase file server performance today »