18 May 2012

Is Defragmentation Still Worthwhile to Keep Your PC Running Smoothly? Does it Work for SSDs?



When did you last defragment your hard disk? If it's been a while, that's understandable. Once, "defragging" was a popular way to maximize performance, but as hardware has grown faster and more powerful, the effect of fragmentation has become less noticeable, leading to a perception that defragmentation is no longer necessary.

In truth, defragmenting your disks is still worthwhile. The more powerful our PCs become, the more natural it is to multitask - to leave processes running in the background, to have media players open in the foreground, and even to share files across a home network at the same time. This type of heavily parallel usage can still cause a fragmented disk to hiccup, leading to annoyingly uneven performance. And when it comes to servers, fragmentation is more of an issue. Personal storage servers typically use domestic drives, which aren't designed to handle many simultaneous requests. In business, a host of virtual servers may run on a single set of physical hardware, resulting in intense demands on the hard disk.

Even if a fragmented disk provides acceptable performance, the constant seeking is likely to shorten its lifespan. And every second spent seeking is an extra second the drive spends powered on and generating heat. The effect won't halve a laptop's battery life, but it isn't insignificant: a typical 500GB 2.5in hard disk draws 2.5W while active, but less than one watt when idle. For a business running dozens or hundreds of disks, the cumulative wastage could be considerable - and that's before you factor in cooling requirements.
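As a rough sanity check on those figures, the extra energy cost of disks kept busy seeking can be estimated directly. The per-disk wattages below come from the text; the fleet size and "extra busy" fraction are purely illustrative assumptions:

```python
# Back-of-the-envelope estimate of the energy cost of constant seeking,
# using the article's figures: ~2.5W active vs. ~1W idle per 2.5in disk.
ACTIVE_W = 2.5   # watts while active (figure from the article)
IDLE_W = 1.0     # watts while idle (figure from the article)

def extra_energy_kwh(disks, hours, busy_fraction):
    """Extra energy (kWh) spent because disks are active instead of idle."""
    extra_watts = (ACTIVE_W - IDLE_W) * disks * busy_fraction
    return extra_watts * hours / 1000.0

# Hypothetical fleet: 200 disks over one year, kept 25% busier by fragmentation.
print(extra_energy_kwh(200, 24 * 365, 0.25))  # 657.0 kWh per year, before cooling
```

Even under these modest assumptions the waste runs to hundreds of kilowatt-hours a year, which is why the point matters more for businesses than for a single laptop.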


Free Defragmentation Tools


Since Windows 95, all versions of the OS have come with a built-in graphical defragmentation tool, which can reorganize your hard disk to keep files together. There are plenty of free tools, too, such as Piriform Defraggler (www.piriform.com/defraggler), which adds a more friendly interface and extra features, such as the ability to selectively defragment individual files.

A full defragmentation with these programs can take hours, however; and your hard disk will be constantly seeking all over the place, interfering with your use of it. For these reasons, it's common to schedule a periodic defragmentation for times when you won't be using your system (by default, the Windows 7 defragmenter launches at 1am every Wednesday).

But this works only if your PC is switched on, wasting power. And your disk won't be completely defragmented since the software can't move system files that are in use.


Preventing Fragmentation


The ideal solution would be to prevent fragmentation before it happens, by ensuring files are written contiguously to disk in the first place. When you create a file on an NTFS file system, Windows attempts to place it in the area of free space on your hard disk that most closely matches the size of the file, so as to make the most efficient use of the space.

Unfortunately, this involves guesswork, because the OS often has to start writing a file before it knows what its eventual size will be. If it's a log file that grows over time, or a video file that will take several minutes to write, the size could be many times larger than expected. Windows tries to find a suitable place to write by looking for a space that is - depending on a simple predictive algorithm - between two and 16 times the size of the data that's already been cached for writing.

Sadly, although well intentioned, this approach is apt to cause more fragmentation than it prevents. When a file turns out to be larger than the space allocated, it's fragmented across multiple areas of free space. And when a file turns out to be smaller, an unhelpful island of space is left immediately after it on the disk.
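The behaviour described above can be illustrated with a toy model. This is not NTFS's real allocator - just a sketch of best-fit placement, and of what happens when the predicted size is wrong (all extents and sizes below are made up for illustration):

```python
# Toy model of best-fit allocation. Free space is a list of (offset, length)
# extents; a new file goes into the smallest extent that fits the *predicted*
# size. If the file outgrows the prediction, it spills into further extents
# and fragments.

def best_fit(free_extents, size):
    """Return the smallest free extent that can hold `size`, or None."""
    candidates = [e for e in free_extents if e[1] >= size]
    return min(candidates, key=lambda e: e[1]) if candidates else None

def allocate(free_extents, actual_size, predicted_size):
    """Allocate a file, returning the list of fragments it ends up in."""
    fragments = []
    remaining = actual_size
    want = predicted_size
    while remaining > 0:
        extent = best_fit(free_extents, min(want, remaining)) or \
                 max(free_extents, key=lambda e: e[1])  # fall back to biggest
        offset, length = extent
        used = min(length, remaining)
        free_extents.remove(extent)
        if used < length:                  # leftover "island" of free space
            free_extents.append((offset + used, length - used))
        fragments.append((offset, used))
        remaining -= used
        want = remaining                   # no prediction left for overflow
    return fragments

free = [(0, 100), (300, 40), (500, 25)]
# Predicted at 30 blocks but actually 150: the file lands in three fragments.
print(allocate(free, actual_size=150, predicted_size=30))
# → [(300, 40), (0, 100), (500, 10)]
```

Note how the bad prediction causes both failure modes at once: the file is split across three extents, and a small 15-block island of free space is left behind.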


Defragmentation with Diskeeper


Enter Condusiv. Formerly known as the Diskeeper Corporation (and before that as Executive Software), it's been producing defragmentation software since 1981. Indeed, the defragmentation clients in Windows 2000, XP and Server 2003 are built on routines licensed from the company. Via the Condusiv site (http://tinyurl.com/7d8s5ol) you can get a 30-day trial of the Diskeeper 2011 Home package, an advanced utility that takes a smarter approach to defragmentation.

Diskeeper's secret weapon is a system called IntelliWrite. This changes Windows' default file write behavior by allocating larger spaces for newly created files, greatly reducing the initial occurrence of fragmentation. It's a strategy that relies on there being a certain amount of disk space to spare - IntelliWrite is automatically disabled when a disk has less than 2GB of available capacity. But modern disks typically offer hundreds of gigabytes of space to play with, giving IntelliWrite plenty of scope to improve performance.

Diskeeper claims that IntelliWrite reduces the occurrence of files being created in a fragmented state by up to 85%. In some cases, though, fragmentation remains unavoidable. The software therefore also continuously monitors your hard disk for fragmented data, and consolidates it whenever it's found. This obviates the need to run large, periodic scheduled defragmentation jobs - although if you want to be sure it won't interfere with your usage, you can create a schedule specifying when defragmentation can and can't happen.

A final string to Diskeeper's bow is the ability to defragment the page file that's used by Windows as virtual memory, by scheduling a targeted defragmentation job to run before Windows starts up. This may be slow if you're using a large page file, but again it can be scheduled - and there's no other way to ensure Windows is running at its smoothest.


Fragmentation & SSDs


So far we've focused on mechanical drives, but with solid-state drives (SSDs) growing in popularity, you might be wondering whether they also need defragmenting. The short answer is that defragmentation, as it's commonly understood, is unnecessary.

To understand why, let's focus first on read performance. Since flash memory has no "seek time", an SSD can read discontiguous cells just as quickly as adjacent ones. Strictly speaking, there's a performance penalty involved in switching between different banks of memory cells - but this is of the order of fractions of a microsecond. For all practical purposes, defragmenting the files on an SSD yields no improvement in read performance.

Indeed, conventional defragmentation tools don't work on SSDs. To the operating system - and hence to defragmentation software - solid-state volumes appear to be divided into cylinders, heads and sectors, just like mechanical hard disks. But this is an abstraction, used to provide compatibility with established disk interfaces. Inside the SSD, these virtual addresses map onto flash memory banks that will almost certainly be differently arranged.

Things are complicated further by a process called wear leveling. Each flash cell inside an SSD can sustain only a limited number of write operations; to prolong the lifetime of the disk as a whole, the SSD's internal controller attempts to distribute write operations evenly across the whole capacity. This means that the mapping of virtual sectors to physical cells changes dynamically every time a block of data is written. Try to write to the same disk location twice in a row and you'll in fact be addressing two different physical memory locations.
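A heavily simplified model makes the point concrete. Real SSD controllers are far more sophisticated, but the sketch below (class name and structure are invented for illustration) shows how wear leveling redirects repeated writes to the same logical address onto different physical blocks:

```python
# Toy wear-levelling controller: each write to a logical block is redirected
# to the least-worn free physical block, so the same logical address lands
# in different physical cells on successive writes.

class WearLevelController:
    def __init__(self, num_physical):
        self.erase_counts = [0] * num_physical   # wear per physical block
        self.mapping = {}                        # logical -> physical
        self.free = set(range(num_physical))     # unmapped physical blocks

    def write(self, logical):
        old = self.mapping.get(logical)
        if old is not None:
            self.free.add(old)                   # old copy becomes stale
        # Pick the least-worn free physical block as the new home.
        target = min(self.free, key=lambda p: self.erase_counts[p])
        self.free.remove(target)
        self.erase_counts[target] += 1
        self.mapping[logical] = target
        return target

ssd = WearLevelController(num_physical=4)
first = ssd.write(logical=0)
second = ssd.write(logical=0)          # write the same logical block again...
print(first, second, first != second)  # ...and it lands somewhere else
```

Because the freshly written block now has the highest erase count, the controller always prefers a different, less-worn block for the next write - exactly the dynamic remapping that defeats a traditional defragmenter's view of the disk.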

This utterly defeats traditional approaches to defragmentation. When a tool such as the Windows defragmenter thinks it's consolidating data, in reality it's shortening the life of your disk with thousands of pointless writes. For this reason, automatic defragmentation is turned off in Windows 7 for SSDs.


Avoiding Write Amplification


Although fragmentation has a minimal effect on SSD read performance, it makes a big difference to writing. This is because of a phenomenon known as write amplification. It occurs because flash memory cells can't be overwritten like physical hard disk sectors. To write to a cell that contains data (even junk data left over from a deleted file), it must be erased and then reprogrammed. The catch is that cells can't be erased singly: for technical reasons, they can only be erased in blocks, typically of 512KB.

This creates problems. When you write a file into a given block, it's clearly unacceptable for the drive controller to simply erase all nearby data. So when the write request is received, the controller must read the entire contents of the block into a cache, then erase the block, then rewrite all those contents back to the disk, along with the new data. A single write request is thus "amplified" into dozens or thousands of operations - greatly slowing down performance.

To an extent, the effect is mitigated by an SSD feature called TRIM. When a file is deleted, TRIM enables the operating system to tell the drive controller that the data doesn't need to be kept. Should the block need to be rewritten in future, the unwanted data can be skipped. This speeds up the operation at hand; and because the unused space is subsequently left blank, it can in future be written to directly without requiring further erase operations.
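The interaction between write amplification and TRIM can be shown with some simple arithmetic. The 512KB erase block comes from the text; the 4KB page size and the quantities of valid and trimmed pages are illustrative assumptions:

```python
# Toy model of write amplification in one 512KB erase block of 4KB pages.
# Overwriting data in the block forces the controller to copy back every
# still-valid page; TRIM lets it skip pages the OS has marked as deleted.
PAGES_PER_BLOCK = 512 * 1024 // (4 * 1024)   # 128 pages of 4KB each

def pages_rewritten(valid_pages, trimmed_pages):
    """Pages the controller must copy back during a read-erase-rewrite cycle."""
    return len(valid_pages - trimmed_pages)

block = set(range(PAGES_PER_BLOCK))          # assume every page holds data
no_trim = pages_rewritten(block, trimmed_pages=set())
with_trim = pages_rewritten(block, trimmed_pages=set(range(100)))  # 100 pages deleted
print(no_trim, with_trim)  # 128 vs 28: TRIM shrinks the read-modify-write cycle
```

Without TRIM the controller must shuffle all 128 pages to service one small write; with 100 pages trimmed, only 28 need copying - which is why TRIM support matters so much for sustained SSD write performance.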

Diskeeper Home 2011 takes this one step further, with an SSD-specific feature called HyperFast. As we've described, it isn't possible for defragmentation software to reorganize the data on an SSD. But by using methods similar to those employed by IntelliWrite, HyperFast discourages the OS from writing files in piecemeal fashion in the first place, greatly reducing the break-up of free space. This means that whenever data is written, it's much more likely to be to a block that's mostly blank - minimizing the effect of write amplification. The default installation will activate HyperFast automatically for SSDs.

 
