Gassing up your SSD: Top off the tank for higher performance

  • Don't overfill your SSD, and you'll improve its performance

Have you ever run out of gas in your car? Do you often risk running your gas tank dry? Hopefully you are more cautious than that and start searching for a gas station when you get down to a quarter tank. You do this because you want plenty of cushion in case something comes up that prevents you from reaching a station before it is too late.

The reason most people stretch their tank is to maximize travel between station visits. The downside to pushing the envelope to E is you can end up stranded with a dead vehicle waiting for AAA to bring you some gas.

Now most people know you don’t put gas in a solid state drive (SSD), but the pros and cons of how much you leave in the tank are very relevant to SSDs.

To understand how these two seemingly unrelated technologies are similar, we first need to drill into some technical SSD details. To start, SSDs act, and often look, like traditional hard disk drives (HDDs), but they do not record data in the same way. SSDs today typically use NAND flash memory to store data and a flash controller to connect the memory with the host computer. The flash controller can write a page of data (often 4,096 bytes) directly to the flash memory, but cannot overwrite the same page of data without first erasing it. The erase cycle cannot expunge only a single page. Instead, it erases a whole block of data (usually 128 pages). Because the stored data is sometimes updated randomly across the flash, the erase cycle for NAND flash requires a process called garbage collection.
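The write/erase asymmetry described above is the key constraint. A minimal model can make it concrete; the page and block sizes below are the illustrative figures from the text (4,096-byte pages, 128 pages per block), not a spec for any particular drive:

```python
# Minimal sketch of NAND flash write/erase granularity.
# Illustrative assumptions: 4,096-byte pages, 128 pages per block.
PAGE_SIZE = 4096
PAGES_PER_BLOCK = 128

class NandBlock:
    def __init__(self):
        # Each page slot is either erased (None) or holds data.
        self.pages = [None] * PAGES_PER_BLOCK

    def write_page(self, index, data):
        # A page can be written only while it is in the erased state;
        # NAND flash does not allow overwriting a page in place.
        if self.pages[index] is not None:
            raise RuntimeError("page must be erased before rewrite")
        self.pages[index] = data

    def erase(self):
        # Erase operates on the whole block, never a single page.
        self.pages = [None] * PAGES_PER_BLOCK

block = NandBlock()
block.write_page(0, b"hello")
try:
    block.write_page(0, b"world")   # fails: no in-place overwrite
except RuntimeError as e:
    print(e)
block.erase()                        # whole block wiped at once
block.write_page(0, b"world")       # now the rewrite succeeds
```

The mismatch between the small write unit (a page) and the large erase unit (a block) is exactly what forces the garbage-collection process described next.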

Garbage collection is just dumping the trash
Garbage collection starts when a flash block is full of data, usually a mix of valid (good) and invalid (older, replaced) data. The invalid data must be tossed out to make room for new data, so the flash controller copies the valid data of a flash block to a previously erased block, and skips copying the invalid data of that block. The final step is to erase the original whole block, preparing it for new data to be written.
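The steps above can be sketched in a few lines. This toy model represents pages as (data, is_valid) pairs and follows the sequence described: copy the valid pages to a previously erased block, skip the invalid ones, then erase the source block whole:

```python
# Toy garbage-collection step, following the description above.
def garbage_collect(source_block, fresh_block):
    """source_block: list of (data, is_valid) pairs.
    fresh_block: list of erased slots (None) of equal length."""
    write_ptr = 0
    for data, is_valid in source_block:
        if is_valid:                       # copy only the valid data
            fresh_block[write_ptr] = data
            write_ptr += 1
    # Erase the whole source block, readying it for new writes.
    source_block[:] = [(None, False)] * len(source_block)
    return write_ptr                       # pages physically copied

source = [("A", True), ("old-A", False), ("B", True), ("old-B", False)]
fresh = [None] * 4
copied = garbage_collect(source, fresh)
print(copied, fresh)   # 2 valid pages copied; the invalid data is dropped
```

Note that the two copied pages are writes the host never asked for, which is where the next section's write amplification comes from.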

Before and during garbage collection, some data exists in two or more locations at once: the valid data copied during garbage collection, plus the (typically) multiple stale copies of invalid data. Because of this internal copying, the flash controller ends up writing more data than the host actually sends, a phenomenon known as write amplification. To hold this extra data, which the operating system never counts, the flash controller needs some spare capacity beyond what the operating system sees. This is called over-provisioning (OP), and it is a critical part of every NAND flash-based SSD.
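Write amplification is usually quantified as a factor: the total data the flash physically writes divided by the data the host asked to write. A quick illustration, with made-up numbers (1 GB of still-valid data recopied by garbage collection for every 4 GB of host writes):

```python
# Write amplification factor (WAF) = flash writes / host writes.
# The figures below are illustrative, not measurements.
host_writes_gb = 4.0   # data the host asked to write
gc_copy_gb = 1.0       # valid data recopied by garbage collection
waf = (host_writes_gb + gc_copy_gb) / host_writes_gb
print(waf)   # 1.25: the flash absorbs 25% more writes than the host issued
```

A WAF of 1.0 would mean no extra internal writes at all; real values depend on workload and on how much over-provisioning the garbage collector has to work with.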

Over-provisioning is like the gas that remains in your tank
While every SSD has some amount of OP, the amount varies from drive to drive, reflecting trade-offs between total storage capacity and gains in performance and endurance. The less OP allocated in an SSD, the more information a user can store. This is like the driver who runs the tank down to near-empty just to maximize the miles between station visits.

What many SSD users don’t realize is that there are major benefits to NOT stretching this OP area too thin. When you allocate more space for OP, you achieve lower write amplification, which translates to higher write performance and longer flash endurance. This is like the cautious driver who visits the gas station more often, gaining the flexibility to pick a more cost-effective station and to absorb last-minute changes in travel plans that burn more fuel than originally anticipated.

The choice is yours
Most SSD users do not realize they have full control of how much OP is configured in their SSD. So even if you buy an SSD with 0% OP, you can dedicate some of the user space back to OP for the SSD.
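As a sketch of the arithmetic involved, here is how the effective OP changes when you shrink the user-visible space, using the common convention of quoting OP as spare capacity relative to user capacity (the drive sizes are hypothetical):

```python
# Effective over-provisioning when user-visible capacity is reduced.
# Convention assumed here: OP% = (physical - user) / user * 100.
def op_percent(physical_gb, user_gb):
    return (physical_gb - user_gb) / user_gb * 100

# A hypothetical 256 GB drive exposed at full capacity (0% OP):
print(op_percent(256, 256))          # 0.0
# Leaving 56 GB of it unpartitioned gives roughly enterprise-class OP:
print(round(op_percent(256, 200)))   # 28
```

In practice, dedicating space back to OP can be as simple as leaving part of the drive unpartitioned, as the comment thread below discusses.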

A more detailed presentation of how OP works and what 0% OP really means was given at the Flash Memory Summit 2012 and can be viewed here: 2012 Flash Memory Summit – Understanding Over Provisioning

It pays to be the cautious driver who fills the gas tank long before you get to empty. When it comes to both performance and endurance, your SSD will cover a lot more ground if you treat the over-provisioning space the same way — keeping more in reserve.



  1. PJM September 29, 2013 at 11:28 pm - Reply

    I remain uncertain as to one thing –

    This page states “To store this extra data not counted by the operating system, the flash controller needs some spare capacity beyond what the operating system knows.”.

    That would mean ‘not part of any partition or volume’, IOW ‘shows as unallocated space’ under Windows.

    IS that REALLY true? Elsewhere I see mention of ‘Dynamic OP’, which states that after TRIM, the freed pages INSIDE a partition / volume are available as OP space (until written).

    Given a disk with 256GB, and a manufacturer's ‘minimum reserved for OP’ of 10GB, and I have a 246GB partition, and I have only written 100GB to it (and then TRIM'd), how much OP space do I have (assuming I add no more data)? 10GB? Or (246 - 100 = 146GB) + 10GB?

    • Kent Smith October 1, 2013 at 10:14 pm - Reply

      That is a good question. When adding an SSD to a system with Windows (or any other operating system), the operating system recognizes the total physical capacity of the SSD through a maximum logical block address (max LBA). This number is set by the SSD manufacturer. Windows is free to create partitions up to that number, but not beyond. The SSD manufacturer reserves some over-provisioning in LBAs that lie beyond the maximum number recognized by Windows. The flash controller manages the LBAs both inside and outside the range (0 to maximum) recognized by Windows.

      When a user does not fill the entire range of LBAs known to Windows, the controller automatically uses that space as dynamic over-provisioning (assuming the OS and SSD support TRIM).

      For your example, we will ignore the difference in the translation of gigabytes to billions of bytes (about 7%). Using those numbers, your math is correct: 156GB of total OP. Of course, as soon as you start writing more data (not replacing data), the available OP will drop, as low as the original manufacturer's 10GB in your example.

      Also note that if your partition(s) do not reach the entire max LBA count, the unpartitioned area is automatically used as OP by the controller (assuming it was never written, or was TRIM'd).
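For readers following along, the dynamic-OP arithmetic from the exchange above can be sketched directly (all figures are the commenter's hypothetical numbers, ignoring the GB vs. billion-bytes difference as noted):

```python
# Dynamic OP = factory reserve + free, TRIM'd space inside the partition.
factory_op_gb = 10     # manufacturer's fixed reserve
partition_gb = 246     # user-visible partition
data_written_gb = 100  # valid data currently stored

dynamic_op_gb = partition_gb - data_written_gb  # free, TRIM'd user space
total_op_gb = factory_op_gb + dynamic_op_gb
print(total_op_gb)   # 156
```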

  2. PJM October 1, 2013 at 11:14 pm - Reply

    Excellent. So on my new (hypothetical) 960GB, with its ‘factory minimum of 7% or 12% that Windows can never see no matter what,’ there’s no need for me to set my partition less than the maximum Windows allows (960), as long as I trust myself not to fill it with data, and it is TRIM'd.

    Most articles, such as Anandtech's (excellent recent article on SSDs and OP), would have me believe I need to create a 750GB partition and never ever use the ‘25% left for OP.’ I am clear now that that is not so in real life (except for their testing, which will blindly fill the partition until it pukes). If I had a real life, that is. 🙂


    • Kent Smith October 2, 2013 at 8:20 pm - Reply

      Yes. If you never store more than 750GB of data and keep TRIM on (don't disable it), you will see the same performance as the guy who creates a 750GB partition.

  3. LEMC February 12, 2014 at 1:48 pm - Reply

    I use Fedora Linux 20 and am planning to install an SSD (Intel 525 series mSATA, 180GB) in my desktop computer. However, it seems that TRIM is not yet well implemented in the ext4 file system, so in principle I do not want to enable TRIM. If I leave a large unpartitioned space (20GB or more), or do not use more than 60% of the SSD capacity, will this over-provisioning, together with DuraWrite, make up for the lack of TRIM? With this SSD setup, will performance degrade over time? Thanks!

    • Kent Smith February 12, 2014 at 7:18 pm - Reply

      I am not sure if your version of Linux supports TRIM since I don't use it myself, but this Wikipedia article is pretty good at keeping track:

      In case you are unsure yourself, or you know your version does not support TRIM, you can, as you said, increase your over-provisioning (OP) to improve performance. Enterprise SSDs typically use 28% OP to improve performance. In case other readers are not clear on this question and answer: TRIM will not improve performance of an SSD that is filled to capacity. Only when you don't fill the SSD to capacity will TRIM convert that free space into what we call dynamic OP. Therefore, on a 256GB SSD, if you set the OP to 28%, you can use the full 200GB for user storage and get enterprise-level OP performance.

      • LEMC February 13, 2014 at 11:14 am - Reply

        Thank you very much for your reply! The Wikipedia article confirms that TRIM is not yet well implemented under Linux (there are “performance concerns”), so I would rather not activate it for now. I will therefore increase over provisioning to get better performance. As per your 2012 presentation “Understanding SSD Over Provisioning,” I will create a 130GB partition on my brand-new SSD (Intel 525 mSATA, 180GB total capacity), leaving 50GB (28% of total capacity) of unallocated space. With this setup, how much am I losing in terms of overall SSD performance, compared to an identical system but with a normally working TRIM?
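A side note on the arithmetic above: OP percentages are quoted in two ways, as a share of total capacity or as spare space relative to user capacity, so the same 50GB reads differently depending on the convention. A quick sketch of both for the setup described:

```python
# Two common conventions for quoting OP on a 180 GB drive with a
# 130 GB partition and 50 GB left unallocated.
spare_gb, user_gb, total_gb = 50, 130, 180
print(round(spare_gb / total_gb * 100))  # 28  (share of total capacity)
print(round(spare_gb / user_gb * 100))   # 38  (spare relative to user space)
```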

        • Kent Smith February 13, 2014 at 9:48 pm - Reply

          An SSD with no TRIM at 130GB user capacity (with any amount of data filling it) will perform the same as an SSD with TRIM at 130GB user capacity (if filled to capacity). Said another way, the no-TRIM SSD has a fixed performance level from the 28% OP, and that performance will be very good. The TRIM SSD has a worst-case performance the same as the no-TRIM SSD, and its performance increases as you store less data below full capacity. As I covered in the Flash Memory Summit 2012 presentation you mentioned, the TRIM command allows the free user space to become dynamic OP, which generally increases performance.

          Your question gives me an idea for another blog focused on this specific scenario visually showing the performance difference with different user data fill levels. Thanks.
