Write amplification factor
The result is that the SSD has more free space, which enables lower write amplification and higher performance.
What is flash write amplification?
This means that, except for brand-new SSDs or ones that have been securely erased by the manufacturer before sale, flash storage chips have to be erased before they can be rewritten. Write amplification is the ratio of the data actually written to the flash versus the data the host requested to write to the device. As a simple example, take an 8KB page that is already written with user data, and suppose the user wants to update one sector of data that was allocated to this page. To do that, the firmware has to read the page into RAM, modify the one sector of data, and then write the new 8KB of data to a new physical page. In this scenario, the host requested only a single sector to be written, but the actual amount of data written to the flash is 8KB; assuming a standard 512-byte sector, the Write Amplification Factor (WAF) is 16. Another cause of write amplification is defect management, and any garbage collection that moves data which would not otherwise have required moving will also increase write amplification.
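The read-modify-write example above can be sketched as a short calculation. This is an illustrative model, not firmware code; the 512-byte sector size is an assumption on my part, since the article does not state it.

```python
# Sketch: write amplification of a read-modify-write on one flash page.
# The 8 KB page comes from the article; the 512-byte sector is assumed.

PAGE_SIZE = 8 * 1024      # bytes per flash page
SECTOR_SIZE = 512         # bytes per host sector (assumption)

def waf_for_update(bytes_requested: int, bytes_written_to_flash: int) -> float:
    """WAF = actual data written to flash / data the host asked to write."""
    return bytes_written_to_flash / bytes_requested

# The host updates one sector, but the firmware must rewrite the whole page.
waf = waf_for_update(SECTOR_SIZE, PAGE_SIZE)
print(waf)  # 16.0
```

The same function generalizes to any update smaller than a page: the smaller the host write relative to the page, the larger the WAF.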
For example, a 1GB NAND flash device that we are currently using has a block size of 64 pages; thus, when a page goes bad and we have to retire the whole block, it results in a WAF of 64. This would not be a problem if erasing were an easy task. The main challenge is that flash cells can only be erased block-wise, while they are written page-wise.
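The block-retirement cost can be expressed the same way. This is a toy model; the 64-pages-per-block geometry is taken from the article's 1GB NAND example, and treating "pages relocated per failed page" as the amplification factor is my reading of the example.

```python
# Sketch: amplification from retiring a whole block because one page
# went bad. Geometry taken from the article's 1 GB NAND example.

PAGES_PER_BLOCK = 64

def retirement_waf(bad_pages: int, pages_per_block: int = PAGES_PER_BLOCK) -> float:
    """Retiring a block forces every page in it to be relocated, so the
    amplification is pages moved per page that actually failed."""
    return pages_per_block / bad_pages

print(retirement_waf(1))  # 64.0
```

This is why erase granularity matters: the host changed nothing, yet 64 pages' worth of data had to be rewritten.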
Once every block has been written once, garbage collection begins, and performance is gated by the speed and efficiency of that process.
However, this comes with a major risk: once TRIM is active and the trimmed space is overwritten, there is no chance of recovering the original data that was stored there.
The result is that simply deleting data from an SSD creates more writes than it eliminates. If the data is mixed in the same blocks, as in almost all systems today, any rewrite will require the SSD controller to garbage collect both the dynamic data that caused the rewrite and the static data that did not require one.
This article describes write amplification, a fundamental issue that SSD controllers must address as part of their design. If the user or operating system erases a file (not just parts of it), the file will typically be marked for deletion, but its actual contents on the disk are never erased at that point. One might expect deletion to be free, but it is not: deleting data and writing new data on an SSD requires the data and its metadata to be written multiple times, which raises the write amplification factor and requires even more time to write the data from the host. Flash chips are organized into blocks, and each block is made up of several pages. Because flash cells wear out, SSD controllers use a technique called wear leveling to distribute writes as evenly as possible across all the flash blocks in the SSD. Unfortunately, distributing writes evenly requires previously written, rarely changing "cold" data to be moved so that more frequently changing "hot" data can be written into those blocks. The process requires the SSD controller to separate the LBAs holding data that is constantly changing and requires rewriting (dynamic data) from the LBAs holding data that rarely changes and does not require rewrites (static data). This reduces the number of LBAs that need to be moved during garbage collection. Therefore, separating the data enables static data to stay at rest, and if it never gets rewritten it will have the lowest possible write amplification for that data.
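The benefit of separating hot and cold data can be shown with a toy model. This is a simplified sketch of my own, not the controller's actual algorithm: garbage collecting a block means copying out every still-valid page before the erase, and the block geometry is illustrative.

```python
# Sketch: why separating static ("cold") from dynamic ("hot") data lowers
# write amplification. Garbage collecting a block means relocating every
# page that still holds valid data. Geometry and names are illustrative.

def gc_copies(block):
    """Pages the controller must copy elsewhere before erasing this block."""
    return sum(1 for page in block if page == "cold")

# Mixed block: 4 hot pages were rewritten elsewhere (now stale), but the
# 4 valid cold pages must still be copied out before the erase.
mixed = ["stale", "cold", "stale", "cold", "stale", "cold", "stale", "cold"]

# Separated layout: the hot block is entirely stale and erases for free,
# while the all-cold block is never rewritten and never needs collecting.
hot_only = ["stale"] * 8

print(gc_copies(mixed))     # 4 extra page copies
print(gc_copies(hot_only))  # 0
```

Every cold page copied during garbage collection is a flash write the host never asked for, which is exactly the amplification the separation avoids.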
What can a designer do to mitigate these issues? As the earlier example shows, writing small chunks of data can result in a large WAF; therefore, it is recommended that the host system avoid frequent small-block writes.
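One way a host can follow this recommendation is to coalesce small writes into page-sized ones. The class below is a hypothetical host-side buffer of my own devising, not a real driver API, shown only to illustrate the idea.

```python
# Sketch: host-side buffering that coalesces small writes into full-page
# writes. WriteCoalescer is hypothetical, not a real driver interface.

PAGE_SIZE = 8 * 1024

class WriteCoalescer:
    """Accumulates small writes and flushes them in page-sized chunks."""

    def __init__(self):
        self.buffer = bytearray()
        self.pages_flushed = 0

    def write(self, data: bytes):
        self.buffer.extend(data)
        while len(self.buffer) >= PAGE_SIZE:
            self._flush_page(bytes(self.buffer[:PAGE_SIZE]))
            del self.buffer[:PAGE_SIZE]

    def _flush_page(self, page: bytes):
        # In a real system this would issue one aligned page-sized write.
        self.pages_flushed += 1

w = WriteCoalescer()
for _ in range(32):          # 32 small 512-byte writes...
    w.write(b"\x00" * 512)
print(w.pages_flushed)       # ...are issued as 2 full-page writes
```

Instead of 32 read-modify-write cycles, the device sees two aligned page writes, so each page is written once rather than amplified.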
Retrieving and redistributing the data means that the old data is copied to a new location, and the associated metadata copying and calculations further add to the total amount of data written.
The more efficiently the controller handles write amplification, the longer the life of the SSD.