Write amplification
Write amplification (WA) is an undesirable phenomenon associated with flash memory and flash SSDs, in which the amount of data physically written to the flash chips is greater than the amount of data the host actually wrote.
Because flash memory must be erased before it can be rewritten, performing these operations requires moving (or rewriting) user data and metadata more than once. This multiplying effect increases the number of writes required over the life of the SSD, which shortens the time it can reliably operate. The increased writes also consume bandwidth to the flash memory, mainly reducing the SSD's random write performance.[1][2] Many factors affect the write amplification of an SSD; some can be controlled by the user, and some are a direct result of the data written to the SSD and how it is used.
Intel[3] and SiliconSystems (acquired by Western Digital in 2009)[4] used the term write amplification in their papers and publications as early as 2008. The write amplification value (WA value) is typically measured as the ratio of the amount of data written to the flash memory to the amount of data written by the host system. Without compression, write amplification cannot drop below one. Using compression, SandForce has claimed to achieve a typical write amplification of 0.5,[5] with best-case values as low as 0.14 in the SF-2281 controller.[6]
Basic operation of flash SSDs
A distinguishing characteristic of flash memory, unlike a hard disk drive, is that data already in memory cannot be directly overwritten; in-place over-write is impossible in principle. When data is first written to a flash SSD, the corresponding flash memory cells all start in an erased state, and data is written to those pages (typically 4 KiB each) one page at a time.
The SSD controller on the SSD, which manages the flash memory and interfaces with the host system, uses a logical-to-physical mapping system known as logical block addressing (LBA), which is part of the flash translation layer (FTL).[8]
When new data comes in replacing older data already written, the SSD controller will write the new data in a new location and update the logical mapping to point to the new physical location. The data in the old location is no longer valid, and will need to be erased before the location can be written again.[1][9]
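The remapping described above can be sketched as follows. This is a hypothetical, much-simplified model; real FTLs track pages per block, wear counters, and more, and the class and method names here are illustrative, not any real controller's API.

```python
class SimpleFTL:
    """Toy flash translation layer: overwrites go to fresh pages."""

    def __init__(self):
        self.l2p = {}          # logical block address -> physical page
        self.invalid = set()   # physical pages now holding stale data
        self.next_free = 0     # next erased physical page to program

    def write(self, lba):
        """Write to a fresh page; the old copy (if any) becomes stale."""
        if lba in self.l2p:
            self.invalid.add(self.l2p[lba])  # old location is invalid now
        self.l2p[lba] = self.next_free
        self.next_free += 1
        return self.l2p[lba]

ftl = SimpleFTL()
ftl.write(7)   # first write of LBA 7 lands on physical page 0
ftl.write(7)   # overwrite lands on page 1; page 0 is now invalid
print(ftl.l2p[7], sorted(ftl.invalid))   # 1 [0]
```

The invalid pages accumulate until garbage collection erases their blocks, which is where the extra writes behind write amplification come from.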
Flash memory can only be programmed and erased a limited number of times. This is often referred to as the maximum number of program/erase cycles (P/E cycles) it can sustain over the life of the flash memory. Single-level cell (SLC) flash, designed for higher performance and longer endurance, can typically operate between 50,000 and 100,000 cycles. As of 2011, multi-level cell (MLC) flash is designed for lower-cost applications and has a greatly reduced cycle count of typically between 3,000 and 5,000. A lower write amplification is more desirable, as it corresponds to a reduced number of P/E cycles on the flash memory and thereby to an increased SSD life.[1]
Calculating the write amplification value
Write amplification (WA) was a known fact before the term itself was defined, but in 2008 both Intel[3][10] and SiliconSystems began using the term in their papers and publications.[4]
Every flash SSD has a write amplification value, which is expressed by the following formula:[1][11][12][13]
To accurately measure the write amplification of a particular SSD, test writes must be run for enough time that the drive reaches a steady state.[2]
Formula for the write amplification value:
WA = NAND / HOST
where WA is the write amplification value, NAND is the amount of data written to the flash memory, and HOST is the amount of data written by the host.
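The ratio defined above can be computed directly. This is a trivial illustration; the parameter names mirror the NAND and HOST quantities in the formula.

```python
def write_amplification(nand_writes, host_writes):
    """WA = data written to the flash memory / data written by the host."""
    if host_writes <= 0:
        raise ValueError("host write volume must be positive")
    return nand_writes / host_writes

# A drive that commits 40 GiB to flash for 10 GiB of host writes has WA 4.
print(write_amplification(40, 10))   # 4.0
# With compression, flash writes can undercut host writes (WA below 1).
print(write_amplification(5, 10))    # 0.5
```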
Factors affecting the write amplification value
Many factors affect the write amplification of an SSD. The table below lists the primary factors and how they affect it. For factors that are variables, the table notes whether they have a direct ("proportional") or inverse ("inversely proportional") relationship with the WA value. For example, as over-provisioning increases, write amplification decreases (an inverse relationship). If the factor is a toggle (enabled or disabled), it has either a "positive" or a "negative" relationship.[1][8][11]
Factor | Description | Type | Relationship* |
---|---|---|---|
Garbage collection | Efficiency of the algorithm that selects the best block to erase and rewrite next | Variable | Inverse (good) |
Over-provisioning | Percentage of physical capacity allocated to the SSD controller as spare area, outside the user capacity | Variable | Inverse (good) |
TRIM | A SATA command by which the OS tells the SSD controller which data can be discarded during garbage collection | Toggle | Positive (good) |
Free user space | Percentage of the user capacity free of actual user data; requires TRIM to be an effective factor | Variable | Inverse (good) |
Secure erase | Erases all user data and related metadata, resetting the SSD to its factory-fresh performance until garbage collection resumes | Toggle | Positive (good) |
Wear leveling | Efficiency of the algorithm that evens out the number of rewrites across all blocks as much as possible | Variable | Proportional (bad) |
Separating static and dynamic data | Grouping data by how frequently it changes | Toggle | Positive (good) |
Sequential writes | In theory, sequential writes have a write amplification of 1, but other factors can still change the value | Toggle | Positive (good) |
Random writes | Writing to non-contiguous logical block addresses has the greatest impact on write amplification | Toggle | Negative (bad) |
Data compression and redundancy reduction | Amount of redundant data removed before being written to the flash memory | Variable | Inverse (good) |
Relationship | Description |
---|---|
Proportional (bad) | As the factor increases, the WA value increases |
Inverse (good) | As the factor increases, the WA value decreases |
Positive (good) | When the factor is enabled, the WA value decreases |
Negative (bad) | When the factor is enabled, the WA value increases |
Besides the factors above, management of failure modes such as read disturb (en:Read_disturb)[14] can also affect the write amplification value (see the section on garbage collection below).
For defragmentation of SSDs, see the article on defragmentation on SSDs.
Garbage collection (GC) in SSDs
Data is written to the flash memory in units called pages, which are made up of multiple memory cells. However, erasing is only possible in larger units called blocks, which are made up of multiple pages.[7] If the data in some pages of a block is no longer needed (these are called "stale" pages), only the pages in that block holding valid data are copied (written) to another previously erased block.[2] Because the stale pages are not copied, the space they would have occupied is left free in the destination block and can hold new data. This whole process is called garbage collection (GC).[1][12] All SSDs include some form of garbage collection, but they differ in when and how they perform it.[12] Garbage collection has a large effect on an SSD's write amplification.[1][12]
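The copy-forward step of garbage collection described above can be sketched as follows. This is an illustrative model only: a block is a list of page payloads, with None marking a stale page, and real page and block sizes differ.

```python
def garbage_collect(block):
    """Copy only valid pages forward; return (new_block, pages_copied)."""
    new_block = [page for page in block if page is not None]
    return new_block, len(new_block)

old_block = ["a", None, "c", None, "e", None, None, "h"]  # 4 valid, 4 stale
new_block, copied = garbage_collect(old_block)
print(copied)     # 4 page writes caused purely by GC, not by the host
print(new_block)  # ['a', 'c', 'e', 'h']; old_block can now be erased
```

Those four copied pages are writes the host never requested, which is exactly the extra traffic that write amplification measures.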
Read operations do not require an erase of the flash memory, so they are not normally associated with write amplification. However, a block may be rewritten before a failure mode such as read disturb (en:Read_disturb)[14] occurs in it. Even so, this is generally considered to have little practical effect on a drive's write amplification.[15]
Background garbage collection
The garbage collection process involves reading and rewriting data to the flash memory. This means a new write from the host first requires a read of a whole block, a write of the parts of that block which still hold valid data, and only then the write of the new data. This can significantly reduce the performance of the system.[16] Some SSD controllers implement a feature called background garbage collection (BGC) or idle-time garbage collection (ITGC), in which the controller uses the SSD's idle time to consolidate blocks of flash memory before the host sends new write data. This keeps the performance of the device from degrading.[17]
If the controller were to background garbage collect all of the spare blocks before it was absolutely necessary, new data written from the host could be written without having to move any data in advance, letting the performance operate at its peak speed. The trade-off is that some of those blocks of data are actually not needed by the host and will eventually be deleted, but the OS did not tell the controller this information. The result is that the soon-to-be-deleted data is rewritten to another location in the flash memory increasing the write amplification. In some of the SSDs from OCZ the background garbage collection only clears up a small number of blocks then stops, thereby limiting the amount of excessive writes.[12] Another solution is to have an efficient garbage collection system which can perform the necessary moves in parallel with the host writes. This solution is more effective in high write environments where the SSD is rarely idle.[18] The SandForce SSD controllers[16] and the systems from Violin Memory have this capability.[11]
Filesystem-aware garbage collection
In 2010, some manufacturers (notably Samsung) introduced SSD controllers that extended the concept of BGC to analyze the file system used on the SSD, to identify recently deleted files and unpartitioned space. The manufacturer claimed that this would ensure that even systems (operating systems and SATA controller hardware) which do not support TRIM could achieve similar performance. The operation of the Samsung implementation appeared to assume and require an NTFS file system.[19] It is not clear if this feature is still available in currently shipping SSDs from these manufacturers. Systematic data corruption has been reported on these drives if they are not formatted properly using MBR and NTFS.[20]
Over-provisioning
Over-provisioning (sometimes spelled as OP, over provisioning, or overprovisioning) is the difference between the physical capacity of the flash memory and the logical capacity presented through the operating system (OS) as available for the user. During the garbage collection, wear-leveling, and bad block mapping operations on the SSD, the additional space from over-provisioning helps lower the write amplification when the controller writes to the flash memory.[3][21][22][23]
The first level of over-provisioning comes from the computation of the capacity and the use of units for gigabyte (GB), where in fact the binary capacity should be written as gibibyte (GiB). Both HDD and SSD vendors use the term GB to represent a decimal GB, or 1,000,000,000 (10^9) bytes. Flash memory (like most other electronic storage) is assembled in powers of two, so calculating the physical capacity of an SSD would be based on 1,073,741,824 (2^30) bytes per binary GB. The difference between these two values is 7.37% (= (2^30 − 10^9) / 10^9). Therefore, a 128 GB SSD with 0% over-provisioning would provide 128,000,000,000 bytes to the user. This initial 7.37% is typically not counted in the total over-provisioning number.[21][23]
The second level of over-provisioning comes from the manufacturer. This level of over-provisioning is typically 0%, 7%, or 28% based on the difference between the decimal GB of the physical capacity and the decimal GB of the available space to the user. As an example, a manufacturer might publish a specification for their SSD at 100 GB, 120 GB or 128 GB based on 128 GB of possible capacity. This difference is 28%, 7% and 0% respectively and is the basis for the manufacturer claiming they have 28% of over-provisioning on their drive. This does not count the additional 7.37% of capacity available from the difference between the decimal and binary GB.[21][23]
The third level of over-provisioning comes from end users to gain endurance and performance at the expense of capacity. Some SSDs provide a utility that permit the end user to select additional over-provisioning. Furthermore, if any SSD is set up with an OS partition smaller than 100% of the available space, that unpartitioned space will be automatically used by the SSD as over-provisioning as well.[23] Over-provisioning does take away from user capacity, but it gives back reduced write amplification, increased endurance, and increased performance.[18][22][24][25][26]
Over-provisioning calculation
OP = (physical capacity − user capacity) / user capacity
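Over-provisioning is commonly computed as (physical capacity − user capacity) / user capacity. The manufacturer levels described above can be checked with a short script; capacities are in decimal GB and the helper name is illustrative.

```python
def over_provisioning(physical_gb, user_gb):
    """Over-provisioning as a fraction of the user-visible capacity."""
    return (physical_gb - user_gb) / user_gb

# Manufacturer levels from the text, based on 128 GB of raw flash:
for user_gb in (100, 120, 128):
    print(f"{user_gb} GB user capacity: {over_provisioning(128, user_gb):.0%} OP")
# 100 GB -> 28%, 120 GB -> 7%, 128 GB -> 0%

# The inherent gap between binary and decimal capacity units:
print(round((2**30 - 10**9) / 10**9 * 100, 2))   # 7.37 (percent)
```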
The TRIM command
TRIM is a SATA command that enables the operating system to tell an SSD what blocks of previously saved data are no longer needed as a result of file deletions or using the format command. When an LBA is replaced by the OS, as with an overwrite of a file, the SSD knows that the original LBA can be marked as stale or invalid and it will not save those blocks during garbage collection. If the user or operating system erases a file (not just remove parts of it), the file will typically be marked for deletion, but the actual contents on the disk are never actually erased. Because of this, the SSD does not know the LBAs that the file previously occupied can be erased, so the SSD will keep garbage collecting them.[27][28][29]
The introduction of the TRIM command resolves this problem for operating systems which support it like Windows 7,[28] Mac OS (latest releases of Snow Leopard, Lion, and Mountain Lion, patched in some cases),[30] and Linux since 2.6.33.[31] When a file is permanently deleted or the drive is formatted, the OS sends the TRIM command along with the LBAs that are no longer containing valid data. This informs the SSD that the LBAs in use can be erased and reused. This reduces the LBAs needing to be moved during garbage collection. The result is the SSD will have more free space enabling lower write amplification and higher performance.[27][28][29]
Limitations of TRIM
The TRIM command also needs the support of the SSD. If the firmware in the SSD does not have support for the TRIM command, the LBAs received with the TRIM command will not be marked as invalid and the drive will continue to garbage collect the data assuming it is still valid. Only when the OS saves new data into those LBAs will the SSD know to mark the original LBA as invalid.[29] SSD Manufacturers that did not originally build TRIM support into their drives can either offer a firmware upgrade to the user, or provide a separate utility that extracts the information on the invalid data from the OS and separately TRIMs the SSD. The benefit would only be realized after each run of that utility by the user. The user could set up that utility to run periodically in the background as an automatically scheduled task.[16]
Just because an SSD supports the TRIM command does not necessarily mean it will be able to perform at top speed immediately after. The space which is freed up after the TRIM command may be random locations spread throughout the SSD. It will take a number of passes of writing data and garbage collecting before those spaces are consolidated to show improved performance.[29]
Even after the OS and SSD are configured to support the TRIM command, other conditions will prevent any benefit from TRIM. As of early 2010, databases and RAID systems are not yet TRIM-aware and consequently will not know how to pass that information on to the SSD. In those cases the SSD will continue to save and garbage collect those blocks until the OS uses those LBAs for new writes.[29]
The actual benefit of the TRIM command depends upon the free user space on the SSD. If the user capacity on the SSD was 100 GB and the user actually saved 95 GB of data to the drive, any TRIM operation would not add more than 5 GB of free space for garbage collection and wear leveling. In those situations, increasing the amount of over-provisioning by 5 GB would allow the SSD to have more consistent performance because it would always have the additional 5 GB of additional free space without having to wait for the TRIM command to come from the OS.[29]
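The arithmetic of that example is simple to state; the helper below is illustrative, not a real API.

```python
def trim_recoverable_gb(user_capacity_gb, stored_gb):
    """Upper bound on the space TRIM can expose for GC and wear leveling."""
    return max(user_capacity_gb - stored_gb, 0.0)

# The example from the text: 95 GB stored on a 100 GB drive.
print(trim_recoverable_gb(100, 95))   # 5.0 -> TRIM frees at most 5 GB
```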
Free user space
The SSD controller will use any free blocks on the SSD for garbage collection and wear leveling. The portion of the user capacity which is free from user data (either already TRIMed or never written in the first place) will look the same as over-provisioning space (until the user saves new data to the SSD). If the user only saves data consuming 1/2 of the total user capacity of the drive, the other half of the user capacity will look like additional over-provisioning (as long as the TRIM command is supported in the system).[29][32]
Secure erase
The ATA Secure Erase command is designed to remove all user data from a drive. With an SSD without integrated encryption, this command will put the drive back to its original out-of-box state. This will initially restore its performance to the highest possible level and the best (lowest number) possible write amplification, but as soon as the drive starts garbage collecting again the performance and write amplification will start returning to the former levels.[33][34] Many tools use the ATA Secure Erase command to reset the drive and provide a user interface as well. One free tool that is commonly referenced in the industry is called HDDErase.[34][35] Parted Magic provides a free bootable Linux system of disk utilities including secure erase.[36]
Drives which encrypt all writes on the fly can implement ATA Secure Erase in another way. They simply zeroize and generate a new random encryption key each time a secure erase is done. In this way the old data cannot be read anymore, as it cannot be decrypted.[37] Some drives with an integrated encryption may require a TRIM command be sent to the drive to put the drive back to it original out-of-box state.[38]
Wear leveling
If a particular block were programmed and erased repeatedly without writing to any other blocks, the one block would wear out before all the other blocks, thereby prematurely ending the life of the SSD. For this reason, SSD controllers use a technique called wear leveling to distribute writes as evenly as possible across all the flash blocks in the SSD. In a perfect scenario, this would enable every block to be written to its maximum life so they all fail at the same time. Unfortunately, the process to evenly distribute writes requires data previously written and not changing (cold data) to be moved, so that data which are changing more frequently (hot data) can be written into those blocks. Each time data are relocated without being changed by the host system, this increases the write amplification and thus reduces the life of the flash memory. The key is to find an optimum algorithm which maximizes them both.[39]
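One wear-leveling decision from the paragraph above (routing the next write to the least-worn block) might be sketched like this. It is illustrative only, not any vendor's actual algorithm; real controllers also weigh data temperature and the relocation cost described above.

```python
def pick_target_block(erase_counts):
    """Choose the block with the fewest program/erase cycles so far."""
    return min(erase_counts, key=erase_counts.get)

erase_counts = {"blk0": 120, "blk1": 45, "blk2": 300}
print(pick_target_block(erase_counts))   # blk1 gets the next write
```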
Separating static and dynamic data
The separation of static and dynamic data to reduce write amplification is not a simple process for the SSD controller. The process requires the SSD controller to separate the LBAs with data which is constantly changing and requiring rewriting (dynamic data) from the LBAs with data which rarely changes and does not require any rewrites (static data). If the data is mixed in the same blocks, as with almost all systems today, any rewrites will require the SSD controller to garbage collect both the dynamic data (which caused the rewrite initially) and static data (which did not require any rewrite). Any garbage collection of data that would not have otherwise required moving will increase write amplification. Therefore separating the data will enable static data to stay at rest and if it never gets rewritten it will have the lowest possible write amplification for that data. The drawback to this process is that somehow the SSD controller must still find a way to wear level the static data because those blocks that never change will not get a chance to be written to their maximum P/E cycles.[1]
Sequential writes
When an SSD is writing data sequentially, the write amplification is equal to one meaning there is no write amplification. The reason is as the data is written, the entire block is filled sequentially with data related to the same file. If the OS determines that file is to be replaced or deleted, the entire block can be marked as invalid, and there is no need to read parts of it to garbage collect and rewrite into another block. It will only need to be erased, which is much easier and faster than the read-erase-modify-write process needed for randomly written data going through garbage collection.[8]
Random writes
The peak random write performance on an SSD is driven by plenty of free blocks after the SSD is completely garbage collected, secure erased, 100% TRIMed, or newly installed. The maximum speed will depend upon the number of parallel flash channels connected to the SSD controller, the efficiency of the firmware, and the speed of the flash memory in writing to a page. During this phase the write amplification will be the best it can ever be for random writes and will be approaching one. Once the blocks are all written once, garbage collection will begin and the performance will be gated by the speed and efficiency of that process. Write amplification in this phase will increase to the highest levels the drive will experience.[8]
Impact on performance
The overall performance of an SSD is dependent upon a number of factors, including write amplification. Writing to a flash memory device takes longer than reading from it.[17] An SSD generally uses multiple flash memory components connected in parallel to increase performance. If the SSD has a high write amplification, the controller will be required to write that many more times to the flash memory. This requires even more time to write the data from the host. An SSD with a low write amplification will not need to write as much data and can therefore be finished writing sooner than a drive with a high write amplification.[1][9]
Product statements
In September 2008, Intel announced the X25-M SATA SSD with a reported WA as low as 1.1.[5][40] In April 2009, SandForce announced the SF-1000 SSD Processor family with a reported WA of 0.5 which appears to come from some form of data compression.[5][41] Before this announcement, a write amplification of 1.0 was considered the lowest that could be attained with an SSD.[17] Currently, only SandForce employs compression in its SSD controller.
References
- ^ a b c d e f g h i j Hu, X.-Y.; Eleftheriou, E.; Haas, R.; Iliadis, I.; Pletka, R. (2009). "Write Amplification Analysis in Flash-Based Solid State Drives". IBM. CiteSeerX: 10.1.1.154.8668. Retrieved 2010-06-02.
- ^ a b c Smith, Kent (2009-08-17). "Benchmarking SSDs: The Devil is in the Preconditioning Details". SandForce. Retrieved 2012-08-28.
- ^ a b c Lucchesi, Ray (2008-09). "SSD Flash drives enter the enterprise". Silverton Consulting. Retrieved 2010-06-18.
- ^ a b Kerekes, Zsolt. "Western Digital Solid State Storage - formerly SiliconSystems". ACSL. Retrieved 2010-06-19.
- ^ a b c Shimpi, Anand Lal (2009-12-31). "OCZ's Vertex 2 Pro Preview: The Fastest MLC SSD We've Ever Tested". AnandTech. Retrieved 2011-06-16.
- ^ Ku, Andrew (2012-02-06). "Intel SSD 520 Review: SandForce's Technology: Very Low Write Amplification". Tom's Hardware. Retrieved 2012-02-10.
- ^ a b c Thatcher, Jonathan (2009-08-18). "NAND Flash Solid State Storage Performance and Capability – an In-depth Look". SNIA. Retrieved 2012-08-28.
- ^ a b c d Hu, X.-Y.; Haas, R. (2010-03-31). "The Fundamental Limit of Flash Random Write Performance: Understanding, Analysis and Performance Modelling". IBM Research, Zurich. Retrieved 2010-06-19.
- ^ a b Agrawal, N.; Prabhakaran, V.; Wobber, T.; Davis, J. D.; Manasse, M.; Panigrahy, R. (2008-06). "Design Tradeoffs for SSD Performance". Microsoft. CiteSeerX: 10.1.1.141.1709. Retrieved 2010-06-02.
- ^ Case, Loyd (2008-09-08). "Intel X25 80 GB Solid-State Drive Review". Retrieved 2011-07-28.
- ^ a b c Kerekes, Zsolt. "Flash SSD Jargon Explained". ACSL. Retrieved 2010-05-31.
- ^ a b c d e "SSDs - Write Amplification, TRIM and GC". OCZ Technology. Retrieved 2012-11-13.
- ^ "Intel Solid State Drives". Intel. Retrieved 2010-05-31.
- ^ a b http://pc.watch.impress.co.jp/docs/news/event/20110421_441051.html
- ^ "TN-29-17: NAND Flash Design and Use Considerations". Micron (2006). Retrieved 2010-06-02.
- ^ a b c d Mehling, Herman (2009-12-01). "Solid State Drives Take Out the Garbage". Enterprise Storage Forum. Retrieved 2010-06-18.
- ^ a b c Conley, Kevin (2010-05-27). "Corsair Force Series SSDs: Putting a Damper on Write Amplification". Corsair.com. Retrieved 2010-06-18.
- ^ a b Layton, Jeffrey B. (2009-10-27). "Anatomy of SSDs". Linux Magazine. Retrieved 2010-06-19.
- ^ Bell, Graeme B. (2010). "Solid State Drives: The Beginning of the End for Current Practice in Digital Forensic Recovery?". Journal of Digital Forensics, Security and Law. Retrieved 2012-04-02.
- ^ "SSDs are incompatible with GPT partitioning?!". Retrieval date unknown.
- ^ a b c d Bagley, Jim (2009-07-01). "Over-provisioning: a winning strategy or a retreat?". StorageStrategies Now. p. 2. Retrieved 2010-06-19.
- ^ a b Drossel, Gary (2009-09-14). "Methodologies for Calculating SSD Useable Life". Storage Developer Conference, 2009. Retrieved 2010-06-20.
- ^ a b c d Smith, Kent (2011-08-01). "Understanding SSD Over-provisioning". flashmemorysummit.com. p. 14. Retrieved 2012-12-03.
- ^ Shimpi, Anand Lal (2010-05-03). "The Impact of Spare Area on SandForce, More Capacity At No Performance Loss?". AnandTech.com. p. 2. Retrieved 2010-06-19.
- ^ OBrien, Kevin (2012-02-06). "Intel SSD 520 Enterprise Review". Storage Review. Retrieved 2012-11-29. "20% over-provisioning adds substantial performance in all profiles with write activity"
- ^ "White Paper: Over-Provisioning an Intel SSD (PDF)". Intel (2010). Archived from the original in 2011. Retrieved 2012-11-29.
- ^ a b Christiansen, Neal (2009-09-14). "ATA Trim/Delete Notification Support in Windows 7". Storage Developer Conference, 2009. Retrieved 2010-06-20.
- ^ a b c Shimpi, Anand Lal (2009-11-17). "The SSD Improv: Intel & Indilinx get TRIM, Kingston Brings Intel Down to $115". AnandTech.com. Retrieved 2010-06-20.
- ^ a b c d e f g Mehling, Herman (2010-01-27). "Solid State Drives Get Faster with TRIM". Enterprise Storage Forum. Retrieved 2010-06-20.
- ^ "Enable TRIM for All SSD's [sic] in Mac OS X Lion". osxdaily.com (2012-01-03). Retrieved 2012-08-14.
- ^ "Linux 2 6 33 Features". kernelnewbies.org (2010-02-04). Retrieved 2010-07-23.
- ^ Shimpi, Anand Lal (2009-03-18). "The SSD Anthology: Understanding SSDs and New Drives from OCZ". AnandTech.com. p. 9. Retrieved 2010-06-20.
- ^ Shimpi, Anand Lal (2009-03-18). "The SSD Anthology: Understanding SSDs and New Drives from OCZ". AnandTech.com. p. 11. Retrieved 2010-06-20.
- ^ a b Malventano, Allyn (2009-02-13). "Long-term performance analysis of Intel Mainstream SSDs". PC Perspective. Retrieved 2010-06-20.
- ^ "CMRR - Secure Erase". CMRR. Retrieved 2010-06-21.
- ^ "How to Secure Erase Your OCZ SSD Using a Bootable Linux CD". OCZ Technology (2011-09-07). Retrieved 2012-02-10.
- ^ "The Intel SSD 320 Review: 25 nm G3 is Finally Here". AnandTech. Retrieved 2011-06-29.
- ^ "SSD Secure Erase - Ziele eines Secure Erase". Thomas-Krenn.AG. Retrieved 2011-09-28.
- ^ Chang, Li-Pin (2007-03-11). "On Efficient Wear Leveling for Large Scale Flash Memory Storage Systems". National Chiao Tung University, HsinChu, Taiwan. CiteSeerX: 10.1.1.103.4903. Retrieved 2010-05-31.
- ^ "Intel Introduces Solid-State Drives for Notebook and Desktop Computers". Intel (2008-09-08). Retrieved 2010-05-31.
- ^ "SandForce SSD Processors Transform Mainstream Data Storage". SandForce (2008-09-08). Retrieved 2010-05-31.