Solid state drive (SSD) technology is mature today and is eclipsing traditional magnetic disks in performance, manageability and cost for business applications. Below we dispel some myths that still hold back its adoption.
SSD technology isn’t new in data centers. Yet even after years of proving itself in the most demanding workloads, doubts and reservations about performance, reliability and cost have not entirely gone away. Let’s proceed in order: first a look at what an SSD is, then an examination of some false beliefs that still surround it.
How does an SSD make a computer faster?
Solid state drives (SSDs) are mass storage devices that stand out because they can store large amounts of data in a non-volatile way without the mechanical parts found in traditional hard drives.
They use flash memory, a type of solid-state, non-volatile memory in which information is recorded on transistors that act as memory cells capable of holding an electric charge for long periods.
This basic difference in operation compared to traditional disks is what makes SSD technology so fast at reading and writing data. It also means lower energy consumption and allows the construction of devices that run cooler, make no noise and are smaller than the storage units we were used to.
The 3 main components of solid state drives:
- Controller, cache memory and supercapacitor are the elements that characterize these drives.
- The controller is a microprocessor with a certain number of cores that coordinates all mass memory operations, while the cache is used to temporarily hold the information the controller needs to carry out its work.
- Thanks to a supercapacitor (similar to a battery, but able to charge and discharge very quickly), solid-state drives can finish writes already in progress even if power is lost.
SSD reliability
As mentioned, even though the technology has evolved to the point of becoming a reference for the storage world, prejudices still hold back IT administrators and storage managers from replacing magnetic drives on a large scale.
Below are the five main myths that TechTarget suggests dispelling, together with the arguments needed to understand how SSDs improve storage management, server processing power and therefore the overall efficiency of the data center.
1. HDD or SSD? How to evaluate average SSD lifetime
SSDs don’t last long: there is a grain of truth behind this claim. They do age, but today’s products are built to last many years thanks to better electronics and more effective fault detection and correction. In addition, there are SSDs specifically designed to withstand very heavy workloads, rated in full drive writes per day. These drives reserve more unallocated space internally, which raises the cost per gigabyte but also extends the expected useful life.
It is worth noting that magnetic disks (HDDs) are not immune to aging either: they too carry a rated workload per day, with figures not so different from those of some SSDs. The speed specifications, on the other hand, are higher for SSDs than for any HDD.
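As a rough illustration of how a drive-writes-per-day (DWPD) rating translates into lifetime endurance, here is a minimal Python sketch; the capacity, rating and warranty figures are hypothetical and not taken from any specific product.

```python
# Rough endurance estimate from a drive's DWPD (drive writes per day) rating.
# Capacity, rating and warranty period below are hypothetical examples.

def endurance_tbw(capacity_tb: float, dwpd: float, warranty_years: float) -> float:
    """Total terabytes that can be written over the warranty period."""
    return capacity_tb * dwpd * 365 * warranty_years

# A hypothetical 3.84 TB drive rated at 1 DWPD with a 5-year warranty:
print(f"{endurance_tbw(3.84, 1, 5):.0f} TB written before the rating is exhausted")  # ~7008 TB
```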
2. Is SSD management complicated?
One of the problems that plagued early SSDs was write amplification, a direct consequence of how flash memory (of which SSDs are made) is erased. Unlike blocks on magnetic disks, flash memory cells must first be erased before they can be written again.
The complication is that the erase operation can only be performed on large blocks of cells, typically about 2 MB in size. From time to time the drive therefore has to copy the data it needs to keep elsewhere, erase the block and reclaim the space occupied by files that have already been deleted.
This process, if performed during write operations, can significantly slow the SSD even when a cache buffer sits between the drive and the controller. The best strategy is to free blocks ahead of write operations with the TRIM commands issued by the operating system driver.
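To make the effect of TRIM more concrete, the following toy Python sketch models a single garbage-collection step; the block and page counts are illustrative, not those of any real drive. Without TRIM the controller has no way of knowing that half the pages belong to deleted files, so it copies them anyway before erasing the block.

```python
# Toy model of one flash garbage-collection step, to show why TRIM matters.
# Block and page counts are illustrative, not those of a real drive.

PAGES_PER_BLOCK = 512                      # e.g. a 2 MB erase block of 4 KB pages

def reclaim_block(pages, trimmed):
    """Erase one block: any still-live page must be copied elsewhere first.
    `pages` holds the logical page ids stored in the block; `trimmed` is the
    set of logical pages the OS has already marked as deleted via TRIM."""
    live = [p for p in pages if p not in trimmed]
    return len(live)                       # extra internal writes = write amplification

block = list(range(PAGES_PER_BLOCK))       # a full block of seemingly valid pages
stale = set(range(0, PAGES_PER_BLOCK, 2))  # half of them belong to deleted files

print(reclaim_block(block, trimmed=set()))   # 512 pages copied: drive can't tell they are stale
print(reclaim_block(block, trimmed=stale))   # 256 pages copied: TRIM told it what to skip
```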
Unlike ordinary HDDs, SSDs must not be defragmented: the operation would waste time and I/O bandwidth and shorten the life of the drive. The reason is simple: the write process scatters blocks across the internal space of the SSD, but unlike on an HDD this causes no loss of performance or latency in reaching the next block.
The particular characteristics of SSDs lend themselves to data compression, which further increases performance. Since the amount of data to be read and written can be reduced by roughly five times on average, the effective performance and capacity of the drive grow hand in hand.
If compression and decompression are done at the server level, the storage network also sees a roughly five-fold improvement. All of this saves considerable resources in the data center, and the large I/O capacity of SSDs makes it practical to run compression processes in the background.
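A back-of-the-envelope calculation in Python shows what the roughly five-fold reduction means in practice; the drive capacity and network link speed used here are hypothetical examples.

```python
# Back-of-the-envelope effect of server-side compression on capacity and network load.
# The 5x ratio is the article's cited average; drive size and link speed are hypothetical.

compression_ratio = 5          # average reduction in data read/written
raw_capacity_tb = 7.68         # hypothetical SSD capacity
link_gbps = 25                 # hypothetical server network link

print(f"effective capacity:   ~{raw_capacity_tb * compression_ratio:.1f} TB of logical data")
print(f"effective throughput: ~{link_gbps * compression_ratio} Gbit/s of logical data moved")
```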
3. SSDs and bottleneck risk
One aspect can irritate IT and storage administrators who run disk arrays: current SSDs are so fast that a common array controller can handle only a small number of them before performance is sacrificed. This is because the arrays were designed around the performance of ordinary HDDs (about 1,000 times slower than SSDs in random I/O and more than 100 times slower in sequential operations), that is, to consolidate the slow data flow of traditional disks onto a pair of fast Fibre Channel connections.
This is the bottleneck that appears when SSDs are used in older arrays. The suggestion is to use appliances designed for SSDs and to evaluate a multichannel 100GbE SAN backbone. A similar problem affects servers with old SCSI and SATA interfaces that cannot keep up with drive speeds; this is where the new NVMe protocol comes in handy.
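A quick order-of-magnitude check in Python shows why so few drives are enough to saturate an older array front end; the per-drive throughput and link speeds are illustrative assumptions, not measured values.

```python
# Order-of-magnitude check: how few SSDs saturate an array's front-end links.
# Per-drive throughput and link speeds are illustrative assumptions.

ssd_seq_read_gbps = 3.0 * 8        # ~3 GB/s per NVMe drive, expressed in Gbit/s
dual_fc_gbps = 2 * 16              # a pair of 16 Gbit/s Fibre Channel links
eth_100gbe_gbps = 100              # a single 100GbE SAN link

def drives_to_saturate(link_gbps: float) -> float:
    return link_gbps / ssd_seq_read_gbps

print(f"dual 16G FC saturated by ~{drives_to_saturate(dual_fc_gbps):.1f} drives")
print(f"100GbE link saturated by ~{drives_to_saturate(eth_100gbe_gbps):.1f} drives")
```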
4. The best SSDs: how to evaluate the price?
Another frequently reported problem is the cost of SSDs. Prices fell rapidly for years and then stabilized because of delays in the move to 3D NAND flash memory technology. That problem is now resolved, and a further drop in prices can be expected. In any case, even if a price gap with traditional HDDs remains, it should be weighed against the fact that SSDs let servers do more work, faster. Data compression can even bring the cost per terabyte below that of HDDs.
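To see how compression can tip the cost comparison, here is a small Python calculation of effective cost per terabyte; all prices, capacities and the compression ratio are hypothetical examples, not market figures.

```python
# Effective cost per terabyte once compression is factored in.
# All prices, capacities and the 5x compression ratio are hypothetical examples.

def effective_cost_per_tb(price_usd: float, raw_tb: float, compression_ratio: float = 1.0) -> float:
    return price_usd / (raw_tb * compression_ratio)

hdd = effective_cost_per_tb(price_usd=400, raw_tb=16)                         # uncompressed HDD
ssd = effective_cost_per_tb(price_usd=900, raw_tb=7.68, compression_ratio=5)  # compressed SSD
print(f"HDD: ${hdd:.0f}/TB   SSD (with compression): ${ssd:.0f}/TB")
```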
Price comparisons to choose the best SSDs should also take the interface into account. SATA SSDs today cost less than half as much as SAS SSDs of similar capacity, though the latter outperform them by a wide margin. NVMe drives are still expensive, but all indicators suggest that in the future their prices will not deviate much from those of SATA SSDs of equivalent capacity.
5. SSD capacity
One of the cited disadvantages of SSDs is their capacity. Yet the technology already exceeds that of magnetic disks (currently stalled at 16 TB per unit) with 2.5” drives of 32 TB, and the 50 TB mark is expected to be passed soon.