What Is an SSD? Speeds, Capabilities, and Technologies

RAM, GPUs, and CPUs have evolved impressively in recent years. So have hard drives (HDDs), mainly in the amount of data they can store. However, today’s applications demand even more sophisticated storage devices, ones that combine small physical size, reasonable storage capacity, lower power consumption, and durability. SSDs (Solid-State Drives) are the answer to this need.

In this text, I will explain exactly what an SSD is. I will also address related concepts such as form factors, Flash memory, TRIM, and construction technologies, among others.

What is an SSD?

Let’s start with the definition. As you may already know, SSD is the acronym for Solid-State Drive. It is a type of data storage device that, in a way, competes with hard drives.

It is commonly accepted that the name alludes to the absence of moving parts in the device’s construction, unlike HDDs, which need motors, spinning platters, and read/write heads to function.

The term “solid state” actually refers to the use of solid semiconductor material to carry electrical signals between transistors, rather than the vacuum tubes used in the valve era.

In SSDs, storage is handled by one or more memory chips, completely eliminating mechanical systems from their operation. As a consequence, these drives tend to consume less energy; after all, they do not need to power motors or similar components (note, however, that other factors can increase energy consumption, depending on the product).

This design also lets “SSD disks” (an SSD is not a disk, so the term is technically incorrect, although relatively common) occupy less physical space, because the data is stored on very small, specialized chips. Thanks to this, SSD technology came to be widely used in highly portable devices, such as ultra-thin notebooks (ultrabooks) and tablets.

Another advantage of having no moving parts is silence: you do not hear an SSD at work, as you can with an HDD. Physical endurance is also a benefit: the risk of damage is lower when the device is dropped or shaken (which is not to say that SSDs are indestructible).

In addition, SSDs weigh less and, in most cases, can operate at higher temperatures than hard drives tolerate. There is yet another significant feature: data transfer between RAM and an SSD usually takes much less time, speeding up data processing.

Of course, there are also disadvantages: SSDs cost much more per gigabyte than HDDs, although prices tend to fall as adoption grows. Because of this, and in many cases also because of technological limitations, the vast majority of SSDs on the market offer much less storage capacity than hard drives in the same price range.

Flash memory: the main ingredient

SSD technology is based on chips specially designed to retain data even when no power is supplied. They are, therefore, non-volatile devices: you do not need batteries or a constant power connection to preserve the data stored on them.

To make this possible, SSD manufacturers settled on Flash memory. It is a type of EEPROM* memory (see the explanation below) developed by Toshiba in the 1980s. Flash memory chips are similar to the RAM used in computers but, unlike RAM, they do not lose their data when the power supply is cut, as noted above.

* EEPROM is a type of ROM that allows data to be rewritten. Unlike EPROM memories, however, the erase and write processes are performed electrically, so the chip does not have to be moved to special equipment for re-recording to take place.

There are basically two types of Flash memory: NOR (Not OR) Flash and NAND (Not AND) Flash, named after the logic-gate architecture each uses to map data. The first type allows random access to memory cells, as with RAM, and at high speed. In other words, NOR Flash can quickly access data at different memory positions without having to do so sequentially. NOR Flash is used in BIOS chips and smartphone firmware, for example.

NAND Flash, on the other hand, also works at high speed, but it accesses memory cells sequentially and treats them in groups, that is, in blocks of cells, instead of accessing them individually. In general, NAND memory can also store more data than NOR memory in a chip of equivalent physical size. It is therefore the cheaper of the two and the type most used in SSDs.

SLC, MLC and TLC Technologies

Currently, there are three main technologies used in NAND Flash memory: Single-Level Cell (SLC), Multi-Level Cell (MLC), and Triple-Level Cell (TLC). You may find one of these three acronyms in the description of the SSD you are considering, so it is good to know them.

Single-Level Cell (SLC)

The first SSDs were based on chips with SLC technology, which stores exactly one bit in each memory cell. This one-bit-per-cell scheme makes the device more expensive, because more cells are needed to store the same amount of data as MLC and TLC chips.

In contrast, an SLC chip is quite reliable, typically supporting around 100,000 write/erase cycles per cell, against roughly 10,000 for MLC and 5,000 for TLC (these numbers may vary as the technology evolves).

SLC chips also usually allow read and write operations to run faster; after all, each cell stores only one bit, 0 or 1. In MLC, for example, a cell holds two bits, and this increase in data per cell makes each operation a little slower.

SLC technology has largely fallen out of general use and is now aimed at very specific applications.

Multi-Level Cell (MLC)

The MLC type is quite common today. It uses distinct voltage levels to make a memory cell store two bits (in theory, it could store more) rather than just one, as in SLC.

Thanks to MLC technology, the cost of Flash storage devices has fallen, expanding the supply of products such as more affordable flash drives and smartphones.

As you may have guessed, MLC allows the SSD to store more data per chip: where there was only one bit, there are now two. There is a disadvantage, though: performance is usually lower than with SLC, as explained in the previous topic.

This is because, in MLC, a cell’s two bits can represent four values: 00, 01, 10, and 11. The drive controller must therefore use very precise voltages to correctly identify which value the cell holds, and this process slows down each operation.

Triple-Level Cell (TLC)

As the name indicates, the TLC type stores three bits per cell, so the amount of data the drive can hold increases considerably. It is the most recent of the three standards on the market.

However, performance is also lower than with SLC technology: three bits yield eight possible values, requiring an even wider variety of voltage levels: 000, 001, 010, 011, 100, 101, 110, and 111.
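A quick way to see why each extra bit per cell demands more voltage levels is to enumerate the states. Here is a minimal Python sketch, purely illustrative:

```python
# Each additional bit per cell doubles the number of voltage states
# the controller must distinguish when reading the cell.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    states = [format(v, f"0{bits}b") for v in range(2 ** bits)]
    print(f"{name}: {bits} bit(s)/cell -> {2 ** bits} states: {states}")

# SLC: 1 bit(s)/cell -> 2 states: ['0', '1']
# MLC: 2 bit(s)/cell -> 4 states: ['00', '01', '10', '11']
# TLC: 3 bit(s)/cell -> 8 states: ['000', '001', ..., '111']
```

More states packed into the same voltage range means finer, slower sensing, which is exactly why each step from SLC to MLC to TLC trades speed and endurance for capacity.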

Here, the main benefit really is the gain in storage space, since TLC memory tends to be slower than MLC chips, which in turn perform worse than SLC technology.

Even so, TLC and MLC memory is still faster than HDDs, which is why it is viable for most applications: in many situations, an extremely fast SSD is not worth it if it does not provide enough storage capacity.

3D NAND (V-NAND)

The industry’s efforts to increase SSD storage capacity have not stopped there. Leading companies employ a technique called 3D NAND in their most sophisticated products. The “3D” in the name refers to the stacking of memory cells.

To make this easier to understand, imagine that an SSD is a warehouse full of boxes. Each box stores information. These boxes lie side by side, occupying the entire floor and thus forming a two-dimensional (2D) plane.

As the warehouse filled up, someone had the idea of putting one box on top of another, forming stacks, that is, a three-dimensional (3D) arrangement. With this approach, storage capacity increased, but the warehouse stayed the same size.

This is more or less the principle of 3D NAND: instead of a single horizontal layer of memory cells on the chip, there are several, forming a stack.

The industry started with 24-layer stacks but soon moved on to 32. To cite one example, in 2015 Intel introduced a 32-layer MLC chip with a capacity of 256 gigabits (32 GB). Another company’s 32-layer chip used TLC technology and therefore offered 384 gigabits (48 GB). Combine eight of these MLC chips and you get a 256 GB SSD (or, with the TLC chips, 384 GB).
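The chip-to-drive arithmetic is easy to verify: gigabits divide by 8 to give gigabytes.

```python
# Capacity math for the 32-layer chips cited above (8 bits per byte,
# eight chips per drive, as in the example).
for tech, chip_gbit in [("MLC", 256), ("TLC", 384)]:
    chip_gb = chip_gbit / 8
    print(f"{tech}: {chip_gb:.0f} GB per chip -> {8 * chip_gb:.0f} GB SSD")

# MLC: 32 GB per chip -> 256 GB SSD
# TLC: 48 GB per chip -> 384 GB SSD
```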

In 2016, the industry began investing in chips with 48 and 64 layers. Western Digital, for example, had announced, as of the last update of this text, a 64-layer MLC chip that could store 512 gigabits (64 GB) of data.

All this, it should be noted, is done without increasing the physical size of the device. The growth in the number of layers is made possible by changes in manufacturing techniques and the use of certain materials.

Samsung is one of the companies that uses stacking, but it builds on a technology called Charge Trap Flash (most other manufacturers work with the FGMOS technique, Floating-Gate MOSFET). Instead of 3D NAND, the company uses the name V-NAND (Vertical NAND).

3D XPoint

In mid-2015, Intel and Micron announced 3D XPoint, a new type of non-volatile memory that promises to be up to 1,000 times faster than conventional NAND flash memory. Yes, a thousand times!

3D XPoint memory is also denser, meaning it fits more memory cells in the same area. As a result, it can store much more data: up to ten times more. As if that were not enough, the technology used in its construction makes 3D XPoint up to a thousand times more durable.

But those are theoretical estimates. In 2016 and early 2017, the first 3D XPoint SSDs tested proved up to four times faster in write operations and three times more durable than NAND Flash drives.

These figures may improve as the technology matures, but for its early years on the market, Intel and Micron expect 3D XPoint to be up to ten times faster, three times more durable, and able to hold up to four times more data.

Still, it’s a significant step forward, isn’t it? All these advantages are possible because 3D XPoint is also based on a layering technique (as the “3D” indicates), but its cells sit at the intersections of the wires in each layer, packed very close together. Combine this with the fact that no transistor is needed at each cell (unlike NAND memory), and the density ends up much higher.

Basically, this is what allows 3D XPoint memory to store more data and transfer it faster. Its construction also makes it easy to access small blocks of memory (whereas NAND Flash, as a rule, works with larger blocks), streamlining read and write operations.

There is no expectation that 3D XPoint will replace Flash memory. At least initially, the new technology will serve only niche markets.

Formats and interfaces: M.2, SATAe, NVMe and more

By the definition we have used so far, any device with Flash memory could be understood as an SSD. In fact, though, it is more appropriate to think of the SSD as a type of device that competes with the hard drive; we cannot forget the word “Drive” in the name.

Following this line of thinking, the industry began to offer SSDs as if they were HDDs, only with memory chips instead of disks. These devices can thus be connected to SATA interfaces, for example. We can therefore find SSDs in 1.8-, 2.5-, and 3.5-inch form factors, just like HDDs.

The problem is that even the fastest version of SATA (SATA III), which reaches data transfer rates of up to 6 Gb/s (gigabits per second), may not be enough for certain SSDs: many models, especially high-performance ones (such as those aimed at gamers), can work at speeds above what the SATA III bus provides.

SATA Express

To address this limitation, the industry turned to several alternatives, among them SATA Express (also known as SATAe). The name refers to the combination of two technologies: SATA and PCI Express.

PCI Express technology is quite common in computers (your video card probably uses this standard) and offers high data transfer speeds. Why not take full advantage of this potential with SSDs?

The SATA Express connector combines two standard SATA plugs with a third, smaller one that also carries power. The interesting thing about this approach is that if the SATA Express port is not being used on the motherboard, it can still connect up to two devices via “normal” SATA.

Theoretically, SATA Express can reach data transfer rates of up to 16 Gb/s.
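To put the SATA III and SATA Express numbers side by side, here is a rough conversion to usable MB/s. It assumes the published encoding overheads (8b/10b for SATA III, 128b/130b for the PCIe 3.0 lanes behind SATAe’s 16 Gb/s figure); real drives deliver less:

```python
# Usable throughput = line rate minus encoding overhead.
def effective_mb_per_s(line_rate_gbit: float, encoding_efficiency: float) -> float:
    return line_rate_gbit * 1000 / 8 * encoding_efficiency

print(effective_mb_per_s(6, 8 / 10))      # SATA III:             ~600 MB/s
print(effective_mb_per_s(16, 128 / 130))  # SATAe (2x PCIe 3.0): ~1969 MB/s
```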

PCI Express

If PCI Express (PCIe) is that fast, wouldn’t it be convenient to have SSDs based entirely on this technology? Yes! Such drives do exist. Some models reach data read rates of up to 2,400 MB/s. The write speed is usually less than half the read rate, but it is still high.

Performance like that weighs on the wallet: PCI Express SSDs are often very expensive, which is why they tend to be used only in high-performance applications.

M.2

M.2 (formerly known by the acronym NGFF, Next Generation Form Factor) is a specification that can work with both SATA III and PCI Express. The standard can therefore provide very high speeds: up to 32 Gb/s with four lanes of PCI Express 3.0, the fastest version at the time of writing (although no SSD can reach that speed yet).
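The 32 Gb/s figure is simple lane math: PCIe 3.0 runs at 8 GT/s per lane, and 128b/130b encoding leaves almost all of that as usable bandwidth. A quick check:

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
lanes = 4
raw_gbit_s = lanes * 8                       # 32 Gb/s, the figure quoted above
usable_gb_s = raw_gbit_s * (128 / 130) / 8   # ~3.94 GB/s of usable bandwidth
print(raw_gbit_s, round(usable_gb_s, 2))
```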

Another advantage of M.2 is its flexibility of formats, which has brought the standard to both very thin laptops and desktops. Module sizes range from 12 mm to 30 mm in width and from 16 mm to 110 mm in length.

With M.2, the SSD takes the form of a small card. The 22 mm wide option is the most common. Smaller models, of course, are best suited to compact devices.

NVMe

NVMe (Non-Volatile Memory Express) is not a connection standard that competes with SATA Express or M.2, but rather a protocol that standardizes communication between the host and the storage device and optimizes data access times.

In SATA technology, a specification called AHCI handles this task. The problem is that AHCI is better suited to HDDs: its working model assumes data is accessed at different positions on spinning disks.

Since SSDs have no disks, NVMe was developed to unlock potential that AHCI cannot reach. In essence, NVMe multiplies the drive’s ability to receive read and write commands simultaneously. The result is lower latency (the time data takes to be accessed and read), so retrieving data ends up faster.

With lower latency, workloads also run faster, allowing SSDs to spend more time idle. This saves energy and even extends the life of the unit.

The NVMe specification is not limited to a single connection technology: it can be used with units based on PCI Express and M.2, for example.
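To give a sense of the parallelism involved: AHCI exposes a single command queue of 32 entries, while the NVMe specification allows up to 65,535 I/O queues with up to 65,536 commands each. These are spec-level ceilings, not what any given drive actually exposes; the sketch below just makes the gap concrete:

```python
# Spec-level command-queue limits (illustrative comparison only).
ahci = {"queues": 1, "commands_per_queue": 32}
nvme = {"queues": 65_535, "commands_per_queue": 65_536}

for name, spec in [("AHCI", ahci), ("NVMe", nvme)]:
    total = spec["queues"] * spec["commands_per_queue"]
    print(f"{name}: up to {total:,} commands in flight")

# AHCI: up to 32 commands in flight
# NVMe: up to 4,294,901,760 commands in flight
```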

U.2

A major limitation of PCI Express and M.2 is that both standards require the SSD to plug directly into a slot. If you want to connect the SSD to the computer via a cable, you have to use another standard, such as SATA Express, but then you cannot take advantage of the NVMe specification. That is why the industry created yet another connection standard (yes, one more): U.2 (for some time called SFF-8639).

U.2 allows a cable connection while supporting PCI Express 3.0 and, of course, NVMe. Problem solved, then, except perhaps for one small detail: U.2 cables can be quite expensive.

Nanometers

We have already discussed 3D architecture, technologies such as MLC and TLC, and other aspects that help increase SSD storage capacity. But one is still missing: chip miniaturization.

The goal is to make the transistors that form the chip as small as possible, so the component can store more data without growing in physical size. This dimension is measured in nanometers (nm), a unit equivalent to one millionth of a millimeter, that is, one millimeter divided by one million.

On the market, we find drives with 34 nm, 25 nm, and 20 nm chips, for example. Nowadays, more sophisticated SSDs with 15 nm and 10 nm chips can also be found. At the time of this text’s last update, 7 nm options were being discussed.

Miniaturization should not go much further, though. It is not an easy process: besides the costs involved, it can lead to problems such as instability and higher read error rates. That is why the industry is studying alternative technologies, such as the aforementioned 3D XPoint memory.

TRIM

When it comes to SSDs, especially newer drives, you should pay attention to a feature that has been gaining prominence: TRIM. It is extremely important. Let us understand why.

In general, when you delete a file, it is not actually erased by the operating system. Instead, the area it occupies is marked as “free to use” and the data simply remains hidden until a new write occurs there. That is why many deleted-file recovery programs can succeed at their task.

On HDDs, available space can be written and rewritten without major difficulty. This is possible because, on hard disks, data is grouped into 512-byte sectors (learn more in this article about HDDs), and each sector can be written and rewritten independently.

On SSDs, this process is a bit different. In Flash memory, data is grouped into blocks, usually of 512 KB, and each block consists of several subdivisions called pages, each usually 4 KB.

The problem is that these blocks cannot simply be written and rewritten with the same ease as on HDDs. To rewrite, the drive must first erase the data in the recorded area, returning it to its original state, and only then insert the new data.

The issue is aggravated by the fact that erasure generally has to cover the whole block, not just certain pages of it. As you may have noticed, this situation can cause a significant loss of performance.

One way to handle this is to have the operating system always write to a fresh area of the SSD, but that is only a palliative: sooner or later, the unused blocks will all be filled. TRIM exists precisely to prevent the user from “panicking” upon noticing that the SSD is “overwriting” data and slowing down as a result.

With TRIM, the operating system tells the drive which pages belong to deleted files so they can be “cleared” in advance, instead of simply marking them as “available for use”, as happens on hard drives. Thus, when blocks that go through this process need to receive new data, they are already prepared for it, as if nothing had ever been recorded there.
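A toy simulation may make the erase-before-write cycle and TRIM’s role clearer. This is a deliberately simplified sketch (block and page sizes follow the figures above; real controllers add wear leveling, garbage collection, and much more):

```python
# Toy model: one block of 128 pages (128 x 4 KB = 512 KB, as described above).
PAGES_PER_BLOCK = 128

class Block:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK  # None = erased (clean) page

    def write_page(self, index, data):
        if self.pages[index] is not None:
            # Flash cannot overwrite in place: the WHOLE block must be
            # erased first, so surviving pages are copied out and back.
            survivors = {i: d for i, d in enumerate(self.pages)
                         if d is not None and i != index}
            self.pages = [None] * PAGES_PER_BLOCK  # slow block erase
            for i, d in survivors.items():         # rewrite live pages
                self.pages[i] = d
        self.pages[index] = data

    def trim(self, index):
        # TRIM: the OS reports the page as truly unused, so future writes
        # need not preserve its stale contents.
        self.pages[index] = None

block = Block()
block.write_page(0, b"old file")
block.trim(0)                  # file deleted; OS issues TRIM
block.write_page(0, b"new")    # no erase cycle needed: the page is clean
```

Without the `trim()` call, the second write to page 0 would trigger the costly erase-and-copy path; that difference is exactly the performance problem TRIM avoids.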

This is why TRIM is so important: it can prevent serious performance problems. Note that the feature must be supported by both the operating system and the SSD. That is the case with Windows 10 and recent Linux versions, for example.

Characteristics to be observed when choosing an SSD

When choosing an SSD, it is always important to check the device’s specifications. Some of them relate to performance: how many megabytes can be read per second? How many can be written?

These parameters vary greatly from one product to another. It is common, for example, to find SSDs built from a set of ten Flash memory chips. The drive’s controller (discussed later) can split a given file into ten parts so that they are written simultaneously across the chips, making the whole write faster.
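The idea resembles RAID 0-style striping. Here is a minimal, hypothetical sketch of splitting data across chips (real controllers do this at the page level, in hardware, with far more bookkeeping):

```python
# Hypothetical round-robin striping of a buffer across N Flash chips.
NUM_CHIPS = 10
PAGE = 4096  # one 4 KB page per stripe unit

def stripe(data: bytes, num_chips: int = NUM_CHIPS, page: int = PAGE):
    """Distribute the buffer's pages across the chips, round-robin."""
    chips = [[] for _ in range(num_chips)]
    for n, offset in enumerate(range(0, len(data), page)):
        chips[n % num_chips].append(data[offset:offset + page])
    return chips  # each chip's list can be written in parallel

chips = stripe(b"x" * 40 * 4096)  # a 160 KB buffer...
print([len(c) for c in chips])    # ...lands as 4 pages on each of the 10 chips
```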

However, having more or fewer such resources can improve or worsen this process, hence the importance of checking the details. Fortunately, manufacturers almost always state how much data can be written and read per second.

Another parameter worth observing is IOPS (Input/Output Operations Per Second), which indicates the estimated number of input and output operations the drive can perform per second, for both reads and writes. The bigger these numbers, the better.
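IOPS and MB/s are related through the size of each operation. A quick back-of-the-envelope conversion (4 KB random I/O is the usual benchmark convention, not a universal constant):

```python
# Throughput = IOPS x operation size.
def iops_to_mb_per_s(iops: int, op_size_kb: int = 4) -> float:
    return iops * op_size_kb / 1024

print(iops_to_mb_per_s(100_000))  # 100,000 IOPS at 4 KB -> ~390.6 MB/s
```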

As for storage capacity, SSDs tend to come off worse in comparison with HDDs. So it is not uncommon to find notebooks that pair a 128 GB SSD with a 1 TB hard drive, for example. The ideal here is to carefully assess how much space you need: high-capacity drives are very expensive and may not offer good value for money. The minimum, by current standards, is a 128 GB SSD.

Also check which technologies your computer supports. Do not buy an M.2 SSD, for example, before making sure the motherboard in your desktop or laptop is compatible with this format.

Your machine may support more than one standard, for example, SATA and U.2. U.2 is faster but costs more. You then need to weigh whether the performance gain justifies the higher investment or whether a SATA drive is enough for your needs.

Finally, it is worth checking the average lifespan predicted by the manufacturer, and whether the drive has additional features such as a cache buffer, the aforementioned TRIM, SMART monitoring technology (widely used with HDDs), or RoHS compliance (Restriction of Certain Hazardous Substances), which indicates that the manufacturer avoided certain substances harmful to health and the environment when making the product.

SSD controller

Like HDDs, SSDs also have controllers. It is up to the controller, a kind of processor, to mediate the exchange of data between the computer and the Flash memory, manage read and write operations, and detect and correct errors, among other tasks.

Because SSD controllers need to handle large volumes of data, they rely on features that enable or facilitate this work, such as dedicated cache memory and data-compression algorithms that speed up operations or extend the drive’s life.

Which features a controller implements varies from manufacturer to manufacturer and from one SSD model to another. Companies often disclose few details about how these chips work, to protect their technology, which is why the subject cannot be explored in much depth.

History: the first SSD on the market

SSDs began to appear on the market in large numbers starting in 2006, but the technology itself, it can be argued, came much earlier, though not under the same name.

In 1976, a company named Dataram put on the market a data storage device called Bulk Core (link in PDF), composed of eight modules of a kind of non-volatile memory, each with the incredible (for the time) capacity of 256 KB.

Bulk Core “emulated” the disk drives of the time, with the advantage of being faster than they were. The equipment cost about US$ 10,000 and was used in data processing centers.

Given its characteristics (non-volatile memory and higher data transfer speeds), Bulk Core can be considered the first SSD on the market.

Wrapping up

Many people wonder whether SSD technology signals the end of the hard drive era. It is hard to say. In terms of storage capacity, HDDs still offer an excellent cost-benefit ratio, not to mention a very satisfactory average lifespan.

Because SSDs cost more per unit of storage, and HDDs continue to be optimized for greater capacity and durability, the two categories should coexist “peacefully” for a long time to come.