The shipping container the A55 uses is attractive in a non-garish way, offers a decent amount of protection, and allows you to see exactly what you are buying before you pay for it. With that said, it is radically different from the A60 or A80 models we looked at previously. Those higher tiered models use cardboard, not plastic, for their boxes. Basically, for the A55 series Silicon Power did not recycle their previous design and instead adapted their DDR4 RAM shipping container. The downside is that blunt force trauma protection is not the box’s strong suit; nor is ESD protection.
Of the two… electrostatic discharge is the one to actually be concerned about. Plastic cases love to generate static electricity, so we recommend ‘grounding’ yourself before opening the case. To do so, simply touch a piece of metal before cutting the case open with scissors. That of course leads us to the last issue: the case is a sealed clamshell. Once you cut it open there is no putting ‘the genie back in the bottle’. Overall, while more than adequate and pretty decent… it is a downgrade compared to their higher tiered models.
Moving on. The accessory list is rather easy to go over, as there is no accessory list. This means if you need an M.2 screw, or a cut-down version of Acronis True Image, the A55 will not be right for you. The freebie Acronis versions are easily available elsewhere… and with free trials of the full version on offer, they are rarely worth the download anyway. As for M.2 screws, they are cheap and your motherboard should have come with all you need. So why pay for something you most likely will never use?
When you actually pick the A55 up for the first time and take a close look, a few things pop out. The first and foremost of those is that this is a single sided M.2 drive. This is a big deal. In 90 to 95 percent of builds only one side of an M.2 device will get active cooling once installed. This leaves the other side’s components to their own devices. Put simply, single sided M.2 drives throttle less as their critical components run cooler. When looking for M.2 drives (regardless of asking price), single sided is right there in the top five things you should be looking for. Not the most important, let alone the ‘be all and end all’… but very handy to have.
The reason Silicon Power was able to make a single sided, 1 Terabyte drive is simple: there is no onboard RAM cache. Instead it uses the latest and ‘greatest’ Silicon Motion SM2259XT SATA controller. Much like the SMI NVMe controller found in the A60 series, this ‘DRAM-less’ controller relies upon your system’s main RAM for its cache (aka the DDR4 sticks next to your CPU) via HMB.
(Image Courtesy of Longsys)
For those who do not know what HMB, or Host Memory Buffer, technology is, a bit of backstory is required. In a nutshell, HMB-aware operating systems can reserve a chunk of system memory for use by the storage controller, much the same way integrated graphics can reserve a chunk of system memory for its ‘video buffer’. The idea behind this branch of the Solid State technology tree was simple: by not having a RAM IC on the SSD, manufacturers could reduce build costs. In theory it should have worked as well as APUs do at day-to-day video processing tasks.
The reality is a bit more… nuanced, to say the least. The main reason for having an onboard RAM cache, or volatile memory buffer, on the storage device is also simple: it is where the controller stores the ‘hot’ translation table it needs. The need for a translation table larger than the static RAM integrated into the controller comes from the fact that SSD controllers do not actually keep data where it was first written (i.e. where the OS expects it to be). Instead, every so often the controller will write the data to a new location (aka ‘wear leveling’). Yes, in a perfect world, every time data gets moved its pointers in the partition table (aka ‘MBR’, Master Boot Record, in older systems, and ‘GPT’, GUID Partition Table, in newer systems) would also get updated with the new LBA address(es). This however would defeat the purpose of wear-leveling and eat up a ton of P/E cycles.
(Image Courtesy of cnx-software)
Instead, solid state drives have two tables: one that matches what the OS uses to make requests (the logical map), and the physical map of where the data is actually stored. To keep the two tables from getting entirely messed up, the controller uses a third table / map… one where the changes that have not yet been written back to the physical map are stored. This is the ‘hot’ table, and it is stored in volatile Random Access Memory until enough changes between the two ‘real’ maps warrant a re-write (at which point the hot table is flushed and the process starts all over again).
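To make that three-map relationship a little more concrete, here is a deliberately over-simplified Python sketch of the idea; the names, flush threshold, and logic are illustrative only and are not Silicon Motion’s actual firmware:

```python
# Over-simplified model of an SSD's address translation maps. Real firmware
# works on pages/blocks and handles garbage collection, power-loss protection,
# etc.; this only illustrates the 'hot table as a pending-changes buffer' idea.

FLUSH_THRESHOLD = 4   # illustrative: flush after this many pending changes

physical_map = {}     # LBA -> where the data actually sits in NAND (committed)
hot_table = {}        # pending relocations not yet folded into physical_map (RAM)

def wear_level_move(lba, new_location):
    """Controller silently relocates data; only the hot table records it."""
    hot_table[lba] = new_location
    if len(hot_table) >= FLUSH_THRESHOLD:
        flush_hot_table()

def flush_hot_table():
    """Fold the accumulated changes into the 'real' map, then start over."""
    physical_map.update(hot_table)
    hot_table.clear()

def read(lba):
    """The OS asks by LBA (its logical view); check the hot table first."""
    return hot_table.get(lba, physical_map.get(lba))
```

The key takeaway is that the hot table gets hammered with small writes, which is exactly why where it lives (NAND, onboard DRAM, or host RAM) matters so much.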
Generally speaking, this hot table requires about 1MB per GB of used physical storage capacity. Before HMB, DRAM-less controllers stored it in a section of the NAND… and ate NAND life cycles like a hungry teenager at an all-you-can-eat buffet. Your DDR3/4/etc. system RAM, on the other hand, does not care all that much about constant writes to this table (as RAM is much, much more robust than NAND). It is also technically faster than NAND. Thus HMB was born, and it was seen as a great solution for entry level solid state drives.
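As a quick back-of-the-envelope illustration of that 1MB-per-GB rule of thumb (the figures below are hypothetical, not measured from the A55):

```python
# Rough sizing using the ~1MB-per-GB rule of thumb quoted above.
used_capacity_gb = 500                     # e.g. a half-full 1TB drive
table_size_mb = used_capacity_gb * 1       # ~1MB of table data per GB in use
print(f"~{table_size_mb}MB of 'hot' table data for {used_capacity_gb}GB in use")
# -> ~500MB of 'hot' table data for 500GB in use
```

That is far more than the few kilobytes of SRAM inside the controller, which is where the next problem comes from.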
Unfortunately, while lower than the pre-HMB method of reading from the NAND, the latency penalty from accessing the system’s RAM is greater than that of an onboard RAM cache. To ‘mitigate’ this problem, controller manufacturers added a bit more static RAM to their controller designs, enough to hold a small portion of the hot table right inside the controller itself (think L3 cache for an analogy). This SRAM unfortunately is still measured in Kilobytes, not Megabytes. So once the controller saturates its SRAM buffer, read and write requests become slower, as the data has to be transferred to and from the main system’s RAM via the memory controller and PCIe bus. That is a lot more steps than simply reading from an onboard RAM cache chip.
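A toy model of that lookup path looks something like the following; the latency figures are invented purely for illustration and are not SM2259XT measurements:

```python
# Toy model of the 'SRAM first, host RAM second' lookup path described above.
# Latency numbers are invented for illustration only.
SRAM_ENTRIES = 4          # controller-internal SRAM only holds a handful of entries
SRAM_COST_US = 0.1        # hit inside the controller: cheap
HOST_RAM_COST_US = 10.0   # round trip to system RAM over the bus: far more steps

sram_slice = {}           # tiny on-controller slice of the hot table
host_table = {}           # the rest of the table, parked in system RAM via HMB

def lookup(lba):
    """Return (location, cost_in_us), preferring the on-controller SRAM slice."""
    if lba in sram_slice:
        return sram_slice[lba], SRAM_COST_US
    location = host_table.get(lba)
    if len(sram_slice) >= SRAM_ENTRIES:          # SRAM saturated: evict an entry
        sram_slice.pop(next(iter(sram_slice)))
    sram_slice[lba] = location                   # pull the entry in for next time
    return location, HOST_RAM_COST_US
```

Once requests overflow that tiny SRAM slice, nearly every lookup pays the host-RAM round trip, which is the slowdown described above.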
The controller manufacturers’ product literature played down this issue, as they felt that for home users the SRAM buffer, combined with more aggressive flushing of the hot table, would mitigate the problem to the point of it not being noticed. Once again reality was different from theory. By late 2018 what was once a ‘flood’ of HMB-enabled devices being released had become a trickle. Make no mistake, there are some very good technical reasons why HMB can actually be a good solution, but for the most part the market has moved in a different direction – namely using QLC NAND instead of TLC NAND to keep prices down.
Sadly, even with QLC NAND there is still only room for five ICs on the top of a typical M.2 2280 PCB. With one taken up by the controller and one by the RAM cache, that leaves room for only three NAND ICs. In a perfect world, SSD manufacturers would simply use higher stacked / higher density NAND ICs and keep the total number of NAND chips low… but this is not a perfect world. Stacking NAND die packs on top of each other comes with its own host of thermal and price related issues… with the price difference being big enough that few large capacity M.2 SSDs are single sided. Going double sided is the standard solution. Silicon Power did not take the ‘standard’ solution. They wanted the A55 to be like their A60… just less expensive. They wanted a single sided, high capacity drive for the entry level / SATA market. This meant opting for a DRAM-less controller, as there is no third option.
Based on their experience with SMI, the latest SM2259XT was pretty much a no-brainer. Room for four less expensive NAND ICs that could be TLC rather than QLC based, a single sided layout, and still ‘good enough’ performance, all backed by a good ECC engine (aka NANDXtend)… that is not a bad compromise given the market niche they were interested in satisfying with the A55 series.
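To see why that one extra NAND footprint matters, here is some quick board-level arithmetic; the dies-per-package counts are illustrative assumptions based on the die density quoted below, not a teardown of the A55:

```python
# Board-level arithmetic for a single sided M.2 2280 layout, following the
# reasoning above: roughly five IC footprints on the top of the PCB.
# Dies-per-package figures are assumptions, not taken from an A55 teardown.
DIE_GB = 64                       # 512Gbit = 64GB per TLC die
TOP_SIDE_FOOTPRINTS = 5

def nand_packages(has_dram_ic: bool) -> int:
    reserved = 1 + (1 if has_dram_ic else 0)   # controller (+ DRAM cache IC)
    return TOP_SIDE_FOOTPRINTS - reserved

for has_dram, dies_per_package in ((True, 4), (False, 4)):
    packages = nand_packages(has_dram)
    capacity_gb = packages * dies_per_package * DIE_GB
    print(f"DRAM cache: {has_dram}  NAND packages: {packages}  "
          f"capacity with {dies_per_package}-die stacks: {capacity_gb}GB")
# With a DRAM IC:  3 packages * 4 dies * 64GB = 768GB  (short of 1TB)
# DRAM-less:       4 packages * 4 dies * 64GB = 1024GB (the A55's '1TB')
```

Under those assumptions, dropping the DRAM IC is what makes a single sided 1TB layout work without resorting to taller (hotter, pricier) die stacks.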
In a very interesting move, Silicon Power has not opted for Toshiba BiCS TLC NAND like their higher tiered models. Instead it is using Micron’s latest and greatest TLC NAND – 96 layer CMOS under Array ‘B27A’ TLC NAND. These NAND chips represent the third generation of ‘3D’ NAND from Micron. The 3rd generation really does not bring much to the table over B17A / Gen 2.2 CuA NAND. While yes, it is 96 layers (two strings of 48 per die) of NAND instead of 64 layers (two strings of 32), neither the density per die pack nor the speed has changed. It is still 512Gbit/64GB, 666MT/s NAND. The only real change is the cost of manufacture and the layout of the 20nm NAND cells themselves. Basically, Micron re-arranged their Gen 2.2 CuA cell layout so that the footprint in the X and Y axes is smaller but the Z-axis is taller, by placing roughly a third fewer NAND cells on each layer and using 50 percent more layers per die pack. Certainly not the NAND density breakthrough we thought 96L would bring to the table before its release.
The easiest way to ‘get’ this change is to imagine two apartment buildings. One is 96 floors high and the other is 64 floors high. Both have the same number of ‘apartments’ and the same square footage per apartment. The 96-floor tower just takes up less room on the city block than the shorter building… it is simply taller (obviously). So why the change, and the justification for an entirely new generation moniker? This smaller footprint for the NAND means there are more usable NAND die packs per wafer, which in turn decreases the cost of manufacture… and allows Micron to sell them to companies like Silicon Power for less than their older B17As. This is why SP went with the arguably inferior (from a performance and durability perspective) NAND instead of opting for the more costly Toshiba BiCS NAND like on the A60 and higher models. We would have preferred to have seen Toshiba used.
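Put in numbers, the rearrangement works out roughly like this (a back-of-the-envelope sketch using only the figures quoted above; the ratios are relative, not Micron’s actual per-layer cell counts):

```python
# Relative comparison of Micron's 64-layer B17A and 96-layer B27A TLC NAND,
# using only the figures quoted above. Illustrative ratios, not absolutes.
die_density_gbit = 512                    # per-die density is unchanged

b17a_layers, b27a_layers = 64, 96
layer_ratio = b27a_layers / b17a_layers   # 1.5x more layers

# Same bits per die spread over 1.5x the layers means each layer holds
# roughly 2/3 as many cells, so the X-Y footprint shrinks accordingly.
relative_footprint = b17a_layers / b27a_layers    # ~0.67x the old footprint

# A smaller footprint means more candidate dies per wafer (ignoring edge losses).
relative_dies_per_wafer = 1 / relative_footprint  # ~1.5x as many dies

print(f"Layers: {layer_ratio:.2f}x  Footprint: {relative_footprint:.2f}x  "
      f"Dies per wafer: ~{relative_dies_per_wafer:.2f}x")
```

More dies per wafer is the entire ‘win’ here; the end user sees the same density and the same transfer speed as the previous generation.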
The first drive to make use of this 3rd gen CuA TLC NAND was the Crucial BX500 in its 960GB capacity… and only the 960GB model. Crucial actually did a silent upgrade in this regard, as they not only swapped out Gen 2.2 TLC NAND for Gen 3 CuA but also upgraded from the SM2258XT to the SM2259XT controller. It would not surprise us in the least if the smaller capacity models also (eventually) get this silent upgrade… but only if Crucial runs out of SM2258XTs and B17A NAND ICs. Considering Crucial is Micron (the M in IMFT), we would not count on this happening anytime soon, to say the least.
In case you are wondering… yes, the Silicon Power A55 is basically a Crucial BX500 960GB SSD in a different form-factor and running different firmware. Silicon Power is using ‘stock’ SMI firmware, and that is why it is a ‘1TB’ drive and not a ‘960GB’ drive like the BX500. The upside to this firmware is that it is a touch faster on sequential performance than the BX500 960GB, but it has a lot less over-provisioning. For SATA this difference will be minor at best. Performance is lacking with all SATA drives, and if you need the extra peace of mind those 40GB worth of NAND offer… simply partition the drive so that 40GB is left unallocated and not seen by the OS. It will do much the same thing as Crucial’s firmware does (just at the OS level rather than the hardware level).
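If you want to replicate that extra over-provisioning yourself, the arithmetic is trivial; the sketch below (using the marketing-decimal capacities quoted above, purely for illustration) works out how much space to leave unallocated when partitioning:

```python
# Rough arithmetic for manual over-provisioning, mirroring the gap between
# a '1TB' A55 and a '960GB' BX500. Figures are illustrative, not exact NAND counts.
GB = 1000**3                        # drives are marketed in decimal gigabytes

advertised_capacity = 1000 * GB     # the A55's '1TB' of user-visible space
target_capacity = 960 * GB          # the BX500-style capacity we want to mimic

spare_area = advertised_capacity - target_capacity
print(f"Leave ~{spare_area / GB:.0f}GB unallocated when partitioning the drive")
# -> Leave ~40GB unallocated when partitioning the drive
```

Leaving that space unpartitioned gives the controller extra blocks to shuffle data into, which is the same peace-of-mind benefit the BX500 bakes in at the firmware level.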
Overall, the Silicon Power A55 is an interesting model from a design perspective. One that takes the road less traveled. The ‘road less traveled’ however is usually a road less traveled for very good reason. Let’s go over the performance results to show you what we mean.