We have said it in the past, and we will say it again: Seagate are conservative by nature, and they take the highly specific needs of the Enterprise customer extremely seriously. They understand that HDD models intended for these ultra-conservative buyers must meet certain highly specific criteria that simply do not apply to the ‘home’ consumer. For example, Enterprise purchasing agents care more about up-time, watts per TB, and Total Cost of Ownership calculations than they do about new and shiny technology… as the technology itself does not matter. It either meets the requirements of a given buildout or it does not. An Enterprise model could literally use vacuum tubes and steam power to store bits and bytes, and if it lowered Total Cost of Ownership and met the reliability and density requirements of a buildout… it would be purchased over “sexier” / “new and exciting” technology like the various NAND-based options. First time. Every time.
While that is being hyperbolic, Enterprise consumers really do place TCO and reliability well above absolute performance, and that is why nearline (and even online) storage is still “spinning rust” rather than solid state. Furthermore, these specific needs are pretty much the checklist of what defines the Exos X series. As they should be… as that is what Seagate intended the Exos X series to do when they created it in the first place.
We mention all this as, once again, the X24 generation is more a series of continuing refinements than a paradigm-shifting change to the underlying Exos X blueprint. Which is perfectly fine. The X24 offers the same excellent 550TB-per-year workload rating and the same 2.5M-hour MTBF rating as previous generations. It offers the same 512MB of RAM cache for the MTC firmware to work its magic as the X22. It makes use of the same helium-filled chassis. Same vibration sensor hardware. Even the whopping ten platters that were the X22’s main claim to fame have been carried over to the new X24 generation of Exos X models.
Once again, this is all true because Seagate does not make radical changes from one generation to the next. Instead they mostly make minor adjustments, with one, maybe two, noticeable changes at most in a given Exos X generation. For example, the X16 introduced the idea of nine platters instead of the assumed maximum of eight. The X18 generation quietly introduced Super Parity as well as Two-Dimensional Magnetic Recording (TDMR) technology. Then Seagate backed off due to pushback from their clients over too many changes too fast… and just increased the areal density for the X20 series. Meaning it was not until the X22 debuted that Seagate changed the assumed max platter count from nine to ten and (finally giving in to market pressures) doubled the cache size from 256MB to 512MB.
Does that mean this X24 is another X20 ‘holding pattern’ / ‘gap year’ generation, just meant to spread the change schedule over a long enough period to keep Enterprise buyers from getting an eye and head twitch? Oh hell no. Just as it was not until the X20 that TDMR really started to show its worth, with the 2.4TB-per-platter densities and further refinements to TDMR technology we are finally seeing enough differences between typical MTC-based firmware models like the IronWolf Pro… and those with Super-Parity MTC-based firmware like the Exos X.
Before we delve into what Super Parity is and what it offers, we first have to make something crystal clear. It is not a magic bullet. It is simply a refinement of how the r/w heads lay down data on the platters. One that is entirely platter-technology agnostic and works with all of them… and most likely will play a critical role in how Seagate is finally going to get HAMR and BPM (Bit Patterned Media… aka glass platters with an iron-platinum alloy film ‘recording’ media… of IBM Death… err “DeskStar” renown) to work well enough to get it out of beta deployment and into the mainstream purchase channel.
We also have to mention that it requires TDMR to not only work, but work fast enough so as not to be a (noticeable) detriment to overall write performance. So, as a brief overview: Two-Dimensional Magnetic Recording technology uses a dual r/w head configuration with a ‘detector’ signal processor sitting between the main drive controller and the read/write heads. The two read heads are slightly offset, with one over the central track being read and partially over the track ‘above’ it, and the second mainly on the central track and partially over the one ‘below’ it. This allows for a more sensitive read of a given bit, helps cancel out interference from the bits surrounding the bit actually being read, and generally improves the signal-to-noise ratio (aka SNR, the main culprit behind false errors). It also gives the drive controller two chances of reading the bit properly in a single pass.
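To make that SNR benefit concrete, here is a toy numerical sketch (our own illustration, not Seagate’s actual detector DSP): if the two offset heads produce roughly independent noisy samples of the same bit, combining the two samples cuts the effective noise, and the raw bit-error rate drops noticeably.

```python
import random

random.seed(42)

def noisy_read(bit, noise=0.8):
    """One head's analog sample of a bit (+1/-1 signal plus Gaussian noise)."""
    signal = 1.0 if bit else -1.0
    return signal + random.gauss(0, noise)

def single_head_decode(bit):
    # One head, one sample, one shot at getting the bit right.
    return noisy_read(bit) > 0

def tdmr_decode(bit):
    # Two offset heads sample the same bit; averaging the (roughly)
    # independent samples improves the effective SNR before the
    # threshold decision is made.
    return (noisy_read(bit) + noisy_read(bit)) / 2 > 0

bits = [random.random() > 0.5 for _ in range(100_000)]
single_errors = sum(single_head_decode(b) != b for b in bits)
tdmr_errors = sum(tdmr_decode(b) != b for b in bits)
print(f"single-head errors: {single_errors}, dual-head errors: {tdmr_errors}")
```

The noise level and averaging scheme here are assumptions made for illustration; the point is simply that two looks at the same bit beat one.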
With that out of the way, Super-Parity is actually not new for the X24 generation. Instead it has been in certain Seagate drives for many years now… but not many of the low-level details were available until recently. In fact, we can now talk about it without breaking any NDAs… as we did manage to track down one of the patents that covers this technology (Seagate and Hong et al., US patent 9,633,675 B2 and the earlier US20170322844 patent application, both circa 2017). As such, everything we say is taken directly from the patent and application filings. Nothing more. Nothing less. In the most simplistic terms, Super-Parity technology draws heavily from old SandForce SSD controller algorithms in that the tracks and platters are not treated as one cohesive whole storage unit. Instead, just as SandForce controllers did with their NAND ICs, Seagate has virtually broken the drive down into a (highly) modified RAID-array-type layout. Just at the track level, not the drive level.
That however is being highly disrespectful, overly simplistic, and is just a frame of reference that most will ‘get’… as it ignores the genius of Eun Yeong Hong, Woo Sik Kim, Moweon Lee, and the rest of Seagate’s South Korean S-Tier research team. So, let’s back up and start at the beginning. With typical HDDs, a write operation looks something like this: the data is pushed over the SATA bus and funneled into the HDD’s onboard controller; the onboard multi-core ARM controller calculates the necessary ECC parity bits; and this data is then broken up into chunks that are sent to the low-level controller, which in turn starts writing the data blocks and parity bits to the platters in a linear fashion. What this means is, the platters will look like this: a data block, followed by its ECC parity bits, then another data block and its parity bits… ad nauseam until the write operation is complete or the drive is filled up. This is a tried-and-true method; however, read performance does somewhat suffer, as the ECC is not needed (relatively) all that often and yet the r/w heads have to pass over the parity bits every time a given data file is requested. Meaning that overall performance is not as high as it could be. It was just always considered ‘good enough’, and it is good enough for all but Enterprise consumers. With Super Parity that all changes, and write operations get a much-needed 21st-century refresh… albeit with 20th-century Redundant Array of Independent Disks ideas/ideals as viewed through a SandForce IP lens.
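As a quick illustration of that linear layout (a toy model of ours, not actual drive firmware), picture each data block trailed immediately by its parity; a full sequential read then sweeps every parity sector whether it needs it or not:

```python
# Toy model of the classic interleaved on-platter layout: every data
# block is immediately followed by its ECC parity bits.
def interleaved_track(n_blocks):
    track = []
    for i in range(n_blocks):
        track.append(f"D{i}")  # data block
        track.append(f"P{i}")  # its parity, written right behind it
    return track

track = interleaved_track(10)
print(track[:6])  # ['D0', 'P0', 'D1', 'P1', 'D2', 'P2']

# Reading the ten data blocks means the head sweeps all twenty sectors,
# parity included -- the read penalty described above.
sectors_swept = len(track)
print(f"swept {sectors_swept} sectors to read 10 data blocks")
```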
No matter the lineage, things are done differently with Super-Parity-enabled drives. Yes, the controller calculates the parity. Yes, the controller then figures out how to chunk up the data into write operations, and then sends this information over to the low-level controller. However, the controller is not just doing a Data+Parity, Data+Parity linear layout. Instead the controller can implement one of two (known) options, both of which pay homage to RAID arrays in their underlying philosophy. In option one, all the data is written first, and then the parity data is written to separate “super blocks” consisting of nothing but said parity information during a second write operation. To imagine this: if there are ten blocks worth of data and ten parity chunks, all ten data chunks are written, and then and only then is the parity information written. Typically to a separate, adjacent track. An adjacent track that will only be filled with Super-Parity ECC data. Think of RAID 5 with its data and parity stripes spanning all the drives in a given array. Just at the track level, not the drive-array level.
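Sketched in the same toy terms (again our own illustration, following the patent’s description as summarized above, not actual firmware), option one separates data and parity onto different tracks so a sequential read never crosses a parity sector:

```python
# Toy model of Super-Parity "option one": write pass 1 lays down the
# data blocks on the data track, write pass 2 lays all the parity down
# on an adjacent, parity-only "super block" track.
def super_parity_layout(n_blocks):
    data_track = [f"D{i}" for i in range(n_blocks)]    # write pass 1
    parity_track = [f"P{i}" for i in range(n_blocks)]  # write pass 2
    return data_track, parity_track

data_track, parity_track = super_parity_layout(10)
print("data track:  ", data_track)
print("parity track:", parity_track)

# A sequential read of the ten data blocks now sweeps only ten sectors,
# at the cost of the second write pass (and of the parity sitting in
# RAM until that pass completes).
print(f"swept {len(data_track)} sectors to read {len(data_track)} data blocks")
```

The block counts and one-parity-per-block ratio are assumptions made for the sake of the comparison; the real controller decides chunking and parity sizes itself.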
This option results in improved random read IOPS and improved sequential read performance, but offers both at the expense of write performance (as that second operation is required to lay down the super block data to its track(s)). It also comes with a slightly increased risk of in-flight data corruption: the ECC data is temporarily stored in RAM and only written once the data-track writes are completed, so if the data tracks are not fully written before an unexpected power loss, all the data being written is toast due to the lack of parity data on the tracks.