(image courtesy of WikiChip)
Going hand in glove with the increase in frequency, cache capacity, and cache efficiency are the noticeable improvements Intel has made to the ‘Compute Fabric’ / ‘Ring Bus’. This is a highly overlooked portion of Intel CPUs and yet it can have a direct, and sometimes massive, impact on overall performance. After all, you can have “Phenomenal Cosmic Powers”, but if you try to push all that power through an itty-bitty straw… the bottleneck has simply moved from processing performance to transfer performance. Intel was aware of this issue and has taken great pains to improve this interconnect bus by upwards of 900MHz – or more. In other words, when the P and E cores are active and data is flowing hot and heavy between the cores (and out to the rest of the system), instead of dropping down into the 3.5-3.6GHz range the ring will now hold steady in the mid-4GHz range (~4.5GHz stock). Even more impressive, when only a few cores are active and the P-cores are clocking in the 5.8GHz range, the ring bus can boost itself to a whopping 5GHz. Stock. No overclock.
This level of improvement should hopefully alleviate the last generation’s edge-case scenarios where this critical interconnect bus could be saturated by the E-cores… and why some hardcore gaming enthusiasts disabled the E-cores so as to gain a small, but somewhat noticeable, performance gain. At the very least it will help – and based on game testing data it helps a lot in some edge cases. Arguably still not optimal, and for some games disabling the E-cores might still be the answer, but it is a massive step in the right direction. For most, a touch of overclocking will eliminate, or at least obfuscate, any bottlenecking in games and allow those uber-hardcore gamers to keep their E-cores active. That, however, will come down to a case-by-case basis… and Intel has done an impressive amount of work in a rather short amount of time.
Moving on to the integrated memory controller. Once again Intel has kept DDR4 backwards compatibility. While ‘stock’ support for faster than DDR4-3200 is still MIA, including DDR4 is a great boon for those interested in an inexpensive upgrade that still packs a punch. Bluntly stated, it is a tangible benefit compared to the DDR5-only support from AMD. On the DDR5 front, things have been improved. Noticeably improved. With one DIMM populated per channel using single rank memory (AKA ‘1DPC 1R’… aka “One DIMM Populated per Channel using single rank RAM”) the 12th generation integrated memory controller (‘IMC’) was capable of handling DDR5-4800 without the need for overclocking. Sadly… if one were to use four sticks of even single rank DDR5 memory, that dropped to DDR5-4000. With four sticks of dual rank memory (aka ‘2DPC 2R’… aka 128GB of RAM installed) support plummeted to DDR5-3600… aka DDR5’s original revision… aka a ‘why even bother with DDR5 in the first place’ frequency. Thus the reason many a system builder, including ourselves, opted for DDR4 based motherboards last generation (like the incredible price/performance MSI PRO Z690-A WiFi DDR4).
With Raptor Lake things are much rosier. Starting at the top, when using two sticks of single rank memory (1DPC 1R) the new IMC supports speeds of up to DDR5-5600. With two sticks of dual rank DDR5 (1DPC 2R), support for DDR5-5200 is not terrible at all. Better still… 2DPC with dual rank memory has been raised to DDR5-4400 (i.e., the only way to hit 128GB of memory capacity… and about the only way to keep 24 cores from being RAM starved for workstation-on-a-budget users). Still not quite there for even ‘entry level’ DDR5-4800, but it does make ‘supports 128GB’ claims a bit more realistic. Falling in between those two ‘extremes’ is four sticks of single rank memory (2DPC 1R), which can be run at stock DDR5-4800 frequencies. Once again, these are stock, guaranteed stable (‘if there is a problem it is most likely the RAM’s fault’) supported speeds.
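To put those supported speeds in rough perspective, here is a quick back-of-the-napkin sketch (our own illustration, assuming a standard dual-channel, 64-bit-per-channel DDR5 configuration and ignoring real-world efficiency losses) of the theoretical peak bandwidth each tier works out to:

```python
# Rough theoretical peak bandwidth for a dual-channel DDR5 setup.
# Illustration only: assumes 2 channels x 64 bits per channel and ignores
# sub-channel details, timings, and real-world efficiency losses.
CHANNELS = 2
BYTES_PER_TRANSFER = 8  # 64-bit channel

for config, mt_s in {
    "1DPC 1R (DDR5-5600)": 5600,
    "1DPC 2R (DDR5-5200)": 5200,
    "2DPC 1R (DDR5-4800)": 4800,
    "2DPC 2R (DDR5-4400)": 4400,
}.items():
    gb_s = mt_s * BYTES_PER_TRANSFER * CHANNELS / 1000
    print(f"{config}: ~{gb_s:.1f} GB/s peak")
```

In other words, the gap between the best and worst supported configurations is roughly 90 versus 70 GB/s of theoretical bandwidth feeding those 24 cores.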
Memory overclocking is also a bit better. In testing, getting DDR5-5200 on a 2DPC 1R configuration was not a difficult endeavor. At all. Neither was DDR5-4800 with 4x32GB configurations. Of course, with a sample size of only two it is impossible to draw any definitive conclusions… but the guaranteed boost not only helps increase overall core performance and reduces the potential memory bottleneck when dealing with 24 cores, it should also help alleviate fears over new systems not working with more than two sticks of RAM, or with faster than DDR5-5600 RAM, as was the case with the last generation.
This combination of more cache, faster memory support, and a faster internal interconnect is why Intel can legitimately claim an ‘up to’ 15 percent increase in single threaded performance. In testing that is… a bit optimistic, but the gains are certainly in the high single digits and can sometimes indeed hit low double-digit numbers. Of course, single threaded performance at God-tier levels is all well and fine, but it is the massive improvements to multi-threaded performance that allow Intel’s 13th generation to do what we have not seen since… well… the advent of Ryzen: beat AMD at their own game and offer insane multi-threaded performance.
Before we get to that, one more topic needs to be covered: TDP. In some ways Intel has not changed the way these processors handle their TDP. Much as with the last generation, the i9-13900K technically has a PL1 (aka “Processor Base Power”) rating of 125 watts and a PL2 (aka “Maximum Turbo Power”) of 253 watts. Even the tau (the length of time it is allowed, per Intel specifications, to boost frequencies) has not really changed. What has changed is that Intel has included another mode… one that says “power limits and time limits? What is this ‘limit’ you speak of?!”. In simplistic terms, when an owner wants to, the power limits can be… well… removed. When removed, the only wall you will hit is temperature, as the i9 can (and will) suck down in excess of 350 watts of power.
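For the curious, one way to see what PL1/PL2 values your board has actually handed the CPU (outside of the BIOS) is the Linux powercap/RAPL interface. A minimal sketch, assuming the stock intel-rapl sysfs layout is present and readable on your system (paths and availability vary by kernel and platform):

```python
# Minimal sketch: read the package power limits (PL1/PL2) via Linux's
# intel-rapl powercap interface. Assumes /sys/class/powercap/intel-rapl:0
# exists and is readable; availability varies by kernel and platform.
from pathlib import Path

pkg = Path("/sys/class/powercap/intel-rapl:0")

# constraint_0 is the long-term limit (PL1), constraint_1 the short-term (PL2)
for constraint in ("constraint_0", "constraint_1"):
    name = (pkg / f"{constraint}_name").read_text().strip()
    limit_uw = int((pkg / f"{constraint}_power_limit_uw").read_text())
    print(f"{name}: {limit_uw / 1_000_000:.0f} W")
```

On a board running the ‘no limits’ mode you will typically see both values raised far above the 125/253 watt specification numbers.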
Needless to say, this will not be for the faint of heart. Nor a good idea on anything less than the beefiest of VRM-equipped motherboards (making 26-phase Godlike and 24-phase ACE type motherboards actually practical this generation). Nor a good idea if you do not use an ultra-high-end cooling solution. We personally recommend a 360mm AIO like the Arctic Liquid Freezer II as a good starting point if such things interest you. Even if the Extreme Performance Mode does not interest you, this level of overclockability is a return to the days of yore, when overclocking was fun and Intel actually helped, arguably even encouraged, their buyers in their overclocking adventures. It is almost a shame few will ever feel the need to do so.
This duality extends well past the i9, and in some ways the i5-13600K has greater (relative) potential than the i9. To be specific, the Raptor Lake i5-13600K’s PL1 may not have changed from the 12th generation (125 watts), but the PL2 has been increased from 150 watts to 181 watts. For a ‘mere’ 14 core processor that is actually (slightly) more power per core than the i9. Make no mistake, if you do go for even higher wattages this ‘small’ processor will require just as good cooling as an i7 or even an i9. Thus, a good rule of thumb when it comes to cooling any modern CPU is simple: there is no kill like overkill. Aim high, “go big or go home”, and spend your money wisely. Do that and these processors will reward you with even higher levels of performance. Right out of the box. Just understand that because something “can” run at 95… err… 100 degrees Celsius does not mean it should do so consistently. Heat degrades overclocking stability over time. Keep the temperatures as low as you can, and the CPU will probably outlast your willingness to keep using it in the coming decades. Don’t, and you may find it not clocking like it used to before you are ready to upgrade to a new system in five years’ time.
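As a quick back-of-the-napkin illustration of that ‘more power per core’ point (our own arithmetic, simply dividing PL2 by total core count and ignoring the very different power budgets of P-cores versus E-cores):

```python
# Back-of-the-napkin PL2-per-core comparison. Illustration only: it divides
# the package power limit evenly across all cores and ignores the fact that
# P-cores and E-cores draw very different amounts of power.
cpus = {
    "i9-13900K": {"pl2_w": 253, "cores": 24},  # 8P + 16E
    "i5-13600K": {"pl2_w": 181, "cores": 14},  # 6P + 8E
}

for name, spec in cpus.items():
    print(f"{name}: ~{spec['pl2_w'] / spec['cores']:.1f} W per core at PL2")
# i9-13900K: ~10.5 W per core vs i5-13600K: ~12.9 W per core
```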
With all that said, these P-core improvements are more of the aperitif for the main argument Intel is making directly to buyers previously dead set on going the AMD Ryzen 7000 route. Even excluding major selling features like DDR4 compatibility from the equation, it really is those teeny-tiny E-cores that buyers will want to own. Yes, Windows 11 (or at least Process Lasso for Windows 10) is needed to get the Full Monty. Yes, they cannot and should not be compared on a 1:1 basis to even AMD’s two-generation-old cores (which ran at about the same or lower frequencies when all were active). A fair comparison is about 1.9’ish of them to one AMD Ryzen 7000 core. There are (up to) sixteen of them in this new generation… which just reaffirms the old adage that “quantity has a quality all its own”.
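Taking that very rough 1.9-to-1 rule of thumb at face value (our own illustration of the review’s ratio, not a benchmark), the math behind “quantity has a quality all its own” looks something like this:

```python
# Very rough rule-of-thumb math: ~1.9 E-cores for every one Ryzen 7000 core.
# Illustration only, not a benchmark result.
E_CORES = 16                  # i9-13900K E-core count
E_CORES_PER_RYZEN_CORE = 1.9  # the review's rough equivalence ratio

equivalent_cores = E_CORES / E_CORES_PER_RYZEN_CORE
print(f"~{equivalent_cores:.1f} 'Ryzen-class' cores' worth of throughput "
      "from the E-core cluster alone")
# ~8.4 extra cores' worth, on top of the eight P-cores
```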
So yes. What these E-cores lack in individual / single thread performance they make up for in having mass quantities on tap (and if Intel doubles them again next generation it will be a legit ‘zerg rush’). Furthermore, and unlike AMD, which is still producing only one type of compute chiplet, for scenarios where only a few cores are active these little cores are more than up to the task… and they do it without producing much heat, consuming all that much electricity, or creating all that much noise.
The reason they can do all that is only partially due to the fact that Intel has doubled their numbers in a single generation (yes, the itty-bitty i5 of this generation has the same number of E-cores as last generation’s i9). The other reason is that Intel may still be using the same core architecture (Gracemont) for this generation as they did for the last… but they have improved them.