Introduction
Let us not mince words. The 'video card' (or more formally the "discrete GPU") industry is suffering from major ennui. Buyers are consistently offered a choice between cards that are merely extremely expensive and only the size of a brick… or ones so costly you need to sell (somebody else's) organs to pay for the house-sized card. Mix in the fact that the duopoly brands are more than content to do the above while offering only modest improvements over their previous generation and… yeah… people have been crying out for an alternative. One that can swoop in and break the Red/Green stranglehold on the marketplace, or at the very least push the Big Two into actually innovating again. This is where Intel and their Arc line of video cards enters the chat.
First launched back in 2022, during peak ClownWorld we might add, the Arc Alchemist A3/5/700 series offered a unique perspective on what it takes to break into the dGPU world. Namely, Intel focused on value. Be it a small A300 mainly for transcoding, a moderate A500 for budget-constrained rigs, or an A700 meant for mainstream 1080P buyers, all had one thing in common: small size, small power consumption, small asking price, and decent performance… once the driver issues got sorted out. Sadly, those driver issues did ensure that the Arc "Alchemist" generation would be seen as little more than a sideshow. One that few would be willing to 'take a risk' on, even if the price was very enticing and the cards offered features the 'other guys' did not (e.g. native AV1 transcoding!).
This is where the second generation Arc "BattleMage" series enters the fray. Even with just a quick scan of its features it is obvious that, ongoing financial (and CEO) troubles or not, Intel is all in on breaking the duopoly and giving buyers a good third choice. Namely, they have invested the time into upgrading the underlying Xe architecture enough that Intel is promising a massive 24 percent uptick in overall performance… compared to the higher-tier A750. The last time we even heard of that level of promised performance gains, NVIDIA was trying to recover from their "leaf blower" debacle.
Counteracting some of the excitement that experienced, yet jaundice-eyed, buyers had about the BattleMage series was the fact that Intel was releasing a 5-class option with fewer Xe cores (20 vs. 24) and a narrower memory bus (192-bit vs. 256-bit) than the first generation A580. Needless to say, this brings into question any and all claims of improved performance. Possibly with experienced buyers musing over whether Intel had hired some ex-AMD marketing people to build the pre-release marketing infodump.
Thankfully, things are not as bad as they first appear. Yes, Intel is bucking the trend of making everything bigger, but they are counting on a massive increase in clock rate (2850 vs. 2000MHz) combined with massive IPC gains to not only overcome these perceived shortcomings but prove potent enough to take on the latest generation $300 to $350 MSRP (aka "x60" class) options from both Team Red (RX 7600) and Team Green (RTX 4060). Considering the asking price is only $250, if the B580 can indeed do what it promises then consumers may indeed be witnessing the Dawn of a new (Jedi… err…) Day. One where there is some hope that sanity has returned to the dGPU marketplace. One where we can actually optimize the choice in dGPU to better match a given build's needs. Be they professional, recreational, or even both.
Let’s see if the BattleMage can indeed live up to the pressure that is going to be placed on it… or if it is going to be more a case of being a BattleBust with the Nightbrothers and Nightsisters ruling unopposed for another generation.
Closer Look p.1
It is funny, but if one has never purchased anything other than a CPU from Intel the shipping container may come as a bit of a surprise. A pleasant surprise, but a surprise nevertheless, as this box is not boring. It is not plain. It is not even clad in "Intel Blue" like their CPUs' (tiny) boxes are. Put bluntly, where their CPU division is all about ultra-conservatism and comes across as a bit on the 'cold' side, their dGPU "ARC" division is all about being approachable by the average Jane or Joe.
Make no mistake, Intel ARC shipping containers are still on the conservative end of the spectrum compared to, say, a Gigabyte VIPERIA. To us, the fact you will not find epileptic-seizure-inducing color schemes, over-the-top 'I AM A GAMER' graphics, nor even the typical dross that has infected the video card industry is not just a good thing. It is a great thing. Thanks to its quiet, understated elegance this is a shipping container that instills confidence in potential buyers thinking about expanding their horizons past Team Green or Team Red options.
With all that said, it may be a bit too boring for some. It really is a shame that you will not typically be able to see inside the box until you purchase it, as this charmingly minimalistic exterior does not extend to the internals. Instead of a stripped-down internal protection scheme, Intel has gone all out: multiple layers of colorful, graphic-laden internal cardboard covering a high-density foam cutout that lets the ARC B580 stay safe and sound. Be it from blunt force, an errant box cutter, or darn near anything else, this bomb-proof box is so far above average it breaks the bell curve.
With that said… we could do without some of the accessories. Does anyone really need a cardboard 'do it yourself' ARC toy? Hopefully that is just a reviewer's "Limited Edition" inclusion as… oof. Give me a couple bucks off the MSRP instead. After all, if I want to play with any video card it is not going to involve children's toys. It is going to involve Hell Diver… err… Battlefiel… err The Final… err… okay. Maybe we can see Intel's point of view on toys. All jokes aside, this cardboard toy accessory (with stickers!) did increase the build cost, and considering this is a $250 card and not $180 like the A580 was… maybe Intel needs to think a bit more inside the box?
Moving on. Thankfully, this… whimsical touch does not translate into a glowy, showy, LED-lightshow-clad card. Instead, when you look at the B580 for the first time you are strongly reminded of some of the best Founders Editions and other premium-looking dGPUs we have reviewed over the years. Put bluntly, the softly sloping sides, the graceful curves, and even a freakin' backplate (on a $250 card, we may add!) are phenomenal.
Sure, the backplate is polymer and not metal. Few will care, as it is a backplate-clad model in a sea of nekkid options. Yes, this is a video card that does not look like an inexpensive card. Instead it could easily be part of an uber-high-end custom build, a more industrial-looking build, or even a corpo-build. It is just that elegant and that flexible in its aesthetics. AMD (and to a lesser extent NVIDIA) could learn a thing or two from Intel about how to do video card aesthetics right.
With that said, and much like the shipping container, certain demographics will find it a bit too conservative. As such, if you are a 1337 gamer reliving your youth, a TikTok'er with TikTok brain, or even just a YouTube "influencer" you probably will be very bored with the B580. Maybe that is a good thing, as "influencers" have already ruined multiple industries just for "clout". So maybe flying under their radar for a couple more generations is a good thing. Put another way, Intel is serious about their ARC division and is trying to take AMD's place as the plucky underdog who under-promises and over-delivers. Which would be a refreshing change of pace compared to modern Team Red and Team Green marketing shenanigans – with over-the-top claims that they may try (kinda-sorta) at some point in the future to actually deliver on. Either way, AMD should be concerned, as this is a card that looks like it deserves to be in a mainstream build.
Make no mistake, the new ARC 2.0 / BattleMage B580 is not just a pretty face. This is a well-designed card. One that Team Green and Team Red's entry-mainstream design teams could learn a thing or two from on more than just aesthetics. For example, this is not a card rocking a 'blower' style cooling solution. Over the years too many companies have cut corners by using blowers instead of 'down draft' solutions, as they are cheaper to implement and, with sub-200-watt TDPs, are generally considered 'good enough'. They are, however, loud. In testing this card is not silent, but its dual custom ~89mm fans rarely get loud enough to be overly noticeable and for the most part will be quieter than the ambient noise created by the typical inexpensive case's 120/140mm fans.
Furthermore, the cooling design of this downright inexpensive (by 2024 standards) dGPU is excellent. The frontmost fan sits in a passthrough design reminiscent of a CPU cooling solution (or… Founders Edition) and provides most of the core's cooling, while the rearmost fan acts as a more standard downdraft fan that provides much-needed air movement not only over the cooling array's aluminum fins but also the GDDR6 RAM ICs… and the rest of the components on the rather small PCB.
To dig down and be a bit more precise, Intel has opted for a single-slot-height, full-length fin array fed by 5 heatpipes.
Considering this card is 27.5cm long, ~99.5mm "wide" (or about 4.2mm over the PCIe SIG's height standard), and the array is 20mm "tall"… there is a lot of surface area (~270 square cm of footprint) for this 190-watt TDP video card. All without ballooning the dimensions up to a 3-slot form factor.
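For the curious, that footprint figure is easy to sanity check from the stated dimensions; a minimal sketch of the math, assuming the fin array spans roughly the card's full length and width:

```python
# Back-of-the-napkin sanity check of the fin array's footprint, using the
# dimensions quoted above and assuming the array spans roughly the full
# length and width of the card.
length_cm = 27.5   # card length
width_cm = 9.95    # card "width" (~99.5mm)

footprint_cm2 = length_cm * width_cm
print(f"Fin array footprint: ~{footprint_cm2:.0f} sq cm")  # ~274 sq cm
```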
Before we move on, we do need to make one thing clear. We are in no shape nor form slamming the fact that the PCB makes up only about half of the overall length of the B580. The opposite is in fact true. A small PCB has proven to be better than an overly long one, as it allows for better cooling and lower noise. It also allows, if a company decides to make one, for smaller 'full coverage' water blocks. All of which are good things. Sadly, going hand in glove with all those benefits of short boards is the fact that the power connector is in a bad location.
Yes, modern GPUs typically stick them halfway down the board, but this modern trend is trash. It makes final tidying up way, way harder than the 'old school' method of sticking the 8-pin(s) on the end of the card. On the positive side… it is a single 8-pin PCIe connector, so there are few worries over an incorrectly inserted cable "melting"… and burning your frickin' house down. So while not perfect, we will take even a four 8-pin model over a trash-in-the-bag single 12V-2×6 connector option. The old way is safer, easier to work with, easier to find aftermarket (color matching) cables for, and generally speaking just plumb better.
Counteracting some of that positivity is the fact that a single 8-pin is rated for 150 watts of power. Considering this is a card that A) has a TDP of 190 watts and B) allows for easy overclocking via its software… we are not precisely overjoyed with seeing a single connector being used. It should have been one 8-pin plus one 6-pin. That way no power would ever need to be pulled via the PCIe slot, and no gremlin limitations on overclocking (as while the PCIe spec allows for 75 watts, not every board can supply clean, stable power at that level). Hopefully, third party / board partners will release a more… overclocking friendly version rocking dual connectors (like, say, a three fan ASRock Steel Legend model).
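To put numbers to that concern, here is a quick sketch of the in-spec power budget math, using the PCIe SIG's standard ratings and the TDP quoted above:

```python
# Rough power-budget math for the B580's single 8-pin design.
EIGHT_PIN_W = 150   # PCIe SIG rating for a single 8-pin connector
SLOT_W = 75         # PCIe SIG rating for the x16 slot itself
TDP_W = 190         # B580's rated board power

total_in_spec = EIGHT_PIN_W + SLOT_W      # 225 W available on paper
slot_draw_at_stock = TDP_W - EIGHT_PIN_W  # 40 W must come via the slot
oc_headroom = total_in_spec - TDP_W       # only 35 W left for overclocking

print(f"Slot must supply ~{slot_draw_at_stock} W at stock; "
      f"~{oc_headroom} W of in-spec headroom remains.")
```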
Circling around to the rear I/O we can see that Intel has opted for a very decent 3x DP 2.1 + 1x HDMI 2.1a port configuration. While we would have liked to have seen a USB port (for VR), with all the ports stacked on a single slot layer and the second slot left free to help exhaust hot air… we cannot complain too much.
Closer Look p.2
Let's face it. With the Arc Alchemist series, Intel wanted to prove to the world that their Xe blueprint was actually a workable design. One that could prove to be the foundation upon which a third contender could be built… sometime in the future. As such, Intel did not shoot for the moon. Instead they realistically calculated the chances of an RTX 3090 "Killer" even breaking even, and then decided to target the 1080P crowd… as that is the resolution the majority of people game at. This meant good enough performance at a good enough price tag was the main priority. Thus the A580 came with 8GB of RAM and core clocks that were aptly described as 'good enough'. Not overkill. Just… "good enough".
With the second generation, the design team wanted to prove that their (pardon the pun) core philosophy was not only a workable solution, but arguably a good one. This is why their goal was to satisfy the needs of not just 1080P gamers but 1440P gamers as well. After all, if you can provide 'good enough' performance at 1440P you will easily break into good-to-great 1080P performance territory. This means gone is the 8GB of GDDR6 and in its stead is 12GB of GDDR6. Which is arguably 'good enough' for modern games at 1440P… and certainly more than that for 1080P gaming.
Sadly, going hand in glove with that boost to total onboard memory is the fact that the second gen BMG-G21 core's memory bus is narrower than its predecessor's. To be precise, when dealing with a 5-class B580 dGPU one only gets access to a 192-bit wide memory bus. Not the 256-bit its predecessor was rocking. Mix in the fact that the same frequency of GDDR6 ICs is being used… and the net result is lower overall memory bandwidth. Arguably still 'good enough' for 1440P, but this is a tad disappointing given the goals and targets of the BattleMage. Thankfully, the duopoly has all but written off the 'low end' and given Intel a free pass on this issue.
Take for instance AMD. At about 3 bills, their Radeon RX 7600 is probably the closest in price to the B580 and it "features" a 128-bit bus paired with 8GB of GDDR6 RAM. It is not until you move up to the 4-to-5-bill range and their more expensive Radeon RX 7700 XT (as the non-XT is still MIA) that you will find 12GB offered on a 192-bit bus. NVIDIA is even worse. Their RTX 4060 (300 to 350 USD range) has an anemic 128-bit bus and 8GB of RAM. Hell, the RTX 4060 Ti (350 to 400 yanky bucks) only gets 128-bits, and while you can get 16GB options… 8GB is what it was designed around. So yeah. 192-bits plus 12GB is more than just good enough for the 1080P and 1440P market… or at least it is by both AMD and NVIDIA standards. So who are we to argue that it should have been 256-bits and 12GB… even if we sincerely hope the 7-class is 16GB on a 256-bit bus.
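For those wanting the actual bandwidth math behind all this bus-width talk, it is a one-liner: bus width times per-pin data rate, divided by eight. A minimal sketch, using a 19Gbps per-pin rate purely as an illustrative assumption (exact IC speeds vary from card to card):

```python
def gddr6_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin rate / 8."""
    return bus_width_bits * pin_rate_gbps / 8

PIN_RATE = 19.0  # Gbps per pin -- an illustrative assumption, not a spec sheet value
for name, bus in [("256-bit (A580-style)", 256),
                  ("192-bit (B580)", 192),
                  ("128-bit (RX 7600 / RTX 4060)", 128)]:
    print(f"{name}: {gddr6_bandwidth_gbps(bus, PIN_RATE):.0f} GB/s")
# At identical IC speeds, a 192-bit bus delivers 75% of a 256-bit bus's
# bandwidth -- and 150% of the 128-bit buses the similarly priced competition uses.
```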
Moving on. The amount of RAM is not the only thing to change. In fact, we started with it because it is probably the least important change. Xe 2.0 brings multiple massive changes, all focused on trimming the fat while netting major performance gains. Albeit with a couple of caveats.
First, the caveats. For the first time in recent memory a dGPU manufacturer is releasing a smaller core for a given class. For example, the A580 made use of six render slices (aka the basic building block Xe is built upon) and thus had 24 Xe cores, 24 Ray Tracing units, and 384 'AI Acceleration' XMX engines. That works out to a grand total of 3072 shader units (think 'CUDA cores' for point of reference), 384 XMX cores (think 'Tensor cores' for point of reference), 24 Ray Tracing engines, 96 ROPs, and 192 TMUs.
The B580 only has five rendering slices. Not six.
Since the Xe 2.0 core breakdown per slice is very… very similar, this means the B580 is rocking only 20 Xe 2.0 cores. However, that is about where the similarities end. Yes, it means 2560 shader units vs. 3072. It also means 160 XMX engines and 20 Ray Tracing engines versus 384 + 24. On paper. Ooof. That… that is a massive reduction. Yet Intel not only promises but downright boasts of swole 50-percent-or-better overall performance gains. How have they cut this Gordian knot and done what is seemingly a contradiction in terms? Low-level optimizations and sheer brute horsepower. Put another way, what Intel did was look for ways to slice off nanosecond delays. For example, what would have taken 2 cores/slices/etc. to render now takes 'one'. With each "one" running much, much faster than before.
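The counting behind those headline numbers is simple enough to sketch, using the per-slice and per-core ratios implied by the figures above (four Xe cores per render slice, 128 shaders per Xe core, and 16 vs. 8 XMX engines per core):

```python
# Deriving both cards' headline numbers from their render slice counts.
XE_CORES_PER_SLICE = 4
SHADERS_PER_XE_CORE = 128  # 3072 / 24 on Alchemist; unchanged for BattleMage

for name, slices, xmx_per_core in [("A580 (Xe 1.0)", 6, 16),
                                   ("B580 (Xe 2.0)", 5, 8)]:
    xe_cores = slices * XE_CORES_PER_SLICE
    print(f"{name}: {xe_cores} Xe cores, "
          f"{xe_cores * SHADERS_PER_XE_CORE} shaders, "
          f"{xe_cores * xmx_per_core} XMX engines, "
          f"{xe_cores} RT units")
# A580 (Xe 1.0): 24 Xe cores, 3072 shaders, 384 XMX engines, 24 RT units
# B580 (Xe 2.0): 20 Xe cores, 2560 shaders, 160 XMX engines, 20 RT units
```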
Put another way, think of the HEDT-line Core i9-10900X and compare and contrast that ten-core beast (for its day) to a modern desktop Core Ultra 200 series processor. Imagine a Core Ultra 9 285 with just its 8 P-cores and no E-cores. Thus ten vs. eight cores… and yet one would be foolish to pick the higher core count 10th-gen HEDT'er over the newer Core Ultra 200! That is basically why Intel can promise, and deliver on, a smaller dGPU core doing more. Much more.
On the IPC front, it is in the high double digits, as Intel's Arc dev team has done major optimizations… and boosted the L2 cache from 8 to 18MB (and L1 is now 256KB!). Take for example the vector engines and XMX engines. In the Alchemist series they were 256-bit and 1024-bit based, respectively. In the new BattleMage era they are 512-bit and 2048-bit based. So while the total number is 'halved', each one is larger and more efficient. Netting major IPC gains, to say the least.
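Run the lane math and the 'halved but doubled' trade works out to a wash in total SIMD width per Xe core, which is why the engine-count drop is less scary than it looks on paper; a quick sketch using the per-core engine counts implied above:

```python
# Total SIMD width per Xe core = engine count x engine width (bits).
alch_vector = 16 * 256    # Alchemist: 16 vector engines x 256-bit  = 4096 bits
bmg_vector  =  8 * 512    # BattleMage: 8 vector engines x 512-bit  = 4096 bits
alch_xmx    = 16 * 1024   # Alchemist: 16 XMX engines   x 1024-bit = 16384 bits
bmg_xmx     =  8 * 2048   # BattleMage: 8 XMX engines   x 2048-bit = 16384 bits

assert alch_vector == bmg_vector and alch_xmx == bmg_xmx
# Same total lanes per core, but fewer (wider) engines to schedule -- the IPC
# gains come from efficiency, the bigger caches, and the much higher clocks.
```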
These major improvements in Instructions Per Clock cycle, combined with more on-chip cache and off-chip RAM, are then paired with a base clock of 2,670MHz (vs. 1,700!) and a boost of 2.85GHz (instead of a flat 2GHz). This means each of these cores not only can do more per cycle… they are clocked over 40 percent higher at boost. Intel claims "up to" 70 percent higher overall performance. In testing, that is a wee bit optimistic, but the low-level hardware (combined with massive software) improvements do allow this B580 to hit well above its price class and easily match an RTX 4060 and even sometimes (albeit rarely) a 4060 Ti. When compared against AMD… the company once known for their massive memory bandwidth cards? Fuhgeddaboudit. Their low-end is arguably 'better' than the Team Green option, but that is not the same as saying it is a good option.
These days all of the above will net you some 'nice' to 'very nice' compliments… but Ray Tracing is still the new sexy and all modern dGPUs must support it. In NVIDIA's case that means 24 to 34 (4060 Ti) Ray Tracing cores. In AMD's case it means up to 54 (7700 XT). As such, on paper a mere 20 is not disappointing. It is 'disgusting'. In reality these new 3-pipeline based beasties are arguably as good as NVIDIA's on a per-core basis, and out and out smoke AMD's. They smoke AMD so badly it really is only a two-horse race in RT land: Team Green… and Team Blue.
Since Ray Tracing hammers frames per second, all modern dGPUs must also offer "frame generation"… aka fake-it-till-you-make-it frames. AMD is once again barely worth looking at; the only two serious options are Intel and NVIDIA. NVIDIA's DLSS (Deep Learning Super Sampling) offers multiple approaches to making fake frames to keep FPS in the realm of reasonable. Intel now also offers multiple options.
To be precise, when one purchases an Intel Arc dGPU one gains access to three technologies. The first is XeSS 2 Super Resolution (XeSS 2-SR), Intel's upscaler. The second is XeSS Frame Generation (XeSS-FG), which uses two previously rendered real frames and two different algorithms to create an "in-between" or "blending" frame; for point of reference, think of the 'soap opera effect' on TVs for the level of smoothness that can be obtained by blending faux frames in between real frames. The third is Xe Low Latency (XeLL), which overrides the game logic and allows for actions to be rendered earlier than the game engine would typically do; for point of reference, think of how dogwater bad a poorly optimized PC game runs compared to a slick 'n' smooth highly optimized game. Most of the difference is in when 'things' get rendered by the game engine/dGPU… as the game engine might be using time slices as coarse as 30 frames per second… on your 120Hz+ monitor. Put another way, Intel now offers good frame interpolation, excellent frame smoothing, and insane responsiveness.
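To make the frame generation concept concrete, here is a deliberately naive conceptual sketch. To be clear, this is not Intel's actual XeSS-FG algorithm (which leans on motion data and AI models rather than a plain blend); it just shows the general shape of the technique and where the extra latency comes from:

```python
import numpy as np

def fake_inbetween(prev_frame: np.ndarray, next_frame: np.ndarray,
                   t: float = 0.5) -> np.ndarray:
    """Naive stand-in for frame generation: linearly blend two real frames.
    Real implementations use motion vectors / optical flow, not a plain mix."""
    return ((1.0 - t) * prev_frame + t * next_frame).astype(prev_frame.dtype)

# Two hypothetical 1080P RGB frames standing in for real rendered output.
frame_a = np.zeros((1080, 1920, 3), dtype=np.float32)
frame_b = np.ones((1080, 1920, 3), dtype=np.float32)

generated = fake_inbetween(frame_a, frame_b)
# Present order: frame_a -> generated -> frame_b. Apparent FPS doubles, but
# frame_b must already be rendered before the in-between can be shown --
# that buffering is the added latency XeLL exists to claw back.
```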
More importantly, all three can be active at the same time. Of the three, the last is actually the most exciting… as frame generation increases latency. Sometimes noticeably. Sometimes enough that 'pro gamers' typically turn that shite off and brute force it via 4090-level cards. Mere mortals do not have the luxury of solving a problem by throwing a mortgage payment at it.
As such, everyone will be happy to know that with all three of these new technologies active Intel promises better than native latency. With better than native FPS. Of course, each game will have to have XeSS 2.0 enabled… but given that "pro gamers" will get better than native latency we highly doubt many next-gen games will skip it as an option. Especially given the fact that the Software Dev Kit for XeSS now supports DX12, DX11 and Vulkan. Making it pretty much a no-brainer, (little to) no cost feature to enable.
"But wait! There's more!" All these XeSS software improvements are backwards compatible. Yes. Unlike Team Green, which forces you to buy the latest gen to get the latest software stack… Intel is letting it work on gen-1 Alchemist models. Albeit not as efficiently, as XeSS 2 was designed (and tested) around BattleMage. This is still eons better than the "other guys" though.
Overall the BattleMage is not just a beefed up Alchemist. Instead it really does represent the next generation of Intel Arc design. Now let’s see how it performs… as few care about the ‘how’ or the ‘why’, just that it does.
Overclocking
In retrospect it should not have come as any surprise that the BattleMage series would not only be good at, but downright easy to, overclock. It did however take us by surprise. Yes, Intel is actually one of the largest GPU manufacturers in the world. Yes, they have literally countless decades of experience in what overclocking enthusiasts want, need, and even desire. It is just that most of this experience, and effort, has been focused on CPU and not GPU enthusiasts. Rarely do you see experience in one area transfer so seamlessly over to another… and yet that is precisely what Intel has been able to do in such a radically short period of time.
Make no mistake, we do not encourage or even recommend dGPU overclocking these days, as all modern dGPUs "overclock" themselves and the (now) Big Three know precisely how far to push things at the factory. Thus, it is mostly an endeavor of minutia. One you undertake not because you expect massive gains, but because it is fun. So with that in mind, we would suggest not nuking your warranty and leaving any dGPU card at stock if possible. We would even suggest underclocking to see how good your core and memory are, so as to reduce power consumption and noise output, and generally speaking increase longevity.
In either case, the fact remains that while Intel has done a bang-up job of giving you 95 to 99 percent of all the potential performance the B580 core has on tap, this is a card that is just that: fun to play with. It is so fun because of the time and effort Intel has put into their software package. We are not exaggerating when we say it makes NVIDIA and AMD overclocking look antiquated by comparison… and we doubt anyone will want to go back to MSI Afterburner after using what Intel now natively supplies.
Yes. Unlike AMD or NVIDIA, which really expect you to use 3rd party tools… and seemingly go out of their way to minimize the levers you have to push/pull on… Intel and their ARC team have gone above and beyond for the Arc BattleMage series. So much so that the Arc series includes an excellent host of basic and advanced overclocking features that are more in line with what one would expect with a CPU. Right out of the box. No trickery. No chicanery. No needing to source out 3rd party software options. Instead it is all here for you in the Intel software package. Seriously.
Want to push the voltage or power limits a touch? Easy. Intel has included it. Want to play with the memory speed (and try to claw back some of the bandwidth lost in going from the previous gen's 256-bit bus to 192-bit)? Easy. Want to set a target speed so that it will use the above to help 'self-overclock'? It does that too. Hell, want to set a temperature limit? Yup. It does that too. If all that is not impressive enough… it even takes a page from the CPU world (cough, Ryzen Master) and includes a 'tuning' feature that allows for both overclocking and underclocking with custom voltage/frequency curves!
Yes, this really is a card that was made by professionals who are passionate about what they do… and you can really feel the love. As such we do not care that the majority of the improvements one is going to obtain from overclocking a B580 card are going to stem directly from pushing the memory. Which is never a good idea if you want to keep your card for 3 or more years. It is what it is, and these days our favorite saying (when it comes to overclocking) is simple: "Overclocking is dead. It remains dead. And we have killed it"… but it still can be a bit of a laugh, especially when talking about a card that costs less than what many a kit of DDR5 RAM will set you back. Color us highly impressed… and tempted to suggest that anyone wanting to learn about dGPU overclocking should start with this bad boy. It really is consistent and novice friendly.
Testing Methodology
To fully test the abilities of a given video card, we have used a blend of in-game benchmarks and custom recorded real-world game benchmarking. For custom gameplay we have used FRAPS to record the minimum and average frame rates over a set period of time. All tests were run a minimum of four times and the scores are the average of all four runs.
All games were patched to their latest version, and the latest (as of the time of this review) Intel/AMD/NVIDIA drivers were applied. The OS was a fresh, clean install of Windows 11 with all the latest hotfixes, patches, and updates applied. All games were tested at two or three of the most popular resolutions: 1080P (1920×1080), then 1440P (2560×1440), and again (if warranted based on the class of card) at 4K (3840×2160). This means each game tested was run a minimum of 8 to 12 times: 4 @ 1080P, 4 @ 1440P, and possibly 4 @ 4K. Before testing, Unigine's Valley benchmark was run for 15 minutes to 'warm up' the video card. This was done to ensure that long-term, and not short-term, performance is being illustrated.
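For transparency, the aggregation itself is nothing fancy; a minimal sketch with hypothetical placeholder numbers showing how each reported score is produced:

```python
# How each reported score is produced: average the four recorded runs.
# The FPS values below are hypothetical placeholders, not actual results.
runs_1080p = {
    "minimum": [62.0, 60.5, 61.8, 61.1],  # FRAPS minimum FPS, one per run
    "average": [98.3, 97.9, 99.1, 98.6],  # FRAPS average FPS, one per run
}

for metric, samples in runs_1080p.items():
    score = sum(samples) / len(samples)
    print(f"{metric} FPS @ 1080P: {score:.1f}")
```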
The games and applications used for testing were:
Adobe Premiere Pro
Assassin's Creed: Valhalla
Battlefield 5
Blender
Borderlands 3
Call of Duty Modern Warfare II (2022)
Cyberpunk 2077
Crysis 3
DaVinci Resolve
Far Cry 6
Grand Theft Auto V
Handbrake
Hitman 3
Metro Exodus EE
Red Dead Redemption 2
Shadow of the Tomb Raider
Tom Clancy’s The Division 2
Topaz AI
Watch Dogs Legion
Witcher 3
For stress testing we used Unigine’s Valley benchmark.
AC: Valhalla, Cyberpunk 2077
Assassin's Creed: Valhalla
Assassin's Creed Valhalla is a 2020 action role-playing video game developed by Ubisoft Montreal and published by Ubisoft. While this game does come with an in-game benchmark, we have instead opted for five minutes of gameplay in the city of Jorvik in England, consisting of using Odin's Sight, running, fighting, and generally goofing off. An average of four runs was taken.
The settings used in the testing below are Ultra default settings with Vsync disabled, at resolutions of 1080P and 1440P.
Cyberpunk 2077
Cyberpunk 2077 is a 2020 action role-playing video game developed by CD Projekt Red and published by CD Projekt. While this game does come with an in-game benchmark, we have instead opted for five minutes of gameplay in the Arasaka Waterfront subdistrict of the Watson area, consisting of exploring, fighting, and generally goofing off in Night City.
The settings used in the testing below are Ultra default settings with Vsync disabled, at resolutions of 1080P and 1440P.
Battlefield 5, CoD: MWII
Battlefield 5
Battlefield 5 is a first-person shooter video game published by EA and released in November 2018. This game does not include an in-game benchmark. To obtain repeatable results we have used FRAPS and recorded 5 minutes of the Under No Flag level. The test begins after the cut scene where you watch the planted bombs explode and are directed to destroy enemy aircraft. This section was chosen as it combines numerous NPCs and explosions, and is generally fun to play as well as being highly repeatable.
The settings used in the testing below are Ultra display settings with Vsync disabled, at resolutions of 1080P and 1440P.
Call of Duty: MW II
Call of Duty: Modern Warfare II (2022) is a first-person shooter video game published by Activision and released in November 2022. Our test uses a custom in-game timed benchmark that consists of ten minutes of gameplay during the 'Dark Water' level. Recording starts as soon as the cutscene transition from the R.H.I.B. to the oil rig ends. This location was chosen as it offers everything from a variety of NPCs, explosions, shadows, and water surface reflections, and generally speaking is both visually complex and surprisingly hard on a video card… for a CoD game.
An average of four runs was taken.
The settings used in the testing below are 'Ultra' settings for anything that allows for Ultra, and High for anything that does not, with VSync disabled, all at resolutions of 1080P and 1440P.
Crysis 3, Far Cry 6
Crysis 3 Gaming Benchmark
Crysis 3 is a first-person shooter video game published by Electronic Arts and released in February 2013. While older than some of the others, it still puts a lot of demands on the GPU. This makes it perfect for more real-world gaming testing. To obtain repeatable results we have used FRAPS and recorded 300 seconds of the single player 'Post Human' level, starting as soon as Prophet is handed a Hammer II pistol by Psycho. An average of four runs was taken.
The settings used in the testing below are the highest settings for quality, with VSync disabled, at resolutions of 1080P and 1440P.
Texture Quality, Game Effects, Objects, Particles, Post Processing, Shadows, Shading, Water, and System Specs were all set to Very High. Motion Blur was set to High and Lens Flare was set to On. Anti-Aliasing was set to MSAA 8X and Anisotropic Filtering was set to 16x.
Far Cry 6
Far Cry 6 is an action-adventure first-person shooter video game developed by Ubisoft Montreal and Ubisoft Toronto and published by Ubisoft. It was released on October 7th 2021. As this game comes with a built-in benchmark that provides fairly realistic results, we have opted for it.
The settings used in the testing below are the Ultra graphics preset, at resolutions of 1080P and 1440P.
GTA V, Hitman 3
Grand Theft Auto 5 Gaming Benchmark
GTA V is an open-world action-adventure video game published by Rockstar Games and released in April 2015 for the PC. For testing purposes, we have used a five-minute gameplay session that takes place in and around a forest. This gameplay combines vegetation, water, fire, and even animals.
The settings used in the testing below are stock 'High' settings for graphics quality, with VSync disabled, at resolutions of 1080P and 1440P.
Hitman 3 Gaming Benchmark
Hitman 3 is an action-adventure stealth video game developed and published by IO Interactive. It was released in 2021 for the PC. The game has a benchmark component that mimics gameplay; an average of four runs was taken.
The settings used in the testing below are 'Ultra' settings for quality, with VSync disabled, FXAA, and DX12 mode, at resolutions of 1080P and 1440P.
Metro Exodus, S.T.R.
Metro Exodus Benchmark
Metro Exodus is a first-person shooter video game developed by 4A Games and published by Deep Silver in 2019. The game has a built-in benchmark component that mimics gameplay well enough that we have opted for it. An average of four runs was taken.
The settings used in the testing below are 'Ultra' settings for quality, in DX12 mode, at resolutions of 1080P and 1440P.
Shadow of the Tomb Raider
Shadow of the Tomb Raider is an action-adventure video game published by Square Enix and released in September 2018. The game has a benchmark component that mimics gameplay, and an average of four runs was taken.
The settings used in the testing below are the Highest default settings for quality (with 'Pure Hair' shading set to Normal and ambient occlusion set to HBAO+), with VSync disabled, at resolutions of 1080P, 1440P, and 4K.
The Division 2, Watch Dogs: Legion, Witcher 3
Tom Clancy’s The Division 2 Gaming Benchmark
Tom Clancy's The Division 2 is an open-world third-person shooter video game published by Ubisoft and released in March 2019. The game has a benchmark component that mimics gameplay, and an average of four runs was taken.
The settings used in the testing below are 'Ultra' preset settings for quality, using DirectX 12, with VSync disabled, at resolutions of 1080P and 1440P.
Watch Dogs: Legion
Watch Dogs: Legion is an action-adventure video game set in a fictionalized version of London. Published and developed by Ubisoft, it was released on October 29th, 2020.
Our test uses a custom in-game timed benchmark that consists of five minutes of gameplay near and around the London Eye, starting on the bridge. This location was chosen as it offers everything from a variety of NPCs, NPC vehicles, and water surface reflections, and generally speaking is both visually stunning and incredibly hard on a video card. Particularly with ray tracing turned on.
An average of four runs was taken.
The settings used in the testing below are the defaults for the 'Ultra' display settings, but with Vsync disabled and Ray Tracing enabled. All at resolutions of 1080P and 1440P.
Witcher 3
Witcher 3 is an action role-playing video game set in an open-world environment, developed by Polish video game developer CD Projekt RED and released in May 2015.
Our test uses a real-world timed benchmark that consists of three minutes of gameplay during the 'The Beast of White Orchard' quest. This section of the game incorporates everything from grass, to water, to hair, to numerous NPCs. An average of four runs was taken.
The settings used in the testing below are Ultra display settings, but with HairWorks off and Vsync disabled, at resolutions of 1080P and 1440P.
Ray Tracing Performance
To show what one can expect from a given video card's Ray Tracing performance, we have redone three of our previous game tests with Ray Tracing enabled. This is what we found.
Cyberpunk 2077
Far Cry 6
Metro Exodus E.E.
Adobe PP & Blender
Adobe Premiere Pro
Adobe Premiere Pro is a timeline-based video editing software application developed by Adobe Systems and published as part of the Adobe Creative Cloud licensing program.
Blender Benchmark
Blender is a free and open-source 3D computer graphics software toolset used for creating animated films, visual effects, art, 3D printed models, interactive 3D applications and video games. For testing we have opted for the standard ‘BMW’ demo.
DaVinci, Handbrake, Topaz AI
DaVinci Resolve Studio Benchmark
For this test we are using DaVinci Resolve Studio. We have taken a half-hour 1080P (30,000Kbps) video and exported it in MP4 format using the H.264 codec, at UltraHD (4K) resolution and 20,000Kbps. No effects beyond super-scaling, a standard fade in from black at the start, and a fade to black at the end were used.
Handbrake X265
HandBrake is a free and open-source transcoder for digital video files, originally developed in 2003 by Eric Petit. Since then it has continued to evolve. Included in its list of features is the ability to transcode existing video from x264/MPEG-4 AVC to x265/HEVC. For this test we are taking a half-hour 4K x264 file and transcoding it to 1080P using the H.265 MKV 1080p30 preset.
Topaz Labs AI
Topaz Labs has quickly made a name for themselves with their upscaling and denoise 'AI' software options. For this custom test we have taken 10 RAW photos and super-scaled them by 4 times. We then ran DeNoise on them. The result below is the combination of both timed tests.
Temperature, Noise, and Power Analysis
Video Card Temperature Results
For all temperature testing the cards were used in an open test bed environment. Ambient temperature was kept at a constant 20°C (+/- 0.5°C), and if the ambient room temperature rose above 21°C or dropped below 19°C at any time, all benchmarking was stopped until temperatures normalized.
For Idle tests, we let the system idle at the Windows 11 desktop for 25 minutes and recorded the peak temperature.
For Load tests, we ran Unigine’s Valley benchmark for 20 minutes.
Sound Level Test Results
While everyone "hears" noise differently, there is one easy way to remove all subjectivity and easily compare different cards: use a sound level meter. This way you can easily compare the various noise envelopes, without us coloring the results, and see what fits within your personal comfort level. Of course, we will endeavor to explain the various results – which are taken at a 15-inch distance from the GPU's fan(s) – to help you gain an even better understanding of how loud a cooler's stock fans are, but even if you discount our personal opinions, the fact remains numbers don't lie.
For Idle tests, we let the system idle at the Windows 11 desktop for 25 minutes and recorded the peak dB.
For Load tests, we ran Unigine’s Valley benchmark for 20 minutes and recorded the peak dB.
System Power Consumption
To obtain accurate results we have connected the system to a Power Angle power meter, which has in turn been attached to a 1500-watt UPS. This ensures only 120-volt power is supplied to the PSU and removes any variances that could potentially crop up because of brownouts and power spikes.
In order to stress the video card we have once again used Unigine's Valley benchmark, running it for 20 minutes to determine peak system power consumption. For idle results we let the system idle at the Windows 11 desktop for 25 minutes and recorded the peak idle power consumption.
Closing Thoughts
Final Score: 88 out of 100
The YouTube generation has seemingly forgotten that Intel were the ones who invented the idea of a dedicated video card slot with its own dedicated, direct-to-the-CPU lanes: the Accelerated Graphics Port ("AGP", back in 1997). They also overlook the fact that when the AGP standard proved unable to keep up with growing demands, Intel checked their own ego and helped create its replacement: the PCIe standard… which is itself based upon their earlier invention, the PCI bus standard.
The TikTok generation equally suffers from amnesia and has 'forgotten' that Intel invented the integrated GPU (circa 1999's i810, albeit in the 'Whitney' northbridge… and then Sandy Bridge with it on the CPU itself). They ignore the fact that for every PC with a dedicated GPU there is at least one happily rocking out with "just" an Intel iGPU… and those multiple generations of Intel iGPUs are still being supported with active driver updates.
This breadth and depth of institutional knowledge has been paired with a dedicated driver team that actively engages with the gaming and non-gaming communities (albeit via Discord). This potent one-two combination is why Intel is able to do what no one else has been able to do this century: break the dGPU duopoly and offer a viable third option. A third option with models that offer their own unique strengths and weaknesses. This fact alone makes Intel Arc worthy of being added to Intel's long list of industry-advancing inventions. Possibly, and in time, even right up there with the iGPU.
With all that said… Billionaire Huang of NVIDIA was not exactly worried over the increased competition; nor was his relative Billionaire Su of D.A.M.I.T. (AKA "AMD's video card division formerly known as ATI Technologies of Canuckistan") all that concerned over nuBlue entering the fray. With the second generation Arc "BattleMage" series… this will not change all that much. The fact of the matter is Intel is starting with a massive disadvantage when it comes to buyers even remotely considering a third option. The mindshare is so locked up that even AMD is having problems wooing buyers away from Leather Jacket Man.
This is such a huge disadvantage that it is going to take time to overcome. However, Intel is making massive strides on both the hardware and software fronts to reassure buyers that they are a serious option. Enough that their ARC cards can be, and are, considered 'mostly stable' by those who have investigated and used A3/5/7-class Arc cards in the real world… and not just watched an influencer video on them. That may sound like a backhanded compliment, but many, ourselves included, barely consider AMD and NVIDIA drivers "mostly stable" these days. Hell, let's be honest, there has yet to be a year without one, the other, or both releasing a bomb of a driver package, and yet Intel gets the majority of the hate over whiffing a driver.
When one combines cutting-edge software with massive improvements in the actual hardware, the end result is that the Arc B-series is no "B-side" meant to do nothing more than take up space that would otherwise go to waste. Sure, there are rough spots (especially in the minimum frame rate category), but the BattleMage is arguably the modern computing industry's equivalent of La Bamba. One that, much like La Bamba did for Ritchie Valens back in the day, will launch Intel's Arc series into the spotlight. This time in a good way.
It will do so because, for the first time in a long time, buyers can get an actually good 1080P and 1440P video card that does not cost a fortune and yet arguably offers better frame interpolation than AMD, (certainly) better Ray Tracing than AMD, more onboard RAM than either's x60-class option, and sips power like NVIDIA (used to). It does all that and, thanks to really sweet new technology, can even whistle the tune more than acceptably at 4K. Potentially offering lower latency than what can be obtained on cards costing multiple times the B580's asking price. If all that was not enough, the B580 also loves to be used in more professional, non-gaming scenarios, where it is indeed a viable alternative to the Green Monster that rules the 'pro' market. That is one unique and refreshing blend of features, to say the least.
Considering we are basing our judgment of the Intel Arc B-series only on the mid-range B580 and not a 7-class card, the Intel ARC BattleMage may just end up being not a La Bamba… but a groundbreaking option like the Beatles' "Rain". We include the latter in our judgement because "Rain" was famously the first time a major label released a song featuring reversed vocals. With the BattleMage series Intel is also reversing and flipping the script. For many generations dGPU buyers have been trained to expect a new generation to be bigger… as 'bigger = better'. The B580's core is smaller and features fewer render slices than its predecessor, and yet throughout testing we saw this 20 Xe (2.0) based option trouncing a 28 Xe (1.0) based card. In some cases it was so much better that instead of being like a 3rd generation NVIDIA RTX x60-class card, it is serious competition for the 4th generation RTX x60-class card. All in a single 8-pin package that is not as big as a house, is not as loud as a jet plane, and does not cost a significant chunk of the build's budget.
Yes, Intel is indeed doing more with less, and in turn this lets your budget do more for less. So much so that buyers can once again ask a sane question: what does spending $100 or more extra actually get me? That is the kind of question which is going to result in many a Home Theatre PC (especially one doing double duty as both a movie and gaming box) rocking one of these bad boys. The same, to an even larger extent, holds true for budget PC systems… and for video editors (who cannot afford the cost of an x80-class card) the choice is clear. Stick one of these bad boys in the rig, as it is a transcoding rocket.
It does all that and is just Intel's 'mainstream' 5-class option. Taken as a whole, the Arc BattleMage series is chock full of innovation, value, and even performance. So while the B580 may not be perfect, it demands to be added to your short list for consideration, as it is the new Value King. Furthermore, it should make you excited to see what the BattleMage 7-class version can do. We have a feeling it too is going to shake up the industry, and that too is a Good Thing™.
The Review
Intel BattleMage B580 12GB Review
Intel's Arc BattleMage series marks a bold entry into the GPU market, positioning itself as a viable alternative to NVIDIA and AMD. The B580 model delivers impressive performance at 1080P and 1440P, with efficient ray tracing and competitive pricing. However, Intel faces significant challenges in gaining market trust and overcoming driver optimization issues. While promising, Arc's success will depend on sustained innovation and convincing consumers to break away from established brands.