Let us start by saying that the intended successor to the GTX 1070 was (and is) the RTX 2070; however, as NVIDIA has trained consumers to expect over the years, the RTX 2060 was seen as the GTX 1070’s ‘spiritual’ successor, as both cards came in at about the same price bracket and the RTX 2060 does offer somewhat better performance than the GTX 1070. Due to a somewhat slow consumer embrace of Ray Tracing technology, or at least of the higher MSRPs that accompanied the RTX series, NVIDIA did have to shift gears a wee bit. To be precise, their existing RTX design does not scale down all that well, and releasing sub-60 models was most likely seen as a waste of silicon. Instead, NVIDIA cut the ray tracing and DLSS portions off their RTX design and created a ‘between’ series. A series not meant to compete directly against their RTX 20-series, but one that could woo users away from the glut of GTX 10-series cards still floating around on various e-tailer and retailer shelves.
This new in-between series was dubbed the GTX 16-series, and it offers all the low-level improvements (and the 12nm fab process) that make the RTX series so potent. Equally important, by cutting off a huge chunk of the RTX series core, NVIDIA was able to create a new GTX series that is smaller, more power efficient… and rather powerful in classic / non-Ray Tracing scenarios.
As of the date of this publication, a total of three GTX 16-series cards have been released (with a GTX 1650 Ti in the works): the GTX 1650, the GTX 1660, and the GTX 1660Ti. The NVIDIA GTX 1660Ti remains the ‘flagship’ model and, of the three, is the most intriguing. It is so interesting because it is a classic example of finesse vs brute strength when compared to the GTX 1070 that it somewhat replaces in the new NVIDIA lineup.
On paper the GTX 1660Ti does have a few things going for it. For while yes, it does have 20 percent fewer shader (aka CUDA) cores than the GTX 1070, it also has a TDP that is 20 percent lower (120 vs 150 watts). Also, while it has fewer cores (1536 vs 1920) and a lower TDP, it has noticeably more L1 cache (1536KB vs 720KB), can boost those cores to a higher frequency (1770MHz vs 1683MHz), and has all the low-level tweaks of Turing baked into each of those shaders. While yes, the GTX 1660Ti does have a noticeably smaller memory bus (192-bit vs 256-bit), it actually has more memory bandwidth thanks to its 12000MHz (effective) GDDR6. Contrasting this increase in memory bandwidth, however, is the fact that it only has 6GB total vs 8GB for the GTX 1070, and ‘gently used’ NVIDIA GTX 1070s can still be found for about the same cost as a new GTX 1660Ti.
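The bandwidth advantage is easy to sanity-check with back-of-the-envelope math: peak memory bandwidth is simply the bus width (in bytes) multiplied by the effective per-pin data rate. A quick sketch, assuming the GTX 1070’s stock 8Gbps (8000MHz effective) GDDR5, a figure not listed above:

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bytes transferred per cycle
    (bus width / 8) times effective data rate per pin (Gbps)."""
    return (bus_width_bits / 8) * data_rate_gbps

# GTX 1660Ti: 192-bit bus with 12Gbps (12000MHz effective) GDDR6
print(peak_bandwidth_gbs(192, 12.0))  # 288.0 GB/s

# GTX 1070: 256-bit bus with 8Gbps GDDR5 (assumed stock spec)
print(peak_bandwidth_gbs(256, 8.0))   # 256.0 GB/s
```

In other words, the faster GDDR6 more than offsets the narrower bus, giving the GTX 1660Ti roughly a 12 percent edge in raw memory bandwidth.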
What all this boils down to is that the difference in real-world performance may not be as significant as a CUDA count vs CUDA count comparison would lead you to believe. It really may be a case of efficiency sometimes making up for brute strength… or at least that is what NVIDIA intended when they carefully designed the GTX 1660Ti. After all, they did not want to sway too many potential buyers away from the RTX 2060, but at the same time they needed cards that could fulfill the needs of the lower-priced niches of the market. Niches that AMD was more than happy to snap up with offerings that were new rather than used, and that put their own spin on the sub-350-dollar marketplace’s needs.
Even excluding the RTX 2060 (with its bigger core, greater processing power, and higher price tag) from the equation, the GTX 1070 and GTX 1660Ti are all but begging to be pitted against one another. In the coming pages, we will be doing precisely that. We will not only show what the GTX 1660Ti offers vs the lower-priced GTX 1660, but also how it stacks up against reference, mid-grade, and some of the best GTX 1070s released during the last generation’s halcyon days. Let’s dig in and see what makes this particular NVIDIA GTX 1660Ti tick… and how it stacks up against all comers in the real world.