NVIDIA Reveals Next-Gen Turing GPU Architecture: NVIDIA Doubles-Down on Ray Tracing, GDDR6, & More

Tuesday, August 14th, 2018 - GPUs, Technology

Moments ago at NVIDIA’s SIGGRAPH 2018 keynote presentation, company CEO Jensen Huang formally unveiled the company’s much-awaited (and much-rumored) Turing GPU architecture. The next generation of NVIDIA’s GPU designs, Turing incorporates a number of new features and is rolling out this year. While the focus of today’s announcements is on the professional visualization (ProViz) side of matters, we expect to see the architecture used in other upcoming NVIDIA products as well. And by the same token, today’s reveal should not be considered an exhaustive listing of all of Turing’s features.

Hybrid Rendering & Neural Networking: RT & Tensor Cores

So what does Turing bring to the table? The marquee feature, at least for NVIDIA’s ProViz crowd, is hybrid rendering, which combines ray tracing with traditional rasterization to exploit the strengths of both technologies. This announcement is essentially a continuation of NVIDIA’s RTX announcement from earlier this year; if you thought that announcement was a little sparse, here is the rest of the story.

The big change here is that NVIDIA is including even more ray tracing hardware with Turing in order to offer faster and more efficient hardware ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core, a dedicated ray tracing processor whose underpinnings we aren’t fully informed on at this time. NVIDIA states that the fastest Turing parts can cast 10 billion (Giga) rays per second, a 25x improvement in ray tracing performance over the unaccelerated Pascal architecture.

The Turing architecture also carries over the tensor cores from Volta, which are an important aspect of multiple NVIDIA initiatives. Along with speeding up ray tracing itself, NVIDIA’s other tool in their bag of tricks is to reduce the number of rays required in a scene by using AI denoising to clean up an image, which is something the tensor cores excel at. Of course, that’s not the only thing the tensor cores are for – NVIDIA’s entire AI/neural networking empire is all but built on them – so while not a primary focus for the SIGGRAPH crowd, this also confirms that NVIDIA’s most powerful neural networking hardware will be coming to a wider range of GPUs.

Interestingly, despite these individual speed-ups, NVIDIA’s overall performance promises aren’t quite as extreme. All told, the company is promising a 6x performance boost versus Pascal, though it doesn’t specify which parts are being compared. Time will tell if even this is a realistic assessment, as even with the RT cores, ray tracing in general is still quite the resource hog.

Meanwhile, to better take advantage of the tensor cores outside of ray tracing and specialty deep learning software, NVIDIA will be rolling out an SDK, NVIDIA NGX, to integrate neural networking into image processing. Details here are sparse, but NVIDIA is envisioning using neural networking and the tensor cores for additional image and video processing, including methods like the upcoming Deep Learning Anti-Aliasing (DLAA).

Turing SM: Variable Rate Shading, Dedicated INT Cores, & More

Alongside the dedicated RT and tensor cores, the Turing architecture Streaming Multiprocessor (SM) itself is also learning some new tricks. In particular here, it’s inheriting one of Volta’s more novel changes, which saw the Integer cores separated out into their own blocks, as opposed to being a facet of the Floating Point CUDA cores. The advantage here – at least as much as we saw in Volta – is that it speeds up address generation and Fused Multiply Add (FMA) performance, though as with a lot of aspects of Turing, there’s likely more to it (and what it can be used for) than we’re seeing today.

Speaking of ALUs, NVIDIA has confirmed that Turing supports “variable rate shading”, which is their term for shader performance scaling with the size of the data type. In Volta this manifested as FP16 operations running at 2x the FP32 rate and INT8 operations at 4x the INT32 rate, and I expect much the same here, subject to further confirmation. Variable rate shading, rapid packed math, and other means of packing multiple smaller operations into a single larger operation are all key components of improving GPU performance at a time when Moore’s Law is slowing down. By using data types only as large (precise) as necessary, it’s possible to pack them together to get more work done in the same period of time. This in turn is particularly important to neural networking inference and similar workloads, as thus far most neural networking models have shown that they don’t need nearly as much precision as FP32/INT32 offers.
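
To make the packed math idea concrete, here is a minimal device-side CUDA sketch; the kernel name and layout are illustrative assumptions, not anything NVIDIA has published, but the intrinsic itself dates back to Pascal/Volta-era CUDA:

```cuda
#include <cuda_fp16.h>

// A minimal sketch of packed FP16 math: each __half2 register holds two
// 16-bit values, so a single __hfma2 instruction performs two fused
// multiply-adds at once. This packing is how FP16 throughput can reach
// 2x the FP32 rate on hardware that supports it.
__global__ void fma_fp16x2(const __half2* a, const __half2* b,
                           const __half2* c, __half2* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // out[i] = a[i] * b[i] + c[i], computed on both 16-bit lanes at once
        out[i] = __hfma2(a[i], b[i], c[i]);
    }
}
```

The same principle extends further down the precision ladder: four INT8 values fit into one 32-bit register, which is how INT8 operations can reach 4x the INT32 rate.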

The Turing SM also includes what NVIDIA is calling a “unified cache architecture.” As I’m still awaiting official SM diagrams from NVIDIA, it’s not clear if this is the same kind of unification we saw with Volta – where the L1 cache was merged with shared memory – or if NVIDIA has gone one step further. At any rate, NVIDIA says it offers twice the bandwidth of the “previous generation,” though it’s unclear whether that means Pascal or Volta (with the latter being more likely).

Feeding the Beast: GDDR6 Support

As the memory used by GPUs is developed by outside companies, there are no big secrets here. JEDEC and its big three members – Samsung, SK Hynix, and Micron – have all been developing GDDR6 memory as the successor to both GDDR5 and GDDR5X, and NVIDIA has confirmed that Turing will support it. Depending on the manufacturer, first-generation GDDR6 is generally promoted as offering up to 16Gbps per pin of memory bandwidth, which is 2x that of NVIDIA’s late-generation GDDR5 cards, and 40% faster than NVIDIA’s most recent GDDR5X cards.

GPU Memory Math: GDDR6 vs. HBM2 vs. GDDR5X

| | NVIDIA Quadro RTX 8000 (GDDR6) | NVIDIA Quadro RTX 5000 (GDDR6) | NVIDIA Titan V (HBM2) | NVIDIA Titan Xp (GDDR5X) | NVIDIA GeForce GTX 1080 Ti (GDDR5X) | NVIDIA GeForce GTX 1080 (GDDR5X) |
|---|---|---|---|---|---|---|
| Total Capacity | 24 GB | 16 GB | 12 GB | 12 GB | 11 GB | 8 GB |
| B/W Per Pin | 14 Gbps | 14 Gbps | 1.7 Gbps | 11.4 Gbps | 11 Gbps | 11 Gbps |
| Chip Capacity | 2 GB (16 Gb) | 2 GB (16 Gb) | 4 GB (32 Gb) | 1 GB (8 Gb) | 1 GB (8 Gb) | 1 GB (8 Gb) |
| No. Chips/KGSDs | 12 | 8 | 3 | 12 | 11 | 8 |
| B/W Per Chip/Stack | 56 GB/s | 56 GB/s | 217.6 GB/s | 45.6 GB/s | 44 GB/s | 44 GB/s |
| Bus Width | 384-bit | 256-bit | 3072-bit | 384-bit | 352-bit | 256-bit |
| Total B/W | 672 GB/s | 448 GB/s | 652.8 GB/s | 547.7 GB/s | 484 GB/s | 352 GB/s |
| DRAM Voltage | 1.35 V | 1.35 V | 1.2 V (?) | 1.35 V | 1.35 V | 1.35 V |

Relative to GDDR5X, GDDR6 is not quite as big of a step up as some past memory generations, as many of GDDR6’s innovations were already baked into GDDR5X. Nonetheless, alongside HBM2 for very high-end use cases, it is expected to become the backbone memory of the GPU industry. The principal changes here include lower operating voltages (1.35V), and internally the memory is now divided into two memory channels per chip. For a standard 32-bit wide chip, this means a pair of 16-bit memory channels, for a total of 16 such channels on a 256-bit card. While this means a very large number of channels, GPUs are well-positioned to take advantage of it, since they are massively parallel devices to begin with.

NVIDIA has not confirmed whether they’ll actually be using memory running at 16Gbps – it’s worth noting that Micron’s product catalog only goes up to 14Gbps – however in the same breath, NVIDIA has confirmed that they will support Samsung’s cutting-edge 16Gb capacity modules. This is important, as it means for a typical 256-bit GPU, NVIDIA could outfit the card with the standard 8 modules and get 16GB of total capacity, or even 32GB if they use clamshell mode.
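
As a sanity check on that channel and capacity math, here is a quick back-of-the-envelope sketch in host-side C++; the 256-bit bus is simply the example configuration from above, not a confirmed Turing product:

```cpp
#include <cstdio>

int main()
{
    // Example GDDR6 configuration from the discussion above (an assumption,
    // not a confirmed product): 256-bit bus, 32-bit chips, 16Gb modules.
    const int bus_width_bits    = 256;
    const int chip_width_bits   = 32;   // one GDDR6 package
    const int channels_per_chip = 2;    // GDDR6 splits each chip into 2x 16-bit channels
    const int chip_capacity_gb  = 2;    // Samsung's 16Gb module = 2GB

    int chips     = bus_width_bits / chip_width_bits;  // 8 chips
    int channels  = chips * channels_per_chip;         // 16 channels
    int capacity  = chips * chip_capacity_gb;          // 16GB standard
    int clamshell = capacity * 2;                      // 32GB with two chips per interface

    printf("%d chips, %d channels, %dGB standard, %dGB clamshell\n",
           chips, channels, capacity, clamshell);
    return 0;
}
```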

Odds & Ends: NVLink & VirtualLink

Rounding out the Turing package thus far, NVIDIA has also briefly confirmed some of the external I/O features that the architecture will support. NVLink support will be present on at least some Turing products, and NVIDIA is tapping it for all three of their new Quadro cards. In the case of all of these products, NVIDIA is offering two-way GPU configurations. I am assuming based on this that we are looking at two NVLinks per board – similar to the Quadro GV100 – however I’m waiting on confirmation of that given NVIDIA’s 100GB/sec transfer number.

One thing I’d like to note here before any of our more gaming-focused audience reads too much into this is that the presence of NVLink in Turing hardware doesn’t mean it’ll be used in consumer parts. Today’s event is all about ProViz, and it would be an entirely NVIDIA thing to do to limit that feature to Quadro and Tesla only. So we’ll see what happens once NVIDIA announces their obligatory consumer cards.

USB Type-C Alternate Modes

| | VirtualLink | DisplayPort (4 Lanes) | DisplayPort (2 Lanes) | Base USB-C |
|---|---|---|---|---|
| Video Bandwidth (Raw) | 32.4 Gbps | 32.4 Gbps | 16.2 Gbps | N/A |
| USB 3.x Data Bandwidth | 10 Gbps | N/A | 10 Gbps | 10 Gbps + 10 Gbps |
| High Speed Lane Pairs | 6 | 4 | 4 | 4 |
| Max Power | Mandatory: 15W, Optional: 27W | Optional: Up To 100W | Optional: Up To 100W | Optional: Up To 100W |

Meanwhile gamers and ProViz users alike have something to look forward to for VR, with the addition of VirtualLink support. The USB Type-C alternate mode was announced last month, and supports 15W+ of power, 10Gbps of USB 3.1 Gen 2 data, and 4 lanes of DisplayPort HBR3 video all over a single cable. In other words, it’s a DisplayPort 1.4 connection with extra data and power that is intended to allow a video card to directly drive a VR headset. The standard is backed by NVIDIA, AMD, Oculus, Valve, and Microsoft, so Turing products will be the first of what we expect will ultimately be a number of products supporting the standard.

Performance Numbers

Along with the hardware specifications announced thus far, NVIDIA has also thrown out a handful of performance numbers for Turing hardware. It should be noted that there is a lot more we don’t know here than we do. At a high level, however, these appear to be based around a mostly or completely enabled high-end Turing SKU featuring 4608 CUDA cores and 576 tensor cores. Clockspeeds are not disclosed; however, as these numbers are profiled against Quadro hardware, we’re likely looking at lower clockspeeds than we’ll see in any consumer hardware.

NVIDIA Quadro Specification Comparison

| | RTX 8000 | GV100 | P6000 | M6000 |
|---|---|---|---|---|
| CUDA Cores | 4608 | 5120 | 3840 | 3072 |
| Tensor Cores | 576 | 640 | N/A | N/A |
| ROPs | 96? | 128 | 96 | 96 |
| Boost Clock | ~1730MHz? | ~1450MHz | ~1560MHz | ~1140MHz |
| Memory Clock | 14Gbps GDDR6 | 1.7Gbps HBM2 | 9Gbps GDDR5X | 6.6Gbps GDDR5 |
| Memory Bus Width | 384-bit | 4096-bit | 384-bit | 384-bit |
| VRAM | 48GB | 32GB | 24GB | 24GB |
| ECC | ? | Full | Partial | Partial |
| Half Precision | 32 TFLOPs? | 29.6 TFLOPs? | N/A | N/A |
| Single Precision | 16 TFLOPs | 14.8 TFLOPs | 12 TFLOPs | 7 TFLOPs |
| Double Precision | ? | 7.4 TFLOPs | 0.38 TFLOPs | 0.22 TFLOPs |
| Tensor Performance | 500T "TOPs" (INT4) | 118.5T FLOPs (FP16) | N/A | N/A |
| TDP | ? | 250W | 250W | 250W |
| GPU | Unnamed Turing | GV100 | GP102 | GM200 |
| Die Size | 754mm² | 815mm² | 471mm² | 601mm² |
| Transistor Count | 18.6B | 21.1B | 11.8B | 8B |
| Architecture | Turing | Volta | Pascal | Maxwell 2 |
| Manufacturing Process | ? | TSMC 12nm FFN | TSMC 16nm | TSMC 28nm |
| Launch Date | Q4 2018 | March 2018 | October 2016 | March 2016 |

Along with the aforementioned 10 GigaRays/sec figure for the RT cores, NVIDIA is touting 500 trillion tensor operations per second (500T TOPs) for the tensor cores. For reference, NVIDIA frequently quotes the GV100 GPU as maxing out at 120T TOPs; however, it’s not clear if these values are measured with the same precision math (e.g. FP16 vs. INT8). At any rate, what we do know is that the 576 tensor cores in this chip come quite close to the 640 offered by GV100, but at the end of the day still fall short.

As for the CUDA cores, NVIDIA is saying that the Turing GPU can offer 16 TFLOPS of performance. This is slightly ahead of the 15 TFLOPS single precision performance of the Tesla V100, or even a bit farther ahead of the 13.8 TFLOPS of the Titan V. Or if you’re looking for a more consumer-focused reference, it’s about 32% more than the Titan Xp. Some quick paper napkin math with these figures would put the GPU clockspeed at around 1730MHz, assuming there have been no other changes at the SM level which would throw off the traditional ALU throughput formulas.
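
For anyone who wants to replicate the napkin math, the traditional throughput formula is FLOPS = CUDA cores x 2 (one fused multiply-add counts as two floating point operations) x clockspeed. A quick sketch solving for the clock with the announced figures:

```cpp
#include <cstdio>

int main()
{
    const double tflops_fp32 = 16.0;          // announced single precision throughput
    const int    cuda_cores  = 4608;          // announced CUDA core count
    const double flops_per_core_clock = 2.0;  // one FMA per core per clock = 2 FLOPs

    // clock = FLOPS / (cores * FLOPs-per-core-per-clock)
    double clock_mhz = (tflops_fp32 * 1e12) / (cuda_cores * flops_per_core_clock) / 1e6;
    printf("Implied boost clock: ~%.0f MHz\n", clock_mhz);  // ~1736 MHz
    return 0;
}
```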

Meanwhile NVIDIA has not offered us any guidance on memory bandwidth. However with the top two Quadro SKUs offering 48GB and 24GB of GDDR6 respectively, we are almost certainly looking at a 384-bit memory bus for this Turing GPU. With even the slowest grade of GDDR6 (12Gbps), this puts memory bandwidth at a minimum of 576GB/sec, and potentially as high as 768GB/sec.
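
Those bandwidth endpoints come straight from the standard formula, bandwidth = bus width x per-pin data rate / 8 bits-per-byte; here is a quick sketch over the GDDR6 speed grades, assuming the presumed 384-bit bus:

```cpp
#include <cstdio>

int main()
{
    const int bus_width_bits = 384;                         // presumed Turing bus width
    const double speed_grades_gbps[] = {12.0, 14.0, 16.0};  // GDDR6 per-pin rates

    for (double rate : speed_grades_gbps) {
        // GB/s = total bits transferred per second across the bus, divided by 8
        double bandwidth_gbs = bus_width_bits * rate / 8.0;
        printf("%4.0f Gbps -> %3.0f GB/s\n", rate, bandwidth_gbs);
    }
    return 0;
}
```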

Otherwise with the architectural shift, it’s difficult to make too many useful performance comparisons, especially against Pascal. From what we’ve seen with Volta, NVIDIA’s overall efficiency has gone up, especially in well-crafted compute workloads. So the roughly 33% improvement in on-paper compute throughput versus the Quadro P6000 may very well be underestimating things. As for consumer product speculation, I’ll hold off on that entirely.

Coming in Q4 2018, If Not Sooner

Wrapping things up, alongside the Turing architecture announcement, NVIDIA has announced that the first three Quadro cards based on Turing GPUs – the Quadro RTX 8000, RTX 6000, and RTX 5000 – will be shipping in Q4 of this year. As the very nature of this reveal is somewhat inverted – normally NVIDIA announces consumer parts first – I wouldn’t necessarily apply that same timeline to consumer cards, which don’t face such stringent validation requirements. Still, this means we’ll see Turing hardware in Q4 of this year, if not sooner. Interested Quadro buyers will want to start saving their pennies now: a top-tier Quadro RTX 8000 will set you back a cool $10,000.

Finally, as for NVIDIA’s Tesla customers, the Turing launch leaves Volta in a state of flux. NVIDIA has not told us whether Turing will eventually expand into the high-end Tesla space – replacing the GV100 – or if their one-off Volta part will remain the master of its domain for a generation. However as the other Tesla cards have been Pascal-powered thus far, they are very clear candidates for a Turing treatment in 2019.

(This is a developing story)
