Nvidia Titan RTX: the most powerful graphics card is now official with 16 TFLOPS

The most powerful graphics card in the world is here: priced at $2,499 US or 2,699 euros, it comes loaded with 24 GB of GDDR6 memory, twice that of the RTX 2080 Ti.

It is official: Nvidia has presented its new beast, the Titan RTX, the most powerful graphics card on the planet, and its figures are exaggerated in every sense. The new Nvidia GPU carries a recommended price of 2,699 euros and arrives just two months after the presentation of the new RTX family with the 2070, 2080 and 2080 Ti.

Double the price of the RTX 2080 Ti; what about the power?

The new Titan RTX is also based on the new Turing architecture, like its smaller siblings in the 20-series, although in this case the numbers are genuinely surprising. It packs up to 4,608 CUDA cores and 576 Tensor cores, with a base frequency of 1,350 MHz and a boost of 1,770 MHz. Perhaps most surprising of all is its amount of memory: up to 24 GB of GDDR6. And its raw power? Continuing with such brutal figures, the new Titan RTX reaches 16.3 TFLOPS, comfortably exceeding the 14.2 TFLOPS of the RTX 2080 Ti and the 13.8 TFLOPS of the Titan V, which promises 4K gaming at maximum detail even in the most demanding titles on the market, while also taking full advantage of the new Ray Tracing technology.
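
As a sanity check on those figures, single-precision throughput is conventionally estimated as CUDA cores × 2 FLOPs per clock (one fused multiply-add) × boost clock. Here is a quick sketch of that arithmetic; the RTX 2080 Ti reference values (4,352 cores, 1,635 MHz Founders Edition boost) are our own and not taken from the text above:

```python
# Rough FP32 throughput estimate: cores * 2 FLOPs per clock (one FMA) * boost clock.
def fp32_tflops(cuda_cores: int, boost_mhz: float) -> float:
    return cuda_cores * 2 * (boost_mhz * 1e6) / 1e12

print(f"Titan RTX:   {fp32_tflops(4608, 1770):.1f} TFLOPS")  # ~16.3 TFLOPS
print(f"RTX 2080 Ti: {fp32_tflops(4352, 1635):.1f} TFLOPS")  # ~14.2 TFLOPS (FE boost clock assumed)
```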

On the other hand, the GPU in the new Titan RTX is the same as in the RTX 2080 Ti, the TU102, although here it is a fully unlocked version of the chip, as in Nvidia's Quadro series. In terms of connections it offers one HDMI port, three DisplayPort outputs and a USB Type-C port. The new Titan RTX is expected to reach the market in December at a price of, as we said, no less than $2,499. You can already sign up on the official NVIDIA website to be notified when stock is available.

ASRock announces the X399 Phantom Gaming 6 motherboard for Threadripper processors

Manufacturers are encouraged to bring more and more motherboards for Ryzen Threadripper processors to market because these CPUs are a sales success for AMD.

The good performance-to-price ratio of the Ryzen Threadripper 2950X makes it ideal for enthusiasts, professionals and prosumers, a sector where a lot of money is made. ASRock's X399 Phantom Gaming 6 motherboard follows the company's usual approach, dispensing with the most expensive gamer-oriented lighting and aesthetics in order to offer a more economical product.

It has the usual eight memory slots for DDR4 modules at up to 3,400 MHz before entering overclocking territory beyond the use of memory profiles. It also has three PCIe 3.0 ×16 slots that allow up to three graphics cards in SLI or CrossFire and, curiously, no legacy PCI slot at all. That frees up space for two M.2 slots for drives up to the 2280 and 22110 form factors.

It includes headers for RGB connections, and only has red lighting around the chipset heatsink. The motherboard is powered by the usual ATX connector plus two eight-pin connectors. It includes four four-pin connectors for water pumps and liquid cooling that can deliver up to 15 W, alongside several additional four-pin headers for case fans. The audio codec is a Realtek ALC1120 with 7.1 audio, and the Ethernet controller is a Dragon RTL8125AG for 2.5 Gigabit Ethernet connectivity, plus an Intel I211AT for an additional Gigabit Ethernet connection. The back panel includes a PS/2 connector, an optical S/PDIF output, eight USB 3.0 ports, one USB 3.1 port and one USB 3.1 Type-C port, plus five 3.5 mm audio jacks. The motherboard is ATX format, measuring 24.4 cm × 30.5 cm.


Justice will be the first video game to implement Nvidia DLSS and Ray Tracing technology

Justice will be the first videogame to implement both Nvidia DLSS and Ray Tracing technology. Hopefully the headline will not confuse anyone: the first video game to implement Ray Tracing on its own was Battlefield V, and more games are expected soon. The next to reach the market with support for Nvidia's technology will be Justice, an MMORPG from China. It may seem a bit absurd for an MMORPG, a genre with many characters on screen, to adopt a technology that can cause sharp drops in performance, but we all want better graphics quality in our games.

As we said, Justice will not only make use of Ray Tracing; it will also use Nvidia DLSS, the AI-based anti-aliasing filter, which should improve performance by between 30% and 40% and so make the ray tracing penalty less severe. Battlefield V does not have DLSS support, which is why it loses between 40% and 60% of its performance when the technology is active in the new EA title.
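
To put those percentages together, here is a back-of-the-envelope sketch of what a DLSS uplift on top of a ray tracing penalty means for frame rate; the 100 FPS baseline and the exact 50%/35% figures are illustrative picks from the ranges quoted above, not measured numbers:

```python
# Illustrative only: how a DLSS uplift partially offsets a ray tracing penalty.
baseline_fps = 100.0   # hypothetical frame rate with RT and DLSS disabled
rt_loss = 0.50         # assume RT costs 50% (the article cites 40-60%)
dlss_gain = 0.35       # assume DLSS recovers 35% (the article cites 30-40%)

with_rt = baseline_fps * (1 - rt_loss)        # 50.0 FPS with ray tracing alone
with_rt_dlss = with_rt * (1 + dlss_gain)      # 67.5 FPS with ray tracing + DLSS
print(f"RT only:   {with_rt:.1f} FPS")
print(f"RT + DLSS: {with_rt_dlss:.1f} FPS")
print(f"Net loss vs baseline: {(1 - with_rt_dlss / baseline_fps):.1%}")  # 32.5%
```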

If you want to see Justice first-hand, here is the first trailer of the game using the new Nvidia technology:

https://www.youtube.com/watch?v=4gqzZREHhMY

In the video we can see how the new ray tracing technology gives the game a spectacular look and, of course, greater realism, which in the end also matters. We hope that, with DLSS working alongside Ray Tracing, frame rates of more than 60 FPS at 4K will be achievable.

Nvidia RTX 2060 significantly faster than the GTX 1060 according to leaked report

More good news from Nvidia, as its RTX series cards continue to show significant improvements over the GTX series video cards.

According to Tom's Hardware, a leaked benchmark shows the Nvidia RTX 2060 running about 30 percent faster than the GTX 1060. Although we still don't know whether the new card will actually be called the RTX 2060, it is showing great potential.

The RTX 2060 appeared in the Final Fantasy XV benchmark database with clearly superior performance to the GTX 1060, placing it between the AMD Vega 56 and the GeForce GTX 1070.

All this speculation could come to a head when NVIDIA launches the 2060 and fine-tunes its drivers; at that point we could see an even bigger improvement in FPS.

The arrival of the NVIDIA RTX 2080 Ti is delayed again

NVIDIA asks for patience and apologizes for the delays of its RTX 2080 Ti

A few months ago, NVIDIA presented its new graphics cards with the Turing architecture, the RTX 2080 and RTX 2080 Ti. While the first has units available for sale, the second is either scarce or not available at all, and in the case of the NVIDIA Founders Edition availability has been delayed yet again.

Only a few days after the official launch and presentation (September 20), NVIDIA itself stated that the GeForce RTX 2080 Ti Founders Edition would not be available by that date. That angered many critics, because pre-orders were limited to the units in stock, after which the card could no longer be ordered from its website. In other words, NVIDIA knew exactly how much stock it could offer its users, yet for some reason every date keeps slipping.

The only charitable explanation would be that they did not expect such a strong reception and kept pre-orders open for too long; then again, perhaps they did it on purpose to attract more customers even at the cost of delays. In any case, this is speculation and we will probably never know. What we do know is that NVIDIA, through an official statement, postponed delivery of the RTX 2080 Ti to between October 5 and 9.

According to sources, the GeForce RTX 2080 Ti Founders Edition is unlikely to be back in stock until the mid- to late-October window at the earliest. While the Founders Edition is priced at $1,199 and the standard MSRP is $999, around $1,200 is the going price observed for all 2080 Ti cards currently on the market, and inflated prices will likely be the norm for weeks or perhaps months, something to be expected with new video card releases.

All these delays point to an unspecified and unclear problem; NVIDIA simply says that it has issues in its supply chain and that meeting customers' needs and delivery dates for the RTX 2080 Ti is proving to be a "challenge".

We hope Nvidia can solve this problem ASAP.

NVSlimmer: How to remove bloatware from Nvidia drivers

This simple program allows us to modify the installation package.

The latest version of the Nvidia drivers for Windows takes up more than half a gigabyte. It basically behaves like an operating system installed inside another operating system. It would be great if the company offered "lite" editions (for example, without telemetry or GeForce Experience) as well as a way to customize the entire process without hiding modules.

In other words, we would love to know how to remove the bloatware from the Nvidia drivers, but until an official response appears, we can use the new NVSlimmer.

How to remove bloatware from Nvidia drivers with NVSlimmer

520 megabytes. That is the size of the installer for version 411.70 of the Nvidia driver for Windows 10 64-bit. There are Linux distros much smaller than that, and an unpatched ISO image of the old Windows XP SP3 hovers around 600 megabytes. We are supposed to accept that graphics cards have become very complex and powerful and therefore need giant drivers. Fortunately, video game enthusiasts love to challenge official positions, and over time they have worked out how to remove the bloatware from Nvidia drivers by modifying a few .cfg files and deleting folders. The manual process is effective but tedious, and Guru3D forum user uKER decided to automate it a bit. The result is NVSlimmer.
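
For the curious, the manual process that NVSlimmer automates boils down to unpacking the installer (it is a 7-Zip-compatible archive), deleting the component folders you do not want, and removing their entries from setup.cfg so the installer does not look for them. Below is a rough Python sketch of that idea; the extraction path and folder names are illustrative and the setup.cfg format varies between driver versions, so treat this as a description of the procedure rather than a ready-made tool:

```python
# Sketch of the manual "slimming" that NVSlimmer automates.
# Assumes the driver .exe has already been extracted (e.g. with 7-Zip) into DRIVER_DIR.
# The folder names below are illustrative; inspect the actual package contents first.
import re
import shutil
from pathlib import Path

DRIVER_DIR = Path(r"C:\temp\nvidia_411.70")   # hypothetical extraction path
UNWANTED = ["GFExperience", "NvTelemetry", "ShadowPlay", "Update.Core"]

# 1) Delete the component folders we do not want installed.
for name in UNWANTED:
    folder = DRIVER_DIR / name
    if folder.is_dir():
        shutil.rmtree(folder)
        print(f"removed {folder}")

# 2) Strip references to those components from setup.cfg so the installer
#    does not try to process the deleted folders.
cfg = DRIVER_DIR / "setup.cfg"
lines = cfg.read_text(encoding="utf-8", errors="ignore").splitlines()
kept = [line for line in lines
        if not any(re.search(re.escape(name), line, re.I) for name in UNWANTED)]
cfg.write_text("\n".join(kept) + "\n", encoding="utf-8")
print(f"setup.cfg: kept {len(kept)} of {len(lines)} lines")
```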

NVSlimmer is at a relatively early stage of development, and users have not hesitated to report problems with more exotic configurations, but its basic functionality is impeccable. The idea is that everything left unchecked in the NVSlimmer interface is removed from the driver package, while the modules considered "critical" come pre-selected. There are three in total: Core Display Driver, Install Core, and PhysX. HD Audio is recommended if you use audio over HDMI; the dependency mapping for the remaining modules is still a work in progress, but there is already a fairly solid idea of how closely some modules are related.

Once the driver has been purged, we can proceed with a normal installation or repackage it, which creates a self-extracting .exe. In the specific case of the 411.70 driver, choosing the three essential modules plus HD Audio reduced the installation package by 144 megabytes, but the most important thing is that the bloatware never reaches our system. As always, any operation involving modified drivers should be considered experimental. There is no support here beyond asking in the Guru3D forums, and if something goes wrong, you are on your own. Finally, remember that bloatware is not exclusive to drivers: you also have to remove it from new computers and, of course, from Windows 10.


Nvidia Quadro RTX 6000 and RTX 5000 now up for pre-order

Nvidia has opened pre-orders on its website for its new Quadro RTX 6000 and RTX 5000 graphics cards, based on its advanced Turing architecture. Here is a rundown of the prices of these new cards, as well as their most important characteristics.

Nvidia has opened pre-sales of the new Quadro RTX graphics cards based on the advanced Turing architecture. The new Nvidia Quadro RTX 6000 is priced at $6,300, with a limit of 5 units per customer. The Nvidia Quadro RTX 5000, meanwhile, is priced at $2,300 and was already sold out at the time of writing.

The Quadro RTX 6000 maxes out the TU102 silicon with 4,608 CUDA cores, 576 Tensor cores, 72 RT cores, and 24 GB of GDDR6 memory across a 384-bit memory bus. This makes it the cheapest graphics card to use the Nvidia TU102 silicon in all its glory (we recommend reading our earlier post on Nvidia's announcement of the Quadro RTX cards, the first capable of running ray tracing). The Quadro RTX 8000, which is priced at $10,000 but is not yet available to order, pairs the same TU102 core with 48 GB of memory and higher clocks than the RTX 6000.

As for the Quadro RTX 5000, it maxes out the TU104 silicon with 3,072 CUDA cores, 384 Tensor cores, 48 RT cores and 16 GB of GDDR6 memory over the chip's 256-bit interface.
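
As a quick aside, those bus widths translate into theoretical memory bandwidth as bus width (in bits) ÷ 8 × effective data rate per pin. Assuming 14 Gbps GDDR6 chips on both cards (our assumption; the data rate is not stated above), the arithmetic looks like this:

```python
# Theoretical memory bandwidth = (bus width in bits / 8) * effective data rate per pin.
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

# Assuming 14 Gbps GDDR6 on both cards (our assumption, not stated in the article).
print(f"Quadro RTX 6000 (384-bit): {bandwidth_gb_s(384, 14):.0f} GB/s")  # 672 GB/s
print(f"Quadro RTX 5000 (256-bit): {bandwidth_gb_s(256, 14):.0f} GB/s")  # 448 GB/s
```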

Recall that the Quadro series comes with a set of business features and certifications for the main content creation applications that are not available in the GeForce series. These cards are therefore better suited to professional use, although GeForce cards can also be used. The Quadro series also uses higher quality components to ensure greater reliability under 24/7 use.

NVIDIA NVLink vs. SLI: differences and performance test

The appearance of the NVIDIA NVLink connector on the new GeForce RTX 2000 graphics cards, as a replacement for the traditional SLI connector, has made more than one enthusiast wonder why the change was made and whether one technology is better than the other for gaming. In this tutorial we will explain both and see, with data, which of the two is better.

Before getting into the details, it is worth explaining the differences between NVIDIA NVLink and SLI. SLI (Scalable Link Interface) is a technology that NVIDIA brought to market in 2004, based on the Scan Line Interleave technology developed by 3dfx for its Voodoo 2 graphics cards. Ideally, this technology distributes the workload between several identical graphics cards, multiplying the system's overall compute capacity.

In an SLI configuration, one graphics card acts as the master and the remaining cards act as slaves directed by it. An important drawback of this technology lies in the limited bandwidth available for the cards to share information and, above all, in the fact that the bridge is a unidirectional bus. This introduces latencies that mean the performance gain per added card shrinks as the number of cards on the bus grows.
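
To make that diminishing-returns point concrete, here is a toy model in which every added card pays a fixed synchronization cost over the bridge. The 10% overhead figure is invented purely for illustration and is not a measured SLI number:

```python
# Toy model of multi-GPU scaling with a fixed per-card synchronization overhead.
# The 10% overhead is an invented illustrative figure, not a measured SLI value.
def effective_speedup(num_gpus: int, sync_overhead: float = 0.10) -> float:
    # Ideal speedup would be num_gpus; each extra GPU adds one unit of sync overhead.
    return num_gpus / (1 + sync_overhead * (num_gpus - 1))

for n in (1, 2, 3, 4):
    print(f"{n} GPU(s): ~{effective_speedup(n):.2f}x")
# 1 -> 1.00x, 2 -> 1.82x, 3 -> 2.50x, 4 -> 3.08x: the gain per added card keeps shrinking.
```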

Comparison of performance between NVLink and SLI

NVIDIA GeForce GTX 1080 Ti, Quadro GP100 and Quadro GV100 graphics cards were used to compare the performance of NVLink and SLI. For the latter two, both SLI mode and full NVLink mode were tested, since these professional graphics cards support both.

As we can see, between graphics cards of similar architectures, such as the GeForce GTX 1080 Ti and the Quadro GP100, there is hardly any difference in gaming performance between SLI and NVLink, except in Far Cry 5, which shows a quite spectacular increase. The general trend, however, is that there is practically no performance gap between the two technologies. This would confirm NVIDIA's claim that, in multi-GPU gaming configurations, the performance difference between SLI and NVLink is minimal, as the benchmark results show.

HDR on the RTX 2080 and 2080 Ti: do they lose performance as in Pascal?

A little over a week ago we learned that the Pascal architecture was having problems with some HDR-enabled games. So far we have little data on Turing in this regard, so is it possible that the RTX 2080 and 2080 Ti also lose performance when HDR is activated?

NVIDIA has not yet commented on the cause of Pascal's problems in certain games; in others it is not an issue and SDR vs. HDR performance is practically the same. All indications point to a driver problem, specifically in the HDR YCbCr 4:2:2 configuration (but not in the YCbCr 4:4:4 or RGB modes), where Pascal cards "mysteriously" lose performance when performing the chroma subsampling.
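
For context, chroma subsampling reduces how much color data is sent per pixel: YCbCr 4:4:4 (like RGB) carries three samples per pixel, while 4:2:2 averages two. The raw pixel data rate at 4K60, ignoring blanking intervals, therefore changes as in this small calculation:

```python
# Raw pixel data rate (ignoring blanking) for different chroma formats at 4K60.
def data_rate_gbit_s(width, height, fps, bits_per_sample, samples_per_pixel):
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

for bits in (8, 10):
    full = data_rate_gbit_s(3840, 2160, 60, bits, 3)  # YCbCr 4:4:4 / RGB: 3 samples per pixel
    sub = data_rate_gbit_s(3840, 2160, 60, bits, 2)   # YCbCr 4:2:2: 2 samples per pixel on average
    print(f"{bits}-bit  4:4:4: {full:5.1f} Gbit/s   4:2:2: {sub:5.1f} Gbit/s")
# 8-bit: ~11.9 vs ~8.0 Gbit/s;  10-bit: ~14.9 vs ~10.0 Gbit/s
```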

It seems that the tone mapping required for HDR is done in software on the 1000 series, while Turing does that mapping in hardware, which is where the driver issue appears to originate. To check whether the loss was exclusive to Pascal or whether Turing was also affected, Tom's Hardware recently ran a series of tests. The only "problem" is the driver versions used: the RTX cards were tested with 411.51 and the rest of the NVIDIA cards with 398.82. With that caveat in mind, let's look at the data:

Forza Motorsport 7 is one of the games where the losses are most obvious; to be specific, every GPU shows a loss to a greater or lesser degree. The GTX 1080 Ti loses 13.7%, the RTX 2080 loses 2.5% and the RTX 2080 Ti loses just 1.2%. It seems that the newer and more powerful the card, the smaller the loss.

Far Cry 5 is the most disconcerting game of the comparison. The GTX 1080 Ti maintains its frame rate and even gains 0.2 FPS, while the Turing cards lose a little performance: 2.7% and 3.7% respectively.

There is no solution in sight

The theories about the problem have already been laid out and, unfortunately, that is all they are: theories. We have searched for data on the subject and the results are as inconsistent as they are telling: the same game can show a loss for one user or site and a gain for another, and vice versa. We cannot pin the problem down; in fact, nobody seems able to, since it has been discussed at length without anyone getting to the root of it. NVIDIA is aware of all this and remains silent; we do not know whether it is working on a fix or whether it is simply a flaw it knows it cannot correct for one reason or another. In the meantime, NVIDIA has encouraged owners of Pascal cards to update their GPU firmware for HDR + 4K + 144 Hz support, since the DisplayPort 1.3 firmware was causing problems and the update moves it to 1.4. Too many unknowns and too few answers; for now we cannot say more.

Ray Tracing: everything you need to know about this new revolution in videogames

The latest talk of the town these days seems to be Ray Tracing, no doubt because of the release of NVIDIA's new RTX series video cards. This technology will undoubtedly change the world of videogames, but many will be asking what it is and what it is going to mean for the technology.

Ray Tracing (RT from now on) is a technique based on Ray Casting, an algorithm created by Arthur Appel in 1968 to determine visible surfaces.

Thanks to RT, 3D graphics can be rendered with complex lighting models that simulate the physical behavior of light. Until now this could not be done in real time, so lighting had to be computed in advance and baked into the scene to obtain that behavior. Conventional 3D rendering has so far relied on a process called rasterization, in which objects are built from meshes of triangles or polygons that represent a 3D model. The rendering pipeline then converts each triangle of the 3D model into pixels on a 2D screen, which are then processed, or "shaded", before the final image is displayed.
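
To make the idea tangible, here is a minimal ray-casting sketch in the spirit of Appel's technique: one ray per pixel, a single sphere, and simple diffuse shading, printed as ASCII art. It is a toy written for clarity, not a reflection of how an RTX pipeline is actually implemented:

```python
# Minimal ray casting: shoot one ray per pixel, test it against one sphere,
# and shade the hit point with simple Lambert (diffuse) lighting.
import math

WIDTH, HEIGHT = 40, 20                        # tiny "screen" rendered as ASCII
SPHERE_C, SPHERE_R = (0.0, 0.0, 3.0), 1.0     # sphere center and radius
LIGHT = (-0.577, 0.577, -0.577)               # normalized light direction

def hit_sphere(origin, direction):
    """Return the distance to the nearest sphere intersection, or None on a miss."""
    oc = [origin[i] - SPHERE_C[i] for i in range(3)]
    b = 2 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - SPHERE_R ** 2
    disc = b * b - 4 * c                      # a == 1 because direction is unit length
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # Map the pixel to a ray direction through a simple pinhole camera at the origin.
        x = (i / WIDTH - 0.5) * 2
        y = (0.5 - j / HEIGHT) * 2
        norm = math.sqrt(x * x + y * y + 1)
        d = (x / norm, y / norm, 1 / norm)
        t = hit_sphere((0.0, 0.0, 0.0), d)
        if t is None:
            row += " "                        # ray missed everything: background
        else:
            p = tuple(t * d[k] for k in range(3))                         # hit point
            n = tuple((p[k] - SPHERE_C[k]) / SPHERE_R for k in range(3))  # surface normal
            diffuse = max(0.0, sum(n[k] * LIGHT[k] for k in range(3)))    # Lambert term
            row += ".:-=+*#%@"[min(8, int(diffuse * 9))]                  # brighter = more light
    print(row)
```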

NVIDIA OptiX

Ten years ago, OptiX introduced a programmable shader model for ray tracing (OptiX GPU Ray Tracing). NVIDIA has continued to invest in hardware, software and algorithms to accelerate that programming model on its GPUs, and only now, alongside the RTX series, has it presented the finished result. The OptiX API is an application framework that leverages RTX technology to achieve optimal ray tracing performance on the GPU. It provides a simple, recursive and flexible pipeline for accelerating ray tracing algorithms. In addition, its post-processing API includes an AI-accelerated denoiser that also leverages RTX technology. From films and games to scientific design and visualization, OptiX has been used successfully in a wide range of commercial applications, ranging from visualization software to scientific visualization (including Gordon Bell Prize finalists), defense applications, audio synthesis and baked light maps for games.

Microsoft DirectX Ray Tracing or DXR

Microsoft's DirectX Ray Tracing (DXR) API extends DirectX 12 to support ray tracing. DXR integrates ray tracing fully into DirectX, allowing developers to combine it with traditional rasterization and compute techniques by adding four new concepts to the DX12 API.

Vulkan

Vulkan is a cross-platform API for developing applications with 3D graphics. It was first announced at GDC 2015 by the Khronos Group. Khronos initially presented it as "the next-generation OpenGL initiative", but that name was later dropped, leaving Vulkan as the definitive one. Vulkan is based on Mantle, an AMD API whose code was handed over to Khronos with the intention of creating an open, low-level standard similar to OpenGL. Unlike Microsoft's API, Vulkan works on a wide range of platforms, including Windows 7, Windows 8, Windows 10, Android and Linux. NVIDIA is developing a ray tracing extension for Vulkan's cross-platform compute and graphics API. According to NVIDIA it will be available soon and will let Vulkan developers access the full power of RTX graphics. NVIDIA is also contributing the design of this extension to the Khronos Group, as a contribution toward a potential cross-vendor ray tracing capability in the Vulkan standard.

So what games will support Ray Tracing?

That remains to be seen; at the moment there is not even a benchmark with full ray tracing support, and the closest thing to a complete list of future titles is what NVIDIA has shown:

This does not mean that all 21 of those games support Ray Tracing; some of them will instead, or additionally, make use of artificial intelligence through DLSS. Among them, several have confirmed ray tracing support: Assetto Corsa Competizione, Atomic Heart, Battlefield V, Control, Enlisted, Justice, JX3, MechWarrior V: Mercenaries, Metro Exodus, Shadow of the Tomb Raider and Project DH. The main problem with this technology is its heavy resource consumption: it seems that dedicated units are needed to accelerate it, which is exactly how NVIDIA has designed its Turing architecture. Given that, many are skeptical about Vulkan and DXR on other hardware, since NVIDIA has shown empirically that without such units (RT Cores) performance drops too far. To see what this technology brings, and to close this article, what better than seeing it in action in three of the main AAA titles that are coming to, or have already arrived on, the market:

Metro Exodus

Shadow of the Tomb Raider

Battlefield V