Message boards : Graphics cards (GPUs) : GTX 980Ti
The NVIDIA GeForce GTX 980 Ti launches in two days, and we have already seen how the reference cards look and perform. The main attraction of the GeForce GTX 980 Ti will be the non-reference lineup launched by NVIDIA's AIB partners, which will include several high-end coolers featuring triple-fan, hybrid, and liquid designs.
ID: 41218
https://twitter.com/TEAM_CSF/status/605247234679128064
ID: 41221
Ladies and Gentlemen, Boys and Girls,
ID: 41570
Awesome, Retvari!
ID: 41574
"40% more CUDA cores (2816 vs 2048)" That is "only" 37.5% more CUDA cores (2816 / 2048 = 1.375), so there's only a 5.2% discrepancy between the ratio of CUDA cores and the ratio of the performance.
ID: 41577
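The core-count arithmetic above can be checked directly. A small sketch; the 1.447 performance ratio is a hypothetical value, picked only to show how the quoted ~5.2% discrepancy falls out of comparing the two ratios:

```python
# Core counts from the thread: GTX 980 Ti (2816) vs GTX 980 (2048)
cores_ti, cores_980 = 2816, 2048
core_ratio = cores_ti / cores_980            # 1.375, i.e. 37.5% more cores

# Hypothetical measured performance ratio, chosen only to illustrate
# how a ~5.2% discrepancy between the two ratios would arise
perf_ratio = 1.447
discrepancy = perf_ratio / core_ratio - 1

print(f"{(core_ratio - 1) * 100:.1f}% more cores")   # 37.5% more cores
print(f"{discrepancy * 100:.1f}% discrepancy")       # 5.2% discrepancy
```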
This [email protected] seems to be the maximum setting, as I had a task which errored out after 543 seconds. The scaling of these cards is a bit lower than a direct ratio, so there must be some bottleneck in the system (apart from the WDDM, as I'm using Windows XP x64). It could be the application (as it's CUDA 6.5), the GPU, or the PCIe bus, or a combination of them. 5pot reported a (Windows 8.1) 980 Ti boost bin of 1450, about 2 hours slower. XP with GM200 offers a +20% performance advantage over WDDM, with WDDM holding back a single WU by more than ~10% on GM200. I'm curious to see what an app update does. The CUDA 6.5 app has the 980 Ti equaling two 970s or 780s, as the 980 equals two 680s. There are so many micro-factors to get hold of when seeking peak optimization. ACEMD is one of a kind, challenging the GPU with little latitude.
ID: 41582
Nice, complete report; almost 1M PPD, wow!
ID: 41587
Thanks, Retvari, for sharing these results!
ID: 41607
...I could not find whether you were running one task or two (or three) simultaneously? You can tell from the shortest runtime ever achieved that I was running only one task; if I had run two simultaneously, the runtime would have been doubled. I don't think it's worth the hassle to run two or more workunits at the same time under Windows XP (or Linux), as these OSes don't have the WDDM overhead. - petebe's 980 Tis yield 22-26k sec / GERARD, Win XP and i7-4770K, unknown GPU usage His GPU usage is between 93 and 95%, but he's running 4 GPUs (two GTX 980s and two GTX 980 Tis) in the same host. From the huge difference between his runtimes and mine, I came to the conclusion that one should not reduce any bandwidth if one wants to achieve full utilization of the GTX 980 Ti. This GPU is simply too big and too fast. ...but yours yields 96% GPU usage? Yes.
ID: 41609
From the huge difference between his runtimes and mine, I came to the conclusion that one should not reduce any bandwidth if one wants to achieve full utilization of the GTX 980 Ti. This GPU is simply too big and too fast. Thanks for your replies and conclusion.
ID: 41622
My 980 Ti achieved over 1 million points yesterday. I've not been able to do that since I was running two 780 Tis and two 680s. It's a lot more power-efficient as well, with current tasks only taking it up to about 80% power with a small overclock.
ID: 41712
Last week, one of my GTX 690 cards failed in my Windows 7 computer (the other one is still working), so I bought a GTX 980 Ti and experimented. Here are my findings. The motherboard in my computer has the following slots: 1 x PCI Express x16 slot (PCI_E2) with x16 operation and 1 x PCI Express x16 slot (PCI_E3) with x4 operation. When I installed the 980 Ti in the x16 slot, a GERARD_FXCXCL12_LIG_ unit would finish in a little over 9 hours at 70% usage, but in the x4 slot it would finish in just under 16 hours at 55% usage. This compares with the 690 card, which would finish 2 GERARD units (one on each GPU) in over 16 hours in either slot with usage in the mid-80s percent. The temperature for the 980 Ti is in the mid-50s C with the fan at 70%, while the 690 is in the low-to-mid 70s C with the fan at 90%. The 980 Ti has a better cooling system. The 980 Ti is very bandwidth-sensitive.
ID: 41806
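The slot comparison above amounts to a large penalty. A quick sketch; 9.1 and 15.9 hours are rough readings of "a little over 9" and "just under 16" from the post:

```python
# Approximate GERARD runtimes on the GTX 980 Ti, in hours (from the post)
t_x16 = 9.1    # PCIe x16 slot, ~70% GPU usage
t_x4 = 15.9    # PCIe x4 slot, ~55% GPU usage

slowdown = t_x4 / t_x16 - 1
print(f"x4 slot is ~{slowdown * 100:.0f}% slower than x16")  # ~75% slower
```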
Bedrich, you could run 2 WUs concurrently on your GTX 980 Ti. With Gerards this already helps a lot on a GTX 970, boosting GPU usage from ~70% to 93% on my main machine.
ID: 41808
The only problem with that solution is that it would also run 2 WUs concurrently on each of the GTX 690's GPUs, which I don't want to do because they are already running at near capacity.
ID: 41809
There wouldn't be much gain on the GTX 690, but it wouldn't hurt either. I've run 2 concurrent WUs on my older GTX 660 Ti, and it was OK (although of no benefit) for the larger Noelias, which yield a higher GPU utilization than the Gerards. There was a clear benefit for short runs, though. Each GPU on your GTX 690 has 8/7 the shaders and 4/3 the memory bandwidth of my old GPU, so it should profit a bit more from 2 concurrent WUs.
ID: 41811
There wouldn't be much gain on the GTX 690, but it wouldn't hurt either. I've run 2 concurrent WUs on my older GTX 660 Ti, and it was OK (although of no benefit) for the larger Noelias, which yield a higher GPU utilization than the Gerards. There was a clear benefit for short runs, though. Each GPU on your GTX 690 has 8/7 the shaders and 4/3 the memory bandwidth of my old GPU, so it should profit a bit more from 2 concurrent WUs. Actually it's both heat and power: I am running the GTX 690 at 70+ degrees C and at 95% power already, and I have been doing that for over 2 1/2 years. I don't want to push it any harder. Even so, it takes over 16 hours to complete a GERARD WU. If I ran 2 WUs per GPU, the gain would be small, and it would take more than 24 hours to complete each WU. I would slow the project down, potentially get more errors, and lose my 24-hour bonus. It's not worth it. On another topic related to the 980 Ti, how does one get it to run on a Windows XP machine?
ID: 41822
On another topic related to the 980 Ti, how does one get it to run on a Windows XP machine? You should download the GTX 960 Windows XP driver (or the x64 version) and add two lines to the nv4_dispi.inf file.
1. Download the recent driver for the GTX 960 from NVidia.
2. Start the installer, and copy the path of the installation files to the clipboard.
3. Close the installer.
4. Open the nv4_dispi.inf file in the Display.Driver folder of the installation files with a text editor (like Notepad). You can do this by creating a shortcut on your desktop like: notepad "<the path of the installation files>\Display.Driver\nv4_dispi.inf". If you left the installation path at its default value, your shortcut would look like this: notepad "C:\NVIDIA\DisplayDriver\355.82\WinXP\English\Display.Driver\nv4_dispi.inf"
5. Search (press Ctrl-F in Notepad) for the string dev.1401
6. The first hit will look like this: %NVIDIA_DEV.1401% = Section008, PCI\VEN_10DE&DEV_1401
7. Copy the whole line and paste it under the original, then change both occurrences of 1401 to 17c8 (leave the original line unchanged): %NVIDIA_DEV.17c8% = Section008, PCI\VEN_10DE&DEV_17c8
8. Search again (Ctrl-F) for the string dev.1401
9. It will find this: NVIDIA_DEV.1401 = "NVIDIA GeForce GTX 960"
10. Copy the whole line and paste it under the original, then change the 1401 to 17c8 and the 960 to 980 Ti (leave the original line unchanged): NVIDIA_DEV.17c8 = "NVIDIA GeForce GTX 980 Ti"
11. Save the nv4_dispi.inf file (overwriting the original).
12. Start the installer (or install the driver manually; there will be a warning about an unsigned driver, but it's safe and should be ignored).
Note for future use of this method: NVidia may change the Section numbers in the inf file from time to time, so you have to use the correct section number from your original file in step 6 (not simply copy the line from here into the inf file in step 7). This method is explained in this post; I've just updated the numbers.
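For reference, after steps 7 and 10 the two added lines in nv4_dispi.inf end up looking like this (the Section008 number must be taken from your own driver's file, as noted above):

```
%NVIDIA_DEV.17c8% = Section008, PCI\VEN_10DE&DEV_17c8
NVIDIA_DEV.17c8 = "NVIDIA GeForce GTX 980 Ti"
```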
ID: 41823
Thanks, Zoltan, for the help. I knew it was going to be just like last time, but I didn't know which lines to copy or how to edit them, with the exception of substituting "NVIDIA GeForce GTX 980 Ti" for "NVIDIA GeForce GTX 960". It's better to ask than to screw it up.
ID: 41830
Thanks Zoltan for the help. You're welcome. The interesting thing is, when both the XP and the 7 computer were using GTX 690 cards, the finish times were about the same. That's because the GTX 690 has a PCIe splitter chip, so its two GPUs use the same PCIe bus, and since the two GPUGrid apps generate a lot of traffic, they hold each other back (especially if the motherboard has only PCIe 2.0, in which case both GPU chips on the card run at PCIe 2.0). I think the issue may be in the GPUGRID application itself with 900 series cards on Windows 7 (I am not sure about Windows 8 or 10). The issue is a combination of the WDDM latency, the dual-GPU architecture (of the GTX 590, 690 and Titan Z), and PCIe bandwidth. Since the same app runs on Windows XP and Windows 7, the issue would affect Windows XP as well if it were only in the GPUGrid app. Did anybody say in one of the posts "I think you need to ditch XP for 7 or 8..."? I don't remember! I think you're referring to this post, but I posted this method there for the GTX 980 and 970 (the GTX 980 Ti didn't exist at that time, so its device ID was unknown), precisely to avoid ditching Windows XP. I will update when the next application fixes this slow performance issue. Maybe! It could be a bit better, but there's no way for the app to bypass the WDDM overhead under Windows 7 and later. Windows 10 could theoretically be better than earlier versions, since it has WDDM 2.0 (which is said to be more optimized for performance than the previous versions), but there's no sign of this better performance yet. Theoretically Windows 7 could use XDDM drivers (the architecture Windows XP uses), but I couldn't install NVidia's Windows XP drivers on Windows 7. However, I have successfully installed Windows 7 on my old laptop, which doesn't have Windows 7 drivers for its video controller (it has an AGP interface), so I had to install the Windows XP drivers under Windows 7, and it works (without the Aero Glass interface). This method is probably not applicable to PCIe cards. PS: To make your Windows XP safe, you should follow the instructions in this post.
ID: 41831
How exactly can one run two work units on the same GTX 980? I used to do this on Einstein, but I don't know where to find the BOINC config for GPUGRID that allows one to set usage like on Einstein. I may be able to get slightly higher performance. Would this dual-unit configuration also work on a GTX 780? That one only has 3 GB of RAM. GPU utilization is 80% on both cards. I have thermal headroom on both of my CPUs. Would overclocking them 300 MHz help enough to offset the diminished CPU lifespan that occurs at 10C hotter temps? My i7-3770K runs @ 75C @ 4.2 GHz @ 65% load.
ID: 41936
How exactly can one run two work units on the same GTX 980? https://www.gpugrid.net/forum_thread.php?id=4155&nowrap=true#41796 ____________ Thanks - Steve
ID: 41938
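The linked post describes BOINC's standard app_config.xml mechanism for running multiple tasks per GPU. A sketch, assuming the long-run app is named acemdlong (check the names your client's event log actually reports); the file goes in the GPUGRID project directory and takes effect after "Read config files" or a client restart:

```xml
<app_config>
  <app>
    <name>acemdlong</name>
    <gpu_versions>
      <!-- 0.5 tells the scheduler each task needs half a GPU,
           so BOINC will run two tasks per GPU concurrently -->
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

Note this applies per application, not per card, which is why it would also double up tasks on the GTX 690 GPUs discussed above.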
Works great! Now if only I could get two more work units for the second GPU; apparently there are none available to send. Thanks for the help!
ID: 41939
Just got my 980 Ti under a full-cover water block. It is now 75% utilized and 37C when running a single task at 1496 MHz. Am I maximizing the GPU power properly?
ID: 42263
Just got my 980 Ti under a full-cover water block. It is now 75% utilized and 37C when running a single task at 1496 MHz. Am I maximizing the GPU power properly? You can make it faster by assigning it a CPU core all to itself; you can do that via swan_sync.
ID: 42264
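swan_sync refers to the SWAN_SYNC environment variable read by the ACEMD app: when it is set, the app busy-waits on a dedicated CPU core instead of yielding, which users here report keeps the GPU better fed. A sketch of setting it (exact behavior may vary by app version; on Windows, run setx SWAN_SYNC 1 from a command prompt and restart BOINC):

```shell
# Linux: make the variable visible to the BOINC client, then restart it
export SWAN_SYNC=1
```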