Message boards : Graphics cards (GPUs) : Cuda Memory tester
Just saw this over at folding@home...
ID: 9204 | Rating: 0
Thanks for posting. Here's the link in a more convenient form:
ID: 9206 | Rating: 0
Well, the guys on the F@H forums are saying it's the best there is for testing memory. It doesn't do much on the shaders, though. They were talking about another one that's in development for testing the core/shaders.
ID: 9231 | Rating: 0
That makes a lot of sense: if the memory controller in the GPU (the "logic") is broken, it will surely be detected. But it's not a proper test for the shaders, as the title "Cuda Memory tester" already implies.
ID: 9244 | Rating: 0
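For anyone curious what such a tester actually does: the classic approach (used by tools like memtest86+ and GPU memory testers) is a "moving inversions" pass — write a bit pattern over the whole region, read it back, then repeat with the complement so every bit gets exercised in both states. A minimal sketch of the idea, in plain Python rather than an actual CUDA kernel (a real GPU tester would run the write/verify passes in device code over the card's memory; the list here is just a stand-in):

```python
# Illustrative only -- not the actual tester's code. A Python list
# stands in for a GPU memory buffer.

def moving_inversions(mem, pattern=0x55555555):
    """Write a pattern over the whole buffer, verify it, then repeat
    with the bitwise complement. Returns (index, expected, got) tuples
    for every mismatch; an empty list means the "memory" passed."""
    errors = []
    for pat in (pattern, ~pattern & 0xFFFFFFFF):
        for i in range(len(mem)):      # write pass
            mem[i] = pat
        for i in range(len(mem)):      # read/verify pass
            if mem[i] != pat:
                errors.append((i, pat, mem[i]))
    return errors

mem = [0] * 1024                       # stand-in memory region
print(moving_inversions(mem))          # healthy "memory" -> []
```

Because only writes and reads are exercised, a pass like this stresses the memory chips and the memory controller, but says almost nothing about the shaders — which matches what the F@H people reported.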
It is nice if you OC memory :)
ID: 9375 | Rating: 0
I don't like the idea of having to register to download the program.
ID: 10837 | Rating: 0
For now "bugmenot" still works ;)
ID: 10840 | Rating: 0
A link that doesn't require registering...
ID: 10841 | Rating: 0
"I don't like the idea of having to register to download the program."

If you read the page that asks for this info, you'll see it's a requirement of the NHS, who must fund them.
ID: 11649 | Rating: 0
Is there any advantage to OC'ing the memory for the GPUGRID application? I noticed that in the Milkyway forums the guy who did the ATI app actually suggests that you can downclock the memory (to lower temps) with no adverse effect on the app run times.
ID: 11653 | Rating: 0
The Milkyway ATI project uses different software, so it's difficult to compare to the GPUGRID clients and work units.
ID: 11654 | Rating: 0
In GPUGRID the memory clock doesn't increase performance much at all... maybe a 5% increase in speed, if that.
ID: 11661 | Rating: 0
Power saving is reduced by lowering the Voltage of the GPU &/or Shaders &/or Memory. Just because Power Saving 2D Clocks reduce all three does not mean you have to.
ID: 11673 | Rating: 0
There was a thread on overclocking where we agreed on the following (I tested myself):
ID: 11711 | Rating: 0
"Power saving is reduced by lowering the Voltage of the GPU &/or Shaders &/or Memory. Just because Power Saving 2D Clocks reduce all three does not mean you have to."

Sorry, that didn't help. I meant: power usage is reduced by lowering the voltage of the GPU, shaders and memory.

It's worth noting that Milkyway is starting to support NVIDIA cards now, but only compute capability 1.3. So that's the GTX 260, 275, 280, 285 and 295 (unless you engineered yourself a Quadro or can afford a Tesla)! So if you decide you want to contribute to the MW GPU project, there is no point trying to underclock your RAM, and remember it's in the early Alpha stage, so the less messing around the better. GPUGRID still supports CC 1.1 cards, but has stopped support for 1.0.

____________
FAQ's
HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 11774 | Rating: 0
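The "only compute capability 1.3" cut-off above is just a version comparison on the (major, minor) pair a card reports. A tiny hedged sketch of the check — the helper name and the example table are mine, not from any project's code; the CC values match the cards named in the post:

```python
# Hypothetical eligibility check for a "CC 1.3 or better" requirement,
# given the (major, minor) compute capability the CUDA runtime reports.

MIN_CC = (1, 3)   # hardware double precision starts at CC 1.3

def meets_requirement(cc, minimum=MIN_CC):
    # Python tuple comparison: major version first, then minor.
    return cc >= minimum

cards = {"GTX 260": (1, 3), "GTX 285": (1, 3), "9800 GTX+": (1, 1)}
for name, cc in cards.items():
    print(name, "eligible" if meets_requirement(cc) else "not eligible")
```

So a 9800 GTX+ (CC 1.1) still qualifies for GPUGRID but not for the MW GPU app, exactly as described above.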
"I meant: power usage is reduced by lowering the voltage of the GPU, shaders and memory."

Did you read my post?

"So if you decide you want to contribute to the MW GPU project, there is no point trying to underclock your RAM."

Sorry, but what do you mean? (And why?)

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 11782 | Rating: 0
"I meant: power usage is reduced by lowering the voltage of the GPU, shaders and memory."

Yeah, I was just correcting a non-intelligible sentence I came out with in my previous post!

"So if you decide you want to contribute to the MW GPU project, there is no point trying to underclock your RAM."

"Sorry, but what do you mean? (And why?)"

I was just mentioning that people can now use top-end NVIDIA cards for the MW project, as well as ATI cards. From what you said (and I probably did not need to repeat it), there is no point in trying to underclock the RAM on an NVIDIA card used on the MW project; it will not respond in the same way as the ATI cards are reported to (GDDR3 vs GDDR5). As it's in the Alpha stage, people are not going to help by adding random crashes. Just because a card is stable running one program does not mean it will be stable running another, and during Alpha testing the parameters might change frequently. Sorry for going off topic.
ID: 11809 | Rating: 0
Now I understand, thanks! Well... I expect it to be this way (lower power consumption for GDDR3 compared to GDDR5), but I neither tested it myself nor do I remember reading a proper test of this. It's just that the problem only really revealed itself as a major one to me on the new ATIs. I wouldn't mind being proven wrong here, though... which would mean that nVidia guys could also save some power via downclocking, should they choose to only run MW. Whether the latter would make any sense is a totally different question ;)
ID: 11856 | Rating: 0
I would like to save a few Watts when running GPUGRID, keep the card cooler, the system quieter, and still get through the same amount of work. Perhaps some cards can be tweaked by nibbling away at the voltage — a DIY green card — or even underclocked to use fewer Watts per point? Unfortunately I don't have the time to look into this, but I would like to give it a go.
ID: 11862 | Rating: 0
"I would like to save a few Watts when running GPUGRID, keep the card cooler, the system quieter and still get through the same amount of work."

A nice idea, but there's no big free lunch here: the high-end cards are already pushed quite hard and there's not much reserve to tap into. Current mid-range CPUs can often easily overclock 50%, whereas on high-end GPUs you rarely see more than 10% overclocks; the voltages are already set quite tight. That's why, if you lower your GPU voltage (difficult, but it should be possible on NV cards via software, or maybe a BIOS flash is needed), you may quickly lose stability at stock clocks.

Just downclocking core and shader without lowering the voltage will decrease temperature but will reduce performance per watt of the entire system. Downclocking the memory will save a few watts but will greatly reduce performance in GPUGRID, reducing performance per watt even more.

My 9800GTX+ can run a 1944 MHz shader clock (stock 1830 MHz), so I guess the voltage at stock speeds could be lowered from the stock 1.18 V to 1.12 V (the second step; there's also 1.0x V available on these cards). However, since my card runs pretty cool (<60°C in summer) with an Accelero S1 Rev 2 and two inaudible 120 mm fans, I'd choose the high-performance option anyway.

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 11913 | Rating: 0
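The trade-off described above follows from the usual first-order rule that dynamic power scales roughly with frequency times voltage squared, while throughput scales roughly with frequency. A back-of-the-envelope model (all numbers illustrative, not measurements — the stock 1830 MHz / 1.18 V figures are the ones quoted in the post):

```python
# Rough first-order model: dynamic power ~ f * V^2, performance ~ f.

def rel_power(f, v, f0, v0):
    """Dynamic power relative to stock clocks (f0 MHz, v0 volts)."""
    return (f / f0) * (v / v0) ** 2

def perf_per_watt(f, v, f0, v0):
    """Performance per watt relative to stock; frequency cancels,
    leaving (v0 / v) ** 2."""
    return (f / f0) / rel_power(f, v, f0, v0)

f0, v0 = 1830.0, 1.18                       # stock shader clock / voltage
print(perf_per_watt(1830, 1.12, f0, v0))    # undervolt at stock clock: ~1.11
print(perf_per_watt(900, 1.18, f0, v0))     # downclock alone: 1.0 (no gain)
```

In this model, undervolting at stock clocks is the only move that improves the GPU's own performance per watt; downclocking alone leaves it flat, and since the rest of the system keeps drawing its fixed share while work takes longer, system-level performance per watt actually drops — which is the point made above.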
I think I have managed to get my GTX260 stable at about 112%, but only time will tell for sure.
ID: 13145 | Rating: 0