
Message boards : Graphics cards (GPUs) : Cuda Memory tester

popandbob
Joined: 18 Jul 07
Posts: 67
Credit: 40,277,822
RAC: 0
Message 9204 - Posted: 2 May 2009 | 17:37:32 UTC

Just saw this over at folding@home...
Based partly on Memtest86, but made for testing CUDA-capable GPUs.

https://simtk.org/home/memtest/

Bob

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 9206 - Posted: 2 May 2009 | 17:40:23 UTC - in response to Message 9204.
Last modified: 2 May 2009 | 17:42:33 UTC

Thanks for posting. Here's the link in a more convenient form:
https://simtk.org/home/memtest/

They say it's for testing memory and logic... do you know how "good" it is at finding logic / GPU core errors?

MrS
____________
Scanning for our furry friends since Jan 2002

popandbob
Joined: 18 Jul 07
Posts: 67
Credit: 40,277,822
RAC: 0
Message 9231 - Posted: 3 May 2009 | 0:52:49 UTC

Well, the guys on the F@H forums are saying it's the best there is for testing memory. It doesn't do much on the shaders, though. They were talking about another one that's in development for testing the core/shaders.

Bob

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 9244 - Posted: 3 May 2009 | 11:39:17 UTC - in response to Message 9231.

That makes a lot of sense: if the memory controller in the GPU (the "logic") is broken, it will surely be detected. But it's not a proper test for the shaders, which the title "Cuda Memory tester" already implies.

Drop us a line if you (or anyone else) spot the finished shader tester. I think this would be even more important than the memory tester.

MrS
____________
Scanning for our furry friends since Jan 2002

TomaszPawel
Joined: 18 Aug 08
Posts: 121
Credit: 59,836,411
RAC: 0
Message 9375 - Posted: 6 May 2009 | 12:22:34 UTC - in response to Message 9244.
Last modified: 6 May 2009 | 12:22:53 UTC

It is nice if you OC memory :)
____________
POLISH NATIONAL TEAM - Join! Crunch! Win!

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 10837 - Posted: 24 Jun 2009 | 21:35:30 UTC - in response to Message 9375.

I don't like the idea of having to register to download the program.
No doubt they want to share your details with selected partners.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 10840 - Posted: 24 Jun 2009 | 21:42:53 UTC - in response to Message 10837.

For now "bugmenot" still works ;)

MrS
____________
Scanning for our furry friends since Jan 2002

popandbob
Joined: 18 Jul 07
Posts: 67
Credit: 40,277,822
RAC: 0
Message 10841 - Posted: 24 Jun 2009 | 23:29:18 UTC

A link that doesn't require registering...
F@H Utilities
Bob

Skip Da Shu
Joined: 13 Jul 09
Posts: 63
Credit: 2,350,495,165
RAC: 11,165,222
Message 11649 - Posted: 4 Aug 2009 | 8:36:03 UTC - in response to Message 10837.

I don't like the idea of having to register to download the program.
No doubt they want to share your details with selected partners.


If you read the page that asks for this info, you'll see it's a requirement of the NIH, which funds them.

Skip Da Shu
Joined: 13 Jul 09
Posts: 63
Credit: 2,350,495,165
RAC: 11,165,222
Message 11653 - Posted: 4 Aug 2009 | 10:44:09 UTC

Is there any advantage to OC'ing the memory for the GPUGRID application? I noticed that in the Milkyway forums the guy who did the ATI app actually suggests that you can downclock the memory (to lower temps) with no adverse effect on the app run times.

Anyone have any idea if that would also apply here in the CUDA world?
____________
- da shu @ HeliOS,
"A child's exposure to technology should never be predicated on an ability to afford it."

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 11654 - Posted: 4 Aug 2009 | 12:10:14 UTC - in response to Message 11653.

The Milkyway ATI project uses different software, so it's difficult to compare it to the GPUGRID client and work units.

For GPUGRID an overclocked card will get through the tasks faster, so you will get more points over a set period of time. I expect that if you underclock, the reverse will be true: in a set period of time you will get through less work and therefore earn fewer points. My opinion is that if you can overclock, keep it stable and not increase the voltage, it is perhaps worth doing. Otherwise, not!

I expect there might be the odd quirky setting found occasionally, to do with DDR3 timings, that might rock the boat a bit, but not by much. Perhaps this is what the ATI app guy found? Perhaps it was a bit more specific: underclocking the GPU but not the RAM, or he was just talking about reducing power usage.

For GPUGRID, if you can underclock the card and reduce the voltage it might be an option, but unless you reduce the voltage the power usage won't decrease, so it would be a bit pointless.

popandbob
Joined: 18 Jul 07
Posts: 67
Credit: 40,277,822
RAC: 0
Message 11661 - Posted: 5 Aug 2009 | 2:24:04 UTC

In GPUGRID the memory doesn't increase performance much at all... maybe a 5% increase in speed, if that.

Lowering the memory clocks does reduce power usage. How else would the power-saving 2D clocks work if the card didn't use less power at lower clock rates?

The biggest factor in performance will always be the number of shaders and their speed, followed by the core speed.

Bob

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 11673 - Posted: 5 Aug 2009 | 19:17:52 UTC - in response to Message 11661.

Power saving is reduced by lowering the Voltage of the GPU &/or Shaders &/or Memory. Just because Power Saving 2D Clocks reduce all three, does not mean you have to.

What we are all looking for is better performance and lower cost. So, as Bob said, you could look into reducing the RAM speed (and lowering the voltage) while also raising the core & shader speeds.

If you are 5% slower because of reduced memory speed and voltage, but 5% faster from a speed bump in GPU and shaders at no extra voltage, you will be doing the same work but saving a little bit on the electric bill. You might also find that a slight voltage increase for the GPU & shaders yields much faster results. So the electric bill would be the same but the result count would rise.
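
To put the arithmetic behind that in one place, here is a minimal sketch; the points-per-hour and wattage figures are made-up assumptions, not measurements from any card (which is why, as below, you'd still need to measure at the wall):

# A minimal sketch of the trade-off above. The points-per-hour and
# wattage figures are illustrative assumptions, not measured values.

def points_per_watt_hour(points_per_hour, watts):
    return points_per_hour / watts

stock = points_per_watt_hour(points_per_hour=100.0, watts=150.0)

# Scenario from above: ~5% lost to slower, undervolted RAM, ~5%
# regained from a core/shader bump at unchanged voltage, and a
# modest (assumed) 10 W saved overall at the wall.
tweaked = points_per_watt_hour(points_per_hour=100.0 * 0.95 * 1.05,
                               watts=140.0)

print(f"stock:   {stock:.3f} points per watt-hour")   # 0.667
print(f"tweaked: {tweaked:.3f} points per watt-hour")  # 0.713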

Better to test these before running GPUGRID, just in case your speeds fail and you lose your running tasks. However, you really need to measure the power usage at the plug on the wall to know how much you are saving or losing, and whether it is worthwhile; the clock speeds and voltages don't really tell you much by themselves.

I also think you are probably slightly better off with a card that has less memory. A 512MB card will perform just as well as a 2GB card for GPUGRID, but will cost less at the outset and will probably use slightly less electricity on RAM. That said, not all DDR3 video RAM is equal: some requires slightly higher or lower voltages, and well-made RAM tends to be slightly more energy efficient (better materials, less energy loss, lower energy requirements).

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 11711 - Posted: 8 Aug 2009 | 11:08:24 UTC

There was a thread on overclocking where we agreed on the following (I tested this myself):

- GPU-Grid does need memory clock
- downclocking reduces performance significantly, much more so than you save in power
- OC'ing GPU-RAM does speed things up, not as much as shader & core OC, but clearly measurable

With Milkyway it's different. They're doing completely different calculations: the working set is much smaller, and thus much less memory bandwidth is needed. And the GDDR5 used on high-end ATI cards is much more power hungry than the GDDR3 used on current nVidia cards. A 4870 512 MB uses ~60W at idle, almost all of which is simply due to the RAM. Downclocking the RAM reduces its power usage linearly and thus does help. For the current MW app a memory speed of ~500 MHz (x4) instead of the stock 900 MHz is sufficient, saving you ~30W.

Power saving can be achieved by both reducing the clock and reducing the voltage. Reducing clock speed decreases the "dynamic power draw" (the power used when transistors switch state) linearly. It doesn't affect sub-threshold leakage, though (the power each transistor draws even when it isn't doing anything).
Reducing voltage (V) classically reduces power draw with the square of V, i.e. half the voltage means one quarter the power draw. In modern chips it's more like V³ due to leakage currents. Voltage can't be reduced infinitely, though, as it's needed to achieve higher frequencies.
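
In rough numbers, that scaling looks like the sketch below. It's only the rule of thumb just described: the V³ exponent is the leakage-inclusive approximation, and the 60 W RAM figure is the 4870 example from above.

# Rule-of-thumb power scaling, as described above: dynamic power
# goes linearly with clock, with voltage squared classically,
# or roughly voltage cubed once leakage is included.

def relative_power(clock_ratio, voltage_ratio, v_exp=3.0):
    # Power relative to stock; leakage at constant voltage is ignored.
    return clock_ratio * voltage_ratio ** v_exp

# The Milkyway example: ~60 W of GDDR5, downclocked 900 -> 500 MHz
# at unchanged voltage, scales linearly with clock.
saved = 60.0 * (1.0 - relative_power(500.0 / 900.0, 1.0))
print(f"RAM downclock saves ~{saved:.0f} W")  # ~27 W, i.e. the ~30 W above

# Halving the voltage: ~1/4 the power classically (V^2)...
print(relative_power(1.0, 0.5, v_exp=2.0))    # 0.25
# ...and ~1/8 with the leakage-inclusive V^3 estimate.
print(relative_power(1.0, 0.5, v_exp=3.0))    # 0.125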

MrS
____________
Scanning for our furry friends since Jan 2002

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 11774 - Posted: 10 Aug 2009 | 13:17:04 UTC - in response to Message 11673.

Power saving is reduced by lowering the Voltage of the GPU &/or Shaders &/or Memory. Just because Power Saving 2D Clocks reduce all three, does not mean you have to.


Sorry, that didn't help.
I meant, Power usage is reduced by lowering the Voltage of the GPU, Shaders and Memory.

It's worth noting that Milkyway is starting to support NVIDIA cards now, but only compute capability 1.3. So that's the GTX 260, 275, 280, 285 and 295 (unless you engineered yourself a Quadro or can afford a Tesla)!

So if you decide you want to contribute to the MW GPU project, there is no point trying to underclock your RAM, and remember it’s in the early Alpha stage, so the less messing around the better.

GPUGRID still supports CC 1.1 cards, but has stopped support for 1.0.
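
For anyone unsure what their card supports, something like this reports the compute capability. It's a sketch using pycuda, which is just one option and an assumption on my part; the deviceQuery tool from the CUDA SDK shows the same information:

# Sketch: query compute capability with pycuda (an assumption -
# any CUDA device-query tool reports the same numbers).
import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    print(f"{dev.name()}: compute capability {major}.{minor}")
    print("  OK for Milkyway GPU (needs CC 1.3):", (major, minor) >= (1, 3))
    print("  OK for GPUGRID (needs CC 1.1):     ", (major, minor) >= (1, 1))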
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 11782 - Posted: 10 Aug 2009 | 19:43:40 UTC - in response to Message 11774.

I meant, Power usage is reduced by lowering the Voltage of the GPU, Shaders and Memory.


Did you read my post?

So if you decide you want to contribute to the MW GPU project, there is no point trying to underclock your RAM


Sorry, but what do you mean? (and why)

MrS
____________
Scanning for our furry friends since Jan 2002

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 11809 - Posted: 11 Aug 2009 | 19:57:47 UTC - in response to Message 11782.

I meant, Power usage is reduced by lowering the Voltage of the GPU, Shaders and Memory.


Did you read my post?


Yeah, I was just correcting an unintelligible sentence I came out with in my previous post!

So if you decide you want to contribute to the MW GPU project, there is no point trying to underclock your RAM


Sorry, but what do you mean? (and why)


I was just mentioning that people can now use top-end NVIDIA cards for the MW project, as well as ATI cards.
From what you said (and I probably did not need to repeat it), there is no point in trying to underclock the RAM on an NVIDIA card used on the MW project; it will not respond in the same way as the ATI cards are reported to (GDDR3 vs GDDR5). As it's in the Alpha stage, people are not going to help by adding random crashes. Just because a card is stable running one program does not mean it will be stable running another, and during Alpha testing the parameters might change frequently.
Sorry for going off track.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 11856 - Posted: 13 Aug 2009 | 19:46:12 UTC - in response to Message 11809.

Now I understand, thanks! Well... I expect it to be this way (lower power consumption for GDDR3 compared to GDDR5), but I neither tested it myself nor do I remember reading a proper test of this. It's just that the problem only really revealed itself as a major one to me on the new ATIs. I wouldn't mind being proven wrong here, though... which would mean that nVidia guys could also save some power via downclocking, should they choose to only run MW. Whether the latter would make any sense is a totally different question ;)

MrS
____________
Scanning for our furry friends since Jan 2002

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 11862 - Posted: 13 Aug 2009 | 23:23:18 UTC - in response to Message 11856.
Last modified: 13 Aug 2009 | 23:26:05 UTC

I would like to save a few Watts when running GPUGRID, keep the card cooler, the system quieter and still get through the same amount of work. Perhaps some cards can be tweaked by nibbling away at the voltage (a DIY green card), or even underclocked to use fewer Watts per point? Unfortunately I don't have the time to look into this, but I would like to give it a go.

I recently ran folding@home for a while, which used the GPU much more intensively than GPUGRID: it had my GTS 250 running at 88 degrees C, 12 degrees hotter than under GPUGRID. It certainly tested the RAM, but I don't think a burn-in is supposed to run 24/7 for days and weeks! If I decide to dedicate any more time to that project I will seriously consider throttling the card back, or more likely using a more expendable card.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 11913 - Posted: 15 Aug 2009 | 12:01:24 UTC - in response to Message 11862.

I would like to save a few Watts when running GPUGRID, keep the card cooler, the system quieter and still get through the same amount of work.


A nice idea, but there's no big free lunch here - the high-end cards are already pushed quite hard and there's not much reserve to tap into. Current mid-range CPUs can often easily overclock by 50%, whereas on high-end GPUs you rarely see more than 10% overclocks - the voltages are already set quite tight.

That's why, if you lower your GPU voltage (difficult, but it should be possible on NV cards via software, or maybe a BIOS flash is needed), you may quickly lose stability at stock clocks. Just downclocking core & shader without lowering the voltage will decrease temperature but will reduce the performance / watt of the entire system. Downclocking the memory will save a few watts but will greatly reduce performance in GPU-Grid, reducing performance / watt even more.

My 9800GTX+ can run a 1944 MHz shader clock (stock 1830 MHz), so I guess the voltage at stock speeds could be lowered from the stock 1.18 V to 1.12 V (the 2nd step; there's also 1.0x V available on these cards). However, since my card runs pretty cool (<60°C in summer) with an Accelero S1 Rev 2 and two inaudible 120mm fans, I'd choose the high-performance option anyway.
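
As a rough estimate of what that voltage step would be worth, using the V²-to-V³ rule of thumb from earlier in the thread (the card's actual split between core and other power draw is unknown, so this bounds only the core's share):

# What dropping 1.18 V -> 1.12 V at stock clocks might save,
# using the V^2..V^3 scaling discussed earlier in this thread.
v_ratio = 1.12 / 1.18
print(f"classic V^2: {v_ratio**2:.2f}x power (~{1 - v_ratio**2:.0%} saved)")
print(f"leaky V^3:   {v_ratio**3:.2f}x power (~{1 - v_ratio**3:.0%} saved)")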

MrS
____________
Scanning for our furry friends since Jan 2002

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 13145 - Posted: 11 Oct 2009 | 20:13:54 UTC - in response to Message 11913.

I think I have managed to get my GTX260 stable at about 112%, but only time will tell for sure.
I thought it might work at about 118%, as it seemed OK completing Milkyway tasks, which run hotter, but the GPUGRID tasks tended to fail. It is not good when they fail 5 hours into a 19-hour task!
