Message boards : Graphics cards (GPUs) : OC'ing an

Chilean
Joined: 8 Oct 12
Posts: 98
Credit: 385,652,461
RAC: 0
Level: Asp
Message 47084 - Posted: 24 Apr 2017 | 18:38:22 UTC

This is more of a theoretical question: if I have a card whose load stays below 95-100% (apparently common with the latest cards), would OC'ing it be useless in terms of GPUGRID performance/completion time?
____________

Erich56
Joined: 1 Jan 15
Posts: 1131
Credit: 9,884,857,676
RAC: 32,900,563
Level: Tyr
Message 47085 - Posted: 24 Apr 2017 | 19:06:15 UTC

No, OC'ing would not be useless. GPU load is one thing; GPU clock is another.

3de64piB5uZAS6SUNt1GFDU9d...
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level: Met
Message 47088 - Posted: 24 Apr 2017 | 22:33:15 UTC
Last modified: 24 Apr 2017 | 22:36:00 UTC

OC'ing will generally make your GPU faster. Having said that, efficiency drops as the clock increases, so for 24/7 operation it is recommended to use the default configuration or even to mildly underclock a GPU. Aside from efficiency, the temperatures will also be lower and the card will (hopefully) last longer.
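A rough way to see why efficiency drops: dynamic power scales roughly with clock × voltage², and a higher clock usually needs a small voltage bump to stay stable. A back-of-the-envelope sketch with assumed numbers (+10% clock, +5% voltage; not measurements from any particular card):

```python
# Rule-of-thumb estimate: dynamic power ~ C * V^2 * f, so an overclock that
# also needs a voltage bump raises power draw faster than throughput.
# The +10% clock / +5% voltage figures are illustrative assumptions only.
clock_gain = 1.10      # +10% core clock -> at best ~+10% throughput
voltage_gain = 1.05    # +5% core voltage to keep the overclock stable

power_gain = clock_gain * voltage_gain ** 2
print(f"throughput: +{(clock_gain - 1) * 100:.0f}%, "
      f"power: +{(power_gain - 1) * 100:.0f}%")   # ~+10% work for ~+21% power
```

Roughly 10% more work for roughly 20% more power (and heat), which is why performance per watt drops.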

One more comment... from your computer list it seems that you use a notebook for crunching. That might be a problem in terms of temperature and load oscillation. I would not OC your 660M... it is a Kepler hothead anyway.
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Chilean
Joined: 8 Oct 12
Posts: 98
Credit: 385,652,461
RAC: 0
Level: Asp
Message 47091 - Posted: 24 Apr 2017 | 23:10:01 UTC - in response to Message 47088.
Last modified: 24 Apr 2017 | 23:11:21 UTC

OC'ing will generally make your GPU faster. Having said that, efficiency drops as the clock increases, so for 24/7 operation it is recommended to use the default configuration or even to mildly underclock a GPU. Aside from efficiency, the temperatures will also be lower and the card will (hopefully) last longer.

One more comment... from your computer list it seems that you use a notebook for crunching. That might be a problem in terms of temperature and load oscillation. I would not OC your 660M... it is a Kepler hothead anyway.


It hovers around 60-65°C running just GPUGRID, and it overclocks very nicely. I underclock its VRAM, though. It's in a relatively old ASUS ROG notebook with a big heatsink and dual fans.

My question was targeted at the 10XX cards, though. This old-timer hits 90% usage with almost every WU.

BTW, I don't know why the title of the thread got "cut".
____________

Chilean
Joined: 8 Oct 12
Posts: 98
Credit: 385,652,461
RAC: 0
Level: Asp
Message 47092 - Posted: 24 Apr 2017 | 23:12:52 UTC - in response to Message 47085.

No, OC'ing would not be useless. GPU load is one thing; GPU clock is another.


I understand this. But if you can't make the most of it, wouldn't OC'ing simply strain the GPU even further? Assuming the <100% load is due to a bottleneck somewhere.
____________

3de64piB5uZAS6SUNt1GFDU9d...
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level: Met
Message 47095 - Posted: 25 Apr 2017 | 6:57:45 UTC - in response to Message 47091.

It hovers around 60-65°C running just GPUGRID, and it overclocks very nicely. I underclock its VRAM, though. It's in a relatively old ASUS ROG notebook with a big heatsink and dual fans.


Try to run two tasks in parallel and allocate more CPU cores per GPU task (2:1) so that the CPU does not bottleneck the jobs. If you still have fluctuations, you likely have a heat problem; all you can do then is downclock and undervolt your GPU, which is not always possible on a notebook.
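For reference, the usual way to run two tasks per GPU in BOINC is an app_config.xml in the GPUGRID project folder. A minimal sketch that writes one is below; the app name "acemd3" and the project path are assumptions, so check your client_state.xml for the app names your tasks actually use and adjust the path to your BOINC data directory:

```python
# Minimal sketch: write a BOINC app_config.xml that runs two GPUGRID tasks per
# GPU (gpu_usage 0.5) and reserves one full CPU core per task (cpu_usage 1.0;
# raise it if the CPU is still the bottleneck). The app name "acemd3" and the
# project path below are assumptions -- check your own client_state.xml.
from pathlib import Path

APP_CONFIG = """<app_config>
  <app>
    <name>acemd3</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
"""

project_dir = Path("/var/lib/boinc-client/projects/www.gpugrid.net")  # assumed default Linux path
(project_dir / "app_config.xml").write_text(APP_CONFIG)
print("Wrote", project_dir / "app_config.xml")
```

After writing the file, have the client re-read its config files from the BOINC Manager (or simply restart the client) for it to take effect.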

My question was targeted at the 10XX cards, though. This old-timer hits 90% usage with almost every WU.


New mobile or desktop cards? Well, anyway... as I wrote, OC'ing will help to meet a time limit as it accelerates your GPU. But OC'ing a card on the one hand while letting the load bounce around on the other doesn't make sense. I prefer to have a stable >90% load at T<70°C instead of increasing the clock and pushing the GPU temperature to the sky.
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Erich56
Joined: 1 Jan 15
Posts: 1131
Credit: 9,884,857,676
RAC: 32,900,563
Level: Tyr
Message 47096 - Posted: 25 Apr 2017 | 11:00:38 UTC - in response to Message 47095.

... I prefer to have a stable >90% load at T<70°C instead of increasing the clock and pushing the GPU temperature to the sky.

In my humble opinion, anything beyond 64/65°C on a permanent basis is not really good for a GPU.

3de64piB5uZAS6SUNt1GFDU9d...
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level: Met
Message 47099 - Posted: 25 Apr 2017 | 12:55:21 UTC - in response to Message 47096.

... I prefer to have a stable >90% load at T<70°C instead of increasing the clock and pushing the GPU temperature to the sky.

In my humble opinion, anything beyond 64/65°C on a permanent basis is not really good for a GPU.


Why?
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level: His
Message 47100 - Posted: 25 Apr 2017 | 12:55:28 UTC
Last modified: 25 Apr 2017 | 12:55:57 UTC

In my experience, it's completely safe to run 24/7 at the GPU's thermal-limiting point... 70°C for pre-Maxwell, 83°C otherwise, I believe. I've run 5+ GPUs that way for years; the fans did stop working correctly on one (and I fixed them), but other than that, they're still going strong.

They're designed to be able to keep running all the way up to 100°C.

Betting Slip
Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Level: Phe
Message 47102 - Posted: 25 Apr 2017 | 15:12:05 UTC - in response to Message 47096.
Last modified: 25 Apr 2017 | 15:12:43 UTC


In my humble opinion, anything beyond 64/65°C on a permanent basis is not really good for a GPU.


Load of rubbish

Logan Carr
Joined: 12 Aug 15
Posts: 240
Credit: 64,069,811
RAC: 0
Level: Thr
Message 47103 - Posted: 25 Apr 2017 | 16:36:35 UTC - in response to Message 47088.

OC'ing will generally make your GPU faster. Having said that, efficiency drops as the clock increases, so for 24/7 operation it is recommended to use the default configuration or even to mildly underclock a GPU. Aside from efficiency, the temperatures will also be lower and the card will (hopefully) last longer.

One more comment... from your computer list it seems that you use a notebook for crunching. That might be a problem in terms of temperature and load oscillation. I would not OC your 660M... it is a Kepler hothead anyway.



My temps on both cards are about 70°C. Do you recommend any underclocking for me? What temps should I focus on? To underclock, I simply adjust my power target, right?
____________
Cruncher/Learner in progress.

3de64piB5uZAS6SUNt1GFDU9d...
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level: Met
Message 47104 - Posted: 25 Apr 2017 | 16:57:00 UTC - in response to Message 47096.
Last modified: 25 Apr 2017 | 17:10:23 UTC

... I prefer to have a stable >90% load at T<70°C instead of increasing the clock and pushing the GPU temperature to the sky.

In my humble opinion, anything beyond 64/65°C on a permanent basis is not really good for a GPU.


As far as I know, the failure probability of a PCB depends very strongly on temperature, and it is a nonlinear, in fact exponential, function, as the process of thermal aging can be described by the Arrhenius equation. The materials most affected are the polymers in discrete components such as capacitors. But other components are impacted too: the board itself combines materials with different expansion coefficients (conducting paths, carrier), which causes mechanical stress, so I wouldn't recommend exhausting the absolute maximum ratings.
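As a rough illustration of how steep that exponential is, here is a minimal Arrhenius acceleration-factor estimate; the ~0.7 eV activation energy is a generic rule-of-thumb value for electronics aging, not a figure for any specific card:

```python
# Arrhenius acceleration factor: how much faster thermal aging proceeds at a
# hotter temperature relative to a cooler reference. The 0.7 eV activation
# energy is an assumed rule-of-thumb value; real components vary widely.
import math

BOLTZMANN_EV_PER_K = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_ref_c: float, t_hot_c: float, ea_ev: float = 0.7) -> float:
    t_ref_k = t_ref_c + 273.15
    t_hot_k = t_hot_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV_PER_K) * (1.0 / t_ref_k - 1.0 / t_hot_k))

print(acceleration_factor(65.0, 80.0))  # ~2.8x faster aging at 80°C vs. 65°C, under these assumptions
```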

They're designed to be able to keep running all the way up to 100°C.


When you look at thermal images of various graphics cards, you see that there are many different ways of cooling the chips, the RAM, the BIOS and so on... and some solutions are not very favorable (even from renowned brands), leading to shortened lifespans. Please note that if you set the temperature limit to 80°C, it doesn't mean all chip and board temperatures stay below that, as there is only one sensor giving you a single value from a single location! Other components may still reach 100°C, so it is wise to reduce the temperature. Of course the PCBs are designed to withstand 250°C and more, but only for a very short time during flow soldering, and not while energized.
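That single reported value is easy to watch while crunching; here is a small sketch using nvidia-smi (assuming it is installed and on your PATH). Note it reads only the GPU core sensor, so VRAM and VRM temperatures stay invisible:

```python
# Read the one temperature value the driver exposes per GPU (core sensor only;
# VRAM/VRM temperatures are not reported on most consumer cards).
# Assumes nvidia-smi is installed and on PATH.
import subprocess

def gpu_core_temps_c() -> list[int]:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [int(line) for line in out.splitlines() if line.strip()]

print(gpu_core_temps_c())  # e.g. [68] -- one reading per GPU
```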

I don't see why lower temperatures should have a negative impact on the PCB... aside from the mechanical stress of dousing the board with liquid nitrogen... ;-)

And with regard to good experience with GPUs at >90°C for many years... well, my grandpa is a heavy smoker, 89 years of age, and still doesn't show any lung function disorder. Lucky him!
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

3de64piB5uZAS6SUNt1GFDU9d...
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level: Met
Message 47105 - Posted: 25 Apr 2017 | 17:08:43 UTC - in response to Message 47103.

My temps on both cards are about 70°C. Do you recommend any underclocking for me? What temps should I focus on? To underclock, I simply adjust my power target, right?


Well... as I wrote, it depends very much on the cooling design. There are cheap cards with awesome heat distribution that you could run at a 90°C temperature limit without a problem, and there are branded, expensive cards with only a 70°C limit but still overheated RAM chips. It's a crazy world.

But assuming moderate and somewhat homogeneous thermal performance, 70°C should be OK for most Nvidia cards. You don't need to downclock, in my humble opinion.

____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Logan Carr
Joined: 12 Aug 15
Posts: 240
Credit: 64,069,811
RAC: 0
Level: Thr
Message 47106 - Posted: 25 Apr 2017 | 17:27:34 UTC - in response to Message 47105.

My apologies, I didn't see it before. Thanks for your thoughts, I appreciate it.
____________
Cruncher/Learner in progress.

3de64piB5uZAS6SUNt1GFDU9d...
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level: Met
Message 47107 - Posted: 25 Apr 2017 | 20:24:54 UTC
Last modified: 25 Apr 2017 | 20:41:28 UTC

See this thermal image of the EVGA GeForce GTX 1080 (without backplate):

http://www.tomshardware.de/evga-gtx-1080-gtx-1070-warmeleitpads-grafikkarte,news-257180.html

Note: the GPU is at a moderate 76°C whereas the VRAM is a steaming 107°C (a 31°C difference), so it will not last very long.

They're designed to be able to keep running all the way up to 100°C.


Now imagine the GPU temperature reading is 100°C. Guess what (and for how long) the VRAM will do at 130°C. Not even military-standard chips [allowed range -55°C to +125°C] can handle that.

Meanwhile, EVGA provides backplates in order to fix this... but I have seen similar images from ASUS as well, so it is not an isolated case. That is why I suggest overclocking with due care, if at all. A backplate is always a good idea, by the way, both for heat dissipation and for mechanical stability.
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

3de64piB5uZAS6SUNt1GFDU9d...
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level: Met
Message 47109 - Posted: 26 Apr 2017 | 8:50:09 UTC - in response to Message 47102.
Last modified: 26 Apr 2017 | 9:12:52 UTC


In my humble opinion, anything beyond 64/65°C on a permanent basis is not really good for a GPU.


Load of rubbish


Well... you cannot generalize... but looking at the EVGA GTX 1070 FTW thermal image, I'd say that Erich56 has a point. As I wrote, there are cards that run for several years at 80°C, and there are other designs that fail at 70°C after only a couple of months.

Side note: if you crunch, avoid temperature fluctuations by all means, as they cause a lot of mechanical stress to the PCB, extending the conducting paths, lands and soldering joints. A black screen or stripes may be the result after a while.

Which means: 24/7 operation is much better for a graphics card's lifespan than crunching only during the day or the night (so the card is cool for half the day)... let alone OC'ing the card all the way in order to meet the time limit. Another thing that speaks for lower GPU temperatures is the cooling phase between two jobs, when the GPU temperature drops to 40-50°C for 10-20 seconds and then rises again. You will simply have less mechanical stress by keeping the temperature difference as small as possible.

As a consequence, a graphics card will age faster with short runs than with long runs, as there are many more cooling intervals. Running two tasks in parallel (and therefore having no sharp drop in load at the end of one job) will cushion that effect, by the way.
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level: Trp
Message 47112 - Posted: 26 Apr 2017 | 16:47:07 UTC - in response to Message 47109.

Well... you cannot generalize... but looking at the EVGA GTX 1070 FTW thermal image, I'd say that Erich56 has a point. As I wrote, there are cards that run for several years at 80°C, and there are other designs that fail at 70°C after only a couple of months.
I agree.

Side note: if you crunch, avoid temperature fluctuations by all means, as they cause a lot of mechanical stress to the PCB, extending the conducting paths, lands and soldering joints.
That's called thermal fatigue, caused by thermal cycling.
Fewer thermal cycles = longer lifespan.
Lower amplitude of the thermal cycle = longer lifespan.
As the cards normally start from room temperature, the latter amounts to:
Lower maximum temperature = longer lifespan.
Applying liquid nitrogen (-196°C = -321°F) cooling causes a roughly four times larger thermal swing than the card going from room temperature to 80°C.
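A quick back-of-the-envelope check of that factor, assuming about 25°C room temperature:

```python
# Thermal swing amplitudes relative to an assumed 25°C room temperature.
room_c = 25.0
crunch_swing = 80.0 - room_c      # 55 K swing: room temperature up to 80°C
ln2_swing = room_c - (-196.0)     # 221 K swing: room temperature down to liquid nitrogen
print(ln2_swing / crunch_swing)   # ~4.0 -- roughly four times the amplitude
```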

A black screen or stripes may be the result after a while.
Failed cards can't process workunits, so failed workunits are also a sign.

Which means: 24/7 operation is much better for a graphics card's lifespan than crunching only during the day or the night (so the card is cool for half the day)
I fully agree.

... let alone OC'ing the card all the way in order to meet the time limit. Another thing that speaks for lower GPU temperatures is the cooling phase between two jobs, when the GPU temperature drops to 40-50°C for 10-20 seconds and then rises again.
The chip can withstand far more and far larger thermal cycles than the whole PCB can. The PCB won't cool down that fast, so this does not have as big an impact on the card's lifespan as letting the whole card reach room temperature.

You will simply have less mechanical stress by keeping the temperature difference as small as possible.
I fully agree.

As a consequence, a graphics card will age faster with short runs than with long runs, as there are many more cooling intervals.
That's another reason for crunching long runs only on fast cards, and short runs only on slower cards.
And also a reason not to enable the "Suspend GPU while the computer is in use" option.

Running two tasks in parallel (and therefore having no sharp drop in load at the end of one job) will cushion that effect, by the way.
True.
