Message boards : News : Changes to scheduling policy
Hi all,
ID: 38145
I have a GTX 690 card with driver 344.11 (CUDA 6.5), and I am getting nothing.
ID: 38151
There was a tiny problem. You should be getting something now.
ID: 38155
Not everything works correctly, if I may say so. I probably have too old a driver (331.00), as the newer one is a bit slower for my 780 Tis on Win7.
ID: 38165
I hope you're still able to feed my rig's 3 GPUs, both now and in the future:
ID: 38167
Ok, looks like your old driver is incorrectly reporting CUDA 6 capability.
ID: 38168
> Ok, looks like your old driver is incorrectly reporting CUDA 6 capability.

Thanks Matt, yes I have already downloaded the latest version and will update once the queue is empty. I know that is not necessary, but that is the way I always do it, and I don't lose any work that way, having a high electricity bill.
____________
Greetings from TJ
ID: 38170
The scheduler is still doing its job in a strange manner.
ID: 38218
Hi all
ID: 38250
My three NVIDIA GeForce GTX 760s (2048MB, driver 337.88), each in its own machine, are getting 6.0 work and doing just fine.
ID: 38252
It's as specific as I can make it until I've a CUDA 6.5 app on each queue.
ID: 38254
I have an Asus [2] NVIDIA GeForce GTX 295 (896MB), driver 340.52, and I am getting no cuda60 work for Short runs (2-3 hours on fastest card) v8.41 (cuda60).
ID: 38255
Yes, quite so: sm 1.3 is re-enabled for the moment. I'll probably drop this support fairly soon, though. GTX 270-295 cards account for less than 1% of our throughput now, and it's not worth maintaining an application for them.

Matt
ID: 38257
Hi, with my GTX 770 on Ubuntu 14.04, BOINC 7.4.22 and NVIDIA 340.46 (CUDA 6.5), I receive only 8.21 (cuda60) tasks.
ID: 38258
Ok. Thanks
ID: 38259
> Hi, with my GTX 770 on Ubuntu 14.04, BOINC 7.4.22 and NVIDIA 340.46 (CUDA 6.5), I receive only 8.21 (cuda60) tasks.

Hi Carlesa25, if I have read all the posts correctly, then at the moment cuda65 is only on the beta and short-run queues.
____________
Greetings from TJ
ID: 38260
10/3/2014 3:45:28 PM | GPUGRID | Sending scheduler request: To fetch work.
ID: 38283
> 10/3/2014 3:45:28 PM | GPUGRID | Sending scheduler request: To fetch work.

BOINC, using Linux NVIDIA driver 343.22, reports CUDA version 6.5 for my 780 Ti: no work available. If I roll back my NVIDIA driver to 337.25, BOINC reports CUDA 6.0 and now I can download work. Problem solved for now.
ID: 38287
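(Aside: an easy way to confirm which CUDA level your client is reporting is to check the BOINC startup messages. A minimal sketch in Python, assuming a BOINC 7.x-style "CUDA: NVIDIA GPU ..." startup line; the exact log wording varies between client versions, so both the sample line and the pattern are assumptions, not the client's documented format:)

```python
import re

# Assumed sample of a BOINC 7.x startup line; exact wording varies by
# client version, so this line and the regex below are illustrative only.
LOG_LINE = ("CUDA: NVIDIA GPU 0: GeForce GTX 780 Ti "
            "(driver version 343.22, CUDA version 6.5, compute capability 3.5)")

def reported_cuda_version(line: str) -> float | None:
    """Extract the CUDA version BOINC reports for a GPU, if present."""
    m = re.search(r"CUDA version (\d+\.\d+)", line)
    return float(m.group(1)) if m else None

print(reported_cuda_version(LOG_LINE))  # -> 6.5
```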
There'll be a CUDA 6.5 app for linux later today.
ID: 38289
biodoc - it's on acemdbeta and short now. Please test it!
ID: 38290
> biodoc - it's on acemdbeta and short now. Please test it!

Ok, will do. I'm finishing up a Windows CUDA 6.5 long WU in about an hour. Thanks!
ID: 38292
> There'll be a CUDA 6.5 app for linux later today.

Will it work for my 780 Ti, or is it exclusive to the 980/970?
ID: 38293
I've revised the scheduling policy rules in the original post.
ID: 38295
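(To make the revised rules concrete: here is a minimal sketch of how a scheduler could map the client's reported CUDA level and the card's compute capability to an app version. The thresholds mirror what this thread observes — 343.xx/344.xx drivers get cuda65, 337.xx gets cuda60, and sm 1.3 cards get the re-enabled legacy app — but the function and the app names are illustrative, not the project's actual server code:)

```python
def pick_app(reported_cuda: float, compute_capability: float) -> str | None:
    """Illustrative dispatch: choose an app version for a client GPU."""
    if compute_capability < 1.3:
        return None                # below the minimum the project supports
    if compute_capability < 2.0:
        return "sm13-legacy"       # re-enabled for now, slated for retirement
    if reported_cuda >= 6.5:
        return "cuda65"
    if reported_cuda >= 6.0:
        return "cuda60"
    return None                    # driver too old: no work sent

# Cases drawn from this thread (compute capabilities are assumed):
print(pick_app(6.5, 3.5))  # 780 Ti on driver 343.22 -> cuda65
print(pick_app(6.0, 3.0))  # GTX 760 on driver 337.88 -> cuda60
print(pick_app(6.0, 1.3))  # GTX 295 -> legacy sm 1.3 app
```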
I have a GTX 460 which has not had work for over a day (long queue).
ID: 38300
Hi
ID: 38318
Veneto,
ID: 38319
sabayonino, I see that you are getting work now.
ID: 38322
> sabayonino, I see that you are getting work now.

Now all my GPUs are crunching :) (cuda65). My BOINC client version is 7.2.42 on all hosts with a GPU. Maybe it was a temporary problem :) Thanks
ID: 38326
I have more or less the same issue on this host: http://www.gpugrid.net/results.php?hostid=178360 (BOINC 7.2.42). I'm able to get short workunits *only*, no matter what I try.
ID: 38337
Hi Valterc
ID: 38339
The Linux cuda65 app is on long now.
ID: 38340
Hello,
ID: 38414
> unfortunately I'm getting computation errors most of the time.

If you take a look at your tasks' details, you can see the reason for those errors:

# The simulation has become unstable. Terminating to avoid lock-up (1)

This error is a sign of an unstable GPU. The root of this instability can be various:
- Too high GPU temperature (above 80°C, so this is not the case for you)
- Too low GPU voltage for the given GPU clock
- Too high GPU clock for the given GPU voltage (e.g. an aging GPU may not run even at factory settings)
- Too high GDDR5 frequency
- An insufficient, low-quality or (nearly) broken PSU
- Too high transient resistance on the PCIe power connectors (usually caused by Molex->PCIe converters), or on the two 12V pins of the 24-pin motherboard power connector

> I've got two GTX 570 with 2.5 GB VRAM each, newest driver 344.11.

This card has twice as many memory chips as a standard GTX 570, so perhaps the GPU can't drive the memory data lanes that fast.

> Doesn't matter if I'm in SLI or not.

SLI is usually a source of random errors.

> Other GPU projects like SETI, Einstein or Asteroids run fine.

Other GPU projects have applications built on older CUDA versions, while GPUGrid uses the latest (CUDA 6.5 at the moment), so they don't stress the GPU as much as the GPUGrid client does. The "GPU usage" measurement is misleading.

> Is there anything I can do?

Check all the power connectors in your PC for burnt ones. Lower the GPU clock in 100MHz steps until it becomes stable; if that doesn't work, try again by lowering the GDDR5 frequency in 100MHz steps. If your GPU becomes stable at some lower GPU clock, you can try raising the clock again in 10-20MHz steps for as long as it doesn't cause these "simulation became unstable" messages, then increase the GPU voltage by 12.5mV and repeat raising the clock, as long as the GPU doesn't get hot. Beware that different GPUGrid batches stress the GPU differently, so if there is no stability headroom in your settings, some harder workunits may still fail.
ID: 38415
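(The step-down/step-up procedure above is easy to mis-follow, so here is a sketch of it in Python. The helpers set_core_clock_offset() and run_stress_test() are hypothetical placeholders for whatever overclocking tool and load test your platform provides; neither exists as-is and both must be filled in for your setup:)

```python
def set_core_clock_offset(mhz: int) -> None:
    """Placeholder: apply a core-clock offset via your overclocking tool."""
    raise NotImplementedError

def run_stress_test() -> bool:
    """Placeholder: run a GPUGrid-like load, return True if it stays stable."""
    raise NotImplementedError

def find_stable_clock(base_mhz: int, floor_mhz: int) -> int | None:
    """Lower the core clock in 100 MHz steps until the GPU is stable."""
    clock = base_mhz
    while clock >= floor_mhz:
        set_core_clock_offset(clock - base_mhz)
        if run_stress_test():
            return clock
        clock -= 100
    return None  # still unstable: try lowering the GDDR5 clock instead

def creep_back_up(stable_mhz: int, base_mhz: int, step_mhz: int = 20) -> int:
    """Raise the clock again in 10-20 MHz steps while the GPU stays stable."""
    clock = stable_mhz
    while clock + step_mhz <= base_mhz:
        set_core_clock_offset(clock + step_mhz - base_mhz)
        if not run_stress_test():
            break  # back off; optionally add +12.5 mV and retry the step
        clock += step_mhz
    set_core_clock_offset(clock - base_mhz)  # settle on last stable clock
    return clock
```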
Thank you very much for the detailed information.
ID: 38510
And the next WU crashed again at about 10% :-(
ID: 38513