
Message boards : Graphics cards (GPUs) : Had to stop crunching on GT240

Betting Slip
Message 19338 - Posted: 7 Nov 2010 | 20:44:58 UTC

I've had to stop crunching for GPUGrid on the GT240s I've got because, since the new app, my dual-card machine keeps freezing even with the cards at stock, and I get stuttery video on another computer. Before anyone suggests it's not this project's apps: I have tested it with and without them. One machine will still run for a while because it's remote, but it too will be detached.
It's all well and good squeezing every ounce out of these cards, but please remember they also have other tasks to perform, which I think is sometimes forgotten, especially as they don't have the low-priority capability of the CPU.

SORRY!
____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline

David @ TPS
Message 19798 - Posted: 7 Dec 2010 | 17:54:46 UTC - in response to Message 19338.

I also had to pull my GT240 since it did not like the new WUs at all; run times went from about 20 hours to several days. I do not think it has the power to run GPUGrid any more.

Saenger
Message 19799 - Posted: 7 Dec 2010 | 19:56:26 UTC

Yes, it's a shame that such good, not really old, cards get abandoned by the project in its addiction to only the latest, most expensive cards running 24/7.

They don't care about ordinary crunchers; they only want people with a great deal of money to spend.
____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki

MarkJ
Volunteer moderator
Volunteer tester
Message 19801 - Posted: 7 Dec 2010 | 20:13:37 UTC

As skgiven suggested to me, you could roll back to the 197.45 drivers so that you get the older 6.12 app running under CUDA 3.0. I have done this on one machine and it seems to have resolved the reliability issues I was having.

CUDA 3.1 + the 6.13 app are not stable together (under Windows).

I understand if you need a later CUDA version for some other project, so this may not be the answer for you.

Cheers
____________
BOINC blog

Jeff Gu
Message 19815 - Posted: 8 Dec 2010 | 21:23:45 UTC

I'm going to have to drop this one, too. My Nvidia cards are all 9600/9800/8800 cards, and GPUGrid WUs either error out, take days to finish with no credit given, or the machine simply won't get any.

I also think it's a shame that my mostly recycled crunchers, machines I brought back to life to put to work for science, aren't desirable. I'm not about to put together $2000 machines to satisfy the requirements of a couple of projects that seem more interested in playing with bleeding edge equipment than doing anything resembling real science.
____________
Jeff Gu

Richard Haselgrove
Message 19817 - Posted: 8 Dec 2010 | 21:47:39 UTC - in response to Message 19815.

Sometimes real (bleeding edge) science requires bleeding edge equipment to get answers while the science is still - well, bleeding edge.

Having said that, my 9800GT and 9800GTX+ cards are (mostly) finishing tasks - except the new 'Fatty' WUs - within 24 hours, without error, provided I abort the *-KASHIF_HIVPR_n1_(un)bound_* sub-class of tasks: that's the only group I'm still having trouble with.

Betting Slip
Message 19820 - Posted: 8 Dec 2010 | 23:06:10 UTC - in response to Message 19801.

As skgiven suggested to me, you could roll back to the 197.45 drivers so that you get the older 6.12 app running under CUDA 3.0. I have done this on one machine and it seems to have resolved the reliability issues I was having.

CUDA 3.1 + the 6.13 app are not stable together (under Windows).

I understand if you need a later CUDA version for some other project, so this may not be the answer for you.

Cheers



Would it not be better to have the server only give the 6.12 app to 200-series cards, rather than the user having to use old drivers, which will affect things like gaming?

If the project only wants cutting-edge cards, then why doesn't it say so?


____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline

skgiven
Volunteer moderator
Volunteer tester
Message 19834 - Posted: 9 Dec 2010 | 15:57:46 UTC - in response to Message 19820.

Would it not be better to have the server only give the 6.12 app to 200-series cards, rather than the user having to use old drivers, which will affect things like gaming?

I know that it would have been better for some of us, including myself on one or two systems, but it was probably too much work for the scientists (assuming it is actually possible at all), and it would not have solved the problem where people could not use a Fermi and a GT200 card in the same system; they now can.

If the project only wants cutting-edge cards, then why doesn't it say so?

Just to clear this up: the project does want cutting-edge cards, which are presently limited to the GTX580, GTX570, GTX480, GTX470 and GTX465. However, the project also wants the lesser (mid-to-high-range) Fermi cards such as the GTX460 and GTS450, and the lower-end cards such as the GT440.
This does not mean the project does not want contributions from people with older cards, and that is not limited to the top-end GT200 cards either (GTX260-216, GTX275, GTX280, GTX285, GTX295); it extends to the GT240 and any of the mid-to-high-end CC1.1 cards that are recommended and work on GPUGrid.

There have always been problems with the CC1.1 cards (overheating, bad design, bad drivers, CUDA bugs), and the performance of many CC1.1 cards tended to vary with the different research applications and tasks. To a lesser extent this was, and is, also the case with CC1.3 and CC1.2 cards.

Overall, the present situation is not any worse for CC1.1 card users; it's just that it is much better for Fermi users, as there are fewer critical issues with Fermi cards. This is inherent in the cards' design: they are newer, better cards with improved driver support. Most of this is down to NVidia's designs and software support (development of drivers and CUDA), which has improved substantially since the CC1.1 generation.

As there are no new driver or CUDA improvements from NVidia for CC1.1, CC1.2 or CC1.3 cards, the research is static on this front: the research team cannot eke out any further scientific app improvements for these cards. Indeed, spending more time trying to refine mature apps would be a waste; instead they made compatible apps for CC1.3 and Fermi cards. Any app advancements that do come will primarily be for Fermi cards, but the scientific research still runs on the older cards.

BarryAZ
Message 19836 - Posted: 9 Dec 2010 | 16:58:07 UTC - in response to Message 19817.

I periodically check back on this project to see if there has been any progress resolving work-unit failure issues on mid-range cards (I still have a few 9800GTs running). The answer, sadly, is that things are worse. I just had a batch error out after running anywhere from 3 to 14 hours: the deadly "error while computing" result.

GPUGrid (for me) has a number of problems.

1) Relatively low credits for GPU cycles (compared to nearly any other CUDA supporting project).

2) VERY long run times - which increase the possibility of 'error while computing'.

3) Bad citizenship regarding erroring work units: unlike other projects, where the work units either error out early or 'pre-abort', GPUGrid work units are perfectly content to suck up GPU cycles for hours before failing, thus wasting those cycles.

4) Over time, the GPU requirements have grown, making any but quite expensive quite high power CUDA cards increasingly marginal. Further, the long run/fail scenario has gotten worse rather than better over time.

5) The continued non-support for ATI GPU cards. ATI cards have been getting better and more efficient (as a double-precision, lower-cost and relatively low-power card, the 4850 is quite attractive), and the lack of support from GPUGrid increasingly marginalizes the project.

Given all this, I've elected to detach my 9800-based workstations from GPUGrid; instead, they will focus on Collatz, DNetc, and to a lesser degree Einstein and SETI, as NONE of those projects are plagued by the long-run, error-while-computing problems seen so frequently here.




Sometimes real (bleeding edge) science requires bleeding edge equipment to get answers while the science is still - well, bleeding edge.

Having said that, my 9800GT and 9800GTX+ cards are (mostly) finishing tasks - except the new 'Fatty' WUs - within 24 hours, without error, provided I abort the *-KASHIF_HIVPR_n1_(un)bound_* sub-class of tasks: that's the only group I'm still having trouble with.

Greg Beach
Message 19994 - Posted: 18 Dec 2010 | 22:06:04 UTC

For Linux users: you can use the GT240 with the latest NVIDIA drivers. There was a problem with the Cairo 2D libraries that caused X.Org to chew a lot of CPU, but that's resolved in the latest version.

Run times for the 6.13 app increased about 50-75% over 6.12, so I don't get the 24-hour bonus, but I do make it within the 48-hour window. At least the work units don't fail like they did before.

For those interested, my configuration is:

Fedora 14 x86_64 (all current updates)
NVIDIA Driver 260.19.29
CUDA 3.2 Toolkit
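For anyone comparing setups like the one above: on Linux the loaded NVIDIA kernel module reports its version in /proc, which is a quick way to confirm which driver the host is actually running after an upgrade or rollback. A minimal sketch (the helper name is mine, not part of any BOINC or GPUGrid tooling):

```python
from pathlib import Path

def nvidia_driver_line(proc_path="/proc/driver/nvidia/version"):
    """Return the first line of the NVIDIA version file, or None if the
    kernel module is not loaded (the file only exists when it is)."""
    p = Path(proc_path)
    return p.read_text().splitlines()[0] if p.exists() else None

print(nvidia_driver_line())
```

On a host like the one described, the first line would contain the driver build string (e.g. "260.19.29"); None means the NVIDIA module is not loaded at all.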

Beyond
Message 20022 - Posted: 24 Dec 2010 | 16:13:31 UTC - in response to Message 19836.

I periodically check back on this project to see if there has been any progress resolving work-unit failure issues on mid-range cards (I still have a few 9800GTs running). The answer, sadly, is that things are worse. I just had a batch error out after running anywhere from 3 to 14 hours: the deadly "error while computing" result.

GPUGrid (for me) has a number of problems.

1) Relatively low credits for GPU cycles (compared to nearly any other CUDA supporting project).

2) VERY long run times - which increase the possibility of 'error while computing'.

3) Bad citizenship regarding erroring work units: unlike other projects, where the work units either error out early or 'pre-abort', GPUGrid work units are perfectly content to suck up GPU cycles for hours before failing, thus wasting those cycles.

4) Over time, the GPU requirements have grown, making any but quite expensive quite high power CUDA cards increasingly marginal. Further, the long run/fail scenario has gotten worse rather than better over time.

5) The continued non-support for ATI GPU cards. ATI cards have been getting better and more efficient (as a double-precision, lower-cost and relatively low-power card, the 4850 is quite attractive), and the lack of support from GPUGrid increasingly marginalizes the project.

Ditto. I check back from time to time and would like to run the project again if things improve. Personally, I think some help is needed in the programming department.

skgiven
Volunteer moderator
Volunteer tester
Message 20024 - Posted: 24 Dec 2010 | 18:24:59 UTC - in response to Message 20022.

These guys program in CUDA; it is their expertise.

Things are good and will get better.

Saenger
Message 20045 - Posted: 27 Dec 2010 | 9:32:05 UTC - in response to Message 20024.

These guys program in CUDA; it is their expertise.

Things are good and will get better.

Things were good, got bad, and will perhaps get good again.

For the GT240, things are very, very bad now, and they were problem-free before this fatal change of application.

If the credits for work done haven't changed (and as they are determined solely by the project, I assume that's the case), my computer does about half the work it did before, and only if I heavily babysit each and every single WU. If I don't, as is currently the case because I'm away from the computer and the WUs run as the project officially intends, it will probably not even make the credit-only deadline of 4 days, never mind the real one of two days.

Re-installing the old app would have made my computer twice as effective, but the project decided to ditch it rather than cater to recent, just not very expensive, cards.
____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki

skgiven
Volunteer moderator
Volunteer tester
Message 20048 - Posted: 27 Dec 2010 | 13:00:58 UTC - in response to Message 20045.

Saenger, you present a skewed view that lacks project objectivity. The project does cater for GT240s:
running 24/7, this stock GT240 would get between 16K and 19K per day. These cards each get around 14K per day despite being on Vista and not running for about 20 hours per week.

The widely accepted advice is to use a driver that allows GT240 crunchers to run the 6.12 app rather than the 6.13 app (designed for Fermi cards). More recent drivers are slower for the GT240, and there is a fault in these drivers that causes some cards (especially GT240s) to reduce their clock rates; this seems to happen more often on XP and Linux. GPUGrid does not write the drivers and has no influence over their design. It is also advised to free up one CPU core per GPU card and to use swan_sync=0.
As for Linux variants, it appears that the most readily available driver for Ubuntu 10.10 is the 260 driver. While it is possible to use a different driver, it's not easy under Ubuntu 10.10, and many previous builds just do not install. Again, these troubles are down to Linux and NVidia, not GPUGrid. Note that Ubuntu 10.10 was released after the 6.12 and 6.13 apps.
Your choice of operating system, driver and configuration determines your ability to crunch the various BOINC projects. It has been explained many times that the new system supports more cards. Your use of non-recommended configurations is your choice. I would encourage you to stop complaining about your setup's shortcomings; it does not help anyone. Please either change your setup or accept the situation as it is for now.
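For reference, swan_sync is read by the app from the environment, so it has to be visible to the BOINC client's process tree (set system-wide before BOINC starts). A minimal sketch of what the "swan_sync=0" check means; the helper name is mine, not part of any GPUGrid tooling:

```python
import os

def swan_sync_enabled(env=None):
    """True if SWAN_SYNC=0 is set in the given environment mapping.
    SWAN_SYNC=0 makes the app poll the GPU from a dedicated CPU core
    instead of sleeping, which is why the advice above also says to
    free up one CPU core per card."""
    env = os.environ if env is None else env
    return env.get("SWAN_SYNC") == "0"

print(swan_sync_enabled())
```

If this prints False inside a process launched the same way as the BOINC client, the app will not see the setting either.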

Stoneageman
Message 20050 - Posted: 27 Dec 2010 | 18:43:31 UTC

Hello Saenger,
Check out Bigtuna. He's running Linux with a 240 and averaging less than a day per task at stock, with no SWAN_SYNC. Might be worth a PM to find out what his setup is.

Saenger
Message 20081 - Posted: 1 Jan 2011 | 14:52:44 UTC

My machine finished its first WU under non-babysitting conditions, i.e. set up the way the project has to expect its users to set up their machines, except that it was running an unusual 24/7:

Sent 23 Dec 2010 21:07:21 UTC
Received 30 Dec 2010 18:27:37 UTC
Server state Over
Outcome Success
Client state None
Exit status 0 (0x0)
Computer ID 66676
Report deadline 28 Dec 2010 21:07:21 UTC
Run time 588547.353027
CPU time 2359.39
stderr out

<core_client_version>6.10.17</core_client_version>
<![CDATA[
<stderr_txt>
# Using device 0
# There is 1 device supporting CUDA
# Device 0: "GeForce GT 240"
# Clock rate: 1.34 GHz
# Total amount of global memory: 536150016 bytes
# Number of multiprocessors: 12
# Number of cores: 96
MDIO ERROR: cannot open file "restart.coor"
# Using device 0
# There is 1 device supporting CUDA
# Device 0: "GeForce GT 240"
# Clock rate: 1.34 GHz
# Total amount of global memory: 536150016 bytes
# Number of multiprocessors: 12
# Number of cores: 96
# Using device 0
# There is 1 device supporting CUDA
# Device 0: "GeForce GT 240"
# Clock rate: 1.34 GHz
# Total amount of global memory: 536150016 bytes
# Number of multiprocessors: 12
# Number of cores: 96
# Time per step (avg over 175000 steps): 466.165 ms
# Approximate elapsed time for entire WU: 582706.597 s
19:20:35 (5944): called boinc_finish

</stderr_txt>
]]>

Validate state Task was reported too late to validate
Claimed credit 8011.53935185185
Granted credit 0
application version ACEMD2: GPU molecular dynamics v6.13 (cuda31)

It even has swan_sync set to 0.
So much for catering to the GT240. Definitely not so. If it doesn't run on that machine, you're not interested in normal crunchers with very recent cards like mine.

Everything ran exactly as you have to expect it to. You have to deal with this environment; we poor crunchers shouldn't have to deal with your absurd babysitting and fiddling demands.
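The dates in the result listing above make the arithmetic easy to check; a small sketch, with the values copied from the listing:

```python
from datetime import datetime

# Values copied from the result listing above.
sent     = datetime(2010, 12, 23, 21, 7, 21)
received = datetime(2010, 12, 30, 18, 27, 37)
deadline = datetime(2010, 12, 28, 21, 7, 21)
run_time_s = 588547.353

print(f"reported run time: {run_time_s / 86400:.2f} days")                      # ~6.81 days
print(f"turnaround:        {(received - sent).total_seconds() / 86400:.2f} days")  # ~6.89 days
print(f"past deadline by:  {(received - deadline).total_seconds() / 3600:.1f} h")  # ~45.3 h
```

The task came back roughly 45 hours after the 5-day report deadline, which is why the listing shows "reported too late to validate" and zero granted credit.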
____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki

skgiven
Volunteer moderator
Volunteer tester
Message 20082 - Posted: 2 Jan 2011 | 15:27:07 UTC - in response to Message 20081.

You persistently refuse to use recommended configurations, and then persistently complain that the project does not support your non-recommended setup.
While I understand that setup and configuration are your choice, and that you are disgruntled that the project does not run well under your setup, you should accept that the project cannot afford to go out of its way to facilitate such setups:
although the project has thousands of crunchers attached, the top 5 crunchers do 10% of the work, the top 20 do 22%, and the top 100 do 45%. If the project went back to the old apps then no one with a GTX460 could crunch on Linux and no one with a GTS450 could crunch at all. Many of these cards are in use here.
Your setup appears to favour other projects rather than this one.
It's down to the application developers to support new cards, and down to crunchers to support the project, not the other way round.

Saenger
Message 20083 - Posted: 2 Jan 2011 | 16:42:15 UTC

As I have no way to choose an application, it is the project team's choice alone to send Fermi apps to my non-Fermi card. I can't do anything to choose 6.12; I can only install the newest drivers.

The project knows my setup perfectly well (not because I posted it here, but because my BOINC manager reports it), yet they persistently refuse to send me a matching app. If I could set anything in my account, I would do so. I even asked about this possibility quite some time ago; not implementing it was another active choice by the project.

I can only be passive, as I have no possibility to choose.
The project actively sends the applications and the WUs, and it alone decides what it deems suitable for my machine.
If they wanted, they have every possibility to send only matching WUs and apps; they simply don't want to.
They want my machine not to use the CPU.
They want my machine to run the app at low priority.
They want my machine to crunch even extremely long WUs.
They want to send Fermi apps to my non-Fermi machine.
They want to waste my GPU power.
____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki

ftpd
Message 20084 - Posted: 2 Jan 2011 | 17:03:47 UTC - in response to Message 20083.
Last modified: 2 Jan 2011 | 17:21:33 UTC

Saenger,

The final choice is yours.

You can crunch GPUgrid or not!

Please stop this wasteful discussion on the forum now.

Happy new year and go crunch RNA-World!

Use your card for Primegrid or Seti@home!
____________
Ton (ftpd) Netherlands

crazyrabbit1
Message 20085 - Posted: 2 Jan 2011 | 21:21:49 UTC - in response to Message 20084.

I also have a GT240, and I downgraded my driver version to crunch some WUs in a normal time, because I thought the project was worth it as a backup. Normally I do not spend time on a beta project, but after reading this thread I think the project will never be out of beta.
The project is hunting for an application for the newest and fastest GPUs available, and not for as many crunchers as possible. That is no science; just my two cents.
Even if I buy the fastest GPU I can get today, I cannot be sure I will be able to use it on GPUGrid in 12 months. I think the project should think about this.

Only my opinion.
I will upgrade my driver so that I can crunch Einstein.

liveonc
Message 20086 - Posted: 2 Jan 2011 | 22:47:37 UTC - in response to Message 20083.

Saenger, I couldn't get my GT240, 8800GT & GTX260 to work with Linux either; both Mint Linux 8 and Mint Linux 10 failed, no matter what I tried. So I gave up and decided to use the Windows 7 90-day Enterprise evaluation instead link

I hope this helps. Nothing else did... :-(
____________

skgiven
Volunteer moderator
Volunteer tester
Message 20087 - Posted: 3 Jan 2011 | 7:32:55 UTC - in response to Message 20085.

I also have a GT240, and I downgraded my driver version to crunch some WUs in a normal time, because I thought the project was worth it as a backup. Normally I do not spend time on a beta project, but after reading this thread I think the project will never be out of beta.
The project is hunting for an application for the newest and fastest GPUs available, and not for as many crunchers as possible. That is no science; just my two cents.
Even if I buy the fastest GPU I can get today, I cannot be sure I will be able to use it on GPUGrid in 12 months. I think the project should think about this.

Only my opinion.
I will upgrade my driver so that I can crunch Einstein.

People have been using 200-series cards for around 2 years. Think about what it would mean to support only old GPUs; what sort of science do you think that would be?
GPU crunching is more difficult than CPU crunching, otherwise every project would be doing it.
Every BOINC project does alpha and beta work. Despite having a small team, this project works in several fields of science and in computer programming (for molecular dynamics). GPUGrid produces scientific publications, hardly beta work.

Perhaps you missed the bit where I described the 260 driver issues: they cause some GPUs (especially GT240s) to drop to the lowest power mode, making tasks take about 7 times as long. This will still happen on Einstein or Folding or any other project, because it is a driver problem. In itself this is a good enough reason not to force people onto this driver, but by using it you can only crunch tasks with the 6.13 app, which is slower for GT240s because it was written primarily with Fermi cards in mind.

PS. I have 6 GT240's.

Saenger
Message 20094 - Posted: 3 Jan 2011 | 16:18:52 UTC - in response to Message 20087.
Last modified: 3 Jan 2011 | 16:19:26 UTC

Perhaps you missed the bit where I described the 260 driver issues: they cause some GPUs (especially GT240s) to drop to the lowest power mode, making tasks take about 7 times as long. This will still happen on Einstein or Folding or any other project, because it is a driver problem.
Not here; it stays at "Prefer Maximum Performance".
In itself this is a good enough reason not to force people onto this driver, but by using it you can only crunch tasks with the 6.13 app, which is slower for GT240s because it was written primarily with Fermi cards in mind.
The project knows that the card is worse off with 6.13, but it still insists on pushing that app onto my machine. I'd like to say "no", but there's no way to; they refuse to give us one.

Again:
All choices are made by the project team alone, and they make them actively; this is what they specifically and definitely want. I have no choice here but to quit or to heavily babysit each and every single WU.
____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki

Snow Crash
Message 20097 - Posted: 3 Jan 2011 | 17:31:54 UTC - in response to Message 20094.

As you have repeatedly noted, the project team has made their choices... perhaps it is now time for you to make yours: either work with the project as best you can, or move on.
____________
Thanks - Steve

Lazarus-uk
Message 20102 - Posted: 4 Jan 2011 | 11:33:24 UTC - in response to Message 20097.

As you have repeatedly noted, the project team has made their choices... perhaps it is now time for you to make yours: either work with the project as best you can, or move on.




It would seem that many people have already made their choices and decided to move on. With the attitude that has been expressed by the project/moderators (and others), I'm not surprised either.

This project is starting to become an exclusive club for those with the latest cards/drivers. I am still able to crunch GPUGrid on one of my part-time hosts with a GTX260 (which I went out and bought just so I could run GPUGrid), although I have to be careful when I download and start a WU: I make sure I turn the PC on for a couple of hours before I go off to work, and start it up again when I get home for dinner, etc., otherwise I have trouble returning them in time for the bonus. I stopped crunching here when the longer tasks came out and now crunch projects that I do not need to babysit.

Please remember that BOINC's mantra is to "use the idle time of your computer...". Many of us would love to contribute more, but cannot afford to buy the latest cards, or to let our machines run 24/7, or are unwilling to set our machines up just so they can run a few tasks here.

I feel that GPUGrid should work with the people who are willing to crunch it, not dictate to them what they need to do to crunch it. If you continue like this, that graph will only get worse.

Mark



skgiven
Volunteer moderator
Volunteer tester
Message 20103 - Posted: 4 Jan 2011 | 12:24:40 UTC - in response to Message 20102.
Last modified: 4 Jan 2011 | 12:28:04 UTC

The gradual rise during November is probably due to outages at PrimeGrid, and the obvious outage at GPUGrid around the 4th of December meant the same people who came to GPUGrid left and went elsewhere, including back to PrimeGrid, which by that stage was up and running again.
Aqua seems to have faded badly recently; I'm not sure why.
Einstein seems to have taken a seasonal hit, along with some of the big CPU projects: Cosmology, Rosetta, QMC, ABC.

The structure of this project requires a fast turnover of tasks. The contribution of casual crunchers who have average GPUs and only have their systems on for one or two hours a day is negligible at best. BOINC projects differ, and the default BOINC settings are not always ideal, so crunchers need to change their settings to suit the project. This is the case with other GPU projects too; for example, some projects require specific drivers and have card limitations. Only the project leader can make the changes that you and others want, and you and I are not aware of their financial or time constraints. When there used to be a choice of which task to crunch, there were even more problems than now.

PS. Remember that the graph starts at 26M and only rises to 36M; it's not a zero-based graph!

liveonc
Message 20104 - Posted: 4 Jan 2011 | 15:36:49 UTC - in response to Message 20103.

Still, for whatever reason(s) there are ups and downs; looking at that graph is like looking at the stock market. There's little love and little trust. Even if the top 5 crunchers do 10% of the work, the top 20 do 22% and the top 100 do 45%, that drop suggests there's little trust and little love. Maybe it's all this "if you don't like it, go somewhere else, because 'science' requires us to tell you to go somewhere else"...
____________

Stoneageman
Message 20106 - Posted: 4 Jan 2011 | 18:10:36 UTC

The last three weeks or so have been almost all KASHIF WUs. They return the least points, at about 85% of the previous average.

DARGHelp
Message 20118 - Posted: 5 Jan 2011 | 16:38:08 UTC

I have given up on GPUGRID for good this time... what a waste of time!!!

Snow Crash
Message 20120 - Posted: 5 Jan 2011 | 20:27:45 UTC - in response to Message 20118.

sk and SAM bring up excellent reasons for some of the PPD decline, but I think most of the downward trend came from the SETI crunchers who joined us when their project went down and then returned to their favourite project when it came back online.

If there were a stats site that displayed more than 60 days of history, you would probably also see that this recent decline is only a blip and that, overall, the project has continued to increase its PPD over time.

While recognizing that the XtremeSystems team is comprised mostly of dedicated 24/7 crunchers with hardware adequate for this project, our stats have steadily increased, with minor fluctuations driven by normal joins and departures based on personal goals rather than project complexities. Has anyone looked at other teams to see if the same is true for them?

Are there any stats on how long the people who have departed had been attached to the project?

If it is a quick turnaround, then maybe more focus can be placed on clarifying just how demanding the science of this project is, to help set expectations. That might help avoid the heightened disappointment we are seeing in this thread (and some others).
____________
Thanks - Steve

Richard Haselgrove
Message 20129 - Posted: 6 Jan 2011 | 19:18:18 UTC
Last modified: 6 Jan 2011 | 19:27:20 UTC

For people who find it hard to run GPUGrid on older cards: you might like to know that Einstein has improved the efficiency of its CUDA app (Windows only, so far), and it's now roughly comparable with SETI on one of my 9800GT cards (6,320 seconds for a BRP3 task, under Windows XP/32). Edit: forgot to say that it still uses quite a bit of CPU, though not nearly as much as the old ABP2 app.

skgiven
Volunteer moderator
Volunteer tester
Message 20143 - Posted: 8 Jan 2011 | 15:55:49 UTC - in response to Message 20129.
Last modified: 11 Jan 2011 | 20:30:33 UTC

The latest Beta driver (266.35) is no better for the GT240, but it allows some tasks to run on CC1.3 cards that previously failed.
Might be worth testing if you have a GTX570 or GTX580 - anyone?

skgiven
Volunteer moderator
Volunteer tester
Message 20151 - Posted: 11 Jan 2011 | 18:14:34 UTC - in response to Message 20143.
Last modified: 11 Jan 2011 | 20:10:51 UTC

A GIANNI_DHFR1000 task on a GT240 @ 1.4GHz (just over reference), W2003 x64, [email protected], 17.56 ms per step:

3550471 2232832 10 Jan 2011 23:40:55 UTC 11 Jan 2011 14:27:07 UTC Completed and validated 35,131.58 35,120.72 7,491.18 11,236.77 ACEMD2: GPU molecular dynamics v6.12 (cuda)

Good to see a few of these again.
More would go down well; at that rate a card could even reach a RAC of over 27K.

- Ran another one on a different GT240 (W7 x64, stock Q6600): 23.715 ms per step. It would still get over a 20K RAC, but I guess the difference is the CPU and supporting architecture. The i7 only uses one thread, but it's way faster (much more than 11%). The GT240 on the Q6600 system is also OC'd, to 1.59GHz.
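The per-step timings above can be turned into whole-task estimates; a rough sketch, assuming both GIANNI_DHFR1000 tasks run the same fixed number of steps (the post does not state the step count, so treat this as illustrative):

```python
# Timings copied from the post above.
run_time_s  = 35131.58   # completed task on the GT240 @ 1.4 GHz
ms_per_step = 17.56

steps = run_time_s / (ms_per_step / 1000.0)   # implied step count, ~2.0 million
print(f"implied steps: {steps:,.0f}")

slower_ms = 23.715                            # the other GT240 (Q6600 host)
est_hours = steps * slower_ms / 1000.0 / 3600.0
print(f"estimated runtime on the slower host: {est_hours:.1f} h")
```

That works out to roughly 13 hours on the slower host versus just under 10 on the faster one, which matches the observation that the supporting CPU and platform, not the GPU clock, explain most of the gap.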

