Message boards : Graphics cards (GPUs) : GT240 and Linux: niceness and overclocking
Hi guys!
ID: 20399 | Rating: 0 | rate: / Reply Quote | |
If you free up a CPU core, use swan_sync=0, and report tasks immediately, it should help a lot.
ID: 20400 | Rating: 0 | rate: / Reply Quote | |
> If you free up a CPU core, use swan_sync=0, and report tasks immediately it should help a lot.
First of all, thanks a lot for your support! :) Uhm... I think... No, I definitely do not want to waste 98% of a CPU thread (I have two cores without hyper-threading) if I can have the exact same GPU efficiency through a niceness adjustment (verified) while happily crunching two other CPU tasks that will be a tiny bit slower than usual.
I suspected and feared it. :( I'll keep on searching for a while, but I think I'm going to surrender.
Sure, but I've already put in /etc/rc.local the line:
renice -1 $(pidof acemd2_6.12_x86_64-pc-linux-gnu__cuda)
and in /etc/crontab the line:
*/5 * * * * root renice -1 $(pidof acemd2_6.12_x86_64-pc-linux-gnu__cuda) > /dev/null 2>&1
which is quite enough for me for now. Thanks again. :) Bye.
ID: 20402 | Rating: 0 | rate: / Reply Quote | |
Nope, that didn't work for me. I tried changing the niceness to -1 and then let rosetta@home run on all four cores of my i5 750, but rosetta@home effectively shut out the GPUGrid application (no meaningful work was being done by the GPU). This occurred even when the rosetta@home apps were running with a niceness of 19 and GPUGrid with a niceness of -1.
ID: 20404 | Rating: 0 | rate: / Reply Quote | |
> Nope, that didn't work for me. I tried changing the niceness to -1 and then let rosetta@home run on all four cores on my i5 750, but rosetta@home effectively shut out the GPUGrid application (no meaningful work was being done by the GPU). This occurred even when the rosetta@home apps were running with a niceness of 19 and GPUGrid running with a niceness of -1.
Sorry about that. But you're dealing with a GTX 570 (fine card!) and the 6.13 app, aren't you? Maybe that makes the difference. The niceness trick is actually working for me with boincsimap 5.10 and WCG (FightAIDS@Home 6.07 and Help Conquer Cancer 6.08) on the CPU side. You said Rosetta... It works for me with Rosetta Mini 2.17 too. However, my next try, probably tomorrow, will be to test the newest 270.18 NVIDIA driver and see what happens with the 6.13 GPUGrid app (someone is getting fine results even with a GT240 and 6.13). Bye.
ID: 20405 | Rating: 0 | rate: / Reply Quote | |
When I use swan_sync=0 and free up a CPU core, my GT240s now improve performance by around 7.5% (Phenom II 940, compared to running 4 CPU tasks and not using swan_sync). It used to be higher, but recent tasks seem less reliant on the CPU (the GPU task priority is set to below normal by the project, while the CPU task priority is lower; low). I'm using the 6.12 app. The 6.13 app is substantially slower for the GT240 cards, and while that might have changed, I doubt it. I have not tested the 270 driver, as I don't have a Linux platform, but partly because none of the 260.x drivers I previously tested offered any improvement for the 6.13 app, and some caused my cards to drop their speed. I would be very reluctant to install Linux just to test the 270.18 beta for a GT240, but let us know how you get on, should you choose to (I suggest you don't if you are unsure of how to uninstall it and revert to your present driver).
ID: 20406 | Rating: 0 | rate: / Reply Quote | |
To elucidate further, I am currently using Linux Mint 10 with kernel version 2.6.35-25-generic. My GTX 570 is using the latest stable Linux driver, version 260.19.36. And yes, I am using version 6.13 of the GPUGrid CUDA app. It certainly would be nice if the Rosetta app running on that fourth core would slow down a little just so that the GPUGrid app could get some CPU time in.
ID: 20408 | Rating: 0 | rate: / Reply Quote | |
If that were my system, I would be inclined to do some calculations.
ID: 20414 | Rating: 0 | rate: / Reply Quote | |
Hi, I change the priority of the GPU app through the "System Monitor" GUI in Ubuntu 10.10 each time a new task loads, i.e. every 5 to 8 hours (for my GTX295); it's no problem or nuisance.
ID: 20417 | Rating: 0 | rate: / Reply Quote | |
Found this possible workaround (not yet tested): http://www.nvnews.net/vbulletin/showthread.php?t=158620 Bye.
ID: 20418 | Rating: 0 | rate: / Reply Quote | |
> Hi, I seem to change the priority of the GPU through the GUI "System Monitor" in Ubuntu 10.10, each time you load a new task, ie between 5 to 8 hours (for my GTX295) is no problem or nuisance.
Same here, except it's every 28h ±5h, depending on the WU, on my GT240. All the stuff with swan_sync, freeing a core and such doesn't change anything on this machine; it's just a smokescreen to pretend giving clues. Changing the priority from 10 to 0, or even -3, increases the crunch speed big time; it's the only solution for my GT240 under Linux.
Fortunately Einstein now provides a reliable application for CUDA crunching under Linux that does worthy science as well, so I manually stop Einstein every other day in the evening, download a fresh GPUGrid WU, set it manually to -3 and let it crunch for the next ±28h, set GPUGrid to NNW asap, and set Einstein working again by hand once the GPUGrid WU is through. Unfortunately, sometimes Linux decides to set the nice value back to 10 during crunching. I don't know why or when, it looks unpredictable, and so I lose precious crunching time because of the stubbornness of the app not to do what I want.
I would very much appreciate a setting in my account or my BOINC (or my Ubuntu, if there is a more permanent way of doing so outside System Monitor) that would keep the app at the desired nice level.
____________
Gruesse vom Saenger
For questions about Boinc look in the BOINC-Wiki
ID: 20420 | Rating: 0 | rate: / Reply Quote | |
> I would very much appreciate a setting in my account or my BOINC (or my ubuntu, if there is a more permanent way of doing so outside System Monitor), that would keep the app at the desired nice level.
I have put in my /etc/crontab the line:
*/5 * * * * root renice -1 $(pidof acemd2_6.12_x86_64-pc-linux-gnu__cuda) > /dev/null 2>&1
So every 5 minutes a cron job renices to -1 the task named acemd2_6.12_x86_64-pc-linux-gnu__cuda (if it exists and if its niceness is different; otherwise it does nothing). Modify the line according to your app name (6.13). You'll probably find the proper name by executing (as root!):
# ls /var/lib/boinc-client/projects/www.gpugrid.net/ | grep ace
You can also choose another niceness, if -1 doesn't satisfy you. :) HTH. Bye.
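As a sketch of the two steps in the post above (find the acemd binary name, then build the crontab line for it), the following shell glue stages a throwaway directory standing in for the real project folder, since the binary name varies per machine (6.12 here, 6.13 for others). None of this is project-supplied; it is just an illustration of the renice idea.

```shell
#!/bin/sh
# Build the renice crontab line for whatever acemd binary is present.
# A throwaway directory stands in for the real project folder
# (/var/lib/boinc-client/projects/www.gpugrid.net on Ubuntu installs).
PROJ=$(mktemp -d)
touch "$PROJ/acemd2_6.13_x86_64-pc-linux-gnu__cuda31"   # stand-in binary
APP=$(ls "$PROJ" | grep '^acemd' | head -n 1)
LINE="*/5 * * * * root renice -1 \$(pidof $APP) > /dev/null 2>&1"
echo "$LINE"   # on a real system, append this line to /etc/crontab (as root)
rm -rf "$PROJ"
```

On a real system you would point PROJ at the actual project directory instead of the mktemp staging, and append the echoed line to /etc/crontab.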
ID: 20422 | Rating: 0 | rate: / Reply Quote | |
> I have put in my /etc/crontab the line: */5 * * * * root renice -1 $(pidof acemd2_6.12_x86_64-pc-linux-gnu__cuda) > /dev/null 2>&1
It seems to work fine, thanks a lot! I still have to switch manually between Einstein and GPUGrid, as otherwise I will not make the deadline here if BOINC switches between the apps, but that's nothing GPUGrid can do anything about (besides setting the deadline to the needed 48h); that's a BOINC problem.
____________
Gruesse vom Saenger
For questions about Boinc look in the BOINC-Wiki
ID: 20423 | Rating: 0 | rate: / Reply Quote | |
Saenger,
> All the stuff with swan_sync, freeing of a core or such doesn't change anything on this machine, it's just a smokescreen to pretend giving clues.
Yet another misleading and disrespectful message! Your 98.671 ms per step performance for a GIANNI_DHFR1000 is exceptionally poor for a GT240. Using the recommended config (even on Vista, over 11% slower than Linux), with swan_sync and the 6.12 app, I get 22.424 ms per step.
Lem Novantotto, thanks for the Linux tips. That hardware overclock method should work for your GT240, as it was tested on a GT220. Several users at GPUGrid have performed hardware OCs for Linux in the past. If it's any help, I tend to leave the core clock at stock, the voltage at stock, and only OC the shaders to around 1600 MHz, usually 1599 MHz (this is stable on the 6 GT240 cards I presently use). Others can OC to over 1640, but it depends on the GPU.
ID: 20424 | Rating: 0 | rate: / Reply Quote | |
> All the stuff with swan_sync, freeing of a core or such doesn't change anything on this machine, it's just a smokescreen to pretend giving clues.
Why do you ignore my messages? I'm using this stupid swan_sync thingy, no f***ing use for it. I've tried to "free a whole CPU" for it; the only effect was an idle CPU. So don't talk to me about misleading and disrespectful!
____________
Gruesse vom Saenger
For questions about Boinc look in the BOINC-Wiki
ID: 20425 | Rating: 0 | rate: / Reply Quote | |
...and yet there is no mention of swan_sync in your task result details.
ID: 20427 | Rating: 0 | rate: / Reply Quote | |
> ...and yet there is no mention of swan_sync in your task result details.
I don't know how this stupid swan_sync stuff is supposed to work; it's your invention, not mine. As I posted in this thread 66 days ago, and before that 82 days ago, and as I just tested again, my SWAN_SYNC is "0":
saenger@saenger-seiner-64:~$ echo $SWAN_SYNC
0
So if your precious swan_sync isn't working with my WUs, as you claim, it's not my fault.
____________
Gruesse vom Saenger
For questions about Boinc look in the BOINC-Wiki
ID: 20429 | Rating: 0 | rate: / Reply Quote | |
I'm beginning to suspect that the reason swan_sync isn't working is that the environment variable is associated with the wrong user. GPUGrid tasks don't run at the user level; rather, they run as the user boinc.
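That suspicion is easy to demonstrate: environment variables are per-process and inherited only at launch time, so a variable exported in one user's shell never reaches a daemon started as the boinc user. A minimal sketch using a throwaway variable (the commented /proc line assumes a running boinc-client and root rights, and is untested here):

```shell
#!/bin/sh
# A child process sees only the variables that were in its parent's
# environment when it was launched:
SWAN_SYNC_DEMO=0 sh -c 'echo "child with var:    [$SWAN_SYNC_DEMO]"'
sh -c 'echo "child without var: [$SWAN_SYNC_DEMO]"'
# To inspect what a *running* boinc process actually inherited
# (hypothetical; needs root and a running client):
#   tr '\0' '\n' < /proc/$(pidof boinc)/environ | grep SWAN
```

The first line prints `[0]`, the second prints `[]`: the daemon's environment is fixed by whatever launched it, not by your login shell.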
ID: 20430 | Rating: 0 | rate: / Reply Quote | |
Using Linux is not easy, especially if you use several versions. As you know, you can add export SWAN_SYNC=0 to your .bashrc file, but that is easier said than done, and depends on how/where you install BOINC. With the 10.10 versions it is especially difficult; when I tried, the repository only had the 260 driver, and some of the familiar commands did not work.
ID: 20431 | Rating: 0 | rate: / Reply Quote | |
I installed BOINC using the package manager. Rather than adding export SWAN_SYNC=0 to my .bashrc file, I added it to /etc/bash.bashrc instead. I changed it because, looking back at all of the tasks I have done, even though I set SWAN_SYNC in my .bashrc file, the SWAN synchronization message has never shown up in any of them. That tells me that the GPUGrid task is not picking up the environment variable set in .bashrc. Perhaps placing it in /etc/bash.bashrc will help.
ID: 20433 | Rating: 0 | rate: / Reply Quote | |
> As you know you can add export SWAN_SYNC=0 to your .bashrc file
That wouldn't work, skgiven. A good place to set an *environment* variable is /etc/environment.
$ sudo cp /etc/environment /etc/environment.backup
$ echo 'SWAN_SYNC=0' | sudo tee -a /etc/environment
*Next* time boinc runs something, it will do it... "zeroswansyncing". ;) It should, at least. Bye.
ID: 20434 | Rating: 0 | rate: / Reply Quote | |
Yes, my mistake. I'm working blind here (no Linux) and, as you can tell, Linux is not my forte. It's been months since I used any version. I found swan_sync fairly easy to use on Kubuntu 10.04 but struggled badly with Ubuntu 10.10. The commands are very different, even from Ubuntu 10.04; I had to use Nautilus to get anywhere and change lots of security settings. Your entries look close to what I used, but I would have to dig out a notebook to confirm.
ID: 20435 | Rating: 0 | rate: / Reply Quote | |
I still fail to grasp why this extremely nerdy stuff isn't simply put in the app, especially as GPUGrid worked nearly fine and smooth until the last change of app. It just had to acknowledge its use of a whole core, like Einstein does now, and everything would have been fine.
ID: 20436 | Rating: 0 | rate: / Reply Quote | |
GDF did ask that some such configurations be facilitated via BOINC Manager. I guess the differences between the various distributions would make it difficult.
ID: 20438 | Rating: 0 | rate: / Reply Quote | |
> While there are a few how to use Linux threads, there is not a sufficient how to optimize for Linux thread. If I get the time I will try to put one together, but such things are difficult when you are not a Linux guru.
Hi. The truth is that I am not very knowledgeable in Linux, but I'm using Ubuntu 10.10 (other versions before, for a year) and it works very well, with better performance than Windows 7. The current NVIDIA driver is 270.18 and my GTX295 works perfectly: it does not exceed 62 °C and the fan control works (from 40% to 65%). Well-ventilated box. As I said in another thread, just changing the process priority (10 to -10, or whatever suits me) allows extensive control, and I get good yields. For these types of jobs I have found it a much better choice than Windows. Greetings.
____________
http://stats.free-dc.org/cpidtagb.php?cpid=b4bdc04dfe39b1028b9c5d6fef3082b8&theme=9&cols=1
ID: 20442 | Rating: 0 | rate: / Reply Quote | |
I tried changing the niceness of the GPUGrid task to -10 (it defaulted to 19). Then I set BOINC to use 100% of the processors. I wanted to see if the priority change would allow Rosetta@Home and GPUGrid to share CPU time on the fourth core. It still seems like Rosetta@Home is being greedy with the CPU, causing GPUGrid to slow down drastically. The Rosetta@Home task on the fourth core was using 99-100% of that particular core. The kicker is that the niceness for Rosetta@Home tasks is set at 19! It really appears that swan_sync doesn't do anything at all. It certainly isn't showing up in the stderr section when I look at my completed tasks.
ID: 20443 | Rating: 0 | rate: / Reply Quote | |
> I tried changing the niceness of the GPUGrid task to -10 (defaulted to 19). Then I set BOINC to use 100% of the processors. I wanted to see if the priority change would allow Rosetta@Home and GPUGrid to share CPU time in the fourth core. It still seems like Rosetta@Home is being greedy with the CPU, causing GPUGrid to slow down drastically. The Rosetta@Home task in the fourth core was using 99-100% of that particular core. The kicker was that the niceness for Rosetta@Home tasks is set at 19! It really appears that swan_sync doesn't do anything at all. It certainly isn't showing up in the stderr section when I look at my completed tasks.
Kirby, I'm running the 6.12 app, so I cannot faithfully replicate your environment. Please open a terminal and run these commands:
1) top -u boinc
Would you please cut&paste the output? Looking at the rightmost column, you'll immediately identify the GPUGrid task. Read its "pid" (the leftmost value on its line). Let's call this number PID. Now press Q to exit top.
2) ps -p PID -o comm= && chrt -p PID && taskset -p PID
(changing PID to the number). Cut&paste this second output, too. Now repeat point 2 for a Rosetta task, and cut&paste once again. We'll be able to have a look at how things are going. Maybe we'll find something. Bye.
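For convenience, the checks from step 2 can be wrapped into one small script. Here the shell's own PID ($$) is only a stand-in for demonstration; substitute the acemd or Rosetta PID read off top -u boinc.

```shell
#!/bin/sh
# Inspect the niceness, scheduling policy and CPU affinity of a process.
# $$ (this shell) is a stand-in PID; substitute the one found via top.
PID=$$
ps -p "$PID" -o comm=,ni=   # command name and nice value
chrt -p "$PID"              # scheduling policy, e.g. SCHED_OTHER
taskset -p "$PID"           # CPU affinity mask (hex bitmask of cores)
```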
ID: 20451 | Rating: 0 | rate: / Reply Quote | |
acemd2_6.13_x86
pid 2993's current scheduling policy: SCHED_IDLE
pid 2993's current scheduling priority: 0
pid 2993's current affinity mask: f
minirosetta_2.1
pid 2181's current scheduling policy: SCHED_IDLE
pid 2181's current scheduling priority: 0
pid 2181's current affinity mask: f
As you can see, they're exactly the same. At this point, all four cores are being used, and GPUGrid has a niceness of -10.
EDIT: The percent completion is still incrementing on GPUGrid; it's just moving at a glacial pace. Normally, when I only have three cores working on CPU tasks, GPUGrid tasks take about 4.5 hours to finish. With four cores enabled, this is looking to take 2x-2.5x longer.
ID: 20454 | Rating: 0 | rate: / Reply Quote | |
Here is the problem! :) See my outputs with different tasks from different projects:
acemd2_6.12_x86
pid 15279's current scheduling policy: SCHED_OTHER
pid 15279's current scheduling priority: 0
pid 15279's current affinity mask: 3
wcg_faah_autodo
pid 29777's current scheduling policy: SCHED_BATCH
pid 29777's current scheduling priority: 0
pid 29777's current affinity mask: 3
simap_5.10_x86_
pid 15996's current scheduling policy: SCHED_BATCH
pid 15996's current scheduling priority: 0
pid 15996's current affinity mask: 3
minirosetta_2.1
pid 16527's current scheduling policy: SCHED_BATCH
pid 16527's current scheduling priority: 0
pid 16527's current affinity mask: 3
You see it, don't you? The problem is your SCHED_IDLE, mostly on the *gpugrid* app. Niceness is not priority itself: niceness is intended to affect priority (under certain circumstances). If you want to read something about priority:
$ man 2 sched_setscheduler
Try changing the scheduling policy of your *gpugrid* app to SCHED_OTHER:
$ sudo chrt --other -p 0 PID
(using its right PID - check with top). Remember that, if it works, you have to do it every time a new task begins (you could set up a cronjob to do it, as we've seen for niceness). Let me know. Bye.
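The effect of chrt is easy to try on a throwaway process before touching the BOINC apps. This sketch demotes a disposable background sleep to SCHED_IDLE and restores it; note that switching *away* from SCHED_IDLE may need root on older kernels, which is why the post above uses sudo.

```shell
#!/bin/sh
# Demonstrate switching scheduling policies with chrt, using a
# disposable `sleep` instead of the real acemd process.
sleep 30 & PID=$!
chrt -p "$PID"            # default policy: SCHED_OTHER
chrt --idle -p 0 "$PID"   # demote, mimicking the 6.13 app's state
chrt -p "$PID"            # now reports SCHED_IDLE
chrt --other -p 0 "$PID"  # the fix (may need root on older kernels)
chrt -p "$PID"            # back to SCHED_OTHER
kill "$PID"
```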
ID: 20455 | Rating: 0 | rate: / Reply Quote | |
It works! Now I can run four CPU tasks and a GPUGrid task at the same time! Thank you very much! This is much better than the swan_sync method that is often spoken of here.
ID: 20456 | Rating: 0 | rate: / Reply Quote | |
> It works!
I'm glad. :) You're welcome.
> Another thing: does this need to be in rc.local as well? Or would crontab suffice? Additionally, does the chrt command need the terminal output suppression thingy at the end in crontab? (... > /dev/null 2>&1)
Using a cronjob, we can forget about rc.local (even for the niceness thing). However, it doesn't hurt: rc.local is executed every time the runlevel changes, so basically at boot (and at shutdown). Our cronjob runs every 5 minutes, so without rc.local we lose at most 5 minutes (as we obviously lose at most five minutes every time a new task starts), which is not much with workunits that last many hours. But we can make it run more frequently, if we like. Every three minutes, for example. This entry takes care of both the scheduling policy and the niceness:
*/3 * * * * root chrt --other -p 0 $(pidof acemd_whatever_is_your_app) > /dev/null 2>&1 ; renice -1 $(pidof acemd_whatever_is_your_app) > /dev/null 2>&1
Bye.
P.S. The above works with no more than one gpugrid task being crunched at the same time. The renice part actually works with more, but the chrt part doesn't: you can renice many tasks at once, but you cannot change the scheduling policy of more than one task per invocation. Let's generalize for any number of simultaneous gpugrid tasks:
*/3 * * * * root for p in $(pidof acemd_whatever_is_your_app) ; do chrt --other -p 0 $p > /dev/null 2>&1 ; done ; renice -1 $(pidof acemd_whatever_is_your_app) > /dev/null 2>&1
ID: 20458 | Rating: 0 | rate: / Reply Quote | |
I've got something to compare:
ID: 20460 | Rating: 0 | rate: / Reply Quote | |
This time I got a Gianni, but I still have one of those with 7,491.18 credits in my list from before as well. Here's the data:
ID: 20462 | Rating: 0 | rate: / Reply Quote | |
You have to leave a CPU core free when using swan_sync, otherwise it's not going to be faster for the GPU.
ID: 20463 | Rating: 0 | rate: / Reply Quote | |
There is no getting away from the fact that the 6.13 app is slower for GT240 cards, which is why I use the 6.12 app, albeit driver dependent. Yesterday I decided to give the 270.18 driver a try. The CUDA workunit I had in cache went like a charm with the good old 6.12 app; then a cuda31 workunit was downloaded, and it was a no-go (even though *both* apps, the former and the latter, almost saturated the GPU - the new driver can show this kind of info - and took their right slice of CPU time). In the end, I had to go back to 195.36.
The problem - if we can call it a problem - is that every time BOINC asks for new work, it first uses a function to retrieve CPU and GPU specs on the fly, which seems appropriate. These specs are part of the request (they can be read in /var/lib/boinc-client/sched_request_www.gpugrid.net.xml). Among these specs there is the cudaVersion, which is "3000" with older drivers and "4000" with newer ones. I'm pretty sure the GPUGrid server sends back a cuda31 WU (and the 6.13 app if needed) if it reads 4000, and a cuda WU (6.12) otherwise. Since the specs aren't stored in a file, but rather retrieved from the driver on the fly, feigning a 3000 cudaVersion is not so easy. You would have to modify the BOINC sources, and then recompile, to hide newer drivers' responses. Sorry for possible mistakes and for my overwhelmingly bad English, I'm a bit tired today. Goodnight (it's midnight here). :)
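On a machine running the client, the value actually sent can be read back from that saved request file. The sketch below shows the grep (path as quoted above) and, against a sample line, how the number would be pulled out; the exact tag shape is assumed from the post, not verified against the BOINC sources.

```shell
#!/bin/sh
# On a live client the last reported value is visible with:
#   grep -i cudaversion /var/lib/boinc-client/sched_request_www.gpugrid.net.xml
# Pulling the number out of a line of that shape (sample line used here):
line='   <cudaVersion>4000</cudaVersion>'
ver=$(printf '%s' "$line" | sed 's/[^0-9]//g')
echo "reported cudaVersion: $ver"
```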
ID: 20464 | Rating: 0 | rate: / Reply Quote | |
> You have to leave a CPU core free when using swan_sync, otherwise its not going to be faster for the GPU.
This answer is totally b***s***. It took a whole core in my first comparison example, and it was extremely slow.
> There is no getting away from the fact that the 6.13app is slower for GT240 cards, which is why I use the 6.12app, albeit driver dependent.
Your drivers are for Einstein, not GPUGrid. The project team is giving me those WUs although it knows my setup. So it's their fault, and only their fault, to give 6.13 to a GT240 instead of 6.12. They know my card, they actively decided to give me 6.13, so they are saying it's better for my machine. If they are too stupid to give me the right app, it's because of their lack of interest, not mine. As I said before: they only care about people who buy a new card for several hundred €/$ every few months.
____________
Gruesse vom Saenger
For questions about Boinc look in the BOINC-Wiki
ID: 20465 | Rating: 0 | rate: / Reply Quote | |
I have to agree with Saenger on this one, which is a pretty rare thing. I have noticed no difference with swan_sync + a free core. This is on a Win7 machine with a 580, which is a different setup from this thread's subject, but my impression of these tweaks that are supposed to speed things up is similar.
ID: 20466 | Rating: 0 | rate: / Reply Quote | |
Yep, I agree. There is no difference whatsoever with swan_sync on or off for my GTX 570. It will still run for about 4.5-5 hours.
ID: 20467 | Rating: 0 | rate: / Reply Quote | |
Did any of you restart (even just the X server, not so handy in 10.10) after adding swan_sync=0? If you didn't, that would explain your observations.
> There is no difference whatsoever with swan_sync on or off for my GTX 570. It will still run for about 4.5-5 hours.
You don't have swan_sync enabled, and none of your tasks as far back as 5th Feb have actually used swan_sync!
The ACEMD2 queue is at 70 WUs available. I don't know why, but perhaps they are letting them run down so they can start to use ACEMDLONG and ACEMD2 tasks, &/or they need to remove tasks in batches from the server. Thanks for the warning; I will keep an eye out, and if I think I will run dry on my Fermis I will allow MW tasks again.
ID: 20470 | Rating: 0 | rate: / Reply Quote | |
The software showed the card was running at max clock, maximum performance, 95% GPU occupation. And it was running as hot as with the 195 driver. So I think we can regard that as a fact. The reason for the degraded performance must lie elsewhere.
Yep. The 270 driver reports cudaVersion 4000: since 4000 >= 3010, the GPUGrid server sends a cuda31 WU (which will be run by the 6.13 app).
I tried it just for the sake of curiosity, and indeed it was really too slow: a no-go. Bye.
ID: 20471 | Rating: 0 | rate: / Reply Quote | |
My card is running at stock speed.
ID: 20474 | Rating: 0 | rate: / Reply Quote | |
> You don't have swan_sync enabled and none of your tasks to as far back as 5th Feb have actually used swan_sync!
Since February 5, I have had SWAN_SYNC=0 in my .bashrc file. Then, after reading a bit of Lem's post, I tried moving it to /etc/profile. That didn't work, so I put it in /etc/environment. It still doesn't show up in the workunit logs. And yes, I did reboot - I can't play any of my favorite games unless I switch over to Windows.
That said, manually manipulating the Linux CPU scheduler did the trick. It allowed the fourth core in my i5-750 to run both a CPU task and GPUGrid at the same time. GPUGrid still took the same amount of time to finish as before. True, the CPU task runs a tiny bit slower because it has to share CPU time with GPUGrid, but at least every core is running at 100%.
As for the shortage of workunits, I have DNETC as a backup for my GPU. Unlike GPUGrid (at least at present), DNETC runs my GPU at 100% load.
ID: 20477 | Rating: 0 | rate: / Reply Quote | |
I previously tried to enable swan_sync with similar success to others in this thread. Even renice -1 had no effect. I decided to try it again, and after adding it to /etc/rc.d/rc.local, /etc/bashrc, /etc/profile and /etc/environment I still could not get it to work. Then I added
export SWAN_SYNC=0
to the boinc-client init script, and it worked. It gave my GT240 about a 20% improvement. I don't know about other distros, but that's what it took on my Fedora 14 x86_64 system to get swan_sync enabled.
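One place the export can live so that the daemon (and thus every acemd child) inherits it is the script that starts the client. A hypothetical excerpt of a Red Hat-style /etc/init.d/boinc-client follows; the path, the daemon helper, and the variable names BOINCEXE/BOINCOPTS are illustrative and differ between distros:

```shell
# /etc/init.d/boinc-client (hypothetical excerpt -- adapt to your distro).
# Export before the daemon is launched so every child inherits it:
export SWAN_SYNC=0

start() {
    # ...existing startup lines unchanged...
    daemon --user boinc "$BOINCEXE" "$BOINCOPTS"
}
```

This sidesteps PAM entirely, which is presumably why it worked where /etc/environment did not.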
ID: 20506 | Rating: 0 | rate: / Reply Quote | |
> I previously tried to enable swan_sync with similar success to others in this thread. Even renice -1 had no effect. I decided to try it again and after adding it to /etc/rc.d/rc.local, /etc/bashrc, /etc/profile and /etc/environment I still could not get it to work.
Greg, your solution works flawlessly. Well done! Since I don't know Fedora, could you please help me understand? I suspect that Fedora's PAM configuration is slightly different from Ubuntu's. I guess that running:
$ grep -r "pam_env.so" /etc/pam.d
you'll get some output containing "readenv=0". Is that true? Thanks. Bye.
ID: 20513 | Rating: 0 | rate: / Reply Quote | |
> Your solution works flawlessly. Well done!
Glad it worked for you. I ran the grep command and there were no entries on my Fedora system with readenv. I assume that implies readenv=0. Out of curiosity, since I haven't done a lot of monkeying around with PAM, I ran the same command in my Ubuntu install (running in VMware) and it shows a number of entries with readenv=1.
Greg
ID: 20515 | Rating: 0 | rate: / Reply Quote | |
Ah, no, I do not actually use swan_sync. But I'm happy you found a way to have it working on your system, despite my advice about /etc/environment not helping you. Here I have no need to save a whole CPU core for GPUGrid: I get exactly the same performance without wasting CPU power. Having just 2 cores, both are precious to me. :)
Yep. However, I'm pretty sure readenv=1 is the PAM default (readenv=1 tells pam_env.so to read /etc/environment). Lacking any readenv=0, I do not understand why your /etc/environment isn't read. I may be wrong about the default PAM behaviour, of course. I'll try a little bit harder. Bye, and thank you again. :)
ID: 20520 | Rating: 0 | rate: / Reply Quote | |
> However I'm pretty sure readenv=1 is the PAM default (readenv=1 tells pam_env.so to read /etc/environment).
I've just tried on a live virtualized Fedora 14, and /etc/environment is actually read. I don't know why on your system it is not. Sorry, I have to give up. Bye.
ID: 20527 | Rating: 0 | rate: / Reply Quote | |
> However I'm pretty sure readenv=1 is the PAM default (readenv=1 tells pam_env.so to read /etc/environment).
I don't know what to tell you. I set up SWAN_SYNC in /etc/environment. I was able to open a command window and echo the SWAN_SYNC value, no problem. So the default readenv=1 behaviour is there. However, it had no impact on the behaviour of the acemd2 app. Only when I added the "export SWAN_SYNC=0" command to the boinc-client init script did acemd2 enable SWAN_SYNC.
ID: 20542 | Rating: 0 | rate: / Reply Quote | |
My 4x580 Ubuntu rig has been working fine for some time using SWAN_SYNC=0 in /etc/environment. Yesterday I had to relocate it, but after booting up I noticed it was no longer using full cores. I typed env in the terminal and it showed SWAN_SYNC=0 was listed. However, upon checking /etc/environment, SWAN_SYNC was missing. I added it back and now it's OK again.
ID: 20546 | Rating: 0 | rate: / Reply Quote | |
After the discussion here, I've noticed that there are some differences in the way the two most common distro families, Ubuntu and Fedora/Red Hat, process the environment.
ID: 20547 | Rating: 0 | rate: / Reply Quote | |
Hello: Well, I have my Ubuntu 10.10 64-bit configured with SWAN_SYNC in /etc/environment, working perfectly.
ID: 20548 | Rating: 0 | rate: / Reply Quote | |
I think I'll bugger this swan_sync rubbish and stick to the priority alone. It's simple, it works, and it's far more effective than wasting precious CPU time for a slow-down. As Einstein had some difficulties with WU creation this week, I crunched some new WUs, this time with the new app 6.14. Unfortunately I forgot to delete the swan_sync rubbish beforehand, and so I wasted precious resources with this idiotic stuff again.
93-KASHIF_HIVPR_GS_so_ba1-8-100-RND2726_1: 96,510.67 s run time, 1,336.56 s CPU time, 12,822.18 claimed / 16,027.72 granted credit
98-KASHIF_HIVPR_cut_ba1-45-100-RND5086_1: 95,885.94 s run time, 74,305.64 s CPU time, 5,929.17 claimed / 7,411.47 granted credit
The upper one was without swan_sync, the lower one with it. Niceness was 19 on the lower one and -5 on the upper one. You can clearly see that both took about the same amount of wall-clock time, but without swan_sync the CPU time was vastly less. You can also see that the upper one did more than double the scientific work of the lower one, as the credits, granted as a fixed amount by the project, are more than double for the upper one.
To sum it up: swan_sync on a GT240 is a giant waste of resources. Fanboys who proclaim otherwise are either dumb or liars. Niceness is crucial; swan_sync is detrimental.
____________
Gruesse vom Saenger
For questions about Boinc look in the BOINC-Wiki
ID: 21747 | Rating: 0 | rate: / Reply Quote | |
> To sum it up:
No wonder - SWAN_SYNC was introduced to handle the low GPU usage of Fermi-based cards.
ID: 21755 | Rating: 0 | rate: / Reply Quote | |
Saenger, you are comparing two different task types, using different niceness settings, and concluding that swan_sync makes the tasks slower, despite knowing that SWAN_SYNC would need to be used alongside a free core even if it did make much difference for that GPU type. Starving the GPUGrid task of CPU time would obviously make it less efficient.
ID: 21763 | Rating: 0 | rate: / Reply Quote | |
> Sanger, you are comparing two different task types, using different niceness settings and concluding that swan_sync makes the tasks slower, despite knowing that SWAN_SYNC would need to be used alongside a free core even if it did make much difference for that GPU type. Starving the GPUGrid task of any CPU time would obviously make it less efficient.
The app is called ACEMD2: GPU molecular dynamics v6.14 (cuda31) for Linux, as can easily be seen on the apps page; the old app was called ACEMD2: GPU molecular dynamics v6.13 (cuda31). As I said before in other posts, it uses a whole CPU if swan_sync is set to 0 and nice stays at -10 as delivered by the project. The use of more CPU power is detrimental to the wall-clock time of the WU, as I have shown several times.
If a WU gets the same amount of credit granted as another, it's about the same size as the other. Credits are determined by the project, unrelated to anything on my machine. If a WU gets more credits than another, it has done more scientific work. I've crunched 2 WUs of the same type (KASHIF_HIVPR); both took the same amount of time, but one used the CPU as well for the whole time, the other nearly not. The one that barely used the CPU did far more than double the scientific work in that time, as is shown by the more than doubled credits. Obviously, starving the GPUGrid task of CPU time makes it really, really much more efficient.
Edith says: As my thanks were gone as well, here they are again (to Message 21755):
> To sum it up:
Thanks for this very useful information. I would have appreciated this coming from the project people; I don't know why they said otherwise.
____________
Gruesse vom Saenger
For questions about Boinc look in the BOINC-Wiki
ID: 21766 | Rating: 0 | rate: / Reply Quote | |
To sum it up: I will quote myself from the distant past (321 days ago): "If a CPU core is allocated to the Fermi GPU it significantly increases the GPU speed, especially if you use SWAN_SYNC=0. This is the recommended configuration for Fermi users, especially GF100 cards". I expect SWAN_SYNC would make little or no difference for a 9800GT, or any other CC1.1 card; it is mainly for Fermis (317 days ago). "The Windows-only optional variable SWAN_SYNC=0 is for Fermis and does not have to be used along with one free core, but it usually helps a lot. It will make little or no difference to the performance of a 9800GT. There is little need to leave a CPU core free, unless you have 3 or 4 such cards in the same system, at which point your CPU performance for CPU-only tasks will degenerate to the point that you might as well free up a CPU core. On a high-end Fermi it is still optional but generally recommended to use both SWAN_SYNC=0 and to leave a core/thread free; the performance difference is quite noticeable." (266 days ago) "For a GTX260 this is not the situation; you don’t need to free up a CPU core or use SWAN_SYNC=0" (251 days ago). It's worth remembering that there have been 5 different apps since SWAN_SYNC was originally used only for Linux (within the app by default). At different times, SWAN_SYNC either did or didn't make a difference on Linux or Windows, if set up correctly, for different cards. Whatever situation you or anyone else previously found yourselves in is no longer relevant; we are onto 6.14 and 6.15. I have not tried any GPU under Linux with the present 6.14 app, and I found that my GT240s kept downclocking with the latest Windows drivers, so I pulled those cards. I think both the Windows and Linux apps were named 6.14 and released on the 12th Jun 2011, with only the Windows version being subsequently updated to 6.15 (on the 6th Jul); thread priority being configurable in the Windows app but not the Linux app. | |
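The "leave a core free" recommendation above can be done without touching any project app. One way (a sketch, using BOINC's standard local-preferences override; the 75% figure assumes a quad-core CPU, and the file must be copied into your BOINC data directory, e.g. /var/lib/boinc-client on many Linux installs):

```shell
# Cap BOINC at 75% of CPUs so one core of four stays free for the GPU app.
# Written locally here; copy it into the BOINC data directory afterwards.
cat > global_prefs_override.xml <<'EOF'
<global_preferences>
    <max_ncpus_pct>75.0</max_ncpus_pct>
</global_preferences>
EOF
# After copying into place, tell the running client to pick it up:
#   boinccmd --read_global_prefs_override
```

Adjust the percentage to your core count (87.5% frees one thread of eight, and so on).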
ID: 21770 | Rating: 0 | rate: / Reply Quote | |
I will quote myself from the distant past (321 days ago), OK, that was the thread where gdf tried to force the swan_sync stuff on me, starting here. The thread after the fiasco with the new rubbish 6.12 app. I expect SWAN_SYNC would make little or no difference for a 9800GT, or any other CC1.1 card; it is mainly for Fermis (317 days ago). I never looked in that thread; the title didn't suggest anything helpful for my problem. It's about hardware, while my problem is software. "The Windows-only optional variable SWAN_SYNC=0 is for Fermis and does not have to be used along with one free core, but it usually helps a lot. It will make little or no difference to the performance of a 9800GT. There is little need to leave a CPU core free, unless you have 3 or 4 such cards in the same system, at which point your CPU performance for CPU-only tasks will degenerate to the point that you might as well free up a CPU core. On a high-end Fermi it is still optional but generally recommended to use both SWAN_SYNC=0 and to leave a core/thread free; the performance difference is quite noticeable." (266 days ago) The title "Lots of failures" doesn't sound like something for my problem; why should I look it up there? And if it was "Windows only" in this thread, why did you try so hard to make me use it under Linux in the other one? "For a GTX260 this is not the situation; you don’t need to free up a CPU core or use SWAN_SYNC=0" (251 days ago) Thread title "Fermi"; why should I look in there? It's worth remembering that there have been 5 different apps since SWAN_SYNC was originally used only for Linux (within the app by default). At different times, SWAN_SYNC either did or didn't make a difference on Linux or Windows, if set up correctly, for different cards. Whatever situation you or anyone else previously found yourselves in is no longer relevant; we are onto 6.14 and 6.15. 
I have not tried any GPU under Linux with the present 6.14 app, and I found that my GT240s kept downclocking with the latest Windows drivers, so I pulled those cards. That's absolutely irrelevant insider talk to me. These are apps delivered by the project: they worked fine with the name 6.05 attached, went unusable with 6.12 attached, and then all you project people tried to
a) make me use the newest drivers, otherwise I would have problems
b) make me use swan_sync=0, otherwise I would have problems²
c) make me use older drivers, otherwise I would have problems
² The ways to make swan_sync=0 work were told to me by the project people in several different versions; none was helpful. I finally got it working with Leo's help, and that showed its use was complete rubbish. ____________ Greetings from Saenger For questions about BOINC look in the BOINC-Wiki | |
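Part of the confusion in footnote ² is that BOINC science apps inherit their environment from the client process, so exporting SWAN_SYNC in an interactive shell after the client is already running does nothing. A minimal sketch, assuming you start the client yourself from a shell or init script (the data-dir path is the common Linux default and may differ on your install):

```shell
# SWAN_SYNC must be in the environment of whatever launches the BOINC
# client; the science apps inherit it from there.
export SWAN_SYNC=0
# boinc --dir /var/lib/boinc-client &   # start the client from this shell

# Sanity check: any child process started from here sees the variable.
sh -c 'echo "child sees SWAN_SYNC=$SWAN_SYNC"'
# prints: child sees SWAN_SYNC=0
```

If the client is started by a distro init script instead, the export has to go into that script's environment, not your login shell.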
ID: 21771 | Rating: 0 | rate: / Reply Quote | |
I think both Windows and Linux apps were named 6.14 and released on the 12th Jun 2011, with only the Windows version being subsequently updated to 6.15 (on the 6th Jul); thread priority being configurable in the Windows app but not the Linux app. How exactly can I configure the thread priority in the Windows app? I can't recall doing this, and I can't find any messages about it. Perhaps you mean that thread priority is configurable by the programmers? | |
ID: 21773 | Rating: 0 | rate: / Reply Quote | |
"thread priority being configurable in the Windows app" | |
ID: 21789 | Rating: 0 | rate: / Reply Quote | |