
Message boards : Graphics cards (GPUs) : Nvidia GTX 260 - Overclocking in Linux

Author Message
RalphEllis
Joined: 11 Dec 08
Posts: 43
Credit: 2,216,617
RAC: 0
Message 11647 - Posted: 4 Aug 2009 | 4:58:51 UTC

Since the Gpugrid staff have worked around the Nvidia Linux driver issues with versions 185 and 190, I have started running Gpugrid under Suse 11.2 64-bit and Sabayon 4.2 64-bit. It is possible to do some mild overclocking of the card to speed up the production of results.
After you have installed the Nvidia drivers either through the repositories or through the Nvidia website, you will need to install nvclock-gtk and nvidia-settings. You will also need to enable the Coolbits option in your xorg.conf file.
Add
Option "Coolbits" "1"
to the Screen section of the xorg.conf file.
See
http://www.headshotgamer.com/review.aspx?id=95
for a more detailed run down on this.
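For reference, the relevant part of xorg.conf would look something like this (the Identifier, Device and Monitor names are just examples; yours will match whatever nvidia-xconfig generated):

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth   24
    Option         "Coolbits" "1"
EndSection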
After the Nvidia drivers are installed and Coolbits are enabled in your xorg.conf file, you will need to reboot.
After reboot, start nvclock-gtk either through the menu or by typing
nvclock_gtk
in a terminal.
Enable fan speed adjustments and set the fan to 100%.
Close nvclock-gtk and start up nvidia-settings either by the menu or by typing nvidia-settings in a terminal.
In the Clock Frequencies section, enable Overclocking and choose the 3D Clock Frequencies. The settings that I use are GPU 734 MHz and Memory 1235 MHz. I have seen faster settings, but when I go beyond this I start to see computation errors. The numbers may vary for your individual card, and there are variations even among cards from the same manufacturer. I am using an EVGA GTX 260 Superclocked. The stock settings on this card are GPU 620 MHz and Memory 1026 MHz.
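If you would rather script this than click through the GUI, the same thing can be done from a terminal on these Coolbits-era drivers. The attribute names below are the ones nvidia-settings exposes when Coolbits is enabled; check "nvidia-settings --query all" on your own driver version before relying on them, and substitute your own frequencies:

nvidia-settings -a GPUOverclockingState=1 \
                -a GPU3DClockFreqs=734,1235

This must run inside the X session (it talks to the running X server), which is why the nvclock approach below is handier for boot scripts.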
I also run BOINC mainly when I am using the IceWm window manager since it uses less graphical resources than Gnome and KDE 4. You can run Gpugrid under Gnome and KDE 4 but I like to have less overhead when I am running Gpugrid on the GPU and World Community Grid on the CPU.
I have also tried overclocking the EVGA Nvidia GTX 260 under Windows XP Professional x64 using the EVGA Precision utility and EVGA Voltage utility. These utilities work well, but if I overclock the card beyond 660 MHz in Windows, I start to get some instability in both Gpugrid and Folding@home. Overall, Linux is a more stable platform for me to do GPU crunching.
Good luck in your experimentation. Remember, when in doubt, go for more conservative settings. You lose more points in computation errors by pushing frequencies too hard than by living with a milder overclock.

Skip Da Shu
Joined: 13 Jul 09
Posts: 63
Credit: 2,350,645,165
RAC: 11,163,616
Message 11652 - Posted: 4 Aug 2009 | 10:35:09 UTC - in response to Message 11647.
Last modified: 4 Aug 2009 | 10:40:28 UTC


Add
Option "Coolbits" "1"
to the Screen section of the xorg.conf file.
See
http://www.headshotgamer.com/review.aspx?id=95
for a more detailed run down on this.
After the Nvidia drivers are installed and Coolbits are enabled in your xorg.conf file, you will need to reboot.
After reboot, start nvclock-gtk either through the menu or by typing
nvclock_gtk
in a terminal.
Enable fan speed adjustments and set the fan to 100%.
Close nvclock-gtk and start up nvidia-settings either by the menu or by typing nvidia-settings in a terminal.
In the Clock Frequencies section, enable Overclocking and choose the 3D Clock Frequencies. The settings that I use are GPU 734 MHz and Memory 1235 MHz. I have seen faster settings, but when I go beyond this I start to see computation errors. The numbers may vary for your individual card, and there are variations even among cards from the same manufacturer. I am using an EVGA GTX 260 Superclocked. The stock settings on this card are GPU 620 MHz and Memory 1026 MHz.
... deleted ...
Overall, Linux is a more stable platform for me to do Gpu crunching.
Good luck in your experimentation. Remember, when in doubt, go for more conservative settings. You lose more points in computation errors by pushing frequencies too hard than by living with a milder overclock.


Good Job!

The
Option "Coolbits" "1"
can also be placed in the device section (and seems more logical there, to me at least) but functionally it makes no difference.

Ubuntu 64b v9.04:
Another option is to use the command-line 'nvclock' (in the Debian/Ubuntu repository) in the /etc/init.d/boinc-client script, so that it sets the GPU overclock just prior to starting boinc-client and then resets it back to lower settings right after stopping.

start()
{
    log_begin_msg "Starting $DESC: $NAME"
    if is_running; then
        log_progress_msg "already running"
    else
        nvclock -n 700 -m 1250 -f
        start-stop-daemon --start --quiet --background --pidfile $PIDFILE \
            --make-pidfile --user $BOINC_USER --chuid $BOINC_USER \
            --chdir $BOINC_DIR --exec $BOINC_CLIENT -- $BOINC_OPTS
    fi
}


And
stop()
{
    log_begin_msg "Stopping $DESC: $NAME"
    if ! is_running; then
        log_progress_msg "not running"
    else
        start-stop-daemon --stop --quiet --oknodo --pidfile $PIDFILE \
            --user $BOINC_USER --exec $BOINC_CLIENT
        nvclock -n 400 -m 700 -f
    fi
}


In a terminal, "sudo nvclock -s" will also report your actual current core and memory clock rates (which might be a bit off from what you set them to, e.g. 702 on a setting of 705).
____________
- da shu @ HeliOS,
"A child's exposure to technology should never be predicated on an ability to afford it."

RalphEllis
Joined: 11 Dec 08
Posts: 43
Credit: 2,216,617
RAC: 0
Message 11662 - Posted: 5 Aug 2009 | 3:10:08 UTC - in response to Message 11652.

The nvclock command line version certainly will do everything that you will need. I just like the graphical interface for playing with the settings. Putting it in the start up script is a very good way to automate the process for setting fan speeds and overclocking once you have found the settings that you want to use.

CTAPbIi
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Message 12270 - Posted: 2 Sep 2009 | 13:24:44 UTC - in response to Message 11647.

Why does nvclock show the core frequency but not mention the shader frequency?

Maybe it's a crude way, but I did it like this.

In parallel with Ubuntu 9.04 I've got Windows (to run COS WaW, NFS, Race Driver and other "necessary" stuff, ha-ha). Then, using RivaTuner, I overclocked my GTX 275 SC (factory overclocked to 6xx/1488/10xx; I do not remember the exact values) up to 702/1584/1260. I verified these frequencies to be 100% stable (OCCT 3.1 GPU Linpack plus some other stability tests), and I also ran F@H on the GPU for some time.

Once I was sure of the frequencies, I grabbed the BIOS using GPU-Z, modified the frequencies and voltage with NiBiTor, and reflashed the BIOS using nvflash. As of now, my card works just fine in Linux.
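The nvflash part boils down to just two commands. Option names vary between nvflash versions, so treat this as a sketch and check nvflash's own help first; the file names here are only examples, and you should keep the backup somewhere safe in case you need to flash back:

nvflash --save original.rom
(edit a copy of original.rom in NiBiTor and save it as modified.rom)
nvflash modified.rom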

If you need it, I can make a small FAQ, even with pictures.

RalphEllis
Joined: 11 Dec 08
Posts: 43
Credit: 2,216,617
RAC: 0
Message 12373 - Posted: 5 Sep 2009 | 1:28:54 UTC - in response to Message 12270.

A "how to" might be very useful for other people who are looking at overclocking in Linux. Every situation is different but your experience can be a useful guide to some of our creative experimenters out there.

RalphEllis
Joined: 11 Dec 08
Posts: 43
Credit: 2,216,617
RAC: 0
Message 12452 - Posted: 11 Sep 2009 | 8:29:32 UTC

Just as an additional comment, running Gpugrid and overclocking is much more stable for me with the IceWM window manager. I recently tried running Gpugrid under Gnome and KDE 4 and, while it would run under Gnome at stock clock speeds, any overclocking would cause significant stability issues.
With IceWM, Gpugrid starts up easily and overclocks with no problems.

Skip Da Shu
Joined: 13 Jul 09
Posts: 63
Credit: 2,350,645,165
RAC: 11,163,616
Message 12970 - Posted: 2 Oct 2009 | 0:33:42 UTC

IceWM sounds interesting, but as of right now it seems I have something called the "FFT" bug, which is particular to GTX 260s once we got to the 185.xx and above drivers (CUDA 2.2 and CUDA 2.3).

Do you know if anyone has found any way around this?

Are the current GTX-275 cards immune?
____________
- da shu @ HeliOS,
"A child's exposure to technology should never be predicated on an ability to afford it."

