World Community Grid Forums
Thread Status: Active | Total posts in this thread: 5
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I was seeing sluggish frame rates and high response times when hcc1 tasks were running on my primary display GPU, even just navigating the Windows GUI. Another user suggested using the exclude_gpu element in the cc_config.xml file to limit crunching to a different GPU. I have two GTX 460s, so I gave it a go.
Here is my cc_config.xml: <cc_config>

This solved the framerate issues, even though it sacrificed roughly half my crunching capacity. But I noticed that the fan speed of my primary display card -- the excluded GPU0 -- was higher than it should be for a practically idle card. I checked the cards with NVIDIA Inspector's graph and saw that the clocks, voltages, and temperature of GPU0 stayed high while GPU1 was crunching -- even though GPU0 was idle. I changed cc_config.xml to exclude device 1 instead of 0, and the idle card still heated up while the other card crunched hcc1. Clocks and voltages returned to normal idle levels when GPU activity was suspended.

Here's an imgur album of NVIDIA Inspector graphs comparing GPU clocks, voltages, and temps while crunching, while suspended, and during normal, non-BOINC usage.

I have no idea what the problem is, but I don't like that hcc1 is consuming power on an idle card.

Windows 7 64-bit
GeForce driver version 306.97 WHQL
BOINC 7.0.36

In case it matters, the cards are not in SLI and not connected by an SLI connector.
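(The body of the config file didn't survive the forum formatting above; a minimal cc_config.xml using BOINC's documented exclude_gpu option would look something like the sketch below. The project URL and device number are illustrative, not necessarily my exact values.)

```xml
<cc_config>
  <options>
    <!-- Keep GPU 0 (the primary display card) out of GPU crunching
         for this project; BOINC numbers devices starting at 0. -->
    <exclude_gpu>
      <url>http://www.worldcommunitygrid.org/</url>
      <device_num>0</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```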
nanoprobe
Master Cruncher | Classified | Joined: Aug 29, 2008 | Post Count: 2998 | Status: Offline
I believe your <use_all_gpus>1</use_all_gpus> is the problem. If you just want to crunch on one GPU and use the other for your monitor, delete that line from your config file. You're telling your system to use all GPUs and to exclude one at the same time.
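To illustrate the conflict (option names per BOINC's documented cc_config.xml format; the URL and device number are just examples), this is the combination being described:

```xml
<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>  <!-- tells BOINC to use every GPU... -->
    <exclude_gpu>                   <!-- ...while this excludes one of them -->
      <url>http://www.worldcommunitygrid.org/</url>
      <device_num>0</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```

Dropping the <use_all_gpus> line would leave only the exclusion in effect.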
----------------------------------------
In 1969 I took an oath to defend and protect the U S Constitution against all enemies, both foreign and Domestic. There was no expiration date.
----------------------------------------
[Edit 1 times, last edit by nanoprobe at Oct 18, 2012 12:14:05 AM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Good thought. <use_all_gpus> is a leftover from when I was trying to get them both to work.
----------------------------------------
Unfortunately, removing the line doesn't seem to have made a difference. Whether I exclude device 0 or 1, when one GPU is crunching, the other still doesn't idle properly.
[Edit 1 times, last edit by Former Member at Oct 18, 2012 1:41:06 AM]
JacobKlein
Cruncher | Joined: Aug 21, 2007 | Post Count: 21 | Status: Offline
Based on your graphs, here's what I see:
- When using BOINC, the driver appears to lock both cards to maximum performance while a CUDA task is processing: GPU clock, shader clock, and voltage are the same for both cards, even though only one is doing work. This may just be how the driver works, or it may be configurable. Have you tried looking in "Manage 3D settings" at the options "Power management mode" and "CUDA - GPUs"? Both can be set, but I don't know their effects.
- Your GPU 0 doesn't appear to dissipate as much heat as you'd like, but I'm betting it sits closer to the CPU than GPU 1, which may explain it. Also, 50-65 °C is not horrible for a card caught between CPU heat and other-GPU heat. I have 3 GPUs on top of a quad-core processor, and when everything is under load, the GPU temps are 74 °C, 78 °C, and 88 °C. Fun.

Do you crunch CPU tasks? It really just seems like your GPU 0 is near hotter components than your GPU 1, which can cool more efficiently.
dskagcommunity
Senior Cruncher | Austria | Joined: May 10, 2011 | Post Count: 219 | Status: Offline
I'll just add something about heat in a system with more than one GPU: GPU 0 is always heated by GPU 1, since heat rises in a normal vertical case. You can mount extremely fast coolers behind them to minimize the effect, but the faster they spin, the louder they get. Dual- and triple-GPU setups need a very good cooling system. All of my systems have open cases, so I can blow air from the side between the two cards with a normal fan at normal noise levels, which reduces the effect a little. Everything else on the motherboard above the cards can heat GPU 0 too, but that only matters when a closed system doesn't have enough overall cooling.
--------------------------------------------------------------------------------
[Edit 2 times, last edit by dskagcommunity at Oct 18, 2012 7:45:27 AM]