World Community Grid Forums
Thread Status: Active | Total posts in this thread: 109
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
If 3 CPU + 1 GPU ran before the boot, with no others in "waiting to run", there is something to be said for the GPU resource getting priority deployment. JEklund, your reply implies that 1 HCC-CPU task would have gone "waiting to run" unless you booted the second a task finished. (Oh, the wonderful permutations of the chance factor in BOINC.)
nanoprobe
Master Cruncher | Classified | Joined: Aug 29, 2008 | Post Count: 2998
> However, I once got a "the graphics driver stopped responding and has been reset" message or similar. When this happened, the running WUs in BOINC got stuck and I had to abort them.

The driver error you're getting is a problem with how Windows handles the drivers. MS has acknowledged the problem, and fixing it requires a very simple registry edit. You don't have to abort the tasks that get stuck: just suspend all of your GPU tasks, restart the stuck ones one at a time, then resume all your other tasks. FWIW, I had problems with the 12.11 beta 11 driver; I switched back to the 12.11 beta 4 and had no more problems.

> Oh, thanks for the reply. You don't happen to know which registry edit? I believe I'll be able to manage on this driver for now, and hopefully AMD gives us the Christmas present of a stable 12.11 driver. :) Right now I am unable to test more since I am moving and the PC is off for the time being, but as soon as I have settled in a little I hope to be back on track!

When you get settled I can give you the registry edit. E-mail me when you're ready so I don't miss a post in here.
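[Editor's note: nanoprobe does not name the edit in this thread, so the following is only an illustration, not necessarily the edit he means. The "driver stopped responding" symptom described above is Windows' Timeout Detection and Recovery (TDR): when a GPU kernel runs longer than the TDR timeout (2 seconds by default), Windows resets the display driver. The commonly documented workaround is raising the TdrDelay value; the 8-second figure below is an arbitrary example.]

```
Windows Registry Editor Version 5.00

; Raise the GPU watchdog timeout from the 2-second default to 8 seconds.
; Merge as a .reg file (or set with regedit) and reboot to apply.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:00000008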
In 1969 I took an oath to defend and protect the U S Constitution against all enemies, both foreign and Domestic. There was no expiration date.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
PS: BOINC gives this kind of line when the file has the *right* name. That one I had not noticed before, though I've been running the app_config.xml since 7.0.40, but a stdoutdae.old scan revealed:

17/12/2012 17:30:49 | World Community Grid | Found app_config.xml
08-Dec-2012 21:38:52 [---] No usable GPUs found
08-Dec-2012 21:38:52 [World Community Grid] Found app_config.xml

Old *men* not seeing this line in the event log, take lessons from elder sons ;P
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
Hello Crystal Pellet. Thanks for the response.

> I wouldn't call it virtualising. That's confusing. I have 4 cores, no HT, so 4 threads: running 3 CPU-tasks each using 1 full thread, and 2 GPU-tasks sharing 1 core/thread.

Looks to me like the import of the word 'virtualisation' was, perhaps unconsciously, associated with 'CPU-hardware-thread-virtualisation'. I used 'virtualised' in the broadest sense. I avoided the phrase 'CPU-virtualised thread', for example, because that phrase did not appear to me to have the same meaning as 'CPU-thread, virtualised'. I just wanted to preserve the 1-CPU-thread-to-1-GPU-task HCC-WU design in my thought-navigation as my thoughts wiggle through the actual thread-distribution breakdown: the only way, in my thoughts, to satisfy the 1-CPU-thread-to-1-GPU-task HCC-WU design and still do 2 GPU-WUs was to 'virtualise' a true CPU thread into two, hence the phrase '1 true CPU thread virtualised to 2 virtual CPU threads'. Anyway, my choice of words apparently failed to convey what I meant. That's OK, and '2 GPU-tasks sharing 1 core/thread' is indeed very clear.

> In fact I've 5 tasks running on 4 cores and no affinity used, so 5 tasks are fighting for CPU-time.

Isn't it that, in your example case, only 1 thread is shared by 2 tasks? The remaining 3 tasks are each reserved/guaranteed 1 thread, if I understand correctly.

> It's BOINC that's calculating what could run maximum: 3 cores for the CPU-tasks and 2 times 0.5 core = 1 core for the 2 GPU-tasks. My experience is that BOINC always first wants to run the GPU-tasks and leave the rest of the free cores to CPU-tasks. But in JEKlund's example it's running 3 CPU-tasks and 1 GPU.

I think BOINC calculated (max 4 is allowed) that running 3 CPU + 0.5 for 1 GPU (total 3.5 used) is better than running 2 CPU + 2 x 0.5 (total 3 used). Or perhaps, 3.5 not being an integer, BOINC rounded the 0.5 (which caused the non-integer result of 3.5) up to 1; hence 3 for-the-CPU and 1 for-the-GPU.
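[Editor's note: the core-budget comparison described above can be sketched in a few lines. This is only a reading of the post's arithmetic, not actual BOINC scheduler code; the function name and the 0.5-core-per-GPU-task figure are illustrative.]

```python
# Sketch of the core-budget arithmetic discussed above.
# NOT BOINC source code: core_budget() is a hypothetical helper
# that just totals the CPU demand of a candidate task mix.

def core_budget(cpu_tasks, gpu_tasks, cpu_per_gpu_task=0.5, cores=4):
    """Return total cores used by a mix, or None if it overcommits."""
    used = cpu_tasks * 1.0 + gpu_tasks * cpu_per_gpu_task
    return used if used <= cores else None

# 3 CPU-tasks + 1 GPU-task uses 3.5 of 4 cores, while
# 2 CPU-tasks + 2 GPU-tasks uses only 3.0, so the first mix
# makes fuller use of the machine, matching JEKlund's observation.
print(core_budget(3, 1))  # 3.5
print(core_budget(2, 2))  # 3.0
```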
Crystal Pellet
Veteran Cruncher | Joined: May 21, 2008 | Post Count: 1323
> In fact I've 5 tasks running on 4 cores and no affinity used, so 5 tasks are fighting for CPU-time. Isn't it that, in your example case, only 2 threads are sharing 1 core? The remaining 3 threads are each reserved/guaranteed 1 core, if I understand correctly.

It's the Operating System that decides what to do next, and without dedicated affinity for certain tasks, the OS decides on which core to do it. As BOINC tasks run at the lowest priority, other non-BOINC work always gets cycles first. Depending on memory and cache management, a BOINC task may switch from one core to another. That's why I said 5 tasks fighting for 4 cores; but because WCG has no multi-threaded tasks, a task cannot use more than 1 thread/core, though not always the same core.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
> It's the Operating System that decides what to do next, and without dedicated affinity for certain tasks, the OS decides on which core to do it. As BOINC tasks run at the lowest priority, other non-BOINC work always gets cycles first. Depending on memory and cache management, a BOINC task may switch from one core to another.

The switching of cores as orchestrated by the OS does not violate the 1:1 thread-to-WU ratio on the for-the-CPU HCC-WU side; nor does it violate, in your example case, the 1:2 thread-to-WU ratio on the for-the-GPU HCC-WU side. Further, if non-BOINC tasks request CPU cycles, the switching of cores preserves the said ratios but divides the CPU compute power to accommodate the request. That division is essentially also the mechanism that allows a single CPU thread to accommodate, in your example case, 2 for-the-GPU HCC-WUs. This is my understanding.

> That's why I said 5 tasks fighting for 4 cores; but because WCG has no multi-threaded tasks, a task cannot use more than 1 thread/core, though not always the same core.

On the aggregate, that is at the OS level, yes: 5 tasks are competing for 4 cores. But in the breakdown of that aggregate, in your example case, 3 for-the-CPU HCC-WUs are guaranteed 3 CPU threads, with the remaining 1 CPU thread shared by 2 for-the-GPU HCC-WUs.
OldChap
Veteran Cruncher | UK | Joined: Jun 5, 2009 | Post Count: 978
Yes, nano, I made that conclusion already; see my last post and the one before. The <max_concurrent> line was designed for I/O-intense science limitations, i.e. to help limit CEP2 so its contribution can be boosted without micromanaging; no other purpose. We went and stretched the function (that's what users do) and discovered what it does when used outside its objective. We do, don't we.

Some observations from this evening: dual 5870s in a 2600K setup, Win7, with this app_config:

<app_config>
  <app>
    <name>hcc1</name>
    <max_concurrent>variable, to change behaviour</max_concurrent>
    <gpu_versions>
      <gpu_usage>.1</gpu_usage>   (to get 10 instances on each card)
      <cpu_usage>0.4</cpu_usage>  (to share 8 cores between 2 x 10 instances)
    </gpu_versions>
  </app>
</app_config>

Making <max_concurrent> 20, as one would expect, gives 20 WUs running, 10 on each card.
Making <max_concurrent> 13 populates the primary card with 10 WUs and allows just 3 for the secondary card (plays havoc with the points claimed on WUs completed by the secondary card).
Making <max_concurrent> 10 populates the primary card only, with 10 WUs.

Tomorrow I want to take a look at the following, which I believe I saw late on but need to confirm: making <max_concurrent> 21 populates the primary card with 11 and the secondary with 10.

What possible use could there be for this info? Well, I am thinking one could influence the number of WUs held in cache by adding a (low-end) card alongside a high-performance card but not running it, or running only a single WU at a time on it, maybe. I am sure others may think of other things.
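[Editor's note: one way to model the confirmed fill pattern above is "each card takes up to 1/gpu_usage instances, the primary card fills first, and max_concurrent caps the total". This is purely a guess at the arithmetic behind the observations, not actual BOINC scheduler logic, and the function name is invented.]

```python
# Hypothetical model of OldChap's observed per-card WU distribution.
# NOT BOINC internals: just a primary-card-first fill capped by
# max_concurrent, with per-card capacity derived from gpu_usage.

def distribute(max_concurrent, gpu_usage=0.1, cards=2):
    """Return the number of WUs placed on each card under this model."""
    per_card_limit = int(round(1 / gpu_usage))  # 10 instances per card here
    remaining = max_concurrent
    result = []
    for _ in range(cards):
        n = min(per_card_limit, remaining)
        result.append(n)
        remaining -= n
    return result

print(distribute(20))  # [10, 10] - 10 WUs on each card
print(distribute(13))  # [10, 3]  - primary full, 3 spill to secondary
print(distribute(10))  # [10, 0]  - primary card only
```

Note that the unconfirmed 21-task observation (11 + 10) would not fit this simple model, which caps the primary card at 10; if confirmed, the real behaviour must be something else.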
nanoprobe
Master Cruncher | Classified | Joined: Aug 29, 2008 | Post Count: 2998
@OldChap: Do you see any advantage to running that many tasks on a 5870? I saw no improvement running more than 4 on mine, and efficiency dropped fast with more than 6, albeit that was on an i7-860.
deltavee
Ace Cruncher | Texas Hill Country | Joined: Nov 17, 2004 | Post Count: 4891
> @OldChap: Do you see any advantage to running that many tasks on a 5870? I saw no improvement running more than 4 on mine, and efficiency dropped fast with more than 6, albeit that was on an i7-860.

One advantage may be more run time. I have a 7870 on an i7-3770 running 16 HCC-GPU WUs at the same time. My average Run Time on this machine has gone from 8 days per day to over 15 days per day. Multiply this by several machines and my 66 threads are now doing 99 days of Run Time per day, and this will increase as my wingman return rate stabilizes. The number of points, however, is the same as when I was running 8 WUs.

12/17/2012 0:099:22:57:07
12/16/2012 0:093:19:20:15
12/15/2012 0:075:08:21:55
12/14/2012 0:071:22:57:07
12/13/2012 0:068:04:57:02
Crystal Pellet
Veteran Cruncher | Joined: May 21, 2008 | Post Count: 1323
> On the aggregate, that is at the OS level, yes: 5 tasks are competing for 4 cores. But in the breakdown of that aggregate, in your example case, 3 for-the-CPU HCC-WUs are guaranteed 3 CPU threads, with the remaining 1 CPU thread shared by 2 for-the-GPU HCC-WUs.

E.g. when the 2 GPU-tasks are both in their low-GPU, very CPU-demanding phase, those tasks will steal cycles from the other 3 CPU cores.