World Community Grid Forums
Thread Status: Active | Total posts in this thread: 160
Grumpy Swede
Master Cruncher | Svíþjóð | Joined: Apr 10, 2020 | Post Count: 2175 | Status: Offline
Quote from uplinger:
"Hello, there are still more that need to go out, but it looks like, with the reports of too little time allowed for the work units, I need to expand on that. I did the estimate based on the other work units, which was probably a bad idea because the production work units are quite a bit harder. I'm going to make the change to the estimated time now, but not load any extra in, to see if I can capture more results from members to confirm a suspicion of mine. I need to read through this thread up until now to make sure I'm capturing all the correct data. Thanks, -Uplinger"

Well, from what I see both from my GTX 980 and my iGPU HD 4600, these work units are not any harder than those in the 7.26 Beta. On the contrary, they crunch faster than the previous ones.

Edit: And there I just got my first "exceeded elapsed time limit" on my iGPU HD 4600. The other two WUs it crunched finished OK.
https://www.worldcommunitygrid.org/ms/device/...og.do?resultId=1559231038

[Edit 2 times, last edit by Grumpy Swede at Mar 12, 2021 9:13:22 PM]
YoToP
Cruncher | Joined: Feb 28, 2021 | Post Count: 4 | Status: Offline
Got two errors on my Manjaro Linux machine with an AMD RX 560. This did not happen in the previous betas.

One that stands out is this one:
https://www.worldcommunitygrid.org/ms/device/...s.do?workunitId=569919620
It looks like other machines produced errors on it as well.

[Edit 1 times, last edit by YoToP at Mar 12, 2021 9:14:02 PM]
uplinger
Former World Community Grid Tech | Joined: May 23, 2005 | Post Count: 3952 | Status: Offline
Some statistics so far with this BETA:
Windows results:
    results   avg elapsed time   planclass
    3692      715.38             opencl_ati_102
    334       2458.40            opencl_intel_gpu_102
    18665     335.39             opencl_nvidia_102

Mac results:
    results   avg elapsed time   planclass
    1177      361.08             opencl_ati_102
    29        1998.43            opencl_intel_gpu_102
    10        1206.81            opencl_nvidia_102

Linux results:
    results   avg elapsed time   planclass
    193       586.24             opencl_ati_102
    21        2734.89            opencl_intel_gpu_102
    4231      226.57             opencl_nvidia_102

As you can see, I have a wild swing between ATI, Intel and NVIDIA. I originally calculated the estimated fpops from the overall average; I think I need to change that to be closer to what Windows is seeing.

These stats cover only successful results in the database; they do not include the "elapsed time exceeded" errors that members have noticed.

@nanoprobe, I'll reset your hosts when we get more coming out. For now the beta is paused while I review more data.

Thanks,
-Uplinger
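To put numbers on that swing, here is a minimal Python sketch that re-derives count-weighted averages from the figures above (assumption: the posted averages are simple per-result means, so weighting by result count reproduces the overall average the original fpops estimate was based on):

    # Re-derive per-planclass and overall averages from the posted stats.
    # Each entry is (result count, average elapsed time), one tuple per OS.
    results = {
        "opencl_ati_102":       [(3692, 715.38), (1177, 361.08), (193, 586.24)],
        "opencl_intel_gpu_102": [(334, 2458.40), (29, 1998.43), (21, 2734.89)],
        "opencl_nvidia_102":    [(18665, 335.39), (10, 1206.81), (4231, 226.57)],
    }

    def weighted_mean(pairs):
        # Sum of (count * average) divided by the total count.
        return sum(n * avg for n, avg in pairs) / sum(n for n, _ in pairs)

    for planclass, pairs in results.items():
        print(planclass, round(weighted_mean(pairs), 1))

    all_pairs = [p for pairs in results.values() for p in pairs]
    print("overall", round(weighted_mean(all_pairs), 1))

The overall mean comes out around 400, while the opencl_intel_gpu_102 plan class averages roughly 2,400, so an estimate derived from the overall average leaves the slowest devices with the least headroom before the time limit.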
nanoprobe
Master Cruncher | Classified | Joined: Aug 29, 2008 | Post Count: 2998 | Status: Offline
Marked as too late?
https://www.worldcommunitygrid.org/ms/device/...s.do?workunitId=569919034

Quote from uplinger:
"@nanoprobe I'll reset your hosts when we get more coming out. For now the beta is paused while I review more data."
In 1969 I took an oath to defend and protect the U S Constitution against all enemies, both foreign and Domestic. There was no expiration date.
[Edit 1 times, last edit by nanoprobe at Mar 12, 2021 9:29:25 PM]
Grumpy Swede
Master Cruncher | Svíþjóð | Joined: Apr 10, 2020 | Post Count: 2175 | Status: Offline
Quote from nanoprobe:
"Marked as too late?
https://www.worldcommunitygrid.org/ms/device/...s.do?workunitId=569919034"

The whole "Workunit Status" should be marked as "too many errors", IMO. I've seen those "Too late" results before in the GPU beta here, when there's a bunch of errored-out wingmen and one who finished it OK.

[Edit 2 times, last edit by Grumpy Swede at Mar 12, 2021 9:32:42 PM]
Richard Haselgrove
Senior Cruncher | United Kingdom | Joined: Feb 19, 2021 | Post Count: 360 | Status: Offline
Remember that GPUs have a hugely wider range of speed than CPUs.
You don't send us an 'estimated time': you send us a size (fpops), and we supply a speed (GFLOPS) and work it out from the combination. For CPUs, our 'speed' is the Whetstone benchmark - a lowball figure. For GPUs, it's the 'Peak' speed for the card architecture - a highball figure. That may be part of the problem.
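To make that point concrete, here is a minimal Python sketch of the size/speed relationship, assuming the usual BOINC scheme in which each work unit carries a size estimate (rsc_fpops_est) and a hard bound (rsc_fpops_bound) and the client divides both by the speed it attributes to the device; all the numbers below are hypothetical:

    # Simplified model of the client-side runtime estimate and time limit.
    # The real client also applies correction factors and runtime statistics.

    def estimated_runtime_s(rsc_fpops_est, assumed_gflops):
        # What shows up as the task's estimated time.
        return rsc_fpops_est / (assumed_gflops * 1e9)

    def elapsed_time_limit_s(rsc_fpops_bound, assumed_gflops):
        # Beyond this the task is aborted with "exceeded elapsed time limit".
        return rsc_fpops_bound / (assumed_gflops * 1e9)

    fpops_est = 5e13              # hypothetical work unit size: 50,000 GFLOP
    fpops_bound = 10 * fpops_est  # hypothetical 10x safety bound

    # CPU case: Whetstone is a lowball speed, so the limit is roomy.
    print(elapsed_time_limit_s(fpops_bound, assumed_gflops=5))     # 100,000 s

    # GPU case: 'peak' speed is a highball figure; if the card sustains only
    # a fraction of it on this kernel, real runtime can overrun the limit.
    print(estimated_runtime_s(fpops_est, assumed_gflops=5000))     # 10 s
    print(elapsed_time_limit_s(fpops_bound, assumed_gflops=5000))  # 100 s

Under that model, a card delivering only a small fraction of its advertised peak on this particular kernel can hit the limit even though the work unit is running normally, which would match the "exceeded elapsed time limit" reports above.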
bozz4science
Advanced Cruncher | Germany | Joined: May 3, 2020 | Post Count: 104 | Status: Offline
Had a couple of WUs error out with the "exceeded elapsed time limit" message noted in this thread for the new beta. My two NVIDIA cards are set up to crunch 3 WUs concurrently each. While I had no issues on my 970, the 1660 Super card had a lot of tasks that seemingly got stuck in the middle of computation. If I didn't suspend and resume those stuck tasks, they ran on forever without placing any load on the GPU and then errored out with the above message.
AMD Ryzen 3700X @ 4.0 GHz / GTX1660S
Intel i5-4278U CPU @ 2.60GHz
nanoprobe
Master Cruncher | Classified | Joined: Aug 29, 2008 | Post Count: 2998 | Status: Offline
Quoting the exchange above:
"Marked as too late?
https://www.worldcommunitygrid.org/ms/device/...s.do?workunitId=569919034"

"The whole "Workunit Status" should be marked as "too many errors" IMO. I've seen those "Too late" before in GPU Beta here, when there's a bunch of errored out wingmen, and one who finished it OK."

Then I shouldn't have to divvy up the .4-2.6 points.
In 1969 I took an oath to defend and protect the U S Constitution against all enemies, both foreign and Domestic. There was no expiration date.
[Edit 1 times, last edit by nanoprobe at Mar 12, 2021 9:45:08 PM]
ThreadRipper
Veteran Cruncher | Sweden | Joined: Apr 26, 2007 | Post Count: 1321 | Status: Offline
Quoting the earlier exchange:
"Another question I have: that same machine has three GPUs: 7850K iGPU, 1050 Ti and GT 710. BOINC runs WUs on the iGPU and the 1050 Ti simultaneously, but it does not want to try to run on the GT 710, despite it being available. Are the WUs assigned specifically to a certain GPU when they get downloaded, or why would it not try to run any on that GPU?"

"Do you have <use_all_gpus>1</use_all_gpus> in your cc_config.xml?"

Hi, I did not have a cc_config.xml file, but now I have created one and put it in the BOINC data directory. However, I do not have any NVIDIA beta WUs left in my cache now, so I'll have to wait and see if this works when more GPU work is available.

Maybe @Uplinger knows and can confirm: is a cc_config.xml really needed for BOINC to use all available GPUs? Thanks!

Join The International Team: https://www.worldcommunitygrid.org/team/viewTeamInfo.do?teamId=CK9RP1BKX1
AMD TR2990WX @ PBO, 64GB Quad 3200MHz 14-17-17-17-1T, RX6900XT @ Stock
AMD 3800X @ PBO
AMD 2700X @ 4GHz
JohnDK
Advanced Cruncher | Denmark | Joined: Feb 17, 2010 | Post Count: 77 | Status: Offline
Quote from ThreadRipper:
"Maybe @Uplinger knows and can confirm: is a cc_config.xml really needed for BOINC to use all available GPUs?"

At least this is what the BOINC client configuration page says:

<use_all_gpus>0|1</use_all_gpus>
If 1, use all GPUs (otherwise only the most capable ones are used). Requires a client restart.
Intel i7-6850K / 16GB / RTX 3090 / 2x RTX 3080 Ti / RTX 3070 Ti
AMD Ryzen 9 5950X / 32GB / RTX 2080 Ti |
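For anyone following along, a minimal cc_config.xml along those lines would look like the sketch below, assuming the standard layout of the file (an <options> section inside a <cc_config> root); it goes in the BOINC data directory and, per the note above, needs a client restart to take effect:

    <cc_config>
        <options>
            <!-- Use every GPU in the machine, not only the most capable one(s). -->
            <use_all_gpus>1</use_all_gpus>
        </options>
    </cc_config>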