World Community Grid Forums
Thread Status: Active | Total posts in this thread: 200
Crystal Pellet
Veteran Cruncher | Joined: May 21, 2008 | Post Count: 1320 | Status: Offline
Some tasks are running endlessly and I have to restart the task to get progress again.
Advantage: it resumes from the last checkpoint, which by then is already long ago. Because these OPNG tasks take a very long time on this low-end card (AMD A4 Micro-6400T APU + AMD Radeon R3 (Mullins)), I'll disable GPU crunching on that laptop.

BTW: I reserved 1 full CPU of the 4 threads to support a GPU task.

INFO:[22:47:17] End AutoDock...
----------------------------------------
[Edit 2 times, last edit by Crystal Pellet at Mar 7, 2021 11:02:33 AM]
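If anyone else wants to stop GPU crunching on just one machine, the usual BOINC way is a GPU exclusion in cc_config.xml. A minimal sketch (the World Community Grid project URL is written from memory, so use the URL exactly as BOINC Manager lists it; device_num is optional and only needed if the machine has more than one GPU):

<cc_config>
   <options>
      <exclude_gpu>
         <url>http://www.worldcommunitygrid.org/</url>
         <device_num>0</device_num>
      </exclude_gpu>
   </options>
</cc_config>

Save it in the BOINC data directory, then use Options > Read config files or restart the client for it to take effect.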
maeax
Advanced Cruncher | Joined: May 2, 2007 | Post Count: 142 | Status: Offline
Yes, we need a balance between low-, mid- and high-performance GPUs for the AutoDock work.
----------------------------------------
AMD Ryzen Threadripper PRO 3995WX 64-Cores/ AMD Radeon (TM) Pro W6600. OS Win11pro
Vester
Senior Cruncher | USA | Joined: Nov 18, 2004 | Post Count: 325 | Status: Offline
mdxi, the problem is the driver (driver version 20.3.4, device version OpenCL 1.1), not the AMD RX550.
Pandelta
Advanced Cruncher | Joined: Jun 24, 2012 | Post Count: 55 | Status: Offline
Just want to confirm a simple question: once GPU goes into production, will all stats from CPU and GPU units still be together under the existing OpenPandemics project, or will GPU get its own separate project and stats? Curious, since I shifted CPU resources to another project in anticipation of getting the one-hundred-year badge with GPU units.

They were all grouped together for the HCC GPU app; I would guess the same will apply for OPN as well.

Hey Nanoprobe! It makes sense to me they would be together like HCC was, but at one point I saw a post from uplinger about different queues, so I wasn't so sure.
nanoprobe
Master Cruncher | Classified | Joined: Aug 29, 2008 | Post Count: 2998 | Status: Offline
[Quoting Pandelta's question above about whether CPU and GPU stats will be combined]

Not 100% sure either. Just best guess.
In 1969 I took an oath to defend and protect the U S Constitution against all enemies, both foreign and Domestic. There was no expiration date.
cehunt
Senior Cruncher | CANADA | Joined: Oct 10, 2011 | Post Count: 172 | Status: Offline
Hi:
I am wondering if someone else is experiencing a similar error to the one I am having with my Alienware laptop.

The CPU is an i7-3740QM.
The onboard GPU is an Intel HD 4000.
The discrete GPU is an Nvidia GeForce GTX 660M.
The device driver has been updated.
The OS is Windows 10, fully patched.

On the last GPU beta test run, I got a rather lengthy error dump with no clear error messages, but the test still failed. Just wondering if there are other testers who own this laptop and are experiencing similar errors. Sometimes two heads are better than one. :-))

Clive Hunt
SOS_Ready
Cruncher | Joined: Sep 23, 2008 | Post Count: 19 | Status: Offline
Got 45 WUs on my Win10 PC: 2 x AMD 6386SE (16c/32t) + 2 x Nvidia GTX TITAN.
All in PV status and completed without errors.

My cc_config.xml:

<cc_config>
   <options>
      <use_all_gpus>1</use_all_gpus>
   </options>
</cc_config>

I tried two different setups for my app_config.xml. For the first 25 WUs it was:

<app>
   <name>beta29</name>
   <gpu_versions>
      <gpu_usage>1</gpu_usage>
      <cpu_usage>1</cpu_usage>
   </gpu_versions>
</app>

and I got an average CPU time of about 0.36 hours per GPU task. For the next 20 WUs I used the following:

<app>
   <name>beta29</name>
   <gpu_versions>
      <gpu_usage>0.25</gpu_usage>
      <cpu_usage>1</cpu_usage>
   </gpu_versions>
</app>

Here the average time was 0.81 hours per task, but four tasks ran simultaneously on each GPU. Extrapolating the "average" run for 4 tasks from the previous 1 CPU + 1 GPU scenario, it would be approx. 1.4 hours. Obviously this is a very rough estimate, because the "test" samples were not the same for the two versions of the app_config file, but if the computing load needed for each sample was similar, then 1 CPU + 0.25 GPU offers much better performance on my machine. I wonder where the "golden ratio" is? Maybe I'll be able to find it.
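For anyone copying these settings: the <app> fragments above need to sit inside an <app_config> root element before BOINC will read the file, and the file lives in the World Community Grid project folder inside the BOINC data directory. A complete sketch of the second setup (the app name beta29 is simply taken from the post above):

<app_config>
   <app>
      <name>beta29</name>
      <gpu_versions>
         <gpu_usage>0.25</gpu_usage>
         <cpu_usage>1</cpu_usage>
      </gpu_versions>
   </app>
</app_config>

BOINC picks the file up after Options > Read config files or a client restart. As a rough throughput check on the numbers quoted: 1 / 0.36 ≈ 2.8 tasks per GPU-hour for the single-task setup versus 4 / 0.81 ≈ 4.9 for four at a time, roughly 1.8x the throughput, assuming the work units were comparable.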
robertmiles
Senior Cruncher | US | Joined: Apr 16, 2008 | Post Count: 443 | Status: Offline
Quote:
"I would like to get feedback from other volunteers about their experiences so far relating to SSD activity during the computation of these GPU WUs. Personally, I saw a huge spike in overall activity while running 4 WUs at a time across 2 GPUs. The spikes in SSD write activity were directly correlated with the spikes in GPU utilization, so it seems that each job packaged within a given GPU WU causes its final result to be written to the SSD. If that notion is right, it seems a little excessive given the overall short runtimes. Isn't there a way to reduce this? Caching multiple results in the meantime... Open to any suggestions as to how to deal with this on the client side :)"

Quote:
"I just don't run BOINC on the SSD... why would I do that?"

Probably because it will run faster on an SSD.
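On the "isn't there a way to reduce this" question: BOINC has a global setting for how often tasks are allowed to checkpoint to disk. A sketch of a global_prefs_override.xml in the BOINC data directory (the 120 seconds is only an example value; the same setting is also exposed in BOINC Manager's computing preferences as the checkpoint interval):

<global_preferences>
   <disk_interval>120</disk_interval>
</global_preferences>

This only spaces out checkpoints, though; the write of each job's final result file would still happen, and whether the OPNG app honors the interval for such short jobs is something the techs would need to confirm.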
nanoprobe
Master Cruncher | Classified | Joined: Aug 29, 2008 | Post Count: 2998 | Status: Offline
[Quoting SOS_Ready's app_config comparison above]

With higher-end GPUs, like your Titans, it will be more efficient to run multiple tasks concurrently. Finding the "golden ratio" is basically trial and error. When the HCC GPU app was running, the AMD 7970 was the best performer; IIRC I could run 16 tasks concurrently on those, but the OPN AutoDock app is a completely different animal and you'll probably not be able to run that many at one time. Have fun with the testing.
In 1969 I took an oath to defend and protect the U S Constitution against all enemies, both foreign and Domestic. There was no expiration date.
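Since finding that ratio is trial and error, one thing that can help while experimenting (just a sketch, not WCG guidance; <max_concurrent> is a standard app_config.xml option, and the name beta29 is taken from the earlier post) is capping how many of these tasks run at once, so stepping gpu_usage down can't suddenly flood the card:

<app_config>
   <app>
      <name>beta29</name>
      <max_concurrent>4</max_concurrent>
      <gpu_versions>
         <gpu_usage>0.25</gpu_usage>
         <cpu_usage>1</cpu_usage>
      </gpu_versions>
   </app>
</app_config>

Adjust max_concurrent and gpu_usage together between batches and compare tasks completed per hour rather than per-task runtime.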
koschi
Cruncher | Joined: Dec 16, 2007 | Post Count: 5 | Status: Offline
Quote:
"Probably because it will run faster on an SSD."

Or because spinning disks are a thing of the past. I haven't had magnetic storage in my computer for at least 5 years now.