World Community Grid Forums
Thread Status: Active | Total posts in this thread: 77
cjslman
Master Cruncher | Mexico | Joined: Nov 23, 2004 | Post Count: 2082 | Status: Offline
Congrats to the WCG admins, developers, scientists and beta testers for making this possible. The results are already showing: an 18% increase (today) in returned results. Hope it continues to grow. Good luck to all.
----------------------------------------
CJSL - Happy crunching down the road...
Ian-n-Steve C.
Senior Cruncher | United States | Joined: May 15, 2020 | Post Count: 180 | Status: Offline
> I wouldn't say it's Nvidia's fault alone. We'll just have to agree to disagree. When it comes to OpenCL support for applications on Nvidia hardware you can clearly see the difference, and it's because they would be shooting themselves in the foot if they made OpenCL good enough to compete with CUDA. Another thing: CUDA is proprietary and OpenCL is open source, so I wouldn't expect anything else from Nvidia. The researchers who want to do their research here are probably not in a position to pay Nvidia for CUDA support, and why should they; I highly doubt Nvidia would give it away for free. They can accomplish all they need with OpenCL, even though it could probably get done faster with CUDA. And last but not least, OpenCL will run on both AMD and Nvidia; CUDA is Nvidia only.

What? You don't have to PAY for CUDA. It's a platform, and the CUDA toolkit is free from Nvidia; the project just needs a developer who knows it or can learn it. Anyone can code with CUDA. When SETI was still live, we had a volunteer who wrote his own application basically from scratch, in CUDA, and provided it absolutely free. I even helped compile it at times, and I have absolutely no affiliation with Nvidia or SETI other than being a user. That app was 4-5x faster than the SETI-provided OpenCL app, partly due to CUDA optimization and partly due to the author's great search algorithms and coding techniques.

From what I understand, Nvidia provided some resources to SETI to help develop one of their earlier CUDA apps, but I'm not aware that SETI actually paid for that. I got the impression it was more of a PR/marketing campaign from Nvidia to promote GPU computing.

CUDA is just faster: on similarly spec'd hardware, CUDA on Nvidia will trump OpenCL on AMD.

----------------------------------------
EPYC 7V12 / [5] RTX A4000
EPYC 7B12 / [5] RTX 3080Ti + [2] RTX 2080Ti
EPYC 7B12 / [6] RTX 3070Ti + [2] RTX 3060
[2] EPYC 7642 / [2] RTX 2080Ti
[Edit 2 times, last edit by Ian-n-Steve C. at Apr 8, 2021 2:19:08 AM]
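As an aside on the point above: the CUDA toolkit is indeed a free download, and writing a basic kernel needs nothing beyond it and an Nvidia card. Below is a minimal, purely illustrative sketch; it has nothing to do with any WCG application, and every name in it is made up for the example.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Minimal CUDA kernel: element-wise vector addition.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    // Allocate device buffers and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, n * sizeof(float));
    cudaMalloc(&db, n * sizeof(float));
    cudaMalloc(&dc, n * sizeof(float));
    cudaMemcpy(da, ha.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // 256 threads per block, enough blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc.data(), dc, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

It compiles with the toolkit's own compiler (e.g. `nvcc -O2 vecadd.cu -o vecadd`) and runs on any CUDA-capable card; the only real cost is the developer's time, which was the point being made.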
Robokapp
Senior Cruncher | Joined: Feb 6, 2012 | Post Count: 249 | Status: Offline
We are ripping through the WUs like savages, aren't we? I get big batches, devour them in an hour, and then it's dry, dry, dry...

I'm not complaining; I'm very happy to finally GPU again. Just wondering whether this is the steady pace or whether supply is still ramping up.
bozz4science
Advanced Cruncher | Germany | Joined: May 3, 2020 | Post Count: 104 | Status: Offline
While I don't have any experience with CUDA programming myself, it seems only fair to point out the obvious speedup of the GPU app over the CPU app, but at the same time also the obvious inefficiencies that still remain. I made these same observations, along with others, numerous times at the beginning of March during the beta tests, when the proposal to run several WUs concurrently was first made to increase utilisation. However, I still see two problems with this approach. First, I agree with Ian&Steve that CPU resources are essentially wasted: if the GPU app were more demanding and required more computational power per task, fewer WUs would have to run concurrently to bring GPU utilisation up, freeing CPU threads. Second, with the current WU supply, an app_config fine-tuned to run multiple WUs concurrently on a card is, at least for now, never completely satisfied, which limits the few tasks that actually get downloaded and run. In my case, allowing each card to run 3 WUs but only receiving two means a third of the card's computational power sits unused. (A minimal example of such an app_config is sketched after this post.)
Is this app open-source for users to review, or only the research results? Hope that some smart people can contribute beyond their hardware to enhance the efficiency of the app even further.
----------------------------------------
AMD Ryzen 3700X @ 4.0 GHz / GTX1660S
Intel i5-4278U CPU @ 2.60GHz
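For anyone who hasn't set one up, the app_config mentioned above is a small XML file placed in the World Community Grid project folder of the BOINC data directory (projects/www.worldcommunitygrid.org/app_config.xml). A minimal sketch along these lines tells the client that each task uses a third of a GPU, so three run at once per card. Note that the app name "opng" is my assumption for the OpenPandemics GPU app's short name; verify it in your client_state.xml or event log before using it:

```xml
<app_config>
  <app>
    <!-- "opng" is assumed to be the OpenPandemics GPU app's short name; check client_state.xml -->
    <name>opng</name>
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>  <!-- each task claims 1/3 of a GPU, so three run concurrently -->
      <cpu_usage>1.0</cpu_usage>   <!-- each task also reserves one CPU thread -->
    </gpu_versions>
  </app>
</app_config>
```

After saving the file, Options -> Read config files in the BOINC Manager (advanced view) applies it without restarting the client.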
maeax
Advanced Cruncher | Joined: May 2, 2007 | Post Count: 142 | Status: Offline
I wouldn't see OPG as a standalone project. I have also seen and crunched HCC GPU tasks in the past. Mixing your GPU across other projects that offer GPU work is the best way to always have work available; your system then balances the CPU tasks alongside the GPU work.
----------------------------------------
AMD Ryzen Threadripper PRO 3995WX 64-Cores/ AMD Radeon (TM) Pro W6600. OS Win11pro
Ulrich Metzner
Cruncher | Joined: Apr 8, 2020 | Post Count: 7 | Status: Offline
Hi there,
I got a bunch of GPU WUs; unfortunately they all failed, just as in the beta. Here are some:
https://www.worldcommunitygrid.org/ms/device/...s.do?workunitId=609761634
https://www.worldcommunitygrid.org/ms/device/...s.do?workunitId=609761638
https://www.worldcommunitygrid.org/ms/device/...s.do?workunitId=609761636
https://www.worldcommunitygrid.org/ms/device/...s.do?workunitId=609761586
...more...
Basically they all failed the same way. System: Windows 10 Pro, Q9550S, 8 GB RAM, Nvidia GT640 with the OpenCL 1.2 driver installed.
[edit] GPU-Z says it is a GK107 revision A2 chip (Kepler) and is capable of OpenCL 1.2. [/edit]
Other projects, e.g. Milkyway, run without problems. For now I have disabled the GPU for WCG until further notice. :(
[edit2] A native CUDA application would be an absolute killer! ;) [/edit2]
Aloha, Uli
----------------------------------------
[Edit 2 times, last edit by Ulrich Metzner at Apr 8, 2021 11:09:24 AM]
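A general aside for anyone debugging similar failures: the clinfo utility prints what the installed driver actually exposes, which is a quick way to confirm the OpenCL platform and device versions the WCG app will see. The grep filter below is just one way to trim its fairly verbose output and assumes a Unix-like shell (or Git Bash on Windows):

```sh
clinfo | grep -iE 'platform version|device name|device version|opencl c version'
```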
nanoprobe
Master Cruncher | Classified | Joined: Aug 29, 2008 | Post Count: 2998 | Status: Offline
> [quoting Ian-n-Steve C.'s reply above about CUDA being free]

Guess I should have been clearer: I didn't mean pay for CUDA, I meant pay for CUDA support. The researchers are being paid by whoever they work for. If Nvidia could help, then the question is why haven't the researchers asked?
In 1969 I took an oath to defend and protect the U.S. Constitution against all enemies, both foreign and domestic. There was no expiration date.
Ian-n-Steve C.
Senior Cruncher | United States | Joined: May 15, 2020 | Post Count: 180 | Status: Offline
I still don't know what you mean by paying for "CUDA support". In no way would the researchers have to pay money directly to Nvidia for this; they simply need a developer with the skills, or the ability to learn them. The ability to write a CUDA application is not something Nvidia holds for ransom.

If by "support" you mean paying for the developer's time, then it's no different from paying for OpenCL support: you're just paying a person to do the work. Having OpenCL is nice for compatibility (Nvidia, Intel and AMD GPUs from basically one source, just compiled differently) and it's a little easier for the development team, but CUDA would speed up processing on Nvidia cards. I'm fairly sure the researchers couldn't care less what apps or devices things run on; they just want the results as fast as possible. I'm not proposing to abandon OpenCL completely - keep it for the Intel and AMD GPUs that need it, since they don't have their own special platform - but it would be nice to have a CUDA app for Nvidia since it's faster. That said, not many projects supply both a CUDA and an OpenCL app, probably for dev-time reasons; they usually pick one or the other.

----------------------------------------
EPYC 7V12 / [5] RTX A4000
EPYC 7B12 / [5] RTX 3080Ti + [2] RTX 2080Ti
EPYC 7B12 / [6] RTX 3070Ti + [2] RTX 3060
[2] EPYC 7642 / [2] RTX 2080Ti
Falconet
Master Cruncher | Portugal | Joined: Mar 9, 2009 | Post Count: 3295 | Status: Offline
> [quoting the earlier post arguing the researchers would have to pay Nvidia for CUDA support]

The researchers built a CUDA app. They even got some support from Nvidia. The CUDA app was used on the Summit supercomputer, IIRC. I don't know why WCG didn't test that app, but it exists and it's on their GitHub.

Edit: Here's the link: https://github.com/ccsb-scripps/AutoDock-GPU - IIRC, the CUDA version has been mentioned there from the very first moment I opened that page, about a year ago.

----------------------------------------
AMD Ryzen 5 1600AF 6C/12T 3.2 GHz - 85W
AMD Ryzen 5 2500U 4C/8T 2.0 GHz - 28W
AMD Ryzen 7 7730U 8C/16T 3.0 GHz
[Edit 1 times, last edit by Falconet at Apr 8, 2021 3:12:40 PM]
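For the curious, building the CUDA flavour from that repository appears, from my reading of its README, to be a matter of pointing the Makefile at the CUDA toolkit and picking a work-group size. The paths below are placeholders for wherever CUDA is installed, and NUMWI=128 is just one of the sizes the README lists:

```sh
# Environment variables named in the AutoDock-GPU README; adjust to your CUDA install
export GPU_INCLUDE_PATH=/usr/local/cuda/include
export GPU_LIBRARY_PATH=/usr/local/cuda/lib64

# DEVICE=CUDA selects the CUDA build (DEVICE=OCLGPU for OpenCL); NUMWI sets work-items per group
make DEVICE=CUDA NUMWI=128
```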
Ian-n-Steve C.
Senior Cruncher | United States | Joined: May 15, 2020 | Post Count: 180 | Status: Offline
> [quoting Falconet's post above about the AutoDock-GPU CUDA app on GitHub]

Even better!

----------------------------------------
EPYC 7V12 / [5] RTX A4000
EPYC 7B12 / [5] RTX 3080Ti + [2] RTX 2080Ti
EPYC 7B12 / [6] RTX 3070Ti + [2] RTX 3060
[2] EPYC 7642 / [2] RTX 2080Ti