World Community Grid Forums
Thread Status: Active | Total posts in this thread: 781
Bryn Mawr
Senior Cruncher | Joined: Dec 26, 2018 | Post Count: 346 | Status: Offline
I advocated previously for switching all the CPU WUs for OPN into GPU units. Since most members are crunching multiple projects, this would free up more processing power for those other projects without hindering the Open Pandemics research at all. I am aware that the GPU still requires some CPU power to be allocated, but you get far more work done per CPU thread when the bulk is done on the GPU.

Almost everyone would win from this scenario; the only ones who would lose out would be those hunting badges, but that could be got around by allocating notional time credits to GPU units, equivalent to the time the work would take to process on CPU only, in the same way that points credits have already been handled.

Time is as time does. If you contribute by processing 100 GPU WUs at 30 seconds each, then you have done 50 minutes of processing. To say that if you had done them on the CPU it would have taken 150 hours (about 90 minutes per unit) is ridiculous.
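To make the two credit schemes concrete, here is a back-of-the-envelope sketch; the 90-minutes-per-CPU-unit rate is merely what the 150-hour figure above implies, not an official WCG number:

```cuda
// Hypothetical illustration of actual vs. notional "time credit"
// (not WCG policy; the CPU rate is implied by the post above).
#include <stdio.h>

int main(void)
{
    const int    units             = 100;   // GPU work units completed
    const double gpu_secs_per_unit = 30.0;  // ~30 seconds each on a GPU
    const double cpu_mins_per_unit = 90.0;  // implied CPU-only rate

    double actual_minutes = units * gpu_secs_per_unit / 60.0;  // 50 minutes
    double notional_hours = units * cpu_mins_per_unit / 60.0;  // 150 hours

    printf("actual GPU time:     %.0f minutes\n", actual_minutes);
    printf("notional CPU credit: %.0f hours\n", notional_hours);
    return 0;
}
```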
siu77
Cruncher | Russia | Joined: Mar 12, 2012 | Post Count: 21 | Status: Offline
> I advocated previously for switching all the CPU WUs for OPN into GPU units.

It seems to me that this is technically infeasible: not all calculations can be transferred. As far as I know, calculations on the GPU are possible only with one arithmetic sign at a time. That is, on the CPU it happens like this: n1 + n2 + n3 + n4 ... = result. On the GPU, the values are first written into cells and then, for example, all added or all subtracted at the same time, which is why it is calculated faster. But if you need something like n1 + n2 - n3 * n4 ... = result, then GPU computing is not the best option. It makes sense for speeding up simultaneous additions, and those are not everywhere.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
If Open Pandemics were GPU only, I would do many more MCM work units. My proposal would raise the supply of OPNG work units above what it is now, though not as high as it was during the stress test, subject to the availability of work units.
squid
Advanced Cruncher | Germany | Joined: May 15, 2020 | Post Count: 56 | Status: Offline
> A CPU (central processing unit) works together with a GPU (graphics processing unit) to increase the throughput of data and the number of concurrent calculations within an application. GPUs were originally designed to create images for computer graphics and video game consoles, but since the early 2010s, GPUs can also be used to accelerate calculations involving massive amounts of data.

https://www.omnisci.com/technical-glossary/cpu-vs-gpu
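To make the "concurrent calculations" point concrete, here is a minimal CUDA sketch (an illustration only, unrelated to the actual OPNG application code) in which every element of a large array is handled by its own GPU thread, where a CPU loop would visit the elements one at a time:

```cuda
// Minimal data-parallel illustration (hypothetical, not OPNG code):
// one GPU thread per array element.
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void scale_add(const float *a, const float *b, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)                                      // guard the final partial block
        out[i] = 2.0f * a[i] + b[i];                // executed by all threads concurrently
}

int main(void)
{
    const int n = 1 << 20;                          // ~1M elements
    float *a, *b, *out;
    cudaMallocManaged(&a, n * sizeof(float));       // unified memory, visible to CPU and GPU
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    scale_add<<<(n + 255) / 256, 256>>>(a, b, out, n);  // enough 256-thread blocks to cover n
    cudaDeviceSynchronize();                            // wait for the GPU to finish

    printf("out[0] = %.1f\n", out[0]);              // expect 4.0
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

The same loop on a CPU runs its iterations in sequence (or a few dozen in parallel across cores and SIMD lanes), while the GPU dispatches thousands of threads at once, which is where the throughput advantage comes from.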
Falconet
Master Cruncher | Portugal | Joined: Mar 9, 2009 | Post Count: 3295 | Status: Offline
I also agree that it would be great if we did most of the work on the GPUs. AFAIK/IIRC, the current GPU and CPU work units are very similar, with the main difference being that GPU units pack much, much more work. In the future, the researchers hope to use the GPUs for the more complex molecules, which would take much more time on CPUs than they would on GPUs.

CPU isn't going away, but it would certainly be nice if, at some point, GPUs represented the bulk of the work being done in OPNG and the freed-up CPU power went to other projects.

Edit: There is a post from one of the researchers that describes the differences in CPU vs GPU. It's a bit old now (almost a year) but I'm guessing it's still valid.
----------------------------------------
AMD Ryzen 5 1600AF 6C/12T 3.2 GHz - 85W
AMD Ryzen 5 2500U 4C/8T 2.0 GHz - 28W
AMD Ryzen 7 7730U 8C/16T 3.0 GHz
[Edit 3 times, last edit by Falconet at May 5, 2021 9:57:40 AM]
adriverhoef
Master Cruncher | The Netherlands | Joined: Apr 3, 2009 | Post Count: 2172 | Status: Offline
> If Open Pandemics were GPU only, I would do many more MCM work units.

Of course, since there would not be any CPU workunits for OPN1: you can't crunch what isn't there.
[Edit 1 times, last edit by adriverhoef at May 5, 2021 10:48:57 AM]
m0320174
Cruncher | Joined: Feb 13, 2021 | Post Count: 11 | Status: Offline
> I also agree that it would be great if we did most of the work on the GPUs. AFAIK/IIRC, the current GPU and CPU work units are very similar, with the main difference being that GPU units pack much, much more work. In the future, the researchers hope to use the GPUs for the more complex molecules, which would take much more time on CPUs than they would on GPUs. CPU isn't going away, but it would certainly be nice if, at some point, GPUs represented the bulk of the work being done in OPNG and the freed-up CPU power went to other projects. Edit: There is a post from one of the researchers that describes the differences in CPU vs GPU. It's a bit old now (almost a year) but I'm guessing it's still valid.

It would be a shame if WCG continued to push OPN CPU work units if:
- OPN and OPNG work units fundamentally do exactly the same thing (same functionality)
- the performance of OPNG is X times better than OPN (that much is a fact)

I don't know whether the first condition is true, but if it is, then it would be a waste of resources to do OPN work on CPUs. There are other projects hungry for CPU power which cannot be converted to GPU. It's a matter of using the limited resources in the most effective way...
BladeD
Ace Cruncher | USA | Joined: Nov 17, 2004 | Post Count: 28976 | Status: Offline
Some are missing a very important point... those who want to work on this project AND don't have a GPU to use.
Falconet
Master Cruncher | Portugal | Joined: Mar 9, 2009 | Post Count: 3295 | Status: Offline
> Some are missing a very important point... those who want to work on this project AND don't have a GPU to use.

Considering the limited resources for the large computational needs of WCG and BOINC projects, and the fact that GPUs run the work far more efficiently, I think that's a stronger point than "choice" or "being able to contribute". If there's something that does the work a lot faster while using fewer resources, why choose the slower, less efficient option? Still, it's a non-issue since CPU work isn't going away. Just my opinion.

That said, everyone can get a free GPU at Google Colab, even if only for 12 hours every couple of days. I've seen it run an Nvidia K80, an Nvidia P4 (not recently, though), an Nvidia T4 and an Nvidia P100. They do good work, even if it's just the K80. I run it every now and then instead of my RX 550 or my laptop GPU.
----------------------------------------
AMD Ryzen 5 1600AF 6C/12T 3.2 GHz - 85W
AMD Ryzen 5 2500U 4C/8T 2.0 GHz - 28W
AMD Ryzen 7 7730U 8C/16T 3.0 GHz
[Edit 4 times, last edit by Falconet at May 5, 2021 12:12:16 PM]
poppinfresh99
Cruncher | Joined: Feb 29, 2020 | Post Count: 49 | Status: Offline
> It seems to me that this is technically infeasible: not all calculations can be transferred. As far as I know, calculations on the GPU are possible only with one arithmetic sign at a time. That is, on the CPU it happens like this: n1 + n2 + n3 + n4 ... = result. On the GPU, the values are first written into cells and then, for example, all added or all subtracted at the same time, which is why it is calculated faster. But if you need something like n1 + n2 - n3 * n4 ... = result, then GPU computing is not the best option. It makes sense for speeding up simultaneous additions, and those are not everywhere.

GPUs can do "n1 + n2 - n3 * n4 ... = result". The threads of a GPU can run far more complicated code than even this. GPUs do parallel computing in the sense that each thread of a work group (of, say, 32 threads) is on the same line of code as the others. Also, there is no such thing as the "cell" you describe: GPGPUs have RAM, caches, and cores just like a CPU. The difference is that the cores are grouped into work groups that run in parallel (Nvidia calls these work groups "warps").
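For instance, each thread can evaluate a mixed-operator expression, including branches, as ordinary sequential code; the parallelism is across data elements, not across arithmetic signs. A hypothetical CUDA kernel (illustrative only, not WCG code) showing the point:

```cuda
// Illustrative kernel (hypothetical, not from WCG/OPNG): every thread
// evaluates a mixed-operator expression on its own element, so GPUs are
// not limited to "one arithmetic sign" per operation.
__global__ void mixed_ops(const float *n1, const float *n2,
                          const float *n3, const float *n4,
                          float *result, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < count) {
        // Addition, subtraction, and multiplication in one expression,
        // executed as ordinary sequential code by each thread.
        float r = n1[i] + n2[i] - n3[i] * n4[i];
        if (r < 0.0f)        // branches work too, at some divergence cost
            r = -r;          // when threads in a warp take different paths
        result[i] = r;
    }
}
```

It would be launched the same way as the sketch in squid's post above, e.g. mixed_ops<<<(count + 255) / 256, 256>>>(n1, n2, n3, n4, result, count);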