micropro-be
Cruncher
Joined: Dec 9, 2019
Post Count: 10
GPU vs CPU - trying to compare time and complexity of work

Hi all,

I know I'm not the best when it comes to WCG research.

However, I'm using an NVIDIA Quadro P620 and I've tried some work units (still running as I write this).
On an Intel i5-9600K locked at 3.7 GHz, a task takes about 1 hour 30 minutes at most to finish.
With my Quadro, it takes about 11 minutes at most, which is roughly an 8x per-task speedup.

So the question is simple: if the Intel i5-9600K were to analyse the same thing as the Quadro P620, roughly how long would it take?
The Quadro P620 is not a cutting-edge card. It idles at 32°C and easily reaches 62°C under full WCG load with the stock cooler (I lost my beloved RTX 2060 three days ago while trying to fix its heatsink and fan).

I know a GPU can work on more complex tasks, but... does it really do better work than the processor with such a "tiny" card? Really?

Anyway, if somebody has an answer, let me know. I'll adjust my way of working.

Best regards,

micropro
[Apr 7, 2021 11:42:34 AM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Re: GPU vs CPU - trying to compare time and complexity of work

"I know I'm not the best when it comes to WCG research."

You know, rest assured, you're not the worst either.

🖖

/OT
[Apr 7, 2021 12:24:45 PM]
uplinger
Former World Community Grid Tech
Joined: May 23, 2005
Post Count: 3952
Re: GPU vs CPU - trying to compare time and complexity of work

The quick answer: a CPU is estimated to take about 20x longer if it were to attempt the same OPNG task.

Long answer: OPN1 (CPU) work units each run on a single core, so remember that an 8-core machine can run 8 work units at a time. The benefit of the GPU is that it uses all the cores in the card to run a single job within the work unit in parallel, which lets it process things faster. Also, the GPU version of AutoDock uses another enhancement that allows it to stop early once it reaches an answer.

So, an 8-core machine may take 3 hours 45 minutes to complete a task similar to the one your GPU ran in 11 minutes.
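To make that concrete, here is a back-of-the-envelope sketch in Python using the figures above (the 20x factor and the 8-core machine are estimates from this thread, not benchmarks):

    # Back-of-the-envelope comparison using the estimates quoted above.
    GPU_MIN_PER_TASK = 11   # Quadro P620 on an OPNG work unit
    CPU_SLOWDOWN = 20       # estimated CPU-vs-GPU factor for the same task
    CORES = 8               # example machine: one OPN1 work unit per core

    # One CPU core crunching the task the GPU finished in 11 minutes:
    cpu_min_per_task = GPU_MIN_PER_TASK * CPU_SLOWDOWN  # 220 min, about 3 h 40 min

    # Tasks per hour when every core runs its own work unit:
    gpu_per_hour = 60 / GPU_MIN_PER_TASK                # about 5.5
    cpu_per_hour = CORES * 60 / cpu_min_per_task        # about 2.2

    print(f"single core: {cpu_min_per_task / 60:.1f} h per task")
    print(f"GPU {gpu_per_hour:.1f} vs 8-core CPU {cpu_per_hour:.1f} tasks/hour")

So even with all eight cores busy, the single GPU still gets through more than twice as many of these tasks per hour.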

Thanks,
-Uplinger
[Apr 7, 2021 2:01:13 PM]
micropro-be
Cruncher
Joined: Dec 9, 2019
Post Count: 10
Re: GPU vs CPU - trying to compare time and complexity of work

Thank you for your answer.

The example of eight cores taking much more time than the GPU's 11 minutes convinces me to use the Quadro.

Actually, I was worried that my graphics card was too old to get a good result-to-time ratio, but it seems OK.

Thanks again for the reply.

Best regards,

micropro

PS: I might make some mistakes in English, but it's not my native language, sorry :)
[Apr 7, 2021 7:25:50 PM]
uplinger
Former World Community Grid Tech
Joined: May 23, 2005
Post Count: 3952
Re: GPU vs CPU - trying to compare time and complexity of work

One thing to note, though: we are only able to send out 200 batches a day for OPNG (GPU). That means only about 81k work units are available, which makes them harder to get. The CPU version is still sending out the same amount it was before. In the future, the GPU will encounter more difficult work units that should cause it to run longer than what you are seeing right now. Your computer should be able to handle both at the same time, which means that by running both you'll be adding the equivalent of an extra machine computing results for OpenPandemics.
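Some rough numbers behind "an extra machine", again a sketch built only from figures quoted in this thread (the per-batch size is simply 81k divided by 200 batches; the timings are the estimates above, not benchmarks):

    # Rough numbers using figures quoted in this thread (estimates).
    BATCHES_PER_DAY = 200
    WUS_PER_DAY = 81_000
    print(f"about {WUS_PER_DAY // BATCHES_PER_DAY} work units per OPNG batch")  # 405

    # Daily output if one host runs OPN1 on 8 CPU cores *and* OPNG on the GPU:
    cpu_tasks_per_day = 8 * 24 * 60 / 220  # 220 min per task per core (20x estimate)
    gpu_tasks_per_day = 24 * 60 / 11       # 11 min per task on the Quadro P620

    print(f"CPU alone: about {cpu_tasks_per_day:.0f} tasks/day")
    print(f"CPU + GPU: about {cpu_tasks_per_day + gpu_tasks_per_day:.0f} tasks/day")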

Also, English is my native language, and I make many mistakes :) And your posts were good in my eyes :)

Thanks for the questions and participation in World Community Grid.
-Uplinger
[Apr 7, 2021 10:41:30 PM]
Ian-n-Steve C.
Senior Cruncher
United States
Joined: May 15, 2020
Post Count: 180
Re: GPU vs CPU - trying to compare time and complexity of work

"In the future, the GPU will encounter more difficult work units that should cause it to run longer than what is being seen right now."

Two questions:

1. How difficult/long will these be? By "more difficult", do you mean more packaged sub-tasks, or longer/harder sub-tasks? Just curious what kind of plans you have for these. Are we talking runtimes of hours or days for a single task?
2. Any estimated time frame for when we might see these harder tasks?
----------------------------------------

EPYC 7V12 / [5] RTX A4000
EPYC 7B12 / [5] RTX 3080Ti + [2] RTX 2080Ti
EPYC 7B12 / [6] RTX 3070Ti + [2] RTX 3060
[2] EPYC 7642 / [2] RTX 2080Ti
[Apr 9, 2021 3:51:53 PM]