World Community Grid Forums
Thread Status: Active | Total posts in this thread: 438
Correcaminos
Cruncher | Spain | Joined: Dec 17, 2013 | Post Count: 7 | Status: Offline
> But are they going to put up new tasks?
>
> Hi Correcaminos, as yet no date has been stated, although now that we're basically past the Christmas/New Year holiday period, hopefully it won't be long before we hear some more news at least.

Ok, thanks for the news. I'll carry on with Mapping Cancer Markers in the meantime.
[Edit 1 times, last edit by Correcaminos at Jan 5, 2015 12:45:16 AM]
Sandvika
Advanced Cruncher | United Kingdom | Joined: Apr 27, 2007 | Post Count: 112 | Status: Offline
As my train of thought derailed earlier, I wanted to come back and confirm what Eric was saying: with a relatively speedy system, it is impossible to get a queue longer than two days. This is correct.

My PC has 12 cores, 24 hyperthreads. I had been running 12 threads of WCG, because they get allocated one to each physical core, and the cores then turbo-boost (built-in automatic overclocking) and crunch very fast. It also leaves 12 hyperthreads free for my own work on the computer without any noticeable performance decrease. However, this left me running out of OET WUs a lot of the time, so I changed it to 24 threads of WCG, i.e. two per physical core, at which point there is no longer any turbo-boost and they crunch at a more sedate pace. The overall number of WUs processed is pretty similar (I think about 10% more with 24 threads); the crucial difference was that I was no longer running out of OET WUs as often, because I was being sent a lot more.

It hadn't occurred to me before, but now I realise that the badges come twice as fast this way, so I could have got 10 years of MCM in the time I notched up 5. The logical extension of this is to do as little work as possible in as long a time as possible if you want to get diamond badges easily. Putting it another way, the badges should really be awarded for points earned, not for elapsed CPU time, to reflect the actual contribution made to science.
JmBoullier
Former Community Advisor | Normandy - France | Joined: Jan 26, 2007 | Post Count: 3715 | Status: Offline
> ... so I changed it to 24 threads of WCG, i.e. two per physical core, at which point there is no longer any turbo-boost and they crunch at a more sedate pace.

This is wrong. If feeding one core with one WU is enough to trigger the turbo-boost (and it actually is), asking it to process two at the same time will trigger it just as much.

> The logical extension of this is to do as little work as possible in as long a time as possible ...

This is wrong too. As long as your hyperthreaded WUs do not need more than twice the time it takes to process a non-hyperthreaded one, the amount of work done over a given period of time is greater in hyperthreaded mode. As you stated, the benefit is not much for OET WUs, because they use the processor very well and don't leave many unused CPU cycles, but still, even if it is only 10% more WUs per day, these 10% are worth taking.

Regarding the effect on badge-hunting fairness you are right, but that is due to the way hyperthreaded processors report CPU time, and that is another story.
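The break-even condition stated above can be checked with simple arithmetic. A small sketch — the per-WU times here are made-up illustrative numbers, not measurements from the thread; only the ~10% figure comes from the posts:

```python
# Hyperthreading throughput break-even: running 2 WUs per core wins
# as long as a hyperthreaded WU takes less than twice as long as a
# non-hyperthreaded one. Times below are illustrative, not measured.

def throughput(threads, hours_per_wu):
    """Work units completed per wall-clock day."""
    return threads * 24 / hours_per_wu

t_single = 4.0            # hours per WU at 12 threads (1 per core, turbo on)
t_hyper = 4.0 * 2 / 1.1   # hours per WU at 24 threads (~10% more total output)

wu_12 = throughput(12, t_single)   # 72 WUs/day
wu_24 = throughput(24, t_hyper)    # ~79.2 WUs/day

assert t_hyper < 2 * t_single      # break-even condition holds
assert wu_24 > wu_12               # so 24 threads do more science per day
```

With these numbers the hyperthreaded WU takes ~1.8x (not 2x) as long, so doubling the thread count still comes out ahead on work per day, exactly as the post argues.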
dango
Senior Cruncher | Joined: Jul 27, 2009 | Post Count: 307 | Status: Offline
> If you get off on collecting badges by cheating, then this is probably the wrong community to be in.

> It is not cheating, it is just the way to get more work.

Changing the CPU count is not cheating?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
If you do a target search you will find a fairly clear statement of the technicians' position on intentionally running slower to stretch the hours on limited-availability tasks. Creating, through the config, more threads than the CPU really has in order to grab more CPU tasks is not in the spirit either. Being a potential added source of task failures, it could also lead to chasing fault reports that are not real faults at all.
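For context, the override being alluded to is presumably BOINC's `<ncpus>` option in `cc_config.xml`, which makes the client report a different CPU count than the hardware actually has. Shown here only to illustrate what the post is warning against, not as a recommendation:

```xml
<!-- cc_config.xml: BOINC client configuration.
     <ncpus> overrides the detected CPU count (default -1 = use actual).
     The post above argues that inflating it to fetch extra tasks is
     against the spirit of the project and risks extra task failures. -->
<cc_config>
  <options>
    <ncpus>48</ncpus>  <!-- pretend a 24-thread machine has 48 CPUs -->
  </options>
</cc_config>
```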
Sandvika
Advanced Cruncher | United Kingdom | Joined: Apr 27, 2007 | Post Count: 112 | Status: Offline
> This is wrong. If feeding one core with one WU is enough to trigger the turbo-boost (and it actually is), asking it to process two at the same time will trigger it just as much.

You'll have to help me out then, because I don't understand - it looks simple enough on this chart where I varied the number of WCG threads (all UGM threads at the moment). The heavy line is total MHz; the other lines are all %CPU per thread, except the almost invisible aggregate %CPU, which is flat-lining at 100% whenever 12+ threads are crunching. The ~10% step between 24 threads and 12 threads is pretty clear, and 18 threads splits the difference as you would expect.

However, it also seems that full turbo-boost is required for any thread to hit 100% CPU, since from the moment 12+ threads are crunching, none of them get there. There's an apparent inverse relationship between the number of CPU-bound threads and the maximum %CPU, such that by the time all 24 hyperthreads are crunching there's almost no variability and they appear to average about 55%.

[chart: %CPU per thread and total MHz vs. number of WCG threads]
Thus I surmised that the turbo-boost is decreased as use of the second hyperthread per core increases, presumably to remain within the thermal dissipation limit. From BOINC's perspective, in terms of accounting for CPU time, it appears that anything >= 55% looks like 100%, so running 12 threads for 24 hours will add 12 days to the stats, whereas running 24 threads for 24 hours will add 24 days to the stats but only do 10% more work!
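The accounting gap described above can be put in numbers. A small sketch — the ~10% throughput gain is the figure from the posts above; everything else follows from it:

```python
# CPU-time accounting vs. actual science done, per the thread's figures:
# 12 threads for 24h credits 12 CPU-days; 24 threads for 24h credits
# 24 CPU-days but completes only ~10% more work units.

def cpu_days(threads, hours=24):
    """CPU-days credited to badge stats for a crunching session."""
    return threads * hours / 24

work_12 = 1.0          # normalised science output, 12 threads
work_24 = 1.1          # ~10% more with 24 threads (figure from the thread)

days_12 = cpu_days(12)  # 12 CPU-days credited
days_24 = cpu_days(24)  # 24 CPU-days credited

# Badge-days credited per unit of science:
cost_12 = days_12 / work_12   # 12.0
cost_24 = days_24 / work_24   # ~21.8

assert cost_24 / cost_12 > 1.8  # badges accrue ~1.8x faster per WU done
```

In other words, per unit of actual work, the hyperthreaded setup is credited roughly 1.8x the badge time - which is the fairness issue JmBoullier attributes to how hyperthreaded processors report CPU time.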
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Check out Dr. Saphire's latest post in the "Welcome to the Project" thread. She sheds light on where they are, the challenges, etc. Maybe we will get the chance to help them again in February...
https://secure.worldcommunitygrid.org/forums/...ad,37519_offset,30#480478 |
I need a bath
Senior Cruncher | USA | Joined: Apr 12, 2007 | Post Count: 347 | Status: Offline
As a scientist myself, I predict no more work for a few months at least. If the person they hired only starts in February, I can't imagine them coming up with testable models immediately. And honestly, you want them to take their time and put out models with the highest probability of success. It's possible that they underestimated how quickly the grid would burn through the first batch of work. Or it may have been necessary for their plans to get an early demonstration of successful grid-based science, to bring collaborators and financial backers on board. This is just the way things work, not a mistake or an attempt to mislead people.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Good points there, and they sound reasonable. I've got to say I'm in no hurry on this project, other than (like others) I'd like to see this disease solved. I just got over the flu (Type A) despite having had the shot this year. After having that virus kick my butt for over a week (onset and triggered problems), I'm all for a solution to that virus as well.

I have four other well-deserving projects to work on here at WCG in the meantime, and I look forward to picking this one up again whenever they are ready to go.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Don't forget there are 2 parts to the initial OET work.
1) The rather small set that was processed in 'Rigid' mode - the 560K results we've done.
2) The held-back 'Flex' mode tasks, which run very long (depending on the speed of the device, over 48 hours) and do -not- checkpoint. This is orders of magnitude more work than part 1.

The latter is what the technicians are working on [zero-info guess]: modifying the AutoDock Vina program so that checkpoints are taken at reasonable intervals. Intermediate checkpointing within a single docking run is novel and daunting - the model has to be restored after resume without upsetting a single bit.
[Edit 1 times, last edit by Former Member at Jan 5, 2015 5:51:05 PM]
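The checkpoint-and-resume pattern described above can be sketched generically. This is not AutoDock Vina's actual code (Vina is C++, and its internals are not shown in this thread); it is a minimal Python illustration of the idea: periodically serialise the full search state, including the RNG state, so a resumed run produces bit-for-bit the same result as an uninterrupted one.

```python
import os
import pickle
import random

CHECKPOINT = "docking.ckpt"

def run(steps=300, every=100):
    """Toy 'docking search' that checkpoints every `every` steps and can
    resume exactly: the RNG state is saved along with the partial result."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            state = pickle.load(f)
        random.setstate(state["rng"])      # restore the RNG stream exactly
    else:
        state = {"step": 0, "best": float("inf"), "rng": None}

    for step in range(state["step"], steps):
        energy = random.random()           # stand-in for one docking trial
        state["best"] = min(state["best"], energy)
        if (step + 1) % every == 0:
            state["step"] = step + 1
            state["rng"] = random.getstate()
            with open(CHECKPOINT + ".tmp", "wb") as f:
                pickle.dump(state, f)
            os.replace(CHECKPOINT + ".tmp", CHECKPOINT)  # atomic swap
    return state["best"]
```

Writing to a temporary file and atomically renaming it means a half-written checkpoint can never be loaded if the client is killed mid-write - one of the details that makes "resume without upsetting a single bit" hard to get right.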