World Community Grid Forums
Thread Status: Active | Total posts in this thread: 169
KLiK
Master Cruncher | Croatia | Joined: Nov 13, 2006 | Post Count: 3108 | Status: Offline
> But then he wouldn't have really reached diamond on his own though.

I agree... I would probably be missing a 1y to 10y DIAMOND by the end of June! But still crunching... we need to finish that project!
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
More Mathurbation:

(FinalBatch - LastBatch) * BatchTasksGuess / ResultsPerDay
(908573 - 896611) * 1,000 / 700,000 = 17 days of work left (still)

596K results were validated over the last calendar day, at a runtime of 0.98 hrs/result... the day before was 1.0 hrs/result. Breaking it down to just the supposed remaining work for exp. 164, there would be 3.27 days of new work left [@ 700K results/day]. I cannot find any discussion of us having actually computed exp. 165-166, batches 898901-908573, so I wonder what's up with that. Anybody know, techs?

13 more days of full-time computing for 2x diamond, on 1 account, and scraping the barrel till the last unit.

Thx for the offers, but I've already had my say on boasting medals not achieved myself. Yes, multi-VMing, halving the clock speed and more on the final buffer are options. Have 4 completed projects all sitting at > 4 years, from distributing cycles with what I had.
----------------------------------------
[Edit 1 times, last edit by Former Member at Jun 9, 2015 7:55:38 AM]
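For anyone following along, here is a minimal Python sketch of the back-of-the-envelope estimate above; the batch numbers, the 1,000 tasks/batch guess and the 700K results/day rate are the figures quoted in this post, not official numbers.

```python
# Back-of-the-envelope estimate of remaining work, using the figures quoted above.
# All inputs are guesses from the thread, not official WCG numbers.

final_batch = 908573        # highest batch number believed to exist
last_batch = 896611         # latest batch seen in the field
tasks_per_batch = 1_000     # guessed tasks generated per batch
results_per_day = 700_000   # observed validated results per day

remaining_results = (final_batch - last_batch) * tasks_per_batch
days_left = remaining_results / results_per_day
print(f"~{days_left:.0f} days of work left")   # prints: ~17 days of work left
```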
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
> But then he wouldn't have really reached diamond on his own though.

> Who cares?

Then why care about reaching a milestone at all? If you care about getting the recognition, isn't it more satisfying to reach it yourself? Person A takes a test and gets 100% correct. Person B copied off of Person A and also got 100% correct. Who had the higher achievement?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Figured out how to get the 4770 a 1-day buffer running at 3700 MHz, so as to have a final 4-day buffer at 800 MHz.

As the warnings to viewers in Jackass 2 say: don't do this at home.
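The arithmetic behind this buffer stretch, as a rough sketch: the 3700 MHz and 800 MHz figures come from the post above, and the assumption that task runtime scales roughly inversely with core clock is a simplification of mine.

```python
# How a 1-day buffer fetched at full clock stretches once the CPU is underclocked.
# Assumes task runtime scales roughly inversely with core clock (a simplification).

fetch_clock_mhz = 3700          # clock while BOINC sizes and fills the buffer
run_clock_mhz = 800             # clock after underclocking
buffer_days_when_fetched = 1.0

buffer_days_when_running = buffer_days_when_fetched * fetch_clock_mhz / run_clock_mhz
print(f"~{buffer_days_when_running:.1f} days of work in the queue")  # ~4.6 days
```

That lands in the same ballpark as the ~4-day final buffer described above.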
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
For those who think this is a cheat: in this medium heatwave the system was running at 83°C, and at 800 MHz (to be precise, 798.16 MHz) it sits at 41°C [15 minutes later], with CoreTemp claiming the TDP is 11.7 watts. Think I'm going to run it at that from now till summer's over. Chuck the points.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
All my systems are queued to the max. I can't get any more work units due to BOINC limitations (1000 runnable tasks maximum).

21118 World Community Grid 6/9/2015 12:59:13 PM Not requesting tasks: too many runnable tasks
21119 World Community Grid 6/9/2015 12:59:17 PM Scheduler request completed

A total of 13,056 WUs are queued and it still may not be enough; 13,000 WUs only equate to about 400 days of run time. The 24-core machine will finish its 1,100 units in about 31 hours.

The 1000 doesn't seem to be a hard and fast limit, as most hosts have more than 1000; it ranges from 1,025 to over 1,100.

Still need 7 years to reach 100, so it's probably out of reach. Will finish with about 95 or 96 years, depending on re-runs.
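As a rough cross-check of those figures, here is a small Python sketch; the ~31 hours, 1,100 units and 13,056 WUs numbers are the ones quoted in the post, and treating every host as having the same per-result runtime is an assumption.

```python
# Rough cross-check of the queue figures quoted above.
# Assumes all hosts see a similar per-result runtime, which they won't exactly.

# 1,100 units finishing in ~31 hours on 24 cores implies:
hours_per_result = 31 * 24 / 1_100
print(f"~{hours_per_result:.2f} h per result")          # ~0.68 h

# Total run time represented by the whole queue at that rate:
queued_wus = 13_056
total_cpu_days = queued_wus * hours_per_result / 24
print(f"~{total_cpu_days:.0f} CPU-days queued")         # ~368 days, near the ~400 quoted
```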
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
The number is 35 in-progress (IP) tasks per running core, so 24 cores would be 840, and yes, AFAIK it's a hard limit for WCG in total, not per project. All my machines are at exactly the per-core max.
OldChap
Veteran Cruncher | UK | Joined: Jun 5, 2009 | Post Count: 978 | Status: Offline
See what happens when you put <ncpus>32</ncpus> in the cc_config.xml file on a rig with fewer than 32 threads. This is particularly good for rigs with fewer cores.

It should start working on 32 WUs at once and start downloading work for 32 threads * 35, if I remember correctly (1,120 WUs). Try this only if your rig can do that much work in 7 days.

Running fewer WUs at one time may be possible using app_config, but I am not sure if this will affect the number of downloads. This is all from memory; as I am running mostly big rigs on MCM now, I cannot test.
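For anyone wanting to try this, below is a minimal cc_config.xml sketch of the override OldChap describes; the 32 is just his example value, and the file goes in the BOINC data directory (reload it via the Manager's "Read config files" menu item or by restarting the client).

```xml
<!-- cc_config.xml - sketch of the <ncpus> override described above.
     32 is the example value from the post, not a recommendation. -->
<cc_config>
  <options>
    <!-- Tell the client to behave as if 32 CPUs are present, so it runs
         and fetches work for 32 threads even on a rig with fewer. -->
    <ncpus>32</ncpus>
  </options>
</cc_config>
```

The app_config.xml route he mentions is the per-project counterpart: its <max_concurrent> element caps how many tasks of an application run at once, though, as he says, whether that also limits how much work is fetched is a separate question.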
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Done the very same.

edit: And not to be fooled: when running 16 on an 8-core, you only get ~50% efficiency (Elapsed vs. CPU time), plus more overhead, so it helps buffering but not CPU time accumulation!

edit2: And, of course, the mode of operation is now high priority, as there are repairs in the queue that now feel they won't get processed if sticking to FIFO. It will take several days for the feeder to wise up that the < 24 hours return does not apply for the time being.
----------------------------------------
[Edit 4 times, last edit by Former Member at Jun 9, 2015 8:19:01 PM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Here's hoping seippel & co. did make a calc slip in the crunchers' favor, because if exp. 164 is truly the last, I'm still going to fall short by a few months.