World Community Grid Forums
Thread Status: Active | Total posts in this thread: 2370
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
As of last night there were 200,720 validated. My best thumb guess is that between 30 and 50 percent of the remaining 100,000+ WUs are already in but MIA, sitting while waiting on the wingman. I've got 124 in Pending Validation myself, looking out.
Curtailing to 1 per core like Beta? I think it could lead to even longer completion times for the whole set, as for most they would just sit at the end of the queue for longer than they take to compute. I've done no sums to figure that, but DDDT2 C-Type runs are a magnitude larger than any Beta. --//--
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Weird, so it appears my 2-day cache somehow turned into ~4.5 days for this device. In my experience, there is something not quite right with the time estimates from WCG. On my main rig, most units are estimated to run, and do take, 3 hours. But with shorter units, it finishes them in less time than estimated. As programmed, BOINC then reduces the estimate on all WUs, so 3-hour WUs are now estimated lower. BOINC fills the cache with the new, artificially low estimates. When it next runs a long WU, the estimates reset and I end up with a cache larger than programmed. It happens. Just keep crunching.
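The effect described above can be sketched in a few lines of plain Python. This is a toy model with made-up numbers (3-hour long tasks, a 1-hour corrected estimate, a 2-day cache), not BOINC's actual scheduler code, but it shows how filling a cache against a dragged-down shared estimate overshoots once real runtimes return:

```python
# Toy model (invented numbers): short tasks finishing faster than estimated
# pull the client's one shared estimate down; the cache is then filled by
# *estimated* hours, so it holds far more *real* hours than configured.

def fill_cache(cache_hours, est_hours_per_task):
    """Queue tasks until their estimated hours cover the cache."""
    return int(cache_hours / est_hours_per_task)

true_hours = 3.0     # long tasks really take ~3 h each
cache_hours = 48.0   # a 2-day cache

# A batch of short tasks finishes in 1 h instead of 3 h, so the client
# scales its single shared estimate down by a factor of 3:
low_est = true_hours * (1.0 / 3.0)        # apparent 1.0 h per task

n = fill_cache(cache_hours, low_est)      # tasks queued against the low estimate
real_hours_queued = n * true_hours        # what is actually in the queue
print(n, real_hours_queued / 24)          # 48 tasks, 6.0 days of real work
```

With these invented numbers a 2-day cache silently becomes 6 days of work, the same overshoot pattern as the ~4.5 days reported above.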
Jean-David Beyer
Senior Cruncher | USA | Joined: Oct 2, 2007 | Post Count: 338 | Status: Recently Active
just check the "Results Status" page, select Valid and count the WUs (15 results per page) *wink* Do the same for Pending Validation and so on

OK. I thought there might be a web page that already did this. I have so few results (one page, total), and they do not include the history, so I can do it by eye. Valid: 6. Pending Validation: 2. Nothing else. Up to 2011 I got 4 work units; all together I got 12. I do not know if that includes the Pending Validation ones or not. So 2012 is a lot better than in the past. For DDDT-1, I got 265 work units, so DDDT-2 seems stingy by comparison.
mikey
Veteran Cruncher | Joined: May 10, 2009 | Post Count: 824 | Status: Offline
As of last night there were 200,720 validated. My best thumb guess is that between 30 and 50 percent of the remaining 100,000+ WUs are already in but MIA, sitting while waiting on the wingman. Got 124 in PV looking out. Curtailing to 1 per core like Beta? Think it could lead to even longer completion times for the whole set, as for most they would just sit at the end of the queue for longer than they take to compute. Done no sums to figure that, but DDDT2 C-Type runs are a magnitude larger than any Beta. --//--

They could crank up the priority a little or shorten the deadlines so they get crunched sooner; those would be two easy ways to get through them quicker. Give each unit a 2- or even 3-day deadline, one at a time per CPU so you won't get more than you can handle, and they should be done fairly quickly.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Weird, so it appears my 2-day cache somehow turned into ~4.5 days for this device. In my experience, there is something not quite right with the time estimates from WCG. On my main rig, most units are estimated to run, and do take, 3 hours. But with shorter units, it finishes them in less time than estimated. As programmed, BOINC reduces the estimate on all WUs, so 3-hour WUs are now estimated lower. BOINC then fills the cache with the new, artificially low estimates. When it runs a long WU, the estimates reset and I now have a cache larger than programmed. It happens. Just keep crunching.

The WCG time estimates are based on "per-science" current averages of returned work; the exception is DDDT2, where the C-Type 's' and 'p' units are each given their own average estimated TTC, so the former arrives here with about 1/3rd of the run time of the latter. By contrast, the BOINC client itself is clueless and just uses one single Duration Correction Factor [DCF] to compensate for the whole of WCG's 10 science variations. An impossible task for the client, as long as the developers keep that multi-year-old ticket to fix this [1 DCF per science application] on the shelf. Testing the latest greatest, and nothing has improved. No idea when they will [or adopt code that 3rd-party developers seem to have put in a private distro of 6.10.58]. Sadly, since the next client has an entirely new scheduler, rebuilt from the ground up, that code could also prove unfit to integrate, but at least we know it can be done. One of the major laments in crunching... the huge cache total estimated TTC, which is why small caches for day-to-day crunching are recommended... not above 1 day. --//--
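Why a single DCF cannot fit several sciences at once can be shown with a toy calculation. The runtimes below are invented for illustration, not real WCG numbers; the point is only that one multiplier can be right for at most one actual/estimated ratio at a time:

```python
# Toy illustration (invented numbers): the client's one DCF multiplies
# EVERY server estimate, so if the sciences have different
# actual-vs-estimated ratios, correcting one science corrupts the others.

server_est = {"DSFL": 2.0, "C-Type s": 1.5, "C-Type p": 4.5}  # hours, hypothetical
actual     = {"DSFL": 4.0, "C-Type s": 1.5, "C-Type p": 4.5}  # hours, hypothetical

# Suppose the single DCF has converged on DSFL's ratio of 2x:
dcf = actual["DSFL"] / server_est["DSFL"]   # 2.0

for sci in server_est:
    corrected = server_est[sci] * dcf
    print(f"{sci}: client estimate {corrected} h, actual {actual[sci]} h")
# DSFL is now right (4.0 h), but both C-Type estimates are doubled:
# "C-Type s" shows 3.0 h for 1.5 h of work, "C-Type p" 9.0 h for 4.5 h.
```

A per-science DCF, as in the shelved ticket mentioned above, would simply keep one such ratio per application instead of one global one.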
BSD
Senior Cruncher | Joined: Apr 27, 2011 | Post Count: 224 | Status: Offline
... which is why small caches for day to day crunching are recommended... not above 1 day

That's normally what I do, with the exception of DDDT2 runs, which I set at a 2-day cache. I just put my 4-core device, the one I recently posted about, back on the "Default" device profile with a .25-day cache and DSFL selected, hit update, and it promptly downloaded 60 DSFL WUs with an estimated 4.75 hrs per WU to complete. So that's 60 / 4 * 4.75 = 71.25 hrs. 71.25 hrs is nowhere near the 6 hrs of a .25-day cache; talk about an "estimate".

[Edit 1 times, last edit by BSD at Jan 22, 2012 6:52:14 PM]
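A quick check of the arithmetic in the post above, using only the numbers it gives (60 tasks, 4 cores, 4.75 h estimated each, a 0.25-day cache):

```python
# Reproduce the post's arithmetic: hours of queued work per core
# versus what a 0.25-day cache should actually hold.
tasks, cores, est_hours = 60, 4, 4.75
queued_hours = tasks / cores * est_hours   # 60 / 4 * 4.75
cache_hours = 0.25 * 24                    # the requested cache, in hours
print(queued_hours, cache_hours)           # 71.25 6.0
```

So the client queued nearly 12x the configured cache, which is the mismatch the post is complaining about.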
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Don't be surprised. When you changed your profile from a high cache to a low cache and at the same time added DSFL back, your client did not know of the lower cache setting *yet*, but the server knew instantly to send DSFL if asked... You hit update, BOINC asked for work, and during that handshake it also received the new profile settings, so next time it will follow 0.25 days per core. Just look in the message log for a line like this when you hit the button:
12190 World Community Grid 22-1-2012 7:40:17 [sched_op] CPU work request: 79.83 seconds; 0.00 CPUs

How many seconds does yours show? --//--
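Roughly speaking (this is a simplification for illustration, not the exact formula in the BOINC client), that "CPU work request: N seconds" figure is the shortfall between the configured cache and the estimated work already queued:

```python
# Simplified sketch (NOT BOINC's exact scheduler code) of what the
# "[sched_op] CPU work request: N seconds" number represents:
# how many estimated seconds of work are still needed to top up the cache.
def work_request_seconds(cache_days, ncpus, queued_est_seconds):
    wanted = cache_days * 86400 * ncpus      # cache target, in seconds
    return max(0.0, wanted - queued_est_seconds)

# A nearly full 0.25-day cache on one CPU leaves only a tiny request,
# comparable to the 79.83 s in the log line quoted above:
print(work_request_seconds(0.25, 1, 21520.17))   # ~79.83
```

This is also why the requested amount should drop after each batch of 15 tasks arrives: every receipt raises `queued_est_seconds` toward the cache target.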
BSD
Senior Cruncher | Joined: Apr 27, 2011 | Post Count: 224 | Status: Offline
12190 World Community Grid 22-1-2012 7:40:17 [sched_op] CPU work request: 79.83 seconds; 0.00 CPUs

Didn't see similar in my log, but found these related to getting WUs:

---snip---
1/22/2012 1:28:27 PM | World Community Grid | update requested by user
1/22/2012 1:28:30 PM | World Community Grid | Sending scheduler request: Requested by user.
1/22/2012 1:28:30 PM | World Community Grid | Requesting new tasks for CPU
1/22/2012 1:28:32 PM | World Community Grid | Scheduler request completed: got 15 new tasks
1/22/2012 1:28:32 PM | World Community Grid | New computer location:
1/22/2012 1:28:32 PM | World Community Grid | General prefs: from World Community Grid (last modified 19-Jan-2012 19:29:11)
1/22/2012 1:28:32 PM | World Community Grid | Host location: none
1/22/2012 1:28:32 PM | World Community Grid | General prefs: using your defaults
1/22/2012 1:28:32 PM | | Reading preferences override file
1/22/2012 1:28:32 PM | | Preferences:
1/22/2012 1:28:32 PM | | max memory usage when active: 6143.38MB
1/22/2012 1:28:32 PM | | max memory usage when idle: 7372.06MB
1/22/2012 1:28:32 PM | | max disk usage: 10.00GB
1/22/2012 1:28:32 PM | | don't use GPU while active
1/22/2012 1:28:32 PM | | (to change preferences, visit the web site of an attached project, or select Preferences in the Manager)
---snip---
1/22/2012 1:28:43 PM | World Community Grid | Sending scheduler request: To fetch work.
1/22/2012 1:28:43 PM | World Community Grid | Requesting new tasks for CPU
1/22/2012 1:28:45 PM | World Community Grid | Scheduler request completed: got 15 new tasks
---snip---
1/22/2012 1:28:57 PM | World Community Grid | Sending scheduler request: To fetch work.
1/22/2012 1:28:57 PM | World Community Grid | Requesting new tasks for CPU
1/22/2012 1:28:59 PM | World Community Grid | Scheduler request completed: got 15 new tasks
---snip---
1/22/2012 1:29:10 PM | World Community Grid | Sending scheduler request: To fetch work.
1/22/2012 1:29:10 PM | World Community Grid | Requesting new tasks for CPU
1/22/2012 1:29:11 PM | World Community Grid | Scheduler request completed: got 13 new tasks
---snip---
1/22/2012 1:29:23 PM | World Community Grid | Sending scheduler request: To fetch work.
1/22/2012 1:29:23 PM | World Community Grid | Requesting new tasks for CPU
1/22/2012 1:29:25 PM | World Community Grid | Scheduler request completed: got 1 new tasks
---snip---
1/22/2012 1:29:36 PM | World Community Grid | Sending scheduler request: To fetch work.
1/22/2012 1:29:36 PM | World Community Grid | Requesting new tasks for CPU
1/22/2012 1:29:38 PM | World Community Grid | Scheduler request completed: got 1 new tasks

"Reading preferences override file" - Don't know why it says this; I don't have anything set locally. I even hit "Clear" and "Update" again, and this "reading preferences override file" line still appeared in the log. Other than the BOINC Manager, I'm running TThrottle, but it's running at 100%.
breathesgelatin
Advanced Cruncher | Joined: Aug 5, 2006 | Post Count: 117 | Status: Offline
I posted the following in the "Work Unit Types" thread back at the end of November and never got a response. Does anyone know the answer to this question?
Do we know all the names of the targets for this project? And could there be a recap of where we stand on the various targets at the moment? I know there were several targets that were run for verification/norming purposes... and now we are working on the "real" targets... If I recall correctly, so far we've worked on dg01-05... Have we completely finished any of the targets?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
BSD,

Don't know about the override file and why it's still being logged, but at the very least look at what the local preferences have as an additional buffer and knock it down. You could go into the BOINC data directory and delete the file manually, then restart the client. You did not say which profile this host is linked to, but your log says none, and in that case the client assumes default:

1/22/2012 1:28:32 PM | World Community Grid | Host location: none
1/22/2012 1:28:32 PM | World Community Grid | General prefs: using your defaults

Before that, this line prints, which does tell us that the profile change was received:

1/22/2012 1:28:32 PM | World Community Grid | General prefs: from World Community Grid (last modified 19-Jan-2012 19:29:11)

As for the log entry I asked you to look for, sorry, I forgot that it comes from an extra flag I've permanently set: <sched_op_debug>1</sched_op_debug>. It helps to see how much work is asked for at each request. The number should drop with each sequential call and receipt of 15. At any rate, it looks like your client never accepted the web prefs. Here's some more interesting info it prints, such as the server acknowledgement when a Ready to Report has successfully cleared, and why there is a time deferral:

6311 World Community Grid 22-1-2012 20:38:59 [sched_op] Server version 601
6312 World Community Grid 22-1-2012 20:38:59 Project requested delay of 11 seconds
6313 World Community Grid 22-1-2012 20:38:59 [sched_op] handle_scheduler_reply(): got ack for task dg05_b361_sr56a0_0
6314 World Community Grid 22-1-2012 20:38:59 [sched_op] Deferring communication for 11 sec
6315 World Community Grid 22-1-2012 20:38:59 [sched_op] Reason: requested by project

We may conclude that poor runtime estimates for DSFL were not why the client got so much work against your will. Let us know. --//--

edit: The file to delete would be global_prefs_override.xml

[Edit 1 times, last edit by Former Member at Jan 22, 2012 8:29:19 PM]
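For anyone wanting the same `[sched_op]` lines in their own log: the `<sched_op_debug>` flag mentioned above is enabled in the client's cc_config.xml, inside the `<log_flags>` section (a minimal sketch; the surrounding layout is the standard cc_config.xml structure of BOINC clients of that era, and the client must be restarted or told to re-read the config for it to take effect):

```xml
<cc_config>
  <log_flags>
    <sched_op_debug>1</sched_op_debug>
  </log_flags>
</cc_config>
```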