World Community Grid Forums
Thread Status: Active | Total posts in this thread: 58
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
The 5959 batch is running about three times longer than the previous batches, so once again the WU run-times have lengthened.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
@ technicians: are the work generators working with two different mean-runtime projections? There are older posts about flip-flopping runtime projections on newly received tasks. I noticed the 6205 batch was running shorter again and decided to pile up stock. After setting a safe 5 days' worth, the tasks came in with a supposed 8:26 hours runtime (really just 2-3 hours on this device), yet some in between are showing 3:19 hours. Any logic as to why that could be? Sorting the buffer, 4 out of 144 tasks for this batch do this, which is much closer to the real runtime than 8:26 hours.
7.32 mcm1 MCM1_0006205_9580_0 - (-) 0,00 0,000 08:26:38 06d,23:54:10 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_8194_1 - (-) 0,00 0,000 03:19:02 06d,23:54:10 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_9598_1 - (-) 0,00 0,000 08:26:38 06d,23:54:11 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_9394_0 - (-) 0,00 0,000 08:26:38 06d,23:54:11 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_9675_0 - (-) 0,00 0,000 08:26:38 06d,23:54:10 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_9324_1 - (-) 0,00 0,000 08:26:38 06d,23:54:10 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_9013_0 - (-) 0,00 0,000 03:19:02 06d,23:54:10 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_9608_0 - (-) 0,00 0,000 08:26:38 06d,23:54:10 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_9121_1 - (-) 0,00 0,000 08:26:38 06d,23:54:10 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_8170_1 - (-) 0,00 0,000 08:26:38 06d,23:54:11 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_7215_1 - (-) 0,00 0,000 08:26:38 06d,23:54:10 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_9364_1 - (-) 0,00 0,000 08:26:38 06d,23:54:10 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_8674_1 - (-) 0,00 0,000 08:26:38 06d,23:54:10 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_9442_0 - (-) 0,00 0,000 08:26:38 06d,23:54:10 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_9715_1 - (-) 0,00 0,000 08:26:38 06d,23:54:11 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_9564_1 - (-) 0,00 0,000 03:19:02 06d,23:54:10 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_8298_1 - (-) 0,00 0,000 03:19:02 06d,23:54:10 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB
7.32 mcm1 MCM1_0006205_9053_1 - (-) 0,00 0,000 08:26:38 06d,23:54:10 7/25/2014 10:59:19 AM Ready to start 2751017 0.00 MB 0.00 MB

I'm running a version 7 client, where no amount of hacking and scripting will make the duration correction factor variable, i.e. based on actual returned times instead of what WCG tells the client, so the projections hugely lag the actuals; with 3 weeks of MCM preloaded on the server side, that would be no surprise. Although that does not seem to square with the creation dates on the tasks: the last 6205 received says 07/24/2014 00:12:50, i.e. we're being sent work that was prepared for feeding less than 36 hours ago. The last-day average for the project was 4:11 hours, so 3:19 is much closer to reality than 8:26, which means I now see 6:09 days of work per computing thread instead of the realistic 2-3 days.

The point of the query is: members are still complaining of huge overcaching due to the intermittent shorts being distributed, which you stop-gapped by limiting devices to something like 25 tasks per active thread. I have 5 days set, but in reality this is not even 2 days' worth of work. Lol, one way to stop members from buffering too much and to improve return times.

Not betting on it, but thanks for reading, and responding.
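[Editor's note: for illustration only, a minimal Python sketch, not anything from the WCG or BOINC code base. The 4-of-144 split, the two projections, and the 4:11 last-day average are taken from the post above; the 8-thread count is an assumption. It converts the projected runtimes into days of work per thread and compares them with what the observed average implies.]

```python
# Hypothetical sketch: compare projected vs. observed days of buffered work per thread.
from collections import Counter

def to_hours(hms):
    """Convert an H:MM:SS string to fractional hours."""
    h, m, s = (int(x) for x in hms.split(":"))
    return h + m / 60 + s / 3600

# Counts quoted in the post: 4 of 144 buffered 6205 tasks show 3:19:02,
# the remainder show 8:26:38.
buffer_counts = Counter({"08:26:38": 140, "03:19:02": 4})

projected_hours = sum(to_hours(t) * n for t, n in buffer_counts.items())
actual_hours = 144 * to_hours("4:11:00")   # last-day average quoted in the post

threads = 8                                # assumed core count, for illustration
print(f"projected: {projected_hours / 24 / threads:.1f} days/thread")
print(f"observed:  {actual_hours / 24 / threads:.1f} days/thread")
```

Under those assumptions the projected figure lands near the 6+ days per thread the poster reports, while the observed average gives roughly half that.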
uplinger
Former World Community Grid Tech | Joined: May 23, 2005 | Post Count: 3952 | Status: Offline
lavaflow,
MCM1 is not a project that is split up by our estimator. These are direct workunits from the researchers. We provide statistical data back to them with each result so they can better size the workunits on their end. As for some running short and some running long, this is probably to keep the result sizes within reason.

Thanks,
-Uplinger
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
All tasks, now 182 MCM, have the same fpops estimate in client_state for batches 6205 through the latest, 6211:

<rsc_fpops_est>30914659455312.000000</rsc_fpops_est>

So why do those 4 tasks show a time estimate stuck at 3:19 hours, while the rest decrement uniformly with each new task fetch, going from 8:26 down to 7:42 now? That suggests the server is slowly adjusting the mean runtime estimate for new tasks. Somewhere there is a bug in computing the estimated runtimes, and it has been a pest ever since WCG switched to version 7 on the server side and locked the DCF to 1.00000, meaning the version 7 agent has no way to adjust projected runtimes for unstarted tasks based on actual completion times at the agent level.
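[Editor's note: assuming the standard BOINC relationship that a task's projected runtime is roughly rsc_fpops_est divided by the server-supplied flops projection, scaled by the duration correction factor, the sketch below (my own illustration, not client code) backs out the per-app-version speed the server would have to send for each of the two observed estimates, given the shared rsc_fpops_est and a DCF locked at 1.0.]

```python
# Illustrative only: with DCF pinned at 1.0, the same rsc_fpops_est can map to
# 8:26 vs 3:19 only if the server sends a different projected flops value.
RSC_FPOPS_EST = 30914659455312.0   # shared value from client_state, quoted above
DCF = 1.0                          # locked on the WCG v7 server side

def estimate_seconds(fpops_est, projected_flops, dcf=DCF):
    """BOINC-style runtime estimate: work / projected speed, scaled by DCF."""
    return fpops_est / projected_flops * dcf

def hms(seconds):
    seconds = int(round(seconds))
    return f"{seconds // 3600}:{seconds % 3600 // 60:02d}:{seconds % 60:02d}"

for label, secs in (("8:26:38", 8 * 3600 + 26 * 60 + 38),
                    ("3:19:02", 3 * 3600 + 19 * 60 + 2)):
    implied_flops = RSC_FPOPS_EST / secs / DCF
    print(f"{label} implies ~{implied_flops / 1e9:.2f} GFLOPS projected speed; "
          f"round-trip check: {hms(estimate_seconds(RSC_FPOPS_EST, implied_flops))}")
```

In other words, with the fpops estimate identical and the DCF frozen, the 3:19 outliers can only come from a different server-side speed projection, which is consistent with the flip-flopping the poster describes.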
l_mckeon
Senior Cruncher | Joined: Oct 20, 2007 | Post Count: 439 | Status: Offline
Not super short, but batches 6917 and 6918 are back around the 2 to 2.5 hour range, down from the 6 or 7 hour jobs we've been seeing recently.
As usual, the BOINC estimates of time remaining are way off and living in the past, so you may have trouble getting enough work in your queue to carry you over. |
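[Editor's note: a rough sketch of why stale, inflated estimates leave the queue short; this is my own simplification under assumed numbers, not actual BOINC work-fetch code.]

```python
# Rough illustration: a client fetches tasks until the *estimated* seconds of
# queued work cover the requested buffer, so overestimated tasks under-fill it.
def real_days_buffered(buffer_days, est_task_hours, real_task_hours, cores=4):
    # Tasks fetched per core until estimated work reaches the buffer setting.
    tasks_per_core = int(buffer_days * 24 / est_task_hours)
    return tasks_per_core * real_task_hours / 24   # real days of work per core

# Assumed numbers: 1-day buffer, tasks estimated at 8.4 h but really ~2.2 h.
print(f"{real_days_buffered(1.0, 8.4, 2.2):.2f} real days buffered per core")
```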
asdavid
Veteran Cruncher | FRANCE | Joined: Nov 18, 2004 | Post Count: 521 | Status: Offline
Not super short, but batches 6917 and 6918 are back around the 2 to 2.5 hour range, down from the 6 or 7 hour jobs we've been seeing recently. As usual, the BOINC estimates of time remaining are way off and living in the past, so you may have trouble getting enough work in your queue to carry you over.

I got these shorter ones starting at batch 6880. It seems that new ones are starting to get a little bit longer (batch 6942).
Anne-Sophie
l_mckeon
Senior Cruncher | Joined: Oct 20, 2007 | Post Count: 439 | Status: Offline
Batch 7508 is down around 3 hours.
Again not very short, but a step change from the 8-hour tasks seen recently.