World Community Grid Forums
Thread Status: Active | Total posts in this thread: 6
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
@Technicians,
At the moment, increasing the buffer is like watching corn grow:

1) For mcm1, each call yields 1 task, then the client backs off 121 seconds before calling for another, until the buffer is full, if ever.
2) For ugm1, each call yields 2 tasks, same routine.
3) For oem1, you get a variable number from 1-5, whatever happens to be available in the slots. Interval calling starts at 121 seconds.
4) For cep2, no comment; you get as many as you want.
5) For faah/fahv, unknown to me.

When there is no work, the interval increments for the projects of choice until some tipping point is reached and it resets. If a work call was successful, the 121-second backoff routine repeats. Getting 8 mcm tasks thus takes 16 minutes and 8 seconds plus download time, if you are lucky.

If there are many 'high server load' messages, would this not be one of the main causes? For sure, you can't load up quickly and go offline.

The reason for this way of operating escapes me. Is there some strategy in it that benefits the volunteer, for instance giving a more diverse work supply, but piecemeal? It certainly used to be more like 10-20 tasks per fetch request for generally available projects, which to me mcm1 would be.

Tia for your explanation.
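For illustration, here is a minimal sketch (plain Python, not anything from the BOINC client) of the fetch pattern described above: one scheduler call per 121-second backoff, with a fixed yield per call. The per-project yields are the figures reported in this post, not values confirmed by the technicians.

```python
# Sketch of the fetch timing described above (figures assumed from the post,
# not confirmed server behaviour): one scheduler call, then a 121-second
# backoff before the next call.

BACKOFF_SECONDS = 121

def seconds_to_fetch(tasks_wanted: int, tasks_per_call: int) -> int:
    """Total backoff time spent accumulating tasks_wanted tasks."""
    calls = -(-tasks_wanted // tasks_per_call)  # ceiling division
    return calls * BACKOFF_SECONDS

# 8 MCM1 tasks at 1 per call: 8 * 121 s = 968 s, i.e. 16 min 8 s (plus downloads)
print(seconds_to_fetch(8, 1))  # 968
# 8 UGM1 tasks at 2 per call: 4 * 121 s = 484 s
print(seconds_to_fetch(8, 2))  # 484
```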
Sgt.Joe
Ace Cruncher, USA | Joined: Jul 4, 2006 | Post Count: 7687
Correct me if I am wrong, but this seems to be a recent phenomenon. I do not watch too closely, but I did notice this on a machine where the uploads had hung, using up the entire queue. When I re-established communication, the WUs just trickled in one or two at a time. Eventually the queue refilled. MCM1 was the project.
----------------------------------------
Cheers
Sgt. Joe
*Minnesota Crunchers*
asdavid
Veteran Cruncher, FRANCE | Joined: Nov 18, 2004 | Post Count: 521
I am also interested in the answer: how to get more tasks in a scheduler request?
----------------------------------------
Anne-Sophie
KerSamson
Master Cruncher, Switzerland | Joined: Jan 29, 2007 | Post Count: 1673
I have observed the same behaviour on my side.
----------------------------------------
From the client's point of view, I think nothing can be done to fetch more WUs at once.

The new behaviour has the advantage that the work is distributed better along the timeline. In the past I had big deadline peaks, and nothing outside them. The second advantage is being less dependent on BOINC's erratic forecasting. In the past I experienced, numerous times, a buffer overflow because, based on a few shorter WUs, BOINC assumed the buffer was not adequately filled. Because of this incorrect forecast, BOINC fetched plenty of WUs at once, and finally too many.

For all these reasons, I am OK with the current scheduler behaviour.

Cheers,
Yves
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
I got 61 fresh MCM tasks with a single request instead of 2 tasks.
----------------------------------------
[Edit 1 times, last edit by Former Member at Feb 21, 2015 5:28:08 PM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
What you are posting there is dangerous... before the unwitting know it, there is a serious queue pile-up and a panic state in store, with lots of work not making its deadline. The additional copies that then get circulated do not help anyone either, particularly with the 'sneaker-netting' discussed in the other thread: with no live connection to the server, it cannot feed back that copies have become redundant. Tasks that are surplus to requirements, whose quorum has already been assimilated off the server and moved into the master DB, do -Not- get credited!
----------------------------------------
Anyway, there has been more leniency in the provisioning of work since that thread was started in December: more tasks per call are given for 'normal' priority projects, while the low-priority ones [OET/UGM] get a restricted feed per call. In short, there is no need to dig a hole the client(s) may not be able to get out of.

Thx for considering.

P.S. The restriction per core
[Edit 1 times, last edit by Former Member at Feb 23, 2015 1:53:58 PM]