World Community Grid Forums
Thread Status: Active · Total posts in this thread: 95
SekeRob
Master Cruncher · Joined: Jan 7, 2013 · Post Count: 2741 · Status: Offline
Similar to reporting limits, you can also set minimal requesting; note though that the buffer has to be empty for it to work.
<fetch_minimal_work>0|1</fetch_minimal_work>: fetch one job per device (see --fetch_minimal_work).
<fetch_on_update>0|1</fetch_on_update>: when updating a project, request work even if it is not the highest-priority project. New in 7.0.54.
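For reference, these flags go in the `<options>` section of `cc_config.xml` in the BOINC data directory. A sketch, not a drop-in file; values are illustrative:

```xml
<!-- cc_config.xml (BOINC data directory) - sketch, values illustrative -->
<cc_config>
  <options>
    <!-- Request only one job per idle device per scheduler RPC -->
    <fetch_minimal_work>1</fetch_minimal_work>
    <!-- Also request work when updating a non-highest-priority project -->
    <fetch_on_update>1</fetch_on_update>
  </options>
</cc_config>
```

The client re-reads this file via Options > Read config files in the BOINC Manager (Advanced view), or on restart.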
SekeRob
Master Cruncher · Joined: Jan 7, 2013 · Post Count: 2741 · Status: Offline
Of course you can set the buffer to zero FTM (for the moment) and play with the allowed core count to see if things begin to move... anything to minimize the RPC size.
Former Member
Cruncher · Joined: May 22, 2018 · Post Count: 0 · Status: Offline
<fetch_minimal_work>1</fetch_minimal_work>
<fetch_on_update>1</fetch_on_update> 07/07/2016 22:05:29 | World Community Grid | Not requesting tasks OK, one for me to try when the buffer is empty tomorrow. Prior to trying that, 2 more tasks had completed and reported but remained as Ready to Report. Setting "No new tasks" solved that temporarily. I've put max_tasks_reported back to 0 (I assume that means unlimited) but will remember it if necessary. Many thanks for now. |
Former Member
Cruncher · Joined: May 22, 2018 · Post Count: 0 · Status: Offline
> Of course you can set the buffer to zero FTM and play the allowed core count to see if things begin to move.... anything to minimize the rpc size.

Crossed in the post. Do you mean set 0 days of work in computing prefs, or really have an empty cache of work?
SekeRob
Master Cruncher · Joined: Jan 7, 2013 · Post Count: 2741 · Status: Offline
Any setting that ensures the smallest possible work request.
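One way to pin the buffer at (near) zero locally, instead of on the website, is BOINC's standard preferences override file. A sketch, assuming the usual `global_prefs_override.xml` in the BOINC data directory; values are illustrative:

```xml
<!-- global_prefs_override.xml (BOINC data directory) - illustrative values -->
<global_preferences>
  <!-- "Store at least N days of work": keep (close to) zero -->
  <work_buf_min_days>0.0</work_buf_min_days>
  <!-- "Store up to an additional N days": also zero to minimize each request -->
  <work_buf_additional_days>0.0</work_buf_additional_days>
</global_preferences>
```

With both at zero, the client only asks for work when a core is about to run dry, which keeps each scheduler request small.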
Former Member
Cruncher · Joined: May 22, 2018 · Post Count: 0 · Status: Offline
OK, I get it. Set "Store at least N days of work" to a low number, tried a work fetch ("don't need"), then increased slowly until a fetch attempt occurred. Interesting:
07/07/2016 22:29:12 | World Community Grid | Requesting new tasks for CPU
07/07/2016 22:29:14 | World Community Grid | Scheduler request completed: got 1 new tasks
07/07/2016 22:29:14 | World Community Grid | Resent lost task ZIKA_000028953_x2jly_DV4_NS3hlcs_ssRNAdelMn_chB_0399_3

followed by another:

07/07/2016 22:31:16 | World Community Grid | Requesting new tasks for CPU
07/07/2016 22:31:19 | World Community Grid | Scheduler request completed: got 1 new tasks
07/07/2016 22:31:19 | World Community Grid | Resent lost task ZIKA_000034795_x1nkt_Mtb_SecA_DelMg_chnB_0203_1

then looking good...

07/07/2016 22:35:19 | World Community Grid | Requesting new tasks for CPU
07/07/2016 22:35:22 | World Community Grid | Scheduler request completed: got 0 new tasks
07/07/2016 22:35:22 | World Community Grid | No tasks sent
07/07/2016 22:35:22 | World Community Grid | No tasks are available for the applications you have selected.
07/07/2016 22:35:22 | World Community Grid | Tasks are committed to other platforms

Frustration! Then...

07/07/2016 22:37:35 | World Community Grid | Requesting new tasks for CPU
07/07/2016 22:37:37 | World Community Grid | Scheduler request completed: got 2 new tasks

YES!!! Keep checking, increase cache size...

07/07/2016 22:45:20 | World Community Grid | Requesting new tasks for CPU
07/07/2016 22:45:23 | World Community Grid | Scheduler request completed: got 15 new tasks

BIG YES!! SekeRob, you're a genius. Again, thank you. The problem looks solved for me (until next time). Wonder if any of this might help Keith understand what's going wrong.
SekeRob
Master Cruncher · Joined: Jan 7, 2013 · Post Count: 2741 · Status: Offline
There's truncation going on (timeouts), so by forcing the scheduler file to transmit to be small, the file manages to get through whole. Sort of.
KWSN-A Shrubbery
Senior Cruncher · Joined: Jan 8, 2006 · Post Count: 476 · Status: Offline
> Thanks for making me feel less lonely! Have you checked your Results Status page for that machine? According to mine, my completed tasks (all OpenZika) are still uploading, reporting and turning Valid, despite the client thinking they haven't reported (saying 59 ready to report so far). My Plan A is to a) wait a bit more and see if it resolves spontaneously (as others have reported), then b) when I run out of tasks (77 to go, say about 24+ hours clock time), experiment with detaching, reinstalling, deleting folders, ...

Hmm, didn't have time to check the results status at lunch. Everything shows reported and validated except two units which are not even showing on my machine. Gonna do a project reset, that should clear things up. If not, I'll re-install the client. Just didn't want to lose all that work.
Former Member
Cruncher · Joined: May 22, 2018 · Post Count: 0 · Status: Offline
> Gonna do a project reset, that should clear things up. If not, I'll re-install the client. Just didn't want to lose all that work.

As above, SekeRob gave me a Plan B, which worked. It'll be interesting to hear if your Plan A succeeds.
SekeRob
Master Cruncher · Joined: Jan 7, 2013 · Post Count: 2741 · Status: Offline
Plan B may be the future Plan A DIY ;D)
----------------------------------------
When back, I will draft something of a 1-2-3. I think once the connection goes bad, it's harder to get out, so I would set the max report permanently to e.g. 2 and revert the buffer temporarily to zero, which in effect will only request work for cores about to go idle (usually a new task is requested a few minutes before one finishes). When things normalize, gradually raise the minimum buffer (MinB) and keep the additional buffer (MaxAB) at zero. The price (for WCG) is slightly more frequent connecting, but the Ready to Report list is cleared more often, and thus each fetch request is smaller. It's the big ones that fail more frequently.
[Edit 1 times, last edit by SekeRob* at Jul 8, 2016 1:35:20 PM]
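A sketch of the combination described above, again for `cc_config.xml` (the value 2 for `max_tasks_reported` follows the post; tune to taste):

```xml
<!-- cc_config.xml - sketch of the "small RPC" combination -->
<cc_config>
  <options>
    <!-- Report at most 2 completed tasks per scheduler RPC,
         keeping each request/reply small -->
    <max_tasks_reported>2</max_tasks_reported>
    <!-- Only fetch one job per idle device, i.e. for cores about to run dry -->
    <fetch_minimal_work>1</fetch_minimal_work>
  </options>
</cc_config>
```

Combined with a (temporarily) zero work buffer in computing preferences, every scheduler exchange stays small enough to survive a flaky connection.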