World Community Grid Forums
Thread Status: Active. Total posts in this thread: 2370
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
OK, but re-reporting aside, the first error is different:

The CHARGE OUTSIDE INNER GSBP REGION is a completely different error. These tasks actually run before erroring out. I expect this is simply categorized as an error because the result (charge) was found beyond the rigidly set experimental boundaries. Is the system not flexible enough? Should the boundaries of the GSBP region be relaxed/expanded by a few angstroms? One for them to look at, if it's important. I have seen similar end-of-batch errors elsewhere, where consecutive changes in charge/gap/bond strength propagate beyond the desired region of interest (during auto-batching), always triggering an error. Just saying, on the off-chance it's important.
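To make the failure mode described above concrete: the task completes its computation, then the result is rejected outright because it falls outside fixed bounds. A minimal sketch of that kind of hard boundary check, with an optional relaxation like the one suggested (all names, bounds, and the slack parameter are hypothetical illustrations, not the actual GSBP code):

```python
# Hypothetical sketch of a hard boundary check: the task runs to
# completion, then the result is flagged as an error if it lies
# outside rigidly set bounds. Bounds and names are invented.

GSBP_CHARGE_MIN = -2.0   # hypothetical lower bound
GSBP_CHARGE_MAX = 2.0    # hypothetical upper bound

def validate_charge(charge: float, slack: float = 0.0) -> None:
    """Raise if the computed charge falls outside the (optionally
    relaxed) region, mimicking a 'charge outside inner GSBP
    region' style error."""
    lo = GSBP_CHARGE_MIN - slack
    hi = GSBP_CHARGE_MAX + slack
    if not (lo <= charge <= hi):
        raise ValueError("CHARGE OUTSIDE INNER GSBP REGION")

validate_charge(1.5)              # within bounds: passes
validate_charge(2.3, slack=0.5)   # passes with relaxed bounds
# validate_charge(2.3)            # would raise ValueError
```

Relaxing the bounds by a small slack, as the post suggests, turns a hard rejection into a pass for results only marginally outside the region.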
astrolabe.
Senior Cruncher | Joined: May 9, 2011 | Post Count: 496 | Status: Offline
Slowly enjoying the drips and drops... I have 120 dddt2 in progress.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Anyone who sets the Device Profile correctly [it's generally known how/what] and the cache size to at least the default [0.25], my recommendation being 1.0 for permanent operation, would have little concern these rainy days about idling cores. Going through the caches this minute, no interval of unsuccessful fetches has lasted longer than 4:30 hours, i.e. well within the 0.25 cache default. Is this a WCG tuning coincidence... Who knows :?

--//-- PS: Of course, when setting the DPs up this way, you can't go on a long weekend... rains do end :O)
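The arithmetic behind the claim above is simple: the cache setting is expressed in days, so the default 0.25 buys 6 hours of stored work, comfortably more than the observed ~4:30 fetch gap. A quick sketch (only the 0.25 default, the 1.0 recommendation, and the ~4.5 h gap come from the post; the function names are mine):

```python
# Sketch: does a work-unit cache setting (in days) bridge an
# observed gap between successful work fetches (in hours)?
# The 0.25-day default and the ~4.5 h outage come from the post;
# the rest is illustrative.

def cache_hours(cache_days: float) -> float:
    """Convert a cache setting from days to hours."""
    return cache_days * 24.0

def bridges_outage(cache_days: float, outage_hours: float) -> bool:
    """True if the cached work outlasts the fetch outage."""
    return cache_hours(cache_days) >= outage_hours

# Default 0.25-day cache = 6 hours, so a 4.5-hour dry spell fits:
print(bridges_outage(0.25, 4.5))   # True
# The recommended 1.0-day cache gives a full 24 hours of headroom:
print(bridges_outage(1.0, 4.5))    # True
```

This also shows why the 1.0 recommendation matters for "permanent operation": it tolerates outages the default cannot.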
astrolabe.
Senior Cruncher | Joined: May 9, 2011 | Post Count: 496 | Status: Offline
> Anyone who sets the Device Profile correct [it's generally known how/what] and the cache size to at least default [0.25], my recommendation 1.0 for permanent operation, would have little concern these rainy days to having idling cores

Unless you hit a patch of errors and all of a sudden you are dry.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Comes with the territory, no matter how long one beta tests. Going by the error rates WCG observes [knreed watches the 0.7% too] and strives to keep lowest, those patches are getting really small. One would have to be in real bad luck with the drippy/droppy way of raining to get more than a few from the same wonky batch with DDDT2.

--//--
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
> Anyone who sets the Device Profile correct [it's generally known how/what] and the cache size to at least default [0.25], my recommendation 1.0 for permanent operation, would have little concern these rainy days to having idling cores
>
> Unless you hit a patch of errors and all of a sudden you are dry

I had three WUs error with 0 time last night. I do have a 1.0 cache, so it didn't hurt much. A fourth WU errored with time, but I had already received credit for it, so no loss.
astrolabe.
Senior Cruncher | Joined: May 9, 2011 | Post Count: 496 | Status: Offline
> Comes with the territory, no matter how long one beta tests. Relying on the rates WCG observes [knreed looks at the 0.7% too] striving to keep it lowest, those patches are getting real small. One does have to be in real bad luck with the drippy/droppy way of raining to get more than a few of the same wonky batch with DDDT2

It does come with the territory. But I do find that dual-core computers run an increased risk of going dry. I had one yesterday that could not get a DDDT2 WU for 9.5 hours. I switched it to HCC only, which gave it 6 WUs, then switched it back to DDDT2, which might be enough to keep it pulling DDDT2 WUs over the next 0.4 days. I have had no problems with my tri- and quad-core computers.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I continue to like [me IMHO] the proposition [can't think of a number to give it ;>] of having ''If there is no work...'' kick in *only* when the cache is dry and cores go idle, then fetching something within the minute. In your case, that would mean continued, unattended crunching for the duos too.

[Small] downside: random work would come, rather than the second choice [but that's off topic... another thread on this exists]

--//--
JSYKES
Senior Cruncher | Joined: Apr 28, 2007 | Post Count: 200 | Status: Offline
It looks like things have gone AWOL again: I had 1 WU error within a few seconds (as did all 3 other instances of it), and another was server-aborted for the same reason today... Is it back to the same situation we had in April?? I was hoping the problems had been resolved; from my perspective that may be premature...

[Edit 1 times, last edit by JSYKES at Jul 20, 2011 6:04:29 PM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
On one PC, 13 WUs ended with the "Detached" status, apparently with no time spent. Now everything seems back to normal. This is the first time I recall this happening to me.