World Community Grid Forums
Thread Status: Active | Total posts in this thread: 63
littlepeaks
Veteran Cruncher | USA | Joined: Apr 28, 2007 | Post Count: 748 | Status: Offline
Nuts--
Had 6 completed batch 50 jobs, and 5 were marked too late. (Of course I have 8 gigs of RAM and was only running two WUs at a time.) Guess that's the way the molecule bounces -- sigh.
deltavee
Ace Cruncher | Texas Hill Country | Joined: Nov 17, 2004 | Post Count: 4891 | Status: Offline
uplinger wrote:
> I am working on a script right now to grant credit to members who have returned work for Target 50... Thanks, -Uplinger

Thanks for the credit on the "Too Late" WUs.
TPCBF
Master Cruncher | USA | Joined: Jan 2, 2011 | Post Count: 1952 | Status: Offline
deltavee wrote:
> uplinger wrote:
> > I am working on a script right now to grant credit to members who have returned work for Target 50... Thanks, -Uplinger
>
> Thanks for the credit on the "Too Late" WUs.

Ralf
uplinger
Former World Community Grid Tech | Joined: May 23, 2005 | Post Count: 3952 | Status: Offline
TPCBF,
The script runs periodically; it should catch the workunits that were from target 50. Please be patient :) Let me know the WU names and I can double check it for you.
Thanks, -Uplinger
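Purely as an illustration of what such a periodic credit-granting pass might look like, here is a minimal Python sketch. The SQLite table, column names, outcome value and batch-name prefix are all assumptions made for the example, not WCG's actual schema or code.

```python
# Illustrative sketch only -- NOT the actual WCG back-end script.
# Assumes a hypothetical SQLite "result" table with name, outcome,
# claimed_credit and granted_credit columns.
import sqlite3

TARGET50_PREFIX = "dsfl_target50_"   # hypothetical batch-name prefix

def grant_late_credit(db_path="results.db"):
    """One periodic pass: credit 'too late' results from the target-50 batch."""
    con = sqlite3.connect(db_path)
    cur = con.execute(
        """
        UPDATE result
           SET granted_credit = claimed_credit
         WHERE name LIKE ? || '%'
           AND outcome = 'too_late'
           AND granted_credit = 0
        """,
        (TARGET50_PREFIX,),
    )
    con.commit()
    print(f"Granted credit on {cur.rowcount} late results")
    con.close()

if __name__ == "__main__":
    grant_late_credit()
```

Run from cron (or any other scheduler), late results that trickle in afterwards are still picked up on the next pass, which matches the "runs periodically, please be patient" description above.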
boulmontjj
Senior Cruncher | France | Joined: Nov 17, 2004 | Post Count: 317 | Status: Offline
> boulmontjj, As far as I can tell this is specific to DSFL, not BOINC, Windows, Rosetta, Malaria or any other project, though it will cause failures on other projects if you were also running DSFL tasks on the same system (just found 15 HFCC failed tasks). So far I have only experienced this on 1 system (Linux), so you can't blame Windows (and I have 20+ Windows systems hooked up and crunching). It has not impacted the systems I also run Malaria on, or Ralph (Rosetta beta). There was a memory leakage issue with one (or more) BOINC clients, so perhaps you just happened to be experiencing that issue at the same time.

After another check, I had one or two DSFL tasks on the 3 machines. They were not running but in waiting status, and the other projects were running but waiting for memory. That's why I didn't think it was just DSFL. All the batch 50 work units have been cancelled and now everything is OK for me.
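Since the symptom here was other projects' tasks sitting in "waiting for memory", it can help to check how much RAM the running science apps actually use. A rough sketch, assuming the third-party psutil package is installed; the process-name filter is a guess for illustration and should be adjusted to the task names on your own machine.

```python
# Print the resident memory of running BOINC science apps.
# Assumes psutil is installed; the name filter below is a guess.
import psutil

def boinc_task_memory(name_filter=("dsfl", "hfcc", "wcg")):
    total_mb = 0.0
    for proc in psutil.process_iter(["name", "memory_info"]):
        name = (proc.info["name"] or "").lower()
        mem = proc.info["memory_info"]
        if mem is None or not any(key in name for key in name_filter):
            continue
        rss_mb = mem.rss / (1024 * 1024)
        total_mb += rss_mb
        print(f"{proc.info['name']:<32} {rss_mb:8.1f} MB")
    print(f"{'total':<32} {total_mb:8.1f} MB")

if __name__ == "__main__":
    boinc_task_memory()
```

If the per-task figure is far above the project's advertised footprint, the client can start holding other tasks back in "waiting for memory", exactly as described above.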
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
Thanks for the Server Aborts; I had 14 systems running the bad batch. For sure some would have locked up.
TPCBF
Master Cruncher | USA | Joined: Jan 2, 2011 | Post Count: 1952 | Status: Offline
uplinger wrote:
> TPCBF,
> The script runs periodically; it should catch the workunits that were from target 50. Please be patient :) Let me know the WU names and I can double check it for you.
> Thanks, -Uplinger

Thanks, they showed up overnight; one was actually unrelated and in fact too late (from a laptop that wasn't on the Internet for a couple of days). From a previous post, I understood you ran a one-time script to fix those particular "too late" WUs for that batch; I wasn't aware that this would be a regularly scheduled task...

thanks, Ralf
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Temporarily stopped on this project after a first batch of bad WUs made the cores cold... working on getting CEP-P2 up to 5 years... whew... Did not notice any issues with RAM: a quad has 8 GB RAM and a 990x rig has the maxed-out 24 GB RAM.
[Edit 1 times, last edit by Former Member at Nov 2, 2011 3:49:13 AM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hello... I have an old single-core Pentium 500 that WAS crunching WU #43 and #45 at ~48 hrs/WU, and suddenly the estimated completion time JUMPED to 650-734 hrs!
Whatever happened will cause the last 3 WUs to complete "Too Late" (unstarted & due 11/6 MST). Wondering if I should abort them or let the server do the 'axe job'? Thanks
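The arithmetic behind that decision is simple: compare the new estimate against the hours remaining before the deadline. A tiny sketch using the numbers from this post; the "today" date is an assumption for illustration.

```python
# Can a task whose estimate jumped to ~650 hours still meet a deadline a few
# days away? Hours and dates below come from the post; "today" is assumed.
from datetime import datetime

def can_make_deadline(est_hours_remaining, deadline, now):
    hours_left = (deadline - now).total_seconds() / 3600
    return est_hours_remaining <= hours_left

now = datetime(2011, 11, 2)        # assumed "today"
deadline = datetime(2011, 11, 6)   # "due 11/6"
print(can_make_deadline(650, deadline, now))  # False: only ~96 hours remain
```

With a 650-hour estimate against roughly 96 hours of headroom, those tasks cannot finish in time whether they are aborted locally or left for the server to axe.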
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I rejected that project - it slowed down my Linux system (stable Gentoo x86) without any warning - the project spec says ~250 MB RAM usage!