World Community Grid Forums
Thread Status: Active | Total posts in this thread: 2370
Dataman
Ace Cruncher | Joined: Nov 16, 2004 | Post Count: 4865 | Status: Offline
I got a couple of these errors:
CHARGE OUTSIDE INNER GSBP REGION

All wingmen failed too. No big deal, as I have seen them before. I must say, though, this "quota" and reliable-host crap is getting to be annoying. Each time I bring a machine back to WCG I have to re-prove it. A waste of my time.
Jean-David Beyer
Senior Cruncher | USA | Joined: Oct 2, 2007 | Post Count: 338 | Status: Offline
"I must say though, this 'quota' and reliable host crap is getting to be annoying."

When I first heard about the reliable-host and quota scheme, it seemed reasonable: why send a lot of work units to machines that are likely to be unreliable and fail, or to miss the deadline? At first I got about 4 work units in a year; I believe none of them failed. Then in 2011 I got 10 more. During the recent flood I got 53 work units, 42 of which arrived yesterday, June 22. I must assume that somehow they determined my machine is reliable and fairly fast. But how did they determine this?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
"I got a couple of these errors: CHARGE OUTSIDE INNER GSBP REGION. All wingmen failed too. Each time I bring a machine back to WCG I have to re-prove it. A waste of my time."

It's Berkeley where the moan needs to be planted in the first place. The science app is new to the device, so the device has to demonstrate it is successful at the tasks [i.e. return them without an outright error code]. As soon as a result is returned, the quota is incremented per result, so after a few my quota jumped to 10 per core, and I suddenly had 40 in queue on the quad... that's a fair day's work. This simply prevents a [new] host from caching up quickly on a science, which could otherwise drain the trickle supply and cause feeder congestion (see herna's post about a bunch getting mangled). For sure, if you have 2 per core, it means that when the first jobs finish there's no idling, and reporting is automatic and instant, at least on my client 7, as it wants to backfill to the minimum buffer level too. For perspective and nuance. --//--
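To make that ramp-up concrete, here is a minimal sketch of how such a per-host daily quota could behave. Only the increment-per-clean-result rule, the 10-per-core figure, and the 120-per-core ceiling come from this thread; the starting value, the class and method names, and the error-handling behaviour are assumptions for illustration:

```python
# Hypothetical model of the per-host daily quota described above.
# Assumed: the starting value and what happens on an error; quoted in
# this thread: +1 per clean result, 10/core after a few results, and
# the 120-per-core daily ceiling.

MAX_QUOTA_PER_CORE = 120  # ceiling per core per day, quoted below

class Host:
    def __init__(self, cores, starting_quota=1):  # starting value assumed
        self.cores = cores
        self.quota_per_core = starting_quota      # new host: barely trusted

    def report_result(self, ok):
        """Adjust the quota after a result is reported."""
        if ok:
            # each result returned without an error code raises the quota
            self.quota_per_core = min(self.quota_per_core + 1,
                                      MAX_QUOTA_PER_CORE)
        # the thread doesn't say what an error does to the quota,
        # so this simple model leaves it unchanged

    def daily_fetch_limit(self):
        return self.quota_per_core * self.cores

quad = Host(cores=4)
for _ in range(9):                   # nine clean results...
    quad.report_result(ok=True)
print(quad.quota_per_core)           # -> 10 per core, as in the post
print(quad.daily_fetch_limit())      # -> 40 tasks queueable on the quad
```

The design point being made: a host new to a science cannot fill a big cache on day one, which protects the trickle supply when a flood of work lands.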
Dataman
Ace Cruncher | Joined: Nov 16, 2004 | Post Count: 4865 | Status: Offline
"It's Berkeley where the moan needs to be planted in the first place."

Yep, and it's not the only irritating thing they've done. How's everyone loving those "BOINC Notices"? No worries, I'm just howling at the moon this morning. But quotas = crap.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Yes, and this is why I said "surprisingly" when I got 100 new DDDT2 tasks even though I had just returned 54 errors. I hadn't returned a single OK task in between. I have to admit that this isn't really fair to others.

Btw, my errors were probably caused by some hard-disk overflow. I need to check whether something's wrong with it; I just defragged the boinc-data partition, which took ages.
Jean-David Beyer
Senior Cruncher | USA | Joined: Oct 2, 2007 | Post Count: 338 | Status: Offline
"just defragged the boinc-data partition"

<Saturday Morning rant on> I am so lucky to be running an OS that does not require defragging. I have a 15 GByte partition just for BOINC. I built this machine in early 2004 and have never defragged it. It seems sad that Microsoft has never gotten around to fixing their file system so that defragging is unnecessary. <Saturday Morning rant off>
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Thing is, there's nothing surprising here: these 'oops' errors just count against the 120-per-core-per-day quota once a host is at max, so a quad has 480 to fetch. Burn 50 and "what's the issue?"... the system shrugs and moves on.
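Spelled out, with the 120/core/day ceiling from above (the core count and error count are just this thread's example numbers):

```python
QUOTA_PER_CORE_PER_DAY = 120   # max daily quota per core, as quoted above
cores = 4                      # a quad-core host
errored = 50                   # tasks burned by the 'oops' errors

daily_limit = QUOTA_PER_CORE_PER_DAY * cores
print(daily_limit)             # 480 tasks fetchable per day
print(daily_limit - errored)   # 430 still available after the errors
```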
--//-- |
BSD
Senior Cruncher | Joined: Apr 27, 2011 | Post Count: 224 | Status: Offline
"All my machines are getting one or two at most at a time. Don't know if it is a project distribution policy..."

Yes! Yep, getting a little sprinkle intermittently across 1 Linux and 3 Windows devices. No ground saturation during this event.

Interesting: my dual-core Linux box is practically filling up to its 1-day cache setting, but my quad-core Win7 devices usually don't have any extra WUs in the queue beyond what's currently crunching. These Win7 machines usually report "Tasks are committed to other platforms". More non-Windows devices crunching this rain event?
ccandido
Senior Cruncher | Joined: Jun 22, 2011 | Post Count: 182 | Status: Offline
For a few hours my faster machine did not get any WUs; I had to start it manually. One remote computer has not gotten any so far, but on my other machines I got approx. 1000 WUs. All Windows machines.
cjslman
Master Cruncher | Mexico | Joined: Nov 23, 2004 | Post Count: 2082 | Status: Offline
I was wondering how the DDDT2 nomenclature works... According to seippel:

"The first batch is dg05_c_g03_c021_to_c030_typeC and the last batch is dg05_d_g50_d491_to_d499_typeC."

I checked the latest DDDT2 WU that I have received (approx. 5:45 pm UTC): dg05_c_g30_c291_to_c300_typeC. Does this mean that the WUs will make it to dg05_c_g99 before flipping over to dg05_d_gxx? Even better: is there a post that already explains this? I searched but didn't find anything. I want to get a feel for how the WUs are being consumed so I can do my own planning.

Thanks, CJSL
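Nothing official, but the quoted names do pull apart cleanly into fields. This is only a guess at the scheme, based purely on the three names above (dataset, series letter, group number, sub-batch range, type suffix); the field names are invented:

```python
import re

# Pattern guessed from the batch names quoted above, e.g.
#   dg05_c_g03_c021_to_c030_typeC
#   dg05_d_g50_d491_to_d499_typeC
# Fields (names invented): dataset, series letter, group number,
# sub-batch range, and a trailing type tag.
WU_NAME = re.compile(
    r"^(?P<dataset>dg\d+)_"
    r"(?P<series>[a-z])_"
    r"g(?P<group>\d+)_"
    r"(?P<series2>[a-z])(?P<first>\d+)_to_"
    r"[a-z](?P<last>\d+)_"
    r"type(?P<type>[A-Z])$"
)

for name in ("dg05_c_g03_c021_to_c030_typeC",
             "dg05_c_g30_c291_to_c300_typeC",
             "dg05_d_g50_d491_to_d499_typeC"):
    m = WU_NAME.match(name)
    print(name, "->", m.groupdict())
```

If the pattern holds, the series letter appears twice (once after the dataset, once in the range), and the quoted last batch dg05_d_g50_d491_to_d499_typeC would suggest the d series stops at g50 rather than running to g99 — but only a post from the techs could confirm that.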