World Community Grid Forums
Thread Status: Active | Total posts in this thread: 48
jonnieb-uk
Ace Cruncher, England | Joined: Nov 30, 2011 | Post Count: 6105 | Status: Offline
> but a 6 1/3 times increase in run time from initial estimate (the "Time Remaining" on the Advanced View tasks list, before the task starts) isn't expected by most users...

NOBODY expects the OET Inquisition!!! Sorry, couldn't resist it. +1
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
> > but a 6 1/3 times increase in run time from initial estimate (the "Time Remaining" on the Advanced View tasks list, before the task starts) isn't expected by most users...
>
> Not expected - true. But they do use the word "estimate", and if you try to "estimate" on objects which are random, I am sure you realize the "estimate" is nothing more than a guess based on the performance of past WUs. At the moment, be prepared to have these units vary wiiiiiiiiidely. Cheers

Even so, a 633% increase over the initial estimate means that that "initial estimate" is not really a good "estimate". And why is the run time "random"? The data may be random, but the run time should be known to within a reasonable percentage (50-200%, to my mind...), or are you saying that the program is going to go through a loop a random number of times? And, BTW, the OET tasks I've seen (all 10 of them) have all been in the 9-10 hour bracket, on my Core i7 4790... I can understand the occasional WU bombing out early, due to unforeseen consequences of input data... (happens all the time over at LHC...)

[Edit 2 times, last edit by Former Member at Feb 14, 2015 2:16:29 PM]
KerSamson
Master Cruncher, Switzerland | Joined: Jan 29, 2007 | Post Count: 1679 | Status: Offline
Dear fellow contributors,
may I kindly remind you that we are supporting research activities? If we knew everything in advance, we would not have to perform any research at all. Looking back over the last 7 years, the scientists and technical staff have had to adapt their strategy several times; see for example HCMD's grandparent/parent/child approach for improving the management of large and complex WUs. OET is just at the beginning. We cannot expect really accurate assumptions regarding the required computation time, since nobody knows in advance what we will or will not experience with this project. I agree with SekeRob, it is good that OET1 is an "opt-in" project.

Cheers,
Yves

[Edit 1 times, last edit by KerSamson at Feb 14, 2015 2:47:58 PM]
OldChap
Veteran Cruncher, UK | Joined: Jun 5, 2009 | Post Count: 978 | Status: Offline
Over a day on a couple of WUs on a 2600K does suggest that lesser CPUs might be at some of these for quite a while, but there are also results in my lists of 0.03 hours on a 2.4 GHz CPU.

We are here to help the science... Why does it matter if any WU is either short or long? They all get us a tiny step closer to a solution. If, on the other hand, anyone feels that the science suffers, then perhaps that should be the discussion.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
On "Task Duration Factor" really called Duration Correction Factor or DCF, WCG uses since server version 7 is in place <dont_use_dcf/>. This means the clients of version 7 and up -DO NOT- adjust the projected times of work ready to start unless a new benchmark is run and those vary only marginally. The -actual average- FPOPS as computed by the server on returned work is used as header info for new work. This has a lapse time, but given the project is just not exceeding means of 3 hours, see http://bit.ly/WCGOET1 for the average runtime history, this morning 2.16, it means the really long ones are a minority and have little impact on the average used in new work.
NB: If you see the cached work flip flop windly in projected times, yes there's a bug [or it's a feature]. One possible cause is the arrival of a repair unit with old average in-between new, MCM having quite substantial average variances. |
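To make that concrete, here is a minimal sketch (not WCG or BOINC source code; the field name rsc_fpops_est is borrowed from the BOINC workunit attribute, and all numbers are invented) of how a v7+ client could project "Time Remaining" for an unstarted task when the server sets <dont_use_dcf/>:

```python
# Minimal sketch, assuming DCF is disabled server-side (<dont_use_dcf/>):
# the projection is simply estimated work divided by benchmarked speed.

def projected_runtime_hours(rsc_fpops_est: float,
                            host_flops: float,
                            dcf: float = 1.0,
                            use_dcf: bool = False) -> float:
    """Projected wall-clock hours for a task that has not started yet.

    rsc_fpops_est -- floating-point operations the server estimates for the
                     workunit, derived from the average runtime of recently
                     returned results (assumed field name).
    host_flops    -- the host's benchmarked speed in FLOPS; it only changes
                     when a new benchmark is run.
    dcf/use_dcf   -- legacy Duration Correction Factor; ignored here, which
                     matches the v7+ behaviour described above.
    """
    seconds = rsc_fpops_est / host_flops
    if use_dcf:                 # pre-v7 behaviour only
        seconds *= dcf
    return seconds / 3600.0

# Example: a workunit sized from a ~2.16 hour average on a 3 GFLOPS core
fpops = 2.16 * 3600 * 3e9
print(round(projected_runtime_hours(fpops, host_flops=3e9), 2))  # -> 2.16
```

Because the FPOPS figure is a running average over returned results, a minority of very long units only nudges new estimates slightly, which is why an individual task can still overrun its projection by a large factor.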
Jack007
Master Cruncher, Canada | Joined: Feb 25, 2005 | Post Count: 1604 | Status: Offline
These 8-minute ones are killing me.

When I have long ones, I pause two, then play my games (with 6 running). With 8-minute WUs they get done and I have idle cores for an hour.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
This might be the same problem, but...
my OET keeps resetting. Time elapsed hits about 7.5 minutes, then goes back down to about 4.5 minutes. Is this a problem anyone else has been having?

UPDATE: Over the past several hours, the workunit has been progressing a little. Checkpoints happen every couple of hours of real-world time, or after a few minutes of time elapsed. I'm currently expecting this workunit to take about as much real-world time as a Mapping Cancer Markers workunit.

[Edit 1 times, last edit by Former Member at Feb 16, 2015 5:42:51 PM]
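One plausible reading of that symptom (an assumption on my part, not something confirmed in this thread) is that the task is being restarted and resumes from its last checkpoint, so any time worked since that checkpoint is discarded and repeated. A toy sketch of that behaviour:

```python
# Toy sketch, not BOINC code: elapsed time falls back to the most recent
# checkpoint when a task restarts, which would reproduce 7.5 min -> 4.5 min.

class Task:
    def __init__(self):
        self.elapsed = 0.0      # seconds of work shown as "time elapsed"
        self.checkpoint = 0.0   # elapsed seconds at the last checkpoint

    def run(self, seconds, checkpoint_interval):
        """Crunch for `seconds`, writing a checkpoint every `checkpoint_interval`."""
        for _ in range(int(seconds)):
            self.elapsed += 1
            if self.elapsed - self.checkpoint >= checkpoint_interval:
                self.checkpoint = self.elapsed

    def restart(self):
        """Simulate a preemption/restart: resume from the last checkpoint."""
        self.elapsed = self.checkpoint

t = Task()
t.run(7.5 * 60, checkpoint_interval=4.5 * 60)
print(t.elapsed / 60)   # 7.5 minutes worked
t.restart()
print(t.elapsed / 60)   # back to 4.5 minutes, as described in the post above
```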
TPCBF
Master Cruncher, USA | Joined: Jan 2, 2011 | Post Count: 1957 | Status: Offline
> Nope, the bulk of OET work is single job 'flex' type, uncuttable/indivisible. Shortest here on one host 2:39 minutes, longest > 24 hours.

Well, over the last 24h I seem to get a whole bunch of those ultra-shorties on the fastest machines, also in the 2-3 min range. On some older dual-cores, 20+ h/WU seems to be the norm, only getting and crunching a couple at a time... Indeed a very wide variation in runtimes, and it is certainly not necessary to prematurely terminate any WUs. For those "just" crunching for points and fame, OET is certainly not a project that pays well. But that's not the reason why those WCG projects exist, right?

Ralf
deltavee
Ace Cruncher, Texas Hill Country | Joined: Nov 17, 2004 | Post Count: 4891 | Status: Offline
I don't really mind the long run times. But it is the low points that seem wrong. 0.35 points per hour for this one:
OET1_0000335_xSDGP-OM_rig_11386_0 -- xxxx | Valid | 2/11/15 20:52:03 | 2/15/15 01:58:06 | 41.52 / 41.52 hours | 14.7 / 14.7 credit
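For reference, the "0.35 points per hour" figure follows directly from that result line, taking 41.52 as the run time in hours and 14.7 as the granted credit:

```python
# Quick check of the points-per-hour figure quoted above, assuming the
# 41.52 column is run time in hours and the 14.7 column is granted credit.
run_time_hours = 41.52
granted_credit = 14.7
print(round(granted_credit / run_time_hours, 2))   # -> 0.35
```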
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Yes, sigh, variable task runtimes for a science app are the wooden stake through the credit system. The Linux system has plummeted to 14, serially, where 30-35 is normal for evenly running sciences. Any random long result in between pays the monopoly-money price. It is just beyond me why, with the big stats available, no standard credit per fpop can be given. End of discussion.

The mean runtime for the project dropped yesterday to 1.23 hours on 249K validated results, down from 2.88 the day before. Same same all over again.
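The "standard credit per fpop" suggestion amounts to a fixed conversion from the estimated work in a unit to credit, independent of how long any particular host took. A toy sketch of the idea, with an entirely hypothetical rate constant:

```python
# Toy sketch of a flat "credit per fpop" scheme as suggested above.
# CREDIT_PER_GFLOP is a made-up constant, purely for illustration.
CREDIT_PER_GFLOP = 2.0e-3   # hypothetical credit granted per 1e9 fpops

def flat_credit(rsc_fpops_est: float) -> float:
    """Credit depends only on the estimated work in the unit,
    not on the runtime or benchmarks of the host that crunched it."""
    return rsc_fpops_est / 1e9 * CREDIT_PER_GFLOP

# The same unit earns the same credit whether it took 8 minutes or 24 hours.
print(round(flat_credit(2.16 * 3600 * 3e9), 1))   # -> 46.7
```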