Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: Seeing lower than beta19 efficiencies

Lol, modded the app_config to 3 UGM / 5 MCM and now UGM runs more efficiently than MCM, albeit only at the second decimal place. Everything is now solidly in the >= 99.4 percent efficiency realm. Note, still running with a write-to-disk setting of 1000 seconds!
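For reference, a minimal app_config.xml sketch along those lines, dropped into the WCG project folder, could look like the following. The short app names below are assumptions on my part; check client_state.xml for the exact names your client reports.

<app_config>
  <!-- Uncovering Genome Mysteries: at most 3 tasks at once (app name assumed) -->
  <app>
    <name>ugm1</name>
    <max_concurrent>3</max_concurrent>
  </app>
  <!-- Mapping Cancer Markers: at most 5 tasks at once (app name assumed) -->
  <app>
    <name>mcm1</name>
    <max_concurrent>5</max_concurrent>
  </app>
</app_config>

The 1000-second write-to-disk setting is separate from this file; if memory serves it lives in the computing preferences (the disk_interval value in global_prefs_override.xml), and the client typically needs a "read config files" or a restart before it picks up app_config changes.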

Now that the data_dir is no longer indexed, the changing slide file timestamps, which per the docs the indexer uses to decide when to revisit files, have possibly become a quasi-moot issue too. It is slowly becoming evident that UGM was setting something off on the I/O front, though there is no explanation for why 'MCM only' gave no performance issues.

We could also use a random checkpoint offset for each task, so that if 8 tasks resume or start simultaneously they do not all checkpoint in sync at fixed intervals, which is what they are doing now.
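A minimal illustration of that staggering idea (just a Python sketch of the scheduling math, not anything taken from the BOINC client): give every task a random offset inside the checkpoint interval, and tasks that start together stop writing to disk in lockstep.

import random

CHECKPOINT_INTERVAL = 1000.0  # the "write to disk" setting, in seconds

def checkpoint_schedule(n_tasks, horizon=4000.0):
    # Each task draws a random start offset, so 8 tasks launched at the
    # same moment no longer hit the disk together every 1000 seconds.
    schedule = {}
    for i in range(n_tasks):
        offset = random.uniform(0.0, CHECKPOINT_INTERVAL)
        t, times = offset, []
        while t <= horizon:
            times.append(round(t))
            t += CHECKPOINT_INTERVAL
        schedule[f"task{i}"] = times
    return schedule

for task, times in checkpoint_schedule(8).items():
    print(task, times)

Printing the schedule for 8 tasks shows each one still checkpointing every 1000 seconds, but at its own phase, which is the effect being asked for.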

Momentarily considering whether this also impacted CEP2, which never ran that great on this node. Something else to test.

Finally, hoping that with 3:5 crunching in the present config the feeders keep up the UGM supply, else it is 'SIMAP, we have supply needs'.
[Oct 23, 2014 12:35:14 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: Seeing lower than beta19 efficiencies

Confidence is building that it was the search indexing: now at 99.5 percent efficiency. Upped the UGM:MCM ratio to 4:4 and prefetched a bunch of UGM, batch 592 now; 23 tasks came with 28 x 6.1 MB files, suggesting some files are used by multiple of the assigned tasks. This should carry uninterrupted WCG computing through the day.

The prefetch was set off by there being only 1 task left in 'ready to start'. Since the UGM tasks were running shorter than MCM, more were getting through the mill, with 32 MCM waiting to be processed. Upped, then lowered, the cache and switched profiles again. The things one has to do to compute what one wants to.
[Oct 23, 2014 5:00:25 PM]
armstrdj
Former World Community Grid Tech
Joined: Oct 21, 2004
Post Count: 695
Status: Offline
Re: Seeing lower than beta19 efficiencies

I have not looked at that part of the client code in detail, but I believe it has something to do with the client checking whether any of the files need to be deleted.

Thanks,
armstrdj
[Oct 27, 2014 3:20:53 PM]