Posts: 3227   Pages: 323   [ Previous Page | 258 259 260 261 262 263 264 265 266 267 | Next Page ]
This topic has been viewed 2912147 times and has 3226 replies.
hchc
Veteran Cruncher
USA
Joined: Aug 15, 2006
Post Count: 801
Status: Offline
Re: Work Available

If the Delft researcher needs this for her PhD, she'll be in graduate school for 10 years at this rate lmao.
----------------------------------------
  • i5-7500 (Kaby Lake, 4C/4T) @ 3.4 GHz
  • i5-4590 (Haswell, 4C/4T) @ 3.3 GHz
  • i5-3570 (Ivy Bridge, 4C/4T) @ 3.4 GHz

[May 6, 2023 9:30:02 PM]
hchc
Veteran Cruncher
USA
Joined: Aug 15, 2006
Post Count: 801
Status: Offline
Re: Work Available

@adriverhoef said:
PS Does anyone remember how/what we called the number between 'ARP1' and the generation number in the name of a workunit? (E.g. the number 0033555 in the name of task ARP1_0033555_101_2)


All I could find was this post by Keith Uplinger, where he called them "grid squares" like I did.

https://www.worldcommunitygrid.org/forums/wcg...ad,41967_offset,50#619709

I'm not sure what the official name is.
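As an aside, the naming pattern discussed here (project, grid-square number, generation, replica index) is easy to pull apart mechanically. A small sketch; the field names are this thread's informal terms ("grid square"), not anything official:

```python
# Split an ARP1 task name into its parts, using the informal names
# from this thread ("grid square" for the middle number).
def parse_arp1_task(name: str) -> dict:
    project, grid_square, generation, replica = name.split("_")
    return {
        "project": project,               # e.g. "ARP1"
        "grid_square": int(grid_square),  # the zero-padded cell number
        "generation": int(generation),    # simulation generation
        "replica": int(replica),          # copy 0, 1 or 2 of the workunit
    }

print(parse_arp1_task("ARP1_0033555_101_2"))
# {'project': 'ARP1', 'grid_square': 33555, 'generation': 101, 'replica': 2}
```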
----------------------------------------
  • i5-7500 (Kaby Lake, 4C/4T) @ 3.4 GHz
  • i5-4590 (Haswell, 4C/4T) @ 3.3 GHz
  • i5-3570 (Ivy Bridge, 4C/4T) @ 3.4 GHz

[May 6, 2023 9:39:35 PM]
Loadie
Cruncher
USA
Joined: Mar 29, 2021
Post Count: 15
Status: Offline
Re: Work Available

ARP1_0035308_138 just hit my PC
----------------------------------------
Crunching system held together with duct tape and baling wire.
[May 6, 2023 10:55:04 PM]
adriverhoef
Master Cruncher
The Netherlands
Joined: Apr 3, 2009
Post Count: 2160
Status: Offline
Re: Work Available

Here are another two Extreme workunits with the same grid square (or grid cell):
workunit 301205919
ARP1_0034320_104_0  Linux Ubuntu  Valid  2023-05-05T21:51:30  2023-05-06T12:28:09   7.70/8.13   657.6/617.2
ARP1_0034320_104_1  Linux         InPrg  2023-05-05T21:51:37  2023-05-07T09:51:37   0.00/0.00     0.0/0.0
ARP1_0034320_104_2  Linuxmint     Valid  2023-05-05T21:51:42  2023-05-06T22:04:58  16.54/16.57  576.8/617.2
workunit 301689526
ARP1_0034320_105_0  Linux Ubuntu  InPrg  2023-05-06T23:14:10  2023-05-08T11:14:10   0.00/0.00     0.0/0.0
ARP1_0034320_105_1  Linux Ubuntu  InPrg  2023-05-06T23:13:11  2023-05-08T11:13:11   0.00/0.00     0.0/0.0
ARP1_0034320_105_2  Fedora Linux  InPrg  2023-05-06T23:12:04  2023-05-08T11:12:04   0.00/0.00     0.0/0.0

Adri
PS The workunit numbers suggest that (301689526 - 301205919 =) 483607 workunits were generated between 2023-05-05T21:51:30 and 2023-05-06T23:12:04, an interval of 25 h 20 m 34 s. That works out to (60*483607/(25*3600+20*60+34) =) about 318 workunits per minute. Since a workunit often contains two or more tasks, the rate of task release (for ARP1, MCM1 and OPNG combined) might well be on the order of 500 tasks per minute.
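The back-of-the-envelope rate can be checked directly from the two workunit IDs and timestamps quoted above (the interval works out to 25 h 20 m 34 s):

```python
from datetime import datetime

# Workunit IDs and send times quoted earlier in this post.
first_id, last_id = 301205919, 301689526
t0 = datetime.fromisoformat("2023-05-05T21:51:30")
t1 = datetime.fromisoformat("2023-05-06T23:12:04")

wus = last_id - first_id                  # 483607 workunits generated
minutes = (t1 - t0).total_seconds() / 60  # ~1520.6 minutes elapsed
print(round(wus / minutes))               # ~318 workunits per minute
```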
[May 6, 2023 11:43:40 PM]
Loadie
Cruncher
USA
Joined: Mar 29, 2021
Post Count: 15
Status: Offline
Re: Work Available

Now ARP1_0000048_139. Moving right along.
----------------------------------------
Crunching system held together with duct tape and baling wire.
[May 7, 2023 1:10:57 AM]
MJH333
Senior Cruncher
England
Joined: Apr 3, 2021
Post Count: 266
Status: Offline
Re: Work Available

What still puzzles me if that is so is this -- why have there been far fewer generation movements for the Extremes and Accelerated cells over the last few months?
Al,
I've been wondering this too. The last Extreme unit I received prior to ARP1_0033555_102 was on 7 November 2022. Even with the intermittent progress on ARP1, this seems rather a long gap!

One thought I had was that the "reliable machine" mechanism may no longer be working.

But I think part of the answer may be that the Extremes, or some of them, have an even shorter time_step. I checked the time_step for ARP1_0033555_102 this morning. It was 18. I think this is the first time I've seen a task with a time_step lower than 24.

Cheers,
Mark
[May 7, 2023 9:36:28 AM]
Crystal Pellet
Veteran Cruncher
Joined: May 21, 2008
Post Count: 1320
Status: Offline
Re: Work Available

@adriverhoef said:
PS Does anyone remember how/what we called the number between 'ARP1' and the generation number in the name of a workunit? (E.g. the number 0033555 in the name of task ARP1_0033555_101_2)


All I could find was this post by Keith Uplinger, where he called them "grid squares" like I did.

https://www.worldcommunitygrid.org/forums/wcg...ad,41967_offset,50#619709

I'm not sure what the official name is.

As a reminder for all of us. Source: IBM Publications.

Meso-gamma-scale numerical weather simulations for
sub-Saharan Africa via grid-based, distributed computing


Numerical simulations at cloud-resolving scales have become practical for both research and operational applications due to advances in computing technology. However, deploying such capabilities beyond a limited scale (e.g., an extended metropolitan region or a large watershed) typically remains out of reach due to the computational cost and the complexity of the systems needed to support such work. Yet such capabilities are needed to address the local impacts of precipitation events that can affect much broader areas. In particular, convective storms driven by monsoons remain unresolved by current numerical weather prediction systems applied to sub-Saharan Africa. To address this problem, the African Rainfall Project (ARP) was initiated to deploy the community Weather Research and Forecasting (WRF) model across this region at 1x1 km horizontal resolution on World Community Grid (WCG). WRF is configured to capture the diversity of geographic conditions in the region with appropriate boundary-layer, land-surface and cloud-microphysics parameterizations, in addition to high vertical and temporal resolution. WCG provides a fully distributed computational environment that crowd-sources unused computing power from volunteers' devices and donates it to scientific projects. As such, all computations must be embarrassingly parallel, which creates a challenge for models like WRF; hence, each instance of WRF must operate serially on a volunteer's device. To achieve regional-scale simulations, sub-Saharan Africa is decomposed into individual 52 by 52 km domains at 1x1 km resolution, each the third nest in two-way telescoping grids with common centroids. The outer domains are at 3 and 9 km resolution, respectively, with the same vertical resolution. Each 48-hour simulation is a cold start forced by reanalysis, with output saved every 15 minutes. The collection of these simulations will cover at least one year to capture seasonal variations.
Since there is no operational imperative, the ability of a typical volunteer's system to complete each simulation in several hours is practical. Scaling is achieved by deploying many thousands of systems simultaneously. With this decomposition, over 35,000 overlapping domains cover the region. During post-processing, the individual simulations are stitched together to create a single, consistent output over the period of study. Although the focus is precipitation, the simulations also provide additional standard output, for example 2 m temperature and 10 m horizontal wind velocity. We will report on the results to date and on validation against in situ observations (e.g., from TAHMO, www.tahmo.org) and remotely sensed observations, as well as against conventional WRF deployments for a large computational domain covering a small subset of the region.
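A couple of the concrete numbers in the abstract translate into per-simulation figures. A quick sketch, using nothing beyond the abstract's own parameters:

```python
# Figures taken from the abstract above: 48-hour simulations with
# output saved every 15 minutes; inner domain 52 x 52 km at 1 x 1 km.
sim_hours = 48
output_interval_min = 15
frames = sim_hours * 60 // output_interval_min + 1  # include the t = 0 frame

inner_km = 52
grid_cells = inner_km * inner_km  # 1-km surface cells in the inner nest

print(frames)      # 193 output times per simulation
print(grid_cells)  # 2704 surface cells per inner domain
```

So each of the 35,000+ domains returns 193 output times over its 2704-cell inner nest, which the project then stitches together in post-processing.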

[May 7, 2023 10:05:00 AM]
alanb1951
Veteran Cruncher
Joined: Jan 20, 2006
Post Count: 957
Status: Offline
Re: Work Available

Mark,

This is interesting...
I checked the time_step for ARP1_0033555_102 this morning. It was 18. I think this is the first time I've seen a task with a time_step lower than 24.

Presumably that cell had stalled (again?), which would mean the automation probably wouldn't recognize it straight after a repair (such as a step-size change, or even just a request for a generation to be re-run), so it would probably have to be fired off manually...

I checked the WUs surrounding ARP1_0033555_101 and they were MCM1, so that does look like a unit outside the normal flow. Your generation 102 task, however, seems to be part of a block of ARP1 WUs...

The CPU and elapsed times for the returned results for the generation 101 WU seem to be quite high; it'll be interesting to see how your 102 task compares with typical times on the same system -- I'd expect it to take up to twice as long as usual, given that the usual step is 36 :-)

And it'll be interesting if we can manage to track this cell, with a view to seeing how long it stays in "special measures"... I don't think it's fair to expect the current WCG team to take time out from the general fire-fighting to tell us, so we're left to speculate (constructively, I hope).

Cheers - Al.

P.S. I wonder how many other stalled cells there are...

P.P.S. I wish I got some of these interesting WUs instead of boring high-generation "Normal" tasks. It's a lot easier to learn things first hand :-)
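The "up to twice as long" expectation above follows from step counting: halving the time_step doubles the number of model steps needed to cover the same 48 simulated hours. A sketch, assuming cost per step is roughly constant (an assumption, which is why the doubling is an upper estimate rather than a measured fact):

```python
# Rough runtime scaling with WRF time_step (seconds of model time per step),
# assuming a constant cost per step -- an assumption, not a measurement.
def expected_runtime(baseline_hours: float, baseline_step: int, new_step: int) -> float:
    steps_ratio = baseline_step / new_step  # smaller step => more steps
    return baseline_hours * steps_ratio

# A task that takes ~15.75 h at the usual step of 36 s, re-run at step 18:
print(expected_runtime(15.75, 36, 18))  # 31.5 -> up to about twice as long
```

Real tasks land somewhat under that ceiling, since not all per-task cost scales with the step count.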
[May 7, 2023 10:38:59 AM]
MJH333
Senior Cruncher
England
Joined: Apr 3, 2021
Post Count: 266
Status: Offline
Re: Work Available

The CPU and elapsed times for the returned results for the generation 101 WU seem to be quite high; it'll be interesting to see how your 102 task compares with typical times on the same system
The average CPU time on ARP1 tasks I've completed on this system recently is about 15.75 hours. The 102 task is still running, expected to take about 25 hours total.
Cheers,
Mark
[May 7, 2023 12:29:40 PM]
adriverhoef
Master Cruncher
The Netherlands
Joined: Apr 3, 2009
Post Count: 2160
Status: Offline
Re: Work Available

Another two sequential Extreme workunits with the same grid square:
workunit 301205994
ARP1_0034389_122_0  MSWin 10  Valid  2023-05-05T21:51:50  2023-05-06T22:14:15  20.14/20.16  955.9/781.2
ARP1_0034389_122_1  MSWin 10  Valid  2023-05-05T21:51:49  2023-05-06T13:12:05  10.25/10.41  606.6/781.2
ARP1_0034389_122_2  MSWin 10  NoRep  2023-05-05T21:51:48  2023-05-07T09:51:48   0.00/0.00     0.0/0.0
workunit 301689525
ARP1_0034389_123_0  MSWin 10  InPrg  2023-05-06T23:12:10  2023-05-08T11:12:10   0.00/0.00     0.0/0.0
ARP1_0034389_123_1  MSWin 11  InPrg  2023-05-06T23:12:16  2023-05-08T11:12:16   0.00/0.00     0.0/0.0
ARP1_0034389_123_2  MSWin 10  InPrg  2023-05-06T23:11:51  2023-05-08T11:11:51   0.00/0.00     0.0/0.0

Adri
[May 7, 2023 1:46:44 PM]