
Test case for benchmarking MPI run

Posted: November 17th, 2020, 2:07 am
by hlbutterfly
We are trying to run "hycm_std" on our platform with 32 nodes. However, we notice that the simulation time is the same as, or even slower than, "hycs_std". Is there a test case we could use to check our MPI run and benchmark the simulation time? We just want to make sure there is nothing wrong with our setup.

I read previous posts about MPI simulations taking as much time as single-node simulations. When we run "hycm_std", we get MESSAGE files for each node (i.e. MESSAGE.001, MESSAGE.002, etc.). Everything looks normal, but the run seems to take forever (more than a day) to complete a 24-hour simulation.
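For reference, we launch the MPI run roughly as follows (the process count and executable path here are placeholders for our site-specific job script):

    mpirun -np 32 /path/to/hysplit/exec/hycm_std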

I am attaching the CONTROL, SETUP.CFG, and MESSAGE files.

Thank you very much!

Re: Test case for benchmarking MPI run

Posted: November 24th, 2020, 10:20 am
by barbara.stunder
We do not have an MPI benchmarking case, though we should probably add one.  

Some general questions and suggestions regarding run duration:

Increase the meteorological subgrid size to lower the number of reads, for example to MGMIN=300 or more. If MGMIN is larger than the grid, the whole grid is read. The MESSAGE file shows that you are currently using a value of 10.
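If it helps, that is a single entry in the &SETUP namelist of SETUP.CFG (a minimal sketch; keep your other entries as they are, and treat 300 as just a starting point):

    &SETUP
    mgmin = 300,
    /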

Why are you running HYSPLIT in the 3-d puff mode (INITD=3)? That causes the number of particles to increase substantially with time, which slows down the run.
You could try INITD=4 (horizontal puff and vertical particle) or the default INITD=0.
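For example, the hybrid mode would look like this in SETUP.CFG (again a sketch; only the INITD entry changes):

    &SETUP
    initd = 4,
    /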

Do you need 1-minute output? Does the output look reasonable? That is a lot of write time.
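The sampling interval is the last line of the concentration grid section in CONTROL, given as three integers (type, hours, minutes, with type 0 meaning averaging). For example, changing 1-minute output

    00 00 01

to hourly output

    00 01 00

would cut the number of concentration writes by a factor of 60.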

Please try hycs_std and hycm_std with these changes, and if there is still little difference, send the EMITIMES file, plus the CONTROL and SETUP.CFG if they changed, so I can reproduce the run.