Download errors when parallelizing HYSPLIT dispersion runs with a batch script
Posted: March 29th, 2023, 9:45 am
I need to run HYSPLIT for 575 point sources, monthly, from 1970 to 2017 (i.e., 575 × 47 × 12 = 324,300 runs). I am using a supercomputer to parallelize the process: I wrote one R script that parallelizes the monthly runs for a single point source, then submitted 575 separate jobs (one per point source).
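For context, the submission looks roughly like the sketch below. This assumes the scheduler is SLURM and the script name is hypothetical; a job array with a throttle suffix is one way to cap how many of the 575 jobs run (and hit the met-data server) at the same time:

```shell
# Sketch only: submit all 575 point sources as a SLURM job array, with the
# %50 suffix capping the number of simultaneously running tasks at 50.
# Inside run_point_source.sh (hypothetical name), the R script would pick
# its point source from $SLURM_ARRAY_TASK_ID.
sbatch --array=1-575%50 run_point_source.sh
```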
The output of each run is saved in a different working directory, so that there are no conflicts. This process worked fine for about 400 point sources, and then I started getting various timeout, download, and connection errors.
Is it possible that NOAA has blocked the connection from the supercomputer node due to too many attempts? Is there a way to resolve that?
Note that I am using the splitR package as a wrapper.
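Would it make sense to wrap the met-file download step in a retry loop with backoff, so transient timeouts don't kill a run? A minimal sketch of what I have in mind (`download_met` is just a placeholder for whatever command actually fetches the file, and the reanalysis filename is only an example):

```shell
#!/bin/sh
# Retry a command up to a maximum number of attempts, doubling the wait
# between attempts. Returns 0 on the first success, 1 if all attempts fail.
# Usage: retry <max_attempts> <command...>
retry() {
  max=$1; shift
  attempt=1
  delay=1
  while true; do
    "$@" && return 0
    if [ "$attempt" -ge "$max" ]; then
      echo "failed after $attempt attempts: $*" >&2
      return 1
    fi
    echo "attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    attempt=$((attempt + 1))
    delay=$((delay * 2))
  done
}

# Example (placeholder command and filename):
# retry 5 download_met RP197001.gbl
```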
Thank you.