Large RAM usage for backward dispersion models
Posted: January 23rd, 2026, 7:46 am
I am running backward dispersion models on a Linux server, using 0.25 degree GFS weather data with 1e6 particles and simulation times of 5 to 10 days. These simulations max out at 15 GB of RAM per core, even when a parallelized version of HYSPLIT is used. Looking more closely at the memory profile, HYSPLIT uses around 1-2 GB for the first 20-30 minutes, then climbs steeply and plateaus at 15 GB for the rest of the runtime (sampled roughly as sketched at the end of this post). This is the case for both multi-core and single-core runs. My questions are:
- Is this normal? (Feature, bug or user error?)
- Is there any way to reduce the large RAM usage? I have tried changing the number of particles, the simulation time, and the size of the output grid, but RAM usage always maxes out at 15 GB. The only thing that makes a difference is changing the weather data used.
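For reference, this is roughly how I sampled the per-process memory profile described above, in case that matters for interpreting the numbers. A minimal sketch only (Python, Linux /proc); the PID argument and the one-minute polling interval are placeholders rather than my actual batch scripts:

```python
#!/usr/bin/env python3
# Minimal sketch: log the resident set size (VmRSS) of one process over time
# by polling /proc/<pid>/status (Linux only). The PID argument and the
# one-minute interval are placeholders, not my actual batch tooling.
import sys
import time

def rss_kib(pid: int) -> int:
    """Return the current VmRSS of <pid> in KiB, or -1 if not reported."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])   # value is reported in kB
    return -1

if __name__ == "__main__":
    pid = int(sys.argv[1])                    # PID of the HYSPLIT process
    start = time.time()
    while True:
        try:
            kib = rss_kib(pid)
        except FileNotFoundError:             # process has exited
            break
        print(f"{time.time() - start:8.0f} s  {kib / 1024**2:6.2f} GiB", flush=True)
        time.sleep(60)
```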