Run Fluent, export .cas and .dat files, and compress them after computation (on Linux)

I could not find a way to make Fluent auto-save its .cas and .dat files directly in .cas.gz and .dat.gz format, so I gave up on compressing them through the journal file. Instead, I decided to compress them in the job script after the computation finishes, provided there is wallclock time remaining. For this I used a simple shell script utilising pigz (a parallel implementation of gzip). If your HPC system does not have pigz, you can use plain gzip instead, though you will lose the benefit of parallel compression.
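The "remaining wallclock time" check can be sketched as below. This is a hypothetical sketch, not something Fluent or the scheduler provides: WALLCLOCK_LIMIT and MIN_COMPRESS_TIME are assumed values that you would set to match your own job request and expected compression time.

```shell
#!/bin/bash
# Hypothetical sketch: skip compression when too little wallclock remains.
WALLCLOCK_LIMIT=$((48 * 3600))   # assumed 48-hour job request, in seconds
MIN_COMPRESS_TIME=1800           # assumed ~30 minutes needed to compress

start=${SECONDS}
# ... the Fluent run would go here ...
elapsed=$(( SECONDS - start ))
remaining=$(( WALLCLOCK_LIMIT - elapsed ))

if [ "${remaining}" -gt "${MIN_COMPRESS_TIME}" ]; then
    echo "enough time left (${remaining}s): compressing"
    # the pigz/gzip loop from the scripts below would go here
else
    echo "only ${remaining}s left: skipping compression"
fi
```

Bash's built-in SECONDS counter makes this self-contained; if your scheduler exports the job's time limit as an environment variable, you can use that instead of hard-coding it.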

Sample script for pigz

The script will be something like this:

#!/bin/bash
#$ OTHER PARAMETERS REQUIRED BY HPC SYSTEM

export INPUT='my_journal.jou'
N_cores=28

. /etc/profile.d/modules.sh
module load ansys
module load mpi-intel

fluent 3ddp -mpi=intel -g -i "${INPUT}" > log.txt 2>&1

# fluent runs in the foreground here, so the script only reaches this
# point after the solver exits; no separate wait is needed.

# Compress all case and data files in parallel with pigz.
# The parentheses make -type f apply to both name patterns, and
# -print0 / read -d '' handles filenames containing spaces safely.
find . -type f \( -name '*.dat' -o -name '*.cas' \) -print0 |
while IFS= read -r -d '' file; do
    pigz -p "${N_cores}" "$file"
done
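Before deleting any uncompressed originals, it is worth sanity-checking the archives. pigz writes standard gzip streams, so plain gzip -t can test them; a small sketch (the find pattern mirrors the loop above):

```shell
# Verify integrity of every compressed case/data file.
# pigz output is ordinary gzip format, so gzip -t can test it.
find . -type f \( -name '*.cas.gz' -o -name '*.dat.gz' \) -print0 |
while IFS= read -r -d '' f; do
    if gzip -t "$f" 2>/dev/null; then
        echo "OK: $f"
    else
        echo "CORRUPT: $f" >&2
    fi
done
```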

Sample script for gzip

Or, if you are using normal gzip, simply:

#!/bin/bash
#$ OTHER PARAMETERS REQUIRED BY HPC SYSTEM

export INPUT='my_journal.jou'

. /etc/profile.d/modules.sh
module load ansys
module load mpi-intel

fluent 3ddp -mpi=intel -g -i "${INPUT}" > log.txt 2>&1

# fluent runs in the foreground here, so no separate wait is needed.

# Compress all case and data files with gzip. The parentheses make
# -type f apply to both name patterns, and -exec handles odd filenames.
find . -type f \( -name '*.dat' -o -name '*.cas' \) -exec gzip {} +
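Even without pigz you can recover some parallelism when there are several files to compress, by running multiple gzip processes at once with xargs -P. A sketch, reusing the N_cores variable from the pigz script; note this parallelises across files (each gzip is still single-threaded), whereas pigz parallelises within a single file:

```shell
# Compress case/data files with up to N_cores concurrent gzip processes.
N_cores=28
find . -type f \( -name '*.dat' -o -name '*.cas' \) -print0 |
    xargs -0 -r -P "${N_cores}" -n 1 gzip
```

With one or two huge files this gains little, so prefer pigz when it is available.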