Two projects for practicing distributed computing with Slurm on an HPC cluster.
getData.sh: split the data into three files (pretend these files are large, so they are worth processing in parallel jobs)
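A minimal local sketch of this split step might look like the following; the input name `data.csv`, the `part<N>.csv` shard names, and the two-column layout are assumptions, not the repo's actual files:

```shell
# Split a headered CSV into three shards, repeating the header in each
# shard so every array task can process its file independently.
set -eu
printf 'name,weight\na,3\nb,1\nc,2\nd,5\ne,4\nf,6\n' > data.csv   # toy stand-in data
header=$(head -n 1 data.csv)
total=$(($(wc -l < data.csv) - 1))      # number of data rows, excluding the header
per=$(( (total + 2) / 3 ))              # rows per shard, rounded up
for i in 1 2 3; do
    start=$(( (i - 1) * per + 2 ))      # +2 skips the header line
    printf '%s\n' "$header" > "part${i}.csv"
    tail -n +"$start" data.csv | head -n "$per" >> "part${i}.csv"
done
wc -l part1.csv part2.csv part3.csv
```

With six data rows this yields three shards of two rows each, every one carrying its own header.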
jobArray.sh: process the .csv file (with header) corresponding to the SLURM_ARRAY_TASK_ID given to it
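A sketch of what the per-task step could do, runnable outside Slurm by defaulting the task ID to 1; the shard name `part1.csv`, the output name `out1`, and weight-in-column-2 are assumptions:

```shell
# Select the shard for this array task, skip its header, and write the
# row with the smallest weight (numeric sort on field 2) to out<ID>.
set -eu
printf 'name,weight\na,3\nb,1\nc,2\n' > part1.csv   # toy stand-in shard
ID=${SLURM_ARRAY_TASK_ID:-1}                        # falls back to 1 for local testing
tail -n +2 "part${ID}.csv" | sort -t, -k2,2n | head -n 1 > "out${ID}"
cat "out${ID}"
```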
findLightest.sh: combine the three output files into one and write the lightest weight to a file named "out"
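The combine step could be sketched as below; the per-task output names `out1`..`out3` and the intermediate `combined.csv` are assumed names:

```shell
# Concatenate the three per-task results, then keep the single row with
# the smallest weight (general numeric sort handles decimals) as "out".
set -eu
printf 'b,1\n' > out1
printf 'e,0.5\n' > out2
printf 'h,2\n' > out3
cat out1 out2 out3 > combined.csv
sort -t, -k2,2g combined.csv | head -n 1 > out
cat out
```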
submit.sh: use sbatch to submit parallel jobs
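A job-submission sketch, assuming a three-task array and an `afterok` dependency so the combine step only runs once every task succeeds (the script names match those above; the exact flags used by the repo may differ):

```shell
#!/bin/bash
# Submit jobArray.sh as array tasks 1-3; --parsable makes sbatch print
# only the job ID, which we then use to chain findLightest.sh after it.
jid=$(sbatch --parsable --array=1-3 jobArray.sh)
sbatch --dependency=afterok:"$jid" findLightest.sh
```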
pipeline.sh: download and cache the data
turn.sh: merge all the .csv files into one big .csv file (allMSN.csv)
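One way to sketch the merge, keeping the header from only the first file; the `msn*.csv` shard names are assumptions, while `allMSN.csv` is the output named above:

```shell
# Merge headered CSV shards into allMSN.csv: FNR resets per input file,
# so "FNR == 1 && NR != 1" skips every header except the very first.
set -eu
printf 'id,val\n1,10\n' > msn1.csv   # toy stand-in shards
printf 'id,val\n2,20\n' > msn2.csv
awk 'FNR == 1 && NR != 1 { next } { print }' msn1.csv msn2.csv > allMSN.csv
cat allMSN.csv
```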
farthest.sh: find the farthest value in allMSN.csv
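Assuming "farthest" means the largest value in a numeric column (column 2 here; both that and the `farthest_out` file name are assumptions), the step could look like:

```shell
# Skip the header, sort column 2 in ascending general-numeric order,
# and keep the last (largest) row as the farthest value.
set -eu
printf 'id,val\n1,10\n2,20\n3,15\n' > allMSN.csv   # toy stand-in data
tail -n +2 allMSN.csv | sort -t, -k2,2g | tail -n 1 > farthest_out
cat farthest_out
```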
submit.sh: use sbatch to submit parallel jobs