Snakemake-Profiles / slurm

Cookiecutter for snakemake slurm profile


SLURM --ntasks-per-node, --cpus-per-task, etc.

whophil opened this issue

Disclaimer: I am very new to Snakemake

I was trying to use this profile to submit a job with SLURM's --ntasks-per-node option and found that I needed to modify RESOURCE_MAPPING in slurm-submit.py to do so. The mapping currently looks like this:

RESOURCE_MAPPING = {
    "time": ("time", "runtime", "walltime"),
    "mem": ("mem", "mem_mb", "ram", "memory"),
    "mem-per-cpu": ("mem-per-cpu", "mem_per_cpu", "mem_per_thread"),
    "nodes": ("nodes", "nnodes"),
    "partition": ("partition", "queue"),
}

Other flags I often use that are not mapped include --cpus-per-task and --threads-per-core.
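
For illustration, here is a sketch of what an extended mapping might look like. The added keys and the underscore-style resource names are hypothetical; the names actually accepted by slurm-submit.py would need to follow whatever convention the profile uses.

RESOURCE_MAPPING = {
    "time": ("time", "runtime", "walltime"),
    "mem": ("mem", "mem_mb", "ram", "memory"),
    "mem-per-cpu": ("mem-per-cpu", "mem_per_cpu", "mem_per_thread"),
    "nodes": ("nodes", "nnodes"),
    "partition": ("partition", "queue"),
    # hypothetical additions: sbatch option on the left,
    # resource names a rule could use on the right
    "ntasks-per-node": ("ntasks_per_node", "tasks_per_node"),
    "cpus-per-task": ("cpus_per_task",),
    "threads-per-core": ("threads_per_core",),
}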

Is there any reason not to do this? If not, would a PR with said change be accepted?

Hi @whophil, I've been looking into this today for myself. I just found out that you can set threads on a rule outside of the resources section, and it appears to get passed to SLURM as --cpus-per-task. On my cluster I checked the resource allocation for the job below, and 8 CPUs had been allocated. Not sure if this is helpful or if you have already solved it, but I figured I'd mention it in case.

rule do_stuff:
    input:
        "input_file"
    output:
        "output_file"
    threads: 8
    resources:
        partition="short",
        mem_mb=int(100 * 1000),  # MB, i.e. 100 GB
        runtime=int(10 * 60),  # minutes, i.e. 10 hours
    shell:
        "do stuff"

If you already know the exact Slurm flags you want to define, an alternative is to use my smk-simple-slurm profile. You can directly edit the example config.yaml to pass --ntasks-per-node to sbatch, and also to set a default value for this resource. And the same applies to any other Slurm flag you might want to use.
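
For reference, a minimal sketch of the relevant part of such a config.yaml, assuming the cluster/default-resources layout used by smk-simple-slurm's example config; ntasks_per_node is a hypothetical resource name and the partition value is a placeholder:

cluster:
  sbatch
    --partition={resources.partition}
    --cpus-per-task={threads}
    --mem={resources.mem_mb}
    --ntasks-per-node={resources.ntasks_per_node}
default-resources:
  - partition=short
  - mem_mb=1000
  - ntasks_per_node=1

A rule would then only need to set ntasks_per_node in its resources section where it matters, with the default applying everywhere else.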