sebrauschert / OceanOmics-amplicon-nf


Introduction

This pipeline creates ASVs (amplicon sequence variants) and ZOTUs (zero-radius OTUs) from eDNA amplicon data, assigns taxonomy to those ASVs/ZOTUs, and finally produces phyloseq objects.

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies.

Pipeline summary

  1. Read QC (FastQC)
  2. Present QC for raw reads (MultiQC)
  3. Demultiplex with [Cutadapt](https://cutadapt.readthedocs.io/en/stable/)
  4. Optionally demultiplex with [Obitools3](https://git.metabarcoding.org/obitools/obitools3)
  5. Additional QC with [Seqkit Stats](https://bioinf.shenwei.me/seqkit/usage/)
  6. Create ASVs with DADA2 (DADA2)
  7. Create ZOTUs with VSEARCH (VSEARCH)
  8. Optionally create ZOTUs with [USEARCH](https://www.drive5.com/usearch/)
  9. Curate ASVs/ZOTUs with LULU (LULU)
  10. Assign taxonomy with blastn (blastn)
  11. Lowest Common Ancestor (LCA)
  12. Phyloseq object creation (phyloseq)
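
For orientation, a few of the steps above correspond roughly to the following standalone commands. This is a hedged sketch only, not the pipeline's actual invocation: file names (`reads_R1.fastq.gz`, `barcodes.fasta`, `merged.fasta`) are placeholders, and the parameters the pipeline actually passes may differ.

```shell
#!/usr/bin/env bash
# Sketch of manual equivalents for some pipeline steps; each command is
# guarded so the script degrades gracefully when a tool is not installed.
set -u
run() { command -v "$1" >/dev/null 2>&1 && "$@" || echo "skipping (not available): $*"; }

run fastqc reads_R1.fastq.gz reads_R2.fastq.gz              # 1. read QC
run multiqc .                                               # 2. aggregate QC report
run cutadapt -g ^file:barcodes.fasta \
    -o '{name}.R1.fastq.gz' -p '{name}.R2.fastq.gz' \
    reads_R1.fastq.gz reads_R2.fastq.gz                     # 3. demultiplex per barcode
run seqkit stats ./*.fastq.gz                               # 5. per-file read statistics
run vsearch --cluster_unoise merged.fasta --minsize 8 \
    --centroids zotus.fasta                                 # 7. denoise into ZOTUs
run blastn -query zotus.fasta -db nt -outfmt 6 \
    -out zotus_blast.tsv                                    # 10. taxonomy via blastn
```
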

Quick Start

  1. Install Nextflow (>=22.10.1)
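
The standard self-installer from nextflow.io can be used for this step; it assumes Java 11 or later is already on your PATH:

```shell
# Standard Nextflow self-installer (needs Java 11+ and network access).
curl -s https://get.nextflow.io | bash || echo "installer failed; check Java and network access"
# Optionally move the launcher somewhere on your PATH, e.g.:
# mv nextflow ~/bin/
```
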

  2. Install any of Docker, Singularity (you can follow this tutorial), Podman, Shifter or Charliecloud for full pipeline reproducibility (you can use Conda both to install Nextflow itself and also to manage software within pipelines. Please only use it within pipelines as a last resort; see docs).

  3. Download the pipeline and test it on a minimal dataset with a single command:

nextflow run MinderooFoundation/OceanOmics-amplicon-nf -profile test,YOURPROFILE --outdir <OUTDIR>

Note that some form of configuration will be needed so that Nextflow knows how to fetch the required software. This is usually done in the form of a config profile (YOURPROFILE in the example command above). You can chain multiple config profiles in a comma-separated string.

  • The pipeline comes with config profiles called docker, singularity, podman, shifter, charliecloud and conda which instruct the pipeline to use the named tool for software management. For example, -profile test,docker.
  • If you are using conda, it is highly recommended to use the NXF_CONDA_CACHEDIR or conda.cacheDir settings to store the environments in a central location for future pipeline runs.
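
For example, a user-level Nextflow configuration could pin the Conda cache directory via the standard `conda` scope (the cache path below is a placeholder):

```groovy
// ~/.nextflow/config -- hypothetical user config; the cacheDir path is a placeholder
conda {
    cacheDir = '/shared/nxf-conda-cache'
}
```

Equivalently, setting the `NXF_CONDA_CACHEDIR` environment variable before launching has the same effect.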
  4. Start running your own analysis!
nextflow run MinderooFoundation/OceanOmics-amplicon-nf --input samplesheet.csv --outdir <OUTDIR> --bind_dir <BINDDIR> --dbfiles "<BLASTDBFILES>" -profile <docker/singularity/podman/shifter/charliecloud/conda/institute>
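
A minimal samplesheet might look like the following. The column names here are assumptions based on common nf-core conventions, so check the pipeline's usage documentation for the required format:

```csv
sample,fastq_1,fastq_2
SAMPLE1,/path/to/SAMPLE1_R1.fastq.gz,/path/to/SAMPLE1_R2.fastq.gz
SAMPLE2,/path/to/SAMPLE2_R1.fastq.gz,/path/to/SAMPLE2_R2.fastq.gz
```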

Documentation

The OceanOmics-amplicon-nf pipeline comes with documentation about the pipeline usage, parameters and output.

Credits

This pipeline incorporates aspects of eDNAFlow, which was written by Mahsa Mousavi. OceanOmics-amplicon-nf was written by Adam Bennett. Other people who have contributed to this pipeline include Sebastian Rauschert (conceptualisation), Philipp Bayer, and Jessica Pearce. This pipeline was built using the nf-core template.

Citations

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

About

License: MIT License


Languages

Nextflow 64.8% · Groovy 20.3% · Python 13.7% · HTML 1.2%