Nextomics / NextDenovo

Fast and accurate de novo assembler for long reads

nextgraph segmentation fault

ucassee opened this issue · comments

Hi developer,

Describe the bug

I encountered an error while running 03.ctg_graph/01.ctg_graph.sh.work/ctg_graph0/nextDenovo.sh.
I have attached the log files:
nextDenovo.sh.e.txt
pid161556.log.info.txt

Looking forward to your reply. Thanks.

Genome characteristics
genome size:2.1G

Input data
pacbio

Operating system
Which operating system and version are you using?
PBS

GCC
What version of GCC are you using?
gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC)

Python
What version of Python are you using?
Python 2.7.16

NextDenovo
What version of NextDenovo are you using?
nextDenovo v2.4.0

Hi, first follow #101 to check the ovl files. If there is no error, I may need the input files of nextgraph (all ovl and ovl.bl files) to reproduce this bug; without these files, it is almost impossible to fix this error.

Hi @moold ,
I have attached the log file produced by: ls cns_align*/cns.filt.dovt.ovl | while read line; do echo $line; /data/software/NextDenovo/bin/ovl_cvt -m 1 $line | head -5; done > check.log
check.log

Hi, it seems everything is OK, but these are only the first 5 lines. You can rerun nextgraph with the input file 01.ctg_graph.input.ovls containing only one ovl file; run it once for each ovl file to find which one causes this error.
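A minimal sketch of that per-file check, assuming the nextgraph path and the input-list layout mentioned in this thread (adjust the paths to your run directory):

```shell
# Sketch: feed nextgraph one ovl file at a time to isolate the one that crashes.
# NEXTGRAPH and the file layout are assumptions taken from this thread.
NEXTGRAPH=${NEXTGRAPH:-/data/software/NextDenovo/bin/nextgraph}
for ovl in cns_align*/cns.filt.dovt.ovl; do
    [ -f "$ovl" ] || continue                   # skip if the glob matched nothing
    echo "$ovl" > 01.ctg_graph.input.ovls       # input list with a single ovl file
    if "$NEXTGRAPH" -a 1 -f 01.ctg_graph.input.seqs 01.ctg_graph.input.ovls \
           -o nd.asm.test.fasta; then
        echo "OK: $ovl"
    else
        echo "FAILED: $ovl"                     # a segfault exits non-zero (139)
    fi
done
```

Each failing file then points at the cns_align subtask whose output needs to be regenerated.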

Hi @moold
I ran nextgraph with each cns_align*/cns.filt.dovt.ovl file. I found that cns_align(02/03/04/08)/cns.filt.dovt.ovl fail with a Segmentation fault.

Then you need to rerun the cns_align(02/03/04/08) subtasks to regenerate these files. Run them one by one to avoid unknown errors.

Hi @moold
The running time of each subtask is about 50 days. Can I run them in parallel, or modify some settings to speed this up?
The command in one subtask looks like:
time /data/software/NextDenovo/bin/minimap2-nd -I 20G --step 2 --dual=yes -t 28 -x ava-pb -k 17 -w 17 --minlen 2000 --maxhan1 5000 /data/Project/01.assmbly/01_rundir/02.cns_align/01.seed_cns.sh.work/seed_cns0/cns.fasta /data/Project/01.assmbly/01_rundir/02.cns_align/01.seed_cns.sh.work/seed_cns2/cns.fasta -o cns.filt.dovt.ovl

I am not sure; an error in an ovl file is usually caused by insufficient RAM. Note that the required RAM is dynamic, so if your RAM is sufficient, you can run them in parallel. Our tests show the peak RAM of each of these subtasks is about 32-120 GB, depending on the maximum read length and the values of the -I and -t options.
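Since the required RAM is dynamic, it can help to measure the actual peak of one subtask before deciding how many to run per node. A small sketch using GNU time, assuming it is installed at /usr/bin/time (the subtask path in the comment is illustrative):

```shell
# Sketch: report the peak resident memory (in kB) of any command via GNU time.
# Example use (illustrative path): peak_ram sh seed_cns2/nextDenovo.sh
peak_ram() {
    log=$(mktemp)
    /usr/bin/time -v "$@" 2> "$log"            # GNU time writes stats to stderr
    grep "Maximum resident set size" "$log"    # peak RSS, in kilobytes
    rm -f "$log"
}
```

Note that the wrapped command's own stderr also lands in the temporary log; for a rough peak-RAM reading that is usually acceptable.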

I run it on a cluster. Each node has 256 GB RAM and 28 cores, so I think RAM is not a problem.

No, 256 GB RAM may still not be enough, because your -I, -t, and maximum read length differ from our tests.

I run each subtask on a different node. If the RAM is not enough, should I set -I smaller?

Yes, and maybe -t as well.
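As an illustration (the values are untuned guesses, not recommendations), the subtask command above could be rebuilt with a smaller index batch and fewer threads; the full input paths are shortened here:

```shell
# Sketch: lower peak RAM by shrinking -I (index batch size) and -t (threads).
# 10G and 14 are illustrative values, halved from the thread's 20G and 28.
MM2=/data/software/NextDenovo/bin/minimap2-nd
IDX_BATCH=10G   # was -I 20G: smaller batches index the target in more passes
THREADS=14      # was -t 28: fewer threads, fewer per-thread buffers
CMD="$MM2 -I $IDX_BATCH --step 2 --dual=yes -t $THREADS -x ava-pb -k 17 -w 17 \
--minlen 2000 --maxhan1 5000 seed_cns0/cns.fasta seed_cns2/cns.fasta \
-o cns.filt.dovt.ovl"
echo "$CMD"     # dry run: inspect the command before resubmitting the subtask
```

In stock minimap2, a smaller -I splits the target index into more batches and the queries are scanned once per batch, so runtime grows roughly with the number of batches; fewer threads reduce parallelism proportionally.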

Will smaller -I and -t values significantly increase the running time?

Was this solved?
I'm having a similar issue:

[INFO] 2024-04-30 18:24:00 Initialize graph and reading...
/90daydata/tephritid_gss/dani/nextDenovo/SMB_Q12L5K_nextDenovo/03.ctg_graph/01.ctg_graph.sh.work/ctg_graph1/nextDenovo.sh: line 5: 267342 Segmentation fault (core dumped) /project/tephritid_gss/daniel.paulo/envs/nextDenovo/lib/python3.10/site-packages/nextdenovo/bin/nextgraph -a 1 -f /90daydata/tephritid_gss/dani/nextDenovo/SMB_Q12L5K_nextDenovo/.//03.ctg_graph/01.ctg_graph.input.seqs /90daydata/tephritid_gss/dani/nextDenovo/SMB_Q12L5K_nextDenovo/.//03.ctg_graph/01.ctg_graph.input.ovls -o nd.asm.p.fasta

my run.cfg looks like this:

[General]
job_type = local
job_prefix = nextDenovo
task = all
rewrite = yes
deltmp = yes
parallel_jobs = 1
input_type = raw
input_fofn = input.fofn
read_type = ont
workdir = ./

[correct_option]
genome_size = 640m
seed_depth = 31
pa_correction = 6
minimap2_options_raw = -t 48
sort_options = -m 20g -t 48
correction_options = -p 48 --blacklist

[assemble_option]
minimap2_options_cns = -t 48
nextgraph_options = -a 1

and I'm running on a high-performance computing (HPC) cluster: 48 CPUs, 1 node, 1 task, max 372 GB RAM.