Nextomics / NextDenovo

Fast and accurate de novo assembler for long reads


AttributeError: 'Cluster' object has no attribute '_set_lsf_mem'

ryys1122 opened this issue · comments

commented

Describe the bug
A clear and concise description of what the bug or error is.

Error message
Paste the complete log message, including the main task log and the failed subtask log.
The main task log is usually located in your working directory and is named pidXXX.log.info; its last few lines will point to the failed subtask log, such as:

Traceback (most recent call last):
  File "~/NextDenovo/2.5.0//nextDenovo", line 850, in <module>
    main(args)
  File "~/NextDenovo/2.5.0//nextDenovo", line 593, in main
    submit=cfg['submit'], kill=cfg['kill'], check_alive=cfg['check_alive'], job_id_regex=cfg['job_id_regex'])
  File "~/NextDenovo/2.5.0/lib/python2.7/site-packages/paralleltask/task_control.py", line 176, in set_run
    cpu, mem, cfg_file, submit, kill, check_alive, job_id_regex)
  File "~/NextDenovo/2.5.0/lib/python2.7/site-packages/paralleltask/task_control.py", line 202, in __init__
    self.mem = self._parse_mem(str(mem))
  File "~/NextDenovo/2.5.0/lib/python2.7/site-packages/paralleltask/task_control.py", line 248, in _parse_mem
    return int(self._set_lsf_mem())
AttributeError: 'Cluster' object has no attribute '_set_lsf_mem'
~/NextDenovo/test_data/01_rundir/02.cns_align/02.cns_align.sh.work/cns_align0/nextDenovo.sh.e
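The traceback points at a scoping bug: in ParallelTask's task_control.py, _set_lsf_mem is defined as a function nested inside _parse_mem, so it never becomes an attribute of the Cluster object, yet the LSF branch calls it via self. A minimal sketch of the failure (hypothetical stripped-down class, mirroring the structure of paralleltask's Cluster):

```python
class Cluster:
    def _parse_mem(self, mem):
        def _set_lsf_mem():          # nested local function only
            return 100
        # Bug: this looks up '_set_lsf_mem' as an instance attribute,
        # but a function nested in a method is never bound to the class.
        return int(self._set_lsf_mem())

try:
    Cluster()._parse_mem('100G')
except AttributeError as e:
    print(e)  # 'Cluster' object has no attribute '_set_lsf_mem'
```

Calling the local name directly (_set_lsf_mem() instead of self._set_lsf_mem()) would avoid the error, which is what the inlined fix below achieves.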

Genome characteristics
genome size, heterozygous rate, repeat content...

Input data
Total base count, sequencing depth, average/N50 read length...

Config file
Please paste the complete content of the config file (run.cfg) here.

Operating system
Which operating system and version are you using?
You can use the command lsb_release -a to get it.

GCC
What version of GCC are you using?
You can use the command gcc -v to get it.

Python
python 2.7

NextDenovo
nextDenovo 2.5.0

To Reproduce (Optional)
Steps to reproduce the behavior. Providing a minimal test dataset on which we can reproduce the behavior will generally lead to quicker turnaround time!

Additional context (Optional)
I changed the code from

def _parse_mem(self, mem):

		def _set_lsf_mem():
			unit = 'M'
			unit_cfg_fpath = os.getenv('LSF_ENVDIR') + "/lsf.conf"
			if os.path.exists(unit_cfg_fpath):
				with open(unit_cfg_fpath) as IN:
					g = re.search(r'LSF_UNIT_FOR_LIMITS\s*=\s*(\S+)\s*', IN.read(), re.I)
					if g:
						unit = g.group(1)
			return parse_num_unit(mem) / parse_num_unit("1%s" % (unit))

		if self.job_type == 'slurm' and '--mem-per-cpu' in self._submit:
			return int(parse_num_unit(mem, 1024)/1000000/int(self.cpu)) + 1
		if self.job_type == 'lsf' and 'mem' in self._submit:
			return int(self._set_lsf_mem())
		return mem

to

def _parse_mem(self, mem):

		if self.job_type == 'slurm' and '--mem-per-cpu' in self._submit:
			return int(parse_num_unit(mem, 1024)/1000000/int(self.cpu)) + 1
		if self.job_type == 'lsf' and 'mem' in self._submit:
			# inline the old _set_lsf_mem logic so no attribute lookup is needed
			unit = 'M'
			unit_cfg_fpath = os.getenv('LSF_ENVDIR') + "/lsf.conf"
			if os.path.exists(unit_cfg_fpath):
				with open(unit_cfg_fpath) as IN:
					g = re.search(r'LSF_UNIT_FOR_LIMITS\s*=\s*(\S+)\s*', IN.read(), re.I)
					if g:
						unit = g.group(1)
			return int(parse_num_unit(mem) / parse_num_unit("1%s" % (unit)))
		return mem

With this change it runs OK.
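For context, the LSF branch converts the configured memory into the unit LSF expects (read from LSF_UNIT_FOR_LIMITS in lsf.conf) by dividing two sizes in bytes. The real parse_num_unit lives in ParallelTask; the stand-in below is an assumption about its behavior, just to show the arithmetic:

```python
import re

def parse_num_unit(s, base=1024):
    # Hypothetical stand-in for ParallelTask's parse_num_unit:
    # convert a size string like '10G' or '1M' into bytes.
    units = {'': 1, 'K': base, 'M': base ** 2, 'G': base ** 3, 'T': base ** 4}
    m = re.match(r'^\s*([\d.]+)\s*([KMGT]?)B?\s*$', s, re.I)
    num, unit = float(m.group(1)), m.group(2).upper()
    return num * units[unit]

# With LSF_UNIT_FOR_LIMITS = MB, a 10G memory limit is submitted as:
print(int(parse_num_unit("10G") / parse_num_unit("1M")))  # 10240
```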

Hi, thanks for your report. Could you make a pull request to ParallelTask to fix this? Please do some testing first, because I do not have an LSF system.

commented

It's still not working properly: the generated script nextDenovo.sh finishes normally, but the following pipeline doesn't start.

See here to continue running unfinished tasks.

commented

It reported other errors, and I fixed the code; it looks OK now. If there are any other errors, or if it completes normally, I will leave a message.

commented

The "02.cns_align.sh.work" step doesn't run, and restarting NextDenovo also doesn't work.

.work/cns_align001/cns_align00393/nextDenovo.sh] in the lsf_cycle.
[256886 INFO] 2021-12-22 10:19:38 Submitted jobID:[46956292] jobCmd:[~/001_rundir/02.cns_align/02.cns_align.sh.work/cns_align001/cns_align00394/nextDenovo.sh] in the lsf_cycle.
[256886 INFO] 2021-12-22 10:19:38 Submitted jobID:[46956293] jobCmd:[~/001_rundir/02.cns_align/02.cns_align.sh.work/cns_align001/cns_align00395/nextDenovo.sh] in the lsf_cycle.
[256886 INFO] 2021-12-22 10:19:38 Submitted jobID:[46956294] jobCmd:[~/001_rundir/02.cns_align/02.cns_align.sh.work/cns_align001/cns_align00396/nextDenovo.sh] in the lsf_cycle.
[256886 INFO] 2021-12-22 10:19:38 Submitted jobID:[46956295] jobCmd:[~/001_rundir/02.cns_align/02.cns_align.sh.work/cns_align001/cns_align00397/nextDenovo.sh] in the lsf_cycle.
[256886 INFO] 2021-12-22 10:19:38 Submitted jobID:[46956296] jobCmd:[~/001_rundir/02.cns_align/02.cns_align.sh.work/cns_align001/cns_align00398/nextDenovo.sh] in the lsf_cycle.
[256886 INFO] 2021-12-22 10:19:38 Submitted jobID:[46956297] jobCmd:[~/001_rundir/02.cns_align/02.cns_align.sh.work/cns_align001/cns_align00399/nextDenovo.sh] in the lsf_cycle.
[256886 INFO] 2021-12-22 10:19:38 Submitted jobID:[46956298] jobCmd:[~/001_rundir/02.cns_align/02.cns_align.sh.work/cns_align001/cns_align00400/nextDenovo.sh] in the lsf_cycle.

These errors should not be related to this issue; you need to open another issue.

commented

It occurred after I fixed the code, so I think it belongs under this issue.

OK, that may be caused by RAM settings.

(quoting the code-change fix from the earlier comment)

Hi,
I've met the same problem as you. Could you tell me which file's code you modified? Thank you.

Best regards

Try pip install paralleltask -U to update ParallelTask and fix this bug.


Thank you very much! It no longer reports this bug.