Error for doublet detection
lh12565 opened this issue · comments
Hi,
When I execute this command:
doublets = clf.fit(raw_counts).predict(p_thresh=1e-16, voter_thresh=0.5)
I get the following error:
0%| | 0/50 [00:00<?, ?it/s]/usr/local/python37/lib/python3.7/site-packages/numba/compiler.py:602: NumbaPerformanceWarning:
The keyword argument 'parallel=True' was specified but no transformation for parallel execution was possible.
To find out why, try turning on parallel diagnostics, see http://numba.pydata.org/numba-doc/latest/user/parallel.html#diagnostics for help.
File "../../../../../../usr/local/python37/lib/python3.7/site-packages/umap/nndescent.py", line 47:
@numba.njit(parallel=True)
def nn_descent(
^
self.func_ir.loc))
0%| | 0/50 [00:06<?, ?it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/python37/lib/python3.7/site-packages/doubletdetection/doubletdetection.py", line 248, in fit
self.all_scores_[i], self.all_log_p_values_[i] = self._one_fit()
File "/usr/local/python37/lib/python3.7/site-packages/doubletdetection/doubletdetection.py", line 358, in _one_fit
aug_counts, random_state=self.random_state, resolution=4, directed=False
File "/usr/local/python37/lib/python3.7/site-packages/scanpy/tools/_louvain.py", line 114, in louvain
g = utils.get_igraph_from_adjacency(adjacency, directed=directed)
File "/usr/local/python37/lib/python3.7/site-packages/scanpy/utils.py", line 381, in get_igraph_from_adjacency
import igraph as ig
File "/usr/local/python37/lib/python3.7/site-packages/igraph/__init__.py", line 8, in <module>
raise DeprecationWarning("To avoid name collision with the igraph project, "
DeprecationWarning: To avoid name collision with the igraph project, this visualization library has been renamed to 'jgraph'. Please upgrade when convenient.
>>> import multiprocessing
>>> doublets = clf.fit(raw_counts).predict(p_thresh=1e-16, voter_thresh=0.5)
(same NumbaPerformanceWarning and igraph/jgraph traceback as above)
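Note that the traceback ends with the jgraph shim *raising* DeprecationWarning as an exception rather than emitting it through the warnings module, which is why `import igraph` aborts outright instead of merely printing a warning. A minimal reproduction of that failure mode (the function name here is illustrative, not jgraph's actual code):

```python
# jgraph installs a placeholder "igraph" module whose __init__ raises
# DeprecationWarning as an exception, so "import igraph" fails hard
# instead of just warning. fake_igraph_init is a hypothetical stand-in.
def fake_igraph_init():
    raise DeprecationWarning(
        "To avoid name collision with the igraph project, this visualization "
        "library has been renamed to 'jgraph'. Please upgrade when convenient."
    )

try:
    fake_igraph_init()
except DeprecationWarning as exc:
    # The exception propagates out of the import, killing the whole fit.
    print("import aborted with:", type(exc).__name__)
```

One plausible workaround (an assumption, not confirmed in this thread) is to uninstall the shadowing package and install the real python-igraph that scanpy needs, e.g. `pip uninstall igraph jgraph` followed by `pip install python-igraph`.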
These warnings come from umap. We have a line of code that uses scanpy, which in turn uses umap to compute an approximate nearest-neighbor graph. That graph is used for clustering when use_phenograph==False. However, we have a small bug that runs this line even when use_phenograph==True. We will fix this soon.

By the way, are you using use_phenograph==True?
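The bug described above amounts to a missing guard: the scanpy/umap clustering path (the source of the NumbaPerformanceWarning) runs even when phenograph was requested. A hedged sketch of the intended control flow, with purely hypothetical names:

```python
# Illustrative sketch only -- not doubletdetection's actual code.
# The scanpy/louvain path (whose umap dependency triggers the
# NumbaPerformanceWarning) should run only when use_phenograph is False.
def cluster_counts(aug_counts, use_phenograph=True):
    if use_phenograph:
        return "phenograph"       # intended path when use_phenograph=True
    # scanpy louvain (via umap's nn_descent) only when explicitly requested
    return "scanpy-louvain"

print(cluster_counts(None, use_phenograph=True))
print(cluster_counts(None, use_phenograph=False))
```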
I use use_phenograph==False. The warning itself I could ignore, but I never get any results.
I see... @lh12565 this might actually be related to your scanpy version. I see that we don't list scanpy in the setup.py requirements; I will go ahead and fix that soon!
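For context on the parameters in the failing call: predict(p_thresh=1e-16, voter_thresh=0.5) aggregates the 50 fit iterations (the 0/50 progress bar) by voting, calling a cell a doublet when the fraction of iterations in which its p-value falls below p_thresh reaches voter_thresh. A hedged NumPy sketch of that voting logic, assuming this behavior (illustrative, not the library's implementation):

```python
import numpy as np

# Sketch of the voting behind predict(p_thresh, voter_thresh): across
# n_iters fits, a cell is called a doublet when the fraction of iterations
# where its log p-value is below log(p_thresh) reaches voter_thresh.
def call_doublets(log_p_values, p_thresh=1e-16, voter_thresh=0.5):
    # log_p_values: (n_iters, n_cells) array of log p-values, one row per fit
    votes = log_p_values < np.log(p_thresh)      # per-iteration doublet votes
    vote_fraction = votes.mean(axis=0)           # fraction of iterations voting doublet
    return (vote_fraction >= voter_thresh).astype(int)

# Toy example: 4 iterations, 3 cells
log_p = np.log(np.array([
    [1e-20, 1e-2, 1e-20],
    [1e-18, 1e-1, 1e-20],
    [1e-17, 1e-3, 1e-2 ],
    [1e-19, 1e-2, 1e-20],
]))
print(call_doublets(log_p))  # cell 0 votes 4/4, cell 1 votes 0/4, cell 2 votes 3/4
```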