CMake can't build the project.
LiaoAnn opened this issue
I want to run this demo project in my WSL VM.
But after I ran "sudo yarn build", I got the error messages below.
@rapidsai/core: =====================================================
@rapidsai/core: PARALLEL_LEVEL=4
@rapidsai/core: =====================================================
@rapidsai/core: info TOOL Using g++ compiler, because preferGnu option is set, and g++ is available.
@rapidsai/core: info TOOL Using Unix Makefiles generator.
@rapidsai/core: info CMD CONFIGURE
@rapidsai/core: info RUN cmake "/home/ann/node/modules/core" --no-warn-unused-cli -G"Unix Makefiles" -DCMAKE_JS_VERSION="6.0.0" -DCMAKE_BUILD_TYPE="Release" -DCMAKE_LIBRARY_OUTPUT_DIRECTORY="/home/ann/node/modules/core/build/Release/Release" -DCMAKE_JS_INC="/home/ann/node/modules/core/.cmake-js/node-x64/v16.17.0/include/node;/home/ann/node/node_modules/nan" -DCMAKE_JS_SRC="" -DNODE_RUNTIME="node" -DNODE_RUNTIMEVERSION="16.17.0" -DNODE_ARCH="x64" -DCMAKE_LIBRARY_OUTPUT_DIRECTORY="/home/ann/node/modules/core/build/Release" -DCMAKE_CXX_COMPILER="g++" -DCMAKE_C_COMPILER="gcc"
@rapidsai/core: info CMD BUILD
@rapidsai/core: info RUN cmake --build "/home/ann/node/modules/core/build/Release" --config Release
@rapidsai/core: make[3]: warning: -j4 forced in submake: resetting jobserver mode.
@rapidsai/core: make[3]: *** No rule to make target 'rapidsai_core.node'. Stop.
@rapidsai/core: make[2]: *** [CMakeFiles/rapidsai_core_60.dir/build.make:71: CMakeFiles/rapidsai_core_60] Error 2
@rapidsai/core: make[1]: *** [CMakeFiles/Makefile2:162: CMakeFiles/rapidsai_core_60.dir/all] Error 2
@rapidsai/core: make: *** [Makefile:156: all] Error 2
@rapidsai/core: ERR! OMG Process terminated: 2
@rapidsai/core: [
@rapidsai/core: '/usr/bin/node',
@rapidsai/core: '/home/ann/node/modules/core/node_modules/.bin/cmake-js',
@rapidsai/core: 'build',
@rapidsai/core: '-g',
@rapidsai/core: '-O',
@rapidsai/core: 'build/Release',
@rapidsai/core: '--CDCMAKE_LIBRARY_OUTPUT_DIRECTORY=/home/ann/node/modules/core/build/Release'
@rapidsai/core: ]
@rapidsai/core: Not searching for unused variables given on the command line.
@rapidsai/core: -- The C compiler identification is GNU 9.4.0
@rapidsai/core: -- The CXX compiler identification is GNU 9.4.0
@rapidsai/core: -- Detecting C compiler ABI info
@rapidsai/core: -- Detecting C compiler ABI info - done
@rapidsai/core: -- Check for working C compiler: /usr/bin/gcc - skipped
@rapidsai/core: -- Detecting C compile features
@rapidsai/core: -- Detecting C compile features - done
@rapidsai/core: -- Detecting CXX compiler ABI info
@rapidsai/core: -- Detecting CXX compiler ABI info - done
@rapidsai/core: -- Check for working CXX compiler: /usr/bin/g++ - skipped
@rapidsai/core: -- Detecting CXX compile features
@rapidsai/core: -- Detecting CXX compile features - done
@rapidsai/core: -- RAPIDS core include: /home/ann/node/modules/core/include
@rapidsai/core: -- Enabling the GLIBCXX11 ABI
@rapidsai/core: -- Using GPU architectures from CUDAARCHS env var: 60-real;70-real;75-real;80-real;86
@rapidsai/core: -- Found CUDAToolkit: /usr/local/cuda/include (found version "11.7.99")
@rapidsai/core: -- Looking for pthread.h
@rapidsai/core: -- Looking for pthread.h - found
@rapidsai/core: -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
@rapidsai/core: -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
@rapidsai/core: -- Looking for pthread_create in pthreads
@rapidsai/core: -- Looking for pthread_create in pthreads - not found
@rapidsai/core: -- Looking for pthread_create in pthread
@rapidsai/core: -- Looking for pthread_create in pthread - found
@rapidsai/core: -- Found Threads: TRUE
@rapidsai/core: -- BUILD_FOR_DETECTED_ARCHS: FALSE
@rapidsai/core: -- BUILD_FOR_ALL_CUDA_ARCHS: FALSE
@rapidsai/core: -- CMAKE_CUDA_ARCHITECTURES: 60-real;70-real;75-real;80-real;86
@rapidsai/core: -- The CUDA compiler identification is NVIDIA 11.7.99
@rapidsai/core: -- Detecting CUDA compiler ABI info
@rapidsai/core: -- Detecting CUDA compiler ABI info - done
@rapidsai/core: -- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
@rapidsai/core: -- Detecting CUDA compile features
@rapidsai/core: -- Detecting CUDA compile features - done
@rapidsai/core: -- CMAKE_JS_INC: /home/ann/node/modules/core/.cmake-js/node-x64/v16.17.0/include/node;/home/ann/node/node_modules/nan
@rapidsai/core: -- NAPI_INCLUDE_DIR: /home/ann/node/modules/core/node_modules/node-addon-api
@rapidsai/core: -- NAPI_INCLUDE_DIRS: /home/ann/node/modules/core/.cmake-js/node-x64/v16.17.0/include/node;/home/ann/node/node_modules/nan;/home/ann/node/modules/core/node_modules/node-addon-api
@rapidsai/core: -- NODE_RAPIDS_CMAKE_C_FLAGS: -D__linux__;-DNAPI_EXPERIMENTAL;-DNAPI_CPP_EXCEPTIONS;-DNODE_ADDON_API_DISABLE_DEPRECATED
@rapidsai/core: -- NODE_RAPIDS_CMAKE_CXX_FLAGS: -Wall;-Werror;-Wno-unknown-pragmas;-Wno-error=deprecated-declarations;-D__linux__;-DNAPI_EXPERIMENTAL;-DNAPI_CPP_EXCEPTIONS;-DNODE_ADDON_API_DISABLE_DEPRECATED
@rapidsai/core: -- NODE_RAPIDS_CMAKE_CUDA_FLAGS: -D__linux__;-Werror=cross-execution-space-call;--expt-extended-lambda;--expt-relaxed-constexpr;-Xcompiler=-Wall,-Werror,-Wno-error=deprecated-declarations;-DNAPI_EXPERIMENTAL;-DNAPI_CPP_EXCEPTIONS;-DNODE_ADDON_API_DISABLE_DEPRECATED
@rapidsai/core: -- get_cpm: Using CPM source cache: /home/ann/node/.cache/source
@rapidsai/core: -- get_cpm: Using CPM BINARY cache: /home/ann/node/.cache/binary/Release
@rapidsai/core: -- get_cpm: Using CMake FetchContent base dir: /home/ann/node/.cache/binary/Release
@rapidsai/core: -- Configuring done
@rapidsai/core: -- Generating done
@rapidsai/core: -- Build files have been written to: /home/ann/node/modules/core/build/Release
@rapidsai/core: [ 50%] Building CXX object CMakeFiles/rapidsai_core.dir/src/addon.cpp.o
@rapidsai/core: [100%] Linking CXX shared library rapidsai_core.node
@rapidsai/core: [100%] Built target rapidsai_core
@rapidsai/core: real 0m5.932s
@rapidsai/core: user 0m4.595s
@rapidsai/core: sys 0m0.561s
lerna ERR! yarn run build exited 1 in '@rapidsai/core'
lerna WARN complete Waiting for 1 child process to exit. CTRL-C to exit immediately.
Can someone help me solve this problem?
Ubuntu 20.04 (WSL 2)
Node v16.17.0
npm v8.15.0
yarn v1.22.19
cmake v3.21.7
Hey @LiaoAnn, if building the modules is not a priority, I'd recommend trying out installing via npm. @trxcllnt recently published npm packages for all the node-rapids modules.
You can find them here: https://www.npmjs.com/search?q=%40rapidsai
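For example, pulling in just the cuDF bindings from that list might look like this (a minimal sketch; pick whichever @rapidsai packages your demo actually needs and any compatible version):
npm install @rapidsai/cudf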
So if I install those packages in the root directory and run "sudo yarn demo", will it run successfully?
@LiaoAnn It looks like you're using make, which I honestly haven't tested. We use ninja in our images, so it's possible I'm generating a rule that make runs in a different order than ninja. I'd try installing ninja-build and seeing if that works.
You shouldn't need to use sudo for any of the commands in the repo, and that could actually be messing things up. The root user doesn't always have the same things on the path. And lastly, you can try yarn rebuild instead of yarn build, as the former will clean the build dir before configuring CMake.
But generally yes, as @AjayThorve mentioned, unless you're doing development (in which case I recommend using our docker images and not building on bare metal) installing the packages from npm will likely be faster in most cases.
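Putting that together, something like this is what I'd try first (a sketch assuming Ubuntu 20.04 under WSL2 with apt; whether the build actually switches to the Ninja generator depends on your cmake-js configuration):
# install the Ninja build tool mentioned above
sudo apt-get update && sudo apt-get install -y ninja-build
# from the repo root, without sudo: clean the build dir, reconfigure, and rebuild
yarn rebuild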
Actually, I have downloaded and run your docker image, but I don't know how to use that image to run the demos.
If you just want to run the demos, you can use the runtime images (full details in USAGE.md):
docker run --rm \
--runtime=nvidia \
-e "DISPLAY=$DISPLAY" \
-v "/etc/fonts:/etc/fonts:ro" \
-v "/tmp/.X11-unix:/tmp/.X11-unix:rw" \
-v "/usr/share/fonts:/usr/share/fonts:ro" \
-v "/usr/share/icons:/usr/share/icons:ro" \
ghcr.io/rapidsai/node:22.8.0-runtime-node16.15.1-cuda11-ubuntu20.04-demo \
npx @rapidsai/demo-graph
But running this, it looks like we currently have a broken image! I'll make a PR and rebuild these here shortly.
Wow thanks!
But if I want to develop with rapidsai/node, what should I do?
Do you mean use the libraries in your code, or develop on the rapidsai/node libraries themselves?
If the former, you can list the modules in your package.json "dependencies" list and run yarn:
{
"dependencies": {
"@rapidsai/cudf": "~22.6.2"
}
}
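Then, as a quick sanity check that the package resolves (adapted from the cuDF snippet used elsewhere in this thread), you could run something like:
node -p "const {Series, DataFrame} = require('@rapidsai/cudf'); new DataFrame({ a: Series.new([0, 1, 2]) }).toString()"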
If the latter, it should be enough to clone the repo and follow these instructions:
git clone https://github.com/rapidsai/node.git
# Make a local .env file of envvar overrides for the container
cp .env.sample .env
# Modify the .env file to match your system's parameters
nano .env
# build the development container locally
yarn docker:build:devel:main
# Start the main development container
yarn docker:run:devel
# Inside the development container, run these to build everything:
yarn
yarn rebuild
# or optionally, to build a project individually:
yarn workspace @rapidsai/core rebuild
# note for building projects individually -- the projects depend on each other,
# so you must build the projects in the correct order. The top-level `yarn rebuild` does
# this automatically, or you can get the dependency order from lerna via:
# yarn lerna exec --scope "@rapidsai/*" 'echo $LERNA_PACKAGE_NAME' | grep -v demo
@LiaoAnn The runtime images are rebuilding now, which I've verified locally will fix the above docker run command. After this I'll publish new 22.8.0 packages to npm.
But after I ran
REPO=ghcr.io/rapidsai/node
VERSIONS="22.8.0-runtime-node16.15.1-cuda11-ubuntu20.04"
docker run --rm --gpus=0 $REPO:$VERSIONS-cudf \
-p "const {Series, DataFrame} = require('@rapidsai/cudf');\
new DataFrame({ a: Series.new([0, 1, 2]) }).toString()"
in my WSL VM, I got the error message below.
Error: Fatal CUDA error encountered at: /opt/rapids/node/.cache/source/cudf/2011600dae8dfdcf9bcef6866b9bb6f542e62759/cpp/src/bitmask/null_mask.cu:94: 801 cudaErrorNotSupported operation not supported
at Float64Series._castAsString (/home/node/node_modules/@rapidsai/cudf/build/js/series/float.js:27:52)
at CastVisitor.visitUtf8 (/home/node/node_modules/@rapidsai/cudf/build/js/series.js:50:44)
at CastVisitor.visit (/home/node/node_modules/apache-arrow/visitor.js:27:48)
at Float64Series.cast (/home/node/node_modules/@rapidsai/cudf/build/js/series.js:166:54)
at /home/node/node_modules/@rapidsai/cudf/build/js/data_frame.js:406:105
at Array.reduce (<anonymous>)
at DataFrame.castAll (/home/node/node_modules/@rapidsai/cudf/build/js/data_frame.js:406:41)
at DataFrameFormatter._preprocess (/home/node/node_modules/@rapidsai/cudf/build/js/dataframe/print.js:68:20)
at new DataFrameFormatter (/home/node/node_modules/@rapidsai/cudf/build/js/dataframe/print.js:40:24)
at DataFrame.toString (/home/node/node_modules/@rapidsai/cudf/build/js/data_frame.js:232:37)
Thrown at:
at _castAsString (/home/node/node_modules/@rapidsai/cudf/build/js/series/float.js:27:52)
at visitUtf8 (/home/node/node_modules/@rapidsai/cudf/build/js/series.js:50:44)
at visit (/home/node/node_modules/apache-arrow/visitor.js:27:48)
at cast (/home/node/node_modules/@rapidsai/cudf/build/js/series.js:166:54)
at /home/node/node_modules/@rapidsai/cudf/build/js/data_frame.js:406:105
at castAll (/home/node/node_modules/@rapidsai/cudf/build/js/data_frame.js:406:41)
at _preprocess (/home/node/node_modules/@rapidsai/cudf/build/js/dataframe/print.js:68:20)
at DataFrameFormatter (/home/node/node_modules/@rapidsai/cudf/build/js/dataframe/print.js:40:24)
at toString (/home/node/node_modules/@rapidsai/cudf/build/js/data_frame.js:232:37)
at [eval]:1:107
What should I do to solve this problem?
@LiaoAnn what version of the CUDA driver and toolkit are you using, and what's your GPU?
It looks like that line is a simple cudaMemsetAsync() call, which sounds like a driver or docker/WSL2 problem?
Can you run a simpler smoke test, like this?
docker run --rm --gpus=0 \
ghcr.io/rapidsai/node:22.8.0-runtime-node16.15.1-cuda11-ubuntu20.04-cudf \
-p '(new (require("@rapidsai/cuda").Uint8Buffer)(8)).fill(1).toArray()'
# Should print:
# Uint8Array(8) [
# 1, 1, 1, 1,
# 1, 1, 1, 1
# ]
GPU: GTX 1060
I ran /usr/local/cuda/bin/nvcc --version and got the result below.
Cuda compilation tools, release 11.7, V11.7.99
Build cuda_11.7.r11.7/compiler.31442593_0
I ran that test successfully; the result is the same as yours.
I see in https://docs.nvidia.com/cuda/wsl-user-guide/index.html#wsl2-system-requirements it says this:
With the NVIDIA Container Toolkit for Docker 19.03, only --gpus all is supported.
On multi-GPU systems it is not possible to filter for specific GPU devices by using specific index numbers to enumerate GPUs.
Does it make a difference if you pass --gpus all instead of --gpus 0?
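That is, the same command as before with only the GPU flag changed (illustrative, reusing your REPO and VERSIONS variables from above):
docker run --rm --gpus all $REPO:$VERSIONS-cudf \
  -p "const {Series, DataFrame} = require('@rapidsai/cudf');\
  new DataFrame({ a: Series.new([0, 1, 2]) }).toString()"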
The result is almost the same.
Ah, actually I found these comments from @taureandyernv:
rapidsai/cudf#9427 (comment)
rapidsai/cudf#11382 (comment)
It sounds like Pascal generation GPUs in WSL2 are not supported and we don't have a fix yet 😞.
Okay. I'm looking forward to you guys fixing it.😀