NVIDIA / apex

A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch

Build error (error: expected primary-expression before 'some' token)

kkjh0723 opened this issue

I'm trying to update to the latest apex on a system with CUDA 9.1, PyTorch 1.1.0, and Ubuntu 16.04, and I get the error attached at the end.

Some background: my model's performance degraded significantly after I updated my Docker image from cuda9.1-pytorch1.1.0-old_apex to cuda9.2-pytorch1.4.0-latest_apex, and I want to check whether the regression came from updating PyTorch or from updating apex.

So I'm hoping there is a way to install the latest apex on the current system (CUDA 9.1, PyTorch 1.1.0, Ubuntu 16.04) without changing the CUDA or PyTorch versions.
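
If it helps narrow things down, below is a minimal sketch (my own reduction, not code copied from apex) of the pattern the compiler seems to be rejecting. As far as I can tell, in the PyTorch 1.1.0 C++ API `Tensor::data_ptr` is a plain member returning `void*` and only `Tensor::data<T>()` is templated, so the `data_ptr<scalar_t>()` calls in csrc/mlp.cpp cannot parse and gcc reports "expected primary-expression before '>' token". The function name below is hypothetical:

    // Sketch of the failing pattern against PyTorch 1.1.0 headers (function name is mine).
    #include <torch/extension.h>

    float* get_float_ptr(at::Tensor t) {
      // The next line does not compile with PyTorch 1.1.0: data_ptr is not a template
      // there, so "<float>" is parsed as comparison operators.
      // return t.data_ptr<float>();
      return t.data<float>();  // templated accessor that the 1.1.0 API does provide
    }

If that is the cause, the latest apex source presumably requires a newer PyTorch for the mlp_cuda extension, and what I really need is a pointer to an apex commit that still builds against PyTorch 1.1.0.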

/usr/local/lib/python3.5/dist-packages/pip/_internal/commands/install.py:244: UserWarning: Disabling all use of wheels due to the use of --build-options / --global-options / --install-options.
  cmdoptions.check_install_build_global(options)
Created temporary directory: /tmp/pip-ephem-wheel-cache-czknub85
Created temporary directory: /tmp/pip-req-tracker-4qfgsf2_
Created requirements tracker '/tmp/pip-req-tracker-4qfgsf2_'
Created temporary directory: /tmp/pip-install-n7l9uz2n
Processing /tmp/apex
  Created temporary directory: /tmp/pip-req-build-u4sa6hp4
  Added file:///tmp/apex to build tracker '/tmp/pip-req-tracker-4qfgsf2_'
    Running setup.py (path:/tmp/pip-req-build-u4sa6hp4/setup.py) egg_info for package from file:///tmp/apex
    Running command python setup.py egg_info
    torch.__version__  =  1.1.0
    running egg_info
    creating pip-egg-info/apex.egg-info
    writing pip-egg-info/apex.egg-info/PKG-INFO
    writing dependency_links to pip-egg-info/apex.egg-info/dependency_links.txt
    writing top-level names to pip-egg-info/apex.egg-info/top_level.txt
    writing manifest file 'pip-egg-info/apex.egg-info/SOURCES.txt'
    /tmp/pip-req-build-u4sa6hp4/setup.py:46: UserWarning: Option --pyprof not specified. Not installing PyProf dependencies!
      warnings.warn("Option --pyprof not specified. Not installing PyProf dependencies!")
    warning: manifest_maker: standard file '-c' not found

    reading manifest file 'pip-egg-info/apex.egg-info/SOURCES.txt'
    writing manifest file 'pip-egg-info/apex.egg-info/SOURCES.txt'
  Source in /tmp/pip-req-build-u4sa6hp4 has version 0.1, which satisfies requirement apex==0.1 from file:///tmp/apex
  Removed apex==0.1 from file:///tmp/apex from build tracker '/tmp/pip-req-tracker-4qfgsf2_'
Skipping bdist_wheel for apex, due to binaries being disabled for it.
Installing collected packages: apex
  Found existing installation: apex 0.1
    Uninstalling apex-0.1:
      Created temporary directory: /tmp/pip-uninstall-mp7b2fsu
      Removing file or directory /usr/local/lib/python3.5/dist-packages/amp_C.cpython-35m-x86_64-linux-gnu.so
      Created temporary directory: /usr/local/lib/python3.5/dist-packages/~pex-0.1.egg-info
      Removing file or directory /usr/local/lib/python3.5/dist-packages/apex-0.1.egg-info
      Created temporary directory: /usr/local/lib/python3.5/dist-packages/~pex
      Removing file or directory /usr/local/lib/python3.5/dist-packages/apex/
      Removing file or directory /usr/local/lib/python3.5/dist-packages/apex_C.cpython-35m-x86_64-linux-gnu.so
      Removing file or directory /usr/local/lib/python3.5/dist-packages/fused_adam_cuda.cpython-35m-x86_64-linux-gnu.so
      Removing file or directory /usr/local/lib/python3.5/dist-packages/fused_layer_norm_cuda.cpython-35m-x86_64-linux-gnu.so
      Removing file or directory /usr/local/lib/python3.5/dist-packages/syncbn.cpython-35m-x86_64-linux-gnu.so
      Successfully uninstalled apex-0.1
  Created temporary directory: /tmp/pip-record-s2somaw8
    Running command /usr/bin/python -u -c 'import setuptools, tokenize;__file__='"'"'/tmp/pip-req-build-u4sa6hp4/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' --cpp_ext --cuda_ext install --record /tmp/pip-record-s2somaw8/install-record.txt --single-version-externally-managed --compile
    torch.__version__  =  1.1.0
    /tmp/pip-req-build-u4sa6hp4/setup.py:46: UserWarning: Option --pyprof not specified. Not installing PyProf dependencies!
      warnings.warn("Option --pyprof not specified. Not installing PyProf dependencies!")

    Compiling cuda extensions with
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2017 NVIDIA Corporation
    Built on Fri_Nov__3_21:07:56_CDT_2017
    Cuda compilation tools, release 9.1, V9.1.85
    from /usr/local/cuda/bin

    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.5
    creating build/lib.linux-x86_64-3.5/apex
    copying apex/__init__.py -> build/lib.linux-x86_64-3.5/apex
    creating build/lib.linux-x86_64-3.5/apex/RNN
    copying apex/RNN/RNNBackend.py -> build/lib.linux-x86_64-3.5/apex/RNN
    copying apex/RNN/__init__.py -> build/lib.linux-x86_64-3.5/apex/RNN
    copying apex/RNN/cells.py -> build/lib.linux-x86_64-3.5/apex/RNN
    copying apex/RNN/models.py -> build/lib.linux-x86_64-3.5/apex/RNN
    creating build/lib.linux-x86_64-3.5/apex/amp
    copying apex/amp/__init__.py -> build/lib.linux-x86_64-3.5/apex/amp
    copying apex/amp/__version__.py -> build/lib.linux-x86_64-3.5/apex/amp
    copying apex/amp/_amp_state.py -> build/lib.linux-x86_64-3.5/apex/amp
    copying apex/amp/_initialize.py -> build/lib.linux-x86_64-3.5/apex/amp
    copying apex/amp/_process_optimizer.py -> build/lib.linux-x86_64-3.5/apex/amp
    copying apex/amp/amp.py -> build/lib.linux-x86_64-3.5/apex/amp
    copying apex/amp/compat.py -> build/lib.linux-x86_64-3.5/apex/amp
    copying apex/amp/frontend.py -> build/lib.linux-x86_64-3.5/apex/amp
    copying apex/amp/handle.py -> build/lib.linux-x86_64-3.5/apex/amp
    copying apex/amp/opt.py -> build/lib.linux-x86_64-3.5/apex/amp
    copying apex/amp/rnn_compat.py -> build/lib.linux-x86_64-3.5/apex/amp
    copying apex/amp/scaler.py -> build/lib.linux-x86_64-3.5/apex/amp
    copying apex/amp/utils.py -> build/lib.linux-x86_64-3.5/apex/amp
    copying apex/amp/wrap.py -> build/lib.linux-x86_64-3.5/apex/amp
    creating build/lib.linux-x86_64-3.5/apex/fp16_utils
    copying apex/fp16_utils/__init__.py -> build/lib.linux-x86_64-3.5/apex/fp16_utils
    copying apex/fp16_utils/fp16_optimizer.py -> build/lib.linux-x86_64-3.5/apex/fp16_utils
    copying apex/fp16_utils/fp16util.py -> build/lib.linux-x86_64-3.5/apex/fp16_utils
    copying apex/fp16_utils/loss_scaler.py -> build/lib.linux-x86_64-3.5/apex/fp16_utils
    creating build/lib.linux-x86_64-3.5/apex/multi_tensor_apply
    copying apex/multi_tensor_apply/__init__.py -> build/lib.linux-x86_64-3.5/apex/multi_tensor_apply
    copying apex/multi_tensor_apply/multi_tensor_apply.py -> build/lib.linux-x86_64-3.5/apex/multi_tensor_apply
    creating build/lib.linux-x86_64-3.5/apex/normalization
    copying apex/normalization/__init__.py -> build/lib.linux-x86_64-3.5/apex/normalization
    copying apex/normalization/fused_layer_norm.py -> build/lib.linux-x86_64-3.5/apex/normalization
    creating build/lib.linux-x86_64-3.5/apex/optimizers
    copying apex/optimizers/__init__.py -> build/lib.linux-x86_64-3.5/apex/optimizers
    copying apex/optimizers/fused_adam.py -> build/lib.linux-x86_64-3.5/apex/optimizers
    copying apex/optimizers/fused_lamb.py -> build/lib.linux-x86_64-3.5/apex/optimizers
    copying apex/optimizers/fused_novograd.py -> build/lib.linux-x86_64-3.5/apex/optimizers
    copying apex/optimizers/fused_sgd.py -> build/lib.linux-x86_64-3.5/apex/optimizers
    creating build/lib.linux-x86_64-3.5/apex/parallel
    copying apex/parallel/LARC.py -> build/lib.linux-x86_64-3.5/apex/parallel
    copying apex/parallel/__init__.py -> build/lib.linux-x86_64-3.5/apex/parallel
    copying apex/parallel/distributed.py -> build/lib.linux-x86_64-3.5/apex/parallel
    copying apex/parallel/multiproc.py -> build/lib.linux-x86_64-3.5/apex/parallel
    copying apex/parallel/optimized_sync_batchnorm.py -> build/lib.linux-x86_64-3.5/apex/parallel
    copying apex/parallel/optimized_sync_batchnorm_kernel.py -> build/lib.linux-x86_64-3.5/apex/parallel
    copying apex/parallel/sync_batchnorm.py -> build/lib.linux-x86_64-3.5/apex/parallel
    copying apex/parallel/sync_batchnorm_kernel.py -> build/lib.linux-x86_64-3.5/apex/parallel
    creating build/lib.linux-x86_64-3.5/apex/reparameterization
    copying apex/reparameterization/__init__.py -> build/lib.linux-x86_64-3.5/apex/reparameterization
    copying apex/reparameterization/reparameterization.py -> build/lib.linux-x86_64-3.5/apex/reparameterization
    copying apex/reparameterization/weight_norm.py -> build/lib.linux-x86_64-3.5/apex/reparameterization
    creating build/lib.linux-x86_64-3.5/apex/contrib
    copying apex/contrib/__init__.py -> build/lib.linux-x86_64-3.5/apex/contrib
    creating build/lib.linux-x86_64-3.5/apex/mlp
    copying apex/mlp/__init__.py -> build/lib.linux-x86_64-3.5/apex/mlp
    copying apex/mlp/mlp.py -> build/lib.linux-x86_64-3.5/apex/mlp
    creating build/lib.linux-x86_64-3.5/apex/pyprof
    copying apex/pyprof/__init__.py -> build/lib.linux-x86_64-3.5/apex/pyprof
    creating build/lib.linux-x86_64-3.5/apex/amp/lists
    copying apex/amp/lists/__init__.py -> build/lib.linux-x86_64-3.5/apex/amp/lists
    copying apex/amp/lists/functional_overrides.py -> build/lib.linux-x86_64-3.5/apex/amp/lists
    copying apex/amp/lists/tensor_overrides.py -> build/lib.linux-x86_64-3.5/apex/amp/lists
    copying apex/amp/lists/torch_overrides.py -> build/lib.linux-x86_64-3.5/apex/amp/lists
    creating build/lib.linux-x86_64-3.5/apex/contrib/groupbn
    copying apex/contrib/groupbn/__init__.py -> build/lib.linux-x86_64-3.5/apex/contrib/groupbn
    copying apex/contrib/groupbn/batch_norm.py -> build/lib.linux-x86_64-3.5/apex/contrib/groupbn
    creating build/lib.linux-x86_64-3.5/apex/contrib/multihead_attn
    copying apex/contrib/multihead_attn/__init__.py -> build/lib.linux-x86_64-3.5/apex/contrib/multihead_attn
    copying apex/contrib/multihead_attn/encdec_multihead_attn.py -> build/lib.linux-x86_64-3.5/apex/contrib/multihead_attn
    copying apex/contrib/multihead_attn/encdec_multihead_attn_func.py -> build/lib.linux-x86_64-3.5/apex/contrib/multihead_attn
    copying apex/contrib/multihead_attn/fast_encdec_multihead_attn_func.py -> build/lib.linux-x86_64-3.5/apex/contrib/multihead_attn
    copying apex/contrib/multihead_attn/fast_encdec_multihead_attn_norm_add_func.py -> build/lib.linux-x86_64-3.5/apex/contrib/multihead_attn
    copying apex/contrib/multihead_attn/fast_self_multihead_attn_func.py -> build/lib.linux-x86_64-3.5/apex/contrib/multihead_attn
    copying apex/contrib/multihead_attn/fast_self_multihead_attn_norm_add_func.py -> build/lib.linux-x86_64-3.5/apex/contrib/multihead_attn
    copying apex/contrib/multihead_attn/self_multihead_attn.py -> build/lib.linux-x86_64-3.5/apex/contrib/multihead_attn
    copying apex/contrib/multihead_attn/self_multihead_attn_func.py -> build/lib.linux-x86_64-3.5/apex/contrib/multihead_attn
    creating build/lib.linux-x86_64-3.5/apex/contrib/optimizers
    copying apex/contrib/optimizers/__init__.py -> build/lib.linux-x86_64-3.5/apex/contrib/optimizers
    copying apex/contrib/optimizers/fp16_optimizer.py -> build/lib.linux-x86_64-3.5/apex/contrib/optimizers
    copying apex/contrib/optimizers/fused_adam.py -> build/lib.linux-x86_64-3.5/apex/contrib/optimizers
    copying apex/contrib/optimizers/fused_lamb.py -> build/lib.linux-x86_64-3.5/apex/contrib/optimizers
    copying apex/contrib/optimizers/fused_sgd.py -> build/lib.linux-x86_64-3.5/apex/contrib/optimizers
    creating build/lib.linux-x86_64-3.5/apex/contrib/xentropy
    copying apex/contrib/xentropy/__init__.py -> build/lib.linux-x86_64-3.5/apex/contrib/xentropy
    copying apex/contrib/xentropy/softmax_xentropy.py -> build/lib.linux-x86_64-3.5/apex/contrib/xentropy
    creating build/lib.linux-x86_64-3.5/apex/pyprof/nvtx
    copying apex/pyprof/nvtx/__init__.py -> build/lib.linux-x86_64-3.5/apex/pyprof/nvtx
    copying apex/pyprof/nvtx/nvmarker.py -> build/lib.linux-x86_64-3.5/apex/pyprof/nvtx
    creating build/lib.linux-x86_64-3.5/apex/pyprof/parse
    copying apex/pyprof/parse/__init__.py -> build/lib.linux-x86_64-3.5/apex/pyprof/parse
    copying apex/pyprof/parse/__main__.py -> build/lib.linux-x86_64-3.5/apex/pyprof/parse
    copying apex/pyprof/parse/db.py -> build/lib.linux-x86_64-3.5/apex/pyprof/parse
    copying apex/pyprof/parse/kernel.py -> build/lib.linux-x86_64-3.5/apex/pyprof/parse
    copying apex/pyprof/parse/nvvp.py -> build/lib.linux-x86_64-3.5/apex/pyprof/parse
    copying apex/pyprof/parse/parse.py -> build/lib.linux-x86_64-3.5/apex/pyprof/parse
    creating build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/__init__.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/__main__.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/activation.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/base.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/blas.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/conv.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/convert.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/data.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/dropout.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/embedding.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/index_slice_join_mutate.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/linear.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/loss.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/misc.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/normalization.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/optim.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/output.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/pointwise.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/pooling.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/prof.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/randomSample.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/recurrentCell.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/reduction.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/softmax.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/usage.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    copying apex/pyprof/prof/utility.py -> build/lib.linux-x86_64-3.5/apex/pyprof/prof
    running build_ext
    building 'apex_C' extension
    creating build/temp.linux-x86_64-3.5
    creating build/temp.linux-x86_64-3.5/csrc
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/include/python3.5m -c csrc/flatten_unflatten.cpp -o build/temp.linux-x86_64-3.5/csrc/flatten_unflatten.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=apex_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
    x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.5/csrc/flatten_unflatten.o -o build/lib.linux-x86_64-3.5/apex_C.cpython-35m-x86_64-linux-gnu.so
    building 'amp_C' extension
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c csrc/amp_C_frontend.cpp -o build/temp.linux-x86_64-3.5/csrc/amp_C_frontend.o -O3 -DVERSION_GE_1_1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=amp_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
    /usr/local/cuda/bin/nvcc -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c csrc/multi_tensor_sgd_kernel.cu -o build/temp.linux-x86_64-3.5/csrc/multi_tensor_sgd_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -lineinfo -O3 --use_fast_math -DVERSION_GE_1_1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=amp_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    /usr/local/cuda/bin/nvcc -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c csrc/multi_tensor_scale_kernel.cu -o build/temp.linux-x86_64-3.5/csrc/multi_tensor_scale_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -lineinfo -O3 --use_fast_math -DVERSION_GE_1_1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=amp_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    /usr/local/cuda/bin/nvcc -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c csrc/multi_tensor_axpby_kernel.cu -o build/temp.linux-x86_64-3.5/csrc/multi_tensor_axpby_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -lineinfo -O3 --use_fast_math -DVERSION_GE_1_1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=amp_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    /usr/local/cuda/bin/nvcc -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c csrc/multi_tensor_l2norm_kernel.cu -o build/temp.linux-x86_64-3.5/csrc/multi_tensor_l2norm_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -lineinfo -O3 --use_fast_math -DVERSION_GE_1_1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=amp_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    /usr/local/cuda/bin/nvcc -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c csrc/multi_tensor_lamb_stage_1.cu -o build/temp.linux-x86_64-3.5/csrc/multi_tensor_lamb_stage_1.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -lineinfo -O3 --use_fast_math -DVERSION_GE_1_1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=amp_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    /usr/local/cuda/bin/nvcc -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c csrc/multi_tensor_lamb_stage_2.cu -o build/temp.linux-x86_64-3.5/csrc/multi_tensor_lamb_stage_2.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -lineinfo -O3 --use_fast_math -DVERSION_GE_1_1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=amp_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    /usr/local/cuda/bin/nvcc -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c csrc/multi_tensor_adam.cu -o build/temp.linux-x86_64-3.5/csrc/multi_tensor_adam.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -lineinfo -O3 --use_fast_math -DVERSION_GE_1_1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=amp_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    /usr/local/cuda/bin/nvcc -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c csrc/multi_tensor_novograd.cu -o build/temp.linux-x86_64-3.5/csrc/multi_tensor_novograd.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -lineinfo -O3 --use_fast_math -DVERSION_GE_1_1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=amp_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    /usr/local/cuda/bin/nvcc -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c csrc/multi_tensor_lamb.cu -o build/temp.linux-x86_64-3.5/csrc/multi_tensor_lamb.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -lineinfo -O3 --use_fast_math -DVERSION_GE_1_1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=amp_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.5/csrc/amp_C_frontend.o build/temp.linux-x86_64-3.5/csrc/multi_tensor_sgd_kernel.o build/temp.linux-x86_64-3.5/csrc/multi_tensor_scale_kernel.o build/temp.linux-x86_64-3.5/csrc/multi_tensor_axpby_kernel.o build/temp.linux-x86_64-3.5/csrc/multi_tensor_l2norm_kernel.o build/temp.linux-x86_64-3.5/csrc/multi_tensor_lamb_stage_1.o build/temp.linux-x86_64-3.5/csrc/multi_tensor_lamb_stage_2.o build/temp.linux-x86_64-3.5/csrc/multi_tensor_adam.o build/temp.linux-x86_64-3.5/csrc/multi_tensor_novograd.o build/temp.linux-x86_64-3.5/csrc/multi_tensor_lamb.o -L/usr/local/cuda/lib64 -lcudart -o build/lib.linux-x86_64-3.5/amp_C.cpython-35m-x86_64-linux-gnu.so
    building 'syncbn' extension
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c csrc/syncbn.cpp -o build/temp.linux-x86_64-3.5/csrc/syncbn.o -O3 -DVERSION_GE_1_1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=syncbn -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
    /usr/local/cuda/bin/nvcc -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c csrc/welford.cu -o build/temp.linux-x86_64-3.5/csrc/welford.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=syncbn -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.5/csrc/syncbn.o build/temp.linux-x86_64-3.5/csrc/welford.o -L/usr/local/cuda/lib64 -lcudart -o build/lib.linux-x86_64-3.5/syncbn.cpython-35m-x86_64-linux-gnu.so
    building 'fused_layer_norm_cuda' extension
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c csrc/layer_norm_cuda.cpp -o build/temp.linux-x86_64-3.5/csrc/layer_norm_cuda.o -O3 -DVERSION_GE_1_1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=fused_layer_norm_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
    /usr/local/cuda/bin/nvcc -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c csrc/layer_norm_cuda_kernel.cu -o build/temp.linux-x86_64-3.5/csrc/layer_norm_cuda_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -maxrregcount=50 -O3 --use_fast_math -DVERSION_GE_1_1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=fused_layer_norm_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    csrc/layer_norm_cuda_kernel.cu(271): warning: function "<unnamed>::SharedMemory<double>::getPointer" was declared but never referenced

    csrc/layer_norm_cuda_kernel.cu(261): warning: function "<unnamed>::SharedMemory<float>::getPointer" was declared but never referenced

    x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.5/csrc/layer_norm_cuda.o build/temp.linux-x86_64-3.5/csrc/layer_norm_cuda_kernel.o -L/usr/local/cuda/lib64 -lcudart -o build/lib.linux-x86_64-3.5/fused_layer_norm_cuda.cpython-35m-x86_64-linux-gnu.so
    building 'mlp_cuda' extension
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.5/dist-packages/torch/include -I/usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.5/dist-packages/torch/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c csrc/mlp.cpp -o build/temp.linux-x86_64-3.5/csrc/mlp.o -O3 -DVERSION_GE_1_1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=mlp_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
    cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
    In file included from csrc/mlp.cpp:2:0:
    /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/torch.h:7:2: warning: #warning "Including torch/torch.h for C++ extensions is deprecated. Please include torch/extension.h" [-Wcpp]
     #warning \
      ^
    csrc/mlp.cpp: In function 'std::vector<at::Tensor> mlp_forward(std::vector<at::Tensor>)':
    csrc/mlp.cpp:47:21: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
       for (int i = 0; i < num_layers; i++) {
                         ^
    csrc/mlp.cpp:56:68: warning: narrowing conversion of 'reserved_size' from 'long unsigned int' to 'long int' inside { } [-Wnarrowing]
       auto reserved_space = at::empty({reserved_size}, inputs[0].type());
                                                                        ^
    csrc/mlp.cpp:56:68: warning: narrowing conversion of 'reserved_size' from 'long unsigned int' to 'long int' inside { } [-Wnarrowing]
    In file included from /usr/local/lib/python3.5/dist-packages/torch/include/ATen/ATen.h:9:0,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/extension.h:4,
                     from csrc/mlp.cpp:1:
    csrc/mlp.cpp: In lambda function:
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:84:52: warning: 'c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)' is deprecated [-Wdeprecated-declarations]
         at::ScalarType _st = ::detail::scalar_type(TYPE);                        \
                                                        ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:47:23: note: declared here
     inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties &t) {
                           ^
    In file included from /usr/local/lib/python3.5/dist-packages/torch/include/ATen/ATen.h:9:0,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/extension.h:4,
                     from csrc/mlp.cpp:1:
    csrc/mlp.cpp: In lambda function:
    csrc/mlp.cpp:61:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
         for (int i = 0; i < num_layers; i++) {
                           ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:62:54: error: expected primary-expression before '>' token
           w_ptr.push_back(inputs[i + 1].data_ptr<scalar_t>());
                                                          ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:62:56: error: expected primary-expression before ')' token
           w_ptr.push_back(inputs[i + 1].data_ptr<scalar_t>());
                                                            ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:63:67: error: expected primary-expression before '>' token
           b_ptr.push_back(inputs[i + 1 + num_layers].data_ptr<scalar_t>());
                                                                       ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:63:69: error: expected primary-expression before ')' token
           b_ptr.push_back(inputs[i + 1 + num_layers].data_ptr<scalar_t>());
                                                                         ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:66:36: error: expected primary-expression before '>' token
             inputs[0].data_ptr<scalar_t>(),
                                        ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:66:38: error: expected primary-expression before ')' token
             inputs[0].data_ptr<scalar_t>(),
                                          ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:73:30: error: expected primary-expression before '>' token
             out.data_ptr<scalar_t>(),
                                  ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:73:32: error: expected primary-expression before ')' token
             out.data_ptr<scalar_t>(),
                                    ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:74:41: error: expected primary-expression before '>' token
             reserved_space.data_ptr<scalar_t>());
                                             ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:74:43: error: expected primary-expression before ')' token
             reserved_space.data_ptr<scalar_t>());
                                               ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp: In lambda function:
    csrc/mlp.cpp:61:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
         for (int i = 0; i < num_layers; i++) {
                           ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:62:54: error: expected primary-expression before '>' token
           w_ptr.push_back(inputs[i + 1].data_ptr<scalar_t>());
                                                          ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:62:56: error: expected primary-expression before ')' token
           w_ptr.push_back(inputs[i + 1].data_ptr<scalar_t>());
                                                            ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:63:67: error: expected primary-expression before '>' token
           b_ptr.push_back(inputs[i + 1 + num_layers].data_ptr<scalar_t>());
                                                                       ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:63:69: error: expected primary-expression before ')' token
           b_ptr.push_back(inputs[i + 1 + num_layers].data_ptr<scalar_t>());
                                                                         ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:66:36: error: expected primary-expression before '>' token
             inputs[0].data_ptr<scalar_t>(),
                                        ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:66:38: error: expected primary-expression before ')' token
             inputs[0].data_ptr<scalar_t>(),
                                          ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:73:30: error: expected primary-expression before '>' token
             out.data_ptr<scalar_t>(),
                                  ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:73:32: error: expected primary-expression before ')' token
             out.data_ptr<scalar_t>(),
                                    ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:74:41: error: expected primary-expression before '>' token
             reserved_space.data_ptr<scalar_t>());
                                             ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:74:43: error: expected primary-expression before ')' token
             reserved_space.data_ptr<scalar_t>());
                                               ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp: In lambda function:
    csrc/mlp.cpp:61:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
         for (int i = 0; i < num_layers; i++) {
                           ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:62:54: error: expected primary-expression before '>' token
           w_ptr.push_back(inputs[i + 1].data_ptr<scalar_t>());
                                                          ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:62:56: error: expected primary-expression before ')' token
           w_ptr.push_back(inputs[i + 1].data_ptr<scalar_t>());
                                                            ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:63:67: error: expected primary-expression before '>' token
           b_ptr.push_back(inputs[i + 1 + num_layers].data_ptr<scalar_t>());
                                                                       ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:63:69: error: expected primary-expression before ')' token
           b_ptr.push_back(inputs[i + 1 + num_layers].data_ptr<scalar_t>());
                                                                         ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:66:36: error: expected primary-expression before '>' token
             inputs[0].data_ptr<scalar_t>(),
                                        ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:66:38: error: expected primary-expression before ')' token
             inputs[0].data_ptr<scalar_t>(),
                                          ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:73:30: error: expected primary-expression before '>' token
             out.data_ptr<scalar_t>(),
                                  ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:73:32: error: expected primary-expression before ')' token
             out.data_ptr<scalar_t>(),
                                    ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:74:41: error: expected primary-expression before '>' token
             reserved_space.data_ptr<scalar_t>());
                                             ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:74:43: error: expected primary-expression before ')' token
             reserved_space.data_ptr<scalar_t>());
                                               ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:58:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp: In function 'std::vector<at::Tensor> mlp_backward(at::Tensor, std::vector<at::Tensor>, std::vector<at::Tensor>)':
    csrc/mlp.cpp:90:21: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
       for (int i = 0; i < num_layers; i++) {
                         ^
    csrc/mlp.cpp:95:21: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
       for (int i = 0; i < inputs.size(); i++) {
                         ^
    In file included from /usr/local/lib/python3.5/dist-packages/torch/include/ATen/ATen.h:9:0,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/extension.h:4,
                     from csrc/mlp.cpp:1:
    csrc/mlp.cpp: In lambda function:
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:84:52: warning: 'c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)' is deprecated [-Wdeprecated-declarations]
         at::ScalarType _st = ::detail::scalar_type(TYPE);                        \
                                                        ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:47:23: note: declared here
     inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties &t) {
                           ^
    In file included from /usr/local/lib/python3.5/dist-packages/torch/include/ATen/ATen.h:9:0,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                     from /usr/local/lib/python3.5/dist-packages/torch/include/torch/extension.h:4,
                     from csrc/mlp.cpp:1:
    csrc/mlp.cpp: In lambda function:
    csrc/mlp.cpp:102:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
         for (int i = 0; i < num_layers; i++) {
                           ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:103:54: error: expected primary-expression before '>' token
           w_ptr.push_back(inputs[i + 1].data_ptr<scalar_t>());
                                                          ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:103:56: error: expected primary-expression before ')' token
           w_ptr.push_back(inputs[i + 1].data_ptr<scalar_t>());
                                                            ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:104:67: error: expected primary-expression before '>' token
           b_ptr.push_back(inputs[i + 1 + num_layers].data_ptr<scalar_t>());
                                                                       ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:104:69: error: expected primary-expression before ')' token
           b_ptr.push_back(inputs[i + 1 + num_layers].data_ptr<scalar_t>());
                                                                         ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:107:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
         for (int i = 0; i < inputs.size(); i++) {
                           ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:108:57: error: expected primary-expression before '>' token
           outputs_ptr.push_back(outputs[i].data_ptr<scalar_t>());
                                                             ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:108:59: error: expected primary-expression before ')' token
           outputs_ptr.push_back(outputs[i].data_ptr<scalar_t>());
                                                               ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:115:44: warning: narrowing conversion of '(work_size / sizeof (scalar_t))' from 'long unsigned int' to 'long int' inside { } [-Wnarrowing]
         auto work_space = at::empty({work_size / sizeof(scalar_t)}, inputs[0].type());
                                                ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:115:44: warning: narrowing conversion of '(work_size / sizeof (scalar_t))' from 'long unsigned int' to 'long int' inside { } [-Wnarrowing]
         auto work_space = at::empty({work_size / sizeof(scalar_t)}, inputs[0].type());
                                                ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:118:36: error: expected primary-expression before '>' token
             inputs[0].data_ptr<scalar_t>(),
                                        ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:118:38: error: expected primary-expression before ')' token
             inputs[0].data_ptr<scalar_t>(),
                                          ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:119:43: error: expected primary-expression before '>' token
             fprop_outputs[0].data_ptr<scalar_t>(),
                                               ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:119:45: error: expected primary-expression before ')' token
             fprop_outputs[0].data_ptr<scalar_t>(),
                                                 ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:125:46: error: expected primary-expression before '>' token
             grad_o.contiguous().data_ptr<scalar_t>(),
                                                  ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:125:48: error: expected primary-expression before ')' token
             grad_o.contiguous().data_ptr<scalar_t>(),
                                                    ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:126:43: error: expected primary-expression before '>' token
             fprop_outputs[1].data_ptr<scalar_t>(),
                                               ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:126:45: error: expected primary-expression before ')' token
             fprop_outputs[1].data_ptr<scalar_t>(),
                                                 ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:127:37: error: expected primary-expression before '>' token
             work_space.data_ptr<scalar_t>(),
                                         ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:127:39: error: expected primary-expression before ')' token
             work_space.data_ptr<scalar_t>(),
                                           ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp: In lambda function:
    csrc/mlp.cpp:102:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
         for (int i = 0; i < num_layers; i++) {
                           ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:103:54: error: expected primary-expression before '>' token
           w_ptr.push_back(inputs[i + 1].data_ptr<scalar_t>());
                                                          ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:103:56: error: expected primary-expression before ')' token
           w_ptr.push_back(inputs[i + 1].data_ptr<scalar_t>());
                                                            ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:104:67: error: expected primary-expression before '>' token
           b_ptr.push_back(inputs[i + 1 + num_layers].data_ptr<scalar_t>());
                                                                       ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:104:69: error: expected primary-expression before ')' token
           b_ptr.push_back(inputs[i + 1 + num_layers].data_ptr<scalar_t>());
                                                                         ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:107:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
         for (int i = 0; i < inputs.size(); i++) {
                           ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:108:57: error: expected primary-expression before '>' token
           outputs_ptr.push_back(outputs[i].data_ptr<scalar_t>());
                                                             ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:108:59: error: expected primary-expression before ')' token
           outputs_ptr.push_back(outputs[i].data_ptr<scalar_t>());
                                                               ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:115:44: warning: narrowing conversion of '(work_size / sizeof (scalar_t))' from 'long unsigned int' to 'long int' inside { } [-Wnarrowing]
         auto work_space = at::empty({work_size / sizeof(scalar_t)}, inputs[0].type());
                                                ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:115:44: warning: narrowing conversion of '(work_size / sizeof (scalar_t))' from 'long unsigned int' to 'long int' inside { } [-Wnarrowing]
         auto work_space = at::empty({work_size / sizeof(scalar_t)}, inputs[0].type());
                                                ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:118:36: error: expected primary-expression before '>' token
             inputs[0].data_ptr<scalar_t>(),
                                        ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:118:38: error: expected primary-expression before ')' token
             inputs[0].data_ptr<scalar_t>(),
                                          ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:119:43: error: expected primary-expression before '>' token
             fprop_outputs[0].data_ptr<scalar_t>(),
                                               ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:119:45: error: expected primary-expression before ')' token
             fprop_outputs[0].data_ptr<scalar_t>(),
                                                 ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:125:46: error: expected primary-expression before '>' token
             grad_o.contiguous().data_ptr<scalar_t>(),
                                                  ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:125:48: error: expected primary-expression before ')' token
             grad_o.contiguous().data_ptr<scalar_t>(),
                                                    ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:126:43: error: expected primary-expression before '>' token
             fprop_outputs[1].data_ptr<scalar_t>(),
                                               ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:126:45: error: expected primary-expression before ')' token
             fprop_outputs[1].data_ptr<scalar_t>(),
                                                 ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:127:37: error: expected primary-expression before '>' token
             work_space.data_ptr<scalar_t>(),
                                         ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:127:39: error: expected primary-expression before ')' token
             work_space.data_ptr<scalar_t>(),
                                           ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp: In lambda function:
    csrc/mlp.cpp:102:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
         for (int i = 0; i < num_layers; i++) {
                           ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:103:54: error: expected primary-expression before '>' token
           w_ptr.push_back(inputs[i + 1].data_ptr<scalar_t>());
                                                          ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:103:56: error: expected primary-expression before ')' token
           w_ptr.push_back(inputs[i + 1].data_ptr<scalar_t>());
                                                            ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:104:67: error: expected primary-expression before '>' token
           b_ptr.push_back(inputs[i + 1 + num_layers].data_ptr<scalar_t>());
                                                                       ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:104:69: error: expected primary-expression before ')' token
           b_ptr.push_back(inputs[i + 1 + num_layers].data_ptr<scalar_t>());
                                                                         ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:107:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
         for (int i = 0; i < inputs.size(); i++) {
                           ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:108:57: error: expected primary-expression before '>' token
           outputs_ptr.push_back(outputs[i].data_ptr<scalar_t>());
                                                             ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:108:59: error: expected primary-expression before ')' token
           outputs_ptr.push_back(outputs[i].data_ptr<scalar_t>());
                                                               ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:115:44: warning: narrowing conversion of '(work_size / sizeof (scalar_t))' from 'long unsigned int' to 'long int' inside { } [-Wnarrowing]
         auto work_space = at::empty({work_size / sizeof(scalar_t)}, inputs[0].type());
                                                ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:115:44: warning: narrowing conversion of '(work_size / sizeof (scalar_t))' from 'long unsigned int' to 'long int' inside { } [-Wnarrowing]
         auto work_space = at::empty({work_size / sizeof(scalar_t)}, inputs[0].type());
                                                ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:118:36: error: expected primary-expression before '>' token
             inputs[0].data_ptr<scalar_t>(),
                                        ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:118:38: error: expected primary-expression before ')' token
             inputs[0].data_ptr<scalar_t>(),
                                          ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:119:43: error: expected primary-expression before '>' token
             fprop_outputs[0].data_ptr<scalar_t>(),
                                               ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:119:45: error: expected primary-expression before ')' token
             fprop_outputs[0].data_ptr<scalar_t>(),
                                                 ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:125:46: error: expected primary-expression before '>' token
             grad_o.contiguous().data_ptr<scalar_t>(),
                                                  ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:125:48: error: expected primary-expression before ')' token
             grad_o.contiguous().data_ptr<scalar_t>(),
                                                    ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:126:43: error: expected primary-expression before '>' token
             fprop_outputs[1].data_ptr<scalar_t>(),
                                               ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:126:45: error: expected primary-expression before ')' token
             fprop_outputs[1].data_ptr<scalar_t>(),
                                                 ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:127:37: error: expected primary-expression before '>' token
             work_space.data_ptr<scalar_t>(),
                                         ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    csrc/mlp.cpp:127:39: error: expected primary-expression before ')' token
             work_space.data_ptr<scalar_t>(),
                                           ^
    /usr/local/lib/python3.5/dist-packages/torch/include/ATen/Dispatch.h:11:12: note: in definition of macro 'AT_PRIVATE_CASE_TYPE'
         return __VA_ARGS__();                          \
                ^
    csrc/mlp.cpp:99:3: note: in expansion of macro 'AT_DISPATCH_FLOATING_TYPES_AND_HALF'
       AT_DISPATCH_FLOATING_TYPES_AND_HALF(inputs[0].type(), "mlp_forward", [&] {
       ^
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
  Running setup.py install for apex ... error
  Rolling back uninstall of apex
  Moving to /usr/local/lib/python3.5/dist-packages/amp_C.cpython-35m-x86_64-linux-gnu.so
   from /tmp/pip-uninstall-mp7b2fsu/amp_C.cpython-35m-x86_64-linux-gnu.so
  Moving to /usr/local/lib/python3.5/dist-packages/apex-0.1.egg-info
   from /usr/local/lib/python3.5/dist-packages/~pex-0.1.egg-info
  Moving to /usr/local/lib/python3.5/dist-packages/apex/
   from /usr/local/lib/python3.5/dist-packages/~pex
  Moving to /usr/local/lib/python3.5/dist-packages/apex_C.cpython-35m-x86_64-linux-gnu.so
   from /tmp/pip-uninstall-mp7b2fsu/apex_C.cpython-35m-x86_64-linux-gnu.so
  Moving to /usr/local/lib/python3.5/dist-packages/fused_adam_cuda.cpython-35m-x86_64-linux-gnu.so
   from /tmp/pip-uninstall-mp7b2fsu/fused_adam_cuda.cpython-35m-x86_64-linux-gnu.so
  Moving to /usr/local/lib/python3.5/dist-packages/fused_layer_norm_cuda.cpython-35m-x86_64-linux-gnu.so
   from /tmp/pip-uninstall-mp7b2fsu/fused_layer_norm_cuda.cpython-35m-x86_64-linux-gnu.so
  Moving to /usr/local/lib/python3.5/dist-packages/syncbn.cpython-35m-x86_64-linux-gnu.so
   from /tmp/pip-uninstall-mp7b2fsu/syncbn.cpython-35m-x86_64-linux-gnu.so
  Replacing /usr/local/lib/python3.5/dist-packages/amp_C.cpython-35m-x86_64-linux-gnu.so from /tmp/pip-uninstall-mp7b2fsu/amp_C.cpython-35m-x86_64-linux-gnu.so
  Replacing /usr/local/lib/python3.5/dist-packages/apex-0.1.egg-info from /usr/local/lib/python3.5/dist-packages/~pex-0.1.egg-info
  Replacing /usr/local/lib/python3.5/dist-packages/apex/ from /usr/local/lib/python3.5/dist-packages/~pex
  Replacing /usr/local/lib/python3.5/dist-packages/apex_C.cpython-35m-x86_64-linux-gnu.so from /tmp/pip-uninstall-mp7b2fsu/apex_C.cpython-35m-x86_64-linux-gnu.so
  Replacing /usr/local/lib/python3.5/dist-packages/fused_adam_cuda.cpython-35m-x86_64-linux-gnu.so from /tmp/pip-uninstall-mp7b2fsu/fused_adam_cuda.cpython-35m-x86_64-linux-gnu.so
  Replacing /usr/local/lib/python3.5/dist-packages/fused_layer_norm_cuda.cpython-35m-x86_64-linux-gnu.so from /tmp/pip-uninstall-mp7b2fsu/fused_layer_norm_cuda.cpython-35m-x86_64-linux-gnu.so
  Replacing /usr/local/lib/python3.5/dist-packages/syncbn.cpython-35m-x86_64-linux-gnu.so from /tmp/pip-uninstall-mp7b2fsu/syncbn.cpython-35m-x86_64-linux-gnu.so
Cleaning up...
  Removing source in /tmp/pip-req-build-u4sa6hp4
Removed build tracker '/tmp/pip-req-tracker-4qfgsf2_'
ERROR: Command "/usr/bin/python -u -c 'import setuptools, tokenize;__file__='"'"'/tmp/pip-req-build-u4sa6hp4/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' --cpp_ext --cuda_ext install --record /tmp/pip-record-s2somaw8/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-req-build-u4sa6hp4/
Exception information:
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/pip/_internal/cli/base_command.py", line 178, in main
    status = self.run(options, args)
  File "/usr/local/lib/python3.5/dist-packages/pip/_internal/commands/install.py", line 414, in run
    use_user_site=options.use_user_site,
  File "/usr/local/lib/python3.5/dist-packages/pip/_internal/req/__init__.py", line 58, in install_given_reqs
    **kwargs
  File "/usr/local/lib/python3.5/dist-packages/pip/_internal/req/req_install.py", line 951, in install
    spinner=spinner,
  File "/usr/local/lib/python3.5/dist-packages/pip/_internal/utils/misc.py", line 776, in call_subprocess
    % (command_desc, proc.returncode, cwd))
pip._internal.exceptions.InstallationError: Command "/usr/bin/python -u -c 'import setuptools, tokenize;__file__='"'"'/tmp/pip-req-build-u4sa6hp4/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' --cpp_ext --cuda_ext install --record /tmp/pip-record-s2somaw8/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-req-build-u4sa6hp4/
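For what it's worth, every failing line in the log is a data_ptr<scalar_t>() call, and "expected primary-expression before '>' token" is what the compiler prints when data_ptr is not a template in the installed headers; on this older PyTorch the templated accessor is spelled Tensor::data<scalar_t>() instead. A purely illustrative, unsupported experiment (a sketch only; the rollback below is the practical route) would be:

# not a supported fix: swap the templated accessor name in the file shown in
# the log and rebuild, assuming no other new-API usages remain in the source
sed -i 's/data_ptr<scalar_t>/data<scalar_t>/g' csrc/mlp.cpp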

Not sure if this would help, but I encountered the same issue and had to rollback to an earlier version of apex:

git checkout f3a960f80244cf9e80558ab30f7f7e8cbf03c0a0

@da03 thanks, I can use this workaround temporarily. Hopefully the issue will be addressed in a later commit.
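For reference, the full rollback sequence looks roughly like this (a sketch only; the pip flags mirror the ones in the log above and the usual apex install command, so adjust for your environment):

git clone https://github.com/NVIDIA/apex.git
cd apex
git checkout f3a960f80244cf9e80558ab30f7f7e8cbf03c0a0
# rebuild the C++/CUDA extensions with the same flags as before
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./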

I also ran into the same error! Solved by @da03's solution.


I had the same issue, also solved by @da03's solution.

Machine configuration: CentOS 7, GCC 7.3, Pytorch 1.5, CUDA 9.2. It's on a shared resource, so upgrading is not possible.

Not sure if this would help, but I encountered the same issue and had to rollback to an earlier version of apex:

git checkout f3a960f80244cf9e80558ab30f7f7e8cbf03c0a0

Same error with CUDA 9.0, torch 1.1.0.
Thanks to @da03, saved my life!!!

Not sure if this would help, but I encountered the same issue and had to rollback to an earlier version of apex:

git checkout f3a960f80244cf9e80558ab30f7f7e8cbf03c0a0

Thank you so much. Helped a lot

Not sure if this would help, but I encountered the same issue and had to rollback to an earlier version of apex:

git checkout f3a960f80244cf9e80558ab30f7f7e8cbf03c0a0

Saved my day! Thanks!

The same error was solved by @da03's solution and apex installed successfully, but a new issue arises when running with apex.

File "run_classifier.py", line 107, in train
   model, optimizer = amp.initialize(model, optimizer, opt_level=args.fp16_opt_level)
File "/home/jin/anaconda2/envs/bl-36/lib/python3.6/site-packages/apex/amp/frontend.py", line 358, in initialize
   return _initialize(models, optimizers, _amp_state.opt_properties, num_losses, cast_model_outputs)
File "/home/jin/anaconda2/envs/bl-36/lib/python3.6/site-packages/apex/amp/_initialize.py", line 225, in _initialize
   optimizers[i] = _process_optimizer(optimizer, properties)
File "/home/jin/anaconda2/envs/bl-36/lib/python3.6/site-packages/apex/amp/_process_optimizer.py", line 344, in _process_optimizer
   optimizer._amp_stash.dummy_overflow_buf = torch.cuda.IntTensor([0]);
RuntimeError: CUDA error: unknown error

Configuration: CUDA 9.0, gcc 4.8, pytorch 1.1.0, CentOS 7.
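A quick sanity check worth running here (a sketch, not apex-specific): confirm that CUDA is usable from this PyTorch build at all, since the failure is the very first CUDA tensor allocation that apex makes.

python -c "import torch; print(torch.__version__, torch.version.cuda)"
python -c "import torch; print(torch.cuda.is_available())"
# the same allocation apex performs inside _process_optimizer
python -c "import torch; print(torch.cuda.IntTensor([0]))"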

Thanks, @da03 !

Not sure if this would help, but I encountered the same issue and had to rollback to an earlier version of apex:

git checkout f3a960f80244cf9e80558ab30f7f7e8cbf03c0a0

Thanks @da03. This worked for me as well. I tried building pytorch from source but was getting an error there as well. Nothing else worked!

Thanks @da03 ! You saved my end-term project!

My environment:

  • CentOS 7
  • cuda 10.0
  • pytorch 1.2
  • gcc 6.2

Thanks @da03. Your awesome finding saved me a lot of trouble.
My dev env looks like:

  • Ubuntu 18.04.3 LTS
  • CUDA 9.2.148
  • pytorch 1.1.0
  • gcc 7.4.0
  • nvcc 9.2.148
  • NVIDIA-SMI 440.64.00 Driver Version: 440.64.00

I am also seeing this error, the env is:

  • Ubuntu 18.04
  • CUDA 10.0.130
  • pytorch 1.2.0
  • gcc 7.4.0

I found that at least commit 5b71d3695bf39 compiles without errors. Here is what I did:

git clone https://github.com/NVIDIA/apex.git && \
cd apex && \
git checkout 5b71d3695bf39 && \
python setup.py install --cuda_ext --cpp_ext

It can compile without errors.

Thanks @da03 !
My env:

  • Centos 7
  • CUDA 9.1
  • PyTorch 1.2

I'm still encountering the issue even with the suggested rollback (git checkout f3a960f80244cf9e80558ab30f7f7e8cbf03c0a0). Any suggestions?

Update: I was having issues with pytorch 1.5.1, but they went away when I downgraded to pytorch 1.4 (and rolled back with git checkout 5b71d3695bf39).

@da03 thanks, helped a lot.

Thanks, saved my day. 😄

My Env Info:

  • Pytorch: 1.2.0
  • CUDA: 10.0
  • GCC: 7.4.0

Not sure if this would help, but I encountered the same issue and had to rollback to an earlier version of apex:

git checkout f3a960f80244cf9e80558ab30f7f7e8cbf03c0a0

Thanks @da03 !
It does work!

     }
     ^
    /home/hadoop-basecv/.local/lib/python3.6/site-packages/torch/include/ATen/core/dispatch/Dispatcher.h: In member function ‘Return c10::Dispatcher::callUnboxedOnly(const c10::OperatorHandle&, Args ...) const [with Return = at::Tensor; Args = {const at::Tensor&, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>}]’:
    /home/hadoop-basecv/.local/lib/python3.6/site-packages/torch/include/ATen/core/dispatch/Dispatcher.h:203:1: warning: control reaches end of non-void function [-Wreturn-type]
     }
     ^
    /home/hadoop-basecv/.local/lib/python3.6/site-packages/torch/include/ATen/core/dispatch/Dispatcher.h: In member function ‘Return c10::Dispatcher::doCallUnboxed(const c10::DispatchTable&, const c10::LeftRight<ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction> >&, Args ...) const [with Return = bool; Args = {}]’:
    /home/hadoop-basecv/.local/lib/python3.6/site-packages/torch/include/ATen/core/dispatch/Dispatcher.h:191:1: warning: control reaches end of non-void function [-Wreturn-type]
     }
     ^
    error: command 'gcc' failed with exit status 1
    Running setup.py install for apex ... error
ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-av9m897s/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-av9m897s/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' --cpp_ext --cuda_ext install --record /tmp/pip-record-2aev5wdr/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /home/hadoop-basecv/.local/include/python3.6m/apex Check the logs for full command output.
Exception information:
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/req/req_install.py", line 854, in install
    req_description=str(self.req),
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/operations/install/legacy.py", line 86, in install
    raise LegacyInstallFailure
pip._internal.operations.install.legacy.LegacyInstallFailure

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/cli/base_command.py", line 224, in _main
    status = self.run(options, args)
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/cli/req_command.py", line 180, in wrapper
    return func(self, options, args)
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/commands/install.py", line 403, in run
    pycompile=options.compile,
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/req/__init__.py", line 90, in install_given_reqs
    pycompile=pycompile,
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/req/req_install.py", line 858, in install
    six.reraise(*exc.parent)
  File "/usr/local/lib/python3.6/site-packages/pip/_vendor/six.py", line 703, in reraise
    raise value
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/operations/install/legacy.py", line 76, in install
    cwd=unpacked_source_directory,
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/utils/subprocess.py", line 275, in runner
    spinner=spinner,
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/utils/subprocess.py", line 240, in call_subprocess
    raise InstallationError(exc_msg)
pip._internal.exceptions.InstallationError: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-av9m897s/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-av9m897s/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' --cpp_ext --cuda_ext install --record /tmp/pip-record-2aev5wdr/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /home/hadoop-basecv/.local/include/python3.6m/apex Check the logs for full command output.

None of the checkouts worked for me. I am on CUDA 10.0 and PyTorch 1.4.

Updated to gcc 7 and it worked. It seems gcc 4.8.5 doesn't work with it.
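A minimal sketch of pointing the build at gcc 7, assuming it is installed as gcc-7/g++-7 (setuptools respects CC/CXX for the C++ parts; the CUDA host compiler may need to be set separately):

export CC=gcc-7
export CXX=g++-7
# rebuild the extensions with the newer compiler
python setup.py install --cuda_ext --cpp_ext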

Updated to gcc 7 and it worked. It seems gcc 4.8.5 doesn't work with it.

I think you are right!!!

Ubuntu 18.04 LTS
CUDA 11.2
pytorch 1.8.1+cu111
gcc 7.5
cuda toolkit 11.1
Please help me, I've been fighting this for a whole day.


Not sure if this would help, but I encountered the same issue and had to rollback to an earlier version of apex:

git checkout f3a960f80244cf9e80558ab30f7f7e8cbf03c0a0

It works for me! Absolutely great, thanks for saving me time!
Configuration:
CUDA 10.0, pytorch 1.1, python 3.7

I am also seeing this error, the env is:

  • Ubuntu 18.04
  • CUDA 10.0.130
  • pytorch 1.2.0
  • gcc 7.4.0

I found that at least commit 5b71d3695bf39 compiles without errors. Here is what I did:

git clone https://github.com/NVIDIA/apex.git && \
cd apex && \
git checkout 5b71d3695bf39 && \
python setup.py install --cuda_ext --cpp_ext

It can compile without errors.

  • Ubuntu 16.04
  • CUDA 8.0 CUDNN 7.2
  • pytorch 0.4.1 (conda install pytorch=0.4.1 cuda80 -c pytorch)
  • python 3.6
git clone https://github.com/NVIDIA/apex.git && \
cd apex && \
git checkout 5b71d3695bf39 && \
python setup.py install

This avoided the Python 3.6 annotations error; I also dropped the --cuda_ext and --cpp_ext options, since they require a Torch version > 1.0.
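A tiny check (just a sketch) before deciding whether to pass the extension flags, since --cpp_ext and --cuda_ext need a newer torch:

python -c "import torch; print(torch.__version__)"
# e.g. 0.4.1 -> install apex without --cpp_ext/--cuda_ext; 1.x or newer can build the extensions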