pytorch / translate

Translate - a PyTorch Language Library

Python version degrades

kalyangvs opened this issue

While installing with conda install -y -c caffe2 "pytorch-caffe2-cuda${TMP_CUDA_VERSION}.0-cudnn7", the environment's Python version degrades to 2.7.15, so I am not able to export a model using https://github.com/pytorch/translate#exporting-a-model-with-onnx

Can this conversion be done on a CPU?
Does PyTorch 1.0 remove the need for many of the above steps and simplify the conversion process?


export should be doable on CPU. Note that none of our ONNX tests require GPU: https://github.com/pytorch/translate/blob/master/pytorch_translate/test/test_onnx.py

whereas training DOES require GPU: https://github.com/pytorch/translate/blob/master/pytorch_translate/test/test_train.py

Tests that require a GPU have this decorator: @unittest.skipIf(torch.cuda.device_count() < 1, "No GPU available for test.")
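A small sketch of how that decorator plays out in practice (the test class and method names here are illustrative, not from the repo): the CPU test always runs, while the GPU test is skipped automatically on machines without CUDA.

```python
import unittest

import torch


class ExportTest(unittest.TestCase):
    # Runs everywhere: ONNX export is CPU-only.
    def test_cpu_export(self):
        self.assertTrue(True)

    # Skipped automatically when no GPU is visible to torch.
    @unittest.skipIf(torch.cuda.device_count() < 1, "No GPU available for test.")
    def test_gpu_training(self):
        self.assertTrue(torch.cuda.is_available())
```

Running this suite on a CPU-only box reports the GPU test as skipped rather than failed.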

The build fails at make 2>&1 | tee MAKE_OUT
with this log:

Scanning dependencies of target translation_decoder
[ 16%] Building CXX object CMakeFiles/translation_decoder.dir/Decoder.cpp.o
In file included from /home/local/usr/miniconda3/envs/pytrans/include/caffe2/core/logging.h:12:0,
from /home/local/usr/miniconda3/envs/pytrans/include/caffe2/core/init.h:6,
from /home/local/usr/translate/pytorch_translate/cpp/Decoder.cpp:32:
/home/local/usr/miniconda3/envs/pytrans/include/caffe2/proto/caffe2.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
#error This file was generated by an older version of protoc which is
^

Please help @liezl200

According to BVLC/caffe#5645, uninstalling the conda-installed libprotobuf solves it, but pytorch-caffe2 also gets uninstalled along with it.
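One quick diagnostic before uninstalling anything: check whether a system protoc on the PATH is older than the libprotobuf headers in the conda environment, since that mismatch is what triggers the "generated by an older version of protoc" error above. A hedged sketch (output depends entirely on your machine):

```shell
# Which protoc would the build pick up, and what version is it?
command -v protoc || echo "no protoc on PATH"
protoc --version 2>/dev/null || true

# What protobuf does the conda environment itself provide?
conda list 2>/dev/null | grep -i protobuf || echo "conda not available"
```

If the PATH protoc version is older than the conda libprotobuf, pointing the build at the environment's own protoc (rather than uninstalling libprotobuf) is a less destructive fix to try.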

These are exactly the steps I follow (I just need to export a fairseq-trained translation model to ONNX on CPU):

git clone https://github.com/pytorch/translate.git
pushd translate

conda install -y -c caffe2 pytorch-caffe2
conda install -y numpy==1.14 --no-deps
export CONDA_PATH="$(dirname $(which conda))/.."

git clone --recursive https://github.com/onnx/onnx.git
yes | pip install ./onnx 2>&1 | tee ONNX_OUT

pip uninstall -y pytorch-translate
python3 setup.py build develop
pushd pytorch_translate/cpp

mkdir build && pushd build
cmake \
  -DCMAKE_PREFIX_PATH="${CONDA_PATH}/usr/local" \
  -DCMAKE_INSTALL_PREFIX="${CONDA_PATH}" .. \
  2>&1 | tee CMAKE_OUT

Are average attention, shared embeddings, and knowledge distillation in the works, as the Marian-NMT people have done? @liezl200

@gvskalyan, knowledge distillation is indeed in the works. Note also that shared embeddings are already an option for the transformer architecture [1]. We don't have immediate plans to add average attention, but we suggest looking at the hybrid architecture [2], which is in the same spirit (faster inference for transformer models).

[1]

parser.add_argument(
"--share-decoder-input-output-embed",
default=False,
action="store_true",
help="share decoder input and output embeddings",
)
parser.add_argument(
"--share-all-embeddings",
default=False,
action="store_true",
help="share encoder, decoder and output embeddings"
" (requires shared dictionary and embed dim)",
)
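To see how those two store_true flags behave, here is a self-contained sketch that rebuilds the same argparse pattern on a fresh parser (this is an illustration, not the actual pytorch_translate parser):

```python
import argparse

# Fresh parser reproducing the embedding-sharing flags from [1].
parser = argparse.ArgumentParser()
parser.add_argument(
    "--share-decoder-input-output-embed",
    default=False,
    action="store_true",
    help="share decoder input and output embeddings",
)
parser.add_argument(
    "--share-all-embeddings",
    default=False,
    action="store_true",
    help="share encoder, decoder and output embeddings"
    " (requires shared dictionary and embed dim)",
)

# store_true flags default to False and flip to True only when passed.
args = parser.parse_args(["--share-all-embeddings"])
```

Here args.share_all_embeddings is True while args.share_decoder_input_output_embed stays False, so the two sharing modes can be enabled independently.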

[2]

class HybridTransformerRNNModel(FairseqModel):

I am still unable to make build due to the above error.

I had the same problem. Python 3.6 degrades to 2.7.

@FuKaiYin You could try training the model in translate itself and using the Docker image to export the model; that might help!