THUDM / GLM

GLM (General Language Model)

Finetuning GLM-10B-chinese in an environment with 160 GB of RAM, two 24 GB RTX 3090s, and an 800 GB disk

694344851 opened this issue · comments

bash scripts/ds_finetune_seq2seq.sh config_tasks/model_blocklm_10B_chinese.sh config_tasks/seq_customization.sh

[2023-03-14 14:41:37,971] [INFO] [runner.py:548:main] cmd = /root/miniconda3/envs/glm3.8/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=20736 --enable_each_rank_log=None finetune_glm.py --deepspeed --deepspeed_config config_tasks/config_blocklm_10B_cnndm.json --finetune --experiment-name GLM-10B-chinese-customization_03-14-14-41 --task customization --data-dir /root/autodl-tmp/GLM-main/data/customization --save /root/autodl-tmp/GLM-main/data/finetune_checkpoints --checkpoint-activations --num-workers 1 --no-load-lr-scheduler --block-lm --cloze-eval --task-mask --num-layers 48 --hidden-size 4096 --num-attention-heads 64 --max-position-embeddings 1024 --tokenizer-type ChineseSPTokenizer --load-pretrained /root/autodl-tmp/GLM-main/models_glm/glm-10b-chinese_MP2 --epochs 10 --lr 1e-5 --lr-decay-style linear --warmup 0.06 --label-smoothing 0.1 --save-interval 10000 --log-interval 50 --eval-interval 1000 --eval-iters 100 --eval-epoch 2 --src-seq-length 512 --tgt-seq-length 128 --min-tgt-length 55 --length-penalty 0.7 --no-repeat-ngram-size 3 --num-beams 5 --select-topk --eval-batch-size 1 --fp16 --model-parallel-size 2 --overwrite
[2023-03-14 14:41:39,805] [INFO] [launch.py:135:main] 0 NCCL_IB_DISABLE=0
[2023-03-14 14:41:39,805] [INFO] [launch.py:135:main] 0 NCCL_DEBUG=info
[2023-03-14 14:41:39,805] [INFO] [launch.py:135:main] 0 NCCL_NET_GDR_LEVEL=2
[2023-03-14 14:41:39,805] [INFO] [launch.py:142:main] WORLD INFO DICT: {'localhost': [0, 1]}
[2023-03-14 14:41:39,805] [INFO] [launch.py:148:main] nnodes=1, num_local_procs=2, node_rank=0
[2023-03-14 14:41:39,805] [INFO] [launch.py:161:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})
[2023-03-14 14:41:39,805] [INFO] [launch.py:162:main] dist_world_size=2
[2023-03-14 14:41:39,805] [INFO] [launch.py:164:main] Setting CUDA_VISIBLE_DEVICES=0,1
using world size: 2 and model-parallel size: 2

using dynamic loss scaling
[2023-03-14 14:41:42,648] [INFO] [comm.py:657:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
initializing model parallel with size 2
initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234
{'pad': 50000, 'eos': 50000, 'sep': 50001, 'ENC': 50002, 'MASK': 50003, 'unk': 50004, 'sop': 50006, 'eop': 50007, 'gMASK': 50007, 'sMASK': 50008}
padded vocab (size: 50009) with 39 dummy tokens (new size: 50048)
found end-of-document token: 50000
autodl-container-84b1118916-3580f22d:1919:1919 [0] NCCL INFO Bootstrap : Using eth0:172.17.0.3<0>
autodl-container-84b1118916-3580f22d:1919:1919 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
autodl-container-84b1118916-3580f22d:1919:1919 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 0.

autodl-container-84b1118916-3580f22d:1919:1919 [0] misc/ibvwrap.cc:212 NCCL WARN Call to ibv_open_device failed

autodl-container-84b1118916-3580f22d:1919:1919 [0] transport/net_ib.cc:149 NCCL WARN NET/IB : Unable to open device mlx5_bond_0
autodl-container-84b1118916-3580f22d:1919:1919 [0] NCCL INFO NET/IB : No device found.
autodl-container-84b1118916-3580f22d:1919:1919 [0] NCCL INFO NET/Socket : Using [0]eth0:172.17.0.3<0>
autodl-container-84b1118916-3580f22d:1919:1919 [0] NCCL INFO Using network Socket
NCCL version 2.10.3+cuda11.3
autodl-container-84b1118916-3580f22d:1920:1920 [1] NCCL INFO Bootstrap : Using eth0:172.17.0.3<0>
autodl-container-84b1118916-3580f22d:1920:1920 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
autodl-container-84b1118916-3580f22d:1920:1920 [1] NCCL INFO NCCL_IB_DISABLE set by environment to 0.

autodl-container-84b1118916-3580f22d:1920:1920 [1] misc/ibvwrap.cc:212 NCCL WARN Call to ibv_open_device failed

autodl-container-84b1118916-3580f22d:1920:1920 [1] transport/net_ib.cc:149 NCCL WARN NET/IB : Unable to open device mlx5_bond_0
autodl-container-84b1118916-3580f22d:1920:1920 [1] NCCL INFO NET/IB : No device found.
autodl-container-84b1118916-3580f22d:1920:1920 [1] NCCL INFO NET/Socket : Using [0]eth0:172.17.0.3<0>
autodl-container-84b1118916-3580f22d:1920:1920 [1] NCCL INFO Using network Socket
autodl-container-84b1118916-3580f22d:1920:2000 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0
autodl-container-84b1118916-3580f22d:1920:2000 [1] NCCL INFO Setting affinity for GPU 1 to ffffffff,00000000,ffffffff,00000000
autodl-container-84b1118916-3580f22d:1919:1999 [0] NCCL INFO Channel 00/02 : 0 1
autodl-container-84b1118916-3580f22d:1919:1999 [0] NCCL INFO Channel 01/02 : 0 1
autodl-container-84b1118916-3580f22d:1919:1999 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
autodl-container-84b1118916-3580f22d:1919:1999 [0] NCCL INFO Setting affinity for GPU 0 to ffffffff,00000000,ffffffff
autodl-container-84b1118916-3580f22d:1919:1999 [0] NCCL INFO Channel 00 : 0[4f000] -> 1[d1000] via direct shared memory
autodl-container-84b1118916-3580f22d:1919:1999 [0] NCCL INFO Channel 01 : 0[4f000] -> 1[d1000] via direct shared memory
autodl-container-84b1118916-3580f22d:1920:2000 [1] NCCL INFO Channel 00 : 1[d1000] -> 0[4f000] via direct shared memory
autodl-container-84b1118916-3580f22d:1920:2000 [1] NCCL INFO Channel 01 : 1[d1000] -> 0[4f000] via direct shared memory
autodl-container-84b1118916-3580f22d:1920:2000 [1] NCCL INFO Connected all rings
autodl-container-84b1118916-3580f22d:1920:2000 [1] NCCL INFO Connected all trees
autodl-container-84b1118916-3580f22d:1920:2000 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
autodl-container-84b1118916-3580f22d:1920:2000 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
autodl-container-84b1118916-3580f22d:1919:1999 [0] NCCL INFO Connected all rings
autodl-container-84b1118916-3580f22d:1919:1999 [0] NCCL INFO Connected all trees
autodl-container-84b1118916-3580f22d:1919:1999 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
autodl-container-84b1118916-3580f22d:1919:1999 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
autodl-container-84b1118916-3580f22d:1919:1999 [0] NCCL INFO comm 0x7f45a40030d0 rank 0 nranks 2 cudaDev 0 busId 4f000 - Init COMPLETE
autodl-container-84b1118916-3580f22d:1920:2000 [1] NCCL INFO comm 0x7f9e800030d0 rank 1 nranks 2 cudaDev 1 busId d1000 - Init COMPLETE
autodl-container-84b1118916-3580f22d:1919:1919 [0] NCCL INFO Launch mode Parallel
Creating customization-train dataset from /root/autodl-tmp/GLM-main/data/customization
Return 8000 train examples
building train and validation dataloaders ...
Creating customization-dev dataset from /root/autodl-tmp/GLM-main/data/customization
Return 1000 dev examples
Creating customization-test dataset from /root/autodl-tmp/GLM-main/data/customization
Return 1000 test examples
building GLM model ...

number of parameters on model parallel rank 0: 4944609280
number of parameters on model parallel rank 1: 4944609280
DeepSpeed is enabled.
[2023-03-14 14:43:17,844] [INFO] [logging.py:75:log_dist] [Rank 0] DeepSpeed info: version=0.8.1, git-hash=unknown, git-branch=unknown
[2023-03-14 14:43:17,851] [WARNING] [config_utils.py:74:_process_deprecated_field] Config parameter cpu_offload is deprecated use offload_optimizer instead
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 00/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 01/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 02/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 03/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 04/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 05/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 06/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 07/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 08/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 09/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 10/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 11/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 12/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 13/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 14/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 15/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 16/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 17/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 18/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 19/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 20/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 21/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 22/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 23/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 24/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 25/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 26/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 27/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 28/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 29/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 30/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Channel 31/32 : 0
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Trees [0] -1/-1/-1->0->-1 [1] -1/-1/-1->0->-1 [2] -1/-1/-1->0->-1 [3] -1/-1/-1->0->-1 [4] -1/-1/-1->0->-1 [5] -1/-1/-1->0->-1 [6] -1/-1/-1->0->-1 [7] -1/-1/-1->0->-1 [8] -1/-1/-1->0->-1 [9] -1/-1/-1->0->-1 [10] -1/-1/-1->0->-1 [11] -1/-1/-1->0->-1 [12] -1/-1/-1->0->-1 [13] -1/-1/-1->0->-1 [14] -1/-1/-1->0->-1 [15] -1/-1/-1->0->-1 [16] -1/-1/-1->0->-1 [17] -1/-1/-1->0->-1 [18] -1/-1/-1->0->-1 [19] -1/-1/-1->0->-1 [20] -1/-1/-1->0->-1 [21] -1/-1/-1->0->-1 [22] -1/-1/-1->0->-1 [23] -1/-1/-1->0->-1 [24] -1/-1/-1->0->-1 [25] -1/-1/-1->0->-1 [26] -1/-1/-1->0->-1 [27] -1/-1/-1->0->-1 [28] -1/-1/-1->0->-1 [29] -1/-1/-1->0->-1 [30] -1/-1/-1->0->-1 [31] -1/-1/-1->0->-1
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Setting affinity for GPU 0 to ffffffff,00000000,ffffffff
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Connected all rings
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO Connected all trees
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO 32 coll channels, 32 p2p channels, 32 p2p channels per peer
autodl-container-84b1118916-3580f22d:1919:2139 [0] NCCL INFO comm 0x7f43140030d0 rank 0 nranks 1 cudaDev 0 busId 4f000 - Init COMPLETE
[2023-03-14 14:43:17,912] [INFO] [logging.py:75:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
[2023-03-14 14:43:18,202] [WARNING] [config_utils.py:74:_process_deprecated_field] Config parameter cpu_offload is deprecated use offload_optimizer instead
NCCL version 2.10.3+cuda11.3
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 00/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 01/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 02/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 03/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 04/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 05/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 06/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 07/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 08/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 09/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 10/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 11/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 12/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 13/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 14/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 15/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 16/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 17/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 18/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 19/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 20/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 21/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 22/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 23/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 24/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 25/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 26/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 27/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 28/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 29/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 30/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Channel 31/32 : 0
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Trees [0] -1/-1/-1->0->-1 [1] -1/-1/-1->0->-1 [2] -1/-1/-1->0->-1 [3] -1/-1/-1->0->-1 [4] -1/-1/-1->0->-1 [5] -1/-1/-1->0->-1 [6] -1/-1/-1->0->-1 [7] -1/-1/-1->0->-1 [8] -1/-1/-1->0->-1 [9] -1/-1/-1->0->-1 [10] -1/-1/-1->0->-1 [11] -1/-1/-1->0->-1 [12] -1/-1/-1->0->-1 [13] -1/-1/-1->0->-1 [14] -1/-1/-1->0->-1 [15] -1/-1/-1->0->-1 [16] -1/-1/-1->0->-1 [17] -1/-1/-1->0->-1 [18] -1/-1/-1->0->-1 [19] -1/-1/-1->0->-1 [20] -1/-1/-1->0->-1 [21] -1/-1/-1->0->-1 [22] -1/-1/-1->0->-1 [23] -1/-1/-1->0->-1 [24] -1/-1/-1->0->-1 [25] -1/-1/-1->0->-1 [26] -1/-1/-1->0->-1 [27] -1/-1/-1->0->-1 [28] -1/-1/-1->0->-1 [29] -1/-1/-1->0->-1 [30] -1/-1/-1->0->-1 [31] -1/-1/-1->0->-1
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Setting affinity for GPU 1 to ffffffff,00000000,ffffffff,00000000
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Connected all rings
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO Connected all trees
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO 32 coll channels, 32 p2p channels, 32 p2p channels per peer
autodl-container-84b1118916-3580f22d:1920:2148 [1] NCCL INFO comm 0x7f9be80030d0 rank 0 nranks 1 cudaDev 1 busId d1000 - Init COMPLETE
Using /root/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...
Using /root/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /root/.cache/torch_extensions/py38_cu113/cpu_adam/build.ninja...
Building extension module cpu_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module cpu_adam...
Time to load cpu_adam op: 3.5013041496276855 seconds
Loading extension module cpu_adam...
Time to load cpu_adam op: 3.348848581314087 seconds
Adam Optimizer #0 is created with AVX512 arithmetic capability.
Config: alpha=0.000005, betas=(0.900000, 0.950000), weight_decay=0.010000, adam_w=1
[2023-03-14 14:43:25,097] [INFO] [logging.py:75:log_dist] [Rank 0] Using DeepSpeed Optimizer param name adam as basic optimizer
Adam Optimizer #0 is created with AVX512 arithmetic capability.
Config: alpha=0.000005, betas=(0.900000, 0.950000), weight_decay=0.010000, adam_w=1
[2023-03-14 14:43:25,160] [INFO] [logging.py:75:log_dist] [Rank 0] DeepSpeed Basic Optimizer = DeepSpeedCPUAdam
[2023-03-14 14:43:25,160] [INFO] [utils.py:53:is_zero_supported_optimizer] Checking ZeRO support for optimizer=DeepSpeedCPUAdam type=<class 'deepspeed.ops.adam.cpu_adam.DeepSpeedCPUAdam'>
[2023-03-14 14:43:25,160] [INFO] [logging.py:75:log_dist] [Rank 0] Creating torch.float16 ZeRO stage 2 optimizer
[2023-03-14 14:43:25,160] [INFO] [stage_1_and_2.py:144:init] Reduce bucket size 50000000
[2023-03-14 14:43:25,160] [INFO] [stage_1_and_2.py:145:init] Allgather bucket size 50000000
[2023-03-14 14:43:25,160] [INFO] [stage_1_and_2.py:146:init] CPU Offload: True
[2023-03-14 14:43:25,160] [INFO] [stage_1_and_2.py:147:init] Round robin gradient partitioning: False
Using /root/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...
Using /root/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...
Emitting ninja build file /root/.cache/torch_extensions/py38_cu113/utils/build.ninja...
Building extension module utils...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module utils...
Time to load utils op: 0.6427571773529053 seconds
Loading extension module utils...
Time to load utils op: 0.7038333415985107 seconds
Rank: 1 partition count [1, 1] and sizes[(4942733312, False), (1875968, False)]
Rank: 0 partition count [1, 1] and sizes[(4942733312, False), (1875968, False)]
autodl-container-84b1118916-3580f22d:1919:2302 [0] NCCL INFO Channel 00/02 : 0 1
autodl-container-84b1118916-3580f22d:1920:2303 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0
autodl-container-84b1118916-3580f22d:1919:2302 [0] NCCL INFO Channel 01/02 : 0 1
autodl-container-84b1118916-3580f22d:1920:2303 [1] NCCL INFO Setting affinity for GPU 1 to ffffffff,00000000,ffffffff,00000000
autodl-container-84b1118916-3580f22d:1919:2302 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
autodl-container-84b1118916-3580f22d:1919:2302 [0] NCCL INFO Setting affinity for GPU 0 to ffffffff,00000000,ffffffff
autodl-container-84b1118916-3580f22d:1919:2302 [0] NCCL INFO Channel 00 : 0[4f000] -> 1[d1000] via direct shared memory
autodl-container-84b1118916-3580f22d:1919:2302 [0] NCCL INFO Channel 01 : 0[4f000] -> 1[d1000] via direct shared memory
autodl-container-84b1118916-3580f22d:1920:2303 [1] NCCL INFO Channel 00 : 1[d1000] -> 0[4f000] via direct shared memory
autodl-container-84b1118916-3580f22d:1920:2303 [1] NCCL INFO Channel 01 : 1[d1000] -> 0[4f000] via direct shared memory
autodl-container-84b1118916-3580f22d:1919:2302 [0] NCCL INFO Connected all rings
autodl-container-84b1118916-3580f22d:1919:2302 [0] NCCL INFO Connected all trees
autodl-container-84b1118916-3580f22d:1919:2302 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
autodl-container-84b1118916-3580f22d:1919:2302 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
autodl-container-84b1118916-3580f22d:1920:2303 [1] NCCL INFO Connected all rings
autodl-container-84b1118916-3580f22d:1920:2303 [1] NCCL INFO Connected all trees
autodl-container-84b1118916-3580f22d:1920:2303 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
autodl-container-84b1118916-3580f22d:1920:2303 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
autodl-container-84b1118916-3580f22d:1919:2302 [0] NCCL INFO comm 0x7f45a00030d0 rank 0 nranks 2 cudaDev 0 busId 4f000 - Init COMPLETE
autodl-container-84b1118916-3580f22d:1920:2303 [1] NCCL INFO comm 0x7f9be8007a90 rank 1 nranks 2 cudaDev 1 busId d1000 - Init COMPLETE
autodl-container-84b1118916-3580f22d:1919:1919 [0] NCCL INFO Launch mode Parallel
[2023-03-14 14:43:45,285] [INFO] [utils.py:825:see_memory_usage] Before initializing optimizer states
[2023-03-14 14:43:45,285] [INFO] [utils.py:826:see_memory_usage] MA 9.4 GB Max_MA 9.4 GB CA 9.42 GB Max_CA 9 GB
[2023-03-14 14:43:45,286] [INFO] [utils.py:834:see_memory_usage] CPU Virtual Memory: used = 85.79 GB, percent = 11.4%
[2023-03-14 14:44:16,067] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1919
[2023-03-14 14:44:18,952] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1920

Where is this problem coming from?
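For reference: both subprocesses are killed immediately after the "Before initializing optimizer states" line, which is where DeepSpeed (ZeRO stage 2 with cpu_offload, per the config warnings above) allocates the fp32 master weights and Adam states in host RAM. A rough check, assuming the usual ~16 bytes of host memory per parameter under ZeRO-2 CPU offload (fp32 params, Adam momentum and variance, plus an fp32 gradient buffer) and the 4,944,609,280 parameters per model-parallel rank reported in the log:

# Hypothetical back-of-the-envelope estimate, not a measurement: two ranks on one node,
# ~16 bytes/param of host RAM each under ZeRO-2 + CPU offload.
python -c "print(2 * 4944609280 * 16 / 1024**3, 'GiB')"   # ~147 GiB

That is already close to the machine's 160 GB of RAM before counting the CPU-side model weights and data loaders, so a plausible explanation is the kernel OOM killer terminating one rank and the launcher then killing the other. This is only an inference from the log; checking dmesg after a failed run would confirm or rule it out.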

python change_mp.py path_to_the_checkpoint 2
Also, how should the split model produced by this command be used in the finetuning run?
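For context, a minimal sketch of how the split checkpoint is typically wired into the finetuning run (paths are placeholders, and the variable that feeds --load-pretrained inside config_tasks/model_blocklm_10B_chinese.sh may be named differently in your copy):

# Split the single-partition checkpoint into 2 model-parallel partitions.
python change_mp.py /root/autodl-tmp/GLM-main/models_glm/glm-10b-chinese 2
# In the launch command above, the split model is then consumed via
# --load-pretrained .../glm-10b-chinese_MP2 together with --model-parallel-size 2,
# so point the checkpoint path in config_tasks/model_blocklm_10B_chinese.sh at the
# *_MP2 directory and keep the model-parallel size equal to the split factor.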

I'm running into the same problem.

@694344851 @wujindou May I ask you both: has the issue above been resolved? I have four 3090s and want to finetune, but I'm not sure how to set up model parallelism. Could you share the relevant code? Thanks.

@RileyShe For multi-GPU training you only need to change the settings in scripts/ds_finetune_seq2seq.sh, e.g. num_nodes and num_gpus (see the sketch below).
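A minimal sketch of what that could look like for four 3090s on a single node (variable names are illustrative; match them to whatever your copy of scripts/ds_finetune_seq2seq.sh actually uses):

# scripts/ds_finetune_seq2seq.sh -- illustrative values only
NUM_NODES=1            # single machine
NUM_GPUS_PER_NODE=4    # four RTX 3090s
MP_SIZE=2              # must match the _MPn split made by change_mp.py;
                       # with 4 GPUs this gives data-parallel size 2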