apache / tvm

Open deep learning compiler stack for CPU, GPU, and specialized accelerators

Home Page: https://tvm.apache.org/


[Bug][Meta-schedule][tensorizing] BERT Meta-schedule tensorizing Runtime Error

SharynHu opened this issue

I was tensorizing BERT using MetaSchedule and got a runtime error.

Expected behavior

It should

  • Extract the tuning tasks
  • Tune the tasks
  • Give me a tuned model to build
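For reference, the extract/tune/build flow above corresponds to MetaSchedule's Relay integration. A minimal sketch is below; it uses a toy dense workload instead of BERT, a plain `"cuda"` target instead of an RTX 3060 tag, and an illustrative trial budget, so treat the specific arguments as assumptions rather than the exact setup from the failing test:

```python
# Sketch of the expected extract-tune-build flow via ms.relay_integration.
# The tiny dense workload stands in for the BERT Relay module; work_dir
# and max_trials_global are illustrative placeholders.
import tvm
from tvm import relay
from tvm import meta_schedule as ms

# Toy Relay module (placeholder for BERT).
x = relay.var("x", shape=(1, 64), dtype="float32")
w = relay.var("w", shape=(64, 64), dtype="float32")
mod = tvm.IRModule.from_expr(relay.nn.dense(x, w))
params = {}  # no bound weights for this toy example

target = tvm.target.Target("cuda")  # generic CUDA target, not a 3060 tag

# Steps 1 and 2: extract tuning tasks from the module and tune them,
# producing a database of the best schedules found.
database = ms.relay_integration.tune_relay(
    mod=mod,
    params=params,
    target=target,
    work_dir="./tune_work_dir",
    max_trials_global=64,  # illustrative tuning budget
)

# Step 3: build the tuned model using the recorded schedules.
runtime_module = ms.relay_integration.compile_relay(
    database=database,
    mod=mod,
    target=target,
    params=params,
)
```

In the failing run, the error surfaces during the tuning step (inside `tune_relay`), before any tuned model is produced.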

Actual behavior

A runtime error was raised while tuning the task "fused_batch_matmul":

E       RuntimeError: parallel_for_dynamic error with [11:05:35] ~/tvm/src/tir/transforms/unify_thread_binding.cc:112: Check failed: (ana.CanProveEqual(dom->extent, new_iter_var->dom->extent)) is false: ValueError: All loops that are bound to `threadIdx.y` should have the same extent. However, there are two loops with extent T.int64(32) and T.int64(16), which are not equal

Environment

Hardware: NVIDIA RTX 3060
OS: Ubuntu 22.04
Commit id: cfe1711
CUDA: 11.8
LLVM: 17

Steps to reproduce

Simply run the official test code located at
~/tvm/tests/python/integration/test_auto_tensorize.py
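The step above can be invoked through pytest from the source tree; the paths assume the standard TVM developer setup with the local build on `PYTHONPATH`:

```shell
# Run the auto-tensorize integration tests (requires a CUDA-enabled
# TVM build at the commit above).
cd ~/tvm
python -m pytest tests/python/integration/test_auto_tensorize.py -v
```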

Triage

  • tune:meta_schedule