iree-org / iree

A retargetable MLIR-based machine learning compiler and runtime toolkit.

Home Page: http://iree.dev/

Compiler crashes for some models when folding TensorSliceOp

maxbartel opened this issue

What happened?

The compiler asserts on some models when trying to run TensorSliceOp::fold.

Error message:

ElementsAttr does not provide iteration facilities for type `mlir::Attribute`, see attribute: dense_resource<__auto.constant_1_4251_192_torch.float32> : tensor<1x4251x192xf32>
invalid `T` for ElementsAttr::getValues
UNREACHABLE executed at /Users/bartel/Documents/neuro-mlir/third-party/iree/third_party/llvm-project/mlir/include/mlir/IR/BuiltinAttributeInterfaces.h:307!
Please report issues to https://github.com/iree-org/iree/issues and include the crash backtrace.
 #0 0x000000011f48c3d8 llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x1143d8)
 #1 0x000000011f48a8d4 llvm::sys::RunSignalHandlers() (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x1128d4)
 #2 0x000000011f48ca60 SignalHandler(int) (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x114a60)
 #3 0x0000000197e17584 (/usr/lib/system/libsystem_platform.dylib+0x180477584)
 #4 0x0000000197de6c20 (/usr/lib/system/libsystem_pthread.dylib+0x180446c20)
 #5 0x0000000197cf3a20 (/usr/lib/system/libsystem_c.dylib+0x180353a20)
 #6 0x000000011f3f7c10 llvm::install_out_of_memory_new_handler() (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x7fc10)
 #7 0x000000011f37fd5c mlir::detail::ElementsAttrRange<mlir::detail::ElementsAttrIterator<mlir::Attribute>>::ElementsAttrRange(mlir::ShapedType, mlir::detail::ElementsAttrIterator<mlir::Attribute>, mlir::detail::ElementsAttrIterator<mlir::Attribute>) (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x7d5c)
 #8 0x000000011f37ba24 std::__1::enable_if<std::is_same<mlir::Attribute, mlir::Attribute>::value || !std::is_base_of<mlir::Attribute, mlir::Attribute>::value, mlir::detail::ElementsAttrRange<mlir::detail::ElementsAttrIterator<mlir::Attribute>>>::type mlir::ElementsAttr::getValues<mlir::Attribute>() const (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x3a24)
 #9 0x000000012262d420 mlir::iree_compiler::IREE::Flow::TensorSliceOp::fold(mlir::iree_compiler::IREE::Flow::TensorSliceOpGenericAdaptor<llvm::ArrayRef<mlir::Attribute>>) (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x32b5420)
#10 0x0000000122623560 mlir::LogicalResult mlir::Op<mlir::iree_compiler::IREE::Flow::TensorSliceOp, mlir::OpTrait::ZeroRegions, mlir::OpTrait::OneResult, mlir::OpTrait::OneTypedResult<mlir::RankedTensorType>::Impl, mlir::OpTrait::ZeroSuccessors, mlir::OpTrait::AtLeastNOperands<1u>::Impl, mlir::OpTrait::AttrSizedOperandSegments, mlir::OpTrait::OpInvariants, mlir::BytecodeOpInterface::Trait, mlir::iree_compiler::IREE::Util::HoistableOpInterface::Trait, mlir::iree_compiler::IREE::Util::ShapeAwareOpInterface::Trait, mlir::ConditionallySpeculatable::Trait, mlir::OpTrait::AlwaysSpeculatableImplTrait, mlir::MemoryEffectOpInterface::Trait>::foldSingleResultHook<mlir::iree_compiler::IREE::Flow::TensorSliceOp>(mlir::Operation*, llvm::ArrayRef<mlir::Attribute>, llvm::SmallVectorImpl<mlir::OpFoldResult>&) (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x32ab560)
#11 0x00000001226228b0 mlir::RegisteredOperationName::Model<mlir::iree_compiler::IREE::Flow::TensorSliceOp>::foldHook(mlir::Operation*, llvm::ArrayRef<mlir::Attribute>, llvm::SmallVectorImpl<mlir::OpFoldResult>&) (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x32aa8b0)
#12 0x000000011f57d29c mlir::Operation::fold(llvm::ArrayRef<mlir::Attribute>, llvm::SmallVectorImpl<mlir::OpFoldResult>&) (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x20529c)
#13 0x000000011f57d604 mlir::Operation::fold(llvm::SmallVectorImpl<mlir::OpFoldResult>&) (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x205604)
#14 0x0000000122b7358c (anonymous namespace)::GreedyPatternRewriteDriver::processWorklist() (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x37fb58c)
#15 0x0000000122b70b64 mlir::applyPatternsAndFoldGreedily(mlir::Region&, mlir::FrozenRewritePatternSet const&, mlir::GreedyRewriteConfig, bool*) (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x37f8b64)
#16 0x0000000120d47790 mlir::iree_compiler::IREE::Flow::(anonymous namespace)::FormDispatchWorkgroupsPass::runOnOperation() (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x19cf790)
#17 0x000000011f60f958 mlir::detail::OpToOpPassAdaptor::run(mlir::Pass*, mlir::Operation*, mlir::AnalysisManager, bool, unsigned int) (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x297958)
#18 0x000000011f60ff64 mlir::detail::OpToOpPassAdaptor::runPipeline(mlir::OpPassManager&, mlir::Operation*, mlir::AnalysisManager, bool, unsigned int, mlir::PassInstrumentor*, mlir::PassInstrumentation::PipelineParentInfo const*) (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x297f64)
#19 0x000000011f61496c std::__1::__function::__func<mlir::LogicalResult mlir::failableParallelForEach<std::__1::__wrap_iter<mlir::detail::OpToOpPassAdaptor::runOnOperationAsyncImpl(bool)::OpPMInfo*>, mlir::detail::OpToOpPassAdaptor::runOnOperationAsyncImpl(bool)::$_15&>(mlir::MLIRContext*, std::__1::__wrap_iter<mlir::detail::OpToOpPassAdaptor::runOnOperationAsyncImpl(bool)::OpPMInfo*>, std::__1::__wrap_iter<mlir::detail::OpToOpPassAdaptor::runOnOperationAsyncImpl(bool)::OpPMInfo*>, mlir::detail::OpToOpPassAdaptor::runOnOperationAsyncImpl(bool)::$_15&)::'lambda'(), std::__1::allocator<mlir::LogicalResult mlir::failableParallelForEach<std::__1::__wrap_iter<mlir::detail::OpToOpPassAdaptor::runOnOperationAsyncImpl(bool)::OpPMInfo*>, mlir::detail::OpToOpPassAdaptor::runOnOperationAsyncImpl(bool)::$_15&>(mlir::MLIRContext*, std::__1::__wrap_iter<mlir::detail::OpToOpPassAdaptor::runOnOperationAsyncImpl(bool)::OpPMInfo*>, std::__1::__wrap_iter<mlir::detail::OpToOpPassAdaptor::runOnOperationAsyncImpl(bool)::OpPMInfo*>, mlir::detail::OpToOpPassAdaptor::runOnOperationAsyncImpl(bool)::$_15&)::'lambda'()>, void ()>::operator()() (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x29c96c)
#20 0x000000011f59dd58 std::__1::__deferred_assoc_state<void, std::__1::__async_func<std::__1::function<void ()>>>::__execute() (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0x225d58)
#21 0x0000000197d10548 (/usr/lib/libc++.1.dylib+0x180370548)
#22 0x000000011f43c2b0 llvm::StdThreadPool::processTasks(llvm::ThreadPoolTaskGroup*) (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0xc42b0)
#23 0x000000011f43db6c void* llvm::thread::ThreadProxy<std::__1::tuple<llvm::StdThreadPool::grow(int)::$_0>>(void*) (/Users/bartel/Documents/neuro-mlir/build_iree/lib/libIREECompiler.dylib+0xc5b6c)
#24 0x0000000197de6f94 (/usr/lib/system/libsystem_pthread.dylib+0x180446f94)
#25 0x0000000197de1d34 (/usr/lib/system/libsystem_pthread.dylib+0x180441d34)
[1]    73162 abort      build_iree/tools/iree-compile --iree-input-type=auto   --mlir-print-debuginfo

I am pretty sure this is a new regression, because I compiled that model successfully before.

Steps to reproduce your issue

Link to an example model (hustvl/yolos-tiny from HuggingFace through iree-turbine already converted to mlir including a reproducer) https://rooflineai-my.sharepoint.com/:u:/g/personal/bartel_roofline_ai/EfLRVhT_0StBtQP4DJzoqrwB5Flzzsd8xNXnXdv0diZflg?e=9q6zjg

Pipeline:
build_iree/tools/iree-compile --iree-input-type=auto --iree-vm-bytecode-module-output-format=flatbuffer-binary --iree-hal-target-backends=llvm-cpu --mlir-print-debuginfo --mlir-print-op-on-diagnostic=false --mlir-pass-pipeline-crash-reproducer=/home/bartel/Documents/neuro-mlir/temp/core-reproducer.mlir --iree-input-type=torch --iree-opt-data-tiling=true --iree-llvmcpu-target-triple=aarch64-linux-gnu '--iree-preprocessing-pass-pipeline=builtin.module(util.func(iree-preprocessing-convert-conv2d-to-img2col))' --iree-llvmcpu-enable-ukernels=all temp/core-reproducer.mlir

What component(s) does this issue relate to?

No response

Version information

commit 3803de50d93eac83328005962fe441c2d610bb2e from Jun 3

Additional context

No response

The folder has been busted for ~12 months, it seems, as it was never updated to support resource attrs. You may be hitting it now because something is folding a constant that then feeds this folder, or because you're now using resource attrs. Would be good to fix.

@jpienaar is this something we could make work with resource attrs, or should we just check that operands.getSource() is a DenseElementsAttr before trying to fold?

OpFoldResult TensorSliceOp::fold(FoldAdaptor operands) {
  if (llvm::count(operands.getOperands(), nullptr) == 0) {
    // Fully constant arguments so we can perform the slice here.
    auto tensor = llvm::cast<ElementsAttr>(operands.getSource());
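A minimal sketch of the guard suggested above, assuming the rest of the fold body stays unchanged. This is illustrative, not the actual IREE patch; the early-exit shape and comments are assumptions, and the elided slice logic is left as-is:

```cpp
// Hypothetical guard sketch: bail out of the fold when the source constant is
// not a DenseElementsAttr. Resource-backed constants (dense_resource<...>) do
// not support per-element iteration via ElementsAttr::getValues<Attribute>(),
// which is what trips the UNREACHABLE in the backtrace above.
OpFoldResult TensorSliceOp::fold(FoldAdaptor operands) {
  if (llvm::count(operands.getOperands(), nullptr) == 0) {
    // Fully constant arguments so we can perform the slice here -- but only
    // for attribute kinds that actually provide element iteration.
    auto tensor =
        llvm::dyn_cast_if_present<DenseElementsAttr>(operands.getSource());
    if (!tensor)
      return {};  // Skip folding resource attrs instead of crashing.
    // ... perform the slice on `tensor` as before ...
  }
  return {};
}
```

The alternative raised in the comment, actually making the fold work with resource attrs, would require reading the elements out of the resource blob rather than relying on `getValues<Attribute>()`.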