microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

Home Page: https://onnxruntime.ai


[Mobile] Subgraphs duplicate initializers in RAM during execution

niedev opened this issue

Describe the issue

I created a model that is composed of an If node that, based on a Boolean input, executes one of two subgraphs. Both subgraphs use the same weight matrix (about 1 GB; it feeds a Gather node in one subgraph and a Transpose node in the other), which is stored in the initializers of the parent model, so there is no duplication of the matrix on disk (the whole model weighs about 1 GB). When running on Android, however, the model consumes 2 GB of RAM instead of 1 GB, most likely because the matrix shared by the two subgraphs is duplicated in memory. Is this a bug or expected behavior? And if it is expected, is there a way to avoid it?

This is the structure of the model:
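
For readers who cannot see the image, here is a minimal sketch of the same layout built with the Python onnx helper API (shapes, names and the branch contents are illustrative placeholders, not the real export script):

# Minimal sketch of the model layout: an If node whose two branches both read the
# same weight from the parent graph's initializer list, so it is stored only once.
import numpy as np
from onnx import TensorProto, helper, numpy_helper

# Single copy of the shared weight in the parent graph's initializers
# (the real matrix is roughly [vocab, hidden], about 1 GB; tiny shape here).
shared = numpy_helper.from_array(
    np.zeros((16, 8), dtype=np.float32), name="model.shared.weight")

# "then" branch (embed): Gather rows of the shared weight by token id.
embed_graph = helper.make_graph(
    [helper.make_node("Gather", ["model.shared.weight", "input_ids"], ["embed_out"])],
    "embed_branch", [],
    [helper.make_tensor_value_info("embed_out", TensorProto.FLOAT, None)])

# "else" branch (lm_head): MatMul with the transposed shared weight.
lm_head_graph = helper.make_graph(
    [helper.make_node("Transpose", ["model.shared.weight"], ["shared_T"]),
     helper.make_node("MatMul", ["pre_logits", "shared_T"], ["logits"])],
    "lm_head_branch", [],
    [helper.make_tensor_value_info("logits", TensorProto.FLOAT, None)])

# Both branches capture "model.shared.weight" from the outer scope.
if_node = helper.make_node("If", ["use_embed"], ["output"],
                           then_branch=embed_graph, else_branch=lm_head_graph)

graph = helper.make_graph(
    [if_node], "embed_and_lm_head",
    [helper.make_tensor_value_info("use_embed", TensorProto.BOOL, []),
     helper.make_tensor_value_info("input_ids", TensorProto.INT64, ["seq"]),
     helper.make_tensor_value_info("pre_logits", TensorProto.FLOAT, ["seq", 8])],
    [helper.make_tensor_value_info("output", TensorProto.FLOAT, None)],
    initializer=[shared])

model = helper.make_model(graph)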

To reproduce

Here is the code used for loading the session in Android:

import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtSession;

// onnxEnv, embedAndLmHeadSession and embedAndLmHeadPath are fields declared elsewhere.
onnxEnv = OrtEnvironment.getEnvironment();
OrtSession.SessionOptions embedAndLmHeadOptions = new OrtSession.SessionOptions();
// Memory pattern optimization and the CPU arena allocator are disabled to limit RAM usage.
embedAndLmHeadOptions.setMemoryPatternOptimization(false);
embedAndLmHeadOptions.setCPUArenaAllocator(false);
embedAndLmHeadSession = onnxEnv.createSession(embedAndLmHeadPath, embedAndLmHeadOptions);

The model is saved here: https://github.com/niedev/testModel/releases/download/testModel/nllb_embed_and_lm_head1.onnx

Urgency

Not so urgent

Platform

Android

OS Version

14

ONNX Runtime Installation

Released Package

Compiler Version (if 'Built from Source')

No response

Package Name (if 'Released Package')

onnxruntime-android

ONNX Runtime Version or Commit ID

1.17.3

ONNX Runtime API

Java/Kotlin

Architecture

ARM64

Execution Provider

Default CPU

Execution Provider Library Version

No response

I found the problem: when onnxruntime applies the basic optimizations to the lm_head subgraph, it transposes the weight matrix (called model.shared.weight) and saves the result as an additional initializer, eliminating the Transpose node. (If I disable the basic optimizations, the transpose is simply performed on model.shared.weight at execution time, duplicating it anyway, so besides consuming 1 GB more it also slows down execution.)

So I applied the matrix-transpose identity (A·B)^T = B^T·A^T to the lm_head: instead of transposing model.shared.weight, I transpose the other matrix that is multiplied with it (pre_logits) and the final result, and I swap the order of the two MatMul inputs. This yields an equivalent MatMul without ever transposing model.shared.weight (which is much larger than the two tensors that are now transposed). With this change I managed to reduce RAM consumption by 1 GB, but the problem is that the execution time increases.
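
Concretely, the rewrite relies on the fact that pre_logits · W^T can be computed as (W · pre_logits^T)^T. A tiny numpy check of the equivalence (illustrative shapes only, not the real model's):

# The large shared weight W is never transposed or copied in the rewritten form.
import numpy as np

rng = np.random.default_rng(0)
pre_logits = rng.standard_normal((3, 8)).astype(np.float32)  # [seq, hidden]
W = rng.standard_normal((16, 8)).astype(np.float32)          # [vocab, hidden] shared weight

old_logits = pre_logits @ W.T      # original lm_head: MatMul(pre_logits, Transpose(W))
new_logits = (W @ pre_logits.T).T  # rewritten: Transpose(MatMul(W, Transpose(pre_logits)))

assert np.allclose(old_logits, new_logits, atol=1e-5)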

This is the updated model: https://github.com/niedev/testModel/releases/download/testModel_2.0/nllb_embed_and_lm_head_if3.onnx

I used the ONNX profiler on the new model (without optimizations) to understand which node causes the slowdown. The two added Transposes execute practically instantly; the node that takes longer than before is the lm_head MatMul (it goes from 36 ms to 50 ms), but I can't understand why, given that the multiplication is practically equivalent.
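
For reference, a minimal sketch of how profiling JSONs like the ones linked below can be produced, assuming the onnxruntime Python API is used (the model path is a placeholder):

# Sketch: enable per-node profiling and dump the timing JSON.
import onnxruntime as ort

so = ort.SessionOptions()
so.enable_profiling = True
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_DISABLE_ALL  # optimizations off

sess = ort.InferenceSession("nllb_embed_and_lm_head_if3.onnx", so)
# ... run the model as usual with sess.run(...) ...
profile_path = sess.end_profiling()  # path of the generated per-node timing JSON
print(profile_path)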

This is the profiling result of the old model: https://github.com/niedev/testModel/blob/main/embed_and_lmhead_log_2024-05-19_old.json

This is the profiling result of the new model: https://github.com/niedev/testModel/blob/main/embed_and_lmhead_log_2024-05-19_new.json

The model I'm working on is the result of extracting the components that perform the embed and lm_head of NLLB, so if you could solve the MatMul problem (if it is solvable) and integrate this modification into onnxruntime's basic optimization pass, it would bring a significant reduction in RAM consumption for NLLB (and for other Transformers that share the embed and lm_head matrix).