tensorflow / tflite-micro

Infrastructure to enable deployment of ML models to low-power resource-constrained embedded targets (including microcontrollers and digital signal processors).


On-Device Training with TensorFlow Lite for Microcontrollers (TFLM)

Black3rror opened this issue · comments

commented

It's quite unexpected that TensorFlow Lite for Microcontrollers (TFLM) has not yet introduced the capability for on-device training.

It is true that on-device training significantly increases memory requirements (although there are approaches to mitigate this), but in many cases the hardware can tolerate the additional demand. Meanwhile, training models on end devices can be very useful in many projects and products, making it an important feature for many companies.

I'm curious to know why TFLM is not considering adding this feature to its toolbox. Is there any theoretical problem stopping you from going in that direction or any practical difficulties?

"This issue is being marked as stale due to inactivity. Remove label or comment to prevent closure in 5 days."

commented

Still waiting ...
Any answer?

As far as I am aware, there are no fundamental blockers here beyond the fact that training needs considerably more memory and that performance would be really poor. It would also be a very large undertaking (many more ops to support, flex delegate / TF ops).
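To make the memory point concrete, here is a minimal sketch (not TFLM code; all names are hypothetical) of one SGD step for a single dense layer. Even this toy case shows that training needs gradient buffers matching the parameter sizes, plus the forward-pass input kept alive for backprop, costs that pure inference never pays, and that real optimizers (momentum, Adam) multiply further:

```cpp
// Toy dense layer y = Wx + b with a hand-written SGD step, illustrating
// the extra RAM on-device training requires beyond inference.
constexpr int kIn = 4, kOut = 2;

// Inference needs only the parameters (plus activation scratch).
float W[kOut][kIn] = {{0.1f, 0.2f, 0.3f, 0.4f},
                      {0.5f, 0.6f, 0.7f, 0.8f}};
float b[kOut] = {0.0f, 0.0f};

// Training additionally needs gradient buffers of the same shape,
// roughly doubling parameter memory before any optimizer state.
float dW[kOut][kIn];
float db[kOut];

void forward(const float* x, float* y) {
  for (int o = 0; o < kOut; ++o) {
    y[o] = b[o];
    for (int i = 0; i < kIn; ++i) y[o] += W[o][i] * x[i];
  }
}

// Backward pass for a mean-squared-error loss. Note that the input x
// from the forward pass must still be available here -- saved
// activations are another memory cost unique to training.
void sgd_step(const float* x, const float* y,
              const float* target, float lr) {
  for (int o = 0; o < kOut; ++o) {
    float dy = 2.0f * (y[o] - target[o]) / kOut;  // dLoss/dy
    db[o] = dy;
    for (int i = 0; i < kIn; ++i) dW[o][i] = dy * x[i];
  }
  // Apply the gradients.
  for (int o = 0; o < kOut; ++o) {
    b[o] -= lr * db[o];
    for (int i = 0; i < kIn; ++i) W[o][i] -= lr * dW[o][i];
  }
}
```

Scaling this up to a full model also means shipping backward kernels for every op, which is a large part of why the undertaking is so big.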