microsoft / nn-Meter

A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices.

in the next two rounds, the accuracy rate drops

SaltFish11 opened this issue · comments

Hi, I am using nn-Meter to train a dwconv-bn-relu predictor on a custom backend with the configuration in the image below. In the first round of training, the accuracy reaches about 90%, but in the next two rounds it drops. Have you ever encountered this situation, and how can it be solved?
[image: training configuration]

In addition, when the number of initial sampling points is set to 5000, I noticed that some of the collected samples are duplicated. Do I need to preprocess the data in this situation?

Hi, thanks for raising this issue! The objective of adaptive data sampling is to achieve a high level of accuracy in predicting latency. If satisfactory accuracy can be attained in the initial round, subsequent rounds become unnecessary.

Sorry, maybe I didn't express the question clearly. Let me summarize:

  1. For dwconv-bn-relu, using the official parameters I can only reach about 90% accuracy in the first round (whereas the paper reports 97%), which I think is far from enough. Then in each subsequent round the accuracy drops further. In this case, is it necessary to re-tune the random forest hyperparameters for my own backend?
  2. A large number of the sampled data points are duplicates. Is it necessary to deduplicate them?
  3. Is any other data preprocessing necessary for the sampled data?

One more thing: is your office located in Zhongguancun, Beijing? May I invite you to a meal to discuss nn-Meter?