nanodet model converted from pytorch to coreml problem
minushuang opened this issue · comments
Hi, I converted an object detection model from PyTorch to Core ML with the following code:
```python
def main(config, model_path, output_path, input_shape=(320, 320)):
    logger = Logger(-1, config.save_dir, False)
    model = build_model(config.model)
    checkpoint = torch.load(model_path, map_location=lambda storage, loc: storage)
    load_model_weight(model, checkpoint, logger)
    dummy_input = torch.autograd.Variable(
        torch.randn(1, 3, input_shape[0], input_shape[1])
    )
    traced_model = torch.jit.trace(model, dummy_input)
    logging.info("convert coreml start.")
    core_model = ct.convert(
        traced_model,
        inputs=[ct.ImageType(shape=dummy_input.shape, name='input', scale=0.017429,
                             bias=(-103.53 * 0.017429, -116.28 * 0.017507, -123.675 * 0.017125))],
        outputs=[ct.TensorType(name="output")],
        debug=True
    )
    core_model.save(output_path)
    logging.info("finish convert coreml.")
```
The inference code, with NMS post-processing:

```python
image = Image.open(img_path).resize((320, 320)).convert('RGB')
model = ct.models.MLModel(mlmodel_path)
preds = model.predict({'input': image})
# post-process preds with NMS
```
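The NMS step above is elided in the repo snippet. For reference, a minimal pure-Python sketch of greedy non-maximum suppression (not the nanodet implementation; box format and threshold are assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop
    any remaining box whose IoU with it exceeds the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

For example, two heavily overlapping boxes and one distant box reduce to the top-scoring overlap plus the distant box.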
The Core ML results do not match the PyTorch results, as shown below:

(screenshot: the PyTorch results)

(screenshot: the Core ML results)
Could you please give me some advice on how to get a correct Core ML model? It would be even better if you could convert the model for me, if convenient. Many thanks.

All the model files and code are now at https://github.com/minushuang/nanodet-for-coreml. Please let me know if I can provide anything further to help reproduce the problem.
Sorry, I made a stupid mistake: I forgot to set the model to eval mode before converting. It works now after calling `model.eval()`.
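For anyone hitting the same issue: layers such as BatchNorm and Dropout behave differently in train and eval mode, so a model traced in training mode bakes the wrong behavior into the exported graph. A minimal sketch of the fix (toy model, shapes chosen to match the 320×320 input above):

```python
import torch
import torch.nn as nn

# Toy model containing BatchNorm, one of the layers whose behavior
# differs between train mode (batch stats) and eval mode (running stats).
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8))

model.eval()  # freeze BatchNorm/Dropout behavior before tracing
dummy_input = torch.randn(1, 3, 320, 320)
with torch.no_grad():
    traced = torch.jit.trace(model, dummy_input)
# traced can now be passed to ct.convert(...) as in the script above
```

The key point is that `model.eval()` must run before `torch.jit.trace`, since the trace records the mode the model was in at trace time.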