dk-liang / FIDTM

[IEEE TMM] Focal Inverse Distance Transform Maps for Crowd Localization

Training set image size

neversettle-tech opened this issue · comments

If I use my own private dataset for training, what are the requirements on image size?
I saw that fidt_generate_xx.py handles image sizes differently for the different datasets.

It is better to limit the longer side of the training images to no more than 2048 pixels.
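For a private dataset, one way to enforce this is to pre-resize each image (and scale its head annotations by the same factor) before generating the FIDT maps. The snippet below is only a sketch of that idea, not code from this repository; the function name, the cv2/numpy usage, and the assumed (x, y) annotation format are my own choices.

import cv2
import numpy as np

def cap_longer_side(img, points, max_side=2048):
    """Resize img so its longer side is at most max_side and scale points to match.

    img: H x W x 3 array; points: N x 2 array of (x, y) head coordinates (assumed format).
    """
    h, w = img.shape[:2]
    scale = max_side / float(max(h, w))
    if scale >= 1.0:
        # Image is already within the limit; leave it and the annotations untouched.
        return img, points
    img = cv2.resize(img, (int(w * scale), int(h * scale)))
    points = np.asarray(points, dtype=np.float32) * scale
    return img, points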

There is a note "no more than 2048" mentioned in the paper.

Hi, may I ask: with just the pretrained model, were you able to get the demo of this code running? I have been stuck on it and can't figure it out, it's driving me crazy.

What error are you getting? As far as I know, people have run it successfully and used it on their own datasets for projects.

My steps for running the demo:
1. I downloaded the pretrained model.
2. I ran python video_demo.py --pre model_best.pth --video_path demo.mp4, and it always gets stuck at this point. I hope you can give me some pointers!


Try commenting out the following part of seg_hrnet.py and see if that helps:
if train == True:
    if os.path.isfile(pretrained):
        pretrained_dict = torch.load(pretrained)
        logger.info('=> loading pretrained model {}'.format(pretrained))
        model_dict = self.state_dict()
        pretrained_dict = {k: v for k, v in pretrained_dict.items()
                           if k in model_dict.keys()}
        model_dict.update(pretrained_dict)
        self.load_state_dict(model_dict)

        print("load ImageNet pre_trained parameters for HR_Net")
    else:
        print('please check HRNET ImageNet pretrained model, the path ' + pretrained + ' is wrong')
        exit()
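If you prefer not to comment the block out, a gentler variant (my own sketch, not the author's fix) is to make a missing ImageNet checkpoint non-fatal: the demo loads model_best.pth afterwards, so the backbone weights are overwritten regardless of whether the ImageNet initialization ran. The helper name and standalone form below are assumptions for illustration.

import os
import torch

def load_imagenet_backbone(model, pretrained):
    # Copy the parameters that exist in both state dicts; warn instead of exiting
    # when the ImageNet HRNet checkpoint is missing.
    if not os.path.isfile(pretrained):
        print('HRNet ImageNet checkpoint not found at {}; skipping ImageNet '
              'initialization'.format(pretrained))
        return
    pretrained_dict = torch.load(pretrained, map_location='cpu')
    model_dict = model.state_dict()
    pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
    model_dict.update(pretrained_dict)
    model.load_state_dict(model_dict)
    print('loaded ImageNet pretrained parameters for HRNet')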