researchmm / STTN

[ECCV'2020] STTN: Learning Joint Spatial-Temporal Transformations for Video Inpainting

Home Page: https://arxiv.org/abs/2007.10247


DAVIS2018 does not exist

phananh1010 opened this issue · comments

Hello,

I am looking for DAVIS-2018 at your link, but none of the datasets there have zip files. Currently, your code does not work for the DAVIS datasets.

Is it because they removed the DAVIS-2018 dataset, or is it because the link is incorrect? Either way, could you please check?

Thanks,
Anh

commented

Hi, there,

I've written a short Python script to pack the youtube-vos folders into zips. You may refer to it for processing the DAVIS dataset.

import os
import zipfile

def zipDir(dirpath, outFullName):
    # Zip every file under dirpath into the archive at outFullName,
    # storing paths relative to dirpath.
    zipname = zipfile.ZipFile(outFullName, 'w', zipfile.ZIP_DEFLATED)
    for path, dirnames, filenames in os.walk(dirpath):
        fpath = path.replace(dirpath, '')
        for filename in filenames:
            zipname.write(os.path.join(path, filename), os.path.join(fpath, filename))
    zipname.close()

if __name__ == "__main__":
    # Walk the extracted youtube-vos frame folders and create one zip per video.
    g = os.walk('/home/yz/Downloads/STTN/datasets/youtube-vos/train_all_frames/JPEGImages')
    for path, dir_list, file_list in g:
        for dir_name in dir_list:
            input_path = os.path.join(path, dir_name)
            output_path = '/home/yz/Downloads/STTN/datasets/youtube-vos/JPEGImages/' + dir_name + '.zip'
            print(input_path, '\n', output_path)
            zipDir(input_path, output_path)
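
If it helps, here is a minimal sketch of the same idea adapted to the usual DAVIS folder layout (JPEGImages/480p/<video_name>/00000.jpg). The paths below are placeholders and the exact directory names your STTN config expects may differ, so adjust them to your setup:

import os
import zipfile

# Placeholder paths -- point these at your extracted DAVIS frames and at the
# folder your STTN config reads the per-video zips from.
DAVIS_FRAMES = '/home/yz/Downloads/STTN/datasets/davis/JPEGImages/480p'
OUTPUT_DIR = '/home/yz/Downloads/STTN/datasets/davis/JPEGImages'

def zip_video_folder(src_dir, dst_zip):
    # Zip all frames of one video, storing them at the root of the archive.
    with zipfile.ZipFile(dst_zip, 'w', zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(src_dir)):
            zf.write(os.path.join(src_dir, name), name)

if __name__ == '__main__':
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    for video in sorted(os.listdir(DAVIS_FRAMES)):
        src = os.path.join(DAVIS_FRAMES, video)
        if os.path.isdir(src):
            zip_video_folder(src, os.path.join(OUTPUT_DIR, video + '.zip'))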

I think your script needs improvement: the frame numbers of some videos in youtube-vos don't seem to start from 1, so it may fail when used with dataset.py in STTN. A possible workaround is sketched below.
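
For what it's worth, one possible workaround is to renumber the frames of each video before zipping. This is only a sketch, assuming dataset.py expects each video's frames to form a contiguous, zero-based sequence (00000.jpg, 00001.jpg, ...); the root path is a placeholder, and the renaming happens in place, so try it on a copy first:

import os

def renumber_frames(video_dir):
    # Rename one video's frames to a contiguous, zero-based, zero-padded sequence.
    # Renaming in ascending order is collision-free as long as the original names
    # are zero-padded frame numbers, since each new index is <= the original number.
    frames = sorted(f for f in os.listdir(video_dir) if f.endswith('.jpg'))
    for idx, name in enumerate(frames):
        new_name = '{:05d}.jpg'.format(idx)
        if name != new_name:
            os.rename(os.path.join(video_dir, name), os.path.join(video_dir, new_name))

if __name__ == '__main__':
    # Placeholder path -- point it at the extracted frame folders (one folder per video).
    root = '/home/yz/Downloads/STTN/datasets/youtube-vos/train_all_frames/JPEGImages'
    for video in sorted(os.listdir(root)):
        video_dir = os.path.join(root, video)
        if os.path.isdir(video_dir):
            renumber_frames(video_dir)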

commented

You are right. Thanks for pointing this out. :)

Hello, do you know how to get the DAVIS2018 dataset?