andrewowens / multisensory

Code for the paper: Audio-Visual Scene Analysis with Self-Supervised Multisensory Features

Home Page: http://andrewowens.com/multisensory/

TypeError: convolution() got multiple values for argument 'weights_regularizer'

chouqin3 opened this issue · comments

chouqin3 commented

I got the error below. What happened? Please help me fix it.
Traceback (most recent call last):
File "D:/Workspace/PythonProjects/studyProjects/multisensory/src/sep_video.py", line 398, in
ret = run(arg.vid_file, t, arg.clip_dur, pr, gpus[0], mask = arg.mask, arg = arg, net = net)
File "D:/Workspace/PythonProjects/studyProjects/multisensory/src/sep_video.py", line 294, in run
net.init()
File "D:/Workspace/PythonProjects/studyProjects/multisensory/src/sep_video.py", line 42, in init
pr, reuse = False, train = False)
File "D:\Workspace\PythonProjects\studyProjects\multisensory\src\sourcesep.py", line 953, in make_net
vid_net_full = shift_net.make_net(ims, sfs, pr, None, reuse, train)
File "D:\Workspace\PythonProjects\studyProjects\multisensory\src\shift_net.py", line 419, in make_net
sf_net = conv2d(sf_net,num_outputs= 64, kernel_size= [65, 1], scope = 'sf/conv1_1', stride = [4, 1], padding='SAME', reuse = reuse) # by lg 8.20
File "C:\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\contrib\framework\python\ops\arg_scope.py", line 183, in func_with_args
return func(*args, **current_args)
File "C:\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\contrib\layers\python\layers\layers.py", line 1154, in convolution2d
conv_dims=2)
File "C:\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\contrib\framework\python\ops\arg_scope.py", line 183, in func_with_args
return func(*args, **current_args)
TypeError: convolution() got multiple values for argument 'weights_regularizer'

Try installing TensorFlow 1.8, e.g. with "pip install tensorflow-gpu==1.8". This code broke in newer versions of TensorFlow.
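
For a quick sanity check before re-running, something like the following should confirm which TensorFlow build is active (a minimal sketch; the 1.8.x prefix test is just my restatement of the pin above, not code from the repo):

```python
# Quick environment check (a sketch mirroring the suggested pin above;
# the 1.8.x prefix test is an assumption, not part of the repo).
import tensorflow as tf

if not tf.__version__.startswith('1.8'):
    raise RuntimeError(
        'Expected TensorFlow 1.8.x, found %s; '
        'try: pip install tensorflow-gpu==1.8' % tf.__version__)
print('TensorFlow %s should be compatible.' % tf.__version__)
```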

chouqin3 commented

Thank you for your response. I did as you said and that fixed the problem. However, another error appears, shown below:
Traceback (most recent call last):
File "D:/Workspace/PythonProjects/studyProjects/multisensory/src/sep_video.py", line 450, in
ig.show(table)
File "D:\Workspace\PythonProjects\studyProjects\multisensory\src\aolib\img.py", line 13, in show
return imtable.show(*args, **kwargs)
File "D:\Workspace\PythonProjects\studyProjects\multisensory\src\aolib\imtable.py", line 72, in show_table
html_rows = html_from_rows(table, output_dir)
File "D:\Workspace\PythonProjects\studyProjects\multisensory\src\aolib\imtable.py", line 413, in html_from_rows
html_rows.append("" + "".join(html_from_cell(x, output_dir) for x in row))
File "D:\Workspace\PythonProjects\studyProjects\multisensory\src\aolib\imtable.py", line 413, in
html_rows.append("" + "".join(html_from_cell(x, output_dir) for x in row))
File "D:\Workspace\PythonProjects\studyProjects\multisensory\src\aolib\imtable.py", line 308, in html_from_cell
return x.make_html(output_dir)
File "D:\Workspace\PythonProjects\studyProjects\multisensory\src\aolib\imtable.py", line 587, in make_html
make_video(fname, self.ims, self.fps, sound = self.sound)
File "D:\Workspace\PythonProjects\studyProjects\multisensory\src\aolib\imtable.py", line 498, in make_video
[(i, x, in_dir, tmp_ext) for i, x in enumerate(ims)])
File "D:\Workspace\PythonProjects\studyProjects\multisensory\src\aolib\util.py", line 2725, in parmap
ret = pool.map_async(f, xs).get(10000000)
File "C:\Anaconda3\envs\tensorflow-gpu\lib\multiprocessing\pool.py", line 638, in get
self.wait(timeout)
File "C:\Anaconda3\envs\tensorflow-gpu\lib\multiprocessing\pool.py", line 635, in wait
self._event.wait(timeout)
File "C:\Anaconda3\envs\tensorflow-gpu\lib\threading.py", line 551, in wait
signaled = self._cond.wait(timeout)
File "C:\Anaconda3\envs\tensorflow-gpu\lib\threading.py", line 299, in wait
gotit = waiter.acquire(True, timeout)
OverflowError: timeout value is too large

This might be a Windows compatibility issue. Try decreasing the number in the line: "pool.map_async(f, xs).get(10000000)" in util.py -- say, to 100000.
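
For reference, the failure reproduces outside the repo: on Windows, a lock can only be acquired with a timeout up to threading.TIMEOUT_MAX (roughly 4.29 million seconds), so .get(10000000) overflows. Here is a self-contained sketch (a hypothetical standalone example; in the repo the call sits inside parmap() in src/aolib/util.py):

```python
# Standalone reproduction of the OverflowError (hypothetical example;
# in the repo the call lives inside parmap() in src/aolib/util.py).
import multiprocessing

def square(x):
    return x * x

if __name__ == '__main__':
    with multiprocessing.Pool(2) as pool:
        # Before: pool.map_async(square, range(10)).get(10000000)
        #   -> OverflowError on Windows, since 10000000 s exceeds
        #      threading.TIMEOUT_MAX there (~4.29e6 s).
        # After: 100000 s (~28 hours) stays under the cap and is
        # effectively unbounded for this workload.
        print(pool.map_async(square, range(10)).get(100000))
```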

chouqin3 commented

Thank you for your response again. It really was a compatibility problem.

I've now (hopefully) fixed both of these compatibility issues. Please let me know if you still have problems.