cap-ntu / autocomplete

code completion using neural network

TypeError: __init__() missing 1 required positional argument: 'units'

huangyz0918 opened this issue

This error happens in the TensorFlow backend, and I have seen it many times.
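For context, this error usually means an LSTM layer was constructed without its required units argument, which can happen when a saved model's layer config is deserialized by a mismatched Keras/tf.keras version. A minimal sketch that triggers the same message (assuming tf.keras; the exact wording may vary between versions):

import tensorflow as tf

# Constructing an LSTM without its mandatory `units` argument raises the same
# TypeError as the one reported above.
try:
    tf.keras.layers.LSTM()
except TypeError as e:
    print(e)  # __init__() missing 1 required positional argument: 'units'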

How to reproduce:

TensorFlow version: 1.14.0
Steps:

  1. Start the TensorFlow server.
  2. Start the web application.
  3. Choose the Keras model (the model was tested and has no issues in ./web).
  4. Type some words and press Tab to trigger completion.
  5. The auto-complete works and gives the expected results.
  6. Type words again; a backend error occurs.

The auto-complete works well the first time, but the second time there is an error like the one above.
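To narrow down whether the failure comes from the model or from the web/server layer, one could load the model once outside the web app and call predict() twice. A small sketch, where the model path and input shape are placeholders rather than the project's real values:

import numpy as np
from tensorflow.keras.models import load_model

model = load_model("model.h5")   # hypothetical path to the trained LSTM model
dummy = np.zeros((1, 40, 57))    # hypothetical (batch, timesteps, vocab) input

for i in range(2):
    out = model.predict(dummy)
    print("prediction", i + 1, "ok, shape:", out.shape)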

I think the new UI should use exactly the same API calls to the backend.

However, in the backend, I modified

from keras.layers import Activation, Dense, LSTM
from keras.models import Sequential, load_model
from keras.optimizers import RMSprop

to

from tensorflow.keras.layers import Activation, Dense, LSTM
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.optimizers import RMSprop

since there is an error:

RuntimeError: It looks like you are trying to use a version of multi-backend Keras that does not support TensorFlow 2.0. We recommend using `tf.keras`, or alternatively, downgrading to TensorFlow 1.14.

If we are using tf 1.14 in this project, you can revert this.
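One possible compromise, sketched here rather than taken from the repo, is to branch on the installed TensorFlow version so the backend runs under both TF 1.14 with multi-backend Keras and TF 2.x with tf.keras:

import tensorflow as tf

if tf.__version__.startswith("2."):
    # TF 2.x: multi-backend Keras is unsupported, so use tf.keras.
    from tensorflow.keras.layers import Activation, Dense, LSTM
    from tensorflow.keras.models import Sequential, load_model
    from tensorflow.keras.optimizers import RMSprop
else:
    # TF 1.14: keep the original multi-backend Keras imports.
    from keras.layers import Activation, Dense, LSTM
    from keras.models import Sequential, load_model
    from keras.optimizers import RMSprop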

However, in the backend, I modified ...

I think this change is safe, since I have used it successfully before.

The auto-complete works well the first time, but the second time there is an error like the one above.

Can you also try sending the GET request in the browser twice to test this behavior? I think it's a backend problem, but I changed almost nothing in it except the imports above.

Yes, I have tested it several times and confirmed that we have this issue.

[Screenshots: the first prediction and the second prediction, 2020-03-23 17:00]

I'm confused, because we don't have this issue in the old version (we can predict repeatedly).

Can you also try sending the GET request in the browser twice to test this behavior?

OK, I'll take a look and check whether there are any issues in the backend.
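For reference, the same double-request test can be scripted instead of done in the browser; the endpoint path and parameter name below are assumptions, not the project's documented API:

import requests

url = "http://localhost:9078/complete"   # hypothetical completion endpoint
for i in range(2):
    resp = requests.get(url, params={"text": "import numpy as"})
    print("request", i + 1, "status:", resp.status_code)
    print(resp.text[:200])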

I have an online demo at http://34.82.114.191:9078/.
It is also configured with TF 2.1.0.
It seems that no such errors occur with "500_token".

Closed, since testing showed this only happens with TensorFlow 1.14.0.