Skuldur / Classical-Piano-Composer


Predicting the same note

satashree27 opened this issue · comments

err

@Skuldur
The network seems to be predicting the same note index even after 500 iterations. What could be the error?

Hi,

This is a problem caused by the network failing to converge. The model currently in the repository is very sensitive and is prone to collapsing onto a single note.

You can replace it with this similar model:

    from keras.models import Sequential
    from keras.layers import LSTM, Dense, Dropout, Activation, BatchNormalization

    # network_input and n_vocab come from the existing data-preparation step:
    # network_input has shape (num_sequences, sequence_length, 1) and
    # n_vocab is the number of distinct notes/chords in the training data.
    model = Sequential()
    model.add(LSTM(
        512,
        input_shape=(network_input.shape[1], network_input.shape[2]),
        recurrent_dropout=0.3,
        return_sequences=True
    ))
    model.add(LSTM(512, return_sequences=True, recurrent_dropout=0.3))
    model.add(LSTM(512))
    model.add(BatchNormalization())
    model.add(Dropout(0.3))
    model.add(Dense(256))
    model.add(Activation('relu'))
    model.add(BatchNormalization())
    model.add(Dropout(0.3))
    model.add(Dense(n_vocab))
    model.add(Activation('softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
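
In case it's useful, training this replacement model can stay the same as before; a minimal sketch is below. The checkpoint filename, epoch count, and batch size are placeholders rather than required values, and network_input / network_output come from the existing data-preparation code:

    from keras.callbacks import ModelCheckpoint

    # Save weights whenever the training loss improves, so a good epoch
    # isn't lost if training is interrupted (the filepath is just an example).
    checkpoint = ModelCheckpoint(
        'weights-{epoch:02d}-{loss:.4f}.hdf5',
        monitor='loss',
        save_best_only=True,
        mode='min'
    )

    model.fit(network_input, network_output, epochs=200, batch_size=64,
              callbacks=[checkpoint])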

The differences are:

  • We're using recurrent_dropout instead of regular dropout for the LSTM layers, because inserting regular dropout between recurrent layers such as LSTM can actually hurt the model's performance.
  • Added BatchNormalization, which normalizes the outputs of the layers and helps the model converge. In my experience it almost always prevents the model from collapsing onto a single note (a quick way to check for that is sketched after this list).
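
To check whether the model has collapsed, you can sample a batch of predictions and count the distinct note indices it produces. This is only a sketch; prediction_input is assumed to be a batch of prepared sequences shaped like network_input:

    import numpy as np

    # prediction_input: a batch of prepared input sequences, shaped like network_input.
    # If the model has collapsed, nearly every argmax will be the same index.
    predictions = model.predict(prediction_input)
    predicted_indices = np.argmax(predictions, axis=1)
    print('Distinct notes predicted:', len(np.unique(predicted_indices)))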

I will be updating the repository and the article later today since I finally have the time to do it.

commented

I have the same issue with the updated model, though I ran it on my own MIDI files of Bach music and only trained for 50 epochs because I'm running it on my home laptop. I'll see how the full 200 epochs work.