- Normalize the Dataset
- Build the Model
- Train the model
train_images = train_images / 255.0
test_images = test_images / 255.0
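Dividing by 255.0 maps each pixel from the integer range [0, 255] into the float range [0.0, 1.0]. A minimal NumPy sketch (the array below is a made-up stand-in for the image data):

```python
import numpy as np

# Tiny stand-in for a batch of grayscale pixel values (0-255).
images = np.array([[0, 128, 255]], dtype=np.float64)

# Same normalization as in the notes: scale pixels into [0.0, 1.0].
normalized = images / 255.0

print(normalized)  # every value now lies between 0.0 and 1.0
```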
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
- Flatten transforms the 2D images (28px x 28px) into a 1D array (of size 28 * 28 = 784)
- 1st layer has 128 nodes (relu)
- 2nd layer has 10 nodes (softmax), one per class
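The layer sizes above fix the parameter counts; a quick sketch of the arithmetic (a Dense layer has one weight per input-unit pair plus one bias per unit):

```python
# Flatten: 28 x 28 image -> vector of 784 values, no trainable parameters.
flat_size = 28 * 28

# Dense(128): 784 inputs * 128 units + 128 biases.
dense1_params = flat_size * 128 + 128

# Dense(10): 128 inputs * 10 units + 10 biases.
dense2_params = 128 * 10 + 10

print(flat_size, dense1_params, dense2_params)  # 784 100480 1290
```

These numbers should match what `model.summary()` reports for the model above.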
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
- optimizer: defines the training procedure - how the model is updated based on the loss
- loss: the function the optimizer minimizes during training
- metrics: used to monitor the training and testing steps
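'sparse_categorical_crossentropy' fits here because the labels are plain integer class IDs rather than one-hot vectors. A sketch of the loss on a single example, in pure NumPy (not the Keras implementation; the probabilities are made up):

```python
import numpy as np

# Softmax output for one example over 10 classes (made-up values, sums to 1).
probs = np.full(10, 0.05)
probs[3] = 0.55  # model assigns most probability to class 3

label = 3  # sparse label: just the integer class ID, no one-hot encoding

# Cross-entropy with a sparse label: negative log of the true class's probability.
loss = -np.log(probs[label])
print(loss)  # small when the model is confident in the correct class
```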
model.fit(train_images, train_labels, epochs=10)
- Feeds the training data to the model; the model learns to associate images with labels
- epoch: one iteration over the entire input data
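With epochs=10, the model sees the full training set ten times, each epoch split into mini-batches. Assuming an MNIST-style training set of 60,000 images and Keras's default batch size of 32 (the `fit` call above doesn't set `batch_size`), the step counts work out as:

```python
import math

num_images = 60_000  # MNIST-style training set size (assumption)
batch_size = 32      # Keras default when fit() doesn't specify batch_size
epochs = 10

steps_per_epoch = math.ceil(num_images / batch_size)
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # 1875 18750
```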
Compare how the model performs on the test dataset
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
Apply the trained model to new data
predictions = model.predict(test_images)
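Each row of `predictions` is the softmax output: 10 probabilities, one per class. `np.argmax` recovers the predicted label. A sketch with a made-up prediction row:

```python
import numpy as np

# Made-up softmax output for one test image: 10 probabilities summing to 1.
prediction = np.array([0.01, 0.02, 0.01, 0.80, 0.02,
                       0.03, 0.05, 0.02, 0.02, 0.02])

predicted_class = np.argmax(prediction)      # index of the highest probability
confidence = prediction[predicted_class]     # how confident the model is

print(predicted_class, confidence)  # 3 0.8
```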