To learn multiclass classification with TensorFlow, we will divide the task into these parts:
- Introduction to TensorFlow
- Understanding the dataset
- Loading the dataset
- Building and saving the multiclass classification model
- Loading the saved model for inference
- Future learning
Introduction to TensorFlow
TensorFlow is an open-source software library for numerical computation using data-flow graphs that enables machine learning practitioners to do more data-intensive computing.
It provides a robust implementation of widely used deep learning algorithms and has a flexible architecture.
If you are new to TensorFlow, study its basic programming model by going through Starting with Tensorflow: the basics before proceeding to this article.
Understanding the dataset
The dataset we will work with is the 102 Category Flower Dataset.
It contains flowers of 102 categories, with each class consisting of between 40 and 258 images, for a total of 8,189 images, which is fairly small given the number of classes.
The data format is simple: a directory containing the images and a .mat file containing the labels.
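Before training, it is worth checking how unbalanced the classes are. A minimal sketch of such a check; the `labels` array here is synthetic, while in practice you would use the label array loaded from `imagelabels.mat`:

```python
import numpy as np

# Synthetic stand-in for the per-image class IDs read from imagelabels.mat.
labels = np.array([0, 0, 1, 2, 2, 2, 1, 0, 2])

# Count how many images belong to each class.
counts = np.bincount(labels)
print(counts)                      # [3 2 4]
print(counts.min(), counts.max())  # smallest and largest class sizes: 2 4
```

A large spread between the smallest and largest class is one reason accuracy alone can be misleading on this dataset.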
Loading the dataset
The dataset can be downloaded using this link if you are building the model locally and want to modify the data yourself.
I prefer Google Colab notebooks, since they provide a good environment for training; training a model this size locally may crash your kernel.
Whether you are on Google Colab or in a local notebook, you can use the following commands to download and extract the data:
```
!wget http://www.robots.ox.ac.uk/~vgg/data/flowers/102/102flowers.tgz
!tar -xzf 102flowers.tgz
!rm 102flowers.tgz
!wget http://www.robots.ox.ac.uk/~vgg/data/flowers/102/imagelabels.mat
```
Building and saving the multiclass classification model
As always, we start by importing the needed libraries:
```python
import os
import numpy as np
import scipy.io
import cv2
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
from keras.utils import to_categorical
from sklearn.model_selection import train_test_split
```
Pre-processing
Loading labels:
```python
img_labels = scipy.io.loadmat("imagelabels.mat")["labels"][0]
# Labels in the .mat file are 1-indexed; shift them to start at 0.
img_labels = img_labels - 1
```
Loading the images and converting them to a NumPy array:
```python
train_x = []
train_y = []
img_dir = "jpg/"
for img_name in os.listdir(img_dir):
    # File names look like image_00001.jpg; characters 7-10 hold the image number.
    img_num = int(img_name[7:11]) - 1
    train_y.append(img_labels[img_num])
    image = cv2.imread(os.path.join(img_dir, img_name))
    resized = cv2.resize(image, (150, 150))
    # Scale pixel values into [0, 1].
    normalized_img = cv2.normalize(resized, None, alpha=0, beta=1,
                                   norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
    train_x.append(normalized_img)
train_x = np.array(train_x)
```
Splitting the data into training and testing sets:
```python
trainx, valx, trainy, valy = train_test_split(train_x, train_y,
                                              test_size=0.15, random_state=10)
```
Checking the data:
```python
print('Training Dataset Shape: {}'.format(trainx.shape))
print('No. of Training Dataset Labels: {}'.format(len(trainy)))
```

```
Training Dataset Shape: (6960, 150, 150, 3)
No. of Training Dataset Labels: 6960
```
Making the shapes explicit and exploring the data again (the images were already scaled to [0, 1] during loading, so no further division by 255 is needed):

```python
trainx = trainx.reshape((6960, 150, 150, 3))
valx = valx.reshape((1229, 150, 150, 3))
print('Training Dataset Shape: {}'.format(trainx.shape))
print('No. of Training Dataset Labels: {}'.format(len(trainy)))
print('Test Dataset Shape: {}'.format(valx.shape))
print('No. of Test Dataset Labels: {}'.format(len(valy)))
```

```
Training Dataset Shape: (6960, 150, 150, 3)
No. of Training Dataset Labels: 6960
Test Dataset Shape: (1229, 150, 150, 3)
No. of Test Dataset Labels: 1229
```
Converting the labels to categorical (one-hot) form:
```python
trainy = to_categorical(trainy)
valy = to_categorical(valy)
```
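For illustration, `to_categorical` maps each integer label to a one-hot vector whose length equals the number of classes (102 here). A NumPy-only equivalent sketch:

```python
import numpy as np

def one_hot(labels, num_classes):
    """Return a (len(labels), num_classes) one-hot matrix."""
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(one_hot([0, 2, 1], 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```

This one-hot form is what the softmax cross-entropy loss used below expects for its labels.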
Building the model
I built the model by following these simple steps.
First, define the helper functions used to build the CNN: functions for creating weights, biases, and the various layer types.
```python
# Function to create weights
def create_weights(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.05))

# Function to create biases
def create_biases(size):
    return tf.Variable(tf.constant(0.05, shape=[size]))

# Function to create a convolutional layer (conv + ReLU + 2x2 max-pool)
def create_convolutional_layer(input, num_input_channels, conv_filter_size, num_filters):
    weights = create_weights(shape=[conv_filter_size, conv_filter_size,
                                    num_input_channels, num_filters])
    biases = create_biases(num_filters)
    layer = tf.nn.conv2d(input=input, filter=weights,
                         strides=[1, 1, 1, 1], padding='SAME')
    layer += biases
    layer = tf.nn.relu(layer)
    layer = tf.nn.max_pool(value=layer, ksize=[1, 2, 2, 1],
                           strides=[1, 2, 2, 1], padding='SAME')
    return layer

# Function to create a flatten layer
def create_flatten_layer(layer):
    layer_shape = layer.get_shape()
    num_features = layer_shape[1:4].num_elements()
    return tf.reshape(layer, [-1, num_features])

# Function to create fully connected layers
def create_fc_layer(input, num_inputs, num_outputs, use_relu=True):
    weights = create_weights(shape=[num_inputs, num_outputs])
    biases = create_biases(num_outputs)
    if use_relu:
        layer = tf.add(tf.matmul(input, weights), biases)
        layer = tf.nn.relu(layer)
    else:
        # Name the output op so it can be retrieved at inference time.
        layer = tf.add(tf.matmul(input, weights), biases, name='y_preds')
    return layer
```
Then initialize the placeholders and the constants used later, such as the batch size and number of epochs:
```python
## INITIALIZING CONSTANTS
x = tf.placeholder(tf.float32, shape=[None, 150, 150, 3], name='x')
y = tf.placeholder(tf.float32, shape=[None, 102], name='y')

NUM_EPOCHS = 100
BATCH_SIZE = 500
KEEP_PROB = 0.5
```
Now build the model: three convolutional blocks, then a flatten layer, a fully connected layer, and finally the output layer. The choice of filters and activation function changes the accuracy, so try playing with them and observe the difference between different activation functions.
```python
## BUILDING CNN
# 1st convolutional block
block1_conv1 = create_convolutional_layer(input=x, num_input_channels=3,
                                          conv_filter_size=3, num_filters=64)
block1_conv2 = create_convolutional_layer(input=block1_conv1, num_input_channels=64,
                                          conv_filter_size=3, num_filters=128)
batch1 = tf.layers.batch_normalization(block1_conv2)
drop1 = tf.nn.dropout(batch1, KEEP_PROB)

# 2nd convolutional block
block2_conv1 = create_convolutional_layer(input=drop1, num_input_channels=128,
                                          conv_filter_size=3, num_filters=128)
block2_conv2 = create_convolutional_layer(input=block2_conv1, num_input_channels=128,
                                          conv_filter_size=3, num_filters=256)
batch2 = tf.layers.batch_normalization(block2_conv2)
drop2 = tf.nn.dropout(batch2, KEEP_PROB)

# 3rd convolutional block
block3_conv1 = create_convolutional_layer(input=drop2, num_input_channels=256,
                                          conv_filter_size=3, num_filters=256)
block3_conv2 = create_convolutional_layer(input=block3_conv1, num_input_channels=256,
                                          conv_filter_size=3, num_filters=512)
batch3 = tf.layers.batch_normalization(block3_conv2)
drop3 = tf.nn.dropout(batch3, KEEP_PROB)

# Flatten and fully connected layers
layer_flat = create_flatten_layer(drop3)
layer_fc1 = create_fc_layer(input=layer_flat,
                            num_inputs=layer_flat.get_shape()[1:4].num_elements(),
                            num_outputs=1024, use_relu=True)
batch5 = tf.layers.batch_normalization(layer_fc1)
drop5 = tf.nn.dropout(batch5, KEEP_PROB)

# Output layer
y_preds = create_fc_layer(input=drop5, num_inputs=1024,
                          num_outputs=102, use_relu=False)
```
Then define the cost and accuracy, after which training can be done:
```python
## CALCULATING COST AND ACCURACY
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_preds, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(cost)
correct_pred = tf.equal(tf.argmax(y_preds, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
```
Training and saving the model:
```python
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(NUM_EPOCHS):
        for batch in range(int(len(trainx) / BATCH_SIZE)):
            batch_x = trainx[batch*BATCH_SIZE:min((batch+1)*BATCH_SIZE, len(trainx))]
            batch_y = trainy[batch*BATCH_SIZE:min((batch+1)*BATCH_SIZE, len(trainy))]
            sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
            loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x, y: batch_y})
        for batch in range(int(len(valx) / BATCH_SIZE)):
            val_batch_x = valx[batch*BATCH_SIZE:min((batch+1)*BATCH_SIZE, len(valx))]
            val_batch_y = valy[batch*BATCH_SIZE:min((batch+1)*BATCH_SIZE, len(valy))]
            val_loss, val_acc = sess.run([cost, accuracy],
                                         feed_dict={x: val_batch_x, y: val_batch_y})
        print("Epoch " + str(epoch+1) +
              ": Train Loss= " + "{:.4f}".format(loss) +
              " Train Accuracy= " + "{:.4f}".format(acc) +
              " Valid Loss= " + "{:.4f}".format(val_loss) +
              " Valid Accuracy= " + "{:.4f}".format(val_acc))

    ## SAVING THE MODEL
    # simple_save creates the export directory itself and fails if it
    # already exists, so do not create it beforehand with os.mkdir.
    tf.saved_model.simple_save(sess, '/model5',
                               inputs={"x": x}, outputs={"y_preds": y_preds})
    print('--- MODEL SAVED ---')
```

```
Epoch 1: Train Loss= 4.5610 Train Accuracy= 0.0360 Valid Loss= 4.5592 Valid Accuracy= 0.0280
Epoch 2: Train Loss= 4.5270 Train Accuracy= 0.0260 Valid Loss= 4.5455 Valid Accuracy= 0.0280
Epoch 3: Train Loss= 4.5328 Train Accuracy= 0.0420 Valid Loss= 4.5355 Valid Accuracy= 0.0280
Epoch 4: Train Loss= 4.5230 Train Accuracy= 0.0200 Valid Loss= 4.5111 Valid Accuracy= 0.0380
Epoch 5: Train Loss= 4.4415 Train Accuracy= 0.0520 Valid Loss= 4.4353 Valid Accuracy= 0.0420
...
Epoch 96: Train Loss= 0.3771 Train Accuracy= 0.8760 Valid Loss= 2.6421 Valid Accuracy= 0.4680
Epoch 97: Train Loss= 0.3731 Train Accuracy= 0.8860 Valid Loss= 2.8042 Valid Accuracy= 0.4360
Epoch 98: Train Loss= 0.3305 Train Accuracy= 0.8900 Valid Loss= 2.9577 Valid Accuracy= 0.4760
Epoch 99: Train Loss= 0.4255 Train Accuracy= 0.8540 Valid Loss= 3.0951 Valid Accuracy= 0.4440
Epoch 100: Train Loss= 0.3030 Train Accuracy= 0.9040 Valid Loss= 2.7476 Valid Accuracy= 0.4940
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: /model5/saved_model.pb
--- MODEL SAVED ---
```
Here I trained for 100 epochs and got:
Train Loss= 0.3030 Train Accuracy= 0.9040
The validation accuracy stayed around 0.4940; the large gap between training and validation accuracy indicates the model is overfitting the small dataset.
Loading the saved model for inference
This is the code to load the model saved above; it can be used in a separate inference file with test data to predict values without training again (it reuses the preprocessed `valx`, `valy`, and `BATCH_SIZE` from before).
```python
graph = tf.Graph()
with graph.as_default():
    with tf.Session(graph=graph) as sess:
        tf.saved_model.loader.load(sess, ["serve"], '/model5')
        x = graph.get_tensor_by_name('x:0')
        y_preds = graph.get_tensor_by_name('y_preds:0')
        y_true = []
        preds = []
        for batch in range(int(len(valx) / BATCH_SIZE)):
            batch_x = valx[batch*BATCH_SIZE:min((batch+1)*BATCH_SIZE, len(valx))]
            batch_y = valy[batch*BATCH_SIZE:min((batch+1)*BATCH_SIZE, len(valy))]
            y_true.append(batch_y)
            preds.append(sess.run(y_preds, feed_dict={x: batch_x}))
        y_true = np.stack(np.array(y_true), axis=0)
        preds = np.stack(np.array(preds), axis=0)
```
Calculating loss and accuracy:
```python
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=tf.cast(preds, tf.float32), labels=tf.cast(y_true, tf.float32)))
correct = tf.equal(np.argmax(preds, axis=2), np.argmax(y_true, axis=2))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
```
Printing results:
```python
with tf.Session() as sess:
    print('Loss :', loss.eval())
    print('Accuracy :', accuracy.eval())
```

```
Loss : 1.3531618
Accuracy : 0.614
```
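The saved model outputs raw logits, so to report a class for a single image you apply a softmax and take the argmax. A NumPy sketch with made-up logit values for a 3-class case:

```python
import numpy as np

logits = np.array([1.2, 0.3, 4.1])   # hypothetical logits for one image

# Numerically stable softmax: subtract the max before exponentiating.
exp = np.exp(logits - logits.max())
probs = exp / exp.sum()

pred_class = int(np.argmax(probs))
print(pred_class)                    # 2, the class with the largest logit
```

With the real model, `logits` would be one row of `preds` and the result an index into the 102 flower classes.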
Future learning
As this is just a basic model for the learning phase, several things can be done to improve its efficiency:
- Learn data preprocessing with TensorFlow.
- Since the dataset is small, apply data augmentation.
- Explore more architectures to improve the accuracy.
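On the augmentation point: even a simple horizontal flip doubles the effective training set. A NumPy-only sketch on a tiny synthetic batch (a real pipeline would add random crops, rotations, and color jitter, for example with `tf.image`):

```python
import numpy as np

# Tiny synthetic batch: 2 RGB images of 4x4 pixels, NHWC layout.
images = np.arange(2 * 4 * 4 * 3, dtype=np.float32).reshape(2, 4, 4, 3)

# Horizontal flip: reverse the width axis (axis 2).
flipped = images[:, :, ::-1, :]

# Double the training set by appending the flipped copies.
augmented = np.concatenate([images, flipped], axis=0)
print(augmented.shape)   # (4, 4, 4, 3)
```

The same slicing applies to the (N, 150, 150, 3) arrays used in this article.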
Summary
We successfully built a TensorFlow model to classify 102 categories of flowers.