Baseline - MNIST

Baseline for MNIST Educational Challenge on AIcrowd

Author : Ayush Shivani

To open this notebook on Google Colab, click below!

Open In Colab

Download Necessary Packages

In [76]:
import sys
!{sys.executable} -m pip install numpy
!{sys.executable} -m pip install pandas
!{sys.executable} -m pip install scikit-learn
Requirement already satisfied: numpy in /home/ayush/.local/lib/python3.7/site-packages (1.18.1)
Requirement already satisfied: pandas in /home/ayush/.local/lib/python3.7/site-packages (0.25.0)
Requirement already satisfied: pytz>=2017.2 in /home/ayush/.local/lib/python3.7/site-packages (from pandas) (2019.3)
Requirement already satisfied: python-dateutil>=2.6.1 in /home/ayush/anaconda3/lib/python3.7/site-packages (from pandas) (2.8.0)
Requirement already satisfied: numpy>=1.13.3 in /home/ayush/.local/lib/python3.7/site-packages (from pandas) (1.18.1)
Requirement already satisfied: six>=1.5 in /home/ayush/anaconda3/lib/python3.7/site-packages (from python-dateutil>=2.6.1->pandas) (1.12.0)
Requirement already satisfied: scikit-learn in /home/ayush/.local/lib/python3.7/site-packages (0.21.3)
Requirement already satisfied: joblib>=0.11 in /home/ayush/.local/lib/python3.7/site-packages (from scikit-learn) (0.14.0)
Requirement already satisfied: numpy>=1.11.0 in /home/ayush/.local/lib/python3.7/site-packages (from scikit-learn) (1.18.1)
Requirement already satisfied: scipy>=0.17.0 in /home/ayush/.local/lib/python3.7/site-packages (from scikit-learn) (1.4.1)

Download dataset

In [1]:
(The download commands and their URLs were stripped from this export. This cell fetches two zip archives, the training data (2.1 MB) and the test data (13 MB), and unzips them to obtain the train.csv and test.csv files used below.)

Import packages

In [77]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import f1_score,precision_score,recall_score,accuracy_score

Load the data

In [80]:
train_data_path = "train.csv" #path where data is stored
In [79]:
train_data = pd.read_csv(train_data_path,header=None) #load data in dataframe using pandas

Visualise the Dataset

In [ ]:

The columns go from 0 to 784: the first column (index 0) is the digit label, between 0 and 9, and columns 1 to 784 are pixel values, each between 0 and 255.
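As a quick sanity check on this layout, a single row can be split into its label and pixels, and the pixels reshaped into a 28x28 image. A minimal sketch on a synthetic row of the same shape (a real row would come from train_data.iloc[i]):

```python
import numpy as np

# Synthetic stand-in for one row of train.csv: a label followed by 784 pixels
row = np.concatenate([[7], np.arange(784) % 256])

label = int(row[0])           # column 0 is the digit label (0-9)
pixels = row[1:]              # columns 1-784 are pixel values (0-255)
img = pixels.reshape(28, 28)  # recover the 28x28 image

print(label, img.shape)  # 7 (28, 28)
```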

Split the data in train/test

In [65]:
X_train, X_test= train_test_split(train_data, test_size=0.2, random_state=42)

Here we have selected the size of the testing data to be 20% of the total data. You can change it and see what effect it has on the accuracies. To learn more about the train_test_split function click here.
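For instance, on a toy array of 50 rows (made up purely for illustration, not the notebook's data), the same call produces a 40/10 split:

```python
import numpy as np
from sklearn.model_selection import train_test_split

data = np.arange(100).reshape(50, 2)  # 50 toy rows, 2 columns
train, test = train_test_split(data, test_size=0.2, random_state=42)
print(train.shape, test.shape)  # (40, 2) (10, 2)
```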

Now that we have our data split into train and validation sets, we need to separate the labels from the features.

In [66]:
X_train,y_train = X_train.iloc[:,1:],X_train.iloc[:,0]
X_test,y_test = X_test.iloc[:,1:],X_test.iloc[:,0]

Define the classifier

In [81]:
classifier = LogisticRegression(solver = 'lbfgs',multi_class='auto',max_iter=10)

We have used Logistic Regression as the classifier here and set a few of its parameters. You can set more parameters to improve performance; to see the full list of parameters, visit here.

We can also use other classifiers. To read more about sklearn classifiers, visit here. Try other classifiers and see how the performance of your model changes.

Train the classifier

In [ ]:, y_train)

Got a warning! Don't worry, it's just because the number of iterations is very low (set in the classifier in the cell above). Increase the number of iterations and see if the warning vanishes. Do remember that increasing the iterations also increases the running time. (Hint: max_iter=500)
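The warning in question is sklearn's ConvergenceWarning. A small sketch on synthetic data (purely for illustration) showing that raising max_iter makes it disappear:

```python
import warnings
import numpy as np
from sklearn.exceptions import ConvergenceWarning
from sklearn.linear_model import LogisticRegression

X = np.random.default_rng(1).normal(size=(200, 10))
y = (X[:, 0] > 0).astype(int)

def warns(max_iter):
    """Fit with the given max_iter; report whether a ConvergenceWarning fired."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        LogisticRegression(solver='lbfgs', max_iter=max_iter).fit(X, y)
    return any(issubclass(w.category, ConvergenceWarning) for w in caught)

print(warns(1), warns(500))  # True False
```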

Predict on test set

In [69]:
y_pred = classifier.predict(X_test)

Find the scores

In [70]:
precision = precision_score(y_test,y_pred,average='micro')
recall = recall_score(y_test,y_pred,average='micro')
accuracy = accuracy_score(y_test,y_pred)
f1 = f1_score(y_test,y_pred,average='macro')
In [71]:
print("Accuracy of the model is :" ,accuracy)
print("Recall of the model is :" ,recall)
print("Precision of the model is :" ,precision)
print("F1 score of the model is :" ,f1)
Accuracy of the model is : 0.92225
Recall of the model is : 0.92225
Precision of the model is : 0.92225
F1 score of the model is : 0.9213314758432045
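Note that accuracy, micro-precision, and micro-recall coincide above. That is not a coincidence: in multi-class classification with exactly one label per sample, micro-averaging pools all classes, so the three are always equal. A tiny hand-checkable example:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2]  # one mistake: a 1 predicted as 2

acc = accuracy_score(y_true, y_pred)
micro_p = precision_score(y_true, y_pred, average='micro')
micro_r = recall_score(y_true, y_pred, average='micro')
macro_f1 = f1_score(y_true, y_pred, average='macro')

print(acc, micro_p, micro_r)  # all equal: 5/6
print(macro_f1)               # mean of per-class F1: (1 + 2/3 + 4/5) / 3
```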

Here are some of the images predicted correctly by your model. Cheers!

In [ ]:
import matplotlib.pyplot as plt
correct_pred = np.where(y_pred == y_test)[0]  # positions predicted correctly
fig = plt.figure()
for i in range(1, 10):
  ax = fig.add_subplot(3, 3, i)
  img = np.array(X_test.iloc[correct_pred[i], :]).reshape(28, 28)
  ax.imshow(img, cmap='gray')

Prediction on Evaluation Set

Load the evaluation data

In [72]:
final_test_path = "test.csv"
final_test = pd.read_csv(final_test_path)

Predict on evaluation set

In [73]:
submission = classifier.predict(final_test)

Save the prediction to csv

In [74]:
submission = pd.DataFrame(submission, columns=['label'])  # header as required by the submission format
submission.to_csv('/tmp/submission.csv', index=False)

Note: Do take a look at the submission format. The submission file should contain a header; for example, here it is "label".

To download the generated csv in Colab, run the command below.

In [ ]:
from google.colab import files'/tmp/submission.csv')

Go to the AIcrowd platform, participate in the challenge, and submit the generated submission.csv.