Download Necessary Packages¶
import sys
!{sys.executable} -m pip install numpy
!{sys.executable} -m pip install pandas
!{sys.executable} -m pip install scikit-learn
!{sys.executable} -m pip install matplotlib tqdm
!{sys.executable} -m pip install pillow  # provides the PIL module used below
Download data¶
The first step is to download the training data and the test data.
# Download the datasets
!rm -rf data/
!mkdir data/
!curl https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/orientme/v0.2/training.tar.gz -o data/training.tar.gz
!curl https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/orientme/v0.2/test.tar.gz -o data/test.tar.gz
!curl https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/orientme/v0.2/sample_submission.csv -o data/sample_submission.csv
!tar xvzf data/training.tar.gz -C data/
!tar xvzf data/test.tar.gz -C data/
## Now the data is available at the following locations:
TRAINING_IMAGES_FOLDER = "data/training/images/"
TRAINING_LABELS_PATH = "data/training/labels.csv"
TEST_IMAGES_FOLDER = "data/images"
SAMPLE_SUBMISSION_FILE_PATH = "data/sample_submission.csv"
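Optionally, we can quickly check that everything landed where we expect (a sanity check, not a required step):
# Optional sanity check: list what we just downloaded and extracted
!ls data/
!ls {TRAINING_IMAGES_FOLDER} | head -n 5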
Import packages¶
import os
import tqdm
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error
import matplotlib.pyplot as plt
%matplotlib inline
from PIL import Image
Load Data¶
We use the PIL library to load our images. Here we build our dataset, where the input features are the mean colours of each image and the target is the rotation of the cube along the x axis.
training_labels_df = pd.read_csv(TRAINING_LABELS_PATH)
def pre_process_data_X(image):
    """
    This function takes a loaded image and returns a particular
    representation of the data point.

    NOTE: This baseline implements a **very** silly approach
    of representing every image by its mean RGB values.
    You are encouraged to try alternative representations of the data,
    or figure out how to learn the best representation from the data ;)
    """
    im_array = np.array(image)             # shape: (height, width, 3)
    mean_rgb = im_array.mean(axis=(0, 1))  # one mean value per colour channel
    return mean_rgb
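As the note in the docstring suggests, richer representations may work better. As one illustration (a hypothetical helper, not part of the baseline), here is a sketch of a per-channel colour histogram feature:
def pre_process_data_X_hist(image, bins=16):
    """Hypothetical alternative: concatenated per-channel colour histograms."""
    im_array = np.array(image)
    features = []
    for channel in range(3):
        hist, _ = np.histogram(im_array[..., channel],
                               bins=bins, range=(0, 255), density=True)
        features.append(hist)
    return np.concatenate(features)  # feature vector of length 3 * bins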
ALL_DATA = []
for _idx, row in tqdm.tqdm(training_labels_df.iterrows(), total=training_labels_df.shape[0]):
filepath = os.path.join(
TRAINING_IMAGES_FOLDER,
row.filename
)
im = Image.open(filepath)
data_X = pre_process_data_X(im)
data_Y = [row.xRot]
ALL_DATA.append((data_X, data_Y))
Exploratory Data Analysis¶
Let's look at a few of the images to get a better idea of what the dataset contains. The title of each image shows the clockwise rotation of the cube along the x axis.
plt.figure(figsize=(20,20))
for i in range(16):
filename,xRot = training_labels_df.iloc[i]
filepath = os.path.join(
TRAINING_IMAGES_FOLDER,
filename
)
im = Image.open(filepath)
plt.subplot(4,4,i+1)
plt.axis('off')
plt.title("xRot: %.3f"%(xRot))
plt.imshow(im)
Split Data into Train and Validation¶
We split the dataset into Training data and Validation datasets to help us test the generalizability of our models, and to ensure that we are not overfitting on the training set.
training_set, validation_set = train_test_split(ALL_DATA, test_size=0.2, random_state=42)
Here we have set aside 20% of the data for validation. You can change this fraction and see what effect it has on the scores. To learn more, see the scikit-learn documentation for the train_test_split function.
Now that the data is split into train and validation sets, we need to separate the labels from the features.
X_train, y_train = zip(*training_set)
X_val, y_val = zip(*validation_set)
X_train = np.array(X_train)
y_train = np.array(y_train)
X_val = np.array(X_val)
y_val = np.array(y_val)
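A quick sanity check on the resulting shapes (optional):
# Optional: confirm the shapes line up before training
print("X_train:", X_train.shape, " y_train:", y_train.shape)
print("X_val:  ", X_val.shape, " y_val:  ", y_val.shape)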
Define the Regressor¶
Now we finally come to the juicy part.
With all the data loaded and ready, we can get to training. Here we use sklearn's MLPRegressor to fit our network. We can tune its hyperparameters based on cross-validation scores, as sketched below.
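A minimal sketch of one way to do that with GridSearchCV (the parameter grid here is an illustrative assumption, not a tuned recommendation):
from sklearn.model_selection import GridSearchCV

# Illustrative grid; these values are assumptions, not recommendations
param_grid = {
    "hidden_layer_sizes": [(10, 10), (50,), (100, 50)],
    "alpha": [1e-4, 1e-3, 1e-2],
}
search = GridSearchCV(
    MLPRegressor(max_iter=500),
    param_grid,
    scoring="neg_mean_squared_error",
    cv=3,
)
# Uncomment to run (this can take a while):
# search.fit(X_train, y_train.ravel())
# print(search.best_params_)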
model = MLPRegressor(hidden_layer_sizes=[10, 10], verbose=True)
# NOTE: This is again a deliberately silly hyperparameter choice for this problem,
# and we encourage you to explore what works best for you.
Train the regressor¶
model.fit(X_train, y_train)
Predict on Validation¶
Now we run our trained model on the validation set and evaluate it.
y_pred = model.predict(X_val)
print('Mean Absolute Error:', mean_absolute_error(y_val, y_pred))
print('Mean Squared Error:', mean_squared_error(y_val, y_pred))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_val, y_pred)))
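Beyond the summary numbers, a quick visual check can help (optional): a scatter of predicted vs. true values should hug the diagonal.
# Optional: predictions vs. ground truth on the validation set
plt.figure(figsize=(5, 5))
plt.scatter(y_val, y_pred, s=5, alpha=0.5)
lims = [y_val.min(), y_val.max()]
plt.plot(lims, lims, "r--")  # a perfect model would fall on this line
plt.xlabel("True xRot")
plt.ylabel("Predicted xRot")
plt.show()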
Load Test Set¶
Now we load the test data and compute the same features for each image.
import glob
TEST_DATA = []
TEST_FILENAMES = []
for _test_image_path in tqdm.tqdm(glob.glob(os.path.join(TEST_IMAGES_FOLDER, "*.jpg"))):
filename = os.path.basename(_test_image_path)
im = Image.open(_test_image_path)
data_X = pre_process_data_X(im)
TEST_DATA.append(data_X)
TEST_FILENAMES.append(filename)
TEST_DATA = np.array(TEST_DATA)
Make predictions on the test set¶
test_predictions = model.predict(TEST_DATA)
test_df = pd.DataFrame(test_predictions, columns=['xRot'])
test_df["filename"] = TEST_FILENAMES
test_df = test_df[["filename", "xRot"]]  # match the required header order: filename,xRot
Save the predictions to CSV¶
test_df.to_csv('submission.csv', index=False)
Note: Do take a look at the submission format. The submission file should contain the following header: filename,xRot.
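A quick way to double-check the file before submitting (optional):
# Optional: verify the header and a few rows of the generated file
check_df = pd.read_csv('submission.csv')
print(check_df.columns.tolist())  # expect ['filename', 'xRot']
print(check_df.head())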
To download the generated CSV in Google Colab, run the command below¶
from google.colab import files
files.download('submission.csv')