Baseline - AUTODRI


Baseline for AUTODRI Challenge on AIcrowd

Author : Ayush Shivani

Download Necessary Packages

In [ ]:
!pip install numpy pandas scikit-learn matplotlib tqdm

Download data

The first step is to download the training, validation, and test data.

In [ ]:
# Download the datasets
!rm -rf data/
!mkdir data/

!wget https://datasets.aicrowd.com/default/aicrowd-practice-challenges/public/autodri/v0.1/train.zip
!wget https://datasets.aicrowd.com/default/aicrowd-practice-challenges/public/autodri/v0.1/test.zip
!wget https://datasets.aicrowd.com/default/aicrowd-practice-challenges/public/autodri/v0.1/val.zip
!unzip train.zip  
!unzip test.zip 
!unzip val.zip
!mv train data/train
!mv test data/test
!mv val data/val
In [ ]:
## Now the data is available at the following locations:

TRAINING_IMAGES_FOLDER = "data/train/cameraFront"
TRAINING_LABELS_PATH = "data/train/train.csv"
TESTING_LABELS_PATH = "data/test/test.csv"
TESTING_IMAGES_FOLDER = "data/test/cameraFront"
# For this baseline we only use the front camera angle, purely for demonstration.
# For a real attempt, you should experiment with the other camera angles and
# combinations of them to see what works best; one possible way to combine
# them is sketched below.
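
A minimal sketch of combining camera views by concatenating per-camera features into one vector. The folder name cameraRear is an assumption; adjust the list to whatever !ls data/train shows.

In [ ]:
# Hedged sketch: concatenate mean-RGB features from several camera views.
# ASSUMPTION: sibling folders such as data/train/cameraRear exist in the
# extracted dataset; edit CAMERA_FOLDERS to match the actual folder names.
import os
import numpy as np
from PIL import Image

CAMERA_FOLDERS = ["cameraFront", "cameraRear"]

def multi_camera_features(filename):
    features = []
    for camera in CAMERA_FOLDERS:
        im = Image.open(os.path.join("data/train", camera, filename))
        features.append(np.array(im).mean(axis=(0, 1)))
    return np.concatenate(features)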

Import packages

In [ ]:
import os
import tqdm

import pandas as pd
import numpy as np

from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error,mean_absolute_error
import matplotlib.pyplot as plt
%matplotlib inline

from PIL import Image

Load Data

We use the PIL library to load our images. Here we build our dataset: the input features are the mean RGB colours of each image, and the target is the steering value (canSteering).

In [ ]:
training_labels_df = pd.read_csv(TRAINING_LABELS_PATH)

def pre_process_data_X(image):
    """
    This function takes a loaded image and returns a particular
    representation of the data point.

    NOTE: This current baseline implements a **very** silly approach
    of representing every image by its mean RGB values.

    You are encouraged to try alternate representations of the data,
    or figure out how to learn the best representation from the data ;)
    """
    im_array = np.array(image)
    mean_rgb = im_array.mean(axis=(0, 1))
    return mean_rgb


ALL_DATA = []

for _idx, row in tqdm.tqdm(training_labels_df.iterrows(), total=training_labels_df.shape[0]):
    filepath = os.path.join(
        TRAINING_IMAGES_FOLDER,
        row.filename
    )
    im = Image.open(filepath)
    
    data_X = pre_process_data_X(im)
    data_Y = [row.canSteering]
    
    ALL_DATA.append((data_X, data_Y))
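
As the docstring above suggests, mean RGB discards almost all spatial information. A minimal sketch of one alternative representation: a small grayscale thumbnail, flattened to a vector. The 32x32 size is an arbitrary assumption.

In [ ]:
# Hedged alternative to pre_process_data_X: a downsampled grayscale thumbnail.
# The 32x32 size is an assumption; larger thumbnails keep more detail but
# make the MLP slower to train.
def thumbnail_features(image, size=(32, 32)):
    small = image.convert("L").resize(size)  # grayscale, downsampled
    return np.asarray(small, dtype=np.float32).ravel() / 255.0  # scale to [0, 1]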

Exploratory Data Analysis

We now look at a few sample images to get a better idea of what the dataset contains.

In [ ]:
plt.figure(figsize=(20, 20))
for i in range(16):
    filename, can_steering = training_labels_df.iloc[i]
    filepath = os.path.join(
        TRAINING_IMAGES_FOLDER,
        filename
    )
    im = Image.open(filepath)
    plt.subplot(4, 4, i + 1)
    plt.axis('off')
    plt.title("canSteering: %.3f" % (can_steering))
    plt.imshow(im)

Split Data into Train and Validation

We split the dataset into Training data and Validation datasets to help us test the generalizability of our models, and to ensure that we are not overfitting on the training set.

In [ ]:
training_set, validation_set = train_test_split(ALL_DATA, test_size=0.2, random_state=42)

Here we have set aside 20% of the data for validation. You can change this fraction and see what effect it has on the scores. To learn more about the train_test_split function, see the scikit-learn documentation.

Now that our data is split into train and validation sets, we need to separate the labels from the input features.

In [ ]:
X_train, y_train = zip(*training_set)
X_val, y_val = zip(*validation_set)


X_train = np.array(X_train)
y_train = np.array(y_train)
X_val = np.array(X_val)
y_val = np.array(y_val)
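
A quick optional check of the array shapes: X should be (n_samples, 3) for the mean-RGB features, and y should be (n_samples, 1).

In [ ]:
# Optional sanity check: one mean-RGB triple per image in X,
# one canSteering value per image in y.
print(X_train.shape, y_train.shape)
print(X_val.shape, y_val.shape)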

Define the Regressor

Now we come to the interesting part. With all the data loaded and prepared, we can train a model. Here we use sklearn's MLPRegressor; its hyperparameters can be tuned based on cross-validation scores, as sketched after the next cell.

In [ ]:
model = MLPRegressor(hidden_layer_sizes=[10, 10], verbose=True)
# NOTE: This is again a deliberately naive hyperparameter choice for this
# problem, and we encourage you to explore what works best for you.
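
A minimal cross-validation sketch using scikit-learn's GridSearchCV. The parameter grid below is an illustrative assumption, not a recommended search space.

In [ ]:
# Hedged sketch of hyperparameter tuning via cross-validation.
# The grid values are illustrative assumptions only.
from sklearn.model_selection import GridSearchCV

param_grid = {
    "hidden_layer_sizes": [(10, 10), (50,), (100, 50)],
    "alpha": [1e-4, 1e-3, 1e-2],
}
search = GridSearchCV(
    MLPRegressor(max_iter=500),
    param_grid,
    scoring="neg_mean_absolute_error",  # matches the challenge's MAE metric
    cv=3,
)
search.fit(X_train, y_train.ravel())
print(search.best_params_, search.best_score_)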

Train the regressor

In [ ]:
model.fit(X_train, y_train.ravel())  # ravel() flattens y from (n, 1) to (n,), the shape sklearn expects

Predict on Validation

Now we run the trained model on the validation set and evaluate it.

In [ ]:
y_pred = model.predict(X_val)

Evaluate the Performance

We use the same metrics that will be used on the test set: MAE and RMSE are the metrics for this challenge.

In [ ]:
print('Mean Absolute Error:', mean_absolute_error(y_val, y_pred))  
print('Mean Squared Error:', mean_squared_error(y_val, y_pred))  
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_val, y_pred)))
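
To put these numbers in context, it helps to compare against a trivial baseline that always predicts the training mean. This sanity check is our addition, not part of the challenge; the MLP should beat these numbers.

In [ ]:
# Trivial baseline: always predict the mean canSteering of the training set.
constant_pred = np.full_like(y_val, y_train.mean(), dtype=float)
print('Constant-baseline MAE:', mean_absolute_error(y_val, constant_pred))
print('Constant-baseline RMSE:', np.sqrt(mean_squared_error(y_val, constant_pred)))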

Load Test Set

Load the test data now

In [ ]:
testing_labels_df = pd.read_csv(TESTING_LABELS_PATH)

TEST_DATA = []
TEST_FILENAMES = []
for _idx, row in tqdm.tqdm(testing_labels_df.iterrows(), total=testing_labels_df.shape[0]):
    filepath = os.path.join(
        TESTING_IMAGES_FOLDER,
        row.filename
    )
    im = Image.open(filepath)
    
    data_X = pre_process_data_X(im)
    TEST_DATA.append(data_X)
    TEST_FILENAMES.append(row.filename)

Make predictions on the test set

In [ ]:
test_predictions = model.predict(np.array(TEST_DATA))  # stack the feature list into a 2-D array
In [ ]:
test_predictions.shape
In [ ]:
test_df = pd.DataFrame(test_predictions, columns=['canSteering'])
test_df["filename"] = TEST_FILENAMES
test_df = test_df[["filename", "canSteering"]]  # put filename first to match the expected header
In [ ]:
test_df.shape

Save the prediction to csv

In [ ]:
test_df.to_csv('submission.csv', index=False)

Note: Do take a look at the submission format on the challenge page. The submission file should contain the following header: filename,canSteering.
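
A quick optional sanity check before submitting (our addition): read the file back and confirm the header and row count.

In [ ]:
# Optional: confirm the header and row count of the file we are about to submit.
submission = pd.read_csv('submission.csv')
assert list(submission.columns) == ['filename', 'canSteering'], submission.columns
assert len(submission) == len(testing_labels_df), (len(submission), len(testing_labels_df))
submission.head()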

To download the generated csv in Google Colab, run the command below.

In [ ]:
from google.colab import files
files.download('submission.csv')

Well done! 👍 We are all set to make a submission and see your name on the leaderboard. Let's navigate to the challenge page and make one.
