Baseline for CRDSM


Getting Started Code for CRDSM Educational Challenge

Author - Pulkit Gera

In [0]:
!pip install numpy
!pip install pandas
!pip install scikit-learn

Download data

The first step is to download our train and test data. We will train a classifier on the train data, make predictions on the test data, and submit those predictions.

In [0]:
!rm -rf data
!mkdir data
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/aicrowd_educational_crdsm/data/public/test.csv
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/aicrowd_educational_crdsm/data/public/train.csv
!mv train.csv data/train.csv
!mv test.csv data/test.csv
--2020-05-16 21:33:33--  https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/aicrowd_educational_crdsm/data/public/test.csv
Resolving s3.eu-central-1.wasabisys.com (s3.eu-central-1.wasabisys.com)... 130.117.252.12, 130.117.252.10, 130.117.252.13, ...
Connecting to s3.eu-central-1.wasabisys.com (s3.eu-central-1.wasabisys.com)|130.117.252.12|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 72142 (70K) [text/csv]
Saving to: ‘test.csv’

test.csv            100%[===================>]  70.45K   150KB/s    in 0.5s    

2020-05-16 21:33:34 (150 KB/s) - ‘test.csv’ saved [72142/72142]

--2020-05-16 21:33:36--  https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/aicrowd_educational_crdsm/data/public/train.csv
Resolving s3.eu-central-1.wasabisys.com (s3.eu-central-1.wasabisys.com)... 130.117.252.12, 130.117.252.10, 130.117.252.13, ...
Connecting to s3.eu-central-1.wasabisys.com (s3.eu-central-1.wasabisys.com)|130.117.252.12|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2543764 (2.4M) [text/csv]
Saving to: ‘train.csv’

train.csv           100%[===================>]   2.43M  1.47MB/s    in 1.6s    

2020-05-16 21:33:39 (1.47 MB/s) - ‘train.csv’ saved [2543764/2543764]

Import packages

In [0]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score

Load Data

  • We use the pandas 🐼 library to load our data.
  • Pandas loads the data into dataframes, which makes it easy to analyse.
  • Learn more about it here 🤓
In [0]:
all_data = pd.read_csv('data/train.csv')

Analyse Data

In [0]:
all_data.head()
Out[0]:
max_ndvi 20150720_N 20150602_N 20150517_N 20150501_N 20150415_N 20150330_N 20150314_N 20150226_N 20150210_N 20150125_N 20150109_N 20141117_N 20141101_N 20141016_N 20140930_N 20140813_N 20140626_N 20140610_N 20140525_N 20140509_N 20140423_N 20140407_N 20140322_N 20140218_N 20140202_N 20140117_N 20140101_N class
0 997.904 637.5950 658.668 -1882.030 -1924.36 997.904 -1739.990 630.087 -1628.240 -1325.64 -944.084 277.107 -206.7990 536.441 749.348 -482.993 492.001 655.770 -921.193 -1043.160 -1942.490 267.138 366.608 452.238 211.328 -2203.02 -1180.190 433.906 4
1 914.198 634.2400 593.705 -1625.790 -1672.32 914.198 -692.386 707.626 -1670.590 -1408.64 -989.285 214.200 -75.5979 893.439 401.281 -389.933 394.053 666.603 -954.719 -933.934 -625.385 120.059 364.858 476.972 220.878 -2250.00 -1360.560 524.075 4
2 3800.810 1671.3400 1206.880 449.735 1071.21 546.371 1077.840 214.564 849.599 1283.63 1304.910 542.100 922.6190 889.774 836.292 1824.160 1670.270 2307.220 1562.210 1566.160 2208.440 1056.600 385.203 300.560 293.730 2762.57 150.931 3800.810 4
3 952.178 58.0174 -1599.160 210.714 -1052.63 578.807 -1564.630 -858.390 729.790 -3162.14 -1521.680 433.396 228.1530 555.359 530.936 952.178 -1074.760 545.761 -1025.880 368.622 -1786.950 -1227.800 304.621 291.336 369.214 -2202.12 600.359 -1343.550 4
4 1232.120 72.5180 -1220.880 380.436 -1256.93 515.805 -1413.180 -802.942 683.254 -2829.40 -1267.540 461.025 317.5210 404.898 563.716 1232.120 -117.779 682.559 -1813.950 155.624 -1189.710 -924.073 432.150 282.833 298.320 -2197.36 626.379 -826.727 4

Here we use the describe function to get an understanding of the data. It shows summary statistics (count, mean, spread, quartiles) for all the columns. You can also use functions like info() to get more details.

In [0]:
all_data.describe()
#all_data.info()
Out[0]:
max_ndvi 20150720_N 20150602_N 20150517_N 20150501_N 20150415_N 20150330_N 20150314_N 20150226_N 20150210_N 20150125_N 20150109_N 20141117_N 20141101_N 20141016_N 20140930_N 20140813_N 20140626_N 20140610_N 20140525_N 20140509_N 20140423_N 20140407_N 20140322_N 20140218_N 20140202_N 20140117_N 20140101_N class
count 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000
mean 7282.721268 5713.832981 4777.434284 4352.914883 5077.372030 2871.423540 4898.348680 3338.303406 4902.600296 4249.307925 5094.772928 2141.881486 3255.355465 2628.115168 2780.793602 2397.228981 1548.151856 3015.626776 4787.492858 3640.367446 3027.313647 3022.054677 2041.609136 2691.604363 2058.300423 6109.309315 2563.511596 2558.926018 0.550213
std 1603.782784 2283.945491 2735.244614 2870.619613 2512.162084 2675.074079 2578.318759 2421.309390 2691.397266 2777.809493 2777.504638 2149.931518 2596.151532 2256.234526 2446.439258 2387.652138 1034.798320 1670.965823 2745.333581 2298.281052 2054.223951 2176.307289 2020.499263 2408.279935 2212.018257 1944.613487 2336.052498 2413.851082 1.009424
min 563.444000 -433.735000 -1781.790000 -2939.740000 -3536.540000 -1815.630000 -5992.080000 -1677.600000 -2624.640000 -3403.050000 -3024.250000 -4505.720000 -1570.780000 -3305.070000 -1633.980000 -482.993000 -1137.170000 372.067000 -3765.860000 -1043.160000 -4869.010000 -1505.780000 -1445.370000 -4354.630000 -232.292000 -6807.550000 -2139.860000 -4145.250000 0.000000
25% 7285.310000 4027.570000 2060.600000 1446.940000 2984.370000 526.911000 2456.310000 1017.710000 2321.550000 1379.210000 2392.480000 559.867000 1068.940000 616.822000 947.793000 513.204000 718.068000 1582.530000 2003.930000 1392.390000 1405.020000 1010.180000 429.881000 766.451000 494.858000 5646.670000 689.922000 685.680000 0.000000
50% 7886.260000 6737.730000 5270.020000 4394.340000 5584.070000 1584.970000 5638.400000 2872.980000 5672.730000 4278.880000 6261.950000 1157.170000 2277.560000 1770.350000 1600.950000 1210.230000 1260.280000 2779.570000 5266.930000 3596.680000 2671.400000 2619.180000 1245.900000 1511.180000 931.713000 6862.060000 1506.570000 1458.870000 0.000000
75% 8121.780000 7589.020000 7484.110000 7317.950000 7440.210000 5460.080000 7245.040000 5516.610000 7395.610000 7144.480000 7545.880000 3006.960000 5290.800000 4513.960000 4066.930000 3963.590000 1994.910000 4255.580000 7549.430000 5817.750000 4174.010000 4837.610000 3016.520000 4508.510000 2950.880000 7378.020000 4208.730000 4112.550000 1.000000
max 8650.500000 8377.720000 8566.420000 8650.500000 8516.100000 8267.120000 8499.330000 8001.700000 8452.380000 8422.060000 8401.100000 8477.560000 8624.780000 7932.690000 8630.420000 8210.230000 5915.740000 7492.230000 8489.970000 7981.820000 8445.410000 7919.070000 8206.780000 8235.400000 8247.630000 8410.330000 8418.230000 8502.020000 5.000000
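
Since macro-averaged F1 is the challenge metric, it is also worth checking how balanced the classes are. Here is a minimal sketch (the label column is class, as seen in the preview above):

In [0]:
# How many samples belong to each class? Heavily imbalanced classes
# will drag down the macro-averaged F1 score.
print(all_data['class'].value_counts())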

Split Data into Train and Validation 🔪

  • The next step is to think of a way to test how well our model is performing. We cannot use the given test data, as it does not contain the labels we would need to verify our predictions.
  • The workaround is to split the given training data into training and validation sets. A validation set gives us an idea of how our model will perform on unseen data: we hold back a chunk of the data while training and then use it purely for testing. It is also the standard way to tune the hyperparameters of a model.
  • There are multiple ways to split a dataset into training and validation sets; two popular ones are k-fold cross-validation and leave-one-out. 🧐
  • Validation sets also help detect whether your model is overfitting on the training data.
In [0]:
X = all_data.drop('class', axis=1)
y = all_data['class']
# Validation testing
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
  • We have decided to split the data with 20% for validation and 80% for training.
  • To learn more about the train_test_split function click here. 🧐
  • This is the simplest way to validate your model: take a random chunk of the train set and set it aside solely for testing the trained model on unseen data. As mentioned in the previous block, you can experiment 🔬 with more sophisticated techniques, such as the k-fold sketch shown after this list, and make your model better.
  • Now that we have our data split into train and validation sets, the corresponding labels are also separated from the features in the same call.
  • With this step we are all set to move on with a prepared dataset.
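
Below is a minimal sketch of k-fold cross-validation, one of the alternatives mentioned above. It is illustrative only; the choice of 5 folds and the use of the already-imported SVC are assumptions, not part of the original baseline.

In [0]:
# Sketch: 5-fold cross-validation scored with the challenge metric (macro F1).
# cross_val_score trains a fresh copy of the model on each fold.
from sklearn.model_selection import cross_val_score

scores = cross_val_score(SVC(gamma='auto'), X, y, cv=5, scoring='f1_macro')
print("Per-fold macro F1:", scores)
print("Mean macro F1:", scores.mean())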

TRAINING PHASE 🏋️

Define the Model

  • We have prepared our data and now we are ready to train our model.

  • There are a ton of classifiers to choose from, such as Logistic Regression, SVM, Random Forests, Decision Trees, etc. 🧐

  • Remember that there are no hard rules here. You can mix and match classifiers; it is advisable to read up on the numerous techniques, choose the best fit for your solution, and experiment.

  • A good model does not depend solely on the classifier but also on the features you choose. So make sure to analyse and understand your data well and move forward with a clear view of the problem at hand. You can gain important insight from here. 🧐 One such idea, scaling the features, is sketched right after this list.
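
SVMs are sensitive to feature scale, and the NDVI columns here span a wide range of values. The sketch below standardises the features inside a scikit-learn Pipeline; it is an optional experiment, not part of the original baseline.

In [0]:
# Optional sketch: standardise features before the SVM.
# scaled_svc can be used exactly like `classifier` below (fit / predict).
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

scaled_svc = make_pipeline(StandardScaler(), SVC(gamma='auto'))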

In [0]:
# classifier = LogisticRegression()

classifier = SVC(gamma='auto')

# from sklearn import tree
# classifier = tree.DecisionTreeClassifier()
  • To start you off, we have used a basic Support Vector Machine classifier here.
  • You can tune its parameters to increase performance. To see the list of parameters visit here.
  • Do keep in mind that sophisticated techniques exist for everything; the key, as noted earlier, is to look them up and experiment with what fits your implementation.

To read more about other sklearn classifiers visit here 🧐. Try other classifiers, for example Logistic Regression or MLP, and compare how the performance changes; a sketch is shown below.
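
For example, swapping in one of the already-imported classifiers only takes one line. The hyperparameter values below are illustrative starting points, not tuned choices:

In [0]:
# Illustrative alternatives (uncomment one to replace the SVC baseline).
# The hyperparameters shown are just starting points, not tuned values.

# classifier = LogisticRegression(max_iter=500)
# classifier = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)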

Train the Model

In [0]:
classifier.fit(X_train, y_train)
Out[0]:
SVC(C=1.0, break_ties=False, cache_size=200, class_weight=None, coef0=0.0,
    decision_function_shape='ovr', degree=3, gamma='auto', kernel='rbf',
    max_iter=-1, probability=False, random_state=None, shrinking=True,
    tol=0.001, verbose=False)

Got a warning? Don't worry, it's just because the number of iterations is low (defined in the classifier in the cell above). Increase the number of iterations and see if the warning vanishes, but do remember that increasing iterations also increases the running time. (Hint: max_iter=500)

Validation Phase 🤔

Wondering how well your model learned? Let's check.

Predict on Validation

Now we predict with our trained model on the validation set we created, and evaluate the model on unseen data.

In [0]:
y_pred = classifier.predict(X_val)

Evaluate the Performance

  • We have used basic metrics to quantify the performance of our model.
  • This is a crucial step: you should reason about the metrics and use them as hints to improve aspects of your model.
  • Do read up on the meaning and use of different metrics. There are many more metrics and measures, and you should learn to use them appropriately for your solution, dataset and other factors.
  • F1 score is the metric for this challenge.
In [0]:
# Note: precision and recall use micro averaging here, while F1 uses macro
# averaging (the challenge metric), which is why the numbers differ below.
precision = precision_score(y_val, y_pred, average='micro')
recall = recall_score(y_val, y_pred, average='micro')
accuracy = accuracy_score(y_val, y_pred)
f1 = f1_score(y_val, y_pred, average='macro')
In [0]:
print("Accuracy of the model is :" ,accuracy)
print("Recall of the model is :" ,recall)
print("Precision of the model is :" ,precision)
print("F1 score of the model is :" ,f1)
Accuracy of the model is : 0.7140825035561877
Recall of the model is : 0.7140825035561877
Precision of the model is : 0.7140825035561877
F1 score of the model is : 0.138865836791148
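
Notice that the macro-averaged F1 is much lower than the accuracy; that usually means the model does well on frequent classes but poorly on rare ones. A quick way to see the per-class breakdown is scikit-learn's classification_report (a sketch, not part of the original baseline):

In [0]:
# Per-class precision, recall and F1, to see which classes pull the macro F1 down
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))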

Testing Phase 😅

We are almost done. We have trained and validated on the training data. Now it's time to predict on the test set and make a submission.

Load Test Set

Load the test data on which final submission is to be made.

In [0]:
test_data = pd.read_csv('data/test.csv')

Predict Test Set

Time for the moment of truth! Predict on the test set and prepare to make the submission.

In [0]:
y_test = classifier.predict(test_data)

Save the predictions to CSV

In [0]:
df = pd.DataFrame(y_test,columns=['class'])
df.to_csv('submission.csv',index=False)
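
As a quick sanity check before submitting, you can reload the file and confirm it has the expected header and number of rows:

In [0]:
# Reload the submission to verify the header and row count
submission = pd.read_csv('submission.csv')
print(submission.shape)
print(submission.head())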

🚧 Note :

  • Do take a look at the submission format.
  • The submission file should contain a header.
  • Follow all submission guidelines strictly to avoid inconvenience.

To download the generated CSV in Colab, run the command below.

In [0]:
try:
  from google.colab import files
  files.download('submission.csv')
except ImportError:
  print("Only for Colab")

Well done! 👍 We are all set to make a submission and see your name on the leaderboard. Let's navigate to the challenge page and make one.