Baseline INCPR Educational Challenge

Baseline for INCPR Educational Challenge on AIcrowd

Author : Ayush Shivani

To open this notebook on Google Colaboratory (Colab), click below!

Open In Colab

Download Necessary Packages

In [ ]:
import sys
!{sys.executable} -m pip install numpy
!{sys.executable} -m pip install pandas
!{sys.executable} -m pip install scikit-learn

Download data

In [ ]:
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/aicrowd_educational_incpr/data/public/test.csv
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/aicrowd_educational_incpr/data/public/train.csv

Import packages

In [ ]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score,precision_score,recall_score,accuracy_score

Load the data

In [ ]:
train_data_path = "train.csv" #path where data is stored
In [ ]:
train_data = pd.read_csv(train_data_path) #load data in dataframe using pandas

Visualize data

In [ ]:
train_data.head()

We can see that each column contains a different attribute. The dataset is a mixture of numeric ("int") and text columns.

Select the columns you want to train on. You can also select the text columns, but then you need to map the text values to numbers, i.e. encode them, before selecting them. To know more about encoding, visit here; an optional sketch follows the next cell.

In [ ]:
train_data = train_data[['age','education num','capital gain','capital loss','working hours per week','income']]
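If you want to include a text column as well, the sketch below shows one way to encode it with pandas. This is only an optional illustration: the column name 'occupation' is an assumption about the raw CSV and may need to be replaced with a text column that actually exists in your data.

In [ ]:
# Optional sketch (assumption: the raw CSV has a text column named 'occupation').
# pd.get_dummies one-hot encodes the text values into numeric indicator columns.
raw_data = pd.read_csv(train_data_path)
encoded_data = pd.get_dummies(raw_data, columns=['occupation'])
encoded_data.head()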

Split the data into train/validation sets

In [ ]:
X_train, X_test= train_test_split(train_data, test_size=0.2, random_state=42)

Here we have set the size of the validation data to 20% of the total data. You can change it and see what effect it has on the scores. To learn more about the train_test_split function, click here.
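If the income classes are imbalanced, a stratified split keeps the class proportions similar in both sets. The sketch below is an optional variation of the cell above (it assumes 'income' is the label column we selected earlier), not part of the original baseline.

In [ ]:
# Optional sketch: stratified split so train and validation keep the same class balance.
X_train, X_test = train_test_split(
    train_data, test_size=0.2, random_state=42, stratify=train_data['income'])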

Now that we have our data split into train and validation sets, we need to separate the label from the features.

Check which column contains the variable that needs to be predicted. Here it is the last column.

In [ ]:
X_train,y_train = X_train.iloc[:,:-1],X_train.iloc[:,-1]
X_test,y_test = X_test.iloc[:,:-1],X_test.iloc[:,-1]

Define the classifier

In [ ]:
classifier = LogisticRegression(solver = 'lbfgs',multi_class='auto', max_iter=10)

We have used Logistic Regression as the classifier here and set a few of its parameters, but you can set more parameters to improve the performance. To see the list of parameters, visit here.

We can also use other classifiers. To read more about sklearn classifiers, visit here. Try other classifiers and see how the performance of your model changes.
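As one example of swapping in another classifier, the sketch below uses scikit-learn's RandomForestClassifier. It is left commented out so the LogisticRegression baseline above stays in effect; uncomment it to try it, keeping in mind the parameter values shown here are only a starting point.

In [ ]:
# Optional sketch: try another scikit-learn classifier with the same fit/predict interface.
from sklearn.ensemble import RandomForestClassifier
# classifier = RandomForestClassifier(n_estimators=100, random_state=42)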

Train the classifier

In [ ]:
classifier.fit(X_train, y_train)

Got a warning? Don't worry, it is just because the number of iterations is very small (max_iter=10, set in the cell above). Increase the number of iterations and see if the warning vanishes, but do remember that increasing the iterations also increases the running time. (Hint: max_iter=500)
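A quick sketch of the hinted fix, refitting with a larger max_iter so the lbfgs solver can converge:

In [ ]:
# Optional: increase max_iter (as hinted above) and refit the classifier.
classifier = LogisticRegression(solver='lbfgs', multi_class='auto', max_iter=500)
classifier.fit(X_train, y_train)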

Predict on test set

In [ ]:
y_pred = classifier.predict(X_test)

Find the scores

In [ ]:
precision = precision_score(y_test,y_pred,average='micro')
recall = recall_score(y_test,y_pred,average='micro')
accuracy = accuracy_score(y_test,y_pred)
f1 = f1_score(y_test,y_pred,average='macro')
In [ ]:
print("Accuracy of the model is :" ,accuracy)
print("Recall of the model is :" ,recall)
print("Precision of the model is :" ,precision)
print("F1 score of the model is :" ,f1)

Prediction on Evaluation Set

Load the evaluation data

In [ ]:
final_test_path = "test.csv"
final_test = pd.read_csv(final_test_path)
final_test = final_test[['age','education num','capital gain','capital loss','working hours per week']]

Predict on evaluation set

In [ ]:
submission = classifier.predict(final_test)

Save the prediction to csv

In [ ]:
submission = pd.DataFrame(submission)
submission.to_csv('/tmp/submission.csv',header=['income'],index=False)

Note: Do take a look at the submission format. The submission file should contain a header; for example, here it is "income".
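An optional sanity check that the saved file has the expected header and values:

In [ ]:
# Optional: confirm the 'income' header and the first few predictions.
pd.read_csv('/tmp/submission.csv').head()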

To download the generated csv in Colab, run the command below.

In [ ]:
from google.colab import files
files.download('/tmp/submission.csv')

Go to the AIcrowd platform, participate in the challenge, and submit the submission.csv you generated.
