import sys
!pip install numpy
!pip install pandas
!pip install scikit-learn
The first step is to download our train and test data. We will train a classifier on the train data, make predictions on the test data, and submit those predictions.
!rm -rf data
!mkdir data
!wget https://datasets.aicrowd.com/default/aicrowd-practice-challenges/public/incpr/v0.1/test.csv
!wget https://datasets.aicrowd.com/default/aicrowd-practice-challenges/public/incpr/v0.1/train.csv
!mv train.csv data/train.csv
!mv test.csv data/test.csv
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score
train_data_path = "data/train.csv"  # path where the data was stored by the cell above
train_data = pd.read_csv(train_data_path)  # load the data into a dataframe using pandas
We can see that each column contains a different attribute. The dataset is a mixture of numeric ("int") and text columns.
Select the columns you want to train on. We could also select the text columns, but then we would need to map that text to numbers, i.e. encode it, before selecting. To know more about encoding, visit here.
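As a quick illustration of encoding, a text column can be one-hot encoded with pandas (a minimal sketch; the `workclass` column and its values here are made up for illustration, not taken from the dataset):

```python
import pandas as pd

# Toy frame with one text column (illustrative values only)
df = pd.DataFrame({"workclass": ["Private", "State-gov", "Private"],
                   "age": [39, 50, 38]})

# One-hot encode the text column so a classifier can consume it:
# each distinct value becomes its own indicator column
encoded = pd.get_dummies(df, columns=["workclass"])
print(encoded.columns.tolist())
```

After encoding, every column is numeric and can be passed to any scikit-learn classifier.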
train_data = train_data[['age','education num','capital gain','capital loss','working hours per week','income']]
Split Data into Train and Validation¶
Now we want to see how well our classifier is performing, but we don't have the test labels to check against. What do we do? We split our dataset into train and validation sets. The idea is that we evaluate our classifier on the validation set to get an idea of how well it works. This way we can also ensure that we don't overfit on the train dataset. There are many ways to do validation, such as k-fold, leave-one-out, etc.
X_train, X_val= train_test_split(train_data, test_size=0.2, random_state=42)
Here we have set the size of the validation data to 20% of the total data. You can change it and see what effect it has on the accuracies. To learn more about the train_test_split function click here.
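The k-fold validation mentioned above can be sketched with scikit-learn's `KFold` (synthetic data here, just to show the mechanics):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # 10 toy samples, 2 features each

# 5 folds: each fold holds out 2 of the 10 samples for validation
kf = KFold(n_splits=5, shuffle=True, random_state=42)

for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
    print(f"fold {fold}: train={len(train_idx)}, val={len(val_idx)}")
```

Each sample ends up in the validation set exactly once, so the model is scored on all of the data across the 5 folds.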
Now that we have our data split into train and validation sets, we need to separate the labels from the features.
Check which column contains the variable that needs to be predicted. Here it is the last column.
X_train, y_train = X_train.iloc[:, :-1], X_train.iloc[:, -1]
X_val, y_val = X_val.iloc[:, :-1], X_val.iloc[:, -1]
Define the Classifier¶
We have prepared our data, and now we train a classifier. The classifier learns a function by looking at the inputs and their corresponding outputs. There are a ton of classifiers to choose from, including Logistic Regression, SVM, Random Forests, Decision Trees, etc.
Tip: A good model doesn't depend solely on the classifier but on the features (columns) you choose. So make sure to play with your data and keep only what's important.
classifier = SVC(gamma='auto', max_iter=100)
# from sklearn.linear_model import LogisticRegression
# classifier = LogisticRegression()

classifier.fit(X_train, y_train)  # train the classifier on the training data
Got a warning! Don't worry, it's just because the number of iterations is very small (set in the classifier in the cell above). Increase the number of iterations and see whether the warning vanishes, and also see how the performance changes. Do remember that increasing iterations also increases the running time. (Hint: max_iter=500)
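Following the hint, the classifier can be re-defined with a higher iteration cap and re-fit. A minimal sketch (synthetic data stands in for `X_train`/`y_train` here; in the notebook you would fit on those instead):

```python
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Synthetic stand-in for X_train / y_train, just to show the effect of max_iter
X_demo, y_demo = make_classification(n_samples=200, n_features=5, random_state=42)

classifier = SVC(gamma='auto', max_iter=500)  # higher cap, as hinted
classifier.fit(X_demo, y_demo)
print(classifier.score(X_demo, y_demo))  # training accuracy
```

With a higher `max_iter`, the solver has more room to converge before being cut off, which is what silences the warning.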
Predict on Validation¶
Now we run our trained classifier on the validation set and evaluate the model.
y_pred = classifier.predict(X_val)
precision = precision_score(y_val, y_pred, average='micro')
recall = recall_score(y_val, y_pred, average='micro')
accuracy = accuracy_score(y_val, y_pred)
f1 = f1_score(y_val, y_pred, average='macro')
print("Accuracy of the model is :", accuracy)
print("Recall of the model is :", recall)
print("Precision of the model is :", precision)
print("F1 score of the model is :", f1)
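To see what these metrics mean, here is a tiny worked example on hand-made binary labels (illustrative values only, not from the dataset):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1]  # 4 actual positives
y_pred = [1, 0, 0, 1, 0, 1]  # 3 predicted positives, one positive missed

# 5 of 6 labels match
print(accuracy_score(y_true, y_pred))   # 0.8333...
# All 3 predicted positives are correct
print(precision_score(y_true, y_pred))  # 1.0
# Only 3 of the 4 actual positives were found
print(recall_score(y_true, y_pred))     # 0.75
```

Precision penalizes false alarms, recall penalizes misses, and F1 is the harmonic mean of the two.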
final_test_path = "data/test.csv"
final_test = pd.read_csv(final_test_path)
final_test = final_test[['age','education num','capital gain','capital loss','working hours per week']]
Predict Test Set¶
Time for the moment of truth! Predict on the test set and make the submission.
submission = classifier.predict(final_test)
submission = pd.DataFrame(submission)
submission.to_csv('submission.csv', header=['income'], index=False)
Note: Do take a look at the submission format. The submission file should contain a header. For example, here it is "income".
try:
    from google.colab import files
    files.download('submission.csv')
except:
    print("Only in Colab")