Download Necessary Packages¶
import sys
!pip install numpy
!pip install pandas
!pip install scikit-learn
The first step is to download our train and test data. We will train a classifier on the train data, make predictions on the test data, and then submit those predictions to the challenge.
# Download the datasets
!rm -rf data
!mkdir data
!wget https://datasets.aicrowd.com/default/aicrowd-practice-challenges/public/adclk/v0.1/test.csv
!wget https://datasets.aicrowd.com/default/aicrowd-practice-challenges/public/adclk/v0.1/train.csv
!mv train.csv data/train.csv
!mv test.csv data/test.csv
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score, log_loss
all_data_path = "data/train.csv" #path where data is stored
all_data = pd.read_csv(all_data_path) #load data in dataframe using pandas
Visualize the data 👀¶
We can see the dataset contains 12 columns, where columns 2-12 contain information about the person being contacted and the first column tells whether they clicked on the ad or not.
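To take a quick look at the columns yourself, a minimal pandas sketch:

all_data.head()  # peek at the first few rows and all 12 columns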
Let us now pre-process the data to remove any unwanted columns: we drop url_hash and advertiser_id, as shown below.
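A minimal cell for this step, mirroring the drop applied to the test set later in this notebook:

# Drop identifier columns that carry no predictive signal
all_data.drop(["url_hash", "advertiser_id"], axis=1, inplace=True)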
Split Data into Train and Validation 🔪¶
- The next step is to think of a way to test how well our model is performing. We cannot use the given test data, as it does not contain the labels needed to verify our predictions.
- The workaround is to split the given training data into training and validation sets. A validation set gives us an idea of how our model will perform on unseen data: it is like holding back a chunk of data while training the model and then using that chunk purely for testing. It is also a standard way to fine-tune the hyperparameters of a model.
- There are multiple ways to split a dataset into validation and training sets. Two popular approaches are k-fold cross-validation and leave-one-out cross-validation (see the k-fold sketch after this list). 🧐
- Validation sets are also used to keep your model from overfitting on the train dataset.
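For illustration, a minimal k-fold sketch using scikit-learn's KFold; this notebook itself uses the simpler hold-out split in the next cell:

from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kf.split(all_data)):
    fold_train = all_data.iloc[train_idx]  # 4/5 of the data for training
    fold_val = all_data.iloc[val_idx]      # the remaining 1/5 for validation
    # train and evaluate a model on each fold, then average the scores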
X_train, X_val = train_test_split(all_data, test_size=0.2, random_state=42)
- We have decided to split the data with 20% as validation and 80% as training.
- To learn more, see the scikit-learn documentation for the train_test_split function. 🧐
- This is the simplest way to validate your model: take a random chunk of the train set and set it aside solely for testing the trained model on unseen data. As mentioned in the previous block, you can experiment 🔬 with more sophisticated techniques to make your model better.
- Now that we have our data split into train and validation sets, we need to separate the labels from the features.
- With this step done, we are all set to move on with a prepared dataset.
X_train, y_train = X_train.iloc[:, 1:], X_train.iloc[:, 0]
X_val, y_val = X_val.iloc[:, 1:], X_val.iloc[:, 0]
print(X_train)
TRAINING PHASE 🏋️¶
Define the Model¶
We have prepared our data and are now ready to train our model.
Remember that there are no hard and fast rules here. You can mix and match classifiers; it is advisable to read up on the numerous techniques and choose the best fit for your solution. Experimentation is the key.
A good model does not depend solely on the classifier but also on the features you choose. So make sure to analyse and understand your data well and move forward with a clear view of the problem at hand, since that is where the important insights come from. 🧐
classifier = SVC(gamma='auto')

# classifier = MLPClassifier(hidden_layer_sizes=(1024, 512), max_iter=300, activation='relu', solver='adam', random_state=1)

# from sklearn.linear_model import LogisticRegression
# classifier = LogisticRegression()
- To start you off, we have used a basic Support Vector Machine classifier here.
- But you can tune its parameters to increase performance; the full list is in the scikit-learn documentation for SVC, and a grid-search sketch follows this list.
- Do keep in mind there exist sophisticated techniques for everything; the key, as quoted earlier, is to look them up and experiment to fit your implementation.
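As one way to experiment, a minimal hyperparameter search sketch using scikit-learn's GridSearchCV; the grid values here are illustrative, not tuned for this dataset:

from sklearn.model_selection import GridSearchCV

param_grid = {'C': [0.1, 1, 10], 'gamma': ['scale', 'auto']}  # illustrative values
search = GridSearchCV(SVC(), param_grid, scoring='f1_macro', cv=3)
search.fit(X_train, y_train)  # note: SVC grid search can be slow on large data
print(search.best_params_)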
Train the Model¶
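Training is a single call to the classifier's fit method on the training split:

classifier.fit(X_train, y_train)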
Got a warning? Don't worry, it's just because the number of iterations is small (defined in the classifier in the cell above). Increase the number of iterations and see whether the warning vanishes, and also see how the performance changes. Do remember that increasing the iterations also increases the running time. (Hint: max_iter=500)
Validation Phase 🤔¶
Wondering how well your model learned? Let's check.
Predict on Validation¶
Now we predict with our trained model on the validation set we created and evaluate the model on unseen data.
y_pred = classifier.predict(X_val)
print(y_pred)
Evaluate the Performance¶
- We have used basic metrics to quantify the performance of our model.
- This is a crucial step: reason about the metrics and take hints from them to improve aspects of your model.
- Do read up on the meaning and use of different metrics. There exist many more metrics and measures, and you should learn to use them correctly with respect to the solution, dataset, and other factors.
- F1 score and Log Loss are the metrics for this challenge.
precision = precision_score(y_val, y_pred, average='micro')
recall = recall_score(y_val, y_pred, average='micro')
accuracy = accuracy_score(y_val, y_pred)
f1 = f1_score(y_val, y_pred, average='macro')
print("Accuracy of the model is :" ,accuracy) print("Recall of the model is :" ,recall) print("Precision of the model is :" ,precision) print("F1 score of the model is :" ,f1)
Testing Phase 😅¶
We are almost done. We trained and validated on the training data. Now it is time to predict on the test set and make a submission.
Load Test Set¶
Load the test data on which final submission is to be made.
final_test_path = "data/test.csv" final_test = pd.read_csv(final_test_path) final_test.drop(["url_hash","advertiser_id"],axis=1,inplace=True) len(final_test)
Predict Test Set¶
Predict on the test set and you are all set to make the submission!
submission = classifier.predict(final_test)
len(submission)
Save the predictions to CSV¶
# Change the header according to the submission guidelines
submission = pd.DataFrame(submission)
submission.to_csv('submission.csv', header=['click'], index=False)
🚧 Note:
- Do take a look at the submission format.
- The submission file should contain a header.
- Follow all submission guidelines strictly to avoid inconvenience.
To download the generated CSV in Colab, run the cell below¶
try:
    from google.colab import files
    files.download('submission.csv')
except ImportError as e:
    print("Only for Colab")