import sys
!pip install numpy
!pip install pandas
!pip install scikit-learn
The first step is to download the train, validation, and test data. We will train a classifier on the train data and make predictions on the validation and test data. We then submit our test predictions to the challenge.
# Download the datasets
!rm -rf data
!mkdir data
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/scrbl/v0.1/train.zip
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/scrbl/v0.1/val.zip
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/scrbl/v0.1/test.zip
!unzip train.zip
!unzip val.zip
!unzip test.zip
!mv train.csv data/train.csv
!mv val.csv data/val.csv
!mv test.csv data/test.csv
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score, log_loss
train_path = "data/train.csv"  # path where train data is stored
val_path = "data/val.csv"      # path where val data is stored
train_data = pd.read_csv(train_path)  # load data into a dataframe using pandas
val_data = pd.read_csv(val_path)
|   | text | label |
|---|------|-------|
| 0 | A captive portal is a web page accessed with a... | unscrambled |
| 1 | Honeymoon Ahead is a 1945 American comedy film... | unscrambled |
| 2 | Pass Creek Bridge is a covered bridge in the c... | unscrambled |
| 3 | A machine-readable passport (MRP) is a machine... | unscrambled |
| 4 | Three Jane's 1997 and by Kevin Addiction direc... | scrambled |
|   | text | label |
|---|------|-------|
| 0 | Lewellyn Anthony Gonsalvez (born 11 September ... | unscrambled |
| 1 | Paul D. Thacker, sometimes bylined as Paul Tha... | unscrambled |
| 2 | A Lego clone is a line or brand of children's ... | scrambled |
| 3 | An enhancer trap is a method in molecular biol... | unscrambled |
| 4 | Henry de Botebrigge or Henry of Budbridge (die... | scrambled |
The dataset contains texts along with labels, either unscrambled or scrambled.
X_train, y_train = train_data['text'], train_data['label']
X_val, y_val = val_data['text'], val_data['label']
print(X_train)
0         A captive portal is a web page accessed with a...
1         Honeymoon Ahead is a 1945 American comedy film...
2         Pass Creek Bridge is a covered bridge in the c...
3         A machine-readable passport (MRP) is a machine...
4         Three Jane's 1997 and by Kevin Addiction direc...
                                ...
599997    A gas-filled tube, also known as a discharge t...
599998    M-68 is an east west state trunkline highway l...
599999    Brian E. Mueller is an American academic and u...
600000    The Zagreb Indoors (currently sponsored by PBZ...
600001    Cryptostylis ovata, commonly known as the slip...
Name: text, Length: 600002, dtype: object
Text files are ordered sequences of words. To run machine learning algorithms on them, we need to convert the text into numerical feature vectors. We will use the bag-of-words model for our example. Briefly, we segment each text into words (for English, splitting on whitespace), count how many times each word occurs in each document, and assign each unique word an integer id. Each unique word in our dictionary then corresponds to one (descriptive) feature.
Scikit-learn has a high-level component that creates the feature vectors for us: CountVectorizer. More about it here.
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
X_train_counts.shape
Here, by calling count_vect.fit_transform(X_train), we learn the vocabulary dictionary and get back a document-term matrix of shape [n_samples, n_features].
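To see what the vocabulary and document-term matrix look like, here is a minimal sketch on a hypothetical two-document corpus (the sentences are illustrative, not from the challenge data):

```python
from sklearn.feature_extraction.text import CountVectorizer

# A toy corpus, just to show the learned vocabulary and counts
docs = ["the cat sat on the mat", "the dog sat"]

vect = CountVectorizer()
counts = vect.fit_transform(docs)

print(sorted(vect.vocabulary_))  # each unique word gets an integer id
print(counts.shape)              # (n_samples, n_features) -> (2, 6)
print(counts.toarray())          # per-document word counts, e.g. "the" twice in doc 0
```

The matrix is sparse by default; `.toarray()` is only for inspection and would not be practical on the full 600k-row training set.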
TF: Just counting the number of words in each document has one issue: it gives more weight to longer documents than to shorter ones. To avoid this, we can use term frequencies (TF), i.e. count(word) / total words, in each document.
TF-IDF: Finally, we can also reduce the weight of very common words (the, is, an, etc.) that occur in almost every document. This is called TF-IDF, i.e. Term Frequency times Inverse Document Frequency.
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
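The down-weighting of common words can be seen on a toy corpus (illustrative sentences, not challenge data): "the" occurs in every document, so its IDF is lower than that of a word unique to one document.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

# "the" appears in all three documents, "cat" in only one
docs = ["the cat sat", "the dog ran", "the bird flew"]

vect = CountVectorizer()
tfidf = TfidfTransformer().fit_transform(vect.fit_transform(docs))

row0 = tfidf.toarray()[0]
# IDF gives the ubiquitous "the" a smaller weight than the rarer "cat"
print(row0[vect.vocabulary_['the']] < row0[vect.vocabulary_['cat']])  # True
```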
Define the Model
We have prepared our data and are now ready to train our model.
Remember that there are no hard-and-fast rules here: you can mix and match classifiers. It is advisable to read up on the numerous techniques and choose the best fit for your solution; experimentation is key.
A good model does not depend solely on the classifier but also on the features you choose. So make sure to analyse and understand your data well, and move forward with a clear view of the problem at hand. You can gain important insights from here. 🧐
# classifier = SVC(gamma='auto')
classifier = MultinomialNB()
# from sklearn.linear_model import LogisticRegression
# classifier = LogisticRegression()
- To start you off, we have used a basic Naive Bayes classifier here.
- But you can tune its parameters to increase performance. To see the list of parameters, visit here.
- Do keep in mind that sophisticated techniques exist for everything; the key, as noted earlier, is to look them up and experiment to find what fits your implementation.
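One common way to tune parameters is a grid search over the pipeline. Below is a minimal sketch on toy data; the `clf__alpha` grid values and the example documents are placeholders, not tuned for the challenge data:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Toy documents and labels, purely illustrative
docs = ["good film", "bad film", "great movie", "awful movie"] * 3
labels = ["pos", "neg", "pos", "neg"] * 3

pipe = Pipeline([('vect', CountVectorizer()),
                 ('tfidf', TfidfTransformer()),
                 ('clf', MultinomialNB())])

# Cross-validated search over the Naive Bayes smoothing parameter
grid = GridSearchCV(pipe, {'clf__alpha': [0.1, 0.5, 1.0]}, cv=3)
grid.fit(docs, labels)
print(grid.best_params_)
```

On the real data you would fit the grid on `X_train, y_train` and use `grid.best_estimator_` in place of `text_clf`.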
Building a pipeline: we can write less code and do all of the above by building a pipeline as follows:
text_clf = Pipeline([('vect', CountVectorizer(stop_words='english')),
                     ('tfidf', TfidfTransformer()),
                     ('clf', classifier)])
text_clf = text_clf.fit(X_train, y_train)
Tip: to improve your accuracy you can try something called stemming.
Stemming is the process of reducing inflected (or sometimes derived) words to their word stem, base or root form. E.g. A stemming algorithm reduces the words “fishing”, “fished”, and “fisher” to the root word, “fish”.
We need NLTK, which can be installed from here. NLTK comes with various stemmers that can help reduce words to their root form. Below we use the Snowball stemmer, which works very well for English.
"""import nltk # Download the correct package nltk.download('stopwords') from nltk.stem.snowball import SnowballStemmer stemmer = SnowballStemmer("english", ignore_stopwords=True) # Creating a new Count Vectorizer class StemmedCountVectorizer(CountVectorizer): def build_analyzer(self): analyzer = super(StemmedCountVectorizer, self).build_analyzer() return lambda doc: ([stemmer.stem(w) for w in analyzer(doc)]) stemmed_count_vect = StemmedCountVectorizer(stop_words='english') text_clf = Pipeline([('vect', stemmed_count_vect), ('tfidf', TfidfTransformer()), ('clf', classifier)]) text_clf = text_clf.fit(X_train, y_train)"""
Predict on Validation
Now we predict with our trained model on the validation set we created, to evaluate the model on unseen data.
y_pred = text_clf.predict(X_val)
print(y_pred)
['unscrambled' 'unscrambled' 'unscrambled' ... 'scrambled' 'unscrambled' 'scrambled']
Evaluate the Performance
- We have used basic metrics to quantify the performance of our model.
- This is a crucial step: you should reason about the metrics and take hints from them to improve aspects of your model.
- Do read up on the meaning and use of different metrics. More metrics and measures exist; you should learn to use them correctly with respect to the solution, dataset, and other factors.
- F1 score and Log Loss are the metrics for this challenge.
precision = precision_score(y_val, y_pred, average='micro')
recall = recall_score(y_val, y_pred, average='micro')
accuracy = accuracy_score(y_val, y_pred)
f1 = f1_score(y_val, y_pred, average='macro')

print("Accuracy of the model is :", accuracy)
print("Recall of the model is :", recall)
print("Precision of the model is :", precision)
print("F1 score of the model is :", f1)
Accuracy of the model is : 0.55085 Recall of the model is : 0.55085 Precision of the model is : 0.55085 F1 score of the model is : 0.5506074930182191
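Log Loss is listed as a challenge metric but is not computed above. Since MultinomialNB is a probabilistic classifier, the fitted pipeline exposes predict_proba, which is what log_loss needs. A self-contained sketch on toy stand-in data (on the real data you would call text_clf.predict_proba(X_val) and score against y_val):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import log_loss
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Toy stand-in for text_clf and the validation split (illustrative data only)
X = ["word salad text here", "an ordinary english sentence",
     "salad word here text", "plain english prose sentence"]
y = ["scrambled", "unscrambled", "scrambled", "unscrambled"]

clf = Pipeline([('vect', CountVectorizer()),
                ('tfidf', TfidfTransformer()),
                ('clf', MultinomialNB())]).fit(X, y)

# log_loss scores class probabilities, not hard label predictions
proba = clf.predict_proba(X)
print("Log loss of the model is :", log_loss(y, proba))
```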
Testing Phase 😅
We are almost done. We trained and validated on the training data. Now it is time to predict on the test set and make a submission.
final_test_path = "data/test.csv"
final_test = pd.read_csv(final_test_path)
len(final_test)
submission = text_clf.predict(final_test['text'])
submission = pd.DataFrame(submission)
submission.to_csv('submission.csv', header=['label'], index=False)
🚧 Note :
- Do take a look at the submission format.
- The submission file should contain a header.
- Follow all submission guidelines strictly to avoid inconvenience.
try:
    from google.colab import files
    files.download('submission.csv')
except ImportError as e:
    print("Only for Colab")