The dataset contains around 200,000 news headlines from 2012 to 2018, obtained from HuffPost. A model trained on this dataset could be used to suggest category tags for untagged news articles or to characterize the type of language used in different kinds of articles.
Each news headline has a corresponding category. The categories and their article counts are as follows:
POLITICS: 32739
WELLNESS: 17827
ENTERTAINMENT: 16058
TRAVEL: 9887
STYLE & BEAUTY: 9649
PARENTING: 8677
HEALTHY LIVING: 6694
QUEER VOICES: 6314
FOOD & DRINK: 6226
BUSINESS: 5937
COMEDY: 5175
SPORTS: 4884
BLACK VOICES: 4528
HOME & LIVING: 4195
PARENTS: 3955
THE WORLDPOST: 3664
WEDDINGS: 3651
WOMEN: 3490
IMPACT: 3459
DIVORCE: 3426
CRIME: 3405
MEDIA: 2815
WEIRD NEWS: 2670
GREEN: 2622
WORLDPOST: 2579
RELIGION: 2556
STYLE: 2254
SCIENCE: 2178
WORLD NEWS: 2177
TASTE: 2096
TECH: 2082
MONEY: 1707
ARTS: 1509
FIFTY: 1401
GOOD NEWS: 1398
ARTS & CULTURE: 1339
ENVIRONMENT: 1323
COLLEGE: 1144
LATINO VOICES: 1129
CULTURE & ARTS: 1030
EDUCATION: 1004
We divide the dataset into train, development, and test sets.
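One way to implement this split (a sketch; the 80/10/10 fractions and the fixed seed are my own assumptions, not stated in the write-up):

```python
import random

def train_dev_test_split(records, dev_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle records and split them into train / development / test lists."""
    rng = random.Random(seed)       # fixed seed so the split is reproducible
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_dev = int(n * dev_frac)
    n_test = int(n * test_frac)
    dev = shuffled[:n_dev]
    test = shuffled[n_dev:n_dev + n_test]
    train = shuffled[n_dev + n_test:]
    return train, dev, test
```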
Build a vocabulary as a list, e.g. ['the', 'I', 'happy', ...], omitting rare words (for example, words occurring fewer than five times).
A reverse index mapping each word to its position can be handy: {"the": 0, "I": 1, "happy": 2, ...}
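A minimal sketch of both steps, assuming the documents have already been tokenized into lists of words:

```python
from collections import Counter

def build_vocab(tokenized_docs, min_count=5):
    """Count word occurrences across all documents, keep words seen at
    least `min_count` times, and return the vocabulary list together
    with a reverse index mapping each word to its position."""
    counts = Counter(w for doc in tokenized_docs for w in doc)
    vocab = [w for w, c in counts.items() if c >= min_count]
    word_index = {w: i for i, w in enumerate(vocab)}
    return vocab, word_index
```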
We then calculate the following probabilities:
Probability of occurrence:
P["the"] = number of documents containing "the" / number of all documents
Conditional probability given a category:
P["the" | POLITICS] = number of POLITICS documents containing "the" / number of all POLITICS documents
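These document-frequency estimates can be sketched as follows (here each document is assumed to be a set of its tokens; the POLITICS example is illustrative):

```python
def word_probability(word, docs):
    """P[word] = fraction of all documents that contain the word."""
    return sum(word in doc for doc in docs) / len(docs)

def conditional_word_probability(word, docs, labels, category):
    """P[word | category] = fraction of documents labeled `category`
    that contain the word."""
    in_cat = [doc for doc, lab in zip(docs, labels) if lab == category]
    return sum(word in doc for doc in in_cat) / len(in_cat)
```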
We calculate accuracy on the development set.
We conduct five-fold cross-validation to measure and compare the effect of smoothing.
We derive the top 10 words that predict each class, i.e. the words with the highest P[class | word].
Finally, we evaluate on the test set using the optimal hyperparameters found in the previous steps and report the final accuracy.
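The cross-validation folds can be generated with a small helper (a sketch; giving the remainder to the last fold is my own choice):

```python
def k_fold_indices(n, k=5):
    """Yield (train_indices, val_indices) pairs for k-fold cross-validation
    over n examples; any remainder goes into the last fold."""
    fold = n // k
    idx = list(range(n))
    for i in range(k):
        # last fold absorbs the leftover examples when n % k != 0
        val = idx[i * fold:(i + 1) * fold] if i < k - 1 else idx[i * fold:]
        val_set = set(val)
        train = [j for j in idx if j not in val_set]
        yield train, val
```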
Objective
News headline classification is a text classification task. We use a dataset from the Kaggle platform to categorize news headlines from past years, applying a Naive Bayes classifier to predict each headline's category. Feature extraction, selection, and evaluation are carried out as part of the text classification pipeline.
Naive Bayes Classifier ---> P(A|B) = P(B|A) · P(A) / P(B)
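A minimal classifier following this rule, using log probabilities to avoid underflow and Laplace (add-alpha) smoothing (a sketch, not the notebook's exact code; the class and method names are my own):

```python
import math
from collections import Counter

class NaiveBayes:
    def __init__(self, alpha=1.0):
        self.alpha = alpha  # Laplace smoothing strength

    def fit(self, docs, labels):
        """docs: list of token lists; labels: list of category names."""
        self.classes = sorted(set(labels))
        self.word_counts = {c: Counter() for c in self.classes}
        self.totals = {c: 0 for c in self.classes}
        self.vocab = set()
        class_counts = Counter(labels)
        for doc, lab in zip(docs, labels):
            for w in doc:
                self.word_counts[lab][w] += 1
                self.totals[lab] += 1
                self.vocab.add(w)
        n = len(labels)
        self.priors = {c: class_counts[c] / n for c in self.classes}  # P(A)
        return self

    def predict(self, doc):
        """Pick the class maximizing log P(class) + sum log P(word|class)."""
        V = len(self.vocab)
        best, best_score = None, -math.inf
        for c in self.classes:
            score = math.log(self.priors[c])
            for w in doc:
                num = self.word_counts[c][w] + self.alpha
                den = self.totals[c] + self.alpha * V
                score += math.log(num / den)
            if score > best_score:
                best, best_score = c, score
        return best
```

Note that the denominator P(B) is the same for every class, so it can be dropped when comparing scores.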
NLTK Tokenizer Package
Tokenizers divide strings into lists of substrings. Tokenizers can be used to find the words and punctuation in a string:
![](https://static.wixstatic.com/media/2f0fe6_9986a7ef6c474283addc57bdb05a092a~mv2.png/v1/fill/w_980,h_332,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/2f0fe6_9986a7ef6c474283addc57bdb05a092a~mv2.png)
![](https://static.wixstatic.com/media/2f0fe6_d48a09ff079c4b368750ed61a0ccb570~mv2.png/v1/fill/w_980,h_401,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/2f0fe6_d48a09ff079c4b368750ed61a0ccb570~mv2.png)
![](https://static.wixstatic.com/media/2f0fe6_d580e6d1be654f3297e335fee6fc997e~mv2.png/v1/fill/w_966,h_467,al_c,q_90,enc_avif,quality_auto/2f0fe6_d580e6d1be654f3297e335fee6fc997e~mv2.png)
We perform category filtering.
![](https://static.wixstatic.com/media/2f0fe6_6f0391ad839a4ccca7a92aed20273c95~mv2.png/v1/fill/w_768,h_461,al_c,q_85,enc_avif,quality_auto/2f0fe6_6f0391ad839a4ccca7a92aed20273c95~mv2.png)
![](https://static.wixstatic.com/media/2f0fe6_33b153e068b94ef796c97648a9ecc30e~mv2.png/v1/fill/w_633,h_474,al_c,q_85,enc_avif,quality_auto/2f0fe6_33b153e068b94ef796c97648a9ecc30e~mv2.png)
We pick the top 5 categories:
![](https://static.wixstatic.com/media/2f0fe6_37dffd7701744741a0ed8cde5297187a~mv2.png/v1/fill/w_980,h_467,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/2f0fe6_37dffd7701744741a0ed8cde5297187a~mv2.png)
![](https://static.wixstatic.com/media/2f0fe6_62b514a634ad44ca809f7a0b953f9481~mv2.png/v1/fill/w_937,h_467,al_c,q_90,enc_avif,quality_auto/2f0fe6_62b514a634ad44ca809f7a0b953f9481~mv2.png)
![](https://static.wixstatic.com/media/2f0fe6_872faee9ae124b108da555dd9944d750~mv2.png/v1/fill/w_956,h_468,al_c,q_90,enc_avif,quality_auto/2f0fe6_872faee9ae124b108da555dd9944d750~mv2.png)
![](https://static.wixstatic.com/media/2f0fe6_5fbeb4b7f3ad445780fc61b497d5f6bd~mv2.png/v1/fill/w_942,h_348,al_c,q_85,enc_avif,quality_auto/2f0fe6_5fbeb4b7f3ad445780fc61b497d5f6bd~mv2.png)
We compute category-wise probabilities:
![](https://static.wixstatic.com/media/2f0fe6_4087f48b624c496da15395201b1f24b1~mv2.png/v1/fill/w_888,h_448,al_c,q_90,enc_avif,quality_auto/2f0fe6_4087f48b624c496da15395201b1f24b1~mv2.png)
![](https://static.wixstatic.com/media/2f0fe6_a9041cca184d42f5a8a2028bcdd441e8~mv2.png/v1/fill/w_802,h_450,al_c,q_90,enc_avif,quality_auto/2f0fe6_a9041cca184d42f5a8a2028bcdd441e8~mv2.png)
Then, we compute word-wise probabilities on the data:
![](https://static.wixstatic.com/media/2f0fe6_d449755dc916451683aa98edc764ca48~mv2.png/v1/fill/w_980,h_454,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/2f0fe6_d449755dc916451683aa98edc764ca48~mv2.png)
![](https://static.wixstatic.com/media/2f0fe6_95d9192c6f674e1f8b988fde67e2df63~mv2.png/v1/fill/w_919,h_468,al_c,q_90,enc_avif,quality_auto/2f0fe6_95d9192c6f674e1f8b988fde67e2df63~mv2.png)
![](https://static.wixstatic.com/media/2f0fe6_77c7956d14484817983faab5aac8ad29~mv2.png/v1/fill/w_798,h_358,al_c,q_85,enc_avif,quality_auto/2f0fe6_77c7956d14484817983faab5aac8ad29~mv2.png)
NLTK provides a tokenize module, which includes two commonly used functions:
Word tokenization: word_tokenize() splits a sentence into tokens or words.
Sentence tokenization: sent_tokenize() splits a paragraph into sentences.
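For readers without NLTK installed, a rough pure-Python approximation of what these two functions do (the regexes here are simplifications of my own, not NLTK's actual tokenization rules):

```python
import re

def simple_word_tokenize(text):
    """Rough stand-in for nltk.word_tokenize: emit runs of word characters
    and individual punctuation marks as separate tokens."""
    return re.findall(r"\w+|[^\w\s]", text)

def simple_sent_tokenize(text):
    """Rough stand-in for nltk.sent_tokenize: split after ., !, or ?
    followed by whitespace."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]
```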
![](https://static.wixstatic.com/media/2f0fe6_984f6fdb0c074dfc8600213b1c292bd6~mv2.png/v1/fill/w_826,h_298,al_c,q_85,enc_avif,quality_auto/2f0fe6_984f6fdb0c074dfc8600213b1c292bd6~mv2.png)
![](https://static.wixstatic.com/media/2f0fe6_a512c047e08c44f0a550aebde848fe69~mv2.png/v1/fill/w_728,h_320,al_c,q_85,enc_avif,quality_auto/2f0fe6_a512c047e08c44f0a550aebde848fe69~mv2.png)
Naive Bayes Classifier - https://colab.research.google.com/drive/1vyioeWZNZOZWfc5BPCf3KGHyGzhf5k0E#scrollTo=54a84fa3
References
Stemming
Naive Bayes