Social Media & Public Opinion - Project Progress

From Analytics Practicum


Prototype

Overview

SMPO-Prototype-Overview.gif


Demographics

SMPO-Prototype-Demographic.gif


Word Association

SMPO-Prototype-Word Association.gif


Analytical Findings

Limitations of the Hedonometer – What it cannot detect

  • Negation handling

SMPO-Negation handling.png [1]


  • Abbreviations, smileys/emoticons and special symbols

SMPO-Abbreviations.png [2]


  • Local languages & slang (Singlish)

SMPO-Singlish.png [3]


SMPO-Singlish2.png [4]


SMPO-Singlish3.png [5]


  • Ambiguity

SMPO-Ambiguity1.png [6]


SMPO-Ambiguity2.png [7]


  • Sarcasm

SMPO-Sarcasm1.png [8]


SMPO-Sarcasm2.png [9]


Varying happiness levels for exclamations of laughter

SMPO-ha.png


Our Approach

Machine Learning

Given the limitations of the Hedonometer's happiness index score, we are attempting to use sample tweets to learn and generate a more robust lexicon/dictionary. Machine learning is a scientific discipline that explores the construction and study of algorithms that can learn from data. Such algorithms operate by building a model from example inputs and using that model to make predictions or decisions, rather than following a strictly static program.

This dictionary will be built on top of the research done by Hedonometer, using their dictionary as a starting point. To score a particular tweet, the words that appear both in the tweet and in the Hedonometer dictionary are summed to give the overall happiness score of the tweet. A tweet is considered positive when its overall score exceeds 5 (the centre score of the happiness index) multiplied by the number of words matched in the dictionary, and negative when it falls below that amount. Given a set of sample tweets, we then track the number of times each word appears in a "positive" tweet and the number of times it appears in a "negative" tweet; the proportion of its occurrences that are positive measures how positive the word is relative to other words. On top of that, words not previously documented will also be included and their counts tracked in the same way.
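The classification and counting scheme above can be sketched as follows. The tiny `happiness` dictionary and the sample tweets are hypothetical placeholders; the real Hedonometer lexicon contains roughly 10,000 scored words on a 1–9 scale.

```python
from collections import defaultdict

# Hypothetical excerpt of a Hedonometer-style lexicon (scores on a 1-9 scale).
happiness = {"love": 8.42, "hate": 2.34, "new": 5.92, "dislike": 3.84, "york": 5.50}

NEUTRAL = 5.0  # centre score of the happiness index


def classify(tweet):
    """Label a tweet 'positive' or 'negative' (None if no word is in the lexicon)."""
    matched = [w for w in tweet.lower().split() if w in happiness]
    if not matched:
        return None
    total = sum(happiness[w] for w in matched)
    # Positive iff the summed score exceeds 5 * number of matched words.
    return "positive" if total > NEUTRAL * len(matched) else "negative"


def learn_polarity(tweets):
    """Track how often each word occurs in positive vs. negative tweets."""
    counts = defaultdict(lambda: [0, 0])  # word -> [positive count, negative count]
    for tweet in tweets:
        label = classify(tweet)
        if label is None:
            continue
        idx = 0 if label == "positive" else 1
        for w in set(tweet.lower().split()):
            counts[w][idx] += 1
    # A word's positivity is the share of its occurrences in positive tweets.
    return {w: p / (p + n) for w, (p, n) in counts.items()}


scores = learn_polarity(["I love new york", "I hate mondays"])
```

Note that previously undocumented words (such as "mondays" here) still receive a positivity score, which is how the learned dictionary grows beyond the original lexicon.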

Lexical Affinity

Another limitation of the Hedonometer is that it considers the score of only one word at a time, which can paint a very different picture than looking at word associations. For example, the tweet "I dislike New York" may seem negative to a human observer but neutral to the machine, as the scores of "dislike" and "new" cancel one another out; likewise in "That dog is pretty ugly", "pretty" and "ugly" cancel one another out. Thus, we need to associate words together to understand the tweet a little better.

Lexical Affinity assigns arbitrary words a probabilistic affinity for a particular topic or emotion. For example, 'accident' might be assigned a 75% probability of indicating a negative event, as in 'car accident' or 'hurt in an accident'. There are a few lexical affinity types whose constituents share a high co-occurrence frequency [10]:

  • grammatical constructs (e.g. “due to”)
  • semantic relations (e.g. “nurse” and “doctor”)
  • compounds (e.g. “New York”)
  • idioms and metaphors (e.g. “dead serious”)

The first step is to define the support and confidence thresholds we are willing to accept before associating words with one another; as a rule of thumb, we will use 75%. The support of a bigram (a pair of words) is defined as the proportion of all tweets that contain both words; essentially, it checks whether the pair occurs a sufficient number of times for the pairing to be significant. The confidence of a rule is defined as the number of tweets containing both words divided by the number of tweets containing the first of the two words. Each tweet may contain more than one pairing: for example, "It's a pleasant and wonderful experience" yields 3 pairings, [pleasant, wonderful], [pleasant, experience], and [wonderful, experience]. Once we have determined the support and confidence of each pairing, we will be able to generate a new dictionary containing these pairings to be run on new data.
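The support and confidence computation described above can be sketched as below. The sample tweets are illustrative; in practice the pairs would be filtered against the chosen 75% threshold before entering the new dictionary.

```python
from itertools import combinations


def pair_stats(tweets):
    """Compute (support, confidence) for every word pair co-occurring in a tweet."""
    docs = [set(t.lower().split()) for t in tweets]
    n = len(docs)

    # Enumerate every pair of words that co-occurs in at least one tweet.
    pairs = set()
    for d in docs:
        pairs |= set(combinations(sorted(d), 2))

    stats = {}
    for a, b in pairs:
        both = sum(1 for d in docs if a in d and b in d)
        first = sum(1 for d in docs if a in d)
        # support  = proportion of all tweets containing both words
        # confidence = tweets containing both / tweets containing the first word
        stats[(a, b)] = (both / n, both / first)
    return stats


stats = pair_stats(["pleasant wonderful experience", "pleasant experience"])
```

Pairs whose support and confidence both clear the threshold would then be added as entries in the new pairing dictionary.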

Testing our new dictionary

To determine the accuracy of the dictionary, human test subjects will be employed to judge whether the dictionary is in fact effective in determining the polarity of tweets. Each subject will be given 2 tweets to judge, each with a pre-defined score from running it through the new dictionary. If the subject's perception of the 2 tweets coincides with that of the dictionary, the test is counted as a positive; otherwise it is counted as a negative. A random sample of 100 users will be chosen to do at least 10 comparisons each. At the end of these tests, we will divide the number of positives by the total number of tests done; this proportion will determine the accuracy of our dictionary.
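The evaluation reduces to a simple agreement rate between the dictionary's labels and the human judgements. The two label lists below are hypothetical placeholders standing in for real test results.

```python
def accuracy(dictionary_labels, human_labels):
    """Proportion of tests where the human judgement matched the dictionary."""
    agree = sum(1 for d, h in zip(dictionary_labels, human_labels) if d == h)
    return agree / len(dictionary_labels)


# Hypothetical results from four comparison tests: three agreements, one disagreement.
acc = accuracy(["positive", "negative", "positive", "positive"],
               ["positive", "negative", "negative", "positive"])
```

With 100 users doing at least 10 comparisons each, the lists would hold 1,000 or more entries rather than the four shown here.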