Team Accuro Project Overview
Revision as of 19:39, 4 September 2015
Introduction and Background
The Yelp Dataset Challenge provides data on ratings for several businesses across 4 countries and 10 cities, giving students an opportunity to explore and apply analytics techniques to design a model that improves the pace and efficiency of Yelp’s recommendation systems. Using the dataset provided for existing businesses, we aim to identify the main attributes of a business that make it a high performer (highly rated) on Yelp. Since restaurants form a large chunk of the businesses reviewed on Yelp, we decided to build a model specifically to advise new restaurateurs on how to become their customers’ favourite food destination.
With Yelp’s increasing popularity in the United States, businesses are starting to care more and more about their ratings, as “an extra half star rating causes restaurants to sell out 19 percentage points more frequently”. This profound effect of Yelp ratings on the success of a business makes our analysis even more crucial and relevant for new restaurant owners. Why do some businesses rank higher than others? Do customers give ratings purely based on food quality, does ambience trump service, or does the geographic location of a business affect the rating pattern of customers? Through our project we hope to analyse such questions and thereby be able to advise restaurant owners on which factors to look out for.
Review of Similar Work
The aim of the first study is to help businesses compare their performance (Yelp ratings) with other similar businesses based on location, category, and other relevant attributes.
The visualization focuses on three main parts:
a) Distribution of ratings: A bar chart showing the frequency of each star rating (1 through 5) for a single business.
b) Number of useful votes vs. star rating: A scatter plot showing every review for a given business, with the x-position representing the “useful” votes received and the y-position representing the star rating of the review.
c) Ratings over time: The same scatter plot as chart (b), but with the date of the review on the x-axis.
The final product is designed as an interactive display, allowing users to select a business of interest and indicate a radius in miles to filter the businesses for comparison. We will use this as a base and address some of its shortcomings in usability and UI. We will further supplement this with analysis of our own, using other statistical methods to help derive meaning from the dataset.
2) Your Neighbors Affect Your Ratings: On Geographical Neighborhood Influence to Rating Prediction
This study focuses on the influence of geographical location on user ratings of a business, assuming that a user’s rating is determined by both the intrinsic characteristics of the business and the extrinsic characteristics of its geographical neighbors.
The authors use two kinds of latent factors to model a business: one for its intrinsic characteristics and the other for its extrinsic characteristics (which encodes the neighborhood influence of this business to its geographical neighbors).
The study shows that by incorporating geographical neighborhood influences, much lower prediction error is achieved than with state-of-the-art models including Biased MF, SVD++, and Social MF. The prediction error is further reduced by incorporating influences from business category and review content.
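Schematically, and in our own notation rather than the paper's, a rating prediction combining intrinsic and extrinsic latent factors might be written as:

```latex
% Sketch only: \hat{r}_{ui} is the predicted rating of business i by user u,
% \mu, b_u, b_i are the global, user, and business biases, p_u is the user's
% latent factor, q_i the business's intrinsic factor, and g_j the extrinsic
% factor of each geographical neighbor j in the neighbor set N(i).
\hat{r}_{ui} = \mu + b_u + b_i
             + p_u^{\top}\Big( q_i + \frac{1}{|N(i)|} \sum_{j \in N(i)} g_j \Big)
```

The exact form of the neighborhood term differs in the paper; the point is that each business contributes an extrinsic factor to its neighbors' predictions on top of its own intrinsic factor.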
Motivation
We believe that our topic of analysis is crucial for the following reasons:
1) It will make the redirection of customers to high quality restaurants much easier and more efficient.
2) It can encourage low quality restaurants to improve in response to insights about customer demand.
Project Scope and Methodology
- Primary requirements (for “restaurants” and one city only):
Step 1: Descriptive Analysis - Analysing restaurants specifically for what differentiates high performers, low performers, and hit-or-miss restaurants. The analysis will be further segmented by, for example, region, review count, and operating hours. For each of the 3 segments mentioned, the following analysis will be done:
A. Clustering to analyse the business profiles that characterize the market. We will explore various algorithms and evaluate each to decide which works best for the dataset.
B. Time series analysis of whether any major trends have emerged in restaurants by region – to further decipher the dos and don’ts for success
Step 2: Key factor identification (feature extraction) for prescriptive analysis, so that new restaurants in each region know what they need to succeed. Regression will be used to identify the most important factors, and the model will be validated so that we can assess how well it fits.
Step 3: For each segment (i.e. high performers, low performers and Hit & Miss restaurants), our analysis will include the following:
o Regression to predict ratings for new restaurants by region, through analysis of success factors over time. For example, restaurants that started two years ago and achieved high ratings a year later will be used to test against restaurants that started a year ago and have high ratings now, to study the patterns that determine a successful business.
Step 4: Build a visualization tool for the client for continual updates on business strategy. The focus will be on building a robust tool that helps the client recreate the same analysis in Tableau.
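As a minimal sketch of the kind of regression described in Steps 2 and 3: fit a least-squares line of star rating against a single numeric feature and report R² as a rough validation measure. The feature (review count) and the data are illustrative only; a real model would use many features and proper out-of-sample validation.

```javascript
// Simple least-squares linear regression with R^2 (illustrative sketch).
function linearRegression(xs, ys) {
  const n = xs.length;
  const meanX = xs.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;
  let sxy = 0, sxx = 0, syy = 0;
  for (let i = 0; i < n; i++) {
    sxy += (xs[i] - meanX) * (ys[i] - meanY);
    sxx += (xs[i] - meanX) ** 2;
    syy += (ys[i] - meanY) ** 2;
  }
  const slope = sxy / sxx;
  const intercept = meanY - slope * meanX;
  const r2 = (sxy * sxy) / (sxx * syy); // proportion of variance explained
  return { slope, intercept, r2 };
}

// Invented data: review count and average star rating per restaurant.
const reviews = [12, 85, 40, 200, 150, 60];
const stars   = [3.0, 4.0, 3.5, 4.5, 4.0, 3.5];
const model = linearRegression(reviews, stars);
console.log(model.slope, model.r2);
```

The same normal-equation idea extends to multiple features; R² alone says nothing about overfitting, which is why the plan above includes model validation.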
- Secondary requirements:
Expand and recreate the analysis for all other cities.
The analysis will also be recreated to include other kinds of businesses, e.g. bars, salons, etc. For some businesses, new methods of analysis such as latent factorization will be employed (especially for those with minimal information on attributes).
- Future research:
Evaluating the importance of review ratings for restaurants – are they effective in improving ratings? Do restaurants that act on recommended changes succeed?
Limitations and Assumptions
Deliverables
- Project Proposal
- Mid-term presentation
- Mid-term report
- Final presentation
- Final report
- Project poster
- Project Wiki
- Visualization tool on Tableau
Introduction and Project Background
Overview
Demographics
Word Association
Methodology
Dashboard
The interactive visual model prototype should allow the user to see past tweets around certain significant events and draw conclusions from the results shown. To do this, we propose the following methodology. Tweet data will be provided to us by the user via upload of a CSV file containing the tweets in JSON format.
First, we will display an overview of the tweets we are looking at. Tweets will be aggregated into intervals based on the time span covered by the uploaded file. Each tweet will have a ‘happiness’ score tagged to it. The “happiness” score is derived from the study at Hedonometer.org. Of the 10,100 words that have a score tagged to them, some may not be applicable to words on Twitter (please refer to the study to find out how the score is derived). Words that are not applicable will not be used to calculate the score of the tweet and will be treated as stop/neutral words in the application.
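The scoring step above can be sketched as follows. The word scores here are invented stand-ins for the Hedonometer list; words missing from the list are ignored, as described.

```javascript
// Invented sample of a (word -> happiness score) lookup; the real one would
// be built from the Hedonometer word list.
const happinessScores = { happy: 8.3, love: 8.4, sad: 2.4, traffic: 3.6 };

function tweetHappiness(text) {
  const words = text.toLowerCase().match(/[a-z']+/g) || [];
  const scored = words.filter(w => w in happinessScores);
  if (scored.length === 0) return null; // no scorable words: treat as neutral
  const total = scored.reduce((sum, w) => sum + happinessScores[w], 0);
  return total / scored.length; // average happiness of the scorable words
}

console.log(tweetHappiness("So happy, I love this!")); // (8.3 + 8.4) / 2 = 8.35
```

A tweet's score is thus the average over only the words that appear in the dictionary, so a tweet of pure stop/neutral words contributes no score at all.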
To visualise the words mentioned in these tweets, we will use a dynamically generated word cloud. A word cloud is useful in showing users which words are most commonly mentioned in the tweets: the more a particular word is mentioned, the bigger it appears in the word cloud. Stop/neutral words will be removed to ensure that only relevant words show up in the tag cloud. One thing to note is that the source of the text is Twitter, which means that, depending on the users, the tweets may contain localized words that are hard to filter out. The list of stop words we will use for filtering is based upon this list.
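The frequency count behind the word cloud can be sketched like this; the stop-word set shown is a tiny stand-in for the real list mentioned above.

```javascript
// Tiny stand-in for the real stop-word list.
const stopWords = new Set(['the', 'a', 'is', 'to', 'and', 'of', 'in']);

function wordCloudCounts(tweets) {
  const counts = {};
  for (const tweet of tweets) {
    const words = tweet.toLowerCase().match(/[a-z']+/g) || [];
    for (const w of words) {
      if (stopWords.has(w)) continue; // drop stop/neutral words
      counts[w] = (counts[w] || 0) + 1;
    }
  }
  return counts; // a word-cloud plugin would size each word by its count
}

const counts = wordCloudCounts(['The food is great', 'great service and great food']);
console.log(counts.great); // 3
```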
Secondly, there is a list of predicted user attributes provided by the client. Each line contains the attributes of one user in JSON format. The fields are shown below:
- id: refers to twitter id
- gender
- ethnicity
- religion
- age_group
- marital_status
- sleep
- emotions
- topics
These predicted user attributes will be displayed in the second segment, where the application allows users to take a quick glance at the demographics of the users.
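As a rough sketch, the attribute file could be parsed line by line and aggregated into demographic counts for this segment. The field names follow the list above; the ids and attribute values are invented for illustration.

```javascript
// Invented sample of the one-JSON-object-per-line attribute file.
const fileContents = [
  '{"id": "1001", "gender": "female", "age_group": "18-24"}',
  '{"id": "1002", "gender": "male",   "age_group": "25-34"}',
  '{"id": "1003", "gender": "female", "age_group": "18-24"}'
].join('\n');

function demographicCounts(jsonl, field) {
  const counts = {};
  for (const line of jsonl.split('\n')) {
    if (!line.trim()) continue;          // skip blank lines
    const user = JSON.parse(line);       // one user per line
    const value = user[field] || 'unknown';
    counts[value] = (counts[value] || 0) + 1;
  }
  return counts;
}

console.log(demographicCounts(fileContents, 'gender')); // { female: 2, male: 1 }
```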
Third, we will also display the scores of the mentioned words based on happiness level. This will allow the user to quickly identify the words contributing to the negativity or positivity of the set of tweets.
The entire application will be browser based, and some of the benefits of this include:
- Client does not need to download any software to run the application
- It is clean and fast, as most people who own a computer will have a browser installed by default
- It is highly scalable: work is done on the front end rather than on a server, which could be choked when handling too many requests
HTML5 and CSS3 will be used primarily for the display. JavaScript will be used to manipulate the document objects on the front end. Some of the open-source plugins we will be using include:
- Highcharts – a visualisation library for creating charts quickly
- jQuery – a cross-platform JavaScript library designed to simplify client-side scripting of HTML
- OpenShift – a free online platform for live deployment
- Moment.js – a date manipulation library
Machine Learning
Lexical Affinity
Lexical Affinity assigns arbitrary words a probabilistic affinity for a particular topic or emotion. For example, ‘accident’ might be assigned a 75% probability of indicating a negative event, as in ‘car accident’ or ‘hurt in an accident’. There are a few lexical affinity types whose constituents share a high co-occurrence frequency [1]:
- grammatical constructs (e.g. “due to”)
- semantic relations (e.g. “nurse” and “doctor”)
- compounds (e.g. “New York”)
- idioms and metaphors (e.g. “dead serious”)
The way to do this is to first determine the support and confidence thresholds we are willing to accept before associating words with one another. As a rule of thumb, we will go with 75%.
The support of a bigram (a pair of words) is defined as the proportion of all tweets that contain both words; essentially, it checks whether the two words co-occur a sufficient number of times for the pairing to be significant. The confidence of a rule is defined as the number of tweets containing both words divided by the number of tweets containing the first of the two words. Each tweet may contain more than one pairing. For example, "It's a pleasant and wonderful experience" yields three pairings: [pleasant, wonderful], [pleasant, experience], and [wonderful, experience]. Once we have determined the support and confidence of each of these pairings, we will be able to generate a new dictionary containing these pairings to be run on new data.
Testing our New Dictionary
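The support and confidence computation described above can be sketched as follows; the threshold of 75% would then be applied to decide which pairings enter the new dictionary. The tweets here are invented examples.

```javascript
// Support and confidence of a word pairing over a collection of tweets.
function pairStats(tweets, wordA, wordB) {
  // Reduce each tweet to its set of distinct words.
  const sets = tweets.map(t => new Set(t.toLowerCase().match(/[a-z']+/g) || []));
  const withA = sets.filter(s => s.has(wordA)).length;
  const withBoth = sets.filter(s => s.has(wordA) && s.has(wordB)).length;
  return {
    support: withBoth / tweets.length,        // share of tweets with both words
    confidence: withA ? withBoth / withA : 0  // both-word count / first-word count
  };
}

const tweets = [
  'a pleasant and wonderful experience',
  'pleasant staff, wonderful food',
  'pleasant enough',
  'wonderful view'
];
const stats = pairStats(tweets, 'pleasant', 'wonderful');
console.log(stats.support);    // 2 of 4 tweets -> 0.5
console.log(stats.confidence); // 2 of the 3 'pleasant' tweets -> 0.666...
```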
Limitations & Assumptions
What Hedonometer Cannot Detect
Negation Handling
Abbreviations, Smileys/Emoticons & Special Symbols
Local Languages & Slangs (Singlish)
Ambiguity
Sarcasm
Project Overall
Limitations | Assumptions
Insufficient predicted information on the users (location, age, etc.) | Data given by LARC is sufficiently accurate for each user
Fake Twitter users | LARC will determine whether the users are real
Ambiguity of the emotions | Emotions given by the dictionary (as instructed by LARC) are conclusive for the tweets provided
Dictionary words limited to the ones instructed by LARC | A comprehensive study has been done to come up with the dictionary
ROI Analysis
Future Work
- Scale to larger sets of data without compromising time or performance
- Accommodate real-time data to provide instantaneous analytics on the go