JAR v.IS Project Findings

From Analytics Practicum
Multiple Linear Regression Model

What makes a good Facebook post? This section outlines the explanatory model built on the article dataset from Facebook Insights, supplemented with our crawled variables to form a complete article dataset.

Response / Dependent Variables

We choose "Total Engagement" as the response (dependent) variable. "Total Engagement" for each post is the sum of the total number of reactions (like, love, wow, haha, angry, sad), comments and shares of that post as of the data retrieval date. Reactions are similar to Facebook 'likes', but provide the additional option of responding with one of five animated emoji rather than a simple 'like'.


Other possible response variables include the comment sentiment score measures and the individual engagement metrics, but they are ruled out for reasons such as their non-normal distributions and their limited utility for our sponsor.
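As a small illustration of how this response variable is assembled, the sketch below sums the engagement counts for a single post. The field names are assumptions for illustration, not the actual Facebook Insights column names.

```python
# Sketch of the "Total Engagement" response variable described above.
# Field names are illustrative; the real Facebook Insights export
# columns may be named differently.
REACTION_FIELDS = ["like", "love", "wow", "haha", "angry", "sad"]

def total_engagement(post: dict) -> int:
    """Sum all reactions, comments and shares for one post."""
    reactions = sum(post.get(field, 0) for field in REACTION_FIELDS)
    return reactions + post.get("comments", 0) + post.get("shares", 0)
```

For example, a post with 10 likes, 2 loves, 3 comments and 1 share would score a Total Engagement of 16.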

Explanatory / Independent Variables

Article Dataset Metadata for Analysis

  • Post Message Sentiment (Crawled Variable): Sentiment score calculated with a PyCharm Python script using the AFINN sentiment word list and an emoji package
  • Article Text Sentiment (Derived Variable): Sentiment score calculated with a PyCharm Python script using the AFINN sentiment word list and an emoji package
  • Number of Images (Crawled Variable): Number of images in the article
  • Number of Videos (Crawled Variable): Number of videos in the article
  • Number of Links (Crawled Variable): Number of embedded links in the article
  • Number of Syllables (Crawled Variable): Number of syllables in the article text
  • Word Count (Crawled Variable): Total word count
  • Sentence Count (Crawled Variable): Total sentence count
  • Words per Sentence (Crawled Variable): Number of words per sentence in the body text
  • Flesch Reading Ease (Crawled Variable): Readability index value of Flesch Reading Ease
  • Flesch-Kincaid Grade (Crawled Variable): Readability index value of Flesch-Kincaid Grade
  • Gunning Fog (Crawled Variable): Readability index value of Gunning Fog
  • SMOG Index (Crawled Variable): Readability index value of SMOG Index
  • Automated Readability Index (Crawled Variable): Readability index value of Automated Readability Index
  • Coleman-Liau Index (Crawled Variable): Readability index value of Coleman-Liau Index
  • Linsear Write Formula (Crawled Variable): Readability index value of Linsear Write Formula
  • Dale-Chall Readability Score (Crawled Variable): Readability index value of Dale-Chall Readability Score
  • Difficult Words Count (Crawled Variable): Total count of difficult words
  • Article Category (Crawled Variable): The category of the article; categorical, 9 levels
  • Day of Week (Derived Variable): The day of the week derived from the (adjusted) posted time of the article; categorical, 7 levels
  • Time Interval (Hour) (Derived Variable): The time interval derived by recursively splitting the hour of posting to coincide with morning, afternoon, evening and night; categorical, 4 levels
  • Article Authors (Crawled Variable): The author of the article; authors who wrote fewer than 9 articles are grouped into "Others"; categorical, 20 levels
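The two sentiment variables above are scored with an AFINN-style lexicon. A minimal sketch of that approach is shown below, using a tiny hand-written subset of the lexicon rather than the full AFINN word list and emoji package the project used; the word valences here are illustrative.

```python
# AFINN-style sentiment scoring sketch: each lexicon word carries an
# integer valence, and a text's score is the sum of the valences of
# the lexicon words it contains. Only a tiny illustrative subset of
# the AFINN lexicon is included here.
AFINN_SUBSET = {
    "good": 3, "great": 3, "love": 3, "happy": 3,
    "bad": -3, "terrible": -3, "sad": -2,
}

def sentiment_score(text: str) -> int:
    """Sum the valence of every lexicon word found in the text."""
    words = text.lower().split()
    return sum(AFINN_SUBSET.get(w.strip(".,!?"), 0) for w in words)
```

For instance, a post message such as "Great news, love it!" scores +6 under this subset, while a message with no lexicon words scores 0.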
Data Transformation / Excluding Outliers

We transform the variables to make them more suitable for regression analysis: we apply a square root transformation as well as a natural logarithm transformation to all response and explanatory variables whose distributions are not normal, to reduce skewness and yield more normal distributions.
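The candidate transformations can be sketched as follows. The use of ln(1 + x) to keep the logarithm defined for zero counts is an assumption here, since the exact handling of zeros is not stated above.

```python
import numpy as np

def transform_candidates(x: np.ndarray) -> dict:
    """Return the square-root and natural-log transforms used to
    reduce right skew. ln(1 + x) is used so that zero counts remain
    defined; the project's exact offset is an assumption here."""
    return {
        "raw": x,
        "sqrt": np.sqrt(x),
        "ln": np.log1p(x),  # ln(1 + x)
    }
```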


Article transformation.png
Transforming the Response Variables and removing the outliers


The outliers for the explanatory variables are judged by the independent variable distributions as well as the scatterplots of the response variable against the explanatory variables. We remove the following data points (as circled in the figure) as outliers.


Outlier.png
Transforming the Explanatory Variables and removing the outliers


Bivariate Fit

We also conduct bivariate analysis of the response variable against each transformed explanatory variable to review the linearity of fit. This step helps us decide whether transforming a variable is necessary, and we pick the transformation that provides the highest R2 value.

Bivfit.png
Bivariate fit of difficult words count. We select the SQRT transformation instead of the Ln transformation

This is repeated across all the explanatory variables, and we find that all the readability indices have very poor R2 values (close to zero). We then examine whether the stepwise model will still pick these measures even though the variables individually lack strong explanatory power.
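The "pick the transformation with the highest R2" step can be sketched as a simple bivariate fit comparison. This is a hypothetical helper illustrating the selection logic, not the JMP procedure itself:

```python
import numpy as np

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """R^2 of a simple linear (bivariate) fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def best_transformation(x: np.ndarray, y: np.ndarray) -> str:
    """Compare candidate transforms of x and keep the one with the
    highest bivariate R^2 against the response, mirroring the
    selection step described above."""
    candidates = {"sqrt": np.sqrt(x), "ln": np.log1p(x)}
    return max(candidates, key=lambda k: r_squared(candidates[k], y))
```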

Checking for Multi-collinearity

We also ran bivariate fits across all 18 numerical explanatory variables to test for multicollinearity. The figure below shows the bivariate correlation scatterplot matrix.

Bivfitscattermatrix.png
Bivariate correlation scatterplot matrix for all 18 numerical variables for the article model

Using this scatterplot together with the bivariate correlation matrix, we eliminated 8 variables that are highly correlated. We then ran Standard Least Squares regression on the continuous numerical variables to verify the absence of multicollinearity among the remaining variables.
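The VIF statistics in the parameter estimates table can be reproduced from first principles: each variable is regressed on all the others, and VIF_j = 1 / (1 - R_j^2). A numpy-only sketch:

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance Inflation Factor for each column of X (n observations
    by k variables). For column j, regress it on the remaining
    columns (with intercept) and return 1 / (1 - R_j^2). Values far
    above 1 flag multicollinearity."""
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # intercept + others
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
        out[j] = 1.0 / (1.0 - r2)
    return out
```

Independent columns give VIFs near 1, while a column that nearly duplicates another produces a very large VIF, which is the pattern the figure above is checked against.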


Vifparamest.png
Parameter Estimates with VIF statistics


As a result, we have narrowed down our final list of continuous numerical explanatory variables for the article regression model, in preparation for the next step: stepwise regression.

Stepwise Regression

We proceed with the creation of our explanatory model by running stepwise regression in the Fit Model platform of JMP Pro 13 on the variables filtered in the steps above, together with the categorical variables (which JMP dummy-codes). We use a p-value stopping rule with a threshold of 0.05, which gives the best R2 and adjusted R2 values, indicating the best model fit given the available data. We ran the regression in the forward, backward and mixed directions and found that the R2 values for the three directions are the same; we therefore select the mixed direction for our model. The AICc and BIC stopping rules are not used since we are building an explanatory model rather than a predictive model.
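A simplified sketch of the forward direction of this procedure is shown below, with the same 0.05 p-value entry threshold. JMP's mixed direction additionally re-tests entered terms for removal; that refinement is omitted here, and this sketch is not JMP's exact algorithm.

```python
import numpy as np
from scipy import stats

def forward_stepwise(X, y, names, alpha=0.05):
    """Forward stepwise selection sketch: at each round, add the
    candidate variable with the smallest partial F-test p-value,
    stopping when no candidate enters below alpha (0.05 here, as in
    the model above)."""
    n = len(y)
    selected = []

    def sse(cols):
        # Residual sum of squares for an intercept + selected-columns fit.
        A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ coef
        return r @ r

    while True:
        base = sse(selected)
        best_p, best_j = 1.0, None
        for j in range(X.shape[1]):
            if j in selected:
                continue
            new = sse(selected + [j])
            df2 = n - len(selected) - 2  # residual df with the new term
            F = (base - new) / (new / df2)
            p = stats.f.sf(F, 1, df2)
            if p < best_p:
                best_p, best_j = p, j
        if best_j is None or best_p >= alpha:
            return [names[j] for j in selected]
        selected.append(best_j)
```

On synthetic data where only some columns truly drive the response, this procedure recovers those columns and leaves pure-noise columns out with high probability.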


The regression equation and parameter estimates are shown below:

Artregeqn.png
Article Regression equation for Ln(Total engagement)
Artparam.png
Article Regression Parameter Estimates for Ln(Total engagement)
Model Fit and Model Assumptions

Artmodfit.png
Article Regression Model Fit

The goodness of fit is represented by the R2 value, a statistical measure known as the coefficient of determination, which measures how close the data points are to the line generated by the model. The R2 value for the article model is 0.18, meaning the model explains 18% of the variation in Ln(Total Engagement) for articles.

To gauge the explanatory power of each additional explanatory variable, we also consider the adjusted R2 value, which adjusts for the number of explanatory variables in the model: it increases only if an added explanatory variable improves the model more than would be expected by chance. The adjusted R2 value for the article model is 0.17, indicating that 17% of the variation in Ln(Total Engagement) is explained after accounting for the number of explanatory variables.
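The relationship between the two statistics is the standard adjustment shown below; the sample size and parameter count used in the example are illustrative assumptions, not the project's actual figures.

```python
def adjusted_r2(r2: float, n: int, k: int) -> float:
    """Adjusted R^2 penalises each extra explanatory variable:
    1 - (1 - R^2) * (n - 1) / (n - k - 1), where n is the sample
    size and k the number of explanatory variables."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
```

With no explanatory variables the adjustment does nothing, and for a fixed R2 the adjusted value falls as k grows, which is why it only rewards variables that genuinely improve the fit.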


We then move on to the model assumptions to validate our regression model findings. There are several assumptions of linear regression models which need to be met, as seen below:

  • Relationship between the dependent variable and independent variables is linear
  • Expected mean error of the regression model is zero
  • Errors/Residuals have constant variance (Homoscedastic)
  • Errors/Residuals are independent of each other
  • Errors/Residuals are normally distributed and have a population mean of zero

Assumption 1: Linearity

Assumption 1n3.png
Residual by predicted plot

The points are distributed quite symmetrically around the line, indicating that the residuals are random, which fulfils the linearity assumption.

Assumption 2: Zero expected mean error

Assumption 2.png
Distribution of residuals

The residuals largely follow a normal distribution with a mean close to zero and a standard deviation close to one.

Assumption 3: Homoscedasticity

Assumption 1n3.png
Residual by predicted plot

The distribution of the points in the plot is rather symmetrical, with no sign of the residuals increasing with the predicted values (no funnel shape). This indicates that the residuals have constant variance and are hence homoscedastic.

Assumption 4: Independent Residuals

Assumption 4a.png
Residual by row plot

The scatter plot shows that the residuals are randomly distributed around the line, indicating that they are independent over time. This also suggests that the residuals are not autocorrelated.

Assumption 4b.png
Durbin-Watson test of no autocorrelation

The Durbin-Watson statistic d = 2.15 lies between the two critical values (1.5 < d < 2.5). Therefore, we can assume that there is no first-order linear autocorrelation in our multiple linear regression data.
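The Durbin-Watson statistic reported above can be computed directly from the residual series:

```python
import numpy as np

def durbin_watson(residuals: np.ndarray) -> float:
    """Durbin-Watson statistic: the sum of squared successive
    residual differences divided by the residual sum of squares.
    Values near 2 indicate no first-order autocorrelation; the rule
    of thumb used above treats 1.5 < d < 2.5 as acceptable."""
    diff = np.diff(residuals)
    return float(diff @ diff / (residuals @ residuals))
```

Perfectly alternating residuals (strong negative autocorrelation) push d towards 4, positively autocorrelated residuals push it towards 0, and independent noise sits near 2.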

Assumption 5: Residuals are normally distributed

Assumption 5.png
Studentized Residual distribution

The residuals largely follow a normal distribution with a mean close to zero and a standard deviation close to one, hence fulfilling this assumption.


Interpretation and Managerial insights


A multiple stepwise linear regression was run to explain Ln(Total Engagement) for article performance from the post message sentiment score, number of links, SQRT(number of images) and article authors. These variables statistically significantly explained Ln(Total Engagement), F(33.79, 1.06) = 31.96, p < 0.0001, adjusted R2 = 0.17. All selected variables contributed statistically significantly to the model, p < .05. The article regression model meets all five assumptions highlighted above, and we believe our sponsor can benefit from knowing the determinants of their social media engagement performance given by the regression equation for article performance.



While our article explanatory regression model explains only 17-18% of the variation in a post's engagement performance, insights can still be gleaned from it. The following points can be drawn from the article regression model:

  • A positive-sounding post message accompanying the article can help increase engagement.
  • Articles that contain too many embedded links may not perform well in terms of engagement. This could suggest that viewers tend not to read the article, or are referred elsewhere as a result.
  • The number of images used in an article matters: more images can help improve the article's engagement level. This applies especially to categories that call for visually appealing information.
  • Authors A, B, C, D, E, F, G, H, I, and J are performing well and can be considered well suited to writing for their relevant categories, whereas authors K, L, M, N, O, P, Q, R, S, and T are performing poorly, suggesting the need for either improvement or a reassignment of topics.