Adv Quant: Use of Bayesian Analysis in research

An example of how Bayesian analysis can be used in the context of social media data.


Knowledge held before data collection and the knowledge gained from data collection do not tell the full story until they are combined, which establishes the need for Bayesian analysis (Hubbard, 2010).  Bayes’ theorem is a statement of conditional probability that takes prior knowledge into account but updates itself when new data become available (Hubbard, 2010; Smith, 2015).  Bayesian analysis helps avoid overconfidence and underconfidence because it ignores neither prior nor new data (Hubbard, 2010).  There are many examples of how Bayesian analysis can be used in the context of social media data.  Below are three of the many possibilities:

  • With high precision, Bayesian analysis was able to distinguish spam Twitter accounts from legitimate users based on their follower/following ratio and their 100 most recent tweets (McCord & Chuah, 2011). McCord and Chuah (2011) were able to use Bayesian analysis to achieve 75% accuracy in detecting spam using user-based features alone, and ~90% accuracy when using both user- and content-based features (a toy numerical illustration of this conditional-probability idea follows this list).
  • Boullé (2014) applied Bayesian analysis to 60,000 URLs from 100 websites. The goal was to predict the number of visits and messages on Twitter and Facebook after 48 hours, and Boullé (2014) was able to come close to the actual numbers, showcasing the robustness of the approach.
  • Zaman, Fox, and Bradlow (2014) were able to use Bayesian analysis to predict the popularity of tweets, measured as the final count of retweets a source tweet receives.
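
As a toy illustration of the conditional-probability reasoning behind the first example above, Bayes’ theorem can be computed directly in R. The rates below are made up for illustration and are not taken from McCord and Chuah (2011):

# Hypothetical, made-up rates for illustration only
pSpam          = 0.20  # prior: assumed proportion of accounts that are spam
pFlagGivenSpam = 0.90  # likelihood: spam accounts showing a suspicious follower/following ratio
pFlagGivenHam  = 0.10  # legitimate accounts showing the same suspicious ratio

# Marginal probability of observing the suspicious ratio
pFlag = pFlagGivenSpam * pSpam + pFlagGivenHam * (1 - pSpam)

# Posterior probability that an account with the suspicious ratio is spam (Bayes' theorem)
pSpamGivenFlag = (pFlagGivenSpam * pSpam) / pFlag
pSpamGivenFlag  # roughly 0.69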

An in-depth exploration of Zaman et al. (2014)

Goal:

The researchers aimed to predict how popular a tweet could become by using a Bayesian model to analyze the time path of the retweets a tweet receives and the eventual number of retweets one week later.

  • They analyzed 52 root tweets spanning different topics such as music and politics.
    • They narrowed the scope to tweets with a maximum of 1,800 retweets per root tweet.

Defining the parameters:

  • Twitter = microblogging site
  • Tweets = microblogging content that is contained in up to 140 characters
  • Root tweets = original tweets
  • Root user = generator of the root tweet
  • End user = those who read the root tweet and retweeted it
  • Twitter followers = people who are following the content of a root user
  • Follower graph = the social graph formed by the connections among known Twitter followers
  • Retweet = a Twitter follower’s sharing of the root user’s content for their own followers to read
  • Depth of 1 = how many end users retweeted a root tweet
  • Depth of 2 = how many end users retweeted a retweet of the root tweet

Exploration of the data:

From the 52 sampled root tweets, the researchers found that the tweets had anywhere between 21 and 1,260 retweets associated with them, and that the last retweet could have occurred anywhere from a few hours to a few days after the root tweet’s generation.  The researchers calculated the median time to the last retweet for each root tweet, yielding values that ranged from 4 minutes to 3 hours.  The differences between the median times were not statistically significant enough to reject the null hypothesis of no difference in median times.  According to the researchers, this lent more weight to the potential value of the Bayesian model over purely descriptive/exploratory methods.

The researchers explored the depth of the retweets and found that, for those 52 root tweets, 11,882 retweets were at a depth of 1, whereas 314 were at a depth of 2 or more, which suggested that root tweets get more retweets than retweeted tweets do.  The researchers suggested that the deeper retweets seemed to have occurred because the retweeters themselves had a large number of followers.

The researchers noted that the time path of retweets decays approximately log-normally, which is the distribution that was used in the Bayesian analysis model.
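
As an aside, that log-normal behavior can be illustrated in R; the sketch below uses simulated inter-retweet times rather than the authors’ data, so the parameter values are purely hypothetical:

library(MASS)  # for fitdistr()

set.seed(42)
retweetTimes = rlnorm(500, meanlog = 1, sdlog = 0.8)  # hypothetical retweet times (minutes)

# Maximum-likelihood estimates of the log-normal parameters
fit = fitdistr(retweetTimes, "lognormal")
fit

# Compare the fitted density to the empirical distribution
hist(retweetTimes, freq = FALSE, breaks = 30, main = "Simulated retweet times [mins]")
curve(dlnorm(x, fit$estimate["meanlog"], fit$estimate["sdlog"]), add = TRUE, col = "red")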

Bayesian analysis results:

The researchers randomly partitioned their data into a training set of 26 observations and a testing set of 26 observations, and varied the fraction of observed retweets from 10% to 100% of the final retweet count.  Their main results are plotted as boxplots, where the whiskers cover 90% of the posterior distribution (Figure 10).

[Figure 10 from Zaman et al. (2014): boxplots of absolute percent error by observed fraction of retweets]

The figure above is taken directly from Zaman et al. (2014). The authors noted that as the observed fraction increased, the absolute percent errors decreased.  For future work, the researchers suggested that their analysis could be parallelized to incorporate more data points, take into consideration the time of day the root tweet was posted, and account for how the content of a tweet affects its retweet-ability.

References

  • Boullé, M. (2014). Selective naive Bayes regressor with variable construction for predictive web analytics.
  • Hubbard, D. W. (2010). How to measure anything: Finding the value of “intangibles” in business (2nd ed.). New Jersey: John Wiley & Sons, Inc.
  • McCord, M., & Chuah, M. (2011). Spam detection on Twitter using traditional classifiers. In International Conference on Autonomic and Trusted Computing (pp. 175–186). Springer Berlin Heidelberg.
  • Smith, M. (2015). Statistical analysis handbook. Retrieved from http://www.statsref.com/HTML/index.html?introduction.html
  • Zaman, T., Fox, E. B., & Bradlow, E. T. (2014). A Bayesian approach for predicting the popularity of tweets. The Annals of Applied Statistics, 8(3), 1583–1611.

Adv Quant: Bayesian Analysis

Discussion of the reasons for using Bayesian analysis when faced with uncertainty in making decisions.

Uncertainty in making decisions

Generalizing from the specific, which from a statistical standpoint is the problem of induction, can cause uncertainty in making decisions (Spiegelhalter & Rice, 2009). Uncertainty in making a decision could also arise from not knowing how to incorporate new data with old assumptions (Hubbard, 2010).

According to Hubbard (2010), conventional statistics assumes either:

(1)    The researcher has no prior information about the range of possible values (which is never true), or

(2)    The researcher does have prior knowledge of the population’s distribution, and it is never one of the messy ones (which, more often than not, is not true).

Thus, knowledge held before data collection and the knowledge gained from data collection do not tell the full story until they are combined, hence the need for Bayesian analysis (Hubbard, 2010).  Bayes’ theorem can be reduced to a conditional probability that takes prior knowledge into account but updates itself when new data become available (Hubbard, 2010; Smith, 2015; Spiegelhalter & Rice, 2009; Yudkowsky, 2003).  Bayesian analysis avoids the overconfidence and underconfidence that come from ignoring prior data or ignoring new data (Hubbard, 2010), through the equation below:

P(hypothesis | data) = [P(data | hypothesis) × P(hypothesis)] / P(data)                           (1)

Where P(hypothesis|data) is the posterior probability of the hypothesis given the data, P(hypothesis) is the prior probability of the hypothesis/distribution before the data are introduced, P(data) is the marginal probability of the data, and P(data|hypothesis) is the likelihood of observing the data if the hypothesis/distribution is true (Hubbard, 2010; Smith, 2015; Spiegelhalter & Rice, 2009; Yudkowsky, 2003).  This forces the researcher to think about how likely it is that different and new observations could impact a current hypothesis (Hubbard, 2010). Equation (1) shows that evidence is usually the result of two conditional probabilities, where the strongest evidence comes from data that would be very unlikely if the hypothesis were not true (Yudkowsky, 2003).  From these two conditional probabilities, the resulting posterior is approximately an average of the prior assumptions and the new data gained (Hubbard, 2010; Smith, 2015).  Smith (2015) describes this approximation in the following simplified relationship (equation 2):

posterior ∝ prior × likelihood                                            (2)

Therefore, from equation (2), the type of prior assumption influences the posterior result. A Uniform, Constant, or Normal prior distribution results in a Normal posterior distribution, while a Beta or Binomial distribution results in a Beta posterior distribution (Smith, 2015).  To use Bayesian analysis, one must take the analysis’ assumptions into account.
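
As a minimal illustration of that conjugate behavior (the numbers are hypothetical, not drawn from the sources above), a Beta prior combined with Binomial data can be updated in R without any special packages:

# Assumed Beta(2, 2) prior belief about an unknown proportion
priorAlpha = 2
priorBeta  = 2

# Hypothetical new data: 30 successes out of 100 trials
successes = 30
failures  = 70

# Conjugate update: the posterior is Beta(priorAlpha + successes, priorBeta + failures)
postAlpha = priorAlpha + successes
postBeta  = priorBeta + failures

# Posterior mean sits between the prior mean (0.5) and the sample proportion (0.3)
postAlpha / (postAlpha + postBeta)

# 95% credible interval for the proportion
qbeta(c(0.025, 0.975), postAlpha, postBeta)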

Basic Assumptions of Bayesian Analysis

Though the following three assumptions are convenient to have for Bayesian analysis, it has been argued that they are quite unrealistic for real-life data, particularly unstructured text-based data (Lewis, 1998; Turhan & Bener, 2009); a minimal naive Bayes sketch illustrating the first assumption follows the list:

  • Each new data sample is independent of the others and identically distributed (Lewis, 1998; Nigam & Ghani, 2000; Turhan & Bener, 2009)
  • Each attribute has equal importance (Turhan & Bener, 2009)
  • The new data is compatible with the target posterior (Nigam & Ghani, 2000; Smith, 2015).
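
A minimal naive Bayes sketch in R, using the e1071 package and a tiny made-up data set (not from the cited studies), shows where the first assumption enters: the classifier multiplies per-attribute likelihoods as if the attributes were conditionally independent given the class:

install.packages("e1071")
library(e1071)

# Hypothetical toy data: each row is an account labeled spam or legit
toyData = data.frame(
  manyLinks       = factor(c("yes", "yes", "no", "no", "yes", "no")),
  highFollowRatio = factor(c("yes", "no",  "no", "yes", "yes", "no")),
  label           = factor(c("spam", "spam", "legit", "legit", "spam", "legit"))
)

# Fit the naive Bayes classifier; the conditional probability tables assume
# attribute independence given the class (assumption 1 above)
nbModel = naiveBayes(label ~ ., data = toyData)

# Posterior class probabilities for each account
predict(nbModel, toyData, type = "raw")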

Applications of Bayesian Analysis

There are typically three main situations where Bayesian analysis is used (Spiegelhalter & Rice, 2009):

  • Small data situations: The researcher has no choice but to include prior quantitative information, because of a lack of data or the lack of a distribution model.
  • Moderate-sized data situations: The researcher has multiple sources of data. They can create a hierarchical model on the assumption of similar prior distributions.
  • Big data situations: There are huge joint probability models, with thousands of data points or parameters, which can then be used to help make inferences about unknown aspects of the data.

Pros and Cons

Applying Bayesian analysis to data has its advantages and disadvantages.  The advantages and disadvantages identified by SAS (n.d.) are:

Advantages

+    Allows for the combination of prior information with data, for stronger decision-making

+    No reliance on asymptotic approximation, thus the inferences are conditional on the data

+    Provides easily interpretable results.

Disadvantages

– Posteriors are heavily influenced by their priors.

– This method doesn’t help the researcher to select the proper prior, given how much influence it has on the posterior.

– Computationally expensive with large data sets.

The key takeaway from this discussion is that prior knowledge can heavily influence the posterior, which can easily be seen in equation (2).  That is because knowledge held before data collection and the knowledge gained from data collection do not tell the full story unless they are combined.

References

Adv Quant: Logistic Regression in R


Introduction

The German credit data contains attributes and outcomes on 1,000 loan applications. The data are available at the UCI Machine Learning Repository, where datasets are provided for the machine learning community (see the data source link in the Code section below).

Results


Figure 1: Image shows the first six entries in the German credit data.


Figure 2: Matrix scatter plot, showing the pairwise relationships between all the variables within the German credit data.


Figure 3: A summary of the credit data with the variables of interest.


Figure 4: Shows the entries in the design matrix which will be used for the logistic regression analysis.


Figure 5: Summarized logistic regression information based on the training data.


Figure 6: The coefficients’ confidence intervals at the 95% level using log-likelihood values; the values on the right are based on the standard errors.


Figure 7: Wald test statistics used to test the overall significance of each ranked (categorical) variable.


Figure 8: The odds ratio for each independent variable, along with the 95% confidence interval for those odds ratios.


Figure 9: Part of the summarized test data set for the logistic regression model.


Figure 10: The ROC curve, which illustrates the false positive rate versus the true positive rate of the prediction model.

Discussion

The results in Figure 1 mean that the data needs to be formatted before any analysis can be conducted on it.  Hence, the lines of code in the Code section below were needed to redefine the variables in the German credit data set.  Given the data output (Figure 1), the matrix scatter plot (Figure 2) shows that duration, amount, and age are continuous variables, while the other five variables are factor variables, which have categorized scatterplots.  Even though installment and default show box-plot-style summaries (Figure 3), they weren’t factored like history, purpose, or rent, so they don’t show counts.  From the count data (Figure 3), ~30% of the loans are for cars, whereas 28% are for radios/TVs.  In this German credit data, about 82% of those asking for credit do not rent, and about 53% of borrowers have an ok credit history, with 29.3% having a horrible credit history.  The average default rate is 30%.

Variables (Figure 5) that have statistical significance at the 0.10 level include duration, amount, installment, age, history (per category), rent, and some of the purpose categories.  Though a larger difference between the null deviance and the residual deviance would be preferred, there is still a difference.  The 95% confidence intervals for the logistic regression coefficients don’t show much spread from their central tendencies (Figure 6).  Thus, the logistic regression model is (from Figure 5):

[Logistic regression equation with the fitted coefficients from Figure 5]

The odds ratio measures the constant strength of association between the independent and dependent variables (Huck, 2011; Smith, 2015).  This is similar to the correlation coefficient (r) and coefficient of determination (r^2) values for linear regression.  According to the UCLA Statistical Consulting Group (2007), if the p-value is less than 0.05, then the overall effect of the ranked term is statistically significant (Figure 7), which in this case the three main terms are.  The odds ratio data (Figure 8) are promising, as values closer to one are desirable for this prediction model (Field, 2013). If the value of the odds ratio is greater than 1, it shows that as the independent variable value increases, so do the odds that the dependent variable occurs, and vice versa (Field, 2013).

Moving into the testing phase of the logistic regression model, the 100-case data set needs to be extracted, and whether or not there will be a default on the loan is predicted. Comparing the training and the test data sets, the maximum values of duration and loan amount are not the same between the two.  All other variables and statistical distributions are similar to each other between the training and the test data.  Thus, the random sampling algorithm in R was effective.

The area underneath the ROC curve (Figure 10) is 0.6994, which is closer to 0.50 than it is to one; thus, this regression does better than pure chance, but it is far from perfect (Alice, 2015).

In conclusion, the regression model has an area under the ROC curve of 0.699, and the purpose, history, and rent ranked categorical variables were statistically significant as a whole.  Therefore, the logistic regression on these eight variables shows more promise than pure chance in predicting who will and will not default on their loan.
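
As a complementary check that was not part of the original write-up, the predicted probabilities could also be dichotomized at a 0.5 cutoff and cross-tabulated against the actual outcomes; the sketch below assumes the creditglm, newestdata, and newdatas objects created in the Code section that follows have already been run:

# Hypothetical add-on: confusion matrix at a 0.5 probability cutoff
# (assumes the objects from the Code section below already exist)
predictedClass = ifelse(newestdata$defaultPrediction > 0.5, 1, 0)
table(actual = newdatas$default, predicted = predictedClass)

# Simple classification accuracy on the 100 held-out cases
mean(predictedClass == newdatas$default)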

Code

#

## The German credit data contains attributes and outcomes on 1,000 loan applications.

##    •   You need to use random selection for 900 cases to train the program, and then the other 100 cases will be used for testing.

##    •   Use duration, amount, installment, and age in this analysis, along with loan history, purpose, and rent.

### ———————————————————————————————————-

## Data source: https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data

## Metadata file: https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.doc

#

#

## Reading the data from source and displaying the top six entries.

#

credits=read.csv("https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data", header = F, sep = " ")

head(credits)

#

## Defining the variables (Taddy, n.d.)

#

default = credits$V21 - 1 # set default true when = 2

duration = credits$V2

amount = credits$V5

installment = credits$V8

age = credits$V13

history = factor(credits$V3, levels = c("A30", "A31", "A32", "A33", "A34"))

purpose = factor(credits$V4, levels = c("A40","A41","A42","A43","A44","A45","A46","A48","A49","A410"))

rent = factor(credits$V15=="A151") # renting status only

# rent = factor(credits$V15, levels = c("A151","A152","A153")) # full property status

#

## Re-leveling the variables (Taddy, n.d.)

#

levels(history) = c("great", "good", "ok", "poor", "horrible")

levels(purpose) = c("newcar", "usedcar", "furniture/equip", "radio/TV", "apps", "repairs", "edu", "retraining", "biz", "other")

# levels(rent) = c("rent", "own", "free") # full property status

#

## Create a new matrix called “cred” with the 8 defined variables (Taddy, n.d.)

#

credits$default = default

credits$duration= duration

credits$amount  = amount

credits$installment = installment

credits$age     = age

credits$history = history

credits$purpose = purpose

credits$rent    = rent

cred = credits[,c("default","duration","amount","installment","age","history","purpose","rent")]

#

##  Plotting & reading to make sure the data was transferred correctly into this dataset and to present summary stats (Taddy, n.d.)

#

plot(cred)

cred[1:3,]

summary(cred[,])

#

## Create a design matrix, such that factor variables are turned into indicator variables

#

Xcred = model.matrix(default~., data=cred)[,-1]

Xcred[1:3,]

#

## Creating training and prediction datasets: Select 900 rows for estimation and 100 for testing

#

set.seed(1)

train = sample(1:1000,900)

## Defining which x and y values in the design matrix will be for training and for testing

xtrain = Xcred[train,]

xnew = Xcred[-train,]

ytrain = cred$default[train]

ynew = cred$default[-train]

#

## logistic regression

#

datas=data.frame(default=ytrain,xtrain)

creditglm=glm(default~., family=binomial, data=datas)

summary(creditglm)

#

## Confidence Intervals (UCLA: Statistical Consulting Group, 2007)

#

confint(creditglm)

confint.default(creditglm)

#

## Overall effect of the rank using the wald.test function from the aod library (UCLA: Statistical Consulting Group, 2007)

#

install.packages("aod")

library(aod)

wald.test(b=coef(creditglm), Sigma = vcov(creditglm), Terms = 6:9) # for all ranked terms for history

wald.test(b=coef(creditglm), Sigma = vcov(creditglm), Terms = 10:18) # for all ranked terms for purpose

wald.test(b=coef(creditglm), Sigma = vcov(creditglm), Terms = 19) # for the ranked term for rent

#

## Odds Ratio for model analysis (UCLA: Statistical Consulting Group, 2007)

#

exp(coef(creditglm))

exp(cbind(OR=coef(creditglm), confint(creditglm))) # odds ratios next to the 95% confidence intervals for the odds ratios

#

## Predicting default from the test data (Alice, 2015; UCLA: Statistical Consulting Group., 2007)

#

newdatas=data.frame(default=ynew,xnew)

newestdata=newdatas[,2:19] #removing the variable default from the data matrix

newestdata$defaultPrediction = predict(creditglm, newdata=newestdata, type = "response")

summary(newdatas)

#

## Plotting the true positive rate against the false positive rate (ROC Curve) (Alice, 2015)

#

install.packages("ROCR")

library(ROCR)

pr  = prediction(newestdata$defaultPrediction, newdatas$default)

prf = performance(pr, measure="tpr", x.measure="fpr")

plot(prf)

## Area under the ROC curve (Alice, 2015)

auc= performance(pr, measure = "auc")

auc= auc@y.values[[1]]

auc # The closer this value is to 1 the better, much better than to 0.5

 

 

References

Adv Quant: More on Logistic Regression

In assessing the predictive power of categorical predictors of a binary outcome, should logistic regression be used? In other words, how is the logistic function used to predict categorical outcomes?

Logistic regression is another flavor of multi-variable regression, where one or more continuous or categorical independent variables are used to predict a dichotomous/binary/categorical dependent variable (Ahlemeyer-Stubbe & Coleman, 2014; Field, 2013; Gall, Gall, & Borg, 2006; Huck, 2011).  Zheng and Agresti (2000) define predictive power as a measure that helps compare competing regressions by analyzing the importance of the independent variables.  For linear regression and multiple linear regression, the correlation coefficient and coefficient of determination are adequate measures of predictive power (Field, 2014; Zheng & Agresti, 2000). Collecting more data could yield stronger predictive power (Field, 2014).  Predictive power is used to sell the relationships between variables to management (Ahlemeyer-Stubbe & Coleman, 2014).

For logistic regression, the predictive power of the independent variables can be evaluated through the concept of the odds ratio for each independent variable (Huck, 2011). Field (2013) and Schumacker (2014) explained that when the logistic regression is calculated, the binary dependent variable is transformed into the ln(odds ratio), and a regression is then performed on this newly scaled variable (scale factor seen in equation 1):

ln(odds) = ln[ p / (1 - p) ]                                                 (1)

Since the probability of the categorical outcome varies between 0 and 0.999…, the odds ratio value can vary between 0 and 999.999… (Schumacker, 2014). If the value of the odds ratio is greater than 1, it shows that as the independent variable value increases, so do the odds that the dependent variable occurs, and vice versa (Field, 2013). Thus, the odds ratio measures the constant strength of association between the independent and dependent variables (Huck, 2011; Smith, 2015).  Due to this ln(odds ratio) transformation, logistic regression should be used for binary outcomes.

Field (2013) and Schumacker (2014) further explained that, given that this ln(odds ratio) transformation needs to be made, the way to predict categorical outcomes from the regression formula (2),

ln[ p / (1 - p) ] = b0 + b1X1 + b2X2 + …                                            (2)

is best explained through the probability of the categorical outcome value one is trying to calculate:

p = e^(b0 + b1X1 + b2X2 + …) / [ 1 + e^(b0 + b1X1 + b2X2 + …) ]                                (3)

The probability equation (3) can be expressed in multiple ways through typical algebraic manipulations.  Thus, the probability/likelihood of the dependent variable Y is defined between 0 and 100%, and the odds ratio is used to discuss the strength of these relationships.
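
A minimal sketch in R (with simulated data, not tied to any particular study) shows the chain from equation (1) to equation (3): glm() fits coefficients on the ln(odds) scale, exp() converts them to odds ratios, and a response-scale prediction recovers the probability:

set.seed(1)
x = rnorm(200)
y = rbinom(200, 1, plogis(-0.5 + 1.2 * x))  # simulated binary outcome

fit = glm(y ~ x, family = binomial)

coef(fit)                                           # coefficients on the ln(odds) scale, equation (2)
exp(coef(fit))                                      # odds ratios: multiplicative change in odds per unit of x
predict(fit, data.frame(x = 1), type = "response")  # predicted probability at x = 1, equation (3)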

References

  • Ahlemeyer-Stubbe, A., & Coleman, S. (2014). A Practical Guide to Data Mining for Business and Industry (1st ed.). [VitalSource Bookshelf Online].
  • Gall, M. D., Gall, J. P., & Borg, W. R. (2006). Educational Research: An Introduction (8th ed.). [VitalSource Bookshelf Online].
  • Field, A. (2013). Discovering Statistics Using IBM SPSS Statistics (4th ed.). [VitalSource Bookshelf Online].
  • Huck, S. W. (2011). Reading Statistics and Research (6th ed.). [VitalSource Bookshelf Online].
  • Schumacker, R. E. (2014). Learning Statistics Using R (1st ed.). [VitalSource Bookshelf Online].
  • Smith, M. (2015). Statistical analysis handbook. Retrieved from http://www.statsref.com/HTML/index.html?introduction.html
  • Zheng, B., & Agresti, A. (2000). Summarizing the predictive power of a generalized linear model. Retrieved from http://www.stat.ufl.edu/~aa/articles/zheng_agresti.pdf

Adv Quant: Logistic Vs Linear Regression

A discussion of the assumptions that must be met for logistic regression, the assumptions for regular regression that do not apply in logistic regression, and the types of variables used in logistic regression and regular regression.

To generalize the results of research, the insights gained from a sample of data need to rely on the correct mathematical procedures for using probabilities and information, that is, statistical inference (Gall et al., 2006; Smith, 2015).  Gall et al. (2006) stated that statistical inference dictates the order of procedures; for instance, a hypothesis and a null hypothesis must be defined before a statistical significance level, which in turn has to be defined before calculating a z or t statistic value. Essentially, statistical inference allows quantitative researchers to make inferences about a population.  Researchers must remember the population from which the data were generated and collected during the quantitative research process.  The order of procedures is important for applying statistical inference to regressions; if it is not followed, the prediction formula will not be generalizable.

Logistic regression is another flavor of multi-variable regression, where one or more continuous or categorical independent variables are used to predict a dichotomous/binary/categorical dependent variable (Ahlemeyer-Stubbe & Coleman, 2014; Field, 2013; Gall, Gall, & Borg, 2006; Huck, 2011).  Logistic regression is an alternative to linear regression, which assumes all variables are continuous (Ahlemeyer-Stubbe & Coleman, 2014). Both the multi-variable linear regression and logistic regression formulas take the form (Field, 2013; Schumacker, 2014):

Y = a + b1X1 + b2X2 + …                                                       (1)

The main difference between these two regressions is that the variables in the equation (1) represent different types of dependent (Y) and independent variables (Xi).  These different types of variables may have to undergo a transformation before the regression analysis begins (Field, 2013; Schumacker 2014).  Due to the difference in the types of variables between logistic and linear regression the assumptions on when to use either regression are also different (Table 1).

Table 1: Summary of the assumptions and variable types used in both logistic and regular (linear) regression, compiled from Ahlemeyer-Stubbe & Coleman (2014), Field (2013), Gall et al. (2006), Huck (2011), and Schumacker (2014).

 

Assumptions of Logistic Regression

  • Multicollinearity should be minimized between the independent variables
  • There is no need for linearity between the dependent and independent variables
  • Normality only on the continuous independent variables
  • No need for homogeneity of variance within the categorical variables
  • Error terms are not normally distributed
  • Independent variables don’t have to be continuous
  • There are no missing data (no null values)
  • Variance that is not zero

Assumptions for Linear Regression

  • Multicollinearity should be minimized between the multiple independent variables
  • Linearity exists between all variables
  • Additivity (for multi-variable linear regression)
  • Errors in the dependent variable and its predicted values are independent and uncorrelated
  • All variables are continuous
  • Normality on all variables
  • Normality on the error values
  • Homogeneity of variance
  • Homoscedasticity: the variance of the residuals is constant
  • Variance that is not zero

Variable Types of Logistic Regression

  • 2 or more independent variables
  • Independent variables: continuous, dichotomous, binary, or categorical
  • Dependent variable: dichotomous or binary

Variable Types of Linear Regression

  • 1 or more independent variables
  • Independent variables: continuous
  • Dependent variable: continuous
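
A minimal sketch in R (with simulated data, purely for illustration) contrasts the two calls implied by Table 1: lm() for a continuous dependent variable and glm() with a binomial family for a dichotomous one:

set.seed(2)
x1 = rnorm(100)
x2 = factor(sample(c("A", "B"), 100, replace = TRUE))   # a categorical predictor

yContinuous = 1 + 2 * x1 + rnorm(100)                   # continuous dependent variable
yBinary     = rbinom(100, 1, plogis(-0.3 + 1.5 * x1))   # dichotomous dependent variable

linearFit   = lm(yContinuous ~ x1 + x2)                 # linear regression
logisticFit = glm(yBinary ~ x1 + x2, family = binomial) # logistic regression

summary(linearFit)
summary(logisticFit)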

References

  • Ahlemeyer-Stubbe, A., & Coleman, S. (2014). A Practical Guide to Data Mining for Business and Industry (1st ed.). [VitalSource Bookshelf Online].
  • Gall, M. D., Gall, J. P., & Borg, W. R. (2006). Educational Research: An Introduction (8th ed.). [VitalSource Bookshelf Online].
  • Field, A. (2013). Discovering Statistics Using IBM SPSS Statistics (4th ed.). [VitalSource Bookshelf Online].
  • Huck, S. W. (2011). Reading Statistics and Research (6th ed.). [VitalSource Bookshelf Online].
  • Schumacker, R. E. (2014). Learning Statistics Using R (1st ed.). [VitalSource Bookshelf Online].
  • Smith, M. (2015). Statistical analysis handbook. Retrieved from http://www.statsref.com/HTML/index.html?introduction.html

Adv Quant: Overfitting & Parsimony

Parsimony in statistical modeling is often discussed in terms of Occam’s razor in the formulation of hypotheses.

Overfitting and Parsimony

Overfitting a regression model means stuffing it with so many variables that have little contributional weight toward predicting the dependent variable (Field, 2013; Vandekerckhove, Matzke, & Wagenmakers, 2014).  Thus, to avoid the overfitting problem, parsimony is important in big data analytics.  Parsimony is describing a dependent variable with the fewest independent variables possible (Field, 2013; Huck, 2013; Smith, 2015).  The best way to describe this is to apply the “Keep It Simple” (KISS) concept to the regression model.  The concept of parsimony can be attributed to Occam’s razor, which states that “plurality should not be posited without necessity” (Duignan, 2015).  Vandekerckhove et al. (2014) describe parsimony as a way of removing the noise from the signal to create better predictive regression models.

Overfitting in General Least Squares Model (GLM)

For multivariate regressions, a correlation matrix could be computed on all the variables to help identify parsimony, such that the software tries to maximize the correlation while minimizing the number of variables (Field, 2013).  Smith (2015) stated that the proportion of variation explained should remain high, while the correlation between the separate independent variables should be as low as possible. If the correlation coefficient between independent variables is high (0.8 or higher), then there is a chance that there are extraneous variables (Smith, 2015).   Another technique to achieve parsimony is the backward stepwise method, which runs a regression model with all variables and removes those that don’t contribute significantly to the model; alternatively, the model could start with one variable and add variables until correlation and explained variance are maximized, in a forward stepwise method (Field, 2013; Huck, 2015).

Unfortunately, there is still a problem of overfitting when conducting a backward stepwise method, forward stepwise method, or correlation matrix in multivariate linear models.  That is because computers tend to remove, add, or consider variables systematically and mathematically, not based on human knowledge (Field, 2013; Huck, 2015). Thus, it is still important to have a human evaluate the computational output for logic, consistency, and reliability.  However, if the focus is to reduce overfitting, it should be noted that underfitting should also be avoided.  Underfitting a regression model happens when the model leaves out key independent variables that could help predict the dependent variable (Field, 2013).
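
A minimal backward stepwise sketch in R, using the built-in mtcars data rather than any data set discussed above, shows how step() drops predictors that do not improve the AIC; the human check described above is still needed afterwards:

# Start with every available predictor of fuel economy
fullModel = lm(mpg ~ ., data = mtcars)

# Backward stepwise selection: remove variables one at a time while the AIC improves
reducedModel = step(fullModel, direction = "backward", trace = FALSE)

summary(reducedModel)  # the retained, more parsimonious set of predictors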

Hierarchical regression methods

When researchers build a multivariate regression model, they build it in stages: they tend to add known independent variables first and add newer independent variables later to avoid overfitting, in a technique called hierarchical regression (Austin, Goel, & van Walraven, 2001; Field, 2013; Huck, 2013).  The new and unknown independent variables could be entered through a stepwise algorithm as mentioned above, or another step could be created where suspected new variables that may contribute strongly to the predictability of the dependent variable are added next (Field, 2013).  Hierarchical regression methods allow the researcher to analyze the differing hierarchical levels by examining not only the correlations between the levels but also the intercepts and slopes, helping drive valid statistical inferences (Austin et al., 2001).

Vandekerckhove et al. (2014) listed these three methods for model selection, where each method balances goodness of fit against parsimony:

  • Akaike’s Information Criterion (AIC) considers how much the observed data influence the belief in one model over the other, but is unreliable with huge amounts of data
  • Bayesian Information Criterion (BIC) considers how much the observed data influence the belief in one model over the other and can handle huge amounts of data, but is known to underfit
  • Minimum Description Length (MDL) considers how much a model can compress the observed data by identifying regularity within the data values

Vandekerckhove et al. (2014) also stated that the model with the lowest AIC and/or BIC score would be the best to choose.
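
A minimal sketch in R (again with the built-in mtcars data, as an assumed example) compares two candidate models by AIC and BIC, where the lower score is preferred:

smallModel = lm(mpg ~ wt, data = mtcars)
bigModel   = lm(mpg ~ wt + hp + qsec + drat, data = mtcars)

AIC(smallModel, bigModel)  # information criterion with a complexity penalty of 2 per parameter
BIC(smallModel, bigModel)  # penalizes additional parameters more heavily for larger samples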

In conclusion, under parsimony, if adding another variable does not improve the regression formula, then it should not be added to the assessment, to avoid overfitting (Field, 2013). General Least Squares Models have issues with overfitting because computers conduct their analysis systematically and mathematically and lack the human knowledge needed to keep removing unneeded variables from the equation.  Hierarchical regression methods can help minimize overfitting through an indirect calculation of a parsimony value (Vandekerckhove et al., 2014).

References

  • Austin, P. C., Goel, V., & van Walraven, C. (2001). An introduction to multilevel regression models. Canadian Journal of Public Health, 92(2), 150.
  • Duignan, B. (2015). Occam’s razor. Encyclopaedia Britannica. Retrieved from https://www.britannica.com/topic/Occams-razor
  • Field, Andy. (2013). Discovering Statistics Using IBM SPSS Statistics, 4th Edition. [VitalSource Bookshelf Online].
  • Huck, S. W. (2013) Reading Statistics and Research (6th ed.). Pearson Learning Solutions. [VitalSource Bookshelf Online].
  • Smith, M. (2015). Statistical analysis handbook. Retrieved from http://www.statsref.com/HTML/index.html?introduction.html
  • Vandekerckhove, J., Matzke, D., & Wagenmakers, E. J. (2014). Model comparison and the principle of parsimony.

Adv Quant: Polynomial Regression in R


Introduction

For this local polynomial regression, the “oldfaithful.csv” dataset from open-source data will be used. The eruption times (in minutes) and the waiting times to the next eruption (in minutes) of 272 eruptions of the Old Faithful geyser are provided.

Results


Figure 1: Density histograms for eruption times and eruption waiting times.


Figure 2: Smoothed density histogram from the local polynomial regression.


Figure 3: Intercomparison of linear regression (blue), lowess regression (red), and polynomial regression (green) on the eruption data.


Figure 4: Residual plots for both linear and polynomial regression.

Discussion

The histogram plots (Figure 1) illustrate that both variables, eruption times and eruption waiting times, have bimodal distributions.  Thus, a linear regression (Figure 3) would not fully capture the relationship between these two variables.  A polynomial-smoothed version of the bimodal curve (Figure 2) shows that for low values of the geyser’s magnitude there is a low wait time to the next occurrence, and vice versa.  The smoothed density curve shows the estimated distribution of the geyser variables better than the bar histogram.

LOCFIT (locally fitted regression) and LOWESS (locally weighted scatterplot smoothing regression) are assessed alongside the typical LM (linear regression).  LOCFIT is based on LOWESS, which allows the end user to specify the smoothing parameter and neighborhood size, but LOCFIT affords the end user more control over the other smoothing parameters (Futschik & Crompton, 2004).  Both LOCFIT and LOWESS are regression methods that use a nearest-neighbor-based model (Field, 2013; Futschik & Crompton, 2004; Loader, 2013; Smith, 2015).  This analysis will look at all three.

The goal is to see if there is a relationship between the waiting time to the next eruption and the magnitude of the eruption (per eruption time).  Through the linear regression algorithm, the linear model is eruptions = 0.075628 (waiting) – 1.874016.  The Pearson correlation coefficient is 0.9008112; thus, about 81.1% of the variation can be explained by the linear regression model.  The lowess regression appears not to capture the distribution of the data at smaller eruption times, but it is better than the linear regression model, since its correlation is 0.9809684 and it can explain about 96.2% of the variation between the variables.

Finally, to evaluate the effectiveness of the linear model and the polynomial model, the residuals must be assessed (Figure 4). Neither of the residual plots shows any discernible pattern. However, the residuals are closer to zero in the polynomial regression, suggesting that it does a better job of explaining the variance between the eruption magnitude and the next-eruption wait time.  In conclusion, the best regression for this data set appears to be the polynomial regression.

Code

#

## Use R to analyze the faithful dataset.

## This is a version of the eruption data from the “Old Faithful” geyser in Yellowstone National Park, Wyoming.

##  •     X (primary key)

##  •     eruptions (eruption time [mins])

##  •     waiting (waiting time to the next eruption [mins])

#

fateful = read.csv(file="https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/master/csv/datasets/faithful.csv", header = TRUE, sep = ",")

head(fateful)

# Produce density histograms of eruption times and of waiting times.

hist(fateful$eruptions, freq=F, xlab = "eruptions time [mins]",  main = "Histogram of the eruptions time")

hist(fateful$waiting, freq=F, xlab = "eruptions waiting time [mins]",  main = "Histogram of the eruptions waiting time")

# Produce a smoothed density histogram from local polynomial regression.

install.packages("locfit")

library(locfit)

plot(locfit(~lp(fateful$eruptions),data=fateful), xlab = "eruptions time [mins]",  main = "Histogram of the eruptions time")

plot(locfit(~lp(fateful$waiting),data=fateful), xlab = "eruptions waiting time [mins]",  main = "Histogram of the eruptions waiting time")

# Compare local polynomial regression to regular regression.

lowessRegression = lowess(fateful$waiting, fateful$eruptions, f=2/3)

polynomialRegression = locfit(fateful$eruptions~lp(fateful$waiting))

linearRegression = lm(fateful$eruptions~fateful$waiting)

# Graphing the data

plot(fateful$waiting, fateful$eruptions, main = "Eruption Times", xlab="Waiting time to next eruption [min]", ylab = "Eruption time [min]")

lines(lowessRegression, col="red")

abline(linearRegression, col="blue")

lines(polynomialRegression, col="green")

# summary on the regressions

summary(linearRegression)

# correlations on the regressions

cor(fateful$eruptions,fateful$waiting)

cor(lowessRegression$x, lowessRegression$y)

# Plotting residuals

plot(residuals(linearRegression), main = "residuals for the linear regression", ylab = "residuals")

plot(residuals(polynomialRegression), main = "residuals for the polynomial regression", ylab="residuals")

References