## Adv Quant: Logistic Vs Linear Regression

To generalize insights gained from a sample of data to the population, research must use the correct mathematical procedures for handling probabilities and information, that is, statistical inference (Gall et al., 2006; Smith, 2015). Gall et al. (2006) stated that statistical inference dictates the order of procedures: for instance, a hypothesis and a null hypothesis must be defined before the statistical significance level, which in turn must be defined before calculating a z or t statistic. Essentially, statistical inference allows quantitative researchers to make inferences about a population, and researchers must remember where the data were generated and collected during the quantitative research process. This order of procedures is important when applying statistical inference to regressions; otherwise, the prediction formula will not be generalizable.
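The order of procedures described above can be sketched in code. This is a minimal illustration using scipy on hypothetical sample data (the population mean, sample values, and alpha level are all assumptions for the example, not drawn from the text):

```python
# Sketch of the order of inference steps: hypotheses first, then the
# significance level, and only then the test statistic (hypothetical data).
import numpy as np
from scipy import stats

# 1. State the hypotheses: H0: population mean = 100; H1: mean != 100.
# 2. Choose the significance level BEFORE computing any statistic.
alpha = 0.05

# 3. Only then compute the t statistic from the sample.
rng = np.random.default_rng(42)
sample = rng.normal(loc=103, scale=10, size=50)  # hypothetical sample
t_stat, p_value = stats.ttest_1samp(sample, popmean=100)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```

Reversing this order (e.g., picking alpha after seeing the p-value) is exactly what the cited order of procedures is meant to prevent.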

Logistic regression is another form of multi-variable regression, in which one or more continuous or categorical independent variables are used to predict a dichotomous/binary/categorical dependent variable (Ahlemeyer-Stubbe & Coleman, 2014; Field, 2013; Gall, Gall, & Borg, 2006; Huck, 2011). It is an alternative to linear regression, which assumes all variables are continuous (Ahlemeyer-Stubbe & Coleman, 2014). Both the multi-variable linear regression and the logistic regression formula take the form (Field, 2013; Schumacker, 2014):

Y = a + b1X1 + b2X2 + …                                                       (1)

The main difference between the two regressions is that the variables in equation (1) represent different types of dependent (Y) and independent (Xi) variables.  These variables may have to undergo a transformation before the regression analysis begins; in logistic regression, the dichotomous outcome is modeled through the logit (log-odds) transformation (Field, 2013; Schumacker, 2014).  Because the variable types differ between logistic and linear regression, the assumptions on when to use each regression also differ (Table 1).
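The contrast between the two models can be sketched with scikit-learn. This is a minimal illustration on synthetic data (the variable names, coefficients, and sample sizes are all hypothetical, chosen only to show the two fitting paths):

```python
# Minimal sketch: linear regression for a continuous outcome vs.
# logistic regression for a 0/1 outcome (hypothetical data).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))            # two continuous predictors

# Linear regression: continuous dependent variable.
y_cont = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)
lin = LinearRegression().fit(X, y_cont)

# Logistic regression: dichotomous dependent variable, generated
# through the logit link (probability = inverse logit of a + b1X1 + b2X2).
logit = 1.5 * X[:, 0] - 2.0 * X[:, 1]
y_bin = (rng.random(200) < 1 / (1 + np.exp(-logit))).astype(int)
log_reg = LogisticRegression().fit(X, y_bin)

print("linear coefficients:", lin.coef_)
print("logistic coefficients (log-odds scale):", log_reg.coef_)
print("predicted class probabilities:", log_reg.predict_proba(X[:3]))
```

Note that the linear model predicts values of Y directly, while the logistic model predicts probabilities of class membership, which is why its coefficients live on the log-odds scale.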

Table 1: Assumptions and variable types for logistic and linear regression, compiled from Ahlemeyer-Stubbe and Coleman (2014), Field (2013), Gall et al. (2006), Huck (2011), and Schumacker (2014).

| Assumptions of Logistic Regression | Assumptions of Linear Regression |
| --- | --- |
| Multicollinearity should be minimized between the independent variables | Multicollinearity should be minimized between the multiple independent variables |
| No need for linearity between the dependent and independent variables | Linearity exists between all variables |
| Normality required only for the continuous independent variables | Additivity (for multi-variable linear regression) |
| No need for homogeneity of variance within the categorical variables | Errors in the dependent variable and its predicted values are independent and uncorrelated |
| Error terms are not normally distributed | All variables are continuous |
| Independent variables do not have to be continuous | Normality on all variables |
| No missing data (no null values) | Normality on the error values |
| Variance that is not zero | Homogeneity of variance |
|  | Homoscedasticity: the variance of the residuals is constant |
|  | Variance that is not zero |

| Variable Types of Logistic Regression | Variable Types of Linear Regression |
| --- | --- |
| 2 or more independent variables | 1 or more independent variables |
| Independent variables: continuous, dichotomous, binary, or categorical | Independent variables: continuous |
| Dependent variable: dichotomous or binary | Dependent variable: continuous |
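The multicollinearity assumption shared by both columns of the table can be checked numerically before fitting either model. This is a minimal sketch using pairwise correlations on hypothetical predictors (the variable names and the 0.8 cutoff are illustrative assumptions; other rules of thumb exist):

```python
# Sketch of a multicollinearity check via pairwise correlations
# among independent variables (hypothetical data).
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=100)   # deliberately collinear with x1
x3 = rng.normal(size=100)                    # independent of the others

X = np.column_stack([x1, x2, x3])
corr = np.corrcoef(X, rowvar=False)

# Flag predictor pairs whose |r| exceeds a common rule-of-thumb cutoff.
cutoff = 0.8
for i in range(corr.shape[0]):
    for j in range(i + 1, corr.shape[1]):
        if abs(corr[i, j]) > cutoff:
            print(f"x{i+1} and x{j+1} may be collinear: r = {corr[i, j]:.2f}")
```

A flagged pair suggests dropping or combining one of the predictors before running either regression.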

References

• Ahlemeyer-Stubbe, A., & Coleman, S. (2014). A practical guide to data mining for business and industry (1st ed.). [VitalSource Bookshelf Online].
• Field, A. (2013). Discovering statistics using IBM SPSS Statistics (4th ed.). [VitalSource Bookshelf Online].
• Gall, M. D., Gall, J. P., & Borg, W. R. (2006). Educational research: An introduction (8th ed.). [VitalSource Bookshelf Online].
• Huck, S. W. (2011). Reading statistics and research (6th ed.). [VitalSource Bookshelf Online].
• Schumacker, R. E. (2014). Learning statistics using R (1st ed.). [VitalSource Bookshelf Online].

## Quant: ANOVA and Multiple Comparisons in SPSS

Introduction

The aim of this analysis is to examine the relationship between the dependent variable, respondents' income level (rincdol), and the independent variable, their reported level of happiness (happy).  The independent variable has three or more levels.

From the SPSS outputs, the goals are to:

• Use the one-way ANOVA procedure to reach an overall conclusion.
• Use the Bonferroni correction as a post-hoc analysis to determine how specific levels of happiness relate to income.

Hypothesis

• Null: There is no difference in mean rincdol across the overall levels of happy.
• Alternative: There are real differences in mean rincdol across the overall levels of happy.
• Null2: There is no difference in mean rincdol between specific pairs of happy levels.
• Alternative2: There are real differences in mean rincdol between specific pairs of happy levels.

Methodology

For this project, the gss.sav file is loaded into SPSS (GSS, n.d.).  The goal is to examine the relationship between the following variables: rincdol (Respondent's income; ranges recoded to midpoints) and happy (General Happiness). To conduct a parametric analysis, navigate to Analyze > Compare Means > One-Way ANOVA.  The variable rincdol is placed in the "Dependent List" box, and happy is placed in the "Factor" box.  Select "Post Hoc" and, under "Equal Variances Assumed", select "Bonferroni".  The procedures for this analysis are provided in video tutorial form by Miller (n.d.). The resulting output is shown in the next two tables.

The relationship between rincdol and happy is plotted using the Chart Builder.  The Chart Builder syntax is shown in the SPSS Code section, and the resulting image appears in the Results section.
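The one-way ANOVA step described above can also be reproduced outside SPSS. This is a minimal sketch using scipy on hypothetical income samples for three happiness levels (the group means, spreads, and sample sizes are illustrative assumptions, not the GSS data):

```python
# Sketch of a one-way ANOVA analogous to the SPSS ONEWAY step
# (hypothetical income samples for three happiness groups).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
very_happy    = rng.normal(loc=38000, scale=12000, size=300)
pretty_happy  = rng.normal(loc=34000, scale=12000, size=400)
not_too_happy = rng.normal(loc=25000, scale=12000, size=100)

f_stat, p_value = stats.f_oneway(very_happy, pretty_happy, not_too_happy)
print(f"F = {f_stat:.3f}, p = {p_value:.5f}")
if p_value < 0.05:
    print("Reject the null: at least one group mean differs.")
```

As in SPSS, the F test only says that at least one group mean differs; identifying which pairs differ requires a post-hoc procedure such as the Bonferroni correction.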

Results

Table 1: ANOVA

Dependent variable: Respondent's income; ranges recoded to midpoints

|                | Sum of Squares   | df  | Mean Square    | F     | Sig. |
| -------------- | ---------------- | --- | -------------- | ----- | ---- |
| Between Groups | 11009722680.000  | 2   | 5504861341.000 | 9.889 | .000 |
| Within Groups  | 499905585000.000 | 898 | 556687733.900  |       |      |
| Total          | 510915307700.000 | 900 |                |       |      |

The ANOVA analysis (Table 1) shows that the overall test is statistically significant, F(2, 898) = 9.889, p < .001, so the first null hypothesis is rejected at the 0.05 level. Thus, there is a statistically significant difference in mean rincdol across the levels of happy.  However, the overall test does not reveal which group means differ; the post-hoc comparisons in Table 2 address this.

Table 2: Multiple Comparisons

Dependent variable: Respondent's income; ranges recoded to midpoints. Post-hoc test: Bonferroni.

| (I) GENERAL HAPPINESS | (J) GENERAL HAPPINESS | Mean Difference (I-J) | Std. Error | Sig. | 95% CI Lower Bound | 95% CI Upper Bound |
| --------------------- | --------------------- | --------------------- | ---------- | ---- | ------------------ | ------------------ |
| VERY HAPPY            | PRETTY HAPPY          | 4093.678              | 1744.832   | .058 | -91.26             | 8278.61            |
| VERY HAPPY            | NOT TOO HAPPY         | 12808.643*            | 2912.527   | .000 | 5823.02            | 19794.26           |
| PRETTY HAPPY          | VERY HAPPY            | -4093.678             | 1744.832   | .058 | -8278.61           | 91.26              |
| PRETTY HAPPY          | NOT TOO HAPPY         | 8714.965*             | 2740.045   | .005 | 2143.04            | 15286.89           |
| NOT TOO HAPPY         | VERY HAPPY            | -12808.643*           | 2912.527   | .000 | -19794.26          | -5823.02           |
| NOT TOO HAPPY         | PRETTY HAPPY          | -8714.965*            | 2740.045   | .005 | -15286.89          | -2143.04           |

\*. The mean difference is significant at the 0.05 level.

According to Table 2, the pairing of "Very Happy" and "Pretty Happy" fails to reject Null2 at the 0.05 level (p = .058).  However, the other pairings, "Very Happy" versus "Not Too Happy" and "Pretty Happy" versus "Not Too Happy", do reject the Null2 hypothesis at the 0.05 level.  Thus, mean income differs for two of the three pairs.

Figure 1: Graphed means of General Happiness versus income.
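The Bonferroni post-hoc step can be mirrored with pairwise t-tests whose p-values are multiplied by the number of comparisons. This is a minimal sketch on hypothetical group samples (the group parameters are illustrative assumptions, not the GSS data):

```python
# Sketch of Bonferroni-corrected pairwise comparisons, mirroring the
# SPSS post-hoc step (hypothetical group samples).
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(7)
groups = {
    "VERY HAPPY":    rng.normal(38000, 12000, 300),
    "PRETTY HAPPY":  rng.normal(34000, 12000, 400),
    "NOT TOO HAPPY": rng.normal(25000, 12000, 100),
}

alpha = 0.05
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    # Bonferroni: multiply each raw p-value by the number of tests.
    p_adj = min(p * len(pairs), 1.0)
    verdict = "significant" if p_adj < alpha else "not significant"
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f} ({verdict})")
```

Multiplying by the number of tests keeps the family-wise error rate at alpha, which is why a pair can be significant in an uncorrected t-test yet fail the Bonferroni criterion.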

Mean income increases with reported general happiness (Figure 1): people reporting lower levels of general happiness tend to have lower mean incomes, and vice versa.  No direction of causality can be inferred from this analysis.  It cannot show whether higher income causes greater general happiness or whether happier people earn more because of their positive attitude toward life.

SPSS Code

```
DATASET NAME DataSet1 WINDOW=FRONT.

ONEWAY rincdol BY happy
  /MISSING ANALYSIS
  /POSTHOC=BONFERRONI ALPHA(0.05).

* Chart Builder.
GGRAPH
  /GRAPHDATASET NAME="graphdataset" VARIABLES=happy MEAN(rincdol)[name="MEAN_rincdol"]
    MISSING=LISTWISE REPORTMISSING=NO
  /GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
  SOURCE: s=userSource(id("graphdataset"))
  DATA: happy=col(source(s), name("happy"), unit.category())
  DATA: MEAN_rincdol=col(source(s), name("MEAN_rincdol"))
  GUIDE: axis(dim(1), label("GENERAL HAPPINESS"))
  GUIDE: axis(dim(2), label("Mean Respondent's income; ranges recoded to midpoints"))
  SCALE: cat(dim(1), include("1", "2", "3"))
  SCALE: linear(dim(2), include(0))
  ELEMENT: line(position(happy*MEAN_rincdol), missing.wings())
END GPL.
```

References: