Adv Quant: Locally Weighted Scatterplot Smoothing (LOWESS) in R

This post applies the locally weighted scatterplot smoothing (LOWESS) method to a multiple regression problem using a k-nearest-neighbor-based model. The discussion covers whether this is a parametric or nonparametric method, as well as some of the advantages and disadvantages of LOWESS from a computational standpoint.


The locally weighted scatterplot smoothing (LOWESS) method, applied to a multiple regression model in a k-nearest-neighbor-based fashion, is a regression model with one or more independent variables that uses a non-parametric method to create a smoothed curve or surface (Field, 2013; Smith, 2015).  LOWESS deliberately avoids imposing a parametric model on the data, because doing so would require far more resources (Cleveland, 1979).  Non-parametric tests have fewer assumptions than parametric tests; in particular, they make no assumption about the population distribution of the sampled variable (Field, 2013; Huck, 2013; Schumacker, 2014; Smith, 2015).

Assumptions in parametric analysis, which is based on the normal distribution, include (1) additivity and linearity; (2) normality; (3) homoscedasticity/homogeneity of variance; and (4) independence (Field, 2013; Huck, 2013). The assumption of independence, however, still applies in non-parametric analysis (Huck, 2013).  Smith (2015) states that non-parametric analyses are less powerful than parametric analyses.  Field (2013) disagrees and says that they are powerful, but admits that some information about the magnitude of differences between the observed values is lost.  Huck (2013) states that, when used correctly, non-parametric analyses carry similar power to parametric analyses run on data that meet the parametric assumptions. To conduct a non-parametric analysis, data values are ranked and arranged, so higher-valued data receive higher-valued ranks and vice versa (Field, 2013; Huck, 2013; Smith, 2015). Cleveland (1979) describes how only a fraction of the data (the local neighbors) is considered at a time when minimizing the weighted fitting criterion.  A LOWESS regression carried out on ranked data thus helps eliminate the effects of outliers and irons out skewed distributions (Field, 2013; Smith, 2015).
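The mechanics Cleveland (1979) describes, a local neighborhood, a distance-based weighting function, and a weighted fit, can be sketched in a few lines. The following is a minimal, language-neutral Python illustration of evaluating a LOWESS smooth at a single point using the tricube weight function; it is a simplified sketch of the idea, not the full robust iterated procedure Cleveland defines.

```python
import math

def tricube(u):
    # Tricube weight: near 1 for close points, exactly 0 outside the neighborhood
    return (1 - abs(u) ** 3) ** 3 if abs(u) < 1 else 0.0

def lowess_at(x0, xs, ys, f=2/3):
    """Estimate the smoothed value at x0 using the fraction f of nearest points."""
    n = len(xs)
    k = max(2, math.ceil(f * n))              # neighborhood size from the span f
    dists = sorted(abs(x - x0) for x in xs)
    h = dists[k - 1] or 1e-12                 # bandwidth = distance to k-th neighbor
    w = [tricube(abs(x - x0) / h) for x in xs]
    # Weighted least-squares line fitted through the local neighborhood
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, xs))
    sy = sum(wi * yi for wi, yi in zip(w, ys))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, xs))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    denom = sw * sxx - sx * sx
    b = (sw * sxy - sx * sy) / denom
    a = (sy - b * sx) / sw
    return a + b * x0                         # local fit evaluated at x0
```

Evaluating `lowess_at` across a grid of x0 values traces out the smoothed curve; only points inside the neighborhood carry weight, which is why outliers far from x0 have no influence on the local fit.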

Advantages and disadvantages

+ LOWESS doesn’t depend on an underlying population distribution (Field, 2013; Huck, 2013; Schumacker, 2014; Smith, 2015)

+ Looking at the data’s local neighboring data creates a smoothing function, which visually enhances pattern (Cleveland, 1979)

– The LOWESS technique is not a substitute for parametric regression analysis (Huck, 2013).  Before relying on non-parametric tests, one should be able to reject the null hypothesis that the data follow a defined distribution, in favor of the alternative hypothesis that they do not (Field, 2013; Huck, 2013).

– LOWESS is computationally heavy, especially depending on the weights chosen (Cleveland, 1979).

– Though the LOWESS fit is easily represented visually as a smoothed curve, the underlying regression formula cannot be written as cleanly (Cleveland, 1979).

Multiple Regression Analysis

From the R datasets archive website (http://vincentarelbundock.github.io/Rdatasets/), the NOxEmissions.csv file was downloaded. This is the NOx air pollution dataset, and it has five variables: a primary key, the Julian calendar day (julday), the hourly mean of NOx concentrations in the air in parts per billion (LNOx), the hourly sum of NOx car emissions in parts per billion (LNOxEm), and the square root of the wind speed in meters per second (sqrtWS).

From this dataset, it is hypothesized that wind speed combined with the sum of NOx from car emissions could contribute to the mean NOx concentration in the atmosphere.  Given that there are multiple independent variables for one dependent variable, multiple regression analysis is best suited (Field, 2013; Huck, 2013; Schumacker, 2014; Smith, 2015).

IP1.51F1

Figure 1: Histogram of each of the variables in the data set.

IP1.51F2
Figure 2: Simple linear regression of each independent variable against the dependent variable.  For the image on the right, the regression formula is LNOx = -0.86443(sqrtWS) + 5.55885, with a correlation of -0.4300; for the image on the left, the regression formula is LNOx = 0.590066(LNOxEm) + 0.048645, with a correlation of 0.6399.

IP1.51F3

Figure 3: The summary output of the multiple linear regression, where the regression formula is LNOx = -1.018198(sqrtWS) + 0.641412(LNOxEm) + 1.061950, which explains 66.3% of the variation in LNOx.

IP1.51F4.png

Figure 4: Normal Quantile-Quantile plot, for the multiple linear regression as described by Figure 3.

The histograms (Figure 1) are not convincing that this data could be tested with a standard multiple linear regression analysis, but the normal quantile-quantile plot (Figure 4) shows approximate normality in the data, justifying the results (Figure 3).  To further the understanding of the multiple linear regression, the simple linear regression for each independent variable (Figure 2) shows that neither independent variable alone explains the variance as well as the multiple regression analysis does.
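As a quick numerical check, the fitted surface reported in Figure 3 can be evaluated directly. The helper name below is hypothetical; the coefficients are taken verbatim from the regression summary in Figure 3.

```python
def predict_lnox(sqrt_ws, lnox_em):
    """Evaluate the fitted multiple-regression surface from Figure 3."""
    return -1.018198 * sqrt_ws + 0.641412 * lnox_em + 1.061950

# Higher wind speed lowers predicted NOx; more car emissions raise it
baseline = predict_lnox(2.0, 5.0)
windier = predict_lnox(3.0, 5.0)
```

The signs of the coefficients match the simple regressions in Figure 2: sqrtWS is negatively related to LNOx, while LNOxEm is positively related.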

IP1.51F5.png

Figure 5: Multiple LOWESS regression plot with varying smoothing span.

Even though the data show approximate normality, a LOWESS fit was still plotted, to illustrate how differences between smoothing factors can influence the result.  The smoothing factor determines the size of the neighborhood used in the k-nearest-neighbor fit (Cleveland, 1979).  The smaller the smoothing factor, the smaller the neighborhood; the blue line (f = 2/3) is the default value in R (R, n.d.e).  The larger the smoothing factor, the bigger the neighborhood, which oversimplifies the result.
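To make the neighborhood sizes concrete: with roughly n = 8,088 hourly records (the approximate size of NOxEmissions; treat that count as an assumption here), the smoothing fraction f translates into a k-nearest-neighbor count of ceil(f * n).

```python
import math

n = 8088  # approximate number of hourly records in NOxEmissions (an assumption)
neighborhood = {f: math.ceil(f * n) for f in (0.01, 2/3, 1.0)}
# f=0.01 uses only ~81 nearest points (wiggly fit);
# f=1 uses the entire dataset (oversmoothed fit)
```

This is why the red curve (f = 0.01) in Figure 5 chases local noise while the green curve (f = 1) flattens out real structure.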

Code

NOxData = read.csv(file="https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/master/csv/robustbase/NOxEmissions.csv", header = TRUE, sep = ",")

head(NOxData)

hist(NOxData$LNOx, freq=F, xlab = "hourly mean of NOx concentrations [ppb]",  main = "Histogram of the hourly mean of NOx concentrations")

hist(NOxData$LNOxEm, freq=F, xlab = "hourly sum of NOx car emissions [ppb]",  main = "Histogram of the hourly sum of NOx car emissions")

hist(NOxData$sqrtWS, freq=F, xlab = "square root of wind speed [m/s]", main = "Histogram of the square root of wind speed")

# Single Linear Regressions on LNOxEm

## LNOx

plot(NOxData$LNOxEm, NOxData$LNOx)

abline(lm(NOxData$LNOx~NOxData$LNOxEm), col="red")

summary(lm(NOxData$LNOx~NOxData$LNOxEm))

cor(NOxData$LNOx,NOxData$LNOxEm)

## sqrtWS

plot(NOxData$sqrtWS, NOxData$LNOx)

abline(lm(NOxData$LNOx~NOxData$sqrtWS), col="red")

summary(lm(NOxData$LNOx~NOxData$sqrtWS))

cor(NOxData$LNOx,NOxData$sqrtWS)

# Multiple Linear Regression on both LNOxEM and sqrtWS variables on LNOx

RegressionModel = lm(LNOx ~ LNOxEm + sqrtWS, data = NOxData)

summary(RegressionModel)     

plot(RegressionModel)

# Pearson’s Correlation between independent variables

cor(NOxData$LNOxEm, NOxData$sqrtWS)

# 95% Confidence Intervals on the regression model

confint(RegressionModel, level = 0.95)

# LOWESS MODEL

# lowess() only supports a single predictor, so loess() is used for the
# multiple-predictor local regression, with span playing the role of f
LoessModel  = loess(LNOx ~ LNOxEm + sqrtWS, data = NOxData, span = 2/3)

LoessModel2 = loess(LNOx ~ LNOxEm + sqrtWS, data = NOxData, span = 0.01)

LoessModel3 = loess(LNOx ~ LNOxEm + sqrtWS, data = NOxData, span = 1)

# Plot the fitted values against LNOxEm, ordered so the lines draw cleanly
ord = order(NOxData$LNOxEm)

plot(NOxData$LNOxEm[ord], fitted(LoessModel)[ord], type="l", col="blue", xlab="LNOxEm", ylab="fitted LNOx", main="LOWESS Regression: green is span=1, blue is span=2/3, & red is span=0.01")

lines(NOxData$LNOxEm[ord], fitted(LoessModel2)[ord], col="red")

lines(NOxData$LNOxEm[ord], fitted(LoessModel3)[ord], col="green")

References

 

Adv Quant: General Linear Regression Model in R

Converting the dataset to a dataframe for analysis and performing a regression on this dataset.

Introduction

The goal for this post is to convert the state.x77 dataset to a dataframe for analysis and to perform a regression on it.

Results

IP1.5F1.png

Figure 1: Scatter plot matrix of the dataframe state.x77.  The red box illustrates the relationship that is personally identified for further analysis.

IP1.5F2.PNG

IP1.5F3.png

Figure 2: Scatter plot of murder rates versus illiteracy rates across the United States, with the linear regression function illiteracy = 0.11607 * Murder + 0.31362 and a correlation of 0.729752.

Discussion

This post analyzes the dataset state.x77, loaded alongside the MASS R library.  The dataset was converted into a data frame (see the code section), and an analysis of the data was conducted.  To identify which variable relationship would be interesting to regress, all the relationships within the data frame were plotted in a matrix (Figure 1).  The relationship that seemed most interesting was that between illiteracy and murder, so a simple linear regression was conducted on those variables.  It was determined that there is a positive correlation of 0.729752 in this data, and the relationship between the variables is defined by

illiteracy = 0.11607 * Murder + 0.31362                                        (1)

This equation, which describes the relationship (Figure 2) between these variables, explains 53.25% of the variance between them. Both the intercept value and the regression weight are statistically significant at the 0.01 level, meaning there is less than a 1% chance that this relationship arose from pure random chance (R output between Figures 1 and 2).  In conclusion, this data suggests that states with lower illiteracy rates tend to have lower murder rates, and vice versa.
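The 53.25% figure follows directly from squaring the reported correlation, and the fitted line in equation 1 can be evaluated the same way. A quick Python check (the helper name is hypothetical; the coefficients come from the post):

```python
r = 0.729752              # reported correlation
r_squared = r ** 2        # proportion of variance explained, ~0.5325

def predicted_illiteracy(murder_rate):
    """Evaluate the fitted line: illiteracy = 0.11607 * Murder + 0.31362."""
    return 0.11607 * murder_rate + 0.31362
```

For example, a state with a murder rate of 10 per 100,000 would have a predicted illiteracy rate of about 1.47 under this model.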

Code

#

## Converting a dataset to a dataframe for analysis.

#

library(MASS)             # Activate the MASS library

library(nutshell)         # Activate the nutshell library to access the plot function

data()                    # Lists the datasets available in all loaded packages

data(state)               # Data in question is located in state

head(state.x77)           # Print out the first six entries of state.x77

df= data.frame(state.x77) # Convert the state.x77 data into a dataframe

#

## Regression formulation

#

plot(df)                                           # Scatter plot matrix, of all relationships between the variables in the df

stateRegression = lm(Illiteracy~Murder, data= df)  # Selecting this relationship for further analysis

summary(stateRegression)                           # Plotting a summary of the regression data

# Plotting a scatterplot from a dataframe below

plot(df$Murder, df$Illiteracy, type="p", main="Illiteracy rates vs Murder rates", xlab="Murder", ylab="Illiteracy")           # Plotting a scatterplot from a dataframe

abline(lm(Illiteracy~Murder, data= df), col="red") # Plotting a red regression line

cor(df$Murder, df$Illiteracy)

References

Adv Quant: General Least Squares Model

This post compares the assumptions of General Least Square Model (GLM) modeling for regression and correlations and it covers the issues with transforming variables to make them linear.

Regression formulas are useful for summarizing the relationship between the variables in question (Huck, 2011). There are multiple types of regression, and all of them are tests of prediction (Huck, 2011; Schumacker, 2014).  The least squares (linear) regression is the best known because it uses basic algebra, a straight line, and the correlation coefficient to help state the regression’s prediction strength (Huck, 2011; Schumacker, 2014).  The linear regression model is:

y = (a + bx) + e                                                                   (1)

Where y is the dependent variable, x is the independent variable, a (the intercept) and b (the regression weight, also known as the slope) are constants that are defined through the regression analysis, and e is the regression prediction error (Field, 2013; Schumacker, 2014).  Per the least squares criterion, a and b are chosen so as to minimize the sum of the squared errors (Schumacker, 2014).

Correlation coefficients help define the strength of the regression formula in describing the relationships between the variables, and can vary in value from -1 to +1.  The closer the correlation coefficient is to -1 or +1, the better the regression formula predicts the variance between the variables; the closer it is to zero, the weaker the relationship between the variables (Field, 2013; Huck, 2011; Schumacker, 2014).  Correlations never imply causation, but squaring the correlation value (r²) gives the percentage of the variance between the variables that is explained by the regression formula (Field, 2013).
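Equation 1 and the r² interpretation can be verified numerically. The sketch below (Python, with hypothetical helper names) computes the closed-form least-squares estimates and checks that the squared Pearson correlation equals the proportion of variance the fitted line explains, 1 − SSE/SST:

```python
def ols_fit(xs, ys):
    """Least-squares estimates of a and b in y = (a + b*x) + e."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx              # slope minimizing the sum of squared errors
    a = my - b * mx            # intercept
    return a, b

def pearson_r(xs, ys):
    """Correlation coefficient, always between -1 and +1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def variance_explained(xs, ys):
    """1 - SSE/SST: the share of variance in y captured by the fitted line."""
    a, b = ols_fit(xs, ys)
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    my = sum(ys) / len(ys)
    sst = sum((y - my) ** 2 for y in ys)
    return 1 - sse / sst
```

For any simple linear regression, `pearson_r(xs, ys) ** 2` and `variance_explained(xs, ys)` agree, which is exactly the r² interpretation described above.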

Assumptions for the General Least Square Model (GLM) modeling for regression and correlations

The General Least Squares Model (GLM) is the line of best fit for linear regression modeling, along with its corresponding correlations (Smith, 2015).  There are five assumptions in a linear regression model: additivity, linearity, independent errors, homoscedasticity, and normally distributed errors.  The dependent variable should be linearly related to the independent variable(s), and the combined effects of multiple independent variables should be additive. A residual is the difference between the predicted value and the observed value: (1) no two residuals should be correlated, which can be numerically tested using the Durbin-Watson test; (2) the variance of the residuals should be constant for each independent variable; and (3) the residuals should be random and normally distributed with a mean of 0 (Field, 2013; Schumacker, 2014).
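The independent-errors assumption can be probed with the Durbin-Watson test mentioned above, which compares successive residuals; values near 2 suggest uncorrelated errors. A minimal Python sketch of the statistic itself:

```python
def durbin_watson(residuals):
    """Durbin-Watson statistic: ~2 means uncorrelated errors,
    toward 0 means positive autocorrelation, toward 4 means negative."""
    num = sum((residuals[i] - residuals[i - 1]) ** 2
              for i in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den
```

Residuals that barely change from one observation to the next (positive autocorrelation) drive the statistic toward 0, while residuals that flip sign every step drive it toward 4.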

Covering the issues with transforming variables to make them linear

When viewing the data through scatter plots, if the linearity and additivity assumptions cannot be met, transformations can be applied to the variables to make the relationship linear. This is an iterative trial-and-error process.  The transformation must be applied to every data point in the set; correcting linearity and additivity issues this way changes the differences between the variables because it changes the units of the variables (Field, 2013).

Table 1: Types of data transformations and their uses (adapted from Field (2013) Table 5.1).

  • Log [independent variable(s)]: corrects positive skew, positive kurtosis, unequal variances, and lack of linearity
  • Square root [independent variable(s)]: corrects positive skew, positive kurtosis, unequal variances, and lack of linearity
  • Reciprocal [independent variable(s)]: corrects positive skew, positive kurtosis, and unequal variances
  • Reverse score [independent variable(s)] (subtracting each score from the highest value in the variable): corrects negative skew
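The effect of the first row of Table 1 can be seen numerically: a log transform pulls in a long right tail, shrinking the sample skewness. A small Python illustration (the skewness helper is a plain moment-based estimate, not any particular package's implementation, and the data are toy values):

```python
import math

def skewness(xs):
    """Moment-based sample skewness: positive means a long right tail."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n   # variance
    m3 = sum((x - m) ** 3 for x in xs) / n   # third central moment
    return m3 / s2 ** 1.5

raw = [1, 2, 2, 3, 3, 4, 10, 50]       # positively skewed toy data
logged = [math.log(x) for x in raw]    # log transform from Table 1
```

The transformed values remain somewhat right-skewed here, but markedly less so, which is the point of the iterative trial-and-error process described above.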

Describe the R procedures for linear regression

lm( ) is a function for running linear regression, glm( ) is a function for running logistic regression (not to be confused with the GLM above), and loglm( ) is a function for running log-linear regression in R (Schumacker, 2014; Smith, 2015). The summary( ) function is used to output the results of the linear regression. The dependent variable is separated from the independent variables by a tilde “~”, and multiple independent variables are joined with a “+” (Schumacker, 2014). Thus, the R procedures for linear regression are (Marin, 2013):

> cor (x, y) # correlation coefficient

> myRegression = lm (y ~ x, data = dataSet ) # conduct a linear regression on x and y

> summary(myRegression) # produces the outputs of the lm( ) function calculations

> attributes(myRegression) # lists the attributes of the lm( ) function

> myRegression$coefficients # gives you the slope and intercept coefficients

> plot (x, y, main="Title to graph") # scatter plot

> abline(myRegression) # regression line

> confint(myRegression, level= 0.99) # 99% level of confidence intervals for the regression coefficients

> anova(myRegression) # anova analysis on the regression analysis

References

  • Field, A. (2013) Discovering Statistics Using IBM SPSS Statistics (4th ed.). UK: Sage Publications Ltd. VitalBook file.
  • Huck, S. W. (2011) Reading Statistics and Research (6th ed.). Pearson Learning Solutions. VitalBook file.
  • Marin, M. (2013) Linear regression in R (R tutorial 5.1). Retrieved from https://www.youtube.com/watch?v=66z_MRwtFJM
  • Schumacker, R. E. (2014) Learning statistics using R. California, SAGE Publications, Inc, VitalBook file.
  • Smith, M. (2015). Statistical analysis handbook. Retrieved from http://www.statsref.com/HTML/index.html?introduction.html

Adv Quant: Birth Rate Dataset in R

Built in the R library is the Births dataset with 400,000 records and 13 variables. The following is an analysis of this dataset.

Introduction

Built in the R library is the Births dataset with 400,000 records and 13 variables.  The following is an analysis of this dataset.

Results

IP1F1

Figure 1. The first five data point entries in the births2006.smpl data set.

IP1F2

Figure 2. The frequency of births in 2006 per day of the week.

IP1F3.png

Figure 3. Histogram of 2006 births frequencies graphed by day of the week and separated by method of delivery.

IP1F4.png

Figure 4. A trellis histogram plot of 2006 birth weight per birth number.

IP1F5

Figure 5. A trellis histogram plot of 2006 birth weight per birth delivery method.

IP1F6.png

Figure 6. A boxplot of 2006 birth weight per Apgar score.

IP1F7

Figure 7. A boxplot of 2006 birth weight per day of week.

IP1F8

Figure 8. A histogram of 2006 average birth weight per multiple births separated by gender.

Discussion

Given the open-sourced nature of the R software, many libraries are built and shared with the greater community, and the Comprehensive R Archive Network (CRAN) hosts many of these programs as R Packages (Schumacker, 2014).  As part of the nutshell library, there exists a dataset of 2006 births called “births2006.smpl”.  To view the first few entries, the head() command can be used (R, n.d.g.).  The printout from the head() command (Figure 1) shows all 13 variables of the dataset along with the first five entries in the births2006.smpl dataset.

The number of births appears approximately (though not precisely) uniform during the work week, assuming Sunday is 1 and Saturday is 7.  Tuesday through Thursday have the highest number of births in the week, with the weekends having the fewest.

Breaking down the method of delivery in 2006 per day of the week, vaginal births outnumber C-section deliveries on all seven days of the week (Figure 3).  There are more vaginal births on Tuesday through Thursday than on the weekend, and most C-section deliveries occur between Tuesday and Friday, with the fewest during the weekends.

Breaking down the frequency of births per birth weight (Figure 4), the distribution of birth weight in grams shifts to the left as the number of multiple births increases.  This suggests that babies born as a set of twins, triplets, etc. have lower birth weights on average and across the distribution.  Birth weight is almost normally distributed for single births but begins to lose normality as the number of births increases.

Further analysis of birth weights in 2006 per delivery method shows that the delivery method, whether known or unknown, does not play much of a role in determining the child’s birth weight (Figure 5).  Statistical tests and effect size analysis could be conducted to verify and strengthen this assertion, which here rests only on the graphical representation in Figure 5.

The Apgar test is administered to the child at one and five minutes after birth, assessing skin color, heart rate, reflexes, muscle tone, and respiration rate, where 10 is the highest but rarely obtained score (Hirsch, 2014).  Observing the Apgar score variable (1-10) against birth weight in grams, children with higher Apgar scores had higher median birth weights on average.  Typically, as the Apgar score increases, the distribution becomes tighter and more outliers appear (disregarding the results for an Apgar score of 1).  These boxplot results tend to confirm Hirsch’s (2014) assertion that higher Apgar scores are harder to obtain.

The boxplot analysis of birth weight per day of the week (Figure 7) shows that the median, Q1, Q3, maximum, and minimum are essentially unchanged across the days of the week.  Outliers, the heavier babies, occur regardless of the day of the week, and the day of the week appears to have little to no effect on the distribution of birth weight.

Finally, the mean birth weight per gender and per multiple births shows a similar distribution for males and females (Figure 8). The main noticeable difference is that male quintuplet or higher-order births weigh more on average than the corresponding female births.  This chart also confirms the conclusion made from Figure 4: as the number of births increases, the average weight of the children decreases.

In conclusion, the day of the week does not predict birth weight, though it may predict birth frequency. In general, babies are heavier if they are single births and if they achieve an Apgar score of 10.  Birth weight is not predictable from the delivery method.  All of these conclusions rest on visual representations of the dataset births2006.smpl; conducting statistical significance tests and computing effect sizes would increase the validity of these statements.

Code

#
## Use R to analyze the Birth dataset. 
## The Birth dataset is in the Nutshell library. 
##  • SEX and APGAR5 (SEX and Apgar score) 
##  • DPLURAL (single or multiple birth) 
##  • WTGAIN (weight gain of mother) 
##  • ESTGEST (estimated gestation in weeks) 
##  • DOB_MM, DOB_WK (month and day of week of birth) 
##  • BWT (birth weight) 
##  • DMETH_REC (method of delivery)
#
install.packages("nutshell")
library(nutshell)
data(births2006.smpl)

# First, list the data for the first 5 births. 
head(births2006.smpl)

# Next, show a bar chart of the frequencies of births according to the day of the week of the birth.
births.dayofweek = table(births2006.smpl$DOB_WK) #Goal of this variable is to speed up the calculations
barplot(births.dayofweek, ylab="frequency", xlab="Day of week", col = "darkred", main= "Number of births in 2006 per day of the week")

# Obtain frequencies for two-way classifications of birth according to the day of the week and the method of delivery.
births.methodsVdaysofweek = table(births2006.smpl$DOB_WK,births2006.smpl$DMETH_REC) 
head(births.methodsVdaysofweek,7)
barplot(births.methodsVdaysofweek[,-2], col=heat.colors(length(rownames(births.methodsVdaysofweek))), width=2, beside=TRUE, main = "bar plot of births per method per day of the week") # [,-2] drops the second delivery-method column
legend("topleft", fill=heat.colors(length(rownames(births.methodsVdaysofweek))), legend=rownames(births.methodsVdaysofweek))

# Use lattice (trellis) graphs (R package lattice) to condition density histograms on the values of a third variable. 
library(lattice)

# The variable for multiple births and the method of delivery are conditioning variables. 
# Separate the histogram of birth weight according to these variable.
histogram(~DBWT|DPLURAL,data=births2006.smpl,layout=c(1,5),col="black", xlab = "birth weight", main = "trellis plot of birth weight vs birth number")

histogram(~DBWT|DMETH_REC,data=births2006.smpl,layout=c(1,3),col="black", xlab = "birth weight", main = "trellis plot of birth weight vs birth method")

# Do a box plot of birth weight against Apgar score and box plots of birth weight by day of week of delivery. 
boxplot(DBWT~APGAR5,data=births2006.smpl,ylab="birth weight",xlab="APGAR5", main="Boxplot of birthweight per Apgar score")

boxplot(DBWT~DOB_WK,data=births2006.smpl,ylab="birth weight",xlab="Day of Week", main="Boxplot of birthweight per day of week")

# Calculate the average birth weight as a function of multiple births for males and females separately. 
# Use the "tapply" function, and for missing values use the option "na.rm=TRUE".
listed = list(births2006.smpl$DPLURAL,births2006.smpl$SEX)
tapplication=tapply(births2006.smpl$DBWT,listed,mean,na.rm=TRUE)
barplot(tapplication,ylab="birth weight", beside=TRUE, legend=TRUE,xlab="gender", main = "bar plot of average birthweight per multiple births by gender")

References

  • CRAN (n.d.). Using lattice’s histogram(). Retrieved from https://cran.r-project.org/web/packages/tigerstats/vignettes/histogram.html
  • Hirsch, L. (2014). About the Apgar score. Retrieved from http://kidshealth.org/en/parents/apgar.html#
  • R (n.d.a.). Add legends to plots. Retrieved from https://stat.ethz.ch/R-manual/R-devel/library/graphics/html/legend.html
  • R (n.d.b.). Apply a function over a ragged array. Retrieved from https://stat.ethz.ch/R-manual/R-devel/library/base/html/tapply.html
  • R (n.d.c.). Bar plots. Retrieved from https://stat.ethz.ch/R-manual/R-devel/library/graphics/html/barplot.html
  • R (n.d.d.). Cross tabulation and table creation. Retrieved from https://stat.ethz.ch/R-manual/R-devel/library/base/html/table.html
  • R (n.d.e.). List-Generic and dotted pairs. Retrieved from https://stat.ethz.ch/R-manual/R-devel/library/base/html/list.html
  • R (n.d.f.). Produce box-and-wisker plot(s) of a given (grouped) values.  Retrieved from https://stat.ethz.ch/R-manual/R-devel/library/graphics/html/boxplot.html
  • R (n.d.g.). Return the first or last part of an object. Retrieved from https://stat.ethz.ch/R-manual/R-devel/library/utils/html/head.html
  • Schumacker, R. E. (2014) Learning statistics using R. California, SAGE Publications, Inc, VitalBook file.

Adv Quant: Statistical Significance and Machine Learning

Data mining and analytics are used to test hypotheses and detect trends from very large datasets. In statistics, significance is determined to some extent by the sample size. How can supervised learning be used on such large datasets to overcome the problem that everything becomes significant under statistical analysis?

In large sample sizes, even small differences can show up as statistically significant, while in smaller samples even large differences may be deemed statistically insignificant (Field, 2014).  Statistically significant results allow the researcher to reject a null hypothesis but do not test the importance of the observations made (Huck, 2011). Statistical analysis is highly deductive (Creswell, 2014), whereas supervised learning is highly inductive (Connolly & Begg, 2014).  Statistical analysis tries to identify trends in a given sample by assuming normality, linearity, or constant variance; machine learning aims to find patterns in a large sample of data where these statistical assumptions are expected not to hold, and it therefore requires a larger random sample (Ahlemeyer-Stubbe & Coleman, 2014).
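The sample-size effect described above is visible in the test statistic itself: for a fixed (tiny) mean difference and spread, the one-sample t statistic grows with the square root of n, so a negligible difference eventually crosses any significance threshold. A Python sketch with a hypothetical helper:

```python
import math

def t_statistic(mean_diff, sd, n):
    """One-sample t statistic: mean difference divided by its standard error."""
    return mean_diff / (sd / math.sqrt(n))

# The same 0.01 difference: negligible at n=100, "highly significant" at n=1e6
small_n = t_statistic(0.01, 1.0, 100)        # ~0.1, nowhere near significance
huge_n = t_statistic(0.01, 1.0, 1_000_000)   # ~10, wildly "significant"
```

This is why statistical significance alone says nothing about the practical importance of an effect in a big-data setting.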

Machine learning tries to emulate the way humans learn. When humans learn, they create a model based on observations to describe key features of a situation and help them predict an outcome, and machine learning does predictive modeling of large data sets in a similar fashion (Connolly & Begg, 2014).  The biggest selling point of supervised machine learning is that the machine can build models identifying key patterns in the data when humans can no longer cope with its volume, velocity, and variety (Ahlemeyer-Stubbe & Coleman, 2014). Many applications use machine learning: marketing, investments, fraud detection, manufacturing, telecommunications, etc. (Fayyad, Piatetsky-Shapiro, & Smyth, 1996). Figure 1 illustrates how supervised learning can classify data or predict their values through a two-phase process: (1) training, where the model is built by ingesting huge amounts of historical data; and (2) testing, where the new model is run against new, current data to establish its accuracy, reliability, and validity (Ahlemeyer-Stubbe & Coleman, 2014; Connolly & Begg, 2014). A model created by machines through this learning is quickly adaptable to new data (Minelli, Chambers, & Dhiraj, 2013).  The models themselves are sets of rules or formulas, depending on which analytical algorithm is used (Ahlemeyer-Stubbe & Coleman, 2014).  Given that supervised machine learning is trained with known responses (or outputs) to make its future predictions, it is vital to have a clear purpose defined before running the algorithm; the model is only as good as the data that goes into it.

U1db2F1.PNG

Figure 1:  Simplified process diagram on supervised machine learning.
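The two-phase process in Figure 1 can be sketched with a deliberately tiny model, a nearest-centroid classifier (the names and data below are hypothetical): phase one learns from labeled historical data, and phase two scores held-out data to estimate accuracy.

```python
def train(points):
    """Phase 1 (training): learn one centroid per class from labeled data."""
    sums, counts = {}, {}
    for x, label in points:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, x):
    """Assign x to the class whose learned centroid is closest."""
    return min(model, key=lambda label: abs(model[label] - x))

def accuracy(model, points):
    """Phase 2 (testing): score the model on held-out labeled data."""
    hits = sum(predict(model, x) == label for x, label in points)
    return hits / len(points)
```

The essential design point is the separation: the model never sees the test records during training, which is what makes the measured accuracy an honest estimate of performance on new data.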

Thus, for classification, the machine learns a function that maps data into one or more defining categories; techniques include decision tree and neural network induction (Connolly & Begg, 2014; Fayyad et al., 1996).  Fayyad et al. (1996) mention that it is impossible to classify data cleanly into one camp versus another. For value prediction, regression is used to fit a function to the data that gives an estimate of where the next value would lie (Connolly & Begg, 2014; Fayyad et al., 1996).  In these regression formulas, however, it is good to remember that correlation between the data/variables does not imply causation.

Random sampling is core to statistics and the concept of statistical inference (Smith, 2015; Field, 2011), but it also serves a purpose in supervised learning (Ahlemeyer-Stubbe & Coleman, 2014).  Random sampling selects a proportion of the data from a population such that each data point has an equal opportunity of being selected (Smith, 2015; Huck, 2013). Larger samples, on average, tend to represent the population fairly well (Field, 2014; Huck, 2013). Given the nature of big data (high volume, velocity, and variety), it is assumed that there is plenty of data to draw upon for a supervised machine learning algorithm.  However, feeding too much data into the algorithm increases processing and analysis time: the bigger the random sample used for learning, the longer it takes to process and analyze the data.
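Random sampling as described above, every record equally likely to be chosen, with the sample mean approximating the population mean, looks like this in Python (the population here is synthetic stand-in data):

```python
import random

random.seed(42)                            # reproducible draw
population = list(range(100_000))          # stand-in for a large dataset
sample = random.sample(population, 1_000)  # each record equally likely, no repeats

pop_mean = sum(population) / len(population)   # exactly 49999.5 here
sample_mean = sum(sample) / len(sample)        # close to the population mean
```

Drawing 1,000 of 100,000 records keeps the processing cost at 1% of the full pass while the sample mean stays within a few standard errors of the population mean, which is the trade-off the paragraph above describes.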

There are also unsupervised learning algorithms, which likewise need training and testing but, unlike supervised learning, do not validate the model against a predetermined output value (Ahlemeyer-Stubbe & Coleman, 2014; Connolly & Begg, 2014).  Unsupervised learning instead tries to find the natural relationships in the input data (Ahlemeyer-Stubbe & Coleman, 2014).  Cluster analysis is an example of unsupervised learning, where the model seeks a finite set of clusters that describe the data as subsets of similar points (Ahlemeyer-Stubbe & Coleman, 2014; Fayyad et al., 1996). Finally, in supervised learning the results can be checked through the estimation error; this is not so easy in unsupervised learning, which lacks a target and instead requires retesting to see whether the patterns are similar or repeatable (Ahlemeyer-Stubbe & Coleman, 2014).

References

  • Ahlemeyer-Stubbe, A., & Coleman, S. (2014). A Practical Guide to Data Mining for Business and Industry, 1st Edition. [VitalSource Bookshelf Online].
  • Connolly, T., Begg, C. (2014). Database Systems: A Practical Approach to Design, Implementation, and Management, 6th Edition. [VitalSource Bookshelf Online].
  • Creswell, J. W. (2014) Research design: Qualitative, quantitative and mixed method approaches (4th ed.). California, SAGE Publications, Inc. VitalBook file.
  • Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). From data mining to knowledge discovery in databases. Advances in Knowledge Discovery and Data Mining, 17(3), 37–54.
  • Field, A. (2011) Discovering Statistics Using IBM SPSS Statistics (4th ed.). UK: Sage Publications Ltd. VitalBook file.
  • Huck, S. W. (2013) Reading Statistics and Research (6th ed.). Pearson Learning Solutions. VitalBook file.
  • Minelli, M., Chambers, M., Dhiraj, A. (2013). Big Data, Big Analytics: Emerging Business Intelligence and Analytic Trends for Today’s Businesses, 1st Edition. [VitalSource Bookshelf Online].
  • Smith, M. (2015). Statistical analysis handbook. Retrieved from http://www.statsref.com/HTML/index.html?introduction.html

Adv Quant: Statistical Features of R

A comparison of the statistical features of R with its programming features, and an explanation of how they are useful in analyzing big datasets.
• Describe how the analytics of R are suited for Big Data.

Ward and Barker (2013) traced the definition of volume, velocity, and variety back to Gartner.  A now widely accepted definition of big data is any set of data that has high velocity, volume, and variety (Davenport & Dyche, 2013; Fox & Do, 2013; Kaur & Rani, 2015; Mao, Xu, Wu, Li, Li, & Lu, 2015; Podesta, Pritzker, Moniz, Holdren, & Zients, 2014; Richards & King, 2014; Sagiroglu & Sinanc, 2013; Zikopoulos & Eaton, 2012). Davenport et al. (2012) stated that IT companies define big data as “more insightful data analysis,” which, used properly, can give companies a competitive edge.  Data scientists at companies like Google, Facebook, and LinkedIn use R for their finance and data analytics (Revolution Analytics, n.d.). According to Minelli, Chambers, and Dhiraj (2013), R has two million end users and is used in industries like health care and finance.

Why is R so popular, with so many users?  It could be that R is free, open-source software that works on multiple platforms (Unix, Windows, Mac) and has an extensive statistical library supporting everything from basic statistical data analysis to multivariate analysis, scaling up to big data analytics (Hothorn, 2016; Leisch & Gruen, 2016; Schumacker, 2014, 2016; Templ, 2016; Theussl & Borchers, 2016; Wild, 2015).  Given the open-source nature of R, many libraries are built and shared with the greater community, and the Comprehensive R Archive Network (CRAN) hosts a great number of these programs as R Packages (Schumacker, 2014).  Other advantages of R include customizable statistical analysis, control over the analytical process, and extensive documentation and references (Schumacker, 2016).  R Packages allow for everyday data analytics, visually aesthetic data visualizations, and faster results than legacy statistical software, all under the end user's control and drawing upon the talents of leading data scientists (Revolution Analytics, n.d.).  R's programming features include a whole suite of data types (scalars, vectors, matrices, arrays, and data frames), as well as importing and exporting data to and from multiple other commercially available statistical/data software packages (SPSS, SAS, Excel, etc.) (Schumacker, 2014, 2016).  The features of R related to big data analytics, statistics, and programming are listed in Table 1 (below).  The R Packages listed there, together with the importing and exporting features for other big data statistical software, illustrate how useful R is for analyzing big datasets of various types (Schumacker, 2014, 2016).

Finally, R is the most dominant analytics tool for big data analytics (Minelli et al., 2013).  Because big data analytics sits at the border of computer science, data mining, and statistics, it is natural to see multiple R Packages and libraries listed within CRAN that are freely available to use.  Within the field of big data analytics, some (but not all) of the common sets of techniques that have R Packages are machine learning, cluster analysis, finite mixture models, and natural language processing. Given the extensive libraries available through R Packages and the extensive documentation, R is well suited for big data.

Table 1: Big Data Analytics, Statistical, and Programmable features of R

  • R Programming Features (Schumacker, 2014): input, process, output, R Packages
  • Variables in R (Schumacker, 2014): numeric, character, logical
  • Data Types in R (Schumacker, 2014): scalars, arrays, vectors, matrices, lists, data frames
  • Flow Control (Schumacker, 2014): loops (for, if, while, else, …); Boolean operators (and, not, or)
  • Visualizations (Schumacker, 2014): pie charts, bar charts, histograms, stem-and-leaf plots, scatter plots, box-whisker plots, surface plots, contour plots, geographic maps, colors, plus others from the many R Packages
  • Statistical Analysis (Schumacker, 2014): central tendency, dispersion, correlation tests, linear regression, multiple regression, logistic regression, log-linear regression, analysis of variance, probability, confidence intervals, plus others from the many R Packages
  • Distributions: population, sampling, and statistical (Schumacker, 2014): binomial, uniform, exponential, normal, hypothesis testing, chi-square, z-test, t-test, F-test, plus others from the many R Packages
  • Multivariate Statistical Analysis (Schumacker, 2016): MANOVA, MANCOVA, factor analysis, principal components analysis, structural equation modeling, multidimensional scaling, discriminant analysis, canonical correlation, multiple-group multivariate statistical analysis, plus others from the many R Packages
  • Big Data Analytics: Cluster Analysis (Leisch & Gruen, 2016): hierarchical clustering, partitioning clustering, model-based clustering, K-means clustering, fuzzy clustering, cluster-wise regression, principal component analysis, self-organizing maps, density-based clustering
  • Big Data Analytics: Machine Learning (Hothorn, 2016; Templ, 2016): neural networks, recursive partitioning, random forests, regularized and shrinkage methods, boosting, support vector machines, association rules, fuzzy rule-based systems, model selection and validation, tree methods, expectation-maximization, nearest neighbor
  • Big Data Analytics: Natural Language Processing (Wild, 2015): frameworks, lexical databases, keyword extraction, string manipulation, stemming, semantics, pragmatics
  • Big Data Analytics: Optimization and Mathematical Programming (Theussl & Borchers, 2016): optimization infrastructure packages, general-purpose continuous solvers, least-squares problems, semidefinite and convex solvers, global and stochastic optimization, mathematical programming solvers

References

  • Davenport, T. H., Barth, P., & Bean, R. (2012). How big data is different. MIT Sloan Management Review, 54(1), 43.
  • Fox, S., & Do, T. (2013). Getting real about Big Data: applying critical realism to analyse Big Data hype. International Journal of Managing Projects in Business, 6(4), 739–760. http://doi.org/10.1108/IJMPB-08-2012-0049
  • Hothorn, T. (2016). CRAN task view: Machine learning & statistical learning. Retrieved from https://cran.r-project.org/web/views/MachineLearning.html
  • Kaur, K., & Rani, R. (2015). Managing Data in Healthcare Information Systems: Many Models, One Solution. Big Data Management, 52–59.
  • Leisch, F. & Gruen, B. (2016). CRAN task view: Cluster analysis & finite mixture models. Retrieved from https://cran.r-project.org/web/views/Cluster.html
  • Mao, R., Xu, H., Wu, W., Li, J., Li, Y., & Lu, M. (2015). Overcoming the Challenge of Variety: Big Data Abstraction, the Next Evolution of Data Management for AAL Communication Systems. Ambient Assisted Living Communications, 42–47.
  • Minelli, M., Chambers M., & Dhiraj A. (2013) Big Data, Big Analytics: Emerging Business Intelligence and Analytic Trends for Today’s Businesses. John Wiley & Sons P&T. VitalBook file.
  • Podesta, J., Pritzker, P., Moniz, E. J., Holdren, J., & Zients, J. (2014). Big Data: Seizing Opportunities. Executive Office of the President of USA, 1–79.
  • Revolution Analytics (n.d.). What is R? Retrieved from http://www.revolutionanalytics.com/what-r
  • Richards, N. M., & King, J. H. (2014). Big Data Ethics. Wake Forest Law Review, 49, 393–432.
  • Sagiroglu, S., & Sinanc, D. (2013). Big Data : A Review. Collaboration Technologies and Systems (CTS), 42–47.
  • Schumacker, R. E. (2014) Learning statistics using R. California, SAGE Publications, Inc, VitalBook file.
  • Schumacker, R. E. (2016) Using R with multivariate statistics. California, SAGE Publications, Inc.
  • Templ, M. (2016). CRAN task view: Official statistics & survey methodology. Retrieved from https://cran.r-project.org/web/views/OfficialStatistics.html
  • Theussl, S. & Borchers, H. W. (2016). CRAN task view: Optimization and mathematical programming. Retrieved from https://cran.r-project.org/web/views/Optimization.html
  • Ward, J. S., & Barker, A. (2013). Undefined by data: a survey of big data definitions. arXiv preprint arXiv:1309.5821.
  • Wild, F. (2015). CRAN task view: Natural language processing. Retrieved from https://cran.r-project.org/web/views/NaturalLanguageProcessing.html
  • Zikopoulos, P., & Eaton, C. (2012). Understanding Big Data: Analytics for enterprise class hadoop and streaming data. McGraw-Hill Osborne Media.

Quant: Compelling topics

A discussion of the most compelling topics learned in the subject of Quantitative Analysis.

Most Compelling Topics

Field (2013) states that quantitative and qualitative methods are complementary, not competing, approaches to solving the world's problems, although the methods are quite different from each other. Simply put, quantitative methods are utilized when the research contains variables that are numerical, and qualitative methods are utilized when the research contains variables that are based on language (Field, 2013).  Thus, central to quantitative research and methods is understanding a numerical, ordinal, or categorical dataset and what the data represent. This can be done through descriptive statistics, where the researcher uses statistics to describe a data set, or through inferential statistics, where conclusions are drawn about the data set (Miller, n.d.).

Field (2013) and Schumacker (2014) define central tendency as an all-encompassing term describing the “center of a frequency distribution” through the commonly used measures mean, median, and mode.  Outliers, missing values, and multiplying or adding a constant are factors that affect the central tendency (Schumacker, 2014).  Beyond looking at one central tendency measure, researchers can also analyze the mean and median together to understand how skewed the data are and in which direction.  Heavy skew increases the distance between these two values, and if the mean is less than the median, the distribution is negatively skewed (Field, 2013).  To understand the distribution better, other measures like variance and standard deviation can be used.
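The mean-versus-median check described above can be demonstrated directly. A small sketch (in Python for illustration; R's `mean()` and `median()` are the equivalents, and the scores are made up so one low outlier drags the mean below the median):

```python
from statistics import mean, median

# Made-up sample with one extreme low value dragging the mean down (negative skew)
scores = [35, 88, 90, 91, 92, 93, 95]

m, md = mean(scores), median(scores)
print(f"mean={m:.1f}, median={md}")
if m < md:
    print("mean < median: distribution is negatively skewed")
```

Removing the outlier 35 would pull the mean back toward the median, which is why outliers are listed above as a factor affecting central tendency.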

Variance and standard deviation are measures of dispersion, with variance representing average dispersion (Field, 2013; Schumacker, 2014).  Variance is a numerical value that describes how the observed data values are spread across the data distribution and how they differ from the mean on average (Huck, 2011; Field, 2013; Schumacker, 2014).  A smaller variance indicates that the observed data values are close to the mean, and vice versa (Field, 2013).
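The average-squared-deviation definition of variance can be made concrete. A brief sketch (Python for illustration; R's `var()` and `sd()` compute the sample versions), with two invented data sets sharing a mean of 10:

```python
from statistics import mean

# Two made-up data sets with the same mean (10) but different spread
tight = [9, 10, 10, 11]
spread = [1, 5, 15, 19]

def dispersion(xs):
    mu = mean(xs)
    # Population variance: the average squared deviation from the mean
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var, var ** 0.5  # mean, variance, standard deviation

for name, xs in (("tight", tight), ("spread", spread)):
    mu, var, sd = dispersion(xs)
    print(f"{name}: mean={mu}, variance={var}, sd={sd:.2f}")
```

Both sets share the same central tendency, but the much larger variance of `spread` shows its values sit far from the mean, which is exactly the distinction the paragraph draws.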

Rarely is every member of a population studied; instead, a sample from that population is randomly drawn to represent it for analysis in quantitative research (Gall, Gall, & Borg, 2006). At the end of the day, the insights gained from this type of research should be impersonal, objective, and generalizable.  Generalizing the insights gained from a sample of data requires the correct mathematical procedures for handling probabilities and information, that is, statistical inference (Gall et al., 2006).  Gall et al. (2006) stated that statistical inference dictates the order of procedures; for instance, a hypothesis and a null hypothesis must be defined before a statistical significance level, which in turn must be defined before calculating a z or t statistic value.  Essentially, statistical inference allows quantitative researchers to make inferences about a population, and researchers must remember where that population's data was generated and collected during the quantitative research process.

Most flaws in research methodology exist because validity and reliability weren't established (Gall et al., 2006). Thus, it is important to ensure a valid and reliable assessment instrument.  In using any existing survey as an assessment instrument, one should report the instrument's development, items, scales, reports on reliability, and reports on validity through past uses (Creswell, 2014; Joyner, 2012).  Permission must be secured for using any instrument, and the instrument placed in the appendix (Joyner, 2012).  The validity of the assessment instrument is key to drawing meaningful and useful statistical inferences (Creswell, 2014).

Through sampling of a population and use of a valid and reliable survey instrument for assessment, attitudes and opinions about a population can be correctly inferred from the sample (Creswell, 2014).  Sometimes a survey instrument doesn't fit the target group, in which case it would produce neither valid nor reliable inferences for the targeted population. One must therefore select a targeted population and determine the size of that stratified population (Creswell, 2014).

Parametric statistics are inferential and based on random sampling from a distinct population, with the sample data making strict inferences about the population's parameters; thus tests like t-tests, chi-square, and F-tests (ANOVA) can be used (Huck, 2011; Schumacker, 2014).  Nonparametric statistics, or “assumption-free tests,” are used for tests on ranked data, like the Mann-Whitney U-test, Wilcoxon Signed-Rank test, Kruskal-Wallis H-test, and chi-square (Field, 2013; Huck, 2011).

First, there is a need to define the types of data: continuous data is interval/ratio data, and categorical data is nominal/ordinal data.  The table below is modified from Schumacker (2014), with data added from Huck (2011):

Statistic                        Dependent Variable   Independent Variable
Analysis of Variance (ANOVA)
     One-way                     Continuous           Categorical
t-Tests
     Single sample               Continuous
     Independent groups          Continuous           Categorical
     Dependent (paired groups)   Continuous           Categorical
Chi-square                       Categorical          Categorical
Mann-Whitney U-test              Ordinal              Ordinal
Wilcoxon Signed-Rank test        Ordinal              Ordinal
Kruskal-Wallis H-test            Ordinal              Ordinal
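The rank-based tests in the table above operate on ordinal positions rather than raw values. Below is a rough sketch of the Mann-Whitney U statistic (Python for illustration; the two groups are invented, and a real analysis would use an established implementation such as R's `wilcox.test()`):

```python
def ranks(values):
    """Rank all values (1-based), averaging ranks across ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank over the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(a, b):
    """U statistic for two independent samples: rank, then sum group A's ranks."""
    r = ranks(list(a) + list(b))
    rank_sum_a = sum(r[: len(a)])
    # Subtract the minimum possible rank sum for a group of size len(a)
    return rank_sum_a - len(a) * (len(a) + 1) / 2

group_a = [3, 4, 2, 6]   # invented ordinal scores
group_b = [9, 7, 5, 10]
print(mann_whitney_u(group_a, group_b))
```

Note that only the ordering of the combined values matters: replacing each score by any monotone transformation leaves U unchanged, which is why these tests tolerate skewed distributions and outliers.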

So, meaningful results get reported along with their statistical significance, confidence intervals, and effect sizes (Creswell, 2014). If the results from a statistical test have a low probability of occurring by chance (5% or 1% or less), then the statistical test is considered significant (Creswell, 2014; Field, 2014; Huck, 2011). Statistical significance tests can reflect the same effect yet result in different values (Field, 2014).  In large sample sizes, small differences can show up as statistically significant, while in smaller samples large differences may be deemed insignificant (Field, 2014).  Statistically significant results allow the researcher to reject a null hypothesis but do not test the importance of the observations made (Huck, 2011).  Huck (2011) stated that two main factors influencing whether a result is statistically significant are the quality of the research question and the research design.

Huck (2011) suggested that after statistical significance is calculated and the researcher can either reject or fail to reject a null hypothesis, effect size analysis should be conducted.  The effect size allows researchers to objectively measure the magnitude, or practical significance, of the research findings by looking at the differential impact of the variables (Huck, 2011; Field, 2014).  Field (2014) defines one way of measuring the effect size through Cohen's d: d = (Avg(x1) – Avg(x2)) / (standard deviation).  If d = 0.2 there is a small effect, at d = 0.5 a moderate effect, and at d = 0.8 or more a large effect (Field, 2014; Huck, 2011). This could be the reason why a statistical test yields a statistically significant value, yet further analysis with effect size shows that those statistically significant results do not explain much of the total relationship.
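Cohen's d as given above is simple to compute. This sketch pools the two groups' sample standard deviations, which is one common choice for the denominator; the treatment and control scores are made up:

```python
from statistics import mean, stdev

def cohens_d(x1, x2):
    """Cohen's d with a pooled standard deviation (one common variant)."""
    n1, n2 = len(x1), len(x2)
    s1, s2 = stdev(x1), stdev(x2)
    # Pool the two sample variances, weighted by degrees of freedom
    pooled = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(x1) - mean(x2)) / pooled

# Made-up groups: treatment scores shifted above control
treatment = [12, 14, 15, 13, 16, 14]
control = [10, 11, 13, 12, 11, 12]

d = cohens_d(treatment, control)
print(f"d = {d:.2f}")  # interpret against 0.2 small, 0.5 moderate, 0.8+ large
```

Because d is expressed in standard-deviation units, it stays comparable across studies regardless of sample size, which is what lets it flag significant-but-trivial results.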

In regression analysis, it should be possible to predict the dependent variable from the independent variables, provided two conditions hold: (1) the productivity assessment tool is valid and reliable (Creswell, 2014), and (2) the sample size is large enough to conduct the analysis and draw statistical inferences about the population from the collected sample data (Huck, 2011). Assuming these two conditions are met, regression analysis can be run on the data to create a prediction formula. Regression formulas are useful for summarizing the relationship between the variables in question (Huck, 2011).

When modeling to predict the dependent variable from the independent variable, the regression model with the strongest correlation is used, as that regression formula best explains the variance between the variables.   However, just because the regression formula can predict some or most of the variance between the variables, it will never imply causation (Field, 2013).  Correlations help define the strength of the regression formula in describing the relationships between the variables and can vary in value from -1 to +1.  The closer the correlation coefficient is to -1 or +1, the better the regression formula predicts the variance between the variables; the closer it is to zero, the weaker the relationship between the variables (Field, 2013; Huck, 2011; Schumacker, 2014).  It should never be forgotten that correlation doesn't imply causation; however, squaring the correlation value (r2) gives the percentage of the variance between the variables accounted for by the regression formula (Field, 2013).
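The r and r² interpretation can be verified with a short computation. A minimal Pearson correlation sketch (Python for illustration; R's `cor()` is the usual tool, and the paired data are invented):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance term over the product of the two deviation magnitudes
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up paired data with a strong positive linear relationship
hours = [1, 2, 3, 4, 5, 6]
score = [52, 55, 61, 64, 70, 71]

r = pearson_r(hours, score)
print(f"r = {r:.3f}, r^2 = {r ** 2:.3f}")  # r^2: share of variance explained
```

An r near +1 here says a linear formula predicts the scores well, and r² gives the proportion of variance explained; neither number says the hours caused the scores.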


References:

  • Creswell, J. W. (2014) Research design: Qualitative, quantitative and mixed method approaches (4th ed.). California, SAGE Publications, Inc. VitalBook file.
  • Field, A. (2013) Discovering Statistics Using IBM SPSS Statistics (4th ed.). UK: Sage Publications Ltd. VitalBook file.
  • Gall, M. D., Gall, J., & Borg W. (2006). Educational research: An introduction (8th ed.). Pearson Learning Solutions. VitalBook file.
  • Huck, S. W. (2011) Reading Statistics and Research (6th ed.). Pearson Learning Solutions. VitalBook file.
  • Joyner, R. L. (2012) Writing the Winning Thesis or Dissertation: A Step-by-Step Guide (3rd ed.). Corwin. VitalBook file.
  • Miller, R. (n.d.). Week 1: Central tendency [Video file]. Retrieved from http://breeze.careeredonline.com/p9fynztexn6/?launcher=false&fcsContent=true&pbMode=normal
  • Schumacker, R. E. (2014) Learning statistics using R. California, SAGE Publications, Inc, VitalBook file.