An Innovation that is possible 15-20 years from now

This post describes an innovation idea that is not possible today but could be available in the next 15–20 years in the United States.

Innovation idea that is not possible today but will be in the next 15-20 years

Mobile technology is everywhere today, and its use is prolific across the diverse populations of the U.S.; even segments of the population that do not own a computer own a smartphone (Kumar, 2015).  Electronic transactions carrying trillions of dollars, sensitive flight data, and more take place all the time (Kumar, 2015; Safian, 2015).  Safian (2015) predicts that mobile voting will be one of the many changes that occur in the next 20 years.

Thirty-three states offer online voter registration, which allowed 6.5% of the electorate to register online for 2014, up from 1.7% in 2010 (Election Assistance Commission [EAC], 2015; Jayakumar, 2015). About 19.2% of ballots in 2014 were rejected due to improper registration (EAC, 2015).  Eighty cities and towns in Canada have experimented with mobile voting since 2003, and Sweden, Latvia, and Switzerland have tested the idea (Gross, 2011).  Since 2005, Estonia has offered a mobile voting period that lasts about seven days and is available to all citizens, and roughly one-quarter to one-third of its votes have been cast online (Vabariigi Valimiskomisjon, 2016).

Mobile voting can help reduce the cost of elections, reduce the need for polling places, encourage and engage disenfranchised voters, reduce the time it takes to cast a vote, reduce the need to travel to a polling place, facilitate fast results, provide a more convenient way of collecting huge amounts of data about the voting population and their turnout, and allow for easier voter registration (Jayakumar, 2015; Kumar, 2015). However, to make mobile voting a key innovation in the next 15-20 years, its main goals must be addressed: security, accessibility, anonymity, convenience, and verifiability (Gross, 2011; Jayakumar, 2015; Kumar, 2015; Safian, 2015).

Forces that define the innovation that may facilitate or reduce its likelihood of success

Technological: Paper ballots provide anonymity and are largely free from manipulation, even though some ballots could still be switched (Jayakumar, 2015). Mobile voting devices currently have issues with security and verifiability (Jayakumar, 2015).  Other countries are working on providing democracy to all by allowing both paper and electronic ballots, as previously discussed.  However, mobile voting is not like typical transactional data from a bank, where a user can correct errors (Jayakumar, 2015).  Technology must take this into account, such that voting data is unalterable in transit from the mobile device to its final destination (Jayakumar, 2015).  In 2014, Zimmerman and Kiniry were able to show how Alaska's PDF ballots are insecure, as proof that the technology is not yet reliable enough to ensure a tamper-free election.

Ethical: Mobile voting can help the lowest-income workers who are afraid to take time off from work to vote, single parents with no daycare options, and people without cars in remote rural areas, and it can increase turnout during midterm and off-season elections, e.g., runoff elections (Jayakumar, 2015; Kumar, 2015). It is suggested that voter intimidation may also be resolved through mobile voting, as people can vote in the privacy of their own homes (Kumar, 2015).

Financial: Huge cost savings could be realized because, in 2014, 732K poll workers were hired for 114K polling locations, which amounts to about 6.4 workers per polling location (Election Assistance Commission [EAC], 2015).

Adv Quant: Compelling Topics

A discussion of the most compelling topics learned in the subject of Advanced Quantitative Analysis.

Compelling topics summary/definitions

  • Supervised machine learning algorithms: models that need a training and a testing data set and that validate their predictions against some predetermined output value (Ahlemeyer-Stubbe & Coleman, 2014; Connolly & Begg, 2014).
  • Unsupervised machine learning algorithms: models that also need a training and a testing data set but, unlike supervised learning, do not validate against a predetermined output value (Ahlemeyer-Stubbe & Coleman, 2014; Connolly & Begg, 2014). Therefore, unsupervised learning tries to find the natural relationships in the input data (Ahlemeyer-Stubbe & Coleman, 2014).
  • General Least Squares Model (GLM): the line of best fit for linear regression modeling, along with its corresponding correlations (Smith, 2015). There are five assumptions to a linear regression model: additivity, linearity, independent errors, homoscedasticity, and normally distributed errors.
  • Overfitting: stuffing a regression model with so many variables that most contribute little toward predicting the dependent variable (Field, 2013; Vandekerckhove, Matzke, & Wagenmakers, 2014). Thus, to avoid the overfitting problem, parsimony is important in big data analytics.
  • Parsimony: describing a dependent variable with the fewest independent variables possible (Field, 2013; Huck, 2013; Smith, 2015). The concept of parsimony could be attributed to Occam’s razor, which states that “plurality ought never be posited without necessity” (Duignan, 2015).  Vandekerckhove et al. (2014) describe parsimony as a way of removing the noise from the signal to create better predictive regression models.
  • Hierarchical Regression: when building a multivariate regression model, researchers build it in stages, adding known independent variables first and newer independent variables afterward in order to avoid overfitting, in a technique called hierarchical regression (Austin, Goel, & van Walraven, 2001; Field, 2013; Huck, 2013).
  • Logistic Regression: a multivariable regression in which one or more continuous or categorical independent variables are used to predict a dichotomous/binary/categorical dependent variable (Ahlemeyer-Stubbe & Coleman, 2014; Field, 2013; Gall, Gall, & Borg, 2006; Huck, 2011). A short R sketch of this method and the nearest neighbor method appears after this list.
  • Nearest Neighbor Methods: K-nearest neighbor (e.g., K = 5) clusters a data point into a group by having the 5 nearest neighbors vote on that data point, and it is particularly useful if the data are binary or categorical (Berson, Smith, & Thearling, 1999).
  • Classification Trees: aid in data abstraction and finding patterns in an intuitive way (Ahlemeyer-Stubbe & Coleman, 2014; Brookshear & Brylow, 2014; Connolly & Begg, 2014) and aid the decision-making process by mapping out all the paths, solutions, or options available to the decision maker to decide upon.
  • Bayesian Analysis: can be reduced to a conditional probability that aims to take into account prior knowledge, but updates itself when new data becomes available (Hubbard, 2010; Smith, 2015; Spiegelhalter & Rice, 2009; Yudkowsky, 2003).
  • Discriminant Analysis: determines how data should best be separated into several groups, based on the independent variables that create the largest separation in the prediction (Ahlemeyer-Stubbe & Coleman, 2014; Field, 2013).
  • Ensemble Models: can perform better than a single classifier, since they are created as a combination of classifiers that have a weight attached to them to properly classify new data points (Bauer & Kohavi, 1999; Dietterich, 2000), through techniques like Bagging and Boosting. Boosting procedures help reduce both bias and variance of the different methods, and bagging procedures reduce just the variance of the different methods (Bauer & Kohavi, 1999; Liaw & Wiener, 2002).
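
To make a couple of these definitions concrete, here is a minimal R sketch, illustrating a logistic regression and a K-nearest neighbor (K = 5) classification on R's built-in iris data. The choice of the iris data, the class package, and the predictor columns is an assumption for illustration only and is not part of the original coursework.

library(class)                                   # provides knn(); assumed to be installed
data(iris)                                       # built-in data, used only for illustration
iris$isVirginica = as.numeric(iris$Species == "virginica")   # dichotomous target for the logistic model

## Logistic regression: continuous predictors predicting a binary outcome
logitFit = glm(isVirginica ~ Sepal.Length + Petal.Length, data = iris, family = binomial)
summary(logitFit)

## K-nearest neighbor (K = 5): the 5 closest training points vote on each held-out case
set.seed(1)
trainIdx = sample(1:nrow(iris), 100)
knnPred = knn(train = iris[trainIdx, 1:4], test = iris[-trainIdx, 1:4],
              cl = iris$Species[trainIdx], k = 5)
table(knnPred, iris$Species[-trainIdx])          # confusion matrix on the held-out rows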

 

References

  • Ahlemeyer-Stubbe, Andrea, Shirley Coleman. (2014). A Practical Guide to Data Mining for Business and Industry, 1st Edition. [VitalSource Bookshelf Online].
  • Austin, P. C., Goel, V., & van Walraven, C. (2001). An introduction to multilevel regression models. Canadian Journal of Public Health, 92(2), 150.
  • Bauer, E., & Kohavi, R. (1999). An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning, 36(1-2), 105-139.
  • Berson, A. Smith, S. & Thearling K. (1999). Building Data Mining Applications for CRM. McGraw-Hill. Retrieved from http://www.thearling.com/text/dmtechniques/dmtechniques.htm
  • Brookshear, G., & Brylow, D. (2014). Computer Science: An Overview, 12th Edition. [VitalSource Bookshelf Online].
  • Connolly, T., & Begg, C. (2014). Database Systems: A Practical Approach to Design, Implementation, and Management, 6th Edition. [VitalSource Bookshelf Online].
  • Dietterich, T. G. (2000). Ensemble methods in machine learning. International workshop on multiple classifier systems (pp. 1-15). Springer Berlin Heidelberg.
  • Duignan, B. (2015). Occam’s razor. Encyclopaedia Britannica. Retrieved from https://www.britannica.com/topic/Occams-razor
  • Field, Andy. (2013). Discovering Statistics Using IBM SPSS Statistics, 4th Edition. [VitalSource Bookshelf Online].
  • Gall, M. D., Gall, J. P., Borg, W. R. (2006). Educational Research: An Introduction, 8th Edition. [VitalSource Bookshelf Online].
  • Hubbard, D. W. (2010). How to measure anything: Finding the values of “intangibles” in business (2nd ed.). New Jersey: John Wiley & Sons, Inc.
  • Huck, Schuyler W. (2011). Reading Statistics and Research, 6th Edition. [VitalSource Bookshelf Online].
  • Liaw, A., & Wiener, M. (2002). Classification and regression by randomForest. R news, 2(3), 18-22.
  • Smith, M. (2015). Statistical analysis handbook. Retrieved from http://www.statsref.com/HTML/index.html?introduction.html
  • Spiegelhalter, D. & Rice, K. (2009) Bayesian statistics. Retrieved from http://www.scholarpedia.org/article/Bayesian_statistics
  • Vandekerckhove, J., Matzke, D., & Wagenmakers, E. J. (2014). Model comparison and the principle of parsimony.
  • Yudkowsky, E.S. (2003). An intuitive explanation of Bayesian reasoning. Retrieved from http://yudkowsky.net/rational/bayes

Adv Quant: Association Rules in R

Introduction

Online radio keeps track of everything you play and uses this information to make recommendations for additional music. This large dataset, with ~300,000 records and ~15,000 users, was mined with arules in R to recommend new music to this community of radio listeners.

Results

Figure 1. The output of the apriori command, which filtered data for the rules under a support of 0.01, a confidence of 0.5, and max length of 3.

Figure 2. The output of the apriori, searching for only a subset of rules: (a) all rules with a lift greater than 5, (b) all rules where the confidence is greater than 0.6, (c) all rules with support > 0.02 and confidence greater than 0.6, (d) all the rules where Rihanna appears on the right-hand side, and (e) the top ten rules with the largest lift.

Figure 3. The output of the apriori command, which filtered the data for rules under a support of 0.001, a confidence of 0.5, and a max length of 2.

Figure 4. The output of the apriori, searching for only a subset of rules: (a) all rules with a lift greater than 5, (b) all rules where the confidence is greater than 0.6, (c) all rules with support > 0.02 and confidence greater than 0.6, (d) all the rules where Rihanna appears on the right-hand side, and (e) the top ten rules with the largest lift.

Discussion

There are a total of 289,956 data points, with 15,001 unique users listening to 1,005 unique artists.  From this dataset, there are a total of 48 rules under a support of 0.01, a confidence of 0.5, and a max length of 3.  When inspecting the first five rules (Figure 1), the results show each rule and its corresponding support, confidence, and lift if it meets the restrictions placed above.  There are also a total of 93 rules under a support of 0.001, a confidence of 0.5, and a max length of 2.  When inspecting the first five rules of that set (Figure 3), the results again show each rule and its corresponding support, confidence, and lift.

Apriori counts the transactions within the "playtrans" matrix.  According to Hahsler et al. (n.d.), the most used constraints for apriori are support and confidence, where the lower the confidence or support values, the more rules the algorithm will generate.  This relationship is illustrated between the two rule sets: with higher support values, fewer rules were generated.  Essentially, support can be seen as the proportion (%) of transactions in the data set containing that exact item set, whereas confidence is the proportion (%) of transactions where the rule is correct (Hahsler et al., n.d.).  The effect of varying only the support value can be seen in the number of subset rules for each rule set (Figures 2 & 4).  When reducing the support level, there was an increase in the number of rules with Rihanna on the right-hand side (Figures 2d & 4d), and this held across all the inspected subset rules, even though the support, confidence, and lift values are the same between the rule sets.

Finally, the greater the lift value, the stronger the association rule (Hahsler et al., n.d.).  When relaxing the constraints, higher lift values could be observed (Figures 1-4); as constraints are weakened, more rules are shown, so higher lift values can appear.  Analyzing the top 10 lift values between both rule sets (Figures 2e and 4e), the top rule under the stricter constraints doesn't appear in the top 10 lift values under the relaxed constraints.  With stricter constraints (Figure 2e), users that listen to "the pussycat dolls" have a higher chance of listening to "rihanna" than to any other artist.  With relaxed constraints (Figure 4e), users that listen to "madvillain" have a higher chance of listening to "mf doom" than to any other artist, and that rule is stronger than the "the pussycat dolls"-"rihanna" rule.  Similar associations can be made from the data found in Figures 1-4.
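
To make the relationship between support, confidence, and lift concrete, here is a minimal sketch with made-up numbers for a single hypothetical rule {A} => {B}; none of these values come from the lastfm data.

supportAB  = 0.012                     # assumed: proportion of users whose playlists contain both A and B
supportA   = 0.020                     # assumed: proportion of users whose playlists contain A
supportB   = 0.040                     # assumed: proportion of users whose playlists contain B
confidence = supportAB / supportA      # P(B | A) = 0.6; the rule is correct 60% of the time
lift       = confidence / supportB     # 15; listening to A makes B 15 times more likely than chance alone
c(confidence = confidence, lift = lift)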

 Code

setwd("C:/Users/fj998d/Documents/R/dataSets")

LastFM=read.csv("lastfm.csv", header = F, sep = ",") ## (Celma, 2009)

#

##

###—————————————————————————————————————-

## Variables: UserID = V1; ArtistID = V2; ArtistName = V3; PlayCount = V4

###—————————————————————————————————————-

## Apriori info (Hahsler, Grun, Hornik, & Buchta, n.d.):

##   Constraints for apriori are known as support and confidence; the lower the confidence or support, the more rules.

##     * Support is the proportion (%) of transactions in the data set with that exact item.

##     * Confidence is the proportion (%) of transaction where the rule is correct.

##   The greater the lift, the stronger the association rule; thus lift is a deviation measure of the total rule

##   support from the support expected under independence.

##   Other constraints used:

##     * Max length defines the maximum size of mined frequent item rules.

###—————————————————————————————————————-

##

#

head(LastFM)

length(LastFM$V1)

summary(levels(LastFM$V1))

summary(levels(LastFM$V2))

## arules package for association rules

install.packages("arules")

library(arules)

## Computational environment for mining association rules and frequent item sets

## we need to manipulate the data a bit before using arules; we split the data in the vector

## x into groups defined in vector f (Hahsler, Grun, Hornik, & Buchta, n.d.)

playlists = split(x=LastFM[,"V2"],f=LastFM$V1) # Convert the data to a matrix so that each fan is a row for artists across the columns (R, n.d.c.)

playlists = lapply(playlists,unique)           # Find unique attributes in playlist, and create a list of those in playlists (R, n.d.a.; R, n.d.b.)

playtrans = as(playlists,"transactions")       # Convert the list of playlists into a transactions object for arules

## Create association rules with a support of 0.01 and confidence of 0.5, with a max length of 3

## which will show the support that listening to one artist gives to other artists; in other words,

## providing lift to an associated artist.

musicrules = apriori(playtrans, parameter=list(support=0.01, confidence=0.5, maxlen=3)) # filter the data for rules

musicrules

inspect(musicrules[1:5])

## Choose any subset

inspect(subset(musicrules, subset=lift>5))                        # tell me all the rules with a lift > 5

inspect(subset(musicrules, subset=confidence>0.6))                # tell me all the rules with a confidence of 0.6 or greater

inspect(subset(musicrules, subset=support>0.02& confidence >0.6)) # tell me all the rules with support > 0.02 and confidence > 0.6

inspect(subset(musicrules, subset=rhs%in%"rihanna"))              # tell me all the rules with rihanna on the right-hand side

inspect(head(musicrules, n=10, by="lift"))                        # tell me the top 10 rules with the largest lift

## Create association rules with a support of 0.001 and confidence of 0.5, with a max length of 2

artrules = apriori(playtrans, parameter=list(support=0.001, confidence=0.5, maxlen=2)) # filter the data for rules

artrules

inspect(artrules[1:5])

 ## Choose any subset

inspect(subset(artrules, subset=lift>5))

inspect(subset(artrules, subset=confidence>0.6))

inspect(subset(artrules, subset=support>0.02& confidence >0.6))

inspect(subset(artrules, subset=rhs%in%"rihanna"))

inspect(head(artrules, n=10, by="lift"))

## Write the rules out to CSV files

write(musicrules, file="musicRulesFromApriori.csv", sep = ",", col.names = NA)

write(artrules, file="artistRulesFromApriori.csv", sep = ",", col.names = NA)

Adv Quant: Decision Trees in R

This post uses the prostate cancer dataset available in R, in which biopsy results are given for 97 men. The goal is to predict tumor spread in this dataset of 97 men who had undergone a biopsy, and the measures used for prediction are BPH, PSA, Gleason score, CP, and size of the prostate.

Classification, Regression, and Conditional Tree Growth Algorithms

The variables used for tree growth algorithms are the log of benign prostatic hyperplasia amount (lbph), log of prostate-specific antigen (lpsa), Gleason score (gleason), log of capsular penetration (lcp) and log of the cancer volume (lcavol) to understand and predict tumor spread (seminal vesicle invasion=svi).

Results

5db3f1.PNG

Figure 1: Visualization of cross-validation results, for the classification tree (left) and regression tree (right).

5db3f2

Figure 2: Classification tree (left), regression tree (center), and conditional tree (right).

5db3f3.PNG

Figure 3: Summarization of tree data: (a) classification tree, (b) regression tree, and (c) conditional tree.

Discussion

For the classification tree growth algorithm, the head node is seminal vesicle invasion, which indicates tumor spread in this dataset. The cross-validation results show that there is only one split in the tree, with a cross-validated relative error for the first split of 0.71429 (Figure 1 & Figure 3a) and a standard deviation of 0.16957 (Figure 3a).  The variable used to split the tree was the log of capsular penetration (Figure 2), with the split occurring at a log of capsular penetration < 1.791.

Next, for the regression tree growth algorithm, there are three leaf nodes because the algorithm split the data twice.  In this case, the relative error for the first split is 1.00931 with a standard deviation of 0.18969, and at the second split the relative error is 0.69007 with a standard deviation of 0.15773 (Figure 1 & Figure 3b).  The tree was split first at a log of capsular penetration < 1.791 and then at a log of prostate-specific antigen < 2.993 (Figure 2).  It is interesting that the first split occurred at the same value for these two different tree growth algorithms, but the relative errors and standard deviations were different, and the regression tree created one more level.

Finally, the conditional tree growth algorithm produced a split at < 1.749 of the log of capsular penetration at the 0.001 significance level and < 2.973 for the log of prostate-specific antigen, also at the 0.001 significance level (Figure 2 & Figure 3c).  The results are similar to the regression tree, with the same number of leaf nodes and similar values at which they split, but more information is gained from the conditional tree growth algorithm than from the classification and regression tree growth algorithms. A hedged sketch of how the classification tree could be pruned appears below, before the full code.
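
As a side note, if the weaker leaves of the classification tree were a concern, the tree could be pruned back. The following is a brief sketch, not part of the original analysis, that assumes the classification object fit in the Code section below and selects the complexity parameter (cp) with the lowest cross-validated error from rpart's cp table.

library(rpart)
## pick the cp value with the smallest cross-validated error (xerror)
bestCp = classification$cptable[which.min(classification$cptable[, "xerror"]), "CP"]
prunedTree = prune(classification, cp = bestCp)   # drop splits that do not lower xerror
plot(prunedTree, uniform = T, main = "Pruned classification tree for prostate cancer")
text(prunedTree, use.n = T, cex = .8)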

Code

#

### ———————————————————————————————————-

## Use the prostate cancer dataset available in R, in which biopsy results are given for 97 men.

## Goal:  Predict tumor spread in this dataset of 97 men who had undergone a biopsy.

## The measures to be used for prediction are BPH=lbph, PSA=lpsa, Gleason Score=gleason, CP=lcp,

## and size of prostate=lcavol.

### ———————————————————————————————————-

##

install.packages("lasso2")

library(lasso2)

data("Prostate")

install.packages("rpart")

library(rpart)

## Grow a classification tree

classification = rpart(svi~lbph+lpsa+gleason+lcp+lcavol, data=Prostate, method="class")

printcp(classification) # display the results

plotcp(classification)  # visualize cross-validation results

plot(classification, uniform = T, main="Classification Tree for prostate cancer") # plot tree

text(classification, use.n = T, all = T, cex=.8)                                  # create text on the tree

## Grow a regression tree

Regression = rpart(svi~lbph+lpsa+gleason+lcp+lcavol, data=Prostate, method="anova")

printcp(Regression) # display the results

plotcp(Regression)  # visualize cross-validation results

plot(Regression, uniform = T, main="Regression Tree for prostate cancer") # plot tree

text(Regression, use.n = T, all = T, cex=.8)                              # create text on the tree

install.packages("party")

library(party)

## Grow a conditional inference tree

conditional = ctree(svi~lbph+lpsa+gleason+lcp+lcavol, data=Prostate)

conditional # display the results

plot(conditional, main="Conditional inference tree for prostate cancer")

Adv Quant: Ensemble Classifiers and RandomForests

A discussion on creating ensembles from different methods such as logistic regression, nearest neighbor methods, classification trees, Bayesian analysis, or discriminant analysis, and a discussion on the use of randomForest to do analysis.

Ensemble classifiers can perform better than a single classifier since they are created as a combination of classifiers that have a weight attached to them to properly classify new data points (Bauer & Kohavi, 1999; Dietterich, 2000).  The ensemble classifier can include methods such as:

  • Logistic Regression: a multivariable regression in which one or more continuous or categorical independent variables are used to predict a dichotomous/binary/categorical dependent variable (Ahlemeyer-Stubbe & Coleman, 2014; Field, 2013; Gall, Gall, & Borg, 2006; Huck, 2011).
  • Nearest Neighbor Methods: K-nearest neighbor (e.g., K = 5) clusters a data point into a group by having the 5 nearest neighbors vote on that data point, and it is particularly useful if the data are binary or categorical (Berson, Smith, & Thearling, 1999).
  • Classification Trees: aid in data abstraction and finding patterns in an intuitive way (Ahlemeyer-Stubbe & Coleman, 2014; Brookshear & Brylow, 2014; Connolly & Begg, 2014) and aid the decision-making process by mapping out all the paths, solutions, or options available for the decision maker to decide upon.
  • Bayesian Analysis: can be reduced to a conditional probability that aims to take into account prior knowledge, but updates itself when new data becomes available (Hubbard, 2010; Smith, 2015; Spiegelhalter & Rice, 2009; Yudkowsky, 2003).
  • Discriminant Analysis: determines how data should best be separated into several groups, based on the independent variables that create the largest separation in the prediction (Ahlemeyer-Stubbe & Coleman, 2014; Field, 2013).

As mentioned above, the ensemble classifier can create weights for each classifier to help improve the accuracy of the total “ensemble classifier result,” through boosting and bagging procedures.  Boosting procedures help reduce both bias and variance of the different methods, and bagging procedures reduce just the variance of the different methods (Bauer & Kohavi, 1999; Liaw & Wiener, 2002).

  • Boosting: sequentially combines weak classifying algorithms to force a reduction in the expected error (Bauer & Kohavi, 1999). The reason this algorithm runs serially is that the previous classifier has already voted on the variables, and that vote is taken into account in the next classifier's prediction (Liaw & Wiener, 2002).
  • Bagging (Bootstrap aggregating): assigns values to classifiers that are created from different uniform samples of the training data set drawn with replacement; it can be computed in parallel because the classifiers don't depend on each other's votes to run the next classification prediction (Bauer & Kohavi, 1999; Liaw & Wiener, 2002). This is also known as an averaging method or a random forest (Ahlemeyer-Stubbe & Coleman, 2014). A minimal bagging sketch follows this list.
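
A minimal bagging sketch in R, assuming the rpart package and R's built-in iris data (both assumptions for illustration only): each tree is fit on a bootstrap sample drawn with replacement, and the trees then vote with equal weight on the held-out cases.

library(rpart)
set.seed(42)
data(iris)
n        = nrow(iris)
trainIdx = sample(1:n, floor(0.7 * n))            # 70% of the rows for training
testIdx  = setdiff(1:n, trainIdx)
B        = 25                                     # number of bootstrap classifiers
votes    = matrix(NA_character_, nrow = length(testIdx), ncol = B)
for (b in 1:B) {
  boot       = sample(trainIdx, length(trainIdx), replace = TRUE)  # bootstrap sample with replacement
  tree       = rpart(Species ~ ., data = iris[boot, ], method = "class")
  votes[, b] = as.character(predict(tree, iris[testIdx, ], type = "class"))
}
## combine the B classifiers by majority (equal-weight) vote
bagged = apply(votes, 1, function(v) names(which.max(table(v))))
mean(bagged == as.character(iris$Species[testIdx]))               # accuracy of the bagged ensemble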

Random Forest

According to Ahlemeyer-Stubbe and Coleman (2014), random forests are multiple decision trees built by selecting multiple random samples from the same data set (either through resampling or disjoint sampling), and the variables that appear more frequently in the forest add more confidence that those variables have a real influence on the dependent variable.  Liaw and Wiener (2002) affirmed this by stating that a variable that frequently appears among many trees in the forest not only adds more confidence in its influence but can also help determine its proximity to the root node.  Random forests add a new level of randomness to bagging algorithms and are robust against overfitting, which is a problem with some decision tree algorithms (Ahlemeyer-Stubbe & Coleman, 2014; Liaw & Wiener, 2002).

The use of random forests is most helpful when relationships between the variables are weak or when there is very little data available (Ahlemeyer-Stubbe & Coleman, 2014).  It is also worth considering that the number of trees needed to achieve great performance increases as the number of variables under consideration increases (Liaw & Wiener, 2002). To learn how to run random forest algorithms in the statistical programming language R, Liaw and Wiener (2002) shared some of their coding syntax as well as observations on how to effectively meet the objectives; a brief illustrative sketch follows.
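
Below is a brief illustrative sketch, assuming the randomForest package and R's built-in iris data; it is not Liaw and Wiener's (2002) original code, only an example of how a random forest might be run and variable influence inspected.

install.packages("randomForest")                  # Liaw & Wiener's implementation of random forests
library(randomForest)
set.seed(123)
rf = randomForest(Species ~ ., data = iris, ntree = 500, importance = TRUE)
print(rf)            # out-of-bag error estimate and confusion matrix
importance(rf)       # which variables appear most influential across the forest
varImpPlot(rf)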

References:

  • Ahlemeyer-Stubbe, Andrea, Shirley Coleman. (2014). A Practical Guide to Data Mining for Business and Industry, 1st Edition. [VitalSource Bookshelf Online].
  • Bauer, E., & Kohavi, R. (1999). An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning, 36(1-2), 105-139.
  • Berson, A. Smith, S. & Thearling K. (1999). Building Data Mining Applications for CRM. McGraw-Hill. Retrieved from http://www.thearling.com/text/dmtechniques/dmtechniques.htm
  • Brookshear, G., & Brylow, D. (2014). Computer Science: An Overview, 12th Edition. [VitalSource Bookshelf Online].
  • Connolly, T., & Begg, C. (2014). Database Systems: A Practical Approach to Design, Implementation, and Management, 6th Edition. [VitalSource Bookshelf Online].
  • Dietterich, T. G. (2000). Ensemble methods in machine learning. International workshop on multiple classifier systems (pp. 1-15). Springer Berlin Heidelberg.
  • Field, Andy. (2013). Discovering Statistics Using IBM SPSS Statistics, 4th Edition. [VitalSource Bookshelf Online].

Adv Quant: Decision Trees

The topic for this discussion is decision trees. This post will compare classification and regression decision trees.

Decision Trees

Humans, when facing a decision, tend to seek out a path, solution, or option that appears closest to the goal (Brookshear & Brylow, 2014). Decision trees are helpful because they are predictive models (Ahlemeyer-Stubbe & Coleman, 2014).  Thus, decision trees aid in data abstraction and finding patterns in an intuitive way (Ahlemeyer-Stubbe & Coleman, 2014; Brookshear & Brylow, 2014; Connolly & Begg, 2014) and aid the decision-making process by mapping out all the paths, solutions, or options available for the decision maker to decide upon.  Every decision is different and varies in complexity; therefore, there is no single way to write a simple and well-thought-out decision tree (Sadalage & Fowler, 2012).

Ahlemeyer-Stubbe and Coleman (2014) stated that decision trees are a great way to identify possible variables for inclusion in statistical models that are mutually exclusive and collectively exhaustive, even if the relationship between the target and the inputs is weak. To help facilitate decision making, each node on a decision tree can have a question attached to it that needs to be asked, with leaves associated with each node representing the differing answers (McNurlin, Sprague, & Bui, 2008). The variable with the strongest influence becomes the topmost branch of the decision tree (Ahlemeyer-Stubbe & Coleman, 2014). Chaudhuri, Lo, Loh, and Yang (1995) define regression decision trees as those where the target question/variable is continuous, real, or logistic yielding. Murthy (1998) confirms this definition of regression decision trees and adds that when the target question/variable needs to be split up into different, finite, and discrete classes, the tree is a classification decision tree.

Aiming to mirror the way the human brain works, classification decision trees can be created using neural network algorithms, which contain a connection of nodes that can have multiple inputs, outputs, and processes in each node (Ahlemeyer-Stubbe & Coleman, 2014; Connolly & Begg, 2014). Neural network algorithms contrast with typical decision trees, which usually have one input, one output, and one process per node (similar to Figure 1). Once a root question has been identified, the decision tree algorithm keeps recursively iterating through the data in an aim to answer the root question (Ahlemeyer-Stubbe & Coleman, 2014).

However, the larger the decision tree, the weaker the leaves get, because the model tends to overfit the data. Thus, thresholds could be applied to the decision tree modeling algorithm to prune back the unstable leaves (Ahlemeyer-Stubbe & Coleman, 2014).  When looking for a decision tree algorithm to parse through data, it is therefore best to find one that has pruning capabilities (a brief sketch of pruning in R follows Figure 1).

Figure 1: A left-to-right decision tree on whether or not to take an umbrella, assuming the person is going to spend any amount of time outside during the day.
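
Below is a hedged sketch of growing and pruning a classification tree in R, using the kyphosis data that ships with the rpart package; the cp threshold of 0.05 is an assumed, illustrative value, not a recommendation.

library(rpart)
fit = rpart(Kyphosis ~ Age + Number + Start, data = kyphosis, method = "class")
printcp(fit)                    # cross-validation table used to judge weak leaves
pruned = prune(fit, cp = 0.05)  # prune back leaves below an assumed complexity threshold
plot(pruned, uniform = T, main = "Pruned tree for the kyphosis data")
text(pruned, use.n = T, cex = .8)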

Advantages of a decision tree

According to Ahlemeyer-Stubbe & Coleman (2014), some of the advantages of using decision trees are:

+ Few assumptions are needed about the distribution of the data

+ Few assumptions are needed about linearity

+ Decision trees are not sensitive to outliers

+ Decision trees are best for large data, because of their adaptability and minimal assumptions needed to begin parsing the data

+ For logistic and linear regression trees, parameter estimation and hypothesis testing are possible

+ For neural network (Classification) decision trees, predictive equations can be derived

According to Murthy (1998) the advantages of using classification decision trees are:

+ Pre-classified examples mitigate the needs for a subject matter expert knowledge

+ It is an exploratory method, as opposed to an inferential method

According to Chaudhuri et al. (1995) the advantages of using a regression decision tree are:

+ It can easily handle model complexity in an easily interpretable way

+ Covariate values are conveyed by the tree structure

+ Statistical properties can be derived and studied

References

  • Ahlemeyer-Stubbe, A., & Coleman, S. (2014). A Practical Guide to Data Mining for Business and Industry, 1st Edition. [VitalSource Bookshelf Online].
  • Brookshear, G., & Brylow, D. (2014). Computer Science: An Overview, 12th Edition. [VitalSource Bookshelf Online].
  • Chaudhuri, P., Lo, W. D., Loh, W. Y., & Yang, C. C. (1995). Generalized regression trees. Statistica Sinica, 641-666. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.133.4786&rep=rep1&type=pdf
  • Connolly, T., & Begg, C. (2014). Database Systems: A Practical Approach to Design, Implementation, and Management, 6th Edition. [VitalSource Bookshelf Online].
  • McNurlin, B., Sprague, R., & Bui, T. (2008). Information Systems Management, 8th Edition. [VitalSource Bookshelf Online].
  • Murthy, S. K. (1998). Automatic construction of decision trees from data: A multi-disciplinary survey. Data Mining and Knowledge Discovery, 2(4), 345-389. Retrieved from http://cmapspublic3.ihmc.us/rid=1MVPFT7ZQ-15Z1DTZ-14TG/Murthy%201998%20DMKD%20Automatic%20Construction%20of%20Decision%20Trees.pdf
  • Sadalage, P. J., & Fowler, M. (2012). NoSQL Distilled: A Brief Guide to the Emerging World of Polyglot Persistence, 1st Edition. [VitalSource Bookshelf Online].

Adv Quant: Bayesian analysis in R

Introduction

Bayes' theory is a conditional probability that takes into account prior knowledge but updates itself when new data becomes available (Hubbard, 2010; Smith, 2015).  The formulation of Bayes' theory is p(θ|y) = p(θ)P(y|θ) / Σ p(θ)P(y|θ), where p(θ) is the prior probability and P(y|θ) is the likelihood (Cowles, Kass, & O'Hagan, 2009).
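
As a small illustration of the formula, here is a minimal R sketch with made-up (hypothetical) prior and likelihood values, not taken from the flight data: the prior probability that a flight is delayed is updated after observing bad weather.

prior      = c(delayed = 0.20, ontime = 0.80)              # p(theta), assumed values
likelihood = c(delayed = 0.60, ontime = 0.10)              # P(bad weather | theta), assumed values
posterior  = prior * likelihood / sum(prior * likelihood)  # p(theta | bad weather)
posterior                                                  # delayed = 0.6, ontime = 0.4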

The Delayed Airplanes Dataset consists of airplane flights from Washington D.C. into New York City.  The date range for this data is for the entire month of February 2016, and there are 702 cases to be studied.

Results

Figure 1: Histogram showcasing the density of flight delays that are 15 minutes or longer.

Figure 2: Summary data for the variables in this Bayesian analysis before training and testing.

Figure 3: Bayesian Prediction of the flight delay data from Washington, D.C. to New York City, NY.

Figure 4: Bayesian prediction results versus the test data results, where false negatives are encircled in blue, while false positives are encircled in red.

Discussion

The histogram (Figure 1) showcases that there are almost three times as many cases in which flights departed on time from Washington, D.C. to New York City, NY as cases in which they were delayed.  The summary data confirm this (Figure 2).

The above summary (Figure 2) states that 77.813% of the flights were not delayed by 15 minutes or more, for the cases with data. There are null values in the departure time, delayed-15-minutes-or-more, and weather delay variables.  To know the percentage of flights per day of the week, carrier, destination, etc., the prior probabilities need to be calculated, as done in the code below.

About 77.2973% of the training data did not have a delay, while 22.7027% had a delay of 15 minutes or more (from the tdelay variable).  These values are close to the summary above (Figure 2), so the training data can be trusted, even though a random sample wasn't taken.  The reason for not taking a random sample is to be able to predict into the future, given that 60% of the data had already been collected.

Comparing both sets of histograms (Figure 1 and Figure 3), the distribution of the first histogram is binomial.  However, the posterior distribution in the second histogram is shaped like a positively skewed distribution.  This was an expected result described by Smith (2015), which is why the author states that the prior distribution has an effect on the posterior distribution.

The Bayesian prediction results tend to produce a number of false negatives compared to the real data set, indicating more type II error than type I error.  From the code below, the proportion of misclassified test cases (false positives plus false negatives, using 0.5 as the cutoff) is 15.302%.

Code

#

## Locate the data, filter out the data, and pull it into R from the computer (R, n.d.b.)

#

setwd("C:/Users/XXX/Documents/R/dataSets")

airplaneData=read.csv("022016DC2NYC_1022370032_T_ONTIME.csv", header = T, sep = ",")

#

##

### ———————————————————————————————————-

##  Data Source: http://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time

##        Dependent:   Departure Delay Indicator, 15 minutes or more (Dep_Del15)

##        Independent: Departure airports of Baltimore-BWI, Dulles-IAD, and Reagan-DCA (Origin)

##        Independent: Arrival airports of Newark-EWR, Kennedy-JFK, and LaGuardia-LGA (Dest)

##        Independent: Carriers (Carrier)

##        Independent: Hours of departure (Dep_Time)

##        Independent: Weather conditions (Weather_Delay)

##        Independent: Monday = 1, Tuesday = 2, …Sunday = 7 (Day_Of_Week)

### ———————————————————————————————————-

##  bayes theory => p(theta|y)= p(theta)*P(y|theta)/(SUM(P(theta)*P(y|theta))) (Cowles, Kass, & O’Hagan, 2009)

### ———————————————————————————————————-

##

#

## Create a data.frame

delay = data.frame(airplaneData)

## Factoring and labeling the variables (Taddy, n.d.)

delay$DEP_TIME = factor(floor(delay$DEP_TIME/100))

delay$DAY_OF_WEEK = factor(delay$DAY_OF_WEEK, labels = c("M", "T", "W", "R", "F", "S", "U"))

delay$DEP_DEL15 = factor(delay$DEP_DEL15)

delay$WEATHER_DELAY= factor(ifelse(delay$WEATHER_DELAY>=1,1,0)) # (R, n.d.a.)

delay$CARRIER = factor(delay$CARRIER, levels = c("AA","B6","DL","EV","UA"))

levels(delay$CARRIER) = c("American", "JetBlue", "Delta", "ExpressJet", "UnitedAir")

## Quick understanding the data

delayed15 = as.numeric(levels(delay$DEP_DEL15)[delay$DEP_DEL15])

hist(delayed15, freq=F, main = "Histogram of Delays of 15 mins or longer", xlab = "time >= 15 mins (1) or time < 15 (0)")

summary(delay)

### Create the training and testing data (60/40%)

ntotal=length(delay$DAY_OF_WEEK)    # Total number of datapoints assigned dynamically

ntrain = sample(1:ntotal,floor(ntotal*(0.6))) # Take values 1 – n*0.6

ntest = ntotal-floor(ntotal*(0.6))       # The number of test cases (40% of the data)

trainingData = cbind(delay$DAY_OF_WEEK[ntrain], delay$CARRIER[ntrain],delay$ORIGIN[ntrain],delay$DEST[ntrain],delay$DEP_TIME[ntrain],delay$WEATHER_DELAY[ntrain],delayed15[ntrain])

testingData  = cbind(delay$DAY_OF_WEEK[-ntrain], delay$CARRIER[-ntrain],delay$ORIGIN[-ntrain],delay$DEST[-ntrain],delay$DEP_TIME[-ntrain],delay$WEATHER_DELAY[-ntrain],delayed15[-ntrain])

## Partition the training data by the delay outcome in column 7 (not delayed vs. delayed)

trainFirst= trainingData[trainingData[,7]<0.5,]

trainSecond= trainingData[trainingData[,7]>0.5,]

### Prior probabilities = p(theta) (Cowles, Kass, & O’Hagan, 2009)

## Dependent variable: time delayed >= 15

tdelay=table(delayed15[ntrain])/sum(table(delayed15[ntrain]))

### Prior probabilities between the partitioned training data

## Independent variable: Day of the week (% flights that occurred on each day of the week)

tday1=table(trainFirst[,1])/sum(table(trainFirst[,1]))

tday2=table(trainSecond[,1])/sum(table(trainSecond[,1]))

## Independent variable: Carrier (% flights that occurred on each carrier)

tcarrier1=table(trainFirst[,2])/sum(table(trainFirst[,2]))

tcarrier2=table(trainSecond[,2])/sum(table(trainSecond[,2]))

## Independent variable: Origin (% flights that occurred from each originating airport)

tOrigin1=table(trainFirst[,3])/sum(table(trainFirst[,3]))

tOrigin2=table(trainSecond[,3])/sum(table(trainSecond[,3]))

## Independent variable: Destination (% flights that occurred into each destination airport)

tdest1=table(trainFirst[,4])/sum(table(trainFirst[,4]))

tdest2=table(trainSecond[,4])/sum(table(trainSecond[,4]))

## Independent variable: Departure Time (% flights that occurred at each time of the day)

tTime1=table(trainFirst[,5])/sum(table(trainFirst[,5]))

tTime2=table(trainSecond[,5])/sum(table(trainSecond[,5]))

## Independent variable: Weather (% flights delayed because of adverse weather conditions)

twx1=table(trainFirst[,6])/sum(table(trainFirst[,6]))

twx2=table(trainSecond[,6])/sum(table(trainSecond[,6]))

### likelihoods = p(y|theta) (Cowles, Kass, & O’Hagan, 2009)

likelihood1=tday1[testingData[,1]]*tcarrier1[testingData[,2]]*tOrigin1[testingData[,3]]*tdest1[testingData[,4]]*tTime1[testingData[,5]]*twx1[testingData[,6]]

likelihood2=tday2[testingData[,1]]*tcarrier2[testingData[,2]]*tOrigin2[testingData[,3]]*tdest2[testingData[,4]]*tTime2[testingData[,5]]*twx2[testingData[,6]]

### Predictions using bayes theory = p(theta|y)= p(theta)*P(y|theta)/(SUM(P(theta)*P(y|theta))) (Cowles, Kass, & O’Hagan, 2009)

Bayes=(likelihood2*tdelay[2])/(likelihood2*tdelay[2]+likelihood1*tdelay[1])

hist(Bayes, freq=F, main="Bayesian Analysis of flight delay data")

plot(delayed15[-ntrain]~Bayes, main="Bayes results versus actual results for flights delayed >= 15 mins", xlab="Bayes Analysis Prediction of which cases will be delayed", ylab="Actual results from test data showing delayed cases")

## Misclassification rate, using 0.5 as the prediction cutoff

densityMeasure = table(delayed15[-ntrain],floor(Bayes+0.5))   # confusion matrix: actual versus rounded prediction

probabilityOfXlarger=(densityMeasure[1,2]+densityMeasure[2,1])/ntest   # false positives + false negatives over all test cases

probabilityOfXlarger
