Adv Quant: Statistical Features of R

Comparing the statistical features of R to its programming features, and explaining how both are useful in analyzing big datasets.
• Describe how the analytics of R are suited for Big Data.


Ward and Barker (2013) traced the Volume, Velocity, and Variety definition of big data back to Gartner. A now widely accepted definition of big data is any set of data that has high velocity, volume, and variety (Davenport & Dyche, 2013; Fox & Do, 2013; Kaur & Rani, 2015; Mao, Xu, Wu, Li, Li, & Lu, 2015; Podesta, Pritzker, Moniz, Holdren, & Zients, 2014; Richards & King, 2014; Sagiroglu & Sinanc, 2013; Zikopoulos & Eaton, 2012). Davenport, Barth, and Bean (2012) stated that IT companies define big data as “more insightful data analysis,” but that, used properly, it can give companies a competitive edge. Data scientists at companies like Google, Facebook, and LinkedIn use R for their finance and data analytics (Revolution Analytics, n.d.). According to Minelli, Chambers, and Dhiraj (2013), R has two million end-users and is used in industries like health and finance.

Why is R so popular, with that many users?  R is free, open-source software that runs on multiple platforms (Unix, Windows, Mac) and has an extensive statistical library supporting everything from basic statistical data analysis to multivariate analysis, scaling up to big data analytics (Hothorn, 2016; Leisch & Gruen, 2016; Schumacker, 2014, 2016; Templ, 2016; Theussl & Borchers, 2016; Wild, 2015).  Given the open-source nature of R, many libraries are built and shared with the greater community, and the Comprehensive R Archive Network (CRAN) hosts thousands of these programs as R Packages (Schumacker, 2014).  Other advantages of R are customizable statistical analysis, control over the analytical process, extensive documentation, and references (Schumacker, 2016).  R Packages allow for everyday data analytics, visually aesthetic data visualizations, and faster results than legacy statistical software, all under the end-user’s control and drawing on the talents of leading data scientists (Revolution Analytics, n.d.).  R’s programming features include a whole suite of data types (scalars, vectors, matrices, arrays, and data frames), as well as importing and exporting data to and from other commercially available statistical/data software (SPSS, SAS, Excel, etc.) (Schumacker, 2014, 2016).  The big data analytics, statistical, and programming features of R are listed in Table 1 (below).  The R Packages listed there, together with the importing and exporting features to other big data statistical software, illustrate how useful R is for analyzing big datasets of various types (Schumacker, 2014, 2016).
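As a minimal sketch of these programming features (the file names survey.csv and survey.sav are hypothetical; the foreign package for reading SPSS files ships with R):

```r
# Core R data types (Schumacker, 2014)
x <- 42                          # numeric scalar
v <- c(2.5, 3.1, 4.8)            # vector
m <- matrix(1:6, nrow = 2)       # matrix
df <- data.frame(id = 1:3,       # data frame: mixed column types
                 group = c("a", "b", "a"),
                 score = v)

# Importing and exporting to other tools' formats
dat <- read.csv("survey.csv")                  # comma-separated (Excel-friendly)
write.csv(df, "scores.csv", row.names = FALSE)

# SPSS files can be read with the foreign package
library(foreign)
spss_dat <- read.spss("survey.sav", to.data.frame = TRUE)
```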

Finally, R is the most dominant analytics tool for big data analytics (Minelli et al., 2013).  Because big data analytics sits at the border of computer science, data mining, and statistics, it is natural to see many R Packages and libraries listed within CRAN that are freely available to use.  Within the field of big data analytics, some (but not all) of the common families of techniques with R Packages are machine learning, cluster analysis, finite mixture models, and natural language processing. Given the extensive libraries available through R Packages and the extensive documentation, R is well suited for big data.

Table 1: Big Data Analytics, Statistical, and Programming Features of R

• R programming features (Schumacker, 2014): input, process, output, R Packages
• Variables in R (Schumacker, 2014): numeric, character, logical
• Data types in R (Schumacker, 2014): scalars, arrays, vectors, matrices, lists, data frames
• Flow control (Schumacker, 2014): loops (for, if, while, else, …); Boolean operators (and, not, or)
• Visualizations (Schumacker, 2014): pie charts, bar charts, histograms, stem-and-leaf plots, scatter plots, box-and-whisker plots, surface plots, contour plots, geographic maps, colors, plus others from the many R Packages
• Statistical analysis (Schumacker, 2014): central tendency, dispersion, correlation tests, linear regression, multiple regression, logistic regression, log-linear regression, analysis of variance, probability, confidence intervals, plus others from the many R Packages
• Distributions: population, sampling, and statistical (Schumacker, 2014): binomial, uniform, exponential, and normal distributions; hypothesis testing, chi-square, z-test, t-test, F-test, plus others from the many R Packages
• Multivariate statistical analysis (Schumacker, 2016): MANOVA, MANCOVA, factor analysis, principal components analysis, structural equation modeling, multidimensional scaling, discriminant analysis, canonical correlation, multiple-group multivariate statistical analysis, plus others from the many R Packages
• Big data analytics, cluster analysis (Leisch & Gruen, 2016): hierarchical clustering, partitioning clustering, model-based clustering, k-means clustering, fuzzy clustering, cluster-wise regression, principal component analysis, self-organizing maps, density-based clustering
• Big data analytics, machine learning (Hothorn, 2016; Templ, 2016): neural networks, recursive partitioning, random forests, regularized and shrinkage methods, boosting, support vector machines, association rules, fuzzy rule-based systems, model selection and validation, tree methods, expectation-maximization, nearest neighbor
• Big data analytics, natural language processing (Wild, 2015): frameworks, lexical databases, keyword extraction, string manipulation, stemming, semantics, pragmatics
• Big data analytics, optimization and mathematical programming (Theussl & Borchers, 2016): optimization infrastructure packages, general-purpose continuous solvers, least-squares problems, semidefinite and convex solvers, global and stochastic optimization, mathematical programming solvers
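To make the table concrete, the k-means clustering listed under cluster analysis is available even in base R. A minimal sketch on R’s built-in iris measurements (the choice of three clusters is illustrative):

```r
# k-means clustering, one of the cluster analysis techniques in Table 1
data(iris)
num <- iris[, 1:4]                  # numeric measurements only
set.seed(123)                       # k-means starts from random centers
fit <- kmeans(num, centers = 3)     # look for three clusters

table(fit$cluster, iris$Species)    # compare clusters to the known species
```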

 

References

  • Davenport, T. H., Barth, P., & Bean, R. (2012). How big data is different. MIT Sloan Management Review, 54(1), 43.
  • Fox, S., & Do, T. (2013). Getting real about Big Data: applying critical realism to analyse Big Data hype. International Journal of Managing Projects in Business, 6(4), 739–760. http://doi.org/10.1108/IJMPB-08-2012-0049
  • Hothorn, T. (2016). CRAN task view: Machine learning & statistical learning. Retrieved from https://cran.r-project.org/web/views/MachineLearning.html
  • Kaur, K., & Rani, R. (2015). Managing Data in Healthcare Information Systems: Many Models, One Solution. Big Data Management, 52–59.
  • Leisch, F. & Gruen, B. (2016). CRAN task view: Cluster analysis & finite mixture models. Retrieved from https://cran.r-project.org/web/views/Cluster.html
  • Mao, R., Xu, H., Wu, W., Li, J., Li, Y., & Lu, M. (2015). Overcoming the Challenge of Variety: Big Data Abstraction, the Next Evolution of Data Management for AAL Communication Systems. Ambient Assisted Living Communications, 42–47.
  • Minelli, M., Chambers, M., & Dhiraj, A. (2013). Big Data, Big Analytics: Emerging Business Intelligence and Analytic Trends for Today’s Businesses. John Wiley & Sons P&T. VitalBook file.
  • Podesta, J., Pritzker, P., Moniz, E. J., Holdren, J., & Zients, J. (2014). Big Data: Seizing Opportunities. Executive Office of the President of USA, 1–79.
  • Revolution Analytics (n.d.). What is R? Retrieved from http://www.revolutionanalytics.com/what-r
  • Richards, N. M., & King, J. H. (2014). Big Data Ethics. Wake Forest Law Review, 49, 393–432.
  • Sagiroglu, S., & Sinanc, D. (2013). Big Data: A review. Collaboration Technologies and Systems (CTS), 42–47.
  • Schumacker, R. E. (2014). Learning statistics using R. California: SAGE Publications, Inc. VitalBook file.
  • Schumacker, R. E. (2016). Using R with multivariate statistics. California: SAGE Publications, Inc.
  • Templ, M. (2016). CRAN task view: Official statistics & survey methodology. Retrieved from https://cran.r-project.org/web/views/OfficialStatistics.html
  • Theussl, S. & Borchers, H. W. (2016). CRAN task view: Optimization and mathematical programming. Retrieved from https://cran.r-project.org/web/views/Optimization.html
  • Ward, J. S., & Barker, A. (2013). Undefined by data: a survey of big data definitions. arXiv preprint arXiv:1309.5821.
  • Wild, F. (2015). CRAN task view: Natural language processing. Retrieved from https://cran.r-project.org/web/views/NaturalLanguageProcessing.html
  • Zikopoulos, P., & Eaton, C. (2012). Understanding Big Data: Analytics for enterprise class Hadoop and streaming data. McGraw-Hill Osborne Media.

Big Data Analytics: Compelling Topics

This post reviews and reflects on what I have learned about big data analytics and offers my opinions on the field’s current compelling topics.

Big Data and Hadoop:

According to Gray et al. (2005), traditional data management relies on arrays and tables to analyze objects, which can range from financial data, galaxies, proteins, events, and spectra to 2D weather data; but when it comes to N-dimensional arrays, there is an “impedance mismatch” between the data and the database.  Big data can be N-dimensional and can also vary across time, e.g., text data (Gray et al., 2005). Big data, by its name, is voluminous. Thus, given the massive amounts of data that need to be processed, manipulated, and calculated upon, parallel processing and programming use the benefits of distributed systems to get the job done (Minelli, Chambers, & Dhiraj, 2013).  Parallel processing makes quick work of a big dataset because, rather than having one processor do all the work, the task is split among many processors.
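A minimal sketch of that idea in R, using the parallel package that ships with R (the slow_summary function, the four-partition split, and the four-worker count are all hypothetical stand-ins for real work):

```r
library(parallel)

# A deliberately slow per-chunk computation, standing in for real work
# such as fitting a model on one partition of a big dataset
slow_summary <- function(chunk) {
  Sys.sleep(1)        # simulate expensive work
  mean(chunk)
}

chunks <- split(rnorm(1e6), rep(1:4, each = 2.5e5))  # four data partitions

# Serial: one processor does everything
serial <- lapply(chunks, slow_summary)

# Parallel: spread the chunks over a cluster of four worker processes
cl <- makeCluster(4)
par_res <- parLapply(cl, chunks, slow_summary)
stopCluster(cl)
```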

Hadoop’s Distributed File System (HDFS) breaks up big data into smaller blocks (IBM, n.d.), which can be aggregated like a set of Legos throughout a distributed database system. Data blocks are distributed across multiple servers. Hadoop is Java-based; it pulls the data stored on the distributed servers, maps key items/objects, and reduces the data to the query at hand (the MapReduce function). Hadoop is built to deal with big data stored in the cloud.
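MapReduce itself runs inside Hadoop, but the map-then-reduce idea can be sketched in plain R with the base Map and Reduce functions (the word-count task and the three-block split are hypothetical):

```r
# Hypothetical "blocks" standing in for HDFS blocks of a text file
blocks <- list("big data is big", "data is voluminous", "big big data")

# Map step: each block independently emits (word, count) pairs
map_counts <- Map(function(block) table(strsplit(block, " ")[[1]]), blocks)

# Reduce step: merge the per-block counts into one overall count
reduce_counts <- function(a, b) {
  words <- union(names(a), names(b))
  sapply(words, function(w) sum(a[w], b[w], na.rm = TRUE))
}
Reduce(reduce_counts, map_counts)
```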

Cloud Computing:

Clouds come in three privacy flavors: public (all customers and companies share the same resources), private (only one group of clients or one company can use particular cloud resources), and hybrid (some aspects of the cloud are public while others are private, depending on data sensitivity).  Cloud technology encompasses Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).  These types of cloud differ in what the company manages versus what the cloud provider manages (Lau, 2011).  Cloud computing differs from the conventional data center, where the company manages it all: application, data, O/S, virtualization, servers, storage, and networking.  Cloud is replacing the conventional data center because infrastructure costs are high.  For a company to spend that much money on a conventional data center that will be outdated in 18 months (Moore’s law of technology) is a constant money sink.  Thus, outsourcing the data center infrastructure is the first step of a company’s movement into the cloud.

Key Components to Success:

You need the buy-in of leaders and employees when it comes to using big data analytics for predictive, prescriptive, or descriptive purposes.  When it came to buy-in, Lt. Palmer had to nurture top-down support as well as buy-in from the bottom up (the ranks).  It was much harder to get buy-in from more experienced detectives, who felt that the introduction of tools like analytics was a way of telling them to give up their long-standing practices, or even of replacing them.  So, Lt. Palmer sold Blue PALMS this way: “What’s worked best for us is proving [the value of Blue PALMS] one case at a time, and stressing that it’s a tool, that it’s a compliment to their skills and experience, not a substitute”.  Lt. Palmer got buy-in from a senior, well-respected officer by helping him solve a case.  The senior officer had a suspect in mind, and after the data was fed in, the tool predicted 20 people who could have done it, ordered from most likely.  The suspect was in the top five, and when apprehended, the suspect confessed.  Doing this case by case built trust among the veteran officers and eventually got their buy-in.

Applications of Big Data Analytics:

A result of big data analytics is online profiling.  Online profiling uses a person’s online identity to collect information about them, their behaviors, their interactions, their tastes, etc., to drive targeted advertising (McNurlin et al., 2008).  Profiling has its roots in third-party cookies, and it has now evolved to include 40 different variables collected from the consumer (Pophal, 2014).  Online profiling allows marketers to send personalized and “perfect” advertisements to the consumer, instantly.

Moving from online profiling to studying social media, He, Zha, and Li (2013) theorized that with higher positive customer engagement, customers can become brand advocates, which increases their brand loyalty and pushes referrals to their friends; approximately one in three people followed a friend’s referral made through social media. This insight came from analyzing the social media data of Pizza Hut, Domino’s, and Papa John’s, as they aim to control more of the market share and increase their revenue.  But are we protecting people’s privacy when we analyze their social media content as they interact with a company?

HIPAA describes how to conduct de-identification of 18 identifiers/variables, which helps protect people from ethical issues that could arise from big data.   HIPAA legislation is not standardized for all big data applications/cases, but it is good practice. HIPAA legislation is mostly concerned with the health care industry, listing the 18 identifiers that have to be de-identified: names, geographic data, dates, telephone numbers, VINs, fax numbers, device IDs and serial numbers, email addresses, URLs, SSNs, IP addresses, medical record numbers, biometric IDs (fingerprints, iris scans, voice prints, etc.), full-face photos, health plan beneficiary numbers, account numbers, any other unique ID numbers (characteristics, codes, etc.), and certification/license numbers (HHS, n.d.).  We must be aware that HIPAA compliance is more a responsibility of the data collector and data owner than of the cloud provider.
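A minimal sketch of that kind of de-identification in R (the patients data frame and its column names are hypothetical; real HIPAA de-identification also covers dates, geography, and re-identification risk):

```r
# Hypothetical patient records containing direct identifiers
patients <- data.frame(
  name  = c("A. Smith", "B. Jones"),
  ssn   = c("111-22-3333", "444-55-6666"),
  email = c("a@example.com", "b@example.com"),
  age   = c(54, 61),
  dx    = c("I10", "E11")
)

# Drop columns matching HIPAA's direct identifiers and replace them
# with a random study ID that cannot be traced back to the person
identifiers <- c("name", "ssn", "email")
deidentified <- patients[, setdiff(names(patients), identifiers)]
deidentified$study_id <- sample(1e6, nrow(deidentified))
deidentified
```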

These concerns go back at least to the human genome project 25 years ago, which sequenced the first 3 billion base pairs of the human genome over a 13-year period (Green, Watson, & Collins, 2015).  Those 3 billion base pairs are about 100 GB uncompressed, and by 2011, 13 quadrillion bases had been sequenced (O’Driscoll et al., 2013). Studying genomic data comes with a whole host of ethical issues.  Some of those are addressed by HIPAA legislation, while other issues remain unresolved today.

One of the ethical issues raised in McEwen et al. (2013): for people who submitted their genomic data 25 years ago, can that data be used today in other studies? What if it were used to help those participants take preventative measures against adverse health conditions?  Ethical issues extend beyond privacy and compliance.  McEwen et al. (2013) warn that data has been collected for 25 years; what if data from 20 years ago shows that a participant could suffer an adverse health condition that is preventable?  What is the duty of today’s researchers to that participant?


Big Data Analytics: Future Predictions?

This is a world that is constantly going through change, especially technological change. There are many predictions about where we will be as a society as a result of leveraging big data. This post focuses on my prediction of where society will be in 10–15 years as a result of big data analytics.

Big data analytics and stifling future innovation?

One way to make a prediction about the future is to understand the current challenges faced in certain parts of a particular field.  In the case of big data analytics, machine learning analyzes data from the past to make a prediction about, or an understanding of, the future (Ahlemeyer-Stubbe & Coleman, 2014).  Ahlemeyer-Stubbe and Coleman (2014) argued that learning only from the past can hinder innovation.  By contrast, Basole, Seuss, and Rouse (2013) studied past popular IT journal articles to see how the field of IT is evolving, and Yang, Klose, Lippy, Barcelon-Yang, and Zhang (2014) concluded that analyzing current patent information can reveal trends and provide companies actionable items to guide and build future business strategies around a patent trend.  The danger of stifling innovation, per Ahlemeyer-Stubbe and Coleman (2014), comes from relying only on past data and experiences and never allowing for experiencing or trying anything new; their example is that if we had only ever optimized the horse-drawn carriage, the automobile would never have been invented.  The analogy is imperfect, though, because the lesson is not to stop collecting data but to collect it more widely.  We should not focus on collecting data about only one item, but about its tangential items as well.  We should focus on collecting a wide range of data from different fields and different sources, to allow new patterns to form and connections to be made, which can promote innovation (Basole et al., 2013).

Future of Health Analytics:

Another way to analyze the future is to dream big, or to draw from a movie (Carter, Farmer, & Siegel, 2014). What if we could analyze our blood daily to help track our overall health, beyond the daily blood-sugar readings most diabetics are accustomed to?  The information generated could help promote a healthier lifestyle.  Currently, doctors aid patients in their care, advising on diet and monitoring overall health.  When the patient goes home, this care disappears.  Constant monitoring could help outpatient care and daily living.  Alerts could be sent to your doctor or to family members if certain biomarkers reach a critical threshold.  This could enable better care, allow people’s social networks to hold them accountable for making healthy life and lifestyle choices, and possibly shorten the time between symptom detection and emergency care in severe cases (Carter, Farmer, & Siegel, 2014).
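A minimal sketch of such threshold alerting in R (the readings, the glucose cutoff, and the notify function are all hypothetical; a real system would page a clinician or send an SMS):

```r
# Hypothetical daily biomarker readings for one person
readings <- data.frame(
  day     = as.Date("2016-01-01") + 0:6,
  glucose = c(110, 126, 135, 190, 142, 205, 118)   # mg/dL
)

threshold <- 180   # hypothetical critical level

# Stand-in for a real alerting channel (email, SMS, pager, etc.)
notify <- function(day, value) {
  message(sprintf("ALERT: glucose %g mg/dL on %s", value, day))
}

high <- readings[readings$glucose > threshold, ]
invisible(Map(notify, format(high$day), high$glucose))
```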

Generating revenue from analyzing consumers:

Soon, it will not be enough to conduct item affinity analysis (market basket analysis).  Item affinity uses rules-based analytics to understand which items frequently co-occur during transactions (Snowplow Analytics, 2016); a short R sketch follows the list below. Item affinity is similar to Amazon.com’s current method of driving more sales by getting customers to consume more.  But what if we started to look at what a consumer intends to buy (Minelli, Chambers, & Dhiraj, 2013)? Analyzing data on consumer product awareness, brand awareness, opinion (sentiment analysis), consideration, preferences, and purchases from a consumer’s multiple social media accounts in real time could allow marketers to create the perfect advertisement (Minelli et al., 2013).  The perfect advertisement would let companies gain a bigger market share, or lure customers to their products and/or services from their competitors.  Minelli et al. (2013) predicted that companies in the future will move toward:

  • Data that can be refreshed every second
  • Data validation that happens in real time, with alerts sent if something is wrong before the data is published, to aid data-driven decisions
  • Executives receiving daily data briefs on their internal processes and on their competitors, allowing them to make data-driven decisions that increase revenue
  • Questions raised in staff or other organizational meetings being answered in minutes or hours, not weeks
  • A cultural change in companies where data is easily available and the phrase “let me show you the facts” is commonly heard among colleagues
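Here is that market basket sketch, assuming the arules package and its bundled Groceries transactions dataset (the support and confidence cutoffs are illustrative):

```r
library(arules)
data("Groceries")   # ~9,800 real point-of-sale transactions

# Mine association rules: items that co-occur more often than chance
rules <- apriori(Groceries,
                 parameter = list(supp = 0.01, conf = 0.5))

# Show the rules with the strongest lift (co-occurrence strength)
inspect(head(sort(rules, by = "lift"), 3))
```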

Big data analytics can affect many other areas as well; a whole new world is opening up to it.  More and more companies and government agencies are hiring data scientists, because they don’t just see the current value these scientists bring; they see their potential value 10–15 years from now.  Thus, the field is expected to change as more talent is recruited into big data analytics.

References:

Ahlemeyer-Stubbe, A., & Coleman, S.  (2014). A Practical Guide to Data Mining for Business and Industry. Wiley-Blackwell. VitalBook file.

Basole, R. C., Seuss, D. C., & Rouse, W. B. (2013). IT innovation adoption by enterprises: Knowledge discovery through text analytics. Decision Support Systems, 54, 1044–1054.

Carter, K. B., Farmer, D., & Siegel, C. (2014). Actionable Intelligence: A Guide to Delivering Business Results with Big Data Fast! John Wiley & Sons P&T. VitalBook file.

Minelli, M., Chambers, M., & Dhiraj, A. (2013). Big Data, Big Analytics: Emerging Business Intelligence and Analytic Trends for Today’s Businesses. John Wiley & Sons P&T. VitalBook file.

Snowplow Analytics (2016). Market basket analysis: identifying products and content that go well together. Retrieved from http://snowplowanalytics.com/analytics/recipes/catalog-analytics/market-basket-analysis-identifying-products-that-sell-well-together.html

Yang, Y. Y., Klose, T., Lippy, J., Barcelon-Yang, C. S., & Zhang, L. (2014). Leveraging text analytics in patent analysis to empower business decisions – a competitive differentiation of kinase assay technology platforms by I2E text mining software. World Patent Information, 39, 24–34.

Big Data Analytics: Career Prospects

There is a wealth of employment opportunities for professionals with knowledge of big data analytics. This post explores what employment opportunities currently exist for people graduating with a concentration in data analytics.

Master’s and doctoral graduates have some advantages over undergraduates: they have done research or capstones involving big datasets; they can explain the motivations and reasoning behind the work (chapters 1 and 2 of the dissertation); they can learn and adapt quickly (chapter 3 reflects what was learned and how it will be applied); and they can think critically about problems (chapters 4 and 5 of the dissertation).  Doctoral students work on a problem for months or years to reach a solution (filling a gap in the knowledge) that they might once have seen as incomplete or unfillable.  But to prepare best for a data science or big data position, the doctorate shouldn’t be purely theoretical; it should include analysis of huge datasets.  Based on my personal analysis, I have noticed that when applying for a senior-level or team lead position in data science, a doctorate gives you an additional three years of experience on top of what you already have, whereas without a doctorate, you need a Master’s degree plus three years of experience to be considered for the same position.

Master’s-level courses in big data help build strong mathematical, statistical, computational, and programming skills. Doctorate-level courses help you learn and push the limits of knowledge in all the fields mentioned above, and also help you become a domain expert in a particular area of data science.  Commanding that domain expertise, which is what a doctoral program provides, can make you more valuable in the job market (Lo, n.d.).  Being more valuable in the job market can allow you to demand more in compensation.  Multiple sources quote multiple ranges for salaries, mostly because this field has yet to be standardized (Lo, n.d.).  Thus, I will provide only two sources for salary ranges.

According to Columbus (2014), jobs that involve big data, including Big Data Solution Architect, Linux Systems and Big Data Engineer, Big Data Platform Engineer, and Lead Software Engineer, Big Data (Java, Hadoop, SQL), have the following salary statistics:

  • Q1: $84,650
  • Median: $103,000
  • Q3: $121,300

Columbus (2014) also stated that it is very difficult to find the right people for an open requisition, and that most requisitions remain open for 47 days.  According to Columbus (2014), the most-wanted skills for data analytics jobs, based on requisition postings in the field, are Python (96.90% growth in demand in the past year), and Linux and Hadoop (76% growth in demand each).

Lo (n.d.) states that individuals with just a BS or MS degree and no full-time work experience should expect $50-75K, whereas data scientists with experience can command from $65-110K on up:

  • Data scientists can earn $85-170K
  • Data science/analytics managers can earn $90-140K with 1-3 direct reports
  • Data science/analytics managers can earn $130-175K with 4-9 direct reports
  • Data science/analytics managers can earn $160-240K with 10+ direct reports
  • Database administrators can earn $50-120K, varying upward with experience
  • Junior big data engineers can earn $79-115K
  • Domain-expert big data engineers can earn $100-165K

One way to look for opportunities currently available in the field is to look at Gartner’s Magic Quadrant for Business Intelligence and Analytics Platforms (Parenteau et al., 2016). If you want to help push a tool toward greater ease of execution and completeness of vision as a data scientist, consider employment at: Pyramid Analytics, Yellowfin, Platfora, Datawatch, Information Builders, Sisense, Board International, Salesforce, GoodData, Domo, Birst, SAS, Alteryx, SAP, MicroStrategy, Logi Analytics, IBM, ClearStory Data, Pentaho, TIBCO Software, BeyondCore, Qlik, Microsoft, and Tableau.  That is one way to look at this data.  Another way is to see which tools are the best in the field (Tableau, Qlik, and Microsoft, with SAS, Birst, Alteryx, and SAP following behind) and learn those tools to become more marketable.


Big Data Analytics: POTUS Report

This has become a data-centric society, relying on real-time data and technology (i.e., cell phones, shopping online, social networking) more than ever. Although there are many advantages associated with the use of this data, there are concerns that the collection of massive amounts of data can lead to an invasion of privacy. In January 2014, President Obama asked his staff to take 90 days to prepare a report on how big data is affecting people’s privacy. This post revolves around that report.

The aim of big data analytics is for data scientists to fuse data from various sources, of various types, and in huge amounts, so that they can find relationships, identify patterns, and spot anomalies.  Big data analytics can provide a descriptive, prescriptive, or predictive result to a specific research question.  Big data analytics isn’t perfect; sometimes the results are not significant, and we must remember that correlation is not causation.  Regardless, there are many benefits to big data analytics, and this is a field where policy has yet to catch up in protecting the nation from potential downsides while still promoting and maximizing the benefits.
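As a small simulated illustration of the correlation-is-not-causation caveat in R (both series are synthetic), two unrelated quantities that each trend over time will correlate strongly:

```r
set.seed(42)
t <- 1:100
a <- 0.5 * t + rnorm(100, sd = 5)   # series A: trends upward over time
b <- 0.3 * t + rnorm(100, sd = 5)   # series B: unrelated, also trends upward

cor(a, b)                # high, despite no causal link between A and B
cor(diff(a), diff(b))    # near zero once the shared trend is removed
```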

Policies for maximizing benefits while minimizing risk in public and private sector

In the private sector, companies can create detailed personal profiles that enable personalized services from a company to a consumer.  Interpreting personal profile data can allow a company to retain and command more of the market share, but it can also leave room for discrimination in pricing, service quality/type, and opportunities through “filter bubbles” (Podesta, Pritzker, Moniz, Holdren, & Zients, 2014).  Policy recommendations should encourage de-identifying personally identifiable information to the point that it cannot lead to re-identification of the data. Current private-sector policies promoting privacy are (Podesta et al., 2014):

  • The Fair Credit Reporting Act helps promote fairness and the privacy of credit and insurance information
  • The Health Insurance Portability and Accountability Act enables people to understand and control how personal health data is used
  • The Gramm-Leach-Bliley Act helps consumers of financial services keep their privacy
  • The Children’s Online Privacy Protection Act minimizes the collection/use of data on children under the age of 13
  • The Consumer Privacy Bill of Rights is a privacy blueprint that helps people understand what their personal data is being collected and used for, consistent with their expectations

In the public sector, we run into issues when the government has collected information about its citizens for one purpose and eventually uses that same citizen data for a different purpose (Podesta et al., 2014).  This creates the potential for the government to exert power over certain types of citizens and hamper civil rights progress in the future.  Current public-sector policies are (Podesta et al., 2014):

  • The Affordable Care Act allows for building a better health care system, moving from a “fee-for-service” program to “fee-for-better-outcomes.” This has allowed big data analytics to promote preventative care rather than emergency care, while restricting the use of that data to eliminate health care coverage for “pre-existing health conditions.”
  • The Family Educational Rights and Privacy Act, the Protection of Pupil Rights Amendment, and the Children’s Online Privacy Protection Act help seal children’s educational records to prevent misuse of that data.

Identifying opportunities for big data in the economy, health, education, safety, energy-efficiency

In the economy, the internet of things equips parts of a product with sensors that monitor and transmit thousands of live data points and send alerts.  These alerts can tell us when maintenance is needed, for which part, and where it is located, saving time across the entire process and improving overall safety (Podesta et al., 2014).

In medicine, predictive analytics can be used to identify instances of insurance fraud, waste, and abuse in real time, saving more than $115M per year (Podesta et al., 2014).  Another use of big data is studying neonatal intensive care, using current data to create prescriptive results that determine which newborns are likely to come into contact with which infections and what the outcomes would be (Podesta et al., 2014).  Monitoring newborns’ heart rates and temperatures along with other health indicators can alert doctors to the onset of an infection, preventing it from getting out of hand. Huge genetic datasets are helping locate the genetic variants behind certain types of genetic diseases that were once hidden in our genetic code (Podesta et al., 2014).

With regard to national safety and foreign interests, data scientists and data visualizers have been using data gathered by the military to help commanders solve real operational challenges on the battlefield (Podesta et al., 2014).  Using big data analytics on satellite data, surveillance data, and road traffic flow data is making it easier to detect, obtain, and properly dispose of improvised explosive devices (IEDs).  The Department of Homeland Security aims to use big data analytics to identify threats as they enter the country, and to identify people with a higher-than-normal probability of conducting acts of violence within the country (Podesta et al., 2014). Another safety-related use of big data analytics is the identification of human trafficking networks through analyzing the “deep web” (Podesta et al., 2014).

Finally, for energy efficiency, understanding weather patterns and climate change can help us understand our contribution to climate change through our use of energy and natural resources. By analyzing traffic data, we can improve energy efficiency and public safety in our current lighting infrastructure by dimming lights at appropriate times (Podesta et al., 2014).  Companies can maximize energy efficiency by using big data analytics to control their direct and indirect energy uses (by optimizing supply chains and monitoring equipment).  We are also moving toward a more energy-efficient future through government partnerships with electric utility companies that give businesses and families access to their personal energy usage in an easy-to-digest manner, allowing people and companies to change their current consumption levels (Podesta et al., 2014).

Protecting your own privacy outside of policy recommendation

The report suggests that we can control our own privacy by using the private-browsing function of most current internet browsers, which helps prevent the collection of personal data (Podesta et al., 2014); note that private browsing varies from browser to browser.  For important decisions like being denied employment, credit, or insurance, consumers should be empowered to know why they were denied and should ask for that information (Podesta et al., 2014).  Finding out the reason can allow people to address those issues and persevere in the future.  We can also encrypt our communications to protect our privacy, with the strongest encryption available.  We need to educate ourselves on how to protect our personal data, build digital literacy, and understand how big data can be used and abused (Podesta et al., 2014).  While we wait for current policies to catch up with the times, we actually have more power over our own data and privacy than we know.

 

Reference:

Podesta, J., Pritzker, P., Moniz, E. J., Holdren, J. & Zients,  J. (2014). Big Data: Seizing Opportunities, Preserving Values.  Executive Office of the President. Retrieved from https://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf

Big Data Analytics: Crime Fighting

Big data analytics can have a profound effect on the success of a business, and several case studies document this success. This post introduces and examines the content of one case study, explains the key problems that needed to be resolved, and identifies the key components that led to the case’s success.

Case Study: Miami-Dade Police Department: New patterns offer breakthroughs for cold cases. 

Introduction:

Tourism is key to South Florida, bringing in $20B per year in a county of 2.5M people.  Robbery and the rise of other street crimes can hurt tourism, and with it a third of the state’s sales tax revenue.  Thus, Lt. Arnold Palmer of the Miami-Dade Police Department’s robbery investigations unit teamed up with IT Services Bureau staff and IBM specialists to develop Blue PALMS (Predictive Analytics Lead Modeling Software) to help fight crime and protect the citizens of, and tourists to, Miami-Dade County. The tool achieved a 73% success rate when tested on 40 solved cases. It was developed because most crimes are usually committed by people who have committed previous crimes.

 Key Problems:

  1. Cold cases needed to be solved and finally closed. Relying on old methods (mostly people skills and evidence gathering), patterns could still be missed, even by the most experienced officers.
  2. Other crimes, like robbery, happen in predictable patterns (times of day and locations), which is explicit knowledge among the force. So the tool shouldn’t tell police the location and time of the next crime; they need to know who did it, so a narrowed-down list of likely suspects would help.
  3. The more experienced police officers are retiring, and their experience and knowledge leave with them. Thus, the tool must allow junior officers to ask the same questions of it, and get the same answers, as they would from asking experienced officers.  Fortunately, newer officers embrace technology whenever they can, whereas veteran officers tread lightly when it comes to embracing it.

Key Components to Success:

It comes down to buy-in. Lt. Palmer had to nurture top-down support as well as buy-in from the bottom up (the ranks).  It was much harder to get buy-in from more experienced detectives, who felt that the introduction of tools like analytics was a way of telling them to give up their long-standing practices, or even of replacing them.  So, Lt. Palmer sold Blue PALMS this way: “What’s worked best for us is proving [the value of Blue PALMS] one case at a time, and stressing that it’s a tool, that it’s a compliment to their skills and experience, not a substitute”.  Lt. Palmer got buy-in from a senior, well-respected officer by helping him solve a case.  The senior officer had a suspect in mind, and after the data was fed in, the tool predicted 20 people who could have done it, ordered from most likely.  The suspect was in the top five, and when apprehended, the suspect confessed.  Doing this case by case built trust among the veteran officers and eventually got their buy-in.

 Similar organizations could benefit:

Other Florida counties with data collection measures similar to the Miami-Dade Police Department’s would be a quick win (a short-term plan) for tool adoption.  Eventually, other police departments in Florida and in other states can adopt the tool, after more successes have been documented and shared by fellow police officers.  Police officers have a brotherhood mentality, and as acceptance of this tool grows, it will eventually reach critical mass, and adoption will come much more quickly than it does today.  Other organizations similar to police departments that could benefit from this tool are fire departments, other emergency responders, the FBI, and the CIA.


Big Data Analytics: Open-Sourced Tools

It is critical that big data analysts have software tools to effectively synthesize large datasets. Various software tools for analyzing big data have grown in popularity over the past few years. This post compares and contrasts three software tools that I found most effective in analyzing big data, including the advantages and disadvantages of each.

Here are three open source text mining software tools for analyzing unstructured big data:

  1. Carrot2
  2. Weka
  3. Apache OpenNLP.

One of the great things about these three software tools is that they are free; there is no licensing cost for any of them.

 Carrot2

Carrot2 is Java-based, with native PHP integration and a C#/.NET API (Gonzalez-Aguilar & Ramirez Posada, 2012).  Carrot2 can organize a collection of documents into categories based on themes, in a visual manner; it can also be used as a web clustering engine. Carpineto, Osinski, Romano, and Weiss (2009) stated that web clustering search engines like Carrot2 help with fast subtopic retrieval (i.e., searching for “tiger” can surface Tiger Woods, tigers, the Bengals, the Bengals football team, etc.), topic exploration (through a cluster hierarchy), and alleviating information overlook (going beyond the first page of search results). The algorithms it uses for categorization are Lingo (Lingo3G), k-means, and STC, which can support multi-language clustering, synonyms, etc. (Carrot, n.d.).  This software can also be used online in place of regular search engines (Gonzalez-Aguilar & Ramirez Posada, 2012).  Gonzalez-Aguilar and Ramirez Posada (2012) explain that the interface has three phases for processing information: entry, filtration, and exit.  It represents the cluster data in three visual formats: heatmap, network, and pie chart.

The disadvantage of this tool is that it only does clustering analysis, but its advantage is that it can be attached to a search engine to enable faster and more accurate searches through its subtopic analysis.  If you would like to use Carrot2 as a search engine, go to http://search.carrot2.org/stable/search and try it out.
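Carrot2 itself is Java, but the underlying idea, grouping documents by the terms they share, can be sketched in R (assuming the tm package; the five tiny documents and the three-cluster choice are hypothetical):

```r
library(tm)

docs <- c("tiger woods wins golf major",
          "bengals football team wins",
          "tigers roam the jungle",
          "golf major championship leaderboard",
          "jungle wildlife includes tigers")

# Build a document-term matrix: rows = documents, columns = word counts
corpus <- VCorpus(VectorSource(docs))
dtm <- DocumentTermMatrix(corpus,
                          control = list(removePunctuation = TRUE,
                                         stopwords = TRUE))

# Cluster documents with similar vocabularies together
set.seed(1)
km <- kmeans(as.matrix(dtm), centers = 3)
split(docs, km$cluster)   # the three "subtopics"
```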

Weka

Weka was originally developed for analyzing agricultural data and has evolved to house a comprehensive collection of data preprocessing and modeling techniques (Patel & Donga, 2015).  It is a Java-based collection of machine learning algorithms for data mining and text mining that can be used for predictive modeling, pre-processing, classification, regression, clustering, association rules, and visualization (Weka, n.d.). Weka can be applied to big data (Weka, n.d.) and SQL databases (Patel & Donga, 2015).

A disadvantage of this tool is its lack of support for multi-relational data mining, but if you can link all the multi-relational data into one table, it can do its job (Patel & Donga, 2015). The comprehensiveness of its analysis algorithms for both data and text mining, plus pre-processing, is its advantage.
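Weka’s algorithms can also be called from R; a minimal sketch, assuming the RWeka bridge package (and the Java it depends on) is installed, fits Weka’s J48 decision tree (its C4.5 implementation) to R’s built-in iris data:

```r
library(RWeka)

# Fit Weka's J48 decision tree (C4.5) to the iris data
fit <- J48(Species ~ ., data = iris)

print(fit)                 # the learned tree
summary(fit)               # Weka's own evaluation output
predict(fit, head(iris))   # predicted species labels
```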

 Apache OpenNLP

Apache OpenNLP is a Java-based machine learning toolkit for natural language tasks such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing, and coreference resolution (OpenNLP, n.d.). OpenNLP works well with the NetBeans and Eclipse IDEs, which helps in the development process.  This tool has dependencies on Maven, UIMA annotators, and SNAPSHOT.
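These annotators are also exposed in R through the openNLP and NLP packages; a minimal sketch, assuming both packages and their English model data are installed (the example sentence is hypothetical):

```r
library(NLP)
library(openNLP)

txt <- as.String("OpenNLP segments sentences. Then it tags each word.")

# Maximum-entropy annotators for sentences, words, and parts of speech
sent_ann <- Maxent_Sent_Token_Annotator()
word_ann <- Maxent_Word_Token_Annotator()
pos_ann  <- Maxent_POS_Tag_Annotator()

# Annotators run in order: sentences first, then words, then POS tags
anns <- annotate(txt, list(sent_ann, word_ann, pos_ann))

words <- anns[anns$type == "word"]
txt[words]   # the extracted word tokens
```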

The advantage of OpenNLP is that specifications of rules, constraints, and lexicons don’t need to be entered manually; it is a machine learning method that aims to maximize entropy (Buyko, Wermter, Poprat, & Hahn, 2006).  Maximizing entropy allows facts to be collected consistently and uniformly.  When the sentence splitter, tokenization, part-of-speech tagging, named entity extraction, chunking, parsing, and coreference resolution were tested on two medical corpora, accuracy was in the high 90% range (Buyko et al., 2006).

High accuracy is this software’s advantage, but it still produces a notable number of errors, which is its disadvantage: the sentence splitter picked up literature citations, and the tokenizer mishandled the specialized characters “-” and “/” (Buyko et al., 2006).

 References:

  • Buyko, E., Wermter, J., Poprat, M., & Hahn, U. (2006). Automatically adapting an NLP core engine to the biology domain. In Proceedings of the Joint BioLINK-Bio-Ontologies Meeting: A Joint Meeting of the ISMB Special Interest Group on Bio-Ontologies and the BioLINK Special Interest Group on Text Data Mining in Association with ISMB (pp. 65–68).
  • Carpineto, C., Osinski, S., Romano, G., & Weiss, D. (2009). A survey of web clustering engines. ACM Computing Surveys, 41(3), Article 17. http://doi.acm.org/10.1145/1541880.1541884
  • Carrot (n.d.) Open source framework for building search clustering engines. Retrieved from http://project.carrot2.org/index.html
  • Gonzalez-Aguilar, A., & Ramirez-Posada, M. (2012). Carrot2: Búsqueda y visualización de la información [Carrot2: Information search and visualization]. El Profesional de la Información. Retrieved from http://project.carrot2.org/publications/gonzales-ramirez-2012.pdf
  • OpenNLP (n.d.). The Apache Software Foundation: OpenNLP. Retrieved from https://opennlp.apache.org/
  • Patel, K., & Donga, J. (2015). Practical approaches: A survey on data mining practical tools. Foundations, 2(9).
  • Weka (n.d.). Weka 3: Data mining software in Java. Retrieved from http://www.cs.waikato.ac.nz/ml/weka/