Big Data Analytics: Health Care Industry

Since its inception 25 years ago, the Human Genome Project has worked to sequence the first ~3 billion base pairs of the human genome over a 13-year period (Green, Watson, & Collins, 2015).  Those 3 billion base pairs amount to about 100 GB uncompressed, and by 2011, 13 quadrillion bases had been sequenced (O’Driscoll, Daugelaite, & Sleator, 2013).  With advancements in technology and software as a service, the cost of sequencing a human genome was drastically cut from $1M to $1K by 2012 (Green et al., 2015; O’Driscoll et al., 2013).  Sequencing is now so cheap that a consumer-driven genetic testing industry has developed, with companies like 23andMe (McEwen, Boyer, & Sun, 2013).  At the beginning of this project, researchers wondered what insights sequencing could bring to understanding disease; now there is an explosion of research studying millions of other genomes from biological pathways, cancerous tumors, microbiomes, etc. (Green et al., 2015; O’Driscoll et al., 2013).  Storing 1M genomes will exceed 1 exabyte (O’Driscoll et al., 2013).  Based on the definitions of Volume (sizes on the order of 1 EB), Variety (different types of genomes), and Velocity (processing huge amounts of genomic data), we can classify the whole genomic project in the health care industry as big data.
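
To put the Volume dimension in perspective, here is a back-of-envelope sketch of the storage arithmetic.  The ~100 GB per finished genome comes from the sources above; the ~10x multiplier for raw sequencing reads and quality scores is an assumption used only for illustration:

```python
# Back-of-envelope genomic storage estimate.
# finished_genome is the ~100 GB uncompressed figure cited above;
# the 10x raw-data multiplier is an illustrative assumption.
GB = 10**9
finished_genome = 100 * GB              # ~100 GB uncompressed per genome
raw_per_genome = 10 * finished_genome   # raw reads + quality scores (assumed ~10x)
genomes = 1_000_000                     # one million genomes

total_bytes = genomes * raw_per_genome
exabytes = total_bytes / 10**18
print(exabytes)  # → 1.0, i.e. about an exabyte for a million genomes
```

Under these assumptions, a million genomes lands right at the exabyte scale O’Driscoll et al. (2013) describe.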

This project has paved the way for other projects, like the sharing and analysis of MRI data from 511 participants (exceeding 18 TB) (Poldrack & Gorgolewski, 2014).  Green et al. (2015) state that the genome project has led to huge innovation in tangential fields not directly related to biology, like chemistry, physics, robotics, computer science, etc.  It was due to this type of research that capillary-based DNA sequencing instruments were invented for sequencing genomes (Green et al., 2015).  The Ethical, Legal and Social Implications program received 5% of the National Institutes of Health budget to study the ethical implications of this data, opening up a new field of study (Green et al., 2015; O’Driscoll et al., 2013).  O’Driscoll et al. (2013) suggested that solutions like Hadoop’s MapReduce would greatly advance this field.  However, they argue that the Java-intensive knowledge currently required can be a bottleneck for biologists.  Luckily, this field is helping to drive the need for graphical user interfaces, which will allow scientists to conduct research without having to learn to program.  O’Driscoll et al. (2013) also state that the biggest drawback of the Hadoop MapReduce function is that it reduces data line by line, whereas genomic data needs to be reduced in groups.  This project should, with time, improve the service offerings of Hadoop for fields outside of biomedical research.
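
To make the line-by-line versus group-wise distinction concrete, here is a minimal pure-Python sketch of the MapReduce pattern (not Hadoop itself; the read records and chromosome keys are hypothetical).  The shuffle step is what lets a reducer see a whole group at once, which is what genomic workloads need:

```python
from collections import defaultdict

# Hypothetical input: sequencing reads tagged with the chromosome they map to,
# paired with each read's length in bases.
reads = [("chr1", 150), ("chr2", 151), ("chr1", 149), ("chr2", 150)]

# Map: emit (key, value) pairs -- here (chromosome, read length).
mapped = [(chrom, length) for chrom, length in reads]

# Shuffle: group values by key.  This grouping is the step that lets the
# reducer operate on a whole chromosome's reads, not one record at a time.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: one call per group -- total bases sequenced per chromosome.
totals = {key: sum(values) for key, values in groups.items()}
print(totals)  # → {'chr1': 299, 'chr2': 301}
```

In real Hadoop the shuffle is distributed across machines, but the map/group/reduce contract is the same.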

In the medical field, cancer diagnoses and treatments will now be possible due to this project (Green et al., 2015).  Green et al. (2015) also predict that a maturation of microbiome science and routine use of stem-cell therapies could result from it.  These predictions are not far from becoming reality and are the foundation of predictive and preventative medicine.  Indeed, McEwen et al. (2013) have examined the ethical issues that arise when researchers, analyzing data from participants who submitted genomic samples 25 years ago, find information that could help those participants take preventative measures against adverse health conditions.  This is partly because clinical versions of this data are starting to become available from companies like 23andMe.  So far this information has yielded genealogy data and a few predictive medical measures (to a certain confidence interval).  Predictive and preventative medical advances are still preliminary and currently in the research phase (McEwen et al., 2013).  Finally, genomics research will pave the way for metagenomics, the study of microbiome data from as many as possible of the ~4-6 × 10^30 bacterial cells on Earth (O’Driscoll et al., 2013).

From this discussion, there is no doubt that genomic data falls under the classification of big data.  The analysis of this data has yielded advances in the medical field and other tangential fields.  Future work to expand predictive and preventative medicine is still needed; currently, it is only in research studies that participants can learn about genomic indicators that may point to certain types of adverse health conditions.

Resources:

  • Green, E. D., Watson, J. D., & Collins, F. S. (2015). Twenty-five years of big biology. Nature, 526.
  • McEwen, J. E., Boyer, J. T., & Sun, K. Y. (2013). Evolving approaches to the ethical management of genomic data. Trends in Genetics, 29(6), 375-382.
  • O’Driscoll, A., Daugelaite, J., & Sleator, R. D. (2013). ‘Big data,’ Hadoop and cloud computing in genomics. Journal of Biomedical Informatics, 46(5), 774-781.
  • Poldrack, R. A., & Gorgolewski, K. J. (2014). Making big data open: data sharing in neuroimaging. Nature Neuroscience, 17(11), 1510-1517.

 


Big Data Analytics: Pizza Industry

Pizza, pizza! A competitive analysis was completed on Dominos, Pizza Hut, and Papa Johns.  Competitive analysis is the gathering of external data that is freely available, i.e. social media like Twitter tweets and Facebook posts.  That is what He, Zha, and Li (2013) studied: approximately 307 total tweets (266 from Dominos, 24 from Papa Johns, 17 from Pizza Hut) and 135 wall posts (63 from Dominos, 37 from Papa Johns, 35 from Pizza Hut) for the month of October 2011 (He et al., 2013).  It should be noted that these are the big three pizza chains, controlling about 23% of the total market share (7.6% from Dominos, 4.23% from Papa Johns, 11.65% from Pizza Hut) (He et al., 2013).  Posts and tweets contain text data, videos, and pictures.  All the data analyzed was text-based and collected manually, and the SPSS Clementine tool was used to discover themes in the text (He et al., 2013).

He et al. (2013) found that Domino’s Pizza was using social media to engage their customers the most, doing the most to reply to tweets and posts.  The types of posts across all three companies varied from promotion to marketing to polling (i.e. “What is your favorite topping?”), facts about pizza, Halloween-themed posts, baseball-themed posts, etc. (He et al., 2013).  Text mining across all three companies surfaced these themes: ordering and delivery (customers shared their experience and feelings about it), pizza quality (taste and quality), feedback on customers’ purchase decisions, casual socialization posts (i.e. Happy Halloween, Happy Friday), and marketing tweets (posts on current deals, promotions, and advertisements) (He et al., 2013).  Besides text mining, there was also a content analysis of each of their sites (367 pictures & 67 videos from Dominos, 196 pictures & 40 videos from Papa Johns, and 106 pictures & 42 videos from Pizza Hut), which showed that the big three were trying to drive customer engagement (He et al., 2013).
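
As a toy illustration of theme discovery, here is a minimal keyword-based tagger.  This is not what the study used (they used SPSS Clementine, a full text-mining tool), and the keyword lists and example posts below are hypothetical, but it shows the basic idea of bucketing posts into themes like the ones He et al. (2013) found:

```python
# Hypothetical keyword lists for the themes reported in the study.
themes = {
    "ordering/delivery": ["order", "delivery", "arrived"],
    "pizza quality": ["taste", "fresh", "quality"],
    "marketing": ["deal", "promo", "coupon"],
    "socialization": ["happy", "halloween", "friday"],
}

def tag_post(post):
    """Return every theme whose keywords appear in the post, else 'other'."""
    text = post.lower()
    matches = [theme for theme, keywords in themes.items()
               if any(kw in text for kw in keywords)]
    return matches or ["other"]

posts = [
    "My order arrived cold :(",
    "Great deal on two large pizzas!",
    "Happy Halloween everyone!",
]
for post in posts:
    print(post, "->", tag_post(post))
```

A real text-mining tool would use tokenization, stemming, and statistical clustering rather than hand-picked keywords, but the output, posts grouped by theme, is the same kind of result.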

He et al. (2013) cite the theory that with higher positive customer engagement, customers can become brand advocates, which increases their brand loyalty and pushes referrals to their friends; approximately one in three people followed a friend’s referral made through social media.  Thus, by evaluating the structured and unstructured data available about their own products and those of their competitors, organizations can improve their customer service, drive improvements in their own products, and draw more customers to their products (He et al., 2013).  Key lessons from this study, which would help any organization gain an advantage in the market, are to (1) constantly monitor your social media and that of your competitors, (2) establish benchmarks of how many posts, likes, shares, etc. you and your competitors get, (3) mine the conversational data for content and context, and (4) analyze the impact of your social media footprint on your own business (when prices rise or fall, what is the response, etc.) (He et al., 2013).

Resources:

  • He, W., Zha, S., & Li, L. (2013). Social media competitive analysis and text mining: A case study in the pizza industry. International Journal of Information Management, 33(3), 464-472.

 

What is Big Data Analytics?

 

What makes big data different from conventional data that you use every day?
The differentiation lies in how big data and conventional data deal with data storage and data analysis. Big data is complex, challenging, and significant (Ward & Barker, 2013). Ward and Barker (2013) traced the definition of Volume, Velocity, and Variety back to Gartner. They then compare it to Oracle’s definition, in which big data means the value derived from merging relational databases with unstructured data that can vary in size, structure, format, etc. Finally, the authors note that Intel’s big data definition is a company generating about 300 TB weekly, typically from transactions, documents, emails, sensor data, social media, etc. They use all of this to argue that the true definition should rest on the size of the data, the complexity of the data, and the technologies used to analyze the data. This is how you can differentiate it from conventional data.

Davenport, Barth, and Bean (2012) stated that IT companies define big data as “more insightful data analysis,” but if used properly, companies can gain a competitive edge from it. Companies that use big data: are aware of data flows (customer-facing data, continuous process data, network relationships, which are dynamic and always changing in a continuous flow); rely on data scientists (upgraded data management skills, programming, math, stats, business acumen, and effective communication); and move big data away from IT functions (concerned with automation) into operations or product functions (since the goal is to present information to the business first). Data in a continuous flow needs business processes set up for obtaining/gathering/capturing, storing, extracting, filtering, manipulating, structuring, monitoring, analyzing, and interpreting it, to help facilitate data-driven decisions.

Finally, Lazer, Kennedy, King, and Vespignani (2014) discussed big data hubris: the assumption that big data can do it all and is a great substitute for conventional data analysis. They state that errors in measurement, validity, reliability, and dependencies in the data cannot be ignored. Big data analysis can also overfit to a small number of cases. Greater value comes from marrying a big data set with other near-real-time data from different sources, but continuous evaluation and improvement should always be incorporated. Sources of error in analysis can arise from measurement (is it stable and comparable across cases and over time, are there systematic errors?), algorithm dynamics, search algorithms, and changes in the data-generating process. The authors finally state that transparency and replicability of data analysis (especially with secondary or aggregate data, since there are fewer privacy concerns there) could help improve the results of big data analysis. Without transparency and replicability, how will other scientists learn and build on the knowledge (thus destroying the accumulation of knowledge)?
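
Overfitting to a small number of cases is easy to demonstrate. The sketch below (not from the paper; the data points are hypothetical, with an underlying trend of y = x plus small “noise”) exactly interpolates six points with a degree-5 polynomial in Lagrange form. The fit is perfect on the cases it was built from, but just outside them it goes wildly wrong:

```python
def lagrange(xs, ys, x):
    """Evaluate at x the unique polynomial interpolating the (xs, ys) points."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical data: underlying trend y = x, plus small measurement noise.
xs = [0, 1, 2, 3, 4, 5]
ys = [0.0, 1.1, 1.9, 3.2, 3.8, 5.1]

# Perfect on the cases we fit...
print(round(lagrange(xs, ys, 3), 2))   # → 3.2
# ...but far off just outside them: the true trend would give ~7.
print(round(lagrange(xs, ys, 7), 2))   # → 56.0
```

The model has memorized the noise, so its confidence inside the sample says nothing about its validity outside it, which is the trap Lazer et al. (2014) describe.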

There is a difference between big data and conventional data. But no matter how big, fast, and different the data sets are, one cannot deny that conventional data gathering, analysis, and techniques have been influenced by big data. Improvements have been made that allow doctoral students to conduct surveys at a much faster rate and gather more unstructured data through interview processes, and transcription software used for audio files in big data can also be used on smaller conventional data sets. Though the two are vastly different and each comes with its own errors, as we improve one, we inadvertently improve the other.

Public Sites that provide free access to big data sets:

References:

  • Davenport, T. H., Barth, P., & Bean, R. (2012). How big data is different. MIT Sloan Management Review, 54(1), 43.
  • Lazer, D., Kennedy, R., King, G., & Vespignani, A. (2014). The parable of Google Flu: Traps in big data analysis. Science, 343(6176), 1203-1205.
  • Ward, J. S., & Barker, A. (2013). Undefined by data: a survey of big data definitions. arXiv preprint arXiv:1309.5821.

Zeno’s Paradox

Some infinities are bigger than others.

A paradox to motion:

Zeno described a paradox of motion, which helps describe one of the many types of infinities. Zeno’s paradox is described below (Stanford Encyclopedia of Philosophy, 2010):

“Imagine Achilles chasing a tortoise, and suppose that Achilles is running at 1 m/s, that the tortoise is crawling at 0.1 m/s and that the tortoise starts out 0.9 m ahead of Achilles. On the face of it Achilles should catch the tortoise after 1s, at a distance of 1m from where he starts (and so 0.1m from where the Tortoise starts). We could break Achilles’ motion up … as follows: before Achilles can catch the tortoise he must reach the point where the tortoise started. But in the time he takes to do this the tortoise crawls a little further forward. So next Achilles must reach this new point. But in the time it takes Achilles to achieve this the tortoise crawls forward a tiny bit further. And so on to infinity: every time that Achilles reaches the place where the tortoise was, the tortoise has had enough time to get a little bit further, and so Achilles has another run to make, and so Achilles has an infinite number of finite catch-ups to do before he can catch the tortoise, and so, Zeno concludes, he never catches the tortoise.”
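
The catch-up intervals in the quote form a geometric series: 0.9 s + 0.09 s + 0.009 s + … = 0.9 / (1 − 0.1) = 1 s. A minimal Python sketch (using the speeds and head start from the quote) shows the infinitely many catch-ups summing to the finite 1-second meeting time:

```python
# Achilles at 1 m/s chases a tortoise at 0.1 m/s with a 0.9 m head start.
achilles_speed = 1.0   # m/s
tortoise_speed = 0.1   # m/s
gap = 0.9              # m, initial head start

total_time = 0.0
for _ in range(30):                     # 30 catch-ups is plenty to converge
    step_time = gap / achilles_speed    # time to reach tortoise's last spot
    total_time += step_time
    gap = tortoise_speed * step_time    # tortoise crawls ahead meanwhile

# Each gap is 1/10 the previous one, so the times form a geometric series
# that converges to 0.9 / (1 - 0.1) = 1 second.
print(round(total_time, 10))  # → 1.0
```

An infinite number of steps, yet a finite total: that is the resolution of the paradox, and the sense in which this infinity is “small.”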

This paradox was used to illustrate that not all infinities are the same, and one infinity can indeed be bigger than another.  An interpretation of this paradox was written poetically in a eulogy in the book The Fault in Our Stars (Green, 2012):

“There are infinite numbers between 0 and 1. There’s .1 and .12 and .112 and an infinite collection of others. Of course there is a bigger infinite set of numbers between 0 and 2, or between 0 and a million. Some infinities are bigger than other infinities. … There are days, many days of them, when I resent the size of my unbounded set. I want more numbers than I’m likely to get, and God, I want more numbers for Augustus Waters than he got. But, Gus, my love, I cannot tell you how thankful I am for our little infinity. I wouldn’t trade it for the world. You gave me a forever within the numbered days, and I’m grateful.” (pg. 259-260)

So to my readers out there, I want to thank you in advance for the little infinity(ies) I will get to share with each of you through this blog, and for that I am grateful.

Resources:

  • Green, J. (2012). The fault in our stars.  New York, New York: Penguin Group (USA) Inc.
  • Stanford Encyclopedia of Philosophy (2010). Zeno’s Paradoxes. Retrieved from http://plato.stanford.edu/entries/paradox-zeno/#AchTor