Compelling topics on analytics of big data

  • Big data is defined as high volume, high variety/complexity, and high velocity, which is known as the 3Vs (Services, 2015).
  • The goals and objectives of the problem should help define which theories and techniques of big data analytics to use. Fayyad, Piatetsky-Shapiro, and Smyth (1996) divided data analytics into descriptive and predictive analytics. Vardarlier and Silahtaroglu (2016) agreed with Fayyad et al.’s (1996) division but added prescriptive analytics. Thus, these three divisions of big data analytics are:
    • Descriptive analytics explains “What happened?”
    • Predictive analytics explains “What will happen?”
    • Prescriptive analytics explains “What should be done?”
  • The scientific method provides a framework for the data analytics lifecycle (Dietrich, 2013; Services, 2015). According to Dietrich (2013), it is a cyclical life cycle that has iterative parts in each of its six steps: discovery, data pre-processing, model planning, model building, communicating results, and operationalizing.
  • Data-in-motion is the real-time streaming of data from a broad spectrum of technologies, which also encompasses data transmission between systems (Katal, Wazid, & Goudar, 2013; Kishore & Sharma, 2016; Ovum, 2016; Ramachandran & Chang, 2016). Data stored on a database or cloud system is considered data-at-rest, and data being processed and analyzed is considered data-in-use (Ramachandran & Chang, 2016). The timely analysis of real-time streaming data is also known as stream reasoning, and implementing solutions for stream reasoning revolves around high-throughput systems and low-latency storage (Della Valle et al., 2016).
  • Data brokers are tasked with collecting data from people, building a particular type of profile on each person, and selling it to companies (Angwin, 2014; Beckett, 2014; Tsesis, 2014). The data brokers’ main mission is to collect data and break down the barriers of geographic location, cognitive or cultural gaps, different professions, or parties that don’t trust each other (Long, Cunningham, & Braithwaite, 2013). The danger of collecting this data is that it can increase incidents of discrimination based on race or income, directly or indirectly (Beckett, 2014).
  • Data auditing is assessing the quality and fitness for purpose of data via key metrics and properties of the data (Techopedia, n.d.). Data auditing processes and procedures are the business’ way of assessing and controlling their data quality (Eichhorn, 2014).
  • If following an agile development process, the key stakeholders should be involved throughout the lifecycle. These key stakeholders include the business user, project sponsor, project manager, business intelligence analyst, database administrator, data engineer, and data scientist (Services, 2015).
  • Lawyers define privacy as (Richards & King, 2014): invasions into protected spaces, relationships, or decisions; collection of information; use of information; and disclosure of information.
  • Richards and King (2014) describe that a binary notion of data privacy does not exist. Data is never completely private/confidential nor completely divulged; it lies in-between these two extremes. Privacy laws should focus on the flow of personal information, where an emphasis should be placed on a type of privacy called confidentiality, in which data is agreed to flow to a certain individual or group of individuals (Richards & King, 2014).
  • Fraud is deception; fraud detection is needed because even as fraud detection algorithms improve, the rate of fraud is increasing (Minelli, Chambers, & Dhiraj, 2013). Data mining has allowed for fraud detection via multi-attribute monitoring, which tries to find hidden anomalies by identifying hidden patterns through the use of class description and class discrimination (Brookshear & Brylow, 2014; Minelli et al., 2013).
  • High-performance computing uses a cluster or grid of servers or virtual machines connected by a network for distributed storage and workflow (Bhokare et al., 2016; Connolly & Begg, 2014; Minelli et al., 2013).
  • Parallel computing environments draw on the distributed storage and workflow of that cluster or grid of servers or virtual machines for processing big data (Bhokare et al., 2016; Minelli et al., 2013).
  • NoSQL (Not only Structured Query Language) databases store data in non-relational form, e.g., graph, document store, column-oriented, key-value, and object-oriented databases (Sadalage & Fowler, 2012; Services, 2015). NoSQL databases have benefits: they provide a data model for applications that require little code and less debugging, run on clusters, handle large-scale data, and evolve with time (Sadalage & Fowler, 2012). A small sketch of these data models follows this topic list.
    • Document store NoSQL databases use a key/value pair where the value is the document itself, which could be in JSON, BSON, or XML (Sadalage & Fowler, 2012; Services, 2015). These document files are hierarchical trees (Sadalage & Fowler, 2012). Sample document databases include MongoDB and CouchDB.
    • Graph NoSQL databases are used for drawing networks by showing the relationships between items in a graphical format that has been optimized for easy searching and editing (Services, 2015). Each item is considered a node, and adding more nodes or relationships while traversing through them is made simpler through a graph database than through a traditional database (Sadalage & Fowler, 2012). Sample graph databases include Neo4j and Pregel (Park et al., 2014).
    • Column-oriented databases are well suited for sparse datasets, ones with many null values; when columns do have data, the related columns are grouped together (Services, 2015). Grouping demographic data like age, income, gender, marital status, sexual orientation, etc., is a great example of a use for this NoSQL database. Cassandra is an example of a column-oriented database.
  • Public cloud environments are where a supplier provides a company a cluster or grid of servers through the internet, like Spark on AWS EC2 (Connolly & Begg, 2014; Minelli et al., 2013).
  • A community cloud environment is a cloud shared exclusively by a set of companies that have similar characteristics, compliance, security, jurisdiction, etc. (Connolly & Begg, 2014).
  • Private cloud environments have a similar infrastructure to a public cloud, but the infrastructure holds the data of one company exclusively, and its services are shared across the different business units of that one company (Connolly & Begg, 2014; Minelli et al., 2013).
  • Hybrid clouds are two or more cloud structures that have either a private, community, or public aspect to them (Connolly & Begg, 2014).
  • Cloud computing allows a company to purchase the services it needs, without having to purchase the infrastructure to support the services it thinks it might need. This allows for hyper-scaling computing in a distributed environment, also known as hyper-scale cloud computing, where the volume of and demand for data can explode exponentially yet still be accommodated cost-efficiently in a public, community, private, or hybrid cloud (Mainstay, 2016; Minelli et al., 2013).
  • The building block system of big data analytics involves a few steps (Burkle et al., 2001):
    • Identify the purpose that the new data will and should serve
      • How many functions should it support
      • Mark which parts of the new data are needed for each function
    • Identify the tools needed to support the purpose of that new data
    • Create a top-level architecture plan view
    • Build based on the plan, but leave room to pivot when needed
      • Some modifications occur to allow the final vision to be achieved given the conditions at the time of building the architecture
      • Other modifications come from a closer inspection of certain components in the architecture
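
As promised above, here is a minimal, dependency-free Python sketch contrasting the NoSQL data models described in this list (key-value, document, graph, and column-family). All records, keys, and field names are hypothetical illustrations, not any particular product’s API.

```python
# Hypothetical illustration of the NoSQL data models described above.
# All records, keys, and field names are made up for demonstration.

# Key-value: an opaque value looked up by a single key.
key_value_store = {"patient:1001": "<serialized blob>"}

# Document store: the value is a hierarchical document (JSON-like tree).
document_store = {
    "patient:1001": {
        "name": "Jane Doe",
        "visits": [
            {"date": "2017-01-05", "vitals": {"bp": "120/80", "temp_c": 37.0}},
        ],
    }
}

# Graph: nodes plus explicit relationships, optimized for traversal.
graph = {
    "nodes": {"patient:1001": {"type": "patient"}, "dr:17": {"type": "doctor"}},
    "edges": [("patient:1001", "TREATED_BY", "dr:17")],
}

# Column-family: related columns grouped together; sparse rows simply
# omit columns, which suits datasets with many nulls.
column_family = {
    "patient:1001": {
        "demographics": {"age": 42, "gender": "F"},  # one column group
        "billing": {"plan": "PPO"},                  # another column group
    }
}

# Traversing the graph is a simple edge walk rather than a relational join.
for src, rel, dst in graph["edges"]:
    print(f"{src} -{rel}-> {dst}")
```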

 

References

  • Angwin, J. (2014). Privacy tools: Opting out from data brokers. Pro Publica. Retrieved from https://www.propublica.org/article/privacy-tools-opting-out-from-data-brokers
  • Beckett, L. (2014). Everything we know about what data brokers know about you. Pro Publica. Retrieved from https://www.propublica.org/article/everything-we-know-about-what-data-brokers-know-about-you
  • Bhokare, P., Bhagwat, P., Bhise, P., Lalwani, V., & Mahajan, M. R. (2016). Private cloud using GlusterFS and Docker. International Journal of Engineering Science, 5016.
  • Brookshear, G., & Brylow, D. (2014). Computer Science: An Overview, (12th). Pearson Learning Solutions. VitalBook file.
  • Burkle, T., Hain, T., Hossain, H., Dudeck, J., & Domann, E. (2001). Bioinformatics in medical practice: What is necessary for a hospital? Studies in Health Technology and Informatics, (2), 951-955.
  • Connolly, T., & Begg, C. (2014). Database Systems: A Practical Approach to Design, Implementation, and Management (6th ed.). Pearson Learning Solutions. [Bookshelf Online].
  • Della Valle, E., Dell’Aglio, D., & Margara, A. (2016). Tutorial: Taming velocity and variety simultaneous big data and stream reasoning. Retrieved from https://pdfs.semanticscholar.org/1fdf/4d05ebb51193088afc7b63cf002f01325a90.pdf
  • Dietrich, D. (2013). The genesis of EMC’s data analytics lifecycle. Retrieved from https://infocus.emc.com/david_dietrich/the-genesis-of-emcs-data-analytics-lifecycle/
  • Eichhorn, G. (2014). Why exactly is data auditing important? Retrieved from http://www.realisedatasystems.com/why-exactly-is-data-auditing-important/
  • Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). From data mining to knowledge discovery in databases. AI Magazine, 17(3), 37. Retrieved from: http://www.aaai.org/ojs/index.php/aimagazine/article/download/1230/1131/
  • Katal, A., Wazid, M., & Goudar, R. H. (2013, August). Big data: Issues, challenges, tools and good practices. In Contemporary Computing (IC3), 2013 Sixth International Conference on (pp. 404-409). IEEE.
  • Kishore, N., & Sharma, S. (2016). Secure data migration from enterprise to cloud storage – analytical survey. BIJIT - BVICAM’s International Journal of Information Technology. Retrieved from http://bvicam.ac.in/bijit/downloads/pdf/issue15/09.pdf
  • Long, J. C., Cunningham, F. C., & Braithwaite, J. (2013). Bridges, brokers and boundary spanners in collaborative networks: A systematic review. BMC Health Services Research, 13(1), 158.
  • Mainstay. (2016). An economic study of the hyper-scale data center. Mainstay, LLC, Castle Rock, CO, USA. Retrieved from http://cloudpages.ericsson.com/transforming-the-economics-of-data-center
  • Minelli, M., Chambers, M., & Dhiraj, A. (2013). Big Data, Big Analytics: Emerging Business Intelligence and Analytic Trends for Today’s Businesses. John Wiley & Sons P&T. [Bookshelf Online].
  • Ovum (2016). 2017 Trends to watch: Big Data. Retrieved from http://info.ovum.com/uploads/files/2017_Trends_to_Watch_Big_Data.pdf
  • Park, Y., Shankar, M., Park, B. H., & Ghosh, J. (2014, March). Graph databases for large-scale healthcare systems: A framework for efficient data management and data services. In Data Engineering Workshops (ICDEW), 2014 IEEE 30th International Conference on (pp. 12-19). IEEE.
  • Ramachandran, M. & Chang, V. (2016). Toward validating cloud service providers using business process modeling and simulation. Retrieved from http://eprints.soton.ac.uk/390478/1/cloud_security_bpmn1%20paper%20_accepted.pdf
  • Richards, N. M., & King, J. H. (2014). Big Data Ethics. Wake Forest Law Review, 49, 393–432.
  • Sadalage, P. J., & Fowler, M. (2012). NoSQL Distilled: A Brief Guide to the Emerging World of Polyglot Persistence (1st ed.). [Bookshelf Online].
  • Services, E. E. (2015). Data Science and Big Data Analytics: Discovering, Analyzing, Visualizing and Presenting Data, (1st). [Bookshelf Online].
  • Techopedia (n.d.). Data audit. Retrieved from https://www.techopedia.com/definition/28032/data-audit
  • Tsesis, A. (2014). The right to erasure: Privacy, data brokers, and the indefinite retention of data. Wake Forest Law Review, 49, 433.
  • Vardarlier, P., & Silahtaroglu, G. (2016). Gossip management at universities using big data warehouse model integrated with a decision support system. International Journal of Research in Business and Social Science, 5(1), 1–14. doi:10.1108/17506200710779521

Modeling and analyzing big data in health care

Let’s consider applying the building block system to a healthcare problem: monitoring patient vital signs, similar to Chen et al. (2010).

  • The purpose that the new data will serve: Most hospitals measure the following vitals for triaging patients: blood pressure and flow, core temperature, ECG, and carbon dioxide concentration (Chen et al., 2010).
    1. Functions it should serve: gathering, storing, pre-processing, and processing the data. Chen et al. (2010) suggested that the system should also perform a consistency check and aggregate and integrate the data.
    2. Which parts of the data are needed to serve these functions: all of them
  • Tools needed: a distributed database system, a wireless network, parallel processing, a graphical user interface for healthcare providers to understand the data, servers, subject matter experts to set upper and lower limits, and classification algorithms that use machine learning
  • Top-level plan: The data will be collected from the vital sign sensors, streaming at various time intervals into a central hub that sends the data in packets over a wireless network into a server room. The server can divide the data into various distributed systems accordingly. A parallel processing program will be able to access the data per patient per window of time to conduct the needed functions and classifications, so as to provide triage warnings if the vitals hit any of the predetermined key performance indicators that require intervention by the subject matter experts. If a key performance indicator is triggered, the data is sent to the healthcare provider’s device via a graphical user interface; a small sketch of this triage loop follows the list below.
  • Pivoting is bound to happen; the following can occur:
    1. The graphical user interface is not healthcare-provider friendly
    2. Some sensors need to be able to throw a warning if they are going bad
    3. Subject matter experts may need to readjust the classification algorithm for better triaging
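
As referenced in the top-level plan above, here is a minimal Python sketch of the windowed triage check. The thresholds, field names, and window size are illustrative assumptions, not clinical values or the plan’s actual implementation.

```python
from collections import defaultdict, deque

# Hypothetical SME-defined limits (illustrative only, not clinical guidance).
LIMITS = {
    "core_temp_c": (35.0, 38.5),
    "systolic_bp": (90, 180),
    "co2_mmHg": (35, 45),
}
WINDOW = 5  # readings kept per patient per vital

windows = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(patient_id, vital, value):
    """Store a streamed reading and return a triage warning if the
    windowed average falls outside the predetermined limits."""
    win = windows[(patient_id, vital)]
    win.append(value)
    avg = sum(win) / len(win)
    lo, hi = LIMITS[vital]
    if not lo <= avg <= hi:
        return f"TRIAGE WARNING: patient {patient_id} {vital} avg={avg:.2f}"
    return None

# Example: a simulated stream of core temperature readings.
for t in [37.0, 39.2, 39.8, 40.1]:
    alert = ingest("patient-1001", "core_temp_c", t)
    if alert:
        print(alert)  # would be pushed to the provider's GUI
```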

Thus, the above problem as discussed by Chen et al. (2010) could be broken apart into its building block components as addressed in Burkle et al. (2001). These components help to create a system to analyze this set of big health care data through analytics, via distributed systems and parallel processing, as addressed by Services (2015) and Mirtaheri et al. (2008).

Draw on a large body of data to form a prediction or variable comparisons within the premise of big data.

Fayyad, Piatetsky-Shapiro, and Smyth (1996) divided data analytics into descriptive and predictive analytics. Vardarlier and Silahtaroglu (2016) agreed with Fayyad et al.’s (1996) division but added prescriptive analytics. The goal of diagnosing illnesses with big data analytics should determine which theory/division one chooses. Raghupathi and Raghupathi (2014) stated some common examples of big data in the healthcare field: personal medical records, radiology images, clinical trial data, 3D imaging, human genomic data, population genomic data, biometric sensor readings, x-ray films, scripts, and traditional paper files. Thus, big data analytics can be used to understand the 23 pairs of chromosomes that are the building blocks for people. Healthcare professionals are using the big data generated from our genomic code to help predict which illnesses a person could get (Services, 2015). Predictive analytics tools and algorithms like decision trees would therefore be of use. Another use of predictive analytics and machine learning is diagnosing an eye disease like diabetic retinopathy from an image by using classification algorithms (Goldbloom, 2016).
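
As a hedged illustration of the decision-tree approach mentioned above, this sketch trains scikit-learn’s DecisionTreeClassifier on a synthetic, made-up dataset; real genomic or retinal-image work would use far richer features and proper validation.

```python
# A minimal predictive-analytics sketch using a decision tree.
# The features and labels are synthetic stand-ins, not real medical data.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: [age, biomarker_level, family_history (0/1)]
X = [
    [35, 0.8, 0],
    [52, 2.4, 1],
    [47, 1.9, 1],
    [29, 0.5, 0],
    [61, 3.1, 1],
    [44, 1.2, 0],
]
y = [0, 1, 1, 0, 1, 0]  # 1 = elevated illness risk, 0 = low risk

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# Predict risk for a new (hypothetical) patient.
print(model.predict([[50, 2.0, 1]]))  # e.g., [1]
```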

Examine the unique domain of health informatics and explain how big data analytics contributes to the detection of fraud and the diagnosis of illness.

A process-mining framework for the detection of healthcare fraud and abuse case study (Yang & Hwang, 2006): Fraud exists in processing health insurance claims because there are many opportunities to commit it across multiple channels of communication: service providers, insurance agencies, and patients. Any one of these three parties can commit fraud, and the highest chance of fraud occurs where service providers can perform unnecessary procedures, putting patients at risk. This case study provided a framework for conducting automated fraud detection. The study collected data on 2,543 gynecology patients from 2001-2002 from one hospital, filtered out noisy data, identified activities based on medical expertise, and identified fraud in about 906 cases.

Summarize one case study in detail related to big data analytics as it relates to organizational processes and topical research.

A case study on the use of Spark in the healthcare field by Pita et al. (2015): Data quality in healthcare data is poor, in particular that of the Brazilian Public Health System. Spark was used in data processing to improve quality through deterministic and probabilistic record linkage within multiple databases. Record linkage is a technique that uses common attributes across multiple databases to identify 1-to-1 matches. Spark workflows were created to do record linkage by (1) analyzing all data in each database for common attributes with high probabilities of linkage; (2) pre-processing the data, where it is transformed, anonymized, and cleaned into a single format so that all the attributes can be compared to each other for a 1-to-1 match; (3) record linkage based on deterministic and probabilistic algorithms; and (4) statistical analysis to evaluate accuracy. Over 397M comparisons were made in 12 hours. They concluded that accuracy depends on the size of the data: the bigger the data, the more accurate the record linkage.
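
To show the flavor of the deterministic and probabilistic linking Pita et al. (2015) describe, here is a toy Python sketch over two made-up record lists. The matching attributes, weights, and threshold are assumptions for illustration, not the study’s actual Spark workflow.

```python
# Toy record-linkage sketch: a deterministic link requires all common
# attributes to agree; a probabilistic link accepts a weighted score
# above a threshold. All records and parameters are hypothetical.
db_a = [{"id": "A1", "name": "maria silva", "dob": "1980-03-02", "city": "salvador"}]
db_b = [{"id": "B7", "name": "maria silva", "dob": "1980-03-02", "city": "sao paulo"},
        {"id": "B9", "name": "joao souza", "dob": "1975-11-30", "city": "recife"}]

WEIGHTS = {"name": 0.5, "dob": 0.4, "city": 0.1}  # assumed attribute weights
THRESHOLD = 0.8                                   # assumed link threshold

def link_score(rec_a, rec_b):
    """Weighted agreement across the common attributes."""
    return sum(w for attr, w in WEIGHTS.items() if rec_a[attr] == rec_b[attr])

for a in db_a:
    for b in db_b:
        score = link_score(a, b)
        if score == sum(WEIGHTS.values()):
            print(f"deterministic link {a['id']} <-> {b['id']}")
        elif score >= THRESHOLD:
            print(f"probabilistic link {a['id']} <-> {b['id']} (score={score:.2f})")
```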

References

  • Burkle, T., Hain, T., Hossain, H., Dudeck, J., & Domann, E. (2001). Bioinformatics in medical practice: What is necessary for a hospital? Studies in Health Technology and Informatics, (2), 951-955.
  • Chen, B., Varkey, J. P., Pompili, D., Li, J. K., & Marsic, I. (2010). Patient vital signs monitoring using wireless body area networks. In Bioengineering Conference, Proceedings of the 2010 IEEE 36th Annual Northeast (pp. 1-2). IEEE.
  • Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). From data mining to knowledge discovery in databases. AI magazine, 17(3), 37. Retrieved from: http://www.aaai.org/ojs/index.php/aimagazine/article/download/1230/1131/
  • Goldbloom, A. (2016). The jobs we’ll lose to machines – and the ones we won’t. TED Talks. Retrieved from https://www.youtube.com/watch?v=gWmRkYsLzB4
  • Mirtaheri, S. L., Khaneghah, E. M., Sharifi, M., & Azgomi, M. A. (2008). The influence of efficient message passing mechanisms on high performance distributed scientific computing. In Parallel and Distributed Processing with Applications, 2008. ISPA’08. International Symposium on (pp. 663-668). IEEE.
  • Pita, R., Pinto, C., Melo, P., Silva, M., Barreto, M., & Rasella, D. (2015). A Spark-based Workflow for Probabilistic Record Linkage of Healthcare Data. In EDBT/ICDT Workshops (pp. 17-26).
  • Raghupathi, W., & Raghupathi, V. (2014). Big data analytics in healthcare: Promise and potential. Health Information Science and Systems, 2(3). Retrieved from http://hissjournal.biomedcentral.com/articles/10.1186/2047-2501-2-3
  • Services, E. E. (2015). Data Science and Big Data Analytics: Discovering, Analyzing, Visualizing and Presenting Data, 1st Edition. [Bookshelf Online].
  • Vardarlier, P., & Silahtaroglu, G. (2016). Gossip management at universities using big data warehouse model integrated with a decision support system. International Journal of Research in Business and Social Science, 5(1), 1–14. doi:10.1108/17506200710779521
  • Yang, W. S., & Hwang, S. Y. (2006). A process-mining framework for the detection of healthcare fraud and abuse. Expert Systems with Applications, 31(1), 56-68.

Data in-motion, Data at-rest, & Data in-use

Data-in-motion is the real-time streaming of data from a broad spectrum of technologies, which also encompasses data transmission between systems (Katal, Wazid, & Goudar, 2013; Kishore & Sharma, 2016; Ovum, 2016; Ramachandran & Chang, 2016). Data stored on a database or cloud system is considered data-at-rest, and data being processed and analyzed is considered data-in-use (Ramachandran & Chang, 2016). The timely analysis of real-time streaming data is also known as stream reasoning, and implementing solutions for stream reasoning revolves around high-throughput systems and low-latency storage (Della Valle et al., 2016). Cisco (2017) stated that data-in-motion’s value decreases with time, unlike data-at-rest.

Data-in-motion focuses on the velocity and variety portions of Gartner’s 3Vs definition of big data (Della Valle, Dell’Aglio, & Margara, 2016). It is becoming an important issue in data analytics due to the emergence of the Internet of Things (IoT), which can be deployed in the cloud and can constitute the variety portion of data-in-motion (Ovum, 2016). Della Valle et al. (2016) stated that knowledge has been represented in various ways, and the analysis of this data allows for understanding implicit information hidden in these different forms of explicit knowledge.

[Figure 1: Conceptual model for real-time stream reasoning, adapted from Della Valle et al. (2016).]

Figure 1 is adapted from Della Valle et al. (2016); it is a conceptual model for real-time streaming that can provide a scalable solution for large volumes of data or a large variety of data sources. In this diagram (Figure 1), a wrapper hides the individuality of each data source by transforming it to look like one data source, while the mapping ties all the data together (Della Valle et al., 2016).
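
A minimal sketch of the wrapper-and-mapping idea in Figure 1, assuming two hypothetical sources with different field names: each wrapper normalizes its source into one shared schema, and the mapping merges the streams.

```python
# Sketch of Figure 1's wrappers: each hides its source's individuality by
# emitting records in a single shared schema; field names are hypothetical.

def wrap_hospital_feed(raw):
    # Source A uses {"pid": ..., "temp": ...} (assumed format).
    return {"patient_id": raw["pid"], "core_temp_c": raw["temp"], "source": "A"}

def wrap_wearable_feed(raw):
    # Source B uses {"user": ..., "temperature_f": ...} (assumed format).
    celsius = (raw["temperature_f"] - 32) * 5 / 9
    return {"patient_id": raw["user"], "core_temp_c": round(celsius, 1), "source": "B"}

def mapped_stream(*wrapped_streams):
    """The 'mapping' step: tie the normalized streams together."""
    for stream in wrapped_streams:
        yield from stream

feed_a = (wrap_hospital_feed(r) for r in [{"pid": "1001", "temp": 37.1}])
feed_b = (wrap_wearable_feed(r) for r in [{"user": "1001", "temperature_f": 100.4}])

for record in mapped_stream(feed_a, feed_b):
    print(record)  # one uniform schema regardless of origin
```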

Kishore and Sharma (2016) and Ramachandran and Chang (2016) describe in their conceptual models data-in-motion as data in transit between two systems of data-at-rest. Kishore and Sharma (2016) stated that data is most vulnerable while it is in motion. Given these vulnerabilities, Kishore and Sharma (2016) discussed that protecting data-in-motion can be done either through encryption or through Virtual Private Network (VPN) connections covering the entire process. Ramachandran and Chang (2016) stated that encryption is the only security technique for data-in-motion. However, security is not addressed in Della Valle et al.’s (2016) system, which is one of many reasons why Kishore and Sharma (2016) suggested security for data-in-motion as an area for future research.

Cisco (2017) illustrates the need for further knowledge and development of data-in-motion research because retailers have much to gain from it. For retail environments, data collection and processing are key to thriving and increasing profit margins, because retailers are trying to build brand recognition, brand affinity, and a relationship with their customers. All of this is done to enhance the customer experience; for example, using data coming from a web camera to create a virtual mirror, where the customer can try on accessories and see how they fit their personal style, creates a customer experience from data-in-motion (Cisco, 2017). This virtual mirror would use facial recognition technology similar to that of Snapchat filters. Retailers could also use data-in-motion by collecting phone location and demographic data to create real-time promotions for nearby travelers or in-store customers (Cisco, 2017). Finally, Cisco (2017) also discussed how data-in-motion could help provide proactive and cost-effective health care, enhance manufacturing supply chains, provide scalable and secure energy production, etc.


International health care data laws

The International Health Regulations (IHR) have governed the way health is dealt with internationally since 1969, and they were updated in 2005 (Georgetown Law, n.d.; World Health Organization [WHO], 2005). Article 45 of the IHR deals with the treatment of personal data (WHO, 2005):

  • Personally identifiable data and information that has been collected or received shall be kept confidential and processed anonymously.
  • Data can be disclosed for purposes that are vital for public health; however, the data transferred must be adequate, accurate, relevant, up-to-date, and not excessive, and it must be processed fairly and lawfully.
  • Bad or incompatible data is either corrected or deleted.
  • Personal data is not kept any longer than necessary.
  • WHO will provide a patient’s data to that patient upon request in a timely fashion and allow patients to correct their data.

The European Union has the Directive on Data Protection of 1998 (DDP), and Canada has the Personal Information Protection and Electronic Documents Act of 2000 (PIPEDA), which is similar to the U.S. HIPAA regulations set forth by the U.S. Department of Health and Human Services (Guiliano, 2014). Eventually, the EU in 2012 proposed the Data Protection Regulation (DPR), adopted in 2016 (Hordern, 2015; Justice, n.d.).

The EU’s DDP provides the following (Guiliano, 2014):

  • It is outlawed to transfer data to any non-EU entity that doesn’t meet EU data protection standards.
  • Consent must be given before sensitive data is gathered, and only for certain situations.
  • Only data that is needed at the time, with an explicit and reasonable purpose, may be collected.
  • Patients should be allowed to correct errors in personal data, and if the data is outdated or useless, it must be discarded.
  • People with access to this data must be properly trained.

EU’s DPR allows (Hordern, 2015; Justice, n.d.):

  • People can allow their data to be used for future scientific research where the purpose is still unknown, as long as the research is conducted by “recognized ethical …”
  • Processing data for scientific studies based on the data that has already been collected is legal without the need to get additional consent
  • Health data may be used without the consent of the individual for public health purposes
  • Health data cannot be used by employers, insurance, and banking companies
  • If data is being or will be used for future research, it can be retained longer than current regulations otherwise allow

Canadian’s PIPEDA allows (Guiliano, 2014):

  • Patients should know the business justification for using their personal and medical data.
  • Patients can review their data and have errors corrected
  • Organizations must request from their patients the right to use their data for each situation except in criminal cases or emergencies
  • Organizations cannot collect patient and medical data that is not needed for the current situation unless they ask permission from their patients, telling them how it will be used and who will use it.

Other international laws or regulations regarding big data, from Australia, Brazil, China, France, Germany, India, Israel, Japan, South Africa, and the United Kingdom, are summarized in the International and Comparative Study on Big Data (van der Sloot & van Schendel, 2016). When it comes to transferring U.S.-collected and processed data internationally, the U.S. holds all U.S.-regulated entities liable to all U.S. data regulations (Jolly, 2016). Some states in the U.S. further restrict the export of personal data to international entities (Jolly, 2016). Thus, any data exported or imported from other countries must comply with the regulations of the country (or state) of origin and those of the country (or state) to which it is exported.

In the United Kingdom, a legal case on health care data was presented and ruled upon. The case dealt with whether de-identified data on primary care physicians’ prescription habits breached confidentiality laws because of the lack of consent (Knoppers, 2000). The consent had to cover both commercial and public-issue purposes; the lack of both types of consent meant that there was a misuse of data. In a Supreme Court of Canada case, consent was not collected properly, which violated the expectation of privacy between patients and a private healthcare provider (Knoppers, 2000). All of these international and domestic laws and regulations on data usage, consent, and the expectation of privacy with healthcare data are trying to protect people from the misuse of their data.


Data auditing for health care

Data auditing is assessing the quality and fitness for purpose of data via key metrics and properties of the data (Techopedia, n.d.). Data auditing processes and procedures are the business’ way of assessing and controlling their data quality (Eichhorn, 2014). Data audits allow a business to fully realize the value of its data and provide higher fidelity in data analytics results (Jones, Ross, Ruusalepp, & Dobreva, 2009). Data auditing is needed because the data could contain human error or could be subject to IT compliance regulations like HIPAA, SOX, etc. (Eichhorn, 2014). When it comes to health care, data audits can help detect unauthorized access to confidential patient data, reduce the risk of such access, help detect defects, help detect threats and intrusion attempts, etc. (Walsh & Miaolis, 2014).

Data auditors can perform a data audit by considering the following aspects of a dataset (Jones et al., 2009):

  • Data by origin: observation, computed, experiments
  • Data by data type: text, images, audio, video, databases, etc.
  • Data by characteristics: value, condition, location

A condensed data audit process for research was proposed by Shamoo (1989):

  1. Select published, claimed, or random data from a figure, table, or data source
  2. Evaluate whether all the formulas and equations are correct and used correctly
  3. Convert all the data into numerical values
  4. Re-derive the original data using the formulas and equations
  5. Segregate the various parameters and values to identify the sources of the original data
  6. If the re-derived data matches the data selected in (1), the audit turned up no quality issues; if not, a root-cause analysis needs to be conducted to understand where the data quality failed
  7. Formulate a report based on the results of the audit
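
A small sketch of steps (1)-(6) above, assuming a hypothetical published table whose reported values should follow a stated formula: the audit re-derives each value and flags mismatches for root-cause analysis.

```python
# Condensed data-audit sketch (after Shamoo, 1989): re-derive published
# values from their formula and compare. Table and formula are hypothetical.
published = [
    {"dose_mg": 5.0, "doses_per_day": 3, "reported_daily_mg": 15.0},
    {"dose_mg": 2.5, "doses_per_day": 4, "reported_daily_mg": 12.0},  # bad row
]

def rederive(row):
    # Step 4: re-derive the original value using the stated formula.
    return row["dose_mg"] * row["doses_per_day"]

for i, row in enumerate(published, start=1):
    expected = rederive(row)
    if abs(expected - row["reported_daily_mg"]) > 1e-9:
        # Step 6: mismatch -> flag for root-cause analysis.
        print(f"row {i}: reported {row['reported_daily_mg']}, re-derived {expected}")
    else:
        print(f"row {i}: OK")
```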

Jones et al. (2009) provided a four-stage process with a detailed swim lane diagram:

[Figure: Four-stage data audit process swim lane diagram (Jones et al., 2009).]

For some organizations, creating a log file of all data transactions can aid in improving data integrity (Eichhorn, 2014). The creation of the log file must be scalable and separated from the system under audit (Eichhorn, 2014). Log files can be created for one system or many; meanwhile, all the log files should be centralized in one location, and the log data must be abstracted into a common and universal format for easy searching (Eichhorn, 2014). Regardless of the techniques, HIPAA sections 164.308-164.312 address information system audits in the health care system (Walsh & Miaolis, 2014).
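
To illustrate the log-file points above, here is a sketch using Python’s standard logging module with one common, searchable record format and a handler writing to a central file. The format fields, file path, and helper are assumptions for illustration.

```python
# Sketch of a centralized, uniformly formatted audit log (fields assumed).
import logging

audit = logging.getLogger("audit")
handler = logging.FileHandler("central_audit.log")  # hypothetical central sink
handler.setFormatter(logging.Formatter(
    "%(asctime)s|%(name)s|%(levelname)s|%(message)s"))  # one common format
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def log_access(system, user, record_id, action):
    """Every data transaction, from any system, lands in one searchable format."""
    audit.info("system=%s user=%s record=%s action=%s",
               system, user, record_id, action)

# Example: transactions from two different systems, one shared format.
log_access("EHR", "dr_smith", "patient:1001", "read")
log_access("billing", "clerk42", "patient:1001", "update")
```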

HIPAA has determined key activities for a healthcare system to have a data auditing protocol (Walsh & Miaolis, 2014):

  • Determine the activities that will be tracked or audited: create a process flow or swim lane diagram like the one above, involve key data stakeholders, and evaluate which audit tools will be used.
  • Select the tools that will be deployed for auditing and system activity reviews: ones that can detect unauthorized access to data, drill down into the data, collect audit logs, and present the findings in a report or dashboard.
  • Develop and deploy the information system activity review/audit policy: determine the frequency of the audits and what events would trigger additional audits.
  • Develop appropriate standard operating procedures: for presenting the results, dealing with the fallout of what the audit reveals, and efficient audit follow-up.

References

Sample HIPAA compliance memorandum

Memorandum Title: Healthcare industry: Data privacy requirements per the Health Insurance Portability and Accountability Act (HIPAA)

Date: March 1, 2017

Introduction and Problem Definition

Health care data can be used for providing preventative and emergent health care to health care consumers. The use of this data in aggregate provides huge datasets, which allow big data analytics to find hidden patterns that could be used to improve healthcare. However, the Health Insurance Portability and Accountability Act (HIPAA), a health care consumer data protection act, must be followed. This Act protects health care consumers’ data from being improperly disclosed or used, and any data exchanged between health care providers, health plans, and healthcare clearinghouses should be the minimum necessary for both parties to accomplish their tasks (Health and Human Services [HHS], n.d.a.; HHS, n.d.b.). Though the use of big health care data is promising, we must follow our Hippocratic Oath, and HIPAA is the way of keeping that oath while providing new services to our consumers.

Methods

All health care data, whether physical or mental, from a person’s past, present, and future is protected under HIPAA (HHS, n.d.a.). According to HHS (n.d.b.), groups with health care consumers’ data should always place limits on who has read, write, and edit access to the data. Identifiable data can include name, address, birth date, social security number, other demographic data, mental and physical health data or conditions, and health care payments (HHS, n.d.a.). Any disclosure of health data requires the individual’s consent via a consent form that states specifically who will get what data and for what purposes (HHS, n.d.a.; HHS, n.d.b.).

Consequences of data breaches

A violation is obtaining or disclosing individually identifiable health information (Indest, 2014). Those subject to the HIPAA regulations are health plans, healthcare providers, and health care clearinghouses (HHS, n.d.a.; HHS, n.d.b.). Any detected violation by any of the abovementioned parties must be corrected within 30 days of discovery to avoid the civil or criminal penalties (up to one year of imprisonment) of a HIPAA violation (Indest, 2014).

Table 1: List of tiered civil penalties for HIPAA Violations (HHS, n.d.a.; Indest, 2014).

| HIPAA Violation | Minimum Penalty | Maximum Penalty |
| --- | --- | --- |
| Unknowingly causing a violation | $100 per violation, until $25K is reached per year | $50K per violation, until $1.5M is reached per year |
| Reasonable cause, not due to willful neglect | $1K per violation, until $100K is reached per year | $50K per violation, until $1.5M is reached per year |
| Willful neglect, with a corrective action plan but requiring time to enact | $10K per violation, until $250K is reached per year | $50K per violation, until $1.5M is reached per year |
| Willful neglect, with no corrective action plan | $50K per violation, until $1.5M is reached per year | $50K per violation, until $1.5M is reached per year |
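
To make the accrual rule in Table 1 concrete, here is a tiny worked example in Python computing a hypothetical yearly exposure: penalties accumulate per violation until the annual cap for that tier is reached. The violation counts are made up.

```python
# Worked example of Table 1's tiered caps (amounts from the table above).
def annual_penalty(per_violation, annual_cap, violation_count):
    """Penalties accrue per violation until the yearly cap is reached."""
    return min(per_violation * violation_count, annual_cap)

# Unknowing violations at the minimum tier: $100 each, capped at $25K/year.
print(annual_penalty(100, 25_000, 300))       # 30,000 -> capped at 25,000
# Willful neglect, no corrective plan: $50K each, capped at $1.5M/year.
print(annual_penalty(50_000, 1_500_000, 40))  # 2,000,000 -> capped at 1,500,000
```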


Data privacy and governance in health care

Lawyers define privacy as (Richards & King, 2014):

  1. Invasions into protected spaces, relationships, or decisions
  2. Collection of information
  3. Use of information
  4. Disclosure of information

Given the body of knowledge of technology and data analytics, data collection and analysis may give off the appearance of a “Big Brother” state (Li, 2010). The Privacy Act of 1974 prevents the U.S. government from collecting its citizens’ data and storing it in databases, but it does not extend to companies (Brookshear & Brylow, 2014). Confidentiality does exist for health records via the Health Insurance Portability and Accountability Act (HIPAA) of 1996, and for financial records through the Fair Credit Reporting Act, which also allows people to correct erroneous information in their credit reports (Richards & King, 2014). The Electronic Communications Privacy Act of 1986 limits wiretapping of communications by the government, but it does not extend to companies (Brookshear & Brylow, 2014). The Video Privacy Protection Act of 1988 protects people’s videotape rental records (Richards & King, 2014). Finally, in 2009 the HITECH Act strengthened the enforcement of HIPAA (Pallardy, 2015). Some people see the risk of the loss of privacy via technology and data analytics, while others embrace it due to the benefits they perceive they would gain from disclosing this information (Wade, 2012). All of these privacy protection laws are outdated and do not extend to the rampant use, collection, and mining of data enabled by the technology of the 21st century.

However, Richards and King (2014) describe that a binary notion of data privacy does not exist. Data is never completely private/confidential nor completely divulged; it lies in-between these two extremes. Privacy laws should focus on the flow of personal information, where an emphasis should be placed on a type of privacy called confidentiality, in which data is agreed to flow to a certain individual or group of individuals (Richards & King, 2014). Thus, from a legal perspective, future data privacy law should focus on creating rules for how data should flow and be used, and on the concept of confidentiality between people and groups. Right now, the only thing preventing companies from abusing personal privacy is the negative public outcry that would affect their bottom line (Brookshear & Brylow, 2014).

Healthcare Industry

In the healthcare industry, patients and healthcare providers are concerned about data breaches in which personal confidential information could be accessed; if a breach did occur, 54% of patients said they would be willing to switch from their current provider (Pallardy, 2015).

In healthcare, if data gets migrated into a public cloud rather than a healthcare-specific community cloud, data privacy enters a legal limbo. According to Brookshear and Brylow (2014), data privacy and security become an issue in cloud computing because, in a public cloud, the healthcare organization does not own the infrastructure that houses the data. HIPAA regulations provide patient privacy standards that the healthcare industry must follow. HIPAA covers a patient’s right to privacy by requiring permission for how their personally identifiable information is used in medical records, personal health, health plans, healthcare clearinghouses, and healthcare transactions (HHS, n.d.b.). The Department of Health & Human Services collects complaints that deal directly with violations of the HIPAA regulations (HHS, n.d.a.). Brown (2014) outlines the cost of each violation, which is based on the type of violation, whether it involved willfulness or willful neglect, and how many identical violations have occurred; penalty costs can range from $10-50K per incident. Industry best practices on how to avoid HIPAA violations come from Pallardy (2015):

  • De-identify personal data: names, birth dates, death dates, treatment dates, admission dates, discharge dates, telephone numbers, contact information, addresses, social security numbers, medical record numbers, photographs, finger and voice prints, etc. (a small sketch follows this list)
  • Install technical controls: anti-malware, data loss prevention, two-factor authentication, patch management, disc encryption, and logging and monitoring software
  • Install certain security controls: Security and compliance oversight committee, formal security assessment process, security incident response plan, ongoing user awareness and training, information classification system, security policies
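
As a sketch of the de-identification practice in the first bullet above, the following hypothetical Python fragment drops direct identifiers and replaces the record key with a salted hash. The field list and salt handling are illustrative assumptions, not a certified HIPAA de-identification method.

```python
# Hypothetical de-identification sketch: drop direct identifiers and
# pseudonymize the record key. Not a certified HIPAA Safe Harbor tool.
import hashlib

IDENTIFIERS = {"name", "birth_date", "phone", "address", "ssn",
               "medical_record_number"}  # subset of the fields listed above
SALT = "replace-with-secret-salt"        # assumed to be stored securely

def de_identify(record):
    pseudo_id = hashlib.sha256((SALT + record["ssn"]).encode()).hexdigest()[:16]
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFIERS}
    cleaned["pseudo_id"] = pseudo_id
    return cleaned

patient = {"name": "Jane Doe", "ssn": "000-00-0000", "birth_date": "1975-01-01",
           "phone": "555-0100", "address": "1 Main St",
           "medical_record_number": "MRN-9", "diagnosis_code": "E11.9"}
print(de_identify(patient))  # keeps diagnosis_code plus pseudo_id only
```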
