Observational protocol and qualitative documents

As a researcher, you can range from a non-participant to a full participant when observing your subjects in a study.  The observed behaviors and activities of individuals in the study are then jotted down in field notes (Creswell, 2013).  Most researchers use an observational protocol for jotting down these notes as they observe their subjects.  According to Creswell (2013), this protocol can consist of “separate descriptive notes (portraits of the participants, a reconstruction of dialogue, a description of the physical setting, accounts of particular events, or activities) [to] reflective notes (the researcher’s personal thoughts, such as ‘speculation, feelings, problems, ideas, hunches, impressions, and prejudices’), … this form might [have] demographic information about the time, place, and date of the field setting where the observation takes place.”

Observational work can be combined with in-depth interviewing, and sometimes the observational work (which can be an everyday activity) can help prepare the researcher for the interviews (Rubin, 2012).  Doing so can increase the quality of the interviews, because the interviewer knows what they have seen or read, and interviewees can provide more information on those materials.  It also allows the researcher to master the terminology before entering the interview.  Finally, Rubin (2012) also states that cultural norms become more visible through observation than through a pure in-depth interview.

According to Creswell (2013), qualitative documents are sources of information contained within documents that can help a researcher in their study; they may be public (newspapers, meeting minutes, official reports) and/or private (personal journals/diaries, letters, emails, internal manuals, written procedures, etc.).  They can also include pictures, videos, educational materials, books, and files.  Artifact analysis, by contrast, is the analysis of written texts such as charts, flow sheets, intake forms, and reports.

The main approach to analyzing such a document is to read it to gain a subject-matter understanding.  Document analysis aids in quickly grouping, sorting, and re-sorting the data obtained for a study.  This manual will not be included in the coded dataset, but it will help suggest appropriate codes/categories for the interview analysis; in other words, it hints at what might be related to what.  Finally, one way to interpret this document is through triangulation of data (data from multiple sources that are highly correlated) across the observations, the interviews, and the document itself.
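To make the grouping and re-sorting step concrete, here is a minimal Python sketch of collecting coded segments and regrouping them by code. This is my own illustration; the codes and excerpts are hypothetical, not drawn from any actual study.

```python
# Minimal sketch of grouping coded qualitative data by code.
# The codes and excerpts below are hypothetical illustrations.
from collections import defaultdict

# (code, excerpt) pairs as they might emerge while reading documents.
coded_segments = [
    ("terminology", "Staff refer to the intake form as 'the sheet'."),
    ("workflow",    "Reports are filed before the morning meeting."),
    ("terminology", "The manual calls escalation 'bumping up'."),
]

# Group the excerpts by code so they can be sorted and re-sorted quickly.
by_code = defaultdict(list)
for code, excerpt in coded_segments:
    by_code[code].append(excerpt)

for code, excerpts in sorted(by_code.items()):
    print(f"{code}: {len(excerpts)} segment(s)")
```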


Organizational Research & the Participant Observer

For organizational research, major goals include examining organizations’ formation, recruitment of talent, adaptation to constraints, types and causes, and factors for growth, change, and demise, all of which fall under ethnographic studies (Lofland et al., 2005).  Ethnographic studies lend themselves particularly well to participant observers.

A participant observer is a researcher/observer who is not just watching their subjects but also actively participates (joins in) with them.  The level of participation might impact what is observed (the more participation, the harder it is to observe and take notes), so a low-key participatory role is preferred.  Participating before the interviews allows the observer to become sensitive to important issues that would otherwise be missed.  The result is a more in-depth version of interviewing that builds on a regular conversation.  Participation may occur after watching for a while, focusing on a specific topic/question (Rubin, 2012).


Data Analysis of Qualitative Data

Each of these methods has at its core a thematic analysis of data, which is methodically and categorically linking data (phrases, sentences, paragraphs, etc.) to particular themes.  Grouping these themes by their thematic properties helps in understanding the data and in developing meaningful themes that aid in building a conclusion to the central question.

Ethnographic content analysis (Herron, 2015): Thick descriptions (collections of field notes that describe recorded learning, plus a collection of the researcher’s perceptions) help in the creation of cultural themes (themes relating behaviors to an underlying action), from which the information is interpreted.

Phenomenological data analysis (Kerns, 2014): Thematic analysis is used to draw connections among different classes of data, from which results can be derived.

Case study analysis (Hartsock, 2014): By organizing data within a specific case design and treating each distinct data set as a case, one can derive some general themes within each individual case.  Once all these general themes are identified, one should look for cross-case themes.

Grounded theory data analysis (Falciani-White, 2013): Code data by comparing incidents/data to a category (breaking down, analyzing, comparing, labeling, and categorizing data into meaningful units) and integrating categories by their properties, in order to identify a few themes and derive a theory in a systematic manner.


Interviewing strategy and qualitative sampling

As an interviewing strategy, open-ended questions leave the responses open to participant experiences and categories; they don’t close down the discussion or let the participant answer in a single word (Snow et al., 2005).  Though in the past such data was rejected because it did not involve precise measurement, data that cannot easily be measured or counted can have value because of its intrinsic complexity and its showcase of the “conditional nature of reality” (Rubin, 2012).  A whole field of text analytics aims to show that this data, considered unstructured data, is an important part of knowledge discovery and knowledge sharing.  Thus, Rubin (2012) says that open-ended questions grant participants the chance to respond to a question in any way they choose, to elaborate on a response, to raise issues that are important to them, or even to raise new issues not thought of by the interviewer.  Creswell (2013) further states that the more open the questions the better, because this allows the interviewer to listen to what people say and how they say it, which lets participants share their own views.  Usually, only a few open-ended questions are asked.  Finally, open-ended questions are used primarily in qualitative studies, but a mixture of closed-ended and open-ended questions can be asked in mixed-methods studies.

It is one thing to have the right questions as part of your interviewing strategy; it is another to have the right qualitative sampling plan.

Sampling plans include purposeful/judgmental sampling, maximum variation sampling, sampling extreme or deviant cases, theoretical sampling, snowball/chain-referral sampling, cluster sampling, single-stage sampling, and random sampling (Creswell, 2013; Rubin, 2012; Lofland et al., 2005). Here are just three of the many plans in this space:

  • Purposeful/judgmental sampling: To learn about a selected characteristic, group, or category (or their variations), you divide the population into those characteristics, groups, or categories and collect data from participants who represent those divisions (Creswell, 2013; Lofland et al., 2005).
  • Maximum variation sampling: Allows for an analysis of error and bias in a phenomenon by sampling and discovering the widest range of diversity in the phenomenon of interest (Lofland et al., 2005).
  • Snowball/chain-referral sampling: Asking your initial set of contacts with characteristic X whether they can refer you to people in their network who share the same characteristic X that you are studying (Lofland et al., 2005). This is a means to enlarge your sample size and break down barriers to recruiting future participants. Depending on the characteristic X (e.g., domestic violence, sexual assault), this technique may run into IRB issues (Rubin, 2012). Rubin (2012) states that the way to avoid IRB issues is to have current participants contact future participants on your behalf to invite them into the interview process, but this can drastically reduce the maximum number of participants you could have recruited. A toy simulation of the referral process is sketched below.
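As a rough illustration of how chain referral grows a sample, here is a minimal Python sketch. The referral network, names, and sample cap are entirely hypothetical, and a real study would recruit only with consent and IRB approval.

```python
# Toy simulation of snowball/chain-referral sampling as a breadth-first
# walk over a referral network. All names and referrals are hypothetical.
from collections import deque

# Each recruited person maps to the contacts they are willing to refer.
referrals = {
    "seed1": ["a", "b"],
    "seed2": ["c"],
    "a": ["d", "e"],
    "c": ["e", "f"],
}

def snowball_sample(seeds, max_size):
    """Grow a sample from initial contacts via chained referrals."""
    sample = list(seeds)
    queue = deque(seeds)
    while queue and len(sample) < max_size:
        person = queue.popleft()
        for contact in referrals.get(person, []):
            if contact not in sample and len(sample) < max_size:
                sample.append(contact)
                queue.append(contact)
    return sample

print(snowball_sample(["seed1", "seed2"], max_size=6))
# e.g. ['seed1', 'seed2', 'a', 'b', 'c', 'd']
```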


Some Qualitative Methodologies

This blog post will differentiate among the following qualitative designs:

    • Phenomenology (e.g. Giorgi, Moustakas, etc.)
    • Grounded theory (e.g. Glaser, Strauss, etc.)
    • Ethnography (e.g. White, Benedict, Mead, etc.)
    • Case Studies (e.g. Yin, etc.)

The implicit goal of qualitative data analysis is truth, objectivity, trustworthiness, and accuracy of data (Glaser, 2004). All of these methods have the observer exercising as little bias as possible in their thoughts to further their analysis or the development of their core theory.  Researchers here are observers taking notes to support their study.

Phenomenology (Giorgi, 2008): It is the study of experiential phenomena through encountering an instance of a phenomenon, describing it, and using free imaginative variation to determine its essence, thus making the phenomenon more generalizable.  It should be noted that the experience should be approached without preconceived biases (as a neutral party), and one way of doing so is to list out all of your biases related to the phenomenon.  This removal of biases helps limit the claims to the way we experienced the phenomenon.

Grounded Theory (Glaser, 2004): It is the study of a set of grounded concepts, which form a core theory/category that in turn forms a hypothesis.  Data is collected, but as it is analyzed “line by line”, the researcher asks: “What is this data a study of?”, “What category does this incident indicate?”, “What is actually happening in the data?”, “What is the main concern being faced by the participants?”, and “What accounts for the continual resolving of this concern?”  These questions are asked with a minimum of preconceptions.  The literature is treated as another source of data to be integrated into the analysis and the core theory/category; however, it is not consulted before a core theory/category emerges from the data.

Ethnography (Atkinson & Hammersley, 1994; Mead, 1933): It is the study of the customs of people and cultures, usually based on a small number of cases (perhaps just one), through analyzing unstructured data (not previously coded) with no aim of testing a hypothesis.  Analysis of the data may involve some quantification and statistics, but it centers on the explicit interpretation of the data.

Thus, grounded theory seeks to find meaning in data and arrive at a core concept/category/theory/variable.  Ethnography seeks meaning in the customs of people, which can be captured in a single case.  Phenomenology seeks to study a phenomenon that has occurred while keeping in mind all the possible variables that can influence it.  A given topic can therefore be explored using each of these methods; they look at the same problem with different preconceptions (or lack thereof), each adding to the further understanding of that topic.  These are all methods of collecting and analyzing data, whereas case studies are a research strategy.

A problem needs to arise in order for research to occur, and a gap in knowledge can be seen as a problem.  Case studies are thus a strategy that can be used to shine some light on that gap, and using any of the aforementioned techniques, the researcher can try to fill in that gap of knowledge.  If you are aiming for grounded theory, you may have a ton of case studies to look through in search of common themes, whereas ethnography may be concerned with one or two cases and what happened in them.  Phenomenology can use as many case studies as necessary to explore the particular phenomenon in question.

Case Study Research (Yin, 1981): It can contain both qualitative and quantitative data (e.g. fieldwork, records, reports, verbal reports, observations, memos, etc.), and it is independent of any particular data collection method.  Case studies concern themselves with a real-life phenomenon where the boundaries between phenomenon and context are not clear, and they aim to be exploratory, descriptive, and/or explanatory.  As a strategy, it is comparable to experiments, simulations, and histories.

Since case studies can be “an accurate rendition of the facts of the case” (Yin, 1981), most of that data cannot be described quantitatively in a quick manner.  Sometimes descriptions and qualitative data paint the picture of what is being studied much more clearly than numbers alone.  Compare “over a million people saw the ball drop in Times Square in 2015” with “14 blocks of thousands of people, adorned in foam Planet Fitness hats and waving purple noodle balloons, eagerly cheered as the ball dropped in Times Square in 2015.”  This is why most case study research involves the collection of qualitative data.

References:

  • Atkinson, P., & Hammersley, M. (1994). Ethnography and participant observation. Handbook of Qualitative Research, 1(23), 248-261.
  • Glaser, B. G., & Holton, J. (2004). Remodeling grounded theory. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research, 5(2).
  • Giorgi, A. (2008). Difficulties encountered in the application of the phenomenological method in the social sciences. Indo-Pacific Journal of Phenomenology, 8(1).
  • Mead, M. (1933). More comprehensive field methods. American Anthropologist, 35(1), 1-15.
  • Yin, R. K. (1981). The case study crisis: Some answers. Administrative Science Quarterly, 26(1), 58-65.

Decluttering & Recycling

Last year I mentioned that I am a minimalist, though I do not subscribe to the 100-item challenge.  However, there is value in disposing of items that no longer provide any value in your life.  Rather than trashing them, why not recycle them for cash?  Here are a few places that accept gently used (and sometimes roughly used) items, in an effort to create a more sustainable economy and planet.  For really old devices, they extract the precious metals to be used in new devices.

Note: Shop around all these sites and programs to get the most money for your product. One site or store may not take an item, but another might, so keep shopping around. Also, if you are getting store credit, make sure it’s at a store you will actually use.

Note: This is not a comprehensive list.  Comment down below if you know of any other places or apps that have worked really well for you.  Some apps work better in the city than in the suburbs.

  1. Amazon.com Trade-In: They will give you an Amazon gift card for Kindle e-readers, tablets, streaming media players, Bluetooth speakers, Amazon Echos, textbooks, phones, and video games.
  2. Best Buy: Will buy your iPhones, iPads, gaming systems, laptops, Samsung mobile devices, Microsoft Surface devices, video games, and smartwatches for Best Buy gift cards.
  3. GameStop (one of my favorites): Will take your video games, gaming systems, most obscure phones, tablets, iPods, etc., and will give you cash back.
  4. Staples: Smartphones, tablets, and laptops can be sold here for store credit.
  5. Target: Phones, tablets, gaming systems, smartwatches, and voice speakers for a Target gift card.
  6. Walmart: Phones, tablets, gaming systems, and voice speakers can be cashed in for Walmart gift cards.
  7. Letgo app: A great way to sell almost anything.  Just make sure you meet up in a public place to make the exchange, like a mall or in front of a police station. Your safety is more important than any item you were willing to part with in the first place.
  8. Facebook.com Marketplace: Another great way to sell almost anything. The same warning is attached here as in Letgo.
  9. Decluttr.com: They pay you back via check, PayPal, or direct deposit.
  10. Gazelle: They will reward you with PayPal, check or Amazon gift cards.
  11. Raise: This is for those gift cards you know you won’t use.  You can sell them for up to 85% of their value, via PayPal, direct deposit, or check.
  12. SecondSpin: This is for those CDs, DVDs, and Blu-rays, and you can earn money via store credit, check, or PayPal.
  13. Patagonia: For outdoor gear and it is mostly for store credit.
  14. thredUp: This is for your clothes. Once they are sold via the app you can receive cash or credit.
  15. Plato’s Closet: Shoes, clothes, and bags can be turned in for cash, though they mostly take current, trendy items.
  16. Half Price Books: Books, textbooks, audiobooks, music, CDs, LPs, Movies, E-readers, phones, tablets, video games, and gaming systems for cash.
  17. Powells.com: For your books and you can get paid via PayPal or credit in your account.

My advice: I try to sell to a retailer first, because they are always going to be there, it’s their job, it’s safer, you can do it on your own schedule, and you will get what they promise you.  There is no hassle of no-shows, no fear of meeting a stranger, no being bargained down further when the buyer conveniently forgets to bring the full amount, and no one arriving way late.

Another piece of advice is to hold on to at least one old phone (usually the most recent one), for two reasons: (1) if your current phone breaks, you can use it as an interim phone; and (2) international travel, if the phone is unlocked.

Subsequent advice is to make sure you turn off your electronic devices and clear out all your old data from them.  The last thing you want is to have your data compromised while doing something positive for the earth.

Also, look for consignment shops and local bookstores, and ask around; you never know whom you may be able to sell stuff to.  At a consignment shop, you deposit your items there, and if they sell, you get a share of the earnings. When all else fails, recycle what you cannot sell by donating it to Goodwill, Habitat for Humanity, etc.

Financial Hacks

In the last post, I talked about cyber hacking, but this month let’s talk about when Equifax credit report data was hacked in 2017, when names, Social Security numbers, birth dates, driver’s licenses, and addresses were taken from millions of people (Smith, 2017; Oliver, 2017).  Smith (2017) knew of the breach, which started in late May and ended in early June 2017, but did not advise the public until September 2017.  In that gap between consumers being affected and the public disclosure, multiple people’s lives could have been ruined.

This breach means that when the data is sold on the black market or dark web, thieves can open lines of credit in your name for the rest of your life.  The only way to combat this is to freeze your credit with all three credit bureaus (Equifax, Experian, and TransUnion).

My journey in doing so meant going to each of these bureaus’ sites and setting the freeze up.  When I wanted to pull my credit for housing, a new credit card, etc., I would unfreeze the account for a few days and then refreeze it so that my credit could be checked.  Unfortunately, this has become an inconvenience, as it can mean a delay in many major life situations, like getting a new job.  However, it is a minor inconvenience compared to finding out you were hacked, proving your real identity, and recovering (if you can) your life.

Freezing your credit report is one way to protect yourself.  Another is to check your credit report.  Every year you get one free credit report from each of the three credit reporting agencies.  Things that appear in one report may not appear in another, so it is key to routinely check all three.  A link to do so can be found here:

or by phone:

  • 1-877-322-8228


Storytime:  The Hacker!

Systems and companies get hacked.  The biggest breach in the tech sector was Yahoo’s, back in August 2013, when 3 billion accounts were targeted, and again in 2014, when 500 million accounts were targeted in an unrelated attack (Larson, 2017). As reported, the vital information compromised in the Yahoo hacks was sign-in information, most importantly passwords.

Now fast forward to December 2019, when I got an email saying that there had been an attempt to get into one of my personal social media accounts.  I am not saying the Yahoo incident is related, since the attempt could have come from any of the other sites I use.  However, it illustrates a key aspect of living a digital life: are we really safe from hackers?  Thankfully they didn’t succeed in accessing my account, but that won’t stop them from trying my accounts, or yours, again in the future.

Marc Goodman (n.d.a.) explains that there is an asymmetry in cyber threats: the white hats (good guys) have to defend every possible corner to prevent a hack, whereas the hackers only have to find one weakness to break into a system.

Goodman (n.d.a., n.d.b.), on the Art of Charm podcast and the Lewis Howes podcast, proposed the acronym UPDATE as one of many ways to protect yourself:

  • U – update frequently. Keep your operating system and applications up to date.
  • P – passwords. Use a different password for every site and get a reliable password manager (e.g., LastPass, 1Password). Don’t use your Facebook account to log in to other sites.
  • D – downloads. Watch your downloads and be cautious about what you install. Download from authorized sources only.
  • A – administrator. Don’t run your computer using the administrator account (unless necessary).
  • T – turn off your computer. If it isn’t fully turned off, it’s still accessible, so turn it off when not in use, or at least turn off the Wi-Fi.
  • E – encrypt. Encryption scrambles your data so it cannot be read without the password and the proper computational keys. There are two types: you can encrypt the data on your computer (at rest) and encrypt the data as it is sent out using a VPN (in transit). A minimal sketch of the first type follows this list.
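To illustrate encrypting data at rest, here is a minimal sketch using Python's third-party cryptography package. This is my own toy example, not a tool recommended in the podcasts; in practice, operating-system features such as FileVault or BitLocker handle full-disk encryption.

```python
# Minimal sketch of encrypting data at rest, assuming the third-party
# "cryptography" package (pip install cryptography). Illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this key safe and separate from the data
cipher = Fernet(key)

secret = b"account numbers and tax documents"
token = cipher.encrypt(secret)   # scrambled bytes, safe to store on disk
restored = cipher.decrypt(token) # readable again only with the same key

assert restored == secret
print(token[:20], b"...")
```

Encrypting data in transit is usually delegated to a VPN or to TLS rather than hand-rolled.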


Plagiarism: A word

The following article, found on https://www.econtentpro.com/blog/, talks about abuses that can lead to various forms of plagiarism.  eContent Pro (2019) is a really great article showcasing that there is more than one way to plagiarize.  However, it did not provide examples to illustrate each case, nor did it explain the nuance of case 2 all that well (eContent Pro, 2019):

  1. Self-plagiarism
  2. Overreliance on Multiple Sources
  3. Patchwriting
  4. Overusing the Same Source

The following is my attempt to do just that.

Example of Self-plagiarism

If I were to use the following two paragraphs verbatim in a new paper or as a book chapter, then even though these are my words from Hernandez (2017a), it would be considered self-plagiarism.  It is good to recycle your works cited page; it is not good to recycle your words the way you would recycle plastic bottles.

Chapter 1: An Introduction to Data Analytics

Data analytics has existed before 1854. Snow (1854) had a theory on how cholera outbreaks occur, and he was able to use that theory to remove the pump handle off of a water pump, where that water pump had been contaminated in the summer of 1854. He had set out to prove that his hypothesis on how cholera epidemics originated from was correct, so he then drew his famous spot maps for the Board of Guardians of St. James’ parish in December 1854. These maps were showed in his eventual 2nd edition of his book “On the Mode of Communication of Cholera” (Brody, Rip, Vinten-Johansen, Paneth, & Rachman, 2000; Snow, 1855). As Brody et al. (2000) stated, this case was one of the first famous examples of the theory being proven by data, but the earlier usage of spot maps has existed.

However, the use of just geospatial data analytics can be quite limiting in finding a conclusive result if there is no underlying theory as to why the data is being recorded (Brody et al., 2000). Through the addition of subject matter knowledge and subject matter relationships before data analytics, context can be added to the data for which it can help yield better results (Garcia, Ferraz, & Vivacqua, 2009). In the case of Snow’s analysis, it could have been argued by anyone that the atmosphere in that region of London was causing the outbreak. However, Snow’s original hypothesis was about the transmission of cholera through water distribution systems, the data then helped support his hypothesis (Brody et al., 2000; Snow 1854). Thus, the suboptimal results generated from the outdated Edisonian-esque, which is a test-and-fail methodology, can prove to be very costly regarding Research and Development, compared to the results and insights gained from text mining and manipulation techniques (Chonde & Kumara, 2014).

Example of Overreliance on Multiple Sources

The following was taken from my dissertation (Hernandez, 2017b).  There is definitely an overreliance on sources here, as in any dissertation, master’s thesis, or interdisciplinary work. However, my voice still shines through, and that is where eContent Pro (2019) draws the line: is the author’s voice still present?

This excerpt shows how I gathered multiple methodologies from multiple sources and combined them all to form a best practice for data preprocessing. Another word for this process is synthesizing. No one source had all the components, and listing which source contained which parts of the best-practice methodology was the purpose of these three paragraphs.  If my voice weren’t present in these paragraphs, then it would be considered plagiarism.

Collecting the raw and unaltered real world data is the first step of any data or text mining research study (Coralles et al., 2015; Gera & Goel, 2015; He et al., 2013; Hoonlor, 2011; Nassirtoussi et al., 2014). Next, preprocessing raw text data is needed, because raw text data files are unsuitable for predictive data analytics software tools like WEKA (Hoonlor, 2011; Miranda, n.d.). Barak and Modarres (2015), Miranda (n.d.), and Nassirtoussi et al. (2014) concluded that in both data and text mining, data preprocessing has the most significant impact on the research results.

Raw data can have formats that change across time, therefore converting the data into one common format for analysis is necessary for data analytics (Mandrai & Barkar, 2014). Also, the removal of HTML tags from web-based data sources allows for the removal of extraneous data points that can provide unpredictable results (Netzer et al., 2012). Finally, deciding on a strategy about how to deal with missing or defective data fields can aid in mitigating noise from the results (Barak & Modarres, 2015; Fayyad et al., 1996; Mandrai & Barskar, 2014; Netzer, 2012). Furthermore, to gain the most insights surrounding a research problem, data from multiple data sources should be collected and integrated (Corrales et al., 2015).

Predictive data analytics tools can analyze unstructured text data after the preprocessing step. Preprocessing involves tokenization, stop word removal, and word-normalization (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Pletscher-Frankild et al., 2015; Thanh & Meesad, 2014). Tokenization is when a body of text is reduced to a set of units, phrases, or groups of keywords for analysis (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Pletscher-Frankild et al., 2015; Thanh & Meesad, 2014). For example, the term eyewall replacement would be considered one token, rather than two words or two different tokens. Stopword removal is the removal of the words that add no value to the predictive analytics algorithm from the body of text; these words are prepositions, articles, and conjunctions (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Thanh & Meesad, 2014). Miranda (n.d.) stated that sometimes stop-word removals could also be context-dependent because some contextual words can yield little to no value in the analysis. For instance, meteorological forecast models in this study are considered context-dependent stopwords. Lastly, word-normalization transforms the letters into a body of text to one single case type and removes the conjugations of words (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Thanh & Meesad, 2014). For example, stemming the following words cooler, coolest, and colder becomes cool-, which heightens the fidelity of the results due to the reduction of dimensionalities.
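To make the three preprocessing steps described in this excerpt concrete, here is a minimal sketch using NLTK. The library choice, sample sentence, and outputs are my own illustrative assumptions; the excerpt does not prescribe a tool.

```python
# Minimal sketch of tokenization, stop-word removal, and word-normalization,
# assuming NLTK (pip install nltk). Sample text and choices are illustrative.
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

for pkg in ("punkt", "stopwords", "wordnet"):
    nltk.download(pkg, quiet=True)

text = "The coolest air was behind the eyewall replacement."

# 1. Tokenization (with lower-casing, part of word-normalization).
# Note: keeping "eyewall replacement" as a single token would require
# phrase detection, which this sketch omits.
tokens = word_tokenize(text.lower())

# 2. Stop-word removal (articles, prepositions, conjunctions, etc.).
stop_words = set(stopwords.words("english"))
content = [t for t in tokens if t.isalpha() and t not in stop_words]

# 3. Word-normalization: reduce inflected forms, e.g. "coolest" -> "cool".
lemmatizer = WordNetLemmatizer()
normalized = [lemmatizer.lemmatize(t, pos="a") for t in content]

print(normalized)  # e.g. ['cool', 'air', 'behind', 'eyewall', 'replacement']
```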

Example of Patchwriting and Overusing the Same Source

This is a self-created example for this post, which happens to be a curation post for Service Operations KPIs and CSFs. The words below have been lifted from various sections of:

Each sample Critical Success Factors (CSFs) is followed by a small number of typical Key Performance Indicators (KPIs) that support the CSF. These KPIs should not be adopted without careful consideration. Each organization should develop KPIs that are appropriate for its level of maturity, its CSFs and its particular circumstances. Achievement against KPIs should be monitored and used to identify opportunities for improvement, which should be logged in the CSI register for evaluation and possible implementation.

Service Operations: Ensures that services operate within agreed parameters and, when service is interrupted, restores it as soon as possible.

Request Fulfillment Management: Request Fulfillment is responsible for

  • Managing the initial contact between users and the Service Desk.
  • Managing the lifecycle of service requests from initial request through delivery of the expected results.
  • Managing the channels by which users can request and receive services via service requests.
  • Managing the process by which approvals and entitlements are defined and managed for identified service requests (future).
  • Managing the supply chain for service requests and assisting service providers in ensuring that the end-to-end delivery is managed according to plan.
  • Working with the Service Catalog and Service Portfolio managers to ensure that all standard service requests are appropriately defined and managed in the service catalog (future).


  • CSF: Requests must be fulfilled in an efficient and timely manner that is aligned to agreed service level targets for each type of request
      o KPI: The mean elapsed time for handling each type of service request
      o KPI: The number and percentage of service requests completed within agreed target times
      o KPI: Breakdown of service requests at each stage (e.g. logged, work in progress, closed, etc.)
      o KPI: Percentage of service requests closed by the service desk without reference to other levels of support (often referred to as ‘first point of contact’)
      o KPI: Number and percentage of service requests resolved remotely or through automation, without the need for a visit
      o KPI: Total number of requests (as a control measure)
      o KPI: The average cost per type of service request
  • CSF: Only authorized requests should be fulfilled
      o KPI: Percentage of service requests fulfilled that were appropriately authorized
      o KPI: Number of incidents related to security threats from request fulfilment activities
  • CSF: User satisfaction must be maintained
      o KPI: Level of user satisfaction with the handling of service requests (as measured in some form of satisfaction survey)
      o KPI: Total number of incidents related to request fulfilment activities
      o KPI: The size of the current backlog of outstanding service requests
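To ground these definitions, here is a toy sketch of computing two of the request fulfilment KPIs above from a hypothetical ticket export, assuming pandas. The column names and figures are invented for illustration.

```python
# Toy sketch (not from ITIL) of two request fulfilment KPIs, assuming pandas.
import pandas as pd

# Hypothetical ticket export: one row per completed service request.
tickets = pd.DataFrame({
    "type":            ["password_reset", "password_reset", "new_laptop", "new_laptop"],
    "hours_to_fulfil": [2.0, 6.0, 48.0, 80.0],   # elapsed handling time
    "target_hours":    [8.0, 8.0, 72.0, 72.0],   # agreed service level target
})

# KPI: the mean elapsed time for handling each type of service request.
print(tickets.groupby("type")["hours_to_fulfil"].mean())

# KPI: number and percentage of requests completed within agreed target times.
on_time = tickets["hours_to_fulfil"] <= tickets["target_hours"]
print(f"{on_time.sum()} of {len(tickets)} on time ({100 * on_time.mean():.0f}%)")
```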

Incident Management: Incident Management is responsible for the resolution of any incident, reported by a tool or user, which is not part of normal operations and causes or may cause a disruption to or decrease in the quality of a service.

  • CSF: Resolve incidents as quickly as possible, minimizing impacts to the business
      o KPI: Mean elapsed time to achieve incident resolution or circumvention, broken down by impact code
      o KPI: Breakdown of incidents at each stage (e.g. logged, work in progress, closed, etc.)
      o KPI: Percentage of incidents closed by the service desk without reference to other levels of support (often referred to as ‘first point of contact’)
      o KPI: Number and percentage of incidents resolved remotely, without the need for a visit
      o KPI: Number of incidents resolved without impact to the business (e.g. the incident was raised by event management and resolved before it could impact the business)
  • CSF: Maintain quality of IT services
      o KPI: Total number of incidents (as a control measure)
      o KPI: Size of the current incident backlog for each IT service
      o KPI: Number and percentage of major incidents for each IT service
  • CSF: Maintain user satisfaction with IT services
      o KPI: Average user/customer survey score (total and by question category)
      o KPI: Percentage of satisfaction surveys answered versus total number of satisfaction surveys sent
  • CSF: Increase visibility and communication of incidents to business and IT support staff
      o KPI: Average number of service desk calls or other contacts from business users for incidents already reported
      o KPI: Number of business user complaints or issues about the content and quality of incident communications
  • CSF: Align incident management activities and priorities with those of the business
      o KPI: Percentage of incidents handled within agreed response time (incident response-time targets may be specified in SLAs, for example, by impact and urgency codes)
      o KPI: Average cost per incident
  • CSF: Ensure that standardized methods and procedures are used for efficient and prompt response, analysis, documentation, ongoing management and reporting of incidents to maintain business confidence in IT capabilities
      o KPI: Number and percentage of incidents incorrectly assigned
      o KPI: Number and percentage of incidents incorrectly categorized
      o KPI: Number and percentage of incidents processed per service desk agent
      o KPI: Number and percentage of incidents related to changes and releases

Problem Management: Problem Management is responsible for the activities required to

  • Diagnose the root cause of incidents.
  • Determine the resolution to related problems.
  • Perform trend analysis to identify and resolve problems before they impact the live environment.
  • Ensure that resolutions are implemented through the appropriate control procedures, especially change management and release management.

Problem Management maintains information about problems and appropriate workarounds and resolutions to help the organization reduce the number and impact of incidents over time. To do this, Problem Management has a strong interface with Knowledge Management and uses tools such as the Known Error Database.

  • CSF: Minimize the impact to the business of incidents that cannot be prevented
      o KPI: The number of known errors added to the KEDB
      o KPI: The percentage accuracy of the KEDB (from audits of the database)
      o KPI: Percentage of incidents closed by the service desk without reference to other levels of support (often referred to as ‘first point of contact’)
      o KPI: Average incident resolution time for those incidents linked to problem records
  • CSF: Maintain quality of IT services through elimination of recurring incidents
      o KPI: Total number of problems (as a control measure)
      o KPI: Size of the current problem backlog for each IT service
      o KPI: Number of repeat incidents for each IT service
  • CSF: Provide overall quality and professionalism of problem handling activities to maintain business confidence in IT capabilities
      o KPI: The number of major problems (opened, closed, and backlog)
      o KPI: The percentage of major problem reviews successfully performed
      o KPI: The percentage of major problem reviews completed successfully and on time
      o KPI: Number and percentage of problems incorrectly assigned
      o KPI: Number and percentage of problems incorrectly categorized
      o KPI: The backlog of outstanding problems and the trend (static, reducing, or increasing?)
      o KPI: Number and percentage of problems that exceeded their target resolution times
      o KPI: Percentage of problems resolved within SLA targets (and the percentage that are not!)
      o KPI: Average cost per problem

Event Management: This process includes planning, design, and operations activities. Event Management is responsible for any aspect of service management that needs to be monitored or controlled and where the monitoring and controls can be automated. This includes:

  • Configuration items.
  • Environmental controls.
  • Software licensing.
  • Security.
  • Normal operational activities.

Event Management includes defining and maintaining Event Management solutions and managing events.

  • CSF: Detecting all changes of state that have significance for the management of CIs and IT services
      o KPI: Number and ratio of events compared with the number of incidents
      o KPI: Number and percentage of each type of event per platform or application versus total number of platforms and applications underpinning live IT services (looking to identify IT services that may be at risk for lack of capability to detect their events)
  • CSF: Ensuring all events are communicated to the appropriate functions that need to be informed or take further control actions
      o KPI: Number and percentage of events that required human intervention and whether this was performed
      o KPI: Number of incidents that occurred and percentage of these that were triggered without a corresponding event
  • CSF: Providing the trigger, or entry point, for the execution of many service operation processes and operations management activities
      o KPI: Number and percentage of events that required human intervention and whether this was performed
  • CSF: Providing the means to compare actual operating performance and behaviour against design standards and SLAs
      o KPI: Number and percentage of incidents that were resolved without impact to the business (indicates the overall effectiveness of the event management process and underpinning solutions)
      o KPI: Number and percentage of events that resulted in incidents or changes
      o KPI: Number and percentage of events caused by existing problems or known errors (this may result in a change to the priority of work on that problem or known error)
      o KPI: Number and percentage of events indicating performance issues (for example, growth in the number of times an application exceeded its transaction thresholds over the past six months)
      o KPI: Number and percentage of events indicating potential availability issues (e.g. failovers to alternative devices, or excessive workload swapping)
  • CSF: Providing a basis for service assurance, reporting and service improvement
      o KPI: Number and percentage of repeated or duplicated events (this will help in the tuning of the correlation engine to eliminate unnecessary event generation and can also be used to assist in the design of better event generation functionality in new services)
      o KPI: Number of events/alerts generated without actual degradation of service/functionality (false positives – an indication of the accuracy of the instrumentation parameters, important for CSI)

Access Management: Access Management aims to grant authorized users the right to use a service while preventing access by non-authorized users. The Access Management processes essentially execute policies defined in Information Security Management. Access Management is sometimes also referred to as “Rights Management” or “Identity Management”.

  • CSF: Ensuring that the confidentiality, integrity and availability of services are protected in accordance with the information security policy
      o KPI: Percentage of incidents that involved inappropriate security access or attempts at access to services
      o KPI: Number of audit findings that discovered incorrect access settings for users that have changed roles or left the company
      o KPI: Number of incidents requiring a reset of access rights
      o KPI: Number of incidents caused by incorrect access settings
  • CSF: Provide appropriate access to services on a timely basis that meets business needs
      o KPI: Percentage of requests for access (service request, RFC, etc.) that were provided within established SLAs and OLAs
  • CSF: Provide timely communications about improper access or abuse of services on a timely basis
      o KPI: Average duration of access-related incidents (from time of discovery to escalation)


Communication with English as a Second Language


Key takeaway

  • Paraphrased quote: No one will know what you wanted to say but didn’t, so you don’t need to worry if you forgot something. People will only remember how you made them feel.