Some Qualitative Methodologies

This blog post will differentiate among the following qualitative designs:

    • Phenomenology (e.g. Giorgi, Moustakas)
    • Grounded theory (e.g. Glaser, Strauss)
    • Ethnography (e.g. White, Benedict, Mead)
    • Case studies (e.g. Yin)

The implicit goals of qualitative data analysis are truth, objectivity, trustworthiness, and accuracy of data (Glaser & Holton, 2004). In all of these methods, the researcher is an observer taking notes, exercising as little bias as possible in order to further the analysis or the development of a core theory.

Phenomenology (Giorgi, 2008): the study of experiential phenomena through encountering an instance of a phenomenon, describing it, and using free imaginative variation to determine its essence, thus making the phenomenon more generalizable. It should be noted that the experience should be approached without preconceived biases (as a neutral party), and one way of doing so is to list out all of your biases related to the phenomenon. This removal of biases helps limit your claims to the way you actually experienced the phenomenon.

Grounded Theory (Glaser & Holton, 2004): the study of a set of grounded concepts, which form a core theory/category that yields a hypothesis. As data is collected and analyzed “line by line,” the researcher asks: “What is this data a study of?”, “What category does this incident indicate?”, “What is actually happening in the data?”, “What is the main concern being faced by the participants?”, and “What accounts for the continual resolving of this concern?” These questions are asked with a minimum of preconception. The literature is treated as another source of data to be integrated into the analysis and the core theory/category; however, it is not consulted before a core theory/category emerges from the data.

Ethnography (Atkinson & Hammersley, 1994; Mead, 1933): the study of the customs of people and cultures, usually based on a small number of cases (sometimes just one), through the analysis of unstructured data (not previously coded) with no aim of testing a hypothesis. Analysis of the data may involve quantification and statistics built on explicit interpretation of the data.

Thus, grounded theory seeks to find meaning in data and to identify a core concept/category/theory/variable. Ethnography seeks meaning in the customs of people, which can be found even in a single case. Phenomenology seeks to study a phenomenon that has occurred while keeping in mind all the possible variables that can influence it. A given topic can therefore be explored using each of these methods; they look at the same problem with different preconceptions (or lack thereof), each adding to the further understanding of that topic. These are all data collection methods, whereas case studies are a research strategy.

A problem needs to arise in order for research to occur, and a gap in knowledge can be seen as a problem. Case studies are a strategy that can shine some light on that gap, and using any of the aforementioned techniques, the researcher can try to fill that gap in knowledge. If you are aiming for grounded theory, you may have a ton of case studies to look through in search of common themes, whereas ethnography may be concerned with one or two cases and what happened in them. Phenomenology can use as many case studies as necessary to explore the particular phenomenon in question.

Case Study Research (Yin, 1981): can contain both qualitative and quantitative data (e.g. fieldwork, records, reports, verbal reports, observations, memos, etc.) and is independent of any particular data collection method. Case studies examine a real-life phenomenon, especially when the boundaries between phenomenon and context are not clearly evident, and they can be exploratory, descriptive, and/or explanatory. As a strategy, they sit alongside experiments, simulations, and histories.

Since case studies aim to be “an accurate rendition of the facts of the case” (Yin, 1981), most of that data cannot be described quantitatively in a quick manner. Sometimes descriptions and qualitative data paint the picture of what is being studied much more clearly than numbers alone. Compare: over a million people saw the ball drop in Times Square in 2015, versus 14 blocks of thousands of people, adorned in foam Planet Fitness hats and waving purple noodle balloons, eagerly cheering as the ball dropped in Times Square in 2015. This is why most case study research involves the collection of qualitative data.

References:

  • Atkinson, P., & Hammersley, M. (1994). Ethnography and participant observation. Handbook of qualitative research, 1(23), 248-261.
  • Glaser, B. G., & Holton, J. (2004, May). Remodeling grounded theory. In Forum Qualitative Sozialforschung/Forum: Qualitative Social Research (Vol. 5, No. 2).
  • Giorgi, A. (2008). Difficulties encountered in the application of the phenomenological method in the social sciences. Indo-Pacific Journal of Phenomenology, 8(1).
  • Mead, M. (1933). More comprehensive field methods. American Anthropologist, 35(1), 1-15.
  • Yin, R. K. (1981). The case study crisis: Some answers. Administrative Science Quarterly, 26(1), 58-65.

Decluttering & Recycling

Last year I mentioned that I am a minimalist, though I do not subscribe to the 100-item challenge. Still, there is value in disposing of items that no longer provide any value in your life. Rather than trashing them, why not recycle them for cash? Here are a few places that accept gently used (and sometimes roughly used) items, in an effort to create a more sustainable economy and planet. For really old devices, they extract the precious metals for use in new devices.

Note: Shop around all of these sites and programs to get the most money for your product. One site or store may not take an item, but another might, so keep looking. Also, if you are getting store credit, make sure it’s at a store you will actually use.

Note: This is not a comprehensive list. Comment down below if you know of any other places or apps that have worked really well for you. Some apps work best in the city versus the suburbs.

  1. Amazon.com Trade-In: They will give you an Amazon gift card for Kindle e-readers, tablets, streaming media players, Bluetooth speakers, Amazon Echo devices, textbooks, phones, and video games.
  2. Best Buy: Will buy your iPhones, iPads, gaming systems, laptops, Samsung mobile devices, Microsoft Surface devices, video games, and smartwatches for Best Buy gift cards.
  3. GameStop (one of my favorites): Will take your video games, gaming systems, even the most obscure phones, tablets, iPods, etc. and give you cash back.
  4. Staples: Smartphones, tablets, and laptops can be sold here for store credit.
  5. Target: Phones, tablets, gaming systems, smartwatches, and voice speakers for a Target gift card.
  6. Walmart: Phones, tablets, gaming systems, and voice speakers can be cashed in for Walmart gift cards.
  7. Letgo app: A great way to sell almost anything.  Just make sure you meet up in a public place to make the exchange, like a mall or in front of a police station. Your safety is more important than any piece you were willing to part with in the first place.
  8. Facebook.com Marketplace: Another great way to sell almost anything. The same warning is attached here as in Letgo.
  9. Decluttr.com: They pay you back via check, PayPal, or direct deposit.
  10. Gazelle: They will reward you with PayPal, check or Amazon gift cards.
  11. Raise: This is for those gift cards you know you won’t use. You can sell them for up to 85% of their value, via PayPal, direct deposit, or check.
  12. SecondSpin: This is for those CDs, DVDs, and Blu-rays, and you can earn money via store credit, check, or PayPal.
  13. Patagonia: For outdoor gear; it is mostly for store credit.
  14. thredUp: This is for your clothes. Once they are sold via the app you can receive cash or credit.
  15. Plato’s Closet: Shoes, clothes, and bags can be turned in for cash, though they mostly take current, trendy items.
  16. Half Price Books: Books, textbooks, audiobooks, music, CDs, LPs, movies, e-readers, phones, tablets, video games, and gaming systems for cash.
  17. Powells.com: For your books; you can get paid via PayPal or account credit.

My advice: try to sell to a retailer first, because they will always be there, it’s their job, it’s safer, you can do it on your own schedule, and you will get what they promise you. There is no hassle of no-shows, no fear of meeting a stranger, no being bargained down further when the buyer conveniently forgets to bring the full amount, and no one arriving way late.

Another piece of advice is to hold on to at least one old phone (usually your most recent one), for two reasons: (1) if your current phone breaks, you can use it as an interim phone; (2) if the phone is unlocked, you can use it for international travel.

A further piece of advice: make sure you sign out of accounts and wipe all your old data from electronic devices before parting with them. The last thing you want is to have your data compromised while doing something positive for the earth.

Also, look for consignment shops and local bookstores, and ask around; you never know who you may be able to sell things to. At a consignment shop, you deposit your items, and if they sell, you get a portion of the earnings. When all else fails, recycle what you cannot sell by donating it to Goodwill, Habitat for Humanity, etc.

Financial Hacks

In the last post I talked about cyber hacking, but this month let’s talk about 2017, when Equifax credit report data was hacked and the names, Social Security numbers, birth dates, driver’s license numbers, and addresses of millions of people were taken (Smith, 2017; Oliver, 2017). Smith (2017) knew of the breach, which started in late May and ended in early June 2017, but did not advise the public until September 2017. In that gap between consumers being hacked and the public release, multiple people’s lives could have been ruined.

This breach means that once the data is sold on the black market or dark web, thieves can open lines of credit in your name for the rest of your life. The only way to combat this is to freeze your credit with all three credit bureaus:

My journey in doing so meant going to each of these sites and setting up a freeze. When I wanted my credit pulled for housing, a new credit card, etc., I would have to unfreeze the account for a few days and then refreeze it so that my credit could be checked. Unfortunately, this has become an inconvenience, as it can mean a delay in major life situations, like getting a new job. However, it is a minor inconvenience compared to finding out you were hacked, proving your real identity, and recovering your life, if you can.

The advice to freeze your credit report is one way to protect yourself. Another is to check your credit report. Every year you get one free credit report from each of the three credit reporting agencies. Items that appear on one report may not appear on another, so it is key to routinely check all three. A link to do so can be found here:

or by phone:

  • 1-877-322-8228

Resources:

Storytime:  The Hacker!

Systems and companies get hacked. The biggest breach in the tech sector was Yahoo: in August 2013, 3 billion accounts were targeted, and again, in an unrelated 2014 breach, another 500 million accounts were targeted (Larson, 2017). As reported, the vital information compromised in the Yahoo hacks was sign-in information, most importantly passwords.

Now fast forward to December 2019, when I got an email saying that there had been an attempt to get into my personal social media accounts. I am not saying the Yahoo incident is related, since the attempt could have stemmed from multiple other sites I use. However, it illustrates a key aspect of living a digital life: are we really safe from hackers? Thankfully, they didn’t succeed in accessing my account, but that won’t stop them from trying my accounts, or yours, again in the future.

Marc Goodman (n.d.a.) explains that there is an asymmetry in cyber threats: the white hats (good guys) have to defend every possible corner to prevent a hack, whereas the hackers only have to find one weakness to break into a system.

In the Art of Charm and Lewis Howes podcasts, Goodman (n.d.a., n.d.b.) proposed the acronym UPDATE as one of many ways to protect yourself.

  • U – update frequently. Keep your operating system and applications patched.
  • P – passwords. Use a different password for every site and get a reliable password manager (e.g. LastPass, 1Password). Don’t use your Facebook account to log in to other sites.
  • D – downloads. Watch your downloads and be cautious about what you install. Download from authorized sources only.
  • A – administrator. Don’t run your computer using the administrator account (unless necessary).
  • T – turn off your computer (or at least its wifi) when not in use. If it isn’t fully turned off, it’s still accessible.
  • E – encrypt. Encryption scrambles your data so that it is unreadable without the password and proper computational keys. There are two types: encrypting the data at rest on your computer, and encrypting the data in transit as it is sent out, for example using a VPN.
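To make the idea of encryption concrete, here is a toy sketch in Python. It XORs each byte of a message with a one-time random key, which only illustrates the principle that ciphertext is unreadable without the key; for real protection, use a vetted library such as `cryptography` rather than rolling your own.

```python
# Toy illustration of symmetric encryption: XOR with a random one-time key.
# This is NOT production-grade crypto; it only shows that the same key
# that scrambles the data is needed to unscramble it.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each data byte with the matching key byte.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"my tax records"
key = secrets.token_bytes(len(message))   # one-time random key

ciphertext = xor_bytes(message, key)      # unreadable without the key
recovered = xor_bytes(ciphertext, key)    # the same key reverses it

assert recovered == message
```

The same split the bullet describes applies here: this sketch covers data at rest, while a VPN applies the encrypt-then-send idea to data in transit.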

Resources:

Plagiarism: A word

The following article, found on https://www.econtentpro.com/blog/, discusses practices that can lead to various forms of plagiarism. eContent Pro (2019) does a great job of showing that there is more than one way to plagiarize. However, it did not provide examples to showcase each case, nor did it explain the nuance of case 2 all that well (eContent Pro, 2019):

  1. Self-plagiarism
  2. Overreliance on Multiple Sources
  3. Patchwriting
  4. Overusing the Same Source

The following is my attempt to do just that.

Example of Self-plagiarism

If I were to use the following two paragraphs verbatim in a new paper or as a book chapter, even though these are my words from Hernandez (2017a), it would be considered self-plagiarism. It is good to recycle your works cited page; it is not good to recycle your words the way you would recycle plastic bottles.

Chapter 1: An Introduction to Data Analytics

Data analytics has existed before 1854. Snow (1854) had a theory on how cholera outbreaks occur, and he was able to use that theory to remove the pump handle off of a water pump, where that water pump had been contaminated in the summer of 1854. He had set out to prove that his hypothesis on how cholera epidemics originated from was correct, so he then drew his famous spot maps for the Board of Guardians of St. James’ parish in December 1854. These maps were showed in his eventual 2nd edition of his book “On the Mode of Communication of Cholera” (Brody, Rip, Vinten-Johansen, Paneth, & Rachman, 2000; Snow, 1855). As Brody et al. (2000) stated, this case was one of the first famous examples of the theory being proven by data, but the earlier usage of spot maps has existed.

However, the use of just geospatial data analytics can be quite limiting in finding a conclusive result if there is no underlying theory as to why the data is being recorded (Brody et al., 2000). Through the addition of subject matter knowledge and subject matter relationships before data analytics, context can be added to the data for which it can help yield better results (Garcia, Ferraz, & Vivacqua, 2009). In the case of Snow’s analysis, it could have been argued by anyone that the atmosphere in that region of London was causing the outbreak. However, Snow’s original hypothesis was about the transmission of cholera through water distribution systems, the data then helped support his hypothesis (Brody et al., 2000; Snow 1854). Thus, the suboptimal results generated from the outdated Edisonian-esque, which is a test-and-fail methodology, can prove to be very costly regarding Research and Development, compared to the results and insights gained from text mining and manipulation techniques (Chonde & Kumara, 2014).

Example of Overreliance on Multiple Sources

The following was taken from my dissertation (Hernandez, 2017b). There is definitely an overreliance on sources here, as with any dissertation, master’s thesis, or interdisciplinary work. However, my voice still shines through, and that is where the line is drawn by eContent Pro (2019): is the author’s voice still present?

This excerpt shows how I gathered multiple methodologies from multiple sources and combined them to form a best practice for data preprocessing. Another word for this process is synthesizing. No one source had all the components, and listing which source contained which parts of the best-practice methodology was the purpose of these three paragraphs. If my voice weren’t present in these paragraphs, then they would be considered plagiarism.

Collecting the raw and unaltered real world data is the first step of any data or text mining research study (Coralles et al., 2015; Gera & Goel, 2015; He et al., 2013; Hoonlor, 2011; Nassirtoussi et al., 2014). Next, preprocessing raw text data is needed, because raw text data files are unsuitable for predictive data analytics software tools like WEKA (Hoonlor, 2011; Miranda, n.d.). Barak and Modarres (2015), Miranda (n.d.), and Nassirtoussi et al. (2014) concluded that in both data and text mining, data preprocessing has the most significant impact on the research results.

Raw data can have formats that change across time, therefore converting the data into one common format for analysis is necessary for data analytics (Mandrai & Barkar, 2014). Also, the removal of HTML tags from web-based data sources allows for the removal of extraneous data points that can provide unpredictable results (Netzer et al., 2012). Finally, deciding on a strategy about how to deal with missing or defective data fields can aid in mitigating noise from the results (Barak & Modarres, 2015; Fayyad et al., 1996; Mandrai & Barskar, 2014; Netzer, 2012). Furthermore, to gain the most insights surrounding a research problem, data from multiple data sources should be collected and integrated (Corrales et al., 2015).

Predictive data analytics tools can analyze unstructured text data after the preprocessing step. Preprocessing involves tokenization, stop word removal, and word-normalization (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Pletscher-Frankild et al., 2015; Thanh & Meesad, 2014). Tokenization is when a body of text is reduced to a set of units, phrases, or groups of keywords for analysis (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Pletscher-Frankild et al., 2015; Thanh & Meesad, 2014). For example, the term eyewall replacement would be considered one token, rather than two words or two different tokens. Stopword removal is the removal of the words that add no value to the predictive analytics algorithm from the body of text; these words are prepositions, articles, and conjunctions (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Thanh & Meesad, 2014). Miranda (n.d.) stated that sometimes stop-word removals could also be context-dependent because some contextual words can yield little to no value in the analysis. For instance, meteorological forecast models in this study are considered context-dependent stopwords. Lastly, word-normalization transforms the letters into a body of text to one single case type and removes the conjugations of words (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Thanh & Meesad, 2014). For example, stemming the following words cooler, coolest, and colder becomes cool-, which heightens the fidelity of the results due to the reduction of dimensionalities.

Example of Patchwriting and Overusing the Same Source

This is a self-created meta-example for this post, which happens to be a curation post for Service Operations KPIs and CSFs. The words below have been lifted from various sections of:

Each sample Critical Success Factors (CSFs) is followed by a small number of typical Key Performance Indicators (KPIs) that support the CSF. These KPIs should not be adopted without careful consideration. Each organization should develop KPIs that are appropriate for its level of maturity, its CSFs and its particular circumstances. Achievement against KPIs should be monitored and used to identify opportunities for improvement, which should be logged in the CSI register for evaluation and possible implementation.

Service Operations: ensures that services operate within agreed parameters and, when service is interrupted, restores it as quickly as possible.

Request Fulfillment Management: Request Fulfillment is responsible for

  • Managing the initial contact between users and the Service Desk.
  • Managing the lifecycle of service requests from initial request through delivery of the expected results.
  • Managing the channels by which users can request and receive services via service requests.
  • Managing the process by which approvals and entitlements are defined and managed for identified service requests (future).
  • Managing the supply chain for service requests and assisting service providers in ensuring that the end-to-end delivery is managed according to plan.
  • Working with the Service Catalog and Service Portfolio managers to ensure that all standard service requests are appropriately defined and managed in the service catalog (future).


  • CSF Requests must be fulfilled in an efficient and timely manner that is aligned to agreed service level targets for each type of request
      ◦ KPI The mean elapsed time for handling each type of service request
      ◦ KPI The number and percentage of service requests completed within agreed target times
      ◦ KPI Breakdown of service requests at each stage (e.g. logged, work in progress, closed etc.)
      ◦ KPI Percentage of service requests closed by the service desk without reference to other levels of support (often referred to as ‘first point of contact’)
      ◦ KPI Number and percentage of service requests resolved remotely or through automation, without the need for a visit
      ◦ KPI Total numbers of requests (as a control measure)
      ◦ KPI The average cost per type of service request
  • CSF Only authorized requests should be fulfilled
      ◦ KPI Percentage of service requests fulfilled that were appropriately authorized
      ◦ KPI Number of incidents related to security threats from request fulfilment activities
  • CSF User satisfaction must be maintained
      ◦ KPI Level of user satisfaction with the handling of service requests (as measured in some form of satisfaction survey)
      ◦ KPI Total number of incidents related to request fulfilment activities
      ◦ KPI The size of current backlog of outstanding service requests.

Incident Management: Incident Management is responsible for the resolution of any incident, reported by a tool or user, which is not part of normal operations and causes or may cause a disruption to or decrease in the quality of a service.

  • CSF Resolve incidents as quickly as possible minimizing impacts to the business
      ◦ KPI Mean elapsed time to achieve incident resolution or circumvention, broken down by impact code
      ◦ KPI Breakdown of incidents at each stage (e.g. logged, work in progress, closed etc.)
      ◦ KPI Percentage of incidents closed by the service desk without reference to other levels of support (often referred to as ‘first point of contact’)
      ◦ KPI Number and percentage of incidents resolved remotely, without the need for a visit
      ◦ KPI Number of incidents resolved without impact to the business (e.g. incident was raised by event management and resolved before it could impact the business)
  • CSF Maintain quality of IT services
      ◦ KPI Total numbers of incidents (as a control measure)
      ◦ KPI Size of current incident backlog for each IT service
      ◦ KPI Number and percentage of major incidents for each IT service
  • CSF Maintain user satisfaction with IT services
      ◦ KPI Average user/customer survey score (total and by question category)
      ◦ KPI Percentage of satisfaction surveys answered versus total number of satisfaction surveys sent
  • CSF Increase visibility and communication of incidents to business and IT support staff
      ◦ KPI Average number of service desk calls or other contacts from business users for incidents already reported
      ◦ KPI Number of business user complaints or issues about the content and quality of incident communications
  • CSF Align incident management activities and priorities with those of the business
      ◦ KPI Percentage of incidents handled within agreed response time (incident response-time targets may be specified in SLAs, for example, by impact and urgency codes)
      ◦ KPI Average cost per incident
  • CSF Ensure that standardized methods and procedures are used for efficient and prompt response, analysis, documentation, ongoing management and reporting of incidents to maintain business confidence in IT capabilities
      ◦ KPI Number and percentage of incidents incorrectly assigned
      ◦ KPI Number and percentage of incidents incorrectly categorized
      ◦ KPI Number and percentage of incidents processed per service desk agent
      ◦ KPI Number and percentage of incidents related to changes and releases.

Problem Management: Problem Management is responsible for the activities required to

  • Diagnose the root cause of incidents.
  • Determine the resolution to related problems.
  • Perform trend analysis to identify and resolve problems before they impact the live environment.
  • Ensure that resolutions are implemented through the appropriate control procedures, especially change management and release management.

Problem Management maintains information about problems and appropriate workarounds and resolutions to help the organization reduce the number and impact of incidents over time. To do this, Problem Management has a strong interface with Knowledge Management and uses tools such as the Known Error Database.

  • CSF Minimize the impact to the business of incidents that cannot be prevented
      ◦ KPI The number of known errors added to the KEDB
      ◦ KPI The percentage accuracy of the KEDB (from audits of the database)
      ◦ KPI Percentage of incidents closed by the service desk without reference to other levels of support (often referred to as ‘first point of contact’)
      ◦ KPI Average incident resolution time for those incidents linked to problem records
  • CSF Maintain quality of IT services through elimination of recurring incidents
      ◦ KPI Total numbers of problems (as a control measure)
      ◦ KPI Size of current problem backlog for each IT service
      ◦ KPI Number of repeat incidents for each IT service
  • CSF Provide overall quality and professionalism of problem handling activities to maintain business confidence in IT capabilities
      ◦ KPI The number of major problems (opened and closed and backlog)
      ◦ KPI The percentage of major problem reviews successfully performed
      ◦ KPI The percentage of major problem reviews completed successfully and on time
      ◦ KPI Number and percentage of problems incorrectly assigned
      ◦ KPI Number and percentage of problems incorrectly categorized
      ◦ KPI The backlog of outstanding problems and the trend (static, reducing or increasing?)
      ◦ KPI Number and percentage of problems that exceeded their target resolution times
      ◦ KPI Percentage of problems resolved within SLA targets (and the percentage that are not!)
      ◦ KPI Average cost per problem.

Event Management: These processes have planning, design, and operations activity. Event Management is responsible for any aspect of Service Management that needs to be monitored or controlled and where the monitoring and controls can be automated. This includes:

  • Configuration items.
  • Environmental controls.
  • Software licensing.
  • Security.
  • Normal operational activities.

Event Management includes defining and maintaining Event Management solutions and managing events.

  • CSF Detecting all changes of state that have significance for the management of CIs and IT services
      ◦ KPI Number and ratio of events compared with the number of incidents
      ◦ KPI Number and percentage of each type of event per platform or application versus total number of platforms and applications underpinning live IT services (looking to identify IT services that may be at risk for lack of capability to detect their events)
  • CSF Ensuring all events are communicated to the appropriate functions that need to be informed or take further control actions
      ◦ KPI Number and percentage of events that required human intervention and whether this was performed
      ◦ KPI Number of incidents that occurred and percentage of these that were triggered without a corresponding event
  • CSF Providing the trigger, or entry point, for the execution of many service operation processes and operations management activities
      ◦ KPI Number and percentage of events that required human intervention and whether this was performed
  • CSF Provide the means to compare actual operating performance and behaviour against design standards and SLAs
      ◦ KPI Number and percentage of incidents that were resolved without impact to the business (indicates the overall effectiveness of the event management process and underpinning solutions)
      ◦ KPI Number and percentage of events that resulted in incidents or changes
      ◦ KPI Number and percentage of events caused by existing problems or known errors (this may result in a change to the priority of work on that problem or known error)
      ◦ KPI Number and percentage of events indicating performance issues (for example, growth in the number of times an application exceeded its transaction thresholds over the past six months)
      ◦ KPI Number and percentage of events indicating potential availability issues (e.g. failovers to alternative devices, or excessive workload swapping)
  • CSF Providing a basis for service assurance, reporting and service improvement
      ◦ KPI Number and percentage of repeated or duplicated events (this will help in the tuning of the correlation engine to eliminate unnecessary event generation and can also be used to assist in the design of better event generation functionality in new services)
      ◦ KPI Number of events/alerts generated without actual degradation of service/functionality (false positives – indication of the accuracy of the instrumentation parameters, important for CSI).

Access Management: Access Management aims to grant authorized users the right to use a service, while preventing access by non-authorized users. The Access Management processes essentially execute policies defined in Information Security Management. Access Management is sometimes also referred to as “Rights Management” or “Identity Management”.

  • CSF Ensuring that the confidentiality, integrity and availability of services are protected in accordance with the information security policy
      ◦ KPI Percentage of incidents that involved inappropriate security access or attempts at access to services
      ◦ KPI Number of audit findings that discovered incorrect access settings for users that have changed roles or left the company
      ◦ KPI Number of incidents requiring a reset of access rights
      ◦ KPI Number of incidents caused by incorrect access settings
  • CSF Provide appropriate access to services on a timely basis that meets business needs
      ◦ KPI Percentage of requests for access (service request, RFC etc.) that were provided within established SLAs and OLAs
  • CSF Provide timely communications about improper access or abuse of services on a timely basis
      ◦ KPI Average duration of access-related incidents (from time of discovery to escalation).
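Most of the KPIs above boil down to counts, percentages, and averages over ticket records. As a rough sketch (the record fields such as `resolved_minutes` and `first_contact` are hypothetical stand-ins for whatever your ITSM tool exports), two of the incident management KPIs could be computed like this:

```python
# Sketch: computing two incident-management KPIs from ticket records.
# Field names (impact, resolved_minutes, first_contact) are hypothetical.
from statistics import mean

incidents = [
    {"impact": "high", "resolved_minutes": 45,  "first_contact": True},
    {"impact": "low",  "resolved_minutes": 300, "first_contact": False},
    {"impact": "low",  "resolved_minutes": 90,  "first_contact": True},
]

# KPI: mean elapsed time to achieve incident resolution, by impact code.
by_impact = {}
for inc in incidents:
    by_impact.setdefault(inc["impact"], []).append(inc["resolved_minutes"])
mean_by_impact = {impact: mean(times) for impact, times in by_impact.items()}

# KPI: percentage of incidents closed at first point of contact.
first_contact_pct = 100 * sum(i["first_contact"] for i in incidents) / len(incidents)

print(mean_by_impact)     # e.g. {'high': 45, 'low': 195}
print(first_contact_pct)  # e.g. 66.7 percent (2 of 3 incidents)
```

Swapping in real ticket exports and adding a grouping per IT service would cover most of the count- and backlog-style KPIs in the same way.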

Resources:

Communication with English as a Second Language

Comunicación en inglés como segundo idioma

Key takeaway / Llave para llevar

  • Paraphrased quote: No one will know what you wanted to say but didn’t, so don’t worry if you forget something. They will remember how you made them feel.
  • Cita parafraseada: Nadie sabrá lo que querías decir pero no dijiste, así que no necesitas preocuparte si olvidaste algo. Recuerda, nosotros solo recordamos cómo nos hiciste sentir.

How to do a podcast

In this post, you will get a behind-the-scenes look at what it takes to produce your very own podcast, and an opportunity to grow your presentation skills in this new landscape. This interactive session taught attendees how to plan, prepare, and produce their very own podcast, giving them a bird's-eye view of the ins and outs of becoming a successful podcaster. As an added bonus, the attendees of this seminar interacted and became part of an official District 58 Podcast recording.

Below is the PowerPoint presentation I gave:

 

The raw audio file can be found below:

Whereas the production quality audio file can be found below:

Links to the products I used can be found below:

Unconventional Hurricane Prep

We are at the height of hurricane season again in the United States. Although hurricane preparation lists are available from multiple websites, I would like to share some unconventional items:

To do:

  1. As bad as a hurricane is, it is a fantastic way to really meet your neighbors.
  2. Fill up your bathtubs with water. If the water goes out afterward, you can use a bucket to pour tub water into the tank at the back of your toilet so you can still flush.
  3. Fill up a cup of water and put it in the freezer to freeze. Then put a coin on top of the frozen water. If you come back from evacuating and don’t know whether you lost power for a while, check the cup. If the coin sank and froze to the bottom of the cup, the ice melted and refroze when the power came back on, which means your food also defrosted and refroze. Throw out the food. If the coin is still on top, your food is fine.
  4. Also, fill your Tupperware with water and freeze it. It can be used after the storm in coolers to keep food and drinks cold. Ice will be precious!
  5. Fill up coolers with water for drinking after the storm, because tap water after the storm may be contaminated and have to be boiled before use.
  6. Make sure nothing is left outside that can hit the house. Pull all outside furniture, bird feeders, etc. into the garage; most items left outside become flying objects. If you can’t bring trash cans or recycling bins in, make sure they are empty before the storm hits and let them fill with rainwater to weigh them down.
  7. Don’t buy hurricane snacks too early because you will eat them before it hits.
  8. Fill up all cars with gasoline or diesel prior to evacuating or staying put.  You never know when the next shipment of gas will come in nor how much more expensive it will be, because of low supply and high demand. Save your gas after the storm. If you must sight-see, use a bicycle.
  9. Close all doors in the home. Especially if you evacuate. If you lose a window, that should confine water damage.
  10. Metal garage doors, especially the 9-to-10-foot variety, can’t handle strong winds; they will collapse inward. Either park in front of them or back a car up against them from the inside. Don’t forget the heavy blanket between the door and the car.
  11. Always assume a downed power line may be present in standing water.
  12. If you lose power and you have any solar-powered lights, bring some inside to light the house at night and put them back in the sun during the day. It saves batteries!
  13. If you’re evacuating, don’t forget to unplug any electrical items that you can, e.g. TV, router, desktop PC, etc. If your area loses power, there could be a surge when your power company tries to bring folks back online. Most surge protectors will help, but it’s usually better to be safe than sorry. Also, when crews bring areas back online there may be power “blips” (on then off really quickly). It’s best to wait until the power stabilizes to plug stuff back in.
  14. Charge all your electronics and turn them off before the storm.  This includes computers, cellphones, etc.
  15. Make sure all your dishes and laundry are clean. It might be a week before the power comes back on. In Miami, for Hurricane Wilma, I was without power for almost a month.
  16. If you have no landline phone, arrange to coordinate with someone who does. It can take weeks, if not a month, after a storm before all cell towers are realigned.

Do this every year prior to the storm season

  1. Check your trees for dead limbs. Service them if you can.
  2. If you have a generator or chainsaw, service it now. Make sure it is ready to go.
  3. Check your grill. You’ll need gas or lots of wood/charcoal for cooking. Never use a gas grill indoors.
  4. Stock up on your and your family’s medical needs as well.

If you have pets

  1. Fill up a clean large plastic bin with water for your pets.
  2. Of course, all pets should be brought in.
  3. With all the noise from wind and rain, if your pet is crate trained it will help keep her calm to go into her crate.
  4. If you’re evacuating and you have a To-Go bag, make one for your pet – medicines, collar and leash with id, crate, towel(s), blanket or bed, favorite toy, treats, food, and water; also, any pertinent medical records (shots, medical history). Take your pet with you.
  5. Should the worst occur and your pet gets out and becomes lost, having her microchipped will help ensure she gets home. VIP Petcare Clinic will chip them for about $19. After a storm, many pet rescue groups will come in to pick up “strays”. If your pet has no microchip, they could pick her up, and your pet could end up being adopted out to someone hundreds or even thousands of miles away.

Again, this is unconventional advice to be followed along with conventional advice; however, it is just as valuable.

P-Hacking: The Menace In Science

The American Statistical Association (2016a) statement included the following conversation:

Q: Why do so many colleges and grad schools teach p = 0.05?

A: Because that’s still what the scientific community and journal editors use.

Q: Why do so many people still use p = 0.05?

A: Because that’s what they were taught in college or grad school.

Someone doesn’t need to be studying philosophy, or for the Law School Admission Test (LSAT), to see the flaw in that argument. It’s circular reasoning, and that is the point. The p-value is being overused when there are so many other ways to measure the strength of the data and its significance. Plus, the p = 0.05 threshold is arbitrary and varies by field. I have seen papers use p = 0.10, p = 0.05, p = 0.01, and rarely p = 0.001. But are the results reliable, replicable, and reproducible? There are even studies that manipulate their data to get these elusive p-values…

Scientific research is the bedrock of pushing society forward. However, not every published study’s results represent the best of science. Some in the field have tried to alter how long a study lasts, failed to take into account a confounding variable that could be causing the results, made the sample size too small to be reliable (allowing luck to come into play), or attempted p-hacking (Adam Ruins Everything, 2017; CrashCourse, 2018; Oliver, 2016).

P-hacking is defined as gathering as many variables as possible, then massaging the huge amounts of data to get a statistically significant result (CrashCourse, 2018; Oliver, 2016). However, that result could be completely meaningless. For example, the 538 blog ran a p-hacking study called “You can’t trust what you read about nutrition,” in which they surveyed 54 people, collected over 1,000 variables, and found a statistically significant correlation between eating raw tomatoes and Judaism. 538 did this study just to point out the issue of p-hacking (Aschwanden, 2016).
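The mechanism behind the 538 demonstration can be sketched in a few lines of code: generate pure noise for many variables, test each against a random outcome, and watch some pass the p = 0.05 bar anyway. This is a simulation with made-up data, not a reproduction of 538's actual analysis:

```python
# A minimal sketch of why p-hacking works: with ~1,000 pure-noise variables
# and a p = 0.05 threshold, roughly 5% will look "significant" by chance.
import math
import random
from statistics import NormalDist

random.seed(42)
n_subjects, n_variables = 54, 1000  # mirrors the scale of the 538 survey

outcome = [random.gauss(0, 1) for _ in range(n_subjects)]

def pearson_r(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

false_positives = 0
for _ in range(n_variables):
    var = [random.gauss(0, 1) for _ in range(n_subjects)]
    r = pearson_r(var, outcome)
    # Fisher z-transform: atanh(r) * sqrt(n - 3) is ~N(0, 1) under the null
    z = math.atanh(r) * math.sqrt(n_subjects - 3)
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_variables} random variables were 'significant'")
```

Every "hit" here is meaningless by construction, which is exactly the trap: report only the hits and the noise looks like a finding.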

As mentioned earlier, the best way to protect ourselves from p-hacking is to replicate the study and see if we can get similar results to the original (Adam Ruins Everything, 2017; Oliver, 2016). Unfortunately, in science there is no prize for fact-checking (Oliver, 2016). That is why, when we do research, we must make sure our results are robust by testing multiple times if possible. If that is not possible in your own research, then a replication study by others is called for. However, replication studies are rarely funded and rarely get published (Adam Ruins Everything, 2017). A great way to do this is to collaborate with scientific peers from multiple universities: work on the same problem with the same methodology but different datasets, and publish one paper or a series of papers confirming the result as replicable and robust. If we don’t do this, only exploratory studies get funded and published, and the results never get evaluated. Unfortunately, the adage for most scientists is “publish or perish,” and as Prof. Brian Nosek from the Center for Open Science said, “There is NO COST to getting things WRONG. THE COST is not getting them PUBLISHED.” (Oliver, 2016).

The American Statistical Association (2016b) suggested that the following be used alongside p-values to give a more accurate representation of significance:

  • Methods that emphasize estimation over testing
    • Confidence intervals
    • Credibility intervals
    • Prediction intervals
  • Bayesian methods
  • Alternative measures of evidence
    • Likelihood ratios
    • Bayes factors
  • Decision-Theoretic modeling
  • False discovery rates
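To make the first alternative on the list concrete, here is a hedged sketch of reporting a 95% confidence interval for a mean instead of only a yes/no p-value verdict. The sample data is made up, and the interval uses a normal approximation (a t-interval would be slightly wider for a sample this small):

```python
# Estimation over testing: report a 95% confidence interval for the mean
# rather than a bare "p < 0.05". The sample below is illustrative only.
import math
from statistics import mean, stdev, NormalDist

sample = [4.8, 5.1, 5.6, 4.9, 5.3, 5.0, 5.4, 5.2, 4.7, 5.5]

m = mean(sample)
se = stdev(sample) / math.sqrt(len(sample))   # standard error of the mean
z = NormalDist().inv_cdf(0.975)               # ~1.96 for a 95% interval

lo, hi = m - z * se, m + z * se
print(f"mean = {m:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

The interval conveys both the estimate and its uncertainty, which a lone p-value hides.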

Have hope: most reputable scientists don’t take the result of one study to heart, but look at it in the context of all the work done in that field (Adam Ruins Everything, 2017). Also, most reputable scientists tend to downplay the implications and generalizations of their results when they publish their findings (American Statistical Association, 2016b; Adam Ruins Everything, 2017; CrashCourse, 2018; Oliver, 2016). Looking for those kinds of studies and knowing how p-hacking is done is the best ammunition to defend against spurious results.

Resources

Finance/Accounting 101: Capital and Operating Expense

Capital Expenditure – CapEx (Finance/Accounting): Includes all spending on an asset that is supposed to last for over a year (Apptio, 2018). Usually it is used to undertake a new project, but it can also be used for purchasing or upgrading equipment, buildings, etc. (Investopedia, n.d.b.). CapEx involves depreciation; see my previous post for that (Apptio, n.d.; Investopedia, n.d.b.). A car is a great example of your personal CapEx, given that it depreciates over time and you typically purchase or lease it for more than a year.

Operating Expense – OpEx (Finance/Accounting): Includes all the ongoing costs of running a business as normal (Apptio, 2018; Investopedia, n.d.a.). For instance, OpEx could include rent, equipment, inventory costs, marketing, payroll, insurance, and funds allocated for research and development (Investopedia, n.d.a). Essentially, the rent you pay for housing or a car lease can be considered your own OpEx. Insurance payments (health, dental, vision, disability, housing, car, etc.) also fall under this category. Even gas to fuel up a car fits here, given that it is used to make your asset operable. According to Apptio (2018), bills like electricity, water, etc. can fall under this category as well.

You can be more CapEx-heavy or OpEx-heavy in your budgets, and each approach has its benefits. If you are more CapEx-heavy, your costs are more predictable in the long run and you can easily calculate your net worth, but you may not have enough cash on hand to pursue some opportunities. If you are more OpEx-heavy, you tend to keep more money free for investment purposes and have more flexibility to take on an opportunity, but it is harder to show or calculate your net worth.

Another way to look at this: OpEx is like the cloud storage on your phone, where you pay for what you use, be it 5 GB, 25 GB, 50 GB, etc. Whereas CapEx is steady: you pay for the entire asset up front and enjoy as much or as little of it as you want.
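The trade-off above can be sketched with straight-line depreciation, the simplest depreciation method: spread the purchase price evenly over the asset's useful life and compare it against a pay-as-you-go alternative. All figures here are hypothetical:

```python
# Illustrative CapEx vs OpEx comparison using straight-line depreciation.
# The purchase price, useful life, and subscription cost are made up.

capex_purchase = 24_000        # buy the asset outright (CapEx)
useful_life_years = 5
annual_depreciation = capex_purchase / useful_life_years

opex_monthly = 450             # pay-as-you-go alternative (OpEx)
annual_opex = opex_monthly * 12

print(f"CapEx: annual depreciation expense = ${annual_depreciation:,.0f}")
print(f"OpEx:  annual subscription cost    = ${annual_opex:,.0f}")
```

In this made-up scenario, CapEx totals $24,000 over the five-year life versus $27,000 for OpEx, but the OpEx route keeps cash free up front, which mirrors the flexibility point above.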

Resources: