Financial Hacks

In my last post, I talked about cyber hacking, but this month let’s talk about the 2017 Equifax breach, in which names, Social Security numbers, birth dates, driver’s license numbers, and addresses were stolen from millions of people (Smith, 2017; Oliver, 2017).  Smith (2017) knew of the breach, which started in late May and ended in early June 2017, but the company didn’t advise the public until September 2017.  In that gap between consumers being hacked and the public disclosure, multiple people’s lives could have been ruined.

This breach means that once the data is sold on the black market or dark web, thieves can open lines of credit in your name for the rest of your life.  The only way to combat this is to freeze your credit with all three credit bureaus (Equifax, Experian, and TransUnion):

My journey in doing so meant going to each of these sites and setting up a freeze.  When I wanted my credit pulled for housing, a new credit card, etc., I would have to unfreeze the account for a few days so my credit could be checked, then refreeze it.  Unfortunately, this has become an inconvenience, as it can mean a delay in many major life situations, like getting a new job.  However, it is a minor inconvenience compared to finding out you were hacked, proving your real identity, and recovering your life, if you can.

Freezing your credit report is one way to protect yourself.  Another is to check your credit report.  Every year you get one free credit report from each of the three credit reporting agencies.  Things that appear in one report may not appear in another, so it is key to routinely check all three.  A link to do so can be found here:

or by phone:

  • 1-877-322-8228

Resources:

Storytime:  The Hacker!

Systems and companies get hacked.  The biggest breach in the tech sector hit Yahoo back in August 2013, when 3 billion accounts were targeted, and again in an unrelated 2014 incident, when 500 million accounts were targeted (Larson, 2017). As reported, the vital information compromised in the Yahoo hacks was sign-in information, most importantly passwords.

Now fast forward to December 2019, when I got an email saying that there was an attempt to get into my personal social media accounts.  I’m not saying the Yahoo incident is related, since the attempt could have come from any of the other sites I use.  However, it illustrates a key question of living a digital life: are we really safe from hackers?  Thankfully they didn’t succeed in accessing my account, but that won’t stop them from trying my accounts, or yours, again in the future.

Mark Goodman (n.d.a.) explains that there is an asymmetry in cyber threats: the white hats (the good guys) have to defend every possible corner to prevent a hack, whereas the hackers only have to find one weakness to break into a system.

Goodman (n.d.a., n.d.b.), on the Art of Charm and Lewis Howes podcasts, proposed the acronym UPDATE as one of many ways to protect yourself:

  • U – update frequently. Keep your operating system and applications patched.
  • P – passwords. Use a different password for every site and get a reliable password manager (e.g., LastPass, 1Password). Don’t use your Facebook account to log in to other sites.
  • D – downloads. Watch your downloads and be cautious about what you install. Download from authorized sources only.
  • A – administrator. Don’t run your computer using the administrator account (unless necessary).
  • T – turn off your computer when not in use, or at least its wifi. If it isn’t fully turned off, it’s still accessible.
  • E – encrypt. Encryption scrambles your data so it can’t be read without the password and proper cryptographic keys. There are 2 types: you can encrypt the data stored on your computer, and you can encrypt the data as it is sent out, for example by using a VPN.
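
To make the encryption idea concrete, here is a toy sketch in Python. This is NOT real cryptography (real tools use vetted ciphers such as AES); it only illustrates the principle that data combined with a password-derived keystream is unreadable without the password:

```python
import hashlib

def keystream(password: str, salt: bytes, length: int) -> bytes:
    """Stretch a password into a pseudo-random keystream (illustration only)."""
    stream, block = b"", salt
    while len(stream) < length:
        block = hashlib.sha256(block + password.encode()).digest()
        stream += block
    return stream[:length]

def toy_encrypt(data: bytes, password: str, salt: bytes) -> bytes:
    # XOR with the keystream; applying the same operation twice decrypts
    return bytes(b ^ k for b, k in zip(data, keystream(password, salt, len(data))))

ciphertext = toy_encrypt(b"tax documents", "correct horse", b"salt1234")
print(ciphertext != b"tax documents")                          # scrambled
print(toy_encrypt(ciphertext, "correct horse", b"salt1234"))   # round-trips back
```

The file names, password, and salt here are made up; the point is only that without the password the scrambled bytes are useless to a thief.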

Resources:

Plagiarism: A word

The following article, found on https://www.econtentpro.com/blog/, talks about practices that can lead to various forms of plagiarism.  eContent Pro (2019) is a really great article showcasing that there is more than one way to plagiarize.  However, they did not provide examples to showcase each case, nor did they explain the nuance of case 2 all that well (eContent Pro, 2019):

  1. Self-plagiarism
  2. Overreliance on Multiple Sources
  3. Patchwriting
  4. Overusing the same source

The following is my attempt to do just that.

Example of Self-plagiarism

If I were to use the following two paragraphs verbatim in a new paper or as a book chapter, then even though these are my words from Hernandez (2017a), it would be considered self-plagiarism.  It is good to recycle your works cited page; it is not good to recycle your words the way you would recycle plastic bottles.

Chapter 1: An Introduction to Data Analytics

Data analytics has existed before 1854. Snow (1854) had a theory on how cholera outbreaks occur, and he was able to use that theory to remove the pump handle off of a water pump, where that water pump had been contaminated in the summer of 1854. He had set out to prove that his hypothesis on how cholera epidemics originated from was correct, so he then drew his famous spot maps for the Board of Guardians of St. James’ parish in December 1854. These maps were showed in his eventual 2nd edition of his book “On the Mode of Communication of Cholera” (Brody, Rip, Vinten-Johansen, Paneth, & Rachman, 2000; Snow, 1855). As Brody et al. (2000) stated, this case was one of the first famous examples of the theory being proven by data, but the earlier usage of spot maps has existed.

However, the use of just geospatial data analytics can be quite limiting in finding a conclusive result if there is no underlying theory as to why the data is being recorded (Brody et al., 2000). Through the addition of subject matter knowledge and subject matter relationships before data analytics, context can be added to the data for which it can help yield better results (Garcia, Ferraz, & Vivacqua, 2009). In the case of Snow’s analysis, it could have been argued by anyone that the atmosphere in that region of London was causing the outbreak. However, Snow’s original hypothesis was about the transmission of cholera through water distribution systems, the data then helped support his hypothesis (Brody et al., 2000; Snow 1854). Thus, the suboptimal results generated from the outdated Edisonian-esque, which is a test-and-fail methodology, can prove to be very costly regarding Research and Development, compared to the results and insights gained from text mining and manipulation techniques (Chonde & Kumara, 2014).

Example of Overreliance on Multiple Sources

The following was taken from my dissertation (Hernandez, 2017b).  There is definitely an overreliance on sources here, as with any dissertation, master’s thesis, or interdisciplinary work. However, my voice still shines through. That is where the line is drawn by eContent Pro (2019): is the author’s voice still present?

This excerpt shows how I gathered multiple methodologies from multiple sources and combined them to form a best practice for data preprocessing. Another word for this process is synthesizing. No single source had all the components, and listing which source contained which parts of the best-practice methodology was the purpose of these three paragraphs.  If my voice weren’t present in these paragraphs, then it would be considered plagiarism.

Collecting the raw and unaltered real world data is the first step of any data or text
mining research study (Coralles et al., 2015; Gera & Goel, 2015; He et al., 2013; Hoonlor, 2011; Nassirtoussi et al., 2014). Next, preprocessing raw text data is needed, because raw text data files are unsuitable for predictive data analytics software tools like WEKA (Hoonlor, 2011; Miranda, n.d.). Barak and Modarres (2015), Miranda (n.d.), and Nassirtoussi et al. (2014) concluded that in both data and text mining, data preprocessing has the most significant impact on the research results.

Raw data can have formats that change across time, therefore converting the data into one common format for analysis is necessary for data analytics (Mandrai & Barkar, 2014). Also, the removal of HTML tags from web-based data sources allows for the removal of extraneous data points that can provide unpredictable results (Netzer et al., 2012). Finally, deciding on a strategy about how to deal with missing or defective data fields can aid in mitigating noise from the results (Barak & Modarres, 2015; Fayyad et al., 1996; Mandrai & Barskar, 2014; Netzer, 2012). Furthermore, to gain the most insights surrounding a research problem, data from multiple data
sources should be collected and integrated (Corrales et al., 2015).

Predictive data analytics tools can analyze unstructured text data after the preprocessing step. Preprocessing involves tokenization, stop word removal, and word-normalization (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Pletscher-Frankild et al., 2015; Thanh & Meesad, 2014). Tokenization is when a body of text is reduced to a set of units, phrases, or groups of keywords for analysis (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Pletscher-Frankild et al., 2015; Thanh & Meesad, 2014). For
example, the term eyewall replacement would be considered one token, rather than two words or two different tokens. Stopword removal is the removal of the words that add no value to the predictive analytics algorithm from the body of text; these words are prepositions, articles, and conjunctions (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Thanh & Meesad, 2014). Miranda (n.d.) stated that sometimes stop-word removals could also be context-dependent because some contextual words can yield little to no value in the analysis. For instance, meteorological forecast models in this study are considered context-dependent stopwords. Lastly, word-normalization transforms the letters into a body of text to one single case type and removes the conjugations of words (Hoonlor, 2011; Miranda, n.d.; Nassirtoussi et al., 2014; Nassirtoussi et al., 2015; Thanh & Meesad, 2014). For example, stemming the following words cooler, coolest, and colder becomes cool-, which heightens the fidelity of the results due to the reduction of dimensionalities.
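
The tokenization, stop-word removal, and word-normalization steps described in the excerpt above can be sketched in a few lines of Python. This is an illustration only: the stop-word list and suffix-stripping rules below are simplified stand-ins for the real stemmers and stop-word lists used in practice:

```python
import re

STOPWORDS = {"the", "a", "an", "and", "of", "in", "on", "for", "is"}

def preprocess(text: str) -> list[str]:
    # Word-normalization: fold everything to lower case, then tokenize
    tokens = re.findall(r"[a-z]+", text.lower())
    # Stop-word removal: drop words that add no predictive value
    tokens = [t for t in tokens if t not in STOPWORDS]
    # Crude stemming: strip a few common suffixes (a stand-in for real stemmers)
    stemmed = []
    for t in tokens:
        for suffix in ("est", "er", "ing", "ed", "s"):
            if t.endswith(suffix) and len(t) - len(suffix) >= 3:
                t = t[: -len(suffix)]
                break
        stemmed.append(t)
    return stemmed

print(preprocess("The cooler morning is the coolest of the week"))
# ['cool', 'morn', 'cool', 'week']
```

Note how “cooler” and “coolest” collapse to the single stem “cool”, reducing the dimensionality of the data exactly as the excerpt describes.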

Example of Patchwriting and Overusing the Same Source

This is a self-created example for this post, which happens to be a curation post for Service Operations KPIs and CSFs. The words below have been lifted from various sections of:

Each sample Critical Success Factors (CSFs) is followed by a small number of typical Key Performance Indicators (KPIs) that support the CSF. These KPIs should not be adopted without careful consideration. Each organization should develop KPIs that are appropriate for its level of maturity, its CSFs and its particular circumstances. Achievement against KPIs should be monitored and used to identify opportunities for improvement, which should be logged in the CSI register for evaluation and possible implementation.

Service Operations: Ensures that services operate within agreed parameters and, when they are interrupted, restores them as soon as possible.

Request Fulfillment Management: Request Fulfillment is responsible for

  • Managing the initial contact between users and the Service Desk.
  • Managing the lifecycle of service requests from initial request through delivery of the expected results.
  • Managing the channels by which users can request and receive services via service requests.
  • Managing the process by which approvals and entitlements are defined and managed for identified service requests (future).
  • Managing the supply chain for service requests and assisting service providers in ensuring that the end-to-end delivery is managed according to plan.
  • Working with the Service Catalog and Service Portfolio managers to ensure that all standard service requests are appropriately defined and managed in the service catalog (future).


  • CSF Requests must be fulfilled in an efficient and timely manner that is aligned to agreed service level targets for each type of request

o    KPI The mean elapsed time for handling each type of service request

o    KPI The number and percentage of service requests completed within agreed target times

o    KPI Breakdown of service requests at each stage (e.g. logged, work in progress, closed etc.)

o    KPI Percentage of service requests closed by the service desk without reference to other levels of support (often referred to as ‘first point of contact’)

o    KPI Number and percentage of service requests resolved remotely or through automation, without the need for a visit

o    KPI Total numbers of requests (as a control measure)

o    KPI The average cost per type of service request

  • CSF Only authorized requests should be fulfilled

o    KPI Percentage of service requests fulfilled that were appropriately authorized

o    KPI Number of incidents related to security threats from request fulfilment activities

  • CSF User satisfaction must be maintained

o    KPI Level of user satisfaction with the handling of service requests (as measured in some form of satisfaction survey)

o    KPI Total number of incidents related to request fulfilment activities

o    KPI The size of current backlog of outstanding service requests.

Incident Management: Incident Management is responsible for the resolution of any incident, reported by a tool or user, which is not part of normal operations and causes or may cause a disruption to or decrease in the quality of a service.

  • CSF Resolve incidents as quickly as possible minimizing impacts to the business

o    KPI Mean elapsed time to achieve incident resolution or circumvention, broken down by impact code

o    KPI Breakdown of incidents at each stage (e.g. logged, work in progress, closed etc.)

o    KPI Percentage of incidents closed by the service desk without reference to other levels of support (often referred to as ‘first point of contact’)

o    KPI Number and percentage of incidents resolved remotely, without the need for a visit

o    KPI Number of incidents resolved without impact to the business (e.g. incident was raised by event management and resolved before it could impact the business)

  • CSF Maintain quality of IT services

o    KPI Total numbers of incidents (as a control measure)

o    KPI Size of current incident backlog for each IT service

o    KPI Number and percentage of major incidents for each IT service

  • CSF Maintain user satisfaction with IT services

o    KPI Average user/customer survey score (total and by question category)

o    KPI Percentage of satisfaction surveys answered versus total number of satisfaction surveys sent

  • CSF Increase visibility and communication of incidents to business and IT support staff

o    KPI Average number of service desk calls or other contacts from business users for incidents already reported

o    KPI Number of business user complaints or issues about the content and quality of incident communications

  • CSF Align incident management activities and priorities with those of the business

o    KPI Percentage of incidents handled within agreed response time (incident response-time targets may be specified in SLAs, for example, by impact and urgency codes)

o    KPI Average cost per incident

  • CSF Ensure that standardized methods and procedures are used for efficient and prompt response, analysis, documentation, ongoing management and reporting of incidents to maintain business confidence in IT capabilities

o    KPI Number and percentage of incidents incorrectly assigned

o    KPI Number and percentage of incidents incorrectly categorized

o    KPI Number and percentage of incidents processed per service desk agent

o    KPI Number and percentage of incidents related to changes and releases.

Problem Management: Problem Management is responsible for the activities required to

  • Diagnose the root cause of incidents.
  • Determine the resolution to related problems.
  • Perform trend analysis to identify and resolve problems before they impact the live environment.
  • Ensure that resolutions are implemented through the appropriate control procedures, especially change management and release management.

Problem Management maintains information about problems and appropriate workarounds and resolutions to help the organization reduce the number and impact of incidents over time. To do this, Problem Management has a strong interface with Knowledge Management and uses tools such as the Known Error Database.

  • CSF Minimize the impact to the business of incidents that cannot be prevented

o    KPI The number of known errors added to the KEDB

o    KPI The percentage accuracy of the KEDB (from audits of the database)

o    KPI Percentage of incidents closed by the service desk without reference to other levels of support (often referred to as ‘first point of contact’)

o    KPI Average incident resolution time for those incidents linked to problem records

  • CSF Maintain quality of IT services through elimination of recurring incidents

o    KPI Total numbers of problems (as a control measure)

o    KPI Size of current problem backlog for each IT service

o    KPI Number of repeat incidents for each IT service

  • CSF Provide overall quality and professionalism of problem handling activities to maintain business confidence in IT capabilities

o    KPI The number of major problems (opened and closed and backlog)

o    KPI The percentage of major problem reviews successfully performed

o    KPI The percentage of major problem reviews completed successfully and on time

o    KPI Number and percentage of problems incorrectly assigned

o    KPI Number and percentage of problems incorrectly categorized

o    KPI The backlog of outstanding problems and the trend (static, reducing or increasing?)

o    KPI Number and percentage of problems that exceeded their target resolution times

o    KPI Percentage of problems resolved within SLA targets (and the percentage that are not!)

o    KPI Average cost per problem.

Event Management: These processes have planning, design, and operations activity. Event Management is responsible for any aspect of Service Management that needs to be monitored or controlled and where the monitoring and controls can be automated. This includes:

  • Configuration items.
  • Environmental controls.
  • Software licensing.
  • Security.
  • Normal operational activities.

Event Management includes defining and maintaining Event Management solutions and managing events.

  • CSF Detecting all changes of state that have significance for the management of CIs and IT services

o    KPI Number and ratio of events compared with the number of incidents

o    KPI Number and percentage of each type of event per platform or application versus total number of platforms and applications underpinning live IT services (looking to identify IT services that may be at risk for lack of capability to detect their events)

  • CSF Ensuring all events are communicated to the appropriate functions that need to be informed or take further control actions

o    KPI Number and percentage of events that required human intervention and whether this was performed

o    KPI Number of incidents that occurred and percentage of these that were triggered without a corresponding event

  • CSF Providing the trigger, or entry point, for the execution of many service operation processes and operations management activities

o    KPI Number and percentage of events that required human intervention and whether this was performed

  • CSF Provide the means to compare actual operating performance and behaviour against design standards and SLAs

o    KPI Number and percentage of incidents that were resolved without impact to the business (indicates the overall effectiveness of the event management process and underpinning solutions)

o    KPI Number and percentage of events that resulted in incidents or changes

o    KPI Number and percentage of events caused by existing problems or known errors (this may result in a change to the priority of work on that problem or known error)

o    KPI Number and percentage of events indicating performance issues (for example, growth in the number of times an application exceeded its transaction thresholds over the past six months)

o    KPI Number and percentage of events indicating potential availability issues (e.g. failovers to alternative devices, or excessive workload swapping)

  • CSF Providing a basis for service assurance, reporting and service improvement

o    KPI Number and percentage of repeated or duplicated events (this will help in the tuning of the correlation engine to eliminate unnecessary event generation and can also be used to assist in the design of better event generation functionality in new services)

o    KPI Number of events/alerts generated without actual degradation of service/functionality (false positives – indication of the accuracy of the instrumentation parameters, important for CSI).

Access Management: Access Management aims to grant authorized users the right to use a service, while preventing access by non-authorized users. The Access Management processes essentially execute policies defined in Information Security Management. Access Management is sometimes also referred to as “Rights Management” or “Identity Management”.

  • CSF Ensuring that the confidentiality, integrity and availability of services are protected in accordance with the information security policy

o    KPI Percentage of incidents that involved inappropriate security access or attempts at access to services

o    KPI Number of audit findings that discovered incorrect access settings for users that have changed roles or left the company

o    KPI Number of incidents requiring a reset of access rights

o    KPI Number of incidents caused by incorrect access settings

  • CSF Provide appropriate access to services on a timely basis that meets business needs

o    KPI Percentage of requests for access (service request, RFC etc.) that were provided within established SLAs and OLAs

  • CSF Provide timely communications about improper access or abuse of services on a timely basis

o    KPI Average duration of access-related incidents (from time of discovery to escalation).

Resources:

Communication with English as a Second Language

Comunicación en inglés como segundo idioma

Key takeaway / Conclusión clave

  • Paraphrased quote: No one will know what you wanted to say but didn’t, so don’t worry if you forget something. They will remember how you made them feel.
  • Cita parafraseada: Nadie sabrá lo que querías decir pero no dijiste, así que no necesitas preocuparte si olvidaste algo. Recuerda, nosotros solo recordamos cómo nos hiciste sentir.

How to do a podcast

In this post, you will get a behind-the-scenes look at what it takes to produce your very own podcast — an opportunity to expand your presentation skills into this new landscape. This interactive session taught attendees how to plan, prepare, and produce their very own podcast, giving a bird’s-eye view of the ins and outs of becoming a successful podcaster. As an added bonus, the attendees of this seminar interacted and became part of an official District 58 podcast recording.

Below is the PowerPoint presentation I gave:


The raw audio file can be found below:

The production-quality audio file can be found below:

Links to the products I used can be found here:

Unconventional Hurricane Prep

We are at the height of hurricane season again in the United States. Although hurricane preparation lists are available from multiple websites, I would like to share some unconventional items:

To do:

  1. As bad as a hurricane is, it is a fantastic way to really meet your neighbors.
  2. Fill up your bathtubs with water. If there is no running water afterward, you can pour a bucket of water from the tub into the back of your toilet so you can flush.
  3. Fill up a cup of water and put it in the freezer to freeze. Then put a coin on the top of the frozen water. If you come back from evacuating and don’t know if you lost power for a while check the cup. If the coin is frozen to the bottom of the cup you know the food defrosted and refroze when the power came back on. Throw out the food. If the coin is still on top your food is fine.
  4. Also, fill your Tupperware with water and freeze it. It can be used after the storm in coolers to keep food and drinks cold. Ice will be precious!
  5. Fill up coolers with water for drinking after the storm because the water after the storm is contaminated and may have to be boiled before use.
  6. Make sure nothing is left outside that can hit the house. Pull all outside furniture, bird feeders, etc. into the garage; most things left outside become flying objects. If you can’t bring trash cans or recycling bins in, make sure they are empty of trash before the storm hits and let them fill with water to weigh them down.
  7. Don’t buy hurricane snacks too early because you will eat them before it hits.
  8. Fill up all cars with gasoline or diesel prior to evacuating or staying put.  You never know when the next shipment of gas will come in nor how much more expensive it will be, because of low supply and high demand. Save your gas after the storm. If you must sight-see, use a bicycle.
  9. Close all doors in the home. Especially if you evacuate. If you lose a window, that should confine water damage.
  10. Metal garage doors, especially the 9-to-10-foot variety, can’t handle strong winds; they will collapse inward. Either park in front of them or back a car up against them from the inside. Don’t forget a heavy blanket between the door and the car.
  11. Always assume a downed power line may be present in standing water.
  12. If you lose power and if you have any solar-powered lights, bring some inside to light the house at night and back in the sun during the day. It saves batteries!
  13. If you’re evacuating, don’t forget to unplug any electrical items that you can, e.g., TV, router, desktop PC, etc. If your area loses power, there could be a surge when your power company brings folks back online. Most surge protectors will help, but it’s usually better to be safe than sorry. Also, when areas are brought back online there may be power “blips” (on then off really quickly). It’s best to wait until the power stabilizes to plug things back in.
  14. Charge all your electronics and turn them off before the storm.  This includes computers, cellphones, etc.
  15. Make sure all your dishes and laundry are clean. It might be a week before the power comes back on. In Miami, for Hurricane Wilma, I was without power for almost a month.
  16. If you have no landline phone, arrange to work with someone who does. It can take weeks, if not a month, after a storm before all cell towers are realigned.

Do this every year prior to the storm season

  1. Check your trees for dead limbs. Service them if you can.
  2. If you have a generator or chainsaw, service it now. Make sure it is ready to go.
  3. Check your grill. You’ll need gas or lots of wood/charcoal for cooking. Never use a gas grill indoors.
  4. Stock up on your and your family’s medical needs as well.

If you have pets

  1. Fill up a clean, large plastic bin with water for your pet.
  2. Of course, all pets should be brought in.
  3. With all the noise from wind and rain, if your pet is crate trained, going into her crate will help keep her calm.
  4. If you’re evacuating and you have a To-Go bag, make one for your pet – medicines, collar and leash with id, crate, towel(s), blanket or bed, favorite toy, treats, food, and water; also, any pertinent medical records (shots, medical history). Take your pet with you.
  5. Should the worst occur and your pet gets out and is lost, having her microchipped will help ensure she gets home. VIP Petcare Clinic will chip her for about $19.  After a storm, many pet rescue groups come in to pick up “strays”. If your pet has no microchip, they could pick her up, and your pet could end up being adopted out to someone hundreds or even thousands of miles away.

Again, this is unconventional advice, to be followed along with the conventional advice. However, it is just as valuable.

P-Hacking: The Menace In Science

The American Statistical Association (2016a) statement included the following conversation:

Q: Why do so many colleges and grad schools teach p = 0.05?

A: Because that’s still what the scientific community and journal editors use.

Q: Why do so many people still use p = 0.05?

A: Because that’s what they were taught in college or grad school.

You don’t need to be studying philosophy, or for the Law School Admission Test (LSAT), to see the flaw in that argument.  It’s circular reasoning, and that is the point.  The p-value is being overused when there are so many other ways to measure the strength of the data and its significance. Plus, p = 0.05 is an arbitrary threshold that varies across fields.  I have seen papers use p = 0.10, p = 0.05, p = 0.01, and rarely p = 0.001.  But are the results reliable, replicable, and reproducible? There are even studies that manipulate their data to get these elusive p-values…

Scientific research is the bedrock of pushing society forward. However, not every published study’s results represent the best of science. Some in the field have altered how long a study lasts, failed to account for a confounding variable that could be causing the results, made the sample size too small to be reliable (allowing luck to be in play), or attempted p-hacking (Adam Ruins Everything, 2017; CrashCourse, 2018; Oliver, 2016).

P-hacking is defined as gathering as many variables as possible, then massaging the huge amounts of data to get a statistically significant result (CrashCourse, 2018; Oliver, 2016). However, that result could be completely meaningless. For example, the 538 blog ran a p-hacking study called “You can’t trust what you read about nutrition”: it surveyed 54 people, collected over 1,000 variables, and found a statistically significant correlation between eating raw tomatoes and Judaism. 538 did the study just to point out the issue of p-hacking (Aschwanden, 2016).
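
The multiple-comparisons problem behind p-hacking is easy to simulate: under the null hypothesis a p-value is uniformly distributed, so screening 1,000 unrelated variables at p < 0.05 yields about 50 “significant” findings by chance alone. A small Monte Carlo sketch:

```python
import random

random.seed(42)  # reproducible

ALPHA = 0.05
N_VARIABLES = 1000    # as many variables as the 538 demonstration collected
N_STUDIES = 1000      # simulated "studies"

# Under the null hypothesis, each test's p-value is uniform on [0, 1],
# so screening many unrelated variables all but guarantees false positives.
hits_per_study = []
for _ in range(N_STUDIES):
    p_values = [random.random() for _ in range(N_VARIABLES)]
    hits_per_study.append(sum(p < ALPHA for p in p_values))

avg_hits = sum(hits_per_study) / N_STUDIES
print(f"Average spurious 'discoveries' per study: {avg_hits:.1f}")
print(f"Expected by chance alone: {ALPHA * N_VARIABLES:.0f}")
```

With no real effects anywhere in the data, each simulated study still “discovers” roughly 50 significant correlations — raw-tomatoes-and-Judaism territory.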

As mentioned earlier, the best way to protect ourselves from p-hacking is to replicate the study and see if we can get similar results to the original (Adam Ruins Everything, 2017; Oliver, 2016). Unfortunately, in science there is no prize for fact-checking (Oliver, 2016). That is why, when we do research, we must make sure our results are robust by testing multiple times if possible.  If that is not possible in your own research, then a replication study by others is called for.  However, replication studies are rarely funded and rarely get published (Adam Ruins Everything, 2017). A great way around this is to collaborate with scientific peers from multiple universities: work on the same problem with the same methodology but different datasets, and publish one or a series of papers confirming a result as replicable and robust.  If we don’t do this, the scientific field only funds exploratory studies that get developed and published, and the results never get evaluated. Unfortunately, the adage for most scientists is “publish or perish,” and as Prof. Brian Nosek from the Center for Open Science said, “There is NO COST to getting things WRONG. THE COST is not getting them PUBLISHED.” (Oliver, 2016).

The American Statistical Association (2016b) suggested the following be used alongside p-values to give a more accurate representation of significance:

  • Methods that emphasize estimation over testing
    • Confidence intervals
    • Credibility intervals
    • Prediction intervals
  • Bayesian methods
  • Alternative measures of evidence
    • Likelihood ratios
    • Bayes factors
  • Decision-theoretic modeling
  • False discovery rates
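
As a sketch of the first suggestion (estimation over testing), here is a percentile-bootstrap confidence interval built with only the Python standard library; the sample data is made up for illustration.

```python
import random
import statistics

random.seed(0)

# Hypothetical sample: hours of sleep reported by 54 survey respondents.
sample = [random.gauss(7.0, 1.2) for _ in range(54)]

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for a statistic."""
    estimates = sorted(
        stat([random.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = estimates[int((alpha / 2) * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot)]
    return lo, hi

lo, hi = bootstrap_ci(sample)
print(f"mean = {statistics.mean(sample):.2f}, "
      f"95% CI = ({lo:.2f}, {hi:.2f})")
```

An interval like this communicates both the estimate and its uncertainty, which a lone p-value cannot do.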

Have hope: most reputable scientists don't take the result of one study to heart, but look at it in the context of all the work done in that field (Adam Ruins Everything, 2017). Also, most reputable scientists tend to downplay the implications and generalizations of their results when they publish their findings (American Statistical Association, 2016b; Adam Ruins Everything, 2017; CrashCourse, 2018; Oliver, 2016). Looking for those kinds of studies and knowing how p-hacking is done are the best ammunition to defend against spurious results.

Resources

Finance/Accounting 101: Capital and Operating Expense

Capital Expenditure – CapEx (Finance/Accounting): Includes all spending on an asset that is expected to last for over a year (Apptio, 2018). Usually it is used to undertake a new project, but it can also cover purchasing or upgrading equipment, buildings, etc. (Investopedia, n.d.b.). CapEx includes depreciation; see my previous post for more on that (Apptio, n.d.; Investopedia, n.d.b.). A car is a great example of personal CapEx, given that it depreciates over time and you typically purchase or lease it for more than a year.
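
To tie this back to depreciation, here is a minimal straight-line depreciation sketch for that personal-CapEx car; the price, salvage value, and useful life are all hypothetical numbers.

```python
# Hypothetical figures: a $30,000 car depreciated straight-line
# over 5 years down to a $5,000 salvage value.
purchase_price = 30_000
salvage_value = 5_000
useful_life_years = 5

# Straight-line: spread the depreciable amount evenly over the life.
annual_depreciation = (purchase_price - salvage_value) / useful_life_years

book_value = purchase_price
for year in range(1, useful_life_years + 1):
    book_value -= annual_depreciation
    print(f"Year {year}: depreciation ${annual_depreciation:,.0f}, "
          f"book value ${book_value:,.0f}")
```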

Operating Expense – OpEx (Finance/Accounting): Includes all the ongoing costs of running things as normal (Apptio, 2018; Investopedia, n.d.a.). For instance, OpEx could include rent, equipment, inventory costs, marketing, payroll, insurance, and funds allocated for research and development (Investopedia, n.d.a). Essentially, the rent you pay for housing or a car lease can be considered your own OpEx. Your health, dental, vision, disability, housing, and car coverage can also fall under this category. Even the gas to fuel up a car fits here, given that it is used to keep your asset operable. According to Apptio (2018), bills like electricity, water, etc. can fall under this category as well.

Your budget can be more CapEx-heavy or more OpEx-heavy, each with its own benefits. If you are more CapEx-heavy, your costs are more predictable in the long run and you can easily calculate your net worth; however, you may not have enough cash on hand to pursue some opportunities. If you are more OpEx-heavy, you tend to keep more money free for investment purposes and have more flexibility to take on an opportunity, but it's harder to show or calculate your net worth.

Another way to look at this: OpEx is like the cloud service on your phone, where you pay for what you use, be it 5 GB, 25 GB, 50 GB, etc. CapEx, by contrast, is steady: "I'd rather pay for the entire asset and enjoy as much or as little of it as I want."
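
That analogy can be put into rough numbers. Here is a sketch with made-up prices, comparing pay-per-use cloud storage (OpEx-style) against buying a drive outright (CapEx-style):

```python
# Hypothetical prices: cloud storage at $0.02 per GB-month (OpEx,
# pay for what you use) versus a 2 TB external drive at $120 up
# front (CapEx-like: own the whole asset).
per_gb_month = 0.02
drive_cost = 120
drive_capacity_gb = 2000

def months_until_cloud_exceeds_drive(gb_used):
    """How many months of cloud billing until it costs more than the drive."""
    monthly = gb_used * per_gb_month
    return drive_cost / monthly

for gb in (25, 250, 1000):
    print(f"Using {gb} GB: cloud spending overtakes the drive after "
          f"{months_until_cloud_exceeds_drive(gb):.0f} months")
```

Light users come out ahead paying as they go; heavy users cross over quickly and would be better off owning the asset.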

Resources:

Finance/Accounting 101: Sunk and Opportunity Costs

Opportunity Cost (Finance): Is the cost you miss out on when you go with one option over another (Investopedia, n.d.a.). This usually occurs when you have limited resources. If you have little money to budget, once you factor out all your needs, you only have so much left for your wants. You cannot buy all your wants, so when you buy one want you may not be able to afford another. The biggest limited resource we have is time, and time is usually associated with money. When I did my doctorate, I couldn't use that time to go to law school, so my opportunity cost during my doctorate was law school. However, going to law school now would mean that the opportunity cost I pay is time with family, friends, and pets. As Ursula from The Little Mermaid said, "You can't get something for nothing"; even free things have their cost. You can get a free cookie, slice of cake, ice cream, or pizza slice, but that may mean more time in the gym to burn those unneeded calories.

Opportunity cost can be calculated in dollars, time, or any other metric, but since you forgo that opportunity in exchange for another, you cannot claim it for accounting purposes. However, calculating it can be extremely useful for decision-making.
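
As a sketch of that calculation, with entirely hypothetical figures for the law-school example above:

```python
# Hypothetical figures: weighing three years of law school against
# staying in a current job for those same three years.
law_school_tuition = 150_000   # direct cost of the chosen path
current_salary = 90_000        # salary given up each year
years = 3

# Opportunity cost = the value of the best alternative you give up.
opportunity_cost = current_salary * years
total_economic_cost = law_school_tuition + opportunity_cost

print(f"Opportunity cost (forgone salary): ${opportunity_cost:,}")
print(f"Total economic cost of law school: ${total_economic_cost:,}")
```

The tuition shows up on paper, but the forgone salary nearly doubles the true cost of the decision, which is exactly why opportunity cost is worth writing down even if it never appears in the books.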

Sunk Cost (Finance/Accounting): Is the cost that has already been incurred and cannot be recovered, which matters especially when deciding whether to continue to invest or to divest (Investopedia, n.d.b.). There is a fallacy whereby we as humans tend to include sunk costs when deciding whether to keep going. For instance, say you majored in physics, you are in your senior year, and you realize you want to be a biologist instead. The decision you have to make is to finish physics as a double major with biology, finish physics and stay in that field, or stop studying physics and pursue biology. The sunk costs are all the classes that won't count toward a degree in biology. Some people may look at the problem and say, "I am 3-4 classes away from the degree; I might as well suck it up." Others may say, "I have enough for a minor, and I should cut my losses." When making a decision like this, we should look at the problem anew, without counting what was already invested, because if you hate physics but are 3-4 classes away, you will hate those three or four classes and your future career. It makes no sense to continue.

Sunk cost doesn't mean you cannot try to salvage some value from what you invested. For instance, you could claim a minor in physics, or see which of those credits transfer to lessen the load of classes you need for a biology major. That is a smart way to minimize sunk cost. But if there is a sunk cost, it is OK. The problem is to keep wasting resources on a lost cause and increasing the sunk cost.
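
The physics-versus-biology decision can be written out as a small calculation that deliberately leaves the sunk classes out. All dollar figures and class counts here are made up for illustration:

```python
# Hypothetical setup: deciding whether to finish a physics degree
# (4 classes left) or switch to biology, ignoring classes already taken.
cost_per_class = 1_500
classes_already_taken = 36     # sunk: spent either way, so leave it out
classes_left_physics = 4
classes_needed_biology = 20
value_to_me_physics = 10_000   # subjective value of each outcome (made up)
value_to_me_biology = 60_000

# Rational choice: compare only the costs and benefits still ahead.
net_physics = value_to_me_physics - classes_left_physics * cost_per_class
net_biology = value_to_me_biology - classes_needed_biology * cost_per_class

print(f"Net of finishing physics:  ${net_physics:,}")
print(f"Net of switching to biology: ${net_biology:,}")
print("Better choice:", "biology" if net_biology > net_physics else "physics")
```

Notice that `classes_already_taken` never enters the comparison; including it is precisely the sunk cost fallacy.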

Personally, I have fallen victim to the sunk cost fallacy a few times when it comes to being a life-long learner, especially when reading a book. Just because I checked out a book from the library doesn't mean I have to read it cover to cover, especially if I don't like it after a few chapters. But again, we have a tendency to want to see things through to the end. Letting go of a book is a great exercise in building resilience against the sunk cost fallacy. Give it a try. Has there been a book on your nightstand that you just don't want to read anymore? Then let it go. Donate it to a library, a school, etc. Relish in the fact that you didn't give in to the sunk cost fallacy and keep reading that book to the end.

Resources:

Finance/Accounting 101: Direct and Indirect Costs

Direct Cost (Finance/Accounting): Can consist of fixed and variable costs, but is 100% dedicated to a single service, asset, etc. (Apptio, 2018; Investopedia, n.d.a.). Imagine you buy a new laptop: the purchase price is a fixed, direct cost of acquiring it.

Indirect Cost (Finance/Accounting): Costs that are shared across services, assets, etc. (Apptio, 2018). Let's look at the laptop you just bought above. Even though the price of the physical laptop is a fixed, direct cost, you have indirect fixed and variable costs associated with it. Some of the indirect fixed costs come from purchasing software: an OS license, virus and malware detection software, etc. Meanwhile, some of the indirect variable cost comes from how much electricity you spend keeping your laptop's battery charged. Indirect costs can be hard to find if your budget isn't transparent (Apptio, 2018).
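
Putting the laptop example into numbers, here is a sketch of a simple total-cost view that separates direct from indirect costs; all prices are hypothetical.

```python
# Hypothetical 3-year cost of owning a laptop, split into
# direct costs and indirect (fixed and variable) costs.
direct_costs = {"laptop": 1_200}                       # 100% tied to the asset
indirect_fixed = {"OS license": 140, "antivirus": 60}  # shared software spend
kwh_per_year, price_per_kwh = 50, 0.15                 # indirect variable: power

years = 3
indirect_variable = kwh_per_year * price_per_kwh * years

total_direct = sum(direct_costs.values())
total_indirect = sum(indirect_fixed.values()) + indirect_variable

print(f"Direct:   ${total_direct:,.2f}")
print(f"Indirect: ${total_indirect:,.2f}")
print(f"Total cost of ownership: ${total_direct + total_indirect:,.2f}")
```

Laying the budget out this way makes the otherwise easy-to-miss indirect costs visible, which is the transparency point Apptio makes.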

Resources: