Compelling Topics in Leadership

Leadership Theories:

  • Chapman and Sisodia (2015) define leadership by the value leaders bring to people; their primary guiding value is that “We measure success by the way we touch the lives of people.” This leadership practice stems from treating followers similarly to how someone would want their own children to be treated in the work environment. It relies on coaching followers to build on their greatness, followed by recognition that shakes employees to the core by involving their families, so that each family can be proud of their spouse or parent. The goal of this type of leadership is for employees to feel seen, valued, and heard, so that they want to be and do their best, not just for the company but for their co-workers as well.
  • Cashman (2010) defines leadership from an inside-out approach to personal mastery. This leadership style focuses on self-awareness of the leader’s conscious beliefs and shadow beliefs in order to grow and deepen the leader’s authenticity. Cashman pushes leaders to identify, reflect on, and recognize their core talents, values, and purpose, because the purpose of any leadership is understanding “How am I going to make a difference?” and “How am I going to enhance other people’s lives?” Working from the leader’s core purpose releases untapped energy to do more meaningful work, which frees leaders and opens them up to different possibilities, more so than merely working toward a leader’s goals.
  • Open leadership has five rules: respect and empower customers and employees, consistently build trust, nurture curiosity and humility, hold openness accountable, and forgive failures (Li, 2010). These leaders must let go of the old mentality of micromanaging, because once they do, they are open to growing into new opportunities. This thought process shares commonalities with knowledge sharing: if people share the knowledge they have accumulated, they can let go of their current tasks and focus on new and better opportunities. Li stated that open leadership allows leaders to build, deepen, and nurture relationships with customers and employees. In short, open leadership is a customer- and employee-centered theory of leadership.
  • Values-based leadership rests on four principles: self-reflection, balance, humility, and self-confidence (Kraemer, 2015). Through self-reflection, leaders identify the core beliefs and values that matter to them. Leaders who view situations from multiple perspectives to gain a deeper understanding are considered balanced. Humility refers to leaders not forgetting who they are and where they come from, which builds an appreciation for each person. Finally, self-confidence is the leader accepting themselves as they are, warts and all.

Ethical Behavior

No one wakes up one day and decides to be unethical; however, small acts can build up to unethical behavior (Prentice, 2007). This conclusion about ethics resembles a slippery-slope argument. Understandably, unethical people and unethical actions are not equivalent to evil people or evil actions (Prentice, 2007). As stated by Chapman and Sisodia (2015), “Ethics is people”; ethics usually involves and revolves around people. However, good intentions are not enough to ensure ethical behavior (Prentice, 2007). Thus, Prentice outlined how unethical decisions can be made:

  • Obedience to authority: following orders blindly
  • Conformity bias: observing others in a group and conforming to them, consciously or unconsciously
  • Incrementalism: the slippery slope argument
  • Groupthink: pressure not to stand out from a group consensus
  • Over-optimism: irrational beliefs driven by a strong tendency toward optimism
  • Overconfidence: irrational beliefs driven by a strong tendency toward confidence
  • Self-serving bias: gathering only information that strengthens one’s views or self-interest and discarding challenging viewpoints
  • Framing: how a problem or situation is framed can yield different results
  • Sunk costs: continued consideration of and loyalty to a bad idea just because a significant amount of resources has been poured into it
  • The tangible, the close, and the near term: something tangible, nearby, and close in time weighs more than what is separated by distance or time, or is abstract
  • Loss aversion: people prefer not to act for fear of losing something
  • Endowment effect: people becoming attached to something they already have

Power and conflict

“Leadership is difficult. Inherent in any leadership challenge is stress. Stress comes from the environment, interpersonal conflict, the nature or amount of work, or simply the uncertainty of what lies ahead” (Shankman, Allen, & Haber-Curran, 2015). Even the best teams can fall apart because of conflict if that conflict is not handled properly (Kraemer, 2015). Thus, when a conflict breaks out, there are five strategies people can use: forcing, accommodating, avoiding, compromising, and collaborating; usually, though, people gravitate toward one or two of them (Williams, n.d.).

Kraemer (2015) illustrates this with Campbell Soup, a company that, as it grew, recruited employees who were not aligned with the company’s values; eventually, these people got promoted. The newly promoted, ill-fitting employees were unequipped to create the best teams, and a few bad apples and negative influences nearly destroyed the company through their concentration on short-term rather than long-term goals, raising product prices above the value of private-label store brands. The CEO had many changes to make to turn the company around, and with change comes conflict. Williams (n.d.) illustrates a conflict handled inappropriately: Shaun Williams resorted to physical force during a football game, which got his team penalized heavily, cost the team the game, and ended its season. Constructive conflict and trust, however, are needed for openly and honestly engaging relationships (Cashman, 2010).

Trust

Trust is multidimensional and is key to building all types of relationships: with teammates, with partners, and with oneself. Trust helps build the best teams, where teammates can engage in constructive conflict over each other’s ideas to achieve innovation (Cashman, 2010; Kraemer, 2015). All relationships are built on trust, and it takes just one inauthentic or untrustworthy action to ruin a relationship (Shankman, Allen, & Haber-Curran, 2015). Once trustworthiness is lost, it takes time and hard work to regain it. To be the best partner to someone, a person must be truly committed to the other person’s success as well as their own, while building trust and mutual respect for each other’s experience and working toward long-term collaboration (Kraemer, 2015; Shankman et al., 2015). Trust and belief in oneself are also needed to move from a fixed mindset into a growth mindset (Cashman, 2010; Sivers, 2014), and trust is key for a person to be authentic, to be vulnerable, and to achieve personal mastery (Cashman, 2010). Trust in oneself must come first, before one is able and open to trusting others. Trustworthiness attracts other people to believe in and follow their leader (Shankman et al., 2015).

Cashman (2010) and Shankman et al. (2015) state that leaders engender trust among people by living authentically and trusting in themselves. To build trust, Shankman et al. (2015) suggest following through on your commitments and being open and vulnerable to others by exposing your flaws in a positive way.

Important aspects of Emotional Intelligence

There are four aspects of emotional intelligence: self-awareness, self-management, social awareness, and relationship management (Bradberry, Greaves, & Lencioni, 2009; Help Guide, n.d.). It is important to recognize the emotions being felt and how they lead one to act, which is known as self-awareness (Bradberry et al., 2009; Goleman, n.d.; Help Guide, n.d.). Patterson, Grenny, McMillan, and Switzler (2002) analogized how emotions are formed by describing them as an arrow. In the analogy, the facts are the arrow’s feathers, providing stability; some arrows have many feathers and others few, but the feathers are tied to the shaft, which represents the story built on those facts. That story guides us to the point of the arrow, which points toward the emotions that are felt. Different people can hold different facts about the same scenario, form different stories from them, and thus react to the same situation with different emotions.

Therefore, it is important to understand and recognize which emotions are being felt and which stories have led to them (Goleman, n.d.; O’Neil, 1996; Patterson et al., 2002). Remembering to question the facts is a great way to defuse certain emotional responses and make good life decisions, which is known as self-management (Bradberry et al., 2009; Help Guide, n.d.; O’Neil, 1996; Patterson et al., 2002). O’Neil’s 1996 interview also informed readers that learning and emotions are strongly connected in the prefrontal cortex; consequently, if strong emotions are felt and not dealt with, there is little bandwidth left to focus on learning, furthering the need to understand and control one’s emotions. Moreover, without self-awareness and self-management, one cannot master social awareness and relationship management, because if people cannot understand themselves, how can they seek to understand others or be understood (Bradberry et al., 2009)?

Resources:

  • Bradberry, T., Greaves, J., & Lencioni, P. (2009). Emotional intelligence 2.0. San Diego: TalentSmart.
  • Cashman, K. (2010). Leadership from the inside out: Becoming a leader for life (2nd ed.). San Francisco: Berrett-Koehler Publishers, Inc.
  • Chapman, B., & Sisodia, R. (2015). Everybody matters: The extraordinary power of caring for your people like family. New York: Penguin.
  • Goleman, D. (n.d.). Emotional Intelligence. Retrieved from http://www.danielgoleman.info/topics/emotional-intelligence/
  • Help Guide (n.d.) Improving emotional intelligence (EQ): key skills for managing your emotions and improving your relationships. Retrieved from https://www.helpguide.org/articles/emotional-health/emotional-intelligence-eq.htm

 


Adv Quant: Compelling Topics

Compelling topics summary/definitions

  • Supervised machine learning algorithms: models that need training and testing data sets and that validate the model against predetermined output values, i.e., target values already known in the data (Ahlemeyer-Stubbe & Coleman, 2014; Connolly & Begg, 2014).
  • Unsupervised machine learning algorithms: models that also need training and testing data sets but, unlike supervised learning, do not validate the model against predetermined output values (Ahlemeyer-Stubbe & Coleman, 2014; Connolly & Begg, 2014). Instead, unsupervised learning tries to find the natural relationships in the input data (Ahlemeyer-Stubbe & Coleman, 2014).
  • General Least Squares Model (GLM): the line of best fit for linear regression modeling, along with its corresponding correlations (Smith, 2015). There are five assumptions in a linear regression model: additivity, linearity, independent errors, homoscedasticity, and normally distributed errors.
  • Overfitting: stuffing a regression model with many variables that contribute little predictive weight for the dependent variable (Field, 2013; Vandekerckhove, Matzke, & Wagenmakers, 2014). Thus, to avoid the overfitting problem, parsimony is important in big data analytics.
  • Parsimony: describing a dependent variable with the fewest independent variables possible (Field, 2013; Huck, 2011; Smith, 2015). The concept of parsimony can be attributed to Occam’s razor, which states that “plurality ought never be posited without necessity” (Duignan, 2015). Vandekerckhove et al. (2014) describe parsimony as a way of removing the noise from the signal to create better predictive regression models.
  • Hierarchical regression: a technique in which researchers build a multivariate regression model in stages, adding known independent variables first and newer independent variables later, in order to avoid overfitting (Austin, Goel, & van Walraven, 2001; Field, 2013; Huck, 2011).
  • Logistic regression: a multi-variable regression in which one or more continuous or categorical independent variables are used to predict a dichotomous/binary/categorical dependent variable (Ahlemeyer-Stubbe & Coleman, 2014; Field, 2013; Gall, Gall, & Borg, 2006; Huck, 2011).
  • Nearest neighbor methods: in k-nearest neighbor (e.g., k = 5), a data point is clustered into a group by having its five nearest neighbors vote on that data point; this is particularly useful if the data are binary or categorical (Berson, Smith, & Thearling, 1999).
  • Classification trees: aid in data abstraction and in finding patterns in an intuitive way (Ahlemeyer-Stubbe & Coleman, 2014; Brookshear & Brylow, 2014; Connolly & Begg, 2014), and aid the decision-making process by mapping out all the paths, solutions, or options available to the decision maker.
  • Bayesian Analysis: can be reduced to a conditional probability that aims to take into account prior knowledge, but updates itself when new data becomes available (Hubbard, 2010; Smith, 2015; Spiegelhalter & Rice, 2009; Yudkowsky, 2003).
  • Discriminant analysis: determines how data should best be separated into several groups, based on several independent variables, so as to create the largest separation in the prediction (Ahlemeyer-Stubbe & Coleman, 2014; Field, 2013).
  • Ensemble models: can perform better than a single classifier, since they combine classifiers with weights attached to them to properly classify new data points (Bauer & Kohavi, 1999; Dietterich, 2000), through techniques like bagging and boosting. Boosting procedures help reduce both the bias and the variance of the different methods, while bagging procedures reduce only the variance (Bauer & Kohavi, 1999; Liaw & Wiener, 2002).
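
The k-nearest neighbor voting described above can be sketched in a few lines of plain Python (a minimal illustration only; the function name, data points, and labels are hypothetical):

```python
from collections import Counter
import math

def knn_classify(train, labels, point, k=5):
    """Classify `point` by majority vote of its k nearest training points."""
    # Euclidean distance from `point` to every training observation
    dists = sorted((math.dist(x, point), y) for x, y in zip(train, labels))
    # The k closest neighbors each cast one vote
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D data: two clusters labeled "A" and "B"
train = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labels = ["A", "A", "A", "B", "B", "B"]
print(knn_classify(train, labels, (2, 2), k=3))  # prints A
```

The new point (2, 2) sits near the first cluster, so all three of its nearest neighbors vote “A”; a categorical vote like this is exactly why the method suits binary or categorical targets.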

 

References

  • Ahlemeyer-Stubbe, Andrea, Shirley Coleman. (2014). A Practical Guide to Data Mining for Business and Industry, 1st Edition. [VitalSource Bookshelf Online].
  • Austin, P. C., Goel, V., & van Walraven, C. (2001). An introduction to multilevel regression models. Canadian Journal of Public Health, 92(2), 150.
  • Bauer, E., & Kohavi, R. (1999). An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning, 36(1-2), 105-139.
  • Berson, A. Smith, S. & Thearling K. (1999). Building Data Mining Applications for CRM. McGraw-Hill. Retrieved from http://www.thearling.com/text/dmtechniques/dmtechniques.htm
  • Brookshear, G., & Brylow, D. (2014). Computer Science: An Overview, 12th Edition. [VitalSource Bookshelf Online].
  • Connolly, T., & Begg, C. (2014). Database Systems: A Practical Approach to Design, Implementation, and Management, 6th Edition. [VitalSource Bookshelf Online].
  • Dietterich, T. G. (2000). Ensemble methods in machine learning. International workshop on multiple classifier systems (pp. 1-15). Springer Berlin Heidelberg.
  • Duignan, B. (2015). Occam’s razor. Encyclopaedia Britannica. Retrieved from https://www.britannica.com/topic/Occams-razor
  • Field, Andy. (2013). Discovering Statistics Using IBM SPSS Statistics, 4th Edition. [VitalSource Bookshelf Online].
  • Gall, M. D., Gall, J. P., Borg, W. R. (2006). Educational Research: An Introduction, 8th Edition. [VitalSource Bookshelf Online].
  • Hubbard, D. W. (2010). How to measure anything: Finding the value of “intangibles” in business (2nd ed.). New Jersey: John Wiley & Sons, Inc.
  • Huck, Schuyler W. (2011). Reading Statistics and Research, 6th Edition. [VitalSource Bookshelf Online].
  • Liaw, A., & Wiener, M. (2002). Classification and regression by randomForest. R news, 2(3), 18-22.
  • Smith, M. (2015). Statistical analysis handbook. Retrieved from http://www.statsref.com/HTML/index.html?introduction.html
  • Spiegelhalter, D. & Rice, K. (2009) Bayesian statistics. Retrieved from http://www.scholarpedia.org/article/Bayesian_statistics
  • Vandekerckhove, J., Matzke, D., & Wagenmakers, E. J. (2014). Model comparison and the principle of parsimony.
  • Yudkowsky, E.S. (2003). An intuitive explanation of Bayesian reasoning. Retrieved from http://yudkowsky.net/rational/bayes

Quant: Compelling topics

Most Compelling Topics

Field (2013) states that quantitative and qualitative methods are complementary, not competing, approaches to solving the world’s problems, although the methods are quite different from each other. Simply put, quantitative methods are used when the research contains variables that are numerical, and qualitative methods when the research contains variables that are based on language (Field, 2013). Thus, central to quantitative research and methods is understanding the numerical, ordinal, or categorical data set and what the data represent. This can be done through descriptive statistics, where the researcher uses statistics to describe a data set, or through inferential statistics, where conclusions are drawn about the data set (Miller, n.d.).

Field (2013) and Schumacker (2014) define central tendency as an all-encompassing term for describing the “center of a frequency distribution” through the commonly used measures mean, median, and mode. Outliers, missing values, multiplication by a constant, and adding a constant are factors that affect the central tendency (Schumacker, 2014). Besides looking at a single measure of central tendency, researchers can also analyze the mean and median together to understand how skewed the data are and in which direction. Heavy skew increases the distance between these two values, and if the mean is less than the median, the distribution is negatively skewed (Field, 2013). To understand the distribution better, other measures like the variance and standard deviation can be used.
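
Python’s standard `statistics` module can illustrate the mean/median comparison described above (the sample values are hypothetical):

```python
import statistics

# Hypothetical sample with a few large values pulling the mean upward
data = [2, 3, 3, 4, 4, 4, 5, 6, 14, 20]

mean = statistics.mean(data)      # 6.5
median = statistics.median(data)  # 4.0
mode = statistics.mode(data)      # 4

# mean > median suggests a positive (right) skew;
# mean < median suggests a negative (left) skew.
skew = "positive" if mean > median else "negative" if mean < median else "none"
print(mean, median, mode, skew)
```

Here the two large outliers drag the mean well above the median, flagging a positively skewed distribution without plotting anything.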

The variance and standard deviation are measures of dispersion, with the variance being a measure of average dispersion (Field, 2013; Schumacker, 2014). The variance is a numerical value that describes how the observed data values are spread across the data distribution and how they differ from the mean on average (Huck, 2011; Field, 2013; Schumacker, 2014). A smaller variance indicates that the observed data values are close to the mean, and vice versa (Field, 2013).
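
A short sketch of these dispersion measures, again with hypothetical data, using the standard library:

```python
import statistics

# Hypothetical observed data values
data = [4, 8, 6, 5, 3, 7, 9, 6]

mean = statistics.mean(data)     # 6
var = statistics.variance(data)  # sample variance (n - 1 in the denominator)
sd = statistics.stdev(data)      # standard deviation = sqrt(variance)

# A smaller variance means the values cluster tightly around the mean;
# here each observation deviates from the mean by about `sd` units.
print(mean, var, sd)
```

For this sample the variance works out to 4 and the standard deviation to 2, so a typical observation sits about two units from the mean of 6.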

Rarely is every member of a population studied; instead, in quantitative research a sample is randomly taken from the population to represent it in the analysis (Gall, Gall, & Borg, 2006). At the end of the day, the insights gained from this type of research should be impersonal, objective, and generalizable. Generalizing the insights gained from a sample of data requires the correct mathematical procedures for using probabilities and information: statistical inference (Gall et al., 2006). Gall et al. (2006) stated that statistical inference dictates the order of procedures; for instance, a hypothesis and a null hypothesis must be defined before a statistical significance level, which in turn must be defined before calculating a z or t statistic. Essentially, statistical inference allows quantitative researchers to make inferences about a population, and researchers must remember where that population’s data were generated and collected during the quantitative research process.
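
The ordering Gall et al. describe (hypotheses first, then the significance level, then the test statistic) can be sketched for a one-sample z test; every number below is hypothetical:

```python
import math

# Step 1: state H0 (population mean is 100; sigma assumed known = 15)
# Step 2: fix alpha *before* computing anything
mu0, sigma, alpha = 100, 15, 0.05

# Step 3: collect sample evidence: n observations with sample mean x_bar
n, x_bar = 36, 106

# Step 4: only now compute the test statistic
# z = (x_bar - mu0) / (sigma / sqrt(n))
z = (x_bar - mu0) / (sigma / math.sqrt(n))
print(z)  # 2.4, which exceeds the 1.96 critical value at alpha = 0.05
```

Because 2.4 falls beyond the two-tailed critical value of 1.96, this hypothetical sample would lead the researcher to reject the null hypothesis at the 5% level.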

Most flaws in research methodology exist because validity and reliability weren’t established (Gall et al., 2006). Thus, it is important to ensure a valid and reliable assessment instrument. In using any existing survey as an assessment instrument, one should report the instrument’s development, its items and scales, and reports on its reliability and validity from past uses (Creswell, 2014; Joyner, 2012). Permission must be secured for using any instrument and placed in the appendix (Joyner, 2012). The validity of the assessment instrument is key to drawing meaningful and useful statistical inferences (Creswell, 2014).

Through sampling a population and using a valid and reliable survey instrument for assessment, the attitudes and opinions of a population can be correctly inferred from the sample (Creswell, 2014). Sometimes a survey instrument doesn’t fit those in the target group, and thus it will not produce valid or reliable inferences for the targeted population. One must select a targeted population and determine the size of that stratified population (Creswell, 2014).

Parametric statistics are inferential, based on random sampling from a distinct population, where the sample data are used to make strict inferences about the population’s parameters; thus tests like t-tests, chi-square, and F-tests (ANOVA) can be used (Huck, 2011; Schumacker, 2014). Nonparametric statistics, or “assumption-free tests,” are used with ranked data, in tests like the Mann-Whitney U-test, Wilcoxon Signed-Rank test, Kruskal-Wallis H-test, and chi-square (Field, 2013; Huck, 2011).

First, there is a need to define the types of data: continuous data are interval/ratio data, and categorical data are nominal/ordinal data. The table below is modified from Schumacker (2014), with data added from Huck (2011):

Statistic                          Dependent Variable   Independent Variable
Analysis of Variance (ANOVA)
     One way                       Continuous           Categorical
t-Tests
     Single sample                 Continuous
     Independent groups            Continuous           Categorical
     Dependent (paired groups)     Continuous           Categorical
Chi-square                         Categorical          Categorical
Mann-Whitney U-test                Ordinal              Ordinal
Wilcoxon Signed-Rank test          Ordinal              Ordinal
Kruskal-Wallis H-test              Ordinal              Ordinal
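
As one worked example from the nonparametric rows of the table, the Mann-Whitney U statistic for two independent ordinal groups can be computed from pooled ranks (a minimal sketch; the function name and sample values are hypothetical):

```python
def mann_whitney_u(group1, group2):
    """Return U for group1: rank all values together, then sum group1's ranks."""
    pooled = sorted(group1 + group2)
    # Tied values share the average of the ranks they occupy (1-based ranks)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    r1 = sum(ranks[v] for v in group1)
    n1 = len(group1)
    # U = R1 - n1(n1 + 1)/2
    return r1 - n1 * (n1 + 1) / 2

print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # prints 0.0
```

U = 0 is the minimum possible value, reflecting complete separation of the two groups: every value in the first group ranks below every value in the second.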

So, meaningful results get reported along with their statistical significance, confidence intervals, and effect sizes (Creswell, 2014). If the results from a statistical test have a low probability of occurring by chance (5% or 1% or less), the test is considered significant (Creswell, 2014; Field, 2013; Huck, 2011). Statistical significance tests can have the same effect yet result in different values (Field, 2013). Statistical significance in large sample sizes can be affected by small differences, which can show up as significant, while in smaller samples large differences may be deemed insignificant (Field, 2013). Statistically significant results allow the researcher to reject a null hypothesis but do not test the importance of the observations made (Huck, 2011). Huck (2011) stated that two main factors that can influence whether a result is statistically significant are the quality of the research question and the research design.

Huck (2011) suggested that after statistical significance is calculated and the researcher can either reject or fail to reject the null hypothesis, an effect size analysis should be conducted. The effect size allows researchers to objectively measure the magnitude, or practical significance, of the research findings by looking at the differential impact of the variables (Huck, 2011; Field, 2013). Field (2013) gives one way of measuring the effect size, Cohen’s d: d = (Avg(x1) – Avg(x2)) / (standard deviation). At d = 0.2 there is a small effect, at d = 0.5 a moderate effect, and at d = 0.8 or more a large effect (Field, 2013; Huck, 2011). This could be why a statistical test yields a statistically significant value while further analysis of the effect size shows that those statistically significant results do not explain much of what is happening in the total relationship.
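
Cohen’s d from the formula above can be sketched directly; note the pooled standard deviation used here is one common choice for the denominator, and the group values are hypothetical:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: difference in means divided by a standard deviation
    (the pooled SD is assumed here as the denominator)."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical groups; benchmarks: 0.2 small, 0.5 moderate, 0.8+ large
d = cohens_d([5, 6, 7, 8, 9], [4, 5, 6, 7, 8])
print(round(d, 2))  # prints 0.63, a moderate effect
```

A mean difference of one unit against a pooled SD of about 1.58 yields a moderate effect, regardless of whether the accompanying p-value happened to cross a significance threshold.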

In regression analysis, it should be possible to predict the dependent variable based on the independent variables, depending on two factors: (1) that the productivity assessment tool is valid and reliable (Creswell, 2014) and (2) we have a large enough sample size to conduct our analysis and be able to draw statistical inference of the population based on the sample data which has been collected (Huck, 2011). Assuming these two conditions are met, then regression analysis could be made on the data to create a prediction formula. Regression formulas are useful for summarizing the relationship between the variables in question (Huck, 2011).

When modeling to predict the dependent variable from the independent variables, the regression model with the strongest correlation is used, as that regression formula best explains the variance between the variables. However, just because a regression formula can predict some or most of the variance between the variables, it will never imply causation (Field, 2013). Correlations help define the strength of a regression formula in describing the relationships between the variables and can vary in value from -1 to +1. The closer the correlation coefficient is to -1 or +1, the better a predictor the regression formula is of the variance between the variables; the closer the correlation coefficient is to zero, the weaker the relationship between the variables (Field, 2013; Huck, 2011; Schumacker, 2014). It should never be forgotten that correlation doesn’t imply causation, but squaring the correlation value (r2) gives the percentage of the variance between the variables explained by the regression formula (Field, 2013).
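
The correlation coefficient and its square can be computed from first principles (a minimal sketch; the paired observations below are hypothetical):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired variables."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired observations
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]

r = pearson_r(xs, ys)
r_squared = r ** 2  # share of the variance explained by the linear relationship
print(round(r, 3), round(r_squared, 3))  # prints 0.775 0.6
```

Here r is about 0.775, so r2 = 0.6: the linear relationship accounts for 60% of the variance between the variables, and, as the text stresses, it says nothing about causation.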

 

References:

  • Creswell, J. W. (2014) Research design: Qualitative, quantitative and mixed method approaches (4th ed.). California, SAGE Publications, Inc. VitalBook file.
  • Field, A. (2013) Discovering Statistics Using IBM SPSS Statistics (4th ed.). UK: Sage Publications Ltd. VitalBook file.
  • Gall, M. D., Gall, J., & Borg W. (2006). Educational research: An introduction (8th ed.). Pearson Learning Solutions. VitalBook file.
  • Huck, S. W. (2011) Reading Statistics and Research (6th ed.). Pearson Learning Solutions. VitalBook file.
  • Joyner, R. L. (2012) Writing the Winning Thesis or Dissertation: A Step-by-Step Guide (3rd ed.). Corwin. VitalBook file.
  • Miller, R. (n.d.). Week 1: Central tendency [Video file]. Retrieved from http://breeze.careeredonline.com/p9fynztexn6/?launcher=false&fcsContent=true&pbMode=normal
  • Schumacker, R. E. (2014) Learning statistics using R. California, SAGE Publications, Inc, VitalBook file.