Quant: Statistical Significance

Presume that you have analyzed a relationship between two management styles and found they are significantly related. A statistician has looked at your output and said that the results really do not explain much of what is happening in the total relationship.

In quantitative research, meaningful results are reported along with their statistical significance, confidence intervals, and effect sizes (Creswell, 2014). If the results of a statistical test have a low probability of occurring by chance (5%, 1%, or less), the test is considered statistically significant (Creswell, 2014; Field, 2013; Huck, 2011).  Low significance thresholds help protect against Type I errors (Huck, 2011). Two tests can examine the same effect yet yield different significance values (Field, 2013).  In large samples, even small differences can show up as statistically significant, while in small samples large differences may be deemed insignificant (Field, 2013).  Statistically significant results allow the researcher to reject a null hypothesis, but they do not test the importance of the observations made (Huck, 2011).  Huck (2011) identified two main factors that influence whether a result is statistically significant: the quality of the research question and the research design.  This is why Creswell (2014) also recommended reporting confidence intervals and effect sizes. A confidence interval gives a range of values that describes the uncertainty of the overall observation, and the effect size quantifies the strength of the conclusions drawn from the observations (Creswell, 2014).  Huck (2011) suggested that after statistical significance is calculated and the researcher either rejects or fails to reject the null hypothesis, an effect size analysis should be conducted.  The effect size allows researchers to objectively measure the magnitude, or practical significance, of the research findings by looking at the differential impact of the variables (Field, 2013; Huck, 2011).  Field (2013) defines one way of measuring effect size, Cohen's d: d = (M1 − M2) / s, where M1 and M2 are the two group means and s is a standard deviation.
There are multiple ways to pick the standard deviation for the denominator of the effect size equation: the control group's standard deviation, either group's standard deviation, the population standard deviation, or the pooled standard deviation of the groups, assuming the groups are independent (Field, 2013).  Conventionally, d = 0.2 is a small effect, d = 0.5 a moderate effect, and d = 0.8 or more a large effect (Field, 2013; Huck, 2011). Thus, a statistical test could yield a statistically significant value, yet further analysis of the effect size could show that those statistically significant results do not explain much of what is happening in the total relationship.
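As an illustration of the effect size calculation, here is a minimal sketch in Python (rather than SPSS) of Cohen's d with a pooled standard deviation; the function name and scores are invented for the example:

```python
from statistics import mean, stdev

def cohens_d(x1, x2):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    n1, n2 = len(x1), len(x2)
    # Pool the two sample variances, weighted by their degrees of freedom
    pooled_sd = (((n1 - 1) * stdev(x1) ** 2 + (n2 - 1) * stdev(x2) ** 2)
                 / (n1 + n2 - 2)) ** 0.5
    return (mean(x1) - mean(x2)) / pooled_sd

# Invented productivity scores under two management styles
style_a = [78, 82, 85, 90, 88, 79, 84]
style_b = [72, 75, 80, 77, 74, 79, 76]
d = cohens_d(style_a, style_b)  # positive, since style A averaged higher
```

By the conventions above, a d near 0.2 would be small and a d of 0.8 or more large, regardless of whether the underlying test was statistically significant.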

Resources

  • Creswell, J. W. (2014). Research design: Qualitative, quantitative and mixed method approaches (4th ed.). California: SAGE Publications, Inc. VitalBook file.
  • Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th ed.). UK: Sage Publications Ltd. VitalBook file.
  • Huck, S. W. (2011). Reading statistics and research (6th ed.). Pearson Learning Solutions. VitalBook file.

Quant: Regression and Correlations

Top management of a large company has told you that they would really like to determine the impact that years of service at their company has on workers’ productivity levels, and they would like to be able to predict potential productivity based upon years of service. The company has data on all of its employees and has been using a valid productivity measure that assesses each employee’s productivity. You have told management that there is a possible way to do that.

Through a regression analysis, it should be possible to predict potential productivity based upon years of service, provided two conditions hold: (1) the productivity assessment tool is valid and reliable (Creswell, 2014), and (2) the sample size is large enough to draw statistical inferences about the population from the collected data (Huck, 2011). Assuming these two conditions are met, regression analysis can be performed on the data to create a prediction formula. Regression formulas are useful for summarizing the relationship between the variables in question (Huck, 2011). There are multiple types of regression, all of them tests of prediction: linear, multiple, log-linear, quadratic, cubic, etc. (Huck, 2011; Schumacker, 2014).  Linear regression is the most well-known because it uses basic algebra, a straight line, and the Pearson correlation coefficient to state the regression’s prediction strength (Huck, 2011; Schumacker, 2014).  The linear regression formula is y = a + bx + e, where y is the dependent variable (in this case the productivity measure), x is the independent variable (years of service), a (the intercept) and b (the regression weight) are constants to be estimated through the regression analysis, and e is the regression prediction error (Field, 2013; Schumacker, 2014).  The sum of the errors should equal zero (Schumacker, 2014).
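To make the estimation of a and b concrete, here is a minimal sketch in Python (not SPSS) of ordinary least squares on invented years-of-service and productivity values; all names and numbers are hypothetical, not the company's data:

```python
from statistics import mean

def ols_fit(x, y):
    """Least-squares estimates of the intercept a and weight b in y = a + b*x + e."""
    x_bar, y_bar = mean(x), mean(y)
    b = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
         / sum((xi - x_bar) ** 2 for xi in x))
    a = y_bar - b * x_bar
    return a, b

# Invented data: years of service vs. a productivity measure
years = [1, 2, 4, 5, 8, 10]
productivity = [55, 58, 64, 66, 75, 80]
a, b = ols_fit(years, productivity)
prediction_at_7_years = a + b * 7
# As noted above, the least-squares residuals (errors) sum to zero
residuals = [yi - (a + b * xi) for xi, yi in zip(years, productivity)]
```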

Linear regression models describe the relationship between one dependent and one independent variable, both measured at the interval or ratio level (Schumacker, 2014).  However, other regression models can be tested to find the best fit over the data.  Even though these are different regression tests, the goal of each model is the same: to describe the current relationship between the dependent variable and the independent variable(s), and to predict.  Multiple regression is used when there are multiple independent variables (Huck, 2011; Schumacker, 2014). Log-linear regression uses categorical or continuous independent variables (Schumacker, 2014). Quadratic and cubic regressions use quadratic and cubic formulas to predict trends that are quadratic or cubic in nature, respectively (Field, 2013).  When modeling potential productivity based upon years of service, the regression with the strongest correlation should be used, as that is the formula that best explains the variance between the variables.   However, even when the regression formula can predict some or most of the variance between the variables, it never implies causation (Field, 2013).

Correlations help define the strength of the regression formula in describing the relationships between the variables, and can vary in value from -1 to +1.  The closer the correlation coefficient is to -1 or +1, the better the regression formula is as a predictor of the variance between the variables; the closer it is to zero, the weaker the relationship between the variables (Field, 2013; Huck, 2011; Schumacker, 2014).  A negative correlation could show that as years of service increase, measured productivity decreases, which could be caused by apathy or some other factor that has yet to be measured.  A positive correlation could show that as years of service increase, measured productivity also increases, which could likewise be influenced by factors not directly related to years of service.  Thus, correlation does not imply causation, but squaring the correlation value (r2) gives the percentage of the variance between the variables that is explained by the regression formula (Field, 2013).
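Continuing the same hypothetical sketch in Python, the Pearson correlation coefficient and r2 can be computed as follows:

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x_bar, y_bar = mean(x), mean(y)
    sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    sxx = sum((xi - x_bar) ** 2 for xi in x)
    syy = sum((yi - y_bar) ** 2 for yi in y)
    return sxy / sqrt(sxx * syy)

# Invented years-of-service and productivity values
years = [1, 2, 4, 5, 8, 10]
productivity = [55, 58, 64, 66, 75, 80]
r = pearson_r(years, productivity)
r_squared = r ** 2  # share of the variance in productivity explained by years
```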

References

  • Creswell, J. W. (2014). Research design: Qualitative, quantitative and mixed method approaches (4th ed.). California: SAGE Publications, Inc. VitalBook file.
  • Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th ed.). UK: Sage Publications Ltd. VitalBook file.
  • Huck, S. W. (2011). Reading statistics and research (6th ed.). Pearson Learning Solutions. VitalBook file.
  • Schumacker, R. E. (2014). Learning statistics using R. California: SAGE Publications, Inc. VitalBook file.

Quant: ANOVA and Multiple Comparisons in SPSS

The aim of this analysis is to look at the relationship between the dependent variable of the income level of respondents (rincdol) and the independent variable of their reported level of happiness (happy). This independent variable has three or more levels within it.

Introduction

From the SPSS outputs the goal is to:

  • Use the one-way ANOVA procedure to determine the overall conclusion, and use the Bonferroni correction as a post-hoc analysis to determine the relationship of specific levels of happiness to income.

Hypothesis

  • Null: There is no difference in mean rincdol across the levels of happy
  • Alternative: There are real differences in mean rincdol across the levels of happy
  • Null2: There is no difference in mean rincdol between certain pairs of happy levels
  • Alternative2: There are real differences in mean rincdol between certain pairs of happy levels

Methodology

For this project, the gss.sav file is loaded into SPSS (GSS, n.d.).  The goal is to look at the relationship between the following variables: rincdol (Respondent’s income; ranges recoded to midpoints) and happy (General Happiness). To conduct a parametric analysis, navigate to Analyze > Compare Means > One-Way ANOVA.  The variable rincdol was placed in the “Dependent List” box, and happy was placed in the “Factor” box.  Select “Post Hoc” and, under “Equal Variances Assumed”, select “Bonferroni”.  The procedures for this analysis are provided in video tutorial form by Miller (n.d.). The following output was observed in the next two tables.

The relationship between rincdol and happy is plotted using the chart builder.  The chart builder code is shown in the code section, and the resulting image is shown in the results section.

Results

Table 1: ANOVA

Respondent’s income; ranges recoded to midpoints
Sum of Squares df Mean Square F Sig.
Between Groups 11009722680.000 2 5504861341.000 9.889 .000
Within Groups 499905585000.000 898 556687733.900
Total 510915307700.000 900

The ANOVA analysis (Table 1) shows that the overall ANOVA is statistically significant, such that the first null hypothesis is rejected at the 0.05 level. Thus, there is a statistically significant difference in the relationship between the overall rincdol and happy variables.  However, the omnibus test does not indicate between which levels of happiness the means differ, which is why the post-hoc analysis in Table 2 is needed.
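As a sanity check on Table 1, the F ratio can be recomputed in plain Python (outside SPSS) from the reported sums of squares and degrees of freedom:

```python
# Sums of squares and degrees of freedom reported in Table 1
ss_between, df_between = 11009722680.0, 2
ss_within, df_within = 499905585000.0, 898

ms_between = ss_between / df_between  # mean square between groups
ms_within = ss_within / df_within     # mean square within groups
f_ratio = ms_between / ms_within      # ≈ 9.889, as Table 1 reports
```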

Table 2: Multiple Comparisons

Dependent Variable:   Respondent’s income; ranges recoded to midpoints
Bonferroni
(I) GENERAL HAPPINESS (J) GENERAL HAPPINESS Mean Difference (I-J) Std. Error Sig. 95% Confidence Interval
Lower Bound Upper Bound
VERY HAPPY PRETTY HAPPY 4093.678 1744.832 .058 -91.26 8278.61
NOT TOO HAPPY 12808.643* 2912.527 .000 5823.02 19794.26
PRETTY HAPPY VERY HAPPY -4093.678 1744.832 .058 -8278.61 91.26
NOT TOO HAPPY 8714.965* 2740.045 .005 2143.04 15286.89
NOT TOO HAPPY VERY HAPPY -12808.643* 2912.527 .000 -19794.26 -5823.02
PRETTY HAPPY -8714.965* 2740.045 .005 -15286.89 -2143.04
*. The mean difference is significant at the 0.05 level.

According to Table 2, the pairing of “Very Happy” and “Pretty Happy” did not allow rejection of Null2 at the 0.05 level (Sig. = .058). However, the other pairings, “Very Happy” with “Not Too Happy” and “Pretty Happy” with “Not Too Happy”, do reject the Null2 hypothesis at the 0.05 level.  Thus, there are real differences for two of the three pairs, but not between every pair.
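The logic of the Bonferroni correction in Table 2 can be sketched in plain Python: with three pairwise comparisons, either the alpha level is divided by the number of comparisons, or, equivalently, each raw p-value is multiplied by it and capped at 1 (which is how adjusted Sig. values in a Bonferroni post-hoc output are commonly obtained):

```python
n_comparisons = 3  # the three pairings of the happiness levels

# Two equivalent views of the Bonferroni correction
adjusted_alpha = 0.05 / n_comparisons  # compare raw p-values against ~0.0167

def bonferroni_p(raw_p, k=n_comparisons):
    """Bonferroni-adjusted p-value: raw p times the number of comparisons, capped at 1."""
    return min(1.0, raw_p * k)
```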

Figure 1: Graphed means of General Happiness versus incomes.

The relationship between general happiness and income is positively associated (Figure 1): people reporting lower levels of general happiness tend to have lower mean incomes, and vice versa.  No direction or causality can be inferred from this analysis.  It cannot be said that high income causes general happiness, or that happy people make more money due to their positive attitude towards life.

SPSS Code

DATASET NAME DataSet1 WINDOW=FRONT.
ONEWAY rincdol BY happy
  /MISSING ANALYSIS
  /POSTHOC=BONFERRONI ALPHA(0.05).

* Chart Builder.
GGRAPH
  /GRAPHDATASET NAME="graphdataset" VARIABLES=happy MEAN(rincdol)[name="MEAN_rincdol"]
    MISSING=LISTWISE REPORTMISSING=NO
  /GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
  SOURCE: s=userSource(id("graphdataset"))
  DATA: happy=col(source(s), name("happy"), unit.category())
  DATA: MEAN_rincdol=col(source(s), name("MEAN_rincdol"))
  GUIDE: axis(dim(1), label("GENERAL HAPPINESS"))
  GUIDE: axis(dim(2), label("Mean Respondent's income; ranges recoded to midpoints"))
  SCALE: cat(dim(1), include("1", "2", "3"))
  SCALE: linear(dim(2), include(0))
  ELEMENT: line(position(happy*MEAN_rincdol), missing.wings())
END GPL.

References:

Quant: Group Statistics in SPSS

The aim of this analysis is to determine whether being alive or dead ten years after a coronary event is reflected in a significant difference in diastolic blood pressure taken when that event occurred. The variable “DBP58” will be used as the dependent variable and “Vital10” as the independent variable.

Introduction

From the SPSS outputs the goal is to:

  • Analyze these conditions to determine if there is a significant difference between the DBP levels of those (vital10) who are alive 10 years later compared to those who died within 10 years.

Hypothesis

  • Null: There is no difference in DBP58 between the Vital10 groups
  • Alternative: There are real differences in DBP58 between the Vital10 groups

Methodology

For this project, the electric.sav file is loaded into SPSS (Electric, n.d.).  The goal is to look at the relationship between the following variables: DBP58 (Average Diastolic Blood Pressure) and Vital10 (Status at Ten Years). To conduct a parametric analysis, navigate to Analyze > Compare Means > Independent-Samples T Test.  The variable DBP58 was placed in the “Test Variable(s)” box, and Vital10 was placed in the “Grouping Variable” box.  Then select the “Define Groups” button and enter 0 for “Group 1” and 1 for “Group 2”.  The procedures for this analysis are provided in video tutorial form by Miller (n.d.). The following output was observed in the next two tables.

Results

Table 1: Group Statistics

Status at Ten Years N Mean Std. Deviation Std. Error Mean
Average Diast Blood Pressure 58 Alive 178 87.56 11.446 .858
Dead 61 92.38 16.477 2.110

According to the results in Table 1, the mean diastolic blood pressure of those who had passed away within ten years was nearly 5 points higher (92.38 vs. 87.56) and had a considerably larger standard deviation (16.477 vs. 11.446).  Thus, those who were alive ten years later showed less variation in their diastolic blood pressure.

Table 2: Independent Samples Test

Levene’s Test for Equality of Variances t-test for Equality of Means
F Sig. t df Sig. (2-tailed) Mean Difference Std. Error Difference 95% Confidence Interval of the Difference
Lower Upper
Average Diast Blood Pressure 58 Equal variances assumed 8.815 .003 -2.515 237 .013 -4.815 1.915 -8.587 -1.043
Equal variances not assumed -2.114 80.735 .038 -4.815 2.277 -9.347 -.284

Levene’s test shows that equality of variances cannot be assumed at the 0.05 level (Sig. = .003), so the “equal variances not assumed” row of the t-test is used. On that row, the null hypothesis can be rejected at the 0.05 level because the significance value is 0.038.  Thus, there is a statistically significant difference between the mean diastolic blood pressures of those who were alive and those who had passed away.
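The “equal variances not assumed” row can be reproduced, up to rounding of the summary statistics, from the group values in Table 1; here is a plain-Python sketch of Welch's t statistic and its degrees of freedom:

```python
from math import sqrt

# Group summary statistics from Table 1 (alive vs. dead at ten years)
n1, mean1, sd1 = 178, 87.56, 11.446
n2, mean2, sd2 = 61, 92.38, 16.477

v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2  # squared standard errors of the two means
t = (mean1 - mean2) / sqrt(v1 + v2)    # close to the -2.114 reported in Table 2
# Welch-Satterthwaite degrees of freedom, close to the 80.735 in Table 2
df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
```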

SPSS Code

DATASET NAME DataSet1 WINDOW=FRONT.
T-TEST GROUPS=vital10(0 1)
  /MISSING=ANALYSIS
  /VARIABLES=dbp58
  /CRITERIA=CI(.95).

References:

Quant: Paired Sample Statistics in SPSS

The aim of this analysis is to conduct a comparison of productivity under two organizational structures. The data are artificial estimates of productivity, with column 1 representing traditional vertical management and column 2 representing autonomous work teams (ATW). The background is that a company of 100 factory workers had been operating under traditional vertical management and decided to move to ATW. The same employees were involved in both systems, having first worked under vertical management and then being converted to ATW.

Introduction

From the SPSS outputs the goal is to:

  • Analyze the productivity levels of the 2 management approaches, and decide which is superior.

Hypothesis

  • Null: There is no difference between prodpre and prodpost
  • Alternative: There are real differences between prodpre and prodpost

Methodology

For this project, the atw.sav file is loaded into SPSS (ATW, n.d.).  The goal is to look at the relationships between the following variables: prodpre (productivity level preceding the new process) and prodpost (productivity level following the new process). To conduct a parametric analysis, navigate to Analyze > Compare Means > Paired-Samples T Test.  The variable prodpre was placed in the “Paired Variables” box under “Pair” 1 and “Variable 1”, and prodpost was placed under “Pair” 1 and “Variable 2”.  The procedures for this analysis are provided in video tutorial form by Miller (n.d.). The following output was observed in the next three tables.

Results

Table 1: Paired Sample Statistics

Mean N Std. Deviation Std. Error Mean
Pair 1 productivity level preceding the new process 76.43 100 16.820 1.682
productivity level following the new process 84.24 100 9.797 .980

Descriptively, productivity on average increased by about 8 points, and the standard deviation about the mean decreased by about 7 points.  This means the productivity estimates under traditional vertical management are lower and more widely spread than those under the autonomous work teams.  Essentially, these distributions tell the story that the workers achieved better productivity estimates, with less deviation, under autonomous work teams.

Table 2: Paired Samples Correlation

N Correlation Sig.
Pair 1 productivity level preceding the new process & productivity level following the new process 100 .040 .695

Based on Table 2, there is only a weak, non-significant correlation (r = .040, Sig. = .695) between the productivity estimates under traditional vertical management and those under the autonomous work teams; in any case, correlation does not imply causation.

Table 3: Paired Samples Test

Paired Differences t df Sig. (2-tailed)
Mean Std. Deviation Std. Error Mean 95% Confidence Interval of the Difference
Lower Upper
Pair 1 productivity level preceding the new process – productivity level following the new process -7.817 19.126 1.913 -11.612 -4.022 -4.087 99 .000

Based on the results of the two-tailed paired-samples t-test (Table 3), the null hypothesis can be rejected: there is a significant difference between prodpre and prodpost at the 0.05 level or less.  The data from 100 workers (99 degrees of freedom) show a significant difference between the productivity estimates under traditional vertical management and those under the autonomous work teams.
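The t statistic in Table 3 follows directly from the mean paired difference, its standard deviation, and n = 100; as a plain-Python check:

```python
from math import sqrt

# Paired-difference summary reported in Table 3
mean_diff, sd_diff, n = -7.817, 19.126, 100

se_diff = sd_diff / sqrt(n)  # standard error of the mean difference
t = mean_diff / se_diff      # ≈ -4.087, matching Table 3
df = n - 1                   # 99 degrees of freedom
```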

SPSS Code

DATASET NAME DataSet1 WINDOW=FRONT.
T-TEST PAIRS=prodpre WITH prodpost (PAIRED)
  /CRITERIA=CI(.9500)
  /MISSING=ANALYSIS.

References:

Quant: Lack of detail

You have found a notice about a research study examining two styles of leadership. The researchers only told you that they are trying to recruit subjects for a research project to determine which leadership style is more effective. They have put out this scant, general description of the project on a website, asking for volunteers as subjects.

Concerns about the lack of detail

In this scenario, there is a lack of detail, and to get subjects to participate in this research, as Miller (n.d.) said: “People need to know the specifics.” From the scenario described above, there is no indication of who these researchers are nor of their credentials.  Without a quick biography on the website, it is hard to discern whether these researchers are credible enough to conduct the research. The recruitment of subjects for the study also seems to lack a statement of purpose, which sets the stage, intent, objectives, and major idea of the study to begin with (Creswell, 2014).  The statement of purpose gives the reader (the prospective subjects) the reason these researchers want to examine the two styles of leadership.  It demonstrates the problem statement and defines the specific research questions the researchers are studying (Creswell, 2014).  Creswell (2014) stated that effective purpose statements for quantitative research are written in deductive language and should include the variables, the relationships between the variables, the participants, and the research location.  The intent in quantitative research is demonstrated in the purpose statement by describing the relationships, or lack thereof, between the variables, to be found through either surveys or experiments.  Miller (n.d.) and Creswell (2014) stated that identification of a theory or conceptual framework is needed to build a strong statement of purpose.  Miller (n.d.) goes further, explaining that there needs to be a statement of which two leadership-style theories or dimensions will be evaluated in this study.

There is no mention of whether the recruitment of subjects is part of a pilot study, which is used to help develop and try out methods and procedures, or of the main study, in which the actual data for the study are collected (Gall, Gall, & Borg, 2006).  The methodology section of this call for subjects should have addressed this.  It should also address what type of instrument the researchers are using to collect data from the subjects.  There are two main types of data collection: surveys and experiments.  It is more likely that this study, recruiting subjects to examine two leadership styles, will use surveys as its means of quantitative data collection.  Creswell (2014) defines surveys as numerical data collected, studied, and analyzed from a sample of the population to find out participants’ opinions and attitudes.  If done correctly, statistical inference could be applied to help generalize the results gained from this study to the population these researchers are trying to understand with respect to these two leadership styles (Gall et al., 2006). Miller (n.d.) suggested that the surveys could ask about the subjects’ opinions of or attitudes towards certain leadership-style traits, or could state a few scenarios and have the subjects select a multiple-choice answer.

The survey instrument should be both valid and reliable.  Ideally it has been used before in other studies, with only slight modifications to fit the parameters of this study, and it should be listed on the researchers’ website.  A modification to the instrument may not hold the same validity and reliability as the original instrument.  Moreover, if the study’s instrument lacks validity or reliability, why should the subjects participate and waste their time?  Validity and reliability ensure that the results captured through the instrument will provide valid and meaningful results (Creswell, 2014; Miller, n.d.).  If the current instrument is not fully valid and reliable, this could be an indication of a pilot study meant to help refine and build validity and reliability into the instrument (Gall et al., 2006).   According to Creswell (2014), there are internal, external, and statistical-conclusion threats to validity that must be controlled or mitigated to help draw the correct inferences about the population.

There is no mention of the population these researchers are trying to study with respect to the two leadership styles.  If prospective subjects do not fall under the conditions of the population, they cannot know whether even applying would be a waste of time.  Creswell (2014) states that, depending on the population of a given study, certain instruments work better than others, while others are simply not well-suited enough to provide the validity and reliability needed to generalize results to that population.  The researchers could narrow their population by stating, for example, “This study aims to understand the relationship between X, Y, and Z, as displayed in A & B leadership styles, among the Latin(x)-American population in the state of Oklahoma, ages 25-35 and 45-55.”  Subjects who do not fall under this population would then not need to worry about applying for the study, saving time for both prospective subjects and the researchers.  The study has not mentioned how the population is narrowed into a few dimensions to fit it.  Thus, one can assume that these researchers may be trying to study the general population, which has a huge number of diverse dimensions that are impossible to study (Miller, n.d.).  The scenario also does not mention how the researchers plan to obtain a random selection from this population; submitting a call through their website would draw only a particular type of respondent, which may or may not represent the population these researchers are trying to study.  The closer the sample represents the study’s intended population, the more powerful the statistical inference, and the more representative of the population the conclusions drawn (Gall et al., 2006; Miller, n.d.).

Finally, subjects need participation information that would entice them to take part: how long will the survey take; is there compensation; and will the subject be informed of the results at the end of the study?  If the survey takes too much time and the population these researchers are sampling does not have that time readily available, the participation rate will decrease.  The longer an assessment takes to fill out, the greater the need to compensate the subjects.  There are two common ways to compensate subjects in a study: hand out small amounts of compensation to each participant, or hold a random drawing at the conclusion of the study for two or three prizes of substantial size (Miller, n.d.).  Regardless of whether any compensation is available, the researchers should consider whether there are at least some results or “lessons learned” that the subjects would gain through participation in the study.

References:

Quant: Parametric and Non-Parametric Stats

There are numerous times when the information collected from a real organization will not conform to the requirements of a parametric analysis. That is, a practitioner would not be able to analyze the data with a t-test or F-test (ANOVA). Presume that a young professional came to you and said he or she had read about tests—such as the Chi-Square, the Mann-Whitney U test, the Wilcoxon Signed-Rank test, and Kruskal-Wallis one-way analysis of variance—and wanted to know when you would use each and why each would be used instead of the t-tests and ANOVA.

Parametric statistics is inferential, based on random sampling from a well-defined population, with the sample data used to make strict inferences about the population’s parameters. Thus tests like t-tests, chi-square, and F-tests (ANOVA) can be used (Huck, 2011; Schumacker, 2014).  Nonparametric statistics, the “assumption-free tests,” are used on ranked data, in tests such as the Mann-Whitney U-test, the Wilcoxon signed-rank test, the Kruskal-Wallis H-test, and chi-square (Field, 2013; Huck, 2011).

First, there is a need to define the types of data: continuous data is interval/ratio data, and categorical data is nominal/ordinal data.  The table below is modified from Schumacker (2014), with data added from Huck (2011):

Statistic Dependent Variable Independent Variable
Analysis of Variance (ANOVA)
     One way Continuous Categorical
t-Tests
     Single Sample Continuous
     Independent groups Continuous Categorical
     Dependent (paired groups) Continuous Categorical
Chi-square Categorical Categorical
Mann-Whitney U-test Ordinal Ordinal
Wilcoxon Ordinal Ordinal
Kruskal-Wallis H-test Ordinal Ordinal

ANOVAs (or F-tests) are used to analyze the differences among a group of three or more means by studying the variation between the groups, testing the null hypothesis that the means of the groups are equal (Huck, 2011). Student’s t-tests test the null hypothesis that the mean of a population equals some specified value, and are used when the sample size is relatively small compared to the population size (Field, 2013; Huck, 2011; Schumacker, 2014).  The test assumes a normal distribution (Huck, 2011). With large sample sizes, t-tests/values approach z-tests/values; the same happens with chi-square, as both the t and chi-square distributions have sample size in their functions (Schumacker, 2014).  In other words, at large sample sizes the t-distribution and chi-square distribution begin to look like a normal curve.  Chi-square is related to the variance of a sample, and chi-square tests are used for testing the null hypothesis that the sample comes from a normal distribution (Schumacker, 2014).  Chi-square tests are versatile enough to be used as both parametric and non-parametric tests (Field, 2013; Huck, 2011; Schumacker, 2014).
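For one common use of chi-square, the goodness-of-fit test, the statistic is the sum of (observed − expected)² / expected across categories; here is a minimal plain-Python sketch with invented counts:

```python
def chi_square_stat(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E over the categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Invented counts over three categories, tested against an even split
x2 = chi_square_stat([10, 20, 30], [20, 20, 20])  # = 10.0
```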

The Mann-Whitney U-test and the Wilcoxon rank-sum test are equivalent: both are the non-parametric equivalent of the independent t-test, and the samples do not even have to be the same size (Field, 2013).

The nonparametric Mann-Whitney U-test can be substituted for a t-test when a normal distribution cannot be assumed; it was designed for two independent samples without repeated measures (Field, 2013; Huck, 2011). This makes it a great substitution for the independent-groups t-test (Field, 2013). A benefit of choosing the Mann-Whitney U-test is that it is unlikely to produce a Type II error, a false negative (Huck, 2011). The null hypothesis is that the two independent samples come from the same population (Field, 2013; Huck, 2011).
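For intuition about how the ranking works, here is a minimal plain-Python sketch of the U statistic on tiny invented samples (ties are not handled):

```python
def mann_whitney_u(a, b):
    """U statistic for two independent samples; assumes no tied values."""
    combined = sorted(a + b)
    ranks = {value: rank for rank, value in enumerate(combined, start=1)}
    rank_sum_a = sum(ranks[value] for value in a)
    u_a = rank_sum_a - len(a) * (len(a) + 1) / 2
    u_b = len(a) * len(b) - u_a
    return min(u_a, u_b)  # the smaller U is compared against critical values

# Completely separated groups give the most extreme possible U of 0
u = mann_whitney_u([1, 2, 3], [7, 8, 9])
```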

The nonparametric Wilcoxon signed-rank test is best for distributions that are skewed, where homogeneity of variance cannot be assumed, and where a normal distribution cannot be assumed (Field, 2013; Huck, 2011).  The Wilcoxon signed-rank test compares two related/correlated samples from the same population (Huck, 2011). Each pair of data is chosen randomly and independently, without repetition between the pairs (Huck, 2011).  This makes it a great substitution for the dependent t-test (Field, 2013; Huck, 2011).  The null hypothesis is that the central tendency of the differences is 0 (Huck, 2011).
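Similarly, the signed-rank statistic can be sketched in plain Python for paired data (invented values; zero differences and tied absolute differences are not handled):

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank W: the smaller of the positive and negative rank sums.
    Assumes no zero differences and no tied absolute differences."""
    diffs = [xi - yi for xi, yi in zip(x, y)]
    ordered = sorted(diffs, key=abs)  # rank absolute differences smallest first
    w_pos = sum(rank for rank, d in enumerate(ordered, start=1) if d > 0)
    w_neg = sum(rank for rank, d in enumerate(ordered, start=1) if d < 0)
    return min(w_pos, w_neg)

# Every pair moved in the same direction, so one rank sum (and W) is 0
w = wilcoxon_w([5, 7, 9, 11], [2, 3, 4, 5])
```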

The nonparametric Kruskal-Wallis H-test can be used to compare two or more independent samples from the same distribution; it is considered the analogue of a one-way analysis of variance (ANOVA) and focuses on central tendencies (Huck, 2011).  It is an extension of the Mann-Whitney U-test to more than two groups (Huck, 2011). The null hypothesis is that the medians of all groups are equal (Huck, 2011).

References

  • Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th ed.). UK: Sage Publications Ltd. VitalBook file.
  • Huck, S. W. (2011). Reading statistics and research (6th ed.). Pearson Learning Solutions. VitalBook file.
  • Schumacker, R. E. (2014). Learning statistics using R. California: SAGE Publications, Inc. VitalBook file.