Non-parametric Tests

Corresponding author: Shengping Yang. Contact information: Shengping.yang@ttuhsc.edu. DOI: 10.12746/swrccc2014.0208.109

In some studies, the instrument used cannot provide precise measurements of the outcome of interest for some of the samples. In such cases, a value such as "undetectable" is usually assigned to those samples. Statistically, these data are difficult to analyze with parametric methods, such as the t test or ANOVA, without making major assumptions or censoring. For example, suppose we assign two different arbitrary values (beyond the detection threshold) to the non-detectable observations; we might get very different results, because assigning different values to the non-detectables changes the mean and variance of the whole sample. As a simple and easy-to-implement alternative, a non-parametric method is usually recommended.


Introduction
A parametric statistical test is one that makes assumptions about the parameters (defining properties) of the population distribution(s) from which one's data are drawn.
A non-parametric test is one that makes no such assumptions. In this strict sense, "non-parametric" is essentially a null category, since virtually all statistical tests assume one thing or another about the properties of the source population(s).
Which is more powerful? When the parametric assumptions hold, non-parametric procedures are less powerful because they use less of the information in the data. For example, a parametric correlation uses information about the mean and the deviations from the mean, while a non-parametric correlation uses only the ordinal positions of the pairs of scores.
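This difference is easy to see by comparing the Pearson (parametric) and Spearman (nonparametric) correlations on the same data. The numbers below are made up for illustration; the point is that Spearman uses only ranks, so a single monotone outlier does not disturb it:

```python
# Pearson uses the actual values; Spearman uses only their ranks.
from scipy import stats

x = [1, 2, 3, 4, 5, 6]
y = [2, 4, 6, 8, 10, 120]   # last pair is a large outlier, but still monotone

pearson_r, _ = stats.pearsonr(x, y)
spearman_r, _ = stats.spearmanr(x, y)

# The outlier pulls the Pearson correlation well below 1,
# while the ranks are perfectly monotone, so Spearman's rho is 1.
print(pearson_r, spearman_r)
```

Here the nonparametric statistic is more robust, but on clean normal data it would typically be slightly less powerful.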

Parametric Assumptions
 The observations must be independent
 The observations must be drawn from normally distributed populations
 These populations must have the same variances
 The means of these normal and homoscedastic populations must be linear combinations of effects due to columns and/or rows

Nonparametric Assumptions
Certain assumptions are associated with most nonparametric statistical tests, but these are fewer and weaker than those of parametric tests.

Advantages of Nonparametric Tests
 Probability statements obtained from most nonparametric statistics are exact probabilities, regardless of the shape of the population distribution from which the random sample was drawn
 With very small samples (e.g., N = 6), there is often no practical alternative to a nonparametric test
 Easier to learn and apply than parametric tests
 Based on a model that specifies only very general conditions, with no specific form assumed for the distribution from which the sample was drawn; hence nonparametric tests are also known as distribution-free tests

Nonparametric Methods
For most common parametric tests, there is at least one equivalent nonparametric test.

Chi-square Test
This goodness-of-fit test compares the observed and expected frequencies in each category to test either that all categories contain the same proportion of values or that each category contains a user-specified proportion of values.

Examples
The chi-square test could be used to determine whether a basket of fruit contains equal proportions of apples, bananas, oranges, and peaches. Follow the steps as shown: move the count variable into the Test Variable List, then click OK to get the output shown below.

Interpretation:
Here the p value is 0.981, which is greater than 0.05. Hence the result is not significant; we fail to reject the null hypothesis and conclude that there is no significant difference in the proportions of apples, bananas, oranges, and peaches. We could also test whether a basket of fruit contains 10% apples, 20% bananas, 50% oranges, and 20% peaches. To do this, define the proportions by selecting the "Values" option and adding each expected proportion in turn.
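The same goodness-of-fit test can be sketched outside SPSS with SciPy. The fruit counts below are made up for illustration; the first call tests equal proportions, the second tests the user-specified 10/20/50/20 split:

```python
from scipy import stats

# Hypothetical counts: apples, bananas, oranges, peaches
observed = [25, 22, 27, 26]

# H0: all four categories are equally likely (default expected frequencies)
chi2_equal, p_equal = stats.chisquare(observed)

# H0: 10% apples, 20% bananas, 50% oranges, 20% peaches
n = sum(observed)
expected = [0.10 * n, 0.20 * n, 0.50 * n, 0.20 * n]
chi2_spec, p_spec = stats.chisquare(observed, f_exp=expected)

print(p_equal, p_spec)
```

With these counts the equal-proportions hypothesis is retained while the 10/20/50/20 hypothesis is rejected, mirroring how the two SPSS variants behave.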

Binomial Test
The Binomial Test procedure is useful when you want to compare a single sample on a dichotomous variable to an expected proportion. If the dichotomy does not exist in the data as a variable, one can be created dynamically based on a cut point on a scale variable (for example, age in the sample data). If your variable has more than two outcomes, try the Chi-Square Test procedure. If you want to compare two dichotomous variables, try the McNemar test in the Two-Related-Samples Tests procedure.

Example
Say we wish to test whether the proportion of females for the variable "gender" differs significantly from 50%, i.e., from 0.5. We will use the exact statement to produce exact p values. Get the data.
Follow the steps as shown below and move the variable gender into the Test Variable List.
Click OK to get the output.

Interpretation:
Since the p value is 1, the result is not significant; we fail to reject the null hypothesis and conclude that the proportion of females for the variable "gender" does not differ significantly from 50%.
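An exact binomial test of this kind can be sketched with SciPy's `binomtest`. The counts below are hypothetical; with exactly half the sample female, the exact two-sided p value is 1, as in the SPSS output above:

```python
from scipy import stats

# Hypothetical sample: 12 females out of 24 observations
n_female, n_total = 12, 24

# Exact two-sided test of H0: proportion of females = 0.5
result = stats.binomtest(n_female, n_total, p=0.5)
print(result.pvalue)
```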

Run Test for Randomness
The runs test is used to examine whether or not a set of observations constitutes a random sample from an infinite population. Testing for randomness is of major importance because the assumption of randomness underlies statistical inference. In addition, tests for randomness are important for time series analysis. Departure from randomness can take many forms. The cut point is based either on a measure of central tendency (mean, median, or mode) or on a custom value. A sample with too many or too few runs suggests that the sample is not random.

Example
Let's see whether the variable "AGE" in the dataset below is random. Load the data.
Follow these steps.
Select "AGE" in the Test Variable List.
The variable "AGE" must be divided into two separate groups, so we must indicate a cut point; here we take the median. Any value below the median belongs to one group, and any value greater than or equal to the median belongs to the other group. Now click OK to get the output.

Interpretation:
The p value is 0.450, which is not significant, so we cannot conclude that AGE is non-random.
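The runs test with a median cut point can be sketched directly from its definition: dichotomize at the cut point, count runs, and compare the count to its normal approximation. The ages below are made up, and this hand-rolled version is illustrative rather than SPSS's exact implementation:

```python
import math

def runs_test(values, cut=None):
    """Wald-Wolfowitz runs test around a cut point (default: the median)."""
    if cut is None:
        s = sorted(values)
        mid = len(s) // 2
        cut = s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
    # As in SPSS: values below the cut form one group, values >= cut the other
    signs = [1 if v >= cut else 0 for v in values]
    n1 = sum(signs)
    n2 = len(signs) - n1
    # A "run" is a maximal stretch of identical group codes
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    # Normal approximation to the number of runs under randomness
    n = n1 + n2
    mu = 2 * n1 * n2 / n + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    z = (runs - mu) / math.sqrt(var)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return runs, z, p

ages = [34, 45, 29, 61, 52, 38, 47, 55, 41, 66, 30, 58]  # hypothetical ages
print(runs_test(ages))
```

A perfectly alternating series (too many runs) or a fully sorted one (too few runs) would both yield a small p value here.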

One-Sample Kolmogorov-Smirnov Test
The One-Sample Kolmogorov-Smirnov procedure is used to test the null hypothesis that a sample comes from a particular distribution. Four theoretical distributions are available: normal, uniform, Poisson, and exponential. If you want to compare the distributions of two variables, use the two-sample Kolmogorov-Smirnov test in the Two-Independent-Samples Tests procedure.
Example: Let us test whether the variable "AGE" in the cancer dataset used for the runs test above follows a normal or a uniform distribution.

SPSS Steps
Get the data as done before, then select "AGE" in the Test Variable List.
Check the distribution you want to test against, then click OK to get the output.

Interpretation:
The p value is 0.997, which is not significant; therefore we cannot say that "AGE" departs from an approximately normal distribution. If the p value were less than 0.05, the result would be significant and we would conclude that AGE does not follow an approximately normal distribution.
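A one-sample Kolmogorov-Smirnov test against a normal distribution can be sketched with SciPy's `kstest`. The ages below are made up; note that plugging in the sample's own mean and standard deviation (as done here for simplicity) makes the test conservative, which is why corrections such as Lilliefors' exist:

```python
from scipy import stats

ages = [34, 45, 29, 61, 52, 38, 47, 55, 41, 66, 30, 58]  # hypothetical ages

# Fit the normal parameters from the sample (a simplification; see note above)
mean = sum(ages) / len(ages)
sd = (sum((a - mean) ** 2 for a in ages) / (len(ages) - 1)) ** 0.5

# K-S test of H0: the ages come from Normal(mean, sd)
stat, p = stats.kstest(ages, "norm", args=(mean, sd))
print(stat, p)
```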

Two-Independent-Samples Tests
The nonparametric tests for two independent samples (such as the Mann-Whitney U test) are useful for determining whether or not the values of a particular variable differ between two groups. This is especially true when the assumptions of the t test are not met. Enter the variable sales in the Test Variable List and design as the Grouping Variable.
Since we are performing a two-independent-samples test, we have to designate which two groups of the factor design we want to compare, so click "Define Groups".
Here we enter groups 2 and 1. The order is not important; we only have to enter two distinct groups. Then click Continue and OK to get the output.

Interpretation:
Two p values are displayed: the asymptotic value, which is appropriate for large samples, and the exact value, which is independent of sample size. We therefore take the exact p value, i.e., 0.548, which is not significant, and conclude that there is no significant difference in sales between design groups 1 and 2.
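The same comparison can be sketched with SciPy's Mann-Whitney U test. The sales figures below are hypothetical stand-ins for the SPSS dataset:

```python
from scipy import stats

# Hypothetical sales under two package designs
design1 = [12, 15, 11, 18, 14, 16]
design2 = [13, 17, 12, 19, 15, 20]

# Two-sided Mann-Whitney U test of H0: the two groups have the same distribution
u_stat, p = stats.mannwhitneyu(design1, design2, alternative="two-sided")
print(u_stat, p)
```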

Multiple Independent Samples Tests
The nonparametric tests for multiple independent samples are useful for determining whether or not the values of a particular variable differ between two or more groups. This is especially true when the assumptions of ANOVA are not met.
 Median test: tests the null hypothesis that two or more independent samples have the same median. It assumes nothing about the distribution of the test variable, making it a good choice when you suspect that the distribution varies by group.
 Kruskal-Wallis H: a one-way analysis of variance by ranks. It tests the null hypothesis that multiple independent samples come from the same population.
 Jonckheere-Terpstra test: an exact test, appropriate when the groups have a natural ordering.

Example:
We want to find out whether the sales are different between the designs (Comparing more than two samples simultaneously)

SPSS Steps:
Get the data in SPSS window as done before. Then…

Define range
Click continue then OK to get output.

Interpretation:
The p value is 0.003, which is significant. Therefore we conclude that there is a significant difference between the groups (meaning at least two groups differ).
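Both the Kruskal-Wallis H test and the median test can be sketched with SciPy. The sales figures below are made up, with the second design deliberately higher so the group difference is detectable:

```python
from scipy import stats

# Hypothetical sales under three package designs (design 2 sells more)
d1 = [12, 15, 11, 14, 13]
d2 = [18, 21, 19, 22, 20]
d3 = [13, 16, 12, 15, 14]

# Kruskal-Wallis H: one-way ANOVA by ranks
h_stat, p_kw = stats.kruskal(d1, d2, d3)

# Median test: do the groups share a common median?
stat_m, p_med, grand_median, table = stats.median_test(d1, d2, d3)

print(p_kw, p_med)
```

A significant result from either test only tells you that at least two groups differ, not which ones; pairwise follow-up tests would be needed for that.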

Tests for Two Related Samples
The nonparametric tests for two related samples allow you to test for differences between paired scores when you cannot (or would rather not) make the assumptions required by the paired-samples t test. The sign test and the Wilcoxon signed-rank test are both used for continuous data; of the two, the Wilcoxon test is more powerful.
Example: Use the cancer data deployed in the runs test to test whether the condition of the cancer patients at the end of the 2nd week and the 4th week differs significantly (here, the higher the reading, the better the condition).
Interpretation: The p value is 0.006, which is significant. This indicates that the condition of the cancer patients at the end of the 2nd week and the 4th week differs.
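The two paired tests can be sketched side by side with SciPy. The condition scores below are hypothetical; the contrast shows why Wilcoxon is more powerful: it uses the magnitudes of the paired differences, while the sign test uses only their directions:

```python
from scipy import stats

# Hypothetical condition scores (higher = better) for the same patients
week2 = [3, 4, 2, 5, 3, 4, 2, 3, 4, 3]
week4 = [5, 6, 4, 6, 5, 5, 4, 5, 6, 4]

# Wilcoxon signed-rank test: ranks the magnitudes of the paired differences
w_stat, p_wilcoxon = stats.wilcoxon(week2, week4)

# Sign test: counts only how many differences are positive vs negative,
# then applies an exact binomial test against p = 0.5
n_pos = sum(1 for a, b in zip(week2, week4) if b > a)
n_neg = sum(1 for a, b in zip(week2, week4) if b < a)
p_sign = stats.binomtest(n_pos, n_pos + n_neg, p=0.5).pvalue

print(p_wilcoxon, p_sign)
```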

Tests for Multiple Related Samples
The nonparametric tests for multiple related samples (such as Friedman's test) are useful alternatives to a repeated measures analysis of variance.
Interpretation: The p value is less than 0.05. Hence there is a significant difference between the four groups (meaning at least two groups differ).
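Friedman's test, the rank-based alternative to repeated measures ANOVA, can be sketched with SciPy. The four time points below are hypothetical repeated measurements on the same eight subjects:

```python
from scipy import stats

# Hypothetical repeated measurements on 8 subjects at four time points
t1 = [10, 12, 13, 11, 12, 14, 10, 11]
t2 = [12, 14, 15, 13, 13, 16, 12, 12]
t3 = [14, 15, 17, 15, 16, 18, 13, 14]
t4 = [15, 17, 18, 16, 17, 19, 13, 16]

# Friedman test: ranks the four conditions within each subject,
# then tests whether the mean ranks differ across conditions
chi2, p = stats.friedmanchisquare(t1, t2, t3, t4)
print(chi2, p)
```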

Exact Tests and Monte Carlo Method
These new methods, the exact and Monte Carlo methods, provide a powerful means for obtaining accurate results when your data set is small, your tables are sparse or unbalanced, the data are not normally distributed, or the data fail to meet any of the underlying assumptions necessary for reliable results using the standard asymptotic method.

The Exact Method
By default, IBM® SPSS® Statistics calculates significance levels for the statistics in the Crosstabs and Nonparametric Tests procedures using the asymptotic method. This means that p values are estimated based on the assumption that the data, given a sufficiently large sample size, conform to a particular distribution.
However, when the data set is small, sparse, contains many ties, is unbalanced, or is poorly distributed, the asymptotic method may fail to produce reliable results. In these situations, it is preferable to calculate a significance level based on the exact distribution of the test statistic. This enables you to obtain an accurate p value without relying on assumptions that may not be met by your data.

The Monte Carlo Method
Although exact results are always reliable, some data sets are too large for the exact p value to be calculated, yet do not meet the assumptions necessary for the asymptotic method. In this situation, the Monte Carlo method provides an unbiased estimate of the exact p value without the requirements of the asymptotic method. The Monte Carlo method is a repeated sampling method. For any observed table, there are many possible tables, each with the same dimensions and the same row and column margins as the observed table. The Monte Carlo method repeatedly samples a specified number of these possible tables in order to obtain an unbiased estimate of the true p value.
The Monte Carlo method is less computationally intensive than the exact method, so results can often be obtained more quickly. However, if you have chosen the Monte Carlo method but exact results can be calculated quickly for your data, they will be provided.
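The core idea of the Monte Carlo method can be sketched with a permutation test: instead of enumerating every rearrangement of the data (the exact method), repeatedly shuffle and count how often a result at least as extreme as the observed one occurs. The data are made up, and this is an illustrative sketch of the principle, not SPSS's internal algorithm for contingency tables:

```python
import random

random.seed(42)  # reproducible resampling

# Hypothetical data for two groups with a clear difference in means
group_a = [12, 15, 11, 18, 14, 16]
group_b = [20, 23, 19, 25, 22, 24]

def mean(xs):
    return sum(xs) / len(xs)

observed = abs(mean(group_a) - mean(group_b))
pooled = group_a + group_b
n_a = len(group_a)

# Monte Carlo loop: each shuffle is one sampled rearrangement of the data
n_samples = 10_000
extreme = 0
for _ in range(n_samples):
    random.shuffle(pooled)
    diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
    if diff >= observed - 1e-12:  # tolerance guards against float rounding
        extreme += 1

p_mc = extreme / n_samples  # unbiased estimate of the exact p value
print(p_mc)
```

More samples narrow the confidence interval around the estimated p value, which is why SPSS reports a confidence interval alongside its Monte Carlo significance levels.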

When to Use Exact Tests
Calculating exact results can be computationally intensive, time-consuming, and can sometimes exceed the memory limits of your machine. In general, exact tests can be performed quickly with sample sizes of less than 30. Table 1.1 provides a guideline for the conditions under which exact results can be obtained quickly.