The StatCrunch Hypothesis tests for a mean applet has been designed to help students develop a better understanding of the underlying concepts of hypothesis testing. This interactive applet allows the user to easily simulate thousands of data sets of a user-specified size from a population with a specified shape, mean, and standard deviation. The user also specifies a hypothesis test for the mean that will be performed on each of these data sets. The results of all of the hypothesis tests are tabled and graphed so that the performance of the hypothesis testing procedure can be easily evaluated. In situations where the null mean matches the underlying population mean, one can observe the null distribution of the test statistic and compare the proportion of tests rejecting the null with the specified level of significance. The hypothesis testing results can also be compared for two different significance levels. The user can explore the power of the hypothesis testing procedure by examining the proportion of times the null is rejected for different values of the underlying population mean.
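The power exploration described above can also be sketched outside the applet. The following is a minimal NumPy sketch (not part of StatCrunch; the population parameters and seed here are purely illustrative) that simulates many samples for several values of the underlying population mean and records the proportion of two-sided Z tests of the null mean 50 that reject:

```python
import numpy as np

rng = np.random.default_rng(11)
mu0, sigma, n, z_crit = 50.0, 10.0, 25, 1.96  # z_crit for alpha = 0.05, two-sided
n_sims = 10_000

powers = []
for true_mean in (50.0, 52.0, 54.0, 56.0):
    # n_sims samples of size n from the underlying population
    samples = rng.normal(loc=true_mean, scale=sigma, size=(n_sims, n))
    # Z test statistic for each sample, using the known sigma
    z = (samples.mean(axis=1) - mu0) / (sigma / np.sqrt(n))
    # proportion of simulated tests that reject the null
    powers.append(np.mean(np.abs(z) > z_crit))
    print(true_mean, round(powers[-1], 3))
```

When the true mean equals the null mean of 50, the proportion rejected hovers near the 0.05 significance level; as the true mean moves away from 50, the proportion rejected (the power) climbs toward 1, which is the pattern the applet makes visible.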
Click on StatCrunch > Applets > Hypothesis tests > for a mean to view the dialog shown above. One may choose to sample from either a normal or right-skewed population, with Mean and Std. Dev. specified to alter the location and spread of the underlying population. The user then specifies the Null Mean value, the alternative hypothesis, and up to two significance levels that will define the hypothesis test(s) to be applied to each sample data set. One can also choose to perform either T or Z test(s), which determines the critical value(s) used. The user may also specify the graphics used to summarize results. The choice of Distribution plots will produce histograms of the resulting test statistics and P-values. The choice of Stick plots will create graphs featuring a sequence of bars representing the values of the resulting test statistics and P-values. Both graphs color code the results according to significance level. One can also specify an optional custom Title for the applet.
The resulting example applet shown above will automatically simulate 100 sample data sets and the associated hypothesis tests when it is loaded. Each time the user clicks the Simulate 100 button, results for an additional 100 simulated sample data sets will be tallied. Note that the user may modify the associated sample size, but doing so will clear all of the current results. A listing of the sample data sets generated is provided to the left. When a sample is selected from the listing, the individual sample values are displayed along with the corresponding data summaries and hypothesis test results. The accumulated hypothesis testing outcomes are tabled and graphed to the right, with green used to indicate results where the null was rejected if only one significance level was specified. If two significance levels were specified, green is used for the larger level and blue is used for the lower level. The tabled results include the desired significance level(s) and the associated critical value(s), along with the number/proportion of times the null was rejected in each case. For the outcome graphics, test statistics that are extreme compared to the associated critical value(s) and P-values less than the specified significance level(s) are color-coded accordingly. With stick plots, vertical lines represent the test statistics and P-values of individual hypothesis tests, and horizontal lines represent the respective critical value(s) and significance level(s) for comparison. Note that with two significance levels specified, blue-colored objects represent results that are significant at both levels.
One use of this applet is to compare the specified significance level to the “Prop. rejected” value. Even after hundreds of simulations, the proportion rejected may differ from the specified level, depending on the sample size and the type of hypothesis test.
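One way this mismatch can arise is when a Z critical value is paired with a test statistic computed from the estimated standard deviation at a small sample size. The sketch below (a NumPy illustration under that assumption, with made-up parameters, not the applet's internal code) simulates null-true data and shows the proportion rejected pulling away from the nominal 0.05 level as the sample size shrinks:

```python
import numpy as np

rng = np.random.default_rng(3)
mu0, sigma, z_crit = 50.0, 10.0, 1.96  # nominal two-sided level 0.05
n_sims = 20_000

props = []
for n in (5, 30, 100):
    # null-true samples: population mean equals the null mean
    samples = rng.normal(loc=mu0, scale=sigma, size=(n_sims, n))
    s = samples.std(axis=1, ddof=1)                 # estimated std dev
    stat = (samples.mean(axis=1) - mu0) / (s / np.sqrt(n))
    # comparing to the Z critical value despite the estimated sd
    props.append(np.mean(np.abs(stat) > z_crit))
    print(n, round(props[-1], 3))
```

At n = 5 the proportion rejected runs well above 0.05, while at n = 100 it is close to the nominal level; switching to the T critical value (which the applet offers) corrects the small-sample behavior.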
On the left is a table that displays every observation from the selected simulation. The far left column lists every simulation that has been run; selecting a different number in this column displays the data generated for that sample. Under “Samples” are the actual values generated in a given simulation; the number of values generated is set by the Sample size in the top menu. Clicking the column header sorts the sample values from largest to smallest. At the bottom of the data column are summary statistics for the sample mean along with the test statistic and p-value. Another use is to show students how to calculate the test statistic and p-value from the sample mean, sample standard deviation, and null mean.
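That hand calculation can be laid out step by step. The following stdlib-only Python sketch (the sample values and null mean here are invented for illustration, not taken from the applet) mirrors what a student would compute from the summary statistics:

```python
import math

sample = [47.2, 51.8, 49.5, 53.1, 48.9, 50.4, 52.7, 46.8]  # illustrative values
mu0 = 50.0                                                 # null mean

n = len(sample)
mean = sum(sample) / n
# sample standard deviation (divide by n - 1)
s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
# test statistic: (sample mean - null mean) / standard error
t = (mean - mu0) / (s / math.sqrt(n))

# Two-sided p-value using the standard normal CDF, as in a Z test;
# a T test would instead use the t distribution with n - 1 degrees of freedom.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
print(round(t, 3), round(p_value, 3))
```

Students can check each intermediate quantity (mean, standard deviation, standard error) against the summaries the applet reports for a selected sample.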
A selected sample in the far left column will have its corresponding bar highlighted in the graphs: outlined in pink for the distribution plots, or colored pink for the stick plots. Likewise, clicking on bars in the graph interacts with the table on the left. With the distribution plots, clicking on a bar opens a list of all sample indexes that fall within that bar's range; selecting one and clicking View displays that index in the columns on the left. With the stick plots, clicking a vertical line shows its corresponding index in the columns on the left.
Note that the applet can also be saved to your My Results folder at statcrunch.com. When the applet is saved using the Options > Export to My Results option of the applet window, it will be saved in its current form with either the same seed (to generate the same results) or a new seed. If one so chooses, the result can then be shared with others by clicking the Edit link on the resulting statcrunch.com result page. One can then share the link to the result page with other users. This may be an enticing option for those who want to construct prepackaged applets for classroom use.