Can someone help me with some SAS commands, please? I have a 2x4 mixed repeated-measures ANOVA: a between-subjects manipulation, ITK (2 levels), and each subject goes through 4 treatments (so I have 4 scores from each subject). I found a significant interaction effect. I want to do (1) contrast tests, and (2) trend tests for the linear, quadratic, and cubic components. I looked through the SAS manuals but could not figure out how to do these tests. The following is part of the SAS program I have been using:

DATA ONE;
  INFILE 'INTRA DATA A'; /* A DESCRIPTION OF THE VARIABLES IN INTRA */
  INPUT ITK SUBJNO Y1 Y2 Y3 Y4;
PROC PRINT;
PROC GLM DATA=ONE;
  CLASS ITK;
  MODEL Y1-Y4 = ITK;
  REPEATED TIME 4 / SUMMARY;
  MEANS ITK;
RUN;

TIA.
-- Kim Tan (PhD student, Temple University)
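[A sketch of one common approach, not tested against the poster's data; statement names and options are from the PROC GLM REPEATED syntax, and the contrast label is hypothetical. Naming a POLYNOMIAL transformation on the REPEATED statement makes GLM construct the orthogonal polynomial (linear, quadratic, cubic) contrasts across the four treatments, and a CONTRAST statement handles comparisons on the between-subjects factor:]

```sas
PROC GLM DATA=ONE;
  CLASS ITK;
  MODEL Y1-Y4 = ITK;
  /* Between-subjects contrast: compares the two ITK levels. */
  CONTRAST 'ITK 1 vs 2' ITK 1 -1;
  /* POLYNOMIAL builds orthogonal polynomial contrasts (linear,
     quadratic, cubic) across the 4 repeated levels, assuming they
     are equally spaced; SUMMARY prints a separate ANOVA table for
     each contrast variable, including its interaction with ITK. */
  REPEATED TIME 4 POLYNOMIAL / SUMMARY;
RUN;
```

[The TIME-by-ITK lines in the SUMMARY tables are the ones relevant to probing the interaction. If the comparisons of interest are between specific treatments rather than trends, REPEATED TIME 4 CONTRAST(1) / SUMMARY; instead compares each treatment against the first.]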
Whether one uses a one-way or a two-way ANOVA to calculate the ICC depends on the theoretical status of any differences in means between raters. In the one-way approach, the single factor is the sampling units (e.g., subjects, samples) that were measured or rated. The error term in this case is made up of any differences among rater means as well as the subjects X raters interaction. If there are good reasons to exclude variation in rater means from the error term, then a two-way ANOVA, with subject and rater as the factors, is appropriate. The latter analysis removes systematic biases between raters from the error term, leaving only the rater X subject interaction.

An article that I often refer my clients to gives a more thorough discussion of this topic: Tinsley, H. E. A., & Weiss, D. J. (1975). Interrater reliability and agreement of subjective judgments. Journal of Counseling Psychology, 22, 358-376.

Christina M. Gullion

Chauncey Parker writes:
>Subject: ICC: interrater reliability; ?anova models?
>From: Chauncey Parker
>Date: Sun, 22 Dec 1996 11:31:40 -0800
>
>I see articles talk about using one-way vs. two-way ANOVA error terms to
>calculate the ICC. However, it seems that all cases use the error terms
>from a "single-factor repeated-measures ANOVA" that provides error terms
>for BMS & WMS, with the WMS partitioned into JMS and EMS.
>
>First, am I right to see this ANOVA (the single-factor repeated-measures
>one) as a one-way? And is this synonymous with a within-subjects one-way
>ANOVA?
>
>OK, what is the deal with the talk about two-way ANOVAs, and when, if
>ever, do you use a two-way ANOVA versus a simple one-way ANOVA (which
>gives only the BMS & WMS terms)?
>
>Thanks a clinically significant amount . . . =;/
>Chauncey@UW
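[In mean-squares terms, using the poster's notation with n subjects and k raters, the two choices of error term correspond to the standard forms (these formulas are not in the original post):]

```latex
\text{one-way:}\quad ICC(1,1) = \frac{BMS - WMS}{BMS + (k-1)\,WMS}
```

```latex
\text{two-way:}\quad ICC(2,1) = \frac{BMS - EMS}{BMS + (k-1)\,EMS + \dfrac{k\,(JMS - EMS)}{n}}
```

[The one-way version leaves rater mean differences buried in WMS; the two-way version pulls JMS out of the error term and accounts for it separately in the denominator.]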
As far as I can make out, you appear to be doing an analysis of covariance (ANCOVA) with needle density as the covariate, family as the class variable, and egg counts as the dependent variable. Whether needle density is a continuous variable may not be the right question for your analytic problem. Even though you have chosen to put your needle density data into 6 bins, density probably is continuous. Of more relevance is whether your data are approximately symmetrically distributed over the 6 points, or are sharply skewed to the left or right.

Differences between families in the variability of the needle density ratings are likely to be your biggest problem in analyzing and interpreting your experimental results. Also, the method you are using assumes that the regression slopes of egg count on needle density are equal across the families. This is unlikely to be the case if some families have sharply different needle density distributions.

What you are doing is not a simple analysis. You would be wise to find a statistician who can lead you through the tests of assumptions and diagnostics needed to do an ANCOVA properly.

Best wishes,
Christina M. Gullion, PhD
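[The equal-slopes assumption can be checked directly before fitting the ANCOVA. A minimal SAS sketch, with hypothetical data set and variable names (MOTHS, EGGS, DENSITY, FAMILY), since the original post's names are not given:]

```sas
/* Homogeneity-of-slopes test: the FAMILY*DENSITY term asks whether
   the regression slope of egg count on needle density differs by
   family.  A significant interaction means the equal-slopes
   assumption of the ANCOVA fails. */
PROC GLM DATA=MOTHS;
  CLASS FAMILY;
  MODEL EGGS = FAMILY DENSITY FAMILY*DENSITY;
RUN;
```

[If the interaction is negligible, it can be dropped and the usual ANCOVA model (EGGS = FAMILY DENSITY) fitted; if not, family-specific slopes have to be modeled and interpreted.]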
Whether you can compare the two surveys depends on the substantive comparison between the 5-point and 7-point scales. If both refer to the same theoretically CONTINUOUS underlying concept or construct but simply have a different number of anchors, then it may be possible to standardize the longer scale to 5 points with a simple linear transformation. For example, if in both years you had a scale running from "strongly disagree" to "strongly agree", but one year added a couple more intermediate points, then it may be reasonable to rescale. An even simpler case is when the measures were made essentially on "rulers", e.g., -2 -1 0 1 2 vs. -3 -2 -1 0 1 2 3. Multiplying ratings on the latter scale by 2/3 puts them on the same -2 to +2 scale as the shorter ruler. This approach assumes that the longer scale does not "reach" further into the extremes, but simply partitions the continuous distribution into more bins.

If, however, the two surveys represent different conceptualizations of the dimension being measured, or if the scales had specific anchor points that differ between years, then you cannot compare the two surveys--they were measuring different things.

In article <5ah83k$sb6@boursy.news.erols.com>, Claude Lijoi writes:
>Subject: scale comparison
>From: Claude Lijoi
>Date: 2 Jan 1997 21:08:04 GMT
>
>I want to track results of a survey. Problem is that last year's survey
>was on a five-point scale and this year's is on a seven-point scale. What
>is the best way to convert from one form to another for comparison? I am
>an SPSS user.
>
>Thanks for the interest,
>
>Claude Lijoi
>cglijoi@erols.com
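[The ruler example generalizes. To map a rating x on a scale running from a to b onto a scale running from c to d, apply the affine transformation:]

```latex
x' = c + (d - c)\,\frac{x - a}{b - a}
```

[With a = 1, b = 7, c = 1, d = 5, a 7-point rating becomes x' = 1 + (2/3)(x - 1), so 1, 4, 7 map to 1, 3, 5; with centered rulers (a = -3, b = 3, c = -2, d = 2) the formula reduces to x' = (2/3)x, the multiplier in the post. The same caveat applies: this only makes sense if the two scales span the same underlying range.]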