

Newsgroup sci.stat.consult 21737

Directory

Subject: Re: systat + spss -- From: Clay Helberg
Subject: Re: Sample Size Question -- From: wpilib+@pitt.edu (Richard F Ulrich)
Subject: weighted kappa & intrarater reliability -- From: Edward Kuczynski
Subject: Poisson etc. -- From: ssimon@cmh.edu
Subject: Job Opportunity Porto -- From: Maria Eduarda Silva
Subject: Re: Sample Size Question -- From: hamer@rci.rutgers.edu (Robert Hamer)
Subject: Re: High Noon for Biomedical Journals -- From: "John R. Vokey"
Subject: Re: weighted kappa & intrarater reliability -- From: wpilib+@pitt.edu (Richard F Ulrich)
Subject: Re: SAS help -- From: sasrdt@shewhart.unx.sas.com (Randall D. Tobias)
Subject: Re: help with reliability/validity -- From: Clay Helberg
Subject: Re: Sample Size Question -- From: bwallet@nswc.navy.mil (Brad Wallet)
Subject: (no subject given) -- From: Steve Des Jardins
Subject: Re: High Noon for Biomedical Journals -- From: rwhite@rideau.carleton.ca (Robert White)
Subject: Re: (no subject given) -- From: rwhite@rideau.carleton.ca (Robert White)
Subject: Outcomes Research Analyst Position -- From: Melvin Ott
Subject: Re: What do we mean by "The Null Hypothesis"? -- From: Clay Helberg
Subject: Position (fwd) -- From: schanzer@compmore.net (Dena Schanzer)
Subject: Re: Sample Size Question -- From: wthomp2081@aol.com (WThomp2081)
Subject: Re: systat + spss -- From: David Mataix-Cols
Subject: Re: WEB courses and Walmart (whats the problem) -- From: amcorso@aol.com (AMCorso)
Subject: Farm's Data Analysis -- From: Milad Elhadri

Articles

Subject: Re: systat + spss
From: Clay Helberg
Date: Fri, 20 Dec 1996 09:43:12 -0600
Chris Barker 415-852-3152 wrote:
> 
> Is systat software  now owned and/or marketed by SPSS?
Yes. You can find out more by looking at
.
						--Clay
--
Clay Helberg         | Internet: helberg@execpc.com
Publications Dept.   | WWW: http://www.execpc.com/~helberg/
SPSS, Inc.           | Speaking only for myself....
Subject: Re: Sample Size Question
From: wpilib+@pitt.edu (Richard F Ulrich)
Date: 20 Dec 1996 17:19:26 GMT
Adlis, Susan A. (ADLISS@FOUND.HSMNET.COM) wrote:
: We are testing a new lab test versus the standard test.  We will calculate
: sensitivity and specificity of the new test.  How do I determine the sample
: size needed to have valid results?
Can you say what you consider to be "valid results"?  That is, when
you consider the number of mistakes, should the new test be twice as
good, or is it okay for it to be just half as good?
Then, what are the sensitivity and specificity of the OLD test?  Do
you have any preferences as to what kind of errors there are (False
positives vs. False negatives)?  Do your tests work by cutoff points,
so you can calibrate between the errors, or are you stuck with Yes/No?
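Once those questions are answered, a first-pass answer can be sketched with the usual normal approximation for estimating a proportion to a given precision. A minimal sketch, not from the thread; the anticipated sensitivity of 0.90 and the ±5-point margin are illustrative assumptions:

```python
from math import ceil
from scipy.stats import norm

def n_for_proportion(p_expected, margin, alpha=0.05):
    """Normal-approximation sample size to estimate a proportion
    (e.g., sensitivity) to within +/- margin at confidence 1 - alpha.
    Note: this counts only subjects who are truly positive (for
    sensitivity) or truly negative (for specificity)."""
    z = norm.ppf(1 - alpha / 2)
    return ceil(z**2 * p_expected * (1 - p_expected) / margin**2)

# Illustrative: anticipate ~90% sensitivity, want a 95% CI of +/- 5 points.
n_pos = n_for_proportion(0.90, 0.05)
```

The same calculation, run separately with the anticipated specificity, gives the number of true negatives needed; comparing the new test *against* the old one (rather than estimating each alone) requires a different, larger calculation.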
Rich Ulrich, wpilib+@pitt.edu
Subject: weighted kappa & intrarater reliability
From: Edward Kuczynski
Date: Fri, 20 Dec 1996 09:18:03 -0500
Can anyone comment on or suggest alternatives to the use of the weighted
kappa to assess intrarater reliability under the following scenario:
a scoring system ranging from 0 (no staining) to 4 (intense staining) has
been created to measure signal strength, in this case, in situ
hybridization staining for tissue factor on endometrial specimens.  Each
specimen was scored on three separate occasions by two raters.
Issues:
1.  can we make use of the third set of scores, since most of the
appropriate measures of reliability I've encountered assess the agreement
between only two sets of scores?
2.  can we perform a composite test of inter-rater reliability which uses
all three sets of the two raters' scores, with a single correlation
statistic?
3.  are these manual calculations, or does someone know of/have software or
an add-in which will perform these calculations?
Please reply to me directly; I'll post a summary to the list.
__________
   Edward Kuczynski, PhD                           telephone: 212.263.8589
   Obstetrics and Gynecology                        facsimile: 212.263.8887
   NYU Medical Center                e-mail: kuczye01@mcrcr.med.nyu.edu
   550 First Avenue, Suite NB-9E2
   New York, NY  10016
Subject: Poisson etc.
From: ssimon@cmh.edu
Date: Fri, 20 Dec 1996 12:31:15 -0600
Dale Glaser writes:
>Hi there....would appreciate some assistance with the following: I am
>analyzing a data base with 153 in a treatment group and approximately 350 in
>the control group; the DV is cost of medical services with the procedure
>involving coronary bypass......the tx group may have only 3 or 4 patients
>with this procedure with possibly slightly more in the control
>group.....thus, the data points are predominantly 0 (zero).....obviously, any
>of the parametric statistics are inappropriate.....a colleague indicated that
>he recently purchased STATA which is able to analyze Poisson or Tobit
>distributions, which may suit my data....this is foreign territory for
>me......any suggestions how to compare the two groups when data points are
>primarily zero....the means make no sense (e.g., average of $200 for tx
>group).........is there anything in SPSS which is appropriate (e.g., using
>COMPUTE function to create Poisson distribution)..
First of all, the average is still very interpretable.  Suppose you are an 
insurance company.  The average tells you how much each group of
patients will cost your company on a per patient basis.
Second, resign yourself that you have very little data.  It isn’t 153+350
operations, it’s around 3 or 4 times two.  Even the fanciest statistical
model isn’t going to help much for such a small sample size.
I would still summarize the data and try to answer the following questions.
Does the treatment decrease the probability that an operation is needed,
or does it decrease the average cost of the operation (or possibly both)?
To answer the first question, compute the proportion of operations in each
group and compare them using Fisher’s Exact test.
To answer the second question, compute an independent sample t-test on
the cost of the operation, including only those few patients in each group
that actually had an operation.
With only eight or so data points, you are unlikely to achieve statistical
significance unless the effect of treatment is huge.  You either need to follow
your patients for a longer period of time (so that you get more operations) or
you need to study a much larger group of patients.
Still, look at the probability of an operation in each group and look
at the cost in each group and decide whether a larger study should
focus on relative cost or on the probability of needing an operation.
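The two comparisons above take only a few lines in most packages. A minimal sketch assuming scipy; the counts and per-operation costs below are fabricated for illustration, not Dale's data:

```python
from scipy.stats import fisher_exact, ttest_ind

# Question 1: does treatment change the probability of an operation?
# Rows: treatment, control; columns: operation, no operation.
# Hypothetical counts in the spirit of the post (a few operations per group).
table = [[4, 149], [5, 345]]
odds_ratio, p_fisher = fisher_exact(table)

# Question 2: among those operated on, does treatment change the cost?
# Hypothetical per-operation costs (dollars), a handful per group.
cost_tx = [9500, 11200, 10400, 9800]
cost_ctrl = [12100, 11800, 13000, 12500, 11400]
t_stat, p_ttest = ttest_ind(cost_tx, cost_ctrl)
```

With counts this small, Fisher’s exact test will rarely reach significance, which is exactly the point of the caution above about sample size.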
Steve Simon, ssimon@cmh.edu, Standard Disclaimer.
P.S.  I’ve been trying to post to STAT-L/SCI.STAT.CONSULT using an
alternative means (so as to avoid all the MIME garbage that my current
e-mail system produces).  Could a few of you let me know if you see this
message, and whether you are reading it on sci.stat.consult or on
stat-l?
Thanks!
Subject: Job Opportunity Porto
From: Maria Eduarda Silva
Date: Fri, 20 Dec 1996 16:53:36 +0100
Job opportunity - University of Porto, Portugal
The Department of Applied Mathematics is accepting applications for
Lecturer and Assistant Professor positions. The deadline is the 3rd of
January. Applications may
be sent by fax: ++ 351 2 200 4109 or ++ 351 2 6007082. For more information
please see the following web address: www.fcma.up.pt.
Maria Eduarda R.P. Augusto da Silva
------------------------------------------------------
Grupo de Matematica Aplicada, Faculdade de Ciencias
Universidade do Porto,
Rua das Taipas, 135, 4050 Porto, PORTUGAL
Tel: (351-2) 2080313, Fax: (351-2) 200 4109
E-MAIL: mesilva@ncc.up.pt
------------------------------------------------------
Subject: Re: Sample Size Question
From: hamer@rci.rutgers.edu (Robert Hamer)
Date: 20 Dec 1996 14:46:41 -0500
"Adlis, Susan A."  writes:
>We are testing a new lab test versus the standard test.  We will calculate
>sensitivity and specificity of the new test.  How do I determine the sample
>size needed to have valid results?
37.6
-- 
--(Signature)      Robert M. Hamer hamer@rci.rutgers.edu 908 235 4218
  Do not send me unsolicited email advertisements.  I have never and
  will never buy.  I will complain to your postmaster.
  "Mit der Dummheit kaempfen Goetter selbst vergebens" -- Schiller
Subject: Re: High Noon for Biomedical Journals
From: "John R. Vokey"
Date: Fri, 20 Dec 1996 09:50:32 -0700
One of the difficulties with the complete abandonment of paper journals
is the loss of a clear criterion for the canonical version of the
article that they provide.  The edit/review/revise process to which most
journal articles are submitted will only increase with web/net
publishing, making the definition of the finished or canonical article
quite difficult: in these days of proliferating "preprints" and posting
of draft versions to web pages, freezing an article in a printed version
has become the basis for which version we mean when referencing works.
That is an advantage of paper publishing that should not be overlooked,
among others (e.g., relative permanence, and little concern about media
obsolescence or an inability to decode the article in 20 or 100 years --
something that even "freezing" on CD-ROM can't guarantee).
At least some of the rather strong desire for web-publishing for
journals is a wish to escape the hegemony of major journal publishers
(and, possibly, their associated editorial processes), and, I suspect,
the slow and tedious process associated with much journal publishing
these days, both of which would presumably be ameliorated by
web-publishing, but we should be careful not to throw out some of the
advantages of paper.
I would like to see the whole editorial/reviewing process moved to the
web to gain some of the freedoms others have described, but I would
still prefer that the end of the process be a printed version (or some
well-defined, long-lived format) housed in public and government
libraries to define the canonical article.  Scholarship *requires* a
permanent, public record; the evolving internet is anything but.
--
Dr. John R. Vokey, Associate Professor, Department of Psychology
University of Lethbridge, Lethbridge, Alberta, CANADA  T1K 3M4
mailto:vokey@hg.uleth.ca  http://www.uleth.ca/~vokey
Subject: Re: weighted kappa & intrarater reliability
From: wpilib+@pitt.edu (Richard F Ulrich)
Date: 20 Dec 1996 21:38:12 GMT
Edward Kuczynski (kuczye01@MCRCR6.MED.NYU.EDU) wrote:
: Can anyone comment on or suggest alternatives to the use of the weighted
: kappa to assess intrarater reliabilty under the following scenario:
  << scores 0-4, each rater 3 times >>
  -- Under any scenario, use your ordinary choice of intraclass
correlation coefficients instead of "weighted kappa."  Ordinary
kappa gets some legitimate use in comparing two dichotomies, but
it gets confusing beyond two sets of data.  If you use the easy
weights, the weighted kappa is exactly a correlation; if you don't,
you have a confusing mess on your hands.
You can get details about ICCs, and an SPSS macro to do them, from
the SPSS Web page, which you can reach from my Web page (see sig).
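For readers without SPSS, the two-way ICC is short enough to compute directly from the rating matrix. A minimal numpy sketch of ICC(2,1) (two-way random effects, absolute agreement, single rating); the 0-4 staining scores are made up for illustration, and this is the textbook formula, not the SPSS macro:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rating.
    `ratings` is an (n subjects) x (k raters/occasions) array."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # Mean squares from the two-way ANOVA decomposition.
    ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # occasions
    ss_err = np.sum((x - x.mean(axis=1, keepdims=True)
                       - x.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical 0-4 staining scores: 5 specimens, one rater on 3 occasions.
scores = [[0, 1, 1], [2, 2, 2], [4, 3, 4], [1, 1, 2], [3, 4, 3]]
```

Because it uses all occasions in a single coefficient, this also addresses the original poster's questions about the third set of scores and a composite measure.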
Rich Ulrich, biostatistician                wpilib+@pitt.edu
http://www.pitt.edu/~wpilib/index.html   Univ. of Pittsburgh
Subject: Re: SAS help
From: sasrdt@shewhart.unx.sas.com (Randall D. Tobias)
Date: Fri, 20 Dec 1996 21:09:11 GMT
In article <59cfi4$ns6@amenti.rutgers.edu>, hamer@rci.rutgers.edu (Robert Hamer) writes:
|> nakhob@mat.ulaval.ca (Renaud Langis) writes:
|> 
|> >On Fri, 13 Dec 1996 00:07:22 -0500, Ya-Fen Lo 
|> >wrote:
|> 
|> >>Is it possible to perform tests of simple effects
|> >>(as defined in APPLIED STATISTICS by HINKEL/WIERSMA/JURS)
|> >>in SAS ? I am using the following setup
|> 
|> Yes.
|> 
|> >You can use the TEST statement in proc GLM. May be also in proc ANOVA. Do you
|> >simply want to know if an effect is significant? if so, just check the ANOVA
|> >table.
|> 
|> That is not what the original question asked.  That person wants
|> to contrast levels of one effect at specific levels of the other
|> effect.  One has to do that with CONTRAST or ESTIMATE statements.
It's easier to use the new SLICE= option on the LSMEANS statement,
available in both GLM and MIXED.
   Example code:
      data a;
         do a = 1 to 5;
            do b = 1 to 5;
               do i = 1 to 5;
                  if (a < 3) then y =     rannor(1);
                  else            y = b + rannor(1);
                  output;
               end;
            end;
         end;
      run;
      proc glm data=a;
         class a b;
         model y = a|b;
         lsmeans a*b / slice=a;
         run;
   Example output:
   +-----------------------------------------------------------------+
   |                   A*B Effect Sliced by A for Y                  |
   |                                                                 |
   |                 Sum of           Mean                           |
   |  A    DF       Squares         Square        F Value     Pr > F |
   |                                                                 |
   |  1     4        5.017547        1.254387      1.3555     0.2548 |
   |  2     4        2.380745        0.595186      0.6432     0.6330 |
   |  3     4       52.314143       13.078536     14.1331     0.0001 |
   |  4     4       32.807457        8.201864      8.8632     0.0001 |
   |  5     4       55.464609       13.866152     14.9842     0.0001 |
   +-----------------------------------------------------------------+
-- 
Randy Tobias          SAS Institute Inc.     sasrdt@unx.sas.com
(919) 677-8000 x7933  SAS Campus Dr.         us024621@interramp.com
(919) 677-8123 (Fax)  Cary, NC   27513-2414
   Faith, faith is an island in the setting sun.
   But proof, yes: proof is the bottom line for everyone.
                                                       -- Paul Simon
Subject: Re: help with reliability/validity
From: Clay Helberg
Date: Fri, 20 Dec 1996 15:33:50 -0600
Dennis Roberts wrote:
> find a copy of a decent educational/psychological measurement book and check
> under reliability. one such book is ...
> Suen, Hoi (1990) Principles of Test Theories,  Lawrence Erlbaum Associates,
> Hillsdale NJ ...
> 
> At 09:36 AM 12/12/96 +0100, you wrote:
> >Does anyone know how to calculate the validity (or reliability) of survey
> >data? I know I can use factor analysis to tap the construct validity. But
> >how can I use Cronbach's alpha, and to what conclusion does the alpha lead? Do
> >I need to standardize the data before calculating the alpha? If yes, do I
> >still have to use standardized scores in further analyses?
> >
> >Thanks in advance for helping.
> >F.Bellour
Mike Miller did a nice piece on Coefficient Alpha in the context of
confirmatory factor analysis which might be informative for you. The
reference is:
Miller, M. B. (1995). Coefficient alpha: A basic introduction from the
perspectives of classical test theory and structural equation modeling.
Structural Equation Modeling, 2(3), 255-273. 
						--Clay
--
Clay Helberg         | Internet: helberg@execpc.com
Publications Dept.   | WWW: http://www.execpc.com/~helberg/
SPSS, Inc.           | Speaking only for myself....
Subject: Re: Sample Size Question
From: bwallet@nswc.navy.mil (Brad Wallet)
Date: Fri, 20 Dec 1996 21:09:12 GMT
In article <32B9BD86@MAIL.HSMNET.COM>, "Adlis, Susan A."  writes:
|> We are testing a new lab test versus the standard test.  We will calculate
|> sensitivity and specificity of the new test.  How do I determine the sample
|> size needed to have valid results?
30.  As we all know, as soon as you have 30 observations, you can
assume you have a normal distribution.  *grins*
Brad
Subject: (no subject given)
From: Steve Des Jardins
Date: Fri, 20 Dec 1996 15:22:56 -0600
Hi,
How can I temporarily stop my mail from this and other listservs?
Steve
Subject: Re: High Noon for Biomedical Journals
From: rwhite@rideau.carleton.ca (Robert White)
Date: 20 Dec 1996 22:50:55 GMT
How are the WEB based journals going to carry out the process of
review, and who is going to pay for the time taken by editorial
staff? In short, the present paper journals are bought
and sold to libraries across the world. If paper journals go out of
existence, the same processing will have to be developed on-line, and
the staff overseeing the manufacture of these on-line journals will have
to charge a fee to earn a living for the countless hours they would
need to spend. No one is going to do it for free, and we cannot expect
quality journals if there are no staff to edit and verify claims
made by scientists. Frankly, the transition from paper holdings to
WEB library holdings is going to cause quite a bit of damage to
the publishing process and the validity of scientific work. Personally,
I would rather have my own work put in paper journals and then
transferred on-line. Moreover, reading on-line is more difficult, and
review cannot be done as simply.
My $0.02.
In <32BAC3D8.9E@hg.uleth.ca> "John R. Vokey"  writes:
>One of the difficulties with the complete abandonment of paper journals
>is the loss of a clear criterion for the canonical version of the
>article that they provide. [...]
-- 
   ----------------------------------------- Carleton University ----------
               Robert G. White               Dept. of Psychology   
                                             Ottawa, Ontario. CANADA
   INTERNET ADDRESS ----- rwhite@ccs.carleton.ca ------------------- E-MAIL
   ------------------------------------------------------------------------
Subject: Re: (no subject given)
From: rwhite@rideau.carleton.ca (Robert White)
Date: 21 Dec 1996 02:08:33 GMT
In <1.5.4.32.19961220212256.0066502c@maroon.tc.umn.edu> Steve Des Jardins  writes:
>Hi,
>How can I temporarily stop my mail from this and other listservs?
>Steve
Send a msg to LISTSERV with the following if you are ever in need of help
on a command:
HELP
:-)
I'm not kidding; all the listservs respond to this
universal command. Lastly, I think the following might work
for you:
suspend mail 
NOTE: most systems are the same now, and all one needs for
99.9% of the applications is a REFCARD from any LISTSERV.
Hope that helps.
-- 
   ----------------------------------------- Carleton University ----------
               Robert G. White               Dept. of Psychology   
                                             Ottawa, Ontario. CANADA
   INTERNET ADDRESS ----- rwhite@ccs.carleton.ca ------------------- E-MAIL
   ------------------------------------------------------------------------
Subject: Outcomes Research Analyst Position
From: Melvin Ott
Date: Fri, 20 Dec 1996 20:13:08 -0800
Provides technical and outcomes research support for multiple departments
and medical staff.  Works with physicians and other health care professionals
in solving data/information issues, including: study design, sampling
methods, validity testing, performing data analysis functions, investigating
comparable benchmark data, performing statistical analysis of study data,
designing report formats and data definitions across all data end users, and
conducting background investigations of scientific and medical research
literature.  Other responsibilities include: utilizing the QI process,
maintaining competency in technical skills (specific to CHARS data/tape
manipulation, SAS operations, and market analysis), and identifying and
implementing staff educational needs.  A minimum of 2 years' experience in
health care or a related field and 1 year of experience with SAS is required.
Coding knowledge desired.  We offer a competitive salary and an excellent
benefits package.  Qualified applicants should send a resume to:
Deaconess Medical Center
Human Resources Department
Box 248
Spokane, WA 99210-0248
Attn: Kathy Sewell
Fax 509-744-7662
e-mail CNORWOOD@ica.com
Please don't reply to me.  I am just posting this notice for someone else.
Subject: Re: What do we mean by "The Null Hypothesis"?
From: Clay Helberg
Date: Fri, 20 Dec 1996 23:28:01 -0600
Richard F Ulrich wrote:
>  -- Okay.  Rather than engage in a dissection of the dialog, I will try
> to address the central issue.  Clay is endorsing Hays, but he does not
> accept the fact that I find Hays less-than-satisfactory.  With that
> in mind, I will try to convey my argument by re-writing part of Hays.
It was obvious from your previous post that you did not find Hays
satisfactory--but I did not see why until just now. I still do not agree
with you, but I believe I see the reasoning behind your viewpoint.
> 
> How about, "A tested hypothesis must specify a value that does have
> a particular meaning, or _gravitas_.  Though that may be any of the
> possible values for one (or more) parameters, the use of the word
> *null*  is always appropriate because the test is looking for "no
> experimental effect" (in the words of Hays, above)  -  even though
> 'no-effect'  sometimes is represented by a number."
> 
> "Metaphorically, the null is also reminiscent of a singularity, or
> a black-hole, which is a sort of zero  -  it is what your conclusions
> have to collapse to, if your data come out totally noisy.  It is
> certainly different from the way we regard 'alternate hypotheses'."
> What we are discussing here is a pedagogical question, rather
> than a statistical one.  In the TECHNICAL terms, I am right and
> Hays is wrong, I think, because every hypothesis *is*  reduced
> to what Clay termed a 'tautological' form, where there is a zero.
> (At least, that is the way for writing formal, mathematical hypotheses
> for t-tests and ANOVAs, where you show that the computed term does
> have the intended distribution, of t or chisquared.  I don't really
> remember writing hypotheses  for anything else.)
I agree that it is a pedagogical issue, rather than a statistical one. I
will not deny that Rich's interpretation (that a null hypothesis can
always be made to contain a zero) is technically correct. However, I
would argue that Hays' interpretation is equally correct from a
technical standpoint, and has the added pedagogical benefit that it
places emphasis on the fact that the null hypothesis merely states a
*specific* effect, which needn't always represent an absence of effect.
It is very common for students learning statistics to get the idea that
the null hypothesis is only good for testing "some effect vs. no effect"
types of hypotheses. However, as Hays points out, it is also good for
testing "this specific predicted (nonzero) effect vs. some other effect"
hypotheses as well.
> Further, I am using "effect size"  in the same technical sense that
> Hays uses the phrase, above, where the effect size *is*  zero under
> the null.  (Note: Clay has been saying it differently, using
> effect-size as synonymous with, say, raw-change-score.  I would
> rather keep it as a technical term.)
Perhaps I have been sloppy with my terminology (again). But this is
another reason that I have lost most of my enthusiasm for hypothesis
testing--it has a way of turning quantitative relationships inside out,
making it very difficult to keep track of relationships in a meaningful
way. If I have a theory that says that a particular regression slope
should be 2, and I want to test this theory by rejecting the null
hypothesis if the slope is sufficiently different from 2, I would still
call a slope of 2 an effect with a nonzero size. However, Rich is right
that the traditional framework would say that a slope of 2 represents an
effect size of zero. That's (one of many reasons) why I'm growing more
and more dissatisfied with the traditional framework!
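Mechanically, testing a slope against a nonzero null value like 2 is the same as the usual test against 0: subtract the hypothesized value from the estimate and divide by its standard error. A minimal sketch with fabricated data (not part of the thread), assuming scipy:

```python
from scipy import stats

# Fabricated data generated to lie close to the line y = 2x.
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [2.1, 3.9, 6.2, 8.0, 10.1, 11.8, 14.2, 16.0, 18.1, 19.9]

fit = stats.linregress(x, y)

# H0: slope = 2 (a specific, nonzero "null" value).
t = (fit.slope - 2) / fit.stderr
p = 2 * stats.t.sf(abs(t), df=len(x) - 2)
```

Note that the p-value reported by most packages (here, `fit.pvalue`) tests the slope against 0; the point of Hays' framing is that the comparison value 2 is just as legitimate a null.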
> For the sake of pedagogy, the Hays approach does de-emphasize
> zero as COMPARISON value.  Is that a major problem?  Personally, I
> have not had trouble explaining the difference between effect-size
> and comparison-value.  But I do my explaining to one or two
> persons at a time.  Also, I have not read Hays, so I do not know
> what further use he might make of the ideas in the course of
> his presentation.  If the citation came from his introduction,
> then maybe he had a lot more to say.  If it came from his summary,
> then I think that he just made a meager point, where he could have
> argued more fruitfully.
My copy of Hays is at work and I'm home right now, but as I recall that
quote was a sort of "aside" at the end of the section where he
elaborates hypothesis testing. I think the point of the paragraph was to
emphasize the fact that you could use other values than zero as
comparison values, rather than to provide a true summary of hypothesis
testing (that was done in another place in the chapter).
To get down to more practical matters, how you explain things depends
heavily on your audience. When I was teaching statistics to psychology
sophomores, many of them were quite unsophisticated mathematically.
There were a number of students who could (and did!) get lost going
from "mu=5" to "mu-5=0", due to lack of algebra skills and severe math
anxiety. For such people especially, but also more generally, I think
Hays' approach has clear benefits (without sacrificing rigor) over
Richard's more technical approach.
(I know some folks would say that such people have no business studying
statistics, but that's an argument for another day....)
						--Clay
-- 
Clay Helberg      | Internet: helberg@execpc.com
SPSS, Inc.        | WWW: http://www.execpc.com/~helberg/
Chicago, IL       | Speaking only for myself....
Subject: Position (fwd)
From: schanzer@compmore.net (Dena Schanzer)
Date: Sat, 21 Dec 1996 02:11:54 GMT
Date: Tue, 17 Dec 1996 15:53:55 -0500
From: George O'Brien 
To: d-ssc@mcmail.cis.mcmaster.ca
Cc: obrien@mathstat.yorku.ca
Subject: statistics position
Please draw the attention of interested individuals to the following job 
advertisement.  Thanks,
George O'Brien
********************************************************************************
Applications are invited for a cross appointment at the Assistant Professor 
level in the tenure-track, in the Departments of Mathematics & Statistics and 
Sociology, to commence July 1, 1997, subject to budgetary approval. The 
successful candidate must have a Ph.D. and is expected to have established a 
record of research and teaching excellence in statistics and its application to 
sociology.  The selection process will begin on January 15, 1997. Applicants 
should send resumes and arrange for at least three letters of recommendation to 
be sent directly to: George L. O'Brien, Chair, Dept of Mathematics & 
Statistics, York University, 4700 Keele Street, North York, Ontario, Canada, 
M3J 1P3. Fax: (416) 736-5757 E-mail: chair@mathstat.yorku.ca. York is 
implementing a policy of employment equity, including affirmative action for
women faculty.  In
accordance with Canadian immigration requirements, this advertisement is 
directed to Canadian citizens and permanent residents.
George L. O'Brien
Subject: Re: Sample Size Question
From: wthomp2081@aol.com (WThomp2081)
Date: 21 Dec 1996 10:58:29 GMT
> the question you have asked on the surface would seem to have a simple
> answer but .. it does not.
I think the answer is always simple, just a number or a range.  The
process to arrive at the answer varies in complexity and accuracy
depending on what information you have to start with and how you use the
information (i.e., charts, programs, formulas).
Occasionally, questions about what sample size to use come up on this
newsgroup, and many times I have seen responses that correctly
indicate that it is hard to give a pat answer, but that also seem to
leave the inquirer at a loss for what to do next.
As a junior stat consultant (i.e., a graduate-student consultant), I've had to
determine sample size a few times for researchers who "want significant
results".  
Here is the general process I use.  I have left most technical details in
my head...really in a folder in my office across the nation, and am vague
about some things that don't involve the client. But I hope that
_inquirers_ can get an idea of what to specify when they ask the million
dollar question.
First, I ask or determine if they would like to be able to detect a large,
medium, or small effect.  For something like comparing to a standard (like
the case here), usually they want to detect the smallest effect possible. 
Thus, they would want small differences between the new lab test and the
standard test to be detectable by the study.
Then I ask if a pilot study was done, and if it was, can I analyze those
data.  From the analysis, I get the estimate of the within-groups variance
and the estimate of the sum of squared treatment effects (if fixed
effects) for this study.  I use those along with the effect information
above and significance level to compute the parameters needed to use
something like Tang's charts, or whatever works best, to get an estimate
of n.  
If no pilot study was done, I ask, in this two-group (two lab tests) case
what difference in the units of measure would you consider to be a
meaningful difference for your research (read, to be able to publish)? 
Again, for comparison-to-a-standard test this should be a small difference
between the standard and new test if they want to show the new test is
just like an established standard, but usually the client will say they
have no idea at first; so, I suggest values until they say okay.  Then I
standardize this difference with their help, and use the information to
get the n estimate needed to detect a difference like this in a
statistical analysis.
If they can't even specify the above, or they aren't around to do so, I
can use just the effect size (large, medium, small) information and
information about sig level, number of levels of treatments, type of
design, estimate of rho (for RB designs...they really have to be around
for that), to get a crude estimate or range of n.
This basic outline works for most types of experimental designs. 
I have attempted some of the SAS programs for sample size, but could not
get them to run.  I notice that one is just an automated version of
Bratcher et al.'s charts in NWK.  Any opinions on any of the computerized
sample size programs?  It's a pain to do it by "hand".
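The chart lookup for the crude effect-size-only case can be automated with the usual normal approximation for a two-group comparison; plugging in Cohen's conventional small/medium/large standardized effects gives the rough n estimates mentioned above. A sketch, not one of the SAS programs discussed; scipy is assumed, and the small refinement from noncentral-t methods is omitted:

```python
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided, two-sample comparison,
    where effect_size is the standardized mean difference (Cohen's d).
    Normal approximation; charts/noncentral-t methods refine this slightly."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(2 * ((z_a + z_b) / effect_size) ** 2)

# Cohen's conventions: small d = 0.2, medium d = 0.5, large d = 0.8.
n_small, n_medium, n_large = (n_per_group(d) for d in (0.2, 0.5, 0.8))
```

When a pilot study is available, the standardized difference computed from it simply replaces the conventional d here.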
Laura
Subject: Re: systat + spss
From: David Mataix-Cols
Date: 20 Dec 1996 11:19:12 GMT
I think so.
Best, 
David
Subject: Re: WEB courses and Walmart (whats the problem)
From: amcorso@aol.com (AMCorso)
Date: 21 Dec 1996 16:20:05 GMT
Dennis Roberts writes
> The big players will wipe out the smaller ones ... and the
>greed continues.
>The Walmart schools will knock out the Central Michigans ... either by
>taking a loss financially or ... making their courses more "user friendly"
...and the problem with lower costs would be...?
Perhaps reduced quality, I hear someone say.
Well, there is still room in the world for Leswplinase as well as McDonalds,
and of course, there is the famous drop in quality 'twixt your 1996 $1500
133MHz Pentium and that 1970 $5 million IBM 360 ;-)
Regards
Tony
Subject: Farm's Data Analysis
From: Milad Elhadri
Date: Sat, 21 Dec 1996 20:46:48 EST
Data were collected from eight farms on the following variables: Net
Income, Yield, Ditch-Length (8 levels), Stocking-Density (3 levels),
Feed, Fingerlings, and Labor Hours. One purpose of the study is to
determine an economically optimum ditch-length and stocking-density.
Is it possible to conduct two-factorial analysis using ditch-length as
one factor with 8 levels and the stocking-density as the second factor,
given that we have only one observation at each level (for each farm,
there is only one level of ditch-length)?
Is there any other suggestion to analyze the data?
I would appreciate your assistance. Thanks.
Milad Elhadri
717 652-0497
Elhadri@JUNO.com
