

Newsgroup sci.stat.consult 21564

Directory

Subject: Re: Baseball study -- From: Sean Lahman
Subject: Re: Reliability of regression-weighted factors -- From: "David L. Ronis"
Subject: methods to reduce the level of p values in multiple tests -- From: Jacques PARIES
Subject: ANNOUNCE: Conference in Auckland, New Zealand -- From: dscott@stat.auckland.ac.nz (David Scott)
Subject: Algorithm for Moments -- From: arte@panix.com (Arthur Ellen)
Subject: help with reliability/validity -- From: bellour@upso.ucl.ac.be (F. Bellour)
Subject: Calculating SEs for Relative Ratios -- From: ccox@sophia.sph.unc.edu

Articles

Subject: Re: Baseball study
From: Sean Lahman
Date: Wed, 11 Dec 1996 15:56:17 -0500
Richard F Ulrich wrote:
> 
> Mitchell R. Watnik (mwatnik@rocket.cc.umr.edu) wrote
> : I do *not* believe in the "livelier" ball. 
>
> In 100 years of Major League Baseball, I suspect that the ball is not
> the same as it was at the start, though I do not remember reading of
> official changes.
I was referring to the change in 1920 to a ball with a cork center. 
Baseball prior to that is generally known as "the deadball era", because
home runs were rare and the style of play focused on other elements of
strategy, such as bunting, stolen bases, and the hit & run play. 
And just for the record, the National League started play in 1876, which
is _120_ years of MLB: a much bigger data set than the other three
American professional team sports combined.
I appreciate the assistance of people in this newsgroup who know much
more about statistical analysis than I do.  I'm pretty knowledgeable
about the history of baseball, but profess my ignorance when it comes to
complex data analysis.  Your insight has been helpful.
-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sean Lahman - lahmans@vivanet.com
Sean Lahman's Baseball Archive
http://www.vivanet.com/~lahmans/baseball.html
Subject: Re: Reliability of regression-weighted factors
From: "David L. Ronis"
Date: Wed, 11 Dec 1996 10:12:41 -0500
On Tue, 10 Dec 1996 ChaberrySM@AOL.COM wrote:
> Hello all,
> I did a factor analysis that resulted in five
> interpretable factors.  Instead of using unit
> weighting to construct scales from the items,
> I used SPSS's regression procedure to form
> scales.
I prefer unit weighting, but I'll answer the posed question anyway.
>    I know how to check the reliability
> of a unit weighted scale, but is it possible
> to assess the reliability of my scales?
Yes.
The way you would assess internal consistency reliability (e.g.,
Cronbach's alpha) of this kind of scale would be to create new variables
that equal the original variables multiplied by the weights you want to
use to construct the scale scores.
Then (in SPSS) feed these new variables into the RELIABILITY program.  Be
sure to look at the regular alpha, not the standardized item alpha.
Usually (but not always) notably unequal weighting of items hurts
reliability.
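The same recipe can be checked outside SPSS; here is a minimal Python/NumPy sketch, with hypothetical data and weights (the weighted items simply go into the ordinary raw-alpha formula):

```python
import numpy as np

def cronbach_alpha(items):
    """Raw (unstandardized) Cronbach's alpha for an n_obs x k item matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical data: four items sharing a common factor.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 4)) + rng.normal(size=(200, 1))

# Multiply each original variable by its (hypothetical) regression weight,
# then feed the weighted items to the same alpha formula.
weights = np.array([0.9, 0.7, 0.5, 0.3])
alpha_unit = cronbach_alpha(x)
alpha_weighted = cronbach_alpha(x * weights)
print(alpha_unit, alpha_weighted)
```

As the post notes, it is the regular (raw) alpha that is meaningful here; the standardized-item alpha would undo the weighting.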
David
David Ronis  alias  dronis@umich.edu  Offices in Ann Arbor, MI
  University of Michigan and Department of Veterans Affairs
    home page-->  http://www-personal.umich.edu/~dronis/
       School of Nursing              (313) 647-0462
       Institute for Social Research  (313) 936-0462
       VA                             (313) 930-5119
Subject: methods to reduce the level of p values in multiple tests
From: Jacques PARIES
Date: Wed, 11 Dec 1996 23:13:30 +0100
Hi all.
I have a question relating to a publication and repeated 
measures analysis.
Consider a biological variable B which is measured over three 
points in time (B0, B8, B15).
Here is the design I used:
"STATISTICAL ANALYSIS: Related measures.
For each variable we used a repeated measures analysis of 
variance design and two linear combinations of the differences 
between values at the three periods [contrasts].
These contrasts were orthonormalized, and Mauchly's test was 
used to verify the assumption of sphericity of the variance-covariance 
matrix. When this assumption appeared to be violated, 
an adjustment of degrees of freedom was made [Huynh-Feldt 
epsilon].
Three sorts of graphs illustrate the results for the most 
interesting variables.
· First, error bars [means +- sem] describe the data at the 
three periods, and the level of significance given by the 
averaged univariate F test is specified in a footnote.
· Second, error bars [means +- sem] describe the two linear 
combinations of differences, showing their position relative 
to zero. The levels of significance given by the 
univariate F tests are specified by annotations in the graph."
And here, in part, is what the reviewer answered me: 
The p value is for the ANOVA; Usually, when this is significant, 
secondary testing is done to identify which of the specific 
points are different from each other; When assessing this many 
outcome variables, some method needs to be used to reduce the 
level of the p values for the multiple tests being done.
Is it the Bonferroni procedure?  Must I use a reduced alpha, and how? 
The measures are related, and I keep asking myself in vain.
Many thanks in anticipation
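[For what it is worth, the Bonferroni procedure the reviewer may have in mind simply divides alpha by the number of tests in the family, or equivalently multiplies each p value by that number. A minimal sketch, with hypothetical p values:]

```python
# Bonferroni correction: with m tests at familywise level alpha,
# each individual test is judged at alpha / m.
alpha = 0.05
p_values = [0.003, 0.020, 0.040]   # hypothetical p's from the contrast F tests
m = len(p_values)
threshold = alpha / m              # here 0.05 / 3

significant = [p <= threshold for p in p_values]   # [True, False, False]

# Equivalent presentation: report adjusted p values, capped at 1.
p_adjusted = [min(1.0, p * m) for p in p_values]
print(significant, p_adjusted)
```

With only two orthonormal contrasts per variable, m would be 2 within each variable; if the reviewer means adjusting across the many outcome variables as well, m is the total number of tests reported.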
Subject: ANNOUNCE: Conference in Auckland, New Zealand
From: dscott@stat.auckland.ac.nz (David Scott)
Date: 11 Dec 1996 23:26:47 GMT
New Zealand Statistical Association
48th Annual Conference
University of Auckland
Wednesday July 9--Friday, July 11, 1997
Themes of the Conference are Bayesian Statistics including Markov Chain
Monte Carlo, and Statistical Ecology.
It is expected that there will also be sessions on Official Statistics,
Biostatistics, Statistical Theory, and Statistical Education.
Contributed papers in any area of statistics will however be accepted for
the conference program.
Keynote speakers who have accepted invitations to speak at the Conference
are Peter Hall (ANU), Luke Tierney (Minnesota), Steve Buckland (St Andrews),
Keith Worsley (McGill), and Richard Huggins (La Trobe).
Peter Hall's talk will be presented jointly with the joint meeting of the
Australian Mathematical Society and the New Zealand Mathematics Colloquium,
which is being held in Auckland from July 7 to July 11.
Steve Buckland is to present a Workshop on Line Transect and Distance
Sampling for Estimation of Wildlife Populations on the morning
of July 11. The Workshop and the sessions on Statistical Ecology are intended
to be interdisciplinary, bringing together researchers from Biology,
Ecology and Statistics.
Accommodation has been reserved for participants in the student residence
Grafton Hall which is close to the University.
The deadline for submission of abstracts is May 23, 1997.
For further details concerning the Conference, or to register your interest,
there is a link on the home page of the Statistics Department at the
University of Auckland (http://www.stat.auckland.ac.nz/).
Alternatively, contact
Associate Professor David J Scott,
Department of Statistics,
Tamaki Campus,
The University of Auckland,
PB 92019, Auckland,
New Zealand
Phone: +64 9 373 7599 Fax: +64 9 373 7177
Email: d.scott@auckland.ac.nz or dscott@scitec.auckland.ac.nz
Subject: Algorithm for Moments
From: arte@panix.com (Arthur Ellen)
Date: 11 Dec 1996 19:34:24 -0500
Can someone post an algorithm for the 4 moments, in either BASIC or 
Pascal, with a brief explanation? I'm a bit puzzled by AS 52's recursion.
tia 
art
arte@panix.com
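[Not AS 52 itself, but a plain two-pass version of the four moments is short; a sketch in Python rather than the requested BASIC or Pascal. AS 52's recursion updates these same quantities incrementally so only one pass over the data is needed.]

```python
def four_moments(data):
    """Mean, variance, skewness, and kurtosis via two passes.

    Uses population moments about the mean, m_r = (1/n) * sum((x - mean)**r):
    variance = m2, skewness = m3 / m2**1.5, kurtosis = m4 / m2**2
    (kurtosis is 3.0 for a normal distribution).
    """
    n = len(data)
    mean = sum(data) / n                              # first pass
    m2 = sum((x - mean) ** 2 for x in data) / n       # second pass
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return mean, m2, skew, kurt

print(four_moments([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))
```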
Subject: help with reliability/validity
From: bellour@upso.ucl.ac.be (F. Bellour)
Date: Thu, 12 Dec 1996 09:36:26 +0100
Does anyone know how to calculate the validity (or reliability) of survey
data? I know I can use factor analysis to tap the construct validity. But
how can I use Cronbach's alpha, and what conclusion does the alpha lead
to? Do I need to standardize the data before calculating the alpha? If
so, do I still have to use standardized scores in further analyses?
Thanks in advance for helping.
F.Bellour
-- 
F.Bellour
PhD Student
U.C.L. Belgium
E-mail: bellour@upso.ucl.ac.be
Phone office: 00-32-10-478640
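[On the standardization question: computing alpha on z-scored items is what the "standardized item alpha" reports, and when items are on comparable scales the two versions are usually close. A small Python/NumPy sketch with hypothetical data:]

```python
import numpy as np

def cronbach_alpha(items):
    """Raw Cronbach's alpha for an n_obs x k item matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return (k / (k - 1.0)) * (
        1.0 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1)
    )

# Hypothetical survey: five items loading on one common factor,
# all measured on the same scale.
rng = np.random.default_rng(1)
f = rng.normal(size=(300, 1))
x = f + 0.8 * rng.normal(size=(300, 5))

z = (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)  # z-scored items
a_raw = cronbach_alpha(x)
a_std = cronbach_alpha(z)
print(a_raw, a_std)
```

Standardizing before computing alpha does not commit you to using z-scores in later analyses; the two choices are separate.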
Subject: Calculating SEs for Relative Ratios
From: ccox@sophia.sph.unc.edu
Date: Thu, 12 Dec 1996 10:34:51
I have calculated ratios for various population
subgroups using SUDAAN's proc ratio.  Now I want
standard errors for relative ratios between 
subgroups, e.g., the 2:1 ratio of the ratio for
Blacks to the ratio for Whites, to get t-tests.
A statistician at RTI said SUDAAN can't do this.
Also, what would the degrees of freedom be for 
a relative ratio?
Thanks, 
Christine
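[SUDAAN aside, one fallback is the delta method on the log scale: if the two subgroup ratios are independent, var(log(R1/R2)) is approximately (SE1/R1)^2 + (SE2/R2)^2. A sketch with hypothetical numbers; the degrees of freedom would usually be taken from the design, e.g. number of PSUs minus number of strata:]

```python
import math

# Delta-method SE for a relative ratio R1/R2, assuming the two
# subgroup estimates are independent (any covariance is ignored).
def relative_ratio(r1, se1, r2, se2):
    rr = r1 / r2
    se_log = math.sqrt((se1 / r1) ** 2 + (se2 / r2) ** 2)
    se_rr = rr * se_log            # SE back on the ratio scale
    t = math.log(rr) / se_log      # t statistic for H0: R1/R2 = 1
    return rr, se_rr, t

# Hypothetical subgroup ratios and standard errors from PROC RATIO:
rr, se_rr, t = relative_ratio(2.4, 0.3, 1.2, 0.2)
print(rr, se_rr, t)
```

If the subgroups share PSUs, the independence assumption is doubtful and the covariance term would need to be estimated as well.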
