

Newsgroup sci.stat.consult 21703

Directory

Subject: Re: What do we mean by "The Null Hypothesis"? -- From: rwhite@rideau.carleton.ca (Robert White)
Subject: Re: Probability -- From: Ellen Hertz
Subject: Re: SAS help -- From: nakhob@mat.ulaval.ca (Renaud Langis)
Subject: Re: Time-Series Analysis -- From: nakhob@mat.ulaval.ca (Renaud Langis)
Subject: Need Reference for Cluster Analysis, Market Segmentation, Neural Network -- From: mswann@pluto.njcc.com (MHS)
Subject: High Noon for Biomedical Journals -- From: "Leif Peterson, Ph.D."
Subject: Papers for intro to research/stats -- From: John Roden

Articles

Subject: Re: What do we mean by "The Null Hypothesis"?
From: rwhite@rideau.carleton.ca (Robert White)
Date: 19 Dec 1996 02:16:32 GMT
In <599ov6$par@usenet.srv.cis.pitt.edu> wpilib+@pitt.edu (Richard F Ulrich) writes:
>How about, "A tested hypothesis must specify a value that does have
>a particular meaning, or _gravitas_.  Though that may be any of the
There is a tolerance on _all_ measures, plus or minus some error.
Even when working with calibrated ratio-level data there will be
error in the measures. When physicists measure jewels they work
with instruments that can only measure to a certain degree of
accuracy; beyond half a micron one needs optical measures, because
hand-held instruments can't measure beyond half a micron. In
toolmaking and machining, measures are always verified against
gauge blocks that are themselves tested for accuracy, and all
instruments are re-calibrated every few months. What I am trying
to say is that these Ho:/Ha: hypotheses are instruments, just like
a vernier caliper or a micrometer. These instruments cannot measure
_anything_ with 100% accuracy, and there is _always_ a tolerance on
measurements. Now Hays attempts to point this out to all and does
an admirable job of it, but few really get it. In short, and don't
ever forget this for as long as you are in science: the Null is an
instrument, and as such it is liable to fall out of 'calibration'
and need to be re-set by standard decision making. Moreover, the
Null cannot answer or ask questions for us; as such it is an
interpretative device not unlike a micrometer or vernier caliper.
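The 'calibration' picture can be made concrete with a quick simulation. This is a sketch, not from the original post, assuming a plain two-sided z-test with known sigma: when the null is true, a well-calibrated test should reject at close to its nominal 5% rate.

```python
import random
import math

def z_test_rejects(n, crit=1.96):
    """Simulate one experiment under a true null (mu = 0, sigma = 1)
    and report whether a two-sided z-test rejects at the 5% level."""
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    xbar = sum(xs) / n
    z = xbar / (1.0 / math.sqrt(n))  # known sigma = 1
    return abs(z) > crit

random.seed(1)
trials = 20000
rate = sum(z_test_rejects(25) for _ in range(trials)) / trials
print(round(rate, 3))  # should land close to the nominal 0.05
```

If the observed rejection rate drifted away from 0.05, the 'instrument' would indeed be out of calibration.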
>'no-effect'  sometimes is represented by a number.
>"Metaphorically, the null is also reminiscent of a singularity, or 
>a black-hole, which is a sort of zero  -  it is what your conclusions
>have to collapse to, if your data come out totally noisy.  It is 
>certainly different from the way we regard 'alternate" hypotheses'."
There is no 'sort of zero' when one is working in theory or
theoretical frames. Zero itself is a theory. The Null is not
claiming to find an absolute; more to the point, the Null is a
device or instrument that, if calibrated properly, may test out
to as close an approximation to 'sort of zero' as probabilistic
terminology allows. And even probability theory is less than
perfect, if you would like to argue about the chance factors
built into the theory. As far as I am concerned, you statisticians
are taking the theory to be absolute when in fact everyone in
science knows that theory is a tautology in and of itself. If we
did not have this tautology we would not be able to conduct the
discipline. Now stop saying that things are real and start
thinking conceptually and THEORETICALLY.
Sorry for shouting.
These H0:/Ha: debates have gone on long enough to warrant a FAQ
on the Null alone. Let's get Hays to write the FAQ.
Merry X-mas,
Robert [never passed a math course in my life] White
>What we are discussing here is a pedagogical question, rather
>than a statistical one.  In the TECHNICAL terms, I am right and
>Hays is wrong, I think, because every hypothesis *is*  reduced 
>to what Clay termed a 'tautological' form, where there is a zero.
>(At least, that is the way for writing formal, mathematical hypotheses
>for t-tests and ANOVAs, where you show that the computed term does 
>have the intended distribution, of t or chisquared.  I don't really
>remember writing hypotheses  for anything else.)
>Further, I am using "effect size"  in the same technical sense that
>Hays uses the phrase, above, where the effect size *is*  zero under 
>the null.  (Note: Clay has been saying it differently, using 
>effect-size as synonymous with, say, raw-change-score.  I would
>rather keep it as a technical term.)
>For the sake of pedagogy, the Hays approach does de-emphasize 
>zero as COMPARISON value.  Is that a major problem?  Personally, I
>have not had trouble explaining the difference between effect-size
>and comparison-value.  But I do my explaining to one or two
>persons at a time.  Also, I have not read Hays, so I do not know
>what further use he might make of the ideas in the course of
>his presentation.  If the citation came from his introduction,
>then maybe he had a lot more to say.  If it came from his summary,
>then I think that he just made a meager point, where he could have
>argued more fruitfully.
>Rich Ulrich, biostatistician              wpilib+@pitt.edu
>Western Psychiatric Inst. and Clinic   Univ. of Pittsburgh
-- 
   ----------------------------------------- Carleton University ----------
               Robert G. White               Dept. of Psychology   
                                             Ottawa, Ontario. CANADA
   INTERNET ADDRESS ----- rwhite@ccs.carleton.ca ------------------- E-MAIL
   ------------------------------------------------------------------------
Subject: Re: Probability
From: Ellen Hertz
Date: Wed, 18 Dec 1996 22:51:25 -0500
KIM3264 wrote:
> 
> Please review the following problem and let me know if i am on the right
> track...
> 
> Suppose that you own a portfolio (randomly selected) of 16 stocks.  On a
> certain day, you hear the news that the average stock rose 1.5 points.
> Assuming that the std deviation of stock price movement that day was 2
> points and assuming stock price movements were normally distributed
> around their mean of 1.5, what is the probability that the average stock in the
> portfolio increased in price?
> 
> My solution:
> 
>            1.5/(2/sqrt16)
>          = 1.5/.5 = 3
> 
> Am i on the right track?
Yes, given the assumptions that the stock price movements were a normal
population with mean 1.5 and std 2 and that you had a random sample
of size 16. Then its average, Xbar, is normal with mean 1.5 and std .5.
Pr(Xbar >0)= Pr((Xbar-1.5)/.5 > -3)= 1-N(-3) = N(3) = .9987 where N is
the standard normal cumulative distribution function.
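The arithmetic above can be verified in a few lines of Python; `math.erf` gives the standard normal CDF without any third-party library (the numbers below simply restate the post).

```python
import math

def norm_cdf(z):
    """Standard normal cumulative distribution function via erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma, n = 1.5, 2.0, 16
se = sigma / math.sqrt(n)   # std of Xbar = 2/sqrt(16) = .5
z = (0.0 - mu) / se         # = -3
p = 1.0 - norm_cdf(z)       # Pr(Xbar > 0) = 1 - N(-3) = N(3)
print(round(p, 4))          # 0.9987
```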
Subject: Re: SAS help
From: nakhob@mat.ulaval.ca (Renaud Langis)
Date: Thu, 19 Dec 1996 03:09:34 GMT
On Fri, 13 Dec 1996 00:07:22 -0500, Ya-Fen Lo wrote:
>Hi,
>
>This is a beginners' SAS question.
>I am a social scientist trying to 
>finish my final project in a research class.
>
>Is it possible to perform tests of simple effects
>(as defined in APPLIED STATISTICS by HINKEL/WIERSMA/JURS)
>in SAS ? I am using the following setup
>
>PROC ANOVA DATA=PROJECT;
>     CLASSES A B;
>     MODEL S=A B A*B;
>     MEANS A B A*B
>     MEANS A B A*B/TUKEY BON;
>     FORMAT A AA. B BB.;
>     TITLE 'THE TWO-WAY FIXED-MODEL ANOVA';
>
>The second means statement doesn't perform the simple
>effects as I would have expected.
>
You can use the TEST statement in PROC GLM, and maybe also in
PROC ANOVA. Or do you simply want to know whether an effect is
significant? If so, just check the ANOVA table.
I suppose this is just a typo, but CLASSES should be written CLASS.
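For readers without SAS at hand, here is a sketch in plain Python of what a simple-effects test computes in the balanced case: an F statistic for factor A at each fixed level of B, using the pooled MSE from the full A*B model. The data here are made up purely for illustration.

```python
# Hypothetical balanced two-way layout: 2 levels of A, 2 of B, n = 3 per cell.
data = {
    ("a1", "b1"): [4.0, 5.0, 6.0],
    ("a2", "b1"): [8.0, 9.0, 10.0],
    ("a1", "b2"): [5.0, 6.0, 7.0],
    ("a2", "b2"): [5.0, 5.0, 7.0],
}

def mean(xs):
    return sum(xs) / len(xs)

# Pooled within-cell error (MSE) from the full A*B model.
cells = list(data.values())
n = len(cells[0])
sse = sum(sum((x - mean(c)) ** 2 for x in c) for c in cells)
df_error = sum(len(c) - 1 for c in cells)
mse = sse / df_error

def simple_effect_F(b_level):
    """F for the simple effect of A at one level of B (balanced case)."""
    means = [mean(v) for (a, b), v in data.items() if b == b_level]
    grand = mean(means)
    ss = n * sum((m - grand) ** 2 for m in means)
    df = len(means) - 1
    return (ss / df) / mse

print(round(simple_effect_F("b1"), 2))  # A matters a lot at b1
print(round(simple_effect_F("b2"), 2))  # but hardly at all at b2
```

In SAS itself, the SLICE= option on the LSMEANS statement of PROC GLM is, as far as I know, the usual route to these tests.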
R
Subject: Re: Time-Series Analysis
From: nakhob@mat.ulaval.ca (Renaud Langis)
Date: Thu, 19 Dec 1996 03:19:12 GMT
On Mon, 16 Dec 1996 17:34:09 -0500, "Joseph K. Lyou"  wrote:
>I want to analyze whether there is a significant trend over time in the
>annual failure rate of a product.  I have 20 years of measurements (i.e., n =
>20).  As I understand it, an ordinary regression analysis would be
>inappropriate because the residuals are not independent (i.e., the error
>associated with a failure rate for 1974 is more highly correlated with the
>1975 failure rate than the 1994 failure rate).  Is it appropriate to simply
>divide the data into two groups (the 1st 10 years vs. the 2nd 10 years) and
>do a between-groups ANOVA?  Or is there some other (better) way to analyze
>these data?
>
I think you would do better to use ARIMA models to deal with the
autocorrelation. Otherwise, there are methods for modelling the
error term in a regression analysis; I know a book on that but
unfortunately don't remember its name. The author is a guy called
Ostrom.
Then again, there may be no autocorrelation in your data.
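Before committing to an ARIMA model it is worth measuring the autocorrelation. A minimal pure-Python check, using simulated white noise in place of the 20 annual failure rates (which the post does not include):

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a series (deviations from its mean)."""
    m = sum(xs) / len(xs)
    num = sum((xs[t] - m) * (xs[t - 1] - m) for t in range(1, len(xs)))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

def durbin_watson(resid):
    """Durbin-Watson statistic; values near 2 suggest no lag-1 autocorrelation."""
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    return num / sum(e * e for e in resid)

random.seed(0)
white = [random.gauss(0, 1) for _ in range(500)]
print(round(lag1_autocorr(white), 2))   # near 0 for independent noise
print(round(durbin_watson(white), 2))   # near 2 for independent noise
```

For genuinely autocorrelated residuals, the lag-1 estimate moves away from 0 and Durbin-Watson away from 2.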
R
Subject: Need Reference for Cluster Analysis, Market Segmentation, Neural Network
From: mswann@pluto.njcc.com (MHS)
Date: 19 Dec 1996 02:54:21 GMT
I know it's a long list, but could someone recommend a book (or
books) with good reference material on Cluster Analysis, Market
Segmentation, and Neural Networks, preferably including how to use
these techniques for modeling.
Thanks in advance for your e-mail.
Subject: High Noon for Biomedical Journals
From: "Leif Peterson, Ph.D."
Date: Wed, 18 Dec 1996 16:57:33 -0500
Dear list members:
Ron LaPorte at Univ. of Pittsburgh posted this on the epidemio-l list server
at Univ. of Montreal and asked that it be forwarded to other lists.  The
posting describes a Web home page that is addressing the challenges journal
houses face with regard to publishing on the internet.  In view of Ron's
request, I have forwarded it to the following list servers:
icrher@listserv.bcm.tmc.edu
DOSE-NET@orau.gov
MEDPHYS@CMS.CC.WAYNE.EDU
cdn-nucl-l@listserv.cis.mcmaster.ca
radsafe@romulus.ehs.uiuc.edu
EPIWORLD@UNIVSCVM.CSD.SCAROLINA.EDU
stat-l@vm1.mcgill.ca
Please don't send it on to these lists again.
Leif E. Peterson, Ph.D.
ICRHER List Administrator (icrher@listserv.bcm.tmc.edu)
International Consortium for Research on Health Effects of Radiation
Baylor College of Medicine
Houston, Texas
peterson@bcm.tmc.edu
Message follows:
-----------------------------------------
Date: Tue, 17 Dec 1996 18:31:20 -0400 (EDT)
From: "Ronald E. LaPorte from Pittsburgh" 
To: epidemio-l@CC.UMontreal.CA
Subject: Re: EPIDEMIO-L digest 804
Message-ID: <01ID4K8JA5TU936DOO@vms.cis.pitt.edu>
Dec. 1996
High Noon for Biomedical Journals
Scientists  from the Global Health Network
(www.pitt.edu/HOME/GHNet/GHNet.html) predict that within
5 years most scientists will move their intellectual properties
to the Internet.  This will spell the demise of most paper
journals as we know them.  In two recent communications in
the British Medical Journal they indicated that an Internet-based
system would be much more powerful and more available to
scientists than journals.  Moreover, they are questioning the
current copyright practice of the journals as this inhibits the
use of the Internet for posting communications.  A major
problem, however, is that little is known about how best to
present research communications on the Internet.  In their
web site (www.pitt.edu/HOME/GHNet/publications/assassin/),
an experiment is being conducted where a research
communication called Scientists Assassinate Journals is
presented in English, Spanish, Portuguese and Japanese.
This is presented in a lay version, scientific version, or an
editor version.  In addition, it is presented in a "hypertext
comic book" form; all versions include sound.  Within each version
there are considerable opportunities to provide constructive
comments concerning the presentation or content.
We encourage scientists, editors, and lay people from all
walks of life to come to our site and comment.  In this
manner, we will have data from the scientific community as to
how best to present scientific research communications.  We
would suggest that you forward this to your friends and to list
servers and news groups, as this affects the whole scientific
community; the more input, the better.
Ronald LaPorte, Ph.D.
Deborah Aaron, Ph.D.
Akira Sekikawa, M.D.
Ingrid Libman, M.D., Ph.D.
Benjamin Acosta, M.D.
Lucia Iochida, Ph.D.
Eugene Boostrom, M.D.
Anthony Villasenor, B.S.
Amy Brenen, B.S.
Subject: Papers for intro to research/stats
From: John Roden
Date: Wed, 11 Dec 1996 12:08:34 -0500
Hello - I used to teach research methods and wrote some short papers to
help the students understand the material in the text.  I put these
papers on my homepage at:
http://www.frontiernet.net/~roden/RES1.HTM
If they are helpful, enjoy them, and give me credit if you use
them as handouts.
John Roden, Ph.D.
Evaluation and Systems Solutions
