Newsgroup sci.stat.math 11844

Directory

Subject: Re: Occam's razor & WDB2T [was Decidability question] -- From: kenneth paul collins
Subject: Re: Occam's razor & WDB2T [was Decidability question] -- From: kenneth paul collins
Subject: Re: Occam's razor & WDB2T [was Decidability question] -- From: kenneth paul collins
Subject: Re: Prob Density Function for sine -- From: Dick DeLoach
Subject: Re: Occam's razor & WDB2T [was Decidability question] -- From: ikastan@alumnae.caltech.edu (Ilias Kastanas)
Subject: Re: Occam's razor & WDB2T [was Decidability question] -- From: ikastan@alumnae.caltech.edu (Ilias Kastanas)
Subject: Re: correllating 2 data sets -- From: Benjamin Roberts
Subject: Re: Occam's razor & WDB2T [was Decidability question] -- From: Jim Balter
Subject: Re: Occam's razor & WDB2T [was Decidability question] -- From: patrick@gryphon.psych.ox.ac.uk (Patrick Juola)
Subject: Re: CONFIDENCE INTERVAL -- From: Robert L Strawderman
Subject: Re: Bootstrap algorithm -- From: Rodney Sparapani
Subject: Re: Implausible null hypotheses -- From: mcohen@cpcug.org (Michael Cohen)
Subject: Re: Occam's razor & WDB2T [was Decidability question] -- From: owinebar@ezinfo.ucs.indiana.edu (Onnie Lynn Winebarger)
Subject: Re: Implausible null hypotheses -- From: johno@vcd.hp.com (John Ongtooguk)
Subject: Multinomial Logistic Regression, SAS question -- From: tstanley@lamar.ColoState.EDU (Thomas R Stanley)
Subject: Need Help!!! -- From: jtahara@chat.carleton.ca (James Tahara)
Subject: Re: Occam's razor & WDB2T [was Decidability question] -- From: ikastan@alumnae.caltech.edu (Ilias Kastanas)
Subject: Need cosine-distributed random help. -- From: ronb@cc.usu.edu (DR TE$TH & THE ELECTRIC MAYHEM)
Subject: Re: Occam's razor & WDB2T [was Decidability question] -- From: ikastan@alumnae.caltech.edu (Ilias Kastanas)
Subject: Re: Implausible null hypotheses -- From: aacbrown@aol.com
Subject: Re: Occam's razor & WDB2T [was Decidability question] -- From: n_nelson@ix.netcom.com(Neil Nelson)
Subject: Re: CONFIDENCE INTERVAL -- From: aacbrown@aol.com
Subject: Re: E[ X | X => X^*] = ? -- From: Frank Tuyl
Subject: Re: Need cosine-distributed random help. -- From: Paul Abbott
Subject: Re: Occam's razor & WDB2T [was Decidability question] -- From: ikastan@sol.uucp (ilias kastanas 08-14-90)
Subject: Significance of standard residuals in Chi-square -- From: Daniel Davis
Subject: Re: Bonferroni's Method -- From: engp6373@leonis.nus.sg (Than Su Ee)

Articles

Subject: Re: Occam's razor & WDB2T [was Decidability question]
From: kenneth paul collins
Date: Mon, 25 Nov 1996 01:36:22 -0500
Ilias Kastanas wrote:
>         The "rules" of "proof" have been shown to be _the_ rules of proof;
>    that is the Completeness theorem.  If you see something wrong there, maybe
>    you could state what.  As it is, breaking the rules is pointless and
>    self-defeating... and irrelevant to the Incompleteness theorem.
I don't have a problem with that. What I stand against are over-generalized uses 
of Godel's "incompleteness"... "Math is 'incomplete', and cannot possibly be 
'completed'"... instances in which a naysayer invokes Godel to argue against 
the possibility of this or that without even considering the thing in question.
Godel's Proof is consequential only within the realm of the rules of Godel's 
Proof. When one looks, one finds that the realm of the rules of Godel's Proof is 
just an island within the realm of all possible Mathematics. It just doesn't say 
anything about anything that's not "on that island". And yet, it's most-often 
invoked as if it says stuff about all possible Mathematics. It doesn't.
ken collins
_____________________________________________________
People hate because they fear, and they fear because
they do not understand, and they do not understand 
because hating is less work than understanding.
Return to Top
Subject: Re: Occam's razor & WDB2T [was Decidability question]
From: kenneth paul collins
Date: Mon, 25 Nov 1996 01:50:10 -0500
Jonathan Gibbs wrote:
> 
> kenneth paul collins (KPCollins@postoffice.worldnet.att.net) wrote:
> : A machine that is designed so that it can "divide & conquer"
> : can render such infinities irrelevant. Basically, such a
> : machine transforms all problems into Geometry, and, instead
> : of "algorithms", use cross-correlation among continuua to
> : arrive at solutions.
> 
> [tons snipped]
> 
> Sounds very fascinating ken, can you point me to a good reference on
> this stuff? Perhaps a journal paper...
I've developed a unified theory of CNS function, cognition, affect, 
and behavior (Neuroscientific Duality Theory). During that 
development, I modeled and tested concepts pertaining to the neural 
dynamics. The "continuum Geometry computer" (CGC) that I was 
discussing is just a generalization of the models. It's not formally 
published anywhere yet. The Neuroscience discussion is available in a 
hypertext doc that runs on MSDOS machines. I can email you a copy if 
you want it.
I can discuss the CGC in person, and have been looking for a place to 
do so. It's a bit hard to do so online because everything has to be 
converted to Geometry, diagrammed, and discussed from the perspective 
of the diagrams. In person, I just have at it with a box or two of 
colored chalk. ken collins
_____________________________________________________
People hate because they fear, and they fear because
they do not understand, and they do not understand 
because hating is less work than understanding.
Return to Top
Subject: Re: Occam's razor & WDB2T [was Decidability question]
From: kenneth paul collins
Date: Mon, 25 Nov 1996 02:00:52 -0500
Ilias Kastanas wrote:
>         Journal article or not, this is about analogue computation...
The CGC is envisioned as an analogue-digital hybrid.
>    Which can take many forms; e.g. build a model of a graph, edges being
>    pieces of string of appropriate lengths, and hold it up under the
>    effect of gravity;  use pegs and rubber bands around them; and so on.
>    One can obtain approximate solutions to a number of problems.  On the
>    other hand, it is certainly a different subject.
Not really. I've been discussing computation that reduces to what's 
described by 2nd Thermo (WDB2T), which is entirely analogue, and which can 
be shown to be an inclusive superset of all possible digital computation. 
And so such is extremely-commonplace... it's been so "familiar" that it's 
been "invisible". ken collins
_____________________________________________________
People hate because they fear, and they fear because
they do not understand, and they do not understand 
because hating is less work than understanding.
Return to Top
Subject: Re: Prob Density Function for sine
From: Dick DeLoach
Date: Mon, 25 Nov 1996 04:09:56 -0500
Dick DeLoach wrote:
> 
> I'm having the statistical equivalent of writer's block.  Can someone
> PLEASE remind me what the probability density function is for a simple
> sine wave?  It's that U-shaped thingy -- you know what I mean.  Thanks!
> 
>    --- Dick
I can help you, now that my writer's block has gone away.
For y=A*sin(x):
   pdf=(1/(A*Pi))*(1/SQRT(1-((y/A)^2)))
Cum prob function: (1/2) + (1/Pi)*Arcsin(y/A)
Confidence interval at alpha level of significance:
  CI = A*sin((Pi/2)*(1-alpha))  (<-- This is what I was really after)
   --- Dick
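Those closed forms are easy to sanity-check by simulation. The sketch below (Python; names are illustrative, not from the thread) draws y = A*sin(x) for x uniform over a full cycle and compares the empirical CDF with (1/2) + (1/Pi)*Arcsin(y/A):

```python
import math
import random

def sine_cdf(y, A=1.0):
    # Closed-form CDF of y = A*sin(x) for x uniform over a full cycle
    return 0.5 + math.asin(y / A) / math.pi

random.seed(1)
A = 2.0
samples = [A * math.sin(random.uniform(0, 2 * math.pi)) for _ in range(200_000)]

# Compare the empirical CDF with the closed form at a few test points
for y in (-1.5, 0.0, 1.0):
    emp = sum(s <= y for s in samples) / len(samples)
    print(y, round(emp, 3), round(sine_cdf(y, A), 3))
```

The two columns should agree to a couple of decimal places at this sample size.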
Return to Top
Subject: Re: Occam's razor & WDB2T [was Decidability question]
From: ikastan@alumnae.caltech.edu (Ilias Kastanas)
Date: 25 Nov 1996 10:13:10 GMT
In article <32993E66.25E@postoffice.worldnet.att.net>,
kenneth paul collins   wrote:
>Ilias Kastanas wrote:
>
>>         The "rules" of "proof" have been shown to be _the_ rules of proof;
>>    that is the Completeness theorem.  If you see something wrong there, maybe
>>    you could state what.  As it is, breaking the rules is pointless and
>>    self-defeating... and irrelevant to the Incompleteness theorem.
>
>I don't have a problem with that. What I stand against are over-generalized uses 
>of Godel's "incompleteness"... "Math is 'incomplete', and cannot possibly be 
>'completed'"... instances in which a nay sayer invokes Godel to aruge against 
>the possibility of this or that without even considering the thing in question.
	I agree.  G.I. is misunderstood and misused a lot.  
>Godel's Proof is consequential only within the realm of the rules of Godel's 
>Proof. When one looks, one finds that the realm of the rules of Godel's Proof is 
>just an island within the realm of all possible Mathematics. It just doesn't say 
>anything about anything that's not "on that island". And yet, it's most-often 
>invoked as if it says stuff about all possible Mathematics. It doesn't.
	"From P, Q follows" is explicated: in any structure where P holds,
   Q also holds.  It is a semantic notion, covering math. entailment as we
   know it.  The simple-looking formal deduction rules _do_ capture this no-
   tion; every known proof can be written as such a formal deduction.
	So G.I. does talk about all of Mathematics (as we presently see it,
   at least): no formal system will ever "do it all"; instead, insight and
   creativity are needed.  A positive message, and its first proponent was
   Goedel himself.
							Ilias
Return to Top
Subject: Re: Occam's razor & WDB2T [was Decidability question]
From: ikastan@alumnae.caltech.edu (Ilias Kastanas)
Date: 25 Nov 1996 10:41:04 GMT
In article <32994424.136C@postoffice.worldnet.att.net>,
kenneth paul collins   wrote:
>Ilias Kastanas wrote:
>
>>         Journal article or not, this is about analogue computation...
>
>The CGC is envisioned as an analogue-digital hybrid.
>
>>    Which can take many forms; e.g. build a model of a graph, edges being
>>    pieces of string of appropriate lengths, and hold it up under the
>>    effect of gravity;  use pegs and rubber bands around them; and so on.
>>    One can obtain approximate solutions to a number of problems.  On the
>>    other hand, it is certainly a different subject.
>
>Not really. I've been discussing computation that reduces to what's 
>described by 2nd Thermo (WDB2T), which is entirely analogue, and which can 
>be shown to be an inclusive superset of all possible digital computation. 
>And so such is extremely-commonplace... it's been so "familiar" that it's 
>been "invisible". ken collins
	"Analogue" measures "continuous" quantities, and so yields appro-
   ximate answers (which of course may be fully adequate for various prac-
   tical problems).   "Digital" (discrete), with exact answers, is different
   (and classical computability theory applies).  You can obtain it by quan-
   tizing, more than 3.7 V is "1" etc; but whatever the implementation, its
   properties are there.
	Whenever you employ "analogue" and somehow reach an exact answer,
   "digital" can reach that answer (and in effect you _are_ using the latter).
   Is there a counterexample to this (inevitably vague) statement?
							Ilias
Return to Top
Subject: Re: correllating 2 data sets
From: Benjamin Roberts
Date: Mon, 25 Nov 1996 19:22:30 +0900
Cross-correlation can be a useful exploratory technique. It is in
essence what the previous posters have described; however, you will have
a 'lag' in the correlation, i.e.
	y = a(x[i]) + b // where i is the lag (the distance on your time scale
between the x time series and the y time series).  
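The lagged fit above can be explored by scanning candidate lags for the one that maximizes the correlation. A minimal sketch, assuming a plain Pearson correlation and made-up series (all names invented):

```python
import math

def pearson(u, v):
    # Plain Pearson correlation coefficient
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def lagged_corr(x, y, lag):
    # Correlate y[t] with x[t - lag]; positive lag means x leads y
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return pearson(x, y)

# y is x delayed by 3 steps, so the correlation peaks at lag = 3
x = [0, 1, 4, 2, 5, 3, 7, 6, 8, 5, 9, 4]
y = [0, 0, 0] + x[:-3]
best = max(range(-4, 5), key=lambda k: lagged_corr(x, y, k))
print(best)  # the lag that maximizes the correlation
```

The peak lag recovers the time shift between the two series, which is the 'i' in the equation above.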
-- 
Benjamin Roberts 
Cognitive Laboratory 
Psychology Department 
University of Western Australia
MY REAL ADDRESS -> benjamin@psy.uwa.edu.au
Return to Top
Subject: Re: Occam's razor & WDB2T [was Decidability question]
From: Jim Balter
Date: Mon, 25 Nov 1996 03:53:34 -0800
Ilias Kastanas wrote:
> 
> In article <32993E66.25E@postoffice.worldnet.att.net>,
> kenneth paul collins   wrote:
> >Ilias Kastanas wrote:
> >
> >>         The "rules" of "proof" have been shown to be _the_ rules of proof;
> >>    that is the Completeness theorem.  If you see something wrong there, maybe
> >>    you could state what.  As it is, breaking the rules is pointless and
> >>    self-defeating... and irrelevant to the Incompleteness theorem.
> >
> >I don't have a problem with that. What I stand against are over-generalized uses
> >of Godel's "incompleteness"... "Math is 'incomplete', and cannot possibly be
> >'completed'"... instances in which a nay sayer invokes Godel to aruge against
> >the possibility of this or that without even considering the thing in question.
> 
>         I agree.  G.I. is misunderstood and misused a lot.
> 
> >Godel's Proof is consequential only within the realm of the rules of Godel's
> >Proof. When one looks, one finds that the realm of the rules of Godel's Proof is
> >just an island within the realm of all possible Mathematics. It just doesn't say
> >anything about anything that's not "on that island". And yet, it's most-often
> >invoked as if it says stuff about all possible Mathematics. It doesn't.
> 
>         "From P, Q follows" is explicated: in any structure where P holds,
>    Q also holds.  It is a semantic notion, covering math. entailment as we
>    know it.  The simple-looking formal deduction rules _do_ capture this no-
>    tion; every known proof can be written as such a formal deduction.
> 
>         So G.I. does talk about all of Mathematics (as we presently see it,
>    at least): no formal system will ever "do it all"; instead, insight and
>    creativity are needed.  A positive message, and its first proponent was
>    Goedel himself.
a) Insight and creativity won't "do it all" either.
b) Inconsistent formal systems can "do it all".
Godel's "message" is an irrelevant metaphysical flight of fancy.
--

Return to Top
Subject: Re: Occam's razor & WDB2T [was Decidability question]
From: patrick@gryphon.psych.ox.ac.uk (Patrick Juola)
Date: 25 Nov 1996 12:37:15 GMT
In article <57bt40$brh@gap.cco.caltech.edu> ikastan@alumnae.caltech.edu (Ilias Kastanas) writes:
>In article <32994424.136C@postoffice.worldnet.att.net>,
>kenneth paul collins   wrote:
>>Ilias Kastanas wrote:
>>
>>>         Journal article or not, this is about analogue computation...
>>
>>The CGC is envisioned as an analogue-digital hybrid.
>
>	"Analogue" measures "continuous" quantities, and so yields appro-
>   ximate answers (which of course may be fully adequate for various prac-
>   tical problems).   "Digital" (discrete), with exact answers, is different
>   (and classical computability theory applies).  You can obtain it by quan-
>   tizing, more than 3.7 V is "1" etc; but whatever the implementation, its
>   properties are there.
>
>	Whenever you employ "analogue" and somehow reach an exact answer,
>   "digital" can reach that answer (and in effect you _are_ using the latter).
>   Is there a counterexample to this (inevitably vague) statement?
Not all analog data is necessarily continuous.  For example, identity
is almost always discrete.  So if you use an analog calculation to
get a "subject identity" result, the result will be exact but not
necessarily available to a digital calculation (within the same
complexity bounds).
The classic example of this, of course, is the O(1) argmax algorithm.
	Patrick
Return to Top
Subject: Re: CONFIDENCE INTERVAL
From: Robert L Strawderman
Date: Mon, 25 Nov 1996 09:01:13 -0500
George Caplan wrote:
> 
> I have been reading about premarketing clinical trials
> of a medication. The manufacturer says that there
> were 3 adverse reactions out ot 2796 patients. This, he
> says, yields a crude incidence of adverse reactions
> of 1.1 per thousand with a "very wide" 95% confidence
> interval of 2.2 cases per 10,000 to 3.1 cases
> per 1000.
> 
> How does one make a confidence interval estimate in
> this case? The number of reactions is so small that
> I don't think I can approximate the s.d. by
> sqrt(((p(1-p))/n); and when I do, I don't get the result
> given.
The formula for the sd is appropriate regardless of sample
size - it is the formula for a binomial proportion.
However, the method of forming the CI (i.e., using the normal 
approximation to the binomial) may not be. Using the usual
normal approximation, the 95% CI is (-.00014, .00228). This
procedure is inappropriate here, but it also does not match their
quoted answer. 
Using the binomial distribution, I obtain (0.00039,0.00313) 
as the 95% CI, which is "exact" - the exact sampling distribution
is used in this calculation. It also does not match what they
report, but is closer.
There are other possible methods (e.g., a Poisson approximation
or an arcsine transformation) which may have been used to
obtain their results.
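Both intervals can be reproduced with a few lines of code. This sketch (the function names are mine, not from any package mentioned here) computes the normal-approximation interval and a Clopper-Pearson-style exact interval by bisection on the binomial tail; exact endpoints vary with the convention used, so they need not match the manufacturer's figures:

```python
import math

n, x = 2796, 3
p_hat = x / n

# Normal approximation: p_hat +/- 1.96 * sqrt(p_hat*(1 - p_hat)/n)
half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
norm_ci = (p_hat - half, p_hat + half)

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p); k is small here, so direct summation is cheap
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def solve(f, target, lo=1e-9, hi=0.5):
    # Bisection for f(p) = target, with f monotone decreasing in p
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Clopper-Pearson-style exact 95% interval:
# the lower endpoint solves P(X >= 3 | p) = .025, the upper solves P(X <= 3 | p) = .025
cp_lower = solve(lambda p: binom_cdf(x - 1, n, p), 0.975)
cp_upper = solve(lambda p: binom_cdf(x, n, p), 0.025)
print(norm_ci, (cp_lower, cp_upper))
```

The normal-approximation interval reproduces the (-.00014, .00228) quoted above, including its negative (hence nonsensical) lower endpoint.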
-- 
***************************************************************************
Robert Strawderman, Sc.D.	Email:  strawder@umich.edu
Department of Biostatistics  	Office:	(313) 936 - 1002
University of Michigan		Fax:    (313) 763 - 2215
1420 Washington Heights		
Ann Arbor, MI 48109-2029	Web:	http://www.sph.umich.edu/~strawder/
***************************************************************************
Return to Top
Subject: Re: Bootstrap algorithm
From: Rodney Sparapani
Date: Mon, 25 Nov 1996 10:18:13 -0500
Eric Saczuk wrote:
> 
> I am wondering if anyone out there knows where and/or if I can get my
> hands on software which would allow me to carry out the Bootstrap
> algorithm?  I have about 7,000 iterations to perform and I don't look
> forward to programming my own macro in Fortran.  If anyone has any
> useful info on this, please let me know; it would be greatly
> appreciated, thanks!
> 
> Cheers, Eric S
Eric:
Attached is a SAS macro that does bootstrapping, but it does so
by creating large files.  Judicious use of KEEP or DROP statements is
warranted, and BY variables are out of the question.
Rodney
%global _iter_;
%macro boot(iter=1000, data=&syslast, out=_null_, seed=int(time()), class=, by=&class);
%*
MACRO:	BOOT
	INPUT PARAMETERS
		BY:	The stratification variable that the dataset is sorted by (not required, default=&CLASS).
		CLASS:	The stratification variable (not required, no default).
		DATA:	The input dataset (not required, default=&SYSLAST).
		ITER:	The number of iterations to perform (not required, default=1000).
		OUT:	The output dataset that you would like to create (required, default=_NULL_).
		SEED:	The seed to use for the RANUNI() function (not required, default=INT(TIME())).
	OUTPUT DATASET VARIABLES
		BOOT:	Zero for input data, one for re-sampled data.
		OBS:	The observation number of the input data corresponding to the re-sample.
		SEED:	The seed used for the RANUNI() function.
		STRAP:	Zero for input data, the current re-sample iteration otherwise.
	INTERMEDIATE OUTPUT DATASET VARIABLES
		END:	A flag set to true on the last record of the input dataset.
		OFFSET:	Pointer to the beginning of the current stratification block.
		I:	Counter that loops NOBS times for the current STRAP re-sample.
		NOBS:   The number of observations in the current stratification block.
		P:	Pointer to the observation to be selected from the current stratification block.
	GLOBAL MACRO VARIABLES CREATED
		_ITER_:	 Set to the input parameter value &ITER.
		_STRATA_:Set to the stratification input parameter &BY, (if any).
	LOCAL MACRO VARIABLES CREATED
		LAST:	Set to the last stratification variable from the &BY list, (if any).
		I:	Steps through the &BY list, to find the &LAST variable, (if any).
;
%local last i;
%let _iter_=&iter;
%if %length(&class) %then %do;
proc sort data=&data out=&out;
%end;
%else %do;
data &out;
set &data;
%end;
by &by;
run;
%if %length(&by) %then %do;
	%global _strata_;
	%let _strata_=%upcase(&by);
	%let i=1;
	%do %while(%length(%scan(&by, &i, %str( ))));
		%let last=%scan(&by, &i, %str( ));
		%let i=%eval(&i+1);
	%end;
%end;
data &out;
set &out end=end;
by &by;
drop i nobs offset;
retain offset boot strap 0;
seed=&seed;
obs=_n_;
output;
if end then do;
	link boot;
	stop;
end;
%if %length(&by) %then %do;
else if last.&last then do;
	link boot;
end;
%end;
return;
boot:
boot=1;
nobs=_n_-offset;
do strap=1 to &iter;
	do i=1 to nobs;
		p=ranuni(seed);
		p=ceil(nobs*p)+offset;
		set &out point=p;
		obs=p;
		output;
	end;
end;
boot=0;
strap=0;
offset=_n_;
return;
run;
%if %length(&by) %then %do;
proc sort data=&out;
by boot strap &by obs;
run;
%end;
%mend boot;
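For readers without SAS, the resampling itself is only a few lines in most languages. A minimal percentile-bootstrap sketch (data, seed, and names invented for illustration):

```python
import random
import statistics

def bootstrap_means(data, iters=1000, seed=42):
    # Resample with replacement, collecting the statistic of interest (here, the mean)
    rng = random.Random(seed)
    n = len(data)
    return [statistics.fmean(rng.choices(data, k=n)) for _ in range(iters)]

data = [2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9, 2.2, 3.6]
means = sorted(bootstrap_means(data))
ci = (means[24], means[974])  # rough 95% percentile interval from 1000 resamples
print(ci)
```

Swapping in a different statistic (median, regression coefficient, etc.) only changes the function applied to each resample.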
Return to Top
Subject: Re: Implausible null hypotheses
From: mcohen@cpcug.org (Michael Cohen)
Date: 25 Nov 1996 19:59:09 GMT
Marks Nester (marks@qfri.se2.dpi.qld.gov.au) wrote:
: 
: If a null hypothesis can reasonably be assumed to be implausible
: (surely this is generally the case)
: then why can't a statistician have the gumption to bypass testing 
: a silly null hypothesis and proceed directly to 
: point/interval estimates etc.?
: 
Point and interval estimates are very useful, but what about testing a
non-silly null hypothesis?  For example, H0: |x|
Return to Top
Subject: Re: Occam's razor & WDB2T [was Decidability question]
From: owinebar@ezinfo.ucs.indiana.edu (Onnie Lynn Winebarger)
Date: 25 Nov 1996 20:44:24 GMT
In article <32993E66.25E@postoffice.worldnet.att.net>,
kenneth paul collins   wrote:
>I don't have a problem with that. What I stand against are over-generalized uses 
>of Godel's "incompleteness"... "Math is 'incomplete', and cannot possibly be 
>'completed'"... instances in which a nay sayer invokes Godel to aruge against 
>the possibility of this or that without even considering the thing in question.
>
>Godel's Proof is consequential only within the realm of the rules of Godel's 
>Proof. When one looks, one finds that the realm of the rules of Godel's Proof is 
>just an island within the realm of all possible Mathematics. It just doesn't say 
>anything about anything that's not "on that island". And yet, it's most-often 
>invoked as if it says stuff about all possible Mathematics. It doesn't.
>
>ken collins
    Actually Goedel's Incompleteness Theorem applies to any consistent,
effectively axiomatized theory that satisfies Peano's Axioms.  Since the
Axioms of Set Theory satisfy Peano's Axioms and (we hope) are
consistent, and all Mathematics can be cast as parts of Set Theory,
we can conclude that the Incompleteness theorem does indeed "say stuff"
about all mathematics.  Most notably, there are true sentences (among
them self-referential ones) that cannot be proved from the axioms.
Lynn
Return to Top
Subject: Re: Implausible null hypotheses
From: johno@vcd.hp.com (John Ongtooguk)
Date: 25 Nov 1996 20:55:28 GMT
Michael Cohen (mcohen@cpcug.org) wrote:
: Point and interval estimates are very useful, but what about testing a
: non-silly null hypothesis?  For example, H0: |x|
Return to Top
Subject: Multinomial Logistic Regression, SAS question
From: tstanley@lamar.ColoState.EDU (Thomas R Stanley)
Date: 25 Nov 1996 14:33:17 -0700
Greetings:
  I am using SAS Proc Catmod for multinomial logistic regression and have 
a question somebody may be able to help me with.  Basically, I have a 
set of data where each observation is known to belong to one of 10 classes.
My goal is to parameterize a set of functions that predict (when given
a set of predictor variables) class membership with a low misclassification
rate.  So far no problem.  SAS parameterizes the functions and will output 
a predicted probability of class membership (i.e. the predicted probability
that observation i belongs to class j, j=1,...,J) from which I can get 
misclassification rates.  The next step, however, is to use the functions
to predict class membership for an independent set of "test" data so I can 
get an independent estimate of the misclassification rate.  Is there some 
way I can get SAS to give me the predicted class membership for a set of 
observations not used to parameterize the predictive functions?  I can do 
it by hand (that is, hard code the predictive functions in a program to 
compute the predicted probabilities) but this is impractical since there
is a huge number of parameters and I want to evaluate several different
predictive models.  I tried appending the "test" data to the "training"
data (i.e. the data used to parameterize the functions) after setting 
the dependent variable equal to . (hoping SAS would still give predicted 
values for observations where class membership was unknown), but SAS simply 
deleted these observations and gave predicted probabilities for the 
training data only.  Any suggestions (including other programs that 
might be better than SAS Proc Catmod)?  A final note, the dependent variable
is nominal whereas the independent variables are a mix of discrete (nominal) 
and continuous variables.  Any help would be appreciated.
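Whatever procedure fits the model, scoring an independent test set by hand reduces to evaluating the J-1 fitted linear predictors and applying a softmax. A sketch with invented coefficients (all names and numbers here are hypothetical, not output from any SAS procedure):

```python
import math

def predict_probs(x, coefs):
    # Multinomial-logit scoring: coefs[j] = (intercept, *slopes) for class j
    # versus a baseline class whose coefficients are all zero.
    linpreds = [0.0] + [b[0] + sum(bk * xk for bk, xk in zip(b[1:], x))
                        for b in coefs]
    m = max(linpreds)  # subtract the max before exponentiating, for stability
    exps = [math.exp(lp - m) for lp in linpreds]
    total = sum(exps)
    return [e / total for e in exps]

# Two non-baseline classes, two predictors; coefficients invented for illustration
coefs = [(0.5, 1.2, -0.3), (-1.0, 0.4, 0.8)]
probs = predict_probs([0.6, 1.5], coefs)
print(probs, sum(probs))
```

With many parameters the only tedious part is exporting the fitted coefficients in a machine-readable form; the scoring loop itself is model-independent.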
Tom Stanley
Midcontinent Ecological Science Center
4512 McMurry Ave.
Fort Collins, CO  80525
tstanley@lamar.colostate.edu
Return to Top
Subject: Need Help!!!
From: jtahara@chat.carleton.ca (James Tahara)
Date: 25 Nov 1996 20:20:54 GMT
----------------------------------------------------------------------
James Tahara
Carleton University
Email address: jtahara@chat.carleton.ca
----------------------------------------------------------------------
Hi, I am a student in economics reviewing for an exam.  I got some
old midterm questions that I can't answer.  I am taking a first-year
statistics course in which we are currently covering random sampling,
normal distributions, and confidence intervals.  Can someone help me?
Here are the questions:
----------------------
4.  Two independent random samples of sizes 50 and 100, respectively, are
to be drawn from a large binomial population.
  (a) Determine the relative efficiency of the sample proportion based on 
      the sample of size 50 with respect to the sample proportion based on
      the sample of size 100.
  (b) Propose a new estimator of the population proportion that makes more
      efficient use of the information from these two samples; i.e., your 
      proposed estimator should be relatively more efficient than either
      of the above estimators.
  (c) Show that your proposed estimator from part (b) is unbiased.
6.   Let {X1,X2} be a random sample of size 2 drawn from a population.    
     Consider the following three estimators of the population mean (all  
     of which are unbiased):
         Estimator 1         Estimator 2          Estimator 3
         -----------         -----------          -----------
         X1 + X2             X1 + 3X2             X1 + 2X2
         -------             --------             --------
            2                   4                    3
  (a) Prove that estimator 2 is unbiased.
  (b) Rank the three estimators from most to least efficient.
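Whatever ranking one derives for question 6, it can be checked by simulation, since for independent draws Var(c1*X1 + c2*X2) = (c1^2 + c2^2) * sigma^2. A sketch (population, seed, and sample size are arbitrary choices):

```python
import random
import statistics

random.seed(0)
mu, reps = 10.0, 200_000
draws = [(random.gauss(mu, 1), random.gauss(mu, 1)) for _ in range(reps)]

estimators = {
    "(X1 + X2)/2": lambda a, b: (a + b) / 2,
    "(X1 + 3*X2)/4": lambda a, b: (a + 3 * b) / 4,
    "(X1 + 2*X2)/3": lambda a, b: (a + 2 * b) / 3,
}
results = {}
for name, est in estimators.items():
    vals = [est(a, b) for a, b in draws]
    # All three should be unbiased (mean near mu); the variances reveal the ranking
    results[name] = (statistics.fmean(vals), statistics.variance(vals))
    print(name, results[name])
```

With sigma = 1 the empirical variances should settle near 1/2, 5/8, and 5/9 respectively, which is the ranking the algebra gives.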
E-mail me at above address
Thank You!!!!!!!!!!!!!
Return to Top
Subject: Re: Occam's razor & WDB2T [was Decidability question]
From: ikastan@alumnae.caltech.edu (Ilias Kastanas)
Date: 25 Nov 1996 21:57:29 GMT
In article <57c3tr$4s5@news.ox.ac.uk>,
Patrick Juola  wrote:
>In article <57bt40$brh@gap.cco.caltech.edu> ikastan@alumnae.caltech.edu (Ilias Kastanas) writes:
>>In article <32994424.136C@postoffice.worldnet.att.net>,
>>kenneth paul collins   wrote:
>>>Ilias Kastanas wrote:
>>>
>>>>         Journal article or not, this is about analogue computation...
>>>
>>>The CGC is envisioned as an analogue-digital hybrid.
>>
>>	"Analogue" measures "continuous" quantities, and so yields appro-
>>   ximate answers (which of course may be fully adequate for various prac-
>>   tical problems).   "Digital" (discrete), with exact answers, is different
>>   (and classical computability theory applies).  You can obtain it by quan-
>>   tizing, more than 3.7 V is "1" etc; but whatever the implementation, its
>>   properties are there.
>>
>>	Whenever you employ "analogue" and somehow reach an exact answer,
>>   "digital" can reach that answer (and in effect you _are_ using the latter).
>>   Is there a counterexample to this (inevitably vague) statement?
>
>Not all analog data is necessarily continuous.  For example, identity
>is almost always discrete.  So if you use an analog calculation to
>get a "subject identity" result, the result will be exact but not
>necessarily available to a digital calculation (within the same
>complexity bounds).
>
>The classic example of this, of course, is the O(1) argmax algorithm.
	_How_ do you obtain an 'exact' result?  What objects are you
   operating on?
	By the way, there _is_ computability for reals; and the recursive
   ones are finitely describable.  But I don't see this making any
   difference.
							Ilias
Return to Top
Subject: Need cosine-distributed random help.
From: ronb@cc.usu.edu (DR TE$TH & THE ELECTRIC MAYHEM)
Date: 25 Nov 96 15:28:24 MDT
I need to generate a series of random numbers whose distribution follows the
cosine curve.  I.e.
      |^^---..
      |       ^^--.
      |            ^^-.
      |                ^-
      |                  ^-
      |                    ^.
      |                      -
      |                       -
      |                        -
      |-------------------------|
      0                        PI/2
Each value t from 0 to PI/2 must occur cos(t) times as often as the
control value (zero).
I have tried doing it the following way (C code):
do {
 r=rand()/RAND_MAX;    /*generates a random value from 0 to 1...*/
} while (r==1.0);      /*PI/2 has a probability of zero, so throw it out...*/
angle=PI_OVER_2-acos(r); /*this _should_ give me what I'm after, as far as I
                           can tell.*/
If I run a histogram on the distribution of this function, I get something that
looks very much like cosine.  However, when I apply it in a program that I've
written, I do not get correct results in my calibration routines.
If you have ANY idea what is wrong with the above, or can provide a working
function to accomplish the same thing, please let me know.
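One likely culprit in the fragment above, assuming r is declared double: rand()/RAND_MAX divides two ints in C, so it should read (double)rand()/RAND_MAX. The transform itself is sound, since PI_OVER_2 - acos(r) equals asin(r), the inverse of the CDF F(t) = sin(t). The same method in Python, with a quantile check:

```python
import math
import random

def cosine_angle(rng=random):
    # Inverse-CDF sampling for density cos(t) on [0, pi/2]: the CDF is F(t) = sin(t),
    # so t = asin(u) for u uniform on [0, 1) -- equivalent to pi/2 - acos(u)
    return math.asin(rng.random())

random.seed(7)
angles = [cosine_angle() for _ in range(100_000)]

# Check: F(pi/6) = sin(pi/6) = 0.5, so about half the draws fall below pi/6
frac = sum(a < math.pi / 6 for a in angles) / len(angles)
print(round(frac, 3))
```

If a histogram of the C version looks right but calibration fails, it is worth confirming that the normalization really produces a float in [0, 1) rather than an integer 0.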
	rOn
--
Fone:801-787-8525  Page: 801-755-3746 - punch in your phone number and hit '#'
DISCLAIMER - These opoi^H^H "dang", ^H, [esc :q :qq !q "NYRGH!" :Q! "Whaddya
mean, Not an editor command?" :wq! ^C^C^C !STOP ^bye ^quit :quit! !halt ...
^w^q :!w :wq! ^D :qq!! ^STOP [HALT!   HALT!!! "Why's it doing this?" :stopit!
:wwqq!! ^Z ^L ^ESC STOP  :bye  bye  bye! M-X-DOCTOR "HELP! I can't get out of
this stupid editor!!!" And how does this make you feel?
Return to Top
Subject: Re: Occam's razor & WDB2T [was Decidability question]
From: ikastan@alumnae.caltech.edu (Ilias Kastanas)
Date: 26 Nov 1996 02:57:37 GMT
In article <329988BE.4370@netcom.com>, Jim Balter   wrote:
>Ilias Kastanas wrote:
>> 
>> In article <32993E66.25E@postoffice.worldnet.att.net>,
>> kenneth paul collins   wrote:
>> >Ilias Kastanas wrote:
>> >
>> >>         The "rules" of "proof" have been shown to be _the_ rules of proof;
>> >>    that is the Completeness theorem.  If you see something wrong there, maybe
>> >>    you could state what.  As it is, breaking the rules is pointless and
>> >>    self-defeating... and irrelevant to the Incompleteness theorem.
>> >
>> >I don't have a problem with that. What I stand against are over-generalized uses
>> >of Godel's "incompleteness"... "Math is 'incomplete', and cannot possibly be
>> >'completed'"... instances in which a nay sayer invokes Godel to aruge against
>> >the possibility of this or that without even considering the thing in question.
>> 
>>         I agree.  G.I. is misunderstood and misused a lot.
>> 
>> >Godel's Proof is consequential only within the realm of the rules of Godel's
>> >Proof. When one looks, one finds that the realm of the rules of Godel's Proof is
>> >just an island within the realm of all possible Mathematics. It just doesn't say
>> >anything about anything that's not "on that island". And yet, it's most-often
>> >invoked as if it says stuff about all possible Mathematics. It doesn't.
>> 
>>         "From P, Q follows" is explicated: in any structure where P holds,
>>    Q also holds.  It is a semantic notion, covering mathematical entailment
>>    as we know it.  The simple-looking formal deduction rules _do_ capture
>>    this notion; every known proof can be written as such a formal deduction.
>> 
>>         So G.I. does talk about all of Mathematics (as we presently see it,
>>    at least): no formal system will ever "do it all"; instead, insight and
>>    creativity are needed.  A positive message, and its first proponent was
>>    Goedel himself.
>a) Insight and creativity won't "do it all" either.
	Of course they won't.  They will "do this"... which the current
    formalization won't.
>b) Inconsistent formal systems can "do it all".
	But they also "undo it all".
>
>"Godel's message" is an irrelevant metaphysical flight of fancy.
	Hardly metaphysical; it is a statement about the integers.  Relevance
   lies with the beholder.  Some people do care about the Continuum
   Hypothesis, or measurability of projective sets of reals, or similar;
   some don't.
						Ilias
Subject: Re: Implausible null hypotheses
From: aacbrown@aol.com
Date: 26 Nov 1996 03:27:30 GMT
Marks Nester writes:
> Bayesians may also test implausible null hypotheses. I think it
> is sad if a majority of statisticians consider it useful to test
> implausible null hypotheses. If tradition binds them to this approach
> then I believe that it is a silly tradition. If a null hypothesis can
> reasonably be assumed to be implausible (surely this is generally
> the case) then why can't a statistician have the gumption to bypass
> testing a silly null hypothesis and proceed directly to point/interval
> estimates etc.? Not all traditions are good traditions.
I agree that Bayesians often test implausible null hypotheses. However, the
same "tradition-be-damned, let's be rational" spirit that leads people to
Bayesianism often also leads them to "give me confidence intervals, not
hypothesis tests."
I also agree that not all traditions are good traditions; I assume Mr.
Nester will agree that not all traditions are bad traditions. You must
take them case by case.
If everyone agrees on the same model, then there is no doubt that
confidence intervals are better than hypothesis tests of implausible null
hypotheses. But when people disagree on the model, which is usually the
case on questions of practical interest, they will not even agree on what
parameters to set confidence intervals for.
Here is where a null hypothesis, so simple as to be implausible, is
useful. It allows everyone to test within their own model, using their own
parameters. If the data clearly indicate something, every honest, open
person will see it.
Consider the question "which major league baseball team is the best?"
People will argue for their home team. But test the null hypothesis that
all teams are equal and each game is just a 50/50 coin flip. Most years
you cannot reject it. Therefore all honest, open people who have looked at
the data will admit this is just a question of opinion; there is no solid
evidence that any team is the best.
The null hypothesis is implausible, but that's not the point. It's so
simple that anyone can test it and agree on the results. A confidence
interval for the "ability" vector of professional teams would require many
assumptions that no two sports fans would agree about.
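The coin-flip null above is easy to test directly. As a sketch (the win totals and team count here are hypothetical illustrations, not from any actual season), a two-sided binomial test of one team's record, with a crude Bonferroni-style correction for having scanned the whole league to find the best record:

```python
from scipy.stats import binomtest

# Hypothetical record: 96 wins in 162 games, in a 28-team league
# (illustrative numbers only).
wins, games, n_teams = 96, 162, 28

# Null hypothesis: every game is a fair 50/50 coin flip.
result = binomtest(wins, games, p=0.5, alternative='two-sided')
print(f"raw p-value:       {result.pvalue:.3f}")

# We only looked at this team because it had the best record; a crude
# correction multiplies the p-value by the number of teams scanned.
print(f"corrected p-value: {min(1.0, result.pvalue * n_teams):.3f}")
```

Even a 96-win season, impressive on its face, typically fails to reject the null once the correction for cherry-picking the best record is applied, which illustrates the "most years you cannot reject it" point above.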
In a perfect world we would all use confidence intervals. But in the human
world of disagreement and controversy, hypothesis tests bring a faint
light of hope.
Aaron C. Brown
New York, NY
Subject: Re: Occam's razor & WDB2T [was Decidability question]
From: n_nelson@ix.netcom.com(Neil Nelson)
Date: 26 Nov 1996 02:58:22 GMT
owinebar@ezinfo.ucs.indiana.edu (Onnie Lynn Winebarger) wrote:
< In article <32993E66.25E@postoffice.worldnet.att.net>,
< kenneth paul collins  
< wrote:
<> I don't have a problem with that. What I stand against are 
<> over-generalized uses of Godel's "incompleteness"... "Math 
<> is 'incomplete', and cannot possibly be 'completed'"... 
<> instances in which a naysayer invokes Godel to argue 
<> against the possibility of this or that without even 
<> considering the thing in question.
>
<> Godel's Proof is consequential only within the realm of the 
<> rules of Godel's Proof. When one looks, one finds that the 
<> realm of the rules of Godel's Proof is just an island within 
<> the realm of all possible Mathematics. It just doesn't say 
<> anything about anything that's not "on that island". And 
<> yet, it's most-often invoked as if it says stuff about all 
<> possible Mathematics. It doesn't.
<>
<> ken collins
<    Actually Goedel's Incompleteness Theorem applies to any 
< set of axioms that is consistent and satisfies Peano's 
< Axioms.  Since the Axioms of Set Theory satisfy Peano's 
< Axioms and (we hope) are consistent, and all Mathematics can 
< be cast as parts of Set Theory, we can conclude that the 
< Incompleteness theorem does indeed "say stuff" about all 
< mathematics.  Most notably, there are true sentences that 
< talk about themselves.
Goedel's Incompleteness Theorem declares eight axioms in 
addition to Peano's three, significantly those for first order 
classical logic.  Given that we are free to specify a different 
mathematics using different axioms, it follows that the 
Incompleteness theorem does not "say stuff" about all of 
mathematics.
Neil Nelson
Subject: Re: CONFIDENCE INTERVAL
From: aacbrown@aol.com
Date: 26 Nov 1996 03:40:15 GMT
george.caplan@channel1.com (George Caplan) in <40.8382.2615@channel1.com>
writes:
> I have been reading about premarketing clinical trials
> of a medication. The manufacturer says that there were 3
> adverse reactions out of 2796 patients. This, he says,
> yields a crude incidence of adverse reactions of 1.1 per
> thousand with a "very wide" 95% confidence interval of 
> 2.2 cases per 10,000 to 3.1 cases per 1000. 
I'm not sure how they got those results either. The usual 95% confidence
interval for a Poisson mean, given an observed value of 3, is 1.09 to
8.77; this gives an incidence of 0.000390 to 0.003136 (0.390 to 3.136 per
1,000). The upper limit agrees (and is presumably the important one), but
the lower one differs.
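For comparison, here is a sketch (my own computation, offered tentatively) of the standard exact (Garwood) interval for a Poisson count, obtained from its chi-square relation; it appears to reproduce the manufacturer's quoted endpoints:

```python
from scipy.stats import chi2

k, n, alpha = 3, 2796, 0.05   # 3 adverse reactions among 2,796 patients

# Exact (Garwood) 95% CI for a Poisson count k via the chi-square relation:
#   lower = chi2(alpha/2, 2k)/2,  upper = chi2(1-alpha/2, 2(k+1))/2
lower = 0.5 * chi2.ppf(alpha / 2, 2 * k)            # about 0.62
upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (k + 1))  # about 8.77

# Convert counts to incidence rates per 1,000 patients.
print(f"{1000 * lower / n:.2f} to {1000 * upper / n:.2f} per 1,000")
```

This gives roughly 0.22 to 3.14 per 1,000, i.e. about 2.2 per 10,000 to 3.1 per 1,000, so the manufacturer's lower limit looks like the exact chi-square bound rather than the 1.09 count quoted above.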
Aaron C. Brown
New York, NY
Subject: Re: E[ X | X => X^*] = ?
From: Frank Tuyl
Date: Tue, 26 Nov 1996 14:32:45 +1000
Tatsuo Ochiai wrote:
> 
> Suppose X ~ N(mu, sigma^2). Then, what is the formula for
> 
>      E[X|X=>X^*] : Expected value of X given X is greater than or
> equal      to some fixed number X^*
> 
> Could anyone give me the reference?
> 
> Thanks in advance.
> 
> Tatsuo Ochiai
> tochiai@students.wisc.edu
I'm not sure that Dimitri's response is correct: why doesn't your X^*
appear anywhere? If I just write down the pdf p(X|X>=X^*), I get the
original pdf with a correction factor (i.e., a denominator equal to the
integral of the original pdf from X^* to infinity).
If I then take the expectation of this new pdf (between X^* and 
infinity), after some manipulations I get:
mu + Num/Denom
where Num = sigma*f(z), Denom = integral of f(x) from z to infinity,
and where f is the standard normal density and z = (X^* - mu)/sigma.
Someone might like to check this! (For mu = X^* = 0 and sigma = 1, I get 
the square root of (2/pi), which I think is correct.)
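The formula is easy to check numerically; a minimal sketch (the function name is mine), using scipy:

```python
import math
from scipy.stats import norm

def truncated_mean(mu, sigma, x_star):
    """E[X | X >= x_star] for X ~ N(mu, sigma^2), per the formula above."""
    z = (x_star - mu) / sigma
    # norm.sf(z) is the survival function 1 - Phi(z), i.e. the Denom above.
    return mu + sigma * norm.pdf(z) / norm.sf(z)

# The sanity check from the post: mu = X^* = 0, sigma = 1 gives sqrt(2/pi).
print(truncated_mean(0.0, 1.0, 0.0), math.sqrt(2 / math.pi))
```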
Regards,
Frank
Subject: Re: Need cosine-distributed random help.
From: Paul Abbott
Date: Tue, 26 Nov 1996 13:21:50 +0800
DR TE$TH & THE ELECTRIC MAYHEM wrote:
> 
> I need to generate a series of random numbers whose distribution 
> follows the cosine curve.  
The (normalised) probability density function (pdf) for this
distribution is
	Cos[theta]/Integrate[Cos[theta],{theta,0,Pi/2}] == Cos[theta]
and the Cumulative Distribution Function (cdf) is
	Integrate[Cos[theta],{theta,0,x}] = Sin[x]
Hence the inverse cdf is ArcSin[r].  So, if you apply ArcSin to
UNIFORMLY distributed random numbers in [0,1] you will get random
numbers whose distribution follows the cosine curve.  
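The inverse-CDF recipe above can be sketched as follows (a minimal illustration; the sample size and seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Inverse-CDF sampling: apply ArcSin to uniform draws on [0, 1] to get
# angles on [0, pi/2] with density cos(theta), as derived above.
theta = np.arcsin(rng.uniform(0.0, 1.0, size=100_000))

# The exact mean of this distribution is pi/2 - 1 (about 0.571),
# which the sample mean should approximate.
print(theta.mean())
```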
Cheers,
	Paul 
_________________________________________________________________ 
Paul Abbott
Department of Physics                       Phone: +61-9-380-2734 
The University of Western Australia           Fax: +61-9-380-1014
Nedlands WA  6907                         paul@physics.uwa.edu.au 
AUSTRALIA                           http://www.pd.uwa.edu.au/Paul
          God IS a weakly left-handed dice player
_________________________________________________________________
Subject: Re: Occam's razor & WDB2T [was Decidability question]
From: ikastan@sol.uucp (ilias kastanas 08-14-90)
Date: 26 Nov 1996 03:44:16 GMT
In article <57dmce$230@sjx-ixn6.ix.netcom.com>,
Neil Nelson  wrote:
@ [quoted text snipped]
@
@Goedel's Incompleteness Theorem declares eight axioms in 
@addition to Peano's three, significantly those for first order 
@classical logic.  Given that we are free to specify a different 
@mathematics using different axioms, it follows that the 
@Incompleteness theorem does not "say stuff" about all of 
@mathematics.
	Those axioms capture the semantic notion of mathematical
   consequence, as we know it;  so G.I. does cover "all"...
   (at least until we develop something deviant!)
					Ilias
Subject: Significance of standard residuals in Chi-square
From: Daniel Davis
Date: Tue, 26 Nov 1996 00:05:51 -0800
I'm wondering if someone can help me track down a particular formula.
I've been trying to find the formula for determining the significance
of standardized residuals per cell in a Chi-square table, and I haven't
had any luck. I'm an archaeologist at the University of Kentucky; I've
had three stat classes (two social science and one geography), and I own
five books on stats, but the only standardized-residual material I can
find (and that I'm familiar with) is for regression, not Chi-square. It
would be especially useful, when comparing single cells across multiple
Chi-square tests, to determine, say, which lithic reduction sequences by
chert type are closest! Ah, the excitement!
Thank you
Daniel B.
Subject: Re: Bonferroni's Method
From: engp6373@leonis.nus.sg (Than Su Ee)
Date: 25 Nov 1996 05:30:56 GMT
You are right: the Bonferroni bounds are conservative, so the true joint
confidence level will always be at least their nominal level. How much
higher is very problem-dependent, and you can easily investigate it by
running some simulations. In my experience, the Bonferroni bounds often
give a true confidence level between 97% and 100% when the target is 95%.
Hope this helps.
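The suggested simulation might look like this (a sketch under my own assumptions: two independent normal means, t-based intervals, nominal joint level 95%):

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(42)
k, n, reps = 2, 20, 20_000

# Bonferroni: split alpha = 0.05 across k intervals, so 97.5% each.
tq = t.ppf(1 - 0.05 / (2 * k), df=n - 1)

covered = 0
for _ in range(reps):
    x = rng.normal(0.0, 1.0, size=(k, n))    # k independent samples, true mean 0
    m = x.mean(axis=1)
    se = x.std(axis=1, ddof=1) / np.sqrt(n)
    covered += np.all(np.abs(m) <= tq * se)  # do all k intervals cover 0?

joint = covered / reps
print(joint)  # estimated joint coverage (expected ~0.975**k, about 0.9506)
```

For independent intervals the true joint level is 0.975^k, just over the 95% target; positive dependence between the intervals pushes the joint coverage higher still.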
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
 Than Su Ee                                 Tel:(65) 7722208(Off)	
 Dept. of Industrial & Systems Engineering  Fax:(65) 7771434
 National University of Singapore      	    Email1:engp6373@leonis.nus.sg
 10 Kent Ridge Crescent                     Email2:suee@post1.com 
 Singapore 119260                           http://www.nus.sg/~ise 
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
