Michael Brewer wrote:

> Can you point me towards a paper or book that is available in most
> good libraries that tells me about Laws masks... the only references I
> have seen are to Laws' original PhD thesis and to some obscure
> proceedings... not too useful even if you have a copyright library

W.K. Pratt, 1991, Digital Image Processing, 2nd ed., Wiley-Interscience, has a few pages on them.

-- Jon Campbell
In article frt@senator-bedfellow.MIT.EDU, lones@lones.mit.edu (Lones A Smith) writes:

>This really concerns probability theory.
>
>Let S(x,c) be a closed Borel subset of [0,1], for any x in [0,1] & real c.
>
>Suppose S has the "0-1 intersection property": for any c1 and c2, and for
>all x1 <> x2, S(x1,c1) and S(x2,c2) have either zero or one point in common.
>
>CLAIM: {x in [0,1] | the union of S(x,c) over all real c has measure > 0} is countable.

What about S(x,c) = {c} if c is in [0,1], and {x} otherwise? Then for any x, the union over c of S(x,c) = [0,1], but each S(x,c) is a single point, so the family must satisfy the 0-1 intersection property. Am I missing something?

Peter Wollan
wollan@mayo.edu
Hello,

Can anyone tell me how to solve the quadratic matrix equation for the elliptically shaped confidence region (confidence ellipse, or error ellipse) around the estimated slope vector in the classical normal multiple linear regression model? I have the matrix equation but do not know how to solve it and/or turn it into a two-dimensional graph.

Thanks in advance.

Anders Alexandersson
E-mail: makst28+@pitt.edu
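Not a full answer, but for the two-slope case a common route is this: the region {b : (b - bhat)' inv(Sigma) (b - bhat) <= c}, with Sigma the estimated covariance matrix of the slopes and c the appropriate chi-square or F quantile, can be traced out by eigendecomposing Sigma. A pure-Python sketch; the covariance numbers, the center, and the quantile constant below are illustrative placeholders, not values from any real fit:

```python
import math

def ellipse_points(center, sigma, c, n=360):
    """Boundary of {p : (p-center)' inv(sigma) (p-center) = c}
    for a symmetric positive-definite 2x2 matrix sigma."""
    (a, b), (_, d) = sigma
    # Eigendecomposition of a symmetric 2x2 matrix by hand.
    mean, diff = (a + d) / 2.0, (a - d) / 2.0
    r = math.hypot(diff, b)
    lam1, lam2 = mean + r, mean - r          # eigenvalues, lam1 >= lam2
    phi = 0.5 * math.atan2(2.0 * b, a - d)   # direction of the major axis
    pts = []
    for k in range(n):
        t = 2.0 * math.pi * k / n
        # Point on the axis-aligned ellipse, then rotate and translate.
        u = math.sqrt(c * lam1) * math.cos(t)
        v = math.sqrt(c * lam2) * math.sin(t)
        x = center[0] + u * math.cos(phi) - v * math.sin(phi)
        y = center[1] + u * math.sin(phi) + v * math.cos(phi)
        pts.append((x, y))
    return pts

# Hypothetical numbers: estimated slopes and their covariance matrix.
bhat = (2.0, -1.0)
cov = ((0.04, 0.01), (0.01, 0.09))
c95 = 5.991  # chi-square(2) 95% point; with small samples use 2*F(2, n-p) instead
boundary = ellipse_points(bhat, cov, c95)
```

The returned (x, y) pairs can then be fed to any plotting routine as an ordinary closed curve.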
Nick,

Unless you have reason to believe the categories are equally spaced, I would worry a little about numbering them, just as you said. And the general test of association may not tell you what you need to know: how do the proportions relate to each other on an ordinal scale?

You have a couple of options, I would think. In PROC FREQ in SAS, you can select scores different from 1,2,3,4 (CMH), but you don't seem to have any strong belief in how much distance there is between categories. You could use ridits, which assign scores that take into account the number in each category... but I think we are back to numbering the categories if I remember the ridit procedure correctly; it does some kind of midranking, as I recall. You could use a straight ranking procedure, like Kruskal-Wallis/Mann-Whitney.

Another technique for analyzing these data (your example fits the classic mold very well) is something called a "proportional odds model." It would be the best choice if the assumptions are met. SAS will fit this type of model using PROC LOGISTIC and test the proportional odds assumption. If the proportional odds assumption isn't reasonable, you can fit a "generalized logits" model using PROC CATMOD... you could compare the "not at all" group to all the rest.

Agresti would be a good place to look for references, but Agresti isn't an "elementary" text... I haven't seen his new book, but it might be a little more elementary. All of these would require a little reading on your part concerning the assumptions.

n.w.nelson@education.leeds.ac.uk (nick nelson) wrote:

>Say I have responses on an attitude scale
>
>eg Do you like this? lots / some / a little / not at all
>
>and two groups eg men and women. What is the best way to
>establish whether the two groups differ significantly?
>
>One approach I have seen involves numbering the responses 4,3,2,1
>and working with the means, but this seems dubious due to the
>non-interval nature of the scale.
>
>Alternatively you could cast the responses in a 4x2 table and do
>a chi2 on it, but this ignores the order information altogether.
>
>Is there a middle path?
>
>Nick.
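To make the midrank idea mentioned above concrete, here is a small pure-Python sketch of a Mann-Whitney U statistic computed directly from a 4x2 table of ordered categories, with midranks handling the (massive) ties and a tie-corrected normal approximation. The counts are invented for illustration:

```python
import math

def mann_whitney_from_table(counts_a, counts_b):
    """Mann-Whitney U from per-category counts for two groups on an
    ordered scale; ties get midranks. Returns (U, z) with a normal
    approximation including the tie correction."""
    m, n = sum(counts_a), sum(counts_b)
    N = m + n
    # Midrank of each category: average rank of its block of tied values.
    totals = [ca + cb for ca, cb in zip(counts_a, counts_b)]
    midranks, start = [], 1
    for t in totals:
        midranks.append(start + (t - 1) / 2.0)
        start += t
    # Rank sum for group A, then U.
    ra = sum(c * r for c, r in zip(counts_a, midranks))
    u = ra - m * (m + 1) / 2.0
    # Normal approximation with tie correction.
    mu = m * n / 2.0
    tie = sum(t ** 3 - t for t in totals)
    var = m * n / 12.0 * ((N + 1) - tie / (N * (N - 1)))
    z = (u - mu) / math.sqrt(var)
    return u, z

# Invented counts, categories ordered: lots / some / a little / not at all.
men = [20, 15, 10, 5]
women = [10, 12, 14, 14]
u, z = mann_whitney_from_table(men, women)
```

The midranks here play the same role as the ridit-style scores: they are determined by the data, not by an arbitrary 4,3,2,1 coding.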
hi! anyone know where i could get an algorithm for generating random numbers from a non-central chi-square distribution? [please NOTE the ***non-central*** chi-square. i know there are zillions of programs that can generate from the standard chi-square and other distributions]

ideally i would like to do this in maple v.3 but would accept pascal, fortran, C or S code/algorithm too. or a way of moving from a random number from some "standard" distribution to non-central chi-square.

e-mailed replies will be appreciated since there is a several-day lag [up to a week at times] in my newsserver getting articles :(

thanx in advance.

nadeem
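Two standard routes, for what it's worth: a noncentral chi-square with k degrees of freedom and noncentrality lambda is a Poisson(lambda/2) mixture of central chi-squares with k + 2J degrees of freedom; and for integer k it is also the sum of k squared independent normals, one of which has mean sqrt(lambda). A pure-Python sketch of both, easy to transcribe into Maple/Pascal/Fortran/C:

```python
import math
import random

def noncentral_chi2_mixture(k, lam, rng=random):
    """Poisson-mixture method: draw J ~ Poisson(lam/2), then a central
    chi-square with k + 2J df (chi2(n) is Gamma(n/2, scale 2))."""
    # Simple Poisson sampler by inversion (fine for moderate lam).
    term = math.exp(-lam / 2.0)
    cum, j = term, 0
    u = rng.random()
    while u > cum:
        j += 1
        term *= (lam / 2.0) / j
        cum += term
    return rng.gammavariate((k + 2 * j) / 2.0, 2.0)

def noncentral_chi2_normals(k, lam, rng=random):
    """Direct method for integer k: one shifted squared normal plus
    k - 1 central ones."""
    total = (rng.gauss(0.0, 1.0) + math.sqrt(lam)) ** 2
    for _ in range(k - 1):
        total += rng.gauss(0.0, 1.0) ** 2
    return total

random.seed(1)
sample = [noncentral_chi2_mixture(3, 4.0) for _ in range(20000)]
# Sample mean should be near k + lam = 7, variance near 2k + 4*lam = 22.
```

The mixture method works for non-integer k as well; the direct method is the easier one to write in Maple if you already have a normal generator.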
On 14 Nov 1996, Bob Lee wrote:

> Hi, I have a question that I am having trouble with. I'd appreciate any help.
>
> Suppose the average family income of an area is $10,000.
>
> a) Find an upper bound for the percentage of families with incomes
> over $50,000.
> b) Find a better upper bound if it is known that the standard
> deviation of incomes is $8,000.
>
> I assume that some kind of distribution must be assumed.

I assume that your instructor expects you to do your own homework, or is this a take-home test? You will learn more if you really try to figure things out rather than try to get others to do your work!

Raymond V. Liedka
Department of Sociology
University of New Mexico
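A hint rather than a solution: contrary to the question's last line, no distributional assumption is needed. Parts (a) and (b) are textbook applications of two inequalities, stated here only in their general forms:

```latex
P(X \ge a) \;\le\; \frac{E[X]}{a} \qquad (X \ge 0,\; a > 0) \qquad \text{(Markov)}
```

```latex
P(|X - \mu| \ge t) \;\le\; \frac{\sigma^2}{t^2} \qquad (t > 0) \qquad \text{(Chebyshev)}
```

Chebyshev gives the sharper bound in (b) because it uses the extra information (the standard deviation) that Markov ignores.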
I need to know how to determine the standard error for the term Vmax/Km after Vmax and Km have been estimated from a number of data points using the following equation:

Equation 1:  Y = Vmax*X/(Km + X)

Variables
  Vmax  10.87
  Km    1.445
Std. Error
  Vmax  0.1802
  Km    0.09183
95% Confidence Intervals
  Vmax  10.50 to 11.25
  Km    1.255 to 1.636
Goodness of Fit
  Degrees of Freedom        22
  R^2                       0.9802
  Absolute Sum of Squares   3.226
  Sy.x                      0.3830
Data
  Number of X values        24
  Number of Y replicates    1
  Total number of values    24
  Number of missing values  0

What is the general procedure for determining the SE when you have two variables and their SEs and need to do various manipulations of them? It seems to me I learned a set of rules for addition and multiplication of errors, but I don't remember them and cannot find them.

For example (I know the data is messy, but): 1/(y-intercept) = 10.2, but what would the ± be? Certainly not 38.1.

Equation:  y = mx + b
  Slope        65.21 ± 14.80
  Y-intercept  0.09822 ± 0.02627
  X-intercept  -0.001506
  1/slope      0.01533
95% Confidence Intervals
  Slope        24.12 to 106.3
  Y-intercept  0.02530 to 0.1711
Goodness of Fit
  r^2   0.8291
  Sy.x  0.03690
Is slope significantly non-zero?
  F                     19.41
  DFn, DFd              1.000, 4.000
  P value               0.0116
  Deviation from zero?  Significant
Data
  Number of X values              6
  Maximum number of Y replicates  1
  Total number of values          6
  Number of missing values        10

*****************
T. Harter
harter@am.seer.wustl.edu
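The rules being half-remembered here are the first-order ("delta method") propagation-of-error formulas: for a ratio like Vmax/Km, relative errors add in quadrature, and for a reciprocal like 1/intercept, SE(1/a) is SE(a)/a^2, not 1/SE(a). A sketch using the numbers quoted above; note the ratio formula strictly needs the covariance between the two estimates, which the fit output above does not report, so it is taken as zero here (an assumption, not a recommendation):

```python
import math

def se_ratio(a, se_a, b, se_b, cov_ab=0.0):
    """First-order (delta-method) standard error of a/b:
    Var(a/b) ~= (a/b)^2 * (se_a^2/a^2 + se_b^2/b^2 - 2*cov_ab/(a*b))."""
    r = a / b
    var = r * r * ((se_a / a) ** 2 + (se_b / b) ** 2 - 2.0 * cov_ab / (a * b))
    return r, math.sqrt(var)

def se_reciprocal(a, se_a):
    """Delta method for 1/a: SE(1/a) ~= se_a / a^2."""
    return 1.0 / a, se_a / (a * a)

# Vmax/Km from the nonlinear fit quoted above (covariance assumed 0).
ratio, se = se_ratio(10.87, 0.1802, 1.445, 0.09183)

# The 1/(y-intercept) example from the linear fit: ~10.18 +/- ~2.72,
# i.e. nowhere near the 38.1 one gets by wrongly taking 1/SE.
inv, se_inv = se_reciprocal(0.09822, 0.02627)
```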
>< Radford Neal:
><
>< One often sees people using priors that are such that the
>< effective complexity of the model increases as the amount of
>< data increases. This makes no sense - it amounts to using a
>< prior that one knows is going to be contradicted by future
>< data.

Neil Nelson wrote:

>... Of course the difficulty here is the
>determination of the prior probabilities and algorithmic
>relation, for which our only effective recourse is an analysis
>of the previously and currently available data. This implies
>that our prior probabilities and algorithm may change depending
>on any increase in the available data; or more simply, we would
>not want to hold to our previous judgment if new information
>indicated we were previously in error.

This is not the case for a full Bayesian analysis, since the prior decided on before any data is collected will implicitly contain all the revisions of judgement that would be prompted by any possible data set.

In practice, a Bayesian is likely to use a model and prior that do not contain certain possibilities that seem very unlikely at first, simply because formalising all these possibilities is too much work. If the actual data indicate that these possibilities need to be considered, then the Bayesian might revise the prior and model, perhaps adopting a more complex one. However, I think that this scenario has little to do with the usual reasons why people think that you can't use complex models with small datasets. The usual reasons are not compatible with a Bayesian viewpoint.

   Radford Neal

----------------------------------------------------------------------------
Radford M. Neal                                       radford@cs.utoronto.ca
Dept. of Statistics and Dept. of Computer Science radford@utstat.utoronto.ca
University of Toronto                     http://www.cs.utoronto.ca/~radford
----------------------------------------------------------------------------
I have several concentric ellipses in Trellis graphics, plotted with xyplot. I started with a vector of x's and solved for the y's using an equation, then plotted the two vectors of data points with xyplot.

The problem is: I would like to locate the center of the innermost ellipse, then shift that ellipse to the left so that the right endpoint of its major axis (which lies in the direction of the abscissa) touches the center. Then I want to shift all the outer ellipses so that the right endpoint of the major axis of each ellipse passes through that original center point. I hope that's clear.

It sounds like a matter of determining the appropriate shift constant for each ellipse from the length of its major axis, then adding the constant to each data point, but I would really like a nice way of doing it. If you have something in a different language than S, that would be fine to work with too.

Thanks. Please email me as well as post.
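In case a sketch helps (in Python rather than S, per the note above): since the major axes lie along the abscissa, the shift for each ellipse is just the target x minus the maximum x of that ellipse's point vector, applied to every x. All names and numbers below are made up for illustration:

```python
import math

def shift_ellipses_to_point(ellipses, target_x):
    """Shift each ellipse (given as parallel x- and y-vectors) horizontally
    so the right endpoint of its major axis lands on target_x."""
    shifted = []
    for xs, ys in ellipses:
        dx = target_x - max(xs)        # one constant shift per ellipse
        shifted.append(([x + dx for x in xs], list(ys)))
    return shifted

def ellipse(cx, cy, a, b, n=200):
    """Sample points of an axis-aligned ellipse, center (cx, cy)."""
    return ([cx + a * math.cos(2 * math.pi * k / n) for k in range(n)],
            [cy + b * math.sin(2 * math.pi * k / n) for k in range(n)])

# Made-up concentric ellipses centered at (3, 2), major axis along x.
inner = ellipse(3.0, 2.0, 1.0, 0.5)
rings = [inner, ellipse(3.0, 2.0, 2.0, 1.0), ellipse(3.0, 2.0, 3.0, 1.5)]
center_x = 3.0                         # abscissa of the innermost center
moved = shift_ellipses_to_point(rings, center_x)
```

In S the same thing is one line per ellipse: subtract `max(x) - center.x` from the x vector before handing it to xyplot.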
A function f(x,w), where w is a random variable and x is deterministic, is convex in x for fixed w, and is also convex in w for fixed x. We know that the expectation E[f(x,w)] is then convex in x. Is the variance Var[f(x,w)] convex in x?

Any ideas, suggestions, or references will be greatly appreciated.

Shabbir Ahmed
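For what it's worth, a quick numeric probe (not a proof) suggests the answer is no in general: take f(x,w) = max(x,w) with w ~ Uniform[0,1], which is convex in each argument separately. The variance g(x) = Var[max(x,w)] has a closed form on [0,1] and fails the midpoint convexity inequality there:

```python
def var_max_uniform(x):
    """Exact Var[max(x, W)] for W ~ Uniform[0,1] and x in [0, 1]:
    E[M] = (1 + x^2)/2 and E[M^2] = (1 + 2 x^3)/3."""
    em = (1.0 + x * x) / 2.0
    em2 = (1.0 + 2.0 * x ** 3) / 3.0
    return em2 - em * em

g0 = var_max_uniform(0.0)    # = 1/12, the variance of W itself
gq = var_max_uniform(0.25)
gh = var_max_uniform(0.5)
# Convexity would require g(0.25) <= (g(0) + g(0.5)) / 2, but it fails:
assert gq > (g0 + gh) / 2.0
```

The expectation E[max(x,w)] = (1 + x^2)/2 is convex in x, consistent with the stated fact, so it is specifically the variance that misbehaves.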