Bill Rowe (ghostbit@sprynet.com) wrote:
: In article <33AA08EB.75F61B58@eos.ncsu.edu>, peguaris@eos.ncsu.edu says...
: >
: >The central limit theorem allows us to assume a normal distribution for
: >large number of observations of any random variable.
: No, this is not at all correct.  << snip >>
: The central limit theorem states the z-score, i.e., the difference between
: the observation and population mean divided by the population standard
: deviation tends to be normally distributed under certain conditions.  This
: z-score is much different than the observation itself.

 -- No, this is not correct.  But it may be an example of entropy,
as Gregory Bateson once characterized it, speaking of a messy
bookcase: "Among the ways that things can be ordered, a lot more of
them are wrong than right."

Rich Ulrich, wpilib+@pitt.edu
In article <33b83986.20386622@news.telecom.pt> arosa@mail.telepac.pt
(Antonio Rosa) writes:

>I'm using the following formulas:
>
>T0=(Xn-X1)/(N-1) , where N is the length of the series
>S0=X1-(T0/2)
>
>Formula of trend: T(t)=beta*( F(t) - F(t-1) ) + (1-beta)*T(t-1) ,
>where T is trend and F is forecast
>F(t+1)=alpha*( A(t) ) + (1-alpha)*( F(t-1) - T(t-1) ) , where A is
>the real value
>
> I think that this last formula is wrong; can anyone tell me the
>correct formula(s) so that I can get the same values as STATISTICA?

Yes, I think the last formula needs help.  You can do linear trend
projection with the following, where A(i) are the observed actual
values; alpha, beta and k are subjectively chosen parameters; and Abar,
Bbar denote the smoothed series:

   Abar(t+1) = alpha*A(t) + (1-alpha)*Abar(t)      [smoothed average]

   B(t)      = Abar(t+1) - Abar(t)                 [contribution to trend]

   Bbar(t+1) = beta*B(t) + (1-beta)*Bbar(t)        [smoothed trend]

   F(t+1)    = Abar(t+1) + k*Bbar(t+1)             [forecast]

This is second-order exponential smoothing, which has been shown to be
unstable for some values of beta and k.  Both this and first-order
exponential smoothing have equivalent ARIMA representations, and you may
wish to approach the problem from that perspective.  But there are so
many variations on exponential smoothing that it may not be possible to
duplicate the results your software produces.

Forecasting software is now abundantly available, and I recommend that
you refer to "Forecasting Software Survey" by Jack Yurkiewicz in "OR/MS
Today," December 1996.  In contrast, forecasting practices in US
companies continue to rely heavily on judgmental methods and on the
simpler quantitative methods.  See "Forecasting Practices in US
Corporations: Survey Results" by Nada R. Sanders and Karl B. Manrodt in
"INTERFACES," March-April 1994.

Good luck!
----------------------------------------------------------------------
Charley Harp, N8MQL FBP-425          charp@ford.com
Operations Research Dept.            555 Republic Dr.
V: (313) 845-5873                    Ford Motor Company
F: (313) 621-8381                    Allen Park, MI 48101
----------------------------------------------------------------------
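[Editor's note: the smoothing recursions above translate directly into code.
A minimal sketch follows; the function name, the seeding choices for the
level and trend, and the sample series are my own, while the update steps
follow the formulas in the post.]

```python
def holt_forecast(a, alpha, beta, k):
    """Second-order (Holt-style) exponential smoothing; returns the
    k-step-ahead forecast after smoothing the whole series a."""
    level = a[0]          # Abar: smoothed average, seeded with the first value
    trend = a[1] - a[0]   # Bbar: smoothed trend, seeded with the first difference
    for x in a[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * level   # smoothed average
        b = level - prev_level                    # contribution to trend
        trend = beta * b + (1 - beta) * trend     # smoothed trend
    return level + k * trend                      # forecast

series = [10.0, 12.0, 13.0, 15.0, 16.0, 18.0]
print(holt_forecast(series, alpha=0.5, beta=0.3, k=1))
```

As the post warns, some (alpha, beta, k) combinations are unstable, so a
sketch like this should be checked against held-out data before use.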
real.email@my.webpage (Jose Fernando Camoes Mendonca Oliveira Silva) writes:
> Jim Clark wrote:
> >
> > Are there standard general measures of the predictability of a
> > sequence?
> >
> Spectrum analysis will give you the constituents.  The more homogeneous
> the spectrum, the less predictable the sequence.  In the limit, with a
> uniform spectrum, you have random (white) noise.

The most homogeneous spectrum belongs to a single impulse, and that
hardly qualifies as an "unpredictable" sequence.

Also, you will have constraints defining the sequence.  For example, you
can rely on a random number generator not to produce a rabbit instead of
a number.  A random number generator will deliver values from a
well-defined distribution (typically a uniform distribution on 0-1).

The typical statistical tests for random number generators take
successive random number values as multidimensional grid coordinates and
try to find the maximal distance between the planes or hyperplanes
covered by the dots.  If this maximal distance is larger than the
statistics of a truly random number generator would suggest, you have
predictable patterns in your sequences.  This test can be done with
coordinates sampled at various distances along the sequence in order to
discover long-term patterns as well as short-term ones.

Viewing the spectrum alone might be OK for finding out that a random
generator is bad, but it is by far insufficient for judging it good.
--
David Kastrup                                  Phone: +49-234-700-5570
Email: dak@neuroinformatik.ruhr-uni-bochum.de  Fax:   +49-234-709-4209
Institut für Neuroinformatik, Universitätsstr. 150, 44780 Bochum, Germany
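[Editor's note: the single-impulse counterexample is easy to demonstrate
numerically.  The sketch below (names and the flatness measure are my own
choices) shows that a one-spike sequence, which is trivially predictable,
has the flattest possible power spectrum, while genuine white noise shows
line-to-line fluctuation.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

impulse = np.zeros(n)
impulse[0] = 1.0                    # a single spike: trivially predictable
noise = rng.standard_normal(n)      # white noise

def flatness(x):
    """Ratio of the largest to the mean spectral power line.
    A value of 1.0 means a perfectly flat (uniform) spectrum."""
    p = np.abs(np.fft.rfft(x)) ** 2
    return p.max() / p.mean()

print(flatness(impulse))   # exactly 1.0: the flattest spectrum possible
print(flatness(noise))     # noticeably above 1.0: noise spectra fluctuate
```

So a uniform spectrum by itself cannot certify unpredictability, which is
the point of the reply above.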
In article 6l7@power.Stanford.EDU, clint@leland.Stanford.EDU (Clint
Cummins) writes:
> StatManTH wrote:
> >I am trying to find an algorithm that can handle computing a "near
> >singular" matrix.  All square matrices that have a nonzero determinant are
SNIP

If you do have a non-singular matrix with a very small determinant, try
using MATLAB with their symbolic processing toolbox.  If you can get
access to this product, I have found that the symbolic inverse of any
matrix with a nonzero determinant can be found.

Ross
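[Editor's note: the same idea, exact arithmetic so that a tiny but nonzero
determinant still yields a clean inverse, can be sketched without MATLAB
using Python's fractions module.  The function and example matrix below are
illustrative, not part of the original post.]

```python
from fractions import Fraction

def exact_inverse(m):
    """Invert a square matrix by Gauss-Jordan elimination over the
    rationals.  Raises StopIteration if the matrix is singular."""
    n = len(m)
    # Build the augmented matrix [A | I] with exact Fraction entries.
    a = [[Fraction(x) for x in row] +
         [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        # Pivot: find a row at or below the diagonal with a nonzero entry.
        piv = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]          # normalize the pivot row
        for r in range(n):
            if r != col and a[r][col] != 0:       # clear the other rows
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]                 # right half is A^-1

# A nearly singular 2x2 matrix: the determinant is 10^-10, yet the
# exact inverse comes out with no roundoff at all.
m = [[Fraction(1), Fraction(1)],
     [Fraction(1), Fraction(1) + Fraction(1, 10**10)]]
inv = exact_inverse(m)
```

Floating-point inversion of such a matrix would be dominated by roundoff;
exact rational (or symbolic) arithmetic sidesteps that, at the cost of speed.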
In article <01bc7e55$71524bf0$ccb2d4d0@bwheeler>, bwheeler,@,echip,.,com
says...
>
>Yes, but not in the way they are thinking.
>
>The average game scores of the occasional players
>will fluctuate more, and more frequently throw up
>a high average by chance.
>
>Suggest you divide each person's average score by
>the range (largest minus smallest) of their scores.

You have corrected one problem only to introduce another.  By dividing
the average by the range you might be giving a consistent but poor
player a better score than a less consistent but better player.

A simpler solution to the original problem would be to rank players by
the median score rather than the average.
--
"Against stupidity, the Gods themselves contend in vain"
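[Editor's note: a toy illustration of the median suggestion, with made-up
scores.  One lucky game inflates the occasional player's mean above the
steady player's, but the median ranking is unaffected by the fluke.]

```python
import statistics

# Hypothetical scores: a steady regular vs. an occasional player
# whose average is inflated by a single lucky game.
regular    = [300, 310, 305, 295, 320, 315, 310, 300]
occasional = [250, 260, 480]   # one fluke game

print(statistics.mean(regular), statistics.mean(occasional))
# The occasional player's mean is higher, driven entirely by the fluke...
print(statistics.median(regular), statistics.median(occasional))
# ...but the median still ranks the steady player first.
```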
In article <33AA08EB.75F61B58@eos.ncsu.edu>, peguaris@eos.ncsu.edu says...
>
>The central limit theorem allows us to assume a normal distribution for
>large number of observations of any random variable.

No, this is not at all correct.  For example, consider the time between
the decays of two atoms of a radioactive isotope.  No matter how many
observations you care to make, you will never observe negative times, so
the distribution will not appear normal.  In fact, the more observations
you make, the more apparent it will be that the distribution of decay
times isn't normal.

The central limit theorem states that the z-score, i.e., the difference
between the observation and the population mean divided by the
population standard deviation, tends to be normally distributed under
certain conditions.  This z-score is much different than the observation
itself.
--
"Against stupidity, the Gods themselves contend in vain"
In article <5p0jt4$7n4@usenet.srv.cis.pitt.edu>, wpilib+@pitt.edu
(Richard F Ulrich) wrote:
> Bill Rowe (ghostbit@sprynet.com) wrote:
> : The central limit theorem states the z-score, i.e., the difference between
> : the observation and population mean divided by the population standard
> : deviation tends to be normally distributed under certain conditions.  This
> : z-score is much different than the observation itself.
>
> -- No, this is not correct.  But it may be an example of entropy,
> as Gregory Bateson once characterized it, speaking of a messy
> bookcase, "Among the ways that things can be ordered, a lot more of
> them are wrong than right."

What then do you believe the central limit theorem states?  Please also
provide a reference.  I didn't provide a reference for my earlier
posting since I didn't think it was needed.  I will gladly provide a
reference if desired.  (I am at home at the moment and the appropriate
references are at the office.)

As for the quote from Gregory Bateson, this reflects a popular but
incorrect understanding of entropy.  I will also gladly provide the
correct definition, to be found in various physics texts including
Thermal Physics by Kittel and Kroemer, if desired.
You are right.  Standardizing is a solution to only one of many problems
that might be chosen.  Unfortunately we do not have the non-statistical
component -- the politics.  What is really troubling the "regular"
players?  Find that out, and a statistical solution might be found.
Some possibilities are:

(1) "Regulars" object to "occasionals" getting high scores by chance.
    Standardizing might help.
(2) "Regulars" think "occasionals" should not be eligible for a prize
    because they contribute too little to the activity.  Since they
    can't get rid of them, they are asking for some sort of "magic
    number" that will eliminate them.  Differential scoring which
    weights scores with the sequence number of the game might help.
(3) "Occasionals" think the "regulars" are fuddy-duddies, and are having
    fun twitting them, or vice versa, or something else.  No statistical
    solution.
--
Bob Wheeler, ECHIP, Inc.  (Reply to bwheeler@echip.com)

Richard F Ulrich wrote in article <5ougar$sp@usenet.srv.cis.pitt.edu>...
> I am one more reader who still does not understand the rationale
> behind standardizing the average with the range for this problem:
>
> Bob Wheeler (bwheeler,@,echip,.,com) wrote:
> : My reading of this was that the problem was only partially
> : statistical, as most real problems are.  I suspect the "regular"
> : players object to an "occasional" player who by chance obtains
> : a very high score and thus wins a top spot.  In such a case, dividing
> : by the range helps.  Of course, without talking to Carolyn Longworth,
>
> In Carolyn's problem, there were people with 13 scores, on up to
> 34 scores.  A SINGLE score at Scrabble will not give a person the high
> average; you cannot score 5000, say.  For 13 vs. 34 scores, it seems
> to me that the persons with 34 scores would tend to have the larger
> personal ranges of scores, too, so that the adjustment that Bob
> recommends would make the injustice WORSE, not better -- for two
> people with the same, rather-high score, the higher number would
> come for the person with the smaller range, or, typically, the
> shorter series....
>
> I try to consider what happens if you look at Average/Max rather
> than Average/Range, but that does not seem to improve fairness,
> either.  Am I overlooking something?
>
> (For the real problem, as I suggested last time, give multiple
> prizes.)
>
> Rich Ulrich, biostatistician             wpilib+@pitt.edu
> http://www.pitt.edu/~wpilib/index.html   Univ. of Pittsburgh
In article <5op9d2$35@hacgate2.hac.com>, ghostbit@sprynet.com (Bill
Rowe) wrote:
> In article <33AA08EB.75F61B58@eos.ncsu.edu>, peguaris@eos.ncsu.edu says...
> >
> >The central limit theorem allows us to assume a normal distribution for
> >large number of observations of any random variable.
>
> The central limit theorem states the z-score, i.e., the difference between
> the observation and population mean divided by the population standard
> deviation tends to be normally distributed under certain conditions.  This
> z-score is much different than the observation itself.

The central limit theorem is usually a statement about _averages_ of a
fixed number, n, of values randomly selected from an arbitrary
distribution.  As the value of n increases, the distribution of the
_averages_ becomes more normal.  For large enough n, the nature of the
underlying distribution is ignored and the _averages_ are treated as if
they were exactly normal.

In addition, the mean of all the _averages_ possible is the same as the
mean of the underlying distribution, and the variance of all the
_averages_ possible is 1/n times the variance of the underlying
distribution (assuming the underlying variance is finite).
--
V. Hancher    xvmhjr@xfrii.com
x's in address to defeat mechanical spamming; please remove the x's
from the email address before transmitting.
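[Editor's note: the averages version of the theorem stated above is easy to
check by simulation.  The sketch below (parameters and seed are my own)
draws averages of n = 50 values from a decidedly non-normal exponential
distribution; their mean tracks the population mean and their variance
shrinks like 1/n.]

```python
import random
import statistics

random.seed(42)
n, trials = 50, 2000
# Exponential with rate 1: population mean = 1, population variance = 1,
# and clearly non-normal (no negative values, right-skewed).

averages = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
            for _ in range(trials)]

print(statistics.fmean(averages))      # close to the population mean, 1.0
print(statistics.variance(averages))   # close to pop. variance / n = 0.02
```

A histogram of `averages` would also look far more bell-shaped than the
underlying exponential, which is the distributional half of the claim.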
In other words: "In the long run, we are all dead", as old Keynes used
to say about all great long-run properties...
Bankroll:  $100,000
Win %:     53%
Payoff:    1-1, even money

Therefore, return on investment (edge) = .06.  (For every 100 wagers I
expect to have a net profit of 6 wagers.)

From the Kelly criterion:

   Edge * Probability = best percentage of bankroll to wager on a
   single, non-overlapping event

   .06 * .53 = .0318

   Best percentage of bankroll * Bankroll = amount to bet on a single,
   non-overlapping event

   .0318 * 100,000 = 3180

Question: What percentage of bankroll do I wager when 4 events occur at
the same time?
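[Editor's note: for comparison with the figures above, the textbook
single-bet Kelly fraction is f* = (b*p - q)/b, where p is the win
probability, q = 1 - p, and b is the payoff in b-to-1 odds; for even money
this reduces to p - q.  A sketch (function name and layout mine):]

```python
def kelly_fraction(p, b):
    """Textbook Kelly fraction for win probability p at b-to-1 odds."""
    q = 1.0 - p
    return (b * p - q) / b

bankroll = 100_000
f = kelly_fraction(p=0.53, b=1.0)   # even money
print(f)                 # ~0.06 for these numbers, i.e. the raw edge
print(f * bankroll)      # suggested stake on a single, non-overlapping bet
```

Note this differs from the .0318 computed above, and it only covers a
single bet; several simultaneous bets require a joint optimization over
all outcome combinations, which this formula does not address.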