In article <56gl11$iat@newton.pacific.net.sg>, u08c4@altron.com.sg (Anke) wrote:
> > i used to be a lover of science

And I still am!!!

-------------------==== Posted via Deja News ====-----------------------
      http://www.dejanews.com/     Search, Read, Post to Usenet
aacbrown@aol.com writes:
> I think you have not given enough information for discussion of methods or
> references.

I accept that I was a bit vague in my first posting. Let me be a bit more precise. The fixture lists are completely arbitrary and I would not like to put any restrictions on them. Similarly, there are no restrictions on the subset of the competitions which are actually played. The only information we get from the experiment is the score vector, i.e. for each individual the number of competitions won by it. This, and the fixture lists, are the only available data.

The aim is now to test certain hypotheses about the "fitness" of subsets of the individuals. Basically, the observed scores should be compared to "random" scores which are obtainable from the fixture list. One hopes that after a series of experiments one would see a tendency of certain individuals to achieve a higher score than if the outcomes of the competitions were completely random (i.e. win, loss, or tie equiprobable).

My original question now was: does anybody know of any publication where a similar setup has been studied? I would like to see which statistical methods are used in similar situations. I am aware of the basic methods, e.g. generating all possible score vectors compatible with the fixture list and then working out the statistics, but due to the size of the experiments this isn't really an option.

thanks,
Eric Bartels
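One practical middle ground between full enumeration and nothing is Monte Carlo: simulate random outcomes of exactly the fixtures that were played and compare each observed score to its simulated null distribution. This is my own sketch, not from any publication; the function names and the (win, loss, tie equiprobable) null are assumptions taken from the description above.

```python
import random
from collections import defaultdict

def simulate_scores(fixtures, n_sims=10000, seed=0):
    """Monte Carlo null distribution of score vectors.

    fixtures: list of (a, b) pairs, one per competition actually played.
    Under the null, each competition ends in a win for a, a win for b,
    or a tie, each with probability 1/3 (a tie scores zero for both).
    Returns a dict mapping individual -> list of simulated win counts.
    """
    rng = random.Random(seed)
    players = {p for pair in fixtures for p in pair}
    dist = defaultdict(list)
    for _ in range(n_sims):
        wins = dict.fromkeys(players, 0)
        for a, b in fixtures:
            outcome = rng.randrange(3)  # 0: a wins, 1: b wins, 2: tie
            if outcome == 0:
                wins[a] += 1
            elif outcome == 1:
                wins[b] += 1
        for p in players:
            dist[p].append(wins[p])
    return dist

def p_value(dist, player, observed):
    """One-sided Monte Carlo p-value: P(simulated score >= observed)."""
    sims = dist[player]
    return sum(s >= observed for s in sims) / len(sims)
```

The simulation only ever uses the fixtures that were played, so arbitrary fixture lists and arbitrary missing competitions need no special handling.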
Hello,

Any recommendations on locations for C/C++ source code for elementary statistical functions? The first thing I am looking for is an inverse chi-square function with fractional degrees of freedom. Eventually I will need functions for the T and Normal distributions as well.

Scott_Depuy@nih.gov
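Not a pointer to an archive, but in case it helps: an inverse chi-square with fractional degrees of freedom can be sketched in a few lines by bisecting on the CDF, itself computed from the series for the regularized lower incomplete gamma function. This is my own illustration in Python (easy to transliterate to C), not production code; a real library would use a faster quantile algorithm.

```python
import math

def chi2_cdf(x, df):
    """Chi-square CDF; df may be fractional.

    Uses the series for the regularized lower incomplete gamma
    function P(df/2, x/2).  Fine for a demo; slow for very large x.
    """
    if x <= 0:
        return 0.0
    a, t = df / 2.0, x / 2.0
    term = 1.0 / a
    total = term
    n = 0
    while term > 1e-16 * total:
        n += 1
        term *= t / (a + n)
        total += term
    return total * math.exp(a * math.log(t) - t - math.lgamma(a))

def chi2_inv(p, df):
    """Inverse chi-square CDF by bisection, for 0 < p < 1."""
    lo, hi = 0.0, 1.0
    while chi2_cdf(hi, df) < p:   # grow the bracket until it covers p
        hi *= 2.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if chi2_cdf(mid, df) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

As a sanity check, chi2_inv(0.95, 1) should come out near the textbook 3.841, and fractional degrees of freedom (say df = 1.5) interpolate smoothly between the integer cases.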
In article <57l5n1$9d7@mirv.unsw.edu.au>, Glen Barnett wrote:
>In article <329CE3F8.59B6@mgmt.dal.ca>, Gus Gassmann wrote:
>>Jim Box wrote:
>>> I solved this problem long ago, but have forgotten the solution. Any
>>> takers?
>>> Redesign Dice:
>>> You can use any integers you want. Come up with a new pair of dice that
>>> will have the same probability distribution as standard dice. You are
>>> allowed to have repeats on one die (ie one can have two fours).
>>> About all I remember of the solution is that 7 and 0 appeared exactly
>>> once, and one die had two fours.
>>I came up with
>>1 2 2 3 3 4
>>1 3 4 5 6 8
>If you add one to the 1st die, and subtract one from the
>second, you get dice matching the original poster's recollection;
>but I prefer the pair listed above, since they start from 1.

There is essentially only the usual solution; one can add an integer to all the faces of one die, and subtract it from the faces of the other. So we can reduce the problem to that of integers on the faces, with both dice starting from 1. Even loaded dice can be considered.

Now the generating function of the total on the two dice, fully factored, is

	x^2 (x+1)^2 (x^2+x+1)^2 (x^2-x+1)^2 / 36.

We can assign any of these factors to either die, with the provisions that

	One factor x is assigned to each die.
	The coefficients on each die are non-negative.
	There are at most 6 non-zero coefficients for each product.

There are other solutions satisfying these conditions if loaded dice are allowed. They are the factorizations (ignoring the 1/36)

	[x(1+2x+x^2)][x(1+2x^2+3x^4+2x^6+x^8)]

and

	[x(1+2x^3+x^6)][x(1+2x+3x^2+2x^3+x^4)].

In both of these cases, both dice would have to be loaded.
-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
hrubin@stat.purdue.edu	Phone: (317)494-6054	FAX: (317)494-0558
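For anyone who wants to check the unusual pair above without generating functions, a brute-force comparison is quick. This short sketch (mine, not from the post) confirms that the (1,2,2,3,3,4) / (1,3,4,5,6,8) pair has exactly the same sum distribution as two standard dice:

```python
from collections import Counter

def sum_distribution(die1, die2):
    """Distribution of the total when rolling two fair dice:
    count every ordered pair of faces once."""
    return Counter(a + b for a in die1 for b in die2)

standard = sum_distribution(range(1, 7), range(1, 7))
alternate = sum_distribution([1, 2, 2, 3, 3, 4], [1, 3, 4, 5, 6, 8])
assert standard == alternate   # identical distribution of totals
```

The same function makes it easy to check any candidate factorization by writing out the faces each polynomial encodes.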
Larry Culver wrote:
> 
> Bryan Austin wrote:
> >
> > Hello all,
> >
> > I am in the market for a UNIX operating system. I have narrowed the
> > search down to three prospects: SCO UNIX 2.1, Solaris x86 UNIX, and
> > Linux. My question is, which of the three is the best choice, and more
> > importantly, why? I will be using the operating system for business and
> > personal use.
> >
> > I am positive that all three OSs have some strengths and weaknesses.
> > This has been my method of evaluation so far. If anyone can help please
> > reply.
> > --
> > _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
> >
> > _/ _/ _/_/ _/ _/_/ Bryan Austin
> > _/ _/ _/ _/ _/ _/ _/ Dept. of Economics
> > _/ _/ _/ _/ _/_/_/_/ University of California
> > _/ _/ _/ _/ _/ _/ _/ Los Angeles
> > _/_/ _/_/_/ _/_/_/_/ _/ _/
> >
> > _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
> 
> Do you have a need to run multiple processors? Does Linux support more
> than a single CPU yet? I'm not sure about some of the others you mentioned,
> but Solaris does ... one of the reasons I went with Solaris (2.5.1) was
> the fact that it does support multiple CPUs.
> 
> Larry

I don't know about Linux, but I know that SCO's new UnixWare has dual-CPU capability; I am not positive whether it supports more than two CPUs.

Bryan
James Tahara (jtahara@chat.carleton.ca) wrote:
: ----------------------------------------------------------------------
: James Tahara
: Carleton University
: Email address: jtahara@chat.carleton.ca
: ----------------------------------------------------------------------
: Who was the first person to do a census in Canada?

Hi James - why don't you give StatsCan a call?

Best wishes, Kent.
Bill Simpson wrote:
>BillS: This it seems to me is a good thing.
> -- hmm.... given what I just said, I have to ask,
>"Why do you say that?"
>Rich Ulrich
>======================
>Well I think the reasons have been stated already
>- point null hypo is always false
>- CI gives same info as hypo test plus extra
>- most people are hopelessly confused over hypo tests and interpreting
>them (try teaching a stats class and you'll see what I mean)
>- we really want to know the size of some effect (e.g. is the diff between
>groups one IQ point? 10 IQ points?), not if it is "significant" or not
>- plus many other reasons put forward on this group over the years
>I personally haven't been in a situation that demanded hypo tests.
>Probably such situations exist. The current state of affairs in psych is
>that the default option is hypo tests. I think a default option of CI
>makes more sense.

What is needed is a sound use of hypothesis tests. The question is whether the hypothesis is close enough to being correct that it is worthwhile using it. To use your example above, if we KNEW the difference between the mean IQs of two groups is one point, would we continue to act as if they were equal? If so, we should accept the hypothesis, even if the difference comes out to 50 times its standard deviation.

>Yes I agree that morons will be morons, and that CIs are just as prone to
>abuse as hypo tests.

The "moronic" idea is to let some quasi-religious mantra decide what action to take. You have decisions to make; statistical decision theory is designed to help YOU make the decision appropriate for YOU; use it, instead of following the blind.
-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
hrubin@stat.purdue.edu	Phone: (317)494-6054	FAX: (317)494-0558
In article <329CB17A.C9F@ucla.edu>, Bryan Austin wrote:
>I am in the market for a UNIX operating system. I have narrowed the
>search down to three prospects: SCO UNIX 2.1, Solaris x86 UNIX, and
>Linux. My question is, which of the three is the best choice, and more
>importantly, why? I will be using the operating system for business and
>personal use.
>I am positive that all three OSs have some strengths and weaknesses.
>This has been my method of evaluation so far. If anyone can help please
>reply.

You are absolutely right about the strengths and weaknesses. What you should get depends not on any particular absolute, but rather on which strengths match your specific needs. We sell both UnixWare and the Caldera distribution of Linux, and still have very few instances where both are appropriate. Here are some observations; I hope they don't further confuse you.

EDUCATION: Linux

Source code helps a lot, but Linux is also documented far better. In addition to the man pages, there is a large number of documents called HOWTOs that deal with specific Linux issues such as foreign languages and SCSI programming; I've found them *very* helpful. There are many companies offering SCO training courses, few if any offering Linux training. This is understandable, since Linux seems to work on the principle of learning by doing: great for propellerheads but intimidating for others.

APPLICATIONS: UnixWare

You still can't get Oracle or Sybase or Progress supported under Linux, though in most cases the SCO binaries of these apps will work. With Gemini coming out, UnixWare benefits from SCO's huge base of vertical business apps, which can be made to work on Linux but will only infrequently be supported there. This may change in the future, but the status quo is totally in SCO's favour. While freeware applications tend to come out first for Linux, most of them have been ported to UnixWare. Skunkware is a great effort, but it can't match some of the 6-CD Linux archive packages.
NETWORKING: Linux

This is from personal experience; Linux was simply easier to get working smoothly in a networking environment. UnixWare's PPP seems optimized for occasional links; Linux's seems better for permanent ones and is generally easier to set up. Most Linux distributions supply a richer set of networking tools (such as 'dig') than UnixWare. And UnixWare's mail transport, mailsurr, is an abomination. The Caldera version of Linux has complete NetWare 4 client services as good as UnixWare's, and all Linux distributions come out of the box running the Samba server, letting them offer SMB services on Windows95/NT/WfW nets. SCO's VisionFS is an additional-cost add-on (though Samba should work fine on UnixWare too).

LARGE SYSTEMS: UnixWare

Journaling filesystems, robust multiprocessing, support for big-server hardware such as RAID controllers, and of course the availability of large-system applications make this decision easy. Some hardware, such as certain Adaptec controllers, is supported better by UnixWare because the vendor won't release programming details publicly.

SMALL SYSTEMS/WORKSTATIONS: Linux

Better support for cheap and slow hardware: floppy tapes, ZIP drives, and sound cards are far better supported. While UnixWare's desktop is nice, there is a variety of look-and-feels available for Linux X, including the fvwm95 Windows95 clone. Commercial implementors of Linux such as RedHat and Caldera now bundle X servers which outperform UnixWare's. Linux takes less horsepower to run than UnixWare; a non-X Linux system can still run fine on a 386/20 with 8MB RAM. Linux co-exists much better on multi-OS, multi-boot systems; UnixWare has no equivalent to LILO.

ADMINISTRATION: Toss-up

Both have their little areas of excellence. Linux makes better use of the /proc filesystem; UnixWare is easier to fine-tune.

STABILITY: Toss-up

A non-issue, really; for most purposes both kinds of systems, configured properly, are rock-solid.
The only issue here is that upgrades for UnixWare are handled in a saner manner than for Linux (with the notable exception of the Caldera distribution). It appears easier to screw up the operation of a Linux box than a UnixWare one, but not by much.

COST: It depends...

The most expensive Linux you can get is $99, unlimited users, full source, with most of the software one would find on SCO's Internet Fast Start at a tenth the price. Linux is starting to show up in retail outlets next to OS/2, as well as in bookstores (especially on campus). SCO's FreeUnix initiative is great for home explorers, but not intended for home businesses. Because of the restrictions on commercial use, free SCO products will not have much third-party application support. Note also that those third-party apps which you *do* get will be more expensive on SCO than Linux; one apples-to-apples comparison is Wabi, whose Linux cost is half that of SCO's.

SUPPORT: It depends...

If you're in a business and you need someone to blame, SCO used to have the edge because they could be blamed. Now there are Linux distributors willing to play that role as well. If you're comfortable with Internet culture, Linux support is easy; while SCO technical answers usually depend on a small core of people, it seems that there are hundreds of people willing to answer Linux questions. If one goes on vacation, a dozen others take up the slack. OTOH, if you like Compuserve, the tables are turned and there are more SCO people capable of helping. Go figure.
-- 
Evan Leibovitch, Sound Software Ltd, located in beautiful Brampton, Ontario
Supporting PC-based Unix since 1985 / Caldera & SCO authorized / 905-452-0504
Unix is user-friendly - it's just a bit more choosy about who its friends are
JUNJIA@morst.govt.nz in <329F356C.617D@morst.govt.nz> writes:
> I have a data set containing 20 countries' data from 1980
> to 1990. I like to calculate average value among 20 countries
> in each year from 1980 to 1990. My problem is that in some
> years, several countries' data are missing.

There are a number of ad hoc procedures for this situation. The simplest is to begin by taking a grand average of all available data and subtracting it from each value. Then compute the average residual for each country, and subtract the country-average residual from each residual. Average these remaining values by year. An estimate of the annual average is the grand average plus the annual average you computed above.

For example, if you have data:

	Year  Country A  Country B  Country C
	  1       1
	  2       2          3
	  3       4          5          6

your grand average is (1+2+3+4+5+6)/6 = 3.5. Subtracting this from each value gives:

	Year  Country A  Country B  Country C
	  1     -2.5
	  2     -1.5       -0.5
	  3      0.5        1.5        2.5

The country averages are -1.17 (A), 0.5 (B), and 2.5 (C). Subtracting these gives:

	Year  Country A  Country B  Country C
	  1    -1.33
	  2    -0.33      -1.00
	  3     1.67       1.00       0.00

The estimated annual averages are 3.5 - 1.33 = 2.17 (year 1), 3.5 - 0.67 = 2.83 (year 2), and 3.5 + 0.89 = 4.39 (year 3). Whether these are reasonable or not is up to you.

Aaron C. Brown
New York, NY
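The arithmetic above is easy to mechanize. A rough Python sketch of the same two-step adjustment (my own illustration; the function name and data layout are made up) reproduces the worked example:

```python
def annual_averages(data):
    """Estimate annual averages from incomplete country-by-year data.

    data: dict mapping country -> list of values (None where missing),
    all lists covering the same years.  Implements the ad hoc procedure:
    remove the grand mean, then each country's mean residual, then
    average what remains by year and add the grand mean back.
    """
    # Flatten to (country, year, value) triples for the available cells.
    cells = [(c, y, v) for c, vals in data.items()
             for y, v in enumerate(vals) if v is not None]
    grand = sum(v for _, _, v in cells) / len(cells)
    resid = [(c, y, v - grand) for c, y, v in cells]
    # Per-country mean residual.
    country_mean = {}
    for c in data:
        rs = [r for cc, _, r in resid if cc == c]
        country_mean[c] = sum(rs) / len(rs)
    adjusted = [(y, r - country_mean[c]) for c, y, r in resid]
    # Average the doubly-adjusted residuals by year, re-add grand mean.
    n_years = len(next(iter(data.values())))
    out = []
    for y in range(n_years):
        rs = [r for yy, r in adjusted if yy == y]
        out.append(grand + sum(rs) / len(rs))
    return out

data = {"A": [1, 2, 4], "B": [None, 3, 5], "C": [None, None, 6]}
averages = annual_averages(data)   # approx [2.17, 2.83, 4.39]
```

Running it on the three-country example gives the same 2.17, 2.83, and 4.39 as the hand calculation.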
Michael Kamen in <57kj67$2pj6@news.doit.wisc.edu> writes:
> Since the sampling distribution of the mean follows a
> normal distribution around the expected value for the
> population regardless of the distribution of the individual
> sample, isn't s/sqrt(n) of my sample really an estimate of
> sigma for the distribution of sample means (even though it
> is based on only one sample)? If this is so perhaps it does
> not matter that my sample looks non-normal. The conf.
> interval is for mu around which x-bar is always normally
> distributed.

As Rainer Dyckerhoff pointed out, x-bar is only approximately Normal if the underlying distribution is not Normal; the quality of the approximation depends on several assumptions. In most cases, if you have 100 data points and no outliers, the Normal confidence intervals will be pretty good.

Aaron C. Brown
New York, NY
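One quick way to convince yourself of this is simulation. A sketch in Python (my own, not from the thread): draw heavily skewed samples of size 100 and see how often the nominal 95% Normal interval actually covers the true mean. The exponential distribution here is just one convenient choice of skewed data.

```python
import random
import statistics

def ci_coverage(n=100, trials=1000, seed=1):
    """Fraction of nominal-95% Normal confidence intervals that cover
    the true mean, for skewed exponential(1) data with true mean 1."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.expovariate(1.0) for _ in range(n)]
        m = statistics.fmean(xs)
        se = statistics.stdev(xs) / n ** 0.5    # s / sqrt(n)
        if m - 1.96 * se <= 1.0 <= m + 1.96 * se:
            hits += 1
    return hits / trials
```

With n = 100 the observed coverage typically comes out close to the nominal 0.95 even though the data are far from Normal, which is the point of the reply above; with n = 10 it degrades noticeably.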
> Do you have a need to run multiple processors? Does Linux support more
> than a single CPU yet? I'm not sure about some of the others you mentioned,
> but Solaris does ... one of the reasons I went with Solaris (2.5.1) was
> the fact that it does support multiple CPUs.
> 
> Larry

Actually, Linux does support SMP and POSIX scheduling policy options.
Greg Heath wrote:
> If you are really sorry, why not provide an ASCII translation?

The translation into "newsgroup TeX" (thanks to Herman Rubin) is the following:

	f(x) = A exp(-B |x|^\nu)
In article <329EF5DC.1936@iah.com>, Larry Culver writes
>Bryan Austin wrote:
>> 
>> Hello all,
>> 
>> I am in the market for a UNIX operating system. I have narrowed the
>> search down to three prospects: SCO UNIX 2.1, Solaris x86 UNIX, and
>> Linux. My question is, which of the three is the best choice, and more
>> importantly, why? I will be using the operating system for business and
>> personal use.
>> 
>> I am positive that all three OSs have some strengths and weaknesses.
>> This has been my method of evaluation so far. If anyone can help please
>> reply.
>
>Do you have a need to run multiple processors? Does Linux support more
>than a single CPU yet? I'm not sure about some of the others you mentioned,
>but Solaris does ... one of the reasons I went with Solaris (2.5.1) was
>the fact that it does support multiple CPUs.
>
>Larry

I understand that Linux is now doing SMP on x86; I'm not sure about Alphas.
-- 
Robin Becker
Bryan Austin wrote:
> > Do you have a need to run multiple processors? Does Linux support more
> > than a single CPU yet? I'm not sure about some of the others you mentioned,
> > but Solaris does ... one of the reasons I went with Solaris (2.5.1) was
> > the fact that it does support multiple CPUs.
> >
> > Larry
> 
> I don't know about Linux, but I know that SCO's new UnixWare has the
> dual-CPU capability, but I am not positive if it has the multi-CPU
> capability.
> 
> Bryan

We ship an SMP box that has up to 10 200MHz Pentium Pro processors, and it runs UnixWare.
-- 
Alan Burlison
alanburlison@unn.unisys.com
My opinions may be incorrect, but they are my own.
amukhtar@mail.bcpl.lib.md.us wrote:
> 
> If someone can help me on this problem, I'd really appreciate it.....
> 
> Y = ( 19x - 12 ) / (5x^2 - 15x)
> 
> I need you to solve for x...
> for example
> Y = 3x then x = y/3
> thank you again
> amukhtar@mail.bcpl.lib.md.us

Multiply both sides by 5x^2 - 15x and you can rewrite your equation as

	(5y)x^2 + (-15y - 19)x + 12 = 0.

Now you can apply the quadratic formula, x = (-b +/- sqrt(b^2 - 4ac))/(2a), with a = 5y, b = -15y - 19, and c = 12.
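A quick numeric check of that derivation (my own sketch in Python; it assumes y != 0, so the equation really is quadratic, and a non-negative discriminant):

```python
import math

def solve_for_x(y):
    """Solve y = (19x - 12) / (5x^2 - 15x) for x via the quadratic
    (5y)x^2 + (-15y - 19)x + 12 = 0.  Returns both roots.
    Assumes y != 0 and a non-negative discriminant."""
    a, b, c = 5 * y, -15 * y - 19, 12
    disc = b * b - 4 * a * c
    r = math.sqrt(disc)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

# Round trip: x = 2 gives y = (38 - 12) / (20 - 30) = -2.6, and
# solving at y = -2.6 should recover x = 2 as one of the two roots.
roots = solve_for_x(-2.6)
```

Plugging either returned root back into (19x - 12)/(5x^2 - 15x) reproduces the original y, which confirms the coefficients a, b, c above.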
If you are interested in a statistical analysis software package that can do over 180 tests and routines, contact me. Only $22.00, with a 30-day money-back guarantee.

RCKnodt@aol.com
I have been trying for years to find an out-of-print book by Arpad Elo, _The Ratings of Chess Players Past and Present_. [If anyone has a copy they'd part with, mail me.] His methods are used for pairing opponents in chess tournaments and also in other games (backgammon and tennis). I'm not sure whether they are relevant to the discussion, but if you want a method of establishing a numeric rating indicating the probability of one player beating another, you should try to find it.
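For reference, the core of Elo's system is small enough to sketch. The 400-point logistic scale and the K-factor below are the conventional choices, not details quoted from the book; treat this as an illustration:

```python
def expected_score(r_a, r_b):
    """Elo expected score for player A against player B: the
    probability-like quantity the post above refers to."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a, r_b, score_a, k=32):
    """One rating update; score_a is 1 (win), 0.5 (draw) or 0 (loss).
    k (the K-factor) controls how fast ratings move."""
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1 - score_a) - (1 - e_a))
    return new_a, new_b
```

Note that the update conserves the total rating: whatever A gains, B loses, which is one reason the system works well for ongoing tournament pools.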
Could someone please explain the Gram-Schmidt process as it relates to orthogonalization (inner product spaces)? Thanks in advance.
-- 
Robert Gelb
Senior Systems Analyst
Data Express
Garden Grove, California USA
(714)895-8832
josh@racing.saratoga.ny.us (Josh Kuperman) writes:
>I have been trying for years to find an out of print book by Arpad Elo,
>_The Ratings of Chess Players Past and Present_. [If anyone has a copy
>they'd part with mail me] His methods are used for pairing opponents in
>Chess Tournaments and also other games (Backgammon and Tennis). I'm
>not sure if they are relevant to the discussion, but if you want a method
>of establishing a numeric rating indicating the probability of one player
>beating another you should try to find it.

Maybe I was missing something, but I definitely had the impression that Elo's book was strictly of historical interest. He raises numerous valid questions, but his methods and answers may no longer be considered state of the art.
Robert Gelb (rgelb@engr.csulb.edu) wrote:
: Could someone please explain the Gram-Schmidt Process in regards to the
: orthogonalization process (inner spaces)?

Given a set of not-necessarily orthogonal vectors, you pick one (in practical problems, this will almost always be the unit vector) and make it the first element of your set of orthogonal vectors. Then for each original vector, you convert it to a vector that's orthogonal to the rest of the vectors in your orthogonal set by subtracting a linear combination of the already-orthogonalized vectors from it. The weights of this linear combination are given by the inner product of the original vector and the orthogonalized vector, divided by the squared length of the orthogonalized vector. You repeat this until all your vectors have been orthogonalized.

Note that this is *not* a good algorithm to implement on a computer, because roundoff error will pile up (since a lot of the computations will involve subtracting numbers that are almost equal). Orthogonalization algorithms based on singular value decomposition are more stable numerically.
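The description above translates almost line for line into code. A sketch of my own (classical Gram-Schmidt exactly as described; the roundoff caveat still applies):

```python
def gram_schmidt(vectors):
    """Orthogonalize a list of vectors (lists of floats): subtract from
    each vector its projections onto the already-orthogonalized ones.
    Classical form; numerically fragile for ill-conditioned inputs."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            # Projection weight: <v, u> / <u, u>, as described above.
            coeff = dot(v, u) / dot(u, u)
            w = [wi - coeff * ui for wi, ui in zip(w, u)]
        if dot(w, w) > 1e-12:   # drop (near-)linearly-dependent vectors
            basis.append(w)
    return basis
```

For example, feeding it [1,1,0], [1,0,1], [0,1,1] yields three mutually orthogonal vectors; checking that every pairwise inner product is (numerically) zero is a good sanity test.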