In article, Greg Heath wrote:

>On Fri, 8 Nov 1996, Dukki Chung wrote:
>
>> Hi. Recently, I had to use a Bayes classifier for a pattern classification
>> problem. The Bayes discriminant function is:
>> di(x) = - [ ln|Ci| + (x-mu)^t Ci^-1 (x-mu) ]
>
> di(x) = - [ ln|Ci| + (x-mui)^t Ci^+ (x-mui) - 2 lnPi ]
>
>> The problem was, the covariance matrix Ci was near singular, so the
>> inverse could not be calculated. So, I used the pseudoinverse instead of the
>> real inverse. What I'm wondering is whether this is a valid, justifiable
>> mathematical or statistical approach.
>
>Yes. I've always used the pseudoinverse. The ill-conditioning of the
>covariance matrix results in near-zero eigenvalues corresponding to
>directions in space along which the distribution has a nearly constant
>value (i.e., nearly zero variance).
>
>> I would appreciate any comments, suggestions, references, or any
>> pointers.
>
>Check the eigendirections associated with the near-zero eigenvalues.
>Classes with near-constant values in those directions might be
>classified quite easily based on that fact alone.
>
>Hope this helps.
>
>Gregory E. Heath heath@ll.mit.edu The views expressed here are
>M.I.T. Lincoln Lab (617) 981-2815 not necessarily shared by
>Lexington, MA (617) 981-0908 (FAX) M.I.T./LL or its sponsors
>02173-9185, USA

If possible, I would like to get references on Bayesian classifiers, etc., by mail or post. This is for self-study. Thanks in advance. Paul T. Karch
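Heath's corrected discriminant is straightforward to implement with the pseudoinverse; here is a minimal sketch (the function name, the pseudo-determinant tolerance, and the test values below are my own; for a singular Ci, ln|Ci| is replaced by the log pseudo-determinant, i.e. the sum of logs of the eigenvalues above a tolerance):

```python
import numpy as np

def bayes_discriminant(x, mu, C, prior):
    """d_i(x) = -[ ln|C_i| + (x - mu_i)' C_i^+ (x - mu_i) - 2 ln P_i ],
    with C_i^+ the pseudoinverse and ln|C_i| taken as the log
    pseudo-determinant so a near-singular covariance stays usable."""
    eigvals = np.linalg.eigvalsh(C)
    tol = C.shape[0] * np.finfo(float).eps * max(eigvals.max(), 0.0)
    logdet = np.sum(np.log(eigvals[eigvals > tol]))   # pseudo-determinant
    d = x - mu
    return -(logdet + d @ np.linalg.pinv(C) @ d - 2.0 * np.log(prior))
```

With a singular C, plain inv() and log(det()) would both fail; the pseudoinverse/pseudo-determinant pair keeps the discriminant finite, in line with Heath's advice.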
Hi, I developed a method of integration which seems to be very quick and accurate, and was wondering what problems may occur with it or if it has been done before (very likely). It goes like this: You wish to integrate a function between A and B. Fit a polynomial (5th order seems good) to the function and you have your first approximation. Then repeat, computing the integral between A and (A+B)/2 plus the integral between (A+B)/2 and B. If the result is within a suitable tolerance of the first attempt, then end; if it is not, then continue dividing and checking. The point being that, in coding this, you can use a simple recursive procedure. Now, there are obvious problems with particular functions, but if you ensure that the integral is chopped up into sufficient pieces then all is well. I have found this to be very quick. Channing
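What the poster describes is adaptive quadrature with recursive bisection. A minimal sketch of the idea (using a 3-point Simpson estimate on each interval instead of the poster's 5th-order fit, to keep the code short):

```python
def adaptive_integrate(f, a, b, tol=1e-8, whole=None):
    """Integrate f over [a, b]: compare one Simpson estimate of the whole
    interval against the sum of estimates over the two halves; if they
    disagree by more than tol, bisect and recurse on each half."""
    def simpson(lo, hi):
        mid = 0.5 * (lo + hi)
        return (hi - lo) / 6.0 * (f(lo) + 4.0 * f(mid) + f(hi))

    if whole is None:
        whole = simpson(a, b)
    m = 0.5 * (a + b)
    left, right = simpson(a, m), simpson(m, b)
    if abs(left + right - whole) < tol:
        return left + right
    return (adaptive_integrate(f, a, m, tol / 2.0, left) +
            adaptive_integrate(f, m, b, tol / 2.0, right))
```

Passing the half-interval estimates down as `whole` avoids recomputing them, which is what makes the recursive formulation cheap. The "obvious problems with particular functions" are real: a feature of f that both sample sets happen to miss can fool the tolerance test, which is why library routines add safeguards.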
Hello there, I have a problem that puzzles me related to signal processing, and I hope somebody can help me, or at least give me a direction on where to look for an answer.

Suppose I have a continuous signal x(t), causal. Its Fourier transform X(f) is continuous and satisfies the Kramers-Kronig relations, i.e. Re(X(f)) and Im(X(f)) are related through a Hilbert transform. This I understand.

Now let us suppose I have a discrete signal xn, which is non-zero for n = 0, 1, ..., N-1. I also suppose xn to be complex, in which case the discrete signal contains 2*N pieces of experimental information. Its Discrete Fourier Transform Xn is also complex, and is calculated over N frequencies. However, according to the Kramers-Kronig relationship, the imaginary parts and the real parts of Xn are not independent, and can be derived from each other using a discrete Hilbert transform. That would mean that Xn is composed of N independent pieces of information, while xn had 2*N. Where did the other N values go, knowing that the DFT is linear and invertible? Furthermore, if I throw away the imaginary part of Xn and recalculate it using the discrete Hilbert transform of Xn, I don't come back to the original values. Am I missing something? What is wrong in my reasoning?

Thanks in advance for your help.

Patrice Koehl
koehl@bali.u-strasbg.fr
I've found the definitive work, sponsored by the Canadian & US governments: "NBS/NRC Steam Tables", Lester Haar et al. ISBN 0-89116-354-9 (cloth), ISBN 0-89116-353-0 (paper). The book contains an appendix with FORTRAN code calculating thermodynamic and transport properties of both vapour and liquid states of water. ~~~~~~~~ I've written C and VBA (Excel) code for thermo properties. I'll trade it for transport property code.
Numerical Recipes ( http://cfata2.harvard.edu/numerical-recipes/ ) section 13.8 has a write-up on the analysis of one-dimensional unevenly sampled data. Perhaps you could extend it to 2-D, or just parameterize back to the 1-D case using the light angle as a parameter. +---------------------------------------------------------------------------+ | John Day (jday@csihq.com) | Computer Science Innovations, Inc | Principal Staff Scientist PHONE: (407) 676-2923 ext:410 | Melbourne, Fl FAX: (407) 676-2355 | WWW: http://www.csihq.com | 'Everything has a name' -Helen Keller +---------------------------------------------------------------------------+
In article <32806bba.519610889@128.183.251.167>, lynch@gsti.com (David Lynch) writes:
|> I have a matrix for which I would like to compute the SVD, but the matrix
|> consists of physical data. How can I compute the accuracy of the U, V,
|> and Sigmas? If I have some estimate of the error in each matrix
|> element, can I get some estimate of the error in the decomposition?
|>
|> Dave
|> ****************************************************************************
|> * David Lynch *
|> * Global Science and Technology e-mail: *
|> * 6411 Ivy Lane Suite 610 lynch@gsti.com *
|> * Greenbelt MD. 20770 *
|> * Phone (301) 474-9696 * Fax (301) 474-5970 *
|> * Cogito cogito, ergo Cogito sum *
|> ****************************************************************************

Yes, this is possible. The maximum error in the singular values is the norm (maximum singular value) of the matrix of errors. Since you state you can bound these, you can bound the errors in the singular values. For the errors in the singular vectors the job is a little bit harder: you need perturbation analysis for eigenvectors of Hermitian matrices. If the singular values are all different, then roughly speaking the error in a singular vector is at most the norm of the error matrix divided by the distance from its singular value to the nearest other singular value, but there are better estimates. Consult Golub & Van Loan, Matrix Computations, and the literature given there. Hope this helps.
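The bound quoted for the singular values is Weyl's perturbation inequality, |sigma_i(A+E) - sigma_i(A)| <= ||E||_2, and it is easy to check numerically; a small sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))           # "measured" matrix
E = 1e-6 * rng.standard_normal((6, 4))    # element-wise measurement errors

s_exact = np.linalg.svd(A, compute_uv=False)
s_pert = np.linalg.svd(A + E, compute_uv=False)

# Weyl: every singular value moves by at most the spectral norm of E
bound = np.linalg.norm(E, 2)
print(np.abs(s_pert - s_exact).max(), "<=", bound)
```

If only element-wise error bounds are known, ||E||_2 can in turn be bounded by, e.g., the Frobenius norm of the matrix of element bounds.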
Hi. I need to know if there's a way to do the following: I have some data representing a function F(u,v,w) that is the Fourier transform of some unknown function f(x,y,z) (this second function is a stack of images, and it is known to be composed of entirely real data). Is there any way I can find the maximum and minimum values of f(x,y,z) without doing the actual inverse Fourier Transform? It would save me a lot of computing time if there was a way. Thanks, RTS ryans@isgtec.com
Arno Zwegers wrote:
>
> Hello to you all,
> I don't know if this is the right newsgroup to post in,
> but I have a question about odds on a casino table.
>
> The situation is the following:
>
> The table has 3 rows, each with 15 different numbers in it, and
> there is the 0. So in total there are 46 numbers; the odds of getting a
> specific number are 1/46, and the odds of getting a specific row are 15/46.
> Now suppose rows 1 and 3 have been thrown (meaning that a number in
> that row was thrown) 15 and 20 times, and row 2 hasn't been thrown yet.
> Are the odds now better if you bet on row 2, because the other two have
> been thrown more and row 2 hasn't been thrown at all?
>
> The throwing device is neutral, so it has no preferred number.
>
> Any help would be appreciated. I think the odds for row 2 are better,
> since the other two have been thrown more, and because the device is
> neutral ALL numbers are thrown equally often in the end.
>
> Thanks
>
> Arno Zwegers
> amzweg@cistron.nl
> http://www.cistron.nl/~amzweg

I don't claim to know a whole lot about probability, but here is my two cents' worth. First of all, you have made a couple of assumptions about the situation: you said that each row is equally likely as another row -- true? Well, let's see. The odds of NOT getting row 2 on one throw would be 1 - 15/46 = 31/46. Take that number to the 15th or 20th power: (31/46)^15 = 0.00269 roughly, and (31/46)^20 = 0.0003733 roughly. In other words, it is quite unlikely NOT to get row 2 for 15 or 20 throws, but possible. Now to hopefully answer your question. Each event, or throw, is independent -- like tossing a coin. Each toss is independent of every other toss: 50% chance of heads, 50% chance of tails. If I throw 10 heads in a row (odds of that happening: 1/1024), the odds of getting ANOTHER head are still 1/2, or 50%. The odds don't change just because the unlikely event occurred previously.
I would definitely say that the odds of hitting row 2 don't increase, or for that matter change at all. I would almost say that the odds of getting row 2 are lower, just because rows 1 and 3 were hit more -- but that was probably just an unlikely event. Anyway, hope this helps! - Mark
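Both numerical claims in this exchange can be checked exactly with rational arithmetic (the 15-numbers-per-row and 46-total counts are from the post; the code is only an illustration of the independence argument):

```python
from fractions import Fraction

p_row = Fraction(15, 46)      # probability of a given row on one throw
p_miss = 1 - p_row            # a single throw avoids row 2

# a 20-throw drought for row 2 is unlikely but perfectly possible
p_drought = p_miss ** 20

# the throws are independent, so the next-throw probability is unchanged
p_next_throw = p_row
print(float(p_drought))   # about 0.00037, matching the post
```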
Hello, Are there alternatives to Sun's own f77 compiler? Just curious. tia -- Mirko Vukovic, Ph.D 3075 Hansen Way M/S K-109 Varian Associates Palo Alto, CA, 94304 415/424-4969 mirko.vukovic@varian.grc.com
All, or nearly all, automatic integration routines do this. A very simple and popular example is Romberg integration. Look at the FORTRAN quadrature routines on NETLIB; you will find a similar technique used frequently. You are to be congratulated for figuring out the effectiveness of this method independently. I wrote a magazine article for Dr. Dobb's Journal (Oct 96 issue) where that technique is used with several different numerical methods including Newton-Cotes, Gaussian, and Recursive Monotone Stable.

Channing Walton wrote in article <328B495B.5020@eleceng.ucl.ac.uk>...
> Hi,
> I developed a method of integration which seems to be very quick and accurate
> and was wondering what problems may occur with it or if it has been done
> before (very likely).
> It goes like this:
> You wish to integrate a function between A and B.
> Fit a polynomial (5th order seems good) to the function and you have your
> first approximation.
> Then repeat, computing the integral between A and (A+B)/2 plus the integral
> between (A+B)/2 and B.
> If the result is within a suitable tolerance of the first attempt, then end;
> if it is not, then continue dividing and checking.
>
> The point being that, in coding this, you can use a simple recursive procedure.
>
> Now there are obvious problems with particular functions, but if you ensure
> that the integral is chopped up into sufficient pieces then all is well.
>
> I have found this to be very quick.
In article <328A17D3.2781E494@csi.uottawa.ca>, Alioune Ngom wrote:
> Let K = {0, 1, ..., k - 1} (k > 1) be a set of k logic values.
>Let Union and Intersection be two operators defined on K. Union is
>defined as the bitwise OR operation between two elements represented in
>binary numbers (having each log(k) bits, the base of the log is 2). Thus
>for instance, for k = 8 we have, 1 Union 2 = 001 Union 010 = 011 = 3.
>Intersection is defined as the bitwise AND operation between two
>elements represented in binary numbers. Thus for instance, for k = 8 we
>have, 5 Intersection 6 = 101 Intersection 110 = 100 = 4.
>
> Let k be a power of 2 (i.e. k = 2^r, with r > 0). Unary central
>relations are the non-empty and proper subsets of K. A unary central
>relation R is closed under Union and Intersection if x Union y and
>x Intersection y are in R whenever x and y are in R. In other words:
>(x in R and y in R) implies (x Union y is in R and x Inter y is in R).
>
> Now the problem statement: For k = 2^r, how many unary central
>relations are closed under Union and Intersection ?

I don't have an answer, but I do have an insight, which may or may not be obvious; if it's obvious, sorry. (If it's wrong, even sorrier. :-) ) Let me try to state this by demonstration using the case r=2, which has the numbers 00, 01, 10 and 11. Which subsets of this set are closed? Well, all four subsets containing a single element are closed. Of the six sets containing two elements, only {01,10} is not closed, and to make it closed you have to add both 00 and 11. Of the four sets containing three elements, only the two that lack either 00 or 11 are not closed, and that's because a set containing 01 and 10 has to have both 00 and 11 in addition in order to be closed. Let's now talk about arbitrary r. Let's define a pair of positions to be closed if the list of bit patterns that occur there is closed.
We've exhaustively enumerated the possible lists of patterns and their closure in the previous paragraph. Now, I contend that if there are N numbers, each with r bits, it is both necessary and sufficient that all r*(r-1)/2 unique position pairs be closed in order for the list of N numbers to be closed. What condition must a position pair fulfill in order to be closed? The answer is:

if( 01 and 10 both occur at this position pair ) {
    if( 00 and 11 also exist ) {
        the position is closed
    } else {
        the position is not closed
    }
} else {
    the position is closed
}

Given a position pair in any nonempty subset of K, there are 15 possible lists of bit patterns that can occur. Of these, all but three are closed. (These three are {01,10}, {00,01,10} and {11,01,10}.) I guess the next step is to consider the question: given all the nonempty subsets of K, how often do the closed patterns occur? -P. -- ****** Multicultural Holiday Song: "I'm Dreaming of a White Kwanzaa" ***** * Peter S. Shenkin; Chemistry, Columbia U.; 3000 Broadway, Mail Code 3153 * ** NY, NY 10027; shenkin@columbia.edu; (212)854-5143; FAX: 678-9039 *** MacroModel WWW page: http://www.cc.columbia.edu/cu/chemistry/mmod/mmod.html
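Brute-force enumeration makes the small cases of the original question concrete; a sketch (function names are mine; "closed" means closed under bitwise OR and AND, exactly as defined in Ngom's post):

```python
from itertools import combinations

def is_closed(subset):
    """True if x|y and x&y stay inside the subset for all members x, y."""
    return all((x | y) in subset and (x & y) in subset
               for x in subset for y in subset)

def count_closed(r):
    """Count the non-empty proper subsets of {0,...,2^r - 1} closed
    under bitwise OR and AND (the unary central relations asked about)."""
    k = 2 ** r
    return sum(1
               for size in range(1, k)               # non-empty and proper
               for combo in combinations(range(k), size)
               if is_closed(set(combo)))
```

For r = 2 this reports 11 closed subsets out of the 14 non-empty proper ones, agreeing with Shenkin's case analysis (4 singletons, 5 pairs, 2 triples).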
In article <328B6417.38D3@grc.varian.com>, Mirko Vukovic wrote:
>
> Are there alternatives to Sun's own f77 compiler? Just curious.

Yes. NAG Fortran 90 is available for Suns, though Sun also have their own Fortran 90 (which, I believe, uses NAG technology under licence). You could probably also port g77, if you are adventurous :-) And dare I mention f2c, for those who don't give a damn about performance or diagnostics? Nick Maclaren, University of Cambridge Computer Laboratory, New Museums Site, Pembroke Street, Cambridge CB2 3QG, England. Email: nmm1@cam.ac.uk Tel.: +44 1223 334761 Fax: +44 1223 334679
In article, Konrad Hinsen wrote:
> Hideo Hirose writes:
>
> > In Japan, many researchers pronounce LaTeX as "latef." Is it correct? How
> > do you pronounce TeX and LaTeX actually, especially in the United States?
>
> The question has already been addressed by Donald Knuth himself, who
> explained that the X in "TeX" is actually the Greek letter chi, and
> therefore should be pronounced like "chi" in Greek, "ch" in German,
> Irish, or Scottish, or like the corresponding sounds in Russian,
> Arabic, etc. That seems to be too difficult for most speakers
> of English, so what I hear in practice is either "tek" (in analogy
> to the English pronunciation of words like "technical") or "tex",
> especially "latex", i.e. totally ignoring the intended etymology.

I think it's pronounced "tex" in the US because of Texas. :-) But I usually pronounce it "PAIN." God it's a hard way to write a manuscript. Just my 0.03 US$. I invite all flames. -- Louis M. Pecora pecora@zoltar.nrl.navy.mil == My views and opinions are not those of the U.S. Navy. == -------------------------------------------------------------------- * Check out the home page for the 4th Experimental Chaos Conference! http://natasha.umsl.edu/Exp_Chaos4 ---------------------------------------------------------------------
Miroslav Trajkovic wrote:
> I have one problem which looks very nice but I am not sure if it
> has a nice solution.
>
> Let a = [1 p q r s]', //where ' means transpose
> b = [1 u v w z]'
> and A = a*a';
>
> Is there a "shortcut" to solve the system
>
> A*x = b;

Actually, unless b is in the range of A, i.e., b and a are collinear, there will not be a solution. Nevertheless, you can compute a least-squares estimate of the problem, x = pinv(A)*b, where pinv(.) is the pseudo-inverse, and x is the smallest vector which minimizes the error ||Ax-b||. Normally, the pseudo-inverse is computed via the singular value decomposition, but in this case it can be written down directly: pinv(A) = a*a'/(a'*a)^2, so: x = ( (a'*b)/(a'*a)^2 ) a, which is very cheap to compute. --John ------------------------------------------------- Dr. J.J. Hench Dept. of Mathematics, Univ. of Reading, England Institute of Informatics and Automation, Prague -------------------------------------------------
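John's closed form can be checked against the generic SVD route in a few lines (the vectors here are arbitrary test data):

```python
import numpy as np

a = np.array([1.0, 2.0, -1.0, 0.5, 3.0])
b = np.array([1.0, 0.0, 2.0, 1.0, -1.0])
A = np.outer(a, a)                        # A = a*a', rank one

# John's shortcut: x = ((a'b)/(a'a)^2) * a
x_short = (a @ b) / (a @ a) ** 2 * a

# generic route: pseudoinverse via SVD
x_pinv = np.linalg.pinv(A) @ b
print(np.allclose(x_short, x_pinv))       # the two routes agree
```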
Hamish Hubbard wrote:
>
> I have an array of data, about 27 by 36; the rows and columns of the array
> are not necessarily evenly spaced (i.e. the axes could be 0, 10, 20, 35, 50,
> 60, ... and 0, 2.5, 5, 10, 15, 20, 25, ...). (This data is sampled from the
> output of a street light at various angles.)
>
> I need to be able to get an average for a given 'square' defined by
> points (upper left and lower right) which are arbitrary. The current method
> is to use Simpson's rule on the middle row of data that crosses through the
> square, but this is not accurate enough; I want to use a method that takes
> advantage of all the data I have in 2 dimensions. I don't really know even
> what sort of algorithm to look for; this is not my area of expertise.

You could fit a two-dimensional polynomial to your data using, say, Lagrange interpolation, and then perform the integration over the polynomial. Once the polynomial is computed (which is trivial with Lagrange), it can be evaluated over a continuum. Moreover, since your integration regions are rectangular, you can find a closed-form expression for the definite integral given only the opposite corners. Just a thought. Matt -- maboytim@geocities.com http://www.geocities.com/CapeCanaveral/3041
Can anyone refer me to a good explanation of what it means to "pre-whiten" signals, preferably with some public domain algorithms? Thank you. Jeff Miller Dept of Psychology Univ of Otago Dunedin, New Zealand miller@otago.ac.nz http://jomserver.otago.ac.nz/
Hamish Hubbard (misc1684@cantua.canterbury.ac.nz) wrote:
: I have an array of data, about 27 by 36, each row and column in the array
: is not necessarily evenly spaced (i.e. the axes could be 0, 10, 20, 35, 50,
: 60, ... and 0, 2.5, 5, 10, 15, 20, 25, ...). (This data is sampled from the
: output of a street light at various angles.)
: I need to be able to get an average for a given 'square' defined by
: points (upper left and lower right) which are arbitrary. The current method
: is to use Simpson's rule on the middle row of data that crosses through the
: square, but this is not accurate enough; I want to use a method that takes
: advantage of all the data I have in 2 dimensions. I don't really know even
: what sort of algorithm to look for; this is not my area of expertise.

Check out http://www.iinet.com.au/~watson/nngridr.html -- Dave Watson CSIRO Exploration and Mining email: watson@ned.dem.csiro.au 39 Fairway, P.O. Box 437 tel: (61 9) 284 8428 Nedlands, WA 6009 Australia. fax: (61 9) 389 1906
Can you use the square root function? If so, a simple approach to generate Cos[Pi x] is to express x in terms of its base-2 binary expansion and use the angle addition and half-angle formulas. For example, let x = a1 + b1/2, where a1 is 0 or 1/2 and 0 <= b1 < 1. Then

Cos[Pi(a1+b1/2)] = Cos[Pi a1] Cos[Pi b1/2] - Sin[Pi a1] Sin[Pi b1/2]
Cos[Pi b1/2] = Sqrt[(1 + Cos[Pi b1])/2]
Sin[Pi b1/2] = Sqrt[(1 - Cos[Pi b1])/2]

Now let b1 = a2 + b2/2, where a2 is 0 or 1/2 and 0 <= b2 < 1; apply the same procedure to express Cos[Pi(a2+b2/2)] in terms of Cos[Pi b2]; then let b2 = a3 + b3/2 ...
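Coded up directly, the recursion looks like this (a sketch assuming 0 <= x < 1; the function name and the fixed recursion depth are my choices — the base step approximates Cos[Pi b] by 1, and the repeated half-angle square roots damp that error away):

```python
import math

def cos_pi(x, depth=48):
    """Approximate cos(pi*x), 0 <= x < 1, using only square roots and
    the binary expansion x = a + b/2 with a in {0, 1/2}, 0 <= b < 1."""
    if depth == 0:
        return 1.0                      # crude base value for cos(pi*b)
    if x >= 0.5:
        a, b = 0.5, 2.0 * (x - 0.5)
    else:
        a, b = 0.0, 2.0 * x
    c = cos_pi(b, depth - 1)            # cos(pi*b), one level down
    cos_half = math.sqrt(max(0.0, (1.0 + c) / 2.0))   # cos(pi*b/2)
    sin_half = math.sqrt(max(0.0, (1.0 - c) / 2.0))   # sin(pi*b/2)
    # addition formula: cos(pi*(a + b/2)); sin(pi*a) is 0 or 1
    return -sin_half if a == 0.5 else cos_half
```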
kcj0000000@aol.com wrote:
> v=0

Not necessarily, if A isn't p.d. For example, take

v = ( 1 )    A = ( -1  0 )
    ( 0 )        (  0  0 )

Then the quadratic form is equal to -1, which is less than zero. In fact, there need be no minimum... write -(x^2 + y^2) in this form (that is, let A be negative definite). (The original post didn't make it here, so I don't know what assumptions might have been put on A within the message.) -- Do not imagine that mathematics is hard and crabbed, and repulsive to common sense. It is merely the etherealization of common sense. -Lord Kelvin
Hi, I am building a drawing graphical user interface program on a Sun station, and I am stuck on some mathematical problems. I hope somebody can answer this question. My project is to implement a drawing graphical user interface, and where I am stuck is the way to draw an arc on a canvas. I need to implement a function where the user clicks three points on the canvas and an arc is drawn. This may sound too easy, but I do not want an arc that is just a part of a circle through the three points; I want an arc which smooths the angle formed by the three given points. What I need is for someone to tell me whether this is a kind of arc or whether it has a different name. And can you give a reference for the formula to draw this "arc" given just three points? Thank you very much for your time. Paul.
I thought that internally the coprocessor uses 10-byte reals (real10). Those are then dumped into memory as 8-byte doubles. I suspect that this transformation spoils your directed rounding. If anybody could tell me a way to control this (in C, maybe mixed with assembler), I would be grateful too. Also, you do not need a debugger to observe this. Just subtract the two results. If this is done on the coprocessor stack (as most compilers do it), you will notice a difference. So you should intermediately save the results to a variable to check for proper rounding, or use 10-byte real variables, if you have them. Rene.

>I have implemented a series of math functions which use the directed
>rounding commands available for the floating point unit (fpu) on the
>Pentium. In checking through these functions, I divided 1 by 3 rounding up
>then rounding down to compare the results. In both cases, I obtained the
>same result (.333333 out to the number of places that can be observed in my
>debugger (Visual C++ 4.2)), which presumably is the full representation of
>real10 precision. (My routines are actually written in MASM 6.11d
>assembler.)
v=0
In article <328BCB98.7455@mail.geocities.com>, Matt Boytim writes:
|> Hamish Hubbard wrote:
|> >
|> > I have an array of data, about 27 by 36, each row and column in the array
|> > is not necessarily evenly spaced (i.e. the axes could be 0, 10, 20, 35,
|> > 50, 60, ... and 0, 2.5, 5, 10, 15, 20, 25, ...). (This data is sampled
|> > from the output of a street light at various angles.)
|> >
|> > I need to be able to get an average for a given 'square' defined by
|> > points (upper left and lower right) which are arbitrary. The current
|> > method is to use Simpson's rule on the middle row of data that crosses
|> > through the square, but this is not accurate enough; I want to use a
|> > method that takes advantage of all the data I have in 2 dimensions. I
|> > don't really know even what sort of algorithm to look for; this is not
|> > my area of expertise.
|>
|> You could fit a two-dimensional polynomial to your data using, say,
|> Lagrange interpolation, and then perform the integration over the
|> polynomial. Once the polynomial is computed (which is trivial with
|> Lagrange), it can be evaluated over a continuum. Moreover, since
|> your integration regions are rectangular, you can find a closed-form
|> expression for the definite integral given only the opposite corners.
|>
|> Just a thought.
|>
|> Matt

Two-dimensional integration is the right way, I think, but clearly not using Lagrange (high-degree) interpolation. Why not triangulate the implicitly defined (incomplete) rectangular grid, using linear interpolation/integration on triangles and summing up? cheers, peter
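In the same low-order spirit (on rectangles rather than the triangles peter suggests), one can integrate the piecewise-bilinear surface cell by cell on the non-uniform grid; a sketch, assuming for simplicity that the query rectangle's corners lie on grid lines (the function name and test data are mine):

```python
import numpy as np

def rect_average(xs, ys, Z, x0, x1, y0, y1):
    """Average of the piecewise-bilinear surface through (xs, ys, Z) over
    [x0, x1] x [y0, y1]; corners are assumed to lie on grid lines."""
    i0, i1 = np.searchsorted(xs, [x0, x1])
    j0, j1 = np.searchsorted(ys, [y0, y1])
    x, y = xs[i0:i1 + 1], ys[j0:j1 + 1]
    z = Z[i0:i1 + 1, j0:j1 + 1]
    # trapezoidal rule over y (exact for a bilinear patch), then over x
    col = np.sum(0.5 * (z[:, 1:] + z[:, :-1]) * np.diff(y), axis=1)
    integral = np.sum(0.5 * (col[1:] + col[:-1]) * np.diff(x))
    return integral / ((x1 - x0) * (y1 - y0))
```

For arbitrary corners one would additionally split the boundary cells, but the interior logic stays the same.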
In article, Bill Simpson writes:
|> I have a calibration problem, and am seeking advice.
|>
|> An x,y,z scope is displaying dots. The luminance of the dot is governed by
|> z. I step through the values of z, measuring the luminance with a
|> photometer (automatically). The point of this is to linearize the z
|> values, and to calibrate the display. After doing this I wish to say
|> plot(x,y,lum2z(100.0));
|> and get a dot with luminance of 100.0 cd/m^2. That is, lum2z(lum)
|> returns the z value that gives a luminance of lum.
|>
|> So I have measured lum (with error) at many z values (no error). I wish
|> to estimate z from lum. This is called a calibration problem in the
|> statistics literature (or inverse regression).
|>
|> I have 4096 z values. I measure in steps of 9 (from 4095 to 0).
|>
|> My idea is to fit a high-order polynomial to the (z,lum) data points.
|> The order has to be high, say 11th or even higher, to get a decent fit.
|> I would use SVD. I say the order has to be high from looking at
|> the actual data and trying various fits. The fit is done on the
|> first call of lum2z(). Suppose that only a 2nd order polynomial is
|> fitted:
|> lum = b0 + b1*z + b2*z^2
|> Then this is solved for z:
|>
|> z = (-b1 + sqrt(b1^2 + 4 b2 lum - 4 b2 b0)) / (2 b2)
|> or
|> z = (-b1 - sqrt(b1^2 + 4 b2 lum - 4 b2 b0)) / (2 b2)
|>
|> (not sure which one to use; lum and z are both constrained to be positive)
|>
|> Then on subsequent calls of lum2z(), I just use the fitted parameters
|> b0, b1, b2 in the above equation to get the z value. This will be very
|> fast, and speed is important because this function will get a LOT of
|> work (8100 calls per image, multiple images).
|>
|> An alternative is to call z the y value and lum the x variable
|> (even though that's not correct) and fit the polynomial to that. That
|> way I avoid the symbolic algebra to solve for z. This method should be OK
|> since the errors are very small compared to the range of lum.
|>
|> [Actually I have just read John Chandler's posting on polynomial
|> fitting. I will do it the way he suggests rather than as written above.]
|>
|> The other options would include
|> - linear interpolation
|> - quadratic interpolation
|> - spline interpolation
|> - ??
|>
|> I have tried linear and quadratic interpolation on the data taken as
|> (lum,z). They are not dependable. I especially have problems at the
|> low z values, where the luminance readings are noisy and near 0 and
|> luminance is not a monotonic function of z.
|>
|> I have thought about fitting a spline. The routines I have seen require
|> lum to be a monotonic function of z. (I use C.) It seems to me this
|> will be a slow method, since the spline interpolation must be computed
|> on every call.
|>
|> Please let me know if my proposed solution seems reasonable. If not,
|> what should I do? Please also suggest available C code.
|>
|> Thanks very much for any help.
|>
|> Bill Simpson

Why not use the _inverse_ fit directly? That is, you fit z = a + b*lum + c*lum^2. With error in lum, you should use total least squares, but this should not be too hard with a not-too-large set of (x,y) pairs. It might be better than your _direct fit + inversion_. cheers, peter
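Ignoring the error-in-variables subtlety peter raises, his suggestion amounts to an ordinary low-order polynomial fit of z on lum; a sketch with an invented monotone display response and noise level (all the specific numbers here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
z = np.arange(0.0, 4096.0, 9.0)                  # commanded z values (error-free)
true_lum = 0.04 * z + 1.0                        # hypothetical display response
lum = true_lum + rng.normal(0.0, 0.05, z.size)   # noisy photometer readings

# Fit the *inverse* map z = a + b*lum + c*lum^2 directly, so each later
# lum2z() call is just a polynomial evaluation -- no root-finding needed.
lum2z = np.poly1d(np.polyfit(lum, z, 2))
print(lum2z(0.04 * 2000.0 + 1.0))                # close to 2000
```

This also sidesteps the sign ambiguity in the quadratic-formula inversion above, since the fitted polynomial is evaluated, never solved.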
> I'm looking for an efficient routine to compute the inverse of the

Our World Wide Web site on data modeling has excellent links to mathematical resources and software, as well as pointers to the better Internet search engines (I prefer Alta Vista). The URL is: http://www.fred.net/mandalay Yours, James R. Phillips President Mandalay Scientific, Inc.
Hi! We are two third-year students in electronic engineering at the Politecnico di Torino, Italy, and we need help with a problem in numerical analysis. How can you represent non-polynomial functions (not expressible as a sum of powers of x) on a PC? (Especially sin(x), cos(x), and exp(x).) We thought about the Maclaurin series with several enhancements, such as reducing every angle to the range {-pi/4, pi/4}, but our teacher told us that there is a better way, which gives a smaller error when abs(x) is relatively far from 0 (near pi/4). Can anyone help us? We are in a great hurry, since we have to finish this job in a few days, and we would like to add the results of these better algorithms, comparing them with those from the simpler algorithms we used. Thanks a lot, and please forgive us for the bad English. Guido & Igor P.S.: If you can help, send the answer not only to the newsgroup but also through e-mail at guidov@net4u.it, because otherwise we couldn't read it until Monday.
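The range-reduction scheme the students describe looks like this in outline (a sketch, not library quality: real implementations replace the Taylor kernels with minimax polynomials and carry out the reduction in extended precision):

```python
import math

def sin_reduced(x):
    """sin(x) by reducing to a remainder r in [-pi/4, pi/4]: with
    k = round(x / (pi/2)), sin(x) equals +/- sin(r) or +/- cos(r)."""
    k = round(x / (math.pi / 2.0))
    r = x - k * (math.pi / 2.0)
    # short Taylor kernels are accurate enough on the small remainder
    s = r - r**3 / 6.0 + r**5 / 120.0 - r**7 / 5040.0
    c = 1.0 - r**2 / 2.0 + r**4 / 24.0 - r**6 / 720.0 + r**8 / 40320.0
    return (s, c, -s, -c)[k % 4]
```

The quadrant index k % 4 selects which of sin(r), cos(r), -sin(r), -cos(r) equals sin(x); the kernel polynomial only ever sees |r| <= pi/4, which is exactly why the teacher's hint about errors near pi/4 matters.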
I am looking for a MATLAB program to evaluate the DDS (Data Dependent Systems) methodology. Gabriel Sirat -- From: Ophir Optronics Ltd, POB 45021, 91450 Jerusalem, Israel. Phone: 972-2-5326592. Fax: 972-2-5822338. vecht@ophiropt.co.il
I am looking for sources (tables or programs) for generating cyclic difference sets, particularly Singer difference sets (from projective planes) and Bose difference sets (from affine planes). Is anyone aware of any (free) programs that generate these and other sets? Thanks.
In article <56d6l0$gq4@r02n01.cac.psu.edu>, Mike Yukish wrote:
> In article <56bc1t$58r@hcunews.hiroshima-cu.ac.jp> Hideo
> Hirose, hirose@cs.hiroshima-cu.ac.jp writes:
> >In Japan, many researchers pronounce LaTeX as "latef." Is it correct? How
> >do you pronounce TeX and LaTeX actually, especially in the United States?
>
> I pronounce it so it rhymes with 'luxury yacht'

Like "La Tot" (short "o")? Or are you pronouncing the "ch" as in "chaotic"? As in "church"? I'm confused. -- Louis M. Pecora pecora@zoltar.nrl.navy.mil == My views and opinions are not those of the U.S. Navy. == -------------------------------------------------------------------- * Check out the home page for the 4th Experimental Chaos Conference! http://natasha.umsl.edu/Exp_Chaos4 ---------------------------------------------------------------------
In article <328B43C0.45BF@isgtec.com>, Ryan Sparkes wrote:
>Hi. I need to know if there's a way to do the following:
>
>I have some data representing a function F(u,v,w) that is the
>Fourier transform of some unknown function f(x,y,z) (this second
>function is a stack of images, and it is known to be composed
>of entirely real data). Is there any way I can find the maximum
>and minimum values of f(x,y,z) without doing the actual inverse
>Fourier Transform? It would save me a lot of computing time
>if there was a way.
>
>Thanks,
>
>RTS
>ryans@isgtec.com

Min and max are non-linear operators; I doubt that you can get at them in closed form. However, you _can_ get a lower and upper bound on f(x,y,z). Apply the triangle inequality to the inverse transform sum: with the usual 1/N normalization of the inverse DFT,

abs(f(x,y,z)) <= sum(abs(F(u,v,w))) / total_num_points_in_uvw

Not exactly the min and max, but it might be enough for your application. lakshman
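The bound is easy to sanity-check with a random real stack (array sizes and data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal((8, 8, 8))   # a real "image stack"
F = np.fft.fftn(f)                   # its forward DFT

# triangle inequality on the inverse DFT (1/N normalization):
# |f(x,y,z)| <= sum |F(u,v,w)| / N
N = f.size
bound = np.abs(F).sum() / N
print(f.min(), f.max(), "within +/-", bound)
```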
Patrice Koehl wrote:
>
> Hello there,
>
> I have a problem that puzzles me related to signal processing, and
> I hope somebody can help me, or at least give me a direction on where
> to look for an answer.
>
> Suppose I have a continuous signal x(t), which is causal. Its Fourier
> transform X(f) is continuous and satisfies the Kramers-Kronig relations,
> i.e. Re(X(f)) and Im(X(f)) are related through a Hilbert transform. This
> I understand.
>
> Now suppose I have a discrete signal xn, which is non-zero for
> n = 0, 1, ..., N-1. I also suppose xn to be complex, in which case the
> discrete signal contains 2*N real numbers of experimental information.
> Its discrete Fourier transform Xn is also complex, and is calculated
> at N frequencies. However, according to the Kramers-Kronig relationship,
> the imaginary and real parts of Xn are not independent, and can be
> derived from each other using a discrete Hilbert transform.
> That would mean that Xn contains only N independent values, while xn
> had 2*N. Where did the other N values go, given that the DFT is linear
> and invertible?
> Furthermore, if I throw away the complex part of Xn and recalculate it
> using the discrete Hilbert transform of Xn, I don't come back to the
> original values. Am I missing something? What is wrong in my
> reasoning?
>
> Thanks in advance for your help.
>
> Patrice Koehl
>
> koehl@bali.u-strasbg.fr

Patrice,

It is probably best to start thinking in terms of a causal _physical system_. The causality property then reflects an important aspect of the physics going on inside the black box: no response can appear before an excitation is applied. Such a physical system is thus special, and it is not so surprising that its frequency response (the FT of its impulse response) has correlated real and imaginary parts. It is, I believe, impossible to build a physical system that is not causal.
In the case of analog filters, one speaks of (_physically_) realisable filters: their transfer functions form a small subset of all possible functions. This is perhaps one of the reasons people turn to digital filters: they are much more flexible.

In the 1930s, a then-renowned French university professor (Bouasse) wrote many physics textbooks, each with a long preface explaining some of the author's ideas. One of his pet subjects was ridiculing the use of the Fourier transform in physics. By forgetting causality, he could derive many absurd properties of the Fourier transform of any physical property.

In nuclear magnetic resonance (NMR), we observe the response of a causal system, an assembly of magnetic spins. It is possible to show that, accordingly, the real and imaginary parts of the magnetic susceptibility form a Hilbert transform pair: neither the system nor its response is completely arbitrary.

Moving on to causal functions, we observe that they _are_ quite special: they vanish from -inf to zero! In other words, a causal function f_c is the product of a general function f and a Heaviside step function u(t) (this is where Hilbert enters, through the FT of u(t)). Causal functions are usually not continuous, since u(t) isn't. In fact, the proper definition of u(0) is of interest when one computes the integral of F_c, the FT of f_c.

I don't see much difference with sampled functions: f is represented by the infinite sequence f_n (-inf <= n <= inf), and f_c by an infinite sequence whose values are all zero for negative n. The DFT-derived data must somehow reflect this fact. In your post, you introduce yet another idea: all sequences are truncated at N.

Noise (in physical systems) or random functions are usually stationary (or assumed to be so, since the math is so much simpler) and thus not causal.
This is a lucky circumstance for those of us who practice NMR: by recording both real and imaginary parts of the signal (in a process called quadrature detection), we gain some information or improve the signal-to-noise ratio (as you know!).

Finally, you mention back-calculating the signal after deleting the "complex" (I assume you meant imaginary) part of its FT: this is wrong! A causal signal has a _complex_ transform in general (for instance exp(-t)u(t)). The causality appears in the symmetry properties of the FT (hermitian). This is connected with the concept of an "analytic signal", which you may want to look up in a signal analysis textbook.

You may wish to post your question to the comp.dsp newsgroup, where the real signal processing takes place. I've been much too long, but perhaps useful, salut!

JP Grivet
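A minimal numpy sketch of that last point (the signal and length are arbitrary choices, not from the thread): inverting only the real part of a DFT returns the circularly even part of the signal, not the signal itself — which is exactly why the back-calculation described in the original post does not reproduce the data.

```python
import numpy as np

N = 64
n = np.arange(N)
x = np.exp(-n / 8.0)        # a causal, real, decaying signal
X = np.fft.fft(x)           # its DFT: complex, hermitian since x is real

# Inverse-transform the real part of X alone: the result is the
# circularly even part (x[n] + x[(-n) % N]) / 2 of the signal,
# not the original x.
x_even = np.fft.ifft(X.real).real
expected = (x + x[(-n) % N]) / 2
```

Keeping Re(X) symmetrizes the signal in exactly the way hermitian symmetry predicts; the causal (one-sided) character is lost.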
Hi Experts,

A friend of mine asked me to gather numerical algorithms on two topics:

Gauss-analysis
Karman-filtering

I couldn't find any information. Can anyone help me? I'm not perfectly sure about the names of the methods, but I think both of them have some connection to spectral analysis. What I need is: a description of the algorithms (what they do) and the algorithms themselves (in symbolic language or in Pascal, C, Basic or PC assembly). I can use every internet resource, so if you provide me the address, that's enough. I'm not following this group regularly, so answer via email please!

Thank you:
Ferenc Wagner (wferi@cs.elte.hu)
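On the chance that "Karman-filtering" means Kalman filtering (a guess on my part — the name and the estimation/spectral context fit, but the poster is unsure of the names), here is a minimal scalar Kalman filter sketch. The constant-value model, the noise variances, and the function name are illustrative assumptions, not from any source the poster mentions.

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a constant-value model:
    state     x_k = x_{k-1} + process noise of variance q,
    measurement z_k = x_k  + measurement noise of variance r.
    Returns the sequence of state estimates."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: variance grows by q
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update estimate with measurement z
        p = (1 - k) * p            # update variance
        estimates.append(x)
    return estimates

# Noisy measurements of a constant true value
random.seed(1)
true_value = 1.25
zs = [true_value + random.gauss(0, 0.2) for _ in range(200)]
est = kalman_1d(zs)
```

The estimate converges toward the true value while smoothing the measurement noise; the full algorithm generalizes this to vector states with matrix covariances.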
In article <3282C496.78558E0E@cae.wisc.edu>, Worawut Wisutmethangoon writes:
|> Worawut Wisutmethangoon wrote
snip snip ...
|> The problem is essentially a linear least squares problem.
|> I was thinking of how to find the linear least squares plane from a
|> set of (> 3) points. I knew that the equation of a plane is
|>
|> a.x + b.y + c.z + d = 0
|>
|> and the square of the distance from a point (xi,yi,zi) to such a plane
|> is
|>
|> (a.xi + b.yi + c.zi + d)^2 / (a^2 + b^2 + c^2)
|>
|> Thus, the sum of squared errors is
|>
|> (a^2.sum(xi^2) + b^2.sum(yi^2) + c^2.sum(zi^2)
|>  + 2.a.b.sum(xi.yi) + 2.b.c.sum(yi.zi) + 2.a.c.sum(xi.zi)
|>  + 2.a.d.sum(xi) + 2.b.d.sum(yi) + 2.c.d.sum(zi)
|>  + d^2.sum(1) ) / (a^2 + b^2 + c^2)
|>
|> If we require that (a^2 + b^2 + c^2) = 1, we can rewrite the
|> sum of squared errors as
|>
|> [ a b c d ] [ sum(xi.xi)  sum(xi.yi)  sum(xi.zi)  sum(xi) ] [ a ]
|>             [ sum(xi.yi)  sum(yi.yi)  sum(yi.zi)  sum(yi) ] [ b ]
|>             [ sum(xi.zi)  sum(yi.zi)  sum(zi.zi)  sum(zi) ] [ c ]
|>             [ sum(xi)     sum(yi)     sum(zi)     sum(1)  ] [ d ]
|>
|> This is why I asked the original question.
|>
|> Now I've found out that for linear least squares problems
|> A (mxn) . X (nx1) = B (mx1) it is recommended to do a QR factorization
|> of A and solve for X from
|> R1.X = Q1^t.B
|>
|> And I have the following questions:
|>
|> Are these two methods really the same?
|> If yes, which method would be the more efficient way to solve
|> this problem?
|>
|> Again, I would like to thank you in advance for any reply.
|>
|> Thanks,
|> Worawut W.

No, I guess not. What would you take as A and B? B = 0 gives your original problem back. You have an orthogonal distance minimization problem (for which good software already exists: netlib/odrpack), but in your case you do better with the first solution.

cheers,
peter
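A sketch of the eigenvalue route in the quoted derivation, with one standard simplification that is my assumption rather than something stated in the post: centering the points at their mean eliminates d, so the unit normal (a, b, c) is the eigenvector of the 3x3 centered scatter matrix belonging to its smallest eigenvalue — the orthogonal distance fit Peter refers to.

```python
import numpy as np

def fit_plane(points):
    """Orthogonal least-squares plane a*x + b*y + c*z + d = 0 with
    a^2 + b^2 + c^2 = 1.  The normal is the eigenvector of the
    centered 3x3 scatter matrix with the smallest eigenvalue; the
    plane passes through the centroid, which fixes d."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    scatter = centered.T @ centered          # 3x3 scatter matrix
    eigvals, eigvecs = np.linalg.eigh(scatter)
    normal = eigvecs[:, 0]                   # eigh sorts eigenvalues ascending
    d = -normal @ centroid
    return normal, d

# Synthetic points lying exactly on 2x + 3y - z + 1 = 0
rng = np.random.default_rng(0)
xy = rng.standard_normal((20, 2))
pts = np.column_stack([xy, 2 * xy[:, 0] + 3 * xy[:, 1] + 1])
normal, d = fit_plane(pts)
```

This recovers the normal up to sign; for noisy data the smallest eigenvalue itself equals the minimized sum of squared orthogonal distances.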
From spellucci Fri Nov 15 19:54:11 1996
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Subject: Re: please help: non polynomial function
Newsgroups: sci.math.num-analysis
References: <56hl47$4k3@galileo.polito.it>
Organization: TH Darmstadt, Fachbereich Mathematik

In article <56hl47$4k3@galileo.polito.it>, s84213@vcldec7.polito.it (Stoppa Igor) writes:
|> Hi!
|> We are two students in electronic engineering (third year) at
|> Politecnico di Torino, Italy, and we need help with a problem in
|> numerical analysis.
|> How can you represent non-polynomial functions (not expressible as a
|> sum of powers of x) on a PC? (Especially sin(x), cos(x), and exp(x).)
|> We thought about the Maclaurin series with several enhancements, such as
|> reducing every angle to the range [-pi/4, pi/4], but our teacher told
|> us that there is a better way, which gives a smaller error when abs(x) is
|> relatively far from 0 (near pi/4).
|> Can anyone help us? We are in a great hurry since we have to finish
|> this job in a few days, and we would like to add the results of these
|> better algorithms, comparing them with those from the simpler algorithms
|> we used.
|> Thanks a lot, and please forgive us for the bad English.
|> Guido & Igor
|>
|> P.S.: If you can help, send the answer not only to the newsgroup
|> but also through e-mail at guidov@net4u.it, because otherwise we
|> couldn't read it until Monday.

Why not take netlib/specfunc? The books of Cody & Waite as well as Hart et al. (SIAM) give lots of information on the subject.

hope this helps,
peter
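A sketch of the approach the thread points to, assuming the target is sin(x): reduce the argument to [-pi/4, pi/4] modulo pi/2, then evaluate a short polynomial on the reduced range. Plain Taylor coefficients are used here for simplicity; true minimax coefficients (as tabulated in Cody & Waite or Hart et al.) would shrink the error near pi/4 further, which is exactly the teacher's point.

```python
import math

def sin_reduced(x):
    """sin(x) via range reduction to [-pi/4, pi/4] plus short nested
    polynomials.  Taylor coefficients for simplicity; minimax
    coefficients would give a smaller worst-case error."""
    # Write x = k*(pi/2) + r with r in [-pi/4, pi/4], then pick the
    # sin/cos polynomial and sign according to the quadrant k mod 4.
    k = round(x / (math.pi / 2))
    r = x - k * (math.pi / 2)

    def p_sin(t):       # t - t^3/6 + t^5/120 - t^7/5040, nested
        t2 = t * t
        return t * (1 - t2/6 * (1 - t2/20 * (1 - t2/42)))

    def p_cos(t):       # 1 - t^2/2 + t^4/24 - t^6/720 + t^8/40320, nested
        t2 = t * t
        return 1 - t2/2 * (1 - t2/12 * (1 - t2/30 * (1 - t2/56)))

    k %= 4
    if k == 0:
        return p_sin(r)
    if k == 1:
        return p_cos(r)
    if k == 2:
        return -p_sin(r)
    return -p_cos(r)
```

With |r| <= pi/4 the truncated Taylor series is already accurate to roughly 3e-7; the point of the reduction is that the polynomial never sees a large argument, where the series would need many more terms.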
I'm solving an ODE system:

  d^2x/dt^2 = f1(t,x,y,z,...)
  d^2y/dt^2 = f2(t,x,y,z,...)
  d^2z/dt^2 = f3(t,x,y,z,...)

When I use a numerical method to calculate x(t), y(t), z(t), the error is proportional to dt to the n-th power. However, I need to calculate y and z as functions of x. How can I estimate the precision in this case?

Thanks,
Michael
dubin@highend.com
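One way to probe this numerically, as a sketch with an assumed toy system (not the poster's equations): solve x'' = -x, y'' = -4y with classical RK4, starting from x(0)=0, x'(0)=1, y(0)=1, y'(0)=0, so that x = sin t, y = cos 2t, and eliminating t gives y = 1 - 2x^2 exactly for 0 <= t <= pi/2. Measuring the residual of that relation along the numerical trajectory shows the error of y-versus-x staying at the O(dt^4) level of the method; recovering y at a *prescribed* x additionally requires inverting x(t), which is well-conditioned only where dx/dt != 0.

```python
import math

def rk4_step(state, t, dt, deriv):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(t, state)
    k2 = deriv(t + dt/2, [s + dt/2 * k for s, k in zip(state, k1)])
    k3 = deriv(t + dt/2, [s + dt/2 * k for s, k in zip(state, k2)])
    k4 = deriv(t + dt, [s + dt * k for s, k in zip(state, k3)])
    return [s + dt/6 * (a + 2*b + 2*c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def deriv(t, s):                 # s = [x, x', y, y']
    x, xp, y, yp = s
    return [xp, -x, yp, -4*y]    # x'' = -x, y'' = -4y

state, t, dt = [0.0, 1.0, 1.0, 0.0], 0.0, 1e-3
worst = 0.0
while t < math.pi/2 - dt:
    state = rk4_step(state, t, dt, deriv)
    t += dt
    x, _, y, _ = state
    # residual of the exact relation y(x) = 1 - 2*x^2
    worst = max(worst, abs(y - (1 - 2*x*x)))
```

The residual combines err_y and |dy/dx|*err_x, so away from turning points of x(t) the composed function y(x) inherits the dt^n order of the underlying integrator.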