In article <32AAE45E.41C67EA6@imag.fr>, Rene Aid wrote:

% Does anyone know where I can find a theorem on the composition
% of asymptotic expansions of vector-valued functions and norms
% of these functions?
%
% Typically, I want to know whether this proposition holds:
%
%   if   f(e,x) = f_0(x) + f_1(x) e + ... + O(e^p)
%   then ||f(e,x)|| = ||f_0(x)|| + G_1(x) e + ... + O(e^p),
%   for any good norm.
%
% Thanks,
%
% Rene Aid
% ----------------------------------------------------------------
% LMC-IMAG                 net : rene.aid@imag.fr
% 46, Av. F. Viallet       tel : (33) 04 76 57 48 66
% 38000 Grenoble France    fax : (33) 04 76 57 48 03

The expansion does not exist when f(0,x) = 0, since in that case you can
choose e to make any expansion negative. For nonzero f, the existence of
the expansion depends on the smoothness of the norm at the point f. For
example, the 1-norm will not have an expansion if one of the components
of f is zero.

Pete Stewart
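A quick numerical sketch of Stewart's last point, using the hypothetical example f(e) = (e, 1): the component f_0 = (0, 1) has a zero entry, and e -> ||f(e)||_1 = 1 + |e| then has no first-order expansion at e = 0, because the one-sided difference quotients disagree.

```python
# Illustration (not from the post itself): with f(e) = (e, 1),
# ||f(e)||_1 = 1 + |e| is not differentiable at e = 0, so no
# asymptotic expansion ||f_0|| + G_1 e + ... can exist there.

def f(e):
    return (e, 1.0)

def norm1(v):
    return sum(abs(x) for x in v)

h = 1e-6
right = (norm1(f(+h)) - norm1(f(0.0))) / h     # one-sided quotient, tends to +1
left  = (norm1(f(-h)) - norm1(f(0.0))) / (-h)  # one-sided quotient, tends to -1
print(right, left)
```

The two quotients converge to +1 and -1 respectively, which is exactly the kink in the 1-norm that blocks the expansion.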
In article <58g7sq$uom@hecate.umd.edu>, Jason Stratos Papadopoulos wrote:

% Jean Debord (jdebord@MicroNet.fr) wrote:
% : kamthan@cs.concordia.ca (KAMTHAN pankaj) wrote:
%
% : >Could anybody suggest any pointers to articles on the historical
% : >evolution of any of the various aspects of numerical analysis:
% : >LU decomposition, Newton's method, Simpson's rule, etc.?
%
% The more esoteric stuff like Gaussian quadrature, the Euler-Maclaurin
% formula, continued fractions and such can be found in
%
% Herman H. Goldstine, A History of Numerical Analysis from the 16th
% Through the 19th Century. I think it mentions all the things you want,
% except maybe the LU decomposition (it should have something about
% Gaussian elimination, though).
%
% jasonp

Regarding the LU decomposition, Gauss worked with positive definite
matrices and effectively computed an LDL^T decomposition. The general LU
decomposition is due to Jacobi. One of the following two references
contains it.

@article{jaco:1857,
  author  = "C. G. J. Jacobi",
  year    = "1857, posthumous",
  title   = "{{\"Uber eine elementare Transformation eines in Bezug auf
              jedes von zwei Variablen-Systemen linearen und homogenen
              Ausdrucks}}",
  journal = "{Journal f\"ur die reine und angewandte Mathematik}",
  volume  = "53",
  pages   = "265--270",
  kwds    = "la, lud, Sylvester-Jacobi inertia theorem, history"
}

@article{jaco:1857a,
  author  = "C. G. J. Jacobi",
  year    = "1857, posthumous",
  title   = "{{\"Uber einen algebraischen Fundamentalsatz und seine
              Anwendungen}}",
  journal = "{Journal f\"ur die reine und angewandte Mathematik}",
  volume  = "53",
  pages   = "275--280",
  kwds    = "la, lud"
}

Be warned that Gauss and Jacobi worked with quadratic and bilinear
forms, not matrices.

GWS
First Announcement - Call for Papers

Advanced Concepts and Techniques in Thermal Modelling
Eurotherm Seminar No. 53
October 8-10, 1997
Faculte Polytechnique de Mons, Mons, Belgium

The deadlines are:
  Abstracts due         : February 14, 1997
  Final manuscripts due : June 13, 1997

More details on the Web at http://stecwww.fpms.ac.be/EURO53/
or by e-mail to euro53@stecsgi.fpms.ac.be
In article <57spth$93m@ttacs7.ttu.edu>, kesinger@math.ttu.edu (Jake Kesinger) writes:
|> Could someone please comment on whether the following is a (theoretically)
|> valid method of computing the Lobatto points of degree n+1?
|>
|> 1. The Lobatto points of degree 3 are {-1,0,1}.
|> 2. If x and y are consecutive Lobatto points of degree n, then
|>    there is exactly one Lobatto point `z' of degree n+1 in the
|>    interval [x,y].
|> 3. Each Lobatto point is a simple root of P'_n, so P'_n changes sign
|>    at z.
|> 4. The secant method can be used with initial endpoints [x,y] to
|>    approximate z.
|> 5. The other two Lobatto points are -1 and 1.
|>
|> This method seems to work, but I have been unable to justify it.
|>
|> I've also come across an algorithm that uses Newton's method to find
|> each Lobatto point with initial guess of cos(j*Pi/n), j=1..n-1, but
|> have not found justification for that, either.
|>
|> Can anybody point me towards some references regarding such justification?

This comes from the theory of orthogonal polynomials. The Lobatto points
are -1, 1 and the zeros of P_n', where P_n is the n-th Legendre
polynomial, i.e. the orthogonal polynomial for weight 1 on [-1,1].
Orthogonal polynomials have all their roots real, simple, and inside
their reference interval; by Rolle's theorem, this implies the same
behaviour for the derivatives, and the interlacing of the roots gives
you the bracketing you use in step 2. For a complete proof see Krylov,
Approximate Calculation of Integrals. (The rule is also given in
Davis & Rabinowitz, Methods of Numerical Integration.)

Hope this helps,
peter
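The recipe above can be sketched directly with NumPy's Legendre utilities: take P_{n-1}, differentiate it, find the roots, and append the endpoints. This is a minimal illustration (function name `lobatto_points` is my own), not a production quadrature routine.

```python
# Sketch: Lobatto points "of degree n" in the poster's sense are
# -1, +1, and the n-2 interior roots of P'_{n-1}.
import numpy as np
from numpy.polynomial import legendre as leg

def lobatto_points(n):
    c = np.zeros(n)
    c[-1] = 1.0                            # Legendre-basis coefficients of P_{n-1}
    interior = leg.legroots(leg.legder(c)) # roots of P'_{n-1}
    return np.concatenate(([-1.0], interior, [1.0]))

print(lobatto_points(3))   # the degree-3 points: -1, 0, 1
```

For n = 4 this returns -1, +1 and the roots of P_3' = (15x^2 - 3)/2, namely x = +-1/sqrt(5), matching the interlacing picture from the post.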
In article <329CF205.4C76@softopia.pref.gifu.jp>, Joel Shellman writes:
|> Have there been any good methods developed for numerical solution of
|> DAEs? Is there anywhere on the net that has information about it? I
|> read a book recently and it said there wasn't a good method for this
|> yet. The book is a few years old, so I'm wondering what the current
|> situation is.
|>
|> Thanks,
|>
|> -joel
|>
|> --
|> taotree Tutor and Stuff
|> Math and Physics Solver and thoughts on Creativity
|> http://www.geocities.com/CapeCanaveral/8103/

What about DASSL? There has been a lot of successful work in the field
by, e.g., Gear, Leimkuhler, Petzold, and Maerz. Whether the problem is
"hard" depends on the so-called "index" of your system. Index 1 is easy
and can be treated by Gear's BDF or by implicit Runge-Kutta formulas.
DASSL uses BDF; it is available through netlib. For theory have a look
at books, e.g., by Roswitha Maerz or by Hairer, Norsett & Wanner.

Hope this helps,
peter
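To make the "index 1 is easy" remark concrete, here is a toy sketch (not DASSL, and a made-up example): backward Euler applied directly to the semi-explicit index-1 DAE x' = -z, 0 = z - x, x(0) = 1, whose exact solution is x(t) = exp(-t). Each step solves the small coupled system for (x_{n+1}, z_{n+1}); because this example is linear, the solve collapses to one division.

```python
# Backward Euler on the index-1 DAE  x' = -z,  0 = z - x,  x(0) = 1.
# Step equations: x_new = x - h*z_new and z_new = x_new, which combine
# to x_new = x / (1 + h).  Exact solution is x(t) = exp(-t).
import math

n_steps = 1000
h = 1.0 / n_steps
x = 1.0
for _ in range(n_steps):
    x = x / (1.0 + h)

print(x, math.exp(-1.0))   # first-order accurate, so the two agree to ~1e-4
```

A general-purpose code like DASSL does the same thing in spirit, but with variable-order BDF and a Newton iteration for the (nonlinear) coupled step equations.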
> : If all numbers are infinite, and zero is the only whole infinity, then:
>
> : 1 is a smaller infinity than zero.
> : 2 is a smaller infinity than one.
> : 10 is a smaller infinity than nine.
> : 1 hundred, 1 thousand, 1 billion, 1 trillion, in relation to the true
> : value of smaller numbers, or zero, have a smaller and smaller value.
>
> : It follows that 1+1+1+1... is a convergent number. Each next sum,
> : in relation to the whole, has a smaller true value.

I think you are treating oo as having the same properties as a normal
number. In many cases, you considered oo minus oo to equal 0. This is
blatantly wrong, as an example shows:

    lim (x^2 - x)  has the form "oo - oo", but the limit is +oo.
   x->oo

By your assumption, it would make sense that the limit goes to zero.

One should also note that infinity is not to be used as a number. It is
merely an expression of unboundedness. Look it up in Webster's. Now,
when you express 0, 1, 2, 1000, etc. as being infinities, you are
violating the definition of infinity. The number 2 is possible to count
to. If the range of a function f(x) is entirely below 2, it is most
definitely bounded. A function g(x) increases to infinity if there DOES
NOT EXIST a number M such that the range of g(x) is entirely below M.
Thus, saying g(x) increases to infinity is an expression of its
unboundedness.

> : Html version of this paper at:
> : http://members.aol.com/spfields1/essays/math.htm

--
_____________________________________________________________________
thomas delbert wilkinson  038 henday lister hall  university of alberta
If god were perfect, why did He create discontinuous functions?
http://ugweb.cs.ualberta.ca/~wilkinso/
In article <58g85b$bk3@mark.ucdavis.edu>, psalzman@landau.ucdavis.edu (Homer Simpson of the Borg) writes:
|> Hello all
|>
|> Simple question: Is the Crank-Nicolson algorithm unitary?
|>
|> That is, if I use C.N. to solve Schroedinger's equation and my initial
|> condition wavefunction is normalized to 1, will it stay normalized to 1
|> by the n'th iteration?
|>
|> Thanks!
|> Peter

C.N. is nothing else than the trapezoidal rule applied to an ODE. Its
stability function is (for a linear ODE y' = \lambda y, with z = h\lambda)

    g(z) = (1 + z/2) / (1 - z/2),

and hence |g(z)| = 1 for z on the imaginary axis. That is, C.N. retains
the amplitude (but may severely disturb the phase!).

Hope this helps,
peter
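Peter's claim is easy to check numerically. For i u' = H u with Hermitian H, the Crank-Nicolson step is u_{n+1} = (I + ihH/2)^{-1} (I - ihH/2) u_n, a Cayley transform, which is exactly unitary; the sketch below (random Hermitian H, my own construction, not from the post) applies it a thousand times and watches the norm.

```python
# Verify that the Crank-Nicolson step for i u' = H u preserves ||u||.
import numpy as np

rng = np.random.default_rng(0)
n, h = 5, 0.1
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2                     # Hermitian matrix
I = np.eye(n)
# Cayley-transform step matrix: (I + i h H/2)^{-1} (I - i h H/2)
step = np.linalg.solve(I + 0.5j * h * H, I - 0.5j * h * H)

u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
u /= np.linalg.norm(u)
for _ in range(1000):
    u = step @ u
print(np.linalg.norm(u))   # stays 1 up to rounding error
```

Note this is the norm (amplitude) only; as the post warns, the phase error is where C.N. actually loses accuracy.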
In article <...>, j.xiao@mailbox.uq.oz.au (Jinhong Xiao) writes:
|> Does anyone know where I can find the C source code for the
|> Non-Negative Least Squares problem?
|>
|> Many thanks in advance
|> Jim

Have a look at netlib/clapack/dgelss.
Excuse me... not for z, for \lambda on the imaginary axis, of course.
Bad hacking...

peter
Hi,

Can anybody point me to recent references on Krylov integrators? There
was a 1989 J. Sci. Comput. paper by Tuckerman ("Exponential
propagation..."); was there any follow-up by other people?

Stas.
Henry Baker (hbaker@netcom.com) wrote:
: In article <57nf9l$12q@mathserv.mps.ohio-state.edu>,
: mcclure@math.ohio-state.edu (Mark McClure) wrote:
: > In article, M. TIBOUCHI wrote:
: > >For more flexibility, you can define complexes as a 2x2 matrix.
: >
: > I'm not sure I understand the advantage of defining a complex via a
: > 2x2 matrix? Would anyone care to elaborate?

: One can 'conceive' of a complex as a 2x2 matrix, and have it inherit all
: of the usual matrix operations (whatever they may be). You can learn a
: lot of linear algebra by specializing all that nxn stuff down to 2x2
: matrices, and trying to understand how the general operations work in
: very specific instances.

: Enjoy!

I think you need to elaborate a bit more. After all, representing a
complex by a 2x2 matrix uses twice as much space as it needs to;
moreover, the normal matrix multiplication it inherits would NOT
correspond to complex multiplication; I fail to see ANY advantage to
doing this. Linear operators acting on complex numbers are nicely
represented with 2x2 matrices, but not the complex numbers themselves.

--
*------------------------------------------------------------------*
* Bill Stockwell         | "The President will keep those           *
* Computing Science      |  promises he INTENDED to keep"           *
* U. of Central Oklahoma |           -- George Stephanopoulos       *
*------------------------------------------------------------------*
Is there an algorithm for calculating the square of a number that is
fairly quick and doesn't involve much multiplication? I'm trying to
implement a square routine on an 8-bit processor that only does simple
addition and subtraction.

Dave
I need to solve many sets of algebraic equations. Each one consists of
five equations in five unknowns. Three of the equations are of degree
four, the other two of degree two. The three equations of degree four,
however, are fixed; that is, the only difference between two different
sets of equations is that the two degree-two equations are different.
Can this help to solve the systems, assuming I can do a lot of
pre-processing on the three equations which are always the same? Also,
I am interested only in real solutions whose absolute value is bounded
by 1.

Many thanks,

-Danny Keren.
In article <58k2o6$dvs@canyon.sr.hp.com>, dbrody@sr.hp.com (Dave Brody) wrote:
>Is there an algorithm for calculating the square of a number that
>is fairly quick and doesn't involve much multiplication. I'm trying
>to implement a square routine on an 8-bit processor that only does
>simple addition and subtraction.

If you only need to square 8-bit numbers, a look-up table won't be very
big.
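Besides the look-up table, there is a classic add-only option: n^2 is the sum of the first n odd numbers, since (k+1)^2 - k^2 = 2k + 1. Sketched here in Python for clarity, though the same loop is two registers and two adds on an 8-bit CPU.

```python
# Squaring with addition only: n^2 = 1 + 3 + 5 + ... + (2n - 1).
def square(n):
    s, odd = 0, 1
    for _ in range(n):
        s += odd     # accumulate the next odd number
        odd += 2     # step to the following odd number
    return s

print(square(13))   # 169
```

The trade-off versus the table is O(n) additions instead of O(1) lookups, but zero bytes of storage.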
Hi,

Can anybody point me to refs on the Krylov integrators? Thanks.

Stas
SCI.PHYS removed from newsgroups...

On 10 Dec 1996 16:16:51 GMT, Bill Stockwell wrote:
>Henry Baker (hbaker@netcom.com) wrote:
>: In article <57nf9l$12q@mathserv.mps.ohio-state.edu>,
>: mcclure@math.ohio-state.edu (Mark McClure) wrote:
>
>: > In article, M. TIBOUCHI wrote:
>: > >For more flexibility, you can define complexes as a 2x2 matrix.
> [...]
>I think you need to elaborate a bit more. After all, representing a
>complex by a 2x2 matrix uses twice as much space as it needs to;
>moreover, the normal matrix multiplication it inherits would NOT
>correspond to complex multiplication; I fail to see ANY advantage to
>doing this. Linear operators acting on complex numbers are nicely
>represented with 2x2 matrices, but not the complex numbers themselves.

Perhaps if M. Tibouchi had used "encode" instead of "define", as in

   z = x + iy  -->  (  x  y )
                    ( -y  x )

Then multiplication and/or addition properties are preserved, right?
Not sure what advantages... except to note the isomorphism...

Robert
(FEYNMAN is just whimsical hubris)
a) What are they?
b) How can I check a number x to see if it is a triangular square
   number?

Thanks - all help is greatly appreciated!

Tim
tcs@naturally.clara.net
In article <58k2o6$dvs@canyon.sr.hp.com>, dbrody@sr.hp.com (Dave Brody) writes:
|> Is there an algorithm for calculating the square of a number that
|> is fairly quick and doesn't involve much multiplication. I'm trying
|> to implement a square routine on an 8-bit processor that only does
|> simple addition and subtraction.
|>
|> Dave

Dave,
did you know that on some RISC processors floating-point multiplication
takes one cycle less than FP addition, due to the fact that addition (or
subtraction) takes a precycle to scale the two numbers to a compatible
exponent?
W.

--
-----------------------------------------------------
Dr. Wolfgang M. Hartmann     SAS Institute Inc.
saswmh@unx.sas.com           SAS Campus Drive R5228
(919) 677-8000 x7612         Cary, NC 27513
-----------------------------------------------------
In article <58k2dj$ntl@frazier.backbone.ou.edu> ws@aix1.ucok.edu (Bill Stockwell) writes:
 > : One can 'conceive' of a complex as a 2x2 matrix, and have it inherit all
 > : of the usual matrix operations (whatever they may be). ...
 > I think you need to elaborate a bit more. After all, representing a
 > complex by a 2x2 matrix uses twice as much space as it needs to;
 > moreover, the normal matrix multiplication it inherits would NOT
 > correspond to complex multiplication; I fail to see ANY advantage to
 > doing this.

Represent a + bi as

   [ a  -b ]
   [ b   a ]

and show in what way matrix multiplication does NOT correspond to
complex multiplication.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
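The representation in the post above is easy to verify numerically: map a + bi to the matrix [[a, -b], [b, a]] and check that the matrix product of two such matrices equals the matrix of the complex product. A small sketch (helper name `to_mat` is mine):

```python
# Check the ring isomorphism  a + bi  <->  [[a, -b], [b, a]].
import numpy as np

def to_mat(z):
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 2 + 3j, -1 + 4j
lhs = to_mat(z) @ to_mat(w)   # matrix product of the encodings
rhs = to_mat(z * w)           # encoding of the complex product
print(np.allclose(lhs, rhs))
```

The same check passes for addition, and to_mat(1j) is exactly the 90-degree rotation matrix, which is the geometric content of the isomorphism.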
In article <...>, saswmh@pascal.unx.sas.com (Wolfgang Hartmann) writes:
|> did you know that on some RISC processors floating point
|> multiplication takes one cycle less than FP addition, due

(Sorry, I meant one cycle *more*!)

|> to the fact that addition (or subtraction) takes a precycle
|> to scale the two numbers to a compatible exponent.
|> W.

--
-----------------------------------------------------
Dr. Wolfgang M. Hartmann     SAS Institute Inc.
saswmh@unx.sas.com           SAS Campus Drive R5228
(919) 677-8000 x7612         Cary, NC 27513
-----------------------------------------------------
In article <...>, saswmh@pascal.unx.sas.com (Wolfgang Hartmann) writes:
|> did you know that on some RISC processors floating point
|> multiplication takes one cycle less than FP addition, due

Sorry, I'm not in good shape today: mult may be faster than add, since
add needs scaling

|> to the fact that addition (or subtraction) takes a precycle
|> to scale the two numbers to a compatible exponent.
|> W.

--
-----------------------------------------------------
Dr. Wolfgang M. Hartmann     SAS Institute Inc.
saswmh@unx.sas.com           SAS Campus Drive R5228
(919) 677-8000 x7612         Cary, NC 27513
-----------------------------------------------------
I hope your answer was "NO!" The angles will be equal only for
equilateral triangles. A counter-example to the claim that the angles
are equal for any triangle: when one side is reduced to a very small
length, the angle opposite it (whether at the vertex or at G) is also
reduced to a very small value.

On Sun, 08 Dec 1996 20:00:27 -0500 in sci.math.num-analysis,
Regis Mesnier (swann2@earthlink.net) wrote:
> Dear net users,
> Having been recently asked by one of my students if (considering any
> triangle ABC and G the center of gravity of that triangle) the angles
> AGB, BGC, and CGA were all equal (to 120 degrees, of course), I came
> up with an answer, but could not find the formal proof for it. If any
> of you could indicate where I'd be able to find it, or even show it
> here, I would sincerely appreciate it. Thanks in advance.
> Regis Mesnier, e-mail: swann2@earthlink.net

--
Prof. Ronald J. Hartranft         http://www.Lehigh.edu/~rjh2/rjh2.html
Dept. of Mech. Engr. & Mechanics
Lehigh University                 Phone: 610-758-4109
19 Memorial Drive West            Email: rjh2@Lehigh.edu
Bethlehem, Penn. 18015-3085
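The counter-example above can be checked with a few lines of arithmetic: take a very "flat" scalene triangle, compute the centroid, and measure the three angles at it. The triangle below is my own made-up example.

```python
# Numerical check: for a scalene triangle the angles AGB, BGC, CGA at
# the centroid G sum to 360 degrees but are generally far from 120 each.
import math

def angle_at(q, p, r):
    # unsigned angle p-q-r at the vertex q
    v1 = (p[0] - q[0], p[1] - q[1])
    v2 = (r[0] - q[0], r[1] - q[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

A, B, C = (0.0, 0.0), (4.0, 0.0), (0.5, 0.3)   # nearly degenerate triangle
G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
angles = [angle_at(G, A, B), angle_at(G, B, C), angle_at(G, C, A)]
print(angles)   # sums to 360, but far from (120, 120, 120)
```

As the side AC shrinks relative to AB, the angle subtended at G by the short side collapses toward zero, exactly as the limiting argument in the post says.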
Optech Solutions is proud to announce its super-efficient optimization
software. Details can be found at:

   http://www.wbm.ca/users/optimize/

Thank you. We look forward to serving you. Even Jeffrey J. Leaderer, who
insists on flaming this simple announcement and taking it off the
newsgroup... thus potentially depriving many of a fine optimization
product.

JP
--
Dr. Jim Pulfer, President
Optech Solutions
Box 123, Delisle, SK S0L 0P0, Canada
E-mail: optimize@eagle.wbm.ca
http://www.wbm.ca/users/optimize/
In article <58jpcg$1krd@rs18.hrz.th-darmstadt.de>,
spellucci@mathematik.th-darmstadt.de (Peter Spellucci) wrote:
> In article <...>, j.xiao@mailbox.uq.oz.au (Jinhong Xiao) writes:
> |> Does anyone know where I can find the C source code for the
> |> Non-Negative Least Squares problem?
> |>
> |> Many thanks in advance
> |> Jim
> have a look at netlib/clapack/dgelss

Thanks to those people who provided their help and suggestions.
Although I could not find C source code for NNLS directly, I did find
it helpful to translate the Fortran to C following these suggestions.
Now the problem is solved.

Thanks again,
Jim
http://www.geocities.com/SiliconValley/Park/1879
n8tm@aol.com writes:
>That's an interesting opinion. Personally, I'd like to see more people
>agree with it. I've had occasion to replace IMSL functions when the
>licensing terms became too onerous in the multiple-site world.
>Tim

I haven't seen the original postings on this. I tried to get LAPACK++
working on our DEC Alphas here. It failed a couple of tests. I was
disappointed. It looks well-designed, though, and I might borrow some
of their ideas.

lf.
(Please display with monospaced font)

All triangular square numbers are given by

   z_1 = 1,   z_2 = 36,   z_n = 34 z_{n-1} - z_{n-2} + 2.

Alternatively,

         (3 + 2*sqrt(2))^(2n) + (3 - 2*sqrt(2))^(2n) - 2
   z_n = ------------------------------------------------
                              32

e.g., 1, 36, 1225, 41616, 1413721, ...

Solution: Euler, 1732-33.
Proof of completeness: Roberts, 1879.

See Martin Gardner, Scientific American, July 1974, for discussion.
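The recurrence is easy to run, and it also answers the "how do I check a number" question from earlier in the thread: m is square iff isqrt(m)^2 = m, and m is triangular iff 8m + 1 is a perfect square (since m = k(k+1)/2 gives 8m + 1 = (2k+1)^2). A short sketch:

```python
# Generate triangular squares from z_n = 34 z_{n-1} - z_{n-2} + 2
# and verify each term really is both triangular and square.
import math

def is_square(m):
    r = math.isqrt(m)
    return r * r == m

def is_triangular(m):
    return is_square(8 * m + 1)   # m = k(k+1)/2  iff  8m+1 is a square

z = [1, 36]
for _ in range(4):
    z.append(34 * z[-1] - z[-2] + 2)

print(z)   # [1, 36, 1225, 41616, 1413721, 48024900]
```

The two predicates together give a constant-time membership test for any candidate m, with no need to generate the sequence up to m.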
Hi there!

Can anybody give me the titles of CD-ROMs featuring math routines for
Object/Borland Pascal (preferably those WITH their source code) and/or
send me any interesting math web site? I personally work with Borland
Delphi 2.0 and am looking for almost any math unit or routine available
for numerical analysis, number theory, and especially infinite-precision
(decimal) operations. Of course, sites with similar topics are also
warmly welcome. I plan to open a math site with links to those sites in
the near future.

Please add - if available - a short description to each CD-ROM title or
web site. And those reading the various German newsgroups I posted this
message to, please forgive me for sending it in English; I'm sure the
German readers all have a pretty good command of the English language.

Please send your reply via email ONLY (as I do not regularly visit all
these newsgroups, and replies always tend to get lost miraculously). My
email address is: landauf@adis.at

THANK YOU VERY MUCH!
On 10 Dec 1996 02:18:03 GMT, oliver@oak.math.ucla.edu (Mike Oliver) wrote:
>In article <32A5DCE9.1DE3@grc.varian.com> mirko.vukovic@grc.varian.com writes:
>
>>There is a book on the history of Pi by Petr Bekkman (or some spelling
>>like that). Check it out.
>
>Yes, do! But don't take everything at quite face value.
>
>It's a very enjoyable and informative book, but has some minor
>"crankish" aspects; I can't remember the details right now.
>
>By the way his surname was Beckman or possibly Beckmann.

The reference:

A HISTORY OF PI
Petr Beckmann
Electrical Engineering Department, University of Colorado
ST MARTIN'S PRESS, New York
Copyright (c) 1971 by THE GOLEM PRESS

All rights reserved. For information, write: St. Martin's Press, Inc.,
175 Fifth Ave., New York, N.Y. 10010.
Manufactured in the United States of America
Library of Congress Catalog Card Number: 74-32539

First edition preface signed: Boulder, Colorado, August 1970
Second edition preface signed: Boulder, Colorado, May 1971
Third edition preface signed: Boulder, Colorado, Christmas 1974
Hello,

Does anybody have a solution or reference to the following problem?

Let G be a symmetric and positive definite matrix, block partitioned as

   [ G_11  G_12  G_13  ...  G_1n ]
   [ G_21  G_22  ...        G_2n ]
   [ ...                         ]
   [ G_n1  G_n2  ...        G_nn ]

where all blocks are square and equally sized, and G_ii is positive
semi-definite for each i (which might be obvious?).

The problem is to minimize

   Q(a) = (\sum_{i=1}^n a_i G_ii)^{-1}
          * (\sum_{i,j} a_i a_j G_ij)
          * (\sum_{i=1}^n a_i G_ii)^{-1}

over all vectors a = {a_i}, i = 1,...,n, satisfying a_1 + ... + a_n = 1
and a_i >= 0 for each i. The minimization should be done in the sense of
"definiteness", i.e. if a* is the optimal vector and a is any other
vector, then Q(a) - Q(a*) is positive semi-definite.

The middle part of the expression can be viewed as a quadratic form in
the matrix blocks, while the outer parts, which are inverted, are linear
combinations of the diagonal blocks. The questions are whether a vector
a* that is optimal in the sense above exists, and, if so, whether there
is an algorithm to compute it.

Best wishes,
Tobias Ryden
--
Tobias Rydén                       E-mail: tobias@maths.lth.se
Dept. of Mathematical Statistics   Tel: int+46-46 222 4778
Lund University                    Fax: int+46-46 222 4623
Box 118, S-221 00 Lund, Sweden     WWW: www.maths.lth.se/matstat
Dear Netters,

I have a problem solving a partial differential equation using the
Galerkin method. At the end of the calculation, I obtain a system of
differential equations that I am unable to solve. What can I do?

I thank you in advance.

Delphine Wolfersberger.
In article <...>, Dik T. Winter, dik@cwi.nl, writes:
>In article <58k2dj$ntl@frazier.backbone.ou.edu> ws@aix1.ucok.edu (Bill Stockwell) writes:
> > I think you need to elaborate a bit more. [...] I fail to see ANY
> > advantage to doing this.
>
>Represent a + bi as
>   [ a  -b ]
>   [ b   a ]
>and show in what way matrix multiplication does NOT correspond to
>complex multiplication.

It makes fine sense to me. Multiplication by i corresponds to a rotation
of 90 degrees in the complex plane. Multiplication by a complex number
corresponds to a rotation and a scaling in the complex plane. A complex
number can be written r * exp(i*theta). Similarly, the matrix

   | a  -b |
   | b   a |

can be written as

   | cos(theta)  -sin(theta) |
   | sin(theta)   cos(theta) |  *  R

where R = sqrt(a^2 + b^2) = sqrt(determinant) and theta = arctan(b/a).
So R scales, and the matrix rotates.

Mike Yukish
Applied Research Lab
may106@psu.edu
http://elvis.arl.psu.edu/~may106/
thomas delbert wilkinson wrote:
>
> > I'm looking for a routine that can create a cubic spline fit from an
> > arbitrary set of points in 3 dimensions, represented by $(x,y,z,v)_i$
> > (preferably in a weighted least-squares sense).
> >
> > For 2 dimensions (i.e. for surfaces with points $(x,y,v)_i$) these
> > routines already exist. For example, such a routine is given by NAG's
> > E02DAF.
>
> Is it possible that you can call the 2-D function to calculate $(x,y,v)$
> and call it again for $(z,0,v)$? Mathematically, it makes sense because
> you are dealing with linearly independent functions, i.e. the value of
> the spline for $(z,0,v)$ should have ZERO effect on the values for
> $(x,y,v)$.
>
> The only problem I can see with using this idea is if the functions
> require that the known values of $(x,y,v)$ be stored in a
> two-dimensional array instead of three one-dimensional arrays.
>
> A better idea, if there is code to produce a one-dimensional spline
> $(x,v)$, is to call it three times, for $(x,v)$, $(y,v)$, and $(z,v)$.
> This is also valid because x, y, and z are linearly independent, and it
> saves work for the computer, because calculating $(z,0,v)$ wastes
> effort computing a spline for $(0,v)$.
>
> > Secondly, has anybody experience with these kinds of representations?
>
> I've been toying with it, but I don't have any code that works
> completely, because I have been trying to code a spline function by
> myself.
>
> --
> _____________________________________________________________________
> thomas delbert wilkinson  038 henday lister hall  university of alberta
> If god were perfect, why did He create discontinuous functions?
> http://ugweb.cs.ualberta.ca/~wilkinso/

You may want to try using smoothing splines. The free version of
O-Matrix has a subroutine that will fit smoothing splines of any order
in any dimension (search for "smoothing splines" in its help index).
To obtain a copy of the free version of O-Matrix see:

   http://world.std.com/~harmonic

The method they use is based on the article "Surface fitting with
scattered noisy data on Euclidean d-space and on the sphere", by
G. Wahba, Rocky Mountain Journal of Mathematics, Volume 14, Number 1,
1984.