shilkrot@engin.umich.edu (Leonid Evguenievich Shilkrot) writes:
>Does anybody know where to find an implementation of red-black trees
>in FORTRAN
>Thanks a lot.
>Leo.
>--
>Leonid Shilkrot                shilkrot@engin.umich.edu
>Dept. of Materials Science     (313)213-0807(h)
>University of Michigan         (313)647-2780(w)
>Ann Arbor, MI 48109-2136       (313)763-4788(fax)

You'll find them in "Introduction to Fortran 90/95, Algorithms, and
Structured Programming", by R. Vowels.
Hi All,

I am interested in studying the stability of FDTD methods of the form

   U_n+1 = C U_n

where U_n is an m x 1 vector and C is an m x m matrix. The usual
textbook definition of stability is determined by the spectral radius of
C (rho(C) < 1 guarantees stability when C is real and symmetric).
However, in my case C is real but not symmetric, and several of its
eigenvalues are identical and lie on the unit circle.

Is there any condition, based on the spectral radius of C or otherwise,
that I can use to decide the stability in the time domain of an FDTD
method with this form of C?

All comments and pointers to references will be greatly appreciated.

Thanks,
Nana
-- 
== Nana S. Banerjee ==========================(607)770-4979 (H)====
== br00037@binghamton.edu ====================(607)777-2889 (Fax)==
Jeremy Michael May wrote:
> 
> AngelEyes (bluhme@post3.tele.dk) wrote:
> : geof wrote:
> : >
> : > ILLEGAL SCAM!!
> : >
> : > YOU WILL NEVER SEE A DOLLAR OF YOUR MONEY AGAIN!!!
> : >
> : Even tho(deleted)
> --
> 
> Jeremy Michael May   |----------------------------|  Whittington 313
> Post Office Box 5047 |  *** HAVE A NICE DAY ***   |  (601)-925-3074
> Clinton MS 39058     |____________________________|  Home (601)-947-7980

Hey Jeremy,

Do yourself (and everyone else) a favor. Go to the bookstore and buy a
dictionary. After all, you only make yourself look like a complete fuck
up with that sort of grammar. If you are really in college, as your
address implies, it is obvious as hell that you are wasting your money
AND your time.

Incidentally, everything in the post that you replied to is CORRECT, you
whiny little bastard.
psalzman@landau.ucdavis.edu (I hate grading almost as much as taking in
class exams) writes:

> Like I said, my knowledge of structures is sketchy. Is what I just said
> approximately correct? Is there a better way of doing it in ANSI C?

You are correct, and it's the best way to do it. The only "problem" that
might occur is lack of memory, in which case you might need to implement
some sort of dynamic stack or linked list; but in general you are
correct.

- Petri
-
From the ice-age to the dole-age there is but one concern
And I have just discovered
Some girls are bigger than others
Some girls' mothers are bigger than other girls' mothers
> (601)-947-7980
> 
> Hey Jeremy,
> 
> Do yourself (and everyone else) a favor. Go to the bookstore and buy a
> dictionary. After all, you only make yourself look like a complete fuck
> up with that sort of grammar. If you are really in college, as your
> address implies, it is obvious as hell that you are wasting your money
> AND you time.
> 
> Incidentally, everthing in the post that you replied to is CORRECT, you
> whiny little bastard.

Who cares what his grammar is like? It's not important. If you think it
is, you're just a little small-minded and petty.

DrKram
**************
*well i never*
**************
OMR@TIGGER.JVNC.NET wrote:
> 
> Hi, everybody,
> 
> Does anybody know a decent BLAS package that supports such
> platforms as VAX VMS, Alpha VMS, Digital UNIX, Sun Solaris,
> HP-UX, IBM AIX and NT?
> 
> Any advice, comments or suggestions will be appreciated.

On HP-UX, you will find a library for optimized BLAS in
/opt/fortran/lib/libblas.a

Bo
-- 
  ^    Bo Thide'---------------Director of Science----------------SM5DFW
 |I|   IRFU Swedish Institute of Space Physics, S-755 91 Uppsala, Sweden
 |R|   Office Phone: (+46) 18-30 36 71   Office Fax: (+46) 18-40 31 00
/|F|\  Home Phone:   (+46) 18-52 79 11   Home Fax:   (+46) 18-55 41 84
~~U~~  mailto:bt@irfu.se   WWW: http://www.wavegroup.irfu.se/~bt
In article <56re6m$s7m@villagenet.com>, AlanLivingston@acm.org (Alan J.
Livingston) writes:
|> Hi all,
|> 
|> Can anyone point me to some code that implements the Gauss-Newton
|> method for non-linear regression?
|> 
|> Thanks,
|> 
|> Alan

nlscon in elib/codelib does the job.
telnet elib.zib-berlin.de, login as elib (no password), choose the
library index and then codelib. With "preftp" you can download nlscon
into elib's pub directory and afterwards fetch it from there by
anonymous ftp.

hope this helps
peter
In article <577g5f$ih2@Masala.CC.UH.EDU>, mece2gn@jeston.uh.edu
(Gopinath Warrier) writes:
|> Hello,
|> 
|> I need to integrate a nonlinear ODE of the form
|> 
|>   f1(x,y)*(y'') + f2(x,y)*(y')^2 + f3(x,y)*(y') + f4(x,y) = g ---- (1)
|> 
|> where f1,f2,f3,f4 are polynomials. At x = 0, y' = 0. To find y(0), I
|> substitute x = 0 in (1), but it so turns out that f1=f2=f3=0 and (1)
|> becomes a nonlinear equation which can be solved for y(0).
|> 
|> Thus it turns out that the condition y'(0) = 0 is not needed to find
|> y(0), so does this mean that the solution of the ODE is independent
|> of y'(0)?
|> 
|> I have tried to solve this ODE using an adaptive Runge-Kutta-Fehlberg
|> scheme and a fifth-order Runge-Kutta (Lawson form) scheme, but with
|> no success.
|> 
|> Are there other ways of solving this problem? Any help in this matter
|> will be greatly appreciated.
|> 
|> Gopinath Warrier
|> Univ. of Houston

since, as you write, f1=f2=f3=0 at x=0, you have a singular point and
you cannot start the integration directly from there. the usual approach
is to try a series expansion (known in the ODE field as Frobenius'
technique) at the singular point (hopefully (d/dx)f1(x,y(x)) != 0), use
the series (it has the form sum a_k*x**(alpha+k), or involves a log) on
a small interval [0,x0], and initialize the numerical integration from
x0 > 0 afterwards.

hope this helps
peter
In article <32983CDA.746F@irfu.se>, Bo Thide' <bt@irfu.se> writes:
|> OMR@TIGGER.JVNC.NET wrote:
|> > 
|> > Does anybody know a decent BLAS package that supports such
|> > platforms as VAX VMS, Alpha VMS, Digital UNIX, Sun Solaris,
|> > HP-UX, IBM AIX and NT?
|> > 
|> > Any advice, comments or suggestions will be appreciated.
|> 
|> On HP-UX, you will find a library for optimized BLAS in
|> /opt/fortran/lib/libblas.a

NAG runs on all those platforms (and more) and includes a complete set
of BLAS. But NAG recommends implementors to use a vendor's BLAS (when
appropriate), as the NAG BLAS code is tuned for general efficiency
rather than for a particular machine.

It isn't clear why you are asking for a specific package.

Nick Maclaren,
University of Cambridge Computer Laboratory,
New Museums Site, Pembroke Street, Cambridge CB2 3QG, England.
Email: nmm1@cam.ac.uk
Tel.: +44 1223 334761    Fax: +44 1223 334679
In article <3292F7D4.446B9B3D@minerva.inesc.pt>, Joao Bastos writes:
|> Hi !
|> 
|> I am developing an application that deals with line extraction
|> from raster images and I have the following problem:
|> 
|> - for example, given the following sequence of adjacent pixels on an
|>   XY referential:
|> 
|>   [ASCII sketch of a curved band of adjacent pixels in the XY plane]
|> 
|> I need to get the natural cubic spline (C2) that best
|> approximates these pixels.
|> 
|> I would appreciate very much your advice and if such an
|> algorithm
snip snip

from your data I conclude that you need to approximate these data by a
parametric spline, i.e. x=x(t) and y=y(t) approximated by two splines
s1(t) and s2(t) in a least-squares sense, with t artificially
parametrized by e.g. the euclidean distance.
check de Boor, A Practical Guide to Splines. software is in netlib.
you may also check the directory dierckx in netlib.

hope this helps
peter
complex Newton is equivalent to real Newton in 2d, using Re z and Im z
as variables and f1 = Re f and f2 = Im f as functions. Newton's
geometric interpretation in 2d is exactly as in 1d: (x,y,f1(x,y)) is a
surface in 3d, as is (x,y,f2(x,y)). at (x0,y0) these surfaces possess
tangent planes (due to the assumed differentiability of f1 and f2, which
follows from the analyticity of f). given invertibility of the Jacobian
of (f1,f2), these tangent planes intersect the plane z=0 (in 3d) in two
lines, which in turn intersect in a point (x1,y1). this of course is
also the point (x1,y1,z=0) on the line of intersection of the two
tangent planes. this is the next point. and so on.

hope this helps
peter
In article <329621AF.A57@asu.edu>, "Hans D. Mittelmann"
<mittelmann@asu.edu> wrote:

> Jaroslav Stark wrote:
> > 
> > Can anyone point me to efficient ways of computing the derivative of
> > det(A) for non-invertible A, or alternatively an efficient way of
> > calculating the matrix of co-factors of A.
> > 
> > Thus in general we have
> > 
> >   D det(A) = trace(B.DA)
> > 
> > where B is the transpose of the matrix of co-factors of A. When A is
> > invertible, B is just det(A).A^-1, but what about the general case?
> > 
> > Answers by e-mail would be appreciated.
> > 
> > J. Stark
>
> Hi,
> I really do not see which problem you are having. The matrix B need
> not be computed using the determinant of A at all. The elements of B
> are determinants of (n-1)x(n-1) submatrices of A and there is
> absolutely no problem in evaluating them. Just to make it complete,
> the element b_ij of B is (-1)^(i+j) times the determinant of the
> matrix which is obtained from A by deleting row j and column i.
> Or am I missing something?
> 
> Hope that helps.
> -- 
> Hans D. Mittelmann            http://plato.la.asu.edu/
> Arizona State University      Phone: (602) 965-6595
> Department of Mathematics     Fax:   (602) 965-0461
> Tempe, AZ 85287-1804          email: mittelmann@asu.edu

Sure, you can compute B this way, but it's extremely slow. Assuming you
compute the (n-1)x(n-1) determinants using LU decomposition, that's
O(n^3) per determinant, and there are O(n^2) of them, so overall you
have an O(n^5) calculation, which is just too slow to be realistic. By
comparison, if A is invertible, computing B as det(A).A^{-1} is just an
O(n^3) computation. Thus my question is effectively whether B, or
D det(A), can be computed in O(n^3) even when A is not invertible.

cheers

Jaroslav Stark
-- 
Dr. Jaroslav Stark,
Centre for Nonlinear Dynamics and its Applications
University College London, Gower Street, WC1E 6BT, UK
Tel: +44-171-391-1368    Fax: +44-171-380-0986
E-Mail j.stark@ucl.ac.uk
In <32994EDC.6C83@mail.idt.net> Joe Krolikowski writes:
>
>Jeremy Michael May wrote:
>> 
>> AngelEyes (bluhme@post3.tele.dk) wrote:
>> : geof wrote:
>> : >
>> : > ILLEGAL SCAM!!
>> : >
>> : > YOU WILL NEVER SEE A DOLLAR OF YOUR MONEY AGAIN!!!
>> : >
>> : Even tho(deleted)
>> --
>> 
>> Jeremy Michael May   |----------------------------|  Whittington 313
>> Post Office Box 5047 |  *** HAVE A NICE DAY ***   |  (601)-925-3074
>> Clinton MS 39058     |____________________________|  Home (601)-947-7980
>
>Hey Jeremy,
>
>Do yourself (and everyone else) a favor. Go to the bookstore and buy a
>dictionary. After all, you only make yourself look like a complete fuck
>up with that sort of grammar. If you are really in college, as your
>address implies, it is obvious as hell that you are wasting your money
>AND your time.
>
>Incidentally, everything in the post that you replied to is CORRECT,
>you whiny little bastard.

Jeremy is correct despite his grammar, you mendacious son of a bitch.
In article, medtib@club-internet.fr (M. TIBOUCHI) wrote:

> In article (Dans l'article) <57851i$nlh@mark.ucdavis.edu>,
> psalzman@landau.ucdavis.edu (I hate grading almost as much as taking
> in class exams) wrote (écrivait) :
> 
> > Dear All,
> > 
> > I would like some advice on how to handle complex numbers in ANSI C.
> > My knowledge of C stops at structs, but from what little I know
> > about structures, it seems like that would be the most clear way of
> > handling complex numbers.
>
> The best way to deal with complex numbers is to work in C++: you don't
> use structs but classes, which is much more powerful. You can then
> define operators and manage complex numbers like ordinary numbers. For
> example, you can initialize two complex numbers, then add, subtract,
> multiply, divide them, raise them to a power, and so on.

Having wrestled with this problem in C and C++, I would agree most
whole-heartedly. Use C++ to construct your complex numbers. You really
don't need to know much beyond C to do it, i.e. you don't have to learn
all of C++. You'll be happy you did it that way, believe me. It'll also
be more fun.

Lou Pecora
code 6343
Naval Research Lab
Washington DC 20375 USA
== My views are not those of the U.S. Navy. ==
------------------------------------------------------------
Check out the 4th Experimental Chaos Conference Home Page:
http://natasha.umsl.edu/Exp_Chaos4/
------------------------------------------------------------
In article <3297E6A6.253B@asu.edu>, "Hans D. Mittelmann"
<mittelmann@asu.edu> wrote:

>syzygy@vnet.net wrote:
>> 
>> I suspect this is very elementary, but I need some software that will
>> help with curve fitting:
>> 
>> If I have N sets of (x,y) data pairs I can attempt a best fit curve
>> of any order up to (N-1) to it:
>> 
>>   y = a0 + a1*x
>>   y = b0 + b1*x + b2*x^2
>>   ....
>>   y = q0 + q1*x + q2*x^2 + .... + q(N-1)*x^(N-1)
>> 
>> Does anyone know of a software package that would do this? I don't
>> want a routine that will fit the EXACT curve to the data; I want one
>> that would present me with all the possible best fit (least squares)
>> power curves, from a straight line up to the (N-1) fit, with a
>> correlation coefficient.
>> 
>> Please reply directly to me at syzygy@vnet.net if you can help me.
>> Thanks!
>> 
>> - Bill
>> 
>> ======================================================================
>> Fruit flies like an apple, time flies like an arrow
>> ......................................................................
>> William Schwittek -- syzygy@vnet.net
>> http://www.vnet.net/users/syzygy/photo/
>> ======================================================================
>
>Hi,
>the package netlib/odrpack will help you. The link is in the
>least-squares section of http://plato.la.asu.edu/guide.thtml
>There is even a graphical interface if you are interested.
>
>Hope that helps
>
>Hans Mittelmann

The software package 'TableCurve' by Jandel Scientific is what you want.
Most scientific software catalogs have it.
-- 
Jeff Brush
In article <57ajbc$o4k@rosebud.sdsc.edu>, u13839@pauline.sdsc.edu (Jose
Unpingco) wrote:

>hi
>
>I'm reading Fletcher's 2nd edition of Practical Methods of
>Optimization, and on page 21, in the middle of the page, he states
>
>"...the exact minimizing value of alpha is required and cannot be
>implemented in practice in a finite number of operations. (Essentially
>the nonlinear equation df/dalpha=0 must be solved.)"
>
>He's referring to the line search subproblem
>
>  Find alpha^k to minimize f(x^k + alpha*s^k) w/r to alpha.
>
>The df/dalpha=0 is the directional derivative of f in the direction
>specified by s^k, which is a unit direction. Thus,
>
>  df/dalpha = |grad f| * cos(theta) = 0
>
>I don't understand why this is a non-linear equation. I thought that
>if an equation was linear in the derivatives of f, then it was a
>linear differential equation. df/dalpha = 0 looks pretty linear to me.
>
>I'm confused.
>
>thanks.

It is not a differential equation at all. You are looking for the alpha
that, when plugged into the function g(alpha), gives zero, where
g(alpha) is the function df/dalpha. For a general f, g is a nonlinear
function of alpha, so g(alpha) = 0 is a nonlinear algebraic equation in
the unknown alpha. If you are not at an extremum (i.e. your desired
minimum), then of course df/dalpha will be non-zero.

You're welcome.
Peter Spellucci (spellucci@mathematik.th-darmstadt.de) writes:

> In article <577g5f$ih2@Masala.CC.UH.EDU>, mece2gn@jeston.uh.edu
> (Gopinath Warrier) writes:
> |> Hello,
> |> 
> |> I need to integrate a nonlinear ODE of the form
> |> 
> |>   f1(x,y)*(y'') + f2(x,y)*(y')^2 + f3(x,y)*(y') + f4(x,y) = g ---- (1)
> |> 
> |> where f1,f2,f3,f4 are polynomials. At x = 0, y' = 0. To find y(0),
> |> I substitute x = 0 in (1), but it so turns out that f1=f2=f3=0 and
> |> (1) becomes a nonlinear equation which can be solved for y(0).
> |> 
> |> Thus it turns out that the condition y'(0) = 0 is not needed to
> |> find y(0), so does this mean that the solution of the ODE is
> |> independent of y'(0)?
> |> 
> since, as you write, f1=f2=f3=0 at x=0 you have a singular point and
> you cannot start the integration directly from there. the usual
> approach is to try a series expansion (known in the ODE-field as
> Frobenius' technique) at the singular point

There are various types of singularities (nodes, saddles, etc.): the
best bet, as I indicated already, is to directly start finding solutions
in the vicinity of x=0 with various (arbitrary) values of y'(a) (with
a=0.001 and y'(a)=0.001, 0.002, etc., to meet your requirement y'(0)=0).
With a set of solutions you will get an idea of the type of singularity
at x=0.
-- 
Angel, secretary (male) of Universitas Americae (UNIAM).
http://www.ncf.carleton.ca/~bp887
root (root@trev.seismology.hu) wrote:

: On 13 Nov 1996, Michael Courtney wrote:
: > Numerical recipes has algorithms for non-power-of-two numbers of
: > points but not nonequispaced points. Nonequispaced points is tough.
:   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
: in Numerical Recipes:
: 13.6 Spectral Analysis of Unevenly Sampled Data, page 569

My apologies. This section was added in the second edition, but be aware
that it is not in the first edition.

-- 
Michael Courtney, Ph. D.
michael@amo.mit.edu
John Harper wrote:

: In article <32836B43.55A8@BBN.com>, Bill Marshall wrote:
: >Is there an errata list for Abramowitz & Stegun?
: My copy is the 9th Dover printing. Errata to that are:
  [errata list deleted]

I have found one more in the same edition: formula 25.4.45 for the
weights in Gauss-Laguerre integration is wrong (presumably it is right
in conjunction with a different normalization of Laguerre polynomials
than the one used by A & S). The correct formula is

  w_i = x_i / ( n^2 [ L_(n-1)(x_i) ]^2 )

I can't remember where I got it from; I guess I derived it using the
Christoffel-Darboux formula. The numerical values given in Table 25.9
are correct.
-- 
Peter Marksteiner                   e-mail: Peter.Marksteiner@univie.ac.at
Vienna University Computer Center   Tel: (+43 1) 406 58 22 255
Universitaetsstrasse 7, A-1010 Vienna, Austria  FAX: (+43 1) 406 58 22 170
I'm looking for an iterative methods package to solve linear problems.
I have found C and Fortran routines, but only for sparse matrices.

1) Where can I find the same packages for dense and complex matrices?
2) Are there comparisons of iterative vs. direct methods for this type
   of matrix?

cedric Dourthe
-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Cedric Dourthe   projet: CAIMAN   e-mail: cdourthe@sophia.inria.fr
CERMICS, INRIA, 2004 route des lucioles BP 93
TEL: (33) 93 65 79 04    FAX: (33) 93 65 77 40
http://www.inria.fr/cermics/personnel/Cedric.Dourthe/cdourthe-fra.html
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I am looking for an efficient algorithm to locate regions on a lattice
on which a certain quantity (scalar or vector) is constant within a
prescribed tolerance. Note that I am not interested in regions of slow
variation (which can be characterized by a small derivative), but in the
largest possible regions in which *all* points are approximately equal.

The best I can think of is to start from any point and search in
successively larger regions around it until I hit points that are too
different. But I would prefer something simpler.
-- 
-------------------------------------------------------------------------------
Konrad Hinsen                        | E-Mail: hinsen@ibs.ibs.fr
Laboratoire de Dynamique Moleculaire | Tel.: +33-4.76.88.99.28
Institut de Biologie Structurale     | Fax: +33-4.76.88.54.94
41, av. des Martyrs                  | Deutsch/Esperanto/English/
38027 Grenoble Cedex 1, France       | Nederlands/Francais
-------------------------------------------------------------------------------
Jaroslav Stark wrote:
> 
> Can anyone point me to efficient ways of computing the derivative of
> det(A) for non-invertible A, or alternatively an efficient way of
> calculating the matrix of co-factors of A.
> 
> Thus in general we have
> 
>   D det(A) = trace(B.DA)
> 
> where B is the transpose of the matrix of co-factors of A. When A is
> invertible, B is just det(A).A^-1, but what about the general case?
> 
> Answers by e-mail would be appreciated.
> 
> J. Stark
> 
> -- 
> Dr. Jaroslav Stark,
> Centre for Nonlinear Dynamics and its Applications
> University College London, Gower Street, WC1E 6BT, UK
> Tel: +44-171-391-1368    Fax: +44-171-380-0986
> E-Mail j.stark@ucl.ac.uk

Hi,
here is my theorem. Unless it can be found in the literature, I'd like
to be quoted as the source.

  D det(A) = sum(i=1,n) product(j.ne.i) lambda_j(A)

This requires one call of, say, the QR algorithm and is thus an O(n^3)
method.

Hans Mittelmann
-- 
Hans D. Mittelmann            http://plato.la.asu.edu/
Arizona State University      Phone: (602) 965-6595
Department of Mathematics     Fax:   (602) 965-0461
Tempe, AZ 85287-1804          email: mittelmann@asu.edu
Cedric Dourthe wrote:

> I'm looking for an iterative methods package to solve linear problems.
> I found C and Fortran routines, but only for sparse matrices.
> 1) Where can I find the same packages for dense and complex matrices?
> 2) Comparisons of iterative vs. direct methods for this type of matrix?

Cedric, as I'm sure you know, for direct methods you need to keep the
whole matrix in storage, whereas for iterative methods you need only
store the non-zero elements. For large problems involving sparse
matrices, we often just don't have enough RAM to store the whole matrix,
so we HAVE to use an iterative method such as conjugate gradient etc. If
you have a non-sparse matrix, chances are you would be better off using
a direct method.

--snowback
Hello !

I'm looking for a FAST code to factorize (LDL^T) a positive definite
symmetric matrix stored in the so-called skyline format. The code should
be optimized for modern workstations, which means it should use some
sort of blocking method to reduce the communication overhead. Does
anyone know where to find such a code (FORTRAN or C), or where to look
for information on the construction of such a code?

Thanks in advance.
---
Daniel Hilding
Linköping Institute of Technology
Dept. of Mech. Eng., Div. of Mechanics
S-581 83 Linköping
Phone: +46 (0)13 281712    Fax: +46 (0)13 281101
E-mail: danhi@ikp.liu.se
I'm solving the problem Ax = (lambda)Bx with dsygv. The routine
re-orders the eigensolutions in ascending order. Is there a way to trace
that re-ordering? In other words, how can I find out which degree of
freedom in the original matrices corresponds to which degree of freedom
in the eigenvector?

Thanks,
Dave
U Lange wrote:
> 
> Michael T. Vaughn (mtvaughn@neu.edu) wrote:
> 
> : slightly off this path, but what about the Bulirsch-Stoer method, as
> : advertised in "Numerical Recipes" and also in a book on numerical
> : analysis by Stoer and Bulirsch (and also introduced by other people
> : in the early 1980s).
> : 
> : The method seems attractive in principle, and I have seen it work
> : faster (by a factor of about 2 or so) than traditional R-K methods
> : on a problem. Yet I see very little comment on it. Does anyone have
> : either comments or a pointer to comments on the method??
> 
> I found that the embedded Runge-Kutta method in Numerical Recipes was
> always significantly faster than their Bulirsch-Stoer method for the
> (nonlinear) ODEs I was interested in.

This should depend on the tolerance and the problem. Shampine wrote a
paper with Lorraine Baca that does a numerical comparison between an
implementation of the Prince-Dormand (7,8) pair and a GBS-type
polynomial extrapolation code:

  L.F. Shampine and L.S. Baca, "Fixed versus variable order Runge-Kutta,"
  ACM TOMS 12 (1986), 1, pp. 1-23.

When the tolerance was moderate or looser, the RK code was faster.
However, as the tolerance tightened, the GBS code was eventually faster
(because it could go to high orders). The cross-over point depends on
the problem, but I think it is safe to say that it occurs when the
tolerance would be considered quite "stringent." Later, Shampine and I
wrote a little theoretical comparison paper

  M.E. Hosea and L.F. Shampine, "Efficiency Comparisons of Methods for
  Integrating ODEs," Computers & Mathematics with Applications 28, 6,
  1994.

that supports the same conclusion.

To really understand what is going on, forget all that and consider
this. Researchers have spent years trying to derive more and more
efficient and reliable Runge-Kutta pairs. They play with the degrees of
freedom afforded by the nonlinear equations they must solve, trying to
find the "best" solutions. It's an art and a science. So what? Well,
consider that GBS (with polynomial extrapolation) really *IS* a
Runge-Kutta method at each order. You can write its Butcher array down
in the conventional way. I even wrote a little code that takes the
extrapolation sequence (and asks whether you want smoothing) and spits
out the RK coefficients. We would be unbelievably fortunate if RK
methods derived from extrapolation turned out to be the most efficient
available at each order, and they simply aren't. ("There ain't no such
thing as a free lunch.") Now, it could be that rational extrapolation
would make all the difference, but as yet I have seen no evidence that
rational extrapolation results in the kinds of performance improvements
that would be required to catch up with modern RK pairs. Indeed, most
people think polynomial extrapolation is better.

On the other hand, because extrapolation methods are variable order,
like the Adams methods, they may be faster in general when the
tolerance is tight enough. Still, at tight tolerances I'd put my money
on a good Adams code. I think extrapolation methods are elegant and
comparatively easy to understand. I just wish they were more efficient.
-- 
Mike Hosea (mhosea@ti.com)     Texas Instruments Inc.
phone (972) 917-2958           PO Box 650311, MS 3908
fax (972) 917-7103             Dallas, TX 75265
On 24 Nov 1996 00:31:46 GMT, I hate grading almost as much as taking in
class exams (psalzman@landau.ucdavis.edu) wrote:

>I would like some advice on how to handle complex numbers in ANSI C.
>My knowledge of C stops at structs, but from what little I know about
>structures, it seems like that would be the most clear way of handling
>complex numbers.

You might try using gcc. For example, the following code

#include <stdio.h>

int main ()
{
   __complex__ double z;
   double z2;

   z = 3.0 + 4.0i;
   z2 = z * ~z;

   fprintf (stdout, "z = %f + i%f; |z|^2 = %f\n",
            __real__ z, __imag__ z, z2);
   return 0;
}

produces:

z = 3.000000 + i4.000000; |z|^2 = 25.000000

-- 
John E. Davis            Center for Space Research/AXAF Science Center
617-258-8119             MIT 37-662c, Cambridge, MA 02139
http://space.mit.edu/~davis
In article <56maqc$dlo@hecate.umd.edu>, Jason Stratos Papadopoulos
wrote:

>Hello. I've run across a problem that has me stumped. How would
>you go about finding a function "f" such that
>
>  f( 1/( 2(1+j)(1-j) ) ) = j f( 1/(1-j) ) ?
>
> ...
>Anyway, j is supposed to be close to 1, and an asymptotic series for f
>is
>
>  f(j) = 1 + (j-1)/3 - (j-1)^2/45 + (j-1)^3/189 - 23(j-1)^4/14175
>           + 263(j-1)^5/4677775 - ....
>
>Can such a problem even be solved in closed form?

I'm a little confused here: if you take j close to 1 in the top
equation, you'll find yourself evaluating f at very large arguments; in
the final line, you are evaluating f near 1. Is that really your
intent? I also feel something must be amiss here: I don't think there
are any nice nonzero functions satisfying the proposed functional
equation.

Functional equations such as this arise frequently in this newsgroup,
so if you don't mind I'll generalize your question a little (you can
keep generalizing much more). The typical question here reads "What
function f satisfies f(h(x)) = G(x, f(x)) for all x?" (where the
functions h and G are given explicitly).

The first thing to keep in mind is that _the solution is not unique_
(usually). For example, in the original poster's situation, if f is any
function satisfying the functional equation, then any scalar multiple
c.f also satisfies that relation.

The second thing to worry about is that _you need to be clear about the
domain of f_ (or more precisely, the range of x's for which the
purported functional equation is to hold). For example, there may be
many more functions with a complex domain which match the given
requirements, but fewer on the real line. And a restriction on the
domain of f will mean the functional equation holds for fewer x (giving
fewer restrictions on f).

Finally, you need to decide _what you want to assume about continuity_
(and/or differentiability).
Assume nothing and you'll get nowhere. Assume too much and you may find
no nontrivial solutions f. (That is, you must be careful not to "throw
out the baby with the dishwater," as my grandmother used to say.)

Let me clarify this last point. Given any value of x such that both x
and h(x) are (defined and) in the domain of f, the assumed functional
equation will give some information relating f(x) and f(h(x)). So let
us declare x and h(x) to be _equivalent_, and let " ~ " denote the
equivalence relation this generates on the real number line (or
whatever the domain of f is assumed to be). One can check that this
means x1 ~ x2 iff there exist m,n >= 0 s.t. h^n(x1) = h^m(x2).

Now the crucial observation is this: the given functional equation will
_only_ give some information about the relationships between the values
of f(x) among points x in a single equivalence class. This is a telling
statement in situations like the proposer's problem: since h is at
worst a two-to-one map, it is easily verified that the equivalence
class of x is at most countable, and thus there are uncountably many
equivalence classes. In most cases we have more than one solution for
the behaviour of f on each equivalence class, so we obtain uncountably
many possible solutions f in toto! Therefore, in order to make some
progress, we try to assume f is, say, continuous at x=1. I believe in
that case, however, the number of such functions drops to 1: f must be
identically zero.

Let us see what information we do glean about the behaviour of f on
each equivalence class C. Note that C is actually a directed graph,
with an edge from x to y iff h(x) = y. If the (corresponding
undirected) graph is a _tree_, then we can pretty easily determine the
values of f on all of C. Pick a point x0 to be the root of this tree;
define f(x0) to be any value you wish. Then the functional equation
forces f(h(x0)) = G(x0, f(x0)) to be a certain value; similarly, the
value of f(h^n(x0)) is forced for any n.
For any other point x in C there exist unique minimal n and m such that
h^m(x) = h^n(x0); if for example m=1 we may then deduce the value of
f(x) from the equation f(h(x)) = G(x, f(x)), which we solve as a single
equation in the single unknown "f(x)". Proceeding by induction we may
compute f(x) for those values of x corresponding to higher values of m.
(Here we're using the fact that G(x, y) is linear in y for the
proposer's problem, so that a unique solution for f(x) exists. In more
general settings, of course, the equations to be solved do not have a
unique solution for f(x), so we will find more than one possibility for
the function f: C -> R. Or there may be occasions in which f(h(x)) =
G(x, f(x)) admits no solution for f(x); in these cases we would need to
backtrack and see if a different choice for f(h(x)) would enable a
solution.)

More interesting is the case in which C contains some cycles. One can
check that this can only happen if C contains a point x for which
h^n(x) = x for some n. When this occurs we usually have an equation
which limits the possible values of f(x). For example, if x is a fixed
point of h then we must have f(x) = G(x, f(x)). Of course, once the
values of f on the points within cycles have been determined, the
values of f on the rest of C are determined as in the previous
paragraphs. So we see that the cycles under h play a special role. I
will look for some of them in the poster's specific problem. I suppose
I should reiterate that the domain of f is assumed to include all these
points, otherwise the functional equation does not give any further
information (that is, the graph C loses this cycle).

As I remarked at the beginning, I'm not sure a mistake in notation
hasn't been made, but I'll take it as is. Rather than write

  f( 1/( 2(1+j)(1-j) ) ) = j f( 1/(1-j) )

I would prefer to let x = 1/(1-j), so that this equation reads

  (*)  f( x^2/(4x-2) ) = ((x-1)/x) f(x)

that is, h(x) = x^2/(4x-2) and G(x,y) = ((x-1)/x) * y.
Now, the fixed points of h are 0 and 2/3. What do we learn here? From (*) with x=0 we have f(0) = 0*f(0), and so f(0)=0. Note that h(0)=0 and h(x)=0 implies x=0: the equivalence class of 0 is just { 0 }.

Taking the other fixed point x=2/3 gives f(2/3) = -1/2 * f(2/3), which requires f(2/3)=0, too. This time the graph C is more complicated; here is a portion of it:

...-> 29.347... -> 4+sqrt(12) -> 2 -> 2/3  (loops back)
...-> 0.508...  -----^           ^
...-> 1.349... -> 4-sqrt(12) ----|
...-> 0.794...  -----^

Well, since f(2/3)=0, we have 0 = f(h(x)) = ((x-1)/x) f(x) for both x = 4 +- sqrt(12); so f vanishes there as well. Similarly, working back over the graph, we see f(x) = 0 for all x in C.

And now we see more possibilities. The only cycle of length 2 is the one containing 1 +- sqrt(1/5) = {x1, x2}, say. Then we have f(x1) = ((x2-1)/x2) f(x2) and f(x2) = ((x1-1)/x1) f(x1), so that f(x1) = ((x2-1)/x2) * ((x1-1)/x1) * f(x1) = (-1/4) f(x1); again f(x1)=f(x2)=0. There are two cycles of length 3; on these as well we have f(2.66)=f(.818)=f(.526)= f(4.27)=f(1.21)=f(.515)=0. I think one can show that f(x)=0 for all elements in _any_ finite orbit under h, although I have not carried this out.

In this way, one obtains a large number of points at which f must vanish. Since for any x > 0.5 there are two points y with h(y)=x, both with y>0.5, we find that the number of points in the tree doubles with each lengthening to the left. So a picture of the behaviour of f begins to emerge. There seems to be a countable collection of families of graphs such that f(x)=0 for all x in the graphs; these graphs end in cycles ("on the right") but go arbitrarily far to the left, splitting in two at each stage. All the other values of x lie in equivalence classes which are trees -- roughly as above but with no terminal points. On each tree f(x) may be chosen arbitrarily at one point x0, and is then determined everywhere else. (Well, in the equivalence class of 1 we must take x0 to be "to the left of" 1.)
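The fixed points and the 2-cycle are easy to verify numerically. A minimal sketch (my own illustration, not part of the original post):

```python
# Sketch (my addition): checking the cycles of h(x) = x^2/(4x - 2) named
# above, and the product of slopes that forces f = 0 on the 2-cycle.
import math

def h(x):
    return x * x / (4 * x - 2)

# Fixed points: h(x) = x  <=>  3x^2 = 2x  <=>  x = 0 or x = 2/3.
assert abs(h(2/3) - 2/3) < 1e-12

# The 2-cycle at x = 1 +- sqrt(1/5): h swaps the two points.
x1 = 1 + math.sqrt(1/5)
x2 = 1 - math.sqrt(1/5)
assert abs(h(x1) - x2) < 1e-12 and abs(h(x2) - x1) < 1e-12

# Going around the cycle multiplies f(x1) by ((x2-1)/x2)*((x1-1)/x1),
# which is exactly -1/4, so f(x1) = -f(x1)/4 and hence f(x1) = f(x2) = 0.
factor = (x2 - 1) / x2 * (x1 - 1) / x1
print(factor)  # numerically ~ -1/4, confirming the computation in the text
```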
Ah, but you object, what does f "look like"? The answer: it's a mess. As far as I can tell, the equivalence class containing 2/3, for example, is dense in the interval (1/2, oo). Certainly the 1024 points obtained as (h^n)^(-1) (2/3) (for 0 <= n <= 10) show no sign of leaving any gap, although there is no upper bound on the magnitude of the points in the equivalence class.

If this hunch is correct, then we have some bad news: wherever f is continuous, it must be zero. (This follows since at each of the points in this equivalence class, f(x)=0.) In particular, it seems difficult to believe there can be a function f expressible as a power series near 1, as the poster suggested; this function must vanish at the points

..., .98987715, .99188524, .99592606, .99795886, 1.0020494, 1.0041074, 1.0082486, 1.0103320, ...

as well, surely, as many points in between.

Summary: chase through the trees, use continuity if you think it's appropriate -- and make sure you've expressed the problem correctly.

dave
ScaLAPACK is a collection of software for performing dense and band linear algebra computations on distributed-memory parallel computers. ScaLAPACK, version 1.4, includes routines for the solution of:

 * Dense, band, triangular, and tridiagonal linear systems of equations,
 * Condition estimation and iterative refinement for LU and Cholesky factorizations,
 * Matrix inversion,
 * Full-rank linear least squares problems,
 * Orthogonal and generalized orthogonal factorizations,
 * Orthogonal transformation routines,
 * Reductions to upper Hessenberg, bidiagonal and tridiagonal form,
 * Reduction of a symmetric-definite/Hermitian-definite generalized eigenproblem to standard form,
 * Symmetric/Hermitian eigenproblem,
 * Generalized symmetric/Hermitian eigenproblem, and
 * Nonsymmetric eigenproblem.

Most routines are available in four data types: single precision real, double precision real, single precision complex, and double precision complex. In addition, we have provided prototype software to handle the following areas:

 * Singular value decomposition,
 * Out-of-core linear solvers for LU, Cholesky, and QR,
 * HPF wrappers for a subset of ScaLAPACK routines, and
 * The matrix sign function for eigenproblems.

Our software has been written to be portable across a wide range of distributed-memory environments such as the Cray T3, IBM SP, Intel series, Thinking Machines CM-5, clusters of workstations, and any system for which PVM or MPI is available.

A draft ScaLAPACK Users' Guide and a comprehensive Installation Guide are provided, as well as test suites for the collection.

The ScaLAPACK software is or will be part of the numerical software libraries provided by the following vendors: IBM, SGI/Cray, Fujitsu, NAG, and Visual Numerics (IMSL). For more information on the availability of each of these packages and their documentation, consult the scalapack index on netlib. The URL is:

 http://www.netlib.org/scalapack/

Comments/suggestions may be sent to scalapack@cs.utk.edu.
This software was developed in collaboration between researchers at the Univ. of Tennessee, Univ. of California, Berkeley, and Oak Ridge National Lab.

ScaLAPACK is part of a larger project called the Scalable Libraries Project. The Scalable Libraries Project is made up of 4 components:

 dense matrix software (ScaLAPACK)
 large sparse eigenvalue software (PARPACK)
 sparse direct systems software (CAPSS)
 preconditioners for large sparse iterative solvers (PARPRE)

and is a collaborative effort between:

 Oak Ridge National Laboratory
 Rice University
 Univ. of Tennessee, Knoxville
 Univ. of California, Berkeley
 Univ. of California, Los Angeles
 Univ. of Illinois, Champaign-Urbana

Funding for this effort comes in part from DARPA, DOE, NSF, and CRPC.

Regards,
Jack Dongarra

**************************************************************
Jack Dongarra          dongarra@cs.utk.edu
104 Ayres Hall         423-974-8295  fax: 423-974-8296
Knoxville TN, 37996    http://www.netlib.org/utk/people/JackDongarra.html
dave becker wrote:
>
> I'm solving the problem Ax = (lambda)Bx with
> dsygv. The routine re-orders the eigensolutions
> in ascending order. Is there a way to trace that
> re-ordering? In other words, how can I find out
> which degree of freedom in the original matrices corresponds to
> which degree of freedom in the eigenvector?
>
> Thanks,
> Dave

Hi, I don't think what you are looking for exists. Maybe I'm missing something, but the order in which the eigenvalues are actually computed depends on several details of the algorithm, such as shifts etc. What exactly do you mean by "degree of freedom"?

--
Hans D. Mittelmann          http://plato.la.asu.edu/
Arizona State University    Phone: (602) 965-6595
Department of Mathematics   Fax:   (602) 965-0461
Tempe, AZ 85287-1804        email: mittelmann@asu.edu
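One point worth separating out: DSYGV sorts the eigenPAIRS by eigenvalue, but it never permutes the rows of the matrices, so row i of every returned eigenvector still refers to the same degree of freedom as row i of A and B. A numpy sketch (my own illustration of the standard reduction DSYGV performs, not code from the thread):

```python
# Sketch (my addition): the reduction DSYGV uses for A x = lambda B x.
# Only the eigenpairs are sorted ascending; degrees of freedom (rows)
# are never re-ordered.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)); A = A + A.T                 # symmetric A
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)  # s.p.d. B

# Reduce to a standard problem with the Cholesky factor B = L L^T,
# as DSYGV does internally: C = inv(L) A inv(L)^T.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T

w, y = np.linalg.eigh(C)   # eigenvalues returned in ascending order
X = Linv.T @ y             # back-transform: columns of X solve A x = w B x

# Each column k satisfies the generalized problem; rows were never permuted.
for k in range(n):
    assert np.allclose(A @ X[:, k], w[k] * (B @ X[:, k]))
print(np.all(np.diff(w) >= 0))  # ascending eigenvalue order
```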
Hi all,

My question is: do you have to take the conjugate of the elements when finding the transpose of a complex matrix? I find this confusing, as MATLAB gives a conjugated transpose while MAPLE does not!!! Please reply to my personal email address, as I don't closely follow this newsgroup.

Thanks in advance for any help.

Bala

Fundamentally, things never change.
________________________________________________________________________________
Raju Balasubramanian               | 701-107 Cumberland Ave. S
Dept of Electrical Engineering     | Saskatoon S7N 2R6
Univ of Saskatchewan               | SK, Canada
SK Canada. S7N 5A9                 | Phone: (306) 653-1513 Home
http://www.engr.usask.ca/~bar553   |        (306) 966-5400 Lab
--------------------------------------------------------------------------------
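The discrepancy comes from two distinct operations: the plain transpose and the conjugate (Hermitian) transpose. In MATLAB, `A'` is the conjugate transpose and `A.'` is the plain transpose. A short numpy illustration (my own addition, not from the post):

```python
# Sketch (my addition): plain transpose vs conjugate (Hermitian) transpose.
# MATLAB's  A'  conjugates, its  A.'  does not, explaining the
# MATLAB-vs-Maple discrepancy described above.
import numpy as np

A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])

plain = A.T             # transpose only: swap rows and columns
hermitian = A.conj().T  # conjugate transpose (A^H)

assert plain[0, 1] == 0 + 1j      # element just moved
assert hermitian[0, 1] == 0 - 1j  # element moved AND conjugated
print(np.array_equal(plain, hermitian))  # differs for any non-real matrix
```

Which one you want depends on context: inner products and norms over the complex field use the conjugate transpose.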
PhD or Masters Scholarship at the University of Queensland

This is an opportunity for a student to work in a new and exciting area which involves sophisticated computational techniques with important real-life applications.

Research project: "The development of stochastic models and efficient numerical techniques for solving stochastic differential equations in environmental modelling"

Funding: $15,000 is available either as a top-up over 3 years (at $5,000 per annum for 1997-1999) to an existing APA, or as a one year scholarship (1997).

The successful student would work under the guidance of the principal investigators: Professor Kevin Burrage (Computational Mathematics) and Professor Ray Volker (Civil Engineering).

Equipment: The successful student would have access to state-of-the-art SGI workstations as well as the University of Queensland's 20 processor parallel supercomputer.

Potential applicants should contact

 Professor Kevin Burrage, Department of Mathematics,
 University of Queensland, Brisbane 4072, Australia
 email: kb@maths.uq.oz.au
 phone: +61 07 33653487

or

 Professor Ray Volker, Department of Civil Engineering,
 University of Queensland, Brisbane 4072, Australia
 email: volker@uq_civil.civil.uq.oz.au
 phone: +61 07 33653619
PhD Scholarship(s) at the University of Queensland

This is an opportunity for a student to work in a new and exciting area which involves sophisticated computational techniques with important real-life applications.

Research project: "Large-scale parallel numerical methods for differential-algebraic equations in process engineering"

Funding: A PhD scholarship of $15,000 per annum over 3 years for 1997-1999 is available. Alternatively, several top-ups to an existing APA will be granted for suitable applicants based on their ability.

The successful student would work under the guidance of the principal investigators: Professor Kevin Burrage, Dr Roger Sidje (Computational Mathematics) and A/Professor Ian Cameron (Chemical Engineering).

Equipment: The successful student would have access to state-of-the-art SGI workstations as well as the University of Queensland's 20 processor parallel supercomputer.

Potential applicants should contact

 Professor Kevin Burrage, Department of Mathematics,
 University of Queensland, Brisbane 4072, Australia
 email: kb@maths.uq.oz.au
 phone: +61 07 33653487
Dear all,

I just got my complex code working. It's general enough that I (or anyone in my research group) can use it for any application involving complex numbers. Just wanted to thank the group; I've gotten a lot of good responses and learned a great deal to boot.

peter
In article <3296297B.1458@spot.neurodyn.hscbklyn.edu>, David B. Chorlian wrote:

#Problem:
#Given a set of n dimensional vectors X of cardinality m,
#with m << n, find the subset Y of X with given cardinality r,
#which gives the "best" approximation to X in the sense that
#the sum of the norms of the residuals of approximating each
#element of X by the best linear combination of elements of Y
#is a minimum. This is related to the problem in section 12.2
#of Golub and Van Loan's _Matrix Computations_ called
#"Subset Selection". The problem might be broadened by
#making r depend on the size of the residuals.
#
#Clearly an exhaustive method will work. Are there better
#methods? For example, one could use the greedy algorithm
#of starting with the vector y from X such that the sum of
#<xi,y>^2 / <y,y> was a maximum, then form
#the set X' such that xi' = xi - (<xi,y>/<y,y>) y, and continuing
#in a similar manner. This would be a nice solution if r
#was also to be determined. One might think that some application
#of SVD would be better. Pointers to any discussions would
#be appreciated.
#
#--
#David B. Chorlian
#Senior Scientific Programmer, Neurodynamics Lab, SUNY/HSCB
#voice: 718-270-2231; fax: 718-270-4081
#chorlian@spot.neurodyn.hscbklyn.edu

The greedy algorithm you propose computes a pivoted QR decomposition via the Gram-Schmidt algorithm. The decomposition is very effective in isolating a linearly independent set of columns. However, the naive Gram-Schmidt algorithm is unstable. Instead you should use orthogonal triangularization by Householder transformations. I believe the algorithm is described in Golub and Van Loan.

Pete Stewart
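To make the correspondence concrete, here is a minimal numpy sketch (my own illustration, not code from the thread) of the greedy deflation Stewart identifies as column-pivoted orthogonalization. It uses modified Gram-Schmidt for readability; as he notes, a production code should use Householder QR with column pivoting (LAPACK's xGEQP3) instead.

```python
# Sketch (my addition): greedy column subset selection = column-pivoted
# orthogonalization. Modified Gram-Schmidt here for clarity only; prefer
# Householder QR with pivoting in real codes.
import numpy as np

def greedy_subset(X, r):
    """Select r columns of X (n x m) greedily: at each step pick the column
    with the largest remaining norm, then deflate all columns against it."""
    W = X.astype(float).copy()
    picked = []
    for _ in range(r):
        j = int(np.argmax(np.sum(W * W, axis=0)))  # best remaining column
        q = W[:, j] / np.linalg.norm(W[:, j])
        picked.append(j)
        W -= np.outer(q, q @ W)                    # remove that direction
    return picked

# Four columns drawn from a rank-2 subspace: two picks should suffice.
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 2))
X = np.column_stack([B[:, 0], B[:, 1], B @ [1.0, 1.0], B @ [2.0, -1.0]])
print(greedy_subset(X, 2))  # indices of two independent spanning columns
```

The picked columns then give a basis Y against which the rest of X can be fit by least squares, which is exactly the residual criterion in the original problem.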
*********************************************************************
* Information file, on compilers, tools, books, courses, tutorials, *
* and the standard for the Fortran language.                        *
*                                                                   *
* Additional information on Fortran products is available on the    *
* WWW at the URL http://www.fortran.com/fortran.                    *
*********************************************************************

WHAT'S NEW?

Since 21 October: HP announces its optimizing f90 compiler. Revised Fujitsu and Salford entries.

WHERE CAN I OBTAIN A FORTRAN 90 COMPILER?

Absoft sells its native version of Cray's CF90 for the Power Macintosh (sales@absoft.com or http://www.absoft.com).

ACE of Holland provides f90 and HPF for Parsytec PowerPC-based machines (marco@ace.nl, http://www.ace.nl/).

Apogee's f90 compiler is highly optimized for SPARC architectures (sales@apogee.com or http://www.apogee.com).

Cray Research has a fully-optimizing, native compiler, CF90, that is being marketed by them for the YMP, J90, C90, T90 and T3E, and by Visual Numerics for workstations, starting with Suns (craysoft@cray.com or http://www.cray.com/PUBLIC/product-info/craysoft/Fortran_90.html).

Digital has Digital Fortran 90, a native, optimizing compiler for Digital UNIX Alpha systems (with HPF and parallel processing as an option), and for OpenVMS Alpha (with HPF syntax). Versions for Windows NT (Alpha and Intel) and Windows 95 (Intel) are under development and will have an integrated development environment - planned for 1997. Fortran 95 support is planned for mid-1997 (fortran@digital.com or http://www.digital.com/info/hpc/fortran).

EPC has optimizing, native compilers for x86, Sun, RS/6000, SGI and MIPS (http://www.epc.co.uk, info@epc.com, support@epc.co.uk). HPF is also available.

FORTNER Research (formerly Language Systems Corp) expects to deliver f90 for Macintoshes at some unspecified date.

Fujitsu is marketing a native Fortran 90 Workbench for Solaris 1.1 and 2.x. Also HPF.
Contact Unicomp (walt@fortran.com), Fujitsu (info@fsc.fujitsu.com or http://www.adtools.com/lpg/fortranhp.htm).

HP has collaborated with EPC to produce an optimizing compiler for HP-UX and SPP-UX platforms; see http://www.hp.com/go/hpfortran.

IBM has been shipping its optimizing, native compiler for the RS/6000, xlf Version 3, since 31 December, 1993. HPF is now available too. See http://www.software.ibm.com/ap/fortran.

Imagine1 Inc offers F, the subset language for Unix and Windows that they hope will be the true stepping stone to HPF and at the same time replace Basic, Pascal and C for teaching purposes. The version for Linux is free. See http://www.imagine1.com/imagine1 and the book section below.

Lahey has a native LF90 compiler for Windows and DOS (sales@lahey.com or http://www.lahey.com). Version 3.0 provides an integrated Windows development environment. There is also elf90, a subset language without old features like storage association that is designed for teaching, and is very cheap. In fact, the elf90 compiler itself can be downloaded free from the Web site.

Microsoft has released its Fortran Powerstation V4.0 that includes f90 for Windows NT 3.51 and Windows 95 (fortran@microsoft.com or http://www.microsoft.com/fortran). It is a 32-bit compiler with optimizations for Pentium and 486.

Microway NDP Fortran 90 for 386/486 and Pentium is available (nina@microway.com).

NAG provides a compiler for most unix platforms, VMS and PCs (including Linux). This was the first f90 compiler, in 1991. An optimizing version produced in collaboration with ACE (see above) for Suns is also available. The NAGWare f90 Tools are a suite of Fortran 90 tools derived from the same technology as the NAGWare f90 compiler (infodesk@nag.com, infodesk@nag.co.uk or http://www.nag.co.uk/).

NA Software supplies Fortran 90 Plus on PCs (including Windows 95 and Linux), Sparc, and T800 transputers. There is a cheap student version available.
They also supply an F77 to f90 syntax convertor, LOFT90, as well as HPF (http://www.nasoftware.co.uk/home.html).

NEC has released a native, optimizing Fortran90 compiler, FORTRAN90/SX, with an automatic vectorization and parallelization capability, for its supercomputer SX series (sx-4@sxsmd.ho.nec.co.jp).

PSR's VAST/f90 compiler for unix, VMS and Convex includes a vectorizer. PSR also supplies VAST/77to90, to convert FORTRAN 77 programs into Fortran 90 syntax, as well as HPF (info@psrv.com or http://www.psrv.com/).

ParaSoft has a compiler (f90-info@parasoft.com, or http://www.parasoft.com/f90.html).

PGI has a Fortran 90/HPF compiler for SGI, IBM SP2, HP/Convex, etc. (sales@pgroup.com or http://www.pgroup.com/). It supplies HPF to Cray and Intel.

Salford Software markets a PC version of the NAG compiler, also for Windows 95 and NT (http://www.salford.ac.uk/ssl/ss.html or sales@salford-software.com). A very cheap student version is available.

SGI has the MIPSpro Fortran 90 64-bit compiler, version 6.2. It can be configured with an optional MIPSpro Power Fortran 90 Accelerator (PFA90) to optimize Fortran 90 code for SGI's multiprocessor systems (http://www.sgi.com/Technology/TechPubs/lib/0620bom.html).

SofTech has a licence to sell its own versions of DEC's HPF/f90 compiler.

Sun has released an f90 compiler based on Cray's CF90, initially for Solaris 2 (tel. 1-800-SUNSOFT or URL http://www.sun.com/sunsoft/Products/Developer-products).

OTHER USEFUL PRODUCTS

FORCHECK is a static analyzer for Fortran programs. It analyses both the individual program units and the whole program. It optionally verifies the syntax for conformance to the Fortran 90 standard, and provides warnings on undefined and unreferenced syntax items, inconsistent argument lists, and much more. FORCHECK generates documentation, such as cross-reference tables. See http://www.medfac.leidenuniv.nl/forcheck.
FORGE90 and an HPF processor from APR (support@apri.com or http://www.infomall.org/apri/) are available.

HPF is apparently available not only as listed above, but also from CDAC, Hitachi, Intel, Motorola, Meiko, NEC, Transtech and Thinking Machines.

A source form convertor, convert.f90, is obtainable by ftp from jkr.cc.rl.ac.uk in the directory /pub/MandR. Latest version is 1.4.

A graphics interface, f90gl, is obtainable at http://math.nist.gov/f90gl.

NAG (see above) and IMSL (now Visual Numerics, mktg@houston.vni.com) offer f90 versions of their maths libraries that take full advantage of the language's library building capabilities.

An f90 mode is included in the official Emacs distribution (GNU Emacs-19.28/XEmacs-19.13 or later).

For make files, a perl5 script, which behaves like an X11 makedepend program (it edits an existing Makefile) and recursively searches include files for more dependencies, is available from Kate Hedstrom:
 ftp://ahab.rutgers.edu/pub/perl/sfmakedepend
 http://marine.rutgers.edu/po/perl.html

For a makemake perl script: http://www.fortran.com/fortran/makemake.html.

WHAT BOOKS ARE AVAILABLE?

English:

Advanced Scientific Computing - Wille, Wiley, 1995, ISBN 0471-95383-0.

Fortran 90 - Meissner, PWS Kent, Boston, 1995, ISBN 0-534-93372-6.

Fortran 90 - Counihan, Pitman, 1991, ISBN 0-273-03073-6.

Fortran 90 and Engineering Computation - Schick and Silverman, John Wiley, 1994, ISBN 0-471-58512-2.

Fortran 90, A Reference Guide - Chamberland, Prentice Hall PTR, 1995, ISBN 0-13-397332-8.

Fortran 90/95 Explained - Metcalf and Reid, Oxford University Press, 1996, ISBN 0-19-851888-9, about $33. This book is a complete, audited description of the Fortran 90 and Fortran 95 languages in a more readable style than the standards themselves. It incorporates all X3J3 and WG5's interpretations and has a complete chapter on Fortran 95.
It has seven Appendices, including an extended example program that is available by ftp, and solutions to exercises, as well as an Index. US e-mail orders may be sent to: orders@oup-usa.org. The Fortran 90 version is also available in French, Japanese and Russian (see below).

Fortran 90 for Scientists and Engineers - Brian D. Hahn, Edward Arnold, 1994, ISBN 0-340-60034-9.

Fortran 90 Handbook - Adams, Brainerd, Martin, Smith and Wagener, McGraw-Hill, 1992, ISBN 0-07-000406-4.

Fortran 90 Language Guide - Gehrke, Springer, London, 1995, ISBN 3-540-19926-8.

Fortran 95 Language Guide - Gehrke, Springer, London, 1996, ISBN 3-540-76062-8.

Fortran 90 Programming - Ellis, Philips, Lahey, Addison Wesley, Wokingham, 1994, ISBN 0-201-54446-6.

Fortran Top 90-Ninety Key Features of Fortran 90 - Adams, Brainerd, Martin and Smith, Unicomp, 1994, ISBN 0-9640135-0-9.

Introducing Fortran 90 - Chivers and Sleightholme, Springer-Verlag, London, 1995, ISBN 3-540-19940-3.

Introduction to Fortran 90/95, Algorithms, and Structured Programming, Part 1: Introduction to Fortran 90, Part 2: Algorithms and Fortran 90 - R. Vowels, 93 Park Drive, Parkville 3052, Victoria, AUSTRALIA (rav@goanna.cs.rmit.edu.au). $41 Aust, ISBN 0-9596384-8-2.

Introduction to Fortran 90 for Scientific Computing - Ortega, Saunders College Publishing, 1994, ISBN 0-030010198-0.

Numerical Recipes in Fortran 90: The Art of Parallel Scientific Computing, Volume 2 of Fortran Numerical Recipes - Press, Teukolsky, Vetterling and Flannery, Cambridge U. Press, ISBN 0-521-57439-0, 1996. Code can be downloaded (purchased) from http://nr.harvard.edu/nr/store. A CDROM is also available (see Web site).

Programmer's Guide to Fortran 90, third edition - Brainerd, Goldberg and Adams, Springer, 1996, ISBN 0-387-94570-9.

Programming in Fortran 90 - Morgan and Schonfelder, Alfred Waller/McGraw-Hill, Oxfordshire, 1993, ISBN 1-872474-06-3.

Programming in Fortran 90 - I.M. Smith, Wiley, ISBN 0471-94185-9.
Schaum's Outline of Theory and Praxis -- Programming in Fortran 90 - Mayo and Cwiakala, McGraw-Hill, 1996, ISBN 0-07-041156-5.

The F Programming Language - Metcalf and Reid, Oxford University Press, 1996, ISBN 0-19-850026-2, about $33. This book is the definitive description of the F programming language - a carefully crafted subset of Fortran 90 that is highly regular and stripped of Fortran's older, dangerous features, but retains the powerful array language, data abstraction and pointers. It has six Appendices, including an extended example program that is available by ftp, and solutions to exercises, as well as an Index. US orders may be sent to: orders@oup-usa.org.

Upgrading to Fortran 90 - Redwine, Springer-Verlag, New York, 1995, ISBN 0-387-97995-6.

Chinese:

Programming Language Fortran 90 - He Xingui, Xu Zuyuan, Wu Qingbao and Chen Mingyuan, China Railway Publishing House, Beijing, ISBN 7-113-01788-6/TP.187, 1994.

Dutch:

Fortran 90 - W.S. Brainerd, Ch.H. Goldberg, and J.C. Adams, translated by J.M. den Haan, Academic Service, 1991, ISBN 90 6233 722 8.

French:

Fortran 90; Approche par la Pratique - Lignelet, Se'rie Informatique E'ditions, Menton, 1993, ISBN 2-090615-01-4.

Fortran 90. Les concepts fondamentaux - the translation of "Fortran 90 Explained" by M. Metcalf and J. Reid, translated by M. Caillet and B. Pichon, AFNOR, Paris, ISBN 2-12-486513-7.

Fortran 90; Initiation a` partir du Fortran 77 - Aberti, Se'rie Informatique E'ditions, Menton, 1992, ISBN 2-090615-00-6.

Les specificites du Fortran 90 - Dubesset, M. et Vignes, J., Editions Technip, 1993, ISBN 2-7108-0652-5.

Manuel complet du langage Fortran 90, et guide d'application - Lignelet, P., S.I. editions, Jan. 1995, ISBN 2-909615-02-2.

Manuel Complet du Langage FORTRAN 90 et FORTRAN 95, Calcul intensif et Genie Logiciel - Masson Editions, Paris, ISBN 2-225-85229-4.

Programmer en Fortran 90 - Delannoy, C., Eyrolles, 1992, ISBN 2-212-08723-3.

Traitement des donnees numeriques avec Fortran 90 - Olagnon, M., Masson, 1996, ISBN 2-225-85259-6.

Savez-vous parler Fortran - Ain, M., Bibliotheque des universites (de Boeck), 1994, ISBN 2-8041-1755-3.

Structures de donnees (et leurs algorithmes) en FORTRAN 90/95 - P. Lignelet, Editions Masson, Paris/Milan/Barcelone, ISBN 2-225-85373-8.

German:

Fortran 90 - B. Wojcieszynski and R. Wojcieszynski, Addison-Wesley, 1993, ISBN 3-89319-600-5.

Fortran 90: eine informelle Einfu"hrung - Heisterkamp, BI-Wissenschaftsverlag, 1991, ISBN 3-411153-21-0.

Fortran 90, Lehr- und Arbeitsbuch fuer das erfolgreiche Programmieren - W.S. Brainerd, C.H. Goldberg, and J.C. Adams, translated by Peter Thomas and Klaus G. Paul, R. Oldenbourg Verlag, Muenchen, 1994, ISBN 3-486-22102-7.

Fortran 90 Lehr- und Handbuch - T. Michel, BI-Wissenschaftsverlag, 1994.

Fortran 90 Referenz-Handbuch: der neue Fortran-Standard - Gehrke, Carl Hanser Verlag, 1991, ISBN 3-446163-21-2.

Programmierung in Fortran 90 - Schobert, Oldenbourg, 1991.

Programmieren in Fortran - Erasmus Langer, Springer-Verlag, Wien/New York, 1993, ISBN 3-211-82446-4, 0-387-82446-4.

Software Entwicklung in Fortran 90 - U"berhuber and Meditz, Springer Verlag, 1993, ISBN 0-387-82450-2.

Japanese:

Fortran 90 Explained - Metcalf and Reid, translated by H. Nisimura, H. Wada, K. Nishimura, M. Takata, Kyoritsu Shuppan Co., Ltd., 1993, ISSN 0385-6984.

Russian:

An Explanation of the Fortran 90 Programming Language (translation of Fortran 90 Explained - Metcalf and Reid), translated by P. Gorbounov, Mir, Moscow, 1995, ISBN 5-03-001426-8. Available also from Petr.Gorbounov@cern.ch.

Swedish:

Fortran 90 - en introduktion - Blom, Studentlitteratur, Lund, 1994, ISBN 91-44-47881-X.

WHERE CAN I OBTAIN COURSES, COURSE MATERIAL OR CONSULTANCY?
Copyright but freely available course material is available on the World Wide Web from the URLs:

Manchester Computer Centre: http://www.hpctec.mcc.ac.uk/hpctec/courses/Fortran90/F90course.html or via ftp: ftp.mcc.ac.uk, in the directory /pub/mantec/Fortran90.

The University of Liverpool: http://www.liv.ac.uk/HPC/HPCpage.html.

CERN: http://wwwcn.cern.ch/asdoc/f90.html or via anonymous ftp from asisftp.cern.ch in the directory cnl as the file f90tutor.ps.

In French: Support de cours Fortran 90 IDRIS - Corde & Delouis (from ftp.ifremer.fr, file pub/ifremer/fortran90/f90_cours_4.ps.gz).

A course on HPF is freely available from Edinburgh: http://www.epcc.ed.ac.uk/epcc-tec/course-packages/HPF-Package-form.html

Courses are available from:

 Walt Brainerd, a member of X3J3, also on HPF (walt@fortran.com);
 Tom Lahey (sales@lahey.com);
 PSR (see above);
 CETech, Inc. (also on HPF), 8196 SW Hall Blvd., Ste. 304, Beaverton,
 Oregon 97008, USA. Phone: (503) 644-6106, Fax: (503) 643-8425
 (cetech@teleport.com).

European companies offering courses and conversion consultancy are:

 IT Independent Training Limited, 2 Windlebrook Green, Bracknell,
 Berkshire, UK. Tel: +44 1344 860172, fax: +44 1344 867992;

 Salford Software (see above);

 Simulog, attn. Mr. E. Plestan, 1 rue James Joule, F-78286 Guyancourt
 Cedex, France. Tel: +33 1 30 12 27 80, fax: +33 1 30 12 27 27,
 e-mail: plestan@simulog.fr;

 Allgemeiner Software Service, Prinz-Otto Str. 7c, D-85521 Ottobrunn,
 Germany. Tel: +49-89-6083758, fax: +49-89-6083758,
 e-mail: 100722.746@compuserve.com, URL: http://www.wp.com/AllSoftServe.

WHERE CAN I FIND THE STANDARD?

Fortran 90 was adopted as an International Standard by ISO in July, 1991, as ISO/IEC 1539:1991, and is obtainable for 185 Swiss francs from

 ISO Publications, 1 rue de Varembe,
 Case postale 56, CH-1211 Geneva 20, Switzerland
 Fax: +41 22 734 10 79

It may also be obtained from national member bodies such as ANSI, 1430 Broadway, New York, N.Y.
10018 (where it is also known as ANSI X3.198-1992), or in electronic PostScript or ASCII form from Unicomp (walt@fortran.com) at a cost and under conditions agreed by ISO.

Corrigenda 1 and 2 were published by ISO in 1993 and 1995, respectively, and are available from them (cost about 30 Swiss francs). Corrigendum 3 was approved for publication in 1996.

A Russian translation of the standard (translator S.G. Drobyshevich) is available from the editor, Alla Gorelik (gorelik@applmat.msk.su).

*****

This information is compiled on a 'best-effort' basis and is without prejudice. It may be freely copied and disseminated. Corrections and additions are solicited.

Mike Metcalf (metcalf@cern.ch)

Version of 12 November, 1996.