Newsgroup sci.math.num-analysis 28493

Directory

Subject: Re: PDEase FEA soft from Macsyma-any experiences? -- From: WOOWOON@chollian.dacom.co.kr (Chollian NEWS GROUP user)
Subject: Q: how to find {v} s.t. {v}t[A]{v} is minimize? -- From: Worawut Wisutmethangoon
Subject: Re: Minimizer stopping rule -- From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Subject: Re: non linear cubic spline -- From: pgh@bnr.co.uk (Peter Hamer)
Subject: Re: C Code for high order polynomial curve fitting? -- From: pgh@bnr.co.uk (Peter Hamer)
Subject: Convolution of sparse vectors -- From: Thomas Bleile
Subject: test -- From: Mirko Vukovic
Subject: Re: Wanted: FFT algorithm -- From: "Robert. Fung"
Subject: Re: Using C for number-crunching (was: Numerical solution to -- From: shenkin@still3.chem.columbia.edu (Peter Shenkin)
Subject: [Q] Need to plot in spherical coords. -- From: Craig Ross
Subject: Re: Convolution of sparse vectors -- From: bruns@tetibm2.ee.TU-Berlin.DE (Warner Bruns)
Subject: Re: Q: how to find {v} s.t. {v}t[A]{v} is minimize? -- From: nsinger@eos.hitc.com (Nick Singer)
Subject: Re: Convert Social Security number to six digits and back again. -- From: hisys@aol.com (Hisys)
Subject: bootstrapping app. refs. -- From: "William C. Snyder"
Subject: Help: Integrating functions with compact support .... ? -- From: "N. Sukumar"
Subject: mathematics: The Curmudgeon Quotelist -- From: marlow@concentric.com (marlowe)
Subject: Different rhs and CG -- From: cjlin@pelli.engin.umich.edu (Chih-jen Lin)
Subject: Re: Help with vector spaces, please. -- From: John Hench
Subject: Re: Minimizer stopping rule -- From: lpa@netcom.com (Pierre Asselin)
Subject: Re: Using C for number-crunching (was: Numerical solution to Schrodinger's Eq) -- From: mfriesel@ix.netcom.com
Subject: Re: Different rhs and CG -- From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Subject: Re: calibration/interpolation? -- From: Bill Simpson
Subject: Re: Convolution of sparse vectors -- From: Klaus Ramstoeck
Subject: Re: Different rhs and CG -- From: eijkhout@jacobi.math.ucla.edu (Victor Eijkhout)
Subject: Re: Runge-Kutta for IBVP on second order PDE in 4 dim.? -- From: matsh@alwaid.tdb.uu.se (Mats Holmström)
Subject: Re: books on stability of pdes -- From: jarausch@numa1.igpm.rwth-aachen.de (Helmut Jarausch)
Subject: Re: Q: how to find {v} s.t. {v}t[A]{v} is minimize? -- From: ghe@saturn.superlink.net (Guangliang He)
Subject: Re: PLEASE HELP ME: I need to factor/simplify an algebraic expression. -- From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Subject: Re: Q: how to find {v} s.t. {v}t[A]{v} is minimize? -- From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Subject: Re: Using C for number-crunching (was: Numerical solution to -- From: shenkin@still3.chem.columbia.edu (Peter Shenkin)
Subject: Iterative solver for banded matrix -- From: pernil@p1542.pip.dknet.dk (Per Nielsen)
Subject: Re: [Q] Need to plot in spherical coords. -- From: "Dann Corbit"
Subject: [Announce] CFD Jobs Database (WWW) -- From: jola@tfd.chalmers.se (Jonas Larsson)
Subject: uniqueness of a nonlinear ODE -- From: "Pablo A. Perez-Fernandez"
Subject: linear resolution -- From: ANA ISABEL GODOY ESMERALDA
Subject: Question: linear Diophantine eqn solver -- From: Vladislav Panferov
Subject: Re: Iterative solver for banded matrix -- From: "Hans D. Mittelmann"

Articles

Subject: Re: PDEase FEA soft from Macsyma-any experiences?
From: WOOWOON@chollian.dacom.co.kr (Chollian NEWS GROUP user)
Date: 6 Nov 1996 10:48:16 GMT
Victor Kharin (kharin@udc.es) wrote:
: Hi..
: Has anyone had any good or bad experiences using the PDEase2D finite 
: element software from Macsyma Inc.?
: I am planning to use this software under Windows NT for nonlinear 
: problems of solid mechanics coupled with stress-strain dependent 
: transient heat-mass transfer.
: I am wondering just about its "nonlinear capabilities", in particular, 
: could it serve well for elastoplastic problems with deformational 
: Ramberg-Osgood like plasticity (small strains)? How reliable is this 
: software?
: In general:
: Are there any problems with this software that I must be aware of?
: How friendly and easy to use is it?
: What should I expect from the package?
: I would appreciate every comment, suggestion, idea..
: ---------------------------------------------------------------
: Victor Kharin                          ETSI Caminos
: tel: +34-81-131150, ext. 450           Campus de Elvina
: fax: +34-81-132876                     15192 La Coruna
: E-mail: kharin@udc.es                  SPAIN
: ---------------------------------------------------------------
Return to Top
Subject: Q: how to find {v} s.t. {v}t[A]{v} is minimize?
From: Worawut Wisutmethangoon
Date: Wed, 06 Nov 1996 02:57:05 -0600
Hi,
	Although this is not directly related to numerical analysis, I hope
some of you can help me with this problem.
	If I have an n x n matrix [A], how do I find a vector {v} of
dimension n such that {v}t[A]{v} is minimized?  The matrix is known
to be symmetric and positive semi-definite, if that matters.
	I would really appreciate any pointers.
Thank you in advance
Worawut W.
(wisutmet@cae.wisc.edu)
Return to Top
Subject: Re: Minimizer stopping rule
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Date: 6 Nov 1996 11:15:12 GMT
In article <327F0D61.6804@ix.netcom.com>, Jive Dadson  writes:
|> I find I am not too clear on how one should stop a function-minimizer
|> before it reaches machine-limit convergence. The NRC routines have
|> several ways to stop minimizing f(x), including these:
|> 
|>    1. If the gradient f'(x) becomes "small" on a step.
|>    2. If the difference in x between two steps becomes "small"
|>    3. The conj-grad routine frprmin() stops when the difference
|>       in f(x) between two steps becomes "small".
|> 
|> I have doubts about all of those stopping criteria.
|> 
|>    1. In monitoring the city block norm for the gradient in actual,
|>    noisy problems, I find that it hops around quite a bit, and frequently
|>    becomes almost as small as machine limits allow it to become
|>    at the actual minimum. The NRC routine often stops prematurely
|>    if you don't set "gtol" very low. This is the stopping criterion
|>    it uses:
|> 
|>         test = 0
|>         den = max(f(x),1);
|> 
|>         FOR ALL DIMENSIONS i
|>               temp = abs(gradient[i]) * max(abs(x[i]),1)/den;
|>               IF temp > test THEN test=temp;
|>                 
|>         IF test < gtol THEN quit
|> 
|>     2. The second criterion particularly does not seem appropriate for
|>     algorithms that use an approximate line search, because it can cause
|>     the algorithm to stop because of one ineffective step. Yet, that is the
|>     other stopping criterion for the NRC BFGS routine dfpmin(), which
|>     uses approximate line searches.
|> 
|>         test = 0;
|>         FOR ALL DIMENSIONS i
|>             temp = abs(x[i]-prev_x[i])/max(abs(x[i]),1);
|>             IF temp > test THEN test=temp;
|> 
|>         IF test < small THEN quit
|> 
|>     I also question the division by max(abs(x[i]),1). When training a neural
|>     network, (which is what I am mostly using this for), scaling is
|>     appropriate for some parameters ("weights"), but perhaps not for
|>     others ("biases"). If you are going to stop on near x-convergence,
|>     I think the user needs to specify the x norm.
|> 
|>     3. The third criterion has the problems of the first two: It can
|>     cause early stopping in a relative "flat" spot, or when a single
|>     step is ineffective because of not-exact line-search.
|>         
|>             test = 2.0*abs(f(x)-previous_f_x);
|>             lim = ftol*(abs(f(x))+abs(previous_f_x)+EPS);
|>             IF test <= lim THEN quit
|> 
|> It would seem at the very least you should insist that these criteria are
|> met for some number of consecutive steps before stopping.
|> 
|> Ideally, one would like for the user to be able to specify a small
|> number e, and say, "Stop when |f(x)-f(M)| < e * abs(f(M))." Obviously that
|> criterion cannot be tested with certainty. Or the user could specify a
|> norm ||.||, and say, "Stop when ||x - M|| < e."  Is there a way to stop before
|> complete convergence when it is "very probably" true that you are within an
|> epsilon of the minimum?
|> 
|> Gurus, please enlighten.
|> 
|>        Thanks,
|>        J.
Your observations and remarks are quite correct. Indeed, a good algorithm
should insist on continuing "as long as it makes sense" and then stop,
giving the user an appropriate message. On the other hand ...
imagine what happens if you keep minimizing with f_cur-f_new = 1.d-15 for some
thousand steps? In principle your problem is easily solved:
let the algorithm stop with an a posteriori error estimate (a reliable one).
This can be done using interval arithmetic, but requires an interval
arithmetic extension of the evaluation of f and grad_f (_and_ a
"sufficiently accurate" approximation of the Hessian, such that e.g.
the Krawczyk operator is well defined and contractive). Look at the
books of Ratschek & Rokne (Halsted Press, 1988) or Eldon Hansen (Dekker,
New York, 1992). See also the code in netlib/toms/681.
The stopping criteria you mention are quite common and not unreasonable for
superlinearly converging algorithms (such as BFGS).
It is a matter of taste to require fulfillment of the stopping criterion
for some consecutive steps (many algorithms do that), but this device
gives you no certainty. No foolproof termination criterion can be devised
without safe lower and upper bounds for the eigenvalues of the Hessian
in a neighborhood of the solution containing the current point.
Besides ... NRC dfpmin (actually, a direct BFGS update
of the inverse Hessian) uses float _and_ a linesearch based on function
values. This limits final precision to about 1.e-4 in the best case,
and with a little ill-conditioning it limits final precision to nothing ...
Hope these remarks help,
peter
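The thread's suggestion that the criteria "are met for some number of consecutive steps before stopping" can be sketched as follows. This is a minimal illustration, not NRC code; the function name, the `patience` parameter, and the thresholds are all invented for the example.

```python
def make_stopping_rule(ftol, patience=3):
    """Return a test that fires only after `patience` consecutive
    small relative decreases in f (NRC-style criterion 3 above),
    so a single ineffective step cannot stop the minimizer."""
    state = {"hits": 0, "prev_f": None}

    def should_stop(f):
        prev = state["prev_f"]
        state["prev_f"] = f
        if prev is None:
            return False
        # relative-decrease test, as in the quoted pseudocode
        if 2.0 * abs(f - prev) <= ftol * (abs(f) + abs(prev) + 1e-30):
            state["hits"] += 1
        else:
            state["hits"] = 0  # one effective step resets the count
        return state["hits"] >= patience

    return should_stop
```

With `patience=1` this reduces to the plain NRC test; larger values trade a few extra iterations for robustness against noisy line searches.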
Return to Top
Subject: Re: non linear cubic spline
From: pgh@bnr.co.uk (Peter Hamer)
Date: 6 Nov 1996 11:59:04 GMT
In article <327CCC21.63A6@interlog.com> falkiner@interlog.com writes:
>I am looking for a PC routine that will do a non linear cubic spline,
>where the "stiffness" of the spline can be an input parameter over a
>range of 0-1 where 0 would be a cubic spline fit where the spline curve
>goes through each of the input data points and 1 would be a linear
>regression.  there is a routine in the SAS mainframe library (NLIN?)
>that does this.  Are there any alternates, as I can't afford SAS for
>personal use on the PC.
There is FORTRAN code for smoothing cubic splines by Woltring in statlib.
In addition to doing the sort of thing you want, it can also work out the
optimum degree of smoothing by generalized cross-validation.
H J Woltring
A FORTRAN package for generalised, cross-validatory spline smoothing
and differentiation.
Adv. Eng. Software 8(2), pp. 104-113, 1987.
The code in statlib is called something like gcvspl; anyway it has gcv
in the name.
Peter
PS It expects the data sorted "in x". If several values are equal in x
it tries to fit a degenerate spline (with some pieces of zero length).
You can fix this up by condensing the equal (or essentially equal) values
into one point with less variance [email me if you want more details]. I
assume that all attempts at piece-wise cubics with one cubic between each
pair of points have the same problem (but not necessarily the features to
use to get round it).
Return to Top
Subject: Re: C Code for high order polynomial curve fitting?
From: pgh@bnr.co.uk (Peter Hamer)
Date: 6 Nov 1996 12:12:18 GMT
In response to a question 
>>|> Does anyone have/know of/can explain an algorithm to curve-fit x data
>>|> points with a y-ordered polynomial
In article <55l7r6$pt8@news.cis.okstate.edu> jpc@a.cs.okstate.edu (John Chandler) writes:
>
>Is dgless a routine specifically designed for least squares
>fitting of polynomials to data?  If so, fine, but one should
>almost never use any general linear least squares software
>to fit a polynomial to data using a monomial basis.
... and goes on to explain why.
I would like to make the additional point that a high order polynomial may
also be a poor choice of MODEL to choose in the first place (unless you have 
good reason to believe that it genuinely reflects the generating process).
Why not fit a [smoothing] spline of some sort instead? Or use locally weighted
regression (eg LOWESS)?
Peter
Return to Top
Subject: Convolution of sparse vectors
From: Thomas Bleile
Date: Wed, 06 Nov 1996 13:13:10 +0100
Hello,
I need to calculate the convolution of two large but sparse vectors
(dimension: (10^8 X 1) with 10^5 nonzero elements).
By the structure of these vectors, I know that the result will be sparse
too (approximately 10^6 nonzero elements).
I am looking for some algorithm or c-library to do this job in less than
one week of CPU-time ;-)
Any ideas ? Pointers ? Literature ?
Thanks in advance,
Thomas 
-- 
====================================================================
Dipl.-Ing. Thomas Bleile               email: bleile@fli.sh.bosch.de
Robert Bosch GmbH, Abt. FV/FLI
Postfach 10 60 50                      Tel: (+49 | 0) 711 811-6659
D-70049 Stuttgart                      Fax: (+49 | 0) 711 811-7602
====================================================================
Return to Top
Subject: test
From: Mirko Vukovic
Date: Wed, 06 Nov 1996 09:25:54 -0800
test
-- 
Mirko Vukovic, Ph.D.           mirko.vukovic@grc.varian.com
Varian Research Center         Phone: (415) 424-4969
3075 Hansen Way, M/S K-109     Fax:   (415) 424-6988
Palo Alto, CA 94304-1025
Return to Top
Subject: Re: Wanted: FFT algorithm
From: "Robert. Fung"
Date: Wed, 06 Nov 1996 11:11:11 -0500
Bill Simpson wrote:
> 
> Try:
> http://www.tu-chemnitz.de/~arndt/joerg.html
> 
> This should really be in the sci.math.num-analysis FAQ, but isn't.
>  
  That's an interesting one. You can also try some of the Simtel
  Sound and Voice mirrored ftp sites like:
      ftp://svr-ftp.eng.cam.ac.uk/pub/comp.speech/
Return to Top
Subject: Re: Using C for number-crunching (was: Numerical solution to
From: shenkin@still3.chem.columbia.edu (Peter Shenkin)
Date: 6 Nov 1996 18:27:11 GMT
In article ,
David Kastrup   wrote:
>shenkin@still3.chem.columbia.edu (Peter Shenkin) writes:
>
>: First, I recently posted a disclaimer to two of my earlier postings,
>@> but now I want to disclaim the disclaimer. :-) Someone had posted
>@> saying that const qualification (in C) does not convey non-alias
>@> information, because the const qualifier guarantees only that no
>@> attempt will be made to write to the underlying addresses *through the
>@> const-qualified pointer*.
>
>This was me, and I stand by my statement.  See below why.
Ugh.  I have to eat my words again.  I'm getting tired of the same 
meal. :-)
>@> But now (sparked by Nick's posting, which implicitly seemed to
>@>  concur
>@> with my original) I've looked this up in the C standard, section
>@> 6.5.3.  It states, in part, "If an attempt is made to modify an object
>@> defined with a const-qualified type through use of an lvalue with
>@> non-const-qualified type, the behavior is undefined."
Yes.  The operative word here is "defined";  I read it as if it
had said "declared".
>@> Thus, the standard is quite clearly saying that the compiler can
>@> assume from the const-qualified declaration of b[] that the contents
>@> of b[] will not be altered even by means of dereference through a.
>@> I.e., my original posting was correct.
>
>And that's where you are being wrong in your interpretation.  A
>pointer, be it const or not, does *not* define an object.  It just
>references it.  ....
And Christian Bau (christian.bau@isltd.insignia.com) pointed out
that if I were correct, memmove() couldn't work.  Its prototype
is as follows:
	void *memmove (void *s1, const void *s2, size_t n);
The whole point of memmove() is to copy the storage pointed to
by s2 to the destinations pointed to by s1 when these blocks
overlap.  If the const qualification of s2 forbade the overwriting
of storage pointed to by s2, then this function prototype for
memmove() would be incorrect.
I understand that the draft copy of the next C language standard
has a "restrict" keyword which guarantees the behavior we'd
like.  I guess this is modeled after Cray's "restrict".
I have two questions about this:
	1.  How does "restrict" differ from the ill-fated "noalias"
	    proposal from the previous standardization process?
	2.  How can one obtain a copy of the current draft standard?
Living and (hopefully) learning,
-P.
-- 
****************** In Memoriam, Bill Monroe, 1911 - 1996 ******************
* Peter S. Shenkin; Chemistry, Columbia U.; 3000 Broadway, Mail Code 3153 *
** NY, NY  10027;  shenkin@columbia.edu;  (212)854-5143;  FAX: 678-9039 ***
MacroModel WWW page: http://www.cc.columbia.edu/cu/chemistry/mmod/mmod.html
Return to Top
Subject: [Q] Need to plot in spherical coords.
From: Craig Ross
Date: Wed, 06 Nov 1996 13:54:48 -0500
If this is not the place to ask this question, please forgive me
and point out the way.
I need to plot a surface in spherical coordinates and want the 
best flexibility and most ease.  The data is spaced on a regular
grid, from 0 <= phi <= 90 degrees and 0 <= theta <= 90 degrees.
I will map this, using symmetry, to the other 7 octants.
The r axis data represents the sound pressure emitted from a source,
in dB, so r will be offset to be greater than 0 for visualization,
if I can't pick the r axis scale independently.  Alternatively,
color maps may be used to represent the r data on a uniform sphere.
I currently have access to Matlab, Maple, Excel, and Axum but haven't
found a simple solution.  I'm using Matlab to crunch the measured
data and like Excel's easy rotation, but dislike that color maps
in Excel will represent variation on the z axis, not r.
Any ideas?  (I will code, if required, but seek existing solutions.)
-- 
Craig Ross				Find just about anything at:
ross.294@osu.edu			http://www.albany.net/allinone/
ross@rclsgi.eng.ohio-state.edu		or	http://www.yahoo.com/
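Whatever package ends up doing the rendering, the core step is converting the regular (theta, phi, r) grid to Cartesian coordinates for a surface plot. A small sketch, with the convention (assumed here, not stated in the post) that theta is azimuth and phi is elevation measured from the xy-plane, plus the dB offset the poster describes:

```python
import math

def sph_to_cart(r, theta_deg, phi_deg):
    """Spherical to Cartesian; theta = azimuth, phi = elevation
    from the xy-plane (one common convention; Matlab's sph2cart
    uses the same one)."""
    th = math.radians(theta_deg)
    ph = math.radians(phi_deg)
    x = r * math.cos(ph) * math.cos(th)
    y = r * math.cos(ph) * math.sin(th)
    z = r * math.sin(ph)
    return x, y, z

def offset_db(levels_db):
    """Shift dB values so the smallest maps to radius 1 > 0,
    as suggested for visualization when the r-axis scale
    cannot be chosen independently."""
    lo = min(levels_db)
    return [v - lo + 1.0 for v in levels_db]
```

The symmetric mapping to the other seven octants is then just sign flips on x, y, z.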
Return to Top
Subject: Re: Convolution of sparse vectors
From: bruns@tetibm2.ee.TU-Berlin.DE (Warner Bruns)
Date: 6 Nov 1996 19:02:56 GMT
In article <328080D6.360F@fli.sh.bosch.de>, Thomas Bleile
 writes:
> Hello,
> 
> I need to calculate the convolution of two large but sparse vectors
> (dimension: (10^8 X 1) with 10^5 nonzero elements).
> By the structure of these vectors, I know that the result will be sparse
> too (approximately 10^6 nonzero elements).
> I am looking for some algorithm or c-library to do this job in less than
> one week of CPU-time ;-)
> 
> Any ideas ? Pointers ? Literature ?
> 
> Thanks in advance,
> 
> Thomas 
> -- 
> 
> ====================================================================
> Dipl.-Ing. Thomas Bleile               email: bleile@fli.sh.bosch.de
> Robert Bosch GmbH, Abt. FV/FLI
> Postfach 10 60 50                      Tel: (+49 | 0) 711 811-6659
> D-70049 Stuttgart                      Fax: (+49 | 0) 711 811-7602
> ====================================================================
 Just take the FFT of your vectors, multiply the transforms,
 and transform them back...
 Wasn't that easy?
 Warner Bruns
Return to Top
Subject: Re: Q: how to find {v} s.t. {v}t[A]{v} is minimize?
From: nsinger@eos.hitc.com (Nick Singer)
Date: 6 Nov 1996 20:29:55 GMT
In article <328052E1.2CA68A66@cae.wisc.edu>, 
wisutmet@cae.wisc.edu says...
>Although this is not directly related to numerical analysis, 
>I hope some of you can help me with this problem.
>If I have an nxn matrix [A], how do I find a vector {v} of
>dimension n such that {v}t[A]{v} is minimized?  The matrix is 
>known to be symmetric and positive semi-definite, if that 
>matters. I would really appreciate any pointers.
You presumably mean for ||v||=1; otherwise, the answer just 
scales quadratically with ||v||.
Take v to be an eigenvector associated with the smallest 
eigenvalue of A, say lambda.  Since A is symmetric and 
positive semi-definite, its eigenvalues are nonnegative 
reals, and the smallest may be zero.  If A is singular, 
take v in the null space of A; then 
{v}t[A]{v} = 0.
In general, if you want to find a vector that minimizes or 
maximizes {v}t[A]{v}, the same approach applies--and you end 
up with {v}t[A]{v} = lambda * ||v||^2. 
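The claim that the minimum of {v}t[A]{v} over unit vectors equals the smallest eigenvalue can be checked numerically. A tiny sketch; the 2x2 positive semi-definite matrix (eigenvalues 0 and 2, null-space direction (1,-1)/sqrt(2)) is an invented example, not from the thread:

```python
import math

def quad_form(A, v):
    """{v}^T [A] {v} for a small dense matrix."""
    n = len(v)
    return sum(v[i] * A[i][j] * v[j] for i in range(n) for j in range(n))

# Symmetric positive semi-definite example: eigenvalues 0 and 2.
A = [[1.0, 1.0], [1.0, 1.0]]

# Scan unit vectors v = (cos t, sin t); the minimum of v^T A v over
# the unit circle equals the smallest eigenvalue (here 0).
best_t = min(
    (k * 2 * math.pi / 3600 for k in range(3600)),
    key=lambda t: quad_form(A, [math.cos(t), math.sin(t)]),
)
```

The minimizing direction found by the scan lines up with the null-space vector, as the reply predicts.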
Return to Top
Subject: Re: Convert Social Security number to six digits and back again.
From: hisys@aol.com (Hisys)
Date: 6 Nov 1996 17:32:17 -0500
Mike,
I find your solution with a number in Base 36 very good, also because one
doesn't need any special characters, only the figures 0...9 (10) and the
letters A...Z (26).
That's exactly the solution I would recommend.
Rudi Hofstaedter
hisys@aol.com
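The base-36 packing endorsed above works because 36^6 = 2,176,782,336 exceeds 10^9, so any 9-digit SSN fits in 6 alphanumeric characters. A minimal sketch; the function names are invented for illustration:

```python
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def ssn_to_base36(ssn):
    """Pack a 9-digit SSN (string of digits) into exactly 6 base-36
    characters, using only 0-9 and A-Z as described in the post."""
    n = int(ssn)
    out = []
    for _ in range(6):
        n, r = divmod(n, 36)
        out.append(DIGITS[r])
    return "".join(reversed(out))

def base36_to_ssn(code):
    """Invert ssn_to_base36, restoring leading zeros."""
    n = 0
    for ch in code:
        n = n * 36 + DIGITS.index(ch)
    return "%09d" % n
```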
Return to Top
Subject: bootstrapping app. refs.
From: "William C. Snyder"
Date: Wed, 06 Nov 1996 14:03:51 -0800
Hi,
I am interested in references on the following subjects:
1. Bootstrapping to assess error in Monte Carlo integration results. I
   haven't seen any references on this at all. It seems that this would
   be a good empirical method.
2. Simulation/Bootstrapping for error propagation (statistical 
   tolerancing) in computer models. I have references, but none prior 
   to 1980. I'd like to find out when this was first proposed.
Many thanks for any articles, thoughts, etc.
-Will
-- 
Dr. William Snyder       
will@icess.ucsb.edu
http://www.icess.ucsb.edu/~will/will.html
Imaging Scientist
Institute for Computational Earth Systems Science
University of California Santa Barbara
Return to Top
Subject: Help: Integrating functions with compact support .... ?
From: "N. Sukumar"
Date: Wed, 06 Nov 1996 14:11:23 -0600
Hi,
Have a question on the integration of functions with compact
support which I hope someone in net-land can help me out with.
Consider a set of discrete points x_i (i = 1,2,...n) 
(x \in R^m, m = 1/2/3) and functions f = f(x-x_i) (f \in C^1) 
which have compact support [zero outside a ball of radius r_i]
in \Omega => f is max at x_i and monotonically decreases to zero.
Is there any way to choose disjoint domains \Omega_i 
(\union{\Omega_i} = \Omega) such that the integral
\int_\Omega f d\Omega can be computed accurately?
In 1D, by choosing the sub-domains to coincide with the
end of the support one can accurately integrate using
Gauss quadrature. However, in 2D and 3D, Gauss quadrature
isn't accurate and I don't see an "obvious" way to solve the 
problem accurately. If there are other judicious quadrature
schemes and/or refs/papers that address the above, I would 
appreciate if someone could drop me an e-mail.
Hope I was clear in explaining the problem.  If further
details are reqd., I'd be glad to supply them.
Thanks in advance.
-suku.
E-mail: suku@nwu.edu
-- 
N. Sukumar                                    Home Phone: (847)491-1522
Theoretical & Applied Mechanics               Work Phone: (847)467-3154
Northwestern U, Evanston IL 60208             E-mail: n-sukumar@nwu.edu
WWW:  *GO BLAZERS*
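The 1D observation above (a fixed quadrature rule recovers its accuracy when subdomain boundaries coincide with the edge of the support) is easy to verify numerically. A sketch using composite Simpson rather than Gauss quadrature; `bump` is an invented C^1 integrand supported on [-1, 1] with exact integral 16/15:

```python
def simpson(f, a, b, n=8):
    """Composite Simpson's rule with n (even) panels on [a, b]."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def bump(x):
    """C^1 bump: (1 - x^2)^2 on [-1, 1], zero outside."""
    return (1.0 - x * x) ** 2 if abs(x) < 1.0 else 0.0

# Integrating across the support edge loses accuracy (derivative kink
# at +/-1); splitting the domain at the support boundary restores it.
whole = simpson(bump, -2.0, 2.0, 8)
split = (simpson(bump, -2.0, -1.0, 8)
         + simpson(bump, -1.0, 1.0, 8)
         + simpson(bump, 1.0, 2.0, 8))
```

In 2D/3D the analogous split requires partitioning along curved support boundaries, which is exactly the difficulty the post raises.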
Return to Top
Subject: mathematics: The Curmudgeon Quotelist
From: marlow@concentric.com (marlowe)
Date: Wed, 06 Nov 1996 20:25:25 -0400
   These quotes about mathematics and science can be found on 'The
Curmudgeon Quotelist' at
          http://www.lm.com/~rww/index.html
 "Mathematics, rightly viewed, possesses not only truth, but supreme beauty,
a beauty cold and austere, like that of sculpture."
          Bertrand Russell
"Sex is the mathematics urge sublimated." 
                    M. C. Reed 
  "The laws of probability, so true in general, so fallacious in particular."
          Edward Gibbon
"Science has a simple faith that transcends utility. It is the faith that
it is the privilege of man to learn to understand, and that this is his
mission." 
                    Vannevar Bush 
"Science is nothing but developed perception, interpreted intent, common
sense rounded out and minutely articulated." 
                    George Santayana 
"Logical consequences are the scarecrows of fools and the beacons of wise
men." 
                    Thomas Henry Huxley 
-----------------------------------------------------------------------
 take a look at
     The Curmudgeon Quotelist
     http://www.lm.com/~rww/index.html
         Quotes that make you think
Return to Top
Subject: Different rhs and CG
From: cjlin@pelli.engin.umich.edu (Chih-jen Lin)
Date: 7 Nov 1996 03:30:57 GMT
Hi,
In my algorithm I have to solve two
linear systems Ax = b1, Ax = b2. Currently I use the conjugate gradient
method and have the following two questions:
(1) Is there any method that would let me warm-start the
second system?
(2) I found the two systems take quite different numbers of CG iterations,
and the first takes far fewer than the second. Usually, under what
conditions does changing the rhs produce such a difference?
Thanks in advance
Chih-Jen Lin
Univ. of Michigan 
Return to Top
Subject: Re: Help with vector spaces, please.
From: John Hench
Date: Tue, 05 Nov 1996 10:58:41 +0000
Robert Gelb wrote:
> W' is the set of all ordered pairs(x,y) of real numbers
> that satisfy the equation 2x+3y=1.  The answer at the end
> of the book says, that this set is not a vector space.
Let's rewrite this in the vernacular: Az=b where
A=[2,3], z=[x,y]', and b=1.  The acid test for
any linear (vector) space is that if z1 and z2
are elements, then so is alpha z1 + beta z2.
(alpha and beta are scalars).  In this case, we
note that A z1 = 1 and A z2 = 1, and therefore
A(alpha z1 + beta z2) = alpha + beta, which in general ~= 1.
The conclusion can only be that this is not a
linear space.  If we change the question
slightly, however, and ask if Az = 0 is a linear
space, then we could easily show that it is,
which leads to the famous property that a null
space of a matrix is indeed a linear space.
It turns out that the problem that you examined
is important enough to merit its own name; it is
called a linear variety and it comes up in all
sorts of optimization problems.
I hope this helps! Cheers, John
------------------------------------------------
 Dr. J.J. Hench  
 Dept. of Mathematics, Univ. of Reading, England   
 Institute of Informatics and Automation, Prague
-------------------------------------------------
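Hench's acid test is simple enough to check numerically. A tiny sketch; the sample points z1 and z2 are invented illustrations of solutions to 2x + 3y = 1:

```python
def on_variety(z):
    """True if the point (x, y) satisfies 2x + 3y = 1 (the set W')."""
    x, y = z
    return abs(2 * x + 3 * y - 1) < 1e-12

z1 = (0.5, 0.0)    # 2(0.5) + 3(0) = 1
z2 = (-1.0, 1.0)   # 2(-1) + 3(1) = 1

# Closure under addition fails: A(z1 + z2) = 1 + 1 = 2, not 1.
s = (z1[0] + z2[0], z1[1] + z2[1])
```

Replacing the right-hand side 1 with 0 makes the same check pass for any linear combination, which is Hench's point about the null space.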
Return to Top
Subject: Re: Minimizer stopping rule
From: lpa@netcom.com (Pierre Asselin)
Date: Thu, 7 Nov 1996 04:27:19 GMT
Jive Dadson  writes:
>I find I am not too clear on how one should stop a function-minimizer
>before it reaches machine-limit convergence. The NRC routines have
>several ways to stop minimizing f(x), including these:
>   1. If the gradient f'(x) becomes "small" on a step.
>   2. If the difference in x between two steps becomes "small"
>   3. The conj-grad routine frprmin() stops when the difference
>      in f(x) between two steps becomes "small".
>I have doubts about all of those stopping criteria.
>[rest elided]
Automatically recognizing zero when you see it is a lot of trouble.
I have gradually evolved a stopping criterion different from most.
No, it's not foolproof.
The context is some minimization method or other with a line search,
usually CG but sometimes Newton's method (where the line search is used
to prevent catastrophic divergence).  The line search is looking for
the zero crossing of a directional derivative.  For robustness, I
bracket a zero and refine the interval iteratively.  The search routine
might have to poke around a lot, and this is a good chance to gauge the
amount of nonlinearity in the system.  I look for the following:
     1- Having to extend the initial interval unexpectedly
	far forward to get a bracket.
     2- Needing more than a couple of Regula Falsi's
	to clinch the zero.
Maybe other things too, I forget (the code's at work).  The exit
criterion for the line search itself isn't too fancy, just return when
the directional derivative has been reduced to 0.01 of its starting
value, say.
Originally I used the nonlinearity score as a restart criterion for the
CG.  The interesting thing is that, in the endgame, the nonlinearity
tests get triggered by roundoff.  The search routine is working very
hard to find zeroes in noise, and it stumbles every time.  The pattern
of hits is different from what "real" nonlinearities produce during the
initial stages of the minimization.  The trick is to come up with an
`if' statement that recognizes the noise-induced pattern without
writing an artificial intelligence program.  I've got one that works,
for the most part...  It might be problem-specific, though.
-- 
--Pierre Asselin, Westminster, Colorado
  lpa@netcom.com
  pa@verano.sba.ca.us
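The bracket-refinement step Asselin describes can be sketched with plain regula falsi; here the iteration count doubles as the kind of "nonlinearity score" his item 2 uses (needing more than a couple of regula falsi steps signals nonlinearity). This is an illustrative sketch, not his code:

```python
def regula_falsi(g, a, b, tol=1e-10, max_iter=100):
    """Refine a bracket [a, b] with g(a)*g(b) < 0 by regula falsi.
    Returns (root, iterations); the count serves as a crude
    nonlinearity score for the bracketed function."""
    ga, gb = g(a), g(b)
    assert ga * gb < 0, "need a sign change to bracket a zero"
    for k in range(1, max_iter + 1):
        c = b - gb * (b - a) / (gb - ga)  # zero of the secant line
        gc = g(c)
        if abs(gc) <= tol:
            return c, k
        if ga * gc < 0:                   # keep the sign change
            b, gb = c, gc
        else:
            a, ga = c, gc
    return c, max_iter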
Return to Top
Subject: Re: Using C for number-crunching (was: Numerical solution to Schrodinger's Eq)
From: mfriesel@ix.netcom.com
Date: Wed, 06 Nov 1996 21:32:18 -0700
Peter Dalgaard BSA wrote:
> 
> Konrad Hinsen  writes:
> 
> ....>
> > If a subprogram reference causes a dummy argument to become associated
> > with an entity in a common block in the referenced subprogram or in a
> > subprogram referenced by the referenced subprogram, neither the dummy
> > argument nor the entity in the common block may become defined within
> > the subprogram or within a subprogram referenced by the referenced
> > subprogram.
> 
> Does anybody else perceive this as the result of convergence of the
> computer-speak and legalese dialects? "The party of the third part,
> hereafter denoted The Third Party..."
I reply:
No, it all makes sense.  Like most of the details of programming 
languages, however, it is unspeakably dull and learning about it will 
teach you nothing whatsoever about physics.
Return to Top
Subject: Re: Different rhs and CG
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Date: 7 Nov 1996 10:14:47 GMT
In article <55rl5h$eb3@srvr1.engin.umich.edu>, cjlin@pelli.engin.umich.edu (Chih-jen Lin) writes:
|> Hi,
|> 
|> In my algorithm I have to solve two
|> linear systems Ax = b1, Ax = b2. Currently I use the conjugate gradient
|> method and have the following two questions:
|> 
|> (1) Is there any method that would let me warm-start the
|> second system?
|> 
|> (2) I found the two systems take quite different numbers of CG iterations,
|> and the first takes far fewer than the second. Usually, under what
|> conditions does changing the rhs produce such a difference?
|> 
|> Thanks in advance
|> 
|> Chih-Jen Lin
|> Univ. of Michigan 
The number of steps necessary to obtain a reasonable precision with
CG depends on the eigenvalue distribution of the matrix _and_ the
representation of the initial residual in the eigenvectors. Therefore,
if you have a solution of your first system, take its residual with respect
to the second right-hand side; if it turns out to have only small components
with respect to the eigenvectors of the "large" eigenvalues of A,
then a warm start makes sense, otherwise not. If you take the eigensystem of A
as your coordinate system, then there holds
 \sum_{i=1}^n ( (x^{k+1}_i - x^*_i)^2 \lambda_i ) =
  \min \{ \sum_{i=1}^n ( (x^0_i - x^*_i)^2 \lambda_i (1 + \lambda_i p_k(\lambda_i))^2 ) :
                       p_k any polynomial of degree k \}
Here x^0 is the initial point, \lambda_i are the eigenvalues of A,
x^{k+1} is the (k+1)-th iterate and x^* is the solution; of course,
\lambda_i (x_i - x^*_i) is the i-th component of the residual at x
in this coordinate system.
The minimization is over the space of all polynomials of degree k;
this is the optimality property of CG.
Hope this helps,
peter
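To make the warm-start question concrete: reusing the first solution as the starting guess for the second system only changes the initial residual, exactly as the reply explains. A minimal plain-CG sketch (the toy matrix and right-hand sides are invented for illustration):

```python
def cg(A, b, x0, tol=1e-10, max_iter=200):
    """Plain conjugate gradients for a small dense symmetric
    positive definite A (lists of lists); returns (x, iterations)."""
    n = len(b)

    def mv(v):  # matrix-vector product A v
        return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

    x = list(x0)
    Ax = mv(x)
    r = [b[i] - Ax[i] for i in range(n)]   # initial residual b - A x0
    p = list(r)
    rr = sum(ri * ri for ri in r)
    for k in range(max_iter):
        if rr ** 0.5 <= tol:
            return x, k
        Ap = mv(p)
        alpha = rr / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rr_new = sum(ri * ri for ri in r)
        p = [r[i] + (rr_new / rr) * p[i] for i in range(n)]
        rr = rr_new
    return x, max_iter
```

Warm-starting the second solve means calling `cg(A, b2, x1)` with `x1` from the first solve; per the reply, it pays off only when the residual b2 - A x1 is small in the directions of the large eigenvalues.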
Return to Top
Subject: Re: calibration/interpolation?
From: Bill Simpson
Date: Wed, 6 Nov 1996 09:45:25 -0600
Update.  I think I will do Chebyshev approx on (lum,z), with say 30 terms.
This seems good (on some criteria Chebyshev approx is best) and easy.
Numerical recipes in C has code, so at least that's a place to start.
Bill
Return to Top
Subject: Re: Convolution of sparse vectors
From: Klaus Ramstoeck
Date: Thu, 07 Nov 1996 11:33:35 +0100
Warner Bruns wrote:
> 
> In article <328080D6.360F@fli.sh.bosch.de>, Thomas Bleile
>  writes:
> > Hello,
> >
> > I need to calculate the convolution of two large but sparse vectors
> > (dimension: (10^8 X 1) with 10^5 nonzero elements).
> > ...
>  just make an fft on your vectors, multiply the fft-ed,
>  and fft them back...
> 
>  wasn't that easy?
> 
>  Warner Bruns
Almost: maybe this guy does not have a supercomputer with lots
of memory at hand? The FFTs of the vectors will definitely not be
sparse, so this approach requires some 10^9 bytes of memory after all.
Still, it might be a good idea to give up on the sparsity of
the data and go for a special FFT algorithm that works efficiently
with the data on disk (see Numerical Recipes for a *start*).
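A third option, sketched below, is to exploit the sparsity directly instead of using an FFT at all: with each vector stored as an {index: value} dict, a double loop over nonzeros costs O(nnz(a)*nnz(b)) operations but only O(nnz) memory. For the posted sizes (10^5 nonzeros each) that is ~10^10 multiply-adds, which may or may not be acceptable; the toy vectors here are illustrative only.

```python
# Sketch: direct convolution of sparse vectors stored as dicts.
# Memory stays proportional to the number of nonzeros, avoiding the
# ~10^9-byte work arrays a full-length FFT would need.

def sparse_convolve(a, b):
    """Convolution c[k] = sum_i a[i]*b[k-i] of two sparse dicts."""
    c = {}
    for i, ai in a.items():
        for j, bj in b.items():
            c[i + j] = c.get(i + j, 0.0) + ai * bj
    return c

a = {0: 1.0, 3: 2.0}        # nonzeros of a nominally huge vector (toy sizes)
b = {1: 4.0, 2: -1.0}
c = sparse_convolve(a, b)    # -> {1: 4.0, 2: -1.0, 4: 8.0, 5: -2.0}
```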
Klaus
-- 
"Computers are useless. They can only give you answers." (Picasso)
Return to Top
Subject: Re: Different rhs and CG
From: eijkhout@jacobi.math.ucla.edu (Victor Eijkhout)
Date: 07 Nov 1996 09:42:41 GMT
In article <55rl5h$eb3@srvr1.engin.umich.edu> cjlin@pelli.engin.umich.edu (Chih-jen Lin) writes:
> In my algorithm I have to solve two 
> linear system Ax = b1, Ax = b2. Currently I use conjugate gradient
> method and have the following two questions ?
> 
> (1) Is there any methods such that I can have warm start of the 
> second system ?   
You can use the first system to get an estimate of the spectrum
(or at least the outer eigenvalues) of your matrix. Use those
for a polynomial preconditioning (or a Chebyshev iteration)
when you do the second system.
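As a concrete illustration of using eigenvalue bounds for the second solve, here is a plain-Python sketch of Chebyshev iteration (the three-term recurrence as given, e.g., in Saad's textbook). The 2x2 matrix and the exact bounds lmin, lmax fed to it are made-up toy data; in practice the bounds would be estimates recovered from the first CG run.

```python
# Sketch: Chebyshev iteration for A x = b given spectrum bounds [lmin, lmax].

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def chebyshev(A, b, lmin, lmax, iters=30):
    theta = 0.5 * (lmax + lmin)       # center of the spectrum interval
    delta = 0.5 * (lmax - lmin)       # half-width of the interval
    sigma = theta / delta
    rho = 1.0 / sigma
    x = [0.0] * len(b)
    r = list(b)                        # residual for x = 0
    d = [ri / theta for ri in r]
    for _ in range(iters):
        x = [xi + di for xi, di in zip(x, d)]
        r = [ri - adi for ri, adi in zip(r, matvec(A, d))]
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = [rho_new * rho * di + (2.0 * rho_new / delta) * ri
             for di, ri in zip(d, r)]
        rho = rho_new
    return x

A = [[2.0, 1.0], [1.0, 2.0]]            # eigenvalues 1 and 3
x = chebyshev(A, [3.0, 3.0], 1.0, 3.0)  # exact solution is [1, 1]
```

Unlike CG, the iteration needs no inner products, which is part of its appeal once the spectrum is known.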
Victor.
-- 
405 Hilgard Ave ...................... The US pays 120,000 rubles rent on its
Department of Mathematics, UCLA ........... Moscow embassy. At the signing of
Los Angeles CA 90024 ........................... the lease this was $170,000,
phone: +1 310 825 2173 / 9036 ....................... today it's about $22.56
http://www.math.ucla.edu/~eijkhout/                        [source: Sevodnya]
Return to Top
Subject: Re: Runge-Kutta for IBVP on second order PDE in 4 dim.?
From: matsh@alwaid.tdb.uu.se (Mats Holmström)
Date: 7 Nov 1996 10:48:02 GMT
In article <327F5976.2781@math.mcgill.ca>,
Jan Rosenzweig   wrote:
>A Montvay {TRACS} wrote:
>> 
>> Hi all,
>> 
>> I want to use a Runge-Kutta (or similar) method on an IBVP on the
>> three-dimensional wave equation
>> 
>> d2p/dx2+d2p/dy2+d2p/dz2-d2p/dt2=0
>
>   You can't use Runge-Kutta for IBVP. Runge Kutta is a method for 
>initial value problems. Why bother with that, what is wrong with the
>finite element method?
>
>-- 
>Jan Rosenzweig
I wonder what you think is wrong with Runge-Kutta for IBVP?  Of course
a RK method alone can't solve an IBVP.  But if you use the Method of
Lines, i.e., discretize your problem separately in space and time, RK
works just fine for the time integration.  That is, first you
discretize in space by finite elements/volumes/differences, or
something else.  You then have a system of ODEs which can be solved by
any standard ODE method, e.g., a Runge-Kutta method.
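The Method of Lines described above can be sketched in a few lines of plain Python for the 1D wave equation u_tt = u_xx on [0,1] with u = 0 at the boundaries: central differences in space, rewrite as the first-order system (u, v)' = (v, u_xx), and step the ODEs with classical RK4. Grid size, time step, and initial data are arbitrary illustrations.

```python
# Sketch: Method of Lines for the 1D wave equation, RK4 in time.

import math

n, dx, dt = 20, 1.0 / 20, 0.01

def rhs(state):
    u, v = state
    du = list(v)
    dv = [0.0] * len(u)
    for i in range(1, len(u) - 1):            # interior points only
        dv[i] = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2
    return du, dv

def rk4_step(state, dt):
    def axpy(s, a, k):                        # s + a*k, componentwise
        return tuple([si + a * ki for si, ki in zip(sc, kc)]
                     for sc, kc in zip(s, k))
    k1 = rhs(state)
    k2 = rhs(axpy(state, dt / 2, k1))
    k3 = rhs(axpy(state, dt / 2, k2))
    k4 = rhs(axpy(state, dt, k3))
    return tuple([si + dt / 6 * (a + 2 * b + 2 * c + d)
                  for si, a, b, c, d in zip(sc, k1c, k2c, k3c, k4c)]
                 for sc, k1c, k2c, k3c, k4c in zip(state, k1, k2, k3, k4))

u0 = [math.sin(math.pi * i * dx) for i in range(n + 1)]   # one sine mode
state = (u0, [0.0] * (n + 1))
for _ in range(100):                                      # advance to t = 1
    state = rk4_step(state, dt)
```

For this initial condition the exact solution is u(x,t) = sin(pi x) cos(pi t), so at t = 1 the profile should be close to -sin(pi x), up to the small dispersion error of the spatial grid.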
I have set the followup to sci.math.num-analysis, since it seems
appropriate. 
-- 
---
Mats Holmström                                   matsh@tdb.uu.se
Uppsala University                               http://www.tdb.uu.se/~matsh
Dept. of Scientific Computing                    Ph.: +46 18 18 29 80  
Return to Top
Subject: Re: books on stability of pdes
From: jarausch@numa1.igpm.rwth-aachen.de (Helmut Jarausch)
Date: 7 Nov 1996 13:20:51 GMT
In article <9610281954.AA00733@seb3.eng.ohio-state.edu>, venkat@seb1.eng.ohio-state.edu (Venkat) writes:
|> Could somebody give me some pointers to  books that contain stability analysis
|> of numerical methods applied p.d.e.'s. 
|> 
Have a look at these two
Heinz-Otto Kreiss, Jens Lorenz
Initial-Boundary Value Problems and the Navier-Stokes Equations
Academic Press, 1989
and
Bertil Gustafsson, Heinz-Otto Kreiss and Joseph Oliger
Time Dependent Problems and Difference Methods
Wiley, 1995
Helmut Jarausch.
Return to Top
Subject: Re: Q: how to find {v} s.t. {v}t[A]{v} is minimize?
From: ghe@saturn.superlink.net (Guangliang He)
Date: 7 Nov 1996 05:43:43 -0500
In article <328052E1.2CA68A66@cae.wisc.edu>,
Worawut Wisutmethangoon   wrote:
>Hi,
>	Although this is not directly related to numerical analysis, I hope
>some of you can help me with this problem.
>	If I have an nxn matrix [A], how do I find a vector {v} of
>dimension n such that {v}t[A]{v} is minimum ?  The matrix is known
>to be symmetric and semi-positive definite, if that matters.
>	I would really appreciate any pointers.
>
>Thank you in advance
>Worawut W.
>(wisutmet@cae.wisc.edu)
Without any constraint, you can always let v be the zero vector and get
zero as your minimum.
If you require the vector to have norm 1 ({v}t{v} = 1), then the eigenvector
of [A] corresponding to the smallest eigenvalue is the one you want.
If you choose a different constraint, you will get a different answer.
For example, requiring Sum(v[i]) = 1 gives (when A is invertible)
 v = inv(A)*l/(l'inv(A) l)
where l is the vector with all elements equal to 1.
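The sum-to-one case is easy to check numerically. Below is a plain-Python sketch on a made-up 2x2 symmetric positive definite matrix: form v = inv(A) l / (l' inv(A) l) via the explicit 2x2 inverse and compare its quadratic form against other feasible vectors.

```python
# Sketch: the constrained minimizer of v'Av subject to sum(v) = 1.

A = [[2.0, 1.0], [1.0, 2.0]]          # toy symmetric positive definite matrix

# Explicit 2x2 inverse: inv([[a,b],[c,d]]) = [[d,-b],[-c,a]] / (ad - bc)
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det, A[0][0] / det]]

Ainv_l = [row[0] + row[1] for row in Ainv]   # A^{-1} l, with l = (1, 1)
denom = sum(Ainv_l)                           # l' A^{-1} l
v = [w / denom for w in Ainv_l]               # the sum(v) = 1 minimizer

def quad(A, v):
    """The quadratic form v' A v."""
    return sum(v[i] * A[i][j] * v[j] for i in range(2) for j in range(2))
```

Here v comes out as [0.5, 0.5], and its quadratic form is no larger than that of any other vector whose entries sum to 1.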
Hope it helps.
Guangliang He
ghe@superlink.net
Return to Top
Subject: Re: PLEASE HELP ME: I need to factor/simplify an algebraic expression.
From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Date: 7 Nov 1996 09:14:40 -0500
In article <55p74u$r9k@chile.earthlink.net>,
Tai  wrote:
:Hi,
:
:I need to factor/simplify: (a^5 - b^5)/(a^3 - b^3)
:
:Please reply before 23:00 Nov.5/96.  Email me at 
:phaethon1112@earthlink.net with a list of steps taken to reach final 
:answer.  If you are reading this post after Nov.5, please ignore. Thank 
:you.
:
Received Nov. 7 in the morning. Happy to report that I followed the 
instructions. (What chutzpah!) ZVK (Slavek).
Return to Top
Subject: Re: Q: how to find {v} s.t. {v}t[A]{v} is minimize?
From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Date: 7 Nov 1996 10:19:42 -0500
In article <328052E1.2CA68A66@cae.wisc.edu>,
Worawut Wisutmethangoon   wrote:
>Hi,
>	Although this is not directly related to numerical analysis, I hope
>some of you can help me with this problem.
>	If I have an nxn matrix [A], how do I find a vector {v} of
>dimension n such that {v}t[A]{v} is minimum ?  The matrix is known
>to be symmetric and semi-positive definite, if that matters.
>	I would really appreciate any pointers.
 As posed, the question has a trivial answer: v=0.
 Perhaps you meant a unit vector v; in that case, v is any unit eigenvector
corresponding to the smallest eigenvalue of A.
 The theoretical concept behind this is the Rayleigh quotient; to make
it homogeneous, Rayleigh defined it as  {v}t[A]{v} / {v}t{v}.
Good luck, ZVK (Slavek).
Return to Top
Subject: Re: Using C for number-crunching (was: Numerical solution to
From: shenkin@still3.chem.columbia.edu (Peter Shenkin)
Date: 7 Nov 1996 17:01:15 GMT
I had asked:
In article <55ql9v$bj2@sol.ctr.columbia.edu>,
Peter Shenkin  wrote:
>	1.  How does "restrict" differ from the illfated "noalias"
>	    proposal from the previous standardization process?
The answer is to be found in http://www.lysator.liu.se/c/restrict.html.
>	2.  How can one obtain a copy of the current draft standard?
Proposals for the current draft can be found by going to
http://www.dmk.com/ and descending to the link labeled
"The ISO C committee's ftp archive".
-P.
-- 
****** Multicultural Holiday Song:  "I'm Dreaming of a White Kwanza" ******
* Peter S. Shenkin; Chemistry, Columbia U.; 3000 Broadway, Mail Code 3153 *
** NY, NY  10027;  shenkin@columbia.edu;  (212)854-5143;  FAX: 678-9039 ***
MacroModel WWW page: http://www.cc.columbia.edu/cu/chemistry/mmod/mmod.html
Return to Top
Subject: Iterative solver for banded matrix
From: pernil@p1542.pip.dknet.dk (Per Nielsen)
Date: 7 Nov 1996 18:44:57 GMT
Hi,
I am posting this for a colleague.
Does anyone know of a program that can solve the equation
A*X=B where A is a banded matrix? We are currently using
a routine based on LU factorization, but would like to
test a routine based on iterative methods. The reason
is that we have to solve the linear system millions
of times, where the matrix A (and B) changes only slightly
each time. We therefore believe that an iterative method
will result in faster execution overall.
                           Per
=====================================================================
                                   *------------------------------------*
                                   |  Per Nielsen                       |
*------------------------------*   |  N\o rgaardsvej 21 1.tv            |
|   A few weeks of development |   |  DK-2800 Lyngby                    |
|   and testing can save an    |   |  Denmark                           |
|   afternoon in the library.  |   |  E-mail:pernil@pip.dknet.dk        |
*------------------------------*   |  URL: http://pip.dknet.dk/~pip1542 |
                                   |  Home phone: +45 45 93 44 63       |  
                                   |  Office phone: +45 45 27 24 66     |  
                                   *------------------------------------*
-- 
Return to Top
Subject: Re: [Q] Need to plot in spherical coords.
From: "Dann Corbit"
Date: 7 Nov 1996 19:17:05 GMT
Since you are plotting spherical coordinates, you need
to choose some kind of projection ( unless you are
plotting on a physical ball ).  It sounds like a perfect 
application for a GIS product.  If this is actually a real
sphere (not an oblate spheroid like the earth) then
the math becomes simpler.  You might like to pose the
same question to comp.infosystems.gis and 
comp.graphics.algorithms, as those groups are more
closely focused on your problem at hand.
You can get a public domain mapping package called
PROJ4 that will transform your spherical coordinates
to many, many projections.  Once you have transformed
your data via the projection, plotting the information will
be much simpler.
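If no projection package is at hand, a minimal fallback is to convert the (r, theta, phi) grid to Cartesian coordinates and hand (x, y, z) to any 3D surface plotter. The sketch below uses the physics convention (phi measured from the +z axis), which is an assumption here; it must be matched to however the measurement grid is actually defined.

```python
# Sketch: spherical-to-Cartesian conversion for plotting a surface r(theta, phi).

import math

def sph_to_cart(r, theta_deg, phi_deg):
    theta = math.radians(theta_deg)   # azimuth in the x-y plane
    phi = math.radians(phi_deg)       # polar angle from the +z axis
    return (r * math.sin(phi) * math.cos(theta),
            r * math.sin(phi) * math.sin(theta),
            r * math.cos(phi))

# e.g. an offset dB value plotted as radius:
x, y, z = sph_to_cart(1.0, 0.0, 90.0)   # a point on the +x axis
```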
Craig Ross  wrote in article
<3280DEF8.41C6@rclsgi.eng.ohio-state.edu>...
> If this is not the place to ask this question, please forgive me
> and point out the way.
> 
> I need to plot a surface in spherical coordinates and want the 
> best flexibility and most ease.  The data is spaced on a regular
> grid, from 0 <= phi <= 90 degrees and 0 <= theta <= 90 degrees.
> I will map this, using symmetry, to the other 7 octants.
> 
> The r axis data represents the sound pressure emitted from a source,
> in dB, so r will be offset to be greater than 0 for visualization,
> if I can't pick the r axis scale independently.  Alternatively,
> color maps may be used to represent the r data on a uniform sphere.
> 
> I currently have access to Matlab, Maple, Excel, and Axum but haven't
> found a simple solution.  I'm using Matlab to crunch the measured
> data and like Excel's easy rotation, but dislike that color maps
> in Excel will represent variation on the z axis, not r.
> 
> Any ideas?  (I will code, if required, but seek existing solutions.)
> 
> 
> -- 
> Craig Ross				Find just about anything at:
> ross.294@osu.edu			http://www.albany.net/allinone/
> ross@rclsgi.eng.ohio-state.edu		or	http://www.yahoo.com/
> 
Return to Top
Subject: [Announce] CFD Jobs Database (WWW)
From: jola@tfd.chalmers.se (Jonas Larsson)
Date: 6 Nov 1996 18:01:34 GMT
I'm pleased to announce a new public service in CFD Online:
http://www.tfd.chalmers.se:8080/CFD_Jobs/index.phtml
The CFD Jobs Database is a forum where institutions and companies
can advertise open positions in the field of Computational Fluid
Dynamics. The service is free and open to everyone.
Individuals looking for jobs can browse the entire database of
open positions, or search for records that match their criteria.
This is a very effective way to find suitable candidates. Previous
experience from the "CFD Jobs List" shows that jobs listed here
generate a lot of applications. I've got a lot of positive responses
from companies who have used the "CFD Jobs List".
The system is brand new, so be nice to it. Report all bugs to me.
-- 
/-----------------------------------------------------------------------\
 Jonas Larsson (jola@tfd.chalmers.se)  Dept. of Thermo & Fluid Dynamics
 Phone: +46-31-7721388                 Chalmers University of Technology
 Fax:   +46-31-180976                  S-412 96 Gothenburg, Sweden
Return to Top
Subject: uniqueness of a nonlinear ODE
From: "Pablo A. Perez-Fernandez"
Date: Thu, 07 Nov 1996 10:56:09 -0800
The following ODE came up in a problem in conformal geometry:
                a D_x( (1-x)^2 D_x u ) + exp(2u(x)) = 1
		where 1/2 < a < 1, and -1<=x<=1.
u(x) = 0 is a trivial solution. Has anybody seen this equation or
a similar one before?  I need to know if u(x)=0 is the unique solution
of the above equation when 1/2 < a <= 1.  The result is known for
16/25<=a<=1.
Any numerical methods that would yield a uniqueness proof are welcome.
In fact, a counterexample to the uniqueness for 1/2
Return to Top
Subject: linear resolution
From: ANA ISABEL GODOY ESMERALDA
Date: 7 Nov 1996 19:39:44 GMT
Hello, I need some ideas about something related to my subject, for
a small project. Sorry for my
Return to Top
Subject: Question: linear Diophantine eqn solver
From: Vladislav Panferov
Date: Wed, 06 Nov 1996 13:17:37 +0100
Hi all,
Does anyone know of some code (Fortran or C(++) preferably) which
solves a single linear Diophantine equation? 
I've seen some references on the Net (Fortenbacher's algorithm, for
example), but I haven't managed to find an implementation so far.
Any help would be appreciated.
Thanks,
Vladislav
Return to Top
Subject: Re: Iterative solver for banded matrix
From: "Hans D. Mittelmann"
Date: Thu, 07 Nov 1996 17:30:40 -0700
Per Nielsen wrote:
> 
> Hi,
> 
> I am posting this for a collegue.
> 
> Does know about a program that can solve the equation
> A*X=B where A is a banded matrix. We are currently using
> a routine that uses LU-factorization, but would like to
> test a routine based on iterative methods. The reason for
> this is that we have to solve the linear system millions
> of times, where the matrix A (and B) changes only slightly
> each time. We thus believe that an iterative method overall
> will result in a faster execution.
> 
>                            Per
Iterative methods are applied when the matrix is sparse; bandedness by
itself is generally not an advantage for these methods. All
you normally need to supply is a routine that multiplies your matrix
by a vector. What IS absolutely crucial, however, are properties such
as symmetry, positive definiteness etc. Depending on these properties
you choose a method and possibly also a preconditioner. Start, maybe, at
netlib: there is ITPACK, there are the TEMPLATES,
and there are several packages in LINALG, etc.
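The "just supply a matrix-vector routine" point can be illustrated with a short plain-Python sketch: conjugate gradients that touch the matrix only through a banded mat-vec. The diagonal-list band storage and the toy tridiagonal SPD system below are illustrative assumptions, not any particular package's format.

```python
# Sketch: CG driven entirely by a user-supplied banded mat-vec.

def banded_matvec(main, off, v):
    """y = A v for a symmetric tridiagonal A with given main/off diagonals."""
    n = len(v)
    y = [main[i] * v[i] for i in range(n)]
    for i in range(n - 1):
        y[i] += off[i] * v[i + 1]
        y[i + 1] += off[i] * v[i]
    return y

def cg(matvec, b, tol=1e-10, maxit=200):
    x = [0.0] * len(b)
    r = list(b)
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(maxit):
        if rs**0.5 <= tol:
            break
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

n = 20
main, off = [2.0] * n, [-1.0] * (n - 1)       # 1D Laplacian: SPD, tridiagonal
b = [1.0] * n
x = cg(lambda v: banded_matvec(main, off, v), b)
```

Note this plain CG assumes the matrix is symmetric positive definite, which is exactly the kind of property the paragraph above says drives the choice of method.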
-- 
Hans D. Mittelmann			http://plato.la.asu.edu/
Arizona State University		Phone: (602) 965-6595
Department of Mathematics		Fax:   (602) 965-0461
Tempe, AZ 85287-1804			email: mittelmann@asu.edu
Return to Top
