Subject: Re: PDEase FEA soft from Macsyma-any experiences?
From: WOOWOON@chollian.dacom.co.kr (Chollian NEWS GROUP user)
Date: 6 Nov 1996 10:48:16 GMT
Victor Kharin (kharin@udc.es) wrote:
: Hi..
: Has anyone had any good or bad experiences using the PDEase2D finite
: element software from Macsyma Inc.?
: I am planning to use this software under Windows NT for nonlinear
: problems of solid mechanics coupled with stress-strain dependent
: transient heat-mass transfer.
: I am wondering just about its "nonlinear capabilities", in particular,
: could it serve well for elastoplastic problems with deformational
: Ramberg-Osgood like plasticity (small strains)? How reliable is this
: software?
: In general:
: Are there any problems with this software that I must be aware of?
: How friendly and easy to use is it?
: What should I expect from the package?
: I would appreciate every comment, suggestion, idea..
: ---------------------------------------------------------------
: Victor Kharin ETSI Caminos
: tel: +34-81-131150, ext. 450 Campus de Elvina
: fax: +34-81-132876 15192 La Coruna
: E-mail: kharin@udc.es SPAIN
: ---------------------------------------------------------------
Subject: Re: Minimizer stopping rule
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Date: 6 Nov 1996 11:15:12 GMT
In article <327F0D61.6804@ix.netcom.com>, Jive Dadson writes:
|> I find I am not too clear on how one should stop a function-minimizer
|> before it reaches machine-limit convergence. The NRC routines have
|> several ways to stop minimizing f(x), including these:
|>
|> 1. If the gradient f'(x) becomes "small" on a step.
|> 2. If the difference in x between two steps becomes "small"
|> 3. The conj-grad routine frprmin() stops when the difference
|> in f(x) between two steps becomes "small".
|>
|> I have doubts about all of those stopping criteria.
|>
|> 1. In monitoring the city block norm for the gradient in actual,
|> noisy problems, I find that it hops around quite a bit, and frequently
|> becomes almost as small as machine limits allow it to become
|> at the actual minimum. The NRC routine often stops prematurely
|> if you don't set "gtol" very low. This is the stopping criterion
|> it uses:
|>
|> test = 0
|> den = max(f(x),1);
|>
|> FOR ALL DIMENSIONS i
|> temp = abs(gradient[i]) * max(abs(x[i]),1)/den;
|> IF temp > test THEN test=temp;
|>
|> IF test < gtol THEN quit
|>
|> 2. The second criterion particularly does not seem appropriate for
|> algorithms that use an approximate line search, because it can cause
|> the algorithm to stop because of one ineffective step. Yet, that is the
|> other stopping criterion for the NRC BFGS routine dfpmin(), which
|> uses approximate line searches.
|>
|> test = 0;
|> FOR ALL DIMENSIONS i
|> temp = abs(x[i]-prev_x[i])/max(abs(x[i]),1);
|> IF temp > test THEN test=temp;
|>
|> IF test < small THEN quit
|>
|> I also question the division by max(abs(x[i]),1). When training a neural
|> network, (which is what I am mostly using this for), scaling is
|> appropriate for some parameters ("weights"), but perhaps not for
|> others ("biases"). If you are going to stop on near x-convergence,
|> I think the user needs to specify the x norm.
|>
|> 3. The third criterion has the problems of the first two: It can
|> cause early stopping in a relative "flat" spot, or when a single
|> step is ineffective because of not-exact line-search.
|>
|> test = 2.0*abs(f(x)-previous_f_x);
|> lim = ftol*(abs(f(x))+abs(previous_f_x)+EPS);
|> IF test <= lim THEN quit
|>
|> It would seem at the very least you should insist that these criteria are
|> met for some number of consecutive steps before stopping.
|>
|> Ideally, one would like for the user to be able to specify a small
|> number e, and say, "Stop when |f(x)-f(M)| < e * abs(f(M))." Obviously that
|> criterion cannot be tested with certainty. Or the user could specify a
|> norm ||.||, and say, "Stop when ||x - M|| < e." Is there a way to stop before
|> complete convergence when it is "very probably" true that you are within an
|> epsilon of the minimum?
|>
|> Gurus, please enlighten.
|>
|> Thanks,
|> J.
your observations and remarks are quite correct. Indeed, a good algorithm
should insist on continuing "as long as it makes sense" and then stop,
giving the user an appropriate message. On the other hand ...
imagine what happens if you minimize with f_cur-f_new = 1.d-15 for some
thousands of steps? In principle your problem is easily solved:
let the algorithm stop with an a posteriori error estimate (a reliable one).
This can be done using interval arithmetic, but requires an interval
arithmetic extension of the evaluation of f and grad_f (_and_ a
"sufficiently accurate" approximation of the Hessian, such that e.g.
the Krawczyk operator is well defined and contractive). Look at the
books of Ratschek & Rokne, Halsted Press 1988, or Eldon Hansen, Dekker,
New York 1992. See also the code in netlib/toms/681.
The stopping criteria you mention are quite common and not unreasonable for
superlinearly convergent algorithms (such as BFGS).
It is a matter of taste to require fulfillment of the stopping criterion
for some consecutive steps (many algorithms do that), but this device
gives you no certainty. No foolproof termination criterion can be devised
without safe lower and upper bounds for the eigenvalues of the Hessian
in a neighborhood of the solution containing the current point.
Besides ... NRC dfpmin (actually, a direct BFGS-update
of the inverse Hessian) uses float _and_ a linesearch based on function
values. This limits final precision to about 1.e-4 in the best case,
and in case of a little ill-conditioning limits final precision to nothing ...
hope these remarks help
peter
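For reference, the three quoted tests can be collected into one routine. This is a Python/NumPy sketch of the NRC-style criteria as quoted, not the NR code itself; the function and tolerance names and defaults are made up:

```python
import numpy as np

def nrc_style_tests(x, prev_x, fx, prev_fx, grad,
                    gtol=1e-8, xtol=1e-10, ftol=1e-10, eps=1e-18):
    """Return (grad_small, step_small, f_small), the three quoted tests.
    The max(|x_i|, 1) and max(|f|, 1) scalings follow the quoted pseudocode."""
    den = max(abs(fx), 1.0)
    grad_small = np.max(np.abs(grad) * np.maximum(np.abs(x), 1.0)) / den < gtol
    step_small = np.max(np.abs(x - prev_x) / np.maximum(np.abs(x), 1.0)) < xtol
    f_small = 2.0 * abs(fx - prev_fx) <= ftol * (abs(fx) + abs(prev_fx) + eps)
    return grad_small, step_small, f_small
```

A driver following the "consecutive steps" caveat discussed above would stop only after several successive iterations pass the chosen test, which still, as noted, gives no certainty.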
Subject: Re: non linear cubic spline
From: pgh@bnr.co.uk (Peter Hamer)
Date: 6 Nov 1996 11:59:04 GMT
In article <327CCC21.63A6@interlog.com> falkiner@interlog.com writes:
>I am looking for a PC routine that will do a non linear cubic spline,
>where the "stiffness" of the spline can be an input parameter over a
>range of 0-1 where 0 would be a cubic spline fit where the spline curve
>goes through each of the input data points and 1 would be a linear
>regression. there is a routine in the SAS mainframe library (NLIN?)
>that does this. Are there any alternates, as I can't afford SAS for
>personal use on the PC.
There is FORTRAN code for smoothing cubic splines by Woltring in statlib.
In addition to doing the sort of thing you want, it can also work out the
optimum degree of smoothing by generalized cross-validation.
H J Woltring
A FORTRAN package for generalised, cross-validatory spline smoothing
and differentiation.
Adv. Eng. Software 8(2), pp. 104-113, 1987.
The code in statlib is called something like gcvspl; anyway it has gcv
in the name.
Peter
PS It expects the data sorted "in x". If several values are equal in x
it tries to fit a degenerate spline (with some pieces of zero length).
You can fix this up by condensing the equal (or essentially equal) values
into one point with less variance [email me if you want more details]. I
assume that all attempts at piece-wise cubics with one cubic between each
pair of points have the same problem (but not necessarily the features to
use to get round it).
Subject: Re: Using C for number-crunching (was: Numerical solution to
From: shenkin@still3.chem.columbia.edu (Peter Shenkin)
Date: 6 Nov 1996 18:27:11 GMT
In article ,
David Kastrup wrote:
>shenkin@still3.chem.columbia.edu (Peter Shenkin) writes:
>
>: First, I recently posted a disclaimer to two of my earlier postings,
>@> but now I want to disclaim the disclaimer. :-) Someone had posted
>@> saying that const qualification (in C) does not convey non-alias
>@> information, because the const qualifier guarantees only that no
>@> attempt will be made to write to the underlying addresses *through the
>@> const-qualified pointer*.
>
>This was me, and I stand by my statement. See below why.
Ugh. I have to eat my words again. I'm getting tired of the same
meal. :-)
>@> But now (sparked by Nick's posting, which implicitly seemed to
>@> concur
>@> with my original) I've looked this up in the C standard, section
>@> 6.5.3. It states, in part, "If an attempt is made to modify an object
>@> defined with a const-qualified type through use of an lvalue with
>@> non-const-qualified type, the behavior is undefined."
Yes. The operative word here is "defined"; I read it as if it
had said "declared".
>@> Thus, the standard is quite clearly saying that the compiler can
>@> assume from the const-qualified declaration of b[] that the contents
>@> of b[] will not be altered even by means of dereference through a.
>@> I.e., my original posting was correct.
>
>And that's where you are being wrong in your interpretation. A
>pointer, be it const or not, does *not* define an object. It just
>references it. ....
And Christian Bau (christian.bau@isltd.insignia.com) pointed out
that if I were correct, memmove() couldn't work. Its prototype
is as follows:
void *memmove (void *s1, const void *s2, size_t n);
The whole point of memmove() is to copy the storage pointed to
by s2 to the destinations pointed to by s1 when these blocks
overlap. If the const qualification of s2 forbade the overwriting
of storage pointed to by s2, then this function prototype for
memmove() would be incorrect.
I understand that the draft copy of the next C language standard
has a "restrict" keyword which guarantees the behavior we'd
like. I guess this is modeled after Cray's "restrict".
I have two questions about this:
1. How does "restrict" differ from the ill-fated "noalias"
proposal from the previous standardization process?
2. How can one obtain a copy of the current draft standard?
Living and (hopefully) learning,
-P.
--
****************** In Memoriam, Bill Monroe, 1911 - 1996 ******************
* Peter S. Shenkin; Chemistry, Columbia U.; 3000 Broadway, Mail Code 3153 *
** NY, NY 10027; shenkin@columbia.edu; (212)854-5143; FAX: 678-9039 ***
MacroModel WWW page: http://www.cc.columbia.edu/cu/chemistry/mmod/mmod.html
Subject: [Q] Need to plot in spherical coords.
From: Craig Ross
Date: Wed, 06 Nov 1996 13:54:48 -0500
If this is not the place to ask this question, please forgive me
and point out the way.
I need to plot a surface in spherical coordinates and want the
best flexibility and most ease. The data is spaced on a regular
grid, from 0 <= phi <= 90 degrees and 0 <= theta <= 90 degrees.
I will map this, using symmetry, to the other 7 octants.
The r axis data represents the sound pressure emitted from a source,
in dB, so r will be offset to be greater than 0 for visualization,
if I can't pick the r axis scale independently. Alternatively,
color maps may be used to represent the r data on a uniform sphere.
I currently have access to Matlab, Maple, Excel, and Axum but haven't
found a simple solution. I'm using Matlab to crunch the measured
data and like Excel's easy rotation, but dislike that color maps
in Excel will represent variation on the z axis, not r.
Any ideas? (I will code, if required, but seek existing solutions.)
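If coding turns out to be necessary, one low-tech route is to convert the grid to Cartesian coordinates and hand the result to any 3-D surface plotter. A minimal Python/NumPy sketch; the angle convention here (theta = azimuth, phi = elevation) is an assumption, so adjust it to match the measurement setup:

```python
import numpy as np

def spherical_to_cartesian(r, theta_deg, phi_deg):
    """Convert (r, theta, phi) to (x, y, z).  theta is taken as the azimuth
    in the x-y plane and phi as the elevation above it (one common
    convention; swap or complement the angles if your data differ).
    r should already be offset so it is positive, as described above."""
    theta = np.radians(theta_deg)
    phi = np.radians(phi_deg)
    x = r * np.cos(phi) * np.cos(theta)
    y = r * np.cos(phi) * np.sin(theta)
    z = r * np.sin(phi)
    return x, y, z
```

The same three lines translate directly into Matlab, after which surf() with a color map driven by the dB values gives the uniform-sphere picture described above.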
--
Craig Ross Find just about anything at:
ross.294@osu.edu http://www.albany.net/allinone/
ross@rclsgi.eng.ohio-state.edu or http://www.yahoo.com/
Subject: Re: Convolution of sparse vectors
From: bruns@tetibm2.ee.TU-Berlin.DE (Warner Bruns)
Date: 6 Nov 1996 19:02:56 GMT
In article <328080D6.360F@fli.sh.bosch.de>, Thomas Bleile
writes:
> Hello,
>
> I need to calculate the convolution of two large but sparse vectors
> (dimension: (10^8 X 1) with 10^5 nonzero elements).
> By the structure of these vectors, I know that the result will be sparse
> too (aproximately 10^6 nonzero elements).
> I am looking for some algorithm or c-library to do this job in less than
> one week of CPU-time ;-)
>
> Any ideas ? Pointers ? Literature ?
>
> Thanks in advance,
>
> Thomas
> --
>
> ====================================================================
> Dipl.-Ing. Thomas Bleile email: bleile@fli.sh.bosch.de
> Robert Bosch GmbH, Abt. FV/FLI
> Postfach 10 60 50 Tel: (+49 | 0) 711 811-6659
> D-70049 Stuttgart Fax: (+49 | 0) 711 811-7602
> ====================================================================
just take an FFT of your vectors, multiply the transforms,
and FFT them back...
wasn't that easy?
Warner Bruns
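The suggestion above, sketched in Python/NumPy. Zero-padding to length len(a)+len(b)-1 makes the circular convolution computed by the FFT equal the linear one; note that for the 10^8-length vectors in the question the padded dense transforms would need gigabytes of memory, so this shows the shape of the idea rather than a drop-in solution:

```python
import numpy as np

def fft_convolve(a, b):
    """Full linear convolution of real vectors a and b via the FFT.
    Zero-padding to length len(a)+len(b)-1 avoids wrap-around, so the
    circular convolution the FFT computes equals the linear one."""
    n = len(a) + len(b) - 1
    fa = np.fft.rfft(a, n)
    fb = np.fft.rfft(b, n)
    return np.fft.irfft(fa * fb, n)
```

For small vectors the result matches a direct convolution such as np.convolve(a, b).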
Subject: Re: Q: how to find {v} s.t. {v}t[A]{v} is minimize?
From: nsinger@eos.hitc.com (Nick Singer)
Date: 6 Nov 1996 20:29:55 GMT
In article <328052E1.2CA68A66@cae.wisc.edu>,
wisutmet@cae.wisc.edu says...
>Although this is not directly related to numerical analysis,
>I hope some of you can help me with this problem.
>If I have an nxn matrix [A], how do I find a vector {v} of
>dimension n such that {v}t[A]{v} is minimum ? The matrix is
>known to be symmetric and semi-positive definite, if that
>matters. I would really appreciate any pointers.
You presumably mean for ||v||=1; otherwise, the answer just
scales quadratically with ||v||.
Take v to be an eigenvector associated with the smallest
eigenvalue of A, say lambda. Since A is symmetric and
semi-positive definite, its eigenvalues are nonnegative
reals, and it may have a zero eigenvalue. If it does,
take v in the null space of A; then {v}t[A]{v} = 0.
In general, if you want to find a vector that minimizes or
maximizes {v}t[A]{v}, the same approach applies--and you end
up with {v}t[A]{v} = lambda * ||v||^2.
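A Python/NumPy sketch of this recipe (the test matrix is illustrative). numpy.linalg.eigh returns eigenvalues in ascending order, so the first eigenpair gives the minimizer over unit vectors:

```python
import numpy as np

def min_quadratic_form(A):
    """For symmetric A, return (lambda, v): the smallest eigenvalue and a
    unit eigenvector, so v minimizes v.T @ A @ v over unit vectors.
    eigh returns eigenvalues in ascending order, hence index 0."""
    w, V = np.linalg.eigh(A)
    return w[0], V[:, 0]
```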
Subject: Help: Integrating functions with compact support .... ?
From: "N. Sukumar"
Date: Wed, 06 Nov 1996 14:11:23 -0600
Hi,
Have a question on the integration of functions with compact
support which I hope someone in net-land can help me out with.
Consider a set of discrete points x_i (i = 1,2,...,n)
(x \in R^m, m = 1/2/3) and functions f = f(x-x_i) (f \in C^1)
which have compact support [zero outside a ball of radius r_i]
in \Omega => f is max at x_i and monotonically decreases to zero.
Is there any way to choose disjoint domains \Omega_i
(\union{\Omega_i} = \Omega) such that the integral
\int_\Omega f d\Omega can be computed accurately?
In 1D, by choosing the sub-domains to coincide with the
end of the support one can accurately integrate using
Gauss quadrature. However, in 2D and 3D, Gauss quadrature
isn't accurate and I don't see an "obvious" way to solve the
problem accurately. If there are other judicious quadrature
schemes and/or refs/papers that address the above, I would
appreciate if someone could drop me an e-mail.
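In the 1-D case mentioned above, the remedy is easy to demonstrate. A Python/NumPy sketch; the C^1 bump (1-x^2)^2 on [-1,1] is a made-up example with exact integral 16/15. A Gauss rule straddling the support edge loses accuracy, while subintervals aligned with the edge recover it:

```python
import numpy as np

def gauss_on_interval(f, a, b, npts):
    """Gauss-Legendre quadrature of f on [a, b]."""
    t, w = np.polynomial.legendre.leggauss(npts)
    x = 0.5 * (b - a) * t + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(w * f(x))

# A C^1 bump supported on [-1, 1]; its exact integral over the line is 16/15.
f = lambda x: np.where(np.abs(x) < 1.0, (1.0 - x**2)**2, 0.0)

naive = gauss_on_interval(f, -2.0, 2.0, 8)          # rule straddles the kinks
split = sum(gauss_on_interval(f, a, b, 8)           # pieces aligned with support
            for a, b in [(-2.0, -1.0), (-1.0, 1.0), (1.0, 2.0)])
```

The split rule is exact here because each piece is a polynomial; the naive rule is not. The difficulty in 2-D/3-D, as the post says, is that the support boundaries are curved, so no such exact alignment with standard product Gauss cells exists.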
Hope I was clear in explaining the problem. If further
details are reqd., I'd be glad to supply them.
Thanks in advance.
-suku.
E-mail: suku@nwu.edu
--
N. Sukumar Home Phone: (847)491-1522
Theoretical & Applied Mechanics Work Phone: (847)467-3154
Northwestern U, Evanston IL 60208 E-mail: n-sukumar@nwu.edu
WWW: *GO BLAZERS*
Subject: mathematics: The Curmudgeon Quotelist
From: marlow@concentric.com (marlowe)
Date: Wed, 06 Nov 1996 20:25:25 -0400
These quotes about mathematics and science can be found on 'The
Curmudgeon Quotelist' at
http://www.lm.com/~rww/index.html
"Mathematics, rightly viewed, possesses not only truth, but supreme beauty --
a beauty cold and austere, like that of sculpture."
Bertrand Russell
"Sex is the mathematics urge sublimated."
M. C. Reed
"The laws of probability, so true in general, so fallacious in particular."
Edward Gibbon
"Science has a simple faith that transcends utility. It is the faith that
it is the privilege of man to learn to understand, and that this is his
mission."
Vannevar Bush
"Science is nothing but developed perception, interpreted intent, common
sense rounded out and minutely articulated."
George Santayana
"Logical consequences are the scarecrows of fools and the beacons of wise
men."
Thomas Henry Huxley
take a look at
The Curmudgeon Quotelist
http://www.lm.com/~rww/index.html
Quotes that make you think
Subject: Re: Help with vector spaces, please.
From: John Hench
Date: Tue, 05 Nov 1996 10:58:41 +0000
Robert Gelb wrote:
> W' is the set of all ordered pairs(x,y) of real numbers
> that satisfy the equation 2x+3y=1. The answer at the end
> of the book says, that this set is not a vector space.
Let's rewrite this in the vernacular: Az=b where
A=[2,3], z=[x,y]', and b=1. The acid test for
any linear (vector) space is that if z1 and z2
are elements, then so is alpha z1 + beta z2.
(alpha and beta are scalars). In this case, we
note that A z1 = 1 and A z2 = 1, and therefore
A (alpha z1 + beta z2) = alpha + beta, which in
general is not 1. The conclusion can only be that
this is not a linear space. If we change the question
slightly, however, and ask if Az = 0 is a linear
space, then we could easily show that it is,
which leads to the famous property that a null
space of a matrix is indeed a linear space.
It turns out that the problem that you examined
is important enough to merit its own name; it is
called a linear variety and it comes up in all
sorts of optimization problems.
I hope this helps! Cheers, John
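John's acid test is easy to check numerically. A small Python/NumPy sketch with two concrete solutions of 2x+3y=1 (the particular points are arbitrary):

```python
import numpy as np

# The set {(x, y) : 2x + 3y = 1} fails the closure test from the post:
# A(z1 + z2) = 2, not 1, so the sum of two solutions is not a solution.
A = np.array([2.0, 3.0])
z1 = np.array([0.5, 0.0])    # 2*0.5 + 3*0  = 1
z2 = np.array([-1.0, 1.0])   # 2*(-1) + 3*1 = 1
assert np.isclose(A @ z1, 1.0) and np.isclose(A @ z2, 1.0)
assert not np.isclose(A @ (z1 + z2), 1.0)   # closure fails: not a vector space

# The homogeneous set {z : A z = 0} (the null space) IS closed under scaling:
n1 = np.array([3.0, -2.0])   # 2*3 + 3*(-2) = 0
assert np.isclose(A @ (2.5 * n1), 0.0)
```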
------------------------------------------------
Dr. J.J. Hench
Dept. of Mathematics, Univ. of Reading, England
Institute of Informatics and Automation, Prague
-------------------------------------------------
Subject: Re: Minimizer stopping rule
From: lpa@netcom.com (Pierre Asselin)
Date: Thu, 7 Nov 1996 04:27:19 GMT
Jive Dadson writes:
>I find I am not too clear on how one should stop a function-minimizer
>before it reaches machine-limit convergence. The NRC routines have
>several ways to stop minimizing f(x), including these:
> 1. If the gradient f'(x) becomes "small" on a step.
> 2. If the difference in x between two steps becomes "small"
> 3. The conj-grad routine frprmin() stops when the difference
> in f(x) between two steps becomes "small".
>I have doubts about all of those stopping criteria.
>[rest elided]
Automatically recognizing zero when you see it is a lot of trouble.
I have gradually evolved a stopping criterion different from most.
No, it's not foolproof.
The context is some minimization method or other with a line search,
usually CG but sometimes Newton's method (where the line search is used
to prevent catastrophic divergence). The line search is looking for
the zero crossing of a directional derivative. For robustness, I
bracket a zero and refine the interval iteratively. The search routine
might have to poke around a lot, and this is a good chance to gauge the
amount of nonlinearity in the system. I look for the following:
1- Having to extend the initial interval unexpectedly
far forward to get a bracket.
2- Needing more than a couple of Regula Falsi's
to clinch the zero.
Maybe other things too, I forget (the code's at work). The exit
criterion for the line search itself isn't too fancy, just return when
the directional derivative has been reduced to 0.01 of its starting
value, say.
Originally I used the nonlinearity score as a restart criterion for the
CG. The interesting thing is that, in the endgame, the nonlinearity
tests get triggered by roundoff. The search routine is working very
hard to find zeroes in noise, and it stumbles every time. The pattern
of hits is different from what "real" nonlinearities produce during the
initial stages of the minimization. The trick is to come up with an
`if' statement that recognizes the noise-induced pattern without
writing an artificial intelligence program. I've got one that works,
for the most part... It might be problem-specific, though.
--
--Pierre Asselin, Westminster, Colorado
lpa@netcom.com
pa@verano.sba.ca.us
Subject: Re: Using C for number-crunching (was: Numerical solution to Schrodinger's Eq)
From: mfriesel@ix.netcom.com
Date: Wed, 06 Nov 1996 21:32:18 -0700
Peter Dalgaard BSA wrote:
>
> Konrad Hinsen writes:
>
> ....>
> > If a subprogram reference causes a dummy argument to become associated
> > with an entity in a common block in the referenced subprogram or in a
> > subprogram referenced by the referenced subprogram, neither the dummy
> > argument nor the entity in the common block may become defined within
> > the subprogram or within a subprogram referenced by the referenced
> > subprogram.
>
> Does anybody else perceive this as the result of convergence of the
> computer-speak and legalese dialects? "The party of the third part,
> hereafter denoted The Third Party..."
I reply:
No, it all makes sense. Like most of the details of programming
languages, however, it is unspeakably dull and learning about it will
teach you nothing whatsoever about physics.
Subject: Re: Different rhs and CG
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Date: 7 Nov 1996 10:14:47 GMT
In article <55rl5h$eb3@srvr1.engin.umich.edu>, cjlin@pelli.engin.umich.edu (Chih-jen Lin) writes:
|> Hi,
|>
|> In my algorithm I have to solve two
|> linear systems Ax = b1, Ax = b2. Currently I use the conjugate gradient
|> method and have the following two questions:
|>
|> (1) Is there any method by which I can have a warm start for the
|> second system?
|>
|> (2) I found the two systems take quite different numbers of CG iterations,
|> and the first one needs much fewer than the second one. Under what
|> conditions can changing the rhs produce such a difference?
|>
|> Thanks in advance
|>
|> Chih-Jen Lin
|> Univ. of Michigan
the number of steps necessary to obtain a reasonable precision with
CG depends on the eigenvalue distribution of the matrix _and_ the
representation of the eigenvectors in the initial residual. Therefore,
if you have a solution of your first system, take its residual with respect
to the second right hand side; if it turns out that it has only small
components with respect to the eigenvectors of the "large" eigenvalues of A,
then a warm start makes sense, otherwise not. If you take the eigensystem of A
as your coordinate system then there holds
\sum_{i=1}^n ( (x^{k+1}_i - x^*_i)^2 \lambda_i ) =
\min \{ \sum_{i=1}^n ( (x^0_i - x^*_i)^2 \lambda_i (1 + \lambda_i p_k(\lambda_i))^2 ) :
p_k any polynomial of degree k \}
Here x^0 is the initial point, the \lambda_i are the eigenvalues of A,
x^{k+1} is the (k+1)-th iterate and x^* the solution; of course,
\lambda_i (x_i - x^*_i) is the i-th component of the residual at x
(in eigenvector coordinates). The minimization is over all polynomials
of degree k. This is the optimality property of CG.
hope this helps
peter
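A toy illustration of the warm-start point in Python/NumPy (plain CG; the diagonal test matrix and tolerance are made up). When b2 differs from b1 in only one eigencomponent, the residual at the warm start is nearly an eigenvector and CG finishes almost immediately:

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, maxit=1000):
    """Plain conjugate gradients for SPD A, started from x0.
    Returns the solution and the number of iterations taken."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for k in range(maxit):
        if np.sqrt(rr) < tol:
            return x, k
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, maxit
```

Conversely, a residual rich in large-eigenvalue components would gain little from the warm start, which is exactly the condition described above.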
Subject: Re: Convolution of sparse vectors
From: Klaus Ramstoeck
Date: Thu, 07 Nov 1996 11:33:35 +0100
Warner Bruns wrote:
>
> In article <328080D6.360F@fli.sh.bosch.de>, Thomas Bleile
> writes:
> > Hello,
> >
> > I need to calculate the convolution of two large but sparse vectors
> > (dimension: (10^8 X 1) with 10^5 nonzero elements).
> > ...
> just take an FFT of your vectors, multiply the transforms,
> and FFT them back...
>
> wasn't that easy?
>
> Warner Bruns
Almost: maybe this guy does not have a supercomputer with lots
of memory at hand? The fft of the vectors definitely will not be
sparse, so this requires some 10^9 bytes of memory after all.
Still, it might be a good idea not to use the sparse property of
the data, and go for a special FFT algorithm which works (efficiently)
with the data on disk (see the Numerical Recipes for a *start*).
Klaus
--
"Computers are useless. They can only give you answers." (Picasso)
Subject: Re: Different rhs and CG
From: eijkhout@jacobi.math.ucla.edu (Victor Eijkhout)
Date: 07 Nov 1996 09:42:41 GMT
In article <55rl5h$eb3@srvr1.engin.umich.edu> cjlin@pelli.engin.umich.edu (Chih-jen Lin) writes:
> In my algorithm I have to solve two
> linear systems Ax = b1, Ax = b2. Currently I use the conjugate gradient
> method and have the following two questions:
>
> (1) Is there any method by which I can have a warm start for the
> second system?
You can use the first system to get an estimate of the spectrum
(or at least the outer eigenvalues) of your matrix. Use those
for a polynomial preconditioning (or a Chebychev iteration)
when you do the second system.
Victor.
--
405 Hilgard Ave ...................... The US pays 120,000 rubles rent on its
Department of Mathematics, UCLA ........... Moscow embassy. At the signing of
Los Angeles CA 90024 ........................... the lease this was $170,000,
phone: +1 310 825 2173 / 9036 ....................... today it's about $22.56
http://www.math.ucla.edu/~eijkhout/ [source: Sevodnya]
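One way to act on this suggestion is a classical Chebyshev iteration, which needs only bounds [lmin, lmax] on the spectrum (estimated, as above, from the first solve). A Python/NumPy sketch of the standard three-term recurrence; the dense toy matrix, bounds, and iteration count here are illustrative, not part of the original post:

```python
import numpy as np

def chebyshev_iteration(A, b, x0, lmin, lmax, iters=50):
    """Chebyshev iteration for SPD A, given bounds [lmin, lmax] on the
    spectrum: the classical three-term recurrence, no inner products."""
    theta = 0.5 * (lmax + lmin)   # center of the spectral interval
    delta = 0.5 * (lmax - lmin)   # half-width of the spectral interval
    x = x0.copy()
    r = b - A @ x
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    d = r / theta
    for _ in range(iters):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x
```

Unlike CG, this uses no inner products, so it is also attractive in parallel settings; the price is that it needs the eigenvalue bounds up front.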
Subject: Re: Runge-Kutta for IBVP on second order PDE in 4 dim.?
From: matsh@alwaid.tdb.uu.se (Mats Holmström)
Date: 7 Nov 1996 10:48:02 GMT
In article <327F5976.2781@math.mcgill.ca>,
Jan Rosenzweig wrote:
>A Montvay {TRACS} wrote:
>>
>> Hi all,
>>
>> I want to use a Runge-Kutta (or similar) method on an IBVP on the
>> three-dimensional wave equation
>>
>> d2p/dx2+d2p/dy2+d2p/dz2-d2p/dt2=0
>
> You can't use Runge-Kutta for IBVP. Runge-Kutta is a method for
>initial value problems. Why bother with that, what is wrong with the
>finite element method?
>
>--
>Jan Rosenzweig
I wonder what you think is wrong with Runge-Kutta for IBVP? Of course
a RK method alone can't solve an IBVP. But if you use the Method of
Lines, i.e., discretize your problem separately in space and time, RK
works just fine for the time integration. That is, first you
discretize in space by finite elements/volumes/differences, or
something else. You then have a system of ODEs which can be solved by
any standard ODE method, e.g., a Runge-Kutta method.
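A concrete method-of-lines sketch in Python/NumPy, with a 1-D wave equation standing in for the 3-D one (grid size, time step, and names are illustrative; the RK4 step must respect the usual stability limit, roughly dt < 2.8 dx/c for this discretization):

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt/2, y + dt/2 * k1)
    k3 = f(t + dt/2, y + dt/2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

def wave_rhs(t, y):
    """Semi-discrete 1-D wave equation u_tt = u_xx on [0,1], u = 0 at the
    ends: y stacks (u, v) with v = u_t; u_xx by central differences."""
    n = y.size // 2
    dx = 1.0 / (n - 1)
    u, v = y[:n], y[n:]
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return np.concatenate([v, uxx])

# u(x,0) = sin(pi x), u_t(x,0) = 0; exact solution: cos(pi t) sin(pi x).
n = 101
x = np.linspace(0.0, 1.0, n)
y = np.concatenate([np.sin(np.pi * x), np.zeros(n)])
dt = 0.005
for step in range(200):           # integrate to t = 1
    y = rk4_step(wave_rhs, step * dt, y, dt)
u = y[:n]
```

At t = 1 the exact solution is -sin(pi x), and the computed u matches it to the accuracy of the spatial discretization, which is the whole point: the time integrator is just a standard ODE method.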
I have set the followup to sci.math.num-analysis, since it seems
appropriate.
--
Mats Holmström matsh@tdb.uu.se
Uppsala University http://www.tdb.uu.se/~matsh
Dept. of Scientific Computing Ph.: +46 18 18 29 80
Subject: Re: Q: how to find {v} s.t. {v}t[A]{v} is minimize?
From: ghe@saturn.superlink.net (Guangliang He)
Date: 7 Nov 1996 05:43:43 -0500
In article <328052E1.2CA68A66@cae.wisc.edu>,
Worawut Wisutmethangoon wrote:
>Hi,
> Although this is not directly related to numerical analysis, I hope
>some of you can help me with this problem.
> If I have an nxn matrix [A], how do I find a vector {v} of
>dimension n such that {v}t[A]{v} is minimum ? The matrix is known
>to be symmetric and semi-positive definite, if that matters.
> I would really appreciate any pointers.
>
>Thank you in advance
>Worawut W.
>(wisutmet@cae.wisc.edu)
Without any constraint, you can always let v be a zero vector and get
zero as your minimum.
If you require the vector to have norm 1 ({v}t{v} = 1), then the eigenvector
of [A] corresponding to the smallest eigenvalue is the one you want.
If you choose a different constraint, you will get a different answer.
For example, requiring Sum(v[i]) = 1 gives the answer
v = inv(A) l / (l' inv(A) l)
where l is a vector with all elements equal to 1.
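The Sum(v[i]) = 1 formula is easy to verify numerically. A Python/NumPy sketch (the 3x3 matrix is a made-up positive definite example; the formula needs A nonsingular). The Lagrange condition for this constrained minimum says A v must be proportional to l:

```python
import numpy as np

# Minimize v'Av subject to sum(v) = 1, with l the vector of ones:
# the Lagrange conditions give A v proportional to l, hence
# v = inv(A) l / (l' inv(A) l).  A must be positive definite here.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
l = np.ones(3)
w = np.linalg.solve(A, l)      # inv(A) l, without forming the inverse
v = w / (l @ w)
```

The resulting v sums to 1, satisfies A v = (v'Av) l, and beats any of the coordinate vectors e_i (which also sum to 1).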
Hope it helps.
Guangliang He
ghe@superlink.net
Subject: Re: Q: how to find {v} s.t. {v}t[A]{v} is minimize?
From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Date: 7 Nov 1996 10:19:42 -0500
In article <328052E1.2CA68A66@cae.wisc.edu>,
Worawut Wisutmethangoon wrote:
>Hi,
> Although this is not directly related to numerical analysis, I hope
>some of you can help me with this problem.
> If I have an nxn matrix [A], how do I find a vector {v} of
>dimension n such that {v}t[A]{v} is minimum ? The matrix is known
>to be symmetric and semi-positive definite, if that matters.
> I would really appreciate any pointers.
As posed, the question has a trivial answer: v=0.
Perhaps you meant a unit vector v; in this case, v is any unit eigenvector
corresponding to the smallest eigenvalue of A.
The theoretical concept behind this is known as the Rayleigh quotient; to
make it homogeneous, Rayleigh defined it as {v}t[A]{v} / {v}t{v}.
Good luck, ZVK (Slavek).
Subject: Re: [Q] Need to plot in spherical coords.
From: "Dann Corbit"
Date: 7 Nov 1996 19:17:05 GMT
Since you are plotting spherical coordinates, you need
to choose some kind of projection ( unless you are
plotting on a physical ball ). It sounds like a perfect
application for a GIS product. If this is actually a real
sphere (not an oblate spheroid like the earth) then
the math becomes simpler. You might like to pose the
same question to comp.infosystems.gis and
comp.graphics.algorithms, as those groups are more
closely focused on your problem at hand.
You can get a public domain mapping package called
PROJ4 that will transform your spherical coordinates
to many, many projections. Once you have transformed
your data via the projection, plotting the information will
be much simpler.
Craig Ross wrote in article
<3280DEF8.41C6@rclsgi.eng.ohio-state.edu>...
> If this is not the place to ask this question, please forgive me
> and point out the way.
>
> I need to plot a surface in spherical coordinates and want the
> best flexibility and most ease. The data is spaced on a regular
> grid, from 0 <= phi <= 90 degrees and 0 <= theta <= 90 degrees.
> I will map this, using symmetry, to the other 7 octants.
>
> The r axis data represents the sound pressure emitted from a source,
> in dB, so r will be offset to be greater than 0 for visualization,
> if I can't pick the r axis scale independently. Alternatively,
> color maps may be used to represent the r data on a uniform sphere.
>
> I currently have access to Matlab, Maple, Excel, and Axum but haven't
> found a simple solution. I'm using Matlab to crunch the measured
> data and like Excel's easy rotation, but dislike that color maps
> in Excel will represent variation on the z axis, not r.
>
> Any ideas? (I will code, if required, but seek existing solutions.)
>
>
> --
> Craig Ross Find just about anything at:
> ross.294@osu.edu http://www.albany.net/allinone/
> ross@rclsgi.eng.ohio-state.edu or http://www.yahoo.com/
>