Subject: Re: Using C for number-crunching (was: Numerical solution to Schrodinger's Eq)
From: jac@ibms48.scri.fsu.edu (Jim Carr)
Date: 1 Nov 1996 14:32:34 GMT
Konrad Hinsen writes
}
} ---------------------------------------------------------------------------
} 15.9.3.6 Restrictions on Association of Entities.
}
} If a subprogram reference causes a dummy argument in the referenced
} subprogram to become associated with another dummy argument in the
} referenced subprogram, neither dummy argument may become defined
} during execution of that subprogram. ...
Well-known example omitted (or, at least, it would be well known if a
better job were done of teaching this and other programming languages).
I remember writing code for amusement where the goal was to redefine
many of the constants in the calling routine. Modern implementations
do not handle this the same way, at least not consistently (since
they aren't constrained to do so), and this can have an effect on
certain poorly written legacy code.
My favorite is a variable in COMMON, passed as a parameter, modified
in that routine which calls a third routine without any dummy params
at all -- since it references that variable through COMMON. ;-)
} Essentially this means that the code you have shown, although all
} compilers I know would accept it, does not conform to the Fortran 77
} standard.
Robin Becker writes:
>
>Then surely it's up to implementors of standard conforming compilers etc
>to detect and flag it.
No. The standard is quite clear in stating that the compiler is free
to do whatever it wants to with non-conforming programs in particular
circumstances. The standard does not require that an error be produced.
Independent compilation makes even a warning impossible without
an interface block, such as was added in Fortran 90. (The AIX compiler
has a flag that handles this sort of aliasing more gracefully, that is,
probably the way it worked on an older system, but caveat user.)
--
James A. Carr | Raw data, like raw sewage, needs
http://www.scri.fsu.edu/~jac | some processing before it can be
Supercomputer Computations Res. Inst. | spread around. The opposite is
Florida State, Tallahassee FL 32306 | true of theories. -- JAC
Subject: Re: (1/n)! as integral expression
From: edgar@math.ohio-state.edu (G. A. Edgar)
Date: Fri, 01 Nov 1996 09:28:21 -0600
Here it is in MapleV release 4...
> int(exp(-x^m),x=0..infinity);
Definite integration: Can't determine if the integral is convergent.
Need to know the sign of --> m
Will now try indefinite integration and then take limits.
infinity
/
| m
| exp(-x ) dx
|
/
0
So we can do it like this...
> assume(m>0);
> int(exp(-x^m),x=0..infinity);
GAMMA(1/m)
----------
m
with assumptions on m
Maple will also do
> assume(q>0, n>0);
> int(exp(-q*x^n),x=0..infinity);
GAMMA(1/n)
----------
(1/n)
q n
with assumptions on q and n
And since (1+p)^(-q*x^n) is equal to exp(-q*ln(1+p)*x^n), we get
infinity
/ n
| (-q x ) GAMMA(1/n)
| (1 + p) dx = --------------------
| (1/n)
/ (q ln(1 + p)) n
0
with assumptions on p, q and n
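For anyone without Maple at hand, the closed form can also be checked numerically. Below is a small pure-Python sketch (plain midpoint rule on a truncated interval; the function name and the particular q, n values are my own choices):

```python
import math

def integral_exp(q, n, upper=20.0, steps=200000):
    # midpoint rule for int_0^upper exp(-q*x^n) dx;
    # the integrand decays so fast that truncating at `upper` is harmless here
    h = upper / steps
    return h * sum(math.exp(-q * ((i + 0.5) * h) ** n) for i in range(steps))

q, n = 2.0, 3.0
numeric = integral_exp(q, n)
closed = math.gamma(1.0 / n) / (q ** (1.0 / n) * n)  # GAMMA(1/n) / (q^(1/n) * n)
print(numeric, closed)   # the two agree to several digits
```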
Now a Maple peculiarity. One might want to get this answer with less
user intervention. Something like this...
> restart;
> assume(p>0, n>0, q>0);
> int((1+p)^(-q*x^n),x=0..infinity);
infinity
/ n
| (-q x )
| (1 + p) dx
|
/
0
with assumptions on p, q and n
--
Gerald A. Edgar edgar@math.ohio-state.edu
Subject: Re: Discrete Cosine Transform
From: mathar@qtp.ufl.edu (Richard Mathar)
Date: Fri, 01 Nov 1996 09:55:37 EST
phisan@ipi.uni-hannover.de (Phisan Santitamnont) writes:
|> Does anybody know where I can get source code for doing such a DCT?
|> My DCT is defined as:
|>
|>            M-1                   /  PI                 \
|>     Cm  =  SUM   2* Xi *  cos   | ---  * m * (2*i + 1)  |      0 <= m <= M-1
|>            i=0                   \ 2*M                 /
|>
|>
|> I also know that there exists Fast DCT routines in
|> fftpack, vfftpack, linalg in netlib, even the new Algorithm 749
|> in ACM-TOMS. Unfortunately they are defined differently.
|>
|> Or could you please help me find out the relation between mine and
|> e.g. cost() in fftpack, so that I can make use of an existing
|> routine without modification?
|>
The first thing one can do is zero-pad the vector x_i
so that the total phase in the cosine runs up to (in the usual
notation) 2*pi*m*i/M (m,i=0...M-1). That is, adding
x_i=0 (for i=M,...,2M-1) does not change the result, and you
may as well compute:
Cm = 2 sum(j=0...2M-1) x_j cos[ pi*m*(2j+1)/(2M)]
for m=0,..,2M-1 .
Substitute 2M=M', just to recognize more familiar formulas:
...= 2 sum(j=0..M'-1) x_j cos[ pi*m*(2j+1)/M' ]
= 2 sum(j=0..M'-1) x_j cos[ 2 pi*m*(j+1/2)/M' ]
This can be split with
cos[ 2 pi m (j+0.5)/M'] = cos(2 pi m j / M')cos(pi m /M')
- sin(2 pi m j / M')sin(pi m /M')
into two sums over m=0...M'-1. To obtain these new sine
and cosine transforms one can use the complex (!) Fourier
transform of the zero-padded original series,
z_m = sum(j=0...M'-1) x_j exp( 2 pi i m j / M')
and split the result (assuming the x_j are real values, not complex)
into its real and imaginary parts to obtain
Re z_m = sum(j=0..M'-1) x_j cos ( 2 pi m j/M')
Im z_m = sum(j=0..M'-1) x_j sin ( 2 pi m j/M')
Then disregard the values m=M...2M-1, and post-multiply
by the factors cos(pi m/M') and sin(pi m/M') according
to the formula above to build the result:
Cm = 2 [Re z_m cos(pi m/M') - Im z_m sin(pi m/M')]
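In case it helps, the recipe above can be checked with a small pure-Python sketch. A real implementation would use an FFT to get the z_m; the naive O(M'^2) DFT below (and the function names and sample values, which are my own) are only there to verify the algebra:

```python
import cmath
import math

def dct_direct(x):
    # Cm = 2 * sum_{i=0}^{M-1} x_i cos(pi*m*(2i+1)/(2M)),  m = 0..M-1
    M = len(x)
    return [2.0 * sum(x[i] * math.cos(math.pi * m * (2 * i + 1) / (2 * M))
                      for i in range(M))
            for m in range(M)]

def dct_via_dft(x):
    M = len(x)
    Mp = 2 * M                # M' = 2M
    xp = x + [0.0] * M        # zero-padded series
    out = []
    for m in range(M):        # the values m = M..2M-1 are simply disregarded
        # z_m = sum_k x_k exp(2 pi i m k / M'); naive DFT here, use an FFT in practice
        z = sum(xp[k] * cmath.exp(2j * math.pi * m * k / Mp) for k in range(Mp))
        out.append(2.0 * (z.real * math.cos(math.pi * m / Mp)
                          - z.imag * math.sin(math.pi * m / Mp)))
    return out

x = [0.3, -1.2, 2.5, 0.7, 1.1, -0.4]
a, b = dct_direct(x), dct_via_dft(x)
```

The two lists agree to rounding error, confirming the zero-padding/post-multiplication derivation.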
--
mathar@qtp.ufl.edu
Subject: Identifying Gaussian/Bessel functions in 2-D
From: Laurence Marks
Date: Fri, 01 Nov 1996 11:02:03 -0600
I have a problem which may (or may not) have an answer. I have a
reconstructed image, which (ideally) should contain a number of
2-D objects, either Gaussians or J1(ar)/ar Bessel functions. (Not
both, but I can work in a mode which will generate either.) I know
the widths of both the Gaussians and the Bessel functions. What I want
to do is measure how good the reconstruction is in terms of such
a set of objects, and then use this "measure" to help control the
reconstruction algorithm.
This seems to be somewhere in between pattern recognition, image
processing, and other ground. Does anyone know:
a) If there are any methods of doing this?
b) Are there any transforms that could be used (in 2-D)?
c) Is there any method of determining the "measure" from the
cross-correlation function, for instance its entropy?
Subject: Re: How to solve a nonlinear system...
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Date: 1 Nov 1996 18:11:43 GMT
In article <32690773.18A@mit.edu>, Lorenza Martinez writes:
|> Hello,
|> I have been trying to solve a nonlinear system of 36 eq. and 36
|> unknowns, and I haven't succeeded. I have tried Mathematica and
|> Gauss (using the nlsys routine). The latter routine uses a Newton method,
|> and for very different types of initial conditions it ends up either with
|> a singular matrix or the program stops without converging to the
|> solution (not even close !!!!).
|>
|> The system is of the type:
|>
|> 1) 15 homogeneous equations (i.e. without a constant)
|>
|> a) 7 equations of the type x1=x2
|> b) 3 of the type: x5/x4=x11/x10
|> c) 5 equations of the type: (x20-x21)/x19=(x26-x27)/x25
|>
|> 2) 21 equations with a constant:
|>
|> a) 6 equations of the type: x1^2+x2^2+x3^2+...+x6^2=K1
|> b) 15 equations of the type: x1*x7+x2*x8+x3*x9+...+x6*x12=k7
|>
|> I will really appreciate any help.
|>
|> snip ..
Multiply the equations of types b) and c) by the denominators, making
those equations polynomial. Add constraints x19 >= 0.00001
or similar; you need these anyway. You get a system of equations of
polynomial type, with some simple bounds on the variables. Minimize the function
f==1 (or f==0) subject to these constraints using a nonlinear programming
code that is capable of coping with highly degenerate problems.
You may try my code donlp2 (available from plato.la.aus.edu
in pub/donlp2) or e04ucf (i.e. NPSOL) from the NAG library.
This should work much better than a routine application of Newton's method,
because your system admits manifolds with singular Jacobian, where
Newton's method fails.
hope this helps
peter
Subject: Re: non-linear equation solving
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Date: 1 Nov 1996 17:46:02 GMT
In article <55aag1$og6@news.kth.se>, thomas@prima.met.kth.se (Thomas Helander) writes:
|> We are presently trying to solve a system of nonlinear equations using
|> Newton's method. In principle the system looks like this:
|>
|> F(C'',C')=(A-(dt/2)B'')C'' -R''(dt/2) -(A+(dt/2)B')C' +R'(dt/2) = 0
|>
|> where C"= C(x,t2) C'=(x,t1) etc..
|> x is space coordinate
|> t is the time, dt the timestep.
|> A is the mass matrix
|> B is the stiffness matrix
|> R is a matrix
|>
|> Since C is discontinuous in x, the elements of B and R vary discontinuously.
|> This seems to cause difficulties when using Newton's method, because it doesn't
|> always converge. Instead the sum of squares oscillates between two values. When
|> we multiply the Jacobian with a trust factor, the sum of squares....
(snip)
Unfortunately, I don't understand your notation. But from the explanations
of some of your variables I assume that you are integrating a vibrating system
by a discretization method using an implicit scheme. This always requires
continuity, and even more, continuous differentiability of the functions
involved. In order to get sensible results, you have to trace discontinuities
and restart your numerical process at points of discontinuity. Otherwise
you obtain senseless results (if it doesn't hang up your computer because
of unsafe programming).
Tracing of discontinuities for ordinary differential equations is described
to some extent in Hairer, Norsett, Wanner, "Solving Ordinary Differential
Equations I" (Springer).
hope this helps
peter
Subject: Re: (1/n)! as integral expression
From: Edward Neuman
Date: Fri, 01 Nov 1996 12:17:15 -0600
Dieter Schmitt wrote:
>
> Thursday 31.10.96 19:46
>
> Hi to whom it may concern,
>
> once I bought a scientific calculator and proved its capabilities by
> evaluating the integral expression
>
>     +oo (pos. infinity)
>     ---                n
>      |          -q*|x|
>      |    (1+p)          dx      where p, q, n are in R+ and x is in R.
>      |
>     ---
>     -oo (neg. infinity)
>
> by the little machine ... it worked slowly for months and I got a lot
> of numbers. I stored the results as statistical data and later displayed
> a graphic result depending on n. I tried to get the numbers algebraically
> and found (I think by trial and luck) an expression which reproduces the
> same results when the above parameter values p, q and n are inserted:
>
>                2*(1/n)!
>         -------------------
>          n   ------------
>           \ |  q*ln(1+p)
>            \|
>
> (... astonishing, I got this formula first )
>
> After all that, I simplified by setting q=1 and p=e-1 and moved the
> lower limit from -oo up to 0, avoiding the absolute value of x.
>
> Then I found that the numerical results could also be expressed
> by (1/n)! when evaluating the inserted values of n:
>
>     +oo
>     ---      n
>      |     -x          1        -1
>      |    e    dx  =   - !  =  n   !    where n is in R+ and x is in R .. quite nice :-)
>      |                 n
>     ---
>      0
>
> I tested the formula and the graphic result by approximately determining
> the minimum near n=2.1662.... (Y-axis = (1/n)! = (0.46163...)! =
> 0.8856....) and got the same Y-axis value when I later bought MAPLE V
> (it's just for fun, really!) and searched for the minimum of (1/x)! ... I
> got from MAPLE V the value 1.46163.... after numerical evaluation, which
> has to be inserted into the gamma function. The y-result is the same
> 0.8856... too, because the gamma function produces n! from input n+1.
>
> I tested more values and got a lot of matching results.
>
> I set MAPLE V to check for equality, but it couldn't. It called the
> problem undecidable.
>
> I tried to find help in books of formulas, but without success.
> Because I'm not a mathematician (only poor high-school math) I'm not able
> to prove the equality in an analytical or algebraic way.
>
> If there is anybody out there, able and kindly enough to do this (and to
> confirm it) or to tell me that and why I'm wrong, then please help and
> send a mail to me.
>
> (I know that the gamma function already extends the ! function, but it
> produces n! from input n+1 and not (1/n)! from input n.)
>
> Have a good time.
>
> Dieter Schmitt
>
> dismit@xanth.mayn.de
>
> --
> *Glauben heisst nicht wissen wollen..*
> *Believing implies no will to know..*
> ## CrossPoint v3.11 R ##
Dieter,
Your first integral formula is wrong. Its special case (p=e-1; q=1)
holds true provided n=1, 1/2, 1/3,... . To prove this, make the substitution
t=x^n and then use Euler's formula for the gamma function to obtain
(1/n)*Gamma(1/n) = Gamma(1+(1/n)) on the right side. The last expression
simplifies to (1/n)! for the values of n shown above. To obtain a correct
formula for your first integral, apply the elementary identity
a^b = exp(b*ln(a))
to the integrand, then make a change of variable and use Euler's
integral.
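Incidentally, Dieter's empirically located minimum matches the known minimum of the gamma function, which sits near x = 1.46163 with Gamma(x) = 0.885603..., agreeing with his (0.46163...)! = 0.8856... A few lines of Python confirm it (a crude grid search; the grid bounds and spacing are my own choices):

```python
import math

# crude grid search for the minimum of Gamma(x) on [1, 2]
xs = [1.0 + i * 1e-5 for i in range(100001)]
xmin = min(xs, key=math.gamma)
vmin = math.gamma(xmin)
print(xmin, vmin)   # about 1.46163 and 0.885603
```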
I hope this helps.
Edward Neuman
Subject: Re: Cholesky factorization of a special matrix
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Date: 1 Nov 1996 17:59:18 GMT
In article , Konrad Hinsen writes:
|> Given a non-square matrix A of shape NxM with N < M, the matrix B =
|> transpose(A)*A has shape NxN and is symmetric and positive
|> (semi)definite and therefore has a Cholesky factorization B =
|> transpose(C)*C. Is there a way to determine C without first
|> calculating B and then applying the standard factorization algorithm?
|> --
|> -------------------------------------------------------------------------------
|> Konrad Hinsen | E-Mail: hinsen@ibs.ibs.fr
|> Laboratoire de Dynamique Moleculaire | Tel.: +33-76.88.99.28
|> Institut de Biologie Structurale | Fax: +33-76.88.54.94
|> 41, av. des Martyrs | Deutsch/Esperanto/English/
|> 38027 Grenoble Cedex 1, France | Nederlands/Francais
|> -------------------------------------------------------------------------------
Take the Householder QR decomposition of A and make the diagonal of the
R part positive by multiplying the offending rows by -1. That's all.
(I assume you meant N > M; otherwise apply this to the transpose.)
There are codes for sparse QR-decomposition, if you need that.
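In case it is useful, here is a pure-Python sketch of the idea on a tiny example. It uses modified Gram-Schmidt rather than Householder purely to keep the code short; MGS already yields a positive diagonal, since R's diagonal entries are column norms (with Householder you would flip the sign of rows of R as described above). Function name and test matrix are my own:

```python
import math

def qr_mgs(A):
    # modified Gram-Schmidt QR of an n x m matrix (n >= m, full column rank)
    n, m = len(A), len(A[0])
    V = [[A[i][j] for i in range(n)] for j in range(m)]   # columns of A
    R = [[0.0] * m for _ in range(m)]
    for j in range(m):
        R[j][j] = math.sqrt(sum(v * v for v in V[j]))     # positive by construction
        V[j] = [v / R[j][j] for v in V[j]]                # j-th column of Q
        for k in range(j + 1, m):
            R[j][k] = sum(V[j][i] * V[k][i] for i in range(n))
            V[k] = [V[k][i] - R[j][k] * V[j][i] for i in range(n)]
    return V, R

A = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
Q, R = qr_mgs(A)
# R is then the Cholesky factor C of A^T A: R^T R = A^T A = [[35, 44], [44, 56]]
RtR = [[sum(R[k][i] * R[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
print(RtR)
```

The point is that A^T A is never formed, which also avoids squaring the condition number.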
hope this helps
peter
Subject: Re: Using C for number-crunching (was: Numerical solution to Schrodinger's Eq)
From: shenkin@still3.chem.columbia.edu (Peter Shenkin)
Date: 1 Nov 1996 19:48:58 GMT
In article <$37sUHAD1$dyEwNK@jessikat.demon.co.uk>,
Robin Becker wrote:
>>Essentially this means that the code you have shown, although all
>>compilers I know would accept it, does not conform to the Fortran 77
>>standard.
>Then surely it's up to implementors of standard conforming compilers etc
>to detect and flag it. ...
No. The Fortran standard tells the *user* what he must do to create a
standard-conforming program. Then it says something about what a
standard-conforming compiler must do with such a program. It deliberately
says nothing about what the compiler should or shouldn't do with a
non-standard-conforming program.
At least this was true of the Fortran77 standard; I'm not sure about f90.
The ANSI C standard mandates much more about what the compiler must
do (e.g., put out a diagnostic) in the case of specific violations
of the standard by the user.
-P.
--
****************** In Memoriam, Bill Monroe, 1911 - 1996 ******************
* Peter S. Shenkin; Chemistry, Columbia U.; 3000 Broadway, Mail Code 3153 *
** NY, NY 10027; shenkin@columbia.edu; (212)854-5143; FAX: 678-9039 ***
MacroModel WWW page: http://www.cc.columbia.edu/cu/chemistry/mmod/mmod.html
Subject: Re: BFGS variable metric question
From: Jive Dadson
Date: Fri, 01 Nov 1996 13:59:18 +0000
I certainly appreciate the two responses I got to my question. Both
of them said I should dump Numerical Recipes in C and go with a better,
public domain algorithm. I did some checking around, and I'm not
quite ready to give up on NRC just yet. I found one algorithm in FORTRAN,
and another that was compiled into C using f2c. The form of the source
code would make either of those very difficult for me to use. As bad
as the style of NRC C code is, at least it is C that I can read (sort
of). So I am still trying to answer the two questions: 1) Why does the
approximation to the Hessian in the NRC algorithm collapse into
something that is not positive definite? 2) What kind of stopping
criteria should I use to ensure that the Hessian approximation is good
enough that its determinant is useful?
It turns out that the other problem I noted, about the line-search failing,
is related to the Hessian approximation failure. It only seems to fail when the Hessian
approximation has collapsed. Any step in the supposed Newton direction leads to an
increase in the function's value, but the function is not at a local
minimum. Restarting the algorithm with the approximate Hessian set to the
identity matrix seems to save the day. But I would prefer to figure out
what is going wrong, and keep the approximation from going bad if I can.
I am a little worried that the code does not compute the true BFGS update
formula, but I haven't as yet been able to see anything wrong with it.
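One sanity check that applies to any implementation of the update: after a step s = x_new - x_old with gradient change y = g_new - g_old, the new inverse-Hessian approximation H+ must satisfy the secant equation H+ y = s, and the update preserves positive definiteness only when s.y > 0 (which an accurate line-search guarantees, and which is plausibly what fails when the approximation collapses). A pure-Python sketch of the standard inverse-Hessian update, not the NRC code, with made-up vectors:

```python
def bfgs_inverse_update(H, s, y):
    # H+ = (I - rho s y^T) H (I - rho y s^T) + rho s s^T,  rho = 1/(s.y)
    # valid (positive definiteness preserved) only when s.y > 0
    n = len(s)
    sy = sum(si * yi for si, yi in zip(s, y))
    assert sy > 0, "curvature condition s.y > 0 violated"
    rho = 1.0 / sy
    A = [[(1.0 if i == k else 0.0) - rho * s[i] * y[k] for k in range(n)]
         for i in range(n)]
    AH = [[sum(A[i][k] * H[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return [[sum(AH[i][k] * A[j][k] for k in range(n)) + rho * s[i] * s[j]
             for j in range(n)]
            for i in range(n)]

H = [[1.0, 0.0], [0.0, 1.0]]      # start from the identity
s, y = [1.0, 0.5], [0.4, 0.1]     # s.y = 0.45 > 0
Hp = bfgs_inverse_update(H, s, y)
Hy = [sum(Hp[i][k] * y[k] for k in range(2)) for i in range(2)]  # should equal s
```

If a supposed BFGS update fails the H+ y = s check, it is not computing the true update formula.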
Karl F. Roenigk wrote:
> It's a little surprising you have had the kind of
> success with BFGS compared to other methods as you had indicated ...
I'm using it to train neural networks, and function evaluation is
very expensive. The BFGS wins because it usually uses only one or two
evaluations to do the approximate line-search. I also hope to be
able to use the Hessian approximation for another purpose. If I have
to I can calculate it once at the end though. What I really need is
the determinant of the inverse Hessian.
> For smaller dimension systems(*4) of a wide variety, I have to date not
> found it to outperform full Hessian evaluation, or even conjugate
> gradient;
Full Hessian evaluation is out of the question, because it would be much
too expensive. The dimension of the systems is usually somewhere around
14 to 300 parameters or "weights".
> The problem as I have observed it is in the
> direction vector inaccuracy; going in the right direction in the first
> place buys a lot of efficiency. Although it costs more to find the best
> direction, it ultimately costs less to get to the final destination
> because you need make fewer purchases.
I didn't follow that. Could you elaborate? What is the "right" direction?
Both CG and quasi-Newton will get to the minimum in N steps when the
function is quadratic. When it is not quadratic, I'm not sure what the
"right" direction is.
I'm not going to comment on the rest of the response, because I obviously
don't know enough about the subject to ask intelligent questions. Suffice
it to say, I'm still bewildered.
Thanks again for your help.
J.
Subject: Re: SVD
From: "John C. Nash"
Date: Fri, 1 Nov 1996 18:52:36 -0500
John Chandler suggested that nobody would use normal equations for
systems that may be ill-conditioned (most of them). In general,
I heartily agree, but I have a specific counter-example that
probably ends up supporting the generalization.
This arose in implementing the Marquardt nonlinear least squares
method -- actually my own modification to cope with a pretty
nasty BASIC interpreter in 1975. While we wanted to use the svd
or QR or (name your own preferred "good" method), they were just
too slow if we had any number of observations, since the normal
equations were p * p where p is the number of parameters, while the
Jacobian matrix J that we would have to decompose was n * p where
n >> p usually.
Now we did always have the Levenberg / Marquardt parameter lambda
that multiplied a metric matrix (diagonal, often the identity) that was
added to the J' * J normal-equations matrix, but even so, in the
cruddy arithmetic we would get pivots that the algorithm complained
about (we tested). The main issue was speed, and we could take care
of the not-so-hot numerics by doing an extra iteration or two, since
they took a lot less time than the svd or similar steps.
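To illustrate why the normal equations are delicate in the first place: forming J' * J squares the condition number, so in working precision the information distinguishing nearly parallel columns can vanish before any factorization starts. A tiny Python sketch with made-up numbers:

```python
eps = 1.0e-9
# J has columns (1, 1) and (1, 1+eps): nearly parallel, so J is ill-conditioned
J = [[1.0, 1.0], [1.0, 1.0 + eps]]
# form the normal-equations matrix G = J^T J in ordinary double precision
G = [[sum(J[k][i] * J[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
det_float = G[0][0] * G[1][1] - G[0][1] * G[1][0]
det_exact = eps * eps   # the exact determinant of J^T J is eps^2 = 1e-18
print(det_float, det_exact)
# det_float is pure rounding noise: the eps^2 term was lost when G was formed
```

A QR or svd of J itself would keep the small singular value to full relative accuracy; adding lambda to the diagonal, as above, papers over the loss at the cost of extra iterations.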
JN
John C. Nash, Professor of Management, Faculty of Administration,
University of Ottawa, 136 Jean-Jacques Lussier Private,
P.O. Box 450, Stn A, Ottawa, Ontario, K1N 6N5 Canada
email: jcnash@uottawa.ca, voice mail: 613 562 5800 X 4796
fax 613 562 5164, Web URL = http://macnash.admin.uottawa.ca
Subject: C vs Fortran for numerics: References? (was: Using C for number-crunching)
From: plesser@yadorigi.riken.go.jp (Hans Ekkehard Plesser)
Date: Tue, 29 Oct 1996 04:10:02 GMT
Hello experts!
I have been following the "Using C for number-crunching" for quite a
while now. Since I am doing a bit of number crunching myself, I would
like to learn a bit more about advantages and disadvantages of both C
and FORTRAN and about what is going on behind the scenes. In
particular, it would be interesting to know what I should do to aid
the compiler in optimizing my code.
Therefore, I'd like to ask those who know for references, in printed
or electronic form, that might provide a readable introduction for
someone like me, who lacks both knowledge of compiler theory and the
desire to learn too much about it.
Thanks in advance,
Hans
Subject: Re: Using C for number-crunching (was: Numerical solution to Schrodinger's Eq)
From: pausch@electra.saaf.se (Paul Schlyter)
Date: 2 Nov 1996 13:07:31 +0100
In article <557p3u$bvv@seaman.cc.purdue.edu>,
Dave Seaman wrote:
> In article ,
> Konrad Hinsen wrote:
>>pausch@electra.saaf.se (Paul Schlyter) writes:
>
> [ Example of illegal aliasing in Fortran deleted. ]
>
>>Essentially this means that the code you have shown, although all
>>compilers I know would accept it, does not conform to the Fortran 77
>>standard.
>
> Although all compilers may accept the code, they don't all give the
> same result. That's the point.
Interesting point. This would imply that a program like:
B = 13.0
WRITE(*,*) 1.0/B*B-1.0
END
would be illegal because different compilers would produce different
output, due to round-off errors....
--
----------------------------------------------------------------
Paul Schlyter, Swedish Amateur Astronomer's Society (SAAF)
Grev Turegatan 40, S-114 38 Stockholm, SWEDEN
e-mail: pausch@saaf.se psr@home.ausys.se
Subject: Re: Using C for number-crunching (was: Numerical solution to Schrodinger's Eq)
From: pausch@electra.saaf.se (Paul Schlyter)
Date: 2 Nov 1996 13:08:48 +0100
In article <557ugv$oqk@brachio.zrz.tu-berlin.de>,
Warner Bruns wrote:
> In article <555i94$ahb@electra.saaf.se>, pausch@electra.saaf.se (Paul
> Schlyter) writes:
>> Consider this piece of code:
>>
>> SUBROUTINE COPY(DOUBLE PRECISION A, DOUBLE PRECISION B, INTEGER N)
>> DIMENSION A(N), B(N)
>> INTEGER I
>> DO 100 I=1,N
>> 100 A(I) = B(I)
>> END
>>
>> PROGRAM TEST(INPUT,OUTPUT,TAPE5=INPUT,TAPE6=OUTPUT)
>> DOUBLE PRECISION X(100), Y(10), Z(10)
>> EQUIVALENCE (X(3),Y(1)), (X(1),Z(1))
>> DATA /Z/1,2,3,4,5,6,7,8,9,10/
>> COPY(Y,Z,10)
>> WRITE(*,*) Y
>> END
>>
>> This exhibits exactly the same aliasing problem as in C .....
>
> 1.)
> This is not Fortran,
> 2.)
> The Fortran equivalent would be:
> SUBROUTINE COPY(A, B, N)
> DOUBLE PRECISION A, B
Sorry 'bout that! Obviously my Fortran is getting kind'a rusty,
which is understandable since I've written virtually no Fortran code
during the last decade or so.
> INTEGER N
> DIMENSION A(N), B(N)
> INTEGER I
> DO 100 I=1,N
> 100 A(I) = B(I)
> END
>
> PROGRAM TEST
> DOUBLE PRECISION X(100), Y(10), Z(10)
> EQUIVALENCE (X(3),Y(1)), (X(1),Z(1))
> DATA /Z/1,2,3,4,5,6,7,8,9,10/
> call COPY(Y,Z,10)
> WRITE(*,*) Y
> END
>
> and this Fortran equivalent is illegal.
> It is just ILLEGAL to call a subprogram that modifies its arguments
> in this way.
> You are just not allowed to do this.
I know that now.
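For anyone who wants to see the effect without digging out a Fortran compiler: the EQUIVALENCE makes Y(1) share storage with X(3) and Z(1) with X(1), so Y and Z overlap with an offset of two elements, and an element-by-element copy reads values it has already overwritten. A Python sketch of the same storage layout, showing one possible (strict left-to-right) outcome; a compiler that vectorizes the loop could legally produce something else entirely:

```python
# one storage area standing in for X(1..12); Z occupies x[0:10], Y occupies
# x[2:12], mirroring EQUIVALENCE (X(3),Y(1)), (X(1),Z(1))
x = list(range(1, 11)) + [0, 0]

def copy_forward(buf, dst, src, n):
    # the DO loop A(I) = B(I), I = 1..N, executed strictly left to right
    for i in range(n):
        buf[dst + i] = buf[src + i]

copy_forward(x, 2, 0, 10)   # CALL COPY(Y, Z, 10) on overlapping storage
y = x[2:12]
print(y)   # [1, 2, 1, 2, 1, 2, 1, 2, 1, 2] -- not a copy of the original Z
```

The first two copied values keep overwriting the source two elements ahead, which is precisely why the standard forbids defining aliased dummy arguments.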
--
----------------------------------------------------------------
Paul Schlyter, Swedish Amateur Astronomer's Society (SAAF)
Grev Turegatan 40, S-114 38 Stockholm, SWEDEN
e-mail: pausch@saaf.se psr@home.ausys.se
Subject: Re: Using C for number-crunching (was: Numerical solution to Schrodinger's Eq)
From: ags@seaman.cc.purdue.edu (Dave Seaman)
Date: 2 Nov 1996 09:48:56 -0500
In article <55fdi3$8c3@electra.saaf.se>,
Paul Schlyter wrote:
>> Although all compilers may accept the code, they don't all give the
>> same result. That's the point.
>
>Interesting point. This would imply that a program like:
>
> B = 13.0
> WRITE(*,*) 1.0/B*B-1.0
> END
>
>would be illegal because different compilers would produce different
>output, due to round-off errors....
The code I was discussing is non-ANSI. I was objecting to the
suggestion that the non-ANSI code might somehow be permissible because
"all compilers accept it." The point is that the standard does not
assign a meaning to such code, and therefore each implementation is
permitted to choose its own interpretation. The common saying is that
an ANSI-compliant compiler, when presented with such code, is allowed
to do anything at all, including launching missiles to start World War
III.
Starting World War III is not an ANSI-compliant response to the 3-line
program above, because that program is standard-conforming. Even
though the standard does not specify the precise output that would
result from running the program, the fact remains that the program has
a well-defined meaning that all ANSI-compliant compilers are required
to honor (within the limits imposed by floating-point arithmetic).
That's the difference.
--
Dave Seaman dseaman@purdue.edu
++++ stop the execution of Mumia Abu-Jamal ++++
++++ if you agree copy these lines to your sig ++++
++++ see http://www.xs4all.nl/~tank/spg-l/sigaction.htm ++++
Subject: Re: Generating Correlated Variables - Help
From: Greg Heath
Date: Sat, 2 Nov 1996 15:35:27 -0500
For n variables forming the components of the n-dimensional vector x =
(x1,x2,...xn) with mean vector m = (m1,m2,...mn) and psd covariance
matrix C = SS^T:
1. Generate n independently, but not necessarily identically,
distributed r.v.s from unit-variance/zero-mean distributions to
form the components of the vector z = (z1,z2,...zn). In general, the n
distributions will be different.
2. Form x = m + Sz.
Then variable x_i will have mean m_i, variance C_ii, and 2-way
correlation coefficients p_ij = C_ij/SQRT(C_ii*C_jj).
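A small Python sketch of the recipe above (Gaussian z purely for convenience; any zero-mean, unit-variance draws would do, as noted in step 1; the covariance matrix, means, and function name are made-up for illustration):

```python
import math
import random

def cholesky(C):
    # lower-triangular S with S * S^T = C (dense, small n)
    n = len(C)
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            acc = sum(S[i][k] * S[j][k] for k in range(j))
            if i == j:
                S[i][j] = math.sqrt(C[i][i] - acc)
            else:
                S[i][j] = (C[i][j] - acc) / S[j][j]
    return S

random.seed(12345)
C = [[4.0, 2.4], [2.4, 9.0]]   # variances 4 and 9, correlation 2.4/(2*3) = 0.4
m = [1.0, -2.0]
S = cholesky(C)
N = 100000
samples = []
for _ in range(N):
    z = [random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)]     # step 1
    samples.append([m[i] + sum(S[i][k] * z[k] for k in range(2))
                    for i in range(2)])                      # step 2: x = m + S z
mean0 = sum(s[0] for s in samples) / N
mean1 = sum(s[1] for s in samples) / N
var0 = sum((s[0] - mean0) ** 2 for s in samples) / N
var1 = sum((s[1] - mean1) ** 2 for s in samples) / N
corr = sum((s[0] - mean0) * (s[1] - mean1)
           for s in samples) / (N * math.sqrt(var0 * var1))
print(mean0, var0, corr)   # should come out near 1.0, 4.0 and 0.4
```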
Hope this helps.
Gregory E. Heath heath@ll.mit.edu The views expressed here are
M.I.T. Lincoln Lab (617) 981-2815 not necessarily shared by
Lexington, MA (617) 981-0908(FAX) M.I.T./LL or its sponsors
02173-9185, USA
On Mon, 21 Oct 1996, in
<326BA7BC.41C67EA6@bechtel.Colordao.edu>, DE ALMEIDA ADELINO wrote:
> I need help in generating correlated random variables for a simulation
> program that I am preparing.
>
> All I've been able to find deals with normal variates and what I need is
> a general routine for variables that may not be normal.
>
> Can someone help me with this? I'm not a mathematician and some of the
> stuff I've come across is hard to digest.
>
> Thank you for your help
>
> Adelino
>
>
Subject: Re: C vs Fortran for numerics: References? (was: Using C for number-crunching)
From: checker@netcom.com (Chris Hecker)
Date: Sun, 3 Nov 1996 01:56:21 GMT
plesser@yadorigi.riken.go.jp (Hans Ekkehard Plesser) writes:
>I have been following the "Using C for number-crunching" for quite a
>while now. Since I am doing a bit of number crunching myself, I would
>like to learn a bit more about advantages and disadvantages of both C
>and FORTRAN and about what is going on behind the scenes. In
>particular, it would be interesting to know what I should do to aid
>the compiler in optimizing my code.
Well, this may not be exactly what you're looking for, but I wrote a
two part series in Game Developer Magazine (www.gdmag.com) on x86 and
PowerPC C compiler optimizations and how to help the compilers do their
job (they need a lot of help). The test code was a 3x3 matrix times a
vector, but a lot of the same issues come up, like alias analysis, code
motion, unrolling, and whatnot.
Chris