Newsgroup sci.math.num-analysis 28458

Directory

Subject: Re: Windows version of laTex? -- From: msh@holyrood.ed.ac.uk ( Mark Higgins)
Subject: Runge-Kutta for IBVP on second order PDE in 4 dim.? -- From: A Montvay {TRACS}
Subject: Non-linear fitting problem -- From: debl@world.std.com (David Lees)
Subject: Re: Help! Numerical libraries in C/C++ -- From: baum@hydra.tamu.edu (Steve Baum)
Subject: Re: PRIME NUMBER UPTO 2^64 on CDROM? -- From: Wayne Schlitt
Subject: Re: Wanted: FFT algorithm -- From: Bill Simpson
Subject: Re: Non-linear fitting problem -- From: Usenet@mjohnson.com (The Johnson Household)
Subject: test - do not read -- From: A Montvay {TRACS}
Subject: Please, stop me! -- From: Jive Dadson
Subject: Re: Using C for number-crunching (was: Numerical solution to -- From: shenkin@still3.chem.columbia.edu (Peter Shenkin)
Subject: Re: PRIME NUMBER UPTO 2^64 on CDROM? -- From: bp887@FreeNet.Carleton.CA (Angel Garcia)
Subject: test -- From: Swami Vivekananda
Subject: Re: non linear cubic spline -- From: Brad Bell
Subject: Re: Need help with integration -- From: "Michael Clark"
Subject: Need help with integration -- From: Brian Gross
Subject: fortran program in wavelet needed -- From: bihu@newstand.syr.edu (Bin Hu)
Subject: Re: Non-linear fitting problem -- From: debl@world.std.com (David Lees)
Subject: Re: Using C for number-crunching (was: Numerical solution to -- From: David Kastrup
Subject: Re: PRIME NUMBER UPTO 2^64 on CDROM? -- From: "Dann Corbit"
Subject: Help with vector spaces, please. -- From: rgelb@engr.csulb.edu (Robert Gelb)
Subject: Re: PRIME NUMBER UPTO 2^64 on CDROM? -- From: tkidd@hubcap.clemson.edu (Travis Kidd)
Subject: Re: Runge-Kutta for IBVP on second order PDE in 4 dim.? -- From: Jan Rosenzweig
Subject: Error propagation for SVD -- From: lynch@gsti.com (David Lynch)
Subject: Re: computational/chaos research -- From: Andy Froncioni
Subject: Re: Help me -- From: "Dann Corbit"
Subject: Re: Wanted: FFT algorithm -- From: agraps@netcom.com (Amara Graps)
Subject: Re: Need help with integration -- From: Dave Dodson
Subject: Re: Using C for number-crunching (was: Numerical solution to Schrodinger's Eq) -- From: jac@ds8.scri.fsu.edu (Jim Carr)
Subject: Re: Using C for number-crunching (was: Numerical solution to Schrodinger's Eq) -- From: jac@ds8.scri.fsu.edu (Jim Carr)
Subject: calibration/interpolation? -- From: Bill Simpson
Subject: PLEASE HELP ME: I need to factor/simplify an algebraic expression. -- From: phaethon1112@earthlink.net (Tai)
Subject: PostDoctoral Position -- From: Alain.Stoessel@ifp.fr (Alain Stoessel)
Subject: PostDoctoral Position in IFP (France) -- From: Alain.Stoessel@ifp.fr (Alain Stoessel)
Subject: PDEase FEA soft from Macsyma-any experiences? -- From: Victor Kharin

Articles

Subject: Re: Windows version of laTex?
From: msh@holyrood.ed.ac.uk ( Mark Higgins)
Date: 4 Nov 1996 14:40:03 GMT
ez062761@rocky.ucdavis.edu (Mike Henry) writes:
>I am looking for a good mathematical word processor like laTex, but for
>Win 3.1.  Shareware preferred, any ideas?
>-- 
read the comp.text.tex faq -- latex is available free for the PC (but
technically it's not shareware)
Mark
--
                    ,,,
                   (o o)
       ________o00__( )__00o_______________________________
          msh@ed.ac.uk           Mark Higgins 	 =>:o} 
Return to Top
Subject: Runge-Kutta for IBVP on second order PDE in 4 dim.?
From: A Montvay {TRACS}
Date: Mon, 4 Nov 1996 14:46:53 GMT
Hi all,
I want to use a Runge-Kutta (or similar) method on an IBVP on the
three-dimensional wave equation
d2p/dx2+d2p/dy2+d2p/dz2-d2p/dt2=0
Most books on numerical solutions of DEs describe Runge-Kutta for
ODEs; some transfer it to first-order PDEs in two dimensions, but I
have not found any for second order in four dimensions.
Therefore any algorithm, program code in (almost)any language,
reference to book/paper or hints on how to do it myself would be very
welcome.
I hope somebody helps me so I don't have to figure it out myself -
after all I'm sure this has been done before!
Andras
Return to Top
Subject: Non-linear fitting problem
From: debl@world.std.com (David Lees)
Date: Mon, 4 Nov 1996 21:07:55 GMT
I recently posted a very general non-linear fitting problem (too
general in fact), which I would like to repost in a more specific
form
-----------------------------------
Problem Statement:
Given a parameterized, multivariate, quadratic function definition
f(X) which takes an N dimensional vector (X)=(x1, x2, x3, ... xN) as
an argument:
f(X) = a_1*x1^2 + a_2*x1 + a_3*x2^2 + a_4*x2 + ... + a_{2N-1}*xN^2 + a_{2N}*xN + a_{2N+1}
where a_i is the i'th coefficient and there are a total of 2N+1 coefficients,
and a set of M measured vectors X1, X2, ..., XM having errors on all
components which are independent between vectors and between components.
Find the parameter values a_1, a_2, ..., a_{2N+1} which cause the function
to "best" ("best" might mean "with minimum mean square error")
satisfy the relation: 
f(X) = 0
for the measured set of M vectors.
The solution parameter set must satisfy the normalization condition:
a1^2+a2^2+...+aN^2=1 in order to avoid the trivial solution that all
parameters are zero.
----------------
The problem with using SVD (Singular Value Decomposition) is that it
assumes errors are linear in the measured variables being fit, and it
sometimes gives poor (biased) results for the quadratic terms.  We are
looking for something like SVD but without this "bias" problem.
The algorithm for solving this problem must find the parameters using
a number of computations that is NOT data dependent.  This rules out
gradient descent methods.
----------------------------------------------
Thanks in advance.
David Lees
debl@world.std.com
Return to Top
Subject: Re: Help! Numerical libraries in C/C++
From: baum@hydra.tamu.edu (Steve Baum)
Date: 4 Nov 1996 22:44:56 GMT
>In article ,
>Paul A Mathew   wrote:
>>Hello all:
>>Can someone give me info on numerical libraries written in C/C++, which
>>are public domain.
>>
>>thanks
>>
>>paul mathew
          Try the listings for C and C++ on my Linux Software for
       Scientists page at:
            http://www-ocean.tamu.edu/~baum/linuxlist.html
       There should be sufficient pointers thereabouts to satisfy
       your quest.  Enjoy.
                                                              skb
-- 
%  Steven K. Baum (baum@astra.tamu.edu) // Physical Oceanography Dept. //
%  Texas A&M; // Ultimate trendy science paper: "Chaotic fuzzy neural
%  wavelet genetic multigrid model of greenhouse warming" //
%  URL = http://www-ocean.tamu.edu/~baum 
Return to Top
Subject: Re: PRIME NUMBER UPTO 2^64 on CDROM?
From: Wayne Schlitt
Date: 4 Nov 1996 17:26:16 -0600
In <55jnsc$9v3@freenet-news.carleton.ca> bp887@FreeNet.Carleton.CA (Angel Garcia) writes:
> software have the function "isprime" among hundreds. Thus it
> is reasonable to assume that very soon such "isprime" function will
isprime, to the best of my knowledge, does primality tests that
aren't 100% guaranteed to be correct.  This isn't quite the same
thing.   (isprime might be exact for "small" primes, but not for the
range up to 2^20-2^64 that is being discussed...)
> Then what for to store 20-digit primes in cdrom ?.
Oh, I am not as interested in the actual cdrom as I am in the idea of
how to store that much 'information'.  
-wayne
-- 
Wayne Schlitt can not assert the truth of all statements in this
article and still be consistent.
Return to Top
Subject: Re: Wanted: FFT algorithm
From: Bill Simpson
Date: Mon, 4 Nov 1996 09:55:47 -0600
Try:
http://www.tu-chemnitz.de/~arndt/joerg.html	
This should really be in the sci.math.num-analysis FAQ, but isn't.
Bill Simpson
Return to Top
Subject: Re: Non-linear fitting problem
From: Usenet@mjohnson.com (The Johnson Household)
Date: Mon, 04 Nov 1996 23:38:22 GMT
debl@world.std.com (David Lees) wrote:
> ... [deletions] ...
>for the measured set of M vectors.
> ...
>The algorithm for solving this problem must find the parameters using
>a number of computations that is NOT data dependent.
> ...
Wow, do you hope to find an algorithm whose running time is
NOT dependent upon the number of datapoints (your "M") ??
--
   Mark Johnson     Silicon Valley, California     mark@mjohnson.com
   "... The world will little note, nor long remember, what is said
    here today..."   -Abraham Lincoln, "The Gettysburg Address"
Return to Top
Subject: test - do not read
From: A Montvay {TRACS}
Date: Mon, 4 Nov 1996 16:35:40 GMT
test
Return to Top
Subject: Please, stop me!
From: Jive Dadson
Date: Mon, 04 Nov 1996 16:14:01 +0000
I've written yet another function minimizer, this one based on the
scaled conjugate gradient _a la_ Moller. It works pretty darn well,
I must say. Of course I HAD to make several improvements. :-)
Its speed is comparable with the BFGS routine in _Numerical Recipes
in C_, and apparently it has none of the robustness problems.
[Added later: A helpful net correspondent may have put his finger
on why the BFGS routine's inverse Hessian becomes non-positive definite.
For necessary speed, I am using approximations in my objective function
(neural network) that may not be smooth enough for a large number
of BFGS updates (based on the gradients). In any case, the new routine
saves the day.]
I find I am not too clear on how one should stop a function-minimizer
before it reaches machine-limit convergence. The NRC routines have
several ways to stop minimizing f(x), including these:
   1. If the gradient f'(x) becomes "small" on a step.
   2. If the difference in x between two steps becomes "small"
   3. The conj-grad routine frprmin() stops when the difference
      in f(x) between two steps becomes "small".
I have doubts about all of those stopping criteria.
   1. In monitoring the city block norm for the gradient in actual,
   noisy problems, I find that it hops around quite a bit, and frequently
   becomes almost as small as machine limits allow it to become
   at the actual minimum. The NRC routine often stops prematurely
   if you don't set "gtol" very low. This is the stopping criterion
   it uses:
        test = 0
        den = max(f(x),1);
        FOR ALL DIMENSIONS i
              temp = abs(gradient[i]) * max(abs(x[i]),1)/den;
              IF temp > test THEN test=temp;
        IF test < gtol THEN quit
    2. The second criterion particularly does not seem appropriate for 
    algorithms that use an approximate line search, because it can cause
    the algorithm to stop because of one ineffective step. Yet, that is the
    other stopping criterion for the NRC BFGS routine dfpmin(), which 
    uses approximate line searches.
        test = 0;
        FOR ALL DIMENSIONS i
            temp = abs(x[i]-prev_x[i])/max(abs(x[i]),1);
            IF temp > test THEN test=temp;
        IF test < small THEN quit
    I also question the division by max(abs(x[i]),1). When training a neural
    network (which is what I am mostly using this for), scaling is 
    appropriate for some parameters ("weights"), but perhaps not for 
    others ("biases"). If you are going to stop on near x-convergence, 
    I think the user needs to specify the x norm.
    3. The third criterion has the problems of the first two: It can
    cause early stopping in a relatively "flat" spot, or when a single
    step is ineffective because of a non-exact line search.
            test = 2.0*abs(f(x)-previous_f_x);
            lim = ftol*(abs(f(x))+abs(previous_f_x)+EPS);
            IF test <= lim THEN quit
It would seem at the very least you should insist that these criteria are
met for some number of consecutive steps before stopping.
Ideally, one would like for the user to be able to specify a small
number e, and say, "Stop when |f(x)-f(M)| < e * abs(f(M))." Obviously that
criterion cannot be tested with certainty. Or the user could specify a
norm ||.|| and say, "Stop when ||x - M|| < e." Is there a way to stop before
complete convergence when it is "very probably" true that you are within an
epsilon of the minimum?  
Gurus, please enlighten.
       Thanks,
       J.
Return to Top
Subject: Re: Using C for number-crunching (was: Numerical solution to
From: shenkin@still3.chem.columbia.edu (Peter Shenkin)
Date: 5 Nov 1996 01:45:02 GMT
In article <55atf7$cpg@lyra.csx.cam.ac.uk>,
Nick Maclaren  wrote:
>In article <553eba$ii1@sol.ctr.columbia.edu>, shenkin@still3.chem.columbia.edu (Peter Shenkin) writes:
>|> In article <5539na$c6@lyra.csx.cam.ac.uk>,
>|> Nick Maclaren  wrote:
>|> >.....  The fact is
>|> >that ANY translation via C will be inefficient, because there is no way
>|> >in C to specify when pointers or data structures can be assumed not to
>|> >be aliased.  ....
>|> 
>|> Well, it depends what you're translating.  For instance, if the Fortran
>|> code being translated only writes to one destination in each function
>|> or subroutine, the C code can const-qualify the underlying data for
>|> the other variables.
>
>Yes and no.  This is possible in the simplest cases, but becomes
>extremely restrictive and tricky for anything else.  
First, I recently posted a disclaimer to two of my earlier postings,
but now I want to disclaim the disclaimer. :-)  Someone had
posted saying that const qualification (in C) does not convey
non-alias information, because the const qualifier guarantees only 
that no attempt will be made to write to the underlying addresses 
*through the const-qualified pointer*.  Consider the following
code fragement: 
void func( float a[], const float b[], const float c, const int n ) {
	int i;
	for( i=0; i<n; i++ ) {
		a[ i ] = a[ i ] + c * b[ i ];
	}
}
I originally stated that the compiler could easily tell that a[] and
b[] do not overlap, because if they did, the const b[] array would in
part be overwritten, but its const qualification says it can't be.
Against this it was argued that the C language guarantees that correct
code must be generated even if a[] and b[] overlap, since "const" only
says that the programmer can't alter b[i] by dereferencing b -- it
remains legal and well-defined to dereference b[]'s data through a.
At the time I agreed, with regrets.
But now (sparked by Nick's posting, which implicitly seemed to concur
with my original) I've looked this up in the C standard, section 6.5.3.
It states, in part, "If an attempt is made to modify an object defined
with a const-qualified type through use of an lvalue with
non-const-qualified type, the behavior is undefined."  Thus, the
standard is quite clearly saying that the compiler can assume from the
const-qualified declaration of b[] that the contents of b[] will not
be altered even by means of dereference through a.  I.e., my original
posting was correct.
>...There is a
>particularly horrible mess to do with multi-level structures,
>such as the following:
>
>void fred (double *a, double **b, int *c, int d) {
>    int i;
>    for (i = 0; i < d; ++i) a[i] = b[c[i]];
>}
>
>You can easily const qualify 'c' and 'b' at ONE level, but the
>casting rules do not allow you to const qualify 'b' at both levels.
>And nor do they allow the caller of the function to do it :-(
Can you not do this with typedefs?
typedef const float CF;		/* const float */
typedef CF *const PCF;		/* const ptr to const float */
typedef PCF *const PPCF;	/* const ptr to const ptr to const float */
void fred( double *a, PPCF b, const int *const c, const int d) ;
	-P.
-- 
****************** In Memoriam, Bill Monroe, 1911 - 1996 ******************
* Peter S. Shenkin; Chemistry, Columbia U.; 3000 Broadway, Mail Code 3153 *
** NY, NY  10027;  shenkin@columbia.edu;  (212)854-5143;  FAX: 678-9039 ***
MacroModel WWW page: http://www.cc.columbia.edu/cu/chemistry/mmod/mmod.html
Return to Top
Subject: Re: PRIME NUMBER UPTO 2^64 on CDROM?
From: bp887@FreeNet.Carleton.CA (Angel Garcia)
Date: 4 Nov 1996 23:49:06 GMT
Santiago Arteaga (arteaga@cs.umd.edu) writes:
> 	What about storing those numbers which are *not* primes,
> but pseudo-primes in base 2,3,5 and 7, to say something? 
> There shouldn't be too many of these, and so checking whether a
> number is prime would be easy; just check for pseudo-primality,
> which is fast, and if they are pseudoprimes check that they are
> not in the CDROM before declaring them primes.
> 
       Yes, Santiago. It seems pretty obvious that the first cdrom to
be published about primes will be (if it is not already) the one you
mention: the pseudo-primes, composite numbers p which divide
         2^(p-1) - 1;   3^(p-1) - 1; etc.
Not only do they have merit by themselves, but they are fundamental for
software programmers who, in order to give the correct answer "true"
to the "isprime(p)" question, have to apply Fermat's little
theorem with 'several' bases 2, 3, 5, etc. to ensure that
no pseudo-prime passes as a 'true prime'.
      If a program 'repeats' Fermat's test with 3 bases b1, b2 and b3,
AND the first pseudo-prime common to these 3 bases already has
100 digits, say, then it follows that such a program CERTIFIEDLY gives
true primes up to 99 digits or less. This is very important.
--
Angel, secretary (male) of Universitas Americae (UNIAM).
     http://www.ncf.carleton.ca/~bp887
Return to Top
Subject: test
From: Swami Vivekananda
Date: Mon, 4 Nov 1996 17:54:26 -0600
test, ignore pls.
Return to Top
Subject: Re: non linear cubic spline
From: Brad Bell
Date: Mon, 04 Nov 1996 16:57:16 -0800
Gleb Beliakov wrote:
> 
> Bob Falkiner  wrote:
> >I am looking for a PC routine that will do a non linear cubic spline,
> >where the "stiffness" of the spline can be an input parameter over a
> >range of 0-1 where 0 would be a cubic spline fit where the spline curve
> >goes through each of the input data points and 1 would be a linear
> >regression.  there is a routine in the SAS mainframe library (NLIN?)
> >that does this.  Are there any alternates, as I can't afford SAS for
> >personal use on the PC.
> 
> Why do you call this spline nonlinear? The spline you describe is a traditional
> cubic smoothing spline, which is a linear combination of B-splines, and the
> coefficients could be found by solving a linear system of equations.
> For instance, look at:
> Lyche, Schumaker, SIAM J. Numer. Anal. 10, 1027-1038 (1973).
> The smoothing parameter is in the range 0-infinity, but you can easily
> transform it into your stiffness parameter.
> 
> The fortran code can be found in netlib,
> the directory is called gcvspl. For C++,Mathematica and Maple code drop me a message.
> 
> Gleb
	Smoothing splines of arbitrary order and in arbitrary dimension are
available with the free version of O-Matrix,
	http://world.std.com/~harmonic
Once you have installed the O-Matrix package, you can use the help to
search for the routine "smosplc" and "smosplv" which
evaluate the spline coefficients and values respectively.
Brad Bell
Return to Top
Subject: Re: Need help with integration
From: "Michael Clark"
Date: 5 Nov 1996 04:06:57 GMT
MathCad solved it as 
(1/3)*ln(x-1) - (1/6)*ln(x^2+x+1) + (1/3^.5)*atan((1/3)*(2*x+1)*3^.5)
Did not have time to do it by hand, but you need to do it by partial
fractions,
i.e.
x/(x^3-1) = 1/(3*(x-1)) - (x-1)/(3*(x^2+x+1))
later Mike Clark
clarkmj@worldnet.att.nat
Brian Gross  wrote in article
<327ED624.3093@worldnet.att.net>...
> What is the integral of the following:
> 
>                  X
>            --------------- dx
>               3
>              X   -  1
> 
> Thanks
> Brian
> 
Return to Top
Subject: Need help with integration
From: Brian Gross
Date: Mon, 04 Nov 1996 21:52:36 -0800
What is the integral of the following:
                 X
           --------------- dx
              3
             X   -  1
Thanks
Brian
Return to Top
Subject: fortran program in wavelet needed
From: bihu@newstand.syr.edu (Bin Hu)
Date: 5 Nov 1996 02:24:46 GMT
Hi 
Can anyone tell me where I can find the fortran program for 2-D wavelet
transform? The source code in Numerical analysis is wrong.
thanks
Bin
Return to Top
Subject: Re: Non-linear fitting problem
From: debl@world.std.com (David Lees)
Date: Tue, 5 Nov 1996 06:20:00 GMT
Oops.  Sorry, I did not state what I want very clearly.  There is
no problem with the run time being dependent on the number of data
points.  I want an upper bound on the number of computations that
is data independent.
David Lees
The Johnson Household (Usenet@mjohnson.com) wrote:
: debl@world.std.com (David Lees) wrote:
: > ... [deletions] ...
: >for the measured set of M vectors.
: > ...
: >The algorithm for solving this problem must find the parameters using
: >a number of computations that is NOT data dependent.
: > ...
: Wow, do you hope to find an algorithm whose running time is
: NOT dependent upon the number of datapoints (your "M") ??
: --
:    Mark Johnson     Silicon Valley, California     mark@mjohnson.com
:    "... The world will little note, nor long remember, what is said
:     here today..."   -Abraham Lincoln, "The Gettysburg Address"
Return to Top
Subject: Re: Using C for number-crunching (was: Numerical solution to
From: David Kastrup
Date: 05 Nov 1996 10:29:00 +0100
shenkin@still3.chem.columbia.edu (Peter Shenkin) writes:
: First, I recently posted a disclaimer to two of my earlier postings,
@> but now I want to disclaim the disclaimer. :-) Someone had posted
@> saying that const qualification (in C) does not convey non-alias
@> information, because the const qualifier guarantees only that no
@> attempt will be made to write to the underlying addresses *through the
@> const-qualified pointer*.
This was me, and I stand by my statement.  See below why.
@>  void func( float a[], const float b[], const float c, const int n ) {
@>  	int i;
@>  	for( i=0; i<n; i++ ) {
@>  		a[ i ] = a[ i ] + c * b[ i ];
@>  	}
@>  }
@>  I originally stated that the compiler could easily tell that a[]
@>  and b[] do not overlap, because if they did, the const b[] array
@>  would in part be overwritten, but its const qualification says it
@>  can't be.  Against this it was argued that the C language
@>  guarantees that correct code must be generated even if a[] and b[]
@>  overlap, since "const" only says that the programmer can't alter
@>  b[i] by dereferencing b -- it remains legal and well-defined to
@>  dereference b[]'s data through a.  At the time I agreed, with
@>  regrets.
And you'll have to agree again.
@> But now (sparked by Nick's posting, which implicitly seemed to
@>  concur
@> with my original) I've looked this up in the C standard, section
@> 6.5.3.  It states, in part, "If an attempt is made to modify an object
@> defined with a const-qualified type through use of an lvalue with
@> non-const-qualified type, the behavior is undefined."
@> Thus, the standard is quite clearly saying that the compiler can
@> assume from the const-qualified declaration of b[] that the contents
@> of b[] will not be altered even by means of dereference through a.
@> I.e., my original posting was correct.
And that's where you are being wrong in your interpretation.  A
pointer, be it const or not, does *not* define an object.  It just
references it.  If an object is *defined* using a const qualifier, the
compiler can assume that the object will never change, legally.  But
the function getting a const pointer to an object does not know if the
object has been defined as const.  That the pointer is a const pointer
tells nothing about the state of the object it points to (except that
we are not going to change the object through just that pointer).
@> Incidently, if you read Dennis Ritchie's official comment to
@> X3J11 in 1989, on the subject of the then-proposed qualifiers,
@> he suggests for const:
@>
@> Add a constraint (or discussion or example) to assignment that makes
@> clear the illegality of assigning to an object whose actual type is
@> const-qualified, no matter what access path is used.  There is a
@> manifest constraint that is easy to check (left side is not
@> const-qualified), but also a non-checkable constraint (left side is
@> not secretly const-qualified).  The effect should be that converting
@> between pointers to const-qualified and plain objects is legal and
@> well-defined; avoiding assignment through pointers that derive
@> ultimately from `const' objects is the programmer's responsibility.
Again, here Ritchie is talking about the const-ness of an object
prohibiting assignments to it ultimately, *not* about the const-ness
of pointers to it.
@> Second, I do agree with Nick that this method of telling the compiler
@> that aliasing is not going to happen is somewhat unwieldy except in
@> simple cases.
It does not work, simply because const pointers are allowed to point
to variable objects, and the only strict guarantees are about const
objects.
-- 
David Kastrup                                       Phone: +49-234-700-5570
Email: dak@neuroinformatik.ruhr-uni-bochum.de         Fax: +49-234-709-4209
Institut fuer Neuroinformatik, Universitaetsstr. 150, 44780 Bochum, Germany
Return to Top
Subject: Re: PRIME NUMBER UPTO 2^64 on CDROM?
From: "Dann Corbit"
Date: 5 Nov 1996 05:37:47 GMT
Wayne Schlitt  wrote in article
...
[snip]
> Oh, I am not as interested in the actual cdrom as I am in the idea of
> how to store that much 'information'.  
I have found a way to store 'lots' of primes in a nicely compressed format.
First, I calculate them using a sieve.  I store the prime list as a list of
bits: each bit is 1 if the number is prime, and 0 if not.  Since all even
numbers except two are not prime, I throw out all of the even bits and just
'remember' that 2 is prime.  Then I compress the bits in blocks of 64K using
Huffman squeezing.  The primality of the first 256 million numbers takes
about ten megabytes of space that way.  The blocks compress better and
better as the density of primes drops.  It would be interesting to see how
much data could be fit onto one cd.
Return to Top
Subject: Help with vector spaces, please.
From: rgelb@engr.csulb.edu (Robert Gelb)
Date: 5 Nov 1996 07:00:36 GMT
Help with homework (vector spaces):
The problem is to find out whether a given set is a vector space.
The problem is as follows:
W' is the set of all ordered pairs (x,y) of real numbers that satisfy
the equation 2x+3y=1.
The answer at the end of the book says that this set is not a vector
space.
Can someone explain to me why?  I would consult the book, but it
provides theoretical examples and no practical ones.
Thanks
Robert
-- 
Robert Gelb
Senior Systems Analyst
Data Express
Garden Grove, California USA
(714)895-8832
Return to Top
Subject: Re: PRIME NUMBER UPTO 2^64 on CDROM?
From: tkidd@hubcap.clemson.edu (Travis Kidd)
Date: 5 Nov 96 14:20:08 GMT
mmacleod@henge.com (Malcolm MacLeod) writes:
>I'm not sure I agree that it is impossible.  Very difficult,
>certainly... but technical data management is my specialty.
>I think I could find a way to cncode most of them onto a CDROM.
>As the numbers get bigger, the primes get farther apart.  That helps.
>So... just how many primes are there below 2^64?
I would guess roughly 2^64/ln(2^64) = 2^64/(64*ln 2) which is (very roughly)
2^58.
-Travis
Return to Top
Subject: Re: Runge-Kutta for IBVP on second order PDE in 4 dim.?
From: Jan Rosenzweig
Date: Tue, 05 Nov 1996 10:12:54 -0500
A Montvay {TRACS} wrote:
> 
> Hi all,
> 
> I want to use a Runge-Kutta (or similar) method on an IBVP on the
> three-dimensional wave equation
> 
> d2p/dx2+d2p/dy2+d2p/dz2-d2p/dt2=0
> 
> Most books on numerical solutions of DE describe Runge-Kutta for ODE
> some transfer it to first-order PDEs in two dimensions, but i have
> not found any for second order in four dimensions.
> 
> Therefore any algorithm, program code in (almost)any language,
> reference to book/paper or hints on how to do it myself would be very
> welcome.
> 
> I hope somebody helps me so I don't have to figure it out myself -
> after all I'm sure this has been done before!
> 
> Andras
   You can't use Runge-Kutta for an IBVP as such: Runge-Kutta is a
method for initial value problems. Why bother with that? What is
wrong with the finite element method?
-- 
Jan Rosenzweig
e-mail: rosen@math.mcgill.ca
office:                                        home:
Department of Mathematics and Statistics       539 Rue Prince Arthur O. 
Burnside Hall, room 1132, mbox F-10            Montreal
805 Rue Sherbrooke O.                          Quebec H2X 1T6
Montreal, Quebec H3A 2K6
Return to Top
Subject: Error propagation for SVD
From: lynch@gsti.com (David Lynch)
Date: Tue, 05 Nov 1996 16:32:59 GMT
I have a matrix for which I would like to compute the SVD, but the
matrix consists of physical data.  How can I compute the accuracy of
the U, V, and Sigmas?  If I have some estimate of the error in each
matrix element, can I get some estimate of the error in the decomposition?
Dave
****************************************************************************
*   David Lynch                                                            *
*   Global Science and Technology                          e-mail:         *
*   6411 Ivy Lane Suite 610                             lynch@gsti.com     *
*   Greenbelt MD. 20770                            <\                      *
*                                                   >\                     *
*            -===================================>:::(0)//////]0           *
*                                                   >/                     *
*   Phone       (301) 474-9696                                             *
****************************************************************************
Return to Top
Subject: Re: computational/chaos research
From: Andy Froncioni
Date: Tue, 05 Nov 1996 13:05:25 -0500
Jason L. Russ wrote:
> 
> Hello,  I am considering doing my Master's research on
> computational issues in modeling a chaotic phenomenon
> and  am looking for suggestions.
> 
One of the newer, more practical areas in chaos research is
related to fluid mixing.  A simple experiment is to look at
a particle in an analytic two vortex system, as in the work of Aref [1]
with the blinking vortex.  
What's interesting is that in 2d flows where the velocity field
is deterministic and simple, particles subjected to the flowfield
exhibit chaos.  There are lots of excellent application areas for
chemical engineering [2].  
Consider taking a good look at this promising new field.
[1] Aref, H., "Stirring by Chaotic Advection", J. Fluid Mech., 
    vol. 143, p. 1, 1984.
[2] Ottino, J.M., "The Kinematics of Mixing: Stretching, Chaos, and
    Transport", Cambridge University Press, 1989.
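The blinking-vortex system of [1] lends itself to a very small simulation. The sketch below (parameter values are my own, not Aref's) uses the fact that while one vortex is "on", a tracer moves exactly on a circle around it, turning through an angle Gamma*T/(2*pi*d^2) at distance d:

```python
import math

def rotate_about(p, c, theta):
    """Exact circular motion of point p about center c by angle theta."""
    x, y = p[0] - c[0], p[1] - c[1]
    ct, st = math.cos(theta), math.sin(theta)
    return (c[0] + ct * x - st * y, c[1] + st * x + ct * y)

def blinking_vortex_period(p, a=1.0, gamma=2.0 * math.pi, T=1.0):
    """One full period: vortex at (-a,0) on for time T, then vortex at (+a,0).
    A tracer at squared distance d2 from the active vortex turns by
    gamma*T/(2*pi*d2)."""
    for c in ((-a, 0.0), (a, 0.0)):
        d2 = (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
        p = rotate_about(p, c, gamma * T / (2.0 * math.pi * d2))
    return p

# iterate the map: the 2d velocity field is deterministic and simple,
# yet tracer trajectories are chaotic for suitable gamma*T
p = (0.5, 0.5)
traj = [p]
for _ in range(200):
    p = blinking_vortex_period(p)
    traj.append(p)
```

Plotting `traj` for a few nearby initial points already shows the stretching and folding that makes such flows good mixers.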
Return to Top
Subject: Re: Help me
From: "Dann Corbit"
Date: 5 Nov 1996 17:55:11 GMT
Assuming that:
x and y are dimensions and z is elevation in some units...
Question:
Is f(x,y) a continuous function for z?
Question:
Is f(x,y) approximated well by flat plates that touch at the named
z coordinates?
Since this is a tiny problem, you could work out a decent
approximation by hand.  Calculate the volume of the rectangular
columns that touch the lowest point in a grid cell.  Calculate the
volume of the rectangular columns that touch the highest point in 
a grid cell. Form a summation of each of these calculations.
Average the two values.  You might choose a grid size of 0.5 where
the points are closer together, and 1 or more where they are sparser.
There are quadrature programs that will perform this calculation.
If you have a function that creates the z coordinate from x and y
and it isn't too painful to evaluate, it might be a good idea to
do a numerical quadrature in two dimensions.
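That bracketing scheme can be sketched in a few lines (the grid of elevations below is made up for illustration, not Mr Bakhtiary's data):

```python
def bracket_volume(z, nx, ny, dx, dy):
    """Lower/upper bounds on the volume under a gridded surface:
    for each cell, a rectangular column through the lowest corner
    (lower bound) and through the highest corner (upper bound),
    then the average of the two sums as the estimate."""
    lo = hi = 0.0
    for i in range(nx - 1):
        for j in range(ny - 1):
            corners = (z[i][j], z[i + 1][j], z[i][j + 1], z[i + 1][j + 1])
            lo += min(corners) * dx * dy
            hi += max(corners) * dx * dy
    return lo, hi, 0.5 * (lo + hi)

# hypothetical 3x3 grid of elevations, spacing 1.0 in x and y
z = [[0.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 0.0]]
lo, hi, est = bracket_volume(z, 3, 3, 1.0, 1.0)
```

The gap hi - lo shrinks as the grid is refined, which gives a built-in accuracy check.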
Mr A.D. Bakhtiary  wrote in article
...
> 
> Hi: 
> 
> I wonder if anybody could help me.
> 
> If I have got 3 column numerical (X,Y,Z) data such below: 
> 
> 
> X	Y	Z
> 0	0	0
> 2	4	1.5
> 4	6	2
> 5	9	3
> 4.5	9	3.5
> 3	6	2
> 1.5	3	1
> 0	0	0
> 
> I want to calculate the volume of the 3D surface plot obtained
> from these data.
> 
> I would be grateful if you help me.
> 
> Amir Bakhtiary
> e-mail: hossein@liverpool.ac.uk  
> 
> 
Return to Top
Subject: Re: Wanted: FFT algorithm
From: agraps@netcom.com (Amara Graps)
Date: Tue, 5 Nov 1996 17:45:00 GMT
Bill Simpson  writes:
>Try:
>http://www.tu-chemnitz.de/~arndt/joerg.html	
>This should really be in the sci.math.num-analysis FAQ, but isn't.
That page is still there, but he has set up another FFT page
at:
http://www.spektracom.de/~arndt/fxt/fxtpage.html
that might be more current.
Amara
-- 
*************************************************************************
Amara Graps                         email: agraps@netcom.com
Computational Physics               vita:  finger agraps@best.com
Multiplex Answers                   URL:   http://www.amara.com/
*************************************************************************
Return to Top
Subject: Re: Need help with integration
From: Dave Dodson
Date: 5 Nov 1996 13:39:52 -0600
In article <327ED624.3093@worldnet.att.net>,
Brian Gross   wrote:
>What is the integral of the following:
>
>    x / (x^3 - 1)  dx
Use the method of partial fractions to write
		x/(x^3-1) = a/(x-1) + (bx+c)/(x^2+x+1)
and integrate the pieces.
Dave
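Carrying the decomposition through (worked out here as a check, not part of the original reply): matching coefficients gives a = 1/3 and bx + c = -(x-1)/3, so

```latex
\frac{x}{x^3-1} = \frac{1}{3}\,\frac{1}{x-1} - \frac{1}{3}\,\frac{x-1}{x^2+x+1},
\qquad
\int \frac{x\,dx}{x^3-1}
  = \tfrac{1}{3}\ln|x-1| - \tfrac{1}{6}\ln\!\left(x^2+x+1\right)
    + \tfrac{1}{\sqrt{3}}\arctan\!\frac{2x+1}{\sqrt{3}} + C.
```

The arctangent term comes from completing the square, x^2+x+1 = (x+1/2)^2 + 3/4.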
Return to Top
Subject: Re: Using C for number-crunching (was: Numerical solution to Schrodinger's Eq)
From: jac@ds8.scri.fsu.edu (Jim Carr)
Date: 5 Nov 1996 21:14:13 -0500
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
>
>Only in the USA!  ISO have superseded it by the Fortran 90 standard.
>ANSI dissented and have preserved both.
 Not ANSI, American corporations.  Under US law, the day after a new 
 standard for "Fortran" was adopted, they would be required to supply 
 a Fortran 90 compiler or be in default of certain contracts.  Hence 
 the decision to give Fortran 90 the name "Fortran Extended". 
 Is every 'fortran' compiler in Europe ISO compliant? 
-- 
 James A. Carr        |  It is election day in the U.S.   
    http://www.scri.fsu.edu/~jac        |  
 Supercomputer Computations Res. Inst.  |  "Vote early and often." 
 Florida State, Tallahassee FL 32306    |        -- my Dad, born in Chicago
Return to Top
Subject: calibration/interpolation?
From: Bill Simpson
Date: Tue, 5 Nov 1996 11:07:47 -0600
I have a calibration problem, and am seeking advice.
An x,y,z scope is displaying dots.  The luminance of a dot is governed
by z.  I step through the values of z, measuring the luminance with a
photometer (automatically).  The point of this is to linearize the z
values, and to calibrate the display.  After doing this I wish to say
plot(x,y,lum2z(100.0));
and get a dot with luminance of 100.0 cd/m^2.  That is, lum2z(lum)
returns the z value that gives a luminance of lum.
So I have measured lum (with error) at many z values (no error).  I wish
to estimate z from lum.  This is called a calibration problem in the
statistics literature (or inverse regression).
I have 4096 z values.  I measure in steps of 9 (from 4095 to 0).
My idea is to fit a high-order polynomial to the (z,lum) data points.
The order has to be high, say 11th or even higher, to get a decent fit.
I would use SVD.  I say the order has to be high from looking at
the actual data and trying various fits.  The fit is done on the
first call of lum2z().  Suppose that only a 2nd order polynomial is
fitted:
lum=b0+b1*z+b2*z^2
Then this is solved for z:
      z = (-b1 + sqrt(b1^2 + 4 b2 lum - 4 b2 b0)) / (2 b2)
or
      z = (-b1 - sqrt(b1^2 + 4 b2 lum - 4 b2 b0)) / (2 b2)
(not sure which one to use, lum and z both constrained to be positive)
Then on subsequent calls of lum2z(), I just use the fitted parameters
b0, b1, b2 in the above equation to get the z value.  This will be very
fast, and speed is important because this function will get a LOT of
work (8100 calls per image, multiple images).
An alternative is to call z the y value and lum the x variable
(even though that's not correct) and fit the polynomial to that.  That
way I avoid the symbolic algebra to solve for z.  This method should be OK
since the errors are very small compared to the range of lum.
[Actually I have just read John Chandler's posting on polynomial
fitting.  I will do it the way he suggests rather than as written above.]
The other options would include
- linear interpolation
- quadratic interpolation
- spline interpolation
- ??
I have tried linear and quadratic interpolation on the data taken as
(lum,z).  They are not dependable.  I especially have problems on the
low z values where the luminance readings are noisy and near 0 and
luminance is not a monotonic function of z.
I have thought about fitting a spline.  The routines I have seen require
lum to be a monotonic function of z.  (I use C).  It seems to me this
will be a slow method, since the spline interpolation must be computed
on every call.
Please let me know if my proposed solution seems reasonable.  If not
what should I do?  Please also suggest available C code.
Thanks very much for any help.
Bill Simpson
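If the quadratic route is taken, the inversion is cheap enough for 8100 calls per image. A minimal sketch of that approach (the coefficients below are invented for illustration, not fitted to the real photometer data):

```python
import math

def make_lum2z(b0, b1, b2):
    """Invert lum = b0 + b1*z + b2*z^2 for z, taking the root with z >= 0
    (assumes b2 > 0 and luminance increasing in z over the calibrated range)."""
    def lum2z(lum):
        disc = b1 * b1 + 4.0 * b2 * (lum - b0)
        if disc < 0.0:
            raise ValueError("luminance outside calibrated range")
        return (-b1 + math.sqrt(disc)) / (2.0 * b2)
    return lum2z

# hypothetical fitted coefficients; the fit would be done once,
# on the first call, from the (z, lum) calibration measurements
lum2z = make_lum2z(0.5, 0.01, 5e-6)
z = lum2z(100.0)    # z value that should display 100 cd/m^2
```

For a higher-order polynomial there is no closed-form inverse, which is one argument for fitting z as a function of lum directly, or for precomputing a 4096-entry lookup table once and indexing into it at display time.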
Return to Top
Subject: PLEASE HELP ME: I need to factor/simplify an algebraic expression.
From: phaethon1112@earthlink.net (Tai)
Date: 6 Nov 1996 05:19:26 GMT
Hi,
I need to factor/simplify: (a^5 - b^5)/(a^3 - b^3)
Please reply before 23:00 Nov.5/96.  Email me at 
phaethon1112@earthlink.net with a list of steps taken to reach final 
answer.  If you are reading this post after Nov.5, please ignore. Thank 
you.
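For the record, both numerator and denominator factor by the identity a^n - b^n = (a-b)(a^{n-1} + a^{n-2}b + ... + b^{n-1}), so the common factor (a-b) cancels:

```latex
\frac{a^5-b^5}{a^3-b^3}
  = \frac{(a-b)\left(a^4+a^3b+a^2b^2+ab^3+b^4\right)}
         {(a-b)\left(a^2+ab+b^2\right)}
  = \frac{a^4+a^3b+a^2b^2+ab^3+b^4}{a^2+ab+b^2},
\qquad a \neq b.
```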
Return to Top
Subject: PostDoctoral Position
From: Alain.Stoessel@ifp.fr (Alain Stoessel)
Date: Wed, 06 Nov 1996 08:00:55 +0100
IFP: Institut Francais du Petrole
1-4 Av de Bois Preau
92852 Rueil-Malmaison Cedex
FRANCE
AVAILABLE POST-DOCTORAL POSITION
Domain Decomposition Methods for Multicomponent Fluid Flow in Porous Media
Duration: 12 Months (up to 18 months)
In basin modeling, the physical phenomena represented are single- or
multi-phase flows that take place in a heterogeneous porous medium whose
geometry changes over time.  This geometry includes regional faults,
salt domes, etc., and can be described as a set of blocks with
non-conforming meshes.  Mathematically, these phenomena are described by
a set of PDEs that must be solved in a given space-time domain.  Although
specific to this problem, such a system of equations is similar to models
used in hydrogeology, reservoir simulation, and the mechanics of porous
media.
Substantial work has already been done within the framework of the CERES
consortium to handle the solution of single-component flows in these
complex sedimentary basins.  Handling compressible multi-component
multi-phase flows with exchange between phases requires an in-depth
study of the numerical method and of how to apply domain decomposition
methods to this new set of PDEs.  In particular, the treatment of the
saturation equation and the need to be perfectly conservative must be
carefully studied.
The goal of the post-doctoral work is to study how to correctly solve
these compressible multi-component multi-phase flows with a domain
decomposition method, and to extend the CERES basin model.
PROFILE: Applicants must be skilled either in the solution of fluid
flow in porous media or in domain decomposition methods applied to fluid flows.
A good knowledge of Finite Volume methods and good practice in
scientific computing are a must.
This position is open to NATO and European candidates (except French).
LOCATION: IFP is an R&D institute of 1800 permanent staff located in PARIS.
To apply, please contact:
Alain STOESSEL
Computer Science and Applied Mathematics Division
Tel:  +33.1.47.52.71.33
Fax:  +33.1.47.52.70.22
Email:   Alain.Stoessel@ifp.fr
Return to Top
Subject: PostDoctoral Position in IFP (France)
From: Alain.Stoessel@ifp.fr (Alain Stoessel)
Date: Wed, 06 Nov 1996 08:02:02 +0100
IFP: Institut Francais du Petrole
1-4 Av de Bois Preau
92500 Rueil-Malmaison
FRANCE
AVAILABLE POST-DOCTORAL POSITION
Numerical methods for turbulent reactive flows in Reciprocating Engines
Duration: 12 Months (may be extended)
Solving reactive flows for the simulation of reciprocating engines requires
the solution of turbulent multi-species fluid flows over a moving
unstructured hybrid mesh.
Mixed Finite Volume / Finite Element methods are currently under study at
IFP for the solution of Navier-Stokes flows on mixed quadrangle/triangle meshes.
Solving multi-species flows with reactive terms can be done by several
approaches (splitting, direct with a multi-species Roe solver, ...) that
need to be investigated.
The goal of this post-doctoral position is to investigate these different
approaches for the solution of these complex turbulent flows.
PROFILE: Applicants must have a good knowledge of numerical methods for
fluid flows.
A good knowledge of mixed FV/FE methods and good practice in scientific
computing are a must.
This position is open to NATO and European candidates (except French).
LOCATION: IFP is an R&D institute of 1800 permanent staff located in PARIS.
To apply, please contact:
Alain STOESSEL
Computer Science and Applied Mathematics Division
Tel:  +33.1.47.52.71.33
Fax:  +33.1.47.52.70.22
Email:   Alain.Stoessel@ifp.fr
Return to Top
Subject: PDEase FEA soft from Macsyma-any experiences?
From: Victor Kharin
Date: Wed, 6 Nov 1996 09:04:59 -0100
Hi..
Has anyone had any good or bad experiences using the PDEase2D finite 
element software from Macsyma Inc.?
I am planning to use this software under Windows NT for nonlinear 
problems of solid mechanics coupled with stress-strain dependent 
transient heat-mass transfer.
I am wondering just about its "nonlinear capabilities", in particular, 
could it serve well for elastoplastic problems with deformational 
Ramberg-Osgood like plasticity (small strains)? How reliable is this 
software?
In general:
Are there any problems with this software that I must be aware of?
How friendly and easy to use is it?
What should I expect from the package?
I would appreciate any comments, suggestions, or ideas.
---------------------------------------------------------------
Victor Kharin                          ETSI Caminos
tel: +34-81-131150, ext. 450           Campus de Elvina
fax: +34-81-132876                     15192 La Coruna
E-mail: kharin@udc.es                  SPAIN
---------------------------------------------------------------
Return to Top
