Subject: Re: Windows version of LaTeX?
From: msh@holyrood.ed.ac.uk ( Mark Higgins)
Date: 4 Nov 1996 14:40:03 GMT
ez062761@rocky.ucdavis.edu (Mike Henry) writes:
>I am looking for a good mathematical word processor like laTex, but for
>Win 3.1. Shareware preferred, any ideas?
>--
Read the comp.text.tex FAQ -- LaTeX is available free for the PC (but
technically it's not shareware).
Mark
--
,,,
(o o)
________o00__( )__00o_______________________________
msh@ed.ac.uk Mark Higgins =>:o}
Subject: Non-linear fitting problem
From: debl@world.std.com (David Lees)
Date: Mon, 4 Nov 1996 21:07:55 GMT
I recently posted a very general non-linear fitting problem (too
general in fact), which I would like to repost in a more specific
form
-----------------------------------
Problem Statement:
Given a parameterized, multivariate, quadratic function definition
f(X) which takes an N dimensional vector (X)=(x1, x2, x3, ... xN) as
an argument:
f(X) = a_1*x1^2 + a_2*x1 + a_3*x2^2 + a_4*x2 + ... + a_{2N-1}*xN^2 + a_{2N}*xN + a_{2N+1}
where a_i is the i'th coefficient and there are a total of 2N+1 coefficients.
and a set of M measured vectors X1, X2, ..., XM having errors on all
components which are independent between vectors and between components.
Find the parameter values a_1, a_2, ... , a_2N+1 which cause the function
to "best" ("best" might mean "with minimum mean square error")
satisfy the relation:
f(X) = 0
for the measured set of M vectors.
The solution parameter set must satisfy the normalization condition:
a1^2+a2^2+...+aN^2=1 in order to avoid the trivial solution that all
parameters are zero.
----------------
The problem with using SVD (Singular Value Decomposition) is that it
assumes errors are linear in the measured variables being fit, and it
sometimes gives poor (biased) results for the quadratic terms. We are
looking for something like SVD but without the "bias" problem.
The algorithm for solving this problem must find the parameters using
a number of computations that is NOT data dependent. This rules out
gradient descent methods.
----------------------------------------------
Thanks in advance.
David Lees
debl@world.std.com
Subject: Re: Non-linear fitting problem
From: Usenet@mjohnson.com (The Johnson Household)
Date: Mon, 04 Nov 1996 23:38:22 GMT
debl@world.std.com (David Lees) wrote:
> ... [deletions] ...
>for the measured set of M vectors.
> ...
>The algorithm for solving this problem must find the parameters using
>a number of computations that is NOT data dependent.
> ...
Wow, do you hope to find an algorithm whose running time is
NOT dependent upon the number of datapoints (your "M") ??
--
Mark Johnson Silicon Valley, California mark@mjohnson.com
"... The world will little note, nor long remember, what is said
here today..." -Abraham Lincoln, "The Gettysburg Address"
Subject: Please, stop me!
From: Jive Dadson
Date: Mon, 04 Nov 1996 16:14:01 +0000
I've written yet another function minimizer, this one based on the
scaled conjugate gradient _a la_ Moller. It works pretty darn well,
I must say. Of course I HAD to make several improvements. :-)
Its speed is comparable with the BFGS routine in _Numerical Recipes
in C_, and it apparently has none of the robustness problems.
[Added later: A helpful net correspondent may have put his finger
on why the BFGS routine's inverse Hessian becomes non-positive definite.
For necessary speed, I am using approximations in my objective function
(neural network) that may not be smooth enough for a large number
of BFGS updates (based on the gradients). In any case, the new routine
saves the day.]
I find I am not too clear on how one should stop a function-minimizer
before it reaches machine-limit convergence. The NRC routines have
several ways to stop minimizing f(x), including these:
1. If the gradient f'(x) becomes "small" on a step.
2. If the difference in x between two steps becomes "small"
3. The conj-grad routine frprmin() stops when the difference
in f(x) between two steps becomes "small".
I have doubts about all of those stopping criteria.
1. In monitoring the city block norm for the gradient in actual,
noisy problems, I find that it hops around quite a bit, and frequently
becomes almost as small as machine limits allow it to become
at the actual minimum. The NRC routine often stops prematurely
if you don't set "gtol" very low. This is the stopping criterion
it uses:
test = 0;
den = max(f(x), 1);
FOR ALL DIMENSIONS i
    temp = abs(gradient[i]) * max(abs(x[i]), 1) / den;
    IF temp > test THEN test = temp;
IF test < gtol THEN quit
2. The second criterion particularly does not seem appropriate for
algorithms that use an approximate line search, because it can cause
the algorithm to stop because of one ineffective step. Yet, that is the
other stopping criterion for the NRC BFGS routine dfpmin(), which
uses approximate line searches.
test = 0;
FOR ALL DIMENSIONS i
    temp = abs(x[i] - prev_x[i]) / max(abs(x[i]), 1);
    IF temp > test THEN test = temp;
IF test < small THEN quit
I also question the division by max(abs(x[i]),1). When training a neural
network (which is what I am mostly using this for), scaling is
appropriate for some parameters ("weights"), but perhaps not for
others ("biases"). If you are going to stop on near x-convergence,
I think the user needs to specify the x norm.
3. The third criterion has the problems of the first two: It can
cause early stopping in a relatively "flat" spot, or when a single
step is ineffective because of an inexact line search.
test = 2.0*abs(f(x) - previous_f_x);
lim = ftol*(abs(f(x)) + abs(previous_f_x) + EPS);
IF test <= lim THEN quit
It would seem at the very least you should insist that these criteria are
met for some number of consecutive steps before stopping.
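That "consecutive steps" guard can be a trivial wrapper around any of the single-step tests above (a sketch of mine; the caller keeps the running count):

```c
/* Require a stopping test to succeed on `needed` consecutive steps
 * before actually stopping.  *streak is the caller-maintained count of
 * successes in a row; any failed step resets it to zero. */
int stop_after_consecutive(int test_passed, int needed, int *streak) {
    if (test_passed)
        (*streak)++;
    else
        *streak = 0;
    return *streak >= needed;
}
```

One ineffective line-search step then merely resets the streak instead of halting the minimizer.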
Ideally, one would like for the user to be able to specify a small
number e and say, "Stop when |f(x)-f(M)| < e * abs(f(M))", where M is the
true minimum. Obviously that criterion cannot be tested with certainty.
Or the user could specify a norm ||.|| and say, "Stop when ||x - M|| < e."
Is there a way to stop before complete convergence when it is "very
probably" true that you are within an epsilon of the minimum?
Gurus, please enlighten.
Thanks,
J.
Subject: Re: Using C for number-crunching (was: Numerical solution to
From: shenkin@still3.chem.columbia.edu (Peter Shenkin)
Date: 5 Nov 1996 01:45:02 GMT
In article <55atf7$cpg@lyra.csx.cam.ac.uk>,
Nick Maclaren wrote:
>In article <553eba$ii1@sol.ctr.columbia.edu>, shenkin@still3.chem.columbia.edu (Peter Shenkin) writes:
>|> In article <5539na$c6@lyra.csx.cam.ac.uk>,
>|> Nick Maclaren wrote:
>|> >..... The fact is
>|> >that ANY translation via C will be inefficient, because there is no way
>|> >in C to specify when pointers or data structures can be assumed not to
>|> >be aliased. ....
>|>
>|> Well, it depends what you're translating. For instance, if the Fortran
>|> code being translated only writes to one destination in each function
>|> or subroutine, the C code can const-qualify the underlying data for
>|> the other variables.
>
>Yes and no. This is possible in the simplest cases, but becomes
>extremely restrictive and tricky for anything else.
First, I recently posted a disclaimer to two of my earlier postings,
but now I want to disclaim the disclaimer. :-) Someone had
posted saying that const qualification (in C) does not convey
non-alias information, because the const qualifier guarantees only
that no attempt will be made to write to the underlying addresses
*through the const-qualified pointer*. Consider the following
code fragment:
void func( float a[], const float b[], const float c, const int n ) {
    int i;
    for( i = 0; i < n; i++ )
        a[ i ] = a[ i ] + c * b[ i ];
}
>...There is a
>particularly horrible mess to do with multi-level structures,
>such as the following:
>
>void fred (double *a, double **b, int *c, int d) {
> int i;
> for (i = 0; i < d; ++i) a[i] = *b[c[i]];
>}
>
>You can easily const qualify 'c' and 'b' at ONE level, but the
>casting rules do not allow you to const qualify 'b' at both levels.
>And nor do they allow the caller of the function to do it :-(
Can you not do this with typedefs?
typedef const double CD; /* const double */
typedef CD *const PCD; /* const ptr to const double */
typedef PCD *const PPCD; /* const ptr to const ptr to const double */
void fred( double *a, PPCD b, const int *const c, const int d );
-P.
--
****************** In Memoriam, Bill Monroe, 1911 - 1996 ******************
* Peter S. Shenkin; Chemistry, Columbia U.; 3000 Broadway, Mail Code 3153 *
** NY, NY 10027; shenkin@columbia.edu; (212)854-5143; FAX: 678-9039 ***
MacroModel WWW page: http://www.cc.columbia.edu/cu/chemistry/mmod/mmod.html
Subject: Re: PRIME NUMBER UPTO 2^64 on CDROM?
From: bp887@FreeNet.Carleton.CA (Angel Garcia)
Date: 4 Nov 1996 23:49:06 GMT
Santiago Arteaga (arteaga@cs.umd.edu) writes:
> What about storing those numbers which are *not* primes,
> but pseudo-primes in base 2,3,5 and 7, to say something?
> There shouldn't be too many of these, and so checking whether a
> number is prime would be easy; just check for pseudo-primality,
> which is fast, and if they are pseudoprimes check that they are
> not in the CDROM before declaring them primes.
>
Yes, Santiago. It seems pretty obvious that the first CD-ROM to be
published about primes will be (if it does not exist already) the one
you mention: the pseudo-primes, i.e. the composite numbers p which
divide 2^(p-1) - 1, 3^(p-1) - 1, etc.
Not only do they have merit in themselves, but they are fundamental for
software programmers who, in order to give the correct answer "true"
to the "isprime(p)" question, have to program Fermat's little
theorem with 'several' bases 2, 3, 5, etc. to ensure that
no pseudo-prime passes as a 'true prime'.
If a program 'repeats' Fermat's test with 3 bases b1, b2 and b3,
AND the first pseudo-prime common to these 3 bases already has
100 digits, say, then it follows that such a program CERTIFIEDLY gives
true primes up to 99 digits or fewer. This is very important.
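The base-2/3/5 Fermat test Angel describes can be sketched in C for n below 2^32, so that the modular products fit in 64 bits (the function names are mine):

```c
#include <stdint.h>

/* b^e mod m by square-and-multiply; m < 2^32 so products fit in 64 bits. */
static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    b %= m;
    while (e > 0) {
        if (e & 1) r = (r * b) % m;
        b = (b * b) % m;
        e >>= 1;
    }
    return r;
}

/* Fermat test with bases 2, 3, 5: returns 0 if n is certainly composite,
 * 1 if n is a probable prime (a rare pseudo-prime common to all three
 * bases would still pass). */
int fermat_probable_prime(uint32_t n) {
    static const uint64_t bases[3] = {2, 3, 5};
    if (n < 2) return 0;
    for (int i = 0; i < 3; i++) {
        if (n == bases[i]) return 1;
        if (powmod(bases[i], n - 1, n) != 1) return 0;
    }
    return 1;
}
```

A composite that passes all three bases is exactly what the proposed CD-ROM lookup would catch.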
--
Angel, secretary (male) of Universitas Americae (UNIAM).
http://www.ncf.carleton.ca/~bp887
Subject: Re: non linear cubic spline
From: Brad Bell
Date: Mon, 04 Nov 1996 16:57:16 -0800
Gleb Beliakov wrote:
>
> Bob Falkiner wrote:
> >I am looking for a PC routine that will do a non linear cubic spline,
> >where the "stiffness" of the spline can be an input parameter over a
> >range of 0-1 where 0 would be a cubic spline fit where the spline curve
> >goes through each of the input data points and 1 would be a linear
> >regression. there is a routine in the SAS mainframe library (NLIN?)
> >that does this. Are there any alternates, as I can't afford SAS for
> >personal use on the PC.
>
> Why do you call this spline nonlinear? The spline you describe is a traditional
> cubic smoothing spline, which is a linear combination of B-splines, and the
> coefficients could be found by solving a linear system of equations.
> For instance, look at:
> Lyche, Schymaker, SIAM J.Numer.Anal 10,1027-1038 (1973).
> The smoothing parameter is in the range 0-infinity, but you can easily
> transform it into your stiffness parameter.
>
> The fortran code can be found in netlib,
> the directory is called gcvspl. For C++,Mathematica and Maple code drop me a message.
>
> Gleb
Smoothing splines of arbitrary order and in arbitrary dimension are
available with the free version of O-Matrix,
http://world.std.com/~harmonic
Once you have installed the O-Matrix package, you can use the help to
search for the routine "smosplc" and "smosplv" which
evaluate the spline coefficients and values respectively.
Brad Bell
Subject: Re: Non-linear fitting problem
From: debl@world.std.com (David Lees)
Date: Tue, 5 Nov 1996 06:20:00 GMT
Oops. Sorry, I did not state what I want very clearly. There is
no problem with the run time being dependent on the number of data
points. I want an upper bound on the number of computations that
is data independent.
David Lees
The Johnson Household (Usenet@mjohnson.com) wrote:
: debl@world.std.com (David Lees) wrote:
: > ... [deletions] ...
: >for the measured set of M vectors.
: > ...
: >The algorithm for solving this problem must find the parameters using
: >a number of computations that is NOT data dependent.
: > ...
: Wow, do you hope to find an algorithm whose running time is
: NOT dependent upon the number of datapoints (your "M") ??
: --
: Mark Johnson Silicon Valley, California mark@mjohnson.com
: "... The world will little note, nor long remember, what is said
: here today..." -Abraham Lincoln, "The Gettysburg Address"
Subject: Re: Using C for number-crunching (was: Numerical solution to
From: David Kastrup
Date: 05 Nov 1996 10:29:00 +0100
shenkin@still3.chem.columbia.edu (Peter Shenkin) writes:
: First, I recently posted a disclaimer to two of my earlier postings,
@> but now I want to disclaim the disclaimer. :-) Someone had posted
@> saying that const qualification (in C) does not convey non-alias
@> information, because the const qualifier guarantees only that no
@> attempt will be made to write to the underlying addresses *through the
@> const-qualified pointer*.
This was me, and I stand by my statement. See below why.
@> void func( float a[], const float b[], const float c, const int n ) {
@> int i;
@> for( i=0; i < n; i++ ) {
@> a[ i ] = a[ i ] + c * b[ i ];
@> }
@> }
@> I originally stated that the compiler could easily tell that a[]
@> and b[] do not overlap, because if they did, the const b[] array
@> would in part be overwritten, but its const qualification says it
@> can't be. Against this it was argued that the C language
@> guarantees that correct code must be generated even if a[] and b[]
@> overlap, since "const" only says that the programmer can't alter
@> b[i] by dereferencing b -- it remains legal and well-defined to
@> dereference b[]'s data through a. At the time I agreed, with
@> regrets.
And you'll have to agree again.
@> But now (sparked by Nick's posting, which implicitly seemed to
@> concur with my original) I've looked this up in the C standard, section
@> 6.5.3. It states, in part, "If an attempt is made to modify an object
@> defined with a const-qualified type through use of an lvalue with
@> non-const-qualified type, the behavior is undefined."
@> Thus, the standard is quite clearly saying that the compiler can
@> assume from the const-qualified declaration of b[] that the contents
@> of b[] will not be altered even by means of dereference through a.
@> I.e., my original posting was correct.
And that's where you are being wrong in your interpretation. A
pointer, be it const or not, does *not* define an object. It just
references it. If an object is *defined* using a const qualifier, the
compiler can assume that the object will never change, legally. But
the function getting a const pointer to an object does not know if the
object has been defined as const. That the pointer is a const pointer
tells nothing about the state of the object it points to (except that
we are not going to change the object through just that pointer).
@> Incidentally, if you read Dennis Ritchie's official comment to
@> X3J11 in 1989, on the subject of the then-proposed qualifiers,
@> he suggests for const:
@>
@> Add a constraint (or discussion or example) to assignment that makes
@> clear the illegality of assigning to an object whose actual type is
@> const-qualified, no matter what access path is used. There is a
@> manifest constraint that is easy to check (left side is not
@> const-qualified), but also a non-checkable constraint (left side is
@> not secretly const-qualified). The effect should be that converting
@> between pointers to const-qualified and plain objects is legal and
@> well-defined; avoiding assignment through pointers that derive
@> ultimately from `const' objects is the programmer's responsibility.
Again, here Ritchie is talking about the const-ness of an object
prohibiting assignments to it ultimately, *not* about the const-ness
of pointers to it.
@> Second, I do agree with Nick that this method of telling the compiler
@> that aliasing is not going to happen is somewhat unwieldy except in
@> simple cases.
It does not work, simply because const pointers are allowed to point
to variable objects, and the only strict guarantees are about const
objects.
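David's point fits in a tiny legal program (my illustration, not from the thread): a const-qualified pointer may refer to a perfectly ordinary mutable object, so its const-ness says nothing about the object staying fixed:

```c
/* `cp` only promises that WE won't modify *cp through cp.  If cp and p
 * refer to the same non-const object, writing through p is legal and
 * well-defined, and a subsequent read through cp sees the new value --
 * so the compiler may not assume *cp is unchanging. */
int read_after_write(const int *cp, int *p) {
    *p = 2;          /* legal: the pointed-to object is not const */
    return *cp;      /* returns 2 when cp and p alias             */
}
```

Calling it as `read_after_write(&x, &x)` on a plain `int x = 1;` is exactly the aliasing the compiler must allow for.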
--
David Kastrup Phone: +49-234-700-5570
Email: dak@neuroinformatik.ruhr-uni-bochum.de Fax: +49-234-709-4209
Institut fuer Neuroinformatik, Universitaetsstr. 150, 44780 Bochum, Germany
Subject: Re: PRIME NUMBER UPTO 2^64 on CDROM?
From: "Dann Corbit"
Date: 5 Nov 1996 05:37:47 GMT
Wayne Schlitt wrote in article ...
[snip]
> Oh, I am not as interested in the actual cdrom as I am in the idea of
> how to store that much 'information'.
I have found a way to store 'lots' of primes in a nicely compressed format.
First, I calculate them using a sieve. I store the prime list as a list of
bits: each bit is 1 if the number is prime, and 0 if not. Since all even
numbers except two are not prime, I throw out all of the even bits and
just 'remember' that 2 is prime. Then, I compress the bits in blocks of
64K using Huffman squeezing. The primality of the first 256 million
numbers takes about ten megabytes of space that way. The blocks compress
better and better as the density of primes drops. It would be interesting
to see how much data could be fit onto one CD.
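The odd-bits-only layout can be sketched in C like this (my sketch; the 64K-block Huffman pass is omitted, this is just the sieve and the bit packing, one bit per odd number):

```c
#include <stdint.h>
#include <string.h>

/* Sieve odd numbers up to `limit`, one bit per odd number: bit k of the
 * table stands for 2k+1.  The even prime 2 is handled by the query
 * function.  `bits` must hold at least (limit/2 + 8) / 8 bytes. */
void sieve_odd_bits(uint8_t *bits, uint32_t limit) {
    uint32_t nodd = limit / 2 + 1;
    memset(bits, 0xFF, (nodd + 7) / 8);       /* assume all prime...   */
    bits[0] &= (uint8_t)~1u;                  /* ...except 1           */
    for (uint32_t p = 3; p * p <= limit; p += 2) {
        if (bits[(p / 2) / 8] & (1u << ((p / 2) % 8)))
            for (uint32_t q = p * p; q <= limit; q += 2 * p)
                bits[(q / 2) / 8] &= (uint8_t)~(1u << ((q / 2) % 8));
    }
}

int is_prime(const uint8_t *bits, uint32_t n) {
    if (n == 2) return 1;                     /* the 'remembered' prime */
    if (n < 2 || n % 2 == 0) return 0;
    return (bits[(n / 2) / 8] >> ((n / 2) % 8)) & 1;
}
```

At one bit per odd number this is 1/16 byte per integer even before compression; the Huffman stage then exploits the thinning density of primes.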
Subject: Re: Runge-Kutta for IBVP on second order PDE in 4 dim.?
From: Jan Rosenzweig
Date: Tue, 05 Nov 1996 10:12:54 -0500
A Montvay {TRACS} wrote:
>
> Hi all,
>
> I want to use a Runge-Kutta (or similar) method on an IBVP on the
> three-dimensional wave equation
>
> d2p/dx2+d2p/dy2+d2p/dz2-d2p/dt2=0
>
> Most books on numerical solutions of DE describe Runge-Kutta for ODE
> some transfer it to first-order PDEs in two dimensions, but i have
> not found any for second order in four dimensions.
>
> Therefore any algorithm, program code in (almost)any language,
> reference to book/paper or hints on how to do it myself would be very
> welcome.
>
> I hope somebody helps me so I don't have to figure it out myself -
> after all I'm sure this has been done before!
>
> Andras
You can't use Runge-Kutta for an IBVP. Runge-Kutta is a method for
initial value problems. Why bother with that? What is wrong with the
finite element method?
--
Jan Rosenzweig
e-mail: rosen@math.mcgill.ca
office: Department of Mathematics and Statistics,
        Burnside Hall, room 1132, mbox F-10,
        805 Rue Sherbrooke O., Montreal, Quebec H3A 2K6
home:   539 Rue Prince Arthur O., Montreal, Quebec H2X 1T6