

Newsgroup sci.math.num-analysis 29505

Directory

Subject: Optimization: expensive objective -- From: jas140@engr.usask.ca (John A Steele)
Subject: Re: Eigenvalue problem of big sparse matrices (tridiagonalizing, Lanczos etc.) -- From: "Hans D. Mittelmann"
Subject: Re: Solution of Polynomials (how?) -- From: ssaguy@agri.huji.ac.il (Shay)
Subject: Fixed point FFT -- From: ssaguy@agri.huji.ac.il (Shay)
Subject: Re: Afternotes on Numerical Analysis -- From: "Jeffery J. Leader"
Subject: Gaussian Elimination -- From: Po-shan Chang
Subject: PI Series needed -- From: danfox@primenet.com (Dan Fox)
Subject: Re: How to determine what values are small enough to be set to zero in SVD? -- From: stewart@cs.umd.edu (G. W. Stewart)
Subject: Re: PI Series needed -- From: phil kenny
Subject: Integration of exponential function -- From: mok100@unity.ncsu.edu (Michael Kyereme)
Subject: Re: Good book for Applications of Group Theory? -- From: fleming@fma2.if.usp.br (Henrique Fleming {F})
Subject: Q: modified Cholesky decomposition (Gill +) -- From: lendl@late.e-technik.uni-erlangen.de (Markus Lendl)
Subject: Re: Matrix operator implementation in C++ -- From: hogan@rintintin.Colorado.EDU (Apollo)
Subject: Re: Eigenvalue problem of big sparse matrices (tridiagonalizing, Lanczos etc.) -- From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Subject: Re: Asin -- From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Subject: Interesting question... -- From: Wayne Hinkin
Subject: Re: Optimization: expensive objective -- From: Hans D Mittelmann
Subject: Algorithms archive -- From: "Cyril Nickanorov"
Subject: Optimization: expensive objective -- From: jas140@engr.usask.ca (John A Steele)
Subject: [W] WANTED: optimized LAPACK ilaenv. -- From: engstler@na.uni-tuebingen.de (Christian Engstler)
Subject: Re: Integration of exponential function -- From: "Dann Corbit"
Subject: Sparse Solvers -- From: "Michael I. Miga"
Subject: eigenvalues -- From: <.,@compuserve.com>
Subject: Re: How to find eigenvalues of "bad" matrix -- From: stewart@cs.umd.edu (G. W. Stewart)
Subject: Midwest Numerical Analysis Day 1997 -- From: keinert@iastate.edu (Fritz Keinert)
Subject: Re: Algorithms archive -- From: "Dann Corbit"
Subject: Re: Integration of exponential function -- From: Gleb Beliakov
Subject: Re: Optimization: expensive objective -- From: hwolkowi@orion.math.uwaterloo.ca (Henry Wolkowicz)
Subject: Re: Sparse Solvers -- From: Hans D Mittelmann
Subject: Re: Optimization: expensive objective -- From: Hans D Mittelmann

Articles

Subject: Optimization: expensive objective
From: jas140@engr.usask.ca (John A Steele)
Date: 13 Jan 1997 23:35:21 GMT
Hi.  I have an optimization problem in 6 variables,
but my objective function is very expensive to 
compute.  We have IMSL here at the U. of S.,
but I wasn't convinced by the documentation that it
offered what I think I need.  Advice, anyone?
Thanks.
Subject: Re: Eigenvalue problem of big sparse matrices (tridiagonalizing, Lanczos etc.)
From: "Hans D. Mittelmann"
Date: Mon, 13 Jan 1997 16:54:28 -0700
Axel Thimm wrote:
> 
> I am looking for numerical attempts to solve the eigenvalue problem of
> big sparse matrices. The best I have found until now is the Lanczos
> method.
> Are there other algorithms that can help?
> Do concrete implementations in C or Fortran exist?
> 
> Best regards, Axel Thimm.
> 
> --
> Axel Thimm 
> Fachbereich Physik, Freie Universitaet Berlin
Hi,
you did not say if the matrix is symmetric or not. Anyhow, there are
codes in netlib, for example, laso. There is ARPACK at
                ftp://ftp.caam.rice.edu/pub/software/ARPACK
-- 
Hans D. Mittelmann			http://plato.la.asu.edu/
Arizona State University		Phone: (602) 965-6595
Department of Mathematics		Fax:   (602) 965-0461
Tempe, AZ 85287-1804			email: mittelmann@asu.edu
Subject: Re: Solution of Polynomials (how?)
From: ssaguy@agri.huji.ac.il (Shay)
Date: Mon, 13 Jan 1997 20:29:20 GMT
On 13 Jan 1997 13:39:38 +1100, rav@goanna.cs.rmit.edu.au (robin)
wrote:
>	Steve writes:
>
>	>Has anybody and  good references? Most Numerical Analysis
>	>book seem to concentrate on quadrature and interpolation :(
>	>		
>	>Specifically I want to concentrate on polys of degree < 10 
>	>(I have implemented up to fourth degree analytically)
>	>Also, most of the time just the least positive root is
>	>required, not all roots (a la Sturm sequence)
>
>	>Thanks,
>	>Steve.
>
>You could take a look at Chapter 13 of "Introduction to PL/I,
>Algorithms, and Structured Programming" (Vowels, 1995), which includes
>a discussion of complex roots of polynomials and Madsen's algorithm.
Or take a look at the Numerical Recipes homepage at http://www.nr.com
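Since the original question asks specifically for the least positive root, here is a minimal C sketch of one simple approach (my own illustration, not Madsen's algorithm from the reference above): evaluate the polynomial by Horner's rule, scan the positive axis for a sign change, then bisect. It misses complex roots and real roots of even multiplicity, so it is a starting point only.

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Evaluate c[0] + c[1]*x + ... + c[n]*x^n by Horner's rule. */
static double poly_eval(const double *c, size_t n, double x)
{
    double p = c[n];
    for (size_t i = n; i-- > 0; )
        p = p * x + c[i];
    return p;
}

/* Least positive root: scan (0, xmax] in steps of 'step' for a sign
 * change, then bisect.  Returns NAN if no sign change is found.
 * Misses complex roots and roots of even multiplicity. */
double least_positive_root(const double *c, size_t n, double xmax, double step)
{
    double a = step * 1e-9, fa = poly_eval(c, n, a);  /* start just above 0 */
    for (double b = step; b <= xmax; b += step) {
        double fb = poly_eval(c, n, b);
        if (fb == 0.0) return b;
        if (fa * fb < 0.0) {                          /* bracketed: bisect */
            for (int i = 0; i < 100; i++) {
                double m = 0.5 * (a + b), fm = poly_eval(c, n, m);
                if (fa * fm <= 0.0) b = m;
                else { a = m; fa = fm; }
            }
            return 0.5 * (a + b);
        }
        a = b; fa = fb;
    }
    return NAN;                                       /* none found */
}
```

For polynomials of degree below 10, as in the question, the scan step just has to be smaller than the gap between consecutive real roots of interest.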
Subject: Fixed point FFT
From: ssaguy@agri.huji.ac.il (Shay)
Date: Mon, 13 Jan 1997 20:27:18 GMT
Hi,
	I'm looking for a fixed-point (preferably integer) FFT
algorithm.  The input is a 16-bit integer and the output a 32-bit
integer which might need scaling.  All the 'butterfly' multiplications
and additions should be done using integers (16 bit) or long integers
(32 bit).
	Does anyone have any tips?
TIA,
	shayv
	email: ssaguy@agri.huji.ac.il
PS, please reply via email as well.
Subject: Re: Afternotes on Numerical Analysis
From: "Jeffery J. Leader"
Date: Mon, 13 Jan 1997 18:21:30 -0800
Michael Stoecker wrote:
> Maybe more relevantly, Strikwerda's
> "Finite Difference Schemes and Partial Differential Equations" isn't too
> bad.
Thanks for the suggestions.  I like the Strikwerda text in a lot of ways
but it just wasn't quite at a high enough level for the course I'm
giving now; I do flip through it for ideas on presentation though.  But
I'll look into the Elaydi text.  I used the Smith text at NPS a few
years ago for a course for mech. engs. and find it serviceable but,
well...dry.  I'm using it again but with reservations.
-- 
The business side is easy-easy!  If you're any good at math at 
all, you understand business.  It's not like its own deep, deep 
subject.  It's not C++.
 -William Gates
Subject: Gaussian Elimination
From: Po-shan Chang
Date: Mon, 13 Jan 1997 20:57:59 -0500
Hi,
I am trying to write a function to draw one interpolated spline through
THREE points.  Currently, I have a function which takes FOUR points and
draws a spline.  I hear that the way to go from three points to four
points is to use Gaussian elimination, but I am not quite familiar with
it.  Can someone give me some references or C code related to this
problem?
>Given 3 points, A, B, and C, I need to generate A' and C' such that when
>you draw a regular spline using A, A', C', and C, it passes through B.
>The only way I could think of is to specify a set of linear equations
>and solve for A' and C' using Gaussian Elimination.  
Thank you.
Paul.
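The linear equations for A' and C' depend on which spline basis is being used, so they cannot be written down here; but the Gaussian elimination part is generic. A small C sketch with partial pivoting (the function name and the fixed column bound of 8 are my own simplifications):

```c
#include <assert.h>
#include <math.h>

/* Solve A x = b (n <= 8) by Gaussian elimination with partial pivoting.
 * A and b are overwritten; returns 0 on success, -1 if near-singular. */
int gauss_solve(int n, double A[][8], double b[], double x[])
{
    for (int col = 0; col < n; col++) {
        int piv = col;                         /* largest pivot in column */
        for (int r = col + 1; r < n; r++)
            if (fabs(A[r][col]) > fabs(A[piv][col])) piv = r;
        if (fabs(A[piv][col]) < 1e-12) return -1;
        for (int c = 0; c < n; c++) {          /* swap rows col <-> piv */
            double t = A[col][c]; A[col][c] = A[piv][c]; A[piv][c] = t;
        }
        double t = b[col]; b[col] = b[piv]; b[piv] = t;
        for (int r = col + 1; r < n; r++) {    /* eliminate below pivot */
            double f = A[r][col] / A[col][col];
            for (int c = col; c < n; c++) A[r][c] -= f * A[col][c];
            b[r] -= f * b[col];
        }
    }
    for (int r = n - 1; r >= 0; r--) {         /* back substitution */
        double s = b[r];
        for (int c = r + 1; c < n; c++) s -= A[r][c] * x[c];
        x[r] = s / A[r][r];
    }
    return 0;
}
```

Set up the equations "spline(A, A', C', C) passes through B" in this form and the solver returns the unknown control points.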
Subject: PI Series needed
From: danfox@primenet.com (Dan Fox)
Date: 13 Jan 1997 18:58:05 -0700
Hello -
I need the infinite series used to calculate PI. Can anyone help?
Thanks
Dan Fox
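There is no single "the" series; the Leibniz series pi/4 = 1 - 1/3 + 1/5 - ... is the best known but converges far too slowly to be useful. A classical fast alternative is Machin's formula, pi = 16 arctan(1/5) - 4 arctan(1/239), with each arctangent summed from its Taylor series. A short C sketch (function names are mine):

```c
#include <assert.h>
#include <math.h>

/* arctan(1/m) from its Taylor series:
 * arctan(x) = x - x^3/3 + x^5/5 - ...  with x = 1/m. */
static double arctan_inv(double m)
{
    double sum = 0.0, term = 1.0 / m, mm = m * m;
    for (int k = 0; fabs(term) > 1e-17; k++) {
        sum += (k % 2 == 0 ? term : -term) / (2*k + 1);
        term /= mm;          /* next odd power of 1/m */
    }
    return sum;
}

/* Machin's formula (1706): pi = 16*arctan(1/5) - 4*arctan(1/239). */
double machin_pi(void)
{
    return 16.0 * arctan_inv(5.0) - 4.0 * arctan_inv(239.0);
}
```

Each term of the arctan(1/5) series is 25 times smaller than the last, so about two dozen terms give full double precision.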
Subject: Re: How to determine what values are small enough to be set to zero in SVD?
From: stewart@cs.umd.edu (G. W. Stewart)
Date: 14 Jan 1997 00:26:20 -0500
In article <5b65mi$9tc@oxywhite.interaccess.com>,
Billy Leung  wrote:
#
#In SVD solution of linear equation, one often has to zero out certain
#small values to proceed.  How do you actually determine a value is small
#enough in reference to a particular problem?
#
#Thanks for your insight
#
The problem has no easy answers.  Some time ago I wrote a survey:
G. W. Stewart
Determining Rank in the Presence of Error
UMIACS TR-92-108, CS TR-2972, October 1992
Appeared in Linear Algebra for Large Scale and Real-Time Applications,
Moonen, Golub, and De Moor, eds., Kluwer Academic Publishers,
Dordrecht, 1992.
The TR can be obtained by anonymous ftp at thales.cs.umd.edu in
pub/reports.
Pete Stewart
Subject: Re: PI Series needed
From: phil kenny
Date: Mon, 13 Jan 1997 21:19:07 -0800
Dan Fox wrote:
> 
> Hello -
> 
> I need the infinite series used to calculate PI. Can anyone help?
> 
> Thanks
> 
> Dan Fox
Several series used to calculate Pi may be found at:
http://daisy.uwaterloo.ca/~alopez-o/math-faq/node12.html#SECTION00510000000000000000
Another link to very fast converging algorithms is:
http://mosaic.cecm.sfu.ca/organics/papers/borwein/paper/html/paper.html
Regards,
phil kenny
Subject: Integration of exponential function
From: mok100@unity.ncsu.edu (Michael Kyereme)
Date: Thu, 09 Jan 1997 10:27:20 -0600
Hello:
I am trying to integrate an exponential function using the numerical       
methods: Trapezoidal and Simpson's Rule coded in C.
The function is of the form:
         f(x) = exp(-B)
The limits of integration could be from x = 0 to 12 (or 120, or some other
simple integer).
Within this interval, the exponent, B (computed from another expression), 
takes on values which range from 10 to 180.
Thus the function f(x) tends to be extremely small due to the
large and negative nature of the exponent: e.g.  
    f(120) = exp(-120).
Some computers might evaluate such an expression to zero but mine doesn't.
Integration of the above function appears to be a formidable task. 
Is there a way around this problem?  Any help as to how I should
approach the problem will be very much appreciated.
  Thank you in advance.
  Michael.
Subject: Re: Good book for Applications of Group Theory?
From: fleming@fma2.if.usp.br (Henrique Fleming {F})
Date: 13 Jan 1997 22:16:50 -0800
Chris O'Donovan (odonovan@physun.cis.mcmaster.ca) wrote:
: In article ,
: Lou Pecora  wrote:
: ] At present I am using an old version of Tinkham's book on Group Theory and
: ] Quantum Mechanics...
Y. Tanabe, Y. Onodera, T. Inui, "Group Theory and Its Applications
    in Physics", Springer Verlag, ISBN 3540604456
is an excellent text. It is a kind of highly improved Tinkham. Very
clearly and thoughtfully written. I used it in my lectures on
group theory. After reading the first chapters of it you will
also be able to enjoy the magnificent chapters on group theory
in the famous Landau, Lifshitz "Quantum Mechanics".
---------------------------------------------------------------
Henrique Fleming                    La duda, una de las formas
University of Sao Paulo, Brazil     de la inteligencia... 
fleming@fma.if.usp.br                    J.L. Borges 
Subject: Q: modified Cholesky decomposition (Gill +)
From: lendl@late.e-technik.uni-erlangen.de (Markus Lendl)
Date: 14 Jan 1997 08:21:08 GMT
I have serious problems implementing the modified CD of
Gill, Murray, Wright ('practical optimization', 1981, pp 109-111).
I feel that line 3 of algorithm MC (p 111): 
'[...] Interchange all information corresponding to rows 
and columns q and j of G_k' 
does _not_ mean a simple symmetric pivoting. Otherwise it should run!
Any comments?                        ---markus
______________________________________________________________________________
Markus Lendl           lendl@late.e-technik.uni-erlangen.de
(research assistant,   http://late5.e-technik.uni-erlangen.de/user/lendl.html
  PhD candidate)       Tel.: x49/9131/85-7787
                       Fax.: x49/9131/13435
Subject: Re: Matrix operator implementation in C++
From: hogan@rintintin.Colorado.EDU (Apollo)
Date: 13 Jan 1997 17:35:30 GMT
In article <01bbfe92$09896460$8b7daccf@computek>,
Stephen W. Hiemstra  wrote:
>I want to implement a simple addition operator in C++ (Borland C++ 5.01). 
>I was able to implement a += operator just fine, but the straight +
>operator poses a problem in returning a value that will not affect my
>existing matrix.  The problem comes in creating an appropriate temporary
>matrix to return the new matrix value.   As shown below, the obvious and
>erroneous answer (a local temporary) is inappropriate.  How do I handle a
>global or static return matrix properly (that is, avoiding a memory leak)?
>
>Stephen
If you're gonna have big matrices, what I would do is implement the
  matrices with reference counting, then implement the copy constructor
  and assignment operators so that they share representations.  You can
  then return by value and the only cost will be a few bit twiddles and
  pointer copying.
That is:
class MatrixRep {
private:
	int	refs;
	double**	data;
	...etc...
};
class Matrix {
private:
	MatrixRep*	rep;
public:
	Matrix(const Matrix& m)
	{
		rep = m.rep;
		rep->IncrementReferenceCount();
	}
	~Matrix()
	{
		rep->DecrementReferenceCount();
		if (rep->NoMoreReferences())
			delete rep;
	}
	Matrix operator+ (const Matrix& m)
	{
		/* Note the deep copy here, so we don't
 		   munge 'this' */
		Matrix temp = this->DeepCopy();
		temp += m;
		return temp;
	}
};
--Apollo
-- 
Apollo's .sig of the hour for Mon Jan 13 09:25:44 MST 1997:
  DEEP THOUGHT:If you go flying back through time and you see somebody
      else flying forward into the future, it's probably best to avoid eye
      contact.
Subject: Re: Eigenvalue problem of big sparse matrices (tridiagonalizing, Lanczos etc.)
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Date: 14 Jan 1997 11:35:09 GMT
In article <32dab4dc.3252140@news.fu-berlin.de>, thimm@physik.fu-berlin.de (Axel Thimm) writes:
|> I am looking for numerical attempts to solve the eigenvalue problem of
|> big sparse matrices. The best I have found until now is the Lanczos
|> method.
|> Are there other algorithms that can help?
|> Do concrete implementations in C or Fortran exist?
|> 
|> Best regards, Axel Thimm.
|> 
|> --
|> Axel Thimm 
|> Fachbereich Physik, Freie Universitaet Berlin
For LANCZOS  there are several packages in netlib
(http://netlib.no or http://netlib.org) 
For very large dimension the simultaneous vector iteration 
developed by Rutishauser is preferred 
by many workers in the field. For the symmetric case 
there is the code "ritzit" in the Wilkinson-Reinsch book for numerical 
linear algebra (in ALGOL 60).  I may supply a f77-translation if needed.
For the nonsymmetric case there is some code described in J. Comp. Phys.
(I lost the exact source). See also the book by Youcef Saad 
"Numerical methods for large eigenvalue problems" published by 
Manchester University Press. 
Hope this helps
peter
Subject: Re: Asin
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Date: 14 Jan 1997 11:37:42 GMT
In article , peter_rasmussen@fcgate.aapda.org.au (Peter Rasmussen) writes:
|> Hello
|> 
|> Does anyone know how to calculate ASIN or ACOS?
|> 
|> I am working in a program called Director.  It has a SIN and a COS command
|> but not ASIN or ACOS.
|> 
|> If anyone can shed light on this I would be most grateful.
|> 
|> Cheers
|> 
|> Peter
The book by Hart and alii "Computer approximations" (published by
SIAM) contains all the information you need in order to build ASIN
and similar routines yourself. 
hope this helps
peter
Subject: Interesting question...
From: Wayne Hinkin
Date: Tue, 14 Jan 1997 08:16:23 +0000
Two mathematicians are each given a positive whole number.  Each knows
his / her own number but neither one knows the other.  They are told
that the product of their numbers is either 8 or 16.  
At some point, one of the mathematicians knows the other number.  
What is the number and logically explain / prove it.
Just for fun...
Subject: Re: Optimization: expensive objective
From: Hans D Mittelmann
Date: Tue, 14 Jan 1997 08:23:19 -0700
John A Steele wrote:
> 
> Hi.  I have an optimization problem in 6 variables,
> but my objective function is very expensive to
> compute.  We have IMSL here at the U. of S.,
> but I wasn't convinced by the documentation that it
> offered what I think I need.  Advice, anyone?
> 
> Thanks.
Hi,
in case your problem is unconstrained and you do not have a gradient,
you may try codes such as ftp://ftp.netlib.org/opt/praxis or subplex in
the same place. If you have constraints, try COBYLA at
          ftp://plato.la.asu.edu/pub/other_software/
-- 
Hans D. Mittelmann			http://plato.la.asu.edu/
Arizona State University		Phone: (602) 965-6595
Department of Mathematics		Fax:   (602) 965-0461
Tempe, AZ 85287-1804			email: mittelmann@asu.edu
Subject: Algorithms archive
From: "Cyril Nickanorov"
Date: 14 Jan 1997 15:36:57 GMT
Hello!
Does anybody know where on the Internet I can find an algorithms
library, especially for numerical differentiation & integration,
mathematical physics, etc.?
Best regards.
-- 
Cyril Y. Nickanorov 
E-mail: cyril@orc.ru
Subject: Optimization: expensive objective
From: jas140@engr.usask.ca (John A Steele)
Date: 14 Jan 1997 20:15:11 GMT
John A Steele wrote:
> 
> Hi.  I have an optimization problem in 6 variables,
> but my objective function is very expensive to
> compute.  We have IMSL here at the U. of S.,
> but I wasn't convinced by the documentation that it
> offered what I think I need.  Advice, anyone?
> 
> Thanks.
It may seem I'm answering my own post, but I received
5 emails in response to my post, only one of which
is on the newsgroup currently.  I appreciate the 
advice and the added circulation some readers have
clearly given to my post.  I'm posting again to
give details some respondents had asked for but
with only one post.
Basically, I'm trying to optimize the parameters 
describing an optical density function in a
long narrow circular sector domain.  I have
some synthesized noisy tomographic data generated
on a field described by unknown parameters from
two viewing angles very close to each other and 
very close to the long direction of the domain.
The objective function is the 2-norm of the
difference between the tomographic data on the 
field described by current parameter values
and the synthesized tomographic data from the
unknown field.  There are about 40 distinct 
rays of data to use for reconstruction.
Thanks again.
Subject: [W] WANTED: optimized LAPACK ilaenv.
From: engstler@na.uni-tuebingen.de (Christian Engstler)
Date: 14 Jan 1997 21:45:19 GMT
I'm looking for optimized versions of the LAPACK routine ILAENV.
(ILAENV is called from the LAPACK routines to choose problem-dependent
parameters for the local environment).
We have a couple of Sun SPARC 20 (one of them with a SuperCache), one 
UltraSPARC, a couple of PPro 200 PC's running Solaris x86 and two P130 
running Linux.
  The comment section of ILAENV states:
  Users are encouraged to modify this subroutine to set the tuning 
  parameters for their particular machine using the option and problem 
  size information in the arguments.
I can't believe that this has not been done by some of you yet :). 
Many thanks in advance,
-- 
Christian Engstler, Dept. of Mathematics, University of Tuebingen
72076 Tuebingen, FRG          email: engstler@na.uni-tuebingen.de           
Subject: Re: Integration of exponential function
From: "Dann Corbit"
Date: 14 Jan 1997 19:41:12 GMT
Don't do it numerically, do it symbolically.  Since the derivative of
exp(x) is exp(x), the integral is also exp(x).  Hence, all you have to do
is calculate the two endpoints and find the difference!  There is nothing
formidable about it.
Michael Kyereme  wrote in article
...
> Hello:
>         
> I am trying to integrate an exponential function using the numerical     
> methods: Trapezoidal and Simpson's Rule coded in C.
> The function is of the form:
>     
>          f(x) = exp(-B)
>          
> The limits of integration could be from x = 0 to 12 (or 120, or some
> other simple integer).
> Within this interval, the exponent, B (computed from another expression),
> takes on values which range from 10 to 180.
> Thus the function f(x) tends to be extremely small due to the
> large and negative nature of the exponent: e.g.  
>     f(120) = exp(-120).
> Some computers might evaluate such an expression to zero but mine
> doesn't.
> 
> Integration of the above function appears to be a formidable task. 
> Is there a way around this problem?  Any help as to how I should
> approach the problem will be very much appreciated.
>   Thank you in advance.
>   Michael.
> 
Subject: Sparse Solvers
From: "Michael I. Miga"
Date: Tue, 14 Jan 1997 15:53:22 +0000
Hello,
I am looking for an iterative solver which uses compressed sparse row
(CSR) format to solve a large system of equations.
I have used ITPACK but have run into some problems when iterating in
time.  Also ITPACK doesn't have a preconditioner.
I tried to use SPLIB but could not seem to get anywhere with it.
Does anyone know of a good iterative solver that is ANSI standard
fortran 77 and works well?
Thanks in advance,
Mike Miga
michael.miga@dartmouth.edu
Subject: eigenvalues
From: <.,@compuserve.com>
Date: 14 Jan 1997 19:39:07 GMT
  I don't know if this will help, but if we let A' and (I-B)' stand for the
inverses, if they exist, then:
  (A - BA)x = kx  becomes  (I-B)y = kA'y  or  Ax = k(I-B)'x, where y = Ax.
 Now if either A or (I-B) is positive or negative definite, you at least have a 
recasting of the original problem into a well understood one.  It might also
point to Lanczos-type solution algorithms.
Regards,  Chuck Crawford
Subject: Re: How to find eigenvalues of "bad" matrix
From: stewart@cs.umd.edu (G. W. Stewart)
Date: 14 Jan 1997 00:13:43 -0500
In article <32D39E85.41C6@damtp.cam.ac.uk>,
Tom Chou   wrote:
#Hello,
#
#I have an infinite, real, nonsymmetric square matrix 
#of which I want to find the lowest 10 or so eigenvalues. 
#I am taking larger and larger truncations and seeing if the eigenvalues
#converge. I am using balancing, then reduction to Hessenberg form, then 
#use a QR algorithm as described in Numerical Recipes. 
#
#However, for my particular matrix, I find that the eigenvalues don't
#quite converge at 40 X 40, where the algorithm uses too 
#many iterations and exits (the lowest eigenval. changes by ~5% in going
#from 20 X 20 to 40 X 40).  Looking at the qualitative trends, I figure
#I need about a 400 X 400 truncation in the worst cases. 
#
#I think the problem is that the off diagonals get very large
#numerically. The matrix elements go as n^2*m^3, so numerically 
#get very large as one goes down the diagonal (~n^5) or, far away from
#the diagonals.
#
#
#My questions are:
#
#(1) Are there analytical bounds on how large a matrix I 
#need to take for a required accuracy in the lowest few
#eigenvalues? Where can I find theories about the convergence of
#the eigenvalues as the matrix is taken to be larger and larger?
#
#(2) What codes should I use? Can I simply reset the 
#number of iterations in the Numerical Recipes routines
#without catastrophic consequences? Are there other 
#routines/packages suited for this kind of matrix?
#
#(3)  Now suppose that each matrix element now depends on a parameter,
#s. I want to plot the eigenvalues as a function of s. Are there 
#theorems which say whether or not any eigenvalues are degenerate?
#Or in particular, whether the lowest eigenvalue at one value
#s=s0 can become larger than, say, the 2nd lowest
#at a different value s=s1? Is it possible to say that the lowest
#eigenval. is ALWAYS lower than the second lowest, for all s in 
#some range?
#
#This problem is related to band structure/Floquet matrices.
#Any suggestions on where to look for the answers will be greatly 
#appreciated.
#
#Thx,
#
#Tom
#
Depending on the routine you use, you may want to present
your matrix with the diagonals in reverse order.  In particular,
if your eigenvalue program begins with a preliminary reduction
to Hessenberg form by Householder transformations, you want the
grading of your matrix to be downward.
Pete Stewart
Subject: Midwest Numerical Analysis Day 1997
From: keinert@iastate.edu (Fritz Keinert)
Date: 14 Jan 1997 13:28:02 GMT
		 MIDWEST NUMERICAL ANALYSIS DAY 1997
		       Saturday, April 12, 1997
		  Iowa State University, Ames, Iowa
Participants:
  This conference is aimed at faculty members, graduate students and
  visitors from universities in the central US. Ivo Babuska has
  tentatively agreed to give an invited talk. For other featured
  speakers, as well as the contributed talks, check the conference web
  site periodically.
Organizers: 
  Roger Alexander (alex@iastate.edu, (515) 294-7579) 
  Fritz Keinert (keinert@iastate.edu, (515) 294-5223)
Deadline:
  If you are interested in presenting a 20-minute talk, submit a title
  and abstract by March 17, 1997, either through the conference web
  page, via e-mail to naday@iastate.edu, or to one of the organizers.
Information:
  Information concerning the conference is available on the World Wide
  Web at http://www.math.uwm.edu/Midwest_NA_Day. 
Special Note:
  The joint annual meeting of the Iowa sections of MAA/ASA/IMATYC will
  be held in the same building on the same day. There will be
  opportunity to hear talks or socialize with participants from both
  conferences.
-- 
Fritz Keinert                                   phone:  (515) 294-5223
Department of Mathematics                       fax:    (515) 294-5454
Iowa State University                      e-mail: keinert@iastate.edu
Ames, IA 50011			   http://www.math.iastate.edu/keinert
Subject: Re: Algorithms archive
From: "Dann Corbit"
Date: 14 Jan 1997 19:38:54 GMT
Do a web search for netlib.
Cyril Nickanorov  wrote in article
<01bc0230$91ac3880$ef5857c2@cyril.orc.ru>...
> Hello!
> 
> Does anybody know where on the Internet I can find an algorithms
> library, especially for numerical differentiation & integration,
> mathematical physics, etc.?
Subject: Re: Integration of exponential function
From: Gleb Beliakov
Date: Wed, 15 Jan 1997 09:22:15 +1100
Michael Kyereme wrote:
> 
> Hello:
> 
> I am trying to integrate an exponential function using the numerical
> methods: Trapezoidal and Simpson's Rule coded in C.
> The function is of the form:
> 
>          f(x) = exp(-B)
> 
> The limits of integration could be from x = 0 to 12 (or 120, or some other
> simple integer).
> Within this interval, the exponent, B (computed from another expression),
> takes on values which range from 10 to 180.
> Thus the function f(x) tends to be extremely small due to the
> large and negative nature of the exponent: e.g.
>     f(120) = exp(-120).
> Some computers might evaluate such and expression to zero but mine doesn't.
> 
> Integration of the above function appears to be a formidable task.
> Is there a way to go around this problem. Any help as to how I should
> approach the problem wil be very much appreciated.
I just don't understand why you do it numerically.  Do the integration
analytically if you have such a simple function (I assume you meant
f(x)=exp(-x) and the limits of integration vary).  If your B is not x,
then you have another function, and it is not just an exponential you
are integrating.
By the way, for the exponential, Gauss quadrature with just 7 knots gives
you the same precision as Simpson's rule with 300 knots of integration!
Apart from losing precision because of underflow, I don't see why there
should be any difficulty integrating the exponential.  Just run the
summation in your method not from 0 to 120 but from 120 to 0, so that
the small values are summed first.
Gleb
Subject: Re: Optimization: expensive objective
From: hwolkowi@orion.math.uwaterloo.ca (Henry Wolkowicz)
Date: Tue, 14 Jan 1997 14:20:07 GMT
In article <5begrp$bh5@tribune.usask.ca>,
John A Steele  wrote:
>Hi.  I have an optimization problem in 6 variables,
>but my objective function is very expensive to 
>compute.  We have IMSL here at the U. of S.,
>but I wasn't convinced by the documentation that it
>offered what I think I need.  Advice, anyone?
>
>Thanks.
How many constraints? Expensive constraints?
How many derivatives can you provide?
For a problem with expensive function evaluations, I think that you 
should try and use as many derivatives as possible, i.e. the search
direction should be very good so that line searches can be done
very 'weakly'.
So - aim at something like Newton's method for unconstrained - or
a trust region method. 
For a constrained problem, what are the constraints like?
-- 
||Henry Wolkowicz                |Fax:   (519) 725-5441
||University of Waterloo         |Tel:   (519) 888-4567, 1+ext. 5589
||Dept of Comb and Opt           |email:  henry@orion.math.uwaterloo.ca
||Waterloo, Ont. CANADA N2L 3G1  |URL: http://orion.math.uwaterloo.ca/~hwolkowi
Subject: Re: Sparse Solvers
From: Hans D Mittelmann
Date: Tue, 14 Jan 1997 19:39:16 -0700
Michael I. Miga wrote:
> 
> Hello,
> 
> I am looking for an iterative solver which uses compressed sparse row
> (CSR) format to solve a large system of equations.
> 
> I have used ITPACK but have run into some problems when iterating in
> time.  Also ITPACK doesn't have a preconditioner.
> 
> I tried to use SPLIB but could not seem to get anywhere with it.
> 
> Does anyone know of a good iterative solver that is ANSI standard
> fortran 77 and works well?
> 
> Thanks in advance,
> Mike Miga
> michael.miga@dartmouth.edu
Hi,
what about SPARSKIT:
  ftp://ftp.cs.umn.edu/dept/sparse/
further, templates and qmrpack in netlib/linalg
-- 
Hans D. Mittelmann			http://plato.la.asu.edu/
Arizona State University		Phone: (602) 965-6595
Department of Mathematics		Fax:   (602) 965-0461
Tempe, AZ 85287-1804			email: mittelmann@asu.edu
Subject: Re: Optimization: expensive objective
From: Hans D Mittelmann
Date: Tue, 14 Jan 1997 19:44:55 -0700
John A Steele wrote:
> 
> John A Steele wrote:
> >
> > Hi.  I have an optimization problem in 6 variables,
> > but my objective function is very expensive to
> > compute.  We have IMSL here at the U. of S.,
> > but I wasn't convinced by the documentation that it
> > offered what I think I need.  Advice, anyone?
> >
> > Thanks.
> 
> It may seem I'm answering my own post, but I received
> 5 emails in response to my post, only one of which
> is on the newsgroup currently.  I appreciate the
> advice and the added circulation some readers have
> clearly given to my post.  I'm posting again to
> give details some respondents had asked for but
> with only one post.
> 
> Basically, I'm trying to optimize the parameters
> describing an optical density function in a
> long narrow circular sector domain.  I have
> some synthesized noisy tomographic data generated
> on a field described by unknown parameters from
> two viewing angles very close to each other and
> very close to the long direction of the domain.
> The objective function is the 2-norm of the
> difference between the tomographic data on the
> field described by current parameter values
> and the synthesized tomographic data from the
> unknown field.  There are about 40 distinct
> rays of data to use for reconstruction.
> 
> Thanks again.
Hi again,
I do not know what the people wrote who did not post it in the newsgroup
(that's the disadvantage of it), but from your last message it seems
that you have a least squares problem and may want to look at routines
for that instead of general optimization methods. What about ODRPACK in
netlib?
-- 
Hans D. Mittelmann			http://plato.la.asu.edu/
Arizona State University		Phone: (602) 965-6595
Department of Mathematics		Fax:   (602) 965-0461
Tempe, AZ 85287-1804			email: mittelmann@asu.edu

Downloaded by WWW Programs
Byron Palmer