

Newsgroup sci.math.num-analysis 29375

Directory

Subject: Re: Fourier Transform (better, Fourier interpolation) -- From: Gary Hampson
Subject: Piecewise linear boundaries -- From: jroberts@euler.gac.peachnet.edu (J.S. Robertson)
Subject: Two questions on Integral Equations -- From: krishnan suresh
Subject: two-way chasing algorithm for tri- to bidiagonal form reduction -- From: batruyen@etro.vub.ac.be (Bart Truyen)
Subject: CfP: Multidimensional Databases - Application & Technology -- From: "Dr. Peter Baumann"
Subject: FINAL Call for Papers IDA97 -- From: Michael Berthold
Subject: Non-linear oscillator -- From: Sean Manion
Subject: Book: Matrix Algorithms -- From: stewart@cs.umd.edu (G. W. Stewart)
Subject: Q: Matrix logarithms and linear dynamics -- From: calvitti@kevin.ces.cwru.edu
Subject: 2nd smallest eigenvalue of a symmetric matrix -- From: R.Ghosh-Roy@brunel.ac.uk (R Ghosh-Roy)
Subject: Re: normalizing constants in finite rings? -- From: tph1001@cus.cam.ac.uk (T.P Harte)
Subject: Good book for Applications of Group Theory? -- From: pecora@zoltar.nrl.navy.mil (Lou Pecora)
Subject: FOURIER INTERPOLATION (Discrete) -- From: abian@iastate.edu (Alexander Abian)
Subject: FOURIER TRANSFORM (Discrete) -- From: abian@iastate.edu (Alexander Abian)
Subject: Help num-analysis of complex PDE's -- From: Laura Nett
Subject: Complex version of SLAP? -- From: Chin Leo

Articles

Subject: Re: Fourier Transform (better, Fourier interpolation)
From: Gary Hampson
Date: Tue, 7 Jan 1997 11:53:03 +0000
In article , Alexander Abian writes:
>
>
>Dear Emma,
>You e-mailed me that you read the recent Fourier Transform (especially Fast
>Fourier Transform) postings and that you did not understand a thing.
Dear Emma,
Here's my simple-minded introductory explanation:
A marvellous reference is The Fourier Transform by Ronald Bracewell.
Fourier found that many functions can be described as the weighted sum
of sines and cosines. The sines and cosines have arguments 2*pi*f*t
where f=frequency(Hertz) and t=time(sec). f and t are on a linear scale.
Although f is frequency, it could be 1/wavelength (spatial frequency or
wavenumber) and t be space. Any similar pair could also be used.
Getting the weights for the sinusoids from the input function is called
Fourier Analysis or Forward Fourier Transform. Getting the original
function back from the weights is called Fourier Synthesis or Inverse
Fourier Transformation. The transforms are defined by:
G(w) = Integral_{-inf}^{+inf} g(t)*exp(-i*w*t) dt
g(t) = (1/(2*pi)) * Integral_{-inf}^{+inf} G(w)*exp(i*w*t) dw
in which i = (-1)^0.5 and w = 2*pi*f.
G(w) is known as the frequency domain, it is composed of the weights for
the sinusoids; it shows for example at what frequency the energy is
concentrated in a signal (see your hi-fi response curves for example).
g(t) is the original signal, known as the time domain.
The transform is very valuable in signal analysis and processing.
There are many relations that show how an operation in one domain, may
be conducted in the other domain. For example:
convolution a(t)*b(t) is equivalently the product A(f).B(f). This is the
heart of linear filter theory and combining probability distributions.
It is often the case that choice of domain for a calculation is
important for speed and accuracy etc.
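The convolution relation above is easy to check numerically; a minimal sketch in Python/NumPy, where the two sequences are arbitrary illustrative data:

```python
import numpy as np

# Two short sequences, zero-padded to a common length n
a = np.array([1.0, 2.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0])
b = np.array([0.5, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
n = len(a)

# Circular convolution computed directly in the time domain
direct = np.array([sum(a[k] * b[(m - k) % n] for k in range(n))
                   for m in range(n)])

# The same result via the frequency domain: multiply the transforms,
# then transform back
via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

assert np.allclose(direct, via_fft)
```

For sequences of any real length the frequency-domain route is the one used in practice, since the direct sum is O(n^2) while the FFT route is O(n log n).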
The Discrete Fourier Transform is much as above, except it's applicable to
digital sequences (sampled functions). Since the transform is (from a
previous posting) nf*nt complex multiplies and adds, it's an n^2
algorithm. There is a very clever way of coding it which makes the
algorithm n*log2(n). This is a terrific increase in speed, and the
algorithm is known as the Fast Fourier Transform (due to Cooley & Tukey),
or FFT.
Much of the world's computer power that is left after running the internet
is expended doing FFTs.
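The n^2 transform can be written out directly and compared against a library FFT; a minimal sketch (the input sequence is just random test data):

```python
import numpy as np

def dft(x):
    """Direct O(n^2) discrete Fourier transform:
    X[k] = sum_t x[t] * exp(-i * 2*pi * k * t / n)."""
    n = len(x)
    t = np.arange(n)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * t / n))
                     for k in range(n)])

x = np.random.default_rng(0).standard_normal(64)

# The O(n log n) FFT produces the same numbers, far faster for large n
assert np.allclose(dft(x), np.fft.fft(x))
```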
The previous posting used the term Fourier Interpolation; this could be
misleading. Interpolation can be achieved in the inverse transform by
choosing the values of t at which interpolated values are required; however,
it is simpler and equivalent to use sinc interpolation (sin(x)/x).
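One way to see the equivalence: zero-padding the spectrum before the inverse transform interpolates the samples (periodic sinc interpolation). A small sketch, assuming a band-limited test signal of my own choosing:

```python
import numpy as np

n = 16
t = np.arange(n)
x = np.cos(2 * np.pi * 2 * t / n)   # band-limited test signal

# Interpolate 4x: pad the middle of the spectrum with zeros
m = 4 * n
X = np.fft.fft(x)
Xpad = np.zeros(m, dtype=complex)
Xpad[:n // 2] = X[:n // 2]          # positive frequencies
Xpad[-(n // 2):] = X[-(n // 2):]    # negative frequencies
y = np.fft.ifft(Xpad).real * (m / n)

# The interpolant passes through the original samples
assert np.allclose(y[::4], x)
```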
One final comment: in the transforms above, ignoring the 1/(2*pi) factor
(which some definitions distribute equally between the forward and
inverse transforms), the only difference is the sign of i. That is, the
forward and inverse transforms are essentially identical. So much so, that
I recall a very drunken conversation I had with a colleague in which we
argued that we could not tell whether we lived in the time or frequency
domain!
-- 
Gary Hampson
Subject: Piecewise linear boundaries
From: jroberts@euler.gac.peachnet.edu (J.S. Robertson)
Date: 7 Jan 1997 13:02:23 GMT
I came across a reference that discusses using piecewise linear boundaries 
with Neumann conditions in a finite difference solution of the Schroedinger
equation.
The method transforms the b.c. into a 2nd-order ODE IVP which is solved using
finite difference approximations at the boundary as initial values.
The paper gives no additional references to this method.  I've not been able to
find a thing anywhere else. 
Can someone provide a reference?  Thanks.
Jack
Subject: Two questions on Integral Equations
From: krishnan suresh
Date: 7 Jan 1997 14:56:25 GMT
Q1: Is anyone aware of work on Integral Equations with boundary 
conditions (similar to Diff. Eqns. with BCs)? Does this question make 
sense? (Normally, BCs get 'absorbed' in an Integral formulation, but 
that doesn't seem to happen in a problem I am studying.)
Q2: Is there any (free?) software available out there for solving 
Integral Equations?
Appreciate your help,
suresh
Subject: two-way chasing algorithm for tri- to bidiagonal form reduction
From: batruyen@etro.vub.ac.be (Bart Truyen)
Date: Mon, 06 Jan 1997 23:10:28 +0100
Does anyone have any idea about the existence of a two-way chasing algorithm
for the reduction of a tridiagonal matrix to bidiagonal form (in the style of
the two-way chasing algorithm developed to reduce a bidiagonal matrix bordered
by a single row to tridiagonal form)? This is a problem often encountered in
SVD updating.
All ideas welcome, thanks.
Bart Truyen
ETRO Research Group
Free University Brussels
Brussels
Belgium
e-mail: batruyen@etro.vub.ac.be
Subject: CfP: Multidimensional Databases - Application & Technology
From: "Dr. Peter Baumann"
Date: Tue, 07 Jan 1997 18:11:42 +0100
Call for Papers (also available under
http://www.forwiss.tu-muenchen.de/~rasdaman/public/events/cisst97-st.html):
        International Conference
        on Imaging Science, Systems, and Technology
        (CISST'97)
        - Special Track on the Management
         of Multidimensional Discrete Data -
June 30 - July 2, 1997
Las Vegas, Nevada, USA
AIMS AND SCOPE
Raster data of arbitrary size and dimension, so-called Multidimensional
Discrete Data (MDD), span a remarkably rich manifold of variants - from
1-D time series and 2-D images to multidimensional OLAP hypercubes, from
a few kilobytes to several gigabytes, as spatio-temporally discretized
natural phenomena or as artificially generated data sets. Among the
major application areas are Online Analytical Processing (OLAP) and data
mining; medical imagery (PACS); geo and environmental information
systems (GIS/EIS); hydrological/ maritime information systems;
technical/scientific data analysis; sensor fusion; and multimedia.
Recently, the database community has begun to focus on the particular
structure of such data hitherto called unstructured. The classical
method, linearizing MDD line by line in a FORTRAN-like style and
encoding them in one of the more than 100 data exchange formats in
worldwide use, has failed both in performance and in functionality. Therefore,
conceptual models and physical storage formats are being developed to
offer classical DBMS services such as flexible query support, multiuser
synchronization, and access optimization also for large,
multidimensional arrays. Interdisciplinary work involving imaging,
database, and application experts proves particularly fruitful.
As part of the International Conference on Imaging Science, Systems,
and Technology (CISST'97) in Las Vegas, USA, this Special Track aims at
collecting recent findings and encouraging discussion on the large-scale
management of MDD of various dimensions. Contributions are sought for,
but not limited to the following topics:
*   MDD modelling;
*   query languages (incl. query optimization);
*   transaction mechanisms;
*   storage hierarchies (incl. indexing);
*   compression techniques;
*   systems (products and research prototypes);
*   MDD applications, such as environmental monitoring, satellite
*   imagery, sensor fusion, GIS, and medical imagery.
SUBMISSION OF PAPERS
Prospective authors are invited to submit three copies of their draft
paper (about 5 pages) to Peter Baumann (address is given below) by the
due date. All other papers for CISST97 should be sent to the general
CISST97 chair, Hamid R. Arabnia (address is also indicated below).
Electronic submission is acceptable if in one of the formats PostScript,
LaTeX, and MS-Word and if the document is formatted to print in A4
format. The length of the camera-ready papers (if accepted) will be
limited to 10 pages.  Papers must not have been previously published or
currently submitted for publication elsewhere.
The first page of the draft paper should include: title of the paper,
name, affiliation, postal address, E-mail address, telephone number, and
Fax number for each author. The first page should also include the name
of the author who will be presenting the paper (if accepted) and a
maximum of 5 keywords.
EVALUATION PROCESS
Papers will be evaluated for originality, significance, clarity, and
soundness.  Each paper will be refereed by two researchers in the
topical area. The camera-ready papers will be reviewed by one person.
IMPORTANT DATES
February 28, 1997 (Friday):   Draft papers (5-page) due
April 8, 1997 (Tuesday):   Notification of acceptance
May 19, 1997 (Monday):   Camera-Ready papers & Preregistration due
June 30, July 1, July 2:   CISST'97 Conference
All accepted papers are expected to be presented at the conference.
PUBLICATION
The conference proceedings will be published by CSREA Press. The
proceedings will be available at the conference. Please note that all
color pictures/diagrams will be published in gray-scale.
EXHIBITION
An exhibition is planned during the conference. We have reserved 20+
exhibit spaces.  Interested parties should contact H. R. Arabnia
(address is given below). All exhibitors will be considered to be the
co-sponsors of the conference.  Each exhibitor will have the opportunity
to include a two-page description of their latest products in the
conference proceedings (if submitted by May 19, 1997).
ORGANIZERS/SPONSORS
A number of university faculty members in cooperation with the Monte
Carlo Hotel (conference division) will be organizing the conference.
The conference is sponsored by the Computer Science Research, Education,
and Applications Tech. (CSREA) in cooperation with the Computer Vision
Research and Applications Tech. (CVRA), The National Supercomputing
Center for Energy and the Environment (USA), developers of
high-performance machines and systems (pending) and related computer
associations (pending).
LOCATION OF CONFERENCE
The conference will be held in the Monte Carlo Resort and Casino hotel,
Las Vegas, Nevada, USA.  This is a new hotel with excellent conference
facilities and over 3000 rooms. The hotel is minutes from the Las Vegas
airport with free shuttles to and from the airport. The hotel has many
vacation and recreational attractions, including: casino, waterfalls,
spa, kiddie pools, sunning decks, Easy River water ride, wave pool with
cascades, lighted tennis courts, health spa (with workout equipment,
whirlpool, sauna, ...), arcade virtual reality game rooms, nightly
shows, snack bars, a number of restaurants, shopping area, ...  Many of
these attractions are open 24 hours a day and most are suitable for
families and children. The hotel's room rate is very reasonable ($79 +
8% tax) per night for the duration of the conference.
The hotel is minutes from other Las Vegas attractions (major shopping
areas, recreational destinations, fine dining and night clubs, free
street shows, ...).
For the benefit of our international colleagues: the state of Nevada
neighbors with the states of California, Oregon, Idaho, Utah, and
Arizona.  Las Vegas is only a few driving hours away from other major
cities, including: Los Angeles, San Diego, Phoenix, ...
SPECIAL TRACK CHAIR
Peter Baumann
FORWISS
Orleansstr. 34
D-81667 Munich
Germany
Tel: +49-89-48095-206
Fax: +49-89-48095-203
E-mail: baumann@forwiss.tu-muenchen.de
CISST'97 GENERAL CHAIR
Hamid R. Arabnia
The University of Georgia
Department of Computer Science
415 Graduate Studies Research Center
Athens, Georgia 30602-7404, U.S.A.
Tel: (706) 542-3480
Fax: (706) 542-2966
E-mail: hra@cs.uga.edu
CISST'97 ORGANIZING COMMITTEE
I. Ahmad, Hong Kong University of Science & Technology, Hong Kong;
H. R. Arabnia, University of Georgia, Athens, GA, USA;
C. Colin, Ecole des Mines de Nantes, France;
J. Farison, University of Toledo, Toledo, OH, USA;
M. E. Fayad, University of Nevada, Reno, NV, USA;
O. Frieder, George Mason University & Florida Tech., USA;
F. Golshani, Arizona State University, Tempe, AZ, USA;
V. Gudivada, University of Missouri at Rolla, MO, USA;
M. Halem, Space Data & Comp. Div., Goddard Space Flight Center, NASA,
USA;
G. Hu, Central Michigan University, MI, USA;
K. C. Hui, Chinese University of Hong Kong, Shatin, Hong Kong;
O. H. Ibarra, University of California, Santa Barbara, CA, USA;
X. Jia, City University of Hong Kong, Hong Kong;
J. Jin, University of New South Wales, Sydney, Australia;
D. Kazakos, University of Southwestern Louisiana, LA, USA;
A. Law, Ohio State University, Columbus, OH, USA;
D. Luzeaux, Etca/Crea/Sp, France;
K. Makki, University of Nevada Las Vegas, NV, USA;
S. A. M. Makki, University of Queensland, Australia;
A. Mana-Gomez, E.T.S.I.Informatica, Malaga, Spain;
N. Memon, Northern Illinois University, DeKalb, IL, USA;
B. Nassersharif, National Supercomputing Center For Energy and the
Environment, Las Vegas, Nevada, USA;
M. S. Obaidat, Monmouth University, NJ, USA;
Y. Pan, University of Dayton, Dayton, OH, USA;
E. K. Park, University of Missouri-Kansas City, USA;
W. Peng, Southwest Texas State University, San Marcos, TX, USA;
N. Pissinou, University of Southwestern Louisiana, Lafayette, LA, USA;
Rajkumar, Centre for Development of Advanced Computing, Bangalore,
India;
S. Sahni, University of Florida, Gainesville, FL, USA;
H. Sharif, University of Nebraska Lincoln, USA;
H. Shi, University of Missouri-Columbia, MO, USA;
M. Singhal, Ohio State University, Columbus, OH, USA;
S. Y. W. Su, University of Florida, Gainesville, FL, USA;
A. Tentov, University "Sv. Kiril i Metodij", Republic of Macedonia;
E. Torng, Michigan State University, MI, USA;
N-F. Tzeng, University of Southwestern Louisiana, Lafayette, LA, USA;
Y. Xu, Oak Ridge National Laboratory, Oak Ridge, TN, USA;
S. You, State University of New York at Stony Brook, NY, USA;
H. Zhang, Aptronix, Inc., Santa Clara, CA, USA;
D. Zhu, Aptronix, Inc., Santa Clara, CA, USA;
A. Y. Zomaya, University of Western Australia, Australia.
LOCAL ARRANGEMENT CHAIRS
Kia Makki
Department of Computer Science
University of Nevada Las Vegas
Las Vegas, Nevada 89154-4019, USA
kia@koko.cs.unlv.edu
Niki Pissinou
Center For Advanced Computer Studies
University of Southwestern Louisiana
Lafayette, LA 70508, USA
pissinou@cacs.usl.edu
PUBLICITY CHAIR
Yi Pan
Department of Computer Science
University of Dayton
Dayton, OH 45469-2160, USA
pan@cps.udayton.edu
Tel: (513) 229-3807
Fax: (513) 229-4000
----------------------
FORWISS (Bavarian Research Center for Knowledge-Based Systems)
- Knowledge Bases Research Group -
WWW:	http://www.forwiss.tu-muenchen.de/~baumann/
Email:	baumann@forwiss.tu-muenchen.de
(-:	"Help Wanted: Telepath. You know where to apply."
Subject: FINAL Call for Papers IDA97
From: Michael Berthold
Date: Tue, 07 Jan 1997 16:33:57 +0100
  =========>>> DEADLINE FOR SUBMISSIONS: FEBRUARY 1st, 1997 <<<===========
                        FINAL CALL FOR PAPERS
  The Second International Symposium on Intelligent Data Analysis (IDA-97)
                 Birkbeck College, University of London
                         4th-6th August 1997 
                         In Cooperation with 
           AAAI, ACM SIGART, BCS SGES, IEEE SMC, and SSAISB
               [ http://web.dcs.bbk.ac.uk/ida97.html ]
Objective
=========
For many years  the intersection  of computing  and data  analysis contained
menu-based statistics  packages and not  much else.  Recently, statisticians
have embraced computing,  computer scientists are using statistical theories
and methods, and researchers in all corners are inventing algorithms to find
structure in vast  online datasets.  Data analysts  now have access to tools
for exploratory  data analysis,  decision tree induction,  causal induction,
function  finding,  constructing  customised  reference  distributions,  and
visualisation.  There are  prototype  intelligent  assistants  to  advise on
matters of design and analysis.  There are tools for traditional, relatively
small samples and for enormous datasets.  
The focus of  IDA-97  will be  "Reasoning About Data".  We are interested in
intelligent systems that reason about how to analyze data,  perhaps as human
analysts do.  Analysts often  bring exogenous  knowledge about  data to bear
when they decide how to analyze it;  they use intermediate results to decide
how to proceed;  they reason about how much  analysis the data will actually
support;  they consider which methods will be most informative;  they decide
which aspects of a model are most uncertain and focus attention there;  they
sometimes  have  the  luxury  of  collecting more  data,  and plan  to do so
efficiently.  In short, there is a strategic aspect to data analysis, beyond
the tactical choice of this or that test, visualisation or variable.
Topics 
======
The following topics are of particular interest to IDA-97:
     * APPLICATIONS & TOOLS
         - analysis of different kinds of data (e.g., censored, temporal etc)
         - applications (e.g., commerce, engineering, finance, legal,
                          manufacturing, medicine, public policy, science)
         - assistants, intelligent agents for data analysis
         - evaluation of IDA systems
         - human-computer interaction in IDA
         - IDA systems and tools
         - information extraction, information retrieval
     * THEORY & GENERAL PRINCIPLES
         - analysis of IDA algorithms
         - bias
         - classification
         - clustering
         - data cleaning
         - data pre-processing
         - experiment design
         - model specification, selection, estimation
         - reasoning under uncertainty
         - search
         - statistical strategy
         - uncertainty and noise in data
     * ALGORITHMS & TECHNIQUES
         - Bayesian inference and influence diagrams
         - bootstrap and randomization
         - causal modeling
         - data mining
         - decision analysis
         - exploratory data analysis
         - fuzzy, neural and evolutionary approaches
         - knowledge-based analysis
         - machine learning
         - statistical pattern recognition
         - visualization
Submissions
===========
Participants who wish to present a paper are requested to submit a
manuscript not exceeding 10 single-spaced pages. We strongly encourage
authors to format the manuscript following Springer's "Advice to Authors
for the Preparation of Contributions to LNCS Proceedings", which can be
found on the IDA-97 web page. This submission format is identical to the
one for the final camera-ready copy of accepted papers. In addition, we
request a separate page detailing the paper title, authors' names, postal
and email addresses, and phone and fax numbers.
Email submissions in Postscript form are encouraged. Otherwise, five hard 
copies of the manuscripts should be submitted.
Submissions should be sent to the IDA-97 Program Chairs:
Central, North and South America:        Elsewhere:
Paul Cohen                               Xiaohui Liu
Department of Computer Science           Department of Computer Science
Lederle Graduate Research Center         Birkbeck College
University of Massachusetts, Amherst     University of London
Amherst, MA 01003-4610                   Malet Street
USA                                      London WC1E 7HX, UK
cohen@cs.umass.edu                       hui@dcs.bbk.ac.uk
IMPORTANT DATES
February 1st, 1997              Submission of papers
April 15th, 1997                Notification of acceptance
May 15th, 1997                  Final camera ready paper
Review
======
All submissions will be reviewed on the basis of relevance, originality,
significance, soundness, and clarity. At least two referees will review
each submission independently. Results of the review will be sent to the
first author via email, unless requested otherwise.
Publications
============
Papers which are accepted and presented at the  conference will appear in
the IDA-97 proceedings, to be published by Springer-Verlag in its Lecture
Notes in  Computer Science  series. Authors  of the  best papers  will be
invited to extend their papers for further review  for a special issue of 
"Intelligent Data Analysis: An International Journal".
IDA-97 Organisation
===================
General Chair:            Xiaohui Liu
Program Chairs:           Paul Cohen, Xiaohui Liu
Steering Comm. Chair:     Paul Cohen, University of Massachusetts, USA
Exhibition Chair:         Richard Weber, MIT GmbH, Aachen, Germany
Finance Chair:            Sylvie Jami, Birkbeck College, UK
Local Arrangements Chair: Trevor Fenner, Birkbeck College, UK
Public. and Proc. Chair:  Michael Berthold, University of Karlsruhe, Germany
Sponsorship Chair:        Mihaela Ulieru, Simon Fraser University, Canada
Steering Committee
Michael Berthold          University of Karlsruhe, Germany
Fazel Famili              National Research Council, Canada
Doug Fisher               Vanderbilt University, USA
Alex Gammerman            Royal Holloway London, UK
David Hand                Open University, UK
Wenling Hsu               AT&T Consumer Lab, USA
Xiaohui Liu               Birkbeck College, UK
Daryl Pregibon            AT&T Research, USA
Evangelos Simoudis        IBM Almaden Research, USA
Program Committee
Eric Backer               Delft University of Technology, The Netherlands
Riccardo Bellazzi         University of Pavia, Italy
Michael Berthold          University of Karlsruhe, Germany
Carla Brodley             Purdue University, USA
Gongxian Cheng            Birkbeck College, UK
Fazel Famili              National Research Council, Canada
Julian Faraway            University of Michigan, USA
Thomas Feuring            WWU Muenster, Germany
Alex Gammerman            Royal Holloway London, UK
David Hand                The Open University, UK
Rainer Holve              Forwiss Erlangen, Germany
Wenling Hsu               AT&T Research, USA
Larry Hunter              National Library of Medicine, USA
David Jensen              University of Massachusetts, USA
Frank Klawonn             University of Braunschweig, Germany
David Lubinsky            University of Witwatersrand, South Africa
Ramon Lopez de Mantaras   Artificial Intelligence Research Institute, Spain 
Sylvia Miksch             Vienna University of Technology, Austria
Rob Milne                 Intelligent Applications Ltd, UK
Gholamreza Nakhaeizadeh   Daimler-Benz Forschung und Technik, Germany
Claire Nedellec           Universite Paris-Sud, France
Erkki Oja                 Helsinki University of Technology, Finland
Henri Prade               University Paul Sabatier, France
Daryl Pregibon            AT&T Research, USA
Peter Ross                University of Edinburgh, UK
Steven Roth               Carnegie Mellon University, USA
Lorenza Saitta            University of Torino, Italy
Peter Selfridge           AT&T Research, USA
Rosaria Silipo            University of Florence, Italy
Evangelos Simoudis        IBM Almaden Research, USA
Derek Sleeman             University of Aberdeen, UK
Paul Snow                 Delphi, USA
Rob St. Amant             North Carolina State University, USA
Lionel Tarassenko         Oxford University, UK
John Taylor               King's College London, UK
Loren Terveen             AT&T Research, USA
Hans-Juergen Zimmermann   RWTH Aachen, Germany
Enquiries
=========
Detailed information  regarding IDA-97 can be found  on the World Wide Web 
Server of the  Department of Computer Science at Birkbeck College, London:
                 http://web.dcs.bbk.ac.uk/ida97.html
Apart from the presentation of research papers, IDA-97 also welcomes
demonstrations of software and publications related to intelligent data
analysis, and welcomes organisations that may wish to partly sponsor the
conference.
Relevant enquiries may be sent  to appropriate chairs whose details can be 
found in the above-mentioned IDA-97 web page, or to
                  IDA-97 Administrator 
                  Department of Computer Science
                  Birkbeck College
                  Malet Street
                  London WC1E 7HX, UK
                  E-mail: ida97-enquiry@dcs.bbk.ac.uk
                  Tel: (+44) 171 631 6722
                  Fax: (+44) 171 631 6727
There is also a  moderated IDA-97  discussion list. To subscribe, send the 
word "subscribe" in the message body to:
                  ida97-request@dcs.bbk.ac.uk
Subject: Non-linear oscillator
From: Sean Manion
Date: Tue, 07 Jan 1997 09:43:18 -0800
I am a physics grad student at Arizona State University.  
I am trying to find references that provide analysis of the 
oscillator:
    y'' + b*y' + c*y = F*cos(w*t) + G*exp((h-y)/a)
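Absent a reference, the oscillator is at least straightforward to integrate numerically; a minimal sketch with scipy, where every parameter value is a made-up placeholder rather than anything from the problem at hand:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters, chosen only for illustration
b, c, F, w, G, h, a = 0.2, 1.0, 0.5, 1.3, 0.1, 0.0, 1.0

def rhs(t, state):
    """First-order form of y'' + b*y' + c*y = F*cos(w*t) + G*exp((h-y)/a)."""
    y, yp = state
    return [yp, F * np.cos(w * t) + G * np.exp((h - y) / a) - b * yp - c * y]

# Integrate from rest over an illustrative time span
sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0], max_step=0.05)
```

The exponential term makes the equation non-linear, so a small `max_step` is used to keep the forcing well resolved.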
Any input would be appreciated.
Thx
Sean Manion
manion@phyast.la.asu.edu
Subject: Book: Matrix Algorithms
From: stewart@cs.umd.edu (G. W. Stewart)
Date: 7 Jan 1997 13:38:23 -0500
I am currently writing a multivolume treatise entitled Matrix
Algorithms.  The present Volume I is entitled Basic Decompositions.  I
have recently rewritten the third chapter and completed a fourth.
They can be obtained by anonymous ftp from
    thales.cs.umd.edu  in  pub/survey
or through my home page at
    http://www.cs.umd.edu/~stewart/
The first two chapters contain introductory material from mathematics
and computer science and the third chapter is on Gaussian elimination.
The fourth chapter is on the QR decomposition and least squares.  A fifth
on rank determination will complete the volume.  For more information
see the preface.
I am distributing the book in the hope that it will be helpful to
others and that others will be willing to help me with their comments
and corrections.  Please feel free to make copies for your personal
use.  However, if you want to make copies to distribute to a class,
please ask my permission (it will generally be forthcoming).
Pete Stewart
Subject: Q: Matrix logarithms and linear dynamics
From: calvitti@kevin.ces.cwru.edu
Date: 07 Jan 1997 20:01:02 GMT
given that the solution of a linear system dxdt = A.x in R^n is
(*)     x(t) = exp(A.t).x(0)
is the time "t" to reach x(t) from x(0) well defined for all A? seems
that t would be given by the matrix natural log: nl(A). at least this
works for scalar systems and i imagine that if A can be diagonalized,
then it should be just as easy in higher dimensions.
intuitively, the time between two points in the state space - provided
they are connected by a flowline - is well defined; it could be found
by numerical integration for instance. however, i don't know how this
is related to the matrix logarithm.
for example if A = {{-1,0},{0,-2}} then the (uncoupled) solutions are:
	x(t) = exp(-t).x(0)
        y(t) = exp(-2t).y(0)
where {x(0), y(0)}, {x(t), y(t)} corresponding to the initial and
final points in R^2 are given. the eigenvalues of A are {-1,-2}.
however, doesn't the matrix log of a matrix with negative eigenvalues
have complex entries? how is this matrix log computed to begin with?
(matlab has a function to compute it numerically)
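on that last point: scipy (like matlab) has a numerical matrix logarithm, and for the example matrix it does indeed return complex entries; a minimal sketch:

```python
import numpy as np
from scipy.linalg import expm, logm

A = np.array([[-1.0,  0.0],
              [ 0.0, -2.0]])

L = logm(A)   # principal matrix logarithm; complex here because A has
              # negative real eigenvalues: log(-1) = i*pi on that branch

assert np.iscomplexobj(L)
assert np.allclose(expm(L), A)   # expm inverts logm
```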
can someone email/post some insight or point to the literature? 
thanks for the info,
 +---------------------------------+
 |          Alan Calvitti          |
 |       Control Engineering       |
 | Case Western Reserve University |
 +---------------------------------+
Subject: 2nd smallest eigenvalue of a symmetric matrix
From: R.Ghosh-Roy@brunel.ac.uk (R Ghosh-Roy)
Date: 7 Jan 1997 19:19:44 -0000
I am fascinated by the fact that the *second* smallest eigenvalue and
its corresponding eigenvector are extensively used by the Graph Theory
groups around the world.
However, I am still looking for a reason why the *second* smallest one
is chosen, and why *not* the largest or any other value. In "Algebraic
Connectivity" papers, people use the *second* smallest eigenvalue and
its (sorted in ascending/descending order) eigenvector to partition a graph 
with the least number of cuts (in the edges).
None of the papers I have read explains why the *second* smallest value
gives the minimum number of cuts and not others. Any comment/reference
would be highly useful.
Thanks,
Rana
Ex:          
     A @----@ F
       |\   |
       | \  |
       |  \ |
       |   \|
     B @    @ E
       |\   |
       | \  |
       |  \ |
       |   \|
     C @----@ D
The above graph is represented by the following Laplacian matrix:
  |  A     B     C     D     E     F
--|------------------------------------
 A|  3    -1     0     0    -1    -1
 B| -1     3    -1    -1     0     0
 C|  0    -1     2    -1     0     0
 D|  0    -1    -1     3    -1     0
 E| -1     0     0    -1     3    -1
 F| -1     0     0     0    -1     2
The eigenvalues of the above matrix are: 3, 3, 5, 4, 1, 0.
The 2nd smallest is 1 and its corresponding eigenvector is
   -0.2887   A
    0.2887   B
    0.5774   C
    0.2887   D
   -0.2887   E
   -0.5774   F
By sorting it in ascending order, we have
   -0.5774   F
   -0.2887   E
   -0.2887   A
-----------------
    0.2887   B
    0.2887   D
    0.5774   C
The halves are then F,E,A and B,C,D. The halves are due to 2 cuts
(E-D, A-B).
Question: Why does the 2nd smallest eigenvalue/eigenvector give this
          result, and not the others?
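For what it's worth, the construction is easy to reproduce numerically; a sketch using the Laplacian from the post (the "Fiedler vector" is the eigenvector of the second smallest eigenvalue):

```python
import numpy as np

# Laplacian of the example graph (rows/cols ordered A..F)
L = np.array([[ 3, -1,  0,  0, -1, -1],
              [-1,  3, -1, -1,  0,  0],
              [ 0, -1,  2, -1,  0,  0],
              [ 0, -1, -1,  3, -1,  0],
              [-1,  0,  0, -1,  3, -1],
              [-1,  0,  0,  0, -1,  2]], dtype=float)

vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
fiedler = vecs[:, 1]             # eigenvector of the 2nd smallest eigenvalue
half = fiedler > 0               # its sign pattern splits the vertices
```

Whichever overall sign the solver picks, the positive and negative components separate {B, C, D} from {A, E, F}, the two halves described above.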
Please reply to r.ghosh-roy@acm.org
Thanks again.
Rana
-- 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ R. Ghosh-Roy, Research Fellow @ BIPS                             +
+               -- R.Ghosh-Roy@brunel.ac.uk -- Extension 2772      +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Subject: Re: normalizing constants in finite rings?
From: tph1001@cus.cam.ac.uk (T.P Harte)
Date: 7 Jan 1997 19:26:46 GMT
On Tue, 7 Jan 1997, Roger Kinkead  wrote:
> I'm probably not understanding the problem sufficiently, but can you maybe 
> run through your sequence of numbers x(n) and determine the max. and min. 
> values within the sequence  min_x  and  max_x.
> Given these, you should be able to normalise each sample as follows
> 
>    norm_x(n)  =    x(n) - min_x
>                    ------------
>                       max_x
I can, of course, perform something similar to normalize over the finite ring:
the sequence x_{n} will be represented by one element of the finite residue
ring so that the data are properly scaled and do not "overflow" the floating
point values which they mimic. Such a normalization scheme is eminently
plausible.
However, I have to perform the direct analogue of the rms normalization 
scheme which I perform in the complex field, but this time in the finite ring.
Thus, I have to operate on the finite ring data analogues to have them mimic
the floating point complex values, but I have to do so with ring operations.
How do I achieve the analogue?
I was thinking along the lines of performing index arithmetic modulo some
prime p; the multiplicative group of the field is then isomorphic to the
integers modulo p-1, so modulo-p multiplications can be done as modulo-(p-1)
additions.  Using index arithmetic based on this observation I should be able
to define a look-up table and do all of the x_{n}^{2} ops as additions.  The
algorithm gets a bit difficult, so I was hoping that someone had thought up
something smart.
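A minimal sketch of the index-arithmetic idea (the prime p = 17 and the brute-force root search are illustrative choices, not taken from the actual application):

```python
# Multiplications mod a prime p become additions mod p-1 via a
# discrete-log ("index") table.  p = 17 is a small illustrative prime.
p = 17

def primitive_root(p):
    # smallest g whose powers generate all of 1..p-1 (brute force; fine for small p)
    for g in range(2, p):
        if len({pow(g, k, p) for k in range(1, p)}) == p - 1:
            return g

g = primitive_root(p)
log = {pow(g, k, p): k for k in range(p - 1)}   # index table: log[g^k] = k
exp = {k: pow(g, k, p) for k in range(p - 1)}   # inverse table: exp[k] = g^k

def square_mod(x):
    # x^2 mod p computed with a single addition of indices mod p-1
    return exp[(log[x] + log[x]) % (p - 1)]
```

The same tables turn any product a*b mod p into exp[(log[a] + log[b]) % (p-1)], which is the look-up-table scheme sketched in the post.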
It's not an idle intellectual exercise: I have to normalize over large 
(14M data points) 4-D sets...which, needless to say, is just asking to be
made efficient.
Thanks for your interest. I hope that I have clarified somewhat. 
Thomas.
Return to Top
Subject: Good book for Applications of Group Theory?
From: pecora@zoltar.nrl.navy.mil (Lou Pecora)
Date: 7 Jan 1997 14:26:08 -0800
At present I am using an old version of Tinkham's book on Group Theory and
Quantum Mechanics.  Is there another (better?) book to serve as an
introduction to applications of group theory and group representations? 
I'd like something at the grad level with good coverage of the basic math
that's needed, but lots of applications, too (condensed matter, chemistry,
etc.).  It should cover point and space groups (definitely), the symmetric
group (maybe), and, perhaps, some introduction to Lie groups (nothing deep
here).
BTW, I have checked out Hamermesh and hate it.  Thanks for any suggestions.
Lou Pecora
code 6343
Naval Research Lab
Washington  DC  20375
USA
 == My views are not those of the U.S. Navy. ==
------------------------------------------------------------
  Check out the 4th Experimental Chaos Conference Home Page:
  http://natasha.umsl.edu/Exp_Chaos4/
------------------------------------------------------------
Return to Top
Subject: FOURIER INTERPOLATION (Discrete)
From: abian@iastate.edu (Alexander Abian)
Date: 8 Jan 97 01:01:06 GMT
Dear Emma,
You e-mailed me that you read the recent Fourier Transform (especially Fast
Fourier Transform) postings and that you did not understand a thing.
You asked me to e-mail you an understandable version of the Fourier Transform.
FIRST however, I will post about  FOURIER INTERPOLATION (discrete) and
then in subsequent posting(s) I will post FOURIER TRANSFORM (discrete)
ITS INVERSE and prove the CONVOLUTION Theorem.
  The gist of the matter is as follows:
  Suppose there is a function  f  of which you know the values at
(1)                     x = -1 ,  x = 0  and  x = 1
and also you have some information about  f , e.g. that  f  has some
very nice properties (say, f is bounded, or f is integrable, or f is
several times differentiable a.e., etc.).
But you don't have an explicit equation for  f  and thus you don't
necessarily know the values of  f  at every point  x , say, in  [-1, 1].
The question is:
    Is there a way to define (explicitly) a function  f*  such that it agrees
    with  f  at the points  x = -1, x = 0, x = 1  and, for other
    values of  x  in [-1,1],  f*  gives a reasonable approximation of  f ?
Example:      
(2)      Suppose            f(-1) = 0,    f(0) = 2,    f(1) = -3
and we don't know what  f(0.54)  is or what  f(0.8)  is and, in
general, what  f(x)  is for  x , say, in  [0,1].
Can we devise (explicitly) a function  f*  on [-1, 1] so that it
agrees with  f  at  -1, 0, 1  and gives a reasonably good approximation
to  f(x)  for any  x  in [0,1]?
 Well, you can always devise many, many  f* 's  agreeing with  (2).
Example 1.
(3)              f*(x)  = 2 -1.5 x - 3.5 x^2
Example 2.    
              f*(x)  = (3.5/(1-cos1))cos x - (1.5)x - (1.5+2cos1)/(1-cos1)
which is roughly
(4)              f*(x)  = 7.6 cos x - (1.5)x -5.6   
Both examples agree with  f  as far as (2) is concerned.  Moreover,
both are explicitly given.  Both make sense at, say,  x = 0.5.
For instance, according to Example 1, from (3) it follows that
              f(0.5)  would be approximated by   0.375
and according to Example 2, from (4) it follows that
              f(0.5)  would be approximated by   roughly 0.32
Is the  f*  given by (3) preferable to the  f*  given by (4) ?
That depends.  If our information about the unknown function  f  is that
its absolute value in [-1, 1] is less than  0.35, then of course the  f*
given by (4) is the better explicit approximating function of  f.
  Now Fourier says, in general, the following scheme gives a reasonably
good explicit approximating function  f*  of the (unknown) function   f
where   f  is endowed with some  desirable known properties.
 For the sake of convenience, Fourier assumes that we know the values of
 f  at some odd number of points symmetrically located around  0 , say at
-3, -2, -1, 0, 1, 2, 3,  or a rescaling of them at  -3r, -2r, -r, 0, r, 2r, 3r
for some real number  r .
For the sake of simplicity, I will give an example of  f  defined at
 x = -1,  0,  1.  So, we know the following values of the (unknown)
function  f:
(6)            f(-1),  f(0),  f(1)
 Let  
(7)         w   be a primitive 3rd (complex) root of  1
so that
(8)     w = e^((2pi/3)i)   and hence   1 + w + w^2 = 0   and   w^3 = 1
Then  Fourier's  f*(x)  is explicitly given  by the following scheme:
                                         / w      1   w^-1 \   /  w^x  \
                                         |                 |   |       |
(9)   f*(x)  =  1/3 (f(-1), f(0), f(1))  | 1      1   1    |   |   1   |
                                         |                 |   |       |
                                         | w^-1   1   w    |   |  w^-x |
                                         \                 /   \       /
 So, according to Fourier, under some conditions the (unknown) function
f  is reasonably well approximated by  f*  given in (9).
 REMARK 1. It is readily verified that  f*(-1), f*(0), f*(1)  are 
respectively equal to  f(-1), f(0) , f(1)  so that f* agrees with  f 
at  x = -1, 0, 1.
REMARK 2.  It is really remarkable that for real values of  x,  f*(x)
IS ALSO REAL. 
 So, for instance, although (6) and (9) show that we know the
values of  f  only at  x = -1, 0, 1,  according to Fourier we can
determine a (reasonable!) approximation of  f  at  x = 0.5  using
Fourier's scheme (9).  Indeed, we let
f(0.5)  be approximately  f*(0.5) , given by the product of the three
matrices appearing in (9) evaluated at  w = e^((2pi/3)i)  as given by (8).
It is easy to verify that (9)  reduces to:
                                                  /-1 \
                                                  |   | 
(10)            f*(0.5) = 1/3 (f(-1), f(0), f(1)) | 2 | 
                                                  |   |
                                                  \ 2 /
Now, let us see what Fourier gives us for our Example.  From (2) and (10) it
readily follows that
                                           /-1 \
                                           |   |
                  f*(0.5) = 1/3 (0, 2, -3) | 2 | = - 2/3
                                           |   |
                                           \ 2 /
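The whole computation in (9)-(10) can be checked in a few lines of numpy (the data are the example values from (2)):

```python
import numpy as np

# Scheme (9): w is a primitive cube root of unity and the data vector
# holds the example values f(-1) = 0, f(0) = 2, f(1) = -3 from (2).
w = np.exp(2j * np.pi / 3)
M = np.array([[w,     1, 1 / w],
              [1,     1, 1    ],
              [1 / w, 1, w    ]])
f_vals = np.array([0.0, 2.0, -3.0])     # f(-1), f(0), f(1)

def f_star(x):
    v = np.array([w**x, 1, w**(-x)])
    # imaginary part cancels for real x (Remark 2), so .real is exact
    return (f_vals @ M @ v / 3).real
```

Evaluating f_star at the nodes -1, 0, 1 reproduces the data exactly (Remark 1), and f_star(0.5) comes out to -2/3, matching the hand computation above.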
PS.  (9) is referred to as the FOURIER INTERPOLATION (discrete) FORMULA.
     The continuous version of it is nothing more than interpreting
     dot products as integrals.
PPS. The generalization to any odd number  p  of  x's  is obvious.
The  w  in the  p by p  matrix must then be replaced with a primitive
p-th root of  1, i.e., with  e^((2pi/p)i) .
For example for  p = 5,   with  w =  e^(2pi/5)i, we have:
                                        /w^4    w^2  1  w^-2  w^-4\ /w^2x \
                                        |w^2    w    1  w^-1  w^-2 ||w^x  |
f*(x) =1/5 (f(-2),f(-1),f(0),f(1),f(2)) | 1     1    1    1     1  ||  1  | 
                                        |w^-2  w^-1  1    w    w^2 ||w^-x | 
                                        \w^-4  w^-2  1   w^2   w^4 /\w^-2x/
The pattern for  p = 7, 9, 11, ... follows obviously from  p = 3 and  5  above.
PPPS.  I am tired and I hope I have not made any arithmetic mistakes.
PPPPS. I will continue this (if people are interested) with one more
       posting(s) exposing the Fourier Transform (discrete), its
       inverse and the Convolution Theorem.
-- 
--------------------------------------------------------------------------
   ABIAN MASS-TIME EQUIVALENCE FORMULA  m = Mo(1-exp(T/(kT-Mo))) Abian units.
       ALTER EARTH'S ORBIT AND TILT - STOP GLOBAL DISASTERS  AND EPIDEMICS
       ALTER THE SOLAR SYSTEM.  REORBIT VENUS INTO A NEAR EARTH-LIKE ORBIT  
                     TO CREATE A BORN AGAIN EARTH (1990)
Return to Top
Subject: FOURIER TRANSFORM (Discrete)
From: abian@iastate.edu (Alexander Abian)
Date: 8 Jan 97 05:31:18 GMT
Dear Emma  (at your and others' request I am continuing my previous posting
            on Fourier Interpolation with the Fourier Transform (discrete)):
Let me recall that the basic Fourier Interpolation (discrete)(by f*) scheme of
a function  f  defined at  x = -1, 0, 1 was given in my previous posting as:
                                        / w     1   w^-1 \   /  w^x  \
                                        |                 |   |      |
(9)  f*(x)/s  =  s(f(-1), f(0), f(1))   | 1     1   1    |   |   1  |
                                        |                 |   |      |
                                        | w^-1  1   w    |   |  w^-x|
                                        \                 /   \      /
where
(10)    w  =  e^((2pi/3)i)
and
(11)    s  =  1/sqrt 3
  Now, we can stare at (9) till doomsday and not see what to extract from it.
In no book, in no set of lecture notes, has anyone mentioned what key element
is hidden in (9).  I claim that the hidden element is as obvious as
the sun in a cloudless sky - but it takes .... the brain and eyes of a m....
caliber person to bring it out.
  This is how to use (9)  "of course, after I say it - it becomes obviously
elementary and kindergartenish".
  Looking at (9), I wish it could be written as 
                                     /   \
                                     |w^x |
                                     |    |
(13)               (g(-1),g(0),g(1)) | 1  |      
                                     |    |
                                     |w^-x|
                                     \    /
 Based on  (9) and (11), we  DEFINE  the function  g  (of course, as
 usual at  x = -1, 0, 1) given by
                                                      /w    1   w^-1\
                                                      |              |
(14)     (g(-1), g(0), g(1))  =  s(f(-1), f(0), f(1)) |1    1    1   |
                                                      |              |
                                                      |w^-1 1    w   |
                                                      \              /
as the FOURIER TRANSFORM (discrete) of  f.   
 It can be readily verified that the inverse of the 3 by 3 matrix appearing
in (14)  is              
                           /w^-1  1     w \
                           |               |
(15)                  1/3  | 1    1     1  |
                           |               |
                           | w    1    w^-1|
                           \              /
Clearly, using (15) we can solve  (14)  for  (f(-1), f(0), f(1))  and obtain
the formula for the INVERSE FOURIER TRANSFORM (discrete), given by
                                                     / w^-1  1   w    \
                                                     |                |
(16)     (f(-1), f(0), f(1))  = s(g(-1), g(0), g(1)) | 1     1   1    |
                                                     |                |
                                                     | w     1   w^-1 |
                                                     \                /
REMARK.  It is worth noticing how (14) and (16) are interrelated:  the
         matrices are inverse of each other and the roles of functions
         f  and  g  are interchanged.  
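The transform (14) and its inverse (16) can be checked numerically in a few lines (the sample values for f below are arbitrary illustrative numbers):

```python
import numpy as np

# Forward transform (14) and inverse (16), with s = 1/sqrt(3).
w = np.exp(2j * np.pi / 3)
s = 1 / np.sqrt(3)
M = np.array([[w,     1, 1 / w],        # the matrix in (9)/(14)
              [1,     1, 1    ],
              [1 / w, 1, w    ]])
Minv3 = np.array([[1 / w, 1, w    ],    # the matrix in (15)/(16), i.e. 3 * M^{-1}
                  [1,     1, 1    ],
                  [w,     1, 1 / w]])

f = np.array([0.0, 2.0, -3.0])          # f(-1), f(0), f(1)
g = s * f @ M                           # forward transform (14)
f_back = s * g @ Minv3                  # inverse transform (16)
```

Since s * s = 1/3 and the two matrices multiply to 3 times the identity, applying (16) after (14) recovers f exactly, which is precisely the interrelation noted in the REMARK.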
PS.  It is midnight and I am tired and have to stop now. I hope I did not
     make some obvious mistakes.  Tomorrow I hope to finish this
     topic by introducing the Convolution product of functions and
     proving the Convolution Theorem for Fourier Transforms.
PPS. I made some minor corrections in my previous posting, FOURIER
     INTERPOLATION.  In reading the present posting, please consult my
     latest FOURIER INTERPOLATION posting!
Return to Top
Subject: Help num-analysis of complex PDE's
From: Laura Nett
Date: Tue, 07 Jan 1997 19:58:28 -0800
I am a grad. student in Chemical engineering trying to solve a 
system of PDE's of the form : Ct(t,x)=Cxx(t,x) + C(t,x)*H(t,x)
where Ct and Cxx are the first derivative with respect to time
and the second derivative with respect to x.  I have tried using
the Numerical Method of Lines, but the system is so stiff that
in order for the solution to be stable my step size needs to be
really small - too small.  I am open to any suggestions on alternative
methods for solving the system.  I am also looking for a good
book on the subject.  If anyone has any information, please e-mail me at
lnett@sdcc3.ucsd.edu.       Thanks.    Laura
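A standard remedy for this kind of stiffness is an implicit (BDF) time integrator rather than an explicit one. A minimal method-of-lines sketch with scipy follows; the reaction term H, grid size, initial data, and Dirichlet boundary conditions are illustrative assumptions, not the actual problem from the post:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of lines for C_t = C_xx + C*H, with an assumed constant H = -50
# (stiff), C(0,x) = sin(pi*x), and Dirichlet C(t,0) = C(t,1) = 0.
N = 100
x = np.linspace(0.0, 1.0, N + 2)        # grid including the two boundary points
dx = x[1] - x[0]
H = -50.0

def rhs(t, C):
    # second-difference Laplacian on the N interior points; boundary values are 0
    d2 = np.empty_like(C)
    d2[0] = (C[1] - 2 * C[0]) / dx**2
    d2[-1] = (C[-2] - 2 * C[-1]) / dx**2
    d2[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2
    return d2 + H * C

C0 = np.sin(np.pi * x[1:-1])
# BDF is implicit, so the step size is chosen by accuracy, not stability
sol = solve_ivp(rhs, (0.0, 0.1), C0, method="BDF", rtol=1e-6, atol=1e-9)
```

With an explicit integrator the stability limit forces dt on the order of dx^2; the BDF solver takes far fewer, much larger steps on the same problem.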
Return to Top
Subject: Complex version of SLAP?
From: Chin Leo
Date: Wed, 08 Jan 1997 15:50:44 +1000
Hi,
    I wonder if anyone in this newsgroup might know whether
a complex version of SLAP - the Sparse Linear Algebra Package - is
available.  I know that a real version is available on netlib,
but where can I get hold of a complex version?
    Thanks for reading this message. Any help is appreciated.
  Best wishes,
   Chin
Return to Top
