Subject: Wall Street Quant Position
From: David Rothman
Date: Tue, 12 Nov 1996 08:30:20 +0100
The trading arm of a major investment firm is seeking a quantitative
specialist for its New York based Analytical Equity Trading Group to
work with its senior professionals in the on-going development of
sophisticated statistical/econometric trading models and strategies.
QUALIFICATIONS:
The successful candidate will have in-depth knowledge of financial
economics, time series econometrics, stochastic processes and the
requisite skills necessary to design and implement strategies in a
sophisticated computer environment. Comfort in dealing with
probabilistic notions such as random walks, Brownian motion and
martingale theory, combined with econometric ideas such as stationarity,
cointegration, error-correction models and ARCH/GARCH, is essential.
This position would be ideal for someone with prior experience in a
related field, and/or academic training near or at the Ph.D. level.
CONTACT:
E-mail: nyrtd@ny.ubs.com
Please reply via email with either a resume or a short informal
description of yourself. Please include a day & evening phone number.
We are an Equal Opportunity employer.
Subject: Re: Loading a large matrix from disk.
From: jerry1776@internetMCI.com
Date: 12 Nov 1996 14:34:38 GMT
In , Octavio Hector Juarez Espinosa writes:
>Hi,
>
>I would like to know if there are some routines in "C" or any language
>to read a matrix from disk.
>I am reading it element by element (519 by 519) and it takes 11 minutes.
>On a PC with Visual Basic it takes 35 minutes. I would like to know if there
>are ways to speed up the process?
>Thanks,
>
>Octavio Juarez
Octavio,
A few things to consider:
1. Use a "32 bit" compiler if you are not already doing so.
2. Use a disk cache if you are not already doing so.
3. If you are already doing both 1 and 2, try a Fortran compiler (a
32-bit one, of course).
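Independent of compiler choice, most of the 11 minutes is likely spent on
per-element I/O calls. Here is a sketch of reading the whole file in one
buffered pass (Python for brevity; the file is assumed to be
whitespace-separated text, since the original format isn't specified):

```python
def read_matrix(path, nrows=519, ncols=519):
    """Read a whitespace-separated matrix in one buffered pass,
    rather than one element at a time."""
    with open(path) as f:
        values = f.read().split()      # one big read, then split in memory
    if len(values) != nrows * ncols:
        raise ValueError("unexpected element count")
    it = iter(values)
    return [[float(next(it)) for _ in range(ncols)] for _ in range(nrows)]
```

The same idea (read a big block, parse in memory) applies in C via fread or
a single fscanf loop over a buffered stream.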
Jerry
Subject: Re: Solving special symmetrical linear system
From: checker@netcom.com (Chris Hecker)
Date: Tue, 12 Nov 1996 14:56:10 GMT
Miroslav Trajkovic writes:
>I have one problem which looks very nice but I am not sure if it
>has nice solution.
>Let a = [1 p q r s]', //where ' means transpose
> b = [1 u v w z]'
>and A = a*a';
>Is there any "shortcut" to solve the system
>A*x = b;
Well, I'm really just learning linear algebra, but it looks to me like
you can definitely tell if this problem has solutions, and what they
are, pretty easily. The problem is that aa' is a rank 1 matrix.
Anyway, I played with actually doing the problem out, but then I thought
of this:
aa'x = b
a(a'x) = b
(a'x)a = b
So, this trivially says that a dot x (otherwise known as a'x) times a
has to equal b. Now, the bummer for you is that the first elements of a
and b are equal (both 1), so it looks like a'x has to equal 1, and a and b
have to be equal as well, or else there's no solution. If a'x=1 and a=b then
you've got a family of solutions that satisfy a'x=1.
I hope I'm not missing something.
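Chris's criterion is easy to check numerically; here is a sketch (Python,
with illustrative entries standing in for p..s and u..z):

```python
def solve_rank_one(a, b, tol=1e-12):
    """Solve (a a') x = b where a, b are vectors with a[0] = b[0] = 1.
    A solution exists only if b equals (a'x) * a; since the first entries
    are both 1, that forces b == a and a'x == 1.  One particular solution
    is then x = a / (a'a), which indeed gives a'x = 1."""
    if any(abs(ai - bi) > tol for ai, bi in zip(a, b)):
        return None                    # b is not the right multiple of a
    aa = sum(ai * ai for ai in a)
    return [ai / aa for ai in a]
```

Any x with a'x = 1 works; the one returned here is just the minimum-norm
member of that family.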
Chris
Subject: Computing PI
From: Mark Gardner
Date: Tue, 12 Nov 1996 08:30:30 -0700
I found a formula for computing PI numerically, but it takes a LONG time
to converge.
Here is the formula:
         2*2*4*4*6*6*8*8...
PI = 2 * -------------------
         1*3*3*5*5*7*7*9...
When put into a computer, however, even after a billion iterations it
yields only 10 or so correct digits.
Can anyone give me a formula that I can put into the computer and have
it converge fairly quickly? Is it possible?
I want something that will give a more precise answer the longer the
computer processes it.
Thanks,
- Mark
(please e-mail me your responses as I don't check this list often.)
Subject: Re: linear regression with errors
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Date: 12 Nov 1996 16:56:27 GMT
In article , "M.A. Cremonini" writes:
|>
|>
|> Hi,
|> I must solve a problem like this:
|>
|> Ax=B
|>
|> A is my data matrix and B is a column vector with results.
|> B is formed by numbers which can span a certain range i.e. all
|> the numbers within
|> the range can be solutions of the corresponding equation:
|>
|> |a11 a12| |x1| |B1 +/- e1|
|> |a21 a22| | | = |B2 +/- e2|
|> |a31 a32| |x2| |B3 +/- e3|
|> |a41 a42| | | |B4 +/- e4|
|>
|> For example:
|>
|> if A11*x1+A12*x2 = Q and Q is within the range (B1-e1, B1+e1), the result
|> is the one searched for.
|>
|>
snip snip ..
If your errors in the right-hand side are of a statistical nature,
stochastically independent, with mean zero and the same variance,
then solving the normal equations would give a correct estimator
(but normally you should have many more observations than just 4 for 2
unknowns!). As a matter of fact, as you indicate, the variances of the
right-hand-side components are not equal, hence you must use
appropriate weighting. The normal equations are
A(transpose)Ax = A(transpose)B
If this system comes from some other source, it may be more useful
to use interval arithmetic to solve for the solution set or a bound of it.
See the books of Moore, or of Alefeld and Herzberger, for an introduction
to interval analysis.
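A sketch of the weighted version in Python/NumPy (the weights
W = diag(1/e_i^2) are the usual choice when the e_i are standard
deviations — an assumption about the error model):

```python
import numpy as np

def weighted_lsq(A, B, e):
    """Weighted least squares: solve the weighted normal equations
    (A' W A) x = A' W B  with  W = diag(1 / e_i**2),
    so observations with larger error bars count for less."""
    w = 1.0 / np.asarray(e) ** 2
    Aw = A * w[:, None]                # scale each row of A by its weight
    return np.linalg.solve(A.T @ Aw, A.T @ (w * B))
```

With all e_i equal this reduces to the ordinary normal equations above.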
hope this helps
peter
Subject: A Special Issue on Comp. Fluid Dynamics
From: wade@alpha2.csd.uwm.edu (Bruce A Wade)
Date: 12 Nov 1996 17:03:26 GMT
Subject: Special Issue of the International Journal of Applied Science
and Computation
The International Journal of Applied Science and Computation will be
publishing a special issue edited by David Schultz and Bruce Wade on
computational fluid dynamics. Any topics related to this area will be
considered for publication.
Requests for additional information can be addressed to schultz@math.uwm.edu.
Contributors should send three copies of their paper to:
David Schultz, Professor
Department of Mathematical Sciences
University of Wisconsin-- Milwaukee
PO Box 413
Milwaukee, Wisconsin 53201-0413
--
Bruce A. Wade, Associate Professor, Dept. of Mathematical Sciences
University of Wisconsin-Milwaukee, TEL: (414) 229-5103, FAX: (414) 229-4907
E-MAIL: wade@csd.uwm.edu, WWW: http://www.math.uwm.edu/
Amateur Radio: N9UR
Subject: Re: Computing PI
From: "Dann Corbit"
Date: 12 Nov 1996 17:46:08 GMT
The sci.math FAQ has a spigot algorithm written by Dik Winter
that will compute a large number of digits quickly. If you want
to calculate millions of digits, do a web search for PI and AGM
or a search for MPFUN. If you want to calculate billions of
hexadecimal digits, there is an algorithm that will calculate
any hex digit of pi. You might also try the inverse symbolic
calculator page, or Eric's Treasure Trove of Mathematics.
Mark Gardner wrote in article <32889816.167E@cc.usu.edu>...
> I found a formula for computing PI numerically, but it takes a LONG time
> to converge.
>
> Here is the formula:
>
>          2*2*4*4*6*6*8*8...
> PI = 2 * -------------------
>          1*3*3*5*5*7*7*9...
>
> However, when put into a computer, it takes a long time to converge.
>
> Even after a billion iterations, it only has 10 or so digits.
>
> Can anyone give me a formula that I can put into the computer and have
> it converge fairly quickly? Is it possible?
>
> I want something that will give a more precise answer if the computer
> has longer time to process it.
>
> Thanks,
>
> - Mark
>
> (please e-mail me your responses as I don't check this list often.)
>
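The hex-digit algorithm mentioned above is presumably the
Bailey-Borwein-Plouffe (BBP) formula; even summed directly, without the
digit-extraction trick, it gains roughly one hexadecimal digit per term
(a Python sketch):

```python
import math

def pi_bbp(terms=12):
    """Sum the BBP series for pi; each term contributes roughly
    one additional hexadecimal digit of accuracy."""
    s = 0.0
    for k in range(terms):
        s += (4/(8*k + 1) - 2/(8*k + 4) - 1/(8*k + 5) - 1/(8*k + 6)) / 16**k
    return s
```

A dozen terms already exhaust double precision, in contrast to the billion
iterations of the Wallis product quoted below.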
Subject: Re: leading dimension
From: rkowen@klingon.lbl.gov (R. K. Owen)
Date: 12 Nov 1996 21:27:41 GMT
In article <3286E56D.3A5F@sas.seismology.hu>,
Zoltan Bus wrote:
>Hi!
>
>Somebody, please, tell me the meaning of the expression
>'leading dimension', which can be found in almost every FORTRAN matrix
>subroutine. Excuse me for this silly question.
>
>Thank you very much
>
>Zoltan Bus
The leading dimension is the declared length of the first dimension of the
matrix, i.e. how many rows it was dimensioned with. It's a little clearer
with an example:
      dimension A(13,20)
C     fill only the 4x4 part of A
      do 100 i=1,4
        do 100 j=1,4
          A(j,i) = i*j
  100 continue
      ...
C     the leading dimension of A = 13
C     the dimension of the data in A = 4
      call something(A,13,4)
      ...
      end

      subroutine something(Array,NLA,NA)
      dimension Array(NLA,NA)
      ...
Fortran is column-major; in other words, an array is stored column after
column, with the elements of each column contiguous in memory. In the above
example, the memory location of A(1,2) follows immediately after A(13,1).
The subroutine "something" needs the leading dimension so that it can
find all the elements of the 4x4 array stored in A.
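The storage rule can be mimicked in any language; here is a sketch (Python,
mirroring the Fortran example above) of how element (i,j) is located in the
flat column-major buffer:

```python
def colmajor_offset(i, j, lda):
    """0-based offset of 1-based element (i, j) in column-major storage
    with leading dimension lda."""
    return (j - 1) * lda + (i - 1)

# Mimic "dimension A(13,20)" as one flat buffer, filling the 4x4 corner.
LDA, NCOL = 13, 20
mem = [0] * (LDA * NCOL)
for i in range(1, 5):
    for j in range(1, 5):
        mem[colmajor_offset(i, j, LDA)] = i * j
```

A(1,2) sits at offset 13, immediately after A(13,1) at offset 12 — which is
why the subroutine must know LDA = 13, not 4, to step between columns.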
Hope this helps.
R.K.
+-------------------------------+------------------------------+
| R.K.Owen,PhD | rk@owen.sj.ca.us |
| LBNL/NERSC | rkowen@nersc.gov |
| Univ. of California, Berkeley | Wrk:(510)486-7556 |
+-------------------------------+------------------------------+
Subject: Re: Iterative solver for banded matrix
From: eijkhout@jacobi.math.ucla.edu (Victor Eijkhout)
Date: 12 Nov 1996 23:23:27 GMT
In article <32827F30.4F0F@asu.edu> "Hans D. Mittelmann" writes:
> Iterative methods are applied when the matrix is sparse. It is not by
> itself generally an advantage for these methods that it is banded.
Depends on your bandwidth. The matrix from central differences /
linear elements on a square grid is banded and sparse. The bandwidth
is so large, however, that you cannot make any use of that fact.
To the original poster: what exactly is the size of the band?
O(1)? Or O(sqrt(N))?
Victor.
--
405 Hilgard Ave ...................... The US pays 120,000 rubles rent on its
Department of Mathematics, UCLA ........... Moscow embassy. At the signing of
Los Angeles CA 90024 ........................... the lease this was $170,000,
phone: +1 310 825 2173 / 9036 ....................... today it's about $22.56
http://www.math.ucla.edu/~eijkhout/ [source: Sevodnya]
Subject: Re: complex Newton's method
From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Date: 12 Nov 1996 14:02:08 -0500
In article <567nn3$4s1@newstand.syr.edu>, Tom Scavo wrote:
>Hi,
>
>Anyone who has studied Newton's method on the real line knows its
>geometric interpretation, that is, a sequence of tangent lines whose
>zeros converge to a root of a function. What is the corresponding
>geometric interpretation of Newton's method in the complex plane?
>
>Explanations or pointers into the literature will be most appreciated.
>If you post, please cc trscavo@syr.edu as well.
Before it becomes tangled in the intricacies of fractals, let me sum up
the "good news":
Newton's method in many variables works with the Jacobian matrix and its
inverse. For two variables, we try to solve
f_1(x_1,x_2)=0
f_2(x_1,x_2)=0 ... in vector form, f(x)=0.
Taylor's expansion around x, with first order derivatives, is
f(x+h) = f(x) + J(x) * h + R(h)
where J(x) = [partial f_1/partial x_1 partial f_1/partial x_2]
[partial f_2/partial x_1 partial f_2/partial x_2]
and the remainder R goes to 0 faster than ||h||.
If you neglect the remainder, you obtain the linear (some prefer
"affine") approximation to f(x+h) by f(x) + J(x)*h.
Assuming that J(x) is invertible, we try to improve x by solving
f(x) + J(x)*h = 0, and declare x+h to be a new guess. This gives us
x+h = x - (J(x))^(-1) * f(x)
In one variable, this is Newton's method, isn't it? It assumes that
f'(x) is not 0, of course. And if f has bounded second order partial
derivatives and a bounded inverse of the Jacobian matrix in a
neighbourhood of the exact solution, then guesses sufficiently close to
the exact solution lead to a "quadratically convergent" sequence; that is,
the present error is bounded by a constant multiple of the square of
the previous error, just as in one real variable.
Now for the analytic functions: for z = x_1 + i * x_2,
f(z) = f_1(x_1,x_2) + i * f_2(x_1, x_2)
and the Jacobian matrix J(x) acts on [h_1,h_2]' the same way as the
complex number f'(z) acts on h_1 + i * h_2 (up to obvious notational changes).
The reason is the Cauchy-Riemann equations
partial f_1/partial x_1 = partial f_2/partial x_2
partial f_1/partial x_2 = - partial f_2/partial x_1
valid whenever f is analytic.
(Check out the algebra!)
Then the formula z_new = z - f(z)/f'(z)
is a consequence of the more general two-variable method.
Remark: Even in one variable, you can get pathological cases:
Solving x^3-5*x=0, starting with x_0=1, leads to a cycle
{1, -1, 1, -1, ...}
which is unstable (the iteration departs from the cycle if you start from
numbers close to 1 but different from 1),
and solving x^3 - 2*x + 2 = 0 (Smale's example), starting from 0, leads to
a stable cycle {0, 1, 0, 1, ...}. Experiment with x_0 from [-0.1, 0.1].
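Both cycles are easy to reproduce (a Python sketch):

```python
def newton_orbit(f, df, x0, steps):
    """Iterate Newton's method x <- x - f(x)/df(x), recording the orbit."""
    orbit = [x0]
    for _ in range(steps):
        x0 = x0 - f(x0) / df(x0)
        orbit.append(x0)
    return orbit

# x^3 - 5x = 0 from x_0 = 1: the unstable cycle {1, -1, 1, -1, ...}
cycle1 = newton_orbit(lambda x: x**3 - 5*x, lambda x: 3*x**2 - 5, 1.0, 4)

# Smale's example x^3 - 2x + 2 = 0 from x_0 = 0: the stable cycle {0, 1, ...}
cycle2 = newton_orbit(lambda x: x**3 - 2*x + 2, lambda x: 3*x**2 - 2, 0.0, 4)
```

Passing a complex x0 to the same routine performs the complex-plane
iteration discussed above.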
Good luck, ZVK (Slavek).
Subject: Re: Directed rounding on the Pentium
From: "Stephen W. Hiemstra"
Date: 13 Nov 1996 01:14:54 GMT
Fred,
This is interesting. Do you know if Microsoft has plans to add these
improvements to Visual C++? What about Borland C++? Does Borland
implement a long double as real10?
Stephen
tydeman@tybor.com wrote in article <5699a2$12k@gate.seicom.net>...
> In <3286B63E.5BB6@WorldNet.att.net>, "Jeffery J. Leader" writes:
> >Kahan gave an interesting talk on this type of thing at the Gragg
> >Conference in Monterey last weekend--all the directed rounding,
> >overflow/underflow, etc. stuff that's built into the chips but not
> >reasonably accessible from most compilers. Interesting but frustrating.
>
> C9X, the revision of C currently under way, has added the Floating-Point
> C Extensions (FPCE) that will enable a numerics programmer to get access
> to the floating-point environment which includes the rounding direction
> and the exception flags. Several compiler vendors already support FPCE.
>
> Fred Tydeman         +49 (7031) 288-964   Tydeman Consulting
> Meisenweg 20         tydeman@tybor.com    Programming, testing, C/C++ training
> D-71032 Boeblingen   Voting member of X3J11 (ANSI "C")
> Germany              Sample FPCE tests: ftp://ftp.netcom.com/pub/ty/tydeman
Subject: Re: Runge-Kutta for IBVP on second order PDE in 4 dim.?
From: dragob <>
Date: Tue, 12 Nov 1996 19:40:28 GMT
Are you interested in solving the wave equation numerically?
If so, you might want to consider two time-domain methods:
- finite-difference time domain
- finite-element time domain
You might also consider rewriting the problem as a system of first-order
equations, which often yields more stable numerical results.
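Either way, the standard explicit time-domain update for the 1-D wave
equation can be sketched as follows (Python; the 3-D case adds the analogous
y and z second differences inside the bracket):

```python
def wave_step(p, p_old, r2):
    """One leapfrog time step for p_tt = p_xx on a 1-D grid with
    fixed (Dirichlet) endpoints.  r2 = (c*dt/dx)**2; stable for r2 <= 1."""
    p_new = p[:]                       # endpoints stay fixed
    for i in range(1, len(p) - 1):
        p_new[i] = 2*p[i] - p_old[i] + r2 * (p[i+1] - 2*p[i] + p[i-1])
    return p_new
```

Repeatedly calling this with the two most recent time levels marches the
solution forward; the CFL condition r2 <= 1 (and its 3-D analogue) governs
stability.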
Regards,
Dragos Bica
==========A Montvay {TRACS}, 11/4/96==========
Hi all,
I want to use a Runge-Kutta (or similar) method on an IBVP on the
three-dimensional wave equation
d2p/dx2+d2p/dy2+d2p/dz2-d2p/dt2=0
Most books on the numerical solution of DEs describe Runge-Kutta for ODEs;
some transfer it to first-order PDEs in two dimensions, but I have
not found any treatment of second order in four dimensions.
Therefore any algorithm, program code in (almost)any language,
reference to book/paper or hints on how to do it myself would be very
welcome.
I hope somebody helps me so I don't have to figure it out myself -
after all I'm sure this has been done before!
Andras
Subject: Q: Looking for software and advice to analyze my data
From: "Technical Consultant"
Date: 13 Nov 1996 07:52:39 GMT
Hello,
I was wondering if someone could answer a question from a
non-math.num.analysis guru:
I would like help in finding software that will provide model coefficients
to correct a single dependent variable for three independent variables, one
of which is the time derivative of one of the other independent variables.
Is what I am trying to do called "multiple regression"? If not, what is
the term for this class of problems?
For solving this problem, what package would be best for someone who is not
a math expert?
I'd prefer freeware or shareware for Win32/Win16/DOS (in order of
preference), but would pay for a low-cost package if I could gain
user-friendliness.
Please note that it is a *must* that the software handle the
derivative-of-time variable.
Any advice would be appreciated. I'll post a summary of responses.
Thanks,
rah@cris.com
Subject: Re: Pronunciation of LaTeX
From: Konrad Hinsen
Date: 13 Nov 1996 10:01:31 +0100
Hideo Hirose writes:
> In Japan, many researchers pronounce LaTeX as "latef." Is that correct? How
> do you pronounce TeX and LaTeX, actually, especially in the United States?
The question has already been addressed by Donald Knuth himself, who
explained that the X in "TeX" is actually the Greek letter chi, and
therefore should be pronounced like "chi" in Greek, "ch" in German,
Irish, or Scottish, or like the corresponding sounds in Russian,
Arabic, etc. That seems to be too difficult for most speakers
of English, so what I hear in practice is either "tek" (in analogy
to the English pronunciation of words like "technical") or "tex",
especially "latex", i.e. totally ignoring the intended etymology.
--
-------------------------------------------------------------------------------
Konrad Hinsen | E-Mail: hinsen@ibs.ibs.fr
Laboratoire de Dynamique Moleculaire | Tel.: +33-76.88.99.28
Institut de Biologie Structurale | Fax: +33-76.88.54.94
41, av. des Martyrs | Deutsch/Esperanto/English/
38027 Grenoble Cedex 1, France | Nederlands/Francais
-------------------------------------------------------------------------------
Subject: Re: Loading a large matrix from disk.
From: ajung@informatik.uni-rostock.de (Andreas Jung)
Date: 13 Nov 96 09:44:40 GMT
Octavio Hector Juarez Espinosa (oj22+@andrew.cmu.edu) wrote:
: I would like to know if there are some routines in "C" or any language
: to read a matrix from disk.
: I am reading it element by element (519 by 519) and it takes 11 minutes.
Are you obliged to use a given format in which the matrix is stored
on disk? If not, check whether your matrices are "sparse", i.e. most
entries are zero. In that case, you only have to write the
non-zero elements to disk, together with their positions, i.e. you
have to write (and later read) a sequence of triplets (i,j,a).
Writing and reading a matrix should then be much faster.
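A minimal sketch of the triplet scheme in Python (the one-line header with
the dimensions is an added convention, not part of the original suggestion):

```python
def write_sparse(path, mat):
    """Write only non-zero entries as (i, j, value) triplets, one per line,
    preceded by a header line with the matrix dimensions."""
    with open(path, "w") as f:
        f.write(f"{len(mat)} {len(mat[0])}\n")
        for i, row in enumerate(mat):
            for j, a in enumerate(row):
                if a != 0:
                    f.write(f"{i} {j} {a}\n")

def read_sparse(path):
    """Rebuild the dense matrix from the triplet file."""
    with open(path) as f:
        nrows, ncols = map(int, f.readline().split())
        mat = [[0.0] * ncols for _ in range(nrows)]
        for line in f:
            i, j, a = line.split()
            mat[int(i)][int(j)] = float(a)
    return mat
```

For a mostly-zero 519x519 matrix this shrinks both the file and the number
of I/O operations dramatically.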
: On a PC with Visual Basic it takes 35 minutes. I would like to know if
: there are ways to speed up the process?
I'd suggest using a _programming language_ instead of VB ;->
(Forgive me, but I couldn't resist making this silly remark ;-)
Greetings,
Andreas Jung.
--
Andreas Gisbert Jung DL9AAI Tel:0381/498-3364 Fax:0381/498-3366
Theoretische Informatik mailto:ajung@informatik.uni-rostock.de
Universitaet Rostock http://www.informatik.uni-rostock.de/~ajung/
PGP fingerprint = 8A 0B 05 CA EE AB 7B 01 D9 07 6A D0 84 38 BB 82
Subject: Re: PRIME NUMBER UPTO 2^64 on CDROM?
From: mcgrant@wheezer.stanford.edu (Michael C. Grant)
Date: 13 Nov 1996 01:56:59 -0800
mmacleod@henge.com (Malcolm MacLeod) writes:
> >>Colbert wrote in article
> ><53c8bj$jig@decius.ultra.net>...
> >> I am currently looking for the list of ALL prime numbers up to 2^64. I
> >> know that some people working with supercomputers have had their machines
> >> compute new prime numbers around 2^11000.
> >> What I would like to know is whether somebody has effectively recorded
> >> these prime numbers on some sort of media. A CDROM with prime numbers
> >> only would be nice.
>
> "Dann Corbit" wrote:
> >I don't think all of the primes up to 2^64 will fit on one CD-ROM.
> >That's 18,446,744,073,709,551,616 (18 quintillion) bits.
>
> I'm not sure I agree that it is impossible. Very difficult,
> certainly... but technical data management is my specialty.
> I think I could find a way to encode most of them onto a CDROM.
> As the numbers get bigger, the primes get farther apart. That helps.
> So... just how many primes are there below 2^64?
Hmm, how about this: it's very likely that _some_ sort of compression
will be required here. So why not write a custom compression program
for this particular CD-ROM, and store it alongside the data?
Here's one that would work exceedingly well: one which includes the
ability to generate all of the primes from 1 up to N when N is
supplied.
So, for N=2^64-1, this should require 64 bits, plus the size of the
compression program. A pretty good compression ratio, I imagine, but
more importantly it will easily fit on a CD-ROM.
I'm sure there would be plenty of room for both a PC and a Macintosh
version; maybe even a few UNIX platforms would be supported.
Sorry if this has been suggested already :-) Of course, the biggest
disadvantage would be decompression time, I suppose.
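The "decompression program" in this scheme is just a prime generator. A
sieve of Eratosthenes illustrates the idea (Python; a real run anywhere near
N = 2^64 would need a segmented sieve and a great deal of patience):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: regenerate every prime <= n from n alone."""
    if n < 2:
        return []
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # cross off multiples of p starting at p*p
            sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if sieve[i]]
```

So the "compressed" CD-ROM really only needs the 64-bit value of N plus this
program, exactly as argued above; the cost is entirely in decompression time.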
--
Michael C. Grant Information Systems Laboratory, Stanford University
mcgrant@isl.stanford.edu
------------------------------------------------------------------------------
"When you get right down to it, your "Long hair, short hair---what's
average pervert is really quite the difference once the head's
thoughtful." (David Letterman) blowed off?" (Nat'l Lampoon)
Subject: Re: Non-linear schrodinger eqn
From: "Stephen R. Chinn"
Date: Wed, 13 Nov 1996 07:25:50 +0000
See the book "Nonlinear Fiber Optics" by G. P. Agrawal (Academic Press)
for a brief description, plus references. He uses the split-step
Fourier method for solving the NLSE describing optical soliton
evolution. You might also want to consider a finite-difference,
real-space type method.
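A minimal split-step Fourier sketch for the focusing NLSE
i u_t + (1/2) u_xx + |u|^2 u = 0 (Python/NumPy; the grid sizes and the
sech-soliton initial condition below are illustrative, not taken from
Agrawal's book):

```python
import numpy as np

def split_step_nlse(u, dx, dt, steps):
    """Split-step Fourier method for  i u_t + (1/2) u_xx + |u|^2 u = 0:
    alternate an exact nonlinear phase rotation in real space with an
    exact linear dispersion step in Fourier space (periodic boundaries)."""
    k = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)
    lin = np.exp(-0.5j * k**2 * dt)             # dispersion propagator
    for _ in range(steps):
        u = u * np.exp(1j * np.abs(u)**2 * dt)  # nonlinear substep
        u = np.fft.ifft(lin * np.fft.fft(u))    # linear substep
    return u
```

The fundamental soliton u(x,0) = sech(x) propagates with its envelope
unchanged, which makes a convenient sanity check for an implementation.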
--
Stephen R. Chinn, Optical Communications Technology
MIT Lincoln Laboratory
M/S C-280, 244 Wood Street
Lexington, MA 02173-9108
PHONE: (617) 981-5370; FAX: (617) 981-4129