Subject: Re: image analysis system
From: "Sheldon L. Epstein"
Date: Sun, 10 Nov 1996 16:46:29 -0600
Raja Elhayek wrote:
>
> Sheldon L. Epstein wrote:
> >
> > matthew g donovan wrote:
> > >
> > > Our lab is considering purchasing an image analysis system for the
> > > quantification of immunohistochemical staining. Does anyone know of a
> > > good source for comparing the features of the currently available
> > > systems? We are looking for something compatible with our Nikon
> > > microscopes and very user friendly. We would welcome any suggestions of
> > > what to look for, as well as what to avoid. Thanks in advance, MD
> >
> > Hello Matthew,
> >
> > Well to begin, there is no such thing as a 'user-friendly' imaging
> > system. To understand why this is so, you have to meet the people
> > who write the software. They are thinking in terms of words
> > such as 'morphology' while you are thinking 'immunohistochemical staining'.
> > What you need is a person to do translation and provide the goodies
> > you need.
> >
> > We build custom automatic inspection systems and use ZEISS
> > microscopes. If we can be of assistance to you, then please call or
> > e-mail.
> >
> > Sheldon L. Epstein, shel@k9ape.com
> > Chief Engineer
> > Epstein Associates
> > Wilmette, IL
> >
> > http://www.k9ape.com
>
> Hello Matthew and Sheldon
>
> Actually Sheldon is wrong; although we truly think in terms of
> morphology when processing images, there is some software that is
> really user-friendly. Personally, I have used Image-Pro Plus in a lot
> of biological analysis and it seems to have some very good features,
> and it is easy to use. Now I am not saying that it is the best, but
> for the features included and the price of about $2500 it seems to be
> OK. It also has a programming environment for Basic and C; you can
> also access it through dynamic linking to perform very specialized
> analysis. But be a little careful; their customer service department
> isn't that hot!
>
> If you have any questions you can email me back
> Raja Elhayek
Hello Raja,
I think that something I said got lost in translation. There are
several packages out there containing lots and lots of imaging tools.
However, they rarely contain advice on how any of these tools
might be useful in a practical application.
The question that a potential user has to answer is whether they
want to use their limited time to duplicate the learning experience
of others or to concentrate on expanding their own area
of science, engineering or technology. Now, for someone like
Matthew, who is interested in 'immunohistochemical staining' for
solving some problem - say, in biology - it is probably a waste
of his time to spend hours exploring the finer points
of the Canny operator or the latest in wavelets. And even if
he were to do it, he would still be left with the basic problems
of engineering a practical system for a laboratory or a manufacturing
plant.
This is not to say that Matthew and others could not succeed; but,
it has been our experience that our customers have to decide
whether they want to concentrate on their business or whether
they want to be in our business. There are a lot of video
cameras and imaging boards out there gathering dust because
some researcher in another area thought it would be a simple
matter to build a system.
So much for direct marketing to end users of some of our tools.
That worked for solving simple problems; but now that all of
the simple problems have been solved, direct marketing has
lost its allure as companies now focus their attention on
integrators. We still sell components to those who want them;
however, as the components get more sophisticated, there are
fewer customers who want to buy them. Instead, the demand for
integration is growing, and that is an intelligent investment
decision. After all, how many of you out there write your
own operating systems or WWW browsers? You can do it, but
it's unlikely that you'll make any money at it. And the same
is true for image analysis.
Shel Epstein, shel@k9ape.com
Chief Engineer
Epstein Associates
http://www.k9ape.com
Subject: Axiom EX 1650 Video Printer
From: Bill Reuber
Date: 10 Nov 1996 23:24:04 GMT
For sale: Axiom EX 1650 video printer, 110 volt, video in, prints on
special paper. $65.00.
Subject: Re: Number of gray levels?
From: G.DITTIE@ABBS.heide.de (Georg Dittie)
Date: 9 Nov 96 21:21:00 +0200
> I know that most monitors can not distinguish 256 shades of gray. More
> to the point, the human eye can not either ...
> How many levels of gray can the human eye distinguish? How should these
> gray levels be distributed? (i.e. offhand, at low gray scales, the
> difference between gray levels should be low. At higher gray scales,
> we can have larger differences -- the contrast stretch basically.)
Hi to all,
Here is my experience: I used a hi-color video card in my PC for the
last four years. Hi-color means the ability to display 32 grey levels.
And the eye could distinguish them too. Another very disturbing effect
occurred: the boundary between two levels was enhanced by our own
vision centre in the brain. I could see real steps!
Six weeks ago I changed the old adapter to a true-color video card
(with 256 grey levels). The boundaries, like equidensites, vanished.
Conclusion: for a realistic display, 256 levels in every color channel
are absolutely necessary. The relative sensitivity of our eyes is very
good, better than 1% difference. Only the ability to estimate absolute
values is no better than 15 percent ...
Regards, Georg
My new homepage: http://members.aol.com/Waermebild
Meine neue Homepage: http://members.aol.com/Waermebild
Subject: Re: Downsampling without aliasing
From: tardif@gel.ulaval.ca (Pierre-Martin Tardif)
Date: Mon, 11 Nov 1996 03:19:06 GMT
tsui@mhd1.pfc.mit.edu wrote:
>Dear friends:
>How to downsample an image without aliasing effect?
You need to filter out half of the spectrum width, so your passband
needs to be less than or equal to 1/2. Usually this filter is a
low-pass filter.
Ideal LPF:
^
|
|------------------|
| |
--+------------------|--------------------|-----> f
0 1/2 1
PMT
PS:
f is normalized to sampling frequency
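PMT's recipe - low-pass to half the band, then subsample - can be tried numerically. This is an illustrative sketch, not his code: the 3-tap binomial kernel used here is only a crude stand-in for the ideal low-pass response drawn in the figure.

```python
import numpy as np

def downsample2(signal):
    """Halve the sampling rate: low-pass filter first, then decimate.

    The [0.25, 0.5, 0.25] kernel is a simple (imperfect) low-pass
    filter; a longer FIR design would approach the ideal response."""
    smoothed = np.convolve(signal, [0.25, 0.5, 0.25], mode="same")
    return smoothed[::2]              # keep every second sample

# A tone at the Nyquist frequency: +1, -1, +1, -1, ...
nyquist = np.array([1.0, -1.0] * 8)

naive = nyquist[::2]                  # plain decimation
filtered = downsample2(nyquist)       # filter, then decimate

print(naive)     # all +1: the Nyquist tone aliases down to DC
print(filtered)  # zero except at the very first (edge) sample
```

The naive version shows exactly the aliasing being warned about: a component above the new Nyquist limit reappears as a spurious low frequency.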
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Pierre-Martin Tardif, graduate student email: tardif@gel.ulaval.ca
Computer Vision and Systems Lab http://www.gel.ulaval.ca/~tardif
Laval University, Ste-Foy, Canada phone: 418-656-2131, 4848
Subject: Re: How can I view the edges of a slice
From: cohenj@cs.unc.edu (Jonathan Cohen)
Date: 11 Nov 1996 01:29:49 -0500
In article <01bbcf8e$dd9425c0$7b6bae80@viper.cso.uiuc.edu>,
Peter J. Wahle wrote:
>I need to view the edges of a slice of a 3D OpenGL object. For example, if
>I was to slice the trunk of a tree, I would want to see the rings; or if I
>was to slice a teapot, I would want to see the cut edges only. How can I
>do this using OpenGL on an OpenGL object? The output only need be a binary
>image.
Set the hither and yon (sorry, I guess that's now "near" and "far")
planes of your projection model to be very close together. That should
give you something like a slice. (You might want to disable backface
culling as well, but I'm not sure what sort of look you want).
This won't exactly turn your slices into lines, but it should give you a
pretty close approximation to what you want.
Jon
Subject: Re: Resectioning
From: "Dr. Helmut Kopp"
Date: Mon, 11 Nov 1996 10:44:53 +0100
Jules wrote:
>
> Hi,
>
> Does anyone know anything about photogrammetric resectioning, for
> finding the position of a camera?
>
> Are there any papers available to see the maths behind this??
>
> Any references perhaps??
>
> Thanks in advance
>
> Julian Harris
I think a good introduction would be:
Andreas Meisel, "3D-Bildverarbeitung für feste und bewegte Kameras",
Vieweg-Verlag 1994, ISBN 3-528-06624-5.
The book is written in German.
Hope this helps!
Helmut Kopp
Subject: weighing cows by imaging?
From: lupkeen@leland.Stanford.EDU (Lup Keen Ng)
Date: 11 Nov 1996 03:02:00 -0800
I came across this news posting some time ago and found it
interesting.
As I'm not an active researcher in image processing, I'm
wondering if anyone in this community knows about this,
and perhaps point me to some web pages/links.
The news article:
------ begin -------
From newsbytes@clari.net Thu Aug 29 10:53:04 PDT 1996
BRISBANE, AUSTRALIA, 1996 AUG 28 (NB) -- Researchers from Queensland's
University of Technology (QUT) say they have found the ultimate
application of information technology: judging the weight of dairy
cows by computer imaging.
The team responsible for the breakthrough says the computer-imaging
package it developed weighs cows for "a fraction of the cost of
existing methods," according to Australian Associated Press (AAP).
Why weigh a cow at all? The wire service says weighing of the beasts
is vital to determine how much feed they need and to plan for calving.
Dr. Shlomo Geva of QUT's Neurocomputing Research Centre is the brains
behind the development. He told AAP that a cow-weigher now needs only
a video camera, PC and a $A3,000 software package, compared to a
minimum investment of $25,000 for the system currently in place.
(19960828)
--- end ------
Subject: Interpolate interlaced video frames
From: Christian Merkwirth
Date: Mon, 11 Nov 1996 15:44:10 +0100
Video data from a CCD camera is usually interlaced; that means
the first frame coming from the camera contains, e.g., the even-numbered
lines of the image, and the next frame carries the information from
the odd-numbered lines of the image. Both frames have exactly
the same size.
If you record a scene without moving objects, the frames coming from
the camera show almost the same picture; there is just a small vertical
displacement of half a line between two frames.
My problem is to eliminate this small difference in position by
interpolating, for example, all frames with the odd lines.
My first idea is to calculate the mean between two consecutive lines in
every 'odd' frame, but I guess there is a more exact interpolation
using more lines in one frame.
Does anyone know which formula is best? And what do I do with the first
and the last line in a frame, which have only one neighbour?
Thanks in advance,
Christian Merkwirth
Drittes Physikalisches Institut
Goettingen
[tel] ++ 49 551 39 21 65 [fax] ++ 49 551 39 77 20
[email] cmerk@physik3.gwdg.de
Subject: Re: Interpolate interlaced video frames
From: rander+@elm.ius.cs.cmu.edu (Peter Rander)
Date: 11 Nov 1996 17:57:06 GMT
In article <32873BBA.15FB@physik3.gwdg.de>, Christian Merkwirth writes:
|> Video data from a CCD-camera is usually interlaced, that means
|> the first frame coming from the camera contains e.g the even numbered
|> lines of the image, the next frame carries the information from
|> the odd numbered lines of the image, both frames have exactly
|> the same size.
Just to clarify, you are using "frame" when it looks like you mean
"field." That is, one video frame contains two fields vertically
offset from one another.
|> If you record a scene without moving objects, the frames coming from
|> the camera show almost the same picture, there's just a small
|> displacement in vertical direction of a half line between two frames.
|>
|> My problem is to eliminate this small difference in the position by
|> interpolating for example all frames with the odd lines.
Wait, do you mean that you *know* nothing is moving? Then why do any
interpolation? You can simply re-interlace the two fields to create
one higher-resolution frame.
|> My first idea is to calculate the mean between two consecutive lines in
|> every 'odd' frame, but I guess there's a more exact interpolation
|> using more lines in one frame.
Your suggested approach is linear interpolation. You can use the same
approach but extended to higher-order polynomials. For example, you
could use cubic interpolation to the two rows above and the two rows
below each line you are interpolating. If the row you are computing
is row R, and the current pixel in the row is at column C, then you
fit a cubic polynomial to the pixels (R-1.5,C), (R-0.5,C), (R+0.5,C),
and (R+1.5,C). You then evaluate the fit at offset 0, i.e., at row R
itself. You can also use some sort of (B-)spline interpolation scheme.
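The four-point cubic fit described above can be solved once and for all: a cubic through samples at offsets -1.5, -0.5, +0.5, +1.5, evaluated at 0, gives fixed weights (-1, 9, 9, -1)/16. A numpy sketch (function names are mine, not from any library):

```python
import numpy as np

def cubic_midpoint(p0, p1, p2, p3):
    """Value at offset 0 of the cubic through samples at offsets
    -1.5, -0.5, +0.5, +1.5 (closed-form weights of that fit)."""
    return (-p0 + 9.0 * p1 + 9.0 * p2 - p3) / 16.0

def interpolate_missing_lines(field):
    """Synthesize the lines missing from one field using the four
    nearest field lines; at the frame edges the outermost available
    line is simply reused (clamped)."""
    rows, cols = field.shape
    out = np.empty((rows - 1, cols))
    for r in range(rows - 1):
        above = field[max(r - 1, 0)]         # clamp at the top edge
        below = field[min(r + 2, rows - 1)]  # clamp at the bottom edge
        out[r] = cubic_midpoint(above, field[r], field[r + 1], below)
    return out

# A vertical ramp is reproduced exactly away from the edges:
field = np.tile(np.arange(6.0), (4, 1)).T   # hypothetical 6-line field
print(interpolate_missing_lines(field)[2])  # [2.5 2.5 2.5 2.5]
```

On linear data the cubic and linear schemes agree; the difference only shows up around sharp vertical detail.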
|> Does anyone know which formula is the best ?
Well, in most cases, the linear interpolation that you are already using
will work pretty well, especially if speed is a factor in the performance
of your interpolation. The human eye is unlikely to see much of the
improvement of using higher-order interpolation on these images.
|> What do I with the first
|> and the last line in a frame, these lines have only one neighbour ?
A classic problem. It depends. Do you even care about these lines?
They *are* on the extreme edge of the image, so they may not even be
very noticeable. In general, you cannot do much -- just copy the last
row of your data, for example.
-Pete
Subject: Re: image analysis system
From: Raja Elhayek
Date: Mon, 11 Nov 1996 10:43:09 +0000
Sheldon L. Epstein wrote:
>
> Raja Elhayek wrote:
> >
> > Sheldon L. Epstein wrote:
> > >
> > > matthew g donovan wrote:
> > > >
> > > > Our lab is considering purchasing an image analysis system for the
> > > > quantification of immunohistochemical staining. Does anyone know of a
> > > > good source for comparing the features of the currently available
> > > > systems? We are looking for something compatible with our Nikon
> > > > microscopes and very user friendly. We would welcome any suggestions of
> > > > what to look for, as well as what to avoid. Thanks in advance, MD
> > >
> > > Hello Matthew,
> > >
> > > Well to begin, there is no such thing as a 'user-friendly' imaging
> > > system. To understand why this is so, you have to meet the people
> > > who write the software. They are thinking in terms of words
> > > such as 'morphology' while you are thinking 'immunohistochemical staining'.
> > > What you need is a person to do translation and provide the goodies
> > > you need.
> > >
> > > We build custom automatic inspection systems and use ZEISS
> > > microscopes. If we can be of assistance to you, then please call or
> > > e-mail.
> > >
> > > Sheldon L. Epstein, shel@k9ape.com
> > > Chief Engineer
> > > Epstein Associates
> > > Wilmette, IL
> > >
> > > http://www.k9ape.com
> >
> > Hello Matthew and Sheldon
> >
> > Actually Sheldon is wrong; although we truly think in terms of
> > morphology when processing images, there is some software that is
> > really user-friendly. Personally, I have used Image-Pro Plus in a lot
> > of biological analysis and it seems to have some very good features,
> > and it is easy to use. Now I am not saying that it is the best, but
> > for the features included and the price of about $2500 it seems to be
> > OK. It also has a programming environment for Basic and C; you can
> > also access it through dynamic linking to perform very specialized
> > analysis. But be a little careful; their customer service department
> > isn't that hot!
> >
> > If you have any questions you can email me back
> > Raja Elhayek
>
> Hello Raja,
>
> I think that something I said got lost in translation. There are
> several packages out there containing lots and lots of imaging tools.
> However, they rarely contain advice on how any of these tools
> might be useful in a practical application.
>
> The question that a potential user has to answer is whether they
> want to use their limited time to duplicate the learning experience
> of others or they want to concentrate on expanding their own area
> of science, engineering or technology. Now for someone like
> Matthew who is interested in 'immunohistochemical staining' for
> solving some problem - say in biology, it is probably a waste
> of his time to spend hours trying to explore the finer points
> of Canny transforms or the latest in wavelets. And even if
> he were to do it, he is still left with basic problems
> of engineering a practical system for a laboratory or a manufacturing
> plant.
>
> This is not to say that Matthew and others could not succeed; but,
> it has been our experience that our customers have to decide
> whether they want to concentrate on their business or whether
> they want to be in our business. There are a lot of video
> cameras and imaging boards out there gathering dust because
> some researcher in another area thought it would be a simple
> matter to build a system.
>
> So much for direct marketing to end users of some of our tools.
> That worked for solving simple problems; but, now that all of
> the simple problems have been solved, direct marketing has
> lost its allure as companies now focus their attention on
> integrators. We still sell components to those who want them;
> however as the components get more sophisticated, there are
> fewer customers who want to buy them. Instead, the demand for
> integration is growing and that is an intelligent investment
> decision. After all, how many of you out there write your
> own operating systems or WWW browsers? You can do it; but,
> it's unlikely that you'll make any money at it. And the same
> is true for image analysis.
>
> Shel Epstein, shel@k9ape.com
> Chief Engineer
> Epstein Associates
> http://www.k9ape.com
Hello Sheldon
I very much understand your point of view regarding an integrated
system that does specifically what you want; that is actually what I am
developing right now for a company. But I believe that Matthew, who is
probably reading our comments, was asking for a system for his
laboratory, and as I see it an integrated system that analyses
'immunohistochemical staining' may not have all the exploring
capabilities that Matthew wants. Now don't get me wrong, I hope you can
strike a deal on this, but research (I am a physicist, by the way), as
I have found over the last 5 years, is a matter of exploration, not
just one idea. If you specify a system to analyse staining, you limit
your exploration: first, you cannot test different imaging aspects and
how they apply to the topic at hand; second, if you want to add new
capabilities to an already developed system, you will have to go
through the same company again (this isn't very cost-effective unless
you are certain that your calculations are the end of the exploration
in your staining project!). I guess the whole issue here is what
Matthew really wants: does he want to explore the capabilities that can
apply to staining, or does he already know everything he wants in a
system?
I am currently working with SpectraMetrix Inc., and I tell you, every
time I get an idea to integrate into the final product, I use
commercial software to test it first. That reduces my development time,
and I am able to test the accuracy and potential of my idea before I
code it.
I hope Matthew is reading these comments, because this will definitely
help him in his decision: either get a package that is very general, so
that through research he can discover what he really wants in a system,
or, if he already knows what he wants and wants to limit his time doing
research, the integrated system is a very good idea.
Raja Elhayek
San Diego, CA.
Subject: Re: How can I view the edges of a slice
From: Peter Lindstrom
Date: Mon, 11 Nov 1996 14:52:07 -0500
Jonathan Cohen wrote:
>
> In article <01bbcf8e$dd9425c0$7b6bae80@viper.cso.uiuc.edu>,
> Peter J. Wahle wrote:
> >I need to view the edges of a slice of a 3D OpenGL object. For example, if
> >I was to slice the trunk of a tree, I would want to see the rings; or if I
> >was to slice a teapot, I would want to see the cut edges only. How can I
> >do this using OpenGL on an OpenGL object? The output only need be a binary
> >image.
>
> Set the hither and yon (sorry, I guess that's now "near" and "far")
> planes of your projection model to be very close together. That should
> give you something like a slice. (You might want to disable backface
> culling as well, but I'm not sure what sort of look you want).
>
> This won't exactly turn your slices into lines, but it should give you a
> pretty close approximation to what you want.
>
> Jon
A more general approach would be to use glClipPlane() to specify one or more
arbitrary clipping planes.
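For reference, glClipPlane(GL_CLIP_PLANE0, eqn) takes four coefficients (a, b, c, d) and keeps geometry in the half-space a*x + b*y + c*z + d >= 0. A small numeric sketch (plain Python, not GL calls) of how two opposing planes cut the thin slab that makes the slice visible:

```python
def kept_by_clip_plane(plane, point):
    """glClipPlane semantics: keep geometry in the half-space where
    a*x + b*y + c*z + d >= 0."""
    a, b, c, d = plane
    x, y, z = point
    return a * x + b * y + c * z + d >= 0.0

# Two planes facing each other, 0.1 units apart, cut a thin slab
# around z = 0 -- the "slice" through the object.
front = (0.0, 0.0, -1.0, 0.05)   # keeps z <= 0.05
back  = (0.0, 0.0,  1.0, 0.05)   # keeps z >= -0.05

def in_slice(point):
    return kept_by_clip_plane(front, point) and kept_by_clip_plane(back, point)

print(in_slice((1.0, 2.0, 0.0)))   # True: inside the slab
print(in_slice((1.0, 2.0, 0.2)))   # False: clipped away
```

In GL you would enable both planes and render normally; only geometry inside the slab survives clipping.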
--
_____________________________________________________________________________
PETER LINDSTROM Graphics, Visualization, & Usability Center
Ph.D. Student College of Computing
lindstro@cc.gatech.edu Georgia Institute of Technology
http://www.cc.gatech.edu/gvu/people/peter.lindstrom Atlanta, GA 30332-0280
Subject: Q: Adaptive Thresholding and Segmentation
From: "Gord Bowman"
Date: 11 Nov 1996 21:34:17 GMT
I'm trying to locate dark regions in an image using the adaptive
thresholding technique, which I have heard to be useful for such an
application. Being unable to find an actual description of this algorithm,
I assumed it to be:
Run a moving window over an image. If the value of the centre pixel is more
than a specified threshold different from the average of the pixels in the
window, set it equal to 1, otherwise 0.
The obvious problem I encountered with this was that if the dark region or
bright region fully encompasses the window, there is no way to distinguish
between them because the difference between the centre pixel value and the
mean is essentially zero.
Unless I am totally missing something, I don't see how this could possibly
be a good algorithm for spot detection.
To overcome this difficulty, I instead went through the image in blocks,
finding the mean value for each block. I then go through the image again,
interpolating between these mean values to determine the background value
for thresholding any pixel.
My questions are:
(1) Am I not understanding the adaptive thresholding algorithm?
(2) Is what I have done still considered "adaptive" thresholding?
(3) I have also heard of adaptive segmentation being used. Is that
basically creating a histogram for a moving window which is hopefully
bimodal and using the value at the valley to threshold the centre pixel? If
so, that would seem to have the same limitation as adaptive thresholding.
--
Gord Bowman (gbowman@atlsci.com) - Research/Development Engineer
Atlantis Scientific Systems Group Inc. (http://www.atlsci.com)
1827 Woodward Drive, Ottawa, ON, K2C 0P9 CANADA
phone: 613-727-1087 fax: 613-727-5853 toll free: 1-800-265-3894
Subject: Re: Fourier-coeffs
From: Christian Merkwirth
Date: Mon, 11 Nov 1996 22:09:22 +0100
Arpad BARSI wrote:
> I have an m-by-n image with a pixelsize d. What do the real and
> imaginary part of the Fourier coefficients mean? How do they contain the
> pixelsize?
I assume that by pixel size you mean the size of the square(?) area of
the viewed scene that corresponds to one pixel in the captured image.
Generally, the resolution in the frequency domain (the data after doing
an FFT) is just the inverse of the total length of the image in one
direction (x or y axis). E.g., if your pixel size is 1 mm and you have
m = 400 pixels in direction x and n = 300 in direction y, your
resolution is (400 * 1 mm)^-1 for x and (300 * 1 mm)^-1 for y.
Resolution in the frequency domain is thus determined by the extent of
the image in the spatial domain.
The real part of a Fourier coefficient determines the cosine
(symmetric) part, and the imaginary part the sine, that is needed to
form the wave at that frequency.
The 2-dimensional FFT gives you two numbers (a real and an imaginary
one) for each pixel, whereas you needed only one number to describe the
pixels before the transformation. Therefore some of the information is
redundant; you approximately only need half of the FFT image, but it's
a little tricky to know what you need and what not, so better take
everything.
It doesn't matter which units you use; it's always the inverse.
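Both points - bin spacing being the inverse of the total length, and the redundancy for real-valued images - are easy to check with numpy's FFT helpers. A sketch using the 400-pixel, 1 mm example (working in mm so the numbers are exact):

```python
import numpy as np

m, pixel = 400, 1.0                 # 400 pixels of 1 mm, as in the example
freqs = np.fft.fftfreq(m, d=pixel)  # bin centres in cycles per mm

# Bin spacing is the inverse of the total image length (400 mm):
spacing = freqs[1] - freqs[0]
print(spacing)                      # 0.0025 cycles/mm = (400 * 1 mm)^-1

# Redundancy for real input: only about half the coefficients are
# independent, which is why np.fft.rfft returns m//2 + 1 of them.
row = np.random.rand(m)
print(np.fft.rfft(row).size)        # 201
```

Changing the unit only rescales the frequency axis, matching the "it's always the inverse" remark.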
--
Christian Merkwirth
Drittes Physikalisches Institut
Goettingen
[tel] ++ 49 551 39 21 65 [fax] ++ 49 551 39 77 20
[email] cmerk@physik3.gwdg.de
Subject: Re: Q: Adaptive Thresholding and Segmentation
From: cvermill@servtech.com
Date: Mon, 11 Nov 1996 22:26:27 GMT
"Gord Bowman" wrote:
>I'm trying to locate dark regions in an image using the adaptive
>thresholding technique, which I have heard to be useful for such an
>application. Being unable to find an actual description of this algorithm,
>I assumed it to be:
>Run a moving window over an image. If the value of the centre pixel is more
>than a specified threshold different from the average of the pixels in the
>window, set it equal to 1, otherwise 0.
My understanding of thresholding is that any value equal to or above
the threshold is set to one and any value below the threshold gets
zero. Adaptive thresholding is just allowing the threshold to change
based upon a region surrounding the pixel, instead of using a fixed
threshold for the whole image. So the threshold would be the mean
value, not a difference from a mean.
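That local-mean rule is straightforward to sketch. Below is an illustrative implementation (my own, not from any particular package) that uses an integral image so the cost is independent of the window radius; the >= comparison puts pixels equal to the local mean at 1, as described above.

```python
import numpy as np

def adaptive_threshold(img, radius=7, offset=0):
    """Set each pixel to 1 if it is >= the mean of the surrounding
    (2*radius+1)^2 window (plus an optional offset), else 0.

    Edge-replicating padding keeps the window well defined at the
    image borders; the window sums come from an integral image."""
    padded = np.pad(img.astype(float), radius + 1, mode="edge")
    ii = padded.cumsum(axis=0).cumsum(axis=1)
    k = 2 * radius + 1
    h, w = img.shape
    # Window sums via four corner lookups into the integral image.
    sums = (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w])
    local_mean = sums / (k * k)
    return (img >= local_mean + offset).astype(np.uint8)

img = np.full((11, 11), 100.0)
img[5, 5] = 0.0                      # one dark spot in a bright field
out = adaptive_threshold(img, radius=2)
print(out[5, 5], out.sum())          # 0 120: only the spot falls below
```

Note that this directly exhibits the limitation discussed in the question: a dark region larger than the window would contain its own local mean and stop being detected, which is what motivates the block-and-interpolate background estimate instead.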
>The obvious problem I encountered with this was that if the dark region or
>bright region fully encompasses the window, there is no way to distinguish
>between them because the difference between the centre pixel value and the
>mean is essentially zero.
>Unless I am totally missing something, I don't see how this could possibly
>be a good algorithm for spot detection.
In the case of a region where the center pixel is equal to the mean
value, the pixel would be set to one, assuming your thresholding
condition was "greater than or equal to".
I hope this is helpful,
C
Subject: Re: Interpolate interlaced video frames
From: Christian Merkwirth
Date: Mon, 11 Nov 1996 23:18:26 +0100
Peter Rander wrote:
>
> In article <32873BBA.15FB@physik3.gwdg.de>, Christian Merkwirth writes:
> |> Video data from a CCD-camera is usually interlaced, that means
> |> the first frame coming from the camera contains e.g the even numbered
> |> lines of the image, the next frame carries the information from
> |> the odd numbered lines of the image, both frames have exactly
> |> the same size.
>
> Just to clarify, you are using "frame" when it looks like you mean
> "field." That is, one video frame contains two fields vertically
> offset from one another.
>
> |> If you record a scene without moving objects, the frames coming from
> |> the camera show almost the same picture, there's just a small
> |> displacement in vertical direction of a half line between two frames.
> |>
> |> My problem is to eliminate this small difference in the position by
> |> interpolating for example all frames with the odd lines.
>
> Wait, do you mean that you *know* nothing is moving? Then why do any
> interpolation? You can simply re-interlace the two fields to create
> one higher-resolution frame.
>
> |> My first idea is to calculate the mean between two consecutive lines in
> |> every 'odd' frame, but I guess there's a more exact interpolation
> |> using more lines in one frame.
>
> Your suggested approach is linear interpolation. You can use the same
> approach but extended to higher-order polynomials. For example, you
> could use cubic interpolation to the two rows above and the two rows
> below each line you are interpolating. If the row you are computing
> is row R, and the current pixel in the row is at column C, then you
> fit a cubic polynomial to the pixels (R-1.5,C), (R-0.5,C), (R+0.5,C),
> and (R+1.5,C). You then find the value at R = 0. You can also use
> some sort of (B-)spline interpolation scheme.
>
> |> Does anyone know which formula is the best ?
>
> Well, in most cases, the linear interpolation that you are already using
> will work pretty well, especially if speed is a factor in the performance
> of your interpolation. The human eye is unlikely to see much of the
> improvement of using higher-order interpolation on these images.
>
> |> What do I with the first
> |> and the last line in a frame, these lines have only one neighbour ?
>
> A classic problem. It depends. Do you even care about these lines?
> They *are* on the extreme edge of the image, so they may not even be
> very noticable. In general, you can not do much -- just copy the last
> row of your data, for example.
>
Thank you for your quick response.
Indeed, I meant fields when I spoke of frames.
Saying the cameras view a scene without moving objects was just
to avoid problems when speaking of 'different positions' of the lines
in even and odd fields; actually the scene is changing rapidly.
I already tried linear interpolation, and it works really well,
so I don't think I need higher-order interpolation, which is in fact
computationally expensive. The first and last lines are just copied as
you said; often the region of interest is smaller than the whole image
area.
The only reason I need to interpolate the fields is that I use a very
position-sensitive algorithm to analyse the pictures, and I get
artefacts of the vertical 'flickering' in the results.
Regards,
Christian Merkwirth
Drittes Physikalisches Institut
Goettingen
[tel] ++ 49 551 39 21 65 [fax] ++ 49 551 39 77 20
[email] cmerk@physik3.gwdg.de
Subject: Re: what's is "P5" or "P6" ??
From: mleese@hudson.CS.unb.ca (Martin Leese - OMG)
Date: 11 Nov 1996 20:18:05 GMT
On Sat, 09 Nov 1996 17:24:39 +0800 gaubear (gaubear@dinosaur.soft.iecs.fcu.edu.tw) wrote:
>> I want to know that what's PGM and PPM raw format files (magic
>> number "P5" or "P6" )
>> Can you help me ??
Typing `man pgm ppm' produced this:
NAME
pgm - portable graymap file format
DESCRIPTION
...
There is also a variant on the format, available by setting
the RAWBITS option at compile time. This variant is dif-
ferent in the following ways:
- The "magic number" is "P5" instead of "P2".
- The gray values are stored as plain bytes, instead of
ASCII decimal.
- No whitespace is allowed in the grays section, and only a
single character of whitespace (typically a newline) is
allowed after the maxval.
- The files are smaller and many times faster to read and
write.
Note that this raw format can only be used for maxvals less
than or equal to 255. If you use the pgm library and try to
write a file with a larger maxval, it will automatically
fall back on the slower but more general plain format.
and this:
NAME
ppm - portable pixmap file format
DESCRIPTION
...
There is also a variant on the format, available by setting
the RAWBITS option at compile time. This variant is
different in the following ways:
- The "magic number" is "P6" instead of "P3".
- The pixel values are stored as plain bytes, instead of
ASCII decimal.
- Whitespace is not allowed in the pixels area, and only a
single character of whitespace (typically a newline) is
allowed after the maxval.
- The files are smaller and many times faster to read and
write.
Note that this raw format can only be used for maxvals less
than or equal to 255. If you use the ppm library and try to
write a file with a larger maxval, it will automatically
fall back on the slower but more general plain format.
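The raw header layout quoted above is simple enough to parse by hand. A minimal illustrative reader for the 'P5' variant (it does not handle the '#' comment lines that the full format also allows):

```python
import io

def read_raw_pgm(stream):
    """Minimal reader for the raw ('P5') PGM variant described above.

    Header: magic, width, height, maxval as ASCII tokens, then exactly
    one whitespace byte, then width*height gray values as plain bytes
    (the raw format requires maxval <= 255)."""
    def token():
        t = b""
        c = stream.read(1)
        while c.isspace():            # skip leading whitespace
            c = stream.read(1)
        while c and not c.isspace():  # a single whitespace ends the token
            t += c
            c = stream.read(1)
        return t

    assert token() == b"P5"
    width, height, maxval = int(token()), int(token()), int(token())
    assert maxval <= 255
    pixels = stream.read(width * height)
    return width, height, maxval, pixels

# A hand-built 3x2 graymap:
data = b"P5\n3 2\n255\n" + bytes([0, 128, 255, 10, 20, 30])
w, h, mv, px = read_raw_pgm(io.BytesIO(data))
print(w, h, mv)        # 3 2 255
print(px[1])           # 128
```

A 'P6' PPM reader is the same except the magic is b"P6" and the data section holds width*height*3 bytes (RGB triples).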
Regards,
Martin
E-mail: mleese@omg.unb.ca
WWW: http://www.omg.unb.ca/~mleese/
______________________________________________________________________
Want to know how Ambisonics can improve the sound of your LPs and CDs?
Read the Ambisonic Surround Sound FAQ. Version 2.7 now on my WWW page.
Subject: Re: Number of gray levels?
From: Robert Smith
Date: Mon, 11 Nov 1996 23:16:12 GMT
Perry West wrote:
> It is still possible under some conditions to observe the Mach band
> effect with 256 gray levels. Of course, a lot depends on the brightness
> and contrast settings of your monitor.
>
> Regards,
> Perry West
Perry:
Can you actually show a Mach band with a 1/256 luminance change?
There's a mountain of work on contrast detection, and all of it seems
to say that the best humans can do, under ideal conditions, is about
0.3%. That is close to 1 part in 256.
--
. Robert A. Smith, Ph.D.
_____ . Vision Systems' Analyst
| |<. Current Technology, Inc.
|_____| . (603) 868-2270
^ . mailto:ras@curtech.com
/ \
/ \
Subject: Re: Please help with ridges
From: Robert Smith
Date: Mon, 11 Nov 1996 23:28:18 GMT
Elizabeth Anna Stevenson wrote:
>
> I am taking a computer vision class and need to know
> how to find a point on the ridge of a brightness function f(x,y)
> (2D image). I have been given the direction of the
> gradient of the function as
>
> phi = arccos(sqrt(x^2 + 4y^2))
Beth:
The basic idea here is that the ridge-line will always be parallel to
the gradient. The only question is whether you're on the ridge line.
The ridge line is a local maximum of curvature in the direction
perpendicular to the gradient (at least that's one definition).
So start at a point, calculate the gradient, then calculate the
directional curvature (2nd derivative is good enough). Now move along a
contour line (i.e. perpendicular to the gradient) and look for a local
maximum of the curvature. I'd do this numerically, but I daresay a
good mathematician could do it in closed form, too.
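Bob's recipe works point by point; on a sampled image a common grid-based variant (my own illustration, not from the post) scores every pixel at once using the Hessian's most negative eigenvalue, whose eigenvector points across the ridge:

```python
import numpy as np

def hessian_ridge_strength(f):
    """One common formalisation of a ridge: the negated smallest
    eigenvalue of the Hessian of f.  Large positive values mark pixels
    where the surface curves down sharply across a crest."""
    fy, fx = np.gradient(f)
    fyy, fyx = np.gradient(fy)
    fxy, fxx = np.gradient(fx)
    # closed-form eigenvalues of the 2x2 matrix [[fxx, fxy], [fxy, fyy]]
    tr = fxx + fyy
    det = fxx * fyy - fxy * fxy
    disc = np.sqrt(np.maximum(tr * tr / 4.0 - det, 0.0))
    lam_min = tr / 2.0 - disc
    return -lam_min

# a synthetic ridge running along row 8 of a 17x17 grid
Y, X = np.mgrid[0:17, 0:17].astype(float)
f = np.exp(-((Y - 8.0) ** 2) / 8.0)
s = hessian_ridge_strength(f)
```

For this test surface the strength peaks on row 8 in every column, i.e. exactly on the crest.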
good luck
Bob
Subject: Re: Downsampling without aliasing
From: Robert Smith
Date: Mon, 11 Nov 1996 23:39:44 GMT
tsui@mhd1.pfc.mit.edu wrote:
>
> Dear friends:
>
> I have a question and hope that you can give me some hints:
>
> How to downsample an image without aliasing effect?
>
> Thank you very much for your attention. I am waiting for your response.
>
> Sincerely,
>
> Chiwa Tsui
The basic idea is not to throw away any spatial information. For
example, if you were going to shrink the image 2x, you would create
"new" pixels by averaging a 2x2 array of the original pixels. (The
alternative, discarding 3 out of 4 pixels, is the best way to GET
aliasing!)
This is great if you're changing size by an integer, but it's harder
when you're not. In that case you will have to create virtual pixels
with weighted averages. e.g. Suppose you want a pixel which is 2/3 of
the way from one existing pixel to the next; you create it by adding
1/3 of the first pixel and 2/3 of the second.
Now we come to 2D. There really is no obviously best way of
interpolating a pixel which falls between 4 existing pixels in 2D.
Fortunately it's not critical for most purposes so do a 1D interpolation
along all the rows, and then do it again along the resulting columns.
(Fancier than this requires curvilinear interpolation, and is rarely
necessary to avoid aliasing.)
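Both steps Bob describes can be sketched with numpy (my own illustration, assuming a grayscale image stored as a 2-D array):

```python
import numpy as np

def shrink_2x(img):
    """Shrink by exactly 2x by averaging each 2x2 block of original
    pixels -- rather than discarding 3 out of every 4."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2   # even crop
    img = img[:h, :w].astype(float)
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def resample_1d(row, new_len):
    """Linear interpolation along one axis: a virtual pixel 2/3 of the
    way between two neighbours gets weights 1/3 and 2/3."""
    old = np.arange(len(row))
    new = np.linspace(0, len(row) - 1, new_len)
    return np.interp(new, old, row)

def resize_separable(img, new_h, new_w):
    """1-D interpolation along all the rows, then again along the
    resulting columns, as suggested in the post."""
    tmp = np.array([resample_1d(r, new_w) for r in img])
    return np.array([resample_1d(c, new_h) for c in tmp.T]).T
```

Note that for large non-integer shrink factors the pure interpolation step still aliases; in that case shrink by the integer part with block averaging first, then interpolate the remainder.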
Good luck/
Bob
Subject: Re: Interpolate interlaced video frames
From: Robert Smith
Date: Mon, 11 Nov 1996 23:44:13 GMT
Christian Merkwirth wrote:
>
> Video data from a CCD-camera is usually interlaced, that means
> the first frame coming from the camera contains e.g the even numbered
> lines of the image, the next frame carries the information from
> the odd numbered lines of the image, both frames have exactly
> the same size.
>
> If you record a scene without moving objects, the frames coming from
> the camera show almost the same picture, there's just a small
> displacement in vertical direction of a half line between two frames.
>
> My problem is to eliminate this small difference in the position by
> interpolating for example all frames with the odd lines.
>
> My first idea is to calculate the mean between two consecutive lines in
> every 'odd' frame, but I guess there's a more exact interpolation
> using more lines in one frame.
>
> Does anyone know which formula is the best? What do I do with the first
> and the last line in a frame? These lines have only one neighbour.
Christian:
There are cameras (and not terribly expensive ones) which will trigger
their shutter at the beginning of a frame, rather than at the beginning
of each field. Though you have to live with a slightly shorter exposure
time, this may be a good idea with moving objects. Sorry, I don't have
model #s off the top of my head, but a knowledgeable camera salesman
could help.
Good luck
Bob
Subject: Re: Number of gray levels?
From: perry@netcom.com (Perry West)
Date: Tue, 12 Nov 1996 00:10:26 GMT
Robert Smith (ras@curtech.com) wrote:
: Can you actually show a mach band with a 1/256 luminance change?
: There's a mountain of work on contrast detection, and all of it seems to
: say that the best humans can do, under ideal conditions, is about 0.3%.
: This is almost exactly 1 part in 256.
Robert --
I haven't tried myself. But I have seen images with 256 gray-levels that
have exhibited contouring, which, I believe, is attributable at least in
part to the mach band effect.
Mapping from a linear system (the 256 gray level image) to a somewhat
logarithmic system (the human eye), I would expect that low gray levels
would be more prone to exhibiting the mach band effect. This is just my
expectation, and not necessarily fact.
Regards,
Perry West
Subject: ERS-1/2 data and Interferogram of subglacial volcanic eruption in Iceland and threatening glacial flood.
From: Christoph Boehm Doktorand FE
Date: Sun, 10 Nov 1996 11:55:32 +0100
VOLCANIC ERUPTION ON ICELAND: ERS-SAR-IMAGES
November 8, 1996
New ERS-1/2 SAR images and interferogram of the recent subglacial
eruption of Loki ridge, a subglacial volcano on Iceland, are now
available on the DFD homepage:
http://www.dfd.dlr.de/HOT-TOPICS/volcano/
These images show the newest eruption on Vatnajökull, Europe's
largest glacier, which started on October 1, and the subglacial
reservoir which is now filled with some km3 of meltwater. This huge
amount of water will cause a catastrophic flood.
This page is available in both English and German, and
will be updated as soon as new SAR data are processed and analyzed.
-------------------------------------------------------------------
This is a service of the German Remote Sensing Data Center (DFD).
For further information, please contact:
boehm@dfd.dlr.de
mueschen@dfd.dlr.de
roth@dfd.dlr.de
------------------------------------------------------------------
Christoph Boehm
DFD (German Remote Sensing Data Center)
Subject: Re: Number of gray levels?
From: tgl@netcom.com (Tom Lane)
Date: Tue, 12 Nov 1996 05:15:16 GMT
perry@netcom.com (Perry West) writes:
> Robert Smith (ras@curtech.com) wrote:
> : Can you actually show a mach band with a 1/256 luminance change?
> : There's a mountain of work on contrast detection, and all of it seems to
> : say that the best humans can do, under ideal conditions, is about 0.3%.
> : This is almost exactly 1 part in 256.
> I haven't tried myself. But I have seen images with 256 gray-levels that
> have exhibited contouring, which, I believe, is attributable at least in
> part to the mach band effect.
The best advice I've heard is that 256 levels are sufficient if the image
is gamma-encoded with a gamma around 0.5. If the image is encoded with
a linear sample-value-to-light-intensity mapping, then you *can* see
banding at the darker end of the range (and conversely, the sample
values are uselessly close together at the upper end). So you're both
right, depending on whether gamma correction is properly employed.
A good reference for this stuff is Poynton's gamma FAQ, at
http://www.inforamp.net/~poynton/Poynton-colour.html
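Tom's point can be illustrated numerically. This is my own sketch, using CIE L* as a rough model of perceived lightness: with linear encoding, the darkest code steps jump several L* units (well above the visibility threshold of roughly one unit), while a gamma-2.2 decoding curve keeps every step below it.

```python
import numpy as np

def cie_lightness(Y):
    """CIE L* as a function of relative luminance Y in [0, 1]."""
    Y = np.asarray(Y, dtype=float)
    return np.where(Y > 0.008856, 116.0 * np.cbrt(Y) - 16.0, 903.3 * Y)

codes = np.arange(256)

# Linear encoding: the code value is proportional to luminance.
L_linear = cie_lightness(codes / 255.0)

# Gamma encoding: code v is displayed as luminance (v/255)**2.2,
# i.e. the image was encoded with a gamma of about 1/2.2 ~ 0.45.
L_gamma = cie_lightness((codes / 255.0) ** 2.2)

# Worst perceived jump between adjacent codes (delta-L* of about 1
# is roughly the visibility threshold): large for linear encoding
# at the dark end, comfortably sub-threshold for gamma encoding.
worst_linear = np.diff(L_linear).max()   # several L* units
worst_gamma = np.diff(L_gamma).max()     # well under 1 L* unit
```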
regards, tom lane
organizer, Independent JPEG Group
Subject: Question About Two-Channel FIR PR QMF Banks
From: mdadams@surya.uwaterloo.ca (Mike Adams)
Date: Tue, 12 Nov 1996 06:11:39 GMT
Hi, Folks!
I have a question regarding two-channel PR FIR QMF banks that I'm hoping
you might be able to help me with. In essence, my question is this:
Given a two-channel maximally decimated perfect reconstruction
filter bank with causal FIR filters, what can be said about the
relationship between the order of the filters in the system?
Moreover, how do the filter orders relate to the overall delay of
the analysis/synthesis system?
This question arose while I was looking at the code which calculates
the forward and inverse wavelet transforms in the Uvi_Wave toolkit
(from the University of Vigo). This code uses only the order of the two
synthesis filters to determine the delay of the analysis/synthesis
system. (Of course, I'm assuming that I haven't misinterpreted the code. :-)
In order to be more precise, I'll introduce some mathematical
notation. Let H0(z) and H1(z) denote the analysis filters. Let F0(z)
and F1(z) denote the synthesis filters. Denote the order of the
filters H0(z), H1(z), F0(z), F1(z) as NH0, NH1, NF0, NF1 respectively.
Let K represent the total delay introduced by the analysis/synthesis system.
Now getting back to the code I mentioned above, it calculates the
system delay K as follows:
if abs(NF0 - NF1) is odd
K = NF0 + NF1 /* delay is sum of synthesis filter orders */
else
K = (NF0 + NF1) / 2 /* delay is half of sum */
end
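As a concrete check on the "even" branch (my own sketch, not part of Uvi_Wave): the Haar bank has NF0 = NF1 = 1, so abs(NF0 - NF1) = 0 is even and the formula predicts K = (NF0 + NF1) / 2 = 1. Running a signal through the full analysis/synthesis chain confirms a one-sample delay:

```python
import numpy as np

# Haar two-channel bank: all four filters have order 1.
h0 = np.array([1.0, 1.0]) / np.sqrt(2.0)   # analysis low-pass
h1 = np.array([1.0, -1.0]) / np.sqrt(2.0)  # analysis high-pass
f0 = np.array([1.0, 1.0]) / np.sqrt(2.0)   # synthesis low-pass
f1 = np.array([-1.0, 1.0]) / np.sqrt(2.0)  # synthesis high-pass

def analysis_synthesis(x, h0, h1, f0, f1):
    """Filter, decimate by 2, upsample by 2, filter, and sum."""
    v0 = np.convolve(x, h0)[::2]           # keep even-indexed samples
    v1 = np.convolve(x, h1)[::2]
    u0 = np.zeros(2 * len(v0))
    u0[::2] = v0                           # upsample by 2
    u1 = np.zeros(2 * len(v1))
    u1[::2] = v1
    return np.convolve(u0, f0) + np.convolve(u1, f1)

x = np.random.randn(64)
y = analysis_synthesis(x, h0, h1, f0, f1)
# perfect reconstruction with delay K = 1: y(n) = x(n - 1)
assert np.allclose(y[1:1 + len(x)], x)
```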
After seeing this, of course, I wondered why such a calculation works.
I think I've managed to convince myself that this calculation is
correct for the "abs(NF0 - NF1) is odd" case. I've included my reasoning
below. If there are any flaws in my proof, I'd be grateful if you
could point them out to me. (Hopefully, I'm not too far out-to-lunch.)
Unfortunately, I'm having some difficulty in proving the correctness of the
calculation in the "abs(NF0 - NF1) is even" case. How would one go about
doing this? The approach I used for the "odd" case (described below)
doesn't seem applicable to the "even" case.
Proof (for the "odd" case)
--------------------------
Denote the input to the system as x(n) and the output as y(n).
Let us also define the following quantities:
F(z) = [ F0(z) ]
[ F1(z) ]
H(z) = [ H0(z) H1(z) ]
[ H0(-z) H1(-z) ]
Assume that the analysis filters are given by:
H0(z) = a_{0} + a_{1} * z^{-1} + ... + a_{NH0} * z^{-NH0} Equation (1)
H1(z) = b_{0} + b_{1} * z^{-1} + ... + b_{NH1} * z^{-NH1} Equation (2)
where a_{NH0} and b_{NH1} are nonzero.
In order to have a perfect reconstruction system, we require
H(z) * F(z) = A(z) Equation (2.1)
where
A(z) = [ C * z^{-K} ]
[ 0 ]
The reconstructed signal y(n) is, therefore, the original input
delayed by K (and also scaled).
By inverting Equation (2.1), we obtain:
F(z) = H(z)^{-1} A(z)
= (1 / d(z)) * [ C * z^{-K} H1(-z) ] Equation (2.2)
[ -C * z^{-K} H0(-z) ]
where
d(z) = H0(z) * H1(-z) - H1(z) * H0(-z)
In order to achieve perfect reconstruction, we require that d(z) have the
form
d(z) = C1 * z^{-K1} Equation (2.5)
where C1 is a nonzero constant and K1 is an integer.
Therefore, from Equations (2.2) and (2.5), we have that the synthesis filters
can be determined from the analysis filters as follows:
F0(z) = C0 * z^{-K0} * H1(-z) Equation (3)
F1(z) = -C0 * z^{-K0} * H0(-z)
where
C0 = C / C1
K0 = K - K1
From Equations (1),(2), and (3), we can see
NF0 = NH1 + K0 Equation (4)
NF1 = NH0 + K0 Equation (5)
This implies
NF0 - NH1 = NF1 - NH0 = K0 Equation (6)
and
NF0 + NH0 = NF1 + NH1 Equation (7)
Also, because the synthesis filters are causal, we know that
K0 >= 0
If we further make the assumption that only the minimal amount of delay
is added to make both synthesis filters causal, then we have
K0 = 0 Equation (8)
Using Equations (6) and (8), we now have
NF0 = NH1 Equation (8.5)
NF1 = NH0
Assume that NH0 is even and NH1 is odd. Substituting Equations (1) and (2)
into the expression for d(z), we get
d(z) = ... - 2 * a_{NH0} * b_{NH1} * z^{-(NH0+NH1)}
Because a_{NH0} and b_{NH1} are nonzero, the coefficient of z^{-(NH0+NH1)}
is always nonzero. Since we chose the filters to have perfect reconstruction,
only one term in the expansion for d(z) can have a nonzero coefficient.
Therefore, d(z) must have precisely the form
d(z) = C1 * z^{-(NH0 + NH1)}
By comparison with Equation (2.5), we observe
K1 = NH0 + NH1 Equation (9)
From Equations (3), (8), and (9), we have
K = K0 + K1 = 0 + K1 = NH0 + NH1
Therefore, we can conclude that if NH0 is even and NH1 is odd, then
the total delay introduced by the analysis/synthesis bank is NH0 + NH1.
In a similar fashion, the same result can be shown to hold for
NH0 odd and NH1 even.
--- END OF PROOF ---
Thank you kindly for your time.
Cheers,
Mike Adams
mdadams@surya.uwaterloo.ca
Subject: Re: Interpolate interlaced video frames
From: Ingo Elsen
Date: Tue, 12 Nov 1996 11:24:42 +0100
Dear Christian,
Christian Merkwirth wrote:
> Thank you for your quick response.
>
> Indeed, I meant fields when I spoke of frames.
> Saying the cameras view a scene without moving objects was just
> to avoid problems when speaking of 'different positions' of the lines
> in even and odd fields, actually the scene is changing rapidly.
>
> I already tried to use linear interpolation, and it works really well,
> so I don't think I need to use higher order interpolation, which in fact
> is computational expensive. The first and last line is just copied as
> you
> said, often the region of interest is smaller than the whole image area.
>
> The only reason I need to interpolate the fields is that I use a very
> position
> sensitive algorithm to analyse the pictures, and I get artefacts of the
> vertical
> 'flickering' in the results.
If the scene changes rapidly, it may be possible to regard the two fields
as two separate frames. The aspect ratio changes, so the algorithms have
to be adapted. You will have half the frame time then, but also half the
amount of data to be processed.
If you only need one of the original fields and you are not in need of
the full horizontal resolution, you may use a subsampling scheme of the
video signal (with half the pixel clock), resulting in a single field
with the same aspect ratio as the frame. This may cause sampling
artifacts, but I'm not sure.
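Christian's simple scheme, the one he already found to work well, can be written in a couple of lines (my own sketch, assuming a field stored as a 2-D array of lines):

```python
import numpy as np

def shift_field_half_line(field):
    """Shift a field vertically by half a line by averaging each pair
    of adjacent lines (linear interpolation).  The boundary line has
    only one neighbour and is simply copied, as discussed above."""
    out = np.empty_like(field, dtype=float)
    out[:-1] = (field[:-1] + field[1:]) / 2.0
    out[-1] = field[-1]
    return out
```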
Hope this helps
Ingo
________________________________________________________________________
Ingo Elsen Tel: +49(241)80-3633
Lehrstuhl fuer Technische Informatik Fax: +49(241)8888308
RWTH Aachen
Ahornstr. 55
52074 Aachen
e-mail: ogni@techinfo.rwth-aachen.de
WWW: http://www.techinfo.rwth-aachen.de
________________________________________________________________________
Subject: UCLA short course on "Surveill., Track., Low Observ. & ECM/Radar"
From: BGOODIN@UNEX.UCLA.EDU (William R. Goodin)
Date: Mon, 11 Nov 1996 16:37:32
On January 6-10, 1997, UCLA Extension will present the short course,
"Surveillance, Tracking, Low Observables, and ECM/Radar Management:
Algorithm Design and Real Data Applications", on the UCLA campus in
Los Angeles.
The instructors are Prof. Yaakov Bar-Shalom, University of Connecticut;
Mr. James Arnold, SRI International; Prof. K.C. Chang, George Mason
University; and Dr. Paul Frank Singer, Hughes Aircraft.
Each participant receives the 1995 edition of the text, "Multitarget-Multisensor
Tracking: Principles and Techniques"; demo diskettes of the interactive
software packages PASSDAT*, MULTIDAT*, IMDAT*, FUSEDAT*, and
BEARDAT* for IBM PC compatibles; and lecture notes.
This course presents state-of-the-art information in surveillance with passive
and active sensors, target tracking, and data fusion with emphasis on low
observable (LO) targets. A number of real data applications (defense as
well as commercial) are provided. Specific topics include:
Review of the Basic Techniques for Tracking
Review of Techniques for Tracking Targets with Multiple Behavior Modes
Applications of the IMM Estimator
Review of Techniques for Tracking in Clutter
The IMMPDAF
Applications of the IMMPDAF
Automatic Track Formation and Maintenance with Target Amplitude
Information
The NSWC Tracking Benchmark Problem II
Low-Observable TMA
Data Association from a Mix of Passive and/or Active Sensors in a 3D
Space
Performance Metrics for the Detection Process
The Matched Filter
Matched Filter Implementation
Detector
Maneuvering Target Tracking with a Passive Sensor
Trajectory Estimation of a TBM
Precision Tracking of Extended Targets with Imaging Sensors
Precision Tracking with Segmentation for Imaging Sensors
Detection and Tracking of Very Dim Targets
Performance Analysis
Distributed Estimation and Tracking
Multisource Correlation and Fusion
Problems on practical applications may be submitted by participants in
writing to the Coordinator through UCLA Extension before the course to be
discussed during the course. Recommendations for the appropriate
algorithms to use for their implementation will be given.
UCLA Extension has presented this highly successful short course since
1985.
The course fee is $1595, which includes the textbook, software and
extensive course notes. These notes are for participants only and are not
for sale.
For additional information and a complete course description, please
contact Marcus Hennessy at:
(310) 825-1047
(310) 206-2815 fax
mhenness@unex.ucla.edu
http://www.unex.ucla.edu/shortcourses
This course may also be presented on-site at company locations.