Newsgroup sci.image.processing 22576

Directory

Subject: Registration of 3D-Shapes -- From: "Loewenthal M."
Subject: Solar Azimuth and Elevation -- From: fabarca@prokofiev.ugr.es (Francisco Abarca)
Subject: Re: Adaptive Thresholding and Segmentation -- From: "Kevin Landman"
Subject: Re: CCD calibration -- From: "Niall Dorr"
Subject: *** FRAME GRABBER SUGGESTIONS? *** -- From: androut@ecf.toronto.edu (Dimitrios Androutsos)
Subject: snake control problem -- From: jiang@cs.purdue.edu (Haitao Jiang)
Subject: searching description of DXF file format... -- From: stub@verleihnix.rz.tu-clausthal.de (Ulf Bartelt)
Subject: Re: image analysis system -- From: "Sheldon L. Epstein"
Subject: Re: Interpolate interlaced video frames -- From: "Sheldon L. Epstein"
Subject: Help: 3D Graphic programming -- From: drago1@sbox.tu-graz.ac.at (THOMAS DRAGOSITS)
Subject: Re: what's is "P5" or "P6" ?? -- From: Bill Gardner
Subject: Re: Bitmap to Vector file conversion software -- From: rubinsnk@is2.nyu.edu (Kalman Rubinson)
Subject: Re: CCD calibration -- From: "JG.Campbell"
Subject: Re: Black&White to Colour Images -- From: Christian Soeller
Subject: Re: Downsampling without aliasing -- From: tardif@gel.ulaval.ca (Pierre-Martin Tardif)
Subject: FS: #9 Imagine 128 Series I w/4 MB VRAM $350 -- From: intevans@aol.com
Subject: Re: ITI .img format -- From: "Oscar del Rio"
Subject: Ease of Use Comparison wanted, DataCube vs. ITI -- From: Paul Stomski
Subject: Re: [SUMMARY] Hough Transform -- From: Michael Aramini
Subject: Re: [SUMMARY] Hough Transform -- From: Michael Aramini
Subject: Re: what's is "P5" or "P6" ?? -- From: Michael Aramini
Subject: Article on pattern recognition in VSD -- From: a@a.com (a)
Subject: Re: image analysis system -- From: "Rob Warren"
Subject: Television R,G,B pixel standard -- From: robgc@infomatch.com (Rob Chambers)
Subject: Re: Television R,G,B pixel standard -- From: alanr@rd.bbc.co.uk (Alan Roberts)
Subject: Re: Ultrasound research? -- From: crowej@colt45.eecs.umich.edu (John R. Crowe)

Articles

Subject: Registration of 3D-Shapes
From: "Loewenthal M."
Date: Tue, 12 Nov 1996 13:15:40 +0100
I'm searching for literature where I can learn the mathematics and
programming of registration (matching) algorithms for 2D and
3D shapes.
Thanks, Marc
Return to Top
Subject: Solar Azimuth and Elevation
From: fabarca@prokofiev.ugr.es (Francisco Abarca)
Date: 12 Nov 1996 08:51:18 GMT
Hi everyone,
	I need some parameters to process a topographic correction. These
parameters can be read from the header of the image, but I have a copy
of it and it has no header. So I am asking if someone knows of a program
to calculate solar azimuth and solar elevation from the latitude, altitude,
and the date and hour when the image was taken.
	Many thanks in advance.
-----------------------------------------------------------------------
Francisco Abarca
Faculty of Science
University of Granada
fabarca@carpanta.ugr.es
http://carpanta.ugr.es/~fabarca
Return to Top
Subject: Re: Adaptive Thresholding and Segmentation
From: "Kevin Landman"
Date: 12 Nov 1996 15:22:06 GMT
The approach you are taking is reasonable.  Other variations will change
the way the threshold is computed from the local image histogram or what
kind of averaging filter is used.  You must keep in mind that "adaptive"
means that the algorithm is adapting to "local" characteristics of the
image.  The problem is in defining what "local" is.  The local image is
determined by the size of your filtering window.  You will want the window
to be at least twice the size of the largest spot you want to detect, so
that you can measure the local image background intensity and adapt to it. 
If you don't know the size of the spots, then you can perform the algorithm
at several window sizes.
One pitfall of adapting the threshold to the local average is that the
local average can be skewed by clusters of spots or by other image edges
close to the spots.  In these cases a more sophisticated analysis of the
local histogram must be used to calculate the threshold.
Gord Bowman wrote in article
<01bbd018$5410cbc0$053ae9cd@gord.atlsci.com>...
> I'm trying to locate dark regions in an image using the adaptive
> thresholding technique, which I have heard to be useful for such an
> application. Being unable to find an actual description of this
> algorithm, I assumed it to be:
> 
> Run a moving window over an image. If the value of the centre pixel is
> more than a specified threshold different from the average of the pixels
> in the window, set it equal to 1, otherwise 0.
> 
> The obvious problem I encountered with this was that if the dark region
> or bright region fully encompasses the window, there is no way to
> distinguish between them because the difference between the centre pixel
> value and the mean is essentially zero.
> Unless I am totally missing something, I don't see how this could
> possibly be a good algorithm for spot detection.
> 
> My questions are:
> (1) Am I not understanding the adaptive thresholding algorithm?
> (2) Is what I have done still considered "adaptive" thresholding?
Return to Top
Subject: Re: CCD calibration
From: "Niall Dorr"
Date: 12 Nov 1996 10:37:41 GMT
tsui@mhd1.pfc.mit.edu wrote in article
<10NOV96.03274179@mhd1.pfc.mit.edu>...
> Dear friends:
> 
> I am looking for literature about CCD calibration. Would you please point me
> in the right direction?
> 
> Sincerely,
> 
> Chiwa Tsui
> 
Return to Top
Subject: *** FRAME GRABBER SUGGESTIONS? ***
From: androut@ecf.toronto.edu (Dimitrios Androutsos)
Date: Tue, 12 Nov 1996 16:01:02 GMT
We are about to upgrade our existing Image Processing Laboratory and
are interested in setting up a machine with a frame grabber capable of
capture, playback, and output to video tape, etc.
Can anyone suggest a good card, in common use, for either UNIX or,
preferably, PC-based machines?
I've heard of the Matrox Meteor, but I don't think it has the capabilities
I'm looking for... I've also heard about Miro, but again I'm not sure what
their cards do.  If anyone has a setup they are using, please let me know.
We want to work in video coding, motion estimation and MPEG development.
Thanks,
Dimitrios Androutsos
University of Toronto
Return to Top
Subject: snake control problem
From: jiang@cs.purdue.edu (Haitao Jiang)
Date: 12 Nov 1996 11:56:57 -0500
Hi, everyone,
    Does anyone know how to fix the following problem with the snake model?
    I am using a snake model to track a person, and I am only interested
in his/her head, which can be modelled as an open snake. Since the two end
points are also moving, I lose control of the snake: the points all shrink
into a single point. Even if I prevent this shrinking tendency, the points
can still move to another, unwanted part. How can I control the snake so
that it converges to the region I was initially interested in?
Thanks a lot!
Return to Top
Subject: searching description of DXF file format...
From: stub@verleihnix.rz.tu-clausthal.de (Ulf Bartelt)
Date: 12 Nov 1996 15:45:16 GMT
Hi !
Sorry if this is a FAQ but searching the articles in this group I
didn't find any pointer...
I'm lookin' for a description of the DXF file format...
That's all...
Bye !
      Ulf.
Return to Top
Subject: Re: image analysis system
From: "Sheldon L. Epstein"
Date: Tue, 12 Nov 1996 11:15:08 -0600
Raja Elhayek wrote:
> 
> Sheldon L. Epstein wrote:
> >
> > Raja Elhayek wrote:
> > >
> > > Sheldon L. Epstein wrote:
> > > >
> > > > matthew g donovan wrote:
> > > > >
> > > > > Our lab is considering purchasing an image analysis system for the
> > > > > quantification of immunohistochemical staining.  Does anyone know of a
> > > > > good source for comparing the features of the currently available
> > > > > systems?  We are looking for something compatible with our Nikon
> > > > > microscopes and very user freindly. We would welcome any suggestions of
> > > > > what to look for, as well as what to avoid.  Thanks in advance,  MD
> > > >
> > > > Hello Matthew,
> > > >
> > > > Well to begin, there is no such thing as a 'user-friendly' imaging
> > > > system.  To understand why this is so, you have to meet the people
> > > > who write the software.  They are thinking in terms of words
> > > > such as 'morphology' while you thinking 'immunohistochemical staining'.
> > > > What you need is a person to do translation and provide the goodies
> > > > you need.
> > > >
> > > > We build custom automatic inspection systems and use ZEISS
> > > > microscopes.  If we can be of assistance to you, then please call or
> > > > e-mail.
> > > >
> > > >                 Sheldon L. Epstein, shel@k9ape.com
> > > >                 Chief Engineer
> > > >                 Epstein Associates
> > > >                 Wilmette, IL
> > > >
> > > >                 http://www.k9ape.com
> > >
> > > Hello Matthew and Sheldon
> > >
> > > Actually Sheldon is wrong; although we truly think in terms of
> > > morphology when processing images, there are some software packages
> > > that are really user friendly. Personally I have used Image-Pro Plus
> > > in a lot of biological analyses and it seems to have some very good
> > > features, and it is easy to use.  Now I am not saying that it is the
> > > best, but for the features included and the price of about $2500 it
> > > seems to be OK.  It also has a programming environment for Basic and
> > > C; you can also access it through dynamic linking to perform very
> > > specialized analysis.  But be a little careful, their customer
> > > service department isn't that hot!
> > >
> > > If you have any questions you can email me back
> > > Raja Elhayek
> >
> > Hello Raja,
> >
> > I think that something I said got lost in translation.  There are
> > several packages out there containing lots and lots of imaging tools.
> > However, they rarely contain advice on how any of these tools
> > might be useful in a practical application.
> >
> > The question that a potential user has to answer is whether they
> > want to use their limited time to duplicate the learning experience
> > of others or they want to concentrate on expanding their own area
> > of science, engineering or technology.  Now for someone like
> > Matthew who is interested in 'immunohistochemical staining' for
> > solving some problem - say in biology, it is probably a waste
> > of his time to spend hours trying to explore the finer points
> > of Canny transforms or the latest in wavelets.  And even if
> > he were to do it, he is still left with basic problems
> > of engineering a practical system for a laboratory or a manufacturing
> > plant.
> >
> > This is not to say that Matthew and others could not succeed; but,
> > it has been our experience that our customers have to decide
> > whether they want to concentrate on their business or whether
> > they want to be in our business.  There are a lot of video
> > cameras and imaging boards out there gathering dust because
> > some researcher in another area thought it would be a simple
> > matter to build a system.
> >
> > So much for direct marketing to endusers of some of our tools.
> > That worked for solving simple problems; but, now that all of
> > the simple problems have been solved, direct marketing has
> > lost its allure as companies now focus their attention on
> > integrators.  We still sell components to those who want them;
> > however as the components get more sophisticated, there are
> > fewer customers who want to buy them.  Instead, the demand for
> > integration is growing and that is an intelligent investment
> > decision.  After all, how many of you out there write your
> > own operating systems or WWW browsers?  You can do it; but,
> > its unlikely that you'll make any money at it.  And the same
> > is true for image analysis.
> >
> >                 Shel Epstein, shel@k9ape.com
> >                 Chief Engineer
> >                 Epstein Associates
> >                 http://www.k9ape.com
> 
> Hello Sheldon
> 
> I very much understand your point of view on an integrated
> system that does specifically what you want; this is actually what I am
> developing right now for a company. But I believe that Matthew, who is
> probably reading our comments, was asking for a system for his laboratory,
> and as I see it an integrated system that analyses
> 'immunohistochemical staining' may not have all the exploring
> capabilities that Matthew wants.  Now don't get me wrong, I hope you can
> strike a deal on this, but research (I am a physicist, by the way), as I
> have found over the last 5 years, is a matter of exploration, not just
> one idea. If you specify a system to analyse staining, you limit your
> exploration: first by not being able to test different imaging
> aspects and how they apply to the topic at hand; second, if you want to
> add new capabilities to an already developed system, you will have to go
> through the same company again (this isn't very cost effective unless
> you are certain that your calculations are the end of the exploration
> in your staining project!).  I guess the whole issue here is what
> Matthew really wants: does he want to explore the capabilities that can
> apply to staining, or does he already know everything he wants in a
> system?
> I am currently working with SpectraMetrix Inc., and I can tell you that
> every time I get an idea to integrate into the final product, I use a
> commercial software package to test it first; that reduces my development
> time, and I am able to test the accuracy and potential of my idea before
> I code it.
> 
> I hope Matthew is reading these comments, because they will definitely
> help him in his decision: either he gets a package that is very general
> and through research discovers what he really wants in a system, or, if
> he already knows what he wants and wants to limit his time doing
> research, the integrated system is a very good idea.
> 
> Raja Elhayek
> San Diego, CA.
Hello Raja,
I think we're coming close to some agreement.  In my judgment,
the real focus in any instrumentation problem -and especially
an imaging instrumentation problem- is to ask the right
question(s).  If you don't have the right question, then the
answer is meaningless and if you do have the right question,
then the answer is trivial.
It has been our experience that customers are not likely to
ask the right questions when it comes to imaging.  I can't
begin to count the number of times a customer has told us
"Gee, I didn't know you could do that!"  It's not that
we're smarter than anyone else - it's that we have experience
in analyzing problems from a perspective which is different
from that of our customers.
Now, we don't always get it right the first time.  Our
experience has taught us that building an imaging system
is an iterative process in which the customers and we
make successive changes in the specifications as both of
us learn more about the customer's products and processes.
Occasionally we have to deal with a purchasing agent who
wants a firm contract with rigid specifications written
by a customer's engineer.  We will bid with the caveat that
flexibility is a virtue in this business.  We have yet to
see one of these 'rigid specifications' remain unchanged.
As for whether Matthew does it all himself or calls upon
an outside imaging systems engineering firm, that is a
'make or buy' investment decision.  My main point is to
observe that unless a customer plans to make a major 
commitment to developing an in-house imaging capability,
it is probably wiser to buy/rent the capability.  This
is especially true where the customer is in a business
where it is unlikely to be able to market its imaging
developments and management only looks upon imaging as
a tool for solving some immediate production problem.
As I said in an earlier message, there are a lot of cameras
and imaging boards lying around in customers' desk drawers,
abandoned once they found out how high the level of commitment
must be to become commercially successful.
		Shel Epstein, shel@k9ape.com
		Chief Engineer
		Epstein Associates
		http://www.k9ape.com
Return to Top
Subject: Re: Interpolate interlaced video frames
From: "Sheldon L. Epstein"
Date: Tue, 12 Nov 1996 11:20:08 -0600
Christian Merkwirth wrote:
> 
> Video data from a CCD-camera is usually interlaced, that means
> the first frame coming from the camera contains e.g the even numbered
> lines of the image, the next frame carries the information from
> the odd numbered lines of the image, both frames have exactly
> the same size.
> 
> If you record a scene without moving objects, the frames coming from
> the camera show almost the same picture, there's just a small
> displacement in vertical direction of a half line between two frames.
> 
> My problem is to eliminate this small difference in the position by
> interpolating for example all frames with the odd lines.
> 
> My first idea is to calculate the mean between two consecutive lines in
> every  'odd' frame, but I guess there's a more exact interpolation
> using more lines in one frame.
> 
> Does anyone know which formula is the best?  And what do I do with the
> first and the last line in a frame, which have only one neighbour?
> 
> Thanks in advance,
> 
> Christian Merkwirth
> Drittes Physikalisches Institut
> Goettingen
> [tel]   ++ 49 551 39 21 65   [fax]  ++ 49 551 39 77 20
> [email] cmerk@physik3.gwdg.de
Guten Tag, Christian:
There is an easier way to solve the problem.  For B/W, purchase a SONY
XC-8500 (PAL) or XC-7500 (RS-170) camera which exposes both fields
at the same time.  Then, you don't have a problem.  
We use and sell the XC-7500 cameras here and recommend them.
		Sheldon L. Epstein, shel@k9ape.com
		Chief Engineer
		Epstein Associates - K9APE
		http://www.k9ape.com
Return to Top
Subject: Help: 3D Graphic programming
From: drago1@sbox.tu-graz.ac.at (THOMAS DRAGOSITS)
Date: 12 Nov 1996 17:53:52 GMT
Hi.
Does anybody know of some good 3D graphics libraries to support my
project of writing a graphical user interface to visualize object-classified
MR data with Visual C++ 4.2 under Windows 95?  I know about the
Visualization Toolkit VTK and TGS in San Diego. Please email me.
Thank you in advance 
-- 
THOMAS DRAGOSITS alias drago1@sbox.tu-graz.ac.at
                 alias tdragosits@austronet-hartberg.co.at
Return to Top
Subject: Re: what's is "P5" or "P6" ??
From: Bill Gardner
Date: Tue, 12 Nov 1996 13:05:01 -0500
> >>     I want to know what PGM and PPM raw format files are (magic
> >>     number "P5" or "P6").
> 
> >>     Can you help me ??
> 
> 
The P5 and P6 tell the display software what type of data the file
contains.  All PGM and PPM files contain a header.  This information is
used by the display software to determine the size of the image, the bit
resolution of the image, and how the data is stored (i.e. as ASCII
integers, binary bytes, etc.).
As I remember, P5 means the data is stored as bytes, where pixel (0,0)
is first and pixel (n,n) is last in a single array of bytes.
I am not familiar with P6, although I know that P2 stores the image data
as ASCII integers.
Hope this helps.
Bill
Return to Top
Subject: Re: Bitmap to Vector file conversion software
From: rubinsnk@is2.nyu.edu (Kalman Rubinson)
Date: 12 Nov 1996 15:31:02 GMT
Priyantha Mudalige (pri@brb.dmt.csiro.au) wrote:
> I am looking for any shareware/freeware utility program or C routine
> which converts bitmap file format (say, tiff,bmp,pcx or any other
> popular format) to autocad vector format (dxf). Platform doesn't matter,
> DOS/Windows or Unix. Any pointers or suggestions are appreciated.
I use R2V which is very capable but it ain't shareware/freeware.  However,
they offer a demo version which may be adequate for your needs (limited
image size).  You can contact them at able@world.std.com
Kal
Return to Top
Subject: Re: CCD calibration
From: "JG.Campbell"
Date: Tue, 12 Nov 1996 20:23:13 +0000
tsui@mhd1.pfc.mit.edu wrote:
> 
> I am looking for literatures about CCD calibration. Would you please show me
> the direction?
> 
Given the context 'CCD' I assume that you mean radiometric / photometric
calibration, i.e. either (a) absolute calibration -- in which you wish
to convert your sensor reading to some standard lightness unit  or (b)
relative calibration -- in which case you wish to equalise two or more
sensors, e.g. equalise gain, reduce to zero bias. A relatively recent
reference is 
G. Healey and R. Kondepudy, 1994, Radiometric CCD Camera Calibration
and Noise Estimation, IEEE Trans. PAMI, Vol. 16, No. 3, pp. 267--276,
which will, if nothing else, give you an up-to-date survey.
I agree with an earlier answer (from "Niall Dorr" 
Return to Top
Subject: Re: Black&White to Colour Images
From: Christian Soeller
Date: 12 Nov 1996 20:49:52 +0000
rubinsnk@is2.nyu.edu (Kalman Rubinson) writes:
> S Butterfield (CoMIR) (stuart@scs.leeds.ac.uk) wrote:
> > Can anyone out there give me pointers towards how one might go about
> > transforming a black and white image into (something) like the original
> > colours?
The colour -> greyscale transformation is generally *not* invertible. If you
look at the conversion from colour to b/w, you see that a lot of
colours will be mapped to the same shade of grey (in a sense the colour,
let's say represented by the hue value, is *independent* of the brightness
of a pixel in an image).
So I don't see how to do it unless you know a lot of additional things
about your image (how to segment into areas of equal colour, which original
colour, etc.). There are circumstances where that might apply (you are
colouring an image of a place well known to you) but it is probably very hard
to automate and more something for an artist.
Best regards,
  Christian
--------------------------------------------------------------------
Christian Soeller                         mailto: csoelle@sghms.ac.uk
St. Georges Hospital Medical School       Dept. of Pharmacology
Cranmer Terrace                           London SW17 0RE
Return to Top
Subject: Re: Downsampling without aliasing
From: tardif@gel.ulaval.ca (Pierre-Martin Tardif)
Date: Tue, 12 Nov 1996 20:15:17 GMT
Robert Smith  wrote:
>tsui@mhd1.pfc.mit.edu wrote:
>> 
>> Dear friends:
>> 
>> I have a question and hope that you can give me some hints:
>> 
>> How to downsample an image without aliasing effect?
>> 
>> Thank you very much for your attention. I am waiting for your response.
>> 
>> Sincerely,
>> 
>> Chiwa Tsui
>The basic idea is not to throw away any spatial information.  For 
>example, if you were going to shrink the image 2x, you would create 
>"new" pixels by averaging a 2x2 array of the original pixels.  (The 
>alternative, discarding 3 out of 4 pixels is the best way to GET 
>aliasing!)
>  This is great if you're changing size by an integer, but it's harder 
>when you're not.  In that case you will have to create virtual pixels 
>with weighted averages.  e.g. Suppose you want a pixel which is 2/3 of 
>the way between 2 existing pixels; you must weight this pixel by adding 
>2/3 of one original pixel and 1/3 of the other.
>  Now we come to 2D.  There really is no obviously best way of 
>interpolating a pixel which falls between 4 existing pixels in 2D. 
>Fortunately it's not critical for most purposes so do a 1D interpolation 
>along all the rows, and then do it again along the resulting columns. 
>(Fancier than this requires curvilinear interpolation, and is rarely 
>necessary to avoid aliasing.)
As I said in a previous posting, you just need to avoid frequency
foldover.  If you avoid frequency foldover you will not have aliasing
and you should be able to reconstruct the original signal.
The best way to avoid frequency foldover is to filter out unnecessary
information.  Usually, this unneeded information is in the high
frequency range.  This is why low-pass filters (LPFs) are used.
If you downsample by K where K is an integer in ]1,max(width,height)],
you need to remove (K-1)/K of the frequency range.
To keep it simple, you design a filter with the required
cut-off frequency and a good roll-off.  Averaging the image over a
2x2 array is the same as using the
    1  1
    1  1
filter.  Its cutoff frequency is below 1/2, and this is why it works.
But it is not the best filter for that kind of job; you can design
a better filter using  fir1(#taps, 0.5)  in Matlab.  It is a 1-D
filter, but use  conv2(ans,ans')  to get a 2-D equivalent (it is
separable, but sufficient).
========
If you want to interpolate your data (which is different from
downsampling and is usually used in upsampling), you could just use
bilinear or bicubic interpolation.  Bicubic interpolation is not a
major improvement over bilinear interpolation.
For interpolation during upsampling, K=2:
zero order interpolation:
    1   1
    1   1
bilinear interpolation:
   1   2   1  
   2   4   2   
   1   2   1
========
If you want to interpolate a new image using bilinear interpolation
(like the 1/3, 2/3 problem of the previous poster):
Let's define coordinates a(ax,ay), b(bx,by), c(cx,cy) and d(dx,dy):
     a-----b
     |     |
     c-----d
where bx = dx = ax+1 = cx+1,  by = ay,  and  cy = dy = ay+1
If e(ex,ey) is inside abcd, then you can find its value using:
e = (b-a) (ex-ax) + (d-c-b+a) (ex-ax)(ey-ay) + (c-a) (ey-ay) + a
which is bilinear interpolation.
Hope this answers your question...
			PMT
PS:
If I am wrong in my argument, please let me know.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Pierre-Martin Tardif, graduate student  email: tardif@gel.ulaval.ca
Computer Vision and Systems Lab         http://www.gel.ulaval.ca/~tardif
Laval University, Ste-Foy, Canada       phone: 418-656-2131, 4848
Return to Top
Subject: FS: #9 Imagine 128 Series I w/4 MB VRAM $350
From: intevans@aol.com
Date: 12 Nov 1996 22:00:04 GMT
FS: #9 Imagine 128 Series I w/4 MB VRAM $350
Bought in January 96, incompatible with Portrait Display Labs Pivot 1700
monitor (PDL never wrote pivot-mode drivers after long wait), selling in
like new condition, with book, disks, & blank registration card.  Used for
only 3 months, sat in a static bag since then.
Price firm, terms 2-day FedEx C.O.D., s/h included in price.  Continental
United States only.  E-mail inquiries only, first come, first served.
Respond to:
intevans@aol.com
Return to Top
Subject: Re: ITI .img format
From: "Oscar del Rio"
Date: Tue, 12 Nov 1996 22:48:24 GMT
Michel Voue  wrote in article
<3283240E.41C6@gibbs.umh.ac.be>...
> What should be to structure (header and data organization) of images
> corresponding to the ITI .img format ?
I wrote this based on the info from an old manual for an ITI PCVisionPlus
frame grabber.
I do not know if they have modified the format since then, but you can try.
You can use the following C structure:
typedef unsigned char uchar;     /* 8-bit unsigned character */
typedef unsigned short uint16;    /* 16-bit unsigned integer */
/*  64 bytes of header */
typedef struct {
	uchar          I;
			/* Should be character 'I' */
	uchar          M;		
			/* Should be character 'M' */               
	uint16          c_size;	
			/* Size of 'comment' string that follows header*/
	uint16          width, height;
			/*  Size of image in pixels */
	uint16          h_pos, v_pos;	
			/* horizontal and vertical offsets of image */
	uint16          f_type;		
			/* file "type", set to zero? */
	uchar          reserved[50];
			/* set to zero? */
}  img;
You can do a binary read of the first 64 bytes of the file and that should
give you the header. You might have to swap the byte order of the
16-bit integers to get the actual values, depending on your computer
(you do not have to on Intel PCs).
After the 64 bytes of header comes "c_size" bytes of a "comment" string,
and after that comes the image data starting from the top-left corner
(width*height bytes (unsigned char))
Hope this helps
Oscar del Rio
Return to Top
Subject: Ease of Use Comparison wanted, DataCube vs. ITI
From: Paul Stomski
Date: Tue, 12 Nov 1996 15:21:45 -1000
Here at W.M. Keck Observatory, we are in need of a new image processing
system for use in our Adaptive Optics system.  We are running VxWorks on
a Force40 CPU in a VME cage.  We have narrowed the choices to two...
	Imaging Technology Inc.
	DataCube MaxVideo-200
We have cost comparisons, but are interested in getting some feedback on
ease of use.  We have heard that DataCube systems are difficult to
program, and have heard nothing at all about the ITI ease of use or
programming.  Obviously it will be application dependent, but we would
like to just get a gut feeling.
Can anyone tell us anything about the ease of use of these two systems?
Any horror stories?  Any testimonials?  Any recommendations? 
Thanks,
Paul Stomski
Adaptive Optics Software Engineer
W.M. Keck Observatory
Kamuela, Hawaii
Return to Top
Subject: Re: [SUMMARY] Hough Transform
From: Michael Aramini
Date: Tue, 12 Nov 1996 22:58:32 -0500
A couple of other books which discuss the Hough transform are:
    _Computer Vision_ by Ballard and Brown
and
    _Digital Image Processing_ by Pratt
Sorry that I don't have more complete references to these two books.
_Computer Vision_ discusses the Hough transform for lines and for
circles, and then goes on to discuss the "generalized" Hough transform.
_Digital Image Processing_ discusses the Hough transform for
lines, but does not go on to other objects.
Neither book gives actual code which implements the Hough transform
but they are simple enough algorithms that it is possible to write
code without much thought after reading the verbal description.
While it is fairly straightforward to write code which implements the
Hough transform, it can be rather slow, especially for circles,
where you have 3 parameters.  If it is being used in an application
where you want results while you wait, instead of letting something
run overnight, you must find ways to limit the range of candidate
parameter values.  This may involve a priori knowledge about where in
the image you'd expect to find the circle(s) you are looking for and/or
what range of circle sizes you are looking for.
I've used the Hough transform for circles to precisely determine the
center of the solar disk in each of a series of digitized images of
the Sun during the partial phases of a solar eclipse.  In some ways
these images are easy to work with because of the very high contrast
between the solar disk and the Moon and the rest of the sky.  One might
ask why I didn't do something simple like find the centroid of all the
bright pixels, or find the centroid of the boundary between the bright
and dark pixels.  Such methods probably work fine when the entire
disk of the Sun is visible, but as more and more of the Sun is eclipsed
by the Moon, the position of the centroid becomes skewed further and
further from the true center of the disk of the Sun.  So the problem
was not to find the center of a full circle, which is rather easy, but
to find the center of the circular arc forming the uneclipsed part of the
limb of the Sun, and not be distracted by the circular arc forming the
part of the limb of the Moon which is in front of the Sun.
A trick I used to limit the number of candidate positions of the center
of the solar disk is to first find the minimal (smallest) bounding
circle which surrounds the edge pixels.  This in some cases may be
somewhat larger than the solar disk, since there may be stray bright
spots in the photo due to dust/scratches on the negatives, etc.
To find the minimal bounding circle, I first found the bounding box of
the edge pixels.  Then I came up with a relatively small rectangular
region within which the center of the minimal bounding circle must lie.
This region can be defined by noting that the diameter of the minimal
bounding circle must be at least as large as the longer side of the
bounding box, but can't be any larger than the length of the
diagonal of the bounding box.  I then tried each candidate center
point to determine which one resulted in the smallest circle that
contained all of the edge points.  To reduce the amount of computation
I used just a subset of the edge points, which I call the "corner" points:
these are edge points which are not collinear with any two of their
neighboring edge points.
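The two ideas above, keeping only the "corner" edge points and searching a limited set of candidate centers for the smallest enclosing circle, can be sketched roughly as follows.  This is a Python reconstruction of mine, not the original code; it assumes the edge pixels form an ordered, closed chain of (x, y) tuples:

```python
import math

def corner_points(edge_pts):
    """Filter an ordered, closed chain of edge pixels down to "corner"
    points: points that are not collinear with their two chain neighbors."""
    n = len(edge_pts)
    keep = []
    for i in range(n):
        x0, y0 = edge_pts[i - 1]
        x1, y1 = edge_pts[i]
        x2, y2 = edge_pts[(i + 1) % n]
        # the cross product is zero exactly when the three points are collinear
        if (x1 - x0) * (y2 - y0) != (y1 - y0) * (x2 - x0):
            keep.append((x1, y1))
    return keep

def min_bounding_circle(pts):
    """Brute-force minimal bounding circle: try every integer center
    inside the bounding box of the points, and keep the center whose
    farthest point is nearest (smallest enclosing radius)."""
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    best = None
    for cx in range(min(xs), max(xs) + 1):
        for cy in range(min(ys), max(ys) + 1):
            r = max(math.hypot(px - cx, py - cy) for px, py in pts)
            if best is None or r < best[2]:
                best = (cx, cy, r)
    return best  # (center_x, center_y, radius)
```

The brute-force search here scans the whole bounding box; the diameter bounds described above would shrink that scan to a much smaller rectangle of candidate centers.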
Once the size of the minimal bounding circle is known, I used it, the
bounding box, and the fact that at least 180 degrees of the Sun's limb is
always visible (since this was an annular eclipse, in which the apparent
size of the Moon is slightly smaller than the apparent size of the Sun
- this wouldn't work for the later partial phases leading up to a TOTAL
eclipse of the Sun, BTW) to come up with a different, relatively small
set of candidate locations for the center of the circular arc forming the
limb of the Sun.  With this sufficiently limited set of center positions
I was able to determine the center and radius of the Solar disk to
the resolution of individual pixels in about 20 or 30 seconds of CPU
time on an HP PA-RISC workstation (or about a couple of minutes of CPU
time on a 100 MHz Pentium PC).  Not exactly lightning speed,
but a lot faster than the hours of CPU time it might take for the Hough
transform if I tried every pixel position in the image as a candidate
center location.
A more difficult problem, which I was somewhat less successful
in solving using the Hough transform, is finding the center of the
disk of the Moon in these eclipse photos.  In most of the images,
less than 180 degrees of the limb of the Moon is in front of the Sun
and thus visible (there's too much glare from the exposed part of
the Sun in that part of the sky to see the boundary between the dark
side of the Moon facing the Earth and the sky surrounding the part of
the Moon that is not in front of the Sun).  Thus my minimal bounding
circle optimization cannot be used, and so I used other, less reliable
heuristic methods for guessing the general location of the center of
the Moon.  Many times the guess is not very good and may totally miss
where the center of the Moon really is.
-Michael
Return to Top
Subject: Re: [SUMMARY] Hough Transform
From: Michael Aramini
Date: Tue, 12 Nov 1996 22:58:32 -0500
A couple of other books which discuss the Hough transform are:
    _Computer Vision_ by Ballard and Brown
and
    _Digital Image Processing_ by Pratt
Sorry that I don't have more complete references to these two books.
_Computer Vision_ discusses the Hough transform for lines and for
circles, and then goes on to discuss the "generalized" Hough transform.
_Digital Image Processing_ discusses the Hough transform for
lines, but does not go on to other objects.
Neither book gives actual code which implements the Hough transform
but they are simple enough algorithms that it is possible to write
code without much thought after reading the verbal description.
While it is fairly straightforward to write code which implements the
Hough transform, it can be rather slow, especially for circles,
where you have 3 parameters.  If it is being used in an application
where you want results while you wait, instead of letting something
run overnight, you must find ways to limit the range of candidate
parameter values.  This may involve a priori knowledge about where in
the image you'd expect to find the circle(s) you are looking for and/or
what range of circle sizes you are looking for.
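As an illustration of the accumulator approach with a limited parameter range, here is a minimal sketch (Python; my own illustration, not code from either book; it assumes a list of (x, y) edge coordinates and a known radius range):

```python
import math
from collections import defaultdict

def hough_circles(edge_pts, r_min, r_max, top=1):
    """Hough transform for circles: each edge point votes for every
    (cx, cy, r) accumulator cell it could lie on.  Restricting the
    radius to r_min..r_max is what keeps the 3-parameter search cheap."""
    acc = defaultdict(int)
    for x, y in edge_pts:
        for r in range(r_min, r_max + 1):
            for deg in range(0, 360, 4):      # coarse angular step
                t = math.radians(deg)
                cx = round(x - r * math.cos(t))
                cy = round(y - r * math.sin(t))
                acc[(cx, cy, r)] += 1
    # return the `top` highest-voted parameter cells
    return sorted(acc, key=acc.get, reverse=True)[:top]
```

With an unrestricted radius and every pixel as a candidate center, the same triple loop is exactly what makes the naive version take hours.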
I've used the Hough transform for circles to precisely determine the
center of the Solar disk in each of a series of digitized images of
the Sun during the partial phases of a solar eclipse.  In some ways
these images are easier to work because of the very high contrast
between the solar disk vs. the Moon and the rest of the sky.  One might
ask why I didn't do something simple like find the centroid of all the
bright pixels or find the centroid of the boundary between the bright
and dark pixels.  Such methods probably work find when the entire
disk of the Sun is visible, but as more and more of the Sun is eclipsed
by the Moon, the position of the centroid becomes skewed further and
further from the true center of the disk of the Sun.  The the problem
was not to find the center of a full circle, which is rather easy, but
to find the center of a circular arc forming the uneclipsed part of the
limb of the Sun, and not be distracted by the circular are forming the
part of the limb of the Moon which is in front of the Sun.
A trick I used to limit the number of candidate positions of the center
of the solar disk is to first find the minimal (smallest sized) bounding
circle which surrounds the edge pixels.  This is some cases may be
somewhat larger than the solar disk since there may be stray bright
spots in the photo due to dust/scratches on the negatives, etc.
To find the mimimal bounding circle, I first found the bounding box of
the edge pixels.  Then I came up with a relatively small rectangular
region where the center of the mimimal bounding circle must be within.
This can be defined by noting that the diameter of the minimal bounding
circle must be at least as large as the sides of the bounding box in
its longer direction, but can't be any larger than the length of the
diagonal of the bounding box.  I then tried each candidate center
point to determine when one resulted in the smallest circle which
contained all of the edge points.  To reduce the amount of computation
I used just a subset of the edge points which I call the "corner" points
these are edge points which are not colinear with any two of its
neighboring edge points.
Once the size of the mimimal bounding circle is known, I used it, the
bounding box, and the fact that at least 180 of the Sun's limb is
always visible (since this was an annular eclipse in which the apparent
size of the Moon is slightly smaller than the apparent size of the Sun
- this wouldn't work for the later partial phases leading up to a TOTAL
eclipse of the Sun, BTW) to come up with a different releatively small
candidate locations for the center of the circular arc forming the
limb of the Sun.  With the sufficiently limited set of center positions
I was able to determine the center and radius of the Solar disk to
the resolution of individual pixels in about 20 or 30 seconds of CPU 
time on an HP PA-RISC workstation (or about a couple of minutes of CPU 
time on a 100 MHz Pentium PC).  Not exactly extremely lightening speed,
but a lot faster than the hours of CPU time it might take for the Hough
transform if I tried every pixel position in the image as a candidate
center location.
A more difficult problem, at which I was somewhat less than sucessful
in solving using the Hough transform is finding the center of the
disk of the Moon in these eclipse photos.  Since in most of the images
less than 180 degrees of the limb of the Moon is in front of the Sun
and is thus visible (there's too much glare from the exposed part of
the Sun in that part of the sky to see the boundary between the dark
side of the Moon facing the Earth and the sky surrounding the part of
the Moon that is not in front of the Sun).  Thus my minimal bounding
circle optimization can be used and so I used other less reliable
heuristic methods for guessing the general location of the center of
the Moon.  Many times the guess is not very good and may totaly miss
where the center of the Moon really is.
-Michael
Return to Top
Subject: Re: what's is "P5" or "P6" ??
From: Michael Aramini
Date: Tue, 12 Nov 1996 23:15:04 -0500
Here is an excerpt from the section entitled "PBM, PGM, PNM, and PPM" from
the book _Encyclopedia of Image File Formats_ by Murray and vanRyper,
published by O'Reilly:
    RAWBITS Variant
    There is also a variant of the format, available by setting the
    RAWBITS option at compile time.  This variant differs from the
    traditional format in the following ways:
    - The "magic numbers" are as follows:
      Format   Normal   RAWBITS Variant
      PBM       P1      P4
      PGM       P2      P5
      PPM       P3      P6
    - The pixel values are stored as plain bytes, instead of ASCII
      decimal:
      PBM   RAWBITS is eight pixels per byte
      PGM   RAWBITS is one pixel per byte
      PPM   RAWBITS is three bytes per pixel
    - White space is not allowed in the pixel area, and only a single
      character of white space (typically a newline) is allowed after
      the MaxGrey value.
    - The files are smaller and many times faster to read and write.
    - Bit order within the byte is most significant bit (MSB) first.
    Note that this raw format can only be used for maximum values of
    less than or equal to 255.  If you use the PPM library and try to
    write a file with a larger maximum value, it automatically uses the
    slower, but more general, plain format.
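For illustration, a minimal reader/writer pair for the P5 (raw PGM) variant described above might look like this.  This is a sketch of mine, not code from the book; the reader assumes the simple three-line header the writer emits, with no comment lines:

```python
def write_pgm_p5(path, width, height, pixels, maxval=255):
    """Write a raw (P5) PGM: ASCII header, then one plain byte per
    pixel.  The raw format only allows maxval <= 255."""
    assert maxval <= 255 and len(pixels) == width * height
    with open(path, "wb") as f:
        f.write(b"P5\n%d %d\n%d\n" % (width, height, maxval))
        f.write(bytes(pixels))

def read_pgm_p5(path):
    """Read a P5 file written by the function above (simple
    three-line header, no comment lines)."""
    with open(path, "rb") as f:
        assert f.readline().strip() == b"P5"
        width, height = map(int, f.readline().split())
        maxval = int(f.readline())
        assert maxval <= 255
        pixels = list(f.read(width * height))
    return width, height, pixels
```

Note that a fully general reader would also have to cope with `#` comment lines and arbitrary whitespace in the header, which the plain-text variants permit.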
-Michael
Return to Top
Subject: Article on pattern recognition in VSD
From: a@a.com (a)
Date: Tue, 12 Nov 1996 21:06:57 -0800
Hi,
I'm the technology editor of Vision System Design magazine, a new
publication from Penwell Publications (Computer Design, Laser Focus) for
image processing professionals.
My January piece is on pattern recognition and I'm looking for a couple
sidebars on leading-edge research to accompany the piece. Topics I'm
looking for include medical imaging pattern recognition and automatic
target recognition. I'm also open to ideas.
Please contact me: Barry Phillips, barryp@ico.com, 408-633-0307.
Regards,
Return to Top
Subject: Re: image analysis system
From: "Rob Warren"
Date: 12 Nov 1996 13:18:08 GMT
Speak for yourself, Sheldon.  LECO Corp sells an image ANALYSIS product which
is less concerned with all the imaging terminology and more concerned with
making actual measurements and using this data in reports automatically.
> 
> Well to begin, there is no such thing as a 'user-friendly' imaging
> system.  To understand why this is so, you have to meet the people
> who write the software.  They are thinking in terms of words
> such as 'morphology' while you are thinking 'immunohistochemical staining'.
> What you need is a person to do translation and provide the goodies
> you need.
> 
Return to Top
Subject: Television R,G,B pixel standard
From: robgc@infomatch.com (Rob Chambers)
Date: 13 Nov 1996 00:41:31 GMT
I am looking for information regarding the CIE value for NTSC and PAL 
television Red, Green, and Blue phosphors.
I imagine it's somewhere around 470, 540, and 620 nm for NTSC (6500K).  Is PAL
based on a different colour temperature?
thanks,
Rob Chambers.
-- 
>> Satellite Control v2.3 is now available! Supports MT810/830 <<
>> For information, or to download http://infomatch.com/~robgc <<
Return to Top
Subject: Re: Television R,G,B pixel standard
From: alanr@rd.bbc.co.uk (Alan Roberts)
Date: 13 Nov 1996 09:28:27 GMT
Rob Chambers (robgc@infomatch.com) wrote:
: I am looking for information regarding the CIE value for NTSC and PAL 
: television Red, Green, and Blue phosphors.
I assume you mean the specification for the display primaries.
: I imagine it's somewhere around 470, 540, and 620 nm for NTSC (6500K).  Is PAL
: based on a different colour temperature?
Colour primaries for television are not based on wavelengths, but are defined
by their chromaticity coordinates.
System M (525 NTSC or PAL)
     x     y
R    0.67  0.33
G    0.21  0.71
B    0.14  0.08
White balance is to Illuminant C
W    0.310 0.316
All other systems (625 PAL or SECAM)
     x     y
R    0.64  0.33
G    0.29  0.60
B    0.15  0.06
White balance is to Illuminant D65
W    0.313 0.329
Information from ITU-R BT.470
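Those chromaticity coordinates are all that is needed to build the 3x3 RGB-to-XYZ conversion matrix for either system: take each primary's XYZ direction from its (x, y), then scale the three columns so they sum to the white point's XYZ (with white Y = 1).  A minimal sketch (Python; the function name and structure are mine):

```python
def rgb_to_xyz_matrix(prims, white):
    """Build the 3x3 RGB->XYZ matrix from primary and white-point
    chromaticities: each column is a primary's XYZ, scaled so that
    the columns sum to the white point's XYZ (white Y = 1)."""
    def xyz(x, y):                      # chromaticity -> XYZ at Y = 1
        return [x / y, 1.0, (1 - x - y) / y]
    cols = [xyz(*p) for p in prims]     # unscaled R, G, B columns
    w = xyz(*white)
    # Solve for the per-primary scales s: sum_j cols[j]*s[j] = w,
    # a 3x3 linear system, done here by Gauss-Jordan elimination.
    a = [[cols[j][i] for j in range(3)] + [w[i]] for i in range(3)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(a[r][i]))
        a[i], a[piv] = a[piv], a[i]
        for r in range(3):
            if r != i:
                f = a[r][i] / a[i][i]
                a[r] = [a[r][k] - f * a[i][k] for k in range(4)]
    s = [a[i][3] / a[i][i] for i in range(3)]
    return [[s[j] * cols[j][i] for j in range(3)] for i in range(3)]
```

The middle row of the resulting matrix gives the luminance weights of R, G, and B for that system, and by construction it sums to 1.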
-- 
******* Alan Roberts ******* BBC Research & Development Department *******
* My views, not necessarily Auntie's, but they might be, you never know. *
**************************************************************************
Return to Top
Subject: Re: Ultrasound research?
From: crowej@colt45.eecs.umich.edu (John R. Crowe)
Date: 13 Nov 1996 13:21:16 GMT
tluu@thunder.ocis.temple.edu wrote:
>I am currently trying to do ultrasound research but I can't find any
>resources on the internet. Could anyone help me find them.
>Thank you for your time.
>PLease, E mail me at tluu@thunder.ocis.temple.edu
The University of Michigan Biomedical Ultrasonics Laboratory's WWW
pages have abstracts, data sets, papers and relevant animations.  Areas of
research include ultrasonic imaging and therapy.
  http://bul.eecs.umich.edu
Return to Top
