Hi, is there anyone who uses IMAQ Vision (the LabVIEW add-on)?

Marc
E-mail: horemans@natlab.research.philips.com
braunan@Informatik.TU-Muenchen.DE (Andi Braun) wrote:
>How can I solve the problem of artifacts in error diffusion
>dithering? Does rescaling the error (Floyd) work? How can I
>do this? What about using 12 error fractions?
>
>Thanks in advance,
>
>Andi

What kind of artifacts do you mean? If you mean the "worm-like" textures in
the very dark and bright regions, the simplest way to diminish these
textures is a small change in the Floyd matrix, e.g.

               x  7/16                          x  7/16
     1/16 3/16 5/16        instead of      3/16 5/16 1/16

AFAIR this was proposed by Fan (from Xerox). A large diffusion matrix will
also work, but leads to problems at medium gray levels, and usually also to
stronger edge enhancement.

If you look in a standard textbook (maybe Ulichney, Digital Halftoning) you
will find more possibilities, like ED on a serpentine raster or a mixture of
dither and ED. Usually the reproduction of extreme gray levels is improved
at the cost of noisier medium gray levels. In the recent halftoning
literature you will find many papers about algorithms that adapt themselves
to the local gray level, using simple ED at medium gray levels and something
different at extreme gray levels (including an article about adaptation of
the processing raster that I published 3 years ago).

If you want to play around with some standard halftoning methods, you can
try a Java applet that a friend of mine has made:
http://www.appl-opt.physik.uni-essen.de/~phy510

And if you have any other questions, feel free to email me.

Thomas
--
******************************************************************
** Thomas Zeggel                                                **
** email: phy540@sp2.power.uni-essen.de                         **
** http://www.appl-opt.physik.uni-essen.de/~phy540              **
******************************************************************
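For readers who want to experiment, Thomas's point about the weight layout can be sketched in code. This is a minimal illustrative halftoner, not anything from the post; the two weight tables encode classic Floyd-Steinberg and the Fan-style variant described above, where the 1/16 weight moves from the lower-right neighbor to two pixels left on the next row.

```python
import numpy as np

# Error-diffusion weights as (dx, dy, weight) relative to the current pixel.
FLOYD_STEINBERG = [(1, 0, 7/16), (-1, 1, 3/16), (0, 1, 5/16), (1, 1, 1/16)]
# Fan's variant: the 1/16 goes two pixels left on the next row, which
# reduces the "worm" textures in very dark and very bright regions.
FAN = [(1, 0, 7/16), (-2, 1, 1/16), (-1, 1, 3/16), (0, 1, 5/16)]

def error_diffuse(gray, weights):
    """Binarize a float image in [0, 1] by error diffusion."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            # Push the quantization error onto unprocessed neighbors.
            for dx, dy, wgt in weights:
                xx, yy = x + dx, y + dy
                if 0 <= xx < w and 0 <= yy < h:
                    img[yy, xx] += err * wgt
    return out
```

On a constant mid-gray input, both filters produce a binary pattern whose mean stays close to 0.5; the visible difference between the two only shows up in the texture at extreme gray levels.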
Hi,

I need a comment from you on image processing software. Do any of you have
strong feelings about some particular package? I'd also be thankful for any
hints about market leadership.

Thank you,
--kaare
--------------------------------------------------------------
| Kaare Gärtner        | The Norwegian Radium Hospital       |
| kaareg@radium.uio.no | Dep. of Biophysics                  |
| fax: +47 22934270    | Montebello                          |
| tlf: +47 22934276    | 0310 OSLO                           |
--------------------------------------------------------------
Miroslav Trajkovic (miroslav@ee.usyd.edu.au) described problems using
Faugeras' self-calibration method with 8 points:

>each time I perform calibration with different camera
>orientation, I obtain different intrinsic parameters....

This is a very hard (and often impossible) problem, due to high correlations
between the exterior orientation parameters (position and orientation) and
the interior orientation parameters (focal length, principal point
position).

> ...
> lots o' math
> ...
>In my experiments, I used a set of 8 points. I measured their 3D
>coordinates and image positions and carried out the calibration. I've got
>reasonably small errors (in terms of (eqn. 1), not more than 3 pixels),
>but...

Were your 8 points in a plane, or distributed in 3D? If they're in a plane,
then the transformation between the image and object spaces is defined by
just 8 parameters; your solution is then overparameterized, resulting in
perfect correlations between the parameters. In this case, the extra
parameters wind up modeling the noise in the image measurements, along with
lens distortion or other systematic errors.

>Each time I performed the calibration (with different camera
>orientation) I obtained VERY different intrinsic parameters, and g
>wasn't small at all.
>
>I suspect that 8 points were not enough, but I am not sure. Another
>problem is that maybe I should put the constraint g=0, but then I could
>easily have a nonlinear problem, which I am trying to avoid.

Unfortunately, life is non-linear. This is a very simplistic math model,
which is sometimes usable if you apply the total transformation matrix
instead of looking at the values of the individual parameters, but as you've
discovered, the parameter values you get usually don't mean anything.

Photogrammetrists have spent many years on the camera calibration problem,
both on the mathematics and on determining point configurations which will
work. I would advise you to research the photogrammetric literature.
Chris McGlone
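Chris's point about planar point fields can be made concrete with a small numerical experiment (a sketch with made-up camera numbers, not anything from the thread). With all object points on the plane Z = 0, the three columns of the standard DLT design matrix that multiply the third column of the 3x4 camera matrix vanish identically, so the linear system for the 12 camera parameters acquires extra null directions and the calibration solution is not unique:

```python
import numpy as np

rng = np.random.default_rng(0)

# An assumed pinhole camera for illustration: P = K [R | t].
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])

# Eight points, all on the plane Z = 0 (homogeneous (X, Y, 0, 1)).
XY = rng.uniform(-1, 1, size=(8, 2))
Xw = np.column_stack([XY, np.zeros(8), np.ones(8)])

uv = (P @ Xw.T).T
uv = uv[:, :2] / uv[:, 2:3]  # pixel coordinates

# Standard DLT design matrix for the 12 row-major entries of P.
rows = []
for Xh, (u, v) in zip(Xw, uv):
    rows.append(np.hstack([Xh, np.zeros(4), -u * Xh]))
    rows.append(np.hstack([np.zeros(4), Xh, -v * Xh]))
A = np.array(rows)

sv = np.linalg.svd(A, compute_uv=False)
# Columns 2, 6, 10 (those multiplying P's third column) are identically
# zero because Z = 0, so at least 4 singular values vanish: many different
# camera matrices reproduce the same image points.
print(np.sum(sv < 1e-7 * sv[0]))
```

Adding even one point well off the plane removes the zero columns and restores a unique (up to scale) solution, which is the practical fix Chris suggests.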
Hello all!

I am going to be doing an experiment in which I hope to count and size lots
of objects on the order of 200 microns in diameter. I plan to do this using
a CCD camera with frame grabber and software (leaning towards the DT 3155
and Image-Pro). The part of this plan which is less settled is exactly *how*
we will resolve the bubbles (objects).

Can anyone suggest how to go about sizing a lens and selecting a CCD camera?
To be honest, what I know about all of this wouldn't fill a small page with
large text.

Any help would be greatly appreciated.

Thanks!

Greg Anderson
dwarf@wam.umd.edu
I've been using ImageTool for a while. It's quite good overall. However, the
major problem I have is that it doesn't have a very good tool for
manipulating the pseudo-color scales, especially when you need 2
simultaneous color scales for displaying coregistration results. Does anyone
have a good plug-in to overcome this problem?

Cordially,
Jeffrey Tsao
Biomedical Magnetic Resonance Laboratory
University of Illinois at Urbana-Champaign
Adam Ray (ozric@worldnet.att.net) wrote:
: Hello all.
:
: I'm trying to extract CCITT3 image data (possibly CCITT4) and create a TIFF
: image out of it. I've looked at the specs of TIFF Rev 4, but I can't
: really tell how or where I would insert the CCITT data. What I do have is
: the resolution of the image data and the size in bytes. But I don't even
: know how close I am or how complicated this will be. I appreciate anyone's
: help on this.
:
: Adam

Hi,

Since you have gone through the TIFF specs you must have noticed that the
TIFF file format is laid out as follows:

  TIFF header
  TIFF tags
  Image data

The tags are each 12 bytes wide, and there may be any number of tags, though
a few tags are essential. So to store a CCITT G3/G4 image in the TIFF file
format, all one has to do is write the TIFF header and the TIFF tags, then
append the CCITT G3/G4 image data at the end of the file. (Note: you have to
set the value of the StripOffsets tag accordingly.)

Yes, the info you have about the image is sufficient. To do a CCITT-to-TIFF
conversion you need the following info about the image:

  - Width in pels
  - Height in pels
  - Fill order of the image data, i.e. normal or reverse fill order.
    Usually you do not have this info in advance; some applications produce
    forward fill order and some reverse, so if you don't know, try both.
  - Number of strips of the image data

Hope this is of some help to you. A lot of info on the TIFF specs is
available on the web.

bye,
aravind. k.
e-mail: aravind@cse.iitb.ernet.in
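aravind's recipe (header, tags, then the raw strip) can be sketched in a few lines. This is an illustrative minimal writer, not production code: it assumes little-endian ("II") byte order and a single strip covering the whole image, and emits only a baseline tag set; real files often want resolution tags as well.

```python
import struct

def ccitt_to_tiff(ccitt_data, width, height, g4=True, fill_order=1):
    """Wrap raw CCITT-compressed data in a minimal single-strip,
    little-endian TIFF container (a sketch, not a full writer)."""
    # One IFD entry is 12 bytes: tag, type, count, value.
    # Type 3 = SHORT, type 4 = LONG.
    def entry(tag, typ, count, value):
        return struct.pack('<HHII', tag, typ, count, value)

    tags = [
        (256, 4, 1, width),              # ImageWidth
        (257, 4, 1, height),             # ImageLength
        (258, 3, 1, 1),                  # BitsPerSample: bilevel
        (259, 3, 1, 4 if g4 else 3),     # Compression: CCITT G4 or G3
        (262, 3, 1, 0),                  # PhotometricInterpretation: WhiteIsZero
        (266, 3, 1, fill_order),         # FillOrder (try 2 if 1 fails)
        (273, 4, 1, 0),                  # StripOffsets (patched below)
        (277, 3, 1, 1),                  # SamplesPerPixel
        (278, 4, 1, height),             # RowsPerStrip: one strip for everything
        (279, 4, 1, len(ccitt_data)),    # StripByteCounts
    ]
    ifd_offset = 8                       # IFD right after the 8-byte header
    data_offset = ifd_offset + 2 + 12 * len(tags) + 4
    tags[6] = (273, 4, 1, data_offset)   # point StripOffsets at the data

    out = struct.pack('<2sHI', b'II', 42, ifd_offset)  # header: "II", 42, IFD
    out += struct.pack('<H', len(tags))                # entry count
    for t in tags:
        out += entry(*t)
    out += struct.pack('<I', 0)                        # no next IFD
    return out + ccitt_data
```

If a viewer rejects the result, the usual suspects are the FillOrder (try 2 instead of 1) and G3 vs. G4 compression, exactly as aravind says.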
Patrick,

Try contacting the magazines GIS World (+970-223-4848) and Earth Observation
Magazine (+303-751-0755) in the US. Perhaps they have done a review of these
different packages at one time or another. Are you looking for a GUI-based
package?

Regards,
Tom Hospod
Segment Manager, Image Processing
The MathWorks

Patrick Hill wrote:
>
> I'm trying to find a newsgroup or somebody that can give me a review of some
> PC-based image processing packages pertaining to aerial photography and
> satellite imagery, and maybe some comparisons of systems available. I'm
> aware of the Leica Helava System, ER Mapper and MicroImages TNTmips, but
> would like to compare these and other systems.
> Thanks
> Patrick
Hello Ali,

I suggest that you check our resource directory at
http://www.mathworks.com/search.html and search for user-contributed M-files
that may be able to help you in your work. Let me know how you make out.

Regards,
Tom Hospod
Segment Manager, Image Processing

ALi khalil Benkhalil wrote:
>
> Hi, all..
>
> I just started research work on motion analysis and tracking.
> Can anyone please point me to any algorithm, source code, or URL page
> where I can find more information on tracking using MATLAB, and on how
> MATLAB v4.2c.1 can read a sequence of frames.
>
> Thanks.
> ALI BENKHALIL
> e-mail: akbenkha@bradford.ac.uk
Biao Lu wrote:
>
> Hi, all:
>
> Does anyone have any suggestions on the application of the cellular neural
> network to edge detection? I heard that CNN edge detection gives
> better results than the general edge detection techniques.
>
> Any suggestions will be highly appreciated.
>
> Please write to me by email. Thanks.
>
> Biao Lu                             Voice: (512) 471-2887
> Engineering Science Building        Fax: (512) 471-5907
> The University of Texas at Austin   E-mail: blu@ece.utexas.edu
> Austin, TX 78712-1084 USA           Web: http://anchovy.ece.utexas.edu/~blu

Hi,

As far as I know (I had some courses on CNNs and am currently doing a PhD
related to CNNs), the CNN is more a general tool in image processing than
something specific to edge detection, if you mean the Cellular
Neural/Nonlinear Network. Since edge detection can be done with convolution,
and a CNN is a good device for convolution/deconvolution, edge detection can
be done with a CNN. On the other hand, there are some problems with it. If
you really want to solve problems with it, you should send more detailed
questions to my mailbox.

L*
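L's observation that edge detection reduces to convolution is easy to demonstrate outside a CNN. This is a plain software sketch (nothing CNN-specific); a CNN would realize the same 3x3 kernel as its feedforward template.

```python
import numpy as np

# 3x3 Laplacian kernel: a classic convolution-based edge detector.
LAPLACIAN = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def convolve2d(img, kernel):
    """Naive 'valid' 2-D convolution (no SciPy dependency)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * flipped)
    return out

# A vertical step edge: left half 0, right half 1.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = np.abs(convolve2d(img, LAPLACIAN))
```

The response is nonzero only in the two columns straddling the step, which is exactly the behavior one would program into a CNN's template for edge extraction.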
Wenxin Mao wrote:
>
> Hi, friends,
>
> Recently I got a message from my friends:
>
> "I am studying scientific image processing, and I want to get some good
> image processing software, especially for determining particle size and
> the dimensions of holes and throat channels in rocks. This is very
> important for oil-bearing rock. Could you help me look for some
> information in your college?"
>
> I am not familiar with scientific image processing, so I ask you for
> advice.
>
> Thanks in advance,
> Mao

The most scientific software I can suggest is Khoros. It is available for
both commercial and academic use (www.khoral.com; if I am wrong, search with
AltaVista). However, it is very complex and needs a lot of time to learn and
large computing power (it is also available for Linux; figure on 32 MB of
RAM and ~200 MB of disk).

L*
Can anyone point me to modern work on getting depth from a needle diagram
(where by "needle diagram" I mean a map of local surface orientation)? I see
that Horn talks about it briefly in his 1986 book, but I'd like to find out
more about the techniques that people have used.

--
Mike Hucka     hucka@umich.edu     http://www.eecs.umich.edu/~hucka
University     PhD to be, computational models of human visual processing (AI Lab)
of Michigan    UNIX systems administrator & programmer/analyst (EECS DCO)
Miroslav D Trajkovic wrote:
> In my experiments, I used a set of 8 points. I measured their 3D
> coordinates and image positions and carried out the calibration. I've got
> reasonably small errors (in terms of (eqn. 1), not more than 3 pixels),
> but...
>
> Each time I performed the calibration (with different camera
> orientation) I obtained VERY different intrinsic parameters, and g
> wasn't small at all.
>
> I suspect that 8 points were not enough, but I am not sure. Another
> problem is that maybe I should put the constraint g=0, but then I could
> easily have a nonlinear problem, which I am trying to avoid.

One thing that you should consider, if you haven't already, is that the
normalization of your world and image coordinates can significantly impact
the accuracy of your results. I can't remember the exact reference, but
Richard Hartley wrote a paper with a title like "In defense of the 8-point
algorithm," which goes into this problem in some detail. I can't say that
this is the source of your errors, but it may be affecting things. (The
paper was written in the mid-1990s, probably 1995.)

Of course, as Chris McGlone pointed out, you should make sure that your 8
points do not lie in a degenerate configuration (e.g. collinear or
coplanar).

> Finally, it can be due to lens distortion, but I doubt that the results
> would be so "catastrophic".

This all depends on your lenses. If you are using wide-field-of-view lenses,
then your images probably have significant radial lens distortion. (Our
lenses have about a 90-degree FOV and distortions on the order of tens of
pixels at the periphery of the FOV.)

An alternative (and good!) calibration software package is Reg Willson's
implementation of Tsai's calibration procedure. The code is available online
at http://www.cs.cmu.edu/~rgw/TsaiCode.html

-Pete (rander@cs.cmu.edu)
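The coordinate normalization Pete mentions from Hartley's paper is simple to implement. A minimal sketch follows; the scaling convention used here (translate to the centroid, scale so the average distance from the origin is sqrt(2)) is my recollection of the paper, so treat it as an assumption:

```python
import numpy as np

def normalize_points(pts):
    """Hartley-style conditioning of 2-D points before a DLT-type solve.

    Translates the points to their centroid and scales them so that the
    mean distance from the origin is sqrt(2). Returns the normalized
    points and the 3x3 similarity transform T that was applied, so the
    estimated matrix can later be de-normalized."""
    pts = np.asarray(pts, dtype=float)
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2) / np.linalg.norm(pts - centroid, axis=1).mean()
    T = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    hom = np.column_stack([pts, np.ones(len(pts))])
    normed = (T @ hom.T).T[:, :2]
    return normed, T
```

Applied to raw pixel coordinates (which can be hundreds of units from the origin), this brings the design-matrix columns to comparable scales and dramatically improves the conditioning of the linear solve, which is the whole point of Hartley's paper.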
Dwarf wrote:
>
> Hello all!
>
> I am going to be doing an experiment in which I hope to count and size
> lots of objects on the order of 200 microns in diameter. I plan to do
> this using a CCD camera w/ frame grabber and software (leaning towards
> DT 3155 and Image Pro). The part of this plan which is less set is
> exactly *how* we will resolve the bubbles (objects).
>
> Can anyone suggest how to go about sizing a lens and selecting a CCD
> camera? To be honest, what I know about all of this wouldn't fill a small
> page with large text.
>
> Any help would be greatly appreciated.
>
> Thanks!
>
> Greg Anderson
> dwarf@wam.umd.edu

Hello Dwarf (a.k.a. Greg),

We regularly examine 50 micron objects here, so I have some practical
experience. We have a ZEISS AXIOLAB microscope, and I would suggest that you
start with a 10X to 20X objective to see the entire object. A 40X objective
will let you peer at fine detail.

For a camera, we selected the SONY XC-7500 CCD camera because it has square
pixels and great response for black-and-white problems. Our imaging boards
are both from EPIX: the Model 12 with the COC40 board for DSP analysis, and
the PIXCI for our PCI-bus computer. The PIXCI board works with both color
and B/W cameras. See http://www.epixinc.com/epix

Good Hunting,
Shel@k9ape.com
http://www.k9ape.com
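For Greg's original sizing question, a rough back-of-envelope calculation helps choose the magnification. All the specific numbers below (pixel pitch, magnification) are made-up placeholders; substitute the real values from the camera and objective data sheets:

```python
# Back-of-envelope check of how many pixels a given objective puts across
# the object. Every number here is an assumption for illustration only.
object_diameter_um = 200.0   # Greg's stated object size
pixel_pitch_um = 8.0         # hypothetical CCD pixel pitch
magnification = 10.0         # e.g. a 10X objective

# Size of the object's image on the sensor, and how many pixels it spans.
image_size_um = object_diameter_um * magnification
pixels_across = image_size_um / pixel_pitch_um
print(pixels_across)  # -> 250.0
```

With these assumed numbers a 200 micron object spans 250 pixels, far more than the handful needed to count and size it, so a lower magnification (and hence a larger field of view with more objects per frame) may be the better trade-off, which is consistent with Shel's suggestion to start at 10X to 20X.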