
# Selected Publications of the Department of Image Processing and Computer Graphics of the year 2007


## Books and proceedings

1. Gabor T. Herman and Attila Kuba, editors. Advances in Discrete Tomography and Its Applications, Applied and Numerical Harmonic Analysis. Birkhauser, Boston, 2007. [WWW]
Abstract: Advances in Discrete Tomography and Its Applications is a unified presentation of new methods, algorithms, and select applications that are the foundations of multidimensional image reconstruction by discrete tomographic methods. The self-contained chapters, written by leading mathematicians, engineers, and computer scientists, present cutting-edge research and results in the field. Three main areas are covered: foundations, algorithms, and practical applications. Following an introduction that reports the recent literature of the field, the book explores various mathematical and computational problems of discrete tomography including new applications. Topics and Features:
- introduction to discrete point X-rays
- uniqueness and additivity in discrete tomography
- network flow algorithms for discrete tomography
- convex programming and variational methods
- applications to electron microscopy, materials science, nondestructive testing, and diagnostic medicine
Professionals, researchers, practitioners, and students in mathematics, computer imaging, biomedical imaging, computer science, and image processing will find the book to be a useful guide and reference to state-of-the-art research, methods, and applications.

@BOOK{Herman2007,
PUBLISHER = {Birkhauser},
ADDRESS = {Boston},
TITLE = {Advances in Discrete Tomography and Its Applications},
YEAR = {2007},
EDITOR = {Gabor T. Herman and Attila Kuba},
SERIES = {Applied and Numerical Harmonic Analysis},
PAGES = {392},
URL = {http://www.springer.com/0-8176-3614-5},
}

## Articles in journal or book chapters

1. Peter Balazs. Decomposition algorithms for reconstructing discrete sets with disjoint components. In Gabor T. Herman and Attila Kuba, editors, Advances in Discrete Tomography and Its Applications, chapter 8, pages 153-173. Birkhauser, Boston, 2007. [WWW] [PDF]
Abstract: The reconstruction of discrete sets from their projections is a frequently studied field in discrete tomography, with applications in electron microscopy, image processing, radiology, and so on. Several efficient reconstruction algorithms have been developed for certain classes of discrete sets having good geometrical properties. On the other hand, it has been shown that the reconstruction under certain circumstances can be very time-consuming, or even NP-hard. In this chapter we show how the prior information that the set to be reconstructed consists of several components can be exploited to facilitate the reconstruction. We present some general techniques to decompose a discrete set into single components knowing only its projections, thus reducing the reconstruction of a general discrete set to the reconstruction of single components, which is usually a simpler task.

@INCOLLECTION{Balazs2007b,
AUTHOR = {Peter Balazs},
BOOKTITLE = {Advances in Discrete Tomography and Its Applications},
PUBLISHER = {Birkhauser},
TITLE = {Decomposition algorithms for reconstructing discrete sets with disjoint components},
YEAR = {2007},
CHAPTER = {8},
EDITOR = {Gabor T. Herman and Attila Kuba},
PAGES = {153--173},
URL = {http://www.springer.com/0-8176-3614-5},
}

2. Elena Barcucci, Andrea Frosini, Attila Kuba, Antal Nagy, Simone Rinaldi, Martin Samal, and Steffen Zopf. Emission discrete tomography. In Gabor T. Herman and Attila Kuba, editors, Advances in Discrete Tomography and Its Applications, pages 333-366. Birkhauser, Boston, 2007. [WWW] [PDF]
Abstract: Three problems of emission discrete tomography (EDT) are presented. The first problem is the reconstruction of measurable plane sets from two absorbed projections. It is shown that Lorentz theorems can be generalized to this case. The second is the reconstruction of binary matrices from their absorbed row and column sums if the absorption coefficient is $\mu_{0} = \log((1+\sqrt{5})/2)$. It is proved that the reconstruction in this case can be done in polynomial time. Finally, a possible application of EDT in single photon emission computed tomography (SPECT) is presented: Dynamic structures are reconstructed after factor analysis.

@INCOLLECTION{BarcucciFrosiniKubaNagyRinaldiSamalZopf:2007:EDT,
AUTHOR = {Elena Barcucci and Andrea Frosini and Attila Kuba and Antal Nagy and Simone Rinaldi and Martin Samal and Steffen Zopf},
BOOKTITLE = {Advances in Discrete Tomography and Its Applications},
PUBLISHER = {Birkhauser},
TITLE = {Emission discrete tomography},
YEAR = {2007},
EDITOR = {Gabor T. Herman and Attila Kuba},
PAGES = {333--366},
URL = {http://www.springer.com/0-8176-3614-5},
}
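An editorial aside, not part of the abstract above: the special absorption coefficient is the logarithm of the golden ratio $\varphi = (1+\sqrt{5})/2$, and a one-line identity suggests why this particular value is singled out:

```latex
% With \mu_0 = \log\varphi and the golden-ratio identity
% \varphi^2 = \varphi + 1, dividing by \varphi^2 gives
e^{-\mu_0} + e^{-2\mu_0}
  = \frac{1}{\varphi} + \frac{1}{\varphi^2}
  = \frac{\varphi + 1}{\varphi^2}
  = 1
```

That is, a unit at one position attenuates the beam exactly as much as units at the next two positions together, which is the kind of ambiguity ("switching component") that absorbed projections exhibit at this coefficient.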

3. Joachim Baumann, Zoltan Kiss, Sven Krimmel, Attila Kuba, Antal Nagy, Lajos Rodek, Burkhard Schillinger, and Juergen Stephan. Discrete Tomography Methods for Nondestructive Testing. In Gabor T. Herman and Attila Kuba, editors, Advances in Discrete Tomography and Its Applications, pages 303-332. Birkhauser, Boston, 2007. [WWW] [PDF]
Abstract: The industrial nondestructive testing (NDT) of objects seems to be an ideal application of discrete tomography. In many cases, the objects consist of known materials, and a lot of a priori information is available (e.g., the description of an ideal object, which is similar to the actual one under investigation). One of the most frequently used methods in NDT is to take projection images of the objects with some transmitting ray (e.g., X-ray or neutron ray) and reconstruct the cross sections. It can happen, however, that only a small number of projections can be collected, because of long and/or expensive data acquisition, or that projections can be collected only from a limited range of directions. The chapter describes two DT reconstruction methods used in NDT experiments, shows the results of a DT procedure applied to the reconstruction of oblong objects having projections only from a limited range of angles, and finally suggests a few further possible NDT applications of DT.

@INCOLLECTION{BaumannKissKrimmelKubaNagyRodekSchillingerStephan:2007:NDT,
AUTHOR = {Joachim Baumann and Zoltan Kiss and Sven Krimmel and Attila Kuba and Antal Nagy and Lajos Rodek and Burkhard Schillinger and Juergen Stephan},
BOOKTITLE = {Advances in Discrete Tomography and Its Applications},
PUBLISHER = {Birkhauser},
TITLE = {Discrete Tomography Methods for Nondestructive Testing},
YEAR = {2007},
EDITOR = {Gabor T. Herman and Attila Kuba},
PAGES = {303--332},
URL = {http://www.springer.com/0-8176-3614-5},
}

4. Peter Balazs. A decomposition technique for reconstructing discrete sets from four projections. Image and Vision Computing, 25(10):1609-1619, 2007. [PDF]
Abstract: The reconstruction of discrete sets from four projections is in general an NP-hard problem. In this paper we study the class of decomposable discrete sets and give an efficient reconstruction algorithm for this class using four projections. It is also shown that an arbitrary discrete set which is Q-convex along the horizontal and vertical directions and consists of several components is decomposable. As a consequence of decomposability we get that in a subclass of $hv$-convex discrete sets the reconstruction from four projections can also be solved in polynomial time. Possible extensions of our method are also discussed.

@ARTICLE{Balazs2007a,
AUTHOR = {Peter Balazs},
JOURNAL = {Image and Vision Computing},
TITLE = {A decomposition technique for reconstructing discrete sets from four projections},
YEAR = {2007},
PAGES = {1609--1619},
VOLUME = {25},
NUMBER = {10},
PUBLISHER = {Elsevier},
}

5. Endre Katona. Contour line thinning and multigrid generation of raster-based digital elevation models. International Journal of Geographical Information Science, 21:71-82, 2007. [PDF]
Abstract: Thin plate spline interpolation is a widely used approach to generate a digital elevation model (DEM) from contour lines and scattered data. In practice, contour maps are scanned and vectorized, and after resampling in the target grid resolution, interpolation is performed. In this paper we demonstrate the limited accuracy of this process, and propose a high-resolution processing method (without vectorization) that ensures maximum utilization of the information in the source data. First, we discuss the mathematical background of thin plate spline interpolation, and explain the multigrid relaxation principle used to speed up convergence. We then show why fine-tuning is necessary, especially when contour lines and elevation points are processed at the same time. Finally, we describe our own contour-thinning method, which significantly reduces elevation bias.

@ARTICLE{Katona2007,
AUTHOR = {Endre Katona},
JOURNAL = {International Journal of Geographical Information Science},
TITLE = {Contour line thinning and multigrid generation of raster-based digital elevation models},
YEAR = {2007},
PAGES = {71--82},
VOLUME = {21},
}
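The relaxation idea behind this paper can be sketched in a few lines. The following is a minimal membrane (first-order Laplace) relaxation, not the thin plate (biharmonic) spline of the paper, and the multigrid acceleration is omitted; the toy "contour" data are hypothetical:

```python
import numpy as np

def relax_dem(grid, known_mask, iters=2000):
    """Fill a DEM by iterative membrane (Laplace) relaxation.

    grid: 2D array with known elevations where known_mask is True;
    other cells are initial guesses. A simplified sketch: the paper
    uses thin plate splines with multigrid acceleration instead.
    """
    z = grid.astype(float).copy()
    for _ in range(iters):
        # Jacobi step: average of 4 neighbours, edges via padding.
        p = np.pad(z, 1, mode='edge')
        avg = (p[:-2, 1:-1] + p[2:, 1:-1]
               + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        z = np.where(known_mask, z, avg)  # keep contour samples fixed
    return z

# Hypothetical toy data: two "contour" rows at elevations 10 and 20.
dem = np.zeros((9, 9))
mask = np.zeros((9, 9), dtype=bool)
dem[1, :], mask[1, :] = 10.0, True
dem[7, :], mask[7, :] = 20.0, True
filled = relax_dem(dem, mask)
```

Between the two fixed contours the relaxed surface becomes a linear ramp, which already illustrates the slow convergence that motivates the multigrid scheme discussed in the paper.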

## Conference articles

1. Peter Balazs. Generation and empirical investigation of hv-convex discrete sets. In Bjarne K. Ersboll and Kim S. Pedersen, editors, Proceedings of the Scandinavian Conference on Image Analysis, volume 4522 of Lecture Notes in Computer Science, Aalborg, Denmark, pages 344-353, June 2007. Springer Verlag. [PDF]
Abstract: One of the basic problems in discrete tomography is the reconstruction of discrete sets from few projections. Assuming that the set to be reconstructed fulfils some geometrical properties is a commonly used technique to reduce the number of possibly many different solutions of the same reconstruction problem. Since the reconstruction from two projections in the class of so-called $hv$-convex sets is NP-hard, this class is suitable for testing the efficiency of newly developed reconstruction algorithms. However, until now no method was known to generate sets of this class from a uniform random distribution, and thus only ad hoc comparison of several reconstruction techniques was possible. In this paper we first describe a method to generate some special $hv$-convex discrete sets from a uniform random distribution. Moreover, we show that the developed generation technique can easily be adapted to other classes of discrete sets, even to the whole class of $hv$-convex sets. Several statistics are also presented which are of great importance in the analysis of algorithms for reconstructing $hv$-convex sets.

@INPROCEEDINGS{Balazs2007,
AUTHOR = {Peter Balazs},
BOOKTITLE = {Proceedings of the Scandinavian Conference on Image Analysis},
TITLE = {Generation and empirical investigation of hv-convex discrete sets},
YEAR = {2007},
EDITOR = {Bjarne K. Ersboll and Kim S. Pedersen},
MONTH = {June},
PAGES = {344--353},
PUBLISHER = {Springer Verlag},
SERIES = {Lecture Notes in Computer Science},
VOLUME = {4522},
}

2. Gyorgy Bekes, Marta Fidrich, Laszlo G. Nyul, Attila Kuba, and Eors Mate. 3D segmentation of liver, kidneys and spleen from CT images. In Proceedings of the International Conference on Computer Assisted Radiology and Surgery (CARS), volume 2 of International Journal of Computer Assisted Radiology and Surgery, Berlin, Germany, pages S45-S47, June 2007. Springer Verlag. [WWW] [PDF] [doi:10.1007/s11548-007-0083-7]
Abstract: Clinicians often need to segment the abdominal organs for radiotherapy planning. Manual segmentation of these organs is very time-consuming, therefore automated methods are desired. We developed a semi-automatic segmentation method to outline the liver, spleen and kidneys. It works on CT images without contrast intake that are acquired with a routine clinical protocol. From an initial surface around a user-defined seed point, the segmentation of the organ is obtained by an active surface algorithm. Pre- and post-processing steps are used to adapt the general method to specific organs. The evaluation results show that the accuracy of our method is about 90%, which can be further improved with little manual editing, and that its precision is slightly higher than that of manual contouring. Our method is accurate, precise and fast enough to be used in clinical practice.

@INPROCEEDINGS{Bekes2007,
AUTHOR = {Gyorgy Bekes and Marta Fidrich and Laszlo G. Nyul and Attila Kuba and Eors Mate},
BOOKTITLE = {Proceedings of the International Conference on Computer Assisted Radiology and Surgery (CARS)},
TITLE = {3D segmentation of liver, kidneys and spleen from CT images},
YEAR = {2007},
MONTH = {June},
NUMBER = {Suppl 1},
PAGES = {S45--S47},
PUBLISHER = {Springer Verlag},
SERIES = {International Journal of Computer Assisted Radiology and Surgery},
VOLUME = {2},
DOI = {10.1007/s11548-007-0083-7},
}

3. Csaba Benedek, Tamas Sziranyi, Zoltan Kato, and Josiane Zerubia. A Multi-Layer MRF Model for Object-Motion Detection in Unregistered Airborne Image-Pairs. In Proceedings of the International Conference on Image Processing, volume VI, San Antonio, Texas, USA, pages 141-144, September 2007. IEEE. [PDF]
Abstract: In this paper, we give a probabilistic model for automatic change detection on airborne images taken with moving cameras. To ensure robustness, we adopt an unsupervised coarse matching instead of a precise image registration. The challenge for the proposed model is to eliminate from the difference image the registration errors, noise, and the parallax artifacts caused by static objects of considerable height (buildings, trees, walls, etc.). We describe the background membership of a given image point through two different features, and introduce a novel three-layer Markov Random Field (MRF) model to ensure connected homogeneous regions in the segmented image.

@INPROCEEDINGS{Benedek-etal2007,
AUTHOR = {Csaba Benedek and Tamas Sziranyi and Zoltan Kato and Josiane Zerubia},
BOOKTITLE = {Proceedings of the International Conference on Image Processing},
TITLE = {A Multi-Layer MRF Model for Object-Motion Detection in Unregistered Airborne Image-Pairs},
YEAR = {2007},
ADDRESS = {San Antonio, Texas, USA},
MONTH = {September},
ORGANIZATION = {IEEE},
PAGES = {141--144},
PUBLISHER = {IEEE},
VOLUME = {VI},
}
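For readers unfamiliar with MRF-based segmentation, a toy single-layer example may help. This sketch uses Iterated Conditional Modes with a binary Potts prior, which is far simpler than the paper's three-layer model and unsupervised matching; all data and parameter values here are illustrative:

```python
import numpy as np

def icm_potts(obs, beta=1.0, iters=5):
    """Iterated Conditional Modes for a binary Potts MRF.

    obs: 2D array of per-pixel foreground evidence in [0, 1].
    Data term: squared distance of evidence to the label;
    smoothness term: penalty beta per disagreeing 4-neighbour.
    A single-layer toy, not the paper's three-layer model.
    """
    labels = (obs > 0.5).astype(int)
    h, w = obs.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best, best_e = labels[y, x], np.inf
                for lab in (0, 1):
                    e = (obs[y, x] - lab) ** 2  # data term
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            e += beta * (lab != labels[ny, nx])
                    if e < best_e:
                        best, best_e = lab, e
                labels[y, x] = best
    return labels

# Illustrative data: a bright square with one noisy pixel inside.
obs = np.full((8, 8), 0.1)
obs[2:6, 2:6] = 0.9
obs[3, 3] = 0.2
labels = icm_potts(obs, beta=1.0)
```

The smoothness term flips the noisy interior pixel back to foreground, which is the "connected homogeneous regions" effect the abstract refers to, here in its simplest possible form.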

4. Rudiger Bock, Jorg Meier, Georg Michelson, Laszlo G. Nyul, and Joachim Hornegger. Classifying Glaucoma with Image-based Features from Fundus Photographs. In F. A. Hamprecht, C. Schnorr, and B. Jähne, editors, Proceedings of the Annual Symposium of the German Association for Pattern Recognition (DAGM), volume 4713 of Lecture Notes in Computer Science, Heidelberg, Germany, pages 355-364, 2007. Springer Verlag. [PDF]
Abstract: Glaucoma is one of the most common causes of blindness, and it is becoming even more important considering the ageing society. Because dead retinal nerve fibers cannot heal, early detection and prevention are essential. Robust, automated mass-screening will help to extend the symptom-free life of affected patients. We devised a novel, automated, appearance-based glaucoma classification system that does not depend on segmentation-based measurements. Our purely data-driven approach is applicable in large-scale screening examinations. It applies a standard pattern recognition pipeline with a 2-stage classification step. Several types of image-based features were analyzed and are combined to capture glaucomatous structures. Certain disease-independent variations such as illumination inhomogeneities, size differences, and vessel structures are eliminated in the preprocessing phase. The "vessel-free" images and intermediate results of the methods are novel representations of the data for the physicians that may provide new insight into and help to better understand glaucoma. Our system achieves an 86% success rate on a data set containing a mixture of 200 real images of healthy and glaucomatous eyes. The performance of the system is comparable to human medical experts in detecting glaucomatous retina fundus images.

@INPROCEEDINGS{Bock:2007:CGI,
AUTHOR = {Rudiger Bock and Jorg Meier and Georg Michelson and Laszlo G. Nyul and Joachim Hornegger},
BOOKTITLE = {Proceedings of the Annual Symposium of the German Association for Pattern Recognition (DAGM)},
TITLE = {Classifying Glaucoma with Image-based Features from Fundus Photographs},
YEAR = {2007},
EDITOR = {F. A. Hamprecht and C. Schnorr and B. Jähne},
PAGES = {355--364},
PUBLISHER = {Springer Verlag},
SERIES = {Lecture Notes in Computer Science},
VOLUME = {4713},
}

5. Balazs Erdohelyi, Endre Varga, and Attila Kuba. Surgical Planning Tool with Biomechanical Simulation. In Proceedings of the International Conference on Computer Assisted Radiology and Surgery (CARS), volume 2 of International Journal of Computer Assisted Radiology and Surgery, Berlin, Germany, pages S262-S263, 2007. [PDF] [doi:10.1007/s11548-007-0098-0]
Abstract: The fixation of fractured bones often requires very careful decision making, and the operation has to be planned precisely. A computer-assisted system can help the surgeon in the planning phase to increase surgical accuracy. This paper introduces a software tool to plan a surgical intervention and to calculate the biomechanical stability of the plan. The proposed system provides 3D movement and rotation of the bone fragments and the insertion of fixation screws and plates. Finite element analysis is used to calculate the mechanical stability of the surgical plan. Using these results, the surgeon is able to see the weak points of the fixation before the surgery, and can even try several surgical plans to pick the most promising one.

@INPROCEEDINGS{Erdohelyi2007,
AUTHOR = {Balazs Erdohelyi and Endre Varga and Attila Kuba},
BOOKTITLE = {Proceedings of the International Conference on Computer Assisted Radiology and Surgery (CARS)},
TITLE = {Surgical Planning Tool with Biomechanical Simulation},
YEAR = {2007},
NUMBER = {Suppl. 1},
PAGES = {S262--S263},
SERIES = {International Journal of Computer Assisted Radiology and Surgery},
VOLUME = {2},
DOI = {10.1007/s11548-007-0098-0},
}

6. Peter Horvath. A Multispectral Data Model for Higher-Order Active Contours and its Application to Tree Crown Extraction. In Wilfried Philips, Dan Popescu, and Paul Scheunders, editors, Proceedings of the Advanced Concepts for Intelligent Vision Systems, volume 4678 of Lecture Notes in Computer Science, Delft, Netherlands, pages 200-211, August 2007. [PDF] [doi:10.1007/978-3-540-74607-2_18]
Abstract: Forestry management makes great use of statistics concerning the individual trees making up a forest, but the acquisition of this information is expensive. Image processing can potentially both reduce this cost and improve the statistics. The key problem is the delineation of tree crowns in aerial images. The automatic solution of this problem requires considerable prior information to be built into the image and region models. Our previous work has focused on including shape information in the region model; in this paper we examine the image model. The aerial images involved have three bands. We study the statistics of these bands, and construct both multispectral and single-band image models. We combine these with a higher-order active contour model of a "gas of circles" in order to include prior shape information about the region occupied by the tree crowns in the image domain. We compare the results produced by these models on real aerial images and conclude that multiple bands improve the quality of the segmentation. The model has many other potential applications, e.g., to nanotechnology, microbiology, physics, and medical imaging.

@INPROCEEDINGS{Horvath07c,
AUTHOR = {Peter Horvath},
BOOKTITLE = {Proceedings of the Advanced Concepts for Intelligent Vision Systems},
TITLE = {A Multispectral Data Model for Higher-Order Active Contours and its Application to Tree Crown Extraction},
YEAR = {2007},
EDITOR = {Wilfried Philips and Dan Popescu and Paul Scheunders},
MONTH = {August},
PAGES = {200--211},
SERIES = {Lecture Notes in Computer Science},
VOLUME = {4678},
DOI = {10.1007/978-3-540-74607-2_18},
}

7. Peter Horvath and Ian Jermyn. A 'gas of Circles' Phase Field Model and its Application to Tree Crown Extraction. In Marek Domanski, Ryszard Stasinski, and Maciej Bartkowiak, editors, Proceedings of the European Signal Processing Conference (EUSIPCO), Poznan, Poland, September 2007. [PDF]
Abstract: The problem of extracting the region in the image domain corresponding to an a priori unknown number of circular objects occurs in several domains. We propose a new model of a "gas of circles", the ensemble of regions in the image domain composed of circles of a given radius. The model uses the phase field reformulation of higher-order active contours (HOACs). Phase fields possess several advantages over contour and level set approaches to region modelling, in particular for HOAC models. The reformulation allows us to benefit from these advantages without losing the strengths of the HOAC framework. Combined with a suitable likelihood energy, and applied to the tree crown extraction problem, the new model shows markedly improved performance, both in quality of results and in computation time, which is two orders of magnitude less than that of the HOAC level set implementation.

@INPROCEEDINGS{Horvath07d,
AUTHOR = {Peter Horvath and Ian Jermyn},
BOOKTITLE = {Proceedings of the European Signal Processing Conference (EUSIPCO)},
TITLE = {A 'gas of Circles' Phase Field Model and its Application to Tree Crown Extraction},
YEAR = {2007},
EDITOR = {Marek Domanski and Ryszard Stasinski and Maciej Bartkowiak},
MONTH = {September},
}

8. Peter Horvath and Ian Jermyn. A New Phase Field Model of a 'gas of Circles' for Tree Crown Extraction from Aerial Images. In Walter G. Kropatsch, Martin Kampel, and Allan Hanbury, editors, Proceedings of the International Conference on Computer Analysis of Images and Patterns, volume 4673 of Lecture Notes in Computer Science, Vienna, Austria, pages 702-709, August 2007. [PDF] [doi:10.1007/978-3-540-74272-2_87]
Abstract: We describe a model for tree crown extraction from aerial images, a problem of great practical importance for the forestry industry. The novelty lies in the prior model of the region occupied by tree crowns in the image, which is a phase field version of the higher-order active contour inflection point "gas of circles" model. The model combines the strengths of the inflection point model with those of the phase field framework: it removes the "phantom circles" produced by the original "gas of circles" model, while executing two orders of magnitude faster than the contour-based inflection point model. The model has many other areas of application, e.g., to imagery in nanotechnology, biology, and physics.

@INPROCEEDINGS{Horvath07b,
AUTHOR = {Peter Horvath and Ian Jermyn},
BOOKTITLE = {Proceedings of the International Conference on Computer Analysis of Images and Patterns},
TITLE = {A New Phase Field Model of a 'gas of Circles' for Tree Crown Extraction from Aerial Images},
YEAR = {2007},
EDITOR = {Walter G. Kropatsch and Martin Kampel and Allan Hanbury},
MONTH = {August},
PAGES = {702--709},
SERIES = {Lecture Notes in Computer Science},
VOLUME = {4673},
DOI = {10.1007/978-3-540-74272-2_87},
}

9. Jorg Meier, Rudiger Bock, Georg Michelson, Laszlo G. Nyul, and Joachim Hornegger. Effects of Preprocessing Eye Fundus Images on Appearance Based Glaucoma Classification. In W. G. Kropatsch, M. Kampel, and A. Hanbury, editors, Proceedings of the International Conference on Computer Analysis of Images and Patterns, volume 4673 of Lecture Notes in Computer Science, Vienna, Austria, pages 165-172, 2007. Springer Verlag. [PDF]
Abstract: Early detection of glaucoma is essential for preventing one of the most common causes of blindness. Our research is focused on a novel automated classification system based on image features from fundus photographs which does not depend on structure segmentation or prior expert knowledge. Our new data-driven approach, which needs no manual assistance, achieves an accuracy in detecting glaucomatous retina fundus images comparable to human experts. In this paper, we study image preprocessing methods to provide better input for more reliable automated glaucoma detection. We reduce disease-independent variations without removing information that discriminates between images of healthy and glaucomatous eyes. In particular, nonuniform illumination is corrected, blood vessels are inpainted, and the region of interest is normalized before feature extraction and subsequent classification. The effect of these steps was evaluated using principal component analysis for dimension reduction and a support vector machine as classifier.

@INPROCEEDINGS{Meier:2007:EPE,
AUTHOR = {Jorg Meier and Rudiger Bock and Georg Michelson and Laszlo G. Nyul and Joachim Hornegger},
BOOKTITLE = {Proceedings of the International Conference on Computer Analysis of Images and Patterns},
TITLE = {Effects of Preprocessing Eye Fundus Images on Appearance Based Glaucoma Classification},
YEAR = {2007},
EDITOR = {W. G. Kropatsch and M. Kampel and A. Hanbury},
PAGES = {165--172},
PUBLISHER = {Springer Verlag},
SERIES = {Lecture Notes in Computer Science},
VOLUME = {4673},
}
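The dimension-reduction step mentioned at the end of the abstract can be illustrated generically. This sketch reproduces only the standard PCA projection via SVD, not the paper's features, inpainting, or SVM settings; the random data are purely illustrative:

```python
import numpy as np

def pca_project(X, k):
    """Project feature vectors onto the top-k principal components.

    X: (n_samples, n_features) matrix. A generic sketch of PCA-based
    dimension reduction; the paper's actual feature vectors and
    classifier configuration are not reproduced here.
    """
    Xc = X - X.mean(axis=0)                         # centre the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                            # k-D scores

# Illustrative random "feature vectors" standing in for image features.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
Z = pca_project(X, 3)
```

The reduced vectors Z would then be fed to a classifier (the paper uses a support vector machine) instead of the raw high-dimensional image features.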

10. Kalman Palagyi. A 3-Subiteration Surface-Thinning Algorithm. In Walter G. Kropatsch, Martin Kampel, and Allan Hanbury, editors, Proceedings of the International Conference on Computer Analysis of Images and Patterns, volume 4673 of Lecture Notes in Computer Science, Vienna, Austria, pages 628-635, August 2007. Springer Verlag. [PDF]
Abstract: Thinning is an iterative, layer-by-layer erosion for extracting skeletons. This paper presents an efficient parallel 3D thinning algorithm which produces medial surfaces. A three-subiteration strategy is proposed: the thinning operation is changed from iteration to iteration with a period of three, according to the three deletion directions.

@INPROCEEDINGS{PalagyiCAIP2007,
AUTHOR = {Kalman Palagyi},
BOOKTITLE = {Proceedings of the International Conference on Computer Analysis of Images and Patterns},
TITLE = {A 3-Subiteration Surface-Thinning Algorithm},
YEAR = {2007},
EDITOR = {Walter G. Kropatsch and Martin Kampel and Allan Hanbury},
MONTH = {August},
PAGES = {628--635},
PUBLISHER = {Springer Verlag},
SERIES = {Lecture Notes in Computer Science},
VOLUME = {4673},
}

11. Kalman Palagyi. A Subiteration-Based Surface-Thinning Algorithm with a Period of Three. In Fred A. Hamprecht, Christoph Schnorr, and Bernd Jähne, editors, Proceedings of the Annual Symposium of the German Association for Pattern Recognition (DAGM), volume 4713 of Lecture Notes in Computer Science, Heidelberg, Germany, pages 294-303, September 2007. Springer Verlag. [PDF]
Abstract: Thinning on binary images is an iterative, layer-by-layer erosion until only the "skeletons" of the objects are left. This paper presents an efficient parallel 3D surface-thinning algorithm. A three-subiteration strategy is proposed: the thinning operation is changed from iteration to iteration with a period of three, according to the three deletion directions.

@INPROCEEDINGS{PalagyiDAGM2007,
AUTHOR = {Kalman Palagyi},
BOOKTITLE = {Proceedings of the Annual Symposium of the German Association for Pattern Recognition (DAGM)},
TITLE = {A Subiteration-Based Surface-Thinning Algorithm with a Period of Three},
YEAR = {2007},
EDITOR = {Fred A. Hamprecht and Christoph Schnorr and Bernd Jähne},
MONTH = {September},
PAGES = {294-303},
PUBLISHER = {Springer Verlag},
SERIES = {Lecture Notes in Computer Science},
VOLUME = {4713},
}


Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.