
‣ Identificação da correlação entre as características das imagens de documentos e os impactos na fidelidade visual em função da taxa de compressão.; Identification of correlation between the characteristics of document images and its impact in visual fidelity in function of compression rate.

Tsujiguchi, Vitor Hitoshi
Source: Biblioteca Digital de Teses e Dissertações da USP Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Master's Dissertation Format: application/pdf
Published on 11/10/2011 Portuguese
Search Relevance
57.909824%
Document images are digitized documents with textual content. These documents are composed of characters and layout and share common characteristics, such as the presence of borders and limits in the shape of each character. This work analyzes the relationship between the characteristics of document images and the impact of the compression process on visual fidelity. Objective metrics are employed to analyze the characteristics of document images, such as the image activity measure (IAM) in the spatial (pixel) domain and the spectral activity measure (SAM) in the spectral domain. The performance of image compression techniques based on the discrete cosine transform (DCT) and on the discrete wavelet transform (DWT) is evaluated on document images by applying different compression levels with each technique. The experiments are carried out on digital images of printed and handwritten documents from books and periodicals, covering texts written between the 16th and 19th centuries. This material was collected from the Brasiliana Digital library (www.brasiliana.usp.br), in Brazil. Experimental results indicate that the activity measures in the spatial and spectral domains directly influence the visual fidelity of the compressed images for both the DCT- and DWT-based techniques. For a fixed compression rate of an image compressed with both techniques...
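
The abstract above refers to an image activity measure (IAM) in the pixel domain and a spectral activity measure (SAM). The sketch below illustrates one plausible pair of such metrics: IAM as the mean absolute difference between neighbouring pixels, and a spectral-activity proxy as the share of 2-D DCT energy outside the DC coefficient. These definitions are assumptions for illustration; the dissertation may define IAM and SAM differently.

```python
# Sketch (not the dissertation's exact definitions): a spatial image activity
# measure and a simple spectral activity proxy for a grayscale image.
import numpy as np

def iam(img: np.ndarray) -> float:
    """Mean absolute difference between horizontal and vertical neighbours."""
    img = img.astype(np.float64)
    dh = np.abs(np.diff(img, axis=1)).sum()   # horizontal activity
    dv = np.abs(np.diff(img, axis=0)).sum()   # vertical activity
    return (dh + dv) / img.size

def dct2(img: np.ndarray) -> np.ndarray:
    """2-D DCT-II via separable orthonormal transform matrices."""
    def dct_matrix(k):
        c = np.sqrt(2.0 / k) * np.cos(np.pi * (2 * np.arange(k)[None, :] + 1)
                                      * np.arange(k)[:, None] / (2 * k))
        c[0, :] = np.sqrt(1.0 / k)
        return c
    n, m = img.shape
    return dct_matrix(n) @ img.astype(np.float64) @ dct_matrix(m).T

def spectral_activity(img: np.ndarray) -> float:
    """Share of DCT energy outside the DC term (0 = flat, up to 1 = all detail)."""
    coeffs = dct2(img)
    total = (coeffs ** 2).sum()
    return 1.0 - coeffs[0, 0] ** 2 / total if total > 0 else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smooth = np.tile(np.linspace(0, 255, 64), (64, 1))       # low activity
    textured = rng.integers(0, 256, (64, 64)).astype(float)  # high activity
    print(iam(smooth), iam(textured))
    print(spectral_activity(smooth), spectral_activity(textured))
```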

‣ Projeto de arquiteturas integradas para a compressão de imagens JPEG; Design of architectures for jpeg image compression

Agostini, Luciano Volcan
Source: Universidade Federal do Rio Grande do Sul Publisher: Universidade Federal do Rio Grande do Sul
Type: Dissertation Format: application/pdf
Portuguese
Search Relevance
68.033755%
This dissertation presents the design of architectures for JPEG compression: a JPEG compressor for grayscale images, a JPEG compressor for color images, and an RGB-to-YCbCr color space converter. The architectures are presented in detail, having been fully described in VHDL, with synthesis targeted at FPGAs of Altera's Flex10KE family. The integrated JPEG compressor architecture for grayscale images has a minimum latency of 237 clock cycles and processes a 640x480-pixel image in 18.5 ms, giving a processing rate of 54 images per second. Estimates of the resulting compression rate indicate approximately 6.2 times, or 84%. The integrated JPEG compressor architecture for color images was derived by adapting the grayscale compressor architecture. It also has a minimum latency of 237 clock cycles and processes a 640x480-pixel color image in 54.4 ms, giving a processing rate of 18.4 images per second. The compression rate obtained, according to estimates...
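
The converter mentioned above maps RGB to YCbCr before JPEG coding. Below is a floating-point sketch of the standard JFIF (ITU-R BT.601 full-range) conversion; the dissertation's hardware performs the equivalent computation in fixed point on an FPGA, so this only illustrates the arithmetic, not the architecture.

```python
# Sketch of the RGB -> YCbCr colour-space conversion used ahead of JPEG
# compression (JFIF / ITU-R BT.601 full-range coefficients).
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """rgb: H x W x 3 array of uint8 values; returns H x W x 3 YCbCr (uint8)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.clip(np.stack([y, cb, cr], axis=-1).round(), 0, 255).astype(np.uint8)

if __name__ == "__main__":
    pixel = np.array([[[200, 120, 40]]], dtype=np.uint8)  # a single orange pixel
    print(rgb_to_ycbcr(pixel))  # luma plus blue/red chroma offsets around 128
```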

‣ Compressão de imagens digitais combinando técnicas wavelet e wedgelet no ambiente de comunicações móveis; Digital image compression combining wavelet and wedgelet techniques in mobile communication environment

Ricardo Barroso Leite
Source: Biblioteca Digital da Unicamp Publisher: Biblioteca Digital da Unicamp
Type: Master's Dissertation Format: application/pdf
Published on 07/07/2011 Portuguese
Search Relevance
67.46016%
Advances in telecommunications and the development of digital devices have driven several research areas related to image coding and compression. Among them, applications for mobile devices (cell phones, smartphones, iPhones, iPads, among others) stand out, characterized by low data transmission rates. However, images encoded with current state-of-the-art standards exhibit characteristic visual artifacts, such as blocking and ringing. To overcome the inability of orthogonal transforms to handle geometry, the literature proposes the use of wedgelet dictionaries and of cartoon-texture decomposition. In this context, a novel hybrid wedgelet-wavelet coding method is proposed that preserves cartoon and texture components, surpassing the visual quality achieved with isolated dictionaries and approaching the performance of complete coding systems such as the JPEG 2000 standard. The performance gains, mainly in the visual quality of the reconstructed images at low data rates, indicate that the presented methodology could be incorporated into transmission systems with bandwidth constraints...

‣ Fractal coding based on image local fractal dimension

Conci, Aura; Aquino, Felipe R.
Source: Sociedade Brasileira de Matemática Aplicada e Computacional Publisher: Sociedade Brasileira de Matemática Aplicada e Computacional
Type: Journal Article Format: text/html
Published on 01/04/2005 Portuguese
Search Relevance
58.07898%
Fractal codification of images is based on self-similar and self-affine sets. The codification process consists of constructing an operator that will represent the image to be encoded. If a complicated picture can be represented by an operator, then it can be transmitted or stored very efficiently; clearly, this has many applications in data compression. The great disadvantage of the automatic form of fractal compression is its encoding time. Most of the time spent constructing such an operator is due to finding the best match between parts of the image to be encoded. However, since the conception of automatic fractal image compression, research on improving the compression time has been widespread. This work provides a new idea for decreasing the encoding time: a classification of image parts based on their local fractal dimension. The idea is implemented in two steps. First, a preprocessing analysis of the image identifies the complexity of each image block by computing its dimension. Then, only parts within the same range of complexity are used when testing for the best self-affine pairs, reducing the compression time. The performance of this proposition is compared with other fractal image compression methods. The points considered are image fidelity...
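
A rough sketch of the classification idea described above, using the differential box-counting (DBC) estimator of local fractal dimension and binning blocks by complexity. The paper does not specify these details here, so the estimator, block size, and bin boundaries are assumptions for illustration.

```python
# Sketch: classify image blocks by an estimate of their local fractal
# dimension, so that only blocks in the same complexity class are compared
# during fractal encoding.
import numpy as np

def dbc_fractal_dimension(block: np.ndarray, gray_levels: int = 256) -> float:
    """Differential box-counting dimension of a square grayscale block."""
    block = block.astype(np.float64)
    m = block.shape[0]
    sizes, counts = [], []
    s = 2
    while s <= m // 2:
        h = s * gray_levels / m            # box height along the intensity axis
        n_r = 0
        for i in range(0, m, s):
            for j in range(0, m, s):
                cell = block[i:i + s, j:j + s]
                n_r += int(cell.max() // h) - int(cell.min() // h) + 1
        sizes.append(m / s)                # 1 / r
        counts.append(n_r)
        s *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return slope

def classify_blocks(img: np.ndarray, block: int = 16, n_bins: int = 4):
    """Group block coordinates by fractal-dimension range (complexity class)."""
    dims = {}
    for i in range(0, img.shape[0] - block + 1, block):
        for j in range(0, img.shape[1] - block + 1, block):
            dims[(i, j)] = dbc_fractal_dimension(img[i:i + block, j:j + block])
    edges = np.linspace(2.0, 3.0, n_bins + 1)   # DBC dimension lies in [2, 3]
    return {k: int(np.clip(np.digitize(d, edges) - 1, 0, n_bins - 1))
            for k, d in dims.items()}   # only same-class blocks are matched
```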

‣ Distributed Image Compression in Camera Networks

Wagner, Raymond
Source: Rice University Publisher: Rice University
Type: Doctoral Thesis
Portuguese
Search Relevance
67.597886%
Masters Thesis; Dense networks of wireless, battery-powered sensors are now feasible thanks to recent hardware advances, but key issues such as power consumption plague widespread deployment. Fortunately, in a dense network of sensors, cross-sensor correlation can be exploited to reduce the communication power consumption. In this thesis, we examine a novel technique for distributed image compression in sensor networks. First, sensors are allowed to share low-bandwidth descriptors of their fields of view as image feature points, allowing sensors to identify a common region of overlap. The region is then compressed via spatial downsampling, and image super-resolution techniques are employed at the receiver to reconstruct an original-resolution estimate of the common area from the set of low-resolution sensor images. We demonstrate the feasibility of such an algorithm via a prototype implementation, and we evaluate the effectiveness of the proposed technique using a set of real sensor images gathered with an off-the-shelf digital camera.

‣ Geometric Tools for Image Compression

Wakin, Michael; Romberg, Justin; Choi, Hyeokho; Baraniuk, Richard G.
Source: Rice University Publisher: Rice University
Type: Conference paper
Portuguese
Search Relevance
67.817773%
Conference Paper; Images typically contain strong geometric features, such as edges, that impose a structure on pixel values and wavelet coefficients. Modeling the joint coherent behavior of wavelet coefficients is difficult, and standard image coders fail to fully exploit this geometric regularity. We introduce wedgelets as a geometric tool for image compression. Wedgelets offer piecewise-linear approximations of edge contours and can be efficiently encoded. We describe the fundamental challenges that arise when applying such a tool to image compression. To meet these challenges, we also propose an efficient rate-distortion framework for natural image compression using wedgelets.
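
A minimal sketch of a wedgelet fit, assuming a plain exhaustive search over discretised edge angles and offsets and a mean-value model for each of the two regions; practical wedgelet coders use a structured dictionary and a rate-distortion criterion rather than raw MSE.

```python
# Sketch: approximate a block by two constant regions separated by a straight
# edge (a wedgelet), searching over a small set of edge angles and offsets.
import numpy as np

def fit_wedgelet(block, n_angles=32, n_offsets=16):
    """Return (best_mask, approximation, mse) for a two-region wedgelet."""
    block = block.astype(np.float64)
    h, w = block.shape
    yy, xx = np.mgrid[0:h, 0:w]
    yy = yy - (h - 1) / 2.0
    xx = xx - (w - 1) / 2.0
    best = (None, None, np.inf)
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        proj = xx * np.cos(theta) + yy * np.sin(theta)   # signed distance axis
        for off in np.linspace(proj.min(), proj.max(), n_offsets):
            mask = proj <= off
            if mask.all() or (~mask).all():
                continue
            approx = np.where(mask, block[mask].mean(), block[~mask].mean())
            mse = ((block - approx) ** 2).mean()
            if mse < best[2]:
                best = (mask, approx, mse)
    return best

if __name__ == "__main__":
    # Synthetic block with a vertical step edge: dark left half, bright right half.
    yy, xx = np.mgrid[0:16, 0:16]
    block = np.where(xx < 8, 40, 200).astype(np.float64)
    _, approx, mse = fit_wedgelet(block)
    print(f"wedgelet MSE: {mse:.2f}")   # zero for a clean axis-aligned step edge
```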

‣ Image Compression using Multiscale Geometric Edge Models

Wakin, Michael
Source: Rice University Publisher: Rice University
Type: Thesis; Text
Portuguese
Search Relevance
67.745195%
Masters Thesis; Edges are of particular interest for image compression, as they communicate important information, contribute large amounts of high-frequency energy, and can generally be described with few parameters. Many of today's most competitive coders rely on wavelets to transform and compress the image, but modeling the joint behavior of wavelet coefficients along an edge presents a distinct challenge. In this thesis, we examine techniques for exploiting the simple geometric structure which captures edge information. Using a multiscale wedgelet decomposition, we present methods for extracting and compressing a cartoon sketch containing the significant edge information, and we discuss practical issues associated with coding the residual textures. Extending these techniques, we propose a rate-distortion optimal framework (based on the Space-Frequency Quantization algorithm) using wedgelets to capture geometric information and wavelets to describe the rest. At low bitrates, this method yields compressed images with sharper edges and lower mean-square error.

‣ Perceptual Criteria on Image Compression

Moreno Escobar, Jesús Jaime
Source: [Barcelona]: Universitat Autònoma de Barcelona Publisher: [Barcelona]: Universitat Autònoma de Barcelona
Type: Electronic theses and dissertations; info:eu-repo/semantics/doctoralThesis; info:eu-repo/semantics/publishedVersion Format: application/pdf
Published in 2011 Portuguese
Search Relevance
58.257246%
Nowadays digital images are used in many areas of our everyday life, but they tend to be ever larger. This growth in information leads to the problem of storing them. For example, a color pixel is commonly represented with 24 bits, with the red, green, and blue channels stored in 8 bits each. Such a color pixel can therefore represent one of 2^24 ≈ 16.78 million colors, so a 512 × 512 image at 24 bits per pixel occupies 786,432 bytes. This is why compression is important. An important characteristic of image compression is that it can be lossy or lossless. An image is acceptable as long as the losses of image information are not perceived by the eye; this is possible by assuming that a portion of this information is redundant. Lossless image compression is defined as mathematically decoding the same image that was encoded. Lossy image compression requires identifying two characteristics: redundancy and irrelevance of information. Lossy compression thus modifies the image data in such a way that, when they are encoded and decoded...
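
A quick check of the storage arithmetic quoted above (the only addition is the exact value of 2^24):

```latex
% Verification of the uncompressed-storage figures in the abstract.
\[
2^{24} = 16\,777\,216 \approx 16.78\ \text{million colours},
\qquad
512 \times 512\ \text{pixels} \times \frac{24\ \text{bits}}{8\ \text{bits/byte}} = 786\,432\ \text{bytes}.
\]
```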

‣ A Fast Fractal Image Compression Method Based Entropy

Hassaballah, M.; Makky, M. M.; Mahdy, Youssef B.
Source: Universitat Autònoma de Barcelona Publisher: Universitat Autònoma de Barcelona
Type: Journal Article Format: application/pdf
Published in 2005 Portuguese
Search Relevance
67.46016%
Fractal image compression offers some desirable properties, such as resolution independence, fast decoding, and very competitive rate-distortion curves, but it still suffers from a (sometimes very) high encoding time, depending on the approach used. This paper presents a method to reduce the encoding time of this technique by reducing the size of the domain pool based on the entropy value of each domain block. Experimental results on standard images show that the proposed method yields superior performance over conventional fractal encoding.
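
A sketch of the entropy-based domain-pool reduction described in the abstract: compute the first-order entropy of each domain block and, for a given range block, keep only domains with a similar entropy. The block representation and the 0.5-bit tolerance are illustrative assumptions, not values from the paper.

```python
# Sketch: shrink the fractal-coding domain pool by entropy similarity.
import numpy as np

def block_entropy(block: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy (bits/pixel) of the block's gray-level histogram."""
    hist = np.bincount(block.astype(np.uint8).ravel(), minlength=levels)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def reduced_domain_pool(domains, range_block, tol=0.5):
    """Keep only domain blocks whose entropy is within `tol` bits of the range block's."""
    target = block_entropy(range_block)
    return [d for d in domains if abs(block_entropy(d) - target) <= tol]
```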

‣ DNA Microarray Image Compression

Hernández-Cabronero, Miguel
Source: [Barcelona]: Universitat Autònoma de Barcelona Publisher: [Barcelona]: Universitat Autònoma de Barcelona
Type: Electronic theses and dissertations; info:eu-repo/semantics/doctoralThesis; info:eu-repo/semantics/publishedVersion Format: application/pdf
Published in 2015 Portuguese
Search Relevance
58.04828%
DNA microarray experiments generate two monochrome images, which it is convenient to store so that more precise analyses can be carried out in the future. Image compression therefore emerges as a particularly useful tool to minimize the costs associated with storing and transmitting these images. This thesis aims to improve the state of the art in DNA microarray image compression. As part of this thesis, a detailed investigation of the characteristics of DNA microarray images has been carried out. Experimental results indicate that compression algorithms not adapted to this type of image yield rather poor results because of the characteristics of these images. By analyzing first-order and conditional entropies, an approximate limit on the lossless compressibility of these images has been determined. Although context-based and segmentation-based compression provide modest improvements over generic compression algorithms, breakthrough advances in the field of data compression appear necessary to exceed 2:1 ratios on most of these images. Before the start of this thesis, several lossless compression algorithms with performance close to the aforementioned optimal limit had been proposed. However...
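
The abstract mentions estimating a lossless-compressibility bound from first-order and conditional entropies. The sketch below shows one way such estimates can be computed from histograms; it assumes 8-bit pixels to keep the joint histogram small, whereas microarray images are typically 16-bit, and the thesis's exact estimators may differ.

```python
# Sketch: first-order entropy and conditional entropy H(current | left neighbour)
# as rough lower bounds (bits/pixel) on lossless compression of an 8-bit image.
import numpy as np

def first_order_entropy(img: np.ndarray) -> float:
    """img: 2-D uint8 array."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def conditional_entropy_left(img: np.ndarray) -> float:
    """H(X | left neighbour) from the joint histogram of horizontal pixel pairs."""
    left, cur = img[:, :-1].ravel(), img[:, 1:].ravel()
    joint = np.zeros((256, 256), dtype=np.float64)
    np.add.at(joint, (left, cur), 1.0)
    pj = joint / joint.sum()
    pl = pj.sum(axis=1, keepdims=True)          # marginal of the left neighbour
    nz = pj > 0
    return float(-(pj[nz] * np.log2(pj[nz] / np.broadcast_to(pl, pj.shape)[nz])).sum())
```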

‣ A Study of trellis coded quantization for image compression

Panchapakesan, Kannan
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Portuguese
Search Relevance
67.542266%
Trellis coded quantization has recently evolved as a powerful quantization technique in the world of lossy image compression. The aim of this thesis is to investigate the potential of trellis coded quantization in conjunction with two of the most popular image transforms today: the discrete cosine transform and the discrete wavelet transform. Trellis coded quantization is compared with traditional scalar quantization. The 4-state and the 8-state trellis coded quantizers are compared in an attempt to come up with a quantifiable difference in their performances. The use of pdf-optimized quantizers for trellis coded quantization is also studied. Results for the simulations performed on two gray-scale images at an uncoded bit rate of 0.48 bits/pixel are presented by way of reconstructed images and the respective peak signal-to-noise ratios. It is evident from the results obtained that trellis coded quantization outperforms scalar quantization in both the discrete cosine transform and the discrete wavelet transform domains. The reconstructed images suggest that there is no considerable gain in going from a 4-state to an 8-state trellis coded quantizer. Results also suggest that considerable gain can be had by employing pdf-optimized quantizers for trellis coded quantization instead of uniform quantizers.
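
Results in this thesis are reported as peak signal-to-noise ratios. For reference, a minimal PSNR implementation for 8-bit images follows (the standard definition, not code from the thesis).

```python
# Sketch: PSNR in dB between an original and a reconstructed 8-bit image.
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two images of identical shape."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```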

‣ Evaluation of digital image compression algorithms for use on lap top computers

Brower, Bernard V.
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Portuguese
Search Relevance
67.848145%
A technique for the evaluation of image compression algorithms was developed. This technique was then applied in the evaluation of six image compression algorithms (ARIDPCM, ISO/JPEG DCT, zonal DCT, proprietary wavelet, proprietary sub-band coding and the proprietary DCT). Of the six algorithms evaluated, the Wavelet algorithm performed the best on average in image quality at all bit rates. The JPEG DCT was concluded to be the most useful algorithm because of its performance and the advantages that come with being an international standard.

‣ Genetic algorithm and tabu search approaches to quantization for DCT-based image compression

Champion, Michael
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Portuguese
Search Relevance
58.029785%
Today there are several formal and experimental methods for image compression, some of which have grown to be incorporated into the Joint Photographic Experts Group (JPEG) standard. Of course, many compression algorithms are still used only for experimentation, mainly due to various performance issues. Lack of speed while compressing or expanding an image, a poor compression rate, and poor image quality after expansion are a few of the most common reasons for skepticism about a particular compression algorithm. This paper discusses current methods used for image compression. It also gives a detailed explanation of the discrete cosine transform (DCT), used by JPEG, and the efforts that have recently been made to optimize related algorithms. Some interesting articles regarding possible compression enhancements are noted, and in association with these methods a new implementation of a JPEG-like image coding algorithm is outlined. This new technique involves adapting between one and sixteen quantization tables for a specific image using either a genetic algorithm (GA) or tabu search (TS) approach. First, a few schemes including pixel neighborhood and Kohonen self-organizing map (SOM) algorithms are examined to find their effectiveness at classifying blocks of edge-detected image data. Next...

‣ A VHDL design for hardware assistance of fractal image compression

Erickson, Andrew
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Portuguese
Search Relevance
68.07898%
Fractal image compression schemes have several unusual and useful attributes, including resolution independence, high compression ratios, good image quality, and rapid decompression. Despite this, one major difficulty has prevented their widespread adoption: the extremely high computational complexity of compression. Fractal image compression algorithms represent an image as a series of contractive transformations, each of which maps a large domain block to a smaller range block. Given only this set of transformations, it is possible to reconstruct an approximation of the original image by iteratively applying the transformations to an arbitrary image. Compression consists of partitioning the image into range blocks and finding a suitable transformation of a domain block to represent each one. This search for transformations must generally be done using a brute force approach, comparing successive domain blocks until a suitable match is found. Some algorithmic improvements have been found, but none are adequate to reduce the required compression time to something reasonable for many uses. This thesis presents a new ASIC design which performs a large number of the required comparisons in parallel, yielding a substantial speedup over a program on a general-purpose computer system. This ASIC is designed in VHDL...
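
A software sketch of the brute-force domain-range search that the ASIC accelerates: each range block is compared against every candidate domain block (averaged down 2:1), fitting the affine map range ≈ s·domain + o by least squares. Block sizes, the search step, and the omission of the eight block isometries (rotations/flips) are simplifications for illustration.

```python
# Sketch: exhaustive domain-range matching at the core of fractal encoding.
import numpy as np

def downsample2(block: np.ndarray) -> np.ndarray:
    """Average 2x2 neighbourhoods to halve the block in each dimension."""
    return block.reshape(block.shape[0] // 2, 2, block.shape[1] // 2, 2).mean(axis=(1, 3))

def best_domain_match(img, r_i, r_j, r_size=8, step=8):
    """Find the domain block and affine parameters (s, o) minimising the match error."""
    img = img.astype(np.float64)
    rng = img[r_i:r_i + r_size, r_j:r_j + r_size].ravel()
    d_size = 2 * r_size
    best = (None, None, None, np.inf)
    for i in range(0, img.shape[0] - d_size + 1, step):
        for j in range(0, img.shape[1] - d_size + 1, step):
            dom = downsample2(img[i:i + d_size, j:j + d_size]).ravel()
            a = np.vstack([dom, np.ones_like(dom)]).T
            (s, o), *_ = np.linalg.lstsq(a, rng, rcond=None)
            err = np.sum((s * dom + o - rng) ** 2)
            if err < best[3]:
                best = ((i, j), s, o, err)
    return best   # hardware such as the ASIC evaluates many (i, j) candidates in parallel
```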

‣ An Investigation into the effect of the compression algorithm in the presence of shifting and cropping on image quality

Thonggoom, Ornsiri
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Portuguese
Search Relevance
58.134463%
Image compression is a form of data compression frequently used in digital imaging applications to improve the efficiency of transmission and storage. In order to obtain a manageable file size, the data compression is lossy, i.e. image information is lost. Lossy compression algorithms such as baseline JPEG use the insensitivity of the human eye to high-frequency information as a basis for the compression. Discrete cosine transforms (DCT) are performed on the data, followed by variable quantization, and the image blocking follows a fixed grid that is not adjusted to the spatial content of the image. Using this technique, information is lost and the decompressed image is a distorted version of the original. It is known that simple repeated compressions do not influence image quality, which is determined by the initial compression. With practical use in mind, this study extends the analysis of JPEG operation to repeated DCT block rearrangement operations within the JPEG baseline scheme. In particular, image placement and translation are discussed. Objective and subjective analytical results are presented. It is shown that these operations do affect overall image quality. The JPEG image quality error tends to increase with the number of repeated DCT rearrangement operations...
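
For context, a sketch of the baseline-JPEG block operations the study manipulates: level shift, 8×8 2-D DCT, quantisation with the standard luminance table, and the inverse path. Shifting the image before recompression changes how pixels fall into these 8×8 blocks, which is the effect examined above.

```python
# Sketch: one 8x8 block through the lossy part of the baseline JPEG pipeline.
import numpy as np

# Standard JPEG luminance quantisation table (Annex K of the JPEG standard).
Q_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=np.float64)

def dct_matrix(n=8):
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * np.arange(n)[None, :] + 1)
                                  * np.arange(n)[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix()

def jpeg_block_roundtrip(block: np.ndarray) -> np.ndarray:
    """Forward DCT, quantise, dequantise, inverse DCT for one 8x8 uint8 block."""
    shifted = block.astype(np.float64) - 128.0          # level shift
    coeffs = C @ shifted @ C.T                          # forward 2-D DCT
    quantised = np.round(coeffs / Q_LUMA)               # lossy step
    restored = C.T @ (quantised * Q_LUMA) @ C           # dequantise + inverse DCT
    return np.clip(restored + 128.0, 0, 255).round().astype(np.uint8)
```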

‣ Binary image compression using run length encoding and multiple scanning techniques

Merkl, Frank J.
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Portuguese
Search Relevance
68.029785%
While run length encoding is a popular technique for binary image compression, a raster (line by line) scanning technique is almost always assumed, and scant attention has been given to the possibility of using other techniques to scan an image as it is encoded. This thesis looks at five different image scanning techniques and how their relationship to image features and scanning density (resolution) affects the overall compression that can be achieved with run length encoding. This thesis also compares the performance of run length encoding with an application of Huffman coding for binary image compression. To realize these goals, a complete system of computer routines, the Image Scanning and Compression (ISC) System, has been developed and is now available for continued research in the area of binary image compression.
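
A sketch of run-length encoding driven by interchangeable scan orders, in the spirit of the thesis: the usual raster scan plus a boustrophedon ("snake") scan as one alternative ordering. The other scans studied in the thesis, and the Huffman-coding comparison, are not reproduced here.

```python
# Sketch: run-length encode a binary image under different scan orders.
import numpy as np

def raster_scan(img: np.ndarray) -> np.ndarray:
    return img.ravel()

def snake_scan(img: np.ndarray) -> np.ndarray:
    """Left-to-right on even rows, right-to-left on odd rows."""
    rows = [row if i % 2 == 0 else row[::-1] for i, row in enumerate(img)]
    return np.concatenate(rows)

def run_length_encode(bits: np.ndarray):
    """Encode a 1-D binary sequence as (first_symbol, [run lengths])."""
    bits = np.asarray(bits).astype(np.uint8)
    change = np.flatnonzero(np.diff(bits)) + 1          # positions where the value flips
    bounds = np.concatenate(([0], change, [bits.size]))
    return int(bits[0]), np.diff(bounds).tolist()

if __name__ == "__main__":
    img = np.zeros((4, 8), dtype=np.uint8)
    img[1:3, 1:6] = 1                                   # a small filled rectangle
    for name, scan in (("raster", raster_scan), ("snake", snake_scan)):
        first, runs = run_length_encode(scan(img))
        print(name, first, runs, "->", len(runs), "runs")
```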

‣ Image compression using noncausal prediction

Marchand, J. F. Philippe
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Dissertation
Portuguese
Search Relevance
67.90672%
Image compression commonly is achieved using prediction of the value of pixels from surrounding pixels. Normally the choice of pixels used in the prediction is restricted to previously scanned pixels. A better prediction can be achieved if pixels on all sides of the pixel to be predicted are used. A prediction and decoding method is proposed that is independent of scanning order of the image. The decoding process makes use of an iterative decoder. A sequence of images is generated that converges to a final image that is identical to the original image. The theory underlying noncausal prediction and iterative decoding is developed. Convergence properties of the decoding algorithm are studied and conditions for convergence are presented. Distortions to the prediction residual after encoding can be caused by storage requirements, such as quantization and compression and also by errors in transmission. Effects of distortions of the residual on the final decoded image are investigated by introducing several types of distortion of the residual, including (1) alteration of randomly selected bits in the residual, (2) addition of a sinusoidal signal to the residual, (3) quantization of the residual and (4) compression of the residual using lossy Haar wavelet coding. The resulting distortion in the decoded images was generally less for noncausal prediction than for causal prediction...

‣ Modeling and synthesis of the HD photo compression algorithm

Groder, Seth
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Portuguese
Search Relevance
57.90067%
The primary goal of this thesis is to implement the HD Photo encoding algorithm in hardware using Verilog HDL. The HD Photo algorithm is relatively new, offers several advantages over other continuous-tone still image compression algorithms, and is currently under review by the JPEG committee to become the next JPEG standard, JPEG XR. HD Photo was chosen to become the next JPEG standard because it has a computationally light domain-change transform, achieves high compression ratios, and offers several other improvements, such as its ability to support a wide variety of pixel formats. HD Photo's compression algorithm has an image path similar to that of baseline JPEG but differs in a few key areas: instead of a discrete cosine transform, HD Photo uses a lapped biorthogonal transform, and it adds adaptive coefficient prediction and scanning stages that help furnish high compression ratios at lower implementation costs. In this thesis, the HD Photo compression algorithm is implemented in Verilog HDL, and three key stages are further synthesized with Altera's Quartus II design suite targeting a Stratix III FPGA. Several images are used to compare quality and speed between HD Photo and the current JPEG standard, using the HD Photo plug-in for Adobe's Photoshop CS3. The compression ratio compared to the current baseline JPEG standard is about 2x, so an image of the same quality can be stored in half the space. Performance metrics are derived from the Quartus II synthesis results. These are approximately 108...

‣ 0.35um implementation of an experimental mixed signal image compression circuit

Divatia, Reema
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis
Portuguese
Search Relevance
67.649404%
Switched-current is an analog, discrete-time signal processing technique that is fully compatible with any digital CMOS technology. This means that analog circuits can be realized together with digital components on a single chip without any additional technological processes. In designs implemented using the switched-current technique, the individual circuit elements interact by means of currents, which makes it possible to reduce voltage swings and thus power consumption. This work investigated the implementation of a low-power mixed-signal image compression system in TSMC 0.35um technology. The major components of this system were a two-dimensional discrete cosine transform processor, an analog-to-digital converter, a quantizer, and an entropy encoder. The discrete cosine transform section was implemented using the switched-current technique. The digital part, consisting of the quantizer, entropy encoder, and control unit, was modelled in VHDL and then synthesized into standard cells.

‣ Contrast-detail analysis of the effect of image compression on computed tomographic images

Cook, Larry; Cox, Glendon; Insana, Michael; McFadden, Michael; Hall, Timothy; Gaborski, Roger; Lure, Fleming
Source: International Society for Optical Engineering (SPIE) Publisher: International Society for Optical Engineering (SPIE)
Type: Proceedings Format: 892197 bytes; application/pdf
Portuguese
Search Relevance
57.925605%
Three compression algorithms were compared by using contrast-detail (CD) analysis. Two phantoms were designed to simulate computed tomography (CT) scans of the head. The first was based on CT scans of a plastic cylinder containing water. The second was formed by combining a CT scan of a head with a scan of the water phantom; the soft tissue of the brain was replaced by a subimage containing only water. The compression algorithms studied were the full-frame discrete cosine transform (FDCT) algorithm, the Joint Photographic Experts Group (JPEG) algorithm, and a wavelet algorithm. Both the wavelet and JPEG algorithms affected regions of the image near the boundary of the skull. The FDCT algorithm propagated false edges throughout the region interior to the skull. The wavelet algorithm affected the images less than the other compression algorithms. The presence of the skull especially affected observer performance on the FDCT-compressed images. All of the findings demonstrated a flattening of the CD curve for large lesions. The results of a compression study using lossy compression algorithms are dependent on the characteristics of the image and the nature of the diagnostic task. Because of the high-density bone of the skull, head CT images present a much more difficult compression problem than chest x-rays. We found no significant differences among the CD curves for the tested compression algorithms. Copyright 1996 Society of Photo-Optical Instrumentation Engineers. This paper was published in Proceedings of SPIE Volume 2712 Medical Imaging 1996: Image Perception...