A leitura como um processo cognitivo
Figueiredo, Olívia Maria
1999-01-01
Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods.
This paper proposes an efficient implementation of an unsupervised linear unmixing method on GPUs using CUDA. The method finds the smallest simplex by solving a sequence of nonsmooth c...
This paper addresses the estimation of surfaces from a set of 3D points using the unified framework described in [1]. This framework proposes the use of competitive learning for curve estimation, i.e., a set of points is defined on a deformable curve and they all compete to represent the available data. This paper extends the use of the unified framework to surface estimation. It is shown that competitive learning performs better than snakes, improving the model performance in the presence of concavities and making it possible to discriminate close surfaces. The proposed model is evaluated in this paper using synthetic data and medical images (MRI and ultrasound images).
Hyperspectral imaging has become one of the main topics in remote sensing applications. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GBs per flight. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems involved in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-hungry, which compromises their use in applic...
This paper addresses the estimation of object boundaries from a set of 3D points. An
extension of the constrained clustering algorithm developed by Abrantes and Marques in the
context of edge linking is presented. The object surface is approximated using rectangular
meshes and simplex nets. Centroid-based forces are used for attracting the model nodes
towards the data, using competitive learning methods. It is shown that competitive learning
improves the model performance in the presence of concavities and makes it possible to
discriminate close surfaces. The proposed model is evaluated using synthetic data and medical images
(MRI and ultrasound images).
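The centroid-based attraction with competitive learning described in the abstract above can be sketched as follows. This is a minimal 2D illustration under my own assumptions (function names, step size, and circle data are all hypothetical), not the authors' implementation:

```python
import numpy as np

def competitive_centroid_step(nodes, data, step=0.5):
    """One update of a deformable model: each data point is claimed by
    its nearest model node (competitive learning), and each node is
    attracted towards the centroid of the data it won."""
    # Assign every data point to its closest node.
    d2 = ((data[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
    winner = d2.argmin(axis=1)
    new_nodes = nodes.copy()
    for k in range(len(nodes)):
        claimed = data[winner == k]
        if len(claimed):                 # only move nodes that won data
            centroid = claimed.mean(axis=0)
            new_nodes[k] += step * (centroid - nodes[k])
    return new_nodes

rng = np.random.default_rng(0)
# Noisy samples of a unit circle (stand-in for object-boundary points).
theta = rng.uniform(0, 2 * np.pi, 400)
data = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(400, 2))
# Deformable model initialised far from the data.
phi = np.linspace(0, 2 * np.pi, 20, endpoint=False)
nodes = 3.0 * np.c_[np.cos(phi), np.sin(phi)]
for _ in range(50):
    nodes = competitive_centroid_step(nodes, data)
# The nodes should now lie close to the unit circle.
print(np.abs(np.linalg.norm(nodes, axis=1) - 1.0).max())
```

In this sketch the competition is a simple nearest-node assignment; the papers above embed the same centroid forces in rectangular meshes and simplex nets rather than a closed 2D contour.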
This communication addresses the estimation of object surfaces from a set of three-dimensional points using active models. An extension of the unified framework developed by Abrantes and Marques is proposed. The surface is discretised using two types of nets: rectangular meshes and simplex nets. The unified framework is based on computing the centroids of the data in the neighbourhood of predefined samples of the deformable surface; the surface points are attracted towards the centroids. The paper reviews the basic concepts of active surface modelling, the unified framework, and simplex nets.
The described models are tested using synthetic and real data obtained from ultrasound and magnetic resonance images.
Hyperspectral instruments have been incorporated in satellite missions, providing
large amounts of data of high spectral resolution of the Earth surface. This data can be used
in remote sensing applications that often require a real-time or near-real-time response. To
avoid delays between hyperspectral image acquisition and its interpretation, the latter usually
done at a ground station, onboard systems have emerged to process data, reducing the volume
of information to transfer from the satellite to the ground station. For this purpose, compact
reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely
used. This paper proposes an FPGA-based architecture for hyperspectral unmixing. The method
is based on vertex component analysis (VCA) and works without a dimensionality reduction
preprocess...
This paper is an elaboration of the DECA algorithm [1] to blindly unmix hyperspectral data. The underlying mixing model is linear, meaning that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. The proposed method, like DECA, is tailored to highly mixed data, for which the geometry-based approaches fail to identify the simplex of minimum volume enclosing the observed spectral vectors. We therefore resort to a statistical framework, where the abundance fractions are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum.
With respect to DECA, we introduce two improvements: 1) the number of Dirichlet modes is inferred based on the minimum description length (MDL...
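The appeal of Dirichlet densities in the abstract above is that their samples automatically satisfy the physical constraints on abundances. A hedged sketch, with illustrative mode weights and concentration parameters of my own choosing (not DECA's fitted values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical Dirichlet modes over 3 endmember fractions; weights and
# concentration parameters below are illustrative only.
modes = [np.array([8.0, 2.0, 2.0]), np.array([2.0, 2.0, 8.0])]
weights = np.array([0.6, 0.4])

def sample_abundances(n):
    """Draw abundance vectors from a mixture of Dirichlet densities."""
    which = rng.choice(len(modes), size=n, p=weights)
    return np.array([rng.dirichlet(modes[k]) for k in which])

a = sample_abundances(1000)
# Every sample is non-negative and sums to one, i.e. the acquisition-process
# constraints hold by construction.
print(a.min() >= 0.0, np.allclose(a.sum(axis=1), 1.0))  # True True
```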
Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
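The two facts the VCA abstract exploits can be demonstrated with a toy linear mixing model. The sketch below is not the published VCA algorithm, only its core observation: a linear functional over convex combinations is maximised at a simplex vertex, so an extreme projection picks out an endmember when pure pixels exist. All signatures and sizes here are made up:

```python
import numpy as np

rng = np.random.default_rng(2)

# Three hypothetical endmember signatures over 5 spectral bands.
M = rng.uniform(0.1, 1.0, size=(5, 3))
# Mixed pixels: convex combinations of the endmembers (linear mixing model),
# with one pure pixel forced in for each endmember.
A = rng.dirichlet(np.ones(3), size=200).T   # abundances, 3 x 200
A[:, :3] = np.eye(3)                        # pure pixels
Y = M @ A                                   # observed pixels, 5 x 200

# One VCA-flavoured step: project all pixels onto a random direction; the
# extreme projection lies at a vertex of the simplex, i.e. at an endmember.
w = rng.normal(size=5)
candidate = Y[:, (w @ Y).argmax()]
# The candidate coincides with one of the columns of M.
print(min(np.linalg.norm(M[:, j] - candidate) for j in range(3)))  # 0.0
```

The full algorithm iterates this idea with projections orthogonal to the endmembers already found, which is where its low computational complexity comes from.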
Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA is founded on two assumptions: 1) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the corresponding abundance fractions (sources); 2) the sources are statistically independent. Independent factor analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. Concerning hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible, and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated, since the sum of abundance fractions associated with each pixel is constant due to physical constraints in the data acquisi...
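The violation of the independence assumption stated above is easy to exhibit numerically: because the fractions must sum to one, when one grows the others must shrink, so their pairwise covariances are negative rather than zero. A small sketch (the Dirichlet concentration parameter is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Abundance fractions constrained to sum to one (Dirichlet-distributed here).
a = rng.dirichlet(np.ones(3) * 2.0, size=5000)

# Independent sources would have near-zero off-diagonal covariance; the
# sum-to-one constraint instead forces negative pairwise covariances.
cov = np.cov(a.T)
print(cov[0, 1] < 0, cov[0, 2] < 0)  # True True
```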
Proceedings of International Conference - SPIE 7477, Image and Signal Processing for Remote Sensing XV - 28 September 2009
