Scientific Production



Presentation
22/06/2015

Lanczos Bidiagonalization Method for Parallel 3-D Gravity Inversion - Application to Basement Relief Definition
We present an efficient parallel algorithm for the inversion of 3-D gravity data, whose goal is to estimate the depth of a sedimentary basin in which the density contrast varies parabolically with depth. The efficiency of gravity inversion methods applied to the interpretation of sedimentary basins depends on the number of data and model parameters to be estimated, and degrades severely when the number of parameters is very large. We present simulation results with a synthetic model of a sedimentary basin inspired by a real situation, taking advantage of a parallel Levenberg-Marquardt algorithm implemented using both MPI and OpenMP. The Lanczos bidiagonalization method is used to obtain the solution of the linearized subproblem at each iteration. Obtaining the solution of a large system of equations through the bidiagonalization procedure is quite useful in practical problems and makes it easy to implement selection methods for the optimal regularization parameter, such as the weighted generalized cross-validation method adopted in this work. The hybrid parallel implementation combined with Lanczos bidiagonalization allows us to achieve a significant reduction of the computational cost, which is otherwise very high due to the scale of the problem.
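As an illustration of the kind of solver described above, the following is a minimal, single-core sketch of Golub-Kahan (Lanczos) bidiagonalization used to solve a Tikhonov-regularized linear subproblem. The test matrix, the number of steps k, and the regularization parameter lam are placeholders; the parallel MPI/OpenMP machinery and the weighted GCV parameter selection of the paper are not reproduced here.

    # Sketch: hybrid Golub-Kahan (Lanczos) bidiagonalization for
    # min ||A x - b||^2 + lam^2 ||x||^2 (generic illustration only).
    import numpy as np

    def golub_kahan(A, b, k):
        """k steps of Golub-Kahan bidiagonalization of A with starting vector b."""
        m, n = A.shape
        U = np.zeros((m, k + 1)); V = np.zeros((n, k))
        alphas = np.zeros(k); betas = np.zeros(k + 1)
        betas[0] = np.linalg.norm(b)
        U[:, 0] = b / betas[0]
        v_prev = np.zeros(n)
        for j in range(k):
            w = A.T @ U[:, j] - betas[j] * v_prev
            alphas[j] = np.linalg.norm(w)
            V[:, j] = w / alphas[j]
            u = A @ V[:, j] - alphas[j] * U[:, j]
            betas[j + 1] = np.linalg.norm(u)
            U[:, j + 1] = u / betas[j + 1]
            v_prev = V[:, j]
        # lower-bidiagonal matrix B: alphas on the diagonal, betas below it
        B = np.zeros((k + 1, k))
        for j in range(k):
            B[j, j] = alphas[j]
            B[j + 1, j] = betas[j + 1]
        return U, V, B, betas[0]

    def regularized_solve(A, b, k=30, lam=1e-2):
        """Solve the projected Tikhonov problem in the Krylov subspace."""
        U, V, B, beta0 = golub_kahan(A, b, k)
        rhs = np.zeros(k + 1); rhs[0] = beta0
        # min ||B y - beta0 e1||^2 + lam^2 ||y||^2, solved by stacking
        Baug = np.vstack([B, lam * np.eye(k)])
        rhs_aug = np.concatenate([rhs, np.zeros(k)])
        y, *_ = np.linalg.lstsq(Baug, rhs_aug, rcond=None)
        return V @ y

    # usage on a small, artificially ill-conditioned system (placeholder data)
    A = np.random.randn(200, 120) @ np.diag(np.logspace(0, -6, 120))
    b = A @ np.random.randn(120) + 1e-4 * np.random.randn(200)
    x_est = regularized_solve(A, b, k=40, lam=1e-3)

Because the regularization acts on the small projected problem, the parameter lam can be re-selected cheaply for a fixed Krylov basis, which is what makes criteria such as weighted GCV easy to apply in this setting.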
Presentation
22/06/2015

Remigration-trajectory Time-migration Velocity Analysis in Regions with Strong Velocity Variations
Remigration trajectories describe the position of an image point in the image domain for different source-receiver offsets as a function of the migration velocity. They can be used for prestack time-migration velocity analysis by determining kinematic migration parameters, which, in turn, allow the velocity model to be corrected locally. The main advantage of this technique is that it takes the reflection-point displacement in the midpoint direction into account, thus allowing for a moveout correction for a single reflection point at all offsets of a common image gather (CIG). We have tested the feasibility of the method on synthetic data from three simple models and on the Marmousoft data. Our tests show that the proposed tool increases the velocity-model resolution and provides a plausible time-migrated image, even in regions with strong velocity variations. Most of the effort was spent on event picking, which is critical to the method.
Presentation
22/06/2015

A Wavefront-propagation Strategy for Time-to-depth Conversion
We present a strategy for time-to-depth conversion and velocity estimation based only on image-wavefront propagation. It has two main features: (1) it computes the velocity field and the traveltime directly, avoiding the ray-tracing step; and (2) it requires only knowledge of the image wavefront at the previous time step. As a consequence, our method tends to be faster than the usual techniques and does not carry the constraints and limitations inherent to common ray-tracing strategies. We have tested the feasibility of the method on the original Marmousi velocity model and on two smoothed versions of it. Moreover, we migrated the Marmousi data set using the estimated depth velocity models. Our results indicate that the present strategy can be used to construct starting models for velocity-model building in depth migration and/or tomographic methods.
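As a loose illustration of marching a wavefront forward without ray tracing, the sketch below advances each point of a discretized front along its local normal by v*dt. The velocity function, step size and discretization are assumptions for demonstration only; this is not the authors' time-to-depth conversion scheme.

    # Sketch: Huygens-type normal stepping of a discretized wavefront (illustrative only).
    import numpy as np

    def step_wavefront(points, v, dt):
        """points: (n, 2) array of (x, z) positions, ordered along the front."""
        tang = np.gradient(points, axis=0)                # tangents along the front
        tang /= np.linalg.norm(tang, axis=1, keepdims=True)
        normal = np.column_stack([-tang[:, 1], tang[:, 0]])  # rotate tangent by 90 degrees
        normal[normal[:, 1] < 0] *= -1.0                  # force normals to point into depth
        speed = v(points[:, 0], points[:, 1])
        return points + dt * speed[:, None] * normal

    # usage: propagate a flat initial front through an assumed v(z) gradient medium
    v = lambda x, z: 1500.0 + 0.5 * z                     # placeholder velocity in m/s
    front = np.column_stack([np.linspace(0.0, 4000.0, 201), np.zeros(201)])
    for _ in range(100):                                  # 100 steps of dt = 4 ms
        front = step_wavefront(front, v, dt=0.004)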
Presentation
22/06/2015

Time-frequency Decomposition and Q-estimation Using Complex Filters
Two ideas are presented in this paper. First, we develop an analytic extension of a time-frequency decomposition, whose amplitude is a high-resolution time-frequency decomposition that produces very tight energy peaks around the instantaneous frequency, and whose phase is a high-precision, structured representation of the frequency content over the signal's entire bandwidth. Second, we build upon this signal representation by developing a Q-factor estimation method that balances the amplitude and phase information content of the complex time-frequency decomposition. This estimator uses a propagator based on the Kolsky-Futterman formalism, which has a real part associated with attenuation and an imaginary part associated with dispersion, both of which are Q-dependent. The two methods are matched to take advantage of both the amplitude and the phase information of the time-frequency distribution (TFD). We apply both methods to a synthetic seismic trace and to real marine data. In the synthetic example, the instantaneous frequency and the Q-factor are determined successfully. The phase of the TFD reveals the instantaneous frequency with great sharpness in both the synthetic and, most markedly, the marine data.
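As a plain single-trace illustration of how the phase of a complex signal representation encodes instantaneous frequency, the sketch below uses the analytic signal obtained from a Hilbert transform. It is not the authors' high-resolution time-frequency decomposition or their Kolsky-Futterman-based Q estimator; the sampling rate and the test chirp are assumptions.

    # Sketch: instantaneous frequency from the phase of the analytic signal.
    import numpy as np
    from scipy.signal import hilbert

    fs = 500.0                                         # assumed sampling rate in Hz
    t = np.arange(0, 2.0, 1.0 / fs)
    trace = np.sin(2 * np.pi * (20 * t + 10 * t**2))   # test chirp: 20 Hz sweeping upward

    analytic = hilbert(trace)                          # complex trace: amplitude + phase
    envelope = np.abs(analytic)                        # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) / (2.0 * np.pi) * fs    # Hz, from the phase derivative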
Presentation
18/11/2014

Detection of diffractions in seismic sections using Support Vector Classifiers
The detection of diffractions is an essential step in diffraction imaging techniques. Because of their smaller amplitudes relative to reflection events, diffraction events are usually treated as noise in standard seismic processing. Diffraction imaging is often used to identify subsurface scattering features with enhanced resolution in comparison to conventional seismic reflection imaging. Several techniques have been presented in the literature for separating diffracted from reflected events. One way is to analyze amplitudes along diffraction time curves in common-offset sections, where it is easier to perceive differences between diffraction and reflection events. Known pattern-recognition methods can be used to separate the events. We analyze the automatic detection of diffraction points using a two-class k-Nearest-Neighbours (kNN) classifier, and we present a routine for the detection of diffractions using Support Vector Machines (SVM). We evaluate the ability of each method to detect scattering features using synthetic seismic models. The results indicate that the kNN method is more robust to noise and to velocity-model variation. On the other hand, the sensitivity of the SVM to the velocity model can be useful in the velocity analysis of scattering events.
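The following minimal sketch shows only the usage pattern of the two classifiers named in the abstract. The feature vectors (e.g., amplitudes sampled along candidate diffraction curves) and the labels are placeholders, not data from the paper.

    # Sketch: kNN and SVM classifiers on placeholder diffraction/non-diffraction features.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    # X: one row per candidate point, e.g. amplitudes along its diffraction curve in a
    # common-offset section; y: 1 = diffraction, 0 = not a diffraction (placeholders).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 32))
    y = rng.integers(0, 2, size=500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)

    print("kNN accuracy:", knn.score(X_te, y_te))
    print("SVM accuracy:", svm.score(X_te, y_te))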
Presentation
17/11/2014

RTM imaging condition using impedance sensitivity kernel combined with Poynting vector
Reverse time migration (RTM) using the cross-correlation imaging condition is always contaminated by low-spatial-frequency artifacts due to the presence of sharp wave-speed contrasts in the velocity model. Different techniques have been used to address this; Laplacian filtering can lead to good results, but it might damage the signal of interest. Recently, it has been observed through numerical examples that RTM images obtained using the impedance sensitivity kernel are much less contaminated by low-frequency artifacts. In this work, we propose to use the impedance sensitivity kernel instead of the conventional cross-correlation RTM imaging condition to attenuate the low-frequency artifacts. Using the impedance sensitivity kernel for the downgoing source wavefield, separated by means of the Poynting vector, we demonstrate through synthetic examples that the resulting RTM images preserve the reflections well and significantly attenuate the backscattered low-frequency noise.
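As an illustration of the wavefield-separation step mentioned above, the sketch below keeps the downgoing part of the source wavefield according to the sign of the vertical component of the acoustic Poynting vector, and combines it with a plain cross-correlation imaging condition. The impedance sensitivity kernel itself is not reproduced; the snapshot arrays and grid spacing are assumptions.

    # Sketch: Poynting-vector separation of the downgoing source wavefield
    # plus a zero-lag cross-correlation image (illustrative only).
    import numpy as np

    def downgoing_part(u_prev, u_curr, dz):
        """Keep samples whose energy flux points downward.
        Acoustic Poynting vector ~ -(du/dt) * grad(u); only its z component is needed."""
        du_dt = u_curr - u_prev                          # time derivative (up to a dt factor)
        du_dz = np.gradient(u_curr, dz, axis=1)          # depth derivative, axis 1 = z
        poynting_z = -du_dt * du_dz
        return np.where(poynting_z > 0.0, u_curr, 0.0)   # z grows downward: flux > 0 is down

    def crosscorr_image(src_snapshots, rcv_snapshots, dz):
        """Image from the downgoing source wavefield times the receiver wavefield,
        summed over time; snapshots are lists of 2-D (x, z) arrays."""
        image = np.zeros_like(src_snapshots[0])
        for n in range(1, len(src_snapshots)):
            s_down = downgoing_part(src_snapshots[n - 1], src_snapshots[n], dz)
            image += s_down * rcv_snapshots[n]
        return image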
Presentation
17/11/2014

Chebyshev expansion applied to the one-step wave extrapolation matrix
A new method of solving the acoustic one-step wave extrapolation matrix is proposed. In our method, the analytical wavefield is separated into its real and imaginary parts, the resulting first-order coupled set of equations is solved by Tal-Ezer's technique, and a Chebyshev expansion is used to approximate the extrapolation operator $e^{A\Delta t}$, where A is an anti-symmetric matrix and the pseudodifferential operator F is computed using the Fourier method. Thus, the proposed numerical algorithm can handle any velocity variation. Its implementation is straightforward and, if an appropriate number of terms of the series expansion is chosen, the method is unconditionally stable and propagates seismic waves free of numerical dispersion. In our method, the number of FFTs is explicitly determined and is a function of the maximum eigenvalue of the matrix A. Numerical modeling examples demonstrate that the proposed method is capable of extrapolating waves in time using a time step up to the Nyquist limit.
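The sketch below illustrates the Chebyshev-Bessel expansion of $e^{A\Delta t}$ acting on a vector for a real anti-symmetric A with known spectral radius R, using the Jacobi-Anger identity and the modified recurrence Q_{k+1} = 2(A/R)Q_k + Q_{k-1}, and checks it against a dense matrix exponential. The small random test matrix is an assumption standing in for the pseudospectral wave-extrapolation operator of the paper.

    # Sketch: truncated Chebyshev-Bessel series for exp(A*dt) @ v, A anti-symmetric.
    import numpy as np
    from scipy.special import jv
    from scipy.linalg import expm

    def cheb_expm_apply(A, v, dt, R, nterms):
        """Approximate exp(A*dt) @ v; R bounds the magnitude of the eigenvalues of A."""
        B = A / R
        q_prev, q_curr = v, B @ v                  # Q_0 v and Q_1 v
        result = jv(0, R * dt) * q_prev + 2.0 * jv(1, R * dt) * q_curr
        for k in range(2, nterms):
            q_next = 2.0 * (B @ q_curr) + q_prev   # modified recurrence for anti-symmetric A
            result += 2.0 * jv(k, R * dt) * q_next
            q_prev, q_curr = q_curr, q_next
        return result

    # check against a dense matrix exponential on a small random anti-symmetric matrix
    rng = np.random.default_rng(1)
    M = rng.normal(size=(50, 50))
    A = M - M.T                                    # anti-symmetric test matrix (assumption)
    R = np.max(np.abs(np.linalg.eigvals(A).imag))  # eigenvalues are purely imaginary
    v = rng.normal(size=50)
    dt = 0.5
    print(np.linalg.norm(cheb_expm_apply(A, v, dt, R, nterms=40) - expm(A * dt) @ v))

Because the Bessel coefficients J_k(R*dt) decay rapidly once k exceeds R*dt, the number of terms (and hence of operator applications, i.e., FFTs in the pseudospectral setting) is controlled by the largest eigenvalue of A, as stated in the abstract.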
Presentation
03/09/2012

GêBR: a free seismic processing interface
There are many freely available and widespread programs for processing seismic data, for example Seismic Un*x, Madagascar, FreeUSP, and SEPlib, among others. All of these packages consist of collections of command-line-oriented programs that are designed to be used in sequence; the data conceptually flow in a pipeline through one program after another. Each program is generally controlled by its own set of command-line options. To take full advantage of such a toolkit, the user must have considerable knowledge beyond general geophysical expertise: shell scripting, process submission and management, and batch queue processing, to name a few. While these skills are useful, they should not be a requirement for seismic data processing.
A suitable graphical user interface could take care of these computational details, allowing the user to focus on the central problem of processing seismic data. This is particularly important during training courses, where the limited duration of the course does not leave time for learning skills that are not essential to the material being taught. A graphical user interface may also boost the uptake of a new program by making it more accessible to users and allowing its easy integration with other programs available within the same interface. These principles have guided the development of GêBR, a graphical user interface to control command-line programs for seismic processing. It permits users to build complex processing flows from predefined modules known as menus. Menus describing new programs can easily be added to the interface, extending its capabilities. GêBR is also designed to be simple, in the sense that a couple of hours is enough to introduce the core features of the interface and allow the user to start working with the seismic data.
Presentation
03/09/2012

Minimum-delay seismic trace decomposition and SVD filtering for seismic reflection enhancement
Spiking deconvolution corrects for the effect of the seismic wavelet, assumed to be minimum delay, by applying an inverse filter to the seismic trace to obtain an estimate of the reflectivity. To compensate for propagation and absorption effects, one may use time-varying deconvolution, in which a different inverse filter is computed and applied for each output sample position. We modify this procedure by estimating a minimum-delay wavelet for each time-sample position of the seismic trace. This gives a decomposition of the seismic trace as a sum of minimum-delay wavelets, each multiplied by a reflectivity coefficient. The reflectivity estimation is a single-trace process that is sensitive to non-white noise and does not take the lateral continuity of reflections into account. We have therefore developed a new data-processing strategy by combining it with adaptive SVD filtering. The SVD filtering is applied to the data in two steps: first, in a sliding spatial window on NMO-corrected CMP or common-shot gathers; then, after local dip estimation and correction, on local patches in common-offset panels. After the SVD filtering, we apply the new reflectivity estimation procedure. The SVD filtering removes noise and improves lateral continuity, while the reflectivity estimation increases the high-frequency content of the data and improves vertical resolution. The new data-processing strategy was successfully applied to land seismic data from the Northeast of Brazil. Improvements in data quality are evident in the prestack data panels, the velocity analysis, and the stacked section.
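The following is a minimal sketch of rank-reduction (SVD) filtering on a single window of traces, keeping the leading singular components that carry laterally coherent energy. The window size, the rank, and the test data are assumptions; the adaptive, dip-corrected two-step scheme of the abstract is not reproduced.

    # Sketch: keep the first `rank` singular components of a (time x trace) panel.
    import numpy as np

    def svd_filter(panel, rank):
        """panel: 2-D array (time samples x traces)."""
        U, s, Vt = np.linalg.svd(panel, full_matrices=False)
        s_kept = np.zeros_like(s)
        s_kept[:rank] = s[:rank]
        return (U * s_kept) @ Vt

    # usage: a flat (NMO-corrected) event common to all traces plus random noise
    rng = np.random.default_rng(2)
    nt, ntr = 400, 48
    wavelet = np.exp(-0.5 * ((np.arange(nt) - 200) / 8.0) ** 2)  # Gaussian pulse
    noisy = np.tile(wavelet[:, None], (1, ntr)) + 0.5 * rng.normal(size=(nt, ntr))
    filtered = svd_filter(noisy, rank=1)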
Presentation
10/08/2012

Migration velocity analysis using residual diffraction moveout in the pre/post-stack depth domain
Diffraction events contain direct information about the medium velocity. In this work, we develop a method for migration-velocity improvement and diffraction localization based on a moveout analysis of over- or undermigrated diffraction events in the depth domain. The method takes an arbitrary initial velocity model as input and provides, as a result, an update to the velocity model and the diffraction locations in the depth domain. The algorithm is based on the focusing of remigration velocity rays traced from incorrectly migrated diffraction curves. These velocity rays are constructed by a ray-tracing-like approach applied to the image-wave equation for velocity continuation. After the diffraction events have been picked in the migrated domain, the method has a very low computational cost, and the diffraction points are located automatically. We demonstrate the feasibility of our method using a synthetic data example.