Scientific Production

**Journal Article**

Error analysis of the spectral element method with Gauss–Lobatto–Legendre points for the acoustic wave equation in heterogeneous media

We present the error analysis of a high-order method for the two-dimensional acoustic wave equation in the particular case of constant compressibility and variable density. The domain discretization is based on the spectral element method with Gauss–Lobatto–Legendre (GLL) collocation points, whereas the time discretization is based on the explicit leapfrog scheme. As usual, GLL points are also employed in the numerical quadrature, so that the mass matrix is diagonal and the resulting algebraic scheme is explicit in time. The analysis provides an a priori estimate which depends on the time step, the element length, and the polynomial degree, generalizing several known results for the wave equation in homogeneous media. Numerical examples illustrate the validity of the estimate under certain regularity assumptions and provide expected error estimates when the medium is discontinuous.
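As a minimal illustration of the quadrature this abstract refers to (generic, not the paper's code): GLL nodes for polynomial degree N are the endpoints ±1 together with the roots of P′_N, with weights w_i = 2 / (N(N+1) P_N(x_i)²); using these same nodes for interpolation and quadrature is what makes the spectral element mass matrix diagonal.

```python
import numpy as np
from numpy.polynomial import legendre as L

def gll_points_weights(N):
    """Gauss-Lobatto-Legendre nodes and weights for polynomial degree N.

    The N+1 nodes are -1, +1, and the roots of P_N'(x); the weights are
    w_i = 2 / (N * (N + 1) * P_N(x_i)^2).
    """
    cN = np.zeros(N + 1)
    cN[N] = 1.0                               # P_N in the Legendre basis
    interior = np.sort(L.legroots(L.legder(cN)))   # roots of P_N'
    x = np.concatenate(([-1.0], interior, [1.0]))
    w = 2.0 / (N * (N + 1) * L.legval(x, cN) ** 2)
    return x, w

# Degree 2: nodes -1, 0, 1 with weights 1/3, 4/3, 1/3.
x, w = gll_points_weights(2)
```

Because the GLL quadrature of products of nodal basis functions vanishes off the diagonal, the mass matrix entries reduce to the weights w_i (times the element Jacobian), so the leapfrog update needs no linear solve.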

**Journal Article**

Estimation of quality factor based on peak frequency-shift method and redatuming operator: Application in real data set

Quality factor estimation and correction are necessary to compensate for the seismic energy dissipated during acoustic-/elastic-wave propagation in the earth. In this process, known as Q filtering in the realm of seismic processing, the main goal is to improve the resolution of the seismic signal, as well as to recover part of the energy dissipated by the anelastic attenuation. We have found a way to improve Q-factor estimation from seismic reflection data. Our methodology is based on the combination of the peak-frequency-shift (PFS) method and the redatuming operator. Our innovation is in the way we correct traveltimes when the medium consists of many layers. In other words, the correction of the traveltime table used in the PFS method is performed using the redatuming operator. This operation, performed iteratively, allows a more accurate estimation of the Q factor layer by layer. Applications to synthetic and real data (Viking Graben) reveal the feasibility of our analysis.
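The core of the PFS idea can be sketched for a Ricker source (a common derivation, not this paper's specific implementation): the attenuated spectrum is f² exp(−f²/f_m²) exp(−πft/Q), and setting d(ln S)/df = 0 at the shifted peak f_p gives Q = π t f_p f_m² / (2(f_m² − f_p²)).

```python
import numpy as np

def q_from_peak_shift(fm, fp, t):
    """Q from the peak-frequency shift of a Ricker wavelet after traveltime t.

    Attenuated Ricker spectrum: S(f) = f^2 exp(-f^2/fm^2) exp(-pi f t / Q).
    d(ln S)/df = 0 at f = fp yields Q = pi t fp fm^2 / (2 (fm^2 - fp^2)).
    """
    return np.pi * t * fp * fm ** 2 / (2.0 * (fm ** 2 - fp ** 2))

# Consistency check: attenuate a Ricker spectrum with a known Q, locate the
# shifted peak numerically, and recover Q from the closed-form expression.
Q_true, fm, t_travel = 80.0, 30.0, 1.0
f = np.arange(0.001, 60.0, 0.001)
spec = f ** 2 * np.exp(-f ** 2 / fm ** 2 - np.pi * f * t_travel / Q_true)
fp = f[np.argmax(spec)]          # peak shifted below fm by attenuation
Q_est = q_from_peak_shift(fm, fp, t_travel)
```

The paper's contribution is in correcting the traveltime t layer by layer with the redatuming operator; the formula above is the single-layer building block.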

**Journal Article**

Limitations of correlation-based redatuming methods

Redatuming aims to correct seismic data for the consequences of an acquisition far from the target. That includes the effects of an irregular acquisition surface and of complex geological structures in the overburden such as strong lateral heterogeneities or layers with low or very high velocity. Interferometric techniques can be used to relocate sources to positions where only receivers are available and have been used to move acquisition geometries to the ocean bottom or transform data between surface–seismic and vertical seismic profiles. Even if no receivers are available at the new datum, the acquisition system can be relocated to any datum in the subsurface to which the propagation of waves can be modeled with sufficient accuracy. By correlating the modeled wavefield with seismic surface data, one can carry the seismic acquisition geometry from the surface closer to geologic horizons of interest. Specifically, we show the derivation and approximation of the one-sided seismic interferometry equation for surface-data redatuming, conveniently using Green's theorem for the Helmholtz equation with density variation. Our numerical examples demonstrate that correlation-based single-boundary redatuming works perfectly in a homogeneous overburden. If the overburden is inhomogeneous, primary reflections from deeper interfaces are still repositioned with satisfactory accuracy. However, in this case artifacts are generated as a consequence of incorrectly redatumed overburden multiples. These artifacts get even worse if the complete wavefield is used instead of the direct wavefield. Therefore, we conclude that correlation-based interferometric redatuming of surface–seismic data should always be applied using direct waves only, which can be approximated with sufficient quality if a smooth velocity model for the overburden is available.
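The correlation step at the heart of this kind of redatuming can be seen in a hypothetical single-trace sketch (illustration only, not the paper's scheme): correlating a modeled direct wave with traveltime t_d (surface to datum) against surface data containing a reflection at t_r produces an event at t_r − t_d, i.e., the reflection as if recorded at the datum.

```python
import numpy as np

nt, dt = 1024, 0.004
t = np.arange(nt) * dt

def spike(delay):
    """Unit impulse at the given traveltime (idealized wavelet)."""
    s = np.zeros(nt)
    s[int(round(delay / dt))] = 1.0
    return s

direct = spike(0.40)     # modeled direct wave, surface -> datum (0.4 s)
surface = spike(1.00)    # surface record: target reflection at 1.0 s

# Cross-correlation in the frequency domain: conj(D(w)) * U(w).
virtual = np.fft.irfft(np.conj(np.fft.rfft(direct)) * np.fft.rfft(surface), nt)
t_new = t[np.argmax(virtual)]    # event repositioned to 1.0 - 0.4 = 0.6 s
```

With impulses the traveltime subtraction is exact; the artifacts discussed in the abstract arise when the correlated wavefield also contains overburden multiples, whose traveltimes are subtracted incorrectly.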

**Journal Article**

Time-to-depth conversion and velocity estimation by image-wavefront propagation

A new strategy for time-to-depth conversion and interval-velocity estimation is based entirely on image-wavefront propagation without the need to follow individual image rays. The procedure has three main features: (1) It computes the velocity field and the traveltime directly, allowing us to dispense with dynamic ray tracing; (2) it requires only the knowledge of the image wavefront at the previous time step; and (3) it inherently smooths the image wavefront, inhibiting the formation of caustics. As a consequence, the method tends to be faster than the usual techniques and does not carry the constraints and limitations inherent to common ray-tracing strategies. Synthetic tests using a Gaussian velocity anomaly as well as the Marmousi velocity model and two smoothed versions of it show the feasibility of the method. A field-data example demonstrates the use of different numerical procedures. Our results indicate that the present strategy can be used to construct reasonable depth-velocity models that can serve as reliable starting models for velocity-model building in depth migration or for tomographic methods.

**Journal Article**

Optimization of the parameters in complex Padé Fourier finite-difference migration

Complex Padé Fourier finite-difference migration is a stable one-way wave-equation technique that allows for better treatment of evanescent modes than its real counterpart, in this way producing fewer artifacts. As for real Fourier finite-difference (FFD) migration, its parameters can be optimized to improve the imaging of steeply dipping reflectors. The dip limitation of the FFD operator depends on the variation of the velocity field. We have developed a wide-angle approximation for the one-way continuation operator by optimizing the Padé coefficients and the most important velocity-dependent parameter. We evaluated the quality of the approximate dispersion relation as a function of the ratio between the model and reference velocities, taking into account the number of terms in the Padé approximation and the branch-cut rotation angle. The optimized parameters are chosen based on the migration results and the computational cost. We found that, by using the optimized parameters, a one-term expansion achieves the highest dip angles. The implementations were validated on the Marmousi data set and SEG/EAGE salt model data.
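A common way to build the complex Padé square root the abstract mentions is to apply the standard real Padé expansion of sqrt(1 + Z) to a branch-cut-rotated variable (the Milinazzo-style construction). The sketch below uses the standard, non-optimized coefficients a_k and b_k; the paper's contribution is precisely to optimize these parameters, which is not reproduced here.

```python
import numpy as np

def complex_pade_sqrt(Z, n_terms=2, alpha=np.radians(45.0)):
    """Approximate sqrt(1 + Z) by an n-term Pade sum with branch-cut rotation.

    Standard (non-optimized) coefficients:
        a_k = 2/(2n+1) * sin^2(k pi/(2n+1)),   b_k = cos^2(k pi/(2n+1)).
    Rotating the branch cut by alpha makes the operator complex, which damps
    evanescent modes (Re(1 + Z) < 0) instead of propagating them unstably.
    """
    Zr = (1.0 + Z) * np.exp(-1j * alpha) - 1.0      # rotated variable
    s = 1.0 + 0j
    for k in range(1, n_terms + 1):
        a = 2.0 / (2 * n_terms + 1) * np.sin(k * np.pi / (2 * n_terms + 1)) ** 2
        b = np.cos(k * np.pi / (2 * n_terms + 1)) ** 2
        s += a * Zr / (1.0 + b * Zr)
    return np.exp(1j * alpha / 2.0) * s

# Propagating regime: close to the true square root.
approx = complex_pade_sqrt(0.2, n_terms=2)   # ~ sqrt(1.2)
# Evanescent regime: the result gains a significant imaginary part (damping).
evan = complex_pade_sqrt(-2.0, n_terms=2)
```

For alpha = 0 the construction reduces to the usual real Padé expansion, which is why the evanescent treatment, and hence the artifact level, hinges on the rotation angle.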

**Journal Article**

Source wavelet and local wave propagation effects on the amplitude-variation-with-offset response of thin-layer models: A physical modeling study

In target layers with thicknesses below the vertical seismic resolution (thin layers), tuning and interference between the wave propagation modes may increase the challenge of amplitude-variation-with-offset (AVO) analysis, because it is difficult to recover the primary PP amplitudes embedded in the data by further seismic data processing. Thus, we have investigated the importance of the primary PP reflections, locally P-SV converted waves, and internal multiple reflections in the amplitude response of two thin-layer seismic physical models. One model consists of a thin water layer embedded between two nylon plates; the other consists of a thin acrylic layer surrounded by water. Numerical modeling using the reflectivity method was applied to analyze each wave propagation mode and the role of the source waveform in the experimental data. Before the experimental reflection data acquisition, we characterized two source and receiver piezoelectric transducer (PET) pairs: one with a circular plane face and the other with a semispherical face. We measured the source wavelet, its dominant frequency, and the PETs' directivity pattern. Semispherical PETs were chosen to acquire common midpoint reflection data. Thereafter, a processing workflow was applied to remove linear events interfering with the target reflections and to correct amplitudes for transmission losses, source/receiver directivity, and geometric spreading effects. Finally, we investigated the near-incidence-angle amplitude and the AVO response of the thin-layer targets. The results showed that the interference between the primary PP reflections and the locally converted shear waves may considerably affect the observed amplitude response. The source wavelet bandwidth appeared as a second-order effect, and the internal multiple reflections were practically negligible. These results suggest that in real data sets it is important to investigate the role of the wave propagation modes and the source wavelet in the observed amplitudes before deciding on the AVO analysis/inversion workflow to be adopted.

**Journal Article**

The magnetometric resistivity method in a stratified medium having resistivities varying exponentially with depth

Using a known solution for the electric potential and Ampère's law, the azimuthal component of the magnetic field is deduced in a horizontally layered medium with a current point source placed anywhere, considering that the resistivity in each layer varies exponentially with depth. This theoretical result contributes to modeling the magnetometric resistivity method, which has been applied onshore, e.g., for mineral exploration, and offshore to investigate the permafrost layer at the sea bottom, hydrothermal flux, and natural resources. We have numerically tested the obtained formulation against previous results found in the literature that use distinct electrode and sensor dispositions, with models having three and four layers. After introducing the exponential variation, we verified the sensitivity to physical and geometric parameters by comparing the exponential and homogeneous models.

**Journal Article**

A high-resolution weighted AB semblance for dealing with amplitude-variation-with-offset phenomenon

Velocity analysis is an essential step in seismic reflection data processing. The conventional and fastest method to estimate how velocity changes with increasing depth is to calculate semblance coefficients. Traditional semblance has two problems: low time and velocity resolution and an inability to handle the amplitude-variation-with-offset (AVO) phenomenon. Although a method known as AB semblance can arrive at peak velocities in areas with an AVO anomaly, it has a lower velocity resolution than conventional semblance. We have developed a weighted AB semblance method that can handle both problems simultaneously. We introduce two new weighting functions that weight the AB semblance to enhance the resolution of the velocity spectra in the time and velocity directions. In this way, we increase the time and velocity resolution while eliminating the AVO problem. The first weighting function is defined based on the ratio between the first and second singular values of the time window, improving the resolution of the velocity spectra in the velocity direction. The second weighting function is based on the position of the seismic wavelet in the time window, thus enhancing the resolution of the velocity spectra in the time direction. We use synthetic and field data examples to show the superior performance of our approach over the traditional one.
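For context on the problem the abstract describes, conventional semblance over an NMO-corrected window of M traces is S = Σ_t(Σ_x d)² / (M Σ_t Σ_x d²). The sketch below (a generic illustration, not the authors' weighted AB semblance) shows both why it works for flat events and why it fails on an AVO polarity reversal, which is the case AB semblance is designed to handle.

```python
import numpy as np

def semblance(window):
    """Conventional semblance of a (samples x traces) NMO-corrected window.

    S = sum_t (sum_x d)^2 / (M * sum_t sum_x d^2), bounded in [0, 1];
    S = 1 exactly when every trace is identical (a perfectly flat event).
    """
    num = np.sum(np.sum(window, axis=1) ** 2)
    den = window.shape[1] * np.sum(window ** 2)
    return num / den if den > 0 else 0.0

# Flat event, constant amplitude: semblance peaks at 1.
flat = np.tile(np.array([[0.0], [1.0], [0.0]]), (1, 8))
s_flat = semblance(flat)

# AVO polarity reversal across offset: the stack cancels and semblance
# collapses to 0 even though the event is perfectly flat.
avo = np.hstack([flat[:, :4], -flat[:, :4]])
s_avo = semblance(avo)
```

AB semblance replaces the constant-amplitude assumption with a linear amplitude trend A + B·x across offset, so a polarity reversal no longer destroys the coherence measure; the weighting functions in the paper then restore the resolution that this extra degree of freedom costs.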

**Journal Article**

Resolution in crosswell traveltime tomography: The dependence on illumination

The key aspect limiting resolution in crosswell traveltime tomography is illumination, a well-known but not well-exemplified result. We have revisited resolution in the 2D case using a simple geometric approach based on the angular aperture distribution and the Radon transform properties. We have analytically found that if an isolated interface had dips contained in the angular aperture limits, it could be reconstructed using just one particular projection. By inversion of synthetic data, we found that a slowness field could be approximately reconstructed from a set of projections if the interfaces delimiting the slowness field had dips contained in the available angular apertures. On the one hand, isolated artifacts might be present when the dip is near the illumination limit. On the other hand, in the inverse sense, if an interface is interpretable from a tomogram, there is no guarantee that it corresponds to a true interface. Similarly, if a body is present in the interwell region, it is diffusely imaged, but its interfaces, particularly vertical edges, cannot be resolved and additional artifacts might be present. Again, in the inverse sense, there is no guarantee that an isolated anomaly corresponds to a true anomalous body, because this anomaly could be an artifact. These results are typical of ill-posed inverse problems: an absence of a guarantee of correspondence to the true distribution. The limitations due to illumination may not be solved by the use of constraints. Crosswell tomograms derived with the use of sparsity constraints, using the discrete cosine transform and Daubechies bases, essentially reproduce the same features seen in tomograms obtained with the smoothness constraint. Interpretation must be done taking into consideration a priori information and the particular limitations due to illumination, as we have determined with a real data case.

**Journal Article**

A combined time-frequency filtering strategy for Q-factor compensation of poststack seismic data

Attenuation is one of the main factors responsible for limiting the resolution of the seismic method. It damps the higher frequency components of the signal more strongly, causing the earth to work as a low-pass filter. This loss of high-frequency energy may be partially compensated by the application of inverse Q filtering routines. However, such routines often increase the noise level of the data, thereby restricting their use. These filters also require a quality factor profile as an input parameter, which is rarely available. In recent years, alternative methods for stable inverse Q filtering have been presented in the literature, making it possible to correct the attenuation without introducing so much noise. In addition, new methods have been proposed to estimate the quality factor from seismic reflection data. We have developed a three-stage workflow for attenuation correction in stacked sections. In the first stage, a trace-by-trace estimate of the quality factor is performed along the section. The second stage consists of preparing the data for attenuation compensation, which is performed via a special filtering strategy for efficient noise removal to avoid high-frequency noise bursts. The last stage is the application of stable inverse Q filtering. As an example, we applied the proposed workflow to a seismic section to compensate for the attenuation caused by a shallow gas accumulation. Careful data preparation proved to be a key factor in achieving successful attenuation compensation.
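A minimal, amplitude-only sketch of the stabilization idea behind stable inverse Q filtering (constant Q, single trace; an illustration, not the paper's three-stage workflow): the inverse gain is capped as b / (b² + σ²), so it never blows up at high frequencies where the attenuation factor b → 0 and only noise remains.

```python
import numpy as np

def time_variant_q_filter(trace, dt, Q, sigma=None):
    """Apply a time-variant constant-Q amplitude filter to one trace.

    With sigma=None, the forward attenuation b = exp(-pi f t / Q) is applied;
    with sigma set, the stabilized inverse gain b / (b^2 + sigma^2) is applied,
    which compensates the loss while capping the boost where b -> 0.
    """
    nt = len(trace)
    f = np.fft.rfftfreq(nt, dt)
    spec = np.fft.rfft(trace)
    out = np.zeros(nt)
    for i in range(nt):               # one gain curve per output time sample
        b = np.exp(-np.pi * f * (i * dt) / Q)
        gain = b if sigma is None else b / (b ** 2 + sigma ** 2)
        out[i] = np.fft.irfft(spec * gain, nt)[i]
    return out

# Attenuate a spike, then compensate it with the stabilized inverse filter.
spike = np.zeros(256)
spike[100] = 1.0
attenuated = time_variant_q_filter(spike, 0.004, Q=60.0)
compensated = time_variant_q_filter(attenuated, 0.004, Q=60.0, sigma=0.01)
```

The stabilization constant σ plays the role that the careful data preparation plays in the workflow above: both keep the strong high-frequency boost from amplifying noise into bursts.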
