Scientific Production

**Journal Article**

A combined time-frequency filtering strategy for Q-factor compensation of poststack seismic data

Attenuation is one of the main factors limiting the resolution of the seismic method. It damps the higher-frequency components of the signal more strongly, causing the earth to act as a low-pass filter. This loss of high-frequency energy may be partially compensated by the application of inverse Q filtering routines. However, such routines often increase the noise level of the data, thereby restricting their use. These filters also require a quality-factor profile as an input parameter, which is rarely available. In recent years, alternative methods for stable inverse Q filtering have been presented in the literature, making it possible to correct the attenuation without introducing so much noise. In addition, new methods have been proposed to estimate the quality factor from seismic reflection data. We have developed a three-stage workflow for attenuation correction in stacked sections. In the first stage, a trace-by-trace estimate of the quality factor is performed along the section. The second stage consists of preparing the data for attenuation compensation via a special filtering strategy for efficient noise removal, so as to avoid high-frequency noise bursts. The last stage is the application of a stable inverse Q filter. As an example, we applied the proposed workflow to a seismic section to compensate for the attenuation caused by a shallow gas accumulation. Careful data preparation proved to be a key factor in achieving successful attenuation compensation.
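
The stabilization idea behind stable inverse Q filtering can be illustrated with a minimal, amplitude-only sketch (not the authors' implementation; the function name and the stabilization constant `sigma2` are our assumptions): the attenuation factor exp(-pi f t / Q) is inverted through a stabilized gain that caps the amplification of weak, noisy high frequencies.

```python
import numpy as np

def stabilized_inverse_q_gain(trace, dt, q, sigma2=0.01):
    """Amplitude-only stabilized inverse Q compensation (illustrative sketch).

    At travel time t, each frequency f has been attenuated by
    beta = exp(-pi * f * t / q). Instead of dividing by beta, which
    blows up at high f, the stabilized gain beta / (beta**2 + sigma2)
    caps the amplification of weak, noisy high frequencies.
    """
    n = len(trace)
    freqs = np.fft.rfftfreq(n, dt)   # non-negative frequencies in Hz
    spec = np.fft.rfft(trace)
    out = np.zeros(n)
    # Time-variant filtering: each output sample is taken from the
    # trace compensated with the gain appropriate to its own time.
    for i in range(n):
        beta = np.exp(-np.pi * freqs * i * dt / q)
        gain = beta / (beta**2 + sigma2)
        out[i] = np.fft.irfft(spec * gain, n)[i]
    return out
```

Note that the stabilized gain never exceeds 1/(2*sqrt(sigma2)), which is what prevents the noise bursts that a naive 1/beta division would produce.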

**Journal Article**

Migration velocity analysis using residual diffraction moveout: a real-data example

Unfocused seismic diffraction events carry direct information about errors in the migration-velocity model. The residual-diffraction-moveout (RDM) migration-velocity-analysis (MVA) method is a recent technique that extracts this information by adjusting ellipses or hyperbolas to uncollapsed migrated diffractions. In this paper, we apply this method, so far tested only on synthetic data, to a real data set from the Viking Graben. After application of a plane-wave-destruction (PWD) filter to attenuate the reflected energy, the diffractions in the real data become interpretable and can be used for the RDM method. Our analysis demonstrates that the reflections need not be completely removed for this purpose. Beyond the need to identify and select diffraction events in poststack depth-migrated sections, the method has a very low computational cost and processing time. Only two iterations were necessary to reach an acceptable velocity model of quality comparable to one obtained with common-midpoint (CMP) processing.

**Journal Article**

A separable strong-anisotropy approximation for pure qP-wave propagation in transversely isotropic media

The wave equation can be tailored to describe wave propagation in vertical-symmetry-axis transversely isotropic (VTI) media. The qP- and qS-wave eikonal equations derived from the VTI wave equation indicate that in the pseudoacoustic approximation, their dispersion relations degenerate into a single one. Therefore, when this dispersion relation is used for wave simulation, for instance by means of finite-difference approximations, both events are generated. To avoid the occurrence of the pseudo-S-wave, the qP-wave dispersion relation alone needs to be approximated. This can be done with or without the pseudoacoustic approximation. A Padé expansion of the exact qP-wave dispersion relation leads to a very good approximation. Our implementation of a separable version of this equation in the mixed space-wavenumber domain permits it to be compared with a low-rank solution of the exact qP-wave dispersion relation. Our numerical experiments showed that this approximation can provide highly accurate wavefields, even in strongly anisotropic inhomogeneous media.
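
For reference, the single pseudoacoustic qP dispersion relation into which the qP and qS relations degenerate can be evaluated directly. The sketch below uses the standard acoustic-VTI form (Alkhalifah's relation) with vertical P velocity and Thomsen parameters; this is the exact pseudoacoustic relation, not the paper's Padé or low-rank approximations, and the function name is ours.

```python
import numpy as np

def pseudoacoustic_qp_omega(kx, kz, vp0, eps, delta):
    """Angular frequency of the acoustic-VTI (pseudoacoustic) qP mode.

    Standard dispersion relation with vertical P velocity vp0 and
    Thomsen parameters eps, delta:
        w^2 = (vp0^2 / 2) * [ (1+2*eps)*kx^2 + kz^2
              + sqrt( ((1+2*eps)*kx^2 + kz^2)^2
                      - 8*(eps-delta)*kx^2*kz^2 ) ]
    For eps = delta = 0 it reduces to the isotropic w = vp0 * |k|,
    and for eps = delta (elliptical) the square root collapses.
    """
    a = (1.0 + 2.0 * eps) * kx**2 + kz**2
    w2 = 0.5 * vp0**2 * (a + np.sqrt(a**2 - 8.0 * (eps - delta) * kx**2 * kz**2))
    return np.sqrt(w2)
```

Because the square-root term couples kx and kz, the relation is not separable in the wavenumber components; that non-separability is exactly what a separable approximation, such as the one discussed above, is designed to remove.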

**Journal Article**

Offset-continuation stacking: Theory and proof of concept

The offset-continuation operation (OCO) is a seismic configuration transform designed to simulate a seismic section as if obtained with a certain source-receiver offset, using the data measured with another offset. Based on this operation, we have introduced the OCO stack, a multiparameter stacking technique that transforms 2D/2.5D prestack multicoverage data into a stacked common-offset (CO) section. Similarly to common-midpoint and common-reflection-surface stacks, the OCO stack does not rely on an a priori velocity model but provides velocity information itself. Because the OCO depends on the velocity model used in the process, the method can be combined with trial-stacking techniques for a set of models, thus allowing for the extraction of velocity information. The algorithm consists of data stacking along so-called OCO trajectories, which approximate the common-reflection-point trajectory, i.e., the position of a reflection event in the multicoverage data as a function of source-receiver offset, in dependence on the medium velocity and the local event slope. These trajectories are the ray-theoretical solutions to the OCO image-wave equation, which describes the continuous transformation of a CO reflection event from one offset to another. Stacking along trial OCO trajectories for different values of average velocity and local event slope allows us to determine horizon-based optimal parameter pairs and a final stacked section at arbitrary offset. Synthetic examples demonstrate that the OCO stack works as predicted, almost completely removing random noise added to the data and successfully recovering the reflection events.

**Journal Article**

Estimation of quality factor based on peak frequency-shift method and redatuming operator: Application in real data set

Quality-factor estimation and correction are necessary to compensate for the seismic energy dissipated during acoustic/elastic wave propagation in the earth. In this process, known as Q-filtering in the realm of seismic processing, the main goal is to improve the resolution of the seismic signal, as well as to recover part of the energy dissipated by anelastic attenuation. We have found a way to improve Q-factor estimation from seismic reflection data. Our methodology is based on the combination of the peak-frequency-shift (PFS) method and the redatuming operator. Our innovation is in the way we correct traveltimes when the medium consists of many layers. In other words, the correction of the traveltime table used in the PFS method is performed using the redatuming operator. This operation, performed iteratively, allows a more accurate estimation of the Q factor layer by layer. Applications to synthetic and real data (Viking Graben) reveal the feasibility of our analysis.
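
The peak-frequency-shift idea can be made concrete for a Ricker source wavelet, a common modeling assumption in PFS work (the function and variable names below are ours, and this sketch omits the paper's redatuming-based traveltime correction).

```python
import numpy as np

def q_from_peak_frequency_shift(fm, fp, t):
    """Q from the downshift of a Ricker wavelet's peak frequency.

    A Ricker wavelet has amplitude spectrum ~ f**2 * exp(-f**2 / fm**2),
    peaking at fm. After traveling for time t in a constant-Q medium the
    spectrum is multiplied by exp(-pi * f * t / Q), and its new peak fp
    satisfies 2/fp - 2*fp/fm**2 = pi*t/Q, which inverts to:
        Q = pi * t * fp * fm**2 / (2 * (fm**2 - fp**2))
    """
    return np.pi * t * fp * fm**2 / (2.0 * (fm**2 - fp**2))
```

A quick consistency check is to attenuate the Ricker spectrum with a known Q, locate the new peak numerically, and verify that the formula recovers the same Q.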

**Journal Article**

AN ALGORITHM FOR WAVE PROPAGATION ANALYSIS IN STRATIFIED POROELASTIC MEDIA

The classic poroelastic theory of Biot, developed in the 1950s, describes the propagation of elastic waves through a porous medium containing a fluid. This theory has been extensively used in various fields dealing with porous media: seismic exploration, oil/gas reservoir characterization, environmental geophysics, earthquake seismology, etc. In this work, we use the Ursin formalism to derive explicit formulas for the analysis of the propagation of elastic waves through stratified 3D porous media, where the parameters of the media are characterized by piecewise-constant functions of only one spatial variable, depth.

Key words: poroelasticity, Biot system, low-frequency range, layered media, Ursin algorithm

**Journal Article**

Relief Geometric Effects on Frequency-Domain Electromagnetic Data

A perpendicular transmitter-receiver coil arrangement used in frequency-domain electromagnetic (FDEM) surveys can deviate from its standard geometric definition due to the relief of the surveyed area, when combined with a large transmitter-receiver distance and a large transmitter loop. This happens because the local relief tilts the equivalent magnetic-moment axis away from the vertical and places the receiver at different elevations. Such a study is carried out here by substituting an inclined plane for the rugged relief. We have developed a new formulation for the n-layered model that allowed us to investigate the relief geometry effects on FDEM data, restricting the analysis to the two-layer earth model and considering three cases of transmitter-receiver situations controlled by the relief model. These procedures proved very useful to demonstrate the behavior of the resulting curves, departing from those obtained for an inclined and a horizontal ground. The results show that small deviations in the verticality of the transmitter-loop axis or in the horizontality of the surficial plane cause significant deviations, even for angles as small as 1°.

**Journal Article**

How much averaging is necessary to cancel out cross-terms in noise correlation studies?

We present an analytical approach to jointly estimate the correlation window length and the number of correlograms to stack in ambient-noise correlation studies, so as to statistically ensure that noise cross-terms cancel out to within a chosen threshold. These estimates provide the minimum amount of data necessary to extract coherent signals in ambient-noise studies using noise sequences filtered in a given frequency bandwidth. The inputs for the estimation process are (1) the variance of the cross-correlation energy density calculated over an elementary time length equal to the largest period present in the filtered data and (2) the threshold below which the noise cross-terms must remain in the final stacked correlograms. The presented theory explains how to adjust the required correlation window length and number of stacks when changing from one frequency bandwidth to another. In addition, this theory provides a simple way to monitor stationarity in the noise. The validity of the deduced expressions has been confirmed with numerical cross-correlation tests using both synthetic and field data.

Key words: Time-series analysis; Interferometry.
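
The role of stacking in suppressing cross-terms can be illustrated numerically (a generic sketch, not the paper's estimator; window count, noise level, and delay are arbitrary choices): two receivers record a common random signal, one delayed, plus independent local noise, and stacking correlograms over many windows leaves a clear peak at the true delay.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_windows, lag = 1024, 400, 10      # samples per window, stacks, true delay
stack = np.zeros(2 * n - 1)
for _ in range(n_windows):
    s = rng.standard_normal(n)                          # common "ambient" source
    a = s + 0.5 * rng.standard_normal(n)                # receiver A: signal + local noise
    b = np.roll(s, lag) + 0.5 * rng.standard_normal(n)  # receiver B: delayed signal + noise
    stack += np.correlate(b, a, mode="full")
stack /= n_windows
peak_lag = int(np.argmax(stack)) - (n - 1)              # recovered inter-receiver delay
```

The cross-terms (signal-with-noise and noise-with-noise correlations) have zero mean, so their residual amplitude in the stack decays roughly as one over the square root of the number of stacked windows, which is the trade-off the paper quantifies analytically.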

**Journal Article**

Automatic data extrapolation to zero offset along local slope

Velocity-independent seismic data processing requires information about the local slope in the data. From estimates of the local time and space derivatives of the data, a total least-squares algorithm gives an estimate of the local slope at each data point. Total least squares minimizes the orthogonal distance from the data points (the local time and space derivatives) to the fitted straight line defining the local slope. This gives a more consistent estimate of the local slope than standard least squares because it takes into account uncertainty in both the temporal and spatial derivatives. The total least-squares slope estimate is the same as the one obtained from the structure tensor with a rectangular window function. The estimated local slope field is used to extrapolate all traces in a seismic gather to the smallest recorded offset without using velocity information. Extrapolation to zero offset is done using a hyperbolic traveltime function in which slope information replaces knowledge of the normal-moveout (NMO) velocity. The new data-processing method requires no velocity analysis, and there is little stretch effect. All major reflections and diffractions present at zero offset will be reproduced in the output zero-offset section. Therefore, if multiple reflections are undesired in the output, they should be removed before data extrapolation to zero offset. The automatic method is sensitive to noise, so for poor signal-to-noise ratios, standard NMO velocities for primary reflections can be used to compute the slope field. Synthetic and field data examples indicate that, compared with standard seismic data processing (velocity analysis, mute, NMO correction, and stack), our method provides an improved zero-offset section in complex data areas.
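
The total-least-squares slope estimate described above can be sketched via the structure tensor (a minimal illustration with our own function and variable names; a single rectangular window stands in for the per-sample windowing of the full method).

```python
import numpy as np

def tls_slope(ut, ux):
    """Total-least-squares local slope from time/space derivatives.

    For a local plane wave u(t - p*x) we have u_x = -p * u_t, so the
    derivative pairs (u_t, u_x) lie on a line through the origin. TLS
    fits that line by the dominant eigenvector of the 2x2 structure
    tensor, accounting for uncertainty in BOTH derivatives.
    """
    T = np.array([[np.sum(ut * ut), np.sum(ut * ux)],
                  [np.sum(ut * ux), np.sum(ux * ux)]])
    w, v = np.linalg.eigh(T)       # eigenvalues in ascending order
    vt, vx = v[:, -1]              # eigenvector of the largest eigenvalue
    return -vx / vt                # slope p = -u_x / u_t

# Synthetic plane wave with slope p = 0.4 (time units per distance unit)
t = np.linspace(0.0, 1.0, 201)
x = np.linspace(0.0, 1.0, 201)
T2, X2 = np.meshgrid(t, x, indexing="ij")
u = np.sin(2.0 * np.pi * 3.0 * (T2 - 0.4 * X2))
ut, ux = np.gradient(u, t[1] - t[0], x[1] - x[0])
p_est = tls_slope(ut, ux)
```

Because the ratio of the eigenvector components is invariant to the arbitrary sign returned by the eigen-decomposition, the estimate is well defined wherever the temporal derivative does not vanish.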

**Journal Article**

Parallel Scalability of a Fine-Grain Prestack Reverse Time Migration Algorithm

Seismic imaging has evolved significantly due to the oil/gas industry's high demand for hardware technological advancements, boosting the development of more sophisticated algorithms. To deliver the required quality and accuracy, the execution of these algorithms may lead to infeasible run times. Aiming at performance improvement, this work parallelized the core of a reverse time migration (RTM) algorithm. Furthermore, analyses such as speedup and efficiency were performed to assess the scalability of the proposed method. While most parallelization efforts so far deal with coarse-grain approaches, this letter tackles the intrashot fine-grain parallelization of prestack RTM, which increases the overall concurrency degree of the algorithm. Results using 2-D synthetic data show that the proposed approach is scalable: an increase in hardware resources and/or in problem size leads to a proportional increase in speed and/or accuracy.
