Geoscientist Artificial Intelligence

Checkshot Calibration & Well-Tie Analysis

Calibrating a sonic log with checkshot/VSP data is standard practice in seismic-to-well correlation.

Difference in Measurement Principles

  • Sonic log: Measures interval travel time (Δt) of acoustic waves in the formation at a very fine vertical resolution (~cm scale).
  • Checkshot / VSP (Vertical Seismic Profile): Measures the actual travel time from the surface to a specific depth using seismic waves, typically at a lower vertical resolution (~m scale).


Even though both measure velocities, sonic logs are prone to systematic errors due to:

  • Tool calibration drift
  • Borehole conditions (mud, washouts, deviation)
  • Formation anisotropy or lithology effects 

Purpose of Calibration

  • Correct systematic bias: Sonic logs can be consistently faster or slower than true seismic velocities. Checkshots give “ground truth” travel times from surface to depth.

  • Tie well to seismic: To create synthetic seismograms that match actual seismic reflection data, you need accurate velocities. Calibration ensures that the sonic-integrated two-way time,
    $t_{\mathrm{synthetic}}(z) = 2\int_0^{z} \frac{dz'}{v_{\mathrm{sonic}}(z')}$,
    aligns with the checkshot times.
  • Improve depth conversion: When converting seismic time to depth, calibrated sonic velocities produce more accurate depth models.

Technical Workflow

This workflow calibrates a high-resolution sonic log using sparse checkshot data to produce a reliable Time–Depth Relationship (TDR) and a calibrated sonic log suitable for synthetic generation, well tie, and depth conversion.

1. Input Data and Pre-conditioning

The workflow uses two datasets:

  • Sonic log sampled at fine depth intervals (µs/ft vs depth in meters)
  • Checkshot data providing sparse, absolute two-way time control (ms vs depth)

All inputs are sorted by depth, and invalid samples are removed to ensure monotonic depth progression.
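A minimal pre-conditioning sketch is shown below (assuming the two datasets are already loaded as NumPy arrays; the array names and example values are illustrative only, not the production implementation):

```python
import numpy as np

def precondition(depth, values):
    """Sort by depth, drop invalid samples, and keep strictly increasing depths."""
    valid = np.isfinite(depth) & np.isfinite(values)
    depth, values = depth[valid], values[valid]
    order = np.argsort(depth)
    depth, values = depth[order], values[order]
    keep = np.concatenate(([True], np.diff(depth) > 0))  # drop repeated depths
    return depth[keep], values[keep]

# Illustrative inputs: sonic slowness (µs/ft) vs depth (m) with one bad sample,
# and sparse checkshot two-way times (ms) vs depth (m) in arbitrary order.
sonic_md = np.array([500.0, 500.5, 501.0, 501.0, 502.0])
sonic_dt = np.array([110.0, 108.0, np.nan, 107.0, 106.0])
cs_md    = np.array([600.0, 400.0, 800.0])
cs_twt   = np.array([520.0, 360.0, 680.0])

sonic_md, sonic_dt = precondition(sonic_md, sonic_dt)
cs_md, cs_twt = precondition(cs_md, cs_twt)
```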


2. Sonic Integration to Time

The sonic log is first converted to interval velocity and then integrated in depth to obtain a preliminary sonic-derived two-way time curve.
Integration is performed using layer thickness, not numerical gradients, ensuring physical correctness and stability.

This produces a smooth sonic-controlled time trend that preserves high-resolution velocity variations.
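A rough sketch of this integration step (assuming slowness in µs/ft, depth in meters, and an optional datum shift t0_ms; the names and the layer-averaged slowness are assumptions, not the exact implementation):

```python
import numpy as np

US_PER_FT_TO_S_PER_M = 1e-6 / 0.3048  # convert slowness from µs/ft to s/m

def sonic_to_twt(depth_m, slowness_us_ft, t0_ms=0.0):
    """Integrate sonic slowness over layer thicknesses to two-way time (ms)."""
    slowness_s_m = slowness_us_ft * US_PER_FT_TO_S_PER_M
    dz = np.diff(depth_m)                                   # layer thicknesses (m)
    layer_slowness = 0.5 * (slowness_s_m[:-1] + slowness_s_m[1:])
    owt_s = np.concatenate(([0.0], np.cumsum(dz * layer_slowness)))
    return t0_ms + 2.0 * owt_s * 1000.0                     # two-way time (ms)

# Example: a short slowness log integrated to a sonic-derived TWT curve.
depth = np.array([500.0, 501.0, 502.0, 503.0])
dt_log = np.array([110.0, 108.0, 107.0, 106.0])
sonic_twt = sonic_to_twt(depth, dt_log)
```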


3. Drift Estimation at Checkshot Depths

Because checkshot data are sparse, drift is computed only at checkshot depths by comparing:

  • Sonic-derived time at checkshot depth
  • Observed checkshot two-way time

This avoids introducing artificial time control in depth intervals with no checkshot data.
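Expressed as a small sketch (the linear interpolation of sonic time to checkshot depths is an assumption; the workflow may use a different sampling scheme):

```python
import numpy as np

def drift_at_checkshots(sonic_depth, sonic_twt, cs_depth, cs_twt):
    """Drift = observed checkshot TWT minus sonic-derived TWT at checkshot depths."""
    sonic_twt_at_cs = np.interp(cs_depth, sonic_depth, sonic_twt)  # sample sonic time
    return cs_twt - sonic_twt_at_cs                                # positive = sonic too fast

# Drift is defined only at the sparse checkshot depths, nowhere else.
```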

4. Drift Conditioning and Interpolation

The raw drift is smoothed in the checkshot domain, reducing noise while preserving long-wavelength trends.
The conditioned drift is then interpolated to the sonic depth grid and added to the sonic-derived time.

This step stretches or compresses the sonic time smoothly between checkshot anchors without violating physical constraints.
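A minimal sketch of this conditioning step (the moving-average smoother and the linear interpolation between anchors are stand-ins for whatever conditioning operator is actually used):

```python
import numpy as np

def apply_drift(sonic_depth, sonic_twt, cs_depth, drift, smooth_len=3):
    """Smooth drift in the checkshot domain, interpolate to the sonic grid, and add."""
    if len(drift) >= smooth_len:
        kernel = np.ones(smooth_len) / smooth_len            # simple moving-average smoother
        drift = np.convolve(drift, kernel, mode="same")
    drift_on_sonic = np.interp(sonic_depth, cs_depth, drift)  # flat beyond the anchors
    return sonic_twt + drift_on_sonic                         # stretched/compressed sonic time
```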


5. Time–Depth Relationship (TDR) Construction

A full TDR curve is constructed by:

  • Combining sonic-controlled time trends
  • Honoring checkshot times exactly at checkshot depths
  • Applying gentle smoothing between anchors

The resulting TDR is monotonic, stable, and suitable for seismic depth conversion and synthetic generation.
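One way this can be sketched (the anchoring and monotonicity enforcement below are illustrative; the gentle smoothing between anchors is reduced here to linear residual interpolation):

```python
import numpy as np

def build_tdr(sonic_depth, drifted_twt, cs_depth, cs_twt):
    """Snap the drifted time curve to checkshot values and enforce monotonic time."""
    residual = cs_twt - np.interp(cs_depth, sonic_depth, drifted_twt)
    tdr = drifted_twt + np.interp(sonic_depth, cs_depth, residual)  # exact at anchors
    return np.maximum.accumulate(tdr)                               # non-decreasing TWT
```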


6. Recovery of Calibrated Sonic

The calibrated sonic log is recovered by differentiating the final TDR with respect to depth, converting back to interval velocity and then to sonic slowness (µs/ft).

This ensures that the calibrated sonic is consistent with the final TDR, not artificially scaled or clipped during processing.
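A sketch of this recovery step (layer-wise differentiation; the unit conversions assume depth in meters and TWT in milliseconds):

```python
import numpy as np

M_TO_FT = 1.0 / 0.3048

def tdr_to_sonic(depth_m, twt_ms):
    """Differentiate the TDR to interval velocity, then convert to slowness (µs/ft)."""
    dz = np.diff(depth_m)                          # layer thickness (m)
    dt_oneway_s = 0.5 * np.diff(twt_ms) / 1000.0   # one-way layer time (s)
    v_int = dz / np.maximum(dt_oneway_s, 1e-9)     # interval velocity (m/s)
    return 1e6 / (v_int * M_TO_FT)                 # calibrated slowness per layer (µs/ft)
```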


7. Physical Range Enforcement (Hard Clipping)

As a final quality-control step, the calibrated sonic is hard-clipped to a physically valid range (40–140 µs/ft).

This guarantees:

  • No non-physical velocity values
  • Compatibility with seismic inversion and modeling workflows
  • Robustness in sparse checkshot intervals

Hard clipping is applied only at the final stage to avoid contaminating the calibration process.
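The clipping itself is a one-liner; the bounds below are the 40–140 µs/ft range quoted above:

```python
import numpy as np

DT_MIN_US_FT, DT_MAX_US_FT = 40.0, 140.0  # physically valid sonic slowness range

def clip_sonic(slowness_us_ft):
    """Final QC: hard-clip the calibrated sonic to the physically valid range."""
    return np.clip(slowness_us_ft, DT_MIN_US_FT, DT_MAX_US_FT)
```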

Example 1

Sonic calibration with checkshot. The drift curve shows the difference between the checkshot transit time and the cumulative sonic two-way time.

Generated synthetic trace from the calibrated TDR spliced with the corresponding seismic data.

Example 2

Sonic calibration with checkshot. The drift curve shows the difference between the checkshot transit time and the cumulative sonic two-way time.

Generated synthetic trace from the calibrated TDR spliced with the corresponding seismic data.

Example 3

3D stacking velocity model with checkshot projected at the well position.

Sonic calibration with checkshot. 3D stacking velocity model with checkshot projected at the well position.

Stacking to Interval Velocity Conversion (CVI Method)

Overview:

In seismic processing, stacking (RMS) velocities are commonly derived from velocity analysis. While useful for basic NMO correction, RMS velocities cannot be used directly for depth imaging or geophysical interpretation. To build accurate subsurface models, we need interval velocities, the true velocities of each layer in the subsurface. The Constrained Velocity Inversion (CVI) method provides a robust and reliable way to convert RMS velocities into interval velocities, even when the data are noisy or contain outliers.

The CVI method is a robust and production-ready approach for converting RMS velocities into interval velocities. It is especially useful for modern seismic workflows where velocity picks are noisy or discontinuous, providing a stable and physically meaningful velocity model for depth imaging and interpretation.


Why CVI is Better than Traditional Methods:

The classical approach, known as Dix differentiation, is highly sensitive to:

  • Noisy velocity picks 
  • Irregular time sampling 
  • Lateral inconsistencies 

CVI overcomes these issues by using a regularized inversion approach. It balances fitting the data with producing smooth, realistic interval velocity profiles.
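For reference, the classical layer-by-layer Dix relation is

$V_{\mathrm{int},n}^{2} = \dfrac{V_{\mathrm{rms},n}^{2}\,t_{n} - V_{\mathrm{rms},n-1}^{2}\,t_{n-1}}{t_{n} - t_{n-1}}$

Because the denominator $t_{n} - t_{n-1}$ is often small, any noise in the RMS picks is strongly amplified in the interval velocities; this is the instability that the regularized inversion is designed to suppress.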

Key benefits:

  • Smooth and laterally consistent interval velocity fields 
  • Less sensitive to picking errors 
  • Physically realistic velocities, avoiding impossible negative or zero values 
  • Ideal for depth imaging workflows (PSTM/PSDM)
     

How It Works:

Without using formulas, the CVI process can be explained in simple steps:

  1. Input RMS velocities: The method starts with the RMS velocities obtained from velocity analysis. 
  2. Weighted inversion: It calculates the interval velocity for each layer, considering the RMS trend over time. 
  3. Smoothness control: Vertical smoothness is applied to avoid unrealistic jumps between layers. 
  4. Lateral consistency: Neighboring traces are considered to ensure lateral continuity and reduce isolated outliers. 
  5. Physical limits: The method enforces minimum and maximum plausible velocities to maintain realistic results. 
  6. Fallback mechanism: For locations with unreliable picks, a fallback method (similar to Dix) ensures the interval velocity remains valid.
     

The result is a stable, realistic interval velocity model ready for seismic imaging or depth conversion.
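As an illustration only (not the production CVI code), a single-trace regularized RMS-to-interval inversion can be sketched as below. It solves for squared interval velocities under a second-derivative smoothness penalty and clips to physical limits; the parameter names and values, and the omission of lateral consistency, weighting, and the Dix fallback, are all simplifications:

```python
import numpy as np

def cvi_trace(t, v_rms, smooth=1.0, v_min=1400.0, v_max=6500.0):
    """Tikhonov-style RMS-to-interval conversion for one trace.

    Solves A u = d for squared interval velocities u, where d_n = V_rms,n^2 * t_n
    and A accumulates u over layer times, with smoothness controlled by `smooth`.
    """
    t = np.asarray(t, dtype=float)
    v_rms = np.asarray(v_rms, dtype=float)
    n = len(t)
    dt = np.diff(t, prepend=0.0)                 # layer two-way times (assumes t[0] > 0)
    A = np.tril(np.tile(dt, (n, 1)))             # cumulative-time operator
    d = v_rms**2 * t
    D = np.diff(np.eye(n), n=2, axis=0)          # second-difference (smoothness) operator
    u = np.linalg.solve(A.T @ A + smooth * (D.T @ D), A.T @ d)
    u = np.clip(u, v_min**2, v_max**2)           # enforce physical velocity limits
    return np.sqrt(u)

# Illustrative usage with made-up picks (times in s, velocities in m/s):
t = np.array([0.4, 0.8, 1.2, 1.6, 2.0])
v_rms = np.array([1800.0, 2000.0, 2150.0, 2300.0, 2450.0])
v_int = cvi_trace(t, v_rms)
```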


Practical Advantages

  • Handles noisy or discontinuous velocity picking data 
  • Suppresses spikes and extreme outliers 
  • Produces smooth vertical and lateral velocity trends 
  • Suitable for automated seismic processing workflows 
  • Provides reliable input for depth imaging and geophysical interpretation
     

Recommended Use

  • Use CVI for all velocity picking data before depth conversion 
  • Apply soft constraints for minimum and maximum velocities 
  • Use a moderate lateral smoothing radius to maintain lateral consistency 
  • Validate results by reconstructing RMS velocities and checking for anomalies (see the sketch below)
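A hedged sketch of that validation step, using the forward Dix relation to rebuild RMS velocities from the inverted interval velocities (function and variable names are illustrative):

```python
import numpy as np

def reconstruct_rms(t, v_int):
    """Forward Dix: rebuild RMS velocities from interval velocities for QC."""
    dt = np.diff(t, prepend=0.0)                    # assumes t[0] > 0
    return np.sqrt(np.cumsum(v_int**2 * dt) / t)

# Compare the rebuilt RMS curve against the original picks; large residuals flag
# intervals where the inversion (or the picks themselves) should be reviewed.
```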

RMS-calibrated 3D velocity model

3D interval velocity model generated by the CVI method with lateral consistency.

Copyright © 2026 Geoscientist Artificial Intelligence. All Rights Reserved.
