Poster #1: Development of machine learning models and workflow for Task 5

Presenters: Alex Sun, Seyyed Hosseini

Video

UT-BEG has actively participated in Task 5 of the SMART Project in the areas of designing the testing (toy) problems, implementing machine learning (ML) models, and implementing workflows. Multiple ML models were screened under Task 5 for model comparison and baseline development. So far, we have developed long short-term memory (LSTM), sparse grid, and Gaussian process regression (GPR) models on 2D and 3D toy problems.

Results suggest that these models perform on par with the more sophisticated image-based deep learning methods on the toy problems, thus providing baselines for Task 5. A container-based workflow was designed to standardize model development and testing. Future Phase 1 work will extend the current effort to field-scale models.
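
As a minimal illustration of one such baseline (not the UT-BEG implementation), a GPR surrogate can be fit with scikit-learn in a few lines; the features and target below are hypothetical stand-ins rather than the Task 5 toy-problem specification.

```python
# Hedged sketch of a GPR baseline surrogate; the feature and target definitions
# are illustrative placeholders, not the Task 5 toy problems.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X_train = rng.uniform(size=(200, 3))           # e.g., permeability, porosity, injection rate
y_train = np.sin(X_train @ np.array([2.0, 1.0, 0.5]))   # stand-in for a simulated response

kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0, 1.0])
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X_train, y_train)

X_test = rng.uniform(size=(20, 3))
y_mean, y_std = gpr.predict(X_test, return_std=True)    # mean prediction with uncertainty
```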

Poster #2: Fast Proxy Models for Two-Phase CO2-Water Flow

Presenters: Curtis M. Oldenburg, Omotayo Omosebi, Abdullah Cihan

Video

Fast proxy models of CO2-water flow are useful for the engineering and design of geologic carbon sequestration reservoirs. In some important limiting cases, fast proxy models can be used in place of full-physics models and for training machine-learning methods. We are developing transient macroscopic invasion percolation (TMIP) and diffusion-limited aggregation (DLA) approaches. Preliminary results demonstrate the promise of these methods through their ability to model two-phase displacements very rapidly.

Poster #3: Multiphysics Signature of CO2 Injection on Host and Seal Formations

Presenter: Mathias Pohl

Video

Joint multiphysics experiments will allow us to accurately measure CO2 saturation and its effects on geophysical data. Simultaneously measured NMR data will provide saturation changes during CO2 injection, while ultrasonic velocity and complex conductivity measurements determine the associated mechanical and electrical changes. Our findings will help guide machine learning algorithms applied to observation data (seismic and EM), both through physical understanding and through quantitative data.

Poster #4: Imaging Pressure Front Propagation in Complex Fracture Networks Using the Fast Marching Method and Diffusive Time of Flight

Presenter: Akhil Datta-Gupta

Video

We will discuss a novel approach for rapid field-scale modeling and visualization of subsurface flow and transport and its application to unconventional oil and gas reservoirs. Our proposed approach is based on a high frequency asymptotic solution of the diffusivity equation in heterogeneous reservoirs and serves as a bridge between simplified analytical tools and complex numerical simulation.

The high frequency solution leads to the Eikonal equation, which is solved for a ‘diffusive time of flight (DTOF)’ using the Fast Marching Method (FMM). The DTOF represents the propagation of the ‘pressure front’ in the subsurface and generalizes the concept of ‘depth of investigation’ to heterogeneous and fractured reservoirs. More importantly, the DTOF can be used as a spatial coordinate to reduce the 3-D diffusivity equation to an equivalent 1-D equation, which can be solved numerically while accounting for the relevant physics, leading to computations that are orders of magnitude faster than commercial simulators.
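
For reference, a hedged sketch of the governing relation in standard notation (the authors' exact formulation may differ):

```latex
\sqrt{\alpha(\mathbf{x})}\;\lvert \nabla \tau(\mathbf{x}) \rvert = 1,
\qquad
\alpha(\mathbf{x}) = \frac{k(\mathbf{x})}{\phi(\mathbf{x})\,\mu\,c_t},
```

where \tau is the diffusive time of flight, \alpha the hydraulic diffusivity, k the permeability, \phi the porosity, \mu the viscosity, and c_t the total compressibility; the FMM solves this Eikonal equation on the simulation grid in a single, non-iterative sweep.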

Poster #5: Deep Learning-Based Surrogate Models for High-Dimensional Data Assimilation in Reservoir Management

Presenters: Hewei Tang, Pengcheng Fu, Christopher Scott Sherman

Video

Real-time reservoir management demands fast assimilation of observation data to reduce uncertainty in reservoir properties and future predictions. The most computationally expensive component of the data assimilation workflow is the forward reservoir simulation.

We present a deep learning-based surrogate model to replace forward reservoir simulation. Adopting an image-to-image mapping strategy based on a convolutional neural network (CNN), the model predicts the CO2 plume shape resolved by 3D seismic surveys and the pressure buildup resolved by virtual InSAR measurements. The model further couples the CNN with a long short-term memory (LSTM) network for dynamic prediction. The surrogate model proves to be fast and accurate on a single-well injection scenario.
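
A minimal sketch of a CNN encoder coupled with an LSTM for dynamic prediction is shown below; the architecture, layer sizes, and image dimensions are illustrative assumptions, not the trained surrogate described in the poster.

```python
# Hedged sketch of a CNN + LSTM surrogate (sizes and layout are assumptions).
import torch
import torch.nn as nn

class CNNLSTMSurrogate(nn.Module):
    def __init__(self, latent_dim=64, n_steps=10):
        super().__init__()
        self.n_steps = n_steps
        # CNN encoder: map a 2D property map (e.g., permeability) to a latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        # LSTM rolls the latent state forward in time for dynamic prediction
        self.lstm = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        # Decoder: map each latent state back to an output image (e.g., CO2 plume or pressure)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):                        # x: (batch, 1, 64, 64) input property map
        z = self.encoder(x)                      # (batch, latent_dim)
        z_seq = z.unsqueeze(1).repeat(1, self.n_steps, 1)
        h, _ = self.lstm(z_seq)                  # (batch, n_steps, latent_dim)
        frames = [self.decoder(h[:, t]) for t in range(self.n_steps)]
        return torch.stack(frames, dim=1)        # (batch, n_steps, 1, 64, 64)

model = CNNLSTMSurrogate()
pred = model(torch.randn(2, 1, 64, 64))          # predicted plume/pressure snapshots
```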

Poster #6: Variational autoencoder for inverse analysis

Presenters: Bailian Chen, Dan O’Malley, Rajesh Pawar, Dylan Harp

Video

In this work, we tested a variational autoencoder (VAE) with a gradient-based optimization approach for geologic carbon storage (GCS) inverse analysis with regularization. A VAE is trained to map a low-dimensional set of latent variables with a simple structure to the high-dimensional parameter space (i.e., the original space), which has a complex structure.

The optimization required to fit the model to observational data is then performed in the low-dimensional latent space, making gradient-based optimization (e.g., L-BFGS) more computationally efficient. The major novelty of this approach is an automated and straightforward way of regularizing with a VAE. The feasibility of the proposed approach for GCS inverse analysis was demonstrated on a 3D synthetic case.
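
A minimal sketch of latent-space inversion of this kind, assuming a trained decoder and a cheap forward proxy (both hypothetical placeholders here), where the standard-normal prior on the latent variables supplies the regularization:

```python
# Hedged sketch of latent-space inversion with a trained VAE decoder.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_latent, n_param, n_data = 20, 500, 50
A = rng.normal(size=(n_param, n_latent))     # hypothetical decoder weights
G = rng.normal(size=(n_data, n_param))       # hypothetical linear forward proxy

def decoder(z):
    """Stand-in for the trained VAE decoder: latent z -> parameter field m."""
    return np.tanh(A @ z)

def forward_model(m):
    """Stand-in for the GCS forward simulator: parameters m -> predicted data d."""
    return G @ m

d_obs = forward_model(decoder(rng.normal(size=n_latent)))   # synthetic observations

def objective(z):
    # Data misfit plus a Gaussian prior on z; the N(0, I) prior on the latent
    # variables is what provides the automatic regularization.
    resid = forward_model(decoder(z)) - d_obs
    return 0.5 * resid @ resid + 0.5 * z @ z

result = minimize(objective, x0=np.zeros(n_latent), method="L-BFGS-B")
m_estimate = decoder(result.x)               # inverted parameter field
```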

Poster #7: Using Physics Models to Train Neural Networks to Manage Reservoir Pressures

Presenters: Dylan Harp, Dan O’Malley

No Video Available

We evaluate the feasibility of using physics-informed machine learning (PIML) for subsurface energy-related pressure management. Pressure management is important when fluids are being injected into and/or extracted from the subsurface, to avoid induced seismicity and unwanted migration of fluids. We develop a PIML framework that trains a neural network to manage pressures by determining extraction rates for arbitrary reservoir conditions (e.g., transmissivity and storativity).

To ensure that enough training scenarios could be executed to fully evaluate the feasibility of PIML for pressure management, we implement an automatically differentiable analytical model of pressure head within the PIML framework. We verify the automatic differentiation by also training the neural network using finite-difference approximations of the physics-model parameter gradients. On a simple single-injector, single-extractor, single-critical-location scenario, we evaluate the effect of the size of the training dataset (the number of physics-model scenarios used in training) and the training batch size (the number of physics-model scenarios used to update the neural network coefficients, i.e., how the training dataset is binned for training). For an equivalent number of model evaluations, the larger training dataset took less time to train and produced a neural network that was better able to control pressures. While larger training batch sizes ran faster, they produced neural networks that were not able to control pressures as well. We evaluate the difference in the time to calculate gradients as a function of the number of physics-model parameters using automatic differentiation and forward and central finite-difference gradient approximations.
This provides an anecdotal example of the number of parameters required for automatic differentiation to become more efficient than finite difference gradient approximations. We demonstrate the approach on a more complicated scenario involving 10 injectors, 10 extractors, and 4 critical locations. Each injection or extraction well and critical location can have a unique effective transmissivity/storativity combination, similar to what would be found from pumping test analyses in a heterogeneous reservoir. Along with the 10 variable extraction rates, this results in 190 physics model parameters for which gradients are required, well beyond the point where automatic differentiation becomes more efficient. Here again, we verify the automatic differentiation of the physics model parameters by comparison to training with finite difference gradient approximations. Automatic differentiation and finite difference gradient approximations produce essentially identical results for our simple and complicated scenarios.
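
As an illustrative sketch of the gradient verification described here, one can compare reverse-mode automatic differentiation against central finite differences on a toy objective; the loss below is a placeholder, not the analytical pressure-head model used in the poster.

```python
# Hedged sketch: verify automatic differentiation against central finite differences.
import jax
import jax.numpy as jnp

def loss(params):
    # Toy stand-in for the pressure-management objective as a function of
    # physics-model parameters (e.g., extraction rates).
    return jnp.sum(jnp.sin(params) ** 2 + 0.1 * params ** 2)

def central_fd_grad(f, params, eps=1e-3):
    grads = []
    for i in range(params.size):
        e = jnp.zeros_like(params).at[i].set(eps)
        grads.append((f(params + e) - f(params - e)) / (2 * eps))
    return jnp.stack(grads)

params = jnp.linspace(-1.0, 1.0, 10)
g_ad = jax.grad(loss)(params)           # one reverse-mode pass, cost nearly independent of size
g_fd = central_fd_grad(loss, params)    # 2 * n_params function evaluations
print(jnp.max(jnp.abs(g_ad - g_fd)))    # should agree to within finite-difference/float error
```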

Poster #8: Applicability Study of ConvLSTM Models for Reservoir Management

Presenters: Xiongjun Wu, Chung-Yan Shih

Video

ConvLSTM has been successfully used for complete time-series profile prediction in carbon storage applications. Our study indicates that univariate models predicting a single variable outperform multivariate models predicting multiple outputs simultaneously, while the modified affected-range scheme shows good performance in sink/source-type scenarios, such as predicting the water production rate of a well.

Poster #9: SMART Model Comparison App

Presenter: Brandon Hill

Video

The SMART Model Comparison App is a framework for the standardization and comparison of machine learning models and their predictive performance. The standardization of inputs, outputs, and model classes allows for an easy apples-to-apples comparison of multiple machine learning models in one place. The aim of this app is to allow different users to compare model performance by several criteria, including prediction speed, overall (global) predictive ability, and grid-level (local) predictive ability.

Poster #10: Improving Our Ability to “See” into the 3-D Subsurface using Deep Learning

Presenter: Bicheng Yan

Video

An efficient Physics-Constrained Deep Learning Model is developed for predicting three-dimensional multiphase fluid flow and transport in porous media. The model architecture fully leverages the spatial-topology predictive power of deep convolutional neural networks, and it is coupled with an efficient physics-based smoother for image outputs that require continuity at the pixel level.

Furthermore, the transient regions in the images are heavily weighted to guide the training process so that the model can capture the fluid-flow dynamics in those regions. Similar to physics-driven simulators, the model is fed with multiple data sources, such as rock and fluid properties and fluid extraction/injection controls, and generates fluid-movement images at arbitrary time steps.
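
A minimal sketch of how transient regions might be up-weighted in the training loss; the weighting rule below is an illustrative assumption, not the authors' exact scheme.

```python
# Hedged sketch of a transient-region-weighted training loss.
import torch

def weighted_mse(pred, target, prev_target, w_transient=10.0, threshold=1e-3):
    # Cells whose value changed appreciably since the previous time step are
    # flagged as "transient" and weighted more heavily than quiescent cells.
    transient = (target - prev_target).abs() > threshold
    weights = 1.0 + (w_transient - 1.0) * transient.float()
    return (weights * (pred - target) ** 2).mean()
```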

Poster #11: Graph Neural Network (GNN) for Property Prediction

Presenter: Michael Riedl

Video

Graph Neural Networks (GNNs) are an emerging deep learning architecture useful for making predictions across nodes and edges of a graph. For the task of approximating full physics simulator outputs across grids of cells, the GNN architecture is a natural fit and has advantages over other modeling approaches. The Battelle team tested the GNN model on a toy CO2 injection problem to evaluate its performance both spatially and temporally. The results were promising, but also revealed some opportunities for improving performance.

Poster #12: Physics-informed deep learning for prediction of CO2 storage site response

Presenter: Parisa Shokouhi

Video

We demonstrate a physics-informed deep learning method that uses deep neural networks but also incorporates flow equations to predict a carbon storage site's response to CO2 injection. A 3D synthetic dataset is used to demonstrate the effectiveness of this modeling approach. The model approximates the temporal and spatial evolution of CO2 pressure and saturation and predicts the water extraction rate over time (outputs), given the initial porosity, permeability, and injection rate (inputs).
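
A generic form of the physics-informed training objective, given as a hedged sketch (the poster's exact loss terms and weighting may differ):

```latex
\mathcal{L}(\theta) =
\frac{1}{N_d}\sum_{i=1}^{N_d}\bigl\lVert u_\theta(\mathbf{x}_i, t_i) - u_i \bigr\rVert^2
+ \lambda\,\frac{1}{N_r}\sum_{j=1}^{N_r}\bigl\lVert \mathcal{R}\!\left[u_\theta\right](\mathbf{x}_j, t_j) \bigr\rVert^2,
```

where u_\theta denotes the network prediction (pressure, saturation, extraction rate), \mathcal{R}[\cdot] is the residual of the governing flow equations evaluated at collocation points, and \lambda weights the physics term against the data misfit.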

Poster #13: Deep learning applied to Distributed Acoustic Sensing (DAS) data for reservoir strain mapping

Presenters: Verónica Rodríguez Tribaldos, Omotayo Omosebi

Video

Distributed Acoustic Sensing (DAS) converts fiber-optic cables into massive arrays of strain-rate sensors, enabling continuous monitoring of strain-rate changes at high spatial resolution (< 1 m) along the length of the wellbore. At the low-frequency limit (< mHz), DAS is extremely sensitive to quasi-static strain (rate), which makes it an excellent tool for continuous monitoring of in-situ deformation and stress during reservoir operations. In this work, we explore the application of deep learning approaches to low-frequency DAS-derived strain datasets from sparse boreholes to map the evolution and spatio-temporal distribution of strain within a reservoir.

An example dataset from a decameter-scale Enhanced Geothermal System (EGS) hydraulic fracturing experiment is used to explore the application of a multi-layer perceptron supervised machine learning algorithm. Preliminary results show that a single-layer neural network can reliably reproduce the dataset, including new data, without over-designing the network.

Poster #14: Generative Adversarial Imputation Networks (GAINs) for downscaling geophysical attributes

Presenters: Savini Samarasinghe, Hua Wang, Yaoguo Li, Jyoti Behura, Manika Prasad

Video

The goal of this research is to obtain high-resolution geophysical attributes by posing the problem as a missing value imputation question. Here we assume that low resolution field-scale geophysical attributes represent possibly noisy, sparse observations of a higher resolution (e.g., ~10m scale) model and the remainder of that model is missing. We use deep learning, specifically Generative Adversarial Imputation Networks, to fill these missing values. We present a preliminary experiment with the Kimberlina simulation, which enables us to understand the utility, limitations, and areas of improvement for this approach.

Poster #15: Data integration for engineered completion design in the Marcellus shale

Presenters: Liwei Li, Tim Carr

Video

Maximizing stimulated reservoir volume is one of the primary hydraulic fracturing concerns for economic production from horizontal shale gas or oil wells. We developed a workflow to evaluate the optimal placement of multiple clusters in each stage to enhance production per lateral. The proposed engineered completion design aims to address the challenges posed by Marcellus subsurface geologic complexities, including extensive natural fractures, variations in geomechanical properties, and reservoir characteristics.

Poster #16: A comparison of tree-based and neural-network ML models on the 2D and 3D toy problems

Presenters: Ryan Johnson, Carlos Oroza

Video

Fast predictions are critical for the SMART platform; therefore, quantifying the tradeoff between machine-learning algorithm complexity and training/prediction time is an essential first task. We began by comparing a relatively simple tree-based ensemble algorithm (Random Forest) against a higher-complexity neural network model (MLP) using the 2D and 3D Toy Problems. Each algorithm is used to predict pressure, CO2 saturation, and water extraction rate.

We find the tree-based model to be significantly faster than the neural-network model (by up to two orders of magnitude), but the accuracy tradeoff is significant on the simple 2D problem. However, the Random Forest's accuracy surpasses the MLP model's accuracy on the 3D problem. Simpler tree-based models therefore show potential for striking a balance between speed and accuracy in large-scale SMART tasks.
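
A minimal sketch of such a speed/accuracy comparison with scikit-learn; the synthetic features, target, and model settings below are placeholders, not the SMART toy problems.

```python
# Hedged sketch of a Random Forest vs. MLP speed/accuracy comparison on synthetic data.
import time
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 8))                                   # e.g., gridded property features
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.01 * rng.normal(size=5000)   # stand-in target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [
    ("RandomForest", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("MLP", MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500, random_state=0)),
]
for name, model in models:
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    print(f"{name}: train {time.perf_counter() - t0:.2f} s, R2 {model.score(X_te, y_te):.3f}")
```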

Poster #17: CarbonSAFE Simulation Model and the ReGrid Package

Presenters: Nathan Moodie, Alec Nelson, Wei Jia

Video

The SMART Initiative will leverage a regional geologic carbon storage model developed under the CarbonSAFE Rocky Mountain Phase I program to aid in the development of fast-predictive models. The model represents the Glen Canyon Formation and associated sealing formations on the Colorado Plateau’s western edge in central Utah. The primary storage reservoir is the Navajo Sandstone, the upper member of the Glen Canyon Formation.

The Navajo Sandstone is laterally continuous across the region and has high injectivity. ReGrid is a Python package for converting CMG and Eclipse data, including reservoir mesh geometry, model inputs, time-series simulation outputs, and well information, into formats more easily utilized by machine learning algorithms. The package converts the simulation data into two formats: VTK, for creating powerful custom visualizations, and NumPy, for easy integration into data applications.

Poster #18: Neural Network-Based Surrogate Models for Joint Prediction of Reservoir Pressure and CO2 Saturation

Presenter: Yash Kumar

Video

Work to date has involved the development of surrogate models for CO2 geologic storage using multilayer perceptron (MLP) and long short-term memory (LSTM) neural networks that are capable of accurate prediction of spatio-temporal outputs of CO2 saturation and pressure in both 2D and 3D spaces. Synthetic training datasets were developed using CMG-GEM simulations (provided by U.T. Austin).

This work supports SMART-CS Task 5, with the goal of developing a virtual learning environment for exploring and testing strategies to optimize reservoir development, management, and monitoring prior to field activities. The MLP and LSTM neural network configurations are being developed to enable easy modification, scalability, and integration with the broader Task 5 workflows.

Poster #19: Using Surface Deformation and Machine Learning to Determine State of Stress Changes

Presenters: Sarah Roberts, Andrew Delorey, Paul Johnson, David Coblentz

Video

Tracking the evolution of the state of stress within developed reservoirs is important for optimizing reservoir use and understanding seismic hazard. We develop a novel method to obtain a full description of the evolving stress field, using machine learning to quantify subsurface stress-tensor changes from satellite observations. We train a convolutional neural network using training data produced from analytical realizations of surface displacements mapped to stress, constrained by possible stress sources in the reservoir.

We beta test the new method on the Coso Geothermal field, where subsurface pressure changes due to fluid injection and production serve as analogues to pressure changes due to CO2 injection. With generalized training, our method could be employed at any reservoir to provide field operators with a near real-time model of the evolving stress field.

Poster #20: Application of CO2BRA Relative Permeability Data to Estimate CO2 Storage Efficiency using TOUGH Simulations

Presenters: Paul Holcomb, Mohammad Haeri, Evgeniy Myshakin, Johnathan Moore

Video

To understand long-term storage potential in the subsurface, the ability to quantify relative permeability and accurately simulate CO2 propagation within the storage reservoir is necessary. Using an unsteady-state flow methodology coupled with CT imaging, empirical data on saturation and relative permeability in multiple rock types were gathered into a database called CO2BRA.

This data was then used to model CO2 movement in Berea sandstone using the TOUGH3 multiphase flow modeling software, along with custom-designed solutions for relative permeability data extraction and curve fitting. Current efforts are focused on simulating flow in an array of potential depositional environments, as well as applying machine learning both to more accurately represent relative permeability and to better match simulation results to experimental data.

Poster #21: Fast Forward Model Development Using Image-to-Image Translation

Presenter: Diana Bacon

Video

Image-to-image translation using a conditional deep convolutional generative adversarial network is applied to the task of replicating the output of a 3D multiphase flow simulator.

Poster #22: Marcellus Shale Energy and Environment Lab (MSEEL) Data

Presenter: Tim Carr

Video

The MSEEL wells contain many terabytes of subsurface data that is accessible online and available for big data processing. Data availability is one of the biggest challenges faced by the SMART community for modeling and analysis. Collecting, integrating, and intuitively managing data is a time-consuming process, but one which is fundamental to further analyses.

Poster #23: Physics-Consistent Data-driven Seismic Inversion

Presenters: Youzuo Lin, Neill Symons

Video

Solving the seismic full-waveform inversion (FWI) problem can be challenging due to its ill-posedness and high computational cost. We develop a new hybrid computational approach to solve FWI that combines physics-based models with data-driven methodologies.

In particular, we develop a data augmentation strategy that can not only improve the representativity of the training set, but also incorporate important governing physics into the training process and therefore improve the inversion accuracy. We demonstrate our method with an example of monitoring subsurface carbon sequestration leakage. Our method yields higher accuracy and greater generalization ability than purely physics-based and purely data-driven approaches.

Poster #24: Resolution Enhancement using Autoencoders

Presenters: Jyoti Behura, Manika Prasad

Video

We exploit geological principles, employing autoencoders to enhance the resolution of field-scale attributes derived from field data up to log scale.

Poster #25: Development of the Kimberlina 1.2 Multi-Scale, Multi-Physics Data Set

Presenter: David Alumbaugh

Video

The Kimberlina 1.2 Reservoir Model was created as part of the National Risk Assessment Partnership (NRAP), funded by the Carbon Storage portion of DOE’s Fossil Energy program. We have taken the flow-model results, which simulate CO2 injection into the Vedder Sandstone reservoir at approximately 3 km depth, and converted the hydrologic properties at 0 and 20 years after the start of injection into 3D volumes of density, electrical resistivity, and seismic velocity.

These 3D property volumes are currently being run through various forward modeling algorithms using realistic data acquisition arrays to create synthetic geophysical measurements, which will be used to test geophysical imaging algorithms under development for Task 2 of the SMART Initiative. To provide the full range of scales present in the data analysis to be included in Task 2 (from the sub-millimeter pore scale up to tens-of-meters resolution at the geophysical measurement field scale), laboratory imaging experiments are being conducted at NETL during CO2 injection on core samples of the Vedder Sandstone, and pseudo-well logs are being created by combining the model properties in the 3D property volumes at hypothetical well locations with real variations observed in well logs collected in the Kimberlina 1 well. This poster presentation provides examples of the geophysical property models as well as the synthetic and real data that will be available once the simulations have been completed.

Poster #26: Material Balance Based Operational Rapid History Matching: A SACROC Test Case

Presenters: Guoxiang Liu, Xiongjun Wu, Jeffery Ilconich, Grant Bromhal

Video

This poster presents a material-balance-based approach to support associated CO2 storage decision making in a rapid fashion. The method focuses on operational support by providing injection allocation, extraction response, and reservoir communication based on a study of injection and production data.

The proposed approach offers rapid forecasting and optimization of CO2 storage and utilization decisions alongside daily operational monitoring by means of reservoir surveillance. The workflow and a preliminary test based on coarse SACROC simulation data will be discussed.

Poster #27: Sensitivity and hyperparameter optimization for CNN-LSTM based architectures for CO2 flow prediction

Presenters: Joseph Hogge, Hongkyu Yoon, Hector Mendoza

Video

This work presents a comparison of multiple machine learning (ML) methods to predict CO2 flow and production rates on 2D and 3D synthetic problems. ML architectures have been constructed using convolutional neural networks (CNNs) and long short-term memory (LSTM) networks. A range of hyperparameters and sizes of CNN-LSTM architectures (e.g., encoder-decoder, U-Net) are explored to evaluate prediction accuracy. In particular, we highlight the parameter optimization and the impact of hyperparameters on prediction accuracy.

Poster #28: Fracture Dynamic Understanding from Test and Monitoring Datasets: Tracer Data Use Case

Presenters: Guoxiang Liu, Chung Yan Shih, Abhash Kumar, Richard Hammack, Jeffery Ilconich, Grant Bromhal

Video

This poster presents a proposed methodology for using multiple levels of data-driven approaches to help identify the fracture network and support rapid decision making. Static data, hydraulic fracturing data, and dynamic monitoring/testing data will be applied to fracture network visualization and imaging, depending on data availability over the field development stages.

As a use case, a preliminary study of tracer data will be discussed, showing how each level of data supports the corresponding level of decision making regarding operations and/or further data acquisition, with application to infill and/or spacing design and optimization within Phase 1.

Poster #29: ML applied to DAS Monitoring Data to Detect Leakage Indicators

Presenters: Mark Kelley, Priya Ravi Ganesh

Video

This project aims to determine whether DAS monitoring data can be used to detect anomalous changes in stress-related parameters (strain, stress, displacement, pressure) that could lead to fluid leakage out of the reservoir via injection-induced or enhanced fractures/faults. The approach involves 1) testing proof of concept using synthetic (modeling) data and a real geologic reservoir-caprock system (Chester 16 reef, Michigan); 2) creating synthetic training data set(s) that include pressure, strain, stress, and displacement for a long-term injection period that causes fracturing/faulting and “focused” fluid migration out of the reservoir; and 3) testing different ML algorithms.

Poster #30: Forecasting of CO2 flow using variational autoencoder with ensemble-based data assimilation

Presenters: Hongkyu Yoon, Jonghyun Harry Lee

Video

This work presents a preliminary result demonstrating a framework for machine learning-driven CO2 modeling that combines a variational autoencoder (VAE) with ensemble-based data assimilation (EnDA). For fast testing and optimal implementation of the VAE, CO2 saturation and pressure distributions are simulated using an open-source percolation model (pyPERC) and a single-phase flow model (MODFLOW).

The encoder in the VAE acts as a nonlinear dimension-reduction method that determines a low-dimensional latent space z with encoded/compressed information, potentially performing better than traditional linear dimension-reduction methods such as principal component analysis (PCA) and singular value decomposition (SVD). For data assimilation, the latent variables z obtained through the encoder are continuously updated conditioned on the available data, instead of the original high-dimensional state-parameter space, enabling effective sampling with better convergence in the low-dimensional subspace. Overall, this work demonstrates a framework combining a VAE with EnDA for fast forecasting of CO2 saturation and pressure plume development.
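
Below is a minimal sketch of a generic ensemble update applied in a VAE latent space; the decoder, observation operator, and dimensions are hypothetical placeholders, and the poster's EnDA scheme may differ in detail.

```python
# Hedged sketch: generic ensemble update of VAE latent variables z conditioned on data.
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_latent, n_state, n_obs = 100, 20, 200, 15

W = rng.normal(size=(n_latent, n_state))          # hypothetical fixed decoder weights
decode = lambda Z: np.tanh(Z @ W)                 # latent -> state field (e.g., saturation)
H = rng.normal(size=(n_state, n_obs))             # hypothetical observation operator

Z = rng.normal(size=(n_ens, n_latent))            # prior latent ensemble ~ N(0, I)
D = decode(Z) @ H                                 # predicted observations per member
d_obs = decode(rng.normal(size=(1, n_latent))) @ H  # synthetic "observed" data
R = 0.01 * np.eye(n_obs)                          # observation-error covariance

# Kalman-type gain computed from ensemble covariances
Za, Da = Z - Z.mean(axis=0), D - D.mean(axis=0)
C_zd = Za.T @ Da / (n_ens - 1)
C_dd = Da.T @ Da / (n_ens - 1)
K = C_zd @ np.linalg.inv(C_dd + R)

# Update each member with perturbed observations, then decode back to state fields
D_pert = d_obs + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens)
Z_post = Z + (D_pert - D) @ K.T
posterior_states = decode(Z_post)
```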

Poster #31: Production Forecast in SACROC using Long Short-Term Memory (LSTM) Machine Learning Model

Presenters: Palash Panja, Wei Jia, Brian McPherson

Video

Forecasting production performance from a reservoir with complex geometry, geologic parameter distributions, well completions, and operations is a challenging task. In the SACROC model, a 5-spot injection scheme is applied with water-alternating-gas injection. The coarse-grid model contains 23 production wells and 22 injection wells. Water and gas injection are alternated annually.

The rate curves of oil, gas, and water show unusual trends due to the irregular opening and shut-in of the production wells, mimicking a real field scenario. The model is run for 30 years (360 months). Long short-term memory (LSTM), which has recently become popular for time-series prediction, is adopted as the machine learning algorithm for rate prediction; several studies have shown LSTM to be an efficient model for time-series data, and it can predict outputs over several mutually dependent time steps. A stacked LSTM model (2 layers) is trained with 300 months (25 years) of data and then tested on predicting the next five years (60 months). Three LSTM models with different input parameters are compared for each rate (oil, water, and gas): the first uses only the rate as input, the second uses the rate and the respective bottomhole pressure (BHP), and the third uses the rate, the respective BHP, and the BHPs of neighboring injection wells. The mean square error (MSE), mean absolute error (MAE), relative error (RE), mean absolute percentage error (MAPE), and coefficient of determination (R²) are used for performance analysis.
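
A minimal sketch of a two-layer stacked LSTM of this kind in Keras is shown below; the window length, layer sizes, and feature set are illustrative assumptions rather than the trained configuration.

```python
# Hedged sketch of a 2-layer stacked LSTM for monthly rate forecasting.
import numpy as np
import tensorflow as tf

n_lookback, n_features = 12, 3        # e.g., rate, well BHP, neighboring-injector BHP

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True,
                         input_shape=(n_lookback, n_features)),   # first LSTM layer
    tf.keras.layers.LSTM(64),                                     # second (stacked) LSTM layer
    tf.keras.layers.Dense(1),                                     # next-month rate
])
model.compile(optimizer="adam", loss="mse", metrics=["mae", "mape"])

# Hypothetical training data: sliding windows over a 300-month history
X_train = np.random.rand(288, n_lookback, n_features).astype("float32")
y_train = np.random.rand(288, 1).astype("float32")
model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=0)
```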

Poster #32: The Critical State of Stress Preceding the Prague M5.7 Earthquake

Presenters: Richard Alfaro-Diaz, Ting Chen, Xaiofei Ma

Video

Induced seismicity, earthquakes caused by anthropogenic activity, has increased significantly in the last decade as a result of practices related to oil and gas production. Large earthquakes have been shown to promote the triggering of events through static stresses caused by physical movement along the fault and, remotely, through the passage of seismic waves (dynamic triggering). To understand the mechanisms leading to earthquake failure, we investigate Prague, Oklahoma, a region where natural, induced, and dynamically triggered events occur.

We analyze ~1.5 years of data (2010–2012) from EarthScope’s USArray Transportable Array (TA), implementing a machine learning convolutional neural network to develop a high-resolution local earthquake catalog. We then investigate 254 teleseismic earthquakes with M ≥ 6 and flag events that significantly increase seismicity within Prague, Oklahoma, following the arrival of the teleseismic waves. We find evidence of triggered seismicity leading up to the 2011 Mw 5.7 Prague, Oklahoma, earthquake, indicating a critical state of stress prior to the Prague event.

Poster #33: The Bell Creek Field Data Set for Testing of SMART Initiative Algorithms

Presenters: Shaughn Burnison, Beth Kurz, Dustin Crandall, Johnathan Moore, Bryan Tennant, Manika Prasad, Similoluwa Oduwole, Mathias Pohl

Video

Through the Plains CO2 Reduction Partnership (PCOR) Partnership, led by the Energy & Environmental Research Center (EERC) in partnership with Denbury Resources Inc. (Denbury), a variety of geological, geophysical and geochemical data from the Bell Creek oil field of southeastern Montana was collected to evaluate incidental carbon dioxide (CO2) storage associated with a commercial enhanced oil recovery (EOR) project.

To provide data from a real-world CO2 storage site to support the goals and objectives of the SMART Initiative, the EERC is working to create a curated dataset for a portion of the Bell Creek field with permission from Denbury. This poster provides an overview of the curated Bell Creek Field dataset that will be used for evaluation and application of machine learning approaches in the SMART Initiative, and previews some of the Bell Creek reservoir core plug testing already being performed by NETL and CSM to support SMART Task 2.

Poster #34: Selecting Geologic Realizations for Dynamic Reservoir Modeling using Diffusive Time of Flight

Presenters: Srikanta Mishra, Valerie Smith, Anna Wendt

Video

Potential CO2 storage reservoirs were represented by a Gulf of Mexico (GoM) model and a Permian Basin SACROC model. Fifty geologic realizations for both sites were ranked using the diffusive time of flight (DTOF) method. Candidate model realizations at the P10, P50, and P90 levels were selected for dynamic reservoir modeling.

Poster #35: Fuzzy c-means clustering for multiphysics geophysical inversion

Presenter: Yaoguo Li

Video

Imaging CO2 saturation in carbon storage monitoring requires multiphysics approaches that quantitatively integrate different geophysical data sets. Each geophysical method is primarily sensitive to one physical property, and each physical property in turn depends upon the CO2 saturation differently. Statistical petrophysical data that characterize these physical properties in different zones in a reservoir constitute an important data set.

It is logical to integrate this data set with geophysical data sets for effective rock-property and CO2 imaging. The guided fuzzy c-means (FCM) clustering inversion provides such a framework for integration by using data classification from machine learning. This e-poster provides a brief summary of guided FCM joint inversion and its potential application in the SMART-CS Initiative.
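
As a hedged sketch of the clustering term that a guided FCM inversion adds to the usual geophysical objective (notation and details may differ from the authors' formulation):

```latex
\Phi_{\mathrm{FCM}}(\mathbf{m}, \mathbf{c}, \mathbf{u}) =
\sum_{i=1}^{M}\sum_{k=1}^{C} u_{ik}^{\,q}\,\lVert m_i - c_k \rVert^2
+ \eta \sum_{k=1}^{C} \lVert c_k - t_k \rVert^2,
\qquad \sum_{k=1}^{C} u_{ik} = 1,
```

where m_i are the recovered physical-property values in cell i, c_k the cluster centers, u_{ik} the fuzzy memberships, q > 1 the fuzzification exponent, and t_k target centers supplied by the petrophysical data (the "guiding" term); this term is minimized jointly with the geophysical data misfit and regularization.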

Poster #36: Combining Physics-Based and ML-based Geomechanical Models

Presenters: Jeff Burghardt, Ting Bao

Video

This poster will present an approach for combining and training physics-based and machine-learning based geomechanical models. We will briefly discuss the planned approach and the progress and challenges faced so far.

Poster #37: CCS Well Bottomhole Pressure (BHP) Prediction Sensitivity to Simulation Constraints

Presenters: Thomas McGuire, Jonathan Garcez, Luis Ayala

Video

Simulations of carbon storage reservoirs have shown sensitivity to the simulation constraints, especially for parameters such as injection well bottomhole pressure (BHP). Since control of injection well BHPs is extremely important for safe and compliant Class VI injection, Penn State and the EERC conducted sensitivity studies with the current SMART-CS simulations. These studies verified that the current simulations generate expected (physical) reservoir responses and examined the utility of local grid refinement in cases where non-physical reservoir responses are observed.

Poster #38: Natural fracture characterization of the Marcellus Fm at the MSEEL site for discrete fracture network modeling

Presenters: Michael Gross, Jeffrey Hyman

Video

The role played by natural fractures in unconventional resource plays varies among basins and even within a particular basin. One common theme among unconventionals, which differentiates them from conventional fractured reservoirs, is that the natural fracture population prior to drilling does not constitute a permeable flow network. This may result from a sparse, poorly connected natural fracture network, or alternatively a more substantial fracture network that has been sealed by mineralizing fluids.

The Eagle Ford Formation of SW Texas is an example of the former, whereas the Wolfcamp (Permian Basin) and Marcellus (Appalachian Basin) Formations reflect the latter. This poster outlines a workflow for collecting fracture data from the subsurface for incorporation into a discrete fracture network (DFN) model. The DFN, in turn, models fracture connectivity and provides the framework for reservoir simulations. The study site is the DOE-sponsored MSEEL 1 experiment in West Virginia, which recovered ~110 feet of core from the Marcellus Formation and collected borehole image logs from vertical and lateral wells, along with microseismic and fiber-optic measurements during hydraulic fracturing. A conceptual model of fracture architecture and mechanical stratigraphy was developed for the site, which served as input for creating the DFN.
