
Resources for Artificial Intelligence in geosciences

Advice and news by the EAGE A.I. community

A lot is happening in the EAGE digital world: on this page you can find highlights from the latest initiatives on machine learning, A.I. and digitalization involving EAGE members worldwide, together with weekly contributions on Artificial Intelligence from the EAGE A.I. special interest community.

Digitalization Opportunities and Resources

The EAGE Short Courses catalogue has a section entirely dedicated to Data Science for geoscientists and engineers interested in learning new skills. From elementary to advanced level, EAGE Education offers various opportunities to approach the world of digitalization. 

Discover EAGE’s new Extensive Online Courses

The digital transformation means change in every industry and enterprise, for everyone. EAGE is proud to accompany this process by providing a platform for energy experts and data scientists to discuss challenges and solutions. Join the conversation at one of the following meetings:

  • EAGE Digital 2023 (20-22 March 2023)
  • or browse for more in the EAGE calendar of events!

Find events

More information and updates are shared in the EAGE Digital Newsletter. Sign up to receive it in your mailbox every month!

Sign up

A.I. Community - Resources

Advice by the EAGE A.I. Committee

The A.I. Committee discusses tips, techniques, learning experiences and whatever else is of interest to stay informed and up to speed with this emerging field. By sharing weekly tips on A.I., the Committee aims to help geoscientists maintain their employability through the industry's many reorganization cycles, as the world comes to require different skills.

Click here to read the latest advice shared by the Committee!

A.I. topic of the week

Numba
An easy alternative to make a function in Python very fast

Python is very popular: it is fairly user friendly and gives easy access to many application libraries. The main drawback is inefficiency (CPU, RAM); thanks to its object orientation, even the simplest items carry many attributes (any integer has 71).

Several options exist to write a fast function:

1. Write it in C/C++ or Fortran and place it in an extension module
2. Use a fast library (e.g. PyTorch)
3. Use the Numba compiler to translate the function into optimized machine code

Numba is the simplest way: the compiler is invoked via a function decorator ('@jit').

from numba import jit

@jit(nopython=True)  # compile the whole function to machine code
def func(x):
    ...  # function body
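
As a minimal, hedged illustration of the workflow (the function and array here are our own toy example, not from the tip):

import numpy as np
from numba import jit

@jit(nopython=True)  # nopython mode: the whole function must compile to machine code
def loop_sum(a):
    # Explicit loops are slow in pure Python but fast once Numba compiles them
    total = 0.0
    for x in a:
        total += x
    return total

a = np.random.rand(10_000_000)
loop_sum(a)         # the first call triggers compilation (a one-off cost)
print(loop_sum(a))  # subsequent calls run at machine-code speed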

Make Python code faster with Numba
Multi-task learning
Traditionally, training a machine learning model means fitting a single objective or solving a single task. An appealing alternative is to solve multiple related tasks simultaneously within the Multi-task learning (MTL) framework. The motivation behind MTL is to learn shared representations of ideas common to several associated tasks. MTL typically leads to improved generalizability and faster training, because the additional loss terms constrain the optimization search space. For example, given well-log data, one could train a neural network to fit a classification loss for facies and a regression loss for another observed property, since these tasks describe the same medium and complement each other. The catch, however, is that a sensible MTL implementation should include some task-balancing mechanism; otherwise one outweighing task might dominate the training, neglecting the contribution of the other tasks.

There is a variety of flavors of MTL implementation, including architectural solutions for weight sharing and strategies for task selection and balancing. Please refer to this survey of MTL methods, this overview, and this article for a deeper dive into the subject.
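To make the well-log example concrete, here is a hedged sketch of a two-head network with a hand-weighted joint loss (the architecture, loss weights and toy tensors are our own illustrative assumptions):

import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    # Shared trunk, one classification head (facies) and one regression head (e.g. porosity)
    def __init__(self, n_features, n_facies):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.cls_head = nn.Linear(64, n_facies)
        self.reg_head = nn.Linear(64, 1)

    def forward(self, x):
        h = self.trunk(x)
        return self.cls_head(h), self.reg_head(h)

net = TwoHeadNet(n_features=8, n_facies=5)
opt = torch.optim.Adam(net.parameters())
x = torch.randn(32, 8)              # toy batch of log samples
y_cls = torch.randint(0, 5, (32,))  # facies labels
y_reg = torch.randn(32, 1)          # regression target

logits, pred = net(x)
# Fixed weights are the simplest balancing scheme; adaptive schemes tune them during training
loss = 1.0 * nn.functional.cross_entropy(logits, y_cls) + 0.5 * nn.functional.mse_loss(pred, y_reg)
opt.zero_grad()
loss.backward()
opt.step()
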
Codex
A Machine Learning tool to translate language into programming syntax

OpenAI, the R&D company that developed GPT-3, is about to launch a beta release of its 'Codex' software, which uses machine learning to translate English into code. According to the developers' own comments the system is still at a relatively early stage, and they are inviting interested parties to participate in ongoing trials.

The system is in principle able to transcribe natural language into 12 different programming languages.
As it is developed further, it could become a valuable tool for speeding up development as well as for learning to program (e.g. getting answers and examples on the fly).
Image Generation from Text using GANs and NLP Models
Generating high-resolution images with neural networks has been a successful application of deep learning across many domains.
Similarly, natural language processing (NLP) has made significant advances in recent years thanks to innovations such as self-supervised learning and Transformer models.

A recent approach called VQGAN+CLIP combines both of these worlds to allow image generation guided by natural language processing.
This allows one to generate artistic images from short sentences and pre-trained image-generation and NLP models alone.

Here is an article on how to begin generating images from text on your own, using freely available resources such as Google Colaboratory.
Machine learning in production
You have built a machine learning model and wonder how to put it in the hands of demanding users. Far more challenging than an experimental setting, machine learning in production must run continuously on ever-changing data, at minimum cost and with the best possible performance. MLOps is a set of practices that aims to deploy and maintain machine learning models in production reliably and efficiently. To apply your AI skills to real-world problems, you need production engineering capabilities as well.

Coursera course - Machine Learning Engineering for Production (MLOps) Specialization

EAGE workshop - Development of ML Solutions at Scale: Going from proof of concepts to integrated workflows
A.I. accelerating the Energy Transition
To accelerate the energy transition, people are looking ever more towards AI. AI is being developed and deployed across all aspects of renewable energy generation, emissions-reduction applications, and carbon capture and storage. Subsurface applications already include AI to process and interpret well logs to identify good CO2 storage sites, AI to de-risk offshore wind turbine placement, and AI to characterise heat flow and energy generation for geothermal operations.

Upcoming EAGE events will further highlight ongoing successes in these areas.

Skills in data science will enable geoscientists to continue to grow with our fast-evolving energy industry and to help meet the decarbonisation challenge.
A.I. in the European Football Championship
As a European geoscience organization we cannot ignore a major event that gets many people excited for a whole month: the European Football Championship (we'll cover the Eurovision Song Contest next year).
AI and data analysis play an important role in modern sports, helping teams understand their own players, the opponent and the impact of various choices.
After their success with AlphaGo, Google's researchers are looking at football "with the goal of better addressing new scientific challenges involved in the analysis of both individual players' and coordinated teams' behaviors".

More specifically for the Championship, we are of course interested in predicting the outcome of the matches. Kickoff.ai uses machine learning to predict the results of football matches with Bayesian inference (scientific references are on the site).

Finally, if you want to get hands-on, there is an active Kaggle competition by Manchester City and Google in which the objective is to create an AI agent that can play football.
From Recurrent Neural Networks to Markov Chains
Karpathy's blog provides a fun introduction to Recurrent Neural Networks (RNNs) through the character-by-character generation of text.

Johnson, in his Stanford University course, discusses the theory and applications of RNNs and LSTMs (Long Short-Term Memory) in more depth.

You might ask: what is in it for a geoscientist? Well, a vertical sequence of geological facies can be regarded as a sequence of characters! Generating synthetic sequences after training an RNN on vertical geological logs is an interesting idea proposed by Talarico, Leao and Grana in their paper "Comparison of Recursive Neural Network and Markov Chain Models in Facies Inversion", available on EarthDoc.
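To illustrate the facies-as-characters idea, here is a hedged toy sketch of a character-level LSTM in PyTorch (the facies encoding and the synthetic log are our own inventions):

import torch
import torch.nn as nn

# Toy facies log as characters: S = sand, H = shale, C = carbonate
log = 'SSSHHHCCCHHHSSSHHH' * 20
vocab = sorted(set(log))
idx = {c: i for i, c in enumerate(vocab)}
x = torch.tensor([idx[c] for c in log[:-1]])
y = torch.tensor([idx[c] for c in log[1:]])  # next-character targets

class CharRNN(nn.Module):
    def __init__(self, n_vocab, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(n_vocab, 8)
        self.lstm = nn.LSTM(8, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_vocab)

    def forward(self, x):
        h, _ = self.lstm(self.emb(x).unsqueeze(0))
        return self.out(h).squeeze(0)

model = CharRNN(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(200):  # tiny training loop: learn which facies tends to follow which
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
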
BFGS (Broyden Fletcher Goldfarb Shanno) parameter optimization
The most common parameter optimization techniques are based on the 1st-order gradient (Gradient Descent, RMSProp, ADAM, etc.). Albeit quite robust, they have some inefficiency, as they only 'see' very limited local information (local gradient, fixed step size).
Less widely advertised optimization concepts are based on 2nd-order derivatives, where the step size is determined by the distance of the data point to the stationary point of the locally derived paraboloid. This gives a drastic reduction in the number of steps. To avoid the very costly Hessian matrix inversion (order n³), the BFGS (Broyden, Fletcher, Goldfarb, Shanno) and related L-BFGS-B (limited-memory, boxed) methods are the most popular in the quasi-Newton class of algorithms.
This is a relatively straightforward method that uses first derivatives of the previous steps to approximate the inverse Hessian directly. PyTorch has a fairly usable implementation.

Read more here.
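As a hedged illustration, SciPy exposes L-BFGS-B through a one-line call (the Rosenbrock test function is our own choice, not from the tip):

import numpy as np
from scipy.optimize import minimize

def rosenbrock(p):
    # Classic banana-shaped test function with its minimum at (1, 1)
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

# L-BFGS-B builds an inverse-Hessian approximation from recent gradient history
result = minimize(rosenbrock, x0=np.array([-1.5, 2.0]), method='L-BFGS-B')
print(result.x)  # should be close to [1., 1.]
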
Self-supervision for object segmentation
An exciting new field is emerging in deep learning in the form of self-supervised methods, a sub-category of unsupervised approaches. At the end of April, Facebook announced DINO, a methodology that combines self-supervision with transformers for object segmentation. Not only was no labelled dataset required, the results from DINO were shown to outperform traditional supervised training procedures. To learn more, check out the Facebook blog, or for the more technically oriented there is their pre-print.

The potential of self-supervised approaches is particularly great in geoscience, where we often cannot obtain labelled datasets. Facebook have released their code for DINO, so if you are interested in trying it out for geoscience applications, you can find it on GitHub.
Accelerate your deep learning locally using PlaidML
Training a neural network is very compute intensive, and any network deep enough, or with complex enough data, will require long training times. The industry workaround is to use virtual machines and virtual GPUs, and to distribute the computation in the cloud. But what if you don't have access to cloud-based computing, perhaps because you are trying out new ideas and don't have a budget for it, or simply because you are on a long flight with no internet access?

The solution is, of course, to use the GPU of your graphics card. But a common problem is that many data scientists use macOS, and Apple ships with AMD graphics chips, not NVIDIA. Most GPU-based computation is built on the CUDA architecture, which relies on NVIDIA GPUs. The problem persists for external GPU units (eGPUs), as macOS only performs well with AMD eGPUs.

Introducing the free library PlaidML: a very simple library that replaces your GPU backend with any GPU of your choice available on your system. After installing PlaidML with pip, you can use a simple command line to direct PlaidML to the unit of your choice (GPU, eGPU, CPU), and with one line of Python code in your code base you change what backend is used for calculation.

PlaidML works with AMD, Intel and NVIDIA GPUs, and is compatible with popular libraries such as Keras, ONNX, and nGraph. My own experiments with PlaidML and Keras on a Radeon 9100 eGPU have resulted in significantly reduced training times for deep learning networks.
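That 'one line of Python' looks roughly like this; a hedged sketch based on PlaidML's documented Keras integration (check the project README for your version):

# One-off shell setup: install the library and pick a device interactively
#   pip install plaidml-keras
#   plaidml-setup

import plaidml.keras
plaidml.keras.install_backend()  # swap Keras onto the PlaidML backend; call this before importing keras

import keras  # from here on, Keras trains on the device chosen in plaidml-setup
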
MLP Mixer
Can we replace CNNs and Transformers with Multi-Layer Perceptrons?

Several recent publications have proposed that the advances in computer vision achieved through new convolutional neural networks and transformer networks could also be achieved by a set of multi-layer perceptrons trained on extremely large datasets.

Yannic Kilcher has created a very nice video explaining the MLP-Mixer publication:

What do you think: Will Transformers replace convolutional networks or is attention really not what we needed after all?
Big Data Analytics Using Lazy Evaluation
Data wrangling refers to the practice of cleaning and shaping data. It typically requires multiple steps, such as removing or substituting missing values, renaming columns, changing data types, or creating new categories from existing ones. These steps are absolutely crucial for machine learning, as the data needs to be prepared before it can be analysed.

There are two approaches to data wrangling. One approach, used by Pandas (an exclusively Python library), is known as the 'eager' evaluation model. Eager means that each operation is applied immediately at the point of call. The drawback of this approach is that no optimisation of the data preparation process can be made, since each step is executed independently. By contrast, libraries that use a 'lazy evaluation' approach build a computation graph and only apply its operations once you ask to collect the data, after the library has optimised the computation graph. The big benefit of this model is that optimisations in terms of memory and calculation efficiency can be made, for instance by splitting the calculation dynamically over several GPUs or CPUs, or by reducing the number of slow operations such as copying data between different memory registers or different GPUs.

Probably the best-known library designed around lazy evaluation is Apache Spark, which is written in Scala but has API bindings in Python and other languages. Because lazy evaluation offers the ability to optimise computational speed and memory use, Apache Spark is the de facto Big Data processing engine in the enterprise world. There are, however, newer libraries out there too, with some promising concepts and easy-to-use Python APIs. Polars (py-polars) is a library written in Rust but with Python APIs, and it is heralded by some as the successor of Pandas. It is capable of doing both eager and lazy evaluation, and its syntax is very similar to Pandas. So for your next project using big data, take a look at Apache Spark and Polars and see if it is time to move away from Pandas.
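
Here is a hedged sketch of what a lazy query looks like in Polars (the file name and columns are invented, and the exact method names can differ between Polars versions):

import polars as pl

# scan_csv builds a lazy query plan instead of reading the file immediately
query = (
    pl.scan_csv('well_logs.csv')      # hypothetical input file
    .filter(pl.col('depth') > 1000)   # can be pushed down into the scan by the optimizer
    .groupby('well')                  # 'group_by' in newer Polars versions
    .agg(pl.col('porosity').mean())
)

# Nothing has been computed yet; collect() optimises the plan and executes it
df = query.collect()
print(df)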

Refer to this recent article for reasons to choose Spark over Pandas
Here is a good article on Polars
Read this for a more in-depth view of lazy and eager (and even greedy) evaluation from a functional programming point of view
Getting started with A.I.
Getting started is often the hardest thing. Fortunately, nowadays there are many sites out there to help you begin, and with open-source data and code there really is no excuse not to get started (yes, even in the oil & gas industry you can find open-source software and data). All you have to do is come up with a new idea (and learn to code). Here are some links to get you started:

Open source software
Open source data
Example of open source seismic data processing, recently used in the 2nd EAGE Machine Learning workshop
Using callbacks to dramatically improve the learning process in Neural Networks
Callbacks are functions passed as arguments to other 'base' functions to monitor and control their performance and outcomes without modifying the source code.
Applied to the training loop of a neural network, they enable customized monitoring (e.g. log files) and automatic intervention, for instance hyperparameter modification, saving model parameters, early stopping, etc. This is done by monitoring selected metrics on the fly and comparing them to preset criteria.
One example, sketched below, is lowering the learning rate when the improvement of a certain validation metric drops below a given value. In short, callbacks bring a significant improvement in both efficiency and hyperparameter optimization compared to a manual trial-and-error approach.
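In TensorFlow/Keras the built-in callbacks make this a few lines (a minimal sketch; the model and data are left out):

import tensorflow as tf

# Halve the learning rate when val_loss stops improving; stop early if it stalls for long
callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3),
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True),
    tf.keras.callbacks.ModelCheckpoint('best_model.keras', save_best_only=True),
]

# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=callbacks)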

Refer to this link for a general discussion of callbacks in Python

Here is a guide on the inbuilt callbacks in TensorFlow. Read more here.
Computer vision
Enabling computers to see and understand images as human vision does, computer vision is one of the most powerful types of AI, and you have almost certainly experienced it in daily life. Recently, computer vision has taken big leaps, driven by the rapid development of AI (especially deep learning and neural networks), computing power, and the significant amount of visual data available. Popular computer vision tasks include image classification, object detection, image segmentation, etc. Despite tremendous advances in real-world applications, helping computers to see turns out to be very challenging, and we are still far from solving it. Are you keen to take your first step and contribute to its latest developments?

Introduction to computer vision
Hands-on tutorial series
Stanford lecture collection
Transformers for NLP and Image Generation
Transformers are a recent neural network architecture that has proven extremely successful in natural language processing (NLP) applications. The architecture is built around the concept of self-attention, which Andrew Ng explains in one of his lectures.

Training these NLP models is extremely demanding in terms of computational resources and can easily require thousands of GPUs for multiple weeks. Luckily, there exist shared pre-trained transformer models for various NLP tasks, provided by institutions like Huggingface and OpenAI.
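Using such a shared model can be as short as this (a hedged sketch with the Hugging Face transformers library; the default model it downloads depends on the library version):

from transformers import pipeline

# Downloads a small pre-trained model on first use; no GPU cluster required
classifier = pipeline('sentiment-analysis')
print(classifier('The seismic inversion results look surprisingly good.'))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]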

Recently these flexible sequence-based models have been extended to the image domain leading to impressive results in image generation and classification.
Physics-informed A.I.
Advances and breakthroughs in neural networks and machine learning take some time to be picked up. The geoscience community was a bit slow to pick up on the developments in convolutional neural networks, but in the last two years it has fully embraced CNNs and U-net-style networks for all sorts of interpretation problems. However, they are not the right tool for inversion-style problems, simply because they lack knowledge of the physics and hence have issues with generalization. Other fields have realized the same thing, hence the development of physics-informed or physics-guided neural networks. This year we will see a lot of those.

1) For a nice introduction see this site and its examples
2) An application to the geosciences
3) For those with more time on their hands, sit back and enjoy a full workshop
Interpretable Deep Networks
A novel approach to make Deep Networks interpretable
It is (nearly) impossible to make direct observations inside the hidden layers (the latent space). A standard DNN builds abstract features purely on statistical grounds, and these can be scattered; this is the main reason for its 'black box' nature.

In a recent (Dec 2020) publication, Zhi Chen et al. of Duke University propose Concept Whitening (CW), a modification to selected hidden layers such that they better represent known (sub-)features up to that point in the network. The main idea of CW is to 'disentangle' the latent space so that different parts (layers) represent different (user-defined) concepts.

To put it (overly) simply, the network is then not only tuned using the main dataset; selected layers are also tuned to sets of selected sub-features.

Experiments with CNN networks so far have shown great promise.

For a clear(er) explanation
A copy of the paper
The code used
What is A.I. good for, and what is it not?
If we put 'Hollywood AI' aside and look at practical applications of AI in an industrial context, there is usually a sweet spot where AI excels.
AI is not magic: it will typically not find patterns in data that you cannot find in other ways. If you have looked at a dataset over and over again, don't expect an AI system to suddenly find an answer that you couldn't find before. AI systems are, though, VERY good at solving the same well-defined problem very quickly and millions of times. Use them for highly repetitive, high-dimensional problems, e.g. find me similar music or seismic wavelets, find me anomalies in financial transactions or well logs (across thousands of datasets). Keep it simple and you have a good chance of success.
Good luck.
A.I. in the Real World
Happy New Year 2021! This year my mission is to gain more hands-on experience with a wider range of data types and data science approaches. One of the hardest challenges when learning data science methodologies is bridging the gap between the toy examples provided in courses and real-world, domain-specific problems, where you are not even sure whether what you are attempting is possible with your dataset. Kaggle datasets provide a great middle ground, containing some relatively dirty datasets that range in application from fashion to academic performance to geoscience. One of the best things about these datasets, besides their open availability, is that users can upload their notebooks and have discussions around the insight that can be garnered from them. A few to get you going are:

Brent oil prices 1987-2020
Geology Image Similarity
Volcanic Eruption Prediction
Generative Adversarial Networks
Generative Adversarial Networks (GANs), "the most interesting idea in the last 10 years in Machine Learning", are a class of deep learning models capable of creating new data instances that resemble the training data. GANs can improve image resolution, augment training datasets, create A.I. art, etc. Within the GAN architecture, two neural networks, a generator and a discriminator, are trained jointly with opposite goals: the generator learns to make fake data that look real enough to fool the discriminator, while the discriminator learns to distinguish the generated fake data from the real. Both networks become better and better in this fight against each other, until the generator can produce realistic outputs given random inputs. The sketch below shows the core of the training loop.
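
A hedged toy sketch of the adversarial loop in PyTorch (the two tiny networks and the 2-D 'real' distribution are our own inventions):

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + 2.0  # toy 'real' data distribution
    fake = G(torch.randn(64, 8))

    # Discriminator step: label real as 1, fake as 0
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator say 1 on fakes
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()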

Google Machine Learning Crash Course on GANs (Beginner)
Coursera GANs Specialization (Intermediate)
The paper of Ian J. Goodfellow and co-authors first proposing GANs (Advanced)
ML Libraries
Open-source software has been a key ingredient in the widespread adoption of machine learning technologies. Many libraries exist for different programming languages such as Python, R, or C++. In this weekly contribution we highlight a few of the well-known and upcoming machine learning libraries:

Scikit-Learn: This library is the bread and butter of all machine learning libraries. It contains not only implementations of many algorithms, but also supporting functionality for preprocessing, cross-validation, and metric-based evaluation.
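
As a quick taste of Scikit-Learn (a minimal sketch on its bundled toy data):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
# A pipeline couples preprocessing and model so cross-validation cannot leak information
model = make_pipeline(StandardScaler(), RandomForestClassifier(n_estimators=100))
print(cross_val_score(model, X, y, cv=5).mean())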

Pytorch and Tensorflow: When a Random Forest won't do the job, these two libraries provide the necessary tools to build various deep neural networks. Both provide automatic differentiation capabilities and can scale from a laptop to HPC clusters.

PyMC3: For all things Bayesian modeling, this library provides the necessary tools to build complex hierarchical models and allows for fast inference using modern implementations of Markov chain Monte Carlo methods.
GPT-3, the largest neural network in the world
GPT-3 has made the AI headlines since it appeared in May 2020. It is a product of the company OpenAI, and it can write poetry, translate, calculate, write code, hold online conversations and write papers... It is the largest neural network in the world, with a total of 175 billion parameters. GPT-3 was trained by reading 500 billion words, the equivalent of 150 times the size of Wikipedia (in all its different languages)!

Wikipedia provides a general presentation of GPT-3.

There are plenty of different things that GPT-3 can do; many are useful and some are potentially harmful.

GPT stands for “Generative Pretrained Transformer“. GPT-3 addresses some of the well-known issues associated with standard Recurrent Neural Networks.
A.I. Back to basics
Artificial Intelligence seems to be all about fancy machine learning, neural network mathematics and algorithms. The reality is that easily 80% of your time will be spent getting your data ready for action. For geoscientists this at least is familiar: before you can run your fancy RTM or seismic inversion, there is quite some pre-processing to be done too. So this week we go back to some basics, and since we like Python, that means Python basics; a small taste follows the links below.

1-page Pandas cheat sheet
Tutorials on various pre-processing topics
Complete course on Python for data science
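
A minimal sketch of the kind of Pandas preparation that eats up that 80% (the file and column names are invented):

import pandas as pd

df = pd.read_csv('well_logs.csv')                # hypothetical input
df = df.rename(columns=str.lower)                # consistent column names
df['porosity'] = df['porosity'].fillna(df['porosity'].median())  # impute missing values
df['facies'] = df['facies'].astype('category')   # proper dtype for a categorical column
df = pd.get_dummies(df, columns=['facies'])      # one-hot encode for machine learning
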
Graph NN
Any data-related problem statement can be represented using a graph network, a mathematical construct defining interactions between data objects. It is formally expressed as an ordered pair G of two sets, V (vertices or nodes; the data objects) and E (edges; the interconnections): G = (V, E).
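
For instance, with the networkx library (a minimal sketch; the node names are our own):

import networkx as nx

# G = (V, E): wells as vertices, spatial adjacency as edges
G = nx.Graph()
G.add_edges_from([('well_A', 'well_B'), ('well_B', 'well_C'), ('well_A', 'well_C')])
print(G.number_of_nodes(), G.number_of_edges())  # 3 3
print(list(G.neighbors('well_B')))               # ['well_A', 'well_C']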

Graphs can have any structure; Decision Trees are an example of graphs with extra restrictions on direction and connectivity.

Graph Neural Networks (GNNs) are a category of learning methodologies for optimizing graph networks, currently under rapid development and showing high potential in effectiveness and efficiency.

For application of Graph networks to generate fast physics simulators, check this video.

Here is also an easy-to-read (re)introduction to graph theory, and a fairly readable short tutorial on GNNs applied to imaging, with PyTorch examples.
Vulnerability of NN
This week we discuss the vulnerability of neural networks to hacking attempts, either by manipulation from a software perspective or by altering input data in the physical world. Towards Data Science provides a nice introduction to the security vulnerabilities of NNs and the different forms attacks can take. The most common attacks strategically adapt the input data in a way that fools the network into a misclassification. To the human eye, the adapted input data is often almost identical to the original; yet these small adaptations have the power to completely deceive the classification procedure.

At the software level this can be done by adding noise to the input data, as illustrated by Goodfellow et al., 2015. Their experiments showed how computationally generated noise can be used to trick a network into misclassifying images that visually look identical, resulting in incorrect classifications with a very high confidence score. A sketch of the idea is given below.
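A minimal hedged sketch of their fast gradient sign method in PyTorch (the model, inputs and labels are assumed given):

import torch

def fgsm_attack(model, x, y, epsilon=0.01):
    # Fast Gradient Sign Method (Goodfellow et al., 2015), toy version:
    # nudge the input by epsilon in the direction that increases the loss
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the gradient w.r.t. the input, not the weights
    return (x + epsilon * x.grad.sign()).detach()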

Adversarial attacks can also be performed in the physical world, by adding stickers or patches to objects to confuse a classification network. Brown et al., 2018 illustrate the use of physical adversarial stickers that, when placed within a camera's reference frame, cause a banana to be misclassified as a toaster.
A.I. to enable the Energy Transition
Across the world and throughout the energy industry the direction of travel is clear. The world needs to dramatically cut emissions while ensuring there is enough energy for countries and communities to continue to develop. AI will have a key role to play, whether in energy efficiency and optimisation, reducing emissions, low-carbon energy generation, or energy distribution and storage.

Below are several views from different parts of the energy creation and consumption ecosystem:

Government view
Energy company view
Consulting company view
Academic view

AI has a key role to play in enabling a more sustainable future world, and people who can critically apply such techniques have a vital role to play in it.
Cross-Validation for Subsurface ML
Many predictive tasks we encounter in the subsurface are of a spatial or temporal nature, e.g. predicting porosity and permeability away from well control, or predicting the future flow behavior of a subsurface reservoir given historical data.

In many applications, we evaluate the performance of algorithms using cross-validation. Sebastian Raschka's introduction to model validation provides an excellent overview of the definitions, assumptions, and techniques used to choose the best algorithms and their parameters.

Code examples (Part IV) provide a practical starting point for practitioners.

Spatial correlation in the data used to build predictive models can have a significant impact on our ability to judge the spatial predictive performance of algorithms, and can lead to an optimistic bias in model evaluation. Roberts et al. provide a comparison of various temporal and spatial validation strategies, as well as the significant impact the choice of validation strategy can have on our ability to judge a model's predictive performance. The sketch below contrasts random with well-grouped cross-validation.
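
A hedged sketch with scikit-learn (the toy features, target and well labels are invented): grouping samples by well keeps entire wells out of the training folds, mimicking prediction away from well control.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score, GroupKFold, KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))         # toy log features
y = X[:, 0] + rng.normal(size=300)    # toy target, e.g. porosity
wells = np.repeat(np.arange(10), 30)  # 10 wells, 30 samples each

model = RandomForestRegressor(n_estimators=100, random_state=0)

# Random K-fold can leak samples from the same well into both train and test folds
print(cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=0)).mean())

# GroupKFold holds out whole wells, usually giving a more honest (lower) score
print(cross_val_score(model, X, y, cv=GroupKFold(5), groups=wells).mean())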

Choosing the right validation strategy for the task at hand allows practitioners to reduce bias in model selection, and builds trust in a method's ability to make predictions away from data and into the future state of subsurface systems.
Gaussian Processes and NN
Gaussian Processes (GPs) for Machine Learning are closely related to geostatistical models, with the exception that geostatistics tends to focus on one-, two- or three-dimensional models, while GPs typically live in spaces of very large dimension. GPs are often used to generate possible stochastic realizations constrained by data, and they provide a way to quantify uncertainties; a small regression sketch is given below. The book "Gaussian Processes for Machine Learning" by Rasmussen and Williams is a great introduction to GPs.
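
A hedged toy sketch of GP regression with scikit-learn (the 1-D data stand in for, say, a property sampled at a few depths):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.array([[1.0], [3.0], [5.0], [6.0], [8.0]])  # sample locations
y = np.sin(X).ravel()                              # toy observed property

# The RBF kernel plays the role of a covariance/variogram model in geostatistics
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
gp.fit(X, y)

X_new = np.linspace(0, 10, 50).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)  # prediction plus uncertainty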

Neal showed that, before training, feed-forward Neural Networks (NNs) with just one infinite hidden layer generate a GP, with a covariance derived from the NN's activation function and the initial probability distributions of the NN's weights and biases.

Neal's results have been generalized to deep and convolutional networks. This means that, by defining a NN's architecture and its hyperparameters, we are already defining an implicit "prior" on the output of the NN. The concept of "Deep Image Priors" takes advantage of this by proposing not to train the model on a training set, but to apply the prior NN model directly to the optimization task. This has close links with Bayesian Deep Learning, which we will discuss in the near future.
A.I. failures
There are many inspiring quotes on failure, like Thomas Edison's "I have not failed. I've just found 10,000 ways that won't work" and Churchill's "Success is not final, failure is not fatal: it is the courage to continue that counts." Most of these quotes encourage one to persist and to take lessons from failed endeavors. Failures in A.I. and machine learning happen all the time; they are just not talked about much, so such learning is not as easy to come by as the learnings from success. Here are some links about failure, overpromise and underdelivery of A.I. and machine learning technology for you to learn from.

1) Weapons of Math Destruction

2) How IBM Watson Overpromised and Underdelivered on AI Health Care

3) Consumer Reports Unmasks Tesla’s Full Self-Driving Mystique, Here’s The Upshot
Interpretable A.I.
Essential for business confidence, and in critical decisions, is the ability to provide not just accuracy with Machine Learning but also the why and how.

In short, interpretability means determining a representation of the results in terms of human understanding; with few parameters (e.g. linear regression) this is straightforward. At the other end, Deep Neural Networks (DNNs) are effective at finding subtle relationships among many features but are hard to interpret.

Recently developed methods to analyze DNNs include LIME (Local Interpretable Model-Agnostic Explanations) and DeepLIFT (Deep Learning Important Features).

When using alternatives to DNNs, the common belief is that interpretability comes at the expense of accuracy, an assertion with which some disagree.

Suggested reading:
Decoding the Black Box
Guide to Interpretable Machine Learning
The use of Neural Networks for solving PDEs
An exciting area in the deep learning space is the use of neural networks for solving PDEs, the equations which dictate the majority of geophysical phenomena. Through tailoring of the cost function, physics-informed neural networks (PINNs) have recently been shown to accurately solve a variety of PDEs. Early attempts in geophysics have been published for solving both the eikonal and the wave equation. While it is still unclear whether PINNs will reach the precision of our waveform modeling procedures, they are likely to be a fierce competitor with respect to compute time.

The underlying principles of PINNs are detailed in this page.

An example of such a network being used to solve the wave equation is illustrated in this paper.

And, for those ready to get your hands dirty, check out the DeepXDE Python library.
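
To show what 'tailoring the cost function' means, here is a hedged toy sketch of a PINN for the ODE u' = -u with u(0) = 1 (our own toy equation, far simpler than the wave equation):

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = (torch.rand(128, 1) * 4.0).requires_grad_(True)  # collocation points in [0, 4]
    u = net(x)
    # du/dx via automatic differentiation: no mesh, no finite-difference stencil
    du = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    residual = du + u                  # physics loss: enforce du/dx + u = 0
    bc = net(torch.zeros(1, 1)) - 1.0  # boundary-condition loss: u(0) = 1
    loss = (residual ** 2).mean() + (bc ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
# net(x) now approximates exp(-x) on [0, 4]
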
Quick and easy A.I.
It's great to try simple examples to see what A.I. can do. There are multiple sites where you can try examples for free and see the results. In the examples below you can upload images and see how ML systems perform classification and extraction, and what data they return.

Google – Image classification

Microsoft – Image classification

When you want to step up and run your own more domain-specific data (e.g. time series or multiple-attribute data), many of the 'AI Platforms', like Dataiku and DataRobot, allow you to register and run free versions. These systems can run 'code free', so if you can use Excel then you should be able to run them. They are a great way to explore quickly what A.I. can do and to see if it might be relevant for you and your data challenges!
Explainable A.I.
Explainable Artificial Intelligence (XAI) tries to open the black box of Machine Learning models so that their behavior can be understood by humans. Google Cloud's A.I. Explanations provide a set of tools and frameworks to explain how much each feature in your model contributed to the predicted results for classification and regression tasks. More specifically, SHAP (SHapley Additive exPlanations) is a popular XAI tool based on a solution concept from cooperative game theory. It can explain the output of any machine learning model, with rich visualizations that are friendly to end users.
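
Basic SHAP usage looks roughly like this (a hedged sketch; the model and toy dataset are our own choices):

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)  # fast, exact explainer for tree ensembles
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)      # per-feature contribution overview
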
A.I. Challenges
Historically, the ImageNet Challenge has allowed researchers to develop ground-breaking machine learning methods on open data, enabling reproducible, comparable progress in computer vision.
In geoscience, efforts such as the SEG contest on facies prediction have inspired geoscientists to engage in the field of AI and serve as an excellent entry point for machine learning in geoscience.
Currently ongoing, the FORCE machine learning contest on wells and seismic provides a labeled dataset for facies prediction from wireline logs and a seismic dataset for fault detection.
These and other collaborative challenges will help to inspire future geoscientists and breakthrough technologies in applied machine learning for geoscience.
Deep Learning for A.I. (2)
There is plenty of online training material on Deep Learning. This week we recommend three sources that are very useful for illustrating the practicalities of Deep Learning. They are real fun to use!

TensorFlow playground (already discussed in a different context) provides simple two-dimensional examples of feed-forward neural networks, mostly for classification, and displays the results in a very useful way for somebody who is new to neural networks.

3D Visualization of a Convolutional Neural Network shows the details of the structure and performance of a simple convolutional neural network applied to the classical MNIST dataset.

GAN Lab explains Generative Adversarial Networks, and it really helps understand the interaction between the Generator and the Discriminator.
U-net
Understanding what happens in images is crucial in the field of machine vision. This problem is broken up into separate but similar topics, such as classification, localization, object detection, semantic segmentation and image segmentation. Without realizing it, geoscientists face similar challenges; think of first-break picking or salt interpretation. One of the workhorses for image segmentation problems is the U-net, and to get ahead in the field, or simply to grasp what your colleagues have built for you, one should really have a basic understanding of this algorithm. Here are three useful links, followed by a toy sketch of the architecture:

- Convolutional Networks for Biomedical Image Segmentation (video) - Beginner
- Convolutional Networks for Biomedical Image Segmentation (paper) - Intermediate
- U-net application for the TGS challenge - Advanced
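
The toy sketch (our own minimal version, far smaller than the original paper's network):

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    # One down-step and one up-step; real U-nets stack four or five of each
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        # 32 channels in: 16 upsampled + 16 carried over by the skip connection
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))  # per-pixel logit

    def forward(self, x):
        e = self.enc(x)             # high-resolution features
        m = self.mid(self.down(e))  # coarse, more semantic features
        u = self.up(m)              # back to full resolution
        return self.dec(torch.cat([u, e], dim=1))  # skip connection preserves detail

net = TinyUNet()
patch = torch.randn(1, 1, 64, 64)  # toy single-channel image patch
mask_logits = net(patch)           # same spatial size: (1, 1, 64, 64)
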
Deep Learning for A.I. (1)
Drastic improvements in hardware performance (GPUs) have enabled the widespread use of Deep Neural Networks (DNNs). Combined with the Convolutional Neural Network (CNN) approach, they complement seismic workflows very well: fault detection, time lapse, inversion, seismic-log integration, etc.
In applying them, careful consideration is advised: non-transparency (black box), dependence on training data, outcomes that are approximations, and occasional artefacts.
However, because of the multilayered architecture, Deep Learning has proven 'unreasonably effective', and improved understanding through research (MIT) will enable novel breakthroughs.

- TensorFlow Neural Network (Beginner)
- Deep Learning Specialization (Intermediate)
- Seismic Deep Learning libraries (Advanced)
Hands-on A.I. exercises
A hurdle for many wanting to gain hands-on experience with AI is setting up a development environment - hours of frustration trying to install Python on Windows, we have all been there! Google's Colab provides an online Jupyter-like environment with FREE GPU resources where you can experiment to your heart's content. While Google has already provided a number of data science tutorials, one of the great benefits of Colab is that it can open any .ipynb file. Whether you are looking at CSV files, images, or jumping right into manipulating SEG-Y data, there are hundreds of geoscience-specific examples sitting in open GitHub repositories. Here are three notebooks to get you started:

- Analysing thin section compositions (Beginner)
- An image segmentation example from the TGS salt detection Kaggle competition (Intermediate)
- Seismic inversion on the Volve dataset (Advanced)
A.I. Give it a go - it won't bite
If you haven't had a go before - try it, get your hands (digitally) dirty. You can play without breaking anything (or in some cases even without installing anything). It can help you understand what's possible. It can help at work, in your AI studies or across the rest of your life. Thankfully, these days you don't have to be a coding supremo to take those first steps. Much of the AI world is moving towards 'low-code' or even 'no-code', so you can do some pretty impressive AI stuff without leaving the comfort of a friendly app, whether on your phone or laptop. Below are a few cool places where you can start exploring the art of the possible - hopefully it will inspire you and be fun. Enjoy!

- AI Experiments with Google (Beginner)
- Machine Learning Experiments with GitHub (Beginner to Intermediate)
- Anaconda, incl. Orange (no code), Jupyter, Spyder (Python) & RStudio (low to high code) (Beginner to Advanced)
The pandemic and A.I.
The coronavirus outbreak put us in unprecedented times. This week we take a special look at the role A.I. can play in battling the pandemic as well as in transforming healthcare practice. Check out the latest issue of Nature Machine Intelligence for a general read about the potential advantages and challenges of deploying A.I. in the pandemic. With no prior medical expertise required, the "A.I. for Medicine Specialization" teaches how to apply A.I. tools to medical diagnosis, prognosis and treatment, including working with 2D and 3D medical image data. You can obtain open datasets, share code and models, and enter competitions on Kaggle, the largest machine learning community, to join the battle against Covid-19 as an A.I. practitioner.

- A path for A.I. in the pandemic (Beginner)
- AI for Medicine Specialization (Intermediate)
- Kaggle ML Community (Advanced)
Staying up-to-date with A.I.
This week's focus is on staying up-to-date with the rapidly moving field of AI. "Two-minute Papers" is a video podcast series that aims to distill the hottest and most fascinating research in computer vision and machine learning into a format accessible to everyone. The Artificial Intelligence Podcast by Lex Fridman is an excellent resource for interviews with researchers from around the field of AI and machine learning. Arxiv Sanity Preserver is your one-stop shop for the world of AI and machine learning preprints, where the most recent publications from arXiv can be found in one place.

- Two-minute Papers (Beginner)
- The Artificial Intelligence Podcast (Intermediate)
- ArXiv Sanity Preserver (Advanced)
Learning A.I.
To get started, the focus naturally falls on learning. If you have been looking for the right entry point, here are three (free) courses for you to consider:

- AI For Everyone (Beginner)
- Practical Deep Learning for Coders, v3 (Intermediate)
- UVA Deep Learning Course (Advanced)

Questions? Ideas? Contact us!

2021

Siddharth Misra

Prof Dr Siddharth Misra’s research focuses on improving subsurface characterization and prospect evaluation for the exploration of hydrocarbons, minerals and water resources.

His major contribution is in the theory of electromagnetic responses of geological formations to various charge polarization phenomena. The theory has enabled him to introduce a multi-frequency electromagnetic log-inversion technique to remove dielectric effects for improved estimation of hydrocarbon pore volume.
