By Catherine Pennington, British Geological Survey
Each IASC member country appoints up to two members to each of IASC's 5 scientific Working Groups (Atmosphere, Cryosphere, Marine, Social & Human, and Terrestrial). These groups allocate IASC's scientific funds for small meetings, workshops, and projects. Follow the links to see the science priorities of each group, find out about upcoming Working Group activities, and explore the expertise of their members!
Understanding the future of the Arctic means we need to invest in the Arctic researchers of today. Therefore, at least 40% of IASC's working group funds must be co-spent with another working group, to encourage interdisciplinary projects. Check out upcoming events in 2019 like the Year of Polar Prediction Arctic Science Workshop, High Latitude Dust Workshop, Snow Science Workshop, and Network on Arctic Glaciology meeting – just to name a few!
IASC convenes the annual Arctic Science Summit Week (ASSW), which provides the opportunity for coordination, cooperation and collaboration between the various scientific organizations involved in Arctic research and to economize on travel and time. In addition to IASC meetings, ASSW is a great opportunity to host Arctic science meetings and workshops. ASSW2019 will be in Arkhangelsk, Russia.
IASC influences international Arctic policymakers by being an observer to the Arctic Council and contributing to its work. IASC projects include contributing experts to an assessment on marine plastic pollution in the Arctic, helping coordinate reviews for the first Snow, Water, Ice, and Permafrost Assessment (SWIPA) report, and co-leading the Arctic Data Committee & Sustaining Arctic Observing Networks.
At least one third of IASC's scientific funds must be spent on supporting early career researchers (see the image of the week)! In addition, the IASC Fellowship Program is meant to engage early career researchers in the Working Groups and give them experience in helping lead international and interdisciplinary Arctic science activities. Applications are now open and due by 19 November!
Do you have a great idea that you think IASC might want to support? Or want to learn more about IASC? Connect with IASC on Facebook, and sign up to receive our monthly newsletter! You are also encouraged to reach out to the relevant national/disciplinary IASC Working Group experts, IASC Council member, and the IASC Secretariat.
Edited by Adam Bateson
Allen Pope is the IASC Executive Secretary. IASC scientific funds are provided from national member contributions. The IASC Secretariat in Akureyri, Iceland is supported by Rannís, the Icelandic Centre for Research. The IASC Secretariat is responsible for the day-to-day operations and administration of IASC. Allen also maintains an affiliation as a Research Scientist at the National Snow and Ice Data Center at the University of Colorado Boulder where he continues research based on remote sensing of glacier mass balance and surface hydrology. You can find out more about Allen and his research at https://about.me/allenpope. He also enjoys sharing and discussing polar science with the public and tweets @PopePolar.
“Despite the apparent need for science outreach, achievements are still not sufficiently recognised. In times of an increasingly competitive funding climate, high-quality publications remain the currency of science and time spent on outreach activities equals less time for research,” says Dr Anne Osterrieder [6]
“People have fear of science: they have the fear they are not smart enough. And you want the audience to feel smart”
This post was written by Marina Corradini, with revisions from Walid Ben Mansour and Maria Tsekhmistrenko
Walid Ben Mansour is a post-doctoral research fellow at Macquarie University. He works on multi-observable probabilistic tomography for different targets (mining, seismic hazard). You can reach him at walid.benmansour[at]mq.edu.au

Maria Tsekhmistrenko is a PhD student at the University of Oxford. She works on the velocity structures beneath La Réunion Island from the surface to the core-mantle boundary. You can reach her at mariat[at]earth.ox.ac.uk

_________________________________________________________________________________

Jenda Johnson creates animations for IRIS to help Earth-science teachers understand complex seismological processes, available here: www.iris.edu/earthquake. Member of the Board of Advisors at Oregon State University’s College of Earth, Ocean and Atmospheric Sciences. Among other collaborations: UNAVCO, U.S. Geological Survey, Teachers on the Leading Edge (TOTLE), Cascadia EarthScope Earthquake and Tsunami Education Program (CEETEP), Hawai`i Volcanoes National Park, Haleakala National Park, EarthScope/USArray, High Lava Plains Seismic Array.

Enter Reverend Thomas Bayes
This part-time mathematician (he only ever published one mathematical work) from the 18th century was the first to formulate Bayes' theorem, which combines knowledge on parameters. The mathematics behind it can easily be retrieved from our most beloved/hated Wikipedia, so I can avoid getting too caught up in it. What is important is that it allows us to combine two misfit functions or probabilities. Misfits and probabilities are directly interchangeable; a high probability of a model fitting our observations corresponds to a low misfit (and there are actual formulas linking the two). Combining two misfits allows us to accurately combine our pre-existing (or commonly: prior) knowledge of the Earth with the results of an experiment.
The benefits of this are two-fold: we can use arbitrarily complex prior knowledge and by using prior knowledge that is bounded (in parameter space) we can still invert underdetermined problems without extra regularisation. In fact, the prior knowledge acts as regularisation.
This is turning into quite the detective story! However, the kitchen counter that has been worked on is also royally covered in flour. Therefore, we estimate that this pack was probably used; about 400 grams of it, with an uncertainty (standard deviation) of 25 grams. Mathematically, we can formulate our prior knowledge as a Gaussian distribution with the aforementioned standard deviation and combine it with the misfit of the inverse problem (often called the likelihood). The result is given here: [caption id="attachment_2808" align="aligncenter" width="350"] Prior and original misfit[/caption] [caption id="attachment_2809" align="aligncenter" width="350"] Combined misfit[/caption]
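A minimal numerical sketch of this combination: the 400 ± 25 g prior comes from the story above, but the linear recipe (1.6 g of bread per gram of flour) and the 50 g data uncertainty are illustrative assumptions, not numbers from the post.

```python
import numpy as np

# Prior: the flour pack analysis suggests about 400 g was used, sd 25 g.
def prior_misfit(flour, mean=400.0, sd=25.0):
    # Negative log of a Gaussian (up to a constant):
    # low misfit near 400 g, growing quadratically away from it.
    return 0.5 * ((flour - mean) / sd) ** 2

# Likelihood: a hypothetical recipe where each gram of flour yields
# 1.6 g of bread, and we observed 500 g of bread with sd 50 g.
def likelihood_misfit(flour, observed=500.0, sd=50.0):
    predicted = 1.6 * flour
    return 0.5 * ((predicted - observed) / sd) ** 2

flour = np.linspace(0.0, 700.0, 1401)
combined = prior_misfit(flour) + likelihood_misfit(flour)

# The minimum of the combined misfit balances data and prior: it lies
# between 312.5 g (data alone) and 400 g (prior alone).
best = flour[np.argmin(combined)]
```

Because both misfits are quadratic here, the combined minimum is just a precision-weighted average of the two answers; with a non-Gaussian prior it would not be.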
One success and one failure! First, we successfully combined the two pieces of information to make an inverse problem that is no longer non-unique (which was a happy coincidence of the prior: it is not guaranteed). However, we failed to make the problem more tractable in terms of computational requirements. To get the result of our combined misfit, we still have to do a systematic grid search, or at least arrive at a (local) minimum using gradient descent methods. We can do the same in 2D. We combine our likelihood (the original inverse problem) with rather exotic prior information, an annulus in model space, to illustrate the power of Bayes' theorem. The misfit functions used and the results are shown here: [caption id="attachment_2813" align="aligncenter" width="350"] Original misfit for a bread of 500 grams[/caption] [caption id="attachment_2814" align="aligncenter" width="350"] Prior knowledge misfit[/caption] [caption id="attachment_2815" align="aligncenter" width="350"] Combined misfit[/caption] This might also illustrate the need for non-linear uncertainty analysis. Trade-offs at the maxima in model space (last figure, at the intersection of the circle and lines) distinctly show two correlation directions, which might not be fully accounted for by using only second-derivative approximations. Despite this 'non-progress' of still requiring a grid search even after applying probability theory, we can go one step further by combining the application of Bayesian inference with the expertise of other fields in appraising inference problems...
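The 2D annulus case can be sketched the same way on a grid. The centre, radius, and uncertainties below are made-up values for illustration; the forward model is an assumed linear recipe where bread mass is flour plus water.

```python
import numpy as np

# 2D model space: grams of flour (x) and grams of water (y).
x = np.linspace(0.0, 700.0, 351)
y = np.linspace(0.0, 700.0, 351)
X, Y = np.meshgrid(x, y)

# Likelihood misfit: the assumed forward model predicts bread mass as
# flour + water; we observed 500 g of bread, sd 30 g. Alone, this misfit
# is minimised along an entire line in model space (non-uniqueness).
likelihood = 0.5 * ((X + Y - 500.0) / 30.0) ** 2

# Prior misfit: an annulus centred on (250, 250) with radius 150 and
# width 20, deliberately exotic prior knowledge.
r = np.sqrt((X - 250.0) ** 2 + (Y - 250.0) ** 2)
prior = 0.5 * ((r - 150.0) / 20.0) ** 2

combined = likelihood + prior
# The minima of the combined misfit sit where the line X + Y = 500
# intersects the annulus: two separate optima, a genuinely non-linear
# posterior that a single Gaussian approximation cannot capture.
```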
How do we extract interesting information (preferably without completely blowing our budget on supercomputers)?
Let's first say that by interesting information I mean minima (not necessarily restricted to global minima), correlations, and possibly other statistical properties (for our uncertainty analysis). One answer to this question was first applied in Los Alamos around 1950. The researchers at the famous institute developed a method to simulate equations of state, which has become known as the Metropolis-Hastings algorithm. The algorithm is able to draw samples from a complicated probability distribution. It became part of a class of methods called Markov chain Monte Carlo (MCMC) methods, which are often referred to as samplers (technically they would be a subset of all available samplers). The reason that the Metropolis-Hastings algorithm (and MCMC algorithms in general) is useful is that a complicated distribution (e.g. the annulus in our last figure) does not easily allow us to generate points proportional to its misfit. These methods overcome this difficulty by starting at a certain point in model space and traversing a random path through it - jumping around - but visiting regions only proportional to the misfit. So far, we have only considered directly finding optima of misfit functions, but by generating samples from a probability distribution proportional to the misfit function, we can readily compute these minima by calculating statistical modes. Uncertainty analysis subsequently comes virtually for free, as we can calculate any statistical property from the sample set. I won't try to illustrate any particular MCMC sampler in detail; nowadays many great tools for visualising MCMC samplers exist. This blog by Alex Rogozhnikov does a beautiful job of both introducing MCMC methods (in general, not just for inversion) and illustrating the Metropolis-Hastings random walk algorithm as well as the Hamiltonian Monte Carlo algorithm.
Hamiltonian Monte Carlo also incorporates gradients of the misfit function, thereby further accelerating the MCMC sampling. Another great tool is this applet by Chi Feng, where different target distributions (misfit functions) can be sampled by different algorithms. The field of geophysics has been using these methods for quite some time (Malcolm Sambridge writes in 2002, in a very interesting read, that the first uses were 30 years ago), and they are becoming increasingly popular. However, strongly non-linear inversions and big numerical simulations are still very expensive to treat probabilistically, and success in inverting such a problem depends strongly on the appropriate choice of MCMC sampler.
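For readers who prefer code to applets, here is a minimal random-walk Metropolis-Hastings sampler for a toy 1D misfit. The target (a Gaussian centred at 400 with sd 25, echoing the bread example) and the tuning values are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def misfit(m):
    # Toy 1D misfit, i.e. a negative log probability: a Gaussian
    # centred at 400 with standard deviation 25.
    return 0.5 * ((m - 400.0) / 25.0) ** 2

def metropolis_hastings(n_steps, start=0.0, step=20.0):
    """Random-walk Metropolis: propose a random jump; always accept it if
    the misfit decreases, and accept with probability exp(-increase) if
    the misfit increases. Rejected proposals repeat the current sample."""
    samples = [start]
    current = start
    for _ in range(n_steps):
        proposal = current + rng.normal(0.0, step)
        if np.log(rng.uniform()) < misfit(current) - misfit(proposal):
            current = proposal
        samples.append(current)
    return np.array(samples)

samples = metropolis_hastings(20000)
# After discarding burn-in, the sample mean and spread approximate the
# target's mean (400) and standard deviation (25); any other statistic
# of the target can be estimated from the same sample set.
```

Note the chain never needs the normalising constant of the distribution, only misfit differences; that is exactly why it works on distributions we can evaluate but not integrate.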
Written by João Duarte
Researcher at Instituto Dom Luiz and Invited Professor at the Geology Department, Faculty of Sciences of the University of Lisbon. Adjunct Researcher at Monash University.
Edited by Elenora van Rijsingen
PhD candidate at the Laboratory of Experimental Tectonics, Roma Tre University and Geosciences Montpellier. Editor for the EGU Tectonics & Structural geology blog
For more information about the Great Lisbon Earthquake of 1755, check out these two videos about the event: a reconstruction of the earthquake and a tsunami model animation.

Edited by Clara Burgard
This post was written by Maria Tsekhmistrenko, with revisions from Nienke Blom and Andrea Berbellini
Maria Tsekhmistrenko is a PhD student at the University of Oxford. She works on the velocity structures beneath La Réunion Island from the surface to the core-mantle boundary. You can reach her at mariat[at]earth.ox.ac.uk

Nienke Blom is a postdoctoral research associate at the University of Cambridge and works on seismic waveform tomography, developing methods to image density. She is the EGU point of contact for the ECS rep team. You can reach her at nienke.blom[at]esc.cam.ac.uk.

Andrea Berbellini is an Italian postdoc at University College London. He works on source characterization from second-order moments and crustal tomography from the ellipticity of Rayleigh waves. You can reach him at a.berbellini[at]ucl.ac.uk

| Criterion | Weight |
| --- | --- |
| How well does this contribution fit into the session it is submitted to? | 15% |
| Is the abstract clearly structured and scientifically sound? | 35% |
| Are there conclusions and are they supported by data or analysis? | 35% |
| How well is the abstract written (grammar, orthography)? | 15% |
Well, duh, there are 3 solutions: here, here and here!
Why care about such a complicated way to get only one of them? The important realisation to make here is that I have precomputed all possible solutions for this forward model in the 0 - 700 grams range. This precomputation on a 1D domain was very simple: at a regular interval, compute the predicted value of baked bread. Following this, I could have also programmed my Python routine to extract all the values with a sufficiently low misfit as solutions. This is the basis of a grid search. Let's perform a grid search on our second model (1b). Let's find all predictions with 500 grams of bread as the end result, plus or minus 50 grams. This is the result: [caption id="attachment_2787" align="aligncenter" width="450"] Grid search for the 2D model[/caption] The original 312.5 grams of flour as input is part of the solution set. However, the model actually has infinitely many solutions (extending beyond the range of the current grid search)! The reason that a grid search might not be effective is the inherent computational burden. When the forward model is sufficiently expensive in numerical simulation, exploring a model space completely with adequate resolution can take a very long time. This burden increases with model dimension; if more model parameters are present, the ratio of relevant to irrelevant model space becomes very small. This is known as the curse of dimensionality (very well explained in Tarantola's textbook). Another reason one might want to avoid grid searches is our inability to appropriately process the results. Performing grid searches in 5 or more dimensions is sometimes possible on computational clusters, but visualizing and interpreting the resulting data is very hard for humans. This is partly why many supercomputing centers have in-house departments for data visualisation, as it is a very involved task to visualise complex data well.
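As a sketch, a 1D grid search along the flour axis might look like the following. The linear recipe bread = 1.6 × flour is a simplifying assumption chosen so that 312.5 g of flour reproduces the 500 g loaf from the example; it stands in for whatever the real forward model is.

```python
import numpy as np

def forward(flour):
    # Hypothetical recipe: each gram of flour yields 1.6 g of bread
    # (the water is assumed to scale with the flour).
    return 1.6 * flour

# Precompute the forward model at a regular interval over 0-700 grams...
flour_grid = np.arange(0.0, 700.0, 1.0)
predictions = forward(flour_grid)

# ...then keep every model whose prediction is within tolerance of the
# observed data. This set is the grid-search solution.
observed, tolerance = 500.0, 50.0
solutions = flour_grid[np.abs(predictions - observed) <= tolerance]
```

The cost is one forward run per grid point; refining the grid or adding a second parameter (water) multiplies that cost, which is the curse of dimensionality in miniature.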
Now: towards solving our physical inversions!
The simplest solution must be assumed
When dealing with more parameters than observables (non-uniqueness) in linear models, it is interesting to regard the forward problem again. If one were to parameterize our volcanic model using 9 parameters for the subsurface and combine that with the 3 measurements at the surface, the result would be an underdetermined inverse problem. [caption id="attachment_2792" align="aligncenter" width="450"] Rough parametrization for the heat conduction model[/caption] This forward model (the Laplace equation) can be discretised using, for example, finite differences. The resulting matrix equation would be Am = d, with A a 3 x 9 matrix, m a 9-dimensional vector and d a 3-dimensional vector. As one might recall from linear algebra classes, for a matrix to have an inverse, it has to be square. This matrix system is not square, and therefore not invertible!
Aaaaahhh! But don't panic: there is a solution
By adding either prior information on the parameters, smoothing, or extra data points (e.g., taking extra measurements in wells), we can make the 3 x 9 system a perfect 9 x 9 system. By doing this, we condition our system such that it is invertible. However, many times we end up overdetermining our system, which could result in a 20 x 9 system, for example. Note that although neither the underdetermined nor the overdetermined systems have an exact matrix inverse, both do have pseudo-inverses. For underdetermined systems, I have not found these to be particularly helpful (but some geophysicists do consider them). Overdetermined matrix systems, on the other hand, have a very interesting pseudo-inverse: the least-squares solution. Finding the least-squares solution in linear problems is the same as minimising the L2 norm! Here, two views on inversion come together: solving a specific matrix equation is the same as minimising some objective functional (at least in the linear case). Other concepts from linear algebra play important roles in linear and weakly non-linear inversions. For example, matrix decompositions offer information on how a system is probed with available data, and may provide insights on experimental geophysical survey design to optimise coverage (see "Theory of Model-Based Geophysical Survey and Experimental Design Part A – Linear Problems" by Andrew Curtis). I would say it is common practice for many geophysicists to pose an inverse problem that is typically underdetermined, and keep adding regularization until the problem is solvable in terms of matrix inversions. I do not necessarily advocate such an approach, but it has its advantages over more agnostic approaches, as we will see in the post on probabilistic inversion next week!
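A small numerical illustration of these matrix shapes; the random matrices below are stand-ins for a real discretised forward model, not the Laplace system itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Underdetermined: 3 observations, 9 parameters. A is not square,
# so np.linalg.inv refuses it outright.
A_under = rng.normal(size=(3, 9))
try:
    np.linalg.inv(A_under)
except np.linalg.LinAlgError:
    pass  # no exact inverse exists

# Overdetermined: 20 observations, 9 parameters. np.linalg.lstsq returns
# the least-squares (pseudo-inverse) solution, i.e. the model minimising
# the L2 norm of the residual Am - d.
A_over = rng.normal(size=(20, 9))
m_true = rng.normal(size=9)
d = A_over @ m_true + rng.normal(scale=0.01, size=20)

m_ls, residual, rank, _ = np.linalg.lstsq(A_over, d, rcond=None)
# With low noise and full column rank (rank == 9), m_ls recovers
# m_true closely.
```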
The jobs listed on this page were gathered by Walid Ben Mansour
By Olivia Trani, EGU Communications Officer
Imaggeo is the EGU’s online open access geosciences image repository. All geoscientists (and others) can submit their photographs and videos to this repository and, since it is open access, these images can be used for free by scientists for their presentations or publications, by educators and the general public, and some images can even be used freely for commercial purposes. Photographers also retain full rights of use, as Imaggeo images are licensed and distributed by the EGU under a Creative Commons licence. Submit your photos at http://imaggeo.egu.eu/upload/.

Prof. Xavier Le Pichon is one of the pioneers of the theory of plate tectonics. He developed the first global-scale quantitative, predictive model of plate motion. The model, published in 1968, accounted for most of the seismicity at plate boundaries. Among many substantial contributions to the field, he also published, together with Jean Francheteau and Jean Bonnin, the first book on plate tectonics in 1973.
I think of the Earth as a living organism
[caption id="attachment_1099" align="aligncenter" width="757"] Le Pichon, X. (1968). Sea-floor spreading and continental drift. Journal of Geophysical Research, 73(12), 3661–3697.[/caption]

I believe that science that is completely regulated top-down is not efficient
The most important things in the evolution of research have been totally unexpected
I am very afraid of people who get specialized too early
You have to go to a place where research is thriving
Interview conducted by David Fernández-Blanco
The strategy will also tie in with the EU's other priorities such as climate mitigation, economic development, food security, energy and ecological conservation. Furthermore, it will help the EU meet its targets and commitments to global goals such as the Paris Agreement and Sustainable Development Goals (SDGs).
The future of the bioeconomy in Europe
Based on the 2018 strategy, the European Commission will launch concrete measures and outline specific legislation to scale up bio-based sectors and meet both the strategy's suggested targets and global targets. There may be opportunities for external researchers to contribute to these processes through institutionalised platforms such as EU Consultations. Research on the bioeconomy will continue to be undertaken by scientists working at the JRC. Specific issues that will be researched in the coming months and years include the demand for bioeconomy products, the environmental impact of biomass, the condition of EU ecosystems and their services, and how the bioeconomy can contribute to reaching the SDGs.

The Antarctic Peninsula is the 'canary in the coalmine' of Antarctic climate change. In the last half-century it has warmed faster than most other places on Earth, and considerable change has consequently been observed in the cryosphere, with several ice shelves collapsing in part or in full. Representing this change in models is difficult because we understand comparatively little about the effect of atmospheric processes on melting in Antarctica, especially clouds, which are the main protagonists of this Image of the Week...
The Antarctic Peninsula: a part of the southern continent that is surrounded by ice shelves, but also a place that has seen rapid and dramatic changes in the last decades. Until recently, the Antarctic Peninsula was one of the most rapidly warming regions on Earth, with annual mean surface temperatures rising by as much as 2.5°C between the 1950s and early 2000s in some places (Turner et al., 2005; 2016).
That warming has been linked to the demise of the region's ice shelves: since 1947, more than half of the peninsula's ice shelves have thinned, lost area, or collapsed entirely (Cook & Vaughan, 2010). Most recently, that includes Larsen C, whose area was reduced by 12% in July 2017 following a calving event where an iceberg four times the size of London broke away from the ice shelf. As a result, the ice shelf has slipped down the rankings from the 4th largest ice shelf on the continent to the 5th largest.
Evidence suggests that ice shelves on the peninsula are being warmed mostly from the top down by the atmosphere. This is contrary to what’s happening on other Antarctic ice shelves, like those in West Antarctica that are being eroded from beneath by the warming ocean. Atmospheric processes are much more important over peninsula ice shelves than those elsewhere on the continent.
To understand the effect of the atmosphere on melting at the top of ice shelves, we need to know how much energy is entering the surface of the ice shelf, how much is leaving, and use what’s left over to determine whether there's residual energy available to melt the ice. That’s the general principle of the surface energy balance, and it's called a 'balance' because it is usually just that – the amount of energy flowing into and out of the ice shelf averages out over the course of say, a year, to produce a net zero sum of energy left for melting. However, there are times when this balance can become either negative, leading to growth of the ice shelf, or positive, leading to ice loss via melting.
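As a toy illustration of this bookkeeping (the flux values below are made up for the example, not measurements), the principle is simply to sum the fluxes into and out of the surface and check whether a positive residual coincides with a surface at the melting point:

```python
# Toy surface energy balance. All fluxes in W m^-2, positive towards
# the surface; the numbers are purely illustrative.
shortwave_net = 90.0   # absorbed solar radiation
longwave_net = -60.0   # net thermal radiation (usually a loss from the ice)
sensible_heat = 25.0   # turbulent heat from warm air, e.g. during foehn
latent_heat = -10.0    # loss through sublimation/evaporation

residual = shortwave_net + longwave_net + sensible_heat + latent_heat

surface_temperature_c = 0.0
# Surplus energy only melts ice if the surface is already at 0 degrees C;
# below that, it goes into warming the surface instead.
melt_energy = residual if (residual > 0 and surface_temperature_c >= 0.0) else 0.0
```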
Many different processes influence the surface energy balance, such as weather patterns and atmospheric motion. For instance, when warm, dry air blows over an ice surface, which happens during 'foehn' wind events (German readers will know this means 'hairdryer': a descriptive name for the phenomenon!), this can produce a surplus of energy available for melting (Grosvenor et al., 2014; King et al., 2017; Kuipers Munneke et al., 2018). If the surface temperature reaches 0°C, melting occurs.
Clouds also greatly influence the surface energy balance by affecting the amount of radiation that reaches the surface. The amount of incoming solar (shortwave) radiation that reaches the surface, and the amount of terrestrial (longwave) radiation that escapes is affected by what stands in the way – clouds. Of course, this obstacle is important for the surface energy balance because it affects the balance between the energy flowing into and out of the surface. However, the fine-scale characteristics of clouds (aka 'microphysics') produce different, often interacting and sometimes competing, effects on the surface energy balance, some of which are shown in the schematic above. Examples of these properties include:
The amount of ice and liquid in a cloud can affect how much energy it absorbs, reflects and emits – for instance, the more liquid a cloud contains, the more energy it emits towards the surface, because it is thicker and tends to be warmer than a cloud with lots of ice. However, clouds made up of lots of tiny liquid droplets also tend to be brighter than ice clouds containing larger crystals, which means they reflect more incoming solar radiation back into space. This example is a typical one where different microphysical properties cause competing effects, which makes them difficult to separate from each other.
[caption id="attachment_4554" align="aligncenter" width="711"] Radiative forcing (RF, solid bars) and Effective radiative forcing (ERF, hatched bars) of climate change during the Industrial Era (1750-2011) [Credit: adapted from IPCC Fifth Assessment Report, Figure 8.15: pp. 697].[/caption]The short answer is: not that much. Clouds are the largest source of uncertainty in our estimates of global climate change (check out the huge range of error in the estimates of cloud-driven radiative forcing in the figure above, from the IPCC's most recent report), and the science of Antarctic clouds is even more unclear because we don’t have a great deal of data to base our understanding on. To measure clouds directly, we need to fly through them - a costly and potentially dangerous exercise, especially in Antarctica.
[caption id="attachment_4555" align="aligncenter" width="1600"] Flying through a gap in cloud near Jenny Island on the approach to Rothera research station, on the Antarctic Peninsula, at the end of a data collection flight in November 2017 [Credit: Ella Gilbert].[/caption]In somewhere like Antarctica where we don't have much observational data, we have to rely on other tools. That's where computer models can be really useful – so long as we can be confident in the results they produce. Unfortunately, that’s part of the problem. Cloud properties and their effects on the surface energy balance are complex: we know that much. But modelling those properties is even more complex, because we have to simplify things to be able to turn them into computer code.
There is hope though! Recent studies (e.g. Listowski et al., 2017) have shown that models can more realistically represent Antarctic cloud microphysics if they use more sophisticated 'double moment' schemes, which are able to simulate more microphysical properties. With more accurate microphysics comes better representation of the surface energy balance, and improved estimates of melt over Antarctic ice shelves.
Edited by Clara Burgard
By Olivia Trani, EGU Communications Officer
Math ahead! (although as little as possible)
First, for people who are serious about learning more about inverse theory, I strongly suggest Albert Tarantola's “Inverse Problem Theory and Methods for Model Parameter Estimation”. You can buy it or download it (for free!) somewhere around here: http://www.ipgp.fr/~tarantola/. I will discuss the following topics in this blog series (roughly in order of appearance):
1. Direct inversion
2. Gradient descent methods
3. Grid searches
4. Regularization
5. Bayesian inference
6. Markov chain Monte Carlo methods
I start with these materials and those initial conditions: what will happen in this system?
In this post I will consider two examples of forward problems (one of which is chosen because I have applied it every day recently. Hint: it is not the second example...)
1. The baking of a bread with known quantities of water and flour, where the mass of the end product is unknown. A recipe is available which predicts how much bread will be produced (let's say under all ratios of flour to water).
2. Heat conduction through a volcanic zone where geological and thermal properties and the base heat influx are known, but surface temperatures are unknown. A partial differential equation is available which predicts the surface temperature (heat diffusion, Laplace's equation).
Both these systems are forward problems, as we have all the necessary ingredients to simulate the real world. Making a prediction from these 'input' variables is a forward model run, a forward simulation, or simply a forward solution. These simulations are almost exclusively deterministic in nature, meaning that no randomness is present. Simulations that start exactly the same will always yield the same, predictable result.
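A deterministic forward solution for the bread example might be sketched like this; the recipe, including the 30% evaporation of the water, is invented purely for illustration.

```python
def bake_bread(flour_g, water_g):
    """Hypothetical recipe as a forward model: map known inputs to a
    predicted bread mass. Some of the water is assumed to evaporate
    while baking."""
    dough = flour_g + water_g
    evaporation = 0.3 * water_g
    return dough - evaporation

# A forward solution is deterministic: the same inputs always yield
# the same prediction, with no randomness involved.
prediction = bake_bread(312.5, 250.0)
```

The inverse problem, the subject of the rest of the series, runs this map backwards: given an observed bread mass, which inputs could have produced it?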