|November 4, 2012||Posted by karmadsen under blog, Calibration, Complexity, Groundwater and Surface Water, Groundwater Modeling, Groundwater Modeling Software, Modeling Software, Programming|
As water becomes more scarce, water managers will have a smaller margin for error in managing their water budgets. Water management problems can be quite complex, drawing on diverse social, economic, and physical inputs.
Genetic algorithms are a good option for finding solutions to these types of problems, as they can handle water management problems in which a diverse array of inputs is formulated as a mixed-integer nonlinear programming problem.
These algorithms are a method of evolutionary computing, in which the optimization search is based on the principle of evolution in the natural world. Model parameters represent ‘genes’ and their particular values represent those genes’ DNA. As populations of simulations are tested against the objective, the genes are allowed to ‘mutate’ to encourage exploration of the model space while well-optimized model realizations are retained.
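To make the mechanics concrete, here is a minimal sketch of a genetic algorithm in Python. The objective function, population size, and mutation settings are all invented for illustration; they stand in for whatever water-management objective is being optimized.

```python
# A minimal genetic algorithm sketch: parameters are "genes," candidate
# parameter sets evolve by selection, crossover, and mutation.
import random

def fitness(params):
    # Hypothetical objective: minimize distance to some target parameter set.
    target = [2.0, -1.0, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def mutate(params, rate=0.1, scale=0.5):
    # Each gene mutates with probability `rate`, nudging its value.
    return [p + random.gauss(0, scale) if random.random() < rate else p
            for p in params]

def crossover(a, b):
    # Single-point crossover mixes genes from two parent parameter sets.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

# Evolve a population of candidate parameter sets.
population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]   # retain the best-optimized realizations
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children

print("best parameters:", max(population, key=fitness))
```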
Recent research published in Ground Water illustrated an optimization problem involving the development and management of a future water treatment plant to end septic-tank impacts to groundwater in the Warren Groundwater Basin in California. The researchers used a genetic algorithm combined with linear programming to find the global optimum to the management problem of combining surface water, reclaimed water, and groundwater.
Chiu YC, Nishikawa T, & Martin P. 2012. Hybrid-Optimization Algorithm for the Management of a Conjunctive-Use Project and Well Field Design. Ground Water. 50(1):103–117.
|October 22, 2012||Posted by karmadsen under blog, Calibration, Complexity, Geostatistics, GIS, Groundwater Modeling, Groundwater Modeling Software, GroundwaterGo|
A few people have asked me how the Map the Water Table Project works, and I have decided to provide the algorithm here. I will also write some commentary on how the algorithm works in a follow-up blog post this week.
Inputs to the algorithm include a digital elevation model of the land surface and observations of the water table. For the current version of Map the Water Table, I am using well data from the USGS well database.
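As a rough illustration of how these two inputs can interact, here is a hypothetical sketch that interpolates water-table elevations from well observations (via inverse-distance weighting) and clips the result so it never rises above the land surface. This is an invented illustration only, not the actual Map the Water Table algorithm; all coordinates and elevations are made up.

```python
import numpy as np

# Assumed inputs: well coordinates, water-table elevations, and a DEM grid.
wells_xy = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
wells_wt = np.array([95.0, 90.0, 88.0])      # water-table elevation at each well
xx, yy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
dem = 100.0 - 0.5 * xx                        # stand-in land-surface elevations

# Inverse-distance-weighted interpolation of the well observations.
dist = np.sqrt((xx[..., None] - wells_xy[:, 0]) ** 2 +
               (yy[..., None] - wells_xy[:, 1]) ** 2)
weights = 1.0 / np.maximum(dist, 1e-6) ** 2
water_table = (weights * wells_wt).sum(axis=-1) / weights.sum(axis=-1)

# The water table cannot sit above the land surface.
water_table = np.minimum(water_table, dem)
```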
This week’s blog series will become part of the Map the Water Table manual, which will be easy to reference from the Map the Water Table page.
|September 9, 2012||Posted by karmadsen under blog, Calibration, Complexity, Groundwater Modeling, Groundwater Modeling Software, MODFLOW|
Personally, I would prefer to keep philosophy and literary theory as far away from the discipline of groundwater modeling as possible. To my aesthetic tastes, models should be clean, mathematical, and objective. I would like to believe that if two rational people can find something to debate about in the context of modeling, then something went wrong in the set-up. Scientists and policy makers would like to believe this too, but as we know, nothing could be further from the truth. Earlier on this blog, I wrote about what happens when MODFLOW models are debated in court. Philosophers have also had a go at the discipline of simulation modeling, notably Jean Baudrillard, who dissected modeling through a post-modern lens in the 1980s and 90s.
Baudrillard developed a concept called simulacra, which basically means artificial representations of the natural world. There are three levels of simulacra. Under level one, the simulacrum can clearly be perceived as artificial; think of a cave painting of a gazelle. Level one correlates with the pre-industrial era. Under level two, it is difficult to tell the natural from the artificial, and this level correlates with the industrial era. For example, film and photography are media that look so real they can almost be mistaken for the real thing. The third level is the most interesting. Under level three, the simulacrum precedes what is real and even determines what is real. Simulation modeling falls under this category (Baudrillard 1994).
Baudrillard was mostly concerned with linguistic modeling of the type used in advertising, in which coded elements act out future events. (Buy this beer, get the girl.) He also used the example of political polling, in which the results of the poll may as well be reality itself. But while he was concerned that this linguistic landscape was alienating people from authentic experience, he did acknowledge that simulation modeling was more appropriate within the scientific context. “It can be used as an analytical tool under controlled scientific conditions,” he wrote (Baudrillard 1999).
Still, modelers are increasingly confronting the deceptive illusion of reality that can come out of simulation modeling. The result of a model is not reality, as much as we would like it to be and as much as that would make our lives easier. But while most modelers now accept that a model cannot truly be validated, that doesn’t mean it isn’t a useful tool. Thus, modelers have simply refined the way that they talk about modeling. Instead of talking about model validation, we talk about the model’s corroboration with natural observations, such that it can be helpful in making predictions within pragmatic constraints (Saltelli et al 2008).
In the end, Baudrillard’s critique of simulations is more an important warning than a true damning of the medium. Most of us can distinguish between a model and reality when we take the time to stop and actually think about it.
Baudrillard J. 1994. Simulacra and Simulation. Translator: Glaser SF. University of Michigan Press.
Baudrillard J. 1999. Revenge Of The Crystal – Classic Edition: Selected Writings on the Modern Object and its Destiny, 1968-1983. Pluto Press. pg 92.
Saltelli A, Ratto M, Andres T, Campolongo F, Cariboni J, Gatelli D, Saisana M, Tarantola S. 2008. Global Sensitivity Analysis: The Primer. John Wiley & Sons Ltd. pg 3.
|September 3, 2012||Posted by karmadsen under blog, Calibration, Complexity, Groundwater Modeling, Groundwater Modeling Software, Modeling Software, MODFLOW, Uncertainty|
Recently, the journal Ground Water published a method note about parameterization in MODFLOW-2005 and some computational challenges associated with tackling a large number of parameters (D’Oria and Fienen 2012).
In MODFLOW, numerical parameter values in the input files can be replaced with character strings that represent parameters. A simple way to understand this is to think of a hydraulic conductivity zone in which all the cells are assigned the same value. That value could be replaced by a character string in the input file for all the cells, and parameter estimation or sensitivity analysis could then be conducted on the parameter. This system works very well when the number of parameters is relatively small, but it may bog down a model when the number of parameters is large. Fortran, the language that MODFLOW is written in, compares strings one character at a time, starting with the first character in the string and moving on only if those characters match. Because MODFLOW also checks the parameter sets for errors in addition to implementing the parameters on the grid, a great many string comparisons are performed. As the number of parameters increases, the computational effort this entails, and thus the time needed to complete the parameter estimation, quickly becomes cumbersome.
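As a rough illustration of the scaling problem (in Python rather than Fortran, and with invented parameter names and sizes), compare matching each cell’s parameter string by a linear scan against a direct keyed lookup:

```python
import random
import time

param_names = [f"HK_ZONE_{i:06d}" for i in range(10_000)]  # hypothetical parameter strings
param_values = {name: float(i) for i, name in enumerate(param_names)}
random.seed(0)
cells = random.choices(param_names, k=100_000)             # each cell references a parameter

# Linear scan: compare the cell's string against candidates until one matches,
# roughly how repeated character-by-character searches over a parameter list behave.
start = time.perf_counter()
found = [next(v for n, v in param_values.items() if n == cell)
         for cell in cells[:200]]                          # only 200 cells: it is slow
print("linear string matching, 200 cells:", time.perf_counter() - start)

# Keyed lookup: one hash per cell, no repeated string scans.
start = time.perf_counter()
found = [param_values[cell] for cell in cells]             # all 100,000 cells
print("keyed lookup, 100,000 cells:     ", time.perf_counter() - start)
```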
Through test cases, the researchers demonstrated that these inefficiencies could be corrected by conducting sensitivity analysis on every model node rather than on model parameters. This modification resulted in an order-of-magnitude reduction in run time when the number of parameters/nodes reached 1,000,000.
I think this is a very interesting idea, but I can see that it could be difficult to communicate the results to others, since many of us are used to thinking about sensitivity analysis in terms of traditional parameters. However, the authors make an important point: it would be wise for modelers to consider this implementation if they have a large number of parameters and could drastically improve the efficiency of their modeling effort by adopting it.
D’Oria M, Fienen MN. 2012. MODFLOW-Style Parameters in Underdetermined Parameter Estimation. Ground Water. 50(1): 149–153.
|August 22, 2012||Posted by karmadsen under blog, Calibration, Complexity, Uncertainty|
YouTube has several videos about sensitivity analysis in Excel. The methods presented below are all based on financial models, and they examine relatively few variables.
In simple financial models, sensitivity analysis is used to study how changing an input parameter will change an output parameter. Generally, it is used to analyze how a change to a business model will affect profits. In this way, it is somewhat different from how sensitivity analysis is used in groundwater modeling, where it may have implications for the certainty of the whole model.
However, the mathematics behind these techniques could be applied to simple groundwater models toward that goal if they were used a little differently.
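To make the one-at-a-time idea the videos below demonstrate in Excel concrete outside of a spreadsheet, here is a minimal Python sketch that perturbs each input of a toy profit model in turn and records the response; the model and baseline values are invented:

```python
# One-at-a-time sensitivity: change one input by 10% while holding the
# others at baseline, and record how the output responds.
def profit(price, units, unit_cost, fixed_cost):
    return price * units - unit_cost * units - fixed_cost

baseline = {"price": 10.0, "units": 1000, "unit_cost": 6.0, "fixed_cost": 2000.0}

for name in baseline:
    perturbed = dict(baseline)
    perturbed[name] *= 1.10                  # +10% change to one input
    delta = profit(**perturbed) - profit(**baseline)
    print(f"+10% {name}: profit changes by {delta:+.2f}")
```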
Michael Popelianski explains his Sensitivity Analysis Model for Excel.
ExcelIsFun’s Excel Finance Class 89 discusses sensitivity analysis in Excel spreadsheet financial models.
Ezselflearning shows how to use Excel’s What-If Analysis tool to conduct sensitivity analysis on a simple financial model.
|August 20, 2012||Posted by karmadsen under blog, Calibration, Complexity, Uncertainty|
This blog is part 8 of a new blog series, a Systematic Approach to Sensitivity Analysis.
At the Proceedings of the 2007 Winter Simulation Conference, researchers presented a summary of various visualization techniques for sensitivity analysis. Naturally, when visualizing sensitivity analysis, it is logical to plot sensitivities vs. inputs, but a major problem is that most models have more than three dimensions. The simplest goal of such a visualization would be to explore regions of the sensitivity set where various inputs come together in an interesting way, picking out particularly robust or particularly weak input sets. It’s easy to imagine such a plot in 2D or 3D space, but obviously, as humans we can’t visualize 4D space with our eyes.
One solution to this problem is projecting multidimensional space into 3D space. (This is the equivalent of taking a 2D cross section of a 3D model.) Other methods include stacked bar graphs.
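Here is a toy sketch of that projection idea, assuming an invented four-input model: sample the input space, then scatter the output against two inputs at a time, which is the higher-dimensional analogue of the cross section described above.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(2000, 4))         # samples in 4D input space
y = x[:, 0] ** 2 + 0.5 * x[:, 1] * x[:, 2] + 0.1 * x[:, 3]   # invented model

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for ax, (i, j) in zip(axes, [(0, 1), (2, 3)]):
    sc = ax.scatter(x[:, i], x[:, j], c=y, s=5)  # project 4D samples onto 2 inputs
    ax.set_xlabel(f"input {i}")
    ax.set_ylabel(f"input {j}")
fig.colorbar(sc, ax=axes, label="model output")
plt.show()
```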
All of these solutions separate out one or two variables and study their impact on the solution. While this is obviously interesting, it has got me thinking that visualizing sensitivity analysis has some serious limitations. Why should we weight so heavily the relationships that we can understand visually? Conceptually, we can understand the problem in more than three dimensions. The user could establish the kinds of relationships he or she is interested in finding in the multidimensional plot, and areas of interest could be located with the aid of an algorithm.
However, visualization would be of use when studying a handful of variables of interest. In particular, it could be helpful in report writing when trying to draw the reader’s attention to a particular detail.
In conclusion, when trying to understand the system as a whole, visualization would probably just confuse the user. It would be better to explore the space mathematically.
Kondapaneni I, Kordík P, Slavík P. 2007. Visualization Techniques Utilizing the Sensitivity Analysis of Models. Proceedings of the 2007 Winter Simulation Conference. Department of Computer Science and Engineering, Czech Technical University in Prague.
|August 2, 2012||Posted by karmadsen under blog, Calibration, Complexity, Uncertainty, Visualization|
This blog is part 7 of a new blog series, a Systematic Approach to Sensitivity Analysis.
In my earlier post in this series, I discussed the challenges of communicating the results of sensitivity analysis. One software suite, SUNDIALS, provides an interesting solution to this problem. SUNDIALS is a suite of equation solvers maintained by Lawrence Livermore National Laboratory.
From the website:
“SUNDIALS was implemented with the goal of providing robust time integrators and nonlinear solvers that can easily be incorporated into existing simulation codes. The primary design goals were to require minimal information from the user, allow users to easily supply their own data structures underneath the solvers, and allow for easy incorporation of user-supplied linear solvers and preconditioners.”
SUNDIALS also includes several options for outputting visualizations of sensitivity analysis. These include:
- Plots showing the evolution of sensitivities with respect to problem parameters (a toy version is sketched after this list).
- Isosurfaces of the sensitivity gradient with respect to the source parameters.
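As a toy version of the first plot type (not generated with SUNDIALS itself), consider a decay model dy/dt = −py whose sensitivity dy/dp is known analytically; SUNDIALS computes curves like these numerically for general systems.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 5, 200)
y0 = 1.0
for p in (0.5, 1.0, 2.0):
    sens = -t * y0 * np.exp(-p * t)      # dy/dp for y(t) = y0 * exp(-p*t)
    plt.plot(t, sens, label=f"p = {p}")
plt.xlabel("time")
plt.ylabel("dy/dp")
plt.title("evolution of solution sensitivity to parameter p")
plt.legend()
plt.show()
```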
For me, these types of visualizations make understanding sensitivity analysis much more intuitive, at least when it comes to comparing the effects of different parameters on the outcomes. A remaining challenge is how to visualize, in an intuitive manner, red flags that indicate problems within the model.
|July 31, 2012||Posted by karmadsen under blog, Calibration, Complexity, Uncertainty|
This blog is part 6 of a new blog series, a Systematic Approach to Sensitivity Analysis.
Like forward sensitivity analysis, adjoint sensitivity analysis is locally focused, but rather than differentiating the model with respect to each parameter, an adjoint of the matrix is constructed for each output of interest (Serban 2008). (A very clear explanation of the mathematics of calculating the adjoint is shown here).
The method is appropriate when only a few outputs are of interest, and it reduces the number of computations that must be performed: the adjoint system for each output can be solved instead of differentiating the original matrix parameter by parameter, and the output’s sensitivity to any set of parameters can then be calculated.
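Here is a minimal numpy sketch of the adjoint idea for a steady linear system Ax = b with a single scalar output g = c·x; the matrices are invented for illustration. One adjoint solve serves every parameter, which is where the computational savings come from.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 0.0])                    # output of interest: first component of x

x = np.linalg.solve(A, b)                   # forward solution
lam = np.linalg.solve(A.T, c)               # one adjoint solve for this output

# Sensitivity of g to any parameter p then needs only dA/dp and db/dp:
# dg/dp = lam . (db/dp - dA/dp @ x)
dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])  # say p enters only through A[0,0]
db_dp = np.zeros(2)
dg_dp = lam @ (db_dp - dA_dp @ x)
print("dg/dp =", dg_dp)
```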
This is all well and good, but the question still remains of how to communicate sensitivity analysis in a way that humans can understand. That’s the question that will be tackled in the next post in this series.
Serban, R. 2008. Adjoint-based methods for analysis of dynamic systems. Center for Applied Scientific Computing Lawrence Livermore National Laboratory. March 12, 2008 Stanford Presentation. Available at: http://smartfields.stanford.edu/documents/080312_serban.pdf
|July 27, 2012||Posted by karmadsen under blog, Calibration, Complexity, Uncertainty|
This blog is part 5 of a new blog series, a Systematic Approach to Sensitivity Analysis.
In this post I will briefly explain the concept behind local forward sensitivity analysis. Local sensitivity analysis differs from global in that only one parameter is varied at a time. While the partial derivative with respect to the perturbed parameter is calculated, the other parameters are held at their “nominal” values, the modeler’s best guess at the true values.
In forward sensitivity analysis, you start with a matrix of equations based on a set of parameters. For each parameter p in the set, you differentiate the matrix with respect to p. This process results in a new matrix, the same size as the original one, which describes solution sensitivity (Petzold et al. 2006).
Using the chain rule of differentiation, the gradient of any output function can be computed using this matrix.
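A minimal numpy sketch of the forward approach for a steady linear system Ax = b, with invented matrices: one extra linear solve per parameter yields the sensitivity matrix, and the chain rule then gives the gradient of any output function.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)

# Suppose two parameters: p1 enters through A[0,0] and p2 through b[1].
dA = [np.array([[1.0, 0.0], [0.0, 0.0]]), np.zeros((2, 2))]
db = [np.zeros(2), np.array([0.0, 1.0])]

# One solve per parameter: A (dx/dp) = db/dp - (dA/dp) x
S = np.column_stack([np.linalg.solve(A, db[k] - dA[k] @ x) for k in range(2)])

# Chain rule: for any output g(x), grad_p g = (dg/dx) @ S.
g_x = 2 * x                                  # e.g. g(x) = x.x, so dg/dx = 2x
grad_g = g_x @ S
print("sensitivity matrix:\n", S)
print("gradient of g wrt parameters:", grad_g)
```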
There are many things that I like about local forward sensitivity analysis. It is logical, complete, standardized, and does not rely on human judgement. The major problem that is often stated in regards to this method is that it is computationally intensive with large parameter sets. But that’s not the problem that I have with it. My problem with this, as well as with many other studies of sensitivity analysis, is its lack of intuitiveness. If we are trying to communicate model uncertainty to policy makers and the public, we cannot hand them a matrix of derivatives.
The real question is how we create a standardized paradigm for sensitivity analysis that is logical, complete, and easy to communicate to policy makers and the public.
Petzold, L., Li, S., Cao, Y., and Serban, R. 2006. Sensitivity Analysis of Differential-Algebraic Equations and Partial Differential Equations. University of California Santa Barbara. Department of Computer Science. Available: http://www.cs.ucsb.edu/~cse/Files/CCEpaper.pdf.
A Systematic Approach to Sensitivity Analysis: An argument for standardized, computerized sensitivity analysis
|July 20, 2012||Posted by karmadsen under blog, Calibration, Complexity, Groundwater and Surface Water, Groundwater Modeling, Modeling Software|
This blog is part 4 of a new blog series, a Systematic Approach to Sensitivity Analysis.
The goal of sensitivity analysis is to study how model outcomes change as model input parameters change.
In a report for Sandia National Laboratories, Scott C. James detailed several different approaches to sensitivity analysis:
“Variation of parameters or model formulation – In this approach, the model is run at a set of sample points (different combinations of parameters of concern) or with straightforward changes in model structure (e.g., grid resolution). Sensitivity measures appropriate for this analysis are: the model response from arbitrary parameter variation, normalized model response, and extrema. Of these measures, the extreme values (worst case scenarios) are often critically important to environmental applications. That is, low probability high consequence events must be carefully considered.
Regional sensitivity analysis – Sensitivity is estimated by studying model behavior over the entire range of parameter variation, taking uncertainty in the parameter estimates into account.
Local sensitivity analysis – Here, the focus is on estimates of model sensitivity to input and parameter variation in the vicinity of a sample point. This sensitivity is often characterized through gradients or partial derivatives at the sample point (e.g., radionuclide concentrations at a receptor point). These might include first- and second-order moment techniques.” (James 2004)
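As a toy sketch of the first approach (variation of parameters), with an invented stand-in for the simulation: run the model at random sample points in parameter space and summarize the responses, including the extrema that matter for worst-case analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(k, recharge):
    # Stand-in for a real simulation: a head response to conductivity and recharge.
    return recharge / k

# Sample points: random combinations of the two parameters of concern.
samples = rng.uniform([1e-5, 1e-4], [1e-3, 1e-3], size=(500, 2))
responses = np.array([model(k, r) for k, r in samples])

print("mean response:    ", responses.mean())
print("normalized range: ", (responses.max() - responses.min()) / responses.mean())
print("worst case (max): ", responses.max())
```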
Sensitivity analysis works best under a specific set of conditions. First of all, a model must be “relevant”; in other words, changing the model input factors must change the output factor of interest. (This seems obvious, but sensitivity analysis can invalidate a poorly thought-out modeling effort by revealing that the model is, in fact, irrelevant.) Secondly, complexity is the enemy of sensitivity analysis. In a best-case scenario, a sensitivity analysis would be conducted on a relatively simple model, composed of a manageable number of input parameters that tightly constrain an output parameter (Saltelli 2005).
Unfortunately, when it comes to complexity, modeling trends are actually moving in the opposite direction. As computers become more powerful, models are growing bigger and different types of models are increasingly linked. The result is that, while the concepts of sensitivity analysis are simple in theory, non-linearities and interactions among parameters quickly make it complicated. When dealing with interactions among multiple parameters, it becomes a task that cannot be managed by hand.
At this point in the history of groundwater modeling, people probably shouldn’t be performing sensitivity analysis by hand or even through non-standardized algorithms. The goal should really be to develop uniform and automatic processes that can be applied across models and parameters, and can easily be shared and replicated among scientists and engineers. That said, it is important to understand how sensitivity analysis is, and was, done, both by hand and by automated processes.
James, S.C. (2004). An Example Uncertainty and Sensitivity Analysis at the Horonobe Site for Performance Assessment Calculations. Sandia National Laboratories. Sandia Report. SAND2004-3440.
Saltelli, A. 2005. Global Sensitivity Analysis: An Introduction. European Commission, Joint Research Centre of Ispra, Italy. In Sensitivity Analysis of Model Output. Ed. Hanson, K.M. and Hemez, F.M. Los Alamos National Laboratory.