Hydrological modelling is about generalisation: the aim of the exercise is to produce a simplified representation that can give a reasonable prognosis of the future performance of a hydrological system, either in real time (forecasting) or with no specific time reference (prediction). In more detail, we seek to model a set of physical, chemical and/or biological processes acting upon one or more input variables and to convert them into one or more output variables. To put it another way, from a computing perspective we seek to transform data inputs into acceptable data outputs - and nothing more sophisticated than that. Most existing hydrological models are based on (a) "recognised equations" and (b) "sound physical processes". The latest batch comprises intricate programs that are complicated to run, require detailed input in the form of numerous difficult-to-acquire layers of continuous spatial data, and depend upon extended meteorological and hydrological records for their calibration. In addition to these technical drawbacks, imperfections and limitations in their internal workings have become a matter for concern, especially given the current trend towards building more complex models, simulating larger catchments, and operating in a real-time context.
Neurohydrology, defined in this instance as "the application of artificial neural networks to hydrological modelling tasks", offers one possible solution to the explicit limitations of traditional equation-bound models. Developments in computer hardware and software over the last decade have enabled neural networks to become a viable technology for tackling problems that were hitherto difficult or impossible to resolve using conventional programming techniques. Nevertheless, there remains a technology transfer gap between the neural network experts who are now tackling such problems and those who have hard problems to solve but are unskilled in neurocomputing. Some initial neural network explorations have been made within the hydrological sciences and these are reported elsewhere. However, most of the modelling opportunities thus far taken on board have been of a simple or catchment-specific nature, leading other workers in this field to fundamentally underestimate the potential of neural networks for more important or more demanding hydrological modelling work. But a neural network is a universal approximator, and such tools are thus ideal candidates for all forms of generalisation, including the construction of hydrological models. Moreover, these "computer intelligence" based tools have other advantages which go beyond what most traditional forms of modelling can achieve: such systems will perform high-speed information processing, handle complex non-linear functions, handle incomplete, noisy and fuzzy information, perform model-free function estimation, and learn from training data. The role of the responsible scientist should therefore be to examine what is on offer, and to assess the potential benefits and drawbacks of the various tools that are now available in this powerful, emergent, data-driven modelling paradigm.
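The universal-approximation and model-free estimation claims above can be illustrated with a minimal sketch. The fragment below (purely illustrative, not drawn from the work described) fits a single hidden layer of random tanh units to a non-linear target function, adjusting only the output weights by least squares - the simplest stand-in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: a non-linear target function for which the network is
# given no equation - only input-output pairs.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of random tanh units; only the output weights are fitted
# (a minimal "random features" scheme, enough to show function approximation).
n_hidden = 50
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(x @ W + b)                        # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares output weights

y_hat = H @ beta
rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
print(f"RMSE on sin(x): {rmse:.4f}")
```

The point is not the particular target function but the procedure: no governing equation is coded anywhere, yet the fitted network reproduces the mapping from the data alone.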
In a generic sense, one can envisage three basic options for the implementation of neural network solutions in hydrological modelling, viz:
A neural network could be trained to solve a specific hydrological task or problem. This is the traditionalist viewpoint, with implementation being dependent upon the provision of extensive records covering not just extreme events but also a reasonable distribution of all intermediate instances. Such implementations, however, would enable the inclusion of all possible data sources (irrespective of our knowledge about their relative roles) and would not be handicapped by the need to use poor theories, ill-fitting models, difficult-to-code equations, and inappropriate surrogate values.
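The first option can be sketched in miniature: a data-driven model fitted directly to input-output records, with no process equations supplied. The data below are synthetic (a hypothetical rainfall series, with "observed" runoff generated by a simple linear-reservoir recursion standing in for the gauged record), and ordinary least squares on lagged rainfall stands in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily rainfall record (mm) and a synthetic "observed" runoff
# series; in practice both would come from extensive gauged records, as the
# text stresses.
n = 1000
rain = rng.gamma(shape=0.5, scale=4.0, size=n)
runoff = np.zeros(n)
for t in range(1, n):
    runoff[t] = 0.8 * runoff[t - 1] + 0.2 * rain[t]

# Data-driven model: predict today's runoff from today's and the previous
# days' rainfall, fitted purely from the records.
lags = 8
X = np.column_stack([rain[lags - k : n - k] for k in range(lags)])
y = runoff[lags:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ coef
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(f"R^2 of lagged-rainfall model: {r2:.3f}")
```

Note that the fitted coefficients are never told about the reservoir recursion; the mapping is recovered from the record alone, which is exactly the dependence on extensive records that this option entails.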
A neural network could be trained to solve less complex hydrological problems, following appropriate sub-division of a task or problem into more solvable units. Indeed, there are some computations for which the full data set will be known, or can be generated elsewhere with suitable statistical tools - in each case providing a near-perfect set of training data for the neural network. Moreover, recent neural network software can generate 3GL solution code, thus enabling the inclusion of internal neural network solutions within existing models. Enhancement could also involve intelligent data pre-processing and/or post-processing operations - working with single or multiple data sets.
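One way to picture the second option is a sub-task whose full training set can be generated exactly from a known relationship. The sketch below (with illustrative parameter ranges and a roughness value chosen for the example) generates flow velocities from Manning's equation and fits a small random-feature network to the generated data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Generate a near-perfect training set from a known relationship -
# Manning's equation v = (1/n) * R^(2/3) * S^(1/2) - over hypothetical
# ranges of hydraulic radius R (m) and slope S (dimensionless).
n_manning = 0.035                        # roughness coefficient (assumed fixed)
R = rng.uniform(0.1, 5.0, size=2000)
S = rng.uniform(1e-4, 1e-2, size=2000)
v = (1.0 / n_manning) * R ** (2.0 / 3.0) * np.sqrt(S)

# Fit a one-hidden-layer network (random tanh units, least-squares output
# weights) to the generated sub-task data.
X = np.column_stack([R, S * 100.0])      # crude rescaling of the slope input
W = rng.normal(size=(2, 80))
b = rng.normal(size=80)
H = np.tanh(X @ W + b)
beta, *_ = np.linalg.lstsq(H, v, rcond=None)
rmse = float(np.sqrt(np.mean((v - H @ beta) ** 2)))
print(f"RMSE against Manning's equation: {rmse:.4f}")
```

Because the training data can be generated in any quantity and without noise, the sub-task network can be fitted to whatever accuracy the host model requires before being embedded within it.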
A neural network could be trained to mimic an existing model - thus offering general improvements in speed and data handling capabilities. However, such clones could also be constructed to include additional variables in those circumstances where this is deemed appropriate, and/or to omit certain variables in those instances where the standard input is not available. Neural network clones could also be used for rapid prototyping, sensitivity analysis, and bootstrapping operations. Perhaps less obvious is the use of neural network clones to mimic spatial distributions, thus removing the existing problems of storing and accessing copious amounts of spatial input, and enabling models to switch from file-based data retrieval (slow) to chip-based data computation (fast) operations.
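The cloning option can be sketched in the same spirit. Below, a toy "existing model" (a non-linear storage-outflow rule invented for the example) is sampled to build a training set, and a random-feature network is fitted as its emulator - the same procedure would apply to a genuinely expensive model:

```python
import numpy as np

rng = np.random.default_rng(3)

def existing_model(storage, rain):
    """Stand-in for an existing model: a toy non-linear storage-outflow rule."""
    return 0.1 * storage ** 1.5 + 0.05 * rain * np.sqrt(storage + 1.0)

# Sample the existing model to build a training set for its neural clone;
# with a real model this sampling is the slow, one-off step.
storage = rng.uniform(0.0, 10.0, size=3000)
rain = rng.uniform(0.0, 20.0, size=3000)
outflow = existing_model(storage, rain)

# Fit the clone: random tanh hidden layer, least-squares output weights.
X = np.column_stack([storage, rain])
W = rng.normal(scale=0.3, size=(2, 100))
b = rng.normal(size=100)
H = np.tanh(X @ W + b)
beta, *_ = np.linalg.lstsq(H, outflow, rcond=None)
clone = H @ beta
rmse = float(np.sqrt(np.mean((outflow - clone) ** 2)))
print(f"Clone RMSE against the existing model: {rmse:.4f}")
```

Once fitted, the clone is a fixed matrix computation: evaluating it needs no access to the original model's code or input files, which is what permits the switch from file-based retrieval to chip-based computation noted above.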
A research agenda has therefore been established - comprising a sequence of investigations that will be used to assess the full potential of this latest challenge to the status quo.
Step 1. An illustration that existing models can be emulated (in part or in full).
Step 2. Evidence that improved levels of performance and faster model outputs can be obtained.
Step 3. Case studies that demonstrate how previously poorly modelled systems can be handled in a more accurate manner.
Step 4. Development of new types of hybrid neuro and conventional models.
Step 5. Development of highly dynamic models.
Perforce, each successful experiment that is carried out will furnish another piece of the evidence required to mount an acceptable challenge to the existing paradigm. It is recognised that new paradigms can acquire status over their competitors through the successful solution of a limited number of problems that a group of practitioners has come to recognise as acute - so this will be the ultimate goal. Moreover, to be successful is not to be completely successful with a single problem, nor notably successful with a large number: it is just to be successful. Indeed, the triumph of a new paradigm is at the start based largely on a promise of success discoverable in selected and still incomplete examples.
In the light of the knowledge and experience thus far gained, the following optimistic predictions will be examined and commented upon:
The presentation will be illustrated with examples drawn from work to date.