
Determining The Capacity Of Dam Using Probabilistic Approach.

Chapter One

Prelude

1.1 Introduction

A dam is a barrier, commonly built across a watercourse, to hold back water, often forming a reservoir or lake; dams are also sometimes used to control or contain rockslides, mudflows, and the like in regions where these are common. According to the definition given by the Federal Emergency Management Agency (FEMA), “Dam means an artificial barrier, including dikes (a barrier blocking a passage, especially for protection), embankments, and appurtenant works, that impounds or diverts water, or is designed to impound or divert water or a combination of water and any other liquid or material in the water”.1

A structure or machinery incident to or annexed to a dam that is built to operate and maintain the dam is called an appurtenant structure. Important appurtenant structures of a dam include spillways, to allow safe passage of flood flows; tunnels, to control releases for irrigation and/or power generation; the power station; and canal outlet(s). A spillway is a structure in or about a dam designed for the discharge of water. Impoundment means the water held back by a dam.

Dams are made of timber, rock, earth, masonry, or concrete or of combinations of these materials. The height of a dam may vary from less than 50 feet to more than 1,000 feet. The reservoir or lake created by a dam may also vary in size; its area may vary from a few acres to more than 100 square miles.

Water arriving at a dam is termed the input, and it is stored in the associated reservoir for some time. A fraction of the stored water is then released from the dam to meet the demand; this release is usually called the ‘draft’. Water is released from the dam according to some policy called the ‘release policy’. Thus, the inflow of water (input) and the policy of release are the two main features of what we call the ‘operating rules’ or ‘operating policy’. A dam is maintained and operated according to these operating rules.

When a dam is constructed, a storage reservoir is formed automatically. Efficient operation of a dam requires meeting the demand for water properly, and to meet the demand properly the dam should be of a size that can hold as much water as it must supply. Researchers have studied the efficient operation of dams for a long time. They related the storage of a dam to the inventory problem, since the basic feature of a dam is storing water until it is released. But it was Professor P.A.P. Moran who, in 1954, first gave a probabilistic formulation of a storage model for the dam and called the theory a “Probability theory of Dams”. In a dam situation, water arrives continuously and at random, while water is released according to a certain policy which is usually deterministic. The random input in the dam process makes the storage function

Z_{t+1} = Z_t + X_t - R_t,

where Z_t is the amount of water in the dam at time t, X_t is the input or inflow into the dam during time interval t, and R_t is the release in the corresponding time,

a stochastic process, the behavior of which depends on the input and release pattern. Here the amount of water in the dam at different points of time is called the storage function. As the input into the dam is random, the dam process is a stochastic process, and thus statistical techniques can be applied in studying it.

1 Definition given by Federal Emergency Management Agency (FEMA), www.fema.gov
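The recurrence above is easy to exercise numerically. The following minimal sketch simulates the storage function for a hypothetical inflow series; the log-normal inflow distribution, the 90% draft and the 5-unit capacity are illustrative assumptions, not values from this study.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_storage(inflows, release, capacity, z0=0.0):
    """Trace the storage function Z_t: add the inflow, spill anything
    above capacity, then release the draft (or whatever is available)."""
    z, trace = z0, []
    for x in inflows:
        z = min(z + x, capacity)   # inflow, with any overflow spilled
        z = max(z - release, 0.0)  # draft, limited by available water
        trace.append(z)
    return np.array(trace)

# Hypothetical inflows: 100 periods of log-normal flows (illustrative only).
inflows = rng.lognormal(mean=0.0, sigma=0.5, size=100)
trace = simulate_storage(inflows, release=0.9 * inflows.mean(), capacity=5.0)
print(f"mean content {trace.mean():.2f}; empty in {(trace == 0).sum()} of {trace.size} periods")
```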

There are four main problems in reservoir theory where statistics and applied probability theory are involved. These are:

(1) Studying the nature of the input pattern and augmenting short-record values,
(2) Finding the stationary level of the dam content,
(3) Finding the probability distribution of the first emptiness time, and
(4) Determination of capacity.

These problems are actually connected to each other, since a method of solution of the fourth problem automatically leads to studying the nature of the input pattern of the historical flows and then finding the stationary level of the dam content. The first problem is essentially a problem in time-series analysis, and hence statistical tools can be applied to it directly. We have to obtain the characteristics of the historical flows, such as the mean, variance, coefficient of variation, coefficient of skewness, and serial correlation coefficient. Experience shows that considering the lag-one serial correlation is enough, although more lags can be considered. In our study we considered the lag-one serial correlation coefficient; a sketch of these computations is given below.
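The sketch below estimates these flow characteristics from a record; the ten-value flow record is made up purely for illustration.

```python
import numpy as np
from scipy import stats

def flow_characteristics(q):
    """Summary statistics of a historical flow series used in this study."""
    q = np.asarray(q, dtype=float)
    mean = q.mean()
    var = q.var(ddof=1)
    cv = np.sqrt(var) / mean               # coefficient of variation
    skew = stats.skew(q, bias=False)       # coefficient of skewness
    r1 = np.corrcoef(q[:-1], q[1:])[0, 1]  # lag-one serial correlation
    return {"mean": mean, "variance": var, "cv": cv, "skewness": skew, "lag-1": r1}

# Made-up annual flow record (units, e.g., MAF) for illustration.
flows = [3.2, 2.8, 4.1, 3.9, 2.5, 3.0, 4.4, 3.6, 2.9, 3.8]
print(flow_characteristics(flows))
```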

Researchers focused mainly on the fourth problem, in which the relationship between capacity, reliability (probability of failure, or probability of emptiness) and release policy is studied. This fourth problem is essentially related to applied probability, because the concept involved in determining reliability in terms of probability is a core matter of statistics. Now, the inflows may be independent of or dependent on each other. The dependence pattern (if any) depends on many factors and conditions. But until Lloyd's work in 1963, in which he considered Markov-dependent inflows, researchers treated the inflows as being independently and identically distributed.

Ideally, the assumption of independent and identically distributed inflows is not quite realistic, as the flow pattern is not naturally independent, because of the "persistence" of hydrologic series (e.g., water flow series, stream flow series, snowfall, etc.). Persistence is a non-random characteristic of a hydrologic series that indicates how one event is influenced by another. For example, a month with high flow will tend to be followed by another high flow rather than a low flow. That is, one event is related to, or characterized by, another event adjacent to it. This feature of a hydro-series is quantitatively characterized by the serial correlation coefficient, which indicates how strongly one event is affected by a previous event. Thus the behavior of inflow is something like the following: the flow at a given time (say a day or a month) may depend on the inflow of the previous time period and not further on the past. Therefore, the inflow pattern is Markovian, and hence the flows are not serially independent. The assumption of serial dependence is thus the more realistic one.

As in the dam situation the inflows follow either an independent or a Markov-dependent pattern, and the inflows are stored in a dam and released according to some release rule, the process is similar to the inventory process, an important branch of stochastic processes. So the selection of an inflow model and the determination of capacity can very well be studied using statistical techniques.

In our study, we will focus on determining the capacity of a reservoir using various techniques. Before proceeding further, we need to know what the ‘capacity of a dam’ is and why its determination is so important.

1.2 Capacity of Dam

The capacity of a dam is defined as the volume capable of being impounded at the top of the dam. We will use the terms ‘reservoir capacity’ and ‘capacity of the dam’ synonymously. We will also use the terms ‘reservoir’ and ‘dam’ synonymously.

The active capacity of a reservoir is the capacity normally usable for storage and regulation of reservoir inflows to meet established reservoir operating requirements. It is also the total capacity less the sum of the inactive and dead storage capacities.

Total capacity is the reservoir capacity below the highest of the elevations representing:

(i) The top of exclusive flood control capacity [i.e., the reservoir capacity assigned to the sole purpose of regulating flood inflows to reduce possible damage downstream. In some instances, the top of exclusive flood control capacity is above the maximum controllable water surface elevation.],

(ii) The top of joint use capacity [i.e., the reservoir capacity assigned to flood control purposes during certain periods of the year and to conservation purposes during other periods of the year], or

(iii) The top of active conservation capacity [i.e., the reservoir capacity assigned to regulate reservoir inflow for irrigation, power generation, municipal and industrial use, fish and wildlife, navigation, recreation, water quality maintenance, and other purposes. It does not include exclusive flood control or joint use capacity.]

Total capacity is fixed at a certain level. Water in the lower elevation of a reservoir that is unavailable for use is said to lie below the dead storage level. The reservoir storage capacity up to the dead storage level is called the dead storage capacity and is provided to accommodate the incoming sediments. The number of years taken to fill the dead storage capacity is called the reservoir life. The reservoir capacity between the dead storage and live storage levels is called the live storage capacity and represents the amount of water which can be stored during the flood season(s) and released during the low-flow season(s). The live storage capacity of a dam is reduced day by day because of sedimentation; therefore, a replacement storage dam is required if we wish to maintain current levels of water availability. The storage capacity has the units of volume and is generally measured in million acre-feet (MAF). An acre-foot is the volume of water required to cover one acre of land (43,560 square feet) to the depth of one foot and is equal to 43,560 cubic feet. For a dam designed to safely pass the maximum probable floods arriving at a time when the dam is already at the maximum storage level, additional storage capacity is provided above the maximum storage level, and spillways of adequate capacity are provided to handle the flood flows.

1.3 Importance of Capacity Determination

Water is one of the continuously renewable natural resources of the globe. It establishes a connection between the other spheres of the earth, and it is an important component of the human environment. It is a generally agreed perception that water is increasingly becoming an issue of primal significance. The ECOSOC2 Committee on Natural Resources, in a recent strategy paper, indicated that as many as 52 countries with a population of more than three billion will be "water stressed" or face chronic water scarcity by the year 2025 (Hussain, 2000). The growing problem has as much to do with the availability of fresh water in the overall global context as with the fact that such resources, even when available, are in the wrong places or available at the wrong times. That is why the effective management of water resources is undeniably important for the proper utilization of this resource. For beneficial use, the water resource can be managed in various ways, one of which is by constructing storage reservoirs. The concept of a storage reservoir is not only for the beneficial use of the water resource but also to help develop structures to control seasonal natural flooding, which causes much damage to an economy. Now let us explore some purposes of a dam, from which we will be able to realize the importance of a dam of proper capacity.

The purposes may be manifold, from simply storing water for use in the lean period to constructing a reservoir for multiple purposes. Dams are built for specific purposes. In ancient times, they were built only for water supply or irrigation. One of the earliest large dams for this purpose was a marble structure built c. 1660 in Rajputana (Rajasthan), India. The main purpose of a dam is to use water in the most efficient way. A dam may be constructed to meet some specific functions. Specifically, the purposes may be:

(i) To provide water for irrigation, to aid flood control and hence improve the navigability of waterways, and especially to furnish power for hydroelectric plants. Notable dams built to provide hydroelectric power include the Aswan Dam, 3 mi (4.8 km) south of the city of Aswan.

(ii) Simply to impound water; a dam built to impound water is often called a barrage. The largest such barrage is the Syncrude Tailings Dam in Canada, which impounds 540 million cubic meters of water.

(iii) To store water during the wet seasons for using in dry seasons (or during period of low flows).

(iv) To store water in a dam during high flow (flood) season and release during the low flow season to supplement the natural flows to meet the irrigation requirements.

2 United Nations Economic and Social Council


(v) For generating hydro-electricity.

(vi) To manage multipurpose water demand of a particular basin.

(vii) To divert or transfer water from one stream to another in order to augment the latter stream and maintain smooth navigation.

(viii) To address the usual hydraulic problems of a particular basin (e.g., flooding, river instability, sea tides and salinity) having a distributary-type drainage pattern (e.g., the Ganga basin region, mostly in the Bangladesh deltaic region).

(ix) Sometimes, dams are constructed to address multifarious problems of the management of water resources, such as the optimal and multiple utilization of the resources of an international river.

(x) Storage reservoirs are built for harnessing water resources to the mutual benefit of all the co-basin regions.

(xi) To produce fish and to protect and improve the environment, and

(xii) For recreation and so on.

These are a few specific purposes of a dam. To meet the purposes mentioned above, constructing a dam is not by itself the solution to the problem; the dam should be of appropriate size so that the purposes for which it is constructed are fully served. For example, for a flood control dam, capacity determination is very important because of its purpose. Generally, a flood control dam should be as large as possible so as to help control floods, even the worst flood in the respective rivers or streams. But making a large dam involves a lot of money and manpower, and so the economic side should also be taken into consideration. The dam should not be too small to serve its purposes, nor should it be so extravagantly large that its capacity is rarely or never utilized. A dam which is constructed for irrigation purposes should be of a capacity that can store enough water for irrigation. Similarly, a dam constructed for storing water during the wet season (flood period) should be of such capacity that the stored water can be used during the dry period without failure (deficit). That is why the capacity determination of a dam is so important.

An important consideration in determining reservoir capacity is the minimum annual runoff. The available storage determines the magnitude of demand that can be met during a period of low runoff.

1.4 Objective of the Study

The main objective of the study is to determine the capacity of dam using probabilistic approach.

The specific objectives are

(i) To review the existing methods:

Methods using engineering considerations (mass curve method, Sequent Peak Algorithm, etc.)

Methods using statistical techniques (Dincer's method, Gould's Gamma method, etc.)

(ii) To generate inflow data using various models.

(iii) To modify or develop new techniques for determining the capacity of the dam using a probabilistic approach.

(iv) To compare various methods for determining the capacity of the dam.

(v) To apply the methods to simulated data.

1.5 Organization of the Study

The study is organized into seven chapters.

The first is the introductory chapter, in which the study is introduced and its objectives are stated. The theoretical concepts used in the study, such as a general description of dams and dam systems, are also discussed there.

In the next chapter, a brief description of earlier work on determining the capacity of a dam is given. The earlier works have been divided into three categories: probabilistic approaches and the Moran-related methods of determining the capacity, methods based on mean emptiness time, and methods in which the capacity is determined by linear programming. Finally, methods based on generated data are described.

The third chapter begins with a brief introduction to simulation, which we have used in our analysis. In this chapter we explore the characteristics of the historical inflows and their distribution. Some data-generation models are also given there.

Chapter four is the main part of the study. Here we have generated annual and monthly data by simulation. We have also developed a new approach to determine the capacity of a dam, in which the level of the dam content is considered in obtaining the required capacity. The capacity found by this approach showed no emptiness, and the estimated capacity also ensured that there will be no overflow.

In chapter five, some of the well known methods such as Mass curve technique, Sequent Peak Algorithm, Gould-Dincer’s Normal method, Gould-Dincer’s Log-normal method, Gould-Dincer’s Gamma method and Gould’s Gamma technique to determine the storage capacity of dam are described.

In chapter six, the main stream of the work begins. We applied the generated data to the existing techniques for obtaining the capacity of a dam. We also considered various drafts, viz. 70%, 75%, 80%, 85%, 90%, 95% and 98% of the mean inflow. Some tables and graphs are presented in this chapter, and the results are discussed. The capacities determined using the existing techniques are then compared with the capacity determined by the developed technique.

The final concluding chapter includes the major study findings. Conclusions have been drawn on the basis of the estimated capacity, and the limitations faced in this study are described in chapter seven. A brief description of the scope for further research in this field is also given at the end of that chapter.

Chapter Two

Review of Literature

2.1 Introduction

In this section we shall concentrate on a review of the earlier works related to the subject matter of this project. There have been a number of studies on determining the capacity of a dam. In the early days, the capacity of a dam was determined by using the historical data and an empirical method of problem solving. In those days, a quantity was assumed as the capacity, and it was further assumed that the dam was initially full at the beginning of a drought. By adding the monthly inflow to the dam and subtracting the monthly demand, the quantities left in the dam at the end of each month were calculated for a period of one year. Should the quantities show a deficiency (i.e., the computed dam content becoming negative), the capacity originally assumed for the dam was increased and the calculation repeated.

2.2 Earlier Methods in Determining Capacity

Systematic investigation of determining the capacity of a dam dates back to the work of Rippl (1883). Rippl's method assumes that, over an interval and at unit intervals of time, the historical flows and the corresponding releases are known, given by X_1, X_2, ..., X_t and R_1, R_2, ..., R_t respectively. Let

T_t = (X_1 - R_1) + (X_2 - R_2) + ... + (X_t - R_t).

Then T_t plotted against time gives a curve which is called the mass curve (Rippl, 1883). Rippl used the mass curve and took the maximum difference between a peak and the following trough of the mass curve as the capacity. This mass curve method for determining the capacity is based solely on the historical record, which is often very short in length and is likely to differ from the economic life of the proposed dam. Also, the same flows might not occur in a future time period. The mass diagram has many limitations, which will be discussed later. A sketch of the computation is given below.
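The following minimal sketch computes the Rippl capacity as the largest peak-to-subsequent-trough drop of the cumulative net-flow curve; the twelve monthly flows and the 80% constant draft are illustrative assumptions only.

```python
import numpy as np

def rippl_capacity(inflows, releases):
    """Capacity from the mass curve T_t = cumsum(X_i - R_i): the largest
    drop from a running peak to a later point (peak minus following trough)."""
    t = np.cumsum(np.asarray(inflows, float) - np.asarray(releases, float))
    running_peak = np.maximum.accumulate(t)
    return float(np.max(running_peak - t))

# Hypothetical monthly flows against a constant draft of 80% of the mean flow.
x = np.array([5.0, 1.0, 0.5, 0.2, 0.3, 2.0, 6.0, 4.0, 1.0, 0.5, 0.4, 3.0])
r = np.full(x.size, 0.8 * x.mean())
print(f"required capacity: {rippl_capacity(x, r):.2f}")
```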

To overcome the inadequacy of short-term records, Hazen (1914) used the annual flow records of 14 rivers. He combined the 14 records of various lengths to form a continuous record of 300 years. To calculate the risk of water shortage, the shortages occurring with each storage size were counted (Hazen, 1914). The disadvantage of this method is that Hazen did not consider the probable correlation among the flows, which results in correlation between successive sections of the combined data.

Sudler (1927) for the first time described a method of producing synthetic streamflow. He selected 50 representative annual streamflows and wrote each on a card. The cards were well shuffled and drawn one by one until all 50 records were used. By repeating this procedure, he compiled a 1,000-year record. The record was then subdivided into shorter records of duration equal to the economic life of the dam. These were then analyzed by the mass curve method (Sudler, 1927).

The serious defect of Sudler's method was that the individual records were collectively the same from one sequence to another. Hence, moments and extreme values were always the same, which is an unrealistic assumption for future sequences. This method also did not consider possible correlation between successive flows.

Following Sudler (1927), Barnes (1954) used stochastic simulation to design a dam on the upper Yarra river in Australia. He generated a 1,000-year flow sequence by using a table of standardized normal variables, preserving the mean and variance found in the historical data. A residual mass curve was then obtained by plotting the cumulative sums of the departures of the flows from the mean inflow against time. Straight lines of various uniform draft rates were drawn on the residual mass diagram, and the capacity for a particular draft rate was then taken as the maximum vertical distance between the draft line and the residual mass curve.

Hurst (1951, 1956) also determined the capacity by using the residual mass curve technique. He took the range of the cumulative sums of departures from the mean inflow as the capacity for a dam.

Langbein (1958) determined the capacity by using an analogy between a dam and a finite-capacity queue. He assumed that the inputs are normally and independently distributed and that the release in any period depends on the content of the dam. He then determined the capacity required to maintain a target draft equal to some fraction, say 0.5, 0.6, etc., of the mean inflow.

Bryant (1961) considered several models for optimal design of dams for specified target release.

2.3 Probabilistic Methods in Determining the Capacity

The probability theory of storage systems formulated by P.A.P. Moran in 1954 has now developed into an active branch of applied probability. An excellent account of the theory, describing results obtained up to 1958, is contained in Moran's (1959) monograph. Considerable progress has since been made in several directions: the study of the time-dependent behavior of the stochastic processes underlying Moran's original model, modifications of this model, as well as the formulation and solution of new models.

2.3.1 Moran’s Model

In Moran’s Monograph, he considered three main purposes for the construction of a dam of which the third one is ‘to provide a storage which will be filled during the wet

Page 9: capacity of dam

season and used during the dry season’. In order to obtain a tractable theory,he supposed that all the inputs occur during the ‘wet’ period (i.e., the period of high flows or flood season) and that all the output occur during the ‘dry’ season (i.e., period of low flow). This is, of course, not the usual situation to be considered, but he considered it for simplicity and considered the process as occurring at a discrete series of time intervals which he considered as years. Thus, the amount of water which flows into a dam (called input) will vary from time to time, and will have a probability distribution. Apart from a possible overflow, which may occur if the dam is of finite capacity,

this water is stored, and released according to a specific rule. The stored water is used for generation of hydro-electric power, and the released water (the output) is used for irrigation purposes. The central characteristic of the system is the storage function, giving the amount f water stored in the dam at various points of time.

In the basic storage model considered by Moran (1954), the content Z_t of a dam of finite capacity K is defined at discrete times t = 0, 1, 2, ... by the recurrence relation

Z_{t+1} = min(Z_t + X_t, K) - min(Z_t + X_t, M),

where,

a) X_t denotes the amount of water which has flowed into the dam during the time interval (t, t+1) (say, the t-th year), and it is assumed that X_0, X_1, X_2, ... are mutually independent and identically distributed random variables;

b) the term (Z_t + X_t) - min(Z_t + X_t, K) represents a possible overflow at time t+1, which occurs if and only if Z_t + X_t > K, the content of the dam being K after the overflow; and

c) the term min(Z_t + X_t, M) indicates a release policy of “meeting the demand if physically possible”, according to which, at time t+1, an amount M of water is released, unless the dam contains less than M, in which case the entire available amount is released.

Moran’s approach can be subdivided into three main groups:

(i) Those in which time and volumes are considered as continuous variables.

(ii) Those in which time is discrete but water volumes are continuous. In this approach, Moran derived the following integral equations describing a mutually exclusive situation (McMahon & Mein, 1978).

For 0 < z ≤ c - m (the probability density of the end-of-period content z):

h(z) = g(z + m), (2.3.2)

For z = 0 (the probability of emptiness at the end of the period):

h(0) = the integral of g(w) dw over 0 ≤ w ≤ m, (2.3.3)

where,

x = inflows,

c = reservoir capacity,

m = constant release during the unit period,

f(x) = the inflow function,

g(.) = the probability function of storage content plus inflow during the unit period, and

h(.) = the resulting probability function of the storage content.

Gani and Prabhu (1957), Prabhu (1958a) and Ghosal (1959, 1960) derived solutions for particular inflow distributions and release rules.

(iii) Those in which time and water volumes are both discrete variables. This approach was given by Moran (1954) in his paper, and Ghosal (1962) and Prabhu (1958b) followed him. It involves sub-dividing the reservoir volume into a number of parts, thus creating a system of equations which approximates the integral equations given earlier [equations 2.3.2, 2.3.3]. This approximation primarily affects the results at the storage boundaries (that is, full and empty) but is satisfactory if the sub-division of the storage volume is fine enough.

Two main assumptions can be made about the characteristics of inflow and outflow, which occur at discrete time intervals. Moran (1954) assumed that the inflow and outflow do not occur at the same time. He termed this type of model the “mutually exclusive model”. In this model the unit period is sub-divided into a wet season (i.e., all inflow and no outflow) followed by a dry season (i.e., all release but no inflow). The other assumption is only a simple further development of Moran's assumption, namely that the inflows and outflows occur simultaneously. This is called the “simultaneous model”. The mutually exclusive and simultaneous models are given below.

2.3.1.1 A Simple Mutually Exclusive Model

For the mutually exclusive model we have:

Z_{t+1} = max{ min(Z_t + X_t, c) - M, 0 },

where,

X_t = inflow during the t-th period,

c = capacity of the reservoir,

M = constant volume released at the end of the unit period,

Z_t = stored water at the beginning of the t-th period, and

Z_{t+1} = stored water at the end of the t-th period, or the beginning of the (t+1)-th period.

Given the information about capacity, draft and inflows, the first step is to set up a “transition matrix” of the storage contents. A transition matrix shows the probability of the storage finishing in any particular state at the end of a time period, for each possible initial state at the beginning of that period. Because of the mutually exclusive assumption about inflows, the reservoir can never finish a period in the full condition. A sketch of constructing such a matrix is given below.
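A minimal sketch of such a transition matrix, for a small reservoir with storage, inflow and release measured in whole units, follows; the five-point inflow distribution and the 4-unit capacity are illustrative assumptions.

```python
import numpy as np

def transition_matrix(capacity, release, inflow_pmf):
    """Transition matrix of Moran's mutually exclusive model.
    States are the possible start-of-period contents 0, 1, ..., capacity;
    inflow_pmf[x] = P(inflow = x units)."""
    n = capacity + 1
    P = np.zeros((n, n))
    for z in range(n):
        for x, p in enumerate(inflow_pmf):
            z_next = max(min(z + x, capacity) - release, 0)  # spill, then draft
            P[z, z_next] += p
    return P

# Hypothetical 4-unit reservoir, 2-unit release, inflows of 0..4 units.
pmf = [0.1, 0.2, 0.4, 0.2, 0.1]
P = transition_matrix(capacity=4, release=2, inflow_pmf=pmf)
print(P)  # columns above capacity - release are zero: a period never ends full

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print("stationary distribution:", np.round(pi, 3))
```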

2.3.1.2 A Simple Simultaneous Model

For the simultaneous model, we have:

Z_{t+1} = min{ max(Z_t + X_t - M, 0), c }.

That is, the inflow X_t and the release M occur simultaneously. Here the notations bear the same meaning as in the case of the mutually exclusive model.

In contrast to the mutually exclusive model, it is now possible for the reservoir to finish in a full condition at the end of a period.

The mutually exclusive model described earlier overestimates both the probability of failure and the probability of spill. This is because of the assumption that inflows always precede outflows; thus the reservoir can never be full at the end of a time period. The simultaneous model, on the other hand, is more realistic, as it is more representative of actual reservoir inflow and outflow conditions.

2.3.2 Other Probabilistic Methods

In 1955, Moran modified the discrete model to deal with seasonal flows. Transition matrices were prepared for each season and were multiplied together to yield an annual transition matrix. However, the seasonal flows were assumed to be independent. Lloyd and Odoom (1964) adopted a somewhat similar model.

Harris (1965) gave a worked-out example of Moran's seasonal method applied to a British catchment. He found the flows to be seasonal and independent, and he prepared wet and dry season transition matrices which were multiplied together to get the annual transition matrix.

Lloyd (1963) partly overcame the independence assumption in Moran's approach by assuming that the inflows are represented by a bivariate distribution rather than a simple histogram. In effect, this squared the number of equations to be solved.

Dearlove and Harris (1965) made the techniques more applicable by combining Lloyd’s approach with Moran’s seasonal method, but computationally the problem was large and therefore its use was limited. However, Doran’s recent work (Doran, 1975) on the divided interval technique for solving the transition matrix may overcome this limitation.

Venetis (1969) developed monthly bivariate transition matrices from generated flows using Roesner and Yevjevich's model (1966). Following Moran, and Dearlove and Harris, he multiplied the matrices together to get an annual transition matrix.

Gould (1961) modified the simultaneous Moran-type model to account for both seasonality and serial correlation of inflows. He did this by using the transition matrix with a yearly time period, but accounting for within-year flows by using behavior analysis. Thus monthly flow variations, monthly serial correlations and draft variations can be included.

McMahon (1976) took 156 Australian rivers and used Gould's modified procedure to estimate the theoretical storage capacities for four draft conditions and three probability-of-failure values. These capacities were related by least squares analysis to the appropriate coefficient of variation of annual flows by the following simple relationship:

C / Q = τ = a Cv^b,

where,

C = storage capacity in volume units,

Q = mean annual flow in volume units,

τ = reservoir capacity divided by mean annual flow,

Cv = coefficient of variation of annual flows, and

a, b = empirically derived constants (McMahon & Mein, 1978).

Langbein (1958) gave a probability routing method which is similar to Moran's (1954) probability matrix method, except that Langbein modified his technique to deal with correlated annual flows. Both the streamflow regime and the reservoir storage were divided into low, medium and high sub-regimes. By classifying each flow into the same streamflow regime as its predecessor, three separate streamflow histograms were obtained. In setting up his system of equations describing the cumulative probability of reservoir contents, Langbein used the inflow distribution appropriate to the state of the reservoir.

Hardison (1965) generalized Langbein's probability routing procedure using theoretical distributions of annual flows and assuming the serial correlation to be zero. This is equivalent to Moran's model except that Hardison used a simultaneous model rather than the mutually exclusive model adopted by Moran. The annual storage estimates were shown graphically for log-normal, normal and Weibull distributions of annual flows. The percentage chance of deficiency shown in his graphs was defined by Hardison as the percentage of years in which the indicated storage capacity would be insufficient to supply the target draft. In this technique, first the mean, standard deviation and skewness of both the annual flows and the common logarithms of the annual flows are needed. Then, selecting the appropriate distribution of the flows (chosen according to the value of the skewness coefficient), the capacity is determined graphically for a given chance of deficiency and variability.

Melentijevich (1966) obtained expressions for both the time-dependent and the steady-state distributions of reservoir content, assuming an infinite storage and independent normal inflows. In considering finite reservoirs, Melentijevich used a random model and a behavior analysis of 100,000 random normally distributed numbers. From the analysis, he obtained an expression for the density function of the stationary distribution of storage contents. The solution is complex and of limited use because of the assumptions of normality and independence and the neglect of seasonality.

Klemes (1967), in his method of determining the capacity, was able to reduce the problem of the probability of failure within a limited period to the classical occupancy problem. His technique was restricted to a uniform release or a randomly varying one. The major limitation is that the unit period is one year; consequently, it is not possible to distinguish a failure within a year.

Phatarfod (1976) suggested another method of determining the capacity which is based on random walk theory and is concerned with finding the probability of the contents of a finite reservoir being equal to or less than some value lC, where 0 < l < 1 and C is the reservoir capacity. The physical process of dam fluctuations can be linked to a random walk with impenetrable barriers at the full-supply and empty conditions. Phatarfod used Wald's identity, which is an approximate technique for solving the problem with absorbing barriers, together with a relation connecting the two kinds of random walks (McMahon & Mein, 1978). Phatarfod considered the annual flows to be gamma distributed, and the method is based on a fixed draft.

2.3.3 Khan’s Suggested Methods for Determining the Capacity

2.3.3.1 Capacity Based on Mean Emptiness Time

Suppose that at time t = 0 the dam contains an amount u of water, and let T be the first subsequent time at which it becomes empty. Then T is called the ‘wet period’ of the dam, and the time at which the dam becomes empty is called the ‘emptiness time’. If a dam situation is observed in a simulation study, i.e., if the same capacity of dam is observed for various inflow values (the inflows being randomly generated), we will have different emptiness times. The average of these is what we call the ‘mean emptiness time’. M.S.H. Khan (1992) in his studies determined the capacity of a dam by considering the mean emptiness time. Khan's method can be described as follows.

Assume that the input rate is less than the demand rate and that a certain quantity u, the initial dam content, is stored in the dam of capacity K before supply actually commences. In this case, emptiness will be certain and the mean emptiness period will be a function of the capacity. We may expect that the mean emptiness period increases with the increase in the capacity and then becomes stationary, not increasing any further with the size of the dam, so that any further increase in the size may not be economically or otherwise justified. Thus, we can find the optimal capacity of the dam from its mean emptiness period, such that if the capacity is increased further, the mean emptiness time will not increase significantly (i.e., a further increase in the capacity will not guarantee a longer period of functioning), provided that an initial content u is available for storage or supply. The value of u may be taken as the average rate of supply. Khan (1992) studied this for geometric and exponential inflows.

2.3.3.2 Capacity Based on Mean Emptiness Time for Geometric and Exponential Inputs


Suppose {X_t} is the input process. Consider that a dam of capacity K starts functioning with an initial storage u. To determine the optimal capacity K, suppose the input process is a discrete independent-increment process and the release is at unit rate at the end of each unit time interval.

For geometric input, with probability mass function of the form

P(X_t = j) = (1 - p) p^j,  j = 0, 1, 2, ...,

the mean emptiness period can be shown in closed form as a function of u, K and p [for details, see (Khan, 1992)]. The mean will be finite provided that the mean input rate is less than the unit release rate.

For exponential input, with probability density function

f(x) = λ e^(-λx),  x > 0,

and constant unit release, the mean emptiness time can also be shown in closed form. It may be noted that the mean is finite when the mean input rate 1/λ is less than the unit release rate, and the expression involves θ0, the unique non-zero solution of ψ(θ) = 1, where ψ is the moment generating function of the net input in unit time (Khan, 1992).

Table 2.1 shows the mean emptiness period for geometric input with unit release, and Table 2.2 shows the mean emptiness period for the exponential input distribution, computed using the formulae referred to above. It can be seen in Table 2.1 that, for geometric input, the mean emptiness period increases with the capacity up to a point and then levels off; the capacity at which it levels off is the optimal capacity, since if the capacity is increased further, the mean emptiness time will not increase. Similarly, for exponential input with unit release, it is seen in Table 2.2 that beyond a certain capacity the mean emptiness period does not increase, and that capacity is the optimal one.

Table 2.1: Mean Emptiness Period for Geometric Input and Unit Release
(each column corresponds to a different mean input rate, increasing from left to right)

Capacity   Mean emptiness period
1          1.000   1.000   1.000   1.000   1.000
2          1.111   1.250   1.428   1.666   1.818
3          1.123   1.321   1.612   2.111   2.487
4          1.124   1.328   1.698   2.407   3.035
5          1.125   1.332   1.724   2.604   3.483
6          1.125   1.333   1.739   2.736   3.850
7          1.125   1.333   1.745   2.824   4.150
8          1.125   1.333   1.748   2.883   4.397
9          1.125   1.333   1.749   2.922   4.596
10         1.125   1.333   1.749   2.948   4.760
11         1.125   1.333   1.749   2.965   4.895
12         1.125   1.333   1.749   2.976   5.005
15         1.125   1.333   1.750   2.993   5.228
20         1.125   1.333   1.750   2.999   5.400
50         1.125   1.333   1.750   3.000   5.500

(continued, for larger mean input rates)

Capacity   Mean emptiness period
5          5.484   6.222   7.456    9.790    11.824
6          5.609   6.555   8.195    11.526   14.674
7          5.623   6.638   8.512    12.684   17.006
8          5.624   6.659   8.648    13.456   18.917
9          5.625   6.664   8.706    13.970   20.475
10         5.625   6.666   8.731    14.313   21.752
12         5.625   6.666   8.746    14.695   23.652
20         5.625   6.666   8.750    14.988   26.727
50         5.625   6.666   8.750    14.999   27.499

Khan (1992) obtained the mean emptiness period for arbitrary input as well. He used probability relations and the probability generating function to obtain the mean of the first emptiness period (allowing overflow before emptiness) for an arbitrary discrete independent input distribution. The mean of the first emptiness period can be defined as the average time at which the dam becomes empty for the first time during its life.

Table 2.2: Mean Emptiness Period for Exponential Input and Unit Release
(each column corresponds to a different mean input rate, increasing from left to right)

Capacity   Mean emptiness period
1          1.421   1.372   1.506   1.952   2.832   8.441
2          1.249   1.426   1.649   2.322   4.266   8.721
3          1.249   1.428   1.664   2.442   4.538   8.985
4          1.250   1.428   1.666   2.481   4.710   9.181
5          1.250   1.428   1.666   2.493   4.817   9.339
6          1.250   1.428   1.666   2.498   4.885   9.467
7          1.250   1.428   1.666   2.499   4.927   9.567
8          1.250   1.428   1.666   2.499   4.971   9.719
9          1.250   1.428   1.666   2.499   4.982   9.714
10         1.250   1.428   1.666   2.499   4.991   9.852
12         1.250   1.428   1.666   2.499   4.998   9.222
15         1.250   1.428   1.666   2.500   4.999   9.973
20         1.250   1.428   1.666   2.500   5.000   9.999
50         1.250   1.428   1.666   2.500   5.000   10.000
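The leveling-off behavior seen in these tables is easy to reproduce by simulation. The sketch below estimates the mean emptiness time by Monte Carlo for geometric input and unit release, assuming the inflow precedes the release within each period and using an initial content of one unit; the parameter values are illustrative, not Khan's.

```python
import numpy as np

rng = np.random.default_rng(7)

def mean_emptiness_time(capacity, p, u=1, reps=5000, horizon=100000):
    """Monte Carlo mean first-emptiness time for a dam of integer capacity,
    geometric input P(X = j) = (1 - p) * p**j and unit release, starting at u."""
    times = np.empty(reps)
    for i in range(reps):
        z = u
        for t in range(1, horizon + 1):
            x = rng.geometric(1 - p) - 1   # geometric inflow on {0, 1, 2, ...}
            z = min(z + x, capacity) - 1   # inflow, spill, then unit release
            if z <= 0:
                break
        times[i] = t
    return times.mean()

# Mean input rate p / (1 - p) = 1/3 < 1, so emptiness is certain.
for c in (2, 5, 10, 20):
    print(f"capacity {c}: mean emptiness time {mean_emptiness_time(c, p=0.25):.3f}")
```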

2.3.3.3 Capacity Based on the Stationary Level of the Dam Content

M.S.H. Khan (1979) determined the capacity using the stationary level of the dam content. According to Khan, if the successive inputs are mutually independent and identically distributed, then it is known from Phatarfod (1976) that

F(u) = G(u), for continuous input and constant unit release, (2.3.6)

with a corresponding relation for discrete input and unit release, where F(u) is the probability that the dam content is less than or equal to u, and G(u) is the probability that, starting with a quantity u, the dam of capacity K gets empty before overflow. The capacity of the dam can then be determined such that

F(lK) = P, (2.3.7)

where l is a specified fraction of the capacity and P is also given. The value of l should depend on the input rate.

As an illustration, Khan (1979) gives the stationary distribution of the dam content in closed form (2.3.8) for exponential input with probability density function

f(x) = λ e^(-λx),  x > 0,

and with unit release per unit of time. For a general arbitrary input and a constant release of M units, G(u) can be given approximately in exponential form, and hence from (2.3.6) approximate expressions for F(u) follow, for continuous inputs and for discrete inputs (2.3.9), where θ0 is the unique non-zero solution of ψ(θ) = 1, ψ(θ) being the moment generating function (mgf) of the net input in unit time.

For Markovian inputs these results also remain valid if the Markov sequence is reversible. In this case θ0 is the non-zero solution of a correspondingly modified equation; for the exponential results the modifying factor may be taken equal to one. The approximate distribution function is then obtained once θ0 is found.

Table 2.3: Capacity for Exponential Input and Unit Release

                 l = 1/4            l = 1/3            l = 3/4
Mean input   P = 0.1  P = 0.05   P = 0.1  P = 0.05   P = 0.1  P = 0.05
1.2           7.12     10.07      8.65     11.93      25.99    33.83
2.0           2.30      3.06      2.64      3.49       7.32     9.53

To determine the capacity K when the input is exponential, we have from equations (2.3.8) and (2.3.9) an explicit expression for the capacity (2.3.10), in which z0 is the real root (other than 1) of the associated characteristic equation. Table 2.3 gives the capacity of the dam for various values of l, P and the mean input rate.

2.3.3.4 Capacity by Specifying the Probability of Overflow

In a multipurpose dam3, a fraction of the incoming water is allocated for power generation and the remainder is used for irrigation and other purposes. Suppose the input into the dam is continuous and water is released continuously for power generation at a fixed rate equal to a given fraction of the mean input rate. The remaining dam contents are then left over for use for other purposes. Suppose that there is always a demand for water from the dam. Then the maximum of the dam contents during a given period may be taken as the capacity of the dam. In this case there will be no overflow, and therefore 100% of the available water could be utilized. If we take a particular value, say K, of the dam content as the capacity of the dam and, during a given interval of time, it is observed that in 95% of cases the level of the dam content is below K and only in 5% of cases the level exceeds K, then K is the capacity with 0.05 as the probability of overflow. The capacity can thus be determined by specifying the probability of overflow.

3 A multipurpose dam is a dam which is used for many purposes, including hydropower generation and fish cultivation.

As an illustration of determining the capacity by this method, Khan (1979) first generated an input series {X_t} of gamma-Markov type by using the lag-one Markov model

X_{t+1} = μ + ρ1 (X_t - μ) + ε_t σ sqrt(1 - ρ1²),

where the ε_t are random errors with mean zero and variance one, μ and σ are the mean and standard deviation of the flows, and ρ1 is the first-order autocorrelation coefficient. The distributional form of ε_t determines the behavior of X_t, and it can be shown that if ε_t is normal, X_t is also normal. But it has been observed that the theoretical flows are quite close to the observed data if the errors are assumed to be standard gamma variates rather than standard normal variates. The gamma deviates ε_t can be generated by using the transformation of Wilson and Hilferty (1931) [Ref. Fiering (1967)], with specified mean, standard deviation, skewness and first-order autocorrelation coefficient.

Using a power-generation release equal to half the mean input rate, i.e., a 50% utilization for power generation, and an input series of length 25, we compute in Table 2.4 the capacity by specifying the probability of overflow. It will be seen that if we take 4.54 as the capacity, then there will be no overflow; and if the capacity is taken to be 4.118, the probability of overflow is 0.04. From Table 2.4 we find the capacity to be 4.54 with 100% utilization of available water, and 4.118 with probability of overflow equal to 0.04. To use this method for determining the capacity of a dam, a long sequence of inflow data is to be used, and any release rule may be followed. A sketch of this procedure follows Table 2.4.

Page 21: capacity of dam

Table 2.4: Capacity by Specifying the Probability of Overflow (gamma-type inputs)

Content after release    Content in order of magnitude
3.3837                   0.5823
1.8890                   0.7753
1.6168                   1.0068
4.1180                   1.2890
1.0068                   1.6168
2.1927                   1.6250
1.6250                   1.6492
2.5982                   1.8890
2.5748                   1.9683
2.4011                   2.0658
3.3682                   2.1316
2.0658                   2.1927
0.7753                   2.4011
4.5400                   2.5363
2.5363                   2.5748
2.5805                   2.5805
1.2890                   2.5982
2.1316                   2.7344
1.9683                   2.7974
1.6492                   3.3682
2.7344                   3.3837
0.5823                   3.4892
3.8762                   3.8762
2.7974                   4.1180
3.4892                   4.5400

Capacity = 4.54 with 0.0 as the probability of overflow
Capacity = 4.118 with 0.04 as the probability of overflow
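A minimal sketch of this procedure follows: it generates a gamma-Markov input series with Wilson-Hilferty errors and reads the capacity off the ordered contents, taking the largest content for zero overflow probability and the second largest for a 1-in-25 (0.04) probability of overflow. The parameter values, and the reading of 'content after release' as the cumulative net inflow measured above its lowest level, are illustrative assumptions rather than Khan's exact computation.

```python
import numpy as np

rng = np.random.default_rng(11)

def wilson_hilferty(eps_normal, skew):
    """Approximate standardized gamma deviates (mean 0, variance 1, given
    skewness) from standard normal deviates (Wilson-Hilferty transformation)."""
    g = skew
    return (2.0 / g) * (1.0 + g * eps_normal / 6.0 - g**2 / 36.0) ** 3 - 2.0 / g

def gamma_markov_series(n, mu, sigma, rho1, skew):
    """Lag-one Markov series: x[t+1] = mu + rho1*(x[t] - mu) + eps*sigma*sqrt(1 - rho1^2)."""
    eps = wilson_hilferty(rng.standard_normal(n), skew)
    x = np.empty(n)
    x[0] = mu
    for t in range(1, n):
        x[t] = mu + rho1 * (x[t - 1] - mu) + eps[t] * sigma * np.sqrt(1 - rho1**2)
    return np.clip(x, 0.0, None)   # flows cannot be negative

x = gamma_markov_series(25, mu=2.0, sigma=0.8, rho1=0.3, skew=1.0)
content = np.cumsum(x - 0.5 * x.mean())   # 50% of mean inflow released for power
content -= content.min()                  # measured above the lowest level reached
ordered = np.sort(content)
print("capacity with no overflow        :", round(ordered[-1], 3))
print("capacity with P(overflow) = 0.04 :", round(ordered[-2], 3))
```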

2.4 Approaches Based on Linear Programming

The method of linear programming has been applied to water-resources design by Masse and Gibrat (1957, 1962), Lee (1958), Castle (1961), Heady (1961) and Dorfman (1961). The principle of such applications can be illustrated [as given in Chow (1961)] by a simple example as follows.

A single multi-purpose reservoir is subjected to analysis by linear programming for its beneficial use of water. The hydrological data used for inflow to the reservoir are an inflow hydrograph corrected for estimated evaporation and leakage. The initial reservoir content is given, or is so chosen that the reservoir is full, empty, or at an optimal condition of operation. The duration of the analysis was assumed to be one year, divided into a number of equal intervals, say 12 months. Let Q_1, Q_2, ..., Q_12 be the respective volumes of monthly inflow, S_0 be the initial storage, and R_1, R_2, ..., R_12 be the volumes of water planned to be released in the respective months. The twelve volumes of water used monthly add up to the total volume of outflow from the reservoir, including any unavoidable spills, in an average year. Since the total outflow up to the j-th month cannot exceed the sum of the inflow volumes up to the j-th month plus the initial storage, the following inequality can be written:

R_1 + R_2 + ... + R_j ≤ S_0 + Q_1 + Q_2 + ... + Q_j,

where j = 1, 2, ..., 12. Also, the total volume of water in storage at any time cannot exceed the maximum useful storage capacity K of the reservoir. Thus

S_0 + (Q_1 - R_1) + (Q_2 - R_2) + ... + (Q_j - R_j) ≤ K,

for j = 1, 2, ..., 12. When the storage is required to return to its initial condition at the end of the year, the inequality for j = 12 becomes an equation. A sketch of such a formulation is given below.
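The sketch below sets up exactly these constraints and maximizes the total annual release; the inflow volumes, initial storage, capacity and equal monthly benefits are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical monthly inflow volumes Q, initial storage S0, capacity K.
Q = np.array([8, 9, 12, 15, 10, 6, 4, 3, 3, 4, 5, 7], dtype=float)
S0, K = 10.0, 25.0
benefit = np.ones(12)            # assume equal benefit per unit released

L = np.tril(np.ones((12, 12)))   # L @ R gives the cumulative releases
cumQ = np.cumsum(Q)

# (1) cumulative release <= initial storage + cumulative inflow
# (2) storage  S0 + cum(Q) - cum(R) <= K,  i.e.  -cum(R) <= K - S0 - cum(Q)
A_ub = np.vstack([L, -L])
b_ub = np.concatenate([S0 + cumQ, K - S0 - cumQ])

res = linprog(-benefit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 12)
print("optimal monthly releases:", np.round(res.x, 2))
print("total annual release    :", round(-res.fun, 2))
```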

The theory of inventory and the theory of storage are often used to determine optimal policies in water resources. In the United States, Karlin and Koopmans have applied the inventory theory to determine water-storage policies in hydroelectric systems.

“Little used the functional equation approach of the dynamic programming to inventory problems and thus to formulate stochastic dynamic programming model for determining the optimal water-storage policy for an electric generating system.” (Chow, 1964).

Hall and Howell (1970) optimized the size of a single-purpose reservoir by applying dynamic programming to sequentially generated data.

2.5 Approaches Based on Simulated Flows

In many situations the historical flow sequences are not long enough to rely on to determine the long-term capacity of the dam, and the need arises to generate synthetic flows by simulation which are statistically indistinguishable from the historical flows. The term "simulation" is used to mean empirical sampling when the process sampled from the population is a close model of the real system. The benefit of a simulated series is that it is possible to generate long series of inflows keeping the historical characteristics fixed. Thus the long sequence will contain more extreme events than the observed values. The generated flows should have the same population mean, variance, skewness and correlation coefficient as their historical values. The length of the input sequence for determining the dam capacity should depend on the desired economic life of the dam. The capacity of the dam can then be determined by some established technique.

To generate inflows by simulation, we need a model which generates inflows maintaining the same properties as the historical flows. Several methods for generating synthetic flows have been proposed. In prescribing a model for data generation, it is important to study the behavior of the historical data. Flow records at a gauging station in a given time interval may be considered as a hydrologic time series. Many hydrologic time series have no important smooth trend, which can be checked using statistical analysis. In the case of monthly inflow data, it is reasonable to assume that the series has some seasonal effects, or seasonality. Seasonality means that the characteristics of inflow change with the seasons (or months). For example, with seasonal variation in inflows, the mean flow and standard deviation of January will differ from those of March. Similarly, the inflow during the rainy season will obviously be different from the inflow during winter. This is how seasonality affects the inflow mean and standard deviation.

Inflows follow particular distributions. For monthly inflows, normal and gamma-type distributions are common. Annual flows are usually found to follow log-normal, Weibull or gamma-type distributions. A wide variety of distributions for generating input series have been used, and it is found that Gaussian, log-normal and gamma-type distributions fit well in most cases. Dependence in the input series has been considered by Fiering (1964), Roesner and Yevjevich (1966) and others. In hydrology, the log-normal and gamma distributions have been the most popular for simulating input sequences.

The use of simulation analysis in water-resources systems began in 1953 with the U.S. Army Corps of Engineers' study of the Missouri River (US Army, 1957). In this analysis, the operation of six reservoirs on the Missouri River was simulated on the Univac-I computer to maximize power generation, subject to constraints for navigation, flood control and irrigation specifications.

Other simulation analyses were made by Brittan (1960) and Fiering (1962). Brittan dealt with the integration of the energy-producing Glen Canyon Dam into the already existing power system on the Colorado River, in order to approximate the maximum return. Earlier, in 1961, Brittan applied probability analysis to the development of a synthetic hydrology for the Colorado River (Brittan, 1961). Fiering proposed a method for the optimal design of a single multi-purpose reservoir by computer simulation studies of a simply coded model (Chow, 1964).

Following Fiering (1963, 1965, 1967) and Svanidze (1964), many investigators have employed stochastic streamflow models to examine the probability distribution of over-year reservoir storage capacity.

A variety of monthly stochastic streamflow models have been developed to investigate the combined within-year storage-reliability-yield (S-R-Y) relationship, by Lawrance and Kottegoda (1977); Hirsch (1979); Klemes et al. (1981); Stedinger and Taylor (1982a, b); and Stedinger et al. (1985).


Hazen (1914) developed the first relationship for over-year design storage capacity and provided tables of over-year reservoir storage capacity based upon the coefficient of variation of the inflows and the level of development. The limitation is that tables developed for one region are not necessarily applicable to another.

Hurst (1951) developed algebraic expressions which relate the required over-year storage S to the mean and variance of the inflows, as well as to the level of development. In the case of full regulation the relationship takes the classical form

S / σ = (N / 2)^k, (2.5.1)

together with a companion expression for drafts below the mean annual flow, (2.5.2)

where a and b are constants, N is the record length in years, σ is the standard deviation of the annual inflows, and k is the Hurst coefficient.

Hurst applied the single-cycling sequent peak algorithm to sequences of streamflow, precipitation, temperature, tree-ring and varve records to obtain single estimates of S, which in turn were used to obtain estimates of the constants a, b and k using graphical curve-fitting procedures. The expressions in equations (2.5.1) and (2.5.2) are only reasonable approximations over the range considered by Hurst.

Actually, the level of development can be expressed through the standardized net inflow m = (1 - α)μ/σ, where α is the demand as a fraction of the mean annual flow; m could be any non-negative number, since μ and σ are non-negative and α is usually in the interval (0, 1).

Gomide (1975) and Troutman (1978) both derived the probability density function of S and its first two moments, which result from applying the single-cycling sequent peak algorithm to realistic models of annual streamflows; for example, annual streamflows Q_t that are normally distributed and follow the first-order autoregressive model

Q_{t+1} = μ + ρ (Q_t - μ) + σ sqrt(1 - ρ²) ε_{t+1},

where the ε_t are independent normal random disturbances with mean 0 and variance 1.

Gomide (1975) derived the pdf of S and its first moment. The resulting expressions were so complex that Gomide presented his results graphically, for ρ equal to 0.0, 0.2 and 0.5, m equal to 0 (full regulation), and N ranging from 0 to 100 years.

Troutman (1978) derived the form of the asymptotic distribution of S when m = 0 (full regulation) and the inflows are described by an AR(1) log-normal model:

ln Q_{t+1} = μ_y + ρ_y (ln Q_t - μ_y) + σ_y sqrt(1 - ρ_y²) ε_{t+1},

where the ε_t are independent normal disturbances with mean 0 and variance 1, and μ_y, σ_y² and ρ_y are the mean, variance and serial correlation of the log-transformed streamflows. No analytical expressions have been developed for the pdf or moments of the steady-state required storage obtained using the double-cycling sequent peak algorithm. A simulation sketch of the single-cycling algorithm is given below.
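The following minimal sketch applies the single-cycling sequent peak algorithm to flows generated by an AR(1) log-normal model and summarizes the resulting empirical distribution of the required storage S; the parameter values, record length and number of replicates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def sequent_peak(inflows, draft):
    """Single-cycling sequent peak algorithm: the required storage S is the
    largest sequential deficit K_t = max(0, K_{t-1} + draft - inflow_t)."""
    k, s = 0.0, 0.0
    for q in inflows:
        k = max(0.0, k + draft - q)
        s = max(s, k)
    return s

def ar1_lognormal(n, mu_y, sigma_y, rho_y):
    """AR(1) model on the log-transformed flows, as in Troutman (1978)."""
    y = np.empty(n)
    y[0] = mu_y
    eps = rng.standard_normal(n)
    for t in range(1, n):
        y[t] = mu_y + rho_y * (y[t - 1] - mu_y) + sigma_y * np.sqrt(1 - rho_y**2) * eps[t]
    return np.exp(y)

# Empirical distribution of S for (approximately) full regulation, m = 0.
storages = [sequent_peak(q, draft=q.mean())
            for q in (ar1_lognormal(50, 0.0, 0.4, 0.2) for _ in range(2000))]
print(f"mean S: {np.mean(storages):.2f}, 95th percentile: {np.percentile(storages, 95):.2f}")
```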

Vogel and Stedinger (1987) presented a general over-year storage-reliability-yield (S-R-Y) relationship in analytical form. They provided approximate but general expressions for evaluating the quantiles of the distribution of over-year storage capacity as a function of the inflow parameters, the demand level, and the planning period, for log-normal inflows.

Oguz and Bayazit (1991) studied the properties of the critical period (CP). The probability distribution function of the length of the critical period, defined as the time interval during which an initially full reservoir is completely emptied, can be determined using storage theory when the inflows are treated as discrete variables. For continuous flows, Oguz and Bayazit solved the problem by simulation. Simulation experiments with normally and log-normally distributed inflows have shown that the mean and standard deviation of the CP length increase rapidly (more than linearly) with the reservoir size and with the skewness of the inflows. The mean and standard deviation of the CP increase with the level of regulation (release) for normally distributed flows, but decrease for log-normal flows. They decrease only slightly as the serial dependence of the inflows increases.

Bayazit and Bulu (1991) determined the probability distribution of reservoir capacity and its parameters by simulation, using streamflow series generated by four different models:

1. Normal first-order autoregressive
2. Log-normal first-order autoregressive
3. Normal first-order autoregressive moving average
4. Log-normal first-order autoregressive moving average

They showed that the reservoir capacity, standardized by subtracting the mean inflow and dividing by the standard deviation of inflow, follows the three-parameter log-normal distribution with a constant lower bound of -2.0.

Phien (1993) estimated the storage capacity of dams where annual flows are assumed to be gamma distributed, following the existing studies on normal and log-normal inflows. In particular, Phien (1993) described the gamma first-order autoregressive GAR(1) model, along with suitable methods to produce the required exponential, Poisson and gamma variates. He obtained approximate formulae for the mean and standard deviation of capacity over a wide range of reservoir life n, lag-one autocorrelation coefficient ρ, and regulation level m, rather than at discrete values of ρ and m. Thus the formulae given in Phien (1993) can be applied directly.

Karim and Chowdhury (1995) showed in their paper that the generalized extreme value (GEV) distribution best represented the statistical characteristics of observed data. AM (annual maximum) discharge data from 31 gauging stations of Bangladesh were used in this study.

Vogel and Bolognese (1995) introduced an approximate yet general approach for describing the overall behavior of water supply systems dominated by carry-over storage. Generalized relationships among reservoir system storage, yield, reliability, and resilience were introduced for water supply systems fed by autoregressive normal and log-normal annual inflows. They derived relationships for reservoir system resilience, which represents the likelihood that a system will recover from a failure once a failure has occurred. They reproduced the relationships between resilience and reliability for a wide class of water supply systems using a two-state Markov model. A two-state Markov model, combined with some existing analytical relationships among storage, reliability, and yield, provides a very general theoretical foundation for understanding the trade-offs among reservoir system storage, yield, reliability, and resilience.

Khan and Raheem (2001) determined the capacity of a dam by considering the probability of emptiness of the dam and the average first overflow time together. The technique gives a storage estimate which ensures a given level of reliability (say, a 5% probability of dam failure) and also ensures that, on average, the dam will overflow for the first time at a particular period within a given time horizon.

They estimated the probability of emptiness of the dam and estimated the capacity of the dam ensuring that the first overflow will occur, on average, at a particular year. The technique was also applied to monthly inflow data, and the capacity was determined where inflow and release occur monthly. Thus the annual probability of failure and the time at which the dam is expected to overflow for the first time were obtained.

Furthermore, using behavior analysis, various storage capacities were assumed and the inflows were routed through each assumed capacity. Thus the 'behavior' of the dam at various time points with various inflows was observed and recorded. After routing the inflows through the various assumed storages, the capacity was determined by considering the probability of emptiness (PE) and the average first overflow (AFO) time together,

where,

PE = the probability that the dam becomes empty during the given time period, and
AFO = the average time at which the dam overflows for the first time during a given period.

Using the suggested technique, a simple Storage-Reliability (S-R) relationship was obtained by the least-squares method for the annual inflow data only. Considering a 5% probability of emptiness of the dam and a 10% probability of overflow, a simple relationship was fitted, with the constant a estimated from the data.

Thomas A. McMahon, Pegram, Vogel and Peel (2007) used a global data set to assess the adequacy of five techniques for estimating the relationship between reservoir capacity, target draft and reliability of supply. The techniques examined are extended deficit analysis (EDA), behavior analysis, the sequent peak algorithm (SPA), the Vogel and Stedinger empirical (log-normal) method and the Phein empirical (Gamma) method.

Chapter Three

Inflow Distribution and Data Generation Methodology

3.1 Introduction

In many situations the historical flow sequences are not long enough to rely on in determining the long-term capacity of the dam, and the need arises to generate synthetic flows which are statistically indistinguishable from the historical flows. The benefit of the simulated series is that a long sequence will contain more extreme events than the observed sequence, while the generated flows have population mean, variance, skewness, correlations, etc., in agreement with their historical values. To generate such flow sequences, several models are available, and these models are used according to the distribution of the inflows.


To generate a sequence of values of synthetic flows for a given stream, we consider the flows to be the results of a random process, a process whose results change with time in a way that involves probability (Moran, 1950). While generating a flow sequence, we do not assume that the exact flow can be predicted. Rather, in reservoir capacity-reliability analysis, data generation procedures are used to provide alternative yet equally likely flow sequences to the historical one. It is obvious that, in a large number of generated sequences, some will contain flows smaller than those in the historical record and some greater. Of course, the historical flow sequences do not seem entirely whimsical or haphazard. We can expect that the general level of variability of the flow pattern will be maintained in future flow sequences. Similarly, if the past flow pattern is very erratic, we expect the same variability in the future. Other characteristics of the flow provide clues to future flows. If the flow of a year is low, it is likely, though not certain, that the flow of the next year will be low. Similarly, a high flow tends to follow another high flow. Thus the historical characteristics of a streamflow provide valuable information about probable future flows. Models for flow generation should certainly use this information. But, at the same time, we should include a random component in the model to reflect our inability to predict future flow sequences exactly.

Therefore, it is quite clear that, to overcome the limitation of the short length of the inflow record, we have to have long sequences of inflows with the same statistical properties as those found in the historical flows. This can be done using the technique of simulation. The simulation technique is applied in many situations, and its use in hydrological studies, especially in generating inflow data, is widespread among researchers. Below we give a brief description of simulation and its uses in hydrology.

3.2 Simulation

The word simulation is in common use today. Literally, the word 'simulate' is a transitive verb meaning 'pretend to have', 'resemble', 'wear the guise of', 'mimic', etc. Many definitions of simulation are available. Some of them are given below:

According to Hufschmidt and Fiering, simulation analysis is defined as "… a process which duplicates the essence of a system or activity without actually attaining reality itself" (Hufschmidt & Fiering, 1967).

According to Naylor et al., "Simulation is a numerical technique for conducting experiments on a digital computer, which involves certain types of mathematical and logical models that describe the behavior of a business or economic system (or some component thereof) over extended periods of real time" (Naylor, 1966).

Simulation is, therefore, a technique of building a model of a real or proposed system so that the behavior of the system under specific conditions may be studied. One of the key powers of simulation is the ability to model the behavior of a system as time progresses.

Two other elements that are vital to any simulation system are the random number generators and the results collation and analysis. The random number generators are used to provide stochastic behavior typical of the real world. For example, machine scrap rates will rarely be fixed but will vary within certain ranges, and hence the scrap rate of a machine should be determined by a random distribution (probably a normal distribution). The results collation and display provide the user a means of utilizing the simulation tool to produce meaningful analysis of the new or proposed system.

3.2.1 Examples and Areas of Simulation

Here we give a few areas where the simulation technique is used.

(i) Epidemics: Disease can spread through animal as well as plant populations, and in recent years there has been much interest in mathematical models of the spread of infections. Although these models are often extremely simple, their mathematical solution is not so simple, and simulation of the models has frequently been employed.

(ii) Animal Population: Simulation models have been used to mimic the behavior of animal populations. Pennycuick (1969) used a computer model to investigate the future development of a population of Great Tits. Gibbs (1980) used simulation to compare two different strategies for young male African mountain gorillas.

(iii) Other fields: Other examples of processes in which it may be either impossible or prohibitively expensive to obtain data include: (a) next year's sales figures for a firm, (b) Gross National Product (GNP) for the economy for the next five years, (c) data on the frequency of machine breakdowns in a factory that has kept only limited records of information of this type, (d) the performance of large-scale rocket engines, (e) the effects of a proposed tax cut on the economy, (f) the effects of an advertising campaign on total sales, (g) the effects of a particular managerial decision policy on a firm's profits, and (h) the effect of increasing the number of toll gates at the Bangabandhu Jamuna Bridge or the effect of increasing the number of ticket counters at the Bangabandhu National Stadium.

In all the above situations, simulation can be used as an effective means of generating numerical data describing processes that would otherwise yield such information only at a very high cost, if at all.

3.2.2 Uses of Simulation in Hydrology

Synthetic streamflows have become an important tool for water resource planners because, in conjunction with digital computers, simulation techniques allow proposed system designs to be evaluated more thoroughly and in a more statistically sophisticated manner than was possible with other methods.

It has been observed that the inflow pattern of different streams varies considerably and that the history of flows on a particular stream provides a very valuable clue to the future behavior of that stream. Hence, Rippl (1883) devised the familiar mass curve analysis to investigate the storage capacity required to provide a desired pattern of release (draft) despite inflow fluctuations. Similarly, the historical records of a stream can be used in examining other aspects of a water system. For example, the frequency and the persistence of periods of high flows in a historical record provide an indication of the flooding pattern.

The technique of simulation in hydrology involves using a stochastic data generation model to produce "streamflow" sequences with the same statistical properties as the historical flows. It is then possible to use the existing methods to determine the capacity corresponding to each sequence. "Synthetic flows do not improve poor records; they merely improve the quality of designs made with whatever records are available." (Fiering and Jackson, 1970)

3.3 Distribution of Inflows

Statistical analysis reveals that streamflows, river flows, etc., have some distributional form. Finding the distribution of the inflows is very important because the data generation model requires a random component, and the random component must be drawn from the particular distribution that the inflows follow.

A wide variety of distributions for generating inflow series have been used, and it is found that Gaussian, log-normal and gamma-type distributions fit well in most cases. Dependence in the inflow series has been considered by Fiering (1964), Roesner and Yevjevich (1966), and others. In hydrology, log-normal and gamma-type distributions have been the most popular for simulating input series. Here we describe the important distributions of inflows.

3.3.1 Normal Distribution

The most common and the first major distribution of inflows is the normal distribution. In testing a historical flow sequence for normality, one can use normal arithmetic probability paper. Another technique is the use of the sample coefficient of skewness.

3.3.2 Log-Normal Distribution

The second major distribution widely used in streamflow modeling is the log-normal distribution. If $X$ is a random variable with the property that $\ln X$, the logarithm of $X$ to any base, is normally distributed, then $X$ has a log-normal distribution.

In testing the log-normality of a set of values, one can take the logarithms of the flows and then apply the normality tests. These days, with the help of built-in functions in various statistical packages, we can easily test normality and log-normality.

3.3.3 Gamma Distribution

The third major distribution in flow generation models is the gamma distribution. The gamma distribution is used when the historical flows show distinct skewness, of the flows and of their logarithms, and when this skewness should be reflected in the generated flow sequences.

3.4 Selecting a Distribution


In trying to select an appropriate distribution, we can start by plotting sample values and considering sample moments. But unfortunately, this procedure might not provide clear-cut differences between the various distributions. As mentioned above, the log-normal distribution provides especially good resolution in the range of low flows, because small changes in low numbers produce relatively large changes in the logarithms of those numbers. The gamma distribution is the only one of the three distributions mentioned that shows skewness of both the flows and their logarithms; therefore, it is appropriate for extensions of historical flows in which skewness is marked and important. The choice of a gamma-type distribution will be appropriate if past history can be considered to influence this year's flow only through its effects in determining the flows of the last one or two years. But if the effect of the past is significantly more complicated, and if it seems that multiple lags are important, then the gamma-type distribution should not be considered.

If no clear decision is possible, one should invoke economic considerations in selecting a distribution. That is, if one cannot choose between two distributions, one can conduct two simulation studies, one for each of the possible distributions; then decision and game theories can be used to select the appropriate distribution and its optimal design. Otherwise, one can simply choose the normal distribution, because empirical studies have shown that the mean and standard deviation are much more important than other statistics in producing good results in most simulation studies. Moreover, the mean and standard deviation can be estimated tolerably well from moderate-sized samples. In many cases, it is sufficient to reproduce the historical mean and standard deviation, and therefore the normal distribution is quite adequate.

3.5 Data Generation

As the historical data are not long enough, we have to generate inflow data by synthetic procedures. We now describe some tools in data generation and a few familiar models that are used to generate data. In order for the models to be realistic, the parameters of the models (such as birth and death rates of animals, arrivals of cars at a toll booth, etc.) must be chosen to correspond as closely as possible to reality; typically this means that, before a model can be simulated, the real-life process must be observed and sampled so that parameter estimates can be obtained.

3.5.1 Tools in Data Generation

To represent a historical flow sequence by the generated flows, we have to keep all the historical parameters intact in the generated inflow sequences. The data generation process requires some specific information as well as information on some random events. Every model that we use to generate data has these two components, i.e., the deterministic part (specific part) and the random part. The random part of the model is obtained by drawing random numbers, and the deterministic part is obtained from the historical inflow records. Persistence in hydrological series is very important; persistence indicates how one flow is influenced by another flow. The statistical measure of persistence is the serial correlation coefficient. Thus, the tools required in data generation are:

(a) Specification of the time series and its components, which include deterministic and random components.
(b) Obtaining the deterministic parts.
(c) Generation of random numbers to obtain the random part.
(d) Specification of the correlation coefficient and the coefficient of skewness.

3.5.1.1 Time Series and its Components

A set of historical or synthetic flows for a stream is a sequence of numbers or values produced by a random process in a succession of time intervals. Such a sequence is called a time series. In general, a member of a time series, written as $x_t$, is the sum of two parts:

$x_t = d_t + e_t$

Here $d_t$ is a deterministic part, a number determined by some exact functional rule from the parameters and previous values of the process. Typically, $d_t$ might be a function of the mean flow $\mu$ and of the variability of the inflows, such as $\sigma$. The random component of the generating scheme is $e_t$. It is a random number drawn from the set of random numbers with a certain probability distribution or pattern. In one case $e_t$ might be drawn from the so-called uniform distribution on $(0, 1)$; in this distribution each number between 0 and 1 is just as likely to occur as any other number. The random part might instead be drawn from the bell-shaped normal distribution.

From a stochastic point of view, streamflow data can be regarded as consisting of four components (Kottegoda, 1971): trend ($T_t$), periodic or seasonal ($P_t$), correlation ($C_t$), and random ($R_t$) components, which can be combined simply as follows:

$x_t = T_t + P_t + C_t + R_t$

To obtain representative sequences of data, it is necessary to identify the individual strength of each component.

3.5.1.2 Specification of the Deterministic Part

Now we consider specifying the components of the time series for generating synthetic flows, that is, specifying the deterministic elements of the flows. To do so, we consider the important characteristics of the historical flow record and alternate ways of making the generated flows show the same characteristics.

First, we assume that the generated flows have the same average value as the observed flows. If the historical record contains n yearly flows, the sample mean flow is


$\bar{x} = \frac{1}{n}\sum_{t=1}^{n} x_t \qquad (3.1)$

which is an estimate of the population mean $\mu$. Mathematically, it can be shown that

$E(\bar{x}) = \mu \qquad (3.2)$

where $E(\bar{x})$ denotes the expectation of $\bar{x}$; that is, $\mu$ is the limiting value (in a probability sense) of $\bar{x}$ as $n$ tends to infinity, and the $x_t$'s are annual flows. Since the generation scheme is used only to form finite sequences of flows, the sample mean obtained cannot be expected to equal the historical mean exactly. However, it tends to be near the historical mean, and the closeness is expected to increase (i.e., improve) with the length of the generated sequence. But as we use the historical or sample values to estimate the true population values for the actual process, we must have some idea about how good the sample estimates will be. Unfortunately, there is some statistical uncertainty regarding the answer to this question. One obvious rule is that small samples are not very reliable, and the chance of obtaining an unrepresentative sequence in a large sample is much smaller than in a small sample. Again a question will arise about how large the sample should be, and again the answer is uncertain. Historical flow records are characterized by persistence. Typically, a low flow is more likely to be followed by another low flow than by a high flow. Similarly, a high flow is more likely to be followed by another high flow. Statistically, it can be said that successive flows are positively correlated. The presence of such persistence reduces the true amount of information about the population mean that is contained in a sample of given size. So, there is a chance that the sample might be unrepresentative. Hence, one should be aware that the problem of sample bias may exist and also that the probability of biased results will decrease as the historical sample size increases.

Another important characteristic of the historical record is its variability or spread. The variability of a record is measured by its variance or standard deviation. The variance is the expected value of the square of the difference between a value drawn at random from the population and the population mean. Thus, if $\mu$ is the population mean, $X$ is a random variable from the population, and $E$ is the expectation operator, the variance is defined by

$\sigma^2 = E\left[(X - \mu)^2\right] \qquad (3.3)$


The standard deviation is the positive square root of the variance, and is denoted by $\sigma$. When we have a sample of size $n$ from the population, the following sample estimate of the population variance is used:

$s^2 = \frac{1}{n-1}\sum_{t=1}^{n}\left(x_t - \bar{x}\right)^2 \qquad (3.4)$

where the sample mean $\bar{x}$ is as defined in Equation 3.1. The $n-1$ in the denominator appears because the computation in Equation 3.4 uses $\bar{x}$ in place of the population mean $\mu$; $s$ is taken as an estimate of $\sigma$.

3.5.1.3 Drawing Random Numbers

In dealing with simulation, one must have a source of random numbers. One obvious source is any random process in nature; the emission of particles from a radioactive source is an example. But this will not serve the researcher's purpose, since the numbers generated would have to be recorded and then entered into a computer for further analysis, and it would obviously be expensive to record random numbers from natural phenomena. Instead, computers can be used to generate random numbers. But "numbers generated using computers are not actually random, because computers are deterministic machines where identical output is produced upon identical input. Thus, it seems impossible for a deterministic machine to use a deterministic algorithm to generate random numbers, and it is, in fact, impossible. However, computers can generate pseudo-random numbers, which are sequences of numbers carefully (deterministically) constructed to maintain the important properties of truly random numbers. All the basic pseudo-random number generators produce uniformly distributed numbers. From these uniformly distributed numbers, other distributions can be generated by manipulating the sequence." (Fiering & Jackson, 1971)

Our primary interest is to generate normally distributed random numbers. Fortunately, it is quite easy to obtain normal random numbers from a sequence of uniformly distributed random numbers in the range $(0, 1)$. We want to generate normal random numbers with zero mean and unit standard deviation. Such numbers are called standard normal deviates. They have the frequency function

$f(z) = \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}$

The central limit theorem of probability says that numbers formed by taking sums of random numbers from the uniform distribution (or from most other distributions) are approximately normally distributed. Thus, according to Fiering and Jackson (1971), if $u_1, u_2, \ldots, u_{12}$ is a sequence of uniformly distributed random numbers on $(0, 1)$, then

$z = \sum_{i=1}^{12} u_i - 6$

will be approximately normally distributed with mean 0 and standard deviation 1. The '-6' is required to give zero mean. (Fiering & Jackson, 1971)
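As an aside, this construction is easy to program. The following minimal Python sketch (our own illustration; the function name is hypothetical) generates approximate standard normal deviates by summing twelve uniform random numbers:

import random

def standard_normal_deviate(rng=random):
    # Sum of 12 independent Uniform(0, 1) values has mean 6 and variance 1,
    # so subtracting 6 yields an approximately standard normal deviate
    # (the central limit theorem argument described above).
    return sum(rng.random() for _ in range(12)) - 6.0

# Example: draw a few approximately N(0, 1) values.
sample = [standard_normal_deviate() for _ in range(5)]
print(sample)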

3.5.1.4 Statistics of the Distribution: Correlation Coefficients and Coefficient of Skewness

One important characteristic of a time series is persistence, which relates to the sequencing of the data. In streamflow, persistence arises from natural catchment storage effects which tend to delay the runoff (rainfall or streamflow not absorbed by the soil): over a short time period, high flows in one interval will tend to be followed by high flows in the following interval. The longer the time period the smaller the effect, and for many streams it is negligible for annual flows. The usual quantitative measure of persistence is the serial correlation. Serial correlation coefficients may be calculated for the correlation between the flow in any given time period (for example, a month or year) and the flow some intervals earlier; the time interval between the flows is called the 'lag'. We denote the lag by K in this report. In many studies only the lag-one serial correlation is considered, that is, the persistence between an event and the immediately preceding event. Lag-one models have been shown to be operationally satisfactory in several studies (for example, Kottegoda, 1970; Philips, 1972; Wright, 1975; Bayazit and Bulu, 1995; and more). The population value of the lag-one serial correlation coefficient for the flow generation model is defined as

$\rho_1 = \frac{E\left[(x_t - \mu)(x_{t+1} - \mu)\right]}{\sigma^2} \qquad (3.5)$

where $\mu$ and $\sigma^2$ are the population mean and variance of the flows $x_t$, respectively, and $\rho_1$ is a measure of the extent to which a flow tends to determine its successor. If there is marked persistence in a sequence of flows, there is a notable tendency for both $x_t$ and $x_{t+1}$ to be greater than $\mu$ or for both to be less than $\mu$. Thus, there is a distinct tendency for the product $(x_t - \mu)(x_{t+1} - \mu)$ to be positive, since it is usually the product of two terms with the same sign; the expected value of the product is then positive. Conversely, if a higher than average flow is most likely to be followed by a lower than average flow, then the products tend to be negative and so is their expected value. However, if there is no persistence in the flow pattern, then a higher than average flow is just as likely to be followed by another high flow as by a lower than average flow. Similarly, a low flow is followed by a high


flow or another low flow with equal probability if there is no persistence in the flow pattern. The variance that appears in the denominator of the expression for $\rho_1$ in Equation 3.5 is a normalizing factor. It restricts the correlation values to the range $[-1, 1]$ and ensures that correlations from populations with different amounts of spread can be compared meaningfully. With a finite sample of $n$ values $x_1, x_2, \ldots, x_n$ drawn from the population, we get the following sample estimate of $\rho_1$:

$r_1 = \frac{\sum_{t=1}^{n-1}(x_t - \bar{x})(x_{t+1} - \bar{x})}{\sum_{t=1}^{n}(x_t - \bar{x})^2} \qquad (3.6)$

Higher-order correlation coefficients can also be considered. It is reasonable to expect that the flow of this year will depend on the flow of last year, and also on the flow of the year before last, and perhaps on flows in years before that. That is, one flow may be related to the flow two steps back. This is the case of the lag-two serial correlation coefficient. The population value of the lag-two serial correlation coefficient is defined as

$\rho_2 = \frac{E\left[(x_t - \mu)(x_{t+2} - \mu)\right]}{\sigma^2} \qquad (3.7)$

Its sample estimate is

$r_2 = \frac{\sum_{t=1}^{n-2}(x_t - \bar{x})(x_{t+2} - \bar{x})}{\sum_{t=1}^{n}(x_t - \bar{x})^2} \qquad (3.8)$

Similarly, the lag-K serial correlation coefficient is defined by

$\rho_K = \frac{E\left[(x_t - \mu)(x_{t+K} - \mu)\right]}{\sigma^2} \qquad (3.9)$

Its sample estimate is

$r_K = \frac{\sum_{t=1}^{n-K}(x_t - \bar{x})(x_{t+K} - \bar{x})}{\sum_{t=1}^{n}(x_t - \bar{x})^2} \qquad (3.10)$
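To make the estimator concrete, here is a small Python sketch (our own, assuming the form reconstructed in Equation 3.10; the function name and the sample record are hypothetical):

def lag_k_serial_correlation(flows, k):
    # Sample estimate r_K (Equation 3.10): products of deviations K steps
    # apart in the numerator, total sum of squared deviations about the
    # sample mean in the denominator.
    n = len(flows)
    mean = sum(flows) / n
    num = sum((flows[t] - mean) * (flows[t + k] - mean) for t in range(n - k))
    den = sum((x - mean) ** 2 for x in flows)
    return num / den

# Example with a short hypothetical record: lag-one estimate r_1.
flows = [120.0, 95.0, 130.0, 160.0, 150.0, 90.0, 110.0, 140.0]
print(lag_k_serial_correlation(flows, 1))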

At this point we must decide how many lags to include in the generating model. One easy but not very restrictive rule is that very long lags should not be used; that is, lags close to the sample size should be avoided. For $K$ close to $n$ there are not many pairs of flows separated by $K$ time steps, and the sample estimate of $\rho_K$ will be unstable and hence very imprecise. Additional lags should be included only as long as it is practical and profitable to do so, that is, as long as they produce a model that explains more about the pattern of flows than one with fewer lags does.

For a sample of finite size, computed values of the serial correlation $r_K$ (where $K$ is the lag) may differ from zero because of sampling errors. Thus it is necessary to test the values to determine if they are significantly different from zero. Yevjevich (1972b) outlined a test for this purpose. The confidence limits (CL) for a computed value of $r_K$ are

$CL(r_K) = \frac{-1 \pm z_{1-\alpha}\sqrt{n - K - 1}}{n - K} \qquad (3.11)$

where $z_{1-\alpha}$ is the standardized normal deviate corresponding to the level of significance $\alpha$, $n$ is the number of flow events and $K$ is the lag. If $r_K$ falls outside the confidence limits, it is considered to be significantly different from zero at the $\alpha$ level of significance. Equation 3.11 may be used to test the statistical significance of $r_K$ if $K$ is small relative to $n$.
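A hedged Python sketch of this test, assuming the confidence-limit form reconstructed in Equation 3.11 (names and example values are ours):

import math

def serial_correlation_confidence_limits(n, k, z=1.96):
    # Confidence limits for r_K under the hypothesis rho_K = 0 (the form
    # reconstructed in Equation 3.11); z = 1.96 corresponds to the 5%
    # significance level. n is the number of flow events and k is the lag.
    lower = (-1.0 - z * math.sqrt(n - k - 1)) / (n - k)
    upper = (-1.0 + z * math.sqrt(n - k - 1)) / (n - k)
    return lower, upper

# Example: is r_1 = 0.63 from a 60-year record significantly non-zero?
lo, hi = serial_correlation_confidence_limits(n=60, k=1)
print(lo, hi, not (lo <= 0.63 <= hi))  # falls outside the limits -> significant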

Finally, a statistic of the historical flow sequences which is of particular interest is the coefficient of skewness. The coefficient of skewness measures the degree of symmetry, or the lack of it, of the distribution about its mean. The population coefficient of skewness is

$\gamma = \frac{E\left[(X - \mu)^3\right]}{\sigma^3} \qquad (3.12)$

where $\sigma$ is the population standard deviation and $E[(X - \mu)^3]$ is the third moment about the mean. The $\sigma^3$ in the denominator is a scaling factor that renders the statistic dimensionless and thus allows meaningful comparisons of the skewness coefficients of populations with different degrees of spread. The sample estimate of $\gamma$ is defined for a sample $x_1, x_2, \ldots, x_n$ as

$g = \frac{\dfrac{n}{(n-1)(n-2)}\sum_{t=1}^{n}\left(x_t - \bar{x}\right)^3}{s^3} \qquad (3.13)$


A computationally useful form of the numerator in this definition is given below:

$\sum_{t=1}^{n}\left(x_t - \bar{x}\right)^3 = \sum_{t=1}^{n}x_t^3 - 3\bar{x}\sum_{t=1}^{n}x_t^2 + 2n\bar{x}^3 \qquad (3.14)$

And the denominator is

$s^3 = \left[\frac{1}{n-1}\sum_{t=1}^{n}\left(x_t - \bar{x}\right)^2\right]^{3/2} \qquad (3.15)$

The population mean $\mu$ is the center point of the distribution in the sense that it is the number $c$ that minimizes $E[(X - c)^2]$ over the population. A positive value of the skewness coefficient means that the part of the population with values greater than the mean is spread further from the mean than is the part with values less than the mean. The normal distribution has $\gamma = 0$. The family of gamma distributions has nonzero skewness coefficients.
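The sample skewness computation of Equations 3.13-3.15 can be sketched in Python as follows (our own illustration; the function name and data are hypothetical):

def sample_skewness(flows):
    # Sample coefficient of skewness g: bias-corrected third central moment,
    # n / ((n - 1)(n - 2)) * sum((x - mean)^3), divided by s^3 where s^2 is
    # the sample variance of Equation 3.4.
    n = len(flows)
    mean = sum(flows) / n
    m3 = sum((x - mean) ** 3 for x in flows)
    s2 = sum((x - mean) ** 2 for x in flows) / (n - 1)
    return (n * m3) / ((n - 1) * (n - 2) * s2 ** 1.5)

flows = [120.0, 95.0, 130.0, 160.0, 150.0, 90.0, 110.0, 140.0]
print(sample_skewness(flows))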

3.5.2 Models in Data Generation

After estimating the parameters of the historical flows and selecting a distribution of inflows, our next task is to choose a model for data generation. We saw in the section on time series that such a generation model has the form

$x_t = d_t + e_t$

with $d_t$ the deterministic part and $e_t$ the random part of the synthetic flow model.

A model for streamflows is chosen which can generate (hypothetically or actually) sequences which bear a resemblance to the historical sequence in terms of certain important characteristics, viz. the mean ($\mu$), variance ($\sigma^2$), skewness ($\gamma$) and the serial correlation coefficient of lag one ($\rho_1$). Many other considerations are involved in stochastic data generation of streamflow, such as correlograms, partial correlation functions, spectral analysis, and daily models. Besides, there are some well-known models which are most frequently used by researchers. These are the Annual Markov Model, the Yevdjevich model, the Moran-Klemes-Boruvka model, the Two-Tier Model, etc. We briefly discuss them in the following sections.


3.5.2.1 Annual Markov Model

Markov (1856-1922), the Russian mathematician, introduced the concept of a process in which the probability distribution of the outcome of any trial depends only on the outcome of the directly preceding trial and is independent of the previous history of the process. In hydrology, a 'trial' is the time interval during which the occurrence of inflows is considered, and its 'outcome' is the streamflow during that time period. In our study, the 'trial' is the passage of one year and its 'outcome' is the streamflow for that year. If the probability distribution of annual streamflow is independent of previous flows, we have a "simple" Markov process; and if the annual flows are correlated only with the previous year's flow, we have a "lag-one" Markov process. The Markov process was the basis of the developments at the Universities of Colorado and Harvard in stochastic streamflow generation procedures during the early 1960s (Julian, 1961; Yevdjevich, 1961; Brittan, 1961; Maass et al., 1962).

Brittan (1961) proposed the following Markov model to represent actual streamflows when the annual streamflows, Q, are normally distributed and follow a first-order autoregressive model:

$Q_{i+1} = \bar{Q} + \rho_1\left(Q_i - \bar{Q}\right) + t_i\,\sigma\sqrt{1 - \rho_1^2} \qquad (3.16)$

where,

$Q_{i+1}$ = annual flow for the $(i+1)$th year
$Q_i$ = annual flow for the $i$th year
$\bar{Q}$ = mean annual historical flow
$\sigma$ = standard deviation of the annual flows
$\rho_1$ = annual lag-one serial correlation coefficient
$t_i$ = normal random variate with a mean of zero and a variance of unity.

This equation is adopted in order that the expected values of the mean, standard deviation and serial correlation of the computed $Q$'s will equal the representative values of those parameters derived from the historical record and used on the right-hand side of Equation 3.16. Moreover, if the $t_i$'s are normally distributed, then it follows that the $Q$ values will also be normally distributed.

As the normal distribution assigns nonzero probability to negative values, this procedure will occasionally produce a negative flow. Negative values should be used in the generating equation to produce the succeeding flows but should be discarded thereafter; they should not be used as flows in the simulation.

The meaning of such a Markovian flow model is that a given flow depends on the preceding flow and a random component alone. One explanation might be that a high flow in one time period will build up the groundwater level and thus provide a tendency toward another high flow in the next period. Similarly, groundwater will be depleted during a period of low flow, and so a low flow is expected to be followed by another low flow.
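A minimal Python sketch of this lag-one Markov generator, assuming the form reconstructed in Equation 3.16 (the function name and parameter values are illustrative, not from the original text):

import random

def generate_annual_flows(mean, sd, rho1, n_years, rng=random):
    # Lag-one Markov generator for normally distributed annual flows.
    # Negative values are kept in the recursion, as the text prescribes,
    # but should be discarded as flows by the caller.
    q = mean                     # start the recursion at the mean flow
    flows = []
    for _ in range(n_years):
        t = rng.gauss(0.0, 1.0)  # standard normal random variate t_i
        q = mean + rho1 * (q - mean) + t * sd * (1.0 - rho1 ** 2) ** 0.5
        flows.append(q)
    return flows

# Illustrative (hypothetical) parameter values:
synthetic = generate_annual_flows(mean=1000.0, sd=250.0, rho1=0.3, n_years=60)
usable = [q for q in synthetic if q >= 0.0]  # discard negative flows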

Similarly, according to Troutman (1978), when the annual flows are described by a first-order autoregressive log-normal model [AR(1) log-normal], the data generation model is:

$\ln Q_{i+1} = \mu_y + \rho_y\left(\ln Q_i - \mu_y\right) + t_i\,\sigma_y\sqrt{1 - \rho_y^2} \qquad (3.17)$

where the $t_i$'s are independent normal variates with mean 0 and variance 1, and $\mu_y$, $\sigma_y$ and $\rho_y$ are the mean, standard deviation and lag-one serial correlation of the log-transformed streamflows.

3.5.2.2 The Yevdjevich Model

Yevdjevich presented this model in 1966. First we describe the model as described by Matalas (1967). Let $X_1, X_2, \ldots$ denote a sequence of random variables, the normal Markov sequence, defined by:

$X_{t+1} = \rho X_t + \sqrt{1 - \rho^2}\,\epsilon_t$

where the $\epsilon_t$ are independent normal variates with zero mean and unit variance. Yevdjevich considered $m$ such independent sequences $X_{1,t}, X_{2,t}, \ldots, X_{m,t}$, so that the sequence $Z_t$ is now given by:

$Z_t = \sum_{i=1}^{m} X_{i,t}^2$

Then he proved that $Z_t$ is distributed as gamma (chi-squared with $m$ degrees of freedom) with mean $m$, variance $2m$, skewness $\sqrt{8/m}$, and that $Z_t$ is a Markov chain with serial correlation $\rho^2$. Making a linear transformation:

$Y_t = a Z_t + b$

gives a gamma-distributed sequence with the specified mean, variance, skewness and serial correlation.

3.5.2.3 Moran-Klemes-Boruvka model

This model is based on a bivariate gamma distribution developed by Moran (1969) and has been discussed in considerable detail by Klemes and Boruvka (1974). The idea is to use a standard bivariate normal distribution with correlation coefficient $\rho$ and, by a transformation of each marginal to a uniform distribution, derive a bivariate uniform distribution; by a further transformation from the uniform marginals to gamma marginals, we obtain a bivariate gamma. Identifying, now, the two original normal variables as successive values of a normal Markov sequence, we have a sequence of random variables whose stationary distribution is a gamma distribution.

Chapter Four

Data Generation and Determination of Capacity

In this chapter we generate annual and monthly data. After this, the capacity is determined for a draft equal to 75% of the mean inflow using behavior analysis. The capacity is determined by considering the level of the dam content.

4.1 Behavior Analysis

Behavior analysis, also known as simulation analysis, is an iterative procedure to determine the capacity of a reservoir. In this technique, various storage capacities are assumed and the inflows are routed through each assumed capacity. Thus the 'behavior' of the dam at various time points with various inflows is observed and recorded. The 'behavior' that is of our interest includes the probability of the level of the dam content. After routing the inflows through the various assumed storages, the capacity is determined by considering the probability of the level of the dam content.

In behavior analysis, the changes in storage content of a finite reservoir are calculated using a mass storage equation which is given as:

$Z_{t+1} = Z_t + Q_t - D_t - L_t, \qquad 0 \le Z_{t+1} \le C \qquad (4.1)$

where,

$Z_{t+1}$ = storage at the end of period $t$
$Z_t$ = storage at the beginning of time period $t$
$Q_t$ = inflow during time period $t$
$D_t$ = release during time period $t$
$L_t$ = gross loss during time period $t$, and
$C$ = storage capacity

In the behavior equation given in (4.1), which is also called the continuity equation, the content at time $t+1$ is obtained by adding the inflow during time period $t$ to, and subtracting the demand and loss during time period $t$ from, the content at time $t$. The steps involved in behavior analysis are given in the following section.

4.1.1 Steps in Behavior Analysis

The steps involved in the Behavior analysis are:

(i) First, an arbitrary capacity C is chosen. It is assumed that the reservoir is initially full, i.e., $Z_0 = C$.
(ii) Apply equation (4.1) year by year, month by month, or day by day (whichever is appropriate) for the whole historical / generated data.
(iii) Compute the probability that the dam content never exceeds the assumed capacity.
(iv) If the probability is unacceptable, choose a new value of C and repeat steps i-iii.

Thus, the smallest capacity for which the dam content remains within the assumed capacity with an acceptable probability is chosen as the required capacity of the dam. A minimal sketch of this routing procedure is given below.
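The following Python sketch is our own reading of steps i-iii (function and variable names are hypothetical; overflow is detected from the raw water balance before it is capped at the capacity):

def route_inflows(inflows, capacity, demand, loss=0.0):
    # Behavior analysis with the continuity equation (4.1). The raw balance
    # is computed first; exceeding C counts as an overflow and falling below
    # 0 counts as emptiness, after which the stored content is bounded to
    # the interval [0, C].
    z = capacity                        # assumption (i): initially full
    contents, overflowed = [], False
    for q in inflows:
        balance = z + q - demand - loss
        if balance > capacity:
            overflowed = True           # content would exceed the capacity
        z = max(0.0, min(balance, capacity))
        contents.append(z)
    return contents, overflowed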

4.1.2 Assumptions

The assumptions in behavior analysis are:

(i) The reservoir is initially full.
(ii) The historical / generated data sequence is representative of future river flows.

4.1.3 Advantages

The advantages of behavior analysis are:

(i) The behavior analysis is a simple procedure and displays the behavior of the stored water clearly.

(ii) The procedure takes into account serial correlation, seasonality and other flow parameters so far as they are included in the historical flows.

(iii) The procedure can be applied to data based on any time period; that is, daily, monthly, as well as yearly data can be used.

(iv) With this method, complicated operating policies (such as the release policy) can be modeled. For example, there are real difficulties in including release as a function of demand (which may itself be a function of some climatic variable) and of the contents of the dam. In this situation, behavior analysis can be performed without much difficulty.

4.1.4 Limitations

Apart from the advantages, the method has a few limitations too. These are:

(i) The reservoir should be initially full at the beginning of its operation.
(ii) The analysis is based on the historical record, but the sequencing of flows may not be representative.

4.2 Determination of Capacity for Annual Inflow

In this section, we will describe the procedure of capacity determination by considering the level of the dam content. M.S.H. Khan (1979) determined the capacity using the stationary level of the dam content. He obtained the stationary distribution of the dam content for exponential input. But it is difficult to find a stationary distribution for a general input. So, in this study, we consider the level of the dam content.

For this, we need to generate inflow sequences by simulation. But before going for simulation, we need to find the distribution of the annual inflows and then select the appropriate data generation model. It may be mentioned that the technique to be followed remains the same if other historical records of inflow data are used to determine the capacity.

4.2.1 Selection of Annual Inflow Distribution

We have used the 60-year annual inflow record of the Alsea river of the U.S. state of Oregon as the historical flow sequence. The mean of the historical annual flow of the Alsea river is calculated using equation 3.1 as $\bar{x} = 547040$ (in the units of Table 4.2.1). The standard deviation of the historical flow is calculated using equation 3.4; the annual standard deviation is $s = 131207$.

The skewness coefficient is calculated using equations 3.14 and 3.15. The calculated coefficient of skewness is 0.2092836.


Now, we calculate the lag-one serial correlation coefficient, using equation 3.6; experience shows that consideration of lag one is sufficient. The annual lag-one serial correlation coefficient is found to be 0.6330962. After this, we have calculated the logarithms of the annual flow values, from which the annual mean, standard deviation, and skewness coefficient are calculated. The calculated values of the mean, standard deviation, skewness coefficient and lag-one serial correlation coefficient for the observed flows and the log-transformed flows are given in Table 4.2.1.

Table 4.2.1: Summary Statistics of Observed Flows and Log-transformed Flows ( units)

Statistics                    Observed Flow   Log-transformed Flow
Mean                          547040          13.183
Standard Deviation            131207          0.2476
Skewness Coefficient          0.2092836       -0.8524
Lag-one Serial Correlation    0.6330962       -0.3086399

Now, the annual inflow values of the Alsea river are plotted to check whether the annual flows follow the normal distribution. The normal probability plot of the observed annual flows is given in Figure 4.2.1, and that of the log-transformed annual flows in Figure 4.2.2. Figure 4.2.1 shows that the best-fitting curve is nearly a straight line, but not as straight as that in Figure 4.2.2. Therefore, the log-transformed flows fit better, and hence we can say that the annual flows of the Alsea river are log-normally distributed.

Figure 4.2.1: Normal Probability plot of Annual flows of Alsea river.


Figure 4.2.2: Normal Probability plot of Log of Annual Flows of Alsea river.

4.2.2 Choice of Annual Model

As we are dealing with annual data, the annual Markov model is appropriate. The annual Markov model is given by equation 3.16, but as our historical data follow the log-normal distribution, we have to deal with the log-transformed model; thus, the appropriate model is the one given in equation 3.17. We repeat the model here as:

$\ln Q_{i+1} = \mu_y + \rho_y\left(\ln Q_i - \mu_y\right) + t_i\,\sigma_y\sqrt{1 - \rho_y^2}$

where $t_i$ is a standard normal random variate and $\mu_y$, $\sigma_y$ and $\rho_y$ are the mean, standard deviation and lag-one serial correlation of the log-transformed streamflows.

4.2.3 Annual Data Generation Using the AR [1] Log-normal Model

As our historical annual inflows follow the log-normal distribution, we have used the data generation model that is suitable for the log-normal case. Our primary aim is to generate annual inflows that possess the same statistical properties as the historical annual flows. This means the generated inflows should have approximately the same mean, standard deviation, and other moments as the historical flow distribution. But to


generate logarithms of flows, "we must remember that the procedure preserves the statistics of the logarithms; the mean, variance, serial correlation coefficient, and skewness coefficient of the flows themselves will not necessarily be preserved" (Fiering & Jackson, 1971). This distortion may sometimes be important, and so Matalas (1967) suggested procedures for ensuring that the moments of the flows are maintained. Matalas assumed that the number $a$ is a lower bound on the possible flow values and that, if $x$ denotes a flow, then $y = \ln(x - a)$ is normally distributed. According to Matalas (1967), the parameters of the $x$'s are related to the parameters of the $y$'s as follows:

$\mu_x = a + \exp\!\left(\mu_y + \tfrac{1}{2}\sigma_y^2\right) \qquad (4.2)$

$\sigma_x^2 = \exp\!\left(2\mu_y + \sigma_y^2\right)\left[\exp(\sigma_y^2) - 1\right] \qquad (4.3)$

$\gamma_x = \left[\exp(\sigma_y^2) + 2\right]\left[\exp(\sigma_y^2) - 1\right]^{1/2} \qquad (4.4)$

$\rho_x = \frac{\exp(\rho_y\,\sigma_y^2) - 1}{\exp(\sigma_y^2) - 1} \qquad (4.5)$

where,

$a$ = lower bound of the possible flow values
$\mu_x$ = mean value of the historical flow sequence
$\sigma_x^2$ = variance of the historical flow sequence
$\gamma_x$ = skewness coefficient of the historical sequence
$\rho_x$ = lag-one serial correlation coefficient of the historical sequence

and $\mu_y$, $\sigma_y^2$ and $\rho_y$ are the corresponding parameters of the logarithms $y$.

To preserve the historical statistics of the flows rather than of their logarithms, Matalas suggested calculating the sample statistics $\bar{x}$, $s_x^2$, $g_x$ and $r_x$ and substituting these values into the four equations above. Solving equations 4.2-4.5 then gives the estimates of $\mu_y$, $\sigma_y^2$, $\rho_y$ and the lower bound $a$. These estimates (not the sample statistics of the logarithms of the historical flows) are used in the flow generation model to generate a series of synthetic logarithms of flows. Then a series of synthetic flows is formed from the relation

$x_t = a + \exp(y_t)$

The flows obtained from this procedure have the expected parameters, as desired.
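The moment-matching step can be sketched in Python as follows (our own implementation of the reconstructed Equations 4.2-4.5; note that, writing $\eta^2 = \exp(\sigma_y^2) - 1$, Equation 4.4 reduces to the cubic $\gamma_x = \eta^3 + 3\eta$, which is convenient for the Newton-Raphson iteration mentioned in the next section):

import math

def matalas_parameters(mean_x, var_x, skew_x, rho_x):
    # Solve Equations 4.2-4.5 for the parameters of y = ln(x - a).
    eta = skew_x / 3.0                       # starting value for eta
    for _ in range(50):                      # Newton-Raphson on the cubic
        f = eta ** 3 + 3.0 * eta - skew_x
        eta -= f / (3.0 * eta ** 2 + 3.0)
    var_y = math.log(1.0 + eta ** 2)                                   # sigma_y^2
    mu_y = 0.5 * (math.log(var_x / (math.exp(var_y) - 1.0)) - var_y)   # Eq. 4.3
    rho_y = math.log(1.0 + rho_x * (math.exp(var_y) - 1.0)) / var_y    # Eq. 4.5
    a = mean_x - math.exp(mu_y + var_y / 2.0)                          # Eq. 4.2
    return mu_y, var_y, rho_y, a

# Example with the historical annual statistics of Table 4.2.1.
mu_y, var_y, rho_y, a = matalas_parameters(547040.0, 131207.0 ** 2,
                                           0.2092836, 0.6330962)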

4.2.4 Summary Calculations for Annual Data Generation Using the AR[1] Log-Normal Model

At first, the logarithms of the historical flow sequence are obtained by taking the log of the historical flows. From these log-transformed flows, the mean, standard deviation, skewness and lag-one serial correlation are calculated.

Now, to generate logarithms of flows, we first calculated $\bar{x}$, $s_x$, $g_x$ and $r_x$ from the historical record. The calculated values are given in Table 4.2.1. Now let $\mu_y$, $\sigma_y^2$, $\rho_y$ and $a$ be the mean, variance and lag-one serial correlation coefficient of the logarithms of flows, and the lower bound. From the historical flows we got $\bar{x} = 547040$, $s_x = 131207$, $r_x = 0.6330962$, and skewness $g_x = 0.2092836$.

First, we estimate the variance of the logarithms of flows using equation 4.4. From equation 4.4, we can write

$g_x = \left[\exp(\sigma_y^2) + 2\right]\left[\exp(\sigma_y^2) - 1\right]^{1/2}$

The estimate of $\sigma_y^2$ is obtained from equation 4.4 by Newton-Raphson iteration, and the resulting estimate of the standard deviation of the logarithms is $\hat{\sigma}_y = 0.06956434$.

Next, we use the value of $s_x^2$ and the just-estimated $\hat{\sigma}_y^2$ to obtain an estimate of $\mu_y$ using the simplified form of equation 4.3, which is given as:

$\hat{\mu}_y = \frac{1}{2}\left[\ln\!\left(\frac{s_x^2}{\exp(\hat{\sigma}_y^2) - 1}\right) - \hat{\sigma}_y^2\right]$


Now, using equation 4.5 and the values of $r_x$ and $\hat{\sigma}_y^2$, we obtain the estimate of $\rho_y$ from:

$r_x = \frac{\exp(\hat{\rho}_y\hat{\sigma}_y^2) - 1}{\exp(\hat{\sigma}_y^2) - 1}$

Or,

$\hat{\rho}_y = \frac{1}{\hat{\sigma}_y^2}\,\ln\!\left[1 + r_x\left(\exp(\hat{\sigma}_y^2) - 1\right)\right]$

Finally, from equation 4.2, we get the lower bound as:

$\hat{a} = \bar{x} - \exp\!\left(\hat{\mu}_y + \tfrac{1}{2}\hat{\sigma}_y^2\right)$

Now we use the Markov model given in equation 3.17 to generate logarithms of flows with mean $\hat{\mu}_y$, standard deviation $\hat{\sigma}_y = 0.06956434$ and lag-one serial correlation coefficient $\hat{\rho}_y$. Now, if $y_t$ is the series of synthetic logarithms, then $x_t = \hat{a} + \exp(y_t)$ is the series of synthetic flows. Our intention is to generate 60 years of inflows. So we obtain a series of inflows of 50+60 years from which, discarding the first 50 values, inflows of 60 years are obtained.

In this way, we have generated a total of 1000 flow sequences, each containing 60 values, i.e., each flow sequence contains 60 years' flow values. At the end, we have a total of 60000 years' flows. The mean, standard deviation, coefficient of correlation, etc., of the historical and generated flows are given in Table 4.2.2.

Table 4.2.2: Comparison of Parameters of the Historical and Generated Annual Flows ( units)

Statistics            Historical Flows          Generated Flows
                      Flow        Log-Flow      Flow        Log-Flow
Mean                  547040      13.183        546437      13.18
Standard Deviation    131207      0.2476        131562      0.256
Coeff. Skewness       0.2092836   -0.8524       0.1876188   -0.7126
Serial Correlation    0.6330962   -0.3086399    0.6221457   0.6466462

Table 4.2.2 shows that the generated annual inflows preserve the statistical properties of the historical annual inflows.

4.2.5 Capacity Determination Using Generated Annual Inflows

Capacity is determined by considering the level of the dam contents. In this case there will be no overflow and no emptiness. The procedure is as follows:

We considered the demand as 75% of the mean annual inflow. After the data have been generated and the demand has been fixed, we are ready to simulate the dam system using the continuity equation (4.1). First, a capacity is chosen and the dam is assumed full at the beginning. Then the demanded amount of water is released from the dam. After the release, there will be some amount of water in the dam. Then inflow occurs and is stored in the dam until the end of the first year, when a release is made again. By this process we routed all the generated inflows and observed the behavior of the dam for the assumed capacity. We have computed, for various assumed capacities, the probability that the dam content never exceeds the assumed capacity, in order to determine the required capacity of the dam.
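A compact Python sketch of this probability estimate (our own reading of the procedure; names are hypothetical):

def never_exceeds(inflows, capacity, demand):
    # True if the raw water balance never rises above the assumed capacity
    # while the sequence is routed with the continuity equation (4.1),
    # starting from a full reservoir.
    z = capacity
    for q in inflows:
        balance = z + q - demand
        if balance > capacity:
            return False
        z = max(0.0, balance)
    return True

def prob_never_exceeds(sequences, capacity, demand):
    # Proportion of generated sequences with no overflow, e.g., over the
    # 1000 generated 60-year sequences with demand = 0.75 * mean inflow.
    return sum(never_exceeds(s, capacity, demand)
               for s in sequences) / len(sequences)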

Table 4.2.3: Capacity Determination Considering the Level of the Dam Content

Capacity (in units)   Probability
160460                0.92
161880                0.93
164720                0.95
167560                0.97
170400                0.98
195960                0.99

The graph is as follows:


Figure 4.2.3: Capacity by Specifying the level of the Dam Content Using Generated Annual Inflows

From the above table, as well as Figure 4.2.3, we can say that the probability that the dam content never exceeds the assumed capacity (e.g., 195960) is 0.99.

4.3 Determination of Capacity for Monthly Input and Monthly Release

In the above section, annual inflow data were used only to simplify the dam system. Moreover, it is easy to generate annual data with the available models. Also, in the annual model, the inflow values of one year can be considered independent of another year's flow, because the correlation coefficient between two successive years' flows is very small.

As in the case of annual inflows, here we will use behavior analysis to estimate the required monthly storage capacity of a dam. To run the behavior analysis, we need a long series of flows, which is not usually available, and so we use the properties of the historical monthly inflows to generate a long sequence of flows, keeping the historical properties such as the mean, standard deviation, coefficient of skewness, serial correlation coefficient, etc., intact.

4.3.1 Selection of Monthly Inflow Distribution

To select the distribution of the monthly inflows, we have to study the observed pattern first. We have a total of 720 monthly inflow values. The mean monthly inflow of the Alsea river is calculated as 45587, and the standard deviation as 49597 (in the units of Table 4.3.2). The coefficient of variation is obtained as 1.09. The coefficient of skewness is calculated using equations 3.14 and 3.15 and is found to be 1.60. As the skewness coefficient is large, we must consider the skewness in the future inflow generation process; i.e., the skewness should be preserved in the generated flows. To preserve the historical skewness as well as the mean and standard deviation, the assumption of a Gamma-type distribution is suggested (Fiering & Jackson, 1971), because "the gamma distribution is the only one of the inflow distributions that shows skewness of both flows and their logarithms; therefore it is appropriate for extensions of historical flows (i.e., data generation) in which skewness is marked and important" (Synthetic Streamflows by Fiering and Jackson, 1971). We therefore considered a Gamma-type distribution for the monthly inflows.

Khan (1979) first generated an input series of the Gamma-Markov type. As that model does not consider seasonality in the flows, we use the seasonal model suggested by Thomas and Fiering to generate monthly data, which is described in the following section.

4.3.2 Generation of Monthly Inflows Using Thomas and Fiering's Seasonal Model


In the case of annual inflow data, we considered the flows as free from periodicity. The most common form of periodicity is related to monthly data, because monthly flow varies from season to season (i.e., from month to month), and hence the data generation procedure should take this seasonality into account. Here the most appropriate practical model is the one proposed by Thomas and Fiering (1962). The algorithm for Thomas and Fiering's seasonal model is as follows:

$Q_{i,j+1} = \bar{Q}_{j+1} + b_{j+1}\left(Q_{i,j} - \bar{Q}_j\right) + t\,\sigma_{j+1}\sqrt{1 - r_j^2} \qquad (4.6)$

where,

$Q_{i,j}$, $Q_{i,j+1}$ = generated flows during the $j$th and $(j+1)$th seasons of year $i$
$\bar{Q}_j$, $\bar{Q}_{j+1}$ = mean flows during the $j$th and $(j+1)$th seasons
$b_{j+1}$ = least-squares regression coefficient for estimating the $(j+1)$th season's flow from the $j$th season's flow, computed using $b_{j+1} = r_j\,\sigma_{j+1}/\sigma_j$
$t$ = normal random variate with mean zero and variance of unity
$\sigma_j$, $\sigma_{j+1}$ = standard deviations of flows during the $j$th and $(j+1)$th seasons, and
$r_j$ = correlation coefficient between flows in the $j$th and $(j+1)$th seasons.

To use the model given in equation 4.6 to generate monthly flows at a particular site, the monthly means, standard deviations, and lag-one serial correlations are required. These are obtained from the historical flows and are calculated in the earlier sections. Written out month by month, the model is:

$Q_{\mathrm{Feb}} = \bar{Q}_{\mathrm{Feb}} + b_{\mathrm{Feb}}\left(Q_{\mathrm{Jan}} - \bar{Q}_{\mathrm{Jan}}\right) + t\,\sigma_{\mathrm{Feb}}\sqrt{1 - r_{\mathrm{Jan}}^2}$

$Q_{\mathrm{Mar}} = \bar{Q}_{\mathrm{Mar}} + b_{\mathrm{Mar}}\left(Q_{\mathrm{Feb}} - \bar{Q}_{\mathrm{Feb}}\right) + t\,\sigma_{\mathrm{Mar}}\sqrt{1 - r_{\mathrm{Feb}}^2}$

and so on, through the twelve months of the year.


To run the model, we set $Q_1 = \bar{Q}_1$ and computed $Q_2, Q_3, \ldots$ successively, where $t$ is the only unknown; at each step it is calculated as a pseudo-random normal variate with mean zero and unit standard deviation. The above model is restricted to normally distributed flows, that is, $t$ is considered to be a normal random variate with mean zero and unit standard deviation. In order to cater for non-normal streams, the model can be modified in three different ways:

(i) modifying $t$ by an appropriate transformation;
(ii) modifying the streamflow parameters and the model algorithms such that the final generated data are distributed like the historical flows upon which they are based; and
(iii) generating normally distributed flows and applying inverse normalizing equations.

Since the second way is not so easy, and neither is the third, modifying $t$ by an appropriate transformation is simpler. For skewed data, Thomas and Burden (1962) transformed the normal variate, $t$, to a skewed variate, $t_g$, with an approximate Gamma distribution, called 'like Gamma', using the Wilson and Hilferty (1931) transformation. Thus

$t_g = \frac{2}{\gamma_t}\left(1 + \frac{\gamma_t\,t}{6} - \frac{\gamma_t^2}{36}\right)^{3} - \frac{2}{\gamma_t} \qquad (4.7)$

where,

$\gamma_t$ = coefficient of skewness of the like Gamma variate,
$t$ = normal random variate with mean zero and unit standard deviation, and
$j = 1, 2, \ldots, 12$, the repetitive annual cycle of seasons.

In order to maintain the historical skewness in the generated flows, the historical skewness is increased to account for the effect of serial correlation. Using expectation theory, Thomas and Burden derived the following algorithm to do this:

$\gamma_{t,j+1} = \frac{\gamma_{j+1} - r_j^3\,\gamma_j}{\left(1 - r_j^2\right)^{3/2}} \qquad (4.8)$

where,

$\gamma_j$, $\gamma_{j+1}$ = seasonal coefficients of skewness for the $j$th and $(j+1)$th seasons.

To apply this method, called the like-Gamma transformation, $t$ in equation (4.6) is replaced by $t_g$.

Using Thomas and Fiering's seasonal model, we have generated 1000 years of inflow data, i.e., 12000 monthly inflow values.
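A condensed Python sketch of the generator described above, combining Equation 4.6 with the like-Gamma transformation of Equations 4.7 and 4.8 (our own illustration; it assumes nonzero adjusted skewness and twelve monthly parameter lists indexed so that rhos[j] links month j to month j+1, with December wrapping to January):

import math
import random

def generate_monthly_flows(means, sds, skews, rhos, n_years, rng=random):
    # Thomas-Fiering seasonal model with Wilson-Hilferty 'like Gamma' noise.
    q = means[0]                    # first value: the January mean flow
    flows = []
    for _ in range(n_years):
        for j in range(12):
            k = (j + 1) % 12        # next month, wrapping December -> January
            r = rhos[j]
            # adjusted seasonal skewness of the random addend (Equation 4.8)
            g = (skews[k] - r ** 3 * skews[j]) / (1.0 - r ** 2) ** 1.5
            t = rng.gauss(0.0, 1.0)
            # Wilson-Hilferty like-Gamma transformation (Equation 4.7)
            tg = (2.0 / g) * (1.0 + g * t / 6.0 - g ** 2 / 36.0) ** 3 - 2.0 / g
            b = r * sds[k] / sds[j]  # least-squares regression coefficient
            q = means[k] + b * (q - means[j]) + tg * sds[k] * math.sqrt(1.0 - r ** 2)
            q = max(q, 0.0)          # negative flows treated as zero
            flows.append(q)
    return flows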

Calculation

The above model (equation 4.6) was used to generate the monthly inflows. We first calculated the mean, standard deviation, coefficient of skewness and lag-one serial correlation coefficient of the historical monthly inflows. The historical monthly flow characteristics are given in Table 4.3.1.

Now we give a simple sample calculation for flow generation using the Thomas and Fiering model. As per the model, the first flow value is taken as the mean flow of January; that is, $Q_{\mathrm{Jan}} = \bar{Q}_{\mathrm{Jan}}$. The inflow of February is then calculated using

$Q_{\mathrm{Feb}} = \bar{Q}_{\mathrm{Feb}} + b_{\mathrm{Feb}}\left(Q_{\mathrm{Jan}} - \bar{Q}_{\mathrm{Jan}}\right) + t_g\,\sigma_{\mathrm{Feb}}\sqrt{1 - r_{\mathrm{Jan}}^2}$

where the adjusted skewness $\gamma_t$ is calculated using equation 4.8 and the like-Gamma transformed value $t_g$ is then obtained from equation 4.7. For a random number $t = -1.854213$:

Table 4.3.1: Historical Monthly Flow Characteristics of the Alsea river (flows in units).

Month       Mean Flow   St. Dev   Coeff. Var.   Coeff. Skew   Serial Corr
January     110547      55051     0.50          0.232         0.266
February    93708       42266     0.45          0.432         0.288
March       76817       32367     0.42          0.454         0.161
April       45105       19639     0.44          0.899         0.133
May         24988       11207     0.45          1.01          -0.017
June        12288       5487      0.45          1.665         0.040
July        6320        3646      0.58          5.860         -0.061
August      3661        942       0.26          1.11          0.181
September   3888        2078      0.53          2.534         -0.025
October     11701       13312     1.14          2.92          0.086
November    54339       37007     0.68          0.933         -0.182
December    103678      54839     0.53          0.440         0.052

Here, the skewness of January is $\gamma_{\mathrm{Jan}} = 0.232$, the skewness of February is $\gamma_{\mathrm{Feb}} = 0.432$, and the serial correlation coefficient of January is $r_{\mathrm{Jan}} = 0.266$. Hence, from equation 4.8,

$\gamma_t = \frac{0.432 - (0.266)^3 (0.232)}{\left[1 - (0.266)^2\right]^{3/2}} \approx 0.477$

So, from equation 4.7 with $t = -1.854213$, we get $t_g \approx -1.652$. Also, we get the regression coefficient as

$b_{\mathrm{Feb}} = r_{\mathrm{Jan}}\,\frac{\sigma_{\mathrm{Feb}}}{\sigma_{\mathrm{Jan}}} = 0.266 \times \frac{42266}{55051} \approx 0.204$

Finally, the flow of February is obtained as

$Q_{\mathrm{Feb}} = 93708 + 0.204\,(110547 - 110547) + (-1.652)(42266)\sqrt{1 - (0.266)^2} \approx 26420$

In this way, the successive months' flows are generated. As we have used the random component $t$, which is a normal random variate with mean zero and unit standard deviation, the flow values obtained in this process can be negative. Because of the complexity of the seasonal model, we did not use the negative values for further generation but considered them as zero. So whenever any month's generated flow appears negative, we considered it to be zero.

Note that we deterministically set the first flow value to the historical mean flow of January. To overcome this limitation and to make the entire generated sequence 'random', we discarded the first five years' values (i.e., 60 values) from the generated flows. Thus, we obtained 1000 years of monthly flows. Characteristics of the generated monthly flows in comparison with the historical flows are given in Table 4.3.2.

Table 4.3.2: Characteristics of the Generated Monthly Flows in Comparison with the Historical Monthly Flows of Alsea river. Flows are in units.

Statistics          Historical Flow   Generated Flow
Mean Flow           45587             1126857
St. Deviation       49597             1797211
Coeff. Variation    1.09              1.60
Coeff. Skewness     1.60              1.40
Corr. Coefficient   0.65              0.12

From the table, it is observed that although the skewness is almost preserved in the generated flows, the mean and standard deviation differ from their historical values. This might have happened because we considered the negative flows as zero.

4.3.3 Capacity Determination Using Generated Monthly Inflows

Now we apply the technique (considering the level of the dam contents) to the generated monthly inflows. Various capacities were assumed, and the behavior of the dam was observed using the continuity equation (4.1). We computed, for each assumed capacity, the probability that the dam content never exceeds that capacity, in order to determine the required capacity of the dam. The results are summarized in Table 4.3.3.


Table 4.3.3: Capacity Determination Considering the Level of the Dam Content

Capacity (in units)   Probability
568000                0.91
596400                0.93
624800                0.94
639000                0.95
653200                0.96
681600                0.97
710000                0.98
766800                0.99

The graph is given below.


Figure 4.3.3: Capacity by Specifying Level of the Dam content using Generated Monthly Inflows

It is clear from the above figure that the probability that the dam content never exceeds the assumed capacity (e.g., 766800) is 0.99.

4.3.4 Capacity Determination Using Historical Monthly Inflows

Since the mean monthly flow of the generated inflow data is not exactly the same as that of the historical monthly flow, we have also applied the method to the historical data directly. The results are summarized in Table 4.3.4.

Table 4.3.4: Capacity Determination Considering the Level of the Dam Content

Capacity (in units)   Probability
24992                 0.90
26980                 0.93
28116                 0.94
29820                 0.95
31240                 0.96
38340                 0.99

The graph is given as follows:

Figure 4.3.4: Capacity by Specifying Level of the Dam Content Using Historical Monthly Inflows


Chapter Five

Methods of Determining Capacity

5.1 Introduction

The storage required for a dam to meet a specific demand depends on the following three main factors:

1. The variability of the inflow pattern,
2. The required demand, and
3. The degree of reliability of the demand being met.

A large number of procedures have been devised for determining the capacity of a dam. A dam can be constructed on a single stream, on several streams, or even off-stream (e.g., pumped storage).

Reservoir capacity-yield relationship can be classified into three main groups. These are:


1. Techniques based on critical period
2. Techniques based on probability theory
3. Techniques based on generated inflows

5.2 Techniques Based on Critical Period

The techniques involved in this category are: the Mass Curve Method (Rippl Diagram), the Residual Mass Curve Method, and Behavior Analysis. Some of the critical period techniques are based on the range of inflows. These include Hurst's Procedure, Fathy and Shukry's Method, the Sequent Peak Algorithm, etc.

5.2.1 Critical period

A critical period is defined as a period during which a reservoir goes from a full condition to an empty condition without spilling in the intervening period. The start of a critical period is a full reservoir and the end of a critical period is when the reservoir first empties. Thus, only one failure can occur during a critical period. The definition stated above is, however, not universally accepted. For example, the U.S. Army Corps of Engineers (1975) define the critical period as the period from the full condition of the reservoir through emptiness to the full condition again. They term the fullness-to-emptiness period the critical drawdown period (McMahon & Mein, 1978).

Two assumptions in critical period are

1. The reservoir is initially full, and
2. Only one failure can occur during the critical period.

5.2.2 Mass Curve Method

This is an important technique for determining the capacity based on the 'critical period'. The mass curve method is the first systematic method for analyzing the relationship between reservoir inflow, target draft, and storage capacity. The method, suggested by Rippl in 1883, is based on the mass diagram. It gives the minimum effective storage required so that no water shortages occur during the time period under consideration.

Rippl's method assumes that, at unit intervals of time over the period t = 1, 2, ..., T, the historical flows and the corresponding releases are known and given by X_t and R_t respectively. Let K_t = sum over i = 1, ..., t of (X_i - R_i). Then K_t plotted against time gives a curve which is called the mass curve (Rippl, 1883). Rippl used the mass curve and took

    C = P - V

as the capacity, where P is the peak and V is the following trough of the mass curve.

The mass curve method is based on the following assumptions:

(i) Both inflow and release are known functions of time.

(ii) The reservoir is full at time zero and consequently at the beginning of the critical period.

(iii) While using the historical flow data, it is assumed that future flow sequences will not contain a more severe drought than the historical flow sequence.

5.2.2.1 Procedure

A mass diagram is the plot of accumulated inflow (i.e. supply) or outflow (i.e. demand) versus time. The mass curve of supply (i.e. the supply line) is therefore drawn first and is superimposed by the demand curve. The procedure to construct such a diagram is as follows:

(i) Using the historical streamflow data, construct a mass curve or cumulative inflow curve.

(ii) From the past records, determine the hourly demand for all 24 hours for typical days (maximum, average and minimum).

(iii) Calculate and plot the cumulative demand against time. This is the cumulative draft line, which must be superimposed on the mass curve so that it is tangential to each hump of the mass inflow curve.

(iv) Read the required storage as the sum of the two maximum ordinates between the demand and supply lines; that is, measure the largest intercept between the mass inflow curve and the cumulative draft line.

(v) Repeat the procedure for all the typical days (maximum, average and minimum), and determine the maximum storage required for the worst day.
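Numerically, the graphical procedure for a constant draft reduces to finding the largest drawdown of the cumulative net-inflow curve below its running peak. The following is a minimal sketch under that assumption; the function name is ours.

    import numpy as np

    def rippl_storage(inflows, draft):
        # K_t = cumulative (inflow - draft); the required storage is the largest
        # drop of K_t below its running peak, i.e., the largest intercept between
        # the mass curve and the draft line re-drawn from each hump.
        net = np.cumsum(np.asarray(inflows, dtype=float) - draft)
        running_peak = np.maximum.accumulate(net)
        return float(np.max(running_peak - net))

For example, rippl_storage(monthly_flows, 0.9 * monthly_flows.mean()) corresponds to the 90%-draft entry of table 6.2.1 in chapter 6.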

5.2.2.2 Advantages

The mass curve analysis has the following advantages:

(i) The procedure is simple and easily understood.

(ii) It considers seasonality, serial correlation and other flow parameters, as these are included in the historical flows used in the analysis.

5.2.2.3 Disadvantages

Although much effort has been expended to correct them, this method suffers from the following principal defects:

(i) The mass curve method for determining the capacity is based solely on the historical record, which is often very short in length, and it is unlikely that the same flow sequence with the same characteristics will recur during the active life of the dam.

(ii) The same flows might not occur in the future time period.

(iii) It is not possible to compute a storage capacity for a given probability of failure; i.e., the mass curve does not help the designer to calculate the risk to be taken with regard to water shortage during periods of low flow.

(iv) The draft or release is usually taken to be constant. As seasonality affects the demand, restrictions on release (as a fraction of content, for example) are difficult to handle.

(v) The length of the historical record is likely to differ from the economic life of the dam, and because the required storage capacity obtained by Rippl's method increases with the length of the record, the estimated capacity is likely to be incompatible with a design based on the economic life, which in turn is properly determined by social and economic considerations as well as by purely physical considerations.

King (1920) modified Rippl's method to include evaporation losses and incident rainfall in the mass balance of water over the critical period.

5.2.3 Sequent Peak Algorithm: The Automated Mass Curve Technique

When the mass curve fails to determine the capacity for long synthetic sequences, its equivalent, the Sequent Peak Algorithm, suggested by Thomas and Burden, may be used. Fiering (1967) described the sequent peak procedure as follows:

Given an N-year record of streamflow at the site of a proposed dam and the desired drafts, it is required to find a reservoir of minimum capacity such that the design draft can always be satisfied if the flows and drafts are repeated in a cyclic progression of cycles of N periods each. The solution requires only two cycles. The steps in the sequent peak algorithm are:

(i) Calculate the cumulative sums K_t = sum over i = 1, ..., t of (X_i - D_i) (that is, flow minus draft) for all t = 1, 2, ..., 2N.

(ii) Locate the first peak (local maximum), P_1.

(iii) Locate the sequent peak, P_2, which is the next peak of greater magnitude than the first.

(iv) Between this pair of peaks find the lowest trough (local minimum), T_1, and calculate P_1 - T_1.

(v) Starting with P_2, find the next sequent peak, P_3, that has magnitude greater than P_2.

(vi) Find the lowest trough, T_2, between P_2 and P_3 and calculate P_2 - T_2.

(vii) Starting with P_3, find P_4 and T_3, and calculate P_3 - T_3.

(viii) Continue for all sequent peaks in the series for the 2N periods.

(ix) The required reservoir capacity is:

    C = max over j of (P_j - T_j)    (5.1)

where P_j is a peak and T_j is the corresponding trough of the mass curve.

The mass curve method uses only one stream at a time, whereas the Sequent Peak Algorithm can handle replicates of generated data. In this method, the inflow data are replicated twice. One more advantage of the sequent peak method over the mass curve is that it can handle variable drafts, so long as they can be specified without reference to the reservoir content. For example, we can handle seasonal drafts with this method. It uses the historical data directly, so the effects of seasonality, serial correlation and other flow parameters are taken into account directly.

5.2.3.1 Loucks Computer Based Sequent Peak Algorithm

The sequent peak algorithm developed by Thomas and Burden is a rather complex algorithm, especially if long sequences are used to estimate the storage capacity. Loucks (1970) framed the sequent peak algorithm as a linear programming problem (Loucks et al., 1981, pp. 235-236). Loucks' (1970) algorithm applied to a sequence of annual streamflows can be written as the recursion:

    K_i = K_{i-1} + a u - Q_i   if positive,
    K_i = 0                     otherwise,    (5.2)

for i = 1, 2, ..., 2N, subject to K_0 = 0, the required capacity being the largest K_i, where

K_i = storage required at the beginning of period i,
Q_i = annual streamflow in period i,
u = mean annual streamflow (MAF),
a = demand as a fraction of the MAF,
j = indicator variable equal to 1 or 2 (the cycle number), and
N = length of available streamflow record.

The sequent peak algorithm, advanced by Thomas and Burden (1963), uses K_0 = 0. Thus the sequence of required storages, K_i, is computed over the period i = 1, 2, ..., 2N, which is accomplished by repeating the sequence of streamflows. This double-cycling algorithm is used to take care of the situation when the critical low-flow sequence occurs at the end of the planning period.

The double-cycling sequent peak algorithm generates the steady-state solution to the problem of determining the minimum storage required over an N-year planning period to supply the desired yield, a u, with no shortages. Although the introduction of double cycling is often attributed to Thomas and Burden, Klemes (1979b, p. 138) points out that it has been used in the past, starting with Stupecky in 1909. Further detailed discussion of Rippl's mass curve technique or its equivalent sequent peak algorithm may also be found in Klemes (1978, 1979a).
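For concreteness, the double-cycling recursion just described can be sketched as follows; the deficit form K_i = max(0, K_{i-1} + a u - Q_i) is applied over the streamflow record repeated twice, as in the text.

    import numpy as np

    def sequent_peak(inflows, draft):
        # draft is the constant demand per period, e.g. a * mean annual flow.
        q = np.tile(np.asarray(inflows, dtype=float), 2)  # double cycling
        k, capacity = 0.0, 0.0                            # K_0 = 0
        for flow in q:
            k = max(0.0, k + draft - flow)    # deficit grows when draft > inflow
            capacity = max(capacity, k)       # required capacity = largest deficit
        return capacity

A loop of this kind underlies the SPA capacities reported in chapter 6.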

5.2.3.2 Alternative Sequent Peak Algorithm

A number of variants of the sequent peak algorithm (SPA) are available that accommodate storage-dependent losses. Assuming the initial storage in a semi-infinite reservoir is zero, we apply the following water balance equation for all years or months in the streamflow record of length N:

    S_t = S_{t-1} + D_t - Q_t   if positive,
    S_t = 0                     otherwise,    (5.3)

where S_t is the storage deficit (S_t >= 0) at time t (again S_0 = 0), and D_t and Q_t are the draft and the inflow during the interval (t - 1, t). Equation (5.3) is continued over the concatenated (doubled) inflow sequence. The required active reservoir capacity is given by:

    C = max(S_t) over all t = 1, 2, ..., 2N (or N if S_N = 0)

5.2.3.2.1 Attributes and Limitations

Using the historical inflow data, the SPA computes the storage required to provide the firm yield, which is the yield that can be met over a particular planning period with no failure. This approach has been widely used in the United States and elsewhere. Furthermore, the design capacity of many reservoirs world-wide has been determined using either the Rippl graphical method or the SPA, which is a numerical version of that technique.

As the SPA is equivalent to the Rippl graphical mass curve procedure, it suffers from the same limitations. First, the estimated storage is based on the critical historical low-flow sequence and says little about the reliability (expressed as a probability) of meeting the target draft. Second, fluxes (including evaporation) dependent on storage content cannot be taken into account in the simple SPA procedure. More complex algorithms now exist to overcome this inadequacy.

5.3 Techniques Based on Probability Theory

This class includes the Moran and Moran-type methods, Gould's Probability Matrix Method, Alexander's Method, Dincer's Method, Gould's Gamma Method, and the Gould-Dincer methods.

5.3.1 Dincer’s Method

This method is due to Professor T. Dincer, Middle East Technical University, Ankara. He described the method as follows:

Consider a sequence of independent annual flows with mean u and standard deviation s. Then:

    n-yearly mean = n u    (5.4)

    n-yearly standard deviation = s sqrt(n)    (5.5)

As a consequence of the central limit theorem, the distribution of the sum of n consecutive annual flows approaches normality as n increases. Therefore, the lower p-percentile flow is given by:

    X_{n,p} = n u - z_p s sqrt(n)    (5.6)

where

X_{n,p} = n-year flow with a probability of occurrence of p%, that is, for p% of the time the n-year flow <= X_{n,p}, and
z_p = standard normal variate at p%.

During a critical period,

    C_{n,p} = D_n - X_{n,p}    (5.7)

where

C_{n,p} = depletion of an initially full storage at the end of an n-year period during which time the n-year flow (X_{n,p}) has a probability of occurrence of p%, and

    D_n = n B u = constant draft from the reservoir over n years    (5.8)

where B is the constant draft as a ratio of mean annual flow.

Substituting equation 5.6 and equation 5.8 in equation 5.7 we get

    C_{n,p} = n B u - n u + z_p s sqrt(n)    (5.9)

To obtain the length of the critical period and the maximum required capacity, differentiate equation 5.9 with respect to n and equate to zero. This will give

    n_c = [ z_p C_v / (2(1 - B)) ]^2    (5.10)

and hence

    C_p = z_p^2 C_v^2 u / (4(1 - B))    (5.11)

where,

n_c = length of the critical drawdown period in years,
C_v = annual coefficient of variation,
C_p = maximum required storage in volume units, and
C_p / u = maximum required storage expressed as a ratio of mean annual flow.
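A minimal sketch of Dincer's estimate follows, assuming equations 5.10 and 5.11 as reconstructed above; scipy is used only to look up the standard normal variate.

    from scipy.stats import norm

    def dincer_capacity(mean_flow, cv, beta, p=0.05):
        # z_p for the lower p-percentile, taken here as a positive magnitude
        z_p = norm.ppf(1.0 - p)                          # 1.64 for p = 5%
        n_c = (z_p * cv / (2.0 * (1.0 - beta))) ** 2     # Eq. (5.10)
        c_p = z_p ** 2 * cv ** 2 * mean_flow / (4.0 * (1.0 - beta))  # Eq. (5.11)
        return c_p, n_c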

5.3.1.1 Assumptions


In the critical period method we assumed that the reservoir is initially full and that only one failure can occur during the critical period. There are several assumptions in this approach. These are:

(i) The draft rate is uniform,
(ii) Annual flows are independent, and
(iii) The critical period is large enough so that the n-year flow tends toward normality.

5.3.1.2 Advantages

Despite the above assumptions, this method provides reasonably reliable storage estimates at high drafts (greater than 50%).

5.3.1.3 Limitations

(i) In this procedure, the annual flows are assumed to be normally distributed. For non-normally distributed annual flows, the procedure tends to overestimate the storage, but the overestimation is balanced by a tendency for underestimation due to the initially full condition and the condition of no repeated failure without spill.

(ii) Also, this method uses annual data and thus does not take seasonality into consideration.

5.3.2 Gould's Gamma Method

Gould, in 1964, suggested this method, although the procedure can be thought of as a modification of Dincer's procedure. The essence of this approach is that, while parameters for the normal distribution are easy to calculate and probability tables for it are readily available, the Gamma distribution is a better approximation to the distribution of annual flow data. In this procedure, Gould used the Normal distribution for calculation and then applied a correction to approximate the Gamma distribution.

The mean and variance of a one-parameter Gamma distribution, G(L), are equal and equivalent to the shape parameter, say L. It is possible to convert the mean, u, and standard deviation, s, of a Normal distribution to Gamma units by dividing them both by s^2/u. The resultant Normal distribution will have the same mean and variance, that is, L = u^2/s^2 = 1/C_v^2 Gamma units. For a normal distribution Dincer showed that

    C = z_p^2 C_v^2 u / (4(1 - D))

where

C = required storage in volume units,
D = draft as a ratio of mean annual flow,
C_v = coefficient of variation,
u = mean annual flow, and
z_p = standard normal variate at p%.

Substituting for u and s in Gamma units we get

    C_g = z_p^2 / (4(1 - D))    (5.12)

where C_g = storage volume in Gamma units.

Gould argued that the difference, d, between the lower p-percentile flow of a G(L) distribution and that of a normal distribution with mean and variance both equal to L is approximately constant for a given value of p over a large range of the shape parameter L. Values of d are given in table 5.1.

Table 5.1: Values of z_p and d

Lower p percentile value (%)   z_p    d
0.5                            3.30   not constant
1.0                            2.33   1.5
2.0                            2.05   1.1
3.0                            1.88   0.9
4.0                            1.75   0.8
5.0                            1.64   0.6
7.5                            1.44   0.4
10.0                           1.28   0.3

As d is constant for a given p, according to Gould the critical period for a Gamma-distributed drought is the same as that for normally distributed inflow with the same mean and coefficient of variation. Also, the required capacity for an inflow that has a Gamma distribution is d Gamma units less than that required for a normal distribution. Thus it can be shown graphically that the drought for a Gamma distribution will be greater than for the Normal case, and the capacity should be decreased by d Gamma units.

Hence,

    C_g = z_p^2 / (4(1 - D)) - d    (5.13)

    T = C_v^2 [ z_p^2 / (4(1 - D)) - d ]    (5.14)

where T is the required capacity divided by the mean annual flow. We can convert the required capacity determined in Gamma units (equation 5.13) to units of volume as a ratio of annual flow by multiplying the right-hand side of equation 5.13 by C_v^2, and thus we get equation 5.14.
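A sketch of the computation, based on our reading of equations 5.13 and 5.14 above; the correction d would be read from table 5.1 (for example, d = 0.6 at the 5% level).

    def gould_gamma_capacity(mean_flow, cv, draft_ratio, z_p=1.64, d=0.6):
        # Storage in Gamma units (Eq. 5.13), converted back to volume units by
        # multiplying by cv**2 (Eq. 5.14) and by the mean annual flow.
        c_gamma = z_p ** 2 / (4.0 * (1.0 - draft_ratio)) - d
        return cv ** 2 * c_gamma * mean_flow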

5.3.2.1 Limitations

(i) The main limitation of Gould’s gamma method lies in the fact that annual flows are often found to be distributed other than Gamma.

(ii) For log-normally distributed flows, this method cannot be applied, at least theoretically.


5.3.3 Gould–Dincer Approach

The Gould–Dincer approach, which was offered to the first author by C.H. Hardison in 1966, is a modification of a method for reservoir storage–yield analysis derived by Professor T. Dincer, Middle East Technical University, Turkey. The Dincer method assumed the reservoir inflows were normally distributed and serially uncorrelated. In 1964, Gould (1964) independently derived a similar reservoir storage–yield relationship but incorporated inflows that were Gamma distributed. This method has become known as the Gould Gamma method (McMahon and Adeloye, 2005). To apply this method to skewed flows, Gould (1964) provided a manual adjustment to modify normal flows to Gamma-distributed flows. Vogel and McMahon (1996) proposed that the Wilson–Hilferty transformation (1931) be used instead of the cumbersome Gould procedure to deal with skewed flows, and derived an adjustment to storage as a result of auto-correlation which produced a result identical to that of Phatarfod (1986), but used a completely independent approach. A further variation of the Gould–Dincer approach that allows for lognormal inflows was offered to the first author by G. Annandale in 2004. To distinguish among these three variations of the Gould–Dincer approach we have labelled them: Gould–Dincer Normal (G–DN), Gould–Dincer Gamma (G–DG) and Gould–Dincer Lognormal (G–DLN). The following summarizes the methodology.

5.3.3.1 Gould–Dincer Normal (G–DN) Method

The equation representing the G–DN model is developed as follows. Assuming normally distributed and independent annual flows (mean u and standard deviation s), consecutive n-year inflows (i.e., the sum of n consecutive annual flows) into a reservoir can be defined as:

    n-year mean = n u    (5.15)

    n-year standard deviation = s sqrt(n)    (5.16)

During a critical period (i.e., a period during which the reservoir contents decline from full to empty) of length n:

    C_p = D_n - X_{n,p}    (5.17)

where C_p is the depletion (thought of as a positive quantity) of an initially full reservoir at the end of n years without having spilled, D_n = n D u is the target draft over n years, X_{n,p} is the n-year inflow with a probability of non-exceedance of p, and D is a constant draft defined as a ratio of u. Assuming inflows are normally distributed,

    X_{n,p} = n u + z_p s sqrt(n)    (5.18)

where z_p is the standardised normal variate at probability of non-exceedance p (z_p < 0 because we are looking at inflows below the mean). To obtain the maximum storage required to supply the draft, combine Eqs. (5.17) and (5.18) and differentiate with respect to n; this gives the required capacity C_p to meet the target draft D for probability of non-exceedance p, i.e., for reliability (1 - p), and the equivalent critical period n_crit in years as follows:

    n_crit = [ z_p C_v / (2(1 - D)) ]^2    (5.19)

which, after back-substitution into Eq. (5.17), gives:

    C_p = z_p^2 C_v^2 u / (4(1 - D))    (5.20)

where C_v is the coefficient of variation of annual inflows to the reservoir. n_crit is the period for the reservoir of capacity C_p to empty from an initially full condition.

By substituting m = (1 - D) / C_v in Eq. (5.20), we obtain the dimensionless relationship:

    S_p = C_p / s = z_p^2 / (4m)    (5.21)

where S_p is the standardised storage (reservoir capacity divided by the standard deviation of annual flows) and m is known as the drift or standardized net inflow; in other words, m is the inverse of the coefficient of variation of the net inflow. Note that, for a given reliability, S_p reduces as m increases.


To account for the auto-correlation effect on reservoir capacity, one can adjust the reservoir capacity computed from Eq. (5.20) as follows:

    C_rho = C_p (1 + rho) / (1 - rho)    (5.22)

where rho is the lag-one serial correlation coefficient. It is noted that the probability p is the probability that inflows into the reservoir will be just sufficient to allow the reservoir to meet the targeted draft with reliability (1 - p). In terms of Pegram's definitions of failure (1980), n_crit is assumed to be a measure of the mean first passage time from a full to an empty reservoir.
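The adjustment itself is a one-line scaling of the unadjusted capacity estimate; a minimal sketch:

    def adjust_for_autocorrelation(capacity, rho):
        # Scale an independent-inflow capacity estimate by (1 + rho)/(1 - rho),
        # as in Eq. (5.22); rho is the lag-one serial correlation coefficient.
        return capacity * (1.0 + rho) / (1.0 - rho)

For example, with rho = 0.018 (the generated annual flows of chapter 6) the factor is about 1.04.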

5.3.3.2 Gould–Dincer Gamma (G–DG) Method

If the inflows are assumed to be Gamma distributed, z_p in Eq. (5.22) is replaced by:

    z_gp = (2/g)[1 + g z_p / 6 - g^2 / 36]^3 - 2/g    (5.23)

where z_gp is an approximate Gamma variate based on the Wilson–Hilferty transformation (1931) (see also Chowdhury and Stedinger (1991) for developments relating to the transformation) and g is the coefficient of skewness of the annual inflows. If the flows are also auto-correlated, then g needs to be replaced by g_rho (Eq. (5.24)) in Eq. (5.23). Eq. (5.24) was first proposed by Thomas and Burden (1963). This correction adjusts g and is separate from the correction in Eq. (5.22), which deals with the effect of auto-correlation on reservoir inflows and is independent of the inflow distribution:

    g_rho = g (1 - rho^3) / (1 - rho^2)^(3/2)    (5.24)

It should be pointed out that the Gamma transformation (Eq. (5.23)) based on the Wilson and Hilferty transformation breaks down for values of g > 4.
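A sketch of the transformation in Eq. (5.23); substituting the returned value for z_p in the G–DN capacity formula gives the G–DG estimate.

    def wilson_hilferty(z_p, skew):
        # Approximate Gamma variate for a distribution with coefficient of
        # skewness `skew`; unreliable for large skews (the text cites > 4).
        g = skew
        return (2.0 / g) * (1.0 + g * z_p / 6.0 - g ** 2 / 36.0) ** 3 - 2.0 / g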


5.3.3.3 Gould–Dincer Lognormal (G–DLN) Method

If the annual flows are considered to be lognormal, z_p in Eq. (5.22) can be replaced by Eq. (5.25), which is a rearrangement of Chow (1964, Eq. (8-I-53)):

    z_lnp = [ exp( z_p sqrt(ln(1 + C_v^2)) - ln(1 + C_v^2)/2 ) - 1 ] / C_v    (5.25)

5.3.3.3.1 Attributes and limitations

A major advantage of the Gould–Dincer approach is that it is based on a straightforward and logical water balance of simultaneous inputs and outputs of a storage reservoir. Computationally, it is simple and, although the basic formulation assumes inflows are independent, the storage estimates can be adjusted to take the auto-correlation into account as provided in Eq. (5.22).

A limitation of G–D models relates to the definition of the probability of failure (emptiness), P_f. From Eq. (5.17), the probability of failure is defined as the failure of the n-year inflow to occur with a probability of non-exceedance of p. Given that the unit of time in a G–D analysis is a year, P_f can be likened to p and, for our analysis, n_crit has been assumed equivalent to the mean first passage time from a full reservoir to an empty condition. The theory does not allow for failures beyond the first failure. The complement of P_f is an approximate measure of the reliability to meet the target draft from a full condition.

5.4 Techniques Based on Generated Inflows

In this class, all the previous techniques are applied to synthetic inflows, which are generated by simulation using an appropriate data-generating model. To generate inflows by simulation, we need a model which generates inflows maintaining the same properties as the historical flows, the flows being the results of a random process, i.e., a process whose result changes with time in a way that involves probability (Moran, 1959). Several models for generating synthetic flows have been proposed, e.g., the Gaussian, log-normal, gamma, Gumbel, and log-Pearson type 3 distributions.

Chapter Six

Capacity Determination

6.1 Introduction

In this chapter we have applied our generated data to some of the existing methods to determine the capacity of a dam. In our study, capacity is determined for various drafts (D), viz. 70%, 75%, 80%, 85%, 90%, 95% and 98% of mean inflows, using the Mass Curve Method, Sequent Peak Algorithm, Gould Gamma Method, Dincer's Method, Gould-Dincer Normal Method, Gould-Dincer Log-normal Method, and Gould-Dincer Gamma Method. Capacity has been obtained with those methods for annual and monthly inflows. All computed capacities are given in units.

6.2 Capacity Determination by Mass Curve Method

6.2.1 Capacity Determination by Mass Curve Method Using Historical Monthly Inflows of Alsea river

Capacity was determined using the monthly historical inflows. The cumulative inflows are plotted against time. Figure 6.2.1(a) shows the cumulative mass curve when 60 years' monthly flows are plotted. After getting the mass curve, the draft line (90% of the mean annual flow) is superimposed on the curve. After examining the distances of the draft line from the mass inflow curve, we marked the largest distance by the two points b and a. The distance is then measured graphically and is found to be 170400 units. Now, as we consider different drafts (70%, 75%, 80%, 85%, 90%, 95% and 98% of mean inflows), we get different capacities according to the draft values. The following table shows these capacities. From table 6.2.1, we can say that the capacity decreases as the draft increases.

Table 6.2.1: Capacity by Mass Curve Method Using Historical Monthly Inflows

Values of D   Capacity
0.70          340800
0.75          284000
0.80          255600
0.85          198800
0.90          170400
0.95          102240
0.98          82360

We use these tabulated values to draw the following graph.


Figure 6.2.1: Capacity by Mass Curve Method for Various Drafts Using Historical Monthly Inflows.


Figure 6.2.1(a): Capacity by Mass Curve Method Using Historical Monthly Inflows.

6.2.2 Capacity Determination by Mass Curve Method Using Historical Annual Inflows of Alsea river

Capacity was determined for various drafts such as 70%, 75%, 80%, 85%, 90%, 95% and 98% of mean inflows. Table 6.2.2 shows these capacities.

Table 6.2.2: Capacity by Mass Curve Method Using Historical Annual Inflows.

Values of D   Capacity
0.70          261280
0.75          221520
0.80          198800
0.85          147680
0.90          122120
0.95          93720
0.98          99400

And the graph is as below:


Figure 6.2.2: Capacity by Mass Curve Method of Historical Annual Inflows

Figure 6.2.2 shows that the capacity decreases significantly as the draft values are raised.

6.2.3 Capacity Determination by Mass Curve Method Using Generated Annual Inflows


Figure 6.2.3: Mass Curve of Synthetic Annual Flows

In this method, the cumulative inflows are plotted against time. The result is shown in Figure 6.2.3. In Figure 6.2.3 (1) we have plotted 68 years' cumulative inflows against time; the resulting mass curve is much smoother than the one in figure 6.2.1 (a). Figure 6.2.3 (4) shows the cumulative mass curve when all the generated inflows are plotted; the result is almost a straight line. So, because of the long sequence of the generated flows, capacity determination is not possible using the mass curve method.

6.2.4 Capacity Determination by Mass Curve Method Using Generated Monthly Inflows

The cumulative synthetic inflows are plotted against time and Figure 6.2.4 is obtained. The result is almost a straight line, and capacity determination is not possible because of the long sequence of the generated flows.


Figure 6.2.4: Mass curve of Synthetic Monthly Flows

6.2.5 Comment

It has been found that capacity determination by the Mass Curve Method is easy if we plot a small amount of inflow against time. We have plotted the historical 68 years' annual flows of the Alsea river. But if we plot a longer series, the resulting mass curve gets smoother. Figure 6.2.3 (2) shows such a smooth mass curve when 1000 years' inflows are plotted. In this way, when all 60000 generated inflow values are plotted, we get a straight line, which is given in Figure 6.2.3(4).

Similarly, for monthly inflows, when all the generated monthly inflows are plotted against time we get almost a straight line, from which it is not possible to determine the capacity. Therefore, we conclude that the mass curve technique is not applicable to generated inflow series.

6.3 Capacity Determination by Sequent Peak Algorithm

This method is described in section 5.2.3.1.


6.3.1 Capacity Determination by Sequent Peak Algorithm Using Historical Monthly Inflows

Our monthly data contains 60 years' inflow values, each year containing 12 months' flows. For each set of data containing 12 months' flow values, capacity was determined using Loucks' (1970) computer-based sequent peak algorithm and the alternative sequent peak algorithm. Thus we got a total of 60 estimated capacities for each algorithm, and we took the mean of the estimated capacities as the required capacity. Capacity rises slightly with the draft values.

Table 6.3.1: Capacity by Sequent Peak Algorithm (SPA) (in units) Using Historical Monthly Inflows

Values of D   Loucks' SPA   Alternative SPA
0.70          4010          3972
0.75          4449          4397
0.80          4907          4829
0.85          5383          5278
0.90          5865          5736
0.95          6357          6204
0.98          6660          6490


Figure 6.3.1: Capacity by SPA

6.3.2 Capacity Determination by Sequent Peak Algorithm using Historical Annual data

According to sections 5.2.3.1 and 5.2.3.2, we obtained the annual storage. Table 6.3.2 shows the storage values for annual data obtained by Loucks' SPA and the alternative sequent peak algorithm.

Table 6.3.2: Capacity by Sequent Peak Algorithm (SPA) (in units) Using Historical Annual Inflows

Values of D   Loucks' SPA   Alternative SPA
0.70          10875         2798
0.75          11763         3575
0.80          13559         9180
0.85          17443         16948
0.90          24716         24716
0.95          33832         33832
0.98          42688         42688

The corresponding figures are given in 6.3.1(3) and 6.3.1(4).

6.3.3 Capacity Determination by Sequent Peak Algorithm Using Generated Annual Inflows

Our generated data contains 1000 inflow sequences, each containing 60 years' flow values. For each set of data, capacity was determined using Loucks' (1970) computer-based sequent peak algorithm and the alternative sequent peak algorithm. Thus we got a total of 1000 estimated capacities, and we took the mean of the estimated capacities as the required capacity. Capacity was determined for various drafts such as 70%, 75%, 80%, 85%, 90%, 95% and 98% of mean inflows.

Table 6.3.3: Capacity by Sequent Peak Algorithm (SPA) (in units) Using Generated Annual Inflows

Values of D   Loucks' SPA   Alternative SPA
0.70          12014         6017
0.75          14076         8849
0.80          17231         12770
0.85          22117         18317
0.90          29773         26199
0.95          41782         37719
0.98          51887         47224

And the figure is given below:


Figure 6.3.3: Sequent Peak Algorithm of Synthetic Annual Flows

From the above figure we can say that the capacity increases significantly as the demand increases. But the capacities from the alternative sequent peak algorithm are comparatively lower than those from Loucks' sequent peak algorithm.

6.3.4 Capacity Determination by Sequent Peak Algorithm Using Generated Monthly Inflows

Our monthly data contains 1000 years' inflow values, each year containing 12 months' flows. For each set of data containing 12 months' flow values, capacity was determined using Loucks' (1970) computer-based sequent peak algorithm. Thus we got a total of 1000 estimated capacities, and we took the mean of the estimated capacities as the required capacity.

Table 6.3.4: Capacity by Sequent Peak Algorithm Using Generated Monthly Inflows

Values of D   Capacity (Loucks' SPA)
0.70          148724
0.75          161468
0.80          174238
0.85          187021
0.90          199815
0.95          212613
0.98          220292

We can draw a graph using the above tabulated values.

Figure 6.3.4: Capacity by Sequent Peak Algorithm Using Generated Monthly Inflows

6.3.5 Comment


If a single sequence is used to determine the capacity, then the value obtained by the sequent peak algorithm will be the required storage capacity. But for a generated series, there will be more than one similar sequence with different inflow values, and hence there will be as many capacities as there are inflow series. In that case (as in our situation) we took the mean of the estimated capacities as the required capacity.

6.4 Capacity Determination Using Generated Inflows

1. Annual Inflows

The mean and standard deviation of annual inflows are found to be 155536 and 3726 respectively (for easy calculation we convert the units).

Also for annual flows we get:

Coefficient of variation = 0.0068
Standard deviation = 3726
Lag-one serial correlation coefficient = 0.018
Skewness = 0.006
Mean = 15536
z_p = 1.64 from tables of the normal distribution

2. Monthly Inflows

As Dincer's method determines the capacity of a dam using annual data, we first obtain annual inflows from the generated monthly inflows. The mean and standard deviation of annual inflows (obtained from the monthly inflows) are found to be 384033 and 47203 respectively.

Also for annual flows we get:

Coefficient of variation = 0.123
Standard deviation = 47203
Lag-one serial correlation coefficient = 0.0016
Skewness = 0.0018
Mean = 384033
z_p = 1.64 from tables of the normal distribution

6.4.1 Gould–Dincer Normal (G–DN) Method

The formulae for calculating the storage capacity [see section 5.3.3.1] by the Gould-Dincer Normal method are given below:

    C = z_p^2 C_v^2 u / (4(1 - D))   and   n_crit = [ z_p C_v / (2(1 - D)) ]^2

6.4.1.1 Adjusting Correlation Coefficient in Gould-Dincer Normal Method

To account for the auto-correlation (rho) effect on reservoir capacity, the formula is:

    C_rho = C (1 + rho) / (1 - rho)

6.4.1.2 Dincer Method

The formulae for calculating the storage capacity by Dincer's method are given below:

    C = z_p^2 C_v^2 u / (4(1 - B))   and   n_c = [ z_p C_v / (2(1 - B)) ]^2

For annual and monthly inflows we have determined various capacities using different values of demand (draft), as shown in Table 6.4.1.3 and Table 6.4.1.4.

6.4.1.3 Capacity using Generated Annual Inflows

Table 6.4.1.3: Capacity by Gould–Dincer Normal Method and Adjusted Gould-Dincer Normal Method Using Synthetic Annual Inflows.

Values of D   GDN Method   Adjusted GDN Method
0.70          2003         8916
0.75          2404         10699
0.80          3005         13374
0.85          4006         17832
0.90          6010         26749
0.95          12019        53497
0.98          30048        133744

This is best shown graphically:

Figure 6.4.1.3: Comparison between the Gould-Dincer Normal method before and after adjusting for rho, Using Synthetic Annual Inflows.

From the above table and figure we can say that when the Gould-Dincer Normal method is adjusted by the correlation coefficient, the capacity increases significantly.


Also, the graph shows that the capacity increases as the demand for water (draft) increases. For D = 0.95 (i.e., demand is 95% of mean flow) the storage is 12019 units in the GDN Method.

6.4.1.4 Capacity using Generated Monthly Inflows

For different values of demand, capacities are given in the following table:

Table 6.4.1.4: Capacity by Gould–Dincer Normal Method, Adjusted Gould- Dincer Normal Method and Dincer Method Using Synthetic Monthly Inflows.

Values of D   GDN Method   Adjusted GDN Method   Dincer Method
0.70          13013        13063                 13022
0.75          15616        15677                 15626
0.80          19520        19596                 19533
0.85          26026        26128                 26044
0.90          39039        39192                 39067
0.95          78078        78384                 78133
0.98          195197       195960                195333

From the above table we see that, for a given draft value, the capacity does not differ much across the three methods. For example, when demand is 75% of mean inflow, the capacity in the GDN, Adjusted GDN, and Dincer methods is almost the same.

The graphs are given on the next page.


Figure 6.4.1.4: Capacity by GDN, Adjusted GDN and Dincer Method Using Synthetic Monthly Inflows.

The graph indicates that capacity rises markedly with the demand.

6.4.2 Gould–Dincer Gamma (G–DG) Method

If the inflows are assumed to be Gamma distributed, then

    z_gp = (2/g)[1 + g z_p / 6 - g^2 / 36]^3 - 2/g

where

z_gp is an approximate Gamma variate, and
g is the coefficient of skewness of the annual inflows.

The capacity is computed by the following formula:

    C = z_gp^2 C_v^2 u / (4(1 - D))

where z_p is replaced by z_gp.

6.4.2.1 Adjusting Coefficient of Skewness

The formula for computing the storage capacity is the same, with z_gp computed from the adjusted skewness. If the flows are also auto-correlated, then g needs to be replaced by g_rho as follows:

    g_rho = g (1 - rho^3) / (1 - rho^2)^(3/2)

6.4.2.2 Gould's Gamma Method

In this method (see section 5.3.2),

    C_g = z_p^2 / (4(1 - D)) - d

and

    T = C_v^2 [ z_p^2 / (4(1 - D)) - d ]

6.4.2.3 Capacity using Generated Annual Inflows

The following table (6.4.2.3) shows the capacities obtained using different drafts.

Table 6.4.2.3: Capacity by Gould–Dincer Gamma (G–DG) method, Adjusted Gould-Dincer Gamma Method and Gould-Gamma Method Using Synthetic Annual Inflows.


Values of D   G-DG Method   Adjusted G-DG Method   G-G Method
0.70          9543          9908                   1467
0.75          11452         11890                  1868
0.80          14315         14863                  2469
0.85          19086         19817                  3470
0.90          28629         29725                  5473
0.95          57258         59450                  11483
0.98          143145        148626                 29512

The table shows that the capacity increases as the demand for water (draft) increases. For D = 0.98 (i.e., demand is 98% of mean flow) the storage is 29512 units in the Gould Gamma Method.

Method.


Figure 6.4.2.3: Capacity by Gould-Dincer Gamma Method, Adjusted Gould-Dincer Gamma Method, and Gould-Gamma Method Using Synthetic Annual Inflows.

It is clear from the graph that there is a significant increase in capacity with the increase of draft.

6.4.2.3.1 Comparison between the Gould-Dincer Gamma method before and after adjusting the coefficient of skewness

From the graph, it has been observed that the capacity increases slightly when the inflows are autocorrelated.

Figure 6.4.2.3.1: Comparison between Gould-Dincer Gamma method and after adjusting Skewness

6.4.2.4 Capacity using Generated Monthly Inflows

For different values of demand, capacities are given in the following table:

Table 6.4.2.4: Capacity by Gould–Dincer Gamma (G–DG) method and Gould-Gamma Method Using Synthetic Monthly Inflows.


Values of D   G-DG Method   G-G Method
0.70          13072         9536
0.75          15686         12141
0.80          19608         16047
0.85          26144         22558
0.90          39216         35581
0.95          78432         74647
0.98          196081        191847

Figure 6.4.2.4: Capacity by Gould-Dincer Gamma Method and Gould-Gamma Method

6.5 Gould–Dincer Lognormal (G–DLN)

The formula for the capacity is:

    C = z_lnp^2 C_v^2 u / (4(1 - D))

where, if the annual flows are considered to be lognormal, z_p is replaced by the lognormal variate z_lnp of Eq. (5.25). Capacities according to this formula are given in the following table.

6.5.1 Capacity using Generated Annual Inflows


Table 6.5.1: Capacity by Gould-Dincer Log normal Method Using Synthetic Annual Inflows.

Values of D   Capacity
0.70          10813
0.75          12976
0.80          16220
0.85          21627
0.90          32440
0.95          64880
0.98          162201

Figure 6.5.1: Capacity by Gould-Dincer Log normal Method

The graph indicates that capacity rises markedly with the demand.

6.5.2 Capacity using Generated Monthly Inflows

Table 6.5.2: Capacity by Gould-Dincer Log normal Method Using Synthetic Monthly Inflows.

Values of D   Capacity
0.70          14625
0.75          17550
0.80          21938
0.85          29251
0.90          43876
0.95          87753
0.98          195197

Figure 6.5.2: Capacity by Gould-Dincer Log normal Method Using Synthetic Monthly Inflows.

There are significant increases in capacity with the increase of demand.

6.6 Comparison


In our study, we have estimated reservoir capacity using Mass Curve Method, Sequent Peak Algorithm, Alternative Sequent Peak Algorithm, Dincer’s method, Gould-Dincer Normal Method, Gould-Dincer Log-Normal Method, Gould-Dincer Gamma Method, and Gould Gamma Method. Also capacity was determined using the developed approach.

The following table shows the comparison of capacities obtained using some existing techniques with that obtained using behavior analysis by considering the level of the dam contents.

Table 6.6.1: Comparison of Estimated Annual Capacity Using the Synthetic Annual Inflows.

Method                                         Capacity
Mass Curve                                     NA
Loucks Sequent Peak                            14076
Alternative Sequent Peak                       8849
Gould-Dincer Normal                            2404
Gould-Dincer Log-normal                        12976
Gould-Dincer Gamma                             11452
Gould Gamma                                    1868
Level of the Dam Content (behavior analysis)   164720

From the above table of annual capacity determination, the probability that the dam content never exceeds the assumed capacity (164720 units) is 0.95. Annual capacities obtained by other approaches are 14076 units by the Loucks Sequent Peak Algorithm and 12976 units by the Gould-Dincer Log-normal Method.

A comparison of capacities obtained using some existing techniques with that obtained using behavior analysis by considering the level of the dam contents, using synthetic monthly flows, is given below.

Table 6.6.2: Comparison of Estimated Capacity Using the Synthetic Monthly Inflows.

Method                                         Capacity
Mass Curve                                     NA
Loucks Sequent Peak                            161468
Alternative Sequent Peak                       161468
Gould-Dincer Normal                            15616
Gould-Dincer Log-normal                        17550
Gould-Dincer Gamma                             13072
Gould Gamma                                    9536
Level of the Dam Content (behavior analysis)   639000

From the above table of monthly capacity determination, the probability that the dam content never exceeds the assumed capacity (639000 units) is 0.95. Monthly capacities obtained by other approaches are 161468 units by the Loucks Sequent Peak Algorithm and 17550 units by the Gould-Dincer Log-normal Method.

Summary

In this chapter, capacity was obtained by the Mass Curve Method, Sequent Peak Algorithm, Gould-Dincer Normal Method, Gould-Dincer Log-Normal Method, Gould-Dincer Gamma Method, and Gould Gamma Method. The annual and monthly capacities obtained by these methods were then compared with those obtained by considering the level of the dam content.

Chapter Seven

Conclusion

One of the objectives of this study was to suggest a new approach to determining the capacity of a dam. For this, we have reviewed the existing techniques. After careful review of the earlier approaches, we found that in most of the methods, capacity was determined by considering the probability of emptiness of the dam. Some researchers determined the capacity by using the stationary distribution of the dam content, by considering the mean of the first emptiness time, by specifying the probability of overflow of a dam, or by considering the two events together: the probability of emptiness and the time at which the dam overflows for the first time. So we have suggested an approach to determine the capacity considering the level of the dam content, i.e., without allowing the dam to dry up or overflow.

The consideration of overflow is important in many real-life situations. In particular, if a reservoir is intended to supply water to a city, its capacity should be such that it is not allowed to overflow, because such a dam, if it overflows, will cause heavy damage to the downstream areas.

We employed the behavior technique, in which a continuity equation was used to determine the capacity. In this technique, first a capacity was assumed. Then the inflows were routed through the assumed capacity and the behavior of the dam during the given period was observed. Behavior analysis therefore enabled us to calculate the probability that the dam content does not exceed the assumed capacity.

We, in this study, have generated monthly and annual inflows by simulation, keeping the historical flow characteristics fixed. After this, capacity was determined using behavior analysis. For the purpose of comparison, we have applied several other methods to determine the capacity using the generated inflow data. The suggested capacity determination technique is described in chapter 4.

In our study, capacity was determined by the Mass Curve Method, Sequent Peak Algorithm, Dincer's Method, Gould-Dincer Normal Method, Gould-Dincer Log-normal Method, Gould Gamma Method and Gould-Dincer Gamma Method, and the results obtained were compared with the capacity obtained in chapter 4. We compared the capacities determined for both annual and monthly inflows.

In comparison of the capacity determined using the developed approach with that obtained by some other existing approaches, the following discrepancies have been found:

The Mass Curve Method has always been treated as a preliminary technique rather than a convenient one, and there remains the possibility of error in plotting the draft line from one hump to another. Moreover, for long inflow sequences the mass diagram becomes smoother, making it difficult to differentiate between the cumulative inflow curve and the draft line. To overcome this limitation of the mass curve technique, Loucks' computer-based sequent peak algorithm is used.

The Sequent Peak Algorithm (SPA), though traditionally used by water resource planners as a convenient tool for estimating the design storage capacity, under-estimates the storage capacity when the coefficient of variation is small. Usually, monthly streamflows show a higher coefficient of variation than annual streamflows.

In this study we have determined the capacity of a dam using the level of the content. We have determined the monthly and annual storage capacities. We considered the required capacity to be the one with a probability of overflow of 0.05; that is, the probability that the dam content never exceeds the assumed capacity is 0.95. We also obtained contents with 1%, 2%, 3%, etc. probability of overflow, and we neglect this overflow. Therefore we can say that this method allows no emptiness and thus ensures optimum utilization of water resources.

Appendix

Figures of Mass Curve Method

In this section figures of Mass Curve Method for different values of draft are given. Capacities are given for both monthly and annual data.


B. 1 Capacity by Mass Curve Method for Historical Monthly Data

1. For

2. For


B. 2 Capacity by Mass Curve Method for Annual Historical Data

1. For

2. For


3. For

Bibliography


Barnes, F. (1954). Storage required for a city water supply. Journal of the Institution of Engineers, Australia, 26, 198-203.

Bayazit, M., & Bulu, A. (1991). Generalized probability distribution of reservoir capacity. Journal of Hydrology, 126, 195-205.

Bayazit, M., & Bulu, A. (1992). Reservoir capacity with gamma inflow. Journal of Hydrology, 2 (32), 65-280.

Bhat, U. (1984). Elements of applied stochastic processes (2nd ed.). John Wiley and Sons.

Chow, V.T. (1964). Handbook of applied hydrology. New York: McGraw Hill.

Dearlove, R., & Harris, R. (1965). Probability of emptiness III. Proc. Reservoir Yield Symposium.

Fiering, M. (1962). Queueing theory and simulation in reservoir design. Trans. Am. Soc. Civil Engrs., 127 (pt. I), 1114-1144.

Fiering, M., & Jackson, B. (1971). Synthetic streamflows. New York: Springer-Verlag.

Fiering, M. B. (1967). Streamflow synthesis. Cambridge, Massachusetts.

Gould, B. (1961). Statistical methods for estimating the design capacity of dams. Journal of the Institution of Engineers, Australia, 33 (12), 405-416.

Gould, B. (1964). Statistical methods for reservoir yield estimation. Water Research Foundation of Australia, Report No. 8.

Hardison, C. (1972). Potential United States water supply development. Journal of the Irrigation and Drainage Division, ASCE (IR3), 479-492

Hazen, A. (1914). Storage to provide in impounding reservoir for municipal water supply. Trans. ASCE., 77(1539)

Hurst, H. (1951). Long term storage capacity of reservoirs. Trans. ASCE., 116, 770-799.

Hurst, H. (1956). Methods of using long term storage in reservoirs. Proceedings of the Institution of Civil Engineers, paper 6059, 5:519.

Hussain, M. (2003, 23 September). The Ganges: 1996 Agreement, Augmentation and Beyond. An Analysis of the Background and Possibilities. The Daily Star.

Karim and Chowdhury (1995). A comparison of four distributions used in flood frequency analysis in Bangladesh. Hydrological Sciences Journal - Journal des Sciences Hydrologiques, 40.

Khan, M. (1979). On probability theory of dams. Ph.D. thesis, University of Otago, New Zealand.

Khan, M., & Abu-Dayyeh, W. (1992). A note on the capacity of a dam. Pakistan Journal of Statistics, 8 (3B), 35-42.


King, C. (1920). Supply of water for towns in New South Wales. Transactions, The Institution of Engineers, Australia, 1:262.

Klemes, V. (1967). Reliability of water supply performed by means of a storage reservoir within a limited period of time. Journal of Hydrology, 5:70.

Klemes, V. (1978). Discussion on "Sequent peak procedure: Minimum reservoir capacity subject to constraint on final storage" by K.W. Potter. Water Res. Bull., 14 (4), 991-993.

Klemes, V. (1979a). Storage mass-curve analysis in a system-analytic perspective. Water Resource Res., 15 (2), 359-370.

Langbein, W. (1958). Queueing theory and water storage. Proceedings, Journal of Hydraulics Division, ASCE, 84 (Paper-1811).

Lloyd, E. (1963). A probability theory of reservoirs with serially correlated inputs. Journal of Hydrology, 1 (2), 99-128.

Loucks, D. (1970). Some comments on linear decision rules and chance constraints. Water Resources Res., 6 (2), 668-671.

Martin, F. (1968). Computer modeling and simulation. John Wiley and Sons.

Matalas, N. (1967). Mathematical assessment of synthetic hydrology. Water Resources Research, 3 (4), 937.

McMahon, T.A., & Mein, R.G. (1978). Reservoir capacity and yield. Netherlands: Elsevier Scientific Publications.

Melentijevich, M. (1966). Storage equations for linear flow regulations. Journal of Hydrology, 4, 201-223.

Moran, P. (1954). A probability theory of dams and storage systems. Australian Journal of Applied Science, 5, 116-126.

Naylor (1966). Computer simulation techniques. John Wiley and Sons, Inc.

Oguz, B., & Bayazit, M. (1991). Statistical properties of the critical period. Journal of Hydrology, 126.

Pegram, G.G.S., Salas, J.D., Boes, D.C., & Yevjevich, V. (1980). Stochastic properties of water storage. Colorado State University, Fort Collins, Hydrol. Pap. 100.

Phatarfod, R. (1976). Some aspects of stochastic reservoir theory. Journal of Hydrology, 30, 199-217.

Phein, H. (1993). Reservoir storage capacity with gamma inflows. Journal of Hydrology, 146, 383-389.

Prabhu, N. (1958b). Some exact results for the finite dam. Annals of Mathematical Statistics, 28 (1234).

Prabhu, N. (1964). Time dependent results in storage theory. Journal of Applied Probability, 1.


Prabhu, N. (1965). Queues and inventories: Their basic stochastic processes. New York: John Wiley and Sons Inc.

Raheem and Khan (2003). Considering Probability of Emptiness and Average first overflow time together in determination of capacity of dam. Journal of Spatial Hydrology.

Vogel, R.M., & Bolognese, R.A. (1995). Storage-reliability-resilience-yield relations for over-year water supply systems. Water Resources Research, 31 (3), 645-654.

Rippl, W. (1883). The capacity of storage reservoirs for water supply. Proc. Inst. Civil Engrs., 71, 270-278.

Sudler, C. (1927). Storage required for regulation of stream flow. Transactions, ASCE, 61, 622.

Svanidze, G. (1964). Elements of river runoff regulation computation by Monte Carlo method. Tbilisi.

Thomas A. McMahon, Geoffrey G.S. Pegram, Richard M. Vogel, & Murray C. Peel (2007). Revisiting reservoir storage-yield relationships using a global streamflow database. Advances in Water Resources. 30, 1858-1872.

Thomas, H., & Burden, R. (1963). Operations research in water quality management. Harvard Water Resource Group.

Thomas, H., & Fiering, M. (1962). Mathematical synthesis of streamflow sequences for the Analysis of river basins by simulations. Harvard University.

Troutman, B. (1978). Reservoir storage with dependent, periodic net inputs. Water Resources Res., 14 (3), 395-401.

U.S. Army Corps of Engineers (1987). Reservoir storage-yield procedure. Davis, California: Hydrol. Eng. Center.

US Army Corps of Engineers (1957). Report of use of electric computers for integrating reservoir operations (Vol. 1). Missouri River Division: DATAmatic Corporations.

Venetis, C. (1969). A stochastic model of monthly storage. Water Resources Research, 5 (3), 729.

Vogel, R. M., & Stedinger, J.R. (1987). Generalized storage-reliability-yield relationships. Journal of Hydrology, 89, 303-327.

Wilson, H., & Hilferty, M. (1931). Distribution of Chi-square. Proceedings National Academy of Science, 17, 684-688.

Yevjevich, V. (1972a). Probability and statistics in hydrology. Water Resources Publications, Fort Collins.

Yevjevich, V. (1972b). Stochastic processes in hydrology. Water Resources Publications, Fort Collins.
