Journal of EEA, Vol. 26, 2009

AN EFFICIENT MODIFIED ELLIPTIC CURVE DIGITAL SIGNATURE ALGORITHM

Tilahun Kiros

Department of Computer Science and Engineering Mekelle Institute of Technology

and

Kumudha Raimond

Department of Electrical and Computer Engineering Addis Ababa University

ABSTRACT

Many digital signature schemes based on Elliptic Curve Cryptography (ECC) have been proposed. Among these, the Elliptic Curve Digital Signature Algorithm (ECDSA) is the most widely standardized. However, the verification process of ECDSA is slower than the signature generation process. Hence, the main objective of this work is to study ECDSA in order to improve its execution time. The improvement focuses on the mathematical relationships of the algorithm so that its verification process becomes more efficient. As a result, without affecting the underlying mathematical problem, the Elliptic Curve Discrete Logarithm Problem (ECDLP), a related efficient scheme is developed. The signature verification algorithm of the modified scheme is found to be faster than the verification process of ECDSA by 45%. Keywords: Digital signature, ECDSA algorithm, Elliptic curve cryptography, Scalar multiplication, Signature generation, Signature verification.

INTRODUCTION

With the advent of information technology, ensuring network and data communication security has become a crucial issue. Though information technology provides us with versatile tools for data manipulation and data storage, it is not without various facets of security attack. Thus, it is crucial to have tools that can ensure the integrity of data, the confidentiality of data, and the authenticity of any form of data communication.

To meet the requirements of network and data communication security, the science of cryptography plays a great role. A variety of cryptographic research efforts and applications have developed in parallel with the advancement of IT facilities.

As a result, algorithms and techniques have been introduced to offer better security mechanisms. Algorithms like Rivest-Shamir-Adleman (RSA) [1, 6, 8], the Digital Signature Algorithm (DSA) [1, 15], Diffie-Hellman (DH) [1, 7], and Elliptic Curve Cryptography (ECC) based schemes [4, 10], like ECDSA, are a few of the known cryptographic systems employed in various applications. Among these, ECC is emerging as an attractive and better alternative to existing public-key cryptosystems [11, 12, 13, 14, 17, 18, 23]: it offers equivalent security with smaller key sizes, resulting in faster computations.

The use of elliptic curves in modern public-key cryptography was independently introduced by Neal Koblitz and Victor Miller in 1985 [2, 3, 4]. Since then, much research has been conducted to challenge its security strength and to find efficient ways of implementing ECC-based cryptosystems. ECC has received increasing attention from the research community, as it offers equivalent security with shorter keys compared with previously known systems like RSA and Discrete Logarithm (DL) based cryptosystems. Though the confidence level in ECC does not yet equal that in RSA, which has been around for over thirty years, it is widely believed that 160-bit ECC offers security equivalent to 1024-bit RSA.

In general, ECC has better per-bit security and is hence suitable for constrained environments like smart cards and hand-held devices. ECC has lower storage, power, and bandwidth requirements, and improved performance [20].

The rest of the paper is organized as follows: section 2 presents related work on elliptic curve digital signatures. Preliminaries on ECC and ECDSA are presented in section 3. The proposed modified scheme is described in section 4.


Section 5 presents an alternative form of the proposed scheme. A performance comparison of ECDSA and the proposed modified scheme is presented in section 6, while section 7 concludes the paper.

RELATED WORKS

Leading mathematicians and scientists have done a lot to ensure the robustness and correctness of many cryptographic schemes [9]. However, in [17] it is discussed that none of the underlying mathematical problems, namely the Integer Factorization Problem (IFP), the Discrete Logarithm Problem (DLP), and the Elliptic Curve Discrete Logarithm Problem (ECDLP), has been proven intractable. That article underlines that we rely on these schemes based on our belief in the problems' intractability, as no efficient algorithms have been found to solve them; it also notes that no sub-exponential algorithm has been found to solve the ECDLP.

Rosner discusses the implementation of curves over GF(2^m) on reconfigurable hardware [11] (finite fields with p^n elements, for p a prime integer and n a positive integer, are known as Galois fields, or GF). It is shown that for GF(2^168), one point-doubling operation takes 273 clock cycles. The work in [11] provides some fundamental concepts for hardware implementation. The suitability of ECC-based schemes for constrained devices and embedded systems is explained in [18]. Based on the per-bit security of ECC, that paper explains how ECC can yield longer-running battery-operated devices with less heat, faster applications that consume less memory, and scalable cryptographic applications. Moreover, a key-size comparison of ECC with RSA and DH based systems is given in [18]. In general, the advantages of ECC and performance comparisons with RSA and DH schemes are provided in [12, 13, 14, 17, 18].

In [16], an implementation of ECDSA on Advanced RISC Machines (ARM) processors for a curve over GF(2^m) is presented. It is concluded that by using certain machine- and curve-specific techniques, ECDSA signatures can be made faster and optimized [16]. Similar work is done in [21] for a curve over GF(p): an ARM processor implementation of the curve P-224. According to [21], a C-based implementation took 129.28 ms to perform a point multiplication over the curve P-224, and an assembly-language-based implementation took less time.

Don B. Johnson [12] explains the suitability of ECC for high-security environments based on the underlying difficulty of the ECDLP. It is explained that the ECDLP is more difficult to solve than the IFP and the DLP [12], as the currently known efficient algorithms to solve the ECDLP are fully exponential, whereas sub-exponential algorithms exist for the IFP and DLP. The article in [13] and the work discussed in [14] strengthen this idea. Moreover, in [13, 14], the suitability of ECC for smart cards is clearly explained, as ECC is more compact than RSA. Pietilainen [14] compared ECC and RSA with respect to security, efficiency and space requirements by implementing both.

In [22], the authors provide basic alternatives for resolving the implementation issues of ECC on constrained devices like cellular phones. They indicate that curves over GF(2^m) are convenient for hardware implementation, whereas curves over GF(p) are suitable for software implementation. Finally, in [22], optimization of ECC-based schemes is recommended, as ECC is accepted as the next-generation public-key cryptosystem.

Many of the works aimed at improving the performance of ECC-based schemes have concentrated either on improving the underlying mathematical operations or on implementing specific curves on specific hardware platforms. Little has been done on designing different digital signature algorithms that may perform better than the existing ones.

In this work, after a thorough study of ECC-based cryptosystems, areas of performance improvement in ECDSA have been examined. In ECDSA, the most expensive operation is the scalar multiplication, or elliptic curve point multiplication; another expensive operation is modular inversion. Optimized techniques for scalar multiplication are given in [4, 16, 24]. Here, an attempt has been made to develop an ECDSA-related scheme in which the number of elliptic curve point multiplications is reduced during the signature verification process.

PRELIMINARIES

Elliptic Curve Cryptography

Elliptic curves for cryptography are defined over finite algebraic structures such as finite fields. Let us assume prime fields Fp of characteristic p > 3 [2, 4]. Such a curve is the set of geometric solutions P = (x, y) to an equation of the following form

E: y^2 = x^3 + ax + b (mod p) (1)

where a and b are constants in Fp (p > 3) satisfying 4a^3 + 27b^2 ≢ 0 (mod p). To have the points on E form a group, an extra point denoted by O∞ is included. This extra point is called the point at infinity and can be formulated as

O∞ = (x, ∞) (2)

The point at infinity is the identity element for the group law, formulated over the set

E(p) = {(x, y) | x, y ∈ Fp that solve (1)} ∪ {O∞} (3)

This set of points forms a group under an operation conventionally written additively using the notation "+" [2]. The group is abelian [5]; ECC is based on this group and all operations are performed in it.

Suppose the point P is in E(Fp) and P has prime order n; then the cyclic additive subgroup of E(Fp) generated by P is

⟨P⟩ = {O∞, P, 2P, 3P, ..., (n−1)P} (4)

The prime p, the equation of the elliptic curve E, and the point P and its order n are the public domain parameters. Furthermore, a randomly selected integer d from the interval [1, n−1] forms a private key. Multiplying P by the private key d, an operation called scalar multiplication, generates the corresponding public key Q, i.e. Q = dP. The pair (Q, d) forms the ECC public-private key pair, with Q the public key and d the private key.
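As a concrete illustration of Eqs. (1)-(4), the following Python sketch enumerates the points of a small toy curve; the parameters are a common textbook example and are assumptions of this sketch, not values used in the paper.

# Illustrative Python sketch (parameters are a textbook example, not from
# the paper): enumerate the group E(F_p) of Eq. (3) for the toy curve
# y^2 = x^3 + 2x + 2 (mod 17).
p, a, b = 17, 2, 2
assert (4 * a**3 + 27 * b**2) % p != 0    # non-singularity condition above

# All affine solutions (x, y) of Eq. (1) ...
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x**3 + a * x + b)) % p == 0]
order = len(points) + 1                   # ... plus the point at infinity
print(order)                              # 19, a prime

# Because the group order 19 is prime, any point other than the point at
# infinity generates the whole group, so a base point P as in Eq. (4)
# has order n = 19.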

The Elliptic Curve Digital Signature Algorithm (ECDSA)

ECDSA is the elliptic curve analogue of DSA [4, 16, 19]. It was accepted by many standard organizations around 2000. Below, the ECDSA signature generation and the ECDSA signature verification algorithms are given. The algorithms are available in [4].

In ECC, there is a set of domain parameters denoted by D = (q, a, b, P, n, h). Here q represents the field order of the prime field Fq. The parameters a, b ∈ Fq are the coefficients of the elliptic curve equation E. The parameter P ∈ E(Fq) is the base point, and n is the order of P; P is the generator of the cyclic subgroup ⟨P⟩ (Eq. (4)). The parameter h, known as the cofactor, is given by

h = order(E(Fq)) / n

where order(E(Fq)) is the number of elements in E(Fq).

Given the public-private key pair (Q, d) and the domain parameters, ECDSA signature generation and verification can be formulated as shown in Algorithms (1) and (2), respectively. A hash function H, shown in step 4 of Algorithm (1), accepts a variable-size message M as input and produces a fixed-size output referred to as a hash code H(M) or message digest [2]. Hash functions are used for data integrity in conjunction with digital signature schemes: a message is typically hashed first, and then the hash value, as the representative of the message, is signed in place of the original message. The receiver authenticates the message by applying the hash function to the message, recomputing the hash value, and comparing it against the signed one.

Algorithm (1) ECDSA signature generation

Given domain parameters q, a, b, P, n and private key d, to sign a message m, A does the following:

1. Select k ∈ [1, n−1].
2. Compute kP = (x1, y1).
3. Compute r = x1 mod n. If r = 0 then go to step 1.
4. Compute e = H(m).
5. Compute s = k^(−1)(e + dr) mod n. If s = 0 go to step 1.
6. Return (r, s).

Algorithm (2) ECDSA signature verification

To verify A's signature (r, s) on m, B uses domain parameters q, a, b, P, n, h, public key Q, message m and signature (r, s).

1. Verify that r and s are integers in the interval [1, n−1]. If any verification fails then return ("Reject the signature").
2. Compute e = H(m).
3. Compute w = s^(−1) mod n.
4. Compute u1 = ew mod n and u2 = rw mod n.
5. Compute X = u1P + u2Q. If X = O∞ then return ("Reject the signature").
6. Take the x-coordinate of X as x1 and compute v = x1 mod n.
7. If v = r then return ("Accept the signature"); else return ("Reject the signature").
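To make the two algorithms concrete, here is a minimal, illustrative Python implementation over the same toy curve, with base point P = (5, 1) of order n = 19; these are textbook parameters assumed for this sketch, and real systems would use a standardized curve of cryptographic size and a full-width hash.

# A minimal, illustrative sketch of Algorithms (1) and (2) in Python.
import hashlib
import random

p, a, b = 17, 2, 2
P, n = (5, 1), 19
O = None                       # the point at infinity O∞

def inv(x, m):                 # modular inverse (Python 3.8+)
    return pow(x, -1, m)

def point_add(A, B):           # elliptic curve group law on E(F_p)
    if A is O: return B
    if B is O: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0:
        return O               # A + (-A) = O∞
    if A == B:
        lam = (3 * x1 * x1 + a) * inv(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, A):         # double-and-add: the dominant cost
    R = O
    while k:
        if k & 1:
            R = point_add(R, A)
        A = point_add(A, A)
        k >>= 1
    return R

def H(m):                      # toy hash, reduced mod n
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % n

def ecdsa_sign(d, m):          # Algorithm (1)
    while True:
        k = random.randrange(1, n)
        r = scalar_mult(k, P)[0] % n
        if r == 0: continue
        s = inv(k, n) * (H(m) + d * r) % n
        if s != 0:
            return (r, s)

def ecdsa_verify(Q, m, sig):   # Algorithm (2)
    r, s = sig
    if not (0 < r < n and 0 < s < n):
        return False
    w = inv(s, n)
    u1, u2 = H(m) * w % n, r * w % n
    X = point_add(scalar_mult(u1, P), scalar_mult(u2, Q))  # two scalar mults
    return X is not O and X[0] % n == r

d = random.randrange(1, n)     # private key
Q = scalar_mult(d, P)          # public key
assert ecdsa_verify(Q, b"message", ecdsa_sign(d, b"message"))

Note the two scalar multiplications and the modular inversion in verification; these are the operations the modified scheme below removes or reduces.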


Below, a proof is given to show how the signature verification of ECDSA works. If a signature (r, s) on a message m was generated by A, then the following necessarily holds as a result of Algorithm (1), step 5:

s ≡ k^(−1)(e + dr) (mod n) (5)

From Eq. (5), by the principles of modular arithmetic, we obtain

k ≡ s^(−1)(e + dr) ≡ s^(−1)e + s^(−1)rd (mod n) (6)

However, in Algorithm (2), step 3, s^(−1) (mod n) is represented by the parameter w, i.e. w ≡ s^(−1) (mod n). Substituting w for s^(−1) (mod n) in Eq. (6), we get

k ≡ ew + rdw (mod n) (7)

But in Algorithm (2), step 4, ew (mod n) is represented by u1 and rw (mod n) by u2. Thus, based on Eq. (7),

k = u1 + u2·d (mod n) (8)

From the verification algorithm, we can see that

X = u1P + u2Q (9)

However, the public key is Q = dP, where d is the private key in the interval [1, n−1] and P is the generator of the cyclic subgroup ⟨P⟩ (Eq. (4)). Therefore, substituting dP for Q in Eq. (9) and using Eq. (8), we obtain

X = u1P + u2dP = (u1 + u2d)P = kP (10)

This proves that v = r, because X = kP indicates that the x-coordinate x1 of kP in Algorithm (1), steps 2 and 3, and the x-coordinate x1 of X in Algorithm (2), step 6, are equal.

The ECDSA algorithm involves modular inversion and elliptic curve point multiplication (scalar multiplication) in both signature generation and signature verification. Both operations affect the performance of the algorithm; in fact, the most time-consuming operation in ECDSA is the elliptic curve scalar multiplication. This work focuses on a possible way of minimizing the scalar multiplication operations.

PROPOSED SCHEME

Scalar multiplication dominates the execution time of ECC-based schemes [4, 10]. In ECDSA, there are scalar multiplications in both the signature generation and signature verification processes. In step 2 of Algorithm (1) (ECDSA signature generation), the base point P is multiplied by the scalar (integer) value k. Furthermore, in step 5 of Algorithm (2) (ECDSA signature verification), the base point P is multiplied by the integer u1 and the public key Q is multiplied by the integer u2. As there are two scalar multiplications in the ECDSA verification algorithm, the signature verification process needs more time than the signature generation process. So, attention is given to the verification process, to examine whether a scheme can be developed that minimizes the execution time needed for signature verification in ECDSA.

Observing Algorithms (1) and (2), there is an important relationship between signature generation and signature verification: the elliptic curve point kP = (x1, y1) computed in the signature generation algorithm must be equal to the elliptic curve point X = (x1, y1) computed during signature verification. If these points are equal, one can declare that the signature is valid and was indeed generated by the owner of the public key Q. Therefore, finding any mathematical relationship, without impairing the underlying ECDLP, under which the points kP = (x1, y1) and X = (x1, y1) are equal leads to a new scheme. Based on this notion, an attempt was made to search for such mathematical relationships and, accordingly, the following scheme is proposed.

Let the signature s be generated as

s = e + k + d (mod n) (11)

where e is the hash value H(m) of a given message m, k the per-message secret, and d the private key. Hence, k can be computed as

k = s − e − d (mod n) (12)

As the elliptic curve point X = (x1, y1) must be equal to the elliptic curve point kP = (x1, y1) (see Algorithms (1) and (2)), in the verification process


the point X = (x1, y1) can be calculated based on the following steps.

1. Compute e = H(m).
2. Compute u = s − e (mod n) (i.e. u = k + d (mod n)).
3. Compute X = uP − Q.

To clarify further and to show that the verification process holds, the following proof is given.

Proof:

If the signature (r, s) is indeed generated by the holder of the private key d using Eq. (11), then Eq. (12) holds. In the verification process, X must be computed as

X = uP − Q = uP − dP = (u − d)P = (k + d − d)P = kP

This proves that X = kP, from which it can be concluded that v = r as intended. The signature generation and signature verification algorithms are formulated as shown in Algorithms (3) and (4), respectively.

Algorithm (3) Proposed scheme signature generation

Given domain parameters q, a, b, P, n and private key d, to sign a message m, A does the following:

1. Select k ∈ [1, n−1].
2. Compute kP = (x1, y1).
3. Compute r = x1 mod n. If r = 0 then go to step 1.
4. Compute e = H(m).
5. Compute s = e + k + d (mod n). If s = 0 or s = e then go to step 1.
6. Return (r, s).

Algorithm (4) Proposed scheme signature verification

To verify A's signature (r, s) on m, B uses domain parameters q, a, b, P, n, h, public key Q, message m and signature (r, s).

1. Verify that r and s are integers in the interval [1, n−1]. If any verification fails then return ("Reject the signature").
2. Compute e = H(m).
3. Compute u = (s − e) (mod n). If u = 0 return ("Reject the signature").
4. Compute X = uP − Q.
5. If X = O∞ then return ("Reject the signature").
6. Take the x-coordinate of X as x1 and compute v = x1 mod n.
7. If v = r then return ("Accept the signature"); else return ("Reject the signature").

In this proposed scheme, the execution time required to verify a signature is reduced almost by half compared with the execution time required to verify a signature in ECDSA. Both algorithms were compared for underlying field sizes of 32 and 64 bits; the signature verification process of the proposed scheme was 48-57% faster than that of ECDSA (see section 6). The reason is that ECDSA's signature verification involves two elliptic curve point multiplications, u1P and u2Q (Algorithm (2)), whose results must then be added, giving one point addition as well. In the proposed scheme there is only one scalar multiplication, uP, plus one point addition (Algorithm (4)). Thus it is reasonable to expect the execution time for signature verification to be reduced almost by half.
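The following sketch expresses Algorithms (3) and (4) in the same style, reusing the toy-curve helpers (p, P, n, O, point_add, scalar_mult, inv, H) defined in the ECDSA sketch above; note the single scalar multiplication in verification and the absence of modular inversion.

# Sketch of Algorithms (3) and (4), assuming the toy-curve helpers from
# the ECDSA sketch above are in scope.
import random

def point_neg(A):              # -Q = (x_Q, -y_Q), needed for X = uP - Q
    return O if A is O else (A[0], (-A[1]) % p)

def mod_sign(d, m):            # Algorithm (3)
    while True:
        k = random.randrange(1, n)
        r = scalar_mult(k, P)[0] % n
        if r == 0: continue
        e = H(m)
        s = (e + k + d) % n    # Eq. (11): no modular inversion needed
        if s != 0 and s != e:  # s = e would make u = 0 in verification
            return (r, s)

def mod_verify(Q, m, sig):     # Algorithm (4)
    r, s = sig
    if not (0 < r < n and 0 < s < n):
        return False
    u = (s - H(m)) % n         # u = k + d (mod n), from Eq. (12)
    if u == 0:                 # rejects the forged signature with s = e
        return False
    X = point_add(scalar_mult(u, P), point_neg(Q))  # one scalar mult: uP - Q
    return X is not O and X[0] % n == r

d = random.randrange(1, n)
Q = scalar_mult(d, P)
assert mod_verify(Q, b"message", mod_sign(d, b"message"))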

The prominent remaining issue is security. Cryptographic schemes are designed to secure our online communication as well as stored information assets. So, is the proposed scheme as secure as ECDSA?

The security of ECDSA relies on the mathematical problem ECDLP. The currently known efficient algorithms to solve the ECDLP are fully exponential-time algorithms, and hence the problem is intractable. Similarly, the proposed scheme relies on the ECDLP. However, the way a signature is generated and verified in this scheme differs from ECDSA. In ECDSA, the adversary is required to recover d by brute-force search, by learning the per-message secret key k, or by using the currently known efficient algorithms; if the adversary gets an opportunity to learn the value of a single per-message secret key k, it is possible to recover d from k. For the proposed scheme, if the adversary learns the per-message secret k, d can be recovered as

d = s − k − e (mod n) (13)

Without knowledge of k, guessing d and k from the relationship s = e + k + d (mod n) is difficult, as there are many values of d and k in [1, n−1] that satisfy the relationship. In fact, there are


(n − 1) × (n − 1) = n^2 − 2n + 1 possible solutions in the interval [1, n−1] to the equation s = e + k + d (mod n) for given values of s and e. Such a result is a very large number; the complexity of this approach is O(n^2). So, by using this approach, it is not possible to guess the value of k or d. Rather than this method, the straightforward attack, exhaustive search, is easier: computing all the points P, 2P, 3P, ..., (n−1)P until the point Q is encountered.

If such guessing were possible, adversaries could likewise have succeeded in recovering d from the relationship u1 + u2·d = k (see Algorithm (2)) in ECDSA. Thus, from these arguments, it can be seen that the proposed scheme can be as secure as ECDSA. However, cryptographic schemes should normally pass through extensive evaluation by different mathematicians and computer scientists before being employed in real-world applications, so that their security can be assured.

Currently, the best known efficient algorithm to attack the ECDLP is Pollard's rho algorithm. The main idea of Pollard's rho algorithm is to find distinct pairs (a, b) and (a', b') of integers modulo n such that [4]

aP + bQ = a'P + b'Q, so that (a − a')P = (b' − b)Q = (b' − b)dP (14)

Here, the goal is to recover the private key d. From Eq. (14), the value of d can be recovered as

(a − a')P = (b' − b)dP, hence d = (a − a')(b' − b)^(−1) (mod n) (15)

The method for finding the pairs (a, b) and (a', b') is to select random integers a, b ∈ [1, n−1] and store the triples (a, b, aP + bQ) in a table until a point equal to aP + bQ is obtained a second time [4]. This occurrence is called a collision [4]. By the birthday paradox, the expected number of iterations before a collision is obtained is approximately √(πn/2) ≈ 1.2533√n.
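This table-based collision search can be sketched directly, again reusing the toy-curve helpers from the ECDSA sketch above. On a group of order n = 19 it finishes after roughly 1.2533√n ≈ 5 iterations; on a real group of order near 2^160 it is infeasible, which is what the scheme's security rests on. Pollard's rho proper replaces the table with a pseudo-random walk of constant memory; the variant below is the simpler method described in the text.

# Illustrative collision search in the spirit of Eqs. (14)-(15),
# assuming the toy-curve helpers from the ECDSA sketch are in scope.
import random

def recover_d(Q):
    seen = {}                                   # point -> (a, b)
    while True:
        a2, b2 = random.randrange(1, n), random.randrange(1, n)
        R = point_add(scalar_mult(a2, P), scalar_mult(b2, Q))
        if R in seen:                           # a collision
            a1, b1 = seen[R]
            if (b1 - b2) % n == 0:
                continue                        # degenerate: same pair drawn twice
            # a2*P + b2*Q = a1*P + b1*Q, so (a2 - a1)P = (b1 - b2)dP,
            # hence d = (a2 - a1)(b1 - b2)^(-1) mod n, as in Eq. (15).
            return (a2 - a1) * inv(b1 - b2, n) % n
        seen[R] = (a2, b2)

d = random.randrange(1, n)
Q = scalar_mult(d, P)
assert recover_d(Q) == d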

In the proposed variant of ECDSA, there is one loophole discovered while designing the algorithm. If the adversary prepares his/her own message m and calculates the hash value e = H(m), then by assigning s = e and r = x_Q (mod n), where x_Q is the x-coordinate of the public key Q and n is the order of the base point, the signature pair (r, s) would be a valid signature. In signature verification, the verifier would compute

u = s − e (mod n) = e − e (mod n) = 0

Then X = uP − Q = 0 − Q = −Q.

However, −Q has the coordinate pair (x_Q, −y_Q), and hence v = x_Q (mod n) = r as required. This is the reason for the check s ≠ e in Algorithm (3) and the check u ≠ 0 in Algorithm (4).

ALTERNATE FORM OF THE PROPOSED SCHEME

An alternate form of the proposed scheme can be achieved by including the parameter r while computing s, as shown below:

s = re + k + d (mod n) (16)

And k can be computed based on the following equation

k = s − re − d (mod n) (17)

Therefore, in the verification process u (Algorithm (4)) can be computed as shown below

u = s − re (mod n) (18)

In the verification process X can be calculated as

X = uP − Q = (s − re (mod n))P − dP = (re + k + d − re − d)P = kP (19)

PERFORMANCE COMPARISON OF ECDSA AND THE PROPOSED SCHEME UPON PRACTICAL IMPLEMENTATION

To test the time taken to verify or generate a signature in ECDSA and in the proposed scheme, three sample inputs are used for k and d. The impact of the message size on the execution time is negligible.

All the algorithms were executed on a Dell laptop with a 1.5 GHz Intel Centrino processor and 256 MB of RAM. Each algorithm was run five times, and the time elapsed to execute the program at each run was recorded.


The time taken to execute the verification process of ECDSA and of the proposed scheme is shown in Table 1. The corresponding average value is shown for each of the three sample inputs; only the averages are given to save space.

As can be observed from Table 1, the execution time difference between corresponding values for ECDSA and the proposed scheme is very large. This is a result of the reduced number of elliptic curve point multiplications. Moreover, though its impact on the improvement becomes negligible as the underlying field size increases, the two modular inversion operations present in ECDSA are eliminated in the proposed scheme.

Table 1: Average time taken to execute signature verification of ECDSA and the proposed scheme for each of the three sample inputs (average elapsed time to verify a signature, in seconds)

                     Sample input 1     Sample input 2     Sample input 3
Algorithm            32-bit   64-bit    32-bit   64-bit    32-bit   64-bit
ECDSA                0.31     1.144     0.33     1.09      0.318    1.19
Proposed scheme      0.165    0.55      0.16     0.53      0.17     0.39
Difference in sec.   0.145    0.594     0.17     0.56      0.148    0.80

For each sample input the improvement in percentage is calculated as

Improvement = (Difference in sec. / ECDSA's corresponding verification time in sec.) × 100% (20)
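For instance, for sample input 1 the 32-bit entry of Table 2 follows from Eq. (20) as (0.145 / 0.31) × 100% ≈ 46.77%, and the 64-bit entry as (0.594 / 1.144) × 100% ≈ 51.92%.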

The result is depicted in Table 2. For the 32-bit field the overall average improvement is 48.28%; for the 64-bit field it is 56.93%. It can be seen that the proposed scheme's signature verification process runs faster than ECDSA's signature verification by about 48-57%.

Table 2: Average improvement in percentage of the proposed scheme's signature verification process over ECDSA

Sample input number    32-bit    64-bit
1                      46.77     51.92
2                      51.52     51.64
3                      46.54     67.23
Sum                    144.83    170.79
Average                48.28     56.93

In general, it is observed that a performance improvement above 45% can be achieved for the signature verification process.

CONCLUSION

A related new scheme has been proposed and developed. The new scheme and ECDSA were implemented for comparison purposes. The signature verification algorithm of the newly proposed scheme is found to be more than 45% faster than the verification process of ECDSA. The test was performed for randomly selected, specific sizes of the private key d and the one-time key k; the underlying field implementation was up to 192 bits in size. If further research is conducted to examine its security strength, we believe the result will play a great role in enhancing the speed of ECC-based digital signature schemes.

REFERENCES

[1] Menezes, A., van Oorschot, P. and Vanstone, S., "Handbook of Applied Cryptography", CRC Press, pp. 1-165, 223-481, 1997.

[2] Mao, W., "Modern Cryptography: Theory and Practice", Prentice Hall, pp. 139-199, 305-321, 2004.

[3] Stallings, W., "Cryptography and Network Security: Principles and Practice", Prentice Hall, pp. 24-27, 235-394, 2003.

[4] Hankerson, D., Menezes, A. and Vanstone, S., "Guide to Elliptic Curve Cryptography", Springer, pp. 1-61, 75-147, 153-196, 2004.

[5] Shoup, V., "A Computational Introduction to Number Theory and Algebra (version 1)", Cambridge University Press, pp. 1-73, 180-281, 2005.

[6] Lewand, R., "Cryptological Mathematics", The Mathematical Association of America, pp. 1-26, 141-176, 2000.

[7] Diffie, W. and Hellman, M., "New Directions in Cryptography", http://crypto.csail.mit.edu/classes/6.857/papers/diffie-hellman.pdf, 1976.

[8] Rivest, R., Shamir, A. and Adleman, L., "A Method for Obtaining Digital Signatures and Public-key Cryptosystems", http://theory.lcs.mit.edu/~rivest/rsapaper.pdf, 1977.

[9] Bauer, F., "Decrypted Secrets: Methods and Maxims of Cryptology", Springer, pp. 8-43, 2000.

[10] Rosing, M., "Implementing Elliptic Curve Cryptography", Manning Publications, pp. 14-43, 104-120, 200-210, 1999.

[11] Rosner, M.C., "Elliptic Curve Cryptosystems on Reconfigurable Hardware", Master's Thesis, Worcester Polytechnic Institute, http://www.crypto.ruhr-uni-bochum.de/imperia/md/content/texte/theses/ms_mrosner.pdf, 1998.

[12] Johnson, D.B., "ECC, Future Resiliency and High Security Systems", http://www.comms.scitech.susx.ac.uk/fft/crypto/ECCFut.pdf, 2000.

[13] "The Elliptic Curve Cryptosystem", http://www.comms.scitech.susx.ac.uk/fft/crypto/EccWhite3.pdf, 2000.

[14] Pietilainen, H., "Elliptic Curve Cryptography on Smart Cards", Master's Thesis, Helsinki University of Technology, http://citeseer.ist.psu.edu/cache/papers/cs/25616/http:zSzzSzwww.iki.fizSznipsuzSzDippazSzdi.pdf/blake00elliptic.pdf, 2000.

[15] Schneier, B., "Applied Cryptography: Protocols, Algorithms, and Source Code in C", John Wiley & Sons, pp. 461-481, 1996.

[16] Turan, E., "ECDSA Optimization for ARM Processors for a NIST Curve Over GF(2^m)", Master's Thesis, Oregon State University, http://security.ece.orst.edu/papers/01Turan.html, 2001.

[17] "Certicom ECC Challenge", http://www.certicom.com/download/aid-111/cert_ecc_challenge.pdf, 2006.

[18] Krasner, J., "Using Elliptic Curve Cryptography (ECC) for Enhanced Embedded Security", http://www.certicom.com/download/aid-355/WP-enhancedSecurity.pdf, 2004.

[19] Johnson, D. et al., "The Elliptic Curve Digital Signature Algorithm (ECDSA)", http://www.certicom.com/download/aid-27/ECDSA.pdf, 2001.

[20] Vanstone, S., "Elliptic Curve Cryptography: The Next Generation of Wireless Security", http://www.certicom.com/download/aid-322/CIC_Markt&Technik_ECC.pdf, 2004.

[21] Tanik, H.K., "ECDSA Optimizations on an ARM Processor for a NIST Curve Over GF(p)", Master's Thesis, Oregon State University, http://islab.oregonstate.edu/papers/01Tanik.pdf, 2001.

[22] Balasubramaniam, P. et al., "Implementation Issues in Elliptic Curve Based Cryptosystem", Transaction on Cryptology, Vol. 3, Issue 1, 2006.

[23] Eddy, W. et al., "An Interoperability Consideration in Selecting Domain Parameters for Elliptic Curve Cryptography", RS Information Systems, Inc., http://gltrs.grc.nasa.gov/reports/2005/CR-2005-213578.pdf, 2005.

[24] Joye, M., "Compact Encoding of Non-Adjacent Forms with Applications to Elliptic Curve Cryptography", Springer-Verlag, http://intro.gemplus.com/smart/rd/publications/pdf/JT01nafe.pdf, 2001.


Journal of EEA, Vol. 26, 2009

ENERGY AWARE GPSR ROUTING PROTOCOL IN A WSN

Sayed Nouh

Egyptian Consultant, Egyptian Embassy

and

Zewdu Geta

Department of Electrical and Computer Engineering Addis Ababa University

ABSTRACT

Energy is the scarce resource in wireless sensor networks (WSNs), and it determines their lifetime. For this reason, WSN algorithms and routing protocols should be selected in a manner that fulfills these energy requirements. This paper presents a solution to increase the lifetime of WSNs by decreasing their energy consumption. The proposed solution is based on incorporating energy information into the Greedy Perimeter Stateless Routing (GPSR) protocol. The proposed solution performs better in energy consumption, network lifetime and packet delivery ratio, with a network lifetime gain of 45.9% to 78.69%. However, its performance is comparatively low in average delay because of computational complexity. Keywords: Energy aware routing protocol, GPSR routing protocol, Geographical routing protocol, Wireless sensor networks.

INTRODUCTION

Wireless sensor networks [1] have inspired tremendous research interest since the mid-1990s. Advances in wireless communication and micro-electro-mechanical systems (MEMS) have enabled the development of low-cost, low-power, multifunctional, tiny sensor nodes that can sense the environment, perform data processing, and communicate with each other over short distances. The era of WSNs is highly anticipated in the near future. In September 1999, WSNs were identified by Business Week as one of the most important and impactful technologies for the 21st century. Also, in January 2003, MIT's Technology Review stated that WSNs are one of the top ten emerging technologies [2].

Wireless sensor networks are composed of sensor nodes that must cooperate in performing specific functions. In particular, with the ability of nodes to sense, process data, and communicate, they are well suited to perform event detection, which is clearly an important application of WSNs. On the other hand, energy efficiency has always been a key issue for sensor networks, as sensor nodes must rely on small, nonrenewable batteries. Wireless sensor networks present tradeoffs in system design [3]. On the one hand, the low cost of the nodes facilitates massive scale and highly parallel computation. On the other hand, each node is likely to have limited power, limited reliability, and only local communication with a modest number of neighbors. These limitations make it unrealistic for WSNs to rely on careful placement or uniform arrangement of sensors. Rather than using an expensive global positioning system (GPS) to localize each sensor, a beaconing protocol is used to enable sensors to learn their neighbors' positions on demand. The beaconing protocol is based on measuring received radio signal strength, and this radio information is used to compute ranges. The neighbor with the appropriately low radio signal strength is taken to be closest to the destination and is selected to forward data.

Figure 1 Beacon's working principle (sender X, destination D, and neighbors N1-N4)


For example, as shown in Fig. 1, suppose node X has a packet to send to node D. First, node X sends a beaconing signal to its neighbors (N1, N2, N3, and N4). These neighbors in turn reply to node X. The path with the lowest received signal strength is selected (path X → N4) to forward the packet towards destination D. The rest of the paper is organized as follows: Section 2 presents the different routing protocols in WSNs. The Greedy Perimeter Stateless Routing protocol is explained in Section 3. Section 4 presents related work on energy-efficient routing. The proposed algorithm and its implementation are described in Section 5. The simulation set-up and performance metrics are presented in Section 6. Section 7 discusses the results obtained, while Section 8 concludes the paper.

ROUTING PROTOCOLS IN WSNS

Routing in WSNs is a very challenging task due to the inherent characteristics that distinguish these networks from other wireless networks like cellular or mobile ad hoc networks. Traditional IP-based protocols may not apply to WSNs, due to the large number of sensor nodes and because getting the data is often more important than knowing the specific identity of the source sending it. Furthermore, almost all applications of sensor networks require the flow of sensed data from multiple sources to a particular base station, the sink. Sensor nodes are constrained in terms of energy, processing, and storage capacities; thus, they require careful resource management. Sensor networks are strictly dependent on their applications, and the design requirements of a sensor network change with the application. Furthermore, position awareness of sensor nodes is important, since data collection is normally based on location. Finally, since data collected by many sensors in a WSN are typically based on common phenomena, they are often highly correlated and contain a lot of redundancy. Such redundancy needs to be exploited by the routing protocols to improve energy and bandwidth utilization [4].

Flat routing

In flat networks, sensor nodes typically play the same role and collaborate to perform the sensing task [5]. The lack of global identification, due to the large number of nodes present in the network and their random placement, typical of many WSN applications, makes it hard to select a specific set of sensors to be queried.

Hierarchical routing

In a hierarchical architecture, higher-energy nodes can be used to process and send the information, while low-energy nodes can be used to monitor the area of interest and gather data [5]. This means the creation of clusters and the assignment of special tasks, such as data fusion and data forwarding, to cluster heads, in order to achieve system scalability, increased network lifetime, and energy efficiency.

Geographical routing

Geographical routing protocols exploit information about the location of the sensors in order to forward data through the network in an energy-efficient way [5]. The location of nodes may be available directly from a GPS system or by implementing some localization protocol. The possible advantage is a much simplified routing protocol with significantly smaller or even non-existent routing tables, as physical location carries implicit information about which neighbor to forward a packet to.

GREEDY PERIMETER STATELESS ROUTING (GPSR)

Greedy Forwarding Rule: in GPSR, packets are marked, by their originator, with their destinations’ locations. As a result, a forwarding node can make a locally optimal, greedy choice in choosing a packet’s next hop. Specifically, if a node knows its radio neighbors’ positions, the locally optimal choice of next hop is the neighbor geographically closest to the packet’s destination. Forwarding in this region follows successively closer geographic hops, until the destination is reached. An example of greedy next-hop choice appears in Figure 2.

Figure 2 Greedy forwarding example: Y is X's closest neighbor to D

Here, X receives a packet destined for D. X's radio range is denoted by the dotted circle about X, and the arc with radius equal to the distance between Y and D is shown as the dashed arc about D. X forwards the packet to Y, as the distance between Y and D is less than that between D and any of X's other neighbors. This greedy forwarding process repeats until the packet reaches D. The advantage of greedy forwarding is its reliance only on knowledge of the forwarding node's immediate neighbors: the state required is negligible and depends on the density of nodes in the wireless network, not on the total number of destinations in the network. For more details about GPSR's advantages and limitations, refer to [6].
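The greedy rule is easy to state in code. The following self-contained Python sketch uses a hypothetical topology chosen to mirror Fig. 2; positions are assumed known via beaconing, and the destination position is assumed to be carried in the packet, as GPSR requires.

# A minimal Python sketch of GPSR's greedy forwarding rule.
import math

def dist(u, v):
    return math.hypot(u[0] - v[0], u[1] - v[1])

def greedy_next_hop(self_pos, neighbors, dest):
    """Return the neighbor geographically closest to dest, or None when no
    neighbor is closer to dest than the current node (a local maximum,
    where GPSR switches to perimeter forwarding)."""
    if not neighbors:
        return None
    best = min(neighbors, key=lambda nb: dist(neighbors[nb], dest))
    if dist(neighbors[best], dest) >= dist(self_pos, dest):
        return None
    return best

# Hypothetical coordinates mirroring Fig. 2: Y is closest to D.
X, D = (0.0, 0.0), (10.0, 0.0)
neighbors = {"Y": (4.0, 1.0), "N1": (1.0, 3.0), "N2": (-2.0, 0.0)}
assert greedy_next_hop(X, neighbors, D) == "Y"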

RELATED WORK ON ENERGY-EFFICIENT ROUTING

The current work on energy-efficient routing mostly assumes that all the nodes in the network are always available to route all packets. In reality, since nodes consume power even in idle mode, significant overall energy savings can be achieved by turning off an appropriate subset of the nodes without losing connectivity or network capacity. There has been much work on topology control algorithms [7, 8] based on the notion of connected dominating sets, which reduce energy consumption precisely by periodically putting some nodes into sleep mode.

Geographic and Energy Aware Routing (GEAR) exploits geographic information while propagating queries only to appropriate regions [9]. It can be classified as a data-centric algorithm with knowledge of geographic information. The process of forwarding a packet to all the nodes in a target region consists of two steps: the first forwards the packets towards the target region, and the second disseminates the packet within the region. However, the GEAR protocol has a limitation: it is not scalable, and all nodes are active even though only a part of the network is queried.

Geographic adaptive fidelity (GAF) is an energy-aware location-based routing algorithm [8]. The network area is divided into fixed zones to form a virtual grid, as shown in Fig. 3. GAF uses equal-area square zones, whose size depends on the required transmitting power and the communication direction. GAF exploits the equivalence of all nodes inside the same zone by keeping at least one node per zone awake for a certain period of time and turning all the others in that zone into the sleep state during that time. With high node mobility there is high packet loss, as nodes may leave the grid without replacing an active node; this is the disadvantage of GAF.

Figure 3 Virtual grid formations in a GAF

Adaptive Self-Configuring sEnsor Network Topologies (ASCENT) adaptively elects "active" nodes from all nodes in the network [8]. Active nodes stay awake all the time and perform multi-hop packet routing, while the rest of the nodes remain passive and periodically check whether they should become active. To do this, ASCENT has four state transitions: test, active, passive and sleep. ASCENT depends on the routing protocol to quickly re-route traffic. This may cause some packet loss, and therefore an improvement that has not been implemented is to inform the routing protocol of ASCENT's state changes so that traffic could be re-routed in advance. ASCENT does not work for low node densities and behaves differently under different routing protocols, which are the limitations of this work.

PROPOSED ALGORITHM AND ITS IMPLEMENTATION

The GPSR routing protocol considers only the shortest distance to the destination during path selection. However, in a WSN energy is a scarce resource, so we also consider the remaining energy of nodes and the energy for transmission and reception, and we make nodes that are not participating in communication go into sleep mode. As nodes in sleep mode use the least amount of energy, energy wastage is reduced.

Assumptions

This section presents the basic design of the proposed protocol, which works with the following network settings:

• A vast field is covered by a large number of homogeneous sensor nodes which communicate with each other through short-range radios. Long-range data delivery is accomplished by forwarding data across multiple hops.
• Each sensor node is assumed to be aware of its own geographic location. The network uses a beaconing localization method to estimate the locations of the individual nodes.
• The wireless communication channels are bidirectional. Each sensor node has constrained battery energy.
• After having been deployed, sensor and sink nodes remain stationary at their initial locations.
• The target (source) node moves randomly.

Proposed solution

The GPSR routing protocol uses greedy forwarding to route data to neighboring nodes, which considers neither the remaining energy of nodes nor the transmission energy, so whether a packet reaches the destination is in question. The proposed solution consists of two steps. The first step makes nodes that are not participating in either sending or receiving go into sleep mode. The second considers the remaining energy of nodes, in addition to the shortest path, during path selection. In a WSN a node then has three states, active, sleep and idle, as shown in Figure 4. An active node consumes the most energy, an idle node consumes less, and a sleeping node consumes the least. Hence a good power-saving algorithm should keep the number of active nodes as small as possible [7].

Figure 4 Proposed scheme (source, sink and sensor nodes in active and sleeping states)

Step I: If nodes are farther away from the sink node than the source node is, they enter sleep mode to save energy and stay there until the next communication. All other nodes are in active mode and participate in sending and receiving packets. During a communication cycle we set a timer; at the end of the communication cycle, the timer is reset and all the nodes in a grid are set to active mode.

Step II: Minimum weight function calculation. The minimum weight function is the key by which a source node makes the routing decision towards a destination. In this section we formally define the quantities entering the minimum weight function and use this weight to evaluate the proposed protocol. The minimum weight function contains two factors: the distance from a node to the destination, and the remaining energy level of neighbor nodes. The minimum weight function Wi of neighbor node x(i) is defined in terms of the following quantities:

• Wi is the minimum weight value among the N neighbors of a source node;
• x(i) is the position of the i-th neighbor node of a source node;
• d(x(i), y) is the Euclidean distance between the i-th neighbor node and the destination y;
• Erfi is the remaining energy factor, where
• Eoi is the initial energy of node i and
• ECi is the consumed energy of node i [10].
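The exact published formulas for Wi and Erfi are those of [10]; the following Python sketch uses a plausible reconstruction from the definitions above, Erfi = (Eoi − ECi) / Eoi and Wi = d(x(i), y) / Erfi, and both expressions should be read as assumptions of this sketch rather than the paper's formulas.

# Hedged sketch of the minimum weight forwarding decision.
import math

def erf(Eo, EC):
    return (Eo - EC) / Eo            # assumed remaining-energy factor

def weight(pos, dest, Eo, EC):
    d = math.hypot(pos[0] - dest[0], pos[1] - dest[1])
    return d / erf(Eo, EC)           # assumed combination of the two factors

def min_weight_next_hop(neighbors, dest):
    """neighbors maps name -> (position, Eo, EC); pick the minimum-weight one."""
    return min(neighbors,
               key=lambda nb: weight(neighbors[nb][0], dest,
                                     neighbors[nb][1], neighbors[nb][2]))

# Hypothetical example: the nearer neighbor loses because its battery is
# nearly drained, steering traffic away from energy-poor nodes.
dest = (10.0, 0.0)
nbrs = {"near_low_energy": ((8.0, 0.0), 1.0, 0.9),   # W = 2 / 0.1 = 20
        "far_full_energy": ((5.0, 0.0), 1.0, 0.1)}   # W = 5 / 0.9 ~ 5.6
assert min_weight_next_hop(nbrs, dest) == "far_full_energy"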


The flow chart in Fig. 5 represents the two-step solution. It first sets a timer and makes nodes either sleep or active by calculating the distances d(i, s) and d(i, t): if d(i, s) > d(i, t), node i goes into sleep mode; otherwise it stays active. Here,

• d(i, s) is the Euclidean distance of each node i to the sink node s, and
• d(i, t) is the Euclidean distance of each node i to the target (source) node t.

The original GPSR routing protocol uses greedy forwarding when there is a suitable neighbor node, and perimeter forwarding when the source node has no neighbor closer to the destination than itself. Our proposed algorithm instead uses minimum weight forwarding (shortest distance plus residual energy) in place of greedy forwarding, together with perimeter forwarding (right-hand rule) to escape the no-neighbor problem, until the packet reaches the destination. If the forwarding node is the destination, the wakeup timer T is reset, all nodes become active, and the process repeats.

Figure 5 Flow chart of the proposed two-step algorithm

SIMULATION SETUP AND PERFORMANCE METRICS

The proposed algorithm is implemented in the J-Sim simulator, which has the following features:

• It is implemented in Java, which makes J-Sim a truly platform-independent and reusable environment.
• It is a dual-language simulation environment like ns-2, in which classes are written in Java and scripts in Tcl/Java (Jacl).
• Only the public classes/methods/fields in Java can be accessed in the Tcl environment, instead of explicitly exporting all classes/methods/fields as in other simulators, e.g. ns-2.
• J-Sim exhibits good scalability: the memory allocation to carry out a simulation of length 1000 is at least two orders of magnitude lower than that in ns-2 [11].

The simulation is done on different performance metrics to compare the performance of the proposed algorithm against the original GPSR routing protocol. The implementation makes the following assumptions:

• The sensor nodes are deployed in a random manner.
• Node density, target (source) speed (which may represent a moving tank), and the percentage of node failures are varied during simulation.

Simulation setup

To explore the results, we conduct a detailed simulation using the J-Sim simulator. In our simulation, up to 450 sensors are scattered over a 350 × 350 m^2 sensor field. Other simulation parameters are listed in Table 1, most of which are taken from white papers of commercial product vendors.

Table 1: Simulation parameters

Variable                 Value
Communication range      15 m
Simulation time          200 s
Simulation area          350 × 350 m^2
Target node speed        10 m/s and 15 m/s
Number of nodes          450
Node receiving power     14.88 mW
Node transmit power      12.50 mW
Node idle-mode power     12.36 mW
Node sleep-mode power    0.016 mW

Performance metrics

Although different researchers propose different performance metrics to evaluate the performance of routing protocols, we use the following metrics for evaluating the efficiency of the proposed routing protocol.

• Average Energy Consumption: the average energy consumption is calculated across the entire topology. It measures the average difference between the initial level of energy and the final level of energy that is left in each node. Let Ei and Ef be the initial and final energy levels of a node, respectively, and N the total number of nodes in the network. Then the average energy consumption is (1/N) Σ (Ei − Ef), summed over all N nodes.

• Average Data Delivery Ratio: the ratio between the number of data packets sent by the source and the number of data packets received by the sink.

• Average Delay: the average time difference between the moment a data packet is received by the sink node and the moment it was transmitted by the source node, taken over the N received packets. This metric reflects the freshness of the data packets.

• Network Lifetime (NL): one of the most important metrics for evaluating the energy efficiency of routing protocols with respect to network partition. In WSNs, especially those with densely distributed nodes, the death of the first node seldom leads to the total failure of the network; as the number of dead nodes increases, the network partitions. Network lifetime can be defined in the following ways:
  • the time taken for K% of the nodes in a network to die;
  • the time for all nodes in the network to die;
  • the lifetime of the network under a given flow, i.e. the time until the first battery drains out [12].


We adopt the third definition for the analysis of this work. Here, a node with less than 20% of its full battery capacity is considered a dead node, based on the definition in [2].

SIMULATION RESULTS AND DISCUSSIONS

We deploy the nodes in a region of size 350 × 350 m^2. Sensor nodes are deployed randomly; the sink node is fixed at the lower right corner of the grid, and the target (source) node is deployed at the center of the grid and moves with a speed of 10 m/s. There are one sink node, one target node and 450 sensor nodes in our simulation environment. The target node generates stimuli every 1.5 seconds, and the sensing radius is 15 m. The number of nodes in the region is controlled by increasing it from 50 to 450 in steps of 100. The simulation time is 200 seconds, and the results are affected by the number of nodes used in the simulation, the simulation time and node failures. We consider two scenario designs. All the experiments are conducted on a dual-processor Intel 2.66 GHz machine running Windows XP Professional with 2 GB RAM. Each data point reported below is an average of 20 simulation runs [13].

Scenario 1: this scenario follows the parameters shown in Table 2.

Table 2: Scenario 1 parameters

Variable               Value
Target speed           10 m/s
Number of nodes        450
Sink location          (350, 0)
Target location        (150, 150), moving
Sensor locations       Randomized, static
Random node failure    No failure

Scenario 1 results:

Figure 6 Average energy consumption (no node failure, speed 10 m/s)
Figure 7 Average packet delivery ratio (no node failure, speed 10 m/s)
Figure 8 Average delay (no node failure, speed 10 m/s)
Figure 9 Network lifetime (no node failure, speed 10 m/s)

Scenario 1, discussion of results: the Scenario 1 results (Figs. 6, 7 and 9) show that the proposed solution performs better in energy consumption and packet delivery ratio than the original GPSR protocol, and hence the network lifetime is improved significantly. As our aim is to increase the lifetime of the network, the goal is achieved by considering residual energy in the proposed solution, which reduces individual node failure and network partition. Moreover, putting nodes which are not participating in transmission or reception into sleep mode reduces overall node failures. Hence the number of node failures and the energy wastage decrease, i.e. the lifetime of the network increases. Figure 8, on the other hand, shows that the average delay of the proposed solution is larger compared with the original GPSR protocol, because the proposed solution checks not only the shortest distance but also the residual energy, and performs the distance calculations needed to put nodes into sleep or active mode. The proposed algorithm uses more parameters to select a route than the original GPSR protocol; the delay is due to this computational complexity.

Scenario 2: this scenario follows the parameters shown in Table 3.

Table 3: Scenario 2 parameters

Variable               Value
Target speed           15 m/s
Number of nodes        450
Sink location          (350, 0)
Target location        (150, 150), moving
Sensor locations       Randomized, static
Random node failure    15% failure

Scenario 2 results:

Figure 10 Average energy consumption (15% node failure, speed 15 m/s)

Figure 11 Average packet delivery ratio (15% node failure, speed 15 m/s)

Figure 12 Network lifetime (15% node failure, speed 15 m/s)

Figure 13 Average delay (15% node failure, speed 15 m/s)

Scenario-2: Discussion of Results

The same holds for this scenario: Figs. 10, 11 and 12 show that the proposed solution performs better in energy consumption and packet delivery ratio than the original GPSR protocol, and hence there is an improvement in network lifetime. As Fig. 13 shows, the average delay is low as well. Compared with scenario 1, the average energy consumption, average packet delivery ratio and network lifetime in scenario 2 are comparatively low. This is because, in scenario 2,


the target speed is higher, which incurs routing overhead. Further, due to node failures, fewer nodes are available for routing, i.e. there is more energy consumption.

CONCLUSIONS

In this paper, we have studied the GPSR routing protocol, a geographical routing protocol that uses greedy forwarding whenever possible and perimeter forwarding otherwise; it considers only distance during packet routing. In order to increase the lifetime of a network, we added energy information and put nodes that are not participating in sending or receiving packets into sleep mode. To show the performance gained, we compared the proposed solution with the original GPSR routing protocol using the J-Sim simulation software. The simulation output indicates performance gains in average energy consumption, average packet delivery ratio and network lifetime ranging from 45.9% to 78.69%. However, the proposed solution increases the average delay due to its higher computational complexity.

REFERENCES

[1] R. Shorey and A. Ananda, Mobile, Wireless, and Sensor Networks, John Wiley & Sons, Inc., 2006.

[2] T. Rappaport, Wireless Communications: Principles and Practice, Upper Saddle River, NJ: Prentice Hall, 1996.

[3] Jonathan Bachrach and Christopher Taylor, Localization in Sensor Networks, Massachusetts Institute of Technology, 2005.

[4] Kemal Akkaya and Mohamed Younis, A Survey on Routing Protocols for Wireless Sensor Networks, Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, 2005.

[5] Holger Karl and Andreas Willig, Protocols and Architectures for Wireless Sensor Networks, John Wiley & Sons, Ltd, 2005.

[6] Anna Hac, Wireless Sensor Network Designs, John Wiley & Sons Ltd, 2003.

[7] Zhimin He, SPAN: An Energy-Efficient Coordination Algorithm for Topology Maintenance in Ad Hoc Wireless Networks, Oct. 2003.

[8] Ya Xu, John H. and Deborah E., Adaptive Topology Control for Ad-hoc Sensor Networks, July 2001.

[9] Yu Y., Estrin D. and Govindan R., GEAR: A Recursive Data Dissemination Protocol for Wireless Sensor Networks, UCLA Computer Science Department Technical Report, 2001.

[10] Kewei Sha and Weisong Shi, Modeling the Lifetime of Wireless Sensor Networks, April 2005.

[11] Comparing ns-2 with J-Sim, http://www.jsim.org/comparision.html (accessed April 23, 2009).

[12] Q. Li, J. Aslam and D. Rus, Online Power-aware Routing in Wireless Ad-Hoc Networks, Proceedings of MOBICOM, July 2001.

[13] Ahmed Sobeih, Wei-Peng Chen, Jennifer C. Hou, Lu-Chuan Kung, Ning Li, Hyuk Lim, Hung-Ying Tyan and Honghai Zhang, J-Sim: A Simulation and Emulation Environment for Wireless Sensor Networks, http://www.jsim.org/ (accessed April 2, 2009).

[14] A. Jacobson, Metrics in Wireless Networks, ACM MOBIHOC, Lausanne, Switzerland, June 2002, pp. 194-205.


Journal of EEA, Vol. 26, 2009

THREATS AND TRUSTED COUNTERMEASURES USING A SECURITY PROTOCOL IN THE AGENT SPACE

Sayed Nouh

Egyptian Consultant, Egyptian Embassy

and

Tinbit Admassu Department of Electrical and Computer Engineering

ABSTRACT

Mobile Agent computing is a paradigm of distributed computing that has generated considerable excitement in the research community. Despite that, it has not been translated into a significant number of real-world applications, due to the new dimension of security problems it brings along with it. In this paper, an introduction to Mobile Agent technology is given, and the threat of a hostile host towards a visiting agent, the malicious host problem, is given due diligence. The threats are identified and a modified mobile computing model is proposed to prevent some of them. A prototype that realizes the concept is implemented using IBM’s Mobile Agent platform, Aglets. Keywords: Agents, Malicious host problem, Mobile Agents, Trusted nodes.

INTRODUCTION

A Mobile Agent can be thought of as a program which can autonomously migrate between various nodes of a network and perform computation on behalf of a user [1]. It has a unique ability to transport itself from one system in a network to another. The ability to travel allows a Mobile Agent to move to a system that contains the object with which the agent wants to interact, and then take advantage of being in the same host or network as the object. Mobile Agents are a promising paradigm for the design and implementation of distributed applications [1]. Mobile Agent technology is not based on Mobile Agents alone; there is another complementary component called the Mobile Agent platform, which provides an appropriate execution environment and services to the Mobile Agents. The agent together with the Mobile Agent platform is called a Mobile Agent System (MAS). Mobile Agent Systems can be roughly divided, based on the programming language in which they are developed, into Java-based and non-Java-based systems (the latter using languages like C/C++ and scripting languages like Tcl/Tk). Around 80% of the Mobile Agent systems available today are built using Java, due to its inherent support for Mobile Agent programming. The rest of the paper is organized as follows. In section 2, the threats of hostile hosts towards a visiting agent are identified and some of the available countermeasures to those threats are presented. In section 3, the proposed system design is explained. Section 4 presents the capability and performance of the proposition using a developed prototype, and the last section, section 5, draws a conclusion.

SECURITY IN THE AGENT SPACE

The security issues of MAS are multidimensional: an agent could attack a platform, an agent could attack another agent, and a platform could launch an attack against its visiting agents. The first two attacks have their counterparts in the traditional client-server environment. But the last kind of attack, in which a platform launches an attack against its visiting agent, is the most difficult of all to solve; some researchers even claim that it is impossible to solve. In this paper we look into this attack, the Malicious Host Problem.

Hostile host threats

These threats represent a class of threat in which the host compromises the agent. The hostile actions include masquerading, denial of service, eavesdropping and alteration of the carried result. These attacks are the most difficult to detect and prevent, since the host has full control of the agent’s code and data. Masquerading: An agent platform can masquerade as another agent platform in an attempt to deceive the Mobile Agent as to its true


destination. The problem has more to do with the capability of a visiting agent to correctly identify and authenticate its executing host while it is actually on it. As an example, a Mobile Agent entrusted with the task of finding the “lowest price” of a commodity by visiting various virtual shops can be tricked by a malicious masquerading platform into believing that all other shops have quoted a higher price. Thus, the masquerading platform can harm both the visiting agent and other agent platforms [3, 4]. Denial of Service: When an agent arrives at an agent platform, it expects the platform to execute its requests faithfully and provide a fair allocation of resources. A malicious platform, however, may ignore the agent’s service requests, introduce unacceptable delays for critical tasks, or even terminate the agent without notification. Agents, if any, which are waiting for a result from a non-responsive agent on a malicious platform must be careful to avoid becoming deadlocked. An agent can also become livelocked if a malicious platform or a programming error creates a situation in which some critical stage of the agent’s task is unable to finish because more work is continuously created for it to do. Agent livelock differs from agent deadlock in that the livelocked agent is not blocked or waiting for anything, but is continuously given tasks to perform and can never catch up or achieve its goal [3]. Eavesdropping: The classical eavesdropping threat involves the interception and monitoring of secret communications. The threat of eavesdropping, however, is further exacerbated in Mobile Agent systems, because the agent platform can monitor not only communications but also every instruction executed by the agent, all the data it brings to the platform, and all the subsequent data generated on the platform [3, 5]. Alteration: Alteration threatens the integrity of the agent as a whole. As discussed earlier, when an agent arrives at a given host, it exposes its code, state and data to the platform; a mechanism that ensures the integrity of the agent needs to be in place [5].

Countermeasures for malicious host threats

Over the years, a number of countermeasures for malicious behavior of hosts towards a visiting agent have been proposed; some of them are practical, while others have only theoretical significance. Generally, the countermeasures provide either a detection or a prevention mechanism to the visiting agent. Trusted Hardware: This countermeasure tries to enforce the notion of trust between an agent and a host by physically adding secure, tamper-detecting and tamper-responding hardware to conventional computing systems. The hardware encapsulates the entire environment in which the agent executes, creating a safe haven within hosts in the agent space. It protects the visiting agent from any possible attack that could be launched by the entertaining host [6]. Trusted Execution Environment: This method is a variation of the above method. It eliminates the deployment of specialized hardware; instead, a set of trusted nodes needs to be set up in the agent space prior to any agent-to-host interactions [1]. Computing with Encrypted Functions: This method prohibits the executing host from learning anything substantial about the agent. It has been suggested by Sander and Tschudin and tries to ensure the computational privacy of the agent on the untrusted host. Accordingly, functions are encrypted such that their transformation can again be implemented as a program. The resulting program will consist of instructions that a processor understands, but the processor will not understand the program’s function [4]. Code Obfuscation: Code obfuscation is suggested by Hohl [2]. According to this proposition, an obfuscating algorithm is used to mess up the code while still creating a semantically equivalent version of the program. The idea is to make the program behave like a black box [6].

PROPOSED SYSTEM DESIGN

The proposed countermeasure is based on trust. Generally, the trust that a Mobile Agent has in a particular host can be blindfolded, based on policy enforcement, or based on control and punishment. A blindfolded kind of trust is one in which the Mobile Agent simply has to trust its entertaining host. In this scenario, the host can do whatever it wants while giving services to the Mobile Agent, but it is still trusted that it neither behaves maliciously nor collaborates with other hostile hosts to perform some evil action on the agent.


The second kind of trust is based on policy enforcement. In this case, the Mobile Agent and the host have a prior contractual relationship in the form of a policy. This kind of trust should work fine as long as the signing parties conform to their rights and obligations. The last kind of trust is based on control and punishment. Here no prior policy needs to be signed between the two parties. Although there is no signed contract, it is not a blindfolded trust as in the first case: the trust assumes that hosts are not malicious by nature and gives them a chance to behave accordingly, but it still uses a control mechanism to punish a host found guilty of misbehavior. The proposed countermeasure uses a combination of the latter two kinds of trust, policy enforcement and control and punishment, depending on which nodes in the computer network the Mobile Agent is interacting with. Before we move on to describe the proposed countermeasure, the next section outlines the guidelines used to develop it.

Design guidelines

The following points are used as guidelines in developing the proposition:

• Convenience to the owner of the agent.
• Abstraction of the modification.
• No pre-negotiation with hosts.
• Ease of access to the information gathered.

The proposed countermeasure

A closer look at and evaluation of the various kinds of attacks launched by hostile hosts reveals that Mobile Agents are subjected to such attacks because they are lonely figures once sent to the agent space. Hence the proposition modifies the computing model of the mobile computation in order to address hostile host threats. Figure 1 shows an overview of the mobile computing model in which the proposed countermeasure is taken into account, with a number of additional elements. Much like the case of trusted third parties, a node is set up in the agent space to provide services to the Mobile Agent. In this setup it is mandatory that the home (owner) of the Mobile Agent has a public-private key pair at its disposal; the public key is published to the world, so that the Mobile Agent can retrieve it while visiting hosts. These keys are used by the security protocol to protect the confidentiality and the integrity of parts of the Mobile Agent.

Figure 1 Proposed mobile computing model (Home, Hosts #1 to #n, CA and Trusted Server, with the mobile agent migrating among them)


As can be seen from Fig. 1, the proposal modifies the way the mobile computation is done. More specifically, the arrows dictate that the Mobile Agent first goes to the trusted node and creates a temporary storage element called the active storage element (ASE), then moves to the first host to be visited. There it sends the information it has retrieved from that host to be stored temporarily at the ASE; the trusted node accepts the information and stores it. Each Mobile Agent that has a trust relationship with this node does the same: it creates its own ASE at the trusted node and uses it to store the partial information it retrieves from each host. At the end of its mission, the Mobile Agent returns to the trusted node, asks the corresponding ASE to hand over the results it has been accumulating, and carries the result back to its home as if it had been doing the job alone. Unlike the original model of a unified agent space, it is assumed that the agent space is divided into regions; within each region, a node called a trusted server is set up. These servers provide various services to the Mobile Agent while the agent is in the agent space. The Mobile Agent, supported by these third-party trusted nodes as well as the security protocol discussed later on, should be able to avert some of the evil acts of hostile hosts. The division of the agent space into regions is analogous to the cells in mobile communication systems: in mobile communication systems there is a transmitter and receiver at the center of each cell; likewise, in the proposed countermeasure there is a trusted server at the center of each region. The concept of introducing a trusted server set up in a network and handled by a third party is not new at all. If we look at the servers deployed in an internetwork, they all, at least at some point in their operation, play an analogous role; web servers, mail servers and root Domain Name Service (DNS) servers are some to mention. Let us take web servers as an example to highlight the similarity as well as the feasibility of the proposition. These days we can develop our own web page and upload it to a web server for free. Let us take this argument a step further, as it might seem less convincing in the case of free web hosting services. Take the Ethiopian Telecommunication Corporation (ETC): it provides a web hosting service, with the fee to be paid depending on the size of the file we want to upload as well as the other features our page requests from their web server. In either case, all that is needed from our side is to pay the price.

Indeed, the two parties, the ETC and the one who wants the hosting service, have to agree on terms of use. Once the ETC has agreed to host our page, it provides all the necessary computational resources when our page is viewed from around the world. The ETC has the obligation not to modify the content of our page without our permission; hence the ETC should display our page as it is. This has a strong implication for the feasibility of a trusted server set up in the agent space by a third party, which could provide a processing service to the Mobile Agent without altering its data or code. Much like the terms of use signed between the above two parties, terms of use could also be signed, in the form of policy enforcement, between the user of the Mobile Agent (home node) and the trusted server in the agent space. Hence trusted servers will not modify the Mobile Agent’s content, just as web servers do not modify the web pages they host; in addition, the security protocol provides further protection to the Mobile Agent’s content at the trusted server. It is this similar concept that the proposition wants to exploit. The nodes and the trusted servers could be set up by the Mobile Agent user community, in a style similar to that of the root servers mentioned above; more specifically, by the huge set of nodes that are set up to be visited by the Mobile Agent. In the sections to follow, we take a look at the main components of the proposal and how the components interact according to the security protocol.

Components of the proposed countermeasure

Figure 1, depicting the overall view of the proposal, shows that the countermeasure constitutes various components at various degrees of multiplicity. Each of these components is listed and defined as follows:

(i) Home of the Mobile Agent,
(ii) Mobile Agent (MA),
(iii) Trusted Node (TS),
(iv) Active Storage Element (ASE) and
(v) Host.

Home of the Mobile Agent (Home):

It is the computer running a Mobile Agent platform that has sent the Mobile Agent to carry out a task on its behalf. It can also be defined as a computer


running a Mobile Agent based distributed application. The application, as part of its mission, packs a task into the Mobile Agent and sends it to the agent space. The Mobile Agent, after completing its task, will eventually return to the home carrying the result.

Mobile Agent:

As defined throughout this paper, it is a program that migrates from one node to another in a computer network to accomplish a task given to it by its owner.

Trusted Node:

It is similar in composition to the home of the Mobile Agent, but differs in the function it provides. It is there to provide support and services to the Mobile Agent while it is in the agent space.

Active Storage Element:

It is a temporary storage element created at the trusted server by each Mobile Agent that is sent to visit nodes in the agent space. It actively participates in the temporary storage of information and in handing all the information over to the Mobile Agent.

Host:

It is a computer in the agent space running a Mobile Agent platform; it entertains any visiting Mobile Agent that would like to gather information from it. This component, which could be hostile, is at the center of the controversy. The host provides all the necessary resources for the agent to execute there.

Security protocol

A security protocol is developed that defines how the basic components of the system (Home, Trusted Nodes and Active Storage Element) should interact with each other, as well as the tasks to be performed at each level, so that the whole system can stand against the possible hostile host threats. The security protocol alters what the Mobile Agent constitutes depending on where it is. While the Mobile Agent is transiting between its home and trusted nodes, the usual composition is assumed. But when the agent is in the agent space visiting different nodes, it is assumed to be composed of only two of the three components usually associated with it, code and state, to give hostile hosts no chance of disclosing information collected from previous hosts. The security protocol also provides a mechanism that lets the user of the Mobile Agent digitally sign the list of destinations it wants the Mobile Agent to visit. After forming a destination object, which contains the list of all hosts the Mobile Agent is going to visit, the user digitally signs the destination object using its private key; the destination object is then passed down to the Mobile Agent. The Mobile Agent, upon its arrival at each and every host in the agent space, verifies that it has a valid copy of the destination object before putting that object into use. So the Mobile Agent avoids the possibility of being directed to visit other hosts through alteration of the list of paths it has carried from its home, as no malicious host can counterfeit the digitally signed destination object. Let us see, step by step, the security protocol in action and its effects on the components of the Mobile Agent system. It is assumed that the home node has a public-private key pair (HPubK-HPrvK); the public key can be retrieved by the hosts from the relevant authorities.

At Home, as shown in Fig. 2:

• The user of the MA specifies the addresses of the list of hosts it wants visited, using the Agent Based Application (ABA),
• The ABA accepts the list and forms a destination object, which includes the list of hosts to be visited, the address of the trusted server (TS) and the home,
• The ABA digitally signs the destination object and passes it to the MA, which is programmed to perform the required task, and
• The MA accepts the signed object. Using HPubK, the MA verifies that it has the right destination object, from which the address of the next node to be visited is determined, and dispatches itself to that node; as pointed out in the previous section, it goes first to the Trusted Node (a sketch of the signing and verification steps follows this list).
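A minimal sketch of the signing and verification steps, assuming the standard java.security API (the prototype is built on the Java-based Aglets platform, but the paper does not show code; the class name DestinationSigner and the byte-level encoding of the destination object are hypothetical):

    import java.security.*;

    // Sketch only: the ABA signs the encoded destination object with HPrvK
    // at home; the MA verifies its copy against HPubK at every host.
    public final class DestinationSigner {

        static byte[] sign(byte[] destinationObject, PrivateKey hPrvK)
                throws GeneralSecurityException {
            Signature signer = Signature.getInstance("SHA1withRSA");
            signer.initSign(hPrvK);
            signer.update(destinationObject);
            return signer.sign();
        }

        static boolean verify(byte[] destinationObject, byte[] signature,
                              PublicKey hPubK) throws GeneralSecurityException {
            Signature verifier = Signature.getInstance("SHA1withRSA");
            verifier.initVerify(hPubK);
            verifier.update(destinationObject);
            return verifier.verify(signature); // false on any alteration
        }
    }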


Figure 2 Security protocol at home (the addresses entered by the user form the destination object, which is signed using HPrvK, passed to the MA, and unsigned using HPubK to get the next node to be visited; the MA then goes to the TS)

At the Trusted Server (TS), as shown in Fig. 3:

• The MA arrives at the TS and creates its own Active Storage Element (ASE),
• The MA passes down the necessary information to the ASE so that it can effectively communicate with it,
• The MA retrieves the public key of the home, HPubK,
• Using this key, the MA unsigns the digitally signed destination object and determines the next node to be visited, in this case the first host in the list, and
• The MA dispatches itself to that node.

Figure 3 Security protocol at TS (the MA arrives from home, creates an ASE to store information temporarily, passes important variables to the ASE, unsigns the destination object using HPubK, and goes to the first host)



At the ith host, as shown in Fig. 4:

• The MA arrives at the ith host, Host_i,
• It generates a random symmetric key, SymK_i,
• The MA retrieves the public key of the home, HPubK,
• It asks the host for the information it wants, Info_i,
• It encrypts the information using the symmetric key: SymK_i(Info_i),
• It encrypts the randomly generated symmetric key using the public key: HPubK(SymK_i),
• It sends both pieces of information, HPubK(SymK_i) and SymK_i(Info_i), to its ASE at the TS to be stored temporarily, and
• The MA unsigns its destination object, looks up the address of the next node to be visited and dispatches itself to that node (a sketch of the host-side encryption follows this list).
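These host-side steps amount to standard hybrid encryption. A minimal sketch, assuming the standard javax.crypto API with default cipher modes for brevity (the class and method names are hypothetical, not from the prototype):

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import java.security.PublicKey;

    // Sketch only: produces {HPubK(SymK_i), SymK_i(Info_i)} for the ASE.
    public final class HostSideProtocol {

        static byte[][] encryptAtHost(byte[] infoI, PublicKey hPubK)
                throws Exception {
            // Generate the random symmetric key SymK_i.
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            SecretKey symKi = keyGen.generateKey();

            // SymK_i(Info_i): encrypt the retrieved information.
            Cipher aes = Cipher.getInstance("AES");
            aes.init(Cipher.ENCRYPT_MODE, symKi);
            byte[] encInfo = aes.doFinal(infoI);

            // HPubK(SymK_i): wrap the symmetric key with the home's public key.
            Cipher rsa = Cipher.getInstance("RSA");
            rsa.init(Cipher.WRAP_MODE, hPubK);
            byte[] encKey = rsa.wrap(symKi);

            return new byte[][] { encKey, encInfo };
        }
    }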

If the next node is another host, it does the same task as indicated above. Else, if it is a trusted server, the following set of actions follows.

At the Trusted Server, as shown in Fig. 5:

• The MA arrives at the TS on the last leg of its journey,
• It asks the corresponding ASE to hand over all the information the ASE has been accumulating so far (a pair HPubK(SymK_i) and SymK_i(Info_i) retrieved from each host) and takes this information,
• After unsigning its destination object, it looks up the address of the next node to be visited, which in this case is surely the home node, and
• The MA dispatches itself to its home.

Figure 4 Security protocol at the ith host (the MA arrives from the TS, generates the random symmetric key SymK_i and retrieves HPubK, encrypts the retrieved Info_i using SymK_i, encrypts SymK_i using HPubK, sends both to the ASE, and unsigns the destination object to get the next node to be visited)


Figure 5 Security protocol back at TS (the MA arrives from the last host, asks the ASE to hand over the accumulated pairs HPubK(SymK_i) and SymK_i(Info_i), unsigns the destination object using HPubK, and returns back home)

At Home, as shown in Fig. 6:

• The MA arrives back at home after doing the task assigned to it,
• The MA contains a pair of encrypted information items from each host,
• The MA hands all the information, still in encrypted form, over to the ABA,
• For each pair of encrypted information retrieved from each host, the ABA does the following:
  o First, it uses its private key (HPrvK) to decrypt the encrypted symmetric key, HPubK(SymK_i),
  o Second, using the decrypted symmetric key, SymK_i, it decrypts the information encrypted with that key, SymK_i(Info_i),
• The ABA does the same for each pair of information retrieved from every host the agent visited to collect information, and
• At last, the ABA displays the result, Info_i, to the user.

Signed

MA

The MA hands over the total infoaccumulated so far to Home and

disposes itself then after

The Home retrivesthe HPubK and

decrypts the SymK_i

HPrvK

The MA arrivedfrom TS

Home

MA

The Home using each of theretrieved SymK_i decryptsthe corresponding info_i

SymK_i(Info_i)HPubK(SymK_i)

SymK_i(Info_i)HPubK(SymK_i)

SymK_i(Info_i)HPubK(SymK_i)

SymK_i(Info_i)HPubK(SymK_i)

SymK_i(Info_i)

HPubK(SymK_i) SymK_i(

Info_i)HPubK(SymK_i)

SymK_i(Info_i)

HPubK(SymK_i)

SymK_i(Info_i)

HPubK(SymK_i)

SymK_i(Info_i)SymK_i

SymK_i(Info_i)SymK_i

SymK_i(Info_i)SymK_i

SymK_i(Info_i)SymK_i

Info_i

Info_i

Info_i

Info_i



Security Protocol Summary:

• For N hosts:

  o N host addresses are digitally signed by the home node,
  o one ASE is created at the TS,
  o N random symmetric keys are generated, one at each host,
  o the N pieces of information retrieved are encrypted with the corresponding N symmetric keys,
  o the N symmetric keys are encrypted with the public key (RSA) of the home,
  o the N encrypted pieces of information and the encrypted keys are stored at the ASE,
  o decryption takes place at the home node, and
  o the plain-text result is displayed to the user.
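The decryption loop at home mirrors the host-side sketch given earlier. A minimal sketch under the same assumptions (hypothetical names; standard javax.crypto API):

    import javax.crypto.Cipher;
    import javax.crypto.SecretKey;
    import java.security.PrivateKey;

    // Sketch only: recovers Info_i from the pair {HPubK(SymK_i), SymK_i(Info_i)}.
    public final class HomeSideProtocol {

        static byte[] decryptPair(byte[] encKey, byte[] encInfo, PrivateKey hPrvK)
                throws Exception {
            // First: unwrap SymK_i using the home's private key, HPrvK.
            Cipher rsa = Cipher.getInstance("RSA");
            rsa.init(Cipher.UNWRAP_MODE, hPrvK);
            SecretKey symKi =
                    (SecretKey) rsa.unwrap(encKey, "AES", Cipher.SECRET_KEY);

            // Second: decrypt Info_i using the recovered symmetric key.
            Cipher aes = Cipher.getInstance("AES");
            aes.init(Cipher.DECRYPT_MODE, symKi);
            return aes.doFinal(encInfo);
        }
    }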

RESULTS AND PERFORMANCE COMPARISON

To assess the capability and the cost of the proposed countermeasure, the following techniques and equipment are used: two personal computers with 768 MB and 512 MB of RAM respectively, running Windows Server 2003, and a number of Mobile Agents (Proposed MA, DS MA and Normal MA), as described below.

Proposed Mobile Agent: Proposed MA

A Mobile Agent which is governed by the security protocol and hence performs mobile computation as pointed out in Section 4.

Digitally Signed Mobile Agent: DS MA

A Mobile Agent that supports digital signing of the destination object while still performing computation in the way discussed in Section 3.4.

Normal Mobile Agent: Normal MA

A Mobile Agent that performs mobile computation in the usual way. Each of the above Mobile Agents (Proposed MA, Normal MA and DS MA) is given a similar task to carry out.

Results:

To measure the capability of the proposal against the eavesdropping threat, a test environment is set up using the above-mentioned computers, as shown in Fig. 7. Computer A takes up the position of the trusted server (TS), and computer B runs many host nodes, simulated through various port numbers, as well as the home node. The Ethereal Network Packet Analyzer software is run on computer A. It is open-source, freely available packet analyzer software that can capture and store packets from a live network for further processing. This packet analyzer is made to sniff the packets exchanged between the two computers as the various types of MAs do their job. Figure 8 shows an analysis of the packets captured while the Normal MA and the DS MA are in operation.

Figure 7 Test environment set-up (Computer A: TS at 20.0.0.1:8000; Computer B: Home at 20.0.0.2:4434 and Host#1 to Host#4 at 20.0.0.2:8001-8004)


As can be seen from Fig. 8, in either case it is possible to eavesdrop on what information is retrieved and exchanged at each host: “OS Architecture: x86; OS Version: 5.2”, seen at the bottom right corner of the window. Figure 9 shows an analysis of the captured packets while the Proposed MA is in operation. As can be seen from the figure, unlike the above case, since the information is sent to the TS in encrypted form, it is not possible to look into its content. Hence the security protocol provides the required confidentiality of the information while it is being stored at the ASE. To test the capability of the proposed countermeasure against the alteration threat, a similar test environment is used, except that all of the nodes are simulated on one computer. A hostile node is introduced on a different port number in the same computer, Computer B. This node is planned to behave maliciously towards the Proposed MA. Specifically, it is planned to supply a wrong public key to the MA when the MA arrives there and is in the process of unsigning its digitally signed destination object. But the MA cannot unsign the signed object using the public key just supplied, because the destination object is signed with the private key of the home node, not with a private key corresponding to the public key supplied by the hostile node. Hence any attempt to alter the destination object will be detected by the MA.

Performance Comparison:

To measure the cost of the proposal, a similar test environment as above is used. Each of the Mobile Agents (Proposed MA, Normal MA and DS MA) is given a similar task to carry out. Their performance is compared in terms of their average turnaround time, measured in milliseconds (ms).

This performance parameter is the average time in milliseconds (ms) each Mobile Agent requires to do the job, from when it is dispatched until it returns and hands over the result to the user. Figure 10 indicates just that for the three scenarios (Proposed, Normal and DS). As might be expected, the DS Mobile Agent falls in between the other two. Comparing the execution time of the Normal MA with that of the Proposed MA, the Proposed MA needs approximately 4x more time. This substantial amount of time is the price to pay to achieve the corresponding security.

Figure 8 Captured packet analysis for DS MA and Normal MA

Figure 9 Captured packet analysis for Proposed MA


The generation of the keys, the encryption of partial information, the verification of the destination object at each visited host, and finally the collection of the results back from the TS all add up to a long turnaround time.

Figure 10 Actual performance time comparison between the three scenarios (average turnaround time: Normal MA 39.25 ms, DS MA 132.25 ms, Proposed MA 168.25 ms)

Comparing the performance times of the DS MA and the Proposed MA, the DS MA needs less time. This is due to the fact that the DS MA does not carry out some of the functions the Proposed MA performs, such as key generation and encryption. It spends extra time only in verifying that the destination object is a valid copy on its arrival at each and every host. Figure 11 compares the trend of the execution time for all Mobile Agent cases. As the number of nodes to be visited is steadily increased, the turnaround time increases. This is attributable to the fact that there is more work to be done.

Figure 11 Performance time trend as the number of nodes visited increases (turnaround time in ms versus number of nodes, 1 to 9, for the Normal MA, DS MA and Proposed MA)

CONCLUSION AND FUTURE WORK

In this paper we have tried to address the issue of Mobile Agent security while also providing an introduction to the concept of Mobile Agent programming. More specifically, the paper was an attempt to look into a notoriously difficult problem, as pointed out by many researchers. An attempt has been made to avert some of the malicious host threats by adding a number of mechanisms to the way the Mobile Agent originally performs its computation. The proposed countermeasure introduces the concept of setting up another home in the agent space, a so-called “home away from home”, for partial result storage, together with the separation and digital signing of the Mobile Agent’s destination.

REFERENCES

[1] Rahula Jha, Mobile Agents for E-commerce, M.Tech. Dissertation, IIT Bombay, India, 2002.

[2] Lange, D.B., Mobile Objects and Mobile Agents: The Future of Distributed Computing, Springer-Verlag, Berlin Heidelberg, 1998.

[3] Jansen, W. and Karygiannis, T., Mobile Agent Security, National Institute of Standards and Technology, Gaithersburg, MD 20899.

[4] Vijil, E.C., Security Issues in Mobile Agents, M.Tech. Dissertation, IIT Bombay, India, 2002.

[5] Altalayleh, M. and Brankovic, L., An Overview of Security Issues and Techniques in Mobile Agents, University of Newcastle, Australia.

[6] Bierman, E. and Cloete, E., Classification of Malicious Host Threats in Mobile Agent Computing, Proceedings of SAICSIT 2002, pp. 141-148, University of South Africa.



Journal of EEA, Vol. 26, 2009

COUNTERFEIT CURRENCY IDENTIFICATION SYSTEM - A CASE STUDY ON ETHIOPIAN BIRR NOTE

Zewde Dinku and Kumudha Raimond

Department of Electrical and Computer Engineering Addis Ababa University

ABSTRACT

Counterfeit notes have been in circulation right from the time genuine notes came into existence. A number of techniques, such as first-line inspection methods, second-line inspection methods and the smart money counterfeit detector, are being used in many countries to distinguish genuine notes from fake ones. The other method is manual inspection, which is currently used in Ethiopia at the banks. Though experts make correct decisions based on their years of experience, it is nevertheless important to avoid any human bias. So, there should be a technique that helps the banks and insurance companies to precisely identify counterfeit currencies. This work proposes and implements a Counterfeit Currency Identification System (CCIS) based on the Cauchy–Schwarz inequality algorithm. Keywords: Banknote recognition, Cauchy-Schwarz inequality, Counterfeit currency identification.

INTRODUCTION

Currency counterfeiting is a common problem around the world, because highly sophisticated devices such as photocopiers, digital cameras, digital scanners and printers, which contribute greatly to the work of counterfeiting, are easily available to counterfeiters to reproduce pictures, logos, symbols, etc. Sometimes, it is very difficult to discriminate fake notes from genuine ones by simply looking at the paper notes; it needs additional means of identification using sophisticated currency identification hardware devices or machines. So, all countries are trying to tackle, or at least minimize, the risk of counterfeiting as much as possible by enforcing different measures and techniques [1], [2]. A few of the existing techniques are: first-line inspection methods, second-line inspection methods and the Smart Money Counterfeit Detector Pen (Money Tester Pen). The other method is to simply look at the suspicious bank notes against bright light to check the alignment of symbols, the continuity of lines and the availability of watermarks which are unique to genuine bank notes. This manual method is currently being used in Ethiopia at the banks. However, to avoid human errors, it is mandatory to automate the identification system. So, the main objective of this work is to identify and implement a suitable technique to correctly identify counterfeit currencies. The Ethiopian 100 birr note is taken as a case study in this work.

LITERATURE SURVEY

A number of techniques, such as first-line and second-line inspection methods, are being used in different countries to tackle the counterfeiting problem. First-line inspection methods are used on-the-spot by vendors and retailers to determine the authenticity of the currency being exchanged. First-line inspection methods are varied-density watermarks, ultraviolet fluorescence, intaglio printing, microtext, holograms and kinegrams. A second-line inspection method is one that cannot be verified by the naked eye alone, but requires an extra device to perform the verification process. These methods are more secure than visual methods, but the additional security increases the cost at both the manufacturing and verification ends. Second-line inspection methods are Isocheck/Isogram, fiber-based certificates of authenticity, and color and feature analysis [9], [8]. The patented Smart Money Counterfeit Detector Pen (Money Tester Pen) has revolutionized counterfeit detection. Since the advent of color photocopy machines and printers, there has been a surge in casual counterfeiting. This approach works by simply running the pen over the currency that requires checking. While no method of counterfeit identification is foolproof, the Smart Money Counterfeit Pen is a very good option and will detect a great majority of non-genuine notes. For more protection, it is recommended that this device be used in conjunction with other identification methods. This technique is widely used for US and Canadian notes [8]. Apart from this work, lots of other detection techniques are also patented, such as a barcode scanner [5], an ultraviolet detector [7], detection using infrared beams [6], etc.


Apart from the techniques/methods mentioned above, countries have implemented and are implementing algorithms and enforcing different measures to discourage counterfeiters. Some of the algorithms are: the Cellular Neural Network (CNN) algorithm, the Inductive Learning Rule-3 algorithm, and the Cauchy–Schwarz inequality algorithm. The CNN algorithm is efficient because of its use of the CNN universal machine, which is fast, but it is costly and more complicated, and it is not applicable to counterfeit currency discrimination and identification; it is used only for discriminating genuine banknotes [4]. The Inductive Learning Rule-3 algorithm is used for the discrimination of banknotes, i.e. to classify a given banknote into a predefined class (denomination) of banknotes. This algorithm saves memory space, and it is easy and cheap compared to CNN, but it is complicated and expensive when compared to Cauchy-Schwarz. Moreover, this algorithm is unable to recognize counterfeit banknotes and takes time for the learning process, thereby slowing down the processing speed compared to the CNN and Cauchy–Schwarz inequality algorithms. This algorithm has been implemented to discriminate all denominations of Turkish banknotes: the system was trained on the back and front sides of 5 different Turkish banknotes, tested using 24 “unseen” examples, and correctly classified all of them, for an efficiency of 100% [3]. Since both the CNN and Inductive Learning Rule-3 algorithms are used only for discriminating genuine currency notes, and not for identifying and discriminating counterfeit currencies, an algorithm is needed to address the problem of discriminating and identifying both genuine and counterfeit banknotes. The Cauchy–Schwarz inequality algorithm can be used for this purpose. None of the above-mentioned techniques/algorithms is implemented in Ethiopia for banknote identification at present. So, this work proposes the Cauchy–Schwarz inequality algorithm for the identification purpose.

CAUCHY–SCHWARZ INEQUALITY

The Cauchy–Schwarz inequality theorem is useful for measuring the similarity between two images. The algorithm is explained elaborately in [10].
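For two images A and B flattened to pixel vectors, the inequality and the similarity measure it induces can be stated as follows (a standard formulation; the exact variant used in this work is the one given in [10]):

    % Cauchy-Schwarz inequality and the induced normalized similarity;
    % s(A,B) = 1 exactly when A and B are proportional (identical up to scale).
    \[
      \lvert \langle A, B \rangle \rvert \le \lVert A \rVert \, \lVert B \rVert,
      \qquad
      s(A,B) = \frac{\sum_i A_i B_i}
                    {\sqrt{\sum_i A_i^2}\,\sqrt{\sum_i B_i^2}} .
    \]

Since the pixel values are non-negative after normalization, s(A,B) lies between 0 and 1, which matches the similarity values reported in the tables below.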

ANALYSIS OF CCIS

It is important to analyze the requirements of the CCIS before designing the system. The key requirement of the system is to distinguish between genuine and fake notes based on the features of the genuine note. So, it is important to understand the distinguishing features of the genuine note. Figure 1 shows the scanned image of a genuine Ethiopian 100 birr note. The features pointed out are considered unique features of the genuine 100 birr note by the National Bank of Ethiopia (NBE).

Figure 1 Genuine note: image obtained using a digital scanner

The details of the features are: No. 1 indicates a thick bar consisting of the logo of the NBE, different colors, symbols and others. The circle shown by No. 2 aligns exactly with a similar circle on the overleaf when seen against sunlight. No. 3 points to the printing date in the Ethiopian and Gregorian calendars. No. 4 shows a watermark of the map of Ethiopia, with text denoting the denomination of the birr note. Apart from the features shown in Fig. 1, there are a few more lines and watermarks which are visible only when seen against sunlight. Bankers also use those features to check the alignment of different symbols and the continuity of lines available on the notes to identify counterfeit currencies. Therefore, it is very important to capture those invisible features as well, to automate the identification process. The images are taken by digital camera under UV light to acquire those invisible features. Figure 2 shows the corresponding image; the features pointed out indicate the additional features of the genuine note. No. 1: A watermark of a farmer plowing is seen on the white area when viewed against light. No. 2: The line looks discontinuous, but it is continuous in light; on this continuous line, the phrase “National Bank of Ethiopia” is written in English and Amharic.



Figure 2 Genuine Note: Image taken by digital camera using UV light

The change in color is due to the UV light falling on the camera lens. Further, the features on the back side of the note overlap with the front-side features. Apart from the features mentioned in Fig. 1 and Fig. 2, there are other important features of the genuine note, such as the quality of the paper and the texts written on the note, which can be felt when rubbed with the fingers (embossed features). Only one side of the birr note is enough for the identification purpose, because there are no watermarks or invisible lines on the other side of the note. Also, the bankers verify only the front side to discriminate the notes. Added to that, it is very easy for counterfeiters to duplicate the back side exactly as it is in the original note. Due to the above-stated reasons, this work concentrates only on the front side of the note.

DESIGN OF CCIS

Two approaches are proposed in this work: Approach I (image obtained using a digital scanner) and Approach II (image taken by digital camera under UV light). The proposed CCIS is shown in Fig. 3. The explanations of the block diagram corresponding to both approaches are given below:

Figure 3 Proposed counterfeit currency identification system (Birr Note Image, Approach I/Approach II; Image Pre-processing, Enhancement and Normalization; Counterfeit Currency Identification Technique)

Birr Note Image: Two approaches are suggested for collecting the test images. In the first approach, images are collected by scanning the genuine birr notes; in the second approach, images are obtained using a digital camera and UV light. Image Pre-processing, Enhancement and Normalization: Generally, the pre-processing of an image involves creating a uniform background and converting the image into binary format. Then the images should be filtered to reduce the noise. This noise can be due to dirt on the birr notes, the quality of the digital scanner or digital camera used, and the background against which the images are taken. After filtering, the data must be transformed into a form useful for the selected algorithm. Counterfeit Identification Technique/Algorithm: The image processed by the above steps will be given to a suitable CCI algorithm to identify the notes. The Cauchy–Schwarz inequality algorithm is selected in this work to measure the similarity between the genuine and test images and to decide whether a given birr note is genuine or not. A threshold value will be set after performing the counterfeit test with a large number of samples. If the similarity value is greater than or equal to the threshold value, the birr note will be considered “genuine”; otherwise it will be considered suspicious or fake.

IMPLEMENTATION PHASE

The process of implementing the CCIS involves three basic steps: collection of genuine and fake currencies, processing the images of these currencies, and comparing the images of genuine notes with those of fake ones using the Cauchy–Schwarz inequality algorithm. Data Collection: In this work, genuine and fake birr notes are collected for the testing purpose. The notes are collected by taking into consideration the age of the notes and the dirt on the notes as criteria. A new genuine note is used as a benchmark against which any other sample note is compared. Some of the genuine birr notes are old, and some of them are very old and torn. The fake birr notes are almost new, except for a number of hole punches on the notes (punched by the bank in order to prevent circulation). The number of fake notes collected for this work is very limited, as it is very difficult to get fake notes (even hole-punched ones) from an authorized body. Image Processing: The digital images (JPEG format) of both the genuine and fake 100 birr notes are taken using a digital scanner (HP ScanJet 5100C, 600 dpi) and by digital camera using UV light.



Some of the lines, symbols and watermarks are visible only if the birr notes are seen against light sources; in this case, UV light is used as the light source. The color of the image in this approach has changed due to the fact that the UV light passes through the notes while the images are taken. Also, some of the features from the back side of the notes overlap with the front-side features. However, the color of the old birr note has not changed much. This may be due to the fact that the age of and dirt on the old birr notes block the light from passing through the notes. Image Pre-processing Stage: In part I of this stage, the size (5.813 inches by 2.813 inches) and resolution (90 pixels/inch to 200 pixels/inch) of the JPEG images are adjusted using Adobe Photoshop 7. In part II, the image processing toolbox of MATLAB is used to create a uniform background. Image Enhancement: The images of the birr notes (genuine and fake) should be filtered to reduce the noise. This filtering is done at different levels using MATLAB functions. There are various types of noise removal techniques. A convenient property of filtering is that filtering a three-dimensional image with a two-dimensional filter is equivalent to filtering each plane of the three-dimensional image individually with the same two-dimensional filter; so, it is sufficient to make use of a two-dimensional filter. In this work, a linear two-dimensional filter has been used. Normalizing the pixel values: The pixel values of the images range from 0 to 255. Since the Cauchy–Schwarz inequality algorithm yields a similarity measure between 0 and 1, the pixel values are normalized to take values between 0 and 1. Counterfeit Identification Technique/Algorithm: The similarity measuring technique is implemented using the Cauchy-Schwarz algorithm. In this technique, the similarity values are calculated between the benchmark note (new genuine) and the other genuine and fake (new and old) birr notes. This technique is applied to both approaches. The results of these approaches, with their respective drawbacks (limitations), are given below. A number of tests are carried out. Sample results of approach I and approach II are shown in Tables 1 and 2 respectively. In all these cases, the test images in Tables 1 and 2 are compared against the benchmark images shown in Fig. 1 and Fig. 2 respectively.
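As a minimal sketch of the comparison step, assuming the images have already been pre-processed, normalized to the range 0 to 1 and flattened to arrays (Java is used here for consistency with the other sketches in this volume, although this work uses MATLAB; the names SimilarityTest and isGenuine are hypothetical, and the 0.80 threshold is the one adopted later in this section):

    // Sketch only: Cauchy-Schwarz similarity between a benchmark image and
    // a test image, both flattened to equal-length arrays of values in [0,1].
    public final class SimilarityTest {

        static double similarity(double[] benchmark, double[] test) {
            double dot = 0.0, normB = 0.0, normT = 0.0;
            for (int i = 0; i < benchmark.length; i++) {
                dot   += benchmark[i] * test[i];
                normB += benchmark[i] * benchmark[i];
                normT += test[i] * test[i];
            }
            return dot / (Math.sqrt(normB) * Math.sqrt(normT));
        }

        // Decision rule: genuine if the similarity reaches the threshold.
        static boolean isGenuine(double[] benchmark, double[] test) {
            return similarity(benchmark, test) >= 0.80;
        }
    }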

Table 1: Sample results of approach I

No  Test Image                                                              Similarity Value
1   New genuine note                                                        1.0
2   New genuine note: noise added manually                                  0.9635
3   Old genuine note: aged, dirty and lost some features of genuine note    0.8573
4   Old genuine note: aged, torn with much dirt and folding                 0.8148
5   Old fake note: better paper quality compared to test image No. 6        0.7603
6   Old fake note: torn, lot of folding with poor paper quality             0.7362

Table 2: Sample results of approach II

No  Test Image                                                              Similarity Value
1   New genuine note                                                        1.0
2   New genuine note: noise added manually                                  0.9721
3   Old genuine note: aged, dirty and lost some features of genuine note    0.8904
4   Old genuine note: torn, aged, dirty and lost all features of genuine note 0.8836
5   Old fake note: better paper quality compared to test image No. 6        0.7402
6   Old fake note: torn with a lot of folding                               0.7318

The following Table 3 depicts the average similarity values between the birr notes.

Table 3: Average similarity values between the birr notes

Approach  Between genuine notes  Between genuine and fake notes
I         0.9116                 0.7578
II        0.9293                 0.7423

As can be seen from Table 3, the results of approach II are better than those of approach I. The average similarity value of approach II between genuine birr notes (0.9293) is higher, while that between genuine and fake notes (0.7423) is lower. This interval between the genuine and the fake notes is wider in approach II than in approach I. This is due to the visibility of some features, such as watermarks, hidden lines and symbols, on the images taken by digital camera, while these are invisible on the images obtained using the digital scanner. In spite of capturing the important features in approach II, the similarity interval between the notes is not much wider than that of approach I. This may be due to the change in the color of the image and also due to the overlapping of the overleaf features with the front-side features. Finally, by considering the results obtained and the limitations of both approaches, approach II is considered preferable to approach I for the CCIS.


Based on the values obtained in Table 3, the threshold value is set (for this work) to 0.80, by considering the similarity values obtained for each test in both approaches. When tested, a birr note with a similarity value above the threshold (0.80) is considered a “genuine note”; otherwise, the note is considered a “suspicious or fake birr note”.

CONCLUSIONS

The following conclusions have been drawn:

• A methodology for identifying the fake Ethiopian 100 birr note has been proposed and implemented based on a similarity measure, and the results are found to be promising for distinguishing the notes with a threshold of 0.8.
• From the test cases, it is observed that torn, aged, folded and dirty notes have lower similarity compared to the new notes.
• From the fake note test cases, it is observed that paper quality plays a significant role.

REFERENCES

[1] Herley, C., Vora, P. and Yang, S., Detection and Deterrence of Counterfeiting of Valuable Documents, IEEE International Conference on Image Processing (ICIP), Vol. 4, pp. 2423-2426, 2004.

[2] Horner, J.L. and Javidi, B., Optical Signal Processor for Anti-Counterfeiting and Security Systems, IEEE Conference Proceedings on Lasers and Electro-Optics Society Annual Meeting, Vol. 1, pp. 145-146, 1994.

[3] Sevkli, M., Turkyilmaz, A. and Aksoy, M.S., Banknote Recognition Using Inductive Learning, International Conference on Fuzzy Systems and Soft Computational Intelligence in Management and Industrial Engineering, FSSCIMIE’02, İstanbul Technical Univ., Turkey, pp. 122-128, 2002.

[4] Zarandy, A., Werblin, F., Roska, T. and Chua, L.O., Novel Types of Analogic CNN Algorithms for Recognizing Bank-notes, Third IEEE International Workshop on Cellular Neural Networks and their Applications, pp. 273-278, 1994.

[5] Counterfeit money detecting barcode scanner, Patent US2003098350(A), 2003, http://www.oti.com/oti/patent/20030529-2003098350-US-A1 (viewed 12 Nov 2009).

[6] Detection of counterfeit currency, Patent US5757001(A), 1998, http://www.oti.com/oti/patent/19980526-5757001-US-A (viewed 12 Nov 2009).

[7] UV counterfeit currency detector, Patent US2008063252(A1), http://www.oti.com/oti/patent/20080313-2008063252-US-A1 (viewed 12 Nov 2009).

[8] Method and apparatus for currency discrimination and counting, http://www.freepatentsonline.com/5295196.html.

[9] Declan McAleese, Counterfeit Currency Detection Techniques, http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/AV0506/s0128541.pdf (viewed 12 Nov 2009).

[10] Cauchy–Schwarz inequality, http://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality (viewed 12 Nov 2009).


PRELIMINARY SURVEY ON ELECTRIC ENERGY EFFICIENCY IN ETHIOPIA: AREAS OF INTERVENTION

Mengesha Mamo

Department of Electrical & Computer Engineering
Addis Ababa University

ABSTRACT

In this paper the significance of electric energy efficiency improvement and the major areas of loss in Ethiopia's electric power system are highlighted for further rigorous study. Major electric energy loss areas in the utility's transmission and distribution systems and at consumer premises are indicated. On the consumer side, the losses associated with energy conversion devices such as lamps, electric motors, electric mittads, and welding transformers are estimated. The investigation demonstrates a possible electric energy saving of at least 10% of the present yearly generated electric energy. The present trend in electrical engineering education, namely the inclination of students towards the electronics and computer engineering areas and away from the power engineering area, and the resulting danger of a shortage of qualified engineers for employment in the energy sector, are demonstrated. Concerted efforts by Ethiopian universities and the sector players, towards a sustainable, efficient, reliable, and high-quality electric energy supply for Ethiopia and sustainable training of qualified human power for the sector, are recommended.

Keywords: EEPCO, Efficiency, Electric mittad, Energy, Power factor, Technical loss.

INTRODUCTION

Energy efficiency

In engineering, efficiency is defined as the ratio of the output power from a system or piece of equipment to the input power to it. However, efficiency applied to energy needs explanation. The total energy a country uses to produce a dollar of Gross Domestic Product (GDP), for example, can be used as the energy efficiency of a country [1]. Energy in this definition is not limited to electric energy; it includes all forms of energy obtained from nuclear, fossil fuel, biomass, etc.

In this article electric energy efficiency is defined with respect to the electric utility operator and the consumer. With respect to the utility operator, electric energy efficiency is the ratio of the electric energy sold by the utility operator to the electric energy generated, as shown in Eq. (1).

\eta = \frac{\text{Electric Energy Sold}}{\text{Electric Energy Generated}} \qquad (1)

If a vertically integrated utility operator like the Ethiopian Electric Power Corporation (EEPCo) is considered, the electric energy sold is the total electric energy generated minus the energy lost in the process of generation, transmission, distribution and sales. There are two components of loss: technical and non-technical losses [2]. Technical loss refers mainly to the electric energy lost through the resistive effect (I²R) throughout the electrical system and to core losses in machines and control systems. Non-technical loss, on the other hand, accounts for energy utilized but not paid for, due to measuring errors, theft by some customers, etc. Electric energy efficiency with respect to the electric energy consumer can be defined as the ratio of the output of the consumer (in terms of product quantity or another form of energy) to the input electrical energy, Eq. (2).

\eta = \frac{\text{Output (Product, Mechanical, Heat, etc.)}}{\text{Electric Energy Input to the System}} \qquad (2)

Why energy efficiency

Today, electric energy efficiency improvement and the search for renewable energy sources are among the top agenda items of the world and research themes of scholars. This is because of carbon-dioxide emission and the resulting environmental deterioration, the limited fossil fuels available as energy sources, and the fast increase of energy demand in developing countries. According to a study done in the USA, energy saving by efficiency improvement is more attractive, because new conventional base-load production sources


generate electricity at an estimated cost of $0.073 to $0.135 per kilowatt-hour, while saving energy through various efficiency improvements is estimated to cost about $0.03 per kilowatt-hour [4], i.e. two to four times less. By introducing an Energy Efficiency Resource Standard (EERS), the USA plans to reduce its electric energy consumption by 15% by 2020 [3, 4]. This standard includes efficient transmission and distribution plus efficient utilization of energy. This effort by a highly professional and developed country like the USA indicates that there is a lot to be done in our system, both on the side of the utility operator, EEPCo, and on the side of its customers, to improve electric energy efficiency. Modeling and analysis of EEPCo's and electric energy users' systems and components, identification and quantification of the sources of losses, and identification of and recommendations on means of minimizing losses are what has to be done concertedly by the stakeholders: the utility operator, the universities, the industries, and the public as a whole. In the following sections, EEPCo's present estimated efficiency, sources of loss, potentials for loss reduction at consumer premises, and some efforts of the Electrical and Computer Engineering Department (ECED) in this respect are discussed.

EEPCO ELECTRICAL SYSTEM EFFICIENCY

Energy losses along the way from the generation point up to the utilization point are the main reason for low efficiency in an electrical system.

Technical loss in electric power system

There are two main components of technical loss in transmission and distribution systems: loss in the lines and loss in the transformers. Common reasons for high electric energy loss on the line are a low voltage level compared to the size and length of the conductor, low power factor, load unbalance in the phases, harmonics in the current and voltage, and improper design or installation using inappropriate materials and components. There are at least three transformers in a power system between the generation point and the consumer premises: the step-up transformer at the generator switchyard, the step-down transformer at the distribution center, and the step-down transformer to the low-voltage (400 V, 220 V) level. Transformer efficiency varies from 96% to 99.5% depending on capacity, load, design, etc.; older transformers have higher loss, i.e. lower efficiency. In the EEPCo system, if we assume two of the three transformers in the chain, which is generally the case, to have an efficiency of 99% and the last transformer in the distribution to have 97%, the overall efficiency of the transformers would be about 95%, that is, a 5% energy loss over the three transformers. Since the 1980s, the development of new transformer core materials has resulted in high-efficiency transformers. The transmission and distribution loss, including the loss in transformers and the line I²R loss, is 4% in Japan, about 6% in the USA, 6.5% on average in Europe, 7% in China, and 8% in the UK [5, 6]. There are also high-loss systems, such as those of India at 31%, Pakistan at 26% and Tanzania at 25% [6, 7].

Figure 1 Electrical power system loss
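A minimal sketch of the cascaded-transformer arithmetic used above; the three efficiency values are the assumptions stated in the text.

```python
# Overall efficiency of a chain of transformers is the product of the
# individual efficiencies: two units at 99% and one at 97%, as assumed above.
transformer_efficiencies = [0.99, 0.99, 0.97]

overall = 1.0
for eta in transformer_efficiencies:
    overall *= eta

print(f"overall transformer efficiency: {overall:.1%}")  # ~95%, i.e. ~5% loss
```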


EEPCo's Electrical System Loss

The current electric energy loss in EEPCo's system is about 18% of the electric energy generated [8]; in other words, the efficiency of the system is 82%. The electric energy losses in the transmission, distribution, and secondary circuits of EEPCo's system need modeling, analysis, and some measurements to quantify the level of each source of loss (cables, transformers, low power factor, etc.) and their contribution to the total electric energy loss. However, from the efficiency levels of the USA, Europe and Japan we can see that there is a possibility of improving the efficiency of EEPCo's electrical energy system and saving a significant amount of energy. For example, if we were able to reduce the system loss from 18% to 12% as a first stage, we would recover about 48 MW (≈0.06 × 800 MW) of electric power [9]. This saving, equivalent to 6% of the yearly total energy generated (about 3,570 GWh), amounts to 214.2 GWh.

Efforts by the Electrical and Computer Engineering Department towards Improvement of Energy Efficiency

Some MSc students from EEPCo are working on the modeling and analysis of major transmission lines, such as Koka to Diredawa, and on the evaluation of main distribution systems, such as the Bahir-Dar distribution system. The intention is to use these as models and scale up to the modeling and analysis of other similar transmission and distribution systems. The analysis is expected to yield a quantified performance of the system, based on which appropriate decisions can be made. The ECED of Addis Ababa University is encouraging the electrical power MSc students to work on real problems in EEPCo so that they come up with feasible solutions for improving electric energy efficiency.
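The loss-reduction estimate above can be reproduced in a few lines; the 800 MW and 3,570 GWh figures are those quoted in the text.

```python
yearly_generation_gwh = 3570.0       # EEPCo yearly generation (2001 EC)
installed_power_mw = 800.0           # generating capacity assumed in the text
loss_now, loss_target = 0.18, 0.12   # present loss and first-stage target

saving_fraction = loss_now - loss_target                # 6% of generation
print(f"power recovered: ~{saving_fraction * installed_power_mw:.0f} MW")
print(f"energy saved:    {saving_fraction * yearly_generation_gwh:.1f} GWh/year")
# -> ~48 MW and 214.2 GWh/year
```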

ELECTRIC ENERGY EFFICIENCY AT CONSUMERS' PREMISES

At the point of use, electric energy is basically changed into other forms of energy such as light, kinetic and heat energy, and electronic outputs like audio, video, etc.

Lighting

Electrical energy efficiency with respect to the consumer can easily be demonstrated by considering electric lighting. In Ethiopia, incandescent lamps are used in almost all domestic lighting. However, the incandescent lamp is the most inefficient artificial light source, as shown in Table 1. The efficacy of incandescent lamps at present varies from 10 to about 25 lumens/watt, while the efficacy of the Compact Fluorescent Lamp (CFL) reaches around 75 lumens/watt, three times that of the incandescent lamp. That means a CFL gives three times the light energy of an incandescent lamp for the same electric energy consumed; we could save two-thirds of the electric energy we are using for domestic lighting by replacing the incandescent lamps with CFLs. At present, due to their relatively cheap initial cost and ready availability, incandescent lamps are the first choice for domestic light sources. This is true in European and American countries too; however, the European Union and the Australian government are considering a ban on the import and sale of incandescent lamps in favor of CFLs and other more efficient light sources. In a CFL, the 50 Hz, 220 V power supply is processed to supply the fluorescent tube at a high frequency. Due to the high frequency, the current-limiting inductor, known as the choke, becomes very small, resulting in the compactness of the lamp.

EEPCo has about 1.3 million customers at the moment [6]. If one million of the customers are assumed to have two incandescent lamps of 40 watts each replaced by two 11-watt CFLs, the power consumption reduces from 80 MW to 22 MW for the same light output if they all operate at the same time. If the lamps operate for three hours per day, the yearly energy saving will be 63,510,000 kWh (3 hours/day × 58,000 kW × 365 days).

Table 1: Efficacy of some electric lamps [10]

Lamp type                            Efficacy, lumens/watt
Incandescent lamp                    10 to 25
Compact fluorescent lamp             20 to 75
Fluorescent, full size and U-tube    70 to 100
Compact metal halide                 45 to 80
High pressure sodium lamp            45 to 110
Low pressure sodium                  110 to 155
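A sketch of the CFL-replacement estimate, using the assumptions stated above (one million customers, two 40 W lamps each replaced by 11 W CFLs, three hours of use per day):

```python
customers = 1_000_000
lamps_per_customer = 2
incandescent_w, cfl_w = 40, 11
hours_per_day, days_per_year = 3, 365

demand_saving_kw = customers * lamps_per_customer * (incandescent_w - cfl_w) / 1000
energy_saving_gwh = demand_saving_kw * hours_per_day * days_per_year / 1e6

print(f"demand reduction: {demand_saving_kw / 1000:.0f} MW")  # 80 MW -> 22 MW, i.e. 58 MW
print(f"yearly saving:    {energy_saving_gwh:.2f} GWh")       # 63.51 GWh
```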


In commercial buildings, more than 25% of electric energy is used for lighting. Exterior lighting where the quality of light is not critical, such as street lighting, needs investigation with a view to using the highest-efficiency sodium lamps. Therefore, work on lighting efficiency can save more energy than what is estimated above. EEPCo has recently taken the initiative to distribute CFLs free of charge to its customers to relieve the power shortage pressure on it. This has demonstrated the energy saving possible at the national level and the energy cost reduction for its customers, and can be considered significant awareness creation on the energy efficiency issue. For the sustainability of energy saving through high-efficiency light sources like CFLs, the government may need to take appropriate measures to encourage the import or local manufacturing of these high-efficiency lamps at a price affordable for households, while discouraging the import and manufacture of low-efficiency lamps.

Electric machines

Electric machines are the main electric power loads in industry, consuming up to 70% of the electric energy in the processing and manufacturing industries. In the US, 64% of the total generated electric energy is said to be consumed by electric motors. Electric machines convert electric energy into mechanical energy. The efficiency of induction machines, the most widely used in industry, ranges from 78% to 94% when they operate at rated power; when they are partially loaded, their efficiency drops much below this range. Traditional methods of speed adjustment, like inserting resistors in the rotor circuit and mechanical braking, have excessive loss. The conversion efficiency can be improved by using power processors that supply the machine at the voltage level and frequency required for the best possible conversion efficiency at a desired speed. Speed control of electric machines and the driven units can be achieved by controlling the frequency and voltage of the supply to the machine. Oversizing of electric machines is another important point to be considered. Electric machines operate at their best efficiency when operated at their rating; oversized electric motors operate less efficiently and also work at a low power factor. The same holds for transformers.

Work must be done in industries to audit their energy utilization and determine the quantified energy saving possible.

Electric Mittad

An important domestic electric energy load in Ethiopia is the electric mittad, which on average consumes about 3 kW. It uses a resistor to convert electric energy to heat energy; the heat generated by the resistor is conducted to a plate made of clay. In the process, the heat transferred to the injera for baking is estimated to be very small, while a huge amount of heat energy is lost to the environment [11]. Research should be done on this indigenous equipment to improve its efficiency. Some preliminary investigations demonstrate that the efficiency of injera baking can be more than doubled by using induction heating rather than resistor heating; the department has filed a patent on the induction mittad and is working on its further development. If we estimate the number of electric mittads in the country to be 300,000, each operating for two hours every week, the total electric energy consumption in a year can be calculated as 3 kW × 52 weeks × 2 hours/week × 300,000 mittads = 93.60 GWh. If this electric energy consumption is halved, the saving is 46.80 GWh per year.

Welding transformers

Recently, metal workshops using locally made welding transformers have increased all over the country. A survey done by an MSc student in the Electrical and Computer Engineering Department of Addis Ababa University has estimated the total number of locally produced welding transformers in use to be about 60,000 to 80,000 [9]. The authors have measured the loss of samples of the locally manufactured transformers to be on average 1 kW, compared to 0.23 kW for equivalent imported transformers. It is possible to reduce this loss by at least half through proper design and material selection. If 170 working days per year and about 2 effective working hours per day are assumed, the electric energy saving per year will be 10.2 GWh. These transformers are not well designed, and the core material used is not the proper material for the required efficiency [12]. The Electrical and Computer Engineering Department is working on proper design and material selection to help the local manufacturers improve the efficiency of the transformers.
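The two estimates above follow the same pattern; the sketch below reproduces them under the stated assumptions (300,000 mittads at 3 kW, two hours per week; 60,000 welding transformers with 0.5 kW of avoidable loss, 170 working days of 2 effective hours).

```python
# Electric mittad: yearly consumption and the saving if it is halved.
mittads, mittad_kw, hours_per_week, weeks = 300_000, 3.0, 2, 52
mittad_gwh = mittads * mittad_kw * hours_per_week * weeks / 1e6
print(f"mittad consumption: {mittad_gwh:.2f} GWh; saving if halved: {mittad_gwh / 2:.2f} GWh")

# Welding transformers: halving the ~1 kW measured loss saves ~0.5 kW per unit.
welders, loss_saved_kw, working_days, hours_per_day = 60_000, 0.5, 170, 2
welding_gwh = welders * loss_saved_kw * working_days * hours_per_day / 1e6
print(f"welding transformer saving: {welding_gwh:.1f} GWh")   # 10.2 GWh
```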


Sub-standard electric components

In addition to the other disadvantages of substandard materials, the associated electric loss is critical. Substandard materials, appliances, and equipment are expected to increase the energy loss wherever they are used. Recently, the Electrical and Computer Engineering Department surveyed electrical cables, socket outlets and dividers, finding a significant and alarming amount of these materials in the Ethiopian market and in use [13]. A similar survey has to be done on electric appliances, equipment and electronic devices. The issue of substandard electrical materials and appliances requires the concerted effort of policy makers, the government bodies concerned with standards, professional societies, the business community, and the society as a whole.

Power factor

An alternating current system has two types of power flow in electrical cables, namely active power and reactive power. Active power is the power which is converted to useful power at the point of use, while reactive power is the power in energy storage elements like inductors and capacitors; the reactive power is required for the proper functioning of electrical equipment. Figure 3 demonstrates the problem of reactive power. In Fig. 3(a), the electric motor draws 80 A for active power and 60 A for reactive power; the electric generator should supply the vector sum of the two, that is, 100 A. The electric motor is said to operate at 0.8 or 80% power factor (pf), where pf is the ratio of the active power component of the current to the vector sum of the two. The line from the generator to the motor carries 100 A. In Fig. 3(b), the reactive component of the current is supplied from a capacitor bank, known as a power factor compensator. In this case the electric generator is required to generate a load current of only 80 A, the active power component, and the cable from the generator to the motor also carries only 80 A. The power factor corrector solves four problems: congestion of the transmission and distribution lines by the reactive component current; relieving the generator from generating the reactive power component current (instead, it can generate an additional 20 A of active power current); reducing the power loss in the transmission and distribution system due to the reactive power component current; and reducing the voltage drop along the cable. Table 2 shows some electrical appliances and equipment requiring reactive power for proper operation, with their approximate power factors.

Figure 3 Electric motor power factor

Power factor correction becomes more important in rural electrification, where the distribution lines are relatively long. Some details of capacitor bank selection for power factor correction are given in reference [14]. Electronic devices like computers and TVs also contribute to power factor reduction by introducing harmonics; harmonic distortion results in power factor reduction.

Table 2: Power factor of some electric equipment

Appliance                                     Power factor
Induction motors                              55%-90%
Arc welders                                   50%-70%
Solenoids                                     20%-50%
Induction heating equipment                   60%-90%
Small "dry-pack" transformers                 30%-95%
Fluorescent & HID lighting fixture ballasts   40%-80%
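The Fig. 3 example can be checked numerically; the sketch below uses the 80 A active and 60 A reactive components given above.

```python
import math

active_a, reactive_a = 80.0, 60.0                # Fig. 3(a) current components

line_current = math.hypot(active_a, reactive_a)  # vector sum = 100 A
power_factor = active_a / line_current           # 0.8, i.e. 80%

print(f"uncompensated line current: {line_current:.0f} A, pf = {power_factor:.2f}")
# With the capacitor bank of Fig. 3(b) supplying the 60 A reactive component
# locally, the generator and the line carry only the 80 A active component.
```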

Economics of Electric Energy Efficiency

The approximate energy savings possible from the above exercises are summarized in Table 3: a total of about 334.71 GWh of energy saving is possible per year. EEPCo's yearly energy production in 2001 EC was about 3,570 GWh; therefore, the electric energy saving is about 10% of the yearly production. If the electric energy price is taken to be 0.45 Birr/kWh (the average domestic charge rate), the yearly saving in Birr is more than 150,000,000.


The cost to be paid for such a saving requires rigorous evaluation and may include the electric power down-time cost at the country level during critical power shortages, such as that of year 2001 EC in our country, Ethiopia. A study done in Nigeria on down-time cost has shown that the electric energy outage cost in that country varies between $0.94 and $3.12 per kWh; this is a potential research topic for our country. If efficiency improvement of electric motors in industry and conservation of energy by using electric energy only when and where required are considered, the energy saving can be a multiple of the above result.

Table 3: Estimated possible energy saving

No.  Intervention point                        GWh
1    Efficiency improvement from 18% to 12%    214.20
2    Lighting (CFL)                            63.51
3    Electric mittad                           46.80
4    Welding transformer design improvement    10.20
     Sum                                       334.71
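Summing Table 3 and pricing the energy at the average domestic rate reproduces the figures quoted above:

```python
savings_gwh = {
    "loss reduction 18% -> 12%": 214.20,
    "lighting (CFL)": 63.51,
    "electric mittad": 46.80,
    "welding transformer design": 10.20,
}
total_gwh = sum(savings_gwh.values())            # 334.71 GWh
yearly_production_gwh = 3570.0                   # 2001 EC production
tariff_birr_per_kwh = 0.45                       # average domestic rate

print(f"total saving: {total_gwh:.2f} GWh "
      f"({total_gwh / yearly_production_gwh:.1%} of yearly production)")
print(f"value: {total_gwh * 1e6 * tariff_birr_per_kwh:,.0f} Birr/year")  # >150 million
```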

ELECTRICAL POWER ENGINEERING EDUCATION

Electrical power engineering in AAU

With the advent of microelectronics and computer technologies, the trend in electrical engineering education is towards electronics, computer engineering and communication engineering. There were no applicants for the Electrical Power Engineering MSc program in AAU until some two years ago, and there are only two graduates from the program to date. The largest admission to the electrical power engineering program was in 2000 EC, with 25 students from EEPCo based on the postgraduate study and research agreement between AAU and EEPCo; about 20 of these are expected to graduate in year 2002 EC. The curriculum for the Electrical Power Engineering masters program is a standard MSc program covering high-voltage engineering, transmission and distribution systems, renewable energy technologies, and power electronics and drives. The first batch of electrical power engineering masters students, from EEPCo, are working on their MSc thesis proposals at present.

The following are the main areas the students have worked on in their independent projects and theses:

o Transmission (including high-voltage DC) and distribution system modeling and analysis for increasing capacity and reducing losses.

o Survey on welding transformer efficiency and ways of improving it.

o Electric power quality, power factor and efficiency at the consumers' premises.

o Renewable energy technologies as alternative electric energy sources.

o Inverter technologies for integrating power from renewable energy sources with the grid system.

o Generation system control and automation, etc.

EEPCo has been supporting the Faculty financially so that its students acquire diversified knowledge and skills at the required level. Expatriate academic staff, part-timers, and guest lecturers have been involved in teaching the EEPCo students.

Electrical power engineering in USA and South Africa

The trend of electrical engineering education and the inclination of most students away from electrical power engineering have worried electric utility operators and policy makers. South Africa's ESKOM has been financing universities and research centers since the 1980s through its Tertiary Education Support Program (TESP) to overcome the declining trend, with the following specific benefits [15]:

• Retention of academic staff in the electrical power area.

• Development of the knowledge base of academic staff in the area.

• Development of capacity to do contract research with ESKOM.

• Development of students and a potential pool of employees with an increased level of knowledge and skills.

ESKOM is South Africa's utility company, generating 35 GW, which is about 60% of the power generation of all the sub-Saharan countries and about 80% of that of the southern African countries [16].


In response to critical concerns about the power and energy engineering workforce and the education system that supports it, the U.S. Power and Energy Engineering Workforce Collaborative, led by the IEEE Power & Energy Society, has developed a sweeping and detailed action plan. The plan envisages doubling the number of graduate and undergraduate electrical engineering students in the USA, calling upon industry, government and educational institutions to take specific, reasonable and immediate actions to attract more young people to electric power engineering and to support the education system that will make them highly qualified engineers [17]. Millions of dollars have been allocated for the implementation of the plan.

Concerted Effort Requirement for Today's Competitive World Challenges

In the USA, companies on average allocate 3.5% of their revenue to research and development. Research and development (R&D) is an investment by companies to continuously increase their competitiveness; the competition is in efficiency, quality of product or service, cost, etc. Had it been operating in the USA, EEPCo would have expended more than 35 million Birr each year on research and development. It is generally expensive for industries to install and administer laboratories and employ highly qualified professionals to do research in-house. The cheapest way for industries to do R&D is to work with universities: university postgraduate students guided by their professors can solve most problems, and universities also have basic research facilities as part of their educational laboratories. Industries are required to cooperate with universities and come forward with their problems and some finance. Students who have worked on the problems of a particular industry can also be employed by that industry and continue working on the solution of the problem. The agreement between AAU and EEPCo is a positive sign in this direction and should be strengthened.

CONCLUSION & RECOMMENDATION

CONCLUSION

This paper has demonstrated that there is a lot to be done to improve the electrical energy efficiency of our utility and industry.

Improving energy efficiency can reduce new power plant generation requirements and can contribute to the effort being exerted to curb the energy shortage we are facing. Energy efficiency in the utilization process can also reduce industries' energy costs and improve their cost competitiveness in the global market. Recently, the number of students in the electrical power engineering area has been declining globally compared to the number of computer and communication engineering students. We need to take appropriate action to secure a pool of employees and researchers in electrical power engineering.

RECOMMENDATION

The government, the energy sector players like EEPCo, and the higher education institutions have to work on a policy framework to support education, research and development in the energy sector, for an efficient, reliable, and high-quality energy supply, which is essential for the sustainable economic and technological development of Ethiopia. The government should allocate a certain percentage of gross domestic product (GDP), and encourage industries to allocate a certain percentage of their revenue, to research and development for the sustainability of development and the competitiveness of products and services. EEPCo and AAU have to strengthen their existing cooperation to produce solutions for local problems and become a successful model of university-industry linkage.

REFERENCES

[1] Estache, A. et al., "Are African Electricity Distribution Companies Efficient? Evidence from South African Countries", Discussion Paper Series No. 07/13, Department of Economics, City University, London, May 2007.

[2] Fritz, J. and Dolores, H., "Non-Technical Losses in Electrical Power Systems", MSc Thesis, College of Engineering and Technology, Ohio University, November 2002.

[3] American Council for an Energy-Efficient Economy, "Energy Efficiency Resource Standard", March 17, 2009.


[4] "Reducing the Cost of Addressing Climate Change through Energy Efficiency", Consensus Recommendation for Future Federal Climate Legislation, February 2009.

[5] European Copper Institute, "Energy Efficiency in Transmission and Distribution: The Scope for Energy Saving in the EU through the Use of Energy-Efficient Electricity Distribution Transformers", December 1999.

[6] Karandikar, S. and Ghatol, A., "Reengineering of Distribution Lines for Power Loss Reduction - Bhiwandi Case Study", WSEAS Transactions on Power Systems, ISSN 1790-5060, June 2008, pp. 404-415.

[7] Onohaebi, O.S., "Reduction of High Technical Power Loss Associated with the Nigerian 330 kV Transmission Network", International Journal of Electrical and Power Engineering, pp. 421-431, 2007.

[8] EEPCO, "EEPCO's Strategic Plan for the Period of 1997 to 2002 EFY", Volume 1 of 2, p. 25, October 2004.

[9] EEPCO webpage, http://www.eepco.gov.et

[10] Egan, M.D., "Architectural Lighting", Second Edition, McGraw-Hill, 2002, p. 154.

[11] Senior project by students advised by Getahun Mekuria, Sintayehu Chala and Daniel Dilbie, "Induction Mittad", Electrical and Computer Engineering, FoT, AAU, July 2009.

[12] Tezarash Yohanes and Getachew Biru, "Performance Evaluation of Locally Made Welding Transformers", Independent Project Report, Electrical and Computer Engineering, FoT, AAU, June 2009.

[13] Mengesha Mamo, "Substandard Materials in Ethiopian Market", Proceedings of the Second Scientific Conference on Electrical Engineering, August 2008.

[14] United States Department of Agriculture, "The Application of Capacitors on Rural Electric Systems", Rural Utilities Service Bulletin 1724D-112.

[15] Faculty of Technology and Faculty of Business and Economics, AAU, "Needs Assessment and Curriculum Design for Graduate College at the Ethiopian Electric Power Corporation, Study Report", 2006.

[16] Foster, V. and Steinbuks, J., "Paying the Price for Unreliable Power Supplies: In-House Generation of Electricity by Firms in Africa", Policy Research Working Paper 4913, The World Bank Africa Region, April 2009.

[17] Workforce Collaborative Press Release, "U.S. Power and Energy Engineering", April 21, 2009.


THE USE OF SQUARE CROSS SECTIONS OF UNIT-LENGTH SIDE FOR THE ANALYSIS OF RECTANGULAR SOLID AND HOLLOW SECTIONS UNDER BIAXIALLY ECCENTRIC LOADS

Girma Zerayohannes
Department of Civil Engineering
Addis Ababa University

ABSTRACT

This paper deals with the analytical proof of the equivalence between the relative design axial load and biaxial bending resistance of solid and hollow rectangular sections and the design axial load and biaxial bending resistance of the associated square solid and hollow cross sections of unit-length side. The proof shows that the equivalence exists only if the design compressive strength of the concrete in the square cross section of unit-length side is equal to one for solid sections and 1/α for hollow rectangular sections, where α is defined as the ratio of the solid to the gross cross-sectional area. The new sets of compressive strengths and the transformations of rectangular sections into square cross sections of unit-length side are shown to be matched by consistent transformations of the design yield strength of the reinforcing bars, the locations of the concrete fibres and rebars, and the area of reinforcement.

Keywords: Biaxial eccentric load, Hollow section, Solid sections, Square cross section.

INTRODUCTION

The results of cross section analysis show that the relative design axial load and biaxial bending resistance of a solid rectangular reinforced concrete section are identical to the design axial load and biaxial bending resistance of an associated square cross section of unit-length side, provided that (1) the design compressive strength of the concrete in the square cross section of unit-length side is equal to one, (2) the resulting rebar locations and area of reinforcement are in conformity with the transformation of the rectangular cross section into the square cross section of unit-length side, and (3) the resulting design yield strength of the reinforcement is in conformity with the transformation of the design compressive strength of the concrete. Moreover, the total amount of reinforcement in the transformed section can be replaced by the mechanical reinforcement ratio, ω, if the design yield strength of the reinforcement is taken to be equal to one.

Keeping the transformation requirements in respect of rebar locations, concrete fibre locations, and amount of reinforcement the same as in the solid cross section, it can be shown that the relative values of the design axial load and biaxial bending resistance of a rectangular hollow section with solidity ratio α are identical to the design axial load and biaxial bending resistance of the corresponding square hollow section of unit-length side, provided that the design compressive strength of the concrete used in the analysis of the latter is equal to 1/α.

Comparisons of cross section analysis results have shown the equivalence between the relative design axial load and biaxial bending resistance of a rectangular reinforced concrete section and the design axial load and biaxial bending resistance of an associated square cross section of unit-length side; however, an analytical proof justifying it is hardly available in the literature. More recently, the square cross section of unit-length side has been used to calculate interaction diagrams for load eccentricities along axes parallel to the axes of symmetry and to a diagonal of a solid rectangular cross section, for the derivation of approximate analytical expressions of the moment contours based on the ACI Code Specification [1, 2]. An analytical proof of the equivalence between the dimensionless expressions for the solid rectangular section with four corner reinforcement bars and the ultimate bending moments and axial force of an equivalent square cross section of unit-length side is also provided in Ref. [1].

RESEARCH SIGNIFICANCE

The purpose of this paper is to present a new approach for the analytical proof of the equivalence between the relative design axial load and biaxial bending resistance of:

i. a solid rectangular section with arbitrary reinforcement arrangement and the design axial load and biaxial bending resistance of the associated square cross section of unit-length side;

ii. a hollow rectangular section with arbitrary reinforcement arrangement and the design axial load and biaxial bending resistance of the associated square hollow cross section of unit-length side;

iii. the same as in (i) and (ii), but using the net cross section for high strength concrete, as recommended in modern building codes [3, 4].

EQUIVALENCE BETWEEN THE RELATIVE DESIGN AXIAL LOAD AND BIAXIAL BENDING RESISTANCES OF A RECTANGULAR SECTION AND THE ASSOCIATED SQUARE CROSS SECTION OF UNIT-LENGTH SIDE

Biaxial interaction diagrams for a solid rectangular cross section made of reinforced concrete are presented in non-dimensional form as:

\nu_{Rd} = \frac{N_{Rd}}{f_{cd} \cdot b \cdot h} \qquad (1)

\mu_{Rd,y} = \frac{M_{Rd,y}}{f_{cd} \cdot b \cdot h^2} \qquad (2)

\mu_{Rd,z} = \frac{M_{Rd,z}}{f_{cd} \cdot h \cdot b^2} \qquad (3)

where:

νRd, µRd,y, and µRd,z are the relative values of the design axial load and biaxial bending resistance of the rectangular cross section;

NRd, MRd,y, and MRd,z are the design axial load and biaxial bending resistance of the rectangular cross section;

fcd is the design compressive strength of concrete; and

b and h are the side lengths of the rectangular cross section.

It can be seen from Eqs. (1) to (3) that the design axial load and biaxial bending resistance of a different cross section, whose side lengths and design compressive strength of concrete satisfy Eqs. (4) to (6), would be identical to the required relative values of the design axial load and biaxial

bending resistance of the original rectangular section.

f_{cd} \cdot b \cdot h = 1 \qquad (4)

f_{cd} \cdot b \cdot h^2 = 1 \qquad (5)

f_{cd} \cdot h \cdot b^2 = 1 \qquad (6)

Equations (4) to (6) represent a system of independent simultaneous equations in b, h, and fcd. The solutions are:

b = 1, \quad h = 1, \quad f_{cd} = 1 \qquad (7)

Thus, given a rectangular reinforced concrete section with side lengths b/h and design compressive strength fcd, there is an alternative way of determining its relative values of the design axial load and biaxial bending resistance: transforming the side lengths of the original cross section and the design compressive strength of the concrete into unity. The design values of the axial load and biaxial bending resistance of the transformed section then give the relative design axial load and biaxial bending resistance νRd, µRd,y, and µRd,z of the original cross section directly. It is to be recognized that, in the process, the locations of concrete fibres and rebars, the area of reinforcement, and the design yield strength of the reinforcing bars undergo consistent transformations, as will be described in more detail in the following sections. In summary, all rectangular sections with arbitrary side lengths b and h possess the same relative design axial load and biaxial bending resistances as the associated square cross section of unit-length side, provided that:

i. the design compressive strength of the concrete fcd in the square cross section of unit-length side is equal to 1;

ii. the resulting rebar locations and area of reinforcement in the square cross section of unit-length side are in conformity with the coordinate and area transformation of the rectangular cross section;

iii. the resulting design yield strength of the reinforcement in the square cross section of unit-length side is in conformity with the transformation of the design compressive strength of the concrete.
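A minimal numerical sketch of Eqs. (1) to (7): the same function that computes the relative resistances of the actual section returns the resistances themselves when applied to the unit square with fcd = 1. The example values are taken from the verification calculation in the appendix (Tables 2 and 3).

```python
def relative_resistances(N_Rd, M_Rd_y, M_Rd_z, f_cd, b, h):
    # Eqs. (1)-(3): nu = N/(fcd*b*h), mu_y = My/(fcd*b*h^2), mu_z = Mz/(fcd*h*b^2)
    nu = N_Rd / (f_cd * b * h)
    mu_y = M_Rd_y / (f_cd * b * h ** 2)
    mu_z = M_Rd_z / (f_cd * h * b ** 2)
    return nu, mu_y, mu_z

# Actual section from the appendix: b/h = 0.4 m/0.8 m, C20/25 (fcd = 11333.3 kN/m^2).
print(relative_resistances(-2901.33, 414.3104, 214.4085, 11333.3, 0.4, 0.8))
# -> (-0.8, 0.1428, 0.1478): one point of the interaction curve of the unit
#    square (b = h = fcd = 1, Eq. (7)) listed in Table 2 of the appendix.
```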


COORDINATE TRANSFORMATION MATRIX

Rectangular Solid Cross-Sections

The transformation of a rectangular cross section with side lengths b and h into a square cross section of unit-length side can be expressed in matrix form as:

\begin{bmatrix} 1/b & 0 \\ 0 & 1/h \end{bmatrix} \begin{bmatrix} b \\ h \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \qquad (8)

Thus, given a rectangular reinforced concrete section (Fig. 1(a)), the transformation matrix in Eq. (8) will be used to determine the coordinates of any desired point in the associated square cross section of unit-length side (Fig. 1(b)). As an example, the coordinates of the corner concrete fibre and of the rebar location in the positive quadrant are determined in Eqs. (9) and (10) respectively.

\begin{bmatrix} 1/b & 0 \\ 0 & 1/h \end{bmatrix} \begin{bmatrix} b/2 \\ h/2 \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix} \qquad (9)

\begin{bmatrix} 1/b & 0 \\ 0 & 1/h \end{bmatrix} \begin{bmatrix} b/2 - b' \\ h/2 - h' \end{bmatrix} = \begin{bmatrix} 0.5 - b'/b \\ 0.5 - h'/h \end{bmatrix} \qquad (10)

For a given size of concrete cover that can be expressed as a fraction of the length of one side of the rectangular cross section, it can be shown that the coordinates of the reinforcement location in the transformed section are a function of the aspect ratio of the original rectangular section. On the other hand, it can be observed from Eq. (10) that the rebar location can be made independent of the aspect ratio by choosing equal dimensionless covers, b'/b = h'/h, in both directions.

Therefore equivalent square cross sections of unit-length side can be made to represent all rectangular sections regardless of their aspect ratios, provided that they satisfy the conditions of equal dimensionless cover in the y- and z- directions. This property is exploited in the preparation of biaxial interaction diagrams, although it is not in accordance with the usual practice where equal absolute cover is provided in both directions. The following example shows the differences observed in a square cross section of unit-length side when equal absolute cover is provided in the actual rectangular cross section instead of equal dimensionless cover. Assuming that the cover in the rectangular section (Fig. 1 (a)) is equal in both directions with b’ = h’ = c’, the rebar location in the positive quadrant in the square cross section of unit-length side is determined by the transformation given in Eq. (11).

\begin{bmatrix} 1/b & 0 \\ 0 & 1/h \end{bmatrix} \begin{bmatrix} b/2 - c' \\ h/2 - c' \end{bmatrix} = \begin{bmatrix} 0.5 - c'/b \\ 0.5 - c'/h \end{bmatrix} \qquad (11)

Assuming further that c' = 0.2·b and that the aspect ratio b/h = 0.5, the coordinates of the reinforcement location in the transformed section are:

\begin{bmatrix} 0.5 - c'/b \\ 0.5 - c'/h \end{bmatrix} = \begin{bmatrix} 0.3 \\ 0.4 \end{bmatrix} \qquad (12)

The result of the transformation shows that the rebars are placed further apart in the direction of the larger dimension, i.e. h, as opposed to the symmetrical location of the reinforcement with equal y- and z-coordinates in the case of equal dimensionless cover.

Figure 1(a) Rectangular solid section; and (b) Square cross section of unit-length side
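The coordinate transformation of Eqs. (8) to (12) is a diagonal scaling, sketched below for the equal-absolute-cover example (c' = 0.2·b, b/h = 0.5); the helper name is illustrative.

```python
import numpy as np

def to_unit_square(point, b, h):
    # Eq. (8): multiply the (y, z) coordinates by diag(1/b, 1/h).
    T = np.diag([1.0 / b, 1.0 / h])
    return T @ np.asarray(point, dtype=float)

b, h = 0.5, 1.0            # aspect ratio b/h = 0.5, as in the example of Eq. (12)
c = 0.2 * b                # equal absolute cover b' = h' = c'

print(to_unit_square([b / 2, h / 2], b, h))          # [0.5 0.5]  (Eq. (9))
print(to_unit_square([b / 2 - c, h / 2 - c], b, h))  # [0.3 0.4]  (Eq. (12))
```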


Rectangular Hollow Cross Sections

Figure 2(a) and (b) show the actual rectangular hollow section with uniformly distributed reinforcement along the edges and the associated square hollow section of unit-length side, respectively. Compared to the solid rectangular section, the hollow rectangular section has one additional parameter, namely the ratio of the wall thickness to the corresponding edge length.

Biaxial interaction diagrams for a hollow rectangular cross section made of reinforced concrete are presented in non-dimensional form as:

\nu_{Rd} = \frac{N_{Rd}}{\alpha \cdot f_{cd} \cdot b \cdot h} \qquad (13)

\mu_{Rd,y} = \frac{M_{Rd,y}}{\alpha \cdot f_{cd} \cdot b \cdot h^2} \qquad (14)

\mu_{Rd,z} = \frac{M_{Rd,z}}{\alpha \cdot f_{cd} \cdot h \cdot b^2} \qquad (15)

where α is the fraction of the solid part of the cross section, referred to as the "solidity ratio" for short, and the definitions of the other variables are as in Eqs. (1) to (3).

It can be seen from Eq. (13) to (15) that the design axial load and biaxial bending resistance of an associated rectangular hollow cross section whose side lengths and design compressive strength of concrete fcd satisfy Eq. (16) to (18) would be identical to the required relative values of the design axial load and biaxial bending resistance νRd, µRd y, and µRd z, of the original rectangular hollow section.

\alpha \cdot f_{cd} \cdot b \cdot h = 1 \qquad (16)

\alpha \cdot f_{cd} \cdot b \cdot h^2 = 1 \qquad (17)

\alpha \cdot f_{cd} \cdot h \cdot b^2 = 1 \qquad (18)

Equations (16) to (18) represent a system of independent simultaneous equations in b, h, and fcd. The solutions are:

b = 1, \quad h = 1, \quad f_{cd} = 1/\alpha \qquad (19)

Thus the relative values of the design axial load and biaxial bending resistance of a rectangular hollow section with design compressive strength fcd can be determined by using a transformed square hollow section of unit-length side with the corresponding design compressive strength of concrete equal to 1/α. Moreover, the solutions indicate that the coordinate transformation matrix for rectangular hollow sections is the same as that for rectangular solid cross sections described in section 3.1. Therefore, for the example hollow cross section shown in Fig. 2(a) with b/h = 2.0 and wb = wh = 0.2·h, the outer and inner corner points of the transformed concrete section in the positive quadrant have the coordinates (0.5, 0.5) and (0.4, 0.3) respectively, as determined by the transformations shown in Eqs. (20) and (21).

\begin{bmatrix} 1/b & 0 \\ 0 & 1/h \end{bmatrix} \begin{bmatrix} b/2 \\ h/2 \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix} \qquad (20)

\begin{bmatrix} 1/b & 0 \\ 0 & 1/h \end{bmatrix} \begin{bmatrix} 0.4 \cdot b \\ 0.3 \cdot h \end{bmatrix} = \begin{bmatrix} 0.4 \\ 0.3 \end{bmatrix} \qquad (21)

Figure 2(a) Rectangular hollow section; and (b) Square cross section of unit-length side


The locations of individual rebar in the square hollow section of unit-length side are also determined using the same transformation matrix.

AREA AND STEEL STRENGTH TRANSFORMATION

Rectangular solid sections

The transformation of the rectangular cross section into the equivalent square cross section of unit-length side and the transformation of the design compressive strength of concrete fcd into unity have consequences on the magnitudes of the cross-sectional areas and material strengths that are used in the transformed section. According to the solutions in Eq. (7), the gross cross-sectional area b·h of a solid reinforced concrete section is transformed into the unit area, 1·1 = 1, while the design compressive strength of concrete fcd is transformed into unity. Therefore the transformation coefficients for areas and for design strengths of materials are 1/(b·h) and 1/fcd respectively. Thus, using the superscript "u" to designate the square cross section of unit-length side, the transformed area of reinforcement A_s^u and design yield strength of reinforcement f_yd^u are:

A_s^u = \frac{A_s}{b \cdot h} \qquad (22)

f_{yd}^u = \frac{f_{yd}}{f_{cd}} \qquad (23)

Further, the geometric reinforcement ratio in the transformed section is:

\rho^u = \frac{A_s^u}{b^u \cdot h^u} \qquad (24)

Substituting b^u = h^u = 1 and the expression for A_s^u from Eq. (22):

\rho^u = \frac{A_s}{b \cdot h} = \rho \qquad (25)

Equation (25) indicates that the geometric reinforcement ratio is invariant under the transformation. Similarly, the mechanical reinforcement ratio in the transformed section is:

\omega^u = \rho^u \cdot \frac{f_{yd}^u}{f_{cd}^u} \qquad (26)

Substituting f_{cd}^u = 1:

\omega^u = \rho^u \cdot f_{yd}^u \qquad (27)

Substituting further for \rho^u and f_{yd}^u from Eqs. (25) and (23):

\omega^u = \rho \cdot \frac{f_{yd}}{f_{cd}} = \omega \qquad (28)

Equation (28) indicates that ω is also invariant under the transformation. Finally, from Eqs. (24), (25), and (28):

A_s^u = \omega \cdot \frac{f_{cd}}{f_{yd}} \qquad (29)

Equation (29) gives the transformed area of steel in the square cross section of unit-length side in terms of the mechanical reinforcement ratio ω, the design compressive strength of the concrete, and the yield strength of the reinforcement in the original cross section. This same amount of concrete area is to be deducted if the analysis is based on the net cross section. Usually the analysis is based on the gross cross section, as the use of the net cross section does not affect the result significantly. The effect of the displaced amount of concrete on the cross section capacity may, however, be significant if high strength concrete is used, requiring analysis on the basis of the net cross section for high strength concrete [3, 4]. The transformed area of reinforcement A_s^u can also be expressed in terms of the transformed design yield strength of reinforcement f_yd^u as:

A_s^u = \frac{\omega}{f_{yd}^u} \qquad (30)
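A sketch of the transformation of Eqs. (22) and (23) and of the invariance of ρ and ω (Eqs. (25), (28)); the numerical data are those of the verification example in the appendix (C20/25, b/h = 0.4 m/0.8 m, ω = 0.5).

```python
def transform_solid(A_s, f_yd, f_cd, b, h):
    A_s_u = A_s / (b * h)        # Eq. (22)
    f_yd_u = f_yd / f_cd         # Eq. (23)
    rho = A_s / (b * h)          # geometric ratio of the original section
    omega = rho * f_yd / f_cd    # mechanical ratio of the original section
    # Eq. (29) rearranged: A_s^u * f_yd^u must reproduce omega.
    assert abs(A_s_u * f_yd_u - omega) < 1e-12
    return A_s_u, f_yd_u, omega

A_s_u, f_yd_u, omega = transform_solid(
    A_s=4 * 0.0010426667, f_yd=434782.6, f_cd=11333.3, b=0.4, h=0.8)
print(f"As^u = {A_s_u:.6f}, fyd^u = {f_yd_u:.3f}, omega = {omega:.4f}")  # omega = 0.5
```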

Additional analytical advantage can be gained by setting f_yd^u = 1, because it allows the direct substitution of the reinforcement data by the mechanical reinforcement ratio ω. It is to be noted that this is not a consequence of the transformations discussed so far; it is rather an isolated action that allows the substitution of the amount of reinforcement A_s^u in the square cross section of unit-length side by ω, provided that f_yd^u = 1. The direct use of ω as reinforcement data

can be used advantageously in the calculation of biaxial interaction diagrams, where it can be systematically varied to cover the practical range of the mechanical reinforcement ratio.

Rectangular Hollow Sections

The gross concrete area plus the hollow part of the cross section constitute the total area, equal to b·h. According to the solutions in Eq. (19), this area is transformed into the unit area, 1·1 = 1, while the design compressive strength of concrete fcd is transformed into 1/α. Therefore the transformation coefficients for areas and for design strengths of materials are 1/(b·h) and 1/(α·fcd) respectively. Using the superscript "u" to designate the square cross section of unit-length side, the transformed area of reinforcement A_s^u and design yield strength of reinforcement f_yd^u are:

A_s^u = \frac{A_s}{b \cdot h} \qquad (31)

f_{yd}^u = \frac{f_{yd}}{\alpha \cdot f_{cd}} \qquad (32)

Similarly, the transformed area of concrete A_c^u is:

A_c^u = \frac{A_c}{b \cdot h} = \frac{\alpha \cdot b \cdot h}{b \cdot h} = \alpha \qquad (33)

Since Eq. (33) can be rewritten as A_c^u = \alpha \cdot 1 \cdot 1, it can be concluded that the "solidity" ratio does not change under the transformation. The geometric reinforcement ratio in the transformed section is:

\rho^u = \frac{A_s^u}{\alpha \cdot b^u \cdot h^u} \qquad (34)

Substituting b^u = h^u = 1 and the expression for A_s^u from Eq. (31):

\rho^u = \frac{A_s}{\alpha \cdot b \cdot h} = \rho \qquad (35)

Equation (35) indicates that the geometric reinforcement ratio is invariant under the transformation.

Similarly, the mechanical reinforcement ratio in the transformed section is:

\omega^u = \rho^u \cdot \frac{f_{yd}^u}{f_{cd}^u} \qquad (36)

Substituting f_{cd}^u = 1/\alpha:

\omega^u = \alpha \cdot \rho^u \cdot f_{yd}^u \qquad (37)

Substituting further for \rho^u and f_{yd}^u from Eqs. (35) and (32):

\omega^u = \rho \cdot \frac{f_{yd}}{f_{cd}} = \omega \qquad (38)

Equation (38) indicates that ω is also invariant under the transformation. Finally, from Eqs. (34), (35), and (38):

A_s^u = \alpha \cdot \omega \cdot \frac{f_{cd}}{f_{yd}} \qquad (39)

Equation (39) gives the transformed area of steel in the square hollow section of unit-length side in terms of the solidity ratio α, the mechanical reinforcement ratio ω, the design compressive strength of the concrete, and the yield strength of the reinforcement in the original rectangular hollow section. This same amount of concrete area is to be deducted if the analysis is based on the net cross section. The transformed area of reinforcement A_s^u can also be expressed in terms of the transformed design yield strength of reinforcement f_yd^u as:

A_s^u = \frac{\omega}{f_{yd}^u} \qquad (40)

Thus the amount of reinforcement in the hollow square cross section of unit-length side can be replaced by ω by setting f_yd^u = 1.
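The hollow-section counterpart of the previous sketch, implementing Eqs. (31) and (32) and checking the identity of Eq. (39); the input numbers are illustrative (α = 0.52 as in the Table 7 example of the appendix).

```python
def transform_hollow(A_s, f_yd, f_cd, b, h, alpha):
    A_s_u = A_s / (b * h)              # Eq. (31)
    f_yd_u = f_yd / (alpha * f_cd)     # Eq. (32)
    rho = A_s / (alpha * b * h)        # ratio w.r.t. the solid part of the section
    omega = rho * f_yd / f_cd          # invariant, Eq. (38)
    # Eq. (39): the transformed steel area equals alpha * omega * fcd / fyd.
    assert abs(A_s_u - alpha * omega * f_cd / f_yd) < 1e-12
    return A_s_u, f_yd_u, omega

# Illustrative call only; the geometry and steel area are assumed values.
print(transform_hollow(A_s=0.004, f_yd=434782.61, f_cd=51000.0,
                       b=0.5, h=1.0, alpha=0.52))
```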

CONCLUSIONS

The following conclusions can be drawn from this study:

1. The relative values of the design axial load and biaxial bending resistance of a rectangular reinforced concrete section are identical to the design axial load and biaxial bending resistance of an associated square cross section of unit-length side, provided that the design compressive strength of concrete f_cd^u that is used in the analysis of the square cross section of unit-length side is equal to one, and its rebar locations are in conformity with the transformation of the side lengths of the actual rectangular cross section.

2. The design axial load and biaxial bending resistance are calculated on the basis of the net cross section for high strength concrete by deducting the amount of the transformed area of steel A_s^u = \omega \cdot f_{cd}/f_{yd} from the square cross section of unit-length side.

3. The relative design axial load and biaxial bending resistance of a rectangular hollow reinforced concrete section with "solidity ratio" α are identical to the design axial load and biaxial bending resistance of an associated square hollow section of unit-length side, provided that the design compressive strength of concrete f_cd^u that is used in the analysis of the square cross section of unit-length side is equal to 1/α, and its rebar locations are in conformity with the transformation of the side lengths of the actual rectangular hollow section.

4. The design axial load and biaxial bending resistance are calculated on the basis of the net cross section for high strength concrete by deducting the amount of the transformed area of steel A_s^u = \alpha \cdot \omega \cdot f_{cd}/f_{yd} from the square hollow section of unit-length side.

ACKNOWLEDGMENTS

The author would like to express his deepest appreciation to the German Academic Exchange Service, DAAD, for sponsoring this study in the summer of 2008 at the Technical University of Kaiserslautern, Germany. Special thanks go to Professor Dr.-Ing. J. Schnell, Head of the Institute of Concrete Structures, for supervising the work and providing suitable space and the necessary equipment to carry out the study successfully.

REFERENCES

[1] Cedolin, L., Cusatis, G., Eccheli, S., Rovda, M., "Capacity of Rectangular Cross Sections under Biaxially Eccentric Loads", ACI Structural Journal, April 2008, pp. 215-224.

[2] ACI Committee 318, "Building Code Requirements for Structural Concrete (ACI 318-05) and Commentary (318R-05)", American Concrete Institute, Farmington Hills, MI, 2005, 430 pp.

[3] DIN 1045-1:2001-07, Tragwerke aus Beton, Stahlbeton und Spannbeton. Teil 1: Bemessung und Konstruktion.

[4] Zilch, K., Jähring, A., Müller, A., "Erläuterungen zu DIN 1045-1", Deutscher Ausschuss für Stahlbeton, Heft 525, Berlin: Beuth Verlag, 2003.

[5] Zerayohannes, G., "Bemessungsdiagramme für schiefe Biegung mit Längskraft nach DIN 1045-1:2001-07", Schriftenreihe der Fachgebiete Baustofftechnologie und Bauschadenanalyse, Massivbau und Baukonstruktion und Stahlbau des Studienganges Bauingenieurwesen der Technischen Universität Kaiserslautern, Band 4, 2006, 270 pp.


Appendix: Verification Calculations of Axial Load and Biaxial Bending Resistance of Rectangular and Equivalent Square Cross Sections of Unit-Length Side

Table 1: Input Data for the Calculation of a Typical Interaction Curve - Normal Strength Concrete

Square cross section of unit-length side - C12/15 up to C50/60; dimensionless cover = 0.1; νEd = -0.8, and ω = 0.5 (4-corner reinforcement arrangement based on gross cross section)

2 fcd 1. -2.000 -3.500 .0000 .0000 2.000 fyd 1. -2.174 25.000 .0000 2.174 1.000 2 3 1 1 1 5 .5000 .5000 .0000 1 2 5 .4000 .4000 .125 ω/4 3 -.8 1.0 1.0 0 0 0

Table 2: Analysis Result of the Square Cross Section of Unit-Length Side, Used as Coordinates for the Example Interaction Curve, and Design Normal Load and Moment Resistances of the Actual Cross Section [5]

ν     μEdy    μEdz    N         MEdy      MEdz      Ru/R
-0.8  0.2409  0       -2901.33  698.9312  0         1
-0.8  0.2394  0.0084  -2901.33  694.5792  12.1856   1
-0.8  0.2377  0.0166  -2901.33  689.6469  24.08107  1
-0.8  0.2357  0.0248  -2901.33  683.8443  35.97653  1
-0.8  0.2335  0.0328  -2901.33  677.4613  47.58187  1
-0.8  0.231   0.0407  -2901.33  670.208   59.04213  1
-0.8  0.2264  0.0481  -2901.33  656.8619  69.77707  1
-0.8  0.2213  0.0552  -2901.33  642.0651  80.0768   1
-0.8  0.2163  0.062   -2901.33  627.5584  89.94133  1
-0.8  0.2114  0.0687  -2901.33  613.3419  99.6608   1
-0.8  0.2065  0.0752  -2901.33  599.1253  109.0901  1
-0.8  0.2017  0.0815  -2901.33  585.1989  118.2293  1
-0.8  0.1968  0.0876  -2901.33  570.9824  127.0784  1
-0.8  0.192   0.0936  -2901.33  557.056   135.7824  1
-0.8  0.1871  0.0995  -2901.33  542.8395  144.3413  1
-0.8  0.1823  0.1052  -2901.33  528.9131  152.6101  1
-0.8  0.1774  0.1109  -2901.33  514.6965  160.8789  1
-0.8  0.1726  0.1164  -2901.33  500.7701  168.8576  1
-0.8  0.1677  0.1218  -2901.33  486.5536  176.6912  1
-0.8  0.1628  0.1272  -2901.33  472.3371  184.5248  1
-0.8  0.1578  0.1324  -2901.33  457.8304  192.0683  1
-0.8  0.1528  0.1376  -2901.33  443.3237  199.6117  1
-0.8  0.1478  0.1427  -2901.33  428.8171  207.0101  1
-0.8  0.1428  0.1478  -2901.33  414.3104  214.4085  1
-0.8  0.1376  0.1528  -2901.33  399.2235  221.6619  1
-0.8  0.1324  0.1578  -2901.33  384.1365  228.9152  1
-0.8  0.1272  0.1628  -2901.33  369.0496  236.1685  1
-0.8  0.1218  0.1677  -2901.33  353.3824  243.2768  1
-0.8  0.1164  0.1726  -2901.33  337.7152  250.3851  1


-0.8  0.1109  0.1774  -2901.33  321.7579  257.3483  1
-0.8  0.1052  0.1823  -2901.33  305.2203  264.4565  1
-0.8  0.0995  0.1871  -2901.33  288.6827  271.4197  1
-0.8  0.0936  0.192   -2901.33  271.5648  278.528   1
-0.8  0.0876  0.1968  -2901.33  254.1568  285.4912  1
-0.8  0.0815  0.2017  -2901.33  236.4587  292.5995  1
-0.8  0.0752  0.2065  -2901.33  218.1803  299.5627  1
-0.8  0.0687  0.2114  -2901.33  199.3216  306.6709  1
-0.8  0.062   0.2163  -2901.33  179.8827  313.7792  1
-0.8  0.0552  0.2213  -2901.33  160.1536  321.0325  1
-0.8  0.0481  0.2264  -2901.33  139.5541  328.4309  1
-0.8  0.0407  0.231   -2901.33  118.0843  335.104   1
-0.8  0.0328  0.2335  -2901.33  95.16373  338.7307  1
-0.8  0.0248  0.2357  -2901.33  71.95307  341.9221  1
-0.8  0.0166  0.2377  -2901.33  48.16213  344.8235  1
-0.8  0.0084  0.2394  -2901.33  24.3712   347.2896  1
-0.8  0       0.2409  -2901.33  0         349.4656  1

Table 3: Example Input Data for Verification Calculation of the Actual Cross Section

Actual cross section - C20/25 - based on gross cross section (b/h = 0.4 m/0.8 m)

2 fcd 11333.3 -2.000 -3.500 .0000 .0000 2.000 fyd 434782.6 -2.174 25.000 .0000 2.174 1.000 2 3 1 1 1 5 .2000 .4000 .0000 1 2 5 .1600 .3200 0.0010426667 As/4 3 -2901.33 414.3104 214.4085 0 0 0

Table 4: Input Data for the Calculation of a Typical Interaction Curve - High Strength Concrete

Square cross section of unit-length side - C100/115; dimensionless cover = 0.1; νEd = -0.8, and ω = 0.5 (4-corner reinforcement arrangement based on net cross section)

2 fcd 1. -2.200 -2.200 .0000 .0000 1.550 fyd 1. -2.174 25.000 .0000 2.174 1.000 3 3 1 1 1 5 .5000 .5000 .0000 4 1 1 .4000 .4000 -.004692 -.4000 .4000 -.004692 -.4000 -.4000 -.004692 .4000 -.4000 -.004692 1 2 5 .4000 .4000 .125 ω/4 3 -.8 1.0 1.0 0 0 0


Table 5: Analysis Result of the Square Cross Section of Unit-length Side Used as Coordinates for the Example Interaction Curve and Design Normal Load and Moment Resistances of the Actual Cross Section [5]

ν      μEdy    μEdz    N       MEdy      MEdz      Ru/R   (N in kN; MEdy, MEdz in kNm)
-0.8 0.1635 0      -13056 2134.656 0        1
-0.8 0.1589 0.0055 -13056 2074.598 35.904   1
-0.8 0.1545 0.0108 -13056 2017.152 70.5024  1
-0.8 0.1502 0.0158 -13056 1961.011 103.1424 1
-0.8 0.1461 0.0205 -13056 1907.482 133.824  1
-0.8 0.1422 0.0251 -13056 1856.563 163.8528 1
-0.8 0.1383 0.0294 -13056 1805.645 191.9232 1
-0.8 0.1346 0.0336 -13056 1757.338 219.3408 1
-0.8 0.131  0.0376 -13056 1710.336 245.4528 1
-0.8 0.1275 0.0414 -13056 1664.64  270.2592 1
-0.8 0.124  0.0451 -13056 1618.944 294.4128 1
-0.8 0.1207 0.0487 -13056 1575.859 317.9136 1
-0.8 0.1174 0.0523 -13056 1532.774 341.4144 1
-0.8 0.1142 0.0557 -13056 1490.995 363.6096 1
-0.8 0.111  0.059  -13056 1449.216 385.152  1
-0.8 0.1079 0.0623 -13056 1408.742 406.6944 1
-0.8 0.1048 0.0655 -13056 1368.269 427.584  1
-0.8 0.1018 0.0686 -13056 1329.101 447.8208 1
-0.8 0.0988 0.0717 -13056 1289.933 468.0576 1
-0.8 0.0958 0.0748 -13056 1250.765 488.2944 1
-0.8 0.0928 0.0779 -13056 1211.597 508.5312 1
-0.8 0.0898 0.0809 -13056 1172.429 528.1152 1
-0.8 0.0868 0.0839 -13056 1133.261 547.6992 1
-0.8 0.0839 0.0868 -13056 1095.398 566.6304 1
-0.8 0.0809 0.0898 -13056 1056.23  586.2144 1
-0.8 0.0779 0.0928 -13056 1017.062 605.7984 1
-0.8 0.0748 0.0958 -13056 976.5888 625.3824 1
-0.8 0.0717 0.0988 -13056 936.1152 644.9664 1
-0.8 0.0686 0.1018 -13056 895.6416 664.5504 1
-0.8 0.0655 0.1048 -13056 855.168  684.1344 1
-0.8 0.0623 0.1079 -13056 813.3888 704.3712 1
-0.8 0.059  0.111  -13056 770.304  724.608  1
-0.8 0.0557 0.1142 -13056 727.2192 745.4976 1
-0.8 0.0523 0.1174 -13056 682.8288 766.3872 1
-0.8 0.0487 0.1207 -13056 635.8272 787.9296 1
-0.8 0.0451 0.124  -13056 588.8256 809.472  1
-0.8 0.0414 0.1275 -13056 540.5184 832.32   1
-0.8 0.0376 0.131  -13056 490.9056 855.168  1
-0.8 0.0336 0.1346 -13056 438.6816 878.6688 1
-0.8 0.0294 0.1383 -13056 383.8464 902.8224 1
-0.8 0.0251 0.1422 -13056 327.7056 928.2816 1
-0.8 0.0205 0.1461 -13056 267.648  953.7408 1
-0.8 0.0158 0.1502 -13056 206.2848 980.5056 1
-0.8 0.0108 0.1545 -13056 141.0048 1008.576 1
-0.8 0.0055 0.1589 -13056 71.808   1037.299 1
-0.8 0      0.1635 -13056 0        1067.328 1


Table 6: Example Input Data for Verification Calculation of the Actual Cross Section-High Strength Concrete

Actual cross section - C100/115 – (based on net cross section (b/ h = 0.4m/ 0.8m) 2 fcd 51000 -2.200 -2.200 .0000 .0000 1.550 fyd 434782.61 -2.174 25.000 .0000 2.174 1.000 3 3 1 1 1 5 .2000 .4000 .0000 4 1 1 .1600 .3200 -.004692 -.1600 .3200 -.004692 -.1600 -.3200 -.004692 .1600 -.3200 -.004692 1 2 5 .1600 .3200 .004692 As/4 3 -13056. 936.1152 644.9664 0 0 0


Table 7: Input Data for the Calculation of a typical Interaction Curve for Rectangular Hollow Section (b/h = 2.0; wh/ h = wb/ h = 0.2; Ac = α⋅ b ⋅ h; α = 0.52) - High strength concrete

Square hollow section of unit-length side; C100/115; νEd = -0.8 and ω = 2.0 - based on net cross section 2 1.923076923 -2.20 -2.200 .0000 .0000 1.550 1. -2.174 25.000 .0000 2.174 1.000 4 3 1 4 1 1 -.5000 .5000 .0000 -.5000 -.5000 .0000 .5000 -.5000 .0000 .5000 .5000 .0000 4 1 1 .4000 .3000 .0000 .4000 -.3000 .0000 -.4000 -.3000 .0000 -.4000 .3000 .0000 36 1 1 .4500 .4000 -.0033886666565006667 .3750 .4000 -.0033886666565006667 .3000 .4000 -.0033886666565006667 .2250 .4000 -.0033886666565006667 .1500 .4000 -.0033886666565006667 .0750 .4000 -.0033886666565006667 .0000 .4000 -.0033886666565006667 .4500 .2666666667 -.0033886666565006667 .4500 .1333333333 -.0033886666565006667 .4500 .0000 -.0033886666565006667 -.4500 .4000 -.0033886666565006667 -.3750 .4000 -.0033886666565006667 -.3000 .4000 -.0033886666565006667 -.2250 .4000 -.0033886666565006667 -.1500 .4000 -.0033886666565006667 -.0750 .4000 -.0033886666565006667 -.4500 .2666666667 -.0033886666565006667 -.4500 .1333333333 -.0033886666565006667 -.4500 .0000 -.0033886666565006667 -.4500 -.4000 -.0033886666565006667 -.3750 -.4000 -.0033886666565006667 -.3000 -.4000 -.0033886666565006667 -.2250 -.4000 -.0033886666565006667 -.1500 -.4000 -.0033886666565006667 -.0750 -.4000 -.0033886666565006667 .0000 -.4000 -.0033886666565006667 -.4500 -.2666666667 -.0033886666565006667 -.4500 -.1333333333 -.0033886666565006667 .4500 -.4000 -.0033886666565006667 .3750 -.4000 -.0033886666565006667 .3000 -.4000 -.0033886666565006667 .2250 -.4000 -.0033886666565006667 .1500 -.4000 -.0033886666565006667 .0750 -.4000 -.0033886666565006667 .4500 -.2666666667 -.0033886666565006667 .4500 -.1333333333 -.0033886666565006667 10 2 5 .4500 .4000 .0555555555555555 .3750 .4000 .0555555555555555 .3000 .4000 .0555555555555555 .2250 .4000 .0555555555555555 .1500 .4000 .0555555555555555 .0750 .4000 .0555555555555555 .0000 .4000 .0555555555555555 .4500 .2666666667 .0555555555555555 .4500 .1333333333 .0555555555555555 .4500 .0000 .0555555555555555 3 -.8 .185 .2961538 0 0 0

(Callout labels from the original figure: "α ⋅ (ω/36) ⋅ fcd/fyd", "fyd", "fcd", and "ω/36" mark the corresponding entries in the input above.)


Table 8: Analysis Result of a Square Hollow Cross Section of Unit-length Side Used as Coordinates for the Example Interaction Curve and Design Normal Load and Moment Resistances of the Actual Cross Section [5].

νEd    μEdy       μEdz       N       MEdy       MEdz       Ru/R   (N in kN; MEdy, MEdz in kNm)
-0.8 0.4938461  0.01730769 -42432 26193.5971 1835.99976 1
-0.8 0.4763462  0.03326923 -42432 25265.4024 3529.19992 1
-0.8 0.46       0.04826923 -42432 24398.4    5120.39992 1
-0.8 0.4446154  0.0625     -42432 23582.4008 6630       1
-0.8 0.43       0.07576923 -42432 22807.2    8037.59992 1
-0.8 0.4161538  0.08846154 -42432 22072.7976 9384.00016 1
-0.8 0.4028846  0.1005769  -42432 21368.9992 10669.1976 1
-0.8 0.3903846  0.1119231  -42432 20705.9992 11872.8024 1
-0.8 0.3784615  0.1230769  -42432 20073.598  13055.9976 1
-0.8 0.3669231  0.1336538  -42432 19461.6012 14177.9951 1
-0.8 0.3559616  0.1438462  -42432 18880.2033 15259.2049 1
-0.8 0.3451923  0.1536538  -42432 18308.9996 16299.5951 1
-0.8 0.3348077  0.1632692  -42432 17758.2004 17319.5967 1
-0.8 0.3246154  0.1726923  -42432 17217.6008 18319.1992 1
-0.8 0.3148077  0.1817308  -42432 16697.4004 19278.0033 1
-0.8 0.305      0.1905769  -42432 16177.2    20216.3976 1
-0.8 0.2953846  0.1992308  -42432 15667.1992 21134.4033 1
-0.8 0.2859615  0.2078846  -42432 15167.398  22052.3984 1
-0.8 0.2767308  0.2161538  -42432 14677.8016 22929.5951 1
-0.8 0.2675     0.2244231  -42432 14188.2    23806.8024 1
-0.8 0.2582692  0.2325     -42432 13698.5984 24663.6    1
-0.8 0.2492308  0.2405769  -42432 13219.2016 25520.3976 1
-0.8 0.2401923  0.2486538  -42432 12739.7996 26377.1951 1
-0.8 0.2309615  0.2565385  -42432 12250.198  27213.6041 1
-0.8 0.2219231  0.2644231  -42432 11770.8012 28050.0024 1
-0.8 0.2126923  0.2723077  -42432 11281.1996 28886.4008 1
-0.8 0.2036538  0.2801923  -42432 10801.7976 29722.7992 1
-0.8 0.1944231  0.2880769  -42432 10312.2012 30559.1976 1
-0.8 0.185      0.2961538  -42432 9812.4     31415.9951 1
-0.8 0.1755769  0.3040385  -42432 9312.59878 32252.4041 1
-0.8 0.1659615  0.3121154  -42432 8802.59796 33109.2016 1
-0.8 0.1561539  0.3203846  -42432 8282.40286 33986.3984 1
-0.8 0.1463462  0.3286538  -42432 7762.20245 34863.5951 1
-0.8 0.1361538  0.3371154  -42432 7221.59755 35761.2016 1
-0.8 0.1257692  0.3457692  -42432 6670.79837 36679.1967 1
-0.8 0.1151923  0.3546154  -42432 6109.79959 37617.6016 1
-0.8 0.1042308  0.3636538  -42432 5528.40163 38576.3951 1
-0.8 0.09307692 0.3730769  -42432 4936.79984 39575.9976 1
-0.8 0.08134615 0.3826923  -42432 4314.5998  40595.9992 1
-0.8 0.06923077 0.3928846  -42432 3672.00004 41677.1984 1
-0.8 0.05673077 0.4032692  -42432 3009.00004 42778.7967 1
-0.8 0.04346154 0.4140384  -42432 2305.20008 43921.1935 1
-0.8 0.02980769 0.4253846  -42432 1580.99988 45124.7984 1
-0.8 0.01519231 0.4373077  -42432 805.800122 46389.6008 1
-0.8 0          0.45       -42432 0          47736      1


Table 9: Example Input Data Verification of the Actual Cross Section Actual Cross Section; b = 2.0 m; h = 1.0 m; α = 0.52; wb/h=wh/h=0.20 C100/115 (based on net cross section) 2 51000 -2.200 -2.200 .0000 .0000 1.550 434782.61 -2.174 25.000 .0000 2.174 1.000 4 3 1 4 1 1 -1.000 .5000 .0000 -1.000 -.5000 .0000 1.000 -.5000 .0000 1.000 .5000 .0000 4 1 1 .8000 .3000 .0000 .8000 -.3000 .0000 -.8000 -.3000 .0000 -.8000 .3000 .0000 36 1 1 .9000 .4000 -.006777333333333 .7500 .4000 -.006777333333333 .6000 .4000 -.006777333333333 .4500 .4000 -.006777333333333 .3000 .4000 -.006777333333333 .1500 .4000 -.006777333333333 .0000 .4000 -.006777333333333 .9000 .2666666667 -.006777333333333 .9000 .1333333333 -.006777333333333 .9000 .0000 -.006777333333333 -.9000 .4000 -.006777333333333 -.7500 .4000 -.006777333333333 -.6000 .4000 -.006777333333333 -.4500 .4000 -.006777333333333 -.3000 .4000 -.006777333333333 -.1500 .4000 -.006777333333333 -.9000 .2666666667 -.006777333333333 -.9000 .1333333333 -.006777333333333 -.9000 .0000 -.006777333333333 -.9000 -.4000 -.006777333333333 -.7500 -.4000 -.006777333333333 -.6000 -.4000 -.006777333333333 -.4500 -.4000 -.006777333333333 -.3000 -.4000 -.006777333333333 -.1500 -.4000 -.006777333333333 .0000 -.4000 -.006777333333333 -.9000 -.2666666667 -.006777333333333 -.9000 -.1333333333 -.006777333333333 .9000 -.4000 -.006777333333333 .7500 -.4000 -.006777333333333 .6000 -.4000 -.006777333333333 .4500 -.4000 -.006777333333333 .3000 -.4000 -.006777333333333 .1500 -.4000 -.006777333333333 .9000 -.2666666667 -.006777333333333 .9000 -.1333333333 -.006777333333333 10 2 5 .9000 .4000 .006777333333333 .7500 .4000 .006777333333333 .6000 .4000 .006777333333333 .4500 .4000 .006777333333333 .3000 .4000 .006777333333333 .1500 .4000 .006777333333333 .0000 .4000 .006777333333333 .9000 .2666666667 .006777333333333 .9000 .1333333333 .006777333333333 .9000 .0000 .006777333333333 3 -42432 18880.2033 15259.2049 0 0 0

(Callout labels from the original figure: "fyd", "fcd", and "As/36" mark the corresponding entries in the input above.)


WINKLER'S SINGLE-PARAMETER SUBGRADE MODEL FROM THE PERSPECTIVE OF AN IMPROVED APPROACH OF CONTINUUM-BASED SUBGRADE MODELING

Asrat Worku
Department of Civil Engineering
Addis Ababa University

ABSTRACT

Based on an isotropic elastic continuum of thickness H overlying a rigid stratum, a generalized formulation for the classical single-parameter Winkler's subgrade model is presented. In this formulation, all the normal components of the stress tensor are taken into consideration, whereas the shear stresses are intentionally dropped with the purpose of providing a useful perspective, with which Winkler's model and its associated coefficient of subgrade reaction can be viewed. The formulation takes into account the variation of the elasticity modulus with depth. It only demands specifying a relationship between the vertical and horizontal normal stresses. Accordingly, two such different assumptions are made to obtain two new Winkler-type subgrade models with the corresponding closed-form relations for the subgrade modulus. The models give consistently larger stiffness for the Winkler springs as compared to previously proposed similar continuum-based models that ignore the lateral stresses. It has also been pointed out that it is only if the shear stress components of the subgrade are taken into consideration that a multi-parameter model evolves, regardless of whether the lateral normal stresses are included. Finally, the effective stiffness per unit area of the multiple beds of springs of such a higher-order model is exactly the same as the subgrade modulus of the corresponding single-parameter Winkler model presented in this work.

Keywords: Heterogeneous subgrade, Reissner's simplified continuum, Shear interaction, Simplified continuum, Winkler model, Winkler-type models.

INTRODUCTION

The simplest representation of a foundation subgrade is in the form of the classical Winkler model, which replaces the subgrade by a mechanical analogy consisting of a single bed of closely spaced vertical springs acting independently of each other. Mathematically, Winkler's model translates into

p(x,y) = ks·w0(x,y), in which p is the vertical contact pressure at an arbitrary point (x,y) in the foundation-soil interface area; w0 is the corresponding vertical deformation; and ks is a proportionality constant representing contact pressure per unit deformation - commonly referred to as the coefficient of subgrade reaction or simply as the subgrade modulus [1-7]. Thus, ks is the only quantity characterizing the subgrade material. Winkler's model has the well-known shortcoming of bringing about a vertical deformation of those springs alone that are located just under the loaded region. Because of this, the model entails a discontinuity of vertical deformation at the edges of the loaded area. Furthermore, the model implies that a point undergoes vertical deformation independently of other adjoining points [1, 3, 4, 5]. Both are consequences of the neglected vertical shear stresses that would have coupled the vertical deformations of neighboring points with each other so that continuity of displacement exists. This shortcoming can be overcome by appropriately accounting for the shear stress components of the subgrade. A number of attempts have been made in the past to incorporate the shear stresses following two different approaches. In the first approach, the subgrade is idealized as an elastic continuum of finite thickness, and certain simplifying assumptions are made to reduce the mathematical work involved [3, 4, 8, 9]. In the second approach, mechanical models are developed that involve different combinations of spring beds and shear elements [3, 7]. Both approaches can be judiciously synthesized for the purpose of solving practical problems, whereby the mechanical-model parameters are quantified in terms of the elastic parameters of the continuum [3, 9]. However, mainly due to the simplicity of Winkler's model in practical applications and its long-time familiarity among practicing engineers, its usage has endured to this date. Many current commercial software packages continue to incorporate the model as a major feature of their programs for the purpose of


analysis and design of beams and plates on elastic foundations. A number of both analytical and empirical relationships have been suggested in the past for estimating ks. As a pioneer, Terzaghi [10] identified the width of the foundation as the most important influencing factor of ks in addition to the elastic properties of the soil. Accordingly, he suggested empirical relations for converting ks values from field plate loading tests to ks values of actual foundations that decrease with increasing width. For long beams, Vesić [11] later proposed a formula that depends on the rigidity of the beam itself in addition to its width and the elastic properties of the subgrade material. Based on a subgrade idealized as a simplified elastic continuum of finite thickness, in which only the vertical normal stress components are taken into consideration, Horvath [5] derived closed-form relations for ks for constant and varying elasticity modulus of the subgrade. Correlations with standard penetration blow counts were also suggested more recently [12]. Recognizing the enduring usage of Winkler's model in wide-ranging applications of geotechnical engineering, this article attempts to provide some insight into this model and its associated coefficient of subgrade reaction from the perspective of continuum modeling. In this technique, the subgrade is idealized as an isotropic elastic continuum of finite thickness H. Heterogeneity with respect to subgrade rigidity is taken into consideration by assuming a variable elasticity modulus with depth. In order to clearly understand the influence of the soil shear stress on the form of the resulting mathematical model, the shear components are first intentionally omitted, with the normal stress components alone accounted for in the formulation. Open functions of depth are introduced to relate the horizontal and vertical normal stress components. It is shown that the resulting model is a single-parameter Winkler-type

model, for which the coefficient of subgrade reaction can be evaluated from an analytical relation obtained in the form of a definite integral. The choice of the functions relating the normal stresses is at the discretion of the user. Two such functions are employed in this work to come up with two correspondingly different sets of closed-form relations for the coefficient of subgrade reaction for constant as well as variable elasticity modulus. These are compared with similar relations proposed in the past. Finally, a brief account of ways of incorporating the shear stress components of the elastic subgrade is presented, with the details given in the companion paper [9]. Here, it is pointed out that an additional assumption is needed regarding the variation of the vertical shear stress components with depth. It has been found that the resulting mathematical models are always second-order differential equations with constant coefficients regardless of the nature of this assumption. It is pointed out that a three-parameter mechanical model consisting of two beds of springs and a layer of shear element also results in a differential equation of similar form and order to that of the continuum. By taking advantage of this important analogy, it has been shown that the effective spring stiffness per unit area of the two beds of springs of the three-parameter mechanical model is nothing other than the coefficient of subgrade reaction of the single-parameter Winkler-type model established in the present work by excluding the shear stresses.

A GENERALIZED FORMULATION OF WINKLER'S SUBGRADE MODULUS

The subgrade is idealized as an isotropic elastic continuum of thickness H similar to Reissner’s simplified continuum [8] as shown in Fig. 1.

Figure 1 The subgrade idealized as an isotropic, heterogeneous elastic layer (E(z), ν) of thickness H under surface pressure p(x,y)


The depth-wise heterogeneity is taken into account by assuming a variable elasticity modulus with depth. The Poisson ratio is assumed constant, because it is known that the problem is less sensitive to its variation. The primary aim is to study the influence of the normal stress components alone by intentionally excluding the shear effect. Thus,

τxy = τxz = τyz = 0 (1)

With this assumption, the equilibrium equation for the vertical direction becomes

σz,z = 0 (2)

where the comma in the subscript denotes a derivative with respect to the symbol that follows. Equation (2) implies that σz is constant with respect to depth. Applying the stress boundary condition at the surface and noting that compressive stresses are negative, it follows that

σz(x,y) = -p(x,y) (3)

where p(x,y) is the vertical contact pressure at the surface. The lateral normal stresses, σx and σy, can be related to the vertical normal stress, σz, through appropriately selected functions, gx(z) and gy(z), respectively, so that

σx = gx(z)·σz ; σy = gy(z)·σz (4)

The generalized Hooke's law for the normal strain in the vertical direction is given by

w,z = [σz - ν(σx + σy)]/E(z) (5)

where E(z) is the elasticity modulus that may generally vary with depth. Substituting Eqs. (3) and (4) in Eq. (5) and integrating, one obtains for the vertical displacement

w(x,y,z) = -p(x,y)·∫ [g(z)/E(z)] dz + f1(x,y) + c1 (6)

where g(z) = 1 - ν[gx(z) + gy(z)] (7)

The function f1 in Eq. (6) is a function of x and y, and c1 is a constant of integration. Applying the zero-displacement boundary condition at the interface with the rigid base, these two unknowns are readily determined at once. Substituting the resulting expression back in Eq. (6) one obtains the vertical displacement function as

w(x,y,z) = p(x,y)·∫z^H [g(s)/E(s)] ds (8)

The vertical displacement, w0, of the surface in particular is obtained by evaluating Eq. (8) at z=0, so that

w0(x,y) = w(x,y,0) = p(x,y)·∫0^H [g(z)/E(z)] dz (9)

This equation is the relationship sought between the vertical surface displacement and the vertical surface pressure, which can be written as

p(x,y) = ks·w0(x,y) (10)

where ks is the coefficient of subgrade reaction given by

ks = 1 / ∫0^H [g(z)/E(z)] dz (11a)

It is evident from this result that it is always a Winkler-type single-parameter model that evolves for an elastic subgrade as far as the normal stress components alone are taken into account. Eq. (11a) provides a generalized analytical formulation for quantifying the coefficient of subgrade reaction, which depends only on the elastic properties of the subgrade and, implicitly through g(z) of Eq. (7), on the size and shape of the loaded region on the surface as was correctly pointed out by Terzaghi [10]. If the lateral normal stress components are neglected in addition to the vertical shear stresses, g(z) in Eq. (7) reduces to unity and Eq. (11a) takes the simplified form of

ks = 1 / ∫0^H [1/E(z)] dz (11b)

It will be shown in a later section that Eq. (11b) is a unified formulation of the closed-form relations


proposed by Horvath [5] for estimating the subgrade modulus. According to Eq. (11a), all that is demanded to estimate the subgrade modulus is to select an appropriate function g(z) that relates the vertical and horizontal normal stresses with each other and to employ a suitable function E(z) for the variation of the elasticity modulus with depth, if there is a need to do so. The following sections deal with these two functions.

THE ELASTICITY MODULUS FUNCTION, E(z)

The depth-wise variation of E(z) can be taken into consideration in one of two different forms: a power function or an exponential function of z. The most commonly employed option is the power function of z given by

E(z) = B·z^β (12a)

where β is a positive dimensionless constant known as the non-homogeneity parameter, and B is a dimension-bound coefficient [6, 13, 14]. For β = 0, Eq. (12a) represents a homogenous elastic layer with a constant elasticity modulus, B = E0; in this case, the coefficient B takes the dimension of a stress. For β = 1, Eq. (12a) represents a heterogeneous soil layer with a linearly varying elasticity modulus; the coefficient B takes in this case the dimensions of the coefficient of subgrade reaction. For other values of β, B assumes correspondingly different dimensions. Soils whose Young's moduli vary in accordance with Eq. (12a) are often referred to as Gibson soils [6, 14]. For the general case of a heterogeneous layer (β ≠ 0), Eq. (12a) gives a zero value for the elasticity modulus at the surface (z = 0). This condition poses some difficulties in the evaluation of the integral in Eq. (11a) at z = 0 when β = 1. A more suitable variant for such a case is the form

E(z) = E0 + B·z^β (12b)

This alternative formulation enables the assignment of a non-zero elasticity modulus of E = E0 at z = 0. In both Eqs. (12a) and (12b), it is common to use β = 1 for clayey soils and β = 1/2 for granular soils [13]. The other form of variation of the elasticity modulus that has found some usage in the past is the exponential function given by [13]

E(z) = E0·e^(λz) (13)

where E0 is the elasticity modulus at the surface (for z = 0) and λ is a non-negative quantity having the dimension of m^-1. Appropriate values of λ can easily be obtained by matching plots of Eqs. (13) and (12b). For reasons of mathematical convenience, only the relations in Eqs. (12b) and (13) are used further in this work. Since the other important function in the estimation of ks using Eq. (11a) is g(z), two alternative forms of this function that lead to two correspondingly different Winkler-type models are discussed next.
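Because Eq. (11a) casts the whole model as a single definite integral, ks can be evaluated numerically for any admissible pair of functions g(z) and E(z), including the forms in Eqs. (12b) and (13). The following minimal Python sketch is an illustration added here, not part of the original derivation (the helper name subgrade_modulus is hypothetical):

from scipy.integrate import quad

def subgrade_modulus(g, E, H):
    # Eq. (11a): ks = 1 / integral_0^H g(z)/E(z) dz
    integral, _ = quad(lambda z: g(z) / E(z), 0.0, H)
    return 1.0 / integral

# Sanity check: a homogenous layer (E = E0) with the lateral stresses
# neglected (g = 1) must reduce to ks = E0/H, i.e. Horvath's Eq. (26).
E0, H = 40e3, 5.0   # illustrative values, kN/m^2 and m
print(subgrade_modulus(lambda z: 1.0, lambda z: E0, H), E0 / H)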

WINKLER-TYPE CONTINUUM MODEL VARIANT I

In the development of this particular model, the functions gx and gy in Eq. (4) are assumed constant so that

σx = kx·σz ; σy = ky·σz (14)

where kx and ky are constants that are presumed to be estimated from knowledge of lateral earth pressure theories. Substituting Eq. (14) in Eq. (7) and this in turn in Eq. (11a), one obtains

ks = 1 / [α·∫0^H dz/E(z)] (15)

where α is a constant given by

α = 1 - ν(kx + ky) (16)

If, as a special case, the assumption is made that kx = ky = k0 (the coefficient of lateral pressure for the at-rest condition) and it is noted that this coefficient can be expressed as k0 = ν/(1 - ν), then the factor α in Eq. (16) becomes dependent only on the Poisson ratio and takes the form

α = (1 - ν - 2ν^2)/(1 - ν) (17)

In plane-strain problems, the assumption of kx = k0 alone is sufficient, because it can easily be shown in this case that kx = ky = k0. It may thus be expected that Eq. (17) can give reasonable results for strip foundations and for rectangular foundations with large aspect ratios.


Stratum with Constant E

The case of a homogenous stratum corresponds to B = 0 in Eq. (12b) and λ = 0 in Eq. (13), both of which give a constant modulus of elasticity, E = E0. Evaluation of the integral in Eq. (15) then yields

ks = E0/(αH) (18)

Stratum with Variable E

Substituting Eqs. (12b) and (13) in Eq. (15) and performing the respective integrals, one obtains for the heterogeneous layer, depending on the type of E(z) used,

For E(z) = E0 + B·z^β:

ks = B / [α·ln((E0 + BH)/E0)] for β = 1;

ks = B^2 / {2α·[B·√H - E0·ln((E0 + B·√H)/E0)]} for β = 1/2 (19)

For E(z) = E0·e^(λz):

ks = λE0 / [α·(1 - e^(-λH))] (20)

WINKLER-TYPE CONTINUUM MODEL VARIANT II

This model is motivated by observations of plots of the depth-wise variation of the horizontal-to-vertical normal stress ratio underneath uniformly loaded circular and square regions on the surface of both a layered and a non-layered elastic half space, as presented in Fig. 2 [4, 9, 15]. Accordingly, this ratio can be represented by a decaying exponential function of z as shown in the figure for a typical vertical plane through the loaded region. In this figure, nz and nr represent the normal stresses in the vertical and radial directions, respectively; b is the radius of the loaded circular region; s is the radial coordinate according to the cylindrical coordinate system, the origin of which is located at the center of the circle. Based on observations of the trend in Fig. 2, the functions gx(z) and gy(z) are taken as

gx(z) = rx·e^(-ζz) ; gy(z) = ry·e^(-ζz) (21)

where the constants rx and ry, and the dimension-bound ζ, can be established from best-fitting curves for the plots of the horizontal-to-vertical normal stress ratio. These constants take the values rx = ry = 0.8 and ζ = 3.96/H for a typical vertical plane and are indicated in Fig. 2 [9]. With Eq. (21) substituted in Eq. (7), and this further in Eq. (11a), and noting that r = rx + ry, one obtains

ks = 1 / ∫0^H [(1 - νr·e^(-ζz))/E(z)] dz (22)

Stratum with Constant E

For a homogenous soil layer with a constant E = E0, evaluation of the integral in Eq. (22) leads to

ks = E0 / [H - (νr/ζ)·(1 - e^(-ζH))] (23)

Figure 2 Plots of typical vertical and horizontal stresses and their ratios with depth for a circular region subjected to a uniformly distributed load together with the best-fitting curve for σr/σz [9] (stress ratio plotted against depth ratio; curves shown: nz and nr for s/b = 0.6, their ratio nr/nz, and the fit 0.8exp(-1.98z/b))


Stratum with Variable E

For the general case of a heterogeneous stratum, the use of the exponential function of Eq. (13) for E(z) is much more convenient for evaluating the integral of Eq. (22) than the use of Eq. (12b). This yields

ks = λE0 / {(1 - e^(-λH)) - [νrλ/(ζ + λ)]·(1 - e^(-(ζ+λ)H))} (24)
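Equation (24) can likewise be verified against direct quadrature of Eq. (22) with E(z) = E0·e^(λz); the sketch below (added here, with illustrative values) should print two matching numbers:

from math import exp
from scipy.integrate import quad

E0, lam, H, nu, r = 40e3, 0.2, 5.0, 0.3, 1.6
zeta = 3.96 / H

ks_24 = lam * E0 / ((1.0 - exp(-lam * H))
                    - nu * r * lam / (zeta + lam) * (1.0 - exp(-(zeta + lam) * H)))

chk = 1.0 / quad(lambda z: (1.0 - nu * r * exp(-zeta * z)) * exp(-lam * z) / E0,
                 0.0, H)[0]
print(ks_24, chk)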

It is also possible to use Eq. (12b) for the variation of E. However, the evaluation of the integral in Eq. (22) then leads to complicated expressions for the coefficient of subgrade reaction. For example, for a linearly varying E (i.e., β = 1), ks takes the series form

ks = B / {(1 - νr·e^a)·ln[(E0 + BH)/E0] - νr·e^a·Σ(n=1..∞) [(-1)^n/(n·n!)]·(b^n - a^n)}, with a = ζE0/B and b = ζ(E0 + BH)/B (25)

The expression for ks becomes even much more complicated in the case of β = 1/2. Obviously, Eq. (24) is much simpler to use than Eq. (25) and demands only selecting appropriate values of λ that give variations of E(z) sufficiently closely matching the power function when β takes the respective values of 1 and 1/2. The foregoing two sections presented two different Winkler-type models together with the corresponding closed-form relations for ks, based on two different forms of assumed lateral-to-vertical normal stress ratio distribution with depth. It is important to note, however, that the approach enables the development of as many such models as the number of different assumptions made.

COMPARISON WITH SIMILAR PREVIOUS STUDIES

A similar study by Horvath [5] employed the same simplified-continuum idealization of the subgrade, but with only σz taken into account and all other stress components neglected. The cases of both constant and variable E were considered. The power function of z in Eq. (12b) with β = 1 and β = 1/2 was used to account for the variation of E.

The subgrade models obtained were all Winkler-type, similar to Eq. (10), and given by

For constant E (E = E0):

ks = E0/H (26)

For variable E (E(z) = E0 + B·z^β):

ks = B / ln((E0 + BH)/E0) for β = 1;

ks = B^2 / {2·[B·√H - E0·ln((E0 + B·√H)/E0)]} for β = 1/2 (27)

Equations (26) and (27) can be obtained directly from Eq. (11b) or be retrieved from Eqs. (18) and (19), respectively, of the models presented in this work, when α = 1 - a case that corresponds to kx = ky = 0, or to a condition of zero lateral normal stresses. It is to be noted that Eqs. (18) and (19) are derived using the same power function of z of Eq. (12b). If Eqs. (18) and (19) are now normalized with respect to the respective moduli of Eqs. (26) and (27), one obtains in both cases 1/α - a parameter dependent on the soil Poisson ratio. Therefore, the model presented in this work based on the power function of z for E always gives a Winkler-type subgrade model with a spring stiffness that is 1/α times the stiffness of the corresponding model of Horvath, irrespective of how E varies with depth. The plot of 1/α against ν is provided in Fig. 3 together with that of Horvath (1/α = 1), where k0 = ν/(1 - ν) is used for both kx and ky. This plot is referred to as New Variant I in Fig. 3.


Figure 3 Plots of the normalized subgrade modulus ks/(E0/H) against the Poisson ratio according to different models (curves: Horvath, New Variant I, New Variant II, Vesic (H=B), Vesic (H=2B), Vlasov)

Figure 3 shows that the difference between the spring stiffness of the Winkler-type model New Variant I and the corresponding model of Horvath increases with increasing Poisson ratio for both the homogenous and heterogeneous cases, but becomes indeterminate at ν = 0.5 - the case of an incompressible fluid. The indeterminacy is attributed to the nature of the definition of k0 employed, which is expressed in terms of ν. Equation (26) of Horvath can also be obtained from Eq. (23) by substituting r = 0 - a case that corresponds once again to zero lateral normal stresses. However, Eq. (27) cannot be retrieved from Eq. (24) by substituting r = 0, because the exponential function used to represent the variation of E in this case is different from the power function used by Horvath [5]. Similarly, Eq. (23) can be normalized with respect to Eq. (26) to get

ks/(E0/H) = 1 / {1 - [νr/(ζH)]·(1 - e^(-ζH))} (28a)

Using the values of r and ζ suggested in Eq. (21), this relation simplifies to

ks = E0 / [(1 - 0.4ν)·H] (28b)

The plot of this relation is also given in Fig. 3 as New Variant II, which shows that this second model presented based on a decaying exponential function for the lateral-to-vertical normal stress ratio also gives consistently larger spring stiffness that increases with increasing ν, but not as large as in the first model.
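The normalized moduli plotted in Fig. 3 follow directly from Eqs. (18), (26) and (28b), together with the simplified right-hand side of Eq. (29) below for Vesić. A small sketch added here for illustration (the function name and dictionary labels are hypothetical; Vesić's curve uses ks ≈ E0/(B(1 - ν^2)) with H = 2B):

def normalized_moduli(nu, H_over_B=2.0):
    # All values are ks / (E0/H)
    k0 = nu / (1.0 - nu)
    a = 1.0 - 2.0 * nu * k0                        # Eq. (17)
    return {
        "Horvath":        1.0,                     # Eq. (26)
        "New Variant I":  1.0 / a,                 # Eq. (18)
        "New Variant II": 1.0 / (1.0 - 0.4 * nu),  # Eq. (28b)
        "Vesic":          H_over_B / (1.0 - nu ** 2),  # Eq. (29), simplified
    }

for nu in (0.25, 0.35, 0.45):
    print(nu, normalized_moduli(nu))
# New Variant I overtakes the Vesic (H = 2B) curve at about nu = 0.42 (cf. text).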

Vesić [11] proposed an analytical relation for computing the modulus of the homogenous, elastic subgrade in terms of the rigidity of the foundation element, its size, and the elastic subgrade properties. This relation, which is widely used in practice, is given by

ks = (0.65/B)·[E0·B^4/(Ef·If)]^(1/12) · E0/(1 - ν^2) ≈ E0/[B·(1 - ν^2)] (29)

In this equation, EfIf is the foundation rigidity. For relatively large-sized foundations like rafts, the value of the 12th root multiplied by 0.65 is close to 1, so that the simplified expression on the right-hand side is commonly employed [18]. The plots of this expression normalized with respect to E0/H are also given in Fig. 3 for the cases of H = B and H = 2B, the latter case being applicable for a thick stratum or a half space. It can be seen from Fig. 3 that the plot for the case of H = B almost coincides with that of New Variant II, whereas the plot for the case of H = 2B is double that of the case of H = B. Vesić's relation gives the highest estimate for the subgrade modulus for thick formations of most soils (ν ≤ 0.42), whereas New Variant I gives the highest values for soft cohesive soils (ν > 0.42). Furthermore, Vesić's model gives at least double the magnitude of stiffness provided by the new Variant II model of this author for thick strata or a half space. This implies that Vesić's model underestimates deflections, especially for foundations on thick strata. Vlasov and Leontiev presented [1, 16] a subgrade model based on the continuum approach, in which they introduced displacement constraints to simplify the continuum. Their model takes into account the shear interaction missing in the Winkler model. In the case of an assumed linear variation of the vertical displacement w(x,z) with respect to depth, which is deemed reasonable according to the authors for relatively shallow


subgrade, their model results in a subgrade modulus given by

ks = 2E0(1 - ν) / [H·(1 + ν)·(3 - 4ν)] (30)

The normalized plot of Eq. (30) is also included in Fig. 3. The plot shows ks increasing with increasing Poisson ratio, though at a very low rate. For all practical purposes, Vlasov's subgrade modulus is the same as that of Horvath, whereas values of Vesić's subgrade modulus for H = B are close to those of New Variant II. The largest estimates of ks are obtained from Vesić's model for H = 2B and from the New Variant I; these two models interchange positions at about ν = 0.42. Following Schmertmann's semi-analytical approach [15, 17] for estimating immediate (elastic) settlement, the thickness of the elastic subgrade stratum can be expressed as H = IzB, where B is the width of the loaded area, and Iz is an influence factor dependent on the relative width of the foundation with respect to the thickness of the subgrade (for a half space, Iz = 2 is taken). Using this approach, which was also employed by Horvath [5], the various relations for ks presented in the preceding sections can be expressed in terms of the foundation width, B, and plotted. Such plots for two representative types of thick subgrade material with Iz = 2 are given in Fig. 4. Figure 4(a) is for a medium dense coarse sand, for which ν = 0.25 and E = 40 MPa are taken. Figure 4(b) is for a medium stiff clay with ν = 0.45 and E = 50 MPa. In the case of the medium dense sand (Fig. 4(a)), Horvath's model, the New Variant II, and Vlasov's model give practically identical values of ks for all sizes of the plate. New Variant I gives somewhat larger values of ks in comparison, but the difference dwindles fast with increasing width of the plate. In contrast, Vesić's subgrade modulus values are consistently larger than those given by the other three models. In the case of the medium stiff clay (Fig. 4(b)), Horvath's model, the New Variant II, and Vlasov's model give ks values that are close to each other for all plate sizes, as in the case of the medium dense sand. New Variant I gives the largest ks values for all foundation widths, with the difference from the rest increasing with decreasing plate width. Vesić's subgrade modulus falls in this case between the values of New Variants I and II.

As can be observed from Eqs. (26) and (27) and Figs. 3 and 4, Horvath's model gives the lowest values of subgrade modulus, which are independent of the Poisson ratio. This is a consequence of the omission of the lateral normal stresses in his highly simplified model. Based on the above comparison, it can be concluded that Horvath's subgrade moduli form the lower bound for the subgrade modulus values that can be estimated using different assumed functions of g(z) in Eq. (7); Vlasov's model gives only slightly higher values of ks, whereas Vesić's model gives consistently higher values of ks, exceeded only by the New Variant I model for soils with ν > 0.42.

INTERPRETATION USING A CLUSTER OF CONTIGUOUS SOIL COLUMNS

Horvath's simplified subgrade [5] can be visualized as a medium made up of a cluster of contiguous short columns (no buckling), each with a height of H and a cross-sectional area of A, that do not interact with each other at all and behave in a uniaxial state of strain. This can easily be verified as follows for both the constant and the variable E. In the case of a constant E, it is evident that the stiffness of such a uniaxial member is given by K = EA/H (Figs. 5(a) and 5(b)). On the other hand, the stiffness of the substitute spring responsible for the tributary area A in a Winkler foundation is given by K = ksA. Equating the two expressions then yields ks = E/H, which is identical to Horvath's result of Eq. (26) for the case of constant E. In the case of a variable E, the following relation is obtained using the uniaxial Hooke's law for the vertical deformation of a typical soil column at the location (x, y) (Fig. 5(b)):

w(x,y,z) = ∫ [σz/E(z)] dz + f2(x,y) + c2 (31)

where f2 and c2 are determined from the boundary conditions. With E(z) = E0 + B·z^β substituted in Eq. (31) for the variation of E, noting that σz = -p(x,y) is constant with respect to depth, and applying the boundary conditions at the two ends of the soil column, Eq. (31) gives expressions for ks identical to Eq. (27) for β = 1 and 1/2. This shows that Horvath's subgrade can be idealized as a cluster of contiguous short soil columns that do not interact with each other in any manner.
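The soil-column picture can be made concrete with a few lines of code (an illustration added here; the function names are hypothetical): equating the column stiffness EA/H with ksA gives ks = E/H, and integrating the column flexibility 1/E(z) over the height reproduces Eq. (27) for a linearly varying E:

from math import log
from scipy.integrate import quad

def ks_column_constant_E(E, H):
    return E / H                      # K = E*A/H equated with K = ks*A

def ks_column_linear_E(E0, B, H):
    # Column flexibility: integral of dz/E(z) with E(z) = E0 + B*z
    return 1.0 / quad(lambda z: 1.0 / (E0 + B * z), 0.0, H)[0]

E0, B, H = 20e3, 4e3, 5.0   # illustrative values
print(ks_column_linear_E(E0, B, H))   # numerical column result
print(B / log((E0 + B * H) / E0))     # Eq. (27), beta = 1: identical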


Figure 4 Plots of subgrade modulus ks (kN/m^3) against foundation width B (m) for: (a) a medium dense, coarse sand (ν = 0.25 and E = 40 MPa); (b) a medium stiff clay (ν = 0.45 and E = 50 MPa) (curves in each panel: Horvath, New Variant I, New Variant II, Vesic, Vlasov)

In contrast to this, the subgrade represented by the models presented in this work is equivalent to the same cluster of contiguous soil columns, but with interaction through the lateral normal stresses as shown in Fig. 5(c). The mathematical proof goes analogously, the main difference being the use of the generalized three-dimensional Hooke's law because of the inclusion of the lateral normal stresses in these models. Thus,

w(x,y,z) = ∫ (1/E(z))·[σz - ν(σx + σy)] dz + f3 + c3 (32)

in which f3 and c3 are dependent on the boundary conditions. The integral in Eq. (32) is identical to that in Eq. (6), and its evaluation leads to the respective expressions for ks presented in the preceding sections for the different cases considered. Due to the propping-up effect of the lateral normal stresses, the present model gives consistently stiffer springs. One can finally draw the important conclusion that the use of any type of relationship between the vertical and the horizontal normal stresses and any form of distribution of E(z) will always result in a Winkler-type subgrade as far as the shear components of the stress tensor are not taken into account.

THE MISSING SHEAR INTERACTION

A more complete representation of the interaction among the soil columns or the Winkler springs can be achieved only if the shear stress components in the stress tensor are taken into account in the formulation of the subgrade model. This can be achieved with relative ease if simplifying, but reasonable, assumptions are made with regard to the depth-wise variation of the vertical shear stress components τxz and τyz, in addition to the assumptions already made with respect to the lateral normal stresses. Such comprehensive considerations result in more complex mathematical models of higher order for the subgrade. Two such models, which are counterparts of the two Winkler-type models presented in this paper, have been proposed by the author in the accompanying paper. Both models are similar second-order partial differential equations with constant coefficients given by

Figure 5 (a) The elastic subgrade idealized as a cluster of closely spaced soil columns (layer of thickness H under surface pressure p(x,y)); (b) a typical soil column without lateral normal stresses; (c) a typical soil column propped up by lateral normal stresses (σx and σx + Δσx)


p(x,y) - c1·∇²p(x,y) = c2·w0(x,y) - c3·∇²w0(x,y) (33a)

where ∇² is the Laplace operator. The constant coefficients c1 to c3 depend on the soil properties and are different for the two models: Counterpart Model to Winkler-Type Variant I:

c1 = αGH^2/(12E) ; c2 = E/(αH) ; c3 = GH/3 (33b)

Counterpart Model to Winkler-Type Variant II:

c1 = a1·GH^2/(12E) ; c2 = a2·E/H ; c3 = a3·GH/3 (33c)

In Eqs. (33b) and (33c), E and G are the Young's modulus and the shear modulus of the homogenous subgrade, respectively, and H is the layer thickness. The coefficients a1, a2, and a3 are dependent on the Poisson ratio of the soil, and α is the same as defined in Eq. (16). Equation (33a) is similar in form and order to the three-parameter mechanical model proposed by Kerr [7]. This mechanical model consists of two beds of springs, one overlying the other, with corresponding subgrade moduli of ku and kl, separated by a shear layer of parameter gk. Its governing differential equation is given by

p(x,y) - [gk/(ku + kl)]·∇²p(x,y) = [ku·kl/(ku + kl)]·w0(x,y) - [ku·gk/(ku + kl)]·∇²w0(x,y) (34)

Through comparison of coefficients in Eqs. (33a) and (34), one can easily express each of the three parameters of the Kerr mechanical model in terms of the known elastic soil properties and the layer thickness for both continuum-based models presented in Eq. (33). Of special interest here is the effective subgrade modulus, ke. Since the two spring beds in Kerr model are arranged in series, the effective subgrade modulus can be expressed as

ke = ku·kl/(ku + kl) (35)

Inserting the relations for ku and kl in Eq. (35), one can easily obtain the following expressions for ke for the two continuum-based models under consideration that take into account the shear interaction [9]:

Counterpart Model to Winkler-Type Variant I:

ke = E/(αH) (36)

Counterpart Model to Winkler-Type Variant II:

ke = E/[(1 - 0.4ν)·H] (37)

Equations (36) and (37) for the effective subgrade modulus of the three-parameter Kerr mechanical model are identical to Eqs. (18) and (23), respectively, of the single-parameter Winkler's mechanical model. It is important to note that a complete subgrade model should take into account the shear interaction, which becomes more significant with increasing relative rigidity of the soil with respect to that of the foundation.
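The coefficient matching between Eqs. (33a) and (34) can be carried out mechanically: c1 = gk/(ku + kl), c2 = ku·kl/(ku + kl) and c3 = ku·gk/(ku + kl) give ku = c3/c1, kl = c2·ku/(ku - c2) and gk = c1·(ku + kl). The sketch below is an illustration added here, with illustrative soil values; the α placement in c1 follows the reconstruction of Eq. (33b) above. It confirms that the series stiffness of the two spring beds equals c2, i.e. Eq. (36):

def kerr_parameters(c1, c2, c3):
    ku = c3 / c1                  # from c3/c1 = ku
    kl = c2 * ku / (ku - c2)      # from c2 = ku*kl/(ku + kl)
    gk = c1 * (ku + kl)
    return ku, kl, gk

E, G, H, a = 40e3, 15e3, 5.0, 0.743          # illustrative values
c1, c2, c3 = a * G * H**2 / (12 * E), E / (a * H), G * H / 3
ku, kl, gk = kerr_parameters(c1, c2, c3)
ke = ku * kl / (ku + kl)                     # Eq. (35)
print(ke, c2)                                # equal: ke = E/(alpha*H), Eq. (36)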

CONCLUSIONS

The presented work shows that a Winkler-type model will always evolve for an elastic subgrade as far as the shear components of the stress tensor are omitted, regardless of whether all or part of the normal stress components are taken into consideration. A generalized analytical formulation in the form of a definite integral for evaluating the coefficient of subgrade reaction is provided by accounting for all normal stresses. The variation with depth of the elasticity modulus is taken into consideration. It is only required to make a reasonable assumption on the function g(z) relating the normal stresses, which is at the discretion of the user. With the introduction of two such functions, the paper provided closed-form relations for estimating the subgrade reaction for both constant and variable elasticity modulus. A comparison with previously proposed simplified models shows some notable differences in the coefficient of subgrade reaction that generally decrease with decreasing Poisson ratio of the subgrade material and with increasing width of the foundation. With the introduction of H = IzB in the expressions derived for ks, it is possible to calibrate the Winkler-type models so that they give results that are in good agreement with finite-element-based models by conducting a numerical study. Such a work has been completed recently and suggested the use of a calibrating factor Iz = 2.8 to 3 for beams on elastic foundations. Similar studies for plates are underway.


The work has also shown that the effective spring stiffness per unit area of higher order models remains the same as the subgrade modulus of the single-parameter Winkler's model as far as the way the normal stresses are considered remains the same regardless of the subgrade shear stresses.

REFERENCES

[1] Selvadurai, A. P. S. Elastic Analysis of Soil-Foundation Interaction. Elsevier Scientific Publishing Company, New York, 1979.

[2] Hetényi, M. Beams on Elastic Foundation. University of Michigan Press, Ann Arbor, 1946.

[3] Horvath, J. S. Basic SSI Concepts and Applications Overview. Soil-Structure Interaction Research Project, Report No. CGT-2002-2, Manhattan College, School of Engineering, New York, 2002.

[4] Horvath, J. S. New Subgrade Model Applied to Mat Foundations. Journal of Geotechnical Engineering, ASCE, 1983, 109 (12); 1567-1587.

[5] Horvath, J. S. Modulus of Subgrade Reaction: New Perspective. Journal of Geotechnical Engineering, ASCE, 1983, 109 (12); 1591-1596.

[6] Wang, H., Tham, L. G., and Cheung, Y. K. Beams and Plates on Elastic Foundations: a Review. Progress in Structural Engineering and Materials, John Wiley, 2005, 7; 174-182.

[7] Kerr, A. D. Elastic and Viscoelastic Foundation Models. Journal of Applied Mechanics, ASME, 1964, 25 (80); 491-498.

[8] Reissner, E. A Note on Deflections of Plates on a Viscoelastic Foundation. Journal of Applied Mechanics, ASME, 1958, 25 (80); 144-145.

[9] Worku, A. New Variants of Continuum-based Models for an Elastic Subgrade. Submitted for publication in Zede, Journal of Ethiopian Engineers and Architects.

[10] Terzaghi, K. Evaluation of Coefficients of Subgrade Reaction. Geotechnique, 1955, 5 (4); 297-326.

[11] Vesić, A. S. Bending of Beams Resting on Isotropic Elastic Solid. Journal of the Engineering Mechanics Division, ASCE, 1961, 87 (EM2); 35-53.

[12] Das, B. M. Principles of Foundation Engineering. PWS Publishing Company, Boston, 1995.

[13] Gibson, R. E. Some Results Concerning Displacements and Stresses in a Non-homogenous Elastic Half-space. Geotechnique, 1967, 17; 58-67.

[14] Stark, R. F. and Booker, J. R. Surface Displacements of a Non-homogenous Elastic Half-space Subjected to Uniform Surface Tractions. Part I: Loading on Arbitrarily Shaped Areas. International Journal for Numerical and Analytical Methods in Geomechanics, John Wiley, 1997, 21; 361-378.

[15] Das, B. M. Advanced Soil Mechanics. McGraw-Hill Book Company, New York, 1983.

[16] Vlasov, V. Z. and Leontiev, U. N. Beams, Plates, and Shells on Elastic Foundations. Israel Program for Scientific Translations, Jerusalem (translated from Russian), 1966.

[17] Schmertmann, J. H. Static Cone to Compute Static Settlement over Sand. Journal of the Soil Mechanics and Foundations Division, ASCE, 1970, 96 (SM3); 1011-1043.

[18] Bowles, J. E. Foundation Analysis and Design. McGraw-Hill, New York, Fifth Edition, 1996.


PROPOSED HIGHER ORDER CONTINUUM-BASED MODELS FOR AN ELASTIC SUBGRADE

Asrat Worku

Department of Civil Engineering
Addis Ababa University

ABSTRACT

Three new variants of continuum-based models for an elastic subgrade are proposed. The subgrade is idealized as a homogenous, isotropic elastic layer of thickness H overlying a firm stratum. All components of the stress tensor in the subgrade are taken into account. Reasonable assumptions are made regarding the depth-wise variation of the vertical shear stress components and of the horizontal-to-vertical normal stress ratios to simplify the mathematical work. The assumptions are based on observation of available analytical results of stress distributions and on knowledge of lateral earth pressure theories. The resulting differential equations are similar in form and order to a high-order model developed earlier by Reissner on the basis of a number of simplifying assumptions, but with different coefficients dependent on the Poisson ratio. With the help of appropriately selected mechanical models, it has been shown that all of the new model variants consistently give larger effective vertical stiffness and larger shear interaction among the classical Winkler springs for the range of Poisson ratio of practical interest.

Keywords: Continuum models, Kerr model, Mechanical models, Reissner model, Shear interaction, Winkler model.

INTRODUCTION

Subgrade models developed so far can be categorized into two classes: continuum-based and mechanical models. A number of mechanical models have been proposed in the past having varying degrees of mathematical complexity and different numbers of model parameters [1-6]. These models range from the classical single-parameter Winkler model to the multi-parameter models of Rhines [3,4]. In contrast, continuum-based subgrade models proposed in the past are relatively few in number. Reissner's simplified continuum model (RSCM) can be regarded as one of the pioneering works based on some direct simplifying assumptions [3,4,7,8]. Horvath later used two modifications of the Reissner model to study the behavior of mat foundations [3,7]. Vlasov and Leont'ev presented a relatively indirect application

of the simplified continuum that involves variational calculus [9]. The RSCM makes the simplifying assumption of zero in-plane stresses; i.e., σx = σy = τxy = 0 [8]. The works of Horvath are also based on the same assumption [3,7]. The consequence of this assumption is that the vertical shear stress components become constant and the vertical normal stress component varies linearly with depth. This assumption considerably simplifies the mathematical work needed in arriving at the mathematical model, which is a second-order differential equation with constant coefficients. However, it underestimates both the vertical stiffness of the subgrade and the inherent shear interaction, rendering the model one of the most conservative. The present work follows Reissner's approach, but without neglecting any stress components. Instead, three different combinations of assumptions are made with regard to the lateral normal and the vertical shear stress components, resulting in three correspondingly different variants of subgrade models. In the first model variant, a linear relationship between the horizontal and the vertical normal stress components is assumed. This assumption has as its basis the classical theory of lateral earth pressure, in which linear relationships are assumed between the normal stresses, whether one deals with the active, passive, or at-rest lateral earth pressure condition. The other assumption in this model is a constant depth-wise variation of the vertical shear stress components. In the second model variant, whereas the first assumption of the first variant with respect to the horizontal normal stresses is maintained, the second assumption with respect to the vertical shear stresses is improved by assuming a bilinear depth-wise variation. This second assumption is introduced as a reasonable approximation of the nonlinear variation of the shear stresses observed in plots of analytical results available in the literature [10].


The third model variant is derived by maintaining the second assumption in the second variant with respect to the vertical shear stresses and introducing an exponentially decaying function to approximate the vertical variation of the ratio of the horizontal to the vertical normal stresses. Once again, plots of available analytical results of the normal stresses are used to support this assumption [10]. Interestingly, all of the three model variants result in differential equations similar in form and order to that of RSCM, but exhibiting notable differences in their coefficients. A synthesis of these models with equivalent mechanical models helps in interpreting the differences observed in the coefficients of the differential equations. For this purpose, the Kerr mechanical model is used. Comparisons show that all of the new model variants brought about increases in the vertical stiffness of the subgrade and the shear interaction among the Winkler springs. There is a trend of increasing interest in numerical and analytical studies of beams and plates on elastic foundations using such models [11-15]. Similar studies on beams and plates using various subgrade models that include the variants reported in this work are currently underway with the objective of corroborating the findings presented here based on theoretical considerations alone. Numerical results found so far are encouraging. It appears possible to calibrate these models so that they give results in excellent agreement with finite-element based models.

THE PROPOSED MODELS

In all model variants proposed in this work, the subgrade under consideration is similar to that of the RSCM, which consists of a homogenous, isotropic elastic layer of thickness H underlain by a rigid formation, as shown in Fig. 1. The layer is characterized by its elastic parameters of Young's modulus, E, and Poisson's ratio, ν [8]. At the depth H, all deformation components are assumed zero.

Figure 1 The elastic soil layer of thickness H overlying a rigid stratum (isotropic elastic stratum: E, ν; surface pressure p(x,y))

The difference between the stress tensors of the RSCM and the present model variants is as shown in the arrays below in accordance with the Cartesian coordinate system:

RSCM:

[ 0     0     τxz ]
[ 0     0     τyz ]
[ τxz   τyz   σz  ]

New Variants:

[ σx    τxy   τxz ]
[ τxy   σy    τyz ]
[ τxz   τyz   σz  ]

In the RSCM, the three independent in-plane stress components are neglected (i.e., σx = σy = τxy = 0). As a consequence, the two remaining shear stress components, τxz and τyz, become constant with respect to depth, and the only non-zero normal stress component, σz, varies linearly with depth. In contrast, no stress component is neglected in any of the models proposed in this work, as shown in the array on the right-hand side. The proposed three models differ from each other according to the assumptions made with regard to the depth-wise variation of the vertical shear stress components, τxz and τyz, and the ratio of the lateral normal stress components, σx and σy, to the vertical normal stress, σz. Simplifying assumptions with regard to the rest of the stress components are unnecessary.

Model variant 1

This model, referred to simply as Variant 1, is based on the following two basic assumptions:

i. The horizontal normal stresses, σx and σy, are

linearly related to the vertical normal stress, σz; i.e., σx = kx·σz and σy = ky·σz. Knowledge of lateral earth pressure theories motivated this assumption and can be utilized to reasonably estimate the values of the coefficients kx and ky. Specifying values for the coefficients in advance is not necessary for the derivation of the model.

ii. The vertical shear stress components, τxz and

τyz, are assumed constant with depth. This assumption is made only for the sake of mathematical ease and in order to show that RSCM is a simplified form of this model variant. This assumption results in a linear depth-wise variation of the vertical normal stress, σz.



Pertinent stress equations, strain-displacement relations, Hooke's law for isotropic elastic materials, and the boundary conditions at the ground surface and at the interface with the rigid formation are utilized to derive the mathematical model, the details of which are presented in Appendix A. The resulting subgrade model relating the surface pressure p(x, y) and the corresponding surface displacement w0(x, y) is given by

$$p(x,y) \;-\; \frac{\alpha G H^2}{12E}\,\nabla^2 p(x,y) \;=\; \frac{E}{\alpha H}\,w_0(x,y) \;-\; \frac{GH}{3}\,\nabla^2 w_0(x,y) \qquad (1a)$$

In this second order partial differential equation with constant coefficients, G is the shear modulus of the upper layer, and the coefficient α is given by the relation

$$\alpha = 1 - \nu\,(k_x + k_y) \qquad (2)$$

For kx = ky = 0 (i.e., zero lateral normal stresses), the coefficient α becomes unity, and Eq. (1a) reduces to the form of the RSCM [8] given by

$$p(x,y) \;-\; \frac{G H^2}{12E}\,\nabla^2 p(x,y) \;=\; \frac{E}{H}\,w_0(x,y) \;-\; \frac{GH}{3}\,\nabla^2 w_0(x,y) \qquad (1b)$$

If the higher derivatives in Eqs. (1a) and (1b) are dropped, Winkler-type models are obviously recovered. If the coefficients kx and ky are assumed equal to the lateral earth pressure coefficient at rest, k0, and noting that this coefficient can be expressed as $k_0 = \nu/(1-\nu)$, then α in Eq. (1a) takes the form

$$\alpha = \frac{1 - \nu - 2\nu^2}{1 - \nu} \qquad (3)$$
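As a quick numerical illustration of Eqs. (2) and (3), the short Python sketch below, which is an illustrative addition rather than part of the original derivation, evaluates α for the at-rest case kx = ky = k0 = ν/(1 − ν) over the common range of Poisson ratios:

```python
# Sketch: evaluate the coefficient alpha of Eq. (1a) for the at-rest case
# k_x = k_y = k0 = nu/(1 - nu), i.e. Eq. (3). Illustrative only.

def alpha(kx, ky, nu):
    """Eq. (2): alpha = 1 - nu*(kx + ky)."""
    return 1.0 - nu * (kx + ky)

def alpha_at_rest(nu):
    """Eq. (3): alpha = (1 - nu - 2*nu**2)/(1 - nu), obtained with kx = ky = k0."""
    k0 = nu / (1.0 - nu)          # lateral earth pressure coefficient at rest
    return alpha(k0, k0, nu)

for nu in (0.0, 0.2, 0.3, 0.4, 0.45, 0.499):
    print(f"nu = {nu:5.3f}:  alpha = {alpha_at_rest(nu):6.3f}")
# alpha decreases from 1.0 toward 0 as nu approaches 0.5, so the Winkler-type
# stiffness E/(alpha*H) of Variant 1 grows without bound in that limit.
```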

According to this relation, α is always less than one, tending to zero as ν approaches 0.5, at which point the Winkler-type stiffness E/(αH) in Eq. (1a) becomes unbounded. However, the coefficients kx and ky can generally be selected different from k0, and k0 can also be expressed in terms of other soil properties like the strength parameters or the degree of consolidation.

Model variant 2

This model, referred to as Variant 2, is based on the following assumptions:

i. The horizontal normal stresses, σx and σy, are linearly related to the vertical normal stress, σz, as in Variant 1.

ii. The vertical shear stress components, τxz and τyz, are assumed to vary with depth according to a bilinear relation. This assumption emanates from observation of plots of the stress components underneath a uniformly loaded circular region on the surface of an elastic half space. These plots are prepared from tabular values provided by Das [10], citing the works of Ahlvin and Ullery, and are given in Fig. 2(a) for different vertical planes.

Figure 2: (a) Plots of the vertical shear stress ratio τ/q against normalized depth z/b for a circular region subjected to a uniformly distributed load q, for radial offsets s/b = 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, and 1.5; (b) a qualitative plot of the influence factor I(z)


In Fig. 2(a), b is the radius of the loaded circular region and s is the radial coordinate in accordance with the cylindrical coordinate system, the origin of which is taken as the center of the circle. All the curves in Fig. 2(a) exhibit a similar trend, and their peaks occur within a narrow band of z. An exception is the curve for points below the edge of the circle due to the discontinuity there. Plots of shear stresses exhibiting a similar trend are also reported by Horvath for points beneath a loaded square region on the surface of a layered formation [7]. The plots in both Fig. 2(a) and in Horvath [7] suggest the use of the following bilinear variation for the vertical shear stress components with depth:

$$\tau_{xz}(x,y,z) = I(z)\,\bar{\tau}_{xz}(x,y) = \begin{cases} b_1 z\;\bar{\tau}_{xz}(x,y), & 0 \le z \le \eta H \\ (a_2 + b_2 z)\;\bar{\tau}_{xz}(x,y), & \eta H \le z \le H \end{cases} \qquad (4)$$

where $\bar{\tau}_{xz}(x,y)$ gives the variation of τxz in the horizontal x,y-plane, and I(z) is the depth-dependent influence factor assumed to vary bilinearly; an identical form is used for τyz. A qualitative plot of I(z) is given in Fig. 2(b). This assumption results in a quadratic variation of the vertical stress, σz, unlike the linear variation in Variant 1 and the RSCM. The coefficients a2, b1, and b2 characterizing the two line segments of I(z) can be readily determined from the known coordinates of the three points in Fig. 2(b): the origin, the peak, and the lower-most point (as illustrated in the sketch following the discussion of Eq. (5b)). As in Variant 1, all pertinent elasticity equations together with the boundary conditions are employed to derive the mathematical model, the details of which are given in Appendix B. In addition, continuity of deformation at the depth z = ηH is taken into consideration. The corresponding mathematical model of the subgrade becomes

$$p(x,y) \;-\; \frac{\alpha\beta\,\xi_p G H}{E}\,\nabla^2 p(x,y) \;=\; \frac{E}{\alpha H}\,w_0(x,y) \;-\; \beta\,\xi_w G H\,\nabla^2 w_0(x,y) \qquad (5a)$$

where $\xi_p = \xi_1/\xi_3$ and $\xi_w = \xi_2/\xi_3$. The coefficients

β, ξ1, ξ2, and ξ3 in Eq. (5a) are provided in Appendix B. The coefficient α is as defined in Eq. (2) for the first new variant. For thick elastic strata, the depth-wise variation of the vertical shear stresses shown in Fig. 2 for an elastic half space may be taken as sufficiently representative for most foundations. The thickness H may be taken as that beyond which the shear stress becomes negligibly small. The corresponding value of IH at this depth may be taken as around 0.1. Furthermore, the normalized average depth at which the shear stresses assume peak values may be taken as η = 0.6 in both cases. With the peak and the lower-most points in Fig. 2(b) so established, the influence curve for the vertical shear stresses is fully defined, and the various constants in the coefficients of Eq. (5a) can be readily evaluated from the relations given in Appendix B. Substituting these coefficients, one obtains the following relation for the subgrade model sought:

$$p(x,y) \;-\; \frac{2.35\,\alpha G H^2}{12E}\,\nabla^2 p(x,y) \;=\; \frac{E}{\alpha H}\,w_0(x,y) \;-\; \frac{2.2\,GH}{3}\,\nabla^2 w_0(x,y) \qquad (5b)$$

Comparison of Eq. (5b) with Eqs. (1a) and (1b) shows that the three models are still similar in form but exhibit differences in their coefficients. If the higher derivatives in Eq. (5b) are dropped, one obtains the same Winkler-type model as in Variant 1, because the two models differ from each other only in the depth-wise variation of the vertical shear stresses, a condition irrelevant in the Winkler model, which ignores the shear stresses entirely. If the coefficients kx and ky are assumed equal to the lateral earth pressure coefficient at rest, k0, as in the previous model, then α can be determined from Eq. (3) for any value of ν. The numerical determination of the influence-factor constants is sketched below.
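The determination of the constants b1, a2, and b2 of the bilinear influence factor I(z) can be sketched numerically as follows. The sketch assumes, as suggested by Fig. 2(b), that I(z) is normalized to a peak value of 1.0 at z = ηH (this normalization is an assumption made here for illustration) and uses the values η = 0.6 and IH = 0.1 adopted in the text:

```python
# Sketch: constants of the bilinear influence factor I(z) of Eq. (4),
# from the three known points of Fig. 2(b): the origin (0, 0), the peak
# (eta*H, I_peak) and the lower-most point (H, I_H).
# The peak value I_peak = 1.0 is an assumed normalization.

def bilinear_coeffs(eta=0.6, I_H=0.1, I_peak=1.0, H=1.0):
    b1 = I_peak / (eta * H)                # first segment: I(z) = b1*z
    b2 = (I_H - I_peak) / (H - eta * H)    # slope of the second segment
    a2 = I_peak - b2 * eta * H             # intercept so that I(eta*H) = I_peak
    return b1, a2, b2

b1, a2, b2 = bilinear_coeffs()
print(f"b1*H = {b1:.3f}, a2 = {a2:.3f}, b2*H = {b2:.3f}")
# With eta = 0.6 and I_H = 0.1 this gives a2 = 2.35, the same numerical
# constant that appears in the nabla^2 p coefficient of Eq. (5b).
```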

Model variant 3

In this variant, the second assumption in Variant 2 of a bilinear variation of the vertical shear stress components with depth is maintained, while the first assumption of a linear relation between the normal stress components is replaced by decaying exponential functions of the form:

$$\sigma_x = r_x\,e^{-\zeta z}\,\sigma_z\,; \qquad \sigma_y = r_y\,e^{-\zeta z}\,\sigma_z \qquad (6)$$

where rx, ry, and ζ are constants. This assumption is motivated by observation of plots of σr/σz for points directly below a circular region of diameter equal to the layer thickness on the surface of an elastic half space subjected to a uniformly distributed vertical load, where σr is the radial normal stress and σz is the vertical normal


stress [10]. An exponential function of the type of Eq. (6) tallies excellently with the plots, from which the constants rx, ry, and ζ can be easily estimated. Horvath gave plots of the normal stresses for points below a vertically loaded square region of side length B on a layered half space [3]. These plots also suggest the use of relations similar to Eq. (6) for depths of up to around 0.7B, especially for points directly below the loaded region. All pertinent elasticity equations together with the boundary and continuity conditions are employed to derive the mathematical model, the details of which are described in Appendix C. The resulting mathematical model takes the form

$$p(x,y) \;-\; \frac{\lambda\,\delta\,\kappa_p\,G}{E}\,\nabla^2 p(x,y) \;=\; \frac{E}{\lambda(1+\delta)H}\,w_0(x,y) \;-\; \lambda\,\delta\,\kappa_w\,G\,\nabla^2 w_0(x,y) \qquad (7a)$$

where $\kappa_p = \kappa_1/\kappa_3$ and $\kappa_w = \kappa_2/\kappa_3$. Expressions for the coefficients λ, δ, κ1, κ2, and κ3 in Eq. (7a) are given in Appendix C. Observation of the plots suggests the use of the following function:

$$\sigma_x = \sigma_y = 0.8\,e^{-3.96\,z/H}\,\sigma_z$$

The same bilinear depth-wise variation of the vertical shear stresses shown in Fig. 2 is taken, with IH = 0.1 and η = 0.6. Once the depth-wise variations of the lateral normal stresses and of the vertical shear stresses are known, the coefficients in Eq. (7a) can be readily evaluated in terms of the Poisson's ratio and the layer thickness H using the relations provided in Appendix C. Accordingly, one obtains

$$\delta = -0.404\,\nu \qquad (7b)$$

together with the corresponding numerical expressions for λ, κp, and κw in terms of the Poisson ratio ν and the thickness H that follow from the relations of Appendix C (κp scales with H² and κw with H). The numerical coefficients in these expressions are dimensionless quantities that depend only on the Poisson ratio and can be easily evaluated. Once again, the differences between the coefficients of this and the previous mathematical models are to be noted. A numerical check of δ is sketched below.
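For a quick check of the first relation of Eq. (7b), the sketch below evaluates δ from Eqs. (6) and (A18) using the fitted stress-ratio function σx = σy = 0.8 e^(−3.96 z/H) σz, i.e. rx = ry = 0.8 and ζH = 3.96. This is an illustration of the stated relations only, not part of the original derivation:

```python
import math

# Sketch: the coefficient delta of Variant 3, from the fitted decay function
# sigma_x = sigma_y = 0.8*exp(-3.96*z/H)*sigma_z, i.e. r_x = r_y = 0.8 and
# zeta*H = 3.96 (Appendix C: delta = -nu*r/(zeta*H), with r = r_x + r_y).

r_x = r_y = 0.8
zeta_H = 3.96
r = r_x + r_y                      # Eq. (A28)

for nu in (0.3, 0.45):
    delta = -nu * r / zeta_H       # Eq. (A18)
    print(f"nu = {nu}: delta = {delta:.4f}  (= -0.404*nu = {-0.404*nu:.4f})")

# The ratio of lateral to vertical normal stress decays quickly with depth:
for z_over_H in (0.0, 0.35, 0.7):
    ratio = 0.8 * math.exp(-zeta_H * z_over_H)
    print(f"z/H = {z_over_H}: sigma_x/sigma_z = {ratio:.3f}")
```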

One can note from the foregoing material that all the proposed continuum-based models have the same form and order as the RSCM. All involve the functions p and w0 and their respective second derivatives. This trend suggests that the occurrence of other terms and of higher orders of derivatives is unlikely with further refinement of the assumptions. A recently published work of the author that generalizes the presented approach of subgrade modeling showed that the maximum order of the differential equation and its general form indeed remain unchanged [15].

SYNTHESIS OF THE PROPOSED CONTINUUM-BASED MODELS WITH A PERTINENT MECHANICAL MODEL

The synthesis of continuum models with pertinent mechanical models provides a means of quantifying the mechanical model parameters from those of the continuum. It also provides a useful perspective for comparing the continuum models with each other. Considering the form and order of the governing differential equations of the subgrade models presented above, the most appropriate mechanical model for this purpose is the three-parameter Kerr model [4]. This model consists of two beds of springs with spring constants ku and kl per unit area, separated by a shear layer of parameter gk, as shown in Fig. 3.

Figure 3: The Kerr (or modified-Pasternak) mechanical model

The governing differential equation of this model, as derived by Kerr, is given by [4]

$$p(x,y) \;-\; \frac{g_k}{k_u + k_l}\,\nabla^2 p(x,y) \;=\; \frac{k_u\,k_l}{k_u + k_l}\,w_0(x,y) \;-\; \frac{k_u\,g_k}{k_u + k_l}\,\nabla^2 w_0(x,y) \qquad (8)$$

Equation (8) is similar in form and order to the equations of all continuum-based models presented


above. Equating the coefficients in Eq. (8) with the corresponding coefficients in Eqs. (1a), (1b), (5b), and (7a) provides three equations in each case. Simultaneous solution of these equations yields different expressions for the mechanical model parameters, which are summarized below together with the effective stiffness coefficient, established using the series-spring relation $k_e = k_u k_l/(k_u + k_l)$ for the springs in Fig. 3:

Kerr Parameters from RSCM

$$k_u = \frac{4E}{H}\,; \quad k_l = \frac{4E}{3H}\,; \quad g_k = \frac{4GH}{9}\,; \quad k_e = \frac{E}{H} \qquad (9)$$

Kerr Parameters from Variant 1

$$k_u = \frac{4E}{\alpha H}\,; \quad k_l = \frac{4E}{3\alpha H}\,; \quad g_k = \frac{4GH}{9}\,; \quad k_e = \frac{E}{\alpha H} \qquad (10)$$

Kerr Parameters from Variant 2

$$k_u = \frac{\xi_w\,E}{\alpha\,\xi_p}\,; \quad k_l = \frac{\xi_w\,E}{\alpha\,(H\xi_w - \xi_p)}\,; \quad g_k = \frac{\beta\,G H^2 \xi_w^2}{H\xi_w - \xi_p}\,; \quad k_e = \frac{E}{\alpha H} \qquad (11)$$

Kerr Parameters from Variant 3

$$k_u = \frac{\kappa_w\,E}{\kappa_p}\,; \quad k_l = \frac{\kappa_w\,E}{\lambda(1+\delta)H\,\kappa_w - \kappa_p}\,; \quad g_k = \frac{\lambda^2\delta(1+\delta)H\,G\,\kappa_w^2}{\lambda(1+\delta)H\,\kappa_w - \kappa_p}\,; \quad k_e = \frac{E}{\lambda(1+\delta)H} = \frac{E}{\lambda\,(1 - 0.404\,\nu)\,H} \qquad (12)$$

Comparison of the relations in Eqs. (9) and (10) shows that the spring constants of Variant 1 are always 1/α times those of the RSCM. Since the factor α is always less than unity for common values of the Poisson ratio of soils, both the individual beds of springs and the effective spring of Variant 1 are always stiffer than those of the RSCM. In contrast, the shear parameter remains unchanged, as could be expected, because constant vertical shear stresses are either assumed or implied in both models. The parameters from Variant 2 become clearer by introducing the pertinent relations for the bilinear variation of the vertical shear stress components. This yields $\xi_p = -0.35H$ and $\xi_w = -1.31$, which, when inserted in Eq. (11), give

$$k_u = \frac{3.75\,E}{\alpha H}\,; \quad k_l = \frac{1.36\,E}{\alpha H}\,; \quad g_k = GH \qquad (13)$$

These relations show that even though the effective spring coefficient remains the same in Variant 1 and Variant 2, the individual spring beds are different. The shear parameter of Variant 2, on the other hand, has increased significantly, to 2.25-fold that of the RSCM and Variant 1, irrespective of the Poisson ratio. Therefore, Variant 2 predicts significantly higher stiffness and higher shear interaction of the subgrade as compared to the RSCM, especially for soils with larger Poisson ratios. With the introduction of the bilinear variation for the vertical shear stresses and the decaying exponential function for the ratio of the lateral to vertical normal stresses, the various constants in Eq. (12) can be determined from Eq. (7b). The coefficients of the individual spring beds and the shear parameter can then be easily evaluated. For two selected values of the Poisson ratio, this gives:

For ν = 0.3:

$$k_u = \frac{17.7\,E}{H}\,; \quad k_l = \frac{1.22\,E}{H}\,; \quad k_e = \frac{1.14\,E}{H}\,; \quad g_k = 0.95\,GH$$

For ν = 0.45:

$$k_u = \frac{21.3\,E}{H}\,; \quad k_l = \frac{1.30\,E}{H}\,; \quad k_e = \frac{1.22\,E}{H}\,; \quad g_k = 1.30\,GH$$

These expressions show that Variant 3 always predicts a shear parameter much higher than those of Variant 1 and the RSCM, and even larger than that of Variant 2 for Poisson ratios greater than about 0.33. These comparisons are collected numerically in the sketch below.
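The parameter correspondences of Eqs. (9), (10), and (13), together with the values quoted above for Variant 3, can be collected in a short script. The sketch below is illustrative only; the Variant 3 entries are the tabulated values for ν = 0.3 and ν = 0.45 rather than evaluations of Eq. (12). It reports the normalized Kerr parameters and verifies the series-spring relation ke = ku kl/(ku + kl):

```python
# Sketch: normalized Kerr parameters (k_u*H/E, k_l*H/E, k_e*H/E, g_k/(G*H))
# for the four subgrade models, per Eqs. (9), (10) and (13) and the quoted
# Variant 3 values. alpha follows Eq. (3) for the at-rest case.

def alpha_at_rest(nu):
    return (1.0 - nu - 2.0 * nu**2) / (1.0 - nu)

def kerr_params(model, nu):
    a = alpha_at_rest(nu)
    if model == "RSCM":
        return 4.0, 4.0 / 3.0, 1.0, 4.0 / 9.0                # Eq. (9)
    if model == "Variant 1":
        return 4.0 / a, 4.0 / (3 * a), 1.0 / a, 4.0 / 9.0    # Eq. (10)
    if model == "Variant 2":
        return 3.75 / a, 1.36 / a, 1.0 / a, 1.0              # Eq. (13)
    if model == "Variant 3":                                 # tabulated values
        return {0.3: (17.7, 1.22, 1.14, 0.95),
                0.45: (21.3, 1.30, 1.22, 1.30)}[nu]
    raise ValueError(model)

for nu in (0.3, 0.45):
    print(f"nu = {nu}")
    for model in ("RSCM", "Variant 1", "Variant 2", "Variant 3"):
        ku, kl, ke, g = kerr_params(model, nu)
        ke_series = ku * kl / (ku + kl)   # series-spring check
        print(f"  {model:10s}: ku={ku:6.2f} kl={kl:5.2f} "
              f"ke={ke:5.2f} (series: {ke_series:5.2f}) g/GH={g:5.2f}")
```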

Plots of Kerr Parameters

The three normalized Kerr parameters are plotted in Fig. 4 against Poisson ratio.

Figure 4: Plots of the normalized model parameters (including the shear parameter g/GH) against Poisson ratio as predicted by the different continuum-based models

The following important observations can be made from these normalized plots within the common range of values of the Poisson ratio:

1. The three parameters of the RSCM are all independent of the Poisson ratio and always less than those of the new model variants. This model is thus the most conservative of all.

2. The increase in the normalized spring stiffness values in Variant 3, from 1.0 at ν = 0 to 1.25 at ν = 0.5, is slow compared to the rate of increase observed in Variants 1 and 2. Furthermore, the stiffness does not become indeterminate at ν = 0.5. This is due to the more realistic decaying exponential function used in Variant 3 for the normal stress ratios in lieu of a constant.

3. In the important range of Poisson ratios of soils, Variant 3 exhibits the highest shear parameter of all, increasing nearly linearly with increasing values of ν. This variation can be approximated by $g_k/GH = 2.317\,\nu + 0.259$. The shear parameter of Variant 3 for ν = 0.5 is more than threefold that of the RSCM and Variant 1, and about 40% larger than that of Variant 2.

CONCLUSIONS

From the foregoing material, the following conclusions are drawn:

1. The RSCM consistently underestimates both the vertical stiffness and the inherent shear interaction of the subgrade.

2. The highest shear interaction is achieved in Variant 3, especially for large Poisson ratios, by combining the assumption of bilinear vertical shear stresses with the assumption of an exponentially decaying horizontal-to-vertical normal stress ratio with depth. However, the increase in the vertical stiffness is not as large as in the other two model variants.

3. Considering the reasonable assumptions they are based on, Variant 2 and Variant 3 are promising alternatives to existing continuum subgrade models.

4. Results of recently completed studies on beams using these models indicate that the new models, in particular Variant 3, can be easily calibrated to give results in excellent agreement with finite-element based models.

REFERENCES

[1] Fletcher, D. Q. and Hermann, L. R. Elastic Foundation Representation of Continuum. Journal of Engineering Mechanics, ASCE, 1971, 97, No. EM1; 95-107.

[2] Hetényi, M. Beams on Elastic Foundation. University of Michigan Press, Ann Arbor, 1946.

[3] Horvath, J. S. Basic SSI Concepts and Applications Overview. Soil-Structure Interaction Research Project, Report No. CGT-2002-2, Manhattan College, New York, 2002.

[4] Kerr, A. D. Elastic and Viscoelastic Foundation Models. Journal of Applied Mechanics, ASME, 1964, 25 (80); 491-498.

[5] Selvadurai, A. P. S. Elastic Analysis of Soil-Foundation Interaction. Elsevier Scientific Publishing Company, New York, 1979.

[6] Wang, H., Tham, L. G., and Cheung, Y. K. Beams and Plates on Elastic Foundations: a Review. Progress in Structural Engineering and Materials, John Wiley, 2005, 7; 174-182.

[7] Horvath, J. S. New Subgrade Model Applied to Mat Foundations. Journal of Geotechnical Engineering, ASCE, 1983, 109 (12); 1567-1587.

[8] Reissner, E. A Note on Deflections of Plates on a Viscoelastic Foundation. Journal of Applied Mechanics, ASME, 1958, 25 (80); 144-145.

[9] Vlasov, V. Z. and Leont'ev, N. N. Beams, Plates, and Shells on Elastic Foundations. Israel Program for Scientific Translations, Jerusalem, 1966 (translated from Russian, 1960).

[10] Das, B. M. Advanced Soil Mechanics. McGraw-Hill Book Company, New York, 1983.

[11] Avramidis, I. E. and Morfidis, K. Bending of Beams on Three-Parameter Elastic Foundation. International Journal of Solids and Structures, Elsevier, 2006, 43; 357-375.

[12] Nogami, T. and O'Neill, M. W. Beam on Generalized Two-Parameter Foundation. Journal of Engineering Mechanics, ASCE, 1985, 111 (5); 664-679.

[13] Tanahashi, H. Formulas for an Infinitely Long Bernoulli-Euler Beam on the Pasternak Model. Soils and Foundations, Japanese Geotechnical Society, 2004, 44 (5); 109-118.

[14] Tanahashi, H. Pasternak Model Formulation of Elastic Displacements in the Case of a Rigid Circular Foundation. Journal of Asian Architecture and Building Engineering, 2007, 6 (1); 167-173.

[15] Worku, A. "Part I: A Generalized Formulation of Continuum Models for Elastic Foundations." Accepted for presentation and publication, GeoFlorida 2010, Annual Conference of the Geo-Institute, ASCE, February 20-24, 2010.

[16] Yin, J. H. Closed-Form Solution for Reinforced Timoshenko Beam on Elastic Foundation (Technical Note). Journal of Engineering Mechanics, ASCE, 2000, 126 (8); 869-874.


APPENDICES

Appendix A: Derivation of Model 1

Equation (1a) is derived following the same procedure as in Reissner's and Horvath's works [2, 3]. The stress equilibrium equation in the vertical direction is given by

$$\sigma_{z,z} + \tau_{xz,x} + \tau_{yz,y} = 0 \qquad (A1)$$

Assuming that the vertical shear stresses are constant with depth, integrating Eq. (A1) with respect to z, and employing the stress boundary condition at the surface, one obtains

$$\sigma_z = -Qz - p \qquad (A2)$$

Substituting Eq. (A2) in the combined stress-strain and strain-displacement equation for the vertical direction, noting the assumptions $\sigma_x = k_x\sigma_z$ and $\sigma_y = k_y\sigma_z$, integrating with respect to z, and employing the displacement boundary conditions at the surface and at the bottom of the layer, one obtains the following relations:

$$w(x,y,z) = w_0 - \frac{\alpha}{E}\left(\frac{Q z^2}{2} + p\,z\right) \qquad (A3)$$

where

$$Q = \frac{2}{H^2}\left(\frac{E}{\alpha}\,w_0 - p\,H\right) \qquad (A4)$$

and

$$\alpha = 1 - \nu\,(k_x + k_y) \qquad (A5)$$

Equation (A3) is now substituted in the combined equations of stress-strain and strain-displacement for the vertical shear stresses given by

$$\tau_{xz} = G\,(u_{,z} + w_{,x})\,; \qquad \tau_{yz} = G\,(v_{,z} + w_{,y}) \qquad (A6)$$

Equations (A6) are integrated with respect to z and the remaining displacement boundary conditions applied. The resulting equations are solved for the shear stresses to yield

$$\tau_{xz} = \frac{2G}{3}\,w_{0,x} - \frac{\alpha G H}{6E}\,p_{,x}\,; \qquad \tau_{yz} = \frac{2G}{3}\,w_{0,y} - \frac{\alpha G H}{6E}\,p_{,y} \qquad (A7)$$

Finally, Eqs. (A2) and (A7) are inserted in Eq. (A1) and rearranged to obtain

$$p(x,y) - \frac{\alpha G H^2}{12E}\,\nabla^2 p(x,y) = \frac{E}{\alpha H}\,w_0(x,y) - \frac{GH}{3}\,\nabla^2 w_0(x,y) \qquad (A8)$$

Equation (A8) is the differential equation for Variant 1 given as Eq. (1a). A symbolic cross-check of this result is sketched below.

Appendix B: Derivation of Model 2

Equation (5a) is derived in a similar manner to Eq. (A8) above, except that the continuity condition of displacements at z = ηH is observed in addition to the boundary conditions. This involves relatively lengthy mathematical work. For brevity, only the final forms of the coefficients are given below:

The coefficient β (Eq. (A9)) is an algebraic function of the influence-line constants b1, b2, and a2, the peak-depth parameter η, and the layer thickness H. The coefficients ξ1 and ξ2 (Eqs. (A10) and (A11)) involve β together with the auxiliary quantities $P_{\eta H}$ and $P_H$, while ξ3 (Eq. (A12)) involves b1, b2, a2, η, and H only. The auxiliary quantities $P_{\eta H}$, $P_H$, $\bar{P}_H$, and $R_H$ entering these expressions are defined in Eqs. (A13) through (A16).


Appendix C: Derivation of Model 3

Equation (7a) has been derived analogously, with the compatibility condition of displacements at z = ηH observed in addition to the boundary conditions. This involves much lengthier mathematical work. At only one stage in the derivation of the model, a term involving the product of the quantity $e^{-\zeta H}$ and another negligibly small quantity is assumed zero. Considering the relatively large value of ζ used for the best-fitting curve and the rather small value of the coefficient of $e^{-\zeta H}$, this approximation is easily justified. Only the final forms of the coefficients are given below for brevity:

The coefficient λ (Eq. (A17)) is an algebraic function of ν, the constants r and ζ of the exponential decay function, the influence-line constants b1, b2, and a2, the parameter η, the thickness H, and the auxiliary quantities $R_H$, $P_{\eta H}$, and $P_H$. The coefficient δ is given by

$$\delta = -\frac{\nu\, r}{\zeta H} \qquad (A18)$$

with $R_H$ as defined in Eq. (A16) (Eq. (A19)) and with $P_{\eta H}$ and $P_H$ given by Eqs. (A20) and (A21). The coefficients κ1 and κ2 follow from the lengthy expressions of Eqs. (A22) and (A23), and κ3 from Eq. (A24), in terms of the further auxiliary quantities $T_{\eta H}$, $\tilde{P}_H$, and $\tilde{P}_{\eta H}$ defined in Eqs. (A25) through (A27). Finally,

$$r = r_x + r_y \qquad (A28)$$


STABILIZING CINDER GRAVELS FOR HEAVILY TRAFFICKED BASE COURSE

Girma Berhanu

Department of Civil Engineering Addis Ababa University

ABSTRACT

An investigation into the improvement of natural cinder gravels by stabilization techniques was made using samples collected from quarry sites near Alemgena and Lake Chamo. Mechanical and cement stabilization were investigated in two subsequent phases. In the first phase, the optimum amount of fine soil that makes up for the deficiency of fine particles in natural cinder gravels was found to be 12%. In the second phase, natural cinder gravel samples without and with 12% fine soils were stabilized with 3, 5, 7, and 10% of cement by mass. The results of the investigation indicated that the optimum amount of cement required to achieve the minimum UCS of 3.0 MPa, as specified in the ERA and AACRA pavement design standards for heavily trafficked base course, is 7% when no fine soils are added. This high cement requirement was reduced to 5% cement, a practical value, by mechanically stabilizing the cinder gravel with 12% of fine soils before cement stabilization. Nevertheless, the performance of cement-stabilized cinder gravel should be investigated in a full-scale road experiment against cracking due to stresses induced by thermal effects, shrinkage, and traffic.

Keywords: Base course, Cement stabilization, Cinder gravel, Mechanical stabilization, Optimum cement content.

INTRODUCTION

Natural gravels are an abundant source of road building materials, but they do not always meet the quality requirements for bases and are frequently rejected in favour of expensive alternatives such as crushed stone. However, these alternatives are often not locally available, and the transportation of large quantities in heavy vehicles is expensive. Consequently, large financial and environmental benefits can be achieved if the properties of locally available materials such as natural cinder gravels can be improved by stabilization techniques and used with confidence. Natural cinder gravels are pyroclastic natural materials associated with recent volcanic activity.

They vary in colour, often within the same quarry, and may be red, brown, grey, or black. The particle sizes also vary, from irregularly shaped lumps of 0.5 m in diameter down to sand and silt sizes. Other characteristic features of cinder are its light weight, rough vesicular surface, and high porosity. In a joint research project between the Ethiopian Roads Authority and the Transport and Road Research Laboratory (UK), a preliminary investigation of the location and engineering properties of cinder gravels and a full-scale road experiment were undertaken. The preliminary study [1] involved field surveys, laboratory investigation, and investigation of a cinder gravel road. According to the study, the two most important factors that affect the engineering behaviour of cinder gravels are grading and the strength of the gravel particles. The particle size analysis carried out on 53 samples collected during the field survey indicated that the grading of natural cinder gravel is deficient in fine particles and does not satisfy the recommended grading limits for base course. The aggregate impact test carried out on 23 samples using a modified procedure for weak aggregates yielded values ranging from 46 to 177, which indicate that the cinder gravel is weak. The main findings from the investigation include:

• The importance of sampling below the weathered zone, which can extend to a depth of two meters, to obtain representative materials;

• Due to the weak nature of cinder particles, compaction causes the breakdown of particles, which improves both grading and strength properties; this was further confirmed in the gravel road study;

• Changes in moisture content do not affect the properties of cinder gravels; and

• The addition of locally available clayey soil to make up for the deficiency of fine materials improves the compactability and stability of cinder gravels.


The full-scale road experiment [2] comprised 20 different sections, of which six were gravel surfaced and the remaining 14 were bitumen surfaced. The results of the gravel-surfaced experimental sections showed that improved performance can be obtained by mechanically stabilizing cinder with plastic fines. The findings from the surface-dressed sections showed satisfactory performance of cinder gravels, whether untreated or mechanically stabilized, for use in base course for up to 440,000 ESA when designed according to Road Note 31. Road-mix surfacing was, however, not found to be a satisfactory method of providing a bituminous surface for cinder gravel. The results of the joint research work should be taken further to investigate the potential use of these abundantly available natural gravels in base course for heavily trafficked roads by improving their engineering properties. Both pavement design manuals used in Ethiopia, the Ethiopian Roads Authority Pavement Design Manual (ERA-PDM) [3] and the Addis Ababa City Roads Authority Pavement Design and Rehabilitation Manual (AACRA-PDRM) [4], recommend the use of stabilized locally available natural gravels when quality aggregate is expensive to import and the hauling distance is long. This paper presents the results of laboratory investigations into improving the properties of cinder gravels by cement stabilization.

MATERIALS AND EXPERIMENTAL INVESTIGATIONS

Sources and Descriptions of Materials

In this study, natural cinder gravel samples from two sites, near Alemgena and Lake Chamo, were investigated. The samples from near Alemgena were obtained from the freshly dozed stockpile to be used for the subbase construction of the Alemgena-Butajira Road, while the samples from Lake Chamo were obtained from the quarry site that was used for the construction of unpaved roads in Arba Minch town. These samples were prepared for the various tests after repeated quartering using a riffle-box sample splitter and were studied in laboratories at Addis Ababa and Arba Minch Universities, respectively. Throughout the paper, the samples are designated Alemgena and Lake Chamo, respectively. Figure 1 depicts the gradation curves of the samples from both sites before and after compaction, together with the desirable gradation limits specified for stabilised base materials in ERA-PDM and AACRA-PDRM. The gradation of cinder obtained from Alemgena is generally coarser than that of cinder from Lake Chamo. However, in both cases the gradation is almost within the specification limits, except that the samples are deficient in fines before compaction, and the coarse particles are finer than the desirable limits after compaction because cinder particles break down during compaction due to their weak strength.

Figure 1: Gradation of cinder gravel samples (a) before and (b) after modified compaction


Table 1: Descriptive test results of natural cinder gravels used for the study

| Type of test | Desirable limits | Alemgena | Lake Chamo |
| Los Angeles Abrasion | < 50% | 43% | - |
| Aggregate Impact Value | < 35% | 44% | - |
| Specific gravity | 2.5 – 3.0 | 2.26 (materials > 4.75 mm); 2.04 (materials < 4.75 mm) | 1.89 (materials > 4.75 mm); 1.95 (materials < 4.75 mm) |
| Water absorption | 1.0 – 2.0 | 6.0% (materials between 37.5 mm and 10 mm); 5.5% (materials < 10 mm) | 10.08% (materials between 37.5 mm and 10 mm); 8.07% (materials < 10 mm) |
| Proctor test | - | OMC = 24.8%, MDD = 12.55 kN/m³ | OMC = 25.5%, MDD = 12.75 kN/m³ |
| CBR | - | 34% | 43% |

The descriptive test results in Table 1 show that cinder gravel is a weak material with a high water absorption capacity because of its high porosity. The CBR value of the material is low for base course but satisfies the requirements for subbase course materials. As underlined in previous studies, one can clearly observe in Fig. 1(a) that the gradation of cinder gravel is deficient in fine particles. During this study, fine soils obtained near the quarry sites were blended with the cinder gravel samples in order to make up for the deficiency of fine materials before cement stabilization. Table 2 shows the descriptive test results of the fine soil samples collected near the cinder gravel quarry sites for this purpose.

Table 2: Descriptive test results of fine soils used for mechanical stabilization

| Type of test | Alemgena | Lake Chamo |
| Gradation, % passing 4.75 mm | 100 | 100 |
| Gradation, % passing 2.36 mm | 88 | 86 |
| Gradation, % passing 0.425 mm | 51 | 45 |
| Gradation, % passing 0.075 mm | 17 | 10 |
| Liquid Limit | 55 | 39 |
| Plastic Limit | 39.5 | 30 |
| Plasticity Index | 15.5 | 9 |
| Specific gravity | 2.7 | 2.7 |
| Soil classification | A-2-7(0) | A-2-4(0) |

Experimental Investigations

In this study, cement stabilization of cinder gravels with and without the addition of fine soils to compensate for the deficiency of fine materials was investigated. Two series of trial mixes of cement-stabilized cinder gravel were prepared, with three

specimens for each cement content. The cement used was Ordinary Portland Cement. The first series of trial mixes consisted of natural cinder gravel samples stabilized with 3, 5, 7, and 10% cement by mass. The second series consisted of mechanically stabilized cinder gravel samples (cinder gravels blended with the optimum amount of fine soils) stabilised with 3, 5, 7, and 10% cement by mass. Compaction and CBR tests were carried out on trial mixes prepared by blending 10 to 15% fine soils by mass with natural cinder gravel samples to determine the optimum amount of fine soils required to mechanically stabilize the cinder gravel samples. The optimum moisture content pertaining to the maximum dry density for the range of cement contents was determined according to ASTM D558. The specimens were then compacted at the optimum moisture contents so determined in the standard mould, following the same procedure as that used for the Proctor density. After compaction, the specimens in the mould were placed in a plastic bag, covered with a wet cloth, and kept in a humid room to cure for 24 to 48 hours. The specimens were then carefully removed from the mould, labelled, put in plastic bags, covered with a wet cloth, and kept in a humid room to moist-cure until tested at the required curing period. Curing was necessary to ensure that there was always adequate water for the hydration reaction to proceed and to avoid possible drying shrinkage while the reactions were proceeding. At the end of the moist curing period, the specimens were soaked in water for four hours. Tests were conducted as soon as they were removed from the water bath. After moist-curing the specimens for 7, 14, and 28 days and soaking, unconfined compressive strength (UCS) tests were conducted following the ASTM


D1633-90 procedure. CBR tests were also carried out on trial mixes prepared with 3 and 5% cement contents, following ASTM D1883-87.

RESULTS AND DISCUSSION

Stabilization of Natural Cinder Gravel with Cement

Table 3 shows the CBR test results for cement-stabilized cinder. The CBR values increased about twofold for 3% cement in 7 days, and almost fourfold for 3% in 14 days and for 5% in 7 days. In general, CBR is not recommended [5] for use when the strength of stabilised materials is more than 3 times that of the unstabilised materials. It is also not widely used to specify stabilised materials for road construction, because the strength of stabilised materials usually exceeds the limits of the CBR test procedure. The CBR test is used here to indicate the general response of cinder gravels to cement stabilization. The extensively used method to determine the relative response to cement stabilization and to specify the strength of stabilised materials for pavement structures is the UCS test. Table 4 shows the results of UCS tests carried out on cement-stabilised cinder gravels after 7, 14, and 28 days of curing. As shown in Fig. 2, the UCS test results clearly indicate that the strength of cement-stabilised cinder increases consistently with both increasing cement content and curing period. According to the results obtained, stabilised cinder gravel attains 3.0 MPa, the minimum strength requirement specified in ERA-PDM and AACRA-PDRM, at 7% cement after 28 days of curing (see the interpolation sketch following Table 4).

Table 3: CBR test results of cement stabilized cinder gravel

| Cement by weight (%) | Site | OMC (%) | MDD (kN/m³) | CBR at 7 days (%) | CBR at 14 days (%) | CBR at 28 days (%) |
| 3 | Alemgena | 25.5 | 13.75 | 69 | 111 | 140 |
| 3 | Lake Chamo | 26.4 | 13.80 | 78 | - | - |
| 5 | Alemgena | 27.0 | 13.90 | 124 | 134 | 202 |
| 5 | Lake Chamo | 27.0 | 13.84 | 94 | - | - |

Table 4: UCS test results of cement stabilized cinder gravel in MPa

| Cement by weight (%) | Alemgena, 7 days | Alemgena, 14 days | Alemgena, 28 days | Lake Chamo, 7 days | Lake Chamo, 14 days | Lake Chamo, 28 days |
| 3 | 1.12 | 1.33 | 1.61 | 1.31 | 1.449 | 1.801 |
| 5 | 1.49 | 1.81 | 2.22 | 1.51 | 1.802 | 2.082 |
| 7 | 1.72 | 2.41 | 3.08 | 1.69 | 2.324 | 3.033 |
| 10 | 1.99 | 3.34 | 4.24 | 2.17 | 3.366 | 4.113 |
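A simple linear interpolation on the 28-day columns of Table 4 illustrates where the 3.0 MPa threshold falls. The sketch below is an illustrative post-processing of the tabulated data, not part of the reported test procedure:

```python
# Sketch: linear interpolation of the 28-day UCS data of Table 4 to locate
# the cement content at which 3.0 MPa is reached (illustrative only).

cement = [3, 5, 7, 10]                       # % cement by mass
ucs_28d = {"Alemgena":   [1.61, 2.22, 3.08, 4.24],
           "Lake Chamo": [1.801, 2.082, 3.033, 4.113]}
TARGET = 3.0                                 # MPa, ERA-PDM / AACRA-PDRM minimum

for site, ucs in ucs_28d.items():
    for (c0, u0), (c1, u1) in zip(zip(cement, ucs), zip(cement[1:], ucs[1:])):
        if u0 < TARGET <= u1:
            c_req = c0 + (TARGET - u0) * (c1 - c0) / (u1 - u0)
            print(f"{site}: ~{c_req:.1f}% cement for {TARGET} MPa at 28 days")
# Both sites cross 3.0 MPa just below 7% cement, consistent with the
# adoption of 7% for natural (unblended) cinder gravel.
```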

Figure 2: Variation of UCS with percentage of cement and curing period (7, 14, and 28 days) for the Alemgena and Lake Chamo samples


Stabilization of Mechanically Stabilized Cinder with Cement

Because cinder gravel is deficient in fine particles, the amount of cement required to stabilise natural cinder gravel to the minimum strength requirement for heavily trafficked base course (as seen in Section 3.1) is generally high. In this section, the results of two series of tests are discussed: (a) tests to determine the optimum amount of fine soil required to compensate for the deficiency of fines in cinder, and (b) tests on cement-stabilised cinder gravel blended with the optimum amount of fine soils so determined. These are presented in the subsequent subsections on "mechanical stabilization" and "cement stabilization" of cinder gravel.

Mechanical Stabilization of Cinder Gravel

Table 5 shows the compaction and CBR test results for cinder gravel mechanically stabilised with 10 to 15% fine soils by mass, conducted to determine the optimum amount of fines. Figure 3 shows the plots of these results. In both cases, the results consistently indicate that MDD and CBR increase with increasing fine-soil content up to 12%; beyond 12%, these values decrease.

This result indicates that the optimum amount of fine soil that makes up for the deficiency of fines in the cinder gravel samples from both the Alemgena and Lake Chamo areas is 12%. This optimum amount, 12% fine soils by mass, was therefore mixed with the cinder gravel samples in order to investigate the improvement in strength obtained when mechanically stabilised cinder is stabilized with cement.

Cement Stabilization of Cinder Gravel Blended with Fine Soils

Tables 6 and 7 show the CBR and UCS test results, respectively, for cement-stabilized cinder gravel blended with 12% fine soils. The plot of UCS in Fig. 4 shows that the strength of cement-stabilised cinder blended with 12% fine soils is higher than that of stabilised cinder without the addition of fine soils. Accordingly, the minimum strength requirement specified in ERA-PDM and AACRA-PDRM is attained at 5% cement by mass (a computational sketch following Table 7 reproduces the underlying comparison).

Table 5: Results of mechanical stabilization of cinder with fine soils

| Mix | Alemgena OMC (%) | Alemgena MDD (kN/m³) | Alemgena CBR (%) | Lake Chamo OMC (%) | Lake Chamo MDD (kN/m³) | Lake Chamo CBR (%) |
| Cinder only | 24.8 | 12.55 | 34 | 25.50 | 12.75 | 43 |
| Cinder + 10% | 25.0 | 13.70 | 40 | 26.00 | 13.38 | 47 |
| Cinder + 11% | 25.3 | 13.75 | 43 | 26.80 | 13.40 | 50 |
| Cinder + 12% | 25.8 | 13.88 | 56 | 27.01 | 13.98 | 58 |
| Cinder + 13% | 27.2 | 13.80 | 45 | 28.00 | 13.32 | 47 |
| Cinder + 14% | 28.0 | 13.70 | 43 | 28.30 | 13.31 | 46 |
| Cinder + 15% | 28.3 | 13.72 | 40 | - | - | - |

Figure 3: Variation of MDD and CBR with percentage of fine soil mixed with cinder


Table 6: CBR test results of mechanically stabilized cinder with cement

| Cement by weight (%) | Age (days) | Alemgena CBR (%) | Lake Chamo CBR (%) |
| 3 | 7 | 63 | 122 |
| 3 | 14 | 74 | - |
| 3 | 28 | 172 | - |
| 5 | 7 | 71 | - |
| 5 | 14 | 112 | - |
| 5 | 28 | 213 | - |

Table 7: UCS test results of mechanically stabilized cinder with cement in MPa

| Cement by weight (%) | Alemgena, 7 days | Alemgena, 14 days | Alemgena, 28 days | Lake Chamo, 7 days | Lake Chamo, 14 days | Lake Chamo, 28 days |
| 3 | 1.56 | 2.21 | 2.73 | 1.623 | 2.138 | 2.596 |
| 5 | 2.23 | 3.17 | 3.91 | 2.336 | 3.282 | 3.934 |
| 7 | 2.63 | 3.40 | 4.69 | 2.695 | 3.573 | 4.332 |
| 10 | 3.27 | 4.24 | 5.59 | 3.126 | 4.027 | 5.23 |
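The percentage increase in 28-day strength gained by blending in 12% fines, discussed with Fig. 5 below, can be reproduced directly from Tables 4 and 7. The following sketch is an illustrative computation on the tabulated values, not part of the reported procedure:

```python
# Sketch: percentage increase in UCS due to blending 12% fine soils,
# computed from the 28-day columns of Tables 4 and 7 (illustrative only).

cement = [3, 5, 7, 10]
ucs_plain = {"Alemgena":   [1.61, 2.22, 3.08, 4.24],    # Table 4, 28 days
             "Lake Chamo": [1.801, 2.082, 3.033, 4.113]}
ucs_blend = {"Alemgena":   [2.73, 3.91, 4.69, 5.59],    # Table 7, 28 days
             "Lake Chamo": [2.596, 3.934, 4.332, 5.23]}

for site in ucs_plain:
    gains = [100.0 * (b - p) / p
             for p, b in zip(ucs_plain[site], ucs_blend[site])]
    print(site, [f"{c}%: +{g:.0f}%" for c, g in zip(cement, gains)])
# The relative gain peaks at 5% cement (e.g. Alemgena: +70% at 3%, +76% at 5%,
# +52% at 7%, +32% at 10%), supporting 5% as the optimum when fines are added.
```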

Figure 4: Variation of UCS with percentage of cement and curing period

The percentage increase in strength gained by adding fine soils was evaluated as shown in Fig. 5. As one can observe from the figure, the percentage increase in UCS gained as a result of adding fine soils has an increasing trend up to 5 % cement and then starts falling. This clearly indicates that 5 % cement by mass is an optimum amount to stabilise cinder gravel with sufficient fines.

Figure 5: Variation of the increase in UCS with cement content

CONCLUSIONS

Although cinder gravels occur extensively in Ethiopia, rarely support any vegetation other than grasses, and are relatively easy to dig with shovels or hand tools, their use for road construction has been limited because they do not meet the specification requirements. In this investigation, mechanical and cement stabilisation techniques were used to improve their properties so that they can be used as base course materials for heavily trafficked roads. Based on the results obtained, the following conclusions are made:

• The gradation of the cinder gravel samples lacked fine particles, and 12% of fine soil by mass was found to be optimum for making up this deficiency, which confirms the results of previous studies.

• Cement-stabilised cinder gravels without the addition of fine soils require 7% of cement by mass to attain a UCS of 3.0 MPa, which is the


minimum specified strength in ERA-PDM and AACRA-PDRM for a base course in heavily trafficked pavement structure.

• With the addition of the optimum amount of fine soils to make up for the deficiency of fine particles, 5% cement by mass is found to be optimum to achieve a UCS of 3.0 MPa, the minimum specified by ERA and AACRA for cement-stabilised materials in a base course of a heavily trafficked pavement structure.

The results obtained in this investigation demonstrate that the properties of cinder gravel can be improved by stabilisation, allowing its use in heavily trafficked base course. However, a full-scale road experiment is necessary in order to study the performance of stabilised cinder gravels against the possible detrimental effects of cracking due to stresses induced by thermal effects, shrinkage, and traffic. Moreover, the feasibility of cement-stabilised cinder gravels should be checked for each project against the expense of obtaining quality aggregate and the hauling distance involved.

ACKNOWLEDGMENT

This investigation was carried out as students' final projects at Addis Ababa and Arba Minch Universities. The author wishes to thank all the students who were involved in this investigation.

REFERENCES

[1] Newil, D. and Kassaye Aklilu. "The Location and Engineering Properties of Volcanic Cinder Gravels in Ethiopia". Seventh Regional Conference for Africa on Soil Mechanics and Foundation Engineering, Accra, June 1980.

[2] Newil, D., Robinson, R. and Kassaye Aklilu. "Experimental Use of Cinder Gravels on Roads in Ethiopia". Ninth Regional Conference for Africa on Soil Mechanics and Foundation Engineering, Lagos, September 1987.

[3] Ethiopian Roads Authority. "Pavement Design Manual Volume I: Flexible Pavements and Gravel Roads", 2002.

[4] Addis Ababa City Roads Authority. "Pavement Design and Rehabilitation Manual", 2003.

[5] National Association of Australian State Road Authorities. "Guide to Stabilization in Road Works". Sydney, 1986.