Performance Of Encryption Techniques On Data
CHAPTER 1
INTRODUCTION
1.1 Information Security:
The concept of information will be taken to be an understood quantity. To introduce
cryptography, an understanding of issues related to information security in general is
necessary.
Information security manifests itself in many ways according to the situation and
requirement.
Regardless of who is involved, to one degree or another, all parties to a transaction
must have confidence that certain objectives associated with information security have been
met. Over the centuries, an elaborate set of protocols and mechanisms has been created to
deal with information security issues when the information is conveyed by physical
documents. Often the objectives of information security cannot be achieved through
mathematical algorithms and protocols alone; they also require procedural techniques and
compliance with laws to achieve the desired result.
For example, privacy of letters is provided by sealed envelopes delivered by an
accepted mail service. The physical security of the envelope is, for practical necessity, limited
and so laws are enacted which make it a criminal offense to open mail for which one is not
authorized. It is sometimes the case that security is achieved not through the information
itself but through the physical document recording it.
For example, paper currency requires special inks and material to prevent
counterfeiting. Conceptually, the way information is recorded has not changed dramatically
over time. Whereas information was typically stored and transmitted on paper, much of it
now resides on magnetic media and is transmitted via telecommunications systems, some
wireless. What has changed dramatically is the ability to copy and alter information. One can
make thousands of identical copies of a piece of information stored electronically and each is
indistinguishable from the original. With information on paper, this is much more difficult.
Dept. of ECE, ACTS Page 1
What is needed then for a society where information is mostly stored and transmitted
in electronic form is a means to ensure information security which is independent of the
physical medium recording or conveying it and such that the objectives of information
security rely solely on digital information itself. One of the fundamental tools used in
information security is the signature. It is a building block for many other services such as
non-repudiation, data origin authentication, identification, and witnessing, to mention a few.
Having learned the basics in writing, an individual is taught how to produce a handwritten
signature for the purpose of identification. At contract age the signature evolves to take on a
very integral part of the person’s identity. This signature is intended to be unique to the
individual and serve as a means to identify, authorize, and validate.
With electronic information the concept of a signature needs to be redressed; it cannot
simply be something unique to the signer and independent of the information signed.
Electronic replication of it is so simple that appending a signature to a document not signed
by the originator of the signature is almost a triviality. Analogues of the “paper protocols”
currently in use are required. Hopefully these new electronic based protocols are at least as
good as those they replace. There is a unique opportunity for society to introduce new and
more efficient ways of ensuring information security. Much can be learned from the
evolution of the paper based system, mimicking those aspects which have served us well and
removing the inefficiencies. Achieving information security in an electronic society requires
a vast array of technical and legal skills. There is, however, no guarantee that all of the
information security objectives deemed necessary can be adequately met. The technical
means is provided through cryptography.
1.2 Cryptography:
Cryptography has a long and fascinating history. The most complete non-technical
account of the subject is Kahn’s The Codebreakers. This book traces cryptography from its
initial and limited use by the Egyptians some 4000 years ago, to the twentieth century where
it played a crucial role in the outcome of both world wars.
The most striking development in the history of cryptography came in 1976 when
Diffie and Hellman published New Directions in Cryptography. This paper introduced the
revolutionary concept of public-key cryptography and also provided a new and ingenious
method for key exchange, the security of which is based on the intractability of the discrete
logarithm problem. Although the authors had no practical realization of a public-key
encryption scheme at the time, the idea was clear and it generated extensive interest and
activity in the cryptographic community.
In 1978 Rivest, Shamir, and Adleman discovered the first practical public-key
encryption and signature scheme, now referred to as RSA. The RSA scheme is based on
another hard mathematical problem, the intractability of factoring large integers. This
application of a hard mathematical problem to cryptography revitalized efforts to find more
efficient methods to factor. The 1980s saw major advances in this area but none which
rendered the RSA system insecure. Another class of powerful and practical public-key
schemes was found by ElGamal in 1985. These are also based on the discrete logarithm
problem. One of the most significant contributions provided by public-key cryptography is
the digital signature. In 1991 the first international standard for digital signatures (ISO/IEC
9796) was adopted. It is based on the RSA public-key scheme. In 1994 the U.S. Government
adopted the Digital Signature Standard, a mechanism based on the ElGamal public key
scheme.
“Cryptography is the study of mathematical techniques related to aspects of
information security such as confidentiality, data integrity, entity authentication, and data
origin authentication. Cryptography is not the only means of providing information security,
but rather one set of techniques.”
1.2.1 Cryptography Terminology
Until modern times cryptography referred almost exclusively to encryption, which is
the process of converting ordinary information (called plaintext) into unintelligible gibberish
(called ciphertext). Decryption is the reverse, in other words, moving from the unintelligible
ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms that create the
encryption and the reversing decryption. The detailed operation of a cipher is controlled both
by the algorithm and in each instance by a "key". This is a secret parameter (ideally known
only to the communicants) for a specific message exchange context. A "cryptosystem" is the
ordered list of elements of finite possible plaintexts, finite possible cipher texts, finite
possible keys, and the encryption and decryption algorithms which correspond to each key.
Keys are important, as ciphers without variable keys can be trivially broken with only the
knowledge of the cipher used and are therefore useless (or even counter-productive) for most
purposes. Historically, ciphers were often used directly for encryption or decryption without
additional procedures such as authentication or integrity checks.
In colloquial use, the term "code" is often used to mean any method of encryption or
concealment of meaning. However, in cryptography, code has a more specific meaning. It
means the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with a code
word (for example, wallaby replaces attack at dawn). Codes are no longer used in serious
cryptography except incidentally for such things as unit designations (e.g., Bronco Flight or
Operation Overlord) since properly chosen ciphers are both more practical and more secure
than even the best codes and also are better adapted to computers.
Cryptanalysis is the term used for the study of methods for obtaining the meaning of
encrypted information without access to the key normally required to do so; i.e., it is the
study of how to crack encryption algorithms or their implementations.
Some use the terms cryptography and cryptology interchangeably in English, while
others (including US military practice generally) use cryptography to refer specifically to the
use and practice of cryptographic techniques and cryptology to refer to the combined study of
cryptography and cryptanalysis. English is more flexible than several other languages in
which cryptology (done by cryptologists) is always used in the second sense above. In
common English usage the general term for the entire field is cryptography (done by
cryptographers).
The study of characteristics of languages which have some application in
cryptography (or cryptology), i.e. frequency data, letter combinations, universal patterns, etc.,
is called cryptolinguistics.
1.2.2 Cryptographic goals
(1) Privacy or confidentiality
(2) Data integrity
(3) Authentication
(4) Non-repudiation
1. Confidentiality is a service used to keep the content of information from all but those
authorized to have it. Secrecy is a term synonymous with confidentiality and privacy. There
are numerous approaches to providing confidentiality, ranging from physical protection to
mathematical algorithms which render data unintelligible.
2. Data integrity is a service which addresses the unauthorized alteration of data. To assure
data integrity, one must have the ability to detect data manipulation by unauthorized parties.
Data manipulation includes such things as insertion, deletion, and substitution.
3. Authentication is a service related to identification. This function applies to both entities
and information itself. Two parties entering into a communication should identify each other.
Information delivered over a channel should be authenticated as to origin, date of origin, data
content, time sent, etc. For these reasons this aspect of cryptography is usually subdivided
into two major classes: entity authentication and data origin authentication. Data origin
authentication implicitly provides data integrity.
4. Non-repudiation is a service which prevents an entity from denying previous
commitments or actions. When disputes arise due to an entity denying that certain actions
were taken, a means to resolve the situation is necessary. For example, one entity may
authorize the purchase of property by another entity and later deny such authorization was
granted. A procedure involving a trusted third party is needed to resolve the dispute. A
fundamental goal of cryptography is to adequately address these four areas in both theory and
practice. Cryptography is about the prevention and detection of cheating and other malicious
activities.
There are a number of basic cryptographic tools (primitives) used to provide
information security. Examples of primitives include encryption schemes, hash functions, and
digital signature schemes. Figure 1.1 provides a schematic listing of the primitives considered
and how they relate. These primitives should be evaluated with respect to various criteria such as:
1. Level of security. This is usually difficult to quantify. Often it is given in terms of the
number of operations required (using the best methods currently known) to defeat the
intended objective. Typically the level of security is defined by an upper bound on the
amount of work necessary to defeat the objective. This is sometimes called the work factor.
2. Functionality. Primitives will need to be combined to meet various information security
objectives. Which primitives are most effective for a given objective will be determined by
the basic properties of the primitives.
3. Methods of operation. Primitives, when applied in various ways and with various inputs,
will typically exhibit different characteristics; thus, one primitive could provide very
different functionality depending on its mode of operation or usage.
4. Performance. This refers to the efficiency of a primitive in a particular mode of operation.
(For example, an encryption algorithm may be rated by the number of bits per second which
it can encrypt.)
5. Ease of implementation. This refers to the difficulty of realizing the primitive in a
practical instantiation. This might include the complexity of implementing the primitive in
either a software or hardware environment.
The relative importance of various criteria is very much dependent on the application
and resources available. For example, in an environment where computing power is limited
one may have to trade off a very high level of security for better performance of the system as
a whole. Cryptography, over the ages, has been an art practiced by many who have devised
ad hoc techniques to meet some of the information security requirements. The last twenty
years have been a period of transition as the discipline moved from an art to a science. There
are now several international scientific conferences devoted exclusively to cryptography and
also an international scientific organization, the International Association for Cryptologic
Research (IACR), aimed at fostering research in the area.
Figure 1.1: A Taxonomy of Cryptographic Primitives
1.2.3 Cryptography Types
i ) Classic Cryptography:
The earliest forms of secret writing required little more than local pen and paper
analogs, as most people could not read. More literacy, or literate opponents, required actual
cryptography. The main classical cipher types are transposition ciphers, which rearrange the
order of letters in a message (e.g., 'hello world' becomes 'ehlolowrdl' in a trivially simple
rearrangement scheme), and substitution ciphers, which systematically replace letters or
groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmzbupodf'
by replacing each letter with the one following it in the Latin alphabet). Simple versions of
either have never offered much confidentiality from enterprising opponents. An early
substitution cipher was the Caesar cipher, in which each letter in the plaintext was replaced
by a letter some fixed number of positions further down the alphabet. Suetonius reports that
Julius Caesar used it with a shift of three to communicate with his generals. Atbash is an
example of an early Hebrew cipher. The earliest known use of cryptography is some carved
cipher text on stone in Egypt (ca 1900 BCE), but this may have been done for the amusement
of literate observers rather than as a way of concealing information. Cryptography is
recommended in the Kama Sutra (ca 400 BCE) as a way for lovers to communicate without
inconvenient discovery.
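The two classical families just described can be sketched in a few lines of Python. This is an illustrative toy, not part of the report; the function names and the column count are my own choices:

```python
def caesar_encrypt(plaintext, shift=3):
    """Substitution: replace each letter with the one `shift` positions
    further down the alphabet (Suetonius reports Caesar used shift 3)."""
    out = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return ''.join(out)

def columnar_transpose(plaintext, cols=4):
    """Transposition: same letters, rearranged. Write the text row by row
    into `cols` columns, then read it off column by column."""
    stripped = plaintext.replace(' ', '')
    return ''.join(stripped[i::cols] for i in range(cols))

print(caesar_encrypt('fly at once', 1))      # gmz bu podf
print(columnar_transpose('attack at dawn'))  # acdtkatawatn
```

Decryption is just the inverse operation (shift backwards, or re-read the columns), which is why a leaked method plus a guessable key breaks such ciphers completely.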
The Greeks of Classical times are said to have known of ciphers (e.g., the scytale
transposition cipher claimed to have been used by the Spartan military). Steganography (i.e.,
hiding even the existence of a message so as to keep it confidential) was also first developed
in ancient times. An early example, from Herodotus, concealed a message—a tattoo on a
slave's shaved head—under the regrown hair. Another Greek method was developed by
Polybius (now called the "Polybius Square"). More modern examples of steganography
include the use of invisible ink, microdots, and digital watermarks to conceal information.
Cipher texts produced by a classical cipher (and some modern ciphers) always reveal
statistical information about the plaintext, which can often be used to break them. After the
discovery of frequency analysis perhaps by the Arab mathematician and polymath, Al-Kindi
(also known as Alkindus), in the 9th century, nearly all such ciphers became more or less
readily breakable by any informed attacker. Such classical ciphers still enjoy popularity
today, though mostly as puzzles (see cryptogram). Al-Kindi wrote a book on cryptography
entitled Risalah fi Istikhraj al-Mu'amma (Manuscript for the Deciphering Cryptographic
Messages), in which he described the first cryptanalysis techniques.
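A minimal sketch of the idea behind Al-Kindi's technique: count letter frequencies in a Caesar-enciphered English text and guess the shift by assuming the most frequent ciphertext letter stands for 'e'. The message below is my own example, chosen to have typical letter statistics:

```python
from collections import Counter

def caesar(text, shift):
    """Shift every letter of a lowercase text by `shift` positions."""
    return ''.join(chr((ord(c) - 97 + shift) % 26 + 97) if c.isalpha() else c
                   for c in text.lower())

def guess_shift(ciphertext):
    """Frequency analysis: assume the commonest ciphertext letter encrypts 'e'."""
    counts = Counter(c for c in ciphertext if c.isalpha())
    top = counts.most_common(1)[0][0]
    return (ord(top) - ord('e')) % 26

message = "the enemy attacks at dawn and we defend the eastern gate"
enc = caesar(message, 7)
print(guess_shift(enc))  # 7 -- recovered from letter statistics alone
```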
Figure: 16th-century book-shaped French cipher machine, with arms of Henri II of France
Figure: Enciphered letter from Gabriel de Luetz d'Aramon, French Ambassador to the Ottoman
Empire, after 1546, with partial decipherment
Essentially all ciphers remained vulnerable to cryptanalysis using the frequency analysis
technique until the development of the polyalphabetic cipher, most clearly by Leon Battista
Alberti around the year 1467, though there is some indication that it was already known to
Al-Kindi. Alberti's innovation was to use different ciphers (i.e., substitution alphabets) for
various parts of a message (perhaps for each successive plaintext letter at the limit). He also
invented what was probably the first automatic cipher device, a wheel which implemented a
partial realization of his invention. In the polyalphabetic Vigenère cipher, encryption uses a
key word, which controls letter substitution depending on which letter of the key word is
used. In the mid-19th century Charles Babbage showed that the Vigenère cipher was
vulnerable to Kasiski examination, but this was first published about ten years later by
Friedrich Kasiski.
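The keyword mechanism can be sketched as follows; this is a toy illustration using the classic textbook example, not drawn from the report:

```python
def vigenere(text, keyword, decrypt=False):
    """Polyalphabetic substitution: the i-th plaintext letter is shifted by
    the i-th keyword letter (cycling), so identical plaintext letters can
    map to different ciphertext letters."""
    key = [ord(k) - 97 for k in keyword.lower()]
    out, i = [], 0
    for ch in text.lower():
        if ch.isalpha():
            shift = key[i % len(key)]
            if decrypt:
                shift = -shift
            out.append(chr((ord(ch) - 97 + shift) % 26 + 97))
            i += 1
        else:
            out.append(ch)
    return ''.join(out)

ct = vigenere('attack at dawn', 'lemon')
print(ct)                                   # lxfopv ef rnhr
print(vigenere(ct, 'lemon', decrypt=True))  # attack at dawn
```

Kasiski examination exploits exactly the key cycling visible in `i % len(key)`: repeated plaintext fragments that align with the same key position yield repeated ciphertext fragments at distances divisible by the key length.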
Although frequency analysis is a powerful and general technique against many
ciphers, encryption has still been often effective in practice; many a would-be cryptanalyst
was unaware of the technique. Breaking a message without using frequency analysis
essentially required knowledge of the cipher used and perhaps of the key involved, thus
making espionage, bribery, burglary, defection, etc., more attractive approaches to the
cryptanalytically uninformed. It was finally explicitly recognized in the 19th century that
secrecy of a cipher's algorithm is not a sensible or practical safeguard of message security; in
fact, it was further realized that any adequate cryptographic scheme (including ciphers)
should remain secure even if the adversary fully understands the cipher algorithm itself.
Security of the key used should alone be sufficient for a good cipher to maintain
confidentiality under an attack. This fundamental principle was first explicitly stated in 1883
by Auguste Kerckhoffs and is generally called Kerckhoffs's Principle; alternatively and more
bluntly, it was restated by Claude Shannon, the inventor of information theory and the
fundamentals of theoretical cryptography, as Shannon's Maxim—'the enemy knows the
system'.
Different physical devices and aids have been used to assist with ciphers. One of the
earliest may have been the scytale of ancient Greece, a rod supposedly used by the Spartans
as an aid for a transposition cipher (see image above). In medieval times, other aids were
invented such as the cipher grille, which was also used for a kind of steganography. With the
invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's own
cipher disk, Johannes Trithemius' tabula recta scheme, and Thomas Jefferson's multi-cylinder
(not publicly known, and reinvented independently by Bazeries around 1900). Many
mechanical encryption/decryption devices were invented early in the 20th century, and
several patented, among them rotor machines—famously including the Enigma machine used
by the German government and military from the late '20s and during World War II.[16] The
ciphers implemented by better quality examples of these machine designs brought about a
substantial increase in cryptanalytic difficulty after WWI.
ii) Symmetric-Key Cryptography:
Symmetric-key cryptography refers to encryption methods in which both the sender
and receiver share the same key (or, less commonly, in which their keys are different, but
related in an easily computable way). This was the only kind of encryption publicly known
until June 1976.
Figure 1.2: Symmetric Key Cryptography
Figure: One round (out of 8.5) of the patented IDEA cipher, used in some versions of PGP for
high-speed encryption of, for instance, e-mail
Symmetric key ciphers are implemented as either block ciphers or stream ciphers. A
block cipher enciphers input in blocks of plaintext as opposed to individual characters, the
input form used by a stream cipher.
The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES)
are block cipher designs which have been designated cryptography standards by the US
government (though DES's designation was finally withdrawn after the AES was adopted).
Despite its deprecation as an official standard, DES (especially its still-approved and much
more secure triple-DES variant) remains quite popular; it is used across a wide range of
applications, from ATM encryption to e-mail privacy and secure remote access. Many other
block ciphers have been designed and released, with considerable variation in quality. Many
have been thoroughly broken, such as FEAL.
Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key
material, which is combined with the plaintext bit-by-bit or character-by-character, somewhat
like the one-time pad. In a stream cipher, the output stream is created based on a hidden
internal state which changes as the cipher operates. That internal state is initially set up using
the secret key material. RC4 is a widely used stream cipher. Block ciphers can be used as
stream ciphers.
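The keystream idea can be illustrated with a deliberately toy generator. This is not RC4 or any real cipher; the linear congruential constants are classic textbook PRNG values, used here only to show the structure:

```python
def keystream(key, n):
    """Expand a secret key into n pseudo-random bytes. A real stream
    cipher uses a cryptographically strong generator; this LCG is a
    stand-in to show the structure only."""
    state = key
    for _ in range(n):
        state = (1103515245 * state + 12345) % 2**31
        yield state & 0xFF

def stream_xor(data, key):
    """Encrypt or decrypt: XOR each byte with the next keystream byte.
    Applying the same operation twice restores the plaintext."""
    return bytes(b ^ k for b, k in zip(data, keystream(key, len(data))))

msg = b'attack at dawn'
ct = stream_xor(msg, 12345)
print(stream_xor(ct, 12345) == msg)  # True: decryption is the same operation
```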
Cryptographic hash functions are a third type of cryptographic algorithm. They take a
message of any length as input, and output a short, fixed length hash which can be used in
(for example) a digital signature. For good hash functions, an attacker cannot find two
messages that produce the same hash. MD4 is a long-used hash function which is now
broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice. The
U.S. National Security Agency developed the Secure Hash Algorithm series of MD5-like
hash functions: SHA-0 was a flawed algorithm that the agency withdrew; SHA-1 is widely
deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the
SHA-2 family improves on SHA-1, but it isn't yet widely deployed, and the U.S. standards
authority thought it "prudent" from a security perspective to develop a new standard to
"significantly improve the robustness of NIST's overall hash algorithm toolkit."[25] Thus, a
hash function design competition is underway and meant to select a new U.S. national
standard, to be called SHA-3, by 2012.
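The fixed-length property described above can be seen directly with Python's standard hashlib module, which provides the SHA family:

```python
import hashlib

d1 = hashlib.sha256(b'attack at dawn').hexdigest()
d2 = hashlib.sha256(b'attack at dusk').hexdigest()

print(len(d1))   # 64 hex characters = 256 bits, regardless of input length
print(d1 == d2)  # False: a small change yields a completely different hash
print(len(hashlib.sha256(b'a' * 100000).hexdigest()))  # still 64
```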
Message authentication codes (MACs) are much like cryptographic hash functions,
except that a secret key can be used to authenticate the hash value upon receipt.
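Python's standard hmac module shows the difference in practice: the tag can only be produced and checked by someone holding the shared secret key (the key and messages below are illustrative):

```python
import hashlib
import hmac

key = b'shared-secret'
tag = hmac.new(key, b'transfer 100 to alice', hashlib.sha256).digest()

def verify(key, message, tag):
    """Recompute the MAC and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

print(verify(key, b'transfer 100 to alice', tag))       # True
print(verify(key, b'transfer 900 to alice', tag))       # False: tampering detected
print(verify(b'wrong', b'transfer 100 to alice', tag))  # False: wrong key
```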
Symmetric-key cryptosystems use the same key for encryption and decryption of a
message, though a message or group of messages may have a different key than others. A
significant disadvantage of symmetric ciphers is the key management necessary to use them
securely. Each distinct pair of communicating parties must, ideally, share a different key, and
perhaps each cipher text exchanged as well. The number of keys required increases as the
square of the number of network members, which very quickly requires complex key
management schemes to keep them all straight and secret. The difficulty of securely
establishing a secret key between two communicating parties, when a secure channel does not
already exist between them, also presents a chicken-and-egg problem which is a considerable
practical obstacle for cryptography users in the real world.
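The quadratic growth mentioned above is simply the number of distinct pairs among n parties:

```python
def pairwise_keys(n):
    """Each of the n parties shares one key with each of the other n-1;
    counting each pair once gives n*(n-1)/2 keys in total."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_keys(n))   # 45, 4950, 499500 keys respectively
```

A public-key system, by contrast, needs only one key pair per party: n pairs rather than roughly n squared over two keys.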
In a groundbreaking 1976 paper, Whitfield Diffie and Martin Hellman proposed the
notion of public-key (also, more generally, called asymmetric key) cryptography in which
two different but mathematically related keys are used—a public key and a private key. A
public key system is so constructed that calculation of one key (the 'private key') is
computationally infeasible from the other (the 'public key'), even though they are necessarily
related. Instead, both keys are generated secretly, as an interrelated pair. The historian David
Kahn described public-key cryptography as "the most revolutionary new concept in the field
since poly alphabetic substitution emerged in the Renaissance".
In public-key cryptosystems, the public key may be freely distributed, while its paired
private key must remain secret. In a public-key encryption system, the public key is used for
encryption, while the private or secret key is used for decryption. While Diffie and Hellman
could not find such a system, they showed that public-key cryptography was indeed possible
by presenting the Diffie–Hellman key exchange protocol, a solution that is now widely used
in secure communications to allow two parties to secretly agree on a shared encryption key.
Diffie and Hellman's publication sparked widespread academic efforts in finding a
practical public-key encryption system. This race was finally won in 1978 by Ronald Rivest,
Adi Shamir, and Len Adleman, whose solution has since become known as the RSA
algorithm.
The Diffie–Hellman and RSA algorithms, in addition to being the first publicly
known examples of high quality public-key algorithms, have been among the most widely
used. Others include the Cramer–Shoup cryptosystem, ElGamal encryption, and various
elliptic curve techniques.
To much surprise, a document published in 1997 by the Government Communications
Headquarters (GCHQ), a British intelligence organization, revealed that cryptographers at
GCHQ had anticipated several academic developments. Reportedly, around 1970, James H.
Ellis had conceived the principles of asymmetric key cryptography. In 1973, Clifford Cocks
invented a solution that essentially resembles the RSA algorithm. And in 1974, Malcolm J.
Williamson is claimed to have developed the Diffie-Hellman key exchange.
Public-key cryptography can also be used to implement digital signature schemes. A
digital signature is reminiscent of an ordinary signature; they both have the characteristic that
they are easy for a user to produce, but difficult for anyone else to forge. Digital signatures
can also be permanently tied to the content of the message being signed; they cannot then be
'moved' from one document to another, for any attempt will be detectable. In digital signature
schemes, there are two algorithms: one for signing, in which a secret key is used to process
the message (or a hash of the message, or both), and one for verification, in which the
matching public key is used with the message to check the validity of the signature. RSA and
DSA are two of the most popular digital signature schemes. Digital signatures are central to
the operation of public key infrastructures and many network security schemes (e.g.,
SSL/TLS, many VPNs, etc.).
Public-key algorithms are most often based on the computational complexity of
"hard" problems, often from number theory. For example, the hardness of RSA is related to
the integer factorization problem, while Diffie–Hellman and DSA are related to the discrete
logarithm problem. More recently, elliptic curve cryptography has developed in which
security is based on number theoretic problems involving elliptic curves. Because of the
difficulty of the underlying problems, most public-key algorithms involve operations such as
modular multiplication and exponentiation, which are much more computationally expensive
than the techniques used in most block ciphers, especially with typical key sizes. As a result,
public-key cryptosystems are commonly hybrid cryptosystems, in which a fast high-quality
symmetric-key encryption algorithm is used for the message itself, while the relevant
symmetric key is sent with the message, but encrypted using a public-key algorithm.
Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is
computed, and only the resulting hash is digitally signed.
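The hybrid pattern can be sketched end to end. The "RSA" numbers are the same toy textbook parameters as above, and the XOR "cipher" is a stand-in for a real symmetric algorithm such as AES; all names are illustrative:

```python
import os

n, e, d = 3233, 17, 2753      # toy textbook RSA key pair (insecure)

def xor_cipher(data, key_byte):
    """Stand-in for a fast symmetric cipher."""
    return bytes(b ^ key_byte for b in data)

def hybrid_encrypt(message):
    session_key = os.urandom(1)[0]                 # tiny random "session key"
    wrapped = pow(session_key, e, n)               # slow public-key op on a SMALL input
    ciphertext = xor_cipher(message, session_key)  # fast symmetric op on the BULK data
    return wrapped, ciphertext

def hybrid_decrypt(wrapped, ciphertext):
    session_key = pow(wrapped, d, n)               # recover the session key
    return xor_cipher(ciphertext, session_key)

wrapped, ct = hybrid_encrypt(b'a long message ' * 200)
print(hybrid_decrypt(wrapped, ct) == b'a long message ' * 200)  # True
```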
iii) Public-Key Cryptography:
Figure 1.3: Public Key Cryptography
In an asymmetric key encryption scheme, anyone can encrypt messages using the
public key, but only the holder of the paired private key can decrypt. Security depends on the
secrecy of that private key.
In some related signature schemes, the private key is used to sign a message; anyone
can check the signature using the public key. Validity depends on security of the private key.
In the Diffie–Hellman key exchange scheme, each party generates a public/private
key pair and distributes the public key. After obtaining an authentic copy of each other's
public keys, Alice and Bob can compute a shared secret offline. The shared secret can be
used, for instance, as the key for a symmetric cipher.
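With deliberately tiny public parameters (real deployments use primes of thousands of bits), the exchange looks like this:

```python
p, g = 23, 5      # public: a small prime modulus and a generator

a, b = 6, 15      # Alice's and Bob's private values, never transmitted

A = pow(g, a, p)  # Alice publishes A = g^a mod p  -> 8
B = pow(g, b, p)  # Bob publishes   B = g^b mod p  -> 19

alice_secret = pow(B, a, p)   # (g^b)^a mod p
bob_secret   = pow(A, b, p)   # (g^a)^b mod p

print(alice_secret, bob_secret)   # 2 2 -- both sides hold g^(ab) mod p
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is exactly the discrete logarithm problem on which the scheme's security rests.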
Public-key cryptography refers to a cryptographic system requiring two separate keys,
one to lock or encrypt the plaintext, and one to unlock or decrypt the cipher text. Neither key
will do both functions. One of these keys is published or public and the other is kept private.
If the lock/encryption key is the one published then the system enables private
communication from the public to the unlocking key's owner. If the unlock/decryption key is
the one published then the system serves as a signature verifier of documents locked by the
owner of the private key. Although in this latter case, since encrypting the entire message is
relatively expensive computationally, in practice just a hash of the message is encrypted for
signature verification purposes.
This cryptographic approach uses asymmetric key algorithms such as RSA, hence the
more general name of "asymmetric key cryptography". Some of these algorithms have the
public key/private key property; that is, neither key is derivable from knowledge of the other;
not all asymmetric key algorithms do. Those with this property are particularly useful and
have been widely deployed, and are the source of the commonly used name.
Although the two keys in the pair are different, they are mathematically linked. The public key is used to
transform a message into an unreadable form, decryptable only by using the (different but
matching) private key. By publishing the public key, the key producer empowers anyone who
gets a copy of the public key to produce messages that only he or she can read—because only the key
producer has a copy of the private key (required for decryption). When someone wants to
send a secure message to the creator of those keys, the sender encrypts it (i.e., transforms it
into an unreadable form) using the intended recipient's public key; to decrypt the message,
the recipient uses the private key. No one else, including the sender, can do so.
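The encryption direction can be made concrete with the standard textbook toy parameters (hopelessly insecure; real moduli are 2048 bits or more, and real use adds padding):

```python
p, q = 61, 53
n = p * q                           # public modulus 3233
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent 2753, kept secret

m = 65                              # message encoded as an integer < n
c = pow(m, e, n)                    # anyone can encrypt with (n, e)
recovered = pow(c, d, n)            # only the private-key holder can decrypt

print(c, recovered)                 # 2790 65
```

Note that the sender, knowing only m, c, n, and e, is in the same position as any eavesdropper: without d, undoing the exponentiation requires factoring n.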
Thus, unlike symmetric key algorithms, a public key algorithm does not require a
secure initial exchange of one, or more, secret keys between the sender and receiver. These
algorithms work in such a way that, while it is easy for the intended recipient to generate the
public and private keys and to decrypt the message using the private key, and while it is easy
for the sender to encrypt the message using the public key, it is extremely difficult for anyone
to figure out the private key based on their knowledge of the public key. They are based on
mathematical relationships (the most notable ones being the integer factorization and discrete
logarithm problems) that have no efficient solution.
The use of these algorithms also allows authenticity of a message to be checked by
creating a digital signature of a message using the private key, which can be verified using
the public key.
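The two uses described above, encryption with the public key and signing with the private key, can be sketched with textbook RSA. The tiny primes and the message value below are illustrative assumptions only; real deployments use keys of thousands of bits together with padding schemes.

```python
# Toy RSA with tiny primes -- illustration only, NOT secure.
p, q = 61, 53
n = p * q                  # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

# Confidentiality: encrypt with the public key, decrypt with the private key.
m = 42
c = pow(m, e, n)           # anyone holding (e, n) can do this
assert pow(c, d, n) == m   # only the holder of d recovers m

# Authenticity: sign with the private key, verify with the public key.
s = pow(m, d, n)           # only the key owner can produce s
assert pow(s, e, n) == m   # anyone holding (e, n) can verify
```

Note that the same pair of operations serves both purposes; only the order in which the two exponents are applied changes.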
Public key cryptography is a fundamental and widely used technology. It is an
approach used by many cryptographic algorithms and cryptosystems. It underpins such
Internet standards as Transport Layer Security (TLS) (successor to SSL), PGP, and GPG.
How It Works
The distinguishing technique used in public key cryptography is the use of
asymmetric key algorithms, where the key used to encrypt a message is not the same as the
key used to decrypt it. Each user has a pair of cryptographic keys — a public encryption key
and a private decryption key. The publicly available encrypting-key is widely distributed,
while the private decrypting-key is known only to the recipient. Messages are encrypted with
the recipient's public key and can be decrypted only with the corresponding private key. The
keys are related mathematically, but parameters are chosen so that determining the private
key from the public key is prohibitively expensive. The discovery of algorithms that could
produce public/private key pairs revolutionized the practice of cryptography beginning in the
mid-1970s.
In contrast, symmetric-key algorithms, variations of which have been used for
thousands of years, use a single secret key which must be shared and kept private by both
sender and receiver for both encryption and decryption. To use a symmetric encryption
scheme, the sender and receiver must securely share a key in advance.
Because symmetric key algorithms are nearly always much less computationally
intensive, it is common to exchange a key using a key-exchange algorithm and transmit data
using that key and a symmetric key algorithm. PGP and the SSL/TLS family of schemes do
this, for instance, and are thus called hybrid cryptosystems.
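A hybrid scheme of the kind just described can be sketched as follows. The toy RSA parameters and the repeating-key XOR "cipher" below are stand-ins chosen for brevity, not the algorithms PGP or TLS actually employ (those use full-size RSA or Diffie-Hellman plus a cipher such as AES).

```python
# Hybrid sketch: asymmetric key transport + symmetric bulk encryption.
import secrets

# Toy RSA (insecure demo parameters) stands in for the asymmetric step.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def xor_cipher(data: bytes, key: int) -> bytes:
    # Symmetric stand-in: the same operation encrypts and decrypts.
    return bytes(b ^ key for b in data)

# Sender: pick a random session key, wrap it with the recipient's public key.
session_key = secrets.randbelow(256)
wrapped = pow(session_key, e, n)       # slow asymmetric step, small payload
ciphertext = xor_cipher(b"bulk message data", session_key)  # fast bulk step

# Recipient: unwrap the session key with the private key, decrypt the bulk data.
recovered_key = pow(wrapped, d, n)
assert recovered_key == session_key
assert xor_cipher(ciphertext, recovered_key) == b"bulk message data"
```

The expensive public-key operation touches only the short session key, while the cheap symmetric cipher handles the data itself.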
Description
The two main branches of public key cryptography are:
Public key encryption: a message encrypted with a recipient's public key cannot be
decrypted by anyone except a possessor of the matching private key; it is presumed
that this will be the owner of that key and the person associated with the public key
used. This is used for confidentiality.
Digital signatures: a message signed with a sender's private key can be verified by
anyone who has access to the sender's public key, thereby proving that the sender had
access to the private key (and therefore is likely to be the person associated with the
public key used), and that the message has not been tampered with. On the
question of authenticity, see also message digest.
An analogy to public-key encryption is that of a locked mail box with a mail slot. The
mail slot is exposed and accessible to the public; its location (the street address) is in essence
the public key. Anyone knowing the street address can go to the door and drop a written
message through the slot; however, only the person who possesses the key can open the
mailbox and read the message.
An analogy for digital signatures is the sealing of an envelope with a personal wax
seal. The message can be opened by anyone, but the presence of the seal authenticates the
sender.
A central problem for use of public-key cryptography is confidence (ideally proof)
that a public key is correct, belongs to the person or entity claimed (i.e., is 'authentic'), and
has not been tampered with or replaced by a malicious third party. The usual approach to this
problem is to use a public-key infrastructure (PKI), in which one or more third parties, known
as certificate authorities, certify ownership of key pairs. PGP, in addition to a certificate
authority structure, has used a scheme generally called the "web of trust", which decentralizes
the authentication of public keys by replacing a central mechanism with individual
endorsements of the link between user and public key. No fully satisfactory solution to the
public key authentication problem is known.
iv) Private Key (secret key) Cryptography
In cryptography, a private or secret key is an encryption/decryption key known only
to the party or parties that exchange secret messages. In traditional secret key cryptography, a
key would be shared by the communicators so that each could encrypt and decrypt messages.
The risk in this system is that if either party loses the key or it is stolen, the system is broken.
A more recent alternative is to use a combination of public and private keys. In this system, a
public key is used together with a private key. See public key infrastructure (PKI) for more
information.
The Advanced Encryption Standard, in the following referred to as AES, is the winner
of a contest, held in 1997 by the US Government, after the Data Encryption Standard was
found too weak because of its small key size and the technological advancements in
processor power. Fifteen candidates were accepted in 1998 and, based on public comments,
the pool was reduced to five finalists in 1999. In October 2000, one of these five algorithms
was selected as the forthcoming standard: a slightly modified version of Rijndael.
Rijndael, whose name is derived from the names of its two Belgian inventors,
Joan Daemen and Vincent Rijmen, is a block cipher, which means that it works on fixed-
length groups of bits, which are called blocks. It takes an input block of a certain size, usually
128, and produces a corresponding output block of the same size. The transformation requires
a second input, which is the secret key. It is important to know that the secret key can be of
any size (depending on the cipher used) and that AES uses three different key sizes: 128, 192
and 256 bits.
While AES supports only block sizes of 128 bits and key sizes of 128, 192 and 256
bits, the original Rijndael supports key and block sizes in any multiple of 32, with a
minimum of 128 and a maximum of 256 bits.
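The size constraints above can be enumerated directly; the snippet below simply lists the multiples of 32 between 128 and 256 and checks that the AES key sizes fall within them.

```python
# Valid Rijndael block and key sizes: any multiple of 32 bits from 128 to 256.
rijndael_sizes = [s for s in range(128, 257) if s % 32 == 0]
assert rijndael_sizes == [128, 160, 192, 224, 256]

# AES restricts these to a 128-bit block and keys of 128, 192 or 256 bits.
aes_key_sizes = [128, 192, 256]
assert all(k in rijndael_sizes for k in aes_key_sizes)
```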
CHAPTER 2
DATA ENCRYPTION STANDARD
2.1 Introduction
The Data Encryption Standard (DES) is a previously predominant algorithm for the
encryption of electronic data. It was highly influential in the advancement of modern
cryptography in the academic world. Developed in the early 1970s at IBM and based on an
earlier design by Horst Feistel, the algorithm was submitted to the National Bureau of
Standards (NBS) following the agency's invitation to propose a candidate for the protection
of sensitive, unclassified electronic government data. In 1976, after consultation with the
National Security Agency (NSA), the NBS eventually selected a slightly modified version,
which was published as an official Federal Information Processing Standard (FIPS) for the
United States in 1977. The publication of an NSA-approved encryption standard
simultaneously resulted in its quick international adoption and widespread academic scrutiny.
Controversies arose out of classified design elements, a relatively short key length of the
symmetric-key block cipher design, and the involvement of the NSA, nourishing suspicions
about a backdoor. While these suspicions eventually turned out to be unfounded, the
intense academic scrutiny the algorithm received over time led to the modern understanding
of block ciphers and their cryptanalysis.
DES is now considered to be insecure for many applications. This is chiefly due to the
56-bit key size being too small; in January 1999, distributed.net and the Electronic Frontier
Foundation collaborated to publicly break a DES key in 22 hours and 15 minutes. There are
also some analytical results which demonstrate theoretical weaknesses in the cipher, although
they are infeasible to mount in practice. The algorithm is believed to be practically secure in
the form of Triple DES, although there are theoretical attacks. In recent years, the cipher has
been superseded by the Advanced Encryption Standard (AES). Furthermore, DES has been
withdrawn as a standard by the National Institute of Standards and Technology (formerly the
National Bureau of Standards).
Some documentation makes a distinction between DES as a standard and DES as an
algorithm, referring to the algorithm as the DEA (Data Encryption Algorithm).
2.2 Description
DES is the archetypal block cipher — an algorithm that takes a fixed-length string of
plaintext bits and transforms it through a series of complicated operations into another cipher
text bit string of the same length. In the case of DES, the block size is 64 bits. DES also uses
a key to customize the transformation, so that decryption can supposedly only be performed
by those who know the particular key used to encrypt. The key ostensibly consists of 64 bits;
however, only 56 of these are actually used by the algorithm. Eight bits are used solely for
checking parity, and are thereafter discarded. Hence the effective key length is 56 bits, and it
is usually quoted as such. Every 8th bit of the selected key is discarded; that is, positions 8, 16,
24, 32, 40, 48, 56 and 64 are removed from the 64-bit key, leaving behind only the 56-bit key.
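The parity-bit removal just described can be sketched in a few lines. Note that real DES performs this drop as part of the PC-1 permutation, which also reorders the remaining bits; this sketch shows only the drop.

```python
# Drop every 8th bit (positions 8, 16, ..., 64, counting from 1) of a
# 64-bit DES key to obtain the effective 56-bit key.
def drop_parity_bits(key_bits):
    return [b for i, b in enumerate(key_bits, start=1) if i % 8 != 0]

key64 = [1, 0] * 32                 # arbitrary 64-bit example key
key56 = drop_parity_bits(key64)
assert len(key56) == 56             # 8 parity bits removed
```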
Figure 2.1: DES Algorithm Structure
The algorithm's overall structure is shown in Figure 2.1: there are 16 identical stages of
processing, termed rounds. There is also an initial and final permutation, termed IP and FP,
which are inverses (IP "undoes" the action of FP, and vice versa). IP and FP have almost no
cryptographic significance, but were apparently included in order to facilitate loading blocks
in and out of mid-1970s hardware.
Before the main rounds, the block is divided into two 32-bit halves and processed
alternately; this criss-crossing is known as the Feistel scheme. The Feistel structure ensures
that decryption and encryption are very similar processes; the only difference is that the
subkeys are applied in the reverse order when decrypting. The rest of the algorithm is identical.
This greatly simplifies implementation, particularly in hardware, as there is no need for
separate encryption and decryption algorithms.
The ⊕ symbol denotes the exclusive-OR (XOR) operation. The F-function scrambles
half a block together with some of the key. The output from the F-function is then combined
with the other half of the block, and the halves are swapped before the next round. After the
final round, the halves are not swapped; this is a feature of the Feistel structure which makes
encryption and decryption similar processes.
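The reversed-subkey symmetry can be demonstrated with a minimal Feistel network. The round function F below is an arbitrary stand-in, not the DES F-function; the Feistel structure inverts correctly regardless of what F is.

```python
# Minimal Feistel network: decryption is encryption with subkeys reversed.
MASK = 0xFFFFFFFF  # work on 32-bit halves, as in DES

def F(half, subkey):
    # Arbitrary stand-in round function; need not be invertible.
    return ((half * 2654435761) ^ subkey) & MASK

def feistel(left, right, subkeys):
    for k in subkeys:
        # Scramble one half with the subkey, combine with the other, swap.
        left, right = right, left ^ F(right, k)
    return right, left  # halves are swapped back after the final round

subkeys = [0xA5A5A5A5, 0x3C3C3C3C, 0x0F0F0F0F, 0x12345678]
L, R = 0xDEADBEEF, 0xCAFEBABE
cl, cr = feistel(L, R, subkeys)

# The same code with the subkeys in reverse order inverts the transformation.
assert feistel(cl, cr, list(reversed(subkeys))) == (L, R)
```

This is why a hardware DES implementation needs only one datapath for both directions.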
2.3 Modes of Operation
ECB (Electronic Code Book)
o This is the regular DES algorithm.
o Data is divided into 64-bit blocks and each block is encrypted one at a time.
o Separate encryptions with different blocks are totally independent of each
other.
o This means that if data is transmitted over a network or phone line,
transmission errors will only affect the block containing the error.
o It also means, however, that the blocks can be rearranged, thus scrambling a
file beyond recognition, and this action would go undetected.
o ECB is the weakest of the various modes because no additional security
measures are implemented besides the basic DES algorithm.
o However, ECB is the fastest and easiest to implement, making it the most
common mode of DES.
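The block-independence property of ECB, and the rearrangement weakness it implies, can be shown with a toy cipher. The 8-bit XOR "block cipher" below is a stand-in for DES chosen only to make the mode's behaviour visible.

```python
# ECB sketch with a toy 8-bit "block cipher" (a keyed byte substitution --
# not DES, just a stand-in to show the mode's behaviour).
def toy_encrypt_block(block, key):
    return (block ^ key) & 0xFF

def ecb_encrypt(blocks, key):
    # Each block is encrypted independently of the others.
    return [toy_encrypt_block(b, key) for b in blocks]

key = 0x5A
plaintext = [0x11, 0x22, 0x11, 0x22]   # repeating plaintext blocks
ciphertext = ecb_encrypt(plaintext, key)

# Identical plaintext blocks produce identical ciphertext blocks ...
assert ciphertext[0] == ciphertext[2] and ciphertext[1] == ciphertext[3]
# ... and blocks can be rearranged without detection:
assert ecb_encrypt([0x22, 0x11], key) == [ciphertext[1], ciphertext[0]]
```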
CBC (Cipher Block Chaining)
o In this mode of operation, each block of ECB encrypted cipher text is XORed
with the next plaintext block to be encrypted, thus making all the blocks
dependent on all the previous blocks.
o This means that in order to find the plaintext of a particular block, you need to
know the cipher text, the key, and the cipher text for the previous block.
o The first block to be encrypted has no previous cipher text, so the plaintext is
XORed with a 64-bit number called the Initialization Vector, or IV for short.
o So if data is transmitted over a network or phone line and there is a
transmission error, the error will be carried forward to all subsequent blocks
since each block is dependent upon the last.
o This mode of operation is more secure than ECB because the extra XOR step
adds one more layer to the encryption process.
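The chaining described above can be sketched with the same kind of toy byte cipher; only the XOR chaining logic is the point of the example, not the stand-in cipher.

```python
# CBC sketch using a toy 8-bit XOR block cipher (not DES).
def toy_encrypt_block(block, key):
    return (block ^ key) & 0xFF

def toy_decrypt_block(block, key):
    return (block ^ key) & 0xFF

def cbc_encrypt(blocks, key, iv):
    out, prev = [], iv
    for b in blocks:
        c = toy_encrypt_block(b ^ prev, key)  # XOR with previous ciphertext
        out.append(c)
        prev = c
    return out

def cbc_decrypt(blocks, key, iv):
    out, prev = [], iv
    for c in blocks:
        out.append(toy_decrypt_block(c, key) ^ prev)
        prev = c
    return out

key, iv = 0x5A, 0xC3
plaintext = [0x11, 0x22, 0x11, 0x22]
ciphertext = cbc_encrypt(plaintext, key, iv)

# Unlike ECB, identical plaintext blocks no longer encrypt identically.
assert ciphertext[0] != ciphertext[2]
assert cbc_decrypt(ciphertext, key, iv) == plaintext
```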
CFB (Cipher Feedback)
o In this mode, blocks of plaintext that are less than 64 bits long can be encrypted.
o Normally, special processing has to be used to handle files whose size is not a
perfect multiple of 8 bytes, but this mode removes that necessity (Stealth
handles this case by adding several dummy bytes to the end of a file before
encrypting it).
o The plaintext itself is not actually passed through the DES algorithm, but
merely XORed with an output block from it, in the following manner: A 64-bit
block called the Shift Register is used as the input plaintext to DES. This is
initially set to some arbitrary value, and encrypted with the DES algorithm.
The cipher text is then passed through an extra component called the M-box,
which simply selects the left-most M bits of the ciphertext, where M is the
number of bits in the block we wish to encrypt. This value is XORed with the
real plaintext, and the output of that is the final ciphertext. Finally, the
ciphertext is fed back into the Shift Register, and used as the plaintext seed for
the next block to be encrypted.
o As with CBC mode, an error in one block affects all subsequent blocks during
data transmission.
o This mode of operation is similar to CBC and is very secure, but it is slower
than ECB due to the added complexity.
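The shift-register procedure described above can be sketched with a toy 8-bit block cipher and M = 8, so the "M-box" selection is the whole output byte. Both the stand-in cipher and the parameter choice are simplifying assumptions.

```python
# CFB sketch: encrypt the shift register, XOR the result with the plaintext,
# and feed the resulting ciphertext back into the register.
def toy_encrypt_block(block, key):
    return ((block * 167 + key) % 251) & 0xFF  # arbitrary stand-in cipher

def cfb_encrypt(blocks, key, iv):
    out, reg = [], iv
    for p in blocks:
        c = p ^ toy_encrypt_block(reg, key)  # XOR plaintext with cipher output
        out.append(c)
        reg = c                              # ciphertext feeds the register
    return out

def cfb_decrypt(blocks, key, iv):
    out, reg = [], iv
    for c in blocks:
        out.append(c ^ toy_encrypt_block(reg, key))
        reg = c
    return out

key, iv = 0x5A, 0xC3
plaintext = [0x11, 0x22, 0x33]
ciphertext = cfb_encrypt(plaintext, key, iv)
assert cfb_decrypt(ciphertext, key, iv) == plaintext
```

Note that decryption also runs the block cipher in the forward (encrypting) direction; only the XOR is undone.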
OFB (Output Feedback)
o This is similar to CFB mode, except that the ciphertext output of DES is fed
back into the Shift Register, rather than the actual final ciphertext.
o The Shift Register is set to an arbitrary initial value, and passed through the
DES algorithm.
o The output from DES is passed through the M-box and then fed back into the
Shift Register to prepare for the next block.
o This value is then XORed with the real plaintext (which may be less than 64
bits in length, like CFB mode), and the result is the final ciphertext.
o Note that unlike CFB and CBC, a transmission error in one block will not
affect subsequent blocks, because once the recipient has the initial Shift
Register value, it will continue to generate new Shift Register plaintext inputs
without any further data input.
o This mode of operation is less secure than CFB mode because only the real
ciphertext and DES ciphertext output is needed to find the plaintext of the
most recent block.
o Knowledge of the key is not required.
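The key property of OFB, a keystream generated independently of the data, can be sketched as follows; the toy 8-bit block cipher is again a stand-in assumption.

```python
# OFB sketch: the cipher output is fed back to build a keystream that never
# sees the data; encryption is then a plain XOR with that keystream.
def toy_encrypt_block(block, key):
    return ((block * 167 + key) % 251) & 0xFF  # arbitrary stand-in cipher

def ofb_keystream(key, iv, n):
    out, reg = [], iv
    for _ in range(n):
        reg = toy_encrypt_block(reg, key)  # feed cipher output back
        out.append(reg)
    return out

key, iv = 0x5A, 0xC3
plaintext = [0x11, 0x22, 0x33]
ks = ofb_keystream(key, iv, len(plaintext))
ciphertext = [p ^ k for p, k in zip(plaintext, ks)]

# Decryption is the same XOR with the same keystream; a flipped ciphertext
# bit corrupts only its own block, since the keystream never sees the data.
assert [c ^ k for c, k in zip(ciphertext, ks)] == plaintext
```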
CHAPTER 3
ADVANCED ENCRYPTION STANDARD
3.1 Introduction
What is AES?
AES is short for Advanced Encryption Standard and is a United States encryption
standard defined in Federal Information Processing Standard (FIPS) 197, published in
November 2001. It was ratified as a federal standard in May 2002. AES is the most recent of
the four current algorithms approved for federal use in the United States. One should not
compare AES with RSA, another standard algorithm, as RSA is a different category of
algorithm. Bulk encryption of information itself is seldom performed with RSA. RSA is used
to transfer other encryption keys for use by AES, for example, and for digital signatures.
AES is a symmetric encryption algorithm processing data in blocks of 128 bits. A bit
can take the values zero and one, in effect a binary digit with two possible values as opposed
to decimal digits, which can take one of 10 values. Under the influence of a key, a 128-bit
block is encrypted by transforming it in a unique way into a new block of the same size. AES
is symmetric since the same key is used for encryption and the reverse transformation,
decryption. The only secret necessary to keep for security is the key. AES may be configured to
use different key lengths; the standard defines three lengths, and the resulting algorithms are
named AES-128, AES-192 and AES-256 respectively, to indicate the length in bits of the key.
Each additional bit in the key effectively doubles the strength of the algorithm, when defined
as the time necessary for an attacker to stage a brute force attack, i.e. an exhaustive search of
all possible key combinations in order to find the right one.
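The doubling claim is simple arithmetic: each extra bit doubles the size of the keyspace a brute-force attacker must search.

```python
# Each extra key bit doubles the brute-force search space.
def keyspace(bits):
    return 2 ** bits

assert keyspace(129) == 2 * keyspace(128)

# AES-256's keyspace is 2^128 times larger than AES-128's.
assert keyspace(256) // keyspace(128) == keyspace(128)
```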
Some Background on AES
In 1997 the US National Institute of Standards and Technology put out a call for
candidates for a replacement for the ageing Data Encryption Standard, DES. 15 candidates
were accepted for further consideration, and after a fully public process and three open
international conferences, the number of candidates was reduced to five. In February 2001,
the final candidate was announced and comments were solicited. 21 organizations and
individuals submitted comments. None had any reservations about the suggested algorithm.
AES is founded on solid and well-
published mathematical ground, and appears to resist all known attacks well. There’s a strong
indication that in fact no back-door or known weakness exists since it has been published for
a long time, has been the subject of intense scrutiny by researchers all over the world, and
enormous amounts of economic value and information are already successfully protected
by AES. There are no unknown factors in its design, and it was developed by Belgian
researchers in Belgium, which counters the conspiracy theories sometimes voiced
concerning an encryption standard developed by a United States government agency.
A strong encryption algorithm need meet only a single main criterion:
There must be no way to find the unencrypted clear text if the key is unknown, except
brute force, i.e. to try all possible keys until the right one is found.
A secondary criterion must also be met:
The number of possible keys must be so large that it is computationally infeasible to
actually stage a successful brute force attack in a short enough time.
The older standard, DES or Data Encryption Standard, meets the first criterion, but no
longer the secondary one – computer speeds have caught up with it, or soon will.
AES meets both criteria in all of its variants: AES-128, AES-192 and AES-256.
The AES algorithm is a round-based, symmetric block cipher. It processes data blocks
of fixed size (128 bits) using cipher keys of length 128, 192 or 256 bits. Depending on the
key used, it is usually abbreviated as AES-128, AES-192 or AES-256 respectively. In this
project only AES-128 is considered, as it is the most popular variant of the algorithm. The
functional blocks of the algorithm are Key expansion and encryption. In this project we are
concentrating on the key generation algorithm. The initial 128-bit cipher key has to be
expanded into eleven new round keys of the same length. In order to produce a new round
key, two transformations have to be performed: RotWord and SubWord. The first one simply
cyclically shifts the bytes of the last 32-bit word of the previous round key by one position to
the left. SubWord, on the other hand, applies the SubBytes transformation to each byte of the
rotated word. Simple bitwise XORs are then needed in order to produce the final round key.
The SubWord (SubBytes) transformation is implemented with a ROM (LUT).
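The expansion step just described can be sketched for one AES-128 round key. The S-box here is a tiny stand-in lookup (the real AES S-box is a fixed 256-entry table derived from finite-field inversion), so the output values are illustrative only; the RotWord/SubWord/Rcon/XOR structure is what the sketch demonstrates.

```python
# Sketch of one AES-128 key-expansion step: RotWord, SubWord, Rcon,
# then chained XORs with the previous round key's words.
SBOX_STUB = {b: (b * 7 + 99) & 0xFF for b in range(256)}  # NOT the AES S-box

def rot_word(w):
    return w[1:] + w[:1]              # cyclic left shift by one byte

def sub_word(w):
    return [SBOX_STUB[b] for b in w]  # byte-wise substitution

def next_round_key(prev, rcon):
    # prev: previous 16-byte round key as four 4-byte words
    t = sub_word(rot_word(prev[3]))   # transform the last word
    t[0] ^= rcon                      # round constant
    words = [[a ^ b for a, b in zip(prev[0], t)]]
    for i in range(1, 4):             # each word XORs the one before it
        words.append([a ^ b for a, b in zip(prev[i], words[i - 1])])
    return words

key = [[0x00, 0x01, 0x02, 0x03]] * 4
rk1 = next_round_key(key, 0x01)
assert rot_word([1, 2, 3, 4]) == [2, 3, 4, 1]
assert len(rk1) == 4 and all(len(w) == 4 for w in rk1)
```

Repeating this step ten times with successive round constants yields the eleven round keys mentioned above.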
Figure 3.1: AES Algorithm Structure
Figure 3.2: Architecture of AES Key Expander
3.2 Cipher
A cipher (pronounced SAI-fuhr) is any method of encrypting text (concealing its
readability and meaning). It is also sometimes used to refer to the encrypted text message itself,
although here the term ciphertext is preferred. Its origin is the Arabic sifr, meaning empty or zero. In
addition to the cryptographic meaning, cipher also means someone insignificant, and a combination of
symbolic letters as in an entwined weaving of letters for a monogram.
Some ciphers work by simply realigning the alphabet (for example, A is represented by F, B
is represented by G, and so forth) or otherwise manipulating the text in some consistent pattern.
However, almost all serious ciphers use both a key (a variable that is combined in some way with the
unencrypted text) and an algorithm (a formula for combining the key with the text). A block cipher is
one that breaks a message up into chunks and combines a key with each chunk (for example, 64-bits
of text). A stream cipher is one that applies a key to each bit, one at a time. Most modern ciphers are
block ciphers.
3.3 Description of the cipher
AES is based on a design principle known as a substitution-permutation network. It is
fast in both software and hardware. Unlike its predecessor, DES, AES does not use a Feistel
network.
AES has a fixed block size of 128 bits and a key size of 128, 192, or 256 bits, whereas
Rijndael can be specified with block and key sizes in any multiple of 32 bits, with a minimum
of 128 bits. The block size has a maximum of 256 bits, but the key size has no theoretical
maximum.
AES operates on a 4×4 column-major order matrix of bytes, termed the state (versions
of Rijndael with a larger block size have additional columns in the state). Most AES
calculations are done in a special finite field.
The AES cipher is specified as a number of repetitions of transformation rounds that
convert the input plaintext into the final output of ciphertext. Each round consists of several
processing steps, including one that depends on the encryption key. A set of reverse rounds
are applied to transform ciphertext back into the original plaintext using the same encryption
key.
3.3.1 Block cipher
A block cipher is a method of encrypting text (to produce ciphertext) in which a
cryptographic key and algorithm are applied to a block of data (for example, 64 contiguous
bits) at once as a group rather than to one bit at a time. The main alternative method, used
much less frequently, is called the stream cipher.
So that identical blocks of text do not get encrypted the same way in a message
(which might make it easier to decipher the ciphertext), it is common to apply the ciphertext
from the previous encrypted block to the next block in a sequence. So that identical messages
encrypted on the same day do not produce identical ciphertext, an initialization vector derived
from a random number generator is combined with the text in the first block and the key. This
ensures that all subsequent blocks result in ciphertext that doesn't match that of the first
encrypting.
3.3.2 Stream Cipher
Stream ciphers can encrypt plaintext messages of variable length. The one-time pad
can be thought of as an example – each message uses a portion of the key with length equal
to the length of the plaintext message. (Then that portion of the key is never re-used.)
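The one-time pad is simple enough to state directly: XOR the plaintext with a random key at least as long as the message, and never reuse that key material.

```python
# One-time pad sketch: XOR the plaintext with a random key of equal length.
# With a truly random, never-reused key this achieves perfect secrecy.
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    assert len(key) >= len(data)   # key must cover the whole message
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # use once, then discard
ciphertext = otp(message, key)
assert otp(ciphertext, key) == message   # the same XOR decrypts
```

XORing twice with the same key restores the original, which is why one function serves for both encryption and decryption.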
The ideas that resulted in modern stream ciphers originated with another AT&T Bell
Labs engineer, Gilbert Vernam (1890 – 1960). In 1917, Vernam developed a scheme to
encrypt teletype transmissions. Unlike Morse code, which uses symbols of different lengths
to substitute for letters of the alphabet, teletype transmission used what we would today call a
5-bit code for the letters of the alphabet and certain keyboard commands. Its use was similar
to the way that the 8-bit ASCII code is used today in computing.
3.3.3 Cipher text
Ciphertext is encrypted text. Plaintext is what you have before encryption, and
ciphertext is the encrypted result. The term cipher is sometimes used as a synonym for
ciphertext, but it more properly means the method of encryption rather than the result.
3.4 Key size
In cryptography, key size or key length is the size measured in bits of the key used in
a cryptographic algorithm (such as a cipher). An algorithm's key length is distinct from its
cryptographic security, which is a logarithmic measure of the fastest known computational
attack on the algorithm, also measured in bits. The security of an algorithm cannot exceed its
key length (since any algorithm can be cracked by brute force), but it can be smaller. For
example, Triple DES has a key size of 168 bits but provides at most 112 bits of security,
since an attack of complexity 2^112 is known. This property of Triple DES is not a weakness
provided 112 bits of security is sufficient for an application. Most symmetric-key algorithms
in common use are designed to have security equal to their key length. No asymmetric-key
algorithms with this property are known; elliptic curve cryptography comes the closest with
an effective security of roughly half its key length.
Significance
Keys are used to control the operation of a cipher so that only the correct key can
convert encrypted text (ciphertext) to plaintext. Many ciphers are based on publicly known
algorithms or are open source, and so it is only the difficulty of obtaining the key that
determines security of the system, provided that there is no analytic attack (i.e., a 'structural
weakness' in the algorithms or protocols used), and assuming that the key is not otherwise
available (such as via theft, extortion, or compromise of computer systems). The widely
accepted notion that the security of the system should depend on the key alone has been
explicitly formulated by Auguste Kerckhoffs (in the 1880s) and Claude Shannon (in the
1940s); the statements are known as Kerckhoffs' principle and Shannon's Maxim
respectively.
A key should therefore be large enough that a brute force attack (possible against any
encryption algorithm) is infeasible – i.e., would take too long to execute. Shannon's work on
information theory showed that to achieve so called perfect secrecy, it is necessary for the
key length to be at least as large as the message to be transmitted and only used once (this
algorithm is called the One-time pad). In light of this, and the practical difficulty of managing
such long keys, modern cryptographic practice has discarded the notion of perfect secrecy as
a requirement for encryption, and instead focuses on computational security, under which the
computational requirements of breaking an encrypted text must be infeasible for an attacker.
The preferred numbers commonly used as key sizes (in bits) are powers of two,
potentially multiplied by a small odd integer.
3.4.1 Symmetric Algorithm Key Lengths
US Government export policy has long restricted the 'strength' of cryptography which
can be sent out of the country. For many years the limit was 40 bits. Today, a key length of
40 bits offers little protection against even a casual attacker with a single PC, a predictable
and inevitable consequence of governmental restrictions limiting key length. In response, by
the year 2000, most of the major US restrictions on the use of strong encryption were relaxed.
However, not all regulations have been removed, and encryption registration with the U.S.
Bureau of Industry and Security is still required to export "mass market encryption
commodities, software and components with encryption exceeding 64 bits" (75 F.R. 36494).
When the Data Encryption Standard cipher was released in 1977, a key length of 56
bits was thought to be sufficient. There was speculation at the time, however, that the NSA
had deliberately reduced the key size from the original value of 112 bits (in IBM's Lucifer
cipher) or 64 bits (in one of the versions of what was adopted as DES) so as to limit the
strength of encryption available to non-US users. The NSA has major computing resources
and a large budget; some thought that 56 bits was NSA-breakable in the late '70s. However,
by the late 90s, it became clear that DES could be cracked in a few days' time-frame with
custom-built hardware such as could be purchased by a large corporation.[5] The book
Cracking DES (O'Reilly and Associates) tells of the successful attempt to break 56-bit DES
by a brute force attack mounted by a cyber civil rights group with limited resources; see EFF
DES cracker. 56 bits is now considered insufficient length for symmetric algorithm keys, and
may have been for some time. More technically and financially capable organizations were
surely able to do the same long before the effort described in the book. Distributed.net and its
volunteers broke a 64-bit RC5 key in several years, using about seventy thousand (mostly
home) computers.
The Advanced Encryption Standard published in 2001 uses a key size of (at
minimum) 128 bits. It also can use keys up to 256 bits (a specification requirement for
submissions to the AES contest). 128 bits is currently thought, by many observers, to be
sufficient for the foreseeable future for symmetric algorithms of AES's quality. The U.S.
Government requires 192 or 256-bit AES keys for highly sensitive data.
3.4.2 Asymmetric Algorithm Key Lengths
The effectiveness of public key cryptosystems depends on the intractability
(computational and theoretical) of certain mathematical problems such as integer
factorization. Solving these problems is time-consuming, but usually faster than trying all
possible keys by brute force. Thus, asymmetric algorithm keys must be longer for equivalent
resistance to attack than symmetric algorithm keys. As of 2002, a key length of 1024 bits was
generally considered the minimum necessary for the RSA encryption algorithm.
As of 2003, RSA Security claims that 1024-bit RSA keys are equivalent in strength to
80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys and 3072-bit RSA keys
to 128-bit symmetric keys. RSA claims that 1024-bit keys are likely to become crackable
sometime between 2006 and 2010 and that 2048-bit keys are sufficient until 2030. An RSA
key length of 3072 bits should be used if security is required beyond 2030. NIST key
management guidelines further suggest that 15360-bit RSA keys are equivalent in strength to
256-bit symmetric keys.
The Finite Field Diffie-Hellman algorithm has roughly the same key strength as RSA
for the same key sizes. The work factor for breaking Diffie-Hellman is based on the discrete
logarithm problem, which is related to the integer factorization problem on which RSA's
strength is based. Thus, a 3072-bit Diffie-Hellman key has about the same strength as a 3072-
bit RSA key.
One of the asymmetric algorithm types, elliptic curve cryptography, or ECC, appears
to be secure with shorter keys than those needed by other asymmetric key algorithms. NIST
guidelines state that ECC keys should be twice the length of equivalent strength symmetric
key algorithms. So, for example, a 224-bit ECC key would have roughly the same strength as
a 112-bit symmetric key. These estimates assume no major breakthroughs in solving the
underlying mathematical problems that ECC is based on. A message encrypted with an
elliptic key algorithm using a 109-bit long key has been broken by brute force.
The NSA specifies that "Elliptic Curve Public Key Cryptography using the 256-bit
prime modulus elliptic curve as specified in FIPS-186-2 and SHA-256 are appropriate for
protecting classified information up to the SECRET level. Use of the 384-bit prime modulus
elliptic curve and SHA-384 are necessary for the protection of TOP SECRET information."
3.5 Effect Of Quantum Computing Attacks On Key Strength
The two best known quantum computing attacks are based on Shor's algorithm and
Grover's algorithm. Of the two, Shor's offers the greater risk to current security systems.
Derivatives of Shor's algorithm are widely conjectured to be effective against all
mainstream public-key algorithms including RSA, Diffie-Hellman and elliptic curve
cryptography. According to Professor Gilles Brassard, an expert in quantum computing: "The
time needed to factor an RSA integer is the same order as the time needed to use that same
integer as modulus for a single RSA encryption. In other words, it takes no more time to
break RSA on a quantum computer (up to a multiplicative constant) than to use it legitimately
on a classical computer." The general consensus is that these public key algorithms are
insecure at any key size if sufficiently large quantum computers capable of running Shor's
algorithm become available. The implication of this attack is that all data encrypted using
current standards based security systems such as the ubiquitous SSL used to protect e-
commerce and Internet banking and SSH used to protect access to sensitive computing
systems is at risk. Encrypted data protected using public-key algorithms can be archived and
may be broken at a later time.
Mainstream symmetric ciphers (such as AES or Twofish) and collision resistant hash
functions (such as SHA) are widely conjectured to offer greater security against known
quantum computing attacks. They are widely conjectured to be most vulnerable to Grover's
algorithm. Bennett, Bernstein, Brassard, and Vazirani proved in 1996 that a brute-force key
search on a quantum computer cannot be faster than roughly 2^(n/2) invocations of the
underlying cryptographic algorithm, compared with roughly 2^n in the classical case. Thus in
the presence of large quantum computers an n-bit key can provide at least n/2 bits of security.
Quantum brute force is easily defeated by doubling the key length, which has little extra
computational cost in ordinary use. This implies that at least a 160-bit symmetric key is
required to achieve 80-bit security rating against a quantum computer.
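The halving effect of Grover's algorithm can be written down directly; this tiny sketch just restates the n/2 rule from the paragraph above:

```python
def post_quantum_security_bits(key_bits):
    """Effective security of an n-bit symmetric key against Grover's
    algorithm: brute force needs roughly 2^(n/2) invocations, i.e. the
    key provides about n/2 bits of security."""
    return key_bits // 2

# Doubling the key length restores the classical security rating:
assert post_quantum_security_bits(160) == 80   # as stated in the text
assert post_quantum_security_bits(256) == 128  # AES-256 keeps 128-bit strength
```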
3.6 Key Generation
Key generation is the process of generating keys for cryptography. A key is the secret
value used to encrypt and decrypt the data being protected.
Modern cryptographic systems include symmetric-key algorithms (such as DES and
AES) and public-key algorithms (such as RSA). Symmetric-key algorithms use a single
shared key; keeping data secret requires keeping this key secret. Public-key algorithms use a
public key and a private key. The public key is made available to anyone (often by means of
a digital certificate). A sender encrypts data with the public key; only the holder of the private
key can decrypt this data.
Since public-key algorithms tend to be much slower than symmetric-key algorithms,
modern systems such as TLS and SSH use a combination of the two: one party receives the
other's public key, and encrypts a small piece of data (either a symmetric key or some data
used to generate it). The remainder of the conversation uses a (typically faster) symmetric-
key algorithm for encryption.
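The hybrid pattern described above can be illustrated with a deliberately toy sketch. This is NOT real TLS or SSH: the RSA parameters (N, E, D) are textbook demo values with no security, and the hash-counter keystream is an illustrative stand-in for a real symmetric cipher. It only shows the shape of the protocol: a slow public-key step wraps a random session key, and a fast symmetric step encrypts the bulk data.

```python
import hashlib
import secrets

# Toy textbook-RSA parameters (p = 61, q = 53): demo only, zero real security.
N, E, D = 3233, 17, 2753

def keystream(session_key, length):
    """Hash-counter keystream derived from the session key (demo cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            session_key.to_bytes(4, "big") + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return out[:length]

def hybrid_encrypt(message):
    session_key = secrets.randbelow(N - 2) + 2       # random symmetric key < N
    wrapped_key = pow(session_key, E, N)             # "slow" public-key step
    ct = bytes(m ^ k for m, k in
               zip(message, keystream(session_key, len(message))))
    return wrapped_key, ct

def hybrid_decrypt(wrapped_key, ct):
    session_key = pow(wrapped_key, D, N)             # private-key unwrapping
    return bytes(c ^ k for c, k in zip(ct, keystream(session_key, len(ct))))

wrapped, ct = hybrid_encrypt(b"secret message")
assert hybrid_decrypt(wrapped, ct) == b"secret message"
```

Only the short session key goes through the expensive modular exponentiation; the remainder of the conversation uses the fast symmetric keystream, exactly as the paragraph above describes.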
Computer cryptography uses integers for keys. In some cases keys are randomly
generated using a random number generator (RNG) or pseudorandom number generator
(PRNG). A PRNG is a computer algorithm that produces data that appears random under
analysis. PRNGs that use system entropy to seed data generally produce better results, since
this makes the initial conditions of the PRNG much more difficult for an attacker to guess. In
other situations, the key is created using a passphrase and a key generation algorithm, usually
involving a cryptographic hash function such as SHA-1.
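Deriving a key from a passphrase can be sketched as follows. The text mentions a plain hash such as SHA-1; modern practice prefers an iterated, salted construction like PBKDF2 so that brute-forcing passphrases is slow. The passphrase and salt values here are purely illustrative:

```python
import hashlib

# Derive 128 bits of key material from a passphrase with PBKDF2-HMAC-SHA1.
# The salt and iteration count are example values, not recommendations.
passphrase = b"correct horse battery staple"
salt = b"example-salt"

key = hashlib.pbkdf2_hmac("sha1", passphrase, salt,
                          iterations=100_000, dklen=16)
print(key.hex())  # 16 bytes = a 128-bit symmetric key
```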
The simplest method to read encrypted data is a brute force attack—simply attempting
every number, up to the maximum length of the key. Therefore, it is important to use a
sufficiently long key length; longer keys take exponentially longer to attack, rendering a brute
force attack impractical. Currently, key lengths of 128 bits (for symmetric key algorithms)
and 1024 bits (for public-key algorithms) are common.
Key Generation Steps
a)The Rotate Word Step
Figure3.3: Rotate Word
b)The Subbytes Step
Figure3.4:The Sub Bytes
In the SubBytes step, each byte in the state is replaced with its entry in a fixed 8-bit
lookup table S: b(i,j) = S(a(i,j)).
In the SubBytes step, each byte in the matrix is updated using an 8-bit substitution box,
the Rijndael S-box. This operation provides the non-linearity in the cipher. The S-box used is
derived from the multiplicative inverse over GF(2^8), known to have good non-linearity
properties. To avoid attacks based on simple algebraic properties, the S-box is constructed by
combining the inverse function with an invertible affine transformation. The S-box is also
chosen to avoid any fixed points (and so is a derangement), and also any opposite fixed
points.
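The construction just described can be computed directly: take the multiplicative inverse in GF(2^8) (modulo x^8 + x^4 + x^3 + x + 1), then apply the fixed affine transformation. This is an illustrative sketch; a real implementation simply reads the precomputed 256-entry table.

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the AES reduction polynomial."""
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B  # x^8 = x^4 + x^3 + x + 1
        b >>= 1
    return product

def gf_inverse(a):
    """Multiplicative inverse as a^254 (Fermat); 0 conventionally maps to 0."""
    result = 1
    for _ in range(254):
        result = gf_mul(result, a)
    return result

def sbox(a):
    x = gf_inverse(a)
    result = 0x63 ^ x  # affine transform: x XORed with four rotations, plus 0x63
    for shift in (1, 2, 3, 4):
        result ^= ((x << shift) | (x >> (8 - shift))) & 0xFF
    return result

assert sbox(0x00) == 0x63  # the 0x63 constant removes the fixed point at zero
assert sbox(0x53) == 0xED  # the worked example from FIPS-197
```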
3.7 RCON
Rcon is what the Rijndael documentation calls the exponentiation of 2 to a user-
specified value. Note that this operation is not performed with regular integers, but in
Rijndael's finite field.
In polynomial form, 2 is x (i.e. 00000010 in binary), and we compute
rcon(i) = x^(i-1)
in the field GF(2)[x]/(x^8 + x^4 + x^3 + x + 1), or equivalently,
rcon(i) = 2^(i-1) mod (x^8 + x^4 + x^3 + x + 1)
in GF(2^8).
Figure3.5: Operation of RCON
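The Rcon values above are just repeated doubling in Rijndael's field, i.e. multiplication by x with reduction by the AES polynomial whenever the result overflows a byte:

```python
def xtime(a):
    """Multiply a byte by x (i.e. by 2) in GF(2^8)."""
    a <<= 1
    if a & 0x100:
        a ^= 0x11B  # reduce modulo x^8 + x^4 + x^3 + x + 1
    return a & 0xFF

def rcon(i):
    """rcon(i) = x^(i-1) in Rijndael's finite field."""
    value = 1  # x^0
    for _ in range(i - 1):
        value = xtime(value)
    return value

print([hex(rcon(i)) for i in range(1, 11)])
# ['0x1', '0x2', '0x4', '0x8', '0x10', '0x20', '0x40', '0x80', '0x1b', '0x36']
```

Note how the sequence leaves the ordinary powers of two at rcon(9): 0x80 doubled overflows the byte and is reduced to 0x1B.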
3.8 S-BOX
In cryptography, an S-Box (Substitution-box) is a basic component of symmetric key
algorithms which performs substitution. In block ciphers, they are typically used to obscure
the relationship between the key and the ciphertext — Shannon's property of confusion. In
many cases, the S-Boxes are carefully chosen to resist cryptanalysis.
In general, an S-Box takes some number of input bits, m, and transforms them into
some number of output bits, n: an m×n S-Box can be implemented as a lookup table with 2^m
words of n bits each. Fixed tables are normally used, as in the Data Encryption Standard
(DES), but in some ciphers the tables are generated dynamically from the key; e.g. the
Blowfish and the Twofish encryption algorithms. Bruce Schneier describes IDEA's modular
multiplication step as a key-dependent S-Box.
Figure3.6: S-Box functioning
3.9 Lookup Table
In computer science, a lookup table is a data structure, usually an array or associative
array, often used to replace a runtime computation with a simpler array indexing operation.
The savings in terms of processing time can be significant, since retrieving a value from
memory is often faster than undergoing an 'expensive' computation or input/output
operation. The tables may be precalculated and stored in static program storage or
calculated (or "pre-fetched") as part of a program's initialization phase (memoization).
Lookup tables are
also used extensively to validate input values by matching against a list of valid (or invalid)
items in an array and, in some programming languages, may include pointer functions (or
offsets to labels) to process the matching input.
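The idea of replacing a runtime computation with an indexing operation can be sketched with a classic example: precompute sine at one-degree resolution once, then index the array instead of calling the function each time.

```python
import math

# Precomputed once at startup; precision is limited by the table resolution.
SINE_TABLE = [math.sin(math.radians(d)) for d in range(360)]

def fast_sin(degrees):
    """Sine of an integer number of degrees via table lookup."""
    return SINE_TABLE[degrees % 360]

assert abs(fast_sin(30) - 0.5) < 1e-12
assert abs(fast_sin(390) - 0.5) < 1e-12  # wraps around like the real function
```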
History
Part of a 20th century table of common logarithms in the reference book Abramowitz
and Stegun.
Before the advent of computers, lookup tables of values were used by people to speed
up hand calculations of complex functions, such as in trigonometry, logarithms, and
statistical density functions.
In ancient India, Aryabhata created one of the first sine tables, which he encoded in a
Sanskrit-letter-based number system. In 493 A.D., Victorius of Aquitaine wrote a 98-column
multiplication table which gave (in Roman numerals) the product of every number from 2 to
50 times and the rows were "a list of numbers starting with one thousand, descending by
hundreds to one hundred, then descending by tens to ten, then by ones to one, and then the
fractions down to 1/144". Modern school children are often taught to memorize "times
tables" to avoid calculations of the most commonly used numbers (up to 9 x 9 or 12 x 12).
Early in the history of computers, input/output operations were particularly slow -
even in comparison to processor speeds of the time. It made sense to reduce expensive read
operations by a form of manual caching: creating either static lookup tables (embedded in
the program) or dynamic pre-fetched arrays to contain only the most commonly occurring
data items. Despite the introduction of system-wide caching that now automates this process,
application-level lookup tables can still improve performance for data items that rarely, if
ever, change.
3.10 Encryption
Encryption is the conversion of data into a form, called a cipher text, that cannot be
easily understood by unauthorized people. Decryption is the process of converting encrypted
data back into its original form, so it can be understood.
The use of encryption/decryption is as old as the art of communication. In wartime, a
cipher, often incorrectly called a code, can be employed to keep the enemy from obtaining
the contents of transmissions. (Technically, a code is a means of representing a signal without
the intent of keeping it secret; examples are Morse code and ASCII.) Simple ciphers include
the substitution of letters for numbers, the rotation of letters in the alphabet, and the
"scrambling" of voice signals by inverting the sideband frequencies. More complex ciphers
work according to sophisticated computer algorithms that rearranges the data bits in digital
signals.
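The "rotation of letters in the alphabet" mentioned above is the classic Caesar shift. It is trivially broken and shown only to illustrate what a simple cipher looks like:

```python
def caesar(text, shift):
    """Rotate each letter by `shift` places; non-letters pass through."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

ct = caesar("Attack at dawn", 3)
print(ct)                              # "Dwwdfn dw gdzq"
assert caesar(ct, -3) == "Attack at dawn"  # shifting back decrypts
```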
In order to easily recover the contents of an encrypted signal, the correct decryption
key is required. The key is the piece of information that allows the decryption algorithm to
undo the work of the encryption algorithm.
Alternatively, a computer can be used in an attempt to break the cipher. The more complex
the encryption algorithm, the more difficult it becomes to eavesdrop on the communications
without access to the key.
Encryption/decryption is especially important in wireless communications. This is
because wireless circuits are easier to tap than their hard-wired counterparts. Nevertheless,
encryption/decryption is a good idea when carrying out any kind of sensitive transaction,
such as a credit-card purchase online, or the discussion of a company secret between different
departments in the organization. The stronger the cipher -- that is, the harder it is for
unauthorized people to break it -- the better, in general. However, as the strength of
encryption/decryption increases, so does the cost.
In recent years, a controversy has arisen over so-called strong encryption. This refers
to ciphers that are essentially unbreakable without the decryption keys. While most
companies and their customers view it as a means of keeping secrets and minimizing fraud,
some governments view strong encryption as a potential vehicle by which terrorists might
evade authorities. These governments, including that of the United States, want to set up a
key-escrow arrangement. This means everyone who uses a cipher would be required to
provide the government with a copy of the key. Decryption keys would be stored in a
supposedly secure place, used only by authorities, and used only if backed up by a court
order. Opponents of this scheme argue that criminals could hack into the key-escrow database
and illegally obtain, steal, or alter the keys. Supporters claim that while this is a possibility,
implementing the key escrow scheme would be better than doing nothing to prevent criminals
from freely using encryption/decryption.
i)The Shiftrows Step
Figure3.7: The Shiftrows Step
In the ShiftRows step, bytes in each row of the state are shifted cyclically to the left.
The number of places each byte is shifted differs for each row.
The ShiftRows step operates on the rows of the state; it cyclically shifts the bytes in
each row by a certain offset. For AES, the first row is left unchanged. Each byte of the
second row is shifted one to the left. Similarly, the third and fourth rows are shifted by offsets
of two and three respectively. For blocks of sizes 128 bits and 192 bits, the shifting pattern is
the same. In this way, each column of the output state of the ShiftRows step is composed of
bytes from each column of the input state. (Rijndael variants with a larger block size have
slightly different offsets). For a 256-bit block, the first row is unchanged and the shifting for
the second, third and fourth row is 1 byte, 3 bytes and 4 bytes respectively—this change only
applies for the Rijndael cipher when used with a 256-bit block, as AES does not use 256-bit
blocks.
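The AES ShiftRows offsets described above can be sketched in a few lines. For clarity this sketch stores the state as four rows of four bytes (the AES specification fills the state column by column, so a real implementation must account for that ordering):

```python
def shift_rows(state):
    """AES ShiftRows for a 128-bit block: row r is rotated left by r bytes."""
    return [row[r:] + row[:r] for r, row in enumerate(state)]

state = [[0, 1, 2, 3],
         [4, 5, 6, 7],
         [8, 9, 10, 11],
         [12, 13, 14, 15]]
assert shift_rows(state) == [[0, 1, 2, 3],       # row 0 unchanged
                             [5, 6, 7, 4],       # rotated by 1
                             [10, 11, 8, 9],     # rotated by 2
                             [15, 12, 13, 14]]   # rotated by 3
```

As the text notes, each output column now draws one byte from every input column.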
ii)The Mix columns Step
Figure3.8: Mix Columns
In the MixColumns step, each column of the state is multiplied with a fixed
polynomial c(x).
In the MixColumns step, the four bytes of each column of the state are combined
using an invertible linear transformation. The MixColumns function takes four bytes as input
and outputs four bytes, where each input byte affects all four output bytes. Together with
ShiftRows, MixColumns provides diffusion in the cipher.
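The invertible linear transformation above can be sketched for one column: each output byte is a fixed GF(2^8)-linear combination of the four input bytes, with coefficients 02, 03, 01, 01 rotated through the rows.

```python
def xtime(a):
    """Multiply a byte by 2 in GF(2^8), reducing by the AES polynomial."""
    a <<= 1
    return (a ^ 0x11B) & 0xFF if a & 0x100 else a

def mix_column(col):
    """AES MixColumns applied to a single 4-byte column."""
    a0, a1, a2, a3 = col
    def mul3(x):            # 3*x = 2*x XOR x in GF(2^8)
        return xtime(x) ^ x
    return [
        xtime(a0) ^ mul3(a1) ^ a2 ^ a3,
        a0 ^ xtime(a1) ^ mul3(a2) ^ a3,
        a0 ^ a1 ^ xtime(a2) ^ mul3(a3),
        mul3(a0) ^ a1 ^ a2 ^ xtime(a3),
    ]

# Well-known test vector from the AES literature:
assert mix_column([0xDB, 0x13, 0x53, 0x45]) == [0x8E, 0x4D, 0xA1, 0xBC]
```

Note that every input byte influences every output byte, which is exactly the diffusion property the text attributes to this step.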
iii)The Add roundkey Step
Figure3.9: The Add RoundKey Step
In the AddRoundKey step, each byte of the state is combined with a byte of the round
subkey using the XOR operation (⊕).
In the AddRoundKey step, the subkey is combined with the state. For each round, a
subkey is derived from the main key using Rijndael's key schedule; each subkey is the same
size as the state. The subkey is added by combining each byte of the state with the
corresponding byte of the subkey using bitwise XOR.
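The AddRoundKey step is a plain bytewise XOR of the 16-byte state with the 16-byte subkey; since XOR is its own inverse, applying the same subkey twice restores the state:

```python
def add_round_key(state, subkey):
    """XOR each byte of the state with the corresponding subkey byte."""
    return bytes(s ^ k for s, k in zip(state, subkey))

state = bytes(range(16))
assert add_round_key(state, bytes(16)) == state  # all-zero key changes nothing
assert add_round_key(add_round_key(state, b"\x55" * 16), b"\x55" * 16) == state
```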
Figure3.10: Encryption
3.11 Read-Only Memory
Read-only memory (ROM) is a class of storage medium used in computers and other
electronic devices. Data stored in ROM cannot be modified, or can be modified only slowly
or with difficulty, so it is mainly used to distribute firmware (software that is very closely tied
to specific hardware, and unlikely to need frequent updates).
In its strictest sense, ROM refers only to mask ROM (the oldest type of solid state
ROM), which is fabricated with the desired data permanently stored in it, and thus can never
be modified. Despite the simplicity, speed and economies of scale of mask ROM, field-
programmability often make reprogrammable memories more flexible and inexpensive. As of
2007, actual ROM circuitry is therefore mainly used for applications such as microcode, and
similar structures, on various kinds of digital processors (i.e. not only CPUs).
Other types of non-volatile memory such as erasable programmable read only
memory (EPROM) and electrically erasable programmable read-only memory (EEPROM or
Flash ROM) are sometimes referred to, in an abbreviated way, as "read-only memory"
(ROM), but this is actually a misnomer because these types of memory can be erased and re-
programmed multiple times. When used in this less precise way, "ROM" indicates a non-
volatile memory which serves functions typically provided by mask ROM, such as storage of
program code and nonvolatile data.
Use for storing programs:
Every stored-program computer needs some form of non-volatile storage (that is,
storage that retains its data when power is removed) to store the initial program that runs
when the computer is powered on or otherwise begins execution (a process known as
bootstrapping, often abbreviated to "booting" or "booting up"). Likewise, every non-trivial
computer needs some form of mutable memory to record changes in its state as it executes.
Forms of read-only memory were employed as non-volatile storage for programs in
most early stored-program computers, such as ENIAC after 1948 (until then it was not a
stored-program computer as every program had to be manually wired into the machine,
which could take days to weeks). Read-only memory was simpler to implement since it
needed only a mechanism to read stored values, and not to change them in-place, and thus
could be implemented with very crude electromechanical devices (see historical examples
below). With the advent of integrated circuits in the 1960s, both ROM and its mutable
counterpart static RAM were implemented as arrays of transistors in silicon chips; however, a
ROM memory cell could be implemented using fewer transistors than an SRAM memory
cell, since the latter needs a latch (comprising 5-20 transistors) to retain its contents, while a
ROM cell might consist of the absence (logical 0) or presence (logical 1) of one transistor
connecting a bit line to a word line. Consequently, ROM could be implemented at a lower
cost-per-bit than RAM for many years.
Most home computers of the 1980s stored a BASIC interpreter or operating system in
ROM as other forms of non-volatile storage such as magnetic disk drives were too costly. For
example, the Commodore 64 included 64 KB of RAM and 20 KB of ROM containing a
BASIC interpreter and the "KERNAL" of its operating system. Later home or office
computers such as the IBM PC XT often included magnetic disk drives, and larger amounts of
RAM, allowing them to load their operating systems from disk into RAM, with only a
minimal hardware initialization core and boot loader remaining in ROM (known as the BIOS
in IBM-compatible computers). This arrangement allowed for a more complex and easily
upgradeable operating system.
In modern PCs, "ROM" (or flash) is used to store the basic bootstrapping firmware
for the main processor, as well as the various firmware needed to internally control self
contained devices such as graphic cards, hard disks, DVD drives, TFT screens, etc., in the
system. Today, many of these "read-only" memories – especially the BIOS – are often
replaced with Flash memory (see below), to permit in-place reprogramming should the need
for a firmware upgrade arise. However, simple and mature sub-systems (such as the keyboard
or some communication controllers in the ICs on the main board, for example) may employ
mask ROM or OTP (one time programmable).
ROM and successor technologies such as flash are prevalent in embedded systems.
These are in everything from industrial robots to home appliances and consumer electronics
(MP3 players, set-top boxes, etc.) all of which are designed for specific functions, but are
based on general-purpose microprocessors. With software usually tightly coupled to
hardware, program changes are rarely needed in such devices (which typically lack hard disks
for reasons of cost, size, or power consumption). As of 2008, most products use Flash rather
than mask ROM, and many provide some means for connecting to a PC for firmware
updates; for example, a digital audio player might be updated to support a new file format.
Some hobbyists have taken advantage of this flexibility to reprogram consumer products for
new purposes; for example, the iPodLinux and OpenWrt projects have enabled users to run
full-featured Linux distributions on their MP3 players and wireless routers, respectively.
ROM is also useful for binary storage of cryptographic data, as it makes them difficult
to replace, which may be desirable in order to enhance information security.
Use for Storing Data
Since ROM (at least in hard-wired mask form) cannot be modified, it is really only
suitable for storing data which is not expected to need modification for the life of the device.
To that end, ROM has been used in many computers to store look-up tables for the evaluation
of mathematical and logical functions (for example, a floating-point unit might tabulate the
sine function in order to facilitate faster computation). This was especially effective when
CPUs were slow and ROM was cheap compared to RAM.
Notably, the display adapters of early personal computers stored tables of bitmapped
font characters in ROM. This usually meant that the text display font could not be changed
interactively. This was the case for both the CGA and MDA adapters available with the IBM
PC XT.
The use of ROM to store such small amounts of data has disappeared almost
completely in modern general-purpose computers. However, Flash ROM has taken over a
new role as a medium for mass storage or secondary storage of files.
Endurance and Data Retention
Because they are written by forcing electrons through a layer of electrical insulation
onto a floating transistor gate, rewriteable ROMs can withstand only a limited number of
write and erase cycles before the insulation is permanently damaged. In the earliest
EAROMs, this might occur after as few as 1,000 write cycles, while in modern Flash
EEPROM the endurance may exceed 1,000,000, but it is by no means infinite. This limited
endurance, as well as the higher cost per bit, means that Flash-based storage is unlikely to
completely supplant magnetic disk drives in the near future.
The time span over which a ROM remains accurately readable is not limited by write
cycling. The data retention of EPROM, EAROM, EEPROM, and Flash may be limited by
charge leaking from the floating gates of the memory cell transistors. Leakage is accelerated
by high temperatures or radiation. Masked ROM and fuse/anti fuse PROM do not suffer from
this effect, as their data retention depends on physical rather than electrical permanence of the
integrated circuit (although fuse re-growth was once a problem in some systems).
3.12 Register
In digital electronics, especially computing, a register stores bits of information, in a
way that all the bits can be written to or read out simultaneously. The hardware registers
inside a central processing unit (CPU) are called processor registers. Signals from a state
machine to the register control when registers transmit to or accept information from other
registers. Sometimes the state machine routes information from one register through a
functional transform, such as an adder unit, and then to another register that stores the results.
Typical uses of hardware registers include configuration and start-up of certain
features, especially during initialization, buffer storage e.g. video memory for graphics cards,
input/output (I/O) of different kinds, and status reporting such as whether a certain event has
occurred in the hardware unit.
Reading a hardware register in "peripheral units" -- computer hardware outside the
CPU—involves accessing its memory-mapped I/O address or port-mapped I/O address with a
"load" or "store" instruction, issued by the processor. Hardware registers are addressed in
words, but sometimes only use a few bits of the word read in to, or written out to the register.
Strobe registers have the same interface as normal hardware registers, but instead of
storing data, they trigger an action each time they are written to (or, in rare cases, read from).
They are a means of signaling.
Registers are normally measured by the number of bits they can hold, for example, an
"8-bit register" or a "32-bit register". Registers can be implemented in a wide variety of ways,
including register files, standard SRAM, individual flip-flops, or high speed core memory.
In addition to the "programmer-visible" registers that can be read and written with
software, many chips have internal micro architectural registers that are used for state
machines and pipelining; for example, registered memory.
Commercial design tools such as Socrates Bitwise by Duolog Technologies, simplify
and automate memory-mapped register specification and code generation for hardware,
firmware, hardware verification, testing and documentation.
Using IP-XACT IEEE 1685, commercial design tools, such as Socrates Bitwise by
Duolog Technologies and MRV Magillem Register View by MAGILLEM, provide a real
synchronization between the register description and the RTL hardware platform description,
then collaborative work in the design flow can be addressed.
This hardware registers alignment becomes critical when multiple levels of
abstraction are used when switching from TLM to RTL during IP integration.
SPIRIT IP-XACT and DITA SIDSC XML define standard XML formats for memory-
mapped registers.
Because write-only registers make debugging almost impossible, lead to the read-
modify-write problem, and also make it unnecessarily difficult for the Advanced
Configuration and Power Interface (ACPI) to determine the device's state when entering
sleep mode in order to restore that state when exiting sleep mode, many programmers tell
hardware designers to make sure that all writable registers are also readable. However, there
are some cases when reading certain types of hardware registers is useless. For example, a
strobe register bit that generates a one cycle pulse into specialized hardware will always read
logic 0.
CHAPTER 4
INPUT/OUTPUT PORTS DESCRIPTION
4.1 ROM 256X8:
ROM is a memory circuit which stores information in the form of bits.
In AES, the ROM stores the look up table values.
Block Diagram
Figure4.1: Block Diagram of ROM
I/O Ports Description:
S.No Port Name Mode Size Port Description
1 SubByteAddr In [7:0] Gives the address of the memory location
2 Rst In 1 bit When asserted Entity is initialised to default values
3 SubByteEna In 1 bit Enables sub byte generation
4 SubByte Out [7:0] Sub bytes are generated from the data taken from the sub byte address
Table 1:I/O ports description of ROM256X8
4.2 Key register
Block diagram:
Figure4.2: Block Diagram of Key Register
I/O Ports Description:
S.No Port Name Mode Size Port Description
1 KeyI In [127:0] 128 bit input key
2 Clk In 1 bit Entity function synchronized to the rising edges of the write and read clocks
3 Ena In 1 bit Device is enabled
4 Rst In 1 bit When asserted Entity is initialised to default values
5 KeyO Out [127:0] 128 bit output key
Table 2: I/O Ports Description of Key Register
4.3 Register 128
Block diagram:
Figure4.3: Block Diagram of Register 128
I/O Description:
S.No Port Name Mode Size Port Description
1 Keyl Input [127:0] Input key signal
2 Clk Input 1 bit On board synchronous clock
3 Regld Input 1 bit Input register load signal
4 Rst Input 1 bit Onboard reset signal
5 Keyo Output [127:0] Output key signal
Table 3: I/O Ports Description of Key Register128
4.4 AES Key generation
I/O Block Diagram:
Figure4.4: Block Diagram of AES Key Generation
I/O Ports Description:
S.No Port Name Mode Size Port Description
1 KeyIn In [127:0] The initial key is given as input
2 Clk In 1 bit Entity function synchronized to the rising edge of the Clk
3 Rst In 1 bit When asserted Entity is initialised to default values
4 KeyGenEna In 1 bit When high, it enables key generation
5 KeyLd In 1 bit Key is loaded into the register
6 Key0 Out [127:0] 128 bit output key
7 Key1 Out [127:0] 128 bit output key
8 Key2 Out [127:0] 128 bit output key
9 Key3 Out [127:0] 128 bit output key
10 Key4 Out [127:0] 128 bit output key
11 Key5 Out [127:0] 128 bit output key
12 Key6 Out [127:0] 128 bit output key
13 Key7 Out [127:0] 128 bit output key
14 Key8 Out [127:0] 128 bit output key
15 Key9 Out [127:0] 128 bit output key
16 Key10 Out [127:0] 128 bit output key
17 KeyGenOvr Out 1 bit Indicates key generation is completed
Table 4: I/O Ports Description of AES Key Generation
4.5 AES Encryption
Block diagram:
Figure4.5: Block Diagram of Encryption Module
I/O Description:
S.No Port Name Mode Size Port Description
1 Datal Input [127:0] Input data signal
2 Keyl Input [127:0] Input key signal
3 Rkeyl Input [127:0] Input rkey signal
4 Clk Input 1 bit On board clock signal
5 Dkld Input 1 bit Input load signal
6 EncEna Input 1 bit Encryption enable signal
7 Rst Input 1 bit Asynchronous reset signal
8 Encdata Output [127:0] Encoded data output
9 Encovr Output 1 bit Output encoder over signal
10 keygenovr Output 1 bit Key generator over signal
Table 5: I/O Ports Description of AES Encryption
CHAPTER 5
SIMULATION RESULTS
TOP LEVEL ENVIRONMENT & SIMULATION RESULTS
This chapter explains the top level overview of the environment and the simulation
results. The following sequence of operations is followed to verify the design:
VHDL code is written for every module in the key expander and encryption, and
simulation of these modules is done in Active HDL.
Test benches are written to give the required input to each block and check the outputs
for their correctness.
The simulation of each module is shown for both the key expander and encryption, and
also for the top module of the key expander and encryption:
5.1 Register 128:
Functional description:
For every rising edge of the clock, if RegLd is 1, the input key (KeyI) is loaded into the
register; otherwise the register contents remain the same.
Figure5.1: Register128 Waveform
5.2 Key Register
Functional description:
Initially the 128 bit key is given as the input and for every rising edge of the clock and
when enable is high the key is given to the register and parallel to it the key comes as the
output.
Figure5.2: Key Register Waveform
5.3 ROM256x8
Functional description:
Here the SubByteAddr input gives the address of the memory location. When reset is
high, SubByteEna enables sub byte generation, and the data at the addressed memory
location is used to generate the sub bytes.
Figure5.3: ROM256X8 Waveform
5.4 AES Key Generation
Functional description:
Initially the 128 bit key is given as the input key. For every rising edge of the clock
with rst = 1, the key is loaded into the register and key generation is enabled; finally, eleven
round keys are generated. After key generation, the KeyGenOvr signal, which was initially
low, becomes high, indicating the completion of key generation.
Figure5.4: Key Generation Waveform
5.5 AES Encryption
Functional description:
If reset becomes 0, then Dreg becomes zero. For every rising edge of the clock, if
enable is high, the data is XORed with the generated keys; otherwise the data in Dreg
does not change.
Figure5.5: AES Encryption Waveform
5.6 AES Decryption Waveform
Functional description:
The process of AES decryption is similar to that of AES encryption, but the
difference lies in the order of operations. In decryption, the operations carried out in
encryption are performed in reverse order.
Figure5.6: AES Decryption Waveform
5.7 DES Key Generation Waveform
Functional description:
Initially the 64 bit key is given as the input key. For every rising edge of the clock
with rst = 1, the key is loaded into the register and key generation is enabled; finally, sixteen
round keys are generated. After key generation, the key generation over signal, which was
initially low, becomes high, indicating the completion of key generation.
Figure 5.7: DES Key Generation Waveform
5.8 DES Encryption Waveform
Functional description:
For every increment of the counter, at the positive edge of CLK a different round key (Key1 through Key16) is used to perform the encryption operation. The encrypted data is stored in the internal register Dreg.
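The counter-driven key selection can be illustrated with a toy Feistel skeleton (Python, not the project's HDL). Here round_f is a placeholder, not the real DES f-function, and the initial/final permutations are omitted; only the round/key-scheduling structure is being shown:

```python
# Toy Feistel skeleton modeling the DES datapath: the counter selects
# Key1..Key16 on successive clock edges. round_f is a stand-in, NOT the
# real DES f-function; IP/FP permutations are omitted.
MASK32 = 0xFFFFFFFF

def round_f(half: int, key: int) -> int:
    return (half * 31 + key) & MASK32  # placeholder round function

def feistel_encrypt(block64: int, round_keys: list) -> int:
    L, R = block64 >> 32, block64 & MASK32
    for k in round_keys:               # counter 1..16 -> Key1..Key16
        L, R = R, L ^ round_f(R, k)
    return (L << 32) | R

def feistel_decrypt(block64: int, round_keys: list) -> int:
    L, R = block64 >> 32, block64 & MASK32
    for k in reversed(round_keys):     # decryption: same keys, reverse order
        R, L = L, R ^ round_f(L, k)
    return (L << 32) | R
```

Running the keys in reverse order inverts the cipher regardless of what round_f computes, which is the property the DES decryption waveform in Section 5.9 depends on.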
Figure 5.8: DES Encryption Waveform
5.9 DES Decryption Waveform
Functional description:
In DES decryption, the operational steps of encryption are carried out in reverse order to recover the original data that was given as input at the time of encryption.
Figure 5.9: DES Decryption Waveform
CHAPTER 6
RTL SYNTHESIS REPORT
6.1 AES Key Generation
Device utilization summary:
Selected Device : 3s50pq208-4
Number of Slices : 3133 out of 768 407% (*)
Number of Slice Flip Flops : 1801 out of 1536 117% (*)
Number of 4 input LUTs : 2061 out of 1536 134% (*)
Number of bonded IOBs : 1540 out of 124 1241% (*)
Number of GCLKs : 1 out of 8 12%
Timing Summary:
---------------
Speed Grade: -4
Minimum period: 5.792ns (Maximum Frequency: 172.652MHz)
Minimum input arrival time before clock: 5.664ns
Maximum output required time after clock: 8.630ns
Maximum combinational path delay: No path found
Total memory usage is 105024 kilobytes
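The maximum frequency the tool reports is simply the reciprocal of the minimum clock period; a quick cross-check of the figures above:

```python
def fmax_mhz(min_period_ns: float) -> float:
    """Maximum clock frequency in MHz from a minimum period in ns."""
    # f(MHz) = 1000 / T(ns), since 1 ns period corresponds to 1 GHz
    return round(1000.0 / min_period_ns, 3)

print(fmax_mhz(5.792))  # 172.652, matching the report above
```

The same relation applies to the AES encryption report in Section 6.2 (9.902 ns giving 100.990 MHz).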
6.2 AES Encryption
Device utilization summary:
---------------------------
Selected Device: v50efg256-7
Number of Slices : 1006 out of 768 130% (*)
Number of Slice Flip Flops : 1256 out of 1536 81%
Number of 4 input LUTs : 1859 out of 1536 121% (*)
Number of IOs : 518
Number of bonded IOBs : 518 out of 176 294% (*)
Number of GCLKs : 1 out of 4 25%
Timing Summary:
---------------
Speed Grade: -7
Minimum period: 9.902ns (Maximum Frequency: 100.990MHz)
Minimum input arrival time before clock: 15.021ns
Maximum output required time after clock: 10.050ns
Maximum combinational path delay: No path found
CHAPTER 7
ADVANTAGES AND DISADVANTAGES
Advantages of AES Encryption Algorithm:
The Advanced Encryption Standard not only assures security but also improves performance in a variety of settings such as smart cards and hardware implementations.
AES is a Federal Information Processing Standard, and there are currently no known practical non-brute-force attacks against AES.
AES is strong enough to be certified for use by the US government for top secret information.
Alternatives to the Advanced Encryption Standard – Protocols such as SSL and TLS have historically offered other ciphers alongside AES, most notably RC4. RC4 uses a 128-bit key and is a fast stream cipher, but it has been subjected to many types of attacks; this is one reason WEP wireless encryption is weak. AES is therefore given priority over these alternatives.
Disadvantages:
It provides security only for fixed-length (128-bit) data blocks.
Advantages of DES:
Use of S-boxes: the design provides confusion and diffusion of bits from each round to the next.
Use of P-boxes: they provide diffusion of bits.
Number of rounds: DES uses sixteen rounds of a Feistel cipher, so the ciphertext is effectively a random function of the plaintext and the key.
Disadvantages of DES:
DES performs a lot of bit manipulation in the substitution and permutation boxes in each of its 16 rounds; for example, swapping bit 30 with bit 16 is much simpler in hardware than in software. DES encrypts data in 64-bit blocks and effectively uses a 56-bit key. A 56-bit key space amounts to approximately 72 quadrillion possibilities. Although this seems large, with today's computing power it is insufficient and vulnerable to brute-force attack. DES therefore could not keep up with advances in technology and is no longer appropriate for security.
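The "72 quadrillion" figure is easy to verify, and a rough (purely illustrative) search-time estimate shows why it is no longer enough:

```python
keyspace = 2 ** 56
print(f"{keyspace:,}")  # 72,057,594,037,927,936 - about 72 quadrillion

# Illustrative only: assuming a tester that tries 10**9 keys per second,
# exhausting the whole 56-bit space takes roughly 834 days.
days_worst_case = keyspace / 10**9 / 86400
print(round(days_worst_case))
```

Specialized hardware and distributed searches run far faster than this assumed rate, which is what made practical DES key recovery feasible.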
Because DES was so widely used, the quick solution was to introduce 3DES, which is secure enough for most purposes today. 3DES applies DES three times in sequence. With three different keys (K1, K2, and K3), 3DES has an effective key length of 168 bits; the use of three distinct keys is recommended. Another variation, two-key 3DES (where K1 and K3 are the same), reduces the effective key size to 112 bits and is less secure; it is widely used in the electronic payments industry. 3DES requires roughly three times as much CPU power as its predecessor, which is a significant performance hit. AES outperforms 3DES in both software and hardware.
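The key-chaining structure of 3DES is the encrypt-decrypt-encrypt (EDE) construction. A toy model (the stand-in XOR "cipher" below is NOT real DES; only the chaining is being illustrated):

```python
# Toy model of the 3DES EDE construction with a stand-in XOR cipher.
def toy_enc(block: int, key: int) -> int:
    return block ^ key          # stand-in for DES encryption, NOT real DES

def toy_dec(block: int, key: int) -> int:
    return block ^ key          # the XOR stand-in is its own inverse

def tdes_encrypt(p, k1, k2, k3):
    return toy_enc(toy_dec(toy_enc(p, k1), k2), k3)  # E_K3(D_K2(E_K1(p)))

def tdes_decrypt(c, k1, k2, k3):
    return toy_dec(toy_enc(toy_dec(c, k3), k2), k1)  # D_K1(E_K2(D_K3(c)))
```

Setting K1 = K2 = K3 collapses EDE to a single encryption under one key, which is why the EDE form (rather than EEE) was chosen: it lets 3DES hardware interoperate with single-DES peers.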
The Rijndael algorithm was selected as the Advanced Encryption Standard (AES) to replace 3DES; AES is a restricted version of the Rijndael algorithm. Rijndael was submitted by Joan Daemen and Vincent Rijmen. Taken together, Rijndael's combination of security, performance, efficiency, implementability, and flexibility made it an appropriate selection for the AES.
By design, AES is fast in software and works efficiently in hardware. It runs well even on small devices such as smart phones and smart cards. AES provides strong security owing to its large block size and long keys: it uses a fixed 128-bit block size and works with 128-, 192-, and 256-bit keys. The Rijndael algorithm in general is flexible enough to work with key and block sizes of any multiple of 32 bits, with a minimum of 128 bits and a maximum of 256 bits.
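The relationship between Rijndael's parameter space and the AES restriction can be spelled out in a couple of lines:

```python
# Rijndael allows any block/key size that is a multiple of 32 bits,
# from 128 up to 256 bits inclusive.
rijndael_sizes = list(range(128, 257, 32))  # [128, 160, 192, 224, 256]

# AES restricts this: the block size is fixed and only three key sizes remain.
aes_block_size = 128
aes_key_sizes = [128, 192, 256]

print(rijndael_sizes)
```

Every AES parameter is thus a valid Rijndael parameter, but not vice versa (160- and 224-bit options exist only in Rijndael proper).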
CHAPTER 8
CONCLUSIONS AND FUTURE SCOPE
Conclusion:
In this work, we have studied the AES and DES encryption and decryption mechanisms and have highlighted some of the important mathematical properties as well as the security issues of both algorithms. Since AES provides better security and lower implementation complexity than DES, it has emerged as one of the strongest and most efficient algorithms in existence today. Hence, the optimal solution is a hybrid encryption system in which AES is typically used to encrypt and decrypt large data blocks.
Future Scope:
Future work can address the fixed length of the secret key, which is considered a critical issue of AES, as it is for other symmetric encryption algorithms.
APPENDIX –A
INTRODUCTION TO HDL TOOLS
SIMULATION TOOL
Active HDL Overview:
Active-HDL is an integrated environment designed for the development of VHDL, Verilog, EDIF, and mixed VHDL-Verilog-EDIF designs. It comprises three different design entry tools, a VHDL'93 compiler, a Verilog compiler, a single simulation kernel, several debugging tools, graphical and textual simulation output viewers, and auxiliary utilities designed for easy management of resource files, designs, and libraries.
Standards Supported
VHDL:
The VHDL simulator implemented in Active-HDL supports the IEEE Std. 1076-1993
standard.
Verilog:
The Verilog simulator implemented in Active-HDL supports the IEEE Std. 1364-
1995 standard. Both PLI (Programming Language Interface) and VCD (Value Change
Dump) are also supported in Active-HDL.
EDIF:
Active-HDL supports the Electronic Design Interchange Format (EDIF) version 2 0 0.
Waves:
Active-HDL supports automatic generation of test benches compliant with the
WAVES standard. The basis for this implementation is a draft version of the standard dated
to May 1997 (IEEE P1029.1/D1.0 May 1997). The WAVES standard (Waveform and Vector
Exchange to Support Design and Test Verification) defines a formal notation that supports
the verification and testing of hardware designs, the communication of hardware design and
test verification data, and the maintenance, modification, and procurement of hardware systems.
ACTIVE-HDL Macro Language:
All operations in Active-HDL can be performed using Active-HDL macro language.
The language has been designed to enable the user to work with Active-HDL without using
the graphical user interface (GUI).
1. HDL Editor:
HDL Editor is a text editor designed for HDL source files. It displays specific syntax
categories in different colors (keyword coloring). The editor is tightly integrated with the
simulator to enable debugging source code. The keyword coloring is also available when
HDL Editor is used for editing macro files, Perl scripts, and Tcl scripts.
2. Block Diagram Editor:
Block Diagram Editor is a graphical tool designed to create block diagrams. The editor
automatically translates graphically designed diagrams into VHDL or Verilog code.
3. State Diagram Editor:
State Diagram Editor is a graphical tool designed to edit state machine diagrams. The
editor automatically translates graphically designed diagrams into VHDL or Verilog code.
4. Waveform Editor:
Waveform Editor displays the results of a simulation run as signal waveforms. It
allows you to graphically edit waveforms so as to create desired test vectors.
5. Design Browser:
The Design Browser window displays the contents of the current design, that is:
a. Resource files attached to the design.
b. The contents of the default-working library of the design.
c. The structure of the design unit selected for simulation.
d. VHDL, Verilog, or EDIF objects declared within a selected region
of the current design.
6. Console window:
The Console window is an interactive input-output text device providing entry for
Active-HDL macro language commands, macros, and scripts. All Active-HDL tools output
their messages to Console.
Compilation:
Compilation is a process of analysis of a source file. Analyzed design units contained
within the file are placed into the working library in a format understandable for the
simulator. In Active-HDL, a source file can be one of the following:
VHDL file (.vhd)
Verilog file (.v)
EDIF net list file (.EDIF)
State diagram file (.asf)
Block diagram file (.bde)
In the case of a block or state diagram file, the compiler analyzes the intermediate
VHDL, Verilog, or EDIF file containing HDL code (or net list) generated from the diagram.
A net list is a set of statements that specifies the elements of a circuit (for example,
transistors or gates) and their interconnection.
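As a minimal illustration of this definition (names are made up for the example), a net list can be modeled as a set of elements with named connections:

```python
# A tiny net-list model: two gates and the nets that connect them.
# Cell and net names here are hypothetical, purely for illustration.
netlist = {
    "g1": {"type": "AND2", "inputs": ["a", "b"], "output": "n1"},
    "g2": {"type": "INV",  "inputs": ["n1"],     "output": "y"},
}

# Interconnection is implied by shared net names: "n1" is driven by g1
# and consumed by g2.
drivers = {cell["output"]: name for name, cell in netlist.items()}
print(drivers["n1"])  # g1
```

A compiler for a block or state diagram emits statements of exactly this kind (in VHDL, Verilog, or EDIF syntax) rather than a Python dictionary.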
Active-HDL provides three compilers, respectively for VHDL, Verilog, and EDIF.
When you choose a menu command or toolbar button for compilation, Active-HDL
automatically employs the compiler appropriate for the type of the source file being
compiled.
Simulation:
The purpose of simulation is to verify that the circuit works as desired.
The Active-HDL simulator provides two simulation engines.
Event-Driven Simulation
Cycle-Based Simulation
The simulator supports hybrid simulation – some portions of a design can be
simulated in the event-driven kernel while the others in the cycle-based kernel. Cycle-based
simulation is significantly faster than event-driven.
Synthesis Tool:
Overview of Xilinx ISE:
Integrated Software Environment (ISE) is the Xilinx design software suite. This
overview explains the general progression of a design through ISE from start to finish.
ISE enables you to start your design with any of a number of different source types,
including:
HDL (VHDL, Verilog HDL, ABEL)
Schematic design files
EDIF
NGC/NGO
State Machines
IP Cores
From your source files, ISE enables you to quickly verify the functionality of these
sources using the integrated simulation capabilities, including ModelSim Xilinx Edition and
the HDL Bencher test bench generator. HDL sources may be synthesized using the Xilinx
Synthesis Technology (XST) as well as partner synthesis engines used standalone or
integrated into ISE. The Xilinx implementation tools continue the process into a placed and
routed FPGA or fitted CPLD, and finally produce a bit stream for your device configuration.
Design Entry:
ISE Text Editor - The ISE Text Editor is provided in ISE for entering design code
and viewing reports.
Schematic Editor - The Engineering Capture System (ECS) is a graphical user
interface (GUI) that allows you to create, view, and edit schematics and symbols for
the Design Entry step of the Xilinx® design flow.
CORE Generator - The CORE Generator System is a design tool that delivers
parameterized cores optimized for Xilinx FPGAs ranging in complexity from simple
arithmetic operators such as adders, to system-level building blocks such as filters,
transforms, FIFOs, and memories.
Constraints Editor - The Constraints Editor allows you to create and modify the
most commonly used timing constraints.
PACE - The Pin out and Area Constraints Editor (PACE) allows you to view and edit
I/O, Global logic, and Area Group constraints.
State CAD State Machine Editor - State CAD allows you to specify states,
transitions, and actions in a graphical editor. The state machine will be created in
HDL.
Implementation:
Translate - The Translate process runs NGDBuild to merge all of the input net lists
as well as design constraint information into a Xilinx database file.
Map - The Map program maps a logical design to a Xilinx FPGA.
Place and Route (PAR) - The PAR program accepts the mapped design, places and
routes the FPGA, and produces output for the bit stream generator.
Floor planner - The Floor planner allows you to view a graphical representation of
the FPGA, and to view and modify the placed design.
FPGA Editor - The FPGA Editor allows you to view and modify the physical
implementation, including routing.
Timing Analyzer - The Timing Analyzer provides a way to perform static timing
analysis on FPGA and CPLD designs. With Timing Analyzer, analysis can be
performed immediately after mapping, placing or routing an FPGA design, and after
fitting and routing a CPLD design.
Fit (CPLD only) - The CPLDFit process maps one or more net lists into specified devices and
creates the JEDEC programming file.
Chip Viewer (CPLD only) - The Chip Viewer tool provides a graphical view of the
inputs and outputs, macro cell details, equations, and pin assignments.
Device Download and Program File Formatting:
BitGen - The BitGen program receives the placed and routed design and produces a
bit stream for Xilinx device configuration.
iMPACT - The iMPACT tool generates various programming file formats, and
subsequently allows you to configure your device.
XPower - XPower enables you to interactively and automatically analyze power
consumption for Xilinx FPGA and CPLD devices.
ChipScope Pro - ISE also provides integration with ChipScope Pro for on-chip verification and debugging.