1. INTRODUCTION
The term DDoS surfaces whenever cyber-activism rears its head en masse.
These kinds of attacks make international headlines for several reasons.
The issues that trigger these DDoS attacks are often controversial or highly
political, and because a large number of regular users are affected by the attacks,
they resonate with the public.
Perhaps most importantly, many people do not know what constitutes a DDoS
attack. Judging by newspaper headlines, despite their rising frequency, DDoS attacks
can appear to be anything from digital vandalism to fully fledged cyber-terrorism.
So what does a DDoS, or Distributed Denial of Service, attack entail? How does it
work, and how does it affect the intended target and its users? These are important
questions, and they are what we will focus on in this introduction.
Denial Of Service
Before we tackle the issue of DDoS, or Distributed Denial of Service attacks, let’s
look at the larger group of Denial of Service (DoS) issues.
Denial of Service is a broad issue. Simply put, a website experiences DoS issues
when it is no longer able to service its regular users. When too many people flock
to Twitter, the Fail Whale pops up, indicating that the website has reached and
passed maximum capacity. In essence, Twitter experiences DoS.
Most of the time, these issues arise without malicious intent, for example when a
large website links to a small website that is not built to handle the same level of traffic.
1.3 ORGANIZATION PROFILE
COMPANY PROFILE
Every career-aspiring student studies in order to get a job. ELMAQ focuses on supporting
students to complete their degrees and secure employment. ELMAQ's academic support solutions
enhance employability and bridge the ever-widening academics-industry divide.
ELMAQ launched EdCademy, an academic support programme for Engineering and
Management students, to provide the support a student needs to complete a degree
and get a job. Under this programme, ELMAQ applies Promptive Metrics based Learning
(PML) methods to improve students' performance in their semester exams so that their
placement prospects are better by the time they reach final-year campus programmes. At the
outset, EdCademy offers three engines of support: the Learning Engine (LE), the Assessment
Engine (AE), and the Placement Engine (PE).
EdCademy assures a better performance rating for students in their core degree subjects and
assessment essentials. This is achieved through continuous assessments that the concerned HoD
can mandate for every student, backed by unlimited access to EdCademy's Learning Engine (LE).
Further, before the semester exam, a student can attempt many model papers through EdCademy's
Assessment Engine (AE) for any subject the student finds difficult to pass or score well in.
Final-year students can also access unlimited job positions through EdCademy's Placement
Engine (PE), through which a virtual e-campus interview can be organized that could land the
student a job offer well before he or she completes the course.
EdCademy is planned for universities in other states as well. It is supported by EdClass,
a product that lets a student meet a tutor in a virtual class network (VCN) to clarify doubts
and receive quick lectures on important and difficult topics. EdClass offers both live and
pre-recorded sessions.
EdCademy with EdClass is intended to simplify the process of passing exams and earning a
degree with better grades. Students on ELMAQ's EdCademy can look for the right placements,
with client companies already receiving their information and screening them on performance
footprints provided seamlessly over the web by the LAMPS system.
About ELMAQ
ELMAQ is the world's first fourth-generation education company that uses technology to
efficiently synchronize manpower demand and supply, in both numbers and skills, from
development to deployment. The size of India's addressable market, favorable demography,
growing acceptance of technology-aided learning, and ELMAQ's innovative Humanware Education
And Deployment System (HEADS), coupled with its trademarked content, enable rapid growth.
Its business model is scalable nationally and internationally.
ELMAQ Softsystems Limited was incorporated in March 1998 as ELMAQ Softsystems P Ltd., with
the objective of providing niche training solutions, customised software solutions, and
placement consulting on both contract and permanent bases to corporate and institutional
clients. From 2002 to 2004 the company successfully provided staffing solutions to premier
clients including TI Group, TVS Group, Office Tiger, KSB Pumps, Infosys Technologies, HCL
Technologies, Shasun Chemicals, Hexaware Technologies, Ashok Leyland, and Seshasayee Paper
& Boards, among others. In 2007, ELMAQ acquired ELMAQ EDUCATION, a leading IT training brand
in the South, to move aggressively into the education and training space. ELMAQ employs
e-learning and has patented education models that seamlessly manage a student's complete
learning life-cycle, from development to deployment, in a web-controlled framework. ELMAQ
holds copyrighted e-content covering over a dozen domains and verticals.
ELMAQ is promoted by Mr. S. Giridharan, a senior technocrat of the Indian IT space and a
pioneer of job-oriented education in the 1990s. He has over two decades of experience in
education, having been instrumental in the success of the SSI and Radiant education brands.
Giridharan later led the ELMAQ education brand, subsequently sold it to ELMAQ, and merged
all the businesses under ELMAQ.
ELMAQ provides job-linked, next-generation education through its Integrated Learning Model
(ILM). ELMAQ EdCenters, EdCampuses, and EdCademys are partner-led and in-campus online
contact centers spread across India and South Asia, providing education and placement
through the ILM.
Offerings
SCHOOLS
PRESCHOOL: A powerful and engaging channel of 3D animation videos
specially designed for your child. Interactive learning is just a click away. The
program is designed for your child keeping in mind the various levels of aspiration,
from basic to advanced. Come and explore this unique interactive education
platform with your child today. With illustrations in the form of stories, games,
quizzes and more, your child will find it engaging and you will find it useful. Enjoy,
explore and measure your child's development with interesting and captivating
graphics like never before.
PRIMARY: A powerful and engaging channel of 3D animation videos specially
designed for your child. Interactive learning is just a click away. The program
is designed for your child keeping in mind the various levels of aspiration, from
basic to advanced. Come and explore this unique interactive education platform
with your child today. With illustrations in the form of stories, games, quizzes and
more, your child will find it engaging and you will find it useful. Enjoy, explore and
measure your child's development with interesting and captivating graphics like
never before.
MIDDLE: The best way to build a nation is at the foundation. The SMART C.A.P
program - Learning Concepts, Application and Problem solving - lays a solid
foundation for your preparation for Olympiads, NTSE and similar exams.
SECONDARY: ELMAQ is fully cognizant of the problems you face in finding an
expert tutor: wasting productive time in travel, paying high fees, and still
scoring fewer marks in your relentless pursuit of perfect tuition support for
your subjects. Our solution is to connect you to expert tutors live, from the
comfort of your home, at a fraction of the tuition fee that you are paying or
will be paying.
HIGHER SECONDARY:
TEST PREP
IITJEE: Our unique program provides the following contents to prepare you
adequately for the rigors of the IITJEE exam:
i. 81 Exhaustive chapters covering all topics prescribed for IITJEE.
ii. 2500+ videos containing lectures by senior professors and top IITians
covering everything from the very basic fundamentals and tips, tricks and
shortcut methods to help you solve advanced level IIT JEE Problems.
iii. Exposure to about 10000+ unique problems spread over the IITJEE
syllabus
iv. tate-of-the-Art Assessment Engine with an extensive online database of
thousands of questions exactly on the level of IITJEE examination to build
your speed and accuracy thereby ensuring nothing is left to chance
v. 20 hours of access to IITJEE learning module and Assessment Engine.
AIEEE: Our unique program provides the following contents to prepare you
adequately for the rigors of the AIEEE exam:
i. 60 chapters spanning all topics in AIEEE across physics, chemistry and
mathematics
ii. 1400+ videos containing lectures by senior professors covering everything
from the very basic fundamentals and tips, tricks and shortcut methods to
help you solve advanced level AIEEE Problems.
iii. Exposure to about 5000+ unique problems spread over the AIEEE
syllabus
iv. State-of-the-Art Assessment Engine with an extensive online database of
thousands of questions exactly on the level of AIEEE examination to build
your speed and accuracy thereby ensuring nothing is left to chance.
v. Get access to AIEEE learning module and Assessment Engine.
OLYMPIAD: The Central Board of Secondary Education conducts the National
Informatics Olympiad in collaboration with the Indian Association for Research
in Computing Science (IARCS), Mumbai, across the country and abroad. The
examination, which is held in two stages, is open to all students of classes VIII
to XII studying in CBSE schools as well as other boards in the country. At
lampsglow we believe the best way to build a nation is at the foundation. The
SMART C.A.P program - Learning Concepts, Application and Problem solving -
lays a solid foundation for your child's preparation for Olympiads
and similar exams.
NTSE: NCERT awards 1000 scholarships to talented students of Class VIII
each year through its National Talent Search Scheme and then nurtures that talent
by providing financial assistance in the form of both a monthly scholarship
and an annual book grant. At lampsglow we believe the best way to build a nation is
at the foundation. The SMART C.A.P program - Learning Concepts, Application and
Problem solving - lays a solid foundation for your child's preparation
for NTSE and similar exams.
UNDERGRADUATE/POST GRADUATE
IT SKILLS: ELMAQ brings 7 IT courses that matter most to the studies of an
undergraduate or of anyone looking to enhance their career at every stage of growth.
These star courses cover Computer Fundamentals, MS-Office, C, C++, Java,
Soft Skills, and Numerical Ability, and more.
ENGINEERING STUDY: Taking education to your doorstep. For all those
students looking to improve and enhance their academic endeavors,
ELMAQ offers video lessons that enhance the quality of your revisions and
re-revisions. Revisions, the most crucial part of exam preparation, are just
a click away. Take the lessons anytime, anywhere, at your convenience.
ENGINEERING PREP: Aspiring engineers can pass their ANNA UNIVERSITY
engineering semester or All India Engineering exams easily by scoring high
marks. ELMAQ offers unlimited access to MODEL TEST PAPERS through our online
ELMAQ LAMPSGLOW portal, anywhere and anytime, to clear your arrears and look
toward careers!
"Practice until you are perfect - try it and experience the perfection that you
so much wanted. Appear in your exams with greater confidence. Engineering support
tests have practice papers just for that."
Partners
A Distributor is appointed for each postal pin code. The Distributor is authorized to sell
all products displayed under LG through a dealer network. A Distributor can enter the LG site
and create his or her own login under the distributor's column at no cost, which will be
exclusively owned by the distributor. The entry fee for a distributor per pin code is nominal
considering the marketing support they receive in their region from us. A Distributor gets
EdCredits in his or her login and also on the mobile phone registered in LG.
The distributor then appoints Dealers within his area of operation, namely the pin code for
which the distributor has paid. A distributor has the freedom to appoint as many dealers as
possible in the said pin code. Upon receiving payment from dealers, a distributor has to
transfer the agreed share of EdCredits to the dealer, which can be done by transferring them
from the distributor's login to the dealer's login.
A distributor can sell the stock only to registered and paid-up dealers.
Payment for the EdCredit transfer is made by the dealer to the distributor.
Clients Benefits
The range of industries that benefit from the HEADS model is wide; both IT and non-IT
verticals and domains are addressed. The divide between academics and industry is ever
widening, especially after India was projected as a sole source of humanware for the
Information Technology sector. Institutions focus so much on IT that core industries suffer
from a lack of qualified and trained resources. As a strategy, ELMAQ focuses up to 70% of its
business on generating competent freshers for non-IT domains. A lot of support is expected to
come from student resources for non-IT positions, considering that pay packages in the core
industries are better and students' awareness of non-IT opportunities is currently low.
Further, the HEADS model is strictly automated up to the job offer, after rigorous training
and assessment.
A client company in industry is always looking for employable fresher manpower in volume.
With the ELMAQ HEADS model, the client company can breathe easy and proactively plan project
schedules with a pre-determined lead time. The LAMPS portal has an entry point for every such
client, who can enter their preferences, candidate backgrounds, and curriculum essentials
with assessment criteria, and commit to defined job offers.
With the rupee appreciating, companies may find it attractive to look into our one-stop-shop
solution for providing them with the exact set of their dream resources by taking care of the
following:
The exact requirements for freshers and the lead time before they have to be inducted into
your actual project
Entry-level criteria
The curriculum of training to be done on them
The choice of colleges
The choice of academic excellence
The likely compensation package vs qualification and training performance
The other wish-list in terms of job-fitment straight from the college
Once clients sign up with us and provide all this crucial information as their specification,
they are given their own digital dashboard showing what is happening inside our automated
learning system for their selected cohorts of freshers registered through our dealer gateways.
Their HR team is guided through our digital window right from the time of course enrolment up
to the final grading from the regular ILT sessions, with assessments strictly controlled as
per their specification. What's more, resources exactly matched to the client's specification
automatically receive their Job Intent upon approval by the client's HR, who operates in our
system through a secured access login on a continuous basis.
The specifications as well as the selected resources are secured and protected throughout.
Enough flexibility is built into our model to ensure a client does not lose out on anything,
including the timeframe they are waiting for. A client need not step out of their office to
achieve this 100% matched-resource-making exercise. And all of this may cost a client as
little as 20% of the cost otherwise burned on their own trial runs!
Investors
ELMAQ is India's only company to have seamlessly integrated education and placement end-
to-end in a partner-based web network. ELMAQ focuses on the bottom of the pyramid, the Real
India, a hugely recession-proof market with a continuous need for affordable education, a
career growth path, and guidance.
ELMAQ came out with a successful IPO in February 2009 and is now listed on the BSE and NSE,
India's premier stock exchanges. ELMAQ has consolidated its presence in Tamil Nadu as a
principal job-linked e-learning solution provider for the masses through a partner-driven
network, growing significantly from 35-odd partner centers just before the IPO to 200 by
August 2009. The company rolled out EdCampus, EdCademy, and EdCenters with EdClass, which
offer learning, jobs, metrics, and live lectures online and in-campus.
ELMAQ strives to provide a next-generation education system that matches demand and
supply in scale and quality, reach and spread, and bridges the academics-industry divide with
a technology-led, career-integrated learning framework delivered through its last-mile
network. Future plans include providing end-to-end school solutions to guide every student
from K-12 to careers, with comprehensive, industry-demand-aligned education deployed to
schools. ELMAQ strongly believes that students at a younger age are better positioned to be
seeded with apt careers through skill development with an industry orientation suited to
their aptitude and flair for excellence in a particular field.
2.SYSTEM REQUIREMENTS
2.1 Hardware Requirements
Processor : Dual core 2.0 GHz
Hard Disk : 320 GB HDD
RAM : 2 GB RAM
Monitor : 14" LCD Monitor
Keyboard : Multimedia Keyboard
Mouse : Optical Mouse
2.2 Software Requirements
Operating environment : Windows 7 Ultimate
Front end : C#.NET
Back end : SQL Server 2005
3. SOFTWARE OVERVIEW
3.1 .NET Framework
Now that you are familiar with the major goals of the .NET Framework,
let's briefly examine its architecture. As you can see in Figure 3.1.1, the .NET
Framework sits on top of the operating system, which can be any of a few different
flavors of Windows, and consists of a number of components. .NET is essentially a
system application that runs on Windows.
Fig: 3.1.1 .NET Framework
Conceptually, the CLR and the JVM are similar in that they are both runtime
infrastructures that abstract the underlying platform differences. However, while
the JVM officially supports only the Java language, the CLR supports any
language that can be represented in its Common Intermediate Language (CIL). The
JVM executes byte code, so it can, in principle, support many languages, too.
Unlike Java's byte code, though, CIL is never interpreted. Another conceptual
difference between the two infrastructures is that Java code runs on any platform
with a JVM, whereas .NET code runs only on platforms that support the CLR.
The Common Language Runtime
At the heart of the .NET Framework is the common language runtime. The
common language runtime is responsible for providing the execution environment
that code written in a .NET language runs under. The common language runtime
can be compared to the Visual Basic 6 runtime, except that the common language
runtime is designed to handle all .NET languages, not just one, as the Visual Basic
6 runtime did for Visual Basic 6. The following list describes some of the benefits
the common language runtime gives you:
Automatic memory management
Cross-language debugging
Cross-language exception handling
Full support for component versioning
Access to legacy COM components
XCOPY deployment
Robust security model
You might expect all these features, but until .NET they were never available together
in Microsoft development tools. Figure 3.1.2 shows where the common language
runtime fits into the .NET Framework.
Fig: 3.1.2 The common language runtime.
Code written using a .NET language is known as managed code. Code that
uses anything but the common language runtime is known as unmanaged code.
The common language runtime provides a managed execution environment
for .NET code, whereas the individual runtimes of non-.NET languages provide an
unmanaged execution environment.
Inside the Common Language Runtime
The common language runtime enables code running in its execution
environment to have features such as security, versioning, memory management
and exception handling because of the way .NET code actually executes. When
you compiled Visual Basic 6 forms applications, you had the ability to compile
down to native code or p-code.
Fig: 3.1.3 Visual Basic 6 compiler options dialog.
When you compile your applications in .NET, you aren't creating anything
in native code. When you compile in .NET, you're converting your code—no
matter what .NET language you're using—into an assembly made up of an
intermediate language called Microsoft Intermediate Language.
The file format for the IL is known as PE (portable executable) format, which
is a standard format for processor-specific execution.
When a user or another component executes your code, a process called just-in-time
(JIT) compilation occurs, and it's at this point that the IL is converted
into the specific machine language of the processor it's executing on. This makes it
very easy to port a .NET application to any type of operating system on any type of
processor because the IL is simply waiting to be consumed by a JIT compiler.
When the IL code is JITted into machine-specific language, it does so on an
as-needed basis. If your assembly is 10MB and the user is only using a fraction of
that 10MB, only the required IL and its dependencies are compiled to machine
language. This makes for a very efficient execution process. But during this
execution, how does the common language runtime make sure that the IL is
correct? Because the compiler for each language creates its own IL, there must be a
process that makes sure what's compiling won't corrupt the system. The process
that validates the IL is known as verification. Figure 3.1.4 demonstrates the process
the IL goes through before the code actually executes.
Fig: 3.1.4 The JIT process and verification
When code is JIT compiled, the common language runtime checks to make
sure that the IL is correct. The rules that the common language runtime uses for
verification are set forth in the Common Language Specification (CLS) and the
Common Type System (CTS).
The .NET Framework Class Library
The second most important piece of the .NET Framework is the .NET
Framework class library (FCL). As you've seen, the common language runtime
handles the dirty work of actually running the code you write. But to write the
code, you need a foundation of available classes to access the resources of the
operating system, database server, or file server. The FCL is made up of a
hierarchy of namespaces that expose classes, structures, interfaces, enumerations,
and delegates that give you access to these resources.
The Structure of a .NET Application
To understand how the common language runtime manages code execution,
you must examine the structure of a .NET application. The primary unit of a .NET
application is the assembly. An assembly is a self-describing collection of code,
resources, and metadata. The assembly manifest contains information about what
is contained within the assembly.
The assembly manifest provides:
Identity information, such as the assembly’s name and version number
A list of all types exposed by the assembly
A list of other assemblies required by the assembly
A list of code access security instructions, including permissions required by
the assembly and permissions to be denied the assembly
Each assembly has one and only one assembly manifest, and it contains all
the description information for the assembly. However, the assembly manifest can
be contained in its own file or within one of the assembly’s modules.
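As a hedged illustration (not part of the project's own code), some of this manifest
information can be read back at run time through the System.Reflection namespace;
ManifestDemo is an invented name used only for this sketch.
using System;
using System.Reflection;

class ManifestDemo
{
    static void Main()
    {
        // The assembly containing this code; its manifest carries the identity
        // information (name, version) and the list of referenced assemblies.
        Assembly asm = Assembly.GetExecutingAssembly();
        AssemblyName name = asm.GetName();

        Console.WriteLine("Assembly name : " + name.Name);
        Console.WriteLine("Version       : " + name.Version);

        // Other assemblies this one depends on, as recorded in the manifest.
        foreach (AssemblyName reference in asm.GetReferencedAssemblies())
        {
            Console.WriteLine("References    : " + reference.FullName);
        }
    }
}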
Compilation and Execution of a .NET Application
When you compile a .NET application, it is not compiled to binary machine
code; rather, it is converted to IL. This is the form that your deployed application
takes—one or more assemblies consisting of executable files and DLL files in IL
form. At least one of these assemblies will contain an executable file that has been
designated as the entry point for the application.
When execution of your program begins, the first assembly is loaded into
memory. At this point, the common language runtime examines the assembly
manifest and determines the requirements to run the program. It examines security
permissions requested by the assembly and compares them with the system’s
security policy. If the system’s security policy does not allow the requested
permissions, the application will not run. If the application passes the system’s
security policy, the common language runtime executes the code. It creates a
process for the application to run in and begins application execution.
The .NET Framework base class library contains the base classes that provide
many of the services and objects you need when writing your applications. The
class library is organized into namespaces. A namespace is a logical grouping of
types that perform related functions. The namespaces in the .NET base class
library are organized hierarchically. The root of the .NET Framework class library
is the System namespace, and nested namespaces are accessed with the period
operator. A typical namespace construction appears as follows:
System
System.Data
System.Data.SqlClient
The first example refers to the System namespace. The second refers to the
System.Data namespace. The third example refers to the System.Data.SqlClient
namespace. Table 1.1 introduces some of the more commonly used .NET base
class namespaces.
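A short, hedged sketch of how these namespaces are reached from C# code; the DataTable
and SqlConnection objects are used only to show the two ways of referring to a type and
are not part of the project's design.
using System;
using System.Data;            // ADO.NET types such as DataTable

class NamespaceDemo
{
    static void Main()
    {
        // A type can be referenced through a using directive ...
        DataTable table = new DataTable("Students");

        // ... or through its fully qualified name, using the period operator.
        System.Data.SqlClient.SqlConnection connection =
            new System.Data.SqlClient.SqlConnection();

        Console.WriteLine(table.TableName + " / " + connection.State);
    }
}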
Introduction to Object-Oriented Programming
Programming in the .NET Framework environment is done with objects.
Objects are programmatic constructs that represent packages of related data and
functionality. Objects are self-contained and expose specific functionality to the
rest of the application environment without detailing the inner workings of the
object itself. Objects are created from a template called a class. The .NET base
class library provides a set of classes from which you can create objects in your
applications. You also can use the Microsoft Visual Studio programming
environment to create your own classes. This lesson introduces you to the concepts
associated with object-oriented programming.
Objects, Members, and Abstraction
An object is a programmatic construct that represents something. In the
real world, objects are cars, bicycles, laptop computers, and so on. Each of these
items exposes specific functionality and has specific properties. In your
application, an object might be a form, a control such as a button, a database
connection, or any of a number of other constructs. Each object is a complete
functional unit, and contains all of the data and exposes all of the functionality
required to fulfill its purpose. The ability of programmatic objects to represent real-
world objects is called abstraction.
Classes Are Templates for Objects
Classes were discussed in Chapter 1 and represent user-defined reference
types. Classes can be thought of as blueprints for objects: they define all of the
members of an object, define the behavior of an object, and set initial values for
data when appropriate. When a class is instantiated, an in-memory instance of that
class is created. This instance is called an object. To review, a class is instantiated
using the New (new) keyword as follows:
Visual Basic .NET
' Declares a variable of the Widget type
Dim myWidget As Widget
' Instantiates a new Widget object and assigns it to the myWidget
' variable
myWidget = New Widget()
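Since the project's front end is C#, the same steps in C# would look as follows; Widget
here is a hypothetical class used only for illustration.
// Hypothetical Widget class used only for this illustration.
public class Widget
{
}

public class WidgetDemo
{
    public static void Main()
    {
        // Declares a variable of the Widget type
        Widget myWidget;

        // Instantiates a new Widget object and assigns it to the myWidget variable
        myWidget = new Widget();

        System.Console.WriteLine(myWidget.GetType().Name);   // prints "Widget"
    }
}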
When an instance of a class is created, a copy of the instance data defined by
that class is created in memory and assigned to the reference variable. Individual
instances of a class are independent of one another and represent separate
programmatic constructs.
There is generally no limit to how many copies of a single class can be
instantiated at any time. To use a real-world analogy, if a car is an object, the plans
for the car are the class. The plans can be used to make any number of cars, and
changes to a single car do not, for the most part, affect any other cars.
Objects and Members
Objects are composed of members. Members are properties, fields, methods,
and events, and they represent the data and functionality that comprise the object.
Fields and properties represent data members of an object. Methods are actions the
object can perform, and events are notifications an object receives from or sends to
other objects when activity happens in the application.
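A hedged sketch showing each kind of member on a single invented class; Account,
Balance, Deposited and Deposit are illustrative names, not part of the project.
using System;

public class Account
{
    // Field: a data member holding the object's state.
    private decimal balance;

    // Property: controlled access to the field.
    public decimal Balance
    {
        get { return balance; }
    }

    // Event: a notification the object sends to interested subscribers.
    public event EventHandler Deposited;

    // Method: an action the object can perform.
    public void Deposit(decimal amount)
    {
        balance += amount;
        if (Deposited != null)
        {
            Deposited(this, EventArgs.Empty);
        }
    }
}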
Object Models
Simple objects might consist of only a few properties, methods, and perhaps
an event or two. More complex objects might require numerous properties and
methods and possibly even subordinate objects. Objects can contain and expose
other objects as members.
For example, every instance of the Form class contains and exposes a Controls
collection that comprises all of the controls contained by the form. The object
model defines the hierarchy of contained objects that form the structure of an
object.
Encapsulation
Encapsulation is the concept that implementation of an object is
independent of its interface. Put another way, an application interacts with an
object through its interface, which consists of its public properties and methods. As
long as this interface remains constant, the application can continue to interact with
the component, even if implementation of the interface was completely rewritten
between versions.
Polymorphism
Polymorphism is the ability of different classes to provide different
implementations of the same public interfaces. In other words, polymorphism
allows methods and properties of an object to be called without regard for the
particular implementation of those members. There are two principal ways through
which polymorphism can be provided: interface polymorphism and inheritance
polymorphism.
Interface Polymorphism
An interface is a contract for behavior. Essentially, it defines the members a
class should implement, but states nothing at all about the details of that
implementation. An object can implement many different interfaces, and many
diverse classes can implement the same interface. All objects implementing the
same interface are capable of interacting with other objects through that interface.
Inheritance Polymorphism
Inheritance allows you to incorporate the functionality of a previously
defined class into a new class and implement different members as needed. A class
that inherits another class is said to derive from that class, or to inherit from that
class. A class can directly inherit from only one class, which is called the base
class. The new class has the same members as the base class, and additional
members can be added as needed. Additionally, the implementation of base
members can be changed in the new class by overriding the base class
implementation. Inherited classes retain all the characteristics of the base class and
can interact with other objects as though they were instances of the base class.
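A minimal, hedged sketch of both forms of polymorphism; IDrawable, Shape and Circle are
invented types for illustration only.
using System;

// Interface polymorphism: any class implementing IDrawable can be used
// through the same interface reference.
public interface IDrawable
{
    void Draw();
}

// Inheritance polymorphism: Circle derives from the base class Shape and
// overrides its virtual member.
public class Shape : IDrawable
{
    public virtual void Draw() { Console.WriteLine("Drawing a shape"); }
}

public class Circle : Shape
{
    public override void Draw() { Console.WriteLine("Drawing a circle"); }
}

public class PolymorphismDemo
{
    public static void Main()
    {
        Shape shape = new Circle();        // treated as the base class
        IDrawable drawable = new Circle(); // treated through the interface

        shape.Draw();     // prints "Drawing a circle"
        drawable.Draw();  // prints "Drawing a circle"
    }
}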
C#.Net:
C# is an elegant and type-safe object-oriented language that enables
developers to build a variety of secure and robust applications that run on the .NET
Framework. You can use C# to create traditional Windows client applications,
XML Web services, distributed components, client-server applications, database
applications, and much, much more. Visual C# provides an advanced code editor,
convenient user interface designers, integrated debugger, and many other tools to
make it easier to develop applications based on version 4.0 of the C# language and
version 4.0 of the .NET Framework.
C# syntax is highly expressive. C# makes it easy to develop software components
through several innovative language constructs, including the following:
Encapsulated method signatures called delegates, which enable type-safe event
notifications.
Properties, which serve as accessors for private member variables.
Attributes, which provide declarative metadata about types at run time.
Inline XML documentation comments.
Language-Integrated Query (LINQ) which provides built-in query capabilities
across a variety of data sources.
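As a hedged illustration of two of the constructs listed above, a property and a LINQ
query over in-memory data; Student, Name, Marks and the sample values are invented for
the example.
using System;
using System.Linq;

public class Student
{
    // Property serving as an accessor for a private member variable.
    private string name;
    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    public int Marks { get; set; }  // auto-implemented property
}

public class LinqDemo
{
    public static void Main()
    {
        Student[] students =
        {
            new Student { Name = "Arun",  Marks = 78 },
            new Student { Name = "Divya", Marks = 91 }
        };

        // Built-in query capabilities over an in-memory data source.
        var toppers = from s in students
                      where s.Marks > 80
                      select s.Name;

        foreach (string n in toppers)
        {
            Console.WriteLine(n);
        }
    }
}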
At the same time, C# is simple and easy to learn. The curly-brace syntax of C# will be
instantly recognizable to anyone familiar with C, C++ or Java. Developers who
know any of these languages are typically able to begin to work productively in C#
within a very short time. C# syntax simplifies many of the complexities of C++
and provides powerful features such as nullable value types, enumerations,
delegates, lambda expressions and direct memory access, which are not found in
Java. C# supports generic methods and types, which provide increased type safety
and performance, and iterators, which enable implementers of collection classes to
define custom iteration behaviors that are simple to use by client code. Language-
Integrated Query (LINQ) expressions make the strongly-typed query a first-class
language construct.
As an object-oriented language, C# supports the concepts of encapsulation,
inheritance, and polymorphism. All variables and methods, including the Main
method, the application's entry point, are encapsulated within class definitions. A
class may inherit directly from one parent class, but it may implement any number
of interfaces. Methods that override virtual methods in a parent class require the
override keyword as a way to avoid accidental redefinition. In C#, a struct is like a
lightweight class; it is a stack-allocated type that can implement interfaces but does
not support inheritance. The C# build process is simple compared to C and C++
and more flexible than in Java. There are no separate header files, and no
requirement that methods and types be declared in a particular order. A C# source
file may define any number of classes, structs, interfaces, and events.
C# programs run on the .NET Framework, an integral component of
Windows that includes a virtual execution system called the common language
runtime (CLR) and a unified set of class libraries. The CLR is the commercial
implementation by Microsoft of the common language infrastructure (CLI), an
international standard that is the basis for creating execution and development
environments in which languages and libraries work together seamlessly.
Source code written in C# is compiled into an intermediate language (IL) that
conforms to the CLI specification. The IL code and resources, such as bitmaps and
strings, are stored on disk in an executable file called an assembly, typically with
an extension of .exe or .dll. An assembly contains a manifest that provides
information about the assembly's types, version, culture, and security requirements.
When the C# program is executed, the assembly is loaded into the CLR, which
might take various actions based on the information in the manifest. Then, if the
security requirements are met, the CLR performs just in time (JIT) compilation to
convert the IL code to native machine instructions. The CLR also provides other
services related to automatic garbage collection, exception handling, and resource
management. Code that is executed by the CLR is sometimes referred to as
"managed code," in contrast to "unmanaged code" which is compiled into native
machine language that targets a specific system. The following diagram illustrates
the compile-time and run-time relationships of C# source code files, the .NET
Framework class libraries, assemblies, and the CLR.
Distinguishing features
C# supports strongly typed implicit variable declarations with the keyword
var, and implicitly typed arrays with the keyword new[] followed by a
collection initializer.
Metaprogramming via C# attributes is part of the language. Many of these
attributes duplicate the functionality of GCC's and Visual C++'s platform-
dependent preprocessor directives.
Like C++, and unlike Java, C# programmers must use the keyword virtual to
allow methods to be overridden by subclasses.
Extension methods in C# allow programmers to use static methods as if they
were methods from a class's method table, allowing programmers to add
methods to an object that they feel should exist on that object and its
derivatives.
The type dynamic allows for run-time method binding, allowing for
JavaScript-like method calls and run-time object composition.
C# has strongly typed and verbose function pointer support via the keyword
delegate.
The C# language does not allow global variables or functions. All
methods and members must be declared within classes. Static members of
public classes can substitute for global variables and functions.
Local variables cannot shadow variables of the enclosing block, unlike C
and C++.
A C# namespace provides the same level of code isolation as a Java package
or a C++ namespace, with very similar rules and features to a package.
C# supports a strict Boolean data type, bool. Statements that take conditions,
such as while and if, require an expression of a type that implements the true
operator, such as the Boolean type. While C++ also has a Boolean type, it
can be freely converted to and from integers, and expressions such as if(a)
require only that a is convertible to bool, allowing a to be an int, or a pointer.
C# disallows this "integer meaning true or false" approach, on the grounds
that forcing programmers to use expressions that return exactly bool can
prevent certain types of common programming mistakes in C or C++ such
as if (a = b) (use of assignment = instead of equality ==).
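A brief, hedged sketch touching a few of the features listed above (implicitly typed
locals and arrays, an extension method, and a delegate); all names are illustrative only.
using System;

// Extension methods must be declared in a static class.
public static class StringExtensions
{
    // Appears on every string as if it were an instance method.
    public static string Shout(this string text)
    {
        return text.ToUpper() + "!";
    }
}

public class FeaturesDemo
{
    // Strongly typed function pointer support via the delegate keyword.
    public delegate int Operation(int a, int b);

    public static void Main()
    {
        // Implicitly typed local variable and implicitly typed array.
        var numbers = new[] { 1, 2, 3 };

        Operation add = delegate(int a, int b) { return a + b; };

        Console.WriteLine("hello".Shout());              // HELLO!
        Console.WriteLine(add(numbers[0], numbers[1]));  // 3
    }
}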
Categories of data types
CTS separates data types into two categories.
1. Value types
2. Reference types
Instances of value types do not have referential identity nor referential comparison
semantics - equality and inequality comparisons for value types compare the actual
data values within the instances, unless the corresponding operators are
overloaded. Value types are derived from System.ValueType, always have a
default value, and can always be created and copied. Some other limitations on
value types are that they cannot derive from each other (but can implement
interfaces) and cannot have an explicit default (parameterless) constructor.
Examples of value types are all primitive types, such as int (a signed 32-bit
integer), float (a 32-bit IEEE floating-point number), char (a 16-bit Unicode code
unit), and System.DateTime (identifies a specific point in time with 100-nanosecond
precision). Other examples are enum (enumerations) and struct (user-defined
structures).
In contrast, reference types have the notion of referential identity - each instance of
a reference type is inherently distinct from every other instance, even if the data
within both instances is the same. This is reflected in default equality and
inequality comparisons for reference types, which test for referential rather than
structural equality, unless the corresponding operators are overloaded (as is the
case for System.String). In general, it is not always possible to create an instance
of a reference type, nor to copy an existing instance, nor to perform a value
comparison on two existing instances, though specific reference types can provide
such services by exposing a public constructor or implementing a corresponding
interface (such as ICloneable or IComparable). Examples of reference types are
object (the ultimate base class for all other C# classes), System.String (a string of
Unicode characters), and System.Array (a base class for all C# arrays).
Both type categories are extensible with user-defined types.
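A hedged sketch of the difference in copy semantics, using an invented Point struct
(value type) and Person class (reference type).
using System;

public struct Point          // value type
{
    public int X;
}

public class Person          // reference type
{
    public string Name;
}

public class TypeDemo
{
    public static void Main()
    {
        Point p1 = new Point { X = 1 };
        Point p2 = p1;       // the data itself is copied
        p2.X = 99;
        Console.WriteLine(p1.X);     // 1  (p1 is unaffected)

        Person a = new Person { Name = "Kumar" };
        Person b = a;        // only the reference is copied
        b.Name = "Ravi";
        Console.WriteLine(a.Name);   // Ravi (both variables refer to one object)
    }
}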
Code comments
C# uses a double slash (//) to indicate that the rest of the line is a comment.
This is inherited from C++.
public class Foo
{
    // a comment
    public static void Bar(int firstParam) {} // also a comment
}
Multi-line comments start with slash-asterisk (/*) and end with asterisk-slash
(*/). This is inherited from standard C.
public class Foo
{
    /* A multi-line
       comment */
    public static void Bar(int firstParam) {}
}
Working with a delegate involves three steps:
Declaration
Instantiation
Invocation
A very basic example (SimpleDelegate1.cs):
using System;

namespace Akadia.BasicDelegate
{
    // Declaration
    public delegate void SimpleDelegate();

    class TestDelegate
    {
        public static void MyFunc()
        {
            Console.WriteLine("I was called by delegate ...");
        }

        public static void Main()
        {
            // Instantiation
            SimpleDelegate simpleDelegate = new SimpleDelegate(MyFunc);

            // Invocation
            simpleDelegate();
        }
    }
}
3.2 Sql Server 2005
Microsoft SQL Server is a relational database management system
developed by Microsoft. As a database, it is a software product whose primary
function is to store and retrieve data as requested by other software applications, be
it those on the same computer or those running on another computer across a
network (including the Internet). There are at least a dozen different editions of
Microsoft SQL Server aimed at different audiences and for different workloads
(ranging from small applications that store and retrieve data on the same computer,
to millions of users and computers that access huge amounts of data from the
Internet at the same time). Its primary query languages are T-SQL and ANSI SQL.
SQL Server 2005
SQL Server 2005 (formerly codenamed "Yukon") released in October 2005. It
included native support for managing XML data, in addition to relational data. For
this purpose, it defined an xml data type that could be used either as a data type in
database columns or as literals in queries. XML columns can be associated with
XSD schemas; XML data being stored is verified against the schema. XML is
converted to an internal binary data type before being stored in the database.
Specialized indexing methods were made available for XML data. XML data is
queried using XQuery; SQL Server 2005 added some extensions to the T-SQL
language to allow embedding XQuery queries in T-SQL. In addition, it also
defines a new extension to XQuery, called XML DML that allows query-based
modifications to XML data. SQL Server 2005 also allows a database server to be
exposed over web services using Tabular Data Stream (TDS) packets encapsulated
within SOAP protocol requests. When the data is accessed over web services,
results are returned as XML.
SQL Server 2005 introduced "MARS" (Multiple Active Result Sets), a method of
allowing a single database connection to carry multiple active commands and result
sets at the same time.
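From the C# side of such a project, MARS is enabled through the connection string. The
sketch below is hedged: the server, database name (EdCademy) and security settings are
placeholders, not the project's real configuration.
using System;
using System.Data.SqlClient;

public class MarsDemo
{
    public static void Main()
    {
        // MultipleActiveResultSets=True lets one connection carry
        // more than one active command/result set at a time.
        string connectionString =
            "Data Source=.;Initial Catalog=EdCademy;" +
            "Integrated Security=True;MultipleActiveResultSets=True";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            Console.WriteLine("MARS-enabled connection opened: " + connection.State);
        }
    }
}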
Microsoft makes SQL Server available in multiple editions, with different feature
sets and targeting different users.
Mainstream editions
Datacenter
SQL Server 2008 R2 Datacenter is the full-featured edition of SQL Server
and is designed for datacenters that need high levels of application
support and scalability. It supports 256 logical processors and virtually
unlimited memory, and comes with the StreamInsight Premium edition. The
Datacenter edition has been retired in SQL Server 2012; all its features are
available in SQL Server 2012 Enterprise Edition.
Enterprise
SQL Server Enterprise Edition includes both the core database engine and
add-on services, with a range of tools for creating and managing a SQL
Server cluster. It can manage databases as large as 524 petabytes, address 2
terabytes of memory, and supports 8 physical processors. The SQL Server 2012
edition supports 160 physical processors.
Standard
SQL Server Standard edition includes the core database engine, along with
the stand-alone services. It differs from Enterprise edition in that it supports
fewer active instances (number of nodes in a cluster) and does not include
some high-availability functions such as hot-add memory (allowing memory
to be added while the server is still running), and parallel indexes.
Web
SQL Server Web Edition is a low-TCO option for Web hosting.
Business Intelligence
Introduced in SQL Server 2012 and focusing on Self Service and Corporate
Business Intelligence. It includes the Standard Edition capabilities and
Business Intelligence tools: PowerPivot, Power View, the BI Semantic
Model, Master Data Services, Data Quality Services and xVelocity in-
memory analytics
Workgroup
SQL Server Workgroup Edition includes the core database functionality but
does not include the additional services. Note that this edition has been
retired in SQL Server 2012
Express
SQL Server Express Edition is a scaled down, free edition of SQL Server,
which includes the core database engine. While there are no limitations on
the number of databases or users supported, it is limited to using one
processor, 1 GB of memory and 4 GB database files (10 GB database files
from SQL Server Express 2008 R2 onward). It is intended as a replacement for
MSDE. Two additional editions provide a superset of features not in the
original Express Edition. The first is SQL Server Express with Tools,
which includes SQL Server Management Studio Basic. SQL Server
Express with Advanced Services adds full-text search capability and
reporting services.
Specialized editions
Azure
Microsoft SQL Azure Database is the cloud-based version of Microsoft SQL
Server, presented as software as a service on Azure Services Platform.
Compact (SQL CE)
The compact edition is an embedded database engine. Unlike the other
editions of SQL Server, the SQL CE engine is based on SQL Mobile
(initially designed for use with hand-held devices) and does not share the
same binaries. Due to its small size (1 MB DLL footprint), it has a markedly
reduced feature set compared to the other editions. For example, it supports
a subset of the standard data types, does not support stored procedures or
Views or multiple-statement batches (among other limitations). It is limited
to 4 GB maximum database size and cannot be run as a Windows service;
Compact Edition must be hosted by the application using it. The 3.5 version
includes support for ADO.NET Synchronization Services. SQL CE does not
support ODBC connectivity, unlike SQL Server proper.
Developer
SQL Server Developer Edition includes the same features as SQL Server
2012 Enterprise Edition, but is limited by the license to be only used as a
development and test system, and not as production server. This edition is
available to download by students free of charge as a part of Microsoft's
DreamSpark program.
Embedded (SSEE)
SQL Server 2005 Embedded Edition is a specially configured named
instance of the SQL Server Express database engine which can be accessed
only by certain Windows Services.
Evaluation
SQL Server Evaluation Edition, also known as the Trial Edition, has all the
features of the Enterprise Edition, but is limited to 180 days, after which the
tools will continue to run, but the server services will stop.
Fast Track
SQL Server Fast Track is specifically for enterprise-scale data warehousing
storage and business intelligence processing, and runs on reference-
architecture hardware that is optimized for Fast Track.
LocalDB
Introduced in SQL Server Express 2012, LocalDB is a minimal, on-demand,
version of SQL Server that is designed for application developers. It can also
be used as an embedded database.
Parallel Data Warehouse (PDW)
A massively parallel processing (MPP) SQL Server appliance optimized for
large-scale data warehousing, such as hundreds of terabytes.
Datawarehouse Appliance Edition
Pre-installed and configured as part of an appliance in partnership with Dell
& HP, based on the Fast Track architecture. This edition does not include SQL
Server Integration Services, Analysis Services, or Reporting Services.
Architecture
The protocol layer implements the external interface to SQL Server. All operations
that can be invoked on SQL Server are communicated to it via a Microsoft-defined
format, called Tabular Data Stream (TDS). TDS is an application layer protocol,
used to transfer data between a database server and a client. Initially designed and
developed by Sybase Inc. for their Sybase SQL Server relational database engine
in 1984, and later by Microsoft in Microsoft SQL Server, TDS packets can be
encased in other physical transport dependent protocols, including TCP/IP, Named
pipes, and Shared memory. Consequently, access to SQL Server is available over
these protocols. In addition, the SQL Server API is also exposed over web
services.
Data storage
Data storage is a database, which is a collection of tables with typed columns. SQL
Server supports different data types, including primary types such as Integer,
Float, Decimal, Char (including character strings), Varchar (variable length
character strings), binary (for unstructured blobs of data), Text (for textual data)
among others. The rounding of floats to integers uses either Symmetric Arithmetic
Rounding or Symmetric Round Down (Fix) depending on arguments: SELECT
Round(2.5, 0) gives 3.
Microsoft SQL Server also allows user-defined composite types (UDTs) to be
defined and used. It also makes server statistics available as virtual tables and
views (called Dynamic Management Views or DMVs). In addition to tables, a
database can also contain other objects including views, stored procedures, indexes
and constraints, along with a transaction log. A SQL Server database can contain a
maximum of 2^31 objects, and can span multiple OS-level files with a maximum file
size of 2^60 bytes.[39] The data in the database are stored in primary data files with an
extension .mdf. Secondary data files, identified with a .ndf extension, allow the data
of a single database to be spread across more than one file. Log files are identified
with the .ldf extension.
Storage space allocated to a database is divided into sequentially numbered pages,
each 8 KB in size. A page is the basic unit of I/O for SQL Server operations. A
page is marked with a 96-byte header which stores metadata about the page
including the page number, page type, free space on the page and the ID of the
object that owns it. The page type defines the data contained in the page: data stored in
the database, index data, an allocation map which holds information about how pages are
allocated to tables and indexes, a change map which holds information about the
changes made to other pages since the last backup or logging, or large data types
such as image or text. While a page is the basic unit of an I/O operation, space is
actually managed in terms of an extent, which consists of 8 pages. A database
object can either span all 8 pages in an extent ("uniform extent") or share an extent
with up to 7 more objects ("mixed extent"). A row in a database table cannot span
more than one page, so is limited to 8 KB in size. However, if the data exceeds 8
KB and the row contains Varchar or Varbinary data, the data in those columns are
moved to a new page (or possibly a sequence of pages, called an Allocation unit)
and replaced with a pointer to the data.
For physical storage of a table, its rows are divided into a series of partitions
(numbered 1 to n). The partition size is user defined; by default all rows are in a
single partition. A table is split into multiple partitions in order to spread a database
over a cluster. Rows in each partition are stored in either B-tree or heap structure.
If the table has an associated index to allow fast retrieval of rows, the rows are
stored in order according to their index values, with a B-tree providing the index.
The actual data is in the leaf nodes, with the other nodes storing the index values
for the leaf data reachable from the respective nodes. If the index is non-clustered,
the rows are not sorted according to the index keys. An indexed view has the same
storage structure as an indexed table. A table without an index is stored in an
unordered heap structure. Both heaps and B-trees can span multiple allocation
units.[53]
Buffer management
SQL Server buffers pages in RAM to minimize disc I/O. Any 8 KB page can be
buffered in-memory, and the set of all pages currently buffered is called the buffer
cache. The amount of memory available to SQL Server decides how many pages
will be cached in memory. The buffer cache is managed by the Buffer Manager.
Either reading from or writing to any page copies it to the buffer cache. Subsequent
reads or writes are redirected to the in-memory copy, rather than the on-disc
version. The page is updated on the disc by the Buffer Manager only if the in-
memory cache has not been referenced for some time. While writing pages back to
disc, asynchronous I/O is used whereby the I/O operation is done in a background
thread so that other operations do not have to wait for the I/O operation to
complete. Each page is written along with its checksum when it is written. When
reading the page back, its checksum is computed again and matched with the
stored version to ensure the page has not been damaged or tampered with in the
meantime.
Concurrency and locking
SQL Server allows multiple clients to use the same database concurrently. As such,
it needs to control concurrent access to shared data, to ensure data integrity—when
multiple clients update the same data, or clients attempt to read data that is in the
process of being changed by another client. SQL Server provides two modes of
concurrency control: pessimistic concurrency and optimistic concurrency. When
pessimistic concurrency control is being used, SQL Server controls concurrent
access by using locks. Locks can be either shared or exclusive. Exclusive lock
grants the user exclusive access to the data—no other user can access the data as
long as the lock is held. Shared locks are used when some data is being read—
multiple users can read from data locked with a shared lock, but not acquire an
exclusive lock. The latter would have to wait for all shared locks to be released.
Locks can be applied on different levels of granularity—on entire tables, pages, or
even on a per-row basis on tables. For indexes, it can either be on the entire index
or on index leaves. The level of granularity to be used is defined on a per-database
basis by the database administrator. While a fine-grained locking system allows
more users to use the table or index simultaneously, it requires more resources, so
it does not automatically yield a higher-performing solution. SQL Server also
includes two more lightweight mutual exclusion solutions—latches and spinlocks
—which are less robust than locks but are less resource intensive. SQL Server uses
them for DMVs and other resources that are usually not busy. SQL Server also
monitors all worker threads that acquire locks to ensure that they do not end up in
deadlocks—in case they do, SQL Server takes remedial measures, which in many
cases means killing one of the threads entangled in a deadlock and rolling back the
transaction it started.[39] To implement locking, SQL Server contains the Lock
Manager. The Lock Manager maintains an in-memory table that manages the
database objects and locks, if any, on them along with other metadata about the
lock. Access to any shared object is mediated by the lock manager, which either
grants access to the resource or blocks it.
SQL Server also provides the optimistic concurrency control mechanism, which is
similar to the multiversion concurrency control used in other databases. The
mechanism allows a new version of a row to be created whenever the row is
updated, as opposed to overwriting the row, i.e., a row is additionally identified by
the ID of the transaction that created the version of the row. Both the old as well as
the new versions of the row are stored and maintained, though the old versions are
moved out of the database into a system database identified as Tempdb. When a
row is in the process of being updated, any other requests are not blocked (unlike
locking) but are executed on the older version of the row. If the other request is an
update statement, it will result in two different versions of the rows—both of them
will be stored by the database, identified by their respective transaction IDs.
Data retrieval
The main mode of retrieving data from an SQL Server database is querying for it.
The query is expressed using a variant of SQL called T-SQL, a dialect Microsoft
SQL Server shares with Sybase SQL Server due to its legacy. The query
declaratively specifies what is to be retrieved. It is processed by the query
processor, which figures out the sequence of steps that will be necessary to retrieve
the requested data. The sequence of actions necessary to execute a query is called a
query plan. There might be multiple ways to process the same query. For example,
for a query that contains a join statement and a select statement, executing join on
both the tables and then executing select on the results would give the same result
as selecting from each table and then executing the join, but result in different
execution plans. In such case, SQL Server chooses the plan that is expected to
yield the results in the shortest possible time. This is called query optimization and
is performed by the query processor itself.
SQL Server includes a cost-based query optimizer which tries to optimize on the
cost, in terms of the resources it will take to execute the query. Given a query, the
query optimizer looks at the database schema, the database statistics and the
system load at that time. It then decides the sequence in which to access the tables
referred to in the query, the sequence in which to execute the operations, and the
access method to be used to access the tables. For example, if the table has an
associated index, it decides whether the index should be used or not: if the index is
on a column whose values are not unique for most of the rows (low "selectivity"),
it might not be worthwhile to use the index to access the data. Finally, it decides
whether to execute the query concurrently or not. While concurrent execution is
more costly in terms of total processor time, the fact that the execution is split
across different processors may mean it executes faster. Once a query plan is generated for a
query, it is temporarily cached. For further invocations of the same query, the
cached plan is used. Unused plans are discarded after some time.
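For instance (the table and column names below are purely illustrative), the plan chosen by the optimizer for a join query can be examined without actually executing the query:

-- Return the estimated execution plan as XML instead of running the query.
SET SHOWPLAN_XML ON;
GO
SELECT o.OrderId, c.CustomerName
FROM Orders AS o
JOIN Customers AS c ON c.CustomerId = o.CustomerId
WHERE c.Country = 'India';
GO
SET SHOWPLAN_XML OFF;
GO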
SQL Server also allows stored procedures to be defined. Stored procedures are
parameterized T-SQL queries that are stored in the server itself (and not issued by
the client application as is the case with general queries). Stored procedures can
accept values sent by the client as input parameters, and send back results as output
parameters. They can call defined functions, and other stored procedures, including
the same stored procedure (up to a set number of times). Access to them can be
granted selectively. Unlike other queries, stored procedures have an associated
name, which is used at runtime to resolve into the actual queries. Also because the
code need not be sent from the client every time (as it can be accessed by name), it
reduces network traffic and somewhat improves performance. Execution plans for
stored procedures are also cached as necessary.
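A minimal sketch of such a stored procedure is given below; the procedure, table and column names are assumptions made for the example only:

-- A parameterized stored procedure, stored on the server and invoked by name.
CREATE PROCEDURE GetUserType
    @UserName VARCHAR(25),           -- input parameter sent by the client
    @UserType VARCHAR(15) OUTPUT     -- output parameter returned to the client
AS
BEGIN
    SELECT @UserType = User_Type
    FROM UserDetails
    WHERE User_Name = @UserName;
END;
GO

-- Client-side invocation by name:
DECLARE @Type VARCHAR(15);
EXEC GetUserType @UserName = 'admin', @UserType = @Type OUTPUT;
SELECT @Type AS UserType;

Services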
SQL Server also includes an assortment of add-on services. While these are not
essential for the operation of the database system, they provide value added
services on top of the core database management system. These services either run
as a part of some SQL Server component or out-of-process as a Windows service,
and present their own APIs to control and interact with them.
Service Broker
Used inside an instance, it provides an asynchronous programming
environment. For cross instance applications, Service Broker communicates over
TCP/IP and allows the different components to be synchronized together, via
exchange of messages. The Service Broker, which runs as a part of the database
engine, provides a reliable messaging and message queuing platform for SQL
Server applications.
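A minimal sketch of the objects involved is shown below; the message type, contract, queue and service names are hypothetical:

-- Service Broker objects for reliable, asynchronous messaging between applications.
CREATE MESSAGE TYPE AlertMessage VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT AlertContract (AlertMessage SENT BY INITIATOR);
CREATE QUEUE AlertQueue;
CREATE SERVICE AlertService ON QUEUE AlertQueue (AlertContract);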
Replication Services
SQL Server Replication Services are used by SQL Server to replicate and
synchronize database objects, either in entirety or a subset of the objects present,
across replication agents, which might be other database servers across the
network, or database caches on the client side. Replication follows a
publisher/subscriber model, i.e., the changes are sent out by one database server
("publisher") and are received by others ("subscribers"). SQL Server supports three
different types of replication
Transaction replication
Each transaction made to the publisher database (master database) is synced
out to subscribers, who update their databases with the transaction.
Transactional replication synchronizes databases in near real time.[61]
Merge replication
Changes made at both the publisher and subscriber databases are tracked,
and periodically the changes are synchronized bi-directionally between the
publisher and the subscribers. If the same data has been modified differently
in both the publisher and the subscriber databases, synchronization will
result in a conflict which has to be resolved - either manually or by using
pre-defined policies. A rowguid column needs to be configured on the table if merge
replication is configured.
Snapshot replication
Snapshot replication publishes a copy of the entire database (the then-current
snapshot of the data) and replicates it out to the subscribers. Further changes to
the snapshot are not tracked.
Analysis Services
SQL Server Analysis Services adds OLAP and data mining capabilities for SQL
Server databases. The OLAP engine supports MOLAP, ROLAP and HOLAP
storage modes for data. Analysis Services supports the XML for Analysis standard
as the underlying communication protocol. The cube data can be accessed using
MDX and LINQ queries. Data mining specific functionality is exposed via the
DMX query language. Analysis Services includes various algorithms - Decision
trees, clustering algorithm, Naive Bayes algorithm, time series analysis, sequence
clustering algorithm, linear and logistic regression analysis, and neural networks -
for use in data mining.
Reporting Services
SQL Server Reporting Services is a report generation environment for data
gathered from SQL Server databases. It is administered via a web interface.
Reporting services features a web services interface to support the development of
custom reporting applications. Reports are created as RDL files.
Reports can be designed using recent versions of Microsoft Visual Studio (Visual
Studio.NET 2003, 2005, and 2008) with Business Intelligence Development
Studio installed, or with the included Report Builder. Once created, RDL files can
be rendered in a variety of formats including Excel, PDF, CSV, XML, TIFF (and
other image formats) and HTML Web Archive.
Notification Services
Originally introduced as a post-release add-on for SQL Server 2000,[71]
Notification Services was bundled as part of the Microsoft SQL Server platform
for the first and only time with SQL Server 2005. SQL Server Notification
Services is a mechanism for generating data-driven notifications, which are sent to
Notification Services subscribers. A subscriber registers for a specific event or
transaction (which is registered on the database server as a trigger); when the event
occurs, Notification Services can use one of three methods to send a message to the
subscriber informing about the occurrence of the event. These methods include
SMTP, SOAP, or writing to a file in the filesystem. Notification Services was
discontinued by Microsoft with the release of SQL Server 2008 in August 2008,
and is no longer an officially supported component of the SQL Server database
platform.
Integration Services
SQL Server Integration Services is used to integrate data from different data
sources. It is used for the ETL capabilities for SQL Server for data warehousing
needs. Integration Services includes GUI tools to build data extraction workflows
that integrate various functions such as extracting data from various sources,
querying data, transforming data (including aggregating, de-duplicating and
merging data), and then loading the transformed data into other destinations, or
sending e-mails detailing the status of the operation as defined by the user.
Full Text Search Service
SQL Server Full Text Search service is a specialized indexing and querying service
for unstructured text stored in SQL Server databases. The full text search index can
be created on any column with character based text data. It allows for words to be
searched for in the text columns. While it can be performed with the SQL LIKE
operator, using SQL Server Full Text Search service can be more efficient. Full
Text Search allows for inexact matching of the source string, indicated by a Rank
value which can range from 0 to 1000; a higher rank means a more accurate match. It also
allows linguistic matching ("inflectional search"), i.e., linguistic variants of a word
(such as a verb in a different tense) will also be a match for a given word (but with
a lower rank than an exact match). Proximity searches are also supported, i.e., if
the words searched for do not occur in the sequence they are specified in the query
but are near each other, they are also considered a match. T-SQL exposes special
operators that can be used to access the FTS capabilities.
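For example, assuming a full-text index already exists on an illustrative Documents(Body) column, the CONTAINS predicate and the FREETEXTTABLE rowset function expose inflectional and ranked matching:

-- Inflectional search: matches attack, attacks, attacked, attacking, ...
SELECT DocumentId, Title
FROM Documents
WHERE CONTAINS(Body, 'FORMSOF(INFLECTIONAL, attack)');

-- Ranked, inexact matching; RANK is returned by the full text engine (0 to 1000).
SELECT d.DocumentId, d.Title, k.RANK
FROM FREETEXTTABLE(Documents, Body, 'denial of service') AS k
JOIN Documents AS d ON d.DocumentId = k.[KEY]
ORDER BY k.RANK DESC;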
The Full Text Search engine is divided into two processes - the Filter Daemon
process (msftefd.exe) and the Search process (msftesql.exe). These processes
interact with the SQL Server. The Search process includes the indexer (that creates
the full text indexes) and the full text query processor. The indexer scans through
text columns in the database. It can also index through binary columns, and use
iFilters to extract meaningful text from the binary blob (for example, when a
Microsoft Word document is stored as an unstructured binary file in a database).
The iFilters are hosted by the Filter Daemon process. Once the text is extracted, the
Filter Daemon process breaks it up into a sequence of words and hands it over to
the indexer. The indexer filters out noise words, i.e., words like A, And etc., which
occur frequently and are not useful for search. With the remaining words, an
inverted index is created, associating each word with the columns they were found
in. SQL Server itself includes a Gatherer component that monitors changes to
tables and invokes the indexer in case of updates.[78]
When a full text query is received by the SQL Server query processor, it is handed
over to the FTS query processor in the Search process. The FTS query processor
breaks up the query into the constituent words, filters out the noise words, and uses
an inbuilt thesaurus to find out the linguistic variants for each word. The words are
then queried against the inverted index and a rank of their accurateness is
computed. The results are returned to the client via the SQL Server process.
SQLCMD
SQLCMD is a command line application that comes with Microsoft SQL Server,
and exposes the management features of SQL Server. It allows SQL queries to be
written and executed from the command prompt. It can also act as a scripting
language to create and run a set of SQL statements as a script. Such scripts are
stored as a .sql file, and are used either for management of databases or to create
the database schema during the deployment of a database.
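As a small illustration (the file, database and table names are hypothetical), a script saved as check_users.sql could contain:

-- check_users.sql : a trivial management script executed from the command prompt.
SET NOCOUNT ON;
SELECT @@VERSION AS ServerVersion;
SELECT COUNT(*) AS UserCount FROM UserDetails;
GO
-- It could then be run, for example, with:
--   sqlcmd -S localhost -d SampleDb -i check_users.sql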
SQLCMD was introduced with SQL Server 2005 and continues with SQL
Server 2008. Its predecessors in earlier versions were OSQL and ISQL, which are
functionally equivalent as far as T-SQL execution is concerned, and many of the
command line parameters are identical, although SQLCMD adds extra versatility.
Visual Studio
Microsoft Visual Studio includes native support for data programming with
Microsoft SQL Server. It can be used to write and debug code to be executed by
SQL CLR. It also includes a data designer that can be used to graphically create,
view or edit database schemas. Queries can be created either visually or using
code. From SSMS 2008 onwards, IntelliSense for SQL queries is provided as well.
SQL Server Management Studio
SQL Server Management Studio is a GUI tool included with SQL Server 2005 and
later for configuring, managing, and administering all components within
Microsoft SQL Server. The tool includes both script editors and graphical tools
that work with objects and features of the server.[79] SQL Server Management
Studio replaces Enterprise Manager as the primary management interface for
Microsoft SQL Server since SQL Server 2005. A version of SQL Server
Management Studio is also available for SQL Server Express Edition, for which it
is known as SQL Server Management Studio Express (SSMSE).[80]
A central feature of SQL Server Management Studio is the Object Explorer, which
allows the user to browse, select, and act upon any of the objects within the server.
[81] It can be used to visually observe and analyze query plans and optimize the
database performance, among other things. SQL Server Management Studio can also be
used to create a new database, alter any existing database schema by adding or
modifying tables and indexes, or analyze performance. It includes the query
windows which provide a GUI based interface to write and execute queries.
Business Intelligence Development Studio
Business Intelligence Development Studio (BIDS) is the IDE from Microsoft used
for developing data analysis and Business Intelligence solutions utilizing the
Microsoft SQL Server Analysis Services, Reporting Services and Integration
Services. It is based on the Microsoft Visual Studio development environment but
is customized with the SQL Server services-specific extensions and project types,
including tools, controls and projects for reports (using Reporting Services), Cubes
and data mining structures (using Analysis Services).
Microsoft SQL Server is a relational database management system developed by
Microsoft. As a database, it is a software product whose primary function is to
store and retrieve data as requested by other software applications, be it those on
the same computer or those running on another computer across a network
(including the Internet). There are at least a dozen different editions of Microsoft…
4. SYSTEM ANALYSIS
4.1 EXISTING SYSTEM:
Previous work focused on extracting DDoS attack features, and then detecting and
filtering DDoS attack packets by the known features. However, these methods
cannot actively detect DDoS attacks. Flash crowds are unexpected, but legitimate,
dramatic surges of access to a server, such as during breaking news. The work of
discriminating DDoS attacks from flash crowds has been explored for around a
decade.
4.2 PROPOSED SYSTEM:
The current most popular defence against flash crowd attacks is the use of
graphical puzzles to differentiate between humans and bots. This method involves
human responses and can be annoying to users. Bots are caught by honeypots and
analyzed thoroughly via reverse engineering techniques. The proposed algorithm
works independently of specific DDoS flooding attack genres. Therefore, it is
effective against unknown forthcoming flooding attacks. The proposed correlation
coefficient-based method is delay proof. This property is very effective against
explicit random delay insertion among attack flows. We used the flow correlation
coefficient as a metric to measure the similarity among suspicious flows to
differentiate DDoS attacks from genuine flash crowds.
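As a sketch of the metric (assuming the standard Pearson definition; the exact flow sampling details of the referenced method are not reproduced here), for two suspicious flows sampled as sequences X = (x_1, ..., x_n) and Y = (y_1, ..., y_n) the flow correlation coefficient is

r(X, Y) = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2 \, \sum_{i=1}^{n}(y_i - \bar{y})^2}}

Flow pairs whose coefficient is close to 1 are treated as highly similar, and therefore likely to be bot-generated attack flows, whereas flash crowd flows coming from independent human users are expected to show lower pairwise correlation.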
4.3 FEASIBILITY STUDY
The feasibility study has three aspects. They are:
4.3.1 Technical Feasibility
4.3.2 Economic Feasibility
4.3.3 Operational Feasibility
4.3.1 Technical Feasibility
The Technical Feasibility Study assesses the details of how you will deliver
a product or service (i.e., materials, labor, transportation, where your business will
be located, technology needed, etc.). Think of the technical feasibility study as the
logistical or tactical plan of how your business will produce, store, deliver, and
track its products or services.
A technical feasibility study is an excellent tool for trouble-shooting and long-term
planning. In some regards it serves as a flow chart of how your products and
services evolve and move through your business to physically reach your market.
4.3.2 Economic Feasibility
Feasibility studies are crucial during the early development of any project and form
a vital component in the business development process. Accounting and Advisory
feasibility studies enable organizations to assess the viability, cost and benefits of
projects before financial resources are allocated. They also provide independent
project assessment and enhance project credibility.
Built on the information provided in the feasibility study, a business case is used to
convince the audience that a particular project should be implemented. It is often a
prerequisite for any funding approval. The business case will detail the reasons
why a particular project should be prioritized higher than others. It will also sum
up the strengths, weaknesses and validity of assumptions as well as assessing the
financial and non-financial costs and benefits underlying preferred options.
4.3.3 Operational Feasibility
Operational Feasibility: under this category of service we conduct a study to
analyze and determine whether the business need can be fulfilled by using the
proposed solution. The result of our operational feasibility study will clearly
outline whether the proposed solution is operationally workable and conveniently
solves the problems under consideration once the proposal is implemented. This is
sometimes referred to as a 'Feasibility Evaluation'. We would precisely describe
how the system will interact with the systems and persons around it. Our
feasibility report would provide results of interest to all stakeholders.
5. MODULE DESCRIPTION
Command Authentication
Compared with a client, because the attackers in the proposed system do not receive
commands from predefined places, it is especially important to implement strong
command authentication. A standard automatic authentication would be sufficient. A
master generates a pair of log files for monitoring the DDoS attack. There is no need
for key distribution because the public key is hard-coded in the key generation
program. Later, the command messages sent from the master can be digitally signed
with the private key to ensure their authentication and integrity automatically. This
automatic authentication could also be readily deployed by the current server, so
DDoS hacking is not a major issue.
Individualized Log File
In the proposed system, the server randomly generates its symmetric log key. Only
with the help of the same key will the client be able to decrypt the contents.
This individualized log file guarantees that even if defenders capture each and every
attack on a real-time transaction, they will not be able to restrict the attack until and
unless the detection and prevention details are known. Thus the individualized log
file does not let the systems be compromised.
Networks
A Feistel network is a general method of transforming any function (usually called an
F-function) into a permutation. It was invented by Horst Feistel and has been used in
many block cipher designs. The working of a Feistel network is given below:
Split each block into halves.
The right half becomes the new left half.
The new right half is the result of XOR'ing the left half with the output of applying F
to the right half and the round key.
Note that previous rounds can be recovered (each round is reversible) even if the
function F itself is not invertible.
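In standard notation (a conventional formulation, not taken from the original text), round i of a Feistel network computes

L_{i+1} = R_i,  R_{i+1} = L_i \oplus F(R_i, K_i)

where K_i is the round key and \oplus denotes XOR. Decryption applies the same rounds with the round keys in reverse order, which is why F itself need not be invertible.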
6. SYSTEM DESIGN
6.1 DATABASE DESIGN
TABLE 6.1.1 User Details
S. No    FIELD        DATA TYPE    SIZE
1        User_Name    String       25
2        Password     String       25
3        User_Type    String       15
4        Date         Date/Time    6
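A hedged sketch of the corresponding table definition in T-SQL is given below; the column sizes follow Table 6.1.1, while the table name, the DATETIME mapping and the constraint choices are assumptions:

-- User Details table as described in Table 6.1.1 (constraints are illustrative).
CREATE TABLE UserDetails (
    User_Name  VARCHAR(25) NOT NULL,
    Password   VARCHAR(25) NOT NULL,
    User_Type  VARCHAR(15) NOT NULL,
    [Date]     DATETIME    NOT NULL
);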
6.2 DATA FLOW DIAGRAM
Level 0
[Data flow diagram, Level 0: the master sends files through the Command and
Control (C&C) server, which is easily affected by hackers; the master takes control
over all peers; implementing a hybrid peer-to-peer botnet makes it harder to be shut
down, monitored, and hacked.]
Level 1
[Data flow diagram, Level 1: the master monitors the unauthorized and compromised
systems; implementing a hybrid peer-to-peer botnet makes it harder to be shut down,
monitored, and hijacked by the botnet.]
7. SOFTWARE TESTING
7.1 TESTING
INTRODUCTION:
After finishing the development of any computer-based application, the next
complicated, time-consuming process is testing. It is only during testing that the
development company knows how far the user requirements have been met.
The testing methods applied to our project are as follows.
7.2.1 Unit Testing
7.2.2 Integration Testing
7.2.3 Validation Testing
7.2.4 Output Testing
7.2.5 Acceptance Testing
7.2 TESTING METHODOLOGY
7.2.1 UNIT TESTING:
In unit testing, each module is tested individually. It focuses the verification effort
on the smallest unit of software in the module; this is also known as module testing.
In this project all the modules were tested separately, with testing performed at the
programming stage itself. Each module was found to work satisfactorily with regard
to the expected output, and the field-level validations were checked. Unit testing is
repeated whenever a patch is delivered for a module, which keeps the project up to
standard.
Unit testing comprises the set of tests performed by an individual programmer prior
to integration of the unit into a larger system. A program unit is usually small enough
that the programmer who developed it can test it in great detail. In unit testing,
modules that are independent of one another are tested; the test data should exercise
each condition and option, which helps to find errors in coding and logic.
7.2.2 INTEGRATION TESTING:
Integration testing is the phase of software testing in which individual
software modules are combined and tested as a group. In this project, the parts of
each module were tested first and then tested together with the others; integration
testing was completed successfully.
7.2.3 VALIDATION TESTING:
It begins after integration testing has been successfully completed. Validation
succeeds when the software functions in a manner that can be reasonably accepted
by the client. In this project, the majority of the validation is done during the data
entry operation where there is a maximum possibility of entering wrong data.
Other validation will be performed in all process where correct details and data
should be entered to get the required results.
7.2.4 OUTPUT TESTING:
After performing the validation testing, the next step is output testing of the
proposed system since no application would be termed as useful until it produces
the required output in the specified format.
7.2.5 ACCEPTANCE TESTING:
User Acceptance Testing is the key factor for the success of any application.
The project under consideration was tested for user acceptance by constantly keeping
in touch with prospective system users during development and making
changes whenever required.
All the above-mentioned testing techniques were completed successfully for our
project.
8. CONCLUSION
Both flash crowds and DDoS (Distributed Denial-of-Service) attacks have very similar
properties in terms of Internet traffic; however, flash crowds are legitimate flows while
DDoS attacks are illegitimate flows, and DDoS attacks have been a serious threat to
Internet security and stability. In this paper we propose a set of novel methods using
probability metrics to distinguish DDoS attacks from flash crowds effectively, and our
simulations show that the proposed methods work well. In particular, these methods
can not only distinguish DDoS attacks from flash crowds clearly, but can also
distinguish whether an anomalous flow is a DDoS attack flow or a flash crowd flow as
opposed to normal network flow. Furthermore, we show that our proposed hybrid
probability metrics can greatly reduce both false positive and false negative rates in
detection. In this paper we also surveyed four different techniques to differentiate
DDoS attacks from flash crowds. Among these techniques, "Discriminating DDoS
Attacks from Flash Crowds Using Flow Correlation Coefficient" shows the best results.
APPENDIX
a) Screen Layout
[Screen shot: Login Form]
9. BIBLIOGRAPHY