Bomb Detection Robotics Using Embedded Controller


Abstract

The robot is sent to the suspected location to detect the bomb. An operator controls the system from a personal computer over a wireless RF link, guided by live video transmitted from the robot. When the robot senses a bomb, it sounds a buzzer. Because contact with destructive metals can lead to severe damage, the project includes a metal-detection circuit built around proximity sensors: when a ferrous material comes close to a sensor, the sensor detects it and produces an output signal.

2. Blind navigation system using RFID for indoor environments

Abstract

A location and tracking system is becoming very important to our future world of pervasive computing, where information is all around us. Location is one of the most needed pieces of information for emerging and future applications. Since public use of GPS satellites was allowed, several state-of-the-art devices have become part of our lives, e.g. car navigators and mobile phones with built-in GPS receivers. However, location information for indoor environments is still very limited. Several techniques have been proposed to obtain location information inside buildings, such as radio signal triangulation, radio signal (beacon) emitters, or signal fingerprinting.

Using Radio Frequency Identification (RFID) tags is a new way of giving location information to users. Thanks to its passive communication circuit, an RFID tag can be embedded almost anywhere without an energy source. The tag stores location information and gives it to any reader within proximity range, which can be up to 10-15 meters for UHF RFID systems. We propose an RFID-based system for navigating a building, intended for blind or visually impaired people. The system relies on the location information on the tag, the user's destination, and a routing server that computes the shortest route from the user's current location to the destination. The navigation device communicates with the routing server over GPRS networks. We built a prototype based on our design and present some results. We found two delay problems in the device: a communication delay due to the cold-start cycle of the GPRS modem, and a voice delay due to the file-transfer delay from the MMC module.
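The routing server's shortest-route computation can be sketched with Dijkstra's algorithm over a graph of tagged locations. The floor-plan graph, node names, and corridor distances below are hypothetical illustrations, not part of the original system:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's shortest path over a corridor graph.
    graph: {node: [(neighbor, distance_m), ...]}
    Returns (total_distance, [node, ...]) or (inf, []) if unreachable."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            # Walk the predecessor chain back to the start.
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return float("inf"), []

# Hypothetical floor plan: RFID tag IDs as nodes, corridor lengths in metres.
floor = {
    "entrance": [("lobby", 5.0)],
    "lobby": [("entrance", 5.0), ("room101", 8.0), ("room102", 12.0)],
    "room101": [("lobby", 8.0), ("room102", 6.0)],
    "room102": [("lobby", 12.0), ("room101", 6.0)],
}
```

A server like this only needs the tag ID read at the user's current position and the chosen destination to return a turn-by-turn node sequence.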

Description: This system has three modules:
1) Blind RFID module
2) Room RFID tag unit
3) IR TX sensor for object type and range sensing

This last unit continuously transmits the IR object ID code using the RC5 protocol on a 38 kHz carrier.
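For illustration, a standard RC5 frame is 14 bits (two start bits, a toggle bit, a 5-bit address, and a 6-bit command), Manchester-encoded onto the carrier with a standard half-bit time of 889 µs. The address and command values used in the sketch below are arbitrary examples:

```python
def rc5_frame(address, command, toggle=0):
    """Build the 14-bit RC5 frame: 2 start bits, toggle bit,
    5-bit device address, 6-bit command (MSB first)."""
    assert 0 <= address < 32 and 0 <= command < 64
    bits = [1, 1, toggle & 1]
    bits += [(address >> i) & 1 for i in range(4, -1, -1)]
    bits += [(command >> i) & 1 for i in range(5, -1, -1)]
    return bits

def manchester(bits, half_bit_us=889):
    """Manchester-encode the frame: a logical 1 is low->high,
    a 0 is high->low. Returns (level, duration_us) pairs that
    gate the 38 kHz carrier on (1) or off (0)."""
    out = []
    for b in bits:
        first, second = (0, 1) if b else (1, 0)
        out.append((first, half_bit_us))
        out.append((second, half_bit_us))
    return out
```

The microcontroller then toggles the IR LED at 38 kHz during every "on" interval of the encoded pulse train.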

1) Blind RFID module
This is divided into three sub-modules:
a) RFID
b) IR RC5 protocol
c) PIR human detection

Advantages of the system:
1) All users can communicate remotely over GSM using a mobile phone
2) Data is saved by the automatic control system
3) Low communication cost
4) Low power consumption
5) Increased safety
6) Any number of people can communicate and automate
7) Easy and fast identification

Demerits:
1) The system supports only short distances.
2) It cannot sense distant objects.

PROPOSED SYSTEM:
1) The system identifies all types of objects for the blind user.
2) Low cost
3) The signal works in all places.
4) Portable
5) Wireless communication
6) It does not interfere with any other impairment the user may have.
7) The blind user does not depend on help from others.

Fire Fighting Robot

Abstract

The need for a device that can detect and extinguish a fire on its own is long past due. Many house fires originate when someone is either sleeping or not home. With the invention of such a device, people and property can be saved at a much higher rate, with relatively minimal damage caused by the fire. Our task as electrical engineers was to design and build a prototype system that could autonomously detect and extinguish a fire; it also aims at minimizing air pollution. In this project we design a fuzzy-based, microcontroller-controlled robot: a robot that can move through a model structure, find a burning oil derrick (a lit candle) and then extinguish it with the help of a blower.

This is meant to simulate the real-world operation of a robot performing a fire-extinguishing function in an oilfield. Fuzzy logic provided an appropriate solution to the otherwise complex task of mathematically deriving an exact model of the non-linear control system, upon which conventional control techniques could then be applied. The fuzzy inference system was designed to act as a PID-like controller. We use the popular 8-bit 8051-family microcontroller, and the program code that controls the fire-fighting robot is written in assembly language.

Green House Monitoring And Control

Appropriate environmental conditions are necessary for optimum plant growth, improved crop yields, and efficient use of water and other resources. Automating the acquisition of soil conditions and the various climatic parameters that govern plant growth allows information to be collected at high frequency with less labor. Existing systems employ PC- or SMS-based solutions to keep the user continuously informed of conditions inside the greenhouse, but they are unaffordable, bulky, difficult to maintain and poorly accepted by technologically unskilled workers.

The objective of this project is to design a simple, easy-to-install, microcontroller-based circuit to monitor and record the temperature, humidity, soil moisture and sunlight of the natural environment, which are continuously modified and controlled in order to optimize them for maximum plant growth and yield. The controller is a low-power, cost-efficient ATMEL chip with 8K bytes of on-chip flash memory. It communicates with the various sensor modules in real time in order to control the light, aeration and drainage processes efficiently inside the greenhouse, actuating a cooler, fogger, dripper and lights according to the needs of the crops. An integrated liquid crystal display (LCD) provides a real-time display of the data acquired from the various sensors and the status of the various devices.

Weather Station

Abstract

A weather station with pressure reading, relative humidity, and indoor and remote outdoor temperature display. This project is intended to develop the capacity of the national meteorological services by improving the observing station networks and developing human resources. The specific project components include the procurement and installation of meteorological instruments and the training of the staff required to man the stations where the equipment will be deployed. The equipment will be deployed at agro-meteorological, hydro-meteorological, and synoptic stations.

Description: This project is divided into three modules:
1) Main unit
2) Mobile unit
3) Sensor unit

Both Celsius and Fahrenheit, and both mbar/hPa and mmHg, are supported. The station has a calendar and clock, a 3-button user menu, a 42-hour history display (curve), and automatic memory and display of all high and low values.

1) Main unit: This unit contains the LCD display, keypad and alarm. It reads and controls the data and sends it out through the mobile unit.
LCD display: displays all measured data and key inputs.
Keypad: used to take input and control the output of the system.
Serial communication: used to send the data through mobile wireless protocols.

2) Mobile unit: Sends all the data to the PC using mobile communication. This unit contains the interface hardware and wireless protocols.

3) Sensing unit: This unit has three different types of sensing units:

A) Water level measurement sensing units
B) River force measurement sensing units
C) Weather measurement units

Overview of the project: This is a wireless communication project. The circuit may be powered by a small 9 V battery. Consumption for the base station is around 8 to 9 mA while active and only 2 to 3 mA in sleep mode. The receiver (base station) is active for 5 seconds and then sleeps for 45 seconds; the transmitter takes a nap every 30 seconds or so. All data is stored in EEPROM and loaded at power-up, so in case of a power failure (or when changing batteries) no data or history is lost.
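From those figures, the duty-cycled average current, and hence a rough battery life, can be estimated. The 550 mAh capacity used below is a hypothetical figure for a 9 V cell, not from the original text:

```python
def average_current_ma(active_ma, sleep_ma, active_s, sleep_s):
    """Duty-cycle-weighted mean supply current over one active/sleep period."""
    period = active_s + sleep_s
    return (active_ma * active_s + sleep_ma * sleep_s) / period

# Base station: ~8.5 mA for 5 s active, ~2.5 mA for 45 s asleep.
avg = average_current_ma(8.5, 2.5, 5, 45)   # about 3.1 mA average
life_h = 550 / avg                          # hours on a (hypothetical) 550 mAh cell
```

This is why the sleep schedule matters: without it, the same cell at a constant 8.5 mA would last roughly a third as long.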

Menu mode is entered by pushing the "menu" button for 1 second. Browsing and value changes are done with the "min" and "plus" keys; in normal mode, the "min" and "plus" keys browse through the different histories. All these controls wake up the processor if it was in sleep mode. On the left-hand side of the LCD, from top to bottom, are: outside temperature, pressure, inside temperature, relative humidity, calendar and clock. On the PC are shown the high value of the past 42 hours, a bar-graph histogram (rightmost is the most recent value), and the low value. The output of the pressure sensor is an analogue voltage, which is temperature-compensated; this analogue voltage is fed to the 10-bit ADC of the 8051 system.
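Converting the 10-bit ADC reading back to pressure is a linear rescaling. The sketch below assumes a hypothetical ratiometric sensor with an MPX4115-like transfer function (Vout = offset + sensitivity x P); the offset and sensitivity constants are illustrative, not taken from the original design:

```python
def adc_to_hpa(raw, vref=5.0, bits=10, v_offset=0.2, sens_v_per_kpa=0.045):
    """Convert a 10-bit ADC reading of a ratiometric pressure sensor
    to hectopascal. Assumed transfer: Vout = v_offset + sens * P_kPa."""
    v = raw * vref / (2 ** bits - 1)   # ADC counts -> volts
    p_kpa = (v - v_offset) / sens_v_per_kpa
    return p_kpa * 10.0                # kPa -> hPa
```

With these example constants, a count near 974 corresponds to roughly standard sea-level pressure (about 1013 hPa).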

Tanker Robo For Vision Based Surveillance System

Abstract

This project aims at designing a control system for a robot such that the unmanned vehicle is controlled using a PC and wireless RF communication. The control depends on feedback provided by the IR sensor, which is part of the object-detection circuit.

Description:

The project contains the following modules:
- Object detection and angle determination in the path using an IR sensor
- Obstacle avoidance and enemy detection
- Design of RF circuits for data transfer and camera interface

In the object-detection module, when the AT89C2051 microcontroller is powered up, the stepper motor starts sweeping through a 360-degree arc. The IR sensor is mounted on the stepper motor. When the IR sensor detects an object, the microcontroller stops the robot and the stepper motor, and the video camera mounted on the stepper motor starts transmitting pictures of the object continuously. The IR sensor consists of an IR transmitter (an LED) and a receiver that detects the object by picking up the reflected IR pulse; the carrier frequency used is 38 kHz. The detected angle is sent to the PC using RF communication, the rotation being calculated from the step count. From the PC keyboard, the speed, left and right control key values are sent through the serial port. The printer port is connected to the RF encoder circuit and RF transmitter.
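Recovering the bearing from the step count is a simple proportion. The sketch below assumes a hypothetical 200-step (1.8 degrees/step) motor; the original text does not state the step angle:

```python
def angle_from_steps(steps_taken, steps_per_rev=200):
    """Bearing of the IR sensor, in degrees, from the stepper's step count.
    steps_per_rev is an assumed motor constant (1.8 deg/step)."""
    return (steps_taken * 360.0 / steps_per_rev) % 360.0
```

The modulo keeps the reported bearing in [0, 360) even after multiple full sweeps.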

Data transmission uses 4-bit communication. The key value entered at the PC is sent over RF; the RF receiver receives it, compares it against each command value, and controls the corresponding steps. The IR subsystem handles object detection, and on detection the controller enables (fires) the laser light. The RF design uses two pairs of RF transmitter and receiver; one pair is used for communication between the AT89C2051 microcontrollers.

FUNCTIONING OVERVIEW OF THE VEHICLE CONTROLLER CIRCUITS:
The main microcontroller is connected to the object-detection circuit, moving-object-detection circuit, speed-control circuit, video camera with 360-degree angle-detection circuit, laser-control circuit and RF receiver circuit, and communicates with the PC using RF communication.

- When the vehicle moves through an area, it automatically detects objects through the IR object-detection circuit; if an object is detected, it takes another path.
- When a moving object is detected, the system decides whether it is an enemy or one of our own army personnel.
- If the object is moving and its RFID does not match, the microcontroller treats it as an enemy; if the RFID value matches, no enemy is detected and the vehicle moves on to detect other objects. The detection uses IR TX and RX at 38 kHz and 40 kHz.
- If an enemy is detected, the vehicle shoots at the moving object/enemy with a laser beam and releases sleeping gas.
- The stepper motor rotates through a 360-degree arc; when an object is detected, the angle is sent to the PC using RF communication.
- A DC motor driver controls the RPM (speed) and the forward and reverse motion of the vehicle.
- Another DC motor controls the left and right steering.
- When an object is detected, the sensor tracks it, and the motors follow wherever the object moves.
- All control data is managed from the PC using RF communication.


GSM Path Planning For Blind Person Using Ultrasonic

Abstract

This paper describes the system architecture of a navigation tool for visually impaired persons. The major parts are: a multi-sensory system (comprising stereo vision, acoustic range finding and movement sensors), a mapper, a warning system and a tactile human-machine interface. The sensory parts are described in more detail, and the first experimental results are presented.

Objectives:

About 1% of the human population is visually impaired, and among them about 10% are fully blind. One of the consequences of being visually impaired is limited mobility. For global navigation, many tools already exist; for instance, in outdoor situations, handheld GPS systems for the blind are now available. These tools do not help with local navigation: local path planning and collision avoidance. The traditional tools, i.e. the guide dog and the cane, are appreciated, but they do not adequately solve the local navigation problem. Guide dogs are not employable at a large scale (the training capacity in the Netherlands is about 100 guide dogs yearly; just enough to help about 1000 users), and the cane is too restrictive.

The goal of this research is to develop a wearable tool that assists the blind to accomplish his local navigation tasks. Fig. 1 shows the architecture of the proposed tool. It consists of a sensory system controlled by the user. The primary data needed for local navigation is range data (which is not necessarily obtained from visual data alone; at this point, the type of sensors is still an open question). The mapper converts the range data into map data. The local map is the input to a warning system that transforms the map data into a form that is suitable for communication. In order to give the blind person freedom of movement, he must be able to control the focus of attention of the sensory system. For that purpose, the tool must be provided with a man-machine interface.

The ultimate goal of this project is to provide an electronic tool for the local navigation task of the blind. The tool must provide information about the direct surroundings of the blind to enable him to move around without collisions. We assume that, although mostly unknown, the environment does have some structure such as in an urban outdoor situation (e.g. a street), or in an indoor situation: smooth floors, now and then a doorstep, stairs, walls, door openings and all kind of objects that possibly obstruct the passage.

We start with three sensor types: stereo vision, optical flow, and sonar. Preliminary research has shown that other types of sensors are also of interest, e.g. ladar, radar and infrared (for detecting people and traffic). The system should be expandable so that information from these types of sensors can be integrated easily at a later stage of the project.

Functioning overview of the project:
1. Whenever the blind user wants to go to a particular place, he first sets the path through the mobile phone.
2. He carries the system wherever he wants to go.
3. When he goes out, the system communicates with his house through GSM.
4. The system communicates with bus stop/shop systems through RF communication.
5. After receiving data from the blind user's system, the unit responds by voice through headphones.
6. The same data is sent to the house by GSM.
7. Family members can monitor the blind user through a mobile phone and see which street and which area he is in.
8. The system supports not only blind users but also children.
9. The ultrasonic sensor reports the distance of each object.
10. RF provides the path name and signal identification.

Methodology of the project:
1. Ultrasonic object sensing and distance measurement at 50 kHz
2. Path-planning algorithm
3. FBus protocol (GSM)
4. RF 433 MHz communication
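The ultrasonic ranging in step 1 reduces to timing the echo and halving the round trip. A minimal sketch, assuming sound travels at roughly 343 m/s in air:

```python
def echo_distance_m(round_trip_s, speed_of_sound=343.0):
    """Range to an obstacle from an ultrasonic echo time.
    The round trip covers the distance twice (out and back),
    hence the division by two."""
    return speed_of_sound * round_trip_s / 2.0
```

So a 10 ms echo corresponds to an obstacle a little over 1.7 m away.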

Bubble Power

Abstract

In sonofusion, a piezoelectric crystal attached to a liquid-filled Pyrex flask sends pressure waves through the fluid, exciting the motion of tiny gas bubbles. The bubbles periodically grow and collapse, producing visible flashes of light. The researchers studying these light-emitting bubbles speculated that their interiors might reach such high temperatures and pressures that they could trigger fusion reactions. Tiny bubbles imploded by sound waves can make hydrogen nuclei fuse, and may one day become a revolutionary new energy source.

When a gas bubble in a liquid is excited by ultrasonic acoustic waves, it can emit short flashes of light suggestive of extreme temperatures inside the bubble. These flashes of light, known as sonoluminescence, occur as the bubble implodes, or cavitates. It has been shown that chemical reactions occur during cavitation of a single, isolated bubble, yielding photons, radicals and ions. In other words, gas bubbles in a liquid can convert sound energy into light.

Single-bubble sonoluminescence involves a single gas bubble that is trapped inside the flask by a pressure field. Loudspeakers are used to create the pressure waves, and naturally occurring gas bubbles are used. These bubbles cannot withstand excitation pressures higher than about 170 kilopascals: higher pressures would dislodge the bubble from its stable position and disperse it in the liquid. Yet a pressure at least ten times that level is necessary to implode the bubbles violently enough to trigger thermonuclear fusion. The idea of sonofusion overcomes these limitations.

Introduction of Bubble Power

Sonofusion is technically known as acoustic inertial confinement fusion. Using a bubble cluster (rather than a single bubble) is significant because, when the cluster implodes, the pressure within it may be greatly intensified. The centre of the gas bubble cluster shows a typical pressure distribution during the implosion process: due to converging shock waves within the cluster, there can be significant pressure intensification in its interior. This large local liquid pressure (P > 1000 bar) strongly compresses the interior bubbles within the cluster, leading to conditions suitable for thermonuclear fusion. Moreover, during the expansion phase of the cluster dynamics, coalescence of some interior bubbles is expected, and this leads to the implosion of fairly large interior bubbles, which produce more energetic implosions.

The apparatus consists of a cylindrical Pyrex glass flask 100 mm high and 65 mm in diameter. A lead-zirconate-titanate ceramic piezoelectric crystal in the form of a ring is attached to the flask's outer surface. The piezoelectric ring works like the loudspeakers in a sonoluminescence experiment, although it creates much stronger pressure waves. When a positive voltage is applied to the piezoelectric ring, it contracts; when the voltage is removed, it expands to its original size.

The flask is then filled with commercially available deuterated acetone (C3D6O), in which 99.9 percent of the hydrogen atoms in the acetone molecules are deuterium (the isotope of hydrogen with one proton and one neutron in its nucleus). The main reason for choosing deuterated acetone is that atoms of deuterium undergo fusion much more easily than ordinary hydrogen atoms. The deuterated fluid can also withstand significant tension (stretching) without forming unwanted bubbles, and the substance is relatively cheap, easy to work with, and not particularly hazardous.

Applications:

Thermonuclear fusion offers a new, safe, environmentally friendly way to produce electrical energy.

This technology could also result in a new class of low-cost, compact detectors for security applications that use neutrons to probe the contents of suitcases.

Devices for research that use neutrons to analyze the molecular structure of materials.

Machines that cheaply manufacture new synthetic materials and efficiently produce tritium, which is used for numerous applications ranging from medical imaging to watch dials.

A new technique to study various phenomena in cosmology, including the workings of neutron stars and black holes.

With the steady growth of world population and with economic progress in developing countries, average electricity consumption per person has increased significantly.

Therefore, seeking new sources of energy isn't just important, it is necessary. For more than half a century, thermonuclear fusion has held out the promise of cheap, clean and virtually limitless energy. Unleashed through a fusion reactor of some sort, the energy from 1 gram of deuterium, an isotope of hydrogen, would be equivalent to that produced by burning 7000 liters of gasoline. Deuterium is abundant in ocean water, and one cubic kilometer of seawater could, in principle, supply all the world's energy needs for several hundred years.
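The gasoline equivalence quoted above can be turned into more familiar units. The heating value of gasoline used below (about 34.2 MJ per liter) is a typical figure assumed here, not given in the text:

```python
GASOLINE_MJ_PER_L = 34.2   # typical lower heating value of gasoline (assumed)
litres_equivalent = 7000   # figure quoted in the text for 1 g of deuterium

energy_mj = litres_equivalent * GASOLINE_MJ_PER_L   # total energy in megajoules
energy_kwh = energy_mj / 3.6                        # 1 kWh = 3.6 MJ
```

That works out to roughly 66,000 kWh per gram of deuterium, on the order of several years of a typical household's electricity use.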

Human-Robot Interaction

Abstract

A very important aspect of developing robots capable of Human-Robot Interaction (HRI) is research into natural, human-like communication and, subsequently, the development of a research platform with multiple HRI capabilities for evaluation. Besides a flexible dialog system and speech understanding, an anthropomorphic appearance has the potential to support intuitive usage and understanding of a robot; e.g., human-like facial expressions and deictic gestures can be produced and also understood by the robot. As a consequence of our effort to create an anthropomorphic appearance and come close to a human-human interaction model for the robot, we decided to use human-like sensors only, i.e., two cameras and two microphones, in analogy to human perceptual capabilities.

Despite the challenges resulting from these limits with respect to perception, a robust attention system for tracking and interacting with multiple persons simultaneously in real time is presented. The tracking approach is sufficiently generic to work on robots with varying hardware, as long as stereo audio data and images of a video camera are available. To easily implement different interaction capabilities like deictic gestures, natural adaptive dialogs, and emotion awareness on the robot, we apply a modular integration approach utilizing XML-based data exchange. The paper focuses on our efforts to bring together different interaction concepts and perception capabilities integrated on a humanoid robot to achieve comprehending human-oriented interaction.

Introduction of Human-Robot Interaction

For face detection, a method originally developed by Viola and Jones for object detection is adopted. Their approach uses a cascade of simple rectangular features that allows a very efficient binary classification of image windows into either the face or the non-face class. This classification step is executed for different window positions and different scales to scan the complete image for faces. We apply the idea of a classification pyramid, starting with very fast but weak classifiers to reject image parts that are certainly not faces. With increasing classifier complexity, the number of remaining image parts decreases. The training of the classifiers is based on the AdaBoost algorithm, which iteratively combines weak classifiers into stronger ones until the desired level of quality is achieved.

As an extension to the frontal-view detection proposed by Viola and Jones, we additionally classify the horizontal gazing direction of faces, as shown in Fig. 4, by using four instances of the classifier pyramids described earlier, trained for faces rotated by 20°, 40°, 60°, and 80°. For classifying left- and right-turned faces, the image is mirrored at its vertical axis and the same four classifiers are applied again. The gazing direction is evaluated to activate or deactivate the speech processing, since the robot should not react to people talking to each other in front of it, but only to communication partners facing it. Subsequent to the face detection, face identification is applied to the detected image region using the eigenface method, comparing the detected face with a set of trained faces. For each detected face, the size, center coordinates, horizontal rotation, and the results of the face identification are provided at a real-time-capable frequency of about 7 Hz on an Athlon64 2 GHz desktop PC with 1 GB of RAM.
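The cascade idea described above, weak classifiers combined by AdaBoost into per-stage strong classifiers with early rejection between stages, can be sketched as follows. The toy weak learners in the usage example are illustrative; a real detector would use the rectangular Haar-like features:

```python
def strong_classify(window, weak_learners):
    """AdaBoost strong classifier: sign of the weighted vote of weak learners.
    weak_learners: [(alpha, predict_fn), ...] where predict_fn returns +1/-1."""
    score = sum(alpha * predict(window) for alpha, predict in weak_learners)
    return 1 if score >= 0 else -1

def cascade_classify(window, stages):
    """Viola-Jones-style attentional cascade: a window must pass every
    stage to count as a face; any rejection stops evaluation early,
    which is what keeps full-image scanning fast."""
    for stage in stages:
        if strong_classify(window, stage) < 0:
            return False
    return True
```

Early stages use few weak learners and reject most background windows cheaply; later stages spend more computation only on the few windows that survive.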

Voice Detection: As mentioned before, the limited field of view of the cameras demands alternative detection and tracking methods. Motivated by human perception, sound localization is applied to direct the robot's attention. The integrated speaker localization (SPLOC) realizes both the detection of possible communication partners outside the camera's field of view and the estimation of whether a person found by face detection is currently speaking. The program continuously captures the audio data from the two microphones.

To estimate the relative direction of one or more sound sources in front of the robot, the direction of the sound toward the microphones is considered. Depending on the position of a sound source in front of the robot, the run-time difference t results from the run times tr and tl to the right and left microphones. SPLOC compares the recorded audio signals of the left and right microphones, using a fixed number of samples for a cross-power spectrum phase (CSP) to calculate the temporal shift between the signals. Taking the distance between the microphones, dmic, and a minimum range of 30 cm to a sound source into account, it is possible to estimate the direction of a signal in 2-D space. For multiple sound source detection, not only the main energy value of the CSP result is taken, but also all values exceeding an adjustable threshold.
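Once the temporal shift is known, the far-field direction follows from simple trigonometry. The 20 cm microphone spacing below is a hypothetical value; the text does not state dmic:

```python
import math

def doa_degrees(delay_s, mic_spacing_m=0.2, speed_of_sound=343.0):
    """Direction of arrival from the inter-microphone time delay.
    Far-field approximation: the path difference is delay * c, and
    sin(angle) = path_difference / mic_spacing. 0 deg = straight ahead;
    positive delay means the sound reached one microphone first."""
    path_diff = delay_s * speed_of_sound
    ratio = max(-1.0, min(1.0, path_diff / mic_spacing_m))  # clamp for safety
    return math.degrees(math.asin(ratio))
```

For example, a delay whose path difference equals half the spacing corresponds to a source 30 degrees off-axis.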

In 3-D space, the distance and height of a sound source are needed for an exact detection.

This information can be obtained from the face detection when SPLOC is used to check whether a found person is speaking. For coarsely detecting communication partners outside the field of view, standard values are used that are sufficiently accurate to align the camera properly and bring the person hypothesis into the field of view. The position of a sound source (a speaker's mouth) is assumed to be at a height of 160 cm for an average adult; the standard distance is adjusted to 110 cm, as observed during interactions with naive users.
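Those standard values determine the camera alignment directly. The sketch below computes the tilt angle needed to centre the assumed mouth position; the 1.30 m camera height is a hypothetical robot dimension, not from the text:

```python
import math

def tilt_degrees(source_height_m, camera_height_m, distance_m):
    """Camera tilt (degrees, positive = upward) needed to centre
    a sound source at the given height and horizontal distance."""
    return math.degrees(math.atan2(source_height_m - camera_height_m,
                                   distance_m))

# Standard values from the text: mouth at 1.60 m, speaker at 1.10 m distance;
# the 1.30 m camera height is an assumed robot dimension.
tilt = tilt_degrees(1.60, 1.30, 1.10)   # roughly 15 degrees upward
```

If the face detector later refines the height and distance, the same formula re-aligns the camera with the measured values.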