
Robot Project Documentation


University of Alexandria

Faculty of Engineering

Computer Science and Automatic Control Department

Graduation Project

Academic Year 2005 / 2006

Exploration and Map-Building

Using a Mobile Robot

Ahmed Mohamed El-Sayed Hassan

Ayman Mohammed Abdel-Hameed

Mohamed Abd El-Rahman Al-Khazendar

Mohamed Amir Mansour Yousef

Supervisor: Prof. Dr Mohamed Salah El-Deen Selim


To our families, who supported us through all the logical and non-logical

actions we took in the previous four years..

To all the people who gave us material and/or immaterial help..


Preface

This document is organized into 9 chapters, each describing a certain part of the project.

Chapter 1 Gives an overview of the whole system, the required specifications of the hardware, and the general architecture of the system.

Chapter 2 Gives a detailed view of the components of the robot. It illustrates the theory of operation of the different sensors used.

Chapter 3 Explains robot motion and how to calibrate it.

Chapter 4 Illustrates how the communication between the computer and the robot is accomplished. It shows the details of the software and hardware required for the communication.

Chapter 5 Explains the selected methods of exploration and map-building. It also shows the results of simulating these methods.

Chapter 6 Shows the experiments performed to test the system.

Chapter 7 Shows the analysis of the different sources of errors in measurements.

Chapter 8 Gives a proposed approach to extending the capabilities of the robot. It also shows the progress status of implementing this approach.

Chapter 9 Gives a short summary of the project. It also presents the proposed future work ideas.


Contents

1 System Overview
1.1 Abstract
1.2 Introduction
1.3 System Architecture
1.4 Required Computer Specifications
1.5 Robot Hardware
2 Robot Components
2.1 Hardware Components at a Glance
2.2 Basic Stamp Module
2.2.1 Memory organization
2.3 Sensors
2.3.1 Infrared Headlights
2.3.2 Ultrasonic Range Finder
2.3.3 Digital Compass
2.3.4 Digital Encoders
2.4 Servo Motors
2.4.1 Continuous Rotation Servo Motors
2.4.2 Ultrasonic Bracketing Kit
3 Robot Motion
3.1 Motion Requirements
3.2 Motion Calibration
3.2.1 Pre-Motion Calibration
3.2.2 Coordinating Motion
3.2.3 Ramping
3.2.4 Error Detection and Correction
4 Robot-Computer Communication
4.1 Communication Environment
4.1.1 Communication Hardware Components at a Glance
4.1.2 Robot Communication Side
4.1.3 Computer Communication Side
4.2 Communication API
4.2.1 Why Using an API?
4.2.2 API Commands
4.2.3 API Implementation
5 Map Building Application
5.1 Method 1: Points Locality
5.1.1 Co-linearity Problem
5.1.2 Neighbourhood Decidability using Points Locality
5.1.3 Fitting Problem
5.1.4 Perpendicular Regression
5.2 Method 2: Occupancy Grid
5.2.1 Measurements Uncertainty
5.2.2 Probability updating over time
5.2.3 Method Implementation
5.2.4 Occupancy-Grid Advantages
5.3 Navigation
5.4 Simulation Results
6 Experiments and Results
6.1 Experiment 1
6.2 Experiment 2
7 Error Analysis
7.1 Robot Motion Errors
7.2 Sensors Measurements Errors
7.2.1 Ultrasonic Range Finder
7.2.2 Digital Compass
7.3 Application Results Errors
8 Extending Robot Capabilities
8.1 Communication Design Considerations
8.2 PIC Microcontroller Module
8.2.1 Memory organization
8.3 Communication between the Modules
8.4 Reliability in the devised protocol
8.4.1 Reliability in commands
8.4.2 Reliability in data
8.5 Progress Status
9 Summary and Future Work
9.1 Summary
9.2 Future Work
A Bluetooth Overview
A.1 Introduction
A.2 Bluetooth Protocol Stack
A.3 Bluetooth Profiles
A.4 Security in Bluetooth
B Serial Communication Interface (USART)
B.1 Synchronous Serial Transmission
B.2 Asynchronous Serial Transmission
B.3 Other UART Functions
B.4 Bits, Baud and Symbols
B.5 Flow Control
C Servo Motors
C.1 What is a Servo?
C.2 How Servo Works?
C.3 Modifying Servo for Continuous Rotation
D Perpendicular Regression
E Occupancy Grid Formula Proof
E.1 Integration over Time
E.2 Proof
F Examples on API
F.1 Example 1: Motion in a Square
F.2 Example 2: Motion in an Isosceles Triangle
Bibliography


List of Figures

1.1 An example of a simple environment
1.2 An example of output of the points locality method
1.3 An example of output of the occupancy grid method
1.4 System Architecture
1.5 Board of Education
1.6 BOE-BOT Robot Components
1.7 Robot Components Schematic Diagram
2.1 BASIC Stamp Module, Model: BS2pe
2.2 Infrared Transmitter and Receiver
2.3 PING)))TM Ultrasonic Range Finder
2.4 Hitachi HM55B Compass Module
2.5 Digital Encoder
2.6 Parallax Standard Servo Motor
2.7 Using Infrared in detecting objects
2.8 Ping Signals
2.9 Calculating Angle using Compass
2.10 Pulse Width Control
3.1 Using Digital Encoder to calculate distance
3.2 Centering Servo Motor
3.3 Ramping wheel velocity curve
3.4 Error detection in robot motion
4.1 EmbeddedBlue eb500 Bluetooth Module
4.2 D-Link DBT-122 Bluetooth USB Adapter
4.3 The UML of the API Implementation at the Computer Side
5.1 The solid points belong to the line segment; the non-solid points are far points that do not belong to it
5.2 An example of Perpendicular fitting
5.3 An example of the corners problem in the points locality method
5.4 An example of the curves problem in the points locality method
5.5 Ultrasonic Sensor Range Regions
5.6 Calculating Probability From Distance and Angle Between Sensor and Cell
5.7 Simulation Result for Occupancy Grid Method
5.8 Simulation Result for Points Locality Method
6.1 Generated Maps for Experiment 1 using Occupancy Grid Method
6.2 Generated Maps for Experiment 1 using Points Locality Method
6.3 Actual Map for Experiment 2
6.4 Generated Maps for Experiment 2 using Occupancy Grid Method
6.5 Generated Maps for Experiment 2 using Points Locality Method
7.1 Error in Robot Motion
7.2 Error in Ultrasonic Sensor Readings
7.3 Experiment 1: Error in Walls Dimensions
7.4 Experiment 1: Error in Angles Between Walls
8.1 Computer issues a move command
8.2 Computer issues a stop command but the microcontroller isn't listening
8.3 Limit the commands to fully predefined tasks
8.4 The Stamp forwards the commands to another microcontroller that supports interrupts
8.5 PIC microcontroller, Model: PIC 16F628A
8.6 Stamp-PIC Communication Scheme
8.7 PIC microcontroller Board
A.1 Bluetooth Protocol Stack
C.1 A Futaba S-148 Servo Motor
C.2 Servo Motor Components
C.3 Servo Motor Pulse Code
D.1 Fitting a line segment to a set of points
F.1 API Example 1: Motion in a Square
F.2 API Example 2: Motion in an Isosceles Triangle


List of Tables

1.1 PIN Assignment for Basic Stamp Module
2.1 Standard servo pulse width for different Ping angles
3.1 Motors pulse width for motion in a straight line with different speeds
7.1 Error in Robot Motion
7.2 Error in Ultrasonic Sensor Readings
7.3 Experiment 1: Error in Walls Dimensions
7.4 Experiment 1: Error in Angles Between Walls
8.1 Code words for PIC commands


Chapter 1

System Overview

1.1 Abstract

This project addresses the problem of map-building for small-scale indoor environments using a mobile robot. The robot collects data using a range-finder sensor while an application runs on a remote station that has a wireless connection to the mobile robot; the application's task is to control the mobile robot and to construct the map from the received data using an appropriate method.

1.2 Introductoin

Building maps of indoor environments is a pivotal problem in mobile robotics. The problem of map building is the problem of determining the location of entities of interest, such as landmarks and obstacles, often relative to a global frame of reference (such as a Cartesian coordinate frame). The mapping problem is often referred to as the concurrent mapping and localization problem.

Distance is measured relative to the position of the mobile robot. Therefore, the more accurately the location of the robot is determined, the more accurately the locations of the surrounding sensed objects are determined, and the more accurately the map is constructed. A hardware calibration has been carefully performed to obtain more accurate motion and more accurate sensor measurements.

Because the processing unit on the mobile robot does not provide enough processing power for the data processing and map construction, a more powerful remote processing unit is used: in this project, a PC.

Because the robot must be mobile and remotely controlled, wireless communication is used to exchange data between the mobile robot and the PC. The PC runs an application responsible for controlling the behaviour of the mobile robot and constructing the map from the data received from the robot. Bluetooth was picked for the wireless communication. An API has been developed to be the interface between the PC application and the mobile robot. This interface is composed of the commands which control and guide the mobile robot through the execution of its tasks. The data and commands are sent over the Bluetooth connection according to a simple protocol which provides synchronization and data reliability.

The map is extracted from the received data according to one of two methods. The first is the Points Locality method. It uses a line-based representation of the map, i.e. the map is represented as a set of line segments fitted to sets of two-dimensional range-scan data acquired from multiple positions. The second method is the Occupancy Grid method. It represents the environment as a two-dimensional grid whose cells each have a probability of being occupied by objects. Figure 1.2 and Figure 1.3 show the results of applying the two methods when exploring the area shown in Figure 1.1.

Figure 1.1: An example of simple environment


Figure 1.2: An example of output of the points locality method

Figure 1.3: An example of output of the occupancy grid method
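As a rough sketch of the occupancy-grid idea described above (not the report's implementation; the sensor-model probability used here is an assumed placeholder), a single cell's occupancy probability can be updated with Bayes' rule after each reading:

```python
# Bayes update of one occupancy-grid cell. p_occ_given_reading is the
# (assumed) probability that the cell is occupied given the sensor reading.

def update_cell(prior: float, p_occ_given_reading: float) -> float:
    """Return the posterior occupancy probability of the cell."""
    num = p_occ_given_reading * prior
    den = num + (1.0 - p_occ_given_reading) * (1.0 - prior)
    return num / den

p = 0.5                 # start from "unknown"
for _ in range(3):      # three readings that each suggest "occupied"
    p = update_cell(p, 0.7)
print(round(p, 3))      # -> 0.927: repeated evidence sharpens the estimate
```

Repeated consistent readings drive the probability toward 0 or 1, which is what the occupancy-grid method exploits when integrating measurements over time.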

1.3 System Architecture

The proposed system consists of three independent entities: the application entity, the API entity and the robot hardware entity. Each entity has its own design which does not depend on the other entities. This approach provides extendibility and changeability of each part of the system: because the commands are sent according to a predefined API, changing the application will not affect the program on the robot. On the other hand, changing some part of the robot, or even replacing the robot with another one with the same API, will not affect the application. Figure 1.4 shows these entities.

Figure 1.4: System Architecture

Physically, the system consists of three parts: the computer side, the robot side, and the communication between them. The computer side contains both the application and the API. The robot side consists of the robot's physical components and the software modules that control those components. The communication between the robot and the computer is accomplished over a Bluetooth connection: a Bluetooth dongle is connected to the computer, and a Bluetooth module is connected to the robot.

1.4 Required Computer Specifications

• Processor: Pentium M 1.6GHz Centrino

• Memory: 512MB

• Platform: Windows XP, .NET framework 2

• Connectivity: Bluetooth connection

1.5 Robot Hardware

The Board of Education Rev C carrier board for BASIC Stamp 24-pin microcontroller modules is used. The Board of Education makes it easy to connect a power supply and serial cable to the BASIC Stamp module. Its major components and their functions are indicated by labels in Figure 1.5.

The BOE-BOT robot kit provided by Parallax has been used. Below is a list of all components fixed on the BOE-BOT board. This section shows how to connect them; details of each component come later.

Figure 1.5: Board of Education

• 6-9 Volt battery (regulated to 5 Volt)

• BS2pe Basic Stamp module

• 2 Continuous rotation servo motors (one for each side to rotate wheels)

• 2 Digital Encoders (one for each wheel)

• 2 Infrared headlights modules

• PING)))TM ultrasonic range finder

• Standard servo motor (to rotate PING)

• Hitachi HM55B Digital Compass Module

• EmbeddedBlue eb500 Bluetooth Module

Figure 1.6 shows the BOE-BOT with all components fixed on it. A schematic diagram of the component connections is shown in Figure 1.7. The pin assignment for the BASIC Stamp is shown in Table 1.1.


Figure 1.6: BOE-BOT Robot Components

Figure 1.7: Robot Components Schematic Diagram


I/O PIN number Device PIN

0 Bluetooth INPUT

1 Bluetooth OUTPUT

2 Infrared Leds (in both sides)

3 Left Infrared detector

4 Right Infrared detector

5 Bluetooth Status

6 Bluetooth Mode

7 Compass Enable

8 Compass Clock

9 Compass INPUT/OUTPUT

10 Right Digital Encoder

11 Left Digital Encoder

12 Right Continuous Rotation Servo

13 Left Continuous Rotation Servo

14 PING Standard Servo

15 PING Signal

Table 1.1: PIN Assignment for Basic Stamp Module


Chapter 2

Robot Components

2.1 Hardware Components at a Glance

BASIC Stamp, Figure 2.1, which is produced by Parallax Company, is the

microcontroller module used to control the other robot components.

Four types of sensors are fixed on the robot board: infrared headlights, an ultrasonic range finder, a digital compass, and digital encoders.

Infrared headlights, Figure 2.2, are used to detect objects but cannot determine the exact distance to an object. They are important in many applications such as obstacle avoidance and roaming. The ultrasonic range finder is used mainly to detect obstacles and measure how far away they are; the Parallax PING)))TM Ultrasonic Range Finder, Figure 2.3, has been used. The third type of sensor is the digital compass, which is used to determine the robot's direction with an accuracy of around five degrees; the Hitachi HM55B Compass Module, Figure 2.4, has been used. The last type of sensor is the digital encoder, Figure 2.5. A digital encoder is a reflective sensor used to detect objects that are very close to the sensor; it is mainly used to control robot motion.

The ultrasonic sensor is mounted on a Bracketing Kit produced by Parallax. The main part of this kit is the standard servo motor, Figure 2.6, which is used to rotate the sensor through 180 degrees. The other type of servo motor is the continuous rotation servo motor, which is used to rotate the robot's wheels.


Figure 2.1: BASIC Stamp Module, Model: BS2pe

Figure 2.2: Infrared Transmitter and Receiver

2.2 Basic Stamp Module

In the project, the BS2pe packaged in a 24-pin DIP1 is used. The module may be programmed using the PBASIC2 language. The main specifications of this module are:

Microcontroller: Ubicom SX48AC
Clock Speed: 8 MHz Turbo3
Program Execution Speed: 6000 instructions/sec4
RAM Size: 38 Bytes (12 I/O, 26 Variable)
Scratch Pad RAM: 128 Bytes

1 DIP stands for "Dual In-line Package": a typical IC with two rows of legs parallel to one another.
2 Parallax Basic: All-Purpose Simple Instruction Code.
3 The instruction execution time is derived by dividing the oscillator frequency by either one (turbo mode) or four (non-turbo mode).
4 The exact number depends on the instructions executed and the number of parameters to each.


Figure 2.3: PING)))TM Ultrasonic Range Finder

Figure 2.4: Hitachi HM55B Compass Module

EEPROM (Program) Size: 16 × 2K Bytes (16K for source)
Number of I/O pins: 16 + 2 Dedicated Serial
PC Programming Interface: Serial (9600 baud)

2.2.1 Memory organization

EEPROM (Program Memory) This is the memory portion that holds

the program. It’s divided into 16 pages, or slots, each of size 2K Bytes.

RAM Organization The BS2pe has 38 bytes of variable RAM space, consistent with the specifications above. Of these, the first twelve bytes are reserved for input, output, and direction control of the I/O pins. The remaining 26 bytes are available for general-purpose use as variables.

Scratchpad RAM The BS2pe has some additional RAM called Scratchpad RAM: 128 bytes at locations 0-127. Scratchpad RAM can only be accessed with the GET and PUT commands5 and cannot have variable names assigned to it. The highest location in Scratchpad RAM, location 127 on the BS2pe, is read-only and always contains the number of the currently running program slot. This can be handy for programs that need to know which program slot they are running in.

5 See the GET and PUT command descriptions for more information.

Figure 2.5: Digital Encoder

Figure 2.6: Parallax Standard Servo Motor

2.3 Sensors

2.3.1 Infrared Headlights

Theory of Operation

Infrared light is used to illuminate the robot’s path and determine when the

light reflects off an object.

The infrared object detection system built on the Boe-Bot is like a car’s

headlights in several respects. When the light from a car’s headlights reflects

off obstacles, your eyes detect the obstacles and your brain processes them

and makes your body guide the car accordingly. The Boe-Bot uses infrared LEDs for headlights as shown in Figure 2.7. They emit infrared and, in some cases, the infrared reflects off objects and bounces back in the direction of the Boe-Bot. The eyes of the Boe-Bot are the infrared detectors, which send signals indicating whether or not they detect infrared reflected off an object.

Figure 2.7: Using Infrared in detecting objects

The IR detectors have built-in optical filters that allow very little light except the 980 nm infrared that we want to detect with the internal photodiode sensor. The infrared detector also has an electronic filter that only allows signals around 38.5 kHz to pass through. In other words, the detector is only looking for infrared that is flashing on and off 38,500 times per second. This prevents IR interference from common sources such as sunlight and indoor lighting. Sunlight is DC interference (0 Hz), and indoor lighting tends to flash on and off at either 100 or 120 Hz, depending on the mains power source in the region. Since 100-120 Hz is far outside the electronic filter's 38.5 kHz band-pass frequency, it is completely ignored by the IR detectors.

Using Basic Stamp to Control Infrared sensor

Below is a code snippet that uses infrared.

FREQOUT 8, 1, 38500

irDetector = IN9


In this code snippet, FREQOUT sends a 38.5 kHz signal to the IR LED connected to pin 8; the next line stores the IR detector's output, which is connected to pin 9, in a bit variable named irDetector. The IR detector's output state when it sees no IR signal is high. When the IR detector sees the 38.5 kHz signal reflected by an object, its output is low. The IR detector's output only stays low for a fraction of a millisecond after the FREQOUT command is done sending, so it is essential to store the IR detector's output in a variable immediately after the FREQOUT command.

As we have seen, each IR LED/detector pair is connected to two pins of the Basic Stamp controller, one for the light source and the other for the detector. If two sensors are used, one on each side of the robot, four Stamp pins are needed; what if three sensors are used? That becomes too much for the limited number of Stamp I/O pins. A good solution to this problem is to use one pin for all light sources; it does not matter if all sources transmit light even when only one detector is used. In the case of two sensors, the total number of Stamp pins becomes three, which saves one I/O pin.

2.3.2 Ultrasonic Range Finder

Theory of Operation

The Parallax PING ultrasonic range finder provides precise, non-contact distance measurements from about 3 cm to 3 meters. It is very easy to connect to the BASIC Stamp, requiring only one I/O pin.

The Ping sensor works by transmitting an ultrasonic burst (well above the human hearing range) and providing an output pulse that corresponds to the time required for the burst echo to return to the sensor. By measuring the echo pulse width, the distance to the target can easily be calculated.

Figure 2.8 shows how this operation is done. Under control of a host microcontroller (trigger pulse), the sensor emits a short 40 kHz (ultrasonic) burst. This burst travels through the air at about 1130 feet per second, hits an object and then bounces back to the sensor. The PING))) sensor provides an output pulse to the host that terminates when the echo is detected; hence the width of this pulse corresponds to the distance to the target.


Figure 2.8: Ping Signals

Using Basic Stamp to Control Ping

Below is a code snippet that uses the Ping sensor.

LOW Ping ’ make trigger 0-1-0

PULSOUT Ping, Trigger ’ activate sensor

PULSIN Ping, IsHigh, rawDist ’ measure echo pulse

rawDist = rawDist / 2 ’ remove return trip

rawDist = rawDist * Scale ’ convert to uS

In this code snippet, PULSOUT generates the trigger pulse on the Ping pin, and PULSIN measures the sensor's output pulse, which terminates when the echo is detected; the width of this pulse is stored in the rawDist variable, which is divided by 2 to remove the return trip and then converted to microseconds.
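The same conversion can be written out on the host side for clarity. This Python sketch is not part of the report's code; the 2 µs-per-count resolution and the speed-of-sound figure are assumptions stated in the comments:

```python
# Illustrative conversion of a raw PULSIN echo count to a distance.
# Assumptions: 2 us per PULSIN count (Stamp-model dependent) and a speed
# of sound of ~343 m/s (about the 1130 ft/s quoted above).

SPEED_OF_SOUND_CM_PER_US = 0.0343   # centimetres travelled per microsecond

def echo_to_distance_cm(raw_count: int, us_per_count: float = 2.0) -> float:
    """Convert a round-trip echo count into a one-way distance in cm."""
    echo_us = raw_count * us_per_count  # round-trip time in microseconds
    return (echo_us / 2.0) * SPEED_OF_SOUND_CM_PER_US  # halve: one-way only

print(round(echo_to_distance_cm(580), 1))  # 1160 us round trip -> 19.9 cm
```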

2.3.3 Digital Compass

Theory of Operation

The Hitachi HM55B Compass Module is a dual-axis magnetic field sensor

that can add a sense of direction to the robot. The sensing device on the

Compass Module is a Hitachi HM55B chip. An onboard regulator and resis-

tor protection make the 3 volt HM55B chip compatible with 5 volt BASIC

Stamp microcontroller supply and signal levels.

As shown in Figure 2.9, the Hitachi HM55B Compass Module has two axes, x and y. Each axis reports the strength of the magnetic field's component parallel to it. The x-axis reports (field strength) × cos(θ), and the y-axis reports (field strength) × sin(θ). To resolve θ into a clockwise angle from north, use arctan(−y/x), which in PBASIC 2.5 is x ATN −y. The ATN command returns the angle in binary radians.

Figure 2.9: Calculating Angle using Compass

The Hitachi HM55B chip on the Compass Module reports its x- and y-axis measurements in microteslas (µT) as 11-bit signed values. The HM55B is designed to return a value of 1 for a north magnetic field of 1 µT parallel to one of its axes. If the magnetic field is south, the value will be -1. These are nominal values: according to the HM55B datasheet, the actual field for a measurement of 1 could range anywhere from 1 to 1.6 µT.
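The arctan(−y/x) rule can be checked with a small host-side sketch (illustrative only; the axis conventions follow the text's formula, and the example inputs are hypothetical field readings):

```python
# Resolve HM55B-style x/y field components into a clockwise heading from
# north, using the arctan(-y/x) rule described in the text.
import math

def heading_degrees(x: float, y: float) -> float:
    """Clockwise angle from north in degrees, normalised to [0, 360)."""
    return math.degrees(math.atan2(-y, x)) % 360.0

print(heading_degrees(1.0, 0.0))    # field along +x -> 0 degrees
print(heading_degrees(0.0, -1.0))   # -> 90 degrees
```

A binary radian divides the full circle into 256 steps, so the angle ATN returns corresponds to degrees scaled by 256/360.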

Using Basic Stamp to Control Compass

The microcontroller connected to the HM55B must control its enable and clock inputs and use synchronous serial communication to get the axis measurements from its data input and data output pins. For example, a BASIC Stamp 2 can be programmed to control the Compass Module's enable line with HIGH/LOW and send values that reset the device and start a measurement with SHIFTOUT commands. The SHIFTOUT command controls the Compass Module's clock input as it sends data bit values to the device's data input. The converse of SHIFTOUT is SHIFTIN, which also controls the device's clock input as it collects data bits sent by the device's data output pin.

Below is a code snippet that controls compass:

HIGH En: LOW En ’ Send reset command to HM55B

SHIFTOUT DinDout,clk,MSBFIRST,[Reset\4]

HIGH En: LOW En ’ HM55B start measurement command


SHIFTOUT DinDout,clk,MSBFIRST,[Measure\4]

status = 0 ’ Clear previous status flags

DO ’ Status flag checking loop

HIGH En: LOW En ’ Measurement status command

SHIFTOUT DinDout,clk,MSBFIRST,[Report\4]

SHIFTIN DinDout,clk,MSBPOST,[Status\4] ’ Get Status

LOOP UNTIL status = Ready ’ Exit loop when status is ready

SHIFTIN DinDout,clk,MSBPOST,[x\11,y\11] ’ Get x & y axis values

HIGH En ’ Disable module

IF (y.BIT10 = 1) THEN y = y | NegMask ’ Store 11-bits as signed word

IF (x.BIT10 = 1) THEN x = x | NegMask ’ Repeat for other axis
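The two IF lines at the end turn the raw 11-bit two's-complement readings into signed words via NegMask. The same sign extension can be sketched in Python (an illustrative aside, not part of the report's code):

```python
# Sign-extend an 11-bit two's-complement value, as returned by the HM55B,
# into an ordinary signed integer (mirrors the NegMask trick above).

def sign_extend_11(raw: int) -> int:
    """raw is an 11-bit value in [0, 2047]; bit 10 is the sign bit."""
    return raw - 2048 if raw & 0x400 else raw

print(sign_extend_11(5))     # -> 5   (small positive field)
print(sign_extend_11(2047))  # -> -1  (0x7FF is -1 in two's complement)
```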

2.3.4 Digital Encoders

Digital encoders are reflective sensors fixed in front of the robot's wheels to detect their motion. The sensors emit infrared light and look for its return from a reflective surface. They are calibrated for optimal sensing of surfaces a few millimeters away. The Boe-Bot's wheels, even though they are black, reflect sufficient IR to cause the sensors to respond. When a sensor "sees" part of a wheel, it pulls its output low. When it is looking through a hole, its output floats, and the pull-up resistor pulls it high. Because the sensors emit and detect only modulated IR (at about 7.8 kHz), they are relatively insensitive to ambient light. Nevertheless, some fluorescent fixtures may also emit light at this frequency and could interfere with their operation. As a Boe-Bot wheel turns, the sensor sees an alternating pattern of hole, no hole, hole, no hole, and so on. Its output is a square wave whose frequency corresponds to the speed of rotation.

More details about digital encoders and their use in motion control and calibration are given in Chapter 3.
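Since the encoder output frequency tracks wheel rotation, counting its edges gives travelled distance. The following sketch is illustrative only: the wheel diameter and slot count are assumed placeholder values, not measurements from the report:

```python
# Convert encoder edge counts into travelled distance for one wheel.
# WHEEL_DIAMETER_CM and SLOTS_PER_REV are assumed example values.
import math

WHEEL_DIAMETER_CM = 6.6            # assumed wheel diameter
SLOTS_PER_REV = 8                  # assumed number of holes in the wheel
TICKS_PER_REV = 2 * SLOTS_PER_REV  # each hole yields a rising and a falling edge

def ticks_to_distance_cm(ticks: int) -> float:
    """Distance travelled for a given number of encoder edges."""
    circumference = math.pi * WHEEL_DIAMETER_CM
    return ticks * circumference / TICKS_PER_REV

print(round(ticks_to_distance_cm(16), 2))  # one full revolution -> 20.73 cm
```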


2.4 Servo Motors

2.4.1 Continuous Rotation Servo Motors

Continuous rotation servo motors6 are pulse-width-controlled motors. They are fed with a train of pulses, with 20 ms between consecutive pulses. The width of each pulse controls the speed of rotation: a 1.5 ms pulse width stops the motor, a greater pulse width rotates it counterclockwise, and a smaller pulse width rotates it clockwise. Figure 2.10 shows this operation.

Figure 2.10: Pulse Width Control
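The pulse-width rule can be summarized in a small Python sketch (idealized; real servos have a small deadband around the center pulse):

```python
def rotation_direction(pulse_ms, center_ms=1.5):
    """Direction of a continuous rotation servo for a given pulse width:
    1.5 ms stops the motor, wider pulses turn it counterclockwise,
    narrower pulses turn it clockwise."""
    if pulse_ms > center_ms:
        return "counterclockwise"
    if pulse_ms < center_ms:
        return "clockwise"
    return "stopped"
```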

2.4.2 Ultrasonic Bracketing Kit

The PING Bracketing Kit includes a standard servo and all mounting hardware required to attach the PING ultrasonic sensor to the front of the Parallax Boe-Bot robot (or any custom-made robot chassis with a flat mounting spot on the front).

6For more information about continuous rotation servo motors, check Appendix C

The standard servo7 provides 180 degrees of ultrasonic scanning ability. A train of pulses is fed to the motor to rotate it; the width of each pulse controls the amount of rotation and hence the angle of the Ping sensor.

The appropriate pulse width for each angle was determined experimentally by trial and error. Table 2.1 contains the experiment results.

Angle          Pulse width (Stamp cycles)   Pulse width (ms)
90 (left)      1148                         2.296
45 (left)      910                          1.82
0 (forward)    693                          1.386
45 (right)     465                          0.93
90 (right)     245                          0.49

Table 2.1: Standard servo pulse width for different Ping angles

7For more information about standard servo motors, check Appendix C
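Intermediate Ping angles can be approximated by linear interpolation over Table 2.1. A Python sketch (the table values are from the experiment above; the interpolation itself is our assumption, since only the five listed angles were measured):

```python
# (angle, pulse width in Stamp cycles); negative angles = right, positive = left
CALIBRATION = [(-90, 245), (-45, 465), (0, 693), (45, 910), (90, 1148)]

def pulse_cycles(angle):
    """Linearly interpolate the pulse width (in Stamp cycles) for an angle."""
    if not -90 <= angle <= 90:
        raise ValueError("angle outside the servo's scanning range")
    for (a0, c0), (a1, c1) in zip(CALIBRATION, CALIBRATION[1:]):
        if a0 <= angle <= a1:
            return c0 + (c1 - c0) * (angle - a0) / (a1 - a0)

def pulse_ms(angle):
    """One Stamp cycle is 2 us, so multiply cycles by 0.002 to get ms."""
    return pulse_cycles(angle) * 0.002
```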


Chapter 3

Robot Motion

Robot motion is either in straight lines or pivoting on its center. General motion is possible but would complicate the problem and increase inaccuracies in measurements; hence, for simplicity, it is ignored. Parallax continuous rotation servo motors and digital encoders have been used to implement motion.

Motion calibration is the most important part of any robotics application. Without calibration, motion is completely unreliable and will break down any application. Several experiments and techniques are used to calibrate the robot's motion.

3.1 Motion Requirements

In order to move the robot, continuous rotation servo motors are used. Two servo motors are fixed at the two sides of the robot; they must be identical and perfectly aligned to achieve proper motion.

The Boe-Bot is able to perform two motion actions: move forward, or backward, a certain distance, and rotate right, or left, around its center by a certain angle. Each action is treated in a different way.

• Motion in a straight line:

In order to move in a straight line, the two wheels of the robot must rotate at the same velocity; e.g. to move the robot forward, the right wheel must rotate clockwise and the left wheel counterclockwise. Below is a BASIC Stamp code snippet to move the robot in a straight line:

MOVE:
DO
  ' Assume that 750 is the pulse width that stops the robot
  PULSEOUT RightWheel, 650   ' 750 - 100
  PULSEOUT LeftWheel, 850    ' 750 + 100
LOOP
RETURN

Unfortunately, this code will not work as expected because of calibration considerations. This point is discussed in detail in Section 3.2.

The previous code uses an infinite loop to rotate the wheels; it needs some modification to move the robot a certain distance. To achieve that, we need a way to calculate the distance the robot has traveled so far. The simplest solution is to compute the distance from the robot's velocity and the elapsed time; however, this is not practical, since it requires an ideal environment. Therefore, another solution is adopted that uses the digital encoders.

Figure 3.1: Using Digital Encoder to calculate distance

Boe-Bot wheels come equipped with eight evenly spaced holes, and a digital encoder (a reflective sensor) is fixed in front of each wheel to detect them. As shown in Figure 3.1, sixteen encoder pulses indicate that the wheel has made a full rotation of 2πr cm, where r is the wheel radius. Using the wheel radius, any distance can easily be converted into encoder pulses; the MOVE routine then counts encoder pulses until they reach the required number.

According to the Boe-Bot dimensions, one encoder pulse indicates that the wheel has travelled about 1.3 cm. When the robot moves forward, this is the distance the robot has actually moved.
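The conversion can be sketched in Python (assuming 16 pulses per revolution; the wheel radius of about 3.3 cm is inferred from the quoted 1.3 cm per pulse, not stated explicitly in the text):

```python
import math

PULSES_PER_REV = 16      # 8 holes give 16 hole/no-hole transitions per turn
WHEEL_RADIUS_CM = 3.3    # assumed radius, consistent with ~1.3 cm per pulse

def distance_to_pulses(distance_cm):
    """Convert a travel distance in cm to the nearest encoder pulse count."""
    cm_per_pulse = 2 * math.pi * WHEEL_RADIUS_CM / PULSES_PER_REV
    return round(distance_cm / cm_per_pulse)
```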

Below is pseudocode that moves the robot a certain distance.

MOVE(Distance)
    EP = Convert Distance to Encoder Pulses
    WHILE (EP <> 0)
        PULSEOUT right_wheel, 650
        PULSEOUT left_wheel, 850
        IF (right encoder pulse) THEN
            EP = EP - 1
    LOOP
RETURN

• Rotation around center:

The main disadvantage of the digital encoder is its inaccuracy. All distances are measured in encoder pulses, and one encoder pulse corresponds to about 1.3 cm of wheel travel. This problem appears most clearly in rotation: using the digital encoders may introduce more than 5◦ of error, which is very high and hurts the application's accuracy. Another point: why involve the digital encoders in rotation when a digital compass is available? Although the latter has nearly the same error tolerance, it is easier to use. In addition, error in the digital compass does not accumulate as it does with the digital encoders, because the compass gives absolute measurements independent of previous inaccuracies.

Rotation is split into two routines:

– General rotation: using the compass, rotate the robot slowly until it reaches the target angle.

– 90◦ rotation: this must be more accurate, as it is used frequently in the application. A special routine handles this action by calculating the exact number of Stamp pulses needed to rotate the robot 90◦.


3.2 Motion Calibration

The key to success in any odometry system is calibration. Calibration is important because the real world is not ideal: the documented specifications of the robot's motors, wheels and other components may not be accurate, or may even change with the environment. For example, motor current consumption depends on the power supply, wheel rotation depends on surface friction and the weight of the robot, the wheels may not be exactly the same size, and their axles may not align perfectly.

Below are four techniques used to calibrate the robot's motion.

3.2.1 Pre-Motion Calibration

Pre-motion calibration aims at determining the real operating values of the servo motors, based on the current state of the robot's motors, wheels, power supply, etc. It is performed after the final assembly of the robot and before using it.

The first step is to center the servos, such that they do not rotate when fed with 1.5 ms pulses; the second is to establish the relationship between pulse width and servo speed.

• Step 1: Centering the Servos

The first source of error is that the pulse width expected to stop the servo motor, 1.5 ms in our case, is not the actual one. The following experiment is done to avoid this problem. Figure 3.2 shows the signal that has to be sent to the servo to calibrate it. This is called the center signal; after the servo has been properly adjusted, this signal instructs it to stay still.

Figure 3.2: Centering Servo Motor

If the servo turns, it needs calibration: using a screwdriver, the potentiometer in the servo is adjusted until the servo stops turning.


After this step, a 1.5 ms pulse train supplied to the servo will not turn it.

• Step 2: Establishing relationship between pulse width and

speed

In step two, the goal is to determine, for each servo separately (because they may differ), the correspondence between the various pulse widths and the actual rotation speeds. The importance of this step is to ensure that both wheels rotate at the same velocity when moving in a straight line.

Experiment steps are as follows:

– Center the two servos.

– Determine a single maximum speed for both wheels. To do this, both servos are sent a stream of 256 pulses of the same width, at one extreme of their pulse range. This causes the Boe-Bot to spin around; while it is doing so, the program counts the transitions on each encoder output. Next, the same is done at the other extreme, making the Boe-Bot spin in the other direction. Finally, the lowest of the four counts measured becomes the maximum common sustainable speed.

– Next, cycle the servos through a series of pulse streams, each with 256 pulses, but each series with a different pulse width. Again, count edges for each servo. From this data one could construct, for each servo, a graph of its velocity at each of the tested pulse widths. But this is not what we want.

– What we really want is a graph of the pulse width for each of several equally spaced velocities. To get these, the program uses linear interpolation between the points on the first graph to approximate the needed pulse values.

After this experiment, each servo has sixteen pulse width values corre-

sponding to sixteen equally spaced velocities.

Unfortunately, these values are not very accurate, since they depend on the digital encoder with its range of error. For simplicity and better accuracy, motion in a straight line uses other values calculated by trial and error; these values are not valid for different surfaces, motors, or any other factors. Table 3.1 shows these values1.

Speed Level        Right Motor Pulse Width   Left Motor Pulse Width
No Motion (Null)   744                       744
Level 1            733                       754
Level 2            725                       762
Level 3            720                       767
Level 4            707                       780
Level 5            655                       815

Table 3.1: Motor pulse widths for motion in a straight line at different speeds

3.2.2 Coordinating Motion

The previous calibration ensures that the robot's constants are correct, but we still need to overcome errors caused by the surrounding environment.

One of the most important sources of error is a difference between the velocities of the two wheels; it may happen because the two motors, or the two wheels, are not identical, or because of friction. To overcome this problem, we must first detect that one wheel is getting proportionately ahead of the other, then leave out pulses to retard its motion until the other wheel catches up. To detect this, the digital encoders are used again, but now during the motion itself. If the robot moves in a straight line, the two wheels must cover the same distance at the same velocity, which means the digital encoders must give an alternating pulse sequence, {left, right, left, right, ...}. If the left wheel is faster than the other, the sequence of encoder pulses will contain {left, left}, which can easily be detected by the robot program. Below is pseudocode for this scenario:

WHILE motion
    read digital encoder pulses
    IF left encoder reads two consecutive pulses
        stop left wheel for some servo pulses
    IF right encoder reads two consecutive pulses
        stop right wheel for some servo pulses
LOOP

1Pulse width is measured in number of BASIC Stamp clock pulses. To convert it to milliseconds, multiply by 0.002
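The coordination rule of Section 3.2.2 can be simulated offline in Python (a simplified sketch over a recorded sequence of encoder events, not the Stamp implementation):

```python
def wheels_to_pause(pulse_sequence):
    """Scan a sequence of encoder events ('L' or 'R') and report at which
    steps one wheel got a pulse ahead and should briefly be stopped."""
    pauses = []
    prev = None
    for i, side in enumerate(pulse_sequence):
        if side == prev:    # two consecutive pulses from the same encoder
            pauses.append((i, side))
        prev = side
    return pauses
```

In the balanced case the sequence alternates and no pauses are reported.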

3.2.3 Ramping

Inertia is a fact of life: "A body at rest will remain at rest, and a body in motion will remain in motion, unless acted upon by another force." This, of course, applies to robots and robot wheels. Starting or stopping motion without gradual acceleration and deceleration is not only jarring to the servos' internal mechanisms but also wastes precious battery energy. If a STOP command is sent to the robot while moving, inertia will keep it moving a little, possibly for another servo pulse or two, completely ruining the precision we set out to achieve. On the other hand, if a MOVE command is sent to the robot, inertia may make it shake and divert from its proper path.

Fortunately, ramping can solve the problem; Figure 3.3 shows how. To move the robot a distance D with velocity Vmax: first, ramp the velocity up from Vmin to Vmax; then move at Vmax until the motion is almost finished; finally, ramp the velocity back down from Vmax to Vmin.

Figure 3.3: Ramping wheel velocity curve

Implementing ramping on the Boe-Bot is easy using the values in Table 3.1, and it greatly improves the motion.
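A trapezoidal ramp over the speed levels of Table 3.1 can be sketched as follows (the one-level-per-pulse ramp rate is our assumption for illustration):

```python
def ramp_profile(total_pulses, max_level=5):
    """Speed level (1..max_level) for each encoder pulse of a motion:
    accelerate one level per pulse, cruise at max_level, decelerate."""
    up = list(range(1, max_level + 1))
    down = list(reversed(up))
    if total_pulses <= len(up) + len(down):
        # Short move: triangular profile, no cruise phase
        half = total_pulses // 2
        return up[:half] + down[max_level - (total_pulses - half):]
    cruise = [max_level] * (total_pulses - len(up) - len(down))
    return up + cruise + down
```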

3.2.4 Error Detection and Correction

After all the calibration steps mentioned previously, the Boe-Bot moves with acceptable behavior. But what if some accidental problem causes the robot to divert from its path?

A final solution using the compass is provided. If the robot receives a MOVE 50 cm FORWARD command, Figure 3.4 shows its new position if it diverts from its path by an angle θ. Assuming the diversion was along a straight line, the steps to detect and correct this error are as follows.

Figure 3.4: Error detection in robot motion

• The compass detects that the robot's direction has changed by an angle θ.

• Using the compass, rotate the robot back to the correct direction.

• Using the assumption that the diversion was along a straight line, calculate the new location of the robot, point B:

(X_B, Y_B) = (X_A + D sin θ, Y_A + D cos θ)    (3.1)
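Equation (3.1) translates directly into code; a Python sketch:

```python
import math

def corrected_position(xa, ya, distance, theta_deg):
    """New position (Eq. 3.1) of a robot commanded to move `distance`
    straight ahead from (xa, ya) but diverted by theta degrees,
    assuming the diversion was along a straight line."""
    theta = math.radians(theta_deg)
    return xa + distance * math.sin(theta), ya + distance * math.cos(theta)
```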


Chapter 4

Robot-Computer

Communication

4.1 Communication Environment

4.1.1 Communication Hardware Components at a Glance

The communication between the robot and the computer is accomplished over a Bluetooth connection. That necessitates having a Bluetooth module on each of the involved devices. For the robot, the EmbeddedBlue eb500, Figure 4.1, has been used; it is produced by A7 Engineering and distributed by Parallax, from which most of the components used in this project were bought. For the computer, the D-Link DBT-122 Bluetooth USB Adapter, Figure 4.2, is used.

4.1.2 Robot Communication Side

The module supports many features. The most important is that it supports simple serial UART1 communication and control. Its range in the open field is 328 feet, and it has low current consumption for long battery life.

The eb500 implements all components of the Bluetooth stack on board, so no additional host processor code is required. Once a connection to another Bluetooth device has been established, the link has the appearance of a cabled serial connection, eliminating the need for special wireless protocol knowledge.

1For more information, check appendix B


Figure 4.1: EmbeddedBlue eb500 Bluetooth Module

Figure 4.2: D-Link DBT-122 Bluetooth USB Adapter

The module supports two modes. The first is Data mode, where any data sent to the module from the robot is transferred directly over the Bluetooth connection to the other end. The second is Command mode, where any data sent to the module from the robot is interpreted by the module itself as a command. The module supports commands for connection establishment and termination, as well as commands for changing the security settings. The full listing of the commands can be found in the module's manual.

The steps performed at the robot side to communicate with the computer are as follows:

• The connection is established; this can be checked from the status pin of the eb500.

• The module automatically changes to data mode.


• Both the Stamp microcontroller and the Bluetooth module have USART modules; these are used to exchange data between them.

• To disconnect, the Bluetooth module is changed to Command Mode

and a disconnect command is sent to it.

Below is a code snippet that exchanges data with the eb500 module.

'Wait for the connection to be established and switch into data mode.
'When switching into data mode, a 300 ms timeout is required to give the
'module enough time to make the change.
'in5 is the status pin: 0 = Not Connected, 1 = Connected
WaitForConnection:
IF in5 = 0 THEN WaitForConnection

'Switch to Data Mode (pin 6 is the mode pin: LOW = Command Mode, HIGH = Data Mode)
HIGH 6
PAUSE 300

'The connection is now established; send "Hello World"
SEROUT 1,84,["Hello World",CR]

'Receive some data and store it in the variable ReceivedData
SERIN 0,84,[ReceivedData]

'Switch to Command Mode
LOW 6
'Wait for acknowledgment from the module
SERIN 0,84,[WAIT(CR,">")]

'Disconnect from the remote device by sending the dis command
SEROUT 1,84,["dis",CR]
'Wait for acknowledgment from the module
SERIN 0,84,[WAIT(CR,">")]

'It is disconnected now

4.1.3 Computer Communication Side

For the hardware part, the D-Link adapter driver is easily installed. The adapter supports SPP2 and is mapped to a new serial port created when the driver is installed. The application running on the computer uses this serial port for sending and receiving data, which hides the complexities of the Bluetooth connection by making use of the Serial Port Profile3.

For the software part, the application uses the aforementioned serial port. To simplify the task of interfacing with the serial port, the .NET 2.0 framework is used; it is the first .NET framework to provide classes for communicating with serial ports. The SerialPort class is used; it supports methods to send and receive bytes to and from the serial port.

The steps required at the computer side to communicate with the robot are as follows:

• Make the connection between the computer and the robot using the Bluetooth adapter tools.

• From the system settings, check the serial port assigned to the Bluetooth adapter.

• In the SerialPort class, use the Open method to allocate the system resources required for the port.

• Now the methods provided by the SerialPort class can easily be used for sending and receiving data.

2SPP stands for Serial Port Profile

3For more information about Bluetooth profiles, check appendix A


4.2 Communication API

One of the points taken into consideration while designing this project is how the different modules of the project communicate with each other. For the software modules, OOP concepts are applied to provide encapsulation and reusability. The tricky point is that the project also contains hardware modules. This chapter gives an overview of the correspondence between the hardware modules and the application running on the computer.

4.2.1 Why Use an API?

The robot is a very generic module that can be used in many applications. To account for such generality, there should be a certain API through which any application that uses the robot can control it. To keep things simple, the commands related to controlling the robot and its modules are gathered together into an API. Besides simplicity, this design provides changeability, reusability and extendibility.

Changeability and reusability are achieved by being able to replace any module on the robot, or even the whole robot, without affecting the application that uses it. That is possible provided that the new robot configuration supports the same commands the previous one supported. For example, as mentioned in Chapter 2, the robot carries a compass; it can be replaced with a more accurate one without affecting the application running on the computer. That is changeability at the robot side. At the application side, the whole application can be changed without affecting the robot. For example, in this project the proposed application is map-building; it can be smoothly changed to any other application without modifying the robot, provided of course that the robot's components fulfill the requirements of the new application.

Extendibility is achieved by putting the commands in a separate layer. More commands can be added to the API seamlessly. Even at the robot side, new hardware modules can be added and their corresponding commands provided to the API, which results in a more powerful API with minor changes.


4.2.2 API Commands

The API commands have been determined according to the modules attached to the robot. Below are the commands and their descriptions.

Initialize Initializes the status variables of the robot, e.g. the position, the angle and the sensor directions.

Connect Initializes the connection with robot.

Disconnect Disconnects from the robot.

Move Forward Makes the robot move forward until a stop command is invoked.

Move Distance Moves the robot forward a certain distance and then stops it automatically.

Stop Stops the robot whether it was moving or rotating.

Measure Ping Returns the distance in front of the ultrasonic sensor at its current position and angle.

Measure Compass Returns the angle of the robot with respect to the mag-

netic north.

Rotate Ping Rotates the ultrasonic sensor to one of five positions (90◦ to the left, 45◦ to the left, forward, 45◦ to the right, 90◦ to the right).

Rotate Left 90◦ Rotates the robot 90◦ to the left without using the com-

pass.

Rotate Right 90◦ Rotates the robot 90◦ to the right without using the

compass.

Rotate Left Slow Rotates the robot slowly to the left until a stop command is invoked.

Rotate Right Slow Rotates the robot slowly to the right until a stop command is invoked.

Rotate With Compass Rotates the robot to any angle with respect to the

magnetic north using the compass.
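The document does not specify how these commands are encoded on the serial link. Purely as an illustration, a hypothetical single-byte encoding might look like this (all byte values are invented, not the project's actual protocol):

```python
# Hypothetical command byte assignments -- NOT the project's actual protocol.
COMMANDS = {
    "INITIALIZE": 0x01, "CONNECT": 0x02, "DISCONNECT": 0x03,
    "MOVE_FORWARD": 0x10, "MOVE_DISTANCE": 0x11, "STOP": 0x12,
    "MEASURE_PING": 0x20, "MEASURE_COMPASS": 0x21, "ROTATE_PING": 0x22,
    "ROTATE_LEFT_90": 0x30, "ROTATE_RIGHT_90": 0x31,
    "ROTATE_LEFT_SLOW": 0x32, "ROTATE_RIGHT_SLOW": 0x33,
    "ROTATE_WITH_COMPASS": 0x34,
}

def encode(command, argument=None):
    """Pack a command (plus an optional one-byte argument, e.g. a
    distance in cm) into the frame sent over the serial link."""
    frame = bytes([COMMANDS[command]])
    if argument is not None:
        frame += bytes([argument & 0xFF])
    return frame
```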


Figure 4.3: The UML of the API Implementation at the Computer Side

4.2.3 API Implementation

The API implementation is paired between the application and the robot. There is a one-to-one correspondence between the methods in the application part of the API and the modules on the robot's microcontroller.

At the robot side, the code is divided into modules; upon reception of a command, the appropriate module is executed. Such modular design facilitates the changeability and extendibility explained earlier.

At the application side, the API is implemented as a set of classes. The UML diagram shown in Figure 4.3 illustrates them.

The abstract class Robot encapsulates the standard data members and methods that should be available in any robot supporting this API. These methods are abstract and have no implementation. Any actual robot class, the BoeBotRobot class in this project, must inherit from Robot and give the appropriate implementation for these methods.

The Robot class, or any class inherited from it, needs a class implementing the CommunicationPort interface. This interface provides the methods required to open a connection with the robot, disconnect from it, send data to it, and receive data from it. In this project, the serial port profile of the Bluetooth dongle is used, and hence the class SerialPortCommunication implements the CommunicationPort interface. An instance of the SerialPortCommunication class is passed to the constructor of the BoeBotRobot class.
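The class structure described above can be mirrored in Python for illustration (the original implementation is C#/.NET; the names follow Figure 4.3, but the specific method selection here is our assumption):

```python
from abc import ABC, abstractmethod

class CommunicationPort(ABC):
    """Interface for opening a link to the robot and exchanging bytes."""
    @abstractmethod
    def open(self): ...
    @abstractmethod
    def close(self): ...
    @abstractmethod
    def send(self, data: bytes): ...
    @abstractmethod
    def receive(self) -> bytes: ...

class Robot(ABC):
    """Abstract robot: a concrete robot (e.g. BoeBotRobot) implements
    the API commands on top of whatever CommunicationPort it is given."""
    def __init__(self, port: CommunicationPort):
        self.port = port
    @abstractmethod
    def move_distance(self, cm: int): ...
    @abstractmethod
    def measure_compass(self) -> int: ...
```

Swapping SerialPortCommunication for any other CommunicationPort implementation leaves the robot classes untouched, which is exactly the changeability argument of Section 4.2.1.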


Chapter 5

Map Building Application

Map building is one of the fundamental tasks of mobile robots, and many researchers have focused on the problem of how to represent the environment as well as how to acquire models using this representation. Exploration is the task of guiding a vehicle in such a way that it covers the environment with its sensors.

Two methods have been used to build the map. The first is Points Locality, which aims to fit the points returned by sensor readings into line segments. The second is a well-known method called Occupancy Grid, which uses a probabilistic approach to build the map. For navigation, the Wall Follower technique has been used.

5.1 Method 1: Points Locality

This section discusses the first of the two methods used in map construction. In this method the map is represented as a list of line segments that describe the contours of the objects. This representation provides simplicity and compression, because it reduces the memory needed to hold the map, while still providing an appropriate level of detail.

Now, how can we find the line segments that describe the map? A method has been developed to obtain these line segments from the raw sensor data received from the robot, which we call the "Points Locality Method"; the name comes from the idea the method depends on to determine each segment.

If we try to extract the line segments from the raw data measured by the robot's range finder, we face two problems. First, we must divide the set of points into a number of subsets of collinear points, each holding the points related to one segment. Second, because of the inaccuracy of the measured points, we need to fit each subset to find a candidate line segment with minimum error.

5.1.1 Co-linearity Problem

We can solve this problem by grouping the points from the beginning into collinear groups, depending on the locality of the new point and the locality of the existing line segments. Each group of neighbouring points should be related to the same line segment, and each new point can either be inserted into one of the existing line segments or form a new one, depending on the locality of the new point and of the existing segments. Hence the locality of points enables us to decide whether they compose one segment or not, as explained in the next section.

5.1.2 Neighbourhood Decidability using Points Locality

Each newly received point is checked against each existing segment to see whether it is a neighbour of the segment's points, i.e. whether it corresponds to that segment. There are several different cases:

1. The new point is near the start or the end of the line segment. In this case the point either belongs to this segment or belongs to another segment that intersects it, as at corners. If the height of the new point is small enough and the total error after inserting it is still small enough, then the new point belongs to this segment; otherwise it belongs to an intersecting segment.

2. The new point is far from the start and end points of the line segment, but its height is small enough. As in the first case, the new point may belong to the line segment or to an intersecting segment, and this is decided according to the current and new total error as in the first case.

3. The new point is far from the start and end points of the line segment and its height is not small enough. In this case the new point does not belong to this line segment.

If the new point is found not to belong to any of the existing line segments, it is inserted into a new line segment, which will contain only this one point until other neighbouring points are received.

Figure 5.1: The solid points belong to the line segment; the non-solid points are far points and do not belong to the line segment.
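A much-simplified sketch of the neighbourhood test (covering only the first case above; the height and endpoint tolerances are application-tuned values assumed here for illustration):

```python
import math

def point_height(p, a, b):
    """Perpendicular distance ('height') of point p from the line y = a*x + b."""
    x, y = p
    return abs(a * x + b - y) / math.sqrt(a * a + 1)

def belongs_to_segment(p, segment, height_tol=0.5, end_tol=1.0):
    """Hypothetical neighbourhood test: p joins the segment when it lies
    close to the fitted line AND near the segment's start or end point.
    segment = (a, b, start_point, end_point)."""
    a, b, start, end = segment
    near_end = min(math.dist(p, start), math.dist(p, end)) <= end_tol
    return near_end and point_height(p, a, b) <= height_tol
```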

5.1.3 Fitting Problem

Because of the errors resulting from the robot's motion and sensor measurements, we need to find the candidate line segment that minimizes the error over its set of points. Using perpendicular regression, which minimizes the heights of the points above the line segment, we can find that candidate segment. A mathematical method has been developed to implement perpendicular regression; it is explained in the next section.

5.1.4 Perpendicular Regression

We aim to minimize the error of the points that determine the line segment. According to perpendicular regression, the error can be defined as the sum of the points' heights (the perpendicular distance between each point and the candidate line segment):


Figure 5.2: An example of Perpendicular fitting

\epsilon = \sum_{i=1}^{n} h_i \qquad (5.1)

\epsilon = \sum_{i=1}^{n} \sqrt{\Delta_X^2 + \Delta_Y^2} \qquad (5.2)

and for simplicity, the error will be defined as the sum of the squares of the heights rather than the sum of the heights:

\epsilon = \sum_{i=1}^{n} h_i^2 \qquad (5.3)

\epsilon = \sum_{i=1}^{n} \left( \Delta_X^2 + \Delta_Y^2 \right) \qquad (5.4)

Now we need to minimize the error \epsilon, which is a function of the coefficients (a, b) of the equation of the line segment (y = ax + b). We need to find the a and b that minimize \epsilon. By partially differentiating with respect to a and b and equating to zero, we get:

\frac{\partial \epsilon}{\partial a} = a \sum_{i=1}^{n} P_{x_i}^2 - a \sum_{i=1}^{n} P_{y_i}^2 + (a^2 - 1) \sum_{i=1}^{n} P_{x_i} P_{y_i} - b (a^2 - 1) \sum_{i=1}^{n} P_{x_i} + 2ab \sum_{i=1}^{n} P_{y_i} - nab^2 = 0 \qquad (5.5)

\frac{\partial \epsilon}{\partial b} = a \sum_{i=1}^{n} P_{x_i} - \sum_{i=1}^{n} P_{y_i} + nb = 0 \qquad (5.6)


By solving these two equations simultaneously, we get the values of a and b as follows1:

a = \frac{-\psi + \sqrt{\psi^2 + 4\phi^2}}{2\phi} \qquad b = \frac{-a \sum_{i=1}^{n} P_{x_i} + \sum_{i=1}^{n} P_{y_i}}{n} \qquad (5.7)

where:

\psi = n \left( \sum_{i=1}^{n} P_{x_i}^2 - \sum_{i=1}^{n} P_{y_i}^2 \right) - \left( \sum_{i=1}^{n} P_{x_i} \right)^2 + \left( \sum_{i=1}^{n} P_{y_i} \right)^2 \qquad \phi = n \sum_{i=1}^{n} P_{x_i} P_{y_i} - \sum_{i=1}^{n} P_{x_i} \sum_{i=1}^{n} P_{y_i} \qquad (5.8)
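The closed form of Eqs. (5.7) and (5.8) translates directly into code; a Python sketch (assuming the best-fit line is not vertical, since y = ax + b cannot represent vertical segments):

```python
import math

def fit_perpendicular(points):
    """Perpendicular (total least squares) fit y = a*x + b via the
    closed form of Eqs. (5.7)-(5.8)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    psi = n * (sxx - syy) - sx * sx + sy * sy
    phi = n * sxy - sx * sy
    if phi == 0:
        return 0.0, sy / n   # degenerate case: horizontal fit
    a = (-psi + math.sqrt(psi * psi + 4 * phi * phi)) / (2 * phi)
    b = (sy - a * sx) / n
    return a, b
```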

After finding the values of a and b, we can determine the start and end points of the line segment from the list of its points. Now each line segment is determined by its start and end points, and the map can discard the large number of raw data points.

Figure 5.3: An example of the corners problem in the points locality method

Two problems appear when using this method. The first is curves, which cannot be extracted by fitting line segments to the data points; a curve appears as a number of non-continuous line segments, as shown in Figure 5.4. The second is the corner problem, which leads to undesired results, as shown in Figure 5.3, because of the method of determining the neighbourhood of points belonging to one line segment described in Section 5.1.2.

Figure 5.4: An example of the curves problem in the points locality method

1for more information about the mathematical proof, return to appendix D

5.2 Method 2: Occupancy Grid

The second map-building method is called Occupancy Grid. This method can overcome the uncertainty in the distance measured by the ultrasonic sensor. It uses a probabilistic approach to build the map: the map is implemented as a two-dimensional grid in which each cell (x, y) has a value that represents the probability of occupancy of this cell, i.e. the probability that the cell contains an object. This value is updated upon each sensor reading, and the final value of the cell is the value used to draw the map.

5.2.1 Measurements Uncertainty

Although the ultrasonic sensor measures distance with acceptable error, that alone is not enough to determine an obstacle's position. As shown in Figure 5.5, the ultrasonic sensor only gives an arc on which the detected obstacle is located. We could not use the ultrasonic measurements directly to build the map by assuming the obstacle lies on the line of sight of the sensor, especially for readings at large distances, as the resulting map would be too erroneous.

Figure 5.5: Ultrasonic Sensor Range Regions

To overcome the measurement uncertainty problem we used a probabilistic method. As shown in Figure 5.5, when the sensor detects an object, region A contains the cells that may be occupied by the object, while region B contains the cells that are most probably empty.

We have to raise the probability of occupancy of all cells in region A and decrease the probability of occupancy of all cells in region B. The problem now is to determine the probability of each cell. This probability can be estimated in different ways; one of them is to use a mathematical method to estimate the occupancy probability of each cell.

For region A, the probability of occupancy of each cell given this sensor reading is inversely proportional to both the distance and the angle between the cell and the sensor. On the contrary, for every cell in region B, the probability of occupancy given this sensor reading is directly proportional to both the distance and the angle between the cell and the sensor. We used these factors to estimate the probability of occupancy of each cell given this sensor reading as follows.

Figure 5.6: Calculating Probability From Distance and Angle Between Sensor and Cell

As shown in Figure 5.6, let D be the maximum distance the sensor can measure, β be the maximum cone angle, and let d and α be the distance and angle of the current cell to be updated, respectively.

In Region A:

\Pr(Occ) = \frac{1}{2}\left(\frac{c_1 D - d}{c_1 D} + \frac{c_2 \beta - \alpha}{c_2 \beta}\right)   (5.9)

In Region B:

\Pr(\overline{Occ}) = \frac{1}{2}\left(\frac{c_1 D - d}{c_1 D} + \frac{c_2 \beta - \alpha}{c_2 \beta}\right)   (5.10)

where \Pr(Occ) is the probability that the cell is occupied and \Pr(\overline{Occ}) is the probability that the cell is free. c_1 and c_2 are constants that depend on the cell size: the smaller the cell size, the larger the constants. The minimum value of these constants is 2, just to ensure that the probability does not fall below 0.5; otherwise, a sensor reading could have a negative effect on the total probability.
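Equations 5.9 and 5.10 can be sketched as a small function (an illustrative Python sketch; the function name and the default constants c1 = c2 = 2 are ours, not from the project):

```python
def occupancy_probability(d, alpha, D, beta, c1=2.0, c2=2.0):
    """Probability from Eq. 5.9/5.10: the average of a distance term and
    an angle term, each scaled so that the result stays in [0.5, 1] when
    the constants are at their minimum value of 2."""
    distance_term = (c1 * D - d) / (c1 * D)
    angle_term = (c2 * beta - alpha) / (c2 * beta)
    return (distance_term + angle_term) / 2.0
```

For a cell in region A this value is used as the probability of occupancy; for a cell in region B the same value is used as the probability of being empty.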

5.2.2 Probability updating over time

So far, we have only estimated the probability of occupancy of each cell given one sensor reading. To obtain a good estimate of the occupancy probability, every cell should be sensed more than once. The problem is how to update the cell occupancy probability upon each new sensor reading. Let the probability of occupancy of a cell given the T-th sensor reading be

\Pr(Occ_{x,y} \mid S_T)

The final stored probability after T sensor readings would be

\Pr(Occ_{x,y} \mid S_1, S_2, \ldots, S_T)

Using Bayes' law and the rules of conditional probability, the following formula can be proved2:

\Pr(Occ_{x,y} \mid S_1, S_2, \ldots, S_T) = 1 - \left(1 + \frac{\Pr(Occ_{x,y} \mid S_1)}{1 - \Pr(Occ_{x,y} \mid S_1)} \prod_{\tau=2}^{T} \left[\frac{\Pr(Occ_{x,y} \mid S_\tau)}{1 - \Pr(Occ_{x,y} \mid S_\tau)} \cdot \frac{1 - \Pr(Occ_{x,y})}{\Pr(Occ_{x,y})}\right]\right)^{-1}   (5.11)

The previous formula is suitable for cells in region A. For region B, the suitable value is \Pr(\overline{Occ}_{x,y} \mid S_1, S_2, \ldots, S_T), which can be calculated similarly using the formula:

\Pr(\overline{Occ}_{x,y} \mid S_1, S_2, \ldots, S_T) = 1 - \left(1 + \frac{\Pr(\overline{Occ}_{x,y} \mid S_1)}{1 - \Pr(\overline{Occ}_{x,y} \mid S_1)} \prod_{\tau=2}^{T} \left[\frac{\Pr(\overline{Occ}_{x,y} \mid S_\tau)}{1 - \Pr(\overline{Occ}_{x,y} \mid S_\tau)} \cdot \frac{1 - \Pr(\overline{Occ}_{x,y})}{\Pr(\overline{Occ}_{x,y})}\right]\right)^{-1}   (5.12)

In the real implementation, the only information needed for each cell is \Pr(Occ_{x,y} \mid S_1, S_2, \ldots, S_T), which is updated when the cell lies in region A, and \Pr(\overline{Occ}_{x,y} \mid S_1, S_2, \ldots, S_T), which is updated when the cell lies in region B. When drawing the map, the maximum of these two values is chosen.
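Equation 5.11 can be applied incrementally by keeping, per cell, the running probability and folding in each new reading through its odds ratio (an illustrative Python sketch; the function names and the uniform prior of 0.5 are our assumptions, not the project's code):

```python
def odds(p):
    """Convert a probability into an odds ratio."""
    return p / (1.0 - p)

def update_cell(p_stored, p_reading, prior=0.5):
    """Fold one new sensor reading into a cell's stored probability.

    Each reading contributes its odds ratio divided by the prior odds,
    which is the per-reading factor inside the product of Eq. 5.11."""
    o = odds(p_stored) * odds(p_reading) / odds(prior)
    return o / (1.0 + o)
```

With the prior of 0.5 used in this project, updating with a reading equal to the prior leaves the stored probability unchanged, as expected.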

5.2.3 Method Implementation

• Initialize every cell with an initial probability of 0.5, because we have no information about the map. With this initialization the map is totally unknown and the entropy is maximal.

• Substituting in Equations 5.9 and 5.10, compute the probability of occupancy given the current sensor reading for all cells in regions A and B.

• Use the recursive formulae 5.11 and 5.12 to update the two values of each cell, \Pr(Occ_{x,y} \mid S_1, \ldots, S_T) and \Pr(\overline{Occ}_{x,y} \mid S_1, \ldots, S_T).

• After finishing navigation, and for each cell, select the maximum of \Pr(Occ_{x,y} \mid S_1, \ldots, S_T) and \Pr(\overline{Occ}_{x,y} \mid S_1, \ldots, S_T).

• Using the selected values, fill the grid points with a grayscale color

corresponding to the probability.

2See Appendix E for the complete proof


5.2.4 Occupancy-Grid Advantages

• It is independent of the objects' shape; it can draw both curves and straight lines with the same accuracy.

• It is applicable whether a map already exists or not; in other words, we could use this method to update an existing map if the environment is dynamic.

• It treats the problem of uncertainty with an acceptable solution.

• It could be used, with some modifications, to build 3D maps. In this case, we assume that the occupancy probability of a cell is directly proportional to the height of that cell; this assumption could give good results, as the ultrasonic sensor cone is 3D and its readings depend on the height of the object.

5.3 Navigation

A common technique for exploration strategies is to extract frontiers between known and unknown areas and to visit the nearest unexplored place. Another simple technique is the wall follower. In this technique, the robot is assumed to begin its motion at a point where the wall is to its right. The technique guides the robot to follow the wall such that the wall always remains to the right of the robot. The robot keeps moving until it reaches its starting location again. This technique may not cover the whole area if some locations are far from the wall, but for simplicity, the wall follower has been chosen as the navigation technique. The frontier-based approach is left as future work.

Below is pseudocode for the wall-follower technique:

WallFollow
  For i = 1 to number_of_steps
    rotate sensor to forward direction and read it (Forward Read)
    rotate sensor to right direction and read it (Right Read)
    if Forward Read < forward_threshold then
      rotate left 90 degrees
    else if Right Read > right_threshold then
      rotate right 90 degrees
    end if
    move one step
  Loop
RETURN

This pseudocode checks for two cases. The first is when the forward sensor reading becomes lower than a certain threshold, meaning that there is a wall in front of the robot. The second is when the right sensor reading becomes higher than a certain threshold, meaning that the wall to the right of the robot has just ended.
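The decision logic of the pseudocode can be captured in a small function (an illustrative Python sketch; the function name, action strings, and default threshold values are ours):

```python
def wall_follow_action(forward_read, right_read,
                       forward_threshold=20.0, right_threshold=30.0):
    """Decide the next action for one wall-follower step.

    A small forward reading means a wall ahead, so turn left; a large
    right reading means the wall on the right just ended, so turn right;
    otherwise keep the wall on the right and step forward."""
    if forward_read < forward_threshold:
        return "rotate left 90"
    elif right_read > right_threshold:
        return "rotate right 90"
    return "move one step"
```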

5.4 Simulation Results

Simulation is an important step before real implementation, used to validate and verify the mapping methods. Simulation ensures that the methods are valid and well implemented, independently of robot hardware problems. The next step is to apply the validated and verified mapping methods in a real application with the robot itself; the real experiments are presented in Chapter 6.

The free trial version of the MobotSim simulator has been used to simulate the robot's motion and sensors. The following figures show the simulation results.

Figure 5.7 shows the result of simulating the occupancy grid method, and Figure 5.8 shows the result of simulating the points locality method. Note that in the occupancy grid there is no constraint on the shape of the border; it may consist of either lines or curves. The points locality method will not give the same result if the border contains curves. In general, the simulation results are encouraging.


Figure 5.7: Simulation Result for Occupancy Grid Method


Figure 5.8: Simulation Result for Points Locality Method


Chapter 6

Experiments and Results

This chapter illustrates some experiments in an environment prepared to represent small-scale indoor areas. These experiments target examining the system behavior, determining the points of weakness, and discovering any recommended refinements. The experiments were performed using the following components:

• Parallax Inc.'s BOE-BOT™ autonomous wheeled robot

• PING)))™ Ultrasonic Range Finder

• Hitachi HM55B Compass Module

A surface of white foam plates is used as a nearly ideal surface: it ensures a small amount of slipping as well as providing a plane surface for the robot to move on. Blocks of small stones are used to build walls and objects. The C# map-building application was developed using the Microsoft Visual Studio 2005 Express Edition IDE.

6.1 Experiment 1:

• Map Size: 1×1 meter

• Step Length: 15 cm

• Number of sensor readings: 5 per step

• Number of Steps: 14

• Total Time: 1:40 minutes


Figure 6.1: Generated Maps for Experiment 1 using Occupancy Grid Method

Figure 6.2: Generated Maps for Experiment 1 using Points Locality Method

The map in this experiment was merely an empty square room. Figure 6.1 shows the resulting map using the occupancy grid method. Figure 6.2 shows the resulting map using the points locality method.

6.2 Experiment 2:

• Map Size: 1×2 meter

• Step Length: 15 cm

• Number of sensor readings: 5 per step

• Number of Steps: 28

• Total Time: 4:30 minutes

Figure 6.3 shows the actual map. Figure 6.4 shows the resulting map using the occupancy grid method. Figure 6.5 shows the resulting map using the points locality method.

Figure 6.3: Actual Map for Experiment 2

Figure 6.4: Generated Maps for Experiment 2 using Occupancy Grid Method

Some analysis of the results of these experiments has been performed and is presented in Chapter 7.


Figure 6.5: Generated Maps for Experiment 2 using Points Locality Method


Chapter 7

Error Analysis

This chapter presents the experiments used to estimate the errors in robot motion, sensor measurements, and the resulting maps. Hardware errors in the robot's servo motors and sensor measurements are considered pivotal factors with a significant effect on the resulting maps. Therefore, several experiments were performed to estimate the values of these different errors, as illustrated in the following two sections. In order to judge the efficiency of the system, the errors in the resulting maps are considered as well.

7.1 Robot Motion Errors

Errors in motion result from the servo motors' rotation and the friction of the wheels with the ground surface. Some experiments have been performed to measure the error in distance and robot position after motion. By moving the robot in a straight line for different distances, the following results were obtained.

Table 7.1 and Figure 7.1 show the error in robot motion. The average error is 1.577777778 cm, and the error standard deviation is 1.126696252 cm.

As shown in Figure 7.1, the error in the travelled distance remains negligible up to a certain threshold, after which the error increases. That threshold is 30 cm. Hence, for accurate operation, the distance should be kept below this threshold.


Required Distance (cm) Travelled Distance (cm) Absolute Error (cm)

10 8.9 1.1

15 15.3 0.3

20 20.1 0.1

30 30.8 0.8

40 41.5 1.5

50 51.9 1.9

60 62.5 2.5

70 72.5 2.5

80 83.5 3.5


Table 7.1: Error in Robot Motion
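The quoted average and standard deviation can be reproduced from the nine error values in Table 7.1 (a quick Python check; note that the standard deviation is the sample one, computed with n − 1):

```python
errors = [1.1, 0.3, 0.1, 0.8, 1.5, 1.9, 2.5, 2.5, 3.5]  # Table 7.1, cm
mean = sum(errors) / len(errors)
variance = sum((e - mean) ** 2 for e in errors) / (len(errors) - 1)
std = variance ** 0.5
# mean ≈ 1.5778 cm and std ≈ 1.1267 cm, matching the figures in the text
```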

Figure 7.1: Error in Robot Motion

7.2 Sensors Measurements Errors

7.2.1 Ultrasonic Range Finder

Errors in the measurements of the ultrasonic sensor result in errors in the locations of the sensed objects, which in turn cause more errors in the resulting map. The distance measured by the sensor depends on the time elapsed between sending the wave and receiving the reflected one, and on the velocity of that wave. Any error in measuring the time, or any change in the medium that affects the velocity of the waves or their reflection, will affect the measured distances. Some experiments have been performed to measure the error in the measurements of the ultrasonic range finder. By placing the sensor in front of some solid bodies and measuring the distance, the following results were obtained.

Table 7.2 shows the error in the ultrasonic sensor readings. The average error is 0.658545455 cm, and the error standard deviation is 0.702285464 cm.

Actual Distance(cm) Measured Distance(cm) Absolute Error(cm)

3 3.23 0.23

5 5.27 0.27

8 7.88 0.12

10 9.724 0.276

11 10.812 0.188

13 12.886 0.114

15 15.062 0.062

17 16.014 0.986

20 18.564 1.436

23 21.386 1.614

25 23.052 1.948

Table 7.2: Error in Ultrasonic Sensor Readings

Figure 7.2: Error in Ultrasonic Sensor Readings

As shown in Figure 7.2, the error in the ultrasonic sensor readings remains negligible up to a certain threshold, after which the error increases. That threshold is 15 cm. Hence, for accurate operation, distances beyond this threshold shouldn't be considered.


7.2.2 Digital Compass

The digital compass is used to identify and adjust the robot's direction during its motion. Its measurements depend on the sensitivity of the sensor to the magnetic field, and it suffers from many sources of interference, such as the magnetic fields resident to the module's PCB and to the carrier board it is mounted on. Nearby jumper wires and batteries are other sources of interference. Since the compass is highly affected by the surrounding environment, no specific experiments were performed to estimate this error.

7.3 Application Results Errors

By measuring the difference between the dimensions of different parts of the map and the actual dimensions, we can determine the errors and how closely the resulting map matches the actual environment. Not only the dimensions are important, but also the level of recognized detail. Here are some results from the experiments illustrated in Chapter 6.

Table 7.3 and Figure 7.3 show the error in wall dimensions. The average error is 11 cm, and the error standard deviation is 6.542170894 cm.

Table 7.4 and Figure 7.4 show the error in the angles between walls. The average error is 7.471428571°, and the error standard deviation is 8.435384655°.

Actual Dimension (cm) Measured Dimension (cm) Absolute Error (cm)

100 80 20

20 26 6

20 24 4

40 30 10

200 182 18

100 92 8

Table 7.3: Experiment 1: Error in Walls Dimensions

Some details, such as corners and curves, may suffer from some distortion. Parts with dimensions smaller than the robot's motion step may not be fully detected.


Figure 7.3: Experiment 1: Error in Walls Dimensions

Actual Angle (degree) Measured Angle (degree) Error (degree)

90 91 1

90 96.3 6.3

90 80 10

90 91 1

90 90 0

0 -10 10

0 -24 24

Table 7.4: Experiment 1: Error in Angles Between Walls

Figure 7.4: Experiment 1: Error in Angles Between Walls


Chapter 8

Extending Robot Capabilities

8.1 Communication Design Considerations

To fully understand how the communication between the robot and the computer is achieved, more details should be given about how data sending and receiving are performed. The USART module of the robot can receive serial data from the eb500 module. When the robot issues a read command to the USART, it blocks, waiting until data arrives. After a certain timeout, the robot assumes there is no data to receive and continues executing other commands. The USART doesn't provide a means to determine whether there is data to read or not.

Several issues arise in working with the serial port. Much of this material is not explained at all in the Stamp manuals, especially the timing issues. Difficulties with timing on the serial ports can be one of the most frustrating aspects of developing Stamp applications. It usually comes down to quantitative questions of how fast data can be received, recognized, processed, and transmitted.

As mentioned in Chapter 3, motion is achieved by generating a train of pulses to each of the robot's motors. The speed of a motor depends precisely on the width of the pulse, and the calibration of the motion is highly affected by this width. Therefore, timing is of great importance in the process of moving the robot.
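For continuous-rotation hobby servos like those on the BOE-BOT, the pulse widths are typically around 1.5 ms for stop and roughly 1.3–1.7 ms for full speed in either direction, repeated about every 20 ms. These specific values are our assumption, not the project's calibration figures; the mapping can be sketched as:

```python
def servo_pulse_ms(speed, stop_ms=1.5, span_ms=0.2):
    """Map a speed in [-1.0, 1.0] to a servo pulse width in milliseconds.

    speed = 0 gives the assumed 1.5 ms stop pulse; +1 and -1 give the
    assumed full-speed pulses of about 1.7 ms and 1.3 ms respectively."""
    if not -1.0 <= speed <= 1.0:
        raise ValueError("speed must be in [-1, 1]")
    return stop_ms + span_ms * speed
```

Because the pulse train must be regenerated every cycle with this precision, any delay in the control loop directly disturbs the motion.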

Now consider this scenario. The computer issues a move command to the robot (Figure 8.1). Since the timing of the generated pulses is of great importance, the robot shouldn't do anything else while it is moving. That is because the BASIC Stamp doesn't support timer interrupts, and hence the timing must be maintained by the running code. If the computer then issues a stop command to the robot (Figure 8.2), the latter cannot receive the command, since the microcontroller is busy moving the robot.

Figure 8.1: Computer issues a move command.

Figure 8.2: Computer issues a stop command but the microcontroller isn't listening.

There are three approaches to solving this problem.

First Approach: Limit The Commands

Limit the commands the robot provides to small, fully predefined tasks. The computer isn't allowed to issue a general move command; the command will be, for example, "move 50 cm" (Figure 8.3). That way, the robot moves 50 cm, then stops automatically and waits for another command from the computer. This approach guarantees non-interrupted motion, but it limits the types of commands the robot provides. Besides, it increases the communication traffic between the robot and the computer.

Figure 8.3: Limit the commands to fully predefined tasks.

Second Approach: Add Another Microcontroller

Another simple microcontroller can be made responsible for the motion of the robot, while all other tasks and the communication with the computer are handled by the first microcontroller. This approach is valid provided that the second microcontroller supports interrupts. The scenario is modified as follows. The computer issues a move command to the first microcontroller. The latter interrupts the second microcontroller and forwards the move command to it (Figure 8.4). In this state, the first microcontroller is waiting for another command from the computer, and the second microcontroller does nothing but move the robot. If, at any instant, the computer issues a stop command to the first microcontroller, the latter interrupts the second microcontroller and forwards the stop command to it.

Figure 8.4: The Stamp forwards the commands to another microcontroller that supports interrupts.

Third Approach: Microcontroller Replacement

Replace the microcontroller with another one that supports timer interrupts.

The major challenge with this approach is to be able to use all other sensors

and peripherals smoothly with the new microcontroller. Most of the hard-

ware components were bought from the same vendor to avoid incompatibility

among different components. At that vendor we couldn’t find such kind of

microcontrollers that supports timer interrupts.

Although the first approach was sufficient for this project, this may not be the case in other situations. The second approach is investigated in more detail in the coming sections.

8.2 PIC Microcontroller Module

Figure 8.5: PIC microcontroller, Model: PIC 16F628A

In this approach, a PIC 16F628A microcontroller, packaged in an 18-pin DIP, is used (Figure 8.5). It may be programmed using a suitable version of assembly language. The main specifications of this module are:

Clock Speed an external oscillator can be used (range 4 MHz – 20 MHz)

RAM Size 224 bytes

EEPROM Size 128 bytes

Flash (Program) Size 2048 words

Number of I/O pins 16

USART module

Timers 3

External Interrupt Pin

8.2.1 Memory organization

Program Memory The PIC16F628A has a 13-bit program counter capable

of addressing an 8K×14 program memory space. Only the first 2K×14

(0000h-07FFh) are physically implemented. Accessing a location above

these boundaries will cause a wrap around within the first 2K×14 space.

The reset vector is at 0000h and the interrupt vector is at 0004h.


Data Memory The data memory is partitioned into four banks, which contain the GPRs1 and the SFRs2. There are GPRs implemented as static RAM in each bank.

8.3 Communication between the Modules

Figure 8.6: Stamp-PIC Communication Scheme

As mentioned in Section 8.1, the proposed design uses two microcontrollers to accomplish the required functionality of the components. The two microcontrollers communicate through a serial protocol. Since standard communication protocols between digital devices, such as I2C, are too complicated for such a relatively simple communication task, a new, simpler protocol has been devised. The protocol uses the external interrupt pin of the PIC microcontroller. As shown in Figure 8.6, for simplicity, three lines are wired between the two microcontrollers. The first is the interrupt line, which the BASIC Stamp uses to interrupt the PIC. The other two are the Clock and Data wires, which are used for synchronous data transmission between the two microcontrollers. A communication scenario will fully explain the protocol.

• The Stamp activates the interrupt line and waits long enough for the PIC to be ready to receive the data.

• The Stamp begins sending a command to the PIC using the Clock and Data lines.

• The data are sent in units of 8 bits. Longer data or command parameters are sent in separate consecutive bytes.

• The Stamp waits for an ACK from the PIC to make sure the PIC successfully received the command.

• The PIC either receives the command correctly and acknowledges positively, or receives an invalid byte and acknowledges negatively.

1GPR stands for General Purpose Register. 2SFR stands for Special Function Register.
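The synchronous Clock/Data transfer in this scenario can be modelled in a few lines (an illustrative Python sketch of the idea, not the project's actual code; the bit order and sampling edge are our assumptions):

```python
def send_byte(byte):
    """Serialize one 8-bit command into (clock, data) line states, MSB first."""
    states = []
    for i in range(7, -1, -1):
        bit = (byte >> i) & 1
        states.append((0, bit))  # put the bit on Data while Clock is low
        states.append((1, bit))  # raise Clock: the receiver samples Data
    return states

def receive_byte(states):
    """Rebuild the byte by sampling Data on each rising Clock edge."""
    byte, prev_clock = 0, 0
    for clock, data in states:
        if clock == 1 and prev_clock == 0:  # rising edge detected
            byte = (byte << 1) | data
        prev_clock = clock
    return byte
```

Because the receiver samples only on clock edges, the sender fully controls the pace, which is what makes this scheme simple enough for two microcontrollers with no shared timer.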

8.4 Reliability in the devised protocol

Besides the care given to the timing of the serial communication, some extra checking is done to ensure reliable data exchange. The first additional method used to ensure reliability is acknowledged transmission. Other methods are explained below.

8.4.1 Reliability in commands

The code words corresponding to the commands have been selected to provide the maximum possible Hamming distance between code words, which ensures the maximum error detection probability. Table 8.1 shows the selected code words, which provide a 4-bit minimum Hamming distance for 16 8-bit commands.
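The claimed 4-bit minimum distance of the code words in Table 8.1 can be checked mechanically (a short Python verification we added; it is not part of the project code):

```python
# The 16 command code words from Table 8.1, written as binary literals.
CODE_WORDS = [
    0b00000000, 0b11110000, 0b11001100, 0b00111100,
    0b10101010, 0b01011010, 0b01100110, 0b10010110,
    0b01101001, 0b10011001, 0b10100101, 0b01010101,
    0b11000011, 0b00110011, 0b00001111, 0b11111111,
]

def hamming(a, b):
    """Number of bit positions in which two 8-bit words differ."""
    return bin(a ^ b).count("1")

# Minimum distance over all distinct pairs; 4 means any 1-, 2-, or
# 3-bit corruption of a code word is guaranteed to be detected.
min_distance = min(hamming(a, b)
                   for i, a in enumerate(CODE_WORDS)
                   for b in CODE_WORDS[i + 1:])
```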

8.4.2 Reliability in data

The data exchanged between the two microcontrollers are in the form of 8-bit units. In order to provide error checking, the data bits are reduced to 7 bits to allow 1 parity bit. Parity generation and checking have been implemented on both sides, the BASIC Stamp side and the PIC side.

On the BASIC Stamp side, received data is simply passed to the computer, which checks the parity and decides the next step. In the other direction, the parity bit is added by the computer before sending the data to the BASIC Stamp.

On the PIC side, the problem is more complex: the PIC is responsible for the whole matter, and parity generation and checking must be done in PIC assembly language.

Command Code word

1 00000000
2 11110000
3 11001100
4 00111100
5 10101010
6 01011010
7 01100110
8 10010110
9 01101001
10 10011001
11 10100101
12 01010101
13 11000011
14 00110011
15 00001111
16 11111111

Table 8.1: Code words for PIC commands

The following code snippet minimizes the number of calculations needed to generate the parity bit:

CalculateParity

; 7-bit parity

; This routine will calculate the parity of a 7-bit

; integer "CalulateParityByte" and place the result in the MSB position

bcf CalulateParityByte,7 ;assume the parity is even

;Note: for odd parity, use bsf

; assume the bits in "CalulateParityByte" are abcdefgh

swapf CalulateParityByte,w ;W = efghabcd

xorwf CalulateParityByte,w ;W = ea.fb.gc.hd.ea.fb.gc.hd

; where ea means e^a, etc

movwf CalulateParityTemp ;

rlf CalulateParityTemp,f ;CalulateParityTemp =

;fb.gc.hd.ea.fb.gc.hd.??

rlf CalulateParityTemp,f ;CalulateParityTemp =

;gc.hd.ea.fb.gc.hd.??.ea


xorwf CalulateParityTemp,f ;CalulateParityTemp =

;gcea.hdfb.gcea.hdfb.gcea.?.?

;again, gcea means g^c^e^a

rlf CalulateParityTemp,w ;w = hdfb.gcea.hdfb.gcea.hdfb.?.fb

xorwf CalulateParityTemp,w ;w = abcdefgh.abcdefgh.....

;ie, the upper 5-bits of w each

;contain the parity calculation.

andlw 0x80 ;We only need one of them

iorwf CalulateParityByte,w ;copy to the MSB of the byte to send.

return

The parity check uses the same routine, but it stores the original data and then compares it to the recalculated result.
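The same 7-bit even-parity scheme is easy to express in a higher-level sketch (illustrative Python, ours, mirroring what the assembly routine computes):

```python
def add_parity(value):
    """Place the even-parity bit of a 7-bit value in bit 7, as the
    assembly routine does: after this, the whole byte has even weight."""
    value &= 0x7F
    parity = 0
    for i in range(7):          # XOR the seven data bits together
        parity ^= (value >> i) & 1
    return value | (parity << 7)

def check_parity(byte):
    """True when the received byte has even overall parity."""
    return bin(byte).count("1") % 2 == 0
```

Any single-bit corruption flips the overall parity and is therefore detected by `check_parity`.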

8.5 Progress Status

Due to time constraints, this approach isn't fully implemented yet. The following steps of the new approach are already finished:

• The design and implementation of the communication protocol between the two microcontroller modules, as mentioned before.

• The design, implementation, and testing of a PIC PCB3 with a suitable size to be fixed to the BOE-BOT body, Figure 8.7.

• Testing the communication between the two modules.

• Testing the PIC's capability of controlling the servo motors.

Figure 8.7: PIC microcontroller Board

The following steps remain to complete the implementation of the new approach:

3PCB stands for Printed Circuit Board


• Implementing the PIC program to control the robot's motors.

• Calibrating the robot's motion with the new controller.

• Testing the mapping application with the modified system.


Chapter 9

Summary and Future Work

9.1 Summary

The objective of the project was to use a mobile robot to build a map. The proposed system is divided into two major entities, each of which has its own characteristics and challenges. Besides a computer, a robot with motors and sensors attached to it was used to accomplish the objective of the project. The two entities were to communicate together, and the proposed medium of communication was Bluetooth.

The most important task concerning the robot was how to fully control it, as well as the sensors attached to it. The first stage of achieving this task consisted of some minor subtasks, beginning with calibrating the motion and ending with testing each robot component separately. Up to this point, no exhaustive design efforts were exerted; the effort was targeted at getting familiar with the new companion, the robot. If a name should be given to this stage, it would be "Collecting Information". Later on, more time was dedicated to the design. Making all the required sensors work together was not a piece of cake after all; that was the task of this phase.

At this point, the robot was ready to obey its master, the computer. The missing part was who would convey the orders of the computer to the robot, so it was time to begin testing the Bluetooth communication. Two different Bluetooth modules were used, one at the robot and the other at the computer, and both were tested for working together seamlessly.

Regarding the communication between the two entities, a robot API was proposed. All commands that the robot can fulfill were gathered into a layer between the robot and the computer application, and the computer application uses this API to control the robot.

Parallel to the process of dealing with the robot, another process was investigating different methods of exploration and map-building. It was also necessary to select the applicable methods with respect to time and resource constraints. Upon selection, the implementation phase took place: the applicable methods of exploration and map-building were implemented using the proposed API. These methods were simulated on the computer before the phase of real testing. Since the simulation results were highly acceptable, the selected map-building methods were then tested and compared across more than one test case.

During the whole process of designing and implementing this project, some other objectives were considered. The most important of these was how to make the design changeable and extendable. This objective was successfully achieved: to some extent, every module in the system can be replaced with another one without affecting other modules, provided that the new module supports the same functionality as the former.

9.2 Future Work

• Implementing a robust exploration algorithm that could cover the whole map without any constraints on the map shape. A frontier-based exploration algorithm is suggested, especially as it benefits from the occupancy-grid method's data structure.

• Using multiple robots to build the map. The robots would communicate with each other and divide the tasks between them. A multiple-robot approach could be used to build the map faster and to increase accuracy, as the error in robot localization accumulates over time.

• Improving the accuracy of the resulting maps using the error analysis results and statistical methods.

• Using a laser range finder to overcome the uncertainty in the ultrasonic sensor, which could produce more accurate maps using the points locality method.

• Enhancing the points locality method to estimate curved contours.

• Building 3D maps using a camera to capture photos of the navigated area. Another wireless communication technology with a higher bandwidth than Bluetooth would be required.

• Using another microcontroller to make use of the Bluetooth module to remotely program the robot while it is on-site.

• Finishing the second approach of the system design to add another controller.


Appendix A

Bluetooth Overview

A.1 Introduction

To put it simply, Bluetooth is a technology standard that allows electronic devices to communicate with each other using short-range radio. It is often referred to as a "cable replacement" technology, because it is commonly used to connect things, such as cameras, headsets, and mobile phones, that have traditionally been connected by wires. But Bluetooth is much more than simply a way to cut the cord between today's electronic devices: it is an enabling technology that will take these devices to new levels of productivity and functionality and enable a whole new class of devices designed with communications and connectivity in mind.

The Bluetooth Special Interest Group (SIG) defines Bluetooth a bit more broadly as the "worldwide specification for small-form-factor, low-cost radio solutions that provide links between mobile computers, mobile phones, other portable devices, and connectivity to the Internet." In defining Bluetooth, the SIG has taken a very different approach than the IEEE 802.11 committees did. Rather than building Bluetooth as an adjunct to TCP/IP, it was defined as a standalone protocol stack that includes all the layers required by an application. This means that it encompasses not only wireless communications but also service advertisement, addressing, routing, and a number of application-level interfaces referred to as profiles.

Bluetooth is based on a frequency hopping spread spectrum (FHSS) mod-

ulation technique. The term spread spectrum describes a number of methods

for spreading a radio signal over multiple frequencies, either simultaneously

(direct sequence) or in series (frequency hopping). Wi-Fi devices are based on direct-sequence spread spectrum transmission, which uses multiple channels simultaneously. While this technique increases the speed of transmission (for example, in Wi-Fi, from 1.5 Mbps to 11 Mbps), it is more susceptible to interference from other radio sources as well as being a greater source of interference

to the surrounding area. In contrast, Bluetooth utilizes the frequency hop-

ping method of spread spectrum which uses multiple radio channels to reduce

interference and increase security. The signal is rapidly switched from chan-

nel to channel many times per second in a pseudo-random pattern that is

known by both the sender and receiver(s). This provides robust recovery of

packet errors caused by interference from another radio source at a particular

frequency. Also, data is generally more secure because it is not possible to

receive more than a fraction of the data unless the hopping pattern is known.

Bluetooth utilizes frequency hopping in the 2.4GHz radio band and hops at

a relatively fast pace with a raw data rate of about 1 Mbps. This translates

to about 700 kbps of actual useful data transfer.
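The shared-pattern idea can be illustrated with a toy sketch. Note that Bluetooth's actual hop-selection algorithm is derived from the master's device address and clock, not from a general-purpose PRNG as below; this is an illustration only:

```python
import random

def hop_sequence(seed, n_channels=79, length=10):
    """Derive a pseudo-random channel sequence from a shared seed.

    Toy illustration: real Bluetooth derives its hop sequence from the
    master's address and clock, not from Python's PRNG.
    """
    rng = random.Random(seed)
    return [rng.randrange(n_channels) for _ in range(length)]

# Both ends of the link share the seed, so they hop in lockstep;
# an eavesdropper without the seed catches only isolated fragments.
sender = hop_sequence(seed=0xC0FFEE)
receiver = hop_sequence(seed=0xC0FFEE)
assert sender == receiver
```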

A.2 Bluetooth Protocol Stack

Here is an outline of the different levels in the Bluetooth protocol stack.

As shown in Figure A.1, the main layers of the stack are:

• Radio Layer

When looking at the different layers of the Bluetooth protocol stack,

you will always find the radio layer first. Everything in Bluetooth runs

over the Radio Layer, which defines the requirements for a Bluetooth

radio transceiver, which operates in the 2.4GHz band. The radio

layer defines the sensitivity levels of the transceiver, establishes the

requirements for using Spread-spectrum Frequency Hopping and clas-

sifies Bluetooth devices into three different power classes:

– Power Class 1: long-range devices (100 m)

– Power Class 2: normal or standard range devices (10 m)

– Power Class 3: short-range devices (10 cm)

• Baseband Layer

The next "floor" in the Bluetooth protocol stack is the Baseband Layer,

Figure A.1: Bluetooth Protocol Stack

which is the physical layer of Bluetooth. It is used as a link controller, which works with the link manager to carry out routines like

creating link connections with other devices. It controls device address-

ing, channel control (how devices find each other) through paging and

inquiry methods, power-saving operations, and also flow control and

synchronization among Bluetooth devices.

• Link Manager Protocol (LMP)

A Bluetooth device's Link Manager (LM) carries out link setup, authentication, link configuration, and other protocols. It discovers other LMs within the area and communicates with them via the Link Manager Protocol (LMP).

• Host Controller Interface (HCI)

Next in the protocol stack, above the LMP, is the Host Controller Interface (HCI), which is there to allow command-line access to the Baseband Layer and LMP for control and to receive status information. It's made up of three parts: 1) The HCI firmware, which is part of the

actual Bluetooth hardware, 2) The HCI driver, which is found in the

software of the Bluetooth device, and 3) The Host Controller Transport

Layer, which connects the firmware to the driver.

• Logical Link Control and Adaptation Protocol (L2CAP)

Above the HCI level is the Logical Link Control and Adaptation Pro-

tocol (L2CAP), which provides data services to the upper level host

protocols. The L2CAP plugs into the Baseband Layer and is located

in the data link layer, rather than riding directly over LMP. It provides

connection-oriented and connectionless data services to upper layer pro-

tocols.

Protocol types are first identified in the L2CAP. Data services are provided here using protocol multiplexing, segmentation and reassembly operations, and group abstractions. L2CAP allows higher-level

protocols and applications to send and receive data packets up to 64

kilobytes. The L2CAP spends a lot of its time handling segmentation

and reassembly tasks.

• Radio Frequency Communication (RFCOMM)

Above L2CAP, the RFCOMM protocol is what actually makes upper

layer protocols think they're communicating over an RS-232 wired serial interface, so there's no need for applications to know anything about

Bluetooth.

• Service Discovery Protocol (SDP)

Also relying on L2CAP is the Service Discovery Protocol (SDP). The

SDP provides a way for applications to detect which services are avail-

able and to determine the characteristics of those services.

• OBEX Object Exchange Protocol (OBEX)

It is a widely used protocol for simple file transfers between mobile

devices. Its main use is in infrared communication, where it is used

for generic file transfers between notebooks or PDAs, and for sending

business cards or calendar entries between cellular phones and other

devices with PIM applications.

A.3 Bluetooth Profiles

Bluetooth devices can support interoperability with one or more types of

devices. In order for two Bluetooth devices to communicate with each other,

they must share at least one common profile, e.g., the Serial Port Profile (SPP), which is one of the earliest and most widely supported profiles.

As previously mentioned, there are a number of profiles that sit roughly

on top of the L2CAP layer that provide much of the power (and also the

complexity) of the Bluetooth protocols.

These profiles are the primary entry into the stack for an application. Es-

sentially, they define the set of services that are available to that application.

Currently there are more than 25 different profiles defined or in the process

of being defined by the Bluetooth SIG. With so much variety, acquiring an

in-depth understanding of Bluetooth is not a trivial task. However, the abstraction provided by a single profile lets an application use that profile without such detailed knowledge. There are a number of profiles that are

exposed in very familiar forms. For instance, the SPP profile enables the

device implementing it to appear like a traditional serial port. This virtually

eliminates the need for the user to have specific Bluetooth knowledge, and

allows the radios to be integrated into applications very quickly.

A.4 Security in Bluetooth

Bluetooth security is defined by three main elements: availability, access,

and confidentiality. It is important to distinguish between these elements

because Bluetooth security is also highly configurable so that it can meet the

needs of devices in many different scenarios. An understanding of the basics

will provide the knowledge that you need to choose a security strategy for

your device.

The first important element of Bluetooth security is availability. If a de-

vice cannot be seen or connected with, it is obviously quite secure. This is a

very coarse level of control, but it is also quite effective and can be used in

combination with other security features.

The second and most complex element of Bluetooth security is access con-

trol. This type of security is only relevant when the module is connectable

and is designed to provide protection in this case. The general idea is that

remote devices must become trusted before they will be allowed to connect

and communicate with the device. In order to become trusted, a remote de-

vice must present a passkey that matches the stored local passkey. This only

needs to be done once, as both devices will remember their trusted status

and allow future connections with that specific device without exchanging

passkeys again.

The last element of Bluetooth security is confidentiality. Once a link with

a trusted device has been established, it may be important to know that the

data being transmitted cannot be intercepted by a third party. All transmit-

ted data can be encrypted.

Appendix B

Serial Communication Interface

(USART)

Copyright 1996 Frank Durda IV <[email protected]>, All Rights Reserved. 13 January 1996.

The Universal Asynchronous Receiver/Transmitter (UART) controller is

the key component of the serial communications subsystem of a computer.

The UART takes bytes of data and transmits the individual bits in a sequen-

tial fashion. At the destination, a second UART re-assembles the bits into

complete bytes.

Serial transmission is commonly used with modems and for non-networked

communication between computers, terminals and other devices.

There are two primary forms of serial transmission: Synchronous and

Asynchronous. Depending on the modes that are supported by the hard-

ware, the name of the communication sub-system will usually include a A

if it supports Asynchronous communications, and a S if it supports Syn-

chronous communications. Both forms are described below.

Some common acronyms are:

UART Universal Asynchronous Receiver/Transmitter

USART Universal Synchronous-Asynchronous Receiver/Transmitter

B.1 Synchronous Serial Transmission

Synchronous serial transmission requires that the sender and receiver share

a clock with one another, or that the sender provide a strobe or other timing

signal so that the receiver knows when to read the next bit of the data. In

most forms of serial Synchronous communication, if there is no data avail-

able at a given instant to transmit, a fill character must be sent instead so

that data is always being transmitted. Synchronous communication is usu-

ally more efficient because only data bits are transmitted between sender and receiver, but synchronous communication can be more costly if extra

wiring and circuits are required to share a clock signal between the sender

and receiver.

A form of Synchronous transmission is used with printers and fixed disk

devices in that the data is sent on one set of wires while a clock or strobe

is sent on a different wire. Printers and fixed disk devices are not normally

serial devices because most fixed disk interface standards send an entire word

of data for each clock or strobe signal by using a separate wire for each bit

of the word. In the PC industry, these are known as Parallel devices.

The standard serial communications hardware in the PC does not sup-

port Synchronous operations. This mode is described here for comparison

purposes only.

B.2 Asynchronous Serial Transmission

Asynchronous transmission allows data to be transmitted without the sender

having to send a clock signal to the receiver. Instead, the sender and receiver

must agree on timing parameters in advance and special bits are added to

each word which are used to synchronize the sending and receiving units.

When a word is given to the UART for Asynchronous transmissions, a

bit called the ”Start Bit” is added to the beginning of each word that is to be

transmitted. The Start Bit is used to alert the receiver that a word of data is

about to be sent, and to force the clock in the receiver into synchronization

with the clock in the transmitter. These two clocks must be accurate enough to not have the frequency drift by more than 10% during the transmission of the remaining bits in the word.

After the Start Bit, the individual bits of the word of data are sent, with

the Least Significant Bit (LSB) being sent first. Each bit in the transmis-

sion is transmitted for exactly the same amount of time as all of the other

bits, and the receiver looks at the wire at approximately halfway through

the period assigned to each bit to determine if the bit is a 1 or a 0. For

example, if it takes two seconds to send each bit, the receiver will examine

the signal to determine if it is a 1 or a 0 after one second has passed, then

it will wait two seconds and then examine the value of the next bit, and so on.

The sender does not know when the receiver has ”looked” at the value of

the bit. The sender only knows when the clock says to begin transmitting

the next bit of the word.

When the entire data word has been sent, the transmitter may add a

Parity Bit that the transmitter generates. The Parity Bit may be used by

the receiver to perform simple error checking. Then at least one Stop Bit is

sent by the transmitter.

When the receiver has received all of the bits in the data word, it may

check for the Parity Bits (both sender and receiver must agree on whether

a Parity Bit is to be used), and then the receiver looks for a Stop Bit. If

the Stop Bit does not appear when it is supposed to, the UART considers

the entire word to be garbled and will report a Framing Error to the host

processor when the data word is read. The usual cause of a Framing Error

is that the sender and receiver clocks were not running at the same speed,

or that the signal was interrupted.

Regardless of whether the data was received correctly or not, the UART

automatically discards the Start, Parity and Stop bits. If the sender and

receiver are configured identically, these bits are not passed to the host.

If another word is ready for transmission, the Start Bit for the new word

can be sent as soon as the Stop Bit for the previous word has been sent.

Because asynchronous data is ”self synchronizing”, if there is no data to

transmit, the transmission line can be idle.
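The framing described above (start bit, LSB-first data bits, optional parity bit, stop bit) can be sketched as follows. The helper `frame_word` is a hypothetical illustration, not part of any UART driver:

```python
def frame_word(data, bits=8, parity=None):
    """Build the line states for one asynchronous frame, as a list of 0/1.

    The idle line is high (1); the start bit pulls it low (0); data bits
    go out LSB first; an optional parity bit and a stop bit (1) follow.
    parity may be None, 'even', or 'odd'.
    """
    frame = [0]                                       # start bit
    frame += [(data >> i) & 1 for i in range(bits)]   # data bits, LSB first
    if parity is not None:
        ones = sum(frame[1:])                         # count of 1s in the data
        frame.append(ones % 2 if parity == 'even' else 1 - ones % 2)
    frame.append(1)                                   # stop bit
    return frame

# 'A' (0x41) with 8 data bits and no parity: 10 line states in total.
print(frame_word(0x41))   # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```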

B.3 Other UART Functions

In addition to the basic job of converting data from parallel to serial for

transmission and from serial to parallel on reception, a UART will usually

provide additional circuits for signals that can be used to indicate the state

of the transmission media, and to regulate the flow of data in the event that

the remote device is not prepared to accept more data. For example, when

the device connected to the UART is a modem, the modem may report the

presence of a carrier on the phone line while the computer may be able to

instruct the modem to reset itself or to not take calls by raising or lowering

one more of these extra signals. The function of each of these additional

signals is defined in the EIA RS232-C standard.

B.4 Bits, Baud and Symbols

Baud is a measurement of transmission speed in asynchronous communica-

tion. Because of advances in modem communication technology, this term is

frequently misused when describing the data rates in newer devices.

Traditionally, a Baud Rate represents the number of bits that are actually

being sent over the media, not the amount of data that is actually moved

from one DTE¹ device to the other. The Baud count includes the overhead

bits Start, Stop and Parity that are generated by the sending UART and

removed by the receiving UART. This means that seven-bit words of data

actually take 10 bits to be completely transmitted. Therefore, a modem ca-

pable of moving 300 bits per second from one place to another can normally

only move 30 7-bit words if Parity is used and one Start and Stop bit are

present.

If 8-bit data words are used and Parity bits are also used, the data rate

falls to 27.27 words per second, because it now takes 11 bits to send the

eight-bit words, and the modem still only sends 300 bits per second.
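The arithmetic in the preceding paragraphs is easy to reproduce; the helper below is an illustrative sketch of the overhead calculation, not a standard API:

```python
def words_per_second(baud, data_bits, parity=False, stop_bits=1):
    """Effective word rate once the start, parity, and stop bits
    that each word carries are accounted for."""
    bits_per_word = 1 + data_bits + (1 if parity else 0) + stop_bits
    return baud / bits_per_word

# 300 baud, 7 data bits, parity, 1 stop bit -> 10 bits/word -> 30 words/s
print(words_per_second(300, 7, parity=True))   # 30.0
# 300 baud, 8 data bits, parity, 1 stop bit -> 11 bits/word -> ~27.27 words/s
print(words_per_second(300, 8, parity=True))
```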

The formula for converting bytes per second into a baud rate and vice

versa was simple until error-correcting modems came along. These modems

¹ DTE stands for Data Terminal Equipment. A typical Data Terminal Device is a computer.

receive the serial stream of bits from the UART in the host computer (even

when internal modems are used the data is still frequently serialized) and convert the bits back into bytes. These bytes are then combined into packets and sent over the phone line using a Synchronous transmission method.

This means that the Stop, Start, and Parity bits added by the UART in the DTE (the computer) are removed by the sending modem before transmission. When these bytes are received by the remote modem,

the remote modem adds Start, Stop and Parity bits to the words, converts

them to a serial format and then sends them to the receiving UART in the

remote computer, which then strips the Start, Stop and Parity bits.

The reason all these extra conversions are done is so that the two modems

can perform error correction, which means that the receiving modem is able

to ask the sending modem to resend a block of data that was not received

with the correct checksum. This checking is handled by the modems, and

the DTE devices are usually unaware that the process is occurring.

By stripping the Start, Stop and Parity bits, the additional bits of data that

the two modems must share between themselves to perform error-correction

are mostly concealed from the effective transmission rate seen by the sending

and receiving DTE equipment. For example, if a modem sends ten 7-bit

words to another modem without including the Start, Stop and Parity bits,

the sending modem will be able to add 30 bits of its own information that

the receiving modem can use to do error-correction without impacting the

transmission speed of the real data.

The use of the term Baud is further confused by modems that perform

compression. A single 8-bit word passed over the telephone line might rep-

resent a dozen words that were transmitted to the sending modem. The

receiving modem will expand the data back to its original content and pass

that data to the receiving DTE.

Modern modems also include buffers that allow the rate that bits move

across the phone line (DCE² to DCE) to be a different speed than the speed

that the bits move between the DTE and DCE on both ends of the conver-

sation. Normally the speed between the DTE and DCE is higher than the

² DCE stands for Data Communication Equipment. A typical Data Communications Device is a Modem.

DCE to DCE speed because of the use of compression by the modems.

Because the number of bits needed to describe a byte varies during the trip between the two machines, and because of the differing bits-per-second speeds present on the DTE-DCE and DCE-DCE links, using the term Baud to describe the overall communication speed causes problems and can misrepresent the true transmission speed. So Bits Per Second (bps) is

to DCE interface and Baud or Bits Per Second are acceptable terms to use

when a connection is made between two systems with a wired connection,

or if a modem is in use that is not performing error-correction or compression.

Modern high speed modems (2400, 9600, 14,400, and 19,200bps) in re-

ality still operate at or below 2400 baud, or more accurately, 2400 Symbols

per second. High-speed modems are able to encode more bits of data into

each Symbol using a technique called Constellation Stuffing, which is why

the effective bits per second rate of the modem is higher, but the modem

continues to operate within the limited audio bandwidth that the telephone

system provides. Modems operating at 28,800 and higher speeds have vari-

able Symbol rates, but the technique is the same.

B.5 Flow Control

If our DTE-to-DCE speed is several times faster than our DCE-to-DCE speed, the PC can send data to the modem at 115,200 bps faster than the modem can pass it on. Sooner or later data is going to get lost as buffers overflow; thus, flow control is used. Flow control

has two basic varieties, Hardware or Software.

Software flow control, sometimes expressed as Xon/Xoff, uses two characters, Xon and Xoff. Xon is normally indicated by the ASCII 17 character, whereas the ASCII 19 character is used for Xoff. The modem will only have

a small buffer, so when the computer fills it up the modem sends an Xoff character to tell the computer to stop sending data. Once the modem has room for more data, it then sends an Xon character and the computer sends more

data. This type of flow control has the advantage that it doesn’t require any

more wires as the characters are sent via the TD/RD lines. However on slow

links each character requires 10 bits which can slow communications down.
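A toy sketch of the receiving side of Xon/Xoff (the function below is illustrative only, not part of any serial library) just tracks the last control character seen on the line:

```python
XON, XOFF = 0x11, 0x13   # ASCII 17 (DC1) and ASCII 19 (DC3)

def sender_allowed(received_bytes):
    """Return True if, after scanning the received stream, the last
    flow-control character seen permits sending (default: allowed)."""
    allowed = True
    for b in received_bytes:
        if b == XOFF:
            allowed = False    # peer's buffer is full; hold off
        elif b == XON:
            allowed = True     # peer has room again; resume
    return allowed
```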

Hardware flow control is also known as RTS/CTS flow control. It uses

two wires in your serial cable rather than extra characters transmitted in

your data lines. Thus hardware flow control will not slow down transmission

times like Xon-Xoff does. When the computer wishes to send data, it asserts the Request To Send line. If the modem has room for this data, then the modem will reply by asserting the Clear To Send line, and the computer starts sending data. If the modem does not have the room, then it will not assert Clear To Send.

Appendix C

Servo Motors

C.1 What is a Servo?

A servo (Figure C.1) is a small device that has an output shaft. This shaft

can be positioned to specific angular positions by sending the servo a coded

signal. As long as the coded signal exists on the input line, the servo will

maintain the angular position of the shaft. As the coded signal changes, the

angular position of the shaft changes. In practice, servos are used in radio

controlled airplanes to position control surfaces like the elevators and rud-

ders. They are also used in radio controlled cars, puppets, and of course,

robots.

Servos are extremely useful in robotics. The motors are small, have built

in control circuitry, and are extremely powerful for their size. A standard servo such as the Futaba S-148 has 42 oz-in of torque, which is pretty

strong for its size. It also draws power proportional to the mechanical load.

A lightly loaded servo, therefore, doesn't consume much energy. The guts of a servo motor are shown in Figure C.2. You can see the control circuitry, the motor, a set of gears, and the case. You can also see the 3 wires that connect to the outside world: one is for power (+5 volts), one is for ground, and the white wire is the control wire.

C.2 How Does a Servo Work?

The servo motor has some control circuits and a potentiometer (a variable

resistor, aka pot) that is connected to the output shaft. In figure C.2, the pot

Figure C.1: A Futaba S-148 Servo Motor

Figure C.2: Servo Motor Components

can be seen on the right side of the circuit board. This pot allows the control

circuitry to monitor the current angle of the servo motor. If the shaft is at

the correct angle, then the motor shuts off. If the circuit finds that the angle

is not correct, it will turn the motor in the correct direction until the angle is

correct. The output shaft of the servo is capable of travelling somewhere

around 180 degrees. Usually, it's somewhere in the 210 degree range, but it

varies by manufacturer. A normal servo is used to control an angular motion

of between 0 and 180 degrees. A normal servo is mechanically not capable

of turning any farther due to a mechanical stop built on to the main output

gear.

The amount of power applied to the motor is proportional to the distance it

needs to travel. So, if the shaft needs to turn a large distance, the motor will

run at full speed. If it needs to turn only a small amount, the motor will run

at a slower speed. This is called proportional control.

How do you communicate the angle at which the servo should turn? The

control wire is used to communicate the angle. The angle is determined by

the duration of a pulse that is applied to the control wire. This is called

Pulse Coded Modulation. The servo expects to see a pulse every 20 millisec-

onds (.02 seconds). The length of the pulse will determine how far the motor

turns. A 1.5 millisecond pulse, for example, will make the motor turn to the

90 degree position (often called the neutral position). If the pulse is shorter than 1.5 ms, then the motor will turn the shaft closer to 0 degrees. If the pulse is longer than 1.5 ms, the shaft turns closer to 180 degrees.

As you can see in the figure C.3, the duration of the pulse dictates the angle

of the output shaft (shown as the green circle with the arrow). Note that

the times here are illustrative, and the actual timings depend on the motor

manufacturer. The principle, however, is the same.

Figure C.3: Servo Motor Pulse Code
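The pulse-width-to-angle mapping can be sketched as a simple linear interpolation. The 1.0 ms and 2.0 ms endpoints below are nominal values; as the text notes, the actual timings vary by manufacturer, so they are left as parameters:

```python
def pulse_width_ms(angle_deg, min_ms=1.0, max_ms=2.0, travel_deg=180.0):
    """Linearly map a target angle to a control pulse width in ms.

    The endpoints are nominal; real servos differ by manufacturer,
    so treat min_ms/max_ms/travel_deg as calibration parameters.
    """
    return min_ms + (angle_deg / travel_deg) * (max_ms - min_ms)

print(pulse_width_ms(90))   # 1.5 (the neutral position)
```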

C.3 Modifying a Servo for Continuous Rotation

A servo, unmodified, typically has a rotation of some set amount. In other

words, it cannot rotate continuously. This is because of the built-in angle

feedback control system. There is an internal potentiometer which is used to

determine the angle which the servo is at. Pots, or variable resistors, cannot

rotate continuously.

There is, however, a way to modify a servo so that it can rotate continuously. Why do this? Because although you lose position control, you gain speed control. Obviously the pot needs to be altered in some way. There is also a mechanical stop within the gears which needs to be removed as well. The modification may be either electronic or mechanical. The details of this modification depend on the type of servo; hence, they are beyond the scope of this appendix.

Appendix D

Perpendicular Regression

Perpendicular regression is one of the methods used for fitting a line to a given set of data points. It aims to identify the line segment which minimizes the total error in the perpendicular distance between each point and the line segment. It needs to find the coefficients a and b of the line equation y = ax + b that identify the candidate line segment. Here is the mathematical method used to obtain the coefficients a and b.

The total error ε is defined as

\[ \varepsilon = \sum_{i=1}^{n} h_i \tag{D.1} \]

\[ \varepsilon = \sum_{i=1}^{n} \sqrt{\Delta_X^2 + \Delta_Y^2} \tag{D.2} \]

and for simplicity, the error will be defined as the sum of the squares of the heights rather than the sum of the heights:

\[ \varepsilon = \sum_{i=1}^{n} h_i^2 \tag{D.3} \]

\[ \varepsilon = \sum_{i=1}^{n} \left( \Delta_X^2 + \Delta_Y^2 \right) \tag{D.4} \]

where

\[ \Delta_X = P_x - I_x, \qquad \Delta_Y = P_y - I_y \tag{D.5} \]

The point (I_x, I_y) is the projection of the point (P_x, P_y) on the line y = ax + b and can be calculated as

\[ I_x = \frac{P_x + a P_y - a b}{a^2 + 1} \tag{D.6} \]

\[ I_y = \frac{a P_x + a^2 P_y + b}{a^2 + 1} \tag{D.7} \]

Figure D.1: Fitting a line segment to a set of points

From D.5,

\[ \Delta_X = \frac{a^2 P_x - a P_y + a b}{a^2 + 1} \tag{D.8} \]

\[ \Delta_Y = \frac{- a P_x + P_y - b}{a^2 + 1} \tag{D.9} \]

At the minimum total error, we have

\[ \frac{\partial \varepsilon}{\partial a} = 0, \qquad \frac{\partial \varepsilon}{\partial b} = 0 \tag{D.10} \]

By partially differentiating the total error defined in equation D.4 with respect to a and equating to zero, we obtain

\[ \frac{\partial \varepsilon}{\partial a} = \frac{\partial}{\partial a} \sum_{i=1}^{n} \left( \Delta_{X_i}^2 + \Delta_{Y_i}^2 \right) = 0 \]

\[ \frac{\partial \varepsilon}{\partial a} = \sum_{i=1}^{n} \left( 2 \Delta_X \frac{\partial \Delta_X}{\partial a} + 2 \Delta_Y \frac{\partial \Delta_Y}{\partial a} \right) = 0 \]

\[ a \sum_{i=1}^{n} P_{x_i}^2 - a \sum_{i=1}^{n} P_{y_i}^2 + (a^2 - 1) \sum_{i=1}^{n} P_{x_i} P_{y_i} + (b - a^2 b) \sum_{i=1}^{n} P_{x_i} + 2 a b \sum_{i=1}^{n} P_{y_i} - n a b^2 = 0 \]

\[ a^2 \left( n \sum_{i=1}^{n} P_{x_i} P_{y_i} - \sum_{i=1}^{n} P_{x_i} \sum_{i=1}^{n} P_{y_i} \right) + a \left( n \sum_{i=1}^{n} P_{x_i}^2 - n \sum_{i=1}^{n} P_{y_i}^2 - \left( \sum_{i=1}^{n} P_{x_i} \right)^2 + \left( \sum_{i=1}^{n} P_{y_i} \right)^2 \right) - n \sum_{i=1}^{n} P_{x_i} P_{y_i} + \sum_{i=1}^{n} P_{x_i} \sum_{i=1}^{n} P_{y_i} = 0 \]

\[ \phi a^2 + \psi a - \phi = 0 \tag{D.11} \]

where

\[ \phi = n \sum_{i=1}^{n} P_{x_i} P_{y_i} - \sum_{i=1}^{n} P_{x_i} \sum_{i=1}^{n} P_{y_i}, \qquad \psi = n \left( \sum_{i=1}^{n} P_{x_i}^2 - \sum_{i=1}^{n} P_{y_i}^2 \right) - \left( \sum_{i=1}^{n} P_{x_i} \right)^2 + \left( \sum_{i=1}^{n} P_{y_i} \right)^2 \]

Similarly, by differentiating equation D.4 with respect to b and equating to zero,

\[ \frac{\partial \varepsilon}{\partial b} = \sum_{i=1}^{n} \left( 2 \Delta_X \frac{\partial \Delta_X}{\partial b} + 2 \Delta_Y \frac{\partial \Delta_Y}{\partial b} \right) = 0 \]

\[ - a \sum_{i=1}^{n} P_{x_i} + \sum_{i=1}^{n} P_{y_i} - n b = 0 \tag{D.12} \]

By solving the two equations D.11 and D.12 simultaneously, we can find the values of a and b as follows:

\[ a = \frac{-\psi \pm \sqrt{\psi^2 + 4 \phi^2}}{2 \phi} \tag{D.13} \]

\[ b = \frac{- a \sum_{i=1}^{n} P_{x_i} + \sum_{i=1}^{n} P_{y_i}}{n} \tag{D.14} \]

From equation D.13, we find that there are two solutions to the second-order equation, which result in two different line equations; only one solution is the correct one, and the other, which results from the squaring of the height, gives another line orthogonal to the line obtained from the first solution.

Now, let's look carefully at the two solutions resulting from equation D.13 to discover which one is correct. If we examine the effect of φ and ψ on a, we find that a has the same sign as φ, and from this property, we find that the solution with the positive sign is the correct one and the other belongs to the orthogonal line. Finally, if we calculate the values of φ and ψ and substitute in equations D.13 and D.14, we can find the coefficients a and b:

\[ a = \frac{-\psi + \sqrt{\psi^2 + 4 \phi^2}}{2 \phi}, \qquad b = \frac{- a \sum_{i=1}^{n} P_{x_i} + \sum_{i=1}^{n} P_{y_i}}{n} \tag{D.15} \]
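The closed-form solution translates directly into code. The following Python sketch (the function name and point representation are illustrative, not taken from the project's code base) computes φ and ψ and then a and b as in equations D.13–D.15:

```python
import math

def fit_line_perpendicular(points):
    """Fit y = a*x + b minimizing the sum of squared perpendicular
    distances, following equations D.13-D.15."""
    n = len(points)
    sx  = sum(p[0] for p in points)
    sy  = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    syy = sum(p[1] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)

    phi = n * sxy - sx * sy
    psi = n * (sxx - syy) - sx ** 2 + sy ** 2

    if phi == 0:
        # Degenerate case (e.g. a horizontal line); a vertical line
        # cannot be expressed as y = ax + b at all.
        a = 0.0
    else:
        # The positive root gives the slope with the same sign as phi.
        a = (-psi + math.sqrt(psi ** 2 + 4 * phi ** 2)) / (2 * phi)
    b = (-a * sx + sy) / n
    return a, b
```

For points lying exactly on y = 2x + 1, such as (0, 1), (1, 3), (2, 5), the routine recovers a = 2 and b = 1.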

Appendix E

Occupancy Grid Formula Proof

E.1 Integration over Time

Sonar interpretations must be integrated over time to yield a single, consis-

tent map. To do so, it is convenient to interpret the network's output for the t-th sensor reading (denoted by St) as the probability that a grid cell < x, y > is occupied conditioned on the sensor reading St:

Pr(Occx,y|St)

A map is obtained by integrating these probabilities for all available sen-

sor readings, denoted by S1, S2, ..., St. In other words, the desired occupancy

value for each grid cell < x, y > can be written as the probability:

Pr(Occx,y|S1, S2, ..., ST )

which is conditioned on all sensor readings. A straightforward approach to estimating this quantity is to apply Bayes's rule. To do so, one has to assume independence of the noise in different readings. More specifically, given the true occupancy of a grid cell < x, y >, the conditional probability Pr(St|Occx,y) must be assumed to be independent of Pr(St′|Occx,y). This assumption is not implausible; in fact, it is commonly made in approaches to building occupancy grids. It is important to note that the conditional independence assumption does not imply the independence of Pr(St) and Pr(St′). The latter two random variables are usually dependent.

The desired probability can be computed in the following way:

\[ Pr(Occ_{x,y} \mid S_1, \ldots, S_T) = 1 - \left( 1 + \frac{Pr(Occ_{x,y} \mid S_1)}{1 - Pr(Occ_{x,y} \mid S_1)} \prod_{\tau=2}^{T} \frac{Pr(Occ_{x,y} \mid S_\tau)}{1 - Pr(Occ_{x,y} \mid S_\tau)} \cdot \frac{1 - Pr(Occ_{x,y})}{Pr(Occ_{x,y})} \right)^{-1} \tag{E.1} \]

Here, Pr(Occx,y) denotes the prior probability for occupancy (which, if set to 0.5, can be omitted in this equation).

E.2 Proof

The update formula E.1 follows directly from Bayes's rule and the conditional independence assumption. According to Bayes's rule,

\[ \frac{Pr(Occ_{x,y} \mid S_1, \ldots, S_T)}{Pr(\neg Occ_{x,y} \mid S_1, \ldots, S_T)} = \frac{Pr(S_T \mid Occ_{x,y}, S_1, \ldots, S_{T-1})}{Pr(S_T \mid \neg Occ_{x,y}, S_1, \ldots, S_{T-1})} \cdot \frac{Pr(Occ_{x,y} \mid S_1, \ldots, S_{T-1})}{Pr(\neg Occ_{x,y} \mid S_1, \ldots, S_{T-1})} \tag{E.2} \]

which can be simplified by virtue of the conditional independence assumption to

\[ = \frac{Pr(S_T \mid Occ_{x,y})}{Pr(S_T \mid \neg Occ_{x,y})} \cdot \frac{Pr(Occ_{x,y} \mid S_1, \ldots, S_{T-1})}{Pr(\neg Occ_{x,y} \mid S_1, \ldots, S_{T-1})} \tag{E.3} \]

Applying Bayes's rule to the first term leads to:

\[ = \frac{Pr(Occ_{x,y} \mid S_T)}{Pr(\neg Occ_{x,y} \mid S_T)} \cdot \frac{Pr(\neg Occ_{x,y})}{Pr(Occ_{x,y})} \cdot \frac{Pr(Occ_{x,y} \mid S_1, \ldots, S_{T-1})}{Pr(\neg Occ_{x,y} \mid S_1, \ldots, S_{T-1})} \tag{E.4} \]

Induction over T yields:

\[ = \frac{Pr(Occ_{x,y})}{1 - Pr(Occ_{x,y})} \prod_{\tau=1}^{T} \frac{Pr(Occ_{x,y} \mid S_\tau)}{1 - Pr(Occ_{x,y} \mid S_\tau)} \cdot \frac{1 - Pr(Occ_{x,y})}{Pr(Occ_{x,y})} \tag{E.5} \]

The update equation E.1 is nowobtained by solving E.5 for Pr(Occx,y|S1, S2, ..., ST ),

using the fact that Pr(Occx,y|S1, S2, ..., ST ) = 1 − Pr(Occx,y|S1, S2, ..., ST ).

This probabilistic update rule, which is sound given our conditional indepen-

dence assumption, is frequently used for the accumulation of sensor evidence.

It differs from Bayes networks in that, although occupancy causally determines

the sensor readings {S_τ}, τ = 1, ..., T, and not the other way round, the rule

is expressed in terms of the inverse conditional probability Pr(Occ_{x,y} | S_t).

Notice that Equation E.1 can be used to update occupancy values incrementally,

i.e., at any point in time it suffices to memorize a single value per grid cell:

the current posterior Pr(Occ_{x,y} | S_1, S_2, ..., S_T). Technically speaking,

this single value is a sufficient statistic for S_1, S_2, ..., S_T.
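
The incremental form can be sketched as follows; taking logarithms turns the running product of odds into a running sum, which is a common implementation choice (names and the sample reading values are illustrative):

```python
import math

def log_odds(p):
    """Log-odds representation of a probability."""
    return math.log(p / (1.0 - p))

p_prior = 0.5
cell = log_odds(p_prior)   # the single memorized value per grid cell

# Fold in readings one at a time; each step needs only the stored
# value and the new inverse-model output Pr(Occ | S_tau).
for p_inv in (0.8, 0.8, 0.3):
    cell += log_odds(p_inv) - log_odds(p_prior)

# Convert the accumulated log-odds back to a probability.
p_posterior = 1.0 - 1.0 / (1.0 + math.exp(cell))
print(round(p_posterior, 4))   # 48/55, i.e. 0.8727
```

The loop body is the whole update: no past readings need to be stored, exactly as the sufficient-statistic remark above states.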


Appendix F

Examples on API

The proposed API in Section 4.2.2 should be sufficient for most robot

motion tasks. Below are two examples that illustrate this.

F.1 Example 1: Motion in a Square

The required sequence of API commands to move the robot according to the

motion path shown in Figure F.1 are:

Figure F.1: API Example 1: Motion in a Square

1. Move Distance (Square Side Length)

2. Rotate Left 90◦


3. Move Distance (Square Side Length)

4. Rotate Left 90◦

5. Move Distance (Square Side Length)

6. Rotate Left 90◦

7. Move Distance (Square Side Length)
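
The sequence above can be sketched in code; `move_distance` and `rotate_left` below are hypothetical stand-ins for the corresponding API calls of Section 4.2.2, passed in as callables so the sketch stays independent of the real robot link:

```python
def drive_square(side_length, move_distance, rotate_left):
    """Issue the Figure F.1 command sequence: four sides joined by
    three 90-degree left turns (no turn is needed after the last side)."""
    for side in range(4):
        move_distance(side_length)
        if side < 3:
            rotate_left(90)

# Example with logging stubs instead of a real robot link:
log = []
drive_square(50, lambda d: log.append(("move", d)),
                 lambda a: log.append(("rotate", a)))
print(len(log))   # 7 commands, matching the list above
```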

F.2 Example 2: Motion in an Isosceles Triangle

The required sequence of API commands to move the robot according to the

motion path shown in Figure F.2 are:

Figure F.2: API Example 2: Motion in an Isosceles Triangle

1. referenceAngle = MeasureCompass

2. Move Distance (Triangle Side Length)

3. Rotate With Compass (referenceAngle-60)

4. Move Distance (Triangle Side Length)

5. Rotate With Compass (referenceAngle-120)

6. Move Distance (Triangle Side Length)
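
Similarly, the compass-referenced sequence can be sketched as follows. Again, `measure_compass`, `move_distance`, and `rotate_with_compass` are hypothetical stand-ins for the API calls, and the angle offsets are exactly those listed in the commands above:

```python
def drive_triangle(side_length, measure_compass, move_distance,
                   rotate_with_compass):
    """Issue the Figure F.2 command sequence, rotating to absolute
    headings measured against the starting compass bearing."""
    reference = measure_compass()
    for offset in (60, 120):
        move_distance(side_length)
        rotate_with_compass(reference - offset)
    move_distance(side_length)

# Example with logging stubs and a fixed starting bearing of 270 degrees:
log = []
drive_triangle(40, lambda: 270,
               lambda d: log.append(("move", d)),
               lambda a: log.append(("heading", a)))
print(log)
```

Using the compass for the turns, rather than dead-reckoned rotation, ties each heading to an absolute reference and so prevents rotation errors from accumulating across the turns.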


Bibliography

[1] S. Thrun, "Learning occupancy grids with forward sensor models," Autonomous

Robots, 2002.

[2] C. Stachniss and W. Burgard, "Exploring unknown environments with

mobile robots using coverage maps," in Proc. of the Int. Joint Conf. on

Artificial Intelligence (IJCAI), Acapulco, Mexico, 2003.

[3] Robotics with the Boe-Bot, Student Guide, Version 2.2, Parallax Inc.

[4] P. C. Pilgrim, Applying the Boe-Bot Digital Encoder Kit, Parallax Inc.