
DARPA BAA #06-36 — DARPA Urban Grand Challenge

The MIT Urban Grand Challenge Team Proposal
Lead Organization: Massachusetts Institute of Technology
Contractor's type of business: Other Educational

Technical point of contact:
Prof. John Leonard
MIT Computer Science and Artificial Intelligence Lab (CSAIL)
The Stata Center, 32 Vassar Street, 32-335
Cambridge, MA 02139
Phone: (617) 253-0607, Fax: (617) 253-8125
Email: [email protected]

Administrative point of contact:
Laureen Augustine
Sr. Contract Administrator, Office of Sponsored Programs
MIT, E19-750, 77 Massachusetts Ave.
Cambridge, MA 02139
Phone: (617) 253-3922

Cost Estimates:
Base cost: $500,000
Option 1: $250,000
Option 2: $250,000
Total proposal cost: $1,000,000

Proposal prepared: June 26, 2006


Executive Summary

The MIT team believes that the difficulty of the DARPA Grand Challenge (DGC) arises principally from three types of uncertainty inherent in the autonomous urban driving task: in the input, i.e., the relationship of the provided environment and mission descriptions to the actual driving environment; in sensing, i.e., the relationship of available sensor data to the actual static and dynamic surroundings of the vehicle; and in actuation, i.e., the relationship between commanded vehicle motions and the vehicle's actual physical progress. In the absence of any uncertainty, meeting the challenge would be a straightforward engineering exercise, albeit a very complex one. Yet such uncertainty is unavoidable in reality; recognizing this fact, and developing strategies to account for uncertainty, are (we believe) the keys to a successful DGC effort.

Like other teams, we propose the use of a nimble vehicle, using LIDAR and vision sensing in concert with GPS and IMU to achieve robust localization, dynamics, and control. In contrast to other teams, our team’s central, and distinctive, focus is addressing the above sources of uncertainty in a way that is both scalable to spatially extended environments, and efficient enough for real-time on-board operation in a dynamic world. Our team has demonstrated expertise in all aspects of the task: Planning (Roy, How), Sensing and Perception (Teller, Rus, Freeman, Horn), Mapping and Navigation (Leonard, Teller), Dynamics and Control (How, Tedrake, Iagnemma, Frazzoli), and successful Field Deployment of autonomous vehicles (Barrett, How, Iagnemma, Rus, Leonard, Roy).

Specifically, we propose a system architecture (elaborated below) that combines functional modules for each task component, while maintaining a simple and uniform environment representation to the extent achievable. For example, rather than attempt fine-grained recognition or classification of world features, our perception system simply estimates the location, size, and relative velocity of all observed world features that may come into contact with the vehicle. Based only on these estimates, and how they change over time, we have designed a multi-level resilient planning architecture which ensures that the autonomous vehicle can respond reasonably (and make progress) under almost any conditions that may occur on the Challenge course.

The DARPA Grand Challenge presents an opportunity for MIT to apply and extend the breadth and depth of its research and engineering expertise to a problem of immediate, life-saving importance. MIT is already a leader in autonomous robotic operations at all levels. Further, MIT has a long history of developing state-of-the-art research into practical and deployable systems. This experience enables MIT to formulate a uniquely innovative technical solution to address the challenges specific to the urban driving problem. Olin College brings to the team its significant expertise and experience in the modification of road vehicles for autonomous driving, and is well versed in the lessons learned from previous challenges. Our team’s strength and distinctive approach are both well-matched to the DGC.


Table of Contents

Executive Summary
Table of Contents
Abbreviations
Chapter 1 - Technical Approach
  1.1 Summary of Approach
  1.2 Sensing and Perception
  1.3 Planning
  1.4 Vehicle Hardware
  1.5 Software
  1.6 Execution Strategy
Chapter 2 - Team Description
  2.1 Team Overview
  2.2 Biographical Sketches of Key Personnel
  2.3 Test Facilities
Chapter 3 - Management and Funding Plan
  3.1 Personnel Plan
  3.2 Task Breakdown and Milestone Schedule
  3.3 Cost
  3.4 Funding Plan
  3.5 Export Laws and Regulations


Abbreviations

CPU: Central Processing Unit
DGC: DARPA Grand Challenge
DOF: Degrees of Freedom
EKF: Extended Kalman Filter
EMC: Electronic Mobility Controls
FOV: Field of View
GPS: Global Positioning System
IMU: Inertial Measurement Unit
LIDAR: Light Detection and Ranging
MDF: Mission Data File
MIT: Massachusetts Institute of Technology
NQE: National Qualifying Event
RNDF: Route Network Definition File
SLAM: Simultaneous Localization and Mapping
VCU: Vehicle Control Unit


Chapter 1 - Technical Approach

1.1 Summary of Approach

Achieving an autonomous urban driving capability is clearly a tough, multi-dimensional problem. We believe that the source of the difficulty of this problem is that significant uncertainty occurs at multiple levels: in the input, i.e., the relationship of the provided data and mission descriptions to the actual driving environment; in sensing, i.e., the relationship of available sensor data to the actual static and dynamic surroundings of the vehicle; and in actuation, i.e., the relationship between commanded vehicle motions and the vehicle’s actual physical progress.

If none of these sources of uncertainty were present—that is, if we could arrange for perfectly accurate input, sensing, and actuation—meeting the challenge would just be an engineering exercise, albeit a very complex one. Of course, such arrangements are not possible in reality. We conclude that any successful strategy for meeting the real-world challenge must address each of these sources of uncertainty. Moreover, it must do so in a way that is scalable to spatially extended environments, and efficient enough for real-time implementation on a rapidly moving vehicle.

[Figure 1.1: Block diagram of the vehicle’s control architecture. The RNDF and MDF feed the Mission Planner, which finds the best path through the RNDF for the mission and exchanges plan steps, map fragments, and segment costs with the Situational Interpreter/Planner and the Map Fragment Database. The Situational Interpreter/Planner finds a safe trajectory through the local (vehicle-relative) map to follow the route plan, using the local map and vehicle/object trajectories produced by the Perceptual State Estimator from the sensors. The desired vehicle trajectory is sent to the Vehicle Control Unit, which drives the actuators and returns feedback; Safety Behaviors can override the situational planner.]

Our focus on managing uncertainty is reflected in our system architecture (Figure 1.1). Our architecture includes subsystems for planning, sensing and perception, navigation, control, and actuation. The top-level module, the Mission Planner, reads the RNDF/MDF and obtains updates of segment costs from the Situational Interpreter. Given the next checkpoint, the Mission Planner computes the route plan that completes the mission in the minimum expected time. The route plan is then sent to the Situational Planner. At each time step, the Situational Planner selects an optimal sequence of vehicle maneuvers to safely follow the route plan through the local map (consisting of known and identified obstacles, lane markings, road boundaries, and other vehicles). The Situational Planner also provides feedback to the Mission Planner about any inferred road blockages or traffic delays.

[Figure 1.2: Block diagram of the sensing and perception subsystems. There are three main subsystems: road-detection and surface-estimation (based on “push-broom” LIDAR and texture-segmenting color video); vehicle state estimation (using IMU, GPS, and odometry); and obstacle tracking and trajectory estimation (based on an array of video cameras and LIDARs). Internal blocks include the forward “push-broom” LIDAR, the front road camera with local feature detectors, the road shape estimator/hazard map and 3D road surface builder, the integrated GPS/IMU/odometry vehicle state estimator (producing 6-DOF state, velocity, and acceleration estimates), the foveated vision system (each camera with a local CPU), the proximity-skirt LIDAR, and the panoramic obstacle tracker; outputs are a static map fragment, the vehicle state estimate, obstacle maps/trajectories, a safe road surface map, and near-field (< 40 m) object detections routed to the safety behaviors.]

Local maps are computed by the Perceptual State Estimator, which fuses all available sensor data, including estimated trajectories of other vehicles, into a local representation managed by the Map Fragment Database. The Vehicle Control Unit executes the low-level control necessary to achieve the desired vehicle motions issued by the Situational Planner. The control unit uses the output from the vehicle state estimator (position, velocity, and attitude) to issue actuation commands (steering, acceleration, brakes, and directional signals). Given the importance of vehicle safety to successfully completing the DGC, safety is addressed throughout the system (firmware, hardware, software), and also by the Safety Module. This module monitors sensor data, overriding vehicle control as necessary to avoid collisions.

The following sections describe our approaches to Sensing, Planning, and Control/Actuation.

1.2 Sensing and Perception

Sensing and perception of the environment is a fundamental challenge of the Urban Challenge because the RNDF is sparse, potentially incomplete, and contains no information about obstacles and other traffic. The MIT team is well equipped to handle these issues, as demonstrated by our existing research into sensing, classification, mapping, and tracking problems. Our approaches explicitly consider uncertainty, improving performance in difficult conditions (see Figure 1.2).

Figure 1.3: (a) The proposed sensor platform. (b) Camera fields of view.

LIDAR Sensors and Processing

The LIDAR system comprises two complementary systems: “push-broom” sensors for evaluating the navigability of nearby terrain, and a 360-degree obstacle detection and tracking system.

The “push broom” LIDAR maps the surrounding terrain as the vehicle moves. The vehicle’s speed and the shape of the terrain affect the optimal downward angle of the push brooms; by adjusting the pitch of the LIDARs we can direct the sensor’s attention to the most important areas (Figure 1.3a). Vertically-oriented LIDARs provide additional road mapping data, helping to determine the optimal angle for the push brooms while making it easier to fuse push-broom readings into a 3D mesh. This mesh is used to determine which local terrain the vehicle can negotiate. The resulting hazard map is later fused with data from cameras to estimate the position and extent of the road surface.

The obstacle detection and tracking system comprises a ring of LIDAR sensors affording 360-degree coverage. Any off-ground-plane objects within range, whether stationary or moving, are tracked. This produces a trajectory from which the tracking system can extrapolate future locations and the uncertainty associated with those predictions. Object detections and trajectories are passed to the situational interpreter and planner.
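To make the push-broom terrain assessment concrete, here is a minimal sketch (our own illustration, not the team's implementation) that accumulates LIDAR returns into a vehicle-relative height grid and marks cells with excessive height variation as hazards; the grid resolution, extent, and roughness threshold are assumed values.

import numpy as np

CELL = 0.25          # grid resolution in meters (assumed)
ROUGHNESS_MAX = 0.15 # max height spread within a cell still considered drivable, meters (assumed)

def hazard_map(points, extent=40.0):
    """points: Nx3 array of (x, y, z) LIDAR returns in the vehicle frame.
    Returns two boolean grids: cells that were observed at all, and observed cells
    whose height variation marks them as hazards."""
    n = int(2 * extent / CELL)
    zmin = np.full((n, n), np.inf)
    zmax = np.full((n, n), -np.inf)
    ij = np.floor((points[:, :2] + extent) / CELL).astype(int)
    ok = (ij >= 0).all(axis=1) & (ij < n).all(axis=1)
    for (i, j), z in zip(ij[ok], points[ok, 2]):
        zmin[i, j] = min(zmin[i, j], z)
        zmax[i, j] = max(zmax[i, j], z)
    observed = np.isfinite(zmin)
    hazard = observed & ((zmax - zmin) > ROUGHNESS_MAX)
    return observed, hazard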

Vision Sensors and Processing

The vision system detects static and moving obstacles, finds road surfaces, and locates road markings. Since the vision system has a much longer viewing range than the LIDAR, it is the key component that enables the vehicle to travel at the maximum course speed. The right side of Figure 1.3b shows an arrangement of six color cameras that provide obstacle and road-marking detection. The narrow-field-of-view camera in front improves our sensing ability at long ranges, acting as a “fovea” that allows safe high-speed forward travel. When the vehicle is stopped at an intersection, the side-facing cameras detect vehicles moving along cross-streets. The regions directly to the left and right of the vehicle are not as well covered by the cameras; these are instead sensed by the LIDAR system in preparation for and during lane-change maneuvers.

Detecting far-field obstacles is the primary goal of the vision system. Since other vehicles on the DGC course may have unusual shapes and sizes, we cannot rely on a priori models for their appearance. Instead we use a lower-level detection approach: optical flow. We compute the motion vectors of image features to solve for obstacle depth, using novel optical flow algorithms developed at MIT [12, 8] which are robust to the uncertainties common to natural images. We choose optical flow over stereo vision because of the difficulty of finding good pixel correspondences, especially for large baselines.

Figure 1.4: (a) Illustration of optical flow. Optical flow is used to identify objects moving relative to the vehicle. Blue grid denotes the vehicle’s hypothesized road position; obstacles are in red; and lane markings are indicated in green. Waypoints (orange) are reprojected for visualization. (b) Example mission plan. The mission planner finds the shortest paths based on a dynamic graph representation of the environment.

Separating obstacles from the road’s surface requires more than just optical flow. This is because the road’s visual appearance is generally textureless except for surface discolorations, manhole covers, and lane markings. However, since we know the orientation and position of the camera on the vehicle, we can project the ground plane onto the image [6]. The LIDAR detects situations in which the ground is not nearly flat—i.e., hills and dips—enabling a generalized “ground surface” assumption and projection into the imagery. With an estimate of true ground shape, the vision system can predict the motion vector of any pixel lying on the ground’s surface. If the observed motion of a pixel in the image is consistent with prediction, the associated scene patch is classified as part of the road’s surface. Every other tracked image region is considered an obstacle, positive (obstruction) or negative (pothole). In Figure 1.4a, the blue grid represents the infinite plane predicted for the road surface. The red highlights are those motion vectors inconsistent with the road surface and thus determined to be obstacles.
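The following toy sketch illustrates the motion-consistency test described above under a simplifying flat-ground assumption: given the camera geometry and forward ego-motion, it predicts the image flow of a pixel assumed to lie on the ground and flags pixels whose observed flow deviates from that prediction. The pinhole camera parameters, camera height, and tolerance are illustrative assumptions, not values from the proposal.

import numpy as np

def predicted_ground_flow(u, v, ego_forward_m, f, cx, cy, cam_height):
    """Predict the optical flow (du, dv) of a pixel assumed to lie on flat ground,
    for a camera with a horizontal optical axis translating forward by ego_forward_m."""
    if v <= cy:                                # at or above the horizon: cannot be flat ground
        return None
    Z = f * cam_height / (v - cy)              # depth of the hypothesized ground point
    X = (u - cx) * Z / f                       # lateral offset of that ground point
    Z_new = Z - ego_forward_m                  # depth after the vehicle moves forward
    if Z_new <= 0.1:
        return None
    return (cx + f * X / Z_new - u, cy + f * cam_height / Z_new - v)

def is_obstacle(u, v, observed_flow, ego_forward_m,
                f=800.0, cx=320.0, cy=240.0, cam_height=1.8, tol_px=2.0):
    """Classify a tracked pixel as an obstacle if its observed flow is inconsistent
    with the flat-ground prediction. Camera parameters and tolerance are assumptions."""
    pred = predicted_ground_flow(u, v, ego_forward_m, f, cx, cy, cam_height)
    if pred is None:
        return True
    return np.hypot(observed_flow[0] - pred[0], observed_flow[1] - pred[1]) > tol_px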

The preceding paragraphs describe how to identify whether an obstacle is present, but not how to track its motion. Optical flow vectors allow the depth of any salient image feature to be computed, given the 3D velocities of both our vehicle and the target. Thus we must estimate the velocity of those objects that are moving. To do this, the vision system estimates the obstacle’s intersection with the projected ground plane. Together with its image-space motion vector, this information is sufficient to compute the obstacle’s depth and velocity.

Detection of road markings such as lane markers and stop lines is a major new technical requirement of the DGC that is handled by our vision system. We employ a matched filter, tuned to detect fragments of lane markings, which are then clustered subject to continuity and linearity constraints. This approach handles uncertainty better than alternatives based on sharp edges or contiguous colored blobs, particularly when lane markings are faded or partially worn away. This approach also draws upon our team’s expertise in data clustering [10].
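As a hedged illustration of the matched-filter idea (a minimal version of our own, not the team's detector), the sketch below correlates each grayscale image row with a zero-mean bright-bar template roughly the width of a painted marking and keeps strong local maxima as lane-marking fragments; the template width and threshold are assumptions. Fragments found this way would then be clustered subject to the continuity and linearity constraints described above.

import numpy as np

def lane_fragment_columns(row, marking_width_px=8, thresh=30.0):
    """Return column indices in one grayscale image row that respond strongly to a
    matched filter tuned to a bright bar roughly marking_width_px wide.
    Width and threshold are illustrative assumptions."""
    w = marking_width_px
    # Zero-mean template: bright center flanked by darker road, so flat pavement scores near zero.
    template = np.concatenate([-np.ones(w), 2.0 * np.ones(w), -np.ones(w)]) / (3.0 * w)
    response = np.convolve(row.astype(float), template[::-1], mode="same")
    cols = []
    for c in range(1, len(response) - 1):
        # Keep strong, locally maximal responses as candidate lane-marking fragments.
        if response[c] > thresh and response[c] >= response[c - 1] and response[c] >= response[c + 1]:
            cols.append(c)
    return cols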

Perceptual State Estimation and Mapping

The perceptual state estimator fuses sensor measurements from all sensor systems (LIDAR, vision, and GPS/IMU) into local maps for use by the situational planner. Our system maintains two frames of reference: a global (absolute) frame and a local (vehicle-relative) frame.

The global frame is the geo-referenced latitude/longitude frame, consistent with the RNDF. The vehicle continually estimates its global position by fusing GPS, IMU, and odometry data using a conventional Extended Kalman Filter. Uncertainty in GPS measurements is handled gracefully because we do not use this global frame for situational planning. Instead, we rely on GPS only to ensure that the local map is periodically registered with the global frame (to identify the RNDF segment on which the vehicle lies), and to ensure accurate checkpoint arrivals. Since this requires only intermittent GPS availability, we ensure robustness in the presence of GPS failures.
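A minimal sketch of the kind of filter described here: a planar EKF whose prediction step is driven by odometry and whose update step uses intermittent GPS position fixes (the IMU is omitted for brevity). The state layout, noise magnitudes, and class name are our own assumptions.

import numpy as np

class PoseEKF:
    """Toy planar EKF with state [x, y, heading]. Odometry (speed, yaw rate) drives the
    prediction; GPS, when available, provides a position-only update. Noise levels are
    illustrative assumptions, not tuned values."""
    def __init__(self):
        self.x = np.zeros(3)
        self.P = np.eye(3)

    def predict(self, v, omega, dt, q=0.05):
        th = self.x[2]
        self.x = self.x + np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, omega * dt])
        F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                      [0.0, 1.0,  v * np.cos(th) * dt],
                      [0.0, 0.0, 1.0]])
        self.P = F @ self.P @ F.T + q * dt * np.eye(3)

    def update_gps(self, z_xy, r=2.0):
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
        S = H @ self.P @ H.T + r * np.eye(2)
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z_xy, float) - H @ self.x)
        self.P = (np.eye(3) - K @ H) @ self.P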

The local frame is vehicle-relative, consisting of two local maps: a map of obstacles (and their trajectories), and a map of safe road surfaces. Both maps are the product of fused LIDAR and vision data, and they are the primary input to the situational planner. Since these measurements are expressed locally, GPS uncertainty does not impact their accuracy; the system must account only for the uncertainty inherent in the LIDAR and vision systems. In the obstacle map, object trajectories reported by the sensors are continuously updated in the local frame and spurious/outlier detections are discarded [10]. The safe road surface map is constructed first from LIDAR data, identifying safe road surfaces in the near field and then extrapolating those surfaces to the far field using vision (via color/texture similarity). In both maps, underlying sensor uncertainty is propagated throughout the estimator, yielding probabilistic estimates of where the objects were, are, and will be. This principled approach to uncertainty management stems from our recent work in robotic SLAM at MIT [2, 9].

Local maps constructed by the perceptual state estimator are saved by the Map Fragment Database for future use. If the vehicle returns to a road segment that was previously visited, the previous map can be recalled. These archival maps can be used to predict what lies beyond sensor range, allowing higher travel speed. Archived maps are similarly useful for remembering the shapes of intersections. However, for these old maps to be reused, the vehicle’s position must be determined with respect to the old map. This is the problem of relocalization. We have demonstrated robust relocalization with better-than-GPS precision using features extracted from LIDAR and cameras [2], a technique we extend to our vehicle.

1.3 Planning

Vehicle control is performed at three levels: mission, situational, and low-level. Large-scale questions (“what sequence of segments should be followed in the RNDF?”) are handled by the mission planner. More detailed issues (“what is the best path through this intersection?”) are handled by the situational planner. Finally, the low-level controller allows the vehicle to precisely follow the maneuvers computed by the situational planner. Our particular focus is on resilient planning, ensuring that the vehicle makes progress even in the presence of environmental uncertainty.

Mission Planner

The mission planner generates a route plan that is expected to accomplish the mission in minimum time. This plan consists of a sequence of abstract actions such as “follow road segment 13” or “turn right at the intersection with waypoint 10.2.1”. Detailed vehicle trajectories are not computed; instead, the amount of time required to complete particular actions is modeled using probability distributions. Since these probability distributions will incorporate new information about the environment from the situational interpreter and planner, such as blocked roads and/or slow traffic ahead, the route plans will be created online in real-time (see Figure 1.4b). We avoid alternating between unworkable plans by employing task filtering [1]. We also use robust planning algorithms to account for the risk associated with candidate plans [16], since the transit times for some actions may be poorly known. The mission planner is resilient because it prevents itself from getting stuck in a loop and never insists upon a route that has proven inefficient in the past.
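To make the route-planning step concrete, the hedged sketch below treats the RNDF as a directed graph whose edge weights are expected traversal times (updated from the situational interpreter) and runs a standard minimum-expected-time search; the graph encoding and function names are our own assumptions, not the team's planner.

import heapq

def plan_route(graph, start, goal):
    """graph: dict mapping waypoint id -> list of (next_waypoint_id, expected_seconds).
    Returns the minimum-expected-time sequence of waypoints, or None if the goal is
    currently unreachable (e.g., every route is believed blocked)."""
    frontier = [(0.0, start, [start])]
    settled = {}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if settled.get(node, float("inf")) <= cost:
            continue
        settled[node] = cost
        for nxt, dt in graph.get(node, []):
            heapq.heappush(frontier, (cost + dt, nxt, path + [nxt]))
    return None

# When the situational interpreter reports a blockage or slow traffic, the affected
# edge weights are raised (or the edge removed) and plan_route is simply rerun.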

Situational Interpreter and Planner

The situational interpreter and planner takes as input the local maps from the perceptual state estimator and attempts to find a vehicle trajectory that reaches the goal provided by the mission planner. The “interpreter” portion of this module is also responsible for determining updates to the segment costs to be provided to the mission planner. Segment costs will change as traffic and blockage conditions are perceived, by monitoring the progress of the trajectory planner and operating on the obstacle map directly when necessary. The “planner” portion of the system is responsible for computing the trajectory that passes through both the obstacle map and safe road surface map.

Safe trajectory planning requires prediction of future behavior of obstacles, with explicit accounting for uncertainty. Obstacles in the local map will be classified according to behavior. Those with a velocity below a certain threshold are temporarily interpreted as static obstacles, while objects that have been observed to move will be considered dynamic and thus given a wider berth. We predict the future paths for these dynamic objects using the velocity estimates provided in the local map. These obstacle paths are combined with the safe road surface map to compute the cost of a trajectory that avoids them. This cost, which is provided to the mission planner, incorporates both the delays expected in the future as well as delays already experienced.
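The sketch below illustrates the classification and prediction scheme just described: obstacles slower than a threshold are treated as static, moving obstacles are propagated with a constant-velocity model, and a candidate trajectory is accepted only if it keeps an appropriate clearance from every predicted obstacle position. The thresholds and clearances are illustrative assumptions.

import numpy as np

SPEED_STATIC = 0.5   # m/s: below this an object is (temporarily) treated as static (assumed)
CLEAR_STATIC = 1.0   # required clearance to static obstacles, meters (assumed)
CLEAR_MOVING = 2.5   # wider berth for moving obstacles, meters (assumed)

def predict(obstacle, t):
    """Constant-velocity prediction of an obstacle {'pos': (x, y), 'vel': (vx, vy)} at time t."""
    pos, vel = np.asarray(obstacle["pos"], float), np.asarray(obstacle["vel"], float)
    if np.linalg.norm(vel) < SPEED_STATIC:
        return pos                       # static: assumed to stay where it was observed
    return pos + vel * t

def trajectory_is_clear(traj, obstacles):
    """traj: list of (t, x, y) samples along a candidate vehicle trajectory."""
    for t, x, y in traj:
        for ob in obstacles:
            moving = np.linalg.norm(np.asarray(ob["vel"], float)) >= SPEED_STATIC
            margin = CLEAR_MOVING if moving else CLEAR_STATIC
            if np.linalg.norm(predict(ob, t) - np.array([x, y])) < margin:
                return False
    return True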

The situational planner ensures that the computed trajectory follows the road and avoids obstacles, both stationary and moving. This planner, a model predictive controller, will function as a hybrid maneuver automaton [13]. This affords a computationally tractable approach for solving the guidance and control problems for agile vehicles in real-time. The main objective is to optimally follow the route plan from the mission planner by choosing from a set of basic vehicle maneuvers (both steady-state and transient). The primary steady-state maneuver will be driving at a constant speed following road/lane markers, but there are many other transient maneuvers:

• Operations within a lane: choosing a lane, and switching between lanes; precise control to traverse checkpoints; plan/road following;
• Turning: simple right/left turns; pulling out into traffic; 3-point turns / U-turns; parking;
• Passing: acceleration/deceleration profiles; pull-out to see around other vehicles in front;
• Obstacle avoidance: safety maneuvers to avoid being hit; planning around moving vehicles/obstacles to avoid collisions.

These maneuvers will encompass all possible vehicle operations, and the planner will choose the best sequence of vehicle maneuvers (and reference commands, such as speed/heading) to reach the next waypoint while ensuring vehicle safety with respect to the obstacle maps. This research team has extensive experience with hybrid maneuver automata, and its members are experts in designing maneuvers, developing algorithms for optimizing maneuver sequences, and verifying that the hybrid controller will not become deadlocked [5, 7].
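A highly simplified sketch of a maneuver automaton in this spirit (not the team's hybrid controller): each maneuver is a discrete mode with an admissibility test and a reference-command generator, and the planner selects the admissible mode with the lowest predicted cost. The mode names, tests, and costs are assumptions for illustration only.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Maneuver:
    admissible: Callable[[dict], bool]    # can this maneuver be started in the current situation?
    cost: Callable[[dict], float]         # predicted time/risk cost (lower is better)
    command: Callable[[dict], dict]       # reference commands for the low-level controller

def lane_follow_command(s):
    # Slow down as the clear distance ahead shrinks; stay centered in the lane.
    return {"speed": min(s["speed_limit"], s["clear_ahead_m"] / 3.0), "lane_offset": 0.0}

MANEUVERS: Dict[str, Maneuver] = {
    "LANE_FOLLOW": Maneuver(lambda s: s["clear_ahead_m"] > 10.0,
                            lambda s: 1.0 / max(s["clear_ahead_m"], 1.0),
                            lane_follow_command),
    "LANE_CHANGE": Maneuver(lambda s: s["adjacent_lane_clear"],
                            lambda s: 2.0,
                            lambda s: {"speed": 0.8 * s["speed_limit"], "lane_offset": 3.5}),
    "STOP":        Maneuver(lambda s: True,          # always available as a fallback
                            lambda s: 10.0,
                            lambda s: {"speed": 0.0, "lane_offset": 0.0}),
}

def select_maneuver(situation):
    """Pick the admissible maneuver with the lowest predicted cost and return its commands."""
    name, m = min(((n, m) for n, m in MANEUVERS.items() if m.admissible(situation)),
                  key=lambda nm: nm[1].cost(situation))
    return name, m.command(situation)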

Our algorithms for unmanned vehicle navigation also robustly capture various types of constraints, including maneuver capabilities and discrete mode-switching decisions [11]. The situational planner will use similar techniques to handle uncertainty associated with hazard locations and motion within the local map, while simultaneously ensuring robustness to RNDF sparseness, e.g., the appearance of unexpected intersections.

Vehicle safety is a key function of the situational planner. Building on existing work, our planning algorithm ensures the existence of a feasible safety maneuver for various types of threats (e.g., reaching full stop without hitting other vehicles or obstacles, or moving out of an errant vehicle’s path), so the vehicle always has the option of interrupting execution of the nominal plan to switch to a contingency plan [14]. The situational planner designs these safety maneuvers in real-time to remain in the high confidence region of the local map and minimize the overall collision risk. The primary task here is to identify reactive maneuvers that can be executed given the perceived threat. These maneuvers can be designed off-line, but the selection of which to execute will depend on the current maneuver being performed, the vehicle state (velocity and acceleration, engine speed, gear, wheel orientation), the constraints within the region/map local to the vehicle, and local sensor measurements.
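One concrete instance of the feasible-safety-maneuver requirement is checking, before committing to the nominal plan, that a full stop remains possible within the clear distance ahead; the deceleration limit, reaction delay, and margin below are assumed, conservative values.

def stop_is_feasible(speed_mps, clear_ahead_m, max_decel=4.0, reaction_s=0.3, margin_m=2.0):
    """True if the vehicle can still brake to a full stop before the nearest obstacle.
    max_decel, reaction_s, and margin_m are assumed, deliberately conservative values."""
    stopping_distance = speed_mps * reaction_s + speed_mps ** 2 / (2.0 * max_decel)
    return stopping_distance + margin_m <= clear_ahead_m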

The situational planner’s handling of uncertainty, prediction of obstacle motion, and careful monitoring of trajectory cost ensure that it is resilient under widely varying conditions—important capabilities for autonomous urban driving.


Low-level Control

The low-level controller finds throttle, brake, and steering inputs that cause the vehicle to follow the trajectory generated by the situational planner. Low-level control has been studied extensively in the past and is considered a mature research area. Our approach is to decouple path tracking (steering angle) from velocity control (throttle and brake). Good velocity tracking performance can be achieved with a linear proportional-integral controller, yielding fast control response and small steady-state error. Generating an effective controller requires vehicle and actuator models and, in particular, knowledge of how these vary as a function of environmental and road parameters. We have extensive experience constructing these models using analysis and experimental data, ensuring that the low-level controllers will be both robust and adaptable to the environment [4, 15].
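A minimal sketch of the decoupled low-level control described above: a proportional-integral loop on speed (mapped to throttle and brake) and a separate proportional loop on heading error for steering. The gains and limits are placeholders, not tuned values from the vehicle model.

import math

class LowLevelController:
    """Toy decoupled controller: PI on speed, P on heading error.
    Gains and limits are illustrative assumptions."""
    def __init__(self, kp_v=0.5, ki_v=0.1, kp_steer=1.2, max_steer=0.6):
        self.kp_v, self.ki_v, self.kp_steer = kp_v, ki_v, kp_steer
        self.max_steer = max_steer
        self.v_err_int = 0.0

    def step(self, v_ref, v, heading_ref, heading, dt):
        # Speed loop: positive command -> throttle, negative -> brake.
        err = v_ref - v
        self.v_err_int += err * dt
        accel_cmd = self.kp_v * err + self.ki_v * self.v_err_int
        throttle, brake = max(accel_cmd, 0.0), max(-accel_cmd, 0.0)
        # Steering loop: wrap heading error to [-pi, pi], then proportional gain with clipping.
        h_err = (heading_ref - heading + math.pi) % (2 * math.pi) - math.pi
        steer = max(min(self.kp_steer * h_err, self.max_steer), -self.max_steer)
        return throttle, brake, steer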

1.4 Vehicle Hardware

We have opted for a maneuverable and robust platform, the Land Rover LR3. This vehicle is notable for advanced features such as standard wheel odometry encoders accessible via a built-in data bus, computer-controlled traction control, and fine-grained velocity control via the pedal at low speeds. Our modifications to the vehicle, shown in Figure 1.5, are minimally invasive, demonstrating the wide applicability of our methods and allowing us to change vehicles with minimal effort should our primary vehicle become damaged.

[Figure 1.5: (a) MIT Team Vehicle. Our vehicle is a Land Rover LR3 with a modular sensor rack mounted on the roof. (b) Vehicle modification block diagram: a 2.4 kW alternator, the roof sensor package, a 10-CPU rack connected by 1 Gb/s Ethernet, and the Vehicle Control Unit attached to the vehicle’s CAN bus.]

Drive-by-wire conversion will be done by a commercial vendor with extensive experience automating many types of vehicles, ensuring that our control system is generalizable to other vehicles with minimal effort. The conversion kit is modular; it physically manipulates the steering wheel, accelerator, brake, and auxiliary vehicle components.

1.5 Software

Software reliability is of paramount importance. We continually perform regression testing on our software suite, on both simulated and real data, to guarantee that bugs are detected early. We also have procedures in place for code review to ensure that every line of code is examined by several team members to check for reliability and robustness.

Software modules are made as independent as possible, running on different CPUs and communicating via Ethernet. If a module should fail (due to hardware or software problems), other modules will be largely unaffected.
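As one way to illustrate this module-isolation idea, the sketch below shows a watchdog that listens for periodic UDP heartbeats from each module and flags any module that goes silent; the port number, message format, and timeout are assumptions rather than the team's actual middleware.

import socket, time

HEARTBEAT_PORT = 7500   # assumed port: each module periodically sends its name as a UDP datagram
TIMEOUT_S = 1.0         # a module is flagged unhealthy after this much silence (assumed)

def watchdog():
    """Listen for module heartbeats and report any module that goes silent."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", HEARTBEAT_PORT))
    sock.settimeout(0.1)
    last_seen, reported = {}, set()
    while True:
        try:
            data, _ = sock.recvfrom(64)
            name = data.decode(errors="ignore").strip()
            last_seen[name] = time.time()
            reported.discard(name)          # module is alive again
        except socket.timeout:
            pass
        for name, t in last_seen.items():
            if time.time() - t > TIMEOUT_S and name not in reported:
                print(f"WARNING: module '{name}' stopped sending heartbeats")
                reported.add(name)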

1.6 Execution Strategy

The system is prepared by inserting a USB key; the mission planner parses the MDF and RNDF and is then ready to begin motion. The mission planner dynamically replans, passing the next desired step of the currently optimal route to the situational planner.

The bulk of the technical criteria are the responsibility of the situational planner, which computes trajectories that do not result in the vehicle coming too close to any other vehicles or objects (avoiding collisions, maintaining vehicle separation, leaving the lane to pass, returning to the lane after passing, maintaining minimum following distance, queuing, merging into traffic, maintaining vehicle separation during a merge, left turns, vehicle separation during left turns, zones, and emergency braking). Robustness to GPS outages is ensured by using the local map for trajectory planning and relying on intermittent GPS availability only for registering local maps to the global frame.

Traffic jams and obstacle fields are just particular cases of obstacle avoidance; they are identified by the situational interpreter and handled by the situational planner. If the vehicle is excessively delayed, however, the mission planner may select a new plan, and the vehicle will try to find a way around the traffic jam. If the mission planner determines that the vehicle should turn around, the situational planner will use the same safe-separation criteria to compute a safe U-turn or 3-point turn (as conditions permit). The planner also respects maximum speed limits, and minimum speed limits as safety allows.

Road following is accomplished using the output of the road-marking detection algorithms. Defensive driving comes at no additional development cost: the situational planner computes trajectories that avoid collisions, even those that would result from other vehicles’ motions. If another vehicle is on a collision course, the predicted position of the MIT vehicle becomes undesirable, causing another trajectory to be selected. Emergency braking is performed when necessary.

When the vehicle arrives at an intersection, the vehicle enters a special mode that encodes rules of the road that do not follow solely from vehicle-separation concerns. Special rules exist for stop lines, intersection precedence, and queueing. For example, the situational planner aligns the vehicle with the stop line feature when it appears in the local map. A special alignment mode also exists for parking maneuvers to ensure that an acceptable orientation is achieved. Otherwise, parking lots are similar to conventional obstacle-avoidance scenarios like traffic jams.
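A toy sketch of the intersection-precedence bookkeeping implied here (vehicles proceed in the order they stopped at their stop lines); the data structure and identifiers are illustrative assumptions.

from collections import deque

class IntersectionPrecedence:
    """Toy bookkeeping for an all-way stop: vehicles proceed in the order in which
    they arrived at their stop lines. "self" denotes our own vehicle."""
    def __init__(self):
        self.queue = deque()

    def vehicle_arrived(self, vehicle_id):
        if vehicle_id not in self.queue:
            self.queue.append(vehicle_id)

    def vehicle_cleared(self, vehicle_id):
        if vehicle_id in self.queue:
            self.queue.remove(vehicle_id)

    def our_turn(self):
        return bool(self.queue) and self.queue[0] == "self"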

Excess delay is prevented because the vehicle is always trying to minimize transit time. The vehicle will traverse intersections well under ten seconds after determining that the intersection is safe to cross. However, if the vehicle’s progress is unsatisfactory (due, for example, to another vehicle stuck in an intersection), the mission planner will select a new solution. For example, instead of insisting upon a left turn across traffic, a longer but faster route may be selected.


Chapter 2 - Team Description

This section provides an overview of the MIT Team, biographical sketches for key personnel, and a description of our test facilities.

2.1 Team Overview

MIT offers a diverse team of world-class faculty, staff, post-docs, and students for this effort. Our team leader is Prof. John Leonard, who will oversee the effort and implement navigation and mapping algorithms. The other MIT faculty members on our team include Prof. Seth Teller (perception and mapping), Prof. Jonathan How (planning, guidance, and control), Dr. Karl Iagnemma (hazard avoidance and control), Prof. Nicholas Roy (planning under uncertainty), Prof. Daniela Rus (motion), Prof. Bill Freeman (vision), Prof. Berthold Horn (vision), Prof. Russ Tedrake (learning and control), and Prof. Emilio Frazzoli (planning and control).

This group of faculty will be joined by a team of “core” DGC participants, working exclusively on this project. This will include two full-time postdoctoral fellows focusing on perception, planning and control, and a team of graduate student research assistants.

The single subcontractor on our team is Prof. David Barrett of Olin College, who will oversee LR3 vehicle modification and conversion. As former Vice President of Engineering at iRobot, Barrett has overseen the conversion of numerous commercial, military, and agricultural vehicles to autonomous operation (excerpts of the capabilities of these vehicles are shown in our companion video). At Olin College, Barrett has created an intelligent vehicles program with extensive facilities including a 150-acre autonomous vehicle test site on campus. Olin College is conveniently located eleven miles from MIT; its campus and other nearby roadways will serve as MIT’s primary test site.

MIT has chosen appropriate vendors for supporting technologies that have become “standard” for unmanned vehicles of this class. MIT will purchase services for conversion of a Land Rover LR3 vehicle from Electronic Mobility Controls, Inc. (EMC) of Baton Rouge, La. EMC provided conversion services for four teams in the 2005 Grand Challenge.

The Ford Motor Company has offered MIT collaborative assistance both directly and indirectly through the Ford-MIT Alliance. Ford has expressed support for our choice of the Land Rover LR3, and will make available engineering contacts within Ford to answer technical queries on vehicle control and sensor interfaces.

MIT will purchase computer vision systems and technical support from MobilEye Inc. for lane detection and car tracking. MobilEye hardware was previously used successfully by MIT team member Emilio Frazzoli as part of the Golem/UCLA team in the 2005 Grand Challenge.

The MIT team is uniquely capable of addressing the perception, planning and control problems under uncertainty posed by the Urban Challenge. MIT has a long and documented history of perception, planning and control under uncertainty. Our competitive advantage is a tight integration between statistical models of perception and the decision-making process. MIT team members have demonstrated capabilities in statistical mapping, statistical state estimation, motion planning in GPS-denied environments, and robust control in dynamic environments. We have repeatedly demonstrated the ability to model sensor data and build rich environmental models for navigating without GPS. We have demonstrated efficient techniques for ensuring that navigating vehicles can recognize and continue to make progress through GPS outages when possible, along with efficient techniques for incorporating visual landmark tracking when computing motion plans. Finally, we have demonstrated robust control techniques that allow a vehicle to move without collision in the presence of unpredictable dynamic obstacles.

2.2 Biographical Sketches of Key Personnel

John Leonard is an Associate Professor of Mechanical and Ocean Engineering at MIT. Leonard holds the degrees of B.S.E.E. in Electrical Engineering and Science from the University of Pennsylvania (1987) and D.Phil. in Engineering Science from the University of Oxford (1994). He has extensive experience developing and deploying autonomous mobile robots, and expertise in large-scale state estimation, sensor fusion, and real-time control. He has worked with a variety of sensor instrumentation including sonar, LIDAR, and omni-directional video. He has participated in numerous field deployments of autonomous underwater vehicles at sea and is currently leading research programs for the US Office of Naval Research for mine counter-measures and ship hull inspection.

http://cml.mit.edu/~jleonard/

David Barrett is Associate Professor of Mechanical Engineering and Design, Franklin W. Olin College of Engineering. Prior to joining the Olin faculty, Dr. Barrett was Vice President of Engineering at the iRobot Corporation. At iRobot, Dr. Barrett led the creation of iRobot’s intelligent vehicles program. Before iRobot, Dr. Barrett held positions as a Director of the Walt Disney Imagineering Corporation, as a Research Engineer at MIT’s Artificial Intelligence Laboratory, and as a Technical Director at the Charles Stark Draper Laboratory. With over 25 years of experience in the robotics industry, Dr. Barrett has built robots that walk, hop, swim, roll, and entertain for a wide variety of government, commercial, and industrial customers. Dr. Barrett received his Ph.D. and M.S. in Ocean Engineering and M.S. in Mechanical Engineering from MIT. In addition to his many published articles, Dr. Barrett holds nine patents with previous colleagues on a variety of robotic systems. Robotics, intelligent/unmanned vehicles, mechanical design, agricultural engineering, ocean engineering, design for manufacturing, and product design are a few of Dr. Barrett’s teaching and research interests.

http://www.olin.edu/scope/index.cfm?itemid=PRD

Berthold K.P. Horn is Professor of Computer Science and Engineering at MIT. Prof. Horn pioneered the “physics based” approach to machine vision, developed methods for recovering the shape of an object from an image using “shading” information, methods for recovering “optical flow,” and so-called “direct” methods for determining rigid body motion from image sequences. He has been connected with the MIT Artificial Intelligence Laboratory since soon after its inception in the 1960s and has been on the faculty of the MIT Electrical Engineering Department since 1973. He is the author or co-author of three books, including "Robot Vision", as well as about a hundred articles in refereed journals and five patents on vision-related inventions. His current interests include the use of vision in “intelligent vehicles” and “computational imaging.”

http://csail.mit.edu/~bkph

Emilio Frazzoli has accepted a position as Assistant Professor in the Aeronautics and Astronautics Department at MIT. His research interests include algorithmic, computational, and geometric approaches for control of autonomous systems, in aerospace and other domains. Frazzoli’s application areas include: distributed cooperative control of multiple vehicle systems over wireless networks; guidance and control of agile vehicles; high-confidence software engineering for high-performance dynamical systems; and verification of hybrid systems. Frazzoli was one of two faculty members involved in the development of Golem 2, UCLA’s Grand Challenge entry in 2005. Golem 2 was a finalist, driving at up to 45 miles per hour before a software malfunction halted the vehicle after 22 miles. (See http://www.golemgroup.com.)

http://ares.seas.ucla.edu

William Freeman is Associate Professor in MIT’s Department of Electrical Engineering and Computer Science, and a member of the Computer Science and Artificial Intelligence Laboratory. His expertise lies at the intersection of computer vision and machine learning, and he contributes in both fields. He has developed algorithms for separating reflectance from 3-D shading properties in images, recognizing and categorizing locations, and exploiting visual context for object detection.

http://people.csail.mit.edu/billf/wtf.html

Jonathan P. How is an Associate Professor and Raymond L. Bisplinghoff Fellow in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology (MIT). He received a B.A.Sc. in Engineering Science (Aerospace Option) from the University of Toronto in 1987 and his S.M. and Ph.D. in Aeronautics and Astronautics from MIT in 1990 and 1993, respectively. He then studied for two years at MIT as a postdoctoral associate in charge of control design for the Middeck Active Control Experiment (MACE) that flew on board the Space Shuttle Endeavour in March 1995. Prior to joining MIT in 2000, he worked for 5.5 years as an Assistant Professor in the Department of Aeronautics and Astronautics at Stanford University. He has graduated a total of 19 Ph.D. (and 20 S.M.) students while at MIT and Stanford University on topics related to GPS, formation flying, advanced control, and trajectory optimization using mixed-integer programming. His current research focuses on: 1) decentralized coordination and trajectory design for teams of cooperating UAVs; 2) spacecraft navigation, control, and autonomy, including GPS sensing for formation-flying vehicles; and 3) theoretical analysis and synthesis of robust, hybrid, and adaptive controllers. He was the recipient of the 2002 Institute of Navigation Burka Award for outstanding achievement in the preparation of papers contributing to the advancement of navigation and space guidance, is an Associate Fellow of AIAA, and a senior member of IEEE.

http://www.mit.edu/people/jhow/

Karl Iagnemma is a principal research scientist in the Mechanical Engineering department at the Massachusetts Institute of Technology. He holds a B.S. from the University of Michigan, and an M.S. and Ph.D. from MIT, where he was a National Science Foundation Graduate Fellow. Iagnemma’s primary research interests are in the areas of sensing, motion planning, and control of mobile robots in outdoor terrain, including modeling and analysis of robot-terrain interaction. He is author of the book Mobile Robots in Rough Terrain: Estimation, Planning and Control with Application to Planetary Rovers (Springer, 2004). He has recently led research programs for agencies including the U.S. Army Tank-Automotive and Armaments Command, the Army Research Office, DARPA, the NASA Mars Office, Ford Motor Company, and the NASA Institute for Advanced Concepts, among others. He has authored or co-authored many conference and journal papers on a wide range of robotics topics, and has consulted for various private companies and government agencies.

http://robots.mit.edu/people/Karl/karl.html

Nicholas Roy is the Charles Stark Draper Assistant Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology. He received his B.Sc. in Physics and Cognitive Science in 1995 and his M.Sc. in Computer Science in 1997, both from McGill University. He received his Ph.D. in Robotics from Carnegie Mellon University in 2003. Prof. Roy’s main research interests include autonomous systems, mobile robotics, human-computer interaction, decision-making under uncertainty, and machine learning. He has developed new algorithms for decision making in the context of Partially Observable Markov Decision Processes, and demonstrated their utility on a number of real-world robotic applications. His most recent interests include decision-theoretic models of sequential decision making for exploration and active learning problems. Prof. Roy has participated in and led the deployment of a number of robot systems, including winning entries in the American Association for Artificial Intelligence (AAAI) robot and Grand Challenge competitions. He was part of the development team for Minerva, one of the first museum tour guide robots, deployed in 1998, and was the lead developer of robotic technology for health-care assistance from 1999 to 2003. He is a co-author of CARMEN, an open-source mobile robot navigation software suite developed under the DARPA MARS program.

http://web.mit.edu/nickroy/www/

Daniela Rus is a professor in the EECS Department and co-director of the CSAIL Center for Robotics at MIT. Previously, she was a professor in the computer science department at Dartmouth College. Rus holds a PhD in computer science from Cornell University. Her research interests in robotics include robot design, control, planning, and perception. She has developed and built several novel robots, including systems capable of autonomous climbing, autonomous shape morphing, and autonomous underwater operation. She has developed one of the first complete motion planning systems for dexterous manipulation and numerous task-specific motion planners for navigation, sensor placement, coordination, and manipulation for single-robot and distributed robot systems. She has pioneered distributed networked control of robots and has contributed novel Lie-algebraic controllers for robots. She was the recipient of an NSF Career award. She is an Alfred P. Sloan Foundation Fellow and a 2002 MacArthur Fellow.

http://people.csail.mit.edu/rus/

Russ Tedrake is an Assistant Professor of Electrical Engineering and Computer Science at MIT, and a member of the Computer Science and Artificial Intelligence Laboratory. He holds a B.S.E. in Computer Engineering (1999) from the University of Michigan, and a Ph.D. in Electrical Engineering and Computer Science from MIT (2004). He joined the MIT Brain and Cognitive Sciences Department as a Postdoctoral Associate in 2004, and then joined the faculty of EECS in 2005. Prof. Tedrake’s main research interests are in designing learning control systems for very dynamic and agile motions, including efficient and robust bipedal walking, multi-legged locomotion over extreme terrain, agile autopilots for fixed-wing aircraft, and flapping-winged flight. He has built and supervised the design, construction, and control implementation of many sophisticated robots.

http://people.csail.mit.edu/russt/

Seth Teller is an Associate Professor of Computer Science and Engineering at MIT’s Department of Electrical Engineering and Computer Science, and a member of the Computer Science and Artificial Intelligence Laboratory. He holds a B.A. in Physics (1985) from Wesleyan University and M.Sc. (1990) and Ph.D. (1992) degrees from the University of California at Berkeley. Since 1994 Teller has been at MIT, where he has been a recipient of an NSF Career award and an Alfred P. Sloan Foundation Fellowship. He has extensive experience in computational geometry and 3D modeling, including the development of spatial data structures for efficient geometric analysis and visualization. He has demonstrated scaling of image-based and video-based camera calibration and structure extraction to tens of thousands of images acquired over tens of thousands of square meters. He and co-PI John Leonard developed the Atlas framework to scale robotic mapping capabilities to sustained robot excursions both indoors and outdoors.

http://people.csail.mit.edu/teller/

2.3 Test Facilities

The MIT team has unlimited access to Olin College’s existing intelligent vehicle test track facility, including over 150 acres of roadways and parking lots at both Olin and Babson College. This is private property for which permission for autonomous vehicle testing has already been approved. It is a “drive on, drive off” test facility only eleven miles from MIT. Garage space for vehicle development is available both at Olin College and at MIT. We plan to perform extensive tests of our sensor systems under manual driving control in Cambridge and Boston, Massachusetts (arguably one of the more challenging urban driving settings in the US). If needed, there are a number of larger test facilities within a few hours’ drive of MIT, including former military and MIT-owned facilities such as the MIT Haystack Observatory, a 1,400-acre facility 40 miles from MIT in rural Massachusetts.


Chapter 3 - Management and Funding Plan

3.1 Personnel Plan

Our team is organized around a small core that will devote nearly all of its time to the MIT DGC vehicle. This core group is composed of post-doctoral researchers and doctoral students, and is advised on both administrative and technical matters by the Co-PIs. Augmenting this work force is a larger pool of graduate students, undergraduates, and technicians. While this larger work force officially reports to the Co-PIs, their specific assignments will tend to be directed by the full-time core team.

[Figure 3.1: Management structure. The MIT team is organized around a team of full-time team members divided among three major subsystems: the vehicle, planning/control, and perception. Above them are the Co-I advisors, and beneath them is a pool of less-than-full-time members who are called upon by team leaders as needed. The chart shows the Co-Investigators (administrative and technical advisors) above the Technical Lead, who directs the Vehicle Team, the Planning & Control Team, and the Perception Team (with Vision and LIDAR sub-teams); the full-time DGC team members are supported by an additional graduate/undergraduate workforce.]

The core team is managed by Leonard, who as technical lead is responsible for the “big picture”, ensuring that progress on systems and integration occurs in accord with internal and external milestones. Our team has extensive experience in deploying field robots worldwide for experiments and has developed the necessary logistics capabilities for operating robots far from the lab out in the real world.

Beneath the technical lead, there are three system teams: vehicle engineering (led by David Barrett), planning and control (led by Jonathan How), and perception (led by Seth Teller). Each effort is organized around a “surgical team”, as described by Brooks [3]. An important element of such surgical teams is the “tester,” drawn from the larger pool and primarily responsible for developing an automatic test/regression system and maintaining a database of test cases.


Additionally, testers ensure that tests are run automatically every night on the team’s code, and that useful reports are sent to team members.

Barrett, Iagnemma and their students form the vehicle engineering team, responsible for physical preparation of the vehicle, including necessary mechanical modifications, maintenance, and installation of equipment. They also have ultimate responsibility for ensuring vehicle safety and serve as the range safety officers.

Olin College will convert the MIT-acquired Land Rover LR3 vehicle to autonomous robotic operation. Olin will design and fabricate the advanced sensor mounting system for the roof of the vehicle. Olin will install the auxiliary power unit for the vehicle. Olin will also provide field engineering support for outdoor field testing.

How, Roy, Tedrake, Frazzoli and their students form the planning and control team; they are responsible for mission planning and situational planning, including trajectory computation.

Teller, Freeman, Horn, Rus and their students form the perception team, responsible for the vision and LIDAR systems and for processing all data produced by these sensors. Each sensor subsystem represents a significant amount of work; consequently, this team has two sub-teams, one focused on vision and the other on LIDAR.

Leonard will manage the overall system integration of the components developed by the team. He has extensive experience in fielding autonomous robot vehicles that must operate in real-time in dynamic and unpredictable environments. He has led the deployment of autonomous underwater vehicles equipped with real-time intelligent navigation and control software in a range of difficult environments, including the Arctic and Pacific Oceans and the Mediterranean Sea.

In addition, our plan makes good use of a larger pool of engineers and technical experts whose availability shifts constantly due to other commitments. Their expertise can be employed when it is useful, while remaining realistic about the number of hours they can devote.

We believe this management plan will be successful because of the core group of team members who will devote their full time to the project. Keeping this core small reduces communication overhead and keeps the teams agile.

3.2 Task Breakdown and Milestone Schedule The task breakdown is organized according to the overall system architecture discussed in the Technical Approach. The schedule presented in the following chart ensures that milestones are reached on time, while providing a number of internal milestones to guide development:

(Milestone schedule chart)

3.3 Cost (Redacted)


Salary Justification During the “base” period of this proposal, which runs through milestone 2 performance notification (August 31, 2007), we will support two full-time postdoctoral fellows and two full-time graduate student research assistants. One post-doc will develop software and algorithms for sensing and perception; the other post-doc will develop planning and control capabilities.

Option 1 of the budget covers the time frame after milestone 2 notification (September 1 through October 31). During this time, our budget is augmented to support two additional research assistants and to provide 50% salary support for one principal research scientist (Iagnemma). Option 2 of the budget covers the time frame after milestone 3 notification (November 1 through December 31).

Equipment Justification The base component of our proposal requests $70,000 of equipment funding; this will supplement the core vehicle (purchased with MIT funding) by enabling acquisition of a high-performance integrated GPS/INS system, the RT3052 from Oxford Technical Solutions (quoted at $58,905, excluding delivery). Delivery time for this item is only two weeks, so it can be obtained and integrated soon after contract award.

Option 1 of our proposal requests $60,000 of equipment funds; this will be used to purchase a second LR3 race vehicle and vehicle conversion from Electronic Mobility Controls, Inc. The second race vehicle will enable our team to perform more extensive testing leading up to the final challenge, and will provide a backup should a last-minute problem be encountered with our primary vehicle.

Option 2 of our proposal also requests $60,000 of equipment funds, to be used for purchase of “backup” equipment consisting of cameras, LIDARs, and computer systems for the final competition.

Olin Subcontract Olin’s base budget will have two components: equipment and labor. The labor component will fund six Olin students for research during summer 2007 at a rate of $9.61/hour for 40 hours/week for 13 weeks. There is no overhead on this salary, hence the cost is $5,000 per student for a total of $30,000.
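For clarity, the per-student figure follows directly from the stated rate and duration (rounded up slightly, as reflected in the total): $9.61/hour × 40 hours/week × 13 weeks = $4,997.20 ≈ $5,000 per student; 6 students × $5,000 = $30,000.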

The equipment component of the subcontract consists of mechanical, computer, and electrical components that together will form a fabricated piece of major equipment, the active roof mounting system for the MIT team cameras and LIDARs, as follows:

• mechanical: $10,000
• electrical: $5,000
• computer: $5,000
• total: $20,000

As major fabricated equipment, these costs carry no overhead. The equipment will be fabricated in fall 2006, during which time there are no labor costs to the project, as Olin staff and student costs are covered under academic-year funding.


Olin’s option 1 and option 2 funding will be used for fabrication of electromechanical and computer systems needed for the final challenge demonstrations, with an estimated cost of $25,000 for each option.

3.4 Funding Plan

MIT Seed Funding Expenditures The MIT School of Engineering has provided our team with initial startup resources of $250,000 to launch our effort in Summer 2006. This funding will be used to purchase a Land Rover LR3 vehicle, automate it, and equip it with eight SICK laser scanners, six high-performance video cameras, and associated on-board computation. This acquisition will be performed in June, July, and August 2006, with the goal of having a fully automated vehicle capable of acquiring sensor data and navigating autonomously with GPS by October 1, 2006. MIT provides full salary support for faculty during the nine-month academic year. Our startup funding will also be used to support one full-time graduate student research assistant, who will assist in the conversion of the LR3 to autonomous control.

DARPA Award Funding Funds received through a DARPA Award would be used to satisfy the costs outlined in Section 3.3.

3.5 Export Laws and Regulations MIT's proposed research effort falls into the category of fundamental university research; accordingly, export laws and regulations such as ITAR (22 C.F.R. 120–130) are not applicable.


Bibliography

[1] ALIGHANBARI, M., BERTUCCELLI, L. F., AND HOW, J. Filter-Embedded UAV Task Assignment Algorithms for Dynamic Environments. In Proceedings of the AIAA Guidance, Navigation and Control Conference (Aug. 2004), no. AIAA-2004-5251.

[2] BOSSE, M., NEWMAN, P., LEONARD, J., AND TELLER, S. Simultaneous localization and map building in large-scale cyclic environments using the Atlas framework. The International Journal of Robotics Research 23 (2004), 1113–1139.

[3] BROOKS, JR., F. P. The Mythical Man-Month. Addison-Wesley, 1975.

[4] COLLINS, S. H., RUINA, A., TEDRAKE, R., AND WISSE, M. Efficient bipedal robots based on passive-dynamic walkers. Science 307 (February 18, 2005), 1082–1085.

[5] FRAZZOLI, E. Robust Hybrid Control for Autonomous Vehicle Motion Planning. PhD thesis, MIT, June 2001.

[6] HOIEM, D., EFROS, A., AND HEBERT, M. Putting objects in perspective. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (June 2006).

[7] LYNCH, N., SEGALA, R., AND VAANDRAGER, F. Hybrid I/O automata revisited. Lecture Notes in Computer Science 2034 (2001), 403+.

[8] NEGAHDARIPOUR, S., AND HORN, B. A direct method for locating the focus of expansion. Computer Vision, Graphics, and Image Processing 46, 3 (June 1989), 303–326.

[9] OLSON, E., LEONARD, J., AND TELLER, S. Fast iterative optimization of pose graphs with poor initial estimates.

[10] OLSON, E., WALTER, M., LEONARD, J., AND TELLER, S. Single cluster graph partitioning for robotics applications. In Proceedings of Robotics: Science and Systems (2005), pp. 265–272.

[11] RICHARDS, A., BREGER, L., AND HOW, J. P. Analytical performance prediction for robust constrained model predictive control. International Journal of Control 79, 8 (August 2006), 877–894.

[12] SAND, P., AND TELLER, S. Particle video: Long-range motion estimation using point trajectories. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (June 2006).

[13] SCHOUWENAARS, T., METTLER, B., FERON, E., AND HOW, J. P. Hybrid Model for Trajectory Planning of Agile Autonomous Vehicles. AIAA Journal of Aerospace Computing, Information, and Communication (Dec. 2004), 629–651.

[14] SCHOUWENAARS, T., VALENTI, M., FERON, E., HOW, J. P., AND ROCHE, E. Linear Programming and Language Processing for Human-Unmanned Aerial-Vehicle Team Mission Planning and Execution. AIAA Journal of Guidance, Control, and Dynamics 29, 2 (March–April 2006), 303–313.

[15] TEDRAKE, R. L. Applied Optimal Control for Dynamically Stable Legged Locomotion. PhD thesis, Massachusetts Institute of Technology, 2004.

[16] TIN, C. Robust Multi-UAV Planning in Dynamic and Uncertain Environments. Master’s thesis, MIT, Aug. 2004.
