
General Motors Vision System

Qing Cheng <[email protected]>
John Brichetto <[email protected]>
Matthew Carman <[email protected]>
Jaber Almegbah <[email protected]>

Senior Design (ECE405)
Revision
Dr. Cochran
May 2, 2018


Table 1 Document Revision History

Revision Date For Revision Change(s) Reason

Contents

1 Introduction
2 Problem Statement
  2.1 Literature Survey
  2.2 State Customer Needs
  2.3 Define Use Case(s)
  2.4 System Boundary
  2.5 Interface Definition
  2.6 Customer-Defined Constraints
  2.7 Requirements and Specifications
3 Detailed Design
  3.1 Product Design Map
  3.2 Conceptual Design Alternatives
  3.3 Criteria for Selection and Testing Among Alternatives
  3.4 Design Alternatives Verification Plan 1
  3.5 Design Alternatives Validation Plan 2
  3.6 Risk Mitigation
4 Component Definition and Planned Build
  4.1 Planned Layout
  4.2 Wiring Diagram
  4.3 Central Storage User Interface
5 Test Plan
  5.1 Test Procedure
  5.2 Failure Mode Effects Analysis (FMEA)
  5.3 Physical Testing and Analysis
6 Cost
  6.1 Bill of Materials
  6.2 Material and Cost Analysis
7 Project Plan and Timeline
8 Design Changes
9 System as Built
10 Moving Forward
11 Conclusions
12 Lessons Learned
13 References
14 Appendix A - Design Process Decomposition
15 Appendix B - List of Tables
16 Appendix C - List of Figures
17 Appendix D - Raspberry Pi Code


1 Introduction

General Motors Assembly of Fort Wayne is one of the largest manufacturers in Indiana, with a facility of over 2,850,000 square feet and more than 3,900 employees. The economic benefit to both the company and the region is considerable, but so is the cost of doing business at that scale. This project's goal is to reduce the operating costs of the General Motors production facility by reducing the number of surface mutilations on finished vehicles. Repairing a single surface mutilation costs between $500 and $1,000; at this volume of production, approximately $200,000 is spent every week repairing vehicle surface mutilations.

2 Problem Statement

General Motors Fort Wayne Assembly's Direct Run Rate is currently 86%, and the company's goal is to raise it to 95%. GM defines the Direct Run Rate as the percentage of vehicles that roll off the production line with no defects. Of the 14% of vehicles that are removed from the production line for quality assurance purposes, 18% have surface defects. Finding the sources of surface mutilations will allow the Quality Control team to make appropriate changes to the production line, tools, and methods to prevent surface mutilations from occurring, thus raising the Direct Run Rate of the production facility.

2.1 Literature Survey

The design team researched similar projects designed to locate surface mutilations and defects, in order to adapt their ideas to the vision system. One reference describes a high-speed inspection system built to find defects on a steel production line; the team studied how its user interface was constructed and used it as a model for this project's UI. The team also researched how data can be transmitted to and from a Raspberry Pi so that the user interface can display the collected information. The Raspberry Pi's Secure Shell (SSH) support allowed the design team to work on the devices remotely and to set up the server that receives images from the Pi cameras.

The reference "How do I send an image from raspberry pi (Linux) to a windows pc via TCPIP" demonstrates how a raspberry pi sends images to the centralized data storage station. Vi-sion system design team need to have this kind of knowledge to implement the software whichresponsibility for sending the information (image, job number, date) to centralized computer.


Figure 1: High-Speed Inspection GUI


2.2 State Customer Needs

The customer needs to reduce the number of surface mutilations occurring on vehicles at the production facility. To accomplish this goal, the customer has asked IPFW to implement a system of cameras that will capture images of vehicles as they move down the production line and record those images into a database so that a user may look at a vehicle's history and determine when a surface mutilation may have occurred.

2.3 Define Use Case(s)

Figure 2: Use Case Diagram

The system will capture and store images of vehicles, as well as physical location and time data, and organize them in an easily accessible database. The user of the system will then be able to examine a vehicle's physical appearance at various points throughout the factory and determine that a mutilation has occurred between camera stations x and y.

This system will allow the user to examine possible sources of mutilation between cameras x and y, as opposed to the current method of tracking down mutilation sources anywhere between the paint shop and End of Line. It is expected that this system will save the quality control department a significant amount of time and effort, allowing it to track down and identify more sources of mutilation in less time. This saves General Motors time and money while increasing the Direct Run Rate.

2.4 System Boundary

The system boundary diagram, shown below in Figure 3, represents the controlled and uncontrolled aspects of the vision system. Important interfaces include the barcodes on each of the vehicles, the surface mutilations on each panel, the assembly line, and the users of the system.

Figure 3: System Boundary Diagram

2.5 Interface Definition

Production Line

The vision system to be implemented will need to work alongside the production line to capture image data. It is imperative that the system does not impede the production line or its timing. As each vehicle passes the system, each of the six cameras will capture images from a different angle and view of the vehicle. The barcode reader will read the barcode on the vehicle and attach the vehicle number, date, and time to each image.

Vehicles

The vision system needs to be able to capture high-resolution images of the passing vehicle from six different angles. These images need to accurately portray the physical properties of each vehicle panel, as quality control personnel will be inspecting these images for surface mutilations. It is crucial that scratches, dings, and dents are easily identifiable using the designed system.

Vehicle Identification Barcode

Each vehicle on the General Motors assembly line is fitted with a barcode that carries information about the vehicle. The information needed by the vision system is the Primary Vehicle Identifier (PVI), the job number associated with each vehicle. The PVI will be scanned by a barcode reader controlled by a single Raspberry Pi, attached to each image captured by the system, and stored in the centralized database. It will be the main method of sorting the gathered images.

Quality Control Department

The quality control department of General Motors is the primary user of the vision system. Once the images of each vehicle have been captured, a central data storage unit will be used both to store and to access the data. This unit will allow General Motors quality control employees to browse the collected data and sort it by a number of methods, the most common being the vehicle number. Sorting by vehicle number allows a vehicle to be examined across the different vision system stations and helps narrow down where on the line the mutilation occurred. Since this is how General Motors employees will use the system, the user interface must be user friendly and intuitive.

2.6 Customer-Defined Constraints

• $15,000 budget


• 2 Semesters to complete

• Implement the system within GM's physical constraints: hardware must fit along the production line without creating safety concerns.

• Operate with the existing assembly line's timing.

• The system cannot interfere with or impede the production line timing.

• Operate multiple instances of the system in different assembly line locations.

• Adaptable to different body panels.

• Must capture and store at least one week's worth of production data (approximately 15,000 vehicles).

• Output must show physical location, date, time, and job number.

• Data logging must be easily sorted and user friendly.

• The acquired data must be searchable by angle of view, PVI, and station number.

The constraints listed above have changed with the progression of the project. As more information was uncovered about GM's expectations, more constraints were defined.

2.7 Requirements and Specifications

• The vision system must be implemented in at least 6 different locations along the assembly line to adequately capture each commonly mutilated body panel.

• The captured images must accurately represent the physical mutilations on each truck panel.

• At least 6 different cameras are to be implemented in each of the 6 locations along the assembly line.

• The cameras are to be triggered automatically by a proximity sensor or by the barcode attached to each vehicle.

• After the images have been captured, a microcontroller will grab each image, attach the time, date, job number, and view of the truck, and send this information to a centralized data storage unit.

• The centralized data storage unit must have sufficient capacity to store at least 8 days' worth of collected truck data.

• Data access must be timely and user friendly.


3 Detailed Design

3.1 Product Design Map

Figure 4: System Design Decomposition Overview

Figure 5: Decomposition Levels 1 and 2 Highlight


Figure 6: Decomposition Levels 1 and 2

As shown above in Figure 5, the physical solution to FR2.1 has not been completed. This is intentional, as it is up to General Motors and the quality control department to ensure the reduction of surface mutilations. The vision system is a tool that will be utilized by the quality control department in its search for mutilation occurrences. General Motors' quality personnel must devise a method of identifying mutilations, viewing the data logged by the vision system, and using this data to further track down sources of defects.

Figure 7: Decomposition Level 3 Highlight


Figure 8: Decomposition Level 3

Level 3 functional requirements include the collection, access, storage, and transmission of data. The collection of data will be handled by a modular sensor network utilizing either a Raspberry Pi or an Arduino. Data access will be granted through a GUI (rather than a command-line interface), allowing the user to browse all collected data from the previous week. The data will be stored using a PC configured as a centralized data storage unit; the PC will also provide the basis for data access. Data transmission will be handled with Ethernet cable, Bluetooth, or Wi-Fi, depending on performance. These alternatives are discussed in Section 3.2.

Figure 9: Decomposition Levels 4.1 and 5.1 Highlight


Figure 10: Decomposition Levels 4.1 and 5.1

The collection of data requires several components that must work together. The data collected must include images of scratches, date, time, and location information, the vehicle job number, and images of dents and dings. Scratches will be captured using a camera triggered by a barcode scanner or a proximity sensor, and the images will be exported automatically. Date, time, and location information will be stored on the devices at each location for easy data logging. The vehicle number will be captured using the barcode scanner, and the capturing of dents and dings will most likely be done using a 2D grid projected onto each panel. The alternatives are considered in Section 3.2.
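To make the grouping of these fields concrete, the sketch below bundles the collected items into a simple record and derives a file name from it. The class name, field names, and the comma-separated naming convention (borrowed from the test procedure in Section 5.1) are illustrative only, not the team's actual data structure.

```python
# Sketch of a per-image record bundling the data fields named above.
# Field names and the file-naming convention are illustrative only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CaptureRecord:
    job_number: str      # PVI read from the vehicle barcode
    station: int         # which of the six stations took the image
    angle: int           # which of the six camera angles
    taken_at: datetime   # date/time stored on the device

    def filename(self):
        # "JobNumber,StationNumber,AngleNumber.PNG" style name used in Section 5.1
        return f"{self.job_number},{self.station},{self.angle}.PNG"

record = CaptureRecord("123456", station=2, angle=4, taken_at=datetime.now())
print(record.filename(), record.taken_at.isoformat())
```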


Figure 11: Decomposition Level 4.2 Highlight

Figure 12: Decomposition Level 4.2

A graphical user interface will allow General Motors personnel to access the stored data. A search bar will be provided, allowing the data to be sorted by various parameters (date, time, location, job number).


Figure 13: Decomposition Level 4.3 Highlight

Figure 14: Decomposition Level 4.3

The vision system must store at least one week's worth of collected data, allowing quality control to look back over the past 7 days. To store this amount of data, hard-disk drives providing approximately 5.4 terabytes of capacity will be utilized. Other possible alternatives are discussed in Section 3.2. After a week has passed, the storage system will be programmed to purge the data to free up space. After a mutilation is found, quality control may wish to export specific images and data for further analysis. This can be done using a USB interface, Bluetooth, or Wi-Fi; these alternatives are described in detail in Section 3.2.
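A minimal sketch of the kind of purge routine described above is shown below; it deletes any stored image older than seven days. The storage directory path is a placeholder, and the real data management program may work differently.

```python
# Sketch of the weekly purge: delete stored images older than seven days.
# The storage directory is a placeholder.
import os
import time

STORAGE_DIR = "/data/vision_system"   # assumed image storage location
MAX_AGE_SECONDS = 7 * 24 * 3600

def purge_old_images(root=STORAGE_DIR, max_age=MAX_AGE_SECONDS):
    cutoff = time.time() - max_age
    for entry in os.scandir(root):
        if entry.is_file() and entry.stat().st_mtime < cutoff:
            os.remove(entry.path)

if __name__ == "__main__":
    purge_old_images()
```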


3.2 Conceptual Design Alternatives

Data Collection Alternatives (PS3.1A and PS3.1B)

For data collection handling, FR3.1, a microprocessor is needed to control the cameras and to send data to the centralized data storage unit (PS3.1A and PS3.1B). Two possible microcontrollers are being considered: the Raspberry Pi 3 and the Arduino. The Raspberry Pi provides greater data processing capability, an embedded operating system, and, most importantly, plug-in Raspberry Pi cameras. The Pi cameras will be easy to implement, as they are designed to interface with the Raspberry Pi. The resolution and effectiveness of the cameras are to be determined through testing.

The Raspberry Pi can be programmed in C/C++, languages that are very familiar to the members of the design team. The Arduino provides an embedded real-time system, which would allow for easier timing and triggering of the cameras. However, these benefits would be offset by a more complicated camera implementation, since available cameras are not designed to work specifically with the Arduino.

System Trigger Alternatives (PS5.1.1A and PS5.1.1B)

Another design consideration for the vision system is how the cameras are triggered. General Motors currently implements limit switches throughout the plant; these are physical switches that are triggered when a carrier passes and presses on the switch. Since the vision system cannot impede the assembly line, it is best to devise another means of triggering the cameras.

A possible alternative is to implement a proximity sensor to detect when a vehicle has moved into the desired position. The sensor will be mounted relatively close (less than 2 ft) to where the vehicle should be when the desired images are captured. This sensor will detect when a vehicle is present, send a signal to the microcontroller, and trigger the cameras. Another alternative is to use a barcode scanner set in position to read the barcode on each vehicle as it passes. When a vehicle is present, the barcode scanner will scan the barcode and signal the cameras to capture images; the microcontroller will then attach the PVI to each captured image. A truly robust system could utilize both the barcode scanner and a proximity sensor.
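The sketch below illustrates the proximity-sensor alternative on a Raspberry Pi: block until the sensor signals that a vehicle is in position, then fire the cameras. The GPIO pin number and the trigger_cameras() stand-in are assumptions for the example, not part of the chosen design.

```python
# Sketch of the proximity-sensor trigger: wait for the sensor to signal a
# vehicle in position, then fire the cameras. Pin number and the
# trigger_cameras() hook are placeholders.
import RPi.GPIO as GPIO

SENSOR_PIN = 17          # assumed GPIO pin wired to the proximity sensor

def trigger_cameras():
    print("vehicle detected - capture images")   # stand-in for the real capture call

GPIO.setmode(GPIO.BCM)
GPIO.setup(SENSOR_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

try:
    while True:
        GPIO.wait_for_edge(SENSOR_PIN, GPIO.RISING)   # block until a vehicle arrives
        trigger_cameras()
finally:
    GPIO.cleanup()
```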

Time/Date Information (PS4.1.2A and PS4.1.2B)

Date and time information is essential to providing a sortable user interface that can effectively narrow down where mutilations occur. There are two alternatives for tracking date and time information. The first is to store this information on each Raspberry Pi and attach it to each image; the benefits of this approach are security and simplicity, and when dealing with a company of General Motors' magnitude, security is paramount.

A second method of providing date and time information is to pull the current date and time from a web server. This alternative would make the information more accurate and error-proof. A major concern with this alternative is reliability: if General Motors' IT services go down at any point, the system would struggle to pull accurate time and date information, which would dramatically reduce the efficiency and practicality of the vision system.

Centralized Data Storage/Access (PS3.3A and PS3.3B)

Access to the collected data is key to the reduction of mutilations. Quality control personnel need a way to observe a week's worth of data to identify locations of frequent surface mutilations. To do this, data access will be provided through a personal computer. The alternative is to allow data access through a microcontroller, which would be a low-cost option but would require significant upgrades. The PC is the logical choice for data access, as it is user friendly, sufficiently powerful, and allows for large amounts of data storage.

Data Storage (PS4.3.1A, PS4.3.1B and PS4.3.1C)

As a PC is the logical data access choice, a hard-disk drive will be required to meet the data storage need. Presented with several storage options, the hard-disk drive is the logical choice among the alternatives. Solid-state drives have the advantage of faster read and write times and greater longevity; however, for the data being stored, the read and write times of a hard-disk drive will not be an issue. An SD card was another possibility because of its transportability, but its capacity is severely limited.

Exportation Alternatives (PS4.3.3A, PS4.3.3B and PS4.3.3C)

After a surface mutilation is detected, the users of the system will access the stored data through the PC, which will provide access to all of the collected truck data for at least one week. Once the user has found a specific mutilation, the truck data can be exported from the PC for further analysis. This exportation will be done through a USB port, allowing the user to determine when and what data to export; a USB port also makes the exported data easily transportable. Alternatives to a USB port include exportation via Bluetooth or Wi-Fi. The main advantage of both alternatives is that no additional physical hardware is needed; however, Bluetooth and Wi-Fi would be much more difficult to implement in software.

Capturing Dents/Dings (PS4.1.4A, PS4.1.4B and PS4.1.4C)

Identifying scratches from an image is a rudimentary task, while identifying dings and dents is much more difficult. Scratches provide a clear contrast, whereas dents and dings present themselves through glares and reflections. To capture dents and dings, a projected grid will be implemented to make mutilations visible. The grid will provide consistent spacing between small sections of the vehicle panel and will reveal dents and dings through the warping of the grid lines. This solution is inexpensive and simple to implement; however, its effectiveness is to be determined through testing.

Some possible alternatives to the projected grid include angling a camera down the body panel and implementing a 3D camera. An angled camera would point along the side of the panel and, ideally, show the body panel lines and mutilations. General Motors' assembly line has different lighting in different parts of the factory, which would make lighting and glare an issue. The 3D camera could be the most effective alternative, as it generates a 3D topographical image of the body panel, but this option is the most expensive and would be the most complex to integrate into the vision system.

Data Transmission (PS3.4A, PS3.4B and PS3.4C)

Data transmission is another crucial part of the vision system. To get the collected data to the centralized data storage unit, there are several alternatives to consider. The most reliable and robust option would be to pull cable and directly wire the system together, but this would be the most physically challenging alternative to implement. Bluetooth would be inexpensive, but its security is weak and the physical distance between stations may be limiting. Using Wi-Fi to transmit data is physically the simplest to implement, but again, the distances between stations are a major concern.

3.3 Criteria for Selection and Testing Among Alternatives

The primary criterion for choosing design alternatives was first and foremost to achieve the functional requirements specified in Section 2.7, Requirements and Specifications. Secondary design criteria were price, hardware/software support, and ease of implementation.

3.4 Design Alternatives Verification Plan 1

The primary choices for design alternatives will be implemented first at the software level for a single camera station. The required functions will be tested under a variety of conditions to reflect real-world uncertainties. Once the software has been verified to meet the specified requirements, it will be installed on a single camera, and then it will be tested again for compatibility with the hardware.

Once a single camera has been shown to function as desired, multiple cameras will be linked together to form a single functioning station. The station will be tested to verify that all devices work together as a single cohesive unit. Data from the multiple devices will be sent to one "master device" within the group.

The single working station will then be duplicated and spread over the production line where space allows. The "master devices" from each station will then be networked together, and data will be concentrated on a "master device of master devices".

Both the system and the test plan have been designed to mimic a bureaucratic, or pyramidal, structure. This was incorporated into the design to encourage modularity throughout the system: single devices report to a "master device", and the first tier of master devices then reports to its own master device. This approach allows each device to be responsible for collecting and processing its own data, which keeps centralized processing to a minimum and maximizes the efficiency of the system.

3.5 Design Alternatives Validation Plan 2

The chosen design alternatives will be presented in stages to the customer. After a single camera is working as desired, the collected information will be shared with the customer in order to address customer questions or concerns. This method allows the customer to be involved in the project as it progresses and to voice their thoughts, opinions, and expectations throughout the design process, while giving the design team the time and information necessary to deliver a product that will impress the customer. This production method allows for continual improvement and for customer satisfaction to be verified throughout each stage of development.

After the single camera's data is presented to the customer, the design team will take customer concerns into consideration and redesign the initial prototype as needed. Customer validation will occur again at the single-station and multi-station design checkpoints.

3.6 Risk Mitigation

The current production method at GM has vehicles moving through the entire production cycle without being checked for surface mutilations, which makes the time to detect a problem unnecessarily long. By lowering the time between a problem occurring and that problem being detected, the system allows its users to make more informed decisions about the potential sources of error, as well as about what to do with the defective product. This represents only one half of risk mitigation, however; the other half involves preparing for potential failures of the designed system. As with all complex system design projects, this system has various stages in its development where it may fail to meet customer needs. However, many of these failure modes can be prepared for in advance by estimating the different ways, and levels of severity, in which the system may fail. The scope of this project extends only to observing the environment and reporting that data to a user; since this involves no moving parts, high temperatures, or high voltages, any failures of the system are unlikely to impact human health and safety. Therefore, the most dangerous failure mode of the system would be improper or unsecured placement of system components. This aspect of the system will be subcontracted out by the customer to professional installers, ensuring that all relevant workplace safety standards are met.

The second most severe failure mode would be a failure of the system to reduce the number of surface mutilations at the assembly facility. If all components and subsystems are functioning as intended and the data produced by the system still fails to result in any meaningful reduction of surface mutilations, that would mean the system was not being utilized to its maximum potential. This failure could result from poor communication between the system design team and the users of the system, or from poor planning at a high level of system design. The design team will minimize these risks by maintaining contact with the users of the system throughout the design process and collecting details on how the users plan to use the system, how they currently track and eliminate sources of surface mutilation without the aid of the system, and how they plan to change their quality assurance procedures after acquiring the system.

Finally, the least severe failure mode of the system would be the failure of a system component. Whether the component is hardware or software, design alternatives have been given for every functional requirement that the system must achieve. This allows the design team either to rework a malfunctioning part of the current system or to change components and proceed with a different hardware/software solution. These failure modes apply to overarching, high-level system design concerns; for more information about specific component-level failures, refer to Section 5.2, Failure Mode Effects Analysis (FMEA).

4 Component Definition and Planned Build

The build of the project will take place in four stages. First, a single camera will be connected to a single microcontroller. These components will communicate via code developed by the design team. After the design team verifies that both the hardware and software are working together as intended, they will have a user verify the quality of the images captured. If the images captured by the system are not of sufficient quality, as determined by human inspection, the design team will redesign using an alternative they believe will perform better.

Once the single camera/microcontroller combination is validated by the customer, the design team will begin networking the camera/microcontroller combinations together to form complete camera stations capable of taking photographs from all sides of a vehicle. These complete stations will have detection devices to alert the system to a vehicle's presence and to capture the vehicle's identifying barcode. This data will be associated with the images collected from individual cameras and then forwarded on to a central station for user interaction.

After the verification of a single networked station, the design team will design a central station to manage a database of vehicle information, including vehicle identification numbers, times, locations, and image data. This central station will be where all networked stations forward data to. The central station will also be the point of access for users to interact with system data.

Finally, after all components are verified to be working and validated by the customer to meet their expectations (as defined by the level 4 and 5 functional requirements in the design decomposition of Section 3), the individual components of the system will be installed in the production facility. Testing the system in stages allows the design team to verify each piece works as desired in a controlled environment. By physically installing the system on site only after all of these devices are known to be working, any malfunctions in the system after installation can safely be assumed to have resulted from the installation process rather than from a flaw in the design of the system.

4.1 Planned Layout

Figure 15: Planned Layout

Figure 15 above shows the layout of the Pi cameras and how they will be positioned around the vehicle. There will be six cameras taking photos of the vehicle from six different angles: three of the cameras will capture one side of the vehicle, and the remaining three will capture the opposite side. Aside from the cameras, there will be a barcode scanner constantly scanning for the PVI barcode on each vehicle. Once this barcode is detected, the Raspberry Pi connected to the barcode scanner will communicate with the six Pi cameras, and images will be captured. The wiring diagram for this setup is explained below in Section 4.2.
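One way the control (barcode) Pi could signal the six camera Pis is sketched below: after a PVI is scanned, it opens a short TCP connection to each camera Pi and sends a capture command carrying the PVI. The camera Pi addresses, port, and message format are placeholders; the actual code in Appendix D may differ.

```python
# Sketch of the control Pi telling the six camera Pis to capture once a PVI
# has been scanned. Addresses, port, and message format are placeholders.
import socket

CAMERA_PIS = ["192.168.1.11", "192.168.1.12", "192.168.1.13",
              "192.168.1.14", "192.168.1.15", "192.168.1.16"]
TRIGGER_PORT = 6000

def trigger_capture(pvi):
    for host in CAMERA_PIS:
        with socket.create_connection((host, TRIGGER_PORT), timeout=2) as sock:
            sock.sendall(f"CAPTURE {pvi}\n".encode())   # each camera Pi tags its image with the PVI

if __name__ == "__main__":
    trigger_capture("123456")
```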


4.2 Wiring Diagram

Figures 16, 17, and 18 below show the wiring diagrams for the components and their wired connections. Each camera Pi will contain its own SD card along with a Pi camera connected via the supplied ribbon cable; the port on the Pi board is labeled "CAMERA." Each Pi board will be connected to a power supply from the port labeled "PWR" to a power strip or a wall outlet via the provided power cable. The control Pi will be connected to the barcode scanner via the "GPIO" port. Figures 17 and 18 show more detailed views of the connections and the respective components.

Figure 16: Single Station Setup


Figure 17: Camera Pi Setup


Figure 18: Control Pi Setup


4.3 Central Storage User Interface

The user interface shown below will allow the user to search by the PVI of the vehicle in question. Once the PVI has been entered, the user has the choice of seeing one specific angle at all six locations, or all of the pictures captured of the vehicle at a single station. All matching images will then be shown below the search controls, and the user will be able to inspect each image for surface mutilations. A left click expands an image, scaling it to the maximum size of the screen for ease of inspection.

Figure 19: Designed Graphical User Interface
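A sketch of the search logic behind the interface described above is shown below: given a PVI, it returns either one angle across all stations or all angles at one station by filtering file names. It assumes images are named in the JobNumber,StationNumber,AngleNumber.PNG format described in Section 5.1 and is not the team's actual UI code.

```python
# Sketch of the search behind the UI: filter stored images by PVI, and
# optionally by station or angle. The naming convention is assumed.
import os

def find_images(image_dir, pvi, station=None, angle=None):
    matches = []
    for name in os.listdir(image_dir):
        base, ext = os.path.splitext(name)
        if ext.upper() != ".PNG":
            continue
        parts = base.split(",")
        if len(parts) != 3 or parts[0] != pvi:
            continue
        if station is not None and parts[1] != str(station):
            continue
        if angle is not None and parts[2] != str(angle):
            continue
        matches.append(name)
    return sorted(matches)

# e.g. one angle at all six stations:  find_images("images", "123456", angle=2)
# e.g. all six angles at one station:  find_images("images", "123456", station=4)
```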


5 Test Plan

Create a data structure for holding images with appropriate data fields

• Does the data structure have the ability to hold all required data fields?

Test that the data structure code can store the incoming data and sort it by date, job number, and assembly line.

• Can the data structure still function without certain data fields?

Test whether the data structure code can handle all image and text files without issue.

Add appropriate hardware to utilize data structure

• Does the data structure function as intended with attached hardware?

Relevant FR(s): FR5.1.1, FR5.1.2, FR5.1.3, FR4.1.3, FR4.1.2

Description: Test whether the software receives the image, date, job number, and assembly number from the Raspberry Pi and sends everything to the centralized computer's hard drive. The data structure code also requires a short run time in order to handle a large amount of data.

Implement a single camera test station

• Does the system acquire and handle the data automatically?

Relevant FR(s): FR4.1.1, FR4.1.2, FR4.1.3, FR4.1.4, FR3.1

Description: Verify that the Raspberry Pi vision device takes a picture every time a panel on the assembly line arrives. The barcode system must also communicate with the Raspberry Pi so that it is ready to take a picture.

Implement a multi camera test station

• Do the different cameras work together as intended?

Relevant FR(s): FR4.1.1, FR4.1.2, FR4.1.3, FR4.1.4, FR3.1

Description: Test that, at each station, the different cameras take pictures from different angles with acceptable resolution at the same time. Each camera must be connected to a Raspberry Pi and be able to send its picture to the Raspberry Pi.


Implement a plant-wide solution

• Do the different multi-camera stations work together as intended?

Relevant FR(s): FR3.1, FR3.2, FR3.3, FR2.2

Description: Test that each station is ready to take pictures when a panel arrives at the buffer station, and that the Raspberry Pi can send the pictures (covering every angle) to the centralized computer.

5.1 Test Procedure

Camera Test: After implementing the camera capture class, the team tested whether the camera class was working. The camera module is connected to the Raspberry Pi, and the tester ran sample code in which the Pi camera captures one picture and saves it to the local drive.
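A minimal sketch of this kind of single-capture test, using the standard picamera library, is shown below; the resolution and output path are arbitrary choices for the example rather than the team's actual test code.

```python
# Sketch of the single-capture camera test: save one image to the local drive.
# Resolution and output path are placeholders.
from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (1920, 1080)
sleep(2)                              # give the sensor time to adjust exposure
camera.capture("/home/pi/test_capture.png")
camera.close()
```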

File Transmitting Test: After implementing the file transmitting class, the program looks for the specific .PNG images on the Raspberry Pi module and sends all station/angle images to the centralized PC. The design team tested the code by installing both the client and server programs on one PC (with the client and server in different root directories to make the test valid), running the client code and the server code in two separate terminals, and checking whether new image files were generated under the client root (the server sending images to the client).
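The sketch below shows the receiving end of such a loopback test, assuming the storage side listens as a TCP server and the sender connects as a client using the length-prefixed header format from the client sketch in Section 2.1; the roles could equally be reversed, as in the test described above.

```python
# Sketch of the receiving side of the loopback test: accept one connection at a
# time and write the incoming image to disk, using the assumed header format.
import json
import socket
import struct

def recv_exact(conn, n):
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed early")
        buf += chunk
    return buf

def serve(host="0.0.0.0", port=5000):
    with socket.create_server((host, port)) as server:
        while True:
            conn, _ = server.accept()
            with conn:
                header_len = struct.unpack("!I", recv_exact(conn, 4))[0]
                meta = json.loads(recv_exact(conn, header_len))
                image = recv_exact(conn, meta["size"])
                name = f"{meta['job']},{meta['station']},{meta['angle']}.PNG"
                with open(name, "wb") as f:
                    f.write(image)

if __name__ == "__main__":
    serve()
```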

User Interface Test: After implementing the UI application, the design team tested the application. The test places pictures named in the correct format (JobNumber,StationNumber,AngleNumber.PNG) under the UI root directory, then opens the UI and searches for the images to check whether they are all displayed.

Barcode Scanner Test: After implementing the barcode scanner class, the design team tested the code to verify that it works. The test procedure is to run the barcode scanner program, scan any barcode, and verify that the barcode number read is correct.
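A sketch of this kind of scanner test is shown below. It assumes a scanner that presents itself as a keyboard device and "types" the code followed by Enter; the QR reader used in the as-built system works differently, so this is only an illustration of the verification step.

```python
# Sketch of a scanner verification loop, assuming a keyboard-wedge scanner
# that sends the code as keystrokes ending with Enter.
def scanner_test():
    print("Scan a barcode (Ctrl+C to stop)...")
    while True:
        code = input().strip()
        if code:
            print(f"Scanned PVI: {code}")

if __name__ == "__main__":
    try:
        scanner_test()
    except KeyboardInterrupt:
        pass
```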


5.2 Failure Mode Effects Analysis (FMEA)

Figure 20: FMEA Form Part 1


Figure 21: FMEA Form Part 2


Figures 20 and 21 show the Failure Mode Effects Analysis (FMEA). Rows in gray, green, and orange represent functional requirements from system design levels 3, 4, and 5, respectively (see the decomposition levels in Section 3.1). The FMEA lists each functional requirement and its corresponding potential failure modes, potential failure effects, potential failure causes, and some of the actions and controls that may be taken to prevent a design failure. The last column gives the Risk Priority Number (RPN) of each functional requirement; the higher the RPN, the more urgently the related functional requirement should be addressed.


5.3 Physical Testing and Analysis

Camera Testing and Validation: The camera class is verified as working, and defects in the captured image are clearly identifiable. A sample test image is shown below:

File Transmitting Test and Validation: The image file can be sent from the server to the client in under one second, which is less than a panel's waiting time at GM's plant (the picture must be taken and sent to the centralized computer before the next panel arrives at the station). The core program code and file locations are shown below to demonstrate the working program:

User Interface Test and Validation: The UI displays images accurately; a screenshot of the UI application is shown below:

Barcode Scanner Test and Validation: The barcode class works correctly. It can scan a barcode and display it in numerical format, which allows the developer to name the image file. An image showing the barcode and the barcode number is below:


6 Cost

6.1 Bill of Materials

Figure 22 below shows the raw cost of all the materials needed for the vision system. The prices for the Raspberry Pi components were found on the Raspberry Pi website. All other costs were slightly overestimated to allow room for modification or alternative design selection.

Figure 22: Bill of Materials


6.2 Material and Cost Analysis

Component Cost Projected for a Single Station:

There will be seven Raspberry Pi units at each station, along with seven power supplies and seven SD cards, and each Raspberry Pi will be enclosed in its own case for protection. Only six cameras are needed, as one of the Raspberry Pi units will be used to collect the barcode information and trigger the other Raspberry Pi cameras. A barcode scanner will also be needed at each station to allow for the collection of the PVI and the triggering of the six cameras. All of these components add up to approximately $1,056.36.

Figure 23: Single Station Cost

The cost of the central data storage station is approximately $1,400, covering a PC, a 12 TB HDD, a keyboard, a mouse, and other minor components. This estimate is generous, allowing some choice among preferred PC and data storage brands. For the projected six stations, the total cost of the system components and installation is figured at $9,738.16. Coming in under budget allows for design modifications and higher installation costs if necessary.


7 Project Plan and Timeline

Figure 24 below shows the timeline of the development of the vision system over the fall and spring semesters.

Figure 24: Project Timeline

Important Milestones:

• Single Camera Testing

– By January 19th, the plan is to have a single camera set up with all hardware and software connections, ready for testing.

• Multi-Camera Station Testing

– By February 19th, the plan is to have at least one station completed and ready for testing, with all required cameras set up.

• Networked Station Testing

– After completing the design test for one station setup, the design should be finalized by March 2nd with all alternatives chosen.

• Plant Installation

– The project plan and timeline chart shows that the design should be ready for installation at all stations by March 9th.

• Complete System Installation

– The design implementation should be done for all stations and verified by the customer by April 20th at the latest, so that there is enough time in case anything needs to be changed or added to the project.


8 Design Changes

Some aspects of the design have changed to improve efficiency and to solve problems that were emerging. The first change is that the team decided to reduce the number of Raspberry Pis from seven to six. The reason is that the developers decided to have each Raspberry Pi send its images directly to the PC instead of sending them over Bluetooth to a master Pi, which then forwarded them to the PC. This reduces the workload on the master Pi, and if the master Pi has an issue, the other Pis can still deliver their images to the PC. The second change is on the client side: the design team decided that the centralized PC will run six terminals, one receiving images from each of the six stations, so that each station communicates with the centralized PC independently. The benefits are a shorter queue time and the fact that if any one Raspberry Pi fails, the others keep working.
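The sketch below illustrates the revised client side as one process with one listener per station port, which is a lighter-weight stand-in for the six separate terminals described above; the port numbers and the placeholder receive loop are assumptions for the example.

```python
# Sketch of one receiver per station on the centralized PC, so a failed station
# does not block the others. Ports and the receive loop are placeholders.
import socket
import threading

STATION_PORTS = {1: 5001, 2: 5002, 3: 5003, 4: 5004, 5: 5005, 6: 5006}

def station_receiver(station, port):
    # Placeholder loop; the real handler would store images as in Section 5.1.
    with socket.create_server(("0.0.0.0", port)) as server:
        while True:
            conn, addr = server.accept()
            with conn:
                data = conn.recv(4096)
                print(f"station {station}: {len(data)} bytes from {addr}")

if __name__ == "__main__":
    for station, port in STATION_PORTS.items():
        threading.Thread(target=station_receiver, args=(station, port),
                         daemon=True).start()
    threading.Event().wait()   # keep the main thread alive
```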


9 System as Built

The system as built varies slightly from the designed system. The as-built system uses a real-time QR code scanner, as the team was unable to secure proper barcode scanner hardware in time for this project. Figure 25 below shows a mock-up of the system: three camera Pis (top left) capture images of one side of the door panel when triggered by the barcode Pi (in the blue 3D-printed box). All Pis are connected to each other wirelessly via Wi-Fi; the only wire to each device is 5 V power.

Figures 26 and 27 below show the 3D-printed case designed for the system. This case will prove more resilient to real factory conditions than the previously chosen units. The case is completely sealed, with only a power connection exposed to outside conditions, and even this is sealed with epoxy.

Figure 25: Pi Camera Setup


Figure 26: 3D-Printed Raspberry Pi Case


Figure 27: 3D-Printed Raspberry Pi Case


10 Moving Forward

While the system has been designed to account for as many errors as possible, it is unrealistic to assume it will work perfectly under all possible factory conditions indefinitely. For this reason, the design team makes the following suggestions before implementation.

1. The QR code scanner should be replaced with a barcode scanner that imports the barcode as an integer. The code for this has been sectioned off in a comment block, separate from the other code, with recommendations on how to accomplish that task.

2. The user of the system should enable SSH (Secure Shell) for remote access to the devices. This will allow a user to control and reset devices without being physically present at the device. Regular resetting of devices could also improve system efficiency.

3. Future stations may be implemented solely by changing the IP address and station number variables in the source code. The code provided has detailed instructions and clear indications of where and how to accomplish this.

4. Having the Pis boot directly into the source code can be accomplished by altering the rc.local file (on Raspbian this file is normally /etc/rc.local). Open it with sudo nano from the terminal and add the line sudo python /home/pi/Desktop/CameraPi.py, changing the file path or file name as necessary for either the barcode or the camera Pis.


11 Conclusions

The goal of the project is to design a camera vision system that helps GM Fort Wayne Assembly reduce vehicle surface defects throughout the production process. The camera system shall enable a user to view any vehicle from six (user-specified) different angles at six (user-specified) different locations in the facility. GM's current quality control procedure involves identifying surface mutilations at the end of the production line and moving backward down the line to find the sources of those mutilations by consulting each line worker along the way. Surface mutilations may result from improper handling of tools or from carelessness on the part of the line worker. This method of finding sources of surface mutilation leaves the quality control department with too many potential sources to investigate and does not make efficient use of their time.

Based on the team's use case analysis, it is projected that the vision system will reduce the amount of time Quality Control spends investigating sources of surface mutilations by as much as 84%. Instead of searching the entire production line for sources of defects, the system will require the Quality Control department to search at most one sixth of the production line, since a mutilation can be localized to the segment between two adjacent camera stations; searching roughly 1/6 of the line instead of all of it corresponds to a reduction of about five sixths, or approximately 84%.

The next steps in the project, as shown in the project timeline (Figure 24), are to develop software and integrate the software solutions with the chosen hardware. With contingencies in place for various hardware failures, moving forward with the project and implementing the system design in iterative stages should result in a swift installation.


12 Lessons Learned

Over the course of the vision system design and construction, there were numerous issues and setbacks. These issues sometimes forced a different approach to the design. By examining the sources of these issues and how the problems could have been avoided, the design team learned some lessons the hard way. Some of these lessons are mentioned below.

1. The first and most important lesson learned is to keep safety in mind when dealing with a highly populated work area. When interacting with a place that has heavy foot traffic and heavy machinery, an implemented design must achieve its goal while maintaining a safe environment.

2. The second lesson learned is that, when working on a project, coupling should be kept to a minimum to avoid unwanted interfaces with other systems. As coupling becomes more prevalent, it becomes difficult to make changes to the design without affecting other aspects of the design.

3. The third lesson learned is the distinction between design validation and design verification, and the importance of that distinction. A system can be verified to function as designed but not necessarily validated by the client; in this project, validation is the more important of the two, since it is concerned with the system meeting the customer's specifications and requirements.

4. The fourth lesson learned is to maintain active communication with the system users and to follow up with them regarding system design updates, to make sure that what is being designed is what they are actually looking for.

5. The fifth lesson learned is that it is important to test every single physical solution during system implementation before moving on to the next one, in order to avoid a system installation error.

6. The sixth lesson learned is that it is crucial to communicate effectively with the customer early in the development of the project to avoid requirements creep. As the team began forming ideas about how to design and implement the vision system, the GM contacts would further elaborate on their wants and needs, which added requirements and complexity to the project.


13 References

High-Speed Inspection System Finds Defects in Steel
http://www.vision-systems.com/articles/print/volume-21/issue-11/features/high-speed-inspection-system-finds-defects-in-steel.html

A Machine Vision Quality Control System for Industrial Acrylic Fibre Production
https://link.springer.com/content/pdf/10.1155/S1110865702204114.pdf

SSH (Secure Shell)
https://www.raspberrypi.org/documentation/remote-access/ssh/

How do I send an image from raspberry pi (Linux) to a windows PC via TCP/IP
https://www.codeproject.com/Questions/1089213/How-do-I-send-an-image-form-raspberry-pi-linux-to

Raspberry Pi Information
https://www.adafruit.com/raspberrypi?gclid=CjwKCAjwq_vWBRACEiwAEReprNLtMpy91THQwolSWZ8-auPC7ak7HTTVUY-Gaex4n3E6lTG5WPswHxoC5JkQAvD_BwE


14 Appendix A - Design Process Decomposition

Functional Requirements and Physical Solutions

FR1.1 Improve GM's Direct Run Rate (DRR)
PS1.1 Process of finding and reducing surface mutilations
FR2.1 Reduce surface mutilations
PS2.1 To be determined by customer
FR2.2 Find sources of surface mutilations
PS2.2 Database of mutilation time and location
FR3.1 Collect Data
PS3.1 Modular Sensor Network
FR3.2 Access Data
PS3.2 GUI (No Command Line)
FR4.1.1 Capture image of scratches
PS4.1.1 High Resolution Camera
FR4.1.2 Capture physical data (Date/Time/Location)
PS4.1.2A Calendar/clock stored on device
PS4.1.2B Calendar/clock pulled from web server
FR4.1.3 Capture Vehicle/Job Number
PS4.1.3 Automatic Barcode Scanner
FR4.1.4 Capture image of dents/dings
PS4.1.4A 3D Camera
PS4.1.4B Angled high-res camera to show dings
PS4.1.4C 2D Camera with projected grid
FR5.1.1 Capture images automatically
PS5.1.1A Camera triggered by barcode sensor
PS5.1.1B Camera triggered by proximity sensor
FR5.1.2 Upload/export images automatically
PS5.1.2 Hardware/software integration
FR5.1.3 Zoom to maximize surface data per pixel
PS5.1.3 Lens
FR4.2.1 Display all data by default
PS4.2.1 Well-organized presentation of data
FR4.2.2 Searchable by job number/date/location
PS4.2.2 Search bar to pull specific data
FR3.3 Store Data
PS3.3 Centralized Data Storage
FR4.3.1 Store at least 8 days worth of collected data
PS4.3.1A HDD
PS4.3.1B SD Card
PS4.3.1C Solid State Drive
FR4.3.2 Purge data after specified time
PS4.3.2 Data management program
FR4.3.3 Export mutilated truck data
PS4.3.3A USB Interface
PS4.3.3B Bluetooth
PS4.3.3C Wi-Fi
FR3.4 Transmit Data
PS3.4A Ethernet or wired connection
PS3.4B Bluetooth
PS3.4C Wi-Fi


15 Appendix B - List of Tables

List of Tables

1 Document Revision History


16 Appendix C - List of Figures

List of Figures

1 High-Speed Inspection GUI
2 Use Case Diagram
3 System Boundary Diagram
4 System Design Decomposition Overview
5 Decomposition Levels 1 and 2 Highlight
6 Decomposition Levels 1 and 2
7 Decomposition Level 3 Highlight
8 Decomposition Level 3
9 Decomposition Levels 4.1 and 5.1 Highlight
10 Decomposition Levels 4.1 and 5.1
11 Decomposition Level 4.2 Highlight
12 Decomposition Level 4.2
13 Decomposition Level 4.3 Highlight
14 Decomposition Level 4.3
15 Planned Layout
16 Single Station Setup
17 Camera Pi Setup
18 Control Pi Setup
19 Designed Graphical User Interface
20 FMEA Form Part 1
21 FMEA Form Part 2
22 Bill of Materials
23 Single Station Cost
24 Project Timeline
25 Pi Camera Setup
26 3D-Printed Raspberry Pi Case
27 3D-Printed Raspberry Pi Case


17 Appendix D - Raspberry Pi Code

Camera Raspberry Pi Code:


Barcode Reading Raspberry PI Code:


Central Data Storage Code:
