
Personal brief



These are the folio sheets for the personal brief project, showing all of my work.


Page 1: Personal brief

Personal Brief... The Idea:

For this unit we will be creating a personal project brief, which means we are able to produce a design and brief of our own choice. After an introduction to the unit and some brainstorming, I came up with some ideas that I would like to work with in this project.

After watching a video documentary on the EyeWriter by Mick Ebeling, I was inspired to create something for this project that uses gestures to create visualisations, which could then be developed into a method of communication for people whose disabilities prevent them from communicating. I chose this direction because of the example given in the video: graffiti artist Tempt suffers from paralysis and is unable to move any limb in his body, but the EyeWriter glasses enable him to carry on his artwork using his eyes and also allow him to communicate with his family and friends, something he had not been able to do for many years. What I would like to achieve in this project is a design that uses gestures to create visuals.

What is Gesture Recognition?

As part of my initial research for this project I began looking into gesture recognition, which gave me a better understanding of the area and of what would be needed to create this kind of method.

Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Many approaches have been made using cameras and computer vision algorithms to interpret sign language. However, the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques.

Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs (graphical user interfaces), which still limit the majority of input to keyboard and mouse.

Gesture recognition enables humans to interface with machines (HMI) and interact naturally without any mechanical devices. Using the concept of gesture recognition, it is possible to point a finger at the computer screen so that the cursor will move accordingly. This could potentially make conventional input devices such as mice, keyboards and even touch-screens redundant. Gesture recognition can be conducted with techniques from computer vision and image processing.

Page 2: Personal brief

Influences/Research...

The EyeWriter is a low-cost eye-tracking apparatus and custom software that allows artists with paralysis resulting from Amyotrophic Lateral Sclerosis (ALS) to draw using only their eyes. The project is an ongoing collaborative research effort to empower people who are suffering from ALS with creative technologies. Members of the Graffiti Research Lab, Free Art and Technology (FAT), Openframeworks and The Ebeling Group communities teamed up with Tony Quan, aka TEMPTONE, a legendary LA graffiti writer, publisher and activist. Tony was diagnosed with ALS in 2003, a disease which has left him almost completely physically paralyzed… except for his eyes. This international team is working together to create a low-cost, open source eye-tracking system that will allow ALS patients to draw using just their eyes. The long-term goal is to create a professional/social network of software developers, hardware hackers, urban projection artists and ALS patients from around the world who are using local materials and open source research to creatively connect and make art with their eyes.

The EyeWriter:

During my research I came across some gesture recognition work produced by designer Theo Watson. The reason I chose to look at his work is his use of gesture-controlled visualisation using the Kinect. I would like to incorporate this method into my own design, as it portrays what I would like to achieve from this project: creating simple gestures that produce interesting visuals.

Kinect Russian Roulette is a speed project by Theo Watson, which allows you to play Russian Roulette with just your hand and a Kinect.

This is an example of some work produced by Theo Watson which I thought was very interesting, and whose style and method I would like to incorporate within my own final piece, as it consists of basic hand gestures for which he has created visuals.

Here is another example of some work produced by Theo Watson using the Kinect and gesture recognition; this video shows how arm and hand gestures are used to control a visual display of a “Funky Bird”, making it react and respond to the gestures.

Overall I would say that his work is very inspiring and influential as it relates to the outcome that I would like to achieve for this project.

Theo Watson:

Page 3: Personal brief

Experimentation/Development... Long Exposure Photography:

This is a method I looked at during my research which involves using a long-duration shutter speed to sharply capture the stationary elements of images while blurring, smearing, or obscuring the moving elements.

When a scene includes both stationary and moving subjects (for example, a fixed street and moving cars or a camera within a car showing a fixed dashboard and moving scenery), a slow shutter speed can cause interesting effects, such as light trails.
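The light-trail effect can also be imitated digitally: compositing video frames with a per-pixel maximum keeps every bright position a moving light passed through, much as a long exposure accumulates it on the sensor. A minimal sketch of that idea in Python, where each "frame" is a plain 2D grid of brightness values rather than real camera input:

```python
def light_trail(frames):
    """Per-pixel maximum across frames: a bright moving point leaves a trail."""
    trail = [row[:] for row in frames[0]]  # copy the first frame
    for frame in frames[1:]:
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                if v > trail[y][x]:
                    trail[y][x] = v
    return trail

# A bright spot moves one pixel to the right between two frames;
# the composite keeps 255 at both positions it visited.
frames = [
    [[0, 0, 0], [0, 255, 0], [0, 0, 0]],
    [[0, 0, 0], [0, 0, 255], [0, 0, 0]],
]
print(light_trail(frames)[1])
```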

Here I have begun doing some of my own experimentation using the long exposure photography method I looked at. To gain these visual outcomes I used a laser pen/LED light to create different shapes, text and motions, which I captured using a camera.

Although this method is completely different from my idea and may not seem significant at this stage, the visual effect gained from this technique and the flow of the captured light portray the type of visuals I would like to achieve, but created using gestures rather than photography.

However, the reason I chose to show this method is that I am going to use it as part of my experimentation, and as a process that can be incorporated into the build-up and development of my final design idea.

Page 4: Personal brief

Experimentation/Development...

After receiving some useful feedback, I decided to begin designing a type of language, a series of gestures, which I will be able to use and produce visualisations for. I will keep it simple to start with by creating gestures for words such as YES, NO and MAYBE; this will enable people to understand what each visual represents.

Therefore I began looking at the gestures used for the MacBook trackpad/mouse, which enable you to achieve different functions and controls for each gesture.

The reason I chose to look at this is that these gestures form a type of language created for the user, which is unique and relates to what I want to create; I would like to create some gestures of my own using a similar style and method.

MacBook User Gestures:

Philip Worthington:

As part of some further research and development work I came across an example of some work produced by Philip Worthington called Shadow Monsters.

I would say that this piece is very inspiring as it makes the gestures and shadows formed fun and interesting to look at, which relates to my idea and what I would like to portray; I would also like to have this element of fun within my own design and make the gestures and visualisations interesting to look at.

Sign Language:

I also decided to look at sign language, as it is a language created using gestures.

Sign language is a language which, instead of acoustically conveyed sound patterns, uses visually transmitted sign patterns (manual communication, body language) to convey meaning—simultaneously combining hand shapes, orientation and movement of the hands, arms or body, and facial expressions to fluidly express a speaker’s thoughts.

The reason I chose to look at this is that sign language is a unique language created to help people communicate using gestures, and is recognised around the world.

Page 5: Personal brief

Design Ideas/Concepts... Sketches/Ideas:

Here I have begun doing some quick sketches/concepts for the gestures I will be using in my final design. I have kept the gestures and functions fairly basic so that they are easily picked up and responded to with a visual display, and because the main theme of this project is to let people who have difficulty communicating and expressing their feelings interact and use gestures to communicate. I therefore thought it would be counterproductive if the gestures were too complex.

Page 6: Personal brief

Design Ideas/Concepts... Processing/Arduino:

Here I have started working on the final piece for this project using Processing and Arduino. To begin with, I put together some code in Processing to create a gesture recognition function, using the webcam on my laptop to pick up the gestures, with code creating a visual display representing the movement. For the webcam to pick up the gestures it needed something to respond to, so I decided to use a small LED light; this created a circle on the page, which then moved around depending on the gestures I was using.
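The tracking step boils down to finding the brightest pixel in each webcam frame, which is the spot the LED occupies. A minimal sketch of that logic in Python, with a plain 2D list of brightness values standing in for real camera pixels (the function name and example frame are illustrative, not the actual project code):

```python
def brightest_point(frame):
    """Return (x, y) of the brightest pixel in a 2D brightness grid (0-255)."""
    best_xy, best_val = (0, 0), -1
    for y, row in enumerate(frame):
        for x, val in enumerate(row):
            if val > best_val:
                best_val, best_xy = val, (x, y)
    return best_xy

# A tiny 4x4 "frame" with the LED spot at column 2, row 1:
frame = [
    [10, 12, 11, 10],
    [10, 14, 250, 12],  # bright LED spot
    [11, 10, 12, 10],
    [10, 11, 10, 13],
]
print(brightest_point(frame))  # -> (2, 1)
```

In the actual Processing sketch the same scan would run over the camera's pixel array each frame, and the circle would be drawn at the returned coordinates.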

Page 7: Personal brief

Processing/Final Piece... Processing/IR EyeToy:

However, during my experimentation with this method I realised that other sources of light were also being picked up by the webcam, causing irregular patterns that were not coming from my gestures. I then thought of a way to prevent this, using an EyeToy camera modified to pick up infrared signals only. With an infrared bulb connected through an Arduino board and attached to my hand, I applied the gestures I created for Left and Right and produced a visual display that tells the user whether their gesture was Left or Right.
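The Left/Right decision itself can be made from how the tracked point's horizontal position changes over the course of a gesture. A minimal Python sketch of that idea; the threshold value and function name are my own assumptions for illustration, not the project's actual code:

```python
def classify_gesture(xs, threshold=40):
    """Classify a swipe from a sequence of tracked x positions (pixels).

    xs: x coordinates of the IR point, one per frame, oldest first.
    Returns "LEFT", "RIGHT", or "NONE" if the movement is too small.
    """
    dx = xs[-1] - xs[0]  # net horizontal movement over the gesture
    if dx > threshold:
        return "RIGHT"
    if dx < -threshold:
        return "LEFT"
    return "NONE"

print(classify_gesture([120, 180, 250, 300]))  # -> RIGHT
print(classify_gesture([300, 250, 180, 120]))  # -> LEFT
```

The threshold stops small jitters in the tracked point from being misread as deliberate gestures.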

After completing this method in Processing, I will now create visuals for the gestures, so that the final piece can be fun and interesting, with more to look at than the single circle I used in my experimentation.

Page 8: Personal brief

The Final Piece...

Gesture Controlled Visualisation:

Using Processing I have created the visual design for my final piece. The design consists of particles that create a sense of motion as they are controlled by the gestures. I chose vibrant colours as they are more attractive and eye-catching, giving the user a sense of their interaction and of what their gestures are achieving; another reason I chose this type of visual display is that it is fun and playful, making it more enjoyable for the user. After my research and development I decided to experiment with different methods of gesture recognition, applying gestures using just the hand, an LED light, and infrared, to see how the outcomes varied and how each method's visual display differed.
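The particle behaviour described here can be sketched as each particle accelerating toward the tracked gesture point, with damping so the motion settles rather than oscillating forever; this is one common way such particle systems are built in Processing. A minimal Python sketch, where the constants are illustrative rather than the project's actual values:

```python
class Particle:
    """One particle that chases the tracked gesture point."""

    def __init__(self, x, y):
        self.x, self.y = float(x), float(y)
        self.vx = self.vy = 0.0

    def update(self, tx, ty, pull=0.05, damping=0.9):
        # Accelerate toward the target point, then damp the velocity
        # so the particle spirals in and settles instead of overshooting.
        self.vx = (self.vx + (tx - self.x) * pull) * damping
        self.vy = (self.vy + (ty - self.y) * pull) * damping
        self.x += self.vx
        self.y += self.vy

p = Particle(0, 0)
for _ in range(200):      # feed the same tracked point every frame
    p.update(100, 60)
# after enough frames the particle has settled near (100, 60)
```

With many particles started at random positions and the tracked LED/IR point fed in as the target each frame, the swarm follows the gesture and produces the sense of motion described above.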

User Testing:

Hand Gesture Control

Here I decided to apply the gestures using just the hand, with no lighting attached, to see if the gestures would be recognised. Overall I would say that this method was interesting, as it allowed the user to gain a real sense of satisfaction as the gestures were picked up; however, other sources of light were also detected, causing a fair amount of irregular patterns at times.

LED Light Control

This is another example of my experimentation, which consisted of attaching an LED light to the hand and applying the gestures. I would say that this method was fairly controlled, depending on the environment and on making sure that other sources of light were not being recognised, as this had a slight effect on the visual display and the positioning of the particles.

Infrared Control

This method consisted of using an EyeToy camera that had been modified to pick up infrared signals only. Using an infrared bulb connected and attached to my hand, I applied the gestures. Overall I would say that this method was the most controlled, as the camera was focusing on the IR signal only, without any other interference.

Final Piece Video Editing

Using Adobe Premiere I put together my video footage, documenting my final piece, how it was created using code, and my user testing, which allowed me to see if the design was user friendly.