
VVTP-45, December 2010


Table of Contents

Nokia Research, p20 University of Tokyo, p29 Tokyo Metropolitan Univ, p34 Universitat Darmstadt, p54

Letter from the publisher: Will Apple enable Microsoft’s vision for the Tablet PC? by Mark Fihn 2

News from around the world 12

Mobile HCI, September 7-10, 2010, Lisbon, Portugal 22

SIGGRAPH, July 25-29, 2010, Los Angeles, California 29

Korea Display Conference, July 8-9, 2010, Seoul, South Korea 36

EuroITV Conference on Interactive TV and Video, June 9-11, 2010, Tampere, Finland 38

Graphics Interface Conference, May 31–June 2, 2010, Ottawa, Ontario 39

Conference on Human Factors in Computing Systems, April 10-15, 2010, Atlanta, Georgia 44

Interview with Adam Bogue from Circle Twelve 55

Putting the “Touch” in Multi-Touch: An in-depth look at the future of interactivity by Gary Barrett 59

Last Word: RIM gets serious about user interface …by Jin Kim 64

The Touch Panel is focused on bringing news and commentary about the activities of the companies and technologies related to touch screens for the displays industry. The Touch Panel is published electronically 10 times annually by Veritas et Visus, 3305 Chelsea Place, Temple, Texas, USA, 76502. Phone: +1 254 791 0603. http://www.veritasetvisus.com

Publisher & Editor-in-Chief Mark Fihn [email protected] Managing Editor Phillip Hill [email protected] Contributors Gary Barrett and Jin Kim

Subscription rate: US$47.99 annually. Single issues are available for US$7.99 each. Hard copy subscriptions are available upon request, at a rate to be determined based on location and mailing method. Copyright 2010 by Veritas et Visus. All rights reserved. Veritas et Visus disclaims any proprietary interest in the marks or names of others.

Touch Panel Veritas et Visus December 2010 Vol 5 no 5


Has Apple enabled Microsoft’s vision for the Tablet PC? Follow-up commentary…

by Mark Fihn

In December 2007, I wrote an article for the Touch Panel newsletter entitled Will Apple enable Microsoft’s vision for the Tablet PC? (published in Touch Panel #17/18, January 6, 2008). The enormous recent popularity of the iPad prompted me to re-read my article to see whether my commentary from three years ago was on target. After re-reading it, I realized that the discussion really spans more than a decade of Tablet PC history, during which time Apple went from being a struggling PC company to the envy of innovators in the PC, telecommunications, and music distribution industries – successfully bringing touch interfaces to the forefront of designs for all manner of electronic products.

My December 2007 article was itself derived from an article I wrote in December 2002, so I’ve decided to reprint the commentaries from 2002 and 2007, and add a few comments now from 2010. Enjoy!

In August 1999, when Microsoft announced ClearType, the company also announced plans to create its own tablet “for computing, communicating and reading electronic books”. In June 2000, Bill Gates demonstrated the very first prototype of the Tablet PC during the unveiling of Microsoft’s .Net strategy, and he showed a prototype again during his Comdex keynote in November 2000. Gates positioned the product as “a full-function Microsoft Windows operating system-based PC incorporating the convenient and intuitive aspects of pencil and paper into the PC experience”. On November 7, 2002, Microsoft launched the Windows XP Tablet PC Edition, an operating system designed specifically for pen-input computing.

Although pen-based computing was not particularly new, Microsoft argued compellingly that technology had advanced to a point that made the Tablet PC concept viable. Microsoft officials pointed to the significant improvements made since the early 1990s in CPUs (lower power, higher performance), LCDs (lower power, higher resolution), batteries (lower weight, higher capacity), digitizers (lower power, higher performance), memory, wireless LAN network infrastructure, etc. Microsoft also pointed to improvements in handwriting recognition, speech recognition and font readability (ClearType), and offered a “new UI designed for pen input and easy navigation”. The Tablet PC was one of the very first products for which Microsoft actually designed and built a hardware prototype itself, rather than relying on its OEMs. This is significant in that it forced the company to recognize that not all problems are hardware issues, and that occasionally its own software designs created problems.

One of the most significant aspects of Microsoft’s Tablet PC initiative has been the very widespread industry support that Microsoft assembled to develop the product concept. In addition to numerous system-level brands and OEM manufacturers (including HP, Toshiba, NEC, Fujitsu, and Acer as top-10 players in the notebook PC industry), Microsoft recruited support from a wide array of component, software, and applications providers. One summary suggested that more than 100 corporations had a significant part in the eventual rollout of the new OS. That said, one of the obvious problems Microsoft faced was that several key notebook PC brands were not supportive of the concept – most notably the then-market leader, Dell.

Back in December 2002, only five years ago, (now eight years ago), I wrote an article about the future potential of Tablet PCs. I was very much taken with the notion that Tablet PCs would quickly grow to a substantial portion of the PC market. I predicted at the time that in 2007, Tablet PCs would account for almost 40% of the notebook PC market. Hindsight is a great tool – although one not frequently used by market analysts – who tend to remind you of when they correctly predicted market trends while conveniently forgetting their forecasting gaffes. Although my market forecast for Tablet PCs was hugely over-optimistic, it’s valuable to consider the sorts of things that kept the Tablet PC from meeting my prior expectations. As such, this article will take a look at my 2002 efforts at prognosticating the future, with a commentary today, (and again today, three years further on).

>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<

Critics in the press have suggested many reasons why Tablet PCs will fail. In fact, if there is one area where Microsoft has been unsuccessful in promoting the Tablet PC concept, it has to be in the technology press. One of the most difficult challenges to the popularization of Tablet PCs will simply be to reverse the negative media attitude about them. A few examples of the negative buzz:

“Been There – Done That”. Perhaps the most common criticism of Tablet PCs is simply that they are not a new concept and have already been attempted in the market. Usually these critics point to the failures of products introduced 8-10 years ago, culminating with references to Apple’s Newton (which should more appropriately be an example of how PDAs, now called “smart phones”, have emerged to find marketplace success, rather than an explanation of why Tablet PCs will fail). More up-to-date critics point to two of the PC companies that did not introduce Tablet PCs in concert with Microsoft’s November 2002 rollout, and to their prior “failed” efforts to bring out pen-based devices within the past couple of years. IBM’s TransNote and Sony’s Slimtop Pen Tablet failed because they were not Tablet PCs, among other reasons. The argument suggests that because pen-input devices failed in the past, they will fail again this time. This argument is problematic:

o First, it’s rare that things are done perfectly the first time. Only by trial and error can new devices be improved to the point of “success”. One can easily point to Apple’s Newton as a failed product, as many analysts have done in recent days when writing about Tablet PCs, but perhaps it’s more sensible to say that the Apple Newton successfully introduced the PDA concept to the world, and that if any “failure” exists in relation to the Newton, it’s Apple’s failure to continue pursuing the product through to its eventual commercial success. OK, the iPhone is Apple’s follow-up, although Apple has conspicuously avoided the sort of pen-input design it created with the Newton. While it’s true that GRiD, Agilis, Go Corp, Eo, General Magic, Momenta, and others failed to create successful businesses with pen-input based devices, there are also many companies that do have successful pen-based efforts.

o Secondly, citing IBM’s TransNote and Sony’s Slimtop as failed products in this category overlooks the fact that these were novel devices created almost exclusively by those companies in hopes of finding some high-margin niche. Such niche market experiments rarely transform into mainstream market successes. The Tablet PC effort, however, is a broad industry-wide effort to bring the tablet-style form factor to the mainstream.

Neither the TransNote nor the SlimTop can really be compared appropriately to the batch of Tablet PCs that have been launched recently, and I think the biggest mistake of these two products is that IBM and Sony failed to take the lessons they learned from their niche-market design experiments and apply them to the emerging industry-wide tablet-style computing market. By the way, I believe that another reason the IBM and Sony devices were not successful is because both products used low-resolution displays. The TransNote used a 10.4-inch TFT LCD at 800x600 pixels (96 ppi), while the SlimTop used a 15.0-inch TFT LCD at 1024x768 pixels (85 ppi). I am convinced that displays with higher pixel densities are required to make pen-input devices successful.


Pictured on the left, IBM’s TransNote, although a portable computer, involved writing or drawing in one place while the image was displayed in a different place. Although I never tried it, several people I talked to advised that it was best to completely ignore the display, or even turn it off, when making inputs on the paper pad, so as to avoid distraction. Not only were pen and display separated, but IBM also sold the TransNote for $3000, at a time when similarly configured notebook PCs were priced at less than $1500. The middle image is Sony’s failed SlimTop Pen Tablet, which resembled Tablet PCs even less than the TransNote, since it wasn’t portable and was designed as a desktop electronic easel. Like the TransNote, the niche device was sold at $3000. On the right is Apple’s Newton.

Even after the iPhone, it remains amazing to me that the Newton is still cited frequently in the press as a reason why Apple is not interested in Tablet PCs. I continue to believe that one of the reasons Tablet PCs have struggled is display resolution. When you write with pen on paper, you do not worry about the pixelation that makes writing on a relatively low-resolution screen awkward. Apple has now demonstrated with the iPhone 4 that even for finger input, high pixel densities are desirable. I personally think that if you’re going to invest in 326ppi displays, you should have an input capability to match; your thumb just doesn’t suffice…

Not differentiated from Notebook PCs. In the recent buzz about Tablet PCs, one of the most interesting explanations of why Tablet PCs would fail is that “Microsoft’s new product is too similar to existing notebook computers to be successful”. This criticism is similar to that of the analysts who suggested DVD movie titles would fail because they were not adequately differentiated from videocassette tapes. To clarify our position again – Tablet PCs are not a new category of computing device – they are simply a subset of notebook PCs. Just think of portable PCs – some in a tablet-style, some in a clamshell-style, and some that are convertible between the two styles.

The traditional differentiation between PC categories has certainly eroded in the past decade. The iPad, arguably, is not a PC in the traditional sense, but Apple has certainly identified a way in which to differentiate the product, although physically, it looks very similar to many slate-style Tablet PCs. For more than a decade now, I’ve chuckled every time someone tries to talk to me about the commoditization of the notebook PC. There truly has not been a move to commoditization; indeed today’s notebook PCs are much more differentiated in terms of features and usage models than they were a few years ago, to the point that we struggle to categorize the segment.


Typing is much faster than handwriting. Ever try drawing a picture with a keyboard? Ever try to add comments to a .pdf file with a keyboard? Tablet PCs will not be successful without the capability to link to a keyboard, but to conclude that Tablet PCs will fail because people type faster than they write simply ignores the fact that handwriting is very important for some applications, while keyboards are more appropriate for others. A device that allows users the advantages of both typed input and pen input will be more successful than any device that allows only one. I think that Apple is doing a disservice to its customers by creating devices that have limited function when it comes to input – both the iPhone and the iPad would benefit tremendously, in my opinion, if they allowed both finger and pen input. Moreover, there may be some truth to the argument that keyboard input is faster than handwriting in North America and Europe, but in most Asian languages, keyboard input is very awkward and written input is much simpler. Since the Asian market continues to grow faster than the rest of the world, companies that avoid Tablet PCs because they believe keyboards are the wave of the future are likely to see their market share shrink on a global basis.

As the keyboard dominates, handwriting is deteriorating. This argument suggests that as people become increasingly dependent on keyboard input, their handwriting gets worse and worse, to the point that handwriting is no longer necessary. While there is some merit to this argument, at least in North America, it seems premature to announce the death of writing…

Handwriting recognition must work perfectly. While it is certainly true that handwriting recognition is not perfect, it’s also true that few people type without error. While perfect handwriting recognition is desirable, we should realize that the Tablet PC software includes what is probably the best recognizer currently available anywhere. Microsoft has had a small team of engineers working on handwriting recognition continuously since the early 1990s. Their first product was the less-than-stellar MARS recognizer (Pen Services 1.0, 1991), followed by the improved GRECO recognizer (Pen Windows 2.0, 1995). Windows CE 1.0 shipped in 1996 in the US without any recognizer at all. In 1998, Windows CE 2.0 came with a recognizer, but it wasn’t based on the Microsoft core product. Although Microsoft was working on a version for CE 2.0, it was not finished in time, and so CIC’s Jot recognizer was substituted at the last minute. In September of 1999, Microsoft acquired the right to use the source code of Calligrapher. The initial use of Calligrapher was in Windows CE 3.0 (released in 2000 as the “Pocket PC”). Next, Microsoft “blended” Calligrapher with its core recognizer technology to form the basis of the new recognizer released with the Tablet PC. We should also understand that handwriting recognition is only one factor associated with digital ink. In fact, many argue that the inking functions in themselves are a potent improvement to the portable PC, regardless of whether handwriting recognition is used at all.

A Tablet PC must have voice recognition to be really useful. Further improvements in voice recognition will certainly make any computer more useful. Voice recognition, however, really has little to do with whether or not a tablet-style device will be more or less successful than a competing clamshell device.

Tablet PCs are limited to vertical applications, particularly healthcare. Segments that currently benefit from tablet-style computing and will benefit even more from the enhancements offered by Tablet PC Edition include: field sales/service, healthcare/medical, insurance, manufacturing, industrial, retail, hospitality, public works, real estate, inspection, legal, military, logistics, and education.

The Tablet PC makers seem to believe that the mass market is satisfied with a keyboard-only notebook device, and that Tablet PCs are premium products to be targeted to these vertical markets. Personally, I continue to believe that by only targeting niche markets, Tablet PCs have not really been introduced to the mainstream.


The displays aren’t good enough. Perhaps the biggest challenge that Tablet PCs must face is that their introduction is flying in the face of market trends for portable PCs. At the same time that the ultraportable notebook PC market is getting smaller and smaller, all Tablet PCs are currently designed with 12.1-inch and smaller displays, at 1024x768 pixels or less. The argument is that the Tablet PC form factor must use a small display, based on focus group results about portability. Unfortunately, the same focus groups stated as recently as three years ago that notebook PCs with 15-inch displays would be nothing more than a small niche market application. In other words, perhaps the biggest risk facing Tablet PCs is the fact they are going after such a small portion of the portable PC market. The 12.1-inch and smaller portion of the notebook PC market is very small and shrinking. The market that is growing, and that continues to exceed even DisplaySearch forecasts, is the 15.0-inch and even larger market. In fact, it now seems likely that within 2003, 15.0-inch and larger panels will represent more than 50% of the notebook PC market, surpassing 14.1-inch displays. That’s the mainstream market that Tablet PCs are currently ignoring. Some company will eventually test the waters with a Tablet PC that uses a larger display. No doubt when that happens, media critics will declare that Tablet PCs with large size panels will never be successful…

To my knowledge, after five years, there is still not a 15-inch Tablet PC and only a small choice at 14.1 inches. The mainstream notebook PC market has shifted dominantly to 15.0-inch and 17.0-inch panel sizes, while the Tablet PC market has chosen (I think incorrectly) to serve only the very small segment of the market that wants 12.1-inch and smaller displays. This is clearly not the panel size of choice for mainstream portable PC users, and why Tablet PC manufacturers continue to ignore the trend toward larger sizes is a mystery to me. The argument that users are unwilling to carry a large tablet-size product is almost meaningless to me (except perhaps for some niche market applications), as it’s the same argument that critics made about the original “mega-notebook PCs” (with 13.3-inch panels), then 14.1-inch, then 15.0-inch, and then about the now popular 17.0-inch solutions.

Although I continue to be a strong advocate for Tablet PCs, I do not use one myself. The reason is simple – I am unwilling to sacrifice the screen real estate and resolution that my current mainstream notebook PC offers me (17.0 inches at 1920x1200 pixels). I would buy a Tablet PC in a heartbeat if it had an equivalent display (although I would probably prefer a 2560x1600 resolution in a 17.0-inch form factor). Currently, Tablet PCs would require me to downgrade my working space, both in terms of image quality and screen real estate, and those are tradeoffs that pen input just cannot make up for… I still use a notebook PC with a 17.0-inch display at 1920x1200 pixels. Although I long for a display with more pixels, it’s now been almost a decade since the notebook PC market has offered its customers an improvement in display resolution. I won’t belabor the point in this newsletter (I do so more adequately on a regular basis in the High Resolution newsletter), but suffice it to say that Tablet PCs are still not available with the sort of size and pixel density that I demand as a minimum.

The number of tablet-style devices that have been introduced over the years is staggering. I maintain a database that categorizes almost 400 devices from more than 250 different companies that can be called tablet-style products. Skeptics will argue that none of these products has been truly successful, so the quantity of innovation simply represents millions of wasted development dollars. Perhaps a more important observation is that there is indeed broad niche market acceptance, and more importantly, a critical mass of innovation that is mushrooming into what will inevitably be a mainstream market. Innovation continues as tablet-style devices are still evolving. The UMPC platform has gained some traction, although it is not really directed at the notebook PC market. The Tablet PC itself continues to evolve, with the EMR-only solutions from 2002 giving way to dual-mode devices that combine EMR and capacitive touch capabilities.


The tablet-style notebook PC adds function, without diminishing the advantages associated with clamshell designs. Factors supporting the adoption of Tablet PCs:

A tablet-style is more natural for most people when it comes to reading and writing. While keyboard input has become familiar to most people in the world, handwriting is still much more natural and pen input is much simpler for many applications (such as drawing-based applications).

A tablet-style allows for simple transitions from landscape to portrait mode. Many applications are more suited to portrait mode – something that a clamshell-style notebook PC cannot easily accommodate.

A clamshell-style notebook PC creates two barriers – one between people and one between user and computer. The barrier between people is obvious in a meeting when the back of the display puts up a wall between notebook PC user and others in the meeting. A tablet-style device will remove a barrier and allow for much “friendlier” person-to-person communication while still allowing for note taking. The barrier between the user and the computer is created because of the length of one’s arms – which identifies an eye-to-screen working distance. To bring the working distance closer requires the user to move his body. With a Tablet PC, the working distance can be adjusted by moving the device, rather than the user. This again, is much more natural.

These same factors all still apply to Tablet PCs, although it’s now clear that none of them has proven to be a significant driving force behind their broad commercial adoption. Clearly, PC users are accustomed to keyboard input, landscape mode, and are willing to put up with the “barriers” created by the clamshell form factor.

The media attention associated with the launch of Microsoft’s November 7 Tablet PC OS release, designed especially for Tablet PCs, was broad and comprehensive. According to a Business Week article, Microsoft spent $400 million developing the Tablet PC operating system to incorporate its handwriting-recognition and digital inking tools. And Microsoft is now spending another $70 million to market the machine. Many media analysts seem to suggest that this investment will be a colossal failure on Microsoft’s part (not to mention a broad array of hardware and software partners), but even if the Tablet PC is slow to gain popularity, Microsoft will be able to reap gains from their Tablet PC investment in numerous ways in the future:

Numerous touch-screen niche-market applications exist, regardless of the success of the Tablet PC. Providing OS support for such applications is a significant niche. Certainly we can expect future versions of the Pocket PC OS to contain portions of the lessons learned from the Tablet PC.

To date, it’s really only these niche markets that have done reasonably well in the Tablet PC market. Widespread adoption by mainstream users has not taken place.

High-resolution displays are currently not adequately supported by Windows. The new Tablet PC edition, however, claims optimized support at pixel densities from 120 to 150 ppi. We can expect to see future operating systems supporting higher pixel densities, based on the developments associated with the Tablet PC edition.

For the most part, Tablet PCs do not improve display resolution in the notebook segment, using the same pixel formats as other notebook PC models. There are a few exceptions to this, but unfortunately the promise of Vista offering a resolution-independent OS did not materialize and the PC market continues to languish largely at sub-100ppi pixel densities.

Even if the pessimistic analysts are right and the Tablet PC is slow to take off, the allure of digital inking will eventually prove to be popular. Continuous improvements in digital inking and handwriting recognition will serve to benefit Microsoft’s Tablet PC efforts for many years into the future.


As of early December 2002, system level supporters of the Tablet PC effort include an increasingly long list of players from around the world, and include Acer, AOpen, Aplux Communications, Compal Electronics, Hewlett Packard, FIC, Fujitsu, Fujitsu Siemens, NEC, Motion Computing, PaceBlade Technology, Quanta, Research Machines, SOTEC, Tatung, Time Group, Toshiba, Twinhead, VIA Technologies, ViewSonic, WalkAbout Computers, Wistron, and Xplore Technologies.

Notably absent from the list of supporters are Dell, IBM, Sony, and Apple. These companies advised that they do not think the market is ready yet for Tablet PCs and that their customers are not indicating any real demand for the products. While the impetus toward Tablet PCs would have been even more substantial had all or some of these major players introduced products, it can also be argued that while these major companies wait on the sidelines, they are giving others an opportunity to penetrate their market strongholds. Not only will new entrants like ViewSonic and Motion Computing have opportunities to enter the notebook PC market, but other players will certainly be focused on taking back market share from marketplace leader Dell Computer.

IBM (via its acquisition by Lenovo) did eventually introduce a Tablet PC, and Sony has several touch-related PC products, although none adopts the Tablet PC OS. But for five years, Dell and Apple steadfastly stayed away from the Tablet PC concept. That’s now changing, with Dell just recently introducing their first own-brand Tablet PC and rumors running rampant about an upcoming product from Apple feeding off the successes of the touch-enabled iPhone.

During most of the past five years, I’ve stated rather emphatically that as soon as market-leader Dell joined the Tablet PC club, it would ignite the transition from traditional keyboard-centric notebook PCs to pen-enabled computing. Considering Dell’s recent product launch, I no longer believe that’s the case, for several reasons:

Dell is no longer the leading notebook PC company, having dropped substantially behind current market leader HP over the past year, and recently dropping to third in the market behind Acer as well. While still a major player, Dell’s influence in the notebook PC market is much weaker than before.

Five years ago, Dell was clearly the market leader, not only in terms of market share, but also in terms of the display technologies employed in the company’s notebook PCs. Today, Dell tends to be late to market with me-too products, at least in terms of the displays used in its notebook PCs.

I first learned about Dell’s plans to introduce a Tablet PC in August 2006 (and reported it in the October 2006 edition of the Touch Panel). Initial plans were to introduce the product as early as Q1’07, but delay after delay resulted in almost a nine-month slip to the product launch. Unfortunately, such a long delay means that the platform is already “old” by the time it’s released. Rather than introducing a fresh product to the market, Dell’s new Tablet PC is relatively ho-hum in terms of product features. Despite Dell’s hype, this is not a breakthrough product by any stretch of the imagination.

The N-trig digitizer is quite novel, but has now been around for some time, and Wacom’s acquisition of TouchKO and the re-emergence of FinePoint products via InPlay Technologies suggest that many of the advantages associated with the N-trig digitizer have been eroded as a consequence of Dell’s delayed product launch. (Note that InPlay is no longer in business).

For the past five years, Dell’s leadership and sales force has been telling the market that the world is not yet ready for a Tablet PC. Since nothing seems to have changed in terms of market readiness, Dell’s newly pronounced faith in the Tablet PC is still not an entirely convincing message.


Dell continues to sell the Motion Computing line of Tablet PCs, as well as a model from Fujitsu. They also sell UMPCs from Samsung. Tablet PCs are the only computers for which Dell has chosen to serve as a distributor for brands other than its own. None of Dell’s major rivals sells branded products from their competitors. As Dell continues to lose market share for its own-brand products, perhaps the logic is that it can prop up revenues by stepping into more of a distribution role, but it’s a somewhat confusing message…

Dell’s price positioning with their new Tablet PC is inexplicably and significantly higher than comparable competitive configurations. As such, rather than spurring broad market penetration, Dell is positioning the product only for the high end of what is already a rather small market.

Still, Dell’s entry into the Tablet PC space is positive news and although the product could easily have been a more impressive entrée to the market, it appears to be a solid solution. At the very least, Dell’s sales force will no longer be telling customers that Tablet PCs are not yet ready for market.

I won’t dwell more on Dell, as their performance speaks for itself without additional comment from me.

Even if Dell, IBM, Sony, and Apple are correct in their predictions that Tablet PCs will represent only a very small portion of the market, it’s safe to predict that every single large corporation in the world will have at least a small percentage of workers who can truly benefit from the additional functionality brought by Tablet PCs. By not having a Tablet PC in their product line-ups, Dell, IBM, Sony, and Apple are at some risk of losing major corporate bids simply because they cannot fulfill them, and are put in the untenable position of convincing their corporate customers that they really don’t have a need for Tablet PCs.

As depicted in the accompanying chart, (updated to add December 2010 information), the notebook PC market has changed considerably in terms of brand dominance. Essentially the same players still exist, but their ranking has shifted, most notably the downward slide by Dell and Toshiba and the upward movement by Acer and newly by Apple as a result of the iPad. In terms of Tablet PCs, Apple is now the sole notebook PC company without a pen-enabled computer. With the success of the iPad and the rush by all players in the market to launch similar products, the future of tablet-style computing devices is now assured.

Based on an unusually large flow of rumors, along with a relatively large number of touch-related patent applications out of Apple in the past couple of years, we suspect that Apple will soon introduce a Tablet PC of some sort, probably with a user interface that draws on Apple’s experience with the iPhone. We also expect that if Apple’s design charms continue, any such Apple device will be far more successful than any Microsoft-based Tablet PC efforts to date. We offer several comments:

Notebook PC brand rankings (market share), and whether each brand offered a Tablet PC:

| December 2002 | Tablet PC | December 2007 | Tablet PC | December 2010 | Tablet PC |
|---------------|-----------|---------------|-----------|---------------|-----------|
| Dell 17.2% | | HP 21.4% | Yes | HP 17.3% | Yes |
| Toshiba 14.2% | Yes | Acer 16.2% | Yes | Acer 16.5% | Yes |
| HP 12.4% | Yes | Dell 13.8% | Yes | Apple 12.4% | Yes |
| IBM 10.4% | | Toshiba 9.1% | Yes | Dell 11.0% | Yes |
| Sony 6.7% | | Lenovo 8.9% | Yes | Toshiba 8.0% | Yes |
| NEC 6.4% | Yes | Sony 5.6% | Yes | Lenovo 7.1% | Yes |
| Fujitsu 5.3% | Yes | Fujitsu-Siemens 4.7% | Yes | Asus 5.8% | Yes |
| Acer 4.9% | Yes | Apple 4.6% | | | |
| Apple 2.8% | | Asus 4.2% | Yes | | |

Source: DisplaySearch


Five years ago, Apple’s notebook PC business was floundering. With regard to the display, Apple’s notebooks were weak, at best. The company completely missed the transition to larger panel sizes, was noticeably lagging in terms of resolution, and although it had just started to shift to wide aspect ratios, Apple inexplicably chose an oddball pixel format (1152x768) that was used nowhere else in the industry. For whatever reason, Apple’s display engineers and marketers had chosen to ignore overall image quality in their notebook PC products, to the point that Apple’s displays were truly among the least impressive offered in the industry at the time. There are several things I dislike about the iPad, one of which is the choice of a 4:3 aspect ratio. After leading the market to wide aspect ratios (albeit the 3:2 aspect ratio they picked was not widely adopted in the industry, and even Apple quickly shifted to 16:10), it’s puzzling to me why Apple chose a 4:3 aspect ratio for the iPad.

I personally was quite vocal in public forums at the time in criticizing Apple. Then, as today, I believed that Apple’s unique image and clientele would best be served with leading edge displays with a focus on image quality and the overall visual experience.

Over the past five years, Apple’s focus on the display has become obvious, to the point that I now believe that on average, Apple’s displays are the best in the industry (although they still have a ways to go in terms of improving resolution). I think that with the iPhone 4, for the first time in a mass-market product, I am finally satisfied with pixel density. My hat’s off to Apple for their complete and successful reversal in strategy related to display resolution. I have little doubt that the iPad will soon be fitted with a much higher resolution display.

Five years ago, I had the opportunity to visit a number of Apple people in Cupertino to discuss displays. On the topic of Tablet PCs, the Apple folks were almost hostile in their attitude. I recall one marketing person stating simply that “Tablet PCs are a step backwards – we learned years ago that pen input is something the consumer simply does not want on a computing platform”. And maybe this marketing person was correct – it could very well be that the interface of choice will be the finger and not the pen. Apple has still shied away from pen input, relying on finger input only. I suspect this limitation will be remedied in the not too distant future – there are just too many advantages related to pen+finger input for an innovative company like Apple to continue to ignore…

In any case, within 2008, I expect we’ll see a touch-enabled PC from Apple in some form or another. OK, I missed by a few months… Already, the Apple buffs are hinting at such a device, and there are numerous concepts that are viewable on the Internet as teasers:

Technology editor Glenn Derene, from Popular Mechanics, recently predicted that Apple would introduce a MacBook Tablet at the upcoming Macworld 2008. He offered his own vision of what sort of device Apple customers would want, as depicted above. http://www.popularmechanics.com/technology/upgrade/4243000.html?page=1


Gizmodo recently published the results of an “Apple Tablet” design contest they sponsored. A few of the results are depicted above, with a larger gallery at: http://gizmodo.com/photogallery/appletabletcontest/. One thing that is clear is that if Apple does not introduce some sort of touch-based Tablet computing device in the near future, they will be disappointing a sizable number of their customers… Note that all of these concept designs seem to incorporate a wide aspect ratio display. Note also that the upper center image of a piano keyboard is not likely to be a satisfying musical experience on an iPad for any but the most amateur pianist.

I think the success of the iPhone will almost certainly lead Apple to a touch-enabled PC platform. I further think that Apple is likely to popularize in the consumer market the concept that has eluded Microsoft and the companies that have introduced Tablet PCs over the past five years. If correct, this leads to two big ironies:

Although a touch-enabled PC from Apple is quite likely to boost Apple’s share in the PC market further, and perhaps substantially so, it’s more likely to help fulfill Bill Gates’ prophecies about the Tablet PC, with Microsoft-based systems finally growing to the sort of market penetration he predicted more than five years ago.

Dell’s long-standing position that the market just wasn’t ready for the Tablet PC was correct, (although it’s arguable that it was a self-fulfilling prophecy). Perhaps Dell’s strategists recognized that it wouldn’t be a market leader like Dell that would inspire the Tablet PC, but rather a company like Apple, influenced by its non-PC successes.

Despite the ironies, if indeed Apple enters the market in 2008, I think it’s a safe prediction that the Touch Panel newsletter will have plenty of news and information to cover as this story continues to unfold.

While it’s true that in 2007 it was easy to predict that touch technologies would fill the Touch Panel newsletter going forward, I must admit that even I did not anticipate the touch interface would be as bountiful as it has turned out to be… Will it continue? I’m quite sure it will… Will Apple continue to lead in terms of market innovation? The answer to this question is less obvious, but I currently don’t see a likely successor as the leading innovator in the electronics marketplace.


News from around the world

compiled by Phillip Hill and Mark Fihn

NXT’s haptic tactile feedback module creates touch sensation and sound on flat panels NXT plc announced a 15x10mm control module that, in combination with two 0.5W moving coil transducers, adds haptic touch to a flat panel whilst simultaneously turning the panel into a loudspeaker. The module is designed primarily for applications in white goods, security systems, home automation and industrial control. The haptic effect makes a touch panel feel like a mechanical keypad or keyboard. The NXT HCM-1 module controls the transducers, which generate bending waves across the surface of the panel, creating the sensation of pressing a mechanical key when a user touches it. The touch panel also acts as a flat-panel speaker, using the same drive module and transducers. The speaker can be used to provide additional confirmation of a positive touch on the panel or audible confirmation of other circuit functions being activated. It can also be used to deliver voice feed prompts, advertising messages, or entertainment. The use of two transducers results in even dispersion of signal strength across the whole touch panel. NXT will develop additional control modules for displays, multi-touch feedback and alternative transducer types. Higher power versions for large displays are also planned. The HCM-1 operates from a single +3.3V power supply. Operating current consumption depends on the transducers being driven and the haptic signal profile but the module consumes only 20µA in standby. Engineering samples are being offered to key partners now, with mass production parts available in February 2011 at a price of $6.70 each in 100+ quantities. http://www.nxtsound.com
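The feel of a “key press” on a rigid panel comes down to the drive waveform fed to the transducers. As a rough illustration of the general idea only – NXT has not published the HCM-1’s actual signal profile, and the frequency, decay rate and duration below are invented parameters – a haptic “click” is often approximated as a short, exponentially decaying sinusoid:

```python
import math

def haptic_click(sample_rate=48000, freq_hz=250.0, decay=60.0, duration_s=0.03):
    """Synthesize a short decaying sinusoid, a common rough model for a
    key-click haptic drive signal. All parameter values are illustrative
    assumptions, not NXT's published signal profile."""
    n = int(sample_rate * duration_s)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate)
            * math.exp(-decay * t / sample_rate)
            for t in range(n)]

samples = haptic_click()          # 1440 samples; envelope peaks near the start
print(len(samples), round(max(samples), 3))
```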

Augen expands Gentouch tablet line with six new models Augen Electronics Corp has unveiled six new Android tablets to expand its Gentouch line. Retailing between $199 and $599, all are WiFi equipped and will have access to Augen’s SlideME app store. The four models in the more expensive Espresso line add a forward-facing camera for video chat, Bluetooth, increased memory (8GB), and an increased processor speed of 1GHz. The $429 Dolce has a 10.2-inch 1024x768 display. The $599 Dopio is intended to bridge the divide between tablets and laptops by including a dual-boot option utilizing Android or Ubuntu, and features a desktop dock with a back-lit QWERTY keyboard and an additional lithium battery. The $349 Firma incorporates a wireless pen that can be used to transfer handwritten text and images. http://www.augenus.com

E Fun launches “Nextbook” tablet E Fun introduced a tablet PC dubbed the Nextbook. The first model to hit the market will be a 7-inch version that uses the Android operating system and is powered by an ARM processor. The unit will carry a $199 suggested retail price. E Fun is working closely with FiYang, a Chinese ODM manufacturer that will make the products. The Nextbook has a deal with Borders for supplying books, but the unit can also play video and music and search the web. It has 802.11g Wi-Fi and 2GB of internal memory. A follow-up 8.4-inch model is expected at a later date, and more should follow next spring. The company is also considering upgrading the ARM processor next year to something more powerful. Adding 3G connectivity is also under consideration. E Fun concentrated on improving the touch sensitivity of the model, compared with others on the market, through improved software to make page turning easier. In addition, it is outfitted with page-turning buttons along the side of the screen. http://www.nextbookusa.com


Gartner sets aggressive forecast for tablet sales Gartner Inc. said that it expects media tablet sales to reach 19.5 million units this year and to nearly triple in 2011, in one of the most aggressive forecasts published so far. It believes tablet computers such as Apple’s iPad, Samsung’s Galaxy Tab and Cisco’s Cius will cannibalize other portable devices such as e-readers, gaming devices, and media players. In the enterprise sector, tablets will function as a secondary device to use on the road or for fast access to email, calendar, Web applications and presentations. Gartner’s forecast assumes that over 80% of tablet sales this year will come from the iPad. According to The Wall Street Journal’s calculation, that is at least 15.6 million units. The newspaper currently expects Apple, which dominates the tablet market, to sell about 11 million iPads this year. The research firm expects 54.8 million units in 2011, 103.4 million in 2012 and 154.2 million in 2013. The iPad will keep its lead until 2013. http://www.gartner.com
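The arithmetic behind these figures is easy to check; a quick sanity check of the numbers quoted above:

```python
units_2010 = 19.5e6   # Gartner's 2010 media tablet forecast (units)
units_2011 = 54.8e6   # Gartner's 2011 forecast (units)

ipad_floor = 0.80 * units_2010            # "over 80% of tablet sales from the iPad"
print(f"{ipad_floor / 1e6:.1f}M")         # 15.6M -- the WSJ's "at least" figure

print(f"{units_2011 / units_2010:.2f}x")  # 2.81x, i.e. "nearly triple"
```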

Displaybank predicts eReader market will reach 54 million units in 2015 Through its recently released “e-Reader Industry Trend and Market Forecast” report, Displaybank studies, analyzes, and forecasts e-Reader industry trends and technologies. Expectations for the eReader market have risen since the launch of the Kindle; lately, new e-Reader product launches have increased and prices have dropped, driving demand. Only three years after the launch of the Kindle, the U.S. sales volume of bestselling e-book content has reached twice the volume of paper sales. The e-Reader demand market is divided into a B2C market and a B2B market. The B2C market is a general consumer market that requires additional functions on top of the “book reading” function, due to the advancement of multi-purpose multimedia devices such as smart phones, tablets, and netbooks; this market is therefore rather limited, as it competes with those multimedia devices. The B2B market, on the other hand, is expected to expand in the future, as corporations and institutions that adopt the e-Reader can save purchase and management costs as well as the space needed to display books. Overall, the e-Reader market is expected to grow at a CAGR of 56.3%, reaching about 54 million units in 2015 and 88 million units in 2018.
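For readers who want to check such forecasts, compound annual growth rate (CAGR) is just the constant yearly growth factor connecting two data points. A minimal sketch (the base-year figure is not given in Displaybank’s summary above, so only the two published points are used):

```python
def cagr(start_units, end_units, years):
    """Compound annual growth rate: the constant annual growth rate that
    takes start_units to end_units over the given number of years."""
    return (end_units / start_units) ** (1.0 / years) - 1.0

# Implied growth between the two published points, 2015 -> 2018:
print(f"{cagr(54e6, 88e6, 3):.1%}")   # ~17.7% per year, a marked slowdown
                                      # from the 56.3% CAGR of the ramp-up
```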

Semtech adds proximity function to capacitive touch button platform Semtech expanded its ultra-low power capacitive touch sensor ICs with a new platform of enhanced proximity sensing devices. The new SX863x capacitive touch button controllers come in 8- and 12-channel versions and feature long-range proximity detection out to 10cm, allowing the sensor to control power-management and other convenience features such as system wake-up, LED wake-up and automatic backlight activation upon user detection. These devices support a wide range of button, slider and touch-wheel capacitive sensor panels that are increasingly being designed into white goods, mobile phones, media players, notebooks/netbooks, set top boxes and other consumer and industrial electronic products. The SX863x devices simplify capacitive touch control design by providing a plug-and-play solution to reduce time-to-market while still allowing users to customize various parameters (for example: scan time, slave address, channel gain, sensitivity thresholds) for their specific applications, without the need for new firmware development. They also reduce the overall footprint and BOM (bill of materials) cost by incorporating dedicated capacitive sense inputs, without the need for external resistors and capacitors. The SX863x platform integrates eight 256-step LED drivers with individual intensity fading control (linear or logarithmic) optimized for human vision. This allows the design of buttons that automatically fade in when touched and fade out when released, with interval speed and initial and final light intensity levels set by the designer. http://www.semtech.com
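“Without the need for new firmware development” means the host simply writes configuration registers over the control interface. The sketch below shows the general shape of such a setup sequence; the bus number, slave address, and register addresses and values are placeholders I invented for illustration – the real SX863x register map is defined in Semtech’s datasheet:

```python
from smbus2 import SMBus   # pip install smbus2 (Linux userspace I2C access)

I2C_BUS = 1
DEVICE_ADDR = 0x2B         # placeholder slave address, not from the datasheet

# Placeholder register addresses/values -- NOT the actual SX863x register map.
REG_SCAN_PERIOD = 0x00     # hypothetical: capacitive scan interval
REG_SENSITIVITY = 0x01     # hypothetical: touch/proximity threshold

with SMBus(I2C_BUS) as bus:
    bus.write_byte_data(DEVICE_ADDR, REG_SCAN_PERIOD, 0x10)
    bus.write_byte_data(DEVICE_ADDR, REG_SENSITIVITY, 0x04)
```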


Microsoft’s Arc Touch Mouse uses Cypress CapSense solution to implement touch scroll strip Cypress Semiconductor announced that Microsoft’s new Arc Touch Mouse uses the Cypress CapSense solution for the touch scroll strip that lets users navigate documents and Web pages without using the on-screen scroll bar. The CapSense-based scroll strip delivers a smooth, comfortable navigation experience as opposed to the jerky scrolling that some mechanical wheels provide. The Arc Touch Mouse was designed to deliver ultimate portability, easily slipping into a bag, purse or pocket, and it goes from curved to flat with one simple movement. The Arc Touch Mouse’s touch strip enables consumers to scroll with a flick of a finger. Using the CapSense solution, the strip responds to each position and velocity change to give users accurate, controlled scrolling no matter how fast or slow the movement. The strip also has three tap “buttons” for added functionality: page up, page down and the middle click area, which is reprogrammable for whatever mouse function the user needs most. The Arc Touch Mouse is equipped with BlueTrack Technology, letting consumers track on virtually any surface. Two AAA batteries give the Arc Touch Mouse more than six months of battery life, and the two-color battery life indicator lets people know when the power is running low. www.cypress.com/go/capsense
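Position-plus-velocity scrolling of this kind is conceptually simple: the sensor reports the finger’s position along the strip, and the resulting displacement is scaled by how fast the finger moved, so a slow drag gives fine control while a quick flick jumps far. A sketch of the idea (my illustration, not Cypress’s or Microsoft’s firmware; all constants are tuning assumptions):

```python
def scroll_amount(prev_pos, pos, dt, strip_len=100.0, gain=0.5):
    """Map finger movement along a touch strip to a scroll amount that
    grows with velocity: slow drags scroll finely, fast flicks scroll far.
    Positions are in arbitrary strip units; gain is a tuning constant."""
    displacement = pos - prev_pos
    velocity = displacement / dt                     # strip units per second
    return displacement * (1.0 + gain * abs(velocity) / strip_len)

print(scroll_amount(10, 20, dt=0.50))   # slow drag:  11.0
print(scroll_amount(10, 20, dt=0.05))   # fast flick: 20.0
```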

Cando to invest NT$24 billion for touch panel capacity expansion Cando, a subsidiary of LCD panel maker AU Optronics (AUO), plans to invest NT$24 billion (US$789.1 million) to start constructing two touch panel production plants in the Central Taiwan Science Park (CTSP) by the end of the year, in order to produce touch sensors for the second-generation iPad, according to a Chinese-language report in the Economic Daily News (EDN). The report cited AUO chairman KY Lee as saying that, based on the estimate that 200 million smart phones will be shipped in 2011, touch panel shipments for handsets are expected to reach 100-200 million units next year; with other applications included, the overall touch panel market is expected to exceed 500 million units per year in the future. http://www.digitimes.com

Osaka Prefecture University finds way to recover touch screen material indium Scientists have developed a way to produce microbe-filled capsules that can recover the rare metal indium, used in the creation of touchscreens. Health products maker Morishita Jintan has worked with Osaka Prefecture University to develop the technology, which consists of microbes surrounded by a semi-permeable synthetic membrane. When placed in a solution containing indium, the capsules let metal ions pass through the membrane before being taken up by the microbes. According to Morishita Jintan, the capsules facilitate the work of the microbes, making the use of organic solvents unnecessary and the recovery process far safer. A patent has been applied for, and the firm aims to have commercial applications ready in the next few years, utilizing its drug-making technology and support from manufacturers that use rare metals. http://www.jintanworld.com

Young Fast to top touch panel makers in 2010 EPS; Wintek and TPK in 2011 Young Fast Optoelectronics, among Taiwan-based providers of touch solutions, will record the highest EPS in 2010 due to its booming sales of ITO-film touch panels used in handsets, while Wintek and TPK Touch Solutions are expected to lead in 2011 EPS because of fast-growing demand for their medium-sized ITO-glass touch panels used in in-car navigation devices and tablet PCs, according to a report from DigiTimes. With touchscreen handsets accounting for over 40% of the handset market in 2010, demand for film-based maker Young Fast will certainly continue to grow in 2011, but is unlikely to increase as significantly as in 2010. Touch panel demand will shift toward medium-size products such as in-car navigation devices and tablet PCs, while film-based medium-size touch panels have yet to gain enough trust from clients, despite improving sensitivity in support of multi-touch and gesture recognition. Glass-based touch panels are highly accepted, due to already-mass-produced products like the iPad. Supply of glass-based touch panels is tightening, as suppliers such as TPK Touch and Wintek are also producing for other vendors in addition to Apple. Revenues of TPK Touch and Wintek are expected to increase by 40-60% in 2011 as new capacity starts joining production from the fourth quarter of 2010. http://www.digitimes.com


Fujitsu shows off clamshell handset with two touch panels Fujitsu developed a clamshell mobile phone that is equipped with two touch-sensitive LCD panels, one on its upper and one on its lower chassis. Both are capacitive touch panels and have the “Swing Side Motion” function, which enables the screen image to be rotated by 90°. The screen sizes of the upper and lower panels are 3.4 and 3.5 inches, respectively, and their pixel count is 480x960. The prototype handset is capable of wireless communication via a wireless LAN and is also equipped with a communication module for 3G networks. The operating system is Symbian, but the user interface of the handset can be used with the Android platform too, Fujitsu said. The company has not yet determined when to commercialize the handset.

A photo folder displayed on the two panels; the upper panel defined as “social” and the lower panel defined as “local”; the upper and lower screens rotated by 90 degrees

Fnac to release eReader Fnac, the French electronics retailer owned by the PPR Group, is to launch a new eReader. The Fnacbook is manufactured by Sagem and incorporates a 6-inch capacitive touch screen over an E Ink electrophoretic display, 2GB of memory, and boasts 3G and WiFi connectivity. The device supports TXT, ePub, PDF, HTML, JPEG and MP3 file formats, and its dimensions are 152mm x 123mm x 10.7mm – making it shorter, but slightly thicker, than Amazon’s Kindle. Fnac is selling the 240g eReader for €199, while eBook prices will be 20-30% lower than paperback editions. The company is also planning to inaugurate an online eBook store with 80,000 titles in the near future. http://www.fnac.com

EIZO Releases 17.0-inch multitouch monitor Eizo Nanao announced the 17-inch FlexScan T1751 LCD monitor. The FlexScan T1751 offers multitouch capabilities through Windows 7 and is the second multitouch offering from EIZO. It follows the FlexScan T2351W, a 23-inch widescreen monitor introduced in September. The FlexScan T1751 is ideal for schools and businesses that prefer a smaller screen in a standard 5:4 aspect ratio. Furthermore, it complies with leading medical standards including EN60601-1, UL60601-1, and the CE Medical Device Directive, making it suitable for use in hospitals and clinics. The FlexScan T1751 features a vertical alignment structure with a native resolution of 1280x1024 pixels. The monitor comes with a unique stand called the “LaidBack” stand which was developed specifically for multitouch use. The LaidBack stand is short like a picture frame stand so the monitor bezel rests on the desktop and tilts back in six stages from 15° to 65°. This allows the user to look almost directly down at the screen and comfortably touch it without having to extend his/her arm. The FlexScan T1751 uses optical imaging touch screen technology which accepts input from a bare or gloved hand as well as a touch pen. The non-scratch touch surface is made of reinforced glass with a hardness of 5.5 Mohs, to ensure it has the durability to withstand use in schools, hospitals, and public access areas. http://www.eizo.com

Premier Farnell and demmel products global agreement boosts time to market at low development costs Premier Farnell announced a global franchise agreement with demmel products, a manufacturer of “Next Generation Intelligent LCD” (iLCD) technology. demmel products’ iLCD technology enables users to store all necessary Windows fonts, graphics, text templates and macros in the flash memory of the iLCD’s controller. This considerably reduces the development time normally required for the pixel-based programming process of conventional LCDs. iLCD panels can be accessed through legacy control systems or operated by low-cost microcontrollers using high-level commands. Development costs are almost completely eliminated and time-to-market is shortened significantly by using iLCDs. The extremely compact 240x400 pixel touch screen iLCD is optimized for landscape mode but can also be operated in portrait mode. iLCDs are available from 2.8 inches (240x320 pixels) to 10.2 inches (1024x600 pixels), and every model comes with the same set of commands and electrical connections. http://www.element14.com http://www.PremierFarnell.com

Global agreement provides Electronic Design Engineers with newest intelligent touch iLCD technology from demmel products

Queen’s University offers plans for DIY multi-touch globe Queen’s University’s Human Media Lab is offering plans for a DIY spherical multitouch display you can use for all your interactive globe needs. While the design does require a high-quality projector, the software is not processor-intensive, so if you have a spare PC the rest of the materials are actually quite inexpensive. The project uses an infrared-sensing webcam to detect whenever anybody touches the globe, using a process called diffused illumination. The Human Media Lab website looks to be ramping up to do more open source projects like the spherical display in the future. http://hml.queensu.ca/node/302

You can link to a YouTube video of the spherical display at: http://www.youtube.com/watch?v=Wf7g9tnhOFc&feature=player_embedded

Perceptive Pixel closes $17 million Series B funding Perceptive Pixel announced that it has closed a $17 million Series B round of financing, which includes investments from Intel Capital, previous investors, and, as previously announced, 3M. Perceptive Pixel has developed several patent-pending approaches to multi-touch interfaces. The company’s flagship 100-inch Multi-Touch Wall transformed the way existing investor CNN covered the historic 2008 US Presidential elections. Earlier this year, Perceptive Pixel pushed the boundaries of touch technology even further by launching the world’s first true multi-touch 55-inch LCD, capable of sensing an unlimited number of fingers and styluses with pressure sensitivity. The company previously raised $4.75 million in a Series A round led by Intel Capital in 2008. www.perceptivepixel.com www.intelcapital.com

Alps shows smart phone operated by touch pad on steering wheel Alps Electric recently exhibited new user interfaces for vehicles, including a touch pad embedded in a steering wheel and a sensor located above a front seat. The touch pad, which can be operated by the thumb, can be used to control a smart phone: the phone’s touch panel is operated by tapping the touch pad on the steering wheel, which is connected to a controller equipped with a wireless LAN interface, and the steering wheel and the smart phone are connected via a wireless LAN. The controller is also connected to an in-vehicle monitor and a CAN (controller area network) so that data sent from the smart phone can be displayed on the monitor and an air conditioner can be controlled by the smart phone. In one demonstration, a passenger sitting in a rear seat changed the temperature of the air conditioner by using the smart phone. In the demonstration of the sensor attached to the ceiling of a vehicle, Alps Electric used a stereo camera and a human presence sensor. The stereo camera was used to (1) monitor the state of passengers and (2) operate in-vehicle devices with hand motions. This makes it possible to determine the height of a passenger so that air from the air conditioner can be sent to an appropriate location, as well as to change the way in which an airbag deploys depending on how a passenger is sitting. Also, when a hand comes close to the ceiling, a map displayed on the car navigation device is enlarged, and the map can be scrolled by moving a fisted hand. Furthermore, Alps uses the human presence sensor to help prevent car theft.

A smart phone can be operated by using the touch pad on the steering wheel with a thumb. The white box seen in the lower right is a controller; the sensor attached to the ceiling

Tastitalia to integrate Immersion haptics technology in display product line Tastitalia and Immersion Corporation announced that they have entered into a reseller agreement. Tastitalia will integrate Immersion’s TouchSense 1000 and 2000 technology into its touch screen and display units under the Tasthaptic product name. Targeted for the industrial, medical, automotive, and aerospace markets, Tasthaptic touch screens use sealed resistive or capacitive touch panel technology designed to stand up to harsh environments. By incorporating Immersion’s TouchSense technology, Tasthaptic touch screens provide tactile response and rich touch effects to enhance the display’s usability in noisy and distracting environments. In addition to the tactile effects, the Tasthaptic touch screen provides a gapless surface which is easy to clean and sterilize. The frontal protection level can reach IP67 or NEMA 6. http://www.tastitalia.com http://www.immersion.com

DigiTimes Research reports that handset touch panel shipments will reach 640 million worldwide in 2011 Worldwide handset-use touch panel shipments will reach 640 million units in 2011 amid fast growth in the markets for smart phones and China's white-box handsets, according to DigiTimes Research. Taiwan's touch panel makers will have a 40% share of global touch panel shipments to the handset market in 2010. Total small- to medium-size touch panel shipments worldwide will reach 768 million and 916 million units in 2010 and 2011, respectively. http://www.digitimes.com

Microsoft sells a million Kinect units in its first 10 days on the market Microsoft announced that it sold one million units of the Kinect video game peripheral in only 10 days. If this pace keeps up, it is on course to easily break its holiday sales projection of five million units. Microsoft did not provide any details on how the units were sold. Customers can buy the $150 Kinect add-on camera alone or as part of a bundle with the Xbox 360. Dennis Durkin, corporate VP and chief operating and financial officer of Microsoft’s Interactive Entertainment Business, said that units are being sold as fast as they can be manufactured. Microsoft is spending $500 million to advertise the new device over the holidays.

One of the three “cameras” on the face of the Kinect is actually an IR projector, throwing out infrared light so the other (non-RGB) camera can get a sense of your room in 3D space. It's invisible to the naked eye, but if you happen to have a pair of those inexpensive night vision goggles sitting around – like the kind that came with the most expensive edition of Modern Warfare 2 – you can see just how much IR light the Kinect throws out. Here are two videos showing off how the Kinect sees your room. The effect is actually rather creepy.

http://www.youtube.com/watch?v=nvvQJxgykcU&feature=player_embedded http://www.youtube.com/watch?v=r7nRKU0nFxA&feature=player_embedded

Bathed in light: how the Kinect paints your room in IR

Zytronic and NUITEQ cooperate on multi-touch PCT-based touchscreen solution Zytronic announced that, following collaboration with Swedish multi-touch software solution provider NUITEQ, its range of Projected Capacitive Technology (PCT) sensors and supporting family of ZXY100 dual-touch controllers now have full compatibility with NUITEQ’s Snowflake Suite. The PCT-based touch sensors manufactured by Zytronic utilize patented technology to differentiate themselves from more traditional approaches using front-facing sensors, such as resistive or surface capacitive. The PCT sensor structure is made up of a matrix of micro-fine capacitor elements embedded into a laminated substrate and placed behind a thick protective overlay. This means the resultant touchscreen is not affected by scratches, shock, dirt build-up, extremes of heat, harsh chemicals, liquid ingress, or other factors that could shorten its operational lifespan. As a consequence these sensors are well suited to highly demanding application areas, such as public information systems, point-of-sale units, industrial controls and digital signage equipment. Also, touchscreens that employ PCT do not need a bezel in which to house the sensors – something that is mandatory for IR- or SAW-based touch sensing arrangements – so more attractive, smooth-fronted system designs can be created. http://www.zytronic.co.uk

CIT installs printed electronics pilot line for touch screen sensors UK-based Conductive Inkjet Technology (CIT) has completed the installation of a pilot line that will be used in the production of touch screen sensors for mobile electronics. The line will coat film used to make the sensors for CIT’s partner Atmel in the US. Volume production will start in the latter part of 2011. The pilot line marks an important step in CIT’s efforts to commercialize a process for depositing conductive inks for application in several electronics markets. Atmel and CIT’s partnership was initiated in December 2009. Atmel agreed to pay for preferential access to CIT’s capacity subject to certain milestones being reached, including the installation and validation of a dedicated pilot production line at CIT’s premises in Cambridge. Through Atmel, CIT has been working closely with mobile phone manufacturers to develop its deposition technology. The company has modified its materials and process to eliminate any visible line traces and potential reflectivity from screens. This was achieved by printing sub-micron lines, which CIT managed to reduce down from 8µm widths. Due to the scale and potential of the touchscreen sensor market, driven by smart phone and tablet PC applications, CIT extended the exclusive access granted to Atmel for up to 10 years, depending on the progress of the technology’s adoption. Atmel will make a prepayment of over €7 million to CIT. http://www.conductiveinkjet.com

Zytronic launches new controller for ultra large form factor PCT touch sensors Zytronic announced the launch of a new addition to its range of touch controllers designed to enhance user experience in ultra large format applications. The ZXY100/128 controller can support touchscreens greater than 50 inches (127cm) across. Developed for Zytronic’s proprietary range of touch sensors, the all-new ZXY100 touch controller is supported by the same WHQL certified driver as the smaller controllers in the range, and offers features such as dual-touch output for linkage to “multi-touch” or gesture recognition software, Windows 7 plug and play, in-field firmware upgradeability and reduced PCB footprint. At the heart of all ZXY100 controllers is a 32-bit ARM microprocessor containing Zytronic’s proprietary firmware, driving faster touch detection and enhanced responsiveness in both direct touch and through-glass sensing modes. Working together with the powerful processor is a custom designed ASIC which incorporates the multiplexing functionality of a number of analogue components, and contributes to the small footprint of the controller. An on-board Flash memory also enables in-field firmware updates to be performed. Zytronic will also offer the chip set to customers for integration of the PCT touch sensor controller onto their motherboards in higher volume or space-constrained applications. A unique feature of PCT is its ability to operate behind thick protective glass overlays, enabling highly reliable, functional and versatile touch interfaces in outdoor or demanding public-use applications, with proven solutions already in action worldwide including information kiosks, vending & ticketing systems, interactive digital signage, medical displays, industrial PCs and ATMs. The 128-channel ZXY100/128 joins two other controllers in the range, which were introduced earlier in 2010. The smallest, a 32-channel unit, can be used with 5-19 inch (13-49 cm) sensors; the midrange, 64-channel controller is designed for 20-50 inch (50-127 cm) PCT sensors. All models are available with either USB or Serial connectivity. http://www.zytronic.co.uk

J Touch supplies 5.5-inch touch panel for tablet PC to Sharp J Touch landed orders for 5.5-inch tablet PC touch panels from Sharp, which recently launched 5.5- and 10.1-inch touch screen tablet PCs. J Touch also supplies touch panels for Samsung Electronics’ 7-inch Galaxy Tab. Due to the tight supply of glass-based capacitive touch panels, as well as their longer design and mass production times and higher cost, some vendors chose film-based capacitive touch panels for tablet PCs in order to introduce their products to the market faster. http://www.digitimes.com

3M completes acquisition of Cogent 3M announced that it has completed its acquisition of Cogent pursuant to a merger agreement that was approved and adopted by Cogent’s stockholders at a special meeting held on Dec. 1, 2010. Under the terms of the merger agreement, Cogent’s stockholders are entitled to receive $10.50 per share in cash. The subsidiary will be known as 3M Cogent Inc. http://www.3M.com

Nuance XT9 brings predictive touch input to connected IPTV Nuance Communications announced that its XT9 predictive touch input is entering the digital living room, with new expanded support for interactive, connected television. The world’s first XT9 deployment in the connected TV market will be with TVBLOB via BLOBbox, TVBLOB’s next-generation “over-the-top” connected TV software that blends traditional TV with the Internet for a unique television experience. With XT9, BLOBbox users can find, search and access content faster and easier than otherwise possible with traditional remote controls, as well as send email or text messages, and interact with social media applications and games. Nuance is bringing its predictive touch input experience for mobile phones to connected television and IPTV, enabling service providers, and TV, set top box, media player, DVR, gaming console and remote control manufacturers to integrate XT9 with their TV-centric keyboards in more than 80 languages worldwide. With this latest TVBLOB integration, BLOBbox users will also be able to rapidly enter keywords, search Electronic Program Guide (EPG) and on-demand programming, as well as chat with friends on and create content for TV-based social media applications. XT9 is platform-agnostic and can be integrated on any kind of keyboard and remote control, including standard 9- and 12-key keypads, scroll wheel, touchpad, or joystick. http://www.nuance.com/t9/xt9/

RIM to buy TAT to boost UI development Research In Motion plans to acquire Swedish company The Astonishing Tribe (TAT), which specializes in developing user interfaces. TAT will reportedly help RIM develop the user interfaces on smart phones and its upcoming tablet, the PlayBook. The deal may be a blow to Google and the Android camp, because TAT and its work have become increasingly valuable to the Android community. RIM hasn’t announced any financial details. http://www.rim.com http://www.tat.se

Nokia shows off RDI touch screen in ice Nokia developed a 2-by-1.5 meter surface built from 50-centimeter-square slabs of ice smoothed out with a heat gun. Near-infrared lights and cameras were mounted behind it. Reflections of objects touching the outward surface of the ice are sent to a PC, which determines their size and movement. A connected projector uses the data to project imagery beneath the object, creating, for example, the illusion of flames under a user’s hand. The technology is called rear-diffused illumination (RDI). To keep costs down, no sensors were placed in the ice itself, and as the video shows, the system seems fairly responsive and accurate given that the cameras are looking for objects and movement through a thick piece of ice. http://www.nokia.com

Nokia’s research lab in Tampere, Finland has developed a rather unique kind of touchscreen, a touchscreen made of ice. Nokia says that this “playful experiment” reveals that interactive touch displays can be built anywhere.

Philips Healthcare announces Ocular as touch panel supplier Philips’ Healthcare division announced that Ocular LCD will provide the touch panel assemblies for Philips’ next-generation patient monitoring device. Philips will incorporate projected capacitive user interface panels designed and manufactured by Ocular, as well as utilize additional enhancement and customization services that Ocular provides. http://www.philips.com/newscenter http://www.ocularlcd.com

Sony patents special rear-surface touch input technology A set of several Sony Computer Entertainment patents filed in October describes a device in which “the visual display is disposed on a front side of the case and the touch pad is disposed on a back side of the case.” According to one patent for a “hand-held device with two-finger touch triggered selection and transformation of active elements,” the two sides are “slidably connected to each other ... in a hinged configuration,” with the touch-sensitive side able to distinguish a “two-fingered touch” from that of a single finger. According to the patent, this dual-surface solution improves on prior touch screens, on which “the user's fingers often obscure the part that is to be selected making selection difficult.” This matches closely with recent, widely reported rumors that Sony’s next PlayStation Portable will contain such a back-mounted touchscreen. http://appft.uspto.gov

The abstract for patent #20100299595 reads as follows:

A hand-held electronic device, method of operation and computer readable medium are disclosed. The device may include a case having one or more major surfaces. A visual display and a touch interface are disposed on at least one of the major surfaces. A processor is operably coupled to the visual display and touch screen. Instructions executable by the processor may be configured to a) present an image on the visual display containing one or more active elements; b) correlate one or more active portions of the touch interface to one or more corresponding active elements in the image on the visual display; c) operate the one or more active elements according to a first mode of operation in response to a first mode of touch on one or more of the active portions; and d) operate the one or more active elements according to a second mode of operation in response to a second mode of touch on one or more of the active portions, wherein in the second mode of operation, the second mode of touch activates an enhancement of one or more of the active elements.

Microsoft patent shows polymer technology that enables tactile keyboards A recent Microsoft patent filing involves technology that transforms a touch screen’s surface into a real, tactile keyboard (among other things). The patent describes a “light-induced shape-memory polymer display screen” in which ultraviolet light changes the shape or topography of a layer of the screen. Differing UV wavelengths result in different heights, shapes, hardness and softness, meaning this could theoretically create a keyboard/keypad springy enough for practical use. Microsoft inventor Erez Kikin-Gil mentioned that the idea is meant to be implemented in table-size computing displays rather than in small gadgetry like tablets and smart phones.

The new Microsoft patent coats the display with a light-induced shape-memory polymer. Once a wavelength of ultraviolet light is transmitted at a pixel, the material will become hard and protrude. When another wavelength is transmitted, it will become soft. Through modulating these wavelengths Microsoft hopes to create texture.

Mobile HCI September 7-10, 2010, Lisbon, Portugal

In the first of three reports, Phillip Hill covers papers from Lancaster University/DOCOMO Euro-Labs, DOCOMO Euro-Labs/Ludwig-Maximilians-Universität (LMU), University of Glasgow, Graz University of Technology/University of Canterbury, Microsoft Research/RWTH/University of Toronto, University of Toronto, University of Glasgow, Newcastle University/Deutsche Telekom Laboratories, and Nokia Research Center

Mobile Interaction with Static and Dynamic NFC-based Displays Robert Hardy, and Enrico Rukzio, Lancaster University, Lancaster, England Paul Holleis, and Matthias Wagner, DOCOMO Euro-Labs, Munich, Germany

This paper reports on a development framework, two prototypes, and a comparative study in the area of multi-tag near-field communication (NFC) interaction. By combining NFC with static and dynamic displays, such as posters and projections, services are made more visible and allow users to interact with them easily by interacting directly with the display with their phone. In this paper, the researchers explore such interactions, in particular, the combination of the phone display and large NFC displays. They also compare static displays and dynamic displays, and present a list of deciding factors for a particular deployment situation. They discuss one prototype for each display type and develop a corresponding framework that can be used to accelerate the development of such prototypes whilst supporting a high level of versatility. The findings of a controlled comparative study indicate, among other things, that all participants preferred the dynamic display, although the static display has advantages, e.g. with respect to privacy and portability.

The first prototype consists of a static tourist guide poster (see image), which has no ability to provide visual feedback other than on the phone. The poster’s back is completely covered with a mesh of tags that provides extra functionality, yet leaves the traditional use of the map unchanged. One deployment scenario is that many posters can be created and installed at various bus shelters around the city. The poster features a map of POIs (points of interest) specific to its deployment location. Located on this particular map are various nearby restaurants. The poster also provides a method of querying the restaurants based on a set of criteria. This can be seen on the left-hand side of the poster. There are radio button options that enable the user to specify a particular type of restaurant (e.g. Italian). There are also checkboxes used to specify one or more desired restaurant ratings. The poster cannot show the selections; this has to be done on the phone. Once the criteria have been set, the user can select the “Query” action, which returns the corresponding restaurant matches to the phone along with their grid references so they can be located on the map. In order to get more information about a specific POI, the user simply touches the corresponding grid area with the phone. When there are multiple POIs in a grid area, the POI can be selected using the phone’s number pad. POIs are identified locally within the bounds of the tag (e.g. if there are three POIs in one grid area, their identifiers will be 1, 2 and 3). Global identification is thus achieved by combining the local identifier with the map grid reference.
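As an aside for developers, the grid-addressing scheme is easy to sketch in code. The following Python fragment is a minimal illustration only (the table contents, names and restaurant data are invented, not taken from the paper): the NFC tag supplies its grid reference, and the keypad digit supplies the local identifier.

    # Hypothetical lookup table: (grid reference, local id) -> point of interest.
    POI_TABLE = {
        ("C4", 1): "Trattoria Roma (Italian, 4 stars)",
        ("C4", 2): "Sushi Bar Kyo (Japanese, 3 stars)",
        ("C4", 3): "Le Petit Bistro (French, 5 stars)",
        ("D7", 1): "Curry House (Indian, 4 stars)",
    }

    def resolve_poi(grid_ref: str, local_id: int) -> str:
        """Combine the touched tag's grid reference with the digit the
        user presses on the phone keypad to identify a POI globally."""
        return POI_TABLE.get((grid_ref, local_id), "unknown POI")

    # The user touches grid cell C4 with the phone, then presses '2':
    print(resolve_poi("C4", 2))   # Sushi Bar Kyo (Japanese, 3 stars)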

The second prototype is the dynamic prototype, which shares a similar appearance with the static example (see second image). However, the printed poster is replaced by a projector, and thus a change of content and direct visual feedback is possible. The goal of this prototype is to extend the features of the static poster by taking advantage of a large dynamic display. Further capabilities of the dynamic display are map panning and zooming. There are also navigational route overlays used to show a particular route directly on the map.

The UI of the static, printed poster prototype; the UI of the dynamic, projected display prototype

Novel Interfaces for Digital Cameras and Camera Phones Christopher McAdam, Craig Pinkerton, and Stephen A. Brewster, University of Glasgow, Glasgow, Scotland

Camera phones are now very common, but there are some usability issues that affect their use. These can occur because users look at the LCD to frame the image and often miss the icons displayed around its edges that present important information about the status of the camera. This may lead to shots being missed or poorly exposed. Most camera phones also do not take full advantage of the features of the underlying phone platform to enhance their interfaces. The researchers created a camera application for the Nokia N95 that featured novel interface elements and made use of the features of the platform to provide a rich variety of information in more usable forms, such as sonifications of the luminance histogram to ensure better exposure before a picture is taken; phone orientation to give a level indicator to ensure the camera is straight; measuring phone movement to ensure the phone is being held steady; and the detection of image motion to support panning. They also present a scenario for how these features could be used in conjunction with each other during the photo-taking process.

A common problem when taking photographs is that it is difficult to align the camera perfectly with the horizon (the same problem occurs when trying to align with the vertical). As a result, photos often come out at a slight angle, and people who do not edit their photos after taking them may simply discard such shots. To combat this the researchers developed the Levelometer, which presents the user with feedback on the phone’s orientation, using its built-in 3-axis accelerometer, prior to taking a photo. This type of display is useful when the user is taking a picture with the phone held above head height, for example at a concert. The user is still able to see the display, as the phone would be held slightly forward, and not directly above the head. However, the small screen is harder to see clearly at arm’s length, so having a clear visual indicator appear when the phone is level is advantageous in this situation: it is very difficult to tell whether the phone is being held level when the screen is hard to see. With this new interface the user can watch for the red line to extend across the display and then take the shot, knowing it will be straight. Since this work was completed, two high-end digital SLR cameras that include a similar feature have been released (the Nikon D3 and Canon 7D).

Levelometer with the red line extended across the display, indicating that the phone is correctly aligned with the horizon
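The computation behind such a level indicator is straightforward. Below is a minimal Python sketch, not the authors’ code: it assumes a particular axis convention (gravity mostly on the z axis when the phone is held level) and an invented 1° tolerance for drawing the full-width red line.

    import math

    def roll_degrees(ax: float, ay: float, az: float) -> float:
        """Estimate the phone's tilt relative to the horizon from a
        3-axis accelerometer reading (units cancel, so raw counts work)."""
        return math.degrees(math.atan2(ax, math.hypot(ay, az)))

    def is_level(ax: float, ay: float, az: float, tolerance_deg: float = 1.0) -> bool:
        # Extend the red line across the display only within tolerance.
        return abs(roll_degrees(ax, ay, az)) <= tolerance_deg

    print(roll_degrees(0.05, 0.0, 0.99))   # about 2.9 degrees off level
    print(is_level(0.005, 0.0, 1.0))       # True -> show the level indicator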

Zooming Interfaces for Augmented Reality Browsers Alessandro Mulloni, and Dieter Schmalstieg, Graz University of Technology, Graz, Austria Andreas Dünser, University of Canterbury, Christchurch, New Zealand

Augmented reality combines real world and virtual information in interactive visualizations. Since phones started integrating GPS, compasses and accelerometers, several augmented reality browsers for phones have hit the market. These are applications that access large amounts of geo-referenced information from online sources and present it at corresponding physical locations, superimposed onto a live video stream. However, augmented reality is constrained by the camera’s field of view and restricted to first-person views, limiting the amount of overview that users can gain. The researchers present two zooming interfaces that compensate for these constraints by enabling users to smoothly zoom between the augmented reality view and either an egocentric panoramic view of 360°, or an exocentric top-down view. They present the results from two studies that show how in most search tasks these zooming interfaces are faster and require less panning than an overlay-based tool, scaling better as the amount of information grows.

The proposed zooming interfaces, as the user turns the camera to the right. The researchers propose an egocentric zoom that increases the field of view up to 360° (top) and an exocentric zoom that gives the user a top-down view onto the information (bottom).

Panning and zooming in the proposed zooming interfaces. (A) Panning is performed by physically turning the camera phone. (B) Zooming the panorama is triggered by the zoom buttons of the camera phone. Zooming the map is triggered by tilting the phone (C) down for the exocentric view or (D) up for the egocentric view.

RearType: Text Entry Using Keys on the Back of a Device James Scott, and Shahram Izadi, Microsoft Research, Cambridge, England Leila Sadat Rezai, RWTH, Aachen, Germany Dominika Ruszkowski, Xiaojun Bi, and Ravin Balakrishnan, University of Toronto, Toronto, Ontario

RearType is a text input system for mobile devices such as tablet PCs, using normal keyboard keys but on the reverse side of the device. The standard QWERTY layout is split and rotated so that hands gripping the device from either side have the usual keys under the fingers. This frees up the front of the device, maximizing the use of the display for visual output, eliminating the need for an onscreen keyboard and the resulting hand occlusion, and providing tactile and multi-finger text entry – with potential for knowledge transfer from QWERTY. Using a prototype implementation, which includes software visualization of the keys to assist with learning, the researchers conducted a study to explore the initial learning curve for RearType. With one hour’s training, RearType typing speed averaged 15 wpm and was not statistically different from that of a touch-screen keyboard.

RearType prototype front (left), with thumb buttons, showing study software including semi-transparent visualization (the “h” key has just been pressed), and back (right), with keys in rotated QWERTY layout (fingers lifted from home keys for better view)
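To illustrate the split-and-rotate idea, the left half of QWERTY can be assigned column by column to the left hand’s fingers on the device’s back. The mapping below is a toy Python reconstruction, not the prototype’s actual key wiring, and the finger/row naming is invented:

    # Illustrative only: left half of QWERTY, one column per finger
    # (the index finger also reaches the t/g/b column on a full keyboard).
    LEFT_REAR_KEYS = {
        ("pinky",  "top"): "q", ("pinky",  "home"): "a", ("pinky",  "bottom"): "z",
        ("ring",   "top"): "w", ("ring",   "home"): "s", ("ring",   "bottom"): "x",
        ("middle", "top"): "e", ("middle", "home"): "d", ("middle", "bottom"): "c",
        ("index",  "top"): "r", ("index",  "home"): "f", ("index",  "bottom"): "v",
    }

    def key_pressed(finger: str, row: str) -> str:
        """Rear keys keep their QWERTY finger assignments, so typing
        knowledge transfers even though the keys face away from the user."""
        return LEFT_REAR_KEYS[(finger, row)]

    print(key_pressed("middle", "home"))  # 'd', same finger and row as on a desktop QWERTY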

Mobile and Physical User Interfaces for NFC-based Mobile Interaction with Multiple Tags Gregor Broll, DOCOMO Euro-Labs, Munich, Germany Doris Hausen, Ludwig-Maximilians-Universität (LMU), Munich, Germany

Near field communication (NFC) is an emerging technology for mobile interaction with everyday objects and associated digital resources. Apart from simple interactions with single tags, NFC has the potential for more elaborate interactions with physical objects that comprise multiple tags and serve as physical user interfaces (UIs). This paper investigates the design of mobile and physical UIs for interaction with multiple NFC tags. It focuses on three basic interactions that qualify for multi-tag interaction: navigation between parts of an application, selection of items, and combination of items. Two user studies compare different configurations of mobile and physical UIs for these interactions in order to evaluate the allocation of application features and UI elements to mobile devices and tagged objects. The results advocate continuous interaction on the latter, instead of splitting interactions between mobile and physical UIs.

The first study investigated mobile and physical UIs for navigation and the selection of items, which often complement each other in mobile applications. The evaluation is based on a use case for browsing different categories and selecting items from them. Its interaction workflow serves as the blueprint for the design, implementation and comparison of four prototypes whose UI configurations map features and UI elements for navigation and selection to mobile and physical UIs in different ways. The use case is inspired by applications for ordering food and involves both navigation and selection: users can browse different categories (appetizer, main course, dessert and drinks) and select items from them.

Single-tag interaction (STI) maps both the navigation and the selection of items to the mobile UI. Users start the mobile application by touching an NFC-tag on a leaflet (Figure 1a). All further interactions are performed on the mobile device including the linear traversal of the four category-screens using the left and right soft keys, the selection of items from a list of radio buttons for each category (Figure 1b) and submitting the order from the summary (Figure 1c). Multi-tag interaction (MTI) maps the navigation to the physical UI and the selection of items to the mobile UI. It comprises six tags for starting the application, switching between the four categories and submitting the order (Figure 2). Users can touch the tags to perform actions or to access screens for each category on the mobile device. Users can access them arbitrarily and not in a fixed, linear order. The selection of items is performed on the mobile device, where users can select them from a list of radio buttons for each category.

Figure 1: Tagged leaflet to start the mobile application (a), the screen for selecting items from a category (b) and the summary (c)

Figure 2: Tagged physical UI for navigation via MTI

Interacting With the Flow Janne Bergman, and Janne Vainio, Nokia Research Center, Tampere, Finland

Mobile devices pose challenges for UI design. Limited screen space leads to deep menus, complex navigation and loss of position. Nokia introduces a new user interface concept that reverses the traditional navigation paradigm. By utilizing context awareness and allowing the user to control the UI via filters, objects of interest navigate past the user instead of the user navigating to the object. The user operates on a single view without the need for deep menu navigation. The new UI is also easy to configure. The researchers implemented the concept on the Nokia S60 5th edition touch platform and conducted user testing with 16 users. Initially, users felt confused by the new ways of accessing things. However, after a short period of usage, a majority of the users found it easy to use, and most found the system fun and playful.

The researchers have currently implemented simple text filtering in which the user can open a transparent keypad on top of the flow and select the first letters of the desired object’s name, thus limiting the flow to objects matching those letters, as in the illustration. The keypad used for filtering shows the full alphabet at first, but shrinks as the number of possible matches decreases.

Left: text filtering. Right: object’s context menu.
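The filtering behavior lends itself to a compact sketch. The Python fragment below (object names and function invented for illustration) shows both effects described above: the typed prefix narrows the flow, and the set of possible next letters tells the shrinking keypad which keys to keep.

    OBJECTS = ["Anna", "Antti", "Arcade", "Browser", "Calendar", "Camera"]

    def filter_flow(prefix: str):
        """Return the objects still in the flow and the letters the
        shrinking keypad should continue to display."""
        matches = [o for o in OBJECTS if o.lower().startswith(prefix.lower())]
        next_letters = {o[len(prefix)].lower() for o in matches if len(o) > len(prefix)}
        return matches, sorted(next_letters)

    print(filter_flow("a"))    # (['Anna', 'Antti', 'Arcade'], ['n', 'r'])
    print(filter_flow("an"))   # (['Anna', 'Antti'], ['n', 't'])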

User-Defined Gestures for Connecting Mobile Phones, Public Displays, and Tabletops Christian Kray, Daniel Nesbitt, and John Dawson, Newcastle University, Newcastle-upon-Tyne, England Michael Rohs, Deutsche Telekom Laboratories, Berlin, Germany

Gestures can offer an intuitive way to interact with a computer. In this paper, the researchers investigate whether gesturing with a mobile phone can help to perform complex tasks involving two devices. They present results from a user study in which they asked participants to spontaneously produce gestures with their phone to trigger a set of different activities. They investigated three conditions (device configurations): phone-to-phone, phone-to-tabletop, and phone-to-public-display. They report on the kinds of gestures observed as well as on feedback from the participants, and provide an initial assessment of which sensors might facilitate gesture recognition in a phone. The results suggest that phone gestures have the potential to be easily understood by end users and that certain device configurations and activities may be well suited for gesture control.

Three example gestures observed during the study – images are taken from video footage (time progresses from left to right). The three rows depict gestures observed frequently. Top row (a-c): “pull away” gesture performed in the phone-to-phone condition to abort a running activity. Second row (d-f): “pointing gesture” performed in the phone-to-tabletop condition to send an item to the tabletop. Third row (g-i): “touch gesture” performed in the phone-to-public display condition to trigger synchronization.

Pressure-Based Menu Selection for Mobile Devices Graham Wilson, Craig Stewart, and Stephen A. Brewster, University of Glasgow, Glasgow, Scotland

Despite many successes in desktop applications, little work has looked at the use of pressure input on mobile devices and the different issues associated with mobile interactions e.g. non-visual feedback. This study examined pressure input on a mobile device using a single force sensing resistor (FSR) with linearized output as a means of target selection within a menu, where target menu items varied in size and location along the z-axis. Comparing visual and audio feedback, results showed that, overall, eyes-free pressure interaction reached a mean level of 74% accuracy. With visual feedback mean accuracy reached 85%. Participants could accurately distinguish up to 10 pressure levels when given adequate feedback indicating a high level of control.
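For readers wondering how ten distinguishable levels can be made stable, here is one plausible scheme in Python (the hysteresis band and API are assumptions, not the study’s implementation): the linearized FSR reading is quantized into one band per menu item, and the current band is widened slightly so the selection does not flicker at boundaries.

    class PressureMenu:
        def __init__(self, levels: int = 10, hysteresis: float = 0.02):
            self.levels = levels          # one level per menu item
            self.hysteresis = hysteresis  # widening of the current band
            self.current = 0

        def update(self, pressure: float) -> int:
            """Map a linearized FSR reading in [0, 1] to a menu level."""
            lo = self.current / self.levels - self.hysteresis
            hi = (self.current + 1) / self.levels + self.hysteresis
            if not (lo <= pressure < hi):
                # Reading left the (widened) current band: re-quantize.
                self.current = min(int(pressure * self.levels), self.levels - 1)
            return self.current

    menu = PressureMenu()
    for p in (0.05, 0.15, 0.31, 0.305, 0.72):
        print(menu.update(p))   # 0, 1, 3, 3, 7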

Timbremap: Enabling the Visually-Impaired to Use Maps on Touch-Enabled Devices Jing Su, Alyssa Rosenzweig, Ashvin Goel, Eyal de Lara, and Khai N. Truong University of Toronto, Toronto, Ontario

Mapping applications on mobile devices have gained widespread popularity as a means for enhancing user mobility and ability to explore new locations and venues. Visually impaired users currently rely on computer text-to-speech or human-spoken descriptions of maps and indoor spaces. Unfortunately, speech-based descriptions are limited in their ability to succinctly convey complex layouts or spatial positioning. This paper presents Timbremap, a sonification interface enabling visually impaired users to explore complex indoor layouts using off-the-shelf touch-screen mobile devices. This is achieved using audio feedback to guide the user’s finger on the device’s touch interface to convey geometry. A user study evaluation shows Timbremap is effective in conveying non-trivial geometry and enabling visually impaired users to explore indoor layouts.

The researchers chose the iPhone as the mobile device platform for two reasons. First, at the time, the iPhone was the only available smart phone featuring a capacitive touch screen with support for multiple simultaneous touches. The application is implemented as a native iPhone application, written in Objective-C. Storage for reading shapes and logging data is handled using the SQLite3 library provided as part of the iPhone application API. For aesthetics, the iPhone is designed with many smooth surfaces. While visually appealing, this results in a device that can be slippery to hold and lacking in tactile feedback for certain physical features. To mitigate this, they wrapped the iPhone in an after-market silicone “glove”. The silicone border enabled users to feel the boundaries of the touch screen area. This was important since the top and bottom edges of the touch screen area are smooth with respect to the earpiece and home button areas. The non-slip silicone glove also enables users to use both hands to interact with the iPhone on a tabletop without the device sliding around.

iPhone and sample tile. Left: iPhone in silicone glove. Right: tile dimensions match the device’s screen dimensions.
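The core of the sonification loop can be suggested with a small sketch. The Python below is illustrative only (geometry, thresholds and cue names are invented; the real system runs in Objective-C on the device): it computes the finger’s distance to a corridor polyline and picks the audio cue that steers the finger back onto the path.

    import math

    def dist_point_segment(p, a, b):
        """Shortest distance from point p to the segment a-b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def audio_cue(finger, corridor, on_path_px=12):
        d = min(dist_point_segment(finger, a, b)
                for a, b in zip(corridor, corridor[1:]))
        return "steady tone (on path)" if d <= on_path_px else "warble (off path)"

    corridor = [(0, 0), (100, 0), (100, 80)]    # an L-shaped hallway
    print(audio_cue((50, 5), corridor))         # steady tone (on path)
    print(audio_cue((50, 40), corridor))        # warble (off path)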

SIGGRAPH July 25-29, 2010, Los Angeles, California

In the second of four reports, Phillip Hill covers papers from Immersion SAS, The University of Tokyo, University of Tsukuba (x2), The University of Tokyo/Keio University (x2), Gdansk University of Technology, Hong-Ik University/Yonsei University, National Taiwan University, Yonsei University, University of Minnesota, and Tokyo Metropolitan University/National Institute of Special Needs Education/Japan Electronics College

Z-touch: A Multi-touch System for Detecting Spatial Gestures near the Tabletop Yoshiki Takeoka, Takashi Miyaki, and Jun Rekimoto, The University of Tokyo, Tokyo, Japan

The paper introduces Z-touch, a multi-touch table that can detect spatial hand gestures near the tabletop surface. This system uses a combination of a multi-layered infrared laser plane and a high-speed camera. Multi-layered IR line laser modules are synchronized with the shutter signal of the high-speed camera. This system allows user interaction on the basis of the posture of the user’s fingers near the tabletop surface without the use of special markers or devices on the fingers.

These two pictures were captured simultaneously. (a) Tabletop of Z-touch. Multi-layered line laser modules are placed at the edge of the tabletop panel. (b) Captured depth map of the hand near the top panel of Z-touch shown in (a). RGB channels correspond to the laser planes: R-highest, G-middle, B-lowest (very close to the surface of the tabletop). The image at the bottom of (b) is that of the thumb not touching the surface, so the silhouette is a pale red color.
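Since each color channel of the composite frame stands for one laser plane, reading out a hover height amounts to checking which channels a silhouette appears in. A minimal per-pixel sketch in Python follows (the threshold and labels are invented; the actual system works on whole silhouettes, not single pixels):

    def height_class(r: int, g: int, b: int, threshold: int = 128) -> str:
        """One pixel of the composite frame: B is the plane closest to
        the surface, R the highest, so the lowest lit channel wins."""
        if b >= threshold:
            return "touching (lowest plane)"
        if g >= threshold:
            return "hovering (middle plane)"
        if r >= threshold:
            return "approaching (highest plane)"
        return "no hand"

    print(height_class(200, 180, 150))   # touching (lowest plane)
    print(height_class(200, 40, 10))     # approaching (highest plane)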

3D Multi-touch: When Tactile Tables Meet Immersive Visualization Technologies Jean-Baptiste de la Rivière, Cédric Kervégant, Nicolas Dittlo, Mathieu Courtois, and Emmanuel Orvain Immersion SAS, Bordeaux, France

Multi-touch tactile input, while having been in the research labs for quite some time, is just reaching the general public through well-known interfaces like mobile phones or multi-touch tables. The technology, when used in the right context in the right way, is known to provide an intuitive manipulation of the synthetic – mostly 2D – content that is displayed on its surface. Likewise, means to display stereoscopic images have been known for decades now, but are only just beginning to be understood and accepted by the general public thanks to very well received movies and all the announcements around soon-to-be-released 3D television. When rendered properly, stereoscopic displays indeed offer a huge sense of depth to previously flat images. Furthermore, combining stereo rendering with viewpoint tracking definitely provides an immersive visualization of 3D models. The combination of the two technologies introduces specific constraints that have not yet been dealt with nor extensively studied. This paper presents the company’s current work, including the new issues and preliminary developments.

Trying to take multi-touch tables from 2D interaction to an efficient and collaborative 3D immersive interface, Immersion built a two-user multiview multi-touch table. Two-user multiview, which ensures each user has their own perspective-correct viewpoint on the same 3D model, is obtained by combining active and passive stereo technologies and using six-DOF sensors that help to retrieve the position and orientation of each user’s head. The focal plane has been set right on the tactile surface. Negative parallax, which leads to the most impressive stereo visualization with the 3D content coming out of the display, is used as long as no hand approaches the tactile surface. When a finger touches the surface, the model may slowly be lowered to obtain a positive parallax, with the 3D content located right inside the table. Thanks to the diffuse infrared illumination that multi-touch video analysis algorithms rely on, the system is also able to detect features beyond the simple 2D contact points, and can therefore detect a hand’s presence even before fingers touch the surface and break the depth perception. While image-plane interaction techniques would work quite well for a single user, they would hardly take into account the two users’ opposite viewpoints and the stereo positive parallax. Immersion therefore chose to experiment with shooting virtual rays orthogonal to the tactile surface from each finger in contact, which currently seems to strengthen the fingers’ presence within the virtual world and their relationship with 3D objects.
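The orthogonal-ray test is simple to sketch. In the Python fragment below (scene contents and names are invented), a ray cast straight down the z axis from a touch point hits a spherical object exactly when the touch lies inside the sphere’s footprint on the surface:

    import math

    def ray_hits_sphere(touch_xy, center, radius):
        """Ray origin (x, y, 0), direction (0, 0, -1): the ray hits the
        sphere iff the touch lies within the sphere's projected circle."""
        dx = touch_xy[0] - center[0]
        dy = touch_xy[1] - center[1]
        return math.hypot(dx, dy) <= radius

    # Hypothetical scene: object name -> (center inside the table, radius)
    scene = {"engine": ((0.30, 0.40, -0.20), 0.10),
             "wheel":  ((0.70, 0.55, -0.35), 0.08)}

    touch = (0.32, 0.43)
    hits = [name for name, (c, r) in scene.items() if ray_hits_sphere(touch, c, r)]
    print(hits)   # ['engine']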

Gesture Controlled Interactive Whiteboard Based on SVM and Fuzzy Logic Michal Lech, and Bozena Kostek, Gdansk University of Technology, Gdansk, Poland

Nowadays, one of the main focuses of the human-computer interaction area is controlling computers by gestures. Various gesture types provide means of controlling user interfaces and applications. However, most approaches involve a front-facing camera, with the user’s gestures recognized against a static background; in addition, colorful gloves, gloves with motion sensors, or infrared diodes are often used for this purpose. The approach presented here utilizes a camera placed behind the user, coupled with a multimedia projector. Gestures are recognized against a variable background. No special manipulators or infrared lights are necessary. To present the possibilities of such a setup, an interactive whiteboard application was developed. Besides the basic functionalities of writing, deleting content, etc., the user can load images of various types and draw on them. The system recognizes gestures enabling the user to rotate, zoom, or browse the images. Each gesture can be performed in very close proximity to the whiteboard (e.g. writing) or from a distance (e.g. for image handling). Since multimedia projectors are often standard equipment in classrooms, the whiteboard presented may be a cheaper solution in comparison with existing interactive whiteboards based on projectors, special frames with sensors, and electronic pens.

The picture displayed by the multimedia projector (a), the corresponding frame from the camera with the perspective corrected (b), and the result of image subtraction after applying image processing methods (c)
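A rough reconstruction of that pipeline, using OpenCV, might look as follows; the corner-finding step, threshold value and function names are assumptions, not the authors’ code.

    import cv2
    import numpy as np

    def user_silhouette(camera_frame, projected_image, corners):
        """corners: the projected screen's four corners in the camera image,
        ordered top-left, top-right, bottom-right, bottom-left."""
        h, w = projected_image.shape[:2]
        dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        # (b) correct the camera's perspective so frames align pixel-for-pixel
        M = cv2.getPerspectiveTransform(np.float32(corners), dst)
        rectified = cv2.warpPerspective(camera_frame, M, (w, h))
        # (c) subtract what the projector is known to be displaying
        diff = cv2.absdiff(cv2.cvtColor(rectified, cv2.COLOR_BGR2GRAY),
                           cv2.cvtColor(projected_image, cv2.COLOR_BGR2GRAY))
        _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
        return mask   # white where the user occludes the projected picture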

Colorful Touch Palette Yuki Hirobe, Shinobu Kuroki, Katsunari Sato, and Takumi Yoshida, The University of Tokyo, Tokyo, Japan Kouta Minamizawa, and Susumu Tachi, Keio University, Tokyo, Japan

Previously, pictures were painted using tools such as crayons or even by hand, and surfaces such as canvases or walls provided the tactile sensations of the drawing surface while painting. However, this tactile experience has been lost with the advance of computer graphics software, and a conventional multi-touch interface cannot provide tactile sensation. The researchers propose a novel interactive painting interface called “Colorful Touch Palette” that may help us to rediscover our creativity. The user can touch the canvas, which carries the electrode, select or blend tactile textures of their choice, draw a line, and experience the tactile sensations of painting, as shown in Figure 1. Various tactile textures can be created by blending textures like paints. This interface can be used to design complex spatial tactile patterns for haptic-friendly products. Moreover, this system can potentially be used to create novel tactile paintings. The system is composed of a “Tactual Cap”, “Touch Palette” and “Texture Canvas”. The “Tactual Cap” is a cap-shaped device, as shown in Figure 2, that consists of a high-density electrode matrix for providing tactile feedback, a pressure sensor for estimating the contact state of the fingertip, and tracking markers for detecting the fingertip posture. The “Touch Palette” is a palette-shaped display that consists of a touch panel. Several types of visuo-tactile textures called “tactile colors” are arranged as shown in Figure 3. The “Texture Canvas” consists of two types of canvases – a 2D canvas and a 3D canvas, as shown in Figure 4. The 2D canvas consists of a touch panel monitor mounted on an easel and is used for painting a tactile picture. The 3D canvas consists of a solid screen, a projector, and motion tracking cameras, and is used for surface prototyping. The user blends the tactile colors together on the Touch Palette using a finger with the Tactual Cap, which creates various tactile colors. Then the user touches and draws a visuo-tactile painting on the Texture Canvas at will, with tactile feedback. The user can appreciate the visuo-tactile picture he/she painted on the basis of the tactile feedback.

Photochromic Canvas: Drawing with Patterned Light Tomoko Hashida, and Takeshi Naemura, The University of Tokyo, Tokyo, Japan Yasuaki Kakehi, Keio University, Tokyo, Japan

Drawing tools using digital technology can stimulate creativity in people. For example, the Wacky Brush tool in KidPix can produce effects (such as a line of dripping paint or a line of shapes) that cannot be obtained using ordinary paper and brushes [Hickman 1991]. This feature makes it easy for people to draw pictures having a combination of patterns. Such software, however, has so far been used only with electronic displays such as LCDs and PDPs. In this paper, the researchers propose a mechanism that would allow the user to draw such pictures while using paper as a canvas instead of electronic displays. With this mechanism, a variety of patterns can be made to appear along lines traced out by the user by moving an electronic paint brush over paper. The advantages of using paper in this way include a high degree of freedom in shape and size as well as portability.

To achieve these objectives, they focused their attention on photochromic material in which the appearance and disappearance of color can be controlled by light. In this regard, KAMI CHAT has been proposed as a means of controlling the generation of color by placing paper coated with photochromic material over an array of LEDs [Osada et al. 2008]. Here they adopt photochromic material that can maintain a color-forming state and propose a system that does not place limitations on where the paper is placed. In this system, moving a handheld projector like a paint brush over paper coated with this photochromic material results in a reaction between the material and the projected light and a change in paper color. Effects like those of Wacky Brush can be obtained by projecting patterned light from the handheld projector. They call their system “Photochromic Canvas”.

Left: Writing with light. Center: Effects of patterned light. Right: Effects of periodic light.

How People Tend to Organize Sensory Information into Unified Wholes in Haptic Phone? Ji-Hye An, Hong-Ik University, Seoul, South Korea Su-Jin Lee, Yonsei University, Seoul, South Korea

Considerations of interface design have traditionally been limited to the senses of sight and hearing. However, as the sense of touch, through technologies such as haptics, began to be applied to equipment, new interactions emerged. Because people perceive in an integrated way, it is important for a new system that adds tactile stimuli to correctly analyze and understand users’ experiences. This study analyzes integrated cross-modality user experiences from devices providing information through the senses of sight, hearing, and touch.

Since people integrate information easily, without extra cognitive processes, users might consider cross modality a simple phenomenon; in fact, cross modality is not simple and does not always occur in the same way. The study determined that cross modality phenomena strongly influence the emotional level of users, owing to the significant increase of such phenomena in information devices providing tactile sense and active interaction between private devices and users. This is because tactile information is actively introduced through the touch function of recent information devices. In particular, mobile phones, which users carry with them all the time, fit well with the characteristics of tactile sense, which is sensitive and privatized sensory information. In the transformation phenomenon, the audio and visual senses intervene as the main stimuli while the tactile sense intervenes as a sub-stimulus amplifying the main stimuli.

The interaction between cross-modality and sensory information

LLP+: Multi-Touch Sensing Using Cross Plane Infrared Laser Light for Interactive Based Displays Jae-Hee Park, and Tackdon Han, Yonsei University, Seoul, South Korea

Multi-touch sensing exists in a number of applications and is presently used in personal computing devices (i.e. laptops and desktop computers), mobile touch screens, kiosks, interactive wall displays (e.g. subway station maps), ATMs, and any display requiring an interactive platform. Current multi-touch sensing methods use capacitive- or resistive-based touch screens, both of which are expensive and difficult to make. Infrared-based touch screens are being studied as an alternative method, an effective and low-cost way of producing equivalent results, particularly with large interactive displays.

Previous research on infrared-based touch screens has explored several implementations, most notably Jefferson Han’s FTIR [Han 2005]. FTIR does not suffer from occlusion, but cannot detect objects, and is well suited for detecting fingers and other point-based touch inputs. Microsoft’s Surface uses a diffuse infrared light source but does not do well in high ambient light. Another technology uses an infrared light plane on top of the interaction surface but suffers from occlusion when objects are placed in front of the touch input. This new approach combines two technologies to effectively eliminate the limitations imposed on the touch screen when only one implementation is used, and reduces design complexity and cost.

(a) LLP+ showing touch input detection without occlusion. (b) An occluding object (hazard) in front of touch input (finger). (c) Two planes of infrared light. The x-laser plane detects objects without occlusion while the y-laser plane detects touch input points with an object hazard.

AirTiles: Modular Devices to Create a Flexible Sensing Space Kazuki Iida, Junki Ikeuchi, Toshiaki Uchiyama, and Kenji Suzuki, University of Tsukuba, Tsukuba, Japan

There are a number of systems and devices for spatial measurement that can be used to measure body motion. However, these systems are usually large or fixed to the environment. In this study, the researchers propose a novel modular device that allows users to create geometric shapes in the real world, and a flexible sensing space within the created shape. Users can freely place the devices and rotate them so that a geometrical shape appears on the floor. As the modular devices and the emitted laser beams represent the corners and sides of the shape, respectively, the device does not obscure figures or textures on the ground. The developed system provides an interactive visual/audio environment, as if children could make a hand drawing on the ground or floor and play with the created shapes.

AirTiles: spatial measurement can be done within the geometrical shapes created by the small, lightweight modules placed on the ground. Users can easily modify the shape, which corresponds to the sensing region.
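Once the modules’ positions are known, the region they enclose is just a polygon, and asking whether a tracked point lies inside it is a standard ray-casting test. A small Python sketch follows (module coordinates invented):

    def inside(point, corners):
        """Ray-casting point-in-polygon test: count crossings of a
        horizontal ray from the point against each polygon edge."""
        x, y = point
        hit = False
        n = len(corners)
        for i in range(n):
            (x1, y1), (x2, y2) = corners[i], corners[(i + 1) % n]
            if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                hit = not hit
        return hit

    modules = [(0, 0), (2, 0), (2, 2), (0, 2)]   # four tiles forming a square
    print(inside((1.0, 1.0), modules))           # True: inside the sensing space
    print(inside((3.0, 1.0), modules))           # False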

beacon 2+: Networked Socio-Musical Interaction Takahiro Kamatani, Toshiaki Uchiyama, and Kenji Suzuki, University of Tsukuba, Tsukuba, Japan

This work proposes an environment for socio-musical interaction where people can generate sounds and play music with their feet in collaboration with each other. The researchers have been developing a new musical interface, beacon 2+. Each beacon produces laser beams lying on the ground and rotating around it. Audio sounds are then produced when the beams pass an individual performer’s foot. As the users are able to change the sound pitch and length according to the foot location and angle facing the instrument, their bodily motion and foot behavior can be translated into sound and music in an intuitive manner. In addition, two different types of beacons are connected via a network. Throughout the performance, walking, dancing and stepping around the devices, users can feel and make their presence felt in a distant place, and they can easily and intuitively understand how it works.

beacon 2+ is a new interface for networked socio-musical interaction

Thermal Design Display Device to use the Thermal Tactile Illusions “Thermo-Paradox” Kumiko Kushiyama, and Tetsuaki Baba, Tokyo Metropolitan University, Tokyo, Japan; Kouki Doi, National Institute of Special Needs Education, Yokosuka, Japan; Shinji Sasada, Japan Electronics College, Tokyo, Japan

“Thermo-Paradox” is a thermal design display device that uses thermal tactile physiological illusions and can interactively present patterns of warm and cool temperatures. The technological success of a compact 80-pixel, 9-inch thermal display allows text information to be conveyed by temperature, which has never before been achieved, and the device’s compactness increases the degree of freedom in presentation methods. The researchers propose this unprecedented tactile expression as a device that can display thermal images that interactively match a visual image, using the tactile paradoxical sensation produced by the ability to control the temperature of each pixel.

The image generator uses the data sent from the event module to generate a 3D image in real time with OpenGL. What kind of sensation results when the middle finger senses cold and the index finger senses warmth? The image and thermal display are controlled simultaneously to create sensations of moist and dry, bumps and dents on the surface and other such tactile effects in high-speed real time using vision and the thermo-tactile illusions produced by touching the warm and cool stripes.

Image generation (left column), thermal camera image (right column)

Beyond the Surface: 3D Interactions for Tabletop Systems Li-Wei Chan, Hsiang-Tao Wu, Hui-Shan Kao, Home-Ru Lin, Ju-Chun Ko, Mike Y. Chen, Jane Hsu, and Yi-Ping Hung, National Taiwan University, Taipei, Taiwan

Current tabletop systems are designed to sense 2D interactions taking place on the tabletop surface, such as finger touches and tangible objects. The ability to interact above the tabletop surface makes it possible to support 3D interactions. For example, an architect can examine a 2D blueprint of a building shown on the tabletop display while inspecting 3D views of the building by moving a mobile display above the tabletop. Recent approaches to localizing objects in 3D require visible markers or the use of embedded sensors [Song et al. 2009]. The use of visible markers often interferes with the content users are focusing on, limiting their usefulness and applicability. In this work, the researchers propose a new 3D tabletop system that overcomes this limitation by using invisible markers. They combine an infrared projector and a regular color projector to simultaneously project visible content with invisible markers. Embedded IR cameras are used to localize objects above the tabletop surface. In addition, they use programmable marker patterns to refine object localization both at far distance and close up.

(a) iView enables users to explore 3D views from different perspectives above the 2D table surface. (b) iLamp highlights and enhances the resolution of a particular region. (c) iFlashlight, a mobile version of iLamp, facilitates cooperative information exploration.

Shadow WIM: A Multi-Touch, Dynamic World-In-Miniature Interface for Exploring Biomedical Data Dane M. Coffey, and Daniel F. Keefe, University of Minnesota, Minneapolis/St. Paul, Minnesota

Advances in high-performance (supercomputer) simulations are revolutionizing biomedical research. The illustration shows a visualization of data from a cutting-edge computational fluid dynamics simulation of blood flow through a replacement heart valve. This work addresses the specific challenge of navigating through virtual reality visualizations that include complex anatomic geometries, such as the MRI-reconstructed geometry shown in the illustration. The researchers’ approach builds upon well-known world-in-miniature (WIM) techniques. The contributions of this work are extending scalable and scrollable WIMs to virtual environments that do not include a ground plane or a well-established default viewing orientation, developing a multi-touch interface to control the shadow WIM that they introduce, demonstrating this interface in a biomedical engineering application, and reporting on lessons learned. The system consists of a dual-surface, multi-touch, virtual reality environment. The first surface is a vertical stereo screen, the second a horizontal multi-touch table. In contrast to a typical WIM, which is a flat map, the shadow WIM has two parts: a shadow projection and the floating 3D geometry that casts it. The user interacts directly with the shadow, while the 3D geometry is useful, and only displayed, during reorienting operations. A WIM reorientation mode is used to change the orientation of both parts of the WIM (3D and shadow) simultaneously. The view definition mode is used to define the real-world view. Multi-touch gestures are used to seamlessly transition between the two modes. To reorient the WIM to create a useful interactive shadow, a single point of contact is used for film plane translation, and two for scaling and rotating. Then, in an extension to typical direct manipulation interfaces, the virtual environment can be tilted or rolled by moving two points of contact together in the y or x direction respectively, effectively changing the up direction of the environment. New global 3D views for the vertical display are created by defining a new film plane using the shadow WIM.
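The gesture mapping just described can be summarized in a small dispatcher. The following Python sketch is our paraphrase of the mapping in the text, not the authors’ code; the function and operation names are invented:

# Contact-count gesture mapping as described in the text (illustrative).
# 'contacts' is a list of (dx, dy) motion vectors, one per touch point.
def interpret_gesture(contacts):
    if len(contacts) == 1:
        return ("translate_film_plane", contacts[0])
    if len(contacts) == 2:
        (dx0, dy0), (dx1, dy1) = contacts
        if dx0 * dx1 + dy0 * dy1 > 0:            # fingers moving together
            if abs(dy0 + dy1) >= abs(dx0 + dx1):
                return ("tilt_environment", (dy0 + dy1) / 2)  # y direction
            return ("roll_environment", (dx0 + dx1) / 2)      # x direction
        return ("scale_and_rotate", contacts)    # pinch/twist between fingers
    return ("none", None)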

Exploring a time-varying visualization of simulated blood flow through a heart valve model. (a) The 3D anatomical model. (b) The shadow shows an orientation of the WIM that the user wants to change. (c) Using seamless multi-touch gestures, the user reorients the WIM, changing the shadow. (d) With the WIM set, the user defines a film plane, overlaid in blue, between the fingers.

Korea Display Conference July 8-9, 2010, Seoul, South Korea

Phillip Hill covers presentations from LG Electronics, and LG Innotek

Up-to-date Touch Phone Technology Trend Cho Young Min, LG Electronics, Seoul, South Korea

The presentation dealt with mobile market trends; touch phone evolution; touch market volume; touch applications; choosing a touch screen; touch phone design; and the touch screen roadmap. The presentation was almost entirely made up of graphs and photographs, some of which we reproduce here.

Left: Mobile phone TSP market forecast by technology (units). Right: Mobile phone TSP market forecast by technology (unit %).

Touch Panel and Touch Window Technology Trend LG Innotek, Seoul, South Korea

In another almost entirely visual presentation, the presenter covered a touch screen technology classification; touch screen technology details; touch technology trends; and LG Innotek’s capacitive touch module. For use in mobile applications, the resistive and capacitive types have been widely adopted, and the market share of capacitive touch screen technology is growing. There is no technology that meets all the conditions perfectly; the application of a technology varies depending on the use environments and conditions.

On LG Innotek’s capacitive module, the signals on the counter electrode, opposite the electrode driven with an alternating current (AC) signal, are altered when a finger touches the surface.

EuroITV Conference on Interactive TV and Video June 9-11, 2010, Tampere, Finland

Phillip Hill covers a paper from University Paul Sabatier

Possible Entry Barriers for Touch Interaction in the Living Room Michael Pirker, Regina Bernhaupt, and Thomas Mirlacher, University Paul Sabatier, Toulouse, France

Interaction with various forms of media is changing. Today, interaction technologies like touch, gesture, and speech are ready for deployment in consumer products. Yet although most of these technologies have been available for more than 20 years, their take-up in end-consumer products is limited. To investigate the problems in introducing touch interaction to the end-consumer market, an experimental comparison of traditional remote control interaction with a touch remote control was performed to investigate users’ perceived differences in usability and user experience. Finally, some insights on usability and user experience measurements as possible entry barriers for the market are presented.

The interactive TV system used in the study is called “vocomedia” and it offers features like an electronic program guide, time-shifting, video on demand, and photo and music libraries. One of its main features is the ability to personalize it with profiles for the different users of the system (e.g. a sorted channel list for each user) and also security features (e.g. locking certain channels for children’s profiles or limiting the daily viewing time for certain users). To access the user profiles, the system uses a fingerprint reader, which can also be used to access security-related functionalities like the filtering of content or the payment for video-on-demand movies.

The figure depicts some aspects of the iTV system. The system was chosen as it is fully functional in terms of user interaction, while also allowing the researchers to control the IR commands and inputs, enabling comparison of the two interaction models on a technical basis. The approach used in this usability and user experience study was to test and compare two remote controls with the same shape, size, weight and functionalities, but different interaction styles, on the same interactive television system. One of the remote controls offered normal “press button” interaction, while the other offered the possibility of touch interaction.

Remote controls used in the experiment: left, the standard remote control; right, the touch-sensitive version

Veritas et Visus (Truth and Vision) publishes a family of specialty newsletters about the displays industry:

Flexible Substrate, Display Standard, 3rd Dimension, High Resolution, and Touch Panel

http://www.veritasetvisus.com

Graphics Interface Conference May 31–June 2, 2010, Ottawa, Ontario

Phillip Hill covers papers from Clemson University/Greenville Technical College, Osaka University/Tohoku University, Queen’s University, Lockheed Martin Advanced Technology Laboratories/University of Washington, Microsoft Research/Microsoft Research Cambridge/Microsoft Corporation, Microsoft Research/University of Washington, and Inesc-ID/University of Calgary

Coupling Panning Operation with Zooming and Tilting Based on the Anchor Point on a Map Kazuyuki Fujita, Kazuki Takashima, Takayuki Tsukitani, Yuichi Itoh, and Fumio Kishino, Osaka University, Osaka, Japan; Yoshifumi Kitamura, Tohoku University, Sendai, Japan

The researchers propose two novel map navigation techniques, called Anchored Zoom (AZ) and Anchored Zoom and Tilt (AZT). In these techniques, the zooming and tilting of a virtual camera are automatically coupled with users’ panning displacements so that an anchor point determined by the user always remains in the viewport. This allows users to manipulate the viewport without mode-switching among pan, zoom, and tilt while maintaining a sense of distance and direction from the anchor point. The researchers conducted an experiment to evaluate AZ and AZT and compare them with Pan & Zoom (PZ) and Speed-dependent Automatic Zooming (SDAZ) in off-screen target acquisition tasks and spatial recognition tests. Results showed that the proposed techniques reduced the time to reach off-screen objects more than the competing techniques while maintaining users’ sense of distance and direction as well as PZ did.
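The summary above does not spell out the coupling function, but one plausible realization, sketched in Python under our own assumptions (not the paper’s formula), is to pick at each frame the largest scale at which the anchor still falls inside the viewport:

def coupled_scale(anchor, center, half_extent, margin=0.9):
    """Largest scale keeping 'anchor' on screen (illustrative).
    half_extent is half the viewport size, in world units, at scale 1.0;
    margin keeps the anchor slightly away from the very edge."""
    dx = abs(anchor[0] - center[0])
    dy = abs(anchor[1] - center[1])
    d = max(dx / half_extent[0], dy / half_extent[1])
    return min(1.0, margin / d) if d > 0 else 1.0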

Left: An example of viewport transition in AZ. Right: An example of viewport transition in AZT.

TouchMark: Flexible Document Navigation and Bookmarking Techniques for E-Book Readers Doug Wightman, Tim Ginn, and Roel Vertegaal, Queen’s University, Kingston, Ontario

The paper presents TouchMark, a set of page navigation techniques that preserve some of the physical affordances of paper books. TouchMark introduces physical tabs, one on each side of the display, to enable gestures such as page thumbing and bookmarking. TouchMark can be implemented on a variety of electronic devices, including tablet computers and laptops, by augmenting standard hardware with inexpensive sensors.

The researchers discuss a specific set of navigational flexing techniques, mostly focused on bookmarking. Bookmarks ease navigation between previously visited pages. According to previous research, an average of 10 bookmarks covers approximately 80% of document location revisitations. In the TouchMark design, the researchers limited the number of bookmarks that can be deployed, and provided a single physical means to access bookmarks. TouchMark supports the use of physical gestures such as thumbing to navigate to the previous or next page, and fingering to place and retrieve bookmarks in a document (see figure). It does this through two flexing touch sensors, or tabs, that are attached to the sides of the display and that mimic the feeling of a previous or next page. The left tab provides access to the previous page when touched from above and to the previous bookmark when touched from below. The right tab mirrors this behavior for forward navigation.
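The tab behavior reduces to a four-entry lookup; the Python sketch below simply restates the mapping from the text (names are ours):

ACTIONS = {
    ("left", "above"): "previous_page",
    ("left", "below"): "previous_bookmark",
    ("right", "above"): "next_page",
    ("right", "below"): "next_bookmark",
}

def on_tab_touch(tab, touched_from):
    # tab: "left" or "right"; touched_from: "above" or "below"
    return ACTIONS[(tab, touched_from)]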

Common physical navigation behaviors: thumbing to the next page, bookmarking a page with the fingers, and moving back and forth between two fingered bookmarks

Design and Evaluation of Interaction Models for Multi-touch Mice Hrvoje Benko, Andrew D. Wilson, Ken Hinckley, and Dan Rosenfeld, Microsoft, Redmond, Washington; Shahram Izadi, and Xiang Cao, Microsoft Research Cambridge, Cambridge, England

Adding multi-touch sensing to the surface of a mouse has the potential to substantially increase the number of interactions available to the user. However, harnessing this increased bandwidth is challenging, since the user must perform multi-touch interactions while holding the device and using it as a regular mouse. In this paper, Microsoft describes the design challenges and formalizes the design space of multi-touch mouse interactions. From their design space categories they synthesize four interaction models that enable the use of both multi-touch and mouse interactions on the same device. They describe the results of a controlled user experiment evaluating the performance of these models in a 2D spatial manipulation task typical of touch-based interfaces, comparing them to interacting directly on a multi-touch screen and with a regular mouse. They observed that their multi-touch mouse interactions were overall slower than the chosen baselines; however, techniques providing a single focus of interaction and explicit touch activation yielded better performance and higher preferences from the participants. The results expose the difficulties in designing multi-touch mouse interactions and define the problem space for future research in making these devices effective. The work explores the rich interaction space for multi-touch (MT) mice and makes three contributions. It first describes a set of design challenges in integrating touches from the MT mouse into a desktop user interface and categorizes them in a taxonomy. The researchers then use this taxonomy to identify four techniques for integrating multi-touch with the existing cursor-based model for interaction (see figure). Lastly, they report on a controlled user study that compares their four techniques against two real-world baseline interactions using different devices – a multi-touch screen and a regular mouse. The results show that MT mouse interactions were overall significantly slower than the existing baselines (by 27%); again, though, the techniques providing a single focus of interaction and explicit touch activation fared best. They synthesize the lessons from their observations, expose the benefits and shortcomings of the interactions given the chosen task, and highlight directions for future research on finding the appropriate tasks and interactions to make these devices effective.

The multi-touch mouse device in use and the corresponding cursor and the touch points in the interface

A Lightweight Multistroke Recognizer for User Interface Prototypes Lisa Anthony, Lockheed Martin Advanced Technology Laboratories, Cherry Hill, New Jersey; Jacob O. Wobbrock, University of Washington, Seattle, Washington State

With the expansion of pen and touch-based computing, new user interface prototypes may incorporate stroke gestures. Many gestures comprise multiple strokes, but building state-of-the-art multi-stroke gesture recognizers is non-trivial and time-consuming. Luckily, user interface prototypes often do not require state-of-the-art recognizers that are general and maintainable, due to the simpler nature of most user interface gestures. To enable easy incorporation of multi-stroke recognition in user interface prototypes, the paper presents $N, a lightweight, concise multi-stroke recognizer that uses only simple geometry and trigonometry. A full pseudo-code listing is given as an appendix. $N is a significant extension to the $1 uni-stroke recognizer, which has seen quick uptake in prototypes but has key limitations.

$N goes further by (1) recognizing gestures comprising multiple strokes, (2) automatically generalizing from one multi-stroke to all possible multi-strokes using alternative stroke orders and directions, (3) recognizing one-dimensional gestures such as lines, and (4) providing bounded rotation invariance. In addition, $N uses two speed optimizations: one based on start angles saves 79.1% of comparisons and increases accuracy by 1.3%; the other, which is optional, compares multi-stroke templates and candidates only if they have the same number of strokes, reducing comparisons by a further 89.5% and increasing accuracy another 1.7%. These results are taken from a study of algebra symbols entered in situ by middle and high schoolers using a math tutor prototype, on which $N was 96.6% accurate with 15 templates.
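To give a flavor of the start-angle optimization (the full pseudo-code is in the paper’s appendix; the index and threshold below are illustrative assumptions, not the published values):

import math

def start_angle(points, index=8):
    # angle from the first point toward a point a fixed index into the stroke
    (x0, y0) = points[0]
    (x1, y1) = points[min(index, len(points) - 1)]
    return math.atan2(y1 - y0, x1 - x0)

def worth_comparing(candidate, template, threshold=math.radians(30)):
    # skip the expensive point-by-point comparison if start angles disagree
    d = abs(start_angle(candidate) - start_angle(template))
    d = min(d, 2 * math.pi - d)   # wrap around the circle
    return d <= threshold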

Multistrokes from the HTML/JavaScript version of $N. Such strokes are kept simple for human memorability and performance.

Visuohaptic Borescope Inspection Simulation Training Deepak Vembar, Andrew Duchowski, Melissa Paul, and Anand Gramopadhye, Clemson University, Clemson, South Carolina; Carl Washburn, Greenville Technical College, Greenville, South Carolina

Results are presented from a transfer effects study of a visuohaptic borescope simulator developed for non-destructive aircraft inspection training. The borescope simulator supports multi-point collision detection to drive haptic feedback as the virtual probe slides along and collides with rigid surfaces. Such probe maneuvering is shown to be a significant aspect of the inspection task that benefits from training, regardless of whether a real probe or a virtual probe simulator is used to provide the training.

Visual comparison of intersection response of actual and virtual probe at 30 degree angle of incidence; inspection of PT-6 aircraft engine using actual borescope, with operator’s view of the camera output

Understanding Users’ Preferences for Surface Gestures Meredith Ringel Morris, and Andrew D. Wilson, Microsoft Research, Redmond, Washington State; Jacob O. Wobbrock, University of Washington, Seattle, Washington State

The researchers compare two gesture sets for interactive surfaces – a set of gestures created by an end-user elicitation method and a set of gestures authored by three HCI researchers. Twenty-two participants who were blind to the gestures’ authorship evaluated 81 gestures presented and performed on a Microsoft Surface. The findings indicate that participants preferred gestures authored by larger groups of people, such as those created by end-user elicitation methodologies or those proposed by more than one researcher. This preference pattern seems to arise in part because the HCI researchers proposed more physically and conceptually complex gestures than end-users did. The paper discusses the findings in detail, including the implications for surface gesture design.

After viewing, imitating, and rating all of the gestures for a particular command, participants were presented with a screen of thumbnail images depicting each of the proposed gestures, and were asked to select which one was the best for that command. This figure depicts the six alternatives shown for “zoom out”. Any gesture could be replayed at this stage.

A Comparison of Ray Pointing Techniques for Very Large Displays Ricardo Jota, and Joaquim A. Jorge, Inesc-ID, Lisbon, Portugal; Miguel A. Nacenta, Sheelagh Carpendale, and Saul Greenberg, University of Calgary, Calgary, Alberta

Ray-pointing techniques are often advocated as a way for people to interact with very large displays from several meters away. The researchers are interested in two factors that can affect ray pointing: the particular technique's control type, and parallax. Consequently, they tested four ray pointing variants on a wall display that covers a large part of the user's field of view. Tasks included horizontal and vertical targeting, and tracing. The results show that (a) techniques based on “rotational control” perform better for targeting tasks, and (b) techniques with low parallax are best for tracing tasks. They also show that a Fitts’s law analysis based on angles (as opposed to linear distances) better approximates people’s ray pointing performance.

The most common ray pointing variant is laser pointing. Here, the ray is specified directly by the position and direction of a physical device (see diagram). The device might or might not be an actual laser; in fact, the only requirement is that the computer system has a way of determining the intersection of the ray with the screen surface. For example, vision technology or special markers on the hand can recognize finger postures as a pointing device. Laser pointing has been proposed and implemented for cursor control in 2D interfaces many times. It is often referred to as distant pointing, remote pointing or virtual pointing. In this study, the researchers implement a laser pointer via an infrared-marked wand tracked in 6DOF.
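For readers unfamiliar with the angular formulation: a standard angular variant of Fitts’s law (stated here in general form, not quoted from the paper) replaces linear distance and width with the angle the ray must sweep to the target and the target’s angular width as seen from the user:

MT = a + b \log_2\left(\frac{\alpha}{\omega} + 1\right)

where \alpha is the angular distance to the target, \omega is the target’s angular width, and a and b are empirically fitted constants.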

Four variants of ray pointing

Conference on Human Factors in Computing Systems April 10-15, 2010, Atlanta, Georgia

In the third of four reports, Phillip Hill covers presentations from Autodesk Research/University of Maryland/Cornell University, Microsoft Research, Telecom ParisTech, National Taiwan University, EPFL Media and Design Lab, ASUSTek Computer Inc./Kun Shan University, Microsoft Research/Microsoft Corporation/University of Toronto, University of California, Deutsche Telekom Laboratories, London Knowledge Lab, Georgia Tech, Pompeu Fabra University, Ryerson University, Microsoft/SMART Technologies, The University of Iowa, UMBC, KAIST, Technische Universität Darmstadt, Orange Labs/ENSCI-Les ateliers, Korea Advanced Institute of Science and Technology/Inje University, University of Duisburg-Essen/Newcastle University, University of Duisburg-Essen/German Research Center for Artificial Intelligence, and The Open University

Tangible Interfaces for Download: Initial Observations from Users’ Everyday Environments Enrico Costanza, Matteo Giaccone, Olivier Küng, Simon Shelley, and Jeffrey Huang, EPFL Media and Design Lab, Lausanne, Switzerland

Tangible user interfaces (TUIs) have been promoted and discussed in the HCI community for 15 years. Most reported TUIs are research prototypes, available in laboratories or museums. This paper reports an attempt to understand the impact of TUIs in users’ everyday environments through a low-cost, simple-to-set-up tangible interface for music that can be freely downloaded from a website. The system requires only a regular computer, a webcam and a printer – the physical parts of the interface can be folded out of ordinary paper. Logging interaction with the interfaces and analyzing content posted by users on the web, the researchers observed that the TUIs were accepted as normal: just interfaces to make music rather than esoteric systems.

Audio d-touch is a small collection of applications for real-time musical composition and performance. It includes a drum machine and a sampling sequencer. Both are controlled by physically arranging a set of graspable interactive blocks on a flat surface (e.g. a table-top). For ease of construction, the interactive surface is normally delimited by an A4 piece of paper, which defines a block size of approximately 2.5x3cm, as shown in the figure. However, the system can be scaled to practically any size, as long as the interactive surface and blocks are scaled by the same factor and everything is in the field of view of the camera.

Setup of the d-touch system with the webcam, the active surface, the paper blocks and the speakers

Interactivity and Non-Interactivity on Tabletops Kenton O’Hara, Microsoft Research, Cambridge, England

In the growing field of tabletop computing research, there has been an understandable focus on the interactive aspects of tabletop use, in terms of technology, design, and behavioral analysis. In this paper, O’Hara highlights the importance of also considering non-interactive aspects of tabletop computing and the mutually dependent relationship between the interactive and the non-interactive. He illustrates aspects of this relationship using findings from a deployment of an interactive tabletop in a public setting. The findings highlight how consequences of interaction can impact non-interactive behaviors and intentions, and how non-interactive actions can constrain interactive behaviors on the tabletop. In doing this, Microsoft aims to raise awareness of the relationship between interactivity and non-interactivity within tabletop computing research.

The study was set in the café/bar of the Watershed Media Centre in Bristol, England. The café bar is there to serve people using the amenities but is also well established as a venue in its own right with people visiting who are not explicitly using the amenities. While there, people undertake a myriad of different activities beyond simply eating or drinking. They read, work on laptops, have work meetings, chat with friends, hold mobile phone conversations, surf the Internet, relax, or simply just soak up the atmosphere while waiting for friends. People mainly visit and sit round the tables in small groups but there are a significant proportion of solo visitors (or people waiting for friends to arrive), in particular during the quieter periods of the day more conducive to relaxed solo activities such as reading.

The interactive table was situated in a room in the café bar with a large sofa and wooden bench arranged for sitting at the table. The interactive table consisted of a PC and horizontal touch screen display housed in a low coffee table format casing. The touch screen display was a 40-inch plasma with a screen resolution of 1280x768. A wooden rim of approximately 11cm surrounded the screen on all sides. The touch screen was single touch only. The content on the display consisted of graphical links to a range of short films, interactive installations and games.

MouseLight: Bimanual Interactions on Digital Paper Using a Pen and a Spatially-Aware Mobile Projector Tovi Grossman, and George Fitzmaurice, Autodesk Research, Toronto, Ontario; Hyunyoung Song, University of Maryland, College Park, Maryland; Francois Guimbretiere, Cornell University, Ithaca, New York

MouseLight is a spatially-aware standalone mobile projector with the form factor of a mouse that can be used in combination with digital pens on paper. By interacting with the projector and the pen bimanually, users can visualize and modify virtually augmented content on top of the paper, and seamlessly transition between virtual and physical information. The researchers present a high-fidelity hardware prototype of the system and demonstrate a set of novel interactions specifically tailored to the unique properties of MouseLight. MouseLight differentiates itself from related systems such as PenLight in two aspects. First, MouseLight presents a rich set of bimanual interactions inspired by the ToolGlass interaction metaphor, but applied to physical paper. Secondly, the system explores novel displaced interactions that take advantage of independent input and output, each spatially aware of the paper beneath. These properties enable users to issue remote commands such as copy and paste or search. They also report on a preliminary evaluation of the system, which produced encouraging observations and feedback.

The proof-of-concept MouseLight system. It consists of three main components: a digital pen, digital paper and a spatially-aware projector.

Drafting Tool feature. Left: Multiple draft and measuring tools can be used to guide drawing. Right: The viewport scroll wheel allows users to change the scale of the toolglass.

Clutch-Free Panning and Integrated Pan-Zoom Control on Touch-Sensitive Surfaces Sylvain Malacria, Eric Lecolinet, and Yves Guiard, Telecom ParisTech, Paris, France

This paper introduces two novel navigation techniques, CycloPan, for clutch-free 2D panning and browsing, and CycloZoom+, for integrated 2D panning and zooming. These techniques instantiate a more generic concept which the researchers call Cyclo* (CycloStar). The basic idea is that users can exert closed-loop control over several continuous variables by voluntarily modulating the parameters of a sustained oscillation. Touch-sensitive surfaces tend to offer impoverished input resources. Cyclo* techniques seem particularly promising on these surfaces because oscillations have multiple geometrical and kinematic parameters many of which may be used as controls. While CycloPan and CycloZoom+ are compatible with each other and with much of the state of the art, the experimental evaluations suggest that these two novel techniques outperform flicking and rubbing techniques.
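As a rough sketch of the idea (our reading, not the authors’ controller), an oscillatory gesture can be reduced to an angular velocity around its own centroid, and that sustained rotation can drive a continuous variable such as zoom rate, with the gesture radius acting as a gain:

import math

def zoom_velocity(samples, dt):
    """samples: recent (x, y) touch points sampled every dt seconds."""
    if len(samples) < 3:
        return 0.0
    cx = sum(x for x, _ in samples) / len(samples)
    cy = sum(y for _, y in samples) / len(samples)
    angles = [math.atan2(y - cy, x - cx) for x, y in samples]
    steps = [((a1 - a0 + math.pi) % (2 * math.pi)) - math.pi
             for a0, a1 in zip(angles, angles[1:])]       # wrapped deltas
    omega = sum(steps) / (len(steps) * dt)                # signed rad/s
    radius = sum(math.hypot(x - cx, y - cy)
                 for x, y in samples) / len(samples)      # gesture size
    return omega * radius                                 # signed zoom rate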

CycloPan (left), CycloZoom+ (right), along with the schematic form of their controlling gestures

Vote-O-Graph: A Dishonest Touch-screen Voting System Andrea L. Mascher, Paul T. Cotton, and Douglas W. Jones, The University of Iowa, Iowa City, Iowa

The researchers present Vote-O-Graph, an experimental touch-screen voting system designed to simulate reported interface issues in existing electronic voting systems. Touch-screen miscalibration and the use of relative touch coordinates in anonymity-preserving user interface event logs are discussed. The visual design of Vote-O-Graph is based on layouts used in existing commercial and experimental voting systems, such as Pvote. Contests are normally presented one per page, with the contest description at the top of the screen and candidate selection options presented as a column of adjacent buttons in the middle of the screen. The “Next” and “Previous” navigation buttons are in the lower right- and left-hand corners, 20 pixels (4.1mm) from the bottom of the screen. All buttons had a height of 90 pixels (18.4mm).

Existing methods to detect and respond to user interface issues, such as touch-screen miscalibration, record too much information to be used in voting system event logs. The right to a secret ballot is compromised when it is possible to reconstruct how a person voted from the event log. This balance, between the need to protect voter anonymity and the desire to collect the maximum amount of meaningful data for post-election investigations, has prompted several questions: what user interface issues can be detected while maintaining voter privacy; how can said issues be detected without revealing a voter’s selections; and are the data for different types of issues differentiable from one another? The study described in this paper investigates what anonymity-protecting user interaction data can be maintained in a voting system event log and what measurable differences in behavior exist under a variety of interface issues.
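A minimal sketch of the relative-coordinate idea (our illustration of the principle described above; field names are assumptions): the log keeps only each touch’s offset from the center of whatever button it activated, never the button’s identity or absolute position, which is enough to reveal systematic miscalibration offsets without revealing votes:

def log_touch(event_log, button_x, button_y, button_w, button_h, tx, ty):
    # record only the offset from the activated button's centre
    event_log.append({
        "dx": tx - (button_x + button_w / 2.0),
        "dy": ty - (button_y + button_h / 2.0),
    })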

Touching the Void: Direct-Touch Interaction for Intangible Displays Li-Wei Chan, Hui-Shan Kao, Mike Y. Chen, Ming-Sui Lee, Jane Hsu, and Yi-Ping Hung National Taiwan University, Taipei, Taiwan

In this paper, the researchers explore the challenges in applying and investigating methodologies to improve direct-touch interaction on intangible displays. Direct-touch interaction simplifies object manipulation, because it combines the input and display into a single integrated interface. While traditional tangible-display-based direct-touch technology is commonplace, similar direct-touch interaction within an intangible display paradigm presents many challenges. Given the lack of tactile feedback, direct-touch interaction on an intangible display may show poor performance even on the simplest of target acquisition tasks. In order to study this problem, the researchers created a prototype of an intangible display. In the initial study, they collected user discrepancy data corresponding to the interpretation of the 3D location of targets shown on the intangible display. The results showed that participants performed poorly in determining the z-coordinate of the targets and were imprecise in their execution of screen touches within the system. Thirty percent of positioning operations showed errors larger than 30mm from the actual surface. This finding triggered their interest in designing a second study, in which they quantified task time in the presence of visual and audio feedback. The pseudo-shadow visual feedback was shown to be helpful both in improving user performance and satisfaction.

The prototype system of the intangible display, annotated for clarity. The Fresnel lens transmits the image on the LCD monitor into the air. The yellow arrow represents the optic path of image formation.

Adaptive Mouse: A Deformable Computer Mouse Achieving Form-Function Synchronization Sheng Kai Tang, ASUSTek Computer, Taipei, Taiwan; Wen Yen Tang, Kun Shan University, Yongkang, Taiwan

In this paper, the researchers implement a computer mouse that demonstrates the idea of form-function synchronization by embedding deformation-sensing modules consisting of deformable foam and Hall-effect sensors. Because its automatic sensing, recognizing and actuating mechanisms actively respond to users’ diverse gestures, they have named it Adaptive Mouse. Working with Adaptive Mouse, all users have to do is hold it with their preferred hand gesture; the correct button functions are then triggered intuitively through their forefinger and middle finger. Users can also freely move the mouse and always get accurate cursor feedback. This “hold, then click” action creates a sense of “magic”, and the mouse shape, with minimal visual clues, not only lowers mental load but also achieves the goal of design simplicity.

Concept of Adaptive Mouse

Manual Deskterity: An Exploration of Simultaneous Pen + Touch Direct Input Ken Hinckley, Michel Pahud, Bill Buxton, Andy Wilson, and Hrvoje Benko, Microsoft Research, Redmond, Washington; Nicole Coddington, and Jenny Rodenhouse, Microsoft Corporation, Redmond, Washington; Koji Yatani, University of Toronto, Toronto, Ontario

Manual Deskterity is a prototype digital drafting table that supports both pen and touch input. The researchers explore a division of labor between pen and touch that flows from natural human skill and differentiation of roles of the hands. They also explore the simultaneous use of pen and touch to support novel compound gestures.

We are witnessing a shift towards systems employing direct manual input, where the user interacts directly with the display rather than indirectly, as with the mouse and cursor of traditional GUIs. This has renewed interest in both pen and touch input, in form factors ranging from handhelds and slates to desktops, tabletops, and wall displays. The iPhone, Tablet PC, Wacom Cintiq, Microsoft Surface, and Smartboard are, respectively, examples of each. Neither multi-touch nor pen input is new, but few systems explore their use in conjunction. The presence of both modalities may alter our perspective on multi-touch input; the same can be said for pen gestures. Hence simultaneous pen + touch is a nascent topic in need of further study. This argues for a holistic approach rather than focused evaluation of individual techniques, even though such an approach is often not rewarded by CHI.

Microsoft’s experience is that trying many ideas – some good, some bad, and some intentionally chosen to highlight conflicting conventions or thorny design decisions rather than hide them – is an excellent way to draw out nuances and gain insights into novel input modalities. A systems-oriented approach offers a realistic perspective of how combined pen and touch input influences UI design issues and trade-offs. The result is Manual Deskterity, a scrapbooking application inspired by how designers work with design boards and notebooks – plus the company’s experiences with related prototypes. They advocate a division of labor between pen and touch: the pen writes, touch manipulates, and the combination of pen+touch yields new tools. This articulates how a system interprets unimodal pen, unimodal touch, and multimodal pen + touch inputs, respectively. The researchers contribute novel pen + touch gestures, while also raising, by way of examples, design questions that probe how the roles of pen and touch should be differentiated (or not) in UI design.
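The division of labor can be caricatured in a few lines of Python (a paraphrase of the design rule, not the system’s actual dispatch logic):

def route_input(kind, touch_points):
    """kind: "pen" or "touch"; touch_points: touches currently holding an item."""
    if kind == "pen":
        # pen + touch together yields a new tool, e.g. the pen cuts an item
        # that a finger is holding; the pen alone writes
        return "compound_tool" if touch_points else "ink"
    if kind == "touch":
        return "manipulate"      # touch alone moves and scales items
    return "ignore"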

User behaviors observed during the design study. (a) Users tuck the pen between fingers while manipulating items. (b) Thumb and forefinger grasp an item while writing about it. (c) Cut scraps fall onto the work surface. (d) Users often pull tools and new content onto the notebook from above.

Interfaces beyond the Surface: A Structural Approach to Embodiment Fabian Hemmert, Deutsche Telekom Laboratories, Berlin, Germany

This work aims to contribute to the theory and practice of embodied interaction. It argues that the underlying term “embodiment” has not been defined sufficiently and is consequently used inconsistently, and that this circumstance is problematic. The paper presents an attempt to provide more clarity to the theory of embodiment, as a basis for the practice of designing embodied interaction in tangible user interfaces (TUIs). It proposes a purely structural approach, derived from Heidegger’s “Being and Time”. Aspects and criteria of embodiment in the literature are reviewed in this work and applied to the design practice of embodied interaction.

Supporting Medical Communication with a Multi-modal Surface Computer Anne Marie Piper, University of California, San Diego, California

This research explores the utility of a multi-modal surface computer for supporting medical communication between older adults and health care providers. Research involves a field study of health care communication practices, the design of a multi-modal surface computer application, and an in-context evaluation of the technology at a local retirement community. The Shared Speech Interface (SSI) project begins to explore the idea of reifying speech on a multi-modal surface. SSI is an application for an interactive multi-touch tabletop display designed to facilitate medical communication between a deaf patient and a hearing, non-signing physician. The deaf patient types on a keyboard, and the hearing doctor speaks into a headset microphone. Speech recognition software transcribes the doctor’s speech. As the two people communicate, their speech appears on the display in the form of movable speech bubbles (see photo).

Doctor (left) and deaf patient (right) communicate using the Shared Speech Interface (SSI). Moveable speech bubbles appear on the surface computer.

Designing User Interfaces for Multi-touch and Surface-gesture Devices Daniel Wigdor, Microsoft, Redmond, Washington State; Gerald Morrison, SMART Technologies, Calgary, Alberta

The design and research communities have access to multi-touch and gestural interfaces that have been released on a mass-market scale. This allows them to design and develop new, unique, and richer design patterns and approaches. These methods are no longer confined to research projects or innovation labs, but are now offered on a large scale to millions of consumers. The aim of this SIG is to provide a forum for designers, researchers, and usability professionals to discuss these new and emerging technology trends for multi-touch and gesture interfaces, as well as to discuss current design patterns within these interfaces. Their goal is to cross-pollinate ideas and current solutions from practitioners and researchers across communities to help drive awareness of this new field for those interested in, just starting in, or currently involved in the design of these systems.

The Role of Tangible Technologies for Special Education Taciana Pontual Falcão, London Knowledge Lab, London, England

The physicality and multi-sensory aspect of tangibles make them particularly suitable for children with special needs. To date, however, there is little empirical research on tangibles for supporting cognition and learning difficulties. This research aims to investigate the role of tangibles in supporting attention, verbal memory and abstract thinking of children with learning needs, particularly when collaborating with peers. There are many kinds of tangible technologies which could be used for science learning. As this work takes place alongside the London Knowledge Lab’s project Designing Tangibles for Learning (http://www.lkl.ac.uk/research/tangibles), the studies make use of the tangible artifacts for science learning developed in the project. So far, an interactive tangible tabletop for learning basic concepts of the physics of light has been developed (see photo).

The tabletop with physical interaction devices

LiquidText: Active Reading through Multi-touch Document Manipulation Craig Tashman, Georgia Tech, Atlanta, Georgia

Active reading, involving acts such as highlighting and writing notes, is an important part of knowledge workers’ activities. Most computer-based active reading support has sought to better replicate the affordances of paper. Instead, this dissertation seeks to go beyond paper by proposing a more flexible, fluid document representation, controlled through gesture and multi-touch input. Formative evaluations revealed details about modern active reading behavior and early reactions to the prototype system. Tashman discusses how these will inform the next design iteration, and current plans for a comparative study against other media.

With LiquidText, he is exploring an alternate approach: building a high degree-of-freedom document representation, which the user can richly manipulate throughout the active reading process (see photo). The system is decidedly unlike paper, allowing the user to collapse parts of the text, pull pieces of the text apart and rearrange them, create links throughout the text, and the like.

User interacting with several text objects in LiquidText

Maintaining Levels of Activity using a Haptic Personal Training Application Huimin Qian, Ravi Kuber, and Andrew Sears, UMBC, Baltimore, Maryland

This paper describes the development of a novel mobile phone-based application designed to monitor the walking habits of older adults. Haptic cues integrated within the prototype are designed to inform an individual of changes which should be made to maintain a prescribed level of activity. A pilot study was conducted with fifteen older adults walking at varying speeds, both with and without the presence of assistive haptic feedback from the prototype. The results confirm that more steps were taken when haptic feedback was provided while walking at normal and fast paces. However, results also indicate that further refinements would be needed to improve the identification of haptic cues while individuals are in motion.

CheekTouch: An Affective Interaction Technique while Speaking on the Mobile Phone Young-Woo Park, Chang-Young Lim, and Tek-Jin Nam, KAIST, Daejeon, South Korea

The researchers present a new affective interaction technique, called CheekTouch, combining tactile feedback delivered through the cheek with multi-finger input while speaking on a mobile phone. They designed a prototype using a multi-touch mobile device and a 4x3 vibro-tactile display device. They identified six affective touch behaviors (pinching, stroking, patting, slapping, kissing and tickling) that can be exchanged through one another’s cheeks while speaking on the phone. They mapped the affective touch behaviors onto tactile feedback expressions of the vibro-tactile display. Results of a preliminary user study suggest that the technique is evaluated positively by participants and is applicable to intimate and emotional communication.
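As an illustration of the mapping (the pattern names below are invented placeholders; the paper’s actual tactile expressions are spatial, time-varying patterns on the 4x3 array):

TACTILE_PATTERNS = {
    "pinching": "short_sharp_pulse",
    "stroking": "slow_wave_across_rows",
    "patting": "soft_repeated_taps",
    "slapping": "single_strong_burst",
    "kissing": "brief_localized_buzz",
    "tickling": "rapid_scattered_pulses",
}

def to_tactile(behavior):
    # look up the vibro-tactile expression for a recognized touch behavior
    return TACTILE_PATTERNS.get(behavior)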

Structure of the CheekTouch prototype

The Reactable: Tangible and Tabletop Music Performance Sergi Jordà, Pompeu Fabra University, Barcelona, Spain

In this paper the researchers present the Reactable, a new electronic musical instrument with a simple and intuitive tabletop interface that turns music into a tangible and visual experience, enabling musicians to experiment with sound, change its structure, control its parameters and be creative in a direct, refreshing and unseen way. The Reactable is built upon a tabletop interface, which is controlled by manipulating tangible acrylic pucks on its surface. By putting these pucks on the Reactable’s translucent and luminous round surface, by rotating them and connecting them to each other, performers can combine different elements like synthesizers, effects, sample loops or control elements in order to create a unique and flexible composition. As soon as any puck is placed on the Reactable’s surface, it is illuminated and starts to interact with the other neighboring pucks, according to their positions and proximity. These interactions are visible on the table surface that acts as a screen, giving instant feedback about what is currently going on, turning music into something visible and tangible. Additionally, performers can also change the behavior of the objects by touching and interacting with the table surface, without a limit to the number of fingers that can be used simultaneously. The Reactable was specially designed so that it could be used by several simultaneous performers (which justifies its round shape), thus opening up new pedagogical, entertaining, and creative possibilities with its collaborative and multi-user capabilities.

The Reactable (see text for more details)

MusicJacket: The Efficacy of Real-time Vibro-tactile Feedback for Learning to Play the Violin Rose Johnson, Janet van der Linden, and Yvonne Rogers, The Open University, Milton Keynes, England

This research investigates the potential for vibrotactile feedback to enhance motor learning in the context of playing the violin. A prototype has been built that delivers vibrotactile feedback to the arms to indicate to a novice player how to correctly hold the violin and how to bow in a straight manner. This prototype was tested in a pilot user study with four complete beginners. Observations showed improvements in three of the four players whilst receiving the feedback. The researchers also discuss the pros and cons of using negative feedback to enhance learning. The deviation of the position of each hand with respect to an ideal is calculated and feedback is delivered accordingly. The ideal is recorded at the start of each use. The ideal position for the left hand is given by holding the violin under instruction from a teacher. To input the ideal trajectory for the right hand, an assistant holds the bow on the string and the pupil runs their hand along the bow. A straight line is fitted to this, which is the ideal path for the bowing hand. The line is drawn in the violin reference frame so that if the violin moves, the ideal trajectory for the bow will move with it. The feedback is given in sets of opposable pairs, two for each hand. Vibrators are positioned using the push metaphor (see figure) so that the user moves their hand away from the vibrations.
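The feedback loop is simple in outline; this Python sketch restates the push metaphor (the dead zone and 5cm saturation are our assumptions):

def bow_feedback(deviation_m, dead_zone_m=0.01):
    """deviation_m: signed distance of the bow hand from the ideal line."""
    if abs(deviation_m) <= dead_zone_m:
        return None                                    # on the line
    vibrator = "left" if deviation_m < 0 else "right"  # same side as the error
    intensity = min(1.0, abs(deviation_m) / 0.05)      # saturate at 5 cm
    return vibrator, intensity                         # push the hand away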

One of the participants wearing the prototype. The labeled vibrators positioned on the arms are: (1) move hand away from body, (2) move hand towards the body, (3) move hand down, (4) move hand up, (5) move hand left, (6) move hand right, (7) move hand up and (8) move hand down.

Pot à Musique: Tangible interaction with Digital Media Steven Strachan, and Agnès Gimeno, Orange Labs, Meylan, France; Benjamin Mazoin, ENSCI-Les ateliers, Paris, France

The researchers describe the conceptualization, design and prototype development of a tangible gesture-based interface for the control of a music player. The device takes the form of a pot, augmented with inertial sensing and model-based vibrotactile feedback, which it is envisioned will encourage a more playful form of interaction for a richer interactive experience with increasingly dematerialized digital media.

The full range of available gestures for use with the “pot à musique”

The Emoti-Chair: An Interactive Tactile Music Exhibit Maria Karam, Carmen Branje, Gabe Nespoli, Norma Thompson, Frank A. Russo, and Deborah I. Fels, Ryerson University, Toronto, Ontario

The Emoti-Chair is a sensory substitution system that brings a high-resolution audio-tactile version of music to the body. The system can be used to improve music accessibility for deaf or hard of hearing people, while offering everyone the chance to experience sounds as tactile sensations. The model human cochlea (MHC) is the sensory substitution system that drives the Emoti-Chair. Music can be experienced as a tactile modality, revealing vibrations that originate from different instruments and sounds spanning the audio frequency spectrum along multiple points of the body. The system uses eight separate audio-tactile channels to deliver sound to the body, and provides an opportunity to experience a broad range of musical elements as physical vibrations.

Design by Physical Composition for Complex Tangible User Interfaces Tanja Döring, Bastian Pfleging, and Albrecht Schmidt, University of Duisburg-Essen, Essen, Germany; Christian Kray, Newcastle University, Newcastle Upon Tyne, England

In this paper, the researchers present a novel approach to creating devices with tangible user interfaces by physical composition. While the separation of the user interface from the application logic has a long tradition in software engineering, for products with tangible user interfaces there is no equivalent approach that realizes a true separation and flexible combination of interface components, underlying technology, and software parts. They propose a novel concept that is based on an inner core for the basic technical and software platform of a product and an outer shell that builds a flexible and exchangeable tangible user interface from passive components. Using vision-based tracking, they can realize a clear separation between the components; no wiring is necessary. This paper introduces the novel approach and presents a first working prototype as well as initial results from its application in a design workshop. The figure illustrates how they implemented the proof-of-concept version. The top part shows its main components: a (white) slide-on shell with cutouts to enable displays, where the (blue) core is visible, and to attach (dark gray) physical controls. The bottom part of the figure shows a side view of the front panel. Physical controls are attached to the shell by attaching a back part to the front part via a rod. The back part sits between the shell and the core and has a visual code attached to it. A camera (C) inside the core can track this visual code and therefore detects when a user moves or turns a control. The reacTIVision framework, extended by a new marker detection engine, was used to track those markers visually. The new engine detects small fiducials (17.5x3mm) while still using a low camera resolution (640x480 pixels). The core also houses a projector, which projects an image on the back plane of the housing of the core and thereby displays content onto the cut-out areas in the shell.
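In outline, the software side amounts to binding tracked fiducial IDs to the shell controls they sit behind; the Python sketch below is our illustration (the IDs, control names and mappings are invented, not taken from the prototype):

import math

BINDINGS = {3: "volume_knob", 7: "channel_slider"}   # fiducial id -> control

def on_fiducial(marker_id, x, y, angle, state):
    control = BINDINGS.get(marker_id)
    if control == "volume_knob":
        state["volume"] = (angle % math.tau) / math.tau   # rotation -> 0..1
    elif control == "channel_slider":
        state["channel_pos"] = x                          # lateral position
    return state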

Example device with white cut-out shell and blue core (top), and side view (bottom) with dark-gray control, attached visual code, camera (C) and projector (P)

A Multi-Touch Enabled Steering Wheel – Exploring the Design Space Max Pfeiffer, Dagmar Kern, Johannes Schöning, Tanja Döring, and Albrecht Schmidt, University of Duisburg-Essen, Essen, Germany; Antonio Krüger, German Research Center for Artificial Intelligence, Saarbrücken, Germany

Cars offer an increasing number of infotainment systems as well as comfort functions that can be controlled by the driver. This research investigates new interaction techniques that aim to make it easier to interact with these systems while driving. In contrast to the standard approach of combining all functions into hierarchical menus controlled by a multifunctional controller or a touch screen, the researchers suggest utilizing the space on the steering wheel as an additional interaction surface. In this paper they show the design challenges that arise for multi-touch interaction on a steering wheel. In particular, they investigate how to deal with input and output while driving and hence rotating the wheel. They describe the details of a functional prototype of a multi-touch steering wheel based on FTIR and a projector, which was built to explore the resulting user experience experimentally. In an initial study with 12 participants, they show that the approach has general utility and that people can use gestures to control applications intuitively.
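One concrete sub-problem the paragraph alludes to is keeping input and projected output upright while the wheel turns. A hedged sketch of the obvious fix, counter-rotating coordinates by the measured steering angle (our illustration, not the authors’ implementation):

import math

def wheel_to_screen(x, y, wheel_angle_rad, cx, cy):
    """Map a touch sensed on the rotating wheel into the fixed, upright
    frame used by the UI, by rotating about the wheel centre (cx, cy)."""
    dx, dy = x - cx, y - cy
    c, s = math.cos(-wheel_angle_rad), math.sin(-wheel_angle_rad)
    return cx + c * dx - s * dy, cy + s * dx + c * dy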

A user interacting with the multi-touch steering wheel

Hybrid Groups of Printed and Digital Documents on Tabletops: A Study Jürgen Steimle, Mohammadreza Khalilbeigi, and Max Mühlhäuser, Technische Universität Darmstadt, Darmstadt, Germany

This paper presents an exploratory study investigating how physical and digital documents are used in combination on tabletops. The results identify hybrid piles as the most common grouping concept and show that users willingly occlude digital documents with physical paper. These findings have considerable impact on the design of novel hybrid interaction techniques, which the researchers sketch at the end of the paper. The study used an interactive tabletop of 130x105cm size with a display size of 100x60cm, which is representative of the space available on a typical desk. The rear projection had a full HD resolution of 1920x1080 pixels. The participants could interact with the digital documents using multi-touch gestures for moving, rotating, and zooming in and out of individual documents. Printed documents could be placed and manipulated on the display surface and also on the surrounding surface of the table.

The setting of the study: The participants worked with printed and digital documents simultaneously on a tabletop display.

Touch Your Way: Haptic Sight for Visually Impaired People to Walk with Independence Ji-Won Song, Korea Advanced Institute of Science and Technology, Daejeon, South Korea Sung-Ho Yang, Inje University, Gyongnam, South Korea

Haptic Sight is a new interface idea that provides immediate spatial information to visually impaired people in order to assist independent walking. The idea stems from a thorough investigation in which the researchers studied visually impaired people's indoor walking behavior, decision-making processes, unique concept of space, and information needs. The aim of the study is to identify an interface design and investigate an appropriate means of delivering spatial information. To assist a visually impaired person's independent walking, a natural and effective interface is required to provide continuous spatial information from which the user can determine where he is and how to move. The researchers suggest a touchable interface that presents information about the area up to 15 steps ahead of the person (see illustration). Haptic Sight comprises a 30x50 array of small blocks that move between two layers. It uses double-acting cylinders and directional control valves, operated by electrical signals derived from information received wirelessly from ultrasonic or infrared sensors. The tiny raised blocks create a touchable outline of the building layout. Through Haptic Sight, a visually impaired person can sense his surroundings just as a sighted person surveys a space visually to determine his location. He touches Haptic Sight with one hand while the other holds a cane, which is already a familiar and essential aid for walking.

The concept of Haptic Sight (see text for more details)
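A rough sketch of the mapping the description above implies, rendering an obstacle map of the area ahead onto a grid of two-position pins. The 30x50 grid size comes from the article; the map format and function names are assumptions for illustration.

    # Render an obstacle map of the ~15 steps ahead onto a 30x50 grid of
    # raised/lowered pins, one pin per grid cell (illustrative sketch).

    ROWS, COLS = 30, 50

    def render_tactile_map(obstacle_map):
        """obstacle_map: ROWS x COLS of booleans, True where a wall or
        obstacle lies ahead. Returns pin states: True = raised."""
        return [[bool(cell) for cell in row] for row in obstacle_map]

    # Each frame, the sensors update the map and hypothetical valve
    # drivers move the pins to match:
    # for r, row in enumerate(render_tactile_map(current_map)):
    #     for c, raised in enumerate(row):
    #         set_pin(r, c, raised)   # hypothetical valve-driver call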


Interview with Adam Bogue from Circle Twelve

Adam Bogue is the President of Circle Twelve. Previously, he was VP of Business Development at Mitsubishi Electric Research Labs for eight years. Bogue has 20 years of experience in technology marketing and business management, and holds a BS from MIT and an MS from the MIT Sloan School of Management.

Please give us some background information about Circle Twelve. Circle Twelve is a privately-held company founded in 2008, and is headquartered just outside Boston, Massachusetts.

Is Circle Twelve affiliated with MERL, or is it a completely separate entity? DiamondTouch was first invented at the Mitsubishi Electric Research Labs, or MERL, back in 2001. Circle Twelve has an exclusive license agreement with MERL for the DiamondTouch table technology, but the companies are separate.

Tell us about the technology behind the DiamondTouch. The motivation behind DiamondTouch is this: 1) 70% of business meetings take place between two or three people, face-to-face; and 2) computers are particularly bad at supporting this. Computers were really designed for individuals, hence the name Personal Computer. There are network-based collaboration tools that allow people in remote locations to work together. But for people in the same place at the same time, I think computers get in the way. That is why people often print things out. The best a computer can do in a small-group setting is let one person drive while everyone else looks over shoulders. The fact that computers do not support the most common interaction between people strikes us as crazy, and it is the motivation behind the company and the technology.

Multi-touch has become a bit of an industry buzzword. What does multi-touch mean to you? DiamondTouch is a multi-touch input device, but that was never the “raison d'être”. Our focus from the beginning was multi-user. That said, I have been using the DiamondTouch table as my primary computer interface for over six years now, and I can tell you that it would not be possible if it were not multi-touch.

Circle Twelve has emphasized that the DiamondTouch is more than multi-touch, but is actually multi-user-touch. Tell us what this means. “Multi-user” means that DiamondTouch can identify who is touching where, and it is still the only touch technology that can do this.

How do you recognize inputs from multiple users? Unlike many other multi-touch technologies, which are based on cameras, DiamondTouch is capacitive. But it is also different from other capacitive touch technologies. The DiamondTouch surface has an array of antennas that emit very small RF signals. When a user touches the surface, the signals are capacitively coupled through the user to a receiver, typically connected to the user's chair. Each user is in contact with a separate receiver. So there are signals in the table, receivers in the chairs, and the users connect the two. DiamondTouch actually requires far less processing power than camera-based systems, which need computer-vision techniques to interpret multi-touch interaction – and those technologies cannot identify who is who.
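For readers who like to see the idea in code, here is a minimal sketch, not Circle Twelve's implementation: because each chair has its own receiver, the antenna signals coupled through a user arrive already labeled with that user's identity. All names and readings below are illustrative.

    # Each receiver "hears" only the rows/columns its own user touches,
    # so user identity comes for free, before any coordinate math.

    def candidate_points(coupled_rows, coupled_cols):
        """Intersections of the row/column antennas whose signals one
        receiver picked up through its user."""
        return [(r, c) for r in coupled_rows for c in coupled_cols]

    # Hypothetical frame: user A touches near row 5/column 3, user B
    # near row 2/column 6.
    frame = {"user_A": ({5}, {3}), "user_B": ({2}, {6})}
    for user, (rows, cols) in frame.items():
        print(user, candidate_points(rows, cols))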

What is the touch resolution of DiamondTouch? On our larger model of DiamondTouch table, the touch resolution is 2752x2064. While many camera-based systems have a touch resolution that is about the size of a fingertip, the touch resolution on DiamondTouch is 0.3mm, which is typically smaller than a pixel. In some of the markets that we serve, including GIS and CAD, having good touch resolution is extremely important.


Does DiamondTouch require a special surface? I guess you might say that DiamondTouch is a special surface. DiamondTouch is a USB input device, like some interactive whiteboards.

Please explain the differences between your front projection solution versus rear projection tabletops, such as the Microsoft Surface or SMART Table. Both Microsoft Surface and the SMART Table have cameras that are looking up, and so it makes sense for those products to be rear projected. The DiamondTouch table does not use cameras and so front-projection makes sense for DiamondTouch – just like many interactive whiteboards.

What happens if you project two images on the DiamondTouch surface? Is there a way to interact in a 3D mode? There are some university research groups that have done exactly what you describe with the DiamondTouch table. I see this as interesting, but not something we would want to turn into a standard product.

With front projection, don’t you run into occlusion and shadowing problems? I think that front-projection has a bad reputation because of vertically-mounted interactive whiteboards. With those, if you stand in the wrong place, you can cast a big body-shadow, which can be a real problem. By contrast, with interactive tabletops, hands come in from the sides. For tabletops, it is the hand and not the shadow of the hand that obscures more of the image.

Are there any advantages to a front projection system as compared to a rear-projection system? Front projection gives a bit of an advantage in terms of visibility, because you can actually “see” what you are touching (because the image is projected onto the back of your fingertip), whereas the image is completely obscured by the finger on rear-projection and LCD displays. A bigger advantage is durability. The DiamondTouch table has no glass to break, and so you can treat it like a real table.

Can DiamondTouch be mounted on a wall, or is it intended for use only as a tabletop? You can mount it on the wall, and some of our customers have done that. However, I think they are missing the point of the DiamondTouch table. There has been some research work on vertical vs. horizontal displays, which makes it clear that vertical displays are best for presentations. But for presentations, multi-user is not important. On the other hand, horizontal surfaces are best for collaboration. In the office, most small-group collaboration takes place seated face-to-face, usually at a desk, a small conference-room table, or those one-meter-diameter round tables in people's offices. And usually people print things out and work with paper and pen. That is our target – we want to replace that one-meter round table with a DiamondTouch table, and bring tabletop computing into the place where people usually use paper and pen.

Are there size limitations associated with the DiamondTouch table? There are not really technical limitations to size or shape of the DiamondTouch table. However, we have two standard models – 32-inch and 42-inch diagonal – which were chosen based on the length of the human arm. Any larger, and users would not be able to reach everything.

Do you develop software for the DiamondTouch solution, or do your users tend to develop their own software solutions? Most of our customers are end users. We include a utility program called DTMouse that allows any Windows software to work on the DiamondTouch table right out of the box. This utility program enables multi-user mark-ups, so people can plan, brainstorm, and interact together. Circle Twelve also provides software extensions or plug-ins to existing software applications (e.g., ArcGIS from ESRI), so that when you operate that software on DiamondTouch hardware, there are new multi-touch and multi-user features.

We do include an SDK with every DiamondTouch table we sell, so that our customers who are software developers can make their own custom gesture interfaces or multi-user software applications. So if all you want to do is run Windows applications and some mark-up, DiamondTouch just works. If you are a software developer and you want to do your own thing, you can use our SDK. And then over time, Circle Twelve is going to be introducing more multi-user plug-ins for popular business software applications.

Is there an adequate ecosystem of applications developers for the DiamondTouch solution? Our SDK supports Java, C++, ActiveX, etc. But one of the best things we did was include an interface for Adobe Flash, which is great for rapid software prototyping and development and has a large freelance developer community.

How does one go about acquiring a DiamondTouch system? Circle Twelve accepts purchase orders or credit cards, and we ship within a few days of receiving an order.

What’s the price for a DiamondTouch solution? What price point do you think is needed to attain a sizable market position? Today our price is about $10,000, and in the markets we sell to, that is not crazy. Long term, we would like to be in schools. The idea here is that interactive whiteboards serve as presentation tools for the teachers, while DiamondTouch can be for the students. And because DiamondTouch knows who is who, performance assessment of individual students is built in. For Circle Twelve to be successful in the education market, we will need to get to about a quarter of our current price point.

What’s your favorite multi-touch implementation that you’ve seen to date? There is a group out of the University of Haifa that was using the DiamondTouch table as a mediation interface for conflict resolution between Israeli and Palestinian youth. The point of the DiamondTouch table is to bring people together, face-to-face, and this work is really inspiring.

Please give us your spin on gesture recognition. Do you feel that there’s a formal need for some standardization, or will standard gestures evolve naturally over time to best optimize the specific interaction? I think it will evolve naturally. I am not at all concerned about this.

Does DiamondTouch have the capability to recognize objects on the surface? No. Because we are all about tabletop computing, we wanted to let people treat the table like a real table – put their coffee, books, car keys, etc. on it – and not have those items cause any problems. So DiamondTouch ignores objects and responds only to touch.

How important is tactile feedback in a typical DiamondTouch situation? I must confess that I do not understand what people are talking about when it comes to “tactile feedback”. For example, I have used an on-screen keyboard on the DiamondTouch table for over six years now, and I feel every keystroke. I do use speakers so I can hear the clicking, and I can’t type fast unless I have the speakers on. So I guess I would say that acoustic feedback is important.

What are some examples of hardware platforms that you think might have the opportunity to really create some new and exciting touch-based markets? It is hard not to admire the things that Apple is doing.

Tell us what you think are the biggest technological hurdles to really creating a “natural user interface”. Supporting standard software is the thing we have done. In the end, the point of the DiamondTouch table is to support people working together face-to-face, having discussions and making decisions. It is less about a “cool” display or “natural user interface” and more about the people sitting at the table. My father was a bass player in a big band, and while the bass is the backbone of the band, I remember him telling me that a good bass player is not really noticed, that the best compliment was no comment. Maybe that should be the goal for NUI.




Putting the “Touch” in Multi-Touch An in-depth look at the future of interactivity

by Gary Barrett

Gary Barrett, chief technology officer, founded Touch International in 2002. He has held executive staff/director positions at Touch Technology, Carroll Touch (now part of Elo TouchSystems), Gunze Corporation, and The Graphics Technology Company. Barrett holds five key patents related to analog resistive and capacitive touch sensing technology, and has consulted in the development of touch panels with companies around the world. He founded two touch development companies, which he later sold. AMP (now Elo TouchSystems) acquired his first company, and 3M Touch Systems (formerly MicroTouch) acquired his second.

Multi-touch touch screens are fast becoming the standard for touch interface systems. This white paper examines what the term “multi-touch” means, what it does, and how it is used.

Background: The term multi-touch refers to the ability to register more than one touch on a touch screen at the same time. Multi-touch capability is not new. The earliest forms of touch screens -- matrix resistive and IR (infrared) -- were both capable of generating two coordinates; however, due to “ghosting”, both of these touch technologies registered two touches as an error instead of making use of the data. It was not until Apple introduced the iPhone that two or more touches became anything more than a problem.

For the last 25 years, the two most common touch technologies -- analog resistive and surface capacitive -- reported only one X,Y touch coordinate. In fact, if there were two touches at the same time, the technology could not even recognize that there were two fingers on the sensor; it would instead report the average of the two touches, a phantom point. Pseudo-multi-touch systems, most notably the games produced by Merit, exploited the fact that it was practically impossible for two people to touch at exactly the same instant, so they would extrapolate the second touch from the first.

Scanning: Today, most multi-touch technologies are scanned. This means that the touch sensor (the thing you touch) is made up of rows and columns, most often rows on a top layer and columns on a bottom layer. Each row and column is connected to (at least) one conductive wire that runs to the electronics. The electronic controller turns on one row and one column at a time and decides whether a touch is occurring at that intersection. For example, with column 1 turned on, the controller starts with row 1, then checks row 2, and so on until it has checked all rows for touches on column 1. Then the controller moves to column 2 and checks all of the rows again. Once all of the row and column combinations have been scanned, the process starts over. There are more efficient variations of this row-and-column scanning method, but this is essentially how all of today's multi-touch systems work.
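A minimal sketch of that scan loop, with an assumed read_intersection(row, col) function standing in for the sensing electronics; names and grid size are illustrative, not from any particular controller.

    NUM_ROWS, NUM_COLS = 16, 16

    def scan_once(read_intersection):
        """One full pass: drive each column, sense every row on it."""
        touches = []
        for col in range(NUM_COLS):        # turn on one column at a time
            for row in range(NUM_ROWS):    # check row 1, row 2, ...
                if read_intersection(row, col):
                    touches.append((row, col))
        return touches                      # then the process starts over

Real controllers repeat this pass continuously, so the scan time bounds the report rate.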

Two Touch and Ghosting: The reason multi-touch took so long to become accepted was a phenomenon called ghosting. Ghosting means the controller cannot determine which of two candidate coordinate pairs is the “right” one. It occurs whenever two touches are diagonal to each other, which is almost always the case; the only time ghosting does not happen is when the two touches lie on the same row or the same column. Here is why:


You can see that the touches are on row 5, column 3, and on row 2, column 6. The electronics can sense that rows 2 and 5, and columns 3 and 6, are active, so two touches are occurring. However, the electronics would sense the same rows and columns if the touches were at row 5, column 6 and row 2, column 3. So for 25 years this inability to determine which two points were real caused customers to reject multi-touch, and caused touch manufacturers to report two touches as an error state instead of useful data.
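In code, the ambiguity looks like this (values from the example above; the snippet is only illustrative):

    # The electronics see only WHICH rows and columns are active, so two
    # diagonal touches produce four equally plausible intersections.

    active_rows, active_cols = {2, 5}, {3, 6}
    candidates = [(r, c) for r in active_rows for c in active_cols]
    print(candidates)   # (2,3), (2,6), (5,3), (5,6)
    # The real pair could be {(5,3), (2,6)} or {(5,6), (2,3)}; the row
    # and column data alone cannot say which. Only touches sharing a row
    # or a column escape the ambiguity.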

The Genius of Apple: It is reported that the first versions of the iPhone multi-touch sensors had the ghosting “problem”. The phone made use of two multi-touch gestures – “pinch” and “expand”. No doubt you are familiar with making an image larger or smaller by touching its corners and moving your fingers apart to expand, or closer together to pinch. To do this – notice from the drawing above – the electronics do not need to know which two corners are being touched, only that the distance between the touches is growing or shrinking. So this super-cool feature of the iPhone was built from data that had previously been rejected as a problem!
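A sketch of why this works despite ghosting: the spread between the extreme active rows and columns changes the same way whichever diagonal pair is real. The values below are illustrative.

    def spread(rows, cols):
        """A ghosting-proof measure of how far apart two touches are."""
        return (max(rows) - min(rows)) + (max(cols) - min(cols))

    before = spread({2, 5}, {3, 6})   # fingers at gesture start
    after = spread({3, 4}, {4, 5})    # fingers have moved closer
    print("pinch" if after < before else "expand")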

True Multi-Touch: Once Apple had demonstrated a use for two touches at the same time, the race was on for more applications that could exploit the feature. Microsoft had already developed a product called “Surface”, whose applications called for the ability to unambiguously determine the exact location of up to ten touch points at the same time. In the subsequent release of Microsoft Windows 7 (Tablet version), which supported multi-touch touch screens, the touch coordinates were required to be absolute.

Thus, the term “multi-touch” applies to two or more touches occurring at the same time, with the capability of knowing the exact position of each touch (no ghosting). Most multi-touch systems can track up to ten fingers at the same time, but the Microsoft requirement is just two.

Multi-Touch Sensing Technology: Today, there are three technologies in common use that can sense multiple touches.

The most common multi-touch technology, projected (or “projective”) capacitive – pro-cap or p-cap for short – is used in nearly all cell phones and e-readers. Projected capacitive is the most widely used because, in addition to sensing multi-touch, it offers good display appearance and a sensor that never wears out.

A second technology is MARS, the Multi-Touch Analog Resistive System, also called AMR. This is an older technology – essentially a 4-wire resistive sensor cut up into many small 4-wire touch screens. Because it is pressure sensitive, it works best with pen-input systems.

The best choice for a very large display is an optical system using cameras. There are two versions: one uses small cameras in the corners of the flat-panel display (SMART/NextWindow), and the other uses an overhead (or sometimes underneath) camera with a projector (Microsoft Surface).

Multi-Touch Resolution and Speed: Most touch screens, including multi-touch sensors, generate a resolution of 1024x1024, or ten bits. The way the coordinates are generated differs among the three multi-touch technologies, but all of them require substantial computing power to do it for up to ten fingers at the same time. Human perception of “instant” is considered to be one-quarter of a second. To leave time for the first point to be generated, processed by the computer, and then shown on the display, the first touch point needs to be generated within a tenth of a second. This requirement is the same for multi-touch systems, which therefore need more computing power. Some users have noticed a delay when ten fingers are simultaneously drawing on the screen. Typically this is not because of an inherent delay in the touch system, but because the host computer and its video adapter also need to be more powerful to accommodate the increase in data.
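As back-of-envelope arithmetic: only the quarter-second threshold comes from the text; the split between controller time and host/display time below is an assumption for illustration.

    PERCEIVED_INSTANT = 0.250   # seconds: human threshold for "instant"
    HOST_AND_DISPLAY = 0.150    # assumed: processing + drawing time
    controller_budget = PERCEIVED_INSTANT - HOST_AND_DISPLAY
    print(f"controller must report within ~{controller_budget*1000:.0f} ms")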


Software Interface: An early problem with multi-touch was how to report up to ten touches at the same time. Microsoft resolved (some would say, benevolently dictated) this by standardizing the way multiple input points are interpreted by the computer. To make it easy to integrate single-touch displays into the computer environment, the X,Y coordinates (which are all that touch screens produce) were treated as an “absolute” mouse. A regular mouse is a “relative” device that just tells the computer which direction to move the cursor. As an absolute mouse, the touch screen tells the computer where to move the cursor (under the finger, of course) by acting as though the mouse had been physically moved to the touch point.

Multi-touch screens are treated as though they were multiple absolute mice connected to the computer at the same time. Thus, if the touch-screen electronics can generate ten simultaneous touches, the computer sees ten mice connected to it. Other operating systems, notably Linux, have adopted the same method.
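A small sketch of the relative-versus-absolute difference; a multi-touch screen then behaves like several of the absolute devices, one per finger. Names and event shapes are illustrative.

    cursor = (512, 384)

    def relative_move(dx, dy):      # regular mouse: move BY a delta
        global cursor
        cursor = (cursor[0] + dx, cursor[1] + dy)

    def absolute_move(x, y):        # touch screen: move TO the point
        global cursor
        cursor = (x, y)

    relative_move(10, -5)           # nudges the cursor
    absolute_move(200, 300)         # cursor jumps under the finger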

Multi-touch Zones: Projected capacitive and MARS systems generate analog (continuous) coordinates from zones formed by rows and columns – analog output from digital inputs. The rows and columns are generally 7-10 mm wide, which is the best size for generating 1024x1024 coordinates quickly. In addition, the zones must be small enough that only one finger fits in each zone; if the zones are larger, two fingers can enter one zone, making it impossible for the system to distinguish two separate touches.
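One plausible way to get analog positions from discrete zones is centroid interpolation across neighboring rows; the readings below are invented for illustration, and the technique is a common one rather than any specific vendor's.

    # A fingertip spans neighboring 7-10 mm rows, so a weighted average
    # (centroid) of the per-row signal strengths lands between rows.

    def centroid(signals):
        """signals: {row_index: strength}. Returns a fractional row
        position, later scaled into the 0-1023 coordinate range."""
        total = sum(signals.values())
        return sum(row * s for row, s in signals.items()) / total

    print(centroid({3: 0.2, 4: 1.0, 5: 0.6}))   # ~4.22, finer than a row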

Making Multi-Touch Bigger: Most multi-touch applications are for displays of 12.1 inches and smaller, excluding camera systems, which are typically used for displays of 32 inches and larger. Producing multi-touch screens in the 12-to-32-inch range presents two problems. First, it takes multiple processors to generate touch points quickly. Second, with rows and columns in the 10 mm range, there are a large number of connections. These two factors tend to increase the price of multi-touch displays; however, the durability they offer to kiosk, point-of-sale, and transaction machines justifies the cost.

Multi-Touch Gestures: A gesture is a description of what happens when two or more touches do “something”. Originally, these actions were coded in the touch-controller chip, but now that multi-touch drivers are available in Windows and Linux, that function is left to those standard interfaces. Thus, the touch controllers once again just generate coordinates, albeit now up to ten pairs simultaneously. Only three gestures are unique to multi-touch systems: pinch, expand, and rotate. The others – flick, hold, tap, and pan – can be done with older technologies, but are easier with a projected capacitive system.
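A sketch of telling those three uniquely multi-touch gestures apart from two tracked points sampled at two moments; the thresholds are illustrative assumptions.

    import math

    def classify(a1, a2, b1, b2, dist_tol=5.0, angle_tol=0.1):
        """Compare the finger-to-finger segment before (a) and after (b)."""
        d_before, d_after = math.dist(a1, a2), math.dist(b1, b2)
        ang_before = math.atan2(a2[1] - a1[1], a2[0] - a1[0])
        ang_after = math.atan2(b2[1] - b1[1], b2[0] - b1[0])
        if d_after < d_before - dist_tol:
            return "pinch"
        if d_after > d_before + dist_tol:
            return "expand"
        if abs(ang_after - ang_before) > angle_tol:
            return "rotate"
        return "none (tap/hold/pan are not uniquely multi-touch)"

    print(classify((100, 100), (200, 200), (120, 120), (180, 180)))  # pinch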

Multi-Dimensional Multi-Touch: The future of multi-touch is to add the distance (proximity, or “Z”) dimension to touch screens. The now-familiar video of Jeff Han (http://www.youtube.com/watch?v=89sz8ExZndc) waving his hands over the display to control images is coming to smaller devices. Similar interactions are common in TV news and sports broadcasts. Some game systems already use cameras to let multiple players control the action without wireless remote controls. This proximity sensing is being built into a number of smaller devices, both to detect the presence of a person and to control actions without touching the display.

Conclusion: Thanks largely to the iPhone’s widespread success and the release of Windows 7, multi-touch technology has been firmly embedded into the mainstream market and is here to stay. In the years to come, multi-touch systems are expected to show strong growth due to improved designs, reduced component prices and better software development.


Parting Shot…

Designed by Petr Kubik, these electronic drums take knee-tapping and air-drumming actions and put sound to them. The electronic percussion creates drum beats with each hit, letting you drum along to your favorite tunes without the difficulty of actually knowing how to play. The surface also produces a colorful hand impression based on where and how hard you hit the drum, using Hypercolor-style technology.



Last Word: RIM gets serious about user interface by Jin Kim

Jin Kim is the founder and president of DisplayBlog, which brings together news, information, and analysis from the high-tech display industry to help, educate, and entertain. Combining the experience and knowledge gained as senior marketing manager at LG Display and as director of TFT LCD market research at DisplaySearch, Kim brings a fresh look at the display industry and at products such as LCD TVs, LCD monitors, and notebook PCs. Kim received a BA from UC Berkeley and an MBA from Claremont Graduate University.

Research In Motion (RIM) announced that it has acquired TAT (The Astonishing Tribe), an interface design firm based in Sweden. RIM is getting serious about user interface design, it seems. I'm not a BlackBerry user, but I do appreciate the simple icons.

The icons are simple but have enough detail to easily figure out what they are for. There are some problems in terms of layout though. In the photo above there are six icons per row. That’s too many.

Touch-enabled BlackBerry icons should be bigger to match the size of a finger. The iPhone UI probably has the best implementation: four icons per row on a 3.5-inch display. BlackBerry should do the same.

On the flip side, I would love to see iPhone icons in BlackBerry style.

There is one technical argument I'd like to make. The outline-styled simple icons on the BlackBerry are perfectly suited to an OLED display. Here's why: an OLED pixel consumes zero power when it is black. A BlackBerry sporting the current UI design combined with a Super AMOLED display would last a lot longer between charges. The UI would look fantastic, too, with incredibly deep blacks.

Some neat ideas from a video prepared by TAT: http://www.youtube.com/watch?v=g7_mOdi3O5E&feature=player_embedded

