
LIGHT-TO-CAMERA COMMUNICATION FOR CONTEXT-AWARE MOBILE SERVICES IN EXHIBITS

Xin-Lan Liao (廖歆蘭), Kun-Hsien Lin (林昆賢), Yi-Chang Wang (王亦璋),
Lih-Guong Jang (張立光), Yi-Yuan Chen (陳一元), Chi-Neng Liu (劉啟能),
Po-Yu Huang (黃博裕), Tai-Shen Ho (何台生)

Industrial Technology Research Institute, Hsinchu, Taiwan

E-mail: {XLLiao, KHL, ycw, LihGuoung, yiyuan, joeliu, PoyuHuang, hots}@itri.org.tw

ABSTRACT

This paper proposes a context-aware mobile service for a better user experience in exhibits. Given that lighting is one of the essentials of exhibits, light-to-camera communication is implemented to enable interactions between viewers and displays through imperceptible optical markers. Light-to-camera communication also provides an alternative that overcomes some limitations of QR code and RFID technologies in exhibits. The proposed system consists of two major components: an LED transmitter and an image-sensor-based receiver. The LED driver circuit, consisting of a MOS transistor, a capacitor, and an inductor, controls the transmitter to output a square wave at a designated frequency. The receiver exploits the rolling shutter mechanism and performs image processing to detect the flashing LED for subsequent user interaction. Experimental results and a real-world application verify the efficacy and usability of the proposed service.

Keywords: Light-to-Camera Communication, Visible Light Communication, Context-Aware, Mobile Devices.

1. INTRODUCTION

As mobile services evolve over time, adapting their presentation and content to one's context of use is necessary to improve the usability and effectiveness of a service and its related applications [1]. This paper aims to build a context-aware mobile service for a better user experience in exhibits. Abowd et al. [2] defined context as

"any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves."

Therefore, capturing context information is essential for subsequent interpreting and reasoning functionality. Several sensing techniques have been proposed to enrich viewers' experiences in exhibits. In practice, radio-frequency-based systems such as radio frequency identification (RFID), Bluetooth low energy, and wireless LAN can facilitate indoor positioning [3, 4, 5] or deliver information stored in content management systems (CMS) [6]. Moreover, image recognition algorithms that detect quick response codes (QR codes) [7, 8] or match the artwork [4] allow context-aware services to infer user interests.

These conventional techniques, however, have certain limitations on the interactions between viewers and displays. Specifically, radio-frequency-based systems provide weak visual association, so viewers can hardly connect a transmitter's identity with their line of sight. While the QR code is a low-cost machine-readable marker that enables access to the CMS, its print size determines the scan distance and occupies exhibit space. We therefore initiate an innovative approach that adopts light-to-camera communication [9, 10, 11] and applies it to exhibition scenarios, since it takes advantage of the light-emitting diode (LED) fixtures in exhibits and is imperceptible to human eyes.

Light-to-camera communication utilizes the rolling shutter mechanism of a camera to capture optical pulses generated by a specially designed flashing LED. Notably, different frequencies of such optical pulses can serve as distinct markers and lead to their respective entries in the CMS. Light-to-camera communication benefits exhibits from three perspectives: a) the lights shed on the displays coincide with viewers' line of sight; b) the LED keeps its original function, as it can be modulated at a flashing frequency higher than human eye sensitivity; c) lighting is one of the essentials of exhibits and thus takes no extra space.

This paper presents the design of a transmitter and a receiver for light-to-camera communication. To tailor the application to the various specifications of mobile devices, we further investigate camera parameters and propose appropriate settings accordingly.


With respect to user experience, the experiments show that light-to-camera communication outperforms the QR code, as it supports a longer scan distance and a wider scan angle. Moreover, the launch of the proposed context-aware mobile service at 2016 Expo: Discovering Technology Treasures verifies its usability in exhibits. This paper is organized as follows: Section 2 introduces some background on the rolling shutter mechanism; Section 3 specifies the light-to-camera communication system; Section 4 discusses the experimental results; Section 5 concludes the work and suggests some future research directions.

2. BACKGROUND

Two image sensor types are widely used in cameras: global shutter and rolling shutter. In global shutter mode, every pixel is exposed and digitized at the same time, as shown in Fig. 1(a). Instead of simultaneous exposure, a rolling shutter handles readout row by row (cf. Fig. 1(b)) and consequently increases frame rates. Light-to-camera communication exploits the rolling shutter mechanism, which is a common specification for camera phones on the market. The row-by-row exposure samples the optical signal multiple times within a single captured image, leaving bright and dark strips if the light source flashes before the entire frame is completed. Figure 2 demonstrates the image view of the optical pulses, where the stripe pattern reflects the alternation of brightness levels.

(a) Global shutter. (b) Rolling shutter.
Figure 1: Comparison between global shutter and rolling shutter.

Figure 2: Capturing a rapidly flashing LED with a rolling shutter camera.
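The row-by-row sampling described above can be sketched in a few lines of Python. This is an illustrative simulation, not code from the paper; the flash frequency, per-row readout time, and frame height are assumed values.

```python
# Illustrative sketch (not from the paper): simulate how a rolling shutter
# turns a rapidly flashing LED into bright and dark stripes. The flash
# frequency, per-row readout time, and row count below are assumptions.

def rolling_shutter_rows(flash_hz, row_readout_s, n_rows):
    """Sample a 50%-duty square-wave light source once per row.

    Row i is read out at time i * row_readout_s; it comes out bright (1)
    when the LED is in the "on" half of its flash cycle at that instant.
    """
    rows = []
    for i in range(n_rows):
        phase = (i * row_readout_s * flash_hz) % 1.0  # position in one flash cycle
        rows.append(1 if phase < 0.5 else 0)
    return rows

# A 280 Hz flasher on a sensor that reads one row every 30 microseconds:
rows = rolling_shutter_rows(flash_hz=280.0, row_readout_s=30e-6, n_rows=1080)
stripes = sum(1 for a, b in zip(rows, rows[1:]) if a != b)  # bright/dark transitions
```

Because one frame samples the light once per row, a single image already contains many on/off transitions, which is what makes the stripe pattern decodable.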

The frequency of the optical pulses is inversely proportional to the strip width and the readout duration. According to [9], given the strip width W and the per-row readout duration T_r, the transmitted frequency f can be estimated by

    f = 1 / (2 * W * T_r).

This estimation aids context-aware services in identifying each dedicated transmitter. Nevertheless, gauging W and T_r precisely is challenging due to the diverse specifications of image sensors in mobile devices. Moreover, LED fixtures must be refitted to emit optical pulses at the designated frequency.
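As a quick sanity check of this relation, consider a minimal sketch; the 30 µs per-row readout time is an assumed, device-specific value, which is precisely the quantity that is hard to pin down across sensors.

```python
def estimate_frequency(strip_width_rows, row_readout_s):
    """Estimate the transmitter frequency from f = 1 / (2 * W * T_r),
    where W is the strip width in pixel rows and T_r is the per-row
    readout duration of the image sensor."""
    return 1.0 / (2.0 * strip_width_rows * row_readout_s)

# 60-row stripes on a sensor reading one row every ~30 microseconds
# (an assumed value; T_r varies across devices):
f = estimate_frequency(strip_width_rows=60, row_readout_s=30e-6)  # ~278 Hz
```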

3. SYSTEM OVERVIEW

The proposed context-aware mobile service adopts light-to-camera communication. Figure 3 illustrates a system overview. The transmitter (the LED fixtures in exhibits) has a driver circuit that controls the flashing frequency, and the receiver (an image sensor or a smartphone's built-in camera) captures the stripe pattern to decode the message and deliver the corresponding information from the CMS. This paper employs on-off keying with a 260 Hz–300 Hz modulating frequency; the range is defined by the receiver's capability of collecting sufficient samples for decoding. Furthermore, the LED keeps its original function because its optical pulses flash at a higher frequency than the human eye can perceive.

Figure 3: System overview.
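The paper does not specify how individual frequencies in the 260 Hz–300 Hz band are assigned to CMS entries. The following sketch assumes a hypothetical lookup table with nearest-frequency matching and a tolerance band; the entry names, nominal frequencies, and tolerance are all illustrative assumptions.

```python
# Hypothetical sketch: resolve a decoded flash frequency (within the
# 260-300 Hz on-off-keying band) to a CMS entry. The entry names,
# nominal frequencies, and tolerance are illustrative assumptions.

FREQ_TOLERANCE_HZ = 2.0

def lookup_exhibit(measured_hz, cms_table):
    """Return the CMS entry whose nominal frequency is closest to the
    measured one, or None if no nominal frequency is within tolerance."""
    nominal, entry = min(cms_table.items(), key=lambda kv: abs(kv[0] - measured_hz))
    return entry if abs(nominal - measured_hz) <= FREQ_TOLERANCE_HZ else None

cms_table = {260.0: "display-1", 280.0: "display-2", 300.0: "display-3"}
```

Spacing the nominal frequencies well apart relative to the tolerance keeps estimation error in W and T_r from mapping a scan to the wrong display.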

3.1. Design of the Receiver

Figure 4: Receiver flowchart.

Figure 4 describes how information flows through the mobile device at the receiver end. In this scenario, a smartphone is used as the receiver. Prior to capturing images, the camera parameters must be adjusted to ensure that the stripe patterns are clearly visible in the frames. Many camera properties affect the quality of the captured image. The significant camera control properties for a mobile phone receiver are described as follows [11]:

Exposure Control. Exposure time is the most critical control for capturing clear stripe patterns because it determines how long each pixel collects photons. The shorter the exposure time, the clearer the stripe patterns. However, if the exposure time is set too short, it will adversely dim the captured images. Figure 5 shows two images captured with different exposure times; it is clearly observed that a shorter exposure time captures more distinct strips.

(a) Exposure time: 1/160. (b) Exposure time: 1/5000.
Figure 5: Images captured with different exposure times (seconds).
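The effect in Fig. 5 can be reproduced numerically: each row integrates the square-wave light over its exposure window, so windows much shorter than half a flash cycle land fully "on" or fully "off", while long windows average several cycles. This is an illustrative simulation; the flash frequency and per-row readout time are assumed values.

```python
# Illustrative sketch: why a short exposure yields sharper stripes.
# The flash frequency and per-row readout time are assumed values.

def row_brightness(t_start, exposure_s, flash_hz, steps=200):
    """Fraction of the exposure window during which the LED is on
    (numeric integration of a 50%-duty square wave)."""
    on = sum(1 for k in range(steps)
             if ((t_start + exposure_s * (k + 0.5) / steps) * flash_hz) % 1.0 < 0.5)
    return on / steps

def stripe_contrast(exposure_s, flash_hz=280.0, row_readout_s=30e-6, n_rows=400):
    """Brightest minus darkest row across one frame."""
    vals = [row_brightness(i * row_readout_s, exposure_s, flash_hz)
            for i in range(n_rows)]
    return max(vals) - min(vals)

sharp = stripe_contrast(exposure_s=1 / 5000)  # like Fig. 5(b): near-full contrast
faint = stripe_contrast(exposure_s=1 / 160)   # like Fig. 5(a): washed-out stripes
```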

Nevertheless, another critical issue arises during actual implementation: smartphone operating systems differ significantly in how camera parameters can be controlled. It is relatively straightforward to control most camera parameters on iOS using the native camera API, so we can easily set the exposure time to 1/2000 seconds on iOS. On Android, on the other hand, every brand has its own camera specification, and most Android phones do not even allow control over the exposure time. The alternative is therefore to adjust the exposure compensation. Figure 6 shows two images captured by an HTC Desire EYE under different exposure compensation values. When the compensation is set to 0, the strips are very obscure; when it is set to -2 or less, the stripe pattern becomes much clearer.

(a) Compensation value: 0. (b) Compensation value: -2.
Figure 6: Images captured by an HTC Desire EYE under different exposure compensation values.

Film Speed. FPS (frames per second) denotes the number of images a camera captures per unit time. A high FPS makes it possible to detect optical pulses of high frequency. Therefore, the principle is to set the highest FPS value the mobile device can provide.

ISO Setting. ISO determines the sensitivity, or gain, of the image sensor. The higher the ISO value, the more brightness and noise are captured in the images. Figure 7 shows images captured under different ISO settings. A higher ISO value increases the brightness of the captured image, which directly affects the viewer's visual experience. It is advised to set the ISO value to the highest level the mobile device can provide.

(a) ISO value: 50. (b) ISO value: 1600.
Figure 7: Images captured under different ISO values with the same exposure time.

Once the camera controls are set, it is critical to keep image processing minimal in order to speed up the response time. First, the center region of each captured image is cropped to downsize the input data; this ROI is used to detect the patterns in the captured images. Next, we utilize a well-known pitch detection algorithm, the YIN-based stripe width estimation method [12], to enhance the accuracy of the estimated strip width, and then extract the data encoded by the LED modulation. Once enough stripe patterns are received, their combination serves as the identity that leads to the entry in the CMS.
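A simplified, pure-Python stand-in for this step is sketched below. The actual pipeline's parameters are not spelled out in the paper; the ROI fraction, the search range, and the use of the unnormalised YIN difference function are assumptions.

```python
# Simplified sketch of the receiver pipeline: crop a centre ROI, collapse
# it to one brightness value per row, then find the stripe period with a
# YIN-style squared-difference search over row lags [12].

def rows_from_roi(image, roi_fraction=0.5):
    """image: list of rows (lists of pixel values). Average the middle
    columns of each row into a single per-row brightness value."""
    w = len(image[0])
    lo = int(w * (1 - roi_fraction) / 2)
    hi = int(w * (1 + roi_fraction) / 2)
    return [sum(row[lo:hi]) / (hi - lo) for row in image]

def stripe_period(signal, max_lag):
    """Lag minimising the mean squared difference d(lag) -- the YIN
    difference function without its cumulative-mean normalisation."""
    best_lag, best_d = None, float("inf")
    for lag in range(2, max_lag):
        n = len(signal) - lag
        d = sum((signal[i] - signal[i + lag]) ** 2 for i in range(n)) / n
        if d < best_d:
            best_lag, best_d = lag, d
    return best_lag

# Synthetic frame: stripes repeating every 20 rows (10 bright, 10 dark).
frame = [[255 if (i // 10) % 2 == 0 else 0] * 64 for i in range(200)]
period = stripe_period(rows_from_roi(frame), max_lag=60)
```

The recovered period in rows, together with the device's per-row readout time, is what feeds the frequency estimate of Section 2.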

3.2. Design of the LED Transmitter

This paper presents a hardware module design using a metal-oxide-semiconductor (MOS) transistor in the LED driver circuit and an ATmega328 as the digital modulation signal generation unit. The LED fixtures are used for general lighting as well as for digital signal transmission. An n-channel MOS (NMOS) transistor has its load on the drain side, and a p-channel MOS (PMOS) has its load on the source side. A high channel on-resistance RDS(on) causes the temperature to rise, which could damage the MOS transistor through thermal overloading; RDS(on) is also critical to the switching efficiency of the MOS transistor. Figure 8 illustrates the circuit and the resultant waveform. We can observe that switching on the LED triggers an initial rise in voltage before the steady state is reached. This phenomenon is considered noise in the implementation of light-to-camera communication.

Figure 8: Original driver circuit and the damped oscillations when switching on the LED.


As a result, a capacitor is connected in parallel to filter out the noise. Figure 9 shows that the damped oscillations are removed. Moreover, we add an inductor to the aforementioned circuit, and the output signal then becomes a clean square wave, as shown in Fig. 10.

Figure 9: Resultant waveform when the MOS transistor and a 6800 pF capacitor are connected in parallel.

Figure 10: Resultant waveform when the MOS transistor and a 330 uH inductor are connected in series.
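A quick sanity check, not part of the paper's own analysis, is that the corner frequency of the added L and C should sit far above the 260 Hz–300 Hz modulation band, so fast switching transients are suppressed while the square wave itself survives. The component values are the ones given in Figs. 9 and 10.

```python
import math

# Sanity-check sketch: LC corner frequency of the filter components from
# Figs. 9 and 10 versus the 260-300 Hz modulation band.

C = 6800e-12   # 6800 pF capacitor, in farads
L = 330e-6     # 330 uH inductor, in henries

f_corner_hz = 1.0 / (2.0 * math.pi * math.sqrt(L * C))  # on the order of 100 kHz
```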

However, the signal transmission time remains an issue when a mobile device is used as the receiver. Table 1 lists the average time to read a frame at the same image resolution (1920x1080) on the test devices. Some devices take more time to read a frame, which limits the signal transmission time of light-to-camera communication. To be specific, if the flashing frequency of the LED is higher than a certain limit, some devices will lose signals. Therefore, considering the test devices, our implementation sets the signal transmission time to 80 ms.

Table 1: The average frame read time on different test devices.
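The trade-off can be made concrete with a small sketch of how per-frame read time bounds the number of frames a receiver captures within the 80 ms window. The two read times below are assumptions for illustration, not values from Table 1.

```python
# Illustrative sketch: how per-frame read time bounds the number of whole
# frames a device can capture within the 80 ms transmission window.
# The read times below are assumptions, not values from Table 1.

def frames_in_window(window_ms, frame_read_ms):
    """Whole frames a device can read during one transmission window."""
    return int(window_ms // frame_read_ms)

fast_device = frames_in_window(80, 16)  # ~60 fps class device
slow_device = frames_in_window(80, 40)  # a device that is slow to read a frame
```

A device that captures only one or two frames per window has far fewer chances to see a clean stripe pattern, which is why slow frame reads translate into lost signals.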

4. EXPERIMENTAL RESULTS

In this section, we evaluate our transmitter and receiver design in several respects. First, we compare light-to-camera communication with the QR code, because both acquire information through image processing. Second, we launch a context-aware mobile service at 2016 Expo: Discovering Technology Treasures, Kaohsiung, Taiwan, and discuss its performance on different mobile devices.

4.1 Comparison with QR code

We use a showcase to simulate the exhibit environment. Figure 11 shows the appearance of this showcase. The volume of the showcase is 40x40x180 cm3 and that of the display is 16x16x15 cm3. An LED transmitter is embedded in the top of the showcase, and the distance between the LED fixture and the display is 35 cm.

Figure 11: The appearance of the showcase.

The following experiments consider a bright environment and a dim environment (Fig. 12). The bright environment has an ambient illuminance of 60 lux, with an illuminance of 5086 lux on the display (Fig. 13(a)). Similarly, the dim environment has an ambient illuminance of 10 lux, with an illuminance of 3506 lux on the display (Fig. 13(b)).


(a) Bright environment. (b) Dim environment.
Figure 12: Environmental setting.

(a) Bright environment. (b) Dim environment.
Figure 13: The illuminance on the display in the bright and dim environments.

Table 2 shows the results of our measurements. In the same scene, the scan distance of our system reaches 4 meters. Although a QR code with a bigger print size enables a longer scan distance, it distracts the viewers' attention from the display; by contrast, the proposed system is part of the furnishings of the exhibit. Moreover, a dim environment has little influence on light-to-camera communication.

A wider scan angle implies that more viewers can access the CMS simultaneously, which greatly improves the user experience in exhibits. Compared with the QR code's limited scan angle, the proposed system enables viewers to scan from any angle. Figure 15 demonstrates the scan-angle limitations of the QR code and the flexibility of our service: the optical pulses can clearly be detected from various angles. Light-to-camera communication therefore offers a robust service and can effectively reduce viewers' queueing time.

Table 2: Measurements of the QR code and our system in different scenes.

(a) QR code size: 1.5x1.5 (cm). (b) QR code size: 2x2 (cm).
Figure 14: QR codes of different sizes.

(a) (b)
(c) (d) (e)
Figure 15: (a) and (b) show the maximal scan angles of the QR code. (c), (d), and (e) detect the optical pulses of our service from the left-hand side, front, and right-hand side of the display, respectively.


Table 3: Number of scans and success rate reached by the chosen viewers on different mobile operating systems and mobile devices.

iOS                                          Android
model       success   fail   success rate    brand      success   fail   success rate
            (#scans) (#scans)     (%)                   (#scans) (#scans)     (%)
iPhone 5        7        1       87.50       ASUS          15        5       75.00
iPhone 6       53       12       81.54       HTC            4        7       36.36
iPhone 6+      21        2       91.30       InFocus        3        0      100.00
iPhone 6S      35        4       89.74       LG             3        0      100.00
iPhone 6S+      4        2       66.67       OPPO           3        3       50.00
iPad Air        3        2       60.00       SAMSUNG       18        6       75.00
iPad Mini       3        1       75.00       Sony           3       14       17.65
                                             Xiaomi         0        7        0.00
total         126       24       84.00       total         49       42       53.85

4.2 Real-World Application

A context-aware mobile service was launched during an exhibit named 2016 Expo: Discovering Technology Treasures, which introduces innovative technology development programs to the public. To meet the requirements of its lighting design, we selected 40-watt LED lights with a color temperature of 4000 K and 15° optical lenses, and improved the LED driver circuit according to Subsection 3.2 for the transmission of the designated frequencies (Fig. 16). Eight specially designed transmitters were installed at this event. The hanging lights were two meters above the display tables, directing viewers' attention toward the objects and serving as their respective optical markers. Viewers could scan these objects with their own mobile devices and receive the corresponding information on demand.

The number of scans reached 917 during the exhibit. Table 3 presents the results of 49 viewers, who either requested information about the same three displays or failed in scanning. Unsuccessful scans with no record of demodulation are excluded, since they imply that users may have closed the application or trained their cameras on scenes without optical markers. Among the 241 sampled scans, the proposed mobile application successfully identified the LED transmitters 175 times. Furthermore, we achieved a success rate of 84% on iOS-based mobile devices, showing that the camera parameters were under control.

Viewers using Android-based devices were a minority at this exhibit; still, ASUS and SAMSUNG devices yielded a 75% success rate. According to the log, the following Android-based devices were able to offer our context-aware mobile service: ASUS ZenFone 2 Z008D, ASUS ZenFone 2 Z00AD, ASUS ZenFone Selfie Z00UD, SAMSUNG Galaxy E7, SAMSUNG Galaxy Note 3, SAMSUNG Galaxy Note 4, and SAMSUNG Galaxy J. However, some devices failed to perform a successful scan, e.g. the Xiaomi Mi 3, Sony Xperia Z5, HTC Desire 626, ASUS ZenFone 2 Laser Z00LD, SAMSUNG Galaxy A5, SAMSUNG Galaxy S6, and SAMSUNG Galaxy Note 5, indicating that more camera parameters should be taken into consideration for wider adaptation to diverse hardware/software specifications.

Figure 16: LED light with the proposed driver circuit installed at 2016 Expo: Discovering Technology Treasures.

5. CONCLUSIONS

This paper proposes a context-aware mobile service for a better user experience in exhibits. Given that lighting is one of the essentials of exhibits, light-to-camera communication is implemented to enable interactions between viewers and displays through imperceptible optical markers. We control the camera parameters so as to tailor the service to the various specifications of mobile devices; specifically, the exposure time and the exposure compensation are tuned for iOS-based and Android-based mobile devices, respectively. In addition, the design of the LED transmitter is presented, where the driver circuit, consisting of a MOS transistor, a capacitor, and an inductor, outputs a square wave at the designated frequency. The experiments reveal that light-to-camera communication surpasses the QR code in terms of scan distance and scan angle. The proposed context-aware mobile service supports up-to-date iOS-based mobile devices and some Android-based devices, verifying its usability in practice. Future work may include control over more camera parameters and adaptive parameter tuning. Moreover, adopting a systematic encoding scheme would enhance the scalability of the context-aware mobile service.

REFERENCES

[1] M. Baldauf, S. Dustdar, & F. Rosenberg (2007). A survey on context-aware systems. International Journal of Ad Hoc and Ubiquitous Computing, 2(4), 263–277.

[2] G. D. Abowd, A. K. Dey, P. J. Brown, N. Davies, M. Smith, & P. Steggles (1999). Towards a better understanding of context and context-awareness. In Handheld and Ubiquitous Computing (pp. 304–307). Springer Berlin Heidelberg.

[3] L. M. Ni, Y. Liu, Y. C. Lau, & A. P. Patil (2004). LANDMARC: Indoor location sensing using active RFID. Wireless Networks, 10(6), 701–710.

[4] S. Alletto, R. Cucchiara, G. Del Fiore, L. Mainetti, V. Mighali, L. Patrono, & G. Serra (2016). An indoor location-aware system for an IoT-based smart museum. IEEE Internet of Things Journal, 3(2), 244–253.

[5] P. Bahl & V. N. Padmanabhan (2000). RADAR: An in-building RF-based user location and tracking system. In Proceedings of the 19th Annual Joint Conference of the IEEE Computer and Communications Societies (Vol. 2, pp. 775–784). IEEE.

[6] L. Caviglione, M. Coccoli, & A. Grosso (2011). A framework for the delivery of contents in RFID-driven smart environments. In Proceedings of the 2011 IEEE International Conference on RFID-Technologies and Applications (pp. 45–49). IEEE.

[7] A. Gentile, S. Andolina, A. Massara, D. Pirrone, G. Russo, A. Santangelo, S. Sorce, & E. Trumello (2012). QRouteMe: A multichannel information system to ensure rich user-experiences in exhibits and museums. Journal of Telecommunications and Information Technology, 58–66.

[8] A. Koutsoudis, F. Arnaoutoglou, & G. Pavlidis (2014). Passive markers as a low-cost method of enriching cultural visits on user's demand. Journal of Advanced Computer Science & Technology, 3(1), 12–17.

[9] H. Y. Lee, H. M. Lin, Y. L. Wei, H. I. Wu, H. M. Tsai, & K. C. J. Lin (2015). RollingLight: Enabling line-of-sight light-to-camera communications. In Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services (pp. 167–180). ACM.

[10] T. Yamazato, I. Takai, H. Okada, T. Fujii, T. Yendo, S. Arai, M. Andoh, T. Harada, K. Yasutomi, K. Kagawa, & S. Kawahito (2014). Image-sensor-based visible light communication for automotive applications. IEEE Communications Magazine, 52(7), 88–97.

[11] Y. S. Kuo, P. Pannuto, K. J. Hsiao, & P. Dutta (2014). Luxapose: Indoor positioning with mobile phones and visible light. In Proceedings of the 20th Annual International Conference on Mobile Computing and Networking (pp. 447–458). ACM.

[12] A. de Cheveigné & H. Kawahara (2002). YIN, a fundamental frequency estimator for speech and music. The Journal of the Acoustical Society of America, 111(4), 1917–1930.