
ARM E-mag 2015


The 2015 ARM E-mag covers the latest advances in ARM development tools, using CoAP for efficient data transport, and more.


Watch Video

embedded-computing.com/topics/processing

Sponsored by: ARM, Zebra Technologies, IAR Systems, and SeaLevel Systems

• ARM DesignStart for fast innovation

• How SoC growth influences software development

• How CoAP can save us from protocol obesity

2015 Volume 2 Number 1


EMAG

Featuring

Hobson Bullman, ARM

Sponsored By

© 2015 OpenSystems Media, © Embedded Computing Design. All registered brands and trademarks within the ARM E-mag are the property of their respective owners.

ARM DesignStart enables fast innovation on the cheap

By Brandon Lewis, Asst. Managing Editor

Dealing with a complex world – How SoC growth influences software development

Q&A with Hobson Bullman, ARM

IoT and efficient data transport: How CoAP can save us from protocol obesity

By Zebra Technologies


Join us at ARM TechCon! Booth 512

High performance, ease of use, complete code control, functional safety certification – use the right tools.

Smaller, faster, smarter code with IAR Embedded Workbench


ARM DesignStart enables fast innovation on the cheap

By Brandon Lewis, Asst. Managing Editor


A few weeks ahead of ARM’s TechCon, the IP design house is pushing to give embedded and IoT product developers a head start of their own.

On Tuesday ARM announced a revamp of its DesignStart portal, a web-based resource for SoC designers that accelerates design times by providing pre-commercial access to physical and processor IP for the Cortex-M0 for free, and by “free” I mean free beer, not free speech. In addition to the -M0 System Design Kit (SDK) that includes the IP, peripherals, a test bench, and software, designers can leverage the full Keil MDK development suite on a free 90-day license, and when ready for prototyping or to mix in other IP, ARM offers the relatively low-cost ($995) Versatile Express FPGA development board through the DesignStart portal as well. At production time the company has also implemented a $40,000 fast-track commercial IP licensing program through DesignStart that covers the -M0 IP, SDK, Keil tools, and a year of technical support; the “free” here being that the standardized license can hopefully “free you up of lawyers.”

The upgraded DesignStart program comes amidst a new wave of custom SoC development where designers are increasingly integrating connectivity, mixed-signal, and sensor IP alongside processing capabilities for systems at the edge.

Figure 1 | IP provided through DesignStart represents more than 15 ARM Partner foundries and 85 process node technologies ranging from 250 nm to 16 nm / 14 nm FinFET.


As the Cortex-M0 represents somewhat of an inflection point for IoT edge devices, the ability to design, simulate, and test SoCs based on the technology through a quick-start program significantly reduces barriers to entry for startups, Maker Pros, and even engineering design teams within larger companies that are working with ARM IP for the first time. Design house and foundry partnerships through DesignStart also help smooth out the commercialization process, which Ian Smythe, Director of Marketing Programs in ARM’s Processor Division, said in a briefing matches the company’s goals of creating “a designer-friendly system that enables products to get out there, experimented with, and delivered as quickly and easily as possible.”

“The IoT has made embedded sexy, and it’s a combination of several factors,” he said. “Embedded has always been a huge market, but adding connectivity to it provides multiple possibilities and we’re still at that experimental growth stage with multiple opportunities being created. That’s the opportunity around the convergence of the Internet and the embedded space, the opportunity of connecting things together and developing new innovative services,” Smythe says. “When we looked at the model we could use to support this, we knew we had a strong ecosystem and that Cortex-M was shipping in mass quantities, but how can we enable people to build SoCs for their market, and quickly?

“The [DesignStart] relaunch is intended to accelerate innovation in the embedded and IoT markets,” he continues. “ARM felt the most important thing was to be able to equip designers in a start up, custom mode with free access to the IP, so we made the Cortex-M0 system IP, as well as the Cortex-M0 SDK, downloadable for free so people can go away and design their heart out. Download the IP, stick it into the toolchain, and simulate. You can go away and design and prototype, and at the point where you’re ready, ARM provides a fast, easy, low-cost way to get access to commercial IP. It’s a short license span that doesn’t need 5 lawyers, and for 40k you can take this to production.

“If we can encourage growth with a web-based bit of IP, that’s what we want to try and see,” he adds.

The Cortex-M0 evaluation package is available for download as a pre-configured Verilog netlist, and the updated DesignStart portal also gives access to a broad range of ARM Artisan physical IP. To get started go to http://designstart.arm.com.


Dealing with a complex world – How SoC growth influences software development

Q&A with Hobson Bullman, ARM

Over the past few years, electronics systems have become increasingly complex while the software running on them has become even more complex. On the occasion of ARM’s 25th anniversary, Embedded Computing Design spoke with Hobson Bullman, General Manager of Development Solutions at ARM, about how the growth of SoCs is influencing software development, and how development tools in turn are helping software developers cope with changing requirements.

In the past 25 years, ARM has seen some major changes in electronics systems and particularly within mobile devices. Could you provide an overview of the major changes, in your view?

BULLMAN: Yes, certainly. ARM has been creating software tools as well as microprocessor designs for 25 years this year. And while we’re still doing applications processors and more specialized processors, compilers, debuggers, and models, the underlying SoC technology has become much more complex in the past couple of years.

Take a high-end mobile phone as an example. In the year 2000, a high-end mobile phone had one application processor, one DSP modem, and one simple operating system (OS). From a software point of view, one person could understand the complete software stack.


These days, however, a high-end mobile device looks very different. It has up to 10 application processors, a graphics processor, and many embedded processors for connectivity – for instance, Wi-Fi, Bluetooth, GPS, and infrared – but also cameras, accelerometers, and heart rate monitors. The chip has become larger, the OS has become more open. The system is much more complex and much more capable; it can no longer be understood by one person. Instead it requires teams of people who understand its various parts, and this has a huge impact on software and tools development, which is my specialty.

Back in 2000, did you see all of these changes coming?

We saw some of it coming. For example, we knew that multicore, heterogeneous computing, and, to an extent, 64-bit architectures would become prevalent in high-end markets. It was just a question of when.

Hence, we knew we had to change but we didn’t understand upfront the whole impact on tools. For example, we didn’t imagine in those early days that we’d end up supporting high-level languages like OpenCL. We probably knew that high-level languages would come to market, but we couldn’t predict quite what they would be for. So while the general direction didn’t surprise us, some specific solutions were slightly surprising.

Were you able to anticipate any of the changes to development tools?

I think as an industry we were able to – as a broader ecosystem we are strong enough to find solutions and deal with the complexity. We are close enough with our partners to understand the challenges they face and provide ways of helping.

We have a great ecosystem where some of the tools come from ARM, some tools come from the open source community, and many others come from the large and strong ecosystem of third-party tools vendors.

How much more complicated is an SoC nowadays?

Heterogeneity and multiprocessing is a huge change. Cache coherency between the different processors, for instance, adds a lot of complexity to hardware and software.

Consider server chips. ARM is now coming into servers and there is a chip today that has up to 48 cores, with many more cores coming in the future. These chips have carrier-grade networking fabric with PCI Express and 100 Gigabit Ethernet. Add that to the cores, the cache coherent interconnect, and a whole bunch of domain-specific workload accelerators, and you end up with an extremely sophisticated and capable system.

From a tooling point of view, part of the challenge is to abstract the complexity and provide higher level information taken from all of the details of the system. This lets the user get some insight into how the system is performing and how the system is behaving. You can determine whether it is behaving correctly or incorrectly, and whether it’s performing fast or slow. Even with just an evaluation of our tool suite it is possible to test-drive all existing features so that users get an understanding of our tools.

How do you actually debug a system with 48 cores?

Well, the good news is that we have modern operating systems to help us, and workloads with many parallel tasks scale well on multicore systems running such an OS. However, debugging can be tough in the early days when the software isn’t mature yet, and this early-days support is critical given the pace of innovation. But once the software is mature and these things are up and running, you can debug at a software level and not at a system level.

We pay equal regard to both correctness debugging and performance debugging. Correctness debugging is all about looking at the internals of a system and checking that the program is behaving as it is designed to. Performance debugging is more about finding performance bottlenecks and resolving them.

We use all of the capabilities that ARM provides on a chip to help users debug. There is a lot of infrastructure inside ARM called CoreSight, which has been adding more capability as processor and system technology have evolved. So effectively, the more processors, the more fabric, and the more capabilities we put onto a system, the more debug logic ARM puts into the system. And our job in the tools team is to gather information in real-time to present it to the user. With 48 cores running at 2 GHz this can be quite challenging, so in order to gather the right information for the user, the agent running on the device has to be designed intelligently.

Given your explanation of how complex systems have become and how developers can analyze their software, where do people actually start building software nowadays?

There is a big change in the way software is developed today as compared to 15-20 years ago.


In the past, software was developed on either real hardware or an FPGA. Developing on real hardware has many advantages, but also has its limitations.

Today, many people have started to develop their software using software models. We’ve seen model adoption grow exponentially over the last five years. Modeling technology hit a sweet spot in the sense that as systems got more complex, models got more capable, making them more attractive to use.

At ARM we have responded by providing “gold standard” models. We run the same validation suites on these models as we do on real hardware. In most cases we develop the validation suite against these models and then build the actual hardware. Through this we’re confident that we have high-fidelity models people can build software against. As a software developer, this means you can start your software development much earlier.

A great example of this from inside ARM itself was that when we moved to a new 64-bit architecture, we spent two years building software on the model. When the first silicon came back we had Linux running on it within two weeks. If we hadn’t started two years before, that would have taken a lot longer.

We recently announced that the team from Carbon Design Systems will be joining ARM, which extends our commitment to providing early and accurate models.

In addition to time to market, what are the other benefits of using software models?

Visibility is another benefit. For example, you can get only very limited information out of the cores in a multicore device with each core running at 2 GHz. Model visibility is inherently higher than hardware visibility because you can trace whatever you like. We know of developers debugging complex problems on models because of the improved visibility.

Going back to the complexity within mobile devices, what have you learned from the complexity of PCs?

Over time, mobile devices have become as capable as PCs, and while we started out with ARM producing tools for embedded systems, we now produce tools for computers.

Sometimes we’ve adopted some of the desktop tools and rolled them in. We have also increased our investments in important open source software, like the GNU compiler, to make ARM the best possible compiler target that we could. Actually, we’ve spent a lot of time supporting new architectures and new features of the various compilers, and invested in the other components of our developer tool suites as well. Now we have a sophisticated debugger, a high-quality whole system performance analysis tool, and we ensure that a wide range of operating systems is well supported. We also work with the ecosystem of desktop and server tools vendors to make sure they can port their tools to ARM.

As ARM began developing tools for the embedded market, how have you seen it change over the years?

The embedded software market is very interesting. There is a lot more diversity in the embedded market today, both in devices and in the software running on them. We see embedded devices with Cortex-A-class processors and Cortex-M-class microcontrollers (MCUs); these devices run a huge variety of operating systems. The Cortex-M MCUs run RTOSs while the Cortex-A processors run embedded Linux or proprietary operating systems. The variety is breathtaking compared to the OS landscape for mobile devices and enterprise systems, which have standardized around just a few platforms.

In embedded, we had to change to address this problem of diversity because systems are quite complex these days but the software is much less standardized. C and C++ are still dominant in embedded rather than higher level languages. Debugging is still very much a number-one concern because a lot of problems exist at the bare metal level. People are trying to reuse code more and more; however, this is difficult due to the lack of software standards, and this means that the value in tools shifts. When we started this game it was all about compilers, and then debuggers and integrated development environments (IDEs) became important because of productivity. Now we see a lot more value in middleware and in integration of different software stacks.

At ARM we have a standardization exercise in embedded called CMSIS, the Cortex MCU Software Interface Standard, and this is our response to help control the diversity and complexity of software in the Cortex-M world (Figure 1).

How does CMSIS help to manage the diversity in embedded systems?

The approach is to embrace the diversity. We provide a standard for operating systems to interwork; standards for the embedded tools to interwork; peripheral description standards in embedded tools; driver standards for using different drivers with different operating systems; software packaging standards; and hardware abstraction layer standards. Over time, it becomes easier for software engineers to learn and use these standards, and then move from device to device within a particular MCU family or across MCU families.

At ARM, it is very important to find ways in which our silicon partners can produce diverse, specialized, and differentiated devices, but not in a way that makes it very hard for the software engineers. We need to find ways that promote software compatibility and portability across diverse and differentiated MCUs.
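To make that portability point concrete, the following is a minimal sketch, assuming a Cortex-M project whose toolchain and board support package supply the CMSIS-RTOS2 header (cmsis_os2.h) and startup code; read_temperature_sensor() is a hypothetical board driver, not part of CMSIS. Because the thread creation and delay go through the standard CMSIS-RTOS2 calls rather than a vendor-specific API, the same application code can move between compliant RTOS kernels and Cortex-M devices largely unchanged.

/* Minimal CMSIS-RTOS2 sketch; assumes a Cortex-M toolchain and BSP
 * that provide cmsis_os2.h, startup code, and a compliant kernel. */
#include "cmsis_os2.h"

extern int read_temperature_sensor(void);   /* hypothetical board driver */

static void sensor_task(void *argument)
{
    (void)argument;
    for (;;) {
        int t = read_temperature_sensor();
        (void)t;                 /* e.g. hand off to a communications task */
        osDelay(1000u);          /* standard CMSIS-RTOS2 delay, in kernel ticks */
    }
}

int main(void)
{
    osKernelInitialize();                    /* initialize the RTOS kernel */
    osThreadNew(sensor_task, NULL, NULL);    /* create thread with default attributes */
    osKernelStart();                         /* start the scheduler; does not return */
    for (;;) { }
}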

So with these standards it is possible to customize tools for certain requirements?

Yes, absolutely. We provide technology that helps people reuse their software assets without prescribing what these assets are.

Microprocessors are increasingly used in applications that demand a high degree of reliability and safety. How have safety standards influenced software toolchains?

This is a different sort of complexity. Safety complexity is about process as much as about technology, and whether it’s in industrial, aerospace, automotive, or medical, the state of the art for safety continues to get higher. Systems that didn’t contain technology in the past now have lots of technology inside, and people need to ensure that their systems are compliant with standards to prove that they are safe.

Figure 1 | The CMSIS standard family helps minimize some of the complexity of myriad operating systems, drivers, tools, and more to ease software development on ARM Cortex-M devices.


Creating a product that needs to be qualified to safety standards raises a diverse set of challenges for a project team. A lot of work is required to ensure the product behaves as specified, with well-understood and safe failure modes. Quite a few of these functions are done in software these days, and so there is an implicit strong dependency on the tools used to create such software, such as compilers and other code generators. Some other classes of tools do not generate software, but instead provide information or guidelines to improve software quality. ARM believes that toolchain suppliers need to help project teams understand how the tools operate so that project teams can ensure the tools are being used appropriately and with adequate safeguards. Therefore, ARM and its ecosystem provide assets around tools to help product vendors with safety certification. The compiler collateral, for instance, can be shared with certification bodies to show that the compiler is generating good code. In addition, code coverage can be used to prove that software is well tested. It is no longer acceptable to treat software tools as a “black box.”

Finally, what’s next for ARM and the wider ecosystem?

I thought you would ask that question, so fortunately I brought along my crystal ball.

The capability of mobile devices looks set to continue to increase, and so we can expect the industry to invest in more ways to make it easier for developers to harness that power, which probably includes further deployment of higher level languages. In enterprise systems, we talk about the “Intelligent Flexible Cloud” bringing more workloads closer to the end device, which again has implications for how software is written and deployed. And in embedded, it’s quite clear that the Internet of Things (IoT) has the potential to roll in and add connectivity to a wide range of constrained devices, which will require new approaches compared with connecting computers to the Internet.

Note: Interested developers can evaluate all the tools and capabilities mentioned in this interview through a trial version of DS-5 Ultimate Edition or MDK-ARM. Those who’ve evaluated DS-5 in the past can claim another free trial until November 30, 2015.

Hobson Bullman is General Manager of Development Solutions at ARM.

ARM

www.arm.com

@ARMCommunity

www.linkedin.com/company/arm

www.facebook.com/ARMfans


IoT and efficient data transport: How CoAP can save us from protocol obesity

By Zebra Technologies

Watch Video


As the Internet of Things (IoT) continues to grow, so does the number of connected devices and the different paradigms around how to connect them. There are many different use cases in the IoT, but data consumption is changing in general. The way data is consumed on an Internet dominated by objects is quite different from the one dominated by humans and a few powerful computers, and where we have historically seen moderate traffic with heavy payloads, we now see higher traffic with smaller payloads. The reason for this is quite simple: when a machine talks to another machine, they communicate more efficiently, striving to get only the information they want at the bare minimum required to get it.

There are many factors that influence why a device, sensor, or machine would want to communicate with lighter payloads. A few separate but related reasons are:

• Human language and the associated metadata we are accustomed to are unnecessary for machines that operate on far-more-efficient binary code. The vast majority of machine-to-machine (M2M) communications won’t be directly interpreted by people, so utilizing machine languages allows devices to save not only on data transmission overhead, but also on the resources required to convert messages back into human-readable formats.

• Optimized resource utilization is important because all machine interactions carry a cost with them, and therefore must be justified. Every signal sent or received costs not only memory but also power, and in order to remain autonomous within a large mesh, energy efficiency has become a critical aspect of device architectures and lifecycles.

• With most devices on the IoT acting autonomously as part of a larger mesh network, machine entities must be able to maintain full control of their state so they can retain information about transactions and make independent decisions when alternative communications paths are needed to get messages across a network.

All of this leads to one very important point: the mechanics through which autonomous entities communicate with the outside world have evolved. Information conveyed today must carry just what is needed and be light and friendly to resource-constrained devices. And while the neo-ecosystem of devices we are watching unfold consists of both small and large machines, everything must interoperate.

The legacy of success

When we look at protocols that interoperate well in our current ecosystem, we immediately arrive at the Hypertext Transfer Protocol (HTTP), which has served as a pillar of modern web communication and is at the core of almost all the content we consume and publish. The problem is that HTTP comes with its own requirements, and while justified for what the protocol was designed for, those requirements may not play well in future use cases.

Let’s first take one of the more basic mechanisms of the protocol, the client-server model. One of the main problems with this architecture in the IoT is that the client must poll the server for information, so in most cases the “thing” in an Internet of Things sense is the server. This creates uncertainty when striving to get data as close to real time as possible because the server producing the content needs to wait for a request from the client before sharing an update. The repercussions of this are twofold: the need for a client to request an update inhibits real-time round-trip communications, and the high availability required on the server side demands a lot of resources in order to respond to requests, even if they are futile. Smaller devices in the IoT would expend large amounts of power and other resources quickly with this model.

One could argue that push notifications could fix the communications bottlenecks created by the client-server model, and while this is true, HTTP presents another fundamental challenge in IoT use cases in that it is not a lightweight protocol. Headers alone in HTTP can become quite large, amounting in some cases to several hundred bytes per request. This is partially fallout from the use of the aforementioned textual (human-readable) language for marking options and metadata about the transaction. When downloading a few gigabytes of information, header overhead can seem almost non-existent in the context of the entire payload. However, when attempting to retrieve basic data such as temperature, geo-coordinates, or battery level, the headers alone can be larger than the payload itself. While headers can be compressed and sent in binary, this action requires resources that, as addressed previously, are limited for many IoT devices.

Figure 1 shows a very basic HTTP exchange for temperature.

If there are too many options, or even missing options, it’s debatable whether return characters are necessary to complete a meaningful transaction. While much more goes into a complete exchange, for the sake of experiment Figure 1 shows that we have a request on the order of 127 bytes and a 92-byte response. Adding more options or information about the transaction to the header increases these values quite rapidly, and they can quickly climb into the kilobyte range.

Figure 1 | Depicted here is the size (in bytes) of a basic HTTP transaction for a temperature reading.
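To give a rough sense of where those bytes come from, here is a small, host-compilable C sketch that builds a plausible GET request for a temperature resource and prints its size. The path, host name, and header set are illustrative assumptions rather than the literal contents of Figure 1, but the total lands in the same ballpark, and all of it is transmitted before a single byte of the temperature value itself crosses the wire.

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* An assumed, fairly spartan HTTP/1.1 request for a temperature reading */
    const char *request =
        "GET /sensors/temperature HTTP/1.1\r\n"
        "Host: gateway.example.com\r\n"
        "Accept: text/plain\r\n"
        "Connection: keep-alive\r\n"
        "\r\n";

    /* Header overhead alone, before any response payload comes back */
    printf("HTTP request size: %zu bytes\n", strlen(request));
    return 0;
}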

So, why would we default to using HTTP? Well, there are a couple of reasons, and at the top of the list is that we are accustomed to it and it is widely used. But looking at the main principles of HTTP, another reason is the representational state transfer architectural style, commonly referred to as RESTful. The RESTful approach offers a fully stateless, cacheable client-server relationship with a common interface that enables one to manipulate resources with ease and uniformity. Manipulating resources with ease and uniformity is also how we think about building the Internet of Things, so one of the fundamental tenets of HTTP suits the IoT, just not the package as a whole.

The core

The advent of the IoT has made it more than clear that current models cannot sustain the predicted technological evolution, one that permits resource-constrained devices to optimize communications based on their use case, environment, and hardware.

Enter the Constrained Application Protocol, or CoAP, which was designed with finite resources and the RESTful paradigm in mind to emulate some of the better features of HTTP without the overhead. CoAP is an efficient, bidirectional, reliable protocol that utilizes a RESTful architecture style to provide features such as caching, URI-query, content format identification, and conditional requests. The protocol also adds new capabilities such as observation and the ability to fragment requests and responses as needed, so that servers can be set up to send clients periodic updates when new data is available. The baseline CoAP header is 4 bytes, and can be extended based on the paths and options used (Figure 2).

Figure 2 | The CoAP header is 4 bytes, and can be extended based on the paths and options selected.
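To see how little that is in practice, here is a minimal, host-compilable C sketch of a CoAP GET request built by hand, following the RFC 7252 encoding: the fixed 4-byte header plus a single Uri-Path option. The message ID and resource name are illustrative assumptions.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    uint8_t msg[32];
    size_t len = 0;

    /* Fixed 4-byte header: Version = 1, Type = 0 (Confirmable), Token Length = 0 */
    msg[len++] = 0x40;            /* 01 00 0000 */
    msg[len++] = 0x01;            /* Code 0.01 = GET */
    msg[len++] = 0x12;            /* Message ID, high byte (arbitrary) */
    msg[len++] = 0x34;            /* Message ID, low byte */

    /* One Uri-Path option (option number 11) carrying "temperature".
     * Option delta = 11 and option length = 11 both fit in one byte: 0xBB. */
    const char *path = "temperature";
    msg[len++] = 0xBB;
    memcpy(&msg[len], path, strlen(path));
    len += strlen(path);

    /* 16 bytes for the entire request, versus well over 100 bytes of HTTP headers */
    printf("CoAP GET /%s request: %zu bytes\n", path, len);
    return 0;
}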

Using the same exchange scenario as earlier but replacing HTTP with CoAP, Figure 3 shows how the same temperature information acquired using CoAP’s RESTful architecture results in a much more manageable transaction size that is suitable for resource-constrained devices. Furthermore, because there is less data to transmit, devices that rely on portable or disposable power sources are able to conserve energy to extend the time period between recharging or replacement.

Figure 3 | Using the same temperature data exchange scenario as earlier, the CoAP protocol relies on a RESTful architecture in a client-server model (3a) to deliver information with a fraction of the communications overhead of HTTP (3b).

Another important consideration to note is that CoAP was designed on the User Datagram Protocol (UDP), allowing packets to be even smaller than those reported in Figure 3a.

The Internet Engineering Task Force (IETF) is also currently developing CoAP for TCP and other transports.

Key takeaway

CoAP is by no means a replacement for HTTP, as the latter is packed with features that are critical for the web. However, for an Internet dominated by “things” that are meant to aggregate and send data in an agnostic fashion, payload size has become increasingly relevant. CoAP delivers the functionality required by these devices with a balance of features and efficiency.

Zatar is a product of Zebra Technologies, a global leader respected for innovation and reliability. Zebra provides products and services that enable real-time visibility into an organization’s assets. Zebra solutions support Enterprise Asset Intelligence and the Zatar enterprise IoT service is a perfect example! Zatar provides a standards-based approach to connectivity and control of devices, along with open APIs to create apps, onboard devices, and enable collaboration. Zatar is an ARM mbed Cloud Partner.

Zebra Technologies Corporation

www.zebra.com

www.zatar.com

@ZebraTechnology

www.linkedin.com/company/167024

www.facebook.com/ZebraTechnologiesGlobal/timeline

www.youtube.com/user/ZebraTechnologies#p/u


COMPUTING SOLUTIONS
NO COMPROMISE

Give your application the solution it deserves. Sealevel’s Computer on Module system designs combine the advantages of custom design with the convenience of COTS.

COM Express modules offer a selection of processors ranging from powerful multi-core Intel i7 and i3 to the popular Atom. Low power designs eliminate the need for cooling fans, greatly enhancing system reliability. Extended temperature models are available offering a -40°C to +85°C operating temperature range. COM Express modules contain the core computer functionality most affected by changing technology. Since the modules are based on an industry standard specification, COM Express systems are easily updated to stay current with the latest technology.

• Variety of Processors and Form Factors

• Application Specific I/O

• Rugged, Solid State Operation

• Vibration Resistance

• Extended Operating Temperature

• Long-term Availability

• Superior Life Cycle Management

sealevel.com • 864.843.4343 • [email protected]

COM Express Modules

COM EXPRESS QUICKSTART KIT

The 121004-KT provides everything you need to get your COM Express project off to a fast start. Powered by a 1.8GHz Intel Atom N2800 CPU with 4GB RAM and an integrated heatsink, the QuickStart kit includes an installed 2.5" 32GB SATA solid-state disk. Standard features include five USB 2.0, two RS-232, one RS-485, dual Gigabit Ethernet, SATA, DisplayPort and audio interfaces. To interface the RS-485 port, a 10-pin IDC to DB9M serial cable is included. The carrier board and module are powered by the included 100-240VAC to 24VDC external power supply with US power cord.

The QuickStart kit simplifies software development and prototyping while the target application carrier board is designed. Take advantage of Sealevel’s carrier board development services for the fastest time to market.



SPONSORS

EMAG

© 2015 OpenSystems Media, © Embedded Computing Design. All registered brands and trademarks within the ARM E-mag are the property of their respective owners.