
P.I.N.G. 12.0



P.I.N.G. 12.0 features an article on the incredible James Webb Space Telescope, an improvement on the Hubble with the ability to see around 13.4 billion years into the past. Other brilliant articles include Swarm Intelligence, Digital Neuron and Eyeriss, among many others. The editorial article discusses DARPA's revolutionary new chip that can self-destruct on command. The new feature, called Past Ruminated, takes an analytical look at Cinerama, a past technology that met with a fantastic response upon its release but failed to survive the market. The magazine also features inspiring interviews with two luminaries: Mr. Arun Nathani, CEO, Cybage Software Pvt. Ltd., and Dr. Rajat Moona, Director General, C-DAC.


Dr. Rajesh B. Ingle, Branch Counsellor

Dear Readers,

It gives me immense pleasure to write this message for the new edition of PICT IEEE Student Branch's (PISB) P.I.N.G. Credenz Tech Dayz and the membership drive are in full swing. P.I.N.G. is a unique activity of the PICT IEEE Student Branch which has established itself and made an impact all over. This newsletter provides a platform for all, including student members, to showcase their talents and views and to further strengthen IEEE activities.

It is a great pleasure to serve PISB as a Counsellor. The year 2015 was a very important one for me, as the 2015 IEEE India Council Outstanding Student Branch Award was conferred on PICT IEEE Student Branch, and I received the award during a meeting of the India Council Execom held on the sidelines of INDICON 2015 at Jamia Millia Islamia, Nehru House, New Delhi on 20th December 2015. I express my gratitude to all the members of PISB for their activities and volunteering efforts. I had the opportunity to attend the IEEE R10 2016 Execom meeting at Kuala Lumpur, Malaysia, from 22nd to 24th January 2016. I also got an opportunity to address the Region 10 technical seminar participants on “Towards Internet of Everything”, held at UTM (Universiti Teknologi Malaysia), a public research university in Malaysia, on 22nd January 2016. To mark the 50th Anniversary of IEEE Region 10, we will be observing the 50th Anniversary Celebrations across the whole region for one year, starting 24th August 2016. We at PICT will be organising many events, including an international conference on “Emerging Trends and Innovation in ICT” in February 2017.

At PISB, we try our level best to create an environment where students keep themselves updated with emerging trends, technology and innovations. Many events are conducted throughout the year and are widely appreciated by students, acclaimed academicians and industry professionals alike. The events include IEEE Day, workshops, Special Interest Group (SIG) activities, Credenz and Credenz Tech Dayz. Credenz is the annual technical event held in August-September each year.

I thank all the authors for their contributions and interest. On behalf of the IEEE Region 10 SAC Team and IEEE Pune Section, I wish PISB as well as this newsletter huge success. I congratulate PISB's P.I.N.G. team for their commendable efforts.

Prof. Dr. Rajesh Ingle, IEEE R10 (Asia-Pacific) SAC Chair 2016, Vice-Chair, IEEE India Council


PICT IEEE Student Branch (PISB), one of the largest student branches in IEEE Region 10, was formed in 1988 with the aim of establishing technical excellence among its student members.

PICT IEEE Newsletter Group (P.I.N.G.), the official magazine of PISB, is released bi-annually to commemorate two prestigious events: Credenz and Credenz Tech Dayz (CTD). With the intent of connecting the technocrats and the cognoscenti, CTD ‘16 offers a series of seminars ranging from Big Data to entrepreneurship. CTD ‘16 introduces ‘Enigma’, a logical reasoning quiz, providing students yet another platform to showcase their skills.

P.I.N.G. 12.0 presents a plethora of technical articles with ‘James Webb Space Telescope’ as the featured article. Keeping up with the tradition of bringing something new for our young readers, P.I.N.G. 12.0 comes up with a new feature ‘Technology’s Honourable Mentions’ to acknowledge the innovations of the past which faded with time.

We thank our contributing authors and professionals for sharing their knowledge and expertise through this Issue. A special mention to our junior editors for their perseverance and diligent efforts to make P.I.N.G. 12.0 a great success.


Pulverising Chip: You cannot steal what does not exist

WikiLeaks, with its slogan ‘We open governments’, has unveiled obscured information incessantly. With every leak, it has rattled governments and the various organisations embroiled in the act. The world was taken by storm when Edward Snowden, without prior authorisation, disclosed classified information from the United States National Security Agency (NSA) and the United Kingdom Government Communications Headquarters (GCHQ) to the public. But looking through the eyes of the government, all these acts were carried out in the interest of national security. Could they have averted this divulgence in some way?

Doesn’t devising a technology to prevent the secret information from getting into the wrong hands sound like a promising solution?

Even for the most secret espionage agents across the world, receiving mission orders from a self-destructing device has been a dream. In their quest to turn this dream into reality, Defence Advanced Research Projects Agency (DARPA) and PARC, a Xerox company, have been triumphant in implementing this concept. It turns out, for them it was just a walk in the PARC!

Encephalon behind the notion: DARPA & PARC
When Sputnik took a big leap of faith for mankind in 1957, the world exulted in its success, but the scientists of the United States were already at the drawing board finding reasons for their failure. It was then that DARPA was formed, with the mission that the States would, from that time forward, be the initiator and not the victim of strategic technological surprises. DARPA has since delivered on its mission by being involved in many path-breaking inventions. Contrary to the popular belief that its domain of innovation is restricted to military facilities, it has contributed to modern civilian society too. The Internet, Automated Voice Recognition (Siri) and the Global Positioning System (GPS) have been highlights of its contribution. From adding intelligence and automation to breaking new frontiers in clean technology and the biomedical domain, DARPA and its innovation partners have lifted the curtains on a diverse set of innovative projects. In a similar quest for innovation, it decided to collaborate with PARC.

Palo Alto Research Centre (PARC) has given the world IPv6, the most recent version of the Internet Protocol, whose 128-bit address space provides roughly 3.4 × 10^38 addresses! The Graphical User Interface (GUI), Ethernet and laser printers are some of the most successful projects PARC has been associated with. It is a wholly owned research and development subsidiary of Xerox, which aims at reaching the infinite limits of evolution. It has also been involved in the development of several projects funded by DARPA.

The Vanishing Programmable Resources (VAPR) programme is one such project, aimed at enabling transient electronics as a deployable technology. Transient electronics are electronics with a pre-engineered service life; in other words, electronics that disappear after achieving their objective. The challenge in developing transient electronics lies in the fact that they not only need to be comparable in performance to their commercial off-the-shelf counterparts, but must also be able to atomise beyond use and recognition on command. To achieve this objective, DARPA has contracted BAE Systems, a British multinational defence, security and aerospace company. With this collaboration, DARPA aims to build electronics with a system-on-a-chip design, useful for radios, remote sensors and phones.

Inception of an aeon: DARPA and PARC recently introduced the ‘Vanishing’ chip under the VAPR programme, and it may revolutionise data security and privacy. At its unveiling at DARPA’s “Wait, What?” forum on future technologies at St. Louis, Missouri, U.S.A., the chip was described as a new type of processor capable of self-destructing in a controlled, triggerable manner. It was designed as part of DARPA’s initiative to help safeguard top-secret data. The chip could be used to store data such as encryption keys and, on command, shatter into thousands of pieces so small that reconstruction is impossible. With every bit of information about our personal and professional lives being digitalised, there is an increasing need for data security that denies hackers and unauthorised users access to computers, databases and websites. Technologies used to secure data include software or hardware disk encryption, data masking and also quantum computing. Quantum computing could offer a potential solution in the long run, since attempting to read the data being transferred between two quantum computers will inevitably change the data's state and alert the users that they are being spied on. However, as quantum computing is a long way off, other solutions for data security are needed. This has resulted in growing interest in transient technology, which some experts expect to play a key role in securing sensitive data.

Carving the chip: PARC introduced DUST (Disintegration Upon Stress Release Trigger), the technology used to fabricate the chip. It is based on stress-engineered glass that allows electronic devices to rapidly and remotely disintegrate on command, leaving behind extremely small particles that are hardly visible to the human eye.

PARC’s new computer chip uses tempered Gorilla Glass, the Corning-produced tough glass used in the displays of numerous smartphones. In order to temper the glass, the ion exchange method is employed. Normally, glass is tempered by cooling the edges: the glass exterior shrinks, putting the exterior into compression while the warmer interior maintains incredible tensile stress. Because glass is a poor conductor of heat, the heat-tempering process only works with pieces of glass that are at least 0.03 inches (about 1 millimetre) thick.

For this very reason, the ion exchange method is used, which involves putting a thin piece of glass, rich in sodium ions (atoms of sodium with one electron stripped off), into a hot bath of potassium nitrate. Potassium ions then swap places with the sodium ions, but because the heftier potassium ions must squeeze into place within the silicate matrix, this creates enormous tension in the glass. Though this glass is stronger than normal, if a piece of it is broken, the glass shatters explosively into little particles. The silicon wafer is directly attached to the glass, or the two are fabricated together. The final product looks like a piece of glass with some metal lines drawn all over it.


Eliciting the self-destruction: The chip's self-destruction is induced by triggering a tiny heating element, causing a thermal shock that creates a fracture which spreads throughout the glass. Anything from mechanical switches, Wi-Fi, laser light and radio waves to voice phrases and photodiodes could be used to send the kill signal to the chip. Once a crack is initiated in this glass, it propagates at about 1,500 metres per second.
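For a rough sense of scale (our own back-of-the-envelope estimate, assuming a die roughly 10 mm across and the 1,500 m/s figure quoted above):

t = d / v ≈ 0.010 m / 1,500 m/s ≈ 7 microseconds

In other words, the whole chip would be reduced to fragments in a few millionths of a second.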

Possible applications: The technology has reached the prototype stage, but no commercial or government uses for the self-destructing electronics are in the works yet. Electronics play a wide role in modern warfare. Equipping remote sensors, cell phones and other gadgets, they feature in virtually all battlefield operations but can leave behind a potential national security risk. The ‘Vanishing’ chip could be used to fabricate these modern accessories used on the battlefield; they could then be destroyed on a trigger once the war is over, and the debris left behind could be disintegrated using this transient chip.

Securing one’s personal data is as crucial as securing any secret government data, and this chip may have applications in a civilian context, especially for mobile devices. Encryption keys on cell phones are usually very long random numbers that are nearly impossible to crack, but they are often protected by relatively weak passwords chosen by their owners. Various forensics tools rely on this principle to crack the encryption technology included on most smartphones. Placing encryption keys on a self-destructing chip that could be activated if a device were lost or stolen would make it much more difficult for a thief to extract information.

The Pulverising chip could also be useful for Digital Rights Management (DRM) technologies. All it takes for a DRM scheme to fail is one hacker extracting keys and sharing them. In case of any manipulation, the self-destructing chip would make extraction of data more challenging by eliminating the stored encryption keys. Health records and banking records could be secured using a self-destructing chip. Another application of this radical creation could be in diminishing e-waste. 20-50 million tons of e-waste is generated worldwide every year, and this global mountain of waste is expected to keep growing by 8% per year. It is estimated that only 15-20% of e-waste is recycled; the rest of these electronics go directly into landfills and incinerators. This technology could make a giant difference by making electronics easier to recycle and decompose.
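Taking the article's own 8% annual growth figure at face value, a quick calculation shows how fast that mountain grows:

doubling time ≈ ln 2 / ln 1.08 ≈ 0.693 / 0.077 ≈ 9 years

so, left unchecked, the yearly e-waste volume would roughly double every nine years.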

Being a state-of-the-art creation in the bailiwick of data security and transient electronics, it raises some critical questions. While this chip provides enough protection to safeguard information in electronic devices, people are still sceptical about its reliability. What if hackers found a way of breaking into this chip and managed to trigger its self-destruction before it could attain its objective? On the contrary, if it were immune to such threats, there is always the possibility of it being in use already. What if the secret services already had millions of such sensors that they have put to use in covert operations without the knowledge of the general public? If allegations like these were to be proved, it would be a major violation of privacy. But can this be proved at all? Isn't wiping off the evidence exactly what this chip does best? What if it already has?


-The Editorial Team


Bridging the Rift: With Arun Nathani, CEO, Cybage

Mr. Arun Nathani, CEO, Cybage

“Success is a combination of your dream and perseverance.”

A: Every person has to figure out their own equation in the whole corporate game, and everyone comes to a different realisation. I have come to believe that “leadership” is a very overrated word. It has a feel of singularity: that the leader is smarter and more brilliant than everyone else. But there's a fallacy in that. I believe that it's not human beings or heroism that has the power to make a great leader; it's the systems, data and process-driven approaches. So, by underplaying leadership and encouraging the power of data, you actually become a leader. The power of data is far greater than any leader. So a true leader is one who believes that, in today's world, the power of data and information is a lot more than that of an individual, or even a team. A data-driven approach can help you make decisions in both your personal and professional life. In personal life, we make data-based decisions in a very intuitive way. In the business world, you can do a lot more with it.

Cybage Software Pvt. Ltd. has, from its humble beginnings as a startup in the '90s, grown to be a global leader in the outsourced product engineering space. The man behind this phenomenal growth is CEO and founder Mr. Arun Nathani. His ideology of system- and data-driven approaches has led the company to be recognised across the world as a force to be reckoned with.

Q: After being educated in the USA and working there, why did you choose to come back to India?

A: I did my undergraduate degree in mechanical engineering from G.B. Pant University. Then I did my masters in the USA at Virginia Tech, after which I worked for five years in Chicago as a design engineer for a manufacturing company. Then I came back to India, simply because India was home. I didn't much enjoy living abroad. I was also getting married at the time, and it is easier for one person to relocate rather than both. So I got married and stayed in Ahmedabad for a year.

Q: What motivated you to start Cybage and what were the initial years like?

A: In the entire picture that I have described so far, there was no big plan. I was going with the flow. I wanted a good job, to settle down and live happily. I found that professional opportunities in Ahmedabad were limited. I came to Pune because I got a job with an Internet startup company, which I worked with for a year. I started Cybage because the timing was opportune. The mid-90s was when the whole internet boom had really taken off, and I saw it as an opportunity to start my own organisation. In every industry, there are short bursts of immense excitement, followed by excruciatingly long periods of boredom. The trick is to identify the opportunity before the idea slips away.

Q: What is the one quality that you believe a leader must have to succeed?


don't even make it to that level. So you have to go to the fundamental level of kids at a very young age, and we have also adopted a school for that. India has a massive challenge: we can do all this in cities, but how do we reach rural India without the infrastructure?

The minimum target I have is that we should have at least as many sponsored students as there are employees, so that every employee feels that there is someone whose education they sponsor.

Q: In your blog “The Science of Success” you always find a way to integrate ‘work’ and ‘play’. How do you manage to maintain this equilibrium?

A: Balance is most important. There are many theories about this. There was a time when it was popular to say that a clear demarcation had to be made between work and personal life. But now this theory is evolving, and the new one works for me. It asks: what is the difference between the two? Why draw the line? You don't need to build a Chinese wall between the two; let them intermingle. I don't see too many differences between what happens in my personal life and what holds business lessons. I try to connect the personal experiences that I encounter to my corporate life. Also, I am able to connect better with my audiences, as they can understand me through storytelling and it is more stimulating for them. In the process of writing, I too get to learn from it.

Q: In this age, when everyone has a startup and is an entrepreneur, how does one find the motivation to believe that one has a chance?

A: There's so much competition. When you're a kid, you want to touch the sky; when you're a teenager, you compare yourself to others; and by the time you reach adulthood, it's all about surviving. There are millions of people who aspire to achieve a lot more than you in life. Success is a combination of your dreams and perseverance. The first thing is realism in life: understanding that it's possible that even after playing all my cards right, I may not succeed. That acceptance gives you a lot of peace. You should find contentment in your aspirations. It doesn't mean that you are laid back. Balance of the mind is important. Keep your eyes open for the right opportunities and keep your slate clean. Be open to new ideas, because your ideas will change.

Q: For a business to be successful, what do you feel is more important: the idea or the person behind it?

A: Ideas are a dime a dozen. I think the person behind the idea is more important. Google was not the first search engine; Facebook was not the first social network. One of the pitfalls of idea innovation is that every time you think of a brilliant idea, there is a high possibility that someone else has already thought of it. Don't pick an idea that changes everything; pick an idea that is evolving. It's not only the idea that works out, but also its packaging.

Q: What is your future plan for Cybage? What type of products are currently under research and development at Cybage?

A: We're a service-based and not a product-based company. Today everybody is talking about digital transformation: using the power of technology for business and other transactions. In today's world, every organisation is racing in this direction. The future lies in emerging environments. Cybage definitely wants to be known as a cutting-edge technology player in the digital space, and we aim to achieve this through system-driven approaches.

We thank Mr. Arun Nathani for his time and contribution to P.I.N.G.


Q: You set up offices in the US and the UK pretty early on in your career. Was that difficult?

A: It depends on what office you're talking about. We set up sales offices, and sales offices are pretty simple: you tell one of your employees to go and market your company in that place, and that is it. Where it is difficult is in setting up development centres. We have 200 people working out of New York. You have to figure out how the hierarchy, the reporting environment and HR will work there. You have to build the team there, and the overheads are very heavy. And this is difficult, because you are so far away. It gets you defocused to some extent. It's a question of expense equations. We have offices in Gandhinagar and Hyderabad; logically, it's the same thing. It's not a big piece in the puzzle.

Q: What was the biggest challenge in your professional life and how did you combat it?

A: In the first year, it was accepting that your idea is not going to go anywhere and changing the whole strategic direction of the organisation. The change from a product company to a service company was one big challenge. We saw that we could not sell a product, so we moved to the service market. Another challenge was figuring out how to differentiate ourselves. There are so many organisations in the Indian IT service industry that work in different business segments, but they all look the same. We had to accept that and start differentiating ourselves based on how we do things, rather than what we do. We decided to build our organisation on the principle of not doing different things, but doing the same things differently. Perfecting that system and getting Cybagians and our customers to accept this direction required constant championing, which continues to this day.

Q: What is the concept behind Cybage ExcelShore?

A: We are designing an environment to run our organisation on scientific principles and systems. ExcelShore is that system-driven approach. Building efficiency in an organisation is how you build a sustainable business model. Ultimately, the person who can give the most value for money is the one who wins in the long run. Idea innovation is a massive thing, no doubt, but only 0.1% of businesses are based on that. ExcelShore is basically running an IT company through system- and data-driven approaches so that customers and employees get the best value.

Q: You have spoken about efficiency in the company. However, people are the same everywhere. How do you motivate them to work at a better pace to achieve that efficiency?

A: For a given financial package you offer, you will get a random mix of people. It is an illusion that your company has a better crop of people than others; they are the same. So how do you change that? At the end of the day, nothing beats creating a fair environment. Everything else has no recall value. Treat people with as much fairness as you can and kill disparities; then everybody wins. It doesn't mean treating people equally. It means building an even playing field and kicking out the luck factor. You build a fair environment where you can grow people based on their approach rather than their results. Cybage strongly believes in that. This is why, over time, people say that Cybage has better quality people than others.

Q: How important are the causes associated with Cybage Asha and Cybage Khushboo to you?

A: Massively. Take Khushboo, for instance. The playing field in our country is so bad that a brilliant child born into a rural or underprivileged background doesn't stand a chance, because their fundamental education itself is not good. Even if we give scholarships for medicine or engineering, many of them


SeaAerial is a low-frequency antenna made of sea water: a conductive plume of water capable of transmitting and receiving radio-frequency waves. Made of just a pump and an insulated nozzle, the SeaAerial is portable, which means it can be set up anywhere onshore or offshore.

-The Editorial Team


A: In the world, Param Yuva is ranked 88th. For years, we've seen that computation requirements are always on the rise. Today, in areas like biology, personalised medicine, discovery, research and computational chemistry, apart from regular engineering problems, there is a huge need for computation. Tomorrow's computation requirements will need a huge amount of energy. At current energy-efficiency levels, we would require about 150-200 MW of power to run an exascale computer, which is just impossible. Therefore green computing, which is computing with energy efficiency, is a solution we cannot escape from. We have no option other than to embrace this concept, because our computation requirements are only increasing and it is infeasible at current energy levels to actually burn that much energy. Energy harvesting and energy reduction are being talked about, and green cooling is another initiative geared towards high-efficiency refrigeration systems.
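As a rough cross-check of the figures quoted above (our own arithmetic, taking an exascale machine to be one performing about 10^18 operations per second):

10^18 FLOPS ÷ (1.5-2 × 10^8 W) ≈ 5-7 × 10^9 FLOPS per watt

so the quoted power budget corresponds to an efficiency of only about 5-7 gigaflops per watt; reaching exascale within a practical power envelope therefore demands a large jump in energy efficiency rather than simply more electricity.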

Q: How is C-DAC working with Prime Minister Narendra Modi's Digital India campaign?

A: I think C-DAC is one organisation that works in the area of digital computing, spanning information technology and electronics. We have done a lot of work related to digital computation, including languages, high-performance computing, software technology, electronics and chip design, and all of these are interconnected. When we talk about Digital India, e-governance is one of the major examples. The campaign has been formulated around nine different ideas, and one of these ideas is connectivity for everybody, but connectivity itself is not enough. I mean, if you have a wire and have no application, how will you send data through it? Hence applications are important, and along those lines several applications are being implemented, e-governance being one. In general, there is a huge amount of emphasis on digitalisation of information and on bringing people onto the digital platform to increase digital literacy and digital empowerment. Smart City is also one such area. C-DAC has been working on these technologies, providing a number of solutions and technologies that we have, including languages, high-performance computing, e-governance and electronic design. All of these are very essential for the Digital India concept and its realisation.

Q: You mentioned the Smart City project; C-DAC and PMC are collaborating and working towards it for Pune, which has been ranked second. How long do you think it will take for Pune to become a Smart City?

A: Okay, that's an interesting question. Pune deserves kudos for having come up with a very interesting plan for the implementation of Smart City. The central government devised various projects and visions, as well as work plans for several cities. 20 cities have been identified, out of which Pune is placed in second position, which means that the vision is in the right direction. C-DAC and PMC have been working together, with C-DAC acting as a consultant for them. We have been explaining and devising mechanisms and methods for the Smart City so that this dream of Pune can be realised. There are several aspects that we have looked at. If all the city-related problems can be handled efficiently, then the city itself becomes smart; that's where ICT becomes very useful. That is the mission of PMC, and we are actively working with them.

Q: Can you give us an example of a mechanism that is going to be implemented soon in our city?

A: Pune is one example where traffic management and planning are of utmost importance. You also need urban development planning. All of these things require a great amount of technology input. For example, if one has to do urban planning, one needs GIS inputs and GIS data to be able to figure out what kind of things one can do. As in traffic


The Director General of the Centre for Development of Advanced Computing (C-DAC), Dr. Rajat Moona, has taken the organisation to a whole new level under his leadership. He was a visiting scientist at MIT, USA, in 1994-95 on a fellowship and has authored 30 research papers. He has 7 national and international patents to his credit. Commencing his career as a Scientific Officer at IISc Bangalore, he has now reached its zenith.

Q: You have been a professor at IIT Kanpur for almost 20 years. What is your biggest takeaway from so many years of experience?

A: I have been at IIT Kanpur first as a student and then as a faculty member for a long time. I have seen the journey of IIT Kanpur through the two decades that I was there; I have seen the real growth of IIT Kanpur. During this time, there have been a lot of things and a lot of students that I have interacted with and guided. I have seen IIT Kanpur from very close quarters, and looked at the research and the various technologies and directions. It has been a very good experience at IIT Kanpur.

Q: During your time at MIT as a guest researcher, were there any significant differences that you noticed between the Indian and Western education systems?

A: I think there are differences as well as similarities in all the education systems. All the education systems are contemporary, as well as based on the areas they are applied to and the local conditions. In education systems outside, as well as in the IITs, one thing which has been very common is a strong focus on quality of education, semester patterns and other such things. There is a huge empowerment of students, so there is scope for them to do a lot of work that is direction driven and not classroom driven. So there is a huge amount of education that goes beyond the classroom. Western systems have incorporated cross-departmental courses, cross-discipline courses and even credits taken from other universities' courses. In the IITs, the pressure of competition is very high. There are a lot of innovative things which could be done but are not done due to the pressure on students. A huge number of students apply to the IITs every year, so certain innovative things cannot be done because the numbers become very large and subjective evaluation becomes much harder. The Western system is largely dependent on subjective evaluation. Indian education is based on entrance exams, admissions and other such procedures. Once people come into the institute, there is a lot of scope to do the subjective thing. So the main difference between the Western and the Indian education system is the huge pressure which students face in India; therefore the space to do innovative things gets limited.

Dr. Rajat Moona, DG, C-DAC

“I would say there is no substitute to hard work.”

Q: C-DAC's latest entry in the Param series is Yuva 2, which is one of the most power-efficient supercomputers in the world. What is, in your opinion, the future of Green High Power Computing?


Bridging the Rift: With Rajat Moona, Director General, C-DAC

Squishy Finger, developed by the soft robotics experts at Harvard, mostly consists of fiberglass, silicone rubber and Kevlar fibers. These ‘Fingers’ are used as grippers for deep-sea collection of fragile biological specimens. Each finger is pressurised with seawater so that it curls like a finger around the object to be extracted.


N3XT: Next Generation Chip

Chip makers, researchers and developers everywhere have been trying to enhance the performance of computing, predominantly by increasing the speed and the memory capacity of chips. The invention of N3XT has set a new benchmark in computing chips, promising a thousandfold increase in the efficiency of computer systems.

Researchers from Carnegie Mellon, the University of California, Berkeley, and the University of Michigan, along with Professor H.-S. Philip Wong and Associate Professor Subhasish Mitra of Stanford University, have created N3XT, a new multi-layered chip which can replace conventional silicon chips to provide faster data movement using far less energy.

A Resource-Heavy Single-Storey Layout: Customarily, processors and silicon memory chips are arranged next to each other, and intricate wiring connects the components so that data can be computed on the processors and then stored on the memory chips. The silicon chips are laid out like standalone houses in the suburbs.

The disadvantage of silicon chips is that the information in them travels long distances and wastes energy, regularly creating digital traffic jams while being processed. This arrangement occupies a lot of space, and because signals have to travel long distances, processing time and energy are wasted. This results in a bottleneck when a large amount of data is to be processed.

The N3XT Chip's Skyscraper Approach is 1,000 Times Faster: N3XT chips are produced using carbon nanotube transistors, little tube-shaped structures of carbon that efficiently conduct heat and electricity. The N3XT model divides processors and memory into, say, distinct “floors” in a high-rise. Each of those floors is then connected by many tiny electronic elevators, called “vias”, which are used to transport data between the chips.

The enormous advantage of the skyscraper approach is that information moves much more quickly, and far more efficiently, over short distances than across the larger area of current silicon chips. “When you combine higher speed with lower energy use, N3XT systems outperform conventional approaches by a factor of a thousand,” said H.-S. Philip Wong, the professor who co-authored the paper.
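To get an intuitive feel for why stacking shortens data paths, here is a toy Monte Carlo sketch in Python (entirely our own illustration; the die size, layer count and via length are made-up numbers, not N3XT's actual parameters). It compares the average wire length between two random points on one large planar die with the same silicon area split across stacked layers connected by short vertical vias:

import random

def avg_wire_length_2d(trials, side):
    # average Manhattan distance between two random points on one large planar die
    total = 0.0
    for _ in range(trials):
        x1, y1 = random.uniform(0, side), random.uniform(0, side)
        x2, y2 = random.uniform(0, side), random.uniform(0, side)
        total += abs(x1 - x2) + abs(y1 - y2)
    return total / trials

def avg_wire_length_3d(trials, side, layers, via_length):
    # the same silicon area split across stacked layers; crossing between layers
    # costs one short vertical via per layer crossed
    small_side = side / layers ** 0.5  # each layer holds 1/layers of the total area
    total = 0.0
    for _ in range(trials):
        x1, y1 = random.uniform(0, small_side), random.uniform(0, small_side)
        x2, y2 = random.uniform(0, small_side), random.uniform(0, small_side)
        l1, l2 = random.randrange(layers), random.randrange(layers)
        total += abs(x1 - x2) + abs(y1 - y2) + abs(l1 - l2) * via_length
    return total / trials

if __name__ == "__main__":
    random.seed(0)
    print("planar layout :", round(avg_wire_length_2d(100000, side=20.0), 2), "mm")
    print("stacked layout:", round(avg_wire_length_3d(100000, side=20.0, layers=16, via_length=0.01), 2), "mm")

With these made-up numbers the stacked layout's average path comes out several times shorter; that is the geometric intuition behind the skyscraper claim, while the full thousandfold gain quoted above also depends on the memory and transistor technologies involved.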

Another advantage of N3XT over silicon chips: silicon chips cannot be stacked on top of one another the way N3XT chips can, because during fabrication a silicon chip is heated to an extreme degree (around 1,000 degrees centigrade), which ends up damaging the layers underneath. As the N3XT chip can be fabricated at much lower temperatures than a silicon chip, it can easily be layered without harming the stack beneath.

It is a totally different approach to computer memory, and admittedly this kind of computing is unfamiliar. Nevertheless, it is intriguing to realise that the approach could bring a large-scale revolution to chip design, much like the one that skyscrapers brought to construction over a century ago.


management, it again requires a lot of traffic input and traffic engineering approaches to take decisions about how and in which direction one can actually move the traffic. These are some of the things we have been working on. Some of these technologies have actually gone into the realisation of Pune's Smart City dream as such.

Q: People across the globe believe that India will be the next superpower. Which sector do you think has the maximum scope for improvement in our country?

A: Okay, that's an interesting thing! See, often what happens is that we skip generations and take a quantum leap in growth and development. For example, we didn't have many landline phones; people got connected through mobile phones very easily and quickly. As a result, the mobile phone is a great success story in India. Even though we have missed some generations of technology, this lets us grow into technological development at a great pace. Also, a lot of application processing can be done in the ICT area, and this is the field where India can prove its abilities. The moment we talk about ICT and huge databases like Aadhaar, an unparalleled platform providing us with incomparable features and possibilities, we see India as an innovative nation. But these huge databases (Aadhaar, banks, PAN cards, etc.) that come with digitalisation also come with concerns about data security. India is taking a big jump into data security; our financial transactions are better than those anywhere else in the world (viz. smart cards). So there are many areas where India has moved in the right direction, such as cyber security and ICT. In the near future, the world is going to be more and more dependent on India. This can result in innovative growth towards India being a superpower.

Q: You have been issued a significant number of patents. What would you tell someone who is working towards applying for their first patent?

A: The first one is always the most important. Before applying for a patent, there are many apprehensions about whether one can do it or not; one tends to question one's abilities. But once the first one is done, everything else becomes relatively easier. Patents are one of the most effective ways in which India can achieve the dream of being a superpower, but we are not taking them seriously enough. We tend to think that it's not an important aspect. Fortunately, things now seem to be changing. Patent writing and publishing has a bright future.

Q: What projects are you working on currently?

A: Currently I'm working on the National Supercomputing Mission, where we're looking into areas like applications, supercomputing infrastructure, manpower training and research on the future of exascale supercomputing. We are also working on the e-Bhasha project, which focuses on localisation of the languages used in operating systems. The ICT percentage is just 7-8% in India and we aim to make it 70-80%. This can happen only if the content is in local languages, so that language is not a barrier to work. Concepts like automatic translation are being worked on.

Q: What piece of advice would you give our young readers?

A: In simple words, I would say that there is no substitute to hard work. If you want to get somewhere, there are no shortcuts; perseverance is your answer to everything. The resources you don't have don't matter. What matters is how efficiently you use the ones you have.

We thank Dr. Rajat Moona for his time and contribution to P.I.N.G.


- Kishori S. Sekokar, HoD Computer Science, Sigma Institute of Technology, Vadodara


-The Editorial Team


actual acceleration, rotation and motion data. Sensor measurements from each piece of sporting gear can provide highly targeted and fine-grained feedback, in real time, about the alignment of a player's body and his handling of the equipment necessary for playing a particular technique with perfection.

Smart sporting equipment does not eliminate the need for a human coach. The intent is to make coaching more effective, where the coach can focus on the emotional and motivational aspects of a player's training (something only a human can attend to) and delegate the mundane task of technique enhancement to smart sporting equipment, which can act as the coach's assistant. The data from smart sporting equipment can be displayed on a performance dashboard on a smartphone or tablet. The coach can review such data and prescribe an individualised practice strategy for each player. Such an arrangement can enable remote coaching, specifically for countries or cities where enough competent coaches are not available for a given sport. The smart sporting equipment can act as a proxy for the human coach.

Like any solution, sensor-enabled sporting equipment too will have its own set of challenges. Let's see a few such challenges in the context of cricket. For example, consider the scenario of a player practising the straight drive. This will include steps like (a) taking the stance, (b) waiting for the ball to reach the bat, (c) swinging the bat for playing the straight drive, (d) swinging the bat forward, (e) hitting the ball, (f) following through, and (g) taking the stance again for the next repetition.

Each of the steps will generate sensor data. The algorithm will have to ignore the sensor data generated in steps a, b and g and use only the data from steps c, d, e and f to compare the actual vs. ideal motion for the perfect stroke. Another challenge is populating the repository of ideal motions necessary for the perfect stroke for different types of batting strokes. This can be done by using a model batsman who can play different strokes to capture the ideal motion. However, the algorithm will have to offset differences in height, range of physical motion and other physical characteristics between a player and the model batsman. A petite player might not have the same physical motion range as the model batsman, and hence the algorithm will need to scale the motion while making calculations.
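The comparison step described above might, in its simplest form, look something like the following Python sketch (our own illustrative sketch under the assumptions stated in the comments; the function name, the fixed-length traces and the simple height-ratio scaling are hypothetical simplifications, not a published algorithm):

import math

def score_stroke(actual, ideal, player_height, model_height):
    # actual and ideal are lists of (x, y, z) bat positions sampled only during
    # steps c-f; the stance/waiting steps (a, b and g) are assumed to be trimmed off,
    # and both traces are assumed to be resampled to the same number of points.
    if len(actual) != len(ideal):
        raise ValueError("resample both traces to the same number of points first")
    scale = model_height / player_height  # crude offset for the player/model height difference
    total = 0.0
    for (ax, ay, az), (ix, iy, iz) in zip(actual, ideal):
        total += math.dist((ax * scale, ay * scale, az * scale), (ix, iy, iz)) ** 2
    return math.sqrt(total / len(actual))  # RMS deviation; lower means closer to the ideal stroke

A real system would need something more forgiving of timing differences (dynamic time warping, for example) and a richer body model, but the principle of trimming, scaling and then comparing against the model batsman stays the same.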

Sensors have already changed running and biking, with apps such as Strava and wearables such as Fitbit. Now they're being placed inside existing sports equipment: tennis rackets, running shoes, basketballs and golf clubs. Remember, it is the challenging problems that happen to be the interesting ones to solve. And if you solve them, you can make sensor-enabled sporting equipment a reality and make it a perfect ally for deliberate practice to achieve peak performance.


- Mr. Ashwin Patel, Founder, 10xSchool, United States of America

Smart Coaching: Surveying your dexterity

In order to become better at anything, you not only need to practise, you need to perform deliberate practice. According to K. Anders Ericsson of Florida State University, deliberate practice includes activities that have been designed to improve the subject's current level of performance. One of the key differences between deliberate and regular practice is that in the former, the subject receives immediate, informative feedback based on the results of their performance. Incorporating feedback after every repetition of the activity helps the subject improve performance. A good side benefit is that deliberate practice prevents the formation of wrong habits that, once learned, are not easy to quit.

In sports, the responsibility for designing the practice activity and providing feedback usually rests with a coach. The coach critically observes the player as he practises and provides feedback which the player can consciously incorporate in his next repetition. With each repetition, the incremental course correction nudges the player closer to performing the task with near perfection.

To make coaching more effective and efficient, particularly with regard to deliberate practice, the coach's task of manually observing and providing feedback can be automated. This is the basic idea behind smart sporting equipment. We have seen devices such as Nintendo's Wii Remote employ sensors such as an accelerometer, a gyroscope and a motion sensor to track the video game player's motion and position and translate them into on-screen actions. The Wii Remote acts as a proxy for the digital object being manipulated by the player. A similar approach can be applied to sporting equipment. Sensors such as the ones used in the Wii Remote can be integrated directly into physical sporting equipment to track the motion of the equipment as the player practises a particular technique. By comparing the actual motion against the ideal motion necessary to perfect a technique, the sporting equipment can provide real-time audio or tactile feedback, which the player can incorporate in his next repetition.

Micro-electro-mechanical system (MEMS) based accelerometers and gyroscopes can be used to measure the acceleration and angular rotation of sporting equipment. In their simplest form, MEMS-based accelerometers and gyroscopes measure acceleration and angular rotation respectively, based on the change in capacitance caused by the movement of a micro-mechanical structure. Additionally, a motion sensor can be used to measure the motion of the sporting equipment. It consists of two parts: a receiver and a transmitter. The transmitter emits one or more infrared beams which are picked up by the receiver. Generally the transmitter is static, and the receiver can be integrated onto the sporting equipment to track its motion. The point in space at which the receiver intercepts the infrared beam provides the coordinates of the sporting equipment at a given time. A series of such coordinates over a period of time provides the path of motion of the sporting equipment.
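Turning that series of coordinates into something a coach can use is mostly bookkeeping. As a small, hypothetical illustration (the sampling rate and units below are our own assumptions, not those of any specific product), the instantaneous speed of the equipment can be estimated from successive position samples:

def path_speeds(samples):
    # samples: chronologically ordered (t, x, y, z) readings from the receiver,
    # with t in seconds and x, y, z in metres
    speeds = []
    for (t0, x0, y0, z0), (t1, x1, y1, z1) in zip(samples, samples[1:]):
        distance = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
        speeds.append(distance / (t1 - t0))
    return speeds

# hypothetical usage: a bat tip sampled at 100 Hz during a swing
# print(path_speeds([(0.00, 0.0, 0.0, 1.0), (0.01, 0.1, 0.0, 1.0), (0.02, 0.3, 0.0, 1.1)]))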

The measurements from the different sensors can be transmitted via Bluetooth to a computing engine, which computes and provides the audio or tactile feedback to the player.

Additionally, sensors can be integrated into other cricketing gear such as the helmet, arm guards and leg pads to capture and compare ideal vs.


Fontus is a device that can extract water from the atmosphere and condense it into potable water. It is a boon for athletes, cyclists and mountaineers who frequently travel in areas where potable water is scarce. Powered by solar cells, it can harvest up to 500 ml in an hour.


DC Metro: Building Rust-free Rails

Throughout the world, DC traction is a preferred system of traction: 97% of the metros in the world operate on DC systems. In India, 25 kV traction is preferred, and one of the main arguments against the use of DC systems is stray current. But India has come up with its own solution, the Stray Current Monitoring System, used in the Bangalore Metro for the first time.

In DC rail transit systems, the running rails are usually used as the return conductor for the traction current. Low resistance between the traction return rails and the ground allows a significant part of the return current to leak into the ground, so the current deviates from its intended path. This is normally referred to as leakage current or stray current. The amount of leakage depends on the conductance of the return tracks compared to the soil and on the quality of the insulation between the tracks and the soil. Stray currents represent a serious problem for any electrified rail transit system, and corrosion has been a major concern to railways since the early days of DC traction.

The operation of DC traction systems requires suitable measures to prevent corrosion caused by stray currents on railway and non-railway installations. Stray currents accelerate the electrolytic corrosion of metallic structures located in the proximity of the transit system, giving metal pipes and earthing grids laid in the ground near the tracks a much shorter life, which affects both safety and economy. Research has been carried out to control stray current in DC electrified rail transit systems.

Stray current control is essential in these railway transit systems where the rail insulation is not of sufficient quality to prevent severe corrosion to the rails. Optimum control of stray current is also of direct benefit to the operational and safety aspects of the DC electrified railway systems. It could reduce the rail touch voltage.

Stray Current Monitoring System (SCMS): The Stray Current Monitoring System is a straightforward and efficient way of avoiding repetitive manual measurement and any interference with the stray current collecting system. The system continuously measures the rail-to-earth potential under operational conditions, complete with central analysis, visualisation, signalling and archiving capabilities.

Any metallic structure, for example a pipeline, buried in soil represents a low-resistance current path and is therefore fundamentally vulnerable to the effects of stray currents. One ampere of stray current can oxidise 9.11 kilograms of iron per year.
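That figure is consistent with Faraday's law of electrolysis, assuming the iron is lost as Fe2+ ions (a worked check of our own):

m = M · I · t / (n · F) = (55.85 g/mol × 1 A × 3.15 × 10^7 s) / (2 × 96,485 C/mol) ≈ 9.1 kg of iron per ampere-year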

The Stray Current Monitoring System (SCMS) continuously collects voltage data between the return circuit and the earth structure along the line. The potential is measured through a Voltage Limiting Device (VLD). The SCMS includes functions such as setting up network lines and stations (up to 10 lines and 60 control points), data acquisition, transmission through the communication network and many more.
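At its core, the monitoring logic amounts to comparing each measured rail-to-earth potential against a permissible limit and raising an alarm for the control points that exceed it. A minimal sketch of that idea in Python (the threshold value and data layout are hypothetical, chosen only for illustration; a real installation takes its limits from the applicable railway standards):

ALARM_VOLTS = 50.0  # hypothetical rail-to-earth alarm limit, for illustration only

def scan_control_points(readings):
    # readings: {(line_id, point_id): rail_to_earth_potential_in_volts} gathered via the VLDs
    return sorted(key for key, volts in readings.items() if abs(volts) > ALARM_VOLTS)

# hypothetical usage:
# scan_control_points({(1, 7): 12.4, (1, 8): 63.0, (2, 3): -58.2})  ->  [(1, 8), (2, 3)]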


Holographic TV: Redefining Entertainment

With many of the most recent big-budget film releases available in 3-D, and everybody discussing the future of 3-D TV, a number of people are looking towards 3-D holographic projection that does not require glasses.

Where does this innovation come from? 3-D holographic projection is based on an illusion technique called “Pepper's Ghost” that was first used in Victorian theatres across London in the 1860s.

Pepper's Ghost was used to create ghost-like figures on stage. Hidden from the onlookers' view, an actor wearing a spooky costume would stand facing a sheet of glass positioned at an angle. The audience could see the glass and the apparition, but not the actor directly. A dedicated light would reflect the actor's image onto the sheet of glass, and a ghost-like reflection would appear before the audience.

How is this technique used today? With the use of the latest HD projectors, CGI animation, specialist HD filming techniques and numerous enhancements, the Pepper's Ghost technique has been brought into the 21st century. Rather than a real object or a person's reflection appearing on a sheet of glass, high-definition video and CGI animation are channelled directly onto a specially designed, chemically treated film by means of a high-power HD projector. Despite being costly, this modern approach produces a much clearer, more convincing visual.

What sort of images can be projected as holograms? Because of the current approach of projecting CGI animation and pre-recorded footage, nearly anything is possible. The “blank canvas” approach is often adopted, creating a storyboard limited only by imagination. The storyboard can then be handed over to a CGI animation team, which can bring it to life using the latest 3-D software such as Maya or 3ds Max.

Real people can be filmed giving a speech, a dance or a presentation, for example, and can then be projected as 3-D images. Once the holographs have been generated, effects can be added to any entity; an individual can, for instance, be beamed into the room, Star Trek style.

Who has used 3-D holographic projection, and why? In August 2009, Endemol, the makers of the celebrated reality show Big Brother, projected the housemates' loved ones into the house to convey messages of support and consolation. The messages were pre-recorded using HD cameras and dedicated lighting. The contestants were seated in front of a stage set up inside the Big Brother house; the housemate's relative or friend was then beamed onto the stage, playing their message. Even though the holographic displays were hard to judge on 2-D TV screens, the event was hailed as a great success, drawing delighted reactions from the housemates, which made for extraordinary TV.


- Mrs. Anita S. Pachkhande, QA Officer, Elysium Pharmaceuticals, Vadodara

- Mr. Kishor Thigale, Technical Manager, Secheron, Switzerland


equipped with 27 mm lenses, approximately the focal length of the human eye. Every camera photographed a third of the picture: the right camera shooting the left part of the image, the left camera shooting the right part of the image, and the central camera shooting straight ahead. The first film made using the three-strip Cinerama process was ‘This is Cinerama’ (1952), a travelogue of the world's legendary vacation spots, complete with a thrilling roller-coaster ride. This sequence was so realistic that many people became seasick when they saw it on the huge screen. It was the biggest box-office hit of 1952, and it played in only one cinema for three months.

Although Fred Waller’s initiative was awe-inspiring, it lacked in certain aspects.

Firstly, having skilled projectionists was essential, as the projectors had to be operated at precise time intervals. In case one of the film strips tore, the damaged section had to be replaced with a black slug exactly equal to the missing footage to maintain synchronisation. Zoom lenses could not be utilised for movies made for Cinerama, since that would lead to distortion and the harmonisation would be lost. The cost alone of modifying an existing theatre to accommodate Cinerama was between $25,000 and $75,000, which simply could not be afforded by most theatres.

The films made using Cinerama required skillful improvisation on the part of the directors, producers as well as the actors. Filming actors that were interacting with each other in the same screen was cumbersome as well because of the different fields of view. The actors seemed to be looking past each other, making the scene bizarre. The portion where two screens merged had to be occupied by an insignificant object like a tree or a car to avoid the distortion of an actor’s body.

The chief disadvantage of them all was that the picture looked natural only from a rather constricted “sweet spot”. The “sweet spot” for viewing Cinerama was at the crossover point of the projection beams from the three projectors. The directional stereophonic sound accompanying the film extended its realism for the audience. The central screen would jitter and weave, leading to its relative movement with respect to the side screens, which distracted the audience.

When all's said and done, Waller's perspective on cinema was a masterstroke and provided a new entertainment thrill. Waller and his team were so absorbed in making Cinerama happen that they overlooked the most crucial part, the realisation. Cinerama proves to be a classic example of an innovation falling short because it was ahead of its time.

The prevalent 3-D glasses, which were created with a similar perspective, enhance the illusion of depth perception, thereby adding a third dimension. This paraphernalia modifies your vision, unlike the Cinerama, which altered the screen, making it a far more effortless solution! Cinerama and 3-D glasses were both gimmicks devised to combat the rise of the television. Isn't it astounding to know that these glasses originated in the same period as the Cinerama, and are rampant even today? Don't we wish we could experience this breathtaking work of art, the Cinerama?


Cinerama: Technology's Honourable Mention

Could a war possibly impinge on an entertainment industry? Bizarre, isn’t it?

But it did! Post-war United States saw the average family growing in affluence, creating new societal trends. The film industry wanted to aim at the youth. They wanted to create tales of rebellion and rock n’ roll instead of traditional, idealised portrayals of characters.

This was the age where the television was laying its foundation in popular culture. The era when the idiot box tried to encroach upon the cinema’s expansive turf, stealing a major chunk of its viewers. The time when there was a dramatic plunge in ticket sales and box-office receipts.

The film industry mounted four major campaigns involving technical advances in colour, scope and wide-screen experiences: VistaVision, Smell-o-Vision/Aroma-Rama, CinemaScope and Cinerama.

VistaVision is a high-resolution, widescreen alternative to the 35 mm motion picture film format, developed by Paramount Pictures’ engineers in 1954. Unlike CinemaScope, this technique did not use anamorphic processes. Aroma-Rama and the rival Smell-o-Vision were systems that discharged odours during the projection of a film so that the viewer could “smell” what was happening in the movie. CinemaScope is an anamorphic lens series used for shooting widescreen movies; these lenses theoretically allowed an image with an aspect ratio of up to 2.66:1, almost twice as wide as the regular 1.37:1 ratio.

Of these campaigns, Cinerama was the one in the limelight.

Invented by Fred Waller, Cinerama (coined from the words Cinema and Panorama) was a one-of-a-kind process introduced during this period, when the antagonism between the movie and television industries was at its peak. Waller was of the opinion that a sense of depth and realism could be achieved by a wide, curved screen that included the viewer’s peripheral vision. He had realised that normal human vision is actually arched: the way our eyes are set in our head gives us a curved view of the world around us. With its extremely broad angle of view, the Cinerama system aimed to achieve an image similar to that of the human eye. A person watching a picture covering the same area, but projected on a curved screen, would feel as if he were living the scene.

An incidental anagram of “American”, Cinerama was a revolutionary process that split the picture into three panels, each projected by a separate, interlocked 35 mm projector. The screen was deeply arched and subtended an angle of 146 degrees. Made up of hundreds of 22 mm wide vertical strips of a perforated material, it prevented light reflected from one end of the screen from falling onto the other.

A multi-track discrete and directional magnetic surround-sound system, developed by Hazard E. Reeves, was employed in this process for the first time. Five of the speakers were behind the screen, two on the sides and one at the back of the auditorium. A sound engineer was given a particular script according to which he directed the sound between the speakers.

The photographic system used cameras


-The Editorial Team

AS&E’s Mini Z system is the world’s first handheld backscatter X-ray imaging system, used by security enforcement agencies for real-time detection of potential threats inside objects. It detects radiation reflected from materials, revealing the interior, and displays an image of the object being screened.


Science Instrument Module (ISIM); the Optical Telescope Element (OTE), including the mirrors and backplane; and the Spacecraft Element, comprising the spacecraft bus and the sunshield.

The OTE, which works as the eye of Webb, gathers the light arriving from space and feeds it to the instruments of the ISIM for analysis. The backplane, the “spine” of Webb, performs the task of supporting the mirrors.

The sunshield is made of a super-thin polymer called Kapton, about the thickness of a human hair. The sunshield subsystem divides the observatory into a hot, sun-facing side (the spacecraft bus) and an icy, anti-sunward side (the OTE and ISIM). It keeps the warmth of the Sun, the Earth and the spacecraft’s own hardware away from the OTE and ISIM, so that these parts of the Observatory stay exceptionally cold and do not suffer from over-heating (the operating temperature must be kept under 50 K, or about -370 degrees F).

The Launch
The James Webb Space Telescope will be launched on an Ariane 5 rocket. The launch vehicle is part of the European contribution to the mission. The Ariane 5 is the world’s most dependable launch vehicle, which is why it was chosen to carry the Webb Telescope to its destination in space. The Ariane 5’s record of successful launches spans more than 11 years and some 57 consecutive launches (as of February 2014).

The Orbit
In October 2018, the 6.5-ton James Webb Space Telescope (JWST) will be launched and placed in a working orbit around L2, some 1.5 million km from Earth on the anti-sunward side. Once there, JWST will begin its main mission: to examine the universe by observing an extensive range of targets, which includes identifying the first galaxies in the Universe, watching the birth of new stars and their planetary systems, tracking their motions, and studying planets in our own solar system. The essential limiting factor is the amount of on-board fuel required to keep the telescope working in orbit and pointing precisely at its targets. When that fuel runs out, JWST will drift away from the L2 Lagrange point, entering a chaotic orbit in the region of Earth.

NASA is aiming for a mission timetable of somewhere between five-and-a-half and ten years. With cutting-edge instruments and parts, Webb is likely to stay in good shape for years. Moreover, with recent strides in space-based astronomy, it will be less expensive for NASA to operate Webb than Hubble (which required repairs to be carried out manually).

It’s an exciting time to be contemplating space. Perhaps Webb will discover many more Mars-like planets. Even more exciting is the possibility of finding another Earth. JWST could give us a whole new era of progressive science if things work out. After over ten years of consistently diligent work, the James Webb Space Telescope has us standing on the verge of the future of space science.


-Suraj Bhor
Pune Institute of Computer Technology

Pune

JW Space Telescope
Amelioration to the Hubble

With each additional inch of aperture, each additional second of observing time and each additional bit of atmospheric obstruction you remove from your telescope’s field of view, the deeper and more clearly you are able to gaze at the Universe. When the Hubble Space Telescope (HST) started operating in 1990, it introduced a new era of space-based astronomy. No more did we need to battle with the atmosphere; no more did we need to stress over clouds; never again was atmospheric twinkling an issue. We would simply point our telescope at the target, stabilise it and gather photons. In the 25 years since then, we have started to cover the whole electromagnetic spectrum with our space-based observatories, getting our first genuine look at what the Universe truly looks like.

Hubble is an astounding piece of hardware. However, it is constrained in various ways:
• HST needs to use fuel to stay in orbit and to change orientation to point at a specific object of interest. There is a great amount of fuel on board, but it will eventually be depleted, at which point Hubble will stop being a viable system.
• The HST cannot point at the Sun, because the intense light and heat would damage its sensitive instruments. Thus, it is constantly pointed away from the Sun, implying that the HST cannot observe Mercury, Venus or certain stars that are close to the Sun either.
• The maintenance cost of the telescope is very high.

Combating these limitations led to the birth of the James Webb Space Telescope, yet another breakthrough in the cosmic world.

JWST is a joint effort between NASA, the European Space Agency (ESA) and the Canadian Space Agency (CSA). The NASA Goddard Space Flight Center is managing the development effort. The most prodigious thing about JWST is that it can see 13.4 billion years into the past! And this is particularly impressive because the universe is around 13.8 billion years old.

Weighing around 6500 kg, with a sunshield the size of a tennis court, this gigantic space observatory’s 6.5-metre primary mirror will be made up of 18 hexagonal segments and will be cooled to a noteworthy -233 degrees Celsius (40 Kelvin). The telescope will be folded to fit inside its rocket for launch and will open automatically, like a flower, once in space. The total collecting area of JWST is about seven and a half times that of HST.

Objectives of JWST
• Hunt for the first galaxies and luminous objects formed after the Big Bang.
• Determine how galaxies have evolved since their formation, and keep track of their motions.
• Measure the physical and chemical properties of planetary systems, including our own solar system, and examine the potential for life in those systems.
• Observe the formation of stars from the earliest stages to the formation of their planetary systems.

The Observatory
The Observatory is the space-based segment of the James Webb Space Telescope system and involves three components: the Integrated


featured article

K-Glove or Robo-Glove is a grasp-assist device meant to prevent repetitive stress injuries in autoworkers and astronauts. Driven by actuators and pressure sensors, the glove tightens its synthetic tendons when its sensors detect that the wearer has grasped an object, thereby taking the strain off the wearer.


Artificial Intelligence (AI) has gained sudden momentum in the present scenario. The field has endless opportunities and applications in numerous sectors. According to John McCarthy, who coined the term in 1955, AI is “the science and engineering of making intelligent machines”. Neural networks, one of the hot topics under AI, contain a large number of small processing units which work in a way similar to the neurons in a human brain. This whole area of AI and neural networks is studied under the term “Deep Learning”.

Using the same concept at the Massachusetts Institute of Technology (MIT), Assistant Professor Vivienne Sze and her colleagues have developed an extraordinary chip called Eyeriss. Traditionally, deep learning requires complex graphics processing units (GPUs) to receive data from the sensor, perform computations and render the output to the user. Their huge drawback of excessive power consumption makes them impractical to install on handheld devices. To overcome this problem, scientists started performing these complex operations on GPU servers: the raw data from devices is uploaded to a GPU server, where powerful processors perform the operations and send the results back to the users.

As every system has its own pros and cons, this too has some limitations. It demands a high-speed internet connection whenever you want to avail yourself of these services. Security-wise, you should never upload your personal data to a server which is used by several other people. Moreover, due to the data transmission latency in this system, instantaneous output is not obtained.

Eyeriss, a solution to these problems, is a chip that processes data locally instead of uploading it to servers. In addition, it consumes a significantly smaller amount of power compared to the traditional method. It can easily be embedded in mobile systems or any Internet of Things (IoT) device.

In technical terms, Eyeriss is a 168-core chip which uses memory- and power-efficient algorithms to produce an output. These 168 processing units are arranged in a specific pattern. They receive raw data, perform their operations and provide their output to the next layer. This process continues until the last layer is reached; the output of the last layer is the desired response to a given input. The output provided by every unit is the result of a training process, and the complete network of units tries to find some relation between the raw data and the results of that training. The developers set up a trade-off between the chip’s power consumption and its flexibility, to allow the implementation of a wide variety of networks tailored to perform different tasks.
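To make the layer-by-layer flow described above concrete, here is a minimal sketch in plain Python of data passing through successive layers of simple units until the last layer produces the response. It only illustrates the dataflow; the network sizes and weights are made up, and this is not Eyeriss's actual architecture or circuitry.

```python
# Minimal sketch of layer-by-layer processing: each layer of simple units
# transforms its input and hands the result to the next layer.
# Illustrative only; NOT Eyeriss's actual dataflow or circuit design.
import random

def dense_layer(inputs, weights, biases):
    """One layer: every unit computes a weighted sum of the inputs plus a bias,
    followed by a simple non-linearity (ReLU)."""
    outputs = []
    for unit_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(unit_weights, inputs)) + bias
        outputs.append(max(0.0, total))          # ReLU non-linearity
    return outputs

# Made-up network: 4 raw inputs -> 3 hidden units -> 2 output units.
random.seed(0)
layers = [
    ([[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)], [0.1, 0.1, 0.1]),
    ([[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)], [0.0, 0.0]),
]

signal = [0.5, -0.2, 0.8, 0.1]                   # raw sensor data (illustrative)
for weights, biases in layers:                   # process until the last layer
    signal = dense_layer(signal, weights, biases)

print("Output of the last layer:", signal)       # the network's response
```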

Most of the energy in a GPU is consumed in data transfer between the memory bank and the processor core. To resolve this issue, each core in Eyeriss has its own memory space, and a small circuit compresses the data before it is transferred to another core. The cores are also able to share data with their immediate neighbours, saving a great deal of energy, and the data-allocation circuitry can be reconfigured for different network types.
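As a rough software analogy for why compressing data before moving it between cores pays off, here is a minimal run-length encoder for the long runs of zeros that typically appear in neural-network activations; the scheme and numbers are illustrative and are not taken from the Eyeriss design itself.

```python
# Illustrative run-length encoding of sparse activation data: long runs of zeros
# (common after ReLU) compress well, cutting the cost of moving data around.
# A software analogy only, not the chip's actual compression circuit.
def rle_encode(values):
    encoded, zero_run = [], 0
    for v in values:
        if v == 0:
            zero_run += 1
        else:
            encoded.append((zero_run, v))   # (zeros preceding this value, value)
            zero_run = 0
    encoded.append((zero_run, None))        # trailing zeros
    return encoded

activations = [0, 0, 0, 5, 0, 0, 7, 0, 0, 0, 0, 2, 0, 0]
packed = rle_encode(activations)
print(packed)                               # [(3, 5), (2, 7), (4, 2), (2, None)]
print(len(packed), "pairs instead of", len(activations), "raw values")
```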

Eyeriss could be used in mobiles to perform complex image processing locally, without consuming much of your battery power, thus revolutionising the tech world.


Eyeriss
Accelerated Secure Computing

Since the introduction of Rosie, the beloved maid on the animated series “The Jetsons”, the development of a robotic household has been the ultimate futuristic dream. Inventor Mark Oleynik, through his invention of the robot “Moley”, has brought the world one step closer to every lazy food-lover’s dream of eating without having to cook.

The robot’s prototype was premiered to widespread acclaim at the international robotics show at Hanover Messe, Germany. It redefines the future of home cooking. Moley consists of two fully articulated hands that are equipped with tactile sensors, meant to function in a specially designed kitchen which consists of a stove top, utensils and a sink. Its hands can stir, chop, pour, use utensils and turn the stove on and off with the speed, sensitivity and movement of a human being. This piece of machinery uses up to twenty motors, two dozen joints and 129 sensors in order to mimic the movements of human hands.

Human movements are captured and converted into digital actions using algorithms designed in collaboration between the inventors of Moley and teams from Shadow Robotics (UK), Stanford University (USA) and the Sant'Anna School of Advanced Studies, Pisa, Italy. Moley can reproduce recipes of renowned chefs like Tim Anderson, winner of MasterChef 2011. Its first recipe was a crab bisque, based on Anderson’s recipe, whose movements had been recorded in 3-D and translated into instructions for replication by the robot. Anderson explains excitedly, “If it can make bisque, it can make a lot of other things.”

The kitchen is accessible to any hungry glutton (consumer) through a built-in touchscreen or a smartphone application. The consumer version, which is set to launch in 2017, will be supported by an iTunes-style library of over 2,000 recipes from all around the world. The robotic chef, alongside a complete purpose-built kitchen with an oven, hob, dishwasher and sink, will cost consumers a small fortune of around $15,000. Moley is a prominent step towards large-scale food production and overcomes many limitations posed by existing processes.

Moley is capturing the attention of many industrial sectors and has been approached by prominent food giants in the restaurant and airline industries. 3-D recipe recording will open doors to a new world for chefs and home cooks. It will make teaching, cooking and learning much easier and accessible for aspiring learners and home cooks. Professionals will be able to present their creations to a huge audience.

Apart from being every food lover’s ultimate dream, Moley also has earned itself a nomination for an award and is among the semi-finalists at the UAE Artificial Intelligence & Robotics Award for Good. The Moley Robotic Kitchen is uniquely able to address challenges related to obesity, diabetes and other serious diseases by employing special diets, and by harnessing world-class technology to make it easy for people to eat well, every day. Not only is this a major development for the food industry but also promotes the consumption of fresh and healthy meals. Oleynik said, “Food and proper nutrition is the basis of a good quality of life. My goal is to make people’s lives better, healthier, and happier.”


Robotic Chef
Cooking could not get easier

technocrats

-Suraj Prasanna Kulkarni
SRTTC VIT-Kamshet Campus

Pune

-Pratik Mangtani
Pune Institute of Computer Technology

Pune


Swarm Intelligence
Cumulative Competence

All the staunch Batman fans who have watched Tim Burton’s 1992 ‘Batman Returns’ surely remember the ‘bat fluttering’ scene. Well, it was not a no-brainer scene: swarm intelligence was deployed to create the movement of the flock of bats in the movie. On similar lines, Peter Jackson’s ‘Lord of the Rings’ used swarm intelligence, through a software package called MASSIVE, to easily create thousands of individual agents, such as humans, so that the scenes looked natural. One might consider animating a group of objects individually and then distributing them over the screen; however, the result isn’t nearly as natural.

Those were some references to swarm intelligence being used in pop culture. But what exactly is swarm intelligence? Swarm Intelligence is a form of artificial intelligence: the collective intelligence of decentralised, self-organised systems, natural or artificial, spread throughout an environment. SI systems consist of a population of simple agents, coordinating and interacting locally with one another and with the environment. One can consider the brain as an analogy to swarm intelligence, with the neurons acting as “agents” and the brain as the “swarm”. But where did we get this mind-boggling idea of “swarm” intelligence? Nature, at times, provides computer scientists with inspiration in the form of ant, bat and bacterial colonies. Most of the time, the agents follow simple rules and coordinate with their adjacent neighbours, leading to an “intelligent” behaviour unknown to the individual agents. For instance, an individual ant never knows what the whole group is up to. Communication in such systems is largely done through stigmergy, a method of indirect communication in which an action is based on the trace left in the environment by the previous action, presumably performed by a former agent.

Consider this amazing example of bats, where a swarm of bats, called a flock, works to protect itself. Bats have very poor eyesight and use echolocation in order to move around the three-dimensional world. Using this formation, the bats can communicate with one another about obstacles or moving objects, even predators, coming their way. The same principles and rules have been applied to self-flying robots, which communicate with each other and devise a path that they choose to follow. These robots can be sent on search and rescue missions, helping to save hundreds of lives.

The backbone of swarm intelligence is mainly built on two algorithms: Ant Colony Optimisation (ACO) and Particle Swarm Optimisation (PSO). These algorithms are typically applied to search and optimisation domains. ACO investigates probabilistic algorithms inspired by stigmergy and the foraging behaviour of ants, whereas PSO deals with probabilistic algorithms inspired by the movement of organisms, like a bird in a flock or a fish in a school. Can swarm intelligence be used to solve a problem or crisis? Yes, and scientists have taken the first steps to use such technologies for the betterment of humankind.
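For readers curious what PSO looks like in practice, here is a minimal sketch in Python that minimises a toy function; the inertia and attraction coefficients are generic textbook-style values, not taken from any particular paper.

```python
# Minimal Particle Swarm Optimisation sketch: particles move under the pull of
# their own best position and the swarm's best position, minimising f(x, y).
import random

def f(x, y):                      # toy objective: minimum at (3, -2)
    return (x - 3) ** 2 + (y + 2) ** 2

random.seed(1)
particles = [{"pos": [random.uniform(-10, 10) for _ in range(2)],
              "vel": [0.0, 0.0]} for _ in range(20)]
for p in particles:
    p["best"] = list(p["pos"])
g_best = min((p["best"] for p in particles), key=lambda b: f(*b))

w, c1, c2 = 0.7, 1.5, 1.5         # inertia, cognitive and social coefficients
for _ in range(100):
    for p in particles:
        for d in range(2):        # velocity update pulls towards both "bests"
            r1, r2 = random.random(), random.random()
            p["vel"][d] = (w * p["vel"][d]
                           + c1 * r1 * (p["best"][d] - p["pos"][d])
                           + c2 * r2 * (g_best[d] - p["pos"][d]))
            p["pos"][d] += p["vel"][d]
        if f(*p["pos"]) < f(*p["best"]):
            p["best"] = list(p["pos"])
            if f(*p["best"]) < f(*g_best):
                g_best = list(p["best"])

print("Best position found:", g_best)   # should be close to (3, -2)
```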

Swarm intelligence has been employed especially to optimise already existing solutions to conventional problems. Some examples include communications, dynamic control, tracking moving objects and prediction. Ant-based routing in telecommunications networks is based on swarm intelligence. Ants leave behind a trail of pheromones whilst in search of food, and the trail is reinforced on routes where the chances of finding food are better. Similarly, ant-based routing uses a probabilistic routing table, reinforcing the route successfully traversed by each ‘ant’ (a small control packet) which floods the network.
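A small, hypothetical sketch of such a probabilistic routing table is shown below: next hops are picked in proportion to their pheromone level, routes that work get reinforced, and all pheromone slowly evaporates. The topology, latencies and constants are invented purely for illustration.

```python
# Sketch of ant-inspired probabilistic routing: each "ant" picks a next hop with
# probability proportional to pheromone, good choices get reinforced, and all
# pheromone slowly evaporates. Network and values are purely illustrative.
import random

random.seed(2)
# pheromone[node] maps each neighbour to its pheromone level (start equal).
pheromone = {"A": {"B": 1.0, "C": 1.0}, "B": {"D": 1.0}, "C": {"D": 1.0}}
latency = {("A", "B"): 5, ("A", "C"): 1, ("B", "D"): 1, ("C", "D"): 1}

def choose_next_hop(node):
    hops = list(pheromone[node])
    weights = [pheromone[node][h] for h in hops]
    return random.choices(hops, weights=weights)[0]

for _ in range(500):                       # send 500 control-packet "ants" A -> D
    node, path, cost = "A", [], 0
    while node != "D":
        nxt = choose_next_hop(node)
        path.append((node, nxt))
        cost += latency[(node, nxt)]
        node = nxt
    for a, b in path:                      # reinforce the traversed route
        pheromone[a][b] += 1.0 / cost      # cheaper routes get more pheromone
    for n in pheromone:                    # evaporation keeps the table adaptive
        for h in pheromone[n]:
            pheromone[n][h] *= 0.99

print(pheromone["A"])                      # the route via C should dominate
```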

Some other cool applications of swarm intelligence include human swarming and crowd simulation. Through mediating software, networks of distributed users can be organised into “human swarms” (also referred to as social swarms) by real-time control systems, resulting in a unified collective intelligence. When logged into such platforms, groups of distributed users can collectively answer questions, generate ideas and make predictions as a single emergent entity; in other words, human swarms can out-predict individuals. One of the most promising uses of swarm robotics is in disaster rescue missions: swarms of robots of different sizes can be sent to locations inaccessible to human rescue workers, to detect the presence of life behind huge blocks of walls via infra-red sensors.

And the coolest of them all: swarms can be used by the military to form an autonomous army of their own. The US Navy has tested a swarm of autonomous boats that can steer and take offensive action by themselves.

Dr Vijay Kumar, an Indian roboticist at the University of Pennsylvania, created swarms of miniature drones to increase production efficiency and throughput in agriculture. Productivity in agriculture is one of the biggest problems we are facing right now; the causes include water shortage, crop diseases and climate change. At the University of Pennsylvania, they adopted an approach called precision farming. They flew swarms of aerial robots in orchards, built precision models and maps of individual plants and took infrared images of the trees, so that farmers could infer what kind of inputs every plant needed. The farmers could then count the total number of fruits on the trees and estimate production based on leaf area index and photosynthesis. The robots gave the farmer an insight into precision farming that was not possible earlier, improving the yield by 10%.

The innovation that has been brought about in this field is just the tip of the iceberg, naturally because it’s a relatively new concept. There’s still a long way to go before we can exploit behaviors of living creatures and get inspired by them to solve worldly problems.


-Shadab Shaikh
Pune Institute of Computer Technology

Pune

The E-volo VC2 volocopter, with its 18 rotors, is more stable and safe than the conventional helicopter and is emission-free, as it is operated on batteries charged with renewable energy. Moreover, this drone doesn't require a runway for taking off or landing and can be effortlessly controlled with a joystick.


Digital Neuron
Replicating the Human Brain

Imagination and invention, when blended together, can gift us an easy life and better ways of living and socialising in this gregarious world! The digital brain is one such fascinating invention, with the potential to revolutionise the practical computing world. The alluring prospect behind creating digital neurons is to build neuronal qualities, and thereby artificial intelligence, into computers that work much faster than existing supercomputers.

When it comes to tasks like vision or voice recognition, the human brain can easily outstrip any man-made machine. It can handle large amounts of data as well as computations that challenge even the world’s largest supercomputers. The brain relies on biological nerve cells called neurons, which are responsible for conducting abstract reasoning and controlling the movement of our body. The brain consists of about 100 billion neurons, and it’s been said that if your brain were a computer, one neuron would be able to process about one byte of data at a time. All of this is accomplished using less energy than it takes to power a light bulb! Over the last few decades, the neuron has been modelled at different levels of abstraction.

Imagine a situation where you try to touch electrical machinery with your bare hands, with no protection. Odds are, you would get an electric shock and immediately pull your hand away. Getting the shock and pulling your hand away are processes that happen within milliseconds of each other. The human brain can communicate all of this in so little time thanks to the neurons that help it pass the necessary messages from one part of the body to another. There are still many unanswered and unresolved questions about how exactly the brain works and why it works so well and so efficiently.

Scientists had proposed hardware implementations of neuron models in the late 20th century. In a more recent context, Karlheinz Meier, coordinator of the Human Brain Project, said, “It may be an artefact of evolution, which is not important, but it also may have a big impact on our ability to compute.” This research work is based on copying neuronal structures to create silicon neuron circuits which produce rich, dynamic behaviours. Silicon neurons emulate the electro-physiological behaviour of real neurons. Such neurons can be implemented using any kind of signal: digital, analog or mixed. However, as computer technologies work only on digital systems, scientists are more focused on digital silicon neurons.

This may give rise to a question: “Why only silicon?” The answer to this question is simple. Silicon is not really an optimal electronic material, but it is favoured over materials with better electronic properties simply because it is cheap and extremely abundant. Also, being a semiconductor, it possesses properties of both conductors and insulators, and no other common electronic material works at such high voltages. To avoid losses in computer systems, silicon remains the preferred choice for any new computer technology.

Researchers around the world have mostly used two techniques to design neuronal structures: Application-Specific Integrated Circuits (ASICs) and Field-Programmable Gate Arrays (FPGAs). There are myriad approaches to integrating neurons on a chip: for instance, many neurons can be implemented on a single chip, or some elements can be distributed across chips. The requirement for quick updates can be met using FPGAs, though these are slower than digital ASICs. Digital techniques offer a higher level of programmability and more controllable precision. Recently, Very Large Scale Integration (VLSI) techniques have also been used for the development of digital neurons.

Currently, there are two well-accepted models of silicon neurons: the conductance-based model, which uses analog signals to process data, and the leaky integrate-and-fire model, designed for digital processing of data. Technology has succeeded in designing and fabricating silicon neuron circuits, and the day isn’t too far off when these neuromorphic chips will act as a CPU (a digital brain) for computer systems.
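As an illustration of the second model, here is a minimal discrete-time simulation of a leaky integrate-and-fire neuron in Python; the constants are generic textbook-style values, not parameters of any particular silicon implementation.

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential leaks towards
# rest, integrates input current, and emits a spike (then resets) on reaching a
# threshold. Constants are generic illustrative values.
v_rest, v_reset, v_threshold = -65.0, -70.0, -50.0   # millivolts
tau_m, resistance = 10.0, 10.0                       # ms, megaohms
dt = 1.0                                             # time step in ms

v = v_rest
spike_times = []
for t in range(100):                                 # 100 ms of simulated time
    current = 2.0 if 10 <= t < 80 else 0.0           # input current (nA) pulse
    dv = (-(v - v_rest) + resistance * current) / tau_m
    v += dv * dt                                     # leaky integration
    if v >= v_threshold:                             # threshold crossed: spike
        spike_times.append(t)
        v = v_reset                                  # reset after the spike

print("Spikes at t (ms):", spike_times)
```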

TrueNorth is a computer chip recently built by researchers at IBM. It is a custom computer system that mimics a neuron’s activity, and the chip comprises about a million digital neurons. TrueNorth redefines what is possible in the field of brain-inspired computers in terms of size, architecture, efficiency, scalability and design techniques. It is the first single, self-contained chip to achieve one million individually programmable neurons, sixteen times more than the previous largest neuromorphic chip.

Scientists from Heidelberg University in Germany have also developed a new technology based on parallel data processing. In neuromorphic computing, neurons made of silicon take over the computational work on specially created computer chips. The neurons are linked together in a fashion similar to the biological nerve cells in our brain. If the assembly is fed with data, all the silicon neurons work in parallel to solve the problem.

It is well known that the Von-Neumann architecture is so deeply ingrained in today’s computer technology that it would be difficult to abandon the system now. Be that as it may, considering the beauty of neuronal technology, it seems like we will soon be able to realise brain-like computers having inbuilt artificial intelligence to sort out and process the massive amount of information generated on a daily basis. Without a doubt, digital neurons would provide more efficiency while working with big projects like research in astrophysics; but the best way to go about building a proper artificial neuron is still a matter of debate.

To date, relatively little work has been done in this field. Hence, there is a need to look at more adaptive approaches to build advanced information-processing capability into computer systems. Dharmendra Modha, the team leader of TrueNorth, envisions “the future of computing being composed of two types of computers-traditional logical computers and synaptic brain computers-working together in a sort of left brain-right brain symbiosis to create systems that were previously unimaginable. I think that the chip and the associated ecosystem have the potential to transform science, technology, business, government and society”.


-Tejas Bramhecha
MIT Academy of Engineering

Pune

Quell is a radical, wearable intensive nerve stimulation (WINS) technology that helps people suffering from chronic pain, without the use of any kind of drugs. It uses Transcutaneous Electrical Nerve Stimulation (TENS) to send electrical impulses originating from the calf to the lower brain.


Sky Bender
Flying Routers

“Google is not a conventional company. We do not intend to become one” was one of the best lines from Google’s 2004 Founders’ IPO Letter. From that point forward, Google has never failed to flabbergast us with its mind-blowing projects. The latest of these is Project Skybender. We have heard of Project Loon, which aimed at providing internet access at 4G speeds to remote areas with the help of high-altitude balloons. Project Skybender is a similar programme, except that this time it uses solar-powered drones and super-fast 5G! Yes, that evidently makes all the difference.

Google is now testing solar-powered drones at Spaceport America in New Mexico for Project Skybender. It is a secretive project, and the technology giant has been building several prototype transceivers at the isolated spaceport. The drones include an optionally piloted aircraft called Centaur and other solar-powered drones by Google Titan. The technology used by these drones to deliver ultra-fast 5G internet from the sky is millimetre wave radio transmission. Millimetre waves fall under the Extremely High Frequency (EHF) spectrum, with frequencies in the range of 30 to 300 GHz. These waves are capable of transmitting gigabits of data per second, almost 40 times more than current 4G systems. The telecoms have left the millimetre wave band completely unused on the grounds that EHF waves are easily scattered by rain, fog and the atmosphere, giving them roughly a tenth of the range of conventional 4G signals.
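That roughly ten-fold range penalty follows from basic free-space propagation, even before rain and atmospheric absorption are taken into account. A quick back-of-the-envelope sketch (the distance and the 2.6 GHz comparison frequency are illustrative assumptions):

```python
# Back-of-the-envelope free-space path loss: loss grows with frequency, so for
# the same transmit power and antennas, a 28 GHz link covers roughly a tenth of
# the distance of a ~2.6 GHz 4G link. Rain and absorption only add to this.
import math

C = 3.0e8  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

d = 10_000  # 10 km, illustrative
print("Loss at 2.6 GHz:", round(fspl_db(d, 2.6e9), 1), "dB")
print("Loss at 28 GHz: ", round(fspl_db(d, 28e9), 1), "dB")

# For equal loss, range scales as 1/frequency in free space:
print("Range ratio 2.6 GHz / 28 GHz =", round(28 / 2.6, 1))  # ~10x shorter reach
```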

But as this band is not crowded at all, it has opened a new window for Google, and that is truly a major point of preference. Currently, Google is testing SkyBender at a frequency of 28 GHz, and signals at this frequency have a very high chance of fading. The tech giant will need to work on focused transmissions to make the signal reach the surface from these high-altitude drones. The Federal Communications Commission (FCC) has granted Google permission to test the project at the above-mentioned frequency until July 2016. Spaceport America’s Spaceport Operations Center (SOC), where the project is currently being tested, houses one of the millimetre wave transceivers, and the other is located at the Vertical Launch Area. A repeater tower has also been set up, along with a number of other sites, probably to test the reception of the waves beamed from the drones. An eight-foot-wide dish antenna has also been installed at the SOC pad.

The US military’s research arm DARPA (Defense Advanced Research Projects Agency) started a programme called “Mobile Hotspots” in 2014, comprising a fleet of drones that can provide 1 Gbps internet for troops in remote areas. This implies that Google was not the first to experiment with this concept; however, Google could be the first to roll it out for the public very soon. The SkyBender tests at Spaceport America will cost Google around $300k. If the tests are successful, Project Skybender will become a medium for providing internet access to towns and cities through drones. With Skybender being a mysterious project, all we can do now is wait and watch what Google has in store for the technological world after Project Loon.


In today’s competitive market, a new product or a better version of an older product pours into the market every minute. The things you look for while opting to buy a new smartphone are the advanced features that the older version lacked. Even if you buy the best model available, a better version is still bound to enter the market sooner or later.

That’s where Project Ara comes in handy. For a purely technical definition, Project Ara is the codename for an initiative that aims to develop an open hardware platform for creating highly modular smartphones. A modular smartphone is, in essence, an assembled smartphone, and Project Ara is a programme that aims at building such phones.

Project Ara falls under Google’s ‘Advanced Technology And Projects’ (ATAP) division. Developed by a team led by Paul Eremenko, it is scheduled to begin pilot testing in the USA in 2016, with a target bill of materials of $50 for a basic grey phone. The name Ara comes from its lead mechanical designer, Ara Knaian. Prior to Project Ara, Phonebloks introduced the world’s first well-known modular smartphone concept in 2013: an open-source modular design put forward by Dutch designer Dave Hakkens to reduce electronic waste. Later in 2013, Phonebloks announced its partnership with Motorola, owing to their similar goal of developing a modular smartphone.

An assembled smartphone is analogous to an assembled computer. You can pick the camera of your choice, a battery with a long life, a loud and clear speaker, or specifically increase the processor speed and screen resolution for a better gaming experience. The possibilities are limitless. Bored with your OS? Take your friend’s for a few days for a change. It’s just like replacing a part of a machine.

However, with the pros come the cons as well. Although the idea of Project Ara seems fascinating at first sight, it has its technical limitations.
1. Too many modules can cause problems in the design and aesthetics of the smartphone, and the modules can end up costlier than a packaged smartphone.
2. Not every module will be compatible with every other module. In fact, very few modules will be compatible with all others unless an international standard is followed while creating them.
3. Performance will be a major issue, at least in the initial stages. Since there are a lot of modules present, the phone will contain more circuits and connections than a conventional smartphone. As a result, propagation delay will be much higher, which will lead to lower performance and efficiency.

At this moment, Project Ara has its advantages but is very difficult to implement practically. However, this is not the first project in which Google has encountered difficulties. This promising technology will surely add to the ease and comfort of making smartphones a little smarter. Let’s hope tech experts (maybe even you!) can figure out solutions to the pitfalls faced by Project Ara in the near future and make the modular smartphone an “economical reality”.


Project Ara
Reinventing the smartphone

-Sammyak Sangai
Pune Institute of Computer Technology

Pune

-Jainesh Patel
Pune Institute of Computer Technology

Pune


React Native
Learn Once, Write Anywhere

We live in a mobile-first world. Smartphones and cheap internet have brought about a change in the way online digital content is consumed. Unfortunately, the way we build software and online content has not adapted perfectly, and the transition from desktop to mobile has not been smooth. HTML, CSS and JS work well in the browser, but they provide a horrible user experience on mobile, and an even worse experience for the developer. The only way to ensure a consistent, delightful experience across different mobile devices is to use the native APIs. Native tools produce faster and neater apps that provide a better user experience compared to cross-platform solutions derived from web-based technologies such as HTML, CSS and JS. However, this approach has a major obstacle: Android uses Java while iOS uses Objective-C/Swift, which means you need to write your code twice to support both operating systems. While some cross-platform frameworks exist, they do not work well; no cross-platform framework could beat the performance of native code. Then Facebook open-sourced React Native.

What is React Native?
Enter React Native: Facebook’s framework for building native apps using React (a JavaScript library for building user interfaces, developed by Facebook and Instagram). The focus of React Native is on developer efficiency across all the platforms you care about. Facebook uses React Native in many production apps and will continue investing in it.

React Native makes use of JavaScriptCore on iOS and Android (the same engine used by iOS Safari browser). With React Native, you can have the speed and power of native apps along with easy development that accompanies React. One might argue that several frameworks exist which use JavaScript such as Cordova and Titanium. These frameworks are not really native. They sort of wrap a web app inside a native app using Webviews. This is performance intensive. It depletes a lot of battery and simply cannot match the performance of a native app. Moreover, you cannot access the native APIs that are not accessible by a browser. To be honest, these frameworks almost always result in apps with a poor user experience.

Here is what makes React Native so unique and remarkable: asynchronous execution. The golden rule for making an app fluid and responsive is this: never perform long-running operations on the main thread and thus block the UI. This is what causes an app to “hang”. All operations between the JavaScript application code and the native platform are performed asynchronously, and the native modules can also make use of extra threads. This means we can decode images off the main thread, save to disk in the background, measure text and compute layouts without blocking the UI, and more. As a result, React Native apps are naturally fluid and responsive. This is perhaps the most important feature of React Native.

Getting started:
React does not make any assumptions about your technology stack. You do not need to rewrite the entire app if you have already built a native one. React can be used alone or alongside native code, making it extensible. This means that you can build a single page of your app in React to try it out, which makes switching over to React low-risk. In fact, many components of the Facebook app are already built with React.

Facebook has invested a lot of time in React Native. The documentation is extensive and using React has been made as painless as possible. A simple Hello World app can be set up and deployed to Android and iOS in under 20 minutes.

So what does React offer?
On the web, simply saving and reloading your page is enough to test changes. On the native side, even something minute like a change in font size requires the entire app to be recompiled. This results in a lot of wasted time, especially on large codebases where compilation takes several minutes. But with the speed offered by this new technology, the possibility of errors occurring also increases. Hence, another area React Native is working on is error reporting: not only will it report the error, it will also suggest ways to debug it.

React Native uses Flexbox as its UI layout tool. Flexbox is a new layout mode in CSS3. It makes it simple to build the most common UI layouts, such as stacked and nested boxes with margin and padding. While it can be difficult to learn, most web developers will find it familiar. React Native also supports regular web styles, such as fontWeight, and a StyleSheet abstraction.

React forces you to break your application down into discrete components, each representing a single view. These components make it easier to iterate without breaking another component. It results in more predictable and reliable code.

Is it production ready?
Facebook’s apps have several components that are powered by React. It is extremely scalable, it is improving continuously, and it is safe to say that React can comfortably power production-quality apps. The Facebook Ads Manager app uses React Native; Ads Manager allows businesses that advertise on the social network to manage their accounts and create new advertisements.

Over 1,000 libraries and components are already available on the npm registry, and you can use these components directly in your application.

So, what does this mean for the industry?
With React Native, Facebook is rethinking established practices. A company that once required separate teams for iOS, Android and the web can now use a single team. Startups usually have to launch on one platform only, since they simply do not have enough skills to develop a cross-platform solution. With React Native you only need to “Learn Once, Write Anywhere”. It has emerged as the only viable cross-platform framework for making truly native apps, bringing a highly efficient approach to constructing user interfaces. This novel framework is open-source, and Facebook itself is dedicated to continuing its maintenance.

React as well as React Native clearly have very high potential, as can be seen from the applications built with them, like Facebook, Myntra and Exponent, among many others. The only investment we need to make is to learn them and spread them, to make React a worldwide platform for software development and to prove its slogan, “Learn Once, Write Anywhere”, a reality.


AlphaGo, a program developed by DeepMind, a British startup acquired by Google, is capable of beating a professional human player at the game of Go, a complex Chinese game. AlphaGo defeated the European Go champion, Fan Hui, five games to zero, a feat that hasn’t been accomplished by any other AI.

-Aditya Shirole
Pune Institute of Computer Technology

Pune


V-Motion
Converting Motion into Music

Every living thing on this earth enjoys music. Every civilisation has developed some form of music and has practised it predominantly for recreation. It can be inferred that music existed prior to the dispersal of humankind all over the world and has played an important role in human life since the dawn of time. It has been evolving since the time of Adam and Eve, transforming through prehistoric music, medieval music, renaissance music and many other forms, and has finally arrived at the shape it holds today.

This transformation of music has mainly involved changes, upgrades or the invention of new objects used to produce sound. The prehistoric period used the human voice, the medieval period used stringed instruments, the renaissance period instituted percussion instruments, and today we use electronic instruments like electric guitars and music samplers, as well as various mathematical equations and algorithms, similar to the ones sketched by Georgios Cherouvim. The “V-Motion Project” is also a result of one such transformation.

The V-Motion project is a combined effort of Frucor (makers of the V energy drink) and their agency Colenso BBDO, who gathered a number of brilliant people from all kinds of creative fields for the project. It started off with the initial idea of using a Kinect camera to detect and record the motion of the musician.

The instrument consists of Kinect cameras for capturing the performer’s movements, and high-performance computers to process those movements and produce the visuals and music. It uses Ableton Live, music sequencing software generally used by DJs during live performances; but instead of the conventional physical control panel, the performer’s motions are captured and analysed to interface with the software. For instance, the action of touching your head with your left hand triggers a certain music loop configured in the software, and the action of spreading your hands and imitating playing a keyboard generates sounds according to the movement of the performer’s fingers.

How it works:
Audio: Two Kinect cameras are pointed at the performer, one to capture the skeleton movement while the other measures depth data using infrared sensors. One of the two computers handles the processing related to music and the other handles the processing of the visuals, which is written in C++ with openFrameworks. The two computers communicate over connectionless UDP, sending JSON objects back and forth for each frame. The computer handling the audio sends its current state, which contains the skeleton position, to the visual system, while the computer processing the visuals sends back metadata when the performer plays the “Air Keyboard”. The amazing part is that one of these computers runs Mac OS and the other runs Windows, working in harmony.
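A minimal sketch of that kind of per-frame exchange, using only Python's standard library, might look like the following; the port number and the message fields are invented stand-ins, not the project's actual protocol.

```python
# Minimal sketch of sending per-frame state as JSON over connectionless UDP, the
# pattern described above. Port and field names are invented placeholders, not
# the actual V-Motion protocol.
import json
import socket

VISUALS_ADDR = ("127.0.0.1", 9000)   # hypothetical address of the visuals machine

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_frame(frame_number, skeleton_joints):
    """Fire-and-forget one frame of state; UDP needs no connection handshake."""
    message = {
        "frame": frame_number,
        "skeleton": skeleton_joints,   # e.g. {"leftHand": [x, y, z], ...}
    }
    sock.sendto(json.dumps(message).encode("utf-8"), VISUALS_ADDR)

# Example: stream a few frames of (made-up) joint positions.
for i in range(3):
    send_frame(i, {"leftHand": [0.1 * i, 1.2, 0.3], "rightHand": [0.5, 1.1, 0.2]})
```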

The biggest breakthrough for this instrument is the ‘Air-Keyboard’. It uses gestures for triggering certain events or music. The performer has to spread his hands and move his fingers to play the notes.

Visual: The visuals portray how the song is being built up and shaped by the motion, showing how each and every movement of the performer affects the music, so that he or she is not just dancing along with it. They also include visual fireworks matching the arc of the song. The crucial part of the visual processing is getting as much real-time data as possible from the music processor to the visual processor, which is done by sending JSON objects over the UDP connection.

Layers of the visual:
• The Landscape: This is the background, which changes according to the music. The challenge in rendering the landscape is keeping the video and audio perfectly in sync, which is done by sending a signal from the music system to the visuals machine over UDP. The playback needs to be fast and stable, so the 64-bit QTKit is used to play the video, running in a separate thread to keep the 1.9 GB Motion JPEG QuickTime file at a steady 60 fps.

• Circular user interface surrounding the avatar of the performer: The user interface of the instrument is a crucial part and has a huge job to do. It shows how the instrument works and what the performer is doing to create the music.

Interface visuals are pre-rendered transitions, which are manipulated according to the motions of the performer, and animations are triggered by the performer’s actions. For instance, when the performer hits a key on the keyboard, it draws a “note” on the score that’s ticking away above him, as well as firing off a sprite animation from the key that he or she hits. Every part of the interface has different transitions and animations when in use, and the synergy of all the transitions creates magnificent visuals.

• A green giant: This is the avatar of the performer, centre-staged, a triangulated figure that matches the rest of the visuals. The triangulated effect is obtained from a silhouette that can easily be extracted from the Kinect’s depth information. It is run through a few stages of image processing to get the contours of the performer’s body, and the points are run through a Delaunay triangulation. The difference between the original and the calculated depth is used to decide the colour and outline, which makes it look like the figure is being lit from above, giving it a subtle sense of form.
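A rough sketch of that silhouette-to-triangles step, using OpenCV and SciPy, is given below; it is a generic reconstruction under assumptions, not the project's actual C++/openFrameworks code.

```python
# Rough sketch of the silhouette -> contour -> Delaunay triangulation step using
# OpenCV and SciPy. A generic illustration, not the project's openFrameworks code.
import cv2
import numpy as np
from scipy.spatial import Delaunay

# Stand-in for a Kinect depth silhouette: a white blob on a black background.
mask = np.zeros((240, 320), dtype=np.uint8)
cv2.ellipse(mask, (160, 120), (60, 100), 0, 0, 360, 255, -1)

# Extract the outer contour of the figure and thin it out a little.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
points = contours[0].reshape(-1, 2)[::5]          # keep every 5th contour point

# Triangulate the contour points; each triangle could then be shaded by depth.
triangles = Delaunay(points)
print(len(points), "points ->", len(triangles.simplices), "triangles")
```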

The “V-Motion Project” can be summed up as a contrivance that can actually make you live the music being created in real time, thanks to its exquisitely intelligent concept and design. It is a cutting-edge technology that lets you dive deep into music while creating it.

This instrument was used in a music video named “Can’t Help Myself”, shot on the streets of Auckland, New Zealand, when the V-Motion Project team created music through movement. The makers plan to take it a step further by allowing flexible loop editing and real-time VJing. As new cameras come out with higher frame rates and higher resolutions, the responsiveness and power will only get better.


Acoustic tweezers is a way of manipulating cells in 3-D using sound waves. In a standing acoustic field, objects will experience an acoustic radiation force that moves the objects to special regions of the acoustic field. This tech could make 3-D printing of cell structures possible for tissue engineering.

-Anuj Godase
Pune Institute of Computer Technology

Pune


Biospleen
Intensifying Immunity

Christopher Reeve, the famous actor who immortalised the character of Superman, died an early death at the age of just 52. He was diagnosed with a disease called sepsis, and later died of cardiac arrest. Even after claiming so many lives, sepsis still remains an abominable threat to humankind.

Sepsis is a global healthcare issue that affects 18 million people worldwide every year, and on average each case costs more than US $22,000 to treat. Although its mortality rate is higher than that of heart attacks and it claims more lives than any cancer, only a handful of people are aware of it. The condition arises when the immune system is unable to fight off poison in the blood and ends up damaging the body’s own organs and tissues. The patient can then die rapidly from septic shock, caused by a drop in blood pressure and organ failure.

Donald E. Ingber and his team at Harvard’s Wyss Institute for Biologically Inspired Engineering developed a device called the ‘Biospleen’, which can curb the medical issues of blood poisoning, including sepsis. The Biospleen is inspired by the spleen, the organ in the upper quadrant of the abdomen. A protein found in the human body, mannose-binding lectin (MBL), binds to sugar molecules on the surfaces of bacteria, viruses and toxins. The device uses nanoscale magnetic beads coated with a modified version of MBL to rid the blood of toxins and harmful pathogens. In its innate state, MBL has a branch-like “head” and a stick-like “tail”. In the body, the head binds to specific sugars on the surfaces of all sorts of bacteria, fungi, viruses, protozoa and toxins, and the tail cues the immune system to destroy them. However, sometimes other immune-system proteins bind to the MBL tail and activate clotting and organ damage, so the researchers used genetic engineering tools to lop off the tail and graft on a similar one from an antibody protein that does not cause these problems. A magnetic device later separates the beads from the blood, which is then injected back into the patient’s body.

The first experiment was carried out on human blood and proved successful, cleansing almost 89% of blood infected with pathogens. Later, the device was used on rats infected with E. coli, S. aureus and toxins similar to the pathogens present in sepsis patients. Ingber and the other researchers found that 90% of the infection was removed, and the remainder could be dealt with by the body itself. Instead of destroying the pathogens, the researchers pursued the idea of simply removing them from the blood. The Biospleen is capable of filtering 5 litres of human blood. Survival among the treated animals was 90 percent, compared to 14 percent in controls.

The Biospleen is a very effective device that could save millions of lives efficiently, taking only a few hours to deal with the disease compared to earlier times, when it would take days to identify the infection. It works equally well with antibiotic-resistant organisms, and the members of the research team say the device might also help to treat viral diseases like HIV and Ebola.


Everyone has played video games in childhood, and many of us must have wondered how a game is made and how it works! Every gamer wishes to design video games and frame them according to his or her imagination, but the notion of coding and learning new stuff scares most of us away. What if it were possible to build our own game without coding? Well, now you can.

Pixel Press is a user-friendly application that lets you draw your own video game, consisting of different levels (no code required), and share it with others. Using it, you can build a Mario-style iPad game in any way you want. All you have to do is draw a bunch of symbols and lines on a piece of paper and snap a picture of it, and Pixel Press turns it into a video game. How is it done? The implementation consists of the following steps: Capture, Interpret, Edit, Design, Gameplay, Arcade and Sharing.

Pixel Press’s capture technology is built with Optical Character Recognition (OCR). OCR is used to pre-process the photographed game drawing: straighten it, eliminate noise and apply thresholds. The only prerequisite is that you need to draw on Pixel Press’s paper, which is basically a pre-printed stock grid; OCR separates the user’s game drawing from this paper. It also helps with the interpretation stage. Instead of reading letters and numbers of an existing language, like English, it works with a symbolic language (or glyphs), making “drawing” a video game easy using common characters such as +, x, ^ and =. Its engine interprets which glyph has been drawn, and where.
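The kind of pre-processing described, grayscale conversion, noise reduction and thresholding, can be sketched in a few lines of OpenCV; this is a generic illustration only, not Pixel Press's actual capture pipeline, and the deskewing and glyph-recognition steps are omitted.

```python
# Generic sketch of the pre-processing described above: grayscale, de-noise and
# threshold a photographed page so drawn pen strokes stand out for recognition.
# Illustrative only; not Pixel Press's actual capture pipeline.
import cv2

image = cv2.imread("level_drawing.jpg")              # hypothetical photo of the page
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)       # drop colour information
blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress sensor noise
binary = cv2.adaptiveThreshold(                      # cope with uneven lighting
    blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 2)
cv2.imwrite("level_binary.png", binary)              # dark strokes are now white
```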

To fix mistakes in the drawing, Pixel Press also offers technologies like Draw-in-App. It allows you to capture your sketch once and then be ready to go: you can edit everything you have drawn on the fly, without going back to the paper, and may even draw an entire sketch in-app. Draw-in-App uses the glyph-processing engine from the interpretation step. Imagine a grid of numbers that matches the grid paper, each number corresponding to a gameplay element like a spike or a coin; that’s basically how this part works.
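A tiny sketch of that grid idea follows: a 2-D array of glyph characters is walked cell by cell and mapped onto gameplay elements. The glyph meanings used here are assumptions for illustration, not the official Pixel Press glyph set.

```python
# Tiny sketch of interpreting a grid of glyphs into gameplay elements, the idea
# described above. The glyph-to-element mapping is an assumption for
# illustration, not the official Pixel Press symbol set.
GLYPH_MEANINGS = {
    "+": "terrain block",
    "x": "spike hazard",
    "^": "ladder",
    "=": "moving platform",
    "o": "coin",
    " ": None,                      # empty cell
}

level_sketch = [
    "    o    ",
    "  ==     ",
    "+++x+++^+",
]

def interpret(grid):
    """Return (row, column, element) for every recognised, non-empty glyph."""
    elements = []
    for r, row in enumerate(grid):
        for c, glyph in enumerate(row):
            meaning = GLYPH_MEANINGS.get(glyph)
            if meaning:
                elements.append((r, c, meaning))
    return elements

for row, col, element in interpret(level_sketch):
    print(f"cell ({row}, {col}): {element}")
```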

In the design step, the glyph drawing is brought to life by adding colours and textures. The Pixel Press gameplay engine is built on top of the Unity 2-D game engine. It takes all the glyphs, semantics and theme choices and puts them together in a 2-D world for players to jump, fly, run and climb through. Its arcade is a web API written in Node.js. Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine and is an extremely fast, reliable and scalable option.

The previous iterations of Pixel Press were all about recognising human handwriting and interpreting those symbols as game objects. With the recent addition of colour recognition, using computer vision to detect the game board and then parse it for colour blocks has made the application even more user-friendly. The blocks go through a similar interpreter that turns them into game objects for video games.

As far as Pixel Press is concerned, the only limitation is your imagination. All you need to do is think of a game and Pixel Press will create it for you!


Pixel Press
Portraying Your Imagination

-Sejal Abhangrao
Pune Institute of Computer Technology

Pune

-Sachi Shambharkar
Pune Institute of Computer Technology

Pune


Ninja Sphere
The Intelligent Household Manager

Imagine a device that can perform basic household tasks, which many deem too mundane to do personally, with hardly any effort on the user’s side. This is no distant dream, as NinjaBlocks, a Sydney-based company, has recently developed Ninja Sphere, a device that behaves like a digital house-elf of sorts. The product is highly acclaimed by people all over the globe, as it has made day-to-day tasks significantly easier. This new technology, based on the popular “Internet of Things” concept, is proving to be one of the most promising new products in the gadget world.

One of the main arguments raised against smart home systems is the investment required to completely rewire existing houses to accommodate them. Ninja Sphere is essentially a smart home system that eliminates the need for that investment. It is a separate fixture that uses gesture control to enable smart household management. It works with other “smart” devices like WiFi-enabled lightbulbs, connected power sockets, Sonos media centers and plant monitors, among others. The basic tasks it fulfills include monitoring temperature, humidity, lighting and energy usage of all the devices connected over the Ninja Sphere wireless network. This contraption also alerts the user via notifications on a smartphone if any appliance is left running when the user is outside the network’s reach. Along with this, it also allows the user to turn that appliance off through a smartphone.
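As a rough illustration of the alert behaviour described above (not NinjaBlocks' actual software), the logic boils down to a simple presence-and-state rule; the appliance names and the notification callback below are hypothetical.

# Warn about appliances left on while the user is outside the network's reach.
def check_appliances(user_in_range, appliances, notify):
    """Return the appliances left running and notify the user about each one."""
    if user_in_range:
        return []
    left_on = [name for name, is_on in appliances.items() if is_on]
    for name in left_on:
        notify(f"{name} is still running - you can switch it off from your phone")
    return left_on

check_appliances(user_in_range=False,
                 appliances={"heater": True, "tv": False},
                 notify=print)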

Ninja Sphere uses IFTTT recipes that can connect multiple apps, enabling the user to control the heating or air conditioning of the house. In addition, the user can switch lights on and off via a smartphone or smartwatch.
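An IFTTT recipe is essentially an "if this, then that" pair; the sketch below shows what such a recipe amounts to, with a made-up trigger and action rather than a real IFTTT channel.

# A toy "if this, then that" recipe: turn the heating on when the room is cold.
RECIPE = {
    "trigger": lambda state: state["room_temp_c"] < 18,        # "this"
    "action":  lambda state: state.update({"heating": "on"}),  # "that"
}

home = {"room_temp_c": 16.5, "heating": "off"}
if RECIPE["trigger"](home):
    RECIPE["action"](home)
print(home)  # {'room_temp_c': 16.5, 'heating': 'on'}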

At some point or other in our lives, we have all had the annoying experience of losing our keys. Ninja Sphere can work with other technologies like Gecko, a Bluetooth Low Energy (BLE) locator system, to wirelessly tag items like keys, pets, or even children. These tagged items can then be tracked with the user's smartphone, or directly with the hub, to locate them precisely. This feature can also be used to extend security measures in the house: expensive items like jewellery, ornaments and laptops can be tagged to ensure that they stay where they are supposed to, which can come in handy in case of theft. The Ninja Sphere can even be programmed to send alerts to the user if any of these tagged objects are moved and, if the necessary hardware is installed, to activate cameras in the room in such a situation.

Ninja Sphere can serve as a hub for high-tech home systems like Philips Hue, Belkin WeMo, Dropcam, Pebble, Spotify, Sonos and Parrot. Equipped with a ZigBee radio and USB ports for more advanced home applications and for plugging into Arduino-based products like speakers and camera sensors, it is a handy tool for overall home system management.

The specifications of the Ninja Sphere are as follows:
• ARM Cortex A8 processor
• Bluetooth modules
• WLAN
• USB (2.0/3.0)
• ZigBee amplifier
• Light Link
• HA
• OpenHAB & ODB2
• Coloured LED matrix display based on activity

The core of the Ninja Sphere is called the Spheramid. The Spheramid is the sleek, dome-like unit of the Ninja Sphere that carries the LED matrix displaying information based on activity. It features gesture control to detect the user's hand movements and decides, based on certain parameters, which actions to perform. It also carries out the control operations needed to support home appliances. In addition, the complementary accessories, including smart tags and special plugs for connecting various appliances to the Ninja Sphere, are all handled by the Spheramid.

The Spheramid is essentially a top-class hub designed to manage the home appliances in the Ninja Sphere network. Its tasks include device mapping (location), a gesture-controlled interface, remote control of devices (via smartphone apps for iOS and Android) and prompt synchronisation with the connected devices over the network. It uses wireless sensors to assess control performance.

Depending on the size of the house, multiple Spheramids can collaborate to act as gateways to the Ninja Sphere wireless network, which comprises the Spheramids and the devices connected to them. This forms a secure network for an individual home. Additional gateways can be included in the form of "waypoints": small USB devices that act as additional nodes in the network.

Analysing the BLE (Bluetooth Low Energy) signals of the connected devices allows the Spheramid system to construct a 3-D model of the environment, which further helps with the locating service. With a few swipes of your hand, the gesture-controlled Spheramid can help you check the energy consumption of a device, manage the lighting, send text messages to activate appliances and control the sound system in the network.
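A hedged sketch of how signal strength feeds such a locating service is shown below: RSSI is converted to a rough distance with the standard log-distance path-loss model, and distances from several fixed Spheramids or waypoints can then be combined (for example by trilateration) into a position in the 3-D model. The calibration values are illustrative, not Ninja Sphere's own.

# Estimate distance from a single BLE RSSI reading (log-distance path loss).
def rssi_to_metres(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """tx_power_dbm is the expected RSSI at 1 m; the exponent models the room."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(round(rssi_to_metres(-71), 1))  # a reading of -71 dBm is roughly 4 m away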

One of the best aspects of Ninja Sphere, especially for young engineers, is that it is an open-source development, open to upgrades by third-party developers. This helps budding developers build their own platform to support the devices and to create new functions and abilities, making the home system a more solid and useful platform. The developers at NinjaBlocks believe that the future of connected devices must be open and based on standards. "For that reason, we want to open source our entire IoT platform. If we're able to reach our goal, we'll have the support to open source everything," says Elliot Shepherd, a developer at NinjaBlocks.

The Ninja Sphere device was first launched on the crowd-funding website Kickstarter for $180, along with the Spheramid gateway, two waypoints, a smart power socket and free shipment. After the Kickstarter sale, NinjaBlocks began selling it independently at $329, offered along with a smart tag gift worth $55, a special plug worth $25 and free shipment.

The development of Ninja Sphere and its commercial use is proof of the advancement technology has made and how much it has permeated into our daily lives. This contrivance has tremendous scope in the near future and has vast potential to develop not just as an electronic housemaid, but also as a means of security for the house and the family as a whole.


-Soumaya Ranjan, Pune Institute of Computer Technology, Pune

Droppler, invented by Nascent, is an innovative water-use management device. This portable, 3-D printed device is placed near a sink or shower and indicates, on an LED panel, how much water has been used throughout the day. Users can also exchange some of its components to customise the gadget.


One of the major drawbacks of classroom teaching is the disparity between a real-world system and the schematic drawn on the blackboard to model it. Yet, considering its many limitations, a schematic is still one of the best and closest ways of conveying the meaning of such real systems within the four classroom walls. In the electrical and electronics domain, the technology of writable circuits bridges this gap between the schematic and the actual system; rather, it turns the schematic representation into the system itself!

The technique developed by researchers is based on a soft sheet of silicone rubber with tiny droplets of a metal alloy, eutectic gallium indium (EGaIn), embedded in it. A eutectic system is a homogeneous solid mixture of species in a specific ratio that forms a super-lattice. Ordinarily, each component retains its own distinct bulk arrangement, so such a mixture should not have a single melting point; at the eutectic ratio, however, the mixture melts as a whole, releasing all its components at once into a liquid, at a specific temperature called the eutectic temperature. For EGaIn this temperature is about 15.5 degrees Celsius, so the alloy is liquid at room temperature. In the sheet, the globules are not electrically connected; on application of pressure, they sinter, forming a conductive line along the path where the pressure is applied. Thus, the connecting wire between two components can literally be drawn by hand.

Manufacturing normal circuit boards involves multiple steps and requires a schematic design to be submitted to the manufacturer beforehand. Writable circuits need only a sharp object to translate the schematic onto the board and can be custom-made in no time, avoiding the use of any high-end manufacturing tools. Conventional circuit boards are also rigid and may malfunction when deformed, whereas the silicone rubber boards are soft and can be deformed without much internal damage.

After the circuit is "written" and ready, unwanted sintering of the droplets may still occur due to exposure to excessive pressure. This can lead to a short circuit that is not easily detected before its effects are seen, and it can be prevented by applying a coating of clear glue on the prepared circuit: the glue dries into a rigid layer that blocks unnecessary sintering from external pressure. The width of the connecting wires will be as thin as the stylus used to write the circuit, which is still much thicker than normal connecting wires. Lasers can be used instead of a stylus for thinner and more precise traces, but this greatly reduces the simplicity of making the circuits.
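A back-of-envelope estimate shows why trace width matters: the resistance of a drawn trace is R = ρL / (wt). The dimensions below are assumed for illustration, and the EGaIn resistivity is taken as roughly 29 × 10⁻⁸ Ω·m, a value readers should verify against current literature.

# Resistance of a rectangular hand-drawn trace: R = rho * length / (width * thickness).
RHO_EGAIN = 29e-8  # ohm * m, approximate resistivity of EGaIn

def trace_resistance(length_m, width_m, thickness_m, rho=RHO_EGAIN):
    """Resistance of a uniform rectangular conductive trace."""
    return rho * length_m / (width_m * thickness_m)

# A 5 cm trace drawn with a 1 mm stylus, sintered about 0.1 mm deep:
print(round(trace_resistance(0.05, 1e-3, 1e-4), 3), "ohms")  # ~0.145 ohms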

One of the most prominent applications of writable-circuit technology is Circuit Scribe, a rollerball pen that writes with conductive silver ink, making circuit design as easy as doodling. However, relying on high-end equipment to write the circuits would defeat the very purpose of undemanding manufacturing.

The writable circuits technology, despite its remarkable features, needs development to make it match the current standards of the field. Considering the pros and cons, this technology is indeed helpful for students and electronics enthusiasts as it saves the time and cost involved in factory fabrication, and gives a custom design at the same time.


Writable Circuits: Doodling Electronics

-Maitreyee Marathe, NITK, Surathkal
