Around 300 car models are Android Auto-ready today. Nvidia, the mobile chip maker, and over 28 car manufacturers have joined efforts to promote the platform as part of the Open Automotive Alliance. As Android Auto gains traction, it is moving towards becoming an integrated infotainment, communication, and car dashboard solution.

How Android Auto works

Android Auto first came out as a way to turn rather limited in-car infotainment systems into a smarter solution.

For the time being, the task of Android Auto is to extend the Android platform into the car. It does that by letting the smartphone project and manage a user interface on the vehicle’s touchscreen.

The car and the phone are connected via USB cable, which has two key advantages: bandwidth and power supply. For efficiency reasons, calls continue to be transmitted to car speakers via Bluetooth.

Once connected, the app handles five major functionalities, using the car’s display for all of them: maps, music, communication (calls and texts), voice actions, and related apps. Apps that can be integrated include Google Play Music, Google Now, Hangouts, and Skype.

All the processing is done by the mobile device itself, with the car’s touchscreen working as an extension. This approach places a significant load on the phone’s battery, which is why Google recommends fast-charging USB ports on vehicles that use Android Auto.

With Android Auto, a driver’s mobile device will have access to several of the car’s sensors and inputs: GPS and high-quality GPS antennas, steering-wheel-mounted buttons, sound system, directional speakers, directional microphones, wheel speed, compass, and mobile antennas.

Android Auto software components

The two main software components are the Android Auto app on the smartphone or tablet and the Google Receiver Library, hosted on the car’s computing unit. The Android Auto app is available in the Google Play Store in 31 countries – see the official list. The car-side component relies on the integration of the Google Receiver Library (GRL) with the car’s software platform. The GRL comes as a bundle of C++ software libraries that Google offers under NDA to its automotive partners only, and it works on Android, Linux, and Windows CE platforms.

The purpose of the libraries is to ensure the connection with the Android Auto app on the smartphone and to manage callbacks triggered by various user-generated events (such as a tap on the screen), or by software-generated events (such as stopping music when receiving a phone call).
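The callback model described above can be pictured as a small event dispatcher. The sketch below is purely illustrative: the real GRL is a closed C++ API available only under NDA, so every class and method name here is hypothetical.

```python
# Illustrative sketch of the event-dispatch pattern described above.
# The real Google Receiver Library is a closed C++ API; all names
# in this example are hypothetical.

class ReceiverEventDispatcher:
    def __init__(self):
        self._handlers = {}  # event name -> list of registered callbacks

    def on(self, event, callback):
        """Register a callback for a user- or software-generated event."""
        self._handlers.setdefault(event, []).append(callback)

    def emit(self, event, **payload):
        """Deliver an event (e.g. a screen tap, an incoming call)."""
        for callback in self._handlers.get(event, []):
            callback(**payload)


dispatcher = ReceiverEventDispatcher()

# User-generated event: a tap on the head unit's touchscreen.
dispatcher.on("touch", lambda x, y: print(f"tap at ({x}, {y})"))

# Software-generated event: pause music when a call comes in.
dispatcher.on("incoming_call", lambda number: print("pausing music for", number))

dispatcher.emit("touch", x=120, y=80)
dispatcher.emit("incoming_call", number="+40-21-555-0100")
```

The point is only the shape of the control flow: the car-side library owns the event loop, and the integrator registers handlers for touch input and for software events such as call interruptions.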

Android Auto and the safety approach

While extending Android functionalities to the car’s dashboard sounds appealing, there are several safety matters to consider. To avoid driver distraction, Android Auto does not allow video streaming, the use of custom apps, or manual texting. However, the speech-to-text function is a capable replacement for texting while driving.

Moreover, the interface is limited to 5 layers of depth – meaning that drivers don’t have to navigate tricky menus – and scrolling is limited to only two swipes.

So, when choosing a song from your alphabetical playlist, AC/DC will stand a better chance than ZZ Top, and Annie will be easier to call than Serena if you do it manually. To reach the bottom of the list you will need to use voice commands, which are triggered by saying “OK Google” out loud.

If, for some reason, you don’t want to do that, the other option is to slow down to below 5 km/h.

Future developments

In the future, Android Auto will connect to other car data as well. The app will be then able to report low tire pressure, for instance, or the right time for an oil change.

Another feature Google is working on is the replacement of the USB connection with a WiFi communication channel, which is fast and reliable. Why not use the already ubiquitous Bluetooth? It does not have the speed, bandwidth, and reliability that WiFi can provide.

To take things even further, the next step will be to have Android Auto built into the car, rather than running as a phone app. Take a look at the prototype presented by Google last year: it is based on a Snapdragon 820 processor installed in a Maserati.

We are living in an age of transformation for UI, with a significant impact on UX. This will have huge implications for our everyday life.

But first, to better understand why, we need to define the terms.

UX (USER EXPERIENCE): Everything a user does and feels when using a product or service to fulfill a need. Good UX, then, is about solving problems efficiently and pleasantly.

UI (USER INTERFACE): The communication channel between the user and the product or service. For many years, that meant screens and everything related: dashboards, layout, design, touching, swiping, and more.

Today, using a display to interact with technology is so commonplace that even the most low-tech experiences are getting a screen for basic tasks. Think of fridges, trash bins, or even hotel lobbies.

But as companies are looking for better ways to interact with users, alternatives to screens are emerging fast. The advent of AI and machine learning is further accelerating the change.

So what is the future of UX?

“The best interface is no interface”

“The best interface is no interface,” author and designer Golden Krishna has been saying for several years. He said it again at UX London this summer, an event also attended by Tremend’s UX/UI specialists. Krishna derided the easy solution of attaching a screen to anything just to make it “better”. Instead, he said, UX works best as automation.

A good example is the way Nissan and Toyota solved the problem of cars that overheat while parked in the sun. Nissan launched an app that allowed car owners to remotely open the vehicle’s sunroof to vent the heat accumulated inside.

Toyota, by contrast, used a temperature sensor to automatically crack the car’s sunroof open and start a few small fans whenever it became too hot inside, adding a few solar cells to power the fans. The result was a far better user experience, based on automation and no screens. Displays, Krishna said, can stay in the background as a backup in case the automation fails.

But other voices in the industry are less radical when dealing with screens.

Temporary UX environments

Also at UX London, Mark Rolston, co-founder and Chief Creative Officer of argodesign, showcased a way to integrate the user interface into household objects.

The solution is based on bulb-like modules that project images and have integrated cameras. They can interpret hand gestures and can include mundane objects into the virtual interface.

No need to use a tablet for cooking anymore.

[Image: cooking with UX]

Taking the concept further, the prototype can also serve as a secondary display for a computer.

[Image: secondary computing display]

Or it can create a temporary interface at a bar, helping customers place orders as they shove beer jugs around. Again, no need for tablets.

[Image: ordering at a bar]

Mixing the analog and the digital experience

For some users, screens can be too much, too late. This is the case for elderly users, who may have difficulty navigating today’s screen interfaces.

Also at UX London, Adrian Westaway, co-founder of Special Projects, showcased a way to improve the experience of seniors using a smartphone for the first time: the smartphone is embedded within a classic printed user manual, with great results.

[Image: mixing the analog and the digital]

Presentations we’ve seen at UX London and our own projects for over 60 million end-users enable us to outline a few emerging trends.

FEWER SCREEN INTERACTIONS

As personal devices and home appliances become smarter, more of the things we do will need fewer screen interactions. Voice interaction, aided by AI-driven improvements in speech recognition, is rising fast as the first viable alternative.

ALTERNATIVE VISUAL INTERFACE

However, screens are not disappearing, especially when it comes to content consumption and professional use. But they will morph and find new ways to “express themselves”, such as interfaces projected on a table, in eyeglasses, or even in contact lenses.

ALTERNATIVE (e.g. BODY) LANGUAGE INCLUDED IN MAN-MACHINE COMMUNICATION

The trend towards natural interaction will become stronger. We already talk to computers and receive verbal answers. Computers are now learning to also watch our non-verbal communication and visual cues. After all, it is important for the machine to only take commands from you, not from the TV (true story – see the Whopper ad case).

Both UX and UI play a major part in our software development expertise, from eCommerce to user management and in-car infotainment (IVI) projects.

Tremend delivers full solutions ranging from mobile applications, online stores, and complex banking software to embedded software for the automotive sector.

For over 11 years we have developed Internet of Things solutions, e-commerce platforms, enterprise systems, embedded software, CRM, CMS, ERP, and custom software.

Contact us at hello@tremend.ro for support in developing your own software projects.

We have recently explored how to connect – wirelessly – to embedded devices scattered across factories and warehouses. Windows Embedded Handheld (the successor to Windows Mobile 6.5) is a typical choice of OS for such devices, and most of them have Bluetooth and WiFi on board – so these technologies are obvious choices. What they don’t have is a SIM chip in every device and a data plan. Their users have laptops (PCs and Macs), iPads, iPhones, Android phones, and tablets, and can physically walk into the room where the devices are located.

Here’s what we found.

WiFi Infrastructure mode + DHCP

This is the obvious choice, working with all platforms (Windows 7 PC, Mac OS X, iOS, Android). You need a wireless router in infrastructure mode, assigning addresses via DHCP. The wireless router must be in range of both the device and the PC (one obvious option is to plug it in near the device itself).

Once both the device and the PC are in the same LAN, the device can broadcast information about itself via some discovery protocol: UPnP, Bonjour, or a simple custom UDP-based discovery mechanism. It is easy to write a client that uses this information on all platforms.
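A minimal custom UDP-based discovery mechanism can be sketched as below, in Python. The port number and the JSON message format are arbitrary choices for this illustration: the device broadcasts a small announcement, and any client on the same LAN can listen for it.

```python
import json
import socket

DISCOVERY_PORT = 50000  # arbitrary port chosen for this example


def announce(name, service_port, dest="255.255.255.255"):
    """Broadcast a small JSON datagram describing this device."""
    msg = json.dumps({"name": name, "port": service_port}).encode()
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(msg, (dest, DISCOVERY_PORT))
    s.close()


def listen_once(timeout=5.0):
    """Wait for one announcement and return the parsed device info."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", DISCOVERY_PORT))
    s.settimeout(timeout)
    data, addr = s.recvfrom(1024)
    s.close()
    info = json.loads(data.decode())
    info["address"] = addr[0]  # remember where the device announced from
    return info
```

A real deployment would announce periodically and have clients collect responses for a few seconds rather than stop at the first one, but the shape of the protocol stays this simple.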

Mobile Data service

Same as above, but use a MiFi-like gadget to connect the device to a 3G network. Tape it to the device. Write software on the device that connects to a server – and use that server to authenticate and do NAT traversal.

Bluetooth DUN might work as well instead of WiFi to connect the device with a 3G modem.

WiFi Ad-Hoc mode

Windows Mobile 6 devices can set up WiFi networks in Ad-Hoc mode. Clients on Windows 7, OSX, and iOS can then connect to this ad-hoc network.

On iOS, it’s not possible to do it programmatically, and there is some user interaction needed in the iOS Settings – the user should manually choose the network to connect to.

As far as I know, Android devices cannot connect to WiFi Ad-Hoc networks.

WiFi Direct

This is basically a Bluetooth-style discovery and service presentation added on top of WiFi. It’s only supported by Android 4+ and Windows Phone 8. It’s rumored to be introduced by Apple in their next products.

The Windows Mobile 6 devices do not support WiFi Direct – but perhaps this could be a choice for the future.

Bluetooth PAN

This is ad-hoc TCP/IP over Bluetooth. It works well with both Windows Mobile 6 and Windows 7; some manual configuration might be needed. It also works with Mac OS X.

PAN is not supported by stock Android (without root access).

On iOS, PAN is used as an underlying protocol for tethering, or for the Bluetooth GameKit framework. The latter allows two iOS apps to discover each other via an Apple proprietary protocol / Bonjour / TCP / PAN / Bluetooth stack. However, you cannot implement that on the WinMobile 6 end of the connection.

Keep in mind the limitations of Bluetooth range (30-100 m) and data rate (2 Mbps).

Bluetooth 2

You can write your own custom protocol on top of RFCOMM on Bluetooth. The simplest would be to run HTTP directly over RFCOMM / Bluetooth instead of over TCP.

However, you need to write custom software for the PC, Mac, and Android in order to proxy the HTTP requests over this protocol – so a plain web browser won’t work anymore. It does not work with iOS.
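The proxy essentially re-frames ordinary HTTP requests as messages on the serial RFCOMM byte stream. Here is a sketch of that framing, assuming a simple length-prefixed format (an arbitrary choice for illustration); the platform-specific Bluetooth socket setup is omitted.

```python
import struct


def frame_request(raw_http: bytes) -> bytes:
    """Length-prefix an HTTP request so it can be sent as one
    complete message over a serial RFCOMM byte stream."""
    return struct.pack(">I", len(raw_http)) + raw_http


def read_framed(stream) -> bytes:
    """Read one length-prefixed message back from a file-like stream
    (in practice, the RFCOMM socket on the other end)."""
    header = stream.read(4)
    (length,) = struct.unpack(">I", header)
    return stream.read(length)
```

On the Windows Mobile 6 side, the same framing would be read off the RFCOMM connection, handed to the embedded HTTP server, and the response written back the same way.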

Bluetooth 4 Low Energy

This is low power, low range (<50 m), and low data rate (200 kbps). You can try to create a service that proxies HTTP, as above.

It is supported by the iPhone 5. It is also supported by a few Android devices (like the Samsung Galaxy S3 and S4), but only via proprietary SDKs – mainstream support is still being added.

It is also supported by a few newer PCs and Macs. These SDKs are also proprietary, with no global standard.

Some WinMobile 6 devices support BT 4 LE, but it’s not something widespread at the moment.

Other approaches

A framework pioneered by Qualcomm, https://www.alljoyn.org/, tries to abstract away the underlying mechanism for peer-to-peer communication between devices. Programmers can use it to ignore the low-level details of connectivity; the framework will use all the technologies known and available on a given platform (WiFi and Bluetooth on Android, WiFi on iOS). It’s not fully mature yet, however.

Green Electronics LLC needed a partner with the expertise to help them develop the software side of their IoT product, RainMachine, a smart WiFi irrigation controller that can be managed remotely from a phone, tablet, or desktop browser. Along with native iOS and Android mobile applications, Tremend created the firmware and server software for the forecast-based sprinkler. Forecasting seven days in advance and using real-time temperature, wind, and rainfall data, the RainMachine dynamically adjusts the sprinkler schedule, improving watering efficiency and thus dramatically reducing water waste.

Based on our experience with streaming on mobile, Bogdan put together some slides and presented them at Dev World Bucharest 2012.

The slides cover formats, codecs, and delivery methods on iPhone, Android, and PhoneGap platforms.

Get the slides here.


Mobile Smart Streaming
