Thursday, 28 August 2014

A need for intelligent choices from the internet of things

All that data can provide valuable insights, but it demands a selective approach

There’s a widespread appreciation in business of what the internet of things (IoT) is all about, but I suspect that a lot of companies are still deterred from getting to grips with the phenomenon by its sheer enormity.

ABI Research has provided the latest indication of the scale of the IoT, forecasting 20% growth in the number of wireless connected devices to 16 billion this year, and a rise to 40 billion by 2020. The data that will flow from all those smartphones, sensors, TVs, wearables and connected household appliances will be a major asset for any organisation able to use it, but also overwhelming in its scale.

So far only a minority of organisations have started to use the data in a big way – analytics is still widely seen as a complex, costly business that only the big players can afford – but it will become more cost-effective as the skills base spreads and specialists step up their offerings of analytics as a service. And as it all becomes more familiar, a growing number of companies will begin to see what they can learn from all those devices.

Some will be tempted to grab data from as many streams as possible and throw everything into an analytics mix in search of business insights. But is that going to give them what they need? There’s a danger that data from too many sources – and ‘many’ is what the IoT is all about – can provide ‘insights’ that are over-complicated and lacking the clarity that a business needs.

It’s a danger especially for those that use analytics as a service, bringing in outsiders with data analysis and data science skills but a limited understanding of the individual business. Maybe the best of them will be able to help identify the key data streams for analysis, but I suspect that many will offer a service that is about crunching the data rather than identifying it, and that needs tailoring by the customer rather than the provider.

This is why business leaders need to think for themselves about the first steps to harnessing the IoT. They should know their business aims and what lessons they need to learn, and in turn have a good grasp of the data that’s going to give them the really valuable insights. When they take that first step they will be ready to bring in the analytics specialists.
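As a minimal sketch of what that selective approach might look like in practice – the business questions, stream names and selection logic here are all hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: only the IoT streams that map to a stated business
# question are passed on to analytics; everything else stays out of the mix.
BUSINESS_QUESTIONS = {
    "why are deliveries late?": ["fleet_gps", "warehouse_scans"],
    "which stores underperform?": ["pos_terminals", "footfall_sensors"],
}

def select_streams(question: str, available: set) -> list:
    """Return only the streams relevant to the question being asked."""
    wanted = BUSINESS_QUESTIONS.get(question, [])
    return [s for s in wanted if s in available]

available = {"fleet_gps", "warehouse_scans", "pos_terminals",
             "smart_tv_telemetry", "wearable_pings"}
print(select_streams("why are deliveries late?", available))
# ['fleet_gps', 'warehouse_scans']
```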

It’s also unlikely that they will need the same data all the time. Markets change, new factors come into play and new insights will be needed. This is going to require different streams of data, and again it is the business leaders who should take the lead in making the choices.

The growth of the IoT and the explosion of data promise riches for business, but those that reap the full benefits are likely to pick and use data as needed rather than grab it wholesale.

Mark Say is a UK-based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Monday, 18 August 2014

Payment wristbands have to be cool to succeed

bPay and others are more likely to win customers by playing up style as much as function

My first thought on reading that Barclaycard is launching a payment wristband was that the chances of people actually wanting them were quite remote.

Barclays’ credit card business is pushing bPay as a convenient method for small payments from a prepaid account that can be topped up from any Visa or MasterCard. It involves waving the wristband over any terminal with a contactless payment symbol to buy anything worth up to £20.
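The logic behind such a scheme is simple enough to sketch. What follows is a hypothetical illustration of the prepaid model as described, not Barclaycard’s actual implementation:

```python
# Hypothetical sketch of a prepaid contactless account with a per-payment cap.
# bPay's real system will differ; names and numbers follow the description above.
CONTACTLESS_LIMIT = 20.00  # maximum single contactless payment, in pounds

class PrepaidBand:
    def __init__(self, balance: float = 0.0):
        self.balance = balance

    def top_up(self, amount: float) -> None:
        """Add funds from a linked Visa or MasterCard (charging it is out of scope)."""
        self.balance += amount

    def pay(self, amount: float) -> bool:
        """Approve a tap only if it is within the cap and the balance covers it."""
        if amount > CONTACTLESS_LIMIT or amount > self.balance:
            return False
        self.balance -= amount
        return True

band = PrepaidBand()
band.top_up(30.00)
print(band.pay(4.50))   # True - small payment approved
print(band.pay(25.00))  # False - over the £20 contactless limit
```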

Yes, it’s convenient, but so are the contactless cards that are becoming more common in the UK, and you can use the same near field communication technology in smartphones and smartwatches. I suspect that a purpose-made payment wristband won’t win any popularity contests against any of these options.

Compared with the card it might be easier to use, but it’s also more visible. A lot of people won’t want to be readily identified as a customer of a particular company, especially one of the big banks, and will feel more comfortable with the anonymity offered by a card tucked into a wallet or purse. A card out of sight is also less tempting to thieves.

Against the phone and the watch it doesn’t provide the bundle of functions that attract users, and it won’t stir the excitement that gadget fans get from the latest device with the right brand name. Payment wristbands just won’t be cool.

Then came my second thought, spinning off the fact that a lot of buyers regard phones and wearable devices as fashion accessories. The design and the brand name are often as prominent in their minds as the functions of a device, and if someone can tap into that attitude with payment wristbands they might be able to carve out a share of the market.

I can see bPay or other providers of payment wristbands doing something to make them desirable for reasons that have nothing to do with their function. They can hook up with designers of jewellery and fashion accessories, get their marketing teams focused on younger consumers, and set their ad agencies creating the type of campaigns that are as much about what the wristbands can do for the user’s image as about their utility.

It’s all about branding and making the product stand for something other than what it actually does. You might say that an intelligent consumer doesn’t buy into that stuff, but there are plenty of markets in which it works, which says something about how many unintelligent consumers are out there.

Will bPay or other providers pull it off if they take this course? Maybe, maybe not, but younger consumers often go for a product on style rather than substance. It would give the providers a chance, and I don’t see the wristbands taking off purely on what they can actually do.


Mark Say is a UK-based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Thursday, 7 August 2014

Robotics, AI and the worldwide job cull

New technologies are going to destroy jobs, and there’s no promise they will create enough new ones to fill the gap

Do you think a computer could do your job? It’s a question that people have been asking for at least 25 years, and it’s becoming more intense with the advance of robotics and artificial intelligence (AI). And the uncomfortable truth is that the answer for a growing number is ‘yes’.

Technology has been knocking people out of work for a couple of centuries, and as it develops ever more quickly the trend is going to continue. So far it’s been alleviated in industrial economies by the creation of new jobs, but the big question is whether this can continue as robotics and AI automates more tasks previously dependent on the human brain.

A new report from Pew Research, AI, Robotics and the Future of Jobs, indicates that there isn’t a consensus. A survey of almost 1,900 experts produced close to an even split between the optimists and pessimists, with 52% expecting that technology will create as many jobs as it displaces by 2025 and 48% forecasting that it won’t do so. Unsurprisingly, a lot of the latter group are worried about big increases in income inequality, mass unemployment and breakdowns in social order.

It’s hard to feel positive about blue-collar jobs, and the more routine white-collar occupations. Robotics is extending machines’ capacity for manual tasks, and AI promises (or threatens, depending on where you stand) to do the same for a lot of jobs that involve the routine processing of information. Also, the ability of cognitive systems to process vast quantities of data at high speed is impinging on areas currently regarded as the province of professionals, such as healthcare diagnoses and financial trading (a subject I covered in a white paper for the UK’s Chartered Institute for IT).

I’m not going to predict whether the new technology will create enough jobs to replace those it knocks out. I lean towards the pessimists’ view, but that’s the result of a mild scepticism rather than any strong evidence. But the Pew Research report has prompted a couple of thoughts about the future of technology and job creation.

One is that developed economies rely increasingly on jobs that could be described as non-essential. The label applies to big chunks of the media, marketing and retail, and to the manufacture of consumer goods that are seldom used – providing services that the recipients like, but could easily do without. I suspect that these jobs are close to their limit; society can’t consume any more, however inventive the ad men become at creating demand. There will be fewer new ones to fill the gap as more of the essential jobs become the province of robotics and AI.

The other is to do with how far AI will be allowed to penetrate the professions or top-end management roles. There is a realistic argument that an educated human judgement is necessary for many decisions, especially when there’s an ethical element involved. Cognitive computing can be used for high-level decision support, but the ultimate responsibility should remain with a human. Those humans form elites, and elites tend to be very good at protecting their own interests.

They’ll want rigid boundaries in place to keep themselves in those top-level roles, and a culture that emphasises the primacy of the human mind in their fields. They may be right, they may be wrong, but there are going to be a lot of roles for which the limits are not clear, and professions that will become battlefields.

Of course there’s another possibility: that as technology takes over more jobs, those that remain are spread more evenly, so we’ve all got more leisure time. But that was predicted fifty years ago; it hasn’t worked out that way since and, given the prevailing dynamics, it’s not likely to happen in the foreseeable future.

The advance of robotics and AI is inevitable, and in the long term it could well do more good than harm; but in the next two, three, four decades the disruption they cause won’t be a pretty thing to watch.

Mark Say is a UK-based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Friday, 25 July 2014

CIOs should top CMOs on digital strategy

One feature of the conversation about CIOs in recent months has been about where they stand in regard to chief marketing officers (CMOs). It’s an acknowledgement that one of the prime functions of an organisation’s information strategy is to support its marketing, and there have been suggestions that the CIO should be regarded primarily as part of the CMO’s team.

Accenture has thrown its voice into the debate with the publication of a report, Cutting across the CMO-CIO divide, which it says reflects a sea change as more CIOs put marketing at the top of their agendas. Its core message is that, while the two understand the need to collaborate, they don’t get on over a number of issues.

For example, a lot of CMOs think that IT teams don’t get the need for urgency in integrating new data sources into campaigns as required, and that technology development is too slow for digital marketing. CIOs complain about shifting goalposts and marketing’s lack of vision in anticipating new digital channels.

All this is no big surprise. Conflicting agendas are part of daily life in the boardroom, and it becomes more fraught when technology is involved, as it advances so quickly and the two sides have a different focus. It can also be complicated by issues around data regulation; marketing teams see the opportunities in acquiring and squeezing customer data, while CIOs are aware of the legal limitations and know any transgressions will place them in the firing line.

It shouldn’t be impossible to overcome these tensions; after all, the teams are led by highly paid people who are all meant to have an understanding of the whole business. But it might need a stronger consensus over who is in overall charge of digital issues: who has the final say and is ultimately responsible for any failures.

The CIO is the obvious choice, as information is the foundation of a digital strategy and the focus of his or her responsibility. CIOs spend more of their time on, and think more deeply about, the digital aspects of the business, and should be the prime source of expertise.

But in plenty of organisations that is going to stir up fresh tensions. You cannot stop CMOs and their teams from keeping a sharp eye on the digital opportunities in marketing and making a noise about wanting to grab them, even if they are unproved or could bring unwelcome consequences.

A merger between the two departments – an idea that is occasionally floated – could only come to grief. You’re looking at two groups of people with different mindsets: marketers who want to excite the customer, and information specialists with a more methodical outlook on making sure it all flows as it should. It’s right that, as Accenture suggests, there should be an organisational digital vision to underpin collaboration, but they will remain separate entities.

Solutions won’t come easily and this tension is likely to rumble on for some time. But if CIOs don’t obtain the ultimate authority over digital strategies it will seriously undermine what their role is all about.

Mark Say is a UK-based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Tuesday, 8 July 2014

A commercial opportunity in the copyright minefield

Copyright has always been a minefield, and it’s been made more hazardous by the way that sentiment over its place in the digital world has become more confused.

The rise of digital technology stoked up protectionist fears by making it too easy to copy, share or illegally re-sell content. But we’re now in an era when the ability to make something new out of existing content – moving from data mash-ups to app creation – is making copyright more of a hindrance in some eyes. Public authorities with an eye on the economic potential of the latter are feeling increasingly torn between protecting the original creators and giving the next wave the chance to show what they can do.

Neelie Kroes, the EU commissioner with the digital brief, has acknowledged the conundrum with a speech calling for copyright reform. Her language leaned towards worries that copyright is getting in the way of progress; she said the 2001 EU Copyright Directive isn’t fit for the 2010s and that there’s a risk of copyright becoming an irrelevance.

So there has to be reform. Fair enough, but what type of reform, and how is the EU going to make it all fit a landscape that keeps on changing? There are a hell of a lot of details to resolve and devils in all of them. Providing a legal framework that protects the original content creators yet still gives the re-use innovators a chance to succeed is going to be a difficult and highly contentious job.

The most obvious recent precedent, the EU Data Protection Regulation, has prompted plenty of observers to claim it is unworkable and could yet be mangled by the Council of Ministers. I suspect that copyright, an issue even closer to the lawyers’ hearts, is going to create even more dissent.

This doesn’t mean that the EU shouldn’t try to deal with the issue, but it is going to be a drawn-out process with a lot of grey areas. Those innovators are going to feel increasingly impatient, but also scared at the thought of being financially clobbered if they break the law.

I expect there will be some enterprising legal minds, or even non-legal entrepreneurs, ready to take advantage of this with services that promise a quick and easy way to clarify the legality of using specific content. If they offer a reliable service in checking the origins and licensing terms of specific content they can provide the reassurance that the innovators are looking for – at a price.

These services shouldn’t be particularly complicated, and will probably involve steps that a lot of people could take for themselves. But legal matters always seem very complicated to most of us, and they’ll find plenty of takers among the digital entrepreneurs who don’t want to get burned.
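At its simplest, such a service needn’t amount to much more than a structured lookup against a rights registry. A toy sketch – the registry contents, licence names and decision rules are all invented for illustration:

```python
# Toy sketch of a licence-clearance check against a rights registry.
# The registry and its licence terms are invented for illustration only.
RIGHTS_REGISTRY = {
    "photo-123": {"licence": "CC-BY", "commercial_reuse": True},
    "track-456": {"licence": "all-rights-reserved", "commercial_reuse": False},
}

def can_reuse(content_id: str, commercial: bool = True) -> str:
    """Give a plain-language answer on whether the content can be reused."""
    entry = RIGHTS_REGISTRY.get(content_id)
    if entry is None:
        return "unknown origin - seek legal advice"
    if commercial and not entry["commercial_reuse"]:
        return f"no - {entry['licence']} forbids commercial reuse"
    return f"yes - permitted under {entry['licence']}"

print(can_reuse("photo-123"))  # yes - permitted under CC-BY
print(can_reuse("track-456"))  # no - all-rights-reserved forbids commercial reuse
```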

There’s money in that minefield.

Mark Say is a UK-based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Monday, 30 June 2014

Augmented reality glasses face the privacy test

An adverse reaction to Google Glass is underway. It’s not just the anecdotes about it being banned from bars, restaurants and cinemas, or police dishing out traffic tickets: the UK Information Commissioner’s Office has blogged that it’s going to cause problems, and warned companies that if they use the technology they have to be careful about privacy and data protection.

Even some of the evangelists for Google Glass acknowledged early on that it would run into opposition, and as more people become aware of its camera function there are going to be a lot of tetchy responses to seeing it in public.

It’s important not to talk solely about Google Glass when it comes to augmented reality glasses. As I wrote a few months back in a white paper for the BCS, there are other companies making glasses that provide data to wearers to guide them through a task, and these have already found early adopters in fields from warehousing to healthcare. There is vast potential for supporting people in their work, and it would be a waste if it’s squashed by anxieties over privacy.

In fact, there are a lot of work environments in which it shouldn’t be an issue. Directing a worker in a warehouse or providing guidance for surgery takes place in a closed space where there is no threat of trawling for images through the camera on AR glasses. There’s no reason why this part of the market shouldn’t grow as the technology is refined.

But it won’t be possible to draw a clear line between a closed work environment and the public realm. Organisations are already making use of AR on tablets in fields such as civil engineering, architecture and retail, and there’s going to be a strong temptation to experiment with AR glasses if there’s a big advantage in keeping hands free.

It’s easy to imagine a scenario in which someone has a legitimate business use for AR glasses in a public space, receiving and feeding back information on the environment while making notes on another device, or operating a machine or vehicle. It’s just as easy to imagine someone else taking offence at their presence being recorded, especially on a device that can stream the image straight into a corporate data store.

Can you draw a clear line between the two? You can have arguments about what constitutes a public space, or about whether you can still violate someone’s privacy when they’ve entered an organisation’s space. And there are going to be legitimate business reasons, often around public services, for using AR glasses on the street, in a park, or in a place where crowds gather. Saying these are off limits would deprive planners and emergency service teams, among others, of a potentially valuable tool.

I don’t think there are clear answers to this, and it’s going to take time, more familiarity and a good few arguments before a consensus on acceptable usage and a clear legal line emerges. But the ICO has been right to tell organisations to at least begin thinking about what’s acceptable, and make it clear that any information gathered through AR glasses is subject to the same laws as any other data.

Meanwhile, I’ll be interested in how people react the first time I’m in a pub and someone walks in wearing Google Glass.

Mark Say is a UK-based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Tuesday, 3 June 2014

3D printing: the business of consumer creativity

The arrival of budget-priced 3D printers promises to give the technology a push into the home market, with models as cheap as £150 now affordable for most households.

But there are no signs of a clamour for the machines. If anything’s holding the market back at the moment it’s unfamiliarity and a sense among most people that they have no use for 3D printing. Most have heard of it, but to them it’s something for techies and people who like playing with gadgets, and they haven’t seriously thought about what they would do with a machine. And I’d bet that some of the creative types who have considered it have quickly retreated at the thought of having to get to grips with a complicated software package.

In the short term it could be a source of frustration for the manufacturers of 3D printers, but it points to an opportunity a little further ahead for companies that are ready to help consumers take their first steps into producing their own objects. Selling ready-to-print designs for products, with guidance on the raw materials to use, could provide a first step for nervous early adopters. But the real potential is in giving them the chance to stamp their own personalities on the process with customisable template designs.

It’s a half-way step that would hold consumers’ hands through getting to know the software while allowing space for them to show their creativity. It’s similar to enabling people to design their own websites using templates, and would add a ‘home-made’ dimension to some sizeable consumer markets, such as jewellery, clothing accessories, tableware, models and toys.
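To make the template idea concrete: a customisable design could be little more than a script that takes a handful of user choices and emits source for a 3D modeller. A hypothetical sketch, generating OpenSCAD source for a simple ring from three parameters (the template and dimensions are invented for illustration):

```python
# Hypothetical sketch of a customisable 3D-printing template: the user picks a
# few dimensions and the script emits OpenSCAD source ready to render and print.
RING_TEMPLATE = """\
// Customised ring: inner diameter {diameter}mm, band width {width}mm
difference() {{
    cylinder(h = {width}, d = {diameter} + 2 * {thickness}, $fn = 120);
    cylinder(h = {width}, d = {diameter}, $fn = 120);
}}
"""

def make_ring(diameter: float, width: float, thickness: float = 1.5) -> str:
    """Fill the template with the customer's chosen dimensions."""
    return RING_TEMPLATE.format(diameter=diameter, width=width, thickness=thickness)

# A customer picks their size; the output file opens directly in OpenSCAD.
with open("my_ring.scad", "w") as f:
    f.write(make_ring(diameter=18.0, width=6.0))
```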

Those early steps would familiarise people with the technology and help them get used to the idea of creating their own objects. Then it’s just a small step further to fully mastering the software and creating from scratch. That’s when the full potential of 3D printing in the home really takes off, as it becomes a tool for consumer creativity.

This is where some enterprising companies can plant an early stake in the market over the next couple of years. Even if the early adopters move on to doing their own thing others will follow, and the prospect of millions of children getting to grips with 3D printers provides the promise of a big market that will thrive in the long term.

Mark Say is a UK-based writer who covers the role of information management and technology in business. He has previously written a white paper on the future of 3D printing for the BCS. See www.marksay.co.uk