Friday, 12 September 2014

Data and the deskilling of business leaders

Maybe some top dogs play down data because of its implications for their status and salaries

Everyone at the top of UK business is taking data seriously, but maybe not as seriously as they should, according to a new report published by PwC.
The findings of Gut & gigabytes, written by the Economist Intelligence Unit, might not surprise everyone, but they indicate that there have been serious limitations on the move towards the data-driven business. In short, its survey shows that decision-makers in big companies take data analysis seriously, but it comes third in importance behind their own intuition and experience, and the advice and experience of others inside the company.
The report cites reasons for this that will be familiar. There’s scepticism about the quality, accuracy and completeness of data, with a sense that it hasn’t improved much in recent years. There’s also uncertainty about which data is really useful and a fear of getting lost in a deluge.
These are valid concerns, but I suspect there’s something that many C-suite leaders won’t acknowledge: they don’t like the idea of data having more value than what’s inside their heads.
Corporate business is dominated by high-level executives who play heavily on their personal capabilities, obtaining high status and massive salaries from a perception that they can provide outstanding insights and prowess in decision-making, way above the abilities of more ordinary souls. They’re paid for their exceptional minds. But if their minds begin to take second place to the lessons provided by data, they become less valuable.
An increasing emphasis on data analysis creates the potential for a partial deskilling of business leaders. If their companies really become data-driven organisations, all that personal expertise won’t seem so important, and they won’t seem so valuable to shareholders. I’m not suggesting that the whole C-suite structure is going to crumble, but there could be a shift in the balance that leads to a long-term reduction in status and salaries.
I suspect that some business leaders hold half-formed, unspoken thoughts about all this, and don’t like the idea of trusting too much in data. And they might not be in a hurry to find ways over the barriers that they have identified.
Cynical? I suppose so. But cynicism has always been one of the major forces in business.
Mark Say is a UK-based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Tuesday, 9 September 2014

Fleet managers might prefer driverless vehicles

Google Car could be seen as a step towards higher assurance and efficiency

In the few months since Google’s plans for a driverless car emerged, the public reaction has gone from a disbelieving chuckle to taking it seriously. Not that anyone expects to see thousands of them on the road in the next few months, or even the next few years, but there is a serious debate around the car’s capabilities and possible uses.

It has even prompted speculation that self-driving vehicles would have implications for insurers. A recent article in Insurance Journal pointed out that if a computer rather than a driver controls a car then any crashes would lead to claims against the product supplier rather than the driver. There may not even be a need for personal motor insurance.

If so, this could affect the thinking of companies that run or hire out vehicle fleets. One of the major elements of their planning is the risk management around careless driving, and they can never be 100% sure of the reliability and state of mind of any of their drivers. But, given a few years, self-driving technology could develop to the point where it would provide a higher level of assurance. Also, fleet operators could feel a lot better knowing they could claim from a multinational manufacturer in the event of an accident.

This would be a logical progression from the development of mobile apps to support fleet operators. There are now plenty on the market that enable them to assist and monitor drivers, and link to back-office logistics systems. They support control and integration, and there’s a case for removing the driver to reduce the uncertainties and streamline the process.
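
As a rough sketch of the kind of link these apps provide – every name and threshold below is my own illustration, not drawn from any real product – a driver-monitoring feed into a back-office system might look like this:

# Minimal sketch of a fleet telemetry feed for a back-office system.
# All field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TelemetryEvent:
    vehicle_id: str
    speed_mph: float
    speed_limit_mph: float
    harsh_braking: bool  # reported by the vehicle's sensors

def flag_for_review(event: TelemetryEvent) -> bool:
    """Flag events the back office should review as possible careless driving."""
    return event.harsh_braking or event.speed_mph > event.speed_limit_mph + 5

event = TelemetryEvent("VAN-042", speed_mph=38.0, speed_limit_mph=30.0, harsh_braking=False)
print(flag_for_review(event))  # True: 8 mph over the limit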

Of course there are some massive cultural barriers to overcome. A lot of people, fleet owners among them, are going to feel uneasy at the thought of trusting an array of sensors and internal processors to drive a vehicle. Existing drivers will not be happy. Even if they remain at the wheel to operate an emergency stop, they will see a future in which their role is downgraded and lower paid. And the larger the vehicle, the more it will spark public anxieties and the more resistance there will be.

But removing that risk of human error, of which we’re all too aware, will still provide a lure for fleet operators to make the switch. Those running smaller vehicles, such as delivery vans, car hire and taxi services, could find self-driving cars especially attractive. Give it a few years and the driverless car could be a significant element of plenty of business operations.

Mark Say is a UK-based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Thursday, 28 August 2014

A need for intelligent choices from the internet of things

All that data can provide valuable insights, but it demands a selective approach

There’s a widespread appreciation in business of what the internet of things (IoT) is all about, but I suspect that a lot of companies are still deterred from getting to grips with the phenomenon by its sheer enormity.

ABI Research has provided the latest indication of the scale of the IoT, with a forecast of 20% growth in the number of wireless connected devices to 16 billion this year, and a rise to 40 billion by 2020. The data that will flow from all those smartphones, sensors, TVs, wearables and connected household appliances will be a major asset for any organisation able to use it, but also overwhelming in its scale.
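
As a back-of-the-envelope check on those figures – this calculation is mine, not part of the ABI report – the implied growth rate from 16 billion devices this year to 40 billion in 2020 works out as follows:

# Implied compound annual growth from the ABI Research forecast:
# 16 billion connected devices in 2014, 40 billion by 2020.
devices_2014 = 16e9
devices_2020 = 40e9
years = 2020 - 2014

cagr = (devices_2020 / devices_2014) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")  # roughly 16.5% a year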

So far only a minority of organisations have started to use the data in a big way – analytics is still widely seen as a complex, costly business that only the big players can afford – but it will become more cost-effective as the skills base spreads and specialists step up their offerings of analytics as a service. And as it all becomes more familiar, a growing number of companies will begin to see what they can learn from all those devices.

Some will be tempted to grab data from as many streams as possible and throw everything into an analytics mix in search of business insights. But is that going to give them what they need? There’s a danger that data from too many sources – and ‘many’ is what the IoT is all about – can provide ‘insights’ that are over-complicated and lacking the clarity that a business needs.

It’s a danger especially for those that use analytics as a service, bringing in outsiders with data analysis and data science skills but a limited understanding of the individual business. Maybe the best of them will be able to help identify the key data streams for analysis, but I suspect that many will offer a service that is about crunching rather than identifying the data, and needs tailoring by the customer rather than the provider.

This is why business leaders need to think for themselves about the first steps to harnessing the IoT. They should know their business aims and what lessons they need to learn, and in turn have a good grasp of the data that’s going to give them the really valuable insights. When they take that first step they will be ready to bring in the analytics specialists.
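
To make the point concrete, here is a minimal sketch of that selective approach – the stream names and tags are entirely hypothetical:

# Hypothetical sketch: pick only the IoT data streams relevant to a
# stated business question, rather than throwing everything into the mix.
available_streams = {
    "store_footfall_sensors": {"retail", "footfall"},
    "delivery_van_telematics": {"logistics", "fuel"},
    "smart_meter_readings": {"energy", "cost"},
    "wearable_fitness_data": {"health", "consumer"},
}

def select_streams(question_tags: set) -> list:
    """Return only the streams whose tags overlap the question at hand."""
    return [name for name, tags in available_streams.items()
            if tags & question_tags]

# A retailer asking about in-store demand would need just one stream:
print(select_streams({"retail", "footfall"}))  # ['store_footfall_sensors']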

It’s also unlikely that they will need the same data all the time. Markets change, new factors come into play and new insights will be needed. This is going to require different streams of data and again it is the business leaders who should take the lead in making the choices.

The growth of the IoT and the explosion of data are going to promise riches for business, but those that reap the full benefits are likely to pick and use data as needed rather than grab it wholesale.

Mark Say is a UK-based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Monday, 18 August 2014

Payment wristbands have to be cool to succeed

bPay and others are more likely to win customers by playing up style as much as function

My first thought on reading that Barclaycard is launching a payment wristband was that the chances of people actually wanting them were quite remote.

Barclays’ credit card business is pushing bPay as a convenient method for small payments from a prepaid account that can be topped up from any Visa or MasterCard. It involves waving the wristband over any terminal with a contactless payment symbol to buy anything worth up to £20.
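
The mechanics as described are simple enough to sketch – a prepaid balance, top-ups from a linked card, and a cap on each contactless payment. This is my own simplification, not Barclaycard’s implementation:

# Simplified sketch of the prepaid model described above; not
# Barclaycard's actual implementation.
CONTACTLESS_LIMIT_GBP = 20.00

class PrepaidWristband:
    def __init__(self):
        self.balance = 0.0

    def top_up(self, amount):
        """Top up the prepaid account from a linked Visa or MasterCard."""
        self.balance += amount

    def pay(self, amount):
        """Contactless payment: capped at £20 and limited by the balance."""
        if amount > CONTACTLESS_LIMIT_GBP or amount > self.balance:
            return False
        self.balance -= amount
        return True

band = PrepaidWristband()
band.top_up(30.00)
print(band.pay(4.50))   # True: a small payment goes through
print(band.pay(25.00))  # False: over the £20 contactless limit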

Yes, it’s convenient, but so are the contactless cards that are becoming more common in the UK, and the same near field communication technology can be used in smartphones and smartwatches. I suspect that a purpose-made payment wristband won’t win any popularity contests against any of these options.

Compared with the card it might be easier to use, but it’s also more visible. A lot of people won’t like to be readily identified as a customer of a particular company, especially one of the big banks, and will feel more comfortable with the anonymity offered by a card tucked into a wallet or purse. A card is also going to be less tempting to thieves.

Against the phone and the watch it doesn’t provide the bundle of functions that attract users, and it won’t stir the kind of excitement that gadget fans get from the latest device with the right brand name. Payment wristbands just won’t be cool.

Then came my second thought, spinning off the fact that a lot of buyers regard phones and wearable devices as fashion accessories. The design and the brand name are often as prominent in their minds as the functions of a device, and if someone can tap into that attitude with payment wristbands they might be able to carve out a share of the market.

I can see bPay or other providers of payment wristbands doing something to make them desirable for reasons that have nothing to do with their function. They can hook up with designers of jewellery and fashion accessories, get their marketing teams focused on younger consumers, and have their ad agencies create the type of campaigns that are as much about what the wristbands can do for the user’s image as about their utility.

It’s all about branding and making the product stand for something other than what it actually does. You might say that an intelligent consumer doesn’t buy into that stuff, but there are plenty of markets in which it works, which says something about how many unintelligent consumers are out there.

Will bPay or other providers pull it off if they take this course? Maybe, maybe not, but younger consumers often go for a product on style rather than substance. It would give the providers a chance, and I don’t see the wristbands taking off purely on what they can actually do.


Mark Say is a UK-based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Thursday, 7 August 2014

Robotics, AI and the worldwide job cull

New technologies are going to destroy jobs, and there’s no promise they will create enough new ones to fill the gap

Do you think a computer could do your job? It’s a question that people have been asking for at least 25 years, and it’s becoming more intense with the advance of robotics and artificial intelligence (AI). And the uncomfortable truth is that the answer for a growing number is ‘yes’.

Technology has been knocking people out of work for a couple of centuries, and as it develops ever more quickly the trend is going to continue. So far the effect has been alleviated in industrial economies by the creation of new jobs, but the big question is whether this can continue as robotics and AI automate more tasks previously dependent on the human brain.

A new report from Pew Research, AI, Robotics and the Future of Jobs, indicates that there isn’t a consensus. A survey of almost 1,900 experts produced close to an even split between the optimists and pessimists, with 52% expecting that technology will create as many jobs as it displaces by 2025 and 48% forecasting that it won’t do so. Unsurprisingly, a lot of the latter group are worried about big increases in income inequality, mass unemployment and breakdowns in social order.

It’s hard to feel positive about blue-collar jobs, and the more routine white-collar occupations. Robotics is extending machines’ capacity for manual tasks, and AI promises (or threatens, depending on where you stand) to do the same for a lot of jobs that involve the routine processing of information. Also, the ability of cognitive systems to process vast quantities of data at high speed is impinging on areas, such as healthcare diagnoses and financial trading, currently regarded as the province of professionals (a subject I covered in a white paper for the UK’s Chartered Institute for IT).

I’m not going to predict whether the new technology will create enough jobs to replace those it knocks out. I lean towards the pessimists’ view, but that’s the result of a mild scepticism rather than any strong evidence. But the Pew Research report has prompted a couple of thoughts about the future of technology and job creation.

One is that developed economies rely increasingly on jobs that could be described as non-essential. The label applies to big chunks of the media, marketing and retail, and to the manufacture of consumer goods that are seldom used – providing things that the recipients like but could easily do without. I suspect that these jobs are close to their limit; society can’t consume any more, however inventive the ad men become at creating demand. There will be fewer new ones to fill the gap as more of the essential jobs become the province of robotics and AI.

The other is to do with how far AI will be allowed to penetrate the professions or top-end management roles. There is a realistic argument that an educated human judgement is necessary for many decisions, especially when there’s an ethical element involved. Cognitive computing can be used for high-level decision support, but the ultimate responsibility should remain with a human. Those humans form elites, and elites tend to be very good at protecting their own interests.

They’ll want rigid boundaries in place to keep themselves in those top-level roles, and a culture that emphasises the primacy of the human mind in their fields. They may be right, they may be wrong, but there are going to be a lot of roles for which the limits are not clear, and professions that will become battlefields.

Of course there’s another possibility: that as technology takes over more jobs those that remain are spread more evenly, so we’ve all got more leisure time. But that was predicted fifty years ago, it hasn’t worked out that way since and, given the prevailing dynamics, it’s not likely to happen in the foreseeable future.

The advance of robotics and AI is inevitable, and in the long term it could well do more good than harm; but in the next two, three, four decades the disruption they cause won’t be a pretty thing to watch.

Mark Say is a UK-based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Friday, 25 July 2014

CIOs should top CMOs on digital strategy

One feature of the conversation about CIOs in recent months has been where they stand in regard to chief marketing officers (CMOs). It’s an acknowledgement that one of the prime functions of an organisation’s information strategy is to support its marketing, and there have been suggestions that the CIO should be regarded primarily as part of the CMO’s team.
Accenture has thrown its voice into the debate with the publication of a report, Cutting across the CMO-CIO divide, which it says reflects a sea change as more CIOs put marketing at the top of their agendas. Its core message is that, while the two sides understand the need to collaborate, they don’t get on over a number of issues.
For example, a lot of CMOs think that IT teams don’t get the need for urgency in integrating new data sources into campaigns as required, and that technology development is too slow for digital marketing. CIOs complain about shifting goalposts and marketing’s lack of vision in anticipating new digital channels.
All this is no big surprise. Conflicting agendas are part of daily life in the boardroom, and it becomes more fraught when technology is involved as it advances so quickly and the two sides have a different focus. It can also be complicated by issues around data regulation; marketing teams see the opportunities in acquiring and squeezing customer data, while CIOs are aware of the legal limitations and know any transgressions will place them in the firing line.
It shouldn’t be impossible to overcome these tensions; after all, the teams are led by highly paid people who are all meant to have an understanding of the whole business. But it might need a stronger consensus over who is in overall charge of digital issues: who has the final say and is ultimately responsible for any failures.
The CIO is the obvious choice, as information is the foundation of a digital strategy and the focus of his or her responsibility. They spend more of their time and think more deeply about the digital aspects of the business, and should be the prime source of expertise.
But in plenty of organisations that is going to stir up fresh tensions. You cannot stop CMOs and their teams from keeping a sharp eye on the digital opportunities in marketing and making a noise over wanting to grab them, even if they are unproved or could bring unwelcome consequences.
A merger between the two departments – an idea that is occasionally floated – could only come to grief. You’re looking at two groups of people with different mindsets: marketers who want to excite the customer, and information specialists with a more methodical outlook on making sure it all flows as it should. It’s right that, as Accenture suggests, there should be an organisational digital vision to underpin collaboration, but they will remain separate entities.
Solutions won’t come easily and this tension is likely to rumble on for some time. But if CIOs don’t obtain the ultimate authority over digital strategies it will seriously undermine what their role is all about.
Mark Say is a UK-based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Tuesday, 8 July 2014

A commercial opportunity in the copyright minefield

Copyright has always been a minefield, and it’s been made more hazardous by the way that sentiment over its place in the digital world has become more confused.

The rise of digital technology stoked up protectionist fears by making it too easy to copy, share or illegally re-sell content. But we’re now in an era when the ability to make something new out of existing content – moving from data mash-ups to app creation – is making copyright more of a hindrance in some eyes. Public authorities with an eye on the economic potential of the latter are feeling increasingly torn between protecting the original creators and giving the next wave the chance to show what they can do.

Neelie Kroes, the EU commissioner with the digital brief, has acknowledged the conundrum with a speech crying out for copyright reform. Her language leaned towards worries that copyright is getting in the way of progress; she said the 2001 EU Copyright Directive isn’t fit for the 2010s and that there’s a risk of copyright becoming an irrelevance.

So there has to be reform. Fair enough, but what type of reform, and how is the EU going to make it all fit a landscape that keeps on changing? There are a hell of a lot of details to resolve and devils in all of them. Providing a legal framework that protects the original content creators yet still gives the re-use innovators a chance to succeed is going to be a difficult and highly contentious job.

The most obvious recent precedent, the EU Data Protection Regulation, has prompted plenty of observers to claim it is unworkable and could yet be mangled by the Council of Ministers. I suspect that copyright, an issue even closer to the lawyers’ hearts, is going to create even more dissent.

This doesn’t mean that the EU shouldn’t try to deal with the issue, but this is going to be a drawn out process with a lot of grey areas. Those innovators are going to feel increasingly impatient, but also scared at the thought of being financially clobbered if they break the law.

I expect there will be some enterprising legal minds, or even non-legal entrepreneurs, ready to take advantage of this with services that promise a quick and easy way to clarify the legality of using specific content. If they offer a reliable service in checking the origins and licensing terms of specific content they can provide the reassurance that the innovators are looking for – at a price.
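
At its core such a service could amount to a lookup of known licensing terms against a proposed re-use – everything in this sketch, from the licence names to the rules, is hypothetical:

# Hypothetical clearance check: map a content item's licence to the
# re-uses it permits. Licence names and rules are illustrative only.
LICENCE_RULES = {
    "all_rights_reserved": set(),
    "cc_by": {"copy", "share", "remix", "commercial"},
    "cc_by_nc": {"copy", "share", "remix"},
}

def clearance_check(licence, proposed_use):
    """Say whether a proposed re-use is permitted under a known licence."""
    permitted = LICENCE_RULES.get(licence)
    if permitted is None:
        return "unknown licence: seek legal advice"
    return "permitted" if proposed_use in permitted else "not permitted"

print(clearance_check("cc_by_nc", "commercial"))  # not permitted
print(clearance_check("cc_by", "remix"))          # permitted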

These services shouldn’t be particularly complicated, and will probably involve steps that a lot of people could take for themselves. But legal matters always seem very complicated to most of us, and they’ll find plenty of takers among the digital entrepreneurs who don’t want to get burned.

There’s money in that minefield.

Mark Say is a UK-based writer who covers the role of information management and technology in business. See www.marksay.co.uk