Wednesday, 12 November 2014

Why you won’t get systems integrators out of government




Ambitious projects demand expertise that won’t always be found in-house
 

“SMEs good, systems integrators (SIs) bad” was one of the earliest messages to come from the Conservatives on UK government IT. Before they came to power their lead voice on the subject, Liam Maxwell (now the government chief technology officer), attributed many of Whitehall’s IT woes to complex projects with big contracts that gave too much power and too much of the taxpayers’ money to SIs.

Since 2010 the coalition government has taken some big steps to break the pattern – the £100 million limit on IT contracts, a two-year limit on hosting contracts, not allowing firms to run service provision and systems integration in the same area, development of the G-Cloud procurement framework – and given SMEs a better chance to grab some of the market. But one message to emerge from Whitehall Media’s Public Sector ICT Conference was that the SIs are still big in the field and they’re not going away.

Several speakers raised the subject, and none claimed or forecast that the SIs’ presence is seriously diminished. There was talk of their role changing – to provide infrastructure for innovators and offer a wider range of distinct services for different projects – but there was an underlying assumption that they will continue to play a major role.

It all comes down to complexity and expertise. Government at all levels is trying to use IT to change the way it works, the Digital by Default strategy aims to make digital services the norm, and there are still major projects such as Universal Credit. These require a lot of expertise – in understanding the technology, programme engineering and risk management – that government often lacks in-house. And the SMEs may have the in-depth expertise for parts of a process, but they’re not well placed to bring together the myriad elements of a big programme. 

Government has long been trying to build these skills in-house, but it is difficult to keep up. The technology changes quickly and the private sector is always ready to lure the best and brightest towards higher pay cheques. There are always going to be gaps in government’s skills pool, and it has to buy in expertise to fill those gaps. The message from the conference speakers was that government shouldn’t try to deny this, but manage it to do the best for itself and the taxpayer.

The government’s rules are likely to help in limiting the long term commitments to SIs; sustained efforts to build in-house skills should ensure there are some experts committed to a role in public service; and government needs to retain the intellectual capital from large projects. Measures such as these can make a difference in changing the balance, making government a more powerful customer.

That’s a more realistic relationship to aim for than banishing SIs from government IT.

Mark Say is a UK based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Friday, 31 October 2014

Give big data a chance to evolve



Data science is going to involve trial and error, and the costs of failure have to be contained

Listening to the presentations at the Information Age Data Leadership conference yesterday, it was noticeable that the term ‘big data’ didn’t come up with the frequency some of us expected. It was even treated with a little disdain by a couple of speakers, portrayed as a term with more hype than substance behind it.

It reflected some of the lessons that emerged from the conference: that data has to be properly prepared to obtain clear insights; that it’s easier to prepare data held inside your enterprise; and that a lot of big data, despite all of its promises, resides outside the enterprise. The hard part in harnessing big data is not just to get at all that juicy information on the outside, but to get it into a shape from which you can produce something worthwhile.

It was notable that in the stand-out case study, on how Network Rail is learning a stream of valuable lessons from its data, the data came predominantly from inside the organisation, making it much easier to manage. And a strong impression to emerge from the day was that you can get the best results in the short term by limiting your ambition, looking at what you have and can reasonably use rather than making grand plans to tap into streams from the outside world.

I’m not one to write off the potential of big data; it’s the continuation of a trend of bringing together and analysing information that is already producing plenty of real value for business. But it is probably being talked up by its evangelists to the point where it will disappoint a lot of expectations in the short term.

Harnessing all that unwieldy data from outside the enterprise is going to be a massive task, made more difficult by the unstructured nature of a lot of the information, and it will take a long time for best practice to develop. The emergence of data science will probably provide answers over time, but the discipline is in its early days and there aren’t yet many data scientists around. It will probably be well into the next decade before it matures, and some people are going to waste a lot of time and money in unproductive big data projects before then.

Which is why expectations should be kept in check, projects run on a small scale and not used for business-critical decisions until the techniques have been proven. Trial and error is inevitable in opening up any field of science, but data science is going to happen in the business world, not the controlled conditions of a laboratory, and it’s important that the errors are not too costly.

Business will benefit most from big data by allowing it to evolve, not letting it loose in a big bang.

Mark Say is a UK based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Friday, 12 September 2014

Data and the deskilling of business leaders

Maybe some top dogs play down data because of its implications for their status and salaries

Everyone at the top of UK business is taking data seriously, but maybe not as seriously as they should, according to a new report published by PwC.

The findings of Gut & gigabytes, written by the Economist Intelligence Unit, might not surprise everyone but indicate that there have been serious limitations on the move towards the data-driven business. In short, its survey shows that decision-makers in big companies take data analysis seriously, but it comes third in importance behind their own intuition and experience, and the advice and experience of others inside the company.

The report cites reasons for this that will be familiar. There’s scepticism about the quality, accuracy and completeness of data, with a sense that it hasn’t improved much in recent years. There’s also uncertainty about which data is really useful and a fear of getting lost in a deluge.

These are valid concerns, but I suspect there’s something that many C-suite leaders won’t acknowledge: they don’t like the idea of data having more value than what’s inside their heads.

Corporate business is dominated by high-level executives who play heavily on their personal capabilities, obtaining high status and massive salaries from a perception that they can provide outstanding insights and prowess in decision-making, way above the abilities of more ordinary souls. They’re paid for their exceptional minds. But if their minds begin to take second place to the lessons provided by data, they become less valuable.

An increasing emphasis on data analysis creates the potential for a partial deskilling of business leaders. If their companies really become data-driven organisations all that personal expertise won’t seem so important, and they won’t seem so valuable to shareholders. I’m not suggesting that the whole C-suite structure is going to crumble, but there could be a shift in the balance that leads to a long-term reduction in status and salaries.

I suspect that some business leaders hold half-formed, unspoken thoughts about all this, and don’t like the idea of trusting too much in data. And they might not be in a hurry to find ways over the barriers that they have identified.

Cynical? I suppose so. But cynicism has always been one of the major forces in business.
Mark Say is a UK based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Tuesday, 9 September 2014

Fleet managers might prefer driverless vehicles

Google Car could be seen as a step towards higher assurance and efficiency

In the few months since Google’s plans for a driverless car emerged, the public reaction has gone from a disbelieving chuckle to taking it seriously. Not that anyone expects to see thousands of them on the road in the next few months, even the next few years, but there is a serious debate around its capabilities and possible uses.

It has even prompted speculation that self-driving vehicles would have implications for insurers. A recent article in Insurance Journal pointed out that if a computer rather than a driver controls a car then any crashes would lead to claims against the product supplier rather than the driver. There may not even be a need for personal motor insurance.

If so, this could affect the thinking of companies that run or hire out vehicle fleets. One of the major elements of their planning is the risk management around careless driving, and they can never be 100% sure of the reliability and state of mind of any of their drivers. But, given a few years, self-driving technology could develop to the point where it would provide a higher level of assurance. Also, fleet operators could feel a lot better knowing they could claim from a multi-national manufacturer in the event of an accident.

This would be a logical progression from the development of mobile apps to support fleet operators. There are now plenty on the market that enable them to assist and monitor drivers, and link to back office logistics systems. They support control and integration, and there’s a case for removing the driver to reduce the uncertainties and streamline the process.

Of course there are some massive cultural barriers to overcome. A lot of people, fleet owners among them, are going to feel uneasy at the thought of trusting an array of sensors and internal processors to drive a vehicle. Existing drivers will not be happy. Even if they remain at the wheel to operate an emergency stop, they will see a future in which their role is downgraded and lower paid. And the larger the vehicle, the more it will spark public anxieties and the more resistance there will be.

But removing that risk of human error, of which we’re all too aware, will still provide a lure for fleet operators to make the switch. Those running smaller vehicles, such as delivery vans, car hire and taxi services, could find self-driving cars especially attractive. Give it a few years and the driverless car could be a significant element of plenty of business operations.

Mark Say is a UK based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Thursday, 28 August 2014

A need for intelligent choices from the internet of things

All that data can provide valuable insights, but it demands a selective approach

There’s a widespread appreciation in business of what the internet of things (IoT) is all about, but I suspect that a lot of companies are still deterred from getting to grips with the phenomenon by its sheer enormity.

ABI Research has provided the latest indication of the scale of the IoT, with a forecast of a 20% growth in the number of wireless connected devices to 16 billion this year, and a rise to 40 billion by 2020. The data that will flow from all those smartphones, sensors, TVs, wearables and connected household appliances will be a major asset for any organisation able to use it, but also overwhelming in its scale.
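
To put those forecast numbers in perspective, here is a quick back-of-the-envelope calculation (mine, not ABI Research’s) of the annual growth rate they imply, assuming the rise from 16 billion devices in 2014 to 40 billion in 2020 compounds evenly:

```python
# Illustrative arithmetic only, not taken from the ABI Research report:
# what compound annual growth rate takes connected devices from
# 16 billion in 2014 to 40 billion in 2020?
devices_2014 = 16e9
devices_2020 = 40e9
years = 2020 - 2014

cagr = (devices_2020 / devices_2014) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # roughly 16.5% a year
```

Even at that slightly slower rate than this year’s 20%, the forecast means billions of extra data-producing devices appearing every year.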

So far a minority of organisations have started to use the data in a big way – the analytics is still widely seen as a complex, costly business that only the big players can afford – but it will become more cost-effective as the skills base spreads and specialists step up their offerings of analytics as a service. And as it all becomes more familiar a growing number of companies will begin to see what they can learn from all those devices.

Some will be tempted to grab data from as many streams as possible and throw everything into an analytics mix in search of business insights. But is that going to give them what they need? There’s a danger that data from too many sources – and ‘many’ is what the IoT is all about – can provide ‘insights’ that are over-complicated and lacking the clarity that a business needs.

It’s a danger especially for those that use analytics as a service, bringing in outsiders with the data analysis and science skills but a limited understanding of the individual business. Maybe the best of them will be able to help identify the key data streams for analysis, but I suspect that many will offer a service that is about crunching rather than identifying the data, and needs tailoring by the customer rather than the provider.

This is why business leaders need to think for themselves about the first steps to harnessing the IoT. They should know their business aims and what lessons they need to learn, and in turn have a good grasp of the data that’s going to give them the really valuable insights. When they take that first step they will be ready to bring in the analytics specialists.

It’s also unlikely that they will need the same data all the time. Markets change, new factors come into play and new insights will be needed. This is going to require different streams of data and again it is the business leaders who should take the lead in making the choices.

The growth of the IoT and the accompanying explosion of data promise riches for business, but those that reap the full benefits are likely to be the ones that pick and use data as needed rather than grab it wholesale.

Mark Say is a UK based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Monday, 18 August 2014

Payment wristbands have to be cool to succeed

bPay and others are more likely to win customers by playing up style as much as function

My first thought on reading that Barclaycard is launching a payment wristband was that the chances of people actually wanting them were quite remote.

Barclays’ credit card arm is pushing bPay as a convenient method for small payments from a prepaid account that can be topped up by any Visa or MasterCard. It involves waving the wristband over any terminal with a contactless payment symbol to buy anything worth up to £20.
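
As an illustration of the mechanics described above, here is a minimal sketch of the prepaid model – a balance topped up from a linked card, spent in contactless transactions capped at £20. It is not Barclaycard’s actual system or API; the class and method names are hypothetical:

```python
# Hypothetical sketch of a prepaid payment wristband, for illustration only.
CONTACTLESS_LIMIT_GBP = 20.00  # per-transaction cap for contactless payments

class PrepaidWristband:
    def __init__(self):
        self.balance = 0.0

    def top_up(self, amount):
        """Add funds from a linked Visa or MasterCard."""
        self.balance += amount

    def tap_to_pay(self, amount):
        """Attempt a contactless payment at a terminal; return success."""
        if amount > CONTACTLESS_LIMIT_GBP:
            return False  # over the £20 per-transaction cap
        if amount > self.balance:
            return False  # not enough prepaid funds
        self.balance -= amount
        return True

band = PrepaidWristband()
band.top_up(30.00)
print(band.tap_to_pay(4.50))   # True: a small purchase goes through
print(band.tap_to_pay(25.00))  # False: above the contactless limit
```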

Yes, it’s convenient, but so are the contactless cards that are becoming more common in the UK, and you can use the same near field communication technology in smartphones and smart watches. I suspect that a purpose-made payment wristband won’t win any popularity contests against any of these options.

Compared with the card it might be easier to use, but it’s also more visible. A lot of people won’t want to be readily identified as a customer of a particular company, especially one of the big banks, and will feel more comfortable with the anonymity offered by a card tucked into a wallet or purse. A card kept out of sight is also less tempting to thieves.

Against the phone and watch it doesn’t provide the bundle of functions that attract users, and it won’t stir the kind of excitement that gadget fans get from the latest device with the right brand name. Payment wristbands just won’t be cool.

Then came my second thought, spinning off the fact that a lot of buyers regard phones and wearable devices as fashion accessories. The design and the brand name are often as prominent in their minds as the functions of a device, and if someone can tap into that attitude with payment wristbands they might be able to carve out a share of the market.

I can see bPay or other providers of payment wristbands doing something to make them desirable for reasons that have nothing to do with their function. They can hook up with designers of jewellery and fashion accessories, get their marketing teams focused on younger consumers, and have their ad agencies create the type of campaigns that are as much about what the wristbands can do for the user’s image as about their usefulness.

It’s all about branding and making the product stand for something other than what it actually does. You might say that an intelligent consumer doesn’t buy into that stuff, but there are plenty of markets in which it works, which says something about how many unintelligent consumers are out there.

Will bPay or other providers pull it off if they take this course? Maybe, maybe not, but younger consumers often go for a product on style rather than substance. It would give the providers a chance, and I don’t see the wristbands taking off purely on what they can actually do.


Mark Say is a UK based writer who covers the role of information management and technology in business. See www.marksay.co.uk

Thursday, 7 August 2014

Robotics, AI and the worldwide job cull

New technologies are going to destroy jobs, and there’s no promise they will create enough new ones to fill the gap

Do you think a computer could do your job? It’s a question that people have been asking for at least 25 years, and it’s becoming more intense with the advance of robotics and artificial intelligence (AI). And the uncomfortable truth is that the answer for a growing number is ‘yes’.

Technology has been knocking people out of work for a couple of centuries, and as it develops ever more quickly the trend is going to continue. So far it’s been alleviated in industrial economies by the creation of new jobs, but the big question is whether this can continue as robotics and AI automates more tasks previously dependent on the human brain.

A new report from Pew Research, AI, Robotics and the Future of Jobs, indicates that there isn’t a consensus. A survey of almost 1,900 experts produced close to an even split between the optimists and pessimists, with 52% expecting that technology will create as many jobs as it displaces by 2025 and 48% forecasting that it won’t do so. Unsurprisingly, a lot of the latter group are worried about big increases in income inequality, mass unemployment and breakdowns in social order.

It’s hard to feel positive about blue collar jobs, and the more routine white collar occupations. Robotics are extending machines’ capacity for manual tasks, and AI promises (or threatens, depending on where you stand) to do the same for a lot of jobs that involve the routine processing of information. Also, the ability of cognitive systems to process vast quantities of data at high speed is impinging on areas, such as healthcare diagnoses and financial trading, currently regarded as the province of professionals (a subject I covered in a white paper for the UK’s Chartered Institute for IT).

I’m not going to predict whether the new technology will create enough jobs to replace those it knocks out. I lean towards the pessimists’ view, but that’s the result of a mild scepticism rather than any strong evidence. But the Pew Research report has prompted a couple of thoughts about the future of technology and job creation.

One is that developed economies rely increasingly on jobs that could be described as non-essential. You can apply the label to big chunks of the media, marketing, retail and the manufacture of consumer goods that are seldom used – providing services that the recipients like, but could easily do without. I suspect that these jobs are close to their limit; society can’t consume any more, however inventive the ad men become at creating demand. There will be fewer new ones to fill the gap as more of the essential jobs become the province of robotics and AI.

The other is to do with how far AI will be allowed to penetrate the professions or top end management roles. There is a realistic argument that an educated human judgement is necessary for many decisions, especially when there’s an ethical element involved. Cognitive computing can be used for high level decision support, but the ultimate responsibility should remain with a human. Those humans form elites, and elites tend to be very good at protecting their own interests.

They’ll want rigid boundaries in place to keep themselves in those top level roles, and a culture that emphasises the primacy of the human mind in their fields. They may be right, they may be wrong, but there are going to be a lot of roles for which the limits are not clear, and professions that will become battlefields.

Of course there’s another possibility: that as technology takes over more jobs, those that remain are spread more evenly, so we’ve all got more leisure time. But that was predicted fifty years ago; it hasn’t worked out that way since and, given the prevailing dynamics, it’s not likely to happen in the foreseeable future.

The advance of robotics and AI is inevitable, and in the long term it could well do more good than harm; but in the next two, three, four decades the disruption they cause won’t be a pretty thing to watch.

Mark Say is a UK based writer who covers the role of information management and technology in business. See www.marksay.co.uk