Tuesday, 15 April 2014

Three questions for the UK public about Scottish independence

With less than six months to go to the referendum on Scottish independence, the arguments are getting louder and increasingly bad tempered.

Over the past few weeks Alex Salmond and his Nationalist colleagues have made a sustained effort to frame their opponents’ arguments as those of an overbearing elite who are lying about their intentions in an effort to scare Scottish voters away from independence. Yes, most of it surrounds the prospects of a currency union with the rest of the UK, although I’m sure questions of EU membership, defence policy and border controls are also going to stir up some sound and fury in the months ahead.

What the Nationalists are not keen on anyone talking about is how the public in the rest of the UK feels about these issues. It suits their case to define independence in terms of Scots asserting themselves against a governing class from privileged backgrounds, rather than detaching themselves from the people of England, Wales and Northern Ireland.

I suspect that plenty of Scots would at least take into account the opinions of their UK compatriots. Scotland has retained its own legal and education systems, and there are some cultural differences, but we have a lot more in common. We share an economy and transport infrastructure, study at each other’s universities, generally watch the same TV programmes, listen to the same music and laugh at the same jokes.

Most importantly, a lot of Scots live in England, and a smaller but significant number of English people live in Scotland, with Welsh and Northern Irish people also settling across the borders. It’s everyday stuff that has kept us together, and in the event of independence some of this is going to change, which will affect attitudes on both sides of the border.

One thing the Scots deserve in advance of the vote is clarity about what the people in the rest of the UK, not just the politicians, think about independence. Just knowing whether they want Scotland to stay would be a big factor, along with attitudes towards a couple of the things – currency and borders – that everyone will notice.

There has been a recent YouGov poll on currency union, but the process would benefit from a large-scale opinion poll across England, Wales and Northern Ireland – commissioned independently of the government – asking three questions to be answered with a simple yes, no or undecided:

1 – Do you hope that Scotland votes to remain as part of the UK in September?
2 – In the event of Scotland becoming independent, do you think it should be allowed into a currency union with the UK?
3 – In the event of Scotland becoming independent, do you think there should be a full border and regulation of movement from one side to the other?

Show people in Scotland what the rest of the UK thinks about these and they’ll go into the referendum with a clearer picture than that presented by the Nationalists.

Mark Say is a UK-based writer who covers the role of information management and technology in business. See www.marksay.co.uk. He also writes fiction; details at www.marksaywriter.com.

Thursday, 10 April 2014

How much do we want algorithms to do?

Yesterday I read an engrossing piece by Luke Dormehl in Wired magazine about the prospects for analytic software in the recruitment process. The gist is that algorithms can be a lot more reliable than people in identifying the right candidate, and that it may not be long before organisations rely on technology rather than human judgement to find the right people.

The idea has an immediate appeal in promoting a more genuinely meritocratic workplace. I’m sure that most of us have been frustrated at missing the cut for a job interview when we know we fit the bill. And plenty of us have employed people who had the CV and references and came across well in the interview, only to regret the choice months later. I’ve had experience of both.

So take out those wobbly human judgements and let the algorithms take charge, and we’ll have the right people in the right jobs. Agreed? I suspect there would be more disquiet than enthusiasm.

A lot of managers won’t like it. Getting to choose who works for you is one of the big plus points of being a manager, and it would dent a lot of egos to tell them a computer is more likely to make a good choice. A lot of potential employees would feel dubious, not wanting their worthiness for a job to be assessed by a software program. There’s a scary element in its implications for our relationship with computers.

It’s also important to remember a question that’s usually asked during recruitment: will they fit in? If you want a machine to answer that you have to combine the data on the candidate with data on their managers, colleagues, and the priorities and dynamics of the company. Will that be readily volunteered? Will it be accurate? The algorithms get more complicated, and become more vulnerable to any distortions and dishonesties. And we all know they come from both sides of the fence.
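For illustration only, the kind of weighted matching such software performs can be sketched in a few lines of Python. Every feature name and weight below is invented for the example, not drawn from any real recruitment product:

```python
# Hypothetical sketch of algorithmic candidate scoring.
# Feature names and weights are invented for illustration only.

def score_candidate(candidate, weights):
    """Return the weighted sum of a candidate's feature scores (0-1 scale)."""
    return sum(weights.get(feature, 0) * value
               for feature, value in candidate.items())

# A 'team_fit' figure would depend on data about managers and colleagues --
# exactly the data whose accuracy is questioned above.
candidate = {"years_experience": 0.7, "skills_match": 0.9, "team_fit": 0.4}
weights = {"years_experience": 0.3, "skills_match": 0.5, "team_fit": 0.2}

print(round(score_candidate(candidate, weights), 2))  # 0.74
```

The fragility is easy to see: nudge the team_fit input or its weight and the ranking changes, which is where distortions and dishonesties from both sides of the fence creep in.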

It’s part of the bigger question of how far cognitive systems can go in replacing humans, something I touched on in a recent white paper for the Chartered Institute for IT. Cognitive computing can do some things better than people, processing massive quantities of data faster and more consistently, and can provide important insights at high speed. But these systems don’t think like people: they can’t make the value and moral judgements on which we often rely, and most organisations won’t want to take those judgements out of many of their processes, including recruitment.

I can see more companies using algorithms as part of their process to find new employees; but I bet there won’t be many prepared to drop those face-to-face interviews and take the decisions out of their managers’ hands.


Tuesday, 25 March 2014

BYOD and the Windows XP warnings

There are only days left until Microsoft withdraws support for its Windows XP operating system (8 April), and although it’s spent more than a year warning everyone to move to Windows 7 or 8, there are still predictions of a lot of people and businesses being caught out.
I’ve recently watched an FT Connected Business video of a couple of Gartner analysts warning that something like 10-15% are going to have a nasty shock over the next few months, and that anyone who hasn’t begun a migration ought to get onto it now.
I’m not arguing with them, and I should think most of us have experience of working somewhere IT upgrades have been put off as long as possible. The cost, the disruption, and the feeling that the odds are well against a meltdown prompt board members to decide it can wait until next year, or even the year after. It’s not a sound approach, but it’s what happens in some organisations.
But it has got me thinking about how organisations that are big on ‘bring your own device’ will be affected. It’s most likely that those that have gone for it have a sufficiently rigorous approach to IT to have planned for the end of Windows XP; but there must be companies with a more slapdash approach that allow employees to use their own devices for work without paying much attention.
Anyone who is working on their own devices is not going to be reliant on Windows XP - they’re going to be on iOS, Windows 8 or Android – and they would have set up their machines to connect to their employer’s networks and systems. I can imagine that if the office IT hits the XP rocks, then the captains will be begging those on BYOD to do what they can to keep the ship afloat.
I don't see this as a widespread scenario, and I'm not suggesting that BYOD is an alternative to upgrading from Windows XP, but it could be the substance of a few anecdotes for IT managers over the next year or two. And a few people who work on their own devices might find themselves more highly valued by their employers.
More about Mark Say at www.marksay.co.uk

Tuesday, 18 March 2014

Bringing the science to open data

Open data isn’t exactly a high profile issue for the public, but MPs on the Public Administration Select Committee (PASC) have got their teeth into the subject. The publication this week of its Statistics and Open Data report shows that its members are not happy with the progress of the campaign, and believe it’s falling far short of its promise to boost business and provide better government by making most government data sets freely available.

One shortcoming that stands out is that public servants themselves don’t have the skills to do much with the data, and the statisticians who do often hide away when they should be champions for the cause. The report follows up with a call for the Government to encourage the training of more data scientists, with a timetable and target numbers, and to include data skills and open data awareness in Civil Service policy training.

It reflects the fact that open data is often messy data, and while there is a case for standardisation, it is unlikely the public sector will have the resource or the inclination to make it more user-friendly. For the foreseeable future it’s take it or leave it, and taking it will only be a serious option for those with someone with the ability to extract knowledge from the mess.

It ties in with a call that I’m hearing more often for an increase in the number of data scientists. Businesses are being told that if they want to take advantage of big data and the internet of things they need people with the mathematical skills to grab insights from the torrent of information, but the investment isn’t an easy sell at the moment. It’s still a vague idea for many, and the benefits seem a long way off.

There’s a need for more degree courses and vocational training, and the Government could give it a push by developing some of its statisticians into the role of data scientists. It’s not just about the training, but beefing up the role, and giving it the type of profile that it gave to CIOs a few years ago. If it’s something for people in the Civil Service to aspire to, it could raise the profile of the role and set a trend for others to follow.

The immediate effect would be limited to a few Whitehall departments, but it could provide some longer term momentum for the profession and do something for the cause of open data.

Tuesday, 11 March 2014

Personal data in the app and API economies

Europe’s app economy made some headlines in February with the publication of a report by research firm Gigaom, as part of the EU backed Eurapp project, which estimates developer revenues at €17.5bn in 2013 and forecasts they will rise to €63bn by 2018.

It throws the spotlight on the growing economic importance of apps, predicting that there will be 2.8m developer jobs and 4.8m people in support and marketing roles by 2018, and has some interesting observations and hard stats on the business. But it doesn’t touch on the influence of personal data, and how it’s managed by the organisations that develop apps and the application programming interfaces (APIs) on which they run.

A growing number of consumers are sensitive about personal data, and their choices are going to be influenced by whether they feel it is held in a neutral and safe place. This places it among the factors that API and app developers will have to take into account in building their markets.

It’s a fractured picture. The app economy has been built on developers finding different ways to make money, including charging for the download of apps, offering them as a medium for advertising or charging when they are used for transactions. The API economy is harder to define – there’s no strong consensus – but it revolves around the provider making the API available to developers as a base for their apps. You can see this in Apple’s App Store and Google’s Play Store, on platforms such as Facebook and Twitter, and among providers of personal data services such as Mydex that provide APIs.

These economies are growing on their appeal to people’s desire for convenience; apps give them an easy route into online services without having to provide their details each time around. But users often have no idea how this is creating revenue for the developer or API provider, even though it often involves the use of their personal data.

Different models are used. Some companies use the data to target their own advertising; others make it available to third party advertisers; it can be used to identify consumer habits and assess risks; to feed into business intelligence; and in developing new services. The potential is growing as more devices are able to provide information on people’s location and movements, and this is going to increase as the data on those apps is hooked up with the internet of things.

There are also some that are not trying to monetise the data but use it to make their own processes more efficient. But the key point for all cases is that there is a degree of intrusion, and it’s all done without the knowledge of most users.

By law API providers and app developers have to tell people how their data could conceivably be used, but in practice it is usually buried in those lengthy terms and conditions that few people are going to read before they click their approval. They only begin to understand further down the track, when in most cases it’s too late to withdraw approval and the only remedy is to stop using the app.

Put this alongside the sporadic horror stories in the press about breaches of personal data held by large organisations, and a growing number of people are going to think twice about signing up to use apps, and will often decide that they can live without them.

If businesses want to keep those people on board they need to find a consent model that gives users more control and feeds a sense of trust, which comes back to the need for a neutral and safe place for personal data that is detached from any commercial process.

There are possibilities around the provision of stores for personal data, supported by a credible process to verify an identity and verify the data, which individuals would populate – or have populated by those they connect with – and make available as they see fit. They would be most effective if they provided some flexibility: the individual might give blanket permission for access to relevant data for every app they download; give a simple short form of consent for different scenarios under their own terms; approve requests for specified data each time they sign up; or even make data available for one-time use on apps for a single transaction.
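Those options can be pictured as a small piece of logic inside a personal data store. This is a purely hypothetical sketch in Python – the mode names, the store layout and the may_release check are all invented for illustration, not taken from Mydex or any real service:

```python
# Hypothetical sketch of the flexible consent options described above.
# All names and structures here are invented for illustration.

from enum import Enum

class ConsentMode(Enum):
    BLANKET = "blanket"        # blanket permission for all relevant data
    SCENARIO = "scenario"      # short-form consent agreed per scenario
    PER_REQUEST = "request"    # approve each request for specified data
    ONE_TIME = "one_time"      # data released for a single transaction only

def may_release(grants, app_id, fields):
    """Decide whether the store may release the named fields to an app."""
    grant = grants.get(app_id)
    if grant is None:
        return False                       # no consent recorded for this app
    if grant["mode"] is ConsentMode.BLANKET:
        return True                        # blanket permission covers everything
    return all(f in grant["fields"] for f in fields)

grants = {"weather_app": {"mode": ConsentMode.PER_REQUEST,
                          "fields": {"postcode"}}}
print(may_release(grants, "weather_app", ["postcode"]))       # True
print(may_release(grants, "weather_app", ["date_of_birth"]))  # False
```

A real store would also need revocation and an audit trail; that is where the sense of trust would actually be earned.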

They would surrender some of the convenience they currently enjoy, and they wouldn’t be the most attractive customers for the most commercially aggressive app developers, but it would give them the control that they want.

This could be done by an API provider or by a third party with access to the API. The latter might prove more attractive to the privacy savvy consumer wanting a service provider that isn’t looking for new ways to make money from the data. He or she would give up some privacy, but with a stronger sense of trust, especially if they were already using it for website-based services.

There is also a case for it assisting API and app providers by reducing the risk from false data. If they are given a false identity or inaccurate details it undermines the quality of what they can offer, and can lead to unhappy customers, reputational damage and, in the worst cases, legal problems from providing the wrong data about individuals. Taking the data, and proof of identity, from a third party with a more stringent verification process could provide a degree of security that they won’t obtain from someone quickly jabbing details into a smartphone or tablet.

It’s likely that some companies would be reluctant to give up free access to the data on which they’re building a commercial model, but it offers a degree of compensation that could be more highly valued as customers become more guarded.

Concerns over personal data are beginning to influence the market for apps. This is one possible solution and there could be others out there. Overall, the companies that are willing to take it seriously could find a competitive edge in appealing to those customers who are going to be more demanding before giving their trust.

This article was written in collaboration with Mydex.

Tuesday, 4 March 2014

BBC4 deserves its place more than BBC2

There's been a fresh round of rumblings recently about BBC4 being closed/merged with BBC2. Michael Grade, David Dimbleby and Roger Mosey have all spoken about wielding the axe over 4 in recent months.
I can understand the financial pressures that lead to the Beeb asking whether it needs all its TV channels, but I don't buy the argument used to justify this, that there's no clear distinction between the two.
I'd argue that BBC2 is actually much closer to BBC1 in its programming. Just looking at what's on 2 this evening (4 March) between 6.00 and 10.30 – Revenge of the Egghead, a dogs’ intelligence test as part of Sport Relief, a documentary about planners, a sewing competition, another documentary about emergency medics and Sarah Millican’s chat show – I would say all but one (the programme about planners) would fit comfortably into BBC1's schedules.
By contrast, BBC4 is on the ground that once would have been firmly occupied by BBC2 – Michael Portillo looking at Irish railways, a documentary about crabs, another on the role of bones in evolution, and later documentaries on World War One and Captain Cook. True, it has the movie Billy Elliot on at 9.00, but overall it is aimed at a more distinct audience and fulfils a crucial part of the brief of a public service broadcaster - that bit about informing and educating (and plenty of us do find it entertaining).
I’d argue that the content of 4 is much further from the Freeview commercial channels than that of its BBC siblings. And if you close/merge it with BBC2 the overall effect will contribute to the dumbing down of TV.
So if they have to cut something, why not BBC2? Or shift the BBC4 brief, and all the staff from 4 to 2 and see how much of 2 can be absorbed into 1. That would do more to ensure that the BBC continues to play a role in catering for a minority audience, and carries on meeting its brief for public service broadcasting.

Tuesday, 25 February 2014

Lego, Raspberry Pi and computing skills

I’m happy to cheer on the government’s plan to include computer coding in the national curriculum for primary schools from next September. I also have concerns about its chances for success, but I think these could be improved by learning a few lessons from the history of Lego. Yes, those toy building blocks that a lot of us played with as kids. I’ll get to the reason in a while.

Placing computer coding on the curriculum makes sense. We need a population of coding savvy youngsters to lay the ground for computing innovations in the future. And it’s a fair argument that even those who don’t follow careers in IT will benefit from coding, learning to think with a combination of logic and creativity. It will be good for them and good for the UK.

But I can’t help remembering a previous effort to do something admirable in schools, when John Major’s government wanted to ensure that all secondary students learned a foreign language to proficiency. That would have corrected a long term failing in British education; but it foundered, largely because there weren’t enough people with the existing language skills who liked the idea of teaching for a living.

The effort to ingrain computing skills faces a similar barrier: anyone with any degree of expertise can earn a lot more money in a different environment, and they won’t take the stick that is often aimed at teachers by politicians and the press. It’s easy to see why secondary schools are struggling to provide decent courses in computer science.

Hopefully, the plan for primary schools will get over this by taking a different approach. The availability of Raspberry Pi, the tiny single-board computer that can be used to explore the basics of coding, should enable unskilled teachers to learn alongside the first group of children. It won’t require the existing knowledge needed at secondary level.

But you need to make sure the kids want to learn. I listened to an interesting talk at Cass Business School last week, when Alex Klein, founder of start-up Kano Computing, warned against a learning by rote approach that would be no fun and dampen the youngsters’ interest. They are more likely to learn if they can play in doing so.
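To make that concrete, the sort of first program a child might build on a Raspberry Pi is something as small as a guess-the-number game. This is a sketch of the play-first approach, not taken from any curriculum:

```python
# A playful first program of the kind children might write on a Raspberry Pi.
# The secret is fixed here so the example is repeatable; in the classroom
# version it would come from random.randint(1, 10).

def play(secret, guesses):
    """Respond to each guess in turn, stopping when the secret is found."""
    messages = []
    for guess in guesses:
        if guess < secret:
            messages.append("Too low!")
        elif guess > secret:
            messages.append("Too high!")
        else:
            messages.append("You got it!")
            break
    return messages

print(play(7, [3, 9, 7]))  # ['Too low!', 'Too high!', 'You got it!']
```

The logic-plus-creativity point is right there: the rules are strict, but the game is theirs to change.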

That prompted a thought about the previous week’s episode of ‘The Culture Show’ on BBC2, which looked at the influence of Lego on architecture. Some of the top architects around the world grew up messing about with those plastic, clip-on bricks, indulging their imaginations in weird and wonderful constructions, and getting a feel for symmetry and design.

Lego provided a tool for the nurturing of lively minds, and this is what Raspberry Pi can be. I hope that teachers don’t use it as a tool to drill a bunch of dos and don’ts into the kids’ heads, but as something that’s fun. Treat it as they do art or storytelling, something that lets their imaginations run riot. In the long term it can give us a generation of red-hot computer scientists, and people who can turn creative minds to many other lines of work.

