Moon to get 4G


You may not be able to get a mobile signal in Lower Middle-of-Nowheresville, Countryshire, but, rejoice – you may soon be able to get one on the moon.

Apparently tired of the pursuit of terrestrial phone coverage, the newly astrally focused Vodafone and Nokia have teamed up with German space-probing outfit PTScientists to bring 4G to the earth’s most lunar of satellites.

But why? To beam lovely HD video of the moon back to earth-bound enthusiasts, of course.

Audi is also involved with the scheme, and will be supplying two quattro rovers that will zip about the moon’s surface, looking for interesting things.

And it’s Vodafone’s base station that’ll link up with the rovers and fire their images and videos back to earth through its 4G network, which electromagnetic wave fans may be interested to note will use the 1800 MHz frequency.
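For the electromagnetic wave fans, the numbers are easy to play with. A quick back-of-the-envelope in Python (using standard physical constants, not anything from Vodafone) gives the carrier wavelength at 1800 MHz and the one-way light-speed delay from moon to earth:

```python
# Back-of-the-envelope figures for an 1800 MHz moon-to-earth link.
C = 299_792_458              # speed of light, m/s
FREQ_HZ = 1800e6             # the 1800 MHz band mentioned above
MOON_DISTANCE_M = 384_400e3  # average earth-moon distance, m

wavelength_m = C / FREQ_HZ             # carrier wavelength, about 17 cm
one_way_delay_s = MOON_DISTANCE_M / C  # light-speed lag, about 1.3 s

print(f"wavelength: {wavelength_m * 100:.1f} cm")
print(f"one-way delay: {one_way_delay_s:.2f} s")
```

So any 'lovely HD video' will arrive with a little over a second of unavoidable lag each way, before network processing is even counted.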

So, what’s Nokia contributing to this exercise? Sandwiches? Nope, the Finnish firm is actually building ‘space-grade’ networking equipment that’ll apparently weigh less than a kilogram, so let’s hope it doesn’t just float off.

Marcus Weldon, a man who works for Nokia, said: ‘This important mission is supporting, among other things, the development of new space-grade technologies for future data networking, processing and storage, and will help advance the communications infrastructure required for academics, industry and educational institutions in conducting lunar research.’


Friday roundup: A week in tech


Sinister online enterprise Facebook has been brought to book for some of its creepy activities, yet again.

This week, a Belgian court ruled that the firm must delete every scintilla of data it’s collected on people WHO DO NOT EVEN USE FACEBOOK – on the extremely reasonable grounds that it’s against the law.

According to Belgium’s privacy watchdog, the firm trampled all over the country’s privacy laws by lacing third-party sites with cookies.

Facebook, which intends to appeal and has said that it is ‘disappointed’ with the ruling, faces fines of £221,000 a day if it fails to comply (which is pretty small beer, really).

According to the court, Facebook must desist from ‘following and recording internet use by people surfing in Belgium,’ which really does sound very creepy indeed.

Surely, the myriad, complex ways in which Facebook harvests and stores personal data are going to become extremely pressing when the GDPR kicks in in May? Watch this space.


Some nice people have built a game designed to help people discern real news from so-called ‘fake news’, the young tearaway nuisance currently besieging the internet with its insidious mischief.

Created by the University of Cambridge’s Social Decision-Making Laboratory, which sounds like an interesting place to work, ‘Bad News’ encourages players to foster a social media following by posting controversial stories and images, just like in real, tepid, horrible life.

Disappointingly for a project led by Cambridge types, the game’s website reads ‘In this game you take on the role of fake news-monger [sic]’ (unless, of course, ‘fake news-monger’ is the name of the character you’re playing? Should be ‘Fake News-Monger’, then, but I digress).

Anyway, the spiel goes on: ‘Drop all pretense of ethics and choose the path that builds your persona as an unscrupulous media magnate [insert R. Murdoch, P. Dacre, etc. joke here]. Your task is to get as many followers as you can while slowly building up fake credibility as a news site [insert more R. Murdoch, P. Dacre, etc. jokes here].’

According to director Dr Sander van der Linden, the team are trying to ‘demystify and illuminate what these techniques are, how to spot them, how to recognise them, and not be influenced by them’.

And it’s hard not to wish them the very best: within approximately three seconds of last week’s Florida school massacre the internet was aflame with exceptionally tender-headed conspiracy accounts of the murders, which have apparently been eagerly swallowed by people for whom thinking is a tiresome, loathsome activity.


And now for something completely different. A 10,000-year clock is being built, for some vague reason.

If all goes according to plan, inventor Danny Hillis’ timepiece will tick once a year, while its century hand will move once, you’ve got it, every century. Apparently, the Long Now Foundation’s scheme is aimed at making ‘long-term thinking more common’. A cuckoo will pop out every 1,000 years, which is a nice touch, even if I’m unlikely to ever see it.

The slothful chronometer is being constructed on land belonging to Amazon’s Jeff Bezos. Naturally, the thing is going under a mountain he owns in the Texas desert.

Brian Eno has also got involved – unsurprisingly, as this kind of project has got his name all over it – and has made a ‘mechanical melody generator’ that’ll emit a unique chime every day for the next 10,000 years. Marvellous.

Anyway, according to a tweet from Mr Bezos, it’s ‘500ft tall, powered by day/night thermal cycles, synchronized at solar noon’ – though he could be winding us up (sorry).

Strange times.


How Socitm is helping councils improve cybersecurity


By Matt Smith, Secretary of Socitm London, and Group Manager, ISfL (London’s public sector Warp)

Back in November 2016, Geoff Connell, President of Socitm, gave a presentation at the Information Security for London (ISfL) Annual Conference. He asked: “What could Socitm be doing to help local authorities improve their cyber resilience?” Further discussions took place with ISfL and, subsequently, an idea started to form.

As a result of these initial activities, Socitm stood up the local government Cyber Technical Resource Group. The group was established to facilitate dialogue between the Local Government Association (LGA), National Cyber Security Centre (NCSC), Ministry of Housing, Communities and Local Government (MHCLG, formerly the DCLG) and the sector.

The group includes a number of local authority security managers, who represent their regional warning, advice and reporting points (Warps). One of the first things with which the group helped NCSC was the development of guidance for local authorities on election security.

We have been briefed on the various aspects of the NCSC’s Active Cyber Defence (ACD) programme and various other projects including the ‘Exercise in a Box’. This is extremely important, as we are able to take back information about these free tools to our regions. Subsequent implementation can only help to improve the cyber resilience of our sector – one of the group’s primary objectives.

We have also been able to assist the LGA with their secure email guidance and to contribute feedback to the Cabinet Office’s Public Services Network team on various topics, including the future of compliance.

Socitm’s involvement in the group ensures the views of Socitm members are heard and that we can continue to influence the national cybersecurity agenda. The group has been a useful sounding board to help shape Socitm’s proposal to the LGA for resourcing the sector’s role within the National Cyber Security Strategy.

At the group’s most recent meeting, which was held at NCSC’s offices, I was particularly pleased to be able to take my daughter along with me on work experience. NCSC welcomed her, as they are particularly keen to promote their CyberFirst competitions for girls as well as Women in IT – something that is very close to our hearts at Socitm.


AI code of conduct published


Those living in fear of artificial intelligence (AI) going horribly awry, be calmed: a 10-point code of conduct for the fantastical/terrifying stuff has been published, so everything might be ok.

Penned by ‘innovation foundation’ Nesta, the Asimov-esque guide is specifically aimed at the application of AI in the public sector – and implores Westminster to be as transparent as can be about how algorithms are constructed.

Nesta director Eddie Copeland has kindly prepared his AI-behaviour directives in this blog, in which he notes: ‘While debate may continue on the pros and cons of creating more robust codes of practice for the private sector, a stronger case can surely be made for governments and the public sector.

‘After all, an individual can opt-out of using a corporate service whose approach to data they do not trust. They do not have that same luxury with services and functions where the state is the monopoly provider.’

Anyway, here’s the man’s code of conduct:

  1. Every algorithm used by a public sector organisation should be accompanied by a description of its function, objectives and intended impact, made available to those who use it
  2. Public sector organisations should publish details describing the data on which an algorithm was (or is continuously) trained, and the assumptions used in its creation, together with a risk assessment for mitigating potential biases
  3. Algorithms should be categorised on an algorithmic risk scale of 1-5, with 5 referring to those whose impact on an individual could be very high, and 1 being very minor
  4. A list of all the inputs used by an algorithm to make a decision should be published
  5. Citizens must be informed when their treatment has been informed wholly or in part by an algorithm
  6. Every algorithm should have an identical sandbox version for auditors to test the impact of different input conditions
  7. When using third parties to create or run algorithms on their behalf, public sector organisations should only procure from organisations able to meet principles 1-6
  8. A named member of senior staff (or their job role) should be held formally responsible for any actions taken as a result of an algorithmic decision
  9. Public sector organisations wishing to adopt algorithmic decision making in high-risk areas should sign up to a dedicated insurance scheme that provides compensation to individuals negatively impacted by a mistaken decision made by an algorithm
  10. Public sector organisations should commit to evaluating the impact of the algorithms they use in decision making, and publishing the results


Seems like a pretty sensible list to me. But what do YOU think? Do you agree with all 10 conditions? Are there important points missing? Do you not care at all because you believe the earth is soon to be slain by the cosmic entity Cthulhu, thus AI is the least of our problems? It would be good to hear from you.


Friday roundup: A week in tech


The Russian government was behind a large-scale cyber assault last year, the British government reckons.

According to defence minister Gavin Williamson, Russia deployed the NotPetya virus against Ukraine – which subsequently spread around the world, costing global firms an estimated $1.2 billion, though how that figure was arrived at is anybody’s guess.

Unsurprisingly, the Russians have denied all of this, pointing out that Russian firms also took a kicking from the NotPetya ransomware.

Mr Williamson MP has claimed that the Russians are ‘ripping up the rule book’ – no, I have no idea what rule book he’s talking about either.

Lord Ahmad, a Foreign Office minister, has also chimed in, saying that the UK government will not countenance ‘malicious cyber activity’. No, of course not.

In summary: isn’t it nice that we’re all getting on so well?


The new bad boy of the money universe, crypto-currency, has claimed another victim, as it continues on its mission to relentlessly baffle the world’s financial journalists.

Miffed radio-astronomers have claimed that the yobbish virtual coinage is hindering their search for extraterrestrials – and can there be a more damning indictment than that?

The folks at SETI (the search for extraterrestrial intelligence) are desperate to expand their explorations but a global dearth of graphics processing units (GPUs) is spoiling things.

Alien seeker Dr Dan Werthimer said: ‘We’d like to use the latest GPUs and we can’t get ’em. That’s limiting our search for extraterrestrials…This is a new problem, it’s only happened on orders we’ve been trying to make in the last couple of months.’

Crypto-miners use processing power to unravel complicated mathematics, which validates purchases made with the likes of Bitcoin. Similarly, those searching the galaxy for life need large amounts of computing power to process the huge amount of data their sweepings of the heavens generate.
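That ‘complicated mathematics’ is, in Bitcoin’s case, a brute-force hash search: miners vary a nonce until the block’s SHA-256 digest falls below a difficulty target. A toy sketch in Python (the block data and difficulty here are illustrative, and vastly easier than the real network’s) shows why the only way to win is raw processing power:

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Find a nonce whose SHA-256 digest starts with `difficulty` zero hex digits.

    This is the proof-of-work loop in miniature: every attempt is one cheap
    hash, and there is no shortcut other than trying nonces in bulk -- which
    is exactly why miners buy up GPUs and leave the alien-hunters empty-handed.
    """
    target_prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce
        nonce += 1

winning_nonce = mine(b"toy block", difficulty=4)
print(winning_nonce)
```

Each extra zero of difficulty multiplies the expected work by 16, so real mining runs the same loop trillions of times per second – hence the GPU drought.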

The situation is somewhat similar to billionaire industrialist Max Zorin’s scheme to hoard microchips in the 1980s, which was covered in this documentary.


The UK government believes it is in possession of software that will help to thwart the spread of jihadists’ online propaganda.

The generally tech-illiterate Home Secretary Amber Rudd announced an artificial intelligence (AI) tool that can apparently detect extremist material and automatically block it – and she also threatened to make it a legal requirement for firms to apply it.

London-based ASI Data Science received £600,000 from the government towards the creation of the jihadi-foiling app, which the firm says can be shaped to identify 94% of Islamic State’s online films.
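A 94% detection rate sounds impressive, but for automated blocking the false-positive rate matters just as much, because extremist films are a vanishingly small fraction of all uploads. A hedged back-of-the-envelope in Python (only the 94% figure comes from the article; the upload volume, extremist fraction and false-positive rate below are illustrative assumptions, not ASI’s numbers):

```python
# Base-rate arithmetic for an automated content classifier.
# ASSUMED figures for illustration -- only detection_rate is from the article.
detection_rate = 0.94         # from the article: 94% of IS films identified
false_positive_rate = 0.0005  # assumed: 0.05% of innocent videos flagged
uploads = 1_000_000           # assumed daily uploads on a large platform
extremist_fraction = 0.0001   # assumed: 1 in 10,000 uploads is extremist

extremist = uploads * extremist_fraction                       # ~100 videos
caught = extremist * detection_rate                            # ~94 blocked
wrongly_flagged = (uploads - extremist) * false_positive_rate  # ~500 innocent

print(f"blocked: {caught:.0f} of {extremist:.0f} extremist films")
print(f"innocent videos flagged: {wrongly_flagged:.0f}")
```

On those assumptions the tool would block 94 of 100 genuine films while also flagging roughly 500 innocent ones – the sort of ratio that makes ‘legal requirement’ talk rather fraught for platform operators.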

Speaking in Silicon Valley for some reason when I’m pretty sure she could have done it from somewhere in the UK where she governs, Mrs Rudd said: ‘It’s really nice and hot here today, now where’s that kid with my latte?’

No, she didn’t. What she really said was: ‘It’s a very convincing example of the fact that you can have the information you need to make sure this material doesn’t go online in the first place.

‘The technology is there. There are tools out there that can do exactly what we’re asking for. For smaller companies, this could be ideal. We’re not going to rule out taking legislative action if we need to do it.

‘But I remain convinced that the best way to take real action, to have the best outcomes, is to have an industry-led forum like the one we’ve got.’


Webinar Wednesdays Part 2


If you love Wednesdays AND webinars I have excellent news: Socitm’s next Webinar Wednesday is almost upon us (it’s next Wednesday, if you haven’t worked that out yet).

The first broadcast of our brand new member service – which went out last month and covered data privacy impact assessments (DPIAs), a vital component of the forthcoming General Data Protection Regulation (GDPR) – was a huge success, and can still be accessed by Socitm corporate members here:

This month’s broadcast will have a look at how to lock down the behemoth that is Office 365.

You may be surprised, or appalled, to learn that once you have moved your users to Office 365, you’ll have to review and change the default settings to meet your organisation’s security needs.

And that’s why we’ve asked Pauline Dahne, an experienced technical instructor, to walk you through the process of changing the various default settings to ensure your Office 365 environment is set up right.

Members should have been sent an invitation to next week’s show, which will go out Wednesday 21 February at 1pm. If you haven’t received an invitation it means you aren’t a member – so maybe now’s the time to become one? You can find out more about the benefits of Socitm membership here:

Check out the full details by opening the calendar invitation.

I hope you can make it!


Tech firms must clean up networks, says massive advertiser


One of the world’s largest splashers of marketing money has demanded that the big tech firms clean up their acts – or it’ll take its money away.

Unilever, which makes everything from Marmite to Domestos (I hope at separate factories), claims it’s tired of online hate, fake news and child abuse, and doesn’t wish to be associated with things like that, which is understandable.

Thusly, the likes of Facebook and Google need to take online action if they want the firm to continue wheel-barrowing money into their comfy lairs.

The Anglo-Dutch combine, which manufactures everything from 80s-throwback neck-sweetener Brut to 90s-throwback pseudo-luxury ice cream Viennetta, spent a nuts £6.8 billion on advertising last year.

In a speech today, the company’s chief marketing officer, Keith Weed, will claim that its customers ‘have trust in our brands’ – and remember, these people make I Can’t Believe It’s Not Butter.

Unless he changes his mind between now and the speech, Mr Weed will tell listeners at California’s stimulating-sounding Interactive Advertising Bureau conference: ‘As one of the largest advertisers in the world, we cannot have an environment where our consumers don’t trust what they see online.

‘Unilever will not invest in platforms or environments that do not protect our children or which create division in society, and promote anger or hate. We will prioritise investing only in responsible platforms that are committed to creating a positive impact in society.’

It’ll be interesting to see if the likes of Facebook and Google do suddenly find a way to clear their networks of child abuse, hatred and lies – because if they do it will rather demonstrate that the threat of losing money is the one thing they cannot countenance, and is far more motivating than, say, regulation, or a moral backbone.

But, anyway, back to the sheer amount of stuff that Unilever makes.

As well as I Can’t Believe It’s Not Butter, it makes Bertolli, Flora and Stork (that’s sort of all the same stuff, isn’t it?); Colman’s mustard; Vaseline; Marmite’s insipid cousin Bovril; Pot Noodle; PG Tips; and even that staple of childhood bath times, Matey bubble bath, which caused quite a nostalgic tremor when I saw the branding on its website earlier.
