
Facial Recognition, Migrants and Techism

Mal Fletcher
Posted 09 August 2022

“It has become appallingly obvious that our technology has exceeded our humanity.” So said Albert Einstein in the wake of the atom bomb’s deployment.

We still live under the shadow of the nuclear cloud, even if, in the public mind, the threat has lately taken a back seat to global warming. At least, it had until Russia invaded Ukraine. Yet there are other ways in which Einstein’s observation remains remarkably prescient.

In what appears to be a world first, the British Home Office and Ministry of Justice may soon use smartwatches to keep track of migrants convicted of crimes. The migrants would be expected to upload photos of themselves up to five times per day. In time, of course, the same approach might be applied to native Britons who have committed crimes.

I can find no record of even the Chinese government attempting this, and China is the world leader in the use of intrusive surveillance technologies, including biometrics.

The British approach, if adopted, would arguably create more problems than it solves. Whether we intend it or not, this system might be perceived beyond our shores as a sign that "global Britain" values the state’s right to surveil above individual freedoms, including privacy. 

In addition, singling out migrants, criminal or not, sends a worrying message about our attitude to migration. It seems to suggest that migrants as a class are more likely than other cohorts in society to produce criminal re-offenders.

This comes at a time when Britain, like most of Europe, needs managed migration to maintain its GDP, to help pay for its cumbersome health system and to support its ageing population.

Meanwhile, such a move might in time open the door to the use of wearable technologies as predictive tools. Ten years ago, at least one British company helped reduce crime in parts of the U.S. by analysing crime data. Its data analytics software allowed police to focus their resources where particular crimes were most likely to be committed. Similar systems are used by at least 14 police services within the UK.

There’s a big difference, though, between this and AI algorithms that supposedly predict a specific type of crime being committed by a particular individual. 

Under the latter system, arrests might be made based on assumed future intentions as opposed to actual deeds.

Given the rate at which governments, businesses, security services, urban planners and more are adopting predictive data analysis tools, this is no longer purely a sci-fi scenario. 

Concerns about the government's use of wearables - particularly when aligned with facial recognition - are part of a much wider debate about the growing incursion of surveillance into our private, social and working lives.

In 2017, a study of employees found that one-third felt wearable devices would help them be more productive, reduce stress and assist people with health issues. In the wake of Covid lockdowns and the rise of home-working, one wonders whether the same survey would return the same result today. People working from home may feel suspicious of devices which, while no doubt helping them keep track of their own progress, could also be used by employers to monitor their activity.

Even in the pre-Covid 2017 study, 67 per cent of office workers feared that wearables might create an unnecessary surveillance culture.

Surveillance is, of course, essential in the struggle to contain illegal migration. Governments are right to investigate the potential of new technologies to help in this effort, in ways that are both humane and just. However, technology is often more of a blunt instrument than a surgical scalpel. There are, for example, proven biases inherent in some artificial intelligence algorithms. The simple reason for this is that they are programmed by human beings and “learn” by crunching the data provided by human beings.

We all have cognitive biases, which our brains use to simplify the huge amount of information they process. Programmers may unconsciously build their own cognitive biases into their creations, or supply AI training data that is tainted in the same way.

What’s more, if AI is exposed to incomplete data sets - for example, with study samples that are not truly representative - it will inevitably operate on wrong assumptions. 
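To see how that plays out, consider a minimal sketch in Python. It is entirely hypothetical: the groups, rates and numbers are invented for illustration, not drawn from any real policing data. It shows how uneven data collection alone, with no difference in actual behaviour, can make one group look several times riskier to any model trained on the resulting records.

import random

random.seed(1)

TRUE_RATE = 0.10                              # both groups offend at the same true rate
DETECTION = {"group_a": 0.2, "group_b": 0.9}  # but group_b is watched far more closely

# records[group] = [recorded offences, people observed]
records = {"group_a": [0, 0], "group_b": [0, 0]}
for group in DETECTION:
    for _ in range(10_000):
        offended = random.random() < TRUE_RATE
        detected = offended and random.random() < DETECTION[group]
        records[group][0] += int(detected)
        records[group][1] += 1

# A model trained on these records will "learn" that group_b is roughly
# four times riskier, purely as an artefact of who was watched.
for group, (hits, people) in records.items():
    print(group, f"recorded offence rate: {hits / people:.3f}")

Run it and group_b’s recorded offence rate comes out at four to five times group_a’s, even though both groups offend at exactly the same underlying rate. An algorithm fed such records would draw precisely the wrong assumptions described above, and no amount of computing power would correct them.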

The proposed use of smartwatch tracking is not, of course, the Home Office’s first foray into sophisticated surveillance. It recently began deploying drones, built in Portugal and collectively worth £1 billion, to track the movements of small boats carrying migrants across the Channel.

Drones may be good at spotting small craft from afar, but how effective will their cameras be in helping us identify the actual traffickers? Unscrupulous criminals have often found ways of sidestepping technology. There are reports that boat owners often appoint a pilot from among the paying customers in order to mislead the authorities. Drones or not, photographic technology may bring us no closer to singling out the actual profiteers.

The price we pay for using smart wearables and drones as civil surveillance tools may be greater than the benefits they bring, especially when we consider technology creep. Technologies introduced and sold to the public for one narrowly defined purpose may eventually be used in ways that the public is neither aware of nor willing to sanction. 

CCTV cameras, introduced in cities like London in the 1990s, were not mounted to track parking infringements or match facial images against central databases. CCTV was ostensibly introduced to help reduce high rates of car theft. Yet it does so much more today, with very little by way of public debate on the matter.

Using drones for anything more than mapping the location and progress of migrant boats creates the possibility that the machines will later be used in domestic settings within the UK. Police attempts to use drones to identify people taking part in lawful protests have been met with howls of protest. Rightly so. The more data the authorities collect, the greater the potential for its use in ways the public might not approve of.

There is another, even more potent argument for limiting government reliance on surveillance technology. It has to do with the big-picture impacts of engaging with technology in ways that threaten our basic humanity.

The word techism, coined in 1994, describes a philosophy of industry that tries to humanise our engagement with technology. We could do with a surge of interest in techism today.

As most of us have discovered in our private lives, the more we use technology, the more we rely upon it. We like it because it can reduce our involvement in mundane tasks. It can free us up to be more creative and perhaps enjoy life more. However, a reliance on gadgets breeds a greater reliance on gadgets. Before the launch of the first iPhone in 2007, few people felt the need for an app on their phone to locate their vehicle in a car park or to unlock the car’s doors. Today, though, we rely on smartphone apps for these and a myriad of other everyday tasks.

We may or may not have saved huge amounts of time, but it’s worth stopping to think about what we’re doing to our brains. What has happened to the parts of our brain that once did mental arithmetic, for example, or navigated city streets or spell-checked our writing?

By relegating these and so many other activities to machines, have we set ourselves up for digital dementia? Will mental problems we currently associate with dementia - such as confusion and short-term memory loss - soon become part of normal cognitive function?

Heavy reliance on technology also brings with it a particular mindset, one that is not necessarily helpful. It shifts our primary focus to efficiency rather than humanity. A telling example is the research and development work going on into fully autonomous military drones. These killer robots - that’s what they are - might not only adjust their flight paths to accommodate weather patterns, as is currently the case, but also decide who should be targeted.

This may prove a wonderful money-saver, by reducing the number of operators required, but it does absolutely nothing to promote a humane approach to conflict. It also confuses the issue of human culpability should something go wrong.

In a similar vein, one British producer of facial recognition CCTV advertises its wares with the following claim: "We’re talking [here] about a completely automated system that requires very little interaction with the operator."

Efficient? Yes. Humane? Probably not so much, given that it’s managed almost completely without human intervention.

Let's bring this closer to home. Not so long ago, if you planned a car journey you would consult a paper map. You would identify the route that most closely resembled a straight line from your location to your destination. In doing so, however, you might notice certain features on either side of that line: towns and places of interest where you might break for lunch or give your children a new experience. That might lead you to adjust your route. It would now take a little longer, but potentially be more enjoyable.

SatNavs don't operate like that. They are programmed for efficiency. Unless you stipulate otherwise, they will recommend only the fastest route. They’re a wonderful tool most of the time, but you never know what might have been possible had you not used the machine.

Similarly, relying on technology might restrict the humanity with which authority figures such as police carry out their duties. It sometimes seems that the scope police have for using what we would once have called common sense is shrinking. Precious resources are sometimes wasted on arresting individuals for little more than expressing an opinion on social media, on the pretext that a single person has felt aggrieved by those opinions and reported them.

Part of the problem is that human beings online quickly succumb to what psychologists call social disinhibition. When they’re online, some people think that the normal conventions of social behaviour no longer apply, that they can say what they please without consequence. But the problem is also the result of police and law-makers focusing too much on technology, relying on it as the main prism through which to observe and evaluate human behaviour. 

Relying heavily on technology may produce greater efficiencies in the short term, but in the longer term it makes the system less human-centric and less humane. What follows is a breakdown of trust between citizens and their guardians.

The Home Office and Ministry of Justice would do well to step back and take a longer-term view before putting all their faith in technology. There are bigger issues at stake than tracking migrants convicted of crimes.

Keywords: migration, migrants, facial recognition, criminal migrants, smartwatches, Home Office, Ministry of Justice, techism




