
Data Literacy: Fuel For A Human Future

Mal Fletcher
Posted 02 March 2023

Data, it is often said, is the currency of the future.

Actually, trust is that currency. Trust in how data is collected and shared by machines and trust in our own capacity to analyse data through logical and critical thinking. 

The future will increasingly be shaped by decisions we make using online data and by the artificial intelligence (AI) machines that feed on that data. 

If the data can’t be trusted, or we have no confidence in our ability to analyse it, we’ll never hold AI to account or produce innovation that improves life on earth. 

This is a matter of concern. While most of us rely on digital data to work, rest, relate and play, we are functionally illiterate when it comes to reading, understanding and interpreting it. 

We urgently need to equip ourselves and emerging generations with enhanced skills to deal with all things digital, in a way that enhances our humanity. 

Yes, there are encouraging signs that we may be getting a little more data savvy. Meta has just decided to join Twitter in offering paid subscription services on its social media platforms. 

These Big Tech behemoths are losing money. They can no longer rely on revenue from advertising, which is declining because user numbers have dropped. People are becoming more discerning about where they upload private data about their preferences, opinions and lives. 

Yet, for many people, there is still an abiding suspicion that technology is developing quickly in ways they can’t understand, much less influence. This contributes to what Alvin Toffler called “future shock”. 

Change is inevitable and sometimes revolutionary. But when revolutionary change happens on many fronts at once, people grow anxious. 

Multiple studies over the past twenty-five years have shown how deeply everyday engagement with digital tools can negatively impact mental health. 

Yet there are still very few opportunities for people to learn data literacy skills. These would help them navigate the bewildering world of big data, which drives so much of our interaction with technology. 

Big data analysis is one of the most profound results of the digital revolution. It brings many benefits which most of us now take for granted. 

At the macro level, it helps governments predict how policies will affect economies and the impact of climate change on regions and industries. 

It informs the development of vaccines and other medical treatments. It helps urban planners design new streets and envision smart cities. 

Data also drives novel technologies like the one behind the much-lauded ChatGPT. The GPT in its name stands for generative pre-trained transformer. 

Contrary to its popular image, there is nothing really “intelligent” about this technology. It is mainly a sophisticated aggregator of material already created by human minds. 

What is impressive is that it researches quickly and produces output in fluid, almost human-like ways. There may be great benefits with this technology, in fields including education and research - and it’s data-driven all the way.

Perhaps even more significant is the fact that data fuels “machine learning” (ML). By analysing huge swathes of online information, networks of computers identify patterns and anomalies in the data and, from these, infer rules for behaviour. 

This allows them to improve their programming, teaching themselves to carry out complex tasks in more efficient ways.
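
To make that concrete, here is a minimal sketch in Python using the scikit-learn library (my choice of tool; the article names none). The program is shown labelled examples and infers its own rule for telling them apart. The tiny “bot or human” dataset is invented purely for illustration.

```python
# A minimal sketch of machine learning inferring rules from data.
# The tiny dataset below is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [messages_per_day, login_hours] - hypothetical account activity.
features = [[5, 1], [7, 2], [480, 23], [6, 1], [510, 24], [450, 22]]
labels = ["human", "human", "bot", "human", "bot", "bot"]

# The model is never told the rule; it infers one from the examples.
model = DecisionTreeClassifier()
model.fit(features, labels)

# It can now apply the inferred rule to data it has never seen.
print(model.predict([[490, 23]]))  # expected: ['bot']
print(model.predict([[4, 2]]))     # expected: ['human']
```

No one tells the model the rule; the rule is inferred from the data - which is why the quality of that data matters so much.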

For all of its benefits, though, big data sets alarm bells ringing in some quarters. The warnings sound most ominously in growing debates about “open” versus “closed” data. Each of these positions is now promoted via a date in the global calendar. 

On Saturday, March 4, the UK joins other nations in marking International Open Data Day. Its supporters advocate that many, if not all, forms of data should be accessible to everyone. 

There may be some controls over the re-use of data, they say, but our focus should be on freeing information so that people can learn from and innovate with it. Medical studies, climate reports and records of government decisions are prime targets for Open Data advocates. 

Data Privacy Day, celebrated in January, highlights the need to tighten our safeguards on personal information.

Advocates of both positions agree on at least one thing: we must all become more aware of data’s place in our lives. 

Data literacy needs to play a larger role in our schools and workplaces and in the wider society. This is especially true for emerging generations, who will face the greatest opportunities and challenges with technology. 

While there are plenty of data science courses for tertiary students, there appears to be very little on offer for young secondary or primary students.

Becoming more data literate would help people of all ages to understand the implications of the collective oceans of data we generate every day.

Most of us, I think, rely on digital tools so much that we assume the technology behind them is inherently benevolent. We hardly think about who develops the platforms we use. We don’t stop to ask what their motivations might be, or what they intend to do with the data we give them. Or, even more concerning, how that information might be used by AI and ML.

In this decade, we will rub shoulders more and more with artificial reality, machine learning and human-machine synthesis such as brain implants. Plus tools that nobody’s imagined yet. 

We must focus now on preparing ourselves for the totality of that digital experience.
 

The Unreal World of Real Data 

Data literacy programmes would help us understand the relationship between data collection and artificial intelligence (AI).

The capacity of machines to learn is a subject of interest and concern for experts and novices alike. Data analysis drives the development of AI and will continue to fire the engine of machine learning (ML) for decades to come.

Artificial intelligence, feeding on cloud-based data, helps us build predictive models for everything from natural disasters to wars and pandemics. 

That said, though we don’t yet know the full capacity of machines to learn, two things are already clear. Faulty or incomplete data can produce very troublesome AI, and one of the major sources of faulty data is the human machine.

In 2019, an MIT study showed that facial recognition algorithms developed by Amazon, Microsoft and IBM had higher rates of error when identifying people with darker skin tones. Another study by Stanford and the University of Washington in 2019 discovered that AI could be biased against people with disabilities.

AI systems can be infected with the prejudices of their human programmers as well as biases within online data, much of which also originates with human beings.

In the long run, our technologies shape us. (Consider how social media have coarsened our public discourse.) Long before that, though, we shape our technologies.

OpenAI, the home of ChatGPT and one of the world’s leading AI companies, is a case in point. 

TIME magazine recently reported that poor workers in Kenya are employed by OpenAI as “data enrichment professionals”. These people label flawed data on the internet that could pollute OpenAI’s programming through machine learning. For this intensive labour, they are paid just two dollars an hour, at most.

Until now, little has been known about their situation. That’s partly because Big Tech wants to project an illusion of AI as being a kind of miracle, a wonder that evolves without human agency. It’s a very useful illusion for companies that need to attract billions of investment dollars. But it potentially reduces their responsibility to provide proper oversight.

This story reminds us that behind the bells-and-whistles curtain of many high-tech tools, there sits a human wizard, sweating away for a pittance, while others rake in huge profits. 

Data literacy courses would teach people of all ages how to research the Big Tech platforms they use, looking at their ethics and actions.

Another weakness of AI is the fact that its algorithms are vulnerable to attacks and security breaches, which can cause them to malfunction or make inaccurate predictions. 

What’s more, some AI models are now too complex for humans to interpret unaided. This makes it difficult for us to identify potential flaws.

Fortunately, the future is not simply a product of the technologies we develop. It is at least as much a product of how we, as moral agents, choose to use those tools and equip ourselves to do so.
 

Teaching Data Literacy

Future human choices will be shaped by the innate traits of generations who carry the future on their shoulders. For this reason, it's vital to train today's Generation Alpha children and young teenagers to be data literate. 

Data literacy training often involves skills in visualisation - the analysis and creation of graphs, mind maps and infographics that illustrate links between facts. They (literally) help us see “the big picture”!
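
As a simple illustration, a minimal sketch in Python using the matplotlib library can turn a small table of numbers into a chart. The figures below are invented for the purpose.

```python
# A minimal visualisation sketch: turning raw numbers into a picture.
# The figures below are invented for illustration only.
import matplotlib.pyplot as plt

platforms = ["Platform A", "Platform B", "Platform C"]
minutes_per_day = [58, 34, 21]  # hypothetical average daily use

plt.bar(platforms, minutes_per_day)
plt.ylabel("Average minutes per day")
plt.title("Hypothetical daily time spent per platform")
plt.show()
```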

Data literacy projects often teach the basic principles and mechanics behind software development, or coding. They help us understand how the different cogs in the AI machine work - its algorithms and bots, for example.

They also encourage an appreciation for logic and the sequential thinking that underlies computer coding. 
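
A few lines of code can make that sequential logic visible. The sketch below, again in Python, applies rules one step at a time to a hypothetical task - deciding whether a message looks like spam. The rules and examples are invented.

```python
# Sequential, rule-based thinking in miniature: each step follows the last.
# Hypothetical task: decide whether a message looks like spam.
def looks_like_spam(message: str) -> bool:
    suspicious_words = ["winner", "free money", "act now"]
    # Step 1: normalise the input.
    text = message.lower()
    # Step 2: apply each rule in order.
    for word in suspicious_words:
        if word in text:
            return True
    # Step 3: no rule fired, so the message passes.
    return False

print(looks_like_spam("You are a WINNER - act now!"))  # True
print(looks_like_spam("Lunch at noon?"))               # False
```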

Data literacy helps us understand the social implications of data-driven tools.

We often assume that because digital tools form the wallpaper of our lives, most of the technology behind them is benevolent. We hardly think about who develops the tools we use. We don’t ask what their motivations are, or what they intend to do with our private data.

When I read a newspaper, I try to remember that it represents an organisation with its own internal culture, a set of preferences for thought and behaviour. Big Tech companies also operate according to internal cultures, which either enhance or pollute their output.

In their case, though, information is also more directly shaped by the biases of customers or users. 

That’s especially true with social media platforms. Taken together, they represent the world’s largest repository of human opinion.

Every day, 500 million messages are uploaded to Twitter. The average TikTok user - mainly GenZs, but ageing upwards - spends almost an hour a day on the app. 

Most social media users would benefit from learning how to fact-check information sources. 

This becomes all the more important when we consider the impact of social media on AI and ML. In drawing data from the social media well, AI is exposed to enormous reservoirs of, at best, questionable information. 

The transmission of ideas on social media is hugely impacted by the levels of emotion they engender. Social media is shaped by what I call the “hot response culture”. In expressing their views online, many people favour emotive messages over the more measured and reasoned variety. 

This is partly because emotion inspires emotion. Many studies have shown what common sense perhaps already suggested: when a social media message moves people to feel something, they are more likely to respond to it or share it. 

A few years ago, Facebook found itself in hot water when it was shown to have conducted a psychological study involving its users, without their knowledge. Facebook’s ethics were appalling, but the study’s findings were illuminating. 

They showed that social media users tend to respond most to messages that inspire strong emotions like envy or anger.

Emotion is contagious, but as any reputable psychologist will tell you, it needs to be informed by reflection, logic and reason. Anything else produces mental illness, including, at the extremes, psychosis.

It takes cool-headed detachment to distinguish between sound and flawed reasoning. Data literacy encourages people to adopt a calm and measured approach to data analysis. 
 

Let’s Get Ethical!

Data literacy also involves training in the ethical use of technology. Modern technology tends to develop faster than the codes of ethics we need to guide its use. 

A former British Prime Minister recently called upon world leaders gathered in Davos to move us one step closer to global governance. Tony Blair advocated the launch of a global digital database of the vaccinated - and, by extension, the unvaccinated.

This, he said, was necessary to tackle potential future pandemics.

He argued that global spreadsheets would be necessary for other areas, too, so we might as well get on with building them now. In effect, he called for a hugely expanded application of existing data technologies, with some troubling possible outcomes.

Global databases that record private choices carry huge ethical challenges, regarding data privacy, for example, and the protection of citizens’ data from hackers and fraudsters. There is also the threat of technology creep, where the public approves limited use of a tool only to find that it is later used in more invasive ways. 

Global digital databases raise concerns regarding human rights. In parts of China, local governments are experimenting with a social credit system that measures whether individual citizens act in government-approved ways. Those who do not comply lose privileges afforded to more compliant individuals.

Mr Blair’s idea also raises questions about global governance. Global databases require global administration, which must then answer to global lawmaking bodies. In the end, national governments are required to cede powers to global entities. The link between the citizenry and policy-makers becomes more tenuous. In the process, democracy arguably suffers.

The lack of public debate in the aftermath of Blair’s suggestion shows just how poorly we understand the power of data and its potential for misuse.

Just two weeks after his Davos announcement, Mr Blair joined with former Foreign Secretary William Hague to urge the launch of digital identities for all British citizens.

The proposal raises all the same flags as global vaccine databases. These former politicos know they can push for unprecedentedly invasive uses of technology because the public is data illiterate.

If we don’t question the ethics of technology, we will build a world in which ultra-pragmatism is the dominant technological philosophy.

We will simply accept that if a thing can be done, it should be done. 
 

Critical Thinking is Critical

John F Kennedy said, “Too often we enjoy the comfort of opinion without the discomfort of thought.”

That, unfortunately, is all too true in our time. Data literacy programmes can help us develop critical thinking skills so that we make reliable judgements based on credible information. 

Critical thinking involves questioning the relevance and reliability of what we hear, read or see. It helps us analyse different approaches, looking at how consistent each idea is within itself and how workable it is in practice. 

How helpful might that have been during the pandemic? Constant news streams about Covid-19 variants and lockdowns led to an outpouring of not-very-objective, conspiratorial thinking. 

Scientific figures that were barely understood by most of us were selectively pored over and interpreted by armchair experts, who formed combative camps of opinion, each flying the flag for this or that conspiracy.

Critical thinking will become even more important as the rate of workplace automation increases. 

As robots become more sophisticated and their cost of manufacture drops, more low-skilled jobs will be lost to the human workforce.

But AI will impact middle-wage jobs, too, especially those involving data collection and analysis. 

In some parts of Europe, speeding offences are recorded, tickets are issued and penalties for non-payment are applied without human agency. The entire system is automated.

In time, professions will suffer. In some quarters they already are, most notably in law, where chatbots already perform pro forma functions. 

Is there any reason why networked AI robots couldn’t represent people before the courts? The one sticking point might be that empathy is an important part of legal proceedings. Mitigating circumstances are often considered when it comes to deciding levels of guilt and corresponding punishments.

Empathy is built upon fellow feeling, grounded in shared human experiences. By definition, this is something a machine intelligence cannot have.

Nonetheless, AI is making its presence felt in the arts. Painting, drawing, writing and music composition are no longer purely human activities.

For human beings, all of this workplace automation will set temperatures rising. Clear heads will be vital.

Ethical questions will also arise with the synthesis of technology and human physiology, through brain implants, 3D-printed organs and advanced prosthetics. These too will require critical thinking.

Elon Musk’s Neuralink company aims to place thousands of small electrodes in the human brain, which would read signals from neurones and transmit them to remote computers. This might allow not only the analysis of health problems but new ways of regulating them. 

It might enable sight-challenged people to recover their sight and help people who’ve suffered paralysis to regain function. It might also treat conditions such as addiction, PTSD, depression and anxiety.  

Neuralink says it wants to start human trials this year. But it’s not clear how much effort has been made to identify and address ethical issues, such as the line between human and machine.

This is why, in the coming days, professional ethicists will play a role similar to that of theologians and philosophers in times past. 

They’ll puzzle over complex philosophical problems that have immediate real-world implications. We need to equip these thinkers now. And we need to offer people from all walks of life the chance to train in ethical and critical thinking.
 

Analyse, Evaluate and Synthesise

Data literacy courses will also focus on the analysis of information. Evaluation is a key part of critical thinking, seeking out flaws in reasoning and helping us understand the practical implications of ideas.

Critical thought involves synthesis skills, too - applying logic and reason to formulate healthy conclusions.

All of this will become increasingly vital in an age of networked ML. Computer networks learn not only through the application of their own programming but by referencing the coding of other machines via the cloud. 
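
One way to picture this is a toy version of what practitioners call federated averaging - my choice of technique for illustration, not one named here. Each machine learns from its own local data, then the network pools what the machines have learned. All data and names below are invented.

```python
# Toy "networked learning": each node fits a simple average from its own
# data, then the nodes pool their parameters. Invented data throughout.
def local_estimate(readings):
    return sum(readings) / len(readings)

node_data = {
    "node_a": [2.0, 2.2, 1.9],
    "node_b": [2.5, 2.4],
    "node_c": [1.8, 2.1, 2.0, 1.9],
}

# Each node learns locally...
local_models = {name: local_estimate(data) for name, data in node_data.items()}

# ...then the network combines them, weighted by how much data each node saw.
total = sum(len(d) for d in node_data.values())
shared_model = sum(local_models[n] * len(node_data[n]) for n in node_data) / total

print(local_models)
print(round(shared_model, 3))  # the pooled estimate all nodes can adopt
```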

Networked ML carries huge potential opportunities and challenges. For example, on the positive side, governments may respond faster to natural disasters if networked machines can constantly update their predictive models. 

The same machine networking, however, might cause fully autonomous robotic weapons to act on flawed data, while also passing their faulty conclusions to other weapons.

There are many questions waiting to be answered. They won’t be answered satisfactorily if we don’t train ourselves and emerging generations to be data literate.
 

Where Do We Start?

How can we prepare ourselves to deal with AI, machine learning and as yet unseen technological challenges? Where does data literacy training start, in practical terms?

It begins with our everyday experience of technology. 

For example, we need to understand the real psychological effects produced by virtual experiences. 

Much has been said about the emergence of the Metaverse. Its immersive nature offers potential benefits for medicine, mental healthcare, education and much more. 

Yet this nascent internet technology is already attracting perpetrators of virtual assault, including online gang assault. Attackers and their victims exist only in avatar form, but this does little to reduce the psycho-emotional impact on those who are targeted in this way.

We also need training in encryption - how it works, as well as its limitations. Encryption is built on the principle that plain text can be scrambled into coded text, which is only readable by people who possess an unscrambling key.

The whole process relies on the security of the encryption software - how resistant it is to hacking - and the safety of the encryption key - how difficult it is to steal.
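
For readers who want to see the principle at work, here is a minimal sketch using Python’s cryptography library (an assumption; any symmetric-encryption library would make the same point). Text is scrambled with a key, and only someone holding that key can unscramble it.

```python
# A minimal symmetric-encryption sketch: scramble text with a key,
# and only the key-holder can read it back.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the unscrambling key - keep it secret
cipher = Fernet(key)

token = cipher.encrypt(b"Meet at noon.")   # coded text, unreadable as-is
print(token)

plain = cipher.decrypt(token)              # only possible with the key
print(plain.decode())                      # "Meet at noon."
```

If the key is stolen or the software is flawed, the coded text becomes an open book - the kind of weakness the attacks below exploited.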

In 2013, hackers bypassed the encryption protecting a bitcoin platform and stole an estimated $350 million worth of the cryptocurrency. 

In 2019, a sophisticated attack on the encryption system of the Canadian Imperial Bank of Commerce allowed hackers to steal more than $65 million from customers. 

Both attacks were made possible by exploiting vulnerabilities in encryption systems. 

Encryption is not a magic bullet. Yet in the next few years, nearly every aspect of our online lives will rely on the use of encrypted data.

Data literacy training shows us how to update our encryption measures and to ensure that those who handle our data - businesses, banks, governments and Big Tech - regularly do the same.

It also provides skills to identify cybercrimes such as identity theft and other online scams.

Identity theft involves stealing someone’s personal information in order to access online accounts or commit fraud. 

ID theft often targets the young. The U.S. saw a 91 per cent increase in data crime victims aged 18-25 in the year ending early 2020, according to the Federal Trade Commission. 

Another report revealed that almost half of all American adults aged 18-25 experienced data theft or fraud in 2020. 

Research by the British banking industry found that 4.5 million fraud offences were committed in the year starting March 2020 - an increase of 25 per cent on the previous year. This was driven by a large increase in cases of advance payment fraud, using phishing scams. 

Phishing involves sending messages that seem to come from legitimate sources, such as email accounts, to trick people into sending funds or giving up information. Teaching people to identify early warning signs of phishing should be on any data literacy curriculum.
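
A course might even have students build a toy phishing checker. The sketch below, in Python, tests a message against a few classic warning signs; the heuristics, addresses and trusted-domain list are invented and far cruder than real filters.

```python
# A toy phishing check: flag classic warning signs in a message.
# The heuristics and examples are invented and deliberately simplistic.
def phishing_warning_signs(sender: str, text: str) -> list[str]:
    signs = []
    trusted_domains = ("mybank.co.uk", "gov.uk")  # hypothetical whitelist
    if not sender.endswith(trusted_domains):
        signs.append("sender domain is not one we recognise")
    if any(w in text.lower() for w in ("urgent", "verify your account", "act now")):
        signs.append("pressure or urgency in the wording")
    if "http://" in text:  # unencrypted link
        signs.append("link does not use a secure connection")
    return signs

print(phishing_warning_signs(
    "alerts@mybank-support.com",
    "URGENT: verify your account at http://mybank.co.uk.fake.example"))
```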

Parents and guardians should be trained in how to protect the safety of their children online. This will include teaching them to ask perceptive questions about how and why they perform certain tasks online. 

Childcarers need to limit the information they upload about their charges. Children will often learn more from our actions than our words. When it comes to online behaviour, what one generation tolerates, the next may well treat as normal and seek to outdo.
 

Power of the Technokings 

Data literacy training will also enlighten people about the growing power of the technokings, the heads of Big Tech companies.

Despite their talk about using technology for altruistic purposes, the actions of technokings often reveal more of a profit-making motive. Most of their profit comes from user data.

The fact that technokings value data above all else is shown in their support for a fully cashless economy. 

Our use of cashless payments has boosted data generation exponentially. We generate data about ourselves with every purchase. Data about our buying preferences, yes, but often also about where we live and our bank account details. We have little or no control over what happens to that data once we leave the store. 

Does it remain with the merchant and her/his bank? Do they use it to pitch other products to us in future? Is it passed on to third parties for similar uses?

Entering into a fully cashless economy would benefit Big Tech, banks and merchants much more than consumers. This is why some politicos and economists argue for a central bank digital currency, which would ostensibly provide the convenience of cyber-currencies without their obvious downsides.

This is naive. It isn’t simply the ownership of a currency platform but the nature of cashlessness itself that presents a problem. 

Full cashlessness would leave millions of people facing digital debt because studies show that without cash we spend more with less forethought. It would also boost the likelihood of privacy invasion, either by unscrupulous government agencies or by cyber-criminals.

Big Tech’s devotion to cashlessness is a cause for concern among security agencies. In some parts of the world, Big Tech groups are closely aligned with single-party governments. Researchers have warned social media users that TikTok is linked to the Chinese government and may be part of that country’s growing espionage apparatus.

Whether or not that’s true, we should equip people to question the intentions of Big Tech - especially where the platforms we use offer us no opt-out on data collection.

This is all part of teaching data literacy. It can be done in a proactive way without producing paranoia, which always blocks curiosity and innovation.
 

Data Science Anyone?

Students at the secondary level should be taught data science, too. While data literacy looks at how we access and analyse data, data science is more technical. It teaches skills involved in coding, statistics and machine learning. 
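
A first lesson might be as modest as describing a small dataset in code. Here is a minimal sketch in Python using the pandas library; the marks are invented for illustration.

```python
# First steps in data science: describing a small dataset in code.
# The figures are invented for illustration.
import pandas as pd

scores = pd.DataFrame({
    "student": ["Ana", "Ben", "Chloe", "Dev"],
    "maths":   [72, 65, 88, 79],
    "science": [68, 74, 91, 70],
})

print(scores[["maths", "science"]].mean())   # averages per subject
print(scores[["maths", "science"]].corr())   # do the subjects move together?
```

Even an exercise this small introduces datasets, averages and correlation - building blocks of statistical reasoning.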

Where once subjects like physics, biology and chemistry were considered core science components, curricula should now also include data science. Without understanding it, young adults will be poorly equipped to engage in a world that's increasingly impacted by AI and ML.

If there’s one thing the pandemic showed us, it is the vital importance of medical professionals, researchers and statisticians. These people are skilled in STEM-related fields - in science, technology, engineering and maths. We will need more of them going forward. 

In parts of the UK, which has a relatively good record in STEM education overall, up to 40 per cent of jobs in some STEM-related fields can’t be filled. There are not enough qualified applicants.

This does not mean that we should skimp on teaching the arts. As well as producing works that challenge our minds and enrich our souls, arts training provides skills that are indispensable to all forms of innovation. 

Our trust in data and its many uses is a key currency in today’s world. It will play an even greater role in our future. Data will shape so many more of our experiences and interactions. We will need to become data literate.





