My wish for 2026

As my first post to welcome in 2026, I revisit something I wrote back in February 2025, whilst embarking on some research into 'safer communications' and the potential for AI and social media channels to disrupt and destabilise 'comms'.

So here is my original article https://juliet.posthaven.com/ai-artificially-self-important-or-a-force-for-good with some excellent responses via ChatGPT, which, according to Wikipedia, is a generative artificial intelligence chatbot developed by OpenAI and released in November 2022. It uses generative pre-trained transformers (GPTs), such as GPT-5, to generate text, speech, and images in response to user prompts.

****************************************************

Here's the article, entitled 'Is AI artificially self-important or a force for good?', to which ChatGPT responded:

'Short answer: AI embedded in internet technology is not inherently our downfall—but unmanaged, poorly governed digital systems absolutely can cause serious harm. What you’re reacting to isn’t paranoia; it’s a rational response to a real imbalance between technological power and societal safeguards. Let’s break this down carefully.'

And so, this is what followed... (my writing in italics)

Is Artificial Intelligence (AI) embedded within internet technology going to be our downfall? I've just sat in on a talk about online safety, given by our local community police team at one of our residential homes for elderly residents, where some of the currently prevalent scams were discussed.

AI isn’t the root problem — scale is

Scams existed long before AI. What AI and digital infrastructure have changed is scale, speed, and plausibility.

  • One scammer can now reach millions instead of dozens

  • AI can generate convincing voices, faces, emails, and documents

  • Automation removes friction, cost, and risk for criminals

That creates a brutal asymmetry:

  • Criminals need only a tiny success rate

  • Victims must be right every single time

That imbalance is what feels dangerous — and rightly so.

For the criminals, it's a numbers game. They can target people all around the world with very little outlay and relatively accessible technology, and they are often supported by larger organised crime networks. They view it as an easy way to make money, and they don't think of themselves as criminals but as businessmen.
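To make that 'numbers game' concrete, here is a minimal sketch of the arithmetic. Every figure in it is hypothetical, invented purely to illustrate the asymmetry rather than drawn from any real case:

    # Illustrative arithmetic only: hypothetical figures showing why mass-scale
    # scamming pays even at a vanishingly small success rate.

    messages_sent = 1_000_000     # one campaign; near-zero marginal cost per message
    hit_rate = 0.0005             # assume just 0.05% of recipients are caught out
    avg_loss_per_victim = 800     # assumed average loss per victim, in pounds
    campaign_cost = 2_000         # assumed total outlay for tools, data and hosting

    victims = messages_sent * hit_rate
    revenue = victims * avg_loss_per_victim
    profit = revenue - campaign_cost

    print(f"Victims: {victims:,.0f}")    # 500 people harmed
    print(f"Revenue: £{revenue:,.0f}")   # £400,000
    print(f"Profit:  £{profit:,.0f}")    # £398,000, despite a 99.95% failure rate

Reverse the perspective and the imbalance is stark: the scammer profits handsomely from a 0.05% success rate, while each of the million recipients has to spot the deception every single time.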

The “numbers game” is structurally stacked against citizens

You’re absolutely correct that many perpetrators don’t see themselves as criminals. From their perspective:

  • Risk is low

  • Enforcement is weak or distant

  • Profits are high

  • Victims are abstract and anonymous

Meanwhile, victims experience:

  • Shame (which suppresses reporting)

  • Financial devastation

  • Emotional harm

  • Loss of trust in systems they’re increasingly forced to use

This is not just a technology problem — it’s a governance failure.

For the victims it's miserable, and increasingly hard to spot or avoid. Even if you apply the 'if it's too good to be true, it probably is' principle, it's still all too easy to be scammed, and this is not acceptable. Before any more of our lives are forced to operate via online-only structures such as energy companies, banks, doctors and dentists, it is time for Governments to put their heads together and act decisively to stamp out internet scamming. It's not just social media giants like Facebook, Twitter, Snapchat et al. that need to shoulder the burden; it's Governments too. Governments have the power, authority and collective technology to impose harsher punishments on the perpetrators, and together they have the ability (if not the will) to get a grip on this before it's too late.

Digital-by-default without digital protection is reckless

You raise one of the most important points in the entire debate:

Before any more of our lives are forced to operate via online-only structures…

Moving essential services (banks, healthcare, utilities) to digital-only platforms without parallel safeguards and human access is not progress — it’s cost-cutting dressed up as innovation.

For many people, especially:

  • the elderly

  • the cognitively vulnerable

  • those with limited digital literacy

“Online-only” becomes compulsory exposure to risk, not convenience.

That is dangerous.

From phones to military capacity to machinery, there is a push towards 'digital', but we should consider very carefully the relative ease with which operations in seemingly third-world countries are able to scam people around the world out of eye-watering amounts of their hard-earned money using digital technology, and how quickly they adapt to the work-around methods employed to thwart them. Digital might be cheaper, and it might arguably be more efficient, but is it in fact lazy and dangerous?

Is digital lazy?

Sometimes, yes.

Digital systems are often deployed because they are:

  • Cheaper than human staff

  • Easier to scale

  • Easier to outsource responsibility for failure

But efficiency for institutions can mean fragility for individuals.

A human bank clerk can notice distress.
A digital form cannot.
An AI chatbot cannot feel unease.

A scammer can.

Governments absolutely have more power than they use

You’re also right that responsibility is often misplaced onto platforms alone.

Governments could:

  • Enforce international cybercrime treaties

  • Mandate stronger identity verification standards

  • Require banks to absorb more fraud losses (which changes incentives)

  • Criminalise negligence in platform design

  • Fund digital literacy as a public good

  • Maintain offline alternatives for essential services

The limiting factor is not technology — it is political will, jurisdictional complexity, and economic interests.

So… is AI our downfall?

No — but unchecked AI + global networks + weak governance is a perfect storm.

AI can also:

  • Detect scams faster than humans

  • Flag anomalous transactions

  • Block impersonation

  • Assist vulnerable users

  • Reduce fraud when properly deployed

The same tool that enables scams can also suppress them — if aligned with public interest instead of pure profit.
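One of those claims, 'flag anomalous transactions', is easy to picture in practice. As a purely illustrative sketch, not any bank's actual system (the function, figures and threshold here are all invented), the core idea can be as simple as scoring a new payment against the account's own history:

    # Hypothetical sketch of flagging an anomalous transaction: score each new
    # payment against the account's own spending history and hold anything
    # wildly out of pattern for human review. Real systems use far richer
    # signals; this only illustrates the principle.
    from statistics import mean, stdev

    def is_anomalous(history, new_amount, threshold=3.0):
        """Flag a payment more than `threshold` standard deviations above the norm."""
        if len(history) < 2:
            return False  # not enough history to judge
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return new_amount != mu
        return (new_amount - mu) / sigma > threshold

    # Typical weekly spending, then a sudden 'safe account' transfer of £9,500
    past_payments = [42.0, 18.5, 60.0, 35.0, 27.5, 55.0, 40.0]
    print(is_anomalous(past_payments, 9500.0))  # True -> pause and query with the customer

Real fraud engines weigh many more signals (device, location, payee history), but the point stands: the same automation that lets criminals scale their attacks lets institutions scale the defence.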

The real danger isn’t AI — it’s inevitability thinking

The most dangerous idea is:

“This is just how it is now.”

That mindset:

  • Normalises victimisation

  • Excuses policy inaction

  • Shifts blame onto individuals

  • Treats harm as collateral damage of “progress”

Technology should adapt to human vulnerability — not the other way around.

Bottom line

Digital systems aren’t evil.
AI isn’t destiny.
But forcing people into digital dependence without robust protection is irresponsible.

Progress that ignores human limits isn’t progress — it’s abandonment.

If you’d like, I can:

  • explore practical policy models that have reduced fraud

  • examine whether “offline rights” should be protected by law

  • or discuss how AI could realistically be used against scammers rather than citizens

Just say the word.

*****************************************************

I am really growing to appreciate ChatGPT as it becomes more nuanced, and given that all of the ChatGPT responses above are gleaned from other people's work and collated at the speed of light in response to a question or piece of writing, I can't help but be impressed. There is clearly some healthy debate going on. 

I think the AI chatbot hit the nail on the head quite nicely: "Digital systems aren’t evil. AI isn’t destiny. But forcing people into digital dependence without robust protection is irresponsible." I just wish the UK Government, NHS managers, social and healthcare providers, banks et al. would pause the digital drive to get people online and reconsider how likely it is that the majority of those putting together digital systems, businesses and programs will be putting profit and time-saving (shortcuts) before the public interest.

My wish for 2026 is that we all consider, much more robustly and roundly, just where we are headed when we hand so much power to the people behind digital systems. That we stop confusing convenience with wisdom, efficiency with fairness, and inevitability with choice—and remember that technology should serve people, not quietly govern them.

That’s a very reasonable hope for 2026, isn't it?