AI and the truth

We are possibly at one of the greatest inflection points in human history.

Strong words, but hear me out …

In recent years, it’s become trendy to push one’s own truth in the absence of fact. This trend is known as wokeism, and it is being pushed in many social areas such as race, gender, sport and education. For some, their reality is not based in the real world or in fact, and they revel in this. But someone’s mental view of themselves and their perceived reality does not dictate actual reality. Most people still have a firm grip on reality.

As an example, it’s a scientific fact that you can’t change gender, no matter how you slice and dice your body. Your DNA is set in stone and can’t be changed. [And for those who’d like to infer that mRNA does so: no, it doesn’t.]

DNA, which carries our genetic code, is double-stranded and very long. mRNA, by contrast, is a single-stranded copy of a small section of the DNA, produced to carry instructions to other parts of the cell.

It’s typically a vocal minority that pushes these activist ideas, but there’s a slow turn against this – absent any other factors, wokeism is likely to die out in a few years’ time. However, there are other factors, and one very big factor in particular that can have an impact on reality.

What is reality? What is fact?

For the entire history of mankind up until now, fact has determined reality: if it quacks like a duck and walks like a duck, then it must be a duck. Or so the saying goes. But you get my meaning.

Let’s ask the question: if we can’t scientifically tell the difference between fact and fiction, then what becomes of reality? This is a potential outcome of an ever-improving AI ecosystem that is producing convincing synthetic audio, images and video, targeting two of the most important human senses: hearing and sight.

OpenAI has just unveiled its AI video generator, Sora. The results are very impressive, yet still identifiable as artificially generated. I’d say that static image generation is generally much further along, and generated audio can sometimes be indistinguishable from the real thing. But video will improve.

And if we can essentially fool our senses into believing audio and visual sources, what does this do for our perception of reality? It’s a philosophical and moral conundrum that so far has been a non-issue but one that’s going to be very interesting to see the progression of over the next few years.

So far, this has been quite a non-technical post from my side, however there are very real security and privacy threats that this technology poses to us in our daily lives.

AIs feeding off our private and public information

AI algorithms can be powerful tools for surveillance and data collection, raising concerns about privacy erosion and individual autonomy. Can we develop AI ethically while maintaining personal freedoms? Striking a balance between security and privacy in the face of ever-evolving AI capabilities is crucial.

Understand that AIs are fed huge volumes of our data to learn from – this is how they form their knowledge base. If you teach an AI that 1,000 pictures of dogs are dogs, it can infer that other, different pictures of dogs are also dogs.
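To make the “learn from examples, then infer on new ones” idea concrete, here’s a deliberately tiny sketch of supervised classification. The feature values and labels are entirely made up for illustration – real image models learn millions of features from pixels, not two hand-picked numbers – but the principle is the same: new inputs are judged by their similarity to labelled training data.

```python
import math

# Hypothetical "images" reduced to two made-up feature values
# (say, ear length and snout length). Labels come from the training set.
training_data = [
    ((8.0, 9.0), "dog"),
    ((7.5, 8.5), "dog"),
    ((3.0, 2.0), "cat"),
    ((2.5, 2.5), "cat"),
]

def classify(features):
    """Label a new example by its nearest labelled neighbour (1-NN)."""
    _, label = min(
        (math.dist(features, feats), label) for feats, label in training_data
    )
    return label

# A new, unseen example that resembles the dog training examples:
print(classify((7.0, 8.0)))  # -> "dog"
```

This is the simplest possible learner (nearest neighbour); deep networks replace the hand-made features and distance check with learned ones, but the inference step – generalizing from seen examples to unseen ones – is the same.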

Now consider that AIs are being fed sources of every type of information imaginable to learn from, including information that many would prefer to stay private.

Distorting and changing reality

The ability of AI to generate realistic images and audio is rapidly blurring the lines between the real and the artificial, impacting reality in several ways.

Misinformation and manipulation: Deepfakes, hyper-realistic AI-generated video or audio recordings of real people saying or doing things they never did, pose a real and significant threat. They can be used to spread misinformation, damage reputations, and influence elections. The ease of creating and sharing these manipulated materials necessitates improved detection methods and media literacy education.

And it’s clear that many are strongly influenced by information provided either person-to-person or via electronic means.

Immersive experiences: AI-generated visuals and sounds are finding use in creating incredibly immersive experiences in fields like gaming, virtual reality, and augmented reality. However, ethical considerations regarding addiction, the blurring of reality and fantasy, and potential negative impacts on mental health need to be considered.

Social engineering and persuasion: AI can be used to personalize messages and generate content that resonates deeply with individuals, potentially influencing their beliefs or actions. Imagine receiving highly targeted political ads tailored to your deepest fears or desires. While this personalization can be beneficial for marketing or entertainment, ethical considerations regarding manipulation and informed consent are paramount.

The future of perception: As AI-generated visuals and audio become increasingly indistinguishable from reality, our perception of the world around us could fundamentally change. Imagine doubting the authenticity of every video you see or questioning the source of every voice you hear. This necessitates developing critical thinking skills to discern reality from fabrication and fostering healthy skepticism towards AI-generated content.
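One concrete building block for the “improved detection methods” mentioned above is cryptographic provenance: a publisher records a fingerprint (hash) of the original file, and anyone can later check whether a copy they received has been altered. The sketch below uses only Python’s standard library; the byte strings and workflow are hypothetical stand-ins for real media files, and real provenance schemes (such as C2PA content credentials) additionally sign the fingerprint so it can’t itself be forged.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of the media bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical workflow: the publisher records the digest of the original.
original = b"...original video bytes..."
published_digest = fingerprint(original)

# Later, a viewer compares copies they received against the published digest.
received = b"...original video bytes..."  # unmodified copy
tampered = b"...doctored video bytes..."  # altered copy

print(fingerprint(received) == published_digest)  # True  -> bytes unchanged
print(fingerprint(tampered) == published_digest)  # False -> content altered
```

Note what this does and doesn’t solve: it proves a file matches (or doesn’t match) a known original, but it can’t tell you whether a brand-new clip was AI-generated in the first place – that still needs detection models, signed capture devices, and plain human skepticism.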


Here are a few real-life examples of fake information affecting perception and reality:

Deepfake Scandals: In 2019, a manipulated video of Nancy Pelosi went viral, making it appear as if she was slurring her speech. It wasn’t even AI-generated – the footage had simply been slowed down – yet despite being debunked, it damaged her reputation and contributed to the spread of political misinformation.

Celebrity Rumors: Countless celebrities have fallen victim to fabricated stories, doctored images and fake quotes attributed to them, leading to public scrutiny and lasting reputational harm even after the material is debunked.

Politically Motivated Disinformation: In 2016, fake news stories targeted specific demographics during the US presidential election, influencing voters’ opinions and potentially swaying the outcome. That content was largely human-written; generative AI dramatically lowers the cost of producing it at scale, which highlights the dangers of weaponizing such content for political gain.

False Accusations on Social Media: Online platforms can enable the rapid spread of false information, leading to reputational damage for individuals and businesses. Examples include accusations of misconduct based on fabricated stories or misinterpreted photos, often causing lasting harm even after being disproven.

Product Tampering Scares: Malicious actors have used fake videos and images to stage product tampering incidents, leading to product recalls and financial losses for companies.

False Reviews and Testimonials: Online platforms are vulnerable to manipulated reviews and testimonials, designed to mislead consumers and damage the reputation of businesses or individuals. Addressing fake reviews remains a challenge for many platforms.


In conclusion, AI’s influence on reality through generated images and audio can potentially provide many benefits; however, there are also significant challenges. There’s a need for open discussion, proactive measures to mitigate risks, and a commitment to ethical development and responsible use of this powerful technology.

We’ve seen some governmental entities around the world investigating frameworks for the oversight of AI. Regulation (self-imposed or otherwise) is a good start. But it will also require us humans to be more perceptive and to challenge the information we experience. Yes, it’s an unfortunate shift towards cynicism, but a necessary one.

But who knows – perhaps we’re already in The Matrix …
