The Silent Coup of Reality: Second Order Effects of AI

For the first time, 2 of the 5 core senses that shape our very reality are now obsolete.

“One day everything will be well, that is our hope. Everything's fine today, that is our illusion” - Voltaire

Over the last few years the world quietly changed in a way that will impact every single person on earth, yet it has gone largely unnoticed and undiscussed. Articles have been published about specific symptoms of what has happened, and those in the know have started to react, but the true scope of what has taken place hasn't hit public consciousness yet.

Consider this my attempt to raise awareness and help people prepare for what’s to come.

Ok… I’ll bite, what happened, Noah?

You have probably noticed that every tech company you come across suddenly started flaunting its use of AI. What is lost in this tidal wave of noise, hype, and marketing bs is that a small subset of these businesses are altering the very foundation of the human experience as we know it.

Rapid advancements in generative AI now allow the fast and inexpensive creation of audio and visual content, at scale, that is quickly becoming indistinguishable from what humans can produce, and more importantly, from what they can perceive as real. I cannot overstate the significance of this.

For all of human history, we have relied on our 5 core senses to perceive the world, the two most important arguably being sight and hearing. We use these senses to interpret the world around us, distinguish what is real from what is fake, and inform our decision making. Notably, these are the two senses that engage with digital content and underpin every single digital interaction and decision.

With much of our lives and work transitioning online after widespread adoption of the computer and the internet, our digital world has increasingly been blended with our tangible “real world”. So much so that we take much of it for granted. When was the last time you truly sat down and thought about how much you rely on your phone or computer to inform your decision making, communicate with other people, and navigate your day? Digital information and communication have become the foundation for much of what we do and what we think. So what happens when the very fabric of this reality is called into question? What happens when you can no longer ever trust what (or who) you are seeing and hearing?

For the first time, 2 of the 5 core senses that shape our very reality are now obsolete.

In the last few years we have quietly crossed the Rubicon when it comes to the accessibility and quality of outputs from generative AI. Humans can no longer rely on their own senses to discern real from fake, inform their perception of reality, and make decisions.

People love to discuss symptoms of this larger problem, but I have yet to see anyone fully explore the root problem: 

We can’t trust ANYTHING we see or hear anymore.

“But Noah, we have always had to question what we see and hear”. Not like this. This is different. 

Think about what it truly means to not be able to trust the authenticity of any digital communication. Every single content type (audio, video, text), across every distribution channel (phones, video conferencing, social media, web browsing, news, etc.) and in every context (elections, court cases, business, everyday life) is now in doubt. You can’t trust what (or who) you see. You can’t trust what (or who) you hear. Anywhere.

So what is the impact of this?

“The era of procrastination, of half-measures, of soothing and baffling expedients, of delays is coming to its close. In its place we are entering a period of consequences.” - Winston S. Churchill

Think about how often you talk on the phone, video conference, watch a video, listen to a recording, listen to music, look at a picture, text, email, or message. Now think about where else digital communication appears. Courts, government, banking, healthcare: the very fabric of our society is vulnerable. Everything is now in question.

From a personal standpoint, scam calls in which you hear a friend’s or relative’s voice are going to become commonplace, social media will be flooded with disinformation, catfishing will be supercharged on dating apps, and the list goes on.

From a security standpoint, while there have always been bad actors attempting to fool people, never before has it been possible to do so easily, at scale, across every digital medium and delivery mechanism. Generative AI has made this the new reality. Social engineering and voice phishing are going to be supercharged. Employees will get phone and video calls from “senior leaders” at their companies requesting sensitive information. Voice authentication systems and biometric security systems are now vulnerable. Businesses will be breached, money will be lost, and critical information will be leaked.

From an artistic perspective, the internet is going to be flooded with content that imitates the work of creative professionals and artists. Identifying what is real from what is fake will become increasingly difficult. Navigating rights management in the era of generative AI is going to be a significant undertaking. New rules around content licensing, IP, and how someone’s likeness can be used will need to be defined. Tools will need to be developed to enforce these new rules.

So what can we do?

Well, you’re off to a good start by reading this far. The first step is to be aware that the problem exists. You cannot prepare for something you are unaware of. 

“In all affairs it's a healthy thing now and then to hang a question mark on the things you have long taken for granted.” - Bertrand Russell

Education: Continue to read up on the latest developments in AI and think about how they could be used maliciously. Make sure your elderly and/or less tech-savvy family and friends are aware of the issue as well; malicious actors love to prey on these groups, and they are particularly susceptible to falling for well-constructed AI scams.

Critical thinking: Unfortunately, critical thinking is becoming both less common and more important. For everything you see and hear online or on a call, consider what is being said (or asked): Why is it being said? Who is saying it? Am I sure it’s really that person? Do they have an agenda? Is this the proper channel of communication for this information? Is this following the relevant processes? Don’t just accept information at face value.

Security questions: Whether you are an individual or a business, I recommend implementing a second method of authentication for voice calls. For individuals, that might mean agreeing on a passphrase or question with family and friends so you can confirm the identity of the person you are speaking with if you get a call requesting sensitive information or an urgent action. For a business, it could mean updating security protocols around what information can be exchanged over calls and how password resets are verified, and ensuring employees are trained on the latest threat types and process updates.
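For a business looking to automate the shared-passphrase idea, it can be sketched as a simple challenge-response check. To be clear, everything below (the function names, the short hex response) is a hypothetical illustration I put together to show the concept, not an established protocol or any product's API:

```python
import hashlib
import hmac
import secrets

def make_challenge() -> str:
    """Generate a one-time random challenge to read to the caller."""
    return secrets.token_hex(4)

def respond(passphrase: str, challenge: str) -> str:
    """Caller derives a short response from the shared passphrase and the challenge."""
    digest = hmac.new(passphrase.encode(), challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:8]  # truncated so it can be read aloud or typed

def verify(passphrase: str, challenge: str, response: str) -> bool:
    """Receiver recomputes the expected response; the passphrase itself is never spoken."""
    expected = respond(passphrase, challenge)
    return hmac.compare_digest(expected, response)

# Both sides know the passphrase; an AI-cloned voice does not.
challenge = make_challenge()
answer = respond("our shared secret", challenge)
assert verify("our shared secret", challenge, answer)
assert not verify("attacker guess", challenge, answer)
```

The useful property is that the passphrase is never transmitted, so a scammer who records the call learns nothing reusable. For family and friends, a simple pre-agreed question and answer achieves the same goal without any code.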

Tools to verify authenticity: The reality is that as nice as it would be to rely on people to be discerning, this technology has surpassed human perception, people are lazy, and we need tools to help us. We can no longer rely on our own senses to tell real from fake, shape our reality, and inform our decision making. To protect ourselves, we need to adopt tools that function as a trust layer for all digital content and let us trust what we see and hear again. Just as our senses interpret the physical world in front of us, we need “senses” online that help us discern real from fake and inform our decision making, no matter where we go or what we are doing.

Now a shameless plug for what I’m working on!

We are obsessed with this problem at DeepTrust. For those of you familiar with the story of Noah in the Bible (yes that’s where my name comes from), the rain has started, a flood is coming, and we are building an ark (although this time everyone will be allowed in). 

DeepTrust is building the trust layer for the internet with the goal of authenticating all content across all channels and industries. So what does this mean for you? It means regardless of what your use case is, we are going to be able to help. 

We are starting with voice. We have cutting edge AI models that can detect AI generated voices so you can trust what (and who) you hear again. Calls? No problem. Recordings? Videos? We got you. If you need a way to determine if what you’re hearing is real, we can help.

If you’re interested in learning more, I’d love to chat. Feel free to shoot me an email at noah@deeptrustai.com or book time with me here. Ready to get started? Sign up here!