Why AI Detection Matters
The importance of detecting AI
“Replicants are like any other machine – they’re either a benefit or a hazard. If they’re a benefit, it’s not my problem.”
Rick Deckard, Blade Runner, 1982
01
Summary
Why AI detection matters
There are many benefits to Artificial Intelligence. These include taking on repetitive tasks and speeding up large-scale data analysis problems such as more accurate scanning of medical images [1]. However, AI can also be a hazard. This is particularly the case with fake images, fake news, fake voices and biased decision making. There is also the danger of a feedback loop in which untruthful content generated by AI becomes part of the training data for future iterations.
How AI Aware is solving the detection problem
To help solve the problem of fake and manipulated Artificial Intelligence content, AI Aware is developing innovative content detection algorithms. These build on models that compare very large sets of AI-generated and human content, adding a wide range of variables and modalities. This makes our detection algorithms more sophisticated than conventional approaches: they include additional dimensions and a range of contextual signals from linguistics, logic and creativity.
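The general idea of comparing AI-generated and human content can be sketched in a few lines. This is a deliberately crude illustration, not AI Aware's actual algorithm: the feature choices (sentence length, vocabulary diversity) and the nearest-centroid classifier are hypothetical stand-ins for a far richer set of linguistic and contextual signals.

```python
# Toy sketch of feature-based AI-text detection (illustrative only).
# Hypothetical features: average sentence length and vocabulary diversity.
import re

def extract_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    avg_sentence_len = len(words) / max(len(sentences), 1)
    type_token_ratio = len(set(words)) / max(len(words), 1)
    return (avg_sentence_len, type_token_ratio)

def centroid(samples):
    # Mean feature vector over a reference corpus of known human or AI text.
    feats = [extract_features(t) for t in samples]
    return tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))

def classify(text, human_centroid, ai_centroid):
    # Label by squared distance to the nearer reference centroid.
    f = extract_features(text)
    d_h = sum((a - b) ** 2 for a, b in zip(f, human_centroid))
    d_a = sum((a - b) ** 2 for a, b in zip(f, ai_centroid))
    return "ai" if d_a < d_h else "human"
```

A production detector would replace these two hand-picked features with many dimensions learned from very large paired corpora, but the comparison-against-references structure is the same.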
02
Benefits
The benefits of AI
Efficiency and productivity gains
Artificial Intelligence is very good at automating repetitive tasks such as tagging data or writing summaries of large quantities of text. AI can also perform tasks at a scale beyond people. This doesn’t necessarily mean that jobs will be cut – instead, people could spend their time on higher-value activities.
Interpreting very large sets of data
Examples include Artificial Intelligence producing insights from hundreds of thousands of written survey responses rather than just tick-box options, more effective scanning of very large numbers of medical images, and analysis of large sets of genomics data.
Creativity and challenging thinking
Artificial Intelligence can act as a creative partner, offering ideas for the co-creation of writing, art and music. AI can also offer a different interpretation of results and help challenge established thinking within an organisation.
Speeding up projects
With help from Artificial Intelligence, it can be quicker to produce a proposal, sketch a plan or build a spreadsheet. This enables people to complete tasks such as sales proposals in less elapsed time.
Better quality of service through personalisation
Examples include AI-generated individualised recommendations on a streaming platform or more relevant listings displayed on a website.
03
Hazards
The threats of AI
Fake news, fake images and fake voices
With images, videos and sounds, it is important to know whether they have been generated or manipulated by Artificial Intelligence. People want to be able to judge the authenticity and reliability of media.
Transparency and trust in AI
When people aren’t told when and how AI is being used to make decisions or create recommendations, trust is eroded. It also makes it difficult to understand and challenge outcomes such as loan or hiring decisions.
Copyright and plagiarism
AI content generators are trained on large sets of data. This is used as the reference for generating new forms of media. If parts of the training set are copied into the output this is a form of plagiarism and, potentially, copyright violation.
Fake essays and course work in education
Artificial Intelligence used to generate essays and other coursework can be particularly problematic for educational institutions. An AI generated piece of coursework might not reflect a student’s abilities or show what they have learnt.
Bias in AI
The data used to train AI can contain unintended biases, such as a particular political or moral point of view, which are then reflected in the output. Attempts to give the AI additional rules to create balance raise further questions: what are the rules, who decides them, and, returning to the first point, where is the transparency for users? The rules themselves can generate additional forms of bias [2].
Privacy and data protection
As Artificial Intelligence can process large quantities of text or images, it can be used, for example, to monitor employee emails or images from cameras. This can create a Big Brother-style environment and erode trust between people and the institutions using AI systems.
The fake information feedback loop
As AI-generated fake content is released onto the internet, it becomes training material for the next iterations of AI, which then use it as a frame of reference to make decisions or generate new content, further distorting the truth.
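The compounding nature of this loop can be illustrated with a toy calculation. The parameters here (the share of new content that is AI-generated, the rate at which fresh untruths are added) are made-up assumptions for illustration, not empirical measurements:

```python
# Toy model of the fake-content feedback loop (illustrative assumptions only).
# Each "generation", a fraction of new content is AI-generated; that content
# inherits existing fakes from its training corpus and adds fresh ones.
def fake_fraction(generations, ai_share=0.3, fake_rate=0.1):
    fake = 0.0  # fraction of the corpus that is untruthful
    for _ in range(generations):
        ai_output_fake = min(1.0, fake + fake_rate)  # AI repeats fakes and adds more
        fake = (1 - ai_share) * fake + ai_share * ai_output_fake
    return fake
```

Under these assumptions the untruthful fraction only ever grows from one generation to the next, which is the core of the concern: without detection, the corpus never self-corrects.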
04
Consciousness and AI
Why AI is not conscious, thinking or self-aware
The Chinese room thought experiment
The Chinese room thought experiment was created by the philosopher John Searle [3]. Searle imagines somebody in a room following instructions for responding to Chinese characters slipped under the door. The person understands nothing of Chinese, but by following a process (just as Artificial Intelligence does), they can send appropriate strings of Chinese characters back under the door. This leads those outside to mistakenly suppose there is a Chinese speaker in the room.
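The rule-following at the heart of the thought experiment can be caricatured in a few lines of code. This is a deliberately crude illustration of symbol manipulation without understanding, not how LLMs actually work; the rulebook entries are invented examples:

```python
# Caricature of Searle's room: the "person" inside follows a rulebook that
# maps incoming symbol strings to outgoing ones, understanding none of them.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room(note_under_door):
    # Pure symbol manipulation: look up the input, return the listed output.
    # Fallback reply: "请再说一遍。" ("Please say that again.")
    return RULEBOOK.get(note_under_door, "请再说一遍。")
```

From outside, the replies look fluent; inside, there is only lookup and transcription.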
Generative AI and the illusion of thinking, consciousness and self-awareness
As in the above thought experiment, generative AI such as ChatGPT and other Large Language Models (LLMs), and image creators such as Stable Diffusion and DALL·E, create output based on sets of instructions and inputs (prompts, models and training data). They might appear to people to be somehow thinking, conscious or self-aware, but they do not understand the concepts they are generating. As in the Chinese room thought experiment, they create the appearance of understanding without having it.
Empirical testing for AI consciousness
If Artificial Intelligence moves beyond producing outputs from given inputs without understanding and, through sheer complexity, somehow develops consciousness, how would we know? We can’t simply ask the AI whether it is having internal experiences, because it could give convincing answers based on its training data even when no actual internal experience exists. An alternative empirical approach uses neuroscientific theories of consciousness: the functions known to be associated with biological consciousness are compared with the functions of AI systems. Using this approach, prominent theorists of consciousness have concluded that current AI does not have the structures needed for consciousness [4].
05
The future of AI
The future of artificial intelligence
Artificial Narrow Intelligence
One way of viewing the future of Artificial Intelligence is by looking at its range of capabilities. Current Artificial Intelligence systems such as self-driving cars, chatbots/assistants (such as Claude), and LLMs and generative AI (ChatGPT, DALL·E and Stable Diffusion, amongst others) are classified as having Artificial Narrow Intelligence (ANI). Artificial Narrow Intelligence refers to systems that do nothing more than what they are programmed to do. They have a limited range of competencies, even if they use machine learning and deep learning to teach themselves.
Artificial General Intelligence
The next stage of AI, in the near future, will be Artificial General Intelligence (AGI). Artificial General Intelligence is the ability to perform at least as well as the average person on a broad range of cognitive tasks [5]. There is debate about which tasks should count [6]. Some experts, such as Bubeck et al. (2023), believe that “sparks” of AGI are already showing in advanced generative models such as GPT-4 [7], arguing that these models display emergent abilities.
Emergent AI abilities
Emergent behaviour in AI refers to skills that appear in complex AI systems without being pre-trained or programmed into the models. Instead, they emerge unpredictably, particularly in very large-scale systems. One argument used to support the case that AI is showing emergent abilities is phase transition: up to a certain point a model’s performance improves steadily, but after a threshold of complexity there is a sudden leap in abilities. Examples of possible emergent AI abilities are inferring a character’s mental state when this isn’t in the training data, generating humour with words that haven’t previously existed, and translating languages with only small amounts of reference data. However, there is debate amongst researchers, with some claiming that what looks like emergent AI ability is an illusion caused by flawed statistical analyses of models [8].
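The statistical critique can be illustrated with a toy calculation. The numbers below are hypothetical per-token accuracies, not measurements from any real model: if the underlying skill improves smoothly with scale, an all-or-nothing metric such as exact match on a long answer still looks like a sudden jump, because the whole answer is only scored correct when every token is.

```python
# Toy illustration of the "emergence mirage" argument: a smooth per-token
# accuracy p versus an all-or-nothing exact-match score on a 30-token answer.
answer_len = 30
per_token_accuracies = [0.80, 0.90, 0.95, 0.99]  # hypothetical, as models scale

for p in per_token_accuracies:
    # Exact match requires all tokens correct, so the score is p ** answer_len:
    # it stays near zero for a long time, then appears to "leap".
    print(f"per-token {p:.2f} -> exact-match {p ** answer_len:.4f}")
```

Smooth gains in per-token accuracy (0.80 → 0.99) translate into an exact-match score that sits near zero and then climbs steeply, which can masquerade as a phase transition even though nothing discontinuous happened underneath.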
Artificial Super Intelligence & Strong AI
Artificial Super Intelligence (ASI) is a theoretical point in the future where AI matches the complex, multi-disciplinary intelligence of human beings and then far exceeds it because of near-limitless memory, data processing, analysis and decision-making capabilities. It is often connected to the idea of Strong AI, whereby AI would need to be self-aware and conscious in order to have human-like abilities to solve problems, pose new questions, learn, and plan for the future.
1. NHS AI test spots tiny cancers missed by doctors
2. Shedding light on AI bias with real world examples
3. Talks at Google – Consciousness in Artificial Intelligence – John Searle
4. Consciousness in Artificial Intelligence: Insights from the Science of Consciousness
5. How Sam Altman, CEO of OpenAI, defines AGI
6. Levels of AGI: Operationalizing Progress on the Path to AGI
7. Sparks of Artificial General Intelligence: Early experiments with GPT-4
8. Are Emergent Abilities of Large Language Models a Mirage?