What is AI? Cutting through the hype - A clear, jargon-free explanation of AI


What is AI and why is everyone talking about it?

AI is an incredibly broad topic. There are a huge number of ways to categorise its different types and applications - some of which you will likely have heard of, some of which you won't.

For the most part it's been something you only hear about in films and TV shows - often with negative consequences or morally ambiguous storylines. But thanks to recent breakthroughs in the field, in particular with generative AI models, it's now becoming a mainstream term and a technology in the hands of the general public. So what is it, and why should people care?

To start things off in a rather bland fashion, a formal description of AI might read something like this:

AI refers to the development of systems that can perform tasks that normally require human intelligence. That could mean recognising speech, understanding language, identifying patterns, making predictions, or even driving a car. At its core, AI is about creating machines that can "think" or "learn" in ways that mimic — or in some cases surpass — human abilities. That doesn’t mean AI has consciousness or emotions, but it does mean that it can process information, adapt to new inputs, and improve over time with experience.

ChatGPT, 2025

AI is not a new concept - it's been around a long time and has taken many different guises, ranging from machine learning (using algorithms to find and predict patterns in data) and natural language processing (converting speech to text and vice versa) to reinforcement learning (training AIs through trial and error to master specific tasks, such as playing chess).

Even if you aren't aware of where it's used or how it works, you probably see its effects in day-to-day life more than you know. Ever looked at your Netflix recommendations? Facial recognition to unlock your phone? Alexa or Google Assistant at home? All AI, to various extents.

But these have been around a while, I hear you say - so why is it suddenly in the news, in my workplace and even being discussed by politicians? The main reason is a new type of AI that has emerged in the last few years - generative AI. For the purposes of this post I will focus on generative AI, and in particular its abilities around text - a fairly narrow scope, but a good entry point for anyone trying to understand what the fuss is all about.

What is Generative AI?

Generative AI is a type of artificial intelligence that can create new content — from text and images to music, video, and even code. Unlike more traditional AI models that focus on analysing data or making predictions, generative AI can produce something entirely new based on the patterns it has learned.

It's one of the first types of AI to become widely accessible to the general public. You don't need to understand AI or coding, or have any sort of engineering background, to make use of this new technology. You can simply load it in your browser at https://chatgpt.com/ and have a go.

Most people are used to computers requiring specific instructions: you click exactly here to close a window, you must say exactly the right words to Alexa to turn the lights off or play a song. Generative AI presents a very different way of interfacing with technology - free-form text. It can take vague or misspelled instructions and do a very impressive job of understanding meaning and intent. It can also go one step further, in that it can collaborate with you. It will remember what you asked previously, and it can rework your (or its own) content into more refined output. If you make a mistake in an Excel formula, Excel will tell you the formula is wrong but not why - generative AI can help you spot logic flaws and fix the underlying issue, something we haven't seen before.

This ease of access has generated a huge amount of interest, hype and hyperbole around its capabilities and what it means for the future of work. Opinion varies from "it's a fad, it's useless" to "you won't have a job in six months". This can lead to confusion, so I'll try to spell out where we currently are.

Who are the key players?

OpenAI

OpenAI are probably the best-known company in the AI space at the moment. Their flagship product, ChatGPT, is almost synonymous with generative AI. They were the first to market and the first to put a consumer-friendly interface in front of their models. Their models are proprietary (closed source).

Meta

Meta (formerly Facebook) has taken a more open-source approach to AI. Their models are widely used by developers and researchers, especially those looking to run AI models locally or build custom solutions. They have now introduced an interface for queries, but I think it's safe to say they are firmly in the technical camp and don't command the same share of the news cycle as OpenAI.

Anthropic

Started by former OpenAI employees, Anthropic positions itself as a safety-first AI company. Their Claude models are designed to be more "helpful, honest, and harmless". Their CEO, Dario Amodei, is often one of the more pragmatic people to listen to in the AI space if you want a reasoned opinion on the news and breakthroughs (editor's opinion only!).

Newer arrivals

DeepSeek

DeepSeek is a China-based organisation that recently released a model they claim was trained on a much smaller budget (just a few million dollars) than the existing models.


X

Elon Musk's company recently released Grok 3, which made a few headlines due to its "unhinged" mode and its reluctance to censor content.

What is generative AI good at?

Content creation

It's in the name - generative AI generates content. It can create almost anything you can think of: blog posts, emails, stories, lesson plans, research - if it's text-based, it will do it. With the right prompting you can have output of any length, in any style, in any language.

It can also rework existing content - summarising, paraphrasing, rewriting in a different style, shortening, lengthening. Tasks that would normally take considerable effort and brainpower can now be completed in seconds. This is not to say the content is always perfect (if you have ever read articles written by AI, you start to recognise common styles), but using AI as a starting point or a helper to refine text is a big productivity increase.

Brainstorming

AI can operate as a great research and study companion, especially in the early stages of projects or ideas, helping to fuel creative thought. It will generate endless lists of ideas and suggestions which you can use as a basis for your own work. It's extremely useful for expanding your thinking - it can show you things you might never have thought of on your own. You simply act as the arbiter of quality, and it will help you through that initial phase when you are working through a mass of half-baked ideas.

Searching content

Generative AI is fantastic at searching across large data sets and returning results. It can search with the effectiveness of Google, but instead of having to find specific keywords, you can use plain English to describe what you mean. AI then returns the answer(s) you are looking for, without you having to navigate various websites and decide which has relevant content.

A fantastic productivity gain we have implemented with our clients at Tribus is using generative AI with their own data sources. We create a business knowledge base - one 'source of truth' that is maintained and accurate - add a conversational AI model in front of the documents, and you have a very powerful tool for sourcing information.
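For the technically curious, the core idea is easy to sketch. This toy Python example (the document store and word-overlap scoring are purely illustrative - real systems use embeddings and a vector database) shows the retrieval step: pick the knowledge-base entry most relevant to a plain-English question, which would then be handed to a generative model as context.

```python
# Toy sketch of the retrieval step in a "knowledge base + AI" setup.
# A real deployment would use embeddings and a vector store; here we
# simply score each document by how many question words it contains.

KNOWLEDGE_BASE = {
    "holidays": "Staff are entitled to 25 days of annual leave per year.",
    "expenses": "Expense claims must be submitted within 30 days with receipts.",
    "security": "Laptops must be encrypted and locked when unattended.",
}

def retrieve(question: str) -> str:
    """Return the document sharing the most words with the question."""
    words = set(question.lower().split())
    return max(KNOWLEDGE_BASE.values(),
               key=lambda doc: len(words & set(doc.lower().split())))

# The retrieved text becomes the model's 'source of truth', e.g. a prompt
# like: "Answer using only this document: <retrieved text>"
print(retrieve("How many days of annual leave do staff get?"))
```

The point of the pattern is that the model answers from a maintained, accurate document rather than from whatever it absorbed during training, which also helps with the hallucination problem discussed later.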

Translation

When we talk about translation, most people assume languages - which AI is great at - but it's also good at other kinds of translation. It can convert code from one programming language to another, and it can translate hard-to-read text into something more digestible. Have you ever read an article or paragraph online, or in a book, and thought you just don't understand it? Try pasting it into generative AI and asking it to simplify or break it down - you might be surprised just how good it is at this. It can also do the opposite - take basic theories and ideas and flesh them out into more advanced concepts.

Simulation / Role play

Have a difficult client meeting coming up? Worried about a sales call with a new company? There is no need to rehearse in front of a mirror - you can ask AI to role-play and help you prepare. With a bit of context for the meeting, AI is incredibly effective at role-playing a difficult client call, a particularly challenging sales pitch or anything else you need. You can have it throw softball questions or unexpected questions, or really push you on your presentation. With the ChatGPT app you can even just enable voice and away you go - no need for a text conversation.

What is it poor at?

Hallucinations

Hallucinations are arguably the number one issue holding back generative AI adoption. All of the current models have varying degrees of, quite simply, making stuff up. The models are incredibly bad at saying "I don't know" and will often confidently say the completely wrong thing instead. Couple this with a reluctance (or inability) to cite sources, and people rightly worry about using generative AI anywhere even a small margin of error is a problem. There are ways to combat this - good prompting and, more importantly, procedures like 'human in the loop' (never letting AI do anything fully automatically) can help - but it's not foolproof.

Inconsistency

Because generative AI is essentially a large store of relationships between words (a vast oversimplification!), the phrasing of questions and prompts can massively impact the response. Phrasing a question positively or negatively will change the result that comes back - so will using a different set of words. You can ask similar questions and get back contradictions in the same session. This is because these models are not designed to be logically or factually consistent - they are designed to generate text based on input. Unless a model is explicitly told to cross-check or reflect, it won't - and even then, it will struggle to stick to a definitive response.

Common sense or judgement heavy tasks

As smart as generative AI may seem, it has no real intelligence - when asked, its opinions and conclusions are simply assembled from the collective opinion of all the data it has scraped. Asking an AI for an opinion or a judgement call is roughly equivalent to asking "what is the most likely response to this question?". It is an engine for prediction, not understanding. It is a fantastic sounding board to help you make decisions, but it is not the tool to make the decision.

Broader AI Issues

Data Privacy / Governance

Data governance is, and will likely always be, a challenge for wider AI adoption. Data governance and privacy concern who has access to data, where it is transmitted and stored, and ultimately who controls its lifespan. Think GDPR, but across the whole of a business - internal and external. ChatGPT might seem like a great tool for querying and growing your knowledge, but it comes at a cost: your information is processed and stored in a different country by a different organisation. Not an issue when asking for fitness and running tips or how to slow-cook your next meal - more challenging when you are pasting in confidential business information or potentially sensitive financial data. Data is valuable to businesses, and controlling who can see it, and when, is important. Using open-source models and self-hosted instances (for example via AWS Bedrock) can mitigate some of these problems, but this is often a technical barrier that most are not equipped to overcome without external professional support.

Copyright law

AI has a prickly relationship with copyright law. It's no secret that AI has been trained on copyrighted material - books, journals, articles - regularly without the authors' consent. Several music artists and authors have started legal action and are launching campaigns to combat this. The authors recently suffered a setback when they were unable to prove definitively that AI models will output content similar enough to their work. Whether it's fair to use their work to train AI is a very grey area. If content is available publicly or purchased legally, is that any different from a person or company learning from it? How many artists or authors are influenced by others? Is it the same for an AI to use other content as a source of inspiration? Is it only theft if the AI reproduces their content verbatim?

What do businesses need to be doing?

Generative AI is a groundbreaking technology that is improving at a staggering pace. If the experts are to be believed, some semblance of AGI is just around the corner. The levels of hype and promotion are reaching astronomical heights. It's time to embrace it - or get left behind and watch your business crumble. Or is it?

If you aren't using AI right now, you are probably getting along just fine and carrying on as normal - but the question is - will you be able to say the same in one year / three years / five years? Will your staff be as productive and innovative as the company next door who did start adopting AI?

If someone approaches you claiming that AI will transform your business, you have every right to be suspicious - with innovation comes the snake oil. But likewise, you can't bury your head in the sand: you need to do your research, see what's possible and start looking for current and future opportunities for your business.

Where can you start?

Look at where you can use AI to be more productive

Educate yourself and your team on what AI is good and bad at. Identify any laborious tasks in your business that take up time and money, and assess the suitability of using AI to automate or aid these processes. Which of these can be done by AI with the lowest risk? Test the waters and see if it yields results. Look at what other businesses are doing and try to find discrete tasks that could benefit. You don't have to rip up existing procedures and start from the ground up - it's about small incremental gains.

Human In the Loop

Make sure any AI process you implement involves manual approval - check and check again until you are confident that it's doing exactly what it should (and then keep checking!). You can still save time by writing overdue-invoice emails with AI while having an actual person hit the send button. It doesn't need to be fully automated.
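As a sketch of the principle for the more technical reader - the drafting function below is a stand-in for a real AI call, and the names are purely illustrative - the key design point is that nothing is sent unless a person explicitly approves it, and the safe default is to hold:

```python
# Toy sketch of a human-in-the-loop workflow: the AI drafts, a person approves.
# draft_reminder stands in for a real generative AI call.

def draft_reminder(client: str, amount: str) -> str:
    """Produce a draft overdue-invoice email (AI stand-in)."""
    return (f"Dear {client},\n\nOur records show an invoice for {amount} "
            f"is overdue. Please arrange payment at your earliest convenience.")

def send_if_approved(draft: str, approved: bool) -> str:
    """Nothing leaves the building without explicit human sign-off."""
    if not approved:
        return "Draft held for review - not sent."
    return "Sent: " + draft.splitlines()[0]

draft = draft_reminder("Acme Ltd", "£1,200")
print(send_if_approved(draft, approved=False))  # defaults to holding the draft
```

The gate is deliberately boring: the time saving comes from the drafting step, while the approval step keeps a human accountable for what actually goes out.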

Provide employee guidelines

You might be sitting here thinking nobody in your business uses AI - you are probably wrong. I would bet that at least a portion of your staff are using ChatGPT, and a further portion of them are using it with absolutely no guidance or knowledge of how the data is used.

You need guidelines on appropriate use. Case in point: unless users opt out, ChatGPT will train its models on user inputs. That confidential or financial information an employee has pasted into ChatGPT might not stay confidential.

You don't need to write War and Peace, but some basic AI awareness will go a long way. How about taking it a step further and educating staff on good prompt engineering, so that AI is not only used responsibly, but used effectively?

Appoint an AI champion or team

I get it - people have day jobs, and this is something extra that needs to be done. AI guidelines and implementation will go straight to the back of the to-do list and never get done. If you don't appoint someone to take responsibility for these tasks, they won't happen. It's important to give someone the time and space to stay on top of this technology and champion its use in the business - it's the only way you will make progress.

Mark Spilsbury
