Capabilities and limitations of GenAI

When GenAI can be helpful and when it is likely to give less successful results.

 

Video: Capabilities and limitations of GenAI (https://mediacentral.ucl.ac.uk/Player/eaGJjEGC)

GenAI can produce diverse and seemingly original outputs in multiple languages and formats. By providing context and detail, you can sometimes generate results that seem astonishingly realistic and informed. It can feel as though the model truly understands both what you are asking it to do and the outputs it generates, just as a person would.

However, this is not the case. These models do not have the capacity to understand and cannot differentiate between truth and fiction or right and wrong. As a result, they can generate plausible outputs that are untrue or inaccurate – these are called hallucinations. They can also produce outputs that are biased or offensive, violate the copyright of others or contradict themselves mid-conversation.  

Because of this, some people begin by using GenAI cautiously, reviewing its outputs thoroughly in the early stages. Over time, they may develop a level of trust in its outputs for particular tasks. Some people report that GenAI becomes a partner in their content creation, helping with tasks like brainstorming ideas, summarising text or improving writing in a particular style.

If you are representing GenAI outputs as fact, be sure to independently verify that they are accurate. This includes any references it provides.

 

Things to know 

You may hear the term ‘hallucination’ used to describe instances where an AI system produces something misleading or false.

Things to try 

Think of a difficult concept in a subject area that you know very well and ask Copilot to explain it to you. You can refine the output by specifying a reading age, asking it to use metaphors or examples, or providing a specific context. Then reflect on the accuracy of the output.
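For instance, a hypothetical starting prompt (the concept and details are placeholders for your own) might be: ‘Explain the concept of statistical significance to a reader with a reading age of 14, using a sports metaphor and one everyday example.’ Because you know the subject well, you can judge where the explanation is accurate and where it drifts into plausible-sounding error.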