Mal McCallion

AIs have Favourite Numbers Too


Quick number quiz - what do 42, 47 and 72 have in common?


Turns out that they're the answers you'll get most frequently when you ask an AI Large Language Model to give you a random number between 1 and 100.


As you can see from the distribution in the graph above, there are many numbers that never get chosen. Yet ChatGPT-3.5 Turbo goes for 47 far more often than it would if it were producing something genuinely random.


Similarly, Google Gemini loves 72 - and Claude 3 Sonnet can't get past 42, which, you may recall, is the answer to 'life, the universe and everything' in The Hitchhiker's Guide to the Galaxy by Douglas Adams.
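
If you want to see the bias for yourself, a minimal sketch along these lines will do it. This assumes the official openai Python package and the 'gpt-3.5-turbo' model name - swap in whichever provider and model you actually have access to:

```python
from collections import Counter

from openai import OpenAI  # assumes the official openai Python SDK (v1+)

client = OpenAI()  # expects OPENAI_API_KEY in the environment

counts = Counter()
for _ in range(100):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name - use whatever you have access to
        messages=[{
            "role": "user",
            "content": "Give me a random number between 1 and 100. Reply with the number only.",
        }],
    )
    reply = response.choices[0].message.content.strip()
    if reply.isdigit():
        counts[int(reply)] += 1

# Tally how often each number comes back - a genuinely random source
# would spread these roughly evenly across 1 to 100.
for number, count in counts.most_common(10):
    print(f"{number}: {count}")
```

Run it enough times and the favourites become obvious from the counts alone.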


This sounds basically stupid, until you really think about what LLMs are - prediction engines, at heart. And it's actually a really useful way to remember how they can help us, even at this early stage of their development.


LLMs use the entire corpus of their training data to try to guess the correct next 'token' - that token could be a word, a number or part of a phrase. Simply because of the volume of information they have at their disposal - and the zillions spent on their training - they are pretty good at getting the right answer to the question they're being asked.
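
To make that concrete, here's a toy sketch of what 'predicting the next token' looks like. The candidate numbers and their scores are invented purely for illustration, but the mechanism - turn scores into probabilities, then sample - is the standard one:

```python
import math
import random

# Invented scores ("logits") a model might assign to candidate number tokens
# after the prompt "Give me a random number between 1 and 100:".
logits = {"42": 3.1, "47": 2.9, "72": 2.7, "33": 0.4, "44": 0.3, "66": 0.2}

# Softmax turns the scores into a probability distribution.
total = sum(math.exp(score) for score in logits.values())
probs = {token: math.exp(score) / total for token, score in logits.items()}

# Sampling from that distribution heavily favours the high-scoring tokens,
# which is why the "random" answers cluster on a handful of numbers.
tokens, weights = zip(*probs.items())
print({token: round(p, 3) for token, p in probs.items()})
print(random.choices(tokens, weights=weights, k=20))
```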


Essentially, this 'random' sequence simply reflects which numbers each model has found, from its training data, that humans are most likely to pick when asked for a number between 1 and 100. It seems we're the ones who struggle with randomness - and we've passed that on to our AIs.


So when you think that an AI has started to adopt human-like capabilities, just remember that - at present - it has simply been trained to predict the next words, from an enormous amount of data on what those next words ought to be.


These models are still a long way from human intelligence - they can't even recognise that 33, 44 or 66 are perfectly good random choices between 1 and 100.
