Generative AI models don’t process text in the same way that humans do, and understanding their “token”-based internal environment might help explain some of their quirky behavior and stubborn limitations.
Most models, from tiny on-device models like Gemma to OpenAI’s industry-leading GPT-4o, are built on an architecture called a Transformer. Because of the way Transformers create associations between text and other kinds of data, they cannot take in and output raw text — at least, not without a massive amount of computation.
Therefore, for practical and technical reasons, today’s Transformer models work with text split into small, bite-sized pieces called tokens. This process is called tokenization.
Tokens can be words like “fantastic”, or syllables like “fan”, “tas”, or “tic”, or, depending on the tokenizer, individual letters within a word (e.g. “f”, “a”, “n”, “t”, “a”, “s”, “t”, “i”, “c”).
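To make this concrete, here is a minimal sketch using OpenAI's open source tiktoken library (an illustrative choice, not one named in this piece); the exact pieces a word breaks into depend on which tokenizer you use:

```python
# Illustrative only: splits vary by tokenizer and vocabulary.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the encoding used by GPT-4-class models
ids = enc.encode("fantastic")
pieces = [enc.decode_single_token_bytes(i).decode("utf-8", errors="replace") for i in ids]
print(ids)     # the integer token IDs the model actually sees
print(pieces)  # the text fragments those IDs stand for
```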
This method allows the Transformer to capture more information (in a semantic sense) before hitting an upper limit called the context window, but tokenization can also introduce bias.
Some tokens carry odd spacing that can trip up a transformer. For example, a tokenizer might encode “once upon a time” as “once”, “upon”, “a”, “time”, but encode “once upon a ” (with a trailing space) as “once”, “upon”, “a”, “ ”. Depending on how you prompt the model (with “once upon a” or “once upon a ”), you might get completely different results, because the model, unlike a human, doesn’t understand that the two mean the same thing.
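A rough way to see this for yourself, again with tiktoken standing in for whatever tokenizer a given model uses:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
print(enc.encode("once upon a"))    # no trailing space
print(enc.encode("once upon a "))   # trailing space
# The two prompts mean the same thing to a person, but if they produce different
# token sequences, the model may complete them differently.
```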
Tokenizers also treat uppercase and lowercase letters differently. To a model, “Hello” is not necessarily the same as “HELLO”. “Hello” is usually a single token (depending on the tokenizer), while “HELLO” can be as many as three (for instance “HE”, “LL”, “O”). This is one reason many transformer models fail capitalization tests.
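The same kind of check shows the case-sensitivity issue (the counts here depend on the tokenizer; the three-token split above is just one possibility):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for word in ("Hello", "HELLO", "hello"):
    ids = enc.encode(word)
    print(word, len(ids), [enc.decode_single_token_bytes(i) for i in ids])
# Uppercase and lowercase spellings of the same word can land on different
# tokens, and often a different number of them.
```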
“It’s a bit hard to avoid the question of what exactly a ‘word’ should be for a language model, and even if human experts could agree on a perfect token vocabulary, the model will probably find it useful to ‘chunk’ things further,” Sheridan Feucht, a PhD student at Northeastern University researching interpretability of large-scale language models, told TechCrunch. “My guess is that because of all this ambiguity, there will never be a perfect tokenizer.”
This ambiguity causes even more problems in languages other than English.
Many tokenization methods assume that a space marks the start of a new word, because they were designed with English in mind. But not all languages use spaces to separate words: Chinese and Japanese don’t, nor do Thai, Lao, or Khmer.
A 2023 Oxford University study found that, because of differences in how non-English languages are tokenized, a task phrased in a language other than English can take twice as long to complete as the same task phrased in English. The same study, and another, also found that because many AI vendors charge per token, users of less “token-efficient” languages may see worse model performance yet pay more.
Tokenizers often treat each character in logographic writing systems (where printed symbols represent words without regard to pronunciation, such as Chinese) as a separate token, resulting in high token counts. Similarly, tokenizers that process agglutinative languages (where words are made up of small meaningful word elements called morphemes, such as Turkish) tend to turn each morpheme into a token, resulting in high overall token counts. (The Thai word for “hello,” สวัสดี, is six tokens.)
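Counting tokens for a few greetings illustrates the gap (the exact counts, including the six-token figure for สวัสดี, depend on which tokenizer is used):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for text in ("hello", "สวัสดี", "你好", "merhaba"):
    print(text, len(enc.encode(text)))
# Short greetings in Thai or Chinese can cost several times as many tokens as
# their English equivalents under a tokenizer trained mostly on English text.
```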
In 2023, Yennie Jun, an AI researcher at Google DeepMind, conducted an analysis comparing tokenization across languages and its downstream effects. Using a dataset of parallel texts translated into 52 languages, Jun showed that some languages need up to 10 times as many tokens to capture the same meaning expressed in English.
Beyond language inequality, tokenization may also explain why today’s models are poorly suited to math.
Numbers aren’t tokenized consistently. Because the tokenizer doesn’t really understand what numbers are, it might treat “380” as a single token but represent “381” as a pair (“38” and “1”). That effectively destroys the relationships between digits and the results of equations and formulas, leaving the transformer confused. Recent papers have shown that models struggle with repetitive numerical patterns and with context, especially temporal data (see: GPT-4 thinks 7,735 is greater than 7,926).
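You can see the inconsistency by checking how a tokenizer splits nearby numbers (the splits below are tokenizer-specific; “380” and “381” are simply the examples above):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for number in ("380", "381", "7735", "7926"):
    ids = enc.encode(number)
    print(number, [enc.decode_single_token_bytes(i).decode() for i in ids])
# Adjacent numbers can be split into different numbers of pieces, so the model
# never sees digits as a uniform, place-value representation.
```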
This is also why models aren’t very good at solving anagram problems or reversing words.
It turns out that a lot of odd behaviors and issues with LLMs actually trace back to tokenization. We’ll explore some of these issues, explain why tokenization is at fault, and why, ideally, we’d find a way to remove this stage entirely. pic.twitter.com/5haV7FvbBx
—Andrej Karpathy (@karpathy) February 20, 2024
So tokenization clearly poses a challenge for generative AI. Can it be solved?
Perhaps.
Feucht points to “byte-level” state-space models like MambaByte, which can ingest far more data than Transformers without a performance penalty by doing away with tokenization entirely. Working directly with the raw bytes that represent text and other data, MambaByte is competitive with some Transformer models on language-analysis tasks while better handling “noise” like swapped characters, spacing, and capitalization.
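For a sense of what a byte-level model consumes instead, here is a sketch (not MambaByte’s actual code) of text reduced to raw UTF-8 bytes:

```python
# Byte-level models skip the tokenizer: every string becomes a sequence of
# values 0-255, so spacing, casing, and swapped characters are just different
# bytes rather than different (and unevenly sized) tokens.
text = "Hello, world"
print(list(text.encode("utf-8")))
# [72, 101, 108, 108, 111, 44, 32, 119, 111, 114, 108, 100]
```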
But models like MambaByte are still in the early research stages.
“It’s probably best to let models look at characters directly without imposing tokenization, but right now that’s computationally infeasible for Transformers,” Feucht said. “For Transformer models in particular, computation scales quadratically with sequence length, so we really want to use short text representations.”
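A back-of-the-envelope sketch of the scaling Feucht describes (the bytes-per-token ratio below is an assumption for illustration, not a measurement):

```python
# Self-attention cost grows roughly with the square of sequence length, so if
# the same passage is ~4x longer as bytes than as tokens, attention work grows ~16x.
tokens_len = 1_000
bytes_len = 4 * tokens_len                   # assumed ratio, for illustration
print((bytes_len ** 2) / (tokens_len ** 2))  # -> 16.0
```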
Barring a breakthrough in tokenization, new model architectures will likely be key.