Large language models (LLMs) have captured the minds and hearts of AI practitioners, enterprises, and even individuals. The current trend is to scale these models for much greater performance at more or less stable cost. It turns out that an LLM's performance correlates positively with its size. This scaling, however, demands immense computational resources; as you might guess, the bigger the model, the higher the costs.
This is one of the industry's biggest challenges. While Mixture of Experts (MoE) has recently been hyped as a way to improve transformer models, there is a newer approach that many ML practitioners find more promising – Mixture of Tokens. Drawbacks that MoEs exhibited across different models created a need for other methods. In this blog post, we'll touch upon these techniques and examine how MoTs scale large language models while keeping training and inference costs stable.
Mixture of Experts
Mixture of Experts gained fame for dramatically improving transformers' scalability. To understand this, let's first learn who those 'experts' are. In MoEs, experts are sub-networks specialized for one or several tasks. In a standard transformer, every token is processed by the same feed-forward layer. MoEs instead direct each token to a pool of experts via a small network called a controller (or router). The controller ensures that each token is processed by only a small subset of experts. Several refinements of this technique were later introduced, the main ones being switch routing and expert choice.
The switch transformer sends each token to exactly one expert – the one with the highest score produced by the controller. This makes the compute cost nearly independent of the parameter count: a 1.6T-parameter model (based on the T5 architecture) runs at roughly the FLOPS cost of an equivalent 1.4B-parameter vanilla Transformer.
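To make this concrete, here is a minimal NumPy sketch of switch-style routing. All names and sizes (`controller`, `d_model`, the toy linear experts) are illustrative assumptions, not the actual Switch Transformer implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_tokens, n_experts = 8, 6, 4

tokens = rng.normal(size=(n_tokens, d_model))       # token vectors in a batch
controller = rng.normal(size=(d_model, n_experts))  # controller (router) weights

# Controller scores: softmax over experts gives one routing distribution per token.
logits = tokens @ controller
scores = np.exp(logits - logits.max(axis=-1, keepdims=True))
scores /= scores.sum(axis=-1, keepdims=True)

# Switch routing: each token is sent to exactly one expert -- its argmax.
assignment = scores.argmax(axis=-1)                 # (n_tokens,)

# Each expert is a toy linear layer; only the chosen expert runs per token,
# and its output is scaled by the routing probability to keep routing trainable.
experts = rng.normal(size=(n_experts, d_model, d_model))
out = np.stack([
    scores[t, assignment[t]] * (tokens[t] @ experts[assignment[t]])
    for t in range(n_tokens)
])
```

Note that only one expert's weights are used per token, which is where the FLOPS savings come from: adding more experts grows the parameter count but not the per-token compute.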
Expert choice offers a slightly different approach. Instead of having tokens select the top-k experts, the experts themselves choose the top-k tokens. This method guarantees even load balancing (each expert receives the same number of tokens) and achieves substantial gains in training efficiency and downstream performance. However, some tokens risk not being chosen by any expert at all.
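The inverted selection can be sketched in a few lines. This is a toy illustration under assumed shapes, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_tokens, n_experts, k = 8, 6, 4, 2  # each expert picks its top-2 tokens

tokens = rng.normal(size=(n_tokens, d_model))
controller = rng.normal(size=(d_model, n_experts))
scores = tokens @ controller                  # (n_tokens, n_experts) affinities

# Expert choice: every expert selects the k tokens scoring highest *for it*,
# so each expert processes exactly k tokens -- perfect load balance by design.
chosen = np.argsort(scores, axis=0)[-k:, :]   # (k, n_experts) token indices

# The flip side: a token that no expert picks is dropped entirely.
picked = set(chosen.flatten().tolist())
dropped = [t for t in range(n_tokens) if t not in picked]
```

The `dropped` list makes the trade-off visible: load balance is guaranteed for experts, but coverage is not guaranteed for tokens.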
Limitations of current approaches
While the performance of the huge-parameter-count MoE architectures is impressive, they come with a new set of challenges during both training and inference. The most notable ones are:
Training instability: These methods choose and match experts to tokens discretely, so small changes in controller weights can have disproportionate effects on routing decisions.
Load imbalance: Because the routing network's choices are not explicitly constrained, token-to-expert assignment cannot be balanced efficiently. Some tokens end up without any expert to process them (token dropping), while most tokens get assigned to only a few experts (model collapse).
Information leak: Some successful MoE methods process tokens from different positions in a sequence together (e.g., by comparing scores of all tokens in a batch). This introduces an intra-sequence information leak and hinders their use in autoregressive decoding.
In a recent paper, researchers from Cohere AI discussed ways to tackle one of the main MoE challenges – having to keep all the experts in memory. They propose an extremely parameter-efficient MoE that combines the MoE architecture with lightweight experts. Their architecture outperforms standard PEFT methods and is on par with full fine-tuning while updating only the lightweight experts – less than 1% of an 11B-parameter model.
Mixture of Tokens
These drawbacks led to the rise of Mixture of Tokens (MoTs). This seemingly slight modification solves many of the problems posed by the methods discussed above. Instead of routing tokens to experts, MoT mixes tokens from different examples before feeding them to the experts, which allows the model to learn from all token-expert combinations and improves training stability and expert utilization. Each expert processes its mixture, and the result is then redistributed back to the original tokens.
How is Mixture of Tokens performed? First, importance weights are set for each token. The controller produces a score for every token, and a softmax is applied over the resulting token scores, so token weights are computed independently for each expert. Finally, each token is multiplied by its importance weight, and the weighted tokens are summed to form the mixture each expert receives.
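The mix-process-redistribute steps above can be sketched as follows. This is a minimal NumPy illustration with assumed shapes and toy linear experts, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_tokens, n_experts = 8, 6, 2

tokens = rng.normal(size=(n_tokens, d_model))       # tokens grouped across examples
controller = rng.normal(size=(d_model, n_experts))  # controller producing token scores

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Importance weights: softmax over the *tokens*, independently for each expert.
weights = softmax(tokens @ controller, axis=0)      # (n_tokens, n_experts)

# Each expert receives a single mixture: the importance-weighted sum of all tokens.
mixtures = weights.T @ tokens                       # (n_experts, d_model)

# Each expert (a toy linear layer here) processes its mixture...
experts = rng.normal(size=(n_experts, d_model, d_model))
processed = np.einsum('ed,edf->ef', mixtures, experts)

# ...and the output is redistributed to the original tokens with the same weights.
token_updates = weights @ processed                 # (n_tokens, d_model)
```

Unlike discrete top-k routing, every operation here is a softmax, matrix product, or sum, so gradients flow to every token-expert pair.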
MoT addresses the problems with MoE models by making the following changes:
- It mixes tokens from different examples before feeding them to experts; this improves training stability and expert utilization by allowing the model to learn from ALL token-expert combinations.
- Mixture of Tokens is fully differentiable, meaning it can be trained with standard gradient-based methods. This avoids the need for auxiliary losses or other hard-to-tune techniques, making the model easier to train and deploy.
Mixture of Tokens has the potential to significantly improve the performance and efficiency of LLMs. It has already shown impressive results – a 3x reduction in training time compared to a vanilla Transformer – and we anticipate that MoTs will continue to yield even more significant improvements.
Disclaimer: This post is informed by research from the scholarly article "Mixture of Tokens: Efficient LLMs Through Cross-Example Aggregation," authored by multiple contributors.