Neural scaling law
In machine learning, a neural scaling law is an empirical scaling law that relates the characteristic quantities of a family of neural networks, such as model size, training dataset size, training cost, and performance.[1][2]
Introduction
In general, a neural model can be characterized by four parameters: size of the model, size of the training dataset, cost of training, and performance after training. Each of these four variables can be precisely defined as a real number, and they are empirically found to be related by simple statistical laws, called "scaling laws". These are usually written as $N, D, C, L$ (number of parameters, dataset size, computing cost, loss).
Size of the model
In most cases, the size of the model is simply the number of parameters. However, one complication arises with the use of sparse models, such as mixture-of-expert models.[3] In sparse models, during every inference, only a fraction of the parameters are used. In comparison, most other kinds of neural networks, such as Transformer networks, always use all their parameters during every inference.
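As an illustration of why "model size" is ambiguous for sparse models, the sketch below counts total versus active parameters in a top-$k$ mixture-of-experts feed-forward layer; the layer dimensions are hypothetical and not taken from any cited model.

```python
# Hypothetical top-k mixture-of-experts (MoE) feed-forward layer.
# All dimensions are illustrative, not from any cited model.
d_model = 4096           # hidden width of the network
d_ff = 16384             # feed-forward width of each expert
num_experts = 64         # experts stored in the layer
top_k = 2                # experts activated for each token

params_per_expert = 2 * d_model * d_ff           # two weight matrices per expert
total_params = num_experts * params_per_expert   # parameters the layer stores
active_params = top_k * params_per_expert        # parameters used for one token

print(f"total parameters:          {total_params:,}")
print(f"active parameters / token: {active_params:,}")
print(f"active fraction:           {active_params / total_params:.1%}")
```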
Size of the training dataset
The size of the training dataset is usually quantified by the number of data points it contains. Larger training datasets are typically preferred as they provide a richer and more diverse source of information for the model to learn from. This in turn can lead to improved generalization performance when the model is applied to unseen data.[4] However, increasing the size of the training dataset also increases the computational resources and time required for model training.
With the "pretrain, then finetune" method used in most large language models, there are two kinds of training dataset: the pretraining dataset and the finetuning dataset. Their sizes would have different effects on model performance. Generally, the finetuning dataset is less than 1% the size of pretraining dataset.[5]
In some cases, a small amount of high quality data suffices for finetuning, and more data does not improve performance.[5]
Cost of training
The cost of training is typically measured in terms of time (how long it takes to train the model) and computational resources (how much processing power and memory are required to train the model). The cost of training can be significantly reduced with efficient training algorithms, optimized software libraries, and parallel computing on specialized hardware such as GPUs or TPUs.
The cost of training a neural model is a function of several factors, including the size of the model, the size of the training dataset, the complexity of the training algorithm, and the computational resources available.[4] In particular, doubling the training dataset does not necessarily double the cost of training, because one may train the model several times over the same dataset (each pass being an "epoch").
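As a back-of-the-envelope sketch, the training cost in FLOPs can be estimated with the common heuristic of roughly 6 FLOPs per parameter per training token (discussed in the Chinchilla section below); the function and example numbers here are illustrative only.

```python
def training_flops(n_params: float, dataset_tokens: float, epochs: int = 1) -> float:
    """Rough training-cost estimate: ~6 FLOPs per parameter per token seen
    (forward plus backward pass). Ignores attention FLOPs, activation
    recomputation, and hardware utilization."""
    return 6.0 * n_params * dataset_tokens * epochs

# Example: a 1-billion-parameter model trained for one epoch on 20 billion tokens.
print(f"{training_flops(1e9, 20e9):.2e} FLOPs")  # on the order of 1e20 FLOPs
```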
Performance
The performance of a neural model is evaluated based on its ability to accurately predict the output given the input data. Common metrics for evaluating model performance include:[4]
- accuracy, precision, recall, and F1 score for classification tasks;
- mean squared error (MSE) or mean absolute error (MAE) for regression tasks;
- negative log-likelihood per token (logarithm of perplexity) for language modeling;
- Elo rating in a competition against other models, such as gameplay[6] or preference by a human judge.[7]
Performance can be improved by using more data, larger models, different training algorithms, regularizing the model to prevent overfitting, and early stopping using a validation set.
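For language modeling, the loss $L$ that appears in scaling laws is the average negative log-likelihood per token. A minimal sketch of this metric, using made-up per-token probabilities, is:

```python
import math

def nll_per_token(token_probs):
    """Average negative log-likelihood per token, in nats/token.
    `token_probs` are the probabilities the model assigned to the tokens
    that actually occurred; the values used below are illustrative."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

probs = [0.20, 0.05, 0.60, 0.10]      # hypothetical per-token probabilities
loss = nll_per_token(probs)           # this is the L used in the scaling laws
print(f"loss = {loss:.3f} nats/token, perplexity = {math.exp(loss):.2f}")
```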
Examples
(Henighan, Kaplan, et al., 2020)
A 2020 analysis[8] studied the statistical relations between $C, N, D, L$ over a wide range of values and found similar scaling laws across many orders of magnitude of model size and compute, and over multiple modalities (text, video, image, text to image, etc.).[8]
In particular, the scaling laws it found are (Table 1 of [8]):
- For each modality, they fixed one of the two variables $C, N$ and varied the other one ($D$ is varied along with them using $D = C/(6N)$); the achievable test loss satisfies $L = L_0 + \left(\frac{x_0}{x}\right)^{\alpha}$, where $x$ is the varied variable and $L_0, x_0, \alpha$ are parameters to be found by statistical fitting. The parameter $\alpha$ is the most important one.
- When $N$ is the varied variable, the fitted exponent $\alpha$ depends on the model modality. It corresponds to the $\alpha$ in the Chinchilla scaling paper.
- When $D$ is the varied variable, the fitted exponent $\alpha$ likewise depends on the model modality. It corresponds to the $\beta$ in the Chinchilla scaling paper.
- Given a fixed computing budget, the optimal model parameter count consistently follows a power law $N_{opt}(C) \propto C^{a}$. The proportionality constant varies by a factor of up to 10 across modalities, and the exponent $a$ also varies with the modality. This exponent corresponds to the one in $N_{opt}(C)$ from the Chinchilla scaling paper.
- It is "strongly suggested" (but not statistically checked) that the optimal dataset size also follows a power law $D_{opt}(C) \propto C^{b}$. This exponent corresponds to the one in $D_{opt}(C)$ from the Chinchilla scaling paper.
The scaling law of loss against compute, $L = L_0 + \left(\frac{C_0}{C}\right)^{\alpha_C}$, was confirmed during the training of GPT-3 (Figure 3.1 of [9]).
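The saturating power-law form above can be fitted with standard nonlinear least squares. The sketch below uses synthetic data in place of real measurements; real studies typically fit in log space and use many more training runs.

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(x, L0, x0, alpha):
    """Saturating power law L = L0 + (x0 / x)**alpha, as in the text above."""
    return L0 + (x0 / x) ** alpha

# Synthetic (size, loss) measurements standing in for real training runs.
x = np.logspace(6, 9, 12)                                 # e.g. parameter counts
rng = np.random.default_rng(0)
y = scaling_law(x, 1.7, 3e5, 0.08) * rng.normal(1.0, 0.01, x.size)

# Ordinary least-squares fit; p0 is a rough initial guess.
(L0, x0, alpha), _ = curve_fit(scaling_law, x, y, p0=[1.0, 1e5, 0.1], maxfev=10_000)
print(f"fitted L0 = {L0:.3f}, x0 = {x0:.3g}, alpha = {alpha:.3f}")
```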
Chinchilla scaling (Hoffmann et al., 2022)
One particular scaling law ("Chinchilla scaling") states that, for a large language model (LLM) autoregressively trained for one epoch, with a cosine learning rate schedule, we have:[10]

$$\begin{cases} C = C_0 N D \\ L = \dfrac{A}{N^{\alpha}} + \dfrac{B}{D^{\beta}} + E \end{cases}$$
where the variables are
- $C$ is the cost of training the model, in FLOPs.
- $N$ is the number of parameters in the model.
- $D$ is the number of tokens in the training set.
- $L$ is the average negative log-likelihood loss per token (nats/token), achieved by the trained LLM on the test dataset.
- $E$ represents the loss of an ideal generative process on the test data.
- $\frac{A}{N^{\alpha}}$ captures the fact that a Transformer language model with $N$ parameters underperforms the ideal generative process.
- $\frac{B}{D^{\beta}}$ captures the fact that a model trained on $D$ tokens underperforms the ideal generative process.
and the statistical parameters are
- $C_0 = 6$, meaning that it costs 6 FLOPs per parameter to train on one token. This is estimated by Kaplan et al.[11] Note that training cost is much higher than inference cost, as training entails both forward and backward passes, whereas inference costs 1 to 2 FLOPs per parameter to infer on one token.
- $\alpha = 0.34$, $\beta = 0.28$, $A = 406.4$, $B = 410.7$, $E = 1.69$.
The statistical laws were fitted over experimental data from models ranging from 70 million to over 16 billion parameters, trained on 5 billion to 500 billion tokens.
Since there are 4 variables related by 2 equations, imposing 1 additional constraint and 1 additional optimization objective allows us to solve for all four variables. In particular, for any fixed $C$, we can uniquely solve for the choice of $N, D$ that minimizes $L$. This provides us with the optimal $N_{opt}(C), D_{opt}(C)$ for any fixed $C$:

$$N_{opt}(C) = G\left(\frac{C}{6}\right)^{a}, \quad D_{opt}(C) = G^{-1}\left(\frac{C}{6}\right)^{b}, \quad \text{where } G = \left(\frac{\alpha A}{\beta B}\right)^{\frac{1}{\alpha+\beta}},\ a = \frac{\beta}{\alpha+\beta},\ b = \frac{\alpha}{\alpha+\beta}.$$
Plugging in the numerical values gives the "Chinchilla-efficient" model size and training dataset size, as well as the achievable test loss, for any given compute budget (a numerical sketch follows below).
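The following sketch plugs the fitted values reported above into the formula for $N_{opt}$ and $D_{opt}$; exact constants differ slightly between the paper's fitting approaches, so the result also differs somewhat from the directly fitted table below.

```python
# Compute-optimal model and data sizes from the parametric Chinchilla fit above.
alpha, beta = 0.34, 0.28
A, B = 406.4, 410.7

G = (alpha * A / (beta * B)) ** (1 / (alpha + beta))
a = beta / (alpha + beta)          # exponent for N_opt(C)
b = alpha / (alpha + beta)         # exponent for D_opt(C)

def chinchilla_optimal(c_flops: float) -> tuple[float, float]:
    """Return (N_opt, D_opt) for a training budget of c_flops, using C = 6*N*D."""
    n_opt = G * (c_flops / 6) ** a
    d_opt = (1 / G) * (c_flops / 6) ** b
    return n_opt, d_opt

n_opt, d_opt = chinchilla_optimal(1e24)   # illustrative budget of 1e24 FLOPs
print(f"N_opt ~ {n_opt:.2e} parameters, D_opt ~ {d_opt:.2e} tokens")
# roughly 4e10 parameters and 4e12 tokens, so C = 6*N*D ~ 1e24 is recovered
```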
Similarly, we may find the optimal training dataset size and training compute budget for any fixed model parameter size, and so on. There are other estimates for the "Chinchilla-efficient" model size and training dataset size. The above is based on a statistical model of $L(N, D)$. One can also directly fit a statistical law for $N_{opt}(C)$ and $D_{opt}(C)$ without going through that detour, which yields slightly different values, tabulated below:
$N$ | $C$ / FLOP | $C$ / FLOPs of training Gopher | $D$ |
---|---|---|---|
400 Million | 1.92e+19 | 1/29968 | 8.0 Billion |
1 Billion | 1.21e+20 | 1/5706 | 20.2 Billion |
10 Billion | 1.23e+22 | 1/2819 | 205.1 Billion |
67 Billion | 5.76e+23 | 1 | 1.5 Trillion |
175 Billion | 3.85e+24 | 6.7 | 3.7 Trillion |
280 Billion | 9.90e+24 | 17.2 | 5.9 Trillion |
520 Billion | 3.43e+25 | 59.5 | 11.0 Trillion |
1 Trillion | 1.27e+26 | 221.3 | 21.2 Trillion |
10 Trillion | 1.30e+28 | 22515.9 | 216.2 Trillion |
In simpler terms, the Chinchilla scaling law for training Transformer language models suggests that, given an increased budget $C$ (in FLOPs), to achieve compute-optimal training the number of model parameters ($N$) and the number of training tokens ($D$) should be scaled in approximately equal proportions. This conclusion differs from the previous scaling law for neural language models,[11] which states that $N$ should be scaled faster than $D$. The discrepancy arises from setting different cycle lengths for cosine learning rate schedulers. In estimating the Chinchilla scaling, the authors set the cycle length to be the same as the number of training steps, as experimental results indicate that larger cycles overestimate the loss of the models.
Broken Neural Scaling Laws (BNSL)
A 2022 analysis[12] found that many scaling behaviors of artificial neural networks follow a smoothly broken power law functional form:

$$y = a + \left(b x^{-c_0}\right) \prod_{i=1}^{n} \left(1 + \left(\frac{x}{d_i}\right)^{1/f_i}\right)^{-c_i f_i}$$

in which $x$ refers to the quantity being scaled (i.e. $C$, $N$, $D$, number of training steps, number of inference steps, or model input size) and $y$ refers to the downstream (or upstream) performance evaluation metric of interest (e.g. prediction error, cross entropy, calibration error, AUROC, BLEU score percentage, F1 score, reward, Elo rating, solve rate, or FID score) in zero-shot, prompted, or fine-tuned settings. The parameters $a, b, c_0, c_1, \ldots, c_n, d_1, \ldots, d_n, f_1, \ldots, f_n$ (with $n$ the number of breaks) are found by statistical fitting.
On a log–log plot, when $x$ is not too large and $a$ is subtracted out from the y-axis, this functional form looks like a series of linear segments connected by arcs; the transitions between the segments are called "breaks", hence the name Broken Neural Scaling Laws (BNSL).
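A minimal implementation of the functional form above, with illustrative (not fitted) parameter values, is sketched below; each break is described by a triple $(c_i, d_i, f_i)$.

```python
import numpy as np

def bnsl(x, a, b, c0, breaks):
    """Smoothly broken power law, as written above.
    `breaks` is a list of (c_i, d_i, f_i) tuples: c_i changes the slope,
    d_i locates the break on the x-axis, f_i controls its sharpness."""
    y = b * x ** (-c0)
    for c_i, d_i, f_i in breaks:
        y = y * (1.0 + (x / d_i) ** (1.0 / f_i)) ** (-c_i * f_i)
    return a + y

# Illustrative parameters: one break at x = 1e6 where the log-log slope
# steepens from -0.05 to -(0.05 + 0.20) = -0.25.
x = np.logspace(3, 9, 200)
y = bnsl(x, a=0.1, b=5.0, c0=0.05, breaks=[(0.20, 1e6, 0.3)])
```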
The scenarios in which the scaling behaviors of artificial neural networks were found to follow this functional form include large-scale vision, language, audio, video, diffusion, generative modeling, multimodal learning, contrastive learning, AI alignment, AI capabilities, robotics, out-of-distribution (OOD) generalization, continual learning, transfer learning, uncertainty estimation / calibration, out-of-distribution detection, adversarial robustness, distillation, sparsity, retrieval, quantization, pruning, fairness, molecules, computer programming/coding, math word problems, arithmetic, emergent abilities, double descent, supervised learning, unsupervised/self-supervised learning, and reinforcement learning (single agent and multi-agent).
The architectures for which the scaling behaviors of artificial neural networks were found to follow this functional form include ResNets, Transformers, MLPs, MLP-Mixers, Recurrent Neural Networks, Graph Neural Networks, U-Nets, Ensembles (and Non-Ensembles), MoE (Mixture of Experts) (and Non-MoE) Models, and Sparse Pruned (and Non-Sparse Unpruned) Models.
Vision transformers
Vision transformers, similar to language transformers, exhibit scaling laws. A 2022 study trained vision transformers over a range of parameter counts $N$, on image sets of varying sizes $D$, and with varying amounts of computing $C$ (measured in units of TPUv3-core-days).[13]
After training the model, it is finetuned on the ImageNet training set. Let $L$ be the error probability of the finetuned model classifying the ImageNet test set. They found that $L$ decreases with the training compute according to a saturating power law (a power law in $C$ plus an irreducible error floor).
Neural machine translation
Ghorbani et al.[14] studied scaling laws for neural machine translation (specifically, English as source, and German as target) in encoder-decoder Transformer models, trained until convergence on the same datasets (thus they did not fit scaling laws for computing cost $C$ or dataset size $D$). They varied the encoder parameter count $N_E$ and the decoder parameter count $N_D$, and found three results:
- $L$ is a scaling-law function of $N_E, N_D$, the encoder and decoder parameter counts. It is not simply a function of the total parameter count $N = N_E + N_D$. The function has a multiplicative power-law form in $N_E$ and $N_D$ plus an irreducible-loss term, with parameters found by statistical fitting. They found that a specific split of parameters between encoder and decoder minimizes the loss when the total parameter count $N$ is held fixed.
- $L$ "saturates" (that is, it reaches its irreducible loss) at smaller model sizes when the training and testing datasets are "source-natural" rather than "target-natural". A "source-natural" data point means a pair of English-German sentences where the model is asked to translate the English sentence into German, the English sentence was written by a natural English writer, and the German sentence was translated from the English sentence by a machine translator.[15] To construct the two kinds of datasets, the authors collected natural English and German sentences online, then used machine translation to generate their translations.
- As models grow larger, models trained on source-natural datasets can achieve low loss but a bad BLEU score. In contrast, models trained on target-natural datasets achieve low loss and a good BLEU score in tandem (Figures 10 and 11 of [14]).
The authors hypothesize that source-natural datasets have uniform and dull (machine-translated) target sentences, so a model trained to predict the target sentences would quickly overfit.
Gordon et al.[16] trained Transformers for machine translation across a range of model sizes $N$ and dataset sizes $D$. They found that the Kaplan et al. (2020)[11] scaling law, $L(N, D) = \left[\left(\frac{N_C}{N}\right)^{\alpha_N/\alpha_D} + \frac{D_C}{D}\right]^{\alpha_D}$, applied to machine translation. They also found that the BLEU score scales approximately exponentially with the loss, as $\text{BLEU} \approx c e^{-kL}$.
Transfer learning
Hernandez, Danny et al.[17] studied scaling laws for transfer learning in language models. They trained a family of Transformers in three ways:
- pretraining on English, finetuning on Python
- pretraining on an equal mix of English and Python, finetuning on Python
- training on Python
The idea is that pretraining on English should help the model achieve low loss on a test set of Python text. Suppose the model has parameter count $N$, and after being finetuned on $D_F$ Python tokens, it achieves some loss $L$. We say that its "transferred token count" is $D_T$ if another model with the same $N$ achieves the same $L$ after training on $D_F + D_T$ Python tokens.
They found that $D_T$ follows a power law of the form $D_T \propto D_F^{\alpha} N^{\beta}$, with one set of fitted constants for pretraining on English text, and another for pretraining on a mix of English and non-Python code.
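The definition of the transferred token count can be made concrete with a small numerical sketch: given a (here entirely hypothetical) loss curve for training on Python from scratch and the loss reached by a pretrained-then-finetuned model, $D_T$ is found by matching the two losses.

```python
import numpy as np

# Hypothetical from-scratch loss curve for Python-only training; the constants
# are placeholders, not values from Hernandez et al.
def loss_from_scratch(d_tokens):
    return 1.0 + (1e9 / d_tokens) ** 0.3

D_F = 1e7            # Python tokens used for finetuning the pretrained model
L_finetuned = 1.8    # hypothetical loss reached by the pretrained-then-finetuned model

# D_T is the extra Python data a from-scratch model of the same size would need
# to reach the same loss: solve loss_from_scratch(D_F + D_T) = L_finetuned on a grid.
grid = np.logspace(6, 12, 100_000)
total = grid[np.argmin(np.abs(loss_from_scratch(grid) - L_finetuned))]
D_T = total - D_F
print(f"effective data transferred: D_T ~ {D_T:.3g} Python tokens")
```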
References
- Bahri, Yasaman; Dyer, Ethan; Kaplan, Jared; Lee, Jaehoon; Sharma, Utkarsh (2021-02-12). "Explaining Neural Scaling Laws". arXiv:2102.06701 [cs.LG].
- Hestness, Joel; Narang, Sharan; Ardalani, Newsha; Diamos, Gregory; Jun, Heewoo; Kianinejad, Hassan; Patwary, Md Mostofa Ali; Yang, Yang; Zhou, Yanqi (2017-12-01). "Deep Learning Scaling is Predictable, Empirically". arXiv:1712.00409 [cs.LG].
- Rajbhandari, Samyam; Li, Conglong; Yao, Zhewei; Zhang, Minjia; Aminabadi, Reza Yazdani; Awan, Ammar Ahmad; Rasley, Jeff; He, Yuxiong (2022-06-28). "DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale". Proceedings of the 39th International Conference on Machine Learning. PMLR: 18332–18346.
- Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
- Zhou, Chunting; Liu, Pengfei; Xu, Puxin; Iyer, Srini; Sun, Jiao; Mao, Yuning; Ma, Xuezhe; Efrat, Avia; Yu, Ping; Yu, Lili; Zhang, Susan; Ghosh, Gargi; Lewis, Mike; Zettlemoyer, Luke; Levy, Omer (2023-05-01). "LIMA: Less Is More for Alignment".
- Jones, Andy L. "Scaling Scaling Laws with Board Games".
- LMSYS Chatbot leaderboard
- Henighan, Tom; Kaplan, Jared; Katz, Mor; Chen, Mark; Hesse, Christopher; Jackson, Jacob; Jun, Heewoo; Brown, Tom B.; Dhariwal, Prafulla; Gray, Scott; Hallacy, Chris; Mann, Benjamin; Radford, Alec; Ramesh, Aditya; Ryder, Nick; Ziegler, Daniel M.; Schulman, John; Amodei, Dario; McCandlish, Sam (2020-10-27). Scaling Laws for Autoregressive Generative Modeling. OCLC 1228442047.
- Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, J.; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, T.; Child, Rewon (2020-05-28). "Language Models are Few-Shot Learners". arXiv. S2CID 218971783.
- Hoffmann, Jordan; Borgeaud, Sebastian; Mensch, Arthur; Buchatskaya, Elena; Cai, Trevor; Rutherford, Eliza; Casas, Diego de Las; Hendricks, Lisa Anne; Welbl, Johannes; Clark, Aidan; Hennigan, Tom; Noland, Eric; Millican, Katie; Driessche, George van den; Damoc, Bogdan (2022-03-29). "Training Compute-Optimal Large Language Models". arXiv:2203.15556 [cs.CL].
- Kaplan, Jared; McCandlish, Sam; Henighan, Tom; Brown, Tom B.; Chess, Benjamin; Child, Rewon; Gray, Scott; Radford, Alec; Wu, Jeffrey; Amodei, Dario (2020). "Scaling Laws for Neural Language Models". CoRR. abs/2001.08361. arXiv:2001.08361.
- Caballero, Ethan; Gupta, Kshitij; Rish, Irina; Krueger, David (2022). "Broken Neural Scaling Laws". International Conference on Learning Representations (ICLR), 2023.
- Zhai, Xiaohua; Kolesnikov, Alexander; Houlsby, Neil; Beyer, Lucas (2022). "Scaling Vision Transformers": 12104–12113.
- Ghorbani, Behrooz; Firat, Orhan; Freitag, Markus; Bapna, Ankur; Krikun, Maxim; Garcia, Xavier; Chelba, Ciprian; Cherry, Colin (2021-09-01). "Scaling Laws for Neural Machine Translation".
- Chen, Mia Xu; Firat, Orhan; Bapna, Ankur; Johnson, Melvin; Macherey, Wolfgang; Foster, George; Jones, Llion; Schuster, Mike; Shazeer, Noam; Parmar, Niki; Vaswani, Ashish; Uszkoreit, Jakob; Kaiser, Lukasz; Chen, Zhifeng; Wu, Yonghui (July 2018). "The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation". Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Melbourne, Australia: Association for Computational Linguistics: 76–86. arXiv:1804.09849. doi:10.18653/v1/P18-1008.
- Gordon, Mitchell A; Duh, Kevin; Kaplan, Jared (2021). "Data and Parameter Scaling Laws for Neural Machine Translation". Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics. pp. 5915–5922. doi:10.18653/v1/2021.emnlp-main.478.
- Hernandez, Danny; Kaplan, Jared; Henighan, Tom; McCandlish, Sam (2021-02-01). "Scaling Laws for Transfer".