Bicocca Open Archive
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
Srivastava A.;Rastogi A.;Rao A.;Md Shoeb A. A.;Abid A.;Fisch A.;Brown A. R.;Santoro A.;Gupta A.;Garriga-Alonso A.;Kluska A.;Lewkowycz A.;Agarwal A.;Power A.;Ray A.;Warstadt A.;Kocurek A. W.;Safaya A.;Tazarv A.;Xiang A.;Parrish A.;Nie A.;Hussain A.;Askell A.;Dsouza A.;Slone A.;Rahane A.;Iyer A. S.;Andreassen A.;Madotto A.;Santilli A.;Stuhlmuller A.;Dai A.;La A.;Lampinen A.;Zou A.;Jiang A.;Chen A.;Vuong A.;Gupta A.;Gottardi A.;Norelli A.;Venkatesh A.;Gholamidavoodi A.;Tabassum A.;Menezes A.;Kirubarajan A.;Mullokandov A.;Sabharwal A.;Herrick A.;Efrat A.;Erdem A.;Karakas A.;Roberts B. R.;Loe B. S.;Zoph B.;Bojanowski B.;Ozyurt B.;Hedayatnia B.;Neyshabur B.;Inden B.;Stein B.;Ekmekci B.;Lin B. Y.;Howald B.;Orinion B.;Diao C.;Dour C.;Stinson C.;Argueta C.;Ramirez C. F.;Singh C.;Rathkopf C.;Meng C.;Baral C.;Wu C.;Callison-Burch C.;Waites C.;Voigt C.;Manning C. D.;Potts C.;Ramirez C.;Rivera C. E.;Siro C.;Raffel C.;Ashcraft C.;Garbacea C.;Sileo D.;Garrette D.;Hendrycks D.;Kilman D.;Roth D.;Freeman D.;Khashabi D.;Levy D.;Gonzalez D. M.;Perszyk D.;Hernandez D.;Chen D.;Ippolito D.;Gilboa D.;Dohan D.;Drakard D.;Jurgens D.;Datta D.;Ganguli D.;Emelin D.;Kleyko D.;Yuret D.;Chen D.;Tam D.;Hupkes D.;Misra D.;Buzan D.;Mollo D. C.;Yang D.;Lee D. -H.;Schrader D.;Shutova E.;Cubuk E. D.;Segal E.;Hagerman E.;Barnes E.;Donoway E.;Pavlick E.;Rodola E.;Lam E.;Chu E.;Tang E.;Erdem E.;Chang E.;Chi E. A.;Dyer E.;Jerzak E.;Kim E.;Manyasi E. E.;Zheltonozhskii E.;Xia F.;Siar F.;Martinez-Plumed F.;Happe F.;Chollet F.;Rong F.;Mishra G.;Winata G. I.;de Melo G.;Kruszewski G.;Parascandolo G.;Mariani G.;Wang G.;Jaimovitch-Lopez G.;Betz G.;Gur-Ari G.;Galijasevic H.;Kim H.;Rashkin H.;Hajishirzi H.;Mehta H.;Bogar H.;Shevlin H.;Schutze H.;Yakura H.;Zhang H.;Wong H. M.;Ng I.;Noble I.;Jumelet J.;Geissinger J.;Kernion J.;Hilton J.;Lee J.;Fisac J. F.;Simon J. B.;Koppel J.;Zheng J.;Zou J.;Kocon J.;Thompson J.;Wingfield J.;Kaplan J.;Radom J.;Sohl-Dickstein J.;Phang J.;Wei J.;Yosinski J.;Novikova J.;Bosscher J.;Marsh J.;Kim J.;Taal J.;Engel J.;Alabi J.;Xu J.;Song J.;Tang J.;Waweru J.;Burden J.;Miller J.;Balis J. U.;Batchelder J.;Berant J.;Frohberg J.;Rozen J.;Hernandez-Orallo J.;Boudeman J.;Guerr J.;Jones J.;Tenenbaum J. B.;Rule J. S.;Chua J.;Kanclerz K.;Livescu K.;Krauth K.;Gopalakrishnan K.;Ignatyeva K.;Markert K.;Dhole K. D.;Gimpel K.;Omondi K.;Mathewson K.;Chiafullo K.;Shkaruta K.;Shridhar K.;McDonell K.;Richardson K.;Reynolds L.;Gao L.;Zhang L.;Dugan L.;Qin L.;Contreras-Ochando L.;Morency L. -P.;Moschella L.;Lam L.;Noble L.;Schmidt L.;He L.;Colon L. O.;Metz L.;Senel L. K.;Bosma M.;Sap M.;Hoeve M. T.;Farooqi M.;Faruqui M.;Mazeika M.;Baturan M.;Marelli M.;Maru M.;Ramirez Quintana M. J.;Tolkiehn M.;Giulianelli M.;Lewis M.;Potthast M.;Leavitt M. L.;Hagen M.;Schubert M.;Baitemirova M. O.;Arnaud M.;McElrath M.;Yee M. A.;Cohen M.;Gu M.;Ivanitskiy M.;Starritt M.;Strube M.;Swedrowski M.;Bevilacqua M.;Yasunaga M.;Kale M.;Cain M.;Xu M.;Suzgun M.;Walker M.;Tiwari M.;Bansal M.;Aminnaseri M.;Geva M.;Gheini M.;Mukund Varma T.;Peng N.;Chi N. A.;Lee N.;Gur-Ari Krakover N.;Cameron N.;Roberts N.;Doiron N.;Martinez N.;Nangia N.;Deckers N.;Muennighoff N.;Keskar N. S.;Iyer N. S.;Constant N.;Fiedel N.;Wen N.;Zhang O.;Agha O.;Elbaghdadi O.;Levy O.;Evans O.;Moreno Casares P. A.;Doshi P.;Fung P.;Liang P. P.;Vico P.;Alipoormolabashi P.;Liao P.;Liang P.;Chang P.;Eckersley P.;Htut P. M.;Hwang P.;Milkowski P.;Patil P.;Pezeshkpour P.;Oli P.;Mei Q.;Lyu Q.;Chen Q.;Banjade R.;Rudolph R. E.;Gabriel R.;Habacker R.;Risco R.;Milliere R.;Garg R.;Barnes R.;Saurous R. A.;Arakawa R.;Raymaekers R.;Frank R.;Sikand R.;Novak R.;Sitelew R.;Lebras R.;Liu R.;Jacobs R.;Zhang R.;Salakhutdinov R.;Chi R.;Lee R.;Stovall R.;Teehan R.;Yang R.;Singh S.;Mohammad S. M.;Anand S.;Dillavou S.;Shleifer S.;Wiseman S.;Gruetter S.;Bowman S. R.;Schoenholz S. S.;Han S.;Kwatra S.;Rous S. A.;Ghazarian S.;Ghosh S.;Casey S.;Bischoff S.;Gehrmann S.;Schuster S.;Sadeghi S.;Hamdan S.;Zhou S.;Srivastava S.;Shi S.;Singh S.;Asaadi S.;Gu S. S.;Pachchigar S.;Toshniwal S.;Upadhyay S.;Debnath S.;Shakeri S.;Thormeyer S.;Melzi S.;Reddy S.;Makini S. P.;Lee S. -H.;Torene S.;Hatwar S.;Dehaene S.;Divic S.;Ermon S.;Biderman S.;Lin S.;Prasad S.;Piantadosi S. T.;Shieber S. M.;Misherghi S.;Kiritchenko S.;Mishra S.;Linzen T.;Schuster T.;Li T.;Yu T.;Ali T.;Hashimoto T.;Wu T. -L.;Desbordes T.;Rothschild T.;Phan T.;Wang T.;Nkinyili T.;Schick T.;Kornev T.;Tunduny T.;Gerstenberg T.;Chang T.;Neeraj T.;Khot T.;Shultz T.;Shaham U.;Misra V.;Demberg V.;Nyamai V.;Raunak V.;Ramasesh V.;Prabhu V. U.;Padmakumar V.;Srikumar V.;Fedus W.;Saunders W.;Zhang W.;Vossen W.;Ren X.;Tong X.;Zhao X.;Wu X.;Shen X.;Yaghoobzadeh Y.;Lakretz Y.;Song Y.;Bahri Y.;Choi Y.;Yang Y.;Hao Y.;Chen Y.;Belinkov Y.;Hou Y.;Hou Y.;Bai Y.;Seid Z.;Zhao Z.;Wang Z.;Wang Z. J.;Wang Z.;Wu Z.;Delgado R. R.;Chen A.;Mann B.;Olsson C.;Telleen-Lawton T.;Chi N.;Le Bras R.;Rivera C.;Gray A.
2023
Abstract
Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 450 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI’s GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit “breakthrough” behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Srivastava, A., Rastogi, A., Rao, A., Md Shoeb, A., Abid, A., Fisch, A., et al. (2023). Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research, 2023.
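As a rough illustration of the evaluation described in the abstract: the benchmark's public repository (https://github.com/google/BIG-bench) distributes most tasks as JSON files containing an "examples" list of input/target pairs. The sketch below, assuming that documented task format, shows how an exact-match score over one such file might be computed; the dummy model and scoring helper are hypothetical stand-ins, not the paper's actual evaluation harness.

```python
# Minimal sketch: exact-match scoring of a language model on one
# BIG-bench JSON task. Assumes the repository's documented format,
# i.e. a top-level "examples" list whose entries carry "input" and
# "target" fields ("target" may be a string or a list of acceptable
# strings). `generate` is a hypothetical stand-in for a model's API.
import json

def exact_match_score(task_path: str, generate) -> float:
    """Return the fraction of examples answered with an exact match."""
    with open(task_path) as f:
        task = json.load(f)
    hits = 0
    for ex in task["examples"]:
        prediction = generate(ex["input"]).strip()
        # Normalize "target" to a list of acceptable answers.
        targets = ex["target"] if isinstance(ex["target"], list) else [ex["target"]]
        hits += any(prediction == t.strip() for t in targets)
    return hits / len(task["examples"])

if __name__ == "__main__":
    # Trivial placeholder "model" that always answers the same string.
    dummy_model = lambda prompt: "yes"
    print(exact_match_score("some_task/task.json", dummy_model))
```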
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
Use this identifier to cite or link to this document: https://hdl.handle.net/10281/590824
Citations
516
N/A