Stella Biderman
The Pile: An 800GB Dataset of Diverse Text for Language Modeling
L Gao, S Biderman, S Black, L Golding, T Hoppe, C Foster, J Phang, H He, ...
arXiv preprint arXiv:2101.00027, 2020
Multitask prompted training enables zero-shot task generalization
V Sanh, A Webson, C Raffel, SH Bach, L Sutawika, Z Alyafeai, A Chaffin, ...
The Tenth International Conference on Learning Representations (ICLR), 2022
GPT-Neo: Large scale autoregressive language modeling with Mesh-TensorFlow
S Black, L Gao, P Wang, C Leahy, S Biderman
GitHub Repository, 2021
Magic: The Gathering is Turing Complete
A Churchill, S Biderman, A Herrick
10th International Conference on Fun with Algorithms (FUN), 2020
Quality at a glance: An audit of web-crawled multilingual datasets
J Kreutzer, I Caswell, L Wang, A Wahab, D van Esch, N Ulzii-Orshikh, ...
Transactions of the Association for Computational Linguistics 10, 50-72, 2022
VQGAN-CLIP: Open domain image generation and editing with natural language guidance
K Crowson, S Biderman, D Kornis, D Stander, E Hallahan, L Castricato, ...
arXiv preprint arXiv:2204.08583, 2022
GPT-NeoX-20B: An Open-Source Autoregressive Language Model
S Black, S Biderman, E Hallahan, Q Anthony, L Gao, L Golding, H He, ...
Proceedings of the ACL Workshop on Challenges & Perspectives in Creating …, 2022
A framework for few-shot language model evaluation
L Gao, J Tow, S Biderman, S Black, A DiPofi, C Foster, L Golding, J Hsu, ...
GitHub Repository, 2021
Pitfalls in Machine Learning Research: Reexamining the Development Cycle
S Biderman, WJ Scheirer
"I Can't Believe It's Not Better!" NeurIPS 2020 workshop, 106-117, 2020
GPT-NeoX: Large scale autoregressive language modeling in PyTorch
A Andonian, Q Anthony, S Biderman, S Black, P Gali, L Gao, E Hallahan, ...
GitHub Repository, 2021
Towards a Model-Theoretic View of Narratives
L Castricato, S Biderman, D Thue, R Cardona-Rivera
Proceedings of the Third Workshop on Narrative Understanding, 95-104, 2021
Datasheet for the Pile
S Biderman, K Bicheno, L Gao
arXiv preprint arXiv:2201.07311, 2022
You reap what you sow: On the Challenges of Bias Evaluation Under Multilingual Settings
Z Talat, A Névéol, S Biderman, M Clinciu, M Dey, S Longpre, S Luccioni, ...
Proceedings of the ACL Workshop on Challenges & Perspectives in Creating …, 2022
Documenting geographically and contextually diverse data sources: The bigscience catalogue of language data and resources
A McMillan-Major, Z Alyafeai, S Biderman, K Chen, F De Toni, G Dupont, ...
arXiv preprint arXiv:2201.10066, 2022
Cut the CARP: Fishing for zero-shot story evaluation
S Matiana, JR Smith, R Teehan, L Castricato, S Biderman, L Gao, ...
arXiv preprint arXiv:2110.03111, 2021
Rotary embeddings: A relative revolution
S Biderman, S Black, C Foster, L Gao, E Hallahan, H He, B Wang, ...
EleutherAI Blog, 2021
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Neural language models are effective plagiarists
S Biderman, E Raff
arXiv preprint arXiv:2201.07406, 2022
What Language Model to Train if You Have One Million GPU Hours?
T Le Scao, T Wang, D Hesslow, L Saulnier, S Bekman, MS Bari, ...
Proceedings of the ACL Workshop on Challenges & Perspectives in Creating …, 2022
Data Governance in the Age of Large-Scale Data-Driven Language Technology
Y Jernite, H Nguyen, S Biderman, A Rogers, V Danchev, S Tan, ...
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and …, 2022