Small language models could be the future of AI, says this expert
This video is part of: Centre for AI Excellence
What if the way to better AI wasn’t to think bigger - more data centres, more data sets, more robust general intelligence - but smaller? That’s the route proposed by Stanford professor and AI expert Yejin Choi, who argues that the cultural dominance of English in LLMs silences other cultures and shuts whole communities out of AI’s benefits. Her solution? Smaller, more boutique language models, which use less power and processing, train through interaction rather than mass data consumption, and reflect the pluralistic values of their creators. It’s small language models, Choi argues, that are more likely to fully embody the rich and diverse variety of human perspectives.
Topics:
Artificial Intelligence
