Facebook owner Meta Platforms Inc is opening up access to a large language model for artificial intelligence research. Meta said its model was the first 175-billion-parameter language model to be made available to the broader AI research community. Large language models are natural language processing systems trained on massive volumes of text; they can answer reading comprehension questions or generate new text.
Meta said the release of its Open Pretrained Transformer (OPT-175B) model would improve researchers’ ability to understand how large language models work. It said restrictions on access to such models had hindered efforts to improve their robustness and mitigate known issues such as bias and toxicity. Artificial intelligence technology, a key area of research and development for several major online platforms, can perpetuate humans’ societal biases around issues like race and gender, and some researchers are concerned about the harms large language models can spread.
Meta said it hoped to increase the diversity of voices defining the ethical considerations of such technologies. To prevent misuse and maintain integrity, the tech giant said, it was releasing the model under a noncommercial license focused on research use cases. Access to the model will be granted to academic researchers and people affiliated with government, civil society and academic organizations, as well as industry research laboratories. The release will include the pretrained models and the code to train and use them.
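For a sense of what working with the released models looks like in practice, below is a minimal text-generation sketch. It is illustrative only: it assumes the smaller OPT checkpoints Meta published alongside OPT-175B and the Hugging Face transformers library, neither of which is named in the announcement, and the checkpoint identifier facebook/opt-125m is used simply as a small stand-in; the full 175-billion-parameter model itself requires the research access request described above.

```python
# Illustrative sketch only: assumes the smaller, publicly released OPT
# checkpoints and the Hugging Face "transformers" library, neither of which
# is part of the announcement text itself.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "facebook/opt-125m"  # small OPT checkpoint used as a stand-in

# Load the tokenizer and the pretrained causal language model.
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Encode a prompt and sample a continuation from the model.
prompt = "Large language models are trained on massive volumes of text so that they can"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```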