AI and Large Language Models

A collection of resources for researchers interested in using Large Language Models (LLMs) in their research.

Accelerate has released code for working with Large Language Models in research; you can find it in our large-language-models GitHub repository. The code covers a range of ways to use and tune LLMs, including calling APIs, fine-tuning models, and building more complex solutions such as Retrieval Augmented Generation (RAG), and is freely available for researchers to build on in their own work.
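To give a flavour of what an RAG-style workflow involves, the sketch below shows the retrieval step: embedding a small set of documents, finding the ones most similar to a query, and assembling them into a prompt for an LLM. This is a minimal illustration, not code from the large-language-models repository; it assumes the sentence-transformers package, and the embedding model name, example documents, and prompt format are arbitrary choices.

```python
# Minimal sketch of the retrieval step in Retrieval Augmented Generation.
# Illustrative only; assumes sentence-transformers and numpy are installed.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Llamas are domesticated South American camelids.",
    "The transformer architecture underpins most modern LLMs.",
    "Retrieval Augmented Generation supplies an LLM with relevant context.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model
doc_embeddings = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    query_embedding = embedder.encode([query], normalize_embeddings=True)
    scores = doc_embeddings @ query_embedding.T  # normalised vectors: dot product = cosine
    top_k = np.argsort(scores.ravel())[::-1][:k]
    return [documents[i] for i in top_k]

query = "What is Retrieval Augmented Generation?"
context = "\n".join(retrieve(query))

# The retrieved context is prepended to the question before it is passed to
# whichever LLM API or local model the researcher chooses to use.
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)
```

The repository itself explores these ideas in more depth, including how to swap in different embedding models, vector stores, and LLM back ends.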

These resources came out of a study group which Accelerate convened in Autumn 2023. This group drew together researchers from across the University with an interest in deploying LLMs in their research. Study group members also contributed to a panel discussion at Accelerate’s AI for Science Summit in December 2023. Insights captured from this discussion are available in graphic form here, and a recording of the session is available here.

Building on insights from the study group, we developed a one-day, in-person workshop to equip researchers with an understanding of how LLMs work and how to implement them in their own research. Topics covered in this session include an introduction to LLMs, routes for augmentation, ethical considerations, and an exploration of popular models and services.

Future workshop dates will be shared on our events page.