
Join Pierre-Carl Langlais, a co-founder of Pleias, in conversation with Elena Simperl, as he discusses designing small Large Language Models (LLMs) optimised for regulated industries.
Not all organisations can reap the benefits of AI right now, especially when it comes to Large Language Models (LLMs). While LLMs promise substantial gains when employed as assistants across many tasks, their uptake has been slow in regulated industries that handle sensitive or proprietary data.
LLMs are often considered too big for organisations to run locally, making them reliant on cloud providers, which can conflict with commitments to keep data in-house. And given the lack of transparency around foundation models' training data, regulated organisations cannot be sure that the LLMs they use were not trained on intellectual property or copyrighted assets. With this being the subject of ongoing lawsuits, potential adopters require assurance that using LLMs would not expose them to legal liability.
Pleias is a French AI development lab that sought to address these challenges by 'doing the impossible': designing small LLMs optimised for regulated industries, trained entirely on open data. Working within these constraints, the Pleias team built a toolbox of innovations that allowed them to get the most out of a limited set of open data sources. Their models, trained on these sources alone, outperformed models twice their size released by global AI labs. Pleias are committed to open science, with all of their models, datasets, and tools openly accessible to everyone in the AI ecosystem.
Speakers
Pierre-Carl Langlais, a co-founder of Pleias, has been a leading voice in the data-centric AI space, recently speaking at the AI Action Summit in Paris. He will be speaking to us about the importance of Pleias' initiative and how it fits into AI practice now and in the future.
Professor Elena Simperl, Director of Research, ODI
Elena Simperl is the ODI’s Director of Research and a Professor of Computer Science at King’s College London. She is also a Fellow of the British Computer Society, a Fellow of the Royal Society of Arts, a senior member of the Society for the Study of AI and Simulation of Behaviour, and a Hans Fischer Senior Fellow.
Elena’s research is in human-centric AI, exploring socio-technical questions around the management, use, and governance of data in AI applications. According to AMiner, she is among the 100 most influential scholars in knowledge engineering of the last decade. She also features in the Women in AI 2000 ranking.
In her 15-year career, she has led 14 national and international research projects and contributed to another 26. She leads the ODI’s programme of research on data-centric AI, which studies and designs the socio-technical data infrastructure of AI models and applications. Elena has chaired several conferences in artificial intelligence, social computing, and data innovation. She is the president of the Semantic Web Science Association.
Elena is passionate about ensuring that AI technologies and applications allow everyone to take advantage of their opportunities, whether that is by making AI more participatory by design, investing in novel AI literacy interventions, or paying more attention to the stewardship and governance of data in AI.
Event details
You will be emailed a link to access the event.
We will send a recording of the event to all bookers.
Join our mailing list
Join our Research mailing list to be the first to know about our DCAI webinar series and Research publications. You can subscribe here.