This is our response to the 'Generative artificial intelligence in education' call for evidence.


Since our inception, we at the Open Data Institute have been committed to our mission: to work with companies and governments to build an open, trustworthy data ecosystem. As our chair, Sir Nigel Shadbolt, told the Commons Science and Technology Committee in February: “Although we are talking a lot about AI, for the algorithms, their feedstock—their absolute requirement—is data.” This means that, as we have argued from the outset, ‘the need to build a trustworthy data ecosystem is paramount’.

If we want to make the most of the opportunities presented by AI, while doing all we can to mitigate the risks, we need to think about data. We need to ensure that, wherever possible, the data used is open: accessible, available and assured. We need to ensure that data which needs to be protected is protected; that we have the right data infrastructure to do this properly; and that we have the data literacy, the right governance (including participatory data governance) and a sufficient grasp of data ethics to build public trust in data use and to prevent its abuse.

Our key recommendations

  • Caution in the use of AI in education, which should include: assessing whether the data used is ‘fit for function’; questioning who benefits and who loses out from the use of tools; and ensuring transparency and accountability regarding the safety of systems and where data (especially children’s data) goes
  • Those collecting and using data for AI need to be highly alert to inequalities, biases and power asymmetries
  • Providing greater transparency about the safety of systems
  • A much bigger programme of research and development will be needed to make the use of AI in education a success