The current wave of interest and investment in artificial intelligence (AI) is characterised in part by a preference for business models that prioritise the collection and siloing of data. For companies taking this approach, access to data is seen as the key to securing a competitive advantage when building, implementing and operating AI systems. Many of these companies therefore restrict access to data in an effort to strengthen that advantage.

The most frequently cited justifications for this approach are that the siloed data is personal or proprietary – that by restricting access to data, companies are protecting privacy and increasing profits. However, this approach – and the business models built on it – arguably undercuts both goals. It undercuts the goal of privacy by allowing large companies to control data about people, rather than building a framework that recognises the needs of people, communities, businesses and governments. Along similar lines, it undercuts efforts to limit unintended encoded bias in AI systems, since it makes it difficult to interrogate the data used to train those systems. Finally, the widespread practice of restricting access to data undercuts the goal of increasing profits: an AI sector beset by monopolies and the widespread siloing of crucial data stifles innovation within the sector as a whole.

In this report we will present a matrix of AI business models to help readers understand the array of models currently used by businesses developing AI systems, and lay out the most commonly expressed reasons for the current trend of restricting access to data.
We will then discuss the challenges created by the widespread siloing of data, focusing on the dangers of unintended encoded bias and of an AI oligopoly. We will close by identifying a number of emerging trends that are likely to shape how data is used in the AI sector over the coming years, and explore how opening and sharing data can mitigate the dangers of oligopoly and unintended encoded bias. At the ODI we believe the ideal path forward will involve increasing the sharing and opening of data for and by businesses operating AI systems, alongside the adoption of a wider variety of business models within the AI community. This will need to be done in a way that secures and safeguards people's trust while promoting innovation across the sector.