Successfully scaling data products – best practice from McKinsey
Data products are central to increasing data consumption across the organization. But how can you ensure your data product program delivers lasting value? We explore the latest best practice from McKinsey, designed to scale data product creation and usage.

As organizations look to accelerate and deepen data consumption, more and more are adopting strategies built around data products – high-value, business-focused data assets that are ready for large audiences to consume. In Gartner’s 2024 Evolution of Data Management Survey, 63% of Chief Data Officers (CDOs) highlighted data products as one of their top five investment trends for the next 2-3 years.
However, while the overall push to create data products is growing, delivering success at scale remains hard to achieve. Challenges such as proving ROI and value, maintaining governance, and aligning data owners with business users can all undermine the results of data product initiatives.
How can organizations put in place the foundations for data product growth? Consultancy McKinsey recently released five best practices for scaling data product programs – this blog explores them and how they can be applied in practice.
The challenges of creating data product factories
The key objective of a data product is to make relevant data easily consumable by a large group of business users. And, just as creating bespoke physical products in the real world is expensive and time-consuming, so is building every data product from scratch. What is needed is an industrial approach that reuses as much of the underlying structure as possible between different data products, delivering economies of scale and time.
However, according to McKinsey, the challenges go beyond technology. The consultancy outlines three obstacles to scaling data product production:
- Confusion about how data products deliver value – pointing to a need to educate users and senior management about the role and benefits of data products
- Governance practices that favor the individual use case over larger ROI benefits – preventing reuse of elements or the scaling of data products due to concerns about confidentiality and security
- Institutional incentives that reward building data products over scaling them – if everyone is tasked with creating data products, some may be built to fill quotas rather than satisfy specific business needs. Equally, the maintenance and updating of existing data products could be neglected as time is instead spent on creating new products from scratch.
5 data product best practices from McKinsey
To overcome these challenges and scale data product creation, McKinsey provides five practical lessons:
It’s about more value, not better data
When organizations first explore data product programs, it is tempting to focus on one or two high-value use cases where data is plentiful. The danger of this approach is that there may not be a specific business need in place – or it may already be met by existing data assets. As a result, the benefits are not felt or the data product itself is not adopted, no matter how well engineered it is. Alternatively, data leaders overcommit to creating a large number of data products, stretching resources too thinly to deliver real results.
Instead, organizations need to create a roadmap of data products, prioritized by which will deliver the greatest value to the business. This has to be built on solid analysis that weighs the time and cost behind each data product against the benefits it will bring, the number of business use cases it meets and how many people will actually use it. That focuses the program and ensures it delivers both early value and an ongoing pipeline of relevant data products.
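To make this concrete, here is a minimal sketch of how such a prioritization analysis might be scored. The weights, fields and candidate products are purely illustrative assumptions, not McKinsey's methodology:

```python
from dataclasses import dataclass

@dataclass
class CandidateProduct:
    name: str
    build_cost_days: int   # estimated effort to build and deploy
    use_cases_served: int  # business use cases it would meet
    expected_users: int    # people likely to consume it

def value_score(p: CandidateProduct) -> float:
    # Hypothetical weighting: breadth of reuse and reach count for more
    # than raw build effort.
    benefit = p.use_cases_served * 10 + p.expected_users
    return benefit / p.build_cost_days

candidates = [
    CandidateProduct("customer_360", 60, 5, 400),
    CandidateProduct("store_footfall", 20, 1, 30),
    CandidateProduct("supplier_spend", 35, 3, 120),
]

# Rank the roadmap: highest estimated value per unit of effort first.
for p in sorted(candidates, key=value_score, reverse=True):
    print(f"{p.name}: score {value_score(p):.1f}")
```

However the scoring is weighted, the point is that the roadmap is built from explicit cost-versus-benefit comparisons rather than from wherever data happens to be plentiful.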
Understand the economics of data products
As mentioned above, creating multiple data products through an industrialized approach is much more efficient than hand-crafting each one from scratch. It is vital that senior decision-makers understand this, and value the economies of scale that come from reusing elements and templates across different data products. The technology processes involved in ensuring data quality and governance should also be made scalable and repeatable.
By adopting common processes and principles, the initial costs of building the first data product can be amortized across subsequent products, bringing down expenses across the program, particularly as teams grow and share their experiences. As part of this, it is vital to understand that data products are ongoing, evolving objects – they need to be easy to maintain and update to optimize value over time.
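As a simple illustration of this amortization effect (the figures below are invented for the example, not McKinsey data), the average cost per product falls steadily once the shared foundations are in place:

```python
# Illustrative only: amortizing a one-off investment in shared templates,
# pipelines and governance tooling across a growing product portfolio.
SHARED_FOUNDATION_COST = 500_000   # hypothetical one-off platform cost
INCREMENTAL_PRODUCT_COST = 50_000  # hypothetical marginal cost per product

for n_products in (1, 5, 10, 20):
    avg_cost = (SHARED_FOUNDATION_COST + n_products * INCREMENTAL_PRODUCT_COST) / n_products
    print(f"{n_products:>2} products -> average cost per product: ${avg_cost:,.0f}")

# The first product effectively costs $550,000, but by the twentieth
# the average cost per product has fallen to $75,000.
```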
Demonstrating a business case based on clear ROI makes it easier for CDOs and other data leaders to prove the business impact of data products, and thus maintain and grow their budgets over time.
Build data products that can power the flywheel effect
Long-term value comes from maximizing reuse of the technical components of data products, which starts from the beginning of the program. Focusing on data engineering and setting standards and templates may feel time-consuming, but it ensures programs can scale and create a positive flywheel effect, with ongoing momentum increasing over time.
As McKinsey says, organizations need to make access to data products simple. This is where a data product marketplace comes in. A centralized, self-service space for all data products (and other data assets), a data product marketplace connects users and data producers, making it easy to discover, access and consume data products. Data marketplaces need to be built on an intuitive, e-commerce-style experience so that any user can find the products they need, without requiring technical skills or support. This accelerates the flywheel effect, prompting greater collaboration between users and data product owners while generating feedback on future enhancements and needs.
Data marketplaces don’t just meet the requirements of human users. Because they centralize data in easily consumable formats, they can also be used to train AI models and agents, retaining control while ensuring consistency and access to a single version of the truth.
Find people who can run data products like a business
While they require technical skills to create, data products are not just IT deliverables. They must be carefully designed to meet a specific business need, and be usable by a large group of non-technical employees. This means that data product teams require a range of skills, from IT, data and governance to business domain expertise and user experience understanding.
McKinsey highlights two key requirements:
- Appoint a strong Data Product Owner (DPO) to lead the project. The DPO should run the data product program like a business, rather than a technical exercise. This means working closely with the wider organization, finding new use cases and tracking KPIs to show how much value is being generated, as well as being accountable for the financial benefits created.
- The business should lead development. Focusing too heavily on the technical aspects of a data product can result in a solution that does not meet business needs, and consequently is not adopted by users. Business owners, such as domain experts, should be involved in data products from the beginning, with extensive testing and adaptation to fine-tune the end result.
Integrate gen AI into the data product program
As well as powering AI programs, data products can benefit from generative AI in their creation and ongoing support. Organizations therefore need to break down the different steps in data product creation and understand where gen AI delivers benefits in terms of consistency, speed and efficiency. These typically include the preparation and deployment phases, such as building pipelines, monitoring data quality, and testing and publishing. Gen AI has to be incorporated directly into data product workflows, consistently across teams, to optimize processes and deliver on its benefits while creating better, more usable data products that meet user requirements.
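As one hedged example of what such an integration point might look like, the sketch below uses a gen AI model to draft data quality rules during the preparation phase. The model, prompt, schema and review step are all illustrative assumptions (using the OpenAI Python client as one possible provider), not a workflow prescribed by McKinsey:

```python
# Illustrative sketch: a gen AI model drafts data quality checks during the
# preparation phase of a data product. Model name, prompt and schema are
# assumptions for the example.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

schema = {
    "order_id": "string, unique identifier",
    "order_date": "date",
    "amount_eur": "decimal, order total in euros",
}

prompt = (
    "Propose data quality checks, as plain rules, for a table with this "
    f"schema, covering completeness, uniqueness and valid ranges: {schema}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

draft_rules = response.choices[0].message.content
print(draft_rules)  # a data engineer reviews the draft before it enters the pipeline
```

The key design choice here is keeping a human in the loop: gen AI drafts the rules quickly and consistently across teams, but they only enter the production pipeline once a data engineer has reviewed them.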
Delivering data products at scale
With data products now a priority for every CDO, ensuring that programs deliver ongoing value is crucial. Programs have to scale beyond a small number of use cases to create an ongoing, AI-assisted production line that is built in conjunction with the business and reuses as many components and as much experience as possible to bring down costs and drive ROI. Above all, data products have to be easily discoverable and usable by both humans and AI, meaning organizations should invest in an intuitive, centralized data product marketplace that seamlessly connects employees to the data they need to drive value and become more data-centric.
Looking to industrialize your data product program? Find out how Opendatasoft helps increase consumption and collaboration through our intuitive, self-service data product marketplace solution. Contact us to learn more and arrange a demo.