AI is everywhere in life sciences. Companies have moved from successful proofs of concept to deployment, from models to software and from improving existing applications to rethinking how decisions are made.
So why is everyone now rethinking who should take ownership of AI?
When we surveyed 15 large life sciences organizations on their approach to establishing and scaling AI, approximately 60% began their journey with an AI center of excellence, while the remaining 40% began with a federated approach. Regardless of the starting point, every single organization said they're dissatisfied with the status quo and searching for greater returns on their AI investments. They believe they can do better on several fronts: greater scope, more sustained impact and more efficient delivery methods. And, ironically, most said that the other basic approach, centralized or federated, is the more scalable one.
They’re all grappling with the realization that multiple disciplines need to coordinate to deliver impact. Each of their AI-related efforts needs business, data, algorithms, analytics, engineering and UI/UX specialties.
Then there’s the question of who should lead. Chief analytics, digital, information and technology officers typically feel responsible for driving the adoption of AI. Should AI be within one of their remits, and if so, whose? Or should AI be decentralized and developed close to the business?
The dilemma: should AI be federated or centralized? We think the answer starts with examining the limits of today’s federated and centralized approaches.
Let's begin by asking: What is being federated or centralized? First, we must distinguish between ownership and work. Ownership includes decisions about who identifies and prioritizes tasks and takes responsibility for output. The actual work is what is required to make plans a reality. But thinking about ownership and work in a black-and-white manner raises a paradox. There's simply no combination of these two dimensions that allows for both breakthrough innovation and the practical utility needed for today's domain apps and tailored models (see Figure 1).
We believe that ownership must be centralized, whereas work should be both federated and centralized. Here's why:
Without the centralized ownership of foundational or common elements, it is difficult to develop a community, share best practices, set common standards, allow reuse and more. As the field of AI rapidly evolves, it is essential for data scientists to have an open sharing environment and to build on each other's work rather than reinventing the wheel each time.
Without some federated work, however, AI models won't be grounded in specific domains and therefore won't deliver practical utility. While these federated endeavors are practical, they're often too focused on incremental innovation.
To counterbalance this incremental tendency, leaders need to create space for work that takes up disruptive ideas and experiments with them. This type of work is best accomplished under centralized ownership.
Solving problems of ownership and work at the next level requires a deeper understanding of the necessary components to deliver AI. Our guiding principle is to not only centralize the ownership of any common or foundational component but also federate the work to create domain-specific solutions or last-mile applications.
- Compute infrastructure is (mostly) a commodity, making it foundational and a good candidate for centralized ownership. You may need some special provisions for heavy machine learning workloads, but unified platforms like Spark, Ray and shared GPU clusters can typically handle them.
- Foundation models can be centralized and provided as a service to domain-specific teams. Central teams can train these models on large bodies of general knowledge to learn representations of a wide range of concepts (think GPT-3 for language, or models pretrained on ImageNet for images). Domain-specific teams can then build on those representations for more robust, domain-specific training. While such models are currently most prevalent in NLP and vision tasks, we expect them to become quite useful for structured data tasks as well.
- Data and knowledge layers have multiple facets, so consider a hybrid approach here. For example, data lake initiatives often have the ambition of centralizing and integrating datasets and you can centralize repeatable tasks in data quality management. However, you may need a more federated approach for model-specific data pipelines.
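As a concrete, if simplified, illustration of the split above, the following Python sketch treats a frozen encoder as the centrally owned foundation component and a small per-domain head as the federated work. All class names and the toy data are hypothetical; in practice the encoder would be a served foundation model and the head a fine-tuned domain model.

```python
import numpy as np

# Illustrative sketch only: FoundationEncoder and DomainHead are hypothetical
# names, not a real vendor API. The pattern shown: a centrally owned, frozen
# "foundation" encoder is shared as a service, while each domain team trains
# only a lightweight head on top of its representations.

class FoundationEncoder:
    """Centrally owned: trained once on broad data, then served frozen."""
    def __init__(self, dim_in, dim_out, seed=0):
        rng = np.random.default_rng(seed)
        # Stand-in for weights learned on a large general corpus
        self.W = rng.standard_normal((dim_in, dim_out)) / np.sqrt(dim_in)

    def encode(self, X):
        return np.tanh(X @ self.W)  # frozen representation, never retrained

class DomainHead:
    """Federated: a business domain fits its own small model on top."""
    def fit(self, Z, y):
        d = Z.shape[1]
        # Ridge-regularized least squares over the frozen features
        self.w = np.linalg.solve(Z.T @ Z + 0.1 * np.eye(d), Z.T @ y)
        return self

    def predict(self, Z):
        return Z @ self.w

# The central team publishes one encoder; a domain team brings its own labels.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 16))          # toy domain data
encoder = FoundationEncoder(16, 8)          # centralized ownership
Z = encoder.encode(X)
y = Z @ rng.standard_normal(8)              # toy domain-specific target
head = DomainHead().fit(Z, y)               # federated work
mse = float(np.mean((head.predict(Z) - y) ** 2))
print(f"in-sample MSE of domain head: {mse:.6f}")
```

The design choice mirrors the ownership principle: the encoder's weights never change hands, so standards, reuse and governance stay central, while each domain keeps full control over its own labels and head.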
In some cases, companies benefit from an in-between model, where ownership doesn’t rise to the enterprise level, but it’s inefficient to have each brand take ownership. For example, a company may want to group ownership for how it manages patient identifiable information for all its oncology brands since this data is used within the business unit, not throughout the enterprise. In these cases, teams benefit from central ownership at the business unit level.
Think back to the birth of data-driven decision-making. We saw organizations begin their journeys from different places. Some invested in technology and infrastructure. Others wanted to improve their decision-making capabilities. Still others invested in talent, hoping that analytical skills would ensure demand and adoption. Wherever their journeys began, organizations have converged on a more common model over the years.
The widespread adoption of AI will likely follow a similar trajectory, albeit a more complicated one. If your organization is striving for more impactful uses of AI, we urge you to examine the balance of federated and centralized approaches and chart your convergence plan.