The state of artificial intelligence (AI) is constantly in flux. The problems it can solve, and what it needs to solve those problems efficiently, are evolving, and leaders in organizations that infuse AI into their offerings will want to stay on the cutting edge to remain competitive in 2022 and beyond. We see four key themes that leaders should consider and compare against their current AI strategies moving forward.
Much of AI’s value still comes from solving known problems with newer methods and data: allocating resources more efficiently, orchestrating sales personnel, personalizing customer experience, optimizing customer interactions through predictive models, detecting large-scale fraud and more. That said, we strongly believe that in the future, AI’s value will be derived from reimagining problems with an AI-first perspective. Take, for example, personalizing customer experiences. Traditional steps may include segmenting customers, understanding the profile of customers within each segment, determining which actions fit the objectives for each segment and then implementing relevant tactics. A first implementation of AI might introduce sophistication to several of these steps. A more compelling implementation is to rethink the steps entirely. Can the AI both segment and choose actions? If so, we aren’t limited to the handful of segments that a human mind can comprehend; we can have hundreds of segments and many corresponding actions. How might such an approach work in an organization structured to solve these problems linearly? Are the tasks different? Do we need to reassign responsibilities?
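To make the idea concrete, here is a minimal sketch of letting the machine both segment and choose actions. It uses a bare-bones, pure-Python k-means so it stands alone; the customer features, segment count and action names are all hypothetical, and a production system would use a proper ML library and far richer features.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: place k centroids, then alternate assignment and update."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(d) / len(members) for d in zip(*members))
    return centroids

def assign_action(point, centroids, actions):
    """Map a customer to the action attached to their nearest segment."""
    idx = min(range(len(centroids)),
              key=lambda i: sum((a - b) ** 2 for a, b in zip(point, centroids[i])))
    return idx, actions[idx % len(actions)]

# Hypothetical customers described by (recency, frequency, spend) features.
customers = [(1, 9, 120.0), (2, 8, 100.0), (30, 1, 15.0),
             (28, 2, 10.0), (10, 5, 60.0), (12, 4, 55.0)]
centroids = kmeans(customers, k=3)
segment, action = assign_action((29, 1, 12.0), centroids,
                                ["win-back offer", "loyalty reward", "cross-sell"])
```

Nothing here limits `k` to three; the same loop runs with hundreds of segments, each mapped to its own tactic, which is exactly where the human-designed segmentation workflow stops scaling.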
We have all heard that data scientists spend up to 80% of their time preparing data. Whatever the true figure is for your organization, the fact remains that delivering value through AI still requires putting in the hard yards on less glamorous work such as data preparation. AI applications that reduce time spent wrangling data, without sacrificing quality, are showing potential. Examples include semi-automated mapping of source data to target schemas, guided discovery of ontologies in text, intelligent data-quality support for data stewards and self-supervised learning that enriches data to fill gaps. Progress on these dimensions will broaden the portfolio of problems that can be addressed.
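The semi-automated source-to-target mapping mentioned above can be sketched very simply: propose a mapping by name similarity and flag the rest for a data steward to review. This is a stdlib-only illustration (the column names are invented); real tools also compare value distributions and learned embeddings, not just names.

```python
import difflib

def map_columns(source_cols, target_cols, cutoff=0.6):
    """Suggest a target column for each source column by name similarity.

    Returns {source: best_target_or_None}; a data steward reviews the
    suggestions instead of typing every mapping by hand.
    """
    mapping = {}
    lowered = [t.lower() for t in target_cols]
    for col in source_cols:
        matches = difflib.get_close_matches(col.lower(), lowered, n=1, cutoff=cutoff)
        if matches:
            # Recover the original-cased target name.
            mapping[col] = next(t for t in target_cols if t.lower() == matches[0])
        else:
            mapping[col] = None  # no confident match: flag for manual review
    return mapping

source = ["cust_name", "CUST_EMAIL", "signup_dt", "xzq1"]
target = ["customer_name", "customer_email", "signup_date", "region"]
suggestions = map_columns(source, target)
```

Even this crude version captures the workflow shift: the machine drafts the tedious part and the human spends their time only on the ambiguous cases.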
As the field evolves rapidly, so do the demands on our data scientists. The data scientist of five years ago would be ill-equipped to tackle many of today's problems. What we require of the data scientist continues to expand: knowledge of data science techniques, understanding of architectural constructs, data engineering and much more. The best data scientists work as force multipliers: they encapsulate models into common components and actively facilitate the reuse of cutting-edge AI by a broader analytics community, which can then further reimagine the frontiers of AI applications.
Algorithms and techniques continue to progress at a dizzying pace. Advances that emerged in 2021 may not affect us today, but they will soon; our challenge is to stay in touch. Generative Pre-trained Transformer 3 (GPT-3) is significantly more sophisticated than its predecessor: with 175 billion machine learning parameters, it is roughly a hundredfold larger than GPT-2, which was itself roughly tenfold larger than the original GPT. We can only take advantage of these advances by understanding them as they happen. Many of these advances are also topic-focused, which demands increased specialization from us.
Many organizations are investing in AI labs dedicated to looking around the corner and connecting the dots between cutting-edge methods and business priorities. A growing set of low- and no-code machine learning (ML) platforms and paradigms (e.g., Codex) is also lowering the barrier to experimenting with cutting-edge research for a broader range of analytics professionals. For instance, large transformer-based language models hosted on Hugging Face can be accessed through spreadsheet-style interfaces. These tools shift the focus from programming to interpreting models and inferring implications.
Scaling AI is in the mainstream vocabulary today. What is it, who in the organization owns it, and how does it differ depending on for whom AI is being scaled? First and foremost, the purpose of scaling AI differs across the business user, the data scientist and the IT professional. Each wants something different: the business user wants AI to be ubiquitous, the data scientist wants their job to be easier and the IT organization wants to serve a large set of constituent needs efficiently. Their practical understandings of scaling AI therefore vary. Even as organizations surmount this obstacle, they must recognize that the hurdles of scaling involve both technology and changing behaviors. From a technology perspective, there is increasing recognition that principles and standards from software engineering, adapted to the nuances of ML, can pave the way toward reproducible AI models embedded in scalable and robust software products. On the other hand, organizational behavior change requires the thoughtful application of foundational change-management principles.
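Two of those borrowed software-engineering practices, deterministic runs and traceable configurations, can be sketched in a few lines. This is an illustrative, stdlib-only sketch (the function names and config fields are our own invention, not a standard MLOps API).

```python
import hashlib
import json
import random

def run_fingerprint(config: dict, data_version: str) -> str:
    """Deterministic fingerprint of a training run: the same config and data
    version always hash to the same value, so any result can be traced back
    to exactly what produced it."""
    payload = json.dumps({"config": config, "data": data_version}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:12]

def seeded_shuffle(items, seed: int):
    """Deterministic shuffle, so train/test splits are reproducible run to run."""
    rnd = random.Random(seed)
    out = list(items)
    rnd.shuffle(out)
    return out

config = {"model": "gbm", "learning_rate": 0.1, "seed": 42}
fingerprint = run_fingerprint(config, data_version="sales_v3")
split = seeded_shuffle(range(10), seed=config["seed"])
```

The point is not the hashing itself but the discipline it encodes: a model artifact that cannot be tied to a specific configuration and data version cannot be reproduced, and a model that cannot be reproduced cannot be reliably embedded in a software product.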
AI is supposed to be everywhere and invisible. But for it to be everywhere, AI requires robust infrastructure. Building infrastructure is time-consuming and probably best done in a centralized manner. But good AI solutions need deep domain expertise, which is difficult to maintain centrally. An informal poll suggests a slight majority of organizations prefer a centralized AI approach (using a center of excellence), while others have decentralized. The consensus seems to be to move toward the middle. If your organization hasn't started on this journey, where should you focus first: gradually building the required infrastructure, or generating excitement by solving use cases that produce impact? We maintain a preference for the latter.
We aspire for AI to be many things: trustworthy, ethical, responsible, fair. Achieving this is a challenge for all of us. A Forrester prediction suggests that in 2022, 15 firms in the Global 500 will appoint a new chief trust officer, and at least five large companies will introduce bias bounties to eliminate discriminatory outcomes from AI systems. Nowhere is trust more important than in AI, where algorithms tend to be black boxes. Whether in the form of nutritional labels or peer-review committees, we expect all of us will be spending more time ensuring our AI is trustworthy in the future.
Leaders need to consider how AI's abilities and needs will influence their organizational AI strategy. The nature of the work, the talent needed to do it, the scale required and the trustworthiness of our AI are all ideas to hold up and ask, "Did we consider this? Should we be considering this?" There is no one-size-fits-all answer to these questions. To stay competitive, organizations will need to constantly weigh these trends, what their competitors are doing with them and what strategic value they offer.