Last June, Sidewalk Labs, an urban planning company in the Google family, unveiled a highly anticipated plan to redesign a 12-acre portion of Toronto’s waterfront. Decades of poor city planning had left this industrial section of the city devoid of residents and cut off by a tangle of overpasses and bridges, and residents wanted a more livable redesign. Waterfront Toronto was eventually formed to solicit concepts for the redesign, and Sidewalk Labs was chosen to take on the project.
It was a big deal. Canadian Prime Minister Justin Trudeau announced the initiative after its conception in 2017. Sidewalk Labs’ proposed innovations were impressive, forward-thinking and loaded with technology and AI. The design included tall buildings made from engineered wood lining the waterfront, and sensors that would track the movement of residents and feed that data to algorithms. The AI would, among other things, use the data to control traffic and deploy robots to clean up trash. When the plan was released, however, it landed with a loud thud: Toronto residents pushed back over privacy concerns, and as of February, Sidewalk Labs had been forced to scale back its proposal significantly.
Media coverage of the scale-back has mostly focused on residents’ actions. The New York Times framed it as a David and Goliath story in which a band of dedicated Torontonians took on a big tech company and won. Is that what happened? Did they fell a giant, or did the giant stumble over its own shoelaces?
I’ve seen many companies attempt significant makeovers of their organizations and workforces. I’ve seen it done well and watched employees adopt new ways of working, and I’ve seen it done poorly and watched employees resist the change. Sidewalk Labs made the same missteps that many other companies make when attempting to incorporate technology and AI. Here’s how:
- Sidewalk Labs didn’t consider public perception and the importance of building trust. As a member of the Alphabet family, Sidewalk Labs might as well be Google. We’ve entered an age of both awe and distrust of big tech companies. A feature release that offers a timely and useful suggestion can generate amazement along with consternation because our personal data was mined in the background. If you’re Google or Facebook or Amazon, you can’t embrace the awe while dismissing the fear. You’re starting from a place of huge consumer distrust, one that goes beyond the ethical use of data. So when a company synonymous with Google proposed to install sensors that would monitor and record each citizen’s every move, the alarm was understandable. Consumers fear ulterior motives behind the collection of personal data, but Google seems tone-deaf to this. In November of last year, the Wall Street Journal reported that Google was collecting the personal health information of millions of Americans in partnership with Ascension Health. Google seemed genuinely shocked at the blowback when this news came to light because the company was not planning anything sinister. But what Google doesn’t seem to get is that this isn’t about motive—it’s about appearances.
Ethics is about fairness and the moral principles that govern activity. If you’re tracking online activity to serve up ads, that’s ethical. We may not like it, and it may feel invasive, but it’s not nefarious because we know how the data is used. Sidewalk Labs could have done more to clearly explain what it was going to do with collected data rather than leaving it to Toronto’s imagination. Shouldn’t Google put more effort into mitigating and reversing distrust, especially in large initiatives such as this? Developing a solution in secret is not a good way to build trust. The lesson here is to know your audience, understand what kind of trust deficit you may be operating from and keep it in mind as you design a new solution.
- The leaders failed to promote human-machine collaboration. Building confidence and comfort in AI requires, among other things, transparency, explainability and clear communication of its limitations, its capabilities and the role humans will play in its deployment. As reported in the New York Times article, Sidewalk Labs asserted that algorithms will design and run a city better than politicians. Maybe that’s true or maybe it isn’t, but either way, it’s terrible PR. Many find AI threatening, so wouldn’t it be smarter to promote human-machine collaboration, with humans in charge and machines helping them make better decisions?
- The company didn’t bring its “end consumers” on the journey. Sidewalk Labs was supposed to release a plan for a 12-acre site but instead surprised residents with an 800-acre plan. It appears that many of the company’s ideas required a larger scale to work properly (and in its defense, Waterfront Toronto did request a broader vision). I spoke with a few Toronto residents and heard a mix of responses, including, “I heard a lot of news about this at the launch, but then never heard about it again.” Keeping consumers engaged would have reduced the intimidation factor and given them some say in the design. From a communications perspective, it’s a disaster to change the scale of a project without sharing the details, especially if you’re operating from a trust deficit. Keep your stakeholders informed not only of your plans but also of changes in scope, and incorporate their input into your design.
- Ethics were an afterthought. Critics of the plan were quick to characterize its people-tracking component as “surveillance capitalism.” It’s unclear if Sidewalk Labs had any ethical concerns while designing the plan; I assume it didn’t. When citizens spoke up after the fact, Sidewalk Labs reacted, let Waterfront Toronto set the ethical guidelines, and then downscaled the project to address concerns. The company should have seen these concerns coming, not only because its leaders should know their audience but also because the project has a potential for abuse that should concern people. For example, a less well-intentioned tech giant could use surveillance data and smart city infrastructure to nudge consumer behavior to benefit businesses that paid for increased traffic. When you’re designing new AI, don’t innovate first and consider ethical ramifications later. Think about ethics early in the innovation process: pull ethics-minded people in to brainstorm what moral or safety concerns could arise, and keep weighing ethics during every phase. Anticipate objections so that when they surface, you can show you’ve already considered what could go wrong and how you plan to mitigate it.
Technology and AI are intimidating to many people even when they’re helpful. Whether you’re Google or an orthopedic shoe factory, releasing technology and AI-driven innovations must be done with great care and a deep understanding of those who will be impacted. If not, even brilliant innovations will sink under their own weight. Clearly, that’s what happened in Toronto.