Empowering Facebook's A.I. Team: The Case for Financial Flexibility
Chapter 1: The Impact of Metrics on Decision-Making
The phrase "What you measure matters" rings true in many organizations. When key performance indicators (KPIs) are closely monitored, employees tend to prioritize them, especially if incentives are tied to specific goals. For instance, if boards place importance on values like diversity and company culture, they should collaborate with CEOs to ensure these metrics are prominently displayed alongside traditional financial indicators like revenue and profit.
Determining the right metrics is a challenge in itself. The tech industry often struggles to evaluate the nuances of Web 3.0 using the simplistic metrics of Web 2.0. Issues such as misinformation, trolling, and polarization are far more complex than straightforward measures like click-through rate (CTR) or cost per thousand impressions (CPM). I experienced this firsthand while managing YouTube's consumer product team. When leadership redirected our focus from solely boosting user growth to also enhancing monetization, we had to sideline a project to improve the comments system—one aimed at reducing negativity. It was sacrificed because it wasn't linked to high-priority KPIs like revenue or user engagement, so it took a back seat.
Consider this scenario: what if the right metric to evaluate is the negative impact of a product, but that metric is inversely correlated with business KPIs? For example, polarizing content may drive short-term engagement, which in turn boosts active user counts and ad revenue. This raises an uncomfortable question: when margins come under pressure, what happens to investments in trust and safety?
The discussion sparked by Casey Newton’s Platformer article regarding Facebook's Responsible A.I. team elicited both hope and skepticism. My concern is that these teams, despite being empowered to challenge internal perceptions of their products, may be restricted from implementing changes that could adversely affect business metrics. In essence, we seek accountability as long as it doesn’t jeopardize stock performance.

This concern extends beyond Facebook, reflecting a broader issue of competing incentives within corporations. During my tenure at Google and YouTube, decisions were often made to balance user experience with revenue generation. The company would experiment with ad placement and load, typically avoiding strategies that maximized immediate revenue at the cost of user satisfaction or advertiser return on investment (ROI). It’s a long-term vision, prioritizing sustainability over short-term gains.
Section 1.1: Balancing Responsibility with Revenue
How can we enable a team like Responsible A.I. to make decisions that may decrease revenue, engagement, or growth if they believe these choices positively influence fairness or responsibility? One potential solution is the establishment of a dedicated budget for such teams.
This budget would allow teams to "spend" up to a predetermined annual amount. That doesn’t mean they must exhaust it; many initiatives may turn out to be neutral or even beneficial for revenue. Granting them the autonomy to make decisions aligned with their objectives could remove the pressure to justify leaving potential revenue on the table.
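To make the mechanism concrete, here is a minimal sketch of how such a budget might be tracked. Everything here is hypothetical—the class name, the dollar figures, and the rule that only revenue-negative decisions count against the allowance are all assumptions layered on the article's proposal, not anything Facebook has implemented.

```python
from dataclasses import dataclass, field

@dataclass
class ResponsibilityBudget:
    """Hypothetical annual allowance a team may 'spend' on revenue-negative decisions."""
    annual_limit: float               # max revenue impact the team may absorb per year
    decisions: list = field(default_factory=list)

    def spent(self) -> float:
        # Only revenue-negative decisions count against the budget;
        # neutral or revenue-positive changes cost nothing.
        return sum(max(0.0, -d["revenue_delta"]) for d in self.decisions)

    def can_approve(self, revenue_delta: float) -> bool:
        cost = max(0.0, -revenue_delta)
        return self.spent() + cost <= self.annual_limit

    def record(self, name: str, revenue_delta: float) -> bool:
        # Record a decision only if it fits within the remaining budget.
        if not self.can_approve(revenue_delta):
            return False
        self.decisions.append({"name": name, "revenue_delta": revenue_delta})
        return True

budget = ResponsibilityBudget(annual_limit=10_000_000)
budget.record("rank down polarizing content", revenue_delta=-4_000_000)
budget.record("comment quality filters", revenue_delta=-3_000_000)
print(budget.spent())                   # 7000000.0 spent so far
print(budget.can_approve(-5_000_000))   # False: would exceed the annual limit
```

The point of the sketch is the framing: a revenue-negative change becomes a budgeted line item to be approved, not a loss the team must apologize for.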
While this concept may seem unconventional and could lead to unforeseen consequences—such as other departments adjusting their strategies to recover "lost" revenue—the underlying principle is that promoting fairness shouldn't come at the cost of profits. There’s a risk that other teams could offload their responsibilities onto this specialized A.I. team, expecting them to resolve all issues.
Section 1.2: Future Directions for Corporate Responsibility
Perhaps we could envision a model similar to carbon offsetting, where each product team is accountable for managing its own responsibility budget, leading to an internal marketplace for trading responsibility credits. Addressing new challenges will require innovative solutions, necessitating a deep understanding of corporate dynamics in addition to technical algorithms.
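The carbon-offset analogy can be sketched as a simple internal ledger. This is purely illustrative: the team names, allowances, and trading rules are invented assumptions, loosely modeled on cap-and-trade, to show how responsibility credits might move between product teams.

```python
# Hypothetical internal marketplace for "responsibility credits":
# each team receives an allowance, and teams that stay under it
# can sell their surplus to teams that have exceeded theirs.
class CreditLedger:
    def __init__(self, allowances: dict):
        self.balances = dict(allowances)  # team -> remaining credits

    def spend(self, team: str, credits: int) -> None:
        """Charge a team for a revenue-negative responsibility decision."""
        if self.balances.get(team, 0) < credits:
            raise ValueError(f"{team} lacks {credits} credits")
        self.balances[team] -= credits

    def trade(self, seller: str, buyer: str, credits: int) -> None:
        """Transfer surplus credits from one team to another."""
        self.spend(seller, credits)  # debit the seller
        self.balances[buyer] = self.balances.get(buyer, 0) + credits

ledger = CreditLedger({"feed": 100, "ads": 50})
ledger.spend("feed", 30)           # feed uses part of its allowance
ledger.trade("feed", "ads", 20)    # feed sells surplus credits to ads
print(ledger.balances)             # {'feed': 50, 'ads': 70}
```

As with cap-and-trade, the design question is where the allowances come from and who audits the "cost" of each decision; the ledger itself is the easy part.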
Chapter 2: Exploring the Dangers of AI
In this video, Mo Gawdat, a former Google executive, shares insights on the potential risks associated with artificial intelligence and the ethical considerations that must be taken into account.
Chapter 3: Redefining Employee Incentives
This video discusses strategies for employees at Facebook to enhance their performance by shifting their focus from mere revenue generation to fostering a more meaningful engagement with their work.