For The Better - Email 10/3

How to make choices for the future of AI

For the Better comes to you bi-weekly with ideas about how and why to build companies focused on human flourishing and stories of the people who are doing it. Other enthusiasms may occasionally appear.

How to make choices for the future of AI right now

We’re awash in opinions about what the future of AI will be and how to manage it. Utopia! Dystopia! In need of regulation to prevent doom and destruction. No, wait – regulatory oversight will destroy it. Only open source can save it from evil. Or maybe it’s just the opposite. Everyone seems to have a committed position in one direction or another, giving rise to internecine battles on all fronts in a struggle for dominance over a world we don’t yet live in. All of these arguments actually arise from the same issue: No one is really sure which actions will create which outcomes, positive or negative, because no one knows how AI, and its place in our lives, is going to develop. In some ways, any rabid commitment to a particular vision is a cover for the paralysis born of not actually being able to predict the future.

My own belief is that when we’re not sure what's going to happen down the road, the ethically correct choice is simply to pick actions that make sense for today and will also continue to make sense in a very wide range of possible futures. For example:

  • Better disclosure now will help us understand the potential, and possible hazards, of AI in the future no matter what it looks like. 

  • Investing in state capacity will also be helpful and important in any future scenario. We want government to be competent and capable of addressing whatever comes. 

  • Hardening critical infrastructure will not only address current fears about bad actors getting access to AI, but is a generally beneficial thing to do for a very high percentage of trajectories. 

Helen Toner, a former board member of OpenAI, uses this kind of framework to encourage people to move beyond conflict and paralysis to a place of cooperation, especially with regard to transparency. This piece, written with her colleague at Georgetown University’s Center for Security and Emerging Technology, lays out the stakes very clearly:

“Fostering greater transparency and disclosure in AI will illustrate opportunities to act on present harm and indicate potential ‘failure modes’ for future, more powerful AI systems…AI needs what already exists in aviation, cybersecurity, and other tech domains: an institutionalized, public incident-tracking system backed by government resources and investigatory powers…. Beyond incident tracking, transparency and disclosure requirements for high-stakes AI would help mitigate both present and future AI risks. Right now, companies building and selling AI have far more information about their systems than the policymakers trying to figure out how to govern them.”

As it happens, just this week Governor Gavin Newsom vetoed SB 1047, an AI safety bill that would have created a number of new regulations on the industry. The takes are coming quickly, needless to say. I found this one, this one, and this one to be among the best. But what’s really interesting about them is that they’re all about the problem of insisting on a specific version of the future rather than looking at what can be done now. As one of them concludes: “Asking focused questions…seems far more promising than asking whether an AI model is dangerous or not—a question that’s simply too vague to produce a meaningful answer.”

This isn’t a prescription for being passive or going along with the status quo. It’s about recognizing what choices can be made that will give us a greater ability to respond to whatever version of the future we arrive in. Relinquishing the battle is the only way to maximize the eventual benefit for everyone.

Things I’ve Enjoyed Lately 

🔵 A Look Behind the Screens
This Federal Trade Commission study of the extractive data-mining habits of big tech companies, including Amazon, YouTube, Snap, WhatsApp, and more, is hardly surprising, but still bracing in its assessment: “The report leaves no doubt that without significant action, the commercial surveillance ecosystem will only get worse. Our privacy cannot be the price we pay to accomplish ordinary basic daily activities, and responsible data practices should not put a business at a competitive disadvantage.” 

Athena:

One of the things I enjoy the most about what I do is that every day is a little different. I meet people building incredible companies and doing great things for the world, do my writing and work, and always reserve time for my family. Athena, which offers executive assistants at affordable rates, shares my belief that the key to making the most of everything life has to offer is keeping it running smoothly. I've partnered with them so you can focus on living your life rather than managing it. 

🔵 The CEO Response
A fascinating and encouraging report on how CEOs feel about the uncertainty that lies ahead on so many fronts and their role in helping the world navigate it. “They embrace their responsibility to collaborate with governments and other stakeholders to solve challenges such as climate change and inequality, even if the final destinations are not yet known.”

🔵 Why I’m funding a $100 million project to revive Fillmore Street
Investor Neil Mehta is quite literally putting his money where his mouth is – in the San Francisco neighborhood where he grew up and where he’s raising his own kids. Local ecosystems are powerful, even across generations, and even when they’re in need of help. They foster real connection and, with it, bring value of every kind to the people who live in them.