Inside the AI Party at the End of the World


In a $30 million mansion perched on a cliff overlooking the Golden Gate Bridge, a group of AI researchers, philosophers, and technologists gathered to discuss the end of humanity.

The Sunday afternoon symposium, called "Worthy Successor," revolved around a provocative idea from entrepreneur Daniel Faggella: The "moral aim" of advanced AI should be to create a form of intelligence so powerful and wise that "you would gladly prefer that it (not humanity) determine the future path of life itself."

Faggella made the theme clear in his invitation. "This event is very much focused on posthuman transition," he wrote to me via X DMs. "Not on AGI that eternally serves as a tool for humanity."

A party filled with futuristic fantasies, where attendees discuss the end of humanity as a logistics problem rather than a metaphorical one, could be described as niche. If you live in San Francisco and work in AI, then this is a typical Sunday.

About 100 guests nursed nonalcoholic cocktails and nibbled on cheese plates near floor-to-ceiling windows facing the Pacific Ocean before gathering to hear three talks on the future of intelligence. One attendee sported a shirt that said "Kurzweil was right," seemingly a reference to Ray Kurzweil, the futurist who predicted machines will surpass human intelligence in the coming years. Another wore a shirt that said "does this help us get to safe AGI?" accompanied by a thinking face emoji.

Faggella told WIRED that he threw this event because "the big labs, the people that know that AGI is likely to end humanity, don't talk about it because the incentives don't permit it" and referenced early comments from tech leaders like Elon Musk, Sam Altman, and Demis Hassabis, who "were all pretty frank about the possibility of AGI killing us all." Now that the incentives are to compete, he says, "they're all racing full bore to build it." (To be fair, Musk still talks about the risks associated with advanced AI, though this hasn't stopped him from racing ahead.)

On LinkedIn, Faggella boasted a star-studded guest list, with AI founders, researchers from all the top Western AI labs, and "most of the important philosophical thinkers on AGI."

The first speaker, Ginevera Davis, a writer based in New York, warned that human values might be impossible to translate to AI. Machines may never understand what it's like to be conscious, she said, and trying to hard-code human preferences into future systems may be shortsighted. Instead, she proposed a lofty-sounding idea called "cosmic alignment": building AI that can work out deeper, more universal values we haven't yet discovered. Her slides often showed a seemingly AI-generated image of a techno-utopia, with a group of humans gathered on a grassy knoll overlooking a futuristic city in the distance.

Critics of machine consciousness will say that large language models are simply stochastic parrots, a metaphor coined by a group of researchers, some of whom worked at Google, who wrote in a famous paper that LLMs do not really understand language and are only probabilistic machines. But that argument wasn't part of the symposium, where speakers took as a given the idea that superintelligence is coming, and fast.