Happy ceasefire day and welcome to Regulator, a newsletter for Verge subscribers about Big Tech’s rocky journey through the world of politics. If you’re not a subscriber yet, you can become one here; my only request is that you sign up before Donald Trump decides to revisit his previous threats toward Iran and kick-start World War III.
I’m back after being waylaid last week by the deadly combo of a moderate cold and the beginning of pollen season. (Twenty-one percent of the District’s acreage is public green space, and DC’s park system is consistently ranked the best in America. Unfortunately, I am allergic to every tree and grass.) If you’ve got tips on anything I may have missed or anything I should know about the upcoming weeks, send ’em to tina.nguyen+tips@theverge.com.
Do you actually believe anything OpenAI says?
On Monday, OpenAI published a 13-page policy paper addressing the impact that artificial intelligence will have on the American workforce, along with what it believed were the solutions. Those included higher capital gains taxes on corporations that replace their workers with AI, with the revenue funding a bigger public safety net; a public wealth fund; a four-day workweek paid for by “efficiency dividends”; and government programs to help transition workers into “human-centered” work, all financed by the abundance that AI would deliver.
Unfortunately, it was released the same day that The New Yorker’s Ronan Farrow and Andrew Marantz published a meticulously reported, 17,000-plus-word article chronicling Sam Altman’s history of lying to everyone around him, including his Silicon Valley backers, his employees, his board, and, most relevant here, the lawmakers trying to regulate AI. The New Yorker article reinforced a long-standing narrative about Altman, and about OpenAI by extension: They may spout idealistic values, but they will quickly jettison them for financial and political gain.
On its own, several people told me, the paper was a net positive for AI governance, in that it introduced new ideas into the political discourse around the emerging technology. But unless the company’s policy and lobbying operations make good on those promises, OpenAI’s critics said, it might as well just be a piece of paper.
“My guess is that there are people on the team who care about the stuff, who’ve thought really hard about this document and are proud of it, and did good work, even if it’s not addressing all of the questions that I wish it would address,” Malo Bourgon, the CEO of the Machine Intelligence Research Institute (MIRI), told me. “And there’s still the question of: Are those people gonna find themselves in the position that many previous people at OpenAI have found themselves in, where they thought the company had certain values or aligned with things they cared about, and then ended up finding out that wasn’t the case, becoming disenchanted and leaving?”
With OpenAI proposing policy, it’s worth looking back at its history with the government, which the New Yorker piece details in depth. Altman was one of the first major CEOs to publicly advocate for federal oversight of AI, going so far as to propose a federal agency to oversee advanced models in 2023; privately, however, he worked to suppress the laws containing his own safety proposals. A state legislative aide in California accused OpenAI of engaging in “increasingly cunning, deceptive behavior” to kill a 2023 AI safety bill that it was publicly supporting. In 2025, the company subpoenaed supporters of a California state-level AI bill in an effort to, as one such supporter put it to The New Yorker, “basically scare them into shutting up.” And though Altman had once worked extensively with the Biden administration to build AI safety standards, the moment Donald Trump returned to office, Altman successfully persuaded him to kill the very initiatives he’d once advocated for.
Nathan Calvin, the general counsel at Encode, an AI policy nonprofit where he focuses on state legislative initiatives, received one of those subpoenas. “What I’ve seen from their policy and government affairs engagement has just been abysmal,” he told me. While he believed that the team that wrote the OpenAI proposal, drawn primarily from the company’s technical safety research side, was acting with good intentions, he was still reserving judgment. “Will those folks remain engaged as we move from general policy principles towards the many other ways in which lobbying and government influence actually happens? Part of me is hopeful, but a lot of me is also quite skeptical about whether that will happen.” (OpenAI did not return a request for comment.)
A modest, absolutely not craven request:
Next week I plan on running an issue of Regulator cataloging the nerdiest events happening during Nerd Prom, aka the White House Correspondents’ Dinner party circuit. If you’re a tech founder, a tech company, or someone who does something related to technology, and you’re throwing an event during WHCD week, please let me know what you’re up to! From what I’ve heard so far, the tech world is about to shake up the week’s normal social dynamics (I’ve already caught wind of the Grindr party in Georgetown and the Substack party, which famed looksmaxxer Clavicular is attending), and I’m so, so excited to pull together the most bonkers “SPOTTED” column that Washington’s ever experienced.
(Again, this is all contingent on our not being at war with Iran by the end of April; if we are, I imagine no one will be up for frivolity.)
Speaking of DC reporters, this is very true of all of us: