Originally published in PaperCuts.
The way I observe Judaism looks different from the norm. Beyond not falling neatly into any particular denomination, I also follow Torah by applying my own reasoning to scripture and existing interpretations—even when it strays from common rabbinic thought. My lens for religion informs how I approach other moral and ethical questions in life, too. That includes artificial intelligence.
In Torah, Moses warns, “Don’t add on to the thing I am commanding you and don’t take away from it.” I take that seriously—not just in ensuring I observe enough, but also in avoiding adding more rules. Rabbis sometimes defend the added rules as “fences” to protect God’s commandments. But I also worry some of those fences have drifted from the garden. (That said, I respect how others—Jewish or not—each fence their gardens.)
More observant Jews tend to avoid dancing with someone of the opposite sex because it could lead to intimacy. A famous Jewish joke flips this logic: instead of avoiding dancing because it leads to intimacy, it warns against certain intimacy because “it might lead to dancing.”
My father once asked an Orthodox friend if riding a bicycle on Shabbat was allowed. The friend replied, "any idiot can pasken no." "No" is easy. You never risk violating a rule. But if the goal is neither adding to nor subtracting from the law, "no" can miss the mark. Getting to an honest "yes" is harder—and sometimes more faithful.
My religious views are probably more important than whether or not—or how often—I use AI to put the Snapchat hot dog in Renaissance artwork. But the same style of reasoning is useful for evaluating uses of AI that make us feel uneasy. Basically, we should be willing to push past some fences to better protect what actually matters—and to ask what we're protecting in the first place.
Hot off the LLM!
Admittedly, even as an avid techno-optimist, I've felt weird about taking down some of the fences around journalism—particularly the use of AI for writing. But Matt Yglesias recently opened my mind on this.
He argues that more involved use of AI could actually advance journalism’s core purpose. “It’s important to remember that the point of journalism is the journalistic outputs — bringing facts to light, creating a more informed reading public — not the process,” he writes. Yglesias notes that while he writes well and quickly, other journalists have exceptional reporting skills but struggle more with writing, and AI could help them. “For journalists with that balance of skills, the ability to dump notebooks and interviews into a context window and have it come up with some story ideas could be really useful.”
Moreover, he adds that this would serve journalism's social function: everyone understands that good journalism informs people and contributes to a knowledgeable public. If an article informs people, does it matter if some of the prose came from an LLM? Although it instinctively feels like something essential is lost, that instinct might be confusing the process with the purpose.
He notes that strong reporters used to be paired with strong writers, and budget cuts made that rarer. AI can now fill that gap. Yglesias even tested an automated Substack to show you can produce decent, data-driven journalism with almost no human labor. His argument isn't that news teams should replace all human labor, but that they should be more daring and open to using AI in more central and unconventional ways to boost their output and efficiency.
Yglesias also points out that technology has replaced human labor in other areas of the news industry without making us balk—sorting incoming mail, transcribing interviews, finding existing news stories. So maybe it's not a huge deal if AI steps in to do those things, or even to edit, write, or help journalists find the real news hooks and stories in their notes. Maybe they'll use AI to make charts and graphs to illustrate their stories. That could mean more useful, abundant, and specialized news. If that's the true goal, resisting it simply because it feels strange starts to look like another fence.
Drivers from the next state over suck
Even if Maryland/Massachusetts/Nevada/etc. drivers are the worst, it still feels kind of weird to have cars driven by nobody on the road. The weirdness of AI writing for you is different from the weirdness of it driving for you, but it's weird nonetheless. Yet the evidence is overwhelming: it saves lives. "Waymo's self-driving cars were involved in 91 percent fewer serious-injury-or-worse crashes and 80 percent fewer crashes causing any injury," writes Dr. Jonathan Slotkin in The New York Times. Waymo's ability to reduce crashes is dramatic—and in the rare fatal incidents, human drivers were at fault.
Waymo is the easy case—and yet it can still feel wrong, like something important is missing when no one is behind the wheel. Of course we can overlook feeling that something is bizarre if it means fewer people will die. But even here, lawmakers are slow to remove the fences. Some of those fences protect the wrong ideal—not safety but specific jobs.
Ironically, saying “no” to self-driving cars feels like the safe option, even though it’s statistically far riskier. It avoids the gamble, blame, and discomfort of change. But if the goal is safety, then “no” can actually move us further from what we’re trying to protect.
PrescrAIbing
Utah’s Department of Commerce recently partnered with AI platform Doctronic to use AI systems to automate prescription renewal for certain patients. This also feels weird, and it’s easy to object that medical professionals should always be the ones prescribing medications. But about 80% of medication activity is just renewals, and medication noncompliance—often driven by missed renewals—is a major source of preventable harm. Automating that process could keep people on their medications and improve outcomes.
Hesitation here isn’t about outcomes, but about control, trust, and what feels appropriate for a machine to do. Rejecting this also feels safer. But, again, if the goal is keeping people healthy, then rejecting tools that improve access and consistency just because they feel awkward starts to look like protecting the fence instead of the garden.
Dance
AI is increasingly forcing us to decide whether we care more about how something is done, or what it actually achieves. These are just a few examples of uneasy uses of AI that could do a lot of good. New technology gives us a chance to reevaluate the fences we’ve constructed—and to remember what we’re actually trying to protect. And if it leads to dancing, so be it.
One ask: If you like my work, please consider following the Abundance Institute on LinkedIn! Thank you!