David Zvi Kalman, a research fellow at the Hartman Institute and Sinai and Synapses, is one of the foremost thinkers looking at the intersection of artificial intelligence and Jewish life. While many in the Jewish community worry about the destruction of past traditions, Kalman looks to the future. He looks at what could be the next big transformation in Jewish thought—on the margins today, mainstream tomorrow.
Case in point: artificial intelligence. Kalman posits that AI, unlike previous technological advancements, has the unique ability to mimic human behaviour—a characteristic that could fundamentally alter our relationship with work, productivity and even religious practice. How are rabbis using AI today? Could the machines one day issue halachic rulings? Will it transform the role of rabbis?
Kalman joins as a guest on Not in Heaven, The CJN’s podcast about the future of Jewish communal life and spiritual practice.
Transcript
Avi Finegold: Artificial intelligence, take two. How will AI benefit Judaism? Will it be more of a danger than a benefit? We bring on Judeo-futurist and natural intelligence David Zvi Kalman to discuss AI. And stay tuned for Textual Healing as we debate the Book of Bamidbar. I’m Avi Finegold, and along with Yedida Eisenstat, we are Not in Heaven.
Yedida Eisenstat: I am very, very excited about our special guest for today, and I’m also a little bit sad that Matthew’s not here for it, but it just means that, like, I get to interrupt David Zvi more. So we have a very special guest with us today. Dr. David Zvi Kalman is not only a childhood friend of mine, he’s also a writer, entrepreneur, and one of the most prominent thinkers at the intersection of Judaism and AI. Today, he’s a research fellow at the Hartman Institute and Sinai and Synapses. He hosts his own Belief in the Future podcast and owns Printocraft Press, an independent publishing house that releases beautiful books, including the Illustrated Pirkei Avot and Canadian Noam Sienna’s A Rainbow Thread. For anyone interested in thinking about Jewish futurism, which we’re about to talk more about, David Zvi’s Substack, Jello Menorah, is a must-read. You may ask, “What is a Jello Menorah?” I think David Zvi’s answer encapsulates the entire ethos of his work. Just imagine it. All of that barely scratches the surface of the amazing work he does, though. Maybe most importantly for us, David Zvi was born and bred in the great white north, Toronto the Good. So welcome.
David Zvi Kalman: Fifth generation. Fifth generation Canadian.
Yedida Eisenstat: Wow. But not your children.
David Zvi Kalman: My children? Well, they have Canadian citizenship. We made sure to get them that. Who knows what’s going to happen over here, right?
Yedida Eisenstat: We did that too. So welcome. Welcome to Not in Heaven. For a lot of reasons, Jews spend a lot of time thinking, is this the end of the Jewish way of life? Are we on the edge of collapse? But you, I think, don’t think that. And you spend more time thinking about the Jewish future. So can you tell us more about your thinking and what Jewish futurism is?
David Zvi Kalman: Yeah, it’s very hard for old religions to remember that they started as young at some point and they started with radical ideas and new ideas and continued to thrive and survive through a repeated process of generating new and creative ideas, right? The Rabbinic Project is a creative idea. If you read rabbinic literature and you wonder, where did all these things come from? I mean, maybe the answer is from Sinai, but the answer is also probably from a lot of incredibly creative and thoughtful people trying to think about what does it mean to invent Judaism anew in the wake of the destruction of the Second Temple. And that process happens over and over and over again. I think sometimes, though, when you have a 3,000-year-old tradition, you have the sense of, well, we have an inheritance that’s so strong that we actually don’t need to do any innovating anymore, that it’s all basically done for us. And I think the opposite is true. You actually have a kind of continued mandate to invent and transform. And what that means is that you have to be constantly on the lookout for the next new big idea, the next transformation of Judaism. So Jewish futurism, as I imagine it, is basically that continued search for what are the new emergent ideas of the moment, what are the concepts that are transformative, where do we need to put our efforts going forward? And those are the ideas that are, by definition, kind of on the margins of contemporary Jewish thought. And the point is to move them from the margins to the center.
Yedida Eisenstat: Okay, my big question is that all of the 3,000-year process that you’ve described has been driven by human creativity. Now we are venturing into the brave new world of digital, I don’t know, artificial creativity that is non-human. Part of why you’re here today is to talk to us about AI’s role in this process. And I think you’re also, like, experimenting now with, you know, letting AIs participate in the creative process of the future of Judaism. And it’s been, to date, a human process. The Torah was given to us, not to Claude or Grok or ChatGPT. And, you know, they’re not chayav in Torah and mitzvot. They’re not obligated in our traditions the way we are. So why, why are we letting them into the process? Or are we?
David Zvi Kalman: Yeah, the way that AI has unfolded over the last few years is really unprecedented. I don’t know if it’s the most important technology in world history, but it’s certainly the fastest-moving technology in world history. There’s no other technology that has scaled to hundreds of millions to billions of people in a matter of years. And there’s no other technology where the distance between a theoretical breakthrough and everyone having access to it can happen in a matter of weeks. So because of that process, basically, everyone who isn’t working at a major tech company is on the back foot trying to figure out how to catch up to this. And that leaves a lot of people feeling disempowered out of a sense of, “My job is basically to respond to inevitabilities.” It leaves a lot of people feeling like, “I don’t actually know what my role is within the context of Jewish community.” For Jewish leaders, I think that’s particularly difficult because if you imagine your job as being some kind of moral voice, but the moral questions seem like they’re not worth addressing because no one’s going to listen to you anyway, then it’s a little hard to figure out what you’re actually trying to do in that moment. So I see a lot of Jewish leaders, a lot of Jewish institutions, really struggling to figure out how they’re actually supposed to respond. And it’s hard to do.
Avi Finegold: How is this different, though, than, you know, what we thought in our lifetimes was an unprecedented rollout, which was the web, right? The idea of hypertext, an innovation that linked computers at home and in offices and around the world. People were struggling then and using a lot of the same language that we’re using now. There’s always going to be something that is unprecedented, that is going to be moving us forward. So what makes this specific era different in your mind?
David Zvi Kalman: I’ll give you two answers. One answer is that it’s different because AI definitionally resembles people. And the fact that it resembles people makes it unlike any other technology because it changes people’s relationships with themselves, with their own labor, with their own productivity. That is different from what happens with the Internet. The Internet obviously can distort people’s relationships with each other, can make people devalue physical space and value virtual space more than physical space, but it does not necessarily change the way that people think about what it means to be human. Of course, the AI’s success builds on the Internet’s success. The fact that it can scale up so quickly is because we all have phones in our pockets, we’re all looking at screens all the time, and because we have accepted that interacting with other human beings can happen simply through a text interface. And so it’s a lot easier to impersonate people when you only need to impersonate the way they write and not the way that their bodies look or the way they sound or the way they smell or all the other…
Avi Finegold: Things. But we know that it can also sound the way that humans sound. Like, that’s…
David Zvi Kalman: But it can’t do the whole package yet, meaning the Westworld concept of an AI that looks fully human, I think, is still a ways off and may actually never happen. But when you restrict your senses to just sight and sound, then it’s a lot easier. So that’s one reason it’s different. But I also want to challenge the idea that Jews actually did figure out the internet. Yes, there’s a lot of Torah online. Yes, it made Torah accessible in ways that it had not been accessible previously. But I think the ways that the internet seemed so beneficial also masked its inherent dangers. A really good example of this is social media, right? Social media, I think, was seen very early on as being great—another broadcast channel, another way to get my ideas, my divrei Torah, my stuff out into the world. And that is true. And at the same time, social media can be incredibly harmful, can actually drain physical communities of their vibrancy. And because so many Jewish leaders turned to social media as an opportunity, especially an educational opportunity, they missed the chance to actually say, hey, there’s something problematic here. Maybe our job is to stand for physical space and physical connection and not simply to take this because we are worried or anxious about our own relevance. I could see the same exact thing happening with AI, in the sense of, like, oh, great, we can learn Torah in all these better ways through AI. Wonderful. It’s not wrong. But at the same time, it can’t be that the ways in which AI is advantageous to the Jewish community mean that we are unable to talk about its many, many potential dangers.
Yedida Eisenstat: So one of the things we talked about last week, when we were talking a lot about Grok, was exactly what you’ve said about how we didn’t realize the unintended consequences of the internet and social media. Do you have, like, a hard and fast rule for yourself that you won’t ever use AI as a chavruta, that you won’t replace the humans in your life or relationships with AI? Would you, like, counsel a code of ethics or a way of thinking about our relationships with AI, to sort of endeavor to protect ourselves against these unintended consequences and against making everything worse?
David Zvi Kalman: Yeah, it’s a good question. I’m still trying to figure it out for myself. You know, there are periods of time when I’m using AI as a thought partner every hour, and there are times when I kind of go a while without using it, and it’s hard for me to tell, actually, which are the more productive moments. Sometimes it feels like AI is allowing me to kind of just zoom along, and sometimes it feels like I’m actually depriving myself of precisely the moments of real clear and heavy and deep thought that were quite important. And I’m kind of restricting myself to kind of more surface-level stuff because I’m delegating all the important tasks to some other entity. So it’s a bit of a mixed bag. There was a paper that came out, I believe, last week or the week before, looking at programmers using Cursor, which is a kind of up-and-coming piece of software that incorporates AI in coding. And they discovered that actually for many programmers, maybe most programmers, using AI slows them down compared to what they would have done otherwise. So there may be a difference between the sense of speeding along and actually speeding along.
Yedida Eisenstat: In the last week or so, The Economist, on their morning news podcast, has talked about two different AI elements, both of which you just touched on. One, for tasks that we have started using AI on, the parts of our brains that do those tasks, like deep thinking and writing, are becoming weaker. They’ve started to do some preliminary experiments comparing students who use AI to write their papers with students who do the very hard work of writing. And two, like you said, AI is predicated on a rich and vibrant intellectual culture of the internet. Already, because AI brings you the answers that it crawls all over the internet for, there’s less incentive for content production, because AI is just giving it to you. And so there’s a concern that AI is actually going to kill the internet, because, you know, why would anybody visit a website anymore?
David Zvi Kalman: Exactly. Look, the process we’re going through now is one in which someone is going to try to use AI to do literally anything that people currently do. And sometimes that will succeed, and sometimes they will successfully replace human beings in that task entirely. And sometimes it will be controversial, and sometimes people will say, like, it’s not going to work. I think what it means to be a moral voice in that landscape is to kind of get ahead of it. Not just say, like, oh, we knew all along that, you know, people really did want human teachers and not AIs tutoring them instead, but to actually think in advance of what are the values we’re trying to preserve. I think it’s the process that we can’t afford to do retroactively.
Avi Finegold: I think it’s good because it is asking some really fundamental questions about what we are trying to do within the human project. What are the more novel, more interesting questions that you’re actually seeing come up when it comes to Jewish values, or Jewish law and Jewish practice, and using AI in this situation?
David Zvi Kalman: So people are using AI to explore Jewish law. I am certain. I know that within Christian contexts people are using AI to explore questions of faith, in the same way that people are using AI to do therapy. Because there are certain kinds of conversations that are awkward to have with another human being, and it feels like a kind of cheat to be able to just go to a machine and get the answers you’re going to get instead. So, like, that’s happening. It may change the role of rabbis. It may mean that people go to rabbis a little bit less. I’m, I guess, not so worried about the move of psak to AI, in part because all it’s going to do is reveal the fact that Jewish law is overdetermined, and rabbis all along had a great deal of authority to determine which kinds of sources they wanted to use or avoid in coming up with whatever legal opinion they want. So it will have some impact. I think it’s kind of inevitable. I’m not so concerned about it. I am happy for people to have a better understanding of Jewish law. Although one thing that may happen as a result is a kind of flattening, when you get psak from an AI that doesn’t really know anything, or can’t really help you appreciate the difference between, like, a 5th-century source and a 17th-century source, so you just kind of see them as names in a list or in a document.
Yedida Eisenstat: Let’s first clarify psak as a halachic decision, a halachic conclusion. One goes and asks a rabbi for a determination for practical purposes. But can AI give a psak now?
David Zvi Kalman: If people accept the psak, then yes. I mean, it’s like that joke—like a person says, you know, like, I had a dream that, you know, I was a rabbi of a thousand people. And the response is come back when a thousand people dreamed that you were their rabbi. Right. So, like, if people accept AI’s psak, then AI can give psak. Right.
Yedida Eisenstat: Okay, so that leads to another question that I wanted to think about with you. So, there are lots of different Jewish organizations that are using AI in content production now. And my question is whether or not you think that that is a reasonable use of AI at the moment. Like, we know that AI hallucinates. Is it still hallucinating? How good is it? Does it need human supervision? Like, what’s a responsible use of AI for content production?
David Zvi Kalman: Yeah, it’s a really good and a really hard question. And it’s hard because I think the way that content production is viewed varies a lot depending on where you are in the labor force. If you’re high up, it can seem like a kind of technical task to design a website, whereas if you were the website designer, you might feel like, oh, I’m actually using my creative abilities in this space. What I think is going to happen is that AI’s ability to create some amount of content is going to mean that all of the work in that sector is going to be devalued. It’s going to be seen as more trivial than it is. Some designers are going to adapt AI into their workflow to create more work faster. I certainly don’t think it’s going to mean that anyone has more leisure or is paid better. It may mean that there is more demanded of graphic designers, of artists in general, out of a sense of, like, well, you have AI at your disposal. Why can’t you make this in a quarter of the time that you would have before?
Avi Finegold: I think that that’s the natural continuation of what happened once you had graphic design packages that made everything so much easier. It didn’t mean that everybody could do graphic design (yes, everybody could use Canva); it just meant that graphic designers figured out where their niche was, which was to edit, to make things better, to show where the good design is. I think that’s very true when it comes to content, but I also think it’s very much true when it comes to rabbis, to rabbinic thinking. Because look at the dangers of, like, Rabbi Google: everybody said Rabbi Google was going to replace rabbis. Rabbis use Google now all the time to look up their sources, and rabbis use AI to look up sources, or at least to double-check where all their sources are, right or wrong. I’m not worried. I’m actually looking forward to the content that AI is going to be able to generate for Jewish life. I’m looking forward to the halachic ideas that AI is going to generate, because I think that we’re going to figure out how to use these tools the same way we figured out how to use Photoshop and Illustrator and the World Wide Web. Sometimes we didn’t get it right.
Yedida Eisenstat: So I’m really worried that everybody’s just gonna, you know, cheat their way through everything and not learn how to do things.
Avi Finegold: No, I don’t think so because stop reading books.
Yedida Eisenstat: That’s what happens with writing books.
Avi Finegold: That’s what people said, you know, in the early 2000s. They were like, oh, everybody’s gonna look up their papers online. And teachers figured out how to work around that. And that’s the space that we’re in right now, where people are saying, oh, your AI could write your paper. Well, let’s figure out a way that it’s not about whether the AI can write your paper. It’s about what the right prompt can get you, what’s the right thing you can do that shows that you’re thinking novel thoughts.
David Zvi Kalman: So I think that chain of thought only goes so far, because at some point the AIs are capable of doing more than even the best human being in a given area; they can write the paper better than you in all circumstances. I don’t think it’s an accident that many of the folks who are most excited about AI’s development also combine that with a vision for humanity’s own development. A vision for a kind of transhumanism, humanity transcending its own bounds, as a way to explain, well, what do you do with humanity when AI is capable of doing all this stuff that only we were able to do before? The idea that human beings actually just want to be able to do their own thing, that they don’t want to be pushed by AI to become something else, I think, is a serious block on AI development. And maybe it should be, right? Maybe there actually is some value in doing things in the classroom, despite the fact that an AI could do those things easily anyway. And the fact that an AI can do them should not mean that human beings must therefore become something other than what they already are. If we end up in that direction and have this kind of retroactive mandate for what humanity ought to be because of AI, I don’t think that’s the way we should be operating. I don’t think we should be operating according to the desires of the folks who are frequently depicted as being the most reckless in society, right? Like, there are so many books out now, like Karen Hao’s Empire of AI, which just came out a little while ago, about how this is the constant operating principle of all these companies: they want to move as quickly as possible, and they are intentionally pushing aside safety concerns. I don’t think we want to be operating at that speed. I don’t think we can operate at that speed.
Avi Finegold: The thing that I’m finding is that we’re in a weird place. The beginning of the Internet was very anti-capitalist, anti-corporate, and it ended up being subsumed by that. What happens in this case with AI, where it actually starts with large corporations that are profit-motivated, but people are so aware of it now that we’re trying to work around it? That’s where I’m curious what you think. How do we deal with the fact that we’re in the opposite situation from where the Internet began, but we’re also so much more aware of the dangers of what could possibly happen?
David Zvi Kalman: I think the way that AI technologies empower individuals is actually very troubling. There has been a progression over the course of the 20th and now 21st century of giving individual human beings more and more ability to cause destruction without any kind of support mechanism. This happens through the development of explosives, it happens through the development of firearms, it happens through the Internet, it happens through AI. There are ways in which you can ruin someone’s life easily these days, by exposing things about them, by doxing them, in ways that would not have been anywhere near as possible previously. So I am less optimistic about individual use of AI, because I see all the ways that it creates a kind of unstoppable ability of individual human beings to cause harm. I know that in tech circles a couple of years ago there was a statement put out about AI being something like a nuclear weapon, an existential risk to humanity. And that may be true, but in a lot of ways, AI is also a little bit more like an AK-47, more like an assault rifle, in the sense that it gives individuals huge amounts of power that maybe should be more curtailed, maybe should be used only in certain circumstances, rather than basically provided to anyone who has an Internet connection.
Avi Finegold: Isn’t that balanced out, though? Aren’t the benefits that we’re going to get from AI actually going to balance out whatever possible dangers?
David Zvi Kalman: That may be true, but this is not a law of nature. It’s not a law of nature that technology always does good for humanity. And I think that a lot of AI’s ability to seep so quickly into American society relies on decades and decades of goodwill that has been built up around technologies actually offering good things to humanity, you know, making us healthier, making us more productive. And I think that goodwill is quickly being spent down. A lot of people are saying, like, wait a minute, this isn’t like past technology, even if it is the frontier. I mean, there was a Pew survey a couple of months ago. Most Americans do not think that AI is going to make their lives better, even in relatively uncontroversial places like medical contexts.
Yedida Eisenstat: I think a lot of people would actually rather it not be mainstreaming in the way that it is but feel like, oh, if I don’t get on this bandwagon, then I’m gonna be lost forever and I will never have another job. And like, I think that it’s just, it has so quickly infiltrated all, you know, probably white-collar workflows, but also, you know, not white-collar as well. And I think it’s scary, right? I think people are really worried about it.
David Zvi Kalman: Yeah, I think that’s right. And in terms of what you’re saying, that social pressure, those network effects that kind of keep people in the system, that’s actually, that’s precisely why it is so important for religious institutions to have a moral voice here. Meaning, like, who is going to be the one to say these technologies are dangerous and wrong? Not just on a kind of regulatory level, which is like a kind of big question about how good governments are going to be at regulating AI, but just on a moral level, on a kind of ethical level. If there’s no one actually standing up and saying regularly, our community should be very wary about using these technologies, then the only ones who have a large national, international voice are the tech companies themselves. And for some reason, a lot of religious institutions, not just Jews, but Catholics, Protestants, have really struggled with what it means to have that major voice in a way that affects people’s behavior around AI. I still see that network effect of like, I gotta use it, because if I don’t use it, then, you know, my job, my career is in the toilet.
Yedida Eisenstat: So what would you like to see leaders advocating for? Right. Like, I asked before about sort of like an ethics or a moral code and a policy. Like, it’s not just around human creativity and human work because it can’t be just about the economics, it has to be about more. So what are the big questions that inform how you think about this?
David Zvi Kalman: Yeah, I think it’s a big open question. I think that’s like the question. I don’t know that there’s an easy answer to it. I think at some point you have to start talking about don’t use X system or be careful of using X system to actually kind of create those guidelines. And I think everyone has been afraid of saying anything like that because it sounds impossible. And I think to some degree it is going to be impossible right now, but at least you have to start.
Avi Finegold: You mean, like, don’t use DeepSeek, or don’t use Grok, or don’t use…
David Zvi Kalman: Them in specific contexts, right? Like, to say, these are contexts in which you should not be turning to AI, even for a chavruta or whatever it happens to be, as a way to establish: here is another pole, one that stands in opposition to using it for everything. Something for people to at least talk about, to think through. I haven’t even seen that. I’ve seen a lot of, like, well, let’s talk about it. You know, nuance. AI is good in these contexts. It’s a neutral technology. All the rest. I think actually at some point you have to say, and I’m not the person to say it because I’m not a religious leader, you know, I’m just some guy, that you should not be using AI in certain contexts. Because otherwise people are just going to keep asking, like, well, what do you want us to do? That’s the thing you do. The thing you do is: don’t use the technology.
Yedida Eisenstat: Like, I jokingly threw out, not as a chavruta. So it’s interesting, I read your piece about using AI as a chavruta and doing that experiment. You are using it very differently than I’ve been using it. I’ve been using it as a tool when I’m crunched for time, or I’m really, really, really, really struggling to read some Rishon who’s writing in, like, a pidgin of Hebrew and Aramaic, with, you know, other languages sprinkled in as well, to help me muddle through and make sense of a text. And then I use the suggested translation, reading it back against the original, to develop my language skills. So I’m very conscious, and in my own usage, I try really hard to only use it as a tool, or to give me a first draft of something, and then I take another crack at it. But you are using AI really as a conversation partner, which is fascinating. But you also wrote really thoughtfully about how a chavruta is more than just a dictionary. Right? A chavruta is somebody who gets to know you and your questions and the way you think about things, and what you know, and complements what you know and what you don’t know. And I don’t know, I wouldn’t want to be substituting my human relationships with a computer. And so I wonder, as we move into thinking about schools, which have to do major adapting to this technology: do you have wisdom for educational spaces, or Jewish educational spaces?
David Zvi Kalman: Yeah. So just practically speaking, AI, ChatGPT especially, I think is quite good at understanding and analyzing Jewish sources, and I think people are going to end up using it for that regardless. I think it is worth entertaining the idea that we should not be using AIs that are built on stolen information, or information that has been used without consent. Even the legal pieces aside, I think there really is an open question about whether we should be treating the Internet as default allowed, which is basically the way that a lot of AIs are developed. Within a chavruta context, it’s clear there’s a kind of asymmetry in AI’s ability to be a good chavruta. It’s good if you ask it questions. If you say, like, what does this acronym mean? What is this referring to? It’s often quite good, even at difficult questions. And at the same time, if you say, please recall this passage from the Mishnah or the Talmud, it will often make something up entirely. And that asymmetry may never go away. At the same time, because AI is designed to please people, it’s also going to struggle to really challenge them, to challenge them in the ways that chavruta partners are often looking to each other to be challenged. I don’t think that’s ever going to happen. Like Avi, not an AI.
Avi Finegold: How do you know?
Yedida Eisenstat: No, but Avi challenges me regularly and.
Avi Finegold: It’s good, you know. Speaking of this challenge: what if we start by asking, what are the things that we care about as humanity, as a Jewish community? What are the values that we think are important that AI might put in danger? Asking those things first, and then going to people and saying, well, these are the things we value, and these are ways in which AI can actually abuse them. So be careful when you’re using an AI; think about these values in advance. You don’t want to use AI as your therapist, because part of the value of therapy is being able to talk to another human about it. The AI offers a shortcut around opening your soul to another human who can then have empathy for you. So the AI is short-circuiting that, and then it’s not doing the right thing. So how do we ask those types of questions, as Yedida was saying, about values, and implement them in a positive way, rather than saying everything is bad?
David Zvi Kalman: So I think the first step is to admit that we actually don’t know what the right values are. I find it helpful to think about questions like AI’s moral significance in terms of infection, in terms of pathogens. When an infection enters your body, there are two systems that fight against it. There’s an initial, very fast-acting, innate system that is good at responding to basically everything, but it doesn’t catch everything. And sometimes you need a deeper, adaptive system that says, oh, I remember this infection from before, I need a kind of specialized response. AI is in that second category. Not to say that AI is always a pathogen, but it is the kind of thing where, yes, we do have some moral values that can stick to it in some ways, like, oh, we shouldn’t harass people, privacy is a value, things like that. But I don’t think any of those fully capture AI existing in society. I think it requires actually developing new values, values that are connected to existing ones but are specifically designed to address AI. I think we probably need those for the Internet as well, and for thinking about virtual spaces, but I don’t think we have them yet. One piece of that might be, for example, to make sure that we avoid AIs that are designed with addiction in mind. This has become really quite frequent in the last 20 years or so, where you have companies that design products not around what the user wants, but around how to induce want. The historian David Courtwright, in his book The Age of Addiction, calls this “limbic capitalism”: trying to really make people addicted to whatever your product is. There are people doing this; people did this with Facebook, you know, using the principles of slot machines to design social media.
Avi Finegold: A great example is, I don’t think, see, I don’t think Steve Jobs intended people to be addicted to the iPhone, but as soon as the iPhone got invented, everybody realized, oh, we can get…
David Zvi Kalman: People addicted to this, to the device and to the apps on it, right? And the same thing is going to happen with AI, exactly. It’s very easy for that to happen. I don’t think we yet have a way of treating addictive tools as something we should be looking out for in and of themselves, and not just in the context of some specific area of life. I think we need to, because it is becoming a playbook for more and more companies. But I don’t think we’re there yet. So that requires actually developing new values, which are not taught in school, which have not been developed yet.
Avi Finegold: Or maybe even recognizing that there are very few universal values and a lot of values are community contextual, meaning the values within the broader Jewish community are different from the values within the traditional egalitarian Jewish community versus the Haredi community versus the liberal, you know, Reconstructionist communities. Each of those has its own set of values for approaching AI in general, or the idea of it…
David Zvi Kalman: That might be true, but honestly, this stuff is so basic and so universal that, forget about like Jewish denominations, I have a hard time differentiating between what Jews, Catholics, Muslims think about AI. A lot of them end up saying very similar stuff because they’re all confronting the same issues and they’re all struggling to find within their traditions clear prescriptions, clear sources to say, like, do not act in this way around this device. Everyone is struggling. It’s not just Jews.
Yedida Eisenstat: I think that’s right. This whole conversation about AI is especially interesting because we are people who care about humans. Like, David Zvi and I are trained scholars, and Avi, you are too, to an extent. Right? We study the human experience, and we’re sort of coming up against our limit. We’re meeting an artificial technology that is challenging us to look in the mirror and say, okay, what’s making us human if this thing can emulate us so perfectly? Last week on the podcast, Avi and Matthew and I had a really intense conversation about Grok and anti-Semitism. I made the argument that we’re seeing the realization of a moment that Deborah Lipstadt, I think, prophesied in the early 90s, where the dark belly of the Internet and all the anti-Semitism that lies therein is now becoming mainstreamed, largely, I think, because of AIs and their not yet fully developed ability, correct me if I’m wrong, to differentiate between reliable, vetted sources of knowledge and the other garbage. That terrifies me. I am hoping that you might either tell me that I’m being silly and have nothing to worry about or, you know, validate my fears.
David Zvi Kalman: I think it’s a real concern. I think the silver lining is it’s not just a Jewish concern, meaning, like there’s lots of groups that should be concerned around biases in the data sets used to train AI systems and an over-reliance on the Internet, or treating the Internet at face value, which doesn’t go well for Jews, doesn’t go well for lots of people. So it may be that anti-Semitism is what wakes Jews up to the fact that this is their problem too, but it’s obviously not a uniquely Jewish problem.
Yedida Eisenstat: That doesn’t really make me feel better.
Avi Finegold: You’re gonna go down along with everybody else.
Yedida Eisenstat: Well, so, no, but that’s really interesting, because it gets to the other problem: what should it be trained on? Right. And authors and publishing companies. You have your own publishing house, right? You don’t want to just give everything over to the AIs. But I would much rather the AI be trained on real, legitimate scholarship than the bogus stuff masquerading as scholarship all over the Internet. So I’m torn. That, you know, poses an ethical quandary, and I think, like…
David Zvi Kalman: Actually, after I wrote that piece, a friend of mine who works at Google emailed me saying like, you know, you could also just tell the AIs to not be anti-Semitic.
Yedida Eisenstat: Right.
David Zvi Kalman: Like, you know, forget about the training data, just say like, don’t be an anti-Semite, which…
Yedida Eisenstat: But you have to be aware of that. It has to even be something that’s on your mind.
David Zvi Kalman: You need to make sure that’s…
Avi Finegold: That’s what I was saying last week, was that at the end of the day, we’re going to figure out the same way that, you know, we figured out 4chan is a bad actor. At some point, you’re going to figure out which AIs are the bad actors and tell people to avoid those things because you can’t rely on them. What does your daily AI practice, if I can use that term, like…
David Zvi Kalman: Look like? I don’t know that my usage is particularly interesting. I use it to do some kind of translation if I need to summarize something. If I’m looking for some, you know, quick Hebrew to English or Arabic to English translation, then I can use AI for that as well. Sometimes, if I’m trying to get research in a new area, I’ll use AI or try to kind of get a sense of a field. Yeah, I find that very useful. As long as you make sure that the sources that it’s coming to you with are actually…
Avi Finegold: It’s all about looking up the sources and making sure that everything’s good, the same way you do with Google and Wikipedia and all that. Yeah. So this is what I’m looking forward to. I know that AIs do not have a long memory yet, and I can see that, but they have some sort of memory. I’m looking forward to the point where the AI not only has a long enough memory to know about me, where it knows my capacity for Khumra (stringency in Jewish law), for Kula (leniency), how much knowledge I need, how much I need to be prompted, how much I should be able to do on my own. Not because I have to tell it, but because I can just ask, you know, what should I be doing here? And to have the AI say, look, I know where you’re at when it comes to halachic practice. I think you probably could be pushed a little bit on this. When it comes to what you’re doing today, I think you can be confident on this side, because I know who you are and the way you read sources. But I’m going to show you all the work.
Yedida Eisenstat: Avi, we won’t need you anymore.
Avi Finegold: That’s fine. I’m okay.
Yedida Eisenstat: You’ll be obsolete.
Avi Finegold: I am fine with that because I think it’s about the practice. It’s not about people needing to come to rabbis.
Yedida Eisenstat: Somebody was trying to get me to write something for them, and months and months ago, they went to ChatGPT and said, “Can you write XYZ in the voice of Yedida Eisenstat?” It was eerie. It was totally creepy. I mean, I don’t know that my voice is so distinct and unique, but it was kind of creepy that it was able to do that.
Avi Finegold: First of all, I want to ask: do you think that’s two years ahead, 100 years ahead, or is it never actually going to happen, so, like, you should really just keep to your sources and stick to doing stuff like that?
David Zvi Kalman: Because, I mean, I’ll be, like, a little bit cynical about this. What you’re describing is like a fancy way of talking to yourself, right?
Avi Finegold: You want the AI, but, I don’t know, I want the memory. I want the thing that has more rationality than I do, that will be honest with me. That’s often what you want when you talk to your rabbi, and I know, because people ask me all the time. I want the rabbi to be a reflection of what this person is saying, and I want to have that refined, to say, well, I’m not in a position in my life right now where I’m really working on myself in this area, but I’m working on that area. And the AI is going to be able to help you and say, “Hey, you know, I saw you davened only for 20 minutes this morning. Tomorrow you should probably aim for a few minutes more, because you’ve been working on this and you’ve been heading in a good direction.”
David Zvi Kalman: Oh, that exists already. There’s a Muslim app for that.
Yedida Eisenstat: Yeah, but I’m saying the Mensch on…
Avi Finegold: The shelf, I want. Yes, I want it to be everything but also to give me a break when I know that I need it.
Yedida Eisenstat: You want it to be your spiritual policeman? Yeesh, I do not want that.
Avi Finegold: But in a good way, in a way that’s the most positive for who I am.
David Zvi Kalman: I’ll just say there’s a fine line between trying to turn AI into some kind of external consultant that is actually useful for you and actually just externalizing some kind of internal voice which maybe should require some more reflection before getting externalized in the first place.
Yedida Eisenstat: I mean, I’m worried that we’re transferring authority to AI and it is not worthy of that. I’m worried that we’re doing that with psak (halachic rulings). I’m worried that we’re doing that with content creation. I’m worried that we’re giving it a level of authority that is not yet warranted. And that is the last word, because we have to let David go. This was such a treat. Thank you so much for joining us on Not in Heaven.
David Zvi Kalman: Thank you. This is so much fun.
Avi Finegold: For the long-awaited discussion about the book of Bamidbar and the wilderness, we had to pull Matthew back from the wilderness of his vacation in Northern Manitoba just to be able to talk about this. I couldn’t do this without all of us here. So, Matthew, thank you for joining us.
Matthew Leibl: I’m excited to be here, but this better be good because, like, I had to make a bunch of scrambled eggs and cut up fruit this morning just to keep the kids at bay. But I’m here. I don’t want to be accused of not being a team player, and I could not miss this opportunity to hear why Avi thinks what is clearly the third best book in the Torah is the best book in the Torah. But let’s hear, let’s hear.
Avi Finegold: Okay, so Bamidbar to me is the most interesting. You have Bereshit and Shemot, which are sort of the beginnings, the very early years. It’s like those first chapters of the biography that everybody wants to skip because they haven’t gotten to the recording of the first album. It’s important, it’s interesting, there’s some stuff that might be formative, but you want to get to the meat of it. And Bamidbar is the nation of Israel emerging as a personhood, almost like an identity. I see the book of Bamidbar as B’nai Yisrael’s adolescence. Right. It’s their teen years. And the things that happen in Bamidbar really map onto that, in terms of, like, the rebelliousness of certain individuals. Right. Not just Korach but Miriam and, you know, the challenging of authority. And you get the asserting of, like, we’re intelligent people, we know what we’re doing. And sometimes they’re right, when you get Pesach Sheni and the daughters of Tzalafchad. And that to me is the book of Bamidbar in a nutshell. It’s the adolescence of B’nai Yisrael, from their second year in the desert, when they’re barely anything, to the 40th year in the desert, when they’re ready to become a nation and go into the land of Israel. Boom.
Matthew Leibl: I’m speechless. I mean, I just. I thought everybody’s adolescence and teen years were the years they wanted to, like, skip over and forget. Because you’re covered in acne, you’re hormonally trying to figure yourself out, and I think that you need to, like, not dwell on that as the highlight of…
Avi Finegold: So hold on. Arguably you might be right. And I’m sure B’nai Yisrael would want to forget the massive sins they committed and all the problems they caused. But the teen years are often the most interesting years people have. That’s why there are so many great teen movies.
Yedida Eisenstat: So I’m going to switch out interesting for formative. And I’m going to say it’s liminal, it’s in between. You know, I hear your argument that it’s formative and arguably interesting, though we’ve already discussed that I don’t think it’s that interesting. But it’s the in-between.
Avi Finegold: I think Devarim is the in-between. It’s like a week, and it’s just a recap of everything in a new light. I really like Devarim, given where they have come over the past 40 years. That’s what makes Devarim interesting.
Matthew Leibl: Yedida, why do you like Devarim? I feel like I have to kind of smash this whole thing at the end. But I want to hear this first. Why do you like Devarim?
Yedida Eisenstat: It makes sense that different books speak to us differently at different stages of our life. So it could be that. I don’t know how long Avi has really liked Numbers, but maybe Bamidbar is particularly resonating with him because he’s raising teenagers right now.
Avi Finegold: I’d say about five, seven years. I was like, oh, lots of people are writing interesting stuff about Bamidbar. Right. Aviva Zornberg had a great book about Bamidbar, which was very different from her other books.
Yedida Eisenstat: My friend Angela Roskop Erisman’s book just came out about wilderness narratives, so…
Matthew Leibl: Okay, okay, careful where you step.
Avi Finegold: We’re dropping a lot of names here.
Matthew Leibl: Why do you like Devarim?
Yedida Eisenstat: So I used to like Devarim because I sort of liked its simple theology of, like, if you do X that you’re supposed to do, there will be blessing and rain, and if you don’t, then there won’t be. Obviously, everything is much more complex than that, and that’s not actually how things work, and that’s a problem. But there’s something appealing to me about that simplicity. I also like the re-narration. I like comparing the retelling of the story and trying to figure out why the emphasis is different, and sort of the literary complexity there. And I like the laws. Right. For me, Devarim is like the perfect mix of all the things. It’s got the narrative and the laws in the right balance, with, like, a clear theology. And it’s got good advice, right? And Israel is on the cusp. And yeah, it’s just a week, so it’s a moment to look back and look forward. And I, as a historian, am thinking a lot about how the past informs the present and future.
Matthew Leibl: I’m not prone to name-calling, but you guys are so incredibly lame. Like, this is just hurting my soul. No one in their right mind would make a defense for either Devarim or Leviticus. They’re both tied for fifth. They’re not even tied for fourth. They’re, like, heavy on laws. They’re both boring. Boring. Any bar mitzvah student who got Re’eh or got Shoftim…
Avi Finegold: I mean, like, I got Va’etchanan. So, yeah, I know. No, no, although I got the consolation prize of having the Shema and the Ten Commandments in my parsha.
Matthew Leibl: It’s great, but it’s also in the middle of the summer when no one wanted to be in shul.
Avi Finegold: Like, this is true.
Matthew Leibl: Exodus is clearly the best book. Everything that’s formative about the Jewish people happens in Exodus. The Passover story, the plagues, Moses, the burning bush, liberation, the parting of the sea, the revelation at Sinai, the Ten Commandments. Like, everything massive and dramatic and important in the beginning of Judaism starts in that book. I don’t even think it’s an argument. And this is the other thing: Bereshit. This is not a book, like a biography, where you ease into it. We’re off to Adam and Eve and creation right off the bat. The snake, Cain and Abel. It is amazing story after amazing story. The character development in Bereshit is unlike anywhere else in the Torah. You get the patriarchs, all the Joseph stuff at the end. I mean, that was good enough for a musical. I don’t even know what to do here. Like, I feel like I’m taking crazy pills with you guys.
Avi Finegold: Here’s my Genesis pushback. Okay, Bereshit, the Genesis pushback. If you are a real ma’amin, right, super faithful, taking everything in the Torah as literal, and you’re rational, you have to go through hoops to figure out how to make Bereshit work, with all of this fantastical stuff at the beginning. If you’re, like, a rationalist and a more liberal thinker, and you say, well, Bereshit was inspired by events, it wasn’t literally true, it’s a parable, it’s this and that, then you have to sort of explain away all of this miraculous stuff. So I’m putting Genesis to the side. And everyone is so focused on Shemot that you can’t help but call it overrated. To me, the point about Bamidbar is that it’s underrated, and therefore it’s slept on. You should really pay attention to it more, instead of paying so much attention to Shemot. Let’s take a step back: all the stuff here is so much more relatable to us than, you know, we left Egypt in one night, we saw thunder and lightning, the sea was split. All of that is so beyond what we can relate to, whereas Bamidbar is so, so relatable.
Yedida Eisenstat: Oh, I don’t think so at all. I find the miracle narratives totally not relatable, and I can relate to the dysfunctional family so much more. Yeah.
Matthew Leibl: And the birth of the nation. I don’t know. I feel like, Avi, you’re trying to be like, yeah, the Beatles are the greatest band of all time, but because everybody says they’re the greatest band, they’re a little overrated. So we have to argue why the Kinks were a better band. No, it’s just. It’s the Beatles, man. Exodus is the Beatles.
Avi Finegold: Come on, I’ll jump on board with the Beatles. Then I have to figure out what band Devarim is. It’s not… sorry, but, like. Okay, but you get what I’m saying, though, that it’s severely underrated.
Yedida Eisenstat: I think I take your point that it’s underrated and warrants a second, more critical look. By critical, I mean, like, let’s figure out how these stories can speak to us. I still would not raise it to the number one spot.
Matthew Leibl: I also think you guys need to remember that you’re very sophisticated readers of the Torah who have gone through this many times. I keep thinking about what grabs an audience, or a reader coming to it for the first time. What’s the part of the Torah that’s going to really suck them in, that’s going to, to use the word, resonate, or really have meaning for them? And I think you’re going to have better luck with the stories.
Yedida Eisenstat: Certainly, with respect to the Torah, the laws are framed in terms of the narrative. Right. The narrative is the reason that God can grant us law. So they’re inextricable. Narrative is a great place to start, I agree, to draw people into the human drama that’s compelling in the Torah. And I’m not hierarchical here about whether narrative or law is more compelling. But one thing that I really, really do appreciate is the interrelationship of the two, which I find extra super compelling. The most basic example of this is: you were slaves in Egypt; therefore, be especially sensitive to the call of the orphan, the widow, and the stranger.
Matthew Leibl: Just from the history of, like, teaching bar mitzvah and bat mitzvah lessons, kids just seem to love story much more than they love law.
Yedida Eisenstat: Right.
Matthew Leibl: Even if the rules are things that you can find a way to contextualize.
Avi Finegold: In their life, like, these are the mitzvot.
Matthew Leibl: These are the foundations. The stories of dysfunctional families and the stories of people being oppressed and the stories of people being freed. Like, the stories, the stories.
Avi Finegold: But to be fair on that point, there are so many stories in Bamidbar. You just have to get these stories. You gotta make them relevant.
Yedida Eisenstat: Hold that thought. We’re gonna come back to it.
Avi Finegold: Okay, let’s hear what you have to say. On to our Textual Healing.
Yedida Eisenstat: So we have a double parsha this week. At the beginning of Parshat Masei, we get a long enumeration of all 42 places where the Israelites traveled and encamped during their time in the desert. And nobody will be surprised that I’m really interested in the very long Rashi on that verse, chapter 33, verse 1. Rashi presents two different interpretations there. The basic question, and Rashi asks it, and lots of other people ask it too, is: why do we have this here? Basically, since the exodus from Egypt, towards the beginning of the book of Exodus, we’ve been following Israel on their journey. Why now do we need a 42-place enumeration? And by the way, a bunch of those places are not actually listed in the journey we’ve read about so far. So what’s this for? What’s the purpose of it? Rashi’s first explanation is that this is to highlight God’s mercy on the Israelites. I’m a rabbi now, so I can say the Jewish people. And the reason is that it could have been a lot worse: of those 42 places, 14 were just in the first year, and eight were just in the last year.
That leaves you with 20 places, 20 separate trips over the course of 38 years. Eh, that’s not so bad. It could have been a lot worse. So in that reading, I think Rashi is reminding us that it’s a matter of perspective, right? It could have been a lot worse. See the history as glass half full, and see all the mercy, the kind things, and the miracles that God did for you in those 42 places. The second interpretation is an analogy from the Midrash Tanchuma, where Rashi likens it to a king whose son is sick. The king takes his child and travels, travels, travels, to bring him somewhere he will be healed. As a parent, right, I’m guilty of always rushing my kids: get your shoes on, get in the car, buckle yourselves, get where we’re going, get where we’re going. And it’s only after we’re there that I can take a breath and think about all the stuff that happened on the way, the stops along the way. An interpreter of Rashi suggests that this is about God’s love for B’nai Yisrael. It’s also God reminding us of all the things that God did for us, as expressions of God’s love, on that 40-year journey. So anyway, the argument that I’m going to make is about the uses of history, matters of perspective, and the stories we tell ourselves, and how whether or not we see the glass as half full informs our actions going forward.
Matthew Leibl: That’s a cool idea, but you still had to squeeze a lot out of a book that ends with a whimper. Deservedly so.
Avi Finegold: I don’t know, you know, like I.
Matthew Leibl: Said, I think that’s great. Matot-Masei is just, like, a snooze fest. You find beauty in subtlety and nuance.
Avi Finegold: I want the hits.
Matthew Leibl: I want the blockbuster hits.
Avi Finegold: All your albums, all the music you own, is various greatest hits. You have REO Speedwagon’s greatest hits, Michael Jackson’s greatest hits.
Matthew Leibl: I understand that Bohemian Rhapsody and Stairway to Heaven are classics for a reason. You know, I don’t start trying to say, oh, this incredible song by Emerson, Lake and Palmer should be the best song of all time. It just doesn’t fit. You gotta go to the masses. You gotta go to the masses. The Torah is for the masses.
Avi Finegold: It’s not in Heaven. Thank you for listening to Not in Heaven this week. Our producer and editor is Zachary Kaufman. Michael Fraiman is the executive producer. To catch every episode, subscribe wherever you get your podcasts. If you’d like to support our show, consider making a donation to The CJN at thecjn.ca/donate.
Show Notes
Credits
- Hosts: Avi Finegold, Yedida Eisenstat, Matthew Leibl
- Production team: Zachary Judah Kauffman (editor), Michael Fraiman (executive producer)
- Music: Socalled
Support The CJN
- Subscribe to The CJN newsletter
- Donate to The CJN (+ get a charitable tax receipt)
- Subscribe to Not in Heaven (Not sure how? Click here)