I’m going to try something a bit new here and take a close look at the data analyses from a single study. I chose this particular study, The Impact and Lessons of Taglit-Birthright Israel by Saxe et al., because someone asked my opinion about it and I thought it did some things very well. I think the data supports some interesting findings, although it includes some all-too-common misinterpretations of statistical results. I want to say at the outset that, although I have some critiques of this study, topics worth studying rarely give easily interpretable results. The authors make a positive contribution to the discussion. I’m also impressed that Taglit-Birthright Israel has worked to include data collection and analysis as part of their mission. Their data collection and fairly frequent publications are what make quantitative discussion of Birthright Israel possible.
The primary goal of this specific study was to examine whether participation in Taglit-Birthright Israel affected attitudes towards in-marriage vs intermarriage (and later marriage rates) and views on raising children as Jews regardless of the spouse’s religion. Examining actual marriage rates is now possible because enough people in the 2001-2004 cohort of Birthright attendees have married to run statistical analyses on their marriage choices. Most of the examinations of attitudes come from surveys conducted 3 months before and 3 months after 2008 Birthright trips.
The article starts with a nuanced discussion that puts concerns that intermarriage will destroy Judaism in the context of existing research. I was surprised to learn that 15.4% of the 2001-2004 applicants to Birthright Israel had a non-Jewish parent while 24% of the 2008 applicants had a non-Jewish parent. That large a jump in just a few years means that Birthright is increasingly attracting children whom doomsayers consider lost to Judaism.

Their main analyses use something called ordinal logistic regression. This is a regression technique for ranked responses (e.g. “Thinking about the future, how important is it to you to marry someone Jewish?” 0=Not Important, 1=A little Important, 2=Somewhat Important, 3=Very Important, or “For how many years did you attend a [supplemental or day] school?”). The fitted model specifies how a one-level (or one-year) increase in a predictor, holding the other predictors fixed, changes the odds that a person gives a higher-ranked response to the outcome question. For example, how much does an additional year of supplemental school, or participation in Birthright, change the odds that a person will consider in-marriage “A little important” rather than “Not important”?
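To make the mechanics concrete, here is the standard proportional-odds form of the model, in my own notation rather than anything taken from the paper:

\[
\log\frac{P(Y \le k)}{P(Y > k)} = \theta_k - \beta x
\]

Here Y is the ranked response, θ_k is the cut point separating level k from the levels above it, and x is a predictor such as years of supplemental school or Birthright participation. A one-unit increase in x multiplies the odds of answering above any given cut point by e^β, which is how a regression coefficient translates into a change in odds.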
The authors present two types of results: (1) The effect of Birthright on people’s actions or attitudes. (2) The relative effect of Birthright compared to other Jewish education and social programs.
(1) The effect of Birthright on people’s actions or attitudes.
Pretty much any way the authors slice their data, Birthright participation increased the odds that someone from the 2001-4 participation cohort married a Jew, with a greater odds increase if the participant has a non-Jewish parent (Table 2). Participation also increased the odds that participants wanted to raise Jewish children, even if the person had a non-Jewish parent (Table 3). However, some of this effect is probably driven by the fact that the same questions weren’t asked before the Israel trips in 2001-4, so the people who decided to participate may be the people who cared more about marrying a Jew to begin with.
In 2008, they asked the same questions before the trip. By far, the biggest predictor of how much participants valued in-marriage in the post-trip survey was how much they valued in-marriage in the pre-trip survey. This is a reminder that we need to be careful not to over-interpret the 2001-4 results or results from any other survey that doesn’t ask similar questions before whatever intervention or program is being evaluated.
The data they collected in 2008 allows the authors to control for many factors, including years and types of Jewish education, ritual practice levels, movement affiliations, and Jewish social connections. Even after including all of these other factors, Taglit-Birthright Israel participation shows an effect on the participants. The left columns of Tables 4 and 5 show an increase in interest in in-marriage and in raising Jewish children from 3 months before to 3 months after the trip. The right column of Table 4 even shows an increase in interest in finding a Jewish spouse for people with a non-Jewish parent. The authors don’t report statistical significance levels for any of these results, but the size of the Birthright participation effect seems large enough to be of interest.
I do want to note here that ordinal logistic regression assumes that a predictor shifts the odds by the same amount at every cut point of the response scale (i.e. it has the same effect on the odds of moving from “not important” to “a little important” as on the odds of moving from “somewhat important” to “very important”). If this isn’t true and the non-participants had a different distribution of responses, it has the potential to bias the results. Figure 10 in a related publication, The Impact of Taglit-Birthright Israel: 2010 Update, shows a clear distribution difference on in-marriage interest between participants and non-participants: 48% of non-participants think in-marriage is not or only a little important, compared to 32% of participants. The authors could show whether this distribution difference is a problem by dividing these data into subsamples based on pre-trip responses and separately showing the distribution of changes for each group. They could also show the distribution of responses (or the distribution of changes) pre AND post trip for participants and non-participants. Since only a fraction of people would show any change over 6 months, this would help clarify which groups of people are driving the results.
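Here’s a minimal sketch of the kind of table I have in mind, assuming a respondent-level file with hypothetical columns pre_importance, post_importance, and participant (none of these names come from the study’s data):

```python
import pandas as pd

# df: one row per 2008 respondent, with hypothetical columns:
#   pre_importance  - pre-trip answer (0 to 3) to the in-marriage question
#   post_importance - post-trip answer (0 to 3)
#   participant     - True if the person went on the trip, False otherwise
df = pd.read_csv("taglit_2008_survey.csv")  # hypothetical file name

# Change in the ranked response over the ~6 months between surveys
df["change"] = df["post_importance"] - df["pre_importance"]

# For each pre-trip response level, show the distribution of changes
# separately for participants and non-participants
summary = (
    df.groupby(["pre_importance", "participant"])["change"]
      .value_counts(normalize=True)
      .unstack(fill_value=0)
)
print(summary)
```

Even eyeballing a table like this would show whether the apparent Birthright effect is concentrated among people who started at particular response levels.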
One other thing that stands out to me is that they include having a non-Jewish parent in the model of what predicts intermarriage (Table 4), but not in the model of what predicts wanting to raise children Jewish (Table 5). The omission, without an explanation, seems odd.
(2) The relative effect of Birthright compared to other Jewish education and social programs.
The authors fall short in comparing Birthright to other programs. They properly designed their analyses to model the unique effects of Birthright. They then used the same models to make claims like, “These data indicate that in order to equal the impact of Taglit on importance of marrying a Jew, one would need to attend… Jewish overnight summer camp for 12.8 years (well beyond the number of years most Jewish summer camps offer sessions).” If I found a result showing that two weeks in Israel on a Birthright tour is so transformative that it changes people more than an unobtainable number of years in Jewish summer camp (or 4.8 years of day school or 12.0 years of supplemental school), I’d worry. Unbelievable results are usually wrong.
These numbers bother me all the more because I can’t figure out how they were calculated. The authors say they simply divided the coefficients on participation by the coefficients on Jewish education (footnote 16), but I can’t get anywhere near these values with the results presented in Table 4. Even using the values from Table 4 (middle model), two weeks of Birthright has the same effect as 10 years of Jewish summer camp. This doesn’t make sense.
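For what it’s worth, the calculation footnote 16 describes is just a ratio of coefficients. With purely made-up numbers (these are not the study’s coefficients):

\[
\text{years of camp needed} = \frac{\beta_{\text{Taglit}}}{\beta_{\text{camp, per year}}}, \qquad \text{e.g. } \frac{0.64}{0.05} = 12.8
\]

Since it’s a simple division, it should be reproducible directly from any published table of coefficients.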
I suspect that the core problem is that this analysis was designed to test for a Birthright effect after controlling for many other factors, and this isn’t a good way to identify how the Birthright effect compares to those other factors. A good demonstration of this problem is in the left-most column of Table 4, where the authors also include the pre-trip importance of in-marriage as a control variable. This model compares the attitudes of people who went on the Birthright trip with people who didn’t, but we can probably assume that the attitudes of non-participants didn’t change much during the 6 months between surveys without a trip. Thus, the left and right columns of Table 4 and the left column of Table 5 are actually showing the interactions of Birthright participation with schooling, childhood affiliations, and so forth, rather than the unique contributions of each of these. If there is little or no change among people who didn’t participate in Birthright, then the results in these tables show that the impact of Birthright participation on attitudes towards intermarriage is heavily driven by Orthodox and Conservative participants, those with more years of schooling and camping, and those who had more Jewish high school friends. If my assumptions are correct, based on the left and right columns of Table 4, there’s no clear evidence of a Birthright effect on in-marriage attitudes for participants who were raised secular or just Jewish. The authors could easily test this by running analyses only on the Birthright participants and presenting mean differences across various populations.
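Again with hypothetical column names (childhood_affiliation is my own stand-in, not the study’s variable), the participants-only comparison would look something like this:

```python
import pandas as pd

df = pd.read_csv("taglit_2008_survey.csv")  # same hypothetical file as above

# Keep only Birthright participants, then compare the average pre-to-post
# change in the in-marriage item across childhood movement affiliations
participants = df[df["participant"]].copy()
participants["change"] = participants["post_importance"] - participants["pre_importance"]

print(participants.groupby("childhood_affiliation")["change"].agg(["mean", "count"]))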
The authors do try to deal with the effect of pre-trip attitudes by removing them from the middle model in Table 4. This removes the biggest cause of undesirable interactions. In these results, the impact of pre-trip life on in-marriage attitudes is much greater, and, strangely, the effect of Birthright drops by 30%. Still, the other factors interact. Most of that increase comes from participants with childhood Orthodox or Conservative affiliation, parental organizational ties, high school ritual practice, and high school Jewish friends. If there were minimal relationships among all these things, it wouldn’t affect the authors’ conclusions about which of them are more important, but we all know these things are strongly correlated. For example, if 80% of dayschoolers were also Orthodox and 15% of dayschoolers were Conservative, this analysis would be unable to distinguish day school participation from affiliation. This is called multicollinearity, and it isn’t an easy problem to address. One approach would be to throw many possible interactions between factors into the analysis. Even if it were possible to model everything, it would be much harder to combine those results into simple interpretations. Another approach would be to look at subsets of the population. For example, look at the effects of schooling choices vs Birthright participation for only those with a Reform affiliation in childhood. If the sample size isn’t big enough to do this, we just have to accept the limits of what we can conclude from these data.
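A sketch of the subset approach, using statsmodels’ ordinal regression and the same hypothetical column names as above (the predictor list is mine, not the authors’ full specification):

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("taglit_2008_survey.csv")  # hypothetical file name

# Fit the model only for respondents with a Reform childhood affiliation,
# so affiliation can no longer be confounded with schooling choices
reform = df[df["childhood_affiliation"] == "Reform"]

model = OrderedModel(
    reform["post_importance"].astype(int),            # ranked outcome, 0 to 3
    reform[["participant", "day_school_years",        # hypothetical predictors
            "camp_years", "supplemental_years"]].astype(float),
    distr="logit",
)
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```

If the Birthright coefficient holds up within homogeneous subgroups like this, that would be much more convincing than a single pooled model.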


This is something of an aside, but I thought it was interesting. The authors show that Taglit participants were not more likely to date Jews, but the participants were significantly less likely to be married before age 30 compared with nonparticipants (marriage rates equalized for those aged 30+). They follow with some speculation: “One possible explanation for this phenomenon is that Taglit participants are more likely to want to marry a Jewish person and consequently spend a longer time searching for a suitable partner.” The Impact of Taglit-Birthright Israel: 2010 Update, a related publication, contains the same speculation, but adds a footnote that “Another possible explanation for the disparate marriage rates among respondents under the age of 30 is that married individuals, or those who are engaged or about to become engaged, are more likely to remain nonparticipants, potentially due to less flexible schedules (and therefore declining a trip and/or not re-applying). This is supported by the finding that over three percent of nonparticipants were married at the time of application to Taglit, compared to less than one percent of participants.” When there are two possible explanations for a finding and they only want to mention one, it would be more appropriate to include the one supported by their data.