
Revisiting My Refutation of Wachtel's Theory of the Development of the Byzantine Text

Updated: Jan 22

Introduction

Until the printing press was invented in the Western world in the mid-15th century, all writings (including the New Testament books) could be reproduced only by copying by hand (a handwritten copy is known as a manuscript). Since no copyist (“scribe”) is perfect, differences at points in the text (“variants”) are found among the extant manuscripts [1]. The attempt to recover the original reading at each point where there are variants is what is known as “textual criticism.”


For a very long time, the mainstream approach to textual criticism was based on three pillars and a concomitant: (1) the idea that manuscripts can and should be categorized as members of various “text-types”; (2) a set of canons (rules) published by the German Rationalist J.J. Griesbach in 1796; and (3) the claims of B.F. Westcott and F.J.A. Hort, primarily that the vast majority of manuscripts, which contain what has been dubbed the “Byzantine text-type” and show relatively strong agreement on the text, are in fact secondary [2] and should be discounted in favour of a handful of manuscripts containing the so-called “Alexandrian text-type,” which disagree on many variants not only with the Byzantine text but also with each other. This last claim necessitated an explanation of how the manuscripts of a supposedly secondary text-type came to dominate, and so concomitant with it was the belief that the Byzantine text-type was a recension, created by deliberate editing in the 4th century AD and usually attributed to one Lucian of Antioch.


I argued in my Master’s thesis in 1996 that every one of these pillars was wrong. Dr. Kurt Aland pointed out as far back as 1965 that the concept of text-types had been shown to be unsustainable. Furthermore, all of the studies done on scribal habits in the last century and more have shown that Griesbach’s canons were completely wrong. I showed (as others had before me) that the Westcott-Hort theory was based on false pretences and was disproven by the actual evidence [3].


More recently, leading textual critics have abandoned both the concept of text-types and the idea of a “Lucianic recension” [4] as the origin of the Byzantine text (an idea that never should have been accepted, as there was no genuine evidence for it). However, they continue to cling doggedly to Griesbach’s canons and the claims of Westcott-Hort, though they may disavow those names [5], and that creates a quandary for them, viz. how to account for the rise and dominance of the Byzantine text [6].


Enter Klaus Wachtel and his model of “a long process of development and standardization” [7] as the origin of the Byzantine text. Although this sounds rather like the “Lucianic recension” in stretched-out slo-mo, it seems to have become the darling of at least some of today’s top textual critics. Dr. Peter Gurry, Assistant Professor of New Testament at Phoenix Seminary, averred that,

The most serious work on the Byzantine text’s development has been done by Klaus Wachtel, especially in his 1995 dissertation … I myself have found this view persuasive at least as far as the Catholic Letters are concerned … it is the most detailed and substantiated view of the Byzantine text’s origin on offer … Byzantine prioritists (of whatever stripe) need to address Wachtel’s arguments not Westcott and Hort’s.[8]

Gurry also tells us that Wachtel’s theory is now cited as “the most detailed and substantiated view of the Byzantine text’s origin on offer … in both the major introductions to the field (Metzger-Ehrman’s and Parker’s).” [9]


Did Wachtel’s theory solve the problem of the origin and development of the Byzantine text? We did a detailed assessment of his work, as presented in his 2009 paper, [10] and found it to be a signal failure. Not only did he fail to deliver on his promise to show that the Byzantine text was the result of “a long process of development and standardization,” but statistical analysis of his data showed the opposite: the Byzantine text displayed marked stability throughout the centuries. [11]


Now, the significance of this is profound. It is a fact that so-called Byzantine readings dominate throughout the manuscripts, over against their so-called Alexandrian and so-called Western rivals, generally by an order of magnitude. To continue to hold that these small-minority readings better represent the original than do the Byzantine readings, therefore, requires a viable explanation as to how the latter came to such a dominant position. The long-held artifice of a supposed Lucianic recension as the explanation has finally been abandoned, and the vacuum left by it now bids to be filled by the Wachtel theory, but that, as we have seen, is a failure. Is it not time to accept the fact that the dominance of the Byzantine readings is due to the fact that they trace back to the original autographs (i.e., the manuscripts penned or dictated by the original authors), and so would have been at the initial point of each chain of transmission therefrom?


Unless, of course, my refutation of the Wachtel theory is itself invalid. Some people think so. Let us hear them out. After that, I will have some questions of my own.


The Challenge

Our analysis of Wachtel’s data has been challenged by one Hefin Jones via a comment about my original article that he sent to Truth In My Days. In this comment, he helpfully clarified that,

Wachtel’s majority text is neither Hodges-Farstad nor Robinson-Pierpont but rather the actual calculated majority reading as determined from their collations at the test passages in Text und Textwort or the full collations of the pericopes in the parallel pericopes tool of the ECM / CBGM at INTF. On occasion they do use the Robinson-Pierpont text as a proxy but not in any of the work connected with your article or the Wachtel paper you’re critiquing.

He then made the following point:

[I]n your explanation of why only 147 manuscripts were used by you – you give your reasons for excluding P45, 01, 03, 019, 33 and 05. However Wachtel informs us in the description you also copied that there are 46 manuscripts in this data set that are non-Byzantine, the six you listed and 40 more. They would need to be excluded also. Also many though not all of the 75 manuscripts in the intermediate group of the data set are far from being genuinely Byzantine. Only the last group of 29 manuscripts with 95% agreements are strongly Byzantine throughout.

In this, Jones is mistaken. Wachtel does not, in fact, inform us that “there are 46 manuscripts in this data set that are non-Byzantine”; he tells us that,

[t]he selection includes all 46 manuscripts differing from the majority text at least in 15% of the text passages of two Synoptic Gospels. [12]

Mayhap Jones assumes that any manuscript with such a level of difference from the majority text is not Byzantine, but that is not the case. I was able to find information on the textual characteristics of most of the manuscripts I used from the group that shows an agreement of 85% or less; only one was Alexandrian, two were “mixed texts,” and one could not be categorized. The rest were all Byzantine, Byzantine/Caesarean, or Caesarean. And, given that the so-called “Caesarean text type” [13] was always hypothetical and unverified, so that even Bruce Metzger abandoned it, [14] and that it was always very close to the Byzantine text, these “Byzantine/Caesarean” and “Caesarean” manuscripts are, in fact, Byzantine manuscripts—genuinely Byzantine, contra Jones, even if not “strongly” Byzantine (i.e., those, according to Jones’ definition, that show at least a 95% agreement with the majority text). An analysis of the development of the Byzantine text would, after all, be stymied if we looked only at “strongly Byzantine” manuscripts.


Jones’ main challenge, though, is this:

I think there’s a mistake in your analysis because it is meaningless to attempt to plot “% agreement with MT by century” given Wachtel’s description of this sample of 154 manuscripts out of a population of somewhere around 2000 or so. The nature of the sample set (inclusive of ALL ipso-facto non-Byz manuscripts, inclusive of a disproportionately high number of intermediate manuscripts, excluding of hundreds of high-Byz manuscripts similar to Minuscule 18) precludes the possibility of meaningfully plotting those graphs. Wachtel is not trying to use his ranked list of 154 manuscripts in this way, so he can’t be accused of either hiding or massaging his data. He and his colleagues at the INTF / ECM project picked these 154 not to describe the composition of the whole manuscript tradition of the gospels but rather in order to account for the breadth of the population with a focus on outliers and intermediates. All this is quite explicit in the 2009 paper.

In a nutshell, Jones’ claim is that the way that Wachtel chose the manuscripts (29 with an agreement of greater than 95% with the majority text, 46 with an agreement of 85% or less, and 75 with an agreement between 85% and 95%) results in a data set that is not properly stratified and therefore cannot be used to do statistical analysis the way I have done.


Apparently, Jones has been doubling down on this claim; other readers have sent in snippets from discussions on Facebook, in which Jones charged, inter alia, [15]

  • Tors seems to want to treat Wachtel’s larger set of manuscripts as if it’s a randomised sample of manuscripts from the C5 to C16 when in fact it is nothing of the sort; it’s a deliberately selected set of manuscripts selected on the basis that they’re likely to be interesting to the ECM project.

  • The 150 or so manuscripts are a sample deliberately designed to capture the *breadth of variation* within the transmission history of the gospels, the sample is not designed to be a randomized *sample of the population* of manuscripts.

  • The problem with this approach is that the averages he generates accurately from the 150 don’t reflect the averages that he’d get from a genuine sample of the *population* of mss. Especially from the 10th century onward Byz type mss were systematically excluded from the 150 (18 and 35 excepted). So Tors forgets that he’s not dealing with a population sample and tries to use some data presented for another purpose to do something it cannot do.

The crux of Jones’ charge is clear: the manuscript data provided by Wachtel is not a “randomised sample” and therefore cannot be used to determine the diachronic (i.e., “through time,” in this case across the centuries) trend of the Byzantine text, as I have endeavoured to do. Is Jones correct about this? No.


Now, it is true that Wachtel’s data set is not properly stratified (“randomised”), as we would prefer it to be, and that it would have been good to have had more data to use. But that does not mean that meaningful conclusions cannot be derived from the non-randomised data we do have; it is simply a matter of determining the effect of the selection bias on the results.


Consider election polling, for example. Oversampling Democrats in the polls is selective bias, and the effect is obvious: support for the Democrats would appear to be higher than it actually is.

What about oversampling women voters? The effect of that might seem to be more difficult to determine, but it really is not. Consider the following:


Results of the Last Nine US Presidential Elections [16]

In every presidential election since 1996, a majority of women have preferred the Democratic candidate. Moreover, women and men have favored different candidates in presidential elections since 2000, with the exception of 2008 when men were almost equally divided in their preferences for Democrat Barack Obama and Republican John McCain. In 2016, a majority of women favored the Democratic candidate, Hillary Clinton, while a majority of men voted for the Republican victor, Donald Trump. [17]

It is quite obvious from these facts that oversampling women in a poll of political preference would skew the results in favour of the Democratic candidate, whereas oversampling men would skew the results in favour of the Republican candidate. So if we saw poll results in which either group was oversampled, we could look at the historical “gender gap” in voting to try to determine what the poll would have shown if the number of men and women sampled had properly represented the actual voting public.
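The adjustment described here can be sketched numerically. The following is a minimal illustration with invented poll numbers and population shares (not real polling data), showing how reweighting each group by its assumed true share of the electorate corrects the skew from oversampling:

```python
# Hypothetical illustration (invented numbers, not real polling data):
# correcting an oversampled poll by reweighting each group.

# Suppose a poll sampled 700 women and 300 men, but the electorate is ~52/48.
poll = {
    "women": {"n": 700, "dem_share": 0.57},
    "men":   {"n": 300, "dem_share": 0.45},
}

# Naive (biased) estimate: every respondent weighted equally.
total = sum(g["n"] for g in poll.values())
naive = sum(g["n"] * g["dem_share"] for g in poll.values()) / total

# Adjusted estimate: weight each group by its assumed true population share.
population_share = {"women": 0.52, "men": 0.48}
adjusted = sum(population_share[k] * poll[k]["dem_share"] for k in poll)

print(f"naive estimate:    {naive:.4f}")    # skewed toward the oversampled group
print(f"adjusted estimate: {adjusted:.4f}")  # closer to the true electorate
```

With these invented figures, the naive estimate (0.534) overstates the adjusted one (0.5124), in the direction the oversampling predicts.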


So even “non-randomised” data can yield valid results if we take into consideration the effects of the sample bias and account for it. We would not be able to get the same accuracy as with properly randomised data, of course, but it does have value.


Regarding Wachtel’s data, as Jones has pointed out, it is not randomised; Wachtel picked twenty-nine manuscripts with an agreement of greater than 95% with the majority text, forty-six with an agreement of 85% or less, and seventy-five with an agreement between 85% and 95%. How would this skew the results? Well, in another discussion on Facebook, Jones wrote the following:

Let’s consider the 14th and 15th centuries. Tors takes the 29 manuscripts in Wachtel’s set from those centuries and calculates that the average agreement with the MT is 92.5% and 90.1% in the 14th and 15th century respectively. However, Wachtel picked 18 to represent about 200 other manuscripts (the Kr group). All of these manuscripts have about 98+ % agreement with the MT. The vast majority of the other manuscripts from those centuries are also K manuscripts with 95+ % agreement to the MT. So the figure of an average agreement with the MT of 92.5 and 90.1% respectively that Tors derived from Wachtel’s/INTF’s set is way, way, way too low for those centuries. The population of 14th and 15th century manuscripts would in reality show upwards (probably well upwards) of 95%+ agreement with the MT. [18]

Now, Jones is correct that the agreement among the Byzantine manuscripts should be higher than 92.5% and 90.1%; oversampling manuscripts with an agreement of less than 95%, and particularly ones with an agreement of 85% or less, lowers the average percent agreement. That is, in fact, the obvious and very predictable outcome of Wachtel’s selection bias. Had the data been properly randomised, the agreement across the centuries would be higher than 92%. How much higher, we do not know. For our purposes, however, it does not matter; we are not interested in the actual level of agreement, but in the diachronic trend.
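The direction of this skew is simple weighted-average arithmetic. The sketch below uses Wachtel’s actual band counts (29 high, 75 middle, 46 low) but invented band means and invented population shares, purely to illustrate how oversampling the low-agreement bands pulls the average down:

```python
# The band counts (29/75/46) are Wachtel's; the band means and the
# population shares below are INVENTED for illustration only.
band_mean = {"high": 97.0, "middle": 91.0, "low": 80.0}  # assumed mean % agreement

# Average agreement under Wachtel's selection (150 of the 154 manuscripts):
selected = {"high": 29, "middle": 75, "low": 46}
sel_mean = sum(band_mean[b] * n for b, n in selected.items()) / sum(selected.values())

# Average under a population in which strongly Byzantine copies dominate
# (shares invented, but any plausible values give the same direction):
pop_share = {"high": 0.80, "middle": 0.15, "low": 0.05}
pop_mean = sum(band_mean[b] * w for b, w in pop_share.items())

print(f"mean under Wachtel's selection:  {sel_mean:.1f}%")  # pulled down by the oversampled low bands
print(f"mean under population weighting: {pop_mean:.1f}%")  # substantially higher
```

Whatever the true band means are, weighting the low bands as heavily as Wachtel’s selection does must depress the average; only the trend across centuries is untouched by this.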


The key question, then, is whether the selection bias affects the trend as well, and not only the percentage agreement. And since, unlike the % agreement, the centuries were not selectively chosen, Wachtel’s sample bias should not affect the trend calculation.


Of course, one may say that the centuries were selectively chosen indirectly, because they are linked to the % agreement, and here is where we drive our point home conclusively:


If Wachtel’s theory is correct that the Byzantine text underwent “a long process of development and standardization … [that] ended up in a largely uniform text characterized by readings attested by the majority of all Greek manuscripts from the 13th – 15th centuries” [19], reaching, as Jones posits, an agreement of 95%+ with the majority text, then increasing agreement with the majority text should be seen diachronically in any subset of manuscripts, randomised or not; the later the century, the higher the agreement. That is the inevitable result if Wachtel’s theory is correct; there is no way around this.
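This point can be illustrated with a simple simulation. Everything below is hypothetical: it generates a notional population of manuscripts whose agreement with the majority text really does climb century by century (as Wachtel’s theory requires), then draws a deliberately biased sample in the same three bands Wachtel used (46 low, 75 middle, 29 high). Even in that biased sample, the per-century averages still trend upward:

```python
# Hypothetical simulation (all numbers invented): IF agreement with the
# majority text rose century by century, even a sample selected purely
# by agreement band would still show a rising diachronic trend.
import random

random.seed(42)

def simulate_ms(century):
    # model the claimed development: mean agreement climbs ~1.5 points/century
    return min(99.0, 80.0 + 1.5 * (century - 5) + random.gauss(0, 3))

# a notional population of 200 manuscripts per century, 5th-15th centuries
population = [(c, simulate_ms(c)) for c in range(5, 16) for _ in range(200)]

# biased selection mirroring the three bands: 46 low, 75 middle, 29 high
low = [m for m in population if m[1] <= 85.0]
mid = [m for m in population if 85.0 < m[1] <= 95.0]
high = [m for m in population if m[1] > 95.0]
sample = random.sample(low, 46) + random.sample(mid, 75) + random.sample(high, 29)

# per-century mean agreement within the biased sample
by_century = {}
for c, a in sample:
    by_century.setdefault(c, []).append(a)
means = {c: sum(v) / len(v) for c, v in sorted(by_century.items())}

# least-squares slope of mean agreement against century
xs, ys = list(means), list(means.values())
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(f"slope: {slope:+.2f} points per century")  # positive: the rise survives the bias
```

A development process of the kind Wachtel posits leaves its upward slope visible even through this band-based selection; conversely, a flat trend in his actual data cannot be blamed on the selection.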


Now, let us look at Wachtel’s manuscript set again, considering the distribution across the centuries for each of his three groups. If his theory is correct, the manuscripts with agreement higher than 95% should cluster towards the later centuries, the manuscripts with agreement of 85% or less should cluster towards the earlier centuries, and the manuscripts with agreement between those ranges should cluster towards the middle. Is that what we see?

The manuscripts in the highest agreement group (95%+) are not clustered in the later centuries; on the contrary, more than half (58.6%) are from the 11th century or before.


The manuscripts in the lowest agreement group (85%-) are not clustered in the earliest centuries; on the contrary, only about one quarter (25.7%) are from the 11th century or before.


Of the fifty-three manuscripts dated between the 13th and 15th centuries, inclusive, the period of which Wachtel spoke (“a largely uniform text characterized by readings attested by the majority of all Greek manuscripts from the 13th – 15th centuries”), [20] only eight (15.1%) show the highest level of agreement (95%+). Twice as many (30.2%) show the lowest level of agreement (85% or less). And none of this depends on having a randomised sample, because the placement of a manuscript on the spectrum depends on only one thing—its percent agreement with the majority text. Jones’ objection is powerless here.


The upshot of all this is that Wachtel’s data can legitimately be used to trace the diachronic trend in the Byzantine manuscripts, and doing so shows that Wachtel’s theory is wrong; the trend is stability through the centuries, and not a slow development towards the majority text. Q.E.D. Wachtel’s theory is dead in the water.


My Questions

In his paper, Klaus Wachtel wrote,

Standardization means editorial activity, and in fact, a text form so similar to the late majority text as represented by Codex Alexandrinus cannot have emerged from a linear copying process without conscious editing. It is indeed likely that the text in Codex Alexandrinus is the result of editorial activity which may have been carried out in one or, more likely, several steps. [21]

Did no textual critic ask Wachtel what he was talking about here? Did no textual critic think to ask Wachtel to explain exactly why he says that the similarity between the text of Codex Alexandrinus and the late majority text “cannot have emerged from a linear copying process without conscious editing”? Such similarity is, in fact, exactly what would happen among any lines of transmission that were copied carefully.


How does the similarity between these two texts show the need for conscious editing? Did the copyist of Codex Alexandrinus somehow look ahead a thousand years to “the late majority text” and alter it to fit what would come later? If not, how does “the late majority text” prove or necessitate editing in Codex Alexandrinus? Why does no textual critic ask this?


One would hope that at least one textual critic would ask these questions, in view of the fact that what Wachtel asserts here is manifest nonsense. He makes no attempt to prove his claim, but simply asserts it. The thinking person would immediately ask, “How do you know? What is your evidence?” One would hope the interlocutor would then notice that Wachtel has offered no evidence at all for his theory. Instead, we get panegyrics such as,

The most serious work on the Byzantine text’s development has been done by Klaus Wachtel, especially in his 1995 dissertation … I myself have found this view persuasive at least as far as the Catholic Letters are concerned … it is the most detailed and substantiated view of the Byzantine text’s origin on offer … Byzantine prioritists (of whatever stripe) need to address Wachtel’s arguments not Westcott and Hort’s. [22]

Furthermore, despite some attempted denials, it is clear that in this paper Wachtel is trying to offer evidence to support his theory of “a long process of development and standardization” as the origin of the Byzantine text. After outlining his theory, he asserts,

I am going to focus on the differences between five manuscript texts to show that despite intense editorial activity the Byzantine majority text is the result of a process of reconciliation between different strands of tradition. [23]

Then he tells us he has fresh evidence for his theory, saying,

The fresh evidence I am referring to now comes from a research project designed to complement our test passage collations of the synoptic Gospels and to study the influence of textual parallels on the formation of variants. The working title of the project is “Parallel Pericopes.” 38 synoptic pericopes in 154 manuscripts were collated in full. [24]

So Wachtel is using the same data set I did, and for the same purpose. However, he cherry-picks only five of the 154 manuscripts for his analysis. The obvious question now is why my work is challenged because I used almost all of the manuscripts in this non-randomised data set, while Wachtel is given a free pass for using only five manuscripts from the same set. How do we explain this double standard?


Conclusion

The idea of the Lucianic recension to explain the origin and dominance of the Byzantine text has died a long-overdue death. If one wishes to maintain the mainstream approach to textual criticism, a replacement is necessary, and Wachtel’s theory has bid fair to be that replacement. But, as we have seen, his theory is a signal failure. It was not derived from actual evidence. It was asserted that the Byzantine text came about through “a long process of development and standardization,” and thereafter it was simply assumed that that was true. Wachtel analyzed his five manuscripts in light of his theory; the manuscripts, in fact, in no way showed that the theory was correct.


It is high time, then, to stop looking for a gambit to explain the Byzantine text as secondary. The questions I asked are important. If answered fairly, perhaps textual criticism can finally move forward by applying scientific data handling methods and going where the evidence leads, instead of remaining in thrall to the ghosts of Griesbach, Westcott, and Hort.

 

Endnotes

[1] Some manuscripts were copied very carefully and show few such variants, and others, such as the papyrus manuscript dubbed P66, were copied very carelessly and have a large number of variants.


[2] To say they are secondary is to say that they are not direct descendants of the original autographs but of an edited (and thereby altered) text created at some later point in time.


[3] See my Master’s thesis, “The Westcott-Hort Theory Re-examined” (Ontario Theological Seminary, 1996) for details. A summary of the key points can be found in Tors, John. “A Primer on New Testament Textual Criticism (in Manageable, Bite-sized Chunks)”, and my supplementary articles.


[4] Gurry, Peter. “Where did the Byzantine text come from?” Posted on May 11, 2018, at Evangelical Textual Criticism. Available at http://evangelicaltextualcriticism.blogspot.com/2018/05/where-did-byzantine-text-come-from.html.


[5] James Snapp, Jr., points out, “Wallace attempts to separate his ‘reasoned eclectic’ position from the pro-Alexandrian view of Westcott and Hort. He stated that the Westcott-Hort theory has ‘many flaws.’ However, as Eldon Epp has observed, the Nestle-Aland text ‘almost always departs from the B-text only when an ℵ versus B attestation is in question.’” and then asks, reasonably, “If the methodologies of Nestle-Aland and Westcott-Hort are so different, why are their results so similar?” (Snapp, James, Jr. “The Text of Reasoned Eclecticism: Is It Reasonable and Eclectic? Part Four of a Four-Part Response to Dan Wallace: Testing Reasoned Eclecticism.” Posted at https://www.thetextofthegospels.com/2015/01/the-text-of-reasoned-eclecticism-is-it_95.html.)


[6] This is what used to be called the Byzantine text-type. I use it as a convenient shorthand, not because I think it is the best name for it.


[7] It should be noted that those who assert that the Byzantine text is secondary need to explain two things: (1) the origin of the Byzantine text, and (2) why the overwhelming majority of the extant manuscripts have the Byzantine text; how did a secondary text supplant the original among the manuscripts? Wachtel’s theory attempts to explain the first, but, as far as I can see, he makes no attempt to offer an explanation for the second.


[8] Gurry, Peter. “Where did the Byzantine text come from?” Posted on May 11, 2018, at Evangelical Textual Criticism. Available at http://evangelicaltextualcriticism.blogspot.com/2018/05/where-did-byzantine-text-come-from.html.


[9] ibid.


[10] Wachtel, Klaus. “The Byzantine Text of the Gospels: Recension or Process?” Paper prepared for the NTTC session 23–327 at SBL 2009.


[11] Tors, John. “Is the Byzantine Text the Result of ‘a Long process of Development and Standardization’? An Examination of Klaus Wachtel’s Text Critical Model.”


[12] Wachtel, Klaus. “The Byzantine Text of the Gospels: Recension or Process?” Paper prepared for the NTTC session 23–327 at SBL 2009, p. 4


[13] Aland, Kurt and Barbara Aland. The Text of the New Testament. Revised and enlarged. Trans. Erroll F. Rhodes. Grand Rapids: William B. Eerdmans and Leiden: E.J. Brill, 1989, p. 66


[14] “Compar[e] Bruce Metzger’s two editions of his Textual Commentary of the Greek New Testament. The 1971 edition dutifully lists the four standard ‘types of text’: Alexandrian, Western, Caesarean, Byzantine, but – behold! – in the 1994 edition there are only three: Alexandrian, Western, Byzantine – the Caesarean has disappeared! … Metzger discusses – in a mere nine lines – the ‘formerly called’ Caesarean text.” “Larry W. Hurtado (29 Dec. 1943–25 Nov. 2019): A Guest Post by Eldon Jay Epp.” Posted on December 3, 2019, by Elijah Hixson at https://evangelicaltextualcriticism.blogspot.com/2019/12/larry-w-hurtado-29-december-1943-25.html.


[15] Typos corrected.


[16] Data compiled from “The Gender Gap: Voting Choices in Presidential Elections”, posted at https://cawp.rutgers.edu/sites/default/files/resources/ggpresvote.pdf and “Behind Biden’s 2020 Victory: An examination of the 2020 electorate, based on validated voters”, posted at https://www.pewresearch.org/politics/2021/06/30/behind-bidens-2020-victory/.


[17] “Behind Biden’s 2020 Victory,” ibid.


[18] Typos corrected. Jones does not tell us whence he has obtained the data on the percentage agreements.


[19] Wachtel, op.cit., p. 1


[20] ibid.


[21] ibid. p. 2


[22] Gurry, op.cit.


[23] ibid.


[24] ibid. p. 4




