
Does Differential Reproduction of Manuscripts Undermine The Majority Text Theory?


Introduction

Until the printing press was invented in the Western world in the mid-15th century, all writings (including the New Testament books) could be reproduced only by copying them by hand (a handwritten copy is known as a manuscript). Since no copyist ("scribe") is perfect, differences at points in the text ("variants") are found among the extant manuscripts. Some manuscripts were copied very carefully and show few such variants, while others, such as the papyrus manuscript dubbed P66, were copied very carelessly and contain a large number of them.


The attempt to recover the original reading at each point where there are variants is what is known as "textual criticism." For more than two centuries, mainstream textual critics have attempted to do so by uncritically following a set of rules published by the German Rationalist J.J. Griesbach in 1796 and by assuming, slavishly following the claims of B.F. Westcott and F.J.A. Hort, that the vast majority of manuscripts, which contain what has been dubbed the "Byzantine text-type" and show relatively strong agreement on the text, are in fact secondary and should be discounted in favour of a handful of manuscripts containing the so-called "Alexandrian text-type," which disagree on many variants not only with the Byzantine text but also among themselves.


We have pointed out in detail elsewhere that the actual studies done since Westcott and Hort have shown that the assumptions on which modern mainstream textual criticism is built are wrong (and usually not just wrong but backwards), and so its conclusions are wrong as well. I argued in my Master’s thesis in 1996 that the entire concept of text-types is bogus, as Dr. Kurt Aland had pointed out at least as far back as 1965; that the focus should be on the individual variants, not on the manuscripts or the supposed "text-types"; and that the original text can be recovered by the use of scientific data handling (statistical analysis), thus vindicating the variant-based "Majority Text theory."[1]


Naturally, challenges were levelled against my view, and they were refuted [2]. Two more challenges have since cropped up; in this article, we consider one of them.


The Challenge

James Snapp, Jr., a minister at Curtisville Christian Church in Elwood, Indiana, blogs about NT textual criticism. Snapp is a rarity in this field in that he is not wedded to the mainstream approach that currently dominates textual criticism (though he is certainly not an advocate of the Majority Text theory), and there is much of value on his blog, particularly his meticulous examination of individual manuscripts and patristic testimony.


Snapp posted on his blog a four-part analysis of my debate with Tony Costa about the best New Testament text, which took place on April 22, 2017. In Part 1, he raises a challenge against my Majority Text theory, which we will examine here. According to Snapp,


Tors’ final argument for the Majority Reading Approach in his opening statement consisted of an appeal to statistical analysis. Using a mathematical model, Tors showed that if a manuscript were copied five times, then, all things equal, any error would have to be copied three times to be the majority reading. And the number of times an error would have to be reproduced in each subsequent copying-generation would necessarily increase. Thus the probability of any error being the majority reading is staggeringly low. Tors said, "This is based on real-life numbers."
(However, what are the numbers based on? They are a hypothetical mathematical construct, not a reflection of historically verified circumstances – a grid, not a map. Of course one can imagine a tree that grows 10 branches, with 10 twigs on each branch, and 10 fruits on each twig, but one can also walk outside and observe trees with branches hacked away, twigs broken off, widely different numbers of twigs on different branches, fruit plucked by birds and squirrels, and so forth. If a reading’s status as part of a majority infallibly implied what Tors says it implies, the Eusebian Sections and chapter-divisions would also be part of the original text. Still, aside from this, Tors presented some sound reasons why the Majority Reading Approach is more trustworthy than the pro-Alexandrian approach.)

What Snapp is pointing to here is the issue of differential reproduction of manuscripts. Before we consider that, let us clarify a few matters. First, Snapp says,


Tors showed that if a manuscript were copied five times, then, all things equal, any error would have to be copied three times to be the majority reading.

No; the manuscript being copied five times here is the exemplar, and what I was saying is that for an error to be introduced into this next generation of five manuscripts, it would have to be made independently at the same place in at least three of the manuscripts to become the majority reading (something that, as I showed, is highly unlikely). We are not talking about copying already existing errors.
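One way to put numbers on this point is a minimal sketch of my own (the binomial framing is my assumption; the 1-in-10 error rate is the one discussed below): the chance that at least three of five independent copyists blunder at the very same word.

```python
from math import comb

p = 1 / 10  # the deliberately generous per-word error rate discussed below

# Probability that, among five independent copies of one exemplar, at
# least three scribes blunder at the very same word -- a necessary
# condition for a brand-new error to be the majority in that generation.
p_three_or_more = sum(comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(3, 6))

print(f"{p_three_or_more:.5f}")  # ~0.00856, i.e. about 1 chance in 117
# Note this does not even require the three errors to be identical, so
# the true probability of a new majority error is smaller still.
```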


Snapp also says,


Second, Tors said, 'This is based on real-life numbers.' (However, what are the numbers based on? They are a hypothetical mathematical construct, not a reflection of historically verified circumstances.

The real-life number was the chance of a copyist making an error at any given point in the text; I derived it from the number of errors the scribe made in the error-laden P66, and it came to about 1 in 16. (I then used 1 in 10, to give every possible advantage against my theory.) Regarding my model's assumption of five copies of each manuscript: of course we cannot know how many copies were actually made, which is why this is a model. However, five seems to be a very low estimate, again giving the advantage against my theory, so it seems reasonable.
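To illustrate how such a rate is derived, here is a sketch; the counts below are placeholders of my own, not the actual P66 tallies, and only the resulting ratio (about 1 in 16) comes from the discussion above.

```python
# Illustration of how a per-word error rate is estimated from a
# manuscript: total scribal errors divided by total words. These counts
# are hypothetical placeholders, NOT the actual P66 tallies; only the
# resulting ratio (~1 in 16, rounded to 1 in 10) is taken from the text.
errors_counted = 450      # hypothetical number of scribal errors
words_surviving = 7_200   # hypothetical word count of the extant text

rate = errors_counted / words_surviving
print(f"about 1 error in {1 / rate:.0f} words")  # about 1 in 16
```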


Third, Snapp says,


If a reading’s status as part of a majority infallibly implied what Tors says it implies, the Eusebian Sections and chapter-divisions would also be part of the original text.

Again, no. The goal of New Testament textual criticism is to recover the original text of the New Testament, and the Eusebian Sections and chapter-divisions date to no earlier than the third century AD; they are not part of the original text of the New Testament.


With those clarifications out of the way, let us move on to Snapp’s challenge: does differential reproduction of manuscripts invalidate the Majority Text theory?


Examining the Challenge

Recall that Snapp asserted that my analysis is "a hypothetical mathematical construct, not a reflection of historically verified circumstances – a grid, not a map. Of course one can imagine a tree that grows 10 branches, with 10 twigs on each branch, and 10 fruits on each twig, but one can also walk outside and observe trees with branches hacked away, twigs broken off, widely different numbers of twigs on different branches, fruit plucked by birds and squirrels, and so forth." That is a colourful way of illustrating the concept of differential reproduction of manuscripts, viz. that they do not all get copied the same number of times, as they do in my model; some are copied many times, some fewer, and some not at all, so that that particular line of transmission dies out.


Snapp is correct about this; differential reproduction of manuscripts is what happened. But that does not make it a "get out of jail free" card against the Majority Text theory. Simply pointing out differential reproduction isn’t enough; Snapp would have to show that it vitiates the Majority Text theory, and he makes no attempt to do so. So, let us see whether it does or not.


First, let us consider the possibility that a manuscript may be copied only once or not at all. If those were the only options, the chain of transmission would have ceased long ago: as long as each manuscript was copied once, the chain would continue, but as soon as the latest manuscript in the chain was not copied, transmission would cease. All we would have would be the copies made up to that point, and when they decayed away, as papyrus is wont to do, we would be left with no manuscripts at all. The reason we do have manuscripts is that manuscripts can be copied more than once.
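A quick illustration of why "at most one copy" guarantees eventual extinction (the 90% copying probability here is my own hypothetical figure): an unbranching chain survives n copying-generations only with probability p^n, which collapses toward zero.

```python
# Hypothetical: each manuscript in a single, unbranching chain has a
# 90% chance of being copied before it decays. The line's survival over
# n generations is then 0.9**n -- extinction is only a matter of time.
p_copied = 0.9
for n in (10, 50, 100):
    print(f"P(chain survives {n:3d} generations) = {p_copied**n:.6f}")
# 10 -> 0.348678, 50 -> 0.005154, 100 -> 0.000027
```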


So, let us make another hypothetical model, one which takes this differential reproduction into account. We start with the autographs; what is a reasonable number of copies made of them? Let us assume twenty; considering that the autographs lasted a century or more [3], one copy every five years would seem to be a very low estimate. (Again, we underestimate in order to give every advantage against our theory.)


Using our very generous estimate of one mistake every ten words, the chance of an error occurring at the same spot in two given manuscripts is one in one hundred (1/10 x 1/10), so among twenty manuscripts we should expect this to happen 20 x 1/100 = 0.2 times, which rounds to zero; there would have to be one hundred manuscripts before we should expect it to happen even once. However, to continue bending over backwards to give every advantage against my theory, let us suppose that two of the twenty manuscripts do have the same error at the same point. And, as we have seen, the options cannot be limited to "copied once" or "not copied at all," or transmission would have died out; so we will add the next option, giving three possibilities: a manuscript can be not copied, copied once, or copied twice.
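For clarity, here is the arithmetic of that paragraph reproduced step by step (this simply restates the figures above, not an independent derivation):

```python
# Restating the back-of-envelope figures above (no new claims):
p_error = 1 / 10                   # generous per-word error rate
p_same_spot = p_error * p_error    # two given copies erring at the same word
expected_in_20 = 20 * p_same_spot  # scaled by the twenty copies

print(p_same_spot)     # 0.01  (one in one hundred)
print(expected_in_20)  # 0.2   -> rounds to zero; ~100 copies needed to expect 1
```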


Now, the only way for the error to become the majority reading is if it results in more copies than the manuscripts with the original reading produce. There are several ways for this to happen:


  • The two manuscripts with the error being copied twice each, and the other eighteen manuscripts being copied collectively three times or fewer.

  • The two manuscripts with the error being copied a total of three times (one copied once, the other twice), and the other eighteen manuscripts being copied collectively two times or fewer.

  • The two manuscripts with the error being copied a total of two times (either one copied twice and the other not at all, or each copied once), only one of the eighteen manuscripts with the original reading being copied, and that only once.

  • One of the manuscripts with the error being copied, and no other manuscript being copied at all.

There are 1,073 ways in total for the above possibilities to happen. However, there are 3^20 ways that the twenty manuscripts could be copied if the options are not copied, copied once, or copied twice, which is 3,486,784,401 ways. This means that the chance of copying the manuscripts in such a way that the erroneous reading becomes the majority reading is 1,073/3,486,784,401, which is about one in 3.2 million. So it will not happen [4]. The Majority Text theory, therefore, continues to stand tall.
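A quick check of these final figures (taking the count of 1,073 favourable configurations above as given):

```python
# Verifying the arithmetic from the figures above: 1,073 favourable
# configurations out of 3**20 total ways to copy twenty manuscripts
# zero, one, or two times each.
favourable = 1_073
total = 3 ** 20                         # 3,486,784,401
p = favourable / total

print(f"total configurations: {total:,}")
print(f"probability: {p:.9e}")          # ~3.077e-07
print(f"i.e. about 1 in {1 / p:,.0f}")  # ~1 in 3,249,566

# Endnote [4]: expected occurrences over the whole Greek NT
print(f"expected in NT: {140_005 * p:.3f}")  # ~0.043
```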


Conclusion

James Snapp inveighed against my Majority Text theory on the basis of the differential reproduction of manuscripts, viz. that manuscripts do not all get copied the same number of times; some are copied many times, some fewer, and some not at all, so that that particular line of transmission dies out. Snapp made no attempt, however, to show that differential reproduction of manuscripts invalidated my statistical analysis; no doubt he assumed in good faith that the fact itself did so.


In response, we did the statistical analysis factoring in differential reproduction, bending over backwards to make assumptions that would give every advantage against my theory. But even with all this, statistical analysis vindicated my Majority Text theory. And this was so even though I allowed for a maximum of only two copies per manuscript after the first copies from the autographs; in fact, three, four, ten, or even more copies could have been made of any manuscript, which would make the chance of an error ever becoming a majority reading even more minuscule than in our analysis.
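To see what relaxing the model looks like, here is a small Monte Carlo sketch of my own (not part of the analysis above; the uniform copying distribution, generation count, and trial count are all my assumptions). It starts from the twenty first-generation copies, two of which carry the error, and lets differential reproduction run for several generations. Even granting the error that foothold, the runs in which it ends up the majority are a small minority, and that share must still be multiplied by the tiny chance of the foothold arising in the first place.

```python
import random

# Illustrative Monte Carlo sketch: twenty first-generation copies, two
# carrying the same error. Each generation, every surviving manuscript
# is independently copied 0..max_copies times (uniformly), the old
# generation decays away, and we ask how often the erroneous reading
# ends up outnumbering the original among the surviving copies.
def error_majority_rate(max_copies=2, generations=5, trials=20_000, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        err, good = 2, 18
        for _ in range(generations):
            err = sum(rng.randint(0, max_copies) for _ in range(err))
            good = sum(rng.randint(0, max_copies) for _ in range(good))
            if err == 0:  # the error's line of transmission died out
                break
        if err > good:
            wins += 1
    return wins / trials

# The cap of two copies per manuscript used above, and a looser cap of three:
for cap in (2, 3):
    print(f"max {cap} copies/ms: error is majority in "
          f"{error_majority_rate(max_copies=cap):.2%} of runs")
```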


What we have shown, then, via scientific data handling, is that the original text of the New Testament is recovered by taking the majority reading at each point of variation. Textual critics may not like statistical analysis and may wish to ignore it, but that can no longer be done.


Endnotes

[1] See my Master’s thesis (Ontario Theological Seminary, 1996) for details. A summary of the key points can be found in Tors, John. "A Primer on New Testament Textual Criticism (in Manageable, Bite-sized Chunks)."


[2] See Tors, John. "Three Charges Against the Majority Text Theory Examined and Refuted."


[3] Evans, Craig. Jesus and the Manuscripts. Hendrickson Academic, 2020. Tertullian, writing around AD 180, said that the autographs were still available to be checked, and Peter, Bishop of Alexandria in the early 4th century AD, said that the autograph of the Gospel According to John was still extant. (See Wallace, Daniel. “Did the Original New Testament Manuscripts still exist in the Second Century?” Posted at https://bible.org/article/did-original-new-testament-manuscripts-still-exist-second-century.)


[4] Of course, one could point out that, although this is exceedingly unlikely, it is not impossible. Technically that is true, if we leave out the expected number of occurrences. With 140,005 words in the Greek New Testament, we should expect this to happen 140,005 x (3.077333946 x 10^-7) times, which comes to about 0.043 times in the entire NT. It is not going to happen.
