The Historical-Critical Exegetes: A Brief Summary of the Consensus in the 41st Century

Herman Von Voelkenhausen
Catholic University of Cologne
St. Benedict XVI Chair of Theology
April 12, 4019

+JMJ+

Before outlining the views of the majority of contemporary scholars on the historical-critical tradition of the 19th and 20th centuries, it is worth first mentioning the traditional view of that school, from which our own views have arisen and beyond which they have evolved.

Writers of the 22nd century onward who reflected deeply on the historical-critical phenomenon, scattered as such writers are, assume that the exegetical school was simply directly inspired by Spinozistic and post-Kantian ideals to re-envision the Scriptures in a radical way, breaking with the cumulative conclusions of the ages and the clear teaching of the Church. These exegetes supposedly became immensely popular, even holding chairs in the most distinguished theological faculties of Europe, where they would really speak and teach their students directly. Their lectures and writings were the real motion towards a culmination in the “Jesus Seminar,” the fullest expression of the movement, which was followed by a number of special disciples who gradually unpacked the wisdom contained therein in the midst of the larger academic community that turned to join the historical-critical movement in this final phase of critical scholarship.

We must now pause and admit that all of this seems rather childish to us, but to the majority of theologians and historians from the year 2100 until well into the 3800’s, this simplistic position was simply taken for granted. It is no wonder; due to the limited knowledge of the 19th and 20th centuries which was available to the early authors, we cannot expect very much accuracy on their part. The advent of the internet came only near the very end of the 20th century, and immediately afterward came that dark cloud of Fake News, which persisted well into the mid-22nd century. With such imprecise methods of research and communication, we should be inclined to go easy on those who first attempted to react to the historical-critical phenomenon. The tradition which took their analyses in good faith, it is true, has less excuse insofar as their means of investigation increased in quality, but those authors were hindered by the all-too-natural allure of continuity and the professional risk of speaking out too boldly.

The first point which nearly all authors now make is that of the difference between the “historical exegetes” and the “scholars of faith.”

The exegetes are the real human beings at the center of the scholarly movement traditionally placed in the 19th to 21st centuries (although it seems increasingly likely that these early dates are fallacious). Many of them, it is granted, really did exist as human beings. But it matters very little what these men really were in their historical lives – it mostly matters that they existed. For instance, whether or not some sayings of Rudolf Bultmann were actually spoken by him is largely irrelevant; what really matters is that a tradition developed which sees him in connection with such sayings.

The “scholars,” then, are the writers in the minds of those who received their teachings and modified them. We encounter the scholars in the writings which are associated with them by name.

Immediately the question is raised – how were these writings produced? “The books bear the names of the authors themselves,” it is objected. As foolish as it sounds to us, it was once unthinkingly presumed that, since an exegete’s name was attached to a text, he must have actually written that text himself. The prevailing theory today is that while some writers did indeed exert a kind of influence over the writings that bear their names, in almost every case we see a kind of pseudepigraphy.

A fundamental body of writing in the historical-critical tradition will serve as a fine framework for an introduction to the methods we are using today to analyze this period of theology. This collection of books was traditionally presumed to be the work of a single author, but the agreement now is that it is actually an amalgamation of several written traditions under the redaction and collection of later theologians. First, there is the Kuenen source, or K. Next, the Graf source, G. Third, the Hupfeld source, H. Finally, the Reuss source, R. Over the following decades, careful redaction on the part of later German exegetes pieced these writings together to form what the historical-critical tradition, and those who uncritically write of its history, have called the collected works of Julius Wellhausen.

Perhaps there really was a Julius Wellhausen, but the “historical exegete” is, in any case, less important than the significance of the “scholar” represented in the popular imagination of the academy of the 20th century. For the first disciples of the masters of the historical-critical tradition – Wellhausen being such a master to those who followed him – these figures really were seen as true scholars, important figures who had somehow advanced the theological milieu towards a new era.

It should be noted that the most recent quest for the historical Albert Schweitzer has come up largely empty. There is now, however, a broad consensus that he was not born in Alsace-Lorraine, but in Tübingen – to place his birth in a then-recently annexed part of France was a clever narrative device used to broaden the appeal of the historical-critical movement beyond Germany in the long term. That is to say, a kind of “academic annexing” was being imposed on the narrative of the Schweitzer character during the period of redaction of the earlier records of his life. It is well established that he did spend time in France, but to place his birth and childhood anywhere but Germany finds no support except in the primary texts themselves, which, as we have said, have changed the narrative to suit their own ideological agenda.

In the 19th century, the time for historical-critical exegesis was ripe, as there were expectations in the air for such a movement, after the Prussian myth of Schleiermacher had taken hold of the European imagination. (The Schleiermacher-myth was distinct from but related to the Prussian myths of Fichte and Kant, all of which were zealously absorbed and appropriated by the “Hegelian Community.”) Eventually, this all culminated in the well-known “Jesus Seminar” Event. While most scholars agree that there really was a Jesus Seminar, there is little consensus beyond three points: that the Jesus Seminar was formed around the year 1980, that it preached an apocalyptic doctrine about the coming end of traditional Biblical theology (with itself as a central catalyst), and that it ended in a shameful demise.

An example will serve us well to illustrate the attitude of current scholarship on post-Jesus Seminar thought. Virtually all historians of theology today recognize the minimal “historicity” of the writings of Bishop Spong – that is, of Spongian authorship. Instead, various radical publishing houses collected the reports of various moderate pieces of scholarship on the part of Bishop Spong, and they published books under his name. Why? Clearly, these publishing houses had their own theological agenda which they were willing to advance, even in the face of such enormous ridicule. Their reflection on the meaning of Spongian theology prompted them to make a courageous attempt at promoting work largely inspired by his own teachings but which was itself a radical development of them. This is a standard model for the era.

The writings of all the post-Jesus Seminar theologians are typically dated to the late 21st to early 22nd century. It was a common pious mentality of devotees of the historical-critical exegetes, and especially those following the Jesus Seminar, to view the writings traditionally attributed to figures such as Bishop Spong, Bart Ehrman, and Paul Bradshaw as being written much earlier than they really were. It has been firmly established, however, that Bradshaw did indeed write his work first, and Spong and Ehrman based their writings on his, and upon other accounts of the Jesus Seminar and the tradition it represents. Furthermore, these three works draw on a common source, “Q” (from the French, “Qu’est-ce que c’est?” – “What is it?”), which links them together. They are in an altogether different tradition, however, from the Reza Aslan tradition, which is decidedly more “spiritual” than historically minded in its presentation.

Of course, as is well-known, current academics consider many of these texts to have been compiled by the communities which gathered around these figures. The Spongian community, the Aslanian community, and so on. (Bradshaw, it is true, perhaps did actually write his own works – but it is altogether clear that he himself could not have come up with the idea that John was unaware of an Institution Narrative – this was a later redaction by the publishing house.) The growing majority also views most of the writings attributed to Bart Ehrman as complete forgeries – fully dishonest, albeit clever, pseudepigraphy. (Several editions and translations of his work have also left us wondering what the “true” or “original” texts were in the first place – the recent unearthing of hundreds of copies of the text “Jesus Interrupted” in what is thought to be a 25th century Siberian landfill may prove to be a crucial discovery to aid us in getting to the bottom of this vexing problem. My own forthcoming work “Misquoting Bart Ehrman” will investigate this data at length.)

The motivation for our project is simple: it is altogether unrealistic that such men would have really existed, taught, and written as they have traditionally been thought to have done. Their doctrines are too systematically bizarre and radically incoherent to have been the products of single authors; it is altogether unthinkable that, even given such bad scholarship, they somehow gained wide acclaim to the point of wielding true academic and intellectual authority. Therefore, what was at stake in the 19th and 20th centuries, and what was carried on by the disciples of historical-criticism in the centuries which followed, must be studied under a hermeneutic which takes the spirit of the tradition seriously while retaining the position that such fantastical theories themselves were not taken literally by those who first originated them. It was only later generations of devotees of historical-critical exegesis who, in their zeal, took these traditions to be literal works of Biblical scholarship.

Post by: Eamonn Clark (NB: Faith is a gift – let no man boast… Let us pray for souls who lack such a great grace to see and know the Living God!)

10 Extremely Practical Suggestions to Improve Priestly Formation

Eamonn Clark

Due to recent events, priestly formation is on the brain of many Western Catholics. Everyone knows we should improve education, ascesis, accountability, etc., etc., ad infinitum. How do we do it?

While I am certainly not an expert, I do have relatively broad experience with priestly formation from a variety of perspectives. Here are 10 extremely practical suggestions, which could be put in practice in seminaries across the Western world, probably with some success.

  1. Un-Judaize the structure of the weekend. For autonomous seminaries, there is simply no excuse to follow the secular – and Jewish – logic of the Saturday-Sunday weekend. What this structure currently means is that seminarians party on Friday afternoons and evenings, when penance ought to be done. Saturday becomes the main day of rest. Sunday is the day to catch up on homework and other obligations. Not good. By shifting the weekend to Sunday-Monday, not only is the penance-rest paradigm fixed, but those with parish assignments during the year (especially deacons) are more able to engage with them. The current model often means jetting off from seminary to the parish Saturday afternoon, waiting around until the Vigil, and then helping with Sunday morning Masses and maybe some special event that evening. With a Sunday-Monday weekend, the seminarian can show up for the Vigil, be around all of Sunday, then be around for most of Monday, a normal day for the parish, its office, and its school if it has one.
  2. Have college seminarians do manual labor in a parish for one summer. “My hands were made for chalices, not callouses,” goes the sarcastic saying. Many young men who have generously offered their younger years to a formation program need a good experience of “real work” – and there is plenty of it to be done in every parish. Cutting grass, waxing floors, scraping gum off of desks in classrooms… The entitlement which can come with being a seminarian, especially at a young age, will be kicked in the gut. It will also give the young man a sense for what “normal people” do, and it will bestow an appreciation of the dignity of the work of all of his future employees. On the side he can help with some ministry, but his daily work is following around the maintenance crew or something similar.
  3. Put each seminarian in the cathedral or the curia for one summer. Unfortunately, it is not unusual for a diocesan bishop – especially a metropolitan – to meet with each of his seminarians maybe only once a year for a real talk. If this change were implemented, that sad reality would be much less of an issue. No longer will the bishop have an excuse for not being familiar with any of his men – he will have directly overseen them for at least a few weeks. Furthermore, the seminarian gets a perspective on that crucial part of the diocese, a definite advantage.
  4. As a condition for ordination, demand that each man make an oath that he has read at least once all of Sacred Scripture and every infallible declaration of the Ecumenical Councils and popes. How humiliating it is for a priest to have to confess to a parishioner that in fact he has not read the whole Bible – and yet, how tragically common this reality is. The laity may be less demanding with regard to the latter condition, but this is for a want of understanding of the seriousness of the matter, not a righteous sense of mercy. It is the business of the priest to know the Faith – how can he even pretend to be a Master in Israel until he can say with confidence that he has at least passed his eyes over these basic writings at least one time?
  5. Find families to “adopt” each seminarian in the house. In most locations, it is not hard to find an adequate number of pious and stable Catholic families who would be interested in such a ministry. The idea is for a family to get to know a particular man (or perhaps a few), to pray for him, and to have him come for a visit once a month or so. This keeps the local community invested in the success of the seminary, provides a special set of eyes for the sake of formation, gets the man out of the house and into a “normal” environment, and also provides the spiritual benefit of prayer. A little involvement in the life of a good Catholic family can be a very healthy experience for a seminarian, to keep him realistic about family life, to keep him “hungry” for ministry, and to keep him sane.
  6. Avoid assigning ministries or jobs which force a seminarian to “pretend to be a priest.” The reality is that seminarians are not priests, they are “laymen with an asterisk,” as it were. (This strange role-playing dynamic can also be confusing to others about the role of the priest.) There is a reason that Trent did away with the apprenticeship model of formation. Good mentors were not the problem – bad mentors were the problem, and no doubt many bad mentors simply let their apprentices try to stand in their places, either due to laziness or due to some misguided thought about having their men “try out.” Even the Catholic Encyclopedia article on seminaries, written in 1912, foresees only minimal pastoral work on the part of the seminarian. At least until immediate preparation for diaconate, the seminarian should almost exclusively be watching and being watched during serious pastoral work. He usually possesses neither the education nor the security to perform the duties which are more appropriate for priests, and he never possesses the grace of ordination.
  7. Have an extraordinary formator. This sounds strange until put next to its counterpart, which already exists in every seminary, namely, the extraordinary confessor. This is not a priest who is really, really good at hearing confessions; the extraordinary confessor is a priest who visits the seminary about once a month to hear confessions – and pretty much nothing else. He provides a safe opportunity to confess sins about, for example, cheating on a test, lying to the rector, or making some other mistake which would be difficult to confess to a faculty member, and difficult for a faculty member to hear. “Father, I cheated on your sacramental theology test – I actually don’t even know how many sacraments there are.” “Well, that’s awful, but I can’t do anything about it. You are still getting 100%.” Not ideal. Thus, the extraordinary confessor. However, perhaps this isn’t enough. Perhaps there is space for an extraordinary “formator” as well, like an auditor, who shows up once a month… Someone to complain to about, well, anything that is not appropriate to complain about to a normal faculty member. He would be half-way in the external forum, half-way in the internal forum. The identity of the seminarian is safe – he can say what is really on his mind without any fear of being found out, or, if there is such a fear, he can note it and let the extraordinary formator deal with it prudently. Whatever the case, this individual will have the dirt on every single man in the house, seminarian or formator, and it is up to him to manage it by regular meetings with the normal faculty and staff: but without ever revealing the names of any vulnerable seedlings, at least until absolutely necessary… like in court.
  8. Remove WiFi and Ethernet from residential halls. There are a number of advantages to this. Among them are the encouragement to gather together to discuss classwork and assignments, the need to go to a place dedicated solely to academic work to get things done, and the extra help to avoid misusing access to the internet in various ways. Of course, some will abuse the ability to connect with their cell phones, but the men who want the system to work will make it work; the ones who don’t will find a way pretty much no matter what is done.
  9. Incentivize more serious study by attaching it to room choice. In almost every house, the choice of one’s room is a big deal – near the chapel, away from the loud central A/C unit outside, on the bottom/middle/top floor, the window with the best view, etc., etc. Many places use a system of age, years spent in the house, lottery, and other “unearned” things. While some of these could factor in, why not also use GPA, at least for the top scorers? Then good grades are helped along by a friendly competition which has meaningful results.
  10. Once a month, the rector and head spiritual director choose together a special ascetical practice for the whole house. The hot water is turned off for the day. Lunch one Friday is bread and water. One Saturday night is a mandatory 3-hour vigil. These common experiences are good for the life of the brethren… When you suffer together, you grow together, and this develops unity, even if it comes partially through complaining!

Well, that’s it. Surely there are plenty more, but those are mine for now. Do you have any practical suggestions? Keep in mind that adding “one more thing” is always a big deal – the current programs of formation are already packed to the brim with “stuff.” Here I tried mostly to avoid adding more obligations and duties and mainly tried to suggest changes to the character of pre-existing realities. If you have any thoughts, let me know in the comments – including if you disagree with any of my own proposals!

A final thought, somewhat related to formation, but a little outside… It could be worth investigating a split-model for diocesan vocation programs… Namely, a “vocation director” who gets men into the program, and then a “director of seminarians” who manages the men already in. A young guy deals with the rah-rah, come join us kind of stuff, and an older, more experienced, less vulnerable guy (even a “retired” priest) deals with the men already in. Some dioceses already do it, and basically every large religious order does something like this. Just a bonus thought.

Our Lady, Queen of the Clergy, pray for us!

The Other “Scandal” in the Vatican

Eamonn Clark

I was speaking with some confreres a few months ago right around the Youth Synod about a problem most have not realized exists. Do you remember the Youth Synod? How relevant has its work been to your life? Do you recall a single point from the final document? My guess is that you remember it happened, that it did almost nothing but cause concern, and it produced a rather milquetoast exhortation that was probably more or less written before the meeting happened anyway. Okay, fine. That in itself is problematic, but that’s not my point here.

Years ago, we had the Extraordinary Synod on the Family, followed by the Ordinary Synod on the Family. I won’t rehearse the issues there, but it is indisputable that we did indeed have these meetings here in Rome. Okay, fine.

We have another big meeting coming up in two days. (By the way, lower your expectations for that…) There are supposed to be presidents of Bishops’ Conferences from all over the world, plus some other folks from various locales, about 190 people in total officially attending. Okay. Fine.

Here is the big question. How much do these meetings cost?

For this meeting, if we more than generously assume that only 100 people are coming in from outside of Italy for 4 nights, where does that leave us with expenses for travel, room, and board?

Let’s just do the math for travel, and only for those officially attending (not counting any assistants inevitably brought along). A conservative estimate of the average round-trip ticket to Rome for the people showing up would be something like $1,000 in economy class, non-direct. My suspicion is that most bishops want to fly direct if possible, and in business class (arguably justifiable for many older guys, or for the ease of getting work done on the plane). But even giving the BOTD here, we have already spent $100,000. Just to show up and get home. Then 4 days of room and board and who knows what else (rental cars, extra nights, more travel, whatever). Then there’s all the work to prepare the meeting – the planning of the agenda, writing the press releases, getting the venue set up, communicating with attendees beforehand, etc. Let’s be extremely generous and say that the entire thing costs $500,000. (Which I think is a comical estimation – it’s probably deep into the millions.) What will primarily be happening at this abuse summit is listening to a few talks, some group conversations, and then a penitential liturgy with the Holy Father at the end.
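For what it’s worth, here is that back-of-envelope arithmetic spelled out in a minimal sketch (Python). The per-night room-and-board rate is a purely hypothetical placeholder for illustration – not a figure from any published budget.

```python
# Back-of-envelope estimate using the figures in the paragraph above.
attendees_from_abroad = 100   # the "more than generous" low count
avg_round_trip_fare = 1_000   # USD, economy class, non-direct (conservative figure)
nights = 4

travel_total = attendees_from_abroad * avg_round_trip_fare
print(f"Travel alone: ${travel_total:,}")  # $100,000 -- just to show up and get home

# Hypothetical add-on: assume room and board runs $200 per person per night.
room_and_board_total = attendees_from_abroad * nights * 200
print(f"Travel plus room and board: ${travel_total + room_and_board_total:,}")  # $180,000

# Before counting planning, staff, venue, and press work, the "extremely generous"
# $500,000 total already looks more charitable than alarmist.
```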

The talks may be worth listening to. The group conversations may be worth having (although breaking them into “language groups” seems to encourage ideological incest, but, unfortunately, Latin has been lost, so we are pretty much stuck with this model). The penitential liturgy will surely be poignant.

But is it worth $500,000+ to have everyone there in person? Is it worth leaving the diocese for almost a week at minimum? Is it really worth the time, the money, the effort?

It might have been worth it a few decades ago. Today, there is not really an excuse. There is this new thing, called the internet, which can be used to communicate with many people very cheaply and quickly.

For those of you who don’t know, it’s a series of tubes.

Now, I live in Rome, and I know how slowly things move. I have no delusions that this model is going to change any time soon. But it could and should change eventually, and change starts by pointing out the problem and a possible way forward. It is just ridiculous to be spending hundreds of thousands or even millions of dollars on these meetings when they could be done almost for free, and much more quickly at that, with a bit of tech-savvy engineering.

Of course, there are elements to a boots-on-the-ground meeting which are desirable. I’m not suggesting that it is never appropriate to come over in person, or that it isn’t important to be celebrating a liturgy in person with the Holy Father, or what have you. I am suggesting that we are seeing in the Holy See a decadent model of communication occasioned by an adaptation to the availability of commercial travel without tempering it by an adaptation to the availability of digital communications. We are not in 1875 anymore, it is true… We can fly to Rome and back without much trouble. But we are not in 1975 anymore either – we can have a lot of meetings online without much trouble.

Is there nothing better to do with that money, time, and energy?

St. Isidore, patron saint of the internet, pray for us.

Fake News, Real Vices: A Quick Take on CovCath

Eamonn Clark

On October 18th, 1925, Greece invaded Bulgaria. This event led to the death of nearly 200 people, including many civilians… But that’s not the whole story.

This November, the 100th anniversary will come of a treaty signed in my old neighborhood of Neuilly-sur-Seine, which attempted to resolve some geographical disputes in the Balkan region after World War I. Suffice to say that it remained a point of contention, and a dispute between Greece and Bulgaria over the control of Macedonia and Thrace carried on. About six years later, a young Greek soldier stationed near the edge of Bulgarian territory ran into a clearing in a little mountain pass, perhaps totally unaware that he had even crossed the border. He had no intention of attacking anyone or taking any land – he was chasing his dog, which had run away from him. Bulgarian sentinels quickly determined it was a Greek invasion and shot him dead. The aftermath was several days of open violent conflict around the border. Thus is the event called the “War of the Stray Dog.”

While this narrative is somewhat disputed, whatever the case, after the League of Nations intervened it was admitted by Bulgaria that the whole conflict had been caused by a misunderstanding.

We seem to have just finished our own version of the War of the Stray Dog today. There was political tension (Left vs. Right), a border crossed (perceived mistreatment of a member of an historically oppressed group), a uniform (MAGA hat), an innocent misunderstanding (trying not to be provoked), and a catastrophic aftermath (nation-wide condemnation, death threats, etc.).

Calling out moral failures in this hurricane of off-the-rails virtue-signaling is like shooting fish in a barrel. So I won’t bother – you’ve no doubt read the headlines about Lefty journalists and celebrities calling for violence against these kids, and about the bishops and dioceses who trusted the mainstream media’s narrative and piled on. I just want to point out a few things.

  1. It might not have been better if the kid had walked away. The optics could have even been worse – it might look even more racist to turn your back on a Native American, right? So there was no winning.
  2. High-school kids are not typically models of serenity and prudence. Period. Ask anyone who works in secondary education or has teenage kids. So even if there were excesses or missteps, it seems beyond unfair to hold 16-year-old kids to a standard of foresight and self-control more proper to a 4-star general.
  3. If it can happen to them, it can happen to you and yours. So look out.
  4. “Officially” condemning people is unwise unless it’s your job to do so. I am thinking especially of several ecclesiastical persons/institutions who had no direct business with either the kids or the March for Life. Why is it necessary to comment at all? Are there not problems in your own house to attend to without jumping on the virtue-signal bandwagon?
  5. Every year now, for some time, when the secular media begrudgingly mentions the March for Life in passing, they will not mention the staggering numbers (500k+), the positive atmosphere, or the salient points of main speakers… They will dig up old footage of a high school kid in a MAGA hat and a Native American with a drum and talk about “angry conservatives” and “Trumpian politics” and “counter protesters.” Thankfully, that’s a sign of desperation which I think most reasonable people on the fence will see through.

I think this incident may have popped the media balloon. Time will tell.

St. Francis de Sales, patron saint of journalists, pray for us.

A Radical Suggestion for the Roman Curia

Eamonn Clark

If you didn’t know, there is an ongoing breakdown in American comedy. It is increasingly censorious, politically biased, and generally unfunny. The most recent high-profile example is the as-yet-unresolved Oscars hosting debacle… A very long list could be made of such things in the past few years, but the current content of late-night shows speaks for itself. Here’s a great interview on the subject (mild language warning):

Also, if you didn’t know, the papal court used to have a full-time comedian, or jester (a bit more than just a joke-teller), just like many other royal courts. Shortly after his election, Pope St. Pius V, of happy memory, suppressed the office of the papal court jester. Note that he did not just go find a less outlandish, less challenging, and less funny jester, but he removed the office. He had his reasons, and knowing Pius V, they were good reasons… The court has serious business to attend to, and also, having a jester makes the court look very much like a secular king’s court, which could be scandalous.

As everyone knows, jesters are to make people laugh (among other things). In doing so, they provide a little levity amidst the tension – no doubt needed these days in the Roman curia. But humor-based laughter is an overflow of the rational faculties into the senses based on some kind of dissonance being pointed to… In other words, the most important function of the jester (or comedian) is to say what everyone is thinking but nobody else will say because they are afraid to – or are perhaps unaware of the absurdity of some set of contradictory realities. He is supposed to cut right to the heart of the issue, albeit in a roundabout way that shows the ridiculousness of it all. How useful would this be today…

The jester is fundamentally a truth-teller. And to fire a jester for a biting joke would only make the joke all the more powerful… After the pope himself, nobody’s speech is more protected than the jester’s. He can say what needs to be said, and nobody can punish him without making himself look like the real fool.

453 years is enough seriousness. Ease the tension. Tell the truth. Get a jester.

The New Battle for Canaan

Eamonn Clark

About 3,300 years ago, Moses died on Mount Nebo, as a symbolic punishment. I have been to the spot and looked out at the land of Israel from afar, just what Moses would have seen. (A picture I took is above.) It was a hazy day, making it difficult to see everything.

The death of Moses occasioned the rise of his disciple Joshua (Hebrew “Yeshua”) who was commissioned to lead the Jews finally into this mysterious land of Canaan beyond the Jordan, their inheritance by Divine right. Joshua leads a ruthless campaign against the pagan occupiers of the land. (Here is where many of those “difficult” passages of Scripture are found…) The point of the violence is to drive out idolatry from the new home of God’s Chosen People, lest they be tempted to go after other gods. The First Commandment is first for a reason: it is the most important. If you do not worship the one true God, your natural virtue comes to nothing – the fundamental orientation of your life is wrong. To safeguard from such egregious sin, Joshua is given this task of purification.

While Joshua destroys most of the idol cults, he does not succeed fully. A remnant of paganism remains, and this remnant will lead many Jews astray. The predominant goal of the Prophets is precisely to condemn this idolatrous activity, especially on the part of the Kings. Eventually, Israel’s unfaithfulness is so bad that the Temple is destroyed and they are kicked out of the land of Canaan, exiled to Babylon – a wake-up call if there ever was one.

What does this have to do with Advent and Christmas?

With the end of the Old Covenants, the Old Law, and the prophetic tradition, characterized by the figure of Moses, there comes a New Joshua – Jesus. In fact, the name Jesus is actually just a different appropriation of the same name, Yeshua. The fierce battle cry of the mighty Joshua is no match for the gentle coos of the little Christ child. The pagan warriors of Canaan may have trembled at the one, but the demons trembled at the other.

When the mythological tradition of the Ancient Near East is recalling the death of the gods (winter), the God of Israel is being truly born. (Yes, I do think that December 25th is the correct date of the historical Nativity, just like Benedict XVI.) The one true God will later die in the spring while the pagan gods are rising, but He will rise too. He has conquered them. But sin continues… There is still a war to fight.

The ongoing battle of the new Joshua is not for the exterior Canaan but for the interior one. The Christ comes into our mysterious hearts and seeks to purify them of the idols that lead us into sin and worldly attachment, even at the expense of our suffering. This war is fought with grace and love rather than swords and arrows, and if we do not surrender, we will “win” a battle that condemns us to dwell on the Nebo of the hereafter, always looking at the real Promised Land, longing for it, and never being able to enter.

However, if we welcome the New Joshua to be born into the Canaan of our souls, and if we let Him do the painful work of purification, we will see the New Jerusalem clearly and enter in.

And that’s what Christmas is all about.

St. John of the Cross, pray for us.

True Myth Part 4: Jesus and the Tricksters

Eamonn Clark

Jumping ahead quite a bit in Scripture in our “true myth” series, today we will look at an incredibly powerful relationship between Jesus Christ and the “trickster archetype.”

Fans of the Baltimore Catechism will recall that God “neither deceives nor is deceived.” How then, could God incarnate fit into this paradigmatic role of the Trickster, occupied by deceptive figures such as Loki, Hades, various coyotes, ravens, and other such creatures – including serpents – throughout the history of mythology? These figures use trickery in order to gain power… What does Jesus have to do with this?

Without a full exploration of the ins and outs of the trickster paradigm, we can point out just a few commonalities which apply to Jesus:

  1. He is, in many ways, in between life and death. (See Levi-Strauss on this characteristic of tricksters qua mediators of life and death for more… think of how the animals which normally portray trickster characters are neither herbivores nor hunters but eat already dead animals…) Here are some examples of this “in between” space:
    1. The Baptism in the Jordan – in between the Nations (death) and Israel (life), in between the Sea of Galilee (full of fish and where He calls the first disciples) and the Dead Sea (…dead…), in the midst of the flourishing jungle but in the lowest part of planet Earth, and in water (which both gives and takes life).
    2. His first act after the Baptism – He goes out into the desert (to deal with a real trickster) in between Jericho, the city of sin and death, and Jerusalem, the city of spirit and life… This same space will be the setting for the story about the Good Samaritan (representing Himself), who picks up the half-dead (!) sojourner (Adam), of which He is the renewal.
    3. He touches the unclean (symbols of death) and gives healing/life – For example, the raising of the little girl in Mark 5, or the healing of the leper in Matthew 8.
    4. The Resurrection – Did He actually die? Is He really alive? Whatever the case, it’s clear that our sense of the “in between” is tapped into… The psychology of the uncanny valley is maxed out.
  2. He normally dwells on the outskirts of society, frequently retreating to the wilderness for solitude. Much of the 3 years of the public ministry is spent camping just near the Decapolis and other such places. Bethany is another place worth mentioning, as it is not quite in Jerusalem, but it is near it, where he raises Lazarus from the dead (more “in between” life and death imagery) and prepares for Passover for the last time… Gethsemane and Golgotha are also just outside Jerusalem.
  3. He claims the role of a gatekeeper to the underworld. (Even more death-life ambiguity.) “I hold the keys of death and Hades,” He says in Revelation 1:18. Or take John 10:9 – “I am the gate; whoever enters through me will be saved,” or John 14:6, “No one comes to the Father except through me.”
  4. He is a shapeshifter.
    1. The Resurrection – He is the same, but different. (More ambiguity!) The disciples can only half recognize Him, though the wounds give testimony that it really is the same man they knew. But He is changed somehow.
    2. The Eucharist – Jesus literally takes the shape of bread and wine.
    3. God has become a human being – certainly a kind of changing shape, albeit in a qualified sense.
  5. He cannot be contained or caught by the power of opponents. He passes through the crowd, or He hides effectively, as seen in many passages in the Gospels, such as the rejection at Nazareth in Luke 4. Instead, only He has the power to lay down His life… and take it up again (John 10:18).
  6. He does not often give direct answers. Instead, He speaks in parables, riddles, questions, and ambiguities. He arguably only directly answers 3 questions of the over 100 put to Him, and He arguably asks over 300.

Other “trickster” characteristics might be noted as well, such as spiritual power, unclear origins, and a preference for working in the midst of obscurity and chaos. What are we to make of all this?

It is that Jesus goes to the most “uncomfortable” place in our psychology and asks us, nonetheless, to trust Him. So one of the deepest parts of our mind, which is intuitively inclined to see the brokenness of the world, is “cured” by His reversal of the trickster archetype.

God “deceives” in a way by becoming human (thus not “looking like God,” as He did on Mount Sinai with fire and thunder), in order to gain the power of persuasion or condescension. But also, and perhaps in a deeper and plainer sense, God is not only reversing the trickster’s goal-paradigm but inverting it as well… Instead of deceiving to become powerful, God becomes weak in order to tell the truth.

 

No, “pressure” to resign from the papacy does not make resignation invalid…

Eamonn Clark

Look. I’m not a professional canon lawyer. But two days in a row now even I have been able to point out some whoppers, both involving juridical validity.

It’s been irresponsibly suggested that “some canon lawyers” (who?) say that if a pope resigns due to scandals, he “cannot be said to have made his decision of his own free will – even if he insists that he is doing so.”

As the kids say these days – lolwut?

Even though the Holy Father apparently has said he has no intention of resigning, he is an unpredictable man, isn’t he? So let’s take a look at this important topic anyway.

Okay, so just a few questions to start us off… Since when is there a legal definition of “scandal”? And who determines whether there is such a “scandal”? And wouldn’t it be reasonable to assume that a person who sees danger and ineffectiveness coming for him due to a scandal would truly want, as an authentic good, to leave office?

If it is true that scandal precludes the resignation of office, it would mean that the person is stuck there, even if the scandal is due to his own sins and the real good of the Church requires his resignation. On what planet is this a juridic reality? The fact is that there are always scandals and pressures facing popes which would incline them to leave office, many of which are unknown to most people. So is every papal resignation therefore invalid?

No, of course not. As my own professor of canon law told our class, one of the important tools in reading and interpreting canon law is common sense. 

Let’s go through the text, shall we? My comments follow the text of each canon.

Can. 187 Anyone responsible for oneself (sui compos) can resign from an ecclesiastical office for a just cause. Obviously, the pope is such a person. Note that mounting scandals and ineffectiveness due to pressure to resign would certainly constitute a “just cause.”

Can. 188 A resignation made out of grave fear that is inflicted unjustly or out of malice, substantial error, or simony is invalid by the law itself. This means that, even if there is grave fear on the part of the office holder, that fear must be caused by a serious threat to that person which violates justice in its mode or in its end… We could quibble about exactly what “unjustly” and “out of malice” mean (and it’s unclear to me whether “out of malice” is its own clause – perhaps so), but at present there seems to be nothing but serious complaints and demands for answers. No threats against the life or liberty of the person of the Holy Father.

Can. 332 …

§2. If it happens that the Roman Pontiff resigns his office, it is required for validity that the resignation is made freely and properly manifested but not that it is accepted by anyone. The key here is how to interpret the word “freely.” As we have seen, grave fear of being an ineffective pastor or of harming the Church through giving scandal would not suffice to inhibit freedom in the proper way, even for holders of a “normal” office. The office of the papacy, however, is not a normal office – it is the supreme office of the Church militant – and so even more stringent requirements would seem to obtain with regard to proving that someone who seems to be the pope isn’t the pope, or that someone who seems to have left the papacy has not.

…ah but wait – let’s go back a few hundred canons…

Can. 14 Laws, even invalidating and disqualifying ones, do not oblige when there is a doubt about the law. When there is a doubt about a fact, however, ordinaries can dispense from laws provided that, if it concerns a reserved dispensation, the authority to whom it is reserved usually grants it. So since there is at least a serious argument to be made that “scandal” and “pressures” do not of themselves suffice to render a resignation null when it is properly manifested, there is at least doubt about the law. This subjects the invalidating law, c. 332 §2, to a “stricter” interpretation. Any claim must overcome the arguments provided.

What, then, might actually render an attempted resignation invalid due to a restriction of freedom? Well, the pope could not be tortured to procure a resignation, for example. He also could not reasonably be presumed free when publicly and presently threatened with death or imprisonment by those with clear means to procure either. In a case like this, in which an invalidating pressure is manifest to all reasonable persons, a resignation the Holy Father actually manifests would indeed be invalid. Otherwise, we have at a minimum a doubtful application of law, which, especially given the importance of the office, should therefore be subject to strict interpretation, as explained above.

Therefore, the Pope is perfectly free to resign, no matter how bad the scandal gets.

Well, that’s my basic argument. Someone will have to show me where I’m going wrong, if indeed that’s the case. We didn’t even get into c. 17… That would be important too.

Text and context.

The Double-Effect Death-Spiral… and the Way Out!

Eamonn Clark

There are a number of pressing problems in Catholic moral theology, especially in bioethics. One of them is the right understanding of the so-called “Principle of Double-Effect” (PDE), or whether this is really a legitimate principle at all in the way it is normally expressed. Now that Dr. Finnis has both parts of his series on capital punishment out, let’s put on our moralist hats and get to work.

I’ll spare you all the ins and outs of the history of the problem – Fr. Connery’s wonderful book on abortion in the Catholic moral tradition deals with this in some relevant detail – but will give you the gist of the recent discussions so that we can dive into John Finnis’ articles. I too will write in two parts, I think…

The 19th century saw the problem of “craniotomy” come up, and this is a decent and, to me, most familiar way to dive into the problem of PDE. (Craniotomy is crushing the skull of an inviable fetus, in this case with an eye to extracting the child to save the mother.) Archbishop Kenrick of Baltimore wrote his morals handbook and forbade the operation, Cardinal Avanzini of Rome anonymously opined in favor of the procedure (pages 308-311) in his journal (which would become the Acta Apostolicae Sedis), and Cardinal Caverot of Lyon (the city pictured above, coincidentally) petitioned the Holy Office for an official response. Needless to say, there was some controversy.

In response to Caverot’s dubium, the Holy Office (the precursor to the CDF) decided in favor of Kenrick’s position. But it did so cautiously, saying that the procedure “cannot be safely taught.” It did not exclude definitively the liceity of the procedure in itself.

Let’s fast-forward to today’s iteration of the old camps, of which there were and still are precisely three…

The “Grisezian” Position:

Doctors Grisez, Finnis, and Boyle were major proponents of the liceity of craniotomy in the 20th century and into the 21st. Grisez lays out his argument in several places, including in his magnum opus (entirely available online), The Way of the Lord Jesus. It is worth quoting the relevant passage in its entirety:

“Sometimes the baby’s death may be accepted to save the mother. Sometimes four conditions are simultaneously fulfilled: (i) some pathology threatens the lives of both a pregnant woman and her child, (ii) it is not safe to wait or waiting surely will result in the death of both, (iii) there is no way to save the child, and (iv) an operation that can save the mother’s life will result in the child’s death.

If the operation was one of those which the classical moralists considered not to be a “direct” abortion, they held that it could be performed. For example, in cases in which the baby could not be saved regardless of what was done (and perhaps in some others as well), they accepted the removal of a cancerous gravid uterus or of a fallopian tube containing an ectopic pregnancy. This moral norm plainly is sound, since the operation does not carry out a proposal to kill the child, serves a good purpose, and violates neither fairness nor mercy.

At least in times past, however, and perhaps even today in places where modern medical equipment and skills are unavailable, certain life-saving operations meeting the four conditions would fall among procedures classified by the classical moralists as “direct” killing, since the procedures in question straightaway would lead to the baby’s death. This is the case, for example, if the four conditions are met during the delivery of a baby whose head is too large. Unless the physician does a craniotomy (an operation in which instruments are used to empty and crush the head of the child so that it can be removed from the birth canal), both mother and child eventually will die; but the operation can be performed and the mother saved. With respect to physical causality, craniotomy immediately destroys the baby, and only in this way saves the mother. Thus, not only classical moralists but the magisterium regarded it as “direct” killing: a bad means to a good end.

However, assuming the four conditions are met, the baby’s death need not be included in the proposal adopted in choosing to do a craniotomy. The proposal can be simply to alter the child’s physical dimensions and remove him or her, because, as a physical object, this body cannot remain where it is without ending in both the baby’s and the mother’s deaths. To understand this proposal, it helps to notice that the baby’s death contributes nothing to the objective sought; indeed, the procedure is exactly the same if the baby has already died. In adopting this proposal, the baby’s death need only be accepted as a side effect. Therefore, according to the analysis of action employed in this book, even craniotomy (and, a fortiori, other operations meeting the four stated conditions) need not be direct killing, and so, provided the death of the baby is not intended (which is possible but unnecessary), any operation in a situation meeting the four conditions could be morally acceptable.”

We can see the attractiveness of the Grisezian position. It removes the uncomfortable conclusion that we must allow two people to die rather than save one. However, it simultaneously introduces an uncomfortable conclusion: that we may ignore the immediately terrible results of our physical exterior act in favor of further consequences of that act due to the psychological reality of our intention, in this case contingent on even further action (viz. actually extracting the child after crushing the skull – presumably, a surgeon may perform the craniotomy and then simply leave the child in the womb, thus failing to save either life).

Hold on to that thought.

The “Traditional” Position:

I put the word “traditional” in scare-quotes because it is the position which follows the cautious prohibition of the Holy Office, but it is not very old and is merely probable opinion. It is taken by a good number of moralists who are “conservative” and “traditional” in other areas. And it doesn’t have a modern champion the way Grisez was for the pro-craniotomy camp.

Folks in this school often make more or less good critiques of the Grisezian position, zeroing in on the lack of appreciation for the immediate physical effects which flow from an external act. How is it that crushing a child’s skull does not equate with “direct killing”? It seems that such an action-theory, as proposed by Grisez, Finnis, and Boyle (GFB) in their landmark essay in The Thomist back in 2001, is utterly at odds with common sense. The plain truth, then, is that craniotomy, just like ripping the organs out of someone healthy to save 5 other people, functions based on consequentialism.

This position, however, must bite two bullets. First, there is the sour prescription to let two people die when one could be saved. Second, it throws into confusion the topic of private lethal self-defense… Doesn’t shooting a person in the head also directly kill in order to save another’s life? GFB made this point in their Thomist essay, and, in my opinion, it is their strongest counter-argument. It pulls us back to the fundamental text in the discussion, q. 64 a. 7 of the Secunda Secundae, whence supposedly cometh PDE.

Hold on to that thought too.

The Rights-Based Position:

The final position for our consideration comes most recently from Fr. Rhonheimer, who seems to be at least in part following Avanzini. Basically, the argument goes like this… In some vital conflicts, like the problematic pregnancy at issue, one has two options – save one life, or allow two deaths. Everyone has a right to life, but in cases where we find acute vital conflicts, it sometimes makes no sense to speak of rights. The case in which a person in a vital conflict (the child) will not even be born is one such example. Therefore, while the child retains the right to life, it makes no sense to speak of this right, and so it does not bear on the decision of whether to perform an act which would end in the child’s death if it will save the mother.

Leaving aside the problem of the language of rights in moral discourse (see MacIntyre’s scathing critique in After Virtue), we can simply observe that this is a position which does not evidently derive from virtue-ethics but is made up wholesale out of a desire to appease an intuition. Rhonheimer, as far as I recall, does not even attempt to integrate his position into the broader framework of moral theology. In sum, the damning question is, “Why precisely do acute danger to others and shortness of life remove the necessity to respect the bodily integrity/life of a person?” To me, it seems little more than an appeal to intuition followed by foot-stomping.

I credit Fr. Rhonheimer for making an attempt to present a different solution, and certainly, not all of his work is this problematic. But we are presently concerned with this particular topic. Anyway, I suggest that this is not a serious position for further consideration.

A Brief Synthesis

I recently wrote my STB thesis on moral liceity with respect to “per se” order, which is to say that those acts with “per se” order form the fundamental unit of moral analysis upon which the whole question of “object” vis-a-vis “intention” turns. I look at Dr. Steven Long’s truly excellent groundwork in his book The Teleological Grammar of the Moral Act, but I expose what I found to be some ambiguities in his definition and presentation of what exactly constitutes per se order. Skipping over all the details, let me quickly show how problematic the first two foregoing positions are and then give a rundown of the basic solution and its integration with respect to capital punishment. (It is Finnis’ articles on the death penalty which brought us here, remember!)

There are 3 dilemmas we have already mentioned: the central problem is craniotomy. At the two poles are the “transplant dilemma,” with one healthy patient and 5 critical patients in need of new vital organs, and the standard case of private lethal self-defense (PLSD), such as shooting a person in the head in order to stop his lethal attack.

The Grisezian position ably explains the craniotomy and PLSD. Nowhere – and I have looked pretty hard – do New Natural Law (NNL) theorists explore the implications of their action-theory (such as presented by GFB in their article) with respect to something like the transplant dilemma. One could easily appropriate the language of Grisez’s passage in TWOTLJ to accommodate such an obviously heinous action as ripping out the heart, lungs, kidneys, liver, etc. of a healthy man to save 5 others. (It should be noted that the individual’s willingness to give his body over to such an act, while good in its remote intention, is totally inadmissible. I think basically all Catholic moralists would agree with this.) To rip out the man’s vital organs could certainly be described as “reshaping the body” or something similar to Grisez’s description of craniotomy as “reshaping the skull.” After all, the surgeon need not intend to kill the man – he could simply foresee it happening in view of his means to save these other men.

GFB evidently miss the point in their Thomist article, as they claim a causal equivalence between craniotomy and procedures done on a person for that person’s own sake, on page 23: “It is true that crushing the baby’s skull does not of itself help the mother, and that to help her the surgeon must carry out additional further procedures (remove the baby’s body from the birth canal). But many surgical procedures provide no immediate benefit and by themselves are simply destructive: removing the top of someone’s skull, stopping someone’s heart, and so forth.” We can see, then, that the principle of totality is undervalued by GFB and those who follow them. Serious damage done to a person must at least help that person. Any help to other persons is secondary, and I would argue per accidens rather than per se… One human substance is always related accidentally to another human substance.

The traditional approach more or less throws the teaching of St. Thomas into a cloud of ambiguity. If craniotomy is illicit simply because of the directness of its physical causation, then the language in q. 64 a. 7 becomes unintelligible. We have to see the whole thing:

“Nothing hinders one act from having two effects, only one of which is intended, while the other is beside the intention. Now moral acts take their species according to what is intended, and not according to what is beside the intention, since this is accidental as explained above (II-II:43:3; I-II:12:1). Accordingly the act of self-defense may have two effects, one is the saving of one’s life, the other is the slaying of the aggressor. Therefore this act, since one’s intention is to save one’s own life, is not unlawful, seeing that it is natural to everything to keep itself in ‘being,’ as far as possible. And yet, though proceeding from a good intention, an act may be rendered unlawful, if it be out of proportion to the end. Wherefore if a man, in self-defense, uses more than necessary violence, it will be unlawful: whereas if he repel force with moderation his defense will be lawful, because according to the jurists [Cap. Significasti, De Homicid. volunt. vel casual.], ‘it is lawful to repel force by force, provided one does not exceed the limits of a blameless defense.’ Nor is it necessary for salvation that a man omit the act of moderate self-defense in order to avoid killing the other man, since one is bound to take more care of one’s own life than of another’s. But as it is unlawful to take a man’s life, except for the public authority acting for the common good, as stated above (Article 3), it is not lawful for a man to intend killing a man in self-defense, except for such as have public authority, who while intending to kill a man in self-defense, refer this to the public good, as in the case of a soldier fighting against the foe, and in the minister of the judge struggling with robbers, although even these sin if they be moved by private animosity.”

Without launching into a critique of the Cajetanian strain of commentary which ultimately gave rise to the crystallized formulation of PDE that pervades most moral discourse on vital conflicts, I will again follow Long and say that the “rules” of PDE really only work if one already knows what one is looking for. In this respect, PDE is like the moral version of St. Anselm’s ontological proof for God’s existence – it is nice to have in a retrospective capacity, but it is not actually that helpful as an explanatory tool.

As we have seen, GFB take Thomas to mean that one does not “intend” to kill the aggressor, just as the surgeon does not “intend” to kill the child in the craniotomy. The traditional school does not have as clear an answer – it seems forced to say, somewhat like Fr. Rhonheimer, that the rules just “don’t apply,” yet without a convincing explanation. After all, the principle of totality does not bear on the slaying of one person for the sake of another, even in the case Thomas addresses. Furthermore, it appears that it is only the death of the aggressor which stops the attack, which would imply “intentional killing” as a means. How, then, do we explain St. Thomas’ position?

We can note a few things in response. First, it is in fact not death which stops the attack initially – it is the destruction of the body’s capacity to continue attacking, which itself is the cause of death. The separation of the soul and body (which is what death is) need not be the chosen means or the intended end. In every single case, the aggressor is incapacitated before dying, and such incapacitation is what is sought. (This is at least part of what makes Finnis’ argument about “unintentional killing” in war plausible.) Second, the child stuck in the womb is a radically different kind of threat than the rational aggressor. Third, Thomas is quick to turn the discussion to public authority, as a kind of foil. All of this is quite significant and points to an answer.

To the first point… It is true that the private citizen can’t have the death of the aggressor as a goal, meaning that death can’t be what is sought as a means or as an end. He doesn’t need to do anything to the soul-body composite as such; he only needs to do something to the body’s ability to be used as a weapon.

To the second point… A gunman in an alley is a very different sort of threat from a child growing in the womb. There seem to be two classes of threats – non-commutative and commutative. Non-commutative threats are those which result from principles not in themselves ordered towards interacting with the outside world, viz., principles whose operations have no terminus exterior to one’s own body. These would be the material principle itself of the body (the act of existing as a body) and the augmentative and nutritive faculties of the vegetal soul. So a person falling off a cliff, or a child growing in the womb, is not acting on the outside world… Threats which proceed from the animal or rational appetites, however, are indeed acting externally. The crazed gunman who is not morally responsible and the hired hand are both trying to do something to another person, whereas the child growing in the womb is not. So perhaps different kinds of threats allow for different kinds of defense.

To the third point… Even without a full exploration of the famous “self-defense” article quoted above, we can see that Thomas is eager to explain that public authority can kill intentionally – evidently meaning that death can be the end of one’s act rather than just the means. (“Choice” refers to means, “intention” refers to ends – they are only equivocally applied in the inverse senses in scholastic morals.) Here’s where it gets weird.

Because the soul-body composite is its own substance (a living human being), the act of killing a person (regardless of one’s psychology) destroys that substance insofar as the world of nature is concerned. (We leave aside the interesting questions of the survivalism vs. corruptionism debate among Catholic philosophers.) It forms a per se act – that is to say, nothing further that comes from this action will be a per se effect. This is because, as I argued in my thesis, per se order exists only within the substance chosen to be acted upon. Per se effects are those which, given the real situation of the substance, necessarily occur in the substance the agent acts on as a result of the agent’s act itself. So to destroy a substance necessarily ends the per se order. At the end of the per se order there is the intended effect – such as debilitation (which is only logically distinct from self-preservation and therefore is not a separate/remote/accidental effect – what it is to protect oneself simply is to remove a threat) or death. Of course, this intended effect can itself be part of a chain of intended effects which function as means in relation to some further end. If I defend myself in order to live, but I want to live for the sake of something else (like acquiring wealth), then there is a chain of intended ends which function as means. The necessary process of moral evaluation, however, is to look for the per se case of action and examine whether it is rightly ordered in itself.

We have seen with the transplant dilemma that it is wrongly ordered to damage one innocent person’s body lethally with the good aim of helping many others. The answer to the craniotomy seems to be the same… The child does not have an unjust appetite; he has a rightly ordered vegetal/material appetite which is inconvenient to others, so he may not be attacked unless that attack also proportionately helps him and is chosen in part for that reason. (Such a case might really exist – for example, a non-viable fetus is causing the womb to rupture… It is foreseen that delivering the child will both save the mother and allow the child to live longer than he would have otherwise, even though exposure to the outside world will be the cause of his death. It certainly seems that this would be permissible given the principle of totality.) Finally, we reach the case of PLSD… There is no principle of totality at work here, even though the intended effect of self-preservation is immediately achieved with the debilitation which causes death. Rather, the normal rule of totality is indeed suspended. This is because of the kind of threat which the aggressor poses – it is a threat to the commonwealth due to a disordered external appetite.

Because “it is natural to everything to keep itself in ‘being,’ as far as possible,” and “one is bound to take more care of one’s own life than of another’s,” it stands to reason that in a case in which there is public disorder due to the external act of a person, that person becomes the rightful recipient of correction at the hands of those whom he threatens, without his own good being a barrier to protecting the good of oneself or the community. The blows to the aggressor, we can see, actually help him – they keep him from being a bad part of society. And the private citizen’s duty is indeed to protect the commonwealth insofar as he is a part of it… This would include a kind of “natural delegation” to dispense with individual totality for the sake of communal totality – he is at liberty to risk the good of the one person (while, remember, he actually does something good to the aggressor by rectifying his disordered exterior act) for the sake of the commonwealth. The private defender may not try to kill the aggressor, but he may knowingly cause his death with no benefit to the aggressor beyond keeping him from being harmful. Even though death is a per se effect, the defensive act is legitimate – the private defender acts like a miniature public official in this urgent situation, without psychologically taking death itself as an end.

This plugs in very nicely with Thomas’ vision of capital punishment… Stay tuned for part 2, though I’m sure a lengthy tome like this won’t be too necessary, given that a response from Dr. Feser is likely forthcoming, due in no small part to his having been called out personally by Dr. Finnis.

Interesting times indeed.

Three Intellectual Errors in American Leftism

Eamonn Clark

Though there are many problems one might point out in present-day progressive American politics, I want to focus on three particularly deep-seated intellectual vices. The misunderstandings concern the following: the order of charity, experience and knowledge, and the terminus a quo/ad quem paradigm. They correspond to three key issues… the mode and structure of government, the value of so-called diversity in rational discourse, and the purpose of social institutions and roles, especially in relation to sex and gender.

First, the order of charity. One of the great principles of Catholic social teaching is subsidiarity, which is the preference to defer to the more local government to provide for a constituent’s needs. The chain goes something like this: individual – family – town – county – state – nation – world. To know the needs of many individuals belongs to the governor of the family, to know the needs of many families belongs to the governor of the town, and so on. It is easy to see that as we ascend the ladder the task of governance becomes increasingly complicated, as it involves increasingly many parts. This proves the need for an order of governance in the first place, as it would be unthinkable for the king of a large country to govern each town directly, not only because of the amount of time and energy such micromanagement would take but also because of the diverse needs and situations of each town which are understood best by those who actually live there. The king is only in a position to know the affairs which affect the whole country, where its largest parts are concerned in their relations with each other. Thus, subsidiarity. The more that can be delegated to smaller governments, the better. The value of this principle is taught by some of the harshest lessons of world history… When the emperor gets too powerful there is trouble ahead both for him and for his empire.

But what about the relationships as they go up the chain rather than down it, or even those relationships at the same level? For example, what should characterize the individual’s actions vis-a-vis the family, or the state, or the world? How should families or counties or nations interact with each other? Of course, the lower owes care and respect to the higher and ought to be willing to make appropriate sacrifices for the good of the whole of which he is a part, with a greater kind of love given according to the dignity of the political body. However, this good will, or charity (ideally), follows an order, just like the governance to which it relates. Because we are creatures, we can only love in concrete practice up to a certain point, and our acts of love therefore should be patterned on our proximity – physical or otherwise – to the object of that love. Just as good parents care for their own children more than their next-door neighbors’ children, they would also care more about their own town than a different town, because it is their own town which is most immediately able to care for them. Furthermore, they would be more ready to sacrifice for their town than for their county, state, or nation, not because they don’t have a greater kind of love for the larger body (i.e. the nation) according to its dignity but because that body is more remote. Finally, they will exercise more diligence and care toward families in their own town or neighborhood, as they have more interest in common with each other and are more able to look out for each other precisely because they are parts of the same small community. Such care is a legitimate application of the principle of solidarity… To be in real solidarity involves real proximity, of geography, blood ties, virtues, or even goals, and that proximity also tends to give a better understanding of the situation. This is why voluntourism is generally bad, or at least not as good as it feels: it ignores the needs of one’s close neighbors in order to go save people far away, and it does little to no good in the end, possibly even making things worse. The Western obsession with “saving Africa” is one example of this.

This should reveal at least one major problem with two key progressive agenda items: socialism and globalism. It is simply not possible to take care of everyone by centralizing government more and making it bigger (including by weakening or removing borders). We have a duty to look after those who are more closely united with us – and so long as we are flesh and blood, occupying physical space and belonging naturally to families, there will exist this natural order of government – and charity. We are bound to love our neighbor, but we are certainly bound to love some neighbors more than others. (See Gal. 6:10, 1 Tim. 5:8, etc.)

Second, experience and knowledge. It has become an all-too-familiar rhetorical move: you don’t share my experience, therefore your position is automatically irrelevant. “How can you, a man, dictate sensible policy on abortion? You don’t know what pregnancy is like!” This kind of thinking pervades public discourse in debates on race, gender-theory, guns… It even exists in the Church. How much do we really need to “discern with” and “listen to” various people or groups in order to understand the moral and doctrinal issues at stake? Certainly, nobody is saying that acquiring knowledge of particulars is bad or even unhelpful for dealing with those particulars themselves – indeed, it is vital, as Gregory speaks about at length in the Pastoral Rule – but once the general principles are known, especially through the authority of revelation, there is no need to go on studying particulars to learn those principles. If some people want to be “accompanied” a certain way, at odds with right morals or doctrine, then it is they who need reform, not the principles. It is they who need to work to build the bridge. Thus, the first public words of the Lord were not “what do you think” or “how are you feeling,” but rather, “repent” and “believe.”

What, then, is the value of experience? It is the collection of memories which, through the abstraction of the universal principles at work in them, can be applied in pursuit of a desired end. Experience can contribute to making a person more prudent if he pays attention and has a good memory, but it does not necessarily give someone all the knowledge required to make a good decision about how to reach the goal, nor does it necessarily tell a person what ends are best to seek at all. Likewise, empathy with suffering groups, which provides a kind of substitute-experience, does not give the right means or ends either. It can actually be quite blinding. For example, perhaps you feel terrible for victims of drunk driving – but you have to look at whether outlawing alcohol would result in damage far worse than the damage avoided. Everyone you govern must be considered fairly. (See above about subsidiarity!) The wisdom that comes from suffering borne well is a spiritual kind of wisdom, a sort of perspective on one’s own life and meaning, and typically that is its limit. Being a resident of a war-torn country does not make a person an expert on foreign policy; it makes him an expert at hiding from bombs and bullets. If the same person also studied international politics at university and served for decades in his nation’s diplomatic corps, these would be of greater value for prudential decision-making about foreign policy, as they both communicate more information about the relevant matters. Perhaps his experience of hiding from air raids helps to contextualize what he is learning, or helps to remind him of how important certain consequences are, but simply having experienced the wrong end of a war does not make him a good politician.

Knowledge can be gained without experience of the things learned about. This principle is easily proven by the very existence of education: we believe that we can give people knowledge by communicating information. It is left to the individual to organize that information and make a judgment, right or wrong. Thus, a priest who has studied the Pastoral Rule, for instance, is in a much better position to preach and rule well than if he had not studied it, ceteris paribus. If experience were the sole criterion for knowledge, we would face epistemic anarchy: no two people have the exact same experience of anything, and therefore there could never be any common body of knowledge. To rectify this, there is a theory of group-based experience, codified in the doctrine of “intersectionality.” Because minorities (and women) are necessarily victims, and the victim-narrative must always be believed, the number of victim-classes to which one belongs gives greater primacy to one’s claims and demands. So goes the theory. But if intersectionality defines knowledge, then we should only need to find the few Black, homosexual, transgender-woman, overweight, Muslim immigrants and let them run our lives, since they are practically demigods given their high intersectionality. And even within such an elite group, there would be divisions – some grew up poor, others did not. Some have genetic diseases, some do not. Etc. And so intersectionality is also a kind of compartmentalization which tends toward epistemic anarchy. The truth is that we are not only animals, we are rational animals; we are capable of learning without experiencing, and therefore we can generally see what is good and right in public policy without having been in the exact circumstance of those to whom any given piece of legislation applies, provided we are actually informed of how that policy will affect people and be enforced (subsidiarity!)… But we don’t need to take subsidiarity so far that we actually must be part of the racial, gender, “whatever” group over which we exercise authority.

Third, the terminus a quo/ad quem paradigm. The terminus a quo is the “point from which” one goes. It stands in relation to the “terminus ad quem,” the “point to which” one goes. It behooves a person who wants “progress” to say exactly where that progress leads, and where it stops. Not only has there been deep confusion about where exactly some kinds of “progress” are heading, but there has also been no principled way to determine when that progress ought to stop and be conserved. Some slopes are slippery indeed.

Today’s conservatives are yesterday’s liberals, especially with regard to gender-theory and its related issues. If you need proof, well, there is an endless supply, but try this one on for size. (Yes, really, click the link. If that doesn’t drop your jaw, nothing will.) What is the endgame? What is it really all about? How far can we “progress”? Of course, the goalposts keep moving. First, mere social tolerance is the only request. Then, once acquired, it is a small legal concession here or there, nothing big. Then, the redefinition of a social institution protected by law – but surely, this is the last step… Except then it becomes domination in schools, in the workplace, in the culture at large: indoctrination of the youth, forced service to same-sex weddings, and constant positive portrayal and exposure in the media. And now that the homosexual lobby is quickly running out of room, the momentum has carried into transgender rights.

But at this point I want to ask about the intermediate step which, for some basically sincere people, really is seen as the “end,” the terminus ad quem. That step is the redefinition of social institutions or roles: same-sex marriage on the homosexual front, and something like the “bathroom bills” on the transgender front. There is a distinct problem of intentionality for each movement with regard to its understanding of its terminus ad quem as such.

Everyone has heard the comparison between the civil rights battle of the 1950’s and the present-day struggle for so-called “gay rights.” There is an oppressed group which only wants equal treatment and protection under the law. Just as Blacks couldn’t use the White schools or water fountains or any number of products and services, so gays don’t (didn’t) have access to marriage, because it is limited to heterosexuals. Because marriage is so important in public life and personally desirable for so many reasons, the desire for it is equivalent to the desire for education, transportation, etc., in which Blacks were discriminated against. Therefore, the two movements are basically analogous.

The problem with this argument is with regard to the terminus a quo/ad quem relationship. Under Jim Crow, goods and services that were equally desirable to both Whites and Blacks were apportioned unequally and unfairly. It was unfair because it put Blacks and Whites on fundamentally different levels of human dignity, when the reality is that race does not determine basic human nature. In other words, Blacks and Whites share the same terminus a quo, since they are fundamentally equal as human beings with the same desires and therefore deserve basic equality of opportunity, but they were treated as having different termini a quo. Because they share identical desires, such as good schools, a seat on the bus, and so on, their desires themselves have an identical terminus ad quem. To sum up, Blacks were given a different terminus ad quem because it was thought they had a different terminus a quo when in reality they did not. The civil rights movement sought the right to the same terminus ad quem by trying to show the Black terminus a quo was the same as the White terminus a quo.

This is (was) not the case with the push for same-sex marriage. Here, the terminus a quo is assumed to be the same by the government, and the terminus ad quem (marriage) is available to all. There is already equality of opportunity – it’s just that the desire of homosexuals is not for the terminus ad quem which was equally available. Instead of pushing to be able to use the White water fountain, this was a push to create a Black water fountain because the water from the White fountain tastes bad to some.

Consider again: in no country ever in world history were homosexuals categorically barred from marriage. It is rather that they typically don’t desire the “kind” of marriage available. Instead, a new kind of marriage needs to be created to suit their desires – a different terminus ad quem altogether, just with the same name. The terminus a quo is different too, not because homosexuals and heterosexuals differ in fundamental human dignity, but because the desires which define these two categories are unequally useful to the commonwealth in which they seek to be fulfilled. Unlike schools or water fountains, marriage has not historically been treated as a good or service consumed; it has been treated as an office from which services and goods are provided to the community, namely, children and care for children. Even if same-sex couples were generally able to provide for adopted or surrogate children as well as a child’s natural parents can, which seems quite obviously incorrect for several reasons, they would still stand at an unequal public dignity because they need help bringing children into existence. A man and a woman do not, generally speaking, need help procreating. And because of the clear good of parents staying together, having kids, and treating those kids well, the government is right to incentivize a lifelong commitment to a monogamous heterosexual relationship with certain public benefits which are not due to even the most committed homosexual relationships. The tendency to produce children is why there is such a thing as marriage in the first place (to protect, educate, and nurture children in a balanced and stable environment), and kids are also the primary reason the government should be interested in marriage at all, as they are the future of the commonwealth. It is especially dangerous when many fatherless young men are gathered together – this is how and why gangs form in cities… the kingpin is the replacement for the father.

We could map this same twist of the terminus a quo/ad quem dynamic onto some other public function or office of nature, such as the military. Just as every society needs marriage, it also needs a military, and so there should be certain incentives or “perks” that come with taking up arms as a soldier. But what if I want those same benefits without joining the current version of the military? Suppose I too am patriotic, own a gun, dislike terrorists, and sometimes wear camouflage. Shouldn’t I too have equal access to the military? I do, of course – I could go sign up at any moment – but I want to do it my own way, because I don’t desire to go to the desert or live on a base. Shouldn’t military rights be extended to me, too?

Anyone can see that this is the same line of reasoning as the same-sex marriage argument, and anyone can also see that it is patently absurd.

But there is a different kind of absurdity at work in the transgender activism of today… What is the terminus ad quem of a gender transition – or even of the activism in general? If gender is a social construct, as it is so often claimed today, what is the value of changing the body? Cross-dressing or surgery would make sense if one’s real gender were something inherent to the person. So is the terminus ad quem simply to be treated a certain way by other people according to the superficial notions of male and female? If gender is a social construct, then there is no “noumenal” change; it is only a “phenomenon” which changes – that is, there is, and can only ever be, a change in perception rather than any objective reality in the person or the body called “gender.” This seems contradicted by the advent of the big step in transgender activism, which is, like the gay agenda, compulsion. In this case it is even worse, because it is more arbitrary. If gender were only a social construct, looking and acting sufficiently “male” or “female” would suffice, but because the meaning of those terms is sliding away into oblivion, like “marriage,” the “appropriate” way to treat a person is based solely on that person’s desire to be treated a certain way. Because there is no objective reality “male” or “female,” and because it is either consistently impossible or simply irrelevant for transgender people to look and act sufficiently like the paragon of “male” or “female” because of their biological sex, before or after surgery, it may be necessary simply to force people to use certain pronouns that they would not normally use.

Not to do so would be “violence,” because it causes depression and social isolation which can lead to self-harm or harassment. Therefore, speech at odds with my own desire to be called “he,” “she,” “zhe,” or whatever, refusing me the use of any bathroom or locker room I want, or disallowing me from putting on my official documents whichever of an ever-growing list of genders I determine: all of this is punishable by law… Bad, right? It’s happening in Canada already with the infamous Bill C-16. Except we are not looking at all the harm this can cause; we are looking at the terminus ad quem. What has a trans-man or trans-woman actually become? Surely, they would say a “man” or “woman,” full stop. (Never mind that this is already causing problems – for example, does a trans-woman count as a man or as a woman for the purposes of any kind of affirmative action slanted towards women? Or take the example in the link above about the “transphobia” of RuPaul!) If gender is a social construct, a gender transition is to create a perception of a person as a member of a certain gender category. But since that category is completely based on perception, in what does the transition actually consist? What is actually being changed? And if it is all about my desires anyway, wouldn’t it be easier to change my desire to match people’s seemingly entirely empty and baseless perception rather than the other way around? If “man” and “woman” don’t really mean anything objective anyway, then why would one even want to be called or treated as one or the other? What is the motivation to depart from the terminus a quo? It seems to be a comically extreme exercise in vanity…

Hopefully I have hammered home the point. The terminus ad quem of gender transitions, and of the activism surrounding them, is unclear at best. And where the movement in general will end is anyone’s guess, but compelled speech is likely involved. After that point, my guess is that trans-humanism will be next, especially given the rapid advances being made with the ongoing development of CRISPR.

Of course, the truth is that gender dysphoria and its accompanying behavior constitute a tragic mental illness and symptoms of that illness. The desire to “become a man” or to “become a woman” is based on a fetish for the biological reality of the opposite sex and the social realities based upon it, or some similar unfortunate disposition of the mind. Something approximately the same could be said of same-sex attraction.

These three points understood rightly – the order of charity, experience in relation to knowledge, and the terminus a quo/ad quem paradigm – give us a fitting lens through which to look at mainstream American (and broader Western) politics. The ideas are firmly rooted in the Christian intellectual tradition and help to make very useful distinctions. Hopefully they can assist you in forming your own opinions and in having your own discussions. Let me know what you think in the comments – but play nice!