In Defense of Bad Priests

Eamonn Clark

I recently prepared third graders for their First Holy Communion. Going through the story of the Last Supper several times, I noticed that they had a fascination with a certain Apostle… You guessed it – Judas. A fascination with such a character is understandable, as it seems rather out of place in a story which one would think is supposed to be exclusively upholding models of virtue. This is not unlike the very grown-up temptation to expect moral perfection from Catholic clergy. After all, they are supposed to be models of virtue, right?

Yes, they certainly are, and extra scrutiny is rightly deserved because we do indeed have the fullness of truth and grace available to us. But here are the facts. Our Lord chose losers, dummies, and wicked sinners as the foundation of His Church. Of the original Twelve, ten were ambitious cowards. One of those ten was also arrogant (Peter). The eleventh was just ambitious (John). And the twelfth one was a greedy traitor. (Later on He would also call a terrorist to this elite group.)

The place to start looking at the failure of any priesthood is a comparison with the first four failed priesthoods, and the first successful one: Adam, Aaron, Nadab, Abihu, and Melchizedek, respectively.

Adam was the high priest of nature, called to guard and serve the Garden of Eden and his wife Eve at his own expense. He ought to have put himself between her and the serpent, but he shrinks away from his duty. He stands next to her (as Eve “turned to her husband” to give him the fruit) and watches this calamity take place. The serpent goes after Eve first, because he knows that she is easier to trick and that she might be able to trick her husband. When confronted by God, Eve blames the serpent, and Adam blames Eve: no responsibility is taken.

The Devil often seeks to harm God’s priests through the very people they are called to protect. In this case, Adam’s own fear and self-interest allow his beloved to fall, and then she takes him with her – for what husband would want to have a wife estranged from him, as Eve surely would have been without Adam following her into sin? And yet they become estranged from each other anyway, needing to hide themselves with the flesh of animals… The first time blood is shed in Scripture, it is to cover up the sins of our first parents. A sign of things to come, for sure.

The next failed priesthood is Aaron’s. While Moses is busy with spiritual matters on Mount Sinai, the people grumble against him. The patience required to receive the Covenant is too spiritual for them, and so they threaten to leave the mountain in protest. Aaron is concerned about such a loss of numbers – he is afraid of what Moses will think. He has the people give him their gold, and he makes for them an idol which provides them with the experience of God they wanted: an unchallenging, unspiritual, ungodly experience. But the people are happy, and they stay put for a while, high on their own erroneous ideas about the worship of God. Aaron saved the day. When Moses returns with righteous fury, Aaron explains, like Adam did before God, that it was not his fault – it was the people’s and the furnace’s. More shirking of responsibility. (Moses gives them the proper experience of the god Aaron made for them when he grinds it down, throws it in the water, and makes the people drink. Like a good priest, he teaches them that a dead god gives even less life than water: it cannot save.)

Unlike Adam’s, Aaron’s ambitions were totally worldly. Instead of trying to become like God as a direct opponent, he simply wants to be the hero of God’s chosen people. Aaron wants just a little bit too much of their attention… He is not really after the gold, but what lies behind the gold – the esteem of men. That is what gives gold its value, after all.

Nadab and Abihu were Aaron’s sons. They violated the code of the Lord’s sanctuary by bringing unholy fire into the Tent of Meeting. This strange fire was deeply repugnant to the Lord, and so He slew them where they stood. Our Lord will only have sacred heat and light dwell within His holy place. Though profane fire may sustain bodily life, only sacred fire can sustain the life of grace.

The first successful priesthood is that of Melchizedek, whom Abraham meets after his battles. His is a totally spiritual and eternal priesthood, offering bread and wine and accepting the tithes owed to him for his work. He does not ask for money, he simply receives it. He is a priest not because of his family stock, like the Levites, but because of his charity. He gives to God first, then he receives what is rightfully his from men. He does not go looking for greatness: he simply is great.

Judas wraps up all of the failures of earlier priests in himself and makes them even worse. He is unwilling to do the work of a priest – putting himself in the place of Christ over and over again for the salvation of souls, standing between the Devil and the weak – even though, unlike Adam, he would not have been alone in this task. He trades the incarnate Lord not for the esteem of men, but for money itself. The purifying fires of the grace welling up in the Eucharistic Lord are traded for the fires of the foundry which purified the silver he would take as payment for his betrayal and would later throw into the Temple to try to buy back that grace. He takes into his body the blessed fruits of that very first celebration of Holy Mass – a Mass he was simultaneously endowed with the awesome power to perpetuate, as a recapitulation and elevation of Melchizedek’s priesthood – and then invites in the Devil to contradict it all. Though the accidents of bread and wine sustained Judas’ bodily life, the spirit within him died because of his rejection of the grace within the Lord’s Body and Blood – true food and true drink which preserve from everlasting death.

What greater human evil is there than the evil found in Judas on Holy Thursday? And yet that very night, Our Lord bowed before him to wash his feet, and He even personally called him “friend.” It is not unfair to say that, in a way, God loved Judas more than anyone else in human history, for there has never been a fouler human being to love. In the midst of this supreme wretchedness, Christ left us a memorial of His own greatness.

We know how the human story turns out. Judas despairs of the very mercy he was shortly supposed to be empowered to bring to others in the sacrament of confession. He attempts to slay himself (though perhaps he did not succeed, and so received even more time to repent), as if the death of the Lord had not been enough payment for his sin… No, Judas saw himself as so great that he believed his sin was unforgivable. “What a fool I have been,” he uttered. Yet this foolish failure brought about the culmination of our very redemption. Without Judas, there is no Passion, Death, or Resurrection. There is also not the greatest condescension of love ever shown by God. Deicide is not thereby justified, but God’s choice to allow a bad priest to exist in the Church is.

Meanwhile, Peter weeps in contrition and makes amends with “the man” he denied knowing, within earshot and within hours of having heard this prophesied. He left that “strange fire” in the courtyard, from where he watched the Lord shiver in the cold and dark of the prison cell, and he leaves the slave girl before whom he cowered in fear. Behold, the Prince of Apostles, who would eventually learn that taking up the sword is better left to those who persecute Christ than to those who defend Him, and who would finally end his life as a willing victim upon a cross. The priesthood of Peter was in as sorry a state as the priesthood of Judas; the difference was repentance. Yet again, Our Lord shows His greatness through the failures of one of His hand-picked dispensers of grace.

The Church on Earth is institutional and hierarchical by nature, because human beings require such an order so as to avoid repeating the tragic error of the men of Babel who tried by their own powers to cooperate to reach up to Heaven. The Church on Earth is also sinful by nature, because it is populated by human beings – even in its hierarchy. It has been so from the moment of its inception, and this is by design. No, God does not want bad clergy in the same way He wants good clergy, but He does want to permit them to exist for now. He knew Judas would betray Him, and He knew all the clerical pedophiles, heretics, and antinomians of our own day would do the same. As they betray the Lord by selling Him for popularity or money, as they shrink from their duty to stand in front of Satan and then blame the weak or the natural insufficiency of their means, and as they profane the Eucharist through indifference toward it, they repeatedly show the power of Christ in His Church: even through all this, there is victory waiting.

There has always been a crack in the foundation, there has always been chaff in the wheat, and yet there has always been grace available through these men nonetheless, as it is God’s own power which is the source of their priesthood and thus the source of their power to give grace – “ex opere operato.” God shows His majesty in the midst of this weakness and wretchedness. And sometimes He even brings these men out of their shameful disgrace and elevates them to the profoundest heights of sanctity, a feat which must be marveled at. There is true hope of Heaven for every bad priest in this world. Christ calls each of them “friend.”

Perspective is important. “There is nothing new under the sun,” as Qoheleth reminds us. We would do well to recall more frequently the beginnings of the Church to understand Her challenges today. (A reading of the disturbing history of ancient Israel would help too.) Whatever cleric is the object of concern – parish priest, celebrity priest, local bishop, curial official, pope – if there really is sin there, realize that it is just business as usual. The Barque of Peter has always leaked in the storm while the inept crew runs about helplessly, and yet it continues to float safely toward the harbor. Our Lord can guide it even in His sleep.

Let’s pray and fast for all priests, especially those who need it most.

Our Lady, Queen of the Clergy, pray for us!


Main image: Pope John XII, who was killed in the act of adultery by a vengeful husband

The Grotesqueness of the Mass and the Problem of Evil


I would like you to imagine the classic love story. You know the one: The daring knight rescues the damsel in distress from the fiery dragon. The details really don’t matter. All the story needs, seemingly, is a knight, a dragon, and a princess. However, it seems that there is one other element needed in the story, and that is the element of danger. For the story to work, the knight must triumph in the end, but only after a battle in which he might have lost. And this seems to be true, not just from our perspective, but from the perspective of the princess as well.

I mean, if the story is to be believed, the princess loves her knight, and love seems to include a desire for the beloved to be safe from harm. Yet, imagine how the princess would feel if the daring knight, instead of facing the dragon in hand-to-hand combat, camped a mile away from the castle with a sniper rifle, killed the dragon from a safe distance, and then waltzed in to pick up the princess. A bit anticlimactic, isn’t it? Don’t we all feel, as much as we might not like to admit it, that if we were the princess, we’d prefer our beloved to risk it all to save us? Don’t we, in a secret place in our heart, want our knight to be scarred?

Now, I’m not going to try to understand the motivation for this desire. I don’t know where it comes from, I only know that it seems true that we have it. But, I do think it has to do with what comes after the knight’s daring rescue. While the knight and princess gallop away on a snow white stallion, isn’t there already a natural bond forged by their shared experience of the dragon? If the knight had faced no danger and suffered no injury in his battle with the dragon, wouldn’t the princess, as much as she loves her knight, feel estranged from him? Wouldn’t she ask herself, “Does he understand what the dragon did to me?”

I have often had that question about my relationship with God. Knowing how much my sin has hurt me and made me despicable to myself, and reflecting on the glory and perfection of God, I sometimes have asked myself, “Does He understand what sin did to me?” The answer God gave me at the cross, and continues to give me every day in the Mass is, “Yes, because sin has done it to me too.” There seems to be a deep psychological reason that the bread and wine are consecrated separately in the Mass: We want a God who knows what it feels like to have His blood separated from His body, in the same way that we have spilled our blood living in a broken world. Of course, we want a God who is all-powerful, who triumphs over sin and death, no denying that, but we also want a God who bleeds in the process. We want our God to carry the same scars we do.

That is “the grotesqueness of the Mass.” In the Mass, as a continuation of the eternal sacrifice of Christ on the cross, God makes Himself vulnerable to us, so that He can share in our weakness. Our suffering becomes the point of encounter with God. In the Mass, God enters our brokenness, our loneliness, our anger, our numbness. That is the horrible beauty of the Mass and the cross: that the hour of good’s triumph over evil is when good is weakest. It is when God looks most like a man. God suffers with us, in order to make Himself capable of being understood by His creatures who have so long suffered under sin, that they are unable to comprehend a life of love without suffering.


And yet, we know that this is not the end. God chose to suffer not just to meet us in our suffering, but to bring us out of it. We have hope that there is a love that transcends suffering, and though, in our broken human condition, we can’t experience it now (or at least, our experience of it is limited), our hope in God is that some day we will. That is why the problem of evil (why does a good God allow suffering in the world?) is not so much a problem as it is a recognition of our broken selves. As fallen men and women, our experience of our own brokenness makes us want others to have experienced our suffering. This is not because we are evil and sadistically want others to suffer, but because we want to know we are not alone. The cross not only gives us that reality, but also the hope for something more: something we cannot fully comprehend now, but something we know we’ve been missing. Evil exists because in our broken state, we need evil to help us recognize the good. In the evil of the cross, we see the ultimate good, and that ultimate good gives us hope for a good without evil, a love without pain, a final victory over sin.


Post by: Niko Wentworth

Main image: The Deposition from the Cross, Bl. Fra Angelico, 1434

The New Albigensianism, PART III: An Existentialism Crisis


Having examined the first part of the “postmodern manifesto,” which is scientistic, we now turn to the second part, which is existentialist. Here it is again:

Real knowledge is only of irreducible information about the material world, and I can manipulate that same material world however I want in order to express myself and fulfill my desires.

The imposition of a spirit onto its flesh and the world is our object of investigation today.

After the Kantian revolution proposed a deontological moralism as a replacement for metaphysics, Schopenhauer took up the reins and ran with the theme: the will reigns supreme over the intellect. This doctrine recalls those first rumblings present in Ockham, Abelard, Scotus, and even St. Bonaventure. (Who could forget Dante’s depiction of Bonaventure and Thomas circling around each other in Heaven debating the primacy of the intellect and will?) Then came Søren Kierkegaard’s deep anxiety over life, together with a suspicion of some kind of opposition between faith and reason. Heidegger, of course, was riddled with anxiety as well, over being and nothingness, and he had an obsession with freedom and authenticity: all characteristic of what was to come. There was no more dramatic precursor to the French existentialists than Nietzsche, who sought to free the world of its nihilism and empower it with the liberation of the will: the Übermensch, or “superman,” would embody a new kind of magnanimity with no regard for the welfare of others or some abstract Aristotelian “flourishing.” Nietzsche apparently couldn’t do it himself and went insane, finally cracking after seeing a horse being mercilessly beaten in a street in Turin. (Here we might pause and recall Durkheim’s observation about happiness and the subjection of the will to a pre-defined role in society… Those who have a life already set up for them tend to kill themselves less often.) The penultimate step to mature existentialism came with Michel Foucault, the forebear of the “rainbow flag” and a staunch opponent of confining the mentally insane. After all, maybe they are just “different,” you know?

Finally, we come to the main event: a Parisian socialite, his lover, and a journalist-turned-philosopher raised on the soccer fields of French Algeria.

The core of the teaching of Jean-Paul Sartre can be summed up in three words: existence precedes essence. In other words, there really is no human nature, only a human condition which must be figured out and made into something of one’s own. An “anti-materialist,” he cites Descartes’ cogito in support of this theory, and he claims that this is the only dignified vision of man, as this doctrine alone is capable of acknowledging his true power and freedom – which are apparently the characteristics of dignity. Man must go beyond himself to create himself, quite in contrast to the Comtean humanist religion, where humanity is good “just because.” For Sartre, man is nothing without making something of himself. (This would later become the basic teaching of Ayn Rand as well.) Freedom is to choose and to conquer the resistance present in one’s situation, and one must exercise this freedom according to his authentic self. But what is the “self” without a human nature? It is unclear.

Sartre’s intermittent lover, Simone de Beauvoir, with whom he would frequently seduce unwitting female students for sexual exploitation, held similar ideas and became the first “feminist.” It is from de Beauvoir that we get the now infamous gender-sex distinction: “One is not born, but rather becomes, a woman.” The woman is defined socially – and in classical A-T anthropology – in relation to man and therefore does not have her own identity. This is an existential problem for the woman, who must go out and create herself. To postmodern ears, however, it would sound insane to contradict the sense of de Beauvoir’s complaint; and yet we have St. Paul teaching that some kind of superiority of men is rooted in nature and of necessity must flow into ecclesial life (1 Cor. 11:3-16, Eph. 5:21-33, Col. 3:18-19). The Christian must not be a feminist of the de Beauvoir variety. Our friends the Cathars had women clergy; they anticipated the existentialists in their justification for this choice. We will return to that in a future post.

Then we have our Algerian friend. Albert Camus’ most famous contribution to Western thought was that the only serious question a person has to ask himself is whether to end his own life. After all, life is absurd, and if one can find no meaning for himself, then it is better that it end on one’s own terms, rather than in something meaningless like a car crash (which, ironically, was exactly how Camus was killed). Despite explicitly denying the existentialist label and preferring to be an “absurdist” instead, Camus is nonetheless the crystallization of the movement. His interpretation of the Greek myth of Sisyphus, claiming that man must accept his existence as an absurdity in order to find peace, or the anguish of the main character of “The Stranger” over the meaninglessness of his life and what has happened to bring about his execution, provides a fitting capstone to the existentialist project because it shows its end: senselessness. When human nature is removed, purpose is removed. And the frantic search for a self-assigned basic purpose can only end badly, even if it doesn’t feel that way to a “successful existentialist.”

Certainly, more can and should be said about the French existentialists. But this brief and rude treatment suffices to bring to light the critical themes of our own day which were present in the movement, namely: a rejection of human nature as such; a perceived need to define one’s own role to make up for such an absence; and an obsession with “gender” equality.

We have already noted in PART I of this series the shocking fact that the existentialist doctrine on human nature as such has been enshrined in U.S. law by the Supreme Court. That should be enough to show there is a deep-seated existentialist current plaguing the West, but when coupled with the wide diffusion of the watered-down scientistic positivism we explored in the last post, disdain for classical Aristotelico-Thomistic anthropology has become its own unspoken rule. It is not unspoken in the way one doesn’t talk about Fight Club, it is unspoken in the way one doesn’t talk about red being a color… it’s just a given.


If there is any admittance of a “human nature” it is a passing nod to the truth that what we call human beings usually have certain kinds of physical characteristics which normally produce certain kinds of effects. The classical meaning of “nature,” however, is alien to this vague and platitudinous physicalism, as there can be no teleology (in-built purpose) for what is merely a random collection of stuff onto which we slap a name. This, I suggest, is the final fruit of Ockham’s Nominalism which we have discussed previously.

Of course, most postmodernists dimly realize their godless worldview poses the “existential problem,” viz., a lack of inherent meaning and purpose in their life, and they seek to solve it through the recommended process of “self-definition.” We are not here critiquing a healthy ambition to “do what one can” or to avoid idleness; rather, the issue is the desperate and necessarily futile attempt to provide altogether one’s own meaning for existing in the first place. There are also many people, who are not quite full-blown postmodernists, who seek to correct this same inner anxiety with DIY spirituality (moralistic therapeutic deism, usually); this is particularly dangerous as it nominally acknowledges something greater than oneself as a grounds for directing one’s life, but it is really the imposition of one’s own ideas onto a divine mouthpiece.

The existentialist paradigm helps make sense of the postmodern millennial’s take on the issues: the life issues, the gender issues, and the sex issues. Since a person’s meaning is basically self-derived, and that meaning is predicated upon desires and the ability to fulfill them, the unborn and the elderly are without their own meaning. Having a certain kind of body which has certain powers does not force one to accept that embodied reality as a given identity and direction either within a social framework or even within a physical framework, provided there is a surgeon available. Much less does this God-given engendered bodily existence, constitutive of unique powers with lasting social consequences and everlasting spiritual consequences, provide an individual with rules for how to engage in the use of the organs which are the seat of that power. You must choose to become something. Alternatively, you may disappear into oblivion – either irrelevance, or death. Before it was the American Dream™, it was French philosophical anthropology.

The current of this thought has bored a hole so deep into the subconscious of postmodern America (and many parts of Europe) that it has become impolite, if not outright illegal, to tell a person that he is a he, she is a she, that “No, I will not serve cake at your wedding,” or anything that might emotionally hurt that person, so long as that self-given identity or meaning does not result in “harmful” behavior. Harmful behavior, remember, is reduced to emotional, physical, or financial pain or loss – for those who can already “will to power” and aren’t entirely reliant on help from other people for existence, that is.

The video above, while admittedly a bit cherry-picked, nonetheless demonstrates the existentialist current of millennial postmodernity with breathtaking frankness. No doubt such an experiment could be replicated across the global West with some success, at least in supposedly “elite” institutions of higher education. Note again the criterion of “harm” as constituting the core of the normative ethics for postmodern millennials – as if a person with a wildly erroneous self-perception is doing no real harm. You can tell that these kids become more and more uncomfortable as they are forced by their own premises and sense of political correctness to the affirmation that what is obviously “real” truth is being denied by this person, but that since “it’s not ‘harming’ anyone,” it must be okay and therefore good to support. The lack of awareness that such a departure from the truth of one’s natural constitution as “man,” “white,” etc., does indeed cause harm to that person – and therefore also to society, at least inasmuch as that person’s self-perception is related to his or her function in society – is probably why it doesn’t “bother” the people interviewed.

There used to be a word for the self-deception which is being coddled as healthy and normal: mental illness. Now it requires university-sponsored trigger warnings and safe spaces, international awareness campaigns, and even protective laws. All of this finally ends in a kind of laissez-faire utilitarian relativism, which we might call the postmodernist ethics: “The more a behavior harms the people or things that I like, the more immoral the behavior is; and the more a behavior does good to the people or things that I like, the better the behavior is.” In this normative ethics, I can never do anything wrong, except inasmuch as I might unthinkingly do something harmful to my own cause. Another person is irrelevant insofar as he doesn’t harm my own mostly arbitrary and narrow values.
This must also be understood as occurring within the materialistic framework – both harm and good are all temporal and experiential. (Unless, that is, a little DIY spirituality comes into play… Then all bets are off.) Without a firm understanding of unchanging human nature, and the belief in its authority and power to provide a normative ethics, we are left to define our own values based on whatever we would like to do or become as individuals or collectively as a society.

“Existence precedes essence.” Human beings are now human doings.

Yet clearly, “Some are more equal than others.” Why are some people or things valued over others? Self-expression, and the fulfillment predicated upon it, are the foci around which postmodern value is measured: money, physical pleasure, convenience, emotional pleasure, diversity, equality, progress. Each goal is vaguer – and more dangerous – than the last. If you are not contributing one of these goods to society, how can you be valuable? Maybe you are a “good person,” but you are no longer useful and are therefore of no account. In other words, we may kill you if we would like to… and one day we might realize that we ought to kill you: because you are not capable of doing the kind of things we value, your own existence offers you “no benefit.” It is now charitable to destroy a life that can’t “create itself.” Beyond the obvious cases of killing the unborn and physically sick, Camus’ dilemma is being answered for the mentally ill and elderly in Europe in “assisted suicides” which are a little too assisted.

It has become popular these days to remark on “the science” behind why transgenderism or same-sex marriage or whatever is “bad.” While taking note of the psychological and physical processes and results of these experiments is not irrelevant to forming a right opinion on their goodness (like studying the average harm done to children by “gay parenting”), there is no need, and in fact no possibility, for “science” to provide the answer to the foundational moral questions whose answers are found in a study of the soul and body’s basic purposes which are widely known to all, as St. Paul reminds the Romans (Rm. 1:18-32). You really don’t need an expert biologist to give kids “the talk.” You do need something other than mere biology to infer that deviating from the natural order is wrong, and the obsession with the minutest details of the “is” to justify the “ought” betrays at least a touch of the intellectual illness diagnosed in Part II of this series, namely, a weak form of positivism called scientism.

Given that existentialism is historically opposed to the materialistic worldview which positivism relies on, how can the postmodern manifesto combine both elements? For example, how can a person support transgender surgery as an effective means of “expressing the real self” while claiming that there is no such thing as a soul because it’s not an object of scientific observation? We might say it is a simple lack of reflection which allows this cognitive dissonance, and this is indeed true. The deeper problem, however, is that ideology is serving passion, rather than the other way around. This is part of what makes millennials so difficult to reason with: they will shift from one part of the manifesto to the other for the sake of whatever person or group or behavior they feel good about, not realizing that each pole is at least a mild affront to the other. What they tend to sense is that their scientism forces one to create his own meaning since there is no predefined role by a true authority (God, revealed religion, a family or government invested with God-given authority), and that the quest to create meaning for oneself is determined only by what is able to be perceived by oneself, the greatest authority. The poles point back toward each other in this way, even though real positivists would reject the idea that a person can “mean something” at all, and real existentialists are not even attached to the doctrine that there is a real material world in the first place. The details of theory are lost in the practice of the unfortunate and unwitting inheritors of these worldviews.

Whether the French existentialists would be on board with the hashtag gender activists of today is not entirely clear. Sartre would perhaps call transgenderism “bad faith,” that is, a fake expression of oneself wherein one “tries too hard” to be something he or she really is not. This is not “authentic” to Sartre. (How there could be such a thing as the “self” independent of one’s sincere desires begins to strike the central nerve of the existentialist project, however; if one can act in bad faith, then there must be something more to one’s identity than his desires which those desires can be in line with… which sounds an awful lot like an essence preceding existence, so to speak.) Camus might call such people to account as failing to accept that life just does not make sense, and that the only way to be happy is to accept this: providing a physical answer to a spiritual problem is vain, but there is no spiritual answer either, so one must simply be content with madness.

Existentialism is likely to remind the attentive reader of Sacred Scripture of Ecclesiastes. Was Qoheleth the first existentialist? The first absurdist? He does claim that the acceptance of life as vain and meaningless in itself is a condition for peace, like Camus. (Truly, Qoheleth is right – there is nothing new under the sun!) But Qoheleth, despite all of his despair, believes that everyone’s life means something to God, and that there are objective measures of morality by which that God will somehow judge us. That his idea of final judgment is fuzzy can seem odd given this, but in his intellectual humility he did not grasp for what he had not already been given. He knew we would die and that God would somehow render justice, but he will not say more.

Postmodernists avoid the topic of death because it would force them out of their watered-down existentialism – protected by a million distractions – into the disquieting bluntness of Camus, which few can stomach: your life really is fundamentally meaningless, and there’s nothing you can do about it, so just get comfortable with that fact, like a happy Sisyphus. The suicidal dilemma is also “too harsh” for sensitive millennial minds – let that question be left to poor Hamlet and Hannah Baker.

Next time, we will directly investigate the relationship between the trends of our current culture and the doctrine and praxis of the Cathars, finally making good on the title of this series.


Post by: Eamonn Clark

Main image: Simone De Beauvoir, Jean-Paul Sartre, and Che Guevara; Cuba, 1960

CRISPR: The Eugenitopia is Here

Have you heard of CRISPR? No, it’s not a breakfast cereal… It’s a fast, accurate, and cheap means of changing DNA. The name stands for Clustered Regularly Interspaced Short Palindromic Repeats.

If you haven’t heard of it, then you need to: this is a HUGE deal.

There is a protein in bacteria called Cas9 which helps defend against viral attacks. (Yes, bacteria get viruses, too!) When a virus attacks, the bacterium can store a snippet of the viral DNA in a special region of its own genome (the CRISPR region). If the virus attacks a second time, RNA copied from that stored snippet loads into Cas9, and the loaded Cas9 scans the DNA it encounters; wherever it finds a sequence matching its guide, it cuts it out – a little like a DNA antibody. Personified, the cell says, “I’ve seen this invader before; here is its signature; find anything that matches and cut it out.”

The Cas9 protein can be taken out of bacteria, loaded with a guide sequence matching DNA from any kind of living thing, and injected into any other living thing, and it will make changes in that organism based on the information it was “programmed” with.

Got that? You give Cas9 a DNA sequence you want to modify, inject it into an organism, and it will make the changes. It is fast, it is accurate, and it is CHEAP.

Okay, this is a bit of an oversimplification. There’s more to it, and no you can’t just walk into the right lab and get a shot that will make you grow wings… yet.
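As a loose analogy – purely illustrative, and emphatically not biology – the “programmed search and cut” idea can be sketched in a few lines of Python. The function name, the string model of a genome, and the example sequences below are all invented for the sake of the sketch:

```python
def cas9_edit(genome: str, guide: str, replacement: str) -> str:
    """Toy model of Cas9 editing: scan the genome for the guide
    sequence and swap it for the replacement. Real Cas9 cuts DNA
    and relies on the cell's repair machinery; this only mimics
    the 'programmed search' idea with simple string operations."""
    index = genome.find(guide)
    if index == -1:
        return genome  # no match found: nothing to cut
    return genome[:index] + replacement + genome[index + len(guide):]

# A viral snippet ("GATTACA") stored in the CRISPR region is used
# to find and excise the matching sequence from an infected genome.
infected = "ATGGATTACACCGT"
cleaned = cas9_edit(infected, "GATTACA", "")
print(cleaned)  # ATGCCGT
```

The point of the sketch is only that Cas9 is generic: the same machinery works on whatever guide sequence you load it with, which is exactly why the technique is so cheap and so broadly applicable.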

There are obvious benefits to this kind of procedure. CRISPR might provide us with a cure for cancer, AIDS, any number of genetic diseases, and could help us generally keep healthy (like by increasing our metabolism or improving our eyesight). Once it is really nailed down, it is very likely that a couple of $12 shots at the minute clinic will be able to get rid of your asthma, or Alzheimer’s, or cerebral palsy… forever.

But… With great power comes great responsibility.

Thanks, Uncle Ben. Wait a minute – was Spider-Man CRISPR’d?

Unfortunately, the 21st century West is not very responsible. Where might CRISPR go wrong?

Well, what color eyes would you like your child to have? Should we bump up his IQ while we’re at it? Hey, you’re an athlete, maybe we can give him long legs and enhanced muscular growth as well, so he will be sure to be athletic too. Just an extra $300, please. Oh, you’d like him to have Shiva arms and a third eye, because you’re into that kind of thing right now? You’ll have to go down the hall for that.

Anti-aging cream? Psh. Take the right injection, and your body will actually start DE-AGING. As long as you don’t get hit by a truck or something, you’re good to go for another hundred years… a thousand years… indefinitely, perhaps. Or at least we will try.

Let’s say you’re running a poor nation and, well, need things to go more “smoothly.” So you put something in the water to make all your citizens have a defect that only you can provide the fix for. And you will only provide it to a person if his taxes are paid on time, he doesn’t have too many children, and he votes for you again. (This could be done now, but not with nearly the same ease and dramatic effects.) Meanwhile, you are pumping your soldiers and police full of testosterone 2.0…

It’s only cool if he’s on your side. You might look like an alien once the Great Leader poisons you.

And once such genetically modified people reproduce (whether they have been helped to be healthy or have been “upgraded” or “downgraded” somehow), those screwed up genes get passed along. At that point, there’s no stopping it. And we have no idea what that will actually mean.

Here’s a video helpful for understanding more:

This technology is developing very quickly. The Church needs to get ready with a response, ASAP. Where is the line for modification, and why? If life is a good thing and death is to be avoided, is anti-aging wrong? What is to be done about people who have already changed themselves by addition – how far does an obligation to have such changes undone extend? Is this technology really worth the risk of irreversible changes to the gene pool whose dangers we do not even know? Could there be an obligation to use this technology to prevent certain kinds of diseases? These are the kinds of questions we have to begin asking.

Get ready. It’s coming. And once it comes, it is here to stay.

Post by: Eamonn Clark

Main image: Cas9 in the Apo form

Main image source (modified): By Ben.lafrance – own rendition of the crystal structure solved by M. Jinek et al., published in Science 2014, CC BY-SA 4.0,

The New Albigensianism, PART II: Comte and the Combox

For Part I, click here.

Just as the woman with the hemorrhage reached out to touch the hem of Jesus’ tunic, so do post-modern secular Westerners reach out to touch the hem of scientists’ lab coats. Despite the plain fact that any given scientist or doctor or other “expert” will tend to be specialized in only some tiny sliver of his or her field, hopeless intellectual wanderers will gather at the feet of these people to learn all the mysteries of the universe… which is dumb. How did this happen?

Let’s take a step back.

The manifesto of the post-modern Westerner par excellence is this: “Real knowledge is only of irreducible information about the material world, and I can manipulate that same material world however I want in order to express myself and fulfill my desires.”

Herein we see two strands of thought colliding, one about the mind and one about the will: positivism and existentialism. Historically, they are not friends. How they have become fused together in post-modernity is a strange tale.

Today we will break open the first clause – real knowledge is only of irreducible information about the material world, the positivist element.

From the outset, we must make a distinction between “positivism,” which is an epistemic and social theory, and “logical positivism,” which is something more metaphysically aimed. My goal here is to show the roots of the broader idea of positivism, how it found its academic zenith in logical positivism, then how the aftermath of its fall has affected Western philosophy and science at large as well as in the minds of millennials.

A brief sketch of the positivist genealogy will suffice. We recall Descartes to point out his obsession with certitude, just as we note the empiricist thrust of Bacon, Locke, and Hume. We must mention Kant, both as the originator of the analytic-synthetic distinction (which will become enormously important) and as an influence on Hegel, who is notable for his approach to philosophy as something integral with history. Condorcet and Diderot should be pointed out as influential, being the greatest embodiments of the French Enlightenment, wherein reason and revealed religion are opposing forces. Marx, though he would reject positivism as a social ideology, helped inspire it along the same lines as Hegel had. The penultimate step was Henri de Saint-Simon, whose utopian socialism was all the rage in the France that emerged from the Revolution.

Of course, these men were not positivists. It is Henri de Saint-Simon’s pupil, Auguste Comte, who brings us this unwanted gift of an empiricism so strong it entirely and unabashedly rejects any and all metaphysical knowledge outright. This led Comte to build a reducible hierarchy of the sciences based on their certainty or “positivity,” and he claimed (rightly) that the trend of empirical studies was heading toward a “social science.” This conception of a reducible scientific hierarchy – one where, for instance, biology can be put in terms of chemistry, and chemistry in terms of physics, etc. – was a rather new way of thinking… Previously, it had been more or less taken for granted that each science has its own irreducible terms and methods, even admitting some kind of hierarchy (such as with the classical progression of the liberal arts).

Not only was Comte the first real philosopher of science, he was also the first sociologist. According to Comte, humanity was passing from its first two stages, the theological and the metaphysical, into the third and final “positivist stage” where only empirical data would ground truth-claims about the world. Having evolved to a higher clarity about what the world is, and having built up enough of the more basic physical sciences to explore how that world works, sociology could finally occur. Mathematical evaluation of social behavior, rather than qualitative analysis, would serve as the proper method of the “queen of the sciences” in this new age.

Comte outright jettisoned religion qua supernatural and revelatory, but his intensely Catholic upbringing had driven into him such a habit of ritual that he could not altogether shake the need for some kind of piety. What was a French Revolution atheist to do? Well, start a “religion of humanity,” of course. (The “positivist religion” never became a major force, especially since Freemasonry already filled the “secular religion gap,” but it did catch on in some areas. Take a closer look at the Brazilian flag and its meaning, for example…) We should also note, for the record, that Comte was only intermittently sane.

The epistemic side of positivism almost ended up just as much of a flop as the pseudo-religion side of it. Unfortunately for the West, Littré and later Durkheim became interested, and they, being altogether sane, effectively diffused Comte’s ideas (and their own additions) through the West by the start of the 20th century. Eventually, a group of like-minded academics began gathering habitually at a swanky café in Vienna to discuss how filthy and naïve metaphysics was compared to the glories of the pure use of the senses and simple mathematical reason – the Vienna Circle was born.

Together with some Berliners, these characters formulated what came to be known as logical positivism. When the shadow of Nazism was cast over Germany, some of these men journeyed westward to England and America, where their ideas spread.

The champions of logical positivism were Hans Hahn, Otto Neurath, Moritz Schlick, Rudolf Carnap, and A.J. Ayer; Bertrand Russell, though never a member of the Circle, was a major influence on it. While Russell is no doubt familiar to some readers (think “tea pot”), the others fly lower under the radar. It is Ayer’s formulation of the logical positivist doctrine, however, that we will use for our analysis.

“We say that a statement is factually significant to any given person, if, and only if, he knows how to verify the proposition which it purports to express – that is, if he knows what observations would lead him, under certain conditions, to accept the proposition as being true, or reject it as being false.” (Language, Truth, and Logic, 35)

Got that? What this means, in the context of the whole book, is that in addition to statements which are “analytic” (“all bachelors are unmarried”) being true necessarily, only statements whose truth we can verify with our five senses can be meaningful – that is, able to be true at all. These are “synthetic” statements. If I say that Pluto is made of bacon grease, I am making a meaningful statement, even though I cannot actually verify it; it suffices that it is verifiable in principle. If I say that the intellect is a power of the soul, this is not meaningful, since it cannot be verified with the senses. For the details, see Ayer’s book, which is rather short.

Needless to say, it is rare that a school of thought truly dies in academia. A thorough search of university philosophy departments in the Western world would yield a few die-hard fans of Plotinus, Al-Ghazali, Maimonides, and maybe even Heraclitus. Perhaps the best – or only – real example of ideological death is logical positivism. W.V. Quine’s landmark paper “Two Dogmas of Empiricism” was such a blow to the doctrine that Ayer eventually admitted he had been in massive error and repudiated his own work.

What was so blindingly erroneous about logical positivism?

First, the analytic-synthetic distinction, as formulated by the logical positivists, is groundless. Analytic statements supposedly need no real referents in order to be true; they are simply about the meanings of words. For some kinds of statements employing basic affirmation and negation, this might work, as it is just a dressing up of the principle of non-contradiction. Fine. But once one starts substituting synonyms into parts of these statements, the distinction begins to disappear… The relationship between the synonym’s object and the original word’s object cannot be explained without reference to real things (synthetic!), or without an ultimately circular appeal to the analyticity of the new statement – a claim that the synonym is universally interchangeable, secured by modal qualifiers like “necessarily,” which point to an essential characteristic that must either be made up or actually encountered in reality and appropriated by a synthesis. In other words, the statement is analytic “just because.” (Thus the title of Quine’s paper: Two Dogmas of Empiricism. Read more here.)

Beyond that, logical positivism is a self-refuting theory on its face… If meaningful statements can only be about physically verifiable things, then the verification principle itself is meaningless: it is not analytic (or, if it is, it is arbitrary, and we are back to the first problem), and it cannot be verified with the senses, so it is not synthetic either… How does one verify “meaningfulness” with the senses? Logical positivism is a metaphysical theory that metaphysics is meaningless – a claim which can only be asserted, not discovered, and which, by its own standard, declares itself meaningless.

But the cat was out of the bag: “Metaphysics has completely died at last.” Logical positivism had already made its way from the salons of Austria to the parlors of America and the lecture halls of Great Britain. Fuel was poured on a fire that Bertrand Russell and G. E. Moore had started in England when they rejected the then-dominant British Idealism by creating an “analytic” philosophy free of all those Hegelian vanities that couldn’t be touched with a stick or put in a beaker. Russell’s star pupil, Ludwig Wittgenstein, would also become a seminal force in strengthening the analytic ethos, having already inspired much of the discussion in the Vienna Circle. Though Quine did indeed destroy the metaphysical doctrine that metaphysics is meaningless, the force of positivism continued within this “analytic” framework – and it is with us to this day en masse in university philosophy departments, which has caused several generations of students to miss out on a solid education in classical metaphysics and philosophical anthropology.

In sociology there arose the “antipositivism” of Max Weber, which insisted on the need for value-based sociology – after all, how can a society really be understood apart from its own values, and how can a society be demarcated at all without reference to those values, etc.? A liquid does not assign a value to turning into a gas, which it then acts upon, but a group does assign a value to capitalism, or marriage, or birth status which it then acts upon.

In the broader realm of the philosophy of science, Karl Popper and Thomas Kuhn’s postpositivism came to the fore. On this view, science cannot be adequately explained without regard for some kind of value; indeed, the possibility (or actuality) of a theory’s falsification or failure is the characteristic feature of the sciences – in contrast to the positivists’ optimism that we can “just do science,” and that this will be useful enough.

In “science” itself, an air of independence took hold. Scientists do “science,” other people do other things, and that’s that; never mind that we have no idea how to define “science” as we understand it today, and never mind that values are always brought to bear in scientific evaluation, and never mind what might actually be done with what potentially dangerous knowledge is gained or tool developed. A far cry from the polymaths, such as St. Albert the Great or Aristotle, who never would have considered such independence.

Then there are the “pop scientists” who try to do philosophy. A few examples of many will have to suffice to show the three traits shared by the pop scientists who serve as go-to sources on religion and philosophy for countless curious millennials and Gen-Xers alike.

The first is an epistemic myopia, which derives immediately from positivism: if you can’t poke it or put it in a beaker, it’s not real. (Yes, it is a little more complicated than that, but you’ve read the section above describing positivism, right? Empirical verification is the only criterion and process for knowledge… Etc.) This is often manifested by a lack of awareness that “continental philosophy” (as opposed to analytic philosophy) often works in totally immaterial terms, like act, or mind, or cause, or God. This immediately creates equivocation – a pop scientist says “act” and thinks “doing something,” for example.

The second is an ignorance of basic philosophical principles and methods, which follows from the first characteristic. If you don’t know how to boil water, don’t go on “Hell’s Kitchen” – everyone will laugh at you and wonder what you are doing there in the first place. We might do well to have a philosophical version of Gordon Ramsay roaming about.

The third is the arrogance to pontificate on philosophy and theology nonetheless, and this of course follows from the second characteristic. They don’t know what they don’t know, but they got a book deal, so they will act like they are experts.

Everyone knows Dr. Stephen Hawking. (They made a movie!) But did you know that the average 6-year-old could debunk the central claim of his most recent book? It is now an infamous passage:

“Because there is a law such as gravity, the universe can and will create itself from nothing.” (From The Grand Design)

I can hear the 1st graders calling out now: “But gravity’s not nothing!” And they would be right. The myopia of Dr. Hawking (and Dr. Mlodinow, his co-author) is evident in the inability to grasp that, as Gerald Schroeder pointed out, an immaterial law outside of time that can create the universe sounds a lot like, well, God. The ignorance of basic philosophical principles – in this case, the most basic – is clear from the failure to see that “gravity” cannot be both SOMETHING AND NOTHING. And the arrogance to go on pontificating anyway is self-evident from the existence of the book itself, and of the TV series which aired shortly afterward, wherein we find similarly wanting philosophical reflection.

If you really want to do a heavy penance, watch this “discussion” between Hawking, Mlodinow, Deepak Chopra, and poor Fr. Spitzer – I had the displeasure of watching it live several years ago:

Then there are folks like Dr. Michio Kaku. He regularly shows up on those Discovery Channel specials on string theory, quantum mechanics, future technology, yadda yadda. All well and good. But here’s an… interesting quotation for our consideration:

“Aquinas began the cosmological proof by postulating that God was the First Mover and First Maker. He artfully dodged the question of ‘who made God’ by simply asserting that the question made no sense. God had no maker because he was the First. Period. The cosmological proof states that everything that moves must have had something push it, which in turn must have had something push it, and so on. But what started the first push? . . . The flaw in the cosmological proof, for example, is that the conservation of mass and energy is sufficient to explain motion without appealing to a First Mover. For example, gas molecules may bounce against the walls of a container without requiring anyone or anything to get them moving. In principle, these molecules can move forever, requiring no beginning or end. Thus there is no necessity for a First or a Last Mover as long as mass and energy are conserved.” (Hyperspace, 193-195)

The misunderstandings here are as comical as they are numerous: the conflation, explicit in the full text, of the first three Ways into “the cosmological proof,” which obscures the issue; the belief that “motion” necessarily refers to something physical; the assumption that only recently did we discover that matter and energy do not simply appear and disappear; and then the most obvious blunder – Thomas does NOT begin any of the Five Ways by asserting anything like “God is the First Mover, therefore…” There is no such ungrounded assertion which “dodges the question,” as Kaku puts it. One must wonder if he even bothered to read the original text – which is readily available. On top of this disastrous critique, Kaku has even weaker arguments (unbelievably) against both the “moral proof” (a characterization of the Fourth Way I had never encountered before Kaku’s book, which troubles me from the start) and the teleological proof, but I won’t bore you. (Basically: “Because change and evolution.” Read it for yourself.)

Once again, we see three qualities: epistemic myopia (as evidenced, for example, by the error about “motion”), ignorance of the most basic philosophical principles (albeit these are a little more complicated than the one Hawking whiffed on), and the arrogance to pontificate about God and the act of creation nonetheless.

Next you have a man like Richard Dawkins, one of the nastiest examples of publicly evangelical atheism the world has to offer at present. Here’s one particularly embarrassing quotation from his seminal anti-theistic work, The God Delusion:

“However statistically improbable the entity you seek to explain by invoking a designer, the designer himself has got to be at least as improbable.” (p. 138)


Can you see the three characteristics? Material beings only (or at least “things” with “parts”); no idea what metaphysical simplicity is or how it relates to God in Western philosophy; and yet here we have one of many books pontificating on the theme anyway.

It is not that these folks don’t believe in classical metaphysics – it’s that they don’t understand it in the least. They play a game of solitaire and claim to be winning a game of poker.

We won’t even get into discussing Bill Nye the Eugenics Guy… for now.

Okay, yes, quote-mining is easy. But this is the cream of the crop from a very large and fertile field. I am not sure I recall ever reading an important and sensible argument about religion or metaphysics from a world-renowned scientist who lived in the past 50 or so years. Someone prove me wrong in the comments.

All this leads us to the average “scientism” which one finds in the comboxes of YouTube videos about religion, threads on various websites, and debates on social media. Yes, there are plenty of religious people in those arenas, but the skeptics who make wild claims like “science disproves religion” or “evolution means God does not exist,” or who simply dismiss the idea of revealed religion outright with some kind of mockery, ought to be seen as the children of positivism. It is the most probable explanation – the sources of their myopia, ignorance, and arrogance can usually be traced back through intermediate steps to a talking head like Dawkins, who ultimately owes his own irrational ramblings to Auguste Comte.

Why is post-modern positivism so naïve? At the combox level, it is because these people, as all others, have an instinctive drive to trust in someone beyond themselves. For many it is due to circumstance and perhaps a certain kind of emotional insecurity and intellectual laziness that they latch on to the confident scientistic loudmouths to formulate their worldview – and it becomes a pseudo-religious dogmatic cult of its own, a little like Comte’s “religion of humanity.” At the pop-science level, it is just plain laziness and/or intellectual dishonesty combined with arrogance, as we have investigated. At the lecture hall level – and I mainly speak of the general closed-mindedness towards classical metaphysics found in analytic circles – it is a deeper kind of blindness which is the result of the academic culture created by the aforementioned ideological lineage. Each level has its own share of responsibility which it is shirking.

The truth is that matter is known by something immaterial – a mind or person – and this reveals to us a certain kind of hierarchy and order, seeing as matter itself does not know us. Man is indeed over all matter and ought to control it and master it, and all without the consent of matter; but this does not mean that there can’t be knowledge of things nobler and/or simpler than man, like substance or causation or God. Not looking at matter as the product of non-matter, and as being ordered to the immaterial in a certain way, is part and parcel of the New Albigensianism.

So there we have the first part of the manifesto explained. Irreducible facts (the ones devoid of metaphysics and value judgments) about the material world constitute the only real knowledge. The less reducible, the less it is really known. Even though the West is full of supposed “relativists,” it would be difficult to find a person who would truly let go of the objectivity of “science.” To say, “Christianity is your truth but not mine” is one thing; it is quite another to say something like, “Geocentrism is your truth but not mine.”

There is yet more to be explored… Next time, we will dive into the second half of the “postmodernist manifesto” with a look at its existentialist roots and how misconceptions about the relationship of the self to one’s bodily life have led to transgender bathroom bills.

Post by: Eamonn Clark

Main image: The Positivist Temple in Porto Alegre, Brazil

Source: Tetraktys, CC BY-SA 3.0,

Motherhood and Human Maturity

(Part I in a series on motherhood and fatherhood)

So much of who we are comes from our mothers. We are who we are in relation to others – and the first relationship we had was being nestled nine months in our mother’s womb.

“Male and female He created them” – it is fitting that with these words our first parents are introduced, since our first experience of gender, our first experience of male and female, comes – not from our analysis of gender roles in society – but really and concretely, from our mother and our father.

“God created man in his own image, in the image of God He created him; male and female he created them.” Because we are male, because we are female, we are in the image of God. We are not made in the image of God as mere androgynous souls with consciousness; rather, we are embodied in our masculinity and our femininity. Our lives are circumscribed between motherhood and fatherhood – none of us comes into this world without a natural father, none of us comes into this world without a natural mother.

In a time hidden from our memories, that initial relationship with our mothers forms us at the core of who we are. No person has ever grown to maturity without first passing through their mother’s body. Try as it might, technology has yet to eclipse biology.

(If you want to be overwhelmed with all the particulars of gestational biology, check out this video.)


From the first moment of our existence in the womb of our mother, we are surrounded by her, enveloped in her body. Her body supplies for every one of our needs. As our cells divide and develop, our blood takes nourishment and oxygen from her blood; there is an exchange of life. By the time a mother is aware she is with child, her maternal body has known this already for weeks. Before she feels the budding movements of the child’s limbs, she is already being moved by the child – morning sickness, new diet, the maternal nesting instinct to tackle stale projects. But more than that, her whole life receives a new trajectory; she holds a person within her – two souls in one body.

I recall an experience of a friend of mine when his wife was pregnant with their first child. He came back from work one day to find his pregnant wife lying on her bed with her hands over her womb, filled with wonder. She explained to her husband that she felt her baby move for the first time and was overwhelmed with the realization of her motherhood, explaining to her husband, “I am not alone in my own body.”

A mother after having her first child will often comment that, had she known how much of herself would have been taken in order to love her child, she would not have thought herself capable of giving so much of herself. Motherhood is an experience that requires all of her. It is a self-emptying love that cares fiercely and intimately for her child.

Maternity, femininity, female-ness – this is our first experience of gender; it is our first experience of life. We are born into – conceived into – this relationship with our mother. It is most natural to us. It is the strongest and longest lasting of human bonds. It is a natural communion. For the rest of their lives, the mother and child will retain something of that intimacy where they were truly two souls in one body.

Beginning from this indescribable intimacy, the child goes through a development. Birth requires a leaving behind of the original closeness of the mother. The dependence of the child on the mother continues – nourishment, locomotion, comfort, bathroom issues – but slowly begins to wane. When the child learns to crawl, a mother is pained to see his reliance on her lessened. When the child takes his first steps, every step is a step away from the mother. Motherhood is tinged with sadness. Watching her child grow apart from her requires all of that self-emptying love.

In my own mother, I’ve seen this self-emptying love every time a sibling leaves my parents’ house for college – fourteen times (I have a big family) one of her children has left home, and fourteen times she has cried.

A mother’s vocation begins in intimacy, and ends in separation.

A mother’s love makes room for the child to grow. All human life takes as its origin the intimacy of motherhood. Fatherhood completes the picture.


We see this reality of maternal separation lived out most radically in the life of our Blessed Mother. Jesus shared a hidden intimacy with Mary for nine months. At his birth, the shepherds find Him, not wrapped in the arms of His immaculate mother, but wrapped in swaddling clothes and laid in a manger – apart from her. When He is twelve, after being lost for three days in the Temple, He tells her “Did you not know that I must be in my Father’s house?” (Luke 2:49) At Cana, He begins His public ministry with what looks like a rebuke, “Woman, what have you to do with me?” Once while Jesus was close by, Mary tried to get through the crowd to see her Son, and He says, “Who are my mother and my brethren? Here are my mother and my brethren! Whoever does the will of God is my brother, and sister, and mother.” (Mark 3:33-35) Even at the foot of the cross, when she is with Him again, He gives her away, saying to her “Woman, behold, your son” and to St. John, “Behold, your mother.” (John 19:26-27) And then He undergoes the ultimate separation, giving up His spirit and dying on the Cross.

Here, we let Blessed John Henry Newman take over, with his reflection on the Thirteenth Station of the Cross:

He is Thy property now, O Virgin Mother, once again, for He and the world have met and parted. He went out from Thee to do His Father’s work – and He has done and suffered it. Satan and bad men have now no longer any claim upon Him – too long has He been in their arms. Satan took Him up aloft to the high mountain; evil men lifted Him up upon the Cross. He has not been in Thy arms, O Mother of God, since He was a child – but now thou hast a claim upon Him, when the world has done its worst. For thou art the all-favoured, all-blessed, all-gracious Mother of the Highest. We rejoice in this great mystery. He has been hidden in thy womb, He has lain in thy bosom, He has been suckled at thy breasts, He has been carried in thy arms – and now that He is dead, He is placed upon thy lap.

Virgin Mother of God, pray for us.


Main image: “Virgin of the Angels,” William Adolphe Bouguereau, 1881
Post by: Deacon Peter Gruber

Why Thomas the Apostle was so Skeptical

The Apostle St. Thomas Didymus (“The Twin”) was conveniently absent for the first Resurrection appearance to the rest of the Eleven. (Jn. 20:24-29) Then he famously insisted on seeing and touching the wounds of Jesus, which he got to do eight days later. This reading comes to us every year at the close of the Easter Octave to commemorate the event. Let’s take a look.

Aside from “telephone” conspiracy theories (which ultimately allow for no sensible understanding either of what happened in 1st-century Palestine or of the text of the Gospels), there are usually three alternate explanations for the supposed Resurrection.

  1. Mass delusion.
  2. A spiritual resurrection proclaimed as if it were a physical one.
  3. The body was stolen and the disciples lied about it (the story that “circulated among the Jews”).

Each of these has plenty of issues, of course. Leaving aside #3 (which has the largest problems of motivation among the three, and which ultimately just destroys the trustworthiness of the entire text), #1 and #2 do not explain the skepticism of Thomas. Why was he not part of the delusion or vision of the spiritually risen Christ from the beginning? How was he incorporated into it? What sense would recounting Thomas’ separate physical encounter make under such scenarios? There is no good answer.

There is a fourth alternative. It is the scenario, in fact, which Thomas had in mind when he questioned the claims of his friends.

He clearly doubted that they had seen the Risen Christ… But he did not doubt that they had seen someone. It just does not make sense that he would think all his friends would lie.

The words of the Gospels are careful. If you see some little detail that is added, you can be sure it is an important detail… The author went out of his way to add it. Paper was expensive in the 1st century – no Kinko’s, remember – and drafting the Gospels would have involved the most serious attention to what was going into the text. And of course, this is all under the inspiration of the Holy Spirit. That being said, in this passage we do not find the Apostle called plain old “Thomas.” We also don’t find “Thomas the Scientist,” or “Thomas the Physician,” or “Thomas the Skeptic.” We find “Thomas called Didymus,” or “Thomas the Twin.”

Why add that detail?

Thomas thought Jesus had a twin who until that time had been in hiding. He figured the supposed Resurrection was part of a massive scheme of some sort, like the tricks he and his own twin brother would undoubtedly have played as children, but with a far larger agenda. It may even be the case that Thomas’ brother had died, and that Thomas was once mistaken for him, no doubt producing a similar effect of shock and confusion and joy in the mistaken person or persons.

This also makes sense of Thomas’ startling insistence on seeing and touching the wounds, as he knew that this would be the best way to show that it was actually the same person who had died on the Cross. (There was a recent movie based on this theme. Spoiler alert.) No mere man could walk around with those wounds! The others had been shown the wounds (Jn. 20:20), but it does not seem they had “double-checked” as Thomas wanted to do, by completely verifying that they were the same kind of wounds one would get from a crucifixion rather than some serious paper cuts.

This incident with Thomas the Apostle, then, also preemptively answers the Muslim objection to the Resurrection, which is simply the “twin claim” in reverse: Jesus had a look-alike who was killed. (The Muslims, however, wave their hands over the inconvenient parts of the New Testament, so it matters little. If every clear bit of evidence from the text is a corruption, then there can be no efficacious textual demonstrations.)

All this can also help shed light on the slight differences in Christ’s appearance before and after the Resurrection. Mary Magdalene and the men walking to Emmaus didn’t recognize Him at first. While identical twins can usually be told apart on close inspection, they are easily mistaken for each other. Jesus must have looked quite different indeed – unlike a twin, yet close enough to His old appearance that one would be able to see that it is really Him. This is certainly not the work of a twin – no one would dare try to pull off such a stunt unless he did indeed have an identical twin.

Perhaps seeing the Risen Christ was like running into a grown man you had been friends with in childhood… different, but the same. With the Risen Christ, the flesh-cloak of Adam’s sin has been shed so that the man Jesus, the New Adam, could be as glorious as the Divine Person He embodies. (See Gen. 3:21, Rom. 5:12-18) Yet He keeps the wounds, as if to be in solidarity with us and to remind us of His suffering, in addition to proving He has risen.

The Scriptures are wiser to objections than we are ourselves. That is not only because God understands us better than we do ourselves, but also because the Resurrection actually happened… That removes the need for creative thinking and gives the writer of the text the freedom simply to say what really happened.

St. Thomas Didymus, pray for us!

Post by: Eamonn Clark

Main image: The Incredulity of St. Thomas, Caravaggio, c. 1601-1602