If you haven’t heard by now, the Jesuit-run magazine America ran an article in praise of Communism (along with a rather weak defense of its publication). There have been plenty of decent reactions. As a fledgling scholar of socialist and Communist thought, I offer here a bit of what I’ve learned during the past few months of my deep dive into that world, in the hope that it helps the discussion… I welcome any corrections or criticisms in the comments.
If Marx were alive today, he would recognize no country on Earth as having achieved Communism. He would likewise recognize no major political party as Communist upon close inspection, including those which describe themselves as such. Any country with a “state,” with private property, with wage labor, or even simply with currency, would not qualify as Communist. And any party which is not explicitly – and sincerely – working toward this goal would not be truly Communist in the classical sense. Opportunistic power-grabs which use the language of Communism and impose an indefinite program of state-capitalism through authoritarian collectivization are, whatever else they are, not what Marx had in mind. It could and should be argued that any kind of large-scale collectivization and planning is doomed (see Hayek); that Marx left some troubling ambiguities about the process of socialization and its final product (especially regarding the usage of words like “socialization” and “state”), which is in part what opened the door to such misunderstandings or willful manipulations on the part of his early disciples; and that the foundations of Marx’s economic diagnoses were flawed (they were)… But what he cannot be fairly charged with is designing what is popularly thought of as “Communism.” Instead, what he must be charged with is proposing something which is not reachable, or is not worth trying to reach, either because of what must happen to get there or because of the goal’s intrinsic undesirability.
No serious economist today is a classical Marxist, if for no other reason than that several of Marx’s prophecies did not come to pass. The middle class did not disappear. The age of the factory came and went without the revolution, and the revolution no longer seems in sight. The increasing aggregation of capital has not tended to yield perpetual decreases in profit margins. And this is to leave aside all theoretical questions about Marx’s version of the “labor theory of value,” which is integral both to his moral critique of capitalism as exploitative in itself and to the scientific or deterministic predictions which rely on it. All of this calls into question the legitimacy of the project, at least as expressed by its chief proponent.
That project’s historical foundations are deeply at odds with Christianity in their basic philosophical and anthropological commitments. The dialectical materialism of the classical Communists sets up human nature in place of Hegel’s evolving God (a theory first enunciated by Feuerbach)… Through various stages of mass economic development and conflict, humanity evolves to a perfect state. This process is altogether unavoidable (“scientific,” not “utopian”), and it ends with Communism. There are countless problems with this from a Christian point of view; and ironically, the atheistic determinism, violent tactics, and Pelagian ethos rob Communist life of its possibility. That possibility is best actualized in religious life, where the wall primarily prevents one from getting in, not getting out, and where the love of a transcendent God Who heals an otherwise stable and broken human nature animates all work. This should give us real pause.
If there had been a successful global Communist revolution near the end of the 19th century, as so many predicted, we can safely assume that the age of innovation would have been over. The “glut” of capitalist production was seen as overwhelming at the time… We had everything we needed to relax and enjoy life, at last! And since innovation would no longer be rewarded by the accrual of wealth, it stands to reason that it would have occurred only for the sake of making work easier (not necessarily more productive, but easier), out of altruism, or by accident. Consider what things we take for granted today that were not yet invented or mass-produced in the 1890s. Had the revolution happened and innovation effectively ceased, we would have been essentially stuck in that age. What great innovations that otherwise await in the future would a successful revolution destroy today?
Socialization is a matter of degree. I take this from an analogous insight offered indirectly by Ludwig von Mises (in his masterwork on socialism, online for free here, along with tons of other Austrian-school economics books and articles) regarding democracy: in some sense, every state is democratic, insofar as a sufficient number of people are sufficiently satisfied with the prevailing state of affairs that it continues. Put another way, enough people choose, with enough commitment, to go along with the established order of society that a new order is not established. Incremental changes might happen even outside a “formal” democratic structure or means (viz., voting on a ballot). Likewise, socialization exists insofar as property is under the control of the community. All kinds of ways exist for controlling “private property” and “private production” through the government or some other organ of the community. The question, then, is not whether to socialize property or the means of production; it is whether to increase or decrease the strength, directness, or scope of the socialization which already exists (and which informs the society’s understanding of ownership and the private sector). This is an important hermeneutic for any critique of “socialism”; it is a complicated issue. Simple dismissals of “socialism” are therefore rightly met with equally simple counter-dismissals by those who know the history and the contemporary literature. Communism, however, as the highest form of socialization, is subject to special critiques: challenges to socialism’s status as desirable, achievable, and sustainable are “turned up to eleven” when discussing socialism’s perfected form.
The scope of the authentic Communist movement today is very limited. The SPD’s Godesberg Program could probably serve as a singular indication of the global shift away from revolutionary Communism toward a milder and less-defined “socialism.” Marx and Engels were quite involved in the affairs of the SPD early on, particularly in opposing the influence of Lassalle’s revisionism, as we see in the Critique of the Gotha Program alluded to in the America article. That revisionism is radically exceeded in Godesberg, the spirit of which informs the global socialist movement of today much more than any entirely unrealistic call for pure Communism. Under this hyper-revisionism, most “serious” contemporary socialists work for a humane administration of governmental tools in a mixed economy (partly socialist, partly capitalist), and many of them further envision a high degree of democratic participation in the planning of this administration – but NOT full public or collectivized ownership of the basic means of production, the classical definition of socialism. One will find this theme explored at length in the final work of Michael Harrington (also alluded to in the article; he was apparently a “Catholic Worker” and yet, though we are not told this there, also a committed atheist), and in any number of recent books and articles on so-called “democratic socialism.” (Connected but somewhat distinct ideas are “market socialism” and “participatory economics.”) These positions are sometimes subtler than one might think, even if they all ultimately fall prey, at least in part, to the same pitfalls as more classical Marxist theories (which, by and large, they do in my estimation). Whatever the case, while the old encyclical condemnations remain relevant, those written before 1960 are not necessarily the slam-dunk cases against contemporary socialism that many people take them to be, as they address a more classical version under old global conditions.
So there you have it. In sum, classical Communism is Heaven without God, earned through a large-scale, unavoidable, Hegelian-style revolution born of class conflict; and history teaches us that the revolution, despite Engels’ optimism that it might not require force, is always incredibly violent, whether directly through the killing fields and gulags, or indirectly through creating famine and destitution. Is this what the folks at America think is worthy of discussing seriously with openness? I hope not. If it is true that Communism has a “complicated relationship” with Catholicism – and it is, simply because both are complicated things – perhaps another journal is more fit to handle the discussion.
The other talks from that conference will be coming in the next few days. (They are great.) We also have uploaded talks from the inaugural conference of the year, all available if you click HERE.
We also recently hosted Justice Alito, though no recordings were allowed. (I did help the Secret Service get a door open, though, so there’s that!) Much more to come after the New Year. Get yourself the best Christmas gift ever and SUBSCRIBE to the channel – and to this blog if you’ve not!
Jumping ahead quite a bit in Scripture in our “true myth” series, today we will look at an incredibly powerful relationship between Jesus Christ and the “trickster archetype.”
Fans of the Baltimore Catechism will recall that God “neither deceives nor is deceived.” How, then, could God incarnate fit into this paradigmatic role of the Trickster, occupied by deceptive figures such as Loki, Hades, various coyotes, ravens, and other such creatures – including serpents – throughout the history of mythology? These figures use trickery in order to gain power… What does Jesus have to do with this?
Without a full exploration of the ins and outs of the trickster paradigm, we can point out just a few commonalities which apply to Jesus:
The Baptism in the Jordan – in between the Nations (death) and Israel (life), in between the Sea of Galilee (full of fish and where He calls the first disciples) and the Dead Sea (…dead…), in the midst of the flourishing jungle but in the lowest part of planet Earth, and in water (which both gives and takes life).
His first act after the Baptism – He goes out into the desert (to deal with a real trickster) in between Jericho, the city of sin and death, and Jerusalem, the city of spirit and life… This same space will be the setting for the story about the Good Samaritan (representing Himself), who picks up the half-dead (!) sojourner (Adam), of which He is the renewal.
He touches the unclean (symbols of death) and gives healing/life – For example, the raising of the little girl in Mark 5, or the healing of the leper in Matthew 8.
The Resurrection – Did He actually die? Is He really alive? Whatever the case, it’s clear that our sense of the “in between” is tapped into… The psychology of the uncanny valley is maxed out.
He normally dwells on the outskirts of society, frequently retreating to the wilderness for solitude. Much of the 3 years of the public ministry is spent camping just near the Decapolis and other such places. Bethany is another place worth mentioning, as it is not quite in Jerusalem, but it is near it, where he raises Lazarus from the dead (more “in between” life and death imagery) and prepares for Passover for the last time… Gethsemane and Golgotha are also just outside Jerusalem.
He claims the role of a gatekeeper to the underworld. (Even more death-life ambiguity.) “I hold the keys of death and Hades,” He says in Revelation 1:18. Or take John 10:9 – “I am the gate; whoever enters through me will be saved” – or John 14:6: “No one comes to the Father except through me.”
He is a shapeshifter.
The Resurrection – He is the same, but different. (More ambiguity!) The disciples can only half recognize Him, though the wounds give testimony that it really is the same man they knew. But He is changed somehow.
The Eucharist – Jesus literally takes the shape of bread and wine.
God has become a human being – certainly a kind of changing shape, albeit in a qualified sense.
He cannot be contained or caught by the power of opponents. He passes through the crowd, or He hides effectively, as seen in many passages in the Gospels, such as the rejection at Nazareth in Luke 4. Instead, only He has the power to lay down His life… and take it up again (John 10:18).
He does not often give direct answers. Instead, He speaks in parables, riddles, questions, and ambiguities. He arguably only directly answers 3 questions of the over 100 put to Him, and He arguably asks over 300.
Other “trickster” characteristics might be noted as well, such as spiritual power, unclear origins, and a preference for working in the midst of obscurity and chaos. What are we to make of all this?
The answer is that Jesus goes to the most “uncomfortable” place in our psychology and asks us, nonetheless, to trust Him. Thus one of the deepest parts of our mind, the part intuitively inclined to see the brokenness of the world, is “cured” by His reversal of the trickster archetype.
God “deceives” in a way by becoming human (thus not “looking like God,” as He did on Mount Sinai with fire and thunder), in order to gain the power of persuasion or condescension. But also, and perhaps in a deeper and plainer sense, God is not only reversing the trickster’s goal-paradigm but inverting it as well… Instead of deceiving to become powerful, God becomes weak in order to tell the truth.
There are a number of pressing problems in Catholic moral theology, especially in bioethics. One of them is the right understanding of the so-called “Principle of Double-Effect” (PDE) – or whether this is really a legitimate principle at all in the way it is normally expressed. Now that Dr. Finnis has both parts of his series on capital punishment out, let’s put on our moralist hats and get to work.
I’ll spare you all the ins and outs of the history of the problem – Fr. Connery’s wonderful book on abortion in the Catholic moral tradition deals with this in some relevant detail – but will give you the gist of the recent discussions so that we can dive into John Finnis’ articles. I too will write in two parts, I think…
The 19th century saw the problem of “craniotomy” come up, and this is a decent and, to me, most familiar way to dive into the problem of PDE. (Craniotomy is crushing the skull of an inviable fetus, in this case with an eye to extracting the child to save the mother.) Archbishop Kenrick of Baltimore wrote his morals handbook and forbade the operation; Cardinal Avanzini of Rome anonymously opined in favor of the procedure (pages 308–311) in his journal (which would become the Acta Apostolicae Sedis); and Cardinal Caverot of Lyon (the city pictured above, coincidentally) petitioned the Holy Office for an official response. Needless to say, there was some controversy.
In response to Caverot’s dubium, the Holy Office (the precursor to the CDF) decided in favor of Kenrick’s position. But it did so cautiously, saying that the procedure “cannot be safely taught.” It did not exclude definitively the liceity of the procedure in itself.
Let’s fast-forward to today’s iteration of the old camps, of which there were and still are precisely three…
The “Grisezian” Position:
Doctors Grisez, Finnis, and Boyle were major proponents of the liceity of craniotomy in the 20th century and into the 21st. Grisez lays out his argument in several places, including in his magnum opus (entirely available online), The Way of the Lord Jesus. It is worth quoting the relevant passage in its entirety:
“Sometimes the baby’s death may be accepted to save the mother. Sometimes four conditions are simultaneously fulfilled: (i) some pathology threatens the lives of both a pregnant woman and her child, (ii) it is not safe to wait or waiting surely will result in the death of both, (iii) there is no way to save the child, and (iv) an operation that can save the mother’s life will result in the child’s death.
If the operation was one of those which the classical moralists considered not to be a “direct” abortion, they held that it could be performed. For example, in cases in which the baby could not be saved regardless of what was done (and perhaps in some others as well), they accepted the removal of a cancerous gravid uterus or of a fallopian tube containing an ectopic pregnancy. This moral norm plainly is sound, since the operation does not carry out a proposal to kill the child, serves a good purpose, and violates neither fairness nor mercy.
At least in times past, however, and perhaps even today in places where modern medical equipment and skills are unavailable, certain life-saving operations meeting the four conditions would fall among procedures classified by the classical moralists as “direct” killing, since the procedures in question straightaway would lead to the baby’s death. This is the case, for example, if the four conditions are met during the delivery of a baby whose head is too large. Unless the physician does a craniotomy (an operation in which instruments are used to empty and crush the head of the child so that it can be removed from the birth canal), both mother and child eventually will die; but the operation can be performed and the mother saved. With respect to physical causality, craniotomy immediately destroys the baby, and only in this way saves the mother. Thus, not only classical moralists but the magisterium regarded it as “direct” killing: a bad means to a good end.
However, assuming the four conditions are met, the baby’s death need not be included in the proposal adopted in choosing to do a craniotomy. The proposal can be simply to alter the child’s physical dimensions and remove him or her, because, as a physical object, this body cannot remain where it is without ending in both the baby’s and the mother’s deaths. To understand this proposal, it helps to notice that the baby’s death contributes nothing to the objective sought; indeed, the procedure is exactly the same if the baby has already died. In adopting this proposal, the baby’s death need only be accepted as a side effect. Therefore, according to the analysis of action employed in this book, even craniotomy (and, a fortiori, other operations meeting the four stated conditions) need not be direct killing, and so, provided the death of the baby is not intended (which is possible but unnecessary), any operation in a situation meeting the four conditions could be morally acceptable.”
We can see the attractiveness of the Grisezian position. It removes the uncomfortable conclusion that we must allow two people to die rather than save one. However, it simultaneously introduces an uncomfortable conclusion: that we may ignore the immediately terrible results of our physical exterior act in favor of further consequences of that act due to the psychological reality of our intention, in this case contingent on even further action (viz. actually extracting the child after crushing the skull – presumably, a surgeon may perform the craniotomy and then simply leave the child in the womb, thus failing to save either life).
Hold on to that thought.
The “Traditional” Position:
I put the word “traditional” in scare-quotes because, while this position follows the cautious prohibition of the Holy Office, it is not very old and is merely a probable opinion. It is taken by a good number of moralists who are “conservative” and “traditional” in other areas. And it doesn’t have a modern champion the way the pro-craniotomy camp has had Grisez.
Folks in this school often make more or less good critiques of the Grisezian position, zeroing in on its lack of appreciation for the immediate physical effects which flow from an external act. How is it that crushing a child’s skull does not equate with “direct killing”? It seems that such an action-theory, as proposed by Grisez, Finnis, and Boyle (GFB) in their landmark essay in The Thomist back in 2001, is utterly at odds with common sense. The plain truth, then, is that craniotomy, just like ripping the organs out of someone healthy to save 5 other people, functions on the basis of consequentialism.
This position, however, must bite two bullets. First, there is the sour prescription to let two people die when one could be saved. Second, it throws into confusion the topic of private lethal self-defense… Doesn’t shooting a person in the head also directly kill in order to save another’s life? GFB made this point in their Thomist essay, and, in my opinion, it is their strongest counter-argument. It pulls us back to the fundamental text in the discussion, q. 64 a. 7 of the Secunda Secundae, whence supposedly cometh PDE.
Hold on to that thought too.
The Rights-Based Position:
The final position for our consideration comes most recently from Fr. Rhonheimer, who seems to be at least in part following Avanzini. Basically, the argument goes like this… In some vital conflicts, like the problematic pregnancy at issue, one has two options – save one life, or allow two deaths. Everyone has a right to life, but in cases where we find acute vital conflicts, it sometimes makes no sense to speak of rights. The case in which a person in a vital conflict (the child) will not even be born is one such example. Therefore, while the child retains the right to life, it makes no sense to speak of this right, and so it does not bear on the decision of whether to perform an act which would end in the child’s death if it will save the mother.
Leaving aside the problem of the language of rights in moral discourse (see MacIntyre’s scathing critique in After Virtue), we can simply observe that this position does not evidently derive from virtue-ethics but is made up wholesale out of a desire to appease an intuition. Rhonheimer, as far as I recall, does not even attempt to integrate his position into the broader framework of moral theology. In sum, the damning question is: “Why precisely does acute danger to others and shortness of life remove the necessity to respect the bodily integrity/life of a person?” To me, it seems little more than an appeal to intuition followed by foot-stomping.
I credit Fr. Rhonheimer for making an attempt to present a different solution, and certainly, not all of his work is this problematic. But we are presently concerned with this particular topic. Anyway, I suggest that this is not a serious position for further consideration.
A Brief Synthesis
I recently wrote my STB thesis on moral liceity with respect to “per se” order, which is to say that those acts with “per se” order form the fundamental unit of moral analysis upon which the whole question of “object” vis-a-vis “intention” turns. I look at Dr. Steven Long’s truly excellent groundwork in his book The Teleological Grammar of the Moral Act, but I expose what I found to be some ambiguities in his definition and presentation of what exactly constitutes per se order. Skipping over all the details, let me quickly show how problematic the first two foregoing positions are and then give a rundown of the basic solution and its integration with respect to capital punishment. (It is Finnis’ articles on the death penalty which brought us here, remember!)
We have already mentioned 3 dilemmas. The central problem is craniotomy. At the two poles are the “transplant dilemma,” with one healthy patient and 5 critical patients in need of new vital organs, and the standard case of private lethal self-defense (PLSD), such as shooting a person in the head in order to stop his lethal attack.
The Grisezian position ably explains the craniotomy and PLSD. Nowhere – and I have looked pretty hard – do NNL theorists explore the implications of their action-theory (such as presented by GFB in their article) with respect to something like the transplant dilemma. One could easily appropriate the language of Grisez’s passage in TWOTLJ to accommodate such an obviously heinous action as ripping out the heart, lungs, kidneys, liver, etc. of a healthy man to save 5 others. (It should be noted that the individual’s willingness to give his body over to such an act, while good in its remote intention, is totally inadmissible. I think basically all Catholic moralists would agree with this.) To rip out the man’s vital organs could certainly be described as “reshaping the body” or something similar to Grisez’s description of craniotomy as “reshaping the skull.” After all, the surgeon need not intend to kill the man – he could simply foresee it happening in view of his means to save these other men.
GFB evidently miss the point in their Thomist article, as they claim a causal equivalence between craniotomy and procedures done on a person for that person’s own sake, on page 23: “It is true that crushing the baby’s skull does not of itself help the mother, and that to help her the surgeon must carry out additional further procedures (remove the baby’s body from the birth canal). But many surgical procedures provide no immediate benefit and by themselves are simply destructive: removing the top of someone’s skull, stopping someone’s heart, and so forth.” We can see, then, that the principle of totality is undervalued by GFB and those who follow them. Serious damage done to a person must at least help that person. Any help to other persons is secondary, and I would argue per accidens rather than per se… One human substance is always related accidentally to another human substance.
The traditional approach more or less throws the teaching of St. Thomas into a cloud of ambiguity. By stating that the craniotomy is illicit because of the directness of its physical causation, the language in q. 64 a. 7 becomes unintelligible. We have to see the whole thing:
“Nothing hinders one act from having two effects, only one of which is intended, while the other is beside the intention. Now moral acts take their species according to what is intended, and not according to what is beside the intention, since this is accidental as explained above (II-II:43:3; I-II:12:1). Accordingly the act of self-defense may have two effects, one is the saving of one’s life, the other is the slaying of the aggressor. Therefore this act, since one’s intention is to save one’s own life, is not unlawful, seeing that it is natural to everything to keep itself in ‘being,’ as far as possible. And yet, though proceeding from a good intention, an act may be rendered unlawful, if it be out of proportion to the end. Wherefore if a man, in self-defense, uses more than necessary violence, it will be unlawful: whereas if he repel force with moderation his defense will be lawful, because according to the jurists [Cap. Significasti, De Homicid. volunt. vel casual.], ‘it is lawful to repel force by force, provided one does not exceed the limits of a blameless defense.’ Nor is it necessary for salvation that a man omit the act of moderate self-defense in order to avoid killing the other man, since one is bound to take more care of one’s own life than of another’s. But as it is unlawful to take a man’s life, except for the public authority acting for the common good, as stated above (Article 3), it is not lawful for a man to intend killing a man in self-defense, except for such as have public authority, who while intending to kill a man in self-defense, refer this to the public good, as in the case of a soldier fighting against the foe, and in the minister of the judge struggling with robbers, although even these sin if they be moved by private animosity.”
Without launching into a critique of the Cajetanian strain of commentary which ultimately gave rise to the crystallized formulation of PDE which pervades most moral discourse on vital conflicts, I will again follow Long and say that the “rules” of PDE really only work if one already knows what one is looking for. In this respect, PDE is like the moral version of St. Anselm’s ontological proof for God’s existence – it is nice to have in a retrospective capacity, but it is not actually that helpful as an explanatory tool.
As we have seen, GFB take Thomas to mean that one does not “intend” to kill the aggressor, just as the surgeon does not “intend” to kill the child in the craniotomy. The traditional school does not have as clear an answer – it seems forced to say, somewhat like Fr. Rhonheimer, that the rules just “don’t apply,” yet without a convincing explanation. After all, the principle of totality does not bear on the slaying of one person for the sake of another, even in the case Thomas addresses. Furthermore, since it appears that it is only the death of the aggressor that stops the attack, thus implying “intentional killing” as a means, how do we explain St. Thomas’ position?
We can note a few things in response. First, it is in fact not death which stops the attack initially – it is the destruction of the body’s capacity to continue attacking, which itself is the cause of death. The separation of the soul and body (which is what death is) need not be the chosen means or the intended end. In every single case, the aggressor is incapacitated before dying, and such incapacitation is what is sought. (This is at least part of what makes Finnis’ argument about “unintentional killing” in war plausible.) Second, the child stuck in the womb is a radically different kind of threat than the rational aggressor. Third, Thomas is quick to turn the discussion to public authority, as a kind of foil. All of this is quite significant and points to an answer.
To the first point… It is true that the private citizen can’t have the death of the aggressor as a goal – meaning, death can’t be what is sought as a means or as an end. He doesn’t need to do anything to the soul-body composite as such; he only needs to do something to the body’s ability to be used as a weapon.
To the second point… A gunman in an alley is a very different sort of threat than a child growing in the womb. There seem to be two classes of threats – non-commutative and commutative. The non-commutative threats are those which result from principles not in themselves ordered toward interacting with the outside world, viz., whose operations lack a terminus exterior to one’s own body. These would be the material principle itself of the body (the act of existing as a body) and the augmentative and nutritive faculties of the vegetal soul. So a person falling off a cliff, or a child growing in the womb, is not acting on the outside world… Threats which proceed from the animal or rational appetites, however, are indeed acting externally. The crazed gunman who is not morally responsible and the hired hand are both trying to do something to another person, whereas the child growing in the womb is not. So perhaps different kinds of threats allow for different kinds of defense.
To the third point… Without a full exploration of the famous “self-defense” article quoted above, Thomas is eager to explain that public authority can kill intentionally – evidently meaning it can be the end of one’s act rather than just the means. (“Choice” refers to means, “intention” refers to ends – they are only equivocally applied in the inverse senses in scholastic morals.) Here’s where it gets weird.
Because the soul-body composite is its own substance (a living human being), the act of killing a person (regardless of one’s psychology) destroys that substance insofar as the world of nature is concerned. (We leave aside the interesting questions of the survivalism vs. corruptionism debate among Catholic philosophers.) It forms a per se act – that is to say, there is nothing further which can come from this action which will be per se an effect. This is because, as I argued in my thesis, per se order exists only within the substance chosen to be acted upon. Per se effects are those effects which necessarily occur in the substance an agent acts on which come from the agent’s act itself, given the real situation of the substance. So to destroy a substance necessarily ends the per se order. At the end of per se order there is the intended effect – such as debilitation (which is only logically distinct from self-preservation and therefore is not a separate/remote/accidental effect – what it is to protect oneself simply is to remove a threat) or death. Of course, this intended effect can itself be part of a chain of intended effects which function as means with relation to some further end. If I defend myself in order to live, but I want to live for the sake of something else (like acquiring wealth), then there is a chain of intended ends which function as means. The necessary process of moral evaluation, however, is to look for the per se case of action and examine whether it is rightly ordered in itself.
We have seen with the transplant dilemma that it is wrongly ordered to damage one innocent person’s body lethally with the good aim of helping many others. The answer to the craniotomy seems to be the same… The child does not have an unjust appetite, he has a rightly ordered vegetal/material appetite which is inconvenient to others, so he may not be attacked, unless that attack also proportionately helps him and is chosen in part for that reason. (Such a case might really exist – for example, an inviable fetus is causing the womb to rupture… It’s foreseen that delivering the child will both save the mother and allow the child to live longer than he would have otherwise, even though exposure to the outside world will be the cause of his death. It certainly seems that this would be permissible given the principle of totality.) Finally, we reach the case of PLSD… There is no principle of totality at work here, even though the intended effect of self-preservation is immediately achieved with the debilitation which causes death. Rather, the normal rule of totality is indeed suspended. This is because of the kind of threat which the aggressor poses – it is a threat to the commonwealth due to a disordered external appetite.
Because “it is natural to everything to keep itself in ‘being,’ as far as possible,” and “one is more bound to take care of his own life than another’s,” it stands to reason that in a case in which there is public disorder due to the external act of a person, that person becomes the rightful recipient of correction at the hands of those whom he threatens, without his own good being a barrier to protecting the good of oneself or the community. The blows to the aggressor, we can see, actually help him – they keep him from being a bad part of society. And the private citizen’s duty is indeed to protect the commonwealth insofar as he is a part… This would include a kind of “natural delegation” to dispense with individual totality for the sake of communal totality – he is at liberty to risk the good of the one person (while, remember, he actually does something good to the aggressor by rectifying his disordered exterior act) for the sake of the commonwealth. The private defender may not try to kill the aggressor, but he may knowingly cause it with no benefit to the aggressor beyond keeping him from being harmful. Even though death is a per se effect, the defensive act is legitimate – the private defender acts like a miniature public official in this urgent situation, without psychologically taking death itself as an end.
This plugs in very nicely with Thomas’ vision of capital punishment… Stay tuned for part 2, though I’m sure a lengthy tome like this won’t be too necessary, given that a response from Dr. Feser is likely forthcoming, due in no small part to having been called out personally by Dr. Finnis.
Though there are many problems one might identify in present-day progressive American politics, I want to point out three particularly deep-seated intellectual vices. The misunderstandings are with respect to the following: the order of charity, experience and knowledge, and the terminus a quo/ad quem paradigm. They correspond to three key issues… the mode and structure of government, the value of so-called diversity in rational discourse, and the purpose of social institutions and roles, especially in relation to sex and gender.
First, the order of charity. One of the great principles of Catholic social teaching is subsidiarity, which is the preference to defer to the more local government to provide for a constituent’s needs. The chain goes something like this: individual – family – town – county – state – nation – world. To know the needs of many individuals belongs to the governor of the family, to know the needs of many families belongs to the governor of the town, and so on. It is easy to see that as we ascend the ladder the task of governance becomes increasingly complicated, as it involves increasingly many parts. This proves the need for an order of governance in the first place, as it would be unthinkable for the king of a large country to govern each town directly, not only because of the amount of time and energy such micromanagement would take but also because of the diverse needs and situations of each town which are understood best by those who actually live there. The king is only in a position to know the affairs which affect the whole country, where its largest parts are concerned in their relations with each other. Thus, subsidiarity. The more that can be delegated to smaller governments, the better. The value of this principle is taught by some of the harshest lessons of world history… When the emperor gets too powerful there is trouble ahead both for him and for his empire.
But what about the relationships as they go up the chain rather than down it, or even those relationships at the same level? For example, what should characterize the individual’s actions vis-a-vis the family, or the state, or the world? How should families or counties or nations interact with each other? Of course, the lower owes care and respect to the higher and ought to be willing to make appropriate sacrifices for the good of the whole of which he is a part, with a greater kind of love given according to the dignity of the political body. However, this good will, or charity (ideally), follows an order, just like the governance to which it relates. Because we are creatures, we can only love in concrete practice up to a certain point, and our acts of love therefore should be patterned on our proximity – physical or otherwise – to the object of that love. Just as good parents care for their own children more than their next door neighbors’ children, they would also care more about their own town than a different town, because it is their own town which is most immediately able to care for them. Furthermore, they would be more ready to sacrifice for their town than for their county, state, or nation, not because they don’t have a greater kind of love for the larger body (i.e. the nation) according to its dignity but because that body is more remote. Finally, they will exercise more diligence and care toward families in their own town or neighborhood, as they have more interest in common with each other and are more able to look out for each other precisely because they are parts of the same small community. Such care is a legitimate application of the principle of solidarity… To be in real solidarity involves real proximity, of geography, blood ties, virtues, or even goals, and that proximity also tends to give a better understanding of the situation. 
This is why voluntourism is generally bad, or at least not as good as it feels: it ignores the needs of one’s close neighbors in order to go save people far away, and it provides little to no help in the end, possibly even making things worse. The Western obsession with “saving Africa” is one example of this.
This should reveal at least one major problem with two key progressive agenda items: socialism and globalism. It is simply not possible to take care of everyone by centralizing government more and making it bigger (including by weakening or removing borders). We have a duty to look after those who are more closely united with us – and so long as we are flesh and blood, occupying physical space and belonging naturally to families, there will exist this natural order of government – and charity. We are bound to love our neighbor, but we are certainly bound to love some neighbors more than others. (See Gal. 6:10, 1 Tim. 5:8, etc.)
Second, experience and knowledge. It has become an all-too-familiar rhetorical move: you don’t share my experience, therefore your position is automatically irrelevant. “How can you, a man, dictate sensible policy on abortion? You don’t know what pregnancy is like!” This kind of thinking pervades public discourse in debates on race, gender-theory, guns… It even exists in the Church. How much do we really need to “discern with” and “listen to” various people or groups in order to understand the moral and doctrinal issues at stake? Certainly, nobody is saying that acquiring knowledge of particulars is bad or even unhelpful for dealing with those particulars themselves – indeed, it is vital, as Gregory speaks about at length in the Pastoral Rule – but once the general principles are known, especially through the authority of revelation, there is no need to go on studying particulars to learn those principles. If some people want to be “accompanied” a certain way, at odds with right morals or doctrine, then it is they who need reform, not the principles. It is they who need to work to build the bridge. Thus, the first public words of the Lord were not “what do you think” or “how are you feeling,” but rather, “repent” and “believe.”
What, then, is the value of experience? It is the collection of memories which can be applied to work for a desired end through abstracting the universal principles at work. Experience can contribute to making a person more prudent if he pays attention and has a good memory, but it does not necessarily give someone all the knowledge required to make a good decision about how to reach the goal, nor does it necessarily tell a person what ends are best to seek at all. Likewise, empathy with suffering groups, which provides a kind of substitute-experience, does not give the right means or ends either. It can actually be quite blinding. For example, perhaps you feel terribly for victims of drunk driving – but you have to look at whether outlawing alcohol would result in damage far worse than the damage avoided. Everyone you govern must be considered fairly. (See above about subsidiarity!) The wisdom that comes from suffering borne well is a spiritual kind of wisdom, a sort of perspective on one’s own life and meaning, and typically that is its limit. Being a resident of a war-torn country does not make a person an expert on foreign policy, it makes him an expert at hiding from bombs and bullets. If the same person also studied international politics at university and served for decades in his nation’s diplomatic corps, these would be of greater value for prudential decision-making about foreign policy, as they both communicate more information about the relevant matters. Perhaps his experience of hiding from air raids helps to contextualize what he is learning, or helps to remind him of how important certain consequences are, but simply having experienced the wrong end of a war does not make him a good politician.
Knowledge can be gained without experience of the things learned about. This principle is easily proven by the very existence of education: we believe that knowledge can be handed on through the communication of information. It is left to the individual to organize that information and make a judgment, right or wrong. Thus, a priest who has studied the Pastoral Rule, for instance, is in a much better position to preach and rule well than if he had not studied it, ceteris paribus. If experience were the sole criterion for knowledge, we would face epistemic anarchy: no two people have the exact same experience of anything, and therefore there could never be any common body of knowledge. To rectify this, there is a theory of group-based experience, codified in the doctrine of “intersectionality.” Because minorities (and women) are necessarily victims, and the victim-narrative must always be believed, the number of victim-classes to which one belongs gives greater primacy to one’s claims and demands. So goes the theory. But if intersectionality defines knowledge, then we should only need to find the few Black, homosexual, transgender-woman, overweight, Muslim immigrants and let them run our lives, since they are practically demigods given their high intersectionality. And even within such an elite group, there would be divisions – some grew up poor, others did not. Some have genetic diseases, some do not. Etc. And so intersectionality is also a kind of compartmentalization which tends toward epistemic anarchy.
The truth is that we are not only animals, we are rational animals; we are capable of learning without experiencing, and therefore we can generally see what is good and right in public policy without having been in the exact circumstance of those to whom any given piece of legislation applies, provided we are actually informed of how that policy will affect people and be enforced (subsidiarity!)… But we don’t need to take subsidiarity so far that we actually must be part of the racial, gender, “whatever” group over which we exercise authority.
Third, the terminus a quo/ad quem paradigm. The terminus a quo is the “point from which” one goes. It stands in relation to the “terminus ad quem,” the “point to which” one goes. It behooves a person who wants “progress” to say exactly where that progress leads, and where it stops. Not only has there been deep confusion about where exactly some kinds of “progress” are heading, but there is also no principled way to determine when that progress ought to stop and be conserved. Some slopes are slippery indeed.
Today’s conservatives are yesterday’s liberals, especially with regard to gender-theory and its related issues. If you need proof, well, there is an endless supply, but try this one on for size. (Yes, really, click the link. If that doesn’t drop your jaw, nothing will.) What is the endgame? What is it really all about? How far can we “progress”? Of course, the goalposts keep moving. First, mere social tolerance is the only request. Then, once acquired, it is a small legal concession here or there, nothing big. Then, the redefinition of a social institution protected by law – but surely, this is the last step… Except then it becomes domination in schools, in the workplace, in the culture at large: indoctrination of the youth, forced service to same-sex weddings, and constant positive portrayal and exposure in the media. And now that the homosexual lobby is quickly running out of room, the momentum has carried into transgender rights.
But at this point I want to ask about these intermediate steps, which, for some basically sincere people, really are seen as the “end,” the terminus ad quem. That step is the redefinition of social institutions or roles, such as same-sex marriage on the homosexual agenda and “bathroom bills” on the transgender front. There is a distinct problem of intentionality for each with regard to their understanding of their terminus ad quem as such.
Everyone has heard the comparison between the civil rights battle of the 1950’s and the present-day struggle for so-called “gay rights.” There is an oppressed group which only wants equal treatment and protection under the law. Just as Blacks couldn’t use the White schools or water fountains or any number of products and services, so gays don’t (didn’t) have access to marriage, because it is limited to heterosexuals. Because marriage is so important in public life and personally desirable for so many reasons, it is equivalent to the desire for education, transportation, etc., wherein Blacks were discriminated against. Therefore, the two movements are basically analogous.
The problem with this argument is with regard to the terminus a quo/ad quem relationship. Under Jim Crow, goods and services that were equally desirable to both Whites and Blacks were apportioned unequally and unfairly. It was unfair because it put Blacks and Whites on fundamentally different levels of human dignity, when the reality is that race does not determine basic human nature. In other words, Blacks and Whites share the same terminus a quo, since they are fundamentally equal as human beings with the same desires and therefore deserve basic equality of opportunity, but they were treated as having different termini a quo. Because they share identical desires, such as good schools, a seat on the bus, and so on, their desires themselves have an identical terminus ad quem. To sum up, Blacks were given a different terminus ad quem because it was thought they had a different terminus a quo when in reality they did not. The civil rights movement sought the right to the same terminus ad quem by trying to show the Black terminus a quo was the same as the White terminus a quo.
This is (was) not the case with the push for same-sex marriage. Here, the terminus a quo is assumed to be the same by the government, and the terminus ad quem (marriage) is available to all. There is already equality of opportunity – it’s just that the desire of homosexuals is not the terminus ad quem which was equally available. Instead of pushing to be able to use the White water fountain, this was a push to create a Black water fountain because the water from the White fountain tastes bad to some.
Consider again: in no country ever in world history were homosexuals categorically barred from marriage. It is that they typically don’t desire the “kind” of marriage available. Instead, a new kind of marriage needs to be created to suit their desires – a different terminus ad quem altogether, just with the same name. The terminus a quo is different too, not because homosexuals and heterosexuals differ in fundamental human dignity, but because the desires which define these two categories are unequally useful to the commonwealth in which they seek to be fulfilled. Unlike schools or water fountains, marriage has not historically been treated as a good or service consumed, it has been treated as an office from which services and goods are provided to the community, namely, children and care for children. Even if same-sex couples were generally able to provide equally well for adopted or surrogate children as a child’s natural parents, which seems quite obviously incorrect for several reasons, they would still be at an unequal public dignity because they need help bringing children into existence. A man and a woman do not, generally speaking, need help procreating. And because of the clear good of parents staying together, having kids, and treating those kids well, the government is right to incentivize a lifelong commitment to a monogamous heterosexual relationship with certain public benefits which are not due to even the most committed homosexual relationships. The tendency to produce children is why there is such a thing as marriage in the first place (to protect, educate, and nurture children in a balanced and stable environment), and kids are also the primary reason the government should be interested in marriage at all, as they are the future of the commonwealth. It is especially dangerous when many fatherless young men are gathered together – this is how and why gangs form in cities… the kingpin is the replacement for the father.
We could map this same twist of the terminus a quo/ad quem dynamic onto some other public function or office of nature, such as the military. Just as every society needs marriage, it also needs a military, and so there should be certain incentives or “perks” that come with taking up arms as a soldier. But what if I want those same benefits, but without joining the current version of the military? Suppose I too am patriotic, own a gun, dislike terrorists, and sometimes wear camouflage. Shouldn’t I too have equal access to the military? I do, of course – I could go sign up at any moment – but I want to do it my own way, because I don’t desire to go to the desert or live on a base. Shouldn’t military rights be extended to me, too?
Anyone can see that this is the same line of reasoning as the same-sex marriage argument, and anyone can see also that it is a patently absurd argument.
But there is a different kind of absurdity at work in the transgender activism of today… What is the terminus ad quem of a gender transition – or even of the activism in general? If gender is a social construct, as is so often claimed today, what is the value of changing the body? Cross-dressing or surgery would make sense if one’s real gender were something inherent to the person. So is the terminus ad quem simply to be treated a certain way by other people according to the superficial notions of male and female? If gender is a social construct, then there is no “noumenal” change, only a “phenomenon” which changes – that is, there is and can only ever be a change in perception rather than in any objective reality in the person or the body called “gender.” This seems contradicted by the next big step in transgender activism, which is, like the gay agenda, compulsion. In this case it is even worse, because it is more arbitrary. If gender were only a social construct, looking and acting sufficiently “male” or “female” would suffice, but because the meaning of those terms is sliding away into oblivion, like “marriage,” the “appropriate” way to treat a person is based solely on that person’s desire to be treated a certain way. Because there is no objective reality “male” or “female,” and because it is either impossible or irrelevant for transgender people to look and act sufficiently like the paragon of “male” or “female” given their biological sex, before or after surgery, it may be necessary simply to force people to use pronouns they would not normally use.
Not to do so would be “violence,” because it causes depression and social isolation which can lead to self-harm or harassment. Therefore, speech at odds with my own desire to be called “he,” “she,” “zhe,” or whatever, refusal to let me use any bathroom or locker room I want, or refusal to let me put on my official documents whichever of an ever-growing list of genders I determine, is punishable by law… Bad, right? It’s happening in Canada already with the infamous Bill C-16. Except we are not looking at all the harm this can cause, we are looking at the terminus ad quem. What has a trans-man or trans-woman actually become? Surely, they would say a “man” or “woman,” full stop. (Never mind that this is already causing problems – for example, does a trans-woman count as a man or as a woman for the purposes of any kind of affirmative action slanted towards women? Or take the example in the link above about the “transphobia” of RuPaul!) If gender is a social construct, a gender transition is the creation of a perception of a person as a member of a certain gender category. But since that category is completely based on perception, in what does the transition actually consist? What is actually being changed? And if it is all about my desires anyway, wouldn’t it be easier to change my desire to match people’s seemingly empty and baseless perception rather than the other way around? If “man” and “woman” don’t really mean anything objective anyway, then why would one even want to be called or treated as one or the other? What is the motivation to depart from the terminus a quo? It seems to be a comically extreme exercise in vanity…
Hopefully I have hammered home the point. The terminus ad quem of gender transitions and the activism surrounding it is unclear at best. And where the movement in general will end is anyone’s guess, but compelled speech is likely involved. After that point, my guess is trans-humanism will be next, especially given the rapid advances being made with the ongoing development of CRISPR.
Of course, the truth is that gender dysphoria and its accompanying behavior constitute a tragic mental illness and symptoms of that illness. The desire to “become a man” or to “become a woman” is based on a fetish with the biological reality of the opposite sex and the social realities based upon it, or some similar unfortunate disposition of the mind. Something approximately the same could be said of same-sex attraction.
These three points understood rightly – the order of charity, experience in relation to knowledge, and the terminus a quo/ad quem paradigm – give us a fitting lens through which to look at mainstream American (and broader Western) politics. The ideas are firmly rooted in the Christian intellectual tradition and help to make very useful distinctions. Hopefully they can assist you in forming your own opinions and in having your own discussions. Let me know what you think in the comments – but play nice!
I have argued elsewhere that postmodern millennial culture is shaped by two dominant strains of thought – the positivist strain, and the existentialist strain. These opposing worldviews have merged to form an intellectual chimera that prompts a kind of neo-Albigensian approach to anthropology and ethics… But I have not investigated postmodern millennial politics.
Postmodern millennial (PMM) culture has taken the so-called “Frankfurt School” and run away with it as their own. The Frankfurt School’s “Critical Theory” gave rise to what is known today as “cultural Marxism.” Haven’t heard of it? Wake up, it’s on your doorstep (language warning):
I have just recently read the essay by Herbert Marcuse which is referenced in the video. It’s typical dense German writing, but there are lines which leap out… The general idea is that the “majority” which is in charge tends to allow for a kind of false tolerance of free speech on the part of the “minority,” which is designed to keep the majority in power and is therefore necessarily repressive (thus “repressive tolerance”). Therefore, the minority needs to push back against the ones in charge (of government, culture, schools – whoever is “repressive”) and silence them in order to make things fair.
“The telos of tolerance is truth,” writes Marcuse (Repressive Tolerance). To borrow and elaborate on Alasdair MacIntyre’s critique of this aphorism, tolerance is ordered toward rational discourse… In other words, there is an intermediate step, because – guess what – a person in a minority group might actually have an opinion or desire which is wrong or bad, even about how he/she/ze should be treated!
Marcuse’s book Eros and Civilization (1955) undergirded what little intellectual justification there was for the 1960’s sexual revolution, with the basic message being, “Don’t work, have sex.” The integration of Marx and Freud (both pseudoscientists, of course!) which Marcuse attempted in this book then played itself out in wonderful rejections of capitalism such as were found at Haight-Ashbury and Greenwich Village back in hippie heyday. Can you imagine if that were all of human society?
As far as I can tell, this was the kind of “libertarian socialism” which Marcuse envisioned as utopian, although until that utopia was universal he perhaps wanted it to be less about pot smoking and more about activism, including violent activism. Think angry hippies who are protesting more than a war in Asia… Think angry hippies who are protesting not being given free stuff all the time and not being treated as demi-gods for being part of a minority. That is what he wanted, from what I gather.
He’s got it now.
His ideas aren’t on the fringe anymore, they are mainstream Leftist doctrine. They aren’t just fueling sporadic uprisings like ’68 in Paris, they are causing the countless campus riots over conservative guest speakers. (Here are just a few recent examples.) They aren’t for the dustbin or relegated to historical studies in philosophy, they are living and breathing in PMM activism. They are running the mainstream media. They dominate liberal arts departments at universities. They are the Western Left.
I am still researching this man and his ideas. I am still learning about the effect they are having in Western culture (especially American colleges). I have no clue what the answer is other than to know who we are and what we believe as Christians, to pray for mercy, and to be happy about sharing the Faith with those who want to listen. Most of these folks do not want to listen – that would be too threatening. They would rather stay comfortable in their identity politics than allow themselves to be challenged, which might cause discomfort. Exposing them to threatening or offensive ideas, some of them argue, actually counts as a kind of physical violence against them. Let that sink in.
This is where “political correctness” grows out of all proportion (if there ever was a healthy proportion for such a thing, which is doubtful, at least as public policy or law), as this is where microaggressions, safe spaces, and trigger warnings come from: they are about protecting people from violence. That is how you shut down the other side’s legitimate act of free speech. And the Church is high on the list of entities to silence and compel to fall in line with Leftist identity politics. Think “hate speech” and anything normal that goes on inside an even remotely conservative church, and then you will see the scary, scary picture.
Herbert Marcuse. That’s the name… Even though very few PMMs have ever heard of him, that’s where it’s coming from. This is what the Church in the West is up against. Read him. Study him. Denounce his ideas where you find them.
This topic deserves more attention, especially in terms of evaluating Marcuse in terms of Catholic teaching (namely anthropology and social teaching), but this will have to suffice for now. Derrida is also someone to investigate as connected with this phenomenon.
“Lord, it is good that we are here. If you wish, I will make three tents here, one for you, one for Moses, and one for Elijah.” (Matthew 17:4)
I’ve seen people give Peter a hard time for not “getting” what was happening in front of him at the Transfiguration. Mark’s Gospel says parenthetically “he hardly knew what to say – they were so frightened.” But I think we need to give our first pope some credit where credit is due.
Peter was thinking quickly on his feet. So he intruded into the conversation and asked whether he should build three tents. To us, this sounds odd, but Peter must have thought he had solved the problem: this had to be the beginning of the end times. He might have picked up on how Jesus was fulfilling the Old Testament festivals. He knew of the Jewish tradition that Moses and Elijah would come again before the end of the world. Now that they had come, he was hoping Christ was finally going to restore the kingdom of Israel and reap the much-anticipated harvest of souls. It was the Christological fulfillment of the Jewish Festival of Booths.
They needed tents.
The Festival of Booths (Feast of Tabernacles / Sukkoth) is one of the three major feasts in the Jewish calendar (Leviticus 23:39). For seven days, the people would dwell outside in tents (“booths”), reminiscent of their time dwelling in tents during their exodus sojourn. The timing of the Festival of Booths corresponded with a yearly grain harvest, during which whole communities would work day and night (with the aid of a full moon) to gather in the harvest and do the work of threshing the grain. Removed from its initial agricultural context, the Festival of Booths still looked forward to the harvest that was to come at the end of time. Although Peter’s exclamation of “Lord, it is good that we are here” is a fitting expression of eschatological rest, the tent-building suggestion might have been a little too much.
Peter figured out pretty well what was going on. However, Peter still did not know what he was saying. Where Peter erred was not his analysis – the event of the Transfiguration is the fulfillment of the Jewish Festival of Booths – but his approach.
Peter approached the Transfiguration as a problem to be solved, not as a mystery to be entered into.
The Meaning of Mystery
When Christians use the word “mystery,” we do not mean a problem without an answer. No, for Christians, a mystery is something that is so intensely knowable that it exceeds the powers of human comprehension. A mystery is so great that it encompasses the subject.
With a little help from the French personalist philosopher Gabriel Marcel, we should distinguish between “mystery” and its misused synonym “problem.” For Marcel, something is a mystery when the self is implicated in it. A mystery cannot be studied from a distance, but is experienced by entering further into it. Openness to mystery is openness to the whole of a reality.
A “problem,” on the other hand, is something that “is placed in front of me, blocking my way.” To treat something as a problem is to purposely exclude yourself from it. It is a purely notional engaging of a situation, wherein one can find objective and finite answers with universal implications.
Problems are the stuff of scientists. Mysteries are the stuff of mystics.
Let me give you an example.
Once, my five-year-old niece told me, “Did you know that, when I’m in the car, the moon follows me? It really does!”
Infected as I was by the spirit of abstraction, I told her, “It only looks like it is following you because it’s so far away.” I thought I could maybe explain to her how perspective works at such distances. To prove this, I thought I could set up an experiment, putting her in one car and her sister in another car. They would go separate directions and observe how the moon follows both of them. Then I could prove to my niece that, since the moon cannot possibly be following both of them, there must be another explanation. That explanation would be in the reality of the great distance between the earth and the moon, a distance that can be observed and measured. Science would win out over childish naïveté.
But before I could get anywhere to disprove her childish notion, she interrupted, “NO. The moon really follows me.”
In the face of such opposition, I thus abandoned my attempt to scientifically disprove her childish perception.
For my five-year-old niece, the moon was a mystery; it really did follow her. The moon was so beyond her that, rather than disconnecting her, it implicated her in its path. However mistaken her understanding of perspective, she approached it with wonder. And she rightly would not let that wonder be extinguished.
To me, the moon was a problem that needed to be solved; it could be measured and placed conceptually at a distance. I knew that its movements and phases are configured to a different pattern than my sporadic movements. Instead of encountering the moon with her, I abstracted. Although technically correct (the moon does not follow you), my approach prevented me from being gripped by the mystery of the moon and sharing in my niece’s wonder.
Like my niece with the moon, a mystery is so beyond us, that we cannot help but be pulled into it. A mystery is so large that it necessarily involves the viewer.
In this way, God Himself is a mystery, being so far beyond us that, at the same time, He embraces us and loves us in our very being.
The Mystery of the Transfiguration
It is in this way that the event of the Transfiguration is a mystery.
Since mysteries overwhelm us, they implicate us – they require our response. Our response, then, to the mystery of the Transfiguration is not to solve the puzzle of Moses and Elijah’s appearance, but to enter deeply into the reality of what is before us.
Although technically correct, Peter’s approach prevented him from being gripped by the mystery. So while Peter was still speaking, a higher voice interrupted him: “This is my beloved son with whom I am well pleased. Listen to him.”
Through the mystery of the Transfiguration, we are meant to share in Jesus’ own prayer with the Father. By beholding the glory of Christ transfigured and listening to him, we become sons and daughters in the Son. By entering into the mystery of the Transfiguration – by listening to God’s beloved Son – we become what we contemplate.
How do we enter into the mystery of the Transfiguration? For us Christians, we go to the source and the summit of the Christian life – the Holy Sacrifice of the Mass. Like Peter at the Transfiguration, we can look at the Mass merely as a problem to be solved, a ritual to be analyzed, a puzzle to be deciphered. Or we can enter into the mystery of the Mass.
In every celebration of the Mass, we ascend the mountain with Christ, and we encounter something that overwhelms our understanding: God incarnate – the second Person of the Holy Trinity – comes to us as bread and wine. So great is the glory of Christ in the Eucharist, so utterly beyond us, that we are pulled into the mystery. The altar is our Mount Tabor, where we see His glory, not with the eyes of flesh, but with the eyes of faith. Over the altar the Father’s voice mystically resounds, “This is my beloved Son; listen to him.” We who enter into this mystery by receiving the Body of Christ in Holy Communion are enveloped by the cloud of the Holy Spirit. At Mass, we enter into the mystery of God’s glory. He gazes on us, and we gaze on Him, and we become what we contemplate.
It is good that we are here.
Post by: Fr. Peter Gruber
Main Image: The Church of the Transfiguration, Mount Tabor
Having examined the first part of the “postmodern manifesto,” which is scientistic, we now turn to the second part, which is existentialist. Here it is again:
Real knowledge is only of irreducible information about the material world, and I can manipulate that same material world however I want in order to express myself and fulfill my desires.
The imposition of a spirit onto its flesh and the world is our object of investigation today.
After the Kantian revolution proposed a deontological moralism as a replacement for metaphysics, Schopenhauer took up the reins and ran with the theme: the will reigns supreme over the intellect. This doctrine recalls those first rumblings present in Ockham, Abelard, Scotus, and even St. Bonaventure. (Who could forget Dante’s depiction of Bonaventure and Thomas circling around each other in Heaven debating the primacy of the intellect and will?) Then came Søren Kierkegaard’s deep anxiety over life together with a suspicion of some kind of opposition between faith and reason. Heidegger, of course, was riddled with anxiety as well, over being and nothingness, and he had an obsession with freedom and authenticity: all characteristic of what was to come. There was no more dramatic precursor to the French existentialists than Nietzsche, who sought to free the world of its nihilism and empower it with the liberation of the will: the Übermensch, or “superman,” would embody a new kind of magnanimity with no regard for the welfare of others or some abstract Aristotelian “flourishing.” Nietzsche apparently couldn’t do it himself and went insane, finally cracking after seeing a horse being mercilessly beaten in a street in Turin. (Here we might pause and recall Durkheim’s observation about happiness and the subjection of the will to a pre-defined role in society… Those who have a life already set up for them tend to kill themselves less often.) The penultimate step to mature existentialism came with Michel Foucault, the forebear of the “rainbow flag” and a staunch opponent of confining the mentally insane. After all, maybe they are just “different,” you know?
Finally, we come to the main event: a Parisian socialite, his lover, and a journalist-turned-philosopher raised on the soccer fields of French Algeria.
The core of the teaching of Jean-Paul Sartre can be summed up in three words: existence precedes essence. In other words, there really is no human nature, only a human condition which must be figured out and made into something of one’s own. He cites Descartes’ cogito in support of this theory, being an “anti-materialist,” and he claims that this is the only dignified vision of man, as this doctrine alone is capable of acknowledging his true power and freedom – which are apparently the characteristics of dignity. Man must go beyond himself to create himself, quite in contrast to the Comtean humanist religion, where humanity is good “just because.” For Sartre, man is nothing without making something of himself. (This would later become the basic teaching of Ayn Rand as well.) Freedom is to choose and conquer resistance present in one’s situation, and one must exercise this freedom according to his authentic self. But what is the “self” without a human nature? It is unclear.
Sartre’s intermittent lover, Simone De Beauvoir, with whom he frequently conspired to seduce unwitting female students for sexual exploitation, held similar ideas and became the first “feminist.” It is from De Beauvoir that we get the now infamous gender-sex distinction: “One is not born but becomes a woman.” The woman is defined socially – and in classical A-T anthropology – in relation to man, and therefore does not have her own identity. This is an existential problem for the woman, who must go out and create herself. To postmodern ears, it would sound insane to contradict the sense of De Beauvoir’s complaint; and yet we have St. Paul teaching that some kind of superiority of men is rooted in nature and of necessity must flow into ecclesial life (1 Cor. 11:3-16, Eph. 5:21-33, Col. 3:18-19). The Christian must not be a feminist of the De Beauvoir variety. Our friends the Cathars had women clergy; they anticipated the existentialists in their justification for this choice. We will return to that in a future post.
Then we have our Algerian friend. Albert Camus’ most famous contribution to Western thought was that the only serious question a person has to ask himself is whether to end his own life. After all, life is absurd, and if one can find no meaning for himself, then it is better that it end on one’s own terms, rather than in something meaningless like a car crash (which, ironically, was exactly how Camus was killed). Despite explicitly denying the existentialist label and preferring to be an “absurdist” instead, Camus is nonetheless the crystallization of the movement. His interpretation of the Greek myth of Sisyphus, claiming that man must accept his existence as an absurdity in order to find peace, or the anguish of the main character of “The Stranger” over the meaninglessness of his life and what has happened to bring about his execution, provides a fitting capstone to the existentialist project because it shows its end: senselessness. When human nature is removed, purpose is removed. And the frantic search for a self-assigned basic purpose can only end badly, even if it doesn’t feel that way to a “successful existentialist.”
Certainly, more can and should be said about the French existentialists. But this brief and rude treatment suffices to bring to light the critical themes of our own day which were present in the movement, namely: a rejection of human nature as such; a perceived need to define one’s own role to make up for such an absence; and an obsession with “gender” equality.
We have already noted in PART I of this series the shocking fact that the existentialist doctrine on human nature as such has been enshrined in U.S. law by the Supreme Court. That should be enough to show there is a deep-seated existentialist current plaguing the West, but when coupled with the wide diffusion of the watered-down scientistic positivism we explored in the last post, disdain for classical Aristotelico-Thomistic anthropology has become its own unspoken rule. It is not unspoken in the way one doesn’t talk about Fight Club; it is unspoken in the way one doesn’t talk about red being a color… it’s just a given.
If there is any admittance of a “human nature” it is a passing nod to the truth that what we call human beings usually have certain kinds of physical characteristics which normally produce certain kinds of effects. The classical meaning of “nature,” however, is alien to this vague and platitudinous physicalism, as there can be no teleology (in-built purpose) for what is merely a random collection of stuff onto which we slap a name. This, I suggest, is the final fruit of Ockham’s Nominalism which we have discussed previously.
Of course, most postmodernists dimly realize their godless worldview poses the “existential problem,” viz., a lack of inherent meaning and purpose in their life, and they seek to solve it through the recommended process of “self-definition.” We are not here critiquing a healthy ambition to “do what one can” or to avoid idleness; rather, the issue is the desperate and necessarily futile attempt to provide altogether one’s own meaning for existing in the first place. There are also many people, not quite full-blown postmodernists, who seek to correct this same inner anxiety with DIY spirituality (moralistic therapeutic deism, usually); this is particularly dangerous, as it nominally acknowledges something greater than oneself as grounds for directing one’s life, but it is really the imposition of one’s own ideas onto a divine mouthpiece.
The existentialist paradigm helps make sense of the postmodern millennial’s take on the issues: the life issues, the gender issues, and the sex issues. Since a person’s meaning is basically self-derived, and that meaning is predicated upon desires and the ability to fulfill them, the unborn and the elderly are without their own meaning. Having a certain kind of body which has certain powers does not force one to accept that embodied reality as a given identity and direction either within a social framework or even within a physical framework, provided there is a surgeon available. Much less does this God-given engendered bodily existence, constitutive of unique powers with lasting social consequences and everlasting spiritual consequences, provide an individual with rules for how to engage in the use of the organs which are the seat of that power. You must choose to become something. Alternatively, you may disappear into oblivion – either irrelevance, or death. Before it was the American Dream™ it was the French philosophical anthropology.
The current of this thought has bored a hole so deep into the subconscious of postmodern America (and many parts of Europe) that it has become impolite, if not outright illegal, to tell a person that he is a he, she is a she, that “No, I will not serve cake at your wedding,” or anything that might emotionally hurt that person, so long as that self-given identity or meaning does not result in “harmful” behavior. Harmful behavior, remember, is reduced to emotional, physical, or financial pain or loss – for those who can already “will to power” and aren’t entirely reliant on help from other people for existence, that is.
The video above, while admittedly a bit cherry-picked, demonstrates nonetheless the existentialist current of millennial postmodernity with breathtaking frankness. No doubt such an experiment could be replicated across the global West with some success, at least in supposedly “elite” institutions of higher education. Note again the criterion of “harm” as constituting the core of the normative ethics for postmodern millennials – as if a person with a wildly erroneous self-perception is doing no real harm. You can tell that these kids become more and more uncomfortable as their own premises and sense of political correctness force them to affirm that, even though what is obviously the “real” truth is being denied by this person, since “it’s not ‘harming’ anyone,” it must be okay and therefore good to support. The lack of awareness that such a departure from the truth of one’s natural constitution as “man,” “white,” etc., does indeed cause harm to that person, and therefore also to society (at least inasmuch as that person’s self-perception is related to his or her function in society), is probably why it doesn’t “bother” the people interviewed. There used to be a word for the self-deception which is now being coddled as healthy and normal: mental illness. Now it requires university-sponsored trigger warnings and safe spaces, international awareness campaigns, and even protective laws. All of this finally ends in a kind of laissez-faire utilitarian relativism, which we might call the postmodernist ethics: “The more a behavior harms the people or things that I like, the more immoral the behavior is, and the more a behavior does good to the people or things that I like, the better the behavior is.” In this normative ethics, I can never do anything wrong, except inasmuch as I might unthinkingly do something harmful to my own cause. Another person is irrelevant insofar as he doesn’t harm my own mostly arbitrary and narrow values.
This must also be understood as occurring within the materialistic framework – both harm and good are all temporal and experiential. (Unless, that is, a little DIY spirituality comes into play… Then all bets are off.) Without a firm understanding of unchanging human nature, and the belief in its authority and power to provide a normative ethics, we are left to define our own values based on whatever we would like to do or become as individuals or collectively as a society.
“Existence precedes essence.” Human beings are now human doings.
Yet clearly, “Some are more equal than others.” Why are some people or things valued over others? Self-expression, and the fulfillment predicated upon it, are the foci around which postmodern value is measured: money, physical pleasure, convenience, emotional pleasure, diversity, equality, progress. Each goal is vaguer – and more dangerous – than the last. If you are not contributing one of these goods to society, how can you be valuable? Maybe you are a “good person,” but you are no longer useful and are therefore of no account. In other words, we may kill you if we would like to… and one day we might realize that we ought to kill you: because you are not capable of doing the kind of things we value, your own existence offers you “no benefit.” It is now charitable to destroy a life that can’t “create itself.” Beyond the obvious cases of killing the unborn and physically sick, Camus’ dilemma is being answered for the mentally ill and elderly in Europe in “assisted suicides” which are a little too assisted.
It has become popular these days to remark on “the science” behind why transgenderism or same-sex marriage or whatever is “bad.” Taking note of the psychological and physical processes and results of these experiments is not irrelevant to forming a right opinion on their goodness (like studying the average harm done to children by “gay parenting”). But there is no need, and in fact no possibility, for “science” to provide the answer to the foundational moral questions, whose answers are found in a study of the soul and body’s basic purposes, which are widely known to all, as St. Paul reminds the Romans (Rm. 1:18-32). You really don’t need an expert biologist to give kids “the talk.” You do need something other than mere biology to infer that deviating from the natural order is wrong, and the obsession with the minutest details of the “is” to justify the “ought” betrays at least a touch of the intellectual illness diagnosed in Part II of this series, namely, a weak form of positivism called scientism.
Given that existentialism is historically opposed to the materialistic worldview which positivism relies on, how can the postmodern manifesto combine both elements? For example, how can a person support transgender surgery as an effective means of “expressing the real self” while claiming that there is no such thing as a soul because it’s not an object of scientific observation? We might say it is a simple lack of reflection which allows this cognitive dissonance, and this is indeed true. The deeper problem, however, is that ideology is serving passion, rather than the other way around. This is part of what makes millennials so difficult to reason with: they will shift from one part of the manifesto to the other for the sake of whatever person or group or behavior they feel good about, not realizing that each pole is at least a mild affront to the other. What they tend to sense is that their scientism forces one to create his own meaning, since there is no role predefined by a true authority (God, revealed religion, a family or government invested with God-given authority), and that the quest to create meaning for oneself is determined only by what is able to be perceived by oneself, the greatest authority. The poles point back toward each other in this way, even though real positivists would reject the idea that a person can “mean something” at all, and real existentialists are not even attached to the doctrine that there is a real material world in the first place. The details of theory are lost in the practice of the unfortunate and unwitting inheritors of these worldviews.
Whether the French existentialists would be on board with the hashtag gender activists of today is not entirely clear. Sartre would perhaps call transgenderism “bad faith,” that is, a fake expression of oneself wherein one “tries too hard” to be something he or she really is not. This is not “authentic” to Sartre. (How there could be such a thing as the “self” independent of one’s sincere desires begins to strike the central nerve of the existentialist project, however; if one can act in bad faith, then there must be something more to one’s identity than his desires which those desires can be in line with… which sounds an awful lot like an essence preceding existence, so to speak.) Camus might call such people to account as failing to accept that life just does not make sense, and that the only way to be happy is to accept this: providing a physical answer to a spiritual problem is vain, but there is no spiritual answer either, so one must simply be content with madness.
Existentialism is likely to remind the attentive reader of Sacred Scripture of Ecclesiastes. Was Qoheleth the first existentialist? The first absurdist? He does claim that the acceptance of life as vain and meaningless in itself is a condition for peace, like Camus. (Truly, Qoheleth is right – there is nothing new under the sun!) But Qoheleth, despite all of his despair, believes that everyone’s life means something to God, and that there are objective measures of morality by which that God will somehow judge us. That his idea of final judgment is fuzzy can seem odd given this, but in his intellectual humility he did not grasp for what he had not already been given. He knew we would die and that God would somehow render justice, but he will not say more.
Postmodernists avoid the topic of death because it would force them out of their watered down existentialism – protected by a million distractions – into the disquieting bluntness of Camus, which few can stomach: your life really is fundamentally meaningless, and there’s nothing you can do about it, so just get comfortable with that fact like a happy Sisyphus. The suicidal dilemma is also “too harsh” for sensitive millennial minds – let that question be left to poor Hamlet and Hannah Baker.
Next time, we will directly investigate the relationship between the trends of our current culture and the doctrine and praxis of the Cathars, finally making good on the title of this series.
Post by: Eamonn Clark
Main image: Simone De Beauvoir, Jean-Paul Sartre, and Che Guevara; Cuba, 1960
Just as the woman with the hemorrhage reached out to touch the hem of Jesus’ tunic, so do post-modern secular Westerners reach out to touch the hem of scientists’ lab coats. Despite the plain fact that any given scientist or doctor or other “expert” will tend to be specialized in only some tiny sliver of his or her field, hopeless intellectual wanderers will gather at the feet of these people to learn all the mysteries of the universe… which is dumb. How did this happen?
Let’s take a step back.
The manifesto of the post-modern Westerner par excellence is this: “Real knowledge is only of irreducible information about the material world, and I can manipulate that same material world however I want in order to express myself and fulfill my desires.”
Herein we see two strands of thought colliding, one about the mind and one about the will: positivism and existentialism. Historically, they are not friends. How they have become fused together in post-modernity is a strange tale.
Today we will break open the first clause – real knowledge is only of irreducible information about the material world, the positivist element.
From the outset, we must make a distinction between “positivism,” which is an epistemic and social theory, and “logical positivism,” which is something more metaphysically aimed. My goal here is to show the roots of the broader idea of positivism, how it found its academic zenith in logical positivism, then how the aftermath of its fall has affected Western philosophy and science at large as well as in the minds of millennials.
A brief sketch of the positivist genealogy will suffice. We recall Descartes to point out his obsession with certitude, just as we note the empiricist thrust of Bacon, Locke, and Hume. We must mention Kant, both as the originator of the analytic-synthetic distinction (which will become enormously important) and as an influence on Hegel, who is notable for his approach to philosophy as something integral with history. Condorcet and Diderot should be pointed out as influential, being the greatest embodiments of the French Enlightenment, wherein reason and revealed religion are opposing forces. Marx, though he would reject positivism as a social ideology, helped inspire it along the same lines as Hegel had. The penultimate step was Henri de Saint-Simon, whose utopian socialism was all the rage in post-Revolutionary France as it attempted to put political theory into political practice.
Of course, these men were not positivists. It is Henri de Saint-Simon’s pupil, Auguste Comte, who brings us this unwanted gift of an empiricism so strong it entirely and unabashedly rejects any and all metaphysical knowledge outright. This led Comte to build a reducible hierarchy of the sciences based on their certainty or “positivity,” and he claimed (rightly) that the trend of empirical studies was heading toward a “social science.” This conception of a reducible scientific hierarchy – one where, for instance, biology can be put in terms of chemistry, and chemistry in terms of physics, etc. – was a rather new way of thinking… Previously, it had been more or less taken for granted that each science has its own irreducible terms and methods, even admitting some kind of hierarchy (such as with the classical progression of the liberal arts).
Not only was Comte the first real philosopher of science, he was also the first sociologist. According to Comte, humanity was passing from its first two stages, the theological and the metaphysical, into the third and final “positivist stage” where only empirical data would ground truth-claims about the world. Having evolved to a higher clarity about what the world is, and having built up enough of the more basic physical sciences to explore how that world works, sociology could finally occur. Mathematical evaluation of social behavior, rather than qualitative analysis, would serve as the proper method of the “queen of the sciences” in this new age.
Comte outright jettisoned religion qua supernatural and revelatory, but his intensely Catholic upbringing had driven into him such a habit of ritual that he could not altogether shake the need for some kind of piety. What was a French Revolution atheist to do? Well, start a “religion of humanity,” of course. (The “positivist religion” never became a major force, especially since Freemasonry already filled the “secular religion gap,” but it did catch on in some areas. Take a closer look at the Brazilian flag and its meaning, for example…) We should also note, for the record, that Comte was only intermittently sane.
The epistemic side of positivism almost ended up just as much of a flop as the pseudo-religion side of it. Unfortunately for the West, Durkheim and Littré became interested, and they, being altogether sane, effectively diffused Comte’s ideas and their own additions through the West at the start of the 20th century. Eventually, a group of like-minded academics started habitually gathering at a swanky café in Vienna to discuss how filthy and naïve metaphysics was compared to the glories of the pure use of the senses and simple mathematical reason – the Vienna Circle was born.
Together with some Berliners, these characters formulated what came to be known as logical positivism. When the shadow of Nazism was cast over Germany, some of these men journeyed westward to England and America, where their ideas were diffused.
The champions of logical positivism were Hans Hahn, Otto Neurath, Moritz Schlick, Rudolf Carnap, A.J. Ayer, and Bertrand Russell. While Russell is no doubt familiar to some readers (think “tea pot”), the others fly lower under the radar. It is Ayer’s formulation of the logical positivist doctrine which we will use, however, for our analysis.
“We say that a statement is factually significant to any given person, if, and only if, he knows how to verify the proposition which it purports to express – that is, if he knows what observations would lead him, under certain conditions, to accept the proposition as being true, or reject it as being false.” (Language, Truth, and Logic, 35)
Got that? What this means, in the context of the whole book, is that in addition to “analytic” statements (“all bachelors are unmarried”), which are true necessarily, the only meaningful statements – that is, statements able to be true at all – are those whose truth we could use our five senses to verify. These are “synthetic” statements. If I say that Pluto is made of bacon grease, I am making a meaningful statement, even though I cannot actually verify it; it suffices that it is hypothetically possible to verify it. If I say that the intellect is a power of the soul, this is not meaningful, since it cannot be verified with the senses. For the details, see Ayer’s book, which is rather short.
Needless to say, it is rare that a school of thought truly dies in academia. A thorough search of university philosophy departments in the Western world would yield a few die-hard fans of Plotinus, al-Ghazali, Maimonides, and maybe even Heraclitus. Perhaps the best or even only example of ideological death was logical positivism. W.V. Quine’s landmark paper “Two Dogmas of Empiricism” was such a blow to the doctrine that eventually Ayer actually admitted himself to be in massive error and repudiated his own work.
What was so blindingly erroneous about logical positivism?
First, the analytic-synthetic distinction, as formulated by the logical positivists, is groundless. Analytic statements supposedly don’t need real referents in order to be true; they are instead simply about the meanings of words. For some kinds of statements which employ basic affirmation and negation, this might work, as it is simply a dressing up of the principle of non-contradiction. Fine. But if one wants to start using synonyms to take the place of some of the parts of these statements, the distinction begins to disappear… What the relationship is between the synonym’s object and the original word’s object cannot be explained without a reference to real things (synthetic!), or without an ultimately circular appeal to the analyticity of the new statement through a claim of the universal extension of the synonym based on modal adverbial qualifications (like the word “necessarily,” which points to an essential characteristic which must either be made up or actually encountered in reality and appropriated by a synthesis). In other words, it is analytic “just because.” (Thus, the title of Quine’s paper: Two Dogmas of Empiricism.)
Beyond that, logical positivism is a self-refuting theory on its face… If meaningful statements can only be about physically verifiable things, then that statement itself is meaningless: it is not analytic (or is arbitrary if it is, and we are back to the first problem), and it cannot be verified with the senses, so it is not synthetic… How does one verify “meaningfulness” with the senses? Logical positivism is a metaphysical theory that metaphysics is meaningless. Once again, this can only be asserted, not discovered. And with this dogma, it evidently claims itself to be meaningless.
But the cat was out of the bag: “Metaphysics has completely died at last.” Logical positivism had already made its way from the salons of Austria to the parlors of America and the lecture halls of Great Britain. It poured fuel on a fire that Bertrand Russell and G. E. Moore had started in England when they rejected the British Idealism then dominating the scene by creating an “analytic” philosophy that didn’t deal with all those Hegelian vanities that couldn’t be touched with a stick or put in a beaker. Russell’s star pupil, Ludwig Wittgenstein, would also come to be a seminal force in strengthening the analytic ethos, having already inspired much of the discussion in the Vienna Circle. Though Quine did indeed destroy the metaphysical doctrine that metaphysics is meaningless, the force of positivism continued nonetheless within this “analytic” framework – and it is with us to this day en masse in university philosophy departments, where it has led several generations of students to miss out on a solid education in classical metaphysics and philosophical anthropology.
In sociology there arose the “antipositivism” of Max Weber, which insisted on the need for value-based sociology – after all, how can a society really be understood apart from its own values, and how can a society be demarcated at all without reference to those values, etc.? A liquid does not assign a value to turning into a gas, which it then acts upon, but a group does assign a value to capitalism, or marriage, or birth status which it then acts upon.
In the broader realm of the philosophy of science, Karl Popper and Thomas Kuhn’s postpositivism came to the fore. On this view, science cannot be adequately explained without regard for some kind of value; rather, the possibility and/or actualization of the falsification or failure of a scientific theory is the characteristic feature of the sciences – in contrast to the optimism of the positivists that we can “just do science,” and that that will be useful enough.
In “science” itself, an air of independence was diffused. Scientists do “science,” other people do other things, and that’s that; never mind that we have no idea how to define “science” as we understand it today, and never mind that values are always brought to bear in scientific evaluation, and never mind what might actually be done with what potentially dangerous knowledge is gained or tool developed. A far cry from the polymaths, such as St. Albert the Great or Aristotle, who never would have considered such independence.
Then there are the “pop scientists” who try to do philosophy. A few examples of many will have to suffice to show three traits shared by the pop scientists who are the go-to sources on religion and philosophy for countless curious millennials and Gen-Xers alike.
The first is an epistemic myopia, which derives immediately from positivism: if you can’t poke it or put it in a beaker, it’s not real. (Yes, it is a little more complicated than that, but you’ve read the section above describing positivism, right? Empirical verification is the only criterion and process for knowledge… Etc.) This is often manifested by a lack of awareness that “continental philosophy” (as opposed to analytic philosophy) often works in totally immaterial terms, like act, or mind, or cause, or God. This immediately creates equivocation – a pop scientist says “act” and thinks “doing something,” for example.
The second is an ignorance of basic philosophical principles and methods, which follows from the first characteristic. If you don’t know how to boil water, don’t go on “Hell’s Kitchen” – everyone will laugh at you and wonder what you are doing there in the first place. We might do well to have a philosophical version of Gordon Ramsay roaming about.
The third is the arrogance to pontificate on philosophy and theology nonetheless, and this of course follows from the second characteristic. They don’t know what they don’t know, but they got a book deal, so they will act like they are experts.
Everyone knows Dr. Stephen Hawking. (They made a movie!) But did you know that the average 6-year-old could debunk the central claim of his most recent book? It is now an infamous passage:
“Because there is a law such as gravity, the universe can and will create itself from nothing.” (From The Grand Design)
I can hear the 1st graders calling out now: “But gravity’s not nothing!” And they would be right. The myopia of Dr. Hawking (and Dr. Mlodinow, his co-author) is evident in the inability to grasp that, as Gerald Schroeder pointed out, an immaterial law outside of time that can create the universe sounds a lot like, well, God. The ignorance of basic philosophical principles, in this case, the most basic, is clear from realizing that “gravity” can’t be both SOMETHING AND NOTHING. Then, the arrogance to go on pontificating anyway is self-evident by the fact of the existence of the book, and then a TV series which aired shortly afterward wherein we find philosophical reflection which is similarly wanting.
If you really want to do a heavy penance, watch the “discussion” between Hawking, Mlodinow, Deepak Chopra, and poor Fr. Spitzer – I had the displeasure of watching it live several years ago.
Then there are folks like Dr. Michio Kaku. He regularly shows up on those Discovery Channel specials on string theory, quantum mechanics, future technology, yadda yadda. All well and good. But here’s an… interesting quotation for our consideration:
“Aquinas began the cosmological proof by postulating that God was the First Mover and First Maker. He artfully dodged the question of ‘who made God’ by simply asserting that the question made no sense. God had no maker because he was the First. Period. The cosmological proof states that everything that moves must have had something push it, which in turn must have had something push it, and so on. But what started the first push? . . . The flaw in the cosmological proof, for example, is that the conservation of mass and energy is sufficient to explain motion without appealing to a First Mover. For example, gas molecules may bounce against the walls of a container without requiring anyone or anything to get them moving. In principle, these molecules can move forever, requiring no beginning or end. Thus there is no necessity for a First or a Last Mover as long as mass and energy are conserved.” (Hyperspace, 193-195)
The misunderstandings here are as comical as they are numerous… the conflation (explicit in the full text) of the first 3 Ways as “the cosmological proof,” which obscures the issue; the belief that “motion” necessarily refers to something physical; the thought that only recently did we discover that matter and energy don’t simply appear and disappear; and then the most obvious blunder – Thomas does NOT begin any of the 5 Ways by asserting anything like “God is the First Mover, therefore…” There is no such ungrounded assertion which “dodges the question,” as Kaku puts it. One must wonder whether he even bothered to read the original text – which is readily available. On top of this disastrous critique, Kaku offers even weaker arguments (unbelievably) against both the “moral proof” (a characterization of the 4th Way I had never encountered before Kaku’s book, which troubles me from the start) and the teleological proof, but I won’t bore you. (Basically: “Because change and evolution.” Read it for yourself.)
Once again, we see three qualities: epistemic myopia (as evidenced, for example, by the error about “motion”), ignorance of the most basic philosophical principles (albeit these are a little more complicated than the one Hawking whiffed on), and the arrogance to pontificate about God and the act of creation nonetheless.
Next you have a man like Richard Dawkins, one of the nastiest examples of publicly evangelical atheism the world has to offer at present. Here’s one particularly embarrassing quotation from his seminal anti-theistic work, The God Delusion:
“However statistically improbable the entity you seek to explain by invoking a designer, the designer himself has got to be at least as improbable.” (p. 138)
Can you see the three characteristics? Material beings only (or at least “things” with “parts”), no idea what metaphysical simplicity is and how it relates to God in Western philosophy, and yet here we have one book of many which address this theme.
It is not that these folks don’t believe in classical metaphysics – it’s that they don’t understand it in the least. They play a game of solitaire and claim to be winning a game of poker.
We won’t even get into discussing Bill Nye the Eugenics Guy… for now.
Okay, yes, quote-mining is easy. But this is the cream of the crop from a very large and fertile field. I am not sure I recall ever reading an important and sensible argument about religion or metaphysics from a world-renowned scientist who lived in the past 50 or so years. Someone prove me wrong in the comments.
All this leads us to the average “scientism” which one finds in the comboxes of YouTube videos about religion, threads on various websites, and debates on social media. Yes, there are plenty of religious people in those arenas, but the skeptics who make wild claims like “science disproves religion” or “evolution means God does not exist,” or who simply dismiss the idea of revealed religion outright with some kind of mockery, ought to be seen as the children of positivism. It is the most probable explanation – the sources of their myopia, ignorance, and arrogance can usually be traced back through intermediate steps to a talking head like Dawkins, who ultimately owes his own irrational ramblings to Auguste Comte.
Why is post-modern positivism so naïve? At the combox level, it is because these people, as all others, have an instinctive drive to trust in someone beyond themselves. For many it is due to circumstance and perhaps a certain kind of emotional insecurity and intellectual laziness that they latch on to the confident scientistic loudmouths to formulate their worldview – and it becomes a pseudo-religious dogmatic cult of its own, a little like Comte’s “religion of humanity.” At the pop-science level, it is just plain laziness and/or intellectual dishonesty combined with arrogance, as we have investigated. At the lecture hall level – and I mainly speak of the general closed-mindedness towards classical metaphysics found in analytic circles – it is a deeper kind of blindness which is the result of the academic culture created by the aforementioned ideological lineage. Each level has its own share of responsibility which it is shirking.
The truth is that matter is known by something immaterial – a mind or person – and this reveals to us a certain kind of hierarchy and order, seeing as matter itself does not know us. Man is indeed over all matter and ought to control it and master it, and all without the consent of matter; but this does not mean that there can’t be knowledge of things nobler and/or simpler than man, like substance or causation or God. Not looking at matter as the product of non-matter, and as being ordered to the immaterial in a certain way, is part and parcel of the New Albigensianism.
So there we have the first part of the manifesto explained. Irreducible facts (the ones devoid of metaphysics and value judgments) about the material world constitute the only real knowledge. The less reducible, the less it is really known. Even though the West is full of supposed “relativists,” it would be difficult to find a person who would truly let go of the objectivity of “science.” To say, “Christianity is your truth but not mine” is one thing; it is quite another to say something like, “Geocentrism is your truth but not mine.”
There is yet more to be explored… Next time, we will dive into the second half of the “postmodernist manifesto” with a look at its existentialist roots and how misconceptions about the relationship of the self to one’s bodily life have led to transgender bathroom bills.
Post by: Eamonn Clark
Main image: The Positivist Temple in Porto Alegre, Brazil
For the most part, religious errors are reducible to four basic ideas.
1. Jesus is not by nature both fully God and fully human (Arianism, Eutychianism, Monothelitism, Agnoetism, Mormonism, etc.)
2. There are not three Persons in One God (Modalism, Unitarianism, Subordinationism, Partialism, etc.)
3. Sanctifying grace is not a free and universally available gift absolutely necessary for salvation (Pelagianism, Semi-Pelagianism, Baianism, Jansenism, Calvinism, etc.)
4. Matter is not essentially harmoniously ordered with spirit (Manichaeism, Buddhism, Albigensianism, etc.)
While the first three ideas are certainly prevalent in our own day, the correct doctrines are only available through the grace of faith. The falsehood of the fourth, however, is evident from a rigorous use of natural reason alone. Therefore, it is more blameworthy to succumb to that error.
We are seeing today the resurgence of the fourth error in four ways: the sexual revolution, radical feminism, the culture of death, and most recently, gender theory.
The three forms mentioned in the first list (Manichaeism, Buddhism, and Albigensianism) more or less say that matter is evil and needs to be done away with. The Manichees thought that matter was created by an evil god, the Buddhists think that matter is only a distraction, and the Albigensians (or “Cathars”) became so enamored with the thought of the spirit escaping its fleshy prison that suicide became a virtue… But we will talk all about the Cathars later, and we will find some striking similarities between this medieval rigorist dualism and some of the most recent value developments in the Western world.
The current manifestations of the fourth error do not quite say “matter is evil,” but they instead say that the determination of human matter (the body) is irrelevant to the good of the spirit, and/or that the spirit is one’s “true self” which can be served by the body according to one’s whims. Some proponents may claim they don’t believe in spirit, that is, immaterial reality (in this case, the “soul,” or formal principle of life), but when they speak of someone being “a woman trapped in a man’s body,” or something similar, they betray their real thoughts. Still, even if a person insists on denying the reality of spirit, it remains the spirit within him who denies it. There can be no “self-determination” without a self to determine, and if the body simply is the self, then how can there be real determination? There could then only be physical events without any meaning. This, of course, is contradicted by the very existence of “experience.” It is not merely a body which acts, but a person who experiences.
The error in its current expressions can be traced to Descartes, whose laudable project of attaining perfect certainty about the world was, ultimately, a disastrous failure. After shedding all opinions about which he did not have absolute certainty, he was left only with one meaningful truth: cogito, ergo sum. “I think, therefore I am.” No person could both think and not exist.
This was not new, as St. Augustine had come to a similar realization over 1,000 years earlier. The difference was the context and emphasis of the thought; to Augustine, it was an interesting idea coming out of nowhere and going nowhere. To Descartes, it was the foundation of every knowable proposition, and it led to the idea that human beings are essentially thinking (spiritual) beings rather than a body-soul composite… Think “soul trapped in body.”
This came after the ruins of the scholastic project. With the combination of the fixation on choice and freedom in Scotus’ work and Abelard’s troubling take on the problem of universals (how to account for similarities between different things), the stage for Ockham’s Nominalism was set. (See Gilson’s detailed description in his wonderful book, The Unity of Philosophical Experience.) It was Ockham who hammered in the last nail of St. Thomas’ coffin and who paved the way for the “cogito” to be intensely meaningful not only to Descartes, but to the entire Western academy. Nominalism’s dissociation of “things” from any real universal natures which would make those things intelligible as members of species was the first step towards overthrowing classical metaphysics. This “suspicion of being” understandably increased exponentially with the publication of Descartes’ Discourse on the Method, as it cast a serious doubt on the reliability of the senses themselves, doubt that many felt was unable to be overcome, despite a sincere effort to do so on the part of Descartes himself.
The anxiety finally culminated in Kant’s “nervous breakdown”: a total rejection of metaphysics in the denial of the possibility of knowing “the-thing-in-itself” (noumena). From there, much of the academy generally either desperately tried to do without a robust metaphysics or desperately tried to pick up the pieces, and this theme continues today in the strange and fractured world of contemporary philosophy.
Ideas have consequences. As MacIntyre shows so well in his book After Virtue in the case of “emotivism” (the position that ethical statements merely express one’s emotional preference for an action), a powerful idea that spreads like wildfire among the right academic circles can eventually stretch into the average home, even if subconsciously. A very well educated person may never have heard of G. E. Moore, but everyone from the wealthy intellectual to the homeless drunkard has encountered some shade of the emotivism Moore’s work gave rise to. The influence which both Descartes and Kant had on the academic scene in their respective eras was so vast and powerful that it is not unfair to say that Western philosophy after the 17th century was in response to Descartes, and that Western philosophy today is in response to Kant.
The reaction to Descartes’ rationalism was first empiricism, then idealism. The reactions to Kant’s special fusion of rationalism and empiricism (his “transcendental idealism”) which concern us here were logical positivism and French existentialism.
Logical positivism is basically dead in academia, although the average militant atheist has taken a cheapened form of Ayer’s positivism to bash over the head of theists, and the general inertia of positivism remains in force in a vaguer “scientism” which hangs heavy in the air.
Existentialism, on the other hand, has become a powerful force in the formation of civil law. The following lengthy quotation is from Justice Anthony Kennedy’s majority opinion given in Planned Parenthood v. Casey (my emphases):
“Our law affords constitutional protection to personal decisions relating to marriage, procreation, contraception, family relationships, child rearing, and education. Carey v. Population Services International, 431 U.S., at 685. Our cases recognize the right of the individual, married or single, to be free from unwarranted governmental intrusion into matters so fundamentally affecting a person as the decision whether to bear or beget a child. Eisenstadt v. Baird, supra, 405 U.S., at 453 (emphasis in original). Our precedents “have respected the private realm of family life which the state cannot enter.” Prince v. Massachusetts, 321 U.S. 158, 166 (1944). These matters, involving the most intimate and personal choices a person may make in a lifetime, choices central to personal dignity and autonomy, are central to the liberty protected by the Fourteenth Amendment. At the heart of liberty is the right to define one’s own concept of existence, of meaning, of the universe, and of the mystery of human life. Beliefs about these matters could not define the attributes of personhood were they formed under compulsion of the State.
“These considerations begin our analysis of the woman’s interest in terminating her pregnancy, but cannot end it, for this reason: though the abortion decision may originate within the zone of conscience and belief, it is more than a philosophic exercise. Abortion is a unique act. It is an act fraught with consequences for others: for the woman who must live with the implications of her decision; for the persons who perform and assist in the procedure; for the spouse, family, and society which must confront the knowledge that these procedures exist, procedures some deem nothing short of an act of violence against innocent human life; and, depending on one’s beliefs, for the life or potential life that is aborted. Though abortion is conduct, it does not follow that the State is entitled to proscribe it in all instances. That is because the liberty of the woman is at stake in a sense unique to the human condition, and so, unique to the law. The mother who carries a child to full term is subject to anxieties, to physical constraints, to pain that only she must bear. That these sacrifices have from the beginning of the human race been endured by woman with a pride that ennobles her in the eyes of others and gives to the infant a bond of love cannot alone be grounds for the State to insist she make the sacrifice. Her suffering is too intimate and personal for the State to insist, without more, upon its own vision of the woman’s role, however dominant that vision has been in the course of our history and our culture. The destiny of the woman must be shaped to a large extent on her own conception of her spiritual imperatives and her place in society.”
No doubt, a critical reader will observe some tragic oddities in this passage. We will table an in-depth analysis, but I do want to point out the bizarre idea that our beliefs can determine reality. One might be tempted to call this “relativism,” and there is indeed some relativism in the passage (the evaluation of the fact of whether a life or potential life is taken in abortion “depending on one’s beliefs”). Without denying this, I also assert that beyond a casual relativism, which might be more a product of a lack of reflection than a real worldview, Kennedy is a deeply committed existentialist. (Indeed, it seems that existentialism naturally disposes a person to relativism.) The thought that one’s beliefs define one’s personhood comes almost directly from Jean-Paul Sartre. The doctrine is: existence precedes essence. Essence is determined by beliefs and actions, according to the existentialist. Such an affront to traditional metaphysics would have been impossible without the aforementioned ideological lineage – Scotus, Abelard, Ockham, Descartes, Kant… Seeing Justice Kennedy through the existentialist lens also helps to account for the striking absence of respect for a human being who can’t believe or meaningfully act. After all, how can such a thing really be a person?
Today’s common philosophy of the Western liberal elite (and their spoiled millennial offspring) seems to be a chimera of these two diametrically opposed worldviews: positivism and existentialism. These ideologies have been filtered into the average home, and watered down in the process in such a way that they can appear to fit together. In this series of articles, we will thematically wind through a maze of philosophy, science, hashtag activism, and moral theology to understand the present crisis and to propose possible remedies for it.
After now having given a brief sketch of the ideological history, we begin next time with a look at the positivist roots of the so-called “New Atheism” and how an undue reverence for science has contributed to what I have termed the “New Albigensianism.”