Thursday, December 10, 2009

Kierkegaard on the Self

Greetings,

Well, it's time to talk about Kierkegaard's ontology of the self. What is a person? What is the reference of the word "I"? Once again, we are going to be referencing The Sickness Unto Death. Let's actually start with the first line in Part One.

"A human being is spirit" (XI 127). So far, so good: If [Human] Then [Spirit]. "But what is spirit? Spirit is the self". Sounds like begging the question, eh? What is the "self"?

"The self is a relation that relates itself to itself or it is the relation's relating itself to itself to itself in the relation; the self is not the relation but is the relation's relating itself to itself" (ibid).

Sounds a little more convoluted; let's see if we can make some sense of that. So, in a human, there is a relation. I'll talk in a minute about what the relation itself is, but, for now, just remember that there exists a relation X. If that relation is capable of self-awareness or apperception, and actually exercises that capacity, there is spirit and a self. Thus, the following conditions are necessary (and are probably, although not necessarily, sufficient) for a being to have a "self":

1. It must possess the relation X (which, once again, will be defined below).
2. It must actively apperceive X.

Kant's account of pure apperception is likely what Kierkegaard had in mind here. Consider, for the sake of simplicity, apperception to be self-knowledge. If ([Zach possesses relation X] AND [Zach apperceives X]), Then [Zach has a self/spirit].
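
To make the logical form explicit (my own regimentation; Kierkegaard of course uses no such notation), write R(x) for "x possesses relation X", A(x) for "x apperceives X", and S(x) for "x has a self/spirit":

    \forall x\, [\, S(x) \rightarrow (R(x) \land A(x)) \,]    (the conditions are necessary)
    \forall x\, [\, (R(x) \land A(x)) \rightarrow S(x) \,]    (the "probably sufficient" converse)

The Zach example above asserts an instance of the second formula; the numbered list asserts the first.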

What is relation X? The relation is a "synthesis of the infinite and the finite, of the temporal and the eternal, of freedom and necessity" (ibid). Consider my discussion of freedom (possibility) and necessity from my last post on Kierkegaard. Mere animals possess none of these syntheses; they exist through pure necessity, finitude, and temporality. They lack the relation necessary for spirit. Even if they apperceive themselves, they are entities with minds-- which perhaps ought to be respected-- but are not actually selves/spirits, in the technical sense.

There are all sorts of places the discussion could go from here, but I wanted to start out with a small step. That only touched on the first five-ish sentences of the work, but it's not a bad thing, necessarily, to isolate a fundamental argument and work from there. Hopefully, my post on necessity/possibility will help to dispel fears of question-begging about the nature of the relation; the other two pairs, eternal/temporal and infinite/finite, are dealt with in the book as well, and I'd be glad to post on whichever of them people are interested in.

Wednesday, December 9, 2009

Why We Shouldn't Give Christmas Gifts

Here's a very interesting interview with the author of Scroogenomics: Why You Shouldn't Buy Presents for the Holidays.

There are several good points of discussion for you to think about this Christmas season. While you hurry around the winter wonderland with all the hustle and bustle of buying bountiful gifts, roasting chestnuts on an open fire, dashing through the snow in a one-horse open sleigh, and giving all the necessary tidings of comfort and joy, consider how you're buying the love and good favor of others to stave off your need for companionship and affirmation as a good (or somewhat decent) friend or family member...
Or maybe not!

~ Merry Christmas

Friday, December 4, 2009

Kierkegaard on Necessity, Possibility, and Despair

In The Sickness Unto Death, the Danish philosopher Søren Kierkegaard argued at length about despair, the self, and "sin". He argues that the self has "the task of becoming itself in freedom," and that both "possibility and necessity are equally essential to becoming" (XI 148). If a person has one but not the other, that individual is in despair. What is meant by these words, and how do they relate?

The necessity of the self refers to what the self already is; the possibility of the self refers to the task one has of becoming oneself. Necessity serves as "the constraint in relation to possibility" (ibid). So, one ought to become oneself (possibility), but one ought not disregard who one already is (necessity). For now, I'm not going to talk about what "the self" actually is; rather, let's talk about what happens if one has an overabundance of possibility or necessity in one's life.

If humans were radically free (as the existentialists held-- a group with which I would probably not associate Kierkegaard too closely), and humans were all possibility with no necessity, "the self becomes an abstract possibility; it flounders in possibility until exhausted but neither moves from the place where it is nor arrives anywhere" (XI 149). This results in possibility seeming "greater and greater to the self; more and more it becomes possible because nothing seems actual. Eventually everything seems possible, but this is exactly the point at which the abyss swallows up the self". After a while, possibilities "follow one another in such rapid succession that it seems as if everything were possible, and this is exactly the final moment, the point at which the individual himself becomes a mirage" (ibid).

What is missing in a life lived in pure possibility, without necessity or actuality playing a vital role? It is "the power to obey, to submit to the necessity in one's life, to what may be called one's limitations. Therefore, the tragedy is not that such a self did not amount to something in the world; no, the tragedy is that he did not become aware of himself, aware that the self he is is a very definite something and thus the necessary" (ibid). Through this, one loses oneself. There are multiple manifestations of this sort of imbalance, but Kierkegaard identifies the two primary ones as desiring/craving and the melancholy-imaginary. The former involves one chasing possibilities at the expense of who he is, of his necessity. The latter involves one anxiously pursuing a single possibility at a time until he has been led so far away from himself that he is a victim of the anxiety he employed.

The second possibility, that necessity belongs to the self but possibility no longer does, has two possible instantiations: "everything has become necessary" or "everything has become trivial" (XI 152). The former option is held by determinists and fatalists, whom Kierkegaard compares with King Midas: he "starved to death because all his food was changed to gold" (ibid). He argues that, "if there is nothing but necessity, man is essentially as inarticulate as the animals" (XI 153). One can neither contribute to nor shape oneself; one is as one is, and thus one despairs.

If one has an overabundance of necessity in accordance with the second option, wherein "everything has become trivial", then one has a "philistine-bourgeois mentality" (ibid). Such a person "lacks every qualification of spirit and is completely wrapped up in probability, within which possibility [which cannot be altogether exterminated] finds its small corner" (ibid). He or she "lives within a certain trivial compendium of experiences as to how things go, what is possible, what usually happens" (ibid). If imagination does not "raise him higher than the miasma of probability", giving him hope and fear, "the philistine-bourgeois mentality thinks that it controls possibility, that it has tricked this prodigious elasticity into the trap or madhouse of probability" (XI 154).

Kierkegaard notes the consequences of each element of the imbalance: "the person who gets lost in possibility soars high with the boldness of despair; he for whom everything becomes necessity overstrains himself in life and is crushed in despair; but the philistine-bourgeois mentality spiritlessly triumphs".

The conclusion? Embrace necessity; you are who you are. You have limits. Know what makes you yourself, and know yourself fully. However, know also who you are to become (this implies a goal or end for your person), and acknowledge, through hope, faith, and fear, that you can become as you ought to become.

Monday, November 30, 2009

Sex and Plato

The November 30 issue of the New Yorker has an interesting article about the case of Caster Semenya, the phenomenal South African runner who won the 800 meter title at the 2009 World Championships in Berlin. Semenya's victory has since been overshadowed by a controversy over the runner's identity. Semenya competed in the women's race, but there are many who claim that she is not a woman, as she possesses many of the features normally associated with males. As a result, Semenya has been subjected to humiliating scrutiny and examination in an attempt to verify her sex. Add to the mix racial and class issues (Semenya is black and from one of the poorest regions of South Africa), and we have the makings of a very complicated and sensitive story.

The New Yorker article poses some questions about philosophy and the metaphysics of sex categorization as well. Dr. Alice Dreger, a bioethics professor at Northwestern University, is quoted as saying that there is no solution to the question of what the difference is between a man and a woman: "Science is making it more difficult [to solve], because it ends up showing us how much blending there is and how many nuances, and it becomes impossible to point to one thing, or even a set of things, and say that's what it means to be male." And Dr. Anne Fausto-Sterling, who teaches biology at Brown University, is reported to say "there are philosophers of science who argue that when scientists make categories in the natural world--shapes, species--they are simply making a list of things that exist: natural kinds. It's a scientist as discoverer. The phrase that people use is 'cutting nature at its joint.' There are other people, myself included, who think that, almost always, what we're doing in biology is creating categories that work pretty well for certain things that we want to do with them. But there is no joint."

The metaphor of carving at the joints comes from Plato's Phaedrus, and raises the specter of Platonic realism. On the allegedly Platonic view, biologists and other scientists are discovering categories like "male" and "female" or "mammal" and "virus." The contrasting view, often called "nominalism," says that the categories are created by humans based on their own interests and needs. A pragmatic view of this sort is defended in the American philosopher W.V. Quine's paper "Natural Kinds."

While recognizing that a brief quotation in a magazine intended for a lay audience cannot capture the nuances of Fausto-Sterling's thought, I nevertheless feel compelled to point out that the dichotomy drawn in the article oversimplifies matters. First, the existence of "problem cases" of people who are difficult to categorize by sex does not entail that the concept of maleness or femaleness is a human creation. Among the responses available to the realist are epistemicism (our inability to categorize is the result of our own cognitive limitations) and anti-dualism (there is at least a third category of "intersexual" people). Second, Fausto-Sterling seems to set up a false dichotomy. It is doubtless true that people categorize according to their needs and interests. It does not follow that the resulting categories are more like inventions than discovered features of reality.

Nevertheless, the case of Caster Semenya raises important issues about the nature of identity and scientific understanding. Philosophers of science are well placed to contribute constructively to the discussion of these issues.

Tuesday, November 17, 2009

Ontological Arguments for the Existence of God: Overview

Greetings,

With the topic of Monday's meeting turning to ontological arguments for the existence of God, I thought it would be helpful to provide a little bit of background and explain what is meant by such arguments. I'll number the things I say in case people want to address or take issue with specific parts. As always, if I got something wrong, please let me know.

What Is an Ontological Argument for God's Existence?
1. Such arguments seek to show one of two things: either that God exists or, more strongly, that God necessarily exists-- that is, that it is impossible that God does not exist (the stronger claim is more than sufficient to establish that God exists).
2. "Ontology" refers to "being" or "existence". Such arguments do not necessarily need to show that one knows God exists or believes in God; this is an "Ontological" argument, not an "Epistemological" argument.
3. Ontological arguments are, when argued correctly, deductively valid. This means that if the premises are true, then the conclusion (the existence of God) must be true as well; there is no possible way for the premises all to be true and yet the conclusion false (as the sketch below puts it). Thus, to attack an ontological argument, you must deny one of the premises. Part of the goal is to make the premises as noncontroversial as possible, which strengthens the argument.
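
In standard modal notation (my own gloss, not specific to any one ontological argument), deductive validity for premises P1, ..., Pn and conclusion C amounts to:

    \neg \Diamond\, (P_1 \land P_2 \land \cdots \land P_n \land \neg C)

That is, there is no possible world in which all the premises hold and the conclusion fails; hence the only line of attack is on a premise.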

What Are Some Historical Examples of Ontological Arguments?*
1. Dr. Papazian has actually done what I believe is some fairly novel research on Diogenes of Babylon's ontological argument. It is not typically recognized as the earliest ontological argument, and I know relatively little about it, but it certainly deserves mentioning. If Dr. Papazian wants to share a bit, I'll certainly edit this point.
2. The earliest commonly accepted ontological argument was given by St. Anselm in the Proslogion. His argument was a "reductio ad absurdum"-- that is, a contradiction can be derived from the assumption that God does not exist; thus, God exists.
3. Descartes took the next major leap forward; I believe that he put forward three ontological arguments in his Meditations on First Philosophy. It's been a while, but I'm fairly certain that he did not merely argue that it is impossible that God not exist; he actually argued for the "positive" existence of God.
4. Leibniz thought that Descartes did a good job, but failed to show that the idea of a being with all possible perfections (i.e. God) was necessarily non-contradictory. He didn't establish any new ontological arguments, but he thought that he filled in a hole in Descartes' arguments.

And, after Leibniz, things go in all sorts of different directions.

Who Attacked Ontological Arguments?
1. Hume thought that he could rule out the very possibility of an ontological argument by asserting that a priori truths are necessarily trivial/analytic. As ontological arguments depend on a priori truths, they would therefore be trivial and would tell us nothing new about reality.
2. Kant had a particularly scathing critique of ontological arguments. He asserted that existence is not a predicate-- not a "simple property," as Leibniz might have stated. I'm actually going to leave that alone for now-- to explain it would take another whole blog post. However, as far as I'm aware, his is the objection most widely accepted by philosophers as a genuine problem for ontological arguments, even if it might be surmountable.

So, there you go. Interested? Shoot me (Zach Sherwin) an email and come to the meeting on Monday evening at a local restaurant! I can send you details, if you're interested. Lastly, if you want to read my own rendition of the ontological argument, check out my post here.

Have a great day!

*Please note that I am no expert on any philosophical subject, and do not claim any sort of authoritative knowledge. I wouldn't cite this post in a paper, if I were you. I'm sure that my posts are filled with minor errors and oversights. If there's anything blatant or particularly malicious in nature, let me know and I'll do my best to resolve it. Thanks!

Monday, November 16, 2009

Hobbes in Hebrew

Interesting discussion at the New York Times website about the recent publication of the first complete translation of Thomas Hobbes' Leviathan into Hebrew.

Sunday, November 15, 2009

Who Says Philosophy is Useless?

My favorite part of newspapers is the obituaries. It is, of course, sad to read about deaths, but at the same time one learns about the accomplishments of lives that otherwise do not make the news. Such was my experience this morning when I read the obituary of Dr. Amir Pnueli, a professor of computer science at New York University. Born in 1941 in what is now Israel, Dr. Pnueli was a pioneer in the field of temporal logic, the branch of logic that studies inferences involving propositions that change their truth values over time. Dr. Pnueli was interested in solving a very practical problem: as computers have become more complex, it has also become harder to verify that the calculations their programs perform are correct. To solve this problem, Dr. Pnueli drew on the work of a twentieth-century philosopher and logician, Arthur Prior. Prior was the founder of what at the time was called tense logic, though Prior was interested in using the logic to answer philosophical questions about free will and the metaphysics of time. Dr. Pnueli realized that Prior's system could be applied to the problems facing computer scientists. He published his results first in a 1977 paper, and his work earned him the prestigious Turing Award in 1996. As Kenneth Chang, the New York Times obituary writer, notes, "chip makers now use software employing temporal logic to verify that millions of transistors are calculating as designed, and programmers use temporal logic to minimize the number of bugs in their software." So your reading of this blog ultimately depends on work that a philosopher did in the middle of the last century.
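
To give a flavor of what this looks like, here is a standard textbook example of a linear temporal logic formula (my own illustration, not one drawn from Pnueli's 1977 paper). The logic extends propositional logic with operators such as \Box ("always") and \Diamond ("eventually"), so a typical correctness property reads:

    \Box\, (\mathit{request} \rightarrow \Diamond\, \mathit{granted})

That is, at every point in a program's execution, if a request is made, it is eventually granted. Model-checking tools verify that every possible run of a program satisfies formulas of this kind.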

I extend my condolences to Dr. Pnueli's family and colleagues. Perhaps an appropriate tribute will be for me to answer the question, "What is philosophy good for?" with his name.

Friday, November 13, 2009

Logic and the Rules of the Internet

Greetings,

While there are not, of course, official rules of the internet, there exists a set of unofficial rules; not all of them originated on the internet, but many are constantly referenced. I'm not going to post the full list here-- not all of them are likely to be appropriate for a blog post-- but some of them are at least mildly philosophically interesting.

The first rule we'll look at is Danth's Law. This law states, "if you have to insist that you've won an internet argument, you've probably lost badly." In other words, if it is not obvious and noncontroversial that you have proven your point, and yet you declare that you have, the odds are good that your argument was too weak to stand on its own, and you are past the point of no return. The force of a sound argument should be evident from its premises and form; merely asserting that your argument is valid does nothing to make it so, and usually signals that it has already failed to persuade.

Certain rules are numbered, owing to their long standing in message-board subculture, such as "Rule 14": "Do not argue with trolls-- it means they win". A "troll" is one who posts intentionally inflammatory material and/or responses, often ignoring basic logical principles such as validity, coherency, and relevancy. No argument, no matter how carefully constructed, can compel assent from someone who denies the basic axioms of the logical system in which it is framed. Trolls, who often make fallacious arguments such as the Reductio ad Hitlerum (described below), are not motivated by the philosophical pursuit of truth or even of coherency; rather, they seek either to win or to provoke a reaction. Thus, do not engage in philosophical debate with one who does not act in good faith; you won't be productive, and will probably just end up frustrated.

Another rule is Godwin's Law, originally stated by Mike Godwin in 1990, which claims that, "as a Usenet discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1." The reason? Across the internet, people are less personally accountable for their statements, and thus are less likely to concede to their opponents' arguments, so universally shared premises are difficult to find. While there certainly exist individuals online who would deny that the Nazis were in fact "evil", it is one of the few relatively non-controversial premises available in an online argument. Therefore, it is likely to be invoked when there is no other common ground.

A closely related fallacy, named by Leo Strauss in the 1950s, is the Reductio ad Hitlerum: "If Hitler liked P, then P is bad, because Hitler was bad", or, "If the Nazis liked P, then P is bad, because the Nazis were bad." This actually seems to be a problem with the "is" function-- the "is of identity" versus the "is of predication". "Bachelors are unmarried men" is an example of the "is of identity"-- A is the same as B. "Nazis are bad", however, uses the "is of predication"-- B is merely a property of A. The Reductio ad Hitlerum argument states, [Nazis=Bad], [Nazis=(One who likes P)], therefore [(One who likes P)=Bad]. The arguer is mistaking the "is of predication" for the "is of identity". Some philosophical training on the difference between the two should be sufficient to show why such arguments are fallacious.
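
To see the failure formally (my own regimentation, not Strauss's), read both premises with the "is of predication", writing N(x) for "x is a Nazi", B(x) for "x is bad", and L(x, p) for "x likes P":

    \forall x\, (N(x) \rightarrow B(x))            (all Nazis are bad)
    \exists x\, (N(x) \land L(x, p))               (some Nazi liked P)
    \therefore\ \exists x\, (B(x) \land L(x, p))   (something bad liked P)

That much is valid, but the badness of P itself never follows. Only by misreading the first premise as an identity, which would license substituting "bad" for "Nazi" everywhere, does the conclusion seem to go through.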


I want to credit an excellent article in the Telegraph for compiling many of these "laws", as well as several others I did not talk about; if you're interested, it's definitely worth a read. As well, a simple search for "rules of the internet" will yield a fairly solid list, with some minor variations depending on whose list it is.

Saturday, November 7, 2009

Freedom is Not Free

Thus far this semester, we have written a couple of posts that have made a few cracks in the surface of the free will/fatalism debate. The first post was Zach Sherwin’s Ontological Argument Against Fatalism, and then a few weeks ago I presented Chrysippus’ views on Free Will and Responsibility. While we’ve been by no stretch exhaustive, of course, perhaps we’ve piqued your interest a bit with what we have covered. If so, I’d like to continue the discussion here with something that’s been rattling around in my brain for a while, so let’s proceed down a separate and all-new branch, to an aspect of the quandary that’s yet to be mentioned.

Throughout the rest of the post, I’ll refer to God – for the sake of argument and of brevity for this blog, assume I mean the traditional Judeo-Christian God. Here’s the question: Is human free will incompatible with God? If humans have complete free will, meaning that they have real freedom in choice and action all the time, then God would necessarily not interfere with human will. But does it follow that God then can’t influence free will? Even if we say he could but chooses not to, does his ability there, even as just an option, mean freedom is somehow diminished? In the ultimate and complete free will scenario, would God’s mere ability to alter, influence, or affect our decisions mean that, really, we’re only as free as long as he allows – that freedom is more dependent, more shadowy and illusory, than we think it is? Humans then have freedom only when God chooses not to exercise his ability, meaning that he could, at his choosing, decide to step in and affect human free will as soon as he wants to. He is all-powerful, so of course God has every ability to affect (and remove) free will; even if he has decided not to do so, especially with respect to human choice in salvation, he, in theory, could, right?

This poses problems for some – they would say that we then cannot have free will at all. Consider their argument: at the moment we grant that God can affect human free will, free will no longer exists. More deductively: if humans have free will – that is, if what and how humans choose or decide cannot be influenced, altered, or changed by God – then God can’t affect human free will. But God is all-powerful, so he can affect free will. Therefore, free will is incompatible with God and his omnipotence.
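
Labeling the steps (my own regimentation of the objectors’ argument), with F for “humans have free will”, C for “God can affect human free will”, and O for “God is omnipotent”:

    P1. F \rightarrow \neg C     (free will rules out God's ability to affect it)
    P2. O \rightarrow C          (omnipotence includes that ability)
    P3. O
    C1. C                        (from P2, P3, modus ponens)
    C2. \neg F                   (from P1, C1, modus tollens)

The argument is valid as it stands, so everything turns on premise P1 – which is exactly the premise challenged below.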

But is this necessarily true? Isn’t it sufficient that he doesn’t routinely affect free will, even though he can? I would argue that their claim assumes too much. Free will isn’t necessarily defined as holding only if God can’t affect it. He may have chosen not to, to preserve free will and free choice, to let human beings choose what they will, including sin and salvation. But God is still omnipotent; he could interfere, and may have, or he may not. In any case, the mere fact that he can affect free will doesn’t mean it necessarily cannot exist. Here I’m not arguing for or against free will per se, or addressing free choice in terms of salvation in the Calvinism v. Arminianism predestination realm (though this does bear worthy and interesting implications in apologetics), but I’ve attempted to show that at least the one claim above that’s put forward by some is too weak to be accepted, being flawed in its definition of free will, and that it does not conclusively or deductively prove free will’s incompatibility with God’s sovereignty.

So there you have part three of this semester’s posts on fatalism and free will. Whether this is all really for naught, you may be the judge (if you can). Perhaps this is truly a “timeless” dilemma, one flawed from before its beginning. But give us your thoughts, if you so choose.

Wednesday, November 4, 2009

Kierkegaard, Art, and the Aesthete

Greetings,

Søren Aabye Kierkegaard was a Danish philosopher and, perhaps, theologian from the 19th century. I'm going to present a view of art contained in one of his works, "Either/Or". While this view may or may not actually represent Kierkegaard's own view, it is interesting in its own right, and I believe that it can stand on its own, regardless of whether or not its author would actually endorse such a position. Note that, using the Hongs' translation, everything I refer to here occurs between pages 47 and 134. I would cite everything, but I'll be using so many references that it would plague readability; however, I can back up any specifics as requested. So, here goes.

One can refer to the form of art and the subject of art, and neither of these should be overemphasized (as is often done, he believes). Furthermore, the form can permeate the subject matter, and the subject matter can permeate the form. Aesthetically, for a work to be a "classic", the form of the work must be the same as the subject of that work. What does this mean? When we talk about a work of art, we can talk about its form (such as that of a poem) and its subject (not only the content of the poem, but what is actually communicated in the poem). As an example, Mozart's opera "Don Giovanni" is pointed to. The subject matter of "Don Giovanni" is an individual who lives as if his spirit existed in a state of pure immediacy, and this is also the form of the music-- the movement of the spirit through immediacy, as music cannot be abstracted outside of the performance or the moment it is heard/imagined/etcetera (entailing immediacy), and yet it serves as a language, which qualifies it in the realm of spirit.

Another distinction made in this work is the relationship between media (the plural of "medium") and ideas. The more abstract an idea is, the more impoverished it becomes, and the less likely it is to be repeated. One might talk of abstraction and concreteness as opposites. Keeping in mind the distinction between media and ideas: in some forms of art, such as architecture, the medium has a high degree of abstraction while the idea is highly concrete. Homer's use of a concrete idea (history) and a concrete form/medium (natural language) thus created an epic (given the coherency between the two) that could often be repeated (due to the use of a concrete medium and a concrete idea).

According to this account, sculpture, architecture, painting, and music have abstract media (with sculpture being the most abstract), whereas language is the most concrete of media. Mozart, with "Don Giovanni", managed to find a subject matter exactly as abstract as his medium, allowing him to create a classic.

One can thus speak of the "theme proper" of a medium; for an abstract medium the "theme proper" is an abstract idea, and one's work cannot be truly great-- cannot be a "classic"-- unless one's medium corresponds to one's idea in terms of abstraction/concreteness. Sculpture, the most abstract medium, would thus be inadequate for creating a truly great work about language, the most concrete of ideas.

So, what do you think? Is there merit to this account? Immediately apparent problems? There's obviously a bit more to it, but hopefully this'll work as an introductory post.

Tuesday, November 3, 2009

The Paradox of the Unexpected Hanging, Part II

Zach and Anonymous commented on my last post on the paradox of the unexpected hanging. While their thinking is interesting, I doubt that they are on the right track toward a solution. Let me present an alternative version of the paradox (also drawn from Martin Gardner's article) and challenge them to see if their attempt at a solution applies to this form.

So a man is presented with ten boxes, each one numbered and all empty except for one, which contains an egg. He is told to open each box in sequence. He is also told that he will not know before opening the box that contains the egg that it contains the egg. So once again, the same reasoning applies. The egg cannot be in box 10 because if the man opens box 9 without having discovered the egg, he will know that it is in box 10. Elimination of the other boxes proceeds as before.

Maybe this less sinister and more basic formulation of the paradox will help in their search for a solution.

Saturday, October 31, 2009

Why study Philosophy?

Hello to everyone, and for all the new visitors, welcome to Areté! If this is your first time on the site and you’re interested in entering our fabulous cash-and-prizes-for-comments contest, this is an excellent post to start out with if you wish (see the details for the contest here). Don’t worry if you feel philosophically uneducated – we value and encourage the input of all you philosophers out there! So here’s a nice, accessible introductory sort of topic for everyone.

My friend and I were enjoying a meal yesterday in Valhalla and began to discuss the inevitable stresses that come along with course-registration time. What follows is a paraphrased transcript of our conversation. When I asked about the classes she had signed up for, she made the comment that she got “stuck with a philosophy class.”
Why ever would she choose that phrase?
“Philosophy is just a big waste of time. You won’t ever really use it. I don’t know why you or anyone would want to think about that kind of stuff and just end up wasting your life,” she replied.
Ok, I responded, but if I were to ask you what wouldn’t be “wasting your life,” what kind of living you think is best, can you really be sure you are right? “The unexamined life is not worth living,” said Socrates. How can you stand living your life without really pausing to consider the best way to live it? You only have one shot in this game of life, and the risk of blowing it, or of living for a lie or in futility, is just too high to take. You agree that you want to be happy, but are you sure you know what happiness is – do you know that the kind of happy you want is really the best kind? In philosophy, your quest is to find the best way to live and the best way of being happy; instead of blindly feeling around in the dark on a path you haven’t clearly seen and whose destination you aren’t sure of, philosophy can offer some light to live by.
“But what if you’re just wasting all this time for nothing? What if you never find the truth or whatever and let your whole life slip away while you’re reading what a bunch of dead people wrote? Why should I care what Plato said? I know he’s a pretty smart guy, but what if he’s wrong? Even if I live and do things without knowing it’s the best way, at least I’m still living, while you will just waste your entire life and never come up with an answer. Or what if you find the truth and it’s depressing? What if there is no point? I’d rather just live without thinking about it too hard, and be happy.”
Well, there is that possibility that the contemplative life will not make you happy. But it’s a chance you have to take. It is better to be Socrates dissatisfied than a pig satisfied, as John Stuart Mill (roughly) put it. You may not be happy as you’d normally think of it, but that is only part of the whole truth of it. Your dissatisfaction is a better life than the carefree pig’s. Can the pig really be happy?
“Yes, pigs are happy. And they don’t think about the things we do. That’s how we should live.”
Pigs don’t think about things at all. They can’t be happy; content, probably, but not happy. Only we can ever really be happy. Let me tell you about Plato’s Allegory of the Cave. (**Click on the link to read the story I told her about.) They’re content in the cave, all their needs are provided for, but you wouldn’t say you’d want to live like that. They’re not happy and can’t be, while living by the distortive, man-made fire (representing man-made, artificial knowledge).
Once dragged out into the Sunlight (that is, divinely-created, transcendent knowledge of the whole world outside of their little cave), they can think, and see for real, and know, and live and be happy.
“Ok fine. Here’s my life philosophy: I don’t want to think all the time about everything about life. I just want to be happy, and I don’t mean just little happiness. Like, I mean, lasting happiness. Not just, I don’t know, sort of happy now, but in a more lasting way. Real happiness.”
That’s actually what Aristotle says about happiness – or for the Greeks, eudaimonia or flourishing. That’s exactly the kind of happiness I think we all want, but you have to make you live the right way to get it! So you see, even you agree with Aristotle about something about life.

**** So what are your thoughts about philosophy? A big “waste of time” or is it the only way to live? Or maybe something in between, an enjoyable diversion to talk about at the coffee shop? Does it give us the answers to questions about life, or just more unanswerable questions? Is it even possible not to philosophize? (Consider Aristotle: "If you ought to philosophize you ought to philosophize; and if you ought not to philosophize you ought to philosophize: therefore, in any case you ought to philosophize. For if philosophy exists, we certainly ought to philosophize, since it exists; and if it does not exist, in that case too we ought to inquire why philosophy does not exist – and by inquiring we philosophize; for inquiry is the cause of philosophy.")

Friday, October 30, 2009

Write Comments, Win Prizes!

Greetings,

From now on, currently enrolled Berry students are eligible for chances to win prizes-- including cash, gift cards, and more-- by commenting on posts on this very blog! Please read below for a full list of rules and regulations. You are welcome to post here or email me at zach.sherwin@vikings.berry.edu with any questions or comments.

Areté Contest Rules and Regulations

Revised as of October 30, 2009

1. The Areté Staff, which includes the Editor-in-Chief, Staff Writers, and Faculty Advisor, reserves the right to edit, change, or remove these rules at any time without prior notice. The Areté Staff also reserves the right to add rules without prior notice. These rules and regulations do not limit the scope of the rights and powers reserved by the Areté Staff except insofar as explicitly stated.

2. Although anyone is welcome to comment, only currently enrolled Berry students are eligible to win prizes.

3. In order for your comment to count as an entry the following conditions must be met. These conditions are necessary, but are not necessarily sufficient, and further conditions may apply as needed.

~~~a. Your full name, as it appears in the Berry Email System, must appear at the end of your comment. Before any prizes are awarded, we shall confirm the identity of the potential recipient via email.

~~~b. The comment must be at least 125 words in length.

~~~c. The comment must be “on topic”—that is, the topic of the comment must clearly relate, to a significant degree as determined by the Areté Staff, to the topic of the original post.

~~~d. No ad hominem attacks are allowed. Comments may contain attacks upon an argument or a position, but the actual commenter or poster may not be personally attacked.

4. Points, Entries, and Awards

~~~a. An entrant may earn exactly one “Point” per topic. No matter how many comments an entrant posts on a given topic, no more than one “Point” may be earned for that topic.

~~~b. Points shall be converted to “Entries” when they meet the requirements outlined below.

~~~~~~i. Points shall be converted to Entries on the basis of the Fibonacci Numbers, excluding “0” and the initial occurrence of “1”.

~~~~~~ii. Thus, the first Entry will be earned after one Point. The second Entry shall be earned after two Points. The third Entry shall be earned after three Points. The fourth Entry shall be earned after five Points. The fifth Entry shall be earned after eight Points. And so forth. (A short illustrative sketch follows these rules.)

~~~c. Each Entry shall be considered a chance to win a prize. Points shall be reset at the end of each prize cycle, as determined by the Areté Staff.

~~~d. Comments shall be eligible for earning Points only if posted within two weeks of the post date of the Topic, and only if the Topic was originally posted within the current “prize cycle”, as determined by the Areté Staff (typically one month).

5. The Areté Staff reserves the right to exclude any comment from being counted as a “Point”.

6. Areté Staff Members may only receive prizes if the monthly prize allocation is not completely distributed to non-Staff members.
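
For the curious, the Point-to-Entry conversion in rule 4(b) is easy to compute. Here is a minimal sketch in Python (an unofficial illustration, not part of the rules themselves):

    def entries_for(points):
        """Count Entries earned from a Point total. Thresholds are the
        Fibonacci numbers 1, 2, 3, 5, 8, 13, ... -- rule 4(b) excludes
        0 and the initial occurrence of 1."""
        entries = 0
        threshold, nxt = 1, 2
        while points >= threshold:
            entries += 1
            threshold, nxt = nxt, threshold + nxt
        return entries

    # entries_for(3) == 3 (third Entry), entries_for(8) == 5 (fifth Entry)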

Wednesday, October 28, 2009

The Paradox of the Unexpected Hanging

You have been sentenced to death by hanging. The judge who condemns you to this fate informs you that your execution will satisfy two conditions. First, it will take place some time in the early morning on one of the days of the following work week. Second, you will not know which day you will die until the executioner appears in your jail cell to lead you to the gallows. You will be surprised. There does not appear to be any reason to think that your hanging cannot take place just as the judge has ordered. But a seemingly plausible argument leads to the conclusion that the judge's conditions are unrealizable.

In my last post I celebrated the birthday of Martin Gardner. Continuing in this theme, I now recount a paradox discussed by Gardner in one of his columns (and reprinted in his book The Unexpected Hanging).

Your execution must take place no later than Friday. Suppose you make it to Thursday afternoon. You now know that you will be executed on Friday, violating the second of the judge's pronouncements. So you cannot be executed on Friday. Now suppose you make it to Wednesday afternoon. Since Friday has been eliminated, you know on Wednesday that you will be executed on Thursday, again violating the surprise condition. So Thursday is out. The same reasoning will rule out Wednesday as well. Continue this line of reasoning until you have eliminated Monday. Therefore on no day of the week can you be surprised by the hangman.
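
The prisoner's reasoning is mechanical enough to spell out in a few lines of Python. This is my own sketch of the elimination argument, not a resolution of it:

    # The prisoner's backward induction: repeatedly strike the latest
    # day not yet ruled out, since by its eve it would be the only
    # candidate left and so could not come as a surprise.
    days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
    possible = list(days)
    for day in reversed(days):
        if possible and possible[-1] == day:
            possible.pop()
    print(possible)  # [] -- "no surprise hanging is possible"

The loop empties the list, reproducing the prisoner's conclusion; the paradox, of course, is that the hangman who arrives on Wednesday morning surprises him anyway.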

As with all logical paradoxes, seemingly impeccable reasoning leads to a conclusion that is clearly at odds with reality. For having been convinced by your reasoning that you will not be executed, you are understandably surprised when the hangman arrives on Wednesday morning to carry out the orders of the judge.

So I leave it to you to resolve this paradox, and close with the words of Bertrand Russell, who urged those who think about logic "to stock the mind with as many puzzles as possible, since these serve much the same purpose as is served by experiments in physical science."

Monday, October 26, 2009

The Log Lady

On a less scholarly note, my favorite television show is Lynch and Frost's masterpiece, Twin Peaks. There are all sorts of really interesting issues raised in the show, and one enigmatic character-- the "Log Lady"-- has some particularly interesting things to say. Consider what she notes in Episode 10 (of season 2), "Coma":

"Letters are symbols. They are building blocks of words which form our languages. Languages help us communicate. Even with complicated languages used by intelligent people, misunderstanding is a common occurrence."

"We write things down sometimes - letters, words - hoping they will serve us and those with whom we wish to communicate. Letters and words, calling out for understanding."

She continues exploring this topic in Episode 11, "The Man Behind Glass": "Miscommunication sometimes leads to arguments, and arguments sometimes lead to fights. Anger is usually present in arguments and fights. Anger is an emotion, usually classified as a negative emotion. Negative emotions can cause severe problems in our environment and to the health of our body.

"Happiness, usually classified as a positive emotion, can bring good health to our body, and spread positive vibrations into our environment. Sometimes when we are ill, we are not on our best behavior. By ill, I mean any of the following: physically ill, emotionally ill, mentally ill, and/or spiritually ill."

So, let's consider her argument. I'll enclose her arguments in [brackets] rather than "quotes" because I'll paraphrase some. [Letters are symbols which are the building blocks of words]. So far so good. All words are built from letters (although not necessarily exclusively from letters). What is interesting, she notes, is that [letters are symbols]-- and we all know what the symbols signify; I have not met another English speaker who could express a different idea of the letter "a" from my own. Words are composed of letters (and other characters, which serve similar functions as symbols). However, even though words are composed purely of universally (in the context of a language) accepted characters, miscommunication occurs; this implies either that individuals really do disagree on what letters refer to, or that a word is greater than the sum of its characters.

We explicitly use [letters and words to call out for understanding], and this is their explicit purpose. However, [miscommunication is a common occurrence]. Now, she says, consider: sometimes, [miscommunication leads to arguments] (which seems reasonable to me), and [arguments sometimes lead to fights] (which also seems coherent). Additionally, [in both arguments and fights, anger-- which is usually classified as a negative emotion-- is usually present]. By "negative emotion", the "Log Lady" refers to [that which can cause severe problems in our environment and to the health of our body]. This can be understood as that which is not a "positive emotion", an emotion that [can bring good health to our body, and spread positive vibrations into our environment].

So, then, letters are symbols which, when used in words to communicate, often (miscommunication being "a common occurrence") end in miscommunication. Miscommunication tends to end in negative emotions, which [can cause severe problems in our environment and to the health of our body]. There is a definite implication here that letters themselves can actually cause negative emotions. What does she suggest as a resolution?

In Episode 15, "Lonely Souls", she argues, "Balance is the key. Balance is the key to many things. Do we understand balance? The word 'balance' has seven letters. Seven is difficult to balance, but not impossible if we are able to divide. There are, of course, the pros and cons of division."

So, then, it would seem that one can overcome the problems of miscommunication through "balance", but there is the epistemological problem of whether one actually understands it, because such an understanding requires division of the primordial references of experience. If one is willing to take a primordial element-- whether a letter in a word or an experience in a memory-- and cut it apart in order to study it, there are problems. She explores this too, in Episode 22, "Double Play": "A death mask is almost an intrusion on a beautiful memory. And yet, who could throw away the casting of a loved one? Who would not want to study it longingly, as the distant freight train blows its mournful tone?" On the one hand, if one does not seek balance in communication, one risks miscommunication, which can yield negative emotions with detrimental effects. On the other hand, if one seeks balance, one finds situations where balancing requires division, and one must decide whether analytical study-- whether of a word such as "balance" or an experience such as the loss of a loved one-- will be worth the end result.

The conclusion? As stated in the final episode, Episode 29, "Beyond Life and Death", one finds at the end of this puzzle "...an ending. Where there was once one, there are now two. Or were there always two? What is a reflection? A chance to see two? When there are chances for reflections, there can always be two - or more. Only when we are everywhere will there be just one."

When one takes a word, an experience, or a television show, one can either approach it holistically and take the chance of miscommunication, or divide that which is not naturally divided. Such a division means that, [where there was once one, there are now two]. However, [where there are chances for reflections, there exist chances for division], and only if one's approach is completely consistent in its indivisibility can one avoid absolute division. In essence: either approach a situation holistically, or be prepared to encounter a situation where "There is as much space outside the human, proportionately, as inside" (The Log Lady, Episode 9, "Arbitrary Law").

Sorry, quite a bit of talking there. Whether there is anything of significance here-- or even anything worthy of philosophical consideration-- is certainly up for debate. I think there are some really interesting issues raised, though. And you should certainly watch Twin Peaks when you have the chance. Unfortunately, due to a licensing issue, the pilot episode is only available in the newest release, the "Definitive Gold Box Edition", but I can assure you that it's worth picking up, renting, or finding online if you have the chance.

Sunday, October 25, 2009

Chrysippus on Free Will and Responsibility

While reading this morning, I came across a passage in my book that I think relates well to our discussion on fatalism earlier this semester and also to the topic for tomorrow night: free will and addiction, and the ethical responsibilities for self-harming actions. The author starts by detailing the beliefs of the Stoics concerning the free will-- or, for them, the lack of it-- possessed by human beings (except for the sage, but that’s a different topic). As the author puts it, for the Stoics, events of nature and human events are “parts of the universal causal nexus which is fate, providence or God, and so…predetermined.” Their critics attacked this viewpoint by claiming that if everything were predetermined, there would be no responsibility, no good deeds able to be praised nor bad deeds to be condemned, just as we were conjecturing about the existence of morality in a fatalistic world in one of our prior club meetings. Yet Chrysippus, one of the most eminent of the Stoics, argued for the compatibility of fatalism and responsibility in two ways. First, even if our actions are really predetermined reactions to external influences or impressions, they are still our own reactions. Chrysippus writes:

“Although it is the case that all things are constrained and bound together by fate through a certain necessary and primary principle, yet the way in which the natures of our minds themselves are subject to fate depends on their own individual quality. For if they have been fashioned through nature originally in a healthy and expedient way, they pass on all that force, which assails them from outside through fate, in a more placid and pliant manner. If, however, they are harsh and ignorant and uncultured, and if they are pressed on by little or no necessity from an impulse they hurl themselves into constant crimes and error. And that this very thing should come about in this way is a result of that natural and necessary sequence which is called fate. For it is, as it were, fated and a consequence of their type itself, that bad natures should not lack crimes and errors. It is just as if you throw a cylindrical stone across a region of ground which is sloping and steep; you were the cause and beginning of headlong fall for it, but soon it rolls headlong, not because you are now bringing that about, but because that is how its fashion and the capacity for rolling in its shape are. Just so the rule and principle and necessity of fate sets kinds and beginnings of causes in motion, but the impulses of our minds and deliberations, and our actions themselves, are governed by each person’s own will and by the natures of our minds.” (Gellius, Attic Nights 7.2.7-11 = LS 62D)

So, we may wonder: if it is our developed natures, predetermined before our birth, which cause our reactions to outside influences (our natures are the reason we send the stone rolling down the hill), and yet the reactions are ours (we do the throwing)-- can the event or its outcome still be our responsibility? If we blame what we have been predisposed to, the natures that have been given to us, but it is still we who (must) react in a certain way, how much responsibility is ours? For Monday’s talk: is an addiction, presumably something of nature, our responsibility, or only a reaction to external forces?

The second part of Chrysippus’ argument for responsibility and fatalism is that our actions (or reactions) do make a difference, even though predetermined. The author explains: “To say that certain things are fated to happen does not mean that they are fated to happen regardless of what anyone does beforehand, but rather that certain outcomes and the actions which are necessary to bring them about are ‘co-fated’ with one another.”

This seems quite the paradox, then. It may be clarified, as the author shows, if we think about an example from the Greek tragedy of Oedipus. It would be complete nonsense to say that Oedipus’ father would have had a child whether he slept with a woman or not. Yet he chose to take that risk even after being warned by the oracle that his son would kill him. He wouldn’t have chosen otherwise, but it was still his choice; it is his responsibility, even if the action (or reaction) was predetermined.

What do we think of this argument? Coherent and conclusive? Confusing and lacking?

Wednesday, October 21, 2009

An Attempt at Defining "Art"

Greetings,

On Monday, Philosophia Religioque met and discussed the philosophy of music. One relevant topic that came up was the definition of "art", and what demarcates it from other content. While I am not yet 100% convinced that my definition is necessarily right, I proposed that "art", properly understood, is intentional indirect communication. I'll start by explaining what is meant by those terms, and then get into some of the issues that can be derived from this definition.

At its core level, art is a kind of communication; in fact, I would consider art to fall under the genus of communication. Merriam-Webster states that communication is "an act or instance of transmitting". If a painting could communicate only through its visual imagery, and there existed an invisible painting (which I do believe can be understood in concept, even if it's unlikely that one will ever exist), that painting would not be art, because it would be incapable of communication. However, I believe that many things in life qualify as "communications"; thus, this is a broad element, on which I will not say too much more at the moment.

If art is a kind of communication, what kind is it? Well, I argue that art is necessarily intentional communication. What must be intentional is the communication itself. Say, for example, that I look at the computer monitor in front of me and note its subtly sloping angles, well-rounded curves, and bi-colored palette. While it is true that the monitor might communicate to me a poignant message about the nature of the human condition/experience, such a communication would not have been the intention of the monitor's manufacturer, and thus that communication would be insufficient for the monitor to be considered "art" (although I am not necessarily excluding the possibility of other communications, of course). Even bad art-- whether angsty teenage poetry or annoying pop songs-- serves as intentional communication.

However, while a communication must be intentional to be art, intention is insufficient. For example, if I tell you in a monotone voice, "go outside", that is an intentional communication, and yet it is not art (I would argue, and believe this to be non-controversial). This is because that which is communicated through art is necessarily indirect; while direct communication can exist within a work of art, that which transforms an intentional communication into art is its indirectness. In film, for example, certain movies are clearly direct intentional communication, and are thus not understood to be art, while certain movies are very intentional communications-- and yet the communication is entirely indirect, such as in Maya Deren's Meshes of the Afternoon (which you really should watch). As this example hopefully illustrates, purely direct communication is necessarily not art, whereas indirect communication can be art if it is intentional.
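
Compressed into a single line (my own formalization of the proposal, not anything from Monday's discussion):

    \mathrm{Art}(x) \leftrightarrow \mathrm{Comm}(x) \land \mathrm{Intentional}(x) \land \mathrm{Indirect}(x)

The left-to-right direction gives the necessary conditions defended above (genus: communication; differentiae: intentional, indirect); the right-to-left direction is the sufficiency claim of which I am not yet 100% convinced.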

Some interesting things result from this. First, a painting itself would not be art; rather, the communication-- the experience, perhaps, or maybe the performance-- would be the art. This coheres with my understanding of nominalist theory. Additionally, I think that early cave paintings would not be considered art, unless they were doing more than sheer direct illustration. Lastly, good analytic philosophy would necessarily not be art (if I understand correctly), because it attempts to be as direct as possible, whereas continental philosophy-- such as the works of Kierkegaard and Nietzsche-- has the potential for actually being art and philosophy at the same time, as some of its contributions are intentional, yet indirect.

Monday, October 19, 2009

Happy 95th Birthday to Martin Gardner

I would like to dedicate my blog post to someone who has had an enormous influence on my choice of career. On Wednesday Martin Gardner, the former Scientific American columnist and prolific author, turns 95. I was probably in the 6th grade when I was first exposed to Gardner's writing. At the time I had no idea what philosophy was or that there were grown-up people who actually made a living as philosophers. But I was fascinated by math even when I found it difficult, and I was especially impressed by Gardner's popular writings on math and logic. I remember picking up a book with the odd title The Unexpected Hanging at a bookstore in New York. Opening it, I discovered a collection of essays drawn from Gardner's column on recreational mathematics. The first chapter was devoted to a famous paradox involving a man condemned to hang but without knowing on which day of the week his execution would take place. Gardner wrote clearly and intelligently about what was to me a highly complex and convoluted topic. I was hooked and the path that would lead me eventually to logic began.


After that first encounter, I became a passionate fan, reading as many of Gardner's books as I could get my hands on. I had found an author who shared my interests: not only logic but science and pseudo-science, Alice in Wonderland, cryptography, computers that can learn to play games, magic, and the fourth dimension. It wasn't until much later that I learned that Gardner majored in philosophy at the University of Chicago, a student of Rudolf Carnap, one of the founders of analytic philosophy.

I was happy to see a birthday tribute to Gardner in today's New York Times. I wish him a very happy 95th and many, many more.

Saturday, October 17, 2009

Self-Interest Rightly Understood

This is a pretty contentious topic, and I will confess that the view to follow is probably not a “politically correct” opinion. Nowadays this admittedly is not the most popular view to argue for, but I hope to stir up some good debate, especially with those who disagree.

The issue here is that of the obligation to uphold certain positive rights – in particular, for this post, the right to be free from hunger. Perhaps it does not even warrant the title “right,” but giving it that higher station helps the opponent, and, in good faith, I will give them the best argument they can muster. A Google search, with quotation marks included, for the “right to be free from hunger” yields 573,000 results; similarly, the “right to food” gets 642,000 hits. By clicking on any of these links, your heart will undoubtedly be moved by the pictures of terribly skinny children and touching appeals to save the world from starvation. Resources are scarce there, but very abundant for you here. So give a small donation and do your part to help out a little. Have a heart; it’s your duty as a human being – you owe it to those less fortunate than yourself.

And they are absolutely right in this regard. Do not misunderstand me yet: starvation is bad, and giving to charity is good. I’m not advocating complete solipsistic selfishness. Where the argument fails is after all of that. In the more extreme realms, some people and organizations out to do good call for upholding the right to be free from hunger by redistributing wealth, especially in the wealthy United States. The problem is that this takes from the individual the free choice of giving however much he decides, to whomever or whatever he decides, and instead forces whoever has an excess to share the wealth with those who have a deficit. Opponents may argue that force (or bribery with tax breaks, for that matter) is needed, for otherwise most people would not logically choose to part with their earnings. How cynical a view of humanity this is indeed! I do not believe altruism to be so foreign and unnatural an idea to most people. Though there is merit to the idea that people will give more when encouraged, governmental force is not the right way, whether by our own country’s taxation or by even more remote demands placed upon the country as a whole by global organizations like the UN. (What is the right way, you may be asking? I’ll leave that for you to decide, or perhaps address it in a later post.)

The extremists like to say that all lives are equal, that I’m not any more valuable or worthy to eat and have prosperity than someone over in Africa just because I was born here. We can’t really be about justice and equality if we think otherwise, right? We’re all equally deserving of the goods that any person or nature produces; it’s a small world, and we’re all the same after all. I do not believe that all the do-gooders are really wishing for such total equality out of great desires of personal self-sacrifice or good will. Some (I’ll call them the Lip-Service Extremists) say such magnanimous things, self-deprecating and diminishing themselves by denying their extra-special worth. In their better-than-thou way, Lip-Service Extremists want to argue that, of course, I’m awesome, and you’re awesome, and everyone else in the whole world is just as awesome, aren’t we great? Because we’re all so awesome, we should not deny material things to those just as awesome as us. We’re not just accidents living between two abysses; we are capable of doing good for all humanity, and we can have a wonderful utopia where everyone is worth the same, no one will be too wealthy, and no one will starve or be bothered by thinking about pesky things like the rights to property and prosperity.

But, really, our Lip-Service Extremists want to feel important themselves. By claiming that everyone is significant, they are claiming that they too are significant. The more value they give all others, the more worth they attribute to the whole human species, and so the more value they claim for themselves. By saying that everyone else is just as worthy as they are to live in equal prosperity, they imply that they themselves are really something special. Yet extreme equality does nothing to bolster the value of humanity as a whole. Quite the opposite, in fact: all that redistributing the wealth does is work to equally devalue the individual. The irreducible, irreplaceable individual is not really unique anymore; he is no longer irreplaceable; his worth is no longer dependent on what he does or who he is.

People are not equal. You are (presumably) a better, more valuable human than, say, your average serial killer. It is simply a detrimental lie to revert to equalizing, and thus devaluing, everyone (Orwellians will attest to this). Your worth doesn’t depend on making another, or all others, equally awesome! There is, in fact, quite a contradiction in the very notion of equal awesomeness. You can still be important, caring, and unique without valuing all others as equally deserving of all things. Your worth is not contingent on anyone else, and you shouldn’t make it so – such dependence only works to devalue the distinctiveness of your individuality.

Wednesday, October 7, 2009

Zach on Value: A Justification of Ticket Scalpers

Greetings,

I'd like to present a theory on value; let me know what you think. I'll call it "Zach's Theory of Value"; someone else has most likely come up with it first, and as soon as someone points out who it was, I'll gladly note that in this post. As far as I know, the work is original, but who knows-- I might have heard it in passing one day and forgotten my source.

I argue that all value is subjective, and most value is subjectively relative. Let's stay with the simple, primary element for now: subjectively relative value. Rather than give a definitional or axiomatic argument, I'll argue by example.

Say that there are two parties: Band X and the Groupie. For now, assume that Band X directly sells the tickets; while in actuality the process is much more complex, the complexity can be accounted for by this system. For Band X, a ticket is worth, say, $30; considering their time and their investment(s), that's about the price point per ticket where it's worthwhile to them to have the concert. For the Groupie, a ticket is worth $75; they love Band X, and it's absolutely worth 10 hours of work to go hear the band. If Band X sells the ticket for $50, Band X gains $20 in subjective value (they sold a $30 ticket for $50), while the Groupie gains $25 in subjective value (they acquired a $75 ticket for $50).

Say that a ticket scalper decides to join the fray. He purchases all the tickets from Band X for $50 each; thus, Band X acquires $20 in subjective value per ticket (they sold a $30 ticket for $50). The ticket scalper then sells the tickets to Groupies, for whom the tickets are worth $75, at $60 a pop. The ticket scalper gains $10 in subjective value (they sold a ticket that they paid $50 for at a $60 price point), and the Groupies gain $15 in subjective value (they acquire a ticket that's worth $75 to them for $60).

I'm arguing that mutually beneficial voluntary free market transactions, such as the ones I described, generally result in the net creation of subjective value. The first transaction, without the ticket scalper, netted $45 of subjective value-- $25 for the Groupie and $20 for the Band. The second transaction, with the ticket scalper, netted $45 in value as well-- $20 for the Band, $10 for the Scalper, and $15 for the Groupie.
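For the programmatically inclined, here is a minimal sketch of that bookkeeping (the function names and the buyer/seller split are mine, introduced just to make the arithmetic explicit):

def seller_gain(worth_to_seller, price):
    # a seller gains by selling above what the good is worth to them
    return price - worth_to_seller

def buyer_gain(worth_to_buyer, price):
    # a buyer gains by paying below what the good is worth to them
    return worth_to_buyer - price

# Direct sale: Band X (ticket worth $30 to them) sells to the Groupie
# (ticket worth $75 to them) for $50.
direct = seller_gain(30, 50) + buyer_gain(75, 50)   # 20 + 25 = 45

# With the scalper: Band -> Scalper at $50, then Scalper -> Groupie at $60.
with_scalper = (seller_gain(30, 50)    # Band X: 20
                + seller_gain(50, 60)  # Scalper: 10
                + buyer_gain(75, 60))  # Groupie: 15

print(direct, with_scalper)  # 45 45 -- same total, differently distributed

The totals match because the middleman's gain comes out of the surplus the other parties would otherwise have captured, not out of thin air-- exactly the redistribution claim above.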

Therefore, transactions are capable of generating subjective value. Adding middlemen to the mix, such as ticket scalpers, redistributes that subjective value, but it does not actually decrease the subjective value generated.

There's all sorts of fun implications from this argument, and several places where we could go, but I'd like to start with that, and perhaps go further if there's interest in the issue. Thoughts?

Saturday, October 3, 2009

Happiness and the Heart of Darkness

I’d like to begin with the following quotation from Aristotle’s Nicomachean Ethics describing why humans need happiness: “Clearly there must be some such end since everything cannot be a means to something else since then there would be nothing for which we ultimately do anything, and everything would be pointless.” (*Note: Taken from a photocopied excerpt of Book 1 read in a former class – I could not find the translator or page number.) For Aristotle, “happiness” was more analogous to the idea of excellence, something achieved with intent, with purpose and habit – a lifelong way of living. The final four words from that quote always jump out at me – everything would be pointless if there were no happiness? Life, then, is inextricably bound to this idea of happiness, which explains the restlessness of our souls. But what happens if we never achieve this happiness? Is the mere journey towards it enough, or without the achievement, is life simply “pointless”?

There are some who argue that happiness is really an unachievable ideal, yet one zealously and tirelessly pursued anyway – this is the view of humanity illustrated in Joseph Conrad’s masterpiece, Heart of Darkness. In the novel, the narrator Marlow discovers knowledge about humanity that he would rather have kept locked away, far out of reach. In his exploration into the wild hearts of men, he discovers humanity to be, ultimately, hopelessly depraved. Without the societal constructs of law, religion, accountability, community, and guilt, men revert to their innate, bestial inclinations. The novel presents a very Hobbesian state of nature, where life is merely nasty, brutish, and short. The “savages” are completely unalterable in their bestiality; however noble the attempts to convert them, all are futile. One line, in fact, scrawled by the most “civilized” man of all, is that they must “exterminate all the brutes!” – which is ironic when Marlow later realizes that the “civilized” are every bit as brutish as the “savages” to whom the line refers; there really is no “them” versus “us.” Yet even with this being the true state of things, most people cannot handle the weight of such knowledge. Most choose to live in the comfort of artificial light, whether out of obliviousness or denial, and go on with their ultimately pointless lives.

Marlow goes on the journey to find what knowledge he hopes Kurtz can reveal to him – supernatural knowledge about the meaning of life that no one else can possess. It turns out to be very different knowledge than he expected. In the beginning of the journey, he catches glimpses of this unpleasant truth himself, but decides to submerge himself in mediocre tasks to keep his mind occupied elsewhere instead. (Compare this to why most humans fill their lives with innumerable meaningless tasks.) He ultimately discovers that there is no real light outside of the impenetrable darkness, only artificial, society-created light. Kurtz was the most devoutly religious, idealistic, moral, and promising young man, and he became the most successful agent of the ivory company; he died as the most corrupt savage imaginable. If he can fall from the “light” to his natural, innate, brutish human character, no one is safe. As Kurtz lies dying, Marlow realizes the purposelessness that can become of a wasted life, and as he returns to the “civilized” world, he sees that it is filled with hypocrites who pretend not to know better, and are satisfied. This is especially embodied in the character of the Intended, Kurtz’s fiancée, who represents the bulk of humanity living in society. She is hopeful, incessantly and wishfully romantic, and completely naïve, yet somehow content. In the end, Marlow decides to lie to her about the true nature of Kurtz, keeping her in this illusory, idyllic world and keeping her from knowing the truth; he decides it is better for people to carry on in their fictitious, bustling lives. Only a few really come to grips with the truth – the nihilistic void in their hearts. Ought he to have shared his knowledge of the true light that is, really, utter darkness, or was he right to keep the people content?

The life we live is the only one we are given, so obviously we want to make it the very best that we can. We all constantly strive to accomplish: we set plans, make goals, and see them through. There seems to be a constant unrest within our very core telling us to keep trying to improve, not to settle, not to grow too complacent. We want to be something more than what we are now, and to have more than we now have. We seem to be continually discontent. Our human condition drives this; we know our own finitude, and knowledge of death can motivate us. We are incessantly driven towards “the better.” Maybe this is a little hubristic, our thinking we can really change things and that we are somehow perfectible. Yet no matter how many things we accumulate or tasks we accomplish, there seems to be some residual feeling of incompleteness, an enduring sense of emptiness, a longing for more, and the bitter sting of individualistic isolationism. As Conrad writes, “We live, as we dream – alone;” and again I’ll ask, when happiness seems unreachable, is life just “pointless”? Maybe all of this working, all the busyness with which we fill our lives, is an attempt to divert this sense of nihilism. If I keep working and attaining, all will not be for naught – I will matter.

Wednesday, September 30, 2009

Philosophical Good Faith (Or, Boo, Hiss, Moriarty!)

As I tend to do, I’d like to make an argument that I am not necessarily ready to stand by; rather, I’ll throw it out there, and see if it sticks. I’m going to argue that “good faith” is necessary to genuine philosophical dialogue; without “good faith”, genuine philosophical dialogue cannot occur, and there is a direct correlation between the degree of good faith in such a discussion and the value of that discussion itself.

First, I’ll define some terms. I choose the word “dialogue” rather than “lecture” to indicate a method of direct communication between multiple parties, with the intent of communication among them. Let’s leave “philosophical” ambiguous, but state that the goal should be that the dialogue be productive for the individuals involved, without getting too far into what is meant by that (as the meaning of “philosophical” is another post in and of itself). “Genuine” means that the intentions of the individuals participating are explicitly and directly communicated or understood; there is no hidden meaning or purpose behind the discussion.

What, then, to make of “good faith”? I’ll introduce the concept as follows: “good faith” refers to the state in which an argument is presented. In order for a state to be considered one of “good faith”, it is necessary (although not necessarily sufficient) that the individual in said state maintain the following three properties: absolute earnestness, justified belief, and coherence between one’s argument, one’s method of communication, and one’s intention. Let’s see if I can expand on those a bit, and how examples hold up.
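Schematically (my own shorthand, not a full definition): for a state s in which an argument is presented, GoodFaith(s) → (Earnest(s) ∧ Justified(s) ∧ Coherent(s)), where the right-hand conjunction is necessary but, as stated above, not necessarily sufficient.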

By “absolute earnestness,” I mean that an individual’s argument must be communicated with conviction, and that the individual must be willing to affirm that conviction’s relation to the argument. If there are contingencies attached to the conviction, they must be communicated, or else “absolute earnestness” shall not be attained, and the individual shall not be acting in a state of good faith. As an example, assume that I am engaged in what I intend to be a genuine philosophical dialogue with Moriarty, and assume that he proposes “Atoms do not exist” as a justified belief (he might offer rational arguments for this position), and that his argument is internally and externally coherent. However, if his argument is made simply to frustrate his fellow dialoguer, rather than to promote investigation and/or edification, his argument is not made in good faith; it lacks absolute earnestness. If he were truly acting in good faith, he would work to help either himself or his colleague (or both) reach a productive or edifying philosophical end, rather than simply trying to win an argument. Moriarty, unfortunately, tends not to act in absolute earnestness; he brings in unusual and jarring arguments for the sake of confusing or perplexing his fellow philosopher, and doesn’t really intend to serve a philosophical cause with them. Boo, hiss, Moriarty!

By “justified belief”, I mean that one must argue from a standpoint of belief, and that the belief cannot be purely arbitrary. First, assume I say, “Murder is necessarily good”; if one stated that without believing it, one should not assume it as a premise for an argument. However, imagine that one stated, “Suppose that murder were necessarily good”, “What if murder is necessarily good?”, or, “Wouldn’t that entail murder being necessarily good?” Such claims are interrogative, not declarative; they are not stating beliefs, but rather using suppositions to help explore another’s (hopefully justified) belief. Justification refers to a degree of sufficiency with respect to the reasons one has for a belief. Just because Moriarty can offer some reason for “Corporations are evil, because they want profit” does not mean that said belief is sufficiently justified (boo, hiss, Moriarty!). What determines sufficiency for justification would be the topic of another post; for now, hopefully my point is clear enough. At any rate, assume that I am dialoguing with Moriarty about whether a true practitioner of Nietzsche’s philosophy would necessarily believe in the existence of God. If Moriarty argued that “Nietzsche proved that God is dead, so God necessarily once existed”, his belief (let’s assume that he actually believes it) would not be justified; even a most basic understanding of Nietzsche’s point with that statement shows that Moriarty missed it. Moriarty would not have been acting in good faith, because he was citing a vital argument (which he believed to be representative of Nietzsche’s philosophical arguments on the subject) of which he did not have even a basic understanding. Thus, his belief was not justified; he was not acting in coherence with philosophical good faith.

Lastly, good faith requires “coherence between one’s argument, one’s method of communication, and one’s intention”. Since there’s a lot of interplay here, I’ll try to be brief. Suppose that one is trying to communicate a philosophical argument, but doing so at gunpoint. There would not be coherence between the individual’s argument (which is philosophical in nature) and the method of communication (which is violent, forceful, and antithetical to the consent and understanding of the gunpointee). One would, however, be acting in good faith if one were arguing, “Give me your money”, while holding someone at gunpoint; this element of good faith would be satisfied, even though the act might be immoral. Similarly, if one intends to have a philosophically productive/edifying conversation, and yet one’s argument or method of communication is quarrelsome and belligerent, one is not acting in good faith. Moriarty might try those sorts of things, but to him we say: boo, hiss.

So, there you have it: Zach’s argument for what good faith is. I could go on longer, but it’s a long blog post already. I did not have time to actually argue why good faith is necessary, but hopefully the necessity is implicit in the arguments above. If not, it’ll make for a good follow-up post…

Friday, September 25, 2009

The Apostasy of Smerdyakov

Greetings,

Although my free-reading time has been severely limited by the standard semester busyness, I’ve nevertheless found time to continue reading Dostoevsky’s The Brothers Karamazov. Aside from having the coolest name I’ve ever read (say it aloud a few times and bask in its greatness), Dostoevsky has a pretty interesting way of structuring his character development… I’m not convinced yet that the book deserves the accolades it has received, but it’s at least keeping me reading, which means I might be swayed, eventually.

In Book Three, Chapter VII (“The Controversy”), a character named Smerdyakov argues that, if one takes the Bible to be axiomatically true, it would not be a sin for an individual to renounce his faith if faced with torture or, perhaps, even death. Even though the character is largely trying to provoke others into anger with the argument, the argument itself is interesting, and I’d like to see how well you all think it works. I’m going to spell it out in as straightforward a manner as possible. The edition I’m using to reference this is the 1976 Constance Garnett translation, revised and edited by Ralph E. Matlaw. I shall refer to the tortured individual as “I” in this argument, because that is how Smerdyakov chose to argue.

1. The instant I say to a potential tormentor, “No, I’m not a Christian, and I curse my true God”, I am immediately cursed and “cut off” from the Holy Church. (116)
2. However, one need not speak one’s apostasy; when I think it, before I have actually said it, I am already “cut off”-- “accursed”. (116)
3. At the moment I become accursed, I become exactly like a heathen, and my christening is taken off me. (117)
4. If I have ceased being a Christian, I have told no lie to the enemy when they asked whether or not I was a Christian, as I had already lost my salvation before speaking. (117)
5. If I’m no longer a Christian, then I can’t renounce Christ, for I have nothing to renounce that belongs to me. (117)
6. It is said in Scripture that, if you have faith even as a mustard seed and tell a mountain to move into the sea, it will instantly do so. (118)
7. If a torturer tells me to convert, and I tell a mountain to move and crush the tormentor, and it does not do so, my faith wasn’t even that of a mustard seed, and thus I wouldn’t have been able to get to heaven anyway. (119)
The intended result? If an individual is told to either renounce his religion or be tortured, and he proclaims that he will not be tortured because of x [x could be any saving act, such as the moving of a mountain], then x will happen if his faith is real. If his faith is not real, then he was never really saved to begin with, and thus cannot renounce his faith, because he has nothing to renounce. Thus, it is not a sin to commit apostasy (to renounce one’s faith) under the threat of torture, because one has nothing to renounce.
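Schematically, the argument works as a constructive dilemma (my own gloss of the above, with F for “my faith is genuine” and W for “I commit the sin of wrongful renunciation”):

F → ¬W (premises 6 and 7: genuine faith triggers the saving act x, so no renunciation ever happens)
¬F → ¬W (premises 1 through 5: without genuine faith, I am already cut off and have nothing to renounce)
F ∨ ¬F; therefore, ¬W

Either way, the sin of renunciation turns out to be impossible.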

Thoughts? Is his argument good, and does it prove what he thinks it proves? Is it simply a word game, or is there something there?

Monday, September 21, 2009

Humorous v. Serious Language: Can Opposites Attract?

For those of you who were able to attend Dr. Sands’ lecture last Thursday – “Lincoln’s Serious Use of Humor” – I bet you got as much joy out of it as I did. I would here like to qualify, however, what some may have mistakenly taken from his presentation and conclusions – or rather, to put it more accurately, to complement those arguments.

Political rhetoric is like no other. There exist in all of us some strangely excitable passions that can be played upon and provoked, shaking us awake from our state of dormancy and general malaise, and orators and politicians assuredly know all the best ways to do this. There are, of course, philosophical inquiries to be made about how rhetoricians ought to persuade, the responsibilities entailed by both speaker and audience member, and so on, but those are not the topics right now. Here I’d like to focus on the important Dynamic Duo of humorous and serious language, and their efficacy on the masses.

Comedy has an unquestionably universal prevalence in our society, and especially in politics. Think back to the popularity of late-night talk shows in the 2008 elections, what with Tina Fey’s impersonations of Sarah Palin, the nightly infotainment of Stephen Colbert and Jon Stewart, and all the other sensational political sources like tabloids and satirical cartoon caricatures. Just look at the ratio of comedies on television as opposed to intellectual shows (whatever those may actually be). Even the unabashedly bawdy two-thousand-year-old political jokes of Aristophanes still get chuckles. This obsession with all political things humorous may not be the very best way to inspire citizens towards betterment, and it certainly poses a severe problem to those earnestly wanting to reform society without a punch line. Do we really want our political leaders to be endlessly amusing, always eager to put a smile on our faces? Most would agree that we want people of the best sort, with genuine, upstanding moral characters, with eloquence and dignity, and with the citizens’ and the society’s best interests at heart. Not too many comedians are described in this way, for humor can tend to bring out the worst in people; yet the politician-comedian who delights the masses is able to win over the amused crowd with ease.

You may ask: if humor is so effective a tool, why am I condemning it? It is natural, after all, to like things that make us happy, and humor does that! Let’s look at Lincoln for a response to this. He was an incomparable statesman, and he cleverly wove humor, irony, and satire into his life to the advantage of his political career, and his name is not denounced for it! Yet as president, when he gave speeches and effectively spoke to all the people of the nation – and, enduringly, to us as well – the humor gave way to a more somber tone, often resembling beautiful, poetic prose rather than humorous or more pleasing language. And it is this, rather than his use of humor, that actually moved people the most, and when it counted. Rather than pursuing a crowd’s votes, Lincoln was here calling for a cathartic change in the very souls of the American public, and he knew that serious language was the correct way to go.

Those who take the time to listen to the less “exciting” speeches and texts of politics, and who study their arguments and meaning, will be far more affected, and on a much deeper level, than by any other way of speech. Audiences can be swayed by satire, humorous exaggerations, or funny anecdotes, but it is far more likely that they will enjoy it, laugh a little, and then forget it. Dramas and tragedies, after all, are not nearly so pleasurable to endure at times as comedies, for the former often aim to show us what is worst about ourselves, and what needs to change. Audiences more inclined to the comedies, however (and that could very well be argued to mean the majority of Americans today!), would also be far more prone to a paralyzed, lazy mind, and less likely to study the “boring” political arguments that deserve more than a casual glance. It must be held that an appeal to intellect, by politicians or others, calls the audience to think and live at an altogether higher level. Instead of having our orators vie with each other for the most laughs, leaders should want to inspire greatness and a betterment of the citizenry through appeals to the higher faculties of man. Dr. Sands’ (and Lincoln’s) aim was ultimately to praise humor and its effectiveness, but only when tempered by serious language as well. Humor does not get a free ride just because we like it; it must be used rightly. The best use of language must be a good mixture of both humorous and serious language, so shoot for that mean.

Friday, September 18, 2009

Mathematical Truth

As Zach notes below, the topic for the meeting on Monday is the ontology and epistemology of mathematics, and the reading is an influential paper by the Princeton philosopher of mathematics Paul Benacerraf.


Here's some background that will help in understanding Benacerraf's argument. First, the ontology of mathematics is concerned with the nature of the objects that mathematicians study. For example, what are numbers? Do they exist independent of the human mind or are they creations of the mind? The epistemology of mathematics concerns how we come to have mathematical knowledge. Benacerraf's main point is that those theories of math that give a good explanation of how mathematical statements like "2+2=4" connect with the objects they are about fail to account for how we know about them. And those theories that explain how we know math do poorly in accounting for how statements about math can be true.

One influential philosophy of math is platonism. Platonists hold that numbers are real: they are non-physical entities that exist in a separate realm. Platonism explains well why "There exists one even prime number" is true. It's because, quite literally, there does exist a prime number that is even, namely 2, just as the existence of a cat lying on a mat makes it true that "There is a cat on the mat." The problem is that it's really hard to give an explanation of how the human mind can come to know about things like numbers, which (unlike cats) cannot be perceived by our senses. The best that the platonist can do is to posit the existence of some kind of mysterious "intuition" that gives us access to the realm of numbers.

On the other hand, formalists or combinatorialists, as Benacerraf refers to them, reject platonism and argue that mathematical statements are true if they can be proved from more basic statements. To be true is simply to be provable using certain rules. Formalism explains how we know that 2+2=4 (because it can be proved from basic statements known as the axioms of Peano arithmetic, which define our concept of the natural numbers), but it requires denying that "2+2=4" is true in the same way that propositions like "A cat is on the mat" are true. In essence, if truth is about some correspondence between a statement and the world, then for the formalist there is no such correspondence between mathematical statements and the world.
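To make the formalist picture concrete, here is a toy illustration (mine, not Benacerraf's) of how "2+2=4" reduces to mechanical rule-following, using the two defining equations of addition in Peano arithmetic, a + 0 = a and a + S(b) = S(a + b):

def S(n):
    # successor: S(n) represents "n plus one", built as pure syntax
    return ('S', n)

Z = 'Z'  # zero

def add(a, b):
    # the two defining rules of Peano addition
    if b == Z:
        return a              # a + 0 = a
    return S(add(a, b[1]))    # a + S(b) = S(a + b)

two = S(S(Z))
four = S(S(S(S(Z))))
assert add(two, two) == four  # "2+2=4" holds because the rewriting derives it

On the formalist picture, that assertion's truth consists in nothing more than the derivation terminating in the right string of symbols; no separate realm of numbers is consulted.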

So we are left with a dilemma. Or maybe not. Perhaps the problem with Benacerraf's argument is that he has a deficient theory of knowledge. He assumes that knowledge involves some physical, causal relation between the object of knowledge and the knower. But is that true? That's one question among many that we can discuss on Monday.

Coming Up: Ontology and Epistemology of Mathematics!

Greetings,

The study of mathematics raises all sorts of philosophical questions, some of which we will be discussing at the next Philosophy Club meeting, Monday, September 21 at 8:00pm. It should be a lot of fun! If you need location information, send me an email, and I'll give you the relevant info.

Tuesday, September 15, 2009

Zach's Ontology of Truth

Greetings once again, everyone,

Last night's meeting on the philosophical implications of the governmental censorship of obscenity was evocative, edgy, and yet hopefully still both fun and educational.

After the meeting, a few of us hung out and discussed something I had been pondering for some time, but had not actually written out until earlier that day: my proposed ontology of truth. For those who don't know, "ontology" refers to the science or study of the nature of existence. This is not an ontological argument for truth; rather, it is an attempt at classifying what I believe are actual kinds of truths.




Let the predicate "P" refer to "...Coheres with...", so that "Pxy" refers to "x coheres with y".
Let the predicate "S" refer to "...Is a statement", so that "Sx" refers to "x is a statement".
For "Objective Truths", let "T" refer to the one-place predicate, "...Is true", so that "Tx" refers to "x is true".
For "Subjective Truths", let "T" refer to the two-place predicate, "...Is true for...", so that "Txy" refers to "x is true for y".
In these propositions, "x" refers to statements that individuals make, and this test seeks to show whether a statement "x" is.
In these propositions, "y" can be interpreted different ways, depending on your approach to various epistemological issues. I personally find it easiest to think of "y" as a paradigm, as Kuhn considered it. If you have issues with Kuhn's summation of paradigms, think of "y" as a worldview or a summation of perceptions of sorts.
In these propositions, "z" refers to a a subject. "Txz" would thus mean that "x is true for z".


That should hopefully do it. If anyone has a hard time reading the image or interpreting the notation, let me know, and I'm happy to help.

At any rate, here's the gist. Note that the examples I provide are not meant to be insightful or provocative so much as noncontroversial. The difficult questions can come later.
For x to be an Objectively Absolute truth, it must correspond to all y paradigms that correspond with reality. If there exists a y paradigm that corresponds with reality, but x does not correspond with this y paradigm, x is not an Objectively Absolute truth. Most (if not all) mathematical axioms, such as that the successor of zero does not equal zero, would fall under this category.

For x to be an Objectively Relative truth, it must correspond to at least one y paradigm that corresponds with reality. If there exists a y paradigm that corresponds with reality, but x does not correspond with this y paradigm, this is not a problem, because this truth is relative. There are possible worlds, perhaps, where paradigm y does not correspond with reality; nevertheless, y corresponds to some reality, so x is true, at least in an Objectively Relative sense. As an example of an Objectively Relative truth, consider the statement, "the universe is constantly expanding".

For x to be a Subjectively Absolute truth, it must correspond to all y paradigms that correspond to reality, and there must exist a subject such that x is true for that subject. These truths require a subject in order to be true. For example, consider, "I ought not do that which is wrong". Such a statement requires the existence of a subject, an "I", in order for it to be possibly true. If there does not exist a z such that x is true for z, then x is not true at all.

For x to be a Subjectively Relative truth, it must correspond to at least one y paradigm that corresponds with reality, and there must exist a subject such that x is true for that subject. As an example of a Subjectively Relative truth, consider, "Ice cream is my favorite cold dessert". This statement corresponds with a y paradigm-- my current one-- that also corresponds with reality. However, at a future point, that y might no longer correspond with reality; I might pick a different cold dessert as my favorite. Thus, truths under this category are Subjectively Relative, as opposed to Subjectively Absolute.
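For the notationally inclined, here is one way to compress the four categories into the predicates defined above (a reconstruction on my part, not necessarily the formulas from the image; I write "r" as a name for reality, which was not given its own symbol above, and I read "coheres" and "corresponds" interchangeably):

Objectively Absolute: Sx ∧ ∀y(Pyr → Pxy)
Objectively Relative: Sx ∧ ∃y(Pyr ∧ Pxy)
Subjectively Absolute: Sx ∧ ∀y(Pyr → Pxy) ∧ ∃z(Txz)
Subjectively Relative: Sx ∧ ∃y(Pyr ∧ Pxy) ∧ ∃z(Txz)

(One might also want to add ∃y(Pyr) to the two universal cases, to rule out vacuous truth.)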

Thoughts/comments/suggestions? Criticisms? Applause? Disgust? Hunger for ice cream? Thanks for your comments!


Friday, September 11, 2009

Obscenity as Abstraction

In Jacobellis v. Ohio, US Supreme Court Justice Potter Stewart admitted that he could not concretely define "obscenity", stating, "I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description [obscenity]; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it..."

[Source: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=us&vol=378&invol=184]

I'm going to argue that this is, in fact, a wise move, although I shall warn you up front that my argument is weak and not the result of much in-depth research. Consider: if I stated that an item is necessarily obscene if and only if it has Property P-- or, alternatively, that an item is necessarily not obscene if and only if it lacks Property P-- it would be likely that, given the rapid development of communication technology, ways could be found to avoid a work's clear possession of Property P. To concretely define what Property P is would be to concretely define obscenity; however, while examples of obscenity might be concrete, new forms might arise, and Property P might prove insufficient to capture a future instance of obscenity. In fact, I believe that an understanding of obscenity necessarily requires a context-- it is subjective, and obscenity cannot be sufficiently understood outside the context in which it is portrayed-- thus entailing that obscenity be understood subjectively rather than objectively.

One might object that this makes obscenity relative; after all, two individuals might disagree on whether a work is obscene. However, while contradiction is necessarily a valid attack on arguments in the sphere of objectivity, I argue that such is not the case in the sphere of subjectivity. If I said, "Jeremy Soule's orchestration of Terra is the most beautiful", I might be subjectively right; you might favor Nobuo Uematsu's original, and for you it would be a lie to claim any other work as the greatest. Such beliefs are not relative-- they are held to absolute standards; the standards, however, are internal to the individual-- subjective-- as opposed to corresponding with an external law or principle-- objective. Obscenity is similar.

Thoughts?

Tuesday, September 8, 2009

Galileo, why’d you have to do it?

How heliocentrism radically “rocked the world” as we knew it...

Imagine yourself living when the Earth was the center of it all. It is a time before telescopes and the precision demanded by empirical science, a time when explanation was ruled by theory and philosophy, and when we truly believed in a rising and setting sun. Our senses, after all, confirm a geocentric design – we can see the stars moving about us, and it certainly doesn’t feel as if the Earth is moving. For those who desired more than just senses and observation, we have Ptolemy’s complex system of circles, which explains the motion of the planets and stars while still beautifully keeping us in the middle of everything – and he did so using not just theory, but the meticulous, objective disciplines of math, physics, and astronomy.

How comforting it is to think that the Resplendent Heavens circle about and surround my planet! What purpose and significance it gives for humanity and my individual existence, too! We see that, from that perspective, the movements and changes in the heavens would be scrutinized in a way that is most alien to us. For them, the planets and stars directly concern themselves with humanity; perhaps they are even the forces or gods which impact my life – studying the movements of the cosmos could be of life-or-death significance, showing the favor or displeasure of the Fates. I may see the stars and planets as heavenly bodies, or as mysterious beacons of ethereal light, constant and glorious. Even if I don’t think they have a supernatural power over my life, and it’s all I can do to stare and say, “How I wonder what you are…”, I know that they are there, every night, moving about my planet far away in a celestial sphere. The stars are something permanent and enduring even if I’m cursed to a temporal existence; at least there is something that will last. Maybe they are simply there to be admired, here for no other purpose than to exist and to be something beautiful to look up at; perhaps they are just here for pleasure – humanity’s pleasure and my pleasure. Doesn’t that make me feel important!

Then enters Copernicus to knock the first hole in this view, and Galileo with his telescope after him to perfect the heliocentric conjectures, and eureka! Goodbye to the Earth-centered universe. Did the cosmos suddenly become less or more knowable? The inscrutable universe can now be explained scientifically rather than philosophically. It is perhaps more comforting for some to have a more rational universe, one not just about superstitions or myths. For others, however, like the hypothetical person we imagined ourselves to be at the outset of this post, it destroyed long-held, comforting assurances, leaving in its wake a more mechanized, huge, impersonal universe, one that neither needs nor notices the human race. And so too the deistic view of God was popularized. Now we realize that Earth’s place in the cosmos is rather irrelevant. As we knocked it from its throne in the middle of all, so too was the value of the human race demoted; so ended the Ancient and Romantic ages of thought, and our infatuated love affair with ourselves; there is no more “sunny” picture of individual worth. Galileo brought to some a caustic, sobering explanation of a universe that is now vast and dark, and empty and scary. The stars do not “look down upon us”; they do not care in any singular way about humanity, and they certainly are not meant to be wished upon – a truly “stellar” poem with this theme is Robert Frost’s “Choose Something Like a Star.”

Isn’t it a funny coincidence that “geocentrism” can easily be misspelled to get “egocentrism”?