Sometimes I forget I'm using philosophical methodologies uncommon in the US/UK, and invariably I get brought up short by bewildered (even sometimes annoyed) responses when I start analyzing a text/work. Guess that means I'm overdue for providing, hrm, not a disclaimer... so much as an explanation of my process and why it takes shape the way it does.
I rely most heavily on a style of argument called the dialectical process. It begins by setting forth a thesis, which is distilled to the point that one can then posit the antithesis. Then, by studying the tension between the two, one can arrive at a synthesis. In practical application, it ends up reading like a series of swings back and forth, from the thesis/positive to its antithesis/negative, until slowly each is modified in light of the other to reach a synthesis. That final step is not necessarily a conclusion, per se, so much as an assessment of the tension that exists between the two original points.
Yes, it's very similar to the Socratic method. No, it's not really that common among argument styles in the US/UK. Yes, it originated in philosophy, but it has been applied as a type of literary criticism; it's a malleable format. Yes, it had a huge impact on me, having been introduced to me at a point in my life when I was still pretty close to a blank slate when it came to any significant analytical skills. No, I don't really use it formally these days, but I can reread even off-the-cuff posts and see the pattern remains a strong one.
In some sense, this swinging back-and-forth is a kind of triangulation, especially when it goes hand-in-hand with deconstruction. Which, I should also explain, is not the same as the common usage -- to de-construct -- because it's not simply a matter of taking apart the whole to end up with its enumerable parts.
Common critical analysis usually does reduce to parts, such as, "boy fish meets girl fish, boy fish loses girl fish, girl fish dies going over hydraulic dam." Reductionism then determines the parts consist of boy fish, girl fish, and hydraulic dam, and by its lights thereby concludes that the 'truth' of the whole is equal to two fish and a big honking hydraulic dam.
That's not deconstructionism, though. Deconstructionism is the step beyond reductionism: it may say, for the story to work (for parts to come together as whole), we are required to assume, or take for granted, that boy fish is X, or that girl fish is Y, or that all hydraulic dams are Z. That leads to asking: what if we instead posited that boy fish is Y? What does this shift in understanding do to the parts, and in turn to the whole?
Deconstructive philosophy, or deconstructive literary criticism, is concerned with questions such as, "what assumptions are we making? what do we have to take for granted in order to accept this argument's conclusion?" It seeks the premise, the core concept or thing that underlies the myriad parts. From there, it explores the buried cultural or philosophical or literary assumptions, to question the validity of the core/premise and possibly even the conclusions within the overall whole. Where reductionism sees the parts as equal to the whole, deconstruction is more hermeneutical; its primary questions are informed by interplay between parts and whole, as it explores and overturns hidden assumptions in the discrete parts.
I can't really explain this succinctly without jargon, honestly, so bear with me.
So, hermeneutics. It's a kind of cousin to the dialectic process; it's mostly used as a method of religious criticism/analysis. (Not always, though; my favorite philosopher, Martin Heidegger, was a big advocate for broadening hermeneutical applications, one reason I take that approach myself, I 'spect.) Here's my best shot at hermeneutics in thirty words or less: it's a way of understanding the whole via an understanding of the tension between the parts and the whole.
It's also a swing back-and-forth maneuver, only this time between the whole and its parts rather than a thing and its opposite. The key to hermeneutics is the perspective that neither of these -- the whole or its parts -- can be understood in a vacuum, nor can either be understood if separated from the other. The other important detail about hermeneutics is that textual meaning is entirely contextual, that is, grasped only via grasping the text's philosophical or cultural or literary grounding. Where deconstructionism seeks to identify the underlying assumptions in the parts, hermeneutics seeks to interpret/analyze the parts in light of a greater external context, be that cultural, philosophical, literary, etc.
So, to sum up: reductionism breaks a whole into its parts and considers these equal to the whole, interchangeable as 'bunch of parts' or as 'one big whole'. Deconstructionism explores the premise inherent in the parts as means to question -- possibly even overturn -- stale or outgrown concepts that may be preventing newer/better understandings of the whole. And finally, hermeneutics looks for contextual grounding that may clarify the parts as well as explicate the tension between the parts and the whole.
In some ways, I'm not entirely comfortable with hermeneutics; when taken to an extreme, it starts to imply author intentionality, a philosophy with which I strongly disagree. But shy of that, it is a valuable and informative method of seeking potential meanings (though rarely the meaning), by placing the thing, story, philosophy, etc, within a fuller comprehension of its original context.
One might even say that hermeneutics is what's being practiced in classrooms across the US, every time students read Huckleberry Finn: unless and until the students understand the historic and cultural framework that Twain was writing in, and against, a lot of the characterizations, motivations, even dialogue, are either going to be missed or misread so utterly as to render the text meaningless (or worse, meaningful only in a strongly negative sense). So the teacher is applying a hermeneutical approach, shifting between the text as-a-whole and the historical and cultural clues in the text-as-parts that may in turn inform the whole.
If you read my posts on a semi-regular basis, you'll probably see this strong aversion to author intentionality come up again and again. The gist goes like this: if we could just talk to the author, then we'd understand what the text means. But author intentionality is a fallacy; when the author is the sole purveyor of a text's meaning, then anyone else's interpretations are, at best, only educated guesses. (And even that much only so long as the interpretation works from a position of trying to get into the author's head and guess at the intentionality.) The position outright renders all criticisms false on their face, with the sole exception of those provided by the author/creator.
I think that attitude isn't just hogwash, it's also a complete cop-out. It presumes that the value or meaning of a work exists solely within the authorial framework, and that the reader brings nothing to the table and can only take away what the author gave. Nonsense, total nonsense. A work, upon dispersal into the wild, becomes its own thing, independent of the author's intentions. It doesn't matter what Twain intended, at this point, if we're trying to understand variations in the interpretations. The best (and perhaps most pragmatic) approach now, I say, is to deconstruct the tension between the text and the reader, to find the basic assumptions that led to this interpretation -- and question those instead. Leave the author out of it, because the interaction is not between author and reader, but between text and reader. If you ask me, arguing otherwise is both misleading and unproductive.
There are good authors and bad, but I disagree that an author/creator always carries full blame for any fuzzy language or imagery, because I do believe all language is inherently fuzzy. Normally this isn't a problem. When I say "apple," I figure this means roughly the same to you as to me, give or take a few details, and that's good enough. But when you're going down to the molecular level to analyze the parts, getting it "approximately right" isn't good enough.
Plenty of sciences realize this; it's why we have jargon in every field. Come on! When I'm purchasing lumber, to say "a two-by-four" may suffice in casual conversation, but in application I must specify "select structural southern white pine, ten-foot, two-by-four." Each additional adjective is necessary to elucidate a precise meaning. I don't want my roof to fall in just as I finish the last skim coat of drywall, so of course I don't want the lumberyard misunderstanding me and selling me No. 2 Non-Dense southern white pine instead.
(Because the roof falling in? Bad. Very bad.)
Fuzzy language is very much a part of poetry, of literature, even of the mysteries in religions; it's fuzzy language that allows us to unconsciously reference cultural values without specifying them (which would alienate those without access). It can let in those who understand without locking out those who don't, because it leaves open the interplay between reader perspective and the authorial values hiding in the work. It's only when one analyzes, deconstructs, that language must -- per any science -- solidify into jargon, which is really just words with very precise definitions that don't always neatly match the looser common usage. And that jargon, admittedly, does lock people out, if they don't know the terms or understand the references.
That's just fine, if you ask me; philosophical discourse isn't meant to have -- and doesn't need -- broad appeal, any more than medical studies are written to be published in the latest AP bulletin. The audience is different. But it's also why sometimes I get frustrated that I can't just throw out the jargon and have readers understand what I mean via a shared, specific definition; this audience is not trained to have those ears. (That's a huge part of why my posts will go on and on, as I try to compensate for the schism between the terminology in my head and the fuzzier analogies I tend to use, because I don't care for writing a massive glossary with each post.)
The fallacy of author intentionality leads me to modernism, as that fallacy is a risk of the modernist perspective, which rests on a scientific or objective take on things. Modernism posits that there can be a Truth to a thing -- and I mean that capital letter very intentionally -- as an absolute Truth, in and of itself, independent of external cultural or philosophical contexts. You can probably see how modernist approaches are in total contrast to hermeneutical methodologies, because the latter posits that the tension between the parts and whole is clarified and/or grounded in a contextual understanding. In other words, modernism would say, "to understand girl fishes and hydraulic dams, we only need to look at this girl fish, and this hydraulic dam."
Postmodernism is subjective, relativistic; it would say, "other girl fishes have survived hydraulic dams," or even "other boy fishes don't lose girl fishes in the first place." It's true that at the outer extreme of postmodernism, the philosophy can become almost parody, self-reference so far gone as to become oppressively self-conscious to the point of pretentiousness. Television shows sometimes have modernist streaks in their storytelling; one example is the now-hackneyed "flashback episode," in which the now-time consists of characters discussing, and then mutually "remembering," a sequence from an earlier episode. That's referential but only towards the self (internal), thus modernism; postmodernism caps that by having characters imply or even state outright that they're aware they're characters (aka breaking the fourth wall). Both are referential, but where modernism points inward, postmodernism turns outward.
If the postmodern streak is taken to its extreme, and the characters go so far as to analyze their reality via reference (either internal/self or external/breaking the fourth wall), then we get into metanarrative, "a story about a story." A critical analysis may create a metanarrative without getting this label; the postmodernist aspect appears when this metanarrative is created or revealed within the text itself.
That's where the pretension comes in, I think: it's like someone writing their autobiography and going into depth on what their life means in the overall scope of the world-in-general. It requires you to follow along as they attempt to be both object and subject, and I just don't think that always works. (I'm not saying I'm formally Cartesian -- far from it -- but I do agree one cannot truly create a metanarrative of one's role from within the role, because we just can't get out of where-we-are to see anything as-it-is.)
Modernism seeks order; it marshals rationality out of chaos. So for literary or philosophical criticism, limiting critique to only those parts within the work itself -- devoid of context -- is (it seems to me) a way of limiting the potential fragmentation that can occur when you drag in context after context after context.
On reflection, this dismissal of (or disinterest in) contextual fragmentation is a sign that someone is thinking with a modernist cap on. That perspective is most often to blame (in my experience) for seeing textual deconstructionism as "over-analyzing" or "referencing external (and thus by definition unrelated) things." Which is not to say I'm arguing that doing so is better somehow, and neither am I criticizing in turn those folks who prefer the modernist mindset; I'm only pointing out that the two approaches are fundamentally opposite. If someone prefers one, they're just not going to enjoy being stuck over on the other side of the divide.
In modernist thought, a work can speak for itself (whether or not author-input is required); the drive to tease out every blooming possible cultural reference would be seen as a lot of sound and fury signifying nothing. I get that, I really do. But I'm not a modernist, and while it can be overwhelming to come at literary critique with a postmodernist "everything and the kitchen sink" attack, I find it far more satisfying on a personal level than limiting myself to only what's present in the work, meta-free. Postmodernism leans hard towards the absurd, and I mean that in an existential sense; it seeks the fragmentation and the chaos, and in a hermeneutical framework will often pile on more and more and more obscure cultural or philosophical contextual references.
As a last note -- because this is a perspective ingrained pretty deep in me -- existentialism is basically the position that existence precedes essence. From a religious standpoint, this does not deny a godhead nor does it posit automatic atheism, but it does question the use/value of a soul; religious existentialism takes the attitude that any 'spirit' element is shaped by the body's experiences, to the point that one might even say there was no 'soul' prior to the empirical existence.
On a more secular level, existentialism defines who-we-are as a product of what-we-have-experienced. It's really that simple. The essence of who-I-am is informed crucially, almost completely, not by some pre-determined fate but by my experiences growing up in Georgia, living in Rhode Island, owning a bookstore, and so on. Had I been exposed to different experiences, the essence of who-I-am would be so radically altered as to be a completely different person; in contrast, an essentialist believes that we are fundamentally ourselves regardless of overlaid experiences.
The connection between existentialism and postmodernism comes into play through the concept of the absurd, which isn't related to the funny except in the blackest (and if you're reading Sartre, a slightly morbid) kind of random humor. In existentialism, because our experience is what defines/shapes us, then all experience is positive (that is, positive in that it has an impact on our essence) and the lack of predetermined fate means that all possible experiences are open to us. If we choose to run off and join the circus, this is no more a valid or invalid option than if we shave our heads and join the merchant marines. Both are experiences, and experience as such is a neutral thing. The absurdity comes into play when one attempts to place a (usually moral) value on this experience over that experience.
The jargon-word is 'absurd,' but one could easily say, "that's nonsensical!" -- just as we might if someone announced that the next game move could be four steps to the left or four to the right, but that going to the right is a Bad Thing and thus any results of such a move would therefore also be Very Bad. When there is no assignable or predetermined validity to this option over that option, then to arbitrarily assign value to experience X over experience Y is ridiculous, even unwarranted.
Postmodernism's use of the concept parallels this: to designate that these sets of external references can be used to understand a text, while those sources cannot, is absurd in the same sense. All references are potentially valid and/or usable given our lack of empirical knowledge; we cannot know that an experience will lead to death on a hydraulic dam, though we may be able to guess. Existentialism's, and postmodernism's, application of the absurd does not deny the use of guessing to determine one's path -- "this looks like a better choice, given what I know of this kind of situation based on similar things in my past" -- but it does consider absurd the idea that there is ever absolute certainty.
In pure existentialism, there is no certainty, none at all. I think sometimes that's why people confuse it with Nihilism, which is a bleaker philosophy that says "if all options are equal, then it doesn't matter what I do." Or more melodramatically, "if all options are equal, I'm screwed no matter what, so there's just no point in trying."
Existentialism doesn't play that card. The lack of value judgments on the potential is no reason to avoid experience completely. Existentialism merely says that to place value judgments -- good or nihilist-bad -- is absurd. "Nothing is true; everything is permitted" is not a source of fear per Nihilism, but a source of unmitigated freedom.
Ah, after all that, I could get into the issues of teleology, epistemology, and my fascination with social constructionism... but I won't. I'll just sum up: my style of deconstructive synthesis ultimately focuses on a postmodernist ludic ontology of strong social constructionism*. I may not use these methodologies absolutely formally, but that's the gist of my analytical process.
In ordinary language I might therefore conclude: if you think a work should stand on its own without cultural metanarrative, if you think the 'truth' of a work does not require external comparison, if you dislike the chaos of exploring a story's small details... you might, in the future, just skip my posts when I get into analysis. It's okay, really. You can take the modernist road, I'll take the postmodernist road, and I'll be in Scotland before you.
* See how much easier it is to be concise when using jargon? That took me this entire post to say lingo-free.