
There was a man who had a face that looked a lot like me

The title of this blog post is courtesy of the song “Exploder” by Audioslave, just in case you were interested.

 

Sorry to disappoint, but this post isn’t part of my YouTube curation project; it’s related, but it’s also going to have some rambling.  First of all, I take issue with some of the arguments & thoughts Burgess & Green have in their book.  If I remember right, on page 19, they make the claim that academia (by which, I assume, they mean the ENTIRETY of the academy) is somehow complicit in the continuation of cyberbullying, which can be perpetuated through YouTube.  What I really take issue with is the fact that they don’t explain their argument.  Something that has been instilled in me since high school (and heavily reinforced since then) is an aversion to factoids or, as I prefer to call them, soundbytes.  Basically, a factoid is a claim that is made but not backed up.  This is exactly what Burgess & Green do: they make this HUGE claim and don’t back it up; instead, they just continue on about YouTube.  So, really, how is academia complicit in the continuation of cyberbullying?  The internet is a dangerous place (I’m looking at a certain website-that-shall-not-be-named); it’s basically a battlefield with no rules of war, no Geneva Convention and no real oversight.  Is the academy supposed to step in, be that regulatory body and establish proper etiquette in combat?  Assuming academia did that, I could make the claim that the academy is infringing upon my right to free speech/text in a public place/sphere.  The internet is unregulated as it stands; to introduce regulation would be moving towards something out of George Orwell’s 1984.  So, regulation is out; what can the academy do?  Cyberbullying isn’t even under the purview of the traditional academy; that is something for the government and the social sciences to try and deal with.

That’s but one issue I took with Burgess & Green.  Another assertion of theirs that I can turn on its head is that youth are exposed to a lot of dangerous things on YouTube, like Nazi propaganda.  That’s true, yes, but there’s also the flipside of that: the parodying of those dangerous things.  Here’s an example:

[Embedded video: a Downfall “Hitler reacts” parody]

There are countless other videos like this of Hitler reacting to other things, like losing a Titan in EVE Online.  The point is that while there certainly are dangers on YouTube of seeing explicit content or watching questionable material, there’s always going to be a positive aspect to it as well.  That’s something I noticed Burgess & Green didn’t focus on too much: they failed to note the positive aspects of YouTube (apart from the obvious, of course) alongside the ways it can be construed negatively.

I suppose this came off more as a rant than anything, but I did have some thoughts I wanted to air regarding YouTube and Burgess & Green.  Next post should have the YouTube project.


Burnin’ that Gasoline…to stay warm

All right, so.  Primarily, I want to focus on the film Elephant and what Bassett had to say regarding it, particularly regarding the film being interactive.  Also, I’m still holding true to my tradition of titling my posts after Soundgarden or Audioslave songs.  Sorry.

Anyway, first some initial thoughts on Elephant.  As far as I can recall, I’ve never seen any movies by this director, so I’m going into this film with no bias and no previous knowledge except that it’s supposed to have a Columbine-esque plot.  I took some issue with the narrative structure (or lack thereof), primarily because the film shifts focus among several different characters without accounting for any sort of passage of time.  Sure, certain characters cross paths in various ways, overlapping in that respect; but, otherwise, the film really has no sense of time about it.  I found that jarring, especially if there’s a story to tell.  Call me old-fashioned, but if I watch/read/play something, I want it to have a cohesive narrative so that I can understand what’s going on.  To use Memento as an example, I could understand the progression (or regression) of the story just fine because events were made relevant by their context.  With Elephant, we don’t really get that.  To my knowledge, apart from simply being somewhat disgruntled, the two boys that go on their rampage are doing this just because.  There’s no real motive that I can tell or discern from the narrative that is presented to us.  The only clue we’re given comes from the blonde attacker (I don’t really remember any of the characters’ names) when he tells the principal not to mess with kids like him anymore; this is precious little to go on.  Was the boys’ watching of the Nazi documentary supposed to be some sort of clue?  If so, it wasn’t elaborated or expanded upon.  It felt like, when it came to these boys, we started in medias res.  While that trope can certainly be effective, context is usually given for it; I’m thinking of Homer’s The Iliad as an example.

So, now that I’ve gotten that out of the way: I don’t exactly buy into Bassett’s assertion that Elephant is interactive.  If there’s no way for the audience to do anything with the movie apart from try to establish a cohesive timeline, how is it still interactive?  If anything, Bassett is trying to say, in my opinion, that we’re complicit in the actions of the characters: it is because our society is the way it is (in terms of violence, apathy, etc.) that this incident was caused.  I don’t buy into this assertion either.  To think back to the Butterfly Effect from chaos theory (sensitive dependence on initial conditions), each person is going to respond differently because each person has their own experience to draw upon, which shapes their judgements.  Sure, societal impact is part of it, but that doesn’t necessarily mean that because war is everywhere and I play violent video games, I’m going to buy an assault rifle and shoot up my school.  It depends on the person; their initial conditions will have an effect on the outcome.
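To make “sensitive dependence on initial conditions” a bit more concrete, here’s a tiny sketch I threw together (my own toy example using the logistic map, a standard chaotic system from chaos theory; it’s not from Bassett): two starting points that differ by one part in a million end up nowhere near each other after a few dozen steps.

```python
# A toy demo of the Butterfly Effect: the logistic map x -> r*x*(1-x)
# is chaotic at r = 4, so tiny differences in the starting value blow
# up exponentially as the map is iterated.

def logistic_map(x0, r=4.0, steps=30):
    """Return the trajectory of the logistic map starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two "people" whose initial conditions differ by just 0.000001...
a = logistic_map(0.400000)
b = logistic_map(0.400001)

# ...drift completely apart within roughly twenty iterations.
for step in (0, 10, 20, 30):
    print(step, round(abs(a[step] - b[step]), 6))
```

Same rule, nearly identical starts, wildly different outcomes; that’s why I don’t think you can pin an incident like the one in Elephant on society as a single cause.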

That’s about all I have for now.  So, in the meantime, to try and explain the Butterfly Effect a bit more, I’ll leave you guys with this link, allowing the game EVE Online to describe it better than I can: http://www.youtube.com/watch?v=08hmqyejCYU

Standing in the Last Remaining Light

So, as of this writing, I have yet to see “Memento.”  However, I did read the articles, so I at least have that going for me.  Mainly, I kinda want to focus on the whole mind-game shtick that the first article discusses (I think the author was Elessar or something; if you get that reference there, good for you).

I find the mind-game concept incredibly fascinating, especially when the author mentioned the Butterfly Effect from Chaos Theory (“sensitive dependence on initial conditions” is how it goes, I think).  Anyway, enough geeking out.  I found it interesting that mind-game movies have become fairly prevalent in modern cinema; I’ve been familiar with films like “The Village,” “The Sixth Sense,” and “The Matrix” for quite a while, but never really gave much consideration to their mind-game effects, particularly the deception that goes on in these movies, with characters not knowing what’s real/imaginary, true/false or, simply, what’s going on.  I made some connections to other examples from my own experience of gaming, mainly thinking of the game “L.A. Noire,” which is a crime/mystery game set in late-1940s Los Angeles.  Primarily, I was thinking of the cases Cole Phelps (the player) and his partner, Rusty Galloway (the A.I. partner), encounter when working the Murder desk.  All of the cases seem fairly straightforward, but there’s always something missing that Phelps & Galloway just can’t put together; there’s always a piece of evidence missing or a clue that doesn’t quite fit with what the suspects have going on.  This all comes to a head in the final case on the Murder desk, where the real killer from all these cases reveals himself and shows just how much he was playing Phelps & Galloway.  The game deceives the player into thinking that he/she was right all along in his/her deductions from the previous cases, only to show that he/she was wrong.

But, enough about “L.A. Noire.”  I found that games & movies aren’t the only places where the mind-game can really be found.  If I remember right, one part of the mind-game is that it rewards multiple viewings or, in the example I’m about to use, readings.  Books can also have the mind-game.  I’m thinking mainly of Stephen King’s “The Dark Tower” book series and, in particular, the Coda that’s found in book 7.  Now, I’m not gonna explain the whole series in great detail (the fewest words I can use are: “Ka is a wheel.”), but after reading the Coda, the book almost encourages a second reading of the whole series.  Not to mention the Coda really throws the reader for a loop; it’s simply fantastic how King wrote it.

Anyway, I apologize if this post isn’t up to snuff with what I’ve usually been churning out.  As evidenced by my last tweet, I just got home after a rough few days; my mind is incredibly taxed right now.  Hopefully, with a little bit of rest, I may come back and edit this.  So, stay tuned for another exciting post from me; same Bat-time, same Bat-channel!

Chillin’ like a Shadow on the Sun

Okay, I’ve been promising this in my tweets, so now I have the joy of explaining Powers’ use & understanding of the Platonic Forms in his novel, not to mention throwing some more philosophical nonsense out there…like how Aristotle argues we learn something.

First and foremost, I really REALLY want to apologize in advance for the philosophical & metaphysical nature of this post.  It’s gonna get a little dicey and I don’t expect every/anyone to fully grasp it.  All right, so here we go with Plato.  In one of the last dialogues of Socrates (the Phaedo, if I remember right), Socrates is chatting with some of his disciples before he downs a nice glass of hemlock, talking about death and the afterlife.  It’s in this dialogue that Plato, through Socrates, explains how people have knowledge of what he calls the Forms, where a Form is basically the essence of something.  I’m gonna try and explain what Plato/Socrates means by that.  My Philosophy professor, Dr. Edelman, always liked to use the example of a chair to demonstrate the Forms, so that’s the example I’m going to use.  Now, let’s say in any given room, you have a nice solid oak rocking chair.  Socrates would say that because it fulfills all the characteristics of the Form of the thing we call a “chair,” we can call it a chair.  As nice as that chair is, it’s not the chair (i.e. the Form chair); it’s just an example of it.

Crazy, right?  Well, it gets better.  Socrates argues that, when we die, our souls go into the ether, the metaphysical world.  It’s in this metaphysical world that our souls gain an understanding of the Forms; we then transfer this knowledge into our bodies when we’re reborn.  Here’s the problem: when our souls re-enter a body, it forgets all that.  For Socrates, learning is basically recollection: all we’re doing is recalling that knowledge from the depths of our souls.  Also, these Forms?  They’re infallible, eternal and static: they exist outside of time and are never changing, nor can they be wrong.  If they were wrong, we’d not know what we’re talking about, according to Socrates.

Isn’t Philosophy fun?  Here’s the best part: I’m not even done yet.  That’s just Plato; now we’re goin’ to Aristotle, for whom I wrote a cheat sheet.  Aristotle did away with most of that metaphysical crap, going for a more practical approach in his Posterior Analytics.  Aristotle replaced Plato’s Forms with his own Universals, which exist not in some metaphysical realm where our souls gain understanding of them, but in our own intellect.  As a side note, Aristotle didn’t go with the idea that our souls exist prior to our bodies; to him, that’s just silly (and he even says so).  Anyway, Aristotle argues we gain understanding of those Universals through abstraction, that is, through our senses.  The progression goes kinda like this:

Sense experience -> imagination -> phantasm ?->? knowledge

That last arrow represents a mystery to Aristotle, since he can’t explain what happens there.  To sum it up, we experience something through the senses, where it becomes an impression; that impression goes into our imagination, where it eventually becomes a knowable fact.  Weird, I know, but it makes sense for Aristotle.  So, with that knowledge of the Universals, Aristotle says that we can learn through scientia, or demonstrative knowledge.  For Ari, we do scientia through syllogisms where the premises have to be: true, immediate, primary, prior to and better known than the conclusion.  By “primary,” Aristotle means they’re indemonstrable.  Wait, what?  So, how can they be true if we can’t demonstrate their truth?  Well, Aristotle is ready for that: if you try and prove those premises through syllogistic reasoning, you’re gonna have an infinite regress of premises, meaning that you’re gonna be arguing forever.  Since we have an understanding of those Universals, we don’t need to prove the truth of the premises; those Universals can’t be wrong.  And while we’re at it, scientia can’t be wrong, either; they’re both infallible.  If they were fallible, we wouldn’t know anything; we wouldn’t be able to argue anything since we’d have no knowledge.

Okay, no more philosophical crap.  I’m tired of it.  (As of my writing this, I started to laugh at how ridiculous all this stuff sounds.)  So, how does this relate to Powers & Galatea 2.2?  Excellent question!  Let’s consider Helen in light of that philosophical mess above this paragraph: Helen is learning.  That we already knew, but how exactly is that going on?  It’s quite simple, really: through sense impressions.  Helen listens to Rick and thinks about what he’s reading to her.  The imagination & phantasm phases of that brief flow chart I made are substituted with her neural network, where those impressions eventually become knowledge.  Not to mention, Powers, I think, has some background in Platonic thinking: on page 196, we’re given this statement: “Yes, yes: we know what the thing is like.  But what is it?”  Forms.  We can only describe their attributes and what they’re supposed to do, but I can’t show you the Form of chair.  And page 236: “She remembered, even things that she had never lived.”  Helen’s digital soul existed before Implementation H came into being, thus being able to recall things she never knew.  Plato, that sneaky bastard, he’s everywhere.
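Since I apparently can’t resist one more bit of geeking out: here’s a toy sketch of how repeated “sense impressions” can become something like knowledge in even the simplest neural network.  This is entirely my own invention (nothing from Powers’ novel, and Helen’s net is vastly more sophisticated): a single perceptron that is shown the logical AND over and over until its weights encode the rule.

```python
# A toy illustration of "impressions becoming knowledge": a single
# perceptron sees the same examples repeatedly and nudges its weights
# after each mistake, until the rule (logical AND) is baked in.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn two input weights and a bias via the perceptron rule."""
    w = [0.0, 0.0, 0.0]  # [weight1, weight2, bias]
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            err = target - out          # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1       # adjust only after mistakes
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

# The "impressions": repeated exposure to examples of AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(data)

# The "knowledge": the trained neuron now classifies every case correctly.
for (x1, x2), target in data:
    out = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
    print((x1, x2), "->", out)
```

The neuron never stores the rule “AND” anywhere explicitly; it just accumulates adjustments from experience, which is loosely the flavor of what Lentz’s neural network does with Rick’s readings.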

And seriously?  I’m done.  No more for now.  I think, after all the reading I did today and typing all of this, my brain is temporarily fried.  Again, I really want to apologize for the metaphysical and philosophical nature of this blog post.  To quote Marty McFly from Back to the Future: “This is heavy.”

Alive in the Superunknown

Unfortunately, I’m continuing my trend of having musically-influenced titles for my blog posts.  Maybe it’s a clue that I need to listen to something else for a bit…

Anyway, I wanted to get my blog post up for Monday so everyone/no one has time to read it before Sunday.  As of my writing this, I have yet to read Ryan’s & McGann’s articles for Monday’s class.  However, while it’s still fresh in my sieve of a memory, I did want to touch upon Galatea 2.2.  If you guys haven’t noticed, I’ve been tweeting about my thoughts on the book, particularly in regards to the creation of a virtual intelligence.  I think, after reading the first “half” of it, it’s an incredibly fascinating book.  Creating a virtual/artificial intelligence is typically the fare of sci-fi novels, TV shows, movies and video games (to name a couple: Star Trek, Mass Effect, etc.) and usually carries some hefty consequences.  While I can’t foresee what will happen in the book, I hope that Helen (Imp H) doesn’t turn out to be like the geth in the video game Mass Effect: robots created as an experiment with a degree of virtual intelligence until they developed autonomy and revolted against their creators.

I’ve noticed that, in the first half of the book, one of the problems Marcel & Engineer (I’m just gonna call Rick & Lentz by their nicknames) encounter is figuring out how their creation learns something.  This is something that I actually have some experience with from a Philosophy class I took as an undergrad.  The primary goal of that class was to examine what different philosophers thought regarding how we learn something.  Plato and Aristotle both have their own ideas, both of which I’m familiar with and am willing to throw out there…but not here.  I have other things I wanted to discuss.  But it’s still a fundamental issue that they run into: does their creation “learn,” and if so, how does it do so?  All we really have to go on right now is their vast neural network.

Anyway, enough philosophical thinking for me.  It’ll just hurt my brain more than it already does right now.  For the last bit of my post, I did kinda want to talk about tweeting.  I don’t remember if I mentioned it in my first blog post or on Twitter, but I never envisioned myself ever tweeting; in fact, I abhorred the idea of ever using it.  I’ve always been hesitant to use social media; I was practically forced to join Facebook back in 2005.  But, I do have to admit: I enjoy tweeting.  I don’t care for the 140-character limit simply because it doesn’t really jibe with my way of thinking and speaking.  I like to think that I speak very eloquently (though I certainly bungle my command of the English language at times); as such, I usually say a lot.  So, having a constriction on how much I can say or type doesn’t work very well for me.  However, just like the Roman army did before Augustus Caesar limited the size of the army & empire, I will adapt and I will prevail.

Ha!  I mentioned Star Trek, Mass Effect, Greek philosophy, Twitter and Roman history all in one blog post!  I don’t know if I should be proud of that or disturbed.  Anyway, until next time.

Musings of an Audioslave while in the Soundgarden

First of all, I want to apologize for the title of this post.  I’ve been listening to Soundgarden & Audioslave a lot recently (especially Audioslave) and…well…it kinda has had an influence on my thought pattern.  Not to mention I was listening to some Soundgarden while writing this.  Anyway!  I digress.

After thinking about it while I was at work today, I realized something: I don’t really know exactly what “digital narrative” is.  So, really, this post is my attempt to try & discern what I can about digital narrative, performance, the advent of technology & social media in the realm of academia and writerly/scholarly identity.

First of all, I want to tackle digital narrative, since it’s sort-of-maybe-only-a-little important to our class this semester.  The smart-ass in me says, “Well, digital narrative is just a narrative or story that’s told using a digital format.”  Okay, great.  But why is that special?  More than likely, digital narrative is special because of the explosion of computers, advancements in technology and the birth of social media (not to mention the lolinterwebs); digital narrative is something that is, in my mind, experimental.  Writing and narratives have entered a new frontier outside the musty tomes, trade paperbacks, hardcovers, and other paperbacks that I’m used to reading and studying.  Writing and the creation of narratives are no longer restricted to academia or people with an M.F.A. in Creative Writing; anyone can create a blog and write a story or create a narrative (be it personal or fictional) and expand on that.  That’s one of the joys of technology in our digital age, and it’s one that I haven’t really embraced.  In that sense, I’m a little old-fashioned.  I still prefer hand-writing some things over using digital formats.

But, my tastes are beside the point.  So, I think I’ve satisfied (for now) my curiosity about digital narrative.  Now, how has technology changed the field of writing, performance and writerly/scholarly identity?  Well, I touched on that a little in my last paragraph: anyone who wants to can start writing something.  Performance is sort-of the same way.  If I wanted to, I could get some friends of mine together and re-enact a scene from Akira Kurosawa’s film Throne of Blood using the accents & imitations of Clint Eastwood, Sean Connery, and Christian Bale.  All we’d need is a video camera, and we could upload the result onto YouTube once we were done.  Just like with writers, anyone can become an actor now and perform in some way or another, be it professional or amateur.

What about scholarly and writerly identity?  This is a topic that I can really only take a shot in the dark at.  I haven’t quite figured out what’s going on here, mainly because I’ve never really thought about it.  In addition to not thinking about it, I’ve never had any experience with the identity of a writer/scholar in the digital age.  It could get problematic, because the internet isn’t always a friendly place and someone could simply write something and try & pass it off as belonging to someone else.  As an example, I could write a foreword to “The Dark Tower VIII: Roland Re-Conquers Gilead” (which, to any “Dark Tower” fans that might read this, is utter bullshit) and try & pass it off as Stephen King’s.  Basically, my thought boils down to this: plagiarism is something that could very well happen on the interwebs.

Well, that pretty much covers everything I wanted to cover.  Until next time!

Hmmm…

So…this is my first blog post for ENG 566.  This really is just an introductory post so I can have one other than that annoying “Hello world!” default post.

What is there to know about me?  Well, that’s a good question that even I’m not all that sure of.  I’m a 24-year-old grad student at the College of St. Rose, going into my third and final year of my M.A. in English Lit.  I specialize in medieval lit. (primarily Chaucer and yes, I can read Middle English just fine and can even read it aloud), but I also enjoy Shakespeare.  It may sound somewhat unusual, but I also thoroughly enjoy reading literature on warfare; as an example, I’m still working through Niccolo Machiavelli’s “The Art of War.”  History was my minor as an undergrad at Nazareth College in Rochester, NY, and it’s still one of my passions; ancient Greek & Roman Republic history are my main focuses, but the Christian Crusades of the 11th & 12th centuries have also captured my interest.

So, that’s my education, really, along with some of my interests, but what about me as a person?  Well, I’ve been together with my girlfriend for almost 5 years now; I enjoy reading & listening to music (right now, I’ve been primarily listening to some 90s grunge metal, like Alice in Chains and Soundgarden); I work as a shoe salesman at JCPenney in Crossgates Mall; my main hobby, though, is playing video games.  And really, that’s all there is to know about me.

But, on to the main issue, which is how I see myself in the digital age & everything else we have to look at for this first post.  I kinda see myself as a proponent of the digital age, but at the same time, I’m resistant to certain aspects of it.  I spend a lot of time on the internet and I try and use digital counterparts to things in everyday life (like paying bills, checking my bank account, even a little bit of shopping, etc.); in addition to that, I have my iPod and I have my games.  So, in those aspects, I’m certainly part of the digital age.  It’s kinda difficult for me to explain what I resist when it comes to the digital age, but I don’t care for the need for constant communication & connection to other people via smartphones & the like, nor do I care for the digital replacement of certain things, like books.  I guess I’m old-fashioned and enjoy personal communication rather than digital communication, as well as the physical feeling of a book in my hands.

In terms of narrative, writerly/scholarly identity and performance, I’m not sure where I see myself in those aspects just yet.  I haven’t had too much experience when it comes to those, so I don’t know where to stand just yet.  But, I think that about finishes this up, so I think I’ll call it a day for now.  Here’s to a good semester!