Inside Aech’s Basement: A VR Group Project

Our group project was to create various objects using Blender to incorporate into the VR room that is Aech’s Basement from Ernest Cline’s novel, Ready Player One. Each group member brought their complementary individuality which added unique insight, helpful suggestions, and overall, great team camaraderie.

Below, each member of the group highlights their creation process and touches on how important intermediary group projects are to boosting confidence, gaining experience, and pushing academic boundaries.

But first, please enjoy this game trailer created by the invaluable Vincent H.:

https://youtu.be/StVIVT0FUZM

Cheers!

 

ENGL 3726-01 Final Group Project

Janelle O.

The Stages of Using Blender as a Tech Novice

 

Stage 1: Fear

[reaction GIF]

I’ll admit, I let my insecurities get the best of me.

Make a virtual reality object? Me? Someone who’s never worked on anything STEM related before? In a software that has millions of buttons to press and things to mess up?

[reaction GIF]

I watched countless YouTube tutorials and felt like the hosts were speaking in a foreign language. It was so difficult for me to believe that I could comprehend, and eventually implement, what they were telling me. The self-inflicted intimidation inevitably led to…

 

Stage 2: Frustration

[reaction GIFs]

No commentary necessary.

But, if you were wondering how many times someone can restart a project on Blender and come thiiiiiiiiiiis close to throwing a computer across a room…

[reaction GIF]

On the brink of a complete meltdown, I took a step back and realized that I may have been giving Blender too much power. Were the tutorial gurus really speaking in a different language? Was it truly as difficult as I believed it to be? Absolutely not…and once I accepted that, it was smooth sailing toward…

 

Stage 3: Success

[reaction GIF]

Ultimately, completing my object brought me relief; relief that I would never have to use Blender again! Kidding…kinda.

I felt a wave of satisfaction and pride knowing that I proved myself wrong. I was able to use software I had never used before to make an object in a world I had never visited prior to this semester. This project allowed me to fine tune a skill I didn’t know I possessed in a field that intimidated me beyond belief.

Am I switching my major to Computer Science? Absolutely not.

Did I gain experience and insight in an important field? Yes.

Most importantly, did I learn and grow in ways I never could have imagined? Yes.

[reaction GIF]

 

Vincent H.

Skee-Ball Machine and Game Trailer

 

With limited experience in MATLAB and none in computer graphics, I cautiously approached this project, which proved to be an interesting venture into the world of 3D modeling.

Our task: supplement an existing virtual reality model of Aech’s basement based on Ernest Cline’s Ready Player One.

Since many of the objects explicitly named in the book had already been created, I needed to immerse myself in Cline’s world and extrapolate other plausible objects. I was inspired by his line: “Most of them were gathered around the row of old arcade games against the wall.” Though I’m not quite old enough to be an ’80s kid, waves of nostalgia still swept over me as I recalled running through a labyrinth of arcade games at Chuck E. Cheese’s as a child. Each corner was filled with vibrant lights that tempted me with the opportunity to win tickets. When I stopped in front of a Skee-Ball machine, the objective seemed simple: whichever prize I earned would be a manifestation of my skill and not chance.

This childhood memory compelled me to create a Skee-Ball machine, and so I did.

Perspective view of Skee-Ball machine in Blender

I started by searching for various makes and models of Skee-Ball machines so I could build a historically accurate model; in doing so, I also learned more about the rich history of the game. Skee-Ball was invented in 1908. Aggressive marketing campaigns created an exciting buzz around the game, and it was eventually featured in various media outlets, one of them being a game called Superball on The Price Is Right.

After finding the ideal model to recreate, I began in Blender by building a scaffold of rectangular blocks to form a vague table-like structure. Blender has a variety of tools for detail work, so after creating the basic shape, I began adjusting the edges, gradually working toward the sleek, tapered model. After using Bevel to create curved surfaces, Knife to stencil point values on the backboard, and Subdivide to generate the metal mesh, my Skee-Ball machine came to fruition and fit well in the room.

Clockwise from top left: (1) Model of vintage 1980s Skee-Ball machine for design accuracy, (2) Wireframe view of final object created in Blender, (3) Rendered view of Skee-Ball machine and ball in Blender, (4) Rendered view of Skee-Ball machine and ball in room environment created with Unity
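A side note on why Subdivide works so well for fine detail like the metal mesh: each subdivision level splits every quad into four, so face counts grow geometrically. A back-of-the-envelope helper (plain Python, nothing from Blender’s API) makes the growth obvious:

```python
def subdivided_quads(base_quads: int, levels: int) -> int:
    """Face count after repeated subdivision: each level splits
    every quad face into four smaller quads."""
    return base_quads * 4 ** levels

# One quad subdivided four times is already a 16x16 grid of faces,
# which is plenty for a convincing wire mesh.
assert subdivided_quads(1, 4) == 256
```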

The object at the end of this meticulous process was similar to the wireframe structure shown above, except for one important feature: it was colorless. I spent the next few days experimenting with different shaders in Blender to generate the matte texture and metallic luster of the machine’s frame. Upon completing the frame, I was stumped by the deceptively intricate and random texture of the felt on the machine’s surface. Luckily, a trove of insightful guidance and templates was available on the internet, so I found a relevant node map and adapted it for my use.

Node map of blending shaders to create colors and textures
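At its heart, a node map like this is just layers of blending: a Mix node interpolates between two inputs by a 0-to-1 factor, often driven by a noise texture to fake randomness like the felt. A minimal sketch of that blend (illustrative Python with made-up color values, not Blender’s actual node code):

```python
def mix(color_a, color_b, fac):
    """What a Mix node does conceptually: linearly interpolate
    between two RGB inputs by a factor in [0, 1]."""
    return tuple((1 - fac) * a + fac * b for a, b in zip(color_a, color_b))

felt_red = (0.8, 0.05, 0.05)   # base felt color (made-up values)
shadow   = (0.1, 0.1, 0.1)     # darker tone blended in by a noise factor
blended  = mix(felt_red, shadow, 0.25)
```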

After a few finishing touches, I exported the object and passed it to Vivian for the final step: uploading to Unity. This project gave me a profound appreciation for the computer graphics all around us, which appear increasingly in movies. Although true experts with years of experience can create models nigh indistinguishable from real-life objects, anyone with the dedication to learn can become reasonably proficient within a month. As a STEM major, I truly valued this rare opportunity to exhibit artistic creativity and learn cross-disciplinary skills in an epic quest to remediate Ready Player One.

 

Robert W.

The Music

 

My responsibilities differed a little from the other group members’. My time was split fairly evenly among three programs: Blender, Finale, and Cubase. I wanted to add a little aural spice to the otherwise silent basement. Since a large portion of Aech’s and Parzival’s time is spent playing video games, I thought video game music would make the most sense as an incidental soundtrack. One of the only games the author explicitly mentions as a favorite of Aech and Parzival (which also has a decent soundtrack) is Golden Axe.

My 3D model of the Golden Axe Genesis cartridge

I turned to that soundtrack as source material for my synth work. I arranged a piece of music from the game, which took several hours of transcription and input in the notation software Finale (the only piece of software I was already familiar with).

The Finale file of my arrangement

From there, I exported the MIDI file into the digital audio workstation Cubase, where I got to transform my generic MIDI file into a slightly more interesting electronic piece.

Raw import of my Finale file in the Cubase DAW

I wanted my percussive sounds to emulate those of a Sega Genesis system, so I worked with the virtual instrument VOPM, a digital synth designed specifically to emulate a Sega Genesis sound chip. My work with this synth proved a unique challenge: you must describe the sound in your head in terms of attack time, attack delay, reverb, detune, modulator shape, and so on in order to create the instrument sound you desire.

VOPM interface
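Those parameters come from FM synthesis, the technique behind the Genesis’s Yamaha sound chip that VOPM emulates: a modulator wave wobbles the phase of a carrier wave, and an envelope shapes the note’s onset. A minimal two-operator sketch in Python, with made-up parameter names rather than VOPM’s actual controls:

```python
import math

def fm_sample(t, carrier_hz, mod_hz, mod_index, attack_s):
    """One sample of two-operator FM synthesis: the modulator's output
    bends the carrier's phase, and a linear attack ramp fades in the note."""
    envelope = min(1.0, t / attack_s) if attack_s > 0 else 1.0
    modulator = math.sin(2 * math.pi * mod_hz * t)
    return envelope * math.sin(2 * math.pi * carrier_hz * t + mod_index * modulator)

# Render a tenth of a second of a 440 Hz tone at 44.1 kHz:
rate = 44100
samples = [fm_sample(n / rate, 440.0, 880.0, 2.0, 0.05) for n in range(rate // 10)]
```

Raising the modulation index makes the tone brighter and buzzier, which is why small parameter nudges on a chip like this change the character of a sound so dramatically.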

In order to add a unique, slightly more palatable character to my arrangement, I used some virtual instruments created in the Spector digital synth, which is a more modern and practical plugin than VOPM.

Spector synth. Fun fact: the total cost of the software I used for this project exceeds $1,000. Thankfully, Blair owns all the software, so I didn’t have to foot that bill.

Even though music is the focus of my degree, my engagement with Cubase and electronic music in general has been limited. Nearly every step of this project (outside of arranging the original track) was a new and valuable experience for me.

 

Wooseong C.

Modeling and UV Mapping  

 

I came into this project without even knowing what Blender was. When I opened the software for the first time, I was overwhelmed by the number of different tools and view modes it offers. I mainly learned from YouTube videos that showed the step-by-step process of making a 3D object. Along the way, I picked up two Blender fundamentals – modeling and UV mapping.

Modeling simply refers to crafting the shape of your object to match that of the real-world version. This often requires a second window with a picture of the real-world object to refer to:

Modeling an open magazine – I chose to create comic books to add to our VR version of Aech’s basement. I wanted to have a comic book that was open to add to the “realistic” aspect

Having a reference is really helpful, as it allows you to be more detailed and accurate. The process of making an open comic book entailed making a plane, extruding edges to create the folded part and curvature indicated by the red arrows (above), and adding the Subdivision Surface and Solidify modifiers.

Another Blender skill I learned was UV mapping. This step allows the user to map an image acquired online onto their object, which let us map images onto our posters and comic books:

[Screenshot]

For the Astrosmash cartridge, additional steps were needed during UV mapping. Because I needed to add different images to the different surfaces of the object, I had to bring Photoshop into the workflow. The steps entailed unwrapping the 3D object into a 2D map, exporting this UV map to Photoshop, adding the online pictures to it there, and importing the new UV map back into Blender.
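The round trip works because a UV map pairs every vertex of the 3D object with a point in the texture image’s 0-to-1 square. The one subtlety is that Blender’s UV origin is the bottom-left of the image while image editors put the origin at the top-left, so the vertical axis flips. A tiny illustration of that bookkeeping (a hypothetical helper, not part of Blender or Photoshop):

```python
def uv_to_pixel(u, v, width, height):
    """Map a UV coordinate (0-1 square, origin bottom-left as in Blender)
    to a pixel position in an image whose origin is the top-left corner."""
    return (round(u * (width - 1)), round((1 - v) * (height - 1)))

# UV (0, 0) lands on the image's bottom-left pixel; (1, 1) on its top-right.
assert uv_to_pixel(0, 0, 1024, 1024) == (0, 1023)
assert uv_to_pixel(1, 1, 1024, 1024) == (1023, 0)
```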

Overall, learning to use Blender to add objects to our VR representation of Aech’s basement was a very valuable experience. Although the initial learning stage was difficult, I can now use Blender at a beginner level. I now appreciate the vast array of options and tools that overwhelmed me in the beginning, because it reflects the infinite possibilities one has for creating characters and objects. I am very glad that I took this course during my last semester at Vanderbilt. It was different from all the other courses I have taken, going beyond traditional essays and lectures to create a more hands-on learning experience. This final project really gave me the appreciation and the confidence to continue using programs like Blender and Photoshop in the future.

 

Vivian L.

Uniting All Elements in Unity

My role in the Aech’s Basement project group was to take each of my group members’ separate Blender 3D objects, along with their other creations, and implement them in the Unity Scene, ensuring everything looked as intended and worked properly. Another portion of my job was to raise the almost nonexistent level of interactivity in the room. Prior to this semester’s final project, the player could not move within the room at all, and there was no way to touch or pick up objects; I desperately wanted to change this.

At the beginning of this project, while my group members worked on creating their assets, I researched and tested different ways to allow camera and body movement in VR. I read Unity’s Oculus Rift support pages and watched many YouTube explanations of how to track the headset and what the sensed controllers were called when used in the Scene. My initial plan was a system in which a camera was affixed to a Capsule 3D object that represented the player. The capsule would then rotate and translate itself according to the detected headset movements and the Oculus Rift controller joystick input. To create the hands, I wanted to move two Sphere Colliders to wherever the touch controllers were sensed. Lastly, to simulate picking up objects: if the player moved a hand’s Sphere Collider into any object that was meant to be movable and had a collider on it, while the grip trigger on that hand was held down, the object would follow the Sphere Collider’s movement.

[Screenshot]
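Boiled down, that grab rule is a small per-frame update: while the grip trigger is held, the first object overlapping the hand’s sphere gets “grabbed” and follows the hand; releasing the trigger drops it. Here is that logic sketched in plain Python purely for illustration; the real scripts are C# MonoBehaviours in Unity, and every name below is made up:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Grabbable:
    position: Tuple[float, float, float]
    radius: float = 0.10  # collider size of the object

@dataclass
class Hand:
    position: Tuple[float, float, float]
    grab_radius: float = 0.05        # the hand's Sphere Collider
    held: Optional[Grabbable] = None

def overlaps(hand: Hand, obj: Grabbable) -> bool:
    """Sphere-vs-sphere collision test between hand and object."""
    dist = sum((a - b) ** 2 for a, b in zip(hand.position, obj.position)) ** 0.5
    return dist <= hand.grab_radius + obj.radius

def update(hand: Hand, objects: List[Grabbable], grip_down: bool) -> None:
    """Per-frame grab logic: pick up on grip, drop on release,
    and make a held object follow the hand."""
    if not grip_down:
        hand.held = None
    elif hand.held is None:
        hand.held = next((o for o in objects if overlaps(hand, o)), None)
    if hand.held is not None:
        hand.held.position = hand.position
```

A Coke can sitting just within the hand’s reach gets picked up on the first gripped frame and then tracks the hand each frame until the trigger is released.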

Sounds fairly simple, right? I had initially thought so too.

However, I soon realized that although I had the thought process down, I had no idea how to actually code it into the room. I had scarcely any experience writing C# code, the primary language in Unity, or using any of its numerous class libraries. I was also still piecing together how controller input is read in Unity, and was largely unsure of how to read the physical headset location. Another difficulty was the inability to test my code outside of the VR space in the Wond’ry.

[Screenshot]

After another week or so of struggling to write my own interactive player in VR, I decided to ask Dr. Molvig for help, since he was my professor last semester in the class Virtual Reality for Interdisciplinary Applications. He showed me many helpful websites and more videos on VR player bodies, especially Unity’s own player model. Following tutorials, I was able to put the model into the Scene, but it didn’t quite work as expected.

#1 tip for all things Comp Sci: It never works like you expected

In fact, it crashed the game many times, and even when I got the Scene to play, my attempt to recreate the hands, painstakingly exact to a video I was recommended, was ruined by the fact that the controllers did not seem to be tracked in the scene at all! I believe this was partially due to last semester’s attempt to create a teleportation system, which was quite ineffective and had tampered with the Scene files.

[Screenshot]

This led to another large difficulty in my project. Since Unity does not allow copying Scene objects with their existing properties into a separate project, I had to choose between spending more time trying to fix the existing Scene or creating an entirely new Scene and copying all of the objects into it by hand. Given all of the time already spent trying to bugfix whatever was going on in the original Scene, I decided that starting over might be my only choice if I wanted movement and object interactivity to be part of the room. I spent many days re-importing and organizing every detail of the room, testing along the way to make sure that body and hand movement still worked.

Once I had finally recreated the original room, I held my breath when hitting the “Play” button one more time. It worked! The camera moved, my hands moved, and my body moved! I was filled with relief, but this was only the first step.

It was time to put my teammates’ efforts into the Scene. Altogether, we had quite a few assets to put into the room: comic books, a Skee-Ball machine, posters, a VCR, game cartridges, and more. (I’m sure my teammates could tell you a lot more than I can about the specifics of their objects!) I also added a few Coke cans, mainly for hand and object collision testing.

A few things happened when I tried moving the objects into the Scene.

[Screenshot]

  1. Objects were completely devoid of color.
  2. They were ginormous.

I knew I had to fix the color issue, but the size was not a problem; I could scale objects at will. Adding the correct materials was a separate hurdle. I researched what each of the little options on the objects did, and realized that clicking a button labelled “Lightmap Static” unlocked many options, one of which was assigning materials to certain surfaces of the object. This meant I would first need a material to assign, though. My teammates largely used pictures found online to wrap around their objects, so I looked up how to create Unity materials from images. Once I figured that out, I realized that objects that were supposed to have separate images on each face, like the Betamax VCR, had only one surface on which to put a material. Without a proper UV map, I got around this issue by building the object out of six quads, one per side, and assigning a separate material to each of them. This worked nicely for neat geometrical objects, but I worried what would happen with more complex ones.

Thankfully, my teammates supplied me with UV maps that worked like magic for some of the other objects. For others, I simply looked up images online and utilized those. Sizing was done as realistically as possible.

Eventually, every asset was placed into the Scene with the correct coloring and sizing. I was very happy with the results, and seeing everyone’s work in the space, in VR, amazed me. Lastly, I was to put Robert’s Golden Axe music into the Scene. I decided to create an Audio Source centered at the TV, attached the Golden Axe audio clip to it, and adjusted the radii to reflect the distance at which the volume plays at full strength and the distance at which it completely fades out.

[Screenshot]
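The two radii on an Audio Source define a simple distance falloff: full volume inside the inner radius, silence beyond the outer one, and a fade in between. Assuming the linear rolloff mode, the attenuation behaves roughly like this sketch (illustrative Python; Unity computes this internally):

```python
def rolloff_volume(distance: float, min_radius: float, max_radius: float) -> float:
    """Linear distance attenuation: full volume within min_radius,
    silent beyond max_radius, and a linear fade in between."""
    if distance <= min_radius:
        return 1.0
    if distance >= max_radius:
        return 0.0
    return 1.0 - (distance - min_radius) / (max_radius - min_radius)

# Standing at the TV, halfway across the basement, and out of range:
assert rolloff_volume(0.5, 1.0, 9.0) == 1.0
assert rolloff_volume(5.0, 1.0, 9.0) == 0.5
assert rolloff_volume(12.0, 1.0, 9.0) == 0.0
```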

This marked the end of incorporating each team member’s object (and music) into the final room. Aech’s Basement is far from complete, but progress is progress, and each team member had their own learning experiences completing their part.

[Screenshot]


The Virtual School, the Better Choice?

Dr. James L. Moore

 

 

As an education major at Vanderbilt’s Peabody College of Education and Human Development, I have a particular interest in the different ways students can learn. When my class read Ready Player One by Ernest Cline, there were parts of the novel that depicted the different ways in which the main character, Wade Watts (avatar Parzival online), learned and studied in a virtual world called the OASIS. What was most interesting is the apparent dichotomy between learning in a physical setting and learning in a virtual one. As you will come to learn in the opening chapter, the OASIS emphasizes virtual components as the main vehicle for getting almost anything done. Along with currency and communication being almost completely virtual, the schools seem to be as well. Consider a quote from the beginning of the book:

“I was more or less raised by the OASIS’s interactive educational programs, which any kid could access for free. I spent a big chunk of my childhood hanging out in a virtual-reality simulation of Sesame Street, singing songs with friendly Muppets and playing interactive games that taught me how to walk, talk, add, subtract, read, write, and share” (Cline 15).

This quote points out the power of virtual reality in promoting the healthy social and emotional development of students at an early age. Through the main character’s memories of Sesame Street (Jim Henson’s Muppets, an award-winning show and, in my opinion, one of the best, highest-quality children’s programs of all time), Cline shows how certain elementary concepts and lessons on social skills could be learned at a more interactive, much more tailored pace through virtual education. Let’s take a look at a video that relates to this (Video #65 on the list):

After watching it, you may recall Dr. James L. Moore III remarking that “We need schools that are student centered and always factor in the human element.” This is juxtaposed with the idea of virtual learning as the way of learning and understanding in the OASIS. Which is better: virtual learning that gives an extremely tailored learning experience to almost all students, or more expensive individualized learning with the physical presence of a teacher?

Harassment in VR Spaces

(Spoiler warning for Ready Player One by Ernest Cline in first two paragraphs. Links contain sensitive content relating to sexual harassment in online/gaming communities.)

Ready Player One: ’80s nostalgia trip, celebration of gamer culture, cyberpunk dystopia, hero’s quest, and – teenage love story? I’ll admit, I haven’t finished the book yet, but from the beginning our protagonist Wade Watts/Parzival is smitten with Art3mis, a fellow gunter and popular online personality. He even has pictures of her (or at least, her avatar) saved on his hard drive. When he first encounters Art3mis in the Tomb of Horrors, he gives her advice on how to beat the lich king. Once they’re both High Five celebrities on the famed scoreboard, they begin a casual romance. Art3mis breaks it off when she feels their time together has become too much of a distraction from the hunt. Parzival, lovesick, sends her messages that go unread, sends flowers, and stands outside her virtual castle with a boombox: part persistent “good guy,” part slightly creepy stalker.


Ready Player One by Ernest Cline

Ready Player One has not (yet) examined gender politics in the OASIS, but it acknowledges the age-old mantra: “There are no girls on the Internet.” Even in a world where virtually the entire population uses the OASIS and a game event offers a massive prize, the default user is presumed male. Parzival persistently questions Art3mis’s gender until he is assured that she’s “actually” female, accusations that Art3mis takes with good humor.

(Spoilers end here.)

But as we all know, there are women on the internet and in the gaming world, and they have been there since the beginning – even when the climate is hostile. Shortly after starting Ready Player One I found this article about the writer Jordan Belamire’s experience with sexual harassment in virtual reality. Despite all players having identical avatars, another player recognized her voice as female and followed her around attempting to touch her avatar inappropriately. She finally exited the game. The game’s developers were shocked and dismayed when they heard of the incident and in response developed an in-game “power gesture” that creates a privacy bubble around the player. They hope that other virtual reality developers will take harassment into consideration when designing their games. Online or in-game harassment is nothing new, but as we pioneer exciting new platforms and experiences, it continues to be a thorn in the community’s side.

Ready Player One might take place in the distant dystopian future, but in characters’ interactions with each other the culture seems closest to the Wild West of the 2000s internet – complete with flame wars and skepticism on women’s presence in the OASIS. Presumably, harassment continues to be an issue in this brave new world of the OASIS – but is the response closer to QuiVr’s developer-implemented “power gesture,” or the old advice of “just ignore it and it will go away?” Perhaps it isn’t even a talking point in the OASIS’s community – why worry about it when, after all, there are no girls on the internet?

What do you think of QuiVr developers’ response in implementing the power gesture? Do you think that this is a valid solution, or do you believe it is too much/too little?  What responses to harassment have you seen on other platforms and games?

Virtual Hopes

VR is an exciting way to experience media more immersively, although it still has a long way to go before it is truly available for everyone to experience in their daily lives. This is largely because of the cost of a single setup, even before you buy any games or interactive experiences to enjoy with your headset. You can either have no interaction with your environment beyond turning your head, or a fully immersive experience that costs a ton. Another major setback is that these expensive setups that track your movements are not always very accurate, a problem we ran into while solving a puzzle in our first experience with the HTC Vive. We were far enough from the walls and close enough to an in-game object that we should have been able to pick it up, but the tracking system believed we were much closer to the wall and prevented us from grabbing the object until other people in the room moved around and the tracking started working correctly again. It is also rather obvious that you have a screen right in front of your eyes no matter which setup you use, and depending on the resolution and how the screen is built, it can become hard to watch very quickly.

Even with these limitations, there is a lot of room for VR to expand into videos, games, and simulations for educational purposes. For example, it would be valuable for doctors to practice surgeries in virtual reality so they don’t have to obtain cadavers and can practice over and over on different representations of people’s bodies. Personally, I would like to see VR improve its tracking capabilities so that it becomes more immersive and can truly simulate real-world experiences. VR has already explored many concepts and styles of play by transforming regular three-dimensional media into something you can stand in the middle of, where you feel like you are actually interacting with your environment rather than sitting in front of a screen whose objects you can’t touch. There are many VR experiences that let you do things you couldn’t in real life, such as climbing Mount Everest or becoming a bunny in an animation. Experiences like walking a plank at great height can even let people face the things that terrify them without any real danger. VR can also transform games that started as PC games into immersive experiences, letting you become a surgeon or play Fruit Ninja in almost real life. A great side effect of games in virtual reality is that they get you active, letting you practice archery or tennis without ever having to go far or find a gym. And if you want to play sports with friends or strangers around the world, you can do that as well, though you can’t play with friends who don’t own their own VR setup. Virtual reality can even place you in completely impossible environments with an animated, drawn, or dreamlike feeling.

Though these are all really cool advances in virtual reality that demonstrate how it can be used socially, or in an active or dreamlike environment, to enhance the way you experience a piece of media, the tracking and visibility are not quite at the level they would need to be for truly educational use in surgeries and other applications. Once those advances are made and the price comes down to an accessible level, everyone will truly be able to experience and enjoy virtual reality.

Pokemon GO! The Ethics of Augmented Reality

Pokemon Go is the new gaming phenomenon of the year. Revisiting the old-fashioned Nintendo Pokemon games, Pokemon Go takes that same experience up a level by adding augmented reality. By letting people walk around the planet with their phones searching for Pokemon, the game has added a new dimension to gaming. Along with that, one of the game’s main selling points is its focus on fitness. A recent statistic stated that since the game’s release earlier this year, players have walked about 4.6 billion kilometers. To put that into perspective, that’s more than the distance from the Sun to Neptune and more than the distance NASA’s Voyager 1 has travelled in the past 12 years!

However, this achievement does not come without problems. The game has several positive aspects: people are being more active, going out more (even if they are still looking at their screens), and meeting new people (I myself have made a couple of friends while playing). But there have been quite a few concerns regarding trespassing. People have been reported walking into other people’s private residences while trying to catch a particular Pokemon. While Niantic, the game’s developer, is completely in the clear legally, questions about the company’s responsibility for the actions of its consumers have started emerging. The popularity of the game has pushed the company into ethical and legal territory that has never been dealt with before, and in the fast-developing world of augmented reality, such issues will only become more frequent as new games implementing this technology are released. While some people say that players are completely responsible for their actions and how they play the game, many suggest that the game in some ways encourages players to trespass into restricted areas, or at restricted times, through where its PokeStops are located and where many Pokemon are found.

Public places like monuments or parks are the ideal location to play games such as these, so often Niantic focuses on such areas by providing more Gyms and PokeStops, in a way encouraging their players to come to that location more often. Niantic has received requests from several organizations to remove PokeStops from near their establishments, and so far, Niantic has complied. But the question of whether Niantic is responsible or not is still unanswered.

In my opinion, both parties are to an extent to blame. Neither is completely wrong, but since this is a new field of ethics in gaming and technology, new rules must be put into play. So far, there are no limitations on where one can place digital markers in the real world, but now that augmented reality is becoming a… reality, we need new laws or rules to govern it. The lack of limitations on where Niantic places its Gyms and PokeStops often leads people into unknown territory. As far as the players are concerned, they should ideally pay more attention to where they are walking and be more aware of their surroundings, but the fact that playing the game requires constantly looking at your phone’s screen does not help. Niantic has made some efforts to reduce the amount of time people spend looking at their screens by introducing apps for wearable devices such as the Apple Watch, but this is still not a complete solution. I’m sure that as more developers implement augmented reality in their games, new laws governing the use of digital space will emerge; until then, all we can do is stay aware of our surroundings while playing until we are offered a satisfying solution.

 

Throwback Thursday

Playing The Sims was an instant throwback to my elementary school days. My friends and I would load the CD into the computer and wait anxiously, huddled around my desktop, for the game to load. But times have changed! Emily showed up to class with her laptop and simply clicked for the game to start. Almost instantaneously, we were transported into a virtual reality. First off, we created our Sim character. The number of options available for making a human is astronomical! I could not believe all of the different choices. It is no longer as simple as choosing hair and eye color; we could choose their body weight and muscle definition, whether or not they have freckles, and the shape of their eyebrows. Does this really matter when playing a video game? After a while it became tedious. It seemed so superficial that even in a video game we have to care about outward appearances. Not only did we have to choose outward appearances, but we also had to pick their interests, personality, and life goals. Personally, I don’t really care if my Sim wants to be a five-star chef or an international spy. Finally, we resorted to choosing characters created by the randomization tool, since making each separate character was so time consuming. Once past character creation, we had to choose a house and furnish it. While this part was entertaining, I found there were too many options. I’m indecisive in my own real life and almost felt like the video game was mocking me! We didn’t get to play much of the action part because we ran out of time, but I’m looking forward to playing in and creating a virtual world.

Molly Steckler

LOTRO: Not Quite There Yet

by Theo Dentchev

Video games today are the closest thing we have to a commercially available virtual reality like that in Neal Stephenson’s Snow Crash. Lord of the Rings Online is in many ways quite similar to the Metaverse. You have an avatar, you can interact with the avatars of real people in real time, and you can even have houses in various neighborhoods. Of course, all of this is much more limited than it is in the Metaverse: your avatar is only customizable within the confines of the pre-made models and features (you can’t code your own), interactions with other players are much more limited in terms of facial expression and body language (sure, you can type “lol” and your avatar will laugh, but your avatar can’t be made to mimic your real-life body and face movements), and while you can change the furniture in your house, you can’t do much about its structure.

And you can also fight. The true core of any game is the gameplay, with everything else, no matter how detailed or fleshed out, being simply shiny accessories. LOTRO, whatever else it may have in its vast universe, is at its core a PvE (player vs. environment) game in which the player fights all sorts of monsters across various quests. The core of Metaverse gameplay is mimicking real life without its limitations, but you can still have sword fights in it, thanks to some nifty code by Hiro Protagonist. In LOTRO you have a great deal of control over your avatar when fighting. I happen to be a champion, so I know a thing or two about virtual sword fighting. I can decide what kind of attacks my character will use and when. If I time it right, I can fit special attacks in between auto attacks, or chain two special attacks in a row. I can heal, and I can run away (sort of).

But after reading Snow Crash I realize just how limited the gameplay really is. In the Metaverse, skill is in part determined by how closely you can get your avatar to move the way you would in real life. In many ways it is like a fight in real life: you actually have to pay attention to how the other player is moving and react accordingly by dodging, blocking, and counterattacking. All of that is automated in LOTRO, determined by mathematical formulas and probability. In LOTRO you don’t even pay attention to the actions of your avatar or the enemy you are fighting. If you asked me to describe how a spider in LOTRO attacks, I couldn’t do it. That’s because in LOTRO you’re just standing still, face to face with your enemy, hacking away, and you’re paying much more attention to the health/power bars in the upper left and the skill icons in your skill bar (whether they are available yet, or how much cooldown time is left) than to the actual movements going on. Not to mention that your movements don’t really have much of an effect anyway. I may have just used a special move that slashed my enemy four times, but the enemy will look just the same as it did before. In the Metaverse, slashes have visible effects, such as severing an avatar’s arm from its body.

Reading Snow Crash makes me realize just how far games like LOTRO are from achieving virtual reality, despite all the cosmetic similarities. And yet, there are similarities. If you compare LOTRO to early arcade games, the difference is huge. We’re making strides, and who knows, maybe twenty or thirty years from now we’ll have a Metaverse in reality. In the meantime, I’m going to go kill some spiders, and maybe I’ll pay a little more attention to the animations this time.

– TD