Podcasting — The Future of News Media

With the average person's attention span shrinking, the printed newspaper has become the least popular medium for news. News now reaches us through a variety of formats, such as television, the internet, and online video, and you would be hard pressed to find anyone who still reads the morning paper. Hell, I cannot remember a single time I have read a newspaper in the 19 years of my life. The limitations of the printed medium simply cannot compete with the affordances of new visual and auditory media, and news outlets are adapting to the current social climate as a result.

News media outlets such as Vox Media and Vice News have taken advantage of the growing popularity of YouTube by creating informative videos that combine animations, video clips, and graphics with the spoken word to capture the audience's attention. Meanwhile, broadcast companies such as Fox, NBC, and CNN use television to disseminate the news and reach broader audiences. These visual media have far more potential to capture one's attention than the small black-and-white words that fill newspapers.

Just take a look at the video and newspaper below. Which one would you be more likely to read or watch?

Front page of The New York Times, July 29, 1914

The video, right? I agree. There is simply no comparison between the two mediums. A print newspaper just does not offer enough stimulation to compete with these other forms of news. As the common idiom goes, a picture is worth a thousand words, and there is no way in hell I am going to read a thousand words; so, just show me the picture.

While these media do a great job of capturing your attention, they also demand your complete and undivided attention. People are busy: most work 9-to-5 jobs, more people than ever commute to work, and many have neither the time nor the energy to engage with these formats. So how can the news be delivered in a way that adapts to our busy lifestyles?

Podcasting has emerged as a great new alternative for consuming the news. It allows the average person to keep up to date while performing routine day-to-day tasks; depending on the type of job you have, you could be listening to podcasts for the entire workday. News media outlets need to take advantage of this emerging medium. With podcasting, they have the opportunity to be in the ears of the masses for large portions of the day.


The New York Times has taken advantage of this opportunity with its podcast The Daily, which takes the most significant current news stories and thoroughly examines them in a condensed 20 to 40 minutes. The audio format affords far more freedom than print. For the Blasey-Kavanaugh hearing, the show used actual recordings from the hearing, brought in guests with personal connections to Kavanaugh, and commented on specific key moments from the testimony. There is a lot more nuance that can be conveyed in this format.

Listening to the actual hearing conveys far more than words on a page. You can hear the intonation and emotion in the speakers' voices and form your own opinions based on them. The words are much harder to take out of context, and they carry far more impact when you hear them straight from their source. Podcasting also gives the audience a more human take on the news: hearing the reporter's analysis in his or her own voice helps listeners distinguish analytical opinion from objective fact.

Podcasting offers an exciting new alternative to traditional forms of newscasting, yet few news broadcasting companies have begun to use it. Podcasting is steadily growing in popularity while these other forms decline. These companies need to move into the future and pick up this growing medium; it is only a matter of time before podcasting becomes a significant component of news media.

https://www.podbean.com/media/share/dir-33yyz-4bc4d9f

https://www.podbean.com/media/player/n7abi-4b59fac-dir?from=share&skin=1&share=1&fonts=Helvetica&download=0&vjs=1&skin=1

*Sorry, I know it’s annoying to click a link, but WordPress is being a butthole and I have been trying to fix it for hours.

Ethan Nguyen


From Gamers to Gamemakers

Sunny Chennupati

I’m a lifelong gamer. From aggressively driving my way to become ranked 12th in the world in Burnout Paradise to carrying my teams to victory at Vanderbilt’s semesterly Vandy_LAN League of Legends tournaments, I’ve played and excelled at a slew of video games. Needless to say, video gaming is a core passion of mine and I don’t see myself stopping this pursuit any time soon.

Wanting to involve myself in this hobby more, I thought it would be interesting to code a game. I've spent years on one end of the coder-gamer interaction, so why not make my final project about the other side of it? Playing a complex video game is a breathtaking experience, but behind every Skyrim, every Witcher, and every Fortnite lies a team of dedicated developers who work tirelessly to ensure that players are fully invested in their games. They are the architects of the out-of-body experience that is core to gaming: the moments where players "leave" their body outside the computer and become fully immersed in the virtual world. I didn't have ambitions as lofty as emulating one of these developers, but I certainly wanted to get a taste of game-making, both to develop a new skill and to foster a deeper appreciation for the art of game development.

The first step to making a game was deciding what to make it about. Having read Ernest Cline's Ready Player One, we were enthralled by the final fight scene at Castle Anorak, not particularly because of the compelling nature of the writing or the exhilaration of the action (neither of which was any more impressive than the rest of the book), but because of how many different genres and references were jam-packed into it. Ultraman fighting Godzilla is a pipe dream for many people who grew up as fans of both franchises, and I can only imagine that seeing it visualized must be an immeasurable joy. The David and Goliath theme running through the entire battle, the big-robots-fighting-big-monsters spectacle that has become a staple of modern Hollywood blockbusters, the DeLorean DMC-12 as a loving throwback to Back to the Future (think about that wording for a second): all of these make for a wonderfully cacophonous amalgamation of popular media. There was no way we could pass up making our game about this battle.

The second step in the process was deciding what kind of game to make. I game regularly, and while I play multiple multiplayer games at once (Fortnite, League of Legends, Rocket League, etc.), I generally commit myself to one single-player game at a time. I believe single-player games offer an unparalleled and highly personal story experience that most multiplayer games cannot come close to emulating, and so they deserve dedicated time toward their completion. The game I am currently playing is Nier: Automata, a highly acclaimed action role-playing game featuring androids, a post-apocalyptic Earth, and humanity's desperate struggle to reclaim it. One key aspect of Nier that caught my eye was how it switches gameplay mechanics without batting an eye. More specifically, the game constantly switches perspective, flowing seamlessly between a Dark Souls-style 3D hack-and-slasher, a side-scrolling shoot-em-up à la Megaman, a bullet-hell space shooter like the Touhou series, and other modes of gameplay.


From left to right: Dark Souls, Megaman, and Touhou.

It's easy to argue that this could make for a cluttered gaming experience, but the style and ease with which Nier carries out these transitions (VERY minimal use of loading screens, for example) make for a compelling and dynamic experience. Moving forward from Nier, I wanted to emulate this changing of mechanics, which interestingly is a mechanic in and of itself.

A demonstration of the switching of perspectives in Nier:Automata

The final step, naturally, was constructing the game. Kyle and I tested several game-making tools, including Stencyl, Scratch, Unity, and GameMaker. Stencyl and Scratch are both drag-and-drop tools; while that would be nice for someone who knew nothing about coding, Kyle and I have both studied computer science extensively, so why not use that knowledge to make a game instead of taking the easy way out? After trying Unity, we immediately decided against it: Unity is very powerful, too much so for our purpose of making a simple game for this class's final project. Creating a simple object in the engine took upwards of an hour, and we knew it would be cumbersome to code anything functional in it. So we settled on GameMaker, an intuitive tool that makes it very manageable to create functional and varied games, and we were excited to start learning it.

Initially, the plan was to make a game consisting of five distinct "phases," each with its own mechanics and control scheme. The first phase would be a top-down arena shooter, similar to the Boxhead Flash games. After a time interval or a certain number of killed enemies, the game would move on to the next phase.


One of the Boxhead games

The second phase would be a side-scrolling fight against a massive boss, akin to the recently released, critically acclaimed Cuphead.

A boss fight in Cuphead

The third phase would be a turn-based Final Fantasy-esque game between Parzival’s party and Sorrento.

A fight in one of the Final Fantasy games

The fourth phase would be the escape, where the player rides a car to the end of the game, dodging bullets, debris, and enemies, similar to the Flash game “The Flood Runner.”

One of the Flood Runner flash games
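Mechanically, that phase structure boils down to a small state machine: track time and kills in the current phase, and advance when either threshold is hit. The sketch below is purely illustrative (we ultimately built the game in GameMaker, and every name and number here is invented), written in C# for readability.

```csharp
// Illustrative only: a minimal state machine for the planned phase structure.
// Each phase ends after a time limit or a kill quota, whichever comes first.
public enum Phase { ArenaShooter, BossFight, TurnBasedBattle, Escape }

public class PhaseManager
{
    public Phase Current { get; private set; } = Phase.ArenaShooter;

    private float elapsed;                  // seconds spent in the current phase
    private int kills;                      // enemies killed in the current phase
    private const float TimeLimit = 120f;   // placeholder threshold
    private const int KillQuota = 30;       // placeholder threshold

    public void RegisterKill() => kills++;

    // Called once per frame by the game loop.
    public void Update(float deltaTime)
    {
        elapsed += deltaTime;
        if (elapsed >= TimeLimit || kills >= KillQuota)
            Advance();
    }

    private void Advance()
    {
        if (Current == Phase.Escape) return;    // final phase: nothing left to switch to
        Current = (Phase)((int)Current + 1);    // move on to the next phase
        elapsed = 0f;
        kills = 0;
    }
}
```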

Kyle will talk more about the development of the game, so I'll quickly brush over my process so as not to make this post too long. Ultimately, we decided on two levels for the game, since building each phase with its own new mechanics would have been an incredibly long-winded process. I made the side-scrolling shoot-em-up portion while Kyle created the car-driving portion.

The first step in my game development was creating the menu, a seemingly simple task that took hours due to GameMaker's steep learning curve.


My initial menu

The next steps were making the basic platforms, coding the game mechanics, drawing the background, adding the special effects, and finally assembling the completed game.


The basic platforms


The physics and tilesets are added


The background and special effects are complete


The finished product with the Godzilla boss

Kyle will now discuss the actual game development process in greater detail, but if you want to try out my game, attached below is the link to download the Zip file containing the game.

http://www.mediafire.com/file/ames3badu0z1aak/RPOSunnyChennupati.zip

Kyle Tran

“What is the most resilient parasite? Bacteria? A virus? An intestinal worm? An idea. Resilient… highly contagious. Once an idea has taken hold of the brain it’s almost impossible to eradicate.” (Inception, 2010)

At some point during a childhood spent among classic video games like Counter Strike 1.6, Age of Empires: Rise of Rome, and The Sims, a little 'idea' took root in my mind: to make some kind of video game that I could be proud of, no matter how small or simple it was. Long after that idea had seemingly fallen into the darkest depths of my mind, I found, about two years ago, that it was still firmly rooted right there in my head, and I finally took the first step of my game-developing journey on a blazing summer day. After several weeks of tinkering with the engine Stencyl, I managed to create my first little game (it's available on Newgrounds: https://www.newgrounds.com/portal/view/678131).


It was a very simple game in which you control a red circle, avoiding swarms of small white circles while firing blue circles at your enemies to survive for as long as possible. Everything was done quite easily using the drag-and-drop features available in Stencyl and a bunch of helpful YouTube tutorials. Embarrassing as it is to admit, it took me weeks just to copy what those experts on YouTube did in their videos and implement my own features. Although the game is really primitive, making it was an exciting task for high-school-junior me. I got to experience just how easy it is to start making a game, and how hard it is to actually finish one.

Two years later, here I am with a chance to revisit that experience as I fumble with GameMaker Studio while working on the final project with Sunny. After brainstorming together for a while, we eventually decided to make a game about the final battle in Ready Player One. This time, the challenge was on a whole different level: we decided to use the GameMaker Language, which entailed actual coding, and I drew my own sprites instead of importing them from online resources (and then even animated them!). After that there was still more to do: implementing multiple types of enemies and bullets, figuring out the game logic, tweaking numbers to balance the game, and all that jazz. It felt like playing New Game Plus on a harder difficulty.

My journey started with the setting. Remember how, in the movie, Parzival broadcast his call to arms on the barren Planet Doom, in front of Castle Anorak?


What I had in mind was that the reason there was no sign of Sixers on the planet's surface (except in Castle Anorak itself) was that Parzival and his pals had already cleared out the Sixer forces and defenses on the outer perimeter. I decided that my game would be about that first strike. Playing as Parzival, riding in his DeLorean DMC-12, you have to defeat swarms of Sixers and destroy their fortifications, which consist of layers of towers and a portal from which Sixers continuously spawn (Sorrento configured the portal so that it can teleport Sixers from anywhere in the Oasis to Planet Doom!). As soon as you shoot down the portal, you win.

(Yes, the portal is that thing in the bottom-right corner. The other blue thing is the tower.)

But what came next was the hard part. As someone with barely any artistic talent, I struggled quite a lot with the sprites. My first idea was to check out the GameMaker Store, but free users are not allowed to download or purchase anything. Dejected, I figured the only thing I could do was create my own sprites. On my quest to learn how to make video game art, I stumbled upon Piskel.

It was really beginner-friendly; even as someone with zero (pixel) art experience, I was able to make my first sprite look somewhat like Parzival's DeLorean DMC-12 (hopefully…).


The enemies were a bit easier to draw – the sprites for basic melee soldiers and ranged gunners were quite straightforward.


By the time I got to the walker, I was on fire. I even went so far as to animate them in a simple way just to make them more lively. Eventually the walkers had a walking animation, a charging animation for their attacks, and a death animation.


I also made death animations for the soldiers (they implode with the sound of coins splashing on the ground, like in the movie) and for when the player takes damage (the car flashes with electricity leaking out). Unfortunately, I didn't have enough space in GameMaker to animate the towers crumbling, which I had initially been really excited to do.

Speaking of towers, I initially drew these four types.


What I had in mind at first was to place Parzival in the middle of the map and put each type of tower in a corner, so that the player would have to defeat all four elements in an Avatar-esque manner before fighting the boss in the middle. Unfortunately, the trial version of GameMaker only allows users to create 15 objects (which, by the way, includes players, bullets, and animations, so by the time I finished with the dynamic objects there were already 13 or so), and therefore I resorted to having only one type of tower (the water one).

Next up were the bullets, which were the easiest sprites to draw because they're mostly straight lines and a few curves with minimal shading.


In addition to that there is the tileset, which is basically the terrain that the player and enemies walk on. It was really plain at first, so I tried to decorate it a bit with little cracks in the ice; when the tile blocks are put together, you can see the cracks (kind of) connecting to each other.


Lastly, the most tiring process of all: coding and balancing the game. Learning the GameMaker Language was quite a pain because it differed from most programming languages I've picked up. It took a while to get the hang of it, and even then it was still a headache to figure out the logic and interactions between characters in the game. Handling enemy behaviors like shooting and moving was already hard (not to mention that each enemy type has different attack patterns with different code!), but accounting for in-game "events" like collisions with bullets or triggering invincibility upon taking damage was even more complicated than I expected. Then I had to dive into the game myself to see whether what I had was too hard or too easy, and change player and enemy stats accordingly: HP, movement speed, attack speed, damage… everything needed to be tweaked down to the tiniest detail. When all was said and done, what was left was implementing the menu screens, and voilà, here's the final product.
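To give a flavor of what that event handling involves, here is a rough, engine-agnostic sketch of the invincibility-on-hit pattern described above, written in C# for readability; the actual project implemented this in the GameMaker Language, and all names and numbers here are made up.

```csharp
// Sketch of the "brief invincibility after taking damage" pattern (illustrative only).
public class PlayerHealth
{
    public int Hp = 10;

    private float invincibleTimer;                    // seconds of invincibility left
    private const float InvincibleDuration = 1.5f;    // placeholder value

    // Called once per frame with the time elapsed since the last frame.
    public void Update(float deltaTime)
    {
        if (invincibleTimer > 0f)
            invincibleTimer -= deltaTime;
    }

    // Called when an enemy bullet collides with the player.
    public void OnHit(int damage)
    {
        if (invincibleTimer > 0f)
            return;                                   // ignore hits during the grace period

        Hp -= damage;
        invincibleTimer = InvincibleDuration;         // start a new grace period
    }
}
```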


It has definitely been a really fun experience putting this all together, and through this project I rediscovered that small ambition from my childhood. Once more, I was able to create my own little video game, and although it's still quite primitive, it is a huge improvement over the first one I made. I can only hope that, from here, it will go on to become even more of a real video game one day.

Here is the game for anyone interested!

http://www.mediafire.com/file/3uwg447e7oja7l5/RPOKyleTran.zip

Till we meet again,

Kyle and Sunny.

Inside Aech’s Basement: A VR Group Project

Our group project was to use Blender to create various objects and incorporate them into the VR room that is Aech's basement from Ernest Cline's novel Ready Player One. Each group member brought a complementary individuality that added unique insight, helpful suggestions, and, overall, great team camaraderie.

Below, each member of the group highlights their creation process and touches on how important intermediary group projects are to boosting confidence, gaining experience, and pushing academic boundaries.

But first, please enjoy this game trailer created by the invaluable Vincent H.:

https://youtu.be/StVIVT0FUZM

Cheers!

 

ENGL 3726-01 Final Group Project

Janelle O.

The Stages of Using Blender as a Tech Novice

 

Stage 1: Fear


I’ll admit, I let my insecurities get the best of me.

Make a virtual reality object? Me? Someone who's never worked on anything STEM-related before? In a program that has millions of buttons to press and things to mess up?


I watched countless YouTube tutorials and felt like the hosts were speaking in a foreign language. It was so difficult for me to believe that I could comprehend, and eventually implement, what they were telling me. The self-inflicted intimidation inevitably led to…

 

Stage 2: Frustration


No commentary necessary.

But, if you were wondering how many times someone can restart a project on Blender and come thiiiiiiiiiiis close to throwing a computer across a room…


On the brink of a complete meltdown, I took a step back and realized that I may have been giving Blender too much power. Were the tutorial gurus really speaking a different language? Was it truly as difficult as I believed it to be? Absolutely not…and once I accepted that, it was smooth sailing toward…

 

Stage 3: Success


Ultimately, completing my object brought me relief; relief that I would never have to use Blender again! Kidding…kinda.

I felt a wave of satisfaction and pride knowing that I proved myself wrong. I was able to use software I had never used before to make an object in a world I had never visited prior to this semester. This project allowed me to fine tune a skill I didn’t know I possessed in a field that intimidated me beyond belief.

Am I switching my major to Computer Science? Absolutely not.

Did I gain experience and insight in an important field? Yes.

Most importantly, did I learn and grow in ways I never could have imagined? Yes.


 

Vincent H.

Skee-Ball Machine and Game Trailer

 

With limited experience in MATLAB and none in computer graphics, I cautiously approached this project, which proved to be an interesting venture into the world of 3D modeling.

Our task: supplement an existing virtual reality model of Aech’s basement based on Ernest Cline’s Ready Player One.

Since many of the objects explicitly named in the book had already been created, I needed to immerse myself in Cline's world and extrapolate other plausible objects. I was inspired by his line: "Most of them were gathered around the row of old arcade games against the wall." Though not quite old enough to be an '80s kid, I still felt waves of nostalgia as I recalled running through a labyrinth of arcade games in Chuck E. Cheese's as a child. Each corner was filled with vibrant lights that tempted me with the opportunity to win tickets. Stopping in front of a Skee-Ball machine, I found the objective simple, and whichever prize I earned would be a manifestation of my skill, not chance.

This childhood memory compelled me to create a Skee-Ball machine, and so I did.

Perspective view of Skee-Ball machine in Blender

I started by researching various makes and models of Skee-Ball machines so I could build a historically accurate model; in doing so, I also learned more about the rich history of the game. Skee-Ball was invented in 1908, and aggressive marketing campaigns created an exciting buzz around it; the game was eventually featured in various media outlets, including as the game Superball on The Price Is Right.

After finding an ideal model to recreate, I began in Blender by creating a scaffold of rectangular blocks to form a vague table-like structure. Blender has a variety of tools for detail work, so after creating the basic shape, I began adjusting the edges, gradually working toward the sleek, tapered model. After using Bevel to create curved surfaces, Knife to stencil point values onto the backboard, and Subdivide to generate the metal mesh, my Skee-Ball machine came to fruition and fits well in the room.

Clockwise from top left: (1) Model of a vintage 1980s Skee-Ball machine for design accuracy, (2) Wireframe view of the final object created in Blender, (3) Rendered view of the Skee-Ball machine and ball in Blender, (4) Rendered view of the Skee-Ball machine and ball in the room environment created with Unity

The object at the end of this meticulous process was similar to the wireframe structure shown above, except for one important feature: it was colorless. I spent the next few days experimenting with different shaders in Blender to generate the matte texture and metallic luster of the machine's frame. Once the frame was done, I was stumped by the deceptively intricate and random texture of the felt covering the machine's surface. Luckily, a trove of insightful guidance and templates is available on the internet, so I found a relevant node map and adapted it for my use.

Node map of blending shaders to create colors and textures

After a few finishing touches, I exported the object and passed it to Vivian for the final step: uploading it to Unity. This project gave me a profound appreciation for the computer graphics all around us, increasingly seen in movies. Although true experts with years of experience can create models nigh indistinguishable from real-life objects, anyone with the dedication to learn can become proficient within a month. As a STEM major, I truly valued this rare opportunity to exhibit artistic creativity and learn cross-disciplinary skills in an epic quest to remediate Ready Player One.

 

Robert W.

The Music

 

My responsibilities differed a little from those of the other group members. My time was split fairly evenly among three pieces of software: Blender, Finale, and Cubase. I wanted to add a little aural spice to the otherwise silent basement. Since a large portion of Aech's and Parzival's time is spent playing video games, I thought video game music would make the most sense as an incidental soundtrack. One of the only games the author mentions explicitly as a favorite of Aech and Parzival (which also has a decent soundtrack) is Golden Axe.

My 3D model of the Golden Axe Genesis cartridge

I turned to that soundtrack as a source material for my synth work. I arranged a piece of music from the game, which took several hours of transcription and input in the notation software Finale (the only piece of software I was already familiar with).

The Finale file of my arrangement

From there, I exported the midi file into the digital audio workstation Cubase. Cubase is where I got to transform my generic midi file into a slightly more interesting electronic piece.

Raw import of my Finale file in the Cubase DAW

I wanted my percussive sounds to emulate those of a Sega Genesis, so I worked with the virtual instrument VOPM, a digital synth designed specifically to emulate a Sega Genesis sound chip. Working with this synth proved a unique challenge: you must describe the sound in your head in terms of attack time, attack delay, reverb, detune, modular shape, and so on, in order to create the instrument sound you desire.

VOPM interface

In order to add a unique, slightly more palatable character to my arrangement, I used some virtual instruments created in the Spector digital synth, which is a more modern and practical plugin than VOPM.

Spector synth. Fun fact: the total cost of the software I used for this project exceeds $1,000. Thankfully, Blair owns all of it, so I didn't have to foot that bill.

Even though music is the focus of my degree, my engagement with Cubase and electronic music in general has been limited. Nearly every step of this project (outside of arranging the original track) was a new and valuable experience for me.

 

Wooseong C.

Modeling and UV Mapping  

 

I came into this project without even knowing what Blender was. When I opened the software for the first time, I was overwhelmed by the number of different tools and view modes it offers. I mainly learned from YouTube videos that showed the step-by-step process of making a 3D object. Along the way, I picked up two Blender fundamentals: modeling and UV mapping.

Modeling simply refers to crafting the shape of your object to match that of the real-world version. This often requires you to have a second window with a picture of the real world object you can refer to:

Modeling an open magazine – I chose to create comic books to add to our VR version of Aech's basement. I wanted to have a comic book that was open to add to the "realistic" aspect

Having a reference is really helpful, as it allows you to be more detailed and accurate. Making an open comic book entailed creating a plane, extruding edges to form the folded part and the curvature indicated by the red arrows (above), and adding the Subdivision Surface and Solidify modifiers.

Another Blender skill I learned was UV mapping. This step allows the user to map an image acquired online onto his or her object, which let us map images onto our posters and comic books.


For the Astrosmash cartridge, additional steps were needed during UV mapping. Because I needed to add different images to the different surfaces of the object, I had to incorporate Photoshop. The steps entailed unwrapping the 3D object into a 2D map, exporting this UV map to Photoshop, adding the online pictures to it there, and importing the new UV map back into Blender.

Overall, learning to use Blender to add objects to our VR representation of Aech's basement was a very valuable experience. Although the initial learning stage was difficult, I can now use Blender at a beginner level. I now appreciate the vast number of options and tools that overwhelmed me in the beginning, because they reflect the infinite possibilities one has to create characters and objects. I am very glad that I took this course during my last semester at Vanderbilt. It was different from all the other courses I have taken, going beyond traditional essays and lectures to create a more hands-on learning experience. This final project gave me a real appreciation for, and the confidence to continue using, programs like Blender and Photoshop in the future.

 

Vivian L.

Uniting All Elements in Unity

My role in the Aech's Basement project group was to take each of my group members' separate Blender 3D objects, plus any other creations they worked on, and implement them in the Unity Scene, ensuring everything looked as intended and worked properly. Another part of my job was to raise the almost nonexistent level of interactivity in the room. Prior to this semester's final project, the player could not move within the room at all, and there was no way to touch or pick up objects; I desperately wanted to change this.

At the beginning of this project, while my other group members worked on creating their assets, I researched and tested different ways to allow camera and body movement in VR. I looked into Unity's Oculus Rift support pages and watched many YouTube explanations of how to track the headset and what the sensed controllers are called in the Scene. My initial plan was to create a system in which a camera was affixed to a Capsule 3D object representing the player. The capsule would then rotate and translate itself according to the detected headset movements and the Oculus Rift controller joystick input. To create the hands, I wanted to move two Sphere Colliders to wherever the Touch controllers were sensed. Lastly, to simulate picking up objects: whenever the player moved a hand Sphere Collider into another object that was meant to be moved and had a collider on it, and the grip trigger on that hand was held down, the object would follow the Sphere Collider's movement.
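A minimal C# sketch of that grab idea is below, assuming a hypothetical HandGrabber component on each hand's Sphere Collider (set as a trigger, with a kinematic Rigidbody so Unity fires trigger events) and an input axis mapped to the grip trigger. It only illustrates the plan described above, not the code that ended up in the scene.

```csharp
using UnityEngine;

// Hypothetical component for each "hand" sphere collider: while the grip axis
// is held and the sphere overlaps a grabbable object, the object follows the hand.
public class HandGrabber : MonoBehaviour
{
    // Input axis mapped to this hand's grip trigger (name is an assumption,
    // configured in Unity's Input Manager).
    public string gripAxis = "LeftGrip";

    private Transform heldObject;

    private void OnTriggerStay(Collider other)
    {
        // Grab only while the grip is squeezed, the hand is empty,
        // and the overlapped object is tagged as grabbable.
        if (heldObject == null && Input.GetAxis(gripAxis) > 0.5f && other.CompareTag("Grabbable"))
        {
            heldObject = other.transform;
            heldObject.SetParent(transform);   // object now follows the hand
        }
    }

    private void Update()
    {
        // Release the object when the grip is let go.
        if (heldObject != null && Input.GetAxis(gripAxis) < 0.5f)
        {
            heldObject.SetParent(null);
            heldObject = null;
        }
    }
}
```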


Sounds fairly simple, right? I had initially thought so too.

However, I soon realized that although I had the thought process down, I had no idea how to actually code it into the room. I had scarcely any experience writing C# code, which is the primary language in Unity, or using any of its numerous class libraries. I was also still piecing together how controller input is read in Unity, and was largely unsure of how to read the physical headset location. Another difficulty was the inability to test my code outside of the VR space in the Wond'ry.


After another week or so of struggling to write my own interactive player in VR, I decided to ask Dr. Molvig for help, since he was my professor last semester in the class Virtual Reality for Interdisciplinary Applications. He showed me many helpful websites and more videos on VR player bodies, especially Unity's own player model. Following tutorials, I was able to put the model into the scene, but it didn't quite work as expected.

#1 tip for all things Comp Sci: It never works like you expected

In fact, it crashed the game many times, and even when I got the Scene to play, my attempt to recreate the hands painstakingly exactly from a recommended video was ruined by the fact that the controllers did not seem to be tracked in the scene at all! I believe this was partially due to last semester's attempt at a teleportation system, which was quite ineffective and had tampered with the Scene files.


This led to another large difficulty in my project. Since Unity does not allow copying Scene objects with their existing properties into a separate project, I had to choose between spending more time trying to fix the existing Scene or creating an entirely new Scene and copying all of the objects into it by hand. Given all of the time already spent trying to bugfix whatever was going on in the original Scene, I decided that starting over might be my only choice if I wanted movement and object interactivity to be part of the room. I spent many days re-importing and organizing every detail of the room, testing along the way to make sure that body and hand movement still worked.

Once I had finally recreated the original room, I held my breath when hitting the “Play” button one more time. It worked! The camera moved, my hands moved, and my body moved! I was filled with relief, but this was only the first step.

It was time to put my teammates' efforts into the Scene. Altogether, we had quite a few assets to add to the room: comic books, a Skee-Ball machine, posters, a VCR player, game cartridges, and more. (I'm sure my teammates could tell you a lot more than I can about the specifics of their objects!) I also added a few Coke cans, mainly for hand and object collision testing.

A few things happened when I tried moving the objects into the Scene.


  1. Objects were completely devoid of color.
  2. They were ginormous.

I knew I had to fix the color issue, but the size was not a problem, since I could scale the objects at will; adding the correct materials was a separate hurdle. I researched what each of the little options on the objects did and realized that clicking a button labelled "Lightmap Static" exposed many options, one of which was assigning materials to certain surfaces of the object. This meant I would first need a material to assign, though. My teammates had largely used pictures from online to wrap around their objects, so I looked up how to create Unity materials from images. Once I figured that out, I realized that objects that were supposed to have separate images on each face, like the Betamax VCR player, had only one surface on which to put the material. Without a proper UV map, I worked around this issue by building the object out of six quads, each representing a side, and then assigning a separate material to each of them. This worked nicely for neat geometric objects, but I worried about what would happen with more complex ones.
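For anyone curious, the same image-to-material step can also be done in code; the sketch below shows one quad of such a six-quad box being given a material built from an image. Vivian did this through the editor's Inspector, so the script, the Resources path, and the "Standard" shader choice are all illustrative assumptions.

```csharp
using UnityEngine;

// Illustrative only: build one face of a six-quad box and give it a material
// created from an image, roughly mirroring the per-face setup described above.
public class QuadFaceBuilder : MonoBehaviour
{
    private void Start()
    {
        // Load an image placed under Assets/Resources/ (path is hypothetical).
        Texture2D faceImage = Resources.Load<Texture2D>("betamax_front");

        // Create a material from the built-in Standard shader and apply the image.
        Material faceMaterial = new Material(Shader.Find("Standard"));
        faceMaterial.mainTexture = faceImage;

        // One face of the box: a built-in quad primitive parented to this object.
        GameObject face = GameObject.CreatePrimitive(PrimitiveType.Quad);
        face.transform.SetParent(transform, false);
        face.GetComponent<Renderer>().material = faceMaterial;
    }
}
```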

Thankfully, my teammates supplied me with UV maps that worked like magic for some of the other objects. For others, I simply looked up images online and utilized those. Sizing was done as realistically as possible.

Eventually, every asset was placed into the Scene with the correct coloring and sizing. I was very happy with the results, and seeing everyone's work in the space, in VR, amazed me. Lastly, I had to put Robert's Golden Axe music into the scene. I decided to create an Audio Source centered at the TV, attached the Golden Axe audio clip to it, and adjusted the radii that control the distance at which the volume plays normally and the distance at which it completely fades out.
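In Unity terms, those two radii correspond to an AudioSource's minimum and maximum distance. A small sketch of that kind of setup follows; the component name, the distances, and the assumption of a linear rolloff are mine, not necessarily what the final scene used.

```csharp
using UnityEngine;

// Illustrative setup for positional music at the TV: full volume inside
// minDistance, fading to silence by maxDistance. All values are placeholders.
public class TvMusic : MonoBehaviour
{
    public AudioClip goldenAxeTrack;   // assigned in the Inspector

    private void Start()
    {
        AudioSource source = gameObject.AddComponent<AudioSource>();
        source.clip = goldenAxeTrack;
        source.loop = true;
        source.spatialBlend = 1f;                      // fully 3D, positional sound
        source.rolloffMode = AudioRolloffMode.Linear;  // reaches zero exactly at maxDistance
        source.minDistance = 2f;                       // full volume within 2 m of the TV
        source.maxDistance = 15f;                      // completely faded out beyond 15 m
        source.Play();
    }
}
```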


This marked the end of incorporating each team member's object (and music) into the final room. Aech's Basement is far from complete, but progress is progress, and each of us had our own learning experience completing our part.


Seveneves: The Role-Playing Game!

 

[featured image taken from https://www.nealstephenson.com/news/2015/05/26/seveneves-site/]

When you were a kid, did you ever play those games where you would look up to the sky and imagine the clouds as bunnies, dragons, or anything in between? Did you ever play the ever-popular “the floor is lava” game? If so, fantastic, because as a kid you’re sort of expected to have an active imagination. But why does this expectation fade over time? What about “growing up” means that you have to lose your creativity? Well, we believe there is absolutely no reason for that imagination to wane, and in this blog post we’d like to suggest a fun way to keep your inner kid alive and well.

It's called a role-playing game (RPG for short), and you may have heard of more popular ones, like Dungeons & Dragons, which has seen a resurgence of interest following its use in popular culture (think Stranger Things). We made an RPG that was a combination of Dungeons & Dragons, Stars Without Number (a sci-fi RPG), and Seveneves (a science fiction novel by Neal Stephenson). Ours may not be the best RPG out there, but we hope this blog post shows how, with enough time and thought, anybody can have a great time making and playing an RPG.


The RPG is an idea rooted mainly in the fantasy genre and, by association, the wider genre of romance, something we have discussed at length in this class. An RPG brings together a group of people, often with varying skills and interests that offset each other, around a shared goal. There is usually some form of quest, self-redemption, or self-revelation, and because RPGs focus more on player-character development than most other forms of interactive media we discussed, we thought one would be ideal for remediating a science fiction novel. Additionally, we both have years of experience playing role-playing games such as D&D and Pathfinder, and we have planned our own campaigns as well as played characters in others. All of those campaigns were solely in the fantasy genre, however, so if we were going to make a science fiction RPG, we would have to do a little research.


The first order of business was understanding the book on which we were basing our RPG. Internet synopses can tell you more than we can here, but this is the gist: the moon is destroyed by an unknown "Agent," and in the two years before the moon's debris crashes to Earth and destroys everything, humanity stashes itself in space, returning thousands of years later as a collection of seven races stemming from the seven fertile women who survived in orbit. You can find more contextual information later in the post and in our notes, or if you're really invested you can even read the book. The point is that this book was perfect for creating a sci-fi RPG.

Next, we scoured the Internet for tips and resources on how to make a sci-fi RPG. From Googling those exact words to thumbing through Reddit threads, we took a few days to amass as many ideas as possible. We settled on the system Stars Without Number, as it is freely available and seemed well developed for our purposes. Specifically, it does a great job of reframing D&D classes into various jobs and skills more suitable for life in space than in a fantasy world, and the system itself was flexible enough to modify.


Why would we need to modify the system? Well, our goal for this project was to develop a "one-shot" game: an RPG meant to be played in a single session rather than over a multi-session campaign. With this in mind, our primary concern was giving the players enough time to explore the world we were creating. We adjusted the mechanics for skill checks to depend less on player stats and more on intent and narration. In practical terms, players could essentially attempt whatever they wished without all the rolling and messing about that takes time, so the Game Master (GM) could provide more narration about the environment. It worked out pretty well, as you can see in the videos we've placed here and throughout the post.

There are many other mechanical considerations to make when planning an RPG. Where is the game set? What is the history of this setting? What is society like? What maps do you need to make? Whom might the players encounter, and what will that encounter look like? Is there a point system? Since we based our game on the world of Seveneves, many of the contextual questions were already taken care of. We answered the RPG-specific questions, and you can see our notes in the Google document linked later in this post. The doc can speak for itself, but we'd like to briefly elaborate on the point system, which we developed from the ground up. Normally, "points" in the RPG world are experience points that accumulate to level up the player. We instead used "assets" as a way of measuring how well the players were forming bonds with the species of Old Earth, and whichever team (Red or Blue) held the most assets by the end of the session, by identifying and succeeding at more opportunities, won.

We gathered some of our friends to play a short one-shot on a Monday. In RPG terms, a "one-shot" usually takes one or two sessions to finish, as opposed to the longer story arcs of regular play. We planned on filming the session to use in our presentation, so we wanted every possible race represented. Our friend Penn played an Ivyn engineer, Jacob played a Camite priest, Nick played a Julian aspiring politician, Jamz played a Moiran biologist, Ethan played a Teklan transport specialist, Jordan played a Dinan astronavigator, or "astrogator," and Matthew played an Aïdan technician. Torie acted as the GM, or Game Master, who narrates the campaign and prompts the players to make various "checks" to see whether they successfully complete the actions they wish to perform.


The plot itself was simple: the player characters had been gathered as part of a co-racial mission, sent from a space station orbiting the Earth to explore the newly terraformed surface and investigate it for human life. You can read more about it here, and you can also see the racial traits and backgrounds we provided for our player characters to choose from. In our game, which lasted about three hours (typical for a standard RPG session, at least for us), the group encountered a race of humans who had adapted to living underwater for over 5,000 years, whom the orbiting population nicknamed the "Pingers" after the sonar-esque transmissions intercepted from their society. While the Pingers were a little hostile at first, our group managed to curtail the growing violence and establish good terms with one of their groups. (Here is a video of their "first contact.") They shared technological knowledge, made some vague promises of treaties with military leaders, and were pointed toward the underground race of humans (the "Diggers") for help repairing their broken communications device.

As a player character, or PC, I found the sci-fi context fascinating. Personally, I've always loved engaging with media from this genre; though everything is scientific and futuristic, it's still all imagined and possible, which makes me feel optimistically youthful. For this game, we had a mix of rambunctious and withdrawn players, which made the three hours we played pass with much entertainment. I was in the unusual position of being a quasi-GM, meaning that I was privy to everything that might happen in the game but still had to engage as a player who did not know these things. Thus, I found myself nudging the players toward paths that I knew would keep the action flowing. I wish we had had more time to thoroughly explore the world we had created, but that's just the nature of a one-shot game.


As the GM, Torie found that there was a lot of the story she had not prepared for. Unfortunately, only one member of the group besides us had actually read Seveneves (a couple of them did read it after the session, though!), and this proved to be a bit of a problem: Torie ended up spending a lot of game time explaining background situations and the mechanics of the players' society and objects. Both of us (Torie and Matthew) have been Game Masters for our own games before, and while Torie had over twenty pages of GM notes, we both knew that planning a successful campaign and story takes much longer than a single month. Even with the most careful planning, though, the fun of RPGs is that the players make their own decisions, which means something is always happening that the GM has zero plans for. Torie expected this and, because of it, was able to work mostly successfully with the players' wishes as the session went on. We had hoped to have contact with both the Diggers and the Pingers, but after three hours the group had only made it to the Pingers, and we decided to call it a night. (This is common in our experience as GMs and players; stories always take a little longer to tell and roleplay than you think they will.)

In conclusion, we loved the opportunity to take something we both love to do as a hobby and integrate it with the themes we have learned in ENGL 3726 with Professor Clayton at Vanderbilt. We have both grown up with the "new" forms of media we have discussed in class and have been fans of the fantasy and sci-fi genres since childhood. Being able to put those together in this new creation was a really satisfying culmination of these themes for us, and we know that our friends enjoyed playing through the story with us as well.


A Song of Gunpowder: An Adaptation of Childe Roland to the Dark Tower Came

My goal in using Twine to create a text adventure game based on "Childe Roland to the Dark Tower Came" was to put a creative spin on an intentionally ambiguous story, remediating a poem I thoroughly enjoyed in the context of a game with original art. I wanted to capture the romantic themes, the narrative of fate, and the eeriness of the ominous dark tower, while at the same time applying elements of more conventional storytelling in the form of dialogue, an antagonist, and a definite threat. I also wanted to incorporate some of my own fantasy creations and use anachronisms to enable more robust world-building as well as to heighten the dangers faced by the player protagonist. The concept art I created was intended to increase the sense of immersion and further the mood of peril and the prophetic theme. Some of the inspirations for my original creations and storyline (excluding Childe Roland) included Game of Thrones (in some of the place names and sigils) and the Metro 2033 book series (in the design of the Stalkers).


Originally, my goal was to have five or six distinct storylines and endings, but that proved to be an overwhelming task as soon as I started writing. In the end, I shaped the narrative toward what I wanted the player to experience, leaving choice mainly in the service of escaping death and furthering the mission (until the final choice). I felt this was more in keeping with the general premise of the poem, since the journey is a fated and unavoidable one, the only difference being the perfectly ambiguous ending. In my game, because I made the threat of the Tower definite, the endings are clearer and are based on my interpretation of the text and how best to remediate it. The second-to-last stanza reads: "Not hear? when noise was everywhere! it toll'd / Increasing like a bell. Names in my ears / Of all the lost adventurers my peers, / How such a one was strong, and such was bold, / And such was fortunate, yet each of old / Lost, lost! one moment knell'd the woe of years." To me, this meant that Childe Roland was fated to the same end his peers met, but he was happy to have completed his quest. That sentiment is reflected in one of the final choices in my adaptation.


In terms of using Twine as a program, for the most part I enjoyed it immensely. Passage creation is intuitive, the program is completely free, and there is a very helpful and welcoming community of experienced Twine users (both in Twine itself and in CSS and JavaScript). I was able to implement most of the storytelling tools I wanted, as well as cosmetic and aesthetic alterations. However, Twine's glaring problem is music and image imports. To incorporate original content of either kind, you are forced to use an HTML converter or to create a website, upload the pictures, and use the links as part of the code. Both options proved horrendously buggy (the images refused to be resized properly to fit the contours of the game, and YouTube links don't work for incorporating music), so I was forced to abandon my plans for music and to relegate the art I created to concept material. If I were to start the project again, I would spend time figuring out more robust options for image incorporation and aesthetic manipulation to make the game more unique and better suited to my intentions. There were also many interesting prospects for the game that I had to pass over because of the time limit and that I would definitely be interested in adding if I make another game (health, inventory, weapons, etc.). Overall, though, I found this final project workshop to be a fantastic opportunity, not only for my personal development in learning to code and use Twine, being creative, and incorporating elements of other media into a video game setting, but also for better understanding the possibilities for remediating the romance narrative.

 

Here is the link to download the game. You can play it right in your browser.

http://www.mediafire.com/file/j02ucfxsesc6hts/Childe+Roland+to+the+Dark+Tower+Came-+A+Song+of+Gunpowder.html

Andrew Hoffmeister

New Adventures in Old VR (And Vice Versa)

The Adventure Science Center, located about an eight-minute drive from Vanderbilt University's campus in Nashville, Tennessee, is an incredibly fun place to explore, learn, and, in my case, work. I was an exhibit attendant and front desk operator at ASC for quite some time, and in my tenure I was able to witness first-hand the effects of new media and technology on kids' education.

One of my jobs was to maintain and run the Blue Max Flight Simulator, a two-person pod capable of recreating the flips and turns of a digital roller coaster or the flight of a fighter jet. The concept was not new; arcades and play places had similar devices in my childhood, but this was the first time I was technologically familiar with the ride. The roller coasters were more like incredibly active movies, in which the viewer watched a tightly shot screen of a digital roller coaster while the pod moved to simulate the drops and flips. The roller coasters were not the most realistic things in the world (we had ones where you rode over space or through a volcano), and even the ones simulating a realistic experience still gave away the simulation through their graphic composition, or through the incredibly loud Red Hot Chili Peppers mix blaring through the speakers.

The fighter jet was user-controlled, with joysticks located on the sides of each seat, and again, the graphics left a lot to be desired. But the kids' sense of realism was more than fulfilled by having the pod respond to their joystick movement, actually putting them into a dive or repeated barrel rolls. Physical movement, it seemed, made up for the pixelated images.

I spent a lot of time watching the rides in the Blue Max bay on the control panel screen, listening to the shrieks and swears of passengers, and the weirdest thing happened: it started to get old. I was bored of the standard rides and loops, could repeat the theme music for each ride, and became more concerned with how long it took patrons to empty their pockets. Then one day a child sat at the desk next to me, playing with the flight simulator that was identical to the one in the pod, its joysticks and buttons controlling a wide array of turns and data. He had figured out, all on his own, how to work the joysticks, shoot, and switch camera angles. And he couldn't have been more than seven years old.

The ASC recently added a VR center for kids ages 13 and up, and it currently has a program designed to put the kid directly at the center of the process of building a skyscraper in downtown Nashville. The equipment was clunky and hard to work with, and more than once we found that minor inconveniences could shut down an entire station. No one wanted to work at the VR station. It was boring: all you did was watch people hooked into a complicated system raise and lower their hands and turn around in a blank, empty space. But in the players' eyes, they were lifting cross beams, choosing window styles, and directing cranes.

It is a strange feeling to work with VR, to see the detached human side of the virtual playground, and it is easy to get bored with it, as with any job. But looking back, the speed with which VR incorporated itself as a normal part of our lives and work environment was disconcerting (the new exhibit was installed in a month, we were trained for a week, and then it went live). It raises a lot of questions for me about the future of this sort of technology and the ease with which we adapt to it. Where do we go from here?

The Place of Video Games During Finals Season

Yes, it's that time of year again. What should be a wonderful and beautiful season of Christmas music and holiday cheer is spoiled by the crushing realization that we all have a lot of work to do before we can enjoy it. I find the behavior of many people very interesting during this time of year. Some folks seem to maintain a quasi-cheery attitude, knowing that they've done this before and they'll do it again; to them, worrying only doubles the pain, so what's the point of getting too wrapped up in your studies? Others are quite open about how much they're struggling, which I think is some kind of odd coping mechanism. This (false) dichotomy, though, has shown me one rather interesting thing about the use of video games during this time of year.

A very retro Christmas to all

The "chillers," as I will call the first group, keep most of their leisure habits the same. Sure, they'll devote more time than usual to their studies, but they still find time to game, watch some Netflix, or go to the gym. I think this group tends to do better in the long run. Go ahead, search whether it's better to take breaks while studying, and I think you'll be pleasantly surprised to find good data suggesting we do better sectioning our work into chunks rather than punishing our brains for eight hours straight. Even my pre-med roommate finds time to play some mobile games in between his intense biology slides. I'm certainly not saying you should devote this weekend to beating every side quest in Skyrim (ha), but it might not be the worst thing to knock out one.

To all you "thrillers" who lock yourselves in Stevenson for 12 hours on the weekend, only emerging for food and water: take some time this finals season to try out quick breaks. Even if it's just to check social media or listen to a few songs, give your brain a chance to synthesize everything it has taken in. I was once like you; I had a lot of internal guilt to overcome whenever I enjoyed some leisure time, telling myself I was wasting time I would need for work. Give it a shot, though. I think you'll be surprised at how much stronger your work will be when your brain isn't a heaping pile of mush.
