Research Update: #3

 

Controllers are handheld devices used in gaming to maneuver the characters or subjects you control. Many different types of controllers exist, with particular specialties ranging from complementing unique playstyles to improving input accuracy. Standard controllers differ from the ones professional esports players use in a number of technical ways: some come with larger directional pads, more pronounced buttons, replaceable peripherals, and a whole host of other tools. They also differ in size, shape, color, and name, and many are associated with big brand names (e.g., Razer and Nacon, below).

Gaming on a PC typically relies on a keyboard and mouse. On that platform there also exist personalized tools of the trade to up your game. PC gaming is often versatile enough for handheld controllers to be an option, provided a given game supports them. But most players favor one input or the other depending on the game of choice and the benefits garnered: mouse play offers extreme precision, while controller pads offer great comfort.

Fun Fact: Fighting game players are famously known for still using retro arcade joysticks to this very day. They prove very helpful for accurate inputs in a genre that depends on executing a sequence of button presses in a limited time frame.

 

Prototype Plan

Prototype 1: Unity

For my first prototype, I will work in Unity to create a model resembling the globe I plan to have in my finished product. The purpose is to gain familiarity with Unity, since I have rarely used it in the past. The end goal for this prototype is a model that can be interacted with using the mouse in Unity.

Prototype 2: Leap Motion interaction

For this prototype, I envision experimenting with the Leap Motion camera and synchronizing it with Unity, so I can get a working model in Unity that can be interacted with using Leap Motion technology. The goal isn't a complex display, but rather to show I understand the basic mechanics well enough to make objects in Unity move using the Leap Motion camera.

Prototype 3: Unity and Leap Motion

The final prototype will combine the first two prototypes. The goal here is a textured model of the Earth that the user can spin, click, and use to identify countries. It will be a very simplistic form of the final project I envision; however, it will show that I have a strong baseline for my thesis project at the end of the first semester.
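Though the prototype itself will live in Unity, the math behind "click to identify a country" can be sketched independently: a raycast hit on the globe gives a 3D point, which converts to latitude and longitude for lookup against country data. A minimal Python sketch of that conversion, assuming a unit-sphere globe with the y-axis through the north pole (engine axis conventions vary, so this is illustrative only):

```python
import math

def point_to_lat_lon(x, y, z):
    """Convert a point on a sphere (e.g., a raycast hit) to
    (latitude, longitude) in degrees.

    Assumes the globe's y-axis points to the north pole and the
    prime meridian crosses the +z axis.
    """
    r = math.sqrt(x * x + y * y + z * z)
    lat = math.degrees(math.asin(y / r))
    lon = math.degrees(math.atan2(x, z))
    return lat, lon

# A point on the equator at the prime meridian:
print(point_to_lat_lon(0.0, 0.0, 1.0))  # (0.0, 0.0)
# The north pole, roughly (90, 0):
print(point_to_lat_lon(0.0, 1.0, 0.0))
```

The resulting latitude/longitude pair could then be matched against country boundary polygons, which is the part the finished Unity project would handle.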

Research Update 3

This week was all about learning how I would incorporate visual effects (VFX) into my game. Going into this project, I knew I wanted to incorporate a variety of VFX (smoke, explosions, starbursts, etc.) to make the potion crafting feel visually rewarding. As the player crafts increasingly difficult potions, the VFX would increase in scale to reflect that difficulty.

I found several forum posts from people working in the industry, as well as a GDC talk, about approaches to developing stylized VFX. Since my project has a more cartoonish aesthetic, I want the VFX to reflect this.

 

This Game Developers Conference talk on the creation of VFX for the game Gigantic was very insightful, but unfortunately the GIFs in the slides don't play, so I can't visualize the step-by-step process of how they created some of their effects.

https://www.gdcvault.com/play/1024715/Art-Directing-VFX-for-Stylized

 

One of the developers of Rime gave a conference talk on his experiences creating VFX in Unity for the game, specifically the waterfall.

https://www.youtube.com/watch?v=fwKQyDZ4ark

An associate of his then took this talk's material and created a guide on how to recreate the same waterfall effect.

https://www.artstation.com/mathroodhuizen/blog/ZEgV/stylized-vfx-in-unity-a-rime-inspired-waterfall-full-breakdown-part-1

 

Finally, a game developer named Etienne POV wrote a blog post detailing how they created their stylized VFX. The effect they break down is one they entered into a Riot Games competition and won. This was an incredibly helpful resource because it is almost the exact style I'm looking to recreate within my own project.

https://80.lv/articles/creating-stylized-vfx-in-unity/

Their ArtStation has also given me a lot of inspiration and ideas for what I can do myself.

https://www.artstation.com/etiennepov

Prototype Plan

Prototype 1: 3D Models

For my first prototype I'll be creating various models that I'll need for my project, using Maya and ZBrush to sculpt them. My goal is to model unique potion bottles, a cauldron, a large book, and some pots. I think these will be a good variety of items to test the artistic style I want. It will also be good practice to see which of my initial potion designs translate best to 3D.

Prototype 2: Textures

For my second prototype I'll be creating various textures for my project. To do this I'll use Substance Designer and Substance Painter to create and edit the textures. My goal is to make various stylized textures that I can use throughout my scene. This prototype will take place after the first one because I want to establish the aesthetic of the models before I attach any textures to them.

Prototype 3: Movement in VR

For my third prototype I'll be creating a simple VR environment the player can walk around in, with objects they can pick up. This is the last of the initial prototypes because it will take the most time. I will be using Unity and online tutorials to accomplish it. My goal is for the player to be able to move freely around the room with responsive controls and a good camera. I also want the player to be able to pick up and move objects around the room, possibly with different physics. This room will be rather plain because I want to focus on the controls, the camera, and the physics of the objects.

Research Update 3

For my third piece of research I wanted to get into the minds of the professionals. After gaining knowledge on the technical side of filmmaking, I wanted to gain a better understanding of the intangibles of those who make films. One of the most prominent film communities currently is the "No Film School" community. They provide a web page, social media, and YouTube clips as resources for young, up-and-coming, and even experienced filmmakers. I recently came across the YouTube video linked below, and through it I was able to get into the minds and creative thinking of directors from the Sundance Film Festival.

These directors shared many interesting ideas and thoughts. What I enjoyed most about this video was how it gathered a group of people who bring different stories, perspectives, and worldviews together. One director spoke on tapping into what you know. Many young directors and filmmakers often overthink their stories and lose authenticity in their films because they are trying to create a world they know nothing about. I learned to stay true to myself and stay within the things that I know. I hope to use my own experiences and thoughts to create a real and authentic film; regardless of the genres involved, I hope the style and story stay true to who I am as a filmmaker.

Another idea that stood out to me came when one filmmaker said you'll never know until you try. This idea is often repeated and can easily be overlooked despite the importance it holds. I have found myself many times doubting the possibilities of my ideas and then watering down my creativity because I felt I wasn't able to create what I was envisioning. There will never be a perfect time or scenario, so it's best to just try and not fear failing, because even if you fail you will be able to learn from it. Growth cannot be achieved unless failure was present beforehand. This is a motto that I live by and will carry into my thesis project.

The main idea discussed was staying true to who you are. Another director shared with the group that if they were all to create a film about Hurricane Katrina, they would all create totally different films despite it being the same topic. The reason is that we all have different perspectives and journeys, so we all see the world differently. What makes a great filmmaker is the ability to show the audience how you see the world through your own eyes. When a person is able to create with authenticity, it will be apparent to the viewers and in turn make them more attached to and invested in your film.

 

 

Prototype Plan

Prototype 1: Storyboarding

Storyboarding is one of the most important parts of the creative process. I will either physically sketch out the storyboard or use a software program that allows me to create it digitally. This is a key part of prototyping because I will be able to see how to connect the storylines as seamlessly as possible.

Prototype 2: Edit youtube clips

The plan for my second prototype is to find various YouTube clips and attempt to change their feel and genre through editing alone. Post-production will play a vital part in the execution of the multiple genres. When editing these random clips I will be experimenting with coloring, music, and editing choices. The editing process has the most control over the project. Adobe Premiere Pro will be the editing software I use to create these videos, so this will give me an opportunity to learn more about that specific program.

Prototype 3: Record a mock scene

I am currently recruiting the on-screen talent for the project, so for the mock scenes I'll be using actors who may not necessarily be in the final project. This will give me an opportunity to see who best fits the storyline and character portrayal. It also gives me an opportunity to work on my directing skills, and I'll gain a better understanding of each scene setup once I start recording the actual film.

Research Update 3/3

The theme of this week's research was technology. Last week I briefly mentioned becoming familiar with the Leap Motion camera; this week I took the opportunity to investigate further and see how practical implementations would work.

After reading many articles and watching quite a few demo and tutorial videos featuring the Leap Motion camera, I would describe it as a Kinect meant for PC use, and more accurate in every regard. Essentially, it tracks hand movements and reacts to them in the applied space. There were quite a few examples of this implementation: one video showed a man writing with his hands in the air and the words being reflected on the computer screen, while another showcased creating and playing with cubes in Unity. Speaking of Unity, in order to create the physical model that I envision (as of now) for my project I will need to utilize Unity, so I also did some research into that platform in the form of introduction and tutorial videos to refresh my memory.

Now, while I focused most of my technological research on getting familiar with Leap Motion and Unity, I also reached out to my student advisor to see what she thought would be good for implementation and whether there are other paths I'm not considering at the moment. She directed me to two different areas: Google Cardboard and interactive displays. While I didn't have quite enough time to research these before this blog entry, I do plan to review them this weekend and come to an understanding of the technology.

https://developer-archive.leapmotion.com/documentation/csharp/devguide/Unity_Demo_Pack.html

https://www.delltechnologies.com/en-us/perspectives/how-technology-is-transforming-the-museum-experience/

Prototype Plan for Visual Music for the Deaf

Prototype 1: Make a demo of the notes on a keyboard represented by colors.

Execution will be in video format, and I will presumably create it with After Effects. I hope to learn which graphics and colors represent the musical notes best. I have learned that After Effects does not render graphics in real time, so for this demo I can use it to represent what I want to convey with my project.
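Independent of After Effects, the core of this demo is just a mapping from notes to colors. As a sketch, that lookup might look like the following Python snippet; the specific colors are placeholders of my own choosing, not the researched palette:

```python
# Hypothetical mapping from the 12 pitch classes to RGB colors.
# These colors are placeholders; the final palette would come from
# the color-theory research, not this sketch.
NOTE_COLORS = {
    "C": (255, 0, 0), "C#": (255, 128, 0), "D": (255, 255, 0),
    "D#": (128, 255, 0), "E": (0, 255, 0), "F": (0, 255, 128),
    "F#": (0, 255, 255), "G": (0, 128, 255), "G#": (0, 0, 255),
    "A": (128, 0, 255), "A#": (255, 0, 255), "B": (255, 0, 128),
}

def color_for_note(note):
    """Look up the display color for a note name (e.g. 'C#')."""
    return NOTE_COLORS[note]

print(color_for_note("C"))  # (255, 0, 0)
```

Whatever tool ends up rendering the visuals, a table like this keeps the note-to-color decision in one place so it can be swapped out as the research evolves.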

Prototype 2: Make an interactive keyboard that shows the colors as you play each note.

Execution will be done through a MakeyMakey and a Max patch. The Max patch will include the code for the MakeyMakey as well as the graphics needed for the color representation. I hope to learn how to do visual graphics in Max, as I have not accomplished this before. I also hope that Max does not cause many technical errors in my implementation. For now, the MakeyMakey can work as a prototype, but after talking with a number of people, I might want to use a MIDI keyboard or controller in the final project.

Prototype 3: Improve the interactive keyboard to include MIDI timbres and multiple octaves.

Execution will be done through 2 MakeyMakeys and a Max patch. The Max patch will now include multiple octaves and a range of MIDI timbres, in addition to the graphics and code for the MakeyMakey. I hope to learn which colors can best represent each MIDI timbre, based on color theory research. By this point, all of the colors and placement of visuals will be entirely based on my research, and not just my assumptions.

Prototype Plan

Prototype One: The questions

I have started with my questions and outcome tree on Twine. My first prototype is the Twine layout, to see whether those types of questions and results can and do work.

Prototype Two: The 2D

I will use Unity to try out a 2D world that players can explore, as a means of getting the right understanding of how the world should look.

Prototype Three: Dialogue and conversations

Next, I will try to get dialogue working in the game. With the dialogue set up and a small 2D world, I can make choices produce different responses.

Research Update 3

Historical Context

https://www.theguardian.com/artanddesign/2006/jun/24/art.art

“Poems and paintings became music, and music became poems and paintings.”

While visual music is becoming more popular with the technology available today, there are many historical examples of visual music as well. While synesthesia is a natural human phenomenon, the deliberate linking of sound and sight dates back to the 19th and 20th centuries (The Guardian). Wassily Kandinsky, a Russian painter, captured the beauty of music through paintings and other forms of art. He wasn't alone in his ideas, either. Many other artists implemented similar ideas into their own work, linking sound and music with visuals and paintings of the Romantic era. These original ideas and inspirations went on to shape modern lighting and color in many projects today. For example, concerts across all genres have many lights that fit the theme of the show and each song. A slow, acoustic song has dim, soft lights, whereas a powerful, high-energy song has bright, fast-changing lights. In a way, modern concerts are visual music. With the empowering instrumentation and bright lights, concertgoers experience music in a way that isn't tangible at home.

Before the technology for modern lighting was available, Scriabin composed his work "Prometheus: Poem of Fire" (1909–10), which was intended to involve a color keyboard lighting up the entire concert hall with color. Scriabin had synesthesia: he perceived entire keys as different colors, as opposed to individual notes each having a color. In Yale's video on YouTube, they show the implementation of the color organ and how it is performed in concert.

Yet another example of live visual music is “The Firebird” and “The Rite of Spring,” which Igor Stravinsky composed for Sergei Diaghilev’s Ballets Russes company. In these pieces, Stravinsky intended for multiple forms of art to be present during the performance. Painters, dancers, and musicians combine their art forms to perform a giant art masterpiece for an interactive and immersive experience for the audience.

Current Context

Visual Music

In this video, Nahre Sol breaks down how she interprets complex chords. These chords aren't typically seen in pop music and are usually pretty challenging to analyze and play. To give an idea of normal chords, there are two types to start with: triads and 7th chords. A triad consists of three notes: the root, 3rd, and 5th. The root gives the chord its name (in a C major chord, C is the root), the 3rd is the third scale degree up from the root (in a C major chord, E is the 3rd), and the 5th is the fifth scale degree up from the root (in a C major chord, G is the 5th). In a 7th chord, we build on the original triad and add a 7th above the root. In a root-position C major 7th chord, a pianist plays C, E, G, and B, in that order. Of course, there are all sorts of inversions and chord qualities to consider when referring to triads and 7th chords.
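The triad and 7th-chord construction described here can be sketched in code. A small Python example that builds chords from semitone intervals above the root (note spellings simplified to sharps only):

```python
# The 12 pitch classes, spelled with sharps only for simplicity.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Intervals above the root, measured in semitones.
MAJOR_TRIAD = [0, 4, 7]        # root, major 3rd, perfect 5th
MAJOR_SEVENTH = [0, 4, 7, 11]  # the triad plus a major 7th

def build_chord(root, intervals):
    """Return the note names of a chord built on `root`."""
    start = NOTES.index(root)
    return [NOTES[(start + i) % 12] for i in intervals]

print(build_chord("C", MAJOR_TRIAD))    # ['C', 'E', 'G']
print(build_chord("C", MAJOR_SEVENTH))  # ['C', 'E', 'G', 'B']
```

Other chord qualities are just different interval lists (a minor triad is `[0, 3, 7]`), which makes this a compact way to think about the chord-to-color mappings discussed below.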

In this video, Nahre shows more complex chords, involving more notes than a standard triad or 7th chord. She presents her interpretation with color. First, she plays the chord with a bowl of mixed colors shown at the top of the screen. Then she reveals the two colors used to compose the chord. In one example, she represents an A minor 6/4 chord with the color blue and an A major chord with the color red. Finally, she plays the chord again, showing the mixed colors and the notes laid out on sheet music in their respective colors. This form of visualization can help beginner musicians understand complex chords and how to build them. By thinking of chords in colors, rather than notes on a piece of sheet music, people may be able to understand music theory more easily.

https://www.wired.com/2011/05/visual-music-paul-prudence/

With the technology available today, it is no wonder that visual music has become so popular. In this article, Alice Vincent shows an example of visual music by Paul Prudence and how he creates his visuals. All of the visuals are created with an algorithm based on math and geometry (Wired). The visuals themselves look stunning when paired with the music. He intends for his visual music to be experienced live, due to the originality of each piece and visual. As an art piece, this form of music stands as a credible example of visual art made with current technology.

In my original idea for the thesis project, I was extremely intrigued by chromesthesia, the association between sound and sight. In this video, we see a young woman playing violin, with bright colors popping up on screen to represent the notes while she gives commentary. The colors blend into the cityscape and environment as if they were a natural phenomenon, and not inside one person's head. After wondering "what does synesthesia look like," we get this video, where someone with synesthesia can accurately represent what they see through technology.

Based on this knowledge of synesthesia, I realized that my project would be something different. I didn't want it to look like this video. The video is beautiful and creates a wonderful understanding of what the condition looks like, but it is not clear what each specific note is. Even after spending years training my ear, I cannot identify the notes or intervals she is playing, even when looking at the visuals. From reading comments on the video, some people with similar conditions only see certain colors for particular songs, or a whole orchestra may appear as a single color rather than multiple vibrant colors. In my head, I had envisioned synesthesia as a rainbow of colors ranging in shape, tint, hue, saturation, etc. After seeing this video and reading many stories, I decided to leave the idea of synesthesia as an inspiration, and not the sole focus of my project.

Deafness and Music

“Sound is so powerful that it can either disempower me and my artwork or it can empower me. I chose to be empowered.”

In this enchanting TED talk, Christine Sun Kim discusses the similarities between music and sign language and the importance of social currency in the deaf community. She gives the audience some background and history on American Sign Language (ASL) and its importance today. Her entire life, she was taught to consider sound as something separate from her: something she would never be able to experience and would always be distanced from. As her life went on, she realized that ASL and sound are not as different as the average person might think. She even acknowledges that she spends a great deal of time paying attention to sound etiquette and mirroring hearing people in the sounds they make. At one point in the talk she says, "In deaf culture, movement is equivalent to sound." This makes sense because all signs in ASL have movement to them, in effect making "sound" with them. What's interesting to me is that music also has movement, both physically and metaphorically. As a violinist moves the bow up and down, we can gauge how intense or passionate the piece sounds. Even if we covered our ears, we could still get a general idea of the tone and texture of the piece without ever hearing it. In addition to the physical movement of the performer, music and sound have their own movement. The phrasing of each piece contains a special movement that changes based on who is playing it. One performer can make a melodic line sound melancholy, whereas another can make that same line sound lively and energetic.

She also discusses the surprising similarities between music and ASL. ASL is visual, so we can typically see drawings or paintings relevant to the language, but we hardly ever see or hear similarities drawn between music and visual languages. One great similarity is that neither music nor ASL can be fully expressed on paper (TED talk). Christine goes over the different parameters of sign language, including body movement, facial expressions, speed, hand shape, etc. All of these parameters cannot be expressed on a piece of paper the way English can. "English is a linear language," and doesn't have as many physical parameters to consider. Music is similar to sign language in that it is not a linear language. Sheet music exists, but it does not fully capture the meaning and essence of the tone and quality of a piece. She uses a piano metaphor to explain the similarities: English is a single note, while ASL is a chord constructed of the many parameters needed to interpret the language. Much like in music, if one were to change any part of that chord, the whole meaning would change.

My idea for this project is to drive home the point that deaf people can experience music. They can experience it in a number of ways involving visual and tactile sensations. "You don't have to be deaf to learn ASL, and you don't have to be hearing to learn music." Deaf people do have a voice; they just don't use their physical one. By recognizing the similarities between sound and ASL, both hearing and deaf people can come together to bridge the gap between the two communities.

http://static1.squarespace.com/static/54ef3f61e4b0dd6c6d1494c0/t/56d7b4a6ab48def067a083cd/1456977062648/Sofia+_+Music+In+Special+Education+-+Research+Paper.pdf

Accessible Technology

On How Deaf People Might Use Speech to Control Devices (Jeffrey P. Bigham, Raja Kushalnagar, Ting Hao Kenneth Huang, Juan Pablo Flores, Saiph Savage)

Accessible music

“If I were here playing cello, or playing on a synth, or sharing my music with you, I’d be able to show things about myself that I can’t tell you in words.”

In this inspiring and touching TED talk, Tod Machover showcases his many projects involving music technology and accessibility. He makes the point that music is much more enjoyable when you can create it yourself (TED talk). In my project, I hope to achieve part of this goal by giving deaf people a means to learn the fundamental skills for music without needing sound. We see part of the wonders with Tod’s innovation, Hyperscore, a program that allows users to coordinate lines and colors to create music. By the end of the video, Tod brings in a man named Dan, who has cerebral palsy. Even with his physical disability, everyone can see that through the power of the infrared camera, Hyperscore, and sensors, Dan was able to express himself in ways that words cannot define. This is truly an inspiring piece of work and provides a foundation for what accessible technology should aspire to be.

If technology is accessible, that means that everyone, no matter their disability, can use it. Even today, there are still devices and innovations that do not cater to people with disabilities. In addition to helping this group of people, accessible technology can also make life easier for the average user. Take music for example. The average person can learn music, but it is extremely difficult and some may struggle more than others. But, with a program similar to Hyperscore, people across a wide spectrum of diversity can create something meaningful to them and improve their quality of life.

Ad-Hoc Access to Musical Sound for Deaf Individuals (Benjamin Petry, Thavishi Illandara,  Juan Pablo Forero, Suranga Nanayakkara)

An Enhanced Musical Experience for the Deaf: Design and Evaluation of a Musical Display and a Haptic Chair (Suranga Nanayakkara, Elizabeth Taylor, Lonce Wyse, S. H. Ong)

Fourney, D.W. and Fels, D.I. Creating access to music through visualization. Science and Technology for Humanity (TIC-STH). 2009, 939–944.

Musica Parlata: A Methodology to Teach Music to Blind People (Alfredo Capozzi, Roberto De Prisco, Michele Nasti, Rocco Zaccagnino)

Color Theory

Technical Implementation

Max

After spending one semester using Max, I figured this would be my first approach for technical implementation. I have been able to make arpeggiators, random number generators, and interactive music pieces by coding in Max. For those who are unaware, Max is a program that enables interactive music and more through patches and patch cords. After researching on the website, I found that Max does allow for real-time graphics, which is exactly what my project calls for. I can also potentially do the demo section of the project with Max. I am definitely most comfortable and familiar with this program compared to the other solutions.

WebGL

WebGL (Web Graphics Library) was suggested by a classmate who worked with this interface in a previous class. It allows for 2D and 3D graphics in any compatible web browser. While this is an interesting web based solution, I am not confident or skilled in coding, and I fear that my lack of skill will inhibit my progress with my project.

Chrome Music Lab

The Chrome Music Lab is an excellent source for easy music creation. Just from playing around on the site, one can easily make a simple melody in a few minutes. By using color and a visual interface, it gives the user a refreshing experience with music composition. One experiment of particular interest to me is the spectrogram, which is defined as a picture of sound (Chrome Music Lab). It looks similar to a heat map for sound, with certain frequencies appearing more red and others more blue. For my project, it's important to consider sources like these that use color to represent music and sound; I may want mine to behave similarly to the spectrogram, where the notes occur on a timeline.
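The spectrogram idea can be sketched with a short-time Fourier transform: slice the audio into overlapping windowed frames and take the magnitude of each frame's FFT, giving frequency content over time. A minimal NumPy sketch (the 440 Hz test tone, sample rate, and frame sizes are arbitrary choices for illustration, not taken from the Chrome Music Lab):

```python
import numpy as np

def spectrogram(signal, frame_size=256, hop=128):
    """Magnitude spectrogram: overlapping Hann-windowed frames,
    FFT magnitude of each. Rows = time frames, cols = frequency bins."""
    window = np.hanning(frame_size)
    frames = [
        np.abs(np.fft.rfft(signal[i:i + frame_size] * window))
        for i in range(0, len(signal) - frame_size + 1, hop)
    ]
    return np.array(frames)

# One second of a 440 Hz tone at an 8 kHz sample rate: the energy
# should concentrate in the frequency bin nearest 440 Hz.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (number of frames, frame_size // 2 + 1)
print(spec[0].argmax() * sr / 256)  # peak bin frequency, near 440 Hz
```

Mapping each bin's magnitude to a color (low to high as blue to red, say) gives exactly the heat-map-over-time picture the Chrome Music Lab experiment shows.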

Processing

As suggested by Professor Ault, my last option is Processing, a program that allows for visual art coding. Again, this is a great solution for going about this project, but I am not confident in my coding skills.