Prototype #2

For my second prototype, I decided to steer away from the visual aspect of things and focus on the implementation process. I worked in Max, the program I will potentially be using for the final project. I figured out how to set up my MIDI keyboard through Max and made it so that the user can select between a number of different instruments to play with (just like any normal keyboard would). The difference is that I made it easy to understand, separating the instrument families into categories and only including instruments that produce definite pitches rather than unpitched sounds (e.g., trumpet vs. woodblock).
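Max patches are visual rather than text-based, but the underlying logic is roughly what this hedged Python sketch shows, using the mido MIDI library; the instrument names, family groupings, and exact program numbers here are illustrative assumptions, not the actual patch:

```python
# Rough sketch of the patch's logic in Python using the mido MIDI library.
# Instrument names/groupings below are illustrative assumptions, not the real patch.
import mido

# Pitched General MIDI programs only, grouped by family (0-based program numbers).
INSTRUMENT_FAMILIES = {
    "Keyboards": {"Acoustic Grand Piano": 0, "Electric Piano 1": 4},
    "Strings":   {"Violin": 40, "Cello": 42},
    "Brass":     {"Trumpet": 56, "Trombone": 57},
    "Woodwinds": {"Flute": 73, "Clarinet": 71},
}

def choose_program(family: str, name: str) -> int:
    """Look up the General MIDI program number for a pitched instrument."""
    return INSTRUMENT_FAMILIES[family][name]

def run(family: str, name: str) -> None:
    """Forward notes from the MIDI keyboard to a synth with the chosen timbre."""
    with mido.open_input() as keyboard, mido.open_output() as synth:
        synth.send(mido.Message("program_change", program=choose_program(family, name)))
        for msg in keyboard:
            if msg.type in ("note_on", "note_off"):
                synth.send(msg)

# run("Brass", "Trumpet")
```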

Even from working with the sounds myself, I realized that a lot of these instruments sound terrible. The reason is that I was using General MIDI data to generate the sound of each instrument. The main criticism in the feedback I received was the quality of the instrumentation and the need to make each instrument sound realistic. Dr. Nakra suggested that I either find a source of instrument samples online or commit to a long, tedious process in Logic. Doing it through Logic would involve the following: play a single note with the instrument track I want, bounce that individual note as a wav file, import that file into Max, and repeat with every note of every instrument. I’m still trying to decide on a realistic approach to instrument quality, and I’m glad to hear any suggestions. For the time being, General MIDI instruments will be my audio source for the final prototype.
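If I do end up bouncing individual notes out of Logic, the samples will need a consistent organization so that Max (or anything else) can find them. A rough sketch, assuming a hypothetical naming convention of one wav file per instrument and MIDI note:

```python
# Hypothetical sketch of organizing per-note samples bounced from Logic.
# The file naming convention ("violin_060.wav" for MIDI note 60) is my own assumption.
from pathlib import Path

SAMPLE_DIR = Path("samples")  # e.g. samples/violin_060.wav ... samples/violin_084.wav

def sample_for(instrument: str, midi_note: int) -> Path:
    """Return the path of the bounced .wav for one instrument and MIDI note."""
    return SAMPLE_DIR / f"{instrument}_{midi_note:03d}.wav"

def missing_samples(instrument: str, low: int = 48, high: int = 84) -> list[int]:
    """List the MIDI notes that still need to be bounced for this instrument."""
    return [n for n in range(low, high + 1) if not sample_for(instrument, n).exists()]

# print(missing_samples("violin"))  # which notes are left to bounce and import into Max
```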

In addition to setting up the Max patch, I’ve also been researching Jitter, the part of Max that handles real-time video and graphics, which will allow me to display graphics in real time. I hope that in the future I will master these objects and be able to implement my visuals in the Max patch.

Prototype 1

My first prototype for my visual music project was meant to show how I envisioned some of the base ideas for my final. Because of my limited knowledge of After Effects, I was not able to do anything spectacular or noteworthy. I played a chromatic scale (notes ascending by half step, then descending by half step) with colored rectangles mapped to where I thought the notes might appear. After looking it over enough times, I knew there were a lot of problems with it, and the feedback I got confirmed my thoughts.

The first criticism was that the colors should follow the register: low = dark and high = light. I tried to do this with the octaves of certain notes. For example, C4 was red and C5 was a lighter shade of red. However, I did not consider that the other notes in the scale would need to follow this rule as well. My original color scheme was the following:

C – red

C# – red orange

D – orange

D# – yellow orange

E – yellow

F – lime green

F# – green

G – green blue

G# – blue

A – indigo

A# – purple

B – pink

While a rainbow made the most sense at first, it does not agree with this idea. For example, blue and purple are darker than yellow, even though they occur higher in the scale. In future iterations, I’m going to fix the color scheme so that the colors correspond to low and high notes.
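As a sketch of what that fix might look like, the pitch class could choose the hue and the octave could choose the lightness, so low notes come out dark and high notes light. The hue ordering and lightness range below are placeholders, not the final scheme:

```python
# Sketch of a revised mapping: pitch class -> hue, octave -> lightness (low = dark, high = light).
# The hue ordering is illustrative; the final scheme will come from color theory research.
import colorsys

def note_color(midi_note: int) -> tuple[int, int, int]:
    """Map a MIDI note to an RGB color: hue from pitch class, lightness from octave."""
    pitch_class = midi_note % 12           # 0 = C, 1 = C#, ... 11 = B
    octave = midi_note // 12               # higher octave -> lighter shade
    hue = pitch_class / 12.0               # spread the 12 pitch classes around the color wheel
    lightness = 0.2 + 0.6 * (octave / 10)  # roughly 0.2 (dark) up to 0.8 (light)
    r, g, b = colorsys.hls_to_rgb(hue, lightness, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

# note_color(60) and note_color(72) share a hue (both are C), but the higher C comes out lighter.
```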

Another criticism I received concerned the usage of the canvas. As it is mapped currently, notes only ever appear on a rectangle and nowhere else on the canvas. Someone suggested that I use the height of the canvas for pitch and the width for time. Ideally, no shapes or colors would move backwards across the canvas the way they did in the demo. The notes would move along the canvas as though it were a timeline.
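A minimal sketch of that suggested mapping, assuming an arbitrary canvas size, pitch range, and scroll speed:

```python
# Minimal sketch of the suggested canvas mapping: width = time, height = pitch.
# Canvas size, visible note range, and pixels-per-second are assumed values.
CANVAS_W, CANVAS_H = 1280, 720
LOW_NOTE, HIGH_NOTE = 36, 96     # visible pitch range (roughly C2..C7)
PIXELS_PER_SECOND = 100          # notes scroll left to right like a timeline

def note_position(midi_note: int, seconds_since_start: float) -> tuple[float, float]:
    """x grows with time; y is higher on the canvas for higher pitches."""
    x = (seconds_since_start * PIXELS_PER_SECOND) % CANVAS_W
    span = HIGH_NOTE - LOW_NOTE
    y = CANVAS_H - (midi_note - LOW_NOTE) / span * CANVAS_H
    return x, y
```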

I also received an interesting suggestion about the accessibility of the project. This particular concern dealt with color blindness and how I might accommodate it. Unfortunately, I am not well informed about color blindness and its different forms, but in the final project I can use different shapes to represent the notes, rather than colors alone. I wouldn’t replace the colors entirely; I might just make shapes a separate option for people to choose, similar to sites that let users toggle a color-blind mode.
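A tiny sketch of what that toggle could look like, with placeholder shape assignments:

```python
# Sketch of an accessibility toggle: represent pitch classes by shape instead of (or alongside) color.
# The shape assignments are placeholders, not a final design.
SHAPES = ["circle", "square", "triangle", "diamond", "star", "hexagon",
          "pentagon", "cross", "ring", "half-circle", "arrow", "heart"]

def note_symbol(midi_note: int, shape_mode: bool) -> str:
    """Return the shape for a note when shape mode is on, else the default rectangle."""
    if shape_mode:
        return SHAPES[midi_note % 12]  # one distinct shape per pitch class
    return "rectangle"                 # color-only mode keeps the original look
```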

Prototype Plan for Visual Music for the Deaf

Prototype 1: Make a demo of the notes on a keyboard represented by colors.

Execution will be through video format, and I will presumably create it with After Effects. I hope to learn which graphics and colors best represent the musical notes. I have learned that After Effects does not render graphics in real time, so for this demo I can use After Effects to represent what I want to convey with my project.

Prototype 2: Make an interactive keyboard that shows the colors as you play each note.

Execution will be done through a MakeyMakey and a Max patch. The Max patch will include the code for the MakeyMakey as well as the graphics needed for the color representation. I hope to learn how to do visual graphics in Max, as I have not accomplished this before. I also hope that Max does not cause many technical errors in my implementation. For now, the MakeyMakey can work as a prototype, but after talking with a number of people, I might want to use a MIDI keyboard or controller in the final project.
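Since the MakeyMakey shows up to the computer as an ordinary keyboard, the core of the patch is really just a key-to-note mapping. A hedged sketch, with the note assignments being my own assumption rather than the actual patch:

```python
# The MakeyMakey appears to the computer as a normal keyboard, so the patch's core job
# is mapping key events to notes. The key-to-note assignments here are my own assumption.
KEY_TO_MIDI = {
    "up": 60,     # C4
    "down": 62,   # D4
    "left": 64,   # E4
    "right": 65,  # F4
    "space": 67,  # G4
    "click": 69,  # A4
}

def handle_key(key: str) -> int | None:
    """Translate a MakeyMakey key press into a MIDI note (or None if unmapped)."""
    return KEY_TO_MIDI.get(key)
```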

Prototype 3: Improve the interactive keyboard to include MIDI timbres and multiple octaves.

Execution will be done through 2 MakeyMakeys and a Max patch. The Max patch will now include multiple octaves and a range of MIDI timbres, in addition to the graphics and code for the MakeyMakey. I hope to learn which colors can best represent each MIDI timbre, based on color theory research. By this point, all of the colors and placement of visuals will be entirely based on my research, and not just my assumptions.

Research Update 3

Historical Context

https://www.theguardian.com/artanddesign/2006/jun/24/art.art

“Poems and paintings became music, and music became poems and paintings.”

While visual music is becoming more popular with the technology available today, there are many historical examples of visual music as well. While synesthesia is a natural human phenomenon, deliberate attempts to link sound and sight date back to the 19th and 20th centuries (The Guardian). Wassily Kandinsky, a Russian painter and art theorist, captured the beauty of music through paintings and other forms of art. He wasn’t alone in his ideas, either. Many other artists implemented similar ideas in their own work, linking sound and music with the visuals and paintings of the era. These original ideas went on to inspire the modern lighting and color design seen in many projects today. For example, concerts across all genres have lights that fit the theme of the show and each song. A slow, acoustic song has dim, soft lights, whereas a powerful, high-energy song has bright, fast-changing lights. In a way, modern concerts are visual music. With the empowering instrumentation and bright lights, concert-goers experience music in a way that isn’t tangible at home.

Before the technology for modern lighting was available, Scriabin composed “Prometheus: The Poem of Fire” (1909–10), which was intended to involve a color keyboard, lighting up the entire concert hall with color. Scriabin had synesthesia: he perceived certain keys as different colors, as opposed to individual notes each being a different color. Yale’s video on YouTube shows the implementation of the color organ and how the piece is performed in concert.

Two more examples of live visual music are “The Firebird” and “The Rite of Spring,” which Igor Stravinsky composed for Sergei Diaghilev’s Ballets Russes company. In these pieces, Stravinsky intended for multiple forms of art to be present during the performance. Painters, dancers, and musicians combined their art forms into one large work, creating an interactive and immersive experience for the audience.

Current Context

Visual Music

In this video, Nahre Sol breaks down how she interprets complex chords. These chords aren’t typically seen in pop music and are usually challenging to analyze and play. To give an idea of ordinary chords, there are two types to start with: triads and 7th chords. A triad consists of three notes: the root, the 3rd, and the 5th. The root gives the chord its name (in a C major chord, C is the root), the 3rd is the third note up from the root, counting the root itself (in a C major chord, E is the 3rd), and the 5th is the fifth note up from the root (in a C major chord, G is the 5th). In a 7th chord, we build on the original triad and add a 7th above the root. In a root-position C major 7th chord, a pianist plays C, E, G, and B, in that order. Of course, there are all sorts of inversions and chord qualities to consider when referring to triads and 7th chords.
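A small worked example of the chords described above, built from semitone intervals:

```python
# Worked example of the chords described above, built from semitone intervals.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

MAJOR_TRIAD = [0, 4, 7]        # root, major 3rd, perfect 5th
MAJOR_SEVENTH = [0, 4, 7, 11]  # the triad plus a major 7th above the root

def spell_chord(root: str, intervals: list[int]) -> list[str]:
    """Spell a chord in root position from its root note and interval pattern."""
    start = NOTE_NAMES.index(root)
    return [NOTE_NAMES[(start + i) % 12] for i in intervals]

print(spell_chord("C", MAJOR_TRIAD))    # ['C', 'E', 'G']
print(spell_chord("C", MAJOR_SEVENTH))  # ['C', 'E', 'G', 'B']
```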

In this video, Nahre shows more complex chords, involving more notes than a standard triad or 7th chord. She shows this interpretation with color. First she plays the chord with a bowl of mixed colors seen at the top of the screen. Then she reveals the two colors used to compose the chord. In one example, she represents an A minor 6/4 chord with the color blue, and an A major chord with the color red. Finally, she plays the chord again showing the mixed colors and the musical notes laid out on sheet music, with their respective colors. This form of visualization can help beginner musicians understand complex chords and how to build them. By thinking of the chords in colors, rather than notes on a piece of sheet music, people may be able to easily understand music theory.

https://www.wired.com/2011/05/visual-music-paul-prudence/

With the technology that is available today, it is no wonder that visual music has become so popular. In this article, Alice Vincent shows an example of visual music by Paul Prudence and explains how he creates these visuals. All of the visuals are generated algorithmically, based on math and geometry (Wired). The visuals themselves look stunning when paired with the music. He intends for his visual music to be experienced live because of the originality of each piece and visual. This work stands as a credible example of visual music made with current technology.

In my original idea for the thesis project, I was extremely intrigued by chromesthesia, the association between sound and sight. In this video, we see a young woman playing violin with bright colors popping up on screen to represent the notes while she gives commentary. The colors blend into the cityscape and environment as if they were a natural phenomenon, and not something inside one person’s head. After wondering, “What does synesthesia look like?”, we get a video where someone with synesthesia can accurately represent what she sees through technology.

Based on this knowledge of synesthesia, I realized that my project would be something different. I didn’t want it to look like this video. The video is beautiful and creates a wonderful understanding of what the condition looks like, but it is not clear what each specific note is. Even after spending years training my ear, I cannot identify the notes or intervals she is playing, even when looking at the visuals. Judging from comments on the video, people with similar conditions may only see certain colors for particular songs, or a whole orchestra may appear as one color rather than multiple vibrant colors. In my head, I envisioned synesthesia as a rainbow of colors ranging in shape, tint, hue, saturation, etc. After seeing this video and reading many stories, I decided to treat synesthesia as an inspiration, and not the sole focus of my project.

Deafness and Music

“Sound is so powerful that it can either disempower me and my artwork or it can empower me. I chose to be empowered.”

In this enchanting TED talk, Christine Sun Kim discusses the similarities between music and sign language and the importance of social currency in the deaf community. She gives the audience some background and history on American Sign Language (ASL) and its importance today. Her entire life, she was taught to consider sound as something separate from her, something she would never be able to experience and would always be distanced from. As her life went on, she realized that ASL and sound are not as different as the average person might think. She even acknowledges that she spends a great deal of time paying attention to sound etiquette and mirroring the sounds hearing people make. At one point in the talk she says, “In deaf culture, movement is equivalent to sound.” This makes sense because all signs in ASL have movement to them, therefore making “sound” with them. What’s interesting to me is that music also has movement to it, both physically and metaphorically. As a violinist moves the bow up and down, we can gauge how intense or passionate the piece is. Even if we covered our ears, we could still get a general idea of the tone and texture of the piece without ever hearing it. In addition to the physical movement of the performer, music and sound have their own movement. The phrasing of each piece contains a special movement that changes based on who is playing it. One performer can make a melodic line sound melancholy, whereas another can make that same line sound lively and energetic.

She also points out how surprisingly similar music and ASL are. ASL is visual, so we can typically see drawings or paintings relevant to the language, but we hardly ever consider the similarities between music and visual languages. One great similarity is that neither music nor ASL can be fully expressed on paper (TED talk). Christine goes over the different parameters of sign language, including body movement, facial expressions, speed, hand shape, etc. These parameters cannot be expressed on a piece of paper the way English can. “English is a linear language,” and doesn’t have as many physical parameters to consider. Music is similar to sign language in the sense that it is not a linear language. Sheet music exists, but it does not fully capture the meaning, essence, tone, and quality of a piece. She uses a piano metaphor to explain the similarities, where English is a single note and ASL is a chord constructed of the many parameters needed to interpret the language. Much like music, if one were to change any part of that chord, the whole meaning would change.

My idea for this project was to drive home the point that deaf people can experience music. They can experience it in a number of ways involving visual and tactile sensations. “You don’t have to be deaf to learn ASL, and you don’t have to be hearing to learn music.” Deaf people do have a voice; they just don’t use their physical one. By recognizing the similarities between sound and ASL, both hearing and deaf people can come together to bridge the gap between the two communities.

http://static1.squarespace.com/static/54ef3f61e4b0dd6c6d1494c0/t/56d7b4a6ab48def067a083cd/1456977062648/Sofia+_+Music+In+Special+Education+-+Research+Paper.pdf

Accessible Technology

On How Deaf People Might Use Speech to Control Devices (Jeffrey P. Bigham, Raja Kushalnagar, Ting Hao Kenneth Huang, Juan Pablo Flores, Saiph Savage)

Accessible music

“If I were here playing cello, or playing on a synth, or sharing my music with you, I’d be able to show things about myself that I can’t tell you in words.”

In this inspiring and touching TED talk, Tod Machover showcases his many projects involving music technology and accessibility. He makes the point that music is much more enjoyable when you can create it yourself (TED talk). In my project, I hope to achieve part of this goal by giving deaf people a means to learn fundamental music skills without needing sound. We see part of this wonder in Tod’s innovation, Hyperscore, a program that lets users compose music by arranging lines and colors. By the end of the video, Tod brings in a man named Dan, who has cerebral palsy. Even with his physical disability, everyone can see that through the power of the infrared camera, Hyperscore, and sensors, Dan was able to express himself in ways that words cannot define. This is truly an inspiring piece of work and provides a foundation for what accessible technology should aspire to be.

If technology is accessible, that means that everyone, regardless of disability, can use it. Even today, there are still devices and innovations that do not cater to people with disabilities. In addition to helping people with disabilities, accessible technology can also make life easier for the average user. Take music, for example. The average person can learn music, but it is extremely difficult, and some may struggle more than others. With a program similar to Hyperscore, people across a wide spectrum of abilities can create something meaningful to them and improve their quality of life.

Ad-Hoc Access to Musical Sound for Deaf Individuals (Benjamin Petry, Thavishi Illandara,  Juan Pablo Forero, Suranga Nanayakkara)

An Enhanced Musical Experience for the Deaf: Design and Evaluation of a Musical Display and a Haptic Chair (Suranga Nanayakkara, Elizabeth Taylor, Lonce Wyse, S. H. Ong)

Fourney, D.W. and Fels, D.I. Creating access to music through visualization. Science and Technology for Humanity (TIC-STH). 2009, 939–944.

Musica Parlata: A Methodology to Teach Music to Blind People (Alfredo Capozzi, Roberto De Prisco, Michele Nasti, Rocco Zaccagnino)

Color Theory

Technical Implementation

Max

After spending one semester using Max, I figured this would be my first approach for technical implementation. I have been able to make arpeggiators, random number generators, and interactive music pieces by patching in Max. For those that are unaware, Max is a visual programming environment for interactive music and media, built around patches of objects connected by patch cords. After researching on the website, I found that Max does allow for real-time graphics, which is exactly what my project calls for. I can also potentially do the demo section of the project with Max. I am definitely most comfortable and familiar with this program compared to the other solutions.

WebGL

WebGL (Web Graphics Library) was suggested by a classmate who worked with it in a previous class. It allows for 2D and 3D graphics in any compatible web browser. While this is an interesting web-based solution, I am not confident or skilled in coding, and I fear that my lack of skill will inhibit my progress on the project.

Chrome Music Lab

The Chrome Music Lab is an excellent source for easy music creation. Just from playing around on the site, one can easily make a simple melody in a few minutes. By using color and a visual interface, it gives the user a refreshing experience with music composition. One experiment of particular interest to me is the spectrogram, which is described as a picture of sound (Chrome Music Lab). It looks almost like a heat map for sound, with certain frequencies appearing more red and others more blue. It’s important for my project to consider sources like these that use color to represent music and sound, and I may want my project to behave similarly to the spectrogram, with the notes occurring on a timeline.
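For reference, a spectrogram is just the energy in each frequency band over time, drawn as a heat map. A minimal sketch, assuming SciPy and Matplotlib are available and using a stand-in 440 Hz test tone rather than real audio:

```python
# Minimal sketch of what a spectrogram is: energy per frequency band over time, drawn as a heat map.
# The test tone (a 440 Hz sine) is just a stand-in signal for illustration.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 44100                                   # sample rate in Hz
t = np.linspace(0, 2.0, int(fs * 2.0), endpoint=False)
audio = np.sin(2 * np.pi * 440 * t)          # a steady A4 shows up as one bright horizontal band

freqs, times, power = signal.spectrogram(audio, fs)
plt.pcolormesh(times, freqs, power, shading="gouraud")
plt.xlabel("Time (s)")                       # width = time, like the project's timeline idea
plt.ylabel("Frequency (Hz)")                 # height = pitch/frequency
plt.show()
```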

Processing

As suggested by Professor Ault, my last option is Processing, a programming environment for coding visual art. Again, this is a great solution for this project, but I am not confident in my coding skills.

Research Update 2

Within this past week, my alumni advisor, Meghan McEneaney, reached out to me and gave me some valuable feedback and things to consider for my visual music project.

The first thing she suggested I look into is the Chrome Music Lab, which can be another solution for the technical aspect of the project. It is a web based program, so I may be able to design it for mobile and create a vibration feature to link multiple senses together.

For a less technical look at the project, Meghan suggested I consider the mappings of the colors. “The way people assign different shapes and colors to sound is unique and not to mention – there are an infinite amount of mappings. How will you define these mappings to make this truly accessible for everyone?” This is an excellent point Meghan makes and I’m glad she brought it to my attention. She gave me a few sources to take a look at in regards to visual music, as well as accessible music.

I’m adding these new sources to my list of sources from last week. I have plenty of research to work with for my thesis and I’m excited to show the world of accessible and visual music to everyone!

Research Update 1

Historical Examples

I posted this link on my research plan post last week. This article discusses the various ways that visual music has been used throughout history; visual music existed long before modern technology was invented. Of course, visual music has changed over the years, from still paintings to computer visualizations made in After Effects. As I have argued before, many examples of visual music do not accurately represent individual pitches, but instead represent an overarching theme or melodic line. Cinematic music does help us paint a picture in our heads of what the music looks like, but audio alone does not aid a Deaf person’s understanding of musical elements and pitches.

https://www.theguardian.com/artanddesign/2006/jun/24/art.art

Current Examples

http://ezproxy.tcnj.edu:2667/10.1145/3140000/3134821/p383-bigham.pdf?ip=159.91.13.117&id=3134821&acc=ACTIVE%20SERVICE&key=7777116298C9657D%2EEF3BD08345A252FB%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&__acm__=1539913345_39cb2cfc739c097d2d5821175a5622af

In this article, Bigham et al. discuss the issues that speech-activated devices bring forth. In particular, the main issue is the hindrance that deaf and hard-of-hearing people experience with these new technologies. On smartphones and computers, speech-to-text and speech-activated actions are optional, and you can operate the device without them. However, with new devices like the Amazon Echo, it raises the question: how can a deaf person use this? This example is not directly related to my project and does not discuss the music aspect of accessibility. However, it is important to note that certain designers are noticing issues within our tech-based world and working towards more accessible designs for members of the deaf community.

http://ezproxy.tcnj.edu:2667/10.1145/2390000/2384975/p245-capozzi.pdf?ip=159.91.13.117&id=2384975&acc=ACTIVE%20SERVICE&key=7777116298C9657D%2EEF3BD08345A252FB%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&__acm__=1539913879_669d644934bda4cd853aca58216a4197

Yet we still have little evidence of technology or innovative ways to aid the deaf in understanding music, though there is evidence of helping the blind! This article mainly discusses how hard it is to read Braille and, in particular, how difficult it is for blind people to read music. The innovation relies on software that “sings” the notes where they would appear on a piece of sheet music. While useful and innovative, would a music teacher not be able to do that exact thing? Also, people do not necessarily need to know music theory in order to play an instrument or compose. In fact, at times it is easier to listen to and play music with one’s eyes closed, to eliminate the distraction of sight from the melodic material. Here we see an example of an innovation that aids blind people with music. Where are the innovations that aid deaf people with music?

http://ezproxy.tcnj.edu:2667/10.1145/2990000/2982213/p285-petry.pdf?ip=159.91.13.117&id=2982213&acc=ACTIVE%20SERVICE&key=7777116298C9657D%2EEF3BD08345A252FB%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&__acm__=1539914296_44acf8ae596a73ed2aeb351b6f43f0fa

Here is my starting point for research on “Visual Music for the Deaf.” Surprisingly, this article was not hard to find on the ACM database; all I had to do was search “deaf music.” The article brings up the issue that some visualizers for sound have a delay or lag. From prior research, Petry et al. found that previous iterations of accessible music for the deaf did not offer a “real time,” accurate representation of the sound being represented. This is similar to what I want to accomplish with my project and its interactivity element. If the visualizations aren’t exactly synced with the sounds, then they are not accurate and can’t be used as a reliable way to connect sound to visuals. One quote from this article caught my eye: “Prior work has developed visual and vibrotactile sensory substitution systems that enable access to musical sounds for deaf people [2,4,7]” (Petry et al. 1). A-ha! Here starts my journey. I was struggling to find examples of music innovations made for deaf people, and here they are in three references at the end of the article. I’ll link these as their own sources below, but I would not have found them had it not been for this article.

Fourney, D.W. and Fels, D.I. Creating access to music through visualization. Science and Technology for Humanity (TIC-STH). 2009, 939–944.

This example stemmed from the previous one about real time music visualizations and tactile responses for deaf users. I will admit that I have not fully read these articles yet, but I have looked through the examples of visualizations from these pages. It seems to me that this article is jam-packed with information about music for deaf people. This is definitely at the top of my list of reliable sources.

http://static1.squarespace.com/static/54ef3f61e4b0dd6c6d1494c0/t/56d7b4a6ab48def067a083cd/1456977062648/Sofia+_+Music+In+Special+Education+-+Research+Paper.pdf

While not specifically related to technology, this article talks about music therapy for deaf people and common misconceptions people have about deaf people and music. It also discusses the major struggles Deaf people face in relation to the hearing community, and the difficulty of finding peace in a world where hearing people are the majority. Even though I have learned about these misconceptions and assumptions in several ASL courses I have taken at TCNJ and from communicating with Deaf people, it is reassuring to have solid evidence that backs up my claims. A typical remark made by hearing people is, “Why don’t you just get a cochlear implant? I would never want to be deaf! I can’t imagine not being able to hear; I’d rather DIE.” Here’s a quote that accurately represents my opinion on that: “One does not need a good quality hearing aid or a cochlear implant to enjoy of music, because most people, even those with severe to profound deafness, have residual hearing” (Sofia P. Quiñones Barfield 6). In addition, there are some Deaf people who do not want to hear. Mainstream society definitely has a problem understanding Deaf culture and why some Deaf people may not want a cochlear implant or the ability to hear.

http://ezproxy.tcnj.edu:2667/10.1145/1520000/1518756/p337-nanayakkara.pdf?ip=159.91.13.117&id=1518756&acc=ACTIVE%20SERVICE&key=7777116298C9657D%2EEF3BD08345A252FB%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&__acm__=1539916130_09ff3d0046e22bdc25464fb437175978

This is another article that stemmed from the first article I found on the ACM database. Not only does it show the importance of visualizing music, it also discusses the importance of vibrations. While my project may not focus on vibrations, I think it is an interesting field to study. There is definitely more concrete evidence for visual music than tactile music, but I think tactile music could be a huge step forward for the Deaf community.

These last three links are ones I posted in my research plan. They are visualizations of music and are not catered toward Deaf or hard-of-hearing people.

http://www.centerforvisualmusic.org/

https://www.youtube.com/watch?v=obrBAysVef0&t=66s

https://www.wired.com/2011/05/visual-music-paul-prudence/

I plan to also do more research towards color theory and how I can best represent each pitch and instrument with appropriate colors. If anyone has suggestions where I can find those sources, please share them!

Research Plan for Visual Music

History of Visual Music

https://www.theguardian.com/artanddesign/2006/jun/24/art.art

Just from this one article alone, there are tons of examples of visual music throughout history. There are without a doubt more examples than this, but this is a good starting point.

I will admit, I am having trouble finding articles on the TCNJ library website about visual music, deaf people and music, and accessible music. For now, I will focus on this one article and (hopefully) find more examples as I go on.

Current Examples

There are many current examples of visual music because of the advantages today’s technology, such as After Effects, offers us. It’s very interesting to see how other people visualize music. Yet none of these examples accurately represent what I’m looking for.

The search will continue…

http://www.centerforvisualmusic.org/

https://www.youtube.com/watch?v=obrBAysVef0&t=66s

https://www.wired.com/2011/05/visual-music-paul-prudence/

Finalized Concept

Accessible Music

After much contemplation about what to do for my project, I have landed on my final project concept: Accessible Music. Accessible, meaning that it can be useful for people with disabilities (in particular, the deaf community). The idea of this project is to provide a way to enhance the experience of music for people with hearing disabilities.

Project Idea

For my Senior Project, I wanted to delve into this topic of accessible music and make something that can potentially benefit non-music people and deaf people. In regards to non-music people, I find that many people shy away from learning music because it is either too time consuming or they think it is too late to start and would be too difficult. I want to make music easier to experience and easier to learn. This way, everyone can enjoy music without any limitations.

My project will include a musical piece accompanied by a visual representation of each individual note and timbre with colors and tone. The goal for this is for someone to be able to identify the instruments and notes used just by listening to it (as opposed to reading the sheet music or seeing the Logic file). In addition to this, there will be an active interaction element to my project. Through the use of a MIDI keyboard, I will allow the user to play around with the notes and timbres so that they themselves can see the difference in colors with each note and instrument. Ideally, a deaf person will be able to learn the difference between notes and timbres and see how music is represented visually. Through the use of color, shapes, tints, and shades, I will visually represent music in a way that can benefit people with no music experience (or hearing experience).

Inspiration

Originally, my inspiration for this project stemmed from synesthesia, a condition in which one sense is involuntarily linked to another. Specifically, I have always been intrigued by chromesthesia, the link between sight and sound. People who have chromesthesia see colors and certain shapes every time a musical note or sound is heard. Typically, these people also have perfect pitch, another phenomenon that fascinated me. Perfect pitch is the ability to recognize any pitch without a reference pitch or key. Since I have spent my whole life surrounded by music performance and theory, I kept wishing and dreaming that one day I would wake up and be blessed with perfect pitch and chromesthesia. “All I want is to see colors when I hear notes and be able to identify those notes. Is that so much to ask?” Apparently not. Supposedly, anyone can train themselves toward perfect pitch and synesthesia-like associations; all it takes is a lot of practice linking certain colors and notes. After doing a bit of research, I realized that chromesthesia is not everything I thought it was.

I watched a YouTube video of a woman who has chromesthesia showing how she sees music. In my head, I had always envisioned chromesthesia as a wonderful rainbow of colors and shapes that makes life one giant art piece. In the video, instead of rainbows, I saw a few colors projected at different parts of the screen. There were a few main colors (green, yellow, blue), and they would change their position on the screen based on how high or low the note was. Even though I have spent years training my aural skills, I could not identify the notes or their relation to each other. I was partially heartbroken that my lifelong dream was not the reality standing before me in a YouTube video.

Lost and confused, I wondered what I should do for my senior project. Should I be true to synesthesia and accurately represent what it should look like? Or should I do it my way and show a variety of colors and tones to accurately represent notes? I officially decided that being able to visually represent each note accurately is more beneficial than staying true to synesthesia. Instead, I am considering this project an inspiration from visual music; a way to experience music with more than one sense.

Tl;dr: Synesthesia originally inspired my project. As time went on, I was more inspired to accurately represent notes to benefit people with no music/hearing experience.

Originality and Usage

I can see my project being beneficial outside of the Senior showcase. With a lot of projects I have seen involving music and visuals, it’s more so along the lines of “oh, that looks nice.” Sure, it’s cool to look at, but wouldn’t it be so much more exciting if people can actually benefit from it? It’s one thing to make a project that “looks nice.” It’s a totally different story if the project looks nice AND has a use other than its visual aesthetics. As I’ve discussed before, I would ideally want deaf people to be able to use my project as a way to experience music. Since deafness prevents them from being able to hear music, I want to draw on one of the other senses to enhance their musical experience. Not only does their experience change, but they can also potentially learn music more easily by having the notes visually represented. Of course, this project is not limited to the deaf community. I think that having a visual representation of music can make the learning experience of music that much easier for anybody. I see a potential use of my project for almost everybody across the spectrum. For those that aren’t interested in learning music or already know music, it’s visually and audibly pleasing. For those that want to learn music, it makes the learning experience easier. For those that are deaf, it completely changes their experience with music and can provide a useful way to learn music without hearing it. From those that have no interest or skill in music, to those that spent years learning music, to those that physically cannot hear music, I see a benefit that visual music can have on each of these groups.

There have been countless examples of visual music throughout history. At concerts, we see colored lights associated with certain songs and moods all the time. Visual music has been done before. But has it been catered to the deaf community? From my understanding and research, there has been no solid work toward accessible music for deaf people. Technically, my idea for visual music is not original, but my goals for this project are original and have not been implemented before.

Technical Implementation

Although I am still in the beginning phase of this project, I have some ideas of how I can create this project technically. For a prototype, I might use a Makey Makey and assemble a Max patch to match certain pitches and colors. The idea is that each note will have its own color. For example, C can be blue, where C1 is a navy blue and C8 is a baby blue. Also, each MIDI timbre will have its own tint or hue. For example, a violin may have a brownish hue, whereas a flute may have a bluish hue. If a violin played a C4, it would look like a royal blue mixed with some brown undertones. For the final project, I may use a MIDI keyboard to make the interactivity more user-friendly. As for the musical piece, ideally I would like to compose a piece showcasing a variety of different instruments and notes. If this proves too difficult and time-consuming, I can use an existing musical piece not bound by copyright, so long as I have the score. Once I have the piece, I will create a visual representation of the notes and instruments in After Effects, with the colors and tones matching the ones used in the interactive segment. First and foremost, I will create the interactive element, since that is the most important part of this project. After that, I will work on the visual music demo.
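A rough sketch of how a note color and a timbre undertone might be blended; the specific colors and the blend weight below are assumptions for illustration only:

```python
# Sketch of blending a note's base color with an instrument "undertone".
# The example colors and the 25% blend weight are assumptions for illustration.
NOTE_BASE = {"C": (0, 0, 128)}            # navy-ish blue for a low C; lighter per octave later
TIMBRE_TINT = {"violin": (101, 67, 33),   # brownish undertone
               "flute": (173, 216, 230)}  # bluish undertone

def blend(base: tuple, tint: tuple, amount: float = 0.25) -> tuple:
    """Mix the timbre tint into the note color (amount = strength of the undertone)."""
    return tuple(round(b * (1 - amount) + t * amount) for b, t in zip(base, tint))

# A violin playing C: a royal-blue-ish color with brown undertones, as described above.
print(blend(NOTE_BASE["C"], TIMBRE_TINT["violin"]))
```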

Revised Concept (I get it, everyone likes the synesthesia one)

It has been brought to my attention that the overwhelming majority of people I speak with like the idea of chromesthesia. Shocking, I know. I liked that idea best too, but I still wanted to keep an open mind. It probably seems obvious at this point, but my “Visual Music” idea is going to be my senior project idea. Exciting! But what does that entail?

In my last post, I briefly explained what synesthesia and chromesthesia are along with why I want to do the project. Thankfully, Professor Ault gave me some insight on past examples of visual music and how it’s been done. I also found a YouTube clip by someone who has chromesthesia.

https://www.youtube.com/watch?v=obrBAysVef0&t=118s

This is different from what I imagined chromesthesia to be like, with a rainbow of colors and various shapes taking up your whole vision. Clearly, I’m a little too surrealist in my approach. Considering this woman has synesthesia and can show what the world looks like through her eyes, how can I accomplish that? Apparently, it is possible to train yourself toward synesthesia by teaching your mind to associate notes with colors. I’m not sure if I’ll be able to pull that off, but thankfully, I have a pretty vivid imagination. If any of us really tried hard enough, we could associate certain sounds and notes with colors along the spectrum. For the purposes of my project, I don’t exactly want it to look like this YouTube video. I want every note to be easily identifiable, so that anyone can decipher the differences between pitches and instruments. Another thing to mention is the types of sounds this woman can see. She only sees colors when a musical note is played. So if someone slams a door shut, she won’t see a color. But if someone plays a Bb on a violin, it might appear blue and toward the middle of her vision. I’m at a clear disadvantage since I wasn’t born with the ability to visualize music. But with my creativity and passion for music, I can sure as hell try.

As for the technicalities of my project, I have most of it thought out in my head, and it doesn’t seem terribly difficult to pull off. First, I would want to create a musical piece using a variety of different instruments so I can visualize each timbre individually. I don’t want to select an existing piece of music because it will either be too complex for me to visualize or there won’t be enough timbres to visualize. Also, I would need the sheet music and would spend hours every day listening to it to make sure I know every part forwards and backwards. If I don’t know the exact notes being played, how am I supposed to visualize them? With this project, I want the user to be able to identify two things: the individual notes and the instruments being played. If a deaf person sees the visualization, I would want them to learn how each instrument appears. For example, a piano will have plain, clear-cut colors, whereas a violin may have some brown or deep undertones to each note. After composing the piece, I would need to visualize it (duh), presumably through After Effects. This is going to be a long process, since I will need to know my composition so well that the colors come to me automatically, without hesitation. If I cut corners and am not consistent with the notes and instruments, the project will be meaningless. The main chunk of my time will be spent composing and visualizing the piece; all of the tech-related work comes second.

In addition to my own musical piece, I want each person to be able to see the notes for themselves. I might use a Makey Makey with a Max patch to create a piano of sensors. Ideally, every time someone presses a sensor (note), that note’s particular color will appear on screen. They should also be able to change the MIDI timbre through the Max patch. My only issue with using Makey Makeys is that each one only covers 8 notes. That would mean I would need 2 Makey Makeys for 1 octave. If I wanted to do multiple octaves, I need to be sure I have the right number of Makey Makeys available.

I’m glad everyone seems to be on board with this project idea. This has been creeping in my subconscious for a long time and I want to make it a reality. Not just because I love rainbows and music, but also because I love the deaf community and their ability to enjoy music. I can’t wait to take on this wonderful visual music journey and share it with everyone!

Initial Concepts for Senior Thesis

So many ideas, what to choose?

There are certain people who struggle with thinking of one idea. There are other people who struggle with finding one good idea out of the hundreds of ideas circulating in their head. I am the latter. In the first week, I already had 5+ ideas that I thought were interesting and could easily get excited about. The problem is, I know that some of them will be my enemy in regards to time. I am still keeping an open mind so I chose the first 3 ideas that popped into my head when referencing the 3 topics: pick a topic that reflects on your expertise, pick a topic where your expertise is taken away, and pick a topic where you are intrigued and interested, which doesn’t necessarily follow your career path. Without further ado, here are my three potential project ideas for senior thesis.

Music Technology

I love music. I also love technology. I have spent the past few years at TCNJ taking as many music tech related classes as I can, which means spending as much time as possible with Dr. Nakra. If I go down this path, I would love to spend more time with her and really dig deep to produce the best project I can. This music tech project wouldn’t be any ordinary music composition made in Logic; I want to do something truly innovative and inspiring that hasn’t been done before. Originality is hard to come by, which is why synesthesia comes to mind. For those that are unaware, synesthesia is the phenomenon where two sensory pathways are linked and one unconsciously activates the other. The main form of synesthesia that we see is the one I would base my project on: the link between sight and sound, or chromesthesia to be exact. We typically see that people who have this form of synesthesia have perfect pitch, the ability to identify a note as it is played. Perfect pitch has interested me since I started learning music, and synesthesia makes it that much more intriguing. We have all seen people use colors to enhance a musical experience with After Effects; just go to a live concert and you’ll see this. However, this is not the type of project I want. I would want a synesthesia project where each note, each pitch shift, each timbre, and each note velocity has a unique look and color to it. I want to convey what it is like to have synesthesia through technology. That would mean that by simply messing around on a piano and seeing the colors projected through After Effects, one could train themselves to have perfect pitch. I believe this could also be beneficial to deaf people, a community that I adore alongside the music community. Deaf people can still enjoy music without sound as it is. However, I think that seeing each note represented by a color would enhance their experience even more and help them appreciate and love music more.

Project #1: create a musical piece and have colors associated with the notes, following all of the guidelines I had previously stated. There would also be an active interaction element, where a user can play the notes (on a Makey Makey piano) and create their own visual music masterpiece.

Design and Coding

I’ll tell you right now, this idea is not entirely fleshed out and I’m making it up as I go along. This was probably the hardest idea to come up with because I kept thinking, “What am I without music?” I might have taken it a little too literally and just crossed out the music section on my Odyssey Plan. What am I left with? Design, coding, and digital media, none of which I am particularly passionate about or spend hours of my day working on. Sure, I’ve coded for the IMM120 and IMM130 classes, but I didn’t enjoy it. In fact, I found myself more frustrated and stressed out from the coding in these classes than in any other class. So stressed, in fact, that I had to go to the gym several days a week just to work off the excess energy. Based on this reflection, I would probably lean towards the digital media side of things, even though I am not that skilled in the area. I’m not particularly pleased with this idea and I hope that I don’t go with this one. But who knows, maybe in the future I can make this into something I enjoy.

Project #2: An interactive art piece, or animation, that the user can manipulate and move. This isn’t a video game, just an interactive drawing in a sense.

Visual Novel

Here is the far-fetched idea. I know I said that I hated coding and was stressed out in the intro-level courses for it. However, there is something about this idea that keeps sticking with me, and I don’t want to let it go. At first I thought, “This is going to be a video game,” but I didn’t want something as sophisticated as that. I wanted something simple, with a great story, where the player has complete control over the story and the endings. This is when the term “visual novel” came to mind. If anyone has ever read a visual novel, you will know that it is mainly just text and some animations. You are basically reading a book on a screen with some imagery to complement it. I have seen some “visual novel-esque” games, where the player is brought through the story and you can “choose your own ending.” I will shamelessly admit that I have played a few dating simulators during some of my darker moments in life (there isn’t anything wrong with them, it was just a guilty pleasure of mine and I would never admit it out loud). A lot of these dating simulators are formatted like a visual novel and claim that your choices massively affect how the ending of each story will turn out. In reality, the player only gets 3 choices every few chapters, and as long as you pick “the right choice” enough times, you get the good ending. I hated this concept because it gave the illusion that the player had control over the situation, when in reality everything is set in stone from the start, with a few minor changes based on your “decisions.”

So what does this mean for my project?

I want a visual novel game, where the player has complete control over the story line and characters. In the beginning of the game, you would pick your personality type and as the game progresses, your personality changes based on the choices you make and how well you do. In each chapter, there are a number of choices. None of them would be “bad” so to speak, but each one will lead to a different outcome for the rest of the chapter and a different scene for the upcoming chapter. I started this idea over the summer and I honestly got inspired out of nowhere to do it. I’ll be a little shameless here and post a bit of my journal entry from that day:

July 9, 2018 10:11PM

I spent the rest of the night playing sif, deresute, and animal crossing. So that’s about it. I did get a sudden urge to get motivated though (at 10:30? Why the fuck not?) It might be a little overambitious but that’s what life is all about. If you try something and it doesn’t work out, just try again. I want to create a visual novel of some sort where the player’s actions heavily influence the story. I haven’t figured out where I want to go with the storyline yet, but I know I want it to be decision based and text-based. I was sort of influenced from Detroit: Become Human and how the choices you make decide how the story ends. Of course I would have to learn how to code a video game. And I’d have to draw the characters and such. But whatever. An artist’s first product is hardly the best one. If it doesn’t turn out great, then I’ll just keep practicing until I get better.

-Randi 10:41PM

Mind you, I was at a low point during the summer and had been suffering from depression for a while (it’s still here, but we’re getting there). I was trying to find ways to purposefully motivate myself and put myself out there, to try things I had never done before. The idea for the visual novel hit me all of a sudden at 10:30PM in the middle of summer. I have had stories set up in my head all the time; I just never had it in me to write them down. Secretly, I love creating stories, but I’ve never shared them with anyone nor, shockingly, written them down until that day. To be quite frank with everyone, I have been creating stories in my head since I was about 12, but, of course, I never told anyone.

If I did go through with this idea, there is a lot of potential, but there is also a lot of risk. Let’s talk about the potential it has first.

Potential: It would be a breakthrough visual novel type game because (to my knowledge), something like this hasn’t been done before. We have those dating sims, DDLC, and actual visual novels, but I have never seen them show a truly interactive experience for the player, where the choices really matter. My idea could be the first of its kind for this type of game and I think a lot of people would benefit from a truly interactive gaming experience.

Risk: I have NEVER coded a video game before (unless you count IMM120). I could easily look up tutorials, but if I was going to master this project, I would need months or years just to learn the basics of coding. Time is my enemy here. My other risk is that I have no concrete evidence of story-telling skills. I’ve been creating stories in my head for years, but what good is that if it isn’t written down and I didn’t get feedback on it? It’s a serious risk to have 1. no coding experience and 2. no story-telling experience. Two required skills for making a video game are two skills that I do not have. If I were to go with this idea, I would have a SERIOUS amount of learning and work cut out for me and it may very well stress me out until the day of the showcase.

With the potential and risks laid out, I would like to acknowledge a few things. I know this idea is well out of my league and would be better suited for someone with actual experience in this field. However, I want to make this vision of mine a reality and unlike some of the other projects I had in mind, I would like to complete this some day in the future. I acknowledge my lack of experience in the coding and story-telling world. But I also acknowledge my drive and passion for this idea to make a meaningful project.

Project #3: a decision based visual novel with multiple storylines and endings.

Now I had a bit of the characters thought up already. These were never set in stone and I actually cringed at some of the self-introductions, so I deleted them. I’ll leave my Word doc of my original ideas here (in case anyone wants to roast my inability to create an original character).

Visual Novel