Research Update 1/3

For this week's research I spent most of my time gathering the content, the bread of this project so to speak. After my one-on-one conversation with Professor Ault, we agreed that a good starting point would be to develop keystones of information for the project. The keystones I decided on and researched fall into four categories: Mythology, Tools/Weapons, Civil Structure, and Language/Growth. These are subject to change as the research continues, but based on what I've discovered so far, it's fair to say these categories are here to stay.

The Research in Depth:

It might be a bit vague what each category covers just from its name, so allow me to describe them further. Starting with mythology: this is actually what inspired me to start this project. From a young age I've been interested in the myriad pantheons that exist around the world and the myths that come from them. It was during that study that an interesting point came to mind: all cultures, however different their mythologies, share the same starting point, the origin story. Through the origin story we can see how a civilization believed the world came into being, and by observing the differences we can pick out key cultural differences while also looking at the similarities they share. So, to bring a ramble to an end, the mythology keystone is meant to delve into cultural lore, pick apart the pantheons (essentially the collection of deities a culture believes in), the myths that come with them, and the many stories born as a result.

Moving on to the next topic, we have tools/weapons. When man discovered fire, did it somehow get passed from one person to the next and travel around the world? Of course not. What happened is that different early civilizations discovered fire independently and used it, along with other tools, to advance their societies. In essence, that's how many different groups of people were able to exist without knowing about each other: they all discovered the same or similar tools and used them in different ways. Tools/weapons looks at what civilizations used as tools, weapons, and building materials, what their houses were made of, and how those things progressed. The Romans' architecture, for example, was different from the Persians', and the weapons they used were also slightly different. The ancient Chinese used crossbows to let unskilled soldiers fight with the strength of skilled ones long before other nations picked up on the idea. The end goal for tools/weapons is to see how civilizations started from nearly identical roots, and how each group's particular technological evolution shaped it into something unique.

As we know, early humans lived as hunter-gatherers and traveled in small bands, never settling in one place for too long. Fast forward to the Babylonians and we have one of the earliest recorded sets of laws, Hammurabi's Code. Fast forward even further to the modern day, where there are vast cultural differences between countries like America and China. For civil structure, the research I did this past week went into the roles of men and women in ancient cultures, hierarchy, class structure, government in its earliest forms, and how it evolved over time. In case it wasn't clear from the previous two paragraphs, the goal here is to outline the differences between cultures while drawing similarities to modern-day practices and tracing the trail back to ancient roots.

Finally, we've reached the last keystone: language/growth. Along with mythology, this is the other keystone that sparked my interest in this topic. Flashing back to high school, I remember one of my teachers telling us about an article he read claiming that 1,000 years from now, the English language will be extinct. Needless to say, this intrigued me a great deal and left me questioning the roots of languages and why they are so diverse. There are myths surrounding this too, one being the biblical story of the Tower of Babel, in which mankind's pride was punished by confusing their speech so that people could no longer understand one another. Fast forward to the modern day and bilingual people are a dime a dozen, not to mention multilingual people. This got me thinking about how languages became so different, or rather why certain groups share a more common root than others. For a brief comparison, look at the difference between Romance languages and East Asian languages: the writing systems are structured very differently, and while the sounds differ, they are still produced with the same vocal apparatus. My research over the past week went into hieroglyphics, the ancient Sanskrit language, and how civilizations came to adopt written language.

Insight/Reflection:

While the research took up a large portion of my time, I also did some minor work thinking about changing the platform, and I am playing with ideas right now after getting feedback from both Professor Ault and Angela (my alumni advisor). While I didn't do enough research into that area, I would like to note that I am leaning toward making this a two-part project: one part holding all the information and write-ups from the research, the other a data visualization showing in visual form how everything connects (or differs; the research at this point isn't decisive enough to pinpoint which). Given how much I put into solidifying my information keystones, next week I will try to play around with the technology and get a concrete direction on how I want this to be represented in the final product. As for the heavier question of where my place in the field is with this project, I don't have a solid answer yet, although after seeing all the interesting information I've been able to uncover (and how tedious it was to find), I can say that learning about the world, its different cultures and civilizations, and how they advanced and came to shape our modern world is fascinating.

Research Update 1

Historical Examples

I posted this link on my research plan post last week. The article discusses the various ways that visual music has been used throughout history, meaning visual music existed long before modern technology was invented. Of course, we see visual music change over the years, from still paintings to computer visualizations made in After Effects. As I have argued before, many examples of visual music do not accurately represent individual pitches; instead they represent an overarching theme or melodic line. Cinematic music does help us paint a picture in our heads of what the music looks like, but audio alone does not aid a Deaf person's understanding of musical elements and pitches.

https://www.theguardian.com/artanddesign/2006/jun/24/art.art

Current Examples

http://ezproxy.tcnj.edu:2667/10.1145/3140000/3134821/p383-bigham.pdf?ip=159.91.13.117&id=3134821&acc=ACTIVE%20SERVICE&key=7777116298C9657D%2EEF3BD08345A252FB%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&__acm__=1539913345_39cb2cfc739c097d2d5821175a5622af

In this article, Bigham et al. discuss the issues that speech-activated devices bring with them. In particular, the main issue is the barrier that deaf and hard-of-hearing people face with these new technologies. On smartphones and computers, speech-to-text and speech-activated actions are optional, and you can operate the device without them. With new devices like the Amazon Echo, however, the question becomes: how can a deaf person use this? This example is not specifically catered to my project and does not discuss the music side of accessibility. However, it is important to note that certain designers are noticing issues within our tech-based world and working toward more accessible design for members of the deaf community.

http://ezproxy.tcnj.edu:2667/10.1145/2390000/2384975/p245-capozzi.pdf?ip=159.91.13.117&id=2384975&acc=ACTIVE%20SERVICE&key=7777116298C9657D%2EEF3BD08345A252FB%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&__acm__=1539913879_669d644934bda4cd853aca58216a4197

Yet we still have no evidence of technology or innovative ways to help deaf people understand music, though there is evidence of helping the blind! This article mainly discusses the difficulties of reading Braille and how hard it is for blind people to read music. The innovation relies on software that "sings" the notes as they would appear on a piece of sheet music. While useful and innovative, couldn't a music teacher do that exact thing? Also, people do not necessarily need to know music theory in order to play an instrument or compose. In fact, at times it is easier to listen to and play music with one's eyes closed, eliminating the distraction of sight from the melodic material. Here we see an example of an innovation that aids blind people with music. Where are the innovations that aid deaf people with music?

http://ezproxy.tcnj.edu:2667/10.1145/2990000/2982213/p285-petry.pdf?ip=159.91.13.117&id=2982213&acc=ACTIVE%20SERVICE&key=7777116298C9657D%2EEF3BD08345A252FB%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&__acm__=1539914296_44acf8ae596a73ed2aeb351b6f43f0fa

Here is my starting point for research on "Visual Music for the Deaf." Surprisingly, this article was not hard to find in the ACM database; all I had to do was search "deaf music." The article brings up the issue that some sound visualizers have a delay or lag to them. From prior research, Petry et al. found that previous attempts at accessible music for the deaf did not offer a "real time," accurate representation of the sound being represented. This is similar to what I want to accomplish with my project and its interactivity element. If the visualizations aren't exactly linked up with the sounds, then they are not accurate and can't be relied on to connect sound to visuals. One quote caught my eye: "Prior work has developed visual and vibrotactile sensory substitution systems that enable access to musical sounds for deaf people [2,4,7]" (Petry et al. 1). A-ha! Here starts my journey. I was struggling to find examples of music innovations made for deaf people, and here they are in three references at the end of the article. I'll link them as their own sources below, but I would not have found them if it weren't for this article.

Fourney, D.W. and Fels, D.I. Creating access to music through visualization. Science and Technology for Humanity (TIC-STH). 2009, 939–944.

This example stemmed from the previous one about real-time music visualizations and tactile responses for deaf users. I will admit that I have not fully read these articles yet, but I have looked through the examples of visualizations on these pages. It seems to me that this article is jam-packed with information about music for deaf people. It is definitely at the top of my list of reliable sources.

http://static1.squarespace.com/static/54ef3f61e4b0dd6c6d1494c0/t/56d7b4a6ab48def067a083cd/1456977062648/Sofia+_+Music+In+Special+Education+-+Research+Paper.pdf

While not specifically related to technology, this article talks about music therapy for deaf people and common misconceptions hearing people have about deaf people and music. It also talks about the major struggles Deaf people face with the hearing community and with finding peace in a world where hearing people are the majority. Even though I have learned about these misconceptions and assumptions in several ASL courses I have taken at TCNJ and from communicating with Deaf people, it is reassuring to have solid evidence that backs up my claims. A typical remark made by hearing people is, "Why don't you just get a cochlear implant? I would never want to be deaf! I can't imagine not being able to hear, I'd rather DIE." Here's a quote that accurately represents my opinion on that: "One does not need a good quality hearing aid or a cochlear implant to enjoy of music, because most people, even those with severe to profound deafness, have residual hearing" (Sofia P. Quiñones Barfield 6). In addition, there are some Deaf people who do not want to hear. Mainstream society definitely has a problem understanding Deaf culture and why some Deaf people may not want a cochlear implant or the ability to hear.

http://ezproxy.tcnj.edu:2667/10.1145/1520000/1518756/p337-nanayakkara.pdf?ip=159.91.13.117&id=1518756&acc=ACTIVE%20SERVICE&key=7777116298C9657D%2EEF3BD08345A252FB%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&__acm__=1539916130_09ff3d0046e22bdc25464fb437175978

This is another article that stemmed from the first one I found in the ACM database. Not only does it show the importance of visualizing music, it also discusses the importance of vibrations. While my project may not focus on vibrations, I think it is an interesting field to study. There is definitely more concrete evidence for visual music than for tactile music, but I think tactile music could be a huge step forward for the Deaf community when it comes to music.

These last three links are ones I posted in my research plan. They are music visualizations and are not catered toward Deaf or hard-of-hearing people.

http://www.centerforvisualmusic.org/

https://www.youtube.com/watch?v=obrBAysVef0&t=66s

https://www.wired.com/2011/05/visual-music-paul-prudence/

I also plan to do more research into color theory and how I can best represent each pitch and instrument with appropriate colors. If anyone has suggestions for where I can find those sources, please share them!
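In the meantime, one simple starting point (purely an assumption on my part, not something drawn from the color-theory sources I still need to find) would be to spread the twelve pitch classes evenly around the hue wheel so that octaves share a color. A quick Python sketch of that idea:

import colorsys

# Hypothetical mapping: the 12 pitch classes (C=0 ... B=11) spaced evenly
# around the hue wheel, so every C is one color, every C# another, and so on.
def pitch_to_rgb(midi_note, saturation=1.0, value=1.0):
    pitch_class = midi_note % 12
    hue = pitch_class / 12.0                      # position on the color wheel
    r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
    return int(r * 255), int(g * 255), int(b * 255)

# Example: middle C (MIDI 60) and the A above it (MIDI 69) get distinct hues.
print(pitch_to_rgb(60))
print(pitch_to_rgb(69))

Saturation or brightness could then be reserved for loudness or for telling instruments apart, but that is exactly the kind of choice I want the color-theory research to inform.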

 

Research Update 1/3

To begin research for my game I had to look at the history of rhythm games that have made an impact on the industry, and I came across some pretty popular ones such as Just Dance, Rock Band, and Dance Central. I have actually referenced all of these games in my Research Plan post. (http://ault.immtcnj.com/thesis_fall_18/2018/10/11/research-plan-2/) However, I didn't know about one of the games that probably paved the way for rhythm games of this caliber: Rez. So that was the first piece of research I did after Professor Ault told me about the game.

The game, developed by United Game Artists, is a third-person rail shooter in which the player shoots at objects that appear around them. The hook is that every time the player destroys a target, a sound is made that adds to the music in the background, making the player the composer of the song. This is a very rare quality, one I thought was an amazing idea, and I still can't quite figure out how it worked so flawlessly. It also has a beautiful color palette that feels like a form of synesthesia while you play. The game was such a success after its release in 2001 on the Sega Dreamcast and PlayStation 2 that it was re-released multiple times, for Xbox Live and later for PlayStation VR. Rez Infinite, the PS VR version, is currently one of the most acclaimed rhythm games in the world. My next step will be to speak to Professor Fishburn about his experience playing the game and how I could incorporate that into my game.
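My best guess at how the "player as composer" effect works (an assumption on my part, not something confirmed about Rez) is beat quantization: whatever sound the player triggers is held until the next beat subdivision, so it always lands on the musical grid no matter when the player actually fires. A tiny Python sketch of the timing math:

import math

BPM = 120
SUBDIVISIONS_PER_BEAT = 4                    # sixteenth notes
GRID = 60.0 / BPM / SUBDIVISIONS_PER_BEAT    # seconds between grid points (0.125 s here)

def quantize(event_time, grid=GRID):
    # Push the triggered sound to the next grid-aligned time at or after the hit.
    return math.ceil(event_time / grid) * grid

# A shot fired at 3.41 s would have its sound played at 3.5 s, right on the grid.
hit_time = 3.41
print(f"player fired at {hit_time:.2f}s, sound plays at {quantize(hit_time):.3f}s")

In Unity I would have to do the same thing with the engine's audio scheduling rather than plain timestamps, which is something to look into once I start working through the how-to videos.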

I also got the chance to talk to Brett Taylor over email, which was very helpful because he had done a project that used the Kinect with Unity and has designed a couple of his own rhythm games. His advice was to prioritize and identify exactly what I wanted to accomplish with the game, and to be aware that as I progress through making it, my priorities might shift a little. So I came up with a list of priorities:

Learning Unity
Designing and animating the environment
Designing how the music interacts with the environment
Kinect support
First-person controls
A fun and smooth player experience

I researched using the Kinect with Unity and came across a few examples. The problem is that this will be my first time using the engine and animating and designing my own game, so although watching other people play their games was a pretty cool experience, I found it better to start looking up how-to videos on using the Unity engine, and I ended up finding quite a few. I also found how-to videos on adding music to games made in Unity.

https://unity3d.com/learn/tutorials/topics/xr/getting-started-vr-development

https://unity3d.com/learn/tutorials/topics/virtual-reality/movement-vr

https://unity3d.com/learn/tutorials/topics/audio/adding-music-your-game

 

Research Update #1

My research into modding could be considered to have started about seven years ago on YouTube. I've been watching the channel AlChestBreach for many years, and his main content is Fallout mods. His channel is still going strong, and every time I watch a video I both laugh and learn a lot about modding. Recently, while watching his Fallout 4 mod playthroughs, I've been paying a lot of attention to just how the mod author has put together the world they created. Some of these mods are literally game-changing and very technically impressive, such as The Train. The series' relationship with modding has certainly come a long way since the first mods for Fallout 3, which I've also modded and played. There's actually a funny story about Fallout and the Gamebryo engine it runs on. In Fallout 3's DLC Broken Steel, there's a part where you enter and ride the Presidential subway. When you activate the controls, the game locks the player in place and the train proceeds forward along the tracks. The funny thing is, due to the limitations of the engine, the team had to work around an issue with moving the train and came up with an interesting solution: the train car was actually a hidden NPC wearing the train as a hat, and the NPC was the thing scripted to run along the tracks, carrying both the train and the player on its head. Weird stuff, but it shows the kind of ingenuity it sometimes takes to get an old game like Fallout 3 to do what you want. Hopefully, I won't face an issue that requires a solution as odd as that one.

With Fallout 4, I've only ever owned it on the Xbox One. In most cases, consoles never see large-scale modding the way PC does; the platform just isn't friendly toward sharing and downloading mods. With Fallout 4, however, Bethesda created a software platform that allows mods created on PC to be shared and downloaded on the Xbox One edition of the game. This was the first time mods of this kind were officially allowed onto a console, and it brought a larger audience to the magic of modding. I've played that game and the mods that were available for many, many hours. But now I've purchased the game on PC in order to create my own mod for this thesis. While playing, I've been taking note of objects and terrain that will be useful in an underground setting. I've explored a few of the caves in the game to get a sense of how the developer treated these subterranean areas. I understand now that most of the areas I will have to create are considered interior cells, in contrast to the exterior cells of the main game world. We'll see how that translates to the larger interior areas I have in mind. I've also managed to download a bunch of awesome mods in order to do some first-hand research on them. I will soon be delving into the Creation Kit to learn how to do some of the things I see in the mods I'll be playing.

Research Update 1 of 3

10/16/18

This week, I created an Amazon AWS account and an Alexa skill builder account. I spent a lot of time following along with YouTube videos to work toward developing my first Alexa skill. Talking my idea through with Professor Ault helped immensely in figuring out what direction I should go. After some thought, I realized a touchscreen UI was an impractical way to implement a cooking aid; voice would be much more fitting for a hands-free experience. I am assuming my skill will fit into the Smart Home category because I would need lighting control, but I will figure out the specifics later. The reason I am getting started with Alexa skills so early is that I want to test whether it is possible to build what I envision on this platform. The sooner I can test it out, the faster I will learn my limitations and potential roadblocks. Also, they're giving out a free Echo Dot if I can get a loose concept published in time 🙂

I have reached out to Dr. Nakra to meet up and learn more about RFID technology, which might come in handy when I start with the physical portion of my project.

10/17/18

I've started building an Alexa skill based on my thesis idea. Although I'm working off example templates for now, I'm already facing some challenges with learning the constant and variable names and with translating what I'm trying to do into code. I also researched what a trademark means and how to avoid unknowingly publishing someone else's protected content.
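For a sense of the shape these skills take, here is a minimal sketch of a skill backend using the Python version of the Alexa Skills Kit SDK (ask-sdk-core); that is just one option, and the intent name and responses below are placeholders I made up for a cooking aid, not anything I've published.

from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type, is_intent_name

class LaunchRequestHandler(AbstractRequestHandler):
    # Runs when the user opens the skill without asking for anything specific.
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        speech = "Welcome to the cooking helper. What would you like to make?"
        return handler_input.response_builder.speak(speech).ask(speech).response

class NextStepIntentHandler(AbstractRequestHandler):
    # Handles a hypothetical "NextStepIntent" that walks through a recipe.
    def can_handle(self, handler_input):
        return is_intent_name("NextStepIntent")(handler_input)

    def handle(self, handler_input):
        # A real skill would track the current recipe step in session attributes.
        speech = "Next, add two teaspoons of paprika and stir."
        return handler_input.response_builder.speak(speech).ask(
            "Say next when you're ready for the following step."
        ).response

sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())
sb.add_request_handler(NextStepIntentHandler())
lambda_handler = sb.lambda_handler()  # entry point when hosted on AWS Lambda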

10/18/18

I met with Dr. Nakra today and we discussed RFID technology. I learned that RFID can pick up signals without direct contact, and since spice bottles are small and close together, that might cause a mix-up over which spice ID the base sensor is picking up. I plan to reach out to Dr. Nakra's husband to see if he has a workaround or another direction he can point me in.

 

Research Update 1/3

My interactive stitcher project is going to be quite rigorous, considering there are multiple programs I will need to learn and use in order to execute it.

However, within a short span of research time, I was able to find many great resources that will be useful for my project. Professor Ault was helpful in suggesting that I research "computer vision."

To define the term: computer vision is "a field of computer science that works on enabling computers to see, identify, and process images in the same way that human vision does, and then provide appropriate output" (techopedia.com). It also goes hand in hand with artificial intelligence (AI), since the computer has to interpret what it sees and then act on that interpretation.
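To get a hands-on feel for what that means in practice, here is a tiny sketch using OpenCV's Python bindings (OpenCV comes up in the sources below); it just grabs webcam frames and runs a basic edge detector, which is about the simplest see-identify-process loop there is. The camera index and thresholds are guesses to be tuned later.

import cv2  # OpenCV's Python bindings (pip install opencv-python)

cap = cv2.VideoCapture(0)  # 0 = default webcam; an external HD camera may use another index

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)          # arbitrary starting thresholds
    cv2.imshow("camera", frame)
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to quit
        break

cap.release()
cv2.destroyAllWindows()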

Anyhow, I’ve created a condensed list below of the sources I’ve found so far and other ideas I have of how to move forward with this project:

  • Video sources: https://www.youtube.com/watch?v=UZSm7Q2bZoc  https://www.youtube.com/watch?v=h8tk0hmWB44 https://www.youtube.com/watch?v=kK0BQjItqgw&list=PLU_f8mulhsXL3JXIjRRL0rcNSsgtzsk6w&t=0s&index=6
  • Website sources: http://programmingcomputervision.com/downloads/ProgrammingComputerVision_CCdraft.pdf   http://answers.opencv.org/question/17183/recommended-hd-camera/
  • Possible Programs to Use: Python (free download), SimpleCV, and Raspberry Pi. Python is the one I'm most seriously considering because many sources recommend it as a great language for computer vision.
  • Necessary Equipment: small HD camera, computers (in the lab)
  • Possible Professors to Talk To: John Kuiphoff, Teresa Nakra, and Josh Fishburn

 

Research Update #1 Karin

This week is finals week in London, so I had three 10-page papers as well as a presentation… so I'm going to keep this short and sweet, but I promise to make up for it on the next few!!!

I only had time to explore the ideas left in the comments by Professor Ault and my IMM mentor, and here are my reactions to those ideas:

Angela:

What sets it apart/gamification idea- “contests to win tickets to concerts, secret shows, meet and greets, merch prizes, check in at shows, photo contests”

-This is a great idea, as I've also considered some other elements that have been thrown around, such as a live video feature (similar to Periscope)

I just have 2 concerns:

-1. It may take away from the music "community" side of things. Kids go to these shows for the experience, not for money, contests, or prizes. Most of the merch being sold is to help out the bands, who already don't have a ton of money, so having merch prizes might take away from that community feel. Some people already know the musicians because of how local these shows are, so I'm not sure meet and greets would work at this small a scale.

-2. Because of my already ambitious (for me) idea of creating an app with location services, encyclopedias, etc., I'm not sure how realistic it will be for me to add another element such as gamification. I definitely want to consider this more and see if there is anything I can add based on the surveys I will conduct.

Building the app- “Adobe XD for mockups, Invision, PubNub- Kuiphoff”

-When finals are over here, I'm definitely going to look into these programs. I did a bit of research on Adobe XD and realized that I also used it for 280 during freshman year! The program was awesome for prototyping, and I'm going to look into it (as well as the others) when I begin my creation.

Chris Ault:

Figure out what sets it apart- “bandsintown or songkick, survey people”

Building it- PhoneGap, Cordova, React, Lynda React Developer instructions

Within the next few weeks, I'm going to begin surveying some of my musical friends and asking them how they feel about some of the elements of the app, what's missing, how it differs from Bandsintown, etc. (as well as look into some of your app-building ideas).

I do think that the main difference is that it caters to a more local community rather than a concert fanbase. Maybe I can look into ways to make it more community-based? Some DIY centers host art workshops, so I could potentially include other DIY-centered events on the calendar. (not sure)

I have very little knowledge of programming, so this is going to be an adventure for me! I'm definitely going to look into React; it would be very interesting to see how far I can get with the Facebook group that already exists and my idea.

Sorry again for my lack of work this week, it was a crazy one :/

Final Concept

Résumé: The Ride

My idea is to create an exciting and innovative way to display my résumé, using projection mapping to project animations, photos, and videos onto a model that also incorporates special effects and moving parts to tell my professional and educational story. I am inspired by this project and its potential because I feel it is something I can truly pour myself into over the next seven months, and I also feel I can continue to build upon and improve it until the day of the show. I get very invested in things I enjoy, and simply thinking about this project has me very excited. Although this project will be a great deal of fun for me, I feel it can also help my future significantly, since I will be presenting my passion for working in theme parks in an innovative way and showing that I want to continue down this path.

Although it is clear why I am extremely excited about this project, I feel that others will enjoy it too, because who doesn't love theme parks? My project will be exciting to watch and aesthetically pleasing. With music, narration, moving parts, practical effects, and projection mapping, the presentation will be a little over the top, but I feel that is the best way to go with something like this. As far as my research has shown, nobody has done something exactly like this yet. With projections mapped onto a tabletop model with moving parts and different effects, my project will likely be the first of its kind and will let the viewer feel as though they are visiting a miniature theme park.

I plan on using the makerspace to fabricate a model suitable for my project. The model will be built from wood, metal, and plastic, with motors, lights, and a few other systems used to create effects during the "show". It will likely combine a few elements, such as a castle, a mountain, or a roller coaster. I will probably need at least two projectors, depending on the design of the model. If time permits, I would ideally add a few small pyrotechnic and water effects to the show to highlight different parts of my career. By the end of this semester, I hope to have a solid plan for the model, with construction possibly beginning, and I would like to have a full draft of my "show" written up with a rough cut of the video and audio put together. If I can achieve these things, I feel I will be on the right track to completing this project, and I will know where I need to focus more of my time, since I will be attempting to begin several parts of the project. I will also begin to study projection mapping and start to play around with it so I am comfortable with the software I choose.

Research Plan

History and Current State of the Field

Projection mapping began in 1969 at Disneyland on the Haunted Mansion, where five busts sing the ride's theme song with their faces projected onto them, a revolutionary idea at the time that can still be appreciated today. Disney also holds the earliest patent for projection mapping, titled "Apparatus and method for projection upon a three-dimensional object". The next instance of projection mapping came around 1980 with an immersive film installation created by Michael Naimark. In 1994, GE stepped into the world of projection mapping when it patented "a system and method for precisely superimposing images of computer models in three-dimensional space to a corresponding physical object in physical space." In the late 1990s, projection mapping began to take off when it was pursued in academia. "Spatial augmented reality" emerged thanks to the work of Ramesh Raskar, Greg Welch, Henry Fuchs, and Deepak Bandyopadhyay at UNC, starting with a paper titled "The Office of the Future", which imagined a world where projections could cover any surface and small monitors would become obsolete. The late 1990s also gave us the I/O (Input/Output) Bulb, essentially a projector combined with a camera, thanks to John Underkoffler. In the early 2000s, research began on movable "smart" projectors, and projection mapping continued to develop in even more exciting ways. Today, theme parks, theaters, and museums have all begun to incorporate projection mapping as we know it in amazingly innovative ways, and it seems this trend will not be stopping anytime soon.

Finalized Concept

The project I will be committing to is an interactive short film. This film will be unique in a number of ways; the main way it will differ from other projects is by incorporating a variety of movie genres, with each choice in the hands of the viewer. The goal is to embed the videos into one another so that there is a seamless transition from one clip to the next. The inspiration behind this project is my passion for filmmaking. I have only created small one-minute clips, and I have never done an actual short film. There's a commercial on television that has its two main characters going through a scene that constantly changes genres right on the spot, creating excitement and unpredictability. That commercial inspired my idea to include multiple genres in my film.

This project represents my personal values and interests by incorporating my love for film and movies. I have always dreamed of creating a film, and this would be an incredible starting point for me. Being able to dream up a story and then make it come to life really excites me. As IMM majors we are often pushed to be creative and think outside the box, and being allowed to let my creativity run wild is exactly what I love to do. Within IMM I tend to lean toward media, video editing, and filmmaking. This project will help my future endeavors because after graduating I aim to work in social media. The ideal role for me is creating content for social media, and video clips will be at the top of that content list. Creating such an innovative and creative film will add to my portfolio and push me to gain more skills in this area.

I believe there will be a lot of interest in my film because film and video are such a huge part of media. When watching television or going to the movies, you will rarely see still-image promotions; people love video. My project will gain even more interest because of the interactive aspect I will be presenting to the audience. Once potential viewers see that they can control the storyline, it will immediately attract them to watch the film and take control. I imagine viewers will sit with my film and try to reach each and every possible ending and twist.

Implementing this project technically will take more research on my behalf. I imagine I will need some sort of coding to embed the clips into one another. One of my peers mentioned a YouTuber doing something similar, but he just created a single video and the transitions were choppy and obvious. The goal is to make the transitions seamless, as if you're sitting in your local movie theater. I will likely be editing my clips in Adobe Premiere Pro and After Effects. When filming, I will go to the cage and find the camera best suited for my project, along with lighting pieces, to make sure the quality remains high, crisp, and clean. What's great about this film is that I am able to give it multiple styles: when it's a romance, there will be lots of close-ups, a bright scene, romantic music, and a quiet, serene mood; then when it changes to horror, it will grow dark and the music will fade into something suspenseful and creepy. There will be a variety of styles.
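Before I settle on how to build the interactive part, here is a rough Python sketch of how the branching structure itself could be organized; the scene names, clip filenames, and choices are placeholders, and a real build would hand each clip to an actual video player instead of printing its name.

# Hypothetical branching structure: each scene has a clip and the choices
# that lead to the next scene. Endings simply aren't listed, so the walk stops.
scenes = {
    "opening": {
        "clip": "opening.mp4",
        "choices": {"Follow the stranger": "romance_1", "Run away": "horror_1"},
    },
    "romance_1": {
        "clip": "romance_1.mp4",
        "choices": {"Stay for dinner": "ending_a", "Leave": "ending_b"},
    },
    "horror_1": {
        "clip": "horror_1.mp4",
        "choices": {"Hide": "ending_c", "Fight back": "ending_d"},
    },
}

def play(scene_id):
    # Walk the graph from a starting scene, asking the viewer at each branch.
    while scene_id in scenes:
        scene = scenes[scene_id]
        print(f"[now playing {scene['clip']}]")
        labels = list(scene["choices"])
        for number, label in enumerate(labels, start=1):
            print(f"  {number}. {label}")
        pick = labels[int(input("Your choice: ")) - 1]
        scene_id = scene["choices"][pick]
    print(f"[now playing {scene_id}.mp4, the end]")

play("opening")

Whether the final playback ends up in a web page, a game engine, or an editing trick inside Premiere, keeping the story as a simple graph like this should keep the seamless-transition problem separate from the branching problem.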

When prototyping this project, I will be working with a lot of different recording techniques and learning how different films are executed. I will share my storyboard with friends and peers to see if my story is entertaining and engaging. I also plan on practicing embedding videos into one another by creating little clips with my phone and rehearsing how I would execute the transitions once I have my actual clips. This will be one of the most challenging parts, so I will be sure to spend a lot of time learning and mastering the technique.