
Open Source Cinema | Setting | 360/VR Cut Experiments

Homework link: (!! buggy because video loading is not handled properly)

After going through the readings, I wasn't satisfied with Brillhart's proposed framework for cutting in VR, so I decided to run some cutting experiments for this week's homework. Applying what I read in Flicker and Brillhart's post, I designed three rough experiments:

  1. Falling deeper (visual masking)

  2. Person-of-interest

  3. Visual question-answer

Building on what I learned from these experiments, I also designed a small interaction, which you can try at the homework link.


1. Falling Deeper

For this experiment, I imagined viewing a linear narrative as "falling deeper" into a story. One possible way of cutting and still keeping track of how far a viewer has gone into the story is to have a visual indication of the fall "depth".

I used a green screen as this visual indication.

The cuts feel like jump cuts because there isn't enough contour difference between the frames. But interestingly, as I fall deeper into the footage, the cuts don't seem that jumpy anymore. Perhaps this is because I become more fixated on examining what's inside the screen and pay less attention to the surrounding contour.

I also realized that this green screen can serve as a breadcrumb in the narrative, and I should keep this in mind as the class moves towards hyperlinking cinema.


2. Person-of-Interest

If you haven't realized, this person is me.

This one's pretty straightforward, because I essentially took a page from Brillhart's article. I keyed out the black and changed the background scene.

The resulting cuts seem smooth to me, and I think they would have been even more so if the person stood closer to the camera. My theory is that the brain can smooth over the scene jump because the focus of the eyes is retained (the person stands on the same spot) and the person serves as a narrative link between the two scenes.
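The key itself can be sketched in a few lines over raw RGBA pixel data (for example from a canvas `getImageData` call). This is an illustrative luma key, not the exact filter my editor applies, and the threshold value is an assumption:

```javascript
// Luma key: make near-black pixels transparent so the replacement
// background shows through. `pixels` is flat RGBA data; `threshold`
// is a 0-255 brightness cutoff (chosen arbitrarily here).
function lumaKey(pixels, threshold) {
  for (let i = 0; i < pixels.length; i += 4) {
    // Rec. 601 luma approximation from the R, G, B channels
    const luma =
      0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2];
    if (luma < threshold) {
      pixels[i + 3] = 0; // zero the alpha channel to "key out" the pixel
    }
  }
  return pixels;
}
```

Drawing the keyed frame on top of the new background scene then gives the composite in one canvas pass.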

I like this approach: it would be natural for filmmakers to base cutting decisions in a VR narrative on their characters.


3. Visual Question-Answer

The last experiment makes a cut when an action poses a visual question and prompts the brain to seek the answer. I did this by throwing an object and cutting right after it passes the point closest to the camera.

I made two changes between takes: (1) putting the object on a platform, and (2) changing the background on half of the screen.

While editing, I immediately realized the object should have been a different, more eye-catching color. If the action had been attention-grabbing enough, I think the cut would have been smooth and felt natural.


Putting Everything Together

I made a quick & dirty interaction design using Dan's template.

I play the "Falling Deeper" footage first, and a viewer has to stare at the green screen to move to the next screen.

video.onended = endHandler;

function endHandler(e) {
  if (pathcount == 0) {
    console.log(camX);
    if (camX < -420 && camX > -520) { // if camera is looking at green screen
      console.log("PLAYING PERSON");
      video.src = paths[1];
      pathcount++;
    } else {
      video.src = paths[0]; // loop the fall until the viewer stares
    }
    return;
  }
  if (pathcount == 1) {
    console.log("PLAYING THROW");
    video.src = paths[2];
    video.loop = true;
    pathcount++;
    return;
  }
}

I wanted to have the same stare logic on the "Throw" clip, but did not have time to implement it!
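For the record, here's a minimal sketch of how that stare logic could work: a dwell timer that only fires once the camera has pointed at the target region continuously for a set time, rather than the one-shot yaw check above. The yaw range and dwell time below are assumptions, not values from Dan's template:

```javascript
// Dwell-timer gaze trigger: returns an update function you call every
// frame with the current camera yaw and a timestamp (ms). It returns
// true once the camera has stayed inside [minYaw, maxYaw] for dwellMs.
function makeGazeTrigger(minYaw, maxYaw, dwellMs) {
  let gazeStart = null; // timestamp when the current stare began
  return function update(yaw, nowMs) {
    if (yaw > minYaw && yaw < maxYaw) {
      if (gazeStart === null) gazeStart = nowMs; // stare just started
      return nowMs - gazeStart >= dwellMs;
    }
    gazeStart = null; // looked away: reset the timer
    return false;
  };
}
```

During the looping "Throw" clip, the render loop would call the trigger each frame and swap `video.src` the first time it returns true.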


What I Learned

  1. It seems like the best way to make cuts in 360 cinema is still to either pose a visual question with an action, or to use visual masking (overloading the viewer with so much visual detail that they smooth over the cut).

  2. That said, it would be interesting to test this with a VR headset.

  3. Maybe the way to have hypercinema is to have breadcrumbs in the form of objects in the scene. E.g. a TV set playing a clip and if I stare at this TV screen for long enough, the entire scene will cut into whatever is displayed in the TV set.
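That breadcrumb idea could be sketched as a lookup from gaze regions to clips: each in-scene object claims a yaw range, and staring into that range (with a dwell timer like the one above) cuts to the object's clip. The object names, yaw ranges, and file names here are all made up for illustration:

```javascript
// Hypothetical map of in-scene "breadcrumb" objects to the clips they
// hyperlink to. Each entry claims a yaw range in the same units as the
// camera's camX; values are invented for this sketch.
const breadcrumbs = [
  { name: "tvSet",       minYaw: -520, maxYaw: -420, src: "tv-clip.mp4" },
  { name: "greenScreen", minYaw:  100, maxYaw:  180, src: "fall-clip.mp4" },
];

// Return the clip the camera is currently "reading", or null if the
// gaze isn't on any breadcrumb.
function clipForGaze(yaw, crumbs) {
  const hit = crumbs.find((c) => yaw >= c.minYaw && yaw <= c.maxYaw);
  return hit ? hit.src : null;
}
```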
