
Open Source Cinema | Open Source Actor

Concept

Last week I wrote at length (and discussed in class) about how VR experiences typically lack an emotional anchor / proxy, and therefore fail to suspend the user's disbelief. I drew a comparison with lo-fi YouTube vlogs, which don't have high production value, but whose emotional experience is well anchored by the personalities.

So this week, to facilitate the creation of more compelling characters within a VR experience (instead of high-res environmental graphics), I made an Open-Source Actor.

The aims of the project are to:

  • Enable the user to create a character specific to their narrative purpose.

  • Allow the character to store & mimic the user's acting (expressions & lines).

 

How I Made It

  1. Load a Collada object into a 360 scene

  2. Replace the character's face with a p5 sketch

  3. Draw facial features on p5, use clmtrackr to move the facial features

  4. Include p5 speech library

  5. Save the sequence of facial expressions & speech

 

1. Load a Collada object into a 360 scene

I knew that I wanted a low-poly character so that I could draw faces on p5 using simple shapes. After browsing different 3D model repos, I decided to go with Mixamo because it comes with animation.

I downloaded the model as a Collada file and used my previous code to load it into the scene.
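For reference, the loading code looks roughly like this, using the ColladaLoader from the three.js examples (the file path and scale here are placeholders, not the actual values from my project):

// load the Mixamo character exported as Collada (.dae)
var loader = new THREE.ColladaLoader();
loader.load('assets/models/character.dae', function (collada) {
  var character = collada.scene;
  character.scale.set(0.01, 0.01, 0.01); // Mixamo exports can be large; adjust as needed
  scene.add(character);
});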

 

2. Replace the face with a p5 sketch

I removed the character's eyes, brows, and mouth by editing the model's textures in Photoshop.

Using Dan O's example, I loaded a p5 sketch onto the scene. For some reason I had to draw my own geometry, and I ran into difficulties with (1) the order in which the functions get called, and (2) which JS should be loaded in the head and which in the body.

In my code, the p5 sketch is loaded in the body, after the loader library and the main three.js sketch are loaded in the head of the HTML.
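For context, the p5 sketch just needs to expose its canvas so the three.js code can wrap it in a texture. A minimal sketch of that, assuming the global is called p5cs as in the snippet below:

// p5 sketch (loaded in the body): the face is drawn here, and
// p5cs.elt (the <canvas> DOM node) is handed to three.js as a texture source
var p5cs;

function setup() {
  p5cs = createCanvas(800, 600); // returns a p5.Renderer; .elt is the canvas element
  p5cs.elt.style.display = 'none'; // hidden on the page, used only as a texture
}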

// P5 LAYER: a plane that carries the p5 canvas as its texture
var rect = new THREE.PlaneGeometry(80, 60);
p5Texture = new THREE.Texture(p5cs.elt); // p5cs.elt is the p5 <canvas> element
rect.scale(0.06, 0.06, 0.06);

var rect_mat = new THREE.MeshBasicMaterial({
  // map: new THREE.TextureLoader().load('assets/textures/UV_Grid_Sm.jpg')
  map: p5Texture,
  transparent: true,
  opacity: 1,
  side: THREE.DoubleSide
});

var rect_mesh = new THREE.Mesh(rect, rect_mat);
rect_mesh.position.y = 0.3; // place the plane over the character's face
rect_mesh.position.x = 1;
scene.add(rect_mesh);

animate();
}
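One gotcha: a THREE.Texture built from a canvas doesn't refresh on its own, so the render loop has to flag it for re-upload every frame (renderer and camera here are the usual three.js globals from my setup):

function animate() {
  requestAnimationFrame(animate);
  p5Texture.needsUpdate = true; // re-upload the p5 canvas each frame
  renderer.render(scene, camera);
}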

 

3. Draw facial features on p5, use clmtrackr to move the facial features

The clmtrackr library gives me the coordinates of facial features captured via the webcam.

With this in mind, I started drawing a face on p5.

Then I captured the "original" (calibration) coordinates of the facial features, and used the ratio between the real-time and calibration values to scale the shapes in p5.

// positions comes from ctracker.getCurrentPosition();
// 24 and 26 are the top and bottom points of the left eye
curEyeL = Math.abs(positions[26][1] - positions[24][1]);
eyeL_mod = curEyeL / calEyeL; // current vs. calibrated eye opening

// left eye: squash the ellipse vertically by the opening ratio
noStroke();
fill(255);
ellipse(eyeL_X, eyeL_Y, eyeL_Size, eyeL_Size * eyeL_mod);
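The calibration itself can be as simple as sampling the same two points while holding a neutral face. A hedged sketch (getCurrentPosition() is clmtrackr's API; the key binding and variable names are mine):

// press 'c' with a neutral expression to store the baseline eye opening
var calEyeL = 1;

function keyPressed() {
  var positions = ctracker.getCurrentPosition(); // false until a face is found
  if (key === 'c' && positions) {
    calEyeL = Math.abs(positions[26][1] - positions[24][1]);
  }
}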

 

4. Include p5 speech library

This part was pretty straightforward. I followed Luke DuBois' example and used a button to trigger the record and play functions.
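The wiring is only a few lines with p5.speech's p5.SpeechRec and p5.Speech classes (the variable names and button labels below are mine, not Luke's):

var speechRec, voice, lastLine = '';

function setup() {
  noCanvas();
  speechRec = new p5.SpeechRec('en-US', gotSpeech); // callback fires on each result
  voice = new p5.Speech(); // text-to-speech for playback

  createButton('record').mousePressed(function () { speechRec.start(); });
  createButton('play').mousePressed(function () { voice.speak(lastLine); });
}

function gotSpeech() {
  if (speechRec.resultValue) {
    lastLine = speechRec.resultString; // keep the recognized line
  }
}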

5. Save the sequence of facial expressions & speech

Lastly, I wrote a function to save the sequence of facial features' coordinates in an array, and the recognized speech into a variable.
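A rough sketch of what that looks like (the structure and names are my own; getCurrentPosition() is clmtrackr's):

var take = { frames: [], line: '' }; // one recorded performance
var recording = false;

// called every frame from draw() while recording
function saveFrame() {
  var positions = ctracker.getCurrentPosition();
  if (recording && positions) {
    take.frames.push(positions);
  }
}

// called from the speech-recognition callback
function saveLine(str) {
  take.line = str;
}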

The idea was to upload these to my database, but I haven't done it yet!

 
