Thursday, June 28th, 2018:

DAY 14:


For today, we had a showing for a client who wanted to know more about Faceware. They work with cultural institutions (museums, parks, zoos, etc.) to bring better AR and VR experiences to life. When they came in, I found out that they knew close to nothing about this technology and really wanted a hand-holding experience through the demo of the hardware, the software, and the post-processing of the data. So we used Moira again for the acting and discussed Faceware with them.


1. Hardware:

- Here at House of Moves, we have 6 Faceware helmets that came with the Pro package. This includes the Faceware cameras, multiple lenses, the stabilizing bar, a light, etc. As we set the helmet up on Moira, we explained to our client that we need to adjust the stabilizing bar so the camera frames her face. From there, we had her open her jaw all the way, puff her cheeks, and smile (this way we can see whether all of those 'extreme' moves keep her face within the camera range). Troy also explained that the philtrum is the area we want roughly 'dead center' when framing the face.

2. Video Data:

- Once that was set up, we started doing some ROM (range of motion) tests (one of them is in the video below). This is where we have the face go into the extreme poses that we will later see in Analyzer/Retargeter. We cover the eyes, eyebrows, and mouth. We had Moira do ROM tests of just moving those areas around, but also pose out phonemes and say certain words that enunciate those sounds. After doing the short ROMs and long ROMs, we had Moira say short phrases, like "The Quick Brown Fox Jumped Over The Lazy Dog" and the first couple of sentences of the Gettysburg Address, so that the clients had some reference footage to play with.

3. Analyzer/Retargeter:

- We didn't show the client Analyzer/Retargeter due to time, but we did field a lot of questions that we were able to help with. We sent them the data that we collected in the ROMs and takes, and walked them through retargeting, auto-tracking, and parameterizing. Now we have to wait for them to get some hands-on experience and come back to us with questions.



After the client left, we moved on and played with some of Shogun 1.2's new features (the update came out this week, I believe). These were the tutorial videos we followed to explore them:

- Shogun Post New Features 1.2

- 1 Import FBX and Creating DOF

- 2 Posing Skeleton



We opened up one of the takes we made last week to create a VSS, a VSK, and a new VDF, and to set up constraints before solving it in Shogun (shown in the images above). During this process, we still found a lot of kinks in 1.2's new features. It was something I enjoyed messing with, though, because I hadn't dug deep enough into Shogun's features to know how to do all of this on my own; I basically only knew how to capture within it and then export the FBX. Now I know the steps to go through when restructuring VSSs and VSKs on my own.


Troy also showed me how to overlay the skeleton on the video reference in the program during a performance (shown in the images above).


DAILY HOURS: 9:00 a.m. - 6:00 p.m. (9 hours)

CONSECUTIVE HOURS: (127 hours)
