March 5th-11th, 2018:

This week centered on Humane Tech Week, where I personally participated in two events and attended several others.

I helped out with:

1. Theater students doing PSA announcements (the theater students wrote monologues based on Humane Tech articles we all read, then we represented those monologues visually as drawings or videos).

2. Visual Perception of Human Motion (a talk by Vita Berezina-Blackburn on the use of human motion in the theater practice of Viewpoints).

I was inspired by the theater students' piece that stated:


"Lights up on a hospital room with two beds. Someone lies in a hospital bed surrounded by HD touch screens and a personal care robot. The robot strokes the patient's arm. In the next bed is a person surrounded by family, all of whom are on their phones or other bits of personal technology. It becomes clear that the technology has more capacity for empathy than the humans.

As the play moves forward, the technology communicates with each other and with the humans in the room. The people struggle to connect with one another as each engages in their own version of selfish escape. The play creates a world in which the technology can relate more to humans than humans can."

I decided to do a still image (which, if chosen, could later be animated or turned into a poster) of the hospital room scene with the empathetic robot and the distracted family and friends.

This piece stood out to me the most for how we aren't using technology to become better humans, but rather passing that task off to another piece of technology to do the work for us. This doesn't show 'humane' feeling; instead, it detaches us from reality.

For Vita's talk on human motion, my primary job at first was to help set up the motion lab (calibration, getting actors into the suits), and then I became an audience member myself.

I watched Jeanine Thompson's students perform Viewpoints and then analyze how the work translated into motion capture. Are they held back by being in these suits? Is seeing their data in abstract forms harder to learn from? Did they compare themselves to each other when looking back at the data, and discover who is who?

It was a good observational study of human movement and genuine fluidity, and seeing that movement captured in technology made it an educational experience.

I later had a conversation with Jeanine about how to direct motion capture (specifically for the VR Ruby project), and she is going to assist me in the weeks ahead with directing performance in motion capture.
