Primary Research Documentation | Part 1

Full tabulation on Google Sheets (in the Primary Research Plan tab)

Relation to project: Technology

Issue: How can I record and extract data points from a chosen action?

Motion Capture

Aim: Use motion capture to collect data points from different locations on the body.

I used the motion capture studio, which uses over 15 wall-mounted cameras to track motion and visualise it on screen.

Advantages

  • Tracks full-body motion very accurately.

  • Can generate a large dataset that can be used to manipulate a code sketch.

  • Has potential if the scope changes to a non-real-time installation.

Limitations

  • Captures only full-body movements using a set of trackers; smaller, more detailed gestures like hand or finger movements cannot be tracked.

  • The dataset generated is very large and will be difficult to work with.

  • Requires the use of the motion capture studio. Bringing guests into the studio to move and interact with each other will result in more performative motion, as opposed to natural gestures and behaviour.

  • The software itself cannot render forms affected by motion in real time. Real-time animations are possible using Unity, but this will still require the mocap studio, cameras, and computers.

TouchDesigner

Aim: Document the potential of TouchDesigner as a real-time data visualisation platform.

Advantages

  • Can track motion and create abstract forms with data generated in the software.

Limitations

  • Captured motion will need to be processed first, so it cannot be used for a real-time installation.

ml5.js

Aim: Use a machine learning library to explore the potential of existing ML tools for motion tracking.

I used the PoseNet model in ml5.js to capture motion and generate tracking points, which I then used to alter the parameters of a sketch made with p5.js.
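
The code below is a minimal sketch of this kind of setup, not the actual prototype: it assumes p5.js and ml5.js 0.x (which bundles PoseNet) are loaded on the page, the keypoint name nose comes from the PoseNet output format, and the mapping of its position to colour and size is an arbitrary choice made for illustration.

    // Minimal example: one PoseNet keypoint (the nose) altering parameters of a p5.js sketch.
    // Assumes p5.js and ml5.js 0.x (which bundles PoseNet) are loaded via script tags.
    let video;
    let poseNet;
    let pose = null;

    function setup() {
      createCanvas(640, 480);
      video = createCapture(VIDEO);   // webcam feed
      video.size(width, height);
      video.hide();
      poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
      poseNet.on('pose', (results) => {
        if (results.length > 0) pose = results[0].pose;   // keep the most recent pose
      });
    }

    function draw() {
      background(20);
      if (pose) {
        // The nose position drives two parameters: horizontal position sets the hue,
        // vertical position sets the size of the ellipse.
        colorMode(HSB, 360, 100, 100);
        const hue = map(pose.nose.x, 0, width, 0, 360);
        const size = map(pose.nose.y, 0, height, 20, 200);
        noStroke();
        fill(hue, 80, 90);
        ellipse(pose.nose.x, pose.nose.y, size);
      }
    }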

Advantages

  • Able to track motion in real time and manipulate a rudimentary code sketch using the data generated.

  • Has potential for more detailed, advanced motion tracking and art generation.

  • Allows you to train your own neural network (a minimal example follows this list).

  • Can be integrated with various JavaScript platforms and libraries.
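
As a rough sketch of the last point, training a custom classifier with ml5's neuralNetwork might look like the following. This assumes ml5.js 0.x; the gesture labels ('wave', 'rest') and the input values are placeholders for illustration, not data from this project.

    // Minimal sketch: training a small classifier with ml5.neuralNetwork (ml5.js 0.x).
    // The inputs and labels here are placeholder values; in practice you would
    // collect many samples, e.g. flattened keypoint coordinates from PoseNet.
    let nn;

    function setup() {
      noCanvas();
      nn = ml5.neuralNetwork({ task: 'classification', debug: true });

      // Add labelled training samples (placeholder values).
      nn.addData([0.2, 0.4, 0.6, 0.1], ['wave']);
      nn.addData([0.3, 0.5, 0.7, 0.2], ['wave']);
      nn.addData([0.5, 0.5, 0.5, 0.5], ['rest']);
      nn.addData([0.6, 0.4, 0.5, 0.6], ['rest']);

      nn.normalizeData();
      nn.train({ epochs: 32 }, () => {
        // Once training finishes, classify a new set of values.
        nn.classify([0.25, 0.38, 0.61, 0.12], (error, results) => {
          if (!error) console.log(results[0].label, results[0].confidence);
        });
      });
    }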

Limitations

  • Processing power is limited because it is a web-based library running in the browser.

  • It is not yet clear how to use different tracked points to alter different parameters.

Try the prototype here (you will need access to a webcam). Prototype test below:
