360 Production


Brief: To explore how to capture and utilise volumetric data to digitise a dance performance that could be viewed in VR.

Background and Introduction

This project is an exploration of how we could capture volumetric (3D) data from performers and dancers using tools and equipment that are readily available and within a set budget. It is important that this process is streamlined and as unobtrusive as possible for the performers. At the time of writing we do not have access to volumetric cameras, so the decision was made early on to explore similar techniques and tools, in this case motion capture and 3D scanning in combination.

Whilst motion capture suits and markers would yield more reliable results, they are often costly and restrictive and would not suit all performers. We opted instead for a marker-less tracking solution using an array of cameras (iPhone XRs) synced together in real time, with video AI algorithms tracking common features such as legs, arms, and head.

The Process

We opted for marker-less tracking as our capture method, which meant multiple cameras needed to be synced together and processed by a cost-effective, readily available tool. We chose Move AI, based on its positive reviews and more accessible toolset compared with Rokoko Video, to allow for quick setup and processing.

In our test shoot we placed three iPhone XRs in a triangular orbit around the subject (ideally we would use more cameras, since more cameras mean a wider 'play' area and more subjects tracked at once), synced to an iPad acting as a remote monitor and controller for the application. The iPhones were networked to the host (the iPad) and positioned so that the whole subject was in frame in each lens and each camera was in line of sight of the others.
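As a quick sanity check on spacing (this is not part of the Move AI workflow, just a planning aid), a few lines of Python can compute evenly spaced camera positions around the subject; the radius and camera height below are placeholder values.

```python
import math

def camera_positions(radius_m, n_cameras=3, height_m=1.2):
    """Evenly space n cameras on a circle around the subject.

    Equal angular spacing (120 degrees for three phones) keeps each
    camera's view overlapping with its neighbours', which a
    marker-less solver needs in order to triangulate joints.
    """
    positions = []
    for i in range(n_cameras):
        angle = 2 * math.pi * i / n_cameras
        x = radius_m * math.cos(angle)
        y = radius_m * math.sin(angle)
        positions.append((round(x, 2), round(y, 2), height_m))
    return positions

print(camera_positions(radius_m=3.0))
# [(3.0, 0.0, 1.2), (-1.5, 2.6, 1.2), (-1.5, -2.6, 1.2)]
```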

The performer then calibrated the cameras by clapping and moving around to provide a sync point.
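Move AI derives the synchronisation from this itself, but the same clap trick can be reproduced manually by cross-correlating the audio tracks. A minimal sketch, assuming the audio has already been extracted as mono NumPy arrays at a shared sample rate:

```python
import numpy as np
from scipy.signal import correlate

def clap_offset(audio_a, audio_b, sample_rate):
    """Estimate the offset between two recordings of the same clap.

    A positive result means the clap appears `offset` seconds later
    in audio_a, i.e. camera A started recording earlier; trim that
    much from the start of A to align the two clips.
    """
    corr = correlate(audio_a, audio_b, mode="full")
    lag = np.argmax(corr) - (len(audio_b) - 1)
    return lag / sample_rate
```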

[Image: A person practising dance or yoga in a studio with black curtains, a tripod-mounted camera, and dance equipment along the walls.]

We aimed to test the tracking across multiple variations to see how areas such as occlusion would be handled and interpreted by the video-to-motion processing, so a variety of positions and motions were trialled. You may also notice in the pictures that we covered any reflective surfaces, so as not to confuse the video interpretation, which was looking for a humanoid structure.

This recording was then sent to the Move AI website via the host device, and approximately 15 minutes later the skeletal (mo-cap) data was processed. We opted to use the Mixamo skeletal structure for compatibility with other tools such as Blender and Unreal, which would be used further down the pipeline.
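To confirm the export follows the Mixamo naming convention, the FBX can be imported into Blender and its bone names listed from the scripting tab; the file path below is illustrative.

```python
# Run inside Blender's scripting tab.
import bpy

bpy.ops.import_scene.fbx(filepath="/path/to/motion.fbx")

# Mixamo-convention rigs prefix every bone with "mixamorig:".
armature = next(o for o in bpy.context.scene.objects if o.type == 'ARMATURE')
for bone in armature.data.bones:
    print(bone.name)   # e.g. mixamorig:Hips, mixamorig:Spine, ...
```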

[Image: A person in a grey hoodie and green trousers stretching on a stage with black curtains, seen on the screen of a tripod-mounted camera.]

This recording generated a skeletal framework, which we could then download from the Move AI website as a set of animation keyframes and a rigged structure to be used in modelling tools and game engines at a later stage. We could swap this skeletal rig onto any model, as long as the joint names remain the same.
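Where a model's rig doesn't already follow the Mixamo convention, a rename pass in Blender brings the joint names into line. A sketch with a deliberately small, hypothetical mapping table (extend it to cover the full skeleton):

```python
import bpy

# Hypothetical mapping from another rig's bone names to Mixamo names.
RENAME = {
    "hip": "mixamorig:Hips",
    "spine_01": "mixamorig:Spine",
    "head": "mixamorig:Head",
}

armature = bpy.data.objects["Armature"]  # name as imported; adjust to suit
for bone in list(armature.data.bones):
    if bone.name in RENAME:
        # Blender updates vertex groups and keyframe paths on rename.
        bone.name = RENAME[bone.name]
```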

[Image: A 3D point-cloud visualisation of a human figure with arms and legs spread, composed of green, yellow, and orange dots, surrounded by several small photographs and icons, with a blue control bar at the bottom.]

3D Scanning

Now that we had the motion data, we needed the actor's likeness captured and rendered in 3D. During our tests we used photogrammetry, due to equipment limitations at the time and to keep the workflow non-intrusive for the subject. Approximately 200 photos were taken and processed into a 3D model using RealityScan (an iPad application). This model would later be downloaded as an .fbx file and optimised to reduce its poly count and, in turn, the strain on the computing process.
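One way to do that optimisation in Blender is a Decimate modifier, which collapses edges down to a fraction of the original face count. A sketch, assuming the imported scan is the active object and that keeping roughly 10% of the polygons is acceptable:

```python
import bpy

# Assumes the RealityScan export is selected as the active object.
obj = bpy.context.active_object

# Decimate collapses edges until about ratio * original face count
# remains; 0.1 keeps roughly 10% of the polygons.
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.1
bpy.ops.object.modifier_apply(modifier=mod.name)

print(f"Faces after decimation: {len(obj.data.polygons)}")
```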

[Image: A 3D scan of a woman standing with arms outstretched, wearing a green and grey sweatshirt, grey leggings, and black shoes, against a white background.]

This 3D capture was not perfect, but it served its purpose for testing. We had the actor stand in a T-pose to make rigging easier when mapping the model to the motion capture skeletal data from the previous steps. After being optimised in Blender, the model was uploaded to the Mixamo website, where its character rigging tool was used to add key points (chin, wrists, knees, etc.); because we had exported the mo-cap data with the Mixamo skeletal structure, this step was made all the simpler. Once the rig was generated, we downloaded the updated model, now with an attached skeletal structure, back into Blender and assigned the skeletal animation keyframes to the rigged character.
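With matching bone names, attaching the mo-cap keyframes in Blender reduces to pointing the rigged scan's animation data at the imported action; the object and action names below are illustrative.

```python
import bpy

# Both files imported into the same scene: the Mixamo-rigged scan and
# the Move AI mo-cap take. Names here are placeholders.
rig = bpy.data.objects["ScanArmature"]
mocap_action = bpy.data.actions["DanceTake01"]

# Because both skeletons use mixamorig:* bone names, the action's
# keyframe channels resolve directly against the scan's rig.
rig.animation_data_create()
rig.animation_data.action = mocap_action
```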

[Image: The Mixamo interface showing a white 3D model of a female figure performing a 'Hip Hop Dancing' animation, with alternative dance poses listed on the left and sliders for style, intensity, overdrive, character arm-space, and total frames on the right, along with mirroring and download options.]
[Image: A 3D character rig with labelled body parts alongside a humanoid figure dressed in a modern outfit.]

Work still to do:

  • Improve 3D scanning workflow / reduce file sizes

  • Speed up remapping and troubleshooting (e.g. bone renaming, as sketched above)

  • Increase capture radius and number of actors

  • Import into different environments

Please check back as the project progresses for further process updates and notes.