I wish to show people the traces they leave behind,
letting their bodies become a brush and allowing them to paint with their movements.
I wish to give them the tools to unleash their creativity.
I wish them to savor the joy of dancing.
The Alpha version of the Paint Dance project (shown above) marks the core technical challenges as complete; work continues on enriching the graphical appearance as well as the user interface and experience.

This version uses the deep learning model OpenPose for pose estimation and is implemented in the Unity engine for high-quality graphics.
Initial Idea and Motivation
Starting dance with ballet conditioned me to attend to lines. In ballet, everything is about alignment: the lines you make from the tips of your fingers down to your pointed toes, the imaginary line you follow with your gaze, the curves you make with your body, and the circles you make by twisting and turning. I have explored generalizing this concept by practicing contemporary dance and amusing myself with geometric movements, emphasizing the traces our bodies leave behind, as if we were moving through a solid material and shaping it as we move.

I have found dancing to be a way of creating forms and shapes, an exploration of the space surrounding us, leaving traces of our existence and imprinting our mark. The question is how to show this to other people if they are not paying attention. How do you explain movement when its details go unnoticed? It is difficult for the untrained eye to perceive the small details of dance because of their fleeting nature. I want to extend these moments by visualizing them, so they can be fully appreciated. Preserving these moments would be akin to a recording of not just the finished artwork but also the process of making it, leading to possibilities limited only by the audience's creativity. I believe this would go a long way toward making people more aware of their motion, and it would allow me to delve deeper into the process of understanding dance.
The Process
I started the project with the Kinect, which came with the usual problems of noisy pose estimation. The graphics were implemented in Processing and kept simple, as the main focus at this stage was the technical issue of extracting a clean pose estimation signal. The overall results were acceptable but not satisfying: the Kinect lost track of joints in slightly complex positions (such as sitting or turning) and sometimes confused the right and left sides of the body. Even when the pose was estimated correctly, the jittering and instability prevented a clean output.
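For context, jitter like this is usually tamed with temporal smoothing of the joint positions. The sketch below is only an illustration of that idea in Python, not the project's actual code (the project ultimately addressed the problem by switching to OpenPose, as described next); the joint names and the smoothing factor are assumptions.

```python
# Illustrative only: an exponential moving average over noisy 2D joint
# positions, the kind of smoothing one might try against Kinect jitter.
class JointSmoother:
    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha   # lower alpha = smoother but laggier response
        self.state = {}      # joint name -> last smoothed (x, y)

    def update(self, joints: dict) -> dict:
        """joints maps a joint name to its raw (x, y) estimate for this frame."""
        smoothed = {}
        for name, (x, y) in joints.items():
            if name not in self.state:
                smoothed[name] = (x, y)
            else:
                px, py = self.state[name]
                smoothed[name] = (px + self.alpha * (x - px),
                                  py + self.alpha * (y - py))
        self.state.update(smoothed)
        return smoothed

# Usage: feed each frame's raw joints and paint with the smoothed positions.
smoother = JointSmoother(alpha=0.3)
frame_joints = smoother.update({"right_hand": (412.0, 188.0), "left_foot": (240.0, 655.0)})
```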

You can see some of the results here: the right-hand side shows the user, and the left-hand side shows the painting resulting from that movement.
To solve the Kinect's problems, I switched to OpenPose, a deep learning model for pose estimation. Removing the dedicated hardware from the project gave me the advantage of being able to showcase it online. To enhance the graphics quality, I decided to use the Unity game engine, which also allowed me to export the project as a WebGL build and put it online for anyone to use.
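As a rough illustration of what the pose data looks like on its way to the renderer: OpenPose can write its detections to per-frame JSON files (for example via its --write_json option), where each detected person carries a flat list of x, y, confidence triplets. The Python sketch below shows how such a file might be read; the file name, the confidence threshold, and the idea of feeding Unity this way are assumptions, since the post does not detail the integration.

```python
# A minimal sketch, assuming OpenPose's per-frame JSON output (--write_json):
# each detected person has a flat pose_keypoints_2d list of x, y, confidence
# triplets. The file name and confidence threshold here are illustrative.
import json

def load_keypoints(path: str, min_confidence: float = 0.1):
    """Return (x, y) keypoints for the first detected person,
    with None wherever the detection confidence is too low."""
    with open(path) as f:
        frame = json.load(f)
    if not frame.get("people"):
        return []
    flat = frame["people"][0]["pose_keypoints_2d"]
    points = []
    for i in range(0, len(flat), 3):
        x, y, c = flat[i], flat[i + 1], flat[i + 2]
        points.append((x, y) if c >= min_confidence else None)
    return points

keypoints = load_keypoints("frame_000000000000_keypoints.json")
```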


For a straightforward and user-friendly application, I wanted to add a thorough menu that lets the user choose different body parts and assign brushes to them. This led me to learn about user experience design. The menu shows an outline of a body with a button for each body part. Clicking any of these buttons, or any combination of them, pops up a second menu showing the available brushes, and the user can then assign a brush to the selected body parts.
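Conceptually, this menu flow reduces to a mapping from the selected body parts to a chosen brush. The project itself is built in Unity, so the Python sketch below only illustrates that underlying assignment model; the part and brush names are hypothetical.

```python
# Hypothetical sketch of the assignment model behind the menu: the user selects
# one or more body parts, then picks a brush, and the brush is stored per part.
BRUSHES = {"ribbon", "spray", "glow"}        # illustrative brush names

class BrushAssignments:
    def __init__(self):
        self.by_part = {}                    # body part name -> brush name

    def assign(self, parts: list[str], brush: str) -> None:
        if brush not in BRUSHES:
            raise ValueError(f"unknown brush: {brush}")
        for part in parts:
            self.by_part[part] = brush       # later frames paint this part with it

assignments = BrushAssignments()
assignments.assign(["right_hand", "left_hand"], "ribbon")
assignments.assign(["head"], "glow")
```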

Menu options (from left): Simple layout; Advanced options; Body part buttons and brush menu; Line buttons and effects menu

Since each body part is represented by a single point, I saw an opportunity to expand my vision by introducing lines to the application, which could of course be extended even further to polygons. While an interesting idea, it introduced challenges to the UX design, as I did not want to confuse the user. To keep things simple, I reused the same body layout and added line buttons that pop up a second menu for the particular line effect. To tie the two menus together, I used a slider.
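Continuing the sketch above, a line can be modelled as a pair of body-part points carrying its own effect, kept alongside the single-point brush assignments; again, the part and effect names are assumptions for illustration only.

```python
# A "line" is a pair of body-part points with its own effect, stored separately
# from the single-point brush assignments sketched earlier. Names are illustrative.
from dataclasses import dataclass

@dataclass
class LineAssignment:
    part_a: str
    part_b: str
    effect: str                  # effect chosen in the second (line) menu

line_assignments = [
    LineAssignment("right_hand", "left_hand", "trail"),
    LineAssignment("right_foot", "left_foot", "fill"),
]
```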


The user also has a choice of backgrounds, including no background, which shows the camera feed. For a better understanding of the estimated pose, they can also choose to see the output skeleton lines. You can see different variations below.

Background and Body part line options
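To make the background and overlay options concrete, here is a hedged sketch in Python using OpenCV rather than the project's Unity renderer: it draws on either the camera frame or an empty canvas and can optionally overlay a few skeleton bones between keypoints. The bone index pairs are an assumed illustrative subset, not the full skeleton the application uses.

```python
# A sketch of the background / overlay toggles using OpenCV (not the project's
# Unity renderer): draw on the camera frame or a flat background, and optionally
# overlay a few skeleton bones between keypoints.
import numpy as np
import cv2

# Illustrative subset of bones as index pairs into the keypoint list (assumed indices).
BONES = [(2, 3), (3, 4), (5, 6), (6, 7)]

def render(camera_frame, keypoints, show_camera=True, show_skeleton=True):
    canvas = camera_frame.copy() if show_camera else np.zeros_like(camera_frame)
    if show_skeleton:
        for a, b in BONES:
            if keypoints[a] is not None and keypoints[b] is not None:
                pa = tuple(int(v) for v in keypoints[a])
                pb = tuple(int(v) for v in keypoints[b])
                cv2.line(canvas, pa, pb, (255, 255, 255), 2)
    return canvas
```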
