In UNITY, one of the most important features at the heart of the project is the analysis, representation and comparison of the intensity of each user's kinetic response to rhythm. The aim is to understand how dancers and ravers respond affectively to music and join together under the same rhythmic pattern, synchronizing and merging into a single entity. In musicology, this synchronization of movements is called entrainment, and it becomes possible thanks to particular features of the musical, social and physical environments.
After extensive research through the work of various authors and researchers in social cognition, psychology and music theory, including the likes of Joel Krueger, Maria Witek and Vincent Cheung, I needed to translate theory into real numbers. In order to quantify and compare the aforementioned values, I delved into the work of Rudolf Laban, one of the most influential theorists of dance and body movement and the creator of Labanotation, a structured method to write, analyse and dictate human movement and dance choreography.
In particular, the Laban Effort & Shape theory outlines a methodology to break systemic movement down into two main elements: Effort and Shape. As opposed to the Body and Space components, which describe the structural characteristics of a movement, Effort and Shape define its qualitative aspects, allowing us to "encode the use of energy through four dimensions: Weight, Time, Space and Flow".
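To make the four Effort dimensions concrete, here is a minimal sketch of how they could be represented in code. The numeric convention (each factor running from an "indulging" pole at -1.0 to a "fighting" pole at +1.0) is a common way to quantify Laban's qualities, but the specific encoding below is an assumption for illustration, not the project's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Effort:
    """One movement's Effort qualities, each from -1.0 (indulging)
    to +1.0 (fighting). The numeric range is an illustrative choice."""
    weight: float  # light (-1) ... strong (+1)
    time: float    # sustained (-1) ... sudden (+1)
    space: float   # indirect (-1) ... direct (+1)
    flow: float    # free (-1) ... bound (+1)

# A strong, sudden, direct, mostly bound movement - a "punch" in Laban's
# classic action-drive examples.
punch = Effort(weight=1.0, time=1.0, space=1.0, flow=0.8)
```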
Together with RCA tutor and engineer Thomas Deacon, for UNITY (2020) we developed a custom algorithm to measure the energy of human movements and their affective response to music. The Weight component of the Laban Effort and Shape theory was applied to the algorithm so as to attach different weight coefficients to the different parts of the body detected by the camera: core (chest), hips, arms and legs. In this way, we were able to measure and quantify the velocity of the movements of each dancer's arms, chest and legs, weight each according to the weight formula, and ultimately extract the velocity aggregate. In other words, we were able to measure the intensity of each dancer and visualise it for the duration of the installation. For testing purposes, we used imported mannequins and a sphere, both responding to the velocity aggregate coefficient.
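The algorithm described above can be sketched roughly as follows: per-frame joint speeds are combined into one intensity value through per-body-part weight coefficients. Note that the coefficient values, function names and frame rate here are illustrative assumptions; the actual coefficients used in UNITY are not reproduced here.

```python
import math

# Hypothetical per-body-part weight coefficients (illustrative values only;
# in the Laban sense, the chest/core carries the most weight).
WEIGHTS = {"chest": 0.4, "hips": 0.3, "arms": 0.2, "legs": 0.1}

def joint_speed(p_prev, p_curr, dt):
    """Speed of a tracked joint between two frames (positions in metres)."""
    return math.dist(p_prev, p_curr) / dt

def velocity_aggregate(part_speeds):
    """Weighted sum of per-part speeds -> one intensity value per dancer."""
    return sum(WEIGHTS[part] * v for part, v in part_speeds.items())

# Two frames 1/30 s apart: the chest drifts 3 cm, the arms sweep 30 cm.
dt = 1 / 30
sample = {
    "chest": joint_speed((0.0, 1.4, 0.0), (0.03, 1.4, 0.0), dt),  # 0.9 m/s
    "hips": 0.0,
    "arms": joint_speed((0.5, 1.2, 0.0), (0.5, 1.5, 0.0), dt),    # 9.0 m/s
    "legs": 0.0,
}
intensity = velocity_aggregate(sample)  # 0.4*0.9 + 0.2*9.0 = 2.16
```

A bigger arm sweep or a faster chest movement raises the aggregate, so a dancer moving their whole body energetically scores higher than one only waving a hand.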
By applying the algorithm to UNITY's motion capture system, we could record, extract and visualise the aggregate outcome in real time. The mocap sample avatar was assigned the corresponding weight coefficients, as seen in the next piece of footage. The resulting velocity aggregate was then applied to an external mesh (a sphere), rendered as particles, which reacted to the aggregate coefficient by growing and shrinking. In other words, the higher the aggregate coefficient - and therefore the intensity of the dance - the bigger the sphere!
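The mapping from aggregate coefficient to sphere size could be sketched as below. The linear gain, the clamp and the smoothing step are assumptions added for the example (smoothing is a standard way to keep a real-time visual from jittering on noisy frames), not a description of the installation's actual render code.

```python
def sphere_scale(aggregate, base=1.0, gain=0.5, max_scale=3.0):
    """Map the velocity aggregate to a sphere radius: higher intensity,
    bigger sphere, clamped so noisy frames cannot blow it up."""
    return min(base + gain * aggregate, max_scale)

def smooth(prev, target, alpha=0.2):
    """Exponential smoothing so the sphere grows and shrinks gradually
    between frames instead of snapping to each new value."""
    return prev + alpha * (target - prev)

# Feed a short run of aggregate values through the mapping, frame by frame.
scale = 1.0
for agg in [0.0, 2.16, 2.16, 0.5]:
    scale = smooth(scale, sphere_scale(agg))
```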
As a final result, the sphere was placed between two graphs, so as to compare how dance and movement reflect the music's frequencies. The motion capture session recorded here displays the workflow we adopted.
Technical Support - Thomas Deacon (Specialist Technical Instructor in XR, Royal College of Art London)