/QIAN CHANNING MU/


Project

When You Look at Her

Tools

TouchDesigner
ComfyUI
Midjourney & Stable Diffusion
Leap Motion
Mapping

Description

"When You Look at Her" is an interactive installation and a conceptual extension of "The Flower Vase Girl." Through real-time facial tracking, AI-driven transformations, and gesture-based interactions, the project immerses viewers in the oppressive reality of exploited performers. This installation uses immersive interactivity to guide participants from passive spectatorship to self-reflective action.


Overview



Every day, we spectate. We witness the suffering of others, often remaining silent or neutral. What we fail to realize is that this neutrality makes us complicit—and that one day, the observer might become the observed.

"When You Look at Her" examines complicity and cycles of oppression, highlighting how passive spectatorship sustains exploitation. Initially, the installation presents an AI-generated "flower vase girl" face, symbolizing both female suffering and the commodification of women. As applause—a symbolic act of voyeuristic engagement—grows louder, this face gradually morphs into the viewer’s own, culminating in a shattered and distorted reflection.

By connecting historical gender oppression with contemporary societal dynamics, the work positions participants within the dual roles of "observer" and "observed." This immersive experience challenges the moral stance of neutrality, emphasizing that passive spectatorship is not harmless but a critical element of oppressive systems.

At its core, "When You Look at Her" is a challenge to individual accountability. It poses a fundamental question: in a society that commodifies and consumes female suffering, can empathy lead to action, or are we trapped in an endless cycle of voyeurism and silence?






Face Image Generation



I wanted the audience to truly feel the predicament of the "Flower Vase Girl," so I came up with an idea: what if I could transform the viewer’s face into thousands of Flower Vase Girls in real time? To make this happen, I took the original facial images of the Flower Vase Girl that I had created with Midjourney and built a workflow in ComfyUI around them.




Using a large diffusion model together with positive and negative prompts, the workflow randomly generates extended variations of the Flower Vase Girl’s facial images.
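As a rough sketch of how this randomization can be automated, the snippet below queues a workflow (exported from ComfyUI in its API JSON format) against a local ComfyUI server over its HTTP API, randomizing the sampler seed on each run so every pass yields a new face variant. The node IDs, prompt text, and file name are placeholders, not the exact workflow used here.

```python
import json
import random
import urllib.request

# Load a ComfyUI workflow exported in API format (node IDs below are placeholders).
with open("flower_vase_girl_workflow.json") as f:
    workflow = json.load(f)

# Randomize the sampler seed so each queued run produces a new face variant.
workflow["3"]["inputs"]["seed"] = random.randint(0, 2**31 - 1)

# Positive and negative prompts steer the variations around the original Midjourney faces.
workflow["6"]["inputs"]["text"] = "portrait of the flower vase girl, porcelain skin, stage lighting"
workflow["7"]["inputs"]["text"] = "blurry, deformed, extra limbs, text, watermark"

# Queue the job on a local ComfyUI server (default address 127.0.0.1:8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())
```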


After decoding, Stable Diffusion sends the image to TouchDesigner.
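The handoff into TouchDesigner can be done in several ways. A minimal sketch, assuming ComfyUI writes its decoded frames into an output folder, is a small script inside TouchDesigner that keeps a Movie File In TOP pointed at the newest image; the operator name ('sd_image') and the folder path are assumptions.

```python
# Runs inside TouchDesigner (e.g. called periodically from an Execute DAT):
# watch the folder ComfyUI writes to and load the newest image into a
# Movie File In TOP. Operator name and path are placeholders.
import glob
import os

def load_latest_face():
    files = glob.glob("C:/ComfyUI/output/*.png")
    if not files:
        return
    newest = max(files, key=os.path.getmtime)
    movie_in = op('sd_image')              # Movie File In TOP receiving the SD output
    if movie_in.par.file.eval() != newest:
        movie_in.par.file = newest         # changing the file path reloads the texture
```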




Audience Face Generation


To transform the audience’s face into the "Flower Vase Girl's" face in real time, I built a workflow in TouchDesigner. The process starts by capturing the viewer’s facial data through a camera and mapping it onto a facial model (the blue part). At the same time, I apply the image generated by Stable Diffusion to the UV map of the face (the purple part). These textures are then assigned to a yellow material, which is finally applied to the facial model to complete the face replacement.
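As a sketch, the same hookup can be expressed in TouchDesigner Python roughly as follows; the operator names ('face_mat', 'sd_face_uv', 'face_geo') are assumptions standing in for the actual network.

```python
# Sketch of the material hookup described above (operator names are placeholders).
face_mat = op('face_mat')              # the "yellow" material (e.g. a Phong MAT)
face_mat.par.colormap = 'sd_face_uv'   # Stable Diffusion image remapped to the face UVs (the "purple" part)

face_geo = op('face_geo')              # the tracked facial model (the "blue" part)
face_geo.par.material = 'face_mat'     # apply the material to complete the face replacement
```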





Once the face is generated, it’s bound to the camera feed, ensuring that the viewer’s face is instantly mapped onto the "Flower Vase Girl's" face.



Interactive Element



What I’ve always wanted to convey is that the audience isn’t just a passive observer—they’re participants. Sometimes, even a neutral stance can be a form of complicity. When the audience watches the "Flower Vase Girl" performance, the applause and cheers that seem lively and celebratory only push the female figure deeper into the abyss. So, in the TouchDesigner workflow, I extracted the volume data from the microphone. Each clap triggers a different version of the "Flower Vase Girl’s" face, until it eventually turns into the viewer’s own face, looping continuously.
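A minimal sketch of that clap detection, assuming the microphone volume arrives as a single channel (for example from an Analyze CHOP) watched by a CHOP Execute DAT; the operator names and the threshold value are placeholders.

```python
# CHOP Execute DAT callback: fires whenever the analyzed microphone level changes.
# A rising edge past the threshold counts as one clap and advances a counter
# that later selects which face variant is shown.
THRESHOLD = 0.4   # assumed loudness threshold, tuned by ear during testing

def onValueChange(channel, sampleIndex, val, prev):
    if prev < THRESHOLD <= val:                       # rising edge = one clap
        counter = op('clap_count')                    # Constant CHOP holding the running count
        counter.par.value0 = counter.par.value0.eval() + 1
    return
```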

The process involves one original image, four Stable Diffusion-generated faces, and a camera feed, making a total of six visuals connected to a Switch node. The audio data drives the Switch index, cycling through the six inputs so that the display alternates between the original, the generated faces, and the camera feed, highlighting the audience's involvement in the experience.
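In practice this can be a single parameter expression on the Switch node's Index, as in the sketch below; 'clap_count' and its channel name are the placeholder counter from the sketch above.

```python
# Parameter expression sketch for the Switch TOP's Index parameter:
# wrap the running clap count over the six inputs (the original image,
# the four generated faces, and the camera feed) so the cycle loops endlessly.
int(op('clap_count')['chan1'].eval()) % 6
```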





The loop only stops once the audience takes action, triggered by Leap Motion. Depending on the distance of the viewer’s hand, the "Flower Vase Girl’s" face on the screen stops cycling and instead begins to scatter into particles. To achieve this in TouchDesigner, I converted the image data into RGB values and mapped them to the Z-axis of the flat particles, creating a sense of depth as the image fluctuates forward and backward. I then used the particle coordinates to generate a point cloud, enhancing the scattering effect.


The Leap Motion data is bound to the image’s RGB information, so hand gestures control the RGB values.
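A rough sketch of that mapping as a Script CHOP: it samples the face texture, turns each pixel's brightness into a Z offset, and scales the offset by the Leap Motion hand distance so the face scatters as the hand approaches. The operator names ('face_final', 'leap1') and the Leap channel name are assumptions.

```python
# Script CHOP cook callback producing tx/ty/tz channels for a point cloud.
import numpy as np

def onCook(scriptOp):
    scriptOp.clear()

    img = op('face_final').numpyArray()      # H x W x 4 float pixels from the face TOP
    step = 4                                 # subsample so the point cloud stays light
    pixels = img[::step, ::step]
    rows, cols = pixels.shape[0], pixels.shape[1]

    # XY grid in normalized space, Z driven by pixel brightness (RGB mean).
    ys, xs = np.mgrid[0:rows, 0:cols]
    brightness = pixels[..., :3].mean(axis=2)

    # Assumed Leap Motion channel: hand distance scales how far the points scatter.
    hand_z = op('leap1')['hand1:tz'].eval()
    scatter = max(0.0, 1.0 - abs(hand_z))

    scriptOp.numSamples = rows * cols
    for name, data in (('tx', xs / cols - 0.5),
                       ('ty', 0.5 - ys / rows),
                       ('tz', brightness * scatter)):
        chan = scriptOp.appendChan(name)
        chan.vals = data.flatten().tolist()
    return
```

The resulting tx/ty/tz channels can then drive instancing on a Geometry COMP to render the scattered face as a point cloud.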




Next Step

During user testing for this piece, I noticed that, unless guided, most viewers didn’t realize they were supposed to clap to interact. Moving forward, I plan to integrate facial capture of the audience into the Flower Vase Girl installation and project a second layer of content next to it. This projection will feature a street vendor calling out to the audience to watch the performance, his shouting gradually turning into a demand that pushes the audience to clap along with the crowd.

This adjustment will not only guide the interaction but also highlight how easily people can become "perpetrators" under social pressure, without ever questioning whether these behaviors are right in the first place.



Credits

A project by Qian Mu

Made with Stable Diffusion, Midjourney, ComfyUI, TouchDesigner, Leap Motion

