As the sixth and final Theory-Practice Collaboratory of the 2014-2015 academic year, Collaboratory #6: Surface Effects — a mini-symposium featuring dynamic participatory workshops with Jon McKenzie of DesignLab and Shuxing Fan, Dan Lisowski, and Kevin Ponto of the ALICE Project — activated exciting possibilities for multimedia performance. Both workshops opened space for participants to explore text, image, and environment at the interface of physicality and digitality, and to think through new forms of communicating research in academic and theatrical contexts.
Jon McKenzie's workshop, “Three Act Theory: A Smart Media Workshop,” encouraged us to translate our academic research into short, engaging, multimedia presentations. We practiced visualizing our arguments and began to discover clearer and more elegant narrative structures for our research, sharpening our live presentation skills in the process. Jon first discussed the context of his research — performance stratum, performance paradigms (cultural performance, technical performance, organizational performance), and performance-performative blocks — and his work in DesignLab with democratizing digitality and design by training students to use “smart media,” emerging scholarly genres that combine literacy, orality, and digitality. We watched an excerpt from a documentary about pollution in China, Under the Dome, and discussed its use of storytelling and visuals — documentary footage, interviews with residents, a montage of images of the sky — to communicate its message. We next watched a short video made by a master's student in Life Sciences Communication, which remediated research about the impact of AIDS on black men, combining spoken-word poetry and images of graffiti with statistics and personal narrative. Jon introduced two frameworks with which to analyze these new media texts: C-A-T, an object-centered paradigm that breaks down the conceptual, aesthetic, and technical aspects of a piece, and UX, an audience-centered paradigm that breaks down its experience design (impact), information architecture (structure), and information design (look and feel). We next looked at a film made by a graduate student in Folklore Studies, which, with black-and-white photographs, voiceover, and instrumental music, both analyzes and recreates the aesthetic strategies of the collaborative online creation of the Slenderman legend. Finally, we looked at a three-stage remediation of a work about noise and silence in the work of John Cage.
The project began as a traditional academic paper — 8.5×11 white paper, one-inch margins, and black 12-point Times New Roman font, a form so familiar, as Jon pointed out, that we no longer “see” it — and was then remediated into a graphic essay with an image/text track, a nonlinear structure, and multiple entry points on each page. The project was next translated into a video essay, which layered image, visual text, spoken text, and music. We discussed the ways each translation added new layers of media to change the reader/viewer's experience of the work; the ideas became increasingly embodied and visceral. Interestingly, the graphic essay added an element of nonlinearity and viewer co-creation, in that the viewer could interact with it in multiple ways, while the video in some ways returned to the linear structure of the essay. All of these examples of smart media encourage us to explore the embodied elements of cognition, the emotional elements of ideation, and the sensory elements of analysis in a dynamic integration of information and experience, a model for 21st-century public humanities. We discussed the ways these techniques have been used in advertising for decades; as scholars, we can better understand and use them to communicate our research more effectively to a wider public. Following our discussion and analysis of these models, Jon introduced Nancy Duarte's sparkline paradigm (what is, what could be, call to action) and invited us to design and deliver a lightning round of three-part multimodal presentations! The design constraints: 30 minutes to design the presentation, 2 minutes to deliver it, 3 images. We each got to work, collectively and individually inspired to quickly make something new.
The results were exciting: Laima Mikaliukas gave an artist's talk, with black-and-white photographs, about her material process with pewter and tin objects inspired by empathetic and affective experiences; Laura Barbata gave a multi-sensory performance about beauty, dignity, and Julia Pastrana, with text and image projected onto her body and an invitation to the audience to close their eyes and imagine. We left Jon's workshop armed with analytical tools and creative inspiration to keep exploring the possibilities of smart media!
Shuxing Fan, Dan Lisowski, and Kevin Ponto's workshop, “Behind the Curtain: ALICE Project,” gave participants a backstage view of the mechanics at work in their collaborative multimedia production-in-process. The ALICE Project experiments with an Augmented Live Interactively Controlled Environment created through a combination of theatre technologies, computer science, scene design, video game design, and 3D modeling and animation. According to Shuxing, Dan, and Kevin, “By integrating video projection, entertainment automation, motion capture, and virtual reality technologies together, we aim to enable new possibilities in live performance and enhance the audience's experience.” They chose Alice in Wonderland as the story for the project because its fictional world — a shrinking, enlarging, surreal environment — is well suited to a digital environment. In an open-ended demo and discussion from both sides of the screen, they showed us the features of the current prototype. Sensors detect the actor's movement and synchronize it with the projected world, allowing for an interactive experience within the digital environment. One fascinating feature of the project is that it disrupts the carefully staged linear timing of a traditional stage production — here the action of the performance is directed by the actor, who can interact with the environment in a variety of ways and at her own pace. The world of the play mirrors the user-directed structure of a video game, and, in fact, the developers used the Unity game engine to build it. A theatrical production structured like a video game has fascinating implications for theatre, game, and systems theory and practice.
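To make the performer-driven interaction concrete, here is a very simplified sketch of the kind of loop such a system implies: tracked actor position drives the state of the projected environment each frame. All names, thresholds, and cues below are illustrative assumptions, not the ALICE Project's actual code.

```python
from dataclasses import dataclass

@dataclass
class ActorPosition:
    """Hypothetical motion-capture reading for one frame."""
    x: float  # stage left/right offset from screen centre, in metres
    y: float  # distance from the projection surface, in metres

def scene_response(pos: ActorPosition) -> str:
    """Map the actor's tracked position to an environment cue.

    The environment reacts to the performer rather than following a
    fixed timeline, so pacing is set by the actor.
    """
    if pos.y < 0.5:
        return "grow"      # actor approaches the screen: Wonderland enlarges
    if pos.x < -1.0:
        return "pan-left"  # actor crosses stage left: the scene follows
    if pos.x > 1.0:
        return "pan-right"
    return "idle"          # no trigger: the world waits for the performer

# Each frame, new motion-capture data would update the position and the
# renderer would apply the returned cue.
print(scene_response(ActorPosition(x=0.0, y=0.3)))  # grow
```

The point of the sketch is the inversion it makes visible: instead of the actor hitting marks on a pre-timed cue sheet, the cue sheet is a function of where the actor is, which is what lets the production unfold at the performer's own pace.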
And the collaborative process at work — Kevin, Shuxing, and Dan created the project with funding from WARF's Interdisciplinary Research Competition — is a promising model for innovative research at UW, traversing the arts, sciences, and humanities to explore questions and possibilities beyond the scope of a single discipline.