The first demonstration consisted of two actors performing as they would in any theatre production, without any captioning.
The same script was then presented as a hard of hearing person might experience it, with the actors whispering their lines inaudibly.
Building on the previous experiment, the actors were then accompanied by standard television captioning, to further convey the deaf and hard of hearing experience.
In this clip the actors continued to speak inaudibly and were accompanied by kinetic typography displayed on screens behind them. The use of kinetic typography instead of standard captioning allowed the audience to receive emotional and visual information that standard captioning omits.
In this demonstration, both the actors and the kinetic typography were matched emotionally to bring the spoken dialogue to life.
A new script was presented with both actors accompanied by kinetic typography projected behind them.
One actor was removed from the presentation, leaving a single actor to interact with the typographic actor on screen. The typographic actor was accompanied by a pre-recorded audio track.
The audio track for the typographic actor was removed, leaving the actor to respond to the silent screen.
Two separate fonts were used to differentiate the two typographic actors. A bold sans serif font was chosen for the first character to help personify her commanding personality. A slightly smaller, childish font was chosen to reflect the second character’s personality. Certain words in each line were emphasized to deliver additional visual and emotional content.
Another recorded dialogue was animated using the same characters from the previous script. The overall size of the text was increased to make better use of space.
Working with a research assistant over the course of the summer, I re-recorded the previous scripts with increased dramatic intonation. Attempts to visually separate the two characters were made using drop shadows, new fonts, and varied type sizes.
The use of effects was carefully considered to deliver the correct visual and emotional message. Certain movements, like shaking, were used to emphasize emotions such as anger, while decreased opacity reflected intonation and desperation.
To separate the typographic characters further, each character was given a visual “trait.” One character’s dialog was accompanied by various words, while the other was distinguished by a glow effect.
Selected Bibliography of Published Work
Communicating Emotion with Animated Text. Visual Communication Journal, Vol. 8, No. 4, pp. 469-479 (2009). (co-authored with Sabrina Malik and Judy Waalen)
Artists, poets and engineers: bridging disciplines with kinetic typography. The International Journal of the Arts in Society, Vol. 1, No. 3, pp. 107-112. (co-authored with George Swede and Kevin Worthington)
Expressing Emotions Using Animated Text. 10th International Conference on Computers Helping People with Special Needs. Published in Springer’s Lecture Notes in Computer Science, Vol. 4061/2006, pp. 24-31. (co-authored with Raisa Rashid and Deborah Fels)
Haiku in Motion: Kinetic Typography Enhances Poetic Meaning. Humanities Conference 2005, Cambridge, England.
My interest in animated poetry stems from the relatively new field of kinetic typography. Kinetic typography, simply the animation of words through movement, can add meaning to words and extend the emotional experience of readers. My exploration of this medium began with animating haiku as “moving haigas.” It was a natural fit, inasmuch as haigas interpret and extend the meaning of a haiku; my moving haigas do the same by using the type itself to offer another layer of experience.
Random Dialogue was a preliminary step in exploring ways of animating text to support emotional content. Students were sent out to the Eaton Centre to gather snippets of overheard conversations and bring them back to the classroom where they were then edited into a soundtrack. The soundtrack was then given to a research assistant to animate text in such a manner as to explore vocal tone and texture. An interlude/question was added to focus the piece.
This performance picked up where the Stage 1 workshop left off. Whereas in Stage 1, the emphasis was on the testing of ideas, the goal for Stage 2 was picking the best solutions and incorporating them into telling a story. We were still experimenting with multiple modes of presentation; speech, animated text, video, signing—they all contributed, sometimes simultaneously, to the narrative. In order to facilitate a natural relationship between live and typographic characters, actors were fitted with custom wireless triggers that were used to cue video. This enabled the actors themselves to control pacing. Additionally, an infrared tracking system was used in other scenes for direct control over text.
The first stage of the project represented the embodiment of an idea that struck me more than two years ago: could this new “kinetic typography” that I was experimenting with be developed into a fully believable typographic character? Further, could this character be presented in a live production with other, human, characters? And if so, would the audience accept it as “real”?
Working with a colleague, playwright Sheldon Rosen, I set a goal for the first year of creating a workshop to test creative ideas and technology. Rosen created a half-hour piece of experimental theatre in which we explored the interaction between text and actors. We mounted six performances over four days. We experimented with multiple modes of presentation; speech, animated text, video, signing—all contributed, sometimes simultaneously, to telling a story. We also tested different methods of presenting the interplay between text and actor: manual cueing of animations, computer/human interaction through technologically modified props, and a combination of live and pre-taped dialogue. Each performance was followed by a question and answer session, which was videotaped for future reference. Audience reactions were overwhelmingly positive, though by no means consistent. They had little trouble following the text/actor interaction; indeed, some of the preferred segments were those in which certain characters were presented as type only.
Working with a collaborative team at Emily Carr University of Art and Design, we used the school’s motion capture system to feed data from a dancer’s movements into a Max patch, which allowed both the triggering of specific video events and direct control of individual typographic features or movements.
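The kind of mapping such a patch performs can be sketched in code. The following is an illustrative assumption about one plausible mapping (the joint names, value ranges, and point sizes are hypothetical, and this is not the actual Max patch): a normalized rescaling from a tracked joint position to a typographic parameter.

```python
# Hypothetical sketch of a mocap-to-typography mapping. Assumes each frame
# delivers a joint position in metres; all ranges are illustrative.

def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale value from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

def hand_height_to_type_size(hand_y_m, min_pt=12.0, max_pt=96.0):
    # A raised hand (about 2 m) yields large type; a lowered hand (about
    # 0.5 m) yields small type, giving the dancer direct control over text.
    return map_range(hand_y_m, 0.5, 2.0, min_pt, max_pt)
```

Running such a function once per captured frame would let a continuous gesture drive a continuous typographic change, while threshold crossings on the same data could trigger discrete video events.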
Working with a collaborative team at the School of Interactive Arts and Technology at Simon Fraser University, we considered how dancers might interact directly with text. The challenge I posed for the group was to use technology that was portable and scalable, allowing us to present a final performance at any venue.