Generating Shakespeare

First Folio Title Image-Video

Experiments with digitizing Shakespeare’s First Folio eventually led to using image-to-image and image-to-video generative AI. Starting from the book reveals a lot of potential ideas. With simple prompts in Kaiber AI, it was fairly difficult to explore the diversity of characters present in this 400-year-old book, which contains works like Macbeth, Othello and Antony and Cleopatra. It is therefore important to curate any generative AI output to get closer to the content you want it to generate.

Diversity of Shakespearean Characters

Exploring Shakespeare’s First Folio also triggers an exploration of the various characters in his plays. This rich variety includes fantastical characters from A Midsummer Night’s Dream as well as tragic ones from Othello.

Characters Popping Out of a Book Text-Image-Video

Multiple experiments with characters popping out of a book were explored using several generative AI text-to-image tools, then finalized with image-to-video. This gave me ideas for the potential motion of characters that could inspire an animation team. Experiments with Kaiber started to suggest characters popping out of the book itself and provided another dimension to consider for future explorations. Further experiments led to visualizations of characters not just popping out of the book but beginning to imply the different types of blocking that would occur as they jumped, danced, and smoked their way out of it.

What might we learn from these characters? What stories might they tell beyond the ones they were confined to in Shakespeare’s plays? What might the witches of Macbeth say? What would any character say to Shakespeare if they had a chance? What might Shakespeare say? From these very visual explorations there clearly came a need to interact with the text itself and see what actors, a director, and a dramaturg might co-create when confronted with text generated in the style of Shakespeare by a large language model.
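The same text-to-image, then image-to-video, workflow can also be approximated with open-source models. The sketch below is a minimal, hypothetical example using the Hugging Face diffusers library with Stable Diffusion XL and Stable Video Diffusion; it is not the Kaiber workflow used in these experiments, and the prompt, model choices, and parameters are illustrative assumptions (a CUDA GPU is assumed).

```python
# Minimal text -> image -> video sketch with open-source models via diffusers.
# Illustrative stand-in for the commercial tools used in the project.
import torch
from diffusers import AutoPipelineForText2Image, StableVideoDiffusionPipeline
from diffusers.utils import export_to_video

device = "cuda"  # assumes a CUDA GPU is available

# Step 1: text -> image (a character emerging from the Folio).
t2i = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to(device)
prompt = ("A witch from Macbeth leaping out of the open pages of "
          "Shakespeare's First Folio, theatrical lighting, engraving style")
image = t2i(prompt).images[0]

# Step 2: image -> short video clip (implied motion of the character).
i2v = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
).to(device)
frames = i2v(image.resize((1024, 576)), decode_chunk_size=8).frames[0]
export_to_video(frames, "character_pops_out.mp4", fps=7)
```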

Characters & Spoken Word

Click to Download Workflow for I am Real

Advancements in bringing characters to life through eyes, mouth, face, and movement, in addition to lip-syncing, continue across a variety of generative AI platforms. The collection here shows different approaches, and audio-image-video platforms like D-ID present multiple opportunities for richer characterization.

Pre-visualizing Animation

This experiment, conducted with the consent of actors who have chosen to remain anonymous, shows the potential of Wonder Dynamics technologies, which allow you to substitute actors with 3D animated characters, in this case bots in a bot fight. This is ideal for pre-viz experiments to understand how a scene might play out before bringing actors to a motion capture, LED, or volumetric studio, where costs are higher and it pays to have already worked out how you would like the scene to unfold.