Gen AI for Pitch Decks & Experimental Media

Emerging generative artificial intelligence (genAI) technologies, able to generate high-quality images, video and audio, promise continued change in, and challenges to, the way we ‘think’ and ‘do’ creativity in the narrative media arts and the creative industries more broadly. Proprietary genAI models, and their associated applications and platforms, limit control over generative parameters and sit behind expensive paywalls, putting them out of reach for many. I propose a workshop on how to use more accessible and malleable open-source genAI tools in teaching genAI for film and screen arts.


For three years I have been undertaking creative practice research experimenting with various genAI models, including text-to-image, image-to-image, image-to-video, and text-to-audio models. I have used applications ranging from free, consumer ‘smartphone native’ apps to user interfaces for Stable Diffusion, including Automatic1111 and ComfyUI, which give the user a much greater range of control for refining generated outcomes.
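As a minimal sketch of what these open-source workflows expose to the user, the following Python example runs a text-to-image pass with the Hugging Face diffusers library. The workshop itself centres on GUI front ends such as Automatic1111 and ComfyUI rather than code, and the model ID, prompt and file name here are illustrative assumptions; the point is that the same Stable Diffusion checkpoints and parameters sit underneath both approaches.

```python
# A hedged, minimal sketch of an open-source text-to-image workflow.
# Model ID, prompt and output path are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released Stable Diffusion checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")  # CPU also works, but generation is much slower

# Generative parameters are fully exposed, unlike in many proprietary apps.
image = pipe(
    prompt="storyboard frame: a lone figure on a rain-soaked street at night",
    negative_prompt="blurry, low quality",
    num_inference_steps=30,
    guidance_scale=7.5,
    height=512,
    width=512,
).images[0]

image.save("storyboard_frame.png")
```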


This workshop will outline several workflows for using open-source genAI applications in creative screen projects, promoting broader student access to tools and workflows that offer more control and are less reliant on paid, ‘black-boxed’ applications.
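By way of illustration only, one such workflow the workshop could cover is image-to-image generation, for example restyling a location photograph into a concept frame for a pitch deck. The sketch below uses the diffusers image-to-image pipeline; the model ID, file names and parameter values are assumptions, and the equivalent steps can be performed entirely within Automatic1111 or ComfyUI without writing code.

```python
# A hedged sketch of an image-to-image workflow: restyling a source photo
# into a concept frame. Model ID and file names are illustrative assumptions.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A photograph supplied by the user, resized to the model's native resolution.
init_image = load_image("location_photo.jpg").resize((512, 512))

# `strength` controls how far the output may drift from the source image:
# lower values preserve composition, higher values favour the prompt.
frame = pipe(
    prompt="moody neo-noir film still, anamorphic lens, teal and orange grade",
    image=init_image,
    strength=0.6,
    guidance_scale=7.0,
).images[0]

frame.save("concept_frame.png")
```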