A workshop on the use of Generative AI in Design and Design Research as part of DIS 2023 Pittsburgh, Pennsylvania
Workshop Date: 11th July 2023
Submission Deadline: 19th June 2023
Notification: 26th June 2023
Submit by emailing to: email@example.com
Submission Templates: ACM Template (2-4 pages max excluding references; anonymous submissions preferred, but not required)
We also encourage alternative submission formats (e.g., artworks, demos, videos, theatre plays, pictorials). If you have suggestions for this or questions in general, feel free to reach out to Willem.
Call for Participation
The year 2022 saw a boom in the field of Generative Artificial Intelligence (GenAI). The radical accessibility of these technologies has the potential to transform the creative field, including design practice and design research: for instance, by helping designers generate, explore, and extend ideas more quickly, or by offering serendipity, surprise, and generative friction through their unpredictable outcomes. It is also likely that GenAI will open up new creative modalities and possibilities for creative exploration, as well as new opportunities for design research. Despite these potentials and advancements, there remains little published academic work on the topic. In this workshop, we intend to take stock of the (potential of) GenAI in the context of design, with a focus on creating a comprehensive vision for the role of GenAI in design practice and avenues for design research. We aim to collaboratively develop this vision by synthesizing the work that is currently developing within the community and that participants will share during the workshop. Rather than introducing and exploring tools, we ask participants to present their own cases, so we can build together upon our own experiences with GenAI in design.
Participants should submit a 2- to 4-page position paper (excluding references) in the ACM Extended Abstracts Format. We also encourage alternative submission formats (e.g., artworks, demos, videos, theatre plays, pictorials). The proposals should contain authors’ (initial or preliminary) experiences and reflections on using GenAI in creative practices. The proposals should be emailed to firstname.lastname@example.org. We will select papers based on their relevance, quality, and diversity, and will limit the size of the workshop to 25 participants. At least one author of each accepted submission must attend the workshop, and all participants must register for both the workshop and at least one day of the conference.
Proposed Workshop Schedule
The workshop will have both hands-on and discussion-based components. Participants will have the opportunity to present (and demonstrate) their own projects/cases related to GenAI in design and discuss the implications of GenAI in design research and practice with the workshop attendees.
9:00 – 9:15: Arrival
9:30 – 10:00: Introduction and overview of the workshop topics
10:00 – 10:30: Keynote
10:30 – 10:45: Break
10:45 – 12:15: 5-minute presentations
12:15 – 14:00: Lunch break
14:00 – 15:00: Themed panel discussions round 1
15:00 – 15:30: Break
15:30 – 16:30: Themed panel discussions round 2
16:30 – end: Discussion of dissemination of workshop results (e.g., Interactions article, research collaborations, projects)
By Michael Muller & Justin Weisz (online)
Framing and Reframing are powerful co-creative strategies. We explore Reframing as a novel human-AI co-creative method using a conversational UI to a Large Language Model, presenting an actual session in the form of a theatrical script. We address themes of human-AI co-creativity, the role of the AI, and questions of AI anthropomorphism.
By Paulina Yurman
Machine Stories are fragments of stories created using ambiguous watercolour drawings of artefacts and machine learning platforms (Diffusion and DALL-E) that replicated their visual features, whilst experimenting with various or no word prompts. Text was created by modifying sentences generated by ChatGPT. Machine Stories attends to the unfamiliar and ambiguous, and is a reflection on my experiences collaborating with humans from different disciplinary perspectives and on experimenting with systems that replicate dominant features. Whilst not always being able to understand how other humans or systems operate, I was nevertheless able to assign meaning to what I saw, recognising patterns and creating associations based on my particular experiences or imaginaries. As humans, we cannot escape giving meaning to what we see, even when we do not understand it. AI does not understand ambiguous drawings created by humans; it merely identifies and reproduces patterns. Not understanding can be a source of creativity.
By Timothy Merritt
Artificial intelligence (AI), and especially generative AI, is experiencing an explosion of adoption and new applications across many facets of society. Generative AI applications are quickly being adopted into the design practices of leading design companies, assisting with divergent thinking and providing inspirational inputs and reflection in the design process. One of the most complex design challenges is the use of humor in interactive experiences. In this article, the topic of humor and design is examined through the use of some popular generative AI tools. Illustrative examples are provided to foster deeper discussion about the strengths and weaknesses of using generative AI in the design of humorous experiences. Initial directions are proposed for design research on humor and generative AI.
By Arngeir Berge & Frode Guribye
In our DIS’23 paper “Designing for Control in Nurse-AI Collaboration During Emergency Medical Calls”, we describe a design research process that could have been sped up had ChatGPT been launched earlier.
By Jordan Eshpeter
Currently, I am conducting a research study at a digital product design consultancy, where I observe the agency’s product design practice, participate in company culture and events, and contribute to its research team. As an extension of this role, I recently joined an internal working group that was formed in response to advancements in GenAI tools like OpenAI’s ChatGPT and DALL-E. This group is charged with exploring the possible uses of these platforms and tools to improve the agency’s processes and the digital products and experiences it produces for its clients. Specifically, I am part of an Ethics and Policy subgroup, which aims to develop principles for the responsible and safe use of GenAI tools at the agency and for its clients. On this basis, I aim to join this workshop and contribute to the development of ethics and policies as one of many issues facing design researchers and practitioners as they use GenAI tools in their work.
By Stephanie Houde, Siya Kunde & Rachel Bellamy (online)
We propose that the potential process benefits of new generative AI tools for design are not limited to the activities of design professionals designing interactive systems. They can also extend to professionals of any discipline who are engaged in design-like activities where collaborative idea generation, exploration, extension, and selection occur. We present a vision scenario depicting how generative AI could support material chemists engaged in a collaborative molecule discovery task, and comment on specific design process benefits at each stage.
By Kazjon Grace
In this paper, we describe various work-in-progress efforts to explore human–AI collaboration in creative domains, with a specific focus on the iterative and exploratory early conceptual phases of creative work. We document a series of AI models, and interactive systems encapsulating them, that serve to explore the state of the art in that area.
Implications of Generative AI on Learning and Assessment in Higher Education and Design Research Practice
By Roger Whitham, Glynn Stockton, Daniel Richards, Joseph Lindley, Naomi Jacobs, Paul Coulton
In this paper, we explore the impact of generative AI (GenAI) on assessing student work in further education contexts. As GenAI becomes widely adopted, it engenders potential risks to assessment, such as false evidence of learning, student vulnerability to academic integrity injustice, and implications for independent learning and creativity. We report on the interim findings of a working group seeking to understand these risks and propose possible mitigations. Alongside desk research and discussion, a ‘wargaming’ activity was used by the team to attempt assignments with the help of AI tools. Based on our research, we propose possible mitigations to the assessment challenges, including adapting assessment methods, minimising reliance on automated enforcement tools, and reimagining course structures to integrate GenAI into them. In conclusion, we discuss critical issues and potential mitigations surrounding the intersection of GenAI, student assessment and Design Research practice and reflect upon how these considerations may inform and reshape the future of Design Research practice in general.
By Christian Sivertsen
The increased popularity of generative AI calls for methods for designers to evaluate its phenomenological qualities. Accuracy and efficiency are not sufficient concepts, as the aesthetic qualities of generative models have a significant impact on how they work when employed in the real world. I propose an approach to designing for the reflexive use of generative AI. The intention is to be able to design interfaces that afford the ability to interrogate AI-based models and systems on particular experiential qualities. This is, among other things, relevant for designers assessing the qualities of models ahead of their implementation in a system in the wild.
Haunted Aesthetics and Otherworldly Possibilities: Generating (Dis)embodied Performance Videos with AI
By Brett A. Halperin, Mirabelle Jones & Daniela K. Rosner
In this paper, we use GPT-3 to generate a score for an endurance art performance that both a human performance artist and AI text-to-video system “perform.” First, we consider how the artist performs the score. Then, we use Runway AI to generate three different video performances derived from the score. In reflecting upon this process, we diagnose pitfalls and potentials of engaging AI in performance and video generation. On one hand, we see pitfalls of AI’s inability to grasp human bodies, yet ability to render aesthetics haunted by displaced artists and ghost workers. On the other hand, we see creative potentials to generate otherworldly possibilities and augment low-resourced independent video/filmmaking. We hope to join this workshop to contemplate how a critical and creative design research framework for generative AI may or may not support humanistic traditions of video/film and performance.
Generative AI for First-Person Meta Reflection in Design Research: More-than-Human Storying the Ecologies of AI Arts
By Petra Jääskeläinen (online)
While sensitivities toward ecologies and more-than-human design and research have become visible in HCI, design, and the humanities [1–3, 7–11], inquiries from these perspectives are not common in the specific case of AI arts. This is a challenge, as AI arts is often approached from technological post-humanist perspectives, mixed with values of techno-positivism, in an attempt to find new ways of expression and creativity or ways of utilizing AI for creative work, overlooking feminist care for the environment and more-than-human ecologies. In this position paper, I explore Generative AI as a tool for reflecting on such challenges related to ‘doing research in AI arts’ from a first-person perspective. In order to explore alternative ways of knowledge-making, I use the method of storying speculative conversations with ChatGPT’s “More-than-Human Alter Ego” gAIa. This process surfaced that using Generative AI for more-than-human storying can work as a method for first-person reflection on the challenges, experiences, and situated context of doing design research work.
By Peter Kun
Image generation models have triggered a paradigm shift in how we can express ourselves in visual digital art. Despite their enormous uptake for both amateur and expert uses, deploying these models in interactive prototypes remains largely unexplored. In this paper, we present the design of a research prototype, GenFrame, an image-generating picture frame, which will be used to study how people relate to this technology when it is deployed in familiar contexts. While developing GenFrame, we reflect on the research-through-design journey of the design decisions made for an interactive artifact centered around questions of control over image generation models.
Willem van der Maden is a Ph.D. candidate at the Delft University of Technology. His work is in the field of designing Positive AI. He specifically focuses on how we might align AI systems with human wellbeing.
Evert van Beek is a PhD candidate at Industrial Design Engineering at Delft University of Technology. His research investigates design and innovation in the Dutch energy transition, with a specific focus on human-building co-performances.
Iohanna Nicenboim is a Microsoft Research-funded Ph.D. candidate at Delft University of Technology, investigating human-AI interactions through more-than-human design. She is one of the chairs of the pictorials track at DIS2023.
Vera van der Burg is a Ph.D. candidate in Industrial Design Engineering at Delft University of Technology. Her research investigates AI in creative design practice, and how this can generate new ways of designing.
Peter Kun is a postdoc at the Media Art and Design group at IT University of Copenhagen, researching meaningful ways to use state-of-the-art image generation algorithms for consuming traditional art, such as paintings in a museum.
Derek Lomas is assistant professor of Positive AI at the Faculty of Industrial Design Engineering at Delft University of Technology. He designs data-informed smart systems for human wellbeing, bringing humanist values into AI systems. He is a classicist, futurist, cognitive scientist, and proponent of the magic of resonance in design.
Eunsu Kang is an artist, a researcher, and an educator who explores the intersection of art and machine learning as well as the possibility of creative AI. Her work has gradually transformed into interactive and interdisciplinary art projects, currently focusing on the nascent area of AI art. A few years ago she left her tenured art professor position to design and teach new courses (Art and Machine Learning, Creative AI) at the Machine Learning Department of Carnegie Mellon University.