Agentic, Multimodal Large Language Model (LLM) with Behavioural Parameters and Self-iterative Capabilities for Conversational Architectural Design Processes

Lok Hang Cheung, Davide Lombardi*, Giancarlo Di Marco

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This research proposes ConvoAI, an AI system that addresses the tension between sophisticated AI tools and flexible design exploration in architecture. Departing from linear AI systems, ConvoAI integrates behavioural modes and self-iteration capabilities to foster conversational design processes. ConvoAI responds to multimodal inputs by generating information that provokes exploration rather than optimised solutions that invite fixation, enabling designers to abstract insights from AI responses, reframe problems, and derive strategies. Validation in a design studio revealed three engagement patterns: (1) Design Partner, for redefining the problem space; (2) Concept Clarifier, for clarifying design strategies through visualisation; and (3) Design Assistant, for accelerating traditional workflows. Analysis showed that the Partner pattern yielded the greatest improvement for high performers, while the Clarifier pattern helped average performers the most. Positive impacts were strongest for the Partner and Clarifier patterns, with slightly negative effects for the Assistant pattern. Users shifted towards viewing AI as an exploratory collaborator. Limitations include scalability and group collaboration. Future work will explore personalised, multi-agent adaptation.
Original language: English
Journal: Architectural Science Review
DOIs
Publication status: Accepted/In press - 5 Nov 2025
