Published in 2/2024 - Matter and Intelligence


Industrial Revolution of Creativity

Mikael Tómasson

At its best, generative artificial intelligence is an aid in creative work and brainstorming, Mikael Tómasson says.

Despite the immense buzz about artificial intelligence over the last year, many designers are left with a sense of uncertainty about the topic. Should we start learning to use AI, since it is supposed to make our work so much more efficient? Phrased like this, the question fails to grasp the far-reaching consequences of utilizing artificial intelligence. Beyond simply speeding up work processes, adopting generative AI constitutes a much larger revolution in creative work. In skilled hands, AI can be used to industrialize the creative process, and it is becoming an inseparable part of the future work of architects.

Today’s designers are already using several AI applications in their daily work – often without even realizing it. A general definition of artificial intelligence is the ability of a system to independently perform functions that are regarded as requiring intelligence. For example, the Google search engine is an AI system because it is able to filter relevant search results out of vast masses of data. However, the ability of design software to calculate the surface area of an apartment or to model the wind conditions around a building should not be regarded as artificial intelligence. After all, the software is “only” performing the calculation tasks defined by the human user and is not making any independent value choices.

Instead of good old Google, recent discussions have revolved around generative artificial intelligence. Based on human-written prompts, this type of software is able to create text, images and 3D models as if out of thin air. Text-generating AI tools, such as ChatGPT or Perplexity, are already in use in some architecture firms as a means to automate the first drafts of emails, project descriptions and marketing texts. It is worth bearing in mind, however, that these systems should not be used to produce actual project documents, as all of the text entered into the system ends up in the hands of the service provider and, with some skill and dedication, third parties may be able to dig the original texts out of the system.

Image-generating AI applications also have their problems – they cannot reliably produce repeated images of the same subject matter. If a designer wishes to create perspective images of the same building from different angles, the AI system, which generates each new image “from scratch”, is not up to the task. However, AI image generators are effective tools for enlivening and finishing plan illustrations. A rendered interior image can be quickly furnished by running it through the generative fill feature in Photoshop.

In addition to image editing, image-generating AI also has another purpose: interactive brainstorming. By prompting the AI to generate visual solutions to one’s own design challenges, a designer can gain new perspectives on the task. 

As an example, I will use my recent experiments with the massing of multistorey buildings for a city block in Turku. First, I defined the area and asked the Stable Diffusion AI to generate an image of “Turku apartment blocks”. Upon realising that the AI offered up an over-abundance of plastered facades, I redefined the material, which also switched up the style of the buildings. The change of style led me to experiment with various options for the block courtyards. This type of back-and-forth reacting and honing of ideas between myself and the machine was addictive.

The generated images do not quite correspond to reality, but the suggested massing sparks the designer’s imagination. Having produced a hundred such images within the space of thirty minutes, I did end up with some workable ideas. Still, the images I created all reiterate the generic visual imagery of modern architecture that the AI has learned from the internet. Indeed, for the time being, generative AI still appears to be a mere assistant and does not wow anyone with its creativity.

The series of images of a housing block in Turku, together with the prompts used, illustrates the back-and-forth workflow with the Stable Diffusion AI. Images: Mikael Tómasson

Prompt: air shot of Turku apartment blocks, architectural model, (masterpiece, low contrast, highly detailed), single building, single plot, solarpunk, house, circular courtyard, tree, foliage, air photo, drone photo, isometric view
Prompt: air shot of Turku apartment blocks, pleasant city, architectural model, (masterpiece, low contrast, highly detailed), wooden architecture, single building, single plot, (big green courtyard), solarpunk, house, (circular courtyard), tree, foliage, air photo, drone photo, isometric view
Prompt: air shot of Turku apartment blocks, pleasant city, architectural model, (masterpiece, low contrast, highly detailed), massing study, brick and concrete architecture, single building, single plot, (big green courtyard), solarpunk, house, (circular courtyard), tree, foliage, air photo, drone photo, isometric view
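The parentheses in these prompts are not decoration: in popular Stable Diffusion front ends such as AUTOMATIC1111, wrapping a term in parentheses is conventionally an emphasis marker, with each enclosing pair boosting that term’s weight by a factor of roughly 1.1. The sketch below is a minimal, hypothetical parser of that convention – an illustration of the idea, not the code any real tool ships with:

```python
# Minimal, hypothetical parser for the "(term)" emphasis syntax used by many
# Stable Diffusion front ends: each enclosing pair of parentheses multiplies
# a term's weight by ~1.1. Illustration only, not a real tool's parser.

def parse_prompt_weights(prompt: str, boost: float = 1.1) -> list[tuple[str, float]]:
    """Split a comma-separated prompt into (term, weight) pairs."""
    terms, current, depth = [], [], 0

    def flush():
        # Emit the term collected so far with a weight based on nesting depth.
        term = "".join(current).strip()
        if term:
            terms.append((term, round(boost ** depth, 3)))
        current.clear()

    for ch in prompt:
        if ch == "(":
            flush(); depth += 1
        elif ch == ")":
            flush(); depth -= 1
        elif ch == ",":
            flush()
        else:
            current.append(ch)
    flush()
    return terms

print(parse_prompt_weights("single building, (big green courtyard), ((solarpunk))"))
# → [('single building', 1.0), ('big green courtyard', 1.1), ('solarpunk', 1.21)]
```

A parenthesised group such as “(masterpiece, low contrast, highly detailed)” thus applies the same boost to every term inside it.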

Prompting an image-generating AI to create “new fabulous wow architecture” will only get you very familiar-looking aesthetics, as the AI has learned to connect the words “wow architecture” and “fabulous” with the works of, for example, Zaha Hadid and Frank Gehry. In fact, Zaha Hadid Architects often utilizes image-generating AI tools when seeking to infuse their new design projects with a touch of their late founder’s trademark flair. In reality, however, the process of creation relies on a human user who directs the machine to mix up the concepts it has learned.

AI systems are highly skilled in analysing and reproducing data, which means that the human user must be creative in guiding them in order to make them generate creative content. For example, I prompted Stable Diffusion to generate an image of a pinecone and another of an Orthodox church. At a cursory glance, both turned out to be believable. Next, I again asked the AI to generate an image of a pinecone, but changed my instructions mid-process to instead produce an image of an Orthodox church, using the half-finished pinecone image as a template. This time, the AI produced images in which it was no longer simply replicating what it had learned previously. Now it was actually creating something new. Once the machine had put together a hundred crosses between a pinecone and a church, I got to play judge and pick out the inspiring ideas.

When the instructions are changed mid-process, the AI no longer merely replicates what it has learned. The images in the middle are the result of combining a pinecone and an Orthodox church. Images: Mikael Tómasson
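Technically, changing the instructions mid-process corresponds to interrupting the diffusion model’s step-by-step denoising loop and continuing it under a new prompt, so that the early steps are steered by one concept and the late steps by another. The real work happens inside a large latent-diffusion model; the toy sketch below (all names and numbers are made up) only illustrates that control flow, faking each denoising step as a nudge towards a prompt’s target vector:

```python
import numpy as np

# Toy illustration of switching prompts halfway through a diffusion-style
# denoising loop. Real Stable Diffusion denoises an image latent under text
# conditioning; here a "prompt" is faked as a 3-vector target and each step
# simply moves the latent a small fraction of the way towards it.

PROMPTS = {  # hypothetical stand-ins, not real text embeddings
    "pinecone":        np.array([1.0, 0.0, 0.0]),
    "orthodox church": np.array([0.0, 1.0, 0.0]),
}

def denoise_step(latent, prompt, strength=0.1):
    """Fake denoising step: nudge the latent towards the prompt's target."""
    return latent + strength * (PROMPTS[prompt] - latent)

def generate(schedule, start, steps=20):
    """Run the loop; schedule(i) decides which prompt steers step i."""
    latent = start.copy()
    for i in range(steps):
        latent = denoise_step(latent, schedule(i))
    return latent

noise = np.random.default_rng(0).normal(size=3)   # shared starting "noise"
pure = generate(lambda i: "pinecone", noise)
hybrid = generate(lambda i: "pinecone" if i < 10 else "orthodox church", noise)
# `pure` converges on the pinecone target; `hybrid` lands between the two
# targets, pulled hardest by the later prompt - a pinecone-church cross.
```

In actual tools, this corresponds roughly to img2img generation at a reduced denoising strength, where a half-finished image of one subject seeds the generation of another.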

This kind of image generation will not, of course, single-handedly revolutionize the creative process. We could give the same brief of a pinecone–church hybrid to human architects and expect to receive much more usable proposals. But AI has one undeniable advantage: it is cheap. Creating the one hundred pinecone–church images took me an hour. Image-generating AI follows the logic of industrialization, whereby a person equipped with a machine is faster and more productive than a person working by hand, even if the quality often falls short. Thus creativity can also be automated. In Japan, for instance, 3D models of entire office buildings are already being created with the aid of generative AI. The task falling upon the human designer is to identify the best creation and develop it further.

The industrialization of an architect’s job description will also continue in Finland, and it is my belief that, in twenty years’ time, generative AI will be widely used for automating creative processes. Architects will not be out of work, however, because human intervention is needed for process management, contextualization and communication – at least for the time being.

Generative AI models are bound to industrialize the creative process, and any architect who still wants to be working ten years from now would be wise to familiarize themselves with the technologies within the next year. A good place to start the learning process is by getting a feel for how AI is directed through writing. Text-to-image generation requires an architect to adopt a new mindset – being handy with a pen or a CAD cursor is no longer enough; one has to be able to verbalise the desired result in a language the machine understands. Over the coming years, generative creation will break through in more and more areas of an architect’s creative work. This will also streamline the creative process, as design software will develop to include AI-based assistants that make our designs more efficient and iron out our mistakes.

I still have a good thirty years of my career ahead of me. In that time, according to historian Yuval Noah Harari, humankind will go through a technological revolution whose impact will surpass that of the advent of printing or industrialization in their time – for the first time in the history of our species, the technology we create is able to make independent decisions and adjust its own operation. This upheaval is also bound to have an effect on construction. Harari admits that he does not know what we should be teaching today’s youth, but I, for one, am now training myself to use AI systems, while also studying aesthetics and usability, in order to qualify as an arbiter of style and taste to judge and moderate the choices made by artificial intelligence. ↙

MIKAEL TÓMASSON is a city planner and AI trainer.