The rise of Artificial Intelligence (AI) has revealed incredible possibilities in recent months to those of us who had little understanding of it until now. The basic principle behind AI is to develop computer systems that can simulate and mimic human intelligence. AI systems are designed to process information, learn from experience, reason, and make decisions in a way that imitates human cognitive abilities.
The previous two sentences were generated by ChatGPT, and I decided to leave them intact in recognition of those who decided that this is what AI needs to be, and what AI must itself believe its purpose to be, if believing is something AI systems are capable of.
However, while computer systems are being designed to imitate human patterns of thought, the gap between computation and authentic intelligence is yet to be bridged. The prevalent sentiment that AI repeats conceptual patterns already existing in the real world—merely running through countless iterations more rapidly than humanly possible—is correct to a large extent. It is also true that present AI explorations are mostly confined to lower-level routine tasks that are repetitive and take place within a closed management system.
The unintentional insertion of human ‘cognitive bias’ into AI systems by their human programmers is perhaps the chief limitation of present AI systems. This bias comes from a somewhat irrational pattern of human thought that allows us to make rapid decisions based on experience, emotion, near-at-hand data, and the like, rather than going through entire data sets in our heads to arrive at a decision. What speeds up mental processes for us—precisely this shortcut to quick decision-making—is a deficiency that computer systems do not need. In other words, human programmers are unwittingly creating faulty, human-like AI systems.
Let us, for the sake of not being ostriches, admit that this is only a momentary setback. Computer scientists, more conscious of the misguided mindscapes of human bias, are creating improved systems.
There is, of course, a big concern around the unregulated development of AI. Governments and corporations, as gatekeepers of big data essential for the development of AI, will probably have to concede that without AI, big data is just a big dump of unusable data. Less democratic governments may not have qualms about using big data and AI to infringe on the rights of their citizens, creating ranking systems that encourage compliant behaviour. Imagine finding yourself more liable to be penalized for speeding after posting a critical comment on the government’s performance. Devious use of deepfakes, blackmail, and the spread of fake news, etc., give us more reason to regulate the use of AI.
But these concerns seem to be less pressing than the fear of AI bots taking away human jobs. This seems to be the primary concern for most people, accompanied by a fear of being left behind if one is not able to adapt to and adopt new technologies. Social media is rife with examples of AI tools for architects that seem to alleviate a lot of the burden of menial tasks that are relegated to the fresh-out-of-college in architectural offices.
While this could possibly mean freeing up a lot of time for ‘design,’ it separates labour from birthing, making us wonder if the latter can, in fact, survive the break-up.
Educational institutions, to ‘keep up’, will perhaps be looking at new roles for human designers in an age where AI, propelled by its own advances, will transform the landscape of practice. Will graduates emerging from colleges be equipped with a deep understanding of mankind and how to fix the world with no knowledge whatsoever of how to fix a leaky roof? But then again, practical knowledge and street-smartness are not humanity’s ultimate aim. Humanity’s ultimate aim is…hang on, let me ask my AI app…ah yes! “To endeavour to find humanity’s ultimate aim.”
If this distraction from the mundane tasks that occupy our days can, in fact, help us think about larger ontological questions, I would say it’s not a bad thing at all. At the very least, the rise of AI serves as a slap in the face of the human arrogance that has only led to the selfish destruction of our planet. As Sam Altman, the CEO of OpenAI, the company behind ChatGPT, said, AI might, in fact, allow us to recognize that intelligence is not in the human mind but is part of the material world all around us.
Human beings have valorised intelligence, and individual brilliance in particular has been humanity’s ‘ultimate aim,’ at least in many cultures. Garry Kasparov’s loss to IBM’s computer Deep Blue, therefore, came as a shock to many. The computer had studied not just the patterns of the mind of the genius it was playing but carried with it the mental patterns of many geniuses across time, which it utilized in its winning strategy. The loss to the computer, we still somehow believe, was not the loss of the brilliance of the human mind. It was merely Kasparov’s loss—Kasparov, who, perhaps, was not so much of a genius after all.
In Greek philosophy, genius was associated with divine inspiration, while during the Romantic era, thinkers like Goethe and Nietzsche portrayed it as an independent force defying convention. Modernism further exalted individualism and the genius as cultural vanguards challenging the status quo.
However, this celebration of the individual genius invites critical examination. The success of the genius is dependent on collective context and recognition by the masses. We may argue that perpetuating the idea of the solitary genius undermines the importance of collective efforts and diverse perspectives. In recognition of creative achievements emerging from complex systems and networks, rather than individual brilliance, emphasis may be placed on the interconnectedness of ideas, the accumulation of knowledge, and the contributions of multiple actors.
It may be argued that there was a time when individual genius was required to break away from convention. Progress in design, like scientific progress, happened through paradigmatic shifts in thought-perceptions. Just as advancements in science are not progressive but radical—e.g., Einstein’s theory of relativity killed off the common-sense physics of Newton’s laws of motion and gravitation—shifts in design ideologies too were welcomed as a sign of progress. The Gothic period, the Renaissance, the Arts and Crafts movement, and the modern movement all brought radical shifts in thinking promoted by avant-garde geniuses and their patrons, each in the service of their own ‘true expression’. Present-day architecture, however, worships only a single god: the god who will save the planet from the imminent doom previewed in climate change. Poetry—desiccated in a design space where there is no larger narrative than saving energy—is replaced by the enchantment of aesthetic styles, styles that have been catalogued away in Pinterest boards under ridiculous categories like Japandi (Japanese-Nordic) and Boho (Bohemian-Hippy). If poetry survived Auschwitz, it would certainly find its end in the gas chambers of Pinterest categories.
It is important to note that these conceptions of genius originate in Western thought. The intellectual brilliance that computers and the hive mind of global teams have usurped is a certain kind that is founded on conceptual thinking—a practice of connecting disparate abstract ideas to develop new ways of thinking while deepening thought processes within specialised disciplines. Eastern thought presents a different understanding, emphasizing natural intelligence and intuitive wisdom. In Taoism, the concept of zhi or zhihui refers to an innate knowing beyond intellectual understanding. Hinduism recognizes the interconnectedness between individual Atman and universal Brahman, involving the realization of inherent divinity and interconnectedness. In Zen Buddhism, genius relates to moments of satori or awakened awareness and profound clarity that transcend conceptual thinking.
Non-conceptual thinking is thinking without putting things into boxes. Consider how a baby might see a dog for the first time—not as moving shapes and patterns, a tail, a head, a body—but as a whole, known immediately as a whole. It is a thinking not premised on the specialisation of thought that leads to categories and disciplines.
Sam Altman’s rather profound statement on the end of human arrogance and the realization of the intelligence of matter resonates with an understanding of cosmic consciousness in which each atom is intelligent and has a purpose. Our fascination with our mind’s capacity to tinker with matter, coercing it to our perceived needs, has been the contemporary definition of intelligence and scientific progress. It may as well be the contemporary definition of architecture.
I shudder to think this might just be the only option after doomsday, whenever that comes.
An understanding of matter as intelligence is at the heart of architecture as we once knew it. It is not just knowing the physical properties of matter, but how matter forms part of a larger whole.
It is what the world needs today. AI can help us get there, but sadly, not in the way we think it will.
We thank Dimension Plus and its Learn Portal Discover BIM for supporting the series on Artificial Intelligence in Architecture.