The "Wholeness-Generating" Technology of Christopher Alexander

To manage a phenomenon of “wholeness” requires a technology that brings with it a set of useful methods and practices.

The great systems theorist Herbert Simon once gave a very concise definition of design: it was, he said, the “transformation from existing states to preferred ones”. This elegant little phrase packs a punch. For who is doing the preferring–the designer? the artist? the corporate moneymaker? No, the users. Which users, however, and how do we identify them? And how do we know what they prefer? How do we know what the existing state is? How do we know how to get from existing to preferred–what tools and methods can we use? And how can we evaluate that process and correctly re-adapt it as we need to?

This idea of design–as “transformation” using an adaptive process–is very much at the heart of the design theorist Christopher Alexander’s work. Through that adaptive process, we generate a form that achieves our “preferred” state. But at each step of this transformation, Alexander says, we are dealing with a whole system–not an assembly of bits.

This turns out to be a crucial point for leading design technologists. Such a “whole systems” theory of design is distinct from what we might call an “assembly” theory of design, though both are useful at different times. While we can treat some systems as “mere” assemblies with little harm, it’s disastrous to do so with others. We can take apart a car, say, and put it back together, and the car will run all right. But we cannot do the same with a living organism! The reason is that there is no way to decompose an organism (or any complex system) into parts without destroying the connective networks–subsystems themselves–that make it work.

To manage this phenomenon of “wholeness” requires a technology that brings with it a set of useful methods and practices. Medicine is an obvious example: physicians deal successfully with problems of whole systems all the time. But the wholeness of a system (like a person’s body) doesn’t happen automatically; it is the result of certain kinds of transformational processes (like the functions of the endocrine system, or the protein folding of embryogenesis). These can be studied and, to a surprising extent, replicated. The best and most enlightened technologists want to know how transformations occur most efficiently in natural systems, and how human technology can benefit from the efficient wholes that result. Design professionals–from software engineers, to product designers, to designers of organizational, enterprise, and information architectures–are looking for a practical method of achieving an optimum integrality in the systems they manage: an optimum wholeness.

One reason this matters is that when wholeness is not achieved, the system in question is disordered and inefficient, and probably headed towards collapse. If it is a biological system, we might say it is “diseased”. If it is a system of human technology, we will probably say it is highly inefficient, perhaps unsustainable, and in need of reform. The alternative may well be a catastrophic collapse of the systems upon which human wellbeing depends–or at the very least, a disastrous decline.

Sustainable systems of technology seem to have been designed, perhaps unselfconsciously, with wholeness in mind. As we noted, Christopher Alexander has been concerned with this topic of “wholeness”–the relation of parts to wholes–from the beginning. He has sought a more advanced method of design that might help us produce more integral wholes in the built environment and in other fields. We outline that method here (with some of our own added fine-tuning). Alexander’s method involves recursion: repeated steps taken on the basis of previous steps, each modifying the result in response to what came before. Wholeness is generated using an “algorithm” (simply put, a set of instructions for step-wise growth and transformation). This method is able to create a great deal of complexity in short order. But complexity alone does not guarantee wholeness.

Alexander provides two algorithms in his book series The Nature of Order, meant to be used together for design. These are essential steps towards achieving geometrical wholeness, which is accomplished by combining two distinct processes. To understand what is going on, consider the different scales (measures) occurring in a structure, where each scale is defined by components of a given size. All complex systems show a hierarchical classification into small elements, several sizes of intermediate elements, and large elements. Elements of distinct sizes interconnect. For functional reasons having to do with system stability, the size distribution of the different components will be regular, not random. Otherwise, the system turns out to be fragile.

According to the “universal rule for the distribution of sizes” (due to one of us and physicist Bruce J. West, following Alexander’s insights), the smaller the elements in a complex system, the more of them there are. This relationship between size and multiplicity is governed by a fractal, inverse-power scaling law, which is obeyed by the majority of stable natural and artificial systems. These include the neural systems of animals, the circulatory systems of plants, the mammalian lung, electrical power grids, the World Wide Web, and countless other complex networks.
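In compact form, the rule says that if x is the size of a component and p the number of components of that size, then p falls off as an inverse power of x: p ≈ C/x^m. A minimal sketch in Python makes the behavior visible (our own illustration: the constant C, the exponent m, and the factor-of-three jump between scales are assumed values for demonstration, not figures prescribed by the rule itself):

```python
# Inverse-power scaling: smaller components are far more numerous.
C = 100.0   # normalization constant (assumed for this demo)
m = 1.5     # scaling exponent (assumed value for illustration)

def multiplicity(size: float) -> float:
    """Expected number of components of a given size under p = C / x**m."""
    return C / size ** m

# A hierarchy of scales, each three times larger than the last (assumed ratio):
for size in [1, 3, 9, 27]:
    print(f"size {size:>2}: about {multiplicity(size):6.1f} components")
```

Running this prints a steeply decreasing count (about 100, 19, 4, and 0.7): the signature distribution of the fern leaf and cauliflower described next.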

Look at a fern leaf, a cauliflower, and a magnified snowflake to see this distribution of sizes graphically. A prescription for achieving wholeness in design works with the components of a system classified according to their size. First, Alexander says that every time you create something, look at its particular scale, and make sure that the new piece reinforces–in a coherent manner–the immediately smaller scale, as well as the immediately larger scale. This produces a geometrical and functional weaving of three different scales each time something new is introduced.

The result is one step towards increasing overall wholeness. Alexander’s second rule also concerns the scales of a system. It is diagnostic, suggesting where to insert something, not because that’s obvious from the viewpoint of the inserted element itself, but because it’s needed by the system as a whole. Begin by visualizing the whole system. Then identify the scale that is the weakest, or is missing (i.e., where there is too large a gap between scales), and create or intensify something on that scale. The new component must reinforce all existing components on its own scale, as well as the immediately smaller and immediately larger scales, as described in the first rule.
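To make the two rules concrete, here is a minimal sketch in Python. It is our own schematic reading, not code by Alexander: the scale indices, the component names, and the use of “fewest components” as the measure of weakness are all simplifying assumptions.

```python
from collections import defaultdict

class Design:
    def __init__(self):
        # Components grouped by scale index: 0 = smallest, higher = larger.
        self.scales = defaultdict(list)

    def add_component(self, scale: int, name: str) -> None:
        """Rule 1: a new piece must reinforce its own scale and the
        immediately smaller and immediately larger scales."""
        self.scales[scale].append(name)
        for s in (scale - 1, scale, scale + 1):
            for other in self.scales[s]:
                if other != name:
                    self.reinforce(name, other)

    def reinforce(self, a: str, b: str) -> None:
        # Placeholder: in a real design this is a geometric and functional
        # adaptation between two components, not a mere printout.
        print(f"  adapt {a} <-> {b}")

    def weakest_scale(self, n_scales: int) -> int:
        """Rule 2 (diagnostic): find the weakest or missing scale --
        here, simply the scale holding the fewest components."""
        return min(range(n_scales), key=lambda s: len(self.scales[s]))

d = Design()
d.add_component(0, "tile")
d.add_component(0, "brick course")
d.add_component(2, "facade bay")    # note: nothing yet at scale 1
gap = d.weakest_scale(n_scales=3)   # Rule 2 diagnoses scale 1 as missing
d.add_component(gap, "window")      # Rule 1 then weaves scales 0, 1, and 2
```

The point of the sketch is the order of operations: the diagnosis looks at the whole system first (Rule 2), and only then does a new element appear, immediately binding itself to its neighbors one scale down and one scale up (Rule 1).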

Let’s outline a theoretical example to show how the method works in practice. In designing a system stepwise, we try to find its weaknesses as we add functionality. The problem we face is that some part of the design feels wrong (it’s unconnected, or fragile). When the problem is identified, don’t just adjust that piece–which in any case doesn’t always tell you how to adjust it–but look at the scale containing the problematic piece across the entire design. Ask: what is the optimal component that can be introduced to reinforce this scale? This implements change in the context of the whole system, rather than adjusting only the original faulty piece. The idea re-orients design from thinking about individual pieces in isolation to thinking about particular, interlinked scales. A set of diagrams from Helmut Leitner (a software engineer in Graz, Austria) helps us grasp the wholeness-generating transformations. We reproduce his five-step graphical description from our book Algorithmic Sustainable Design, and tie the five steps together with a short code sketch after the last one.

1. Step-wise: Perform one adaptive step at a time.

Here we run into a problem with modern design education, which is based almost exclusively on perfunctory assembly and composition following a “program”, and then a nearly magical addition of “creative inspiration” all at once. This is, after all, what famous contemporary architects are thought to do, following the accepted myth of intuitive genius. It is very difficult to convince a young architecture student, for example, to design with one adaptive step at a time.

2. Reversible: Test design decisions using models; “trial and error”; if it doesn’t work, undo it.

[Leitner diagram: step 2]

Another deep problem here, revealing the inadequacy of present-day design training: how does a practitioner judge whether a design “works” or not before it’s built? The only means of doing so is to use criteria of coherence and mutual adaptivity (or “co-adaptivity”), rather than abstract or formal (static) criteria. Otherwise, an architect has no means of judging whether an individual design step has indeed led closer to an adaptive solution. As for actually undoing a step because it leads away from wholeness–that is anathema to current image-based design thinking!

3. Structure-preserving: Each step builds upon what’s already there.

[Leitner diagram: step 3]

This has been the theoretical and philosophical underpinning of all of Alexander’s (and our) work. The most complex, yet adaptive and successful designs arise out of a sequence of co-adaptive steps and adjustments that preserve the existing wholeness. On the other hand, designs that arise all at once are for the most part simplistic, non-adaptive, and dysfunctional. A trivial algorithm cannot generate living structure. And even a single step away from wholeness can derail the system.

4. Design from weakness: Each step improves coherence.

[Leitner diagram: step 4]

Again, this is part of the same fundamental problem we already mentioned: how to identify the precise location where an evolving design happens to be “weak”. This can only be done on the basis of adaptivity and coherence; otherwise one risks privileging a non-adaptive component that looks “exciting” instead of sacrificing it to create an improved overall coherence. The dysfunctional Achilles heel of many a contemporary design may make it photograph well!

5. New from existing: Emergent structure combines what is already there into new form.

[Leitner diagram: step 5]

As in the development of an embryo, or the successive design improvements of a computer chip, a functionally complex system evolves through cumulative steps, changing, improving, and growing more complex, thus acquiring more advanced capabilities. We cannot emphasize enough that designing from evolving wholeness will introduce features–asymmetries, symmetries, connections, new scales–that are inconceivable within an assembly approach to design.
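Read together, the five steps form a single loop: propose a small change where the design is weakest, test it on a model, keep it if coherence improves, and undo it if not. Here is a toy rendering of that loop in Python (our own illustration, not Alexander’s code). The “design” is reduced to a count of components per scale, and “coherence” to closeness to the inverse-power distribution discussed earlier; both are assumed stand-ins for real acts of modeling and judgment.

```python
import copy

IDEAL = [27, 9, 3, 1]   # assumed ideal multiplicities for four scales

def coherence(design):
    # Higher is better: negative total deviation from the ideal distribution.
    return -sum(abs(have - want) for have, want in zip(design, IDEAL))

def weakest_scale(design):
    # Step 4, "design from weakness": the scale furthest below its ideal.
    return max(range(len(design)), key=lambda s: IDEAL[s] - design[s])

def evolve(design, n_steps):
    for _ in range(n_steps):              # Step 1: one adaptive step at a time
        trial = copy.copy(design)         # Step 2: reversible -- work on a model
        trial[weakest_scale(trial)] += 1  # Steps 3 and 5: build on what is there
        if coherence(trial) > coherence(design):
            design = trial                # keep the improvement...
        # ...otherwise discard the trial: the "undo" of Step 2
    return design

print(evolve([26, 2, 1, 1], n_steps=10))  # -> [27, 9, 3, 1]
```

Ten small steps carry a lopsided starting state to the balanced distribution, one repair at a time; no single step could have been planned without seeing the state the previous ones produced.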

[Figure: Christopher Alexander, Interior of the Great Hall, Eishin Campus, Tokyo]

If you create a large building all at once, guided by the customary “great flash of genius”, and it is wrong, it is wrong in thousands of different ways. By contrast, a stepwise method of design makes corrections possible by providing a methodology and checks that catch mistakes before they develop into monsters. Even a very modest building is the product of a very large number of design decisions. It is mathematically impossible to make all those decisions simultaneously. The “soul” of a building, be it a modest or great building, is due to emergent properties, which cannot possibly be conceived at the beginning of the design process.

[Figure: Christopher Alexander, College Building, Eishin Campus, Tokyo]

What are some long-term positive influences of adopting this step-wise and scale-focused design technique? Consider the largest scales of the system, and the even larger scales external to the system’s boundaries: these, of course, are the environment–the physical context in which a system or building is situated. One critical failing of contemporary architecture and urbanism is that they ignore context, focusing instead on creating extraordinarily noticeable objects (which, as we have discussed elsewhere, is a form of product marketing). When designers learn to apply the method we have described, context returns as an essential part of adaptive design.

These observations suggest a critical weakness in the way architecture is created today. Many architects do express concern that people live comfortably in the spaces they create, making sure their buildings address the contexts and culture of their sites, putting in gardens where people can come and relax, and so on. But those noble statements are belied by the actual design methodology–typically a rigid process that does not allow the kind of small-scale adaptivity we describe, but instead rewards the creation of visually attention-getting, stylized objects. These standout objects are typically drawn in the studio, isolated from the whole context of which they must become part. Such imposed insertions into the urban fabric simply cannot have evolved adaptively. Their geometry, detached from nature and living structure and willfully rejecting wholeness, is dead on arrival. Neither delirious praise nor prestigious awards can magically imbue life into such objects.

Architects today aspire to use computers for design beyond their current application as graphic design tools. This is a realizable goal, but any success in achieving adaptive computer design depends entirely upon the rules programmed into the software. At present, design software suffers from the limitations of mechanical and compositional design–that is, design by assembly–which is why, when trying to generate a design with human qualities, one has to fight against the software’s own preferences. It is doubtless only a matter of time before more co-adaptive design software becomes available, but software companies have to see client demand before they will invest in bringing such software to market. This is an exciting new frontier for a new human-centered, life-centered technology. And much is riding on it.

Michael Mehaffy is an urbanist and critical thinker in complexity and the built environment. He is a practicing planner and builder, known for his many projects as well as his writings. He has been a close associate of the architect and software pioneer Christopher Alexander. Currently he is a Sir David Anderson Fellow at the University of Strathclyde in Glasgow; a Visiting Faculty Associate at Arizona State University; a Research Associate with the Center for Environmental Structure, Christopher Alexander’s research center founded in 1967; and a strategic consultant on international projects in Europe, North America, and South America.

Nikos A. Salingaros is a mathematician and polymath known for his work on urban theory, architectural theory, complexity theory, and design philosophy. He has been a close collaborator of the architect and computer software pioneer Christopher Alexander. Salingaros published substantive research on algebras, mathematical physics, electromagnetic fields, and thermonuclear fusion before turning his attention to architecture and urbanism. He is still Professor of Mathematics at the University of Texas at San Antonio, and is also on the architecture faculties of universities in Italy, Mexico, and The Netherlands.

