Field Notes

Recently, a prominent AEC-industry software company executive (who will remain unnamed) used a peculiar metaphor during a meeting with designers at our office.  While discussing the role of simulation in the design process – including the pros and cons of certain tools  – the executive said: “Look, we just make the weapons, but it’s up to you to wage the war.”  We all nodded our heads politely, mulling over the software-vendor-as-weapons-manufacturer analogy, nobody wanting to bring up the twisted economics of the defense industry profiting from war (awkward!).

This was a telling glimpse into this exec’s perception of his customer base.  I don’t tend to think in militaristic terms, but, being a good soldier (notice I didn’t say “good sport”) who loves metaphors, I thought I’d give it a try.  So…designers are like soldiers waging a war using Building Information Models?  Then who is the enemy?  Is it our competition?  The market?  The value engineer?  The client?  What is at stake for architects fighting in this war?  Is it creative autonomy?  Competitive advantage?  Sustainability?  Profit?  I don’t think any of these make sense.  If anything, what is happening in architecture right now – especially with regard to Design and Computation – is more like an insurgency using guerrilla tactics.  Maybe the conflict is not against some imagined external enemy, but a fight to preserve a generalist discipline threatened by hyper-specialization.  Although dramatic, this is real.

Military metaphors aside, designers are beginning to craft their own digital tools: just take a look at any of the Grasshopper groups for a representative sample.  It begins with using existing software and hardware in new and interesting ways, plugging one end in the wrong way to see what happens.  In terms of the design process, this is the sticks-and-stones phase of tinkering, figuring out just enough about how technology ticks without breaking it.  The next step comes when designers adapt this new configuration to an existing problem unforeseen by the software engineer.  This takes enough knowledge of software architecture to link the flows of information directly to the actual design.  The next time around, the designer rewrites the software to generalize the solution to a whole class of similar problems.  At this point, the building architect plays software architect, with a heavy dose of satisficing (a blend of satisfy and suffice).  The step after that is anyone’s guess, but by then the architect’s relationship to technology has fundamentally changed from passive to active.
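
To make that progression concrete, here is a minimal sketch of the generalization step, in the spirit of a designer’s scripting exercise.  The problem (sizing horizontal louvers for a solar cutoff angle) and every name in it are hypothetical illustrations, not a real project or tool:

```python
# Hypothetical example: a one-off louver calculation generalized into a
# reusable function, the way a designer's throwaway script matures into a tool.
import math

def louver_depth(spacing_m: float, cutoff_altitude_deg: float) -> float:
    """Depth a horizontal louver needs in order to block direct sun
    above a given solar altitude, for a given vertical spacing."""
    return spacing_m / math.tan(math.radians(cutoff_altitude_deg))

# First pass: solve the single case at hand...
print(f"one case: {louver_depth(0.45, 60):.3f} m")

# ...next time around: generalize to a whole class of similar problems.
for altitude in (35, 45, 55, 65):
    print(f"cutoff {altitude} deg -> depth {louver_depth(0.45, altitude):.3f} m")
```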

I’m not an architect.  My job title is Design Technologist, a term borrowed from the discipline of Interaction Design, which is just as well, because that’s more my background: the field of Human-Computer Interaction.  I’ve been working in architecture for three years and I still don’t know how architecture happens.  This should come as no surprise to anyone who has been practicing architecture for a while.  I have learned that the harsh reality of actual projects is beautifully complex and unbelievably rigid.  The highs and lows of creativity and compromise have become familiar.  In a discipline where it’s supposed to take a quarter century to hit one’s stride, I would have a ways to go if my aim were to design buildings.  That’s not my aim.  If I were to stick it out, it would be a long time before I could adequately describe the processes through which a building is designed and delivered…certainly with enough detail to form reasonable causal chains that would satisfy my own curiosity.  (Fortunately for me, that’s not my job and, when I want to know, I just ask George.)  My job is not the what and why of the design itself, but rather the tools and processes (the how) through which the what and why are realized.  And, like it or not, those tools and processes are changing rapidly.


I have a taste for the mystical explanations that architects sometimes employ to account for how and why some particular feature of a building is the way it is.  Although we spend a lot of time on our blog attempting to demystify the details of Design Technology, to be honest I prefer the mystical explanations (“our structural engineer has magical powers”) to the mundane (“energy code made me do it”).  But while I’m a dilettante at heart, the reality of my job is translating the arcane phrases of designers into something “actionable” – and a big part of that is pulling certain granular elements of the design process back from the brink of mass specialization.

As a case in point, consider sustainability.  All good, hard-working, earnest architects are profoundly concerned about sustainability – a nebulous endeavor that will hopefully, someday, be as nerve-wracking as worrying about fire codes or accessibility standards.   Unfortunately, today is not that day.  My role with regard to sustainability centers on computational simulation which, in the circular language of architectural discourse, is often thought of as performance analysis for the sake of “increased sustainability.”  Operationally, this is a perfectly valid way of approaching simulation and we frequently talk about it that way, even though it misses a larger point.  In relationship to architectural design, computational simulation of physical systems is best thought of as a method of compressing time in order to perceive the effects of change.  Simply put: simulation is about performance over time; it allows designers to press the fast-forward button and watch change happen.  This may be the play of light across a surface during a day or the thermal variations over the course of a month, a year, or a decade.
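
To make the time-compression idea concrete, here is a minimal sketch that fast-forwards the sun across a single day.  The latitude and date are assumed values (Seattle, June 21), and the formulas are the standard declination and hour-angle approximation, not the engine of any particular tool:

```python
# Minimal sketch of "compressing time": step the sun across a day in
# seconds of compute rather than hours of waiting and watching.
import math

def solar_altitude(lat_deg: float, day_of_year: int, solar_hour: float) -> float:
    """Approximate solar altitude in degrees above the horizon
    (Cooper's declination formula plus the hour-angle relation)."""
    decl = 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, d, h = map(math.radians, (lat_deg, decl, hour_angle))
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)))

# Fast-forward a June day in Seattle (lat ~47.6 N), one sample per hour.
for hour in range(5, 22):
    print(f"{hour:02d}:00  altitude {solar_altitude(47.6, 172, hour):6.1f} deg")
```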

Before computational simulation, over the course of a 25-year career, most architects had the opportunity to test their intuition against maybe two to four built projects per year (if they were lucky) – a woefully sparse statistical sample set considering the time scale.  In the bygone era when over-engineering to “play it safe” was viable, heuristics might have worked…but not today.  We no longer have a quarter century to spare.  Rules of thumb just won’t do.  Today, making a good guess does not pass muster, and it shouldn’t.  Fortunately, computers are getting faster and cheaper, and viable simulation tools are beginning (just beginning) to emerge.  But there is something as important as the designer’s intuition or the predictive accuracy of a simulation engine: the design of the experiment.

Experimental design is the single most important (and most overlooked) aspect of architectural simulation that designers can learn.  I love the fact that architects are always jumping at the chance to design something besides a building: chairs, decorative screens, websites, you name it.  Experimental design should be no different.  Learning to think like a steely-eyed empiricist may seem cold-blooded, but it’s crucial to good simulation.  Looking at color-coded thermal gradients or false-color images on a computer screen is certainly not the same thing as experiencing a cold draft of air or being blinded by glare.  Clarity and accuracy are important, but illustrating causality is critical.  Developing an intuition about which configurations to test and under what conditions takes time and practice…and has little to do with the underlying technology.  Think of this as the “good empiricist” approach to simulation; it is very much akin to doing good science.
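
As an illustration of what a controlled setup might look like, here is a sketch of a simple full-factorial experiment with a baseline for comparison.  The variables, their ranges, and the toy scoring function are all assumptions standing in for a real simulation engine:

```python
# The "good empiricist" setup: a controlled sweep over design variables
# against a baseline, rather than a grab-bag of pet options.
from itertools import product

def simulate(glazing_ratio: float, overhang_m: float) -> float:
    """Placeholder for a daylight/energy simulation; returns a score.
    (Assumed toy model: more glass helps daylight, overhangs temper it.)"""
    return glazing_ratio * 100 - (glazing_ratio - 0.4 * overhang_m) ** 2 * 50

glazing = [0.3, 0.4, 0.5, 0.6]      # tested factor 1: window-to-wall ratio
overhang = [0.0, 0.5, 1.0]          # tested factor 2: overhang depth (m)
baseline = simulate(0.4, 0.0)       # control case for comparison

for g, o in product(glazing, overhang):   # full-factorial design
    delta = simulate(g, o) - baseline
    print(f"glazing {g:.1f}  overhang {o:.1f} m  delta {delta:+6.1f}")
```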

Eventually, the “good empiricist” gives way to the “pragmatic engineer.”  This is where computation really comes in.  At some point, someone will say: “What is the Most Advanced Yet Acceptable solution?” (Though it might come out more like: “Just show me the best option!”).  This is a problem of design optimization.  When done well, optimization can allow us to test and temper our intuitions based upon many possible futures…not just one.   In that sense, simulation is about deciding which possible futures are worth sustaining.  That sounds daunting…and it would be if we were stuck with the cardinal triangulation of “three options,” but fortunately, we’re not.
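
A hedged sketch of what escaping the “three options” default might look like: sample many candidate designs and keep the best one that clears an acceptability threshold.  The objective, the constraint, and the parameter names are illustrative assumptions, not a prescribed method:

```python
# Sketch of testing many possible futures, not three: random sampling
# over a design space, filtered by an acceptability constraint (the
# "Most Advanced Yet Acceptable" idea), then ranked by performance.
import random

def performance(depth: float, ratio: float) -> float:
    """Stand-in objective, e.g. a combined energy/daylight score."""
    return -(depth - 0.8) ** 2 - (ratio - 0.45) ** 2

def acceptable(depth: float, ratio: float) -> bool:
    """Stand-in constraint, e.g. a cost cap or code minimum."""
    return depth <= 1.2 and 0.3 <= ratio <= 0.6

random.seed(42)
candidates = [(random.uniform(0.0, 1.5), random.uniform(0.2, 0.7))
              for _ in range(500)]                 # 500 futures, not 3
feasible = [c for c in candidates if acceptable(*c)]
best = max(feasible, key=lambda c: performance(*c))
print(f"best of {len(feasible)} acceptable options: "
      f"depth {best[0]:.2f} m, ratio {best[1]:.2f}")
```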

This brings us to the topic of designing with code.  For some, parametric and generative design have become an almost religious or moral imperative.  We are assured that the algorithm is the last bastion preventing architecture from becoming an anachronistic cottage industry.  As with any burgeoning orthodoxy, there will be those who would pull the evangelicals back from their extreme positions.  And there are certainly others – I would probably lump myself in this category – who see the current arc of parametrics as trending toward “managing complexity”…as banal or incisive as that may prove to be.

There is little doubt that computational approaches are extremely powerful and those who use these methods skillfully have an astounding amount of agency.  The ability to model the relationships that drive a design (rather than a single option) dramatically increases our ability to adapt to change.  Additionally, by linking up parametric models with iterative performance simulation, designers become the drivers of the optimization process through which high-performance design is achieved …all without sacrificing aesthetic control.  But what we are discovering is that this trend has broader implications than rationalizing doubly-curved geometry or managing large data sets for the sake of sustainability.
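
Here is a sketch of that feedback loop itself: regenerate the model from its parameters, simulate, nudge the parameters, repeat.  The regenerate and simulate functions below are assumed stand-ins; in practice they would be a real parametric model and a real performance engine:

```python
# Illustrative parametric-model-plus-simulation loop: simple iterative
# refinement that keeps any parameter change that improves the score.
import random

def regenerate(params: dict) -> dict:
    """Stand-in for rebuilding geometry from the parametric model."""
    return {"fin_depth": params["fin_depth"], "fin_count": params["fin_count"]}

def simulate(model: dict) -> float:
    """Stand-in performance simulation (higher is better)."""
    return -(model["fin_depth"] - 0.6) ** 2 \
           - (model["fin_count"] - 24) ** 2 / 100

params = {"fin_depth": 0.2, "fin_count": 10}
score = simulate(regenerate(params))
random.seed(7)
for _ in range(200):                       # the designer-driven loop
    trial = {"fin_depth": params["fin_depth"] + random.uniform(-0.05, 0.05),
             "fin_count": max(1, params["fin_count"] + random.choice((-1, 0, 1)))}
    trial_score = simulate(regenerate(trial))
    if trial_score > score:                # keep improvements only
        params, score = trial, trial_score
print(params, round(score, 4))
```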

The cultural and technical context in which architecture now operates is changing at a rate that fundamentally shifts what buildings are about.  This is not a new concept or realization, but this shift is an opportunity for all designers – not just architects.  I have a suspicion that most people don’t care about architectural design in the same way they care about the design of the new iPhone.  My sense is that most non-architects feel rather let down by contemporary building design.  The fear is that architecture has become less intimate, less human.  Like philosophy, architecture is glacially slow – while subject to the unrealistic expectation of being both the vanguard and the context of all great cultural endeavors.  (Would you expect the design of the new Apple HQ to improve the design of Apple products?  Or would you expect it to be more like an Apple product?)  The deck is stacked against architecture.  Bad buildings are something we talk about, a diversion to consume.  Great architecture is a slow burn: most buildings take time to exonerate themselves.

In the end, it is important to remember that technology can only take us so far.  Architecture has a way – at its best moments – of making the technology that it embodies disappear.   At that moment, we forget all the technical considerations – even if only for a split second – and only the raw human experience remains.  The rest of the time, it’s up to the architects to escape the trenches of specialization and mass-standardization.  Technology can help or hinder.

I began by saying that I am not an architect.  As fascinated as I am with the built environment, what I want to design are the interfaces we use to design.  This sort of meta-design requires a rather skewed perspective: thinking about how to design for design is like standing in a hall of mirrors: distortion in all directions.  It’s difficult to say what constitutes the object of such an endeavor.  You can wave your hands and point to what is really a design act, but infinite recursion is always just one step away as behaviors begin to mirror each other.

If you point to what you think is an act of design, you realize there’s a dizzying number of smaller acts, decisions, regulations, intuitive leaps, misinterpreted intentions, hedged bets (the list goes on and on) that constitute the entire performance of design.  In the fun-house of this intellectual exercise, it’s only through perspectival shifts – through movement – that one begins to see the angles where actual design attempts to flee the baroque house of contemporary architectural practice.  I say “baroque” because, like it or not, architecture is more complex today than ever.  In response, the acts of designing, crafting, and making have become increasingly technical, increasingly closed, increasingly florid.  The true challenge is to put design computing on equal footing with traditional methods: sketching, model making, and fluid discussion.


Looking back ten years from now, we will see contemporary Design Technology as if held together with duct tape and dental floss.  With a bit of historical perspective, our representations will seem beautifully fractured.  We will realize that the unified objects we now call Building Information Models were actually tangled messes of heterogeneous, largely trade-centric representations and implicit information.  It will probably feel as though the means and methods by which we translated those representations into actual buildings were incredibly slow.  We’ll reflect on how the best and most accessible parametric software was still procedural in nature, and how the use of generative algorithms was considered exotic.  We may even become nostalgic for a time when there was a clearer separation between buildings, information, and the dynamism of moving bodies.  But we don’t have the luxury of that perspective yet: we can only imagine…and begin to build those technologies.

I believe that while our expectations of technology have radically changed, our expectations of architecture have not kept pace.  Better tools are only part of the solution.

I’ve gathered that this desire to design with the actual substrate of technology stems from excitement and fear (which probably amount to the same thing) that our experience of the built environment is changing and that architects are not the only ones with skin in the game.  Human-Computer Interaction designers are increasingly interested in ubiquitous and spatial computing and are busy building the tools they need.  Architects are actively experimenting with interactive systems.  Disciplinary overlaps are growing larger.  The entanglement of service-oriented and product-oriented business models in this widening and increasingly networked world will only get tighter.  The broader implication of this trend is that architectural practices will have to rethink intellectual property and its relationship to their work, retool with the necessary skills, and then fit those skills to firm culture and design agendas.  No small task…but that’s the subject for another post in this series.

Dan Belcher is a Design Technologist at LMN Architects and a founding member of LMNts.

This post is the second in a series on one Seattle architecture firm’s response to the changing landscape of Design Computing.  For the background story, see George Shaw’s Re-Upping on Design Technology.
