The most moving entry in FORM+CODE is Gerhard Mantz’s landscapes. Mantz’s landscapes are rendered, 3D-modeled topologies that represent different emotional states and use subtle shifts in light and atmosphere to craft an external representation of an internal landscape. His work is stunningly gorgeous and incredibly detailed.
while writing and teaching about design computation, I have a tendency to give long, detailed explanations that still leave students and readers asking, “yeah… but what is it?”
it’s difficult to explain a design technique that hasn’t been utilized on a large scale yet. it’d be much easier to explain if some design computation projects were widely known, but to this point they’re not. dvvd’s spiral bridge is an interesting example, though, because it’s very computational but still very expressive. it’s computational because it creates a form that is the result of a mathematical process- the square brackets that surround the bridge are each rotated at a progressively different angle, so there’s an illusion of a twisted form. the rotated squares are then connected by the structural tubes that give the bridge its fluidity. if there’s one piece of math in the angle of the brackets, then there’s another in the connection of the tubes between the brackets.
so the design relies heavily on computation to solve both the form generation and the framing of that form- but the result is a fluid, sculptural piece that is unique and compelling.
as architecture evolves into the post-bilbao economy, form can no longer justify itself. the argument that dynamic form will act as a catalyst for regional growth has been disproven by years of an atrophying tourist market and crippled by diminished construction budgets. the current climate has legions of young architects struggling to find a raison d’être for the complex curves and intricate surfaces they have been trained to produce. websites like grasshopper3d and suckerPUNCH feature parades of exercises in technique, but the proposals are completely divorced from our current condition. the few designs that do reference ideas of sustainability, economy, or performance typically seem to be post-rationalizations of a design technique, rather than rigorous investigations of how form could become a performative element.
the investigations into digital technique over the last two decades have provided architects and designers with incredibly productive tools- but little basis in how and when to use them. if design computation strategies are to be used to produce performative form, then how form affects performance needs to be interrogated more seriously. this post is the beginning of an index of how formal strategies can support passive environmental strategies.
when most people think of solar power, the first thing that comes to mind is photovoltaic panels. while pv and solar thermal panels are necessary for many architectural projects- sustainably providing power for refrigerators, lights, and televisions- they should not be driving the discussion on sustainability. passive solar strategies allow a building to be heated or cooled without the need for any mechanized systems. while building orientation is the most important factor in a building’s passive solar performance, shading systems can be optimized from regional information to provide shading when necessary and allow light in when desired. foster + partners’ beijing airport is an example of a shading system designed to allow direct light inside during winter and to block it during summer without any moving parts. in this case, the components were only used in certain locations, but it is easy to imagine a complete field condition of similar components or an entire architectural logic generated from them.
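the geometry behind a fixed shading device like this comes down to noon-altitude trigonometry. a minimal sketch- the function names are my own, and this is the textbook overhang calculation, not foster + partners’ actual workflow:

```python
import math

def overhang_depth(window_height, summer_noon_altitude_deg):
    # depth of a fixed horizontal overhang that just shades the full
    # height of a south-facing window at the summer noon sun altitude
    return window_height / math.tan(math.radians(summer_noon_altitude_deg))

def winter_shadow(depth, winter_noon_altitude_deg):
    # how far down the wall the overhang's shadow reaches at the lower
    # winter sun angle- everything below this line still gets direct light
    return depth * math.tan(math.radians(winter_noon_altitude_deg))
```

for a 2 m window at roughly beijing’s latitude (summer noon sun near 73°, winter near 27°), the overhang only needs to be about 0.6 m deep, and in winter its shadow covers only the top 0.3 m of the window- the rest gets direct sun, with no moving parts.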
like solar power, geothermal energy is generally thought of as a means of producing electricity. but in projects as varied as peter zumthor’s therme vals and maryann thompson’s geothermal house, architects have shown that geothermal energy can be beneficial passively as well. frequently architectural projects use geothermal energy to provide the heat necessary for radiant floor heating, but there could be formal conduits for a passive system. though toyo ito’s sendai mediatheque does not utilize a geothermal system, its twisting tubes and floor-penetrating shafts could just as easily have been informed by a passive geothermal strategy as by a structural and conceptual one. as more climates are found to be inappropriate for passive solar alone, there will be more research into passive geothermal techniques for providing climate control.
natural ventilation is probably the oldest architectural feature- whether it was intended or not. there are two main strategies: using a stack effect, like the downdraft cool towers at the zion visitor center, and using natural wind, like the ventilation cowls at BedZED. both strategies are relatively new to contemporary architecture, but raise strong formal questions. how does wind penetrate the building in a way that other elements don’t? once inside, what formal qualities can be deployed to sustain airflow? how does natural wind reach multiple levels? computational design strategies provide a unique opportunity to produce solutions that perform at a higher level than ever before- the investigations just need to begin.
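the stack effect itself is simple enough to estimate with the standard buoyancy formula, which is exactly the kind of calculation a computational model could run across thousands of shaft variations. a rough sketch- not a CFD result, and the discharge coefficient here is an assumed typical value:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def stack_flow(area_m2, height_m, t_inside_c, t_outside_c, cd=0.6):
    # volumetric airflow (m^3/s) driven by buoyancy through a vertical
    # shaft: Q = Cd * A * sqrt(2 * g * H * dT / T_inside)
    t_inside_k = t_inside_c + 273.15
    dt = t_inside_c - t_outside_c
    if dt <= 0:
        return 0.0  # no upward buoyancy without an indoor-outdoor delta
    return cd * area_m2 * math.sqrt(2 * G * height_m * dt / t_inside_k)
```

a 10 m shaft with a 1 m² opening and a 6 °C indoor-outdoor difference moves roughly 1.2 m³/s- and the formula makes the formal levers explicit: flow scales with opening area but only with the square root of height and temperature difference.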
hydrodynamics is, without a doubt, the item on this list that is the least investigated by contemporary architecture. as with the others posted here, projects utilizing hydrodynamics date back to some of the earliest architectural constructs- such as band-e kaisar in iran. the most compelling example of hydrodynamics in contemporary architecture is yusuke obuchi’s wave garden. this elegant design for an ocean-based power station draws energy from the friction caused by the natural movement of waves under an amorphous structure. obuchi’s design is compellingly simple- a conceptually refined project that has radical insights into how we see construction and how it relates to the natural world around us.
in my computational design classes, one of the first questions that is asked is “what is computational design?” the definition is not that elusive: computational design is simply using computation as an approach to solve design problems. the follow-up question is a much more difficult one to answer: “why would you want to do that?”
on a basic level, computational design harnesses the processing power of computers to perform millions of mathematical computations and create multiple outcomes. these computations can drive anything: form generation, manipulation, or reduction. but what separates this method from any other technique is that the result could only have been created with the aid of a computer- there is no way these designs could have been sketched or sculpted by the creator alone.
sweet… but why is that a big deal? well, it is and it isn’t. while many think design computation and its products are worthy on their own merit, what is incredibly compelling about computational design is its ability to increase design performance across many disciplines. because design computation is built on computations, those computations can operate on data- any data. so if the data is relevant to how well the design performs, these techniques have the potential to elevate current design practices to a much higher level.
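at its core this is a generate-and-evaluate loop: produce many candidates, score each against the data, keep the best. a minimal sketch- the parameters (a facade’s opening ratio and fin depth) and the scoring weights are hypothetical stand-ins for whatever data actually drives a real project:

```python
import random

def generate_candidates(n, seed=0):
    # each candidate is a parameter set: (opening ratio, fin depth)
    rng = random.Random(seed)
    return [(rng.uniform(0.1, 0.9), rng.uniform(0.0, 1.0)) for _ in range(n)]

def score(candidate):
    # hypothetical objective trading daylight (favors larger openings)
    # against heat gain (favors deeper fins)- in practice this is where
    # climate, program, or structural data would plug in
    opening, fin_depth = candidate
    return opening * 0.6 + fin_depth * 0.4

def best_of(n, seed=0):
    # evaluate every candidate and return the top performer
    return max(generate_candidates(n, seed), key=score)
```

a designer could never sketch a thousand variations by hand, but a computer evaluates them instantly- and swapping the score function swaps the entire design agenda.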
the highest potential for an increase in performance is in the field of sustainable architecture. the bible of sustainable design is brown + dekay’s sun, wind & light, an absolutely invaluable resource for a designer- it breaks down very complex concepts about passive heating + cooling, solar shading, and thermodynamics so that they are easily understood and implemented. the problem with the text is that most of the calculations are at the scale of the entire building, large moves that deal with the project as a whole. because the calculations are simple enough to be done by hand, the results are broad enough to be reductive and simplistic compared to what a computer could do.
design computation offers the possibility of creating solutions at a much finer grain, and of generating building massings from an algorithmic code. architectural strategies for sustainability affect space at a very small scale, so the solutions we create for them must be able to operate at a small scale as well. as solutions are generated, more and more input can be added to the equations, producing a more finely-tuned instrument of a building.
as designers become increasingly post-technological, there will be less emphasis on technique and more emphasis on how that technique can be used to increase performance. the shock and awe of controlled chaos will eventually fade to serve the more essential needs of comfort, light, and enclosure. design computation is only a technique, but a technique that uses contemporary tools to solve contemporary problems like few other disciplines can.
some resources for computational design:
grasshopper – a visual scripting plugin for the 3D modeler rhino
nodeBox – a parametric 2D design tool
ecotect – autodesk’s environmental analysis tool
design reform – tutorials on using grasshopper and revit
atelier nGai – ted ngai’s scripts and resources for grasshopper and ecotect
the proving ground – nathan miller’s scripts and resources for grasshopper
utos – thomas grabner’s + ursula frick’s scripts and resources for grasshopper
LIFT architects – LIFT’s blog, resources for grasshopper, and project updates
generator.x – marius watz’s + atle barcley’s blog on computational design
FORM + CODE – the official website of the book, with computational design code examples
no one could do surface articulation like doc bailey– mostly because he never articulated a surface. all of his work was generated by code written for programs he created, which replicated fractals and patterns that modified themselves to be readable at a variety of scales. some of the best examples of this are in the visual effects work he did for the remake of solaris, where he was able to emulate the vastness of space through such a massive range of scales. if a larger form had minute articulation, then it would read as something on the scale of a nebula or a star. doc’s work is absolutely brilliant in its ability to convey a sense of reality- by illustrating the range of scales we see all around us, his renderings seem more like snapshots from a parallel universe than computer-generated images.
unfortunately, it’s incredibly difficult to have architecture operate in a similar way. architects like hernan diaz alonso have been successful- they simulate genetic growth with architectural cells that replicate themselves in a mannered behavior, creating opportunities for program and enclosure. with this technique, program and context are typically subservient to the formal logic generated through these genetic processes, and they are positioned wherever the formal conditions allow.
typically architects work in more of a top-down fashion- using program, context, and architectural effect to generate a mass that is then articulated structurally and architecturally to relate the project to a human scale. there are issues with this approach- articulation is a time-consuming part of the process that starts at the very end, leaving little time to actually execute. it is also difficult to quickly find a method of articulation that illustrates the architectural logics that led the project to where it is.
this post is to act as a directory of different devices and techniques that allow a greater variability of skin articulation, so that architects can develop skin systems that are in closer sync with their larger architectural concept.
the flow along surface command is native to rhino, and allows an array of surfaces to morph onto another surface. puerto rico’s pontifical catholic university has an interesting digital workshop blog with a great how-to on flowAlongSrf and other techniques.
one of the most basic and easiest ways to pattern a surface is with andrew kudless’ honeycomb script. it’s quick, easy, and will divide your surface into a honeycomb pattern of extruded fins. while there are features like customizable depth and u/v divisions, you are still locked into a honeycomb pattern.
supermanoeuvre has created an interesting script that uses attractors to change the density of the pattern. there are other interesting possibilities with this technique, and it is comparatively easy to assemble a physical model with a laser cutter or even by printing out a paper template.
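both ideas- the honeycomb division and the attractor-driven variation- reduce to a few lines of geometry. a sketch in python, where the grid spacing and falloff values are my own assumptions rather than kudless’ or supermanoeuvre’s actual code:

```python
import math

def hex_centers(nu, nv, r=1.0):
    # center points of a nu x nv honeycomb grid of flat-top hexagons:
    # columns step by 1.5r, rows by sqrt(3)r, odd columns offset half a row
    dy = math.sqrt(3) * r
    return [(i * 1.5 * r, j * dy + (dy / 2 if i % 2 else 0.0))
            for i in range(nu) for j in range(nv)]

def attractor_depth(pt, attractor, max_depth=2.0, falloff=10.0):
    # extrusion depth of a cell shrinks linearly with distance from an
    # attractor point, reaching zero at the falloff radius
    d = math.hypot(pt[0] - attractor[0], pt[1] - attractor[1])
    return max_depth * max(0.0, 1.0 - d / falloff)
```

feeding each hex center through `attractor_depth` gives a field of fin depths that swells around the attractor and fades out- move the attractor (or drive it from data) and the whole pattern re-tunes itself.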
_point set reconstruction
point set reconstruction is an older toolset based on early delaunay and voronoi scripts by david rutten and others. it is dependent on a set of points- these points create curves, and those curves can be projected onto a surface. while this is a quick way to generate curves, the points need to be created first- either by scripting an intelligent array or by manually placing them. the paneling tools can help create them.
creating points on a plane = easy
creating points on a surface = hard
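the voronoi idea itself can be sketched with brute force: given the seed points, classify a grid of samples by whichever seed is nearest. this stands in for the rutten-era scripts mentioned above, not their actual code:

```python
def nearest_seed(sample, seeds):
    # index of the seed whose voronoi cell contains the sample point
    return min(range(len(seeds)),
               key=lambda i: (sample[0] - seeds[i][0]) ** 2
                           + (sample[1] - seeds[i][1]) ** 2)

def voronoi_raster(seeds, nx, ny, w, h):
    # classify an nx x ny grid of sample points on a w x h plane by
    # nearest seed- a brute-force raster of the voronoi diagram
    return [[nearest_seed(((i + 0.5) * w / nx, (j + 0.5) * h / ny), seeds)
             for i in range(nx)] for j in range(ny)]
```

the resulting index grid is exactly the kind of “intelligent array” the curves depend on- once the cell membership is known, cell boundaries can be traced into curves and projected onto a surface.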
the paneling tools plugin is a great way to create a variety of patterns on your surface that you can then articulate with a voronoi, delaunay, or other pattern.
while it’s very adept at dividing surfaces, the actual application of panels is a little erratic. grasshopper can probably panel better, but it’s a little trickier to mimic the patterns you can get out of the paneling tools.
the paneling tools come with a very comprehensive tutorial pdf, and there’s also a good how-to on the ea-pr site.
ted ngai has created a plugin with a bunch of incredibly helpful tools- including the pipe all curves command and the extrude surface normal command. both allow the user to create geometry from curves quickly and easily.
a lot of people talk about rhino scripting and grasshopper as if they were separate- they’re not. anything you can script, you can do in grasshopper. grasshopper began as a scripting teaching tool developed by david rutten to act as a realtime diagram of how a script was processing information. when bob mcneel saw it, he quickly realized the potential and put rutten in charge of developing it as a tool for distribution. it’s still technically only in beta, but it has tens of thousands of users worldwide.
one of the best ways to understand both how to use gh and how powerful it is, is to try the patterning with attractor points demo on designreform. this demo can be tweaked to create forms off of a surface instead of a plane, creating surface articulation.
if you’re interested in producing a skin system that is more three-dimensional, the 3d voronoi script based on qhull is fairly useful. dimitrie stefanescu has created a nice gh script that will take points in 3d space and generate a formal diagram from them. like the 2d voronoi system, it is still fairly limited in its output, but it can start to articulate ideas in 3d.
_component population on mesh
ted ngai has created an interesting gh script that allows components to be placed on a mesh based on a color value. this process was designed to optimize a skin system from ecotect data, but it could use any data set. the only criterion is that there is a set of components that can be arrayed from a set of information. because there is no limit to the geometry defining the components, this is an incredibly flexible system that allows a large variety of patterning.
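the core of the technique is just a lookup: map each face’s normalized data value to one component from a palette. a sketch of that mapping- not ngai’s actual script, and the component names are placeholders:

```python
def pick_components(values, components):
    # choose one component per mesh face from its normalized data value
    # (e.g. an ecotect color map rescaled to 0-1); a value of exactly
    # 1.0 is clamped into the last bin
    n = len(components)
    return [components[min(int(v * n), n - 1)] for v in values]
```

with a two-component palette, faces below 0.5 get one component and faces above get the other- but the palette can hold any number of geometries, which is what makes the system so flexible.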
ngai’s script is an appropriate one to end on, because I believe this is the direction computational design is heading. as kazys varnelis wrote, “just because you can design a blob, why would you want to? more importantly, just why would you want to build one?” if we are creating complexity for complexity’s sake, the invariable question is “why?”. are william macdonald’s forms a more potent source of architectural affect than james turrell’s skyspaces? if they are not, then why create complex forms instead of simple boxes without roofs?
for computational design to continue to grow where blobchitecture failed, it needs to be more focused on the “why”. if computational design can create skin and structural systems that respond to environmental, climatic, or other data, then why wouldn’t it? these data sources will not only add legitimacy through better building performance, but will also provide the friction that engenders great design.
the reality is that there is a vast amount of enthusiasm for this line of inquiry, but there is little concern for the economic and political realities that govern architecture in its built state. if more people are to incorporate these design tactics into their work, they must at least acknowledge these realities and work to develop an architecture that can respond to them.
local code’s entry for the WPA 2.0 competition is an incredible use of grasshopper and ArcGIS to locate publicly owned abandoned sites in major cities across the US and design landscape interventions that respond to solar, thermal, and water issues specific to each site.
it’s an incredible use of grasshopper as an analysis tool and seems to pose the question- if grasshopper can create a design response to environmental data for multiple sites, could it also create a design response to environmental, programmatic, code, structural, and any other data for one site? could this be the dawn of an MVRDV-esque software that actually works?
more specific analysis:
this is my first post from my google phone, so please forgive anything unusual or unsightly…
I came across an interesting article on aecbytes.com about using ruby to generate bezier splines in sketchup. Curves and scripting in sketchup? This poses an interesting challenge to rhino, especially when coupled with plugins like IES, which gives sketchup BIM-esque functionality.
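the math underneath that article is portable to any scripting environment. a sketch of de casteljau’s algorithm- the standard way to evaluate a bezier curve- in python rather than the article’s ruby:

```python
def bezier_point(ctrl, t):
    # de casteljau's algorithm: repeatedly interpolate between adjacent
    # control points at parameter t until a single point remains-
    # that point lies on the bezier curve
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [((1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1])
               for a, b in zip(pts, pts[1:])]
    return pts[0]
```

sampling t from 0 to 1 traces the curve: the endpoints of the control polygon are hit exactly, and everything in between is pulled smoothly toward the interior control points.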
last week mcneel and associates launched their new grasshopper specific website. the site not only features the typical tutorials and info pages, but it also has a very robust social networking element. the site has a forum, user webpages and blogs, and user generated photo and video galleries.
the emphasis on user generated content is an interesting move- and it seems to be working. in a little over 7 days, there have been almost 100 topics on the forum boards, over 320 images, and over 35 videos posted in the gallery… all for software that hasn’t been officially released…
for years it’s been difficult for algorithmic designers to find a resource for scripts. typically you had to wait for david rutten or andrew kudless to post something and then hope it was what you were looking for. a good friend of mine and TOSD, nick pisca, has created a wiki devoted to building an online database of various scripts- BLAST. nick and others have done a very good job seeding the initial site with interesting scripts, and the range of software they cover is striking- everything from maya and rhino to running journal files in revit.
there are several MEL scripting handbooks out there, so why would you buy a book by nick pisca when you probably already have one?
because he knows scripting, he knows architecture, and he knows how to explain both.
1) nick first taught a MEL scripting class at sciarc… while he was a student.
2) nick works at gehry technologies, so you know he’s no slouch.
3) he’s a great guy, he drives a bio-diesel rig, and it’s pretty hard to publish a book on your own.
buy YSYT here.