AA Inter 3

Nathan – WIP
April 21, 2015, 12:13 pm
Filed under: 2013-14

Hi Nannette and Ricardo,

Here is a work in progress of the perspective section. The image shows a preliminary design for the Watchtower and the inverted city above. I will continue to work on the language and clarity of the Augmented Reality city!

At the moment, as I have really been focused on TS work, I haven’t rendered the other perspective yet – but I will post a draft when it is ready. So sorry!



Watchtower_Perspective Section

Visions of Future Humans: Science Fiction and Human Enhancement – free lecture at LSE
February 25, 2015, 3:45 pm
Filed under: 2013-14

Hey all,

Just letting you know there is a free lecture at LSE this Saturday that might be interesting/relevant: here’s the link!


Proteus – Augmented Reality Prototype
December 19, 2014, 4:30 pm
Filed under: 2013-14

Hi Nannette and Ricardo,

Thank you so much for such an exciting first term! Please find my short video & description linked below. Can’t wait for Sri Lanka!

*To watch in HD, please click HD and follow the link to the native Vimeo site.

Proteus, named after the ancient Greek sea god of transfiguration, is an exploration of Augmented Reality (AR) applications in architecture. As a prototype, the model aims to illustrate some of the core ideas of digital augmentation – the idea of reality overlay rather than replacement, and the use of physical machine-readable markers to enable real-time perspective mapping and rendering. The prototype envisions three levels of experience: a universal architecture, which is coherent and has the same experiential qualities for all observers; a datascape, embodying the digitally coded infrastructure that operates the program; and a spatial typology termed the ‘aether’ – a fluid environment that is responsive to user preferences, proposing that augmented space may be mediated at an individual level, much like present-day web browsing.

The project forms part of a larger field of investigation into the impact of AR technologies on issues of land ownership. As our experienced environment is digitised, how will it be legislated and governed? Who will design and construct these worlds? And will inhabitation patterns in future cities become far more nomadic, as users simply seek out appropriate spaces to load their digital environments around them?

The AR shown in the film is real-time; video footage of the physical prototype was streamed through Processing, with marker detection and calibration handled by the NyARToolkit library. An Arduino microcontroller operates servos to determine which AR markers are displayed, with the digital geometry generated either directly in Processing or in Rhino/Grasshopper.
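For anyone curious about what that per-frame mapping boils down to, here is a minimal sketch of the idea – written in Python rather than the actual Processing/Java pipeline, with invented data structures standing in for what NyARToolkit reports:

```python
def apply_pose(vertices, pose):
    """Transform model-space (x, y, z) vertices by a 4x4 pose matrix
    (row-major list of lists), standing in for the marker transform
    a tracking library estimates each frame."""
    placed = []
    for x, y, z in vertices:
        p = (x, y, z, 1.0)
        placed.append(tuple(
            sum(pose[row][col] * p[col] for col in range(4))
            for row in range(3)
        ))
    return placed

def render_frame(detected, models):
    """For each detected marker id, position that marker's model using
    the pose estimated from the current video frame."""
    return {
        marker_id: apply_pose(models[marker_id], pose)
        for marker_id, pose in detected.items()
        if marker_id in models
    }
```

Because the pose is re-estimated every frame, the digital geometry stays locked to the physical marker even as the camera or model moves – the overlay rather than replacement idea in miniature.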

Nathan – Progress
December 2, 2014, 12:57 pm
Filed under: 2013-14

Hi Ricardo and Nannette,

This week I’ve been brainstorming ideas for the design of the digital overlay for my section of New Arcadia. It is still a bit conceptual, but I’ve put some text and renders below to try to explain where I am at.

I have conceptualised the digital city as existing in three layers – Landscape, Aether and Datascape – each with a different experiential quality and function within the city. The main driver of these imaginings has been the question: how does the division/meaning of space manifest when it is immaterial? Instead of proposing that the Augmented City will merely be digital projections of known architectural elements like walls, windows and roofs, I am imagining that there will be a new language of spatial elements – one more concerned with densities, colours and opacities, as these can be more reactive and dynamic. In an augmented reality, because of the very immateriality of the projected spaces, we can inhabit volumes and solids rather than voids defined by solids.

I have also returned to the notion of the mountain/island as an image of promised land from my initial research into the mythology of Arcadia, the Deluge stories and the Elysian fields.

Base Model

01_Deucalion Base Model


The citizens of New Arcadia have coded the digital architectures of the city into a series of mountains, mirroring the steep and stacked scaffolding infrastructure the city is built on. These mountains offer a reminder of dry land amid the vastness of the open ocean that surrounds the Deucalion. The mountains are in some ways a reproduction of lost land, their complex, undulating and cracked forms giving a sense of security and shelter to Arcadians, whilst in other ways they take advantage of their immaterial nature to enhance the internal environment – their lack of substance allows New Arcadia to be light-filled and permeable, whilst being simultaneously solid and impenetrable in appearance.

The landmasses and tunnels of the city have a further purpose – they add distinctiveness through landmarks and, through the projection of ‘solid’ objects, help Arcadians navigate the treacherous scaffolding safely. The landscape of New Arcadia is programmed into everyone’s Eye in the same way, allowing the city to be perceived and inhabited coherently. Whilst it is constantly changing and being reprogrammed, the aesthetic culture of New Arcadia always finds it manifest as some interpretation of terrain. It provides unity, security and an image of a distant past and a promised future.

 02_Deucalion Landscape AR


The Aether is experienced simultaneously with the Landscape. However, in contrast to the coherence of the Landscape, the Aether is customised to each Arcadian. It is a sensory field that behaves much like a virtual liquid space enveloping the entire interior of New Arcadia. It is a constant volume whose role is wayfinding. The Aether overlays New Arcadia with an information field, producing an information map tailored to user preferences, manifest in flows of colour and intensity. It is a tool to navigate New Arcadia’s ever-changing programmes, and a tool to create socially coherent groups – identifying other Arcadians with similar priorities and interests. Through the Aether, each Arcadian’s experience of the city is unique.

03_Deucalion Aether AR


Behind New Arcadia’s sensorial manifestations is the raw code and data that defines the city’s digital infrastructure. Some Arcadians are code literate, and they sometimes choose to walk through projections of raw code as they interpret and rebuild New Arcadia’s Landscape. This Datascape is messy and treacherous – the usual wayfinding elements that allow inhabitants to navigate the open scaffolding safely do not exist here. It is the realm of the hackers, the programmers, and the Proteus engineers. It is a dangerous place inhabited only by the upper echelons of Arcadian society and the occasional cyber criminal attempting to recode Proteus sequences to their own advantage.

04_Deucalion Datascape AR

The various layers will be tied to the AR symbols embedded in the model, meaning that when they are activated, the model will be an integrated mix of all three levels, overlapping and interchanging with each other to create a dynamic environment. The Landscape layer will be a constant pre-modelled environment. The Aether layer will be an animated field (I still need to work out whether this responds to a sensor or follows a choreographed sequence). The Datascape layer will display the real-time calculations and data being used to manipulate the model layers.

05_Deucalion Integrated AR

On the physical side I am currently working on getting all the servos working to control the AR markers. At the moment I think it is likely that they will move in a choreographed sequence, but there is a possibility of making them interactive. I spoke to Apo and he thought it wouldn’t be a problem running that many micro servos – especially if they were connected in groups, reducing the number of Arduino signals needed.
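As a rough illustration of what the choreography logic might reduce to – sketched here in Python rather than Arduino C, with invented step timings – the controller only needs to know which markers should be up at a given moment, and grouping the servos keeps the pin count down:

```python
def signal_lines_needed(servo_count, group_size):
    """Servos wired together in groups share one control signal, so the
    Arduino only needs one pin per group (ceiling division)."""
    return -(-servo_count // group_size)

def markers_up(sequence, t):
    """Return the marker ids raised at time t in a looping choreographed
    sequence, given as a list of (duration, marker_ids) steps."""
    total = sum(duration for duration, _ in sequence)
    t = t % total
    for duration, ids in sequence:
        if t < duration:
            return set(ids)
        t -= duration
```

For example, twelve micro servos wired in groups of four would need only three signal lines, which is well within what one board can drive.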


A further technical issue is that loading a live Grasshopper model introduces more lag, as the communication between Grasshopper and Processing takes a small amount of time. I plan to work towards generating the geometry inside Processing and bypassing Grasshopper.
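One way to soften the lag before fully rewriting the generation in Processing would be to cache generated geometry, so the Grasshopper round trip only happens when the driving parameter actually changes. A sketch of that idea, in Python with a hypothetical generator function:

```python
def make_cached_generator(generate):
    """Wrap an expensive geometry generator (e.g. a round trip to an
    external modeller) so repeated frames with the same parameter become
    a dictionary lookup instead of a regeneration."""
    cache = {}
    def get(param):
        if param not in cache:
            cache[param] = generate(param)
        return cache[param]
    return get
```

With this, per-frame cost collapses to a lookup whenever the sensor value holds steady, which is most of the time.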

Looking forward to hearing your thoughts, concerns and advice.


November 21, 2014, 7:26 pm
Filed under: 2013-14

I am so sorry for the late post – my videos took longer than expected to upload to Vimeo.

I have attached links to two video files describing the work so far and showing where I am with testing. The concept is still to map a digital 3D design onto a 3D physical ‘scaffolding’ model – a simplified test version is shown here (the AR is still a little bit jumpy, but I hope this will be resolved when I put it in controlled lighting conditions). The ambition is to eventually model a small section of the Deucalion from my first drawing to populate with the Augmented layer.

VIDEO 1 – AR TESTING PROCESS – Click image for link




The video also shows a short experiment with transparent screens – I am using the Pepper’s Ghost effect (a digital screen at 45 degrees to a transparent reflective surface – in this case perspex) to create a transparent hologram illusion. The next step here (and potentially the hardest part to come) will be to get the image on the screen to display scaled and positioned correctly, so that it truly overlays the reality behind it – at the moment the webcam is not integrated with the screen. I imagine this calibration will involve getting the screen in the right position relative to both the viewer’s eyes and the webcam.
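The geometry behind that calibration is essentially a mirror reflection: the viewer sees the screen’s virtual image reflected across the plane of the glass, so the image appears to float behind it. A minimal sketch, assuming a plane through the origin and a unit normal:

```python
def reflect(point, normal):
    """Mirror a point across a plane through the origin with unit
    normal 'normal' - this is where the Pepper's Ghost virtual image
    appears to sit behind the glass."""
    d = sum(p * n for p, n in zip(point, normal))
    return tuple(p - 2 * d * n for p, n in zip(point, normal))
```

For glass at 45 degrees the reflection effectively swaps the screen axis for the depth axis, which is why an offset on the screen maps directly to apparent depth behind the glass.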

At the moment the 3D digital model responds in real time to a Rhino model imported into Processing through Grasshopper. This allows me to perform real-time transformations on the geometry through Grasshopper. To work Arduino into the project, I plan to use Firefly to take environmental sensor data from the Arduino (probably light levels at this stage) to control the projected geometry.
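In outline, the sensor mapping is just a clamped linear remap, in the spirit of Arduino’s map() function – sketched here in Python with an assumed 10-bit light reading:

```python
def map_sensor(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly remap a raw sensor reading (e.g. a 0-1023 light level
    arriving from an Arduino) onto a geometry parameter, clamped so
    noisy readings cannot push the model out of range."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)
```

The clamp matters in practice: a sensor spike should saturate the geometry parameter rather than produce an extreme, flickering form.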

To tie the concept into my notion that land ownership might be temporally dynamic, I also hope to make the calibration markers in my final model changeable – powered by Arduino servos. Which calibration markers are shown would determine which digital model is loaded, and which sensor data to use in its manipulation. In my narrative, the idea is that a person would own a set of calibration markers rather than the land they were placed on – allowing owners to design and inhabit their digital environments only during the times when their markers were displayed.
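As a toy version of that ownership logic – in Python, with invented marker ids and names – the model loaded at any moment is just a lookup keyed on the set of markers currently displayed:

```python
def active_model(displayed_markers, registry):
    """Return the (owner, model) registered for the exact set of
    calibration markers currently raised, or None if nobody's marker
    set is on display."""
    return registry.get(frozenset(displayed_markers))
```

Because the servos decide which markers are up at any time, ownership becomes a schedule of marker displays rather than a permanent claim on physical space.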

So, given the tests so far, this is my plan of action for next week – let me know what you think! (My approach is to get as many of the technicalities resolved early on as possible, so that late next week and the week after I can really focus on getting the design right for both the digital and physical models.)

1. Develop Arduino sensor input into Grasshopper (using Firefly plugin?)

2. Develop Arduino servo control script for the calibration markers

3. Test the Pepper’s Ghost screen on a thinner piece of perspex or acetate – to remove the double image effect

4. Initial 3D model (in Rhino) of the final physical model

5. Initial 3D model (in Rhino) of the final digital model(s) – and the associated grasshopper variable controls

I’m having a lot of fun with this, but I am wary of getting carried away with the technical side of things! Please let me know if you have any concerns or advice.


MIT Kinetic Tabletop – For Nicholas
November 16, 2014, 9:44 pm
Filed under: 2013-14


MIT kinetic table

Digital Painting
October 14, 2014, 7:53 pm
Filed under: 2013-14

Hi All,

Following our visit to Forbidden Planet today, I thought I’d post this YouTube channel on digital painting that I found.


It’s by a concept artist called Feng Zhu, who worked on blockbuster films and games and went on to start a design school in Singapore. His process and thoughts on the industry are pretty interesting, as are his actual techniques. Hope it’s useful, or at least provides some semi-relevant entertainment!