Kirigami deployable space frame using AR

Hi everyone,
We’re in the beginning phase of using Fologram as a fabrication tool in the assembly of a complex space frame that folds from flat cut sheets. The kirigami space frame system Spin-Valence, when used to produce flat or single-curvature assemblies, has its assembly logic relatively well encoded in the cut parts. For double-curved versions it is less clear how assembly should proceed, so Fologram is very useful.

Each unit in the assembly is a slightly different size and will deploy into different configurations. We will begin this work by developing a Fologram working environment that will allow us to accurately deploy units one by one during assembly. The goal is to use steel to produce the pavilion design pictured here, but we will begin by testing the accuracy of deploying 3D Spin-Valence units into specific configurations. Once that is successful, we will move to larger assemblies.

I’ll also attach a small steel double-curved assembly that I put together without the help of AR. There are inaccuracies because our computational model produced imperfect geometries at that time (it has since seen many updates), and because of inaccuracies in assembly. Now that we have addressed the digital geometry, we need to address the accuracy in assembly.

We welcome any feedback on this work, and we’ll continue to post updates as we progress. We are interested in methods that we can use so that the Fologram model will provide visual feedback to the fabricator when the deployment is within construction tolerance (color change, or other cue).
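As a sketch of what that visual feedback might look like, here is a minimal pure-Python example (all names, the tolerance values, and the colour scheme are hypothetical, not part of Fologram's API) that maps the measured deviation of the hub from its target position to a traffic-light colour, which could then drive the display colour of the streamed object:

```python
# Sketch: map deployment error to a traffic-light colour cue for the fabricator.
# In practice this logic could live in a GhPython component, with the colour
# applied to the geometry streamed to the headset/phone.

def deviation(tracked_point, target_point):
    """Euclidean distance between the tracked hub point and its target."""
    return sum((a - b) ** 2 for a, b in zip(tracked_point, target_point)) ** 0.5

def cue_colour(tracked_point, target_point, tolerance=2.0, warn_factor=2.0):
    """Return an (R, G, B) cue: green within tolerance, amber when close,
    red otherwise. Tolerance is in model units (e.g. mm, hypothetical)."""
    d = deviation(tracked_point, target_point)
    if d <= tolerance:
        return (0, 200, 0)      # within construction tolerance
    if d <= tolerance * warn_factor:
        return (255, 170, 0)    # close: keep adjusting
    return (220, 0, 0)          # clearly out of position

print(cue_colour((0.0, 0.0, 1.5), (0.0, 0.0, 0.0)))  # 1.5 off, tol 2.0 -> green
```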


I was only allowed to post one image, so many of these will not appear… next time, I guess.

(no image)
flat version of Spin-Valence for reference

(no image)
a single unit similar to the lower image will be deployed into a non-parallel configuration that is unique to each unit. we will place the flat part onto a surface aligned with the digital model, and then deploy the unit hub into its final configuration based on the Fologram model.

(no image)
example double-curved configuration

initial physical prototypes in chipboard and steel

(no image)
pavilion design (to be built at upcoming IASS expo)


Hi Emily, thanks for joining the forum and good to see you’re still playing around with AR.

Such a teaser not being able to see the images! But the first prototype is looking pretty good. I’m really interested to see how this progresses. I am guessing that one challenge with using the holographic guide as a reference during the forming process will be maintaining alignment between some reference point on the physical object and the hologram. Are you doing the forming in a vice or are you doing the forming ‘free hand’ and then holding the object up to a hologram to perform visual checks? My guess is that you would need a vice to hold the object in place and allow you to easily perform silhouette checks that would help avoid problems with the hologram occluding the physical objects.

It might also be interesting to use the new(ish) outline property in the Sync Objects component in Grasshopper to help with this - it renders a silhouette rather than the surface of the object. We originally developed this for orienting 3D objects in space and exploring subtractive forming processes, but it could be useful here too.

Another idea might be to simulate the forming digitally (is this possible?) so that it can be played back and forward during physical forming, giving a sense of whether the physical and digital processes are following the same procedure.
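To illustrate that playback idea, a minimal sketch (the poses, names, and angles below are hypothetical, not Fologram or Grasshopper API): a single parameter t in [0, 1], driven by a slider, scrubs the hub from its flat pose to its deployed pose:

```python
# Sketch: one playback parameter t drives the hub from flat to deployed,
# so the digital deployment can be scrubbed back and forth while forming.

def lerp(a, b, t):
    """Linear interpolation between two points, component-wise."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def hub_pose(flat_origin, deployed_origin, flat_angle_deg, deployed_angle_deg, t):
    """Interpolated hub origin and spin angle at playback time t in [0, 1]."""
    origin = lerp(flat_origin, deployed_origin, t)
    angle = flat_angle_deg + (deployed_angle_deg - flat_angle_deg) * t
    return origin, angle

# Half-way through a hypothetical deployment (hub rises 80 units, spins 35 deg):
origin, angle = hub_pose((0, 0, 0), (0, 0, 80), 0.0, 35.0, 0.5)
print(origin, angle)  # (0.0, 0.0, 40.0) 17.5
```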

Can you try posting some images again? I thought it might have been because you were a new user but from a quick check of our forum settings you should be able to post multiple images.

Hi @Gwyll,
I think before I made this post, I was at a one-image limit, and now it’s removed. I created the whole post, and it only alerted me to the limit when I was trying to send… ack! I’ll recreate it here. I’m excited we’re finally getting into this a bit, but we’re just at the beginning and planning to post more as we progress. We have some geometry issues still to solve before the Fologram integration will be truly useful, but I’m trying to get ahead and be ready with a Fologram strategy when we’re caught up with the rest of the project.

flat Spin-Valence unit
we do the deployment by hand, so we would likely have a clamping setup to allow the outer concentric shape to stay in place as the central hub deploys up and out of it.

a secondary step with making “legs” more rigid (not always part of the process, but won’t need AR guidance)

deployment into final position (by hand). This is the part we want Fologram to direct. Here you see a parallel-plane deployment, but the unit can and will be deployed into non-parallel positions to make double-curved space frames

example flat configuration

example double-curved configuration

3D printed unit that we will initially use to test Fologram as a deployment tool.

We’ll show a Fologram image soon. Thanks for the input!

Cool thanks for sharing… good to know the images do actually work on the forum.

Let us know how the Fologram test goes!

I was delayed in experimenting with this because my original 3D printed unit was so small, so I printed a new one at twice the size (still only about a 6" square). This helped. I used a small QR code printed slightly offset from the lower right corner where I placed the unit (perhaps a larger QR code would help a bit), and I printed the QR code together with the unit reference point. This made it easy to snap the geometry back into precise position. Great tool!

I found that I had to regularly re-snap the geometry back into place, but it did look like a precise fit. This test is a bit premature, because our computational model doesn’t yet behave exactly the same way as the physical one (as mentioned above), but I was able to see how it was moving differently from the physical unit. I didn’t take photos, but I have two sliders that control the position of the unit hub relative to the unit base, and those worked great when streamed into the app.
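For reference, the two-slider control could look something like this minimal pure-Python sketch (the slider meanings, axes, and ranges are assumptions on my part, not the actual Grasshopper definition): a height slider lifts the hub out of the base plane, and a tilt slider rotates it away from the parallel-plane case:

```python
import math

# Sketch: two slider values position the hub relative to the unit base.
# At tilt 0 the hub plane stays parallel to the base, as in the flat case.

def hub_from_sliders(height, tilt_deg):
    """Return the hub origin and unit normal for a hub lifted by `height`
    and tilted by `tilt_deg` about the base X axis (base normal is (0,0,1))."""
    t = math.radians(tilt_deg)
    origin = (0.0, 0.0, float(height))
    normal = (0.0, math.sin(t), math.cos(t))  # base normal rotated about X
    return origin, normal

origin, normal = hub_from_sliders(80.0, 0.0)
print(origin, normal)  # parallel-plane deployment: normal stays (0, 0, 1)
```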

Successful first test, but I need to create a visually cleaner background, and to work out the kinks in our computational model before this method can be fully assessed. For now, I’m using the mobile app, but the plan would be to use a Hololens for real fabrication.

Now I’m considering using a QR marker attached to the movable hub to track the placement of the hub from physical space back to digital.
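As a sketch of that idea: if the marker on the hub reports a tracked pose in model coordinates, comparing it against the target pose gives a position error and an angular error that could then drive a tolerance cue. This is a hypothetical pure-Python illustration of the comparison, not Fologram’s API:

```python
import math

# Sketch: compare a hub pose tracked via a QR marker against the digital
# target pose. Each pose is (origin, unit normal) in model coordinates.

def pose_error(tracked_origin, tracked_normal, target_origin, target_normal):
    """Return (distance, angle_deg) between a tracked and a target pose.
    Normals are assumed to be unit vectors."""
    dist = sum((a - b) ** 2 for a, b in zip(tracked_origin, target_origin)) ** 0.5
    dot = sum(a * b for a, b in zip(tracked_normal, target_normal))
    dot = max(-1.0, min(1.0, dot))  # clamp against floating-point drift
    return dist, math.degrees(math.acos(dot))

# A perfectly deployed hub shows zero position and zero angle error:
print(pose_error((0, 0, 80), (0, 0, 1), (0, 0, 80), (0, 0, 1)))
```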

before deployment

after deployment (with some discrepancies that are more due to computational model than AR)

That looks like it is working pretty well considering you are using a mobile phone! It’s an interesting use of AR to calibrate the behaviour of the digital model to match the physical one. What kind of precision do you need to get out of this process for the fabrication of the larger structure to work?

I don’t know whether a larger QR code would improve the placement precision or not. The precision of the placement depends entirely on how many pixels of the code the phone can see - so this could mean a small marker that is close to the phone or a larger marker that is further away.

What would potentially improve tracking (and reduce the need to re-snap to the QR code all the time) is more visual information in the workspace. You could try putting a few high-contrast objects on the desk and making sure you aren’t around glass / reflective / black surfaces. A quick walk around the room with the phone before snapping to the QR code might also help.

Generally speaking, tracking is going to be tricky though, because you will need to move the phone pretty close to the object to get a sense of whether things have been assembled properly, and this means the phone won’t have a lot of visual information to track from other than the object (which is moving / changing and terrible for tracking).