A Behind-The-Scenes Look At The Rubenstein Library’s Anatomical Fugitive Sheets Project

Written by | May 7, 2015 | Posted in Process

OR: The Coolest Thing I’ve Ever Coded And Also Reflections On The True OGs of My Craft

We here at Cuberis take a lot of pride in each and every project we complete, but occasionally certain projects come across our desks that we feel the need to highlight specifically. Sometimes a project might be totally unique or left-field, sometimes a project might be coded creatively, sometimes a project simply turns out to be — for lack of (and for lack of interest in choosing) an alternate phrase — super dope.

One project that fits all three of these categories is the Anatomical Fugitive Sheets project, for which we were hired by the David M. Rubenstein Rare Book & Manuscript Library over at Duke University. We were tasked with designing and programming an interface for users to explore the collection, which consists of interactive 16th- and 17th-century anatomical drawings. I’d recommend spending a few minutes clicking around the link above before reading ahead.

Be warned, some of this is going to be pretty in-the-weeds. If nothing else, at least read the last section. We’re beyond proud of the way this project turned out, and I hope you’ll see why.


You might be wondering how 16th- and 17th-century anatomical drawings can be interactive. Rachel Ingold, curator of the Rubenstein Library’s History of Medicine collection, has a fantastic write-up on the collection. The collection features anatomical fugitive sheets dating from the early 1500s to the mid-1600s, from both Italy and Germany, and in various states of preservation. Each sheet may contain a number of woodblock-printed paper flaps, each representing a “layer” of the human body. Exempli gratia: the lungs, liver, stomach, and reproductive organs might each be displayed as a separate flap. The flaps are ordered so that lifting each one reveals the next layer beneath it, similar to performing a dissection on the human body — just without the messy clean-up or, y’know, need for a body. You can see the obvious benefits this had for contemporary students of anatomy, biology, or medicine.

There’s a ton of modern interest in the pieces, too. They’re a part of the histories of so many diverse fields: medicine, art, mass communications, and interactive media, amongst others. Students and scholars obviously can learn a lot from the collection, so the Library wanted to make them widely available to these parties.

We are far from the first to attempt digitizing pieces like these. Sean Aery, Digital Projects Developer for Duke Libraries, has a great write-up of his own on the project, with links to earlier examples. Most notably, projects like this have historically been done with Flash — but as Sean notes (and hopefully this isn’t a shocker for readers of this blog), that isn’t a viable option for the modern web.

To the best of my knowledge, what we ended up with is one of the most — if not straight-up capital-letters THE MOST — innovative and advanced implementations of digitizing pieces like those in the collection.


First thing, we identified several objectives we wanted to accomplish when designing an interface for this project. They are as follows:

  • To mimic the natural movement and appearance of the physical copies of the pieces in the collection.
  • To provide an unobtrusive system for navigating through the various layers in a way that neither obstructs nor distracts from the layers themselves, but that makes it glaringly obvious to users how to interact with the pieces.
  • To identify all use-cases of the collection, and design for them in ways that don’t conflict with one another.
  • To code our eventual system in a way that maximizes accessibility across browsers and devices.

To address the first and fourth points, we opted for a hybrid solution of SVGs and high-res PNGs that also makes extensive use of CSS transitions.

To address the second and third points, we opted to create two separate means of navigating the layers. The first is direct interaction with the piece itself: when you hover anywhere over the image, an icon appears that notes where the piece’s interactive layer is (or in the case of one piece, where the interactive layers are). When hovered over, the “active” — that is, the current top-most — layer gets a white stroke. At any time, only the active layer and the layer above it can be clicked. Clicking a layer causes it to flip up or flip down, advancing the user’s navigation.
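To make those click rules concrete, here’s a minimal sketch of the logic described above (the function and field names are my own shorthand for this post, not the production code):

```javascript
// Layers are indexed from 0 (top flap) downward; "active" is the current
// top-most visible layer. Only the active layer and the layer flipped up
// just above it respond to clicks; everything else ignores them.
function handleLayerClick(activeIndex, clickedIndex, layerCount) {
  if (clickedIndex === activeIndex && activeIndex < layerCount - 1) {
    // Flip the active layer up, revealing the next layer down.
    return { action: "flipUp", newActive: activeIndex + 1 };
  }
  if (clickedIndex === activeIndex - 1) {
    // Flip the layer above back down, making it active again.
    return { action: "flipDown", newActive: activeIndex - 1 };
  }
  return null; // the base layer can't flip up; other layers aren't clickable
}
```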

[Screenshot: the hover icon and the white stroke on the active layer]

The second navigation method is via a list of layers in the sidebar. Users can click a layer to navigate directly to it — causing any intermediary layers to flip up or down — or use the up and down arrows to navigate linearly. The sidebar is important for a few reasons: first, it clearly denotes which anatomical part a layer represents. Second, hovering over any item also triggers the hover state on that layer, letting users know where they are in the sequence. Third, it provides the least intrusive means of navigation.

Pieces can also be zoomed in on, either in a new window or within the same window. This feature is essential because some users will be particularly interested in the Latin or German text that accompanies each figure. It’s also great because the imagery used in the project is super high-resolution (the images used on the site are scaled down, but the TIFFs we were provided were ~20MB each), so there are a ton of very fine details in the images.

The project had a few missteps along the way. Notably, one of our early experiments was with tactile navigation that allowed for a one-to-one relationship between touch interactions and the layer animation. This is a cool trick, but it was scrapped when we realized users wouldn’t be interested in how a flap looks halfway-flipped. A proof-of-concept is available on my personal GitHub page if you’re curious, but I’d say we made the right call abandoning the idea — still, pull that link up on a touch device and play around with it.


Fun fact about me that doesn’t get talked about enough on the company blog or elsewhere: I actually fancy myself somethin’ of a designer. I have both Photoshop and Illustrator pinned to my taskbar, but during the course of the workday they typically go untouched — we have enough talented designers (and, for that matter, enough development work) around the office for me to mostly stay in the lane of Sublime Text and the command line.

So whenever there’s a chance for me to open up my oft-neglected Adobe software or flex the ol’ Pen Tool muscles, you know I’m gonna pounce on my computer like Derek Zoolander trying to find evidence. This project was one of those opportunities.

The first step was to take the images the Duke team provided us into Illustrator and trace them with the Pen Tool, producing paths I could then export as SVG data for use in the final product. That process looked something like this:

[Screenshot: tracing a flap with the Pen Tool in Illustrator, alongside the exported SVG path data]

(If the code output there looks overwhelming: trust me, it is, and I’m terrified by it. There’s a reason I used Illustrator rather than even attempting to hand-write that.)

The next step was to take the images into Photoshop and isolate the layers themselves, to be used as PNGs in the final project. To do this, I used the SVG objects produced in Illustrator as clipping masks. (For the first few components I used the Quick Mask tool and paintbrush/eraser to isolate the layer, but I quickly realized how dumb this was).

Once I had my SVG data and images ready to go, the last thing to do before returning to my regular Sublime Text domicile was to figure out positioning of the elements. To do this, I placed all layer images together along with the base layer in one Photoshop document, positioned them as closely to the actual placement of the layers in the physical pieces as I could, and figured out the exact pixel values for where everything should live relative to the whole document. I needed these as percentage-based values, and rather than wasting time calculating these myself, I turned to the powers of LESS:

[Screenshot: the LESS source]

Which gets compiled to look something like this:

[Screenshot: the compiled CSS]
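The arithmetic behind that LESS is just a ratio: divide a layer’s pixel offset or size by the full document’s dimension. LESS’s built-in percentage() function does this at compile time; a rough JavaScript equivalent (with made-up pixel values) looks like this:

```javascript
// Convert a pixel measurement, taken from the master Photoshop document,
// into the percentage value used for absolute positioning in the CSS.
// This mirrors what LESS's percentage() function computes at compile time.
function toPercent(px, total) {
  return (px * 100) / total + "%";
}

// e.g. a flap whose left edge sits 212px into a 1000px-wide document:
toPercent(212, 1000); // "21.2%"
toPercent(500, 1000); // "50%"
```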


Sean’s write-up (linked above) does a great job of explaining a lot of the technical aspects of the project. I’d recommend reading that if you haven’t yet, but there are a few other points I’d like to touch on about the code behind this project.

First, while the layer-flipping animations might seem complex, they’re actually pretty simple. The SVG paths mentioned above are reused several ways for each component: as the path for the hover-state stroke, as the shape of the backside of the component, and as a clipping mask for the gradient that provides the shadow effect that lends components the feeling of three-dimensionality.
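As an illustration of that reuse — this is illustrative markup with hypothetical ids and colors, not the project’s actual source — a single traced path can be defined once and referenced three times:

```javascript
// Build an SVG string that defines one traced path and reuses it for the
// backside shape, the shadow gradient's clipping mask, and the white
// hover-state stroke.
function buildFlapSvg(pathData) {
  return `
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <defs>
    <path id="flap" d="${pathData}"/>
    <clipPath id="flap-clip"><use href="#flap"/></clipPath>
    <linearGradient id="shadow" x1="0" y1="0" x2="0" y2="1">
      <stop offset="0" stop-color="black" stop-opacity="0.4"/>
      <stop offset="1" stop-color="black" stop-opacity="0"/>
    </linearGradient>
  </defs>
  <use href="#flap" fill="#f4e9d8"/>
  <rect width="100" height="100" fill="url(#shadow)" clip-path="url(#flap-clip)"/>
  <use href="#flap" fill="none" stroke="white" stroke-width="2"/>
</svg>`;
}
```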

For the flipping animation, really all that’s happening is that as each component flips, the height of its wrapper changes from its initial value to 0 and the height of the backside changes from 0 to the component’s initial height. Because the backside is absolutely positioned with bottom: 0, it moves with the bottom of the component wrapper as it changes height. Similarly, because the frontside is absolutely positioned with top: 0, it stays in place, but becomes hidden as the wrapper shrinks because the wrapper is set to overflow: hidden. Because all three elements share the same transition timing function, everything appears to move as one fluid unit, though all three are technically moving independently.
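Reduced to the height bookkeeping alone, the two endpoint states look like this (a sketch with hypothetical names — in the real page the shared CSS transition interpolates between them):

```javascript
// Endpoint heights for a flap of a given full height. The wrapper has
// overflow: hidden, so it hides the frontside as it shrinks; the backside
// is pinned with bottom: 0, so it rides the wrapper's bottom edge.
function flipHeights(fullHeight, flipped) {
  return {
    wrapper:  flipped ? 0 : fullHeight,
    backside: flipped ? fullHeight : 0
  };
}
```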

If you read that and think, “that’s actually not that simple,” then try this. Touch the tips of your index fingers, and then move both your arms up but one faster than the other. The index finger on the faster arm is the top of the backside, and the index finger on the slower arm is where the bottoms of both the frontside and backside cut off.

Second, I’m actually really proud of the miniature cross-window messaging API I built out for this. As Sean noted, the image is a separate HTML file embedded into the page. The two windows can share and interpret information from one another via stringified JSON data. For instance, sending data from the outer window to the inner window looks like this:

[Screenshot: the outer window’s send code]

And the inner window receiving data looks like this:

[Screenshot: the inner window’s receive code]

If you hover off of a list item, you remove the .toc-active class (which causes the hover-state stroke to display). If you click one of the elevator buttons, you send a string telling the updateCurrentActive() function to animate up or down — the gender property is for the piece that has both male and female figures, as both can be animated independently. If you click a list item, it sends the index value corresponding to that layer (the skipToNewActive() function calls the updateCurrentActive() function on a 250ms interval until the desired layer is reached). If you hover onto a list item, you send the index value of the corresponding layer and it’s given the .toc-active class mentioned earlier.

Going the opposite direction works similarly, but with only the directional values sent, which happens as part of the updateCurrentActive() function. The humorous implication of this is that clicking a list item in the outer window actually sends a message to the inner window telling it to send the outer window a message.
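In outline, that messaging layer can be sketched like this (sendToInner, makeReceiver, and the payload fields are hypothetical names standing in for the real code):

```javascript
// Send a stringified-JSON message to the embedded window.
function sendToInner(innerWindow, payload) {
  // e.g. payload = { type: "skipTo", index: 3, gender: "female" }
  innerWindow.postMessage(JSON.stringify(payload), "*");
}

// Build a "message" event handler that parses the JSON and dispatches
// on a type field. Attach it with:
//   window.addEventListener("message", makeReceiver({ skipTo: fn, ... }));
function makeReceiver(handlers) {
  return function (event) {
    const msg = JSON.parse(event.data);
    if (handlers[msg.type]) handlers[msg.type](msg);
  };
}
```

(In production you’d also check event.origin and pass a specific target origin instead of "*".)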


(I had to momentarily turn the UNC lawn gnome I keep on my work desk the other way, so he couldn’t see me type those words.)

I want to give the world’s biggest shout-out to all the people we worked with at Duke. Rachel Ingold, the curator of the Rubenstein Library’s History of Medicine collection, was our primary point of contact, and she worked tirelessly to facilitate communication between the various parties involved in the project.

Sean Aery, Will Sexton, and Cory Lown are developers for the Duke Libraries who all provided valuable feedback on my contributions. (Looking at the project in production, they also seem to have made improvements to my code — guys, I’m kicking myself for not using CSS transforms to flip the SVG paths). They did an incredible job of taking our deliverables and integrating them seamlessly into the existing Duke Libraries site.

Finally, Mike Adamo and Molly Bragg of the Digital Production Center did a truly mind-boggling job of photographing the various layers of the pieces. I can’t fathom the amount of delicacy and care it took to take documents — especially documents as fragile as these — and produce such high-quality and usable images as the ones produced by Mike and Molly. I wish I’d been a fly on the wall to see y’all work.

A project like this can’t happen without an intense level of communication, coordination, and collaboration between various teams.


I’m friends with a lot of designers, and one thing I’ve noticed is that every one of them harbors a singular appreciation for Johannes Gutenberg, inventor of the printing press and basically the O.G. of typography. It’s almost a Kendrick Lamar/Tupac dynamic, just on a much wider time-frame.

As a developer, I’ve never felt that level of idolization toward anyone. Obviously there are a ton of programmers to whom I owe a debt — from Tim Berners-Lee to Linus Torvalds to mdo and fat of Bootstrap — but it’s safe to say that none of them date back to the 15th century. I’ve always been jealous of designers in that way — they have someone I don’t, someone who did what they do now, just hundreds and hundreds of years ago. Working on this project changed that.

The people who made the original fugitive sheets were essentially the front-end web engineers of their time.

What I do — and really what everyone here at Cuberis and in the development community does — for a living can be boiled down to this: creating interactive ways of distributing information. These guys were doing that hundreds and hundreds of years ago. Their names aren’t taught in history classes like Gutenberg’s, but what they did was important: they looked at a static piece of paper and said, “this would be way better if it moved.” They planted the seeds that — albeit about as indirectly as can be — led to the ideas that then led to the genesis of a whole industry I love being a part of.

In that way, I was so humbled to work on this, to carry on their lineage, to present their work in a way that can be accessed and appreciated by so many. I couldn’t be happier to have worked on this project.