I've been building an interactive web app to accompany an upcoming paper on ENSO predictability and interpretability. The motivation originally stemmed from needing an easier way to search through the many hundreds of plots I was generating for the paper, along with a long-standing desire to move beyond presenting a handful of static figures in a PDF (in presentations, papers, etc.) and instead offer the reader a more interactive and engaging experience. The manuscript includes several network-type figures (Markov chains and a directed acyclic graph), which were the initial focus for developing the app. These are great examples of figures for which a static PDF is a poor interface for exploration, since the nodes and edges of a network contain far more information than can easily be conveyed in a static image. The initial goal for the web app at app.michaelgroom.net was to take the same SVG figures of these networks used in the paper and make them interactive via tooltips, clickable modals, and a number of controls for answering common questions a reader might have, e.g. "what are the transition probabilities along this particular path?" or "what is the average time it takes to transition from this state to another?". From there, it quickly became a platform for displaying all possible combinations of each type of figure shown in the paper, including sequences of figures for every forecast that was made, enabling a much more complete set of results than could easily be presented in the paper or even the supplementary material.


This short post is about the web app design rather than the science: the focus is on what someone else would need to know to take this idea and apply it to their own paper figures.


Quick tour: what the app lets you do


The core of the app is the interactive SVG pages:


There are also two "movie" pages that enable the user to quickly explore a sequence of images that vary with forecast lead time:


If you only do one thing after reading this post: open the Markov chain page, load 3 months, hover a few nodes/edges, and click a node to open its modal. That single interaction captures most of the idea.


Animated demo of the 3-month Markov chain graph


The core idea: store image metadata in JSON, create interactive elements via JavaScript


The approach that made this manageable (and reusable) is a simple separation of concerns:


This separation is what makes the approach generalisable: you can swap in new SVG figures and new JSON data, while keeping most of the interaction framework unchanged.


Why SVG works well for this application:


A reusable recipe for interactive paper figures


If you want to adapt this idea to your own work, here's a summary of the main parts of the workflow.


1) Export SVGs with stable, selectable elements

You need the objects you care about (nodes, edges, panels, markers, etc.) to be separate SVG elements, not collapsed into one large path. The key requirement is a deterministic mapping between a particular SVG element and an associated JSON data record. For graph-like figures there are a few common approaches:

In practice, the first option can be surprisingly robust if you control the export pipeline.
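As a minimal sketch of what "stable, selectable elements" means in practice, here is a hypothetical SVG export where each node and edge gets its own element with a predictable id (`node-<i>`, `edge-<i>-<j>`). The id scheme is illustrative, not the one used in the app:

```javascript
// Hypothetical example: a two-state Markov chain exported so that each
// node and edge is its own SVG element with a predictable id.
const svg = `
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 100">
  <circle id="node-0" cx="40" cy="50" r="15"/>
  <circle id="node-1" cx="160" cy="50" r="15"/>
  <path id="edge-0-1" d="M 55 50 L 145 50"/>
</svg>`;

// A deterministic mapping to data records is possible because the ids
// follow a known pattern that a parser can rely on.
const nodeIds = [...svg.matchAll(/id="(node-\d+)"/g)].map(m => m[1]);
const edgeIds = [...svg.matchAll(/id="(edge-\d+-\d+)"/g)].map(m => m[1]);

console.log(nodeIds); // ['node-0', 'node-1']
console.log(edgeIds); // ['edge-0-1']
```

If your drawing tool rewrites ids on export, a fallback is to match elements by geometry (e.g. node centre coordinates stored in the JSON), but predictable ids make everything downstream simpler.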


2) Build a JSON schema that matches your UI

For nodes, you might include an integer ID, summary statistics you want to show in tooltips, richer metadata for modals, and links to auxiliary media (e.g. MP4 videos). For edges, you might include source/target IDs, weights/probabilities, confidence intervals etc.
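To make that concrete, here is a sketch of what such records might look like, keyed so they can be joined to SVG element ids. All field names and values here are illustrative, not the paper's actual schema:

```javascript
// Hypothetical node records: one per SVG node element.
const nodes = {
  "0": {
    label: "El Nino",              // short name shown in tooltips
    occupancy: 0.18,               // summary statistic for the tooltip
    description: "Warm phase state visited in boreal winter.", // modal text
    video: "media/node-0.mp4"      // auxiliary media linked from the modal
  }
};

// Hypothetical edge records: one per SVG edge element.
const edges = {
  "0-1": {
    source: 0,
    target: 1,
    probability: 0.42,             // transition probability (edge weight)
    ci: [0.35, 0.49]               // 95% confidence interval
  }
};
```

The useful discipline is to let the UI drive the schema: every field should exist because some tooltip, modal, or calculator reads it.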


3) Write a parser that attaches meaning to the SVG

This is the step that turns an SVG into a structured interactive object. The parser's job is to find the SVG elements that correspond to nodes/edges, create a mapping to JSON records, and compute any geometry needed for placement of tooltips and click targets.


4) Implement two primitives: tooltips and modals

Most of the interactivity reduces to two interaction primitives:
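One testable piece of the tooltip primitive is the placement geometry: position the tooltip near the pointer but clamped to the viewport so it never overflows the screen. This is a generic sketch, not the app's exact implementation (the modal primitive is mostly DOM plumbing and is omitted):

```javascript
// Place a tooltip offset from the pointer, flipping to the other side
// when it would overflow the viewport, and clamping to non-negative
// coordinates.
function tooltipPosition(pointer, tip, viewport, offset = 12) {
  let x = pointer.x + offset;
  let y = pointer.y + offset;
  if (x + tip.width > viewport.width) x = pointer.x - tip.width - offset;
  if (y + tip.height > viewport.height) y = pointer.y - tip.height - offset;
  return { x: Math.max(0, x), y: Math.max(0, y) };
}

// Near the right edge, the tooltip flips to the left of the pointer.
const pos = tooltipPosition(
  { x: 990, y: 100 },
  { width: 200, height: 80 },
  { width: 1000, height: 800 }
);
console.log(pos); // { x: 778, y: 112 }
```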


5) Add additional controls/calculators

Once tooltips and modals are working, you can add small controls or calculators that operate on the JSON data to answer the questions readers commonly ask of a paper figure. Some examples I used:
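As one sketch of such a calculator, here is the "probability along a path" question answered directly from the edge records. The field names follow the hypothetical schema above and are illustrative:

```javascript
// Multiply transition probabilities along a path of node ids,
// returning 0 if any step has no corresponding edge record.
function pathProbability(path, edges) {
  let p = 1;
  for (let i = 0; i < path.length - 1; i++) {
    const edge = edges[`${path[i]}-${path[i + 1]}`];
    if (!edge) return 0; // no such transition in the chain
    p *= edge.probability;
  }
  return p;
}

const edges = {
  '0-1': { probability: 0.5 },
  '1-2': { probability: 0.4 }
};
console.log(pathProbability([0, 1, 2], edges)); // 0.2
```

The point of these calculators is that the data is already on the page: once the JSON is loaded for tooltips, answering "what is the probability of this path?" is a few lines of arithmetic rather than a new figure.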


6) Make it work on mobile

Hover doesn't exist on touchscreens, so I used a simple mapping: tap for tooltip-level inspection, and long-press for modal-level detail. This keeps the same interaction model across devices: quick inspection vs deeper dive.
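The routing decision itself is simple enough to sketch: classify a press by its duration and dispatch to the tooltip or modal handler accordingly. The 500 ms threshold is an assumption for illustration, not the app's actual value:

```javascript
// Classify a touch interaction by press duration: a quick tap maps to
// the tooltip (quick inspection), a long press to the modal (deep dive).
const LONG_PRESS_MS = 500; // illustrative threshold

function classifyPress(downTimeMs, upTimeMs) {
  return upTimeMs - downTimeMs >= LONG_PRESS_MS ? 'modal' : 'tooltip';
}

console.log(classifyPress(0, 120)); // 'tooltip'  (quick tap)
console.log(classifyPress(0, 650)); // 'modal'    (long press)
```

In the browser this would be wired to `pointerdown`/`pointerup` timestamps, with a small movement threshold so that drags and scrolls are not misread as presses.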


7) Validate and fail gracefully

Interactive figures can break trust quickly if a click yields nonsense or a missing asset breaks the page. Two practices help:
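A load-time validation pass is one way to get both: check that every SVG element has a JSON record and that every linked media file actually exists, so problems surface in the console at startup rather than as broken clicks. A minimal sketch, with illustrative field names:

```javascript
// Validate the SVG/JSON/media triple at load time, returning a list of
// human-readable errors instead of failing silently later.
function validate(elementIds, records, availableMedia) {
  const errors = [];
  for (const id of elementIds) {
    const rec = records[id];
    if (!rec) {
      errors.push(`no JSON record for SVG element "${id}"`);
      continue;
    }
    if (rec.video && !availableMedia.includes(rec.video)) {
      errors.push(`missing media "${rec.video}" for "${id}"`);
    }
  }
  return errors;
}

const errors = validate(
  ['node-0', 'node-1'],
  { 'node-0': { video: 'media/a.mp4' }, 'node-1': { video: 'media/b.mp4' } },
  ['media/a.mp4']
);
console.log(errors); // ['missing media "media/b.mp4" for "node-1"']
```

Failing gracefully is the other half: if a record or asset is missing at click time, show a short "data unavailable" message in the tooltip or modal rather than an empty panel or a console error.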


I've made the underlying code available as a lightweight template at github.com/m-groom/interactive-svg. To keep the repository small and easy to reuse, it does not include the paper-specific assets and datasets (the SVG/JSON inputs and the MP4/PNG media directories). If you want to run the full experience locally, you'll need to supply your own versions of:

The intended workflow is that you replace those assets with your own paper's figures and data, reusing or modifying the interaction code as needed.