I've been building an interactive web app to accompany an upcoming paper on ENSO predictability and interpretability. The motivation for the app came from two places: needing an easier way to search through the many hundreds of plots I was generating for the paper, and a longer-standing desire to move beyond presenting a handful of static figures in a PDF (in presentations, papers, etc.) and instead provide a more interactive and engaging experience for the reader. The manuscript includes several network-type figures (Markov chains and a directed acyclic graph), which were the initial focus for developing the app. These are great examples of figures for which a static image in a PDF is a poor interface for exploration, since the nodes and edges of the network carry far more information than can easily be conveyed in a static figure. The initial goal for the web app at app.michaelgroom.net was to take the same SVG figures of these networks used in the paper and make them interactive via tooltips, clickable modals, and a number of controls for answering common questions a reader might have, e.g. "what are the transition probabilities along this particular path?" or "what is the average time it takes to transition from this state to another?". From there, it quickly became a platform for displaying all possible combinations of each type of figure shown in the paper, including sequences of figures for every forecast that was made, giving a much more complete set of results than could easily be presented in the paper or even the supplementary material.
This short post is about the web app design rather than the science; the focus here is on what someone else would need to know to take this idea and apply it to their own paper figures.
Quick tour: what the app lets you do
The core of the app is the interactive SVG pages:
- Markov Chains: choose a lead time (the Markov chain for a 3-month lead time is featured in the paper as part of figure 4), then hover/tap nodes and edges to see summary tooltips. Click (or long-press on mobile) a node to open a modal with more detail, including a video of the associated composite pattern for that cluster. There is also a target date slider (which on desktop highlights nodes according to their affiliation probabilities for that target date) and a mean first passage time calculator.
- DAGs: similar node/edge interaction (the modals that are displayed are identical to the ones for the corresponding Markov chain nodes at the same level/lead time) and target date slider, plus tools to compute cumulative probabilities between nodes at different levels and to highlight the most probable path between two nodes directly on the graph. The DAG corresponds to figure 5 in the paper.
There are also two "movie" pages that enable the user to quickly explore a sequence of images that vary with forecast lead time:
- Case Studies: select a target date (e.g. Jan 2016 is featured throughout the paper) and compare reconstructed vs observed fields for each forecast made for that target date. There is also an option to toggle between raw and detrended observed fields, as well as to synchronise the videos.
- Feature Importance: select a season/class (e.g. DJF El Niño is featured throughout the paper) and toggle between importance (figure 11) and correlation maps (figure 14). These are shown alongside synchronised movies of precursor composites with the contours of the importance/correlation map overlaid on top.
If you only do one thing after reading this post: open the Markov chain page, load 3 months, hover a few nodes/edges, and click a node to open its modal. That single interaction captures most of the idea.

The core idea: store image metadata in JSON, create interactive elements via JavaScript
The approach that made this manageable (and reusable) is a simple separation of concerns:
- SVG: save figures as publication-quality vector images (exactly how they appear in the paper).
- JSON: pair each figure with a semantic layer that stores attributes for each node/edge (IDs, probabilities, uncertainty estimates, derived quantities, etc.).
- JavaScript: the "glue" that maps SVG elements to JSON records and attaches interaction patterns (tooltips, modals, highlighting, calculations etc).
This separation is what makes the approach generalisable: you can swap in new SVG figures and new JSON data, while keeping most of the interaction framework unchanged.
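To make the separation concrete, here is a minimal sketch of the JavaScript glue, assuming the SVG is embedded inline, its node elements carry ids like "node-0", "node-1", ..., and the JSON file has a matching nodes array; showTooltip, hideTooltip and openModal stand in for whatever tooltip/modal code comes later (all of these names are illustrative, not the app's actual files or functions):

```javascript
// Minimal glue: map SVG elements to JSON records and attach interactions.
// Assumes node ids of the form "node-<i>" and a JSON file with a "nodes" array.
async function initFigure(svgElement, jsonUrl) {
  const data = await (await fetch(jsonUrl)).json();

  data.nodes.forEach((record, i) => {
    const el = svgElement.querySelector(`#node-${i}`);
    if (!el) return; // fail gracefully if the figure and the data disagree

    el.style.cursor = "pointer";
    el.addEventListener("mouseenter", () => showTooltip(el, record)); // hypothetical helpers,
    el.addEventListener("mouseleave", () => hideTooltip());           // implemented in step 4 below
    el.addEventListener("click", () => openModal(record));
  });
}
```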
Why SVG works well for this application:
- Vector graphics mean the figure stays sharp at any zoom level.
- Nodes and edges are distinct parts of the figure (they exist as individual elements in the browser's Document Object Model) that you can select, style, and attach event listeners to.
- Since SVG is based on XML, it is human-readable and editable, which makes it easy to manually inspect and debug.
- The same exported figure can be used in the paper and in the web app.
A reusable recipe for interactive paper figures
If you want to adapt this idea to your own work, here's a summary of the main parts of the workflow.
1) Export SVGs with stable, selectable elements
You need the objects you care about (nodes, edges, panels, markers, etc.) to be separate SVG elements, not collapsed into one large path. The key requirement is a deterministic mapping between a particular SVG element and an associated JSON data record. For graph-like figures there are a few common approaches:
- Match the order of nodes in the JSON to the order of node elements in the SVG.
- Encode explicit IDs in the SVG (e.g. id="node-17") and parse them.
- Geometry-based matching: infer matches via positions in the figure (more work, but sometimes necessary). This was the approach taken for the DAG edges.
In practice, the first option can be surprisingly robust if you control the export pipeline.
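As a rough illustration of the first two options, assuming `svg` is the inline SVG element and `data` is the parsed JSON (the selectors and data layout here are assumptions, not the app's actual export conventions):

```javascript
// Order-based matching: assumes data.nodes[i] corresponds to the i-th
// <circle class="node"> element in the exported SVG.
const byOrder = new Map();
svg.querySelectorAll("circle.node").forEach((el, i) => byOrder.set(el, data.nodes[i]));

// ID-based matching: assumes explicit ids such as id="node-17" were written at export time.
const byId = new Map();
data.nodes.forEach((record) => {
  const el = svg.querySelector(`#node-${record.id}`);
  if (el) byId.set(el, record);
});
```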
2) Build a JSON schema that matches your UI
For nodes, you might include an integer ID, summary statistics you want to show in tooltips, richer metadata for modals, and links to auxiliary media (e.g. MP4 videos). For edges, you might include source/target IDs, weights/probabilities, confidence intervals etc.
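As a rough example of what such a schema might look like (field names and values here are illustrative, not the actual schema used by the app):

```json
{
  "nodes": [
    {
      "id": 17,
      "label": "Cluster 17",
      "tooltip": { "persistence": 0.62, "n_members": 148 },
      "modal": { "description": "Longer text shown in the modal", "video": "mp4_files/cluster_17.mp4" }
    }
  ],
  "edges": [
    { "source": 17, "target": 3, "probability": 0.21, "ci": [0.15, 0.28] }
  ]
}
```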
3) Write a parser that attaches meaning to the SVG
This is the step that turns an SVG into a structured interactive object. The parser's job is to find the SVG elements that correspond to nodes/edges, create a mapping to JSON records, and compute any geometry needed for placement of tooltips and click targets.
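A sketch of what that parsing pass might look like, assuming nodes are `<circle class="node">` elements and edges live in a group with id="edges" (again, illustrative conventions rather than the app's actual export format):

```javascript
// Walk the SVG, pair each element with its JSON record, and record the geometry
// needed later for tooltip placement and click targets.
function parseFigure(svg, data) {
  const nodes = Array.from(svg.querySelectorAll("circle.node")).map((el, i) => {
    const box = el.getBBox(); // bounding box in SVG user units
    return {
      el,
      record: data.nodes[i],
      cx: box.x + box.width / 2,  // centre point for anchoring tooltips
      cy: box.y + box.height / 2,
    };
  });

  const edges = Array.from(svg.querySelectorAll("#edges path")).map((el, i) => ({
    el,
    record: data.edges[i],
  }));

  return { nodes, edges };
}
```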
4) Implement two primitives: tooltips and modals
Most of the interactivity reduces to two interaction primitives (a wiring sketch follows the list):
- Tooltip: quick inspection ("what is this?") on hover/tap.
- Modal: deeper dive ("what does this mean?") on click/long-press, with room for tables, text, embedded videos etc.
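One possible way to wire both primitives, assuming the page already contains an absolutely positioned tooltip `<div id="tooltip">` and a `<dialog id="modal">`, and that renderDetail builds the modal's HTML from a JSON record (all of these are placeholders for this sketch):

```javascript
const tooltip = document.getElementById("tooltip");
const modal = document.getElementById("modal");

function attachPrimitives(el, record) {
  // Tooltip: quick inspection on hover.
  el.addEventListener("mouseenter", (e) => {
    tooltip.textContent = record.label;
    tooltip.style.left = `${e.pageX + 10}px`;
    tooltip.style.top = `${e.pageY + 10}px`;
    tooltip.hidden = false;
  });
  el.addEventListener("mouseleave", () => { tooltip.hidden = true; });

  // Modal: deeper dive on click, with room for tables, text and videos.
  el.addEventListener("click", () => {
    modal.querySelector(".modal-body").innerHTML = renderDetail(record); // hypothetical renderer
    modal.showModal(); // native <dialog> behaviour
  });
}
```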
5) Add additional controls/calculators
Once tooltips and modals are working, you can add small controls or calculators that operate on the JSON data to answer questions readers might ask when looking at a paper figure. Some examples I used (one of which is sketched after the list):
- Date highlighting: brighten nodes according to an affiliation probability vector for the selected target date.
- MFPT: look up mean first passage time from cluster i to cluster j.
- Cumulative probability: compute the cumulative probability of transitioning from a node at level n of the DAG to a node at level m < n.
- Most probable path: identify and highlight the most likely path between two nodes on the DAG.
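As one example of how little code these calculators need, here is a minimal sketch of the most-probable-path calculation, assuming edges are records of the form { source, target, probability } with node ids unique across DAG levels (an assumed layout, not necessarily the app's exact data structure):

```javascript
// Find the path from startId to endId that maximises the product of edge probabilities.
function mostProbablePath(edges, startId, endId) {
  // Adjacency list keyed by source node id.
  const adjacency = new Map();
  edges.forEach((e) => {
    if (!adjacency.has(e.source)) adjacency.set(e.source, []);
    adjacency.get(e.source).push(e);
  });

  // best holds, for each reachable node, the highest-probability path found so far.
  const best = new Map([[startId, { prob: 1, path: [startId] }]]);
  const queue = [startId];

  // Label-correcting search: re-visit a node whenever its best path improves.
  // This terminates on a DAG because no path can revisit a node.
  while (queue.length) {
    const u = queue.shift();
    for (const e of adjacency.get(u) || []) {
      const prob = best.get(u).prob * e.probability;
      if (!best.has(e.target) || prob > best.get(e.target).prob) {
        best.set(e.target, { prob, path: [...best.get(u).path, e.target] });
        queue.push(e.target);
      }
    }
  }
  return best.get(endId) || null; // null if endId is unreachable from startId
}
```

The returned path is just a list of node ids, which is exactly what you need in order to highlight the corresponding SVG elements.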
6) Make it work on mobile
Hover doesn't exist on touchscreens, so I used a simple mapping: tap for tooltip-level inspection, and long-press for modal-level detail. This keeps the same interaction model across devices: quick inspection vs deeper dive.
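A simple way to implement that mapping is a timer-based long-press detector; the 500 ms threshold and the showTooltip/openModal helpers are assumptions for this sketch:

```javascript
function attachTouchHandlers(el, record) {
  let timer = null;

  el.addEventListener("touchstart", () => {
    timer = setTimeout(() => {
      timer = null;
      openModal(record);          // held past the threshold: deeper dive
    }, 500);
  });

  el.addEventListener("touchend", (e) => {
    if (timer !== null) {         // released before the threshold: treat as a tap
      clearTimeout(timer);
      timer = null;
      showTooltip(el, record);
    }
    e.preventDefault();           // suppress the synthetic mouse events that follow a touch
  });

  el.addEventListener("touchmove", () => {
    clearTimeout(timer);          // scrolling cancels both interactions
    timer = null;
  });
}
```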
7) Validate and fail gracefully
Interactive figures can break trust quickly if a click yields nonsense or a missing asset breaks the page. Two practices help (both sketched after the list):
- Schema checks: validate the JSON structure when the SVG is loaded.
- File existence checks: if auxiliary media (videos/images) are missing, show a clear message and keep the rest usable.
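Both checks can be small; here is a sketch in which the required top-level fields and the video field names are illustrative:

```javascript
// Schema check: fail loudly (and early) if the JSON doesn't have the expected shape.
function validateData(data) {
  const required = ["nodes", "edges"];
  const missing = required.filter((key) => !Array.isArray(data[key]));
  if (missing.length) {
    throw new Error(`JSON is missing or has malformed fields: ${missing.join(", ")}`);
  }
}

// File existence check: a HEAD request tells us whether an asset is actually there.
async function mediaExists(url) {
  try {
    const res = await fetch(url, { method: "HEAD" });
    return res.ok;
  } catch {
    return false; // network error: treat the asset as unavailable
  }
}

// Example: keep the modal usable even when the video is missing.
async function populateVideo(record, modalBody) {
  if (await mediaExists(record.modal.video)) {
    modalBody.innerHTML = `<video src="${record.modal.video}" controls></video>`;
  } else {
    modalBody.textContent = "Video unavailable for this node.";
  }
}
```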
I've made the underlying code available as a lightweight template at github.com/m-groom/interactive-svg. To keep the repository small and easy to reuse, it does not include the paper-specific assets and datasets (the SVG/JSON inputs and the MP4/PNG media directories). If you want to run the full experience locally, you'll need to supply your own versions of:
- svg_files/ (figure exports)
- json_files/ (data/metadata driving interactivity)
- mp4_files/ (video assets)
- png_files/ (image assets)
The intended workflow is that you replace those assets with your own paper's figures/data, while reusing/modifying the interaction code as needed.