New AI Skill - Sync Roadmaps from Miro to Airtable
The Practical AI for Product Ops Series
Prologue
I’ve made no secret that until early 2026, I was well behind the times on all things AI. Sure, I could ask Claude a question, do a little research, or have it rewrite an email for me. But doing deep analysis, complex data work, and in particular creating and running customised routines between platforms - I was honestly both scared and lost. I didn’t quite trust AI, preferring to write static queries by hand. I knew that would work, but it took hours (even days). I was scared to admit to myself how far I was behind my peers. And I didn’t know where to start to close this gap.
In Q1, I needed to change this. I did. I put my big-boy pants on and started to play and explore. I burned a lot of tokens, but it was really worth it. I concentrated on semi-automating a very specific task I knew well - Sprint Performance Reporting - because over the past 18 months, I have refined and refined and refined this to death. This meant I knew exactly what I wanted for inputs and outputs, I kinda knew how it should run internally, and I knew the data well enough to test every aspect. It was a very safe project.
This is my first advice for learning - take something you know really well, and AI the s**t out of it! You’ll obsess over getting it perfect, you’ll get frustrated over the foibles of AI understanding your questions and tokens running out, you’ll geek out when you ask it to ‘self-evaluate to make it as efficient as possible’ and see how many tokens it can save itself… and you.
Ironically, on this specific project, where I was reporting out of Airtable (the data) and into Confluence (the reports), I’ve since abandoned the skill and used Airtable’s AI to build a better and even more automated report (largely due to some data/memory limitations and MCP limitations at the time). And that is perfectly OK - it did the job of throwing me into AI at the deep end, but with a big-ass lifejacket.
And since then, I’ve been largely building a new key routine or skill every week - and in this new series of articles, I want to share the details of what I have built that you - Product Ops professionals - can take away, use, experiment with or adapt for your own needs. Light on theory, heavy on detail and value - as always from me!
A couple of things you will notice from these articles on AI:
I affectionately say ‘we’ a lot - Claude and I - because it really does feel like a partnership to solve these problems. Claude may not have feelings, pride or an ego - yet - but I’d not be able to achieve this without it.
In the spirit of that partnership - and my rapid use of AI - I do get Claude to review the journeys we go on with each project and do the first draft of these specific articles. I am happily upfront about this - largely because it has the details; it remembers the details better than I do! While I don’t do this elsewhere (I do, more often, ask AI to review a post after writing it - who doesn’t!), I have zero problem with this partnership if the output is accurate, truthful, informative and useful to the reader.
The Problem - Roadmaps in Miro, Details in Airtable
Our roadmap lives in Miro. Like probably all Miro-housed roadmaps, it’s a large, visual board where Initiatives are represented as coloured boxes arranged across squad swimlanes (horizontal) and sprint columns (vertical). Product Managers drag them around, resize them, and rename them. It’s the living, breathing plan.
Our product database lives in Airtable. Initiatives, squads, sprints (reporting) — it’s all structured, relational, and drives our reporting, performance summaries, and portfolio view.
The problem? These two systems don’t talk to each other. Every time the roadmap changes, someone has to manually update the Airtable records. New initiative? Create a record. Initiative moved to a different sprint? Update the dates. Renamed? Update the name. All of this then cascades into sprint reporting links, the squad capacity, and so much more. It’s a deep, complex system that has transformed how we report on progress, but has always been limited in what it can show us about what is coming up. Understandably, Product Managers have not wanted to maintain both systems. I’ve not blamed them for this! Manual work means drift. The roadmap says one thing; the database says another. That’s a Product Ops nightmare.
For clarity, we’ve long settled that Miro is a much better planning tool for our teams - Airtable just doesn’t have that creative drag-drop functionality out of the box (yet!), and we have been ok with this.
The Idea
What if I could get AI - Claude - to read the Miro board and automatically create or update the matching records in Airtable? Airtable does have a Miro integration, but (AFAIK) the Miro API is very limited in what is shared. I tried it, and quickly discounted it months ago. Additionally, Miro is a visual tool, with information contained within the physical position of shapes, especially in roadmaps.
So I asked Claude if it could interpret the locations of shapes in relation to others, which, unbelievably, it can do quite easily! And so I could get it to identify each shape on the roadmap board and, based on its alignment with the Squad shape to the far left and the Sprint shape(s) along the top, translate it into an initiative, owned by squad x, and starting/ending based on sprints y and z.
That’s exactly what I built using Claude’s MCP integrations for Miro and Airtable.
What the Skill Actually Does
1. It reads every shape on the Miro roadmap frame. Using the Miro MCP, Claude paginates through all items in the frame, extracting each shape’s position (x, y), dimensions (width, height), colour, and text content.
2. It filters for Initiatives. Not everything on a Miro board is an initiative. There are squad labels, sprint headers, month markers, PM names, section dividers, and background shapes. The skill identifies initiative boxes specifically: shapes with text content, above a minimum size threshold. Everything else gets discarded.
3. It maps each initiative to a squad. Each squad has a horizontal swimlane on the board, defined by the position and height of the squad label shape. The skill checks where each initiative’s vertical centre falls within those swimlane zones. If it lands cleanly in a swimlane, it gets assigned to that squad. If it falls on a boundary between two squads, it gets flagged for my review.
4. It maps each initiative to sprints. Sprint columns are defined by the x-positions of the sprint date headers on the board. The skill calculates the boundary between adjacent sprints as the midpoint between their header positions, then checks which columns each initiative’s horizontal span overlaps with. An initiative that stretches across three sprint columns gets assigned a start date from the first sprint and an end date from the last.
5. It links Sprint Reporting records. This is where it gets interesting. In our Airtable setup, Initiatives’ start/end dates are determined by the sprint they are worked on in. Each squad sprint (conversation for another time!) is listed in the database and has a start/end date. So for each initiative, Claude looks up the correct Sprint records for that squad and those sprints, and links them. And this translates back to start/end dates for the initiative, too.
6. It creates the Airtable records. For each initiative: shape name as the Initiative name/title, squad link, delivery start and end dates, Sprint Reporting links, the Miro Widget ID (as a permanent anchor), and a deep link back to the exact shape on the Miro board.
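The swimlane and sprint-column logic in steps 3 and 4 can be sketched in a few lines. This is a simplified illustration, not the actual skill: the `Shape` dataclass and all coordinates are hypothetical stand-ins for what the Miro MCP returns for each board item.

```python
from dataclasses import dataclass

@dataclass
class Shape:
    text: str
    x: float       # centre x on the board
    y: float       # centre y on the board
    width: float
    height: float

def squad_for(initiative, squad_labels):
    """Assign an initiative to the swimlane whose vertical band contains
    its vertical centre. Returns None (flag for review) on a boundary."""
    hits = [s for s in squad_labels
            if s.y - s.height / 2 <= initiative.y <= s.y + s.height / 2]
    return hits[0].text if len(hits) == 1 else None

def sprints_for(initiative, sprint_headers):
    """Return the sprint columns the initiative's horizontal span overlaps.
    Column boundaries are the midpoints between adjacent header x-positions."""
    headers = sorted(sprint_headers, key=lambda s: s.x)
    # Column edges: midpoint between each adjacent header pair,
    # open-ended at both extremes of the board.
    edges = [float("-inf")]
    for a, b in zip(headers, headers[1:]):
        edges.append((a.x + b.x) / 2)
    edges.append(float("inf"))
    left = initiative.x - initiative.width / 2
    right = initiative.x + initiative.width / 2
    return [h.text for h, lo, hi in zip(headers, edges, edges[1:])
            if left < hi and right > lo]
```

An initiative spanning three columns comes back with all three sprints, so its start date comes from the first and its end date from the last, exactly as described above.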
How Claude and I Refined It Together
This wasn’t a one-shot build. It was iterative, and the iteration was where the real value emerged.
First, I assessed viability. I asked Claude to explore the Miro board and tell me what it could see. Could it identify the squads? The sprints? The initiative boxes? The answer was yes — emphatically. It correctly identified all squad rows, all sprint columns, and ~100+ initiative rectangles, all from the raw shape data.
I tested with 5 items. I asked Claude to pick 5 Initiatives and run the full pipeline: extract from Miro, map to squad, map to sprints, and create in Airtable. This immediately surfaced a few learnings about the Airtable database structure I had built over the past 18 months. That’s the kind of thing you only discover when you actually try to write a record and the link doesn’t resolve.
We designed change detection. This is arguably the most important part. The Miro Widget ID is a permanent, stable identifier — it doesn’t change when someone renames or repositions a shape. So we designed a change detection routine that re-reads the board, matches shapes to existing Airtable records by Widget ID (by creating a new field to store the widget/shape ID when the Initiative is created), and detects four types of changes: name changes, position changes (which mean date changes), new items, and removed items. Removed items get flagged, never auto-deleted. The change report is presented for approval before anything gets updated.
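The shape of that change-detection routine is roughly this - a minimal Python sketch, assuming both sides have been normalised into hypothetical dicts keyed by Widget ID (the real skill works through the MCPs, not local dicts):

```python
def diff_board(miro_shapes, airtable_records):
    """Compare current Miro shapes to previously synced Airtable records,
    matched on the permanent Miro Widget ID. Returns a change report for
    approval; nothing is written or deleted automatically.
    Both arguments: {widget_id: {"name": str, "sprints": [str, ...]}}."""
    report = {"new": [], "removed": [], "renamed": [], "moved": []}
    for wid, shape in miro_shapes.items():
        record = airtable_records.get(wid)
        if record is None:
            report["new"].append(wid)           # on the board, not in the base
            continue
        if shape["name"] != record["name"]:
            report["renamed"].append((wid, record["name"], shape["name"]))
        if shape["sprints"] != record["sprints"]:
            report["moved"].append((wid, record["sprints"], shape["sprints"]))
    for wid in airtable_records:
        if wid not in miro_shapes:
            report["removed"].append(wid)       # flagged only, never auto-deleted
    return report
```

Because the Widget ID never changes, renames and repositions fall out naturally as field-level diffs rather than looking like a delete-plus-create.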
We captured everything as a skill. All the workflow logic, every field ID, every sprint boundary calculation, every squad swimlane position, every Sprint record ID — it’s all stored in two files (SKILL.md and config.md) that Claude reads before executing. This means I can pick this up in any conversation, weeks from now, and Claude has the full context. I can also export and share this skill with colleagues to run themselves, though I’m still thinking about how and when this skill should run, to balance timely updates against the skill over-running and colleagues treading on each other’s toes.
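For a sense of what the config file holds, here is a stripped-down skeleton of what a config.md like ours might look like - every ID and coordinate below is a placeholder for illustration, not our real setup:

```markdown
# config.md — board and base constants (all values illustrative)

## Airtable
base_id: appXXXXXXXXXXXXXX          # placeholder base ID
initiatives_table: tblXXXXXXXXXXXXXX
field_widget_id: fldXXXXXXXXXXXXXX  # stores the Miro Widget ID anchor

## Miro
board_id: uXjVXXXXXXX
roadmap_frame_id: 34580000000000000

## Swimlanes (squad label y-centre and height, board coordinates)
Squad A: y=120, height=240
Squad B: y=360, height=240

## Sprint columns (header x-centres; boundaries are midpoints)
Sprint 24.1: x=100
Sprint 24.2: x=500
```

Keeping these constants out of the skill logic is what makes it portable: when the board layout shifts, only config.md needs updating.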
The Gotchas
A few things caught us out that are worth sharing:
Miro’s MCP is read-only for existing shapes. I’d originally planned to write the Airtable record URL back into the Miro shape as a link. Can’t do it — the MCP doesn’t support updating existing shapes. The workaround is the Miro Widget ID stored in Airtable, which gives us a permanent anchor and a deep link in the other direction. It’s a little annoying because I cannot provide a mechanism to get from the shared roadmap directly to the Initiative record in Airtable… but the MCP is just limited, and there’s nothing I can do about this. For now.
Initiatives visually starting/ending mid-sprint cannot be accurate. We have a number of Initiatives that, on the roadmap, suggest they are only half a sprint of work, or 1.5 sprints, etc. We’re not currently handling this in the sync - because we could never hope to be totally accurate based on pixel-perfect locations. Is it exactly half, or 1/3, or 1/4… it’s not worth the faff, to be honest. If the Initiative is located visually within any part of a sprint, it is deemed to span the whole sprint.
The Time Savings
Let’s be conservative about this.
Our roadmap has approximately 100 delivery Initiatives across all squads. Creating each initiative record in Airtable manually — including the name, squad link, start and end dates, phase tag, and sprint reporting links — takes a PM roughly 3-5 minutes - assuming they are in the zone to just bash them out one after another. For a full roadmap sync, that’s somewhere between 5 and 8 hours of manual work. This is just to transfer the information Miro ‘holds’. The Initiative records hold a lot more information - and we’re finalising some other skills to take care of this too (another article!).
This skill does it in under 10 minutes, plus review time. That is a 96%-98% saving in working time!
But the bigger saving isn’t the initial sync — it’s the ongoing maintenance. Every time the roadmap changes, the change detection routine can re-read the board and surface a diff report. Instead of a PM manually checking whether the Airtable records still match the Miro board (and inevitably missing something), they get a structured list of what’s changed, and approve the updates. That’s a few minutes instead of a recurring, error-prone task that never quite gets done. I’d estimate the time saving is probably the same here, too.
And the accuracy improvement is arguably more valuable than the time saving. The database and the roadmap stay in sync. The reporting is trustworthy. The portfolio view reflects reality.
What This Means for Product Ops
I’ve spent a lot of time talking and writing about AI within Product Operations. The 2026 predictions piece we published on Product Ops Confidential included a clear signal: AI’s primary value in product organisations is shifting from flashy demos to infrastructure. The hard work is in data quality, taxonomy, and making sure AI has the right underlying structure to work with.
This project is a concrete example of that. AI isn’t replacing a product manager’s judgment about what should be on the roadmap. It’s doing the operational translation between two systems — reading spatial positions and turning them into structured records. It’s plumbing. Valuable, time-saving, accuracy-improving plumbing.
For Product Ops practitioners looking to explore AI tooling, I repeat my advice: start with a real problem. Something you or your teams are doing manually, repeatedly, and imperfectly. Don’t start with the tool and look for a problem. Start with the friction and see if the tool can remove it.
And when you find something that works, capture it. Store the logic. Document the gotchas. Build it so you — or whoever comes after you — can run it again without starting from scratch.
That’s what Product Ops is about: making the operational side of product work seamlessly, so the people building the product can focus on building the product.
Graham
I’ll be speaking about this project and others at the virtual Product Ops Festival on April 8, 2026 - a free Product Ops conference hosted by PLA.