New AI Skill — Import Jira Sprints to Airtable (and Finally Track Unplanned Work Well)
The Practical AI for Product Ops Series
The second (well, technically third) in a series on building AI skills for Product Ops. This time: getting sprint data out of Jira and into a place where it’s actually useful.
If you are interested in my journey into building with AI and why I am writing this series, I recommend you read the Prologue in the first article:
A couple of things you will notice from these articles on AI:
I affectionately say ‘we’ a lot - Claude and I - because it really does feel like a partnership to solve these problems. Claude may not have feelings, pride or an ego - yet - but I’d not be able to achieve this without it.
In the spirit of that partnership - and my rapid use of AI - I get Claude to review the journey we go on with each project and write the first draft of these specific articles. I am happily upfront about this, largely because it remembers the details better than I do! While I don’t do this elsewhere (though I do often ask AI to review my writing - who doesn’t!), I have zero problem with this partnership as long as the output is accurate, truthful, informative and useful to the reader.
The Problem(s): Jira’s long-term reporting is lacking, and key information is not exportable
Cards on the table, and I make no secret of this - I really do not like Jira. Don’t get me wrong, it is ubiquitous, and it does the job engineers need. But it really is not for product teams (Jira Product Discovery goes some way towards this). This article is not about my list of complaints about the platform - Substack character limits would hit me…
This is about some specific limitations around reporting that led me to solve this need with Airtable, utilising the data in Jira. Similarly, my love for Airtable would break article character limits… for another time.
Over the past year, I’ve built out our divisional reporting - per squad, per sprint, and per sprint time period - providing high-level, velocity-centred performance reports and initiative progress updates, plus a deep dive into each squad and their (we hope) improving performance over time. Jira just doesn’t do this well, nor does it cover everything we want to follow, particularly given that not all of the information is sourced within Jira.
For the past year, this has been a fortnightly (2-week sprint) export into a spreadsheet (using the Sheets Jira applet), a little data tidy, a copy/paste into Airtable, a little more data tidy, some manual tasks to snapshot aggregate performance stats, and spot-check validations.
So problem 1: How much of this can be removed, automated or reduced?
If you were wondering, this information needs to be a snapshot so that reports do not change. If it were a sync (which there is a facility for in these platforms), as tickets change over time, so would the synced information and potentially the report stats historically too. So it has always needed to be a snapshot and export.
As part of this reporting, we have always wanted to keep an eye on unplanned work creeping into sprints. Historically, this was a big issue - those sneaky C-suite colleagues throwing stuff into the mix! We track this to understand both the scale of it and the reasons for it: to reduce it where we can, and to accept it where there is a legitimate reason (a great example: the sprint after a big release, fixing issues that surface quickly).
This feeds into the wider discussion and monitoring of ‘What did we plan, what did we deliver, how successful were we in this?’
I’m sure the Agile purist mafia reading this will do their nut over the mere concept of unplanned work 🖕🏻 but I digress…
It’s a simple question. Jira makes it remarkably hard to answer when it comes to the raw data, which I want to export. And this is a well-documented limitation and a common ask in the product community (just go search for this topic!). It’s buried in changelog data that no built-in report surfaces cleanly.
Our workaround to date has been for our squads to manually mark tickets (using labels) as unplanned, something I can export and use as a flag to report on. It works, but it is clumsy, prone to human error, prone to interpretation, and gosh darn it - another job for them to do.
Problem 2: Can we get the data we need to identify unplanned work, as data, from Jira?
Why not just use Jira reports?
I’ve had this conversation enough times to pre-empt it. Jira’s Sprint Report does show “issues added during the sprint” — but it’s a UI-only view. You can’t export it, you can’t query it via JQL, and you can’t pipe it into a database where it becomes part of a bigger picture alongside initiative progress, sprint goals, and team health metrics.
The Jira API doesn’t offer a neat addedDuringSprint flag either. To reconstruct this, you need to pull the changelog for each ticket, find the history entry where the sprint field changed, and compare that timestamp against the sprint start date. It’s doable — but it’s the kind of work that only an automated process will do reliably, sprint after sprint, across a dozen squads. Plus, I cannot write API calls, so I’d need an engineer to help me!
Note to the reader: I did consider adding in here a solution I tried that involved an automation and a field to record when a ticket enters a sprint. It is, however, long, boring and ended up not being scalable… 🙄
Enter Claude - again
What do I know Claude can already do?
Read (and write) to Jira, utilising the Atlassian MCP.
Read and write to Airtable, utilising the Airtable MCP. Additionally, because I have had Claude work with the Airtable MCP a lot, it now understands the database I have built, how it hangs together and where data should go.
With this in mind, I set about asking Claude IF it could extract the information that, to date, was being exported to the spreadsheet, replacing that step entirely. Spoiler - it did. ✅
Because this alone would do a chunky job for me in one touch, my next step was to write that information into Airtable, using a field map I produced to match the Jira fields to the Airtable ones. Along the way, Claude needed a little additional context and understanding, like who our squads are and how to construct our fixed sprint naming conventions (so that everything links up inside Airtable). It took a little longer, but Claude did a great job here, too. ✅
At this stage, I’ve removed the need for the export, data tweak and import into Airtable.
The hard bit: detecting unplanned work
In the Jira universe, an unplanned ticket is one whose changelog shows it was added to a sprint after that sprint’s start date… who knew! There is no shortcut to this; Claude potentially needs to check every ticket in every squad’s sprint for the reporting window - on average, around 400-odd tickets. But Claude had a great idea on this.
The unplanned detection uses two rules, applied in order.
The first is cheap: if the ticket was created after the sprint start date, it’s unplanned by definition. A bug raised on day three of the sprint was never in anyone’s plan. This resolves a chunk of tickets without any extra API calls. Remember, this is only concerned with tickets added after the sprint starts. Everything pulled in from the backlog or created new and added to the sprint during planning is just fine.
The second rule handles the subtler case: tickets that existed before the sprint started but were pulled in after it was already underway (likely from the backlog). For these, Claude fetches the full changelog and looks for the moment the sprint field was updated to include the current sprint. If that change happened after the sprint start date, it’s unplanned.
This is the part that makes the skill worth building. The changelog calls are the most API-intensive step, but they’re the only way to distinguish “planned from the start” from “added on day four because something caught fire.” Without this, you’re just counting tickets and not understanding what happened to the plan.
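Stripped of the API plumbing, the two rules reduce to a small piece of logic. A sketch, assuming a function name and date handling of my own rather than the skill’s actual code:

```python
from datetime import datetime

def classify(created, sprint_start, sprint_added=None):
    """Apply the two rules in order.
    `sprint_added` is the changelog timestamp for when the ticket
    joined the sprint (None means it was there from planning)."""
    # Rule 1 (cheap): created after the sprint started -> unplanned,
    # no changelog call needed.
    if created > sprint_start:
        return "unplanned"
    # Rule 2: a pre-existing ticket pulled in after the sprint
    # was already underway.
    if sprint_added is not None and sprint_added > sprint_start:
        return "unplanned"
    return "planned"

start = datetime(2024, 6, 1, 9, 0)
# Bug raised on day three of the sprint: rule 1 fires.
classify(datetime(2024, 6, 3, 14, 0), start)  # "unplanned"
# Backlog ticket from May, dragged in on day four: rule 2 fires.
classify(datetime(2024, 5, 20, 10, 0), start,
         sprint_added=datetime(2024, 6, 4, 11, 0))  # "unplanned"
```

Note that rule 1 never touches the changelog, which is what makes it cheap; only tickets that survive rule 1 cost an extra API call.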
Once classified, the ‘Unplanned’ label gets appended to any existing Jira labels on the ticket, and the data is then posted to Airtable - so a ticket tagged with ‘tech debt’ becomes ‘tech debt, Unplanned’. The original labels are preserved; we’re adding signal, not replacing it.
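The label merge itself is the easy bit - something like this (helper name hypothetical), with a guard so re-running the skill doesn’t stack duplicate labels:

```python
def merge_labels(existing, flag="Unplanned"):
    """Append the unplanned flag while preserving the ticket's
    existing labels; skip if it's already there (idempotent)."""
    return existing if flag in existing else existing + [flag]

merge_labels(["tech debt"])               # ["tech debt", "Unplanned"]
merge_labels(["tech debt", "Unplanned"])  # unchanged on a re-run
```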
Maintenance
So very geeky of me, but I get a great deal of satisfaction from asking Claude to self-evaluate the skills it produces, to try and make them faster, more efficient and definitely less costly in tokens - simply by asking: ‘Review this skill and identify ways for it to be improved, use fewer tokens and/or make fewer calls to platforms.’
Almost always, it will identify that it made 20 calls to Jira (for example) when 1 would do the job, saving silly numbers of tokens. It will sometimes move repeated steps into scripts as well, and often commit static information to memory (for example, our squad names) to reduce a cycle of calls to Airtable, only making that call if something is missing.
The Bigger Picture
Once this data lands in Airtable, it connects to everything. Sprint goals, initiative progress, and squad health metrics — they all live in the same base. A PM reviewing their end-of-sprint report can now see exactly how much unplanned work their squad absorbed, right alongside whether their sprint goals were met and how their initiatives are tracking.
That’s the real point. The value isn’t in the import itself; it’s in what becomes possible when sprint-level ticket data stops living in Jira’s silo and starts living next to the rest of your product data. The unplanned work percentage becomes a trend line. Patterns emerge across squads. Conversations shift from gut feel to evidence.
Now, all of this was already in place for me; the big difference is that the routine is now about 95% automated, saving around 30 minutes every 2 weeks. The unplanned-work flagging needs zero input from the squads, saving each of them a little time, removing the cognitive load (never underestimate this), and delivering 100% accuracy.
But as with so much with AI, the hard work isn’t the AI part. It’s knowing your data well enough to tell AI what to do with it.
Graham
Next up in the series is related to this, on how we’re about to assist our PMs even more with the things they DO need to enter for their reports (stuff that lives in their heads), and how Claude will prompt them for the specifics and only that.
If you have a great skill or routine that solves some product or product ops problems, and would like to guest publish in this series, reach out to Graham via LinkedIn