How I Built My Own Tools Inside Adobe Experience Cloud with AppBuilder
If you work with Adobe Experience Platform on a daily basis, you know the drill: the UI gives you just enough to get by, but whenever something goes wrong — or you need a quick overview across the board — you end up opening Postman, digging through Confluence, or clicking through dataset after dataset.
I got tired of it. So I built my own tooling — directly inside Experience Cloud — using Adobe AppBuilder. No extra infrastructure, no external dashboards, no new toolstack to maintain. Just a custom app that lives in the same browser tab as AEP.
In this post I’ll walk you through what AppBuilder actually is, what you get out of the box, and the three tools I built with it.
What is Adobe AppBuilder?
AppBuilder is Adobe’s platform for building custom applications that run natively inside Adobe Experience Cloud. It’s not a third-party integration layer; it’s a first-party developer platform built by Adobe, designed specifically for teams who want to extend or automate their Adobe stack without leaving the ecosystem.
What you get:
- Adobe I/O Runtime — a serverless function platform (built on Apache OpenWhisk) where your backend actions run. No servers to manage, no containers to maintain.
- CDN-hosted frontend — your React app (built with Adobe React Spectrum) is deployed to Adobe’s CDN automatically.
- Experience Cloud Shell integration — your app shows up inside experience.adobe.com as a native tab, with the same nav and authentication as AEP, Analytics, or Target.
- Unified auth — your serverless actions automatically inherit the user’s IMS token. No OAuth flows to build, no API keys to expose in the frontend. Calls to AEP APIs just work.

The development experience is CLI-driven via the aio CLI. You scaffold an app, write Node.js actions and React components, and deploy with aio app deploy. Local development runs with hot reload via aio app dev. It feels like a standard Node/React project — because it largely is one.
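To give you a feel for what a backend action looks like, here is a minimal sketch of an I/O Runtime action. The file path and parameter names are illustrative, but the shape is standard OpenWhisk: every action exports a main function that receives the invocation parameters and returns a result object.

```javascript
// Minimal I/O Runtime action sketch (hypothetical file: actions/hello/index.js).
// `params` carries query/body parameters plus any defaults from the app config.
function main(params) {
  const name = params.name || 'world';

  // Web actions return an HTTP-style result object.
  return {
    statusCode: 200,
    body: { message: `Hello, ${name}` },
  };
}

exports.main = main;
```

From here, aio app deploy publishes the action and the frontend in one step, and the function becomes callable from your React components.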
The key point: you’re calling the same AEP APIs that Adobe’s own UI calls. You’re just building a better interface for your specific workflows on top of them.
Three Tools I Built
1. Batch Error Viewer
The problem
When a batch ingestion fails in AEP, the native UI tells you very little. You see a status of “Failed” and maybe a high-level error count. There is actually a way to get more detail through the UI — but it’s not in the dataset view where you’d expect it. You have to navigate to Sources, find the right dataflow, locate the specific dataflow run, and then click “Preview Error Diagnostics”. It works, but it’s buried deep enough that most people never find it, and even when you do, the information isn’t always complete or easy to act on. The more reliable path ends up being two consecutive API calls in Postman: one to get the list of failed files in the batch, then one per file to retrieve the actual validation errors. Either way — it’s slow, repetitive, and the last thing you want to be doing when something is broken in production.
What I built
A simple tool: paste a Batch ID, click a button, get a consolidated view of all validation errors across all failed files in that batch. The AppBuilder action handles both API calls server-side, merges the results, and returns a clean JSON response to the UI. What used to take 5 minutes in Postman now takes 10 seconds — without leaving Experience Cloud.
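The server-side part of the action can be sketched roughly like this. The endpoint paths are assumptions modeled on the AEP Data Access API and the response field names are placeholders — check the routes and payload shapes in your own environment before relying on them.

```javascript
// Sketch: fetch failed files for a batch, then the errors per file, and merge.
// Endpoint paths and response fields are ASSUMPTIONS, not verified routes.
async function fetchBatchErrors(batchId, { token, apiKey, fetchImpl = fetch }) {
  const base = 'https://platform.adobe.io/data/foundation/export';
  const headers = { Authorization: `Bearer ${token}`, 'x-api-key': apiKey };

  // Call 1: list the failed files in the batch.
  const filesRes = await fetchImpl(`${base}/batches/${batchId}/failed`, { headers });
  const files = (await filesRes.json()).data || [];

  // Call 2 (one per file): retrieve the validation errors for each failed file.
  const perFile = await Promise.all(
    files.map(async (f) => {
      const url = `${base}/batches/${batchId}/failed?path=${encodeURIComponent(f.name)}`;
      const errRes = await fetchImpl(url, { headers });
      return { file: f.name, errors: await errRes.json() };
    })
  );

  return consolidate(perFile);
}

// Merge the per-file error lists into one flat, UI-friendly array.
function consolidate(perFile) {
  return perFile.flatMap(({ file, errors }) =>
    (Array.isArray(errors) ? errors : [errors]).map((e) => ({ file, ...e }))
  );
}
```

The consolidation step is what saves the time: the UI receives one flat list instead of you stitching together per-file responses by hand.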

2. Launch Changelog
The problem
Adobe Launch does have a built-in changelog — but it’s scoped to a single property, buried inside each property’s activity log, and gives you no cross-property overview. If you manage multiple properties, getting a picture of what changed across all of them means clicking through each one individually. In practice, that doesn’t happen. Teams either maintain a manual changelog in Confluence that nobody remembers to update, or they skip it entirely and rely on memory when something breaks.
What I built
A changelog generator: select a property and a year, and the app fetches all published libraries for that property via the Reactor API. For each library it compares the list of rules, data elements, and extensions against the previous publish, and surfaces what was added, modified, or removed.
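The diff step itself is simple set arithmetic. Here is a sketch: it assumes each resource from the Reactor API carries an id and an updated_at timestamp (treat the exact field names as assumptions about the payload shape).

```javascript
// Sketch: classify changes between two consecutive published libraries.
// `previous` and `current` are arrays of resources (rules, data elements,
// extensions), each assumed to have { id, updated_at }.
function diffLibraryResources(previous, current) {
  const prevById = new Map(previous.map((r) => [r.id, r]));
  const currById = new Map(current.map((r) => [r.id, r]));

  // In current but not in the previous publish.
  const added = current.filter((r) => !prevById.has(r.id));
  // In the previous publish but no longer attached.
  const removed = previous.filter((r) => !currById.has(r.id));
  // Present in both, but touched since the last publish.
  const modified = current.filter((r) => {
    const prev = prevById.get(r.id);
    return prev && prev.updated_at !== r.updated_at;
  });

  return { added, modified, removed };
}
```

Run this over each pair of consecutive libraries and you have the changelog for the whole year.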

No more manual Confluence updates. The changelog is always accurate because it’s derived directly from the Reactor API — it reflects exactly what was published, not what someone remembered to write down.
Where it goes next
The current implementation is a starting point. The same Reactor API that powers the changelog can be queried in the other direction — instead of asking “what changed in this property”, you can ask “where is this data element used, and when was it last modified”. A natural next step is a search feature: enter a data element or rule name, and get back every property it appears in, what it’s connected to, and its last publish date. Useful for impact analysis before a change or for auditing.
3. Dataset Health Dashboard
The problem
AEP does have a dataset overview — but it only reflects the current state, and the information it surfaces is minimal. You can click into any individual dataset to get more detail, but there is no cross-dataset overview that lets you assess the health of your entire sandbox at a glance. There’s no failure rate, no trend, no indication of whether a dataset that looks fine today has been silently degrading for the past week. For streaming datasets especially — where data should be arriving continuously — that’s not enough. A dataset can show a recent successful batch and still have a 40% failure rate over the last seven days. You wouldn’t know unless you clicked into each one individually and manually pieced it together.
What I built
A dashboard that gives you a single-screen health overview of all datasets in your sandbox. It combines two APIs: the Catalog API for current batch status and record counts, and the Observability Insights API for time-series metrics (success rate, failure rate, daily volume) over a selected period.

The result is a table where you can immediately see which datasets are healthy, which are degraded (success rate below 95%), which are stale (no ingestion in 24 hours), and which have active failures. Clicking into a row shows the last 10 batches — and if a batch has errors, clicking “View Errors” opens the Batch Error Viewer pre-filled with that Batch ID.
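The classification behind those row colors can be sketched as a small pure function. The thresholds mirror the ones above (95% success rate, 24 hours of silence); the input field names are assumptions about the merged Catalog/Observability payload.

```javascript
// Sketch: classify one dataset row for the dashboard.
// `successRate` is 0..1 over the selected period, `lastIngestedAt` is a
// millisecond timestamp (or null if nothing was ever ingested).
function classifyDataset({ successRate, lastIngestedAt, activeFailures }, now = Date.now()) {
  const DAY_MS = 24 * 60 * 60 * 1000;

  if (activeFailures > 0) return 'failing';
  if (lastIngestedAt === null || now - lastIngestedAt > DAY_MS) return 'stale';
  if (successRate < 0.95) return 'degraded';
  return 'healthy';
}
```

Note the ordering: active failures trump staleness, and staleness trumps a degraded success rate, so each dataset lands in exactly one bucket.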

That last part — the integration between features inside the same app — is where AppBuilder starts to really pay off. You’re not dealing with separate tools that don’t know about each other. It’s one cohesive internal product.
Is It Worth It?
The setup has some friction. You need an AppBuilder entitlement on your Adobe contract, you need to configure Developer Console workspaces and Admin Console permissions, and there are a few sharp edges in the CLI and the Reactor API that will catch you off guard. It’s not a zero-effort platform.
But once it’s set up, the velocity is good. You’re writing standard Node.js and React. You deploy in under a minute. You don’t manage servers, you don’t set up authentication, and your app lives exactly where your users already work.
For teams that live in the Adobe ecosystem, the alternative is usually one of these: build an external tool (more infrastructure, separate login, separate deployment), use Postman collections as shared tooling (not a real solution), or build nothing and absorb the manual effort. AppBuilder is a better answer than all three.
If you’re an AEP or Analytics engineer and you find yourself doing the same manual API dance more than twice a week — it’s probably worth a weekend.