Why this exists.
Scan a set of emerging markets for a specific use case. Atlas shows you the ones where people want what you sell — but no one local is doing it well.
A patchwork that breaks.
- A shared spreadsheet nobody opens on time
- Prompts copy-pasted into a chat window
- A contractor who disappears for two weeks
- Output that lands in a different shape every run
A workflow that ships.
- One brief, one cadence, one place to read it
- Every claim cited, every step reviewable
- A finished artifact in your team’s format
- A roadmap driven by opportunity, not by what feels familiar
Four moves.
Atlas runs each move with a preview attached — so you know what lands before you ever hit send. Skip freely once you know which parts carry the weight.
- 01 Kickoff
Describe the problem you solve.
Atlas starts with the brief and asks only for what's missing. No boilerplate intake form, no setup meeting.
- 02 Gather
Pick the countries to scan.
Sources are pulled, cleaned, and cross-checked against prior runs — every claim carries a citation you can trace.
- 03 Reason
Atlas maps demand against what is on offer locally.
The agent thinks out loud where it matters — trade-offs named, assumptions surfaced, judgments explained.
- 04 Draft
Pick the emptiest markets first.
A first draft lands in the format your team already uses. You edit the last 10%, not the first 90%.
Configure Atlas.
Atlas runs on structured setup, not freeform prompts. Fill the fields once and the run is reproducible every time — same agents, same sources, same output shape.
Inputs in, outputs out.
Atlas runs on a short list of inputs and hands back finished artifacts. Skip any input — the agent will ask for it the first time it needs it.
You provide:
- One source of truth (CSV, CRM, or warehouse)
- A one-paragraph brief on the goal
- The KPI you want to move
You get:
- A scored, cited brief you can forward
- A structured file for downstream automation
- An alert when anything material changes
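Atlas's actual setup schema isn't shown here, but the reproducibility claim above — same fields in, same run out — can be sketched in a few lines of Python. Every field name below is hypothetical, chosen to mirror the inputs listed on this page; it is an illustration of structured setup, not Atlas's real API.

```python
from dataclasses import dataclass, asdict
import hashlib
import json

# Hypothetical schema — field names mirror this page's input list, not a real Atlas API.
@dataclass(frozen=True)
class RunConfig:
    source: str            # one source of truth: CSV path, CRM, or warehouse
    brief: str             # a one-paragraph brief on the goal
    kpi: str               # the KPI you want to move
    cadence: str = "weekly"

def run_fingerprint(cfg: RunConfig) -> str:
    """Identical structured inputs always hash to the identical run fingerprint."""
    payload = json.dumps(asdict(cfg), sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

cfg = RunConfig(
    source="pipeline.csv",
    brief="Find emerging markets where demand outpaces local supply.",
    kpi="qualified leads",
)
# Rebuilding the same config yields the same fingerprint — the run is reproducible.
assert run_fingerprint(cfg) == run_fingerprint(RunConfig(**asdict(cfg)))
```

The point of the sketch: freeform prompts drift between runs, while a fixed set of fields gives you a stable fingerprint you can diff against last week's run.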
A finished artifact, not a to-do list.
Every run ends the same way — a packaged brief in the channel your team already reads. Here's a preview of what shows up.
Here's the brief for this week. I ran the playbook end-to-end, flagged anything that shifted against last run, and packaged the output for Slack and the shared drive.
Where teams stall.
Three ways we see this go sideways — and how to avoid each one.
- Pointing the agent at stale or half-connected data. Clean the source once, compound every run after.
- Running it once and forgetting. Put it on a weekly cadence so the numbers actually move.
- Skipping the first review. Check the first run by hand — trust compounds from there.
Before you start.
How much data do you need? Usually one source is enough to see value. Atlas can run on a CSV paste for the first pass; connect the CRM, the data warehouse, or the tool of record once you want it to run on its own.
How often should it run? Most teams put this on a weekly cadence. That's the sweet spot between "too noisy to read" and "too stale to act on". Adjust once you see how the numbers behave.
Who should own it? Whoever owns the downstream action. Atlas hands back a finished result — the value is in somebody actually reading it and shipping the decision the same day.
Will the first run be perfect? It usually isn't. The first pass is calibration — tell Atlas what was off, rerun, and the second run is close. By the fourth it reads like a teammate.