Dashboards Are for Looking. APIs Are for Doing.
If your ASO workflow involves opening a browser, clicking through tabs, and manually exporting CSVs, you're doing it wrong. Not morally wrong — just inefficiently. As developers, we automate repetitive tasks. ASO should be no different.
The problem with dashboard-only ASO tools is that they trap your data behind a UI. You can look at your keyword rankings in a pretty chart, but you can't pipe them into a script, trigger an alert in Slack, or feed them to an AI agent. The data exists, but it's locked behind mouse clicks.
Our API and CLI exist to fix this. Every piece of data you'd find in a dashboard is available as structured JSON, callable from any script, cron job, CI pipeline, or AI agent.
The CLI: ASO in Your Terminal
The CLI is a standalone tool — no browser, no account dashboard, just commands. Install it and authenticate once:
npm install -g @asotool/cli
aso auth login
Then start researching:
# Search for keyword data
aso keywords search "photo editor" --country us
# Get keyword suggestions based on a seed
aso keywords suggestions "meditation" --country us --limit 20
# Look up an app
aso apps lookup 123456789 --store ios
# Check ASO score for any app
aso apps aso-score 123456789 --store ios
# Extract keywords from an app's metadata
aso apps extract-keywords 123456789 --store ios
Every command supports --json for machine-readable output. Pipe it to jq, feed it to a script, or redirect to a file — standard Unix workflow.
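For example, assuming the suggestions output is a JSON array of objects with `keyword` and `popularity` fields (the field names here are assumptions; check your actual `--json` output), filtering is one jq call away:

```shell
# Hypothetical --json output shape: field names are assumptions
json='[{"keyword":"photo editor","popularity":62},{"keyword":"photo filters","popularity":41}]'

# Keep only keywords above a popularity threshold, one per line
echo "$json" | jq -r '.[] | select(.popularity > 50) | .keyword'
```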
API Endpoints: Seven Tools, Zero Rate Limits
The API serves the same data as the CLI, just over HTTP. Authenticate with a Bearer token:
curl -H "Authorization: Bearer aso_your_key_here" \
"https://asotool.app/api/v1/keywords/search?q=meditation&country=us&store=ios"
The seven stateless endpoints cover the full research workflow:
| Endpoint | What It Does |
|---|---|
| `GET /apps/lookup` | App metadata by store ID |
| `GET /apps/search` | Search apps in a store |
| `GET /apps/aso-score` | ASO audit score (0-100) with breakdown |
| `GET /apps/extract-keywords` | Extract keywords from app metadata |
| `GET /apps/reviews` | Fetch app reviews |
| `GET /keywords/search` | Keyword research (volume, difficulty) |
| `GET /keywords/suggestions` | Keyword suggestions from a seed |
These are stateless by design. No session, no cookies, no multi-step flows. Call any endpoint in any order. This matters for automation — your scripts don't need to manage state, and your AI agents don't need to follow a prescribed sequence.
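A thin wrapper makes the statelessness concrete. This is a sketch, assuming your key is exported as `ASO_API_KEY`; the `q`, `country`, and `store` parameters appear in the curl example above, but the `id` parameter on `/apps/lookup` is an assumption, so check the API reference for exact names:

```shell
# Sketch of an API helper; ASO_API_KEY holds your Bearer token.
aso_api() {
  # $1 = endpoint path plus query string
  curl -sf -H "Authorization: Bearer $ASO_API_KEY" \
    "https://asotool.app/api/v1$1"
}

# Stateless endpoints compose in any order:
# aso_api "/keywords/search?q=meditation&country=us&store=ios" | jq .
# aso_api "/apps/lookup?id=123456789&store=ios" | jq '.title'   # 'id' param name is assumed
```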
Practical Workflows
1. Weekly Keyword Research Report
A bash script that runs every Monday morning, researches keywords for your app, and saves the results:
#!/bin/bash
# weekly-keyword-check.sh
DATE=$(date +%Y-%m-%d)
APP_ID="123456789"
# Get current suggestions
aso keywords suggestions "habit tracker" --country us --json > "/tmp/kw-$DATE.json"
# Get difficulty scores for top suggestions
mkdir -p reports
jq -r '.[].keyword' "/tmp/kw-$DATE.json" | while read -r kw; do
  aso keywords search "$kw" --country us --json
done | jq -s '.' > "reports/keywords-$DATE.json"
echo "Report saved: reports/keywords-$DATE.json"
Schedule this with cron and you have automated keyword monitoring without ever opening a browser.
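A minimal crontab entry for that, running at 8 a.m. every Monday (the paths are placeholders; adjust to wherever you keep the script):

```
# m h dom mon dow  command
0 8 * * 1  /home/you/bin/weekly-keyword-check.sh >> /home/you/logs/keyword-check.log 2>&1
```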
2. Competitor Metadata Diff
Track when competitors change their App Store metadata:
#!/bin/bash
# competitor-diff.sh
COMPETITOR_ID="987654321"
STORE="ios"
PREV="data/competitor-prev.json"
CURR="data/competitor-curr.json"
# Fetch current metadata
mkdir -p data
aso apps lookup "$COMPETITOR_ID" --store "$STORE" --json > "$CURR"
# Compare with previous snapshot
if [ -f "$PREV" ]; then
  if ! diff <(jq '.title, .subtitle, .description' "$PREV") \
            <(jq '.title, .subtitle, .description' "$CURR"); then
    echo "Competitor metadata changed!"
    # Could send a Slack notification here
  fi
fi
cp "$CURR" "$PREV"
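That Slack notification is a few lines with an incoming webhook. A sketch, assuming you have created a webhook in Slack and exported its URL as `SLACK_WEBHOOK_URL`; the JSON quoting here is naive, so keep messages free of double quotes:

```shell
# Hypothetical Slack notifier; SLACK_WEBHOOK_URL is your own incoming-webhook URL.
notify_slack() {
  curl -sf -X POST -H 'Content-Type: application/json' \
    --data "{\"text\": \"$1\"}" "$SLACK_WEBHOOK_URL"
}

# In competitor-diff.sh, replace the echo with something like:
# notify_slack "Competitor $COMPETITOR_ID changed its App Store metadata"
```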
3. ASO Score Tracking Across Your Portfolio
If you manage multiple apps, track their ASO scores over time:
#!/bin/bash
# portfolio-aso-scores.sh
APPS=("123456789:ios" "987654321:android" "555555555:ios")
DATE=$(date +%Y-%m-%d)
for entry in "${APPS[@]}"; do
IFS=':' read -r app_id store <<< "$entry"
score=$(aso apps aso-score "$app_id" --store "$store" --json | jq '.score')
echo "$DATE,$app_id,$store,$score" >> data/aso-scores.csv
done
Plot the CSV in a spreadsheet and you have a historical view of ASO health across your portfolio.
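You can also read trends straight from the shell. A quick sketch over the CSV format the script writes (`date,app_id,store,score`), using two sample snapshots:

```shell
# Two sample snapshots in the script's CSV format
csv='2025-01-06,123456789,ios,72
2025-01-13,123456789,ios,75'

# Score change between the first and most recent snapshot
echo "$csv" | awk -F, 'NR==1 {first=$4} {last=$4} END {print last - first}'
```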
4. Bulk Keyword Research for a New App
When launching a new app, you want to research dozens of seed keywords at once:
#!/bin/bash
# launch-research.sh
SEEDS=("meditation" "mindfulness" "breathing" "calm" "sleep sounds" "stress relief" "focus music" "white noise" "relaxation" "anxiety")
for seed in "${SEEDS[@]}"; do
  echo "=== Suggestions for: $seed ===" >&2   # progress goes to stderr so it doesn't corrupt the JSON stream
  aso keywords suggestions "$seed" --country us --limit 10 --json
done | jq -s 'flatten | unique_by(.keyword) | sort_by(-.popularity)' > launch-keywords.json
echo "Found $(jq length launch-keywords.json) unique keywords"
In under a minute, you've researched 10 seed terms, collected all suggestions, deduplicated, and sorted by popularity. Try doing that in a dashboard.
Integrating with AI Agents
The real power of an API-first ASO tool shows up when you connect it to an LLM. Because each endpoint is a single stateless call with a handful of parameters, it maps cleanly onto a tool definition that any agent framework can consume.
Claude / MCP Integration
If you use Claude with MCP (Model Context Protocol), our API endpoints map directly to MCP tools. An MCP server wrapping our API gives Claude native access to all seven endpoints. Your conversations become:
> "Research keyword opportunities for my fitness app in the US store. Focus on keywords with search popularity above 35 and difficulty below 30."
Claude calls the API, gets structured data, analyzes it, and responds with a prioritized keyword list — including which keywords to put in the title, which in the subtitle, and which in the keyword field.
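Wiring this up in Claude Desktop is a single config entry. The `mcpServers` shape below is Claude Desktop's standard configuration format, but the `@asotool/mcp-server` package name is a placeholder; substitute whichever MCP server wraps the API:

```json
{
  "mcpServers": {
    "asotool": {
      "command": "npx",
      "args": ["-y", "@asotool/mcp-server"],
      "env": { "ASO_API_KEY": "aso_your_key_here" }
    }
  }
}
```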
OpenAI / Function Calling
Same idea, different framework. Define our endpoints as function schemas:
{
  "name": "keyword_search",
  "description": "Search for keyword data including popularity and difficulty",
  "parameters": {
    "type": "object",
    "properties": {
      "q": { "type": "string", "description": "Keyword to research" },
      "country": { "type": "string", "description": "Country code (us, gb, de, etc)" },
      "store": { "type": "string", "enum": ["ios", "android"] }
    },
    "required": ["q"]
  }
}
Any agent framework that supports function calling — LangChain, CrewAI, Autogen, or a custom setup — can use our API as tools.
The Agent Plan
This is why the Agent Plan exists. At $9/month with no rate limits, it's designed for machines, not humans. No dashboard, no tracking features, no user management — just seven fast endpoints returning JSON.
For context: the cheapest comparable API access from other ASO tools starts at $166/month (AppTweak). Some don't offer API access at all. If you're building ASO into an automated workflow or AI agent, we're the pragmatic choice.
CI/CD Integration
Your ASO can be part of your release pipeline. Add a step that checks your app's ASO health before each release:
# GitHub Actions example
- name: ASO Pre-Release Check
  run: |
    SCORE=$(aso apps aso-score ${{ secrets.APP_ID }} --store ios --json | jq '.score')
    echo "ASO Score: $SCORE"
    if [ "$SCORE" -lt 60 ]; then
      echo "::warning::ASO score below 60 — consider updating metadata before release"
    fi
This won't block your release, but it keeps ASO visible in your development workflow instead of being an afterthought.
Scripting Tips
A few things that make API/CLI workflows smoother:
Cache aggressively. Keyword difficulty doesn't change hourly. Cache API responses for 24 hours and save yourself API calls and latency.
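A sketch of that cache as a generic wrapper around any command (uses `md5sum`; on macOS substitute `md5 -q`):

```shell
# Cache any command's stdout for 24 hours, keyed by a hash of its arguments.
cached() {
  local key cache
  key=$(printf '%s' "$*" | md5sum | cut -d' ' -f1)
  cache="/tmp/aso-cache-$key"
  if [ -f "$cache" ] && [ -n "$(find "$cache" -mmin -1440)" ]; then
    cat "$cache"          # fresh enough: reuse
  else
    "$@" | tee "$cache"   # run, print, and store
  fi
}

# Usage: cached aso keywords search "meditation" --country us --json
```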
Use jq liberally. Every --json output is designed to be jq-friendly. Filter, transform, and combine results without writing a single line of Python.
Parallelize with xargs. When researching multiple keywords, run lookups in parallel:
xargs -P 4 -I {} aso keywords search "{}" --country us --json < keywords.txt
Version your data. Save keyword research results with dates. A month of historical data lets you spot trends that a single snapshot misses.
Key Takeaways
- The CLI and API give you the same data as a dashboard, but in a format that scripts, agents, and automations can consume
- Seven stateless endpoints cover the full ASO research workflow — no session management required
- Shell scripts + cron replace manual weekly keyword checks
- AI agents (Claude, GPT, or any LLM with tool use) can call our API directly as functions
- The Agent Plan ($9/month, no rate limits) is purpose-built for automated and agentic workflows
- CI/CD integration keeps ASO visible in your development process instead of being a separate activity