Last updated: 16 April 2026
How I Got My Reselling App Into ChatGPT Search Results in 30 Days
I launched FlipperHelper — a free iOS app for resellers — on March 21, 2026. Within 25 days I had 56 first-time downloads. Most came from Reddit posts and content marketing. But over the final three days, downloads kept arriving daily with no new content pushed. Something else was driving discovery.
I’d been manually checking ChatGPT every week or two, asking it to recommend an app for tracking reselling profits. Sometimes FlipperHelper appeared. Sometimes it didn’t. It depended entirely on how I phrased the question. Then I realised I could automate this — and learn exactly what the model thinks is wrong with my app.
The audit script
I wrote a Python script that queries ChatGPT’s API with 11 different questions a real reseller might ask. Things like “best app for tracking reselling profit in 2026,” “free iOS app for resellers to track inventory across eBay, Vinted, and Depop,” or “is there a reselling app that works offline at markets?”
For each question the script runs two passes:
- gpt-4o (no web access) — baseline, shows what the model learned from training data
- gpt-4o-search-preview (live web search) — simulates what an actual ChatGPT user sees when browsing is enabled
When my app appears but isn’t ranked first, the script asks a follow-up: “What advantages does the top pick have over FlipperHelper?” When it doesn’t appear at all, it asks: “Why don’t you recommend FlipperHelper?”
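The two-pass loop above can be sketched roughly as follows. This is a minimal version, not my production script: the queries shown are the three examples from this post, the mention check is a naive substring match, and you'd want retries and rate limiting in real use. It calls the standard chat-completions endpoint with plain `urllib`, so no SDK is needed; `OPENAI_API_KEY` must be set in the environment.

```python
import json
import os
import urllib.request

APP_NAME = "FlipperHelper"
API_URL = "https://api.openai.com/v1/chat/completions"
MODELS = ["gpt-4o", "gpt-4o-search-preview"]  # training-data baseline vs live web search
QUERIES = [
    "best app for tracking reselling profit in 2026",
    "free iOS app for resellers to track inventory across eBay, Vinted, and Depop",
    "is there a reselling app that works offline at markets?",
]

def mentions_app(answer: str, app_name: str = APP_NAME) -> bool:
    """Case-insensitive check for the app name in a model answer."""
    return app_name.lower() in answer.lower()

def follow_up_for(answer: str, app_name: str = APP_NAME) -> str:
    """Pick the follow-up question based on whether the app was mentioned."""
    if mentions_app(answer, app_name):
        return f"What advantages does the top pick have over {app_name}?"
    return f"Why don't you recommend {app_name}?"

def ask(model: str, question: str) -> str:
    """One chat-completions call via plain urllib (no SDK dependency)."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }).encode("utf-8")
    req = urllib.request.Request(API_URL, data=body, headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

def audit() -> None:
    for model in MODELS:
        for query in QUERIES:
            answer = ask(model, query)
            print(f"[{model}] {query!r}: mentioned={mentions_app(answer)}")
            print("  follow-up:", ask(model, follow_up_for(answer))[:200])
```

The follow-up answers are the real payoff: the model spells out exactly which description wording or missing feature put a competitor ahead.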
First audit results — April 15, 2026
| Metric | Value |
|---|---|
| Queries tested | 11 |
| Mentioned (training data only) | 4 / 11 |
| Mentioned (live web search) | 5 / 11 |
| Total web citations collected | 40 |
| Top cited domain | apps.apple.com (13 / 40 — 32.5%) |
The App Store page is the single most trusted source for how LLMs understand your app. My blog posts, Medium articles, and dev.to content — none of them were cited in app recommendation queries. The App Store accounted for nearly a third of all citations on its own.
What ChatGPT got wrong — and what I fixed
1. A misunderstanding that was killing recommendations
My app has optional Google Drive photo sync and CSV export to Google Sheets. ChatGPT interpreted the Google Drive feature as the only backup mechanism and warned users about a “risk of data loss” when it wasn’t connected.
In reality, FlipperHelper is offline-first. Everything is stored locally on the device. No account required. Google Drive is purely optional. But my App Store description didn’t make this distinction clearly enough. Since ChatGPT treats the App Store as its primary source, this vague wording was actively hurting recommendations.
Fix: rewrote the App Store description to explicitly separate local storage, optional cloud photo backup, and optional Sheets export. Shipped an update with changelog noting “enhanced backup functionality” to trigger re-indexing.
2. Tiny missing features that competitors have
When my app ranked below competitors, the script asked why. One finding: other apps let users add free-text notes to items (e.g. “scratched on the back,” “missing a button”). FlipperHelper didn’t have this. ChatGPT called it “more limited functionality.”
Fix: one optional text field. About three minutes of development. But in the LLM’s evaluation, it was a meaningful competitive disadvantage that cost recommendations.
3. The freshness problem
LLMs favour apps with recent updates and active changelogs. I noticed competitors with changelogs repeating “bug fixes” every release — possibly to signal active maintenance. My approach: I have zero production bugs and strong test coverage (10 test files, 3,424 lines of test code, 63 end-to-end QA flows, 100k-item stress tests). Instead of fake bug fixes, I published a structured changelog page highlighting testing quality and stability.
Indexing optimisations that help AI find you
Bing Webmaster Tools
ChatGPT’s web search uses Bing. Register your site, request indexing, and check the AI Performance tab (currently in beta) to see how often Microsoft Copilot cites your pages.
I also set up IndexNow — an open protocol that automatically notifies Bing, Yandex, Seznam, and Naver whenever my site updates. Built it into my GitHub Actions deployment pipeline so every push triggers a re-indexing request. No manual work.
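For reference, an IndexNow ping is just a small JSON POST. Here is a hedged sketch: the host, key, and URLs below are placeholders, and per the protocol the key must also be served as a plain-text file at `keyLocation` so the search engines can verify you own the host. A deploy step (e.g. in GitHub Actions) can call `ping_indexnow` after each publish.

```python
import json
import urllib.request

def indexnow_payload(host: str, key: str, urls: list[str]) -> dict:
    """Build the JSON body the IndexNow endpoint expects."""
    return {
        "host": host,
        "key": key,
        # The key file must be reachable here for host-ownership verification.
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    }

def ping_indexnow(host: str, key: str, urls: list[str],
                  endpoint: str = "https://api.indexnow.org/indexnow") -> int:
    """Submit updated URLs; a 200 or 202 status means the ping was accepted."""
    body = json.dumps(indexnow_payload(host, key, urls)).encode("utf-8")
    req = urllib.request.Request(endpoint, data=body, headers={
        "Content-Type": "application/json; charset=utf-8",
    })
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

One ping to `api.indexnow.org` fans out to all participating engines (Bing, Yandex, Seznam, Naver), so there is no need to notify each one separately.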
Wayback Machine
I archived key pages (home, changelog, FAQ) on the Internet Archive. The reasoning: live-web-search models pull from Bing’s index — covered by IndexNow. Statically trained models use web archive data for training — covered by Wayback Machine. Two types of models, two indexing strategies.
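Archiving can be scripted too. The Internet Archive's Save Page Now service accepts a request with the target URL appended after `/save/`; the page list below uses placeholder URLs, and heavier users should look at the authenticated Save Page Now API instead of bare requests like this.

```python
import urllib.request

# Placeholder URLs: substitute your own site's key pages.
PAGES = [
    "https://example.com/",
    "https://example.com/changelog",
    "https://example.com/faq",
]

def save_url(page: str) -> str:
    """Save Page Now works by prefixing the target URL with /save/."""
    return "https://web.archive.org/save/" + page

def archive_all(pages: list[str] = PAGES) -> None:
    for page in pages:
        req = urllib.request.Request(save_url(page),
                                     headers={"User-Agent": "aeo-archiver/1.0"})
        with urllib.request.urlopen(req) as resp:
            print(page, "->", resp.status)
```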
Structured data
Added ai-sitemap.xml with intent annotations and FAQ JSON-LD (FAQPage, BreadcrumbList schemas) across key pages. The idea is to help crawlers — both traditional and AI — understand what each page is about and surface answers directly.
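A FAQ JSON-LD block is simple enough to generate from your question/answer pairs rather than hand-write. A minimal sketch (the sample Q&A is illustrative; the output goes inside a `<script type="application/ld+json">` tag on the page):

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> dict:
    """Build a schema.org FAQPage object from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

# Serialize and embed in a <script type="application/ld+json"> tag.
snippet = json.dumps(faq_jsonld([
    ("Does FlipperHelper work offline?",
     "Yes. All data is stored locally on the device; cloud sync is optional."),
]), indent=2)
```

Generating the markup from the same source as the visible FAQ text keeps the two in sync, which matters because mismatched visible and structured content can get structured data ignored.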
Early signs it’s working
My download data shows three distinct phases after launch:
- Pre-marketing (Mar 21–24): 1 download in 4 days — the app was live but invisible
- Content-driven (Mar 25 – Apr 11): 50 downloads across 18 days (2.8/day average), driven by Reddit posts and blog content
- Organic (Apr 12–14): 5 downloads in 3 days with no new content pushed — every day had at least one download
That organic window is the first real signal of passive discovery: people finding the app without active promotion. Correlation isn’t causation, and since the window predates the AEO fixes shipped on April 15, it serves as the baseline to measure those fixes against rather than proof they worked.
Answer Engine Optimisation — the approach
Think of AEO as the natural evolution of SEO. Instead of optimising for Google search rankings, you’re optimising for the answers LLMs give when someone asks “what’s the best app for X.”
The feedback loop is faster than traditional SEO because you can literally ask the model what’s wrong and it tells you. Not everything is reliable — system prompts prevent full transparency — but enough is actionable to make weekly audits worthwhile.
The workflow I recommend:
- Write a script that queries major LLMs with questions your users would ask
- Track whether your product appears and where it ranks
- When it doesn’t appear, ask why — the model gives specific reasons
- Fix the actual issues (description clarity, missing features, content gaps)
- Re-run weekly and track progress
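To make “track progress” concrete, here is a hedged sketch of a rank extractor plus CSV logger. The numbered-list parsing is a naive assumption about how the model formats recommendations (answers often come back as prose instead), so treat `None` as “mentioned but unranked or absent” and spot-check the raw answers.

```python
import csv
import datetime
import pathlib
import re

def mention_rank(answer: str, app_name: str):
    """Return the 1-based position of app_name in a numbered-list answer, or None."""
    for match in re.finditer(r"^\s*(\d+)[.)]\s+(.+)$", answer, re.MULTILINE):
        if app_name.lower() in match.group(2).lower():
            return int(match.group(1))
    return None

def log_result(path: str, query: str, model: str, rank) -> None:
    """Append one audit result so weekly runs build a trend line."""
    is_new = not pathlib.Path(path).exists()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "query", "model", "rank"])
        writer.writerow([datetime.date.today().isoformat(), query, model, rank])
```

A week-over-week CSV like this is enough to see whether a description rewrite or a shipped feature actually moved a ranking.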
The bar is low right now. Most developers aren’t doing this, which means small fixes can move you ahead of competitors who haven’t thought about AI visibility at all.
Frequently asked questions
What is Answer Engine Optimisation (AEO)?
AEO is the practice of optimising your product’s visibility in AI-powered answer engines like ChatGPT, Perplexity, and Google AI Overviews. Instead of optimising for search rankings, you optimise for the answers LLMs give when users ask “what’s the best app for X.” This includes improving your App Store description, adding structured data, clarifying feature descriptions, and monitoring how models describe your product.
How do I check if ChatGPT recommends my app?
You can manually ask ChatGPT questions your users would ask and see if your app appears. For a systematic approach, use the ChatGPT API to send multiple queries programmatically, check whether your app is mentioned, and ask follow-up questions about why competitors rank higher or why your app isn’t recommended.
What sources does ChatGPT use to recommend apps?
Based on our audit of 40 web citations across 11 queries, apps.apple.com accounted for 32.5% of all citations — the single most important source. Competitor websites, review sites, and app directories made up the rest. Blog content and developer articles had zero citations for app recommendation queries.
How does Bing indexing help with ChatGPT visibility?
ChatGPT’s web search mode uses Bing as its search provider. By registering with Bing Webmaster Tools and setting up IndexNow, you can speed up how quickly ChatGPT discovers new content. Bing also has an AI Performance tab (beta) showing how often Copilot cites your pages.
Does saving pages to the Wayback Machine help with AI visibility?
Potentially, yes. Live-web-search models are covered by Bing indexing, but statically trained models may draw their training data from public web archives. Archiving pages on the Wayback Machine makes them available for future model training, covering the second type of AI visibility.
Try FlipperHelper
Free iOS app for resellers. Track purchases, expenses, and real profit per item. Works fully offline at markets. Optional Google Drive photo backup and Google Sheets export.
Download Free on the App Store