Debugging Prompts
When your app doesn't match your vision, here's how to diagnose and fix your prompts.
Understand Why Prompts Fail
Most prompt failures fall into one of four categories:
- Too vague — The AI made assumptions because there wasn't enough information.
- Too broad — Too many features were requested at once, causing some to be ignored or poorly implemented.
- Missing constraints — Rules, edge behaviors, and access conditions were not specified.
- Ambiguous terminology — Words like "dashboard", "profile", or "feed" mean different things in different apps. Without context, the AI picks a generic interpretation.
Diagnostic Checklist
Before resubmitting a prompt, run through this checklist:
- Did you specify who the app is for?
- Did you state the core problem the app solves?
- Did you list features explicitly, not just reference a well-known app?
- Did you define user roles and their permissions?
- Did you include edge cases or constraints (e.g., validation rules, notification triggers)?
- Was the prompt focused on an MVP, or did you request too much at once?
The "Reference App" Anti-Pattern
Problem: "Build something like Notion" or "Make a clone of Trello."
This fails because it gives the AI no information about your specific users, features, or constraints. The result is a generic interpretation that may not match what you actually need.
Fix: Replace references with explicit feature descriptions.
Instead of:
Build something like Trello.
Write:
Build a project management app for marketing teams. Users can create boards with columns (e.g., To Do, In Progress, Done). Cards can be assigned to team members, given due dates, and tagged with a priority level (Low, Medium, High). Admins can create new boards; members can only edit cards within boards they've been added to.
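The role and permission rules in that prompt are precise enough to translate directly into code. Here is a minimal sketch of that access logic; the types, field names, and functions are hypothetical illustrations, not Imagine.bo output:

```typescript
// Hypothetical sketch of the board-permission rules described above.
// The User shape and function names are assumptions for illustration.
type Role = "admin" | "member";

interface User {
  id: string;
  role: Role;
  boardIds: string[]; // boards the user has been added to
}

// Admins can create new boards; members cannot.
function canCreateBoard(user: User): boolean {
  return user.role === "admin";
}

// Members can only edit cards within boards they've been added to;
// admins can edit cards on any board.
function canEditCard(user: User, boardId: string): boolean {
  return user.role === "admin" || user.boardIds.includes(boardId);
}
```

Notice that every branch of this logic comes straight from a sentence in the prompt. A vague reference like "build something like Trello" leaves all of these decisions to the AI's guesswork.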
The "Everything at Once" Anti-Pattern
Problem: Packing an entire product roadmap into a single prompt.
When you ask for too much in one generation, the AI either ignores later features, generates them superficially, or produces an inconsistent structure.
Fix: Use phased prompting.
- Prompt 1 (MVP): Core screens, authentication, primary user flow.
- Prompt 2 (Extend): Secondary features, admin panel, notifications.
- Prompt 3 (Polish): Edge cases, additional roles, integrations.
Debugging a Specific Broken Feature
When one feature isn't working correctly, isolate it:
- Identify the exact behavior that's wrong.
- Describe the expected behavior and the actual behavior.
- Submit a targeted correction prompt.
Template:
The [feature] is currently doing [actual behavior]. It should [expected behavior]. Specifically: [any additional context or constraint].
Example:
The search bar on the listings page is currently returning all results regardless of the search term. It should filter listings in real time by title as the user types. Only show listings where the title contains the search input.
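The expected behavior in that correction prompt amounts to a simple client-side filter. A minimal sketch, assuming a hypothetical `Listing` shape:

```typescript
// Sketch of the real-time title filter described in the correction
// prompt above. The Listing interface is an assumption.
interface Listing {
  title: string;
}

// Return only listings whose title contains the search input
// (case-insensitive). An empty query returns all listings.
function filterListings(listings: Listing[], query: string): Listing[] {
  const q = query.trim().toLowerCase();
  if (q === "") return listings;
  return listings.filter((listing) => listing.title.toLowerCase().includes(q));
}
```

Spelling out details like case-insensitivity and empty-query behavior in your prompt removes exactly the ambiguity that caused the bug in the first place.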
When to Stop Prompting and Assign a Developer
Some issues are not prompt problems — they're engineering complexity problems. Consider assigning a task to an Imagine.bo developer when:
- You've sent 3+ correction prompts on the same issue without resolution.
- The feature requires a specific third-party API integration.
- The logic involves complex conditional business rules or multi-step workflows.
- The issue is related to performance, security, or data integrity.
Use the Hire a Human button in your dashboard. Frame the task clearly: what the feature should do, what it's currently doing, and any relevant constraints or examples.
Prompt Improvement Examples
| Weak Prompt | Improved Prompt |
|---|---|
| "Add a dashboard." | "Add an admin dashboard with four metric cards: total users, active users this week, new signups today, and total revenue. Below the cards, show a bar chart of signups per day for the last 30 days." |
| "Make it more secure." | "Redirect unauthenticated users to /login if they try to access any page under /app. Users should stay logged in for 7 days before the session expires." |
| "Fix the form." | "The signup form is not validating the email field. It should show an inline error message 'Please enter a valid email address' if the user submits without a valid email format." |
| "Build something like Stripe." | "Build a payment management dashboard for SaaS businesses. Show a table of transactions with columns for date, customer name, amount, and status (Paid, Pending, Failed). Admins can filter by status and export the table as CSV." |
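The email-validation row in the table can be made concrete with a short sketch. The pattern below is a deliberately simple assumption for illustration, not a full RFC 5322 validator, and the function name is hypothetical:

```typescript
// Sketch of the inline email validation described in the table above.
// The regex is intentionally simple (an assumption, not a complete
// RFC 5322 check): some text, an @, a domain, and a dot.
const EMAIL_PATTERN = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

// Returns the inline error message, or null when the input is valid.
function validateEmail(input: string): string | null {
  return EMAIL_PATTERN.test(input.trim())
    ? null
    : "Please enter a valid email address";
}
```

The improved prompt works because it pins down the same things this sketch does: when validation runs, what counts as invalid, and the exact error text to show.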
For issues that aren't resolved through prompt refinement, use the Hire a Human feature from your project dashboard — or contact the Imagine.bo support team.
FAQ
What are the main categories of prompt failure?
The four categories are: too vague (the AI made assumptions due to lack of information), too broad (too many features were requested at once), missing constraints (rules, edge behaviors, and access conditions were not specified), and ambiguous terminology (words like "dashboard", "profile", or "feed" mean different things without context).

What should I check before resubmitting a prompt?
Check that you specified who the app is for, stated the core problem, listed features explicitly rather than referencing a well-known app, defined user roles and permissions, included edge cases and constraints, and kept the prompt focused on an MVP scope.

What is the reference app anti-pattern?
It is using prompts like "Build something like Notion" or "Make a clone of Trello", which give the AI no information about your specific users, features, or constraints. Fix it by replacing references with explicit feature descriptions, including user types, workflows, and permissions.

What is the "everything at once" anti-pattern?
It is packing an entire product roadmap into a single prompt, which causes the AI to ignore later features, generate them superficially, or produce an inconsistent structure. Fix it with phased prompting: Prompt 1 for the MVP (core screens and auth), Prompt 2 to extend with secondary features, and Prompt 3 to polish edge cases and integrations.

How do I debug a specific broken feature?
Isolate the feature: identify the exact behavior that's wrong, describe the expected behavior and the actual behavior, then submit a targeted correction prompt using the template "The [feature] is currently doing [actual behavior]. It should [expected behavior]. Specifically: [any additional context or constraint]."

When should I assign a task to a developer instead of prompting?
Consider assigning a task when you've sent 3 or more correction prompts on the same issue without resolution, the feature requires a specific third-party API integration, the logic involves complex conditional business rules or multi-step workflows, or the issue relates to performance, security, or data integrity. Use the Hire a Human button in your dashboard.

