Effective prompts are the foundation of successful AI-assisted development. Clear, specific requests with appropriate context enable Verdent to deliver accurate, relevant results.
What You’ll Learn
- Best practices for writing effective prompts
- How to provide context and avoid common mistakes
- Advanced techniques like @-mentions and subagent delegation
- Examples of well-structured prompts
- Iterative refinement strategies
What Makes an Effective Prompt
Effective prompts are clear, specific, and provide the necessary context for Verdent to understand your intent and deliver accurate results.
Key Principles:
- Be Specific - State exactly what you need, not vague requests
- Include Details - Provide technical specs when you have preferences
- Specify Scope - Clarify which files/components are involved
- Provide Context - Help Verdent understand your architecture
- State Outcomes - Describe what success looks like
- Use Natural Language - No special syntax required
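As an illustration of these principles, a vague prompt and its specific counterpart might look like this (the file names and limits here are hypothetical):

```text
Vague:    "Make the login better."
Specific: "Add rate limiting to the login endpoint in auth/login.js:
           allow 5 attempts per minute per IP, return HTTP 429 with a
           Retry-After header when the limit is exceeded, and follow
           the middleware pattern used in auth/session.js."
```

The specific version names the file, the behavior, the expected response, and an existing pattern to follow - leaving nothing to guess.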
Common Prompting Mistakes
- Being Too Vague
- Omitting Context
- Too Much at Once
- Missing Scope
- Unstated Requirements
- Missing @-Mentions
- Ignoring Errors
- Skipping Plan Mode
- No Version Control
- No Interview Request
Example Prompts:
| Problem | Solution |
|---|---|
| Verdent doesn’t know what improvements you want or which bugs to address | Specify exactly what needs improvement or which bug to fix |
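For the "Too Much at Once" mistake, the fix is usually to split one sprawling request into sequential prompts. A sketch (file names are illustrative):

```text
Too much at once:
  "Add authentication, refactor the database layer, and write tests
   for everything."

Split into steps:
  1. "Implement JWT authentication in middleware/auth.js."
  2. "Refactor db/queries.js to use the repository pattern."
  3. "Add unit tests for the new repository classes."
```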
Well-Structured Prompt Examples
- Feature Implementation
- Bug Fix
- Refactoring
- Testing
- Component Creation
Creating new functionality with clear requirements and constraints.
What makes this effective:
- Clear requirements for inputs and validation
- Specific file locations for implementation
- Reference to existing patterns to maintain consistency
- Expected HTTP status codes and error handling
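A feature-implementation prompt hitting all four of these points might read as follows (the route, files, and status codes are illustrative, not from a real project):

```text
"Implement a POST /api/contacts endpoint in routes/contacts.js.
Validate that `email` is a well-formed address and `name` is a
non-empty string, following the validation pattern in routes/users.js.
Return 201 with the created record on success, 400 with field-level
error messages on invalid input, and 500 on unexpected errors."
```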
Advanced Prompting Techniques
- @-Mentions
- Plan Mode
- Subagent Delegation
- Think Hard Mode
- Iterative Refinement
- Constraint-Based
- Reference Patterns
- todos.md Planning
- Clear Context
- MCP Servers
Reference specific files, components, or subagents.
Benefits:
- Ensures Verdent has exact context by explicitly including specific files
- Prevents ambiguity in large codebases with similar filenames
- Guarantees all relevant code is visible simultaneously for accurate refactoring and pattern matching
- Essential when referencing implementation patterns from one file to apply in another
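An @-mention prompt applying a pattern from one file to another might look like this (file names are hypothetical):

```text
"@OrderService.js @OrderController.js Move the discount calculation
out of the controller into OrderService, following the pattern used
in @InvoiceService.js."
```

All three files are guaranteed to be in context, so Verdent can match the reference pattern exactly.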
Including Context in Prompts
- @-Mentions for Files
- Project Architecture
- Existing Patterns
- Technical Constraints
- Business Logic
- Error Context
- Automatic Loading
- Project/User Rules
- Images
- Website Links
Explicitly include relevant files in context.
When to use:
- When working with tightly coupled files (model and controller, service and tests)
- Referencing implementation patterns from one file to apply in another
- Coordinating changes across multiple related files
- In large codebases with similar filenames where automatic detection might miss context
- Always use when asking Verdent to “follow the same pattern as…” to ensure it has the exact code
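For tightly coupled files, a prompt along these lines keeps the model and its tests in sync (file names are illustrative):

```text
"@User.model.js @User.test.js Add an `isActive` boolean field to the
User model, defaulting to true, and update the existing tests to
cover it."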
Iterative Refinement Strategies
- Broad to Specific
- Review & Correct
- Follow-Up Prompts
- Ask for Explanations
- Plan Mode Refinement
- Provide Examples
- Clarify Constraints
- Progressive Enhancement
Start with an initial prompt; Verdent's response might be generic, so refine with a follow-up.
When to use: Start with a general request, then add details based on the initial response.
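A broad-to-specific exchange might look like this (the feature, limits, and file name are illustrative):

```text
Initial: "Create a search feature for the product catalog."

Refine:  "Good start - now debounce the input by 300 ms, limit
          results to 20 per page, and search only the name and SKU
          fields, matching the pagination style in ProductList.js."
```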
FAQs
How specific should my prompts be?
Be specific enough to eliminate ambiguity, but don't over-explain obvious details. Include exact file paths, the implementation approach, expected outcomes, and constraints. Bad: "Fix the code" - too vague. Good: "Add input validation to the email field in ContactForm.js to reject invalid email formats" - clear scope and goal. When in doubt, err on the side of more specificity.
What's the difference between @-mentions and automatic file loading?
Verdent automatically loads files mentioned by name in prompts and related files in the same directory.
@-mentions (@filename.js) explicitly guarantee a file is in context, which is critical when working with tightly coupled files, referencing patterns from one file to apply in another, or when automatic detection might miss context in large codebases. Always use @-mentions when asking Verdent to "follow the same pattern as…" to ensure an exact code reference.
When should I use Plan Mode instead of normal mode?
Use Plan Mode for: large refactorings or architectural changes, multi-file modifications where you want to review scope before execution, complex tasks where you’re uncertain about requirements, or when you want Verdent to interview you with clarifying questions before implementation. Skip Plan Mode for: simple, well-defined tasks, quick bug fixes, or routine operations. Plan Mode adds overhead but prevents costly mistakes on complex work.
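A Plan Mode request for a large change might be phrased like this (the migration target and directory are hypothetical):

```text
"Plan: migrate the REST endpoints in src/api/ from promise chains to
async/await. Interview me about edge cases and error handling before
generating the plan."
```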
What if Verdent doesn't understand or follow my prompt correctly?
Use iterative refinement: review the output, identify what's wrong, then provide corrections in a follow-up prompt. Example: "The validation logic is good, but use Joi schema validation instead of manual checks. Match the validation pattern in ProductController.js." You can also ask for explanations: "Why did you use Redux instead of Context API?" and then refine based on the answer. Don't repeat the same prompt - adjust based on what failed.
Do I need to repeat project context in every prompt during a session?
No - Verdent maintains conversation context within a session, so you don't need to repeat architecture details or conventions already discussed. However, for critical constraints, or when sessions get long (100+ messages), restate important context. A better approach: use project rules (AGENTS.md) to document persistent context like tech stack, coding standards, and patterns - then you never need to repeat them.

Well-structured prompts with clear intent, relevant context, and specific constraints consistently produce better results.