
LLM Lab 9: Debugging Prompts

Diagnose why a prompt failed and apply targeted fixes instead of guessing.

Published January 8, 2026

Spot the failure mode

Most bad outputs trace back to one of four issues: missing context, conflicting instructions, wrong format, or excessive randomness.
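
To make the taxonomy concrete, here is the same mapping as a lookup table (a minimal sketch; the names and fix descriptions are illustrative paraphrases of the steps below, not a standard API):

```python
# Map each common failure mode to its targeted fix. These labels are
# illustrative shorthand for the checklist steps that follow.
FAILURE_MODES = {
    "missing_context": "add the facts the model needs to answer",
    "conflicting_instructions": "remove conflicts and state which rule wins",
    "wrong_format": "show a one-shot example of the target output",
    "excessive_randomness": "lower the sampling temperature",
}
```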

Can You Guess?

The model ignored your JSON format and wrote prose. What is the fastest fix?

Prompt debug checklist

Work through the steps in order and confirm each checkpoint before moving on.

Step 1: Identify the miss

Was it wrong facts, bad format, unsafe content, or missing details?
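
For format misses in particular, a mechanical check beats eyeballing. A minimal sketch, assuming the target format is JSON; the function name is hypothetical:

```python
import json

def classify_format_miss(output: str) -> str:
    """Give a rough diagnosis of a response that should have been JSON."""
    try:
        json.loads(output)
        return "valid JSON -- the miss is elsewhere (facts, detail, safety)"
    except json.JSONDecodeError:
        # Prose wrapped around JSON is the most common drift pattern.
        if "{" in output and "}" in output:
            return "wrong format: JSON embedded in prose"
        return "wrong format: no JSON at all"

print(classify_format_miss('Sure! Here is the data: {"name": "Ada"}'))
# -> wrong format: JSON embedded in prose
```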

Step 2: Check instructions

Remove conflicting instructions and state explicitly which rule takes priority.
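
A concrete before-and-after, using hypothetical prompts for a ticket-summary task:

```python
# Before: two rules conflict -- "one sentence" vs. "explain every field".
conflicting = (
    "Summarize the ticket in one sentence. "
    "Explain every field in detail. "
    "Respond in JSON."
)

# After: the conflict is removed and the priority is explicit.
fixed = (
    "Summarize the ticket. Respond in JSON only. "
    "If brevity and completeness conflict, prefer brevity: "
    "one sentence per field, no extra prose."
)
```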

Step 3: Add or trim context

Provide needed facts; delete irrelevant noise.
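
A sketch of context trimming, with a hypothetical ticket record; keep the fields the task references and drop the rest:

```python
# Keep only the facts the task needs; drop the rest of the record.
ticket = {
    "id": "T-1042",                           # needed: cited in the summary
    "status": "open",                         # needed
    "body": "Printer jams on duplex jobs.",   # needed
    "internal_notes": "...",                  # noise: cut it
    "audit_log": "...",                       # noise: cut it
}

needed = {k: ticket[k] for k in ("id", "status", "body")}
prompt = f"Summarize this ticket as JSON: {needed}"
```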

Step 4: Show an example

Give a one-shot example that matches the target output.
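
For the JSON case, a one-shot example might look like this (the ticket fields are hypothetical):

```python
# A one-shot example pins down the exact shape you want back.
prompt = """Respond in JSON only, matching this example exactly:

{"id": "T-0001", "status": "open", "summary": "one-sentence summary"}

Now summarize this ticket:
id: T-1042
status: open
body: Printer jams on duplex jobs.
"""
```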

Step 5: Adjust sampling

Lower the temperature for precision; raise it if the output is too bland.
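
A minimal sketch of setting the sampling temperature, assuming the OpenAI Python SDK; any client that exposes a temperature parameter works the same way:

```python
from openai import OpenAI  # assumption: OpenAI Python SDK

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",   # hypothetical model choice
    messages=[{"role": "user", "content": "Summarize the ticket as JSON."}],
    temperature=0.2,       # low temperature for precise, repeatable format
)
print(response.choices[0].message.content)
```

Changing only the temperature between runs keeps the comparison clean: if the output improves, you know which variable did it.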

Try this now

Fix a broken response

  • Ask the model for JSON output and note where it drifts.
  • Add a short JSON example and restate "respond in JSON only."
  • If it still drifts, lower the temperature and remove extra instructions (see the loop sketch after this list).
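
Putting the exercise together, here is one possible debugging loop; it escalates fixes one at a time and stops at the first attempt that parses. The client setup mirrors the sketch in Step 5, and the prompts are hypothetical:

```python
import json
from openai import OpenAI  # assumption: OpenAI Python SDK, as in Step 5

client = OpenAI()

def ask(prompt: str, temperature: float) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content

base = "Summarize this ticket as JSON: printer jams on duplex jobs."
# Escalating fixes, applied one at a time so you can see which one helps.
attempts = [
    (base, 0.7),
    (base + ' Respond in JSON only, e.g. {"summary": "..."}.', 0.7),
    (base + ' Respond in JSON only, e.g. {"summary": "..."}.', 0.2),
]

for prompt, temp in attempts:
    output = ask(prompt, temp)
    try:
        json.loads(output)
        print(f"fixed at temperature={temp}: {output}")
        break
    except json.JSONDecodeError:
        print(f"still drifting at temperature={temp}, escalating...")
```
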
Key Takeaways

  • ✓ Name the failure mode before changing prompts.
  • ✓ Examples and constraints fix most format issues.
  • ✓ Tweak one variable at a time to see what helps.
