Practical LLM Lessons

LLM Lab 5: Instructions and Guardrails

See how system messages, user requests, and safety policies stack to shape the final reply.

Published January 8, 2026

Who is really in charge?

LLMs weigh multiple instruction layers: system, developer, and user. When these layers conflict, lower-priority instructions are typically overridden by the higher-priority rules.
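
In practice these layers arrive as a list of role-tagged messages. The sketch below uses the common OpenAI-style chat format; the role names (including the optional "developer" role) are an assumption about your provider, not something this lesson prescribes.

    # Hypothetical request payload showing the three instruction layers.
    # Role names follow the OpenAI-style chat format (an assumption).
    messages = [
        # System: highest priority; sets behavior and safety guardrails.
        {"role": "system", "content": "You are a concise assistant. Never exceed 100 words."},
        # Developer: optional middle layer; app- or template-specific rules.
        {"role": "developer", "content": "Answer in plain text, without markdown."},
        # User: lowest priority; the prompt the end user types.
        {"role": "user", "content": "Explain how instruction priority works."},
    ]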

Can You Guess?

A chat has: System: 'Be a concise assistant.' User: 'Write a 1,000-word essay.' Which wins?
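
You can check your guess directly. Here is a minimal sketch assuming the OpenAI Python SDK with an API key in the environment; the model name is illustrative, and any chat-completion client would work the same way.

    # Send the conflicting chat and see which instruction wins.
    # Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "Be a concise assistant."},
            {"role": "user", "content": "Write a 1,000-word essay on the history of computing."},
        ],
    )
    reply = response.choices[0].message.content
    print(f"{len(reply.split())} words")
    print(reply)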

Instruction priority stack

Work through the four layers below in priority order, from highest to lowest.

Step 1: System

Highest priority: sets behavior and safety guardrails.

Step 2: Developer

Optional middle layer: templates or app-specific rules.

Step 3: User

Lowest priority: the prompt you type.

Step 4: Policies

Safety checks can refuse or redact content even if prompted otherwise.
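
These policy checks often live outside the prompt entirely, in the provider's stack. As a rough illustration, here is a minimal sketch assuming OpenAI's moderation endpoint; the endpoint, model name, and categories are provider-specific assumptions.

    # Sketch of a provider-side safety check that runs regardless of prompts.
    # Assumes OpenAI's moderation endpoint; other providers expose similar filters.
    from openai import OpenAI

    client = OpenAI()
    result = client.moderations.create(
        model="omni-moderation-latest",  # illustrative model name
        input="Text to screen before or after the chat call.",
    )
    if result.results[0].flagged:
        print("Blocked by policy, no matter what the system or user messages say.")
    else:
        print("Passed the safety check.")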

Try this now

Flip the instructions deliberately

  • Send a system prompt: “You are a strict 100-word limit assistant.”
  • Then ask for something long. See how the model enforces the cap.
  • Remove the system message and compare the response length.
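
Here is a minimal sketch of that experiment, assuming the OpenAI Python SDK; it sends the same question with and without the system prompt and compares the word counts.

    # Run the same question with and without the length-limiting system prompt.
    # Assumes the OpenAI Python SDK and OPENAI_API_KEY; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()
    system = {"role": "system", "content": "You are a strict 100-word limit assistant."}
    question = {"role": "user", "content": "Describe the water cycle in detail."}

    for label, messages in [("with system prompt", [system, question]),
                            ("without system prompt", [question])]:
        response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        text = response.choices[0].message.content
        print(f"{label}: {len(text.split())} words")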

Key Takeaways

  • Instruction order and type set priorities.
  • Conflicting instructions create unpredictable responses.
  • Clear, non-conflicting instructions yield stable outputs.
