AI4RA Workshop | REACH 2026
Layers of Context Engineering
A Reveal.js outline for the third workshop teaching module.
Module goal
Participants should leave with one core idea:
Strong AI workflows do not come from one giant prompt. They come
from a deliberate stack of instructions, examples, files, tools,
retrieval, and human review boundaries.
Opening frame
Your data is already layered — context engineering makes it explicit
HERD data in one system, policies in another, departmental
spreadsheets somewhere else, and dashboards pulling it together.
Context engineering is how AI works with institutional data where
it actually lives — the same integration challenge you already
solve for reporting.
Definition
What counts as context?
- Instructions that define the model's role and boundaries
- Task prompts that define the immediate assignment
- Examples and templates that shape consistency
- Files, images, and documents that provide reference material
- Tools and retrieved data that expand what the system can do
The stack
A practical context stack
- System or role instructions
- Task prompt
- Examples and templates
- Files and documents
- Tools and actions
- Retrieved and structured institutional data
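The stack above can be sketched as code. This is a minimal, illustrative sketch of assembling layered context into a chat-style message list; the function name, message format, and layer parameters are assumptions for teaching, not a specific vendor API.

```python
# Illustrative: assemble the context stack into a chat-style message list.
# Layer names and the message dict format are assumptions, not a real API.
def build_context(system, task, examples=(), documents=(), retrieved=()):
    # System or role instructions sit at the top of the stack.
    messages = [{"role": "system", "content": system}]
    # Examples and templates shape consistency before the live task.
    for user_msg, model_msg in examples:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": model_msg})
    # Files, documents, and retrieved institutional data become reference context.
    reference = "\n\n".join(list(documents) + list(retrieved))
    if reference:
        messages.append({"role": "user", "content": "Reference material:\n" + reference})
    # The task prompt goes last: it is the immediate assignment.
    messages.append({"role": "user", "content": task})
    return messages

msgs = build_context(
    system="You are a cautious research-administration assistant.",
    task="Summarize this quarter's proposal routing delays.",
    documents=["Routing policy excerpt..."],
)
```

The point for participants is not the code itself but that each list item in the stack maps to a distinct, inspectable input rather than one giant prompt.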
Decision rule
Use the thinnest layer that gets the job done
If a template and a clearer prompt solve the problem, stop there.
If a trusted PDF is enough, do not jump straight to retrieval. If
the task depends on local judgment, the right layer may be human
escalation instead of more automation.
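The decision rule can be shown as an ordered check, thinnest layer first. This is a hypothetical sketch for discussion; the layer names and the three boolean inputs are simplifications, not a real triage framework.

```python
# Hypothetical: the "thinnest layer" rule as an ordered series of checks.
# Layer names and predicates are illustrative simplifications.
def thinnest_layer(template_suffices, needs_local_judgment, needs_current_data):
    if template_suffices:
        # A clearer prompt or template solves it: stop there.
        return "prompt/template"
    if needs_local_judgment:
        # Local judgment means the right layer is a person, not more automation.
        return "human escalation"
    if needs_current_data:
        # Only reach for retrieval when the answer must be current.
        return "retrieval"
    # Otherwise a trusted document is enough.
    return "trusted document"
```

In a workshop, participants can run their own workflow through the checks and argue about the ordering, which is the real lesson.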
Research analytics lens
Why layered context matters for your work
Analytics and administration workflows mix dashboard data, SQL
queries, sponsor rules, policy documents, HERD submissions, and
departmental spreadsheets. A generic prompt will miss too much.
Layered context helps teams decide which parts need instructions,
which need a database query, and which need a person.
Worked example
Proposal routing response
A faculty member asks about routing deadlines and required internal
approvals for a proposal. A generic answer can sound polished while
missing local forms, current deadlines, or special cases.
Better workflows combine role guidance, templates, policy, current
data, and escalation rules.
What to teach
What each layer contributes
- Instructions set tone, caution, and escalation boundaries
- The prompt defines the current task and output
- Templates keep responses consistent
- Policy documents provide institutional authority
- Retrieved tables add current operational detail
- Human review protects edge cases and exceptions
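The last layer, human review, can be made explicit in code rather than left implicit. A minimal sketch, assuming a simple keyword screen; the edge-case markers here are invented examples, and a production system would use institution-specific rules.

```python
# Illustrative: human review as an explicit layer in the stack.
# The edge-case markers are invented for demonstration only.
EDGE_CASE_MARKERS = ("foreign sponsor", "cost share", "late submission")

def needs_human_review(request_text):
    """Return True when a request matches a known edge-case marker."""
    text = request_text.lower()
    return any(marker in text for marker in EDGE_CASE_MARKERS)
```

The design choice worth teaching: escalation criteria live in one visible place, so governance conversations can review and amend them like any other policy text.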
Exercise
Have participants map their own stack
- Pick one workflow people want to automate
- Write the smallest useful prompt
- List the other context needed for trust
- Sort it into layers
- Mark what is ready now and what still needs governance work
Discussion
Questions for participants
- Which workflows only need better prompts or templates?
- Which ones need files or retrieval before they are useful?
- Which become risky once the system can take an action?
- Where should a human remain part of the context stack?
- When would AI add value beyond what your existing dashboard already delivers?
Continue the conversation
Sessions this week that build on layered context
- C2 — Bridging Data Silos in Academia (Mon 1:30 PM)
- D3 — Demystifying SQL (Mon 2:30 PM)
- E5 — Standardizing Messy Research Data (Mon 3:45 PM)
- F1 — AI Agents for Research Compliance — Nate Layman (Tue 10:15 AM)
- F2 — Prompt Engineering for Research Intelligence (Tue 10:15 AM)
- I3 — Automated Data Collection with APIs (Tue 2:30 PM)
Bridge forward
Next modules build from this stack
Once participants can see context as layers, it becomes easier to
teach structured output, evaluation, retrieval design, automation
fit, and where human judgment still needs to stay in the loop.