Module 2: Evaluate context before using it

Participants should leave able to explain why provenance, permissions, quality, and stewardship determine whether context is safe enough to use in an AI workflow.
Context engineering is not just about giving an AI model more information. It is about deciding which information can be used, where it came from, how much it can be trusted, who is allowed to see it, and what should happen to it after the interaction.
Module brief
Start from the governance work participants already do for dashboards, then show that AI context relies on the same institutional muscles under higher stakes.
The system may read governed inputs such as policy documents, spreadsheets, and dashboard tables, but it can also produce new structured outputs such as extracted fields, classifications, summaries, and routing data that enter downstream workflows.
This module should leave people with a short set of questions they can use to judge whether a document set, policy source, or dataset is ready for AI-assisted work.
The hosted governance deck and the context readiness checklist should summarize the framing, example, and exercise on this page rather than introduce new content that only exists in slides.
You can introduce this section with a blunt contrast: most people think context engineering is about giving the model more useful information, but institutions actually succeed or fail here based on whether they govern that information well. Retrieval quality is downstream from governance quality.
Core teaching arc
Teams often talk about adding more documents, more system access, or richer retrieval pipelines to improve AI output. But if the underlying data is outdated, poorly permissioned, ambiguous, or untraceable, the added context can make the answer more dangerous, not more useful.
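For technically inclined participants, this point can be made concrete with a small readiness gate that screens candidate sources on governance metadata before anything reaches a retrieval index. This is a minimal sketch under stated assumptions: the metadata fields (`owner`, `last_reviewed`, `audience`, `supersedes_resolved`) are illustrative, not a standard schema, and the one-year review window is an arbitrary example threshold.

```python
from datetime import date, timedelta

# Assumption: each candidate source carries simple governance metadata.
# Field names and the review window below are illustrative choices.
MAX_REVIEW_AGE = timedelta(days=365)

def context_ready(doc: dict, today: date) -> list:
    """Return a list of governance problems; an empty list means the
    source is ready to be indexed as context."""
    problems = []
    if not doc.get("owner"):
        problems.append("no named steward")               # untraceable
    reviewed = doc.get("last_reviewed")
    if reviewed is None or today - reviewed > MAX_REVIEW_AGE:
        problems.append("review is stale or missing")     # possibly outdated
    if doc.get("audience") not in {"public", "internal"}:
        problems.append("audience/permission unclear")    # poorly permissioned
    if not doc.get("supersedes_resolved", False):
        problems.append("conflicting versions unresolved")  # ambiguous
    return problems

doc = {"title": "Proposal routing policy", "owner": "Research Ops",
       "last_reviewed": date(2024, 1, 15), "audience": "internal",
       "supersedes_resolved": True}
print(context_ready(doc, today=date(2024, 6, 1)))  # → []
```

The design point for the workshop is that the gate rejects by default: a source with no metadata at all fails every check, which mirrors the argument above that adding ungoverned context makes answers riskier, not better.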
AI systems do not only consume institutional data. They can also generate new data products: extracted fields from PDFs, tagged records, structured summaries, classifications, and draft workflow metadata. Once those outputs feed a dashboard, a queue, a report, or a routing decision, they become governed data too.
Research analytics teams already decide which source is authoritative for a metric, how to version a KPI definition, who can see which dashboard, and what happens when a source system changes mid-reporting cycle. AI context governance is the same discipline applied to a new kind of input and output.
A strong workshop message is that context engineering is not only about retrieval quality. It is also about stewardship: permissions, quality standards, curation, traceability, and clearly defined review boundaries.
Example and activity
Imagine an institution wants an AI assistant to help answer questions about proposal routing. The team connects policy documents, old email guidance, a shared drive of forms, and a few notes from experienced staff. The system now has more context, but not necessarily better context.
If the policy PDF is current, the shared drive is out of date, the email guidance reflects exceptions, and the notes are informal local practice, the model may blend all of that into a plausible answer that no one should trust without review.
If the system then extracts deadlines, approver names, sponsor requirements, or routing codes into a structured table, that generated table may look like clean operational data. It still needs provenance, validation, and ownership.
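One hedged pattern worth showing here is to never let extracted values travel alone: wrap each generated record with provenance fields so downstream consumers can see where it came from, what produced it, and whether a human has reviewed it. The sketch below is illustrative; the field names and the hypothetical pipeline name are assumptions, not a standard.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ExtractedRecord:
    """An AI-generated data product plus the provenance it needs
    before anyone treats it as operational data.
    Field names here are illustrative, not a standard schema."""
    values: dict            # e.g. {"routing_code": ..., "approver": ...}
    source_doc: str         # which governed input it was extracted from
    extractor: str          # model or pipeline version that produced it
    extracted_at: str       # when extraction happened
    review_status: str = "unreviewed"   # flips to "approved" after human check
    reviewer: Optional[str] = None

rec = ExtractedRecord(
    values={"routing_code": "R-14", "approver": "Dept. Chair"},
    source_doc="proposal-routing-policy-2024.pdf",     # hypothetical filename
    extractor="extraction-pipeline v0.3",              # hypothetical version
    extracted_at=datetime.now(timezone.utc).isoformat(),
)

# Downstream systems can then refuse records that lack human sign-off:
assert rec.review_status == "unreviewed"
```

The usage point matches the example above: the routing table still looks like clean operational data either way, but with this wrapper a dashboard or queue can filter on `review_status` instead of trusting the extraction by default.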
Facilitation support
This module should lead naturally into later sections on layers of context engineering, structured data, retrieval, and response evaluation. The through-line is simple: every technical layer becomes more useful when the institution knows what it is allowed to trust, retrieve, expose, and act on.
Derived assets
A Reveal.js slide outline based on this material is available for live delivery and iteration during workshop prep. It should continue to summarize the lecture framing, explanation, example, participant activity, and discussion prompts captured here.