Module 3

Your institutional data is already layered.

Context engineering is how AI meets institutional information where it actually lives: prompts, files, tools, retrieved sources, structured data, and the human review boundaries around them.

Module brief

Help participants choose the right layer for the job.

Learning goal

Choose the layer that fits the workflow

Participants should be able to describe the major context layers, explain what each one adds, and decide when a workflow needs retrieval, tools, templates, or human escalation.

In-room move

Teach from familiar silo problems

Use dashboard integration work, policy lookups, and crosswalks as the bridge to prompt layers, files, tools, and structured retrieval.

Participant artifact

A context stack map

The main reusable output is a simple map of the layers a real workflow needs now, plus which ones are unsafe or unnecessary.

Derived assets

Context layers slide deck and future worksheet

The current slide deck should be read as a condensed version of this module's framing, stack definition, example, and activity.

Lecture framing

Start with the data they already manage, not the magic

A practical way to teach this module is to connect to what the audience already knows: they spend their days integrating data from siloed systems into dashboards and reports. Context engineering is the same challenge applied to AI, with each layer improving capability in a different way and creating its own review obligations.

Core teaching arc

Each layer changes both capability and risk.

Module explanation

Prompting is only one layer in the stack

Adding a role prompt can improve tone and framing. Adding examples can improve consistency. Adding files and retrieval can improve grounding. Adding tools can make the workflow act on the world. Every new layer expands both what the system can do and what the team must validate.

The layers

A context stack participants can remember

  • System or role instructions that define the model's job and boundaries.
  • Task prompts that describe the immediate assignment.
  • Examples or templates that show the expected format and tone.
  • Files, images, or documents that provide direct reference material.
  • Tools and actions such as search, code, SQL, or workflow integrations.
  • Retrieved and structured institutional data that grounds answers in current sources.
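
The six layers above can be sketched as a simple data structure. This is a minimal illustration, not a real framework: the field names mirror the list, and every value in the example is hypothetical.

```python
# Minimal sketch of the context stack as a data structure (illustration only).
from dataclasses import dataclass, field

@dataclass
class ContextStack:
    role_instructions: str = ""    # the model's job and boundaries
    task_prompt: str = ""          # the immediate assignment
    examples: list[str] = field(default_factory=list)   # expected format and tone
    files: list[str] = field(default_factory=list)      # direct reference material
    tools: list[str] = field(default_factory=list)      # search, code, SQL, workflows
    retrieval_sources: list[str] = field(default_factory=list)  # grounding data

    def layers_in_use(self) -> list[str]:
        """Return the names of layers that are actually populated."""
        return [name for name, value in vars(self).items() if value]

# A workflow that needs only three layers -- thin by design.
stack = ContextStack(
    role_instructions="Answer routing questions cautiously; escalate edge cases.",
    task_prompt="When is the internal routing deadline for this proposal?",
    files=["routing_policy.pdf"],
)
print(stack.layers_in_use())
# ['role_instructions', 'task_prompt', 'files']
```

The `layers_in_use` check is the point of the sketch: a stack map makes visible which layers a workflow actually draws on and which remain empty.
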

Research analytics lens

Why layered context fits institutional work

Research analytics and administration work mixes policy interpretation, institutional practice, sponsor rules, dashboard data, HERD submissions, and local spreadsheets. A layered approach lets teams decide whether the job needs only better instructions, a trusted template, a policy document, a SQL query, or a human escalation rule.

Teaching takeaway

Use the thinnest layer that gets the job done

If a better prompt and a template produce reliable output, the team may not need retrieval. If a current policy PDF is enough, they may not need a live database connection. If an answer depends on institution-specific judgment, the right layer may be an escalation rule instead of more automation.

Suggested teaching flow

A sequence for presenting the stack

  1. Start with a plain-language definition: context is everything the model can rely on while doing the task.
  2. Show that a prompt is only one layer in a broader design stack.
  3. Walk through the major layers from lightest to heaviest.
  4. Explain what each layer is good at and where it introduces new failure modes.
  5. Emphasize that teams should use the thinnest layer that solves the problem reliably.
  6. Close by connecting layered context to evaluation, governance, and human review.

Decision aid

A quick rule for choosing the next layer

  • If the job is mostly wording and tone, start with instructions, examples, and templates.
  • If the job depends on a current policy or reference, add the smallest trusted file set that answers it.
  • If the job depends on live institutional facts, move to retrieval or SQL only after governance and permissions are clear.
  • If the job changes records, sends approvals, or creates risk, add a human gate before adding more automation.
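
The four rules above can be expressed as a small decision function. This is a hedged sketch: the job attributes are invented labels, and the rules are checked riskiest-first so that a human gate always wins when risk is present.

```python
# Sketch of the decision aid as a rule function (attribute names are hypothetical).
def next_layer(job: dict) -> str:
    """Pick the thinnest layer for the job, checking the riskiest condition first."""
    if job.get("changes_records") or job.get("creates_risk"):
        return "human gate"
    if job.get("needs_live_facts"):
        return "retrieval or SQL (after governance and permissions are clear)"
    if job.get("needs_current_policy"):
        return "smallest trusted file set"
    # Mostly wording and tone: stay with the lightest layers.
    return "instructions, examples, and templates"

print(next_layer({"needs_current_policy": True}))
# smallest trusted file set
```

Note the ordering choice: the decision aid reads lightest-first for teaching, but as executable rules the risk checks must come first, or a risky job that also needs a policy file would skip its human gate.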

Example and activity

Map the stack against a real institutional workflow.

Worked example

Drafting a proposal routing response

Imagine a unit wants AI help answering a faculty member's question about proposal routing deadlines and required internal approvals. Today, that information might live in a Power BI dashboard, a policy PDF, a departmental spreadsheet, and the institutional knowledge of an experienced staff member.

With layered context, the workflow can combine role instructions, a response template, the current routing policy PDF, a SQL query against the deadline database, and a rule that sends edge cases to an RA professional instead of guessing.
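
The combination described above can be sketched in code. Everything here is hypothetical: the edge-case terms, the policy file name, and the SQL text stand in for whatever the institution actually uses, and the escalation rule is deliberately checked before any drafting happens.

```python
# Sketch of the routing-response workflow assembled from the layers named above.
# All names (file, table, edge-case terms) are invented for illustration.
EDGE_CASE_TERMS = {"no-cost extension", "retroactive", "subaward change"}

def route_question(question: str) -> str:
    # Human-escalation rule first: edge cases go to an RA professional, not the model.
    if any(term in question.lower() for term in EDGE_CASE_TERMS):
        return "Escalated to RA professional for review."

    context = {
        "role": "Cautious assistant for proposal routing questions.",
        "template": "Deadline, required approvals, source cited.",
        "policy_file": "routing_policy.pdf",                 # authoritative grounding
        "deadline_rows": "SELECT * FROM routing_deadlines",  # current operational detail
    }
    return f"Drafted answer using layers: {sorted(context)}"

print(route_question("Is a retroactive subaward change allowed?"))
# Escalated to RA professional for review.
```

In a live session, the escalation branch is worth dwelling on: it is a context layer, not an afterthought, and it fires before the model is asked to guess.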

What to point out

What each layer contributes in the example

  • The role instruction sets tone, caution, and escalation boundaries.
  • The task prompt defines the current question and desired output.
  • The template keeps the answer consistent across requests.
  • The policy document provides authoritative institutional grounding.
  • The retrieved deadline table adds current, specific operational detail.
  • The human-review rule protects cases where policy and local practice diverge.

Hands-on exercise

Ask participants to map their own context stack

  1. Choose a real workflow from your institution that people are tempted to automate.
  2. Write the smallest prompt that describes the task.
  3. List what other context the model would need to be genuinely useful.
  4. Separate that context into layers: instructions, examples, files, tools, retrieval, and human escalation.
  5. Decide which layers are available now, which ones are unsafe, and which ones need governance work first.
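
The finished artifact from steps 1 through 5 can be as simple as a plain data record. This is one hedged sketch of what a completed context stack map might look like; the workflow and the layer statuses are invented examples.

```python
# Hypothetical example of a completed context stack map (illustration only).
stack_map = {
    "workflow": "Answer proposal routing questions",
    "layers": {
        "instructions":     "available now",
        "examples":         "available now",
        "files":            "available now (current policy PDF)",
        "tools":            "unnecessary",
        "retrieval":        "needs governance work first",
        "human escalation": "available now (route edge cases to RA staff)",
    },
}

# Which layers could this workflow use today?
ready = [name for name, status in stack_map["layers"].items()
         if status.startswith("available now")]
print(ready)
# ['instructions', 'examples', 'files', 'human escalation']
```

A map like this makes step 5 concrete: the team can see at a glance that the workflow is usable now with four layers, and that retrieval is a governance conversation rather than a technical one.
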

Discussion prompt

Questions to ask participants

  • Which workflows at your institution only need a better prompt or template?
  • Which ones need trusted files or retrieval before the output could be useful?
  • Which workflows would become risky the moment the system could take an action?
  • Where should a human remain part of the context stack rather than outside it?
  • When would AI add value beyond what your existing dashboard or SQL query already delivers?

Facilitation support

Keep every layer connected to trust and review.

Speaker notes

Talking points for the presenter

  • Keep reminding participants that prompting is not the whole system.
  • Use the phrase "thin layer first" to normalize incremental design.
  • Show that some workflows improve more from templates and examples than from more model sophistication.
  • Stress that tools and retrieval increase power, but also increase the need for permissions, logging, and review.
  • Connect every layer back to trust: what is this layer allowed to influence, and who owns it?

REACH sessions to highlight

Complementary sessions on Monday and Tuesday

  • E5 - Building a Crosswalk: A Practical Framework for Standardizing Messy Research Data (Mon 3:45 PM, NEWPORT).
  • C2 - Bridging Data Silos in Academia: Smartsheet, Tableau, and Power BI as Catalysts (Mon 1:30 PM, WEATHERLY).
  • D3 - Demystifying SQL: Interpreting and Building Queries for Beginners (Mon 2:30 PM, COLUMBIA).
  • I3 - From Manual Downloading to Automated Data Collection with APIs (Tue 2:30 PM, COLUMBIA).
  • F2 - Can Prompt Engineering Turn General Questions into Actionable Research Intelligence? (Tue 10:15 AM, WEATHERLY).
  • F1 - Lessons Learned from Implementing AI Agents for Higher Ed Research Compliance (Tue 10:15 AM, FREEDOM).

Bridge forward

How this sets up later sections

This module creates a practical bridge into structured output, evaluation, retrieval design, and workflow automation. Once participants can see context as a layered stack, it becomes much easier to ask which layers belong in a given workflow, which ones are governed well enough to use, and where human judgment still has to stay in the loop.

Derived assets

Slides and related materials for this module

Presentation asset

Slide version of this module

A Reveal.js slide outline is available for live delivery, workshop rehearsal, and follow-up refinement. It should remain a condensed version of the framing, layer map, worked example, decision aid, and exercise documented here.