Dashboard governance
- Which source is authoritative for a metric?
- Who can access which dashboard?
- How often is the data refreshed?
- What happens when a source changes mid-cycle?
AI4RA Workshop | REACH 2026
A facilitator-ready module for research analytics and administration teams deciding what information AI should be allowed to trust.
Module goal
Context engineering is not mainly about giving the model more information. It is about deciding which information the institution can safely trust, use, expose, and act on.
Opening move
AI governance calls for the same institutional judgment as dashboard governance, but the system can now synthesize, classify, extract, and route work, not just display information on a screen.
A useful opening line: retrieval quality is downstream from governance quality.
Definition
Context engineering governs:
- The raw material: policies, files, tables, forms, notes, prompts, and tools.
- Which sources are authoritative enough to influence the answer.
- What must be cited, refreshed, validated, or constrained.
- When the system should abstain, defer, or escalate because the workflow leaves safe bounds.
Core thesis
The instinct is to add more documents, more retrieval, more system access, and more files in hopes that the answer becomes better.
But if the data is outdated, poorly permissioned, ambiguous, or untraceable, the answer may sound more confident while becoming less safe and less institutionally valid.
Key distinction
Inputs are not the only governed artifacts: once AI-generated outputs feed a queue, dashboard, report, or decision, governance applies to them too.
Governance lens
- Where did this information come from, and can we prove it?
- Who is allowed to access it, transform it, or expose it?
- How current, complete, reliable, and explainable is it?
- Does it contain private, regulated, or institutionally sensitive material?
- Who owns refresh, correction, and validation over time?
- When should the system answer, cite and defer, or escalate?
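For teams that keep a source inventory, the six questions above can be captured as one metadata record per candidate source. This is an illustrative sketch, not a standard schema; every field name and example value here is an assumption.

```python
from dataclasses import dataclass

@dataclass
class SourceRecord:
    """Governance metadata for one candidate context source (illustrative)."""
    name: str
    provenance: str           # where it came from, and can we prove it?
    owner: str                # who owns refresh, correction, and validation?
    access_roles: list[str]   # who may access, transform, or expose it?
    last_refreshed: str       # how current is it? (ISO date)
    contains_sensitive: bool  # private, regulated, or sensitive material?
    answer_policy: str        # "answer", "cite_and_defer", or "escalate"

# Hypothetical example: the current policy PDF from the worked example.
policy_pdf = SourceRecord(
    name="Proposal routing policy (PDF)",
    provenance="Office of Research intranet, versioned",
    owner="Research Administration",
    access_roles=["ra_staff", "faculty"],
    last_refreshed="2026-01-15",
    contains_sensitive=False,
    answer_policy="answer",
)
```

A record like this makes the governance questions answerable before a source is ever wired into retrieval.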
Research analytics lens
- For a dashboard: Which number is authoritative? What is the refresh cadence? Who has access? What does this metric mean?
- For AI context: Which source should influence the answer? What should be cited back? Who can see it? When must a human validate the output?
The discipline is familiar. The difference is that AI can act on the data and create new data for downstream use.
Worked example
A unit wants an AI assistant to answer questions about proposal routing deadlines and internal approvals.
The team connects a current policy PDF, old shared-drive forms, exception-heavy email guidance, and a few notes from experienced staff.
Those sources do not carry the same authority, freshness, or interpretive weight. The model may blend them into a plausible answer that no one should trust without review.
Source map
- The current policy PDF: a good candidate for grounding if the version is current and owned.
- The old shared-drive forms: a useful reference, but dangerous if staff do not know which copy is current.
- The exception-heavy email guidance: may reflect edge cases that should not be generalized into a default answer.
- The notes from experienced staff: helpful for context, but often hard to defend as institutional policy.
Failure mode
Generated outputs
If the assistant extracts deadlines, approver names, sponsor requirements, or routing codes into a structured table, that output starts to look like clean operational data, even though it carries no more authority than the sources it was drawn from.
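One mitigation is to make every extracted value carry its provenance, so downstream consumers can see that a clean-looking table is still a generated artifact awaiting validation. A minimal sketch; the function name, field names, and example values are assumptions:

```python
def extract_record(value: str, source: str, reviewed: bool = False) -> dict:
    """Wrap an AI-extracted value with the provenance a downstream queue should demand."""
    return {
        "value": value,
        "source": source,             # which source grounded this value
        "generated_by": "ai_assistant",
        "human_reviewed": reviewed,   # stays False until someone validates it
    }

row = extract_record(
    "5 business days before sponsor deadline",
    source="Proposal routing policy (PDF), refreshed 2026-01-15",
)
```

A downstream queue or dashboard can then refuse rows where `human_reviewed` is still `False`.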
Decision aid
- Authoritative, current, permissioned, explainable, and clearly owned.
- Potentially useful, but freshness, ownership, access, or provenance is still unclear.
- Too sensitive, too ambiguous, or too unreliable for operational use.
Would you trust this source in a dashboard, audit trail, or official workflow decision?
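The three tiers of the decision aid can be sketched as a small triage function over the governance questions. The tier names and the exact rules below are assumptions for illustration, not an official rubric:

```python
def triage_source(authoritative: bool, current: bool, permissioned: bool,
                  explainable: bool, owned: bool, sensitive: bool) -> str:
    """Assign a readiness tier to one source (illustrative triage logic)."""
    if sensitive or not explainable:
        # Too sensitive, too ambiguous, or too unreliable for operational use.
        return "not_ready"
    if authoritative and current and permissioned and owned:
        # Safe to ground answers on: every governance question is settled.
        return "ready"
    # Potentially useful, but something (freshness, ownership, access) is unclear.
    return "conditional"
```

In a workshop setting, the point is not the code but the forcing function: each argument is a governance question someone must be able to answer.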
Activity
If time allows, ask two people to share why they made different decisions about source readiness.
Facilitator note
Governance is the condition that makes automation safe, inspectable, and institutionally legitimate.
Discussion
Continue the conversation
- Operationalizing Data Governance (Tue 11:15 AM, WEATHERLY)
- Ethical and Epistemic Foundations (Tue 11:15 AM, NEWPORT)
- Building Data Literacy as a Shared Language (Tue 2:30 PM, ENTERPRISE)
- Automating Your Data Dictionary (Tue 1:30 PM, COLUMBIA)
Takeaway
Trustworthy AI workflows depend less on how much context a model can access and more on whether the institution knows what it is allowed to trust, retrieve, expose, and act on.
Module assets
Bridge forward
Module 3 builds from this foundation by showing how prompts, files, tools, retrieval, and human escalation work together once the institution knows what belongs in the system.