AI4RA at REACH 2026

AI4RA: The Intersection Between AI and Data

A workshop for research analytics and administration professionals who need a practical way to think about prompting, evaluation, context, automation, and when AI should support human work instead of obscuring it.

Monday, April 20, 1:00 PM - 4:00 PM, Room BRISTOL
24 seats. Intermediate workshop; registration required
Presenters Barrie Robison, Nate Layman, and Nathan Wiggins
Focus Prompting, workflows, evaluation, RAG, and Vandalizer

Why this workshop exists

Practical AI adoption starts with judgment, not novelty.

Artificial intelligence and data science are transforming research analytics, administration, and other higher education functions, but the value is uneven unless people can distinguish useful application from hype. Whether you build dashboards, write SQL queries, wrangle institutional data, or manage sponsored programs, this session frames AI and data science as complementary tools. It shows where human expertise remains essential and gives participants a grounded way to design reviewable workflows rather than black-box experiments.

Site guide

Start from the page that matches what you need.

Course hub

Browse modules and teaching assets

The course content page is now a hub with dedicated links out to each module page, its slide deck, and related handouts.

Slide library

Open the presentation decks directly

Use the slides page when you want the Reveal decks without moving through the full facilitator guides first.

Presenters

Review bios and affiliations

The presenter page stays separate so workshop framing and people information are easy to find from the landing page.

Learning objectives

What participants should leave able to do

Prompting

Improve output quality

Apply prompt engineering fundamentals to improve quality, relevance, and consistency in AI responses.
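As one hedged illustration of what "prompt engineering fundamentals" can mean in practice, a prompt that states role, task, constraints, and output format tends to produce more consistent responses than a bare question. All names and task text in this sketch are invented for illustration:

```python
# A minimal sketch of prompt structure: role, task, constraints, and
# output format are stated explicitly. Every string here is invented
# for illustration; adapt them to your own institutional context.

def build_prompt(task: str, constraints: list[str], output_format: str) -> str:
    """Assemble a structured prompt from its parts."""
    lines = [
        "You are an analyst supporting a university research office.",
        f"Task: {task}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    lines.append(f"Respond as: {output_format}")
    return "\n".join(lines)

prompt = build_prompt(
    task="Summarize this award notice for a department administrator.",
    constraints=["Cite only text present in the notice", "Flag any missing dates"],
    output_format="three labeled bullet points",
)
print(prompt)
```

The point is not the template itself but the habit: naming the role, the constraints, and the expected shape of the answer makes outputs easier to review and compare.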

Decomposition

Break work into parts

Turn complex research analytics and administration tasks into discrete, AI-manageable components with review points.

Workflow design

Build end-to-end workflows

Develop complete AI-assisted workflows for common research analytics and administration processes.

Judgment

Decide what belongs

Evaluate which tasks and workflows are suitable for AI deployment — and when a well-designed query, dashboard, or human process is the better answer.

Workshop flow

Three hours organized around decision quality

The session moves from framing and task selection into prompting, context design, and demonstrations so attendees can connect ideas directly to the analytics, reporting, and administration work they do every day.

1:00 PM

Foundations

AI4RA introduction, FAIR principles, the Vandalizer workflow, and frameworks that connect data science tasks to agentic workflows.

1:45 PM

Prompting and evaluation

Prompt engineering, structured output, and judging whether a response is trustworthy and usable.
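One way to make "structured output" concrete: ask the model for JSON, then validate it before it enters any downstream workflow. This is a sketch under assumptions; the response string below is simulated, and the field names are invented, where a real pipeline would receive the text from an API call:

```python
import json

# Sketch: validate a model's JSON response before letting it flow into
# automation. The response below is simulated and the required fields
# are invented for illustration.

REQUIRED_FIELDS = {"sponsor", "award_amount", "confidence"}

simulated_response = '{"sponsor": "NSF", "award_amount": 250000, "confidence": 0.92}'

def check_response(raw: str) -> dict:
    """Parse and sanity-check structured output; raise on anything unusable."""
    data = json.loads(raw)  # fails fast on malformed JSON
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if not (0.0 <= data["confidence"] <= 1.0):
        raise ValueError("confidence out of range")
    return data

record = check_response(simulated_response)
print(record["sponsor"], record["award_amount"])
```

A check like this is also where evaluation becomes operational: a response that fails validation is rejected automatically instead of being trusted by default.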

2:30 PM

Context and retrieval

How AI works with the SQL queries and databases you already maintain — plus files, images, RAG, semantic search, MCP tools, and unstructured data as workflow inputs.
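To give a feel for the retrieval step behind RAG and semantic search, the toy sketch below ranks documents by similarity to a query and returns the best match, which would then be passed to the model as context. Real systems use embedding vectors; simple word-count vectors stand in here so the example runs with the standard library alone, and the documents are invented:

```python
from collections import Counter
from math import sqrt

# Toy sketch of the retrieval step in RAG: rank documents by similarity
# to a query. Word-count vectors stand in for embeddings so this runs
# with the standard library alone. The documents are invented.

docs = {
    "awards": "sponsored award setup and account activation steps",
    "effort": "effort certification deadlines for faculty",
    "export": "export control review for international shipments",
}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    """Return the key of the document most similar to the query."""
    q = Counter(query.lower().split())
    return max(docs, key=lambda k: cosine(q, Counter(docs[k].lower().split())))

best = retrieve("when is effort certification due")
print(best)
```

The same shape scales up: swap the word counts for embeddings and the dictionary for a vector store, and the retrieved text becomes the context block in the prompt.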

3:15 PM

Demos and discussion

Promptulus, Vandalizer, UDM feedback, automation fit, and guardrails for vibe coding.

Topic brainstorm

Priority threads for the session

These are the content lanes that make the workshop practical for participants who need to return to their institutions and decide what to pilot, what to document, and what to reject — whether the answer is AI, a better dashboard, or a well-governed data pipeline.

  • Which tasks are suitable for AI, which belong in a dashboard, and which need human judgment
  • When AI adds value beyond what your existing queries and reports already deliver
  • RAG, semantic search, extraction, and working with messy institutional data
  • Kinds of context: SQL, files, images, MCP tools, RAG, and CLI
  • Response evaluation and judging AI output quality
  • Structured output for repeatable workflows and downstream automation
Applied tool

Promptulus

Use the live demo environment to make prompt structure and output comparison concrete for attendees.

Applied tool

Vandalizer

Show how AI can streamline and enhance research administration workflows in a temporary demo setting.

Institutional fit

UDM feedback

Connect the session back to practical data standards work and how Vandalizer can help populate the UDM.

Decision lens

Human review

Reinforce where human oversight, policy interpretation, and institutional context remain non-negotiable.

Session details

Quick facts for facilitators and attendees

Format

Workshop session

WA3 - Workshop AI4RA - The Intersection Between AI and Data.

Audience

Intermediate level

Best suited for analytics and administration professionals ready to move from AI curiosity into workflow and governance questions.

Framing

Human-centered practice

Emphasizes actionable use cases, evaluation, and where human expertise remains essential.

Access

Temporary demos

Includes hands-on exposure to Promptulus and a temporary demo version of Vandalizer.

People

Presenter page

Review facilitator affiliations, bios, and credentials on the dedicated Presenters page.

Course content

Module hub and facilitator guides

Explore the new Course Content hub, the dedicated module pages, the slide deck directory, and the participant-facing context readiness checklist.

Continue the conversation

Sessions on Monday and Tuesday that extend what we cover

Our workshop introduces ideas that many other REACH sessions explore in depth. These are not competing talks — they are complementary. Use this map to plan which sessions to prioritize after the workshop.

Governance

Data governance and responsible evaluation

G2 — Panel: Operationalizing Data Governance (Tue 11:15 AM). G5 — Beyond Compliance: Ethical and Epistemic Foundations (Tue 11:15 AM). I4 — Building Data Literacy as a Shared Language (Tue 2:30 PM).

Context & integration

Working with fragmented institutional data

E5 — Building a Crosswalk: Standardizing Messy Research Data (Mon 3:45 PM). C2 — Bridging Data Silos in Academia (Mon 1:30 PM). H6 — Building a Modern Research Analytics Ecosystem with Microsoft Fabric (Tue 1:30 PM).

AI applied

AI in research analytics practice

F1 — Lessons Learned from Implementing AI Agents for Research Compliance (Tue 10:15 AM). F2 — Can Prompt Engineering Turn Questions into Actionable Research Intelligence? (Tue 10:15 AM). B1 — Enhancing Research Support with Generative AI (Mon 11:15 AM).

SQL & structured data

Queries, APIs, and data dictionaries

D3 — Demystifying SQL: Building Queries for Beginners (Mon 2:30 PM). H3 — Automating Your Data Dictionary (Tue 1:30 PM). I3 — From Manual Downloading to Automated Data Collection with APIs (Tue 2:30 PM).

AI readiness

Frameworks and strategic judgment

D1 — SCALE: A Data-Driven Framework for Building AI Readiness (Mon 2:30 PM). G1 — Building AI-Native Research Intelligence for Strategic Impact (Tue 11:15 AM). H1 — AI for Research Analytics: Chatting with Institutional Data at Scale (Tue 1:30 PM).

Reporting & dashboards

From data to decisions

D4 — One Dataset, Many Audiences: Research Reporting That Gets Used (Mon 2:30 PM). E1 — Beyond Reporting: AI-Enabled Analytics for Gaps, Risk, and Accountability (Mon 3:45 PM). I2 — Designing Narrative-Driven Dashboards (Tue 2:30 PM).

Workshop takeaway

Participants should leave with a sharper sense of what to automate, what to augment, and what still demands human judgment.

The goal is not simply to introduce tools. It is to help research analytics and administration professionals build safer, more relevant, more inspectable AI-assisted workflows — and to know when the right answer is a better query, not a bigger model.