Vandalizer is the AI workflow layer in the AI4RA ecosystem. Its purpose is not to add AI for its own sake, but to create a transparent, governable way to explore where automation can assist research administration without obscuring responsibility.

That distinction matters. Research administration professionals are already being asked to evaluate AI claims in an environment full of hype, uncertain expectations, and uneven institutional readiness. Vandalizer should be framed as a practical experiment in trustworthy augmentation, not as a promise of autonomous administration.

Why the community needs it

There are real opportunities for AI to help with extraction, classification, triage, review support, and repetitive administrative tasks. There are also real risks when tools blur accountability, hide uncertainty, or operate without meaningful human review.
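The pattern of surfacing uncertainty rather than hiding it can be made concrete. The sketch below is a hypothetical, minimal triage step; none of these names, thresholds, or structures come from Vandalizer itself. It routes low-confidence classifications to a human reviewer instead of acting on them automatically, and it keeps the model's confidence in the result so later audit can see why a decision was or was not automated:

```python
from dataclasses import dataclass

# Hypothetical confidence threshold; a real workflow would calibrate this
# against observed error rates rather than picking a number arbitrarily.
REVIEW_THRESHOLD = 0.85

@dataclass
class TriageResult:
    item_id: str
    label: str
    confidence: float
    needs_human_review: bool

def triage(item_id: str, label: str, confidence: float) -> TriageResult:
    """Route any classification below the threshold to human review.

    The uncertainty is preserved in the result instead of being
    discarded, so an audit log can show the basis for each routing
    decision.
    """
    return TriageResult(
        item_id=item_id,
        label=label,
        confidence=confidence,
        needs_human_review=confidence < REVIEW_THRESHOLD,
    )

# A confident classification passes through; an uncertain one is flagged.
auto_ok = triage("award-001", "data-use-agreement", 0.97)
flagged = triage("award-002", "subcontract", 0.62)
```

The design choice worth noting is that the threshold and the routing decision are explicit data, not buried behavior, which is what makes them reviewable.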

Vandalizer should help the community ask better questions:

  • where does automation actually reduce administrative burden
  • where is human judgment still essential
  • how should review, provenance, and error handling be designed
  • what governance signals are necessary before institutions should trust a workflow

What good governance looks like

This release should make its safeguards visible. That includes:

  • explicit use cases rather than vague AI promises
  • clear boundaries on what the workflow is and is not doing
  • human review expectations
  • documentation of known limitations and failure modes
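One way to make those safeguards visible is to require each workflow to declare them in machine-readable form. The manifest below is a hypothetical sketch of what such a declaration might contain; the field names are illustrative assumptions, not part of any published Vandalizer schema:

```python
# Hypothetical workflow manifest: each field maps to one of the
# governance signals above (explicit use case, clear boundaries,
# human review expectations, documented limitations).
MANIFEST = {
    "use_case": "extract key dates from subaward agreements",
    "does": ["extract dates", "flag missing clauses"],
    "does_not": ["approve awards", "modify records", "contact sponsors"],
    "human_review": {
        "required": True,
        "reviewer_role": "grants officer",
    },
    "known_limitations": [
        "scanned PDFs with poor OCR quality",
        "non-English agreements",
    ],
}

def is_reviewable(manifest: dict) -> bool:
    """A workflow qualifies only if all safeguard fields are declared
    and human review is explicitly required."""
    required = {
        "use_case", "does", "does_not",
        "human_review", "known_limitations",
    }
    return required <= manifest.keys() and manifest["human_review"]["required"]
```

A check like `is_reviewable` turns the governance expectations from prose into a gate: a workflow that omits its boundaries or its review requirement simply does not pass.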

How people can engage

Useful contributions include:

  • identifying high-friction administrative tasks that may be suitable for bounded automation
  • reviewing proposed workflows for hidden policy or compliance risks
  • contributing evaluation criteria for trustworthiness, reproducibility, and usefulness
  • helping translate practitioner concerns into concrete implementation requirements