A methodology for optimizing IT processes with reliable LLM solutions

Dossier: PD.PD.PD03.034
Status: Ongoing
Grant: €267,400
Start date: 1 January 2026
End date: 31 December 2030
Scheme: Financiering PD-kandidaten 2023-2027
Themes
  • Key technologies and sustainable materials
  • Science and technology (bètatechniek)

This PD project addresses the growing need for organizations to integrate Large Language Models (LLMs) into IT processes in a reliable, effective, and responsible way. While LLMs show strong potential for automating tasks such as code reviews, support ticket triage, and report generation, their adoption is hampered by challenges of trust, quality assurance, and process alignment. Current evaluation techniques are well established for code but insufficient for complex textual and decision-support tasks, leaving a gap between technical capabilities and practical, trustworthy applications.
The project aims to develop a methodology that enables companies to structure the deployment of LLMs within IT processes, ensuring quality, reliability, and measurable added value. Central elements include: (1) defining task-specific quality criteria and impact indicators, (2) creating a technical evaluation framework that combines established software engineering practices with emerging AI reliability techniques, including LLM-as-a-Judge, (3) systematically assessing organizational and human impact, and (4) embedding legal and ethical considerations such as compliance with the European AI Act.
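Element (2) mentions LLM-as-a-Judge, a pattern in which a judge model rates another model's output against task-specific quality criteria. A minimal sketch of that pattern is shown below; the rubric, weights, and the stubbed `judge_fn` are illustrative assumptions, not part of the project's framework. In real use, `judge_fn` would prompt a judge LLM with the criterion and return its rating.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    name: str
    weight: float  # relative importance; weights sum to 1.0

def evaluate_output(output: str,
                    criteria: list[Criterion],
                    judge_fn: Callable[[str, str], float]) -> float:
    """Weighted LLM-as-a-Judge score in [0, 1].

    judge_fn(output, criterion_name) would normally prompt a judge
    model to rate the output on one criterion; it is injected here
    so the pattern can run without an API call.
    """
    return sum(c.weight * judge_fn(output, c.name) for c in criteria)

# Stub judge for demonstration; a real judge would call an LLM
# with a rubric prompt and parse the returned rating.
def stub_judge(output: str, criterion: str) -> float:
    if criterion == "correctness":
        return 1.0 if "return" in output else 0.5
    return 0.8  # e.g. readability

criteria = [Criterion("correctness", 0.7), Criterion("readability", 0.3)]
score = evaluate_output("def f(x): return x + 1", criteria, stub_judge)
print(round(score, 2))  # → 0.94
```

Making the judge a plain function keeps the scoring logic testable in isolation, which matches the abstract's aim of combining established software engineering practices with emerging AI reliability techniques.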
An iterative, action- and design-oriented research approach is used, in close collaboration with industry partners Sioux Technologies and Alliander. The PD project enables personal development through four interconnected roles: professional, innovator, researcher, and change agent, ensuring that technical innovations are combined with organizational embedding and ethical responsibility.
The project contributes to practice by providing a validated, transferable framework for responsible LLM adoption in IT processes, to research by advancing methods for evaluating LLM reliability and human–AI interaction, and to education by preparing future professionals for AI-augmented engineering. In doing so, it balances technological innovation with societal responsibility, supporting sustainable and trustworthy use of generative AI in complex IT environments.

Contact information

Fontys Hogeschool

L. Schrijvers, contact person

Consortium partners

at the start of the project