AI systems: MLOps, model governance and explainable AI ensure robust use

Modern AI systems have a reputation for being black boxes whose inner workings remain hidden from users and developers alike. On the way to productive use of AI, companies therefore face challenges that classic software development never posed. On the one hand, teams must bring models learned from data into production quickly and efficiently, and then continuously monitor and update them. The process models and technical components of Machine Learning Operations (MLOps) help here.


On the other hand, companies must ensure that their AI systems meet all relevant legal requirements and cause neither wrong business decisions nor reputational damage. This requires model governance, but also methods that make the decisions of AI systems comprehensible to all stakeholders. Explainable AI (XAI for short) offers a variety of approaches for extracting explanations that people can understand from the complex mathematical structures of the models used.
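
One widely used family of XAI approaches is feature attribution, for example with the SHAP library. The following is a minimal sketch, not the system described in this article: the tree-based classifier and the applicant features are purely illustrative stand-ins.

```python
# Minimal illustrative sketch: feature attributions with SHAP for a tree-based
# classifier. Model, features and data are stand-ins, not the article's system.
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for tabular applicant data
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X = pd.DataFrame(X, columns=["experience", "skill_score", "education",
                             "certificates", "languages"])

model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer yields Shapley-value-based attributions per feature and prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:10])

# The attributions show how much each feature pushed an individual prediction
# towards "invite to interview" or "reject"
print(shap_values)
```

Such per-prediction attributions are what make an individual decision discussable with stakeholders, for instance why one application was filtered out and another was not.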

Using machine learning comes with responsibilities and obligations. To meet them, a company needs processes through which it

  • controls access to ML models,
  • implements guidelines and legal requirements,
  • tracks interactions with the ML models and their results (see the sketch below), and
  • records the basis on which a model was created.

Model governance designates these processes in their entirety.
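
To make the tracking of model interactions concrete, the following minimal sketch shows one possible audit log; the field names, the JSON-lines storage and the model name are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch of an audit log for model interactions; the field names, the
# JSON-lines file and the model name are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "model_audit.jsonl"

def log_prediction(model_name: str, model_version: str,
                   features: dict, prediction, user: str) -> None:
    """Record who queried which model version with what input and result."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,                    # access control: who triggered the call
        "model": model_name,
        "model_version": model_version,  # ties the result to a versioned artifact
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),                   # traceable without storing raw input data
        "prediction": prediction,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example interaction
log_prediction("applicant-screening", "1.3.0",
               {"experience": 4, "skill_score": 0.8}, "invite", user="hr_portal")
```

Hashing the input instead of storing it verbatim is one way to keep interactions traceable without persisting raw personal data; whether that is sufficient depends on the applicable data protection requirements.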

Checklist:

  • Complete model documentation or reports, including reporting of metrics with suitable visualization techniques and dashboards
  • Versioning of all models to create external transparency (explainability and reproducibility); a tooling sketch follows below
  • Complete data documentation to ensure high data quality and compliance with data protection
  • Management of ML metadata
  • Validation of ML models (audits)
  • Ongoing monitoring and logging of model metrics
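
Several of these checklist items, such as model versioning, metadata management and metric logging, can be covered by experiment-tracking tools. The following is a minimal sketch using MLflow as one possible option; the run name, parameters and the data-snapshot tag are hypothetical.

```python
# Minimal sketch using MLflow as one possible tool for versioning, metadata
# management and metric logging; run name, parameters and tags are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="applicant-screening-v1"):
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # ML metadata: parameters, a data-lineage hint and metrics are stored per run
    mlflow.log_param("algorithm", "LogisticRegression")
    mlflow.log_param("training_data_snapshot", "applications_2023_q1")
    mlflow.log_metric("accuracy", acc)

    # The model artifact itself is versioned and can later be audited or reproduced
    mlflow.sklearn.log_model(model, artifact_path="model")
```

Whatever tool is chosen, the point for model governance is that every production model can be traced back to its training run, data snapshot, parameters and recorded metrics.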

Companies can only achieve sustainable success with AI software if they build their AI systems on these three pillars: MLOps, model governance and explainable AI (see Fig. 1). To make their interplay tangible, this article uses real-life examples to show how integrating the three elements helps to build solid AI applications.

Only the interaction of Model Governance, MLOps and Explainable AI enables the reliable and profitable use of AI systems (Fig. 1).

In our application example, a company wants to make its application process more efficient and plans to use AI-based automatic pre-filtering of applications: the system should recognize which applications are promising and should lead to an interview. At a time when HR departments decide after a cursory glance at the documents whether an application deserves closer attention, automated pre-filtering seems worthwhile. However, numerous pitfalls and risks lurk on the way to implementing such a system (Fig. 2).
