Project Details

Description

"With great power comes great responsibility" is a cliché that applies perfectly to artificial intelligence (AI). As technology progresses faster than ever towards software with ever more capabilities, adaptivity and autonomy, the risks are becoming increasingly apparent. Humans still lack the tools to systematically obtain strong guarantees, for instance about safety and privacy, from the intelligent software that serves them. Paradoxically, but not so surprisingly, better human control over AI could come from intelligent software itself, and more specifically from automated verification software. Verification is a scientific discipline within computer science that devises automated methods to improve the dependability of computer-based systems. It does so by exploiting mathematically grounded techniques such as model checking, theorem proving and model-based testing. Although automated verification has made remarkable progress over the past decades, its application is still limited to traditional software that does not use AI or machine learning technology. This project aims to remove these limitations by investigating the logical and probabilistic underpinnings of verification, machine learning and AI. The expected results are new theories, formalisms and algorithms that will constitute the foundations for a new generation of AI-ready verification methods.
Acronym: VeriLearn
Status: Finished
Effective start/end date: 1/01/18 – 31/12/22

Keywords

  • artificial intelligence
  • machine learning
  • software verification
  • system safety

Attachment to a Research Institute in UNAMUR

  • NADI
