Click the links below to view or download our previous newsletters:
Click the links below to view or download the contributions from each partner:
As part of the GALICIA project, a consultation was launched to gather insights from a diverse range of stakeholders, including innovative SMEs and start-ups, research centers, large companies, industry associations, and innovation hubs, in both Europe and Canada. The objective was to understand their experiences and perspectives on the innovation prospects of the project.
The consultation started at the end of October 2024 and will conclude at the end of February 2025. This report presents the partial results collected up to January 8, 2025. The final survey report will be shared in March 2025.
The KPI4 public report “Platform logical architecture and requirements” details the platform’s architecture, specifying its key components and their features, alongside the software delivery requirements. In particular, this document outlines an application that generates and verifies source code from natural language requirements.
It features a user-friendly interface and a server that orchestrates interactions with Large Language Models (LLMs) such as LLaMA and GPT-4o and with formal verification tools such as NuSMV and PyModel.
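As an illustration only, the following sketch shows how such an orchestration layer could be wired in Python. The helper names, the assumption that the LLM returns both source code and an SMV model, and the single-file NuSMV invocation are assumptions made for this example rather than the project's actual API.

```python
import subprocess
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class VerificationResult:
    passed: bool
    report: str

def verify_with_nusmv(smv_model_path: str) -> VerificationResult:
    """Run the NuSMV model checker on an SMV model and inspect its verdict."""
    proc = subprocess.run(["NuSMV", smv_model_path], capture_output=True, text=True)
    # NuSMV reports violated specifications with '... is false' in its output.
    return VerificationResult("is false" not in proc.stdout, proc.stdout)

def orchestrate(requirement: str,
                llm_generate: Callable[[str], Tuple[str, str]]) -> Tuple[str, VerificationResult]:
    """Server-side flow: natural-language requirement -> generated artefacts -> formal check.

    llm_generate stands in for the call to an LLM backend (e.g. LLaMA or GPT-4o)
    and is assumed to return a pair (source code, SMV model).
    """
    source_code, smv_model = llm_generate(requirement)
    model_path = "generated_model.smv"
    with open(model_path, "w") as fh:
        fh.write(smv_model)
    return source_code, verify_with_nusmv(model_path)
```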
The survey was carried out to gather insights into the challenges and opportunities in verifying AI-generated code, as well as stakeholder perspectives on automated testing and compliance. Its outcomes played a crucial role in shaping the Terms of Reference (ToR) of the workshop, ensuring that its scope, objectives, and key discussion themes align with the most relevant issues identified. The ToR now serves as a strategic framework for stakeholder engagement, guiding discussions on cybersecurity, AI-driven automation, and compliance verification in a way that directly addresses the survey’s findings. The workshop itself provides a platform to refine GALICIA’s approach, incorporating feedback from industry and research communities so that the project remains aligned with emerging challenges in digital resilience.
In a rapidly evolving technological landscape, the challenge of ensuring software correctness remains more relevant than ever. The GALICIA project proposes a novel approach that integrates formal verification methods with generative AI models, specifically large language models (LLMs) such as LLaMA and GPT. The core goal is to verify that AI-generated code complies with functional and security standards and aligns with user-defined requirements.
The methodology aims to:
Development Workflow Overview
GALICIA introduces a structured, iterative workflow that bridges natural language functional requirements (NLFRs) with formal methods:
This process is iterative and includes:
To demonstrate the methodology, GALICIA applied it to a range of well-known problems:
These examples span mathematical constructs, protocol behavior, and web service logic, providing a comprehensive validation field.
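To make the iterative nature of the workflow concrete, the sketch below assumes hypothetical generate, verify, and refine_prompt callables (these names are illustrative, not part of the project); candidates are regenerated, with the verifier’s feedback folded back into the prompt, until one is accepted or an attempt budget runs out.

```python
from typing import Callable, Optional, Tuple

def generate_until_verified(
    requirement: str,
    generate: Callable[[str], str],             # prompt -> candidate code or formal model
    verify: Callable[[str], Tuple[bool, str]],  # candidate -> (verdict, counterexample/report)
    refine_prompt: Callable[[str, str], str],   # (requirement, report) -> revised prompt
    max_attempts: int = 5,
) -> Optional[str]:
    """Iterate LLM generation and formal verification until a candidate is accepted."""
    prompt = requirement
    for _ in range(max_attempts):
        candidate = generate(prompt)
        ok, report = verify(candidate)
        if ok:
            return candidate
        # Feed the verifier's counterexample back into the next generation round.
        prompt = refine_prompt(requirement, report)
    return None  # no verified candidate within the attempt budget
```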
This case study tested the GALICIA workflow on an IoT-focused application: the WiC Advisor. Key features:
Each function was tested for:
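As an indication of what such per-function checks can look like, here is a minimal pytest-style sketch. The function water_saving_estimate and the properties asserted are purely illustrative and are not taken from the WiC Advisor itself.

```python
# Hypothetical LLM-generated function under test (illustrative only).
def water_saving_estimate(soil_moisture: float, forecast_rain_mm: float) -> float:
    """Return an estimated irrigation volume in litres; never negative."""
    needed = max(0.0, (0.35 - soil_moisture) * 100.0 - forecast_rain_mm)
    return round(needed, 2)

def test_output_is_non_negative():
    assert water_saving_estimate(0.10, 0.0) >= 0.0

def test_rain_reduces_irrigation():
    dry = water_saving_estimate(0.10, 0.0)
    rainy = water_saving_estimate(0.10, 10.0)
    assert rainy <= dry

def test_saturated_soil_needs_no_water():
    assert water_saving_estimate(0.90, 0.0) == 0.0
```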
Euclid’s geometry was chosen due to its clear axiomatic structure, making it ideal for formal modeling. The workflow:
The model was also built in Coq and Python.
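As a small illustration, a Python fragment of such a model could look like the sketch below, which uses a plain coordinate representation of points; the encoding is an assumption for this example and not the project’s actual Coq or Python model.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: float
    y: float

def distance(a: Point, b: Point) -> float:
    return math.hypot(b.x - a.x, b.y - a.y)

def collinear(a: Point, b: Point, c: Point, eps: float = 1e-9) -> bool:
    """Three points are collinear iff the triangle they span has zero signed area."""
    area2 = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y)
    return abs(area2) < eps

# Euclid's first postulate, read constructively: two distinct points determine
# a segment of positive length, and further points can be tested for lying on it.
p, q = Point(0, 0), Point(3, 4)
assert p != q and distance(p, q) > 0
assert collinear(p, q, Point(6, 8))  # a third point on the same line
```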
This case exemplified how well-defined domains can benefit from multi-modal formal modeling and cross-language execution.
The third case study modeled the TCP three-way handshake.
The focus is on the TCP three-way handshake sequence:
This sequence ensures reliable communication between the sender and the receiver.
Despite being a toy example, it demonstrated how even network protocols — typically modeled with finite state machines — can be formalized and verified through GALICIA’s approach.
In this case study, the goal was to model and verify the well-known TCP/IP handshake using a combination of natural language specifications, formal modeling, and generative AI code synthesis.
The communication sequence modeled includes:
This behavior ensures reliable data transmission, even in the presence of network faults or delays.
The Event-B model generated by GPT in a co-creative process:
Model Verification
The model is implemented in Python:
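As a hedged illustration of what a Python encoding of the handshake can look like, the sketch below uses a simple transition table; the phase and message names are chosen for this example and are not taken from the project’s implementation.

```python
from enum import Enum, auto

class HandshakePhase(Enum):
    CLOSED = auto()        # no connection attempt yet
    SYN_SEEN = auto()      # client's SYN observed
    SYN_ACK_SEEN = auto()  # server's SYN-ACK observed
    ESTABLISHED = auto()   # client's ACK observed; connection open

# Allowed transitions of the three-way handshake: SYN -> SYN-ACK -> ACK.
TRANSITIONS = {
    (HandshakePhase.CLOSED, "SYN"): HandshakePhase.SYN_SEEN,
    (HandshakePhase.SYN_SEEN, "SYN-ACK"): HandshakePhase.SYN_ACK_SEEN,
    (HandshakePhase.SYN_ACK_SEEN, "ACK"): HandshakePhase.ESTABLISHED,
}

def step(phase: HandshakePhase, message: str) -> HandshakePhase:
    """Advance the handshake; any message outside the expected order is rejected."""
    if (phase, message) not in TRANSITIONS:
        raise ValueError(f"unexpected {message!r} in phase {phase.name}")
    return TRANSITIONS[(phase, message)]

# The nominal exchange drives the connection to ESTABLISHED.
phase = HandshakePhase.CLOSED
for message in ("SYN", "SYN-ACK", "ACK"):
    phase = step(phase, message)
assert phase is HandshakePhase.ESTABLISHED
```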
The GALICIA project demonstrates that formal models can be built from natural-language specifications in days, compared with the months typically required by manual methods. Generative AI, when guided correctly, helps:
The process evolves into a co-creative collaboration between human experts and AI tools. While the current examples are small, they lay the groundwork for scaling the method to more complex systems, a direction in which trustworthy AI-generated software becomes not only possible but standard.