Click the links below to view or download our previous newsletters:
Click the links below to view or download the contributions from each partner:
As part of the GALICIA project, a consultation was launched to gather insights from a diverse range of stakeholders, including innovative SMEs and start-ups, research centers, large companies, industry associations, and innovation hubs, both in Europe and Canada. The objective was to understand their experiences and perspectives on the innovation prospects of the project.
The consultation started at the end of October 2024 and will conclude at the end of February 2025. This report presents the partial results collected up to January 8, 2025. The final survey report will be shared in March 2025.
The KPI4 public report “Platform logical architecture and requirements” details the platform’s architecture, specifying its key components and their features, alongside the software delivery requirements. In particular, this document outlines an application that generates and verifies source code from natural language requirements.
It features a user-friendly interface, a server that orchestrates interactions with Large Language Models (LLMs) such as LLaMA and GPT-4o, and formal verification tools such as NuSMV and PyModel.
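As a rough illustration of how such an orchestration layer can tie these pieces together, the sketch below sends a natural-language requirement to an LLM and passes the resulting model to NuSMV; the client object, prompts, and function names are hypothetical and do not reflect the platform's actual interfaces.

```python
# Illustrative sketch only: llm_client and its complete() method are hypothetical
# stand-ins, not the GALICIA platform API. Assumes the NuSMV binary is on the PATH.
import subprocess
import tempfile

def generate_artifacts(requirement: str, llm_client) -> tuple[str, str]:
    """Ask an LLM for source code and a matching NuSMV model (hypothetical client)."""
    code = llm_client.complete(f"Write Python code implementing: {requirement}")
    model = llm_client.complete(f"Write a NuSMV model with SPEC properties for: {requirement}")
    return code, model

def verify_with_nusmv(smv_model: str) -> bool:
    """Run NuSMV on the generated model and report whether all specifications hold."""
    with tempfile.NamedTemporaryFile("w", suffix=".smv", delete=False) as f:
        f.write(smv_model)
        path = f.name
    result = subprocess.run(["NuSMV", path], capture_output=True, text=True)
    # NuSMV reports "is false" for every violated specification.
    return "is false" not in result.stdout

# requirement = "A traffic light must never show green in both directions at once."
# code, model = generate_artifacts(requirement, llm_client)
# print("Requirement verified:", verify_with_nusmv(model))
```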
The survey was carried out to gather insights into the challenges and opportunities in verifying AI-generated code, as well as stakeholder perspectives on automated testing and compliance. Its outcomes played a crucial role in shaping the workshop's Terms of Reference (ToR), ensuring that its scope, objectives, and key discussion themes align with the most relevant issues identified. The ToR now serves as a strategic framework to guide stakeholder engagement, facilitating discussions on cybersecurity, AI-driven automation, and compliance verification in a way that directly addresses the survey's findings. The workshop serves as a platform to refine GALICIA's approach, incorporating feedback from industry and research communities to ensure that the project remains aligned with emerging challenges in digital resilience.
Mind in a Box, partner of GALICIA, participated in the 7th International Conference on Advances in Signal Processing and Artificial Intelligence (ASPAI 2025), held in Austria from April 8 to 10, 2025. The event brought together leading experts, researchers, and innovators in the field of AI, providing a high-level platform for knowledge exchange and technological dissemination.
As part of its ongoing efforts within the GALICIA project, Mind in a Box used this international stage as a key dissemination opportunity to present the infrastructure supporting GALICIA’s advanced AI capabilities. The presentation focused on two major components:
During the event, a scientific paper titled “Enhancing Real-time Decision-making with Scalable, Safe, and Private LLMOps and Context-aware RAG Workflows” was officially presented and published in the conference proceedings (see p. 120 of the attached document).
This study explores the integration of scalable Large Language Model Operations (LLMOps) with advanced Retrieval-Augmented Generation (RAG) workflows to address challenges in deploying high-performance AI systems. It emphasizes retrieval's role as a critical component in RAG, ensuring relevance and accuracy. The architecture achieves enhanced relevance, efficiency, and scalability using technologies like vLLM for low-latency inference, Nvidia A30 GPUs for accelerated processing, and OpenSearch for hybrid search. Components like hybrid search, reranking, HyDE, and domain-specific embedding adapters optimize the retrieval and generation processes. Kubernetes and Docker facilitate dynamic scaling and resource management, while on-premises deployment prioritizes data privacy. The SciFact dataset is used to evaluate the system's retrieval and generation performance, with metrics like NDCG and MAP assessing effectiveness. The study highlights the incremental improvements gained from the enhanced RAG features and tests scalability under high query loads, demonstrating a robust, efficient solution for sensitive, high-stakes applications.
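For readers unfamiliar with the metric, NDCG compares the discounted gain of the returned ranking against the ideal ordering of the same documents. The short sketch below, which is not the paper's actual evaluation code, shows how NDCG@k is typically computed.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance scores."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg_at_k(ranked_relevances, k):
    """NDCG@k: DCG of the system ranking divided by the DCG of the ideal ranking."""
    ideal = sorted(ranked_relevances, reverse=True)
    ideal_dcg = dcg(ideal[:k])
    return dcg(ranked_relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

# Example: binary relevance of the top 5 documents returned for one query
# print(ndcg_at_k([1, 0, 1, 0, 0], k=5))
```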
This dissemination activity significantly contributed to raising awareness about the GALICIA project among the international AI research community, while also strengthening the scientific credibility of its technical approach.
In a rapidly evolving technological landscape, the challenge of ensuring software correctness remains more relevant than ever. The GALICIA project proposes a novel approach that integrates formal verification methods with generative AI models, specifically large language models (LLMs) such as LLaMA and GPT. The core goal is to verify that AI-generated code complies with functional and security standards and aligns with user-defined requirements.
The methodology aims to:
Development Workflow Overview
GALICIA introduces a structured, iterative workflow that bridges natural language functional requirements (NLFRs) with formal methods:
This process is iterative and includes:
To demonstrate the methodology, GALICIA applied it to several well-known problems:
These examples span mathematical constructs, protocol behavior, and web service logic, providing a comprehensive set of validation cases.
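To give a concrete feel for the iterative workflow, the sketch below shows how verification feedback could drive regeneration of the model and code; the llm and verifier objects and their methods are hypothetical stand-ins rather than GALICIA's actual tooling.

```python
# Illustrative sketch of a generate-verify-refine loop; the llm and verifier
# objects are hypothetical placeholders, not the actual GALICIA components.
def refine_until_verified(requirement: str, llm, verifier, max_rounds: int = 5):
    feedback = ""
    for _ in range(max_rounds):
        prompt = f"Requirement: {requirement}\n{feedback}\nProduce a formal model and matching code."
        model, code = llm.generate(prompt)          # hypothetical LLM call
        report = verifier.check(model)              # e.g. a model-checker run
        if report.all_properties_hold:
            return model, code                      # verified artifacts
        # Feed the counterexample back so the next generation can repair it.
        feedback = f"The previous model violated: {report.counterexample}"
    raise RuntimeError("Requirement could not be verified within the round limit")
```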
This case study tested the GALICIA workflow on an IoT-focused application: the WiC Advisor. Key features:
Each function was tested for:
Euclid’s geometry was chosen due to its clear axiomatic structure, making it ideal for formal modeling. The workflow:
The model was also built in Coq and Python.
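To give a sense of what the Python side of such a model can look like, the sketch below encodes one elementary Euclidean property, the triangle inequality between points in the plane, and checks it over a small set of sample points. It is an illustrative stand-in, not the project's actual model.

```python
import itertools
import math

# Illustrative sketch: a tiny Python counterpart to a formal geometric model,
# checking the triangle inequality on a handful of sample points in the plane.
Point = tuple[float, float]

def distance(a: Point, b: Point) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def triangle_inequality_holds(a: Point, b: Point, c: Point) -> bool:
    """d(a, c) <= d(a, b) + d(b, c) must hold for every triple of points."""
    return distance(a, c) <= distance(a, b) + distance(b, c) + 1e-9  # numeric tolerance

sample_points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.5, 3.5)]
assert all(triangle_inequality_holds(a, b, c)
           for a, b, c in itertools.product(sample_points, repeat=3))
```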
This case exemplified how well-defined domains can benefit from multi-modal formal modeling and cross-language execution.
The third case study modeled the TCP three-way handshake.
The sequence of SYN, SYN-ACK, and ACK messages ensures reliable communication between sender and receiver.
Despite being a toy example, it demonstrated how even network protocols — typically modeled with finite state machines — can be formalized and verified through GALICIA’s approach.
In this case study, the goal was to model and verify the well-known TCP/IP handshake using a combination of natural language specifications, formal modeling, and generative AI code synthesis.
The communication sequence modeled includes:
This behavior ensures reliable data transmission, even in the presence of network faults or delays.
The Event-B model generated by GPT in a co-creative process:
Model Verification
The model is implemented in Python:
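The full listing is not reproduced in this summary. As a minimal illustration of what a Python realization of the handshake model can look like, the sketch below encodes the handshake as a small state machine and checks that only the expected SYN / SYN-ACK exchange reaches the ESTABLISHED state; the identifiers are illustrative, not the project's code.

```python
from enum import Enum, auto

# Illustrative sketch of a three-way-handshake state machine (client side);
# not the actual GALICIA-generated code, which is not reproduced here.
class State(Enum):
    CLOSED = auto()
    SYN_SENT = auto()
    ESTABLISHED = auto()

TRANSITIONS = {
    (State.CLOSED, "send SYN"): State.SYN_SENT,
    (State.SYN_SENT, "recv SYN-ACK"): State.ESTABLISHED,  # the client replies with ACK here
}

def run(events):
    """Replay a sequence of events; any unexpected event resets the connection to CLOSED."""
    state = State.CLOSED
    for event in events:
        state = TRANSITIONS.get((state, event), State.CLOSED)
    return state

assert run(["send SYN", "recv SYN-ACK"]) is State.ESTABLISHED
assert run(["recv SYN-ACK"]) is State.CLOSED  # out-of-order messages never establish
```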
This document presents the development and validation of a basic port scanner implemented in PHP. The scanner is designed to test a predefined list of common ports on a specified IP address, returning a simple text output indicating whether each port is “Open” or “Closed”.
The PHP implementation uses an array to store the common ports, and assigns a status variable to each element of the array. This allows the program to track and report the state (open or closed) of each port individually.
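The PHP source itself is not reproduced in this summary. Purely to illustrate the described logic, a fixed list of common ports each tracked with an "Open" or "Closed" status, a minimal Python equivalent might look like the following; the host and port list are examples.

```python
import socket

# Illustrative Python equivalent of the logic described above (the project's
# implementation is in PHP and is not reproduced here). Port list is an example.
COMMON_PORTS = [21, 22, 25, 53, 80, 110, 143, 443, 3306, 8080]

def scan(host: str, ports=COMMON_PORTS, timeout: float = 0.5) -> dict[int, str]:
    """Try a TCP connection to each port and record its status as 'Open' or 'Closed'."""
    status = {}
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                status[port] = "Open"
        except OSError:
            status[port] = "Closed"
    return status

# for port, state in scan("127.0.0.1").items():
#     print(f"Port {port}: {state}")
```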
The GALICIA project shows that building formal models from natural-language specifications is feasible in days, compared with the months typically required by manual methods. Generative AI, when guided correctly, helps:
The process evolves into a co-creative collaboration between human experts and AI tools. While current examples are small, they lay the groundwork for scaling this method to more complex systems — a direction where trustworthy AI-generated software becomes not only possible, but standard.