Rethinking Secure AI: How PetalGuard Sets a New Benchmark for Federated Learning

Published by: PetalGuard
27 May 2025

Business leaders are wary. They urgently want to reap the benefits of Artificial Intelligence (AI), yet many hesitate because they worry about the security implications. AI models, after all, are only as strong as the data they are trained on, but the very process of training carries the risk of data leaks. Highly regulated and security-conscious companies in sectors such as healthcare, finance, defense and critical infrastructure, in particular, have to strike the right balance between unlocking the potential of AI and safeguarding the privacy of their data.

Traditional approaches to AI training can expose companies to substantial privacy and compliance risks. That’s why AI experts have long favoured more secure methodologies such as federated learning (FL), in which participants collaboratively train a shared model without ever exchanging their raw, sensitive data: each party trains locally, and only model updates are shared.
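To make the mechanism concrete, here is a minimal FedAvg-style sketch in plain NumPy. Names like local_update and federated_round are illustrative only and do not refer to any particular FL framework:

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """One participant trains locally; only the updated weights
    leave the premises, never the raw data."""
    X, y = local_data
    # A single least-squares gradient step stands in for a full
    # local training loop.
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def federated_round(global_weights, datasets):
    """The coordinator averages the participants' local updates."""
    updates = [local_update(global_weights, d) for d in datasets]
    return np.mean(updates, axis=0)

# Three participants, each holding private linear-regression data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
datasets = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    datasets.append((X, y))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, datasets)
print(w)  # converges near true_w; no raw data ever left a client
```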

Various security measures are used to protect the data feeding into the FL system, for example Trusted Execution Environments (TEEs), Homomorphic Encryption (HE), and Single Server Secure Aggregation (SSSA). Each has its merits: TEEs offer efficient hardware-based protection, HE ensures strong data privacy through encrypted computations, and SSSA provides privacy-preserving aggregation. All three approaches, however, come with significant limitations. TEEs require specialized equipment and are vulnerable to hardware exploits; HE is slow, carries significant computational overhead, and struggles in large-scale, real-time scenarios; SSSA, for its part, requires a complex set-up and breaks down when too few participating parties deliver their model updates.
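That last weakness is easy to see in a toy version of pairwise-masked secure aggregation, the idea behind single-server schemes such as the protocol of Bonawitz et al. (2017). The sketch below deliberately omits key agreement and dropout recovery; it is an illustration of the failure mode, not a faithful protocol:

```python
import numpy as np

rng = np.random.default_rng(42)
n_clients, dim = 4, 3
updates = rng.normal(size=(n_clients, dim))  # each client's model update

# Every pair of clients (i, j) with i < j shares one random mask.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    """Client i adds the mask for peers above it, subtracts it for
    peers below, so masks cancel pairwise in the full sum."""
    m = updates[i].copy()
    for j in range(n_clients):
        if j > i:
            m += masks[(i, j)]
        elif j < i:
            m -= masks[(j, i)]
    return m

# With every client present, the pairwise masks cancel exactly:
full_sum = sum(masked_update(i) for i in range(n_clients))
assert np.allclose(full_sum, updates.sum(axis=0))

# If even one client drops out, its masks no longer cancel and the
# server's aggregate is useless -- the dropout problem noted above.
partial_sum = sum(masked_update(i) for i in range(n_clients - 1))
print(np.allclose(partial_sum, updates[:-1].sum(axis=0)))  # False
```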

To unlock the full potential of federated AI in secure and privacy-conscious industries, companies need a solution that is at once robust, scalable and efficient. At the Technology Innovation Institute (TII), part of Abu Dhabi’s Advanced Technology Research Council, we have developed PetalGuard, a highly secure federated learning service that overcomes these limitations through a unique approach based on Multi-Party Computation (MPC). PetalGuard is secure by design: it splits each model update into randomized shares and distributes them across multiple independent aggregators, so that no single aggregator, nor any unauthorized entity, can reconstruct or extract sensitive data. With PetalGuard, data leaks become practically impossible.
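To illustrate the core idea, the sketch below shows additive secret sharing, the MPC building block behind this design. It is a simplified illustration, not PetalGuard’s production protocol, which adds authentication, robustness checks and cryptographically secure randomness:

```python
import numpy as np

rng = np.random.default_rng(7)
n_aggregators = 3

def share(update, n):
    """Split a model update into n random shares that sum back to it.
    Any subset of fewer than n shares reveals nothing useful on its
    own. (Real protocols share over a finite field or ring for
    information-theoretic hiding; floats are used here for clarity.)"""
    shares = [rng.normal(size=update.shape) for _ in range(n - 1)]
    shares.append(update - sum(shares))
    return shares

# Two clients secret-share their updates across three aggregators.
u1 = np.array([0.5, -1.2, 3.0])
u2 = np.array([1.0, 0.7, -0.4])
shares1 = share(u1, n_aggregators)
shares2 = share(u2, n_aggregators)

# Each aggregator sees only one noise-like share per client and
# locally sums the shares it received.
partials = [shares1[k] + shares2[k] for k in range(n_aggregators)]

# Recombining the partial sums yields the aggregate update -- and
# nothing about any individual client's contribution.
aggregate = sum(partials)
assert np.allclose(aggregate, u1 + u2)
print(aggregate)
```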

Our MPC-based approach is significantly more efficient than Homomorphic Encryption, making it faster to train a model and to iterate on updates. Unlike SSSA, PetalGuard also scales seamlessly without loss of performance or security. Crucially, PetalGuard’s decentralized architecture provides stronger protection than TEEs, as it eliminates the dependence on specialized hardware, which is often vulnerable to exploits and costly to replace.

The importance of secure AI training isn’t just theoretical. The consequences of poor data governance during AI development are already being felt. For example, in 2023 it was reported that Microsoft’s AI research team accidentally exposed 38 terabytes of private data, including internal Teams messages and backups of employee workstations. The exposure occurred while the team was publishing open-source training data for AI models, via a misconfigured storage access token.

This incident shows how even well-resourced companies can run massive risks when handling sensitive data for AI training, underlining the critical need for more robust solutions like PetalGuard.

With our unique MPC-based approach, we offer scalable, privacy-preserving federated learning for enterprise AI models. Finally, even companies in the most security- and compliance-conscious industries can confidently build and deploy their AI capabilities.

PetalGuard is the new benchmark for safer, more scalable and more responsible AI innovation.