Computed tomography (CT) is an important diagnostic tool in clinical practice, widely used for disease screening and diagnosis. However, CT scans rely on X-rays, which expose patients to ionizing radiation and potential health risks. Low-dose CT protocols reduce this exposure but often degrade image quality, which in turn compromises diagnostic accuracy. Although recent deep learning methods can markedly improve low-dose reconstruction quality, most rely on large centralized paired datasets collected under diverse vendors and scanning protocols, an approach constrained in medical imaging by privacy and regulatory requirements as well as acquisition costs and manual effort. Moreover, heterogeneity across multi-center data, stemming from differences in hardware, scanning parameters, and anatomical regions, further undermines the training efficiency and cross-domain generalization of centralized models. To overcome these limitations, federated learning (FL) has emerged as a promising paradigm that enables model training across clients without sharing data. “Nevertheless, when cross-institutional distributions differ substantially, simple parameter averaging often fails to accommodate multiple tasks and imaging geometries, and performance degrades under such heterogeneity,” said Hao Wang, a researcher at Southern Medical University. “To overcome these challenges, we propose FedM2CT, a federated metadata-constrained method with mutual learning for all-in-one CT reconstruction. This method enables simultaneous reconstruction of multivendor CT images with different imaging geometries and sampling protocols in one framework.”
All-in-one CT reconstruction, that is, reconstructing from multiple imaging geometries and sampling protocols within one model, is a challenging task for existing centralized deep learning (DL)-based CT reconstruction methods. In this work, the authors develop a federated metadata-constrained iRadonMAP framework with mutual learning (FedM2CT). FedM2CT consists of three modules: TS-iRadonMAP, CPML, and FMDL. TS-iRadonMAP performs the local CT image reconstruction task using a private model with any architecture, facilitating information exchange with the server. CPML shares the knowledge in the local parameter-sharing submodule of TS-iRadonMAP via mutual learning. High-quality metadata are collected on the server to train the metamodel. FMDL adaptively combines the parameters of the server-side metamodel with the parameters of CPML to mitigate the effect of data heterogeneity.
This section details the design of each module. The backbone adopts a dual-domain iRadonMAP composed of three cooperating parts: (1) a sinogram-domain subnetwork, (2) a learnable back-projection module, and (3) an image-domain subnetwork, corresponding to physics-consistent modeling from projection space to image space plus residual refinement. Because the back-projection submodule is strongly affected by imaging geometry and sampling protocols, it must be made task-adaptive in a federated setting. To protect privacy and combat heterogeneity, TS-iRadonMAP keeps the sinogram-domain subnetwork and the back-projection module local (excluded from federated sharing); only the image-domain subnetwork participates in dynamic updates and information exchange under CPML. Clients may adopt different local architectures as long as the input/output dimensions match. To “indirectly” share key knowledge across clients, CPML introduces a locally hosted shared model that performs mutual learning with the image-domain subnetwork of TS-iRadonMAP, thereby improving exchange efficiency while preserving privacy. Its core is conditional prompting: each client’s imaging-geometry and scanning-protocol parameters are fed into a shallow MLP to produce two sets of coefficients (V₁, V₂) that affinely modulate feature maps. This drives both the shared model and the local subnetwork with “task conditions,” enabling adaptation to the sampling setup. Unlike traditional FL that simply averages parameters, FedM2CT maintains on the server multi-source, highly diverse paired metadata (paired low-dose/normal-dose images) to additionally train a “meta-model.” The parameters of this meta-model are then aggregated—via weighted fusion—with the CPML parameters uploaded by each client and broadcast back, thereby absorbing cross-protocol and cross-geometry priors at the global level and mitigating heterogeneity. 
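The conditional-prompting idea can be illustrated with a minimal, framework-free sketch. All weights, dimensions, and function names below are illustrative assumptions, not the authors' implementation: a shallow MLP maps a client's geometry/protocol condition vector to two coefficient vectors (V₁, V₂) that affinely modulate a feature vector, in the spirit of FiLM-style conditioning.

```python
import math

def prompt_mlp(cond, w_hidden, w_out):
    """Shallow MLP: condition vector (geometry/protocol parameters)
    -> 2*d modulation coefficients (illustrative weights, no biases)."""
    hidden = [math.tanh(sum(c * w for c, w in zip(cond, row))) for row in w_hidden]
    return [sum(h * w for h, w in zip(hidden, row)) for row in w_out]

def conditional_prompt(features, cond, w_hidden, w_out):
    """Affine modulation f' = v1 * f + v2, with (v1, v2) derived from
    the client's imaging condition; d = len(features)."""
    d = len(features)
    coeffs = prompt_mlp(cond, w_hidden, w_out)
    v1, v2 = coeffs[:d], coeffs[d:]
    return [a * f + b for f, a, b in zip(features, v1, v2)]
```

In the actual framework the same condition-derived coefficients would drive both the shared model and the local image-domain subnetwork, so that one set of weights can adapt to different sampling setups.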
The aggregation uses a weighting factor that balances the meta-model's parameters against the clients' parameters, together with per-client weights. The training pipeline proceeds as follows: (1) each client performs task-specific reconstruction and updates locally with TS-iRadonMAP; (2) mutual learning and conditional prompting are conducted locally via CPML; (3) the server trains the meta-model on the metadata; (4) parameters are uploaded, fused via weighted aggregation, and broadcast back to the clients for the next round.
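The server-side fusion step can be sketched as follows. The function name, the blending factor `alpha`, and the flat parameter lists are assumptions for illustration; the paper's exact weighting scheme may differ.

```python
def fuse_parameters(meta_params, client_params, alpha, client_weights=None):
    """Blend meta-model parameters with a weighted average of client
    parameters: alpha * meta + (1 - alpha) * weighted_client_average."""
    n = len(client_params)
    if client_weights is None:
        client_weights = [1.0 / n] * n  # uniform per-client weights by default
    fused = []
    for i, m in enumerate(meta_params):
        avg = sum(w * p[i] for w, p in zip(client_weights, client_params))
        fused.append(alpha * m + (1.0 - alpha) * avg)
    return fused
```

Setting `alpha` closer to 1 leans on the cross-protocol prior learned from the server metadata, while `alpha` near 0 reduces the scheme to plain weighted client averaging.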
The experimental results have demonstrated the effectiveness of the proposed framework. Across multiple scenarios, FedM2CT consistently outperforms competing methods on objective metrics: its average PSNR/SSIM is markedly higher than traditional filtered back projection (FBP), while RMSE is significantly lower (all at P < 0.001). Modulation transfer function (MTF) curves are also higher overall, indicating better spatial resolution. In “partially supervised/partially unsupervised” settings (e.g., only a few clients have paired supervision), FedM2CT achieves superior quantitative results (lower FID, lower LPIPS) and qualitative outcomes (smaller ROI errors, stronger artifact suppression) compared with federated baselines. Compared with frequency-domain decomposition methods that rely on paired data (e.g., FedFDD), it better balances denoising and detail preservation on unsupervised clients. These gains stem from the server-side prior learned from metadata and the knowledge transfer enabled by mutual learning. Subjective scores from three radiologists show that FedM2CT attains higher visual-quality ratings across multiple client scenarios; its texture detail, noise control, and structural fidelity are closer to reference images.
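For readers unfamiliar with the reported metrics, RMSE and PSNR relate directly: PSNR is the peak intensity over RMSE on a log scale, so lower RMSE implies higher PSNR. A minimal sketch (function names and the `max_val=1.0` normalization are illustrative assumptions):

```python
import math

def rmse(ref, img):
    """Root-mean-square error between a reference and a test image (flattened)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref))

def psnr(ref, img, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20.0 * math.log10(max_val / e)
```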
Although FedM2CT improves imaging performance for all-in-one reconstruction, four limitations arise when applying it directly to realistic medical scenes. First, FMDL requires collecting diverse CT data on the server. The varied distributions of these data, caused by differences in imaging geometries, sampling protocols, and subject cohorts, may violate privacy-preserving learning requirements and pose a bottleneck for practical training in real-world healthcare settings. Second, this study demonstrates FedM2CT's performance through simulation experiments only, making it a retrospective analysis. Third, the method incurs slightly higher local computational costs than traditional FL, although these can be mitigated through model compression and edge-computing strategies for local aggregation. Finally, some parameters in FedM2CT require careful tuning; parameter selection remains an open challenge for all CT reconstruction tasks, and automated hyperparameter optimization and domain adaptation techniques could address it systematically. Advanced network designs and DL-based models, such as meta-learning models and large language models, could also be incorporated. “Specifically, we can use large language models to modulate the intermediate features in FedM2CT, thus enabling personalized CT reconstruction. Deploying these advanced models in our FedM2CT could potentially further improve reconstruction performance, representing an important direction for the future,” said Hao Wang.
Authors of the paper include Hao Wang, Xiaoyu Zhang, Hengtao Guo, Xuebin Ren, Shipeng Wang, Fenglei Fan, Jianhua Ma, and Dong Zeng.
This work was supported in part by the National Key R&D Program of China under Grants 2024YFA1012000 and 2024YFC2417800 and the National Natural Science Foundation of China under Grant U21A6005.
The paper, “Federated Metadata-Constrained iRadonMAP Framework with Mutual Learning for All-in-One Computed Tomography Imaging,” was published in the journal Cyborg and Bionic Systems on Aug. 27, 2025, at DOI: 10.34133/cbsystems.0376.
 END