Node-based BNNs assign latent noise variables to the hidden nodes of a neural network rather than to its weights. By restricting inference to these node-based latent variables, node stochasticity greatly reduces the dimensionality of the posterior. This makes BNN inference cheap to compute and to communicate, with a communication cost comparable to optimization-based FL methods. It would be interesting to see whether such node-based BNNs can outperform point-estimate NNs in federated learning.
A starting point could be this ICML 2022 paper: "Tackling covariate shift with node-based Bayesian neural networks".
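To make the dimensionality argument concrete, here is a minimal numpy sketch of one node-based layer: the weights W and bias b are point estimates, and only a multiplicative Gaussian noise variable per hidden node (parameterized by mu_z, sigma_z) is latent. The function name and parameterization are illustrative assumptions, not the construction from the cited paper.

```python
import numpy as np

def node_based_layer(x, W, b, mu_z, sigma_z, rng, sample=True):
    """Linear layer with multiplicative node-level Gaussian noise.

    Only (mu_z, sigma_z) -- one pair per hidden node -- are latent
    variational parameters; W and b remain point estimates, so the
    posterior has 2 * n_hidden dimensions instead of
    n_in * n_hidden + n_hidden.
    """
    h = x @ W + b
    if sample:
        # one noise sample per hidden node, shared across inputs
        z = mu_z + sigma_z * rng.standard_normal(mu_z.shape)
    else:
        z = mu_z  # deterministic (mean) forward pass
    return h * z

rng = np.random.default_rng(0)
n_in, n_hidden = 10, 4
W = rng.standard_normal((n_in, n_hidden))
b = np.zeros(n_hidden)
mu_z = np.ones(n_hidden)        # noise means, initialized at 1
sigma_z = 0.1 * np.ones(n_hidden)  # noise scales

x = rng.standard_normal((2, n_in))
out = node_based_layer(x, W, b, mu_z, sigma_z, rng)
print(out.shape)  # (2, 4)
```

In an FL round, a client would then only need to communicate W, b, and the 2 * n_hidden noise parameters, matching the payload of a deterministic network up to a negligible per-node overhead.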