Abstract

This study introduces a privacy-preserving framework for distributed deep fuzzy learning. Treating the training data as private, the problem of learning local deep fuzzy models is considered in a distributed setting under the differential privacy framework. A local deep fuzzy model, formed by a composition of a finite number of Takagi-Sugeno type fuzzy filters, is learned using variational Bayesian inference. This paper suggests an optimal (ε, δ)-differentially private noise-adding mechanism that yields a multi-fold reduction in noise magnitude over the classical Gaussian mechanism and thus leads to increased utility for a given level of privacy. Further, the robustness offered by rule-based fuzzy systems is leveraged to alleviate the effect of the added data noise on utility. An architecture for distributed differentially private learning is suggested in which a privacy wall separates the private local training data from the globally shared data, and fuzzy sets and fuzzy rules are used to robustly aggregate the local deep fuzzy models into the global model. The privacy wall uses noise-adding mechanisms to attain differential privacy for each participant's private training data, so adversaries have no direct access to the training data.
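For context on the baseline that the proposed mechanism is compared against, the sketch below shows the classical Gaussian mechanism for (ε, δ)-differential privacy, which calibrates the noise standard deviation to the query sensitivity. The function name and parameters are illustrative only; the paper's optimal noise-adding mechanism is not reproduced here.

```python
import numpy as np

def classical_gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release `value` with (epsilon, delta)-differential privacy via the
    classical Gaussian mechanism (valid for 0 < epsilon < 1).

    Noise scale: sigma = sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon.
    The paper's proposed mechanism is claimed to need substantially less
    noise for the same (epsilon, delta); it is not sketched here.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    value = np.asarray(value, dtype=float)
    return value + rng.normal(0.0, sigma, size=value.shape)

# Example (hypothetical values): privately release a local model's
# parameter vector whose l2-sensitivity is bounded by 1.0.
theta = np.array([0.7, -1.2, 0.3])
private_theta = classical_gaussian_mechanism(theta, sensitivity=1.0,
                                             epsilon=0.5, delta=1e-5)
```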
