Wednesday, 26 September 2018

Tunable Measures for Information Leakage and Applications to Privacy-Utility Tradeoffs. (arXiv:1809.09231v1 [cs.IT])

In the first half of the paper, we introduce a tunable measure for information leakage called \textit{maximal $\alpha$-leakage}. This measure quantifies the maximal gain of an adversary in refining a tilted version of its posterior belief of any (potentially random) function of a data set, conditioned on a released data set. The choice of $\alpha$ determines the adversarial action, ranging from refining a belief for $\alpha = 1$ to guessing the most likely value under its posterior for $\alpha = \infty$; for these extremal values, maximal $\alpha$-leakage simplifies to mutual information and maximal leakage, respectively. For $\alpha \in (1,\infty)$, the measure is shown to equal the Arimoto channel capacity of order $\alpha$. We show that maximal $\alpha$-leakage satisfies data processing inequalities and sub-additivity (a composition property).

In the second half of the paper, we use maximal $\alpha$-leakage as the privacy measure and study the problem of data publishing with privacy guarantees, wherein the utility of the released data is ensured via a \emph{hard distortion} constraint. Unlike average distortion, hard distortion provides a deterministic guarantee of fidelity: the distortion $d(X,\hat{X})$ between the original data $X$ and the released data $\hat{X}$ must satisfy $d(X,\hat{X}) \le D$ with probability one, not merely in expectation. We show that under a hard distortion constraint, both the optimal mechanism and the optimal privacy-utility tradeoff are invariant for any $\alpha > 1$, and the tunable leakage measure effectively behaves only as one of its two extrema, i.e., mutual information for $\alpha = 1$ and maximal leakage for $\alpha = \infty$.
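To make the endpoint claims concrete, here is a minimal numerical sketch (not code from the paper; the toy channel, the function names, and the choice of a fixed uniform input are assumptions for illustration). It evaluates the Arimoto mutual information of order $\alpha$ for a small binary channel, which lower-bounds the Arimoto channel capacity that maximal $\alpha$-leakage equals, and compares it against the $\alpha = 1$ endpoint (Shannon mutual information) and the $\alpha = \infty$ endpoint (maximal leakage, $\log \sum_y \max_x P_{Y|X}(y|x)$); with a uniform input the large-$\alpha$ limit matches the maximal-leakage value exactly.

    # Illustrative sketch under the assumptions stated above; all quantities in nats.
    import numpy as np

    def shannon_mi(p_x, W):
        """Shannon mutual information I(X;Y) for input p_x and channel W[x, y]."""
        p_xy = p_x[:, None] * W              # joint distribution P_{XY}
        p_y = p_xy.sum(axis=0)               # marginal of Y
        mask = p_xy > 0
        ratio = p_xy[mask] / (p_x[:, None] * p_y[None, :])[mask]
        return float((p_xy[mask] * np.log(ratio)).sum())

    def arimoto_mi(p_x, W, alpha):
        """Arimoto mutual information of order alpha (alpha > 0, alpha != 1)."""
        p_xy = p_x[:, None] * W
        h_alpha = np.log((p_x ** alpha).sum()) / (1 - alpha)        # Renyi entropy H_alpha(X)
        h_cond = (alpha / (1 - alpha)) * np.log(
            ((p_xy ** alpha).sum(axis=0) ** (1 / alpha)).sum())     # Arimoto conditional entropy
        return float(h_alpha - h_cond)

    def maximal_leakage(W):
        """Maximal leakage log sum_y max_x W[x, y], the alpha -> infinity endpoint."""
        return float(np.log(W.max(axis=0).sum()))

    # Toy binary channel: row x is the distribution P_{Y|X}(. | x).
    W = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    p_x = np.array([0.5, 0.5])

    print("Shannon MI        =", shannon_mi(p_x, W))
    for a in (1.001, 2.0, 10.0, 1000.0):
        print(f"Arimoto MI, alpha={a:>7} =", arimoto_mi(p_x, W, a))
    print("maximal leakage   =", maximal_leakage(W))

Running this, the Arimoto value starts near the Shannon mutual information (roughly 0.275 nats) for $\alpha$ close to 1 and approaches the maximal leakage $\log 1.7 \approx 0.531$ nats as $\alpha$ grows, illustrating numerically how the two extremal leakage measures bracket the tunable family.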



from cs updates on arXiv.org https://ift.tt/2IePoZK
