Consider the standard two-party communication model. This paper, following up on [Y. Dagan, Y. Filmus, H. Hatami, and Y. Li, Trading information complexity for error, Theory of Computing 14 (2018), no. 6, 1-73], studies the dependence of information complexity on the error allowed in computing a function. Two of our main results on prior-free information complexity are:
For both internal and external information complexity, we show that to compute any function with error $1/2-\epsilon$, at least $\Omega(\epsilon^2)$ information must be revealed, while $O(\epsilon)$ information always suffices. Both bounds are tight, as witnessed by examples from [M. Braverman and A. Moitra, An information complexity approach to extended formulations, STOC, ACM, 2013].
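Writing $\mathrm{IC}(f,\alpha)$ for the prior-free (internal or external) information complexity of $f$ at error $\alpha$ (notation assumed here only for illustration; the precise quantification over functions is as in the paper), this result can be summarized schematically as
\[
  \Omega(\epsilon^2) \;\le\; \mathrm{IC}\!\left(f,\tfrac{1}{2}-\epsilon\right) \;\le\; O(\epsilon).
\]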
For external information complexity, we show that to compute any function with small error $\epsilon>0$, one can save, relative to computing the function with zero error, at least $\Omega(\epsilon)$ and at most $O(h(\sqrt{\epsilon}))$ in external information cost, where $h(\cdot)$ is the binary entropy function. We do not know whether either bound is tight. However, for prior-free external information complexity over product distributions only, the lower bound can be improved to $\Omega(h(\epsilon))$, with a simpler proof than the one in [Y. Dagan, Y. Filmus, H. Hatami, and Y. Li, Trading information complexity for error, Theory of Computing 14 (2018), no. 6, 1-73].
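Here $h(x)=-x\log_2 x-(1-x)\log_2(1-x)$ is the binary entropy function. In the same assumed notation, with $\mathrm{IC}^{\mathrm{ext}}(f,\epsilon)$ denoting the prior-free external information complexity of $f$ at error $\epsilon$, the saving result can be sketched as
\[
  \Omega(\epsilon) \;\le\; \mathrm{IC}^{\mathrm{ext}}(f,0)-\mathrm{IC}^{\mathrm{ext}}(f,\epsilon) \;\le\; O\!\left(h\!\left(\sqrt{\epsilon}\right)\right).
\]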