Tuesday, 24 April 2018

Online Improper Learning with an Approximation Oracle. (arXiv:1804.07837v1 [cs.LG])

We revisit the question of reducing online learning to approximate optimization of the offline problem. In the full-information setting, we give two algorithms with near-optimal performance: they guarantee optimal regret and require only poly-logarithmically many calls to the approximation oracle per iteration. Furthermore, these algorithms apply to the more general setting of improper learning problems. In the bandit setting, our algorithm also significantly improves on the best previously known oracle complexity while maintaining the same regret.
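To make the "online learning with an approximation oracle" setup concrete, here is a minimal sketch of the classic oracle-based reduction in the style of Follow-the-Perturbed-Leader, where the learner touches the decision set only through an offline optimization oracle. This is an illustrative baseline, not the paper's algorithms (which achieve optimal regret with only poly-logarithmically many oracle calls per iteration, even when the oracle is only approximate); the function `approx_oracle`, the perturbation scale `eta`, and the toy decision set are assumptions introduced here for illustration.

```python
# Generic Follow-the-Perturbed-Leader loop driven by an offline oracle.
# NOTE: this is a sketch of the standard reduction, not the algorithms
# from arXiv:1804.07837; names and parameters here are illustrative.
import numpy as np


def fpl_with_oracle(approx_oracle, losses, eta=0.1, rng=None):
    """Online linear optimization via an offline (approximation) oracle.

    approx_oracle: maps a loss vector c to a decision x that (approximately)
                   minimizes <c, x> over the offline decision set.
    losses:        array of shape (T, d); losses[t] is revealed after round t.
    Returns the total loss incurred; makes exactly one oracle call per round.
    """
    rng = np.random.default_rng() if rng is None else rng
    T, d = losses.shape
    cumulative = np.zeros(d)      # sum of loss vectors seen so far
    total_loss = 0.0
    for t in range(T):
        # Perturb the cumulative losses, then ask the oracle for a minimizer.
        perturbation = rng.exponential(scale=1.0 / eta, size=d)
        x_t = approx_oracle(cumulative - perturbation)   # one oracle call
        total_loss += float(losses[t] @ x_t)             # incur this round's loss
        cumulative += losses[t]                          # full-information feedback
    return total_loss


# Toy usage: decision set = standard basis vectors (the experts setting),
# with an exact linear minimizer standing in for the approximation oracle.
if __name__ == "__main__":
    d, T = 5, 1000
    oracle = lambda c: np.eye(d)[np.argmin(c)]
    losses = np.random.default_rng(0).uniform(size=(T, d))
    print("FPL total loss:", fpl_with_oracle(oracle, losses))
```

With an exact oracle this reduction already gives the classical FPL regret guarantee; the difficulty the paper addresses is that an oracle returning only an approximate minimizer breaks that guarantee unless the reduction is redesigned, and doing so with few oracle calls and optimal regret is the paper's contribution.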



from cs updates on arXiv.org https://ift.tt/2K7sFQ9