Slides

Short Abstract: We study convex optimization with zero-order information, where only function evaluations are available. We focus on the stochastic optimization setting and present convergence rates of zero-order algorithms for both smooth and non-smooth convex functions. The proposed zero-order algorithm is shown to be at most O(\sqrt{d}) slower than first-order algorithms for smooth optimization; in the non-smooth case, it is at most O(log d) slower than in the smooth case. Lower bounds are also provided, verifying that the proposed method is minimax optimal in this setting.
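To illustrate the kind of method the abstract describes, here is a minimal sketch of stochastic zero-order descent with a two-point gradient estimate. This is an illustrative sketch only, not the paper's exact algorithm; the function name, step-size schedule, smoothing radius `delta`, and the test objective are all assumptions.

```python
import numpy as np

def two_point_zero_order_sgd(f, x0, steps=2000, delta=1e-4, lr=0.05, seed=0):
    """Minimize f using only pairs of function evaluations.

    Illustrative sketch: two function values per iteration give a
    randomized finite-difference gradient estimate (cf. the
    "two function evaluations" in the referenced paper). All
    hyperparameter choices here are assumptions, not the paper's.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    for t in range(1, steps + 1):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # random direction on the unit sphere
        # symmetric two-point gradient estimate along direction u
        g = d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
        x = x - (lr / np.sqrt(t)) * g  # decaying step size
    return x

# usage: minimize a simple smooth convex quadratic in d = 5 dimensions
f = lambda x: 0.5 * np.dot(x, x)
x_star = two_point_zero_order_sgd(f, x0=np.ones(5))
```

Only `f(x + delta*u)` and `f(x - delta*u)` are ever queried, so the method applies when gradients are unavailable; the dimension factor `d` in the estimator is where the O(\sqrt{d}) gap relative to first-order methods enters.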

Reference: Duchi, John C., et al. "Optimal rates for zero-order convex optimization: The power of two function evaluations." IEEE Transactions on Information Theory 61.5 (2015): 2788-2806.