1. Stochastic Smoothed Gradient Descent Ascent for Federated Minimax Optimization (arXiv)
Authors : Wei Shen, Minhui Huang, Jiawei Zhang, Cong Shen
Abstract :
2. Minimax Optimal Submodular Optimization with Bandit Feedback (arXiv)
Authors : Artin Tajdini, Lalit Jain, Kevin Jamieson
Abstract : We consider maximizing a monotone, submodular set function f : 2^[n] → [0,1] under stochastic bandit feedback. Specifically, f is unknown to the learner, but at each time t = 1, …, T the learner chooses a set S_t ⊂ [n] with |S_t| ≤ k and receives reward f(S_t) + η_t, where η_t is mean-zero sub-Gaussian noise. The objective is to minimize the learner’s regret over T rounds with respect to a (1 − e^{−1})-approximation of the maximum f(S*) with |S*| = k, obtained through greedy maximization of f. To date, the best regret bound in the literature scales as k·n^{1/3}·T^{2/3}, and by trivially treating every size-k set as a unique arm one deduces that √(C(n,k)·T) is also achievable, where C(n,k) denotes the binomial coefficient. In this work, we establish the first minimax lower bound for this setting, which scales like min_{i ≤ k} ( i·n^{1/3}·T^{2/3} + √(C(n, k−i)·T) ). Moreover, we propose an algorithm capable of matching this lower bound.
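The benchmark in this abstract is the greedy maximization of f, which for a monotone submodular function achieves the classic (1 − 1/e) approximation. A minimal sketch of that greedy routine, using a coverage function as a stand-in for f and an optional Gaussian noise term to mimic the paper's bandit feedback (the function names and the coverage example are illustrative assumptions, not the authors' algorithm):

```python
import random

def coverage(sets, chosen):
    """Coverage value in [0, 1]: fraction of the universe covered by the
    union of the chosen subsets. Coverage is monotone and submodular."""
    universe = set().union(*sets) if sets else set()
    covered = set().union(*(sets[i] for i in chosen)) if chosen else set()
    return len(covered) / max(len(universe), 1)

def greedy_max(sets, k, noise=0.0, rng=None):
    """Greedily pick k indices, each round adding the candidate whose
    (possibly noisy) evaluation is largest. With exact evaluations
    (noise=0) this is the standard (1 - 1/e)-approximate greedy;
    noise > 0 crudely mimics the sub-Gaussian bandit feedback."""
    rng = rng or random.Random(0)
    chosen = []
    for _ in range(k):
        best, best_val = None, float("-inf")
        for i in range(len(sets)):
            if i in chosen:
                continue
            # Noisy evaluation of f on the candidate set, as in f(S_t) + eta_t.
            val = coverage(sets, chosen + [i]) + rng.gauss(0, noise)
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
    return chosen
```

For example, with `sets = [{1,2,3}, {3,4}, {5}, {1}]` and k = 2, noiseless greedy first picks the largest set and then the one with the biggest marginal gain. In the bandit setting of the paper, only the noisy values f(S_t) + η_t are observable, which is what drives the regret bounds above.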