Abstract:
With growing applications in Machine Learning, Game Theory, and the training of Generative Adversarial Networks (GANs), solving min-max problems and establishing their convergence properties have received significant attention. Gradient Descent Ascent (GDA) and Optimistic Gradient Descent Ascent (OGDA) are popular first-order algorithms for solving saddle point problems. To address the oscillating convergence behavior of these algorithms, we propose a dynamic method for solving bilinear problems. Our method is characterized by a novel mechanism that dynamically chooses the coordinates to be updated at every iteration, guaranteeing more stable and efficient convergence. Motivated by OGDA, we propose an algorithm, denoted Momentum Block-based Gradient Descent Ascent (MBGDA), that uses the momentum at every iteration to determine the block of coordinates to which a gradient step is applied. We present several empirical results demonstrating the superior performance of our proposed algorithm compared to existing first-order methods for solving bilinear saddle point problems. More specifically, MBGDA exhibits more stable convergence behavior and achieves a higher probability of convergence in non-convex non-concave settings.
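To make the mechanism concrete, the following is a minimal sketch of momentum-guided block selection on the bilinear problem min_x max_y x^T A y. The block size k, momentum decay beta, and step size eta are illustrative assumptions, not the paper's actual choices; the abstract does not specify these details.

```python
import numpy as np

def mbgda_sketch(A, steps=2000, eta=0.05, beta=0.9, k=2, seed=0):
    """Hedged sketch of momentum-based block GDA on min_x max_y x^T A y.

    At each iteration, only the k coordinates of x and y whose momentum
    accumulators have the largest magnitude receive a gradient step.
    Hyperparameters are assumptions for illustration only.
    """
    rng = np.random.default_rng(seed)
    n, m = A.shape
    x, y = rng.standard_normal(n), rng.standard_normal(m)
    mx, my = np.zeros(n), np.zeros(m)  # momentum accumulators
    for _ in range(steps):
        gx = A @ y      # gradient w.r.t. x (descent direction)
        gy = A.T @ x    # gradient w.r.t. y (ascent direction)
        mx = beta * mx + (1 - beta) * gx
        my = beta * my + (1 - beta) * gy
        bx = np.argsort(-np.abs(mx))[:k]  # block: largest-momentum coords
        by = np.argsort(-np.abs(my))[:k]
        x[bx] -= eta * gx[bx]  # descent step on the selected block of x
        y[by] += eta * gy[by]  # ascent step on the selected block of y
    return x, y
```

The key design point, as described in the abstract, is that the set of updated coordinates changes dynamically across iterations rather than being fixed in advance, with momentum serving as the selection signal.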