Opinion formation, stochastic gradient descent, and gradient-like systems

Speaker

Fang-Yi Yu
U Michigan

Host

Govind Ramnarayan, Quanquan Liu, Sitan Chen
MIT CSAIL
Abstract: We study opinion dynamics on networks with two communities. Each
node has one of two opinions and updates its opinion as a ``majority-like''
function of the frequency of opinions among its neighbors. The networks we
consider are weighted graphs consisting of two equally sized communities,
where intracommunity edges have weight $p$ and intercommunity edges have
weight $q$. Thus $p$ and $q$ parameterize the connectivity within and
between the two communities.
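
As a concrete illustration of the model, here is a minimal Python sketch of the asynchronous dynamics under a mean-field simplification (every pair of nodes is joined with the appropriate weight, and a node's own opinion is not excluded from its neighborhood frequency). The smooth S-curve rule smooth_majority and all constants below are illustrative assumptions, not the paper's exact setting.

import random

def simulate(n, p, q, f, steps, init):
    """Asynchronous opinion dynamics on a weighted two-community graph
    (mean-field sketch).

    n     -- total number of nodes, assumed even (two communities of size n // 2)
    p, q  -- intra- and inter-community edge weights
    f     -- "majority-like" rule: probability of adopting opinion 1 as a
             function of the weighted frequency of 1s in the neighborhood
    steps -- cap on the number of single-node updates
    init  -- pair (k0, k1): initial number of 1-opinion nodes in each community
    Returns the number of updates until consensus, or None if the cap is hit.
    """
    half = n // 2
    ones = list(init)                                 # count of 1-opinions per community
    for t in range(steps):
        c = random.randrange(2)                       # pick a uniformly random node:
        holds_one = random.random() < ones[c] / half  # its community and current opinion
        # weighted frequency of opinion 1 among the chosen node's neighbors
        freq = (p * ones[c] + q * ones[1 - c]) / ((p + q) * half)
        new = 1 if random.random() < f(freq) else 0
        ones[c] += new - int(holds_one)
        if ones[0] + ones[1] in (0, n):               # all 0s or all 1s: consensus
            return t
    return None

def smooth_majority(x):
    # one possible "majority-like" rule: a smooth S-curve with f(0) = 0,
    # f(1/2) = 1/2, f(1) = 1, so a node adopts the neighborhood majority
    # with probability increasing in the majority's margin
    return 3 * x * x - 2 * x ** 3

n = 500
print(simulate(n, p=1.0, q=0.3, f=smooth_majority, steps=200_000,
               init=(140, 140)))   # both communities start at 56% opinion 1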

We prove a dichotomy theorem about the interaction of two factors: 1) the
``majority-like'' update function, and 2) the level of intercommunity
connectivity. For each setting of these parameters, we show that either
the system quickly converges to consensus in time $\Theta(n \log n)$ with
high probability, or the system can get ``stuck'' and take time
$2^{\Theta(n)}$ to reach consensus.
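
As a purely hypothetical experiment (not the paper's argument), one can reuse simulate and smooth_majority from the sketch above, start from a fully polarized state, and vary the intercommunity weight $q$; under these illustrative assumptions, runs with small $q$ typically exhaust the step budget, while runs with larger $q$ reach consensus quickly.

# Hypothetical sweep: community 0 starts all 1s, community 1 all 0s.
# For small q the polarized state can be locally attracting ("stuck");
# for larger q the dynamics typically escape and reach consensus.
n = 500
for q in (0.05, 0.6):
    t = simulate(n, p=1.0, q=q, f=smooth_majority, steps=500_000,
                 init=(n // 2, 0))
    if t is None:
        print(f"q = {q}: no consensus within the step budget")
    else:
        print(f"q = {q}: consensus after {t} single-node updates")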

Technically, we obtain the fast convergence result by exploiting a
connection between a family of reinforced random walks and the dynamical
systems literature. Our main result shows that if the system is a
reinforced random walk with a gradient-like function, then it converges to
an arbitrary neighborhood of a locally attracting point in $O(n\log n)$
time with high probability. This result adds to the recent literature on
saddle-point analysis and shows that a large family of stochastic gradient
descent algorithms converges to local minima in $O(n\log n)$ steps when
the step size is $O(1/n)$.
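
To make the last claim concrete, here is a hypothetical one-dimensional illustration (the objective, noise model, and constants are my own choices, not the paper's setting): stochastic gradient descent with step size $O(1/n)$, run for $O(n \log n)$ iterations, lands near a local minimum of a nonconvex function.

import math, random

# Illustration of the scaling: SGD with step size Theta(1/n) for
# Theta(n log n) iterations on the nonconvex objective
#     F(x) = (x^2 - 1)^2 / 4,
# whose local minima are x = -1 and x = +1 and whose local maximum is x = 0.
def sgd(n, x0=0.1, noise=0.1):
    x = x0
    eta = 1.0 / n                               # step size O(1/n)
    for _ in range(int(10 * n * math.log(n))):  # O(n log n) iterations
        grad = x * (x * x - 1.0)                # exact gradient of F
        x -= eta * (grad + noise * random.gauss(0.0, 1.0))   # noisy gradient step
    return x

for n in (100, 1000, 10000):
    print(n, round(sgd(n), 3))   # ends in a small neighborhood of x = +1 or x = -1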