Are the weights of a trained neural network repeatable in their convergence?
The question came up of whether a neural network will always converge to the same weights when it is retrained repeatedly from the same initial values. Assume that each retraining run shuffles the order of the training data, or trains on shuffled subsets of the training data.
Is there a convergence proof that would answer this question one way or the other?
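To make the question concrete, here is a minimal, hypothetical experiment one could run: a tiny one-hidden-layer network trained by per-sample SGD from fixed initial weights, where only the shuffle order of the training samples differs between runs. All names and hyperparameters here are illustrative, not from any particular library or paper.

```python
import numpy as np

def train(seed_shuffle, n_epochs=200):
    # Fixed synthetic data; only the shuffle order varies between runs.
    rng_data = np.random.default_rng(0)
    X = rng_data.normal(size=(64, 3))
    y = (np.sin(X[:, 0]) + 0.5 * X[:, 1]).reshape(-1, 1)

    # Same starting weights on every run, matching the question's setup.
    rng_init = np.random.default_rng(0)
    W1 = rng_init.normal(scale=0.5, size=(3, 8))
    W2 = rng_init.normal(scale=0.5, size=(8, 1))

    shuffler = np.random.default_rng(seed_shuffle)
    lr = 0.05
    for _ in range(n_epochs):
        order = shuffler.permutation(len(X))
        for i in order:                       # one-sample SGD step
            x = X[i:i+1]
            h = np.tanh(x @ W1)               # hidden activations
            err = h @ W2 - y[i:i+1]           # prediction error
            gW2 = h.T @ err                   # backprop through output layer
            gW1 = x.T @ ((err @ W2.T) * (1 - h**2))  # ...and through tanh
            W1 -= lr * gW1
            W2 -= lr * gW2
    return W1, W2

# Identical shuffle order -> bit-identical final weights (deterministic).
W1a, _ = train(seed_shuffle=1)
W1b, _ = train(seed_shuffle=1)
print(np.array_equal(W1a, W1b))   # True

# Different shuffle order -> inspect how far apart the weights end up.
W1c, _ = train(seed_shuffle=2)
print(np.max(np.abs(W1a - W1c)))
```

With everything else held fixed, the training is fully deterministic, so repeating the same shuffle order reproduces the weights exactly; the open question is what the last comparison looks like when only the data order changes.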
Topic neural-network machine-learning
Category Data Science