Adam optimizer
Adaptive gradient descent optimization algorithm that combines momentum with per-parameter learning-rate scaling, using exponential moving averages of the gradient and its square with decay rates 0.9 (β₁) and 0.999 (β₂), no weight decay, and learning rate 0.001 (the defaults from the original Adam paper).
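A minimal sketch of this configuration using PyTorch's torch.optim.Adam; the model, data, and loss here are hypothetical placeholders, not part of the card:

import torch

# Hypothetical one-layer model standing in for any network.
model = torch.nn.Linear(10, 1)

# Adam with the card's hyperparameters: learning rate 0.001,
# decay rates betas=(0.9, 0.999), and weight_decay=0.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=0.001,
    betas=(0.9, 0.999),
    weight_decay=0.0,
)

# One illustrative update step on random data.
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()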
Confidence: 90%
Status: active
Source: The Impact and Limitations of AI