Adding Gradient Noise Improves Learning for Very Deep Networks (arXiv:1511.06807)
NEAT (Noise-Enhanced Adaptive Training) is a novel optimization algorithm for deep learning that combines adaptive learning rates with controlled noise injection to improve convergence and generalization.
The NEAT optimizer enhances traditional adaptive optimization methods by injecting controlled noise into the gradient updates. As the paper above reports, annealed gradient noise can help training escape saddle points and poor local minima, reduce overfitting, and make very deep networks trainable even from poor initializations.
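Conceptually, each step is an Adam-style update applied to a gradient perturbed with annealed Gaussian noise. The sketch below only illustrates that idea using the parameter names exposed by this package; it is not the library's internal implementation, and it assumes the noise standard deviation decays geometrically via noise_decay:

import numpy as np

def neat_step(param, grad, m, v, t, lr=0.001, noise_scale=0.01,
              noise_decay=0.99, beta_1=0.9, beta_2=0.999, eps=1e-7):
    # Anneal the noise scale over time, then perturb the gradient.
    sigma = noise_scale * (noise_decay ** t)
    noisy_grad = grad + np.random.normal(0.0, sigma, size=grad.shape)
    # Standard Adam moment estimates, computed on the noisy gradient.
    m = beta_1 * m + (1 - beta_1) * noisy_grad
    v = beta_2 * v + (1 - beta_2) * noisy_grad ** 2
    m_hat = m / (1 - beta_1 ** (t + 1))
    v_hat = v / (1 - beta_2 ** (t + 1))
    # Adaptive parameter update.
    new_param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return new_param, m, v

Because the noise shrinks toward zero over time, late training behaves like plain adaptive optimization while early training benefits from the added exploration.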
Install with pip:

pip install neat-optimizer

Or install from source:

git clone https://github.com/yourusername/neat-optimizer.git
cd neat-optimizer
pip install -e .
import tensorflow as tf
from neat_optimizer import NEATOptimizer
# Create your model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Use NEAT optimizer
optimizer = NEATOptimizer(
    learning_rate=0.001,
    noise_scale=0.01,
    beta_1=0.9,
    beta_2=0.999
)

# Compile and train
model.compile(
    optimizer=optimizer,
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
model.fit(x_train, y_train, epochs=10, validation_data=(x_val, y_val))
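The quick-start snippet assumes x_train, y_train, x_val, and y_val already exist. One simple way to obtain compatible data for the model above (flattened image vectors with integer labels) is MNIST:

import tensorflow as tf

# Load MNIST and flatten the 28x28 images into 784-dim vectors in [0, 1].
(x_train, y_train), (x_val, y_val) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_val = x_val.reshape(-1, 784).astype("float32") / 255.0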
The optimizer accepts the following parameters:

learning_rate (float, default=0.001): Initial learning rate
noise_scale (float, default=0.01): Scale of the injected noise
beta_1 (float, default=0.9): Exponential decay rate for the first moment estimates
beta_2 (float, default=0.999): Exponential decay rate for the second moment estimates
epsilon (float, default=1e-7): Small constant for numerical stability
noise_decay (float, default=0.99): Decay rate for the noise scale over time
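All of these can be passed at construction time. Assuming NEATOptimizer follows the standard tf.keras optimizer interface (as the compile/fit example above suggests), it can also drive a custom training loop; the loop below is the generic Keras pattern, not package-specific code:

import tensorflow as tf
from neat_optimizer import NEATOptimizer

optimizer = NEATOptimizer(
    learning_rate=0.001,
    noise_scale=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    noise_decay=0.99
)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

@tf.function
def train_step(model, x_batch, y_batch):
    with tf.GradientTape() as tape:
        preds = model(x_batch, training=True)
        loss = loss_fn(y_batch, preds)
    grads = tape.gradient(loss, model.trainable_variables)
    # apply_gradients is part of the standard Keras optimizer API.
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss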
If you use the NEAT optimizer in your research, please cite:

@software{neat_optimizer,
  title={NEAT: Noise-Enhanced Adaptive Training Optimizer},
  author={Your Name},
  year={2025},
  url={https://github.com/yourusername/neat-optimizer}
}
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request.
For issues, questions, or feature requests, please open an issue on GitHub.