Evaluating the Benefits of Computational Complexity Reduction in Neural Networks

In our latest study, we explore the impact of computational complexity (CC) reduction in NN-based nonlinear equalizers (NLEs) for optical communication. Using a numerically simulated single-carrier 64-QAM 30 GBd dual-polarization channel over 20×50 km of SSMF, we analyze two NN architectures:

📌 biLSTM+CNN – 100 hidden units (a minimal sketch of this topology follows below)
📌 1D-CNN – dilated convolutions for improved feature extraction
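For the curious, here is a minimal PyTorch sketch of what a biLSTM+CNN equalizer along these lines can look like. Only the 100 hidden units come from the study; the input features (I/Q of both polarizations), window length, and output head are illustrative assumptions.

```python
# Hedged sketch of a biLSTM+CNN equalizer (PyTorch). Only the 100 hidden
# units come from the study; all other sizes are illustrative assumptions.
import torch
import torch.nn as nn

class BiLSTMCNNEqualizer(nn.Module):
    def __init__(self, n_features=4, hidden=100, window=41):
        super().__init__()
        # Bidirectional LSTM over a sliding window of received symbols
        # (assumed 4 features: I and Q of both polarizations).
        self.lstm = nn.LSTM(n_features, hidden,
                            batch_first=True, bidirectional=True)
        # A 1D convolution spanning the whole window maps the LSTM
        # features to the I/Q estimate of the center symbol.
        self.cnn = nn.Conv1d(2 * hidden, 2, kernel_size=window)

    def forward(self, x):                 # x: (batch, window, n_features)
        h, _ = self.lstm(x)               # (batch, window, 2*hidden)
        h = h.transpose(1, 2)             # (batch, 2*hidden, window)
        return self.cnn(h).squeeze(-1)    # (batch, 2): equalized I/Q
```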

One of the key techniques investigated is quantization-aware training (QAT), which mitigates the errors introduced by low-bit-precision weights. Our findings reveal:

✅ Weight Clustering (W.C.) – Delivers the strongest quantization results by learning adaptive weight alphabets, outperforming uniform, power-of-two (PoT), and additive power-of-two (APoT) approaches.
✅ 6-bit W.C. model – Performs on par with the original unquantized model.
✅ 2-bit W.C. model – Matches the 1 step-per-span (StPS) digital back-propagation (DBP) benchmark, demonstrating efficient complexity reduction.
✅ Trade-offs – APoT with two terms offers a practical balance between optical performance and CC, reducing hardware multiplications to shift-and-add operations (see the sketch below).
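
To make that last point concrete, here is a small NumPy sketch of two-term APoT quantization: each weight is approximated by a sum of two signed power-of-two terms, so a multiply can be realized as two bit-shifts and an add. The exponent range and greedy residual rule are my assumptions for illustration, not necessarily the study's exact quantizer.

```python
# Hedged sketch of two-term additive power-of-two (APoT) quantization.
import numpy as np

def quantize_pot(x, exp_min=-8, exp_max=0):
    """Round each element to the nearest signed power of two (or zero)."""
    sign = np.sign(x)
    mag = np.maximum(np.abs(x), 2.0 ** (exp_min - 1))  # avoid log2(0)
    exps = np.clip(np.round(np.log2(mag)), exp_min, exp_max)
    # Magnitudes below half the smallest representable power flush to zero.
    return np.where(np.abs(x) < 2.0 ** (exp_min - 1), 0.0, sign * 2.0 ** exps)

def quantize_apot2(x):
    """Two-term APoT: quantize, then greedily quantize the residual."""
    term1 = quantize_pot(x)
    term2 = quantize_pot(x - term1)
    return term1 + term2

w = np.array([0.30, -0.70, 0.05])
print(quantize_apot2(w))  # e.g. 0.30 -> 0.25 + 0.0625 = 0.3125
```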

💡 Key Takeaway
CC reduction techniques like quantization can significantly enhance the efficiency of NN equalizers while maintaining strong optical performance. However, strategies such as gradual quantization and training monitoring are essential to stabilize QAT (a minimal illustration follows below).
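
As an illustration of that stabilization idea, here is a minimal PyTorch sketch of QAT with a straight-through estimator plus a gradual bit-width schedule. The schedule and helper names are my assumptions; the study's exact training recipe may differ.

```python
# Hedged sketch: fake quantization with a straight-through estimator (STE)
# and a gradual bit-width schedule. Hyperparameters are assumptions.
import torch

def fake_quantize(w, bits):
    """Uniform symmetric fake quantization of a weight tensor."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = torch.round(w / scale).clamp(-qmax, qmax) * scale
    # Forward pass sees w_q; gradients flow through w unchanged (STE).
    return w + (w_q - w).detach()

def bit_schedule(epoch, start_bits=8, end_bits=2, step=10):
    """Gradual quantization: drop one bit every `step` epochs."""
    return max(end_bits, start_bits - epoch // step)
```

In training, one would call fake_quantize(layer.weight, bit_schedule(epoch)) in the forward pass while monitoring a validation metric, holding or rolling back the schedule if performance drops.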

🚀 As we push toward more power-efficient end-to-end (E2E) transport infrastructures, these innovations will be crucial for next-gen optical networks!

Let's discuss: what are your thoughts on balancing efficiency vs. performance in ML models? 👇

#MachineLearning #NeuralNetworks #OpticalCommunications #ComplexityReduction #DeepLearning #AllegroProject #AI #SignalProcessing