In our latest study, we explore the impact of computational complexity (CC) reduction in NN-based nonlinear equalizers (NLEs) for optical communication. Using a numerically simulated single-carrier 64-QAM 30 GBd dual-polarization channel over 20×50 km of standard single-mode fiber (SSMF), we analyze two NN architectures:
🔹 biLSTM+CNN: 100 hidden units
🔹 1D-CNN: a dilated CNN for improved feature extraction
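For intuition, here is a minimal PyTorch sketch of the biLSTM+CNN variant. Only the 100 hidden units come from the study; the input width (I/Q of both polarizations), kernel size, and output head are illustrative assumptions:

```python
import torch
import torch.nn as nn

class BiLSTMCNNEqualizer(nn.Module):
    """Minimal biLSTM+CNN equalizer sketch (sizes partly assumed)."""
    def __init__(self, n_features=4, hidden=100, n_taps=15, n_out=2):
        super().__init__()
        # Bidirectional LSTM over a window of received symbols
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        # 1D convolution mixes the biLSTM features across time
        self.cnn = nn.Conv1d(2 * hidden, n_out, kernel_size=n_taps)

    def forward(self, x):          # x: (batch, time, features)
        h, _ = self.lstm(x)        # -> (batch, time, 2*hidden)
        h = h.transpose(1, 2)      # -> (batch, 2*hidden, time)
        return self.cnn(h)         # -> (batch, n_out, time - n_taps + 1)
```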
One of the key techniques investigated is quantization-aware training (QAT), which helps mitigate the errors introduced by low-bit-precision weights by exposing the network to quantization already during training (a minimal fake-quantization sketch follows the list below). Our findings reveal:
✅ Weight clustering (W.C.): achieves the best quantized performance by learning adaptive weight alphabets, outperforming uniform, power-of-two (PoT), and additive power-of-two (APoT) approaches (see the clustering sketch below).
✅ 6-bit W.C. model: performs on par with the original unquantized model.
✅ 2-bit W.C. model: matches the 1 step-per-span (StPS) digital backpropagation (DBP) benchmark, demonstrating efficient complexity reduction.
✅ Trade-offs: APoT with two terms offers a practical balance between optical performance and CC by minimizing hardware multiplications (see the APoT sketch below).
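As promised above, a common way to realize QAT is fake quantization with a straight-through estimator (STE): weights are rounded in the forward pass, but gradients flow as if the rounding were the identity. This is a generic sketch, not necessarily the paper's exact scheme:

```python
import torch

class FakeQuant(torch.autograd.Function):
    """Uniform fake quantization with a straight-through estimator."""
    @staticmethod
    def forward(ctx, w, n_bits):
        qmax = 2 ** (n_bits - 1) - 1      # e.g. 31 for 6-bit signed weights
        scale = w.abs().max() / qmax      # per-tensor scale (an assumption)
        return torch.clamp(torch.round(w / scale), -qmax, qmax) * scale

    @staticmethod
    def backward(ctx, grad_out):
        # STE: pass the gradient through unchanged; no gradient for n_bits
        return grad_out, None

# Quantize weights before each forward pass, so the optimizer
# learns weights that survive low-bit rounding.
w_q = FakeQuant.apply(torch.randn(100, 100), 6)
```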
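The clustering sketch referenced in the list: weight clustering learns a small adaptive alphabet of centroids (here via plain k-means / Lloyd's algorithm) and snaps each weight to its nearest centroid. In full QAT the centroids can remain trainable; this shows only the inference-time view, with assumed hyperparameters:

```python
import torch

def cluster_weights(w, n_bits=2, n_iter=20):
    """Snap weights to a learned 2**n_bits-entry alphabet via k-means."""
    flat = w.flatten()
    k = 2 ** n_bits
    # Start with centroids spread evenly over the weight range
    centroids = torch.linspace(flat.min().item(), flat.max().item(), k)
    for _ in range(n_iter):
        # Assign each weight to its nearest centroid
        assign = (flat[:, None] - centroids[None, :]).abs().argmin(dim=1)
        # Move each centroid to the mean of its assigned weights
        for j in range(k):
            mask = assign == j
            if mask.any():
                centroids[j] = flat[mask].mean()
    return centroids[assign].reshape(w.shape), centroids
```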
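And the APoT sketch: with two-term additive power-of-two levels, every quantized weight is a sum of two powers of two, so a multiplication collapses into two bit-shifts and an add. The exponent range below is an illustrative assumption:

```python
import torch

# Two-term APoT alphabet: each level is a sum of two powers of two (or zero)
pot = torch.tensor([0.0] + [2.0 ** -k for k in range(1, 5)])  # {0, 1/2, ..., 1/16}
levels = torch.unique(pot[:, None] + pot[None, :])            # all pairwise sums

def apot_quantize(w, levels=levels):
    """Snap each weight to the nearest APoT level, preserving its sign."""
    idx = (w.abs().unsqueeze(-1) - levels).abs().argmin(dim=-1)
    return torch.sign(w) * levels[idx]
```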
💡 Key Takeaway
CC-reduction techniques like quantization can significantly enhance the efficiency of NN equalizers while maintaining strong optical performance. However, strategies like gradual quantization and close training monitoring are essential to keep QAT stable (a sketch of a gradual bit-width schedule is below).
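For example, a gradual schedule might start QAT at a comfortable bit width and step it down once training has re-stabilized, monitoring a validation metric (e.g. BER) after each drop. The numbers here are illustrative, not the study's settings:

```python
def bitwidth_schedule(epoch, start_bits=8, target_bits=2, step_epochs=10):
    """Lower the QAT bit width by one every `step_epochs` epochs."""
    return max(start_bits - epoch // step_epochs, target_bits)

# Epochs 0-9 train at 8 bits, 10-19 at 7 bits, ..., then stay at 2 bits.
```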
🌍 As we push toward more power-efficient end-to-end (E2E) transport infrastructures, these innovations will be crucial for next-gen optical networks!
Let's discuss: what are your thoughts on balancing efficiency vs. performance in ML models? 👇
#MachineLearning #NeuralNetworks #OpticalCommunications #ComplexityReduction #DeepLearning #AllegroProject #AI #SignalProcessing
