
Prof. Ana Pérez (IEEE Fellow)
Universitat Politècnica de Catalunya, Spain
Title: Towards Explainable and Prunable Neural Networks with Discrete Cosine Transform Activations
Abstract: In this paper, we extend our previous work on the Expressive Neural Network (ENN), a multilayer perceptron with adaptive activation functions parametrized using the Discrete Cosine Transform (DCT). While earlier results established the remarkable expressiveness of ENNs with compact architectures, here we focus on their explainability and structural interpretability. We demonstrate that the DCT-based parametrization enables clear insights into the functional role of each neuron, facilitating the identification of redundant components. Leveraging this property, we show that ENNs are inherently amenable to effective pruning: unnecessary DCT coefficients can be removed with minimal or no degradation in performance, an outcome that is difficult to achieve with conventional fixed or other adaptive activation functions. Through experiments, we illustrate how pruning decisions can be directly interpreted from the activation spectra, reinforcing the explainable nature of the model. Notably, our results show that up to 40% of the activation coefficients can be pruned without any loss in performance. This is largely due to the orthogonality of the DCT basis, which allows aggressive pruning with little to no need for fine-tuning. These results highlight the advantages of adopting a signal processing foundation for neural network design, bridging the gap between high expressiveness and interpretability.
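To give a flavor of the idea, the sketch below shows an activation function expressed as a truncated cosine (DCT-like) series with learnable coefficients, and magnitude-based pruning of those coefficients. This is an illustrative assumption, not the exact ENN formulation from the paper; the function names, the period parameter, and the pruning heuristic are hypothetical.

```python
import numpy as np

def dct_activation(x, coeffs, period=4.0):
    # Evaluate phi(x) = sum_k c_k * cos(pi * k * x / period),
    # i.e. an activation parametrized by a truncated cosine series.
    # (Illustrative parametrization; the ENN paper's exact form may differ.)
    k = np.arange(len(coeffs))
    basis = np.cos(np.pi * np.outer(x, k) / period)  # shape (N, K)
    return basis @ coeffs

def prune_coeffs(coeffs, keep_ratio=0.6):
    # Zero out the smallest-magnitude coefficients. Because the cosine
    # basis is orthogonal, dropping small c_k perturbs the activation
    # only slightly, which is the intuition behind near-lossless pruning.
    k = max(1, int(np.ceil(keep_ratio * len(coeffs))))
    drop = np.argsort(np.abs(coeffs))[: len(coeffs) - k]
    pruned = coeffs.copy()
    pruned[drop] = 0.0
    return pruned
```

Because each coefficient multiplies one orthogonal basis function, inspecting the magnitude spectrum of `coeffs` directly reveals which spectral components a neuron actually uses, which is the interpretability property the abstract refers to.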
Bio: To be announced soon