Frontiers of Engineering Management

Front. Eng, 2021, Vol. 8, Issue 4: 572–581    https://doi.org/10.1007/s42524-021-0169-x
RESEARCH ARTICLE
Novel interpretable mechanism of neural networks based on network decoupling method
Dongli DUAN1, Xixi WU1, Shubin SI2
1. School of Information and Control Engineering, Xi’an University of Architecture and Technology, Xi’an 710311, China
2. Ministry of Industry and Information Technology Key Laboratory of Industrial Engineering and Intelligent Manufacturing, Northwestern Polytechnical University, Xi’an 710072, China; School of Mechanical Engineering, Northwestern Polytechnical University, Xi’an 710072, China
Abstract

The lack of interpretability of neural network algorithms has become a bottleneck to their wide application. We propose a general mathematical framework that couples the complex structure of a system with its nonlinear activation function, explores a decoupled dimension-reduction method for high-dimensional systems, and reveals the calculation mechanism of neural networks. We apply our framework to several network models and to a real system, the whole-neuron map of Caenorhabditis elegans. The results show that a simple linear mapping exists between network structure and network behavior in neural networks with high-dimensional, nonlinear characteristics. Our simulation and theoretical results consistently demonstrate this phenomenon. Our interpretation mechanism provides not only a potential mathematical calculation principle for neural networks but also an effective way to match and predict human brain or animal activities accurately, which can further expand and enrich the interpretable mechanisms of artificial neural networks in the future.
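The decoupling idea described in the abstract can be illustrated concretely. The Python sketch below is a minimal illustration, assuming dynamics of the form dx_i/dt = F(x_i) + Σ_j A_ij G(x_j); the concrete F and G, the effective-coupling formula, and the paper's Eq. (11) are not reproduced on this page, so the linear-decay and tanh-activation forms are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of network decoupling, assuming dynamics of the form
#   dx_i/dt = F(x_i) + sum_j A_ij * G(x_j).
# F, G, and the effective coupling below are illustrative assumptions;
# the paper's Eq. (11) is not reproduced on this page.
rng = np.random.default_rng(0)

F = lambda x, I=1.0, R=1.0: I - x / R        # self-dynamics (assumed form)
G = lambda x, J=0.2: J * np.tanh(x)          # neighbour activation (assumed form)

N = 1000
A = (rng.random((N, N)) < 8 / N).astype(float)   # random network, mean degree ~ 8

# mean-field effective coupling: degree-weighted average neighbour degree
beta_eff = (A.sum(axis=0) @ A.sum(axis=1)) / A.sum()

x = np.zeros(N)                              # full N-dimensional system
y = 0.0                                      # one-dimensional reduction
for _ in range(5000):
    x = x + 0.01 * (F(x) + A @ G(x))         # Euler step, full system
    y = y + 0.01 * (F(y) + beta_eff * G(y))  # Euler step, decoupled system

print(f"simulated mean activity {x.mean():.3f} vs decoupled prediction {y:.3f}")
```

Under these assumptions, the one-dimensional effective equation tracks the mean activity of the full 1000-node system closely, which is the kind of structure-behavior decoupling the abstract describes.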

Keywords: neural networks; interpretability; dynamical behavior; network decoupling
Corresponding Author(s): Shubin SI   
Just Accepted Date: 26 July 2021   Online First Date: 07 September 2021    Issue Date: 01 November 2021
 Cite this article:   
Dongli DUAN, Xixi WU, Shubin SI. Novel interpretable mechanism of neural networks based on network decoupling method[J]. Front. Eng, 2021, 8(4): 572–581.
 URL:  
https://academic.hep.com.cn/fem/EN/10.1007/s42524-021-0169-x
https://academic.hep.com.cn/fem/EN/Y2021/V8/I4/572
Fig.1  Testing the model approximations for neural dynamics on scale-free networks with N = 1000 and s = 8 (the line is the theoretical solution of Eq. (11), and the various symbols represent the simulation results). (a) Comparison of the impact of the excitation strength J1 and the inhibition strength J2 on system behavior (here, we set I = R = 1, and the inhibition links are randomly selected from the adjacency matrix). (b) Comparison of the impact of the basal activity I and the inverse of the death rate R on system behavior.
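For reference, a hedged sketch of the Fig. 1 setup: the parameter names (N, s, J1, J2, I, R) come from the caption, while the scale-free generator, the inhibition fraction, and the dynamical equation itself are assumptions, since Eq. (11) is not shown on this page.

```python
import numpy as np
import networkx as nx

# Sketch of the Fig. 1 setup.  N, s, J1, J2, I, R follow the caption; the
# inhibition fraction and the dynamics dx/dt = I - x/R + W @ tanh(x) are
# illustrative assumptions.
rng = np.random.default_rng(1)
N, s = 1000, 8
G = nx.barabasi_albert_graph(N, s // 2, seed=1)   # scale-free, mean degree ~ s
A = nx.to_numpy_array(G)

J1, J2, I, R = 1.0, 0.5, 1.0, 1.0
W = J1 * A
i, j = np.nonzero(np.triu(A))                     # undirected link list
flip = rng.random(i.size) < 0.2                   # assumed inhibition fraction
W[i[flip], j[flip]] = W[j[flip], i[flip]] = -J2   # randomly chosen inhibitory links

x = np.zeros(N)
for _ in range(5000):                             # Euler integration to steady state
    x = x + 0.01 * (I - x / R + W @ np.tanh(x))
print(f"average steady-state activity: {x.mean():.3f}")
```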
Fig.2  Upper and lower bounds of the average activity of neurons (the colored solid lines represent the theoretical solutions of x, the square and diamond symbols represent the simulation results of x, and the dotted and black solid lines represent the lower bound xL and the upper bound xH, respectively).
Fig.3  Contour maps of the upper and lower bounds of the average activity. (a) Phase diagram of the system behavior with xL, which is independent of the network structure and the other dynamical parameters. (b) Contour map of the lower bound xL. (c) Phase diagram of the system behavior with xH, which is coupled as J1·s·R + I·R (here, we set s = 4). (d) Contour map of the upper bound xH.
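The caption's upper bound is explicit enough to evaluate directly. A small sketch, using xH = J1·s·R + I·R as stated in the caption (the lower-bound expression xL is not given on this page, so it is omitted):

```python
import numpy as np

# Upper bound from the Fig. 3 caption: x_H = J1 * s * R + I * R.
# The lower-bound expression x_L is not given on this page and is omitted.
def x_upper(J1, s, I=1.0, R=1.0):
    return J1 * s * R + I * R

print(x_upper(1.0, 4))          # caption setting s = 4: x_H = 5.0

# grid of values such as those behind the Fig. 3(d) contour map
J1_grid = np.linspace(0.1, 2.0, 50)[:, None]
s_grid = np.arange(1, 17)[None, :]
XH = x_upper(J1_grid, s_grid)   # shape (50, 16)
```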
Fig.4  Linear correlation between the macroscopic behavior and the structure of the nonlinear neural networks (the black solid line is drawn from Eq. (11), and the squares are the simulation results for 1000 networks).
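The Fig. 4 test can be imitated on synthetic data: simulate an ensemble of networks, extract one structural quantity per network, and regress the observed mean activity on it. The structural proxy (mean degree) and the dynamics below are assumptions; the paper itself uses Eq. (11) and an ensemble of 1000 networks.

```python
import numpy as np

# Sketch of the Fig. 4 ensemble test.  The mean-degree proxy and the
# dynamics dx/dt = I - x/R + J * A @ tanh(x) are illustrative assumptions.
rng = np.random.default_rng(2)

def mean_activity(A, I=1.0, R=1.0, J=0.2, steps=4000, dt=0.01):
    x = np.zeros(A.shape[0])
    for _ in range(steps):
        x = x + dt * (I - x / R + J * (A @ np.tanh(x)))
    return x.mean()

structure, behaviour = [], []
for _ in range(100):                        # the paper uses 1000 networks
    N, p = 200, rng.uniform(0.01, 0.05)
    A = (rng.random((N, N)) < p).astype(float)
    structure.append(A.sum() / N)           # mean degree as a structural proxy
    behaviour.append(mean_activity(A))

slope, intercept = np.polyfit(structure, behaviour, 1)
print(f"fitted linear law: <x> ~ {slope:.3f} * s + {intercept:.3f}")
```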
Fig.5  Comparison of the structure and dynamical behaviors of C. elegans based on the whole-animal connectomes (“Chem” and “Gap Jn Sym” denote the chemical and gap-junction connectivity networks of the neuron systems, respectively). Top: the pie charts show the numbers of all types of neurons in the whole neuron maps of the adult hermaphrodite and the adult male of C. elegans. Middle: the connection data of the neuron map are incorporated into the neuron dynamic model to calculate the activity distribution of different types of neurons (here, the system is treated as an unweighted network). Bottom: the link weights are taken into account when calculating the activity of neurons.
Fig.6  Quantification of the behavioral differences between the two C. elegans sexes with our framework. (a) Simulation and theoretical solution for unweighted neuron networks. (b) Simulation and theoretical solution for weighted neuron networks.
Fig.7  Predicting the steady-state of each neuron with our framework.
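Figs. 5–7 compare structure and per-neuron steady states on the C. elegans connectomes. The sketch below reproduces the unweighted-versus-weighted comparison in spirit only: a random weighted matrix stands in for the connectome data of Cook et al. (2019) (ref. 6), and the steady-state dynamics is the same assumed form used in the sketches above.

```python
import numpy as np

# Sketch of the Figs. 5-7 comparison: per-neuron steady-state activity,
# unweighted vs weighted.  The random matrix below is a stand-in for the
# Cook et al. (2019) connectome (ref. 6); the dynamics is an assumed form.
rng = np.random.default_rng(3)
N = 300                                          # roughly the C. elegans scale
W = rng.random((N, N)) * (rng.random((N, N)) < 0.05)   # stand-in weighted connectome
A = (W > 0).astype(float)                        # unweighted counterpart

def steady_state(M, I=1.0, R=1.0, J=0.2, steps=5000, dt=0.01):
    x = np.zeros(M.shape[0])
    for _ in range(steps):
        x = x + dt * (I - x / R + J * (M @ np.tanh(x)))
    return x                                     # one steady value per neuron

x_unweighted = steady_state(A)
x_weighted = steady_state(W)
print(f"mean per-neuron shift from weighting: {(x_weighted - x_unweighted).mean():.3f}")
```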
1 M Arjovsky, S Chintala, L Bottou (2017). Wasserstein generative adversarial networks. In: 34th International Conference on Machine Learning. Sydney, 214–223
2 S Bach, A Binder, G Montavon, F Klauschen, K R Müller, W Samek (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS One, 10(7): e0130140
https://doi.org/10.1371/journal.pone.0130140 pmid: 26161953
3 D J Bumbarger, M Riebesell, C Rödelsperger, R J Sommer (2013). System-wide rewiring underlies behavioral differences in predatory and bacterial-feeding nematodes. Cell, 152(1–2): 109–119
https://doi.org/10.1016/j.cell.2012.12.013 pmid: 23332749
4 X Chen, Y Duan, R Houthooft, J Schulman, I Sutskever, P Abbeel (2016). InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In: 30th Conference on Neural Information Processing System (NIPS). Barcelona, 2180–2188
5 K Cho, B van Merrienboer, C Gulcehre, D Bahdanau, F Bougares, H Schwenk, Y Bengio (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. In: Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, 1724–1734
6 S J Cook, T A Jarrell, C A Brittin, Y Wang, A E Bloniarz, M A Yakovlev, K C Q Nguyen, L T Tang, E A Bayer, J S Duerr, H E Bülow, O Hobert, D H Hall, S W Emmons (2019). Whole-animal connectomes of both Caenorhabditis elegans sexes. Nature, 571(7763): 63–71
https://doi.org/10.1038/s41586-019-1352-7 pmid: 31270481
7 I J Goodfellow, J Pouget-Abadie, M Mirza, B Xu, D Warde-Farley, S Ozair, A Courville, Y Bengio (2014). Generative adversarial nets. In: 27th International Conference on Neural Information Processing Systems (NIPS). Montreal, Quebec, 2672–2680
8 L A Hendricks, Z Akata, M Rohrbach, J Donahue, B Schiele, T Darrell (2016). Generating visual explanations. In: European Conference on Computer Vision. Amsterdam, 3–19
9 S Hochreiter, J Schmidhuber (1997). Long short-term memory. Neural Computation, 9(8): 1735–1780
https://doi.org/10.1162/neco.1997.9.8.1735 pmid: 9377276
10 Á Kádár, G Chrupała, A Alishahi (2017). Representation of linguistic form and function in recurrent neural networks. Computational Linguistics, 43(4): 761–780
https://doi.org/10.1162/COLI_a_00300
11 P J Kindermans, K T Schütt, M Alber, K R Müller, D Erhan, B Kim, S Dähne (2017). Learning how to explain neural networks: PatternNet and PatternAttribution. arXiv preprint, arXiv: 1705.05598
12 J Li, X Chen, E Hovy, D Jurafsky (2015). Visualizing and understanding neural models in NLP. In: 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego, CA, 681–691
13 M Mirza, S Osindero (2014). Conditional generative adversarial nets. arXiv preprint, arXiv: 1411.1784
14 A Nguyen, J Clune, Y Bengio, A Dosovitskiy, J Yosinski (2017). Plug & Play Generative Networks: Conditional iterative generation of images in latent space. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, 3510–3520
15 D H Park, L A Hendricks, Z Akata, A Rohrbach, B Schiele, T Darrell, M Rohrbach (2018). Multimodal explanations: Justifying decisions and pointing to the evidence. In: IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, 8779–8788
16 A Radford, L Metz, S Chintala (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint, arXiv: 1511.06434
17 W Samek, A Binder, G Montavon, S Lapuschkin, K R Müller (2017). Evaluating the visualization of what a deep neural network has learned. IEEE Transactions on Neural Networks and Learning Systems, 28(11): 2660–2673
https://doi.org/10.1109/TNNLS.2016.2599820 pmid: 27576267
18 M Schuster, K K Paliwal (1997). Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11): 2673–2681
https://doi.org/10.1109/78.650093
19 H Strobelt, S Gehrmann, H Pfister, A M Rush (2018). LSTMVis: A tool for visual analysis of hidden state dynamics in recurrent neural networks. IEEE Transactions on Visualization and Computer Graphics, 24(1): 667–676
https://doi.org/10.1109/TVCG.2017.2744158 pmid: 28866526
20 Z Tang, Y Shi, D Wang, Y Feng, S Zhang (2017). Memory visualization for gated recurrent neural networks in speech recognition. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). New Orleans, LA, 2736–2740
21 M Wu, M C Hughes, S Parbhoo, M Zazzi, V Roth, F Doshi-Velez (2017). Beyond sparsity: Tree regularization of deep models for interpretability. In: 32nd AAAI Conference on Artificial Intelligence. New Orleans, LA, 1670–1678
22 L Yu, W Zhang, J Wang, Y Yu (2016). SeqGAN: Sequence generative adversarial nets with policy gradient. In: 31st AAAI Conference on Artificial Intelligence. San Francisco, CA, 2852–2858
23 H Zhang, T Xu, H Li, S Zhang, X Wang, X Huang, D N Metaxas (2017). StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In: IEEE International Conference on Computer Vision (ICCV). Venice, 5908–5916
24 Y Zhang, Y Xiao, S W Hwang, H Wang, X S Wang, W Wang (2017). Entity suggestion with conceptual explanation. In: 26th International Joint Conference on Artificial Intelligence (IJCAI). Melbourne, 4244–4250
25 Y Zharov, D Korzhenkov, P Shvechikov, A Tuzhilin (2018). YASENN: Explaining neural networks via partitioning activation sequences. arXiv preprint, arXiv: 1811.02783
26 B Zhou, D Bau, A Oliva, A Torralba (2019). Interpreting deep visual representations via network dissection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(9): 2131–2145
https://doi.org/10.1109/TPAMI.2018.2858759 pmid: 30040625