SuperNeurons: Dynamic GPU memory management for training deep neural networks. L Wang, J Ye, Y Zhao, W Wu, A Li, SL Song, Z Xu, T Kraska. ACM SIGPLAN Notices 53(1), 41–53, 2018. Cited by 49.

Learning compact recurrent neural networks with block-term tensor decomposition. J Ye, L Wang, G Li, D Chen, S Zhe, X Chu, Z Xu. Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018. Cited by 24.

Simple and efficient parallelization for probabilistic temporal tensor factorization. G Li, Z Xu, L Wang, J Ye, I King, M Lyu. 2017 International Joint Conference on Neural Networks (IJCNN), 1–8, 2017. Cited by 8.

Adversarial noise layer: Regularize neural network by adding noise. Z You, J Ye, K Li, Z Xu, P Wang. 2019 IEEE International Conference on Image Processing (ICIP), 909–913, 2019. Cited by 7.

TensorD: A tensor decomposition library in TensorFlow. L Hao, S Liang, J Ye, Z Xu. Neurocomputing 318, 196–200, 2018. Cited by 7.

BT-Nets: Simplifying deep neural networks via block term decomposition. G Li, J Ye, H Yang, D Chen, S Yan, Z Xu. arXiv preprint arXiv:1712.05689, 2017. Cited by 5.

Compressing recurrent neural networks with tensor ring for action recognition. Y Pan, J Xu, M Wang, J Ye, F Wang, K Bai, Z Xu. Proceedings of the AAAI Conference on Artificial Intelligence 33, 4683–4690, 2019. Cited by 3.

Efficient communications in training large scale neural networks. Y Zhao, L Wang, W Wu, G Bosilca, R Vuduc, J Ye, W Tang, Z Xu. Proceedings of the Thematic Workshops of ACM Multimedia 2017, 110–116, 2017. Cited by 3.

Gate decorator: Global filter pruning method for accelerating deep convolutional neural networks. Z You, K Yan, J Ye, M Ma, P Wang. Advances in Neural Information Processing Systems, 2130–2141, 2019. Cited by 2.

Meta filter pruning to accelerate deep convolutional neural networks. Y He, P Liu, L Zhu, Y Yang. arXiv preprint arXiv:1904.03961, 2019.