Jinmian Ye
Verified email at std.uestc.edu.cn
Title · Cited by · Year
Superneurons: dynamic GPU memory management for training deep neural networks
L Wang, J Ye, Y Zhao, W Wu, A Li, SL Song, Z Xu, T Kraska
ACM SIGPLAN Notices 53 (1), 41-53, 2018
Cited by 47 · 2018
Learning compact recurrent neural networks with block-term tensor decomposition
J Ye, L Wang, G Li, D Chen, S Zhe, X Chu, Z Xu
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018
Cited by 23 · 2018
Simple and efficient parallelization for probabilistic temporal tensor factorization
G Li, Z Xu, L Wang, J Ye, I King, M Lyu
2017 International Joint Conference on Neural Networks (IJCNN), 1-8, 2017
Cited by 8 · 2017
TensorD: A tensor decomposition library in TensorFlow
L Hao, S Liang, J Ye, Z Xu
Neurocomputing 318, 196-200, 2018
Cited by 7 · 2018
Adversarial noise layer: Regularize neural network by adding noise
Z You, J Ye, K Li, Z Xu, P Wang
2019 IEEE International Conference on Image Processing (ICIP), 909-913, 2019
Cited by 6 · 2019
Bt-nets: Simplifying deep neural networks via block term decomposition
G Li, J Ye, H Yang, D Chen, S Yan, Z Xu
arXiv preprint arXiv:1712.05689, 2017
Cited by 5 · 2017
Compressing recurrent neural networks with tensor ring for action recognition
Y Pan, J Xu, M Wang, J Ye, F Wang, K Bai, Z Xu
Proceedings of the AAAI Conference on Artificial Intelligence 33, 4683-4690, 2019
Cited by 4 · 2019
Efficient communications in training large scale neural networks
Y Zhao, L Wang, W Wu, G Bosilca, R Vuduc, J Ye, W Tang, Z Xu
Proceedings of the on Thematic Workshops of ACM Multimedia 2017, 110-116, 2017
Cited by 3 · 2017
Meta Filter Pruning to Accelerate Deep Convolutional Neural Networks
Y He, P Liu, L Zhu, Y Yang
arXiv preprint arXiv:1904.03961, 2019
2019
Gate decorator: Global filter pruning method for accelerating deep convolutional neural networks
Z You, K Yan, J Ye, M Ma, P Wang
Advances in Neural Information Processing Systems, 2130-2141, 2019
2019
Articles 1–10