Junyang Lin
Qwen Team, Alibaba Group & Peking University
Verified email at alibaba-inc.com - Homepage
Title
Cited by
Year
Qwen technical report
J Bai, S Bai, Y Chu, Z Cui, K Dang, X Deng, Y Fan, W Ge, Y Han, F Huang, ...
arXiv preprint arXiv:2309.16609, 2023
Cited by 2717 · 2023
Qwen2.5 technical report
A Yang, B Yang, B Zhang, B Hui, B Zheng, B Yu, C Li, D Liu, F Huang, ...
arXiv preprint arXiv:2412.15115, 2024
Cited by 1232 · 2024
Qwen-VL: A frontier large vision-language model with versatile abilities
J Bai, S Bai, S Yang, S Wang, S Tan, P Wang, J Lin, C Zhou, J Zhou
arXiv preprint arXiv:2308.12966, 2023
Cited by 1182 · 2023
OFA: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework
P Wang, A Yang, R Men, J Lin, S Bai, Z Li, J Ma, C Zhou, J Zhou, H Yang
International Conference on Machine Learning (ICML), 2022
Cited by 1146 · 2022
CogView: Mastering text-to-image generation via transformers
M Ding, Z Yang, W Hong, W Zheng, C Zhou, D Yin, J Lin, X Zou, Z Shao, ...
Advances in neural information processing systems 34, 19822-19835, 2021
Cited by 822 · 2021
Qwen2-VL: Enhancing vision-language model's perception of the world at any resolution
P Wang, S Bai, S Tan, S Wang, Z Fan, J Bai, K Chen, X Liu, J Wang, W Ge, ...
arXiv preprint arXiv:2409.12191, 2024
Cited by 560* · 2024
Understanding and improving layer normalization
J Xu, X Sun, Z Zhang, G Zhao, J Lin
Advances in neural information processing systems 32, 2019
Cited by 452 · 2019
Towards knowledge-based recommender dialog system
Q Chen, J Lin, Y Zhang, M Ding, Y Cen, H Yang, J Tang
arXiv preprint arXiv:1908.05391, 2019
Cited by 291 · 2019
Diversity-promoting GAN: A cross-entropy based generative adversarial network for diversified text generation
J Xu, X Ren, J Lin, X Sun
Proceedings of the 2018 conference on empirical methods in natural language …, 2018
Cited by 266* · 2018
Global Encoding for Abstractive Summarization
J Lin, X Sun, S Ma, Q Su
Proceedings of the 56th Annual Meeting of the Association for Computational …, 2018
Cited by 202 · 2018
M6: A Chinese multimodal pretrainer
J Lin, R Men, A Yang, C Zhou, M Ding, Y Zhang, P Wang, A Wang, ...
arXiv preprint arXiv:2103.00823, 2021
Cited by 179* · 2021
Qwen2.5-Coder technical report
B Hui, J Yang, Z Cui, J Yang, D Liu, L Zhang, T Liu, J Zhang, B Yu, K Lu, ...
arXiv preprint arXiv:2409.12186, 2024
Cited by 175 · 2024
Explicit sparse transformer: Concentrated attention through explicit selection
G Zhao, J Lin, Z Zhang, X Ren, Q Su, X Sun
arXiv preprint arXiv:1912.11637, 2019
Cited by 152 · 2019
Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese
A Yang*, J Pan*, J Lin*, R Men, Y Zhang, J Zhou, C Zhou
arXiv preprint arXiv:2211.01335, 2022
Cited by 132 · 2022
ONE-PEACE: Exploring one general representation model toward unlimited modalities
P Wang, S Wang, J Lin, S Bai, X Zhou, J Zhou, X Wang, C Zhou
arXiv preprint arXiv:2305.11172, 2023
Cited by 125 · 2023
ExpertPrompting: Instructing large language models to be distinguished experts
B Xu, A Yang, J Lin, Q Wang, C Zhou, Y Zhang, Z Mao
arXiv preprint arXiv:2305.14688, 2023
Cited by 120 · 2023
An image is worth 1/2 tokens after layer 2: Plug-and-play inference acceleration for large vision-language models
L Chen, H Zhao, T Liu, S Bai, J Lin, C Zhou, B Chang
European Conference on Computer Vision, 19-35, 2024
Cited by 119 · 2024
Qwen2-audio technical report
Y Chu, J Xu, Q Yang, H Wei, X Wei, Z Guo, Y Leng, Y Lv, J He, J Lin, ...
arXiv preprint arXiv:2407.10759, 2024
Cited by 115 · 2024
Modality competition: What makes joint training of multi-modal network fail in deep learning? (Provably)
Y Huang, J Lin, C Zhou, H Yang, L Huang
International conference on machine learning, 9226-9259, 2022
Cited by 109 · 2022
Qwen2.5-Math technical report: Toward mathematical expert model via self-improvement
A Yang, B Zhang, B Hui, B Gao, B Yu, C Li, D Liu, J Tu, J Zhou, J Lin, K Lu, ...
arXiv preprint arXiv:2409.12122, 2024
Cited by 104 · 2024
Articles 1–20