Qing Yang
Du Xiaoman (DXM)
Verified email at duxiaoman.com
Title · Cited by · Year
Xuanyuan 2.0: A large Chinese financial chat model with hundreds of billions parameters
X Zhang, Q Yang
Proceedings of the 32nd ACM International Conference on Information and …, 2023
Cited by 36 · 2023
Position-augmented transformers with entity-aligned mesh for textvqa
X Zhang, Q Yang
Proceedings of the 29th ACM International Conference on Multimedia, 2519-2528, 2021
Cited by 19 · 2021
Combining explicit entity graph with implicit text information for news recommendation
X Zhang, Q Yang, D Xu
Companion Proceedings of the Web Conference 2021, 412-416, 2021
Cited by 16 · 2021
Self-qa: Unsupervised knowledge guided language model alignment
X Zhang, Q Yang
arXiv preprint arXiv:2305.11952, 2023
Cited by 7 · 2023
Expertbert: Pretraining expert finding
H Liu, Z Lv, Q Yang, D Xu, Q Peng
Proceedings of the 31st ACM International Conference on Information …, 2022
Cited by 7 · 2022
TranS: Transition-based knowledge graph embedding with synthetic relation representation
X Zhang, Q Yang, D Xu
arXiv preprint arXiv:2204.08401, 2022
Cited by 7 · 2022
Dml: Dynamic multi-granularity learning for bert-based document reranking
X Zhang, Q Yang
Proceedings of the 30th ACM International Conference on Information …, 2021
Cited by 7 · 2021
Efficient Non-sampling Expert Finding
H Liu, Z Lv, Q Yang, D Xu, Q Peng
Proceedings of the 31st ACM International Conference on Information …, 2022
Cited by 5 · 2022
Deepvt: Deep view-temporal interaction network for news recommendation
X Zhang, Q Yang, D Xu
Proceedings of the 31st ACM International Conference on Information …, 2022
Cited by 5 · 2022
CGCE: A Chinese Generative Chat Evaluation Benchmark for General and Financial Domains
X Zhang, B Li, Q Yang
arXiv preprint arXiv:2305.14471, 2023
Cited by 3 · 2023
ExpertPLM: Pre-training Expert Representation for Expert Finding
Q Peng, H Liu
Findings of the Association for Computational Linguistics: EMNLP 2022, 1043-1052, 2022
Cited by 3 · 2022
Improve ranking correlation of super-net through training scheme from one-shot nas to few-shot nas
J Liu, K Zhang, W Hu, Q Yang
arXiv preprint arXiv:2206.05896, 2022
Cited by 3 · 2022
Self-supervised disentangled representation learning for robust target speech extraction
Z Mu, X Yang, S Sun, Q Yang
Proceedings of the AAAI Conference on Artificial Intelligence 38 (17), 18815 …, 2024
Cited by 2 · 2024
Dapt: A dual attention framework for parameter-efficient continual learning of large language models
W Zhao, S Wang, Y Hu, Y Zhao, B Qin, X Zhang, Q Yang, D Xu, W Che
arXiv preprint arXiv:2401.08295, 2024
Cited by 2 · 2024
Adaptive attention for sparse-based long-sequence transformer
X Zhang, Z Lv, Q Yang
Findings of the Association for Computational Linguistics: ACL 2023, 8602-8610, 2023
Cited by 2 · 2023
Learning to Recover Causal Relationship from Indefinite Data in the Presence of Latent Confounders
H Chen, X Yang, Q Yang
arXiv preprint arXiv:2305.02640, 2023
Cited by 2 · 2023
Just ClozE! A Fast and Simple Method for Evaluating the Factual Consistency in Abstractive Summarization
Y Li, L Li, Q Yang, M Litvak, N Vanetik, D Hu, Y Li, Y Zhou, D Xu, X Zhang
arXiv preprint arXiv:2210.02804, 2022
Cited by 2 · 2022
Comparative analysis of phytochemical profile and antioxidant and anti-inflammatory activity of four Gentiana species from the Qinghai-Tibet Plateau
J Wang, R Liu, J Zhang, H Su, Q Yang, J Wulu, J Li, Z Zhang, Z Lv
Journal of Ethnopharmacology, 117926, 2024
Cited by 1 · 2024
Pre-trained Personalized Review Summarization with Effective Salience Estimation
H Xu, H Liu, Z Lv, Q Yang, W Wang
Findings of the Association for Computational Linguistics: ACL 2023, 10743-10754, 2023
Cited by 1 · 2023
Instance-guided prompt learning for few-shot text matching
J Du, X Zhang, S Wang, K Wang, Y Zhou, L Li, Q Yang, D Xu
Findings of the Association for Computational Linguistics: EMNLP 2022, 3880-3886, 2022
Cited by 1 · 2022
Articles 1–20