Zekun Wu
Other names: 泽鲲 吴
CS PhD Student, University College London / Researcher & Engineer, Holistic AI
Verified email at ucl.ac.uk
Title · Cited by · Year
Towards auditing large language models: Improving text-based stereotype detection
Z Wu, S Bulathwela, AS Koshiyama
arXiv preprint arXiv:2311.14126, 2023
Cited by 5 · 2023
Eliciting personality traits in large language models
A Hilliard, C Munoz, Z Wu, AS Koshiyama
arXiv.org, 2024
Cited by 4 · 2024
Advancing Multimodal Data Fusion in Pain Recognition: A Strategy Leveraging Statistical Correlation and Human-Centered Perspectives
X Gu, Z Wang, I Jin, Z Wu
arXiv preprint arXiv:2404.00320, 2024
Cited by 2 · 2024
Auditing Large Language Models for Enhanced Text-Based Stereotype Detection and Probing-Based Bias Evaluation
Z Wu, S Bulathwela, M Perez-Ortiz, AS Koshiyama
arXiv preprint arXiv:2404.01768, 2024
Cited by 1 · 2024
Eliciting Big Five Personality Traits in Large Language Models: A Textual Analysis with Classifier-Driven Approach
A Hilliard, C Munoz, Z Wu, AS Koshiyama
arXiv preprint arXiv:2402.08341, 2024
Cited by 1 · 2024
Bias Amplification: Language Models as Increasingly Biased Media
Z Wang, Z Wu, J Zhang, N Jain, X Guan, A Koshiyama
arXiv preprint arXiv:2410.15234, 2024
2024
Assessing Bias in Metric Models for LLM Open-Ended Generation Bias Benchmarks
N Demchak, X Guan, Z Wu, Z Xu, A Koshiyama, E Kazim
arXiv preprint arXiv:2410.11059, 2024
2024
CauSkelNet: Causal Representation Learning for Human Behaviour Analysis
X Gu, C Jiang, E Wang, Z Wu, Q Cui, L Tian, L Wu, S Song, C Yu
arXiv preprint arXiv:2409.15564, 2024
2024
HEARTS: A Holistic Framework for Explainable, Sustainable and Robust Text Stereotype Detection
T King, Z Wu, A Koshiyama, E Kazim, P Treleaven
arXiv preprint arXiv:2409.11579, 2024
2024
SAGED: A Holistic Bias-Benchmarking Pipeline for Language Models with Customisable Fairness Calibration
X Guan, N Demchak, S Gupta, Z Wang, E Ertekin Jr, A Koshiyama, ...
arXiv preprint arXiv:2409.11149, 2024
2024
THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models
M Liang, A Arun, Z Wu, C Munoz, J Lutch, E Kazim, A Koshiyama, ...
arXiv preprint arXiv:2409.11353, 2024
2024
From Text to Emoji: How PEFT-Driven Personality Manipulation Unleashes the Emoji Potential in LLMs
N Jain, Z Wu, C Munoz, A Hilliard, A Koshiyama, E Kazim, P Treleaven
arXiv preprint arXiv:2409.10245, 2024
2024
HyPA-RAG: A Hybrid Parameter Adaptive Retrieval-Augmented Generation System for AI Legal and Policy Applications
R Kalra, Z Wu, A Gulley, A Hilliard, X Guan, A Koshiyama, P Treleaven
arXiv preprint arXiv:2409.09046, 2024
2024
JobFair: A Framework for Benchmarking Gender Hiring Bias in Large Language Models
Z Wang, Z Wu, X Guan, M Thaler, A Koshiyama, S Lu, S Beepath, ...
arXiv preprint arXiv:2406.15484, 2024
2024
Towards Auditing Large Language Models: Improving Text-based Stereotype Detection
Z Wu, S Bulathwela, A Koshiyama
Socially Responsible Language Modelling Research, 2023
2023