Haoqi Li
Title
Cited by
Year
Speaker-invariant affective representation learning via adversarial training
H Li, M Tu, J Huang, S Narayanan, P Georgiou
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by: 64 · Year: 2020
Self-supervised speaker verification with simple siamese network and self-supervised regularization
M Sang, H Li, F Liu, AO Arnold, L Wan
ICASSP 2022-2022 IEEE international conference on acoustics, speech and …, 2022
Cited by: 43 · Year: 2022
Learning from past mistakes: improving automatic speech recognition output via noisy-clean phrase context modeling
PG Shivakumar, H Li, K Knight, P Georgiou
APSIPA Transactions on Signal and Information Processing 8, e8, 2019
Cited by: 31 · Year: 2019
A deep reinforcement learning framework for identifying funny scenes in movies
H Li, N Kumar, R Chen, P Georgiou
2018 IEEE International Conference on Acoustics, Speech and Signal …, 2018
Cited by: 28 · Year: 2018
Linking emotions to behaviors through deep transfer learning
H Li, B Baucom, P Georgiou
PeerJ Computer Science 6, e246, 2020
Cited by: 18 · Year: 2020
Unsupervised Latent Behavior Manifold Learning from Acoustic Features: audio2behavior
H Li, B Baucom, P Georgiou
Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International …, 2017
Cited by: 17 · Year: 2017
Automatic prediction of suicidal risk in military couples using multimodal interaction cues from couples conversations
SN Chakravarthula, M Nasir, SY Tseng, H Li, TJ Park, B Baucom, ...
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by: 15 · Year: 2020
Sparsely Connected and Disjointly Trained Deep Neural Networks for Low Resource Behavioral Annotation: Acoustic Classification in Couples' Therapy
H Li, B Baucom, P Georgiou
Interspeech 2016, 1407–1411, 2016
Cited by: 14 · Year: 2016
An empirical analysis of information encoded in disentangled neural speaker representations
R Peri, H Li, K Somandepalli, A Jati, S Narayanan
Proc. Odyssey 2020 The Speaker and Language Recognition Workshop, 194–201, 2020
Cited by: 13 · Year: 2020
Predicting behavior in cancer-afflicted patient and spouse interactions using speech and language
SN Chakravarthula, H Li, SY Tseng, M Reblin, P Georgiou
Proc. Interspeech 2019, 3073–3077, 2019
Cited by: 12 · Year: 2019
Deep reinforcement learning framework for characterizing video content
R Chen, N Kumar, H Li
US Patent 10,885,341, 2021
Cited by: 11 · Year: 2021
" Honey, I Learned to Talk" Multimodal Fusion for Behavior Analysis
SY Tseng, H Li, B Baucom, P Georgiou
Proceedings of the 20th ACM International Conference on Multimodal …, 2018
Cited by: 11 · Year: 2018
Acted vs. improvised: Domain adaptation for elicitation approaches in audio-visual emotion recognition
H Li, Y Kim, CH Kuo, S Narayanan
arXiv preprint arXiv:2104.01978, 2021
Cited by: 10 · Year: 2021
Zero-shot end-to-end spoken language understanding via cross-modal selective self-training
J He, J Salazar, K Yao, H Li, J Cai
arXiv preprint arXiv:2305.12793, 2023
Cited by: 7 · Year: 2023
Emotion Expression Estimates to Measure and Improve Multimodal Social-Affective Interactions
JA Brooks, V Tiruvadi, A Baird, P Tzirakis, H Li, C Gagne, M Oh, A Cowen
Companion Publication of the 25th International Conference on Multimodal …, 2023
Cited by: 3 · Year: 2023
Unsupervised speech representation learning for behavior modeling using triplet enhanced contextualized networks
H Li, B Baucom, S Narayanan, P Georgiou
Computer Speech & Language 70, 101226, 2021
Cited by: 1 · Year: 2021
247. Reverse Engineering an Emotion Expression Estimate Classifier for Depressed Mood
V Tiruvadi, A Baird, L Schooler, J Brooks, C Gagne, P Tzirakis, H Li, M Oh, ...
Biological Psychiatry 95 (10), S200-S201, 2024
Year: 2024
The NeurIPS 2023 Machine Learning for Audio Workshop: Affective Audio Benchmarks and Novel Data
A Baird, R Manzelli, P Tzirakis, C Gagne, H Li, S Allen, S Dieleman, ...
arXiv preprint arXiv:2403.14048, 2024
Year: 2024
Deep reinforcement learning framework for sequence level prediction of high dimensional data
R Chen, N Kumar, H Li
US Patent 11,829,878, 2023
Year: 2023
Deep reinforcement learning framework for characterizing video content
R Chen, N Kumar, H Li
US Patent 11,386,657, 2022
Year: 2022
Articles 1–20