Masanari Ohi
Title · Cited by · Year
Continual pre-training for cross-lingual LLM adaptation: Enhancing Japanese language capabilities
K Fujii, T Nakamura, M Loem, H Iida, M Ohi, K Hattori, H Shota, S Mizuki, ...
arXiv preprint arXiv:2404.17790, 2024
Cited by 45 · 2024
Building a large Japanese web corpus for large language models
N Okazaki, K Hattori, H Shota, H Iida, M Ohi, K Fujii, T Nakamura, M Loem, ...
arXiv preprint arXiv:2404.17733, 2024
Cited by 8 · 2024
Likelihood-based mitigation of evaluation bias in large language models
M Ohi, M Kaneko, R Koike, M Loem, N Okazaki
arXiv preprint arXiv:2402.15987, 2024
Cited by 7 · 2024
HALL-E: Hierarchical neural codec language model for minute-long zero-shot text-to-speech synthesis
Y Nishimura, T Hirose, M Ohi, H Nakayama, N Inoue
arXiv preprint arXiv:2410.04380, 2024
Cited by 2 · 2024
ELP-Adapters: Parameter efficient adapter tuning for various speech processing tasks
N Inoue, S Otake, T Hirose, M Ohi, R Kawakami
IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024
Cited by 2 · 2024
HarmonicEval: Multi-modal, Multi-task, Multi-criteria Automatic Evaluation Using a Vision Language Model
M Ohi, M Kaneko, N Okazaki, N Inoue
arXiv preprint arXiv:2412.14613, 2024
2024
Why We Build Local Large Language Models: An Observational Analysis from 35 Japanese and Multilingual LLMs
K Saito, S Mizuki, M Ohi, T Nakamura, T Shiotani, K Maeda, Y Ma, ...
arXiv preprint arXiv:2412.14471, 2024
2024