Takeshi Kojima
Unknown affiliation
Verified email at weblab.t.u-tokyo.ac.jp
Title
Cited by
Year
Large language models are zero-shot reasoners
T Kojima, SS Gu, M Reid, Y Matsuo, Y Iwasawa
Advances in neural information processing systems 35, 22199-22213, 2022
4387 · 2022
Robustifying Vision Transformer without Retraining from Scratch by Test-Time Class-Conditional Feature Alignment
T Kojima, Y Matsuo, Y Iwasawa
Proceedings of the 31st International Joint Conference on Artificial …, 2022
31 · 2022
On the Multilingual Ability of Decoder-based Pre-trained Language Models: Finding and Controlling Language-Specific Neurons
T Kojima, I Okimura, Y Iwasawa, H Yanaka, Y Matsuo
Proceedings of the 2024 Conference of the North American Chapter of the …, 2024
25 · 2024
Unnatural error correction: GPT-4 can almost perfectly handle unnatural scrambled text
Q Cao, T Kojima, Y Matsuo, Y Iwasawa
Proceedings of the 2023 Conference on Empirical Methods in Natural Language …, 2023
14 · 2023
Making use of latent space in language GANs for generating diverse text without pre-training
T Kojima, Y Iwasawa, Y Matsuo
Proceedings of the 16th Conference of the European Chapter of the …, 2021
3 · 2021
Robustifying Vision Transformer Without Retraining from Scratch Using Attention-Based Test-Time Adaptation
T Kojima, Y Iwasawa, Y Matsuo
New Generation Computing 41 (1), 5-24, 2023
1 · 2023
Slender-Mamba: Fully Quantized Mamba in 1.58 Bits From Head to Toe
Z Yu, T Kojima, Y Matsuo, Y Iwasawa
Proceedings of the 31st International Conference on Computational …, 2025
2025
Which Programming Language and What Features at Pre-training Stage Affect Downstream Logical Inference Performance?
F Uchiyama, T Kojima, A Gambardella, Q Cao, Y Iwasawa, Y Matsuo
Proceedings of the 2024 Conference on Empirical Methods in Natural Language …, 2024
2024
Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning
S Takashiro, T Kojima, A Gambardella, Q Cao, Y Iwasawa, Y Matsuo
arXiv preprint arXiv:2410.00382, 2024
2024
Decoupling Noise and Toxic Parameters for Language Model Detoxification by Task Vector Merging
Y Kim, T Kojima, Y Iwasawa, Y Matsuo
First Conference on Language Modeling, 2024
2024
Cycle Sketch GAN: Unpaired Sketch to Sketch Translation Based on Cycle GAN Algorithm
T Kojima
Proceedings of the 33rd Annual Conference of JSAI, 3B3E203, 2019
2019
Curse of Instructions: Large Language Models Cannot Follow Multiple Instructions at Once
K Harada, Y Yamazaki, M Taniguchi, T Kojima, Y Iwasawa, Y Matsuo