Hwaran Lee

Hi! I am a lead research scientist at NAVER AI Lab, working on natural language processing and machine learning.

My research is dedicated to understanding humanity and society in order to develop human-like and trustworthy Artificial Intelligence. My recent primary interest has been building trustworthy and safe Large Language Models (LLMs), with a focus on: (1) construction of safety datasets, benchmarks, and evaluation metrics; (2) controllable language generation; (3) LLM security, including adversarial attacks and red-teaming; (4) safety alignment and learning methods.

I obtained my Ph.D. in Electrical Engineering from the Korea Advanced Institute of Science and Technology (KAIST) in 2018, where I was fortunate to be advised by Prof. Soo-Young Lee. In 2012, I obtained my B.S. in Mathematical Science from KAIST. Before joining NAVER AI Lab, I worked as a research scientist at SK T-Brain from 2018 to 2021.

Contact me via email: {first_name}.{last_name}@gmail.com

📣 I’m recruiting research interns! Please find more details 👉HERE!

news

Mar 15, 2024 LifeTox was accepted at NAACL 2024. 🌶️
Jan 15, 2024 KoBBQ was accepted at Transactions of the Association for Computational Linguistics (TACL). 🇰🇷
Jan 15, 2024 Prometheus was accepted at ICLR 2024. 🔥
Sep 23, 2023 ProPILE was accepted at NeurIPS 2023 as a spotlight, and Prometheus at the Instruction Workshop @ NeurIPS 2023. 🍾🍾
Aug 24, 2023 HyperCLOVA X, a Korean LLM, and CLOVA X, a chat-based service from NAVER, have been released. I gave a short talk at DAN2023! 🦕
May 08, 2023 Five papers (SQuARe, KoSBi, Bayesian Red Teaming, Critic-Guided Decoding, ClaimDiff) were accepted at ACL 2023! 🎉
Mar 27, 2023 I’m serving as a committee member of the 2nd Forum on Artificial Intelligence Ethics and Policy, organized by the Ministry of Science and ICT, South Korea. 🇰🇷
Apr 08, 2022 Our paper Why Knowledge Distillation Amplifies Gender Bias and How to Mitigate - from the Perspective of DistilBERT has been accepted to the GeBNLP workshop at NAACL 2022. 🌷
Apr 07, 2022 Our paper Masked Summarization to Generate Factually Inconsistent Summaries for Improved Factual Consistency Checking has been accepted to Findings of NAACL 2022! 🌷
Mar 19, 2022 We will organize an ACM FAccT’22 CRAFT (workshop) on HyperscaleFAccT in Seoul, South Korea. ☀️