Hwaran Lee

Hi! I am an assistant professor at Sogang University in the Department of Artificial Intelligence and Computer Science & Engineering.

My research is committed to understanding humanity and society in order to develop human-like and trustworthy Artificial Intelligence. My recent primary interest has been building trustworthy and safe Large Language Models (LLMs), with a focus on: (1) construction of safety datasets, benchmarks, and evaluation metrics; (2) controllable language generation; (3) LLM security, including adversarial attacks and red-teaming; and (4) safety alignment and learning methods.

I obtained my Ph.D. in Electrical Engineering from the Korea Advanced Institute of Science and Technology (KAIST) in 2018. During my Ph.D., I was fortunate to be advised by Prof. Soo-Young Lee. In 2012, I obtained my B.S. in Mathematical Science, also at KAIST. Before joining Sogang University, I was a lead research scientist at NAVER AI Lab from 2021 to 2025 and a research scientist at SK T-Brain from 2018 to 2021.

💌 Contact me via email:

  • {first_name}.{last_name}@gmail.com
  • {first_name}{last_name}@sogang.ac.kr

📣 Notice

  • Student Office Hours: Please schedule a meeting here!
  • Recruiting Undergrad/Grad Students: I am looking for motivated students to do research and work together. Stay tuned – recruitment is coming soon! :->

news

Mar 06, 2025 One paper, Drift, was accepted at the Bi-Align Workshop @ ICLR 2025.
Mar 01, 2025 I joined the Department of Artificial Intelligence and Computer Science & Engineering at Sogang University in Seoul.
Jan 29, 2025 Gave an invited talk on Safe and trustworthy AI at Google Research Australia in Sydney. 🇦🇺
Jan 22, 2025 One paper GUARD is accepted at ICLR 2025, and two papers AdvisorQA & MAQA are accepted at NAACL 2025. 🇸🇬🇺🇸
Nov 15, 2024 Attended the Bay Area Safety Alignment Workshop and the Frontier AI Safety Commitment Conference, both invite-only.
Oct 09, 2024 Participated in CoLM 2024 as a panelist at the Multilinguality and LLMs special session 😁.
Sep 26, 2024 BLEnD paper was accepted at NeurIPS D&B 2024. 🥗
Jul 20, 2024 I’ll serve as the Diversity and Inclusion Chair at ACL 2025. 🌿💐
May 15, 2024 Five papers (Main: SWEET, APRICOT; Findings: KorNAT, TRAP, TimeChara) were accepted at ACL 2024. 🎉
Mar 15, 2024 LifeTox was accepted at NAACL 2024. 🌶️
Jan 15, 2024 KoBBQ was accepted at Transactions of the Association for Computational Linguistics (TACL). 🇰🇷
Jan 15, 2024 Prometheus was accepted at ICLR 2024. 🔥
Sep 23, 2023 ProPILE was accepted at NeurIPS 2023 as a spotlight, and Prometheus at the Instruction Workshop @ NeurIPS 2023. 🍾🍾
Aug 24, 2023 NAVER released HyperCLOVA X, a Korean LLM, and CLOVA X, a chat-based service. I gave a short talk at DAN2023! 🦕
May 08, 2023 Five papers (SQuARe, KoSBi, Bayesian Red Teaming, Critic-Guided Decoding, ClaimDiff) are accepted at ACL 2023! 🎉
Mar 27, 2023 I’m serving as a committee member of the 2nd Forum on Artificial Intelligence Ethics and Policy, organized by the Ministry of Science and ICT, South Korea. 🇰🇷
Apr 08, 2022 Our paper Why Knowledge Distillation Amplifies Gender Bias and How to Mitigate - from the Perspective of DistilBERT has been accepted to the GeBNLP Workshop at NAACL 2022. 🌷
Apr 07, 2022 Our paper Masked Summarization to Generate Factually Inconsistent Summaries for Improved Factual Consistency Checking has been accepted to Findings of NAACL 2022! 🌷
Mar 19, 2022 We will organize an ACM FAccT’22 CRAFT workshop on HyperscaleFAccT in Seoul, South Korea. ☀️
Feb 26, 2022 Our paper “Plug-and-Play Adaptation for Continuously-updated QA” has been accepted to Findings of ACL 2022! 🌼
Nov 01, 2021 Our paper “TaleBrush: Sketching Stories with Generative Pretrained Language Models” has been accepted to CHI 2022! 🍀
Aug 12, 2021 Our work “Reasoning Visual Dialog with Sparse Graph Learning and Knowledge Transfer” has been accepted to Findings of EMNLP 2021! 🌸
Aug 12, 2021 Our work “SUMBT+LaRL: Effective Multi-domain End-to-end Neural Task-oriented Dialog System” has been accepted for publication in IEEE Access! 🌺