elephantalk, a language infrastructure that brings the unheard into the AI-first era.
elephantalk is a real-time ASL-to-text translation plugin that empowers Deaf/HoH users to speak naturally in video meetings - without the need for a live interpreter. It brings sign language, captions, and voice into one inclusive interface, on the tools people already use.
Date
2024.12-(ongoing)
Role
Product Designer (UX/UI, Research, Strategy)
Company
San Francisco–based self-initiated project, built in collaboration with:
– a Mid-level Product Designer at Salesforce (U.S.)
– an AI Data Engineer (California)
– a Full-Stack Developer (U.S.)
Designed and tested specifically for U.S.-based Deaf/HoH users and video meeting platforms like Zoom, Google Meet, and Microsoft Teams.
The question
Have you ever wondered how Deaf or hard-of-hearing individuals engage in virtual meetings - especially when it comes to real-time, two-way communication?

While the world moved to video, millions were left behind. Deaf and hard-of-hearing users rely on lagging captions, costly interpreters, and platforms that were never made for sign language. They show up - but they’re not always heard.
Start point
It didn’t start with a design brief. It started with a friend who just wanted to be in the meeting.
While working in Korea, I became close with a Deaf sales professional who also speaks ASL. One day, he told me: “Virtual meetings are emotionally exhausting. I’m always afraid of how people will see me.” That moment shifted my perspective - from designer to ally.
Virtual meetings are emotionally draining. I’m constantly worried about how hearing people will perceive me - before I even get the chance to express myself.

Alex Lee
Salesperson (diagnosed as hard of hearing at age 4, uses hearing aids, speaks ASL & Korean Sign Language)
Overview
So, was there a workaround? Interpreters helped - but they never matched the smooth, natural flow hearing participants had.
In another project, I needed to hire an interpreter for a research interview with Deaf participants. The translation was accurate, but the experience wasn’t. Just getting started required time, coordination, and money. Once the meeting began, the conversation lagged. It felt indirect, fragmented. And I still remember how tired the Deaf participant looked.
To sum up, there are areas that existing tools and technologies still leave unsolved.
1
Gap for ASL-native users
Even with high-quality captions, reading English as a second language causes fatigue and information loss for ASL-first signers.
2
Constraints on multimodal view
It’s hard to see the speaker, interpreter, and shared content at once - the interpreter’s video often shrinks or disappears during screen sharing or layout changes.
3
Lack of interface control
Deaf/HoH users cannot pin or arrange views as needed, and have limited visual tools to request the floor or stay in the conversation flow.
4
High cost of quality access
Professional interpreters and CART services are expensive and often inaccessible for individuals or small organizations without funding.
Overview
And when we listened to them directly, the pattern was clear - the effort spent managing tools and workarounds often outweighed the value of the meeting itself.
Most platforms provide captions, meeting summaries, and chat logs - enough to deliver information. But for Deaf or hard‑of‑hearing users, delays and the lack of real‑time sign language support mean they cannot fully join the conversation without hiring an interpreter.
“I spend more energy managing the tools than following the meeting itself. After trying to keep up with a two-hour meeting, I’m completely drained - every single word feels like a puzzle piece.”
Hard-of-hearing researcher
Overview
This feedback made it clear: the solution had to go beyond delivering information - it had to let Deaf/HoH users join the conversation in real time, without relying on interpreters or extra tools.
Overview
So we started from a one-liner: build a service that makes real-time translation possible.
What we built: Elephantalk
A real-time ASL-to-text plugin: when a Deaf user signs during a meeting, their signing is instantly converted into text for everyone to see.
✔ The meeting starts,
✔ you speak in sign language,
✔ your words appear as text right away,
✔ and everyone else can read and respond in real time.
Start point
At first, we planned far more features:
preset signing styles / conversation templates,
speaker designation,
a choice of communication modes, and more…
We mocked up dozens of UI screens.
But eventually we realized:
“The one thing we have to solve right now is this:
when you sign, it shows up - immediately.”
So we cut the features back,
and the MVP kept only Translation On/Off.
That simplicity actually gave users more autonomy.
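To make that flow concrete, here is a minimal sketch (in TypeScript) of what that single Translation On/Off control could look like. AslRecognitionClient and CaptionRenderer are hypothetical placeholders for the recognition model and the meeting platform’s caption UI - a sketch of the idea, not our actual implementation.

```typescript
// Minimal sketch of the MVP flow, assuming a hypothetical recognition
// client and caption renderer. AslRecognitionClient and CaptionRenderer
// are placeholders, not real APIs.

type Caption = { speakerId: string; text: string; timestamp: number };

interface AslRecognitionClient {
  // Starts recognizing signing from a webcam stream and calls back
  // with each recognized phrase.
  start(stream: MediaStream, onText: (text: string) => void): void;
  stop(): void;
}

interface CaptionRenderer {
  show(caption: Caption): void;
}

// The single control the MVP kept: Translation On/Off.
class TranslationToggle {
  private active = false;

  constructor(
    private recognizer: AslRecognitionClient,
    private captions: CaptionRenderer,
    private speakerId: string,
  ) {}

  async toggle(): Promise<void> {
    if (this.active) {
      this.recognizer.stop();
      this.active = false;
      return;
    }
    // Grab the user's camera feed and surface recognized signing
    // as captions the other participants can read and reply to.
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    this.recognizer.start(stream, (text) =>
      this.captions.show({ speakerId: this.speakerId, text, timestamp: Date.now() }),
    );
    this.active = true;
  }
}
```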
Start point
6. But now, the next step is harder
The current version works.
But the AI model isn’t perfect yet.
Signing varies from person to person,
and the same movement can mean different things depending on context.
For the model to keep improving,
people have to tell it when it gets things wrong.
But ask too often and users burn out;
never ask and the model stops learning.
“How can we help the model learn naturally,
without wearing people out?”
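One way to frame that trade-off is to let the model’s own uncertainty decide when to ask, and to cap how often it asks. The sketch below only illustrates that idea - the confidence threshold, the per-meeting cap, and every name in it are assumptions, not what we have built.

```typescript
// Sketch of fatigue-aware feedback collection: only ask the user to
// confirm translations the model was unsure about, and cap the number
// of prompts per meeting. Values and names are illustrative.

type Translation = { id: string; text: string; confidence: number };

type TrainingExample = {
  translationId: string;
  modelOutput: string;
  userCorrection: string | null; // null = "the translation was fine"
};

class CorrectionPrompter {
  private promptsShown = 0;

  constructor(
    private confidenceThreshold = 0.6, // ask only when the model is unsure
    private maxPromptsPerMeeting = 3,  // hard cap so users aren't worn out
  ) {}

  // Decide whether this translation is worth interrupting the user for.
  shouldAsk(t: Translation): boolean {
    return (
      t.confidence < this.confidenceThreshold &&
      this.promptsShown < this.maxPromptsPerMeeting
    );
  }

  // Package the user's answer as a training example for the next model update.
  recordFeedback(t: Translation, correctedText: string | null): TrainingExample {
    this.promptsShown += 1;
    return {
      translationId: t.id,
      modelOutput: t.text,
      userCorrection: correctedText,
    };
  }
}
```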
Start point
7. And the visual language (inclusivity…)
Most platforms offer captions. Some offer interpreters. But in practice?
These tools existed. But they weren’t enough. We needed more than accessibility. We needed presence.