I designed elephantalk to explore a question: What would accessibility look like if users could shape it in real time? The result became a participatory communication layer: a system that listens, learns, and evolves with every conversation.
2025.01 - 2025.10
Founding Product Designer (UX/UI, Research, Strategy) — led end-to-end design and product definition
*Collaborated with an AI data engineer, a full-stack developer, a product owner, and a UX researcher.
Designed for Deaf/HoH professionals in the U.S., integrating with Zoom, Google Meet, and Teams
Overview
If a user spots a mistranslation, they can tap the "Mark caption as incorrect" button, and elephantalk saves the flag on their local device for later correction.
By letting users flag and refine captions, elephantalk grows more accurate and personal with every meeting, redefining accessibility as a shared process, not a service.
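As a minimal sketch of how such a local flag store could work (the interface, names, and the use of browser localStorage here are illustrative assumptions, not elephantalk's actual implementation):

```ts
// Hypothetical sketch: storing "Mark caption as incorrect" flags on the user's device.
// CaptionFlag, STORAGE_KEY, and localStorage usage are assumptions for illustration.
interface CaptionFlag {
  captionId: string;      // id of the caption the user flagged
  originalText: string;   // what elephantalk showed
  flaggedAt: number;      // timestamp, for ordering later review
  suggestedText?: string; // optional correction supplied by the user
}

const STORAGE_KEY = "elephantalk.captionFlags";

function loadFlags(): CaptionFlag[] {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as CaptionFlag[]) : [];
}

function saveFlag(flag: CaptionFlag): void {
  const flags = loadFlags();
  flags.push(flag);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(flags));
}

// Example: the user taps "Mark caption as incorrect" on a caption.
saveFlag({
  captionId: "caption-123",
  originalText: "I will meat you at three",
  flaggedAt: Date.now(),
});
```

Keeping the flags on-device is what lets corrections feed the learning loop described later without captions ever leaving the user's machine.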
Starting point
This project began with a Deaf sales professional I worked with in Korea.
One afternoon, he said quietly:

One of the precious interview moments with the HoH salesperson, 2024
Research
Research insights 2
Because of these gaps, accessibility began to feel like work. Deaf and hard-of-hearing professionals spent more energy managing tools than joining conversations.
Design concept
Rather than designing another assistive tool, we imagined accessibility as a mutual learning loop. elephantalk introduces two key interactions:
Collaboration & learning system
We designed not just how the model speaks, but how it listens.
To make the system adaptive, I collaborated closely with an AI engineer to design a federated learning loop, a structure that allows the model to learn from users locally, without ever exposing personal data.
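A rough sketch of what one round of that loop could look like is below; the function names and the toy on-device trainer are assumptions for illustration. The point is the data flow: only weight deltas leave the device, while the flagged captions stay local.

```ts
// Hypothetical sketch of one federated learning round. Only weight deltas
// are shared; the user's caption corrections never leave their device.
type Weights = number[];

interface LocalExample {
  input: string;   // caption as originally rendered
  target: string;  // the user's correction
}

// Placeholder for the real on-device trainer: here it just nudges each
// weight slightly per example so the sketch stays self-contained.
function trainOnDevice(weights: Weights, examples: LocalExample[]): Weights {
  const step = 0.01 * examples.length;
  return weights.map((w) => w + step);
}

// Device side: fine-tune locally, then return only the weight delta.
function localUpdate(globalWeights: Weights, examples: LocalExample[]): Weights {
  const localWeights = trainOnDevice(globalWeights, examples);
  return localWeights.map((w, i) => w - globalWeights[i]);
}

// Server side: federated averaging of the deltas from many devices.
function aggregate(globalWeights: Weights, deltas: Weights[]): Weights {
  return globalWeights.map((w, i) => {
    const avgDelta = deltas.reduce((sum, d) => sum + d[i], 0) / deltas.length;
    return w + avgDelta;
  });
}

// Example round with two devices sharing the same global model.
const globalModel: Weights = [0.2, -0.1, 0.4];
const deltaA = localUpdate(globalModel, [{ input: "meat you", target: "meet you" }]);
const deltaB = localUpdate(globalModel, [{ input: "sign of", target: "sign off" }]);
const nextModel = aggregate(globalModel, [deltaA, deltaB]);
```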
In parallel, we worked with Deaf educators and interpreters to rethink caption design itself. The result was the ASL Gloss Caption Mode, a visual structure mirroring how signers think and express ideas spatially, reducing cognitive fatigue while preserving meaning and rhythm.
Our early prototype recognized signs word by word; it later evolved into a sentence-based formation model that improved accuracy over time.
Brand & visual language
When designing elephantalk’s brand identity, the goal was to make accessibility feel calm, connected, and trustworthy, not technical or assistive. Every visual element was built around the same principles that shaped the product itself: empathy, clarity, and participation.
Results
After four design iterations, we began to see a shift, not just in the numbers, but in how people felt inside conversations.
94% of testers said they felt "actively heard" during meetings
40% accuracy improvement after 2 weeks of federated learning
Under 1 minute average setup time
Reflection
NEXT PROJECT
Shop what you see, not what you type: with Walaland's visual-first shopping experience
Owned end-to-end experience