Redefining how Deaf and hearing professionals communicate in real time.

I designed elephantalk to explore a question: What would accessibility look like if users could shape it in real time? The result became a participatory communication layer, a system that listens, learns, and evolves with every conversation.

Date

2025.01 - 2025.10

Role

Founding Product Designer (UX/UI, Research, Strategy) — led end-to-end design and product definition

*Collaborated with an AI data engineer, a full-stack developer, a product owner, and a UX researcher.

Scope

Designed for Deaf/HoH professionals in the U.S., integrating with Zoom, Google Meet, and Teams

Focus

Created a two-way ASL ↔ Voice system with a federated learning loop. Designed not just for accessibility, but for shared agency during video calls.

Overview

What is elephantalk?
A meeting layer that turns sign language into real-time captions.

Elephantalk is a real-time ASL-to-text translation layer designed for inclusive meetings. It turns sign language into text — and accessibility into participation. Built for U.S.-based Deaf and hard-of-hearing professionals, the system learns directly from users through federated feedback, creating a loop where every correction helps the model improve.

During the meeting
Turning translation on

When the meeting starts, the user simply activates the elephantalk widget. With one tap, translation turns on and signing instantly appears as live captions for everyone in the call.

One-tap toggle for both Deaf and hearing users

Works across Zoom, Google Meet, and Teams
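
To make the interaction concrete, here is a minimal sketch of how a one-tap toggle could wire sign capture to live captions. The SignRecognizer and CaptionChannel interfaces are assumptions made for this sketch, not elephantalk's actual API:

```ts
// Illustrative only: SignRecognizer and CaptionChannel are assumptions,
// not the shipped elephantalk API.
interface SignRecognizer {
  start(stream: MediaStream, onCaption: (text: string) => void): void;
  stop(): void;
}

interface CaptionChannel {
  publish(text: string): void; // bridge to Zoom / Meet / Teams caption surfaces
}

class TranslationToggle {
  private active = false;

  constructor(
    private recognizer: SignRecognizer,
    private channel: CaptionChannel,
  ) {}

  // One tap: turn translation on or off for everyone in the call.
  async toggle(): Promise<void> {
    if (this.active) {
      this.recognizer.stop();
      this.active = false;
      return;
    }
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    this.recognizer.start(stream, (text) => this.channel.publish(text));
    this.active = true;
  }
}
```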

When meaning slips
Flagging an incorrect caption

If a caption doesn’t match what the user meant, they can tap “Mark caption as incorrect.”
The system saves this flag locally and learns how the user signs, capturing real context without interrupting the conversation.
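
As a rough illustration, local flag storage could be as simple as the sketch below; the store and field names are assumptions, since the case study doesn't specify the persistence layer:

```ts
// Illustrative on-device flag store; field names are assumptions.
interface CaptionFlag {
  captionId: string;
  captionText: string; // what the model produced
  flaggedAt: number;   // epoch milliseconds
  correction?: string; // filled in by the user after the meeting
}

const STORE_KEY = 'elephantalk.flags';

function loadFlags(): CaptionFlag[] {
  return JSON.parse(localStorage.getItem(STORE_KEY) ?? '[]');
}

// "Mark caption as incorrect": saved locally, never sent to the cloud.
function flagCaption(captionId: string, captionText: string): void {
  const flags = loadFlags();
  flags.push({ captionId, captionText, flaggedAt: Date.now() });
  localStorage.setItem(STORE_KEY, JSON.stringify(flags));
}
```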

After the meeting
Correcting and improving

After the session, users can review their flagged captions and input the right translation in their own words. Through federated learning, the model updates locally, building a personalized ASL-to-text engine that improves with every interaction.
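
One way the local update step could look, sketched with TensorFlow.js purely as an assumed stack (the case study doesn't name the on-device ML framework). Each reviewed flag becomes a training pair, and the personalized model is fine-tuned and saved without anything leaving the device:

```ts
import * as tf from '@tensorflow/tfjs';

// Hypothetical featurizers standing in for the real encoders.
declare function encodeSigns(captionId: string): tf.Tensor;
declare function encodeText(correction: string): tf.Tensor;

async function applyCorrections(
  flags: { captionId: string; correction: string }[],
): Promise<void> {
  // Load the personal model from on-device storage (IndexedDB).
  const model = await tf.loadLayersModel('indexeddb://elephantalk-asl');
  model.compile({
    optimizer: tf.train.adam(1e-4),
    loss: 'categoricalCrossentropy',
  });

  const xs = tf.stack(flags.map((f) => encodeSigns(f.captionId)));
  const ys = tf.stack(flags.map((f) => encodeText(f.correction)));

  // A gentle local fine-tune: corrections and video never leave the device.
  await model.fit(xs, ys, { epochs: 1, batchSize: 8 });
  await model.save('indexeddb://elephantalk-asl');
}
```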

By letting users flag and refine captions, elephantalk grows more accurate and personal with every meeting, redefining accessibility as a shared process, not a service.

Starting point

A conversation that changed everything

This project began with a Deaf sales professional I worked with in Korea.
One afternoon, he said quietly:

“Virtual meetings are emotionally exhausting. I’m always worried about how hearing people will see me before I even get to speak.”
Alex Lee

Salesperson (diagnosed as hard of hearing at age 4, uses hearing aids, signs ASL and Korean Sign Language)

One of the precious interview moments with a HoH salesperson in 2024

That moment shifted my role: from designer to ally.
I realized the real barrier wasn’t technology, but participation.
He didn’t need a tool to help him speak. He needed a space that could listen without bias.

That single conversation reframed the project for me: from building access to designing presence.

Research

What current tools fail to solve
Current tools deliver only information, not connection

Most meeting platforms convert words into captions, but captions ≠ communication.
Deaf professionals told us they spend more energy managing tools than actually participating. Even interpreter support couldn’t replicate the natural flow of hearing conversations.

“By the time my words are interpreted, the topic has already changed.”

Deaf Product Designer

1

No real-time ASL-to-text translation.

2

Interrupted participation — users wait while topics move on.

3

Tool juggling — managing captions, chat, and interpreters just to keep up.

Research insights 2

When access becomes effort

Because of these gaps, accessibility began to feel like work. Deaf and hard-of-hearing professionals spent more energy managing tools than joining conversations.

“I spend more energy managing the tools than following the meeting itself. After a two-hour session, I’m completely drained, every single word feels like a puzzle piece.”

Elle, Hard-of-hearing researcher

1

Cognitive fatigue from constant coordination.

2

Loss of flow and timing in group discussions.

3

Emotional distance: feeling like an observer, not a participant.

Design concept

What if accessibility could learn from its users?

Rather than designing another assistive tool, we imagined accessibility as a mutual learning loop. elephantalk introduces two key interactions:

Translation toggle

A one-tap widget that turns signing into live captions instantly, accessible to both Deaf and hearing users during meetings.

Federated feedback loop

Users can flag mistranslations in real time.
Their feedback refines the model locally, without sending personal data to the cloud.

In essence: User signs → AI translates → user flags → model learns → conversation flows better.

Collaboration & learning system

Designing how the model listens

We designed not just how the model speaks, but how it listens.

To make the system adaptive, I collaborated closely with an AI engineer to design a federated learning loop, a structure that allows the model to learn from users locally, without ever exposing personal data.
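
For readers curious what that loop implies technically: in the standard federated pattern, each device trains locally and shares only weight deltas, never video or captions. The sketch below shows that generic federated-averaging shape; it is not elephantalk's actual infrastructure:

```ts
// Generic federated-averaging shape: only weight deltas travel, never user data.
type Weights = Float32Array;

interface LocalUpdate {
  delta: Weights;      // new local weights minus the shared baseline
  sampleCount: number; // how many corrections produced this delta
}

// Server side: weighted average of client deltas applied to the baseline.
function aggregate(baseline: Weights, updates: LocalUpdate[]): Weights {
  const total = updates.reduce((n, u) => n + u.sampleCount, 0);
  const next = Float32Array.from(baseline);
  for (const u of updates) {
    const w = u.sampleCount / total;
    for (let i = 0; i < next.length; i++) {
      next[i] += w * u.delta[i];
    }
  }
  return next;
}
```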

In parallel, we worked with Deaf educators and interpreters to rethink caption design itself. The result was the ASL Gloss Caption Mode, a visual structure mirroring how signers think and express ideas spatially, reducing cognitive fatigue while preserving meaning and rhythm.

Our early prototype recognized signs word by word; it later evolved into a sentence-based formation model that improved accuracy over time.
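
To make the word-by-word to sentence-level shift concrete, the sketch below buffers recognized glosses and commits a caption at clause boundaries instead of emitting each sign. The boundary token is a stand-in for whatever signal the real recognizer would provide:

```ts
// Illustrative sentence-based formation instead of word-by-word emission.
class SentenceFormer {
  private glosses: string[] = [];

  // Feed one recognized sign (an ASL gloss) at a time.
  // '<BOUNDARY>' is a hypothetical clause-boundary token from the recognizer.
  push(gloss: string): string | null {
    if (gloss === '<BOUNDARY>') {
      const sentence = this.glosses.join(' ');
      this.glosses = [];
      return sentence || null; // commit a full caption, not isolated words
    }
    this.glosses.push(gloss);
    return null;
  }
}
```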

Brand & visual language

Giving accessibility a presence, not a label

When designing elephantalk’s brand identity, the goal was to make accessibility feel calm, connected, and trustworthy, not technical or assistive. Every visual element was built around the same principles that shaped the product itself: empathy, clarity, and participation.

Results

When inclusion feels real

After four design iterations, we started seeing the shift, not just in numbers, but in how people felt inside conversations.

94%

Testers said they felt “actively heard” during meetings

40%

Accuracy improvement after 2 weeks of federated learning

Under 1m

Average setup time

Reflection

Designing participation, not perfection.

Elephantalk was never just a translation tool. It was a conversation about participation between Deaf users, interpreters, engineers, and designers.

Through this process, I learned that accessibility isn’t achieved through automation but through reciprocity: a system learning to grow with the people it serves.

Real accessibility is when people have the right to teach the system that serves them.

NEXT PROJECT

Shop what you see,
not what you type: with Walaland's visual-first shopping experience

Owned end-to-end experience

uxykyuri

Can’t wait to work & build
something together!

Drop me a line at

uxykyuri@gmail.com

CALL

415-969-1756

2024, Seoul city

Site designed and built by uxykyuri.
Copyright © 2025 uxykyuri. All rights reserved.
