Redefining how Deaf and hearing professionals communicate in real time.

I designed elephantalk to explore a question: What would accessibility look like if users could shape it in real time? The result became a participatory communication layer: a system that listens, learns, and evolves with every conversation.

TIMELINE

5 months (2025-2026)

ROLE

Founding Product Designer (UX/UI, Research, Strategy); led end-to-end design and product definition

COLLABORATORS

Collaborated with an AI data engineer, a full-stack developer, a product owner, and a UX researcher.

Self-Initiated

This is not a formal company project; it's a side project concurrent with graduate studies.

Overview

What is elephantalk?

Elephantalk is a real-time ASL-to-text translation layer designed for inclusive meetings. It turns sign language into text and accessibility into participation. Built for U.S.-based Deaf and hard-of-hearing professionals, the system learns directly from users through federated feedback, creating a loop where every correction helps the model improve.

During the meeting
Turning translation on

When the meeting starts, the user simply activates the elephantalk widget. With one tap, translation turns on and signing instantly appears as live captions for everyone in the call.

One-tap toggle for both Deaf and hearing users

Works across Zoom, Google Meet, and Teams

If a user spots a mistranslation, they can tap the "Mark caption as incorrect" button, and elephantalk saves the flag on their device for later correction.

When meaning slips
Flagging an incorrect caption

If a caption doesn’t match what the user meant, they can tap “Mark caption as incorrect.” The system saves this flag locally and learns how the user signs, capturing real context without interrupting the conversation.
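
As a rough sketch of this local-first flagging step (the store path, field names, and function are all hypothetical; the write-up doesn't specify the actual implementation), a flag can simply be a small record appended to an on-device file:

```python
import json
import time
from pathlib import Path

# Hypothetical on-device store; nothing here is uploaded anywhere.
FLAG_STORE = Path("elephantalk_flags.json")

def flag_caption(caption_id: str, caption_text: str) -> dict:
    """Save a 'Mark caption as incorrect' flag on the user's device.

    The flag only records what was shown, so the user can supply
    the right wording after the meeting without interrupting it.
    """
    flag = {
        "caption_id": caption_id,
        "caption_text": caption_text,
        "flagged_at": time.time(),
        "correction": None,  # filled in during post-meeting review
    }
    flags = json.loads(FLAG_STORE.read_text()) if FLAG_STORE.exists() else []
    flags.append(flag)
    FLAG_STORE.write_text(json.dumps(flags, indent=2))
    return flag
```

Keeping the flag as an append-only local record is what lets the tap stay a one-step action during the call.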

After the meeting
Correcting and improving

After the session, users can review their flagged captions and input the right translation in their own words. Through federated learning, the model updates locally, building a personalized ASL-to-text engine that improves with every interaction.

By letting users flag and refine captions, elephantalk grows more accurate and personal with every meeting, redefining accessibility as a shared process, not a service.
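
One way to picture the review step (a hedged sketch; the field names are assumptions, not elephantalk's real schema): flags the user has corrected become (model output, user correction) pairs that feed the on-device update.

```python
def build_training_pairs(flags: list[dict]) -> list[tuple[str, str]]:
    """Turn reviewed flags into (model_output, user_correction) pairs.

    Only flags the user has actually corrected are used; unreviewed
    flags stay on the device until the user gets to them.
    """
    return [
        (f["caption_text"], f["correction"])
        for f in flags
        if f.get("correction")
    ]
```

Filtering out unreviewed flags means the model only learns from corrections the user has explicitly confirmed.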

Research

A conversation that changed everything

This project began with a Deaf sales professional I worked with in Korea. One afternoon, he said quietly:

Alex Lee

Salesperson (diagnosed as hard of hearing at age 4; uses hearing aids; signs ASL and Korean Sign Language)

“Virtual meetings are emotionally exhausting. I’m always worried about how hearing people will see me before I even get to speak.”

A moment from an interview with the hard-of-hearing salesperson in 2024

That moment shifted my role from designer to ally. I realized the real barrier wasn’t technology, but participation.
He didn’t need a tool to help him speak. He needed a space that could listen without bias.

RESEARCH

What current tools fail to solve

Most meeting platforms convert words into captions, but captions ≠ communication.
Deaf professionals told us they spend more energy managing tools than actually participating. Even interpreter support couldn’t replicate the natural flow of hearing conversations.

“By the time my words are interpreted, the topic has already changed.”

Deaf Product Designer

1. No real-time ASL-to-text translation.

2. Interrupted participation: users wait while topics move on.

3. Tool juggling: managing captions, chat, and interpreters just to keep up.

Collaboration & learning system

Designing how the model listens

We designed not just how the model speaks, but how it listens.

To make the system adaptive, I collaborated closely with an AI engineer to design a federated learning loop, a structure that allows the model to learn from users locally, without ever exposing personal data.
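
The loop described above can be sketched roughly as FedAvg-style weight averaging (a minimal illustration assuming a simple vector of model weights; the actual training setup is not specified in this write-up):

```python
import numpy as np

def local_update(weights: np.ndarray, grads: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One on-device training step. The user's captions and
    corrections never leave the device; only weights do."""
    return weights - lr * grads

def federated_average(client_weights: list[np.ndarray]) -> np.ndarray:
    """Server-side step: average the locally updated weights.
    The server sees weight vectors, never raw conversations."""
    return np.mean(client_weights, axis=0)
```

The design choice this illustrates is the privacy boundary: personal signing data stays on each device, and only the averaged model parameters are shared back.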

In parallel, we worked with Deaf educators and interpreters to rethink caption design itself. The result was the ASL Gloss Caption Mode, a visual structure mirroring how signers think and express ideas spatially, reducing cognitive fatigue while preserving meaning and rhythm.

Our early prototype recognized signs word by word; it later evolved into a sentence-based formation model that improved accuracy over time.

Brand & visual language

Designing elephantalk's brand identity

When designing elephantalk’s brand identity, the goal was to make accessibility feel calm, connected, and trustworthy, not technical or assistive. Every visual element was built around the same principles that shaped the product itself: empathy, clarity, and participation.

reflection

So what did I learn from this self-initiated project?

It was built through real collaboration with Deaf and hard-of-hearing users, interpreters, engineers, and designers. By validating the problem, building the model, and testing with users, I experienced what it means to take an accessibility product from 0 to 1. While the model is still evolving and accuracy remains a risk, this work was only possible thanks to the Deaf and HoH community who helped us evaluate usability and trust.

Systemic Challenge

Meetings weren’t accessible by default. Deaf and hearing participants relied on interpreters, partial notes, or recordings, leading to missed context, delays, and unequal participation.

Solution

I designed an AI-powered workflow that translates sign language into real-time voice and text, so everyone can follow, speak, and participate in the same meeting flow.

Impact

Meetings became inclusive and easier to run. Deaf participants gained equal access to conversation, teams communicated more clearly, and accessibility was built into the workflow instead of added later.