‘Coffee is just an excuse’: the Deaf-run cafe where hearing people sign to order


Ashley Hartwell raised her fists at the barista and shook them near her ears. Then she lowered her fists, extended her thumbs and little fingers, and moved them up and down across her chest, as if milking a cow. Finally, she placed the fingers of one hand on her chin and bent her wrist forward.

Hartwell, who is hearing, had used British Sign Language (BSL) to order her morning latte with regular milk at the deaf-run Dialogue Café at the University of East London, and to thank deaf barista Victor Olaniyan.

Hartwell, a university lecturer, said: “I have to be honest: when this café first opened near my office, I avoided it because the whole idea made me anxious. But now I’m fascinated. Sign language is amazing. I’m thinking about taking a course so I can learn more.”

The café’s touchscreen menu gave Hartwell the confidence to try BSL. Rather than simply listing the coffees and cakes on sale, it shows videos of their BSL signs.

For many deaf BSL users, this type of direct access is important. BSL is a first language for thousands of people in the UK.

Olaniyan, who has worked at the café for five years and now fits shifts around an accounting and management degree at the University of Reading, seemed a little taken aback by the reactions the video menu provokes in hearing customers.

“I grew up communicating with hearing people, so I don’t have any problems in the hearing world,” he signed. “But hearing people often feel anxious about communicating with us. If this technology helps them, that’s great, but I’m fine as I am.”

Ashley Hartwell orders drinks at the café. Photograph: Jill Mead/The Guardian

Over the past two years, there has been an explosion of digital and AI-linked products aimed at bridging communication barriers between the deaf and hearing worlds, from signing avatars to large generative models that aspire to rival mainstream AI platforms.

However, independent evaluation of many of these systems is limited, and sign language researchers caution that current tools still struggle with linguistic nuance, regional variation, and context, especially in high-risk settings such as health care and law.

Even so, the ambitions are striking: UK startup Silence Speaks has created an avatar-based system that converts text into BSL, claiming it can convey contextual meaning and emotional cues.

The British project SignGPT, backed by £8.45 million of funding, is developing models to translate bidirectionally between BSL and English, as well as building what it calls the largest sign language dataset in the world.

Sign language AI research has also become increasingly collaborative and international: a new £3.5m UK-Japan research project is developing systems trained on natural deaf-to-deaf conversation data rather than interpreter recordings.

Much of the recent progress has happened quickly. When Professor Bencie Woll, co-investigator of the SignGPT project at the Deafness, Cognition and Language Research Centre at University College London, first entered the field of BSL research, communication beyond face-to-face conversation for deaf people was extremely limited.

“The rest of the world was moving forward with technology, but deaf people were often left behind,” she said. “What’s different now is the momentum. Over the past few years, the deaf community has benefited from a potent mix of new possibilities.”

Woll cautioned that, historically, technology has not always been positive. “There’s often been a fantasy, especially among researchers who don’t understand sign language, that this is a quick fix. That you take a sign language, turn it into written English – and you’ve made the lives of deaf people wonderful,” she said.

Victor Olaniyan works shifts at the café while pursuing his degree in accounting and management. Photograph: Jill Mead/The Guardian

That perception led to what Woll described as “really terrible technology”, including wearable translation suits, heavy gloves and head-mounted cameras designed to capture signing.

“These were all doomed to failure,” she said, “because they were designed by people who did not understand sign language and did not ask deaf people what they wanted, let alone work with deaf experts from the beginning. The community has been frustrated for years by the proliferation of bad solutions.”

Yet the need for solutions is real. Approximately 70 million people worldwide are deaf or hard of hearing. In the UK, there are approximately 151,000 BSL users recorded in census data. For about 25,000 of them, BSL is their primary language. It is a distinct, natural language with its own grammar and structure, not a signed version of English.

For this group, written and spoken English is often a second or third language, learned alongside lip-reading, sign-supported English, or family-invented gestures.

This has practical consequences: subtitles and written text are not always adequate substitutes for direct BSL access. A large 2017 study of deaf children aged 10 to 11 found reading ability well below the expected age level: this was true for 48% of deaf children taught using only spoken language, and for 82% of those whose everyday language was sign language.

Dr Lauren Ward has the unusual role of leading on AI technology for the deaf community and advising government and industry at the Royal National Institute for Deaf People (RNID).

“The pace of change is so fast that RNID has taken the unusual decision to hire engineers,” she said. “The potential to help the deaf community is huge – but there is also the potential to harm.”

Deaf people have long been early adopters of technology: SMS messaging transformed communication in the 1990s. But Ward said the past two years have brought a new intensity of interest and concern. “It’s suddenly shifted from university labs to startups and commercial products,” she said.

This shift has been enabled by advances in machine learning and related technologies that finally make large-scale processing of sign languages technically possible.

Hakan Elbir, founder of Dialogue Hub. Photograph: Jill Mead/The Guardian

Increases in research funding, better datasets and greater involvement of Deaf researchers have also accelerated momentum, as has widespread acknowledgment of the long-standing gap between the access Deaf people are legally entitled to and what is delivered in practice: reliable sign language provision has been promised for decades but has often gone unfulfilled.

This combination of opportunity and risk makes the current moment a double-edged sword, Ward said.

“This is incredibly exciting, and the next five years could bring real improvements,” she said. “But there is a danger that private companies respond by focusing on profit rather than working with and leading the deaf community.”

Dr. Maartje de Meulder, a deaf scholar and consultant on sign language AI, agreed that the risk is great.

“At the moment, deaf people are excluded from a huge amount of online information, from educational videos to government websites,” she said. “No one will have the resources to translate the entire Internet into sign languages, so even partial solutions can be transformative.”

Neil Fox, a deaf research fellow at the University of Birmingham, agreed that if avatar translation reaches sufficient quality, it could open up many online spaces that are currently closed to deaf users.

But caution runs deep. Rebecca Mansell, chief executive of the British Deaf Association, said it “has become a very lucrative area and a lot of projects only tokenistically involve deaf people”.

“This is all happening very fast and there is a real risk that solutions will be imposed on us,” she said.

Mansell also raised concerns about regulation and fair use. “An avatar might be fine for ordering something simple,” she said, “but what about a cancer diagnosis? In schools, a human interpreter is often a deaf child’s only friend.”

Dr Lewis Hickman of the Minderoo Centre for Technology and Democracy, lead author of the BSL Is Not for Sale report, has worked in AI ethics for a decade.

“Many companies claim they can solve these problems without understanding the linguistic and cultural complexity of BSL,” he said. “Current avatar systems still lack the nuance of human interpreters, which creates risks in medical and legal settings.”

Hickman also pointed to the limitations of the available data. “British Sign Language is not the same as Irish Sign Language or American Sign Language. There are regional dialects within England. This means that the data available for training AI systems is extremely limited.”

So he asked, where would the appropriate training data come from?

“The Deaf community wants innovation,” he said, “but we want to take it slow so we can shape it and make sure it really benefits us.”

Back at the café, its founder Hakan Elbir felt little need for more complex tools than his static BSL video menu.

“People talk a lot about innovation, but for most deaf people it’s still theoretical,” he said. “What I wanted was meaningful everyday conversation between deaf and hearing people.”

“Coffee is just an excuse,” he said. “I didn’t need complicated technology to break down barriers. I just needed to talk to people openly.”

While waiting for her latte at the counter, Hartwell quietly practised the sign for “flat white” – evidence that it was simple, human interactions, supported but not overshadowed by technology, that were drawing her back, one signed coffee order at a time.
