The silent exclusion: When technology speaks every language but ours


Technology loves to promise miracles. For people like me, it is often marketed as a great equaliser: an assurance that with the right device, disability is not a barrier. Access is just a download away, we are told.
But growing up, I realised that this flashy narrative, wrapped in lofty guarantees, was merely a mirage — out of reach for me.
For a family like mine, buying tech was expensive. My mother worked tirelessly to purchase devices she hoped would level the field for my visually impaired brother and me. But even when she managed to get them, a less visible barrier persisted: hardly any technology could speak our tongue.
A profound disconnect
While I could listen to English books through screen readers, reading or writing in Urdu — the language of home, of emotion, and of identity — remained a distant dream. I longed to read novels on my own, to write my own stories, to revise schoolwork without leaning on someone else’s eyes.
That dream slipped furthest out of reach during my O-levels, when I was barred from sitting the Urdu exam simply because no accessible technology existed to support the process.
Urdu is compulsory for university admissions in Pakistan, and, like many visually impaired students, I found myself locked out of a gateway to higher education. Not because I lacked talent, but because the tools did not exist.
Today, decades later, much has improved, and for that I am grateful. But the truth is that progress has been slow, inconsistent, and unfairly distributed. My research with visually impaired communities across Pakistan shows that the very barriers I encountered then persist to this day.
Many participants described a profound disconnect from technology, particularly those without English proficiency. We often assume that access to a smartphone equals access to the world. For some of us, that assumption remains painfully untrue.
Across Pakistan, Punjabi, Pashto, Sindhi, Balochi, Saraiki, and dozens of other languages are spoken every day. The country is home to at least 80 languages and nearly 27 million people with disabilities, many of whom depend on technology for education, employment, and basic communication.
When the tools they rely on cannot understand their language, the consequence is not mere inconvenience. It is exclusion: systemic, silent, and on a massive scale.
The myth of inclusive technology
So what does that look like day in and day out? The accessibility audits conducted in the Accessibility, Language, and Tech for the People (ALT) project — a major European initiative dedicated to building a robust European ecosystem for Language Technologies and AI models — provide a clear picture.
Start with screen readers. On Windows, the commonly used eSpeak Urdu TTS engine performs so poorly that even the simplest tasks become laborious. Several participants noted that the voice was so robotic and unclear that reading long-form content became extremely difficult.
On Android, the picture is slightly better: Google’s speech services provide clearer Urdu pronunciation. But typing remains an obstacle, because the system does not read individual Urdu letters aloud, so users cannot confirm what they have written.
On iPhones, things get even more complicated. There is no built-in Urdu TTS at all.
This leaves visually impaired users with no option but to change system settings manually every time they want a message read in Urdu or, worse, to rely on clumsy third-party tools. One participant put it starkly: writing even a single sentence in Urdu with the available synthesisers was “a terrible experience” and felt nothing like the smooth, accessible typing they were used to in English.
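For readers who build these systems, the gap is easy to see in code. The sketch below is a minimal illustration, not code from the ALT audits: it uses Android’s standard TextToSpeech API to ask whether the device has any Urdu voice installed (the class name UrduTtsCheck and its callback are my own, hypothetical additions).

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech
import java.util.Locale

// Hypothetical helper (not from the ALT audits): asks the device's TTS engine
// whether it can speak Urdu at all, and reports the answer via a callback.
class UrduTtsCheck(context: Context, private val onResult: (Boolean) -> Unit) :
    TextToSpeech.OnInitListener {

    private val tts = TextToSpeech(context, this)

    override fun onInit(status: Int) {
        if (status != TextToSpeech.SUCCESS) {
            onResult(false)   // no TTS engine could be initialised at all
            return
        }
        // "ur"/"PK" is the standard locale code for Urdu as spoken in Pakistan.
        val support = tts.isLanguageAvailable(Locale("ur", "PK"))
        onResult(
            support == TextToSpeech.LANG_AVAILABLE ||
                support == TextToSpeech.LANG_COUNTRY_AVAILABLE ||
                support == TextToSpeech.LANG_COUNTRY_VAR_AVAILABLE
        )
        tts.shutdown()
    }
}
```

On a phone without an Urdu voice pack, the callback simply fires with false, and every workaround described above becomes the user’s problem rather than the platform’s.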
Beyond language itself, application accessibility is equally inconsistent. People often assume that because an app works perfectly for sighted users, it must work for everyone. But the audits show a very different reality.
Daraz, one of Pakistan’s most widely used shopping platforms, remains deeply problematic. Product images lack descriptions. Buttons are unlabelled. Tab navigation is inconsistent. Product details are sometimes impossible to access without help. Checkout processes fail to provide feedback, leaving users unsure whether an action has been completed. Several testers described having to rely on sighted assistance or apps like “Be My Eyes” just to finish an order.
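To be clear about what “unlabelled” means in practice: a screen reader can only announce what developers attach to a control. The snippet below is a hypothetical Android/Kotlin sketch (the view names are invented, and this is not Daraz’s code); it shows how small the fix usually is.

```kotlin
import android.view.View
import android.widget.ImageButton
import android.widget.ImageView

// Hypothetical views, not Daraz's actual code: what "unlabelled" means to
// TalkBack, and how little it takes to fix.
fun labelCheckoutControls(buyButton: ImageButton, productPhoto: ImageView, divider: ImageView) {
    // With no contentDescription, TalkBack announces an icon-only control
    // simply as "button", so a blind user cannot tell that it places the order.
    buyButton.contentDescription = "Place order"

    // Product images need a description too, ideally a localised string
    // resource (including Urdu) rather than a hard-coded literal.
    productPhoto.contentDescription = "Blue cotton kurta, size medium"

    // Purely decorative graphics should be hidden from the screen reader
    // so they do not add noise while the user swipes through the page.
    divider.importantForAccessibility = View.IMPORTANT_FOR_ACCESSIBILITY_NO
}
```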
Ride-hailing apps mirror these challenges. InDrive is relatively better, but nearly all such apps fail in one crucial area: live location tracking. For visually impaired passengers, this isn’t a bonus feature; it’s essential for safety and independent travel. Yango, for instance, makes even entering a location difficult because of inconsistent labelling. Bykea has similar navigation issues.
Food delivery apps, on the other hand, are a mixed bag. Foodpanda is usable for many tasks, but placing an order or chatting with a rider can become difficult due to unlabelled options or delayed keyboard feedback. Some testers could search for restaurants easily, only to find themselves blocked at the final step because the “place order” button wouldn’t register with their screen reader.
Banking apps, perhaps the most essential tools of modern life, show the widest variation of all.
SadaPay is frequently praised for its smooth and intuitive accessibility. But Easypaisa and JazzCash — two financial lifelines for millions — can be nearly impossible to navigate. Users described crowded interfaces, unlabelled buttons and processes so convoluted that beginners “give up out of frustration and demotivation”.
Think about what that means: in 2025, millions of blind or visually impaired users still cannot send money, order food, track a ride, or buy groceries without assistance. We often talk about accessibility as if it’s a checklist — add alt text here, label a button there, job done.
But accessibility isn’t a feature. It’s a commitment. A philosophy. A willingness to build with the most marginalised users in mind, not as an afterthought but as a starting point. Right now, the digital world is being shaped around English-speaking, sighted, neurotypical users. Everyone else is left to catch up.
Redefining accessibility
Participating in the ALT project helped me push back against some of the marginalisation I grew up with. The project examines how language politics, disability and technology intersect in South Asia. Though the region’s countries share deep histories and cultures, we found that barriers vary, and that solutions must be community-driven, co-created and grounded in the lived realities of disabled people.
Time and again, participants expressed a desire to separate themselves from colonial linguistic hierarchies. They don’t want to centre English; they want to centre their languages — to read the news in Pashto, to type in Punjabi, to browse a website in Sindhi, to write poetry, read novels and order food in Urdu without fighting their phone every step of the way.
And yet, big tech companies have been slow to respond. Apple still doesn’t offer Urdu TTS support. Android has made improvements, but it remains far from ideal. Community whispers about “something new coming soon” have circulated for years, only to fade away as researchers hit barriers, lose funding or abandon the project. The cycle repeats.
Pakistan deserves better. South Asia deserves better. Every language deserves digital visibility, not only the global ones. And most of all, every disabled person deserves the dignity of independent access.
Accessibility, in its current form, has too narrow a definition. It must widen linguistically, culturally and technologically. Because when technology fails people who need it most, it isn’t just a technical issue — it’s a question of justice, dignity and belonging.
The promise of technology as a saviour isn’t entirely wrong. But it will remain a myth until it serves us in the languages we live in, speak in, and dream in.



