A Blind Perspective: What Is It Really Like with Chatbots?
Chatbots are increasingly placed on the front line of customer support. You can find them in online stores, on bank websites, courier services, and many other platforms. They reduce costs and shorten the path to products and services. You can ask them questions. And, if needed—sometimes—they will pass you on to a human.
So I decided to check whether they would help me as well.
The result? See for yourselves.
Step one: finding the chat window
In theory, it’s simple. Visually, the chat window is usually waving cheerfully from some corner of the screen, easy to spot. With a screen reader? Also possible. It just takes longer.
The chat tends to hide inside a frame with a not-so-informative name like “Frame 0”, or behind a button with an unlabeled icon. But fine—there it is. Located.
I click the button and… nothing. What’s going on? Oh. It’s hidden from the screen reader. So I’m not getting in at all.
And if someone *did* think about accessibility, the chat might open in a new frame with an equally meaningful title like “Frame 1”. Or, for variety, in a new browser tab.
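The fixes for these first hurdles are small. A minimal sketch of what the markup could look like (the labels, titles, and URL below are illustrative, not taken from any particular chat widget):

```html
<!-- A toggle button a screen reader can actually find and activate:
     not hidden from assistive technology, and with a descriptive
     accessible name instead of a bare, unlabeled icon -->
<button type="button" aria-label="Open customer support chat" aria-expanded="false">
  <svg aria-hidden="true" width="24" height="24" viewBox="0 0 24 24">
    <circle cx="12" cy="12" r="10" />
  </svg>
</button>

<!-- If the chat lives in a frame, give it a meaningful title
     so it announces as "Customer support chat", not "Frame 0" -->
<iframe title="Customer support chat" src="/chat"></iframe>
```

The `aria-expanded` attribute also tells a screen-reader user whether activating the button opened anything at all, instead of leaving them to guess.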
Alright. I’ll take it. Success. Chat found.
Step two: writing a message
This part is the easiest. I can still type. At least I think I can.
The only question is: **where** do I type? I’ve had situations where, for reasons unknown, the screen reader didn’t announce what I was typing in the message field. Fortunately, this happens more often in mobile apps than on desktop.
This time, it worked. Message sent. Even if the Enter key didn’t do anything, the button next to the text field—or that unlabeled icon—was apparently meant for sending. Great.
Let’s see what happens next. I didn’t even ask a complicated question. That wasn’t the point of this test.
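Both problems from this step, the silent message field and the mystery send button, come down to basic labeling. A hedged sketch (field and button names are my own, for illustration):

```html
<form action="#" method="post">
  <!-- A programmatic label, so the screen reader announces the field
     and echoes what is being typed into it -->
  <label for="chat-message">Your message</label>
  <input type="text" id="chat-message" name="message" autocomplete="off">

  <!-- A genuine submit button: pressing Enter in the field submits the
     form, and the visible text "Send" doubles as the accessible name,
     so it is never just an anonymous icon -->
  <button type="submit">Send</button>
</form>
```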
Step three: the chatbot’s response
So I wait for a moment. You never know how much traffic they have or how long the chatbot needs to respond. After a minute or two, my patience runs out.
I start browsing the conversation window and… well, there *is* a response. I think. I just haven’t reached it yet.
At this stage, I usually run into a few recurring obstacles:
- Right after pressing Enter to send a message, the focus jumps to the conversation window. That wouldn’t be a problem – except it always lands at the very beginning of the conversation. Every single time.
- The conversation itself is often complete structural and communicative chaos. Messages are not presented in any predictable way. There’s no clear information about who said what and when. If it were at least consistent – date and time, sender, content – I could work with that. But sometimes even that is missing.
- To make things more confusing, in some browsers every piece of text is announced as “clickable”. Why? There’s no link there. It’s just text. Am I supposed to activate it? Copy it? Guess.
- I often don’t know whether the response comes from a chatbot or from a human. That information is conveyed only visually – through an avatar or a small photo – with no textual alternative.
- Status information is frequently missing as well. I don’t know whether the bot is typing or has already responded. The only exception is when the focus suddenly jumps back to the start of the conversation again. Then I know *something* happened. Needless to say, this doesn’t bring me joy – especially when the conversation is longer than three sentences. How many times can you reread the same content just to get back to where you were?
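Most of these obstacles map onto well-known patterns. A minimal sketch of a message log and status area that would avoid them (element names, timestamps, and wording are illustrative):

```html
<!-- The log as a list: a screen reader can jump message by message,
     and each entry states sender and time as text, so attribution
     never depends on recognizing an avatar image -->
<ul aria-label="Conversation">
  <li>
    <p>Support bot · <time datetime="2024-09-19T10:02">10:02</time></p>
    <p>Hello! How can I help you?</p>
  </li>
</ul>

<!-- A polite live region: "typing" indicators and new replies are
     announced automatically, without yanking keyboard focus back
     to the top of the conversation -->
<div aria-live="polite">Bot is typing…</div>
```

With `aria-live="polite"`, updates are read out when the user pauses, so there is no need to move focus into the log at all, let alone to its very beginning.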
Emojis
Chatbots are designed to be friendly, cheerful, and helpful. Emojis play a big role in that. On the surface, there’s nothing wrong with it – everyone uses them.
There are, however, two problems. First, different operating systems interpret emojis differently for screen readers. Second, when custom graphics are used without alternative text, their meaning is completely lost to me.
It helps that I know English. As long as the asset name is something like “smiling_face” instead of “Adgo395D32Gha369”, I can usually figure it out. But other blind users in Poland might not be able to rely on English file names to fill in the gaps.
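The contrast between the two cases can be shown in two lines (the file path and alt text below are made up for illustration):

```html
<!-- Unicode emoji: screen readers announce it from their own
     dictionary, though the wording varies between platforms -->
<p>Happy to help! 😊</p>

<!-- Custom emoji graphic: without alt text its meaning is lost;
     the alternative should be written in the page's language,
     not left to be guessed from an English file name -->
<img src="emoji/smiling_face.png" alt="smiling face">
```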
Will the chatbot answer my question?
The shortest answer is: **it depends**. If I asked an online cookware store about a non-stick frying pan, or a courier company about the status of my parcel, it would probably work. These are questions that most users ask.
Things would likely go much worse if I wanted information that’s less typical—details not included in the description but visible in product photos, for example.
In summary
Working with chatbots is not as straightforward as it might seem. The issues I’ve described are not unique to chatbots either. Missing grouping, unpredictable order, and information conveyed only visually – these are topics that keep coming up in discussions about digital accessibility.
So, do I like using chatbots? The answer is not clear-cut. I’ve encountered solutions designed with different users in mind – and those were genuinely pleasant to use. Unfortunately, many others still feel like a battle.
Is there an alternative? Of course. Typing a query into a search engine. Or asking an AI model. I use the latter more and more often. And more often than not, I’m surprised by just how accurate product recommendations from large language models have become.
Barbara Filipowska
Accessibility Auditor