When you work in technology and ask a random sample of people whether they normally feel very confident, they’ll say “no”. Asked like that, confidence is always seen as a social attribute, synonymous with extroversion.
If you asked the same group of people whether their products were widely accessible, you’d often get comments about how well they work with screen readers and adhere to accessibility guidelines. Good! That’s what I’d expect.
When you deal with conversational agents – be it the familiar Alexa or Google Home, or any of the other voice interfaces appearing in our devices – those comments change: there’s no need to worry about screen readers, there’s no screen! Just talk to it and it will sort all that out (most of the time).
And for those with difficulty speaking? People are working on that too.
This is all fabulous work – it’s great that more and more people are understanding this kind of accessibility is important.
But that’s not the whole story. I did, though, want to make it clear that when we talk about accessibility in technology, it’s normally a very specific interpretation. And until recently I was in that group too – specifically, the “voice is awesome, it’s even more accessible” camp.
Now, for those reading this who don’t know me very well (not many, I imagine, but I can dream): I attend a lot of the meetups organised in the local tech community here in Nottingham, one of which is Women in Technology, Nottingham, run by Emma Seward and Helen Joy. It’s through this that I follow Helen’s blog, and last month she posted an article on Permissive UX Design (I thoroughly recommend reading the article and following the blog – an insightful and engaging view of the work we do and our industry). That article has had my noggin turning over now and again since I first read it, and there’s a section in it entitled “Confidence and digital exclusion” which I keep coming back to, because it challenged my conception of inclusivity and accessibility in regard to the technology I love working with. That bothered me because, simply put, I couldn’t think of a suitable answer.
You talk to Alexa, so I can’t do much about the knowledge or conversation required to reach my skill in the first place – though the teams at Amazon and Google are working on (and already have) functionality to recommend a skill or action when someone says the right thing. So it’s possible to get to my skill without much confidence or knowledge. But after that it’s on me. So what might they say?
This is a scenario I’m familiar with and have worked on before, so it will serve as my example. Someone with an interest in technology but low confidence wants to attend a meetup – possibly to help build that confidence – and they’re in my skill. How can I change what I’ve written to allow for that?
Disclaimer: these might seem like exaggerated or obvious examples. That’s fine – they’re examples – chosen to make the point about how it can be done.
The first issue is recognising that a request may come from someone with lower confidence in the technology they’re dealing with. In some cases I can’t know exactly what the user said, and even if I could, it’s such a small amount of text that identifying mood through automated means would be difficult.
So I go back to the concept of Permissive UX. Because the medium here is voice, we can rely on the same distinction a person would make in a low-confidence social situation, where their thought process is still “this makes me a little uncomfortable, I’m not sure I’m doing this right”.
The language changes from:
“Are there spaces left?”
“Can I go to this meetup?”
to something less confident, more permissive:
“Am I allowed to go?”
“Is it alright if I attend?”
“Is there space for me?”
And this is something we can definitely deal with. You create two intents in your voice conversation: the first is “wantsToAttend” and the second is “wantsToAttendLowConfidence”. You rely on your UX team to help identify which phrase goes where, but you have a clear and easy way to maintain that identification.
So we now know that this is a low confidence intent that we’ve received from the user. But that doesn’t change the request they’ve made at a fundamental level – the intent is still going to perform the same actions and so is highly maintainable.
Where the low confidence version of the intent does make a difference is in the response to that request.
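To make this concrete, here’s a minimal sketch of the idea. The intent names are the ones above, but the table and function names are my own invention, not any particular framework’s API – the point is simply that both intents resolve to the same underlying action, with only a tone flag differing:

```python
# Hypothetical routing table: both intents trigger the same action,
# so the fulfilment logic is written once. Only the tone differs.
INTENT_TABLE = {
    "wantsToAttend":              ("check_attendance", "neutral"),
    "wantsToAttendLowConfidence": ("check_attendance", "encouraging"),
}

def route(intent_name):
    """Return the (action, tone) pair for a recognised intent."""
    return INTENT_TABLE[intent_name]
```

Because the action is identical, any change to the underlying behaviour happens in one place; the tone flag is only consulted when the response is worded.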
So we’ve checked the API and found there’s space. Say that, by default, your intent returns a polite but generic response:
“Yes, there’s ten spaces left in the meetup, can I help you with anything else?”
If you know you’re dealing with the low confidence intent, you may want to reinforce a positive and encouraging tone:
“That’s not a problem at all, everyone is welcome at the meetup and there’s still ten spaces left. Is there anything else you’d like to ask?”
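Sketched in code – a hypothetical helper of my own, not a real framework call – the response builder consumes the same data either way, and only the wording changes:

```python
def attendance_response(spaces, low_confidence):
    """Word the same answer differently depending on which intent fired."""
    if low_confidence:
        # Encouraging tone for the low confidence intent.
        return ("That's not a problem at all, everyone is welcome at the "
                f"meetup and there's still {spaces} spaces left. "
                "Is there anything else you'd like to ask?")
    # Polite but generic default.
    return (f"Yes, there's {spaces} spaces left in the meetup, "
            "can I help you with anything else?")
```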
The important point is that we gave immediate, positive feedback that they had interacted with the technology correctly, and encouraged further communication.
This is an important thing to remember when dealing with “unhappy” journeys, the worst of which is when the user says something we couldn’t figure out at all.
You don’t want them to immediately lose that confidence, so make sure the mistake is shifted onto the skill – so they don’t fall back into an “I’m rubbish at tech” thought process – and use permissive language back at them to reinforce that they’re still totally in control.
“We weren’t quite sure how to answer that particular question, but we will try and improve how we answer in the future. For now, would it be okay if we gave some examples of questions that we’re sure we can help with?”
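A fallback handler along these lines – again, the function and its argument are my own sketch, under the assumption that the platform routes unmatched utterances to a catch-all – keeps the blame on the skill and immediately offers a way forward:

```python
def fallback_response(example_questions):
    """Catch-all for utterances the skill couldn't match to an intent.
    The skill takes the blame, then asks permission to offer examples."""
    examples = ", or ".join('"%s"' % q for q in example_questions)
    return ("We weren't quite sure how to answer that particular question, "
            "but we will try and improve how we answer in the future. "
            "For now, would it be okay if we gave some examples of questions "
            "that we're sure we can help with? You could try " + examples + ".")
```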
Ideally your development team (people like me) shouldn’t be writing content. There are plenty of ways to ensure that people with a more user-centric, empathy-rich mindset generate these sorts of replies.
The last thing is to try and enrich the response with as much information as possible, giving the user a better result from the interaction without making them feel like you’re just talking at them.
This can be achieved with the rich media that most conversational agents have access to, placing the information behind straightforward questions – simple interactions that, when completed, will increase your users’ confidence in the technology and your brand.
One example: at the local Tech Nottingham meetup, the organisers arrange to meet newcomers beforehand. You could tweak the earlier low confidence response about being able to attend so that it asks about this, and then place the details on the user’s device – somewhere they’ll feel more comfortable navigating, and which leaves them in control of how they use it.
“That’s not a problem at all, everyone is welcome at the meetup and there’s still ten spaces left. If you’ve not been before we regularly meet attendees beforehand, we can place the details in your Alexa app if that’s of interest?”
“No problem at all. That will be on your phone in a few moments, and we hope to see you at an event soon. Is there anything else we can help you with?”
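As a sketch of how those details end up in the companion app: an Alexa custom skill’s JSON response can carry a card object alongside the spoken output. The helper below is my own, but the response shape follows the Alexa Skills Kit custom skill response format, using a Simple card:

```python
def response_with_card(speech, card_title, card_text):
    """Speak the reply and push a Simple card to the user's Alexa app."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "card": {
                "type": "Simple",       # plain text card in the Alexa app
                "title": card_title,
                "content": card_text,
            },
            "shouldEndSession": False,  # keep the conversation open
        },
    }
```

The speech carries the encouraging tone; the card holds the newcomers’ meet-up details for the user to read back in their own time.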
So that’s basically the thought process I went through as I started to come up with an answer to the original question: how to handle requests from users who may have low technical confidence.
I hope it’s made you think about the kinds of cases we can handle as an industry, and shown that confidence and empowerment can be passed on to your users through small, maintainable changes – and by making sure the right people are responsible for the voice your brand speaks with.