This article exists as part of the online archive for HuffPost India, which closed in 2020. Some features are no longer enabled. If you have questions or concerns about this article, please contact indiasupport@huffpost.com.

Why Don’t Big Tech Companies Listen To Users, Only User Data?

From Uber to Amazon, big tech companies are constantly gathering user data to better understand what we want, but they don't want to hear what we have to say.

There’s nothing quite like the unexpected, and yet increasingly familiar, account-suspension notification—often for alleged ‘suspicious activity’—to expose the precarity of the users’ contract with web-based service providers. Those who’ve experienced automated break-ups know only too well the frustrations of trying to extract a logical explanation from the other party. More and more, user interface design makes it easy to feed back data to companies on their terms, while disabling the most basic component of communication—person-to-person interaction.

The most visible example of this might be Uber—whose ‘help’ section has no ‘contact us’ option, but does offer the opportunity to give feedback by tapping on ‘change star rating’. At the same time, we’re told that giving a poor rating can affect the driver’s job security, and remuneration, so although you might want to give feedback to Uber as a service, are you willing to affect someone else’s livelihood to do so?


The feedback loop

Whether you’re buying fruit or a fridge, overbearing algorithms are at the ready, hectoring customers for their feedback. In the online marketplace, automated mails, clickable yellow stars, pop-ups, page redirects, comment and check boxes extract the data that joins with yet more data to become big data.

Web-induced feedback culture is plaguing off-screen locations too. Recently, I saw a feedback device installed in the toilet of a multiplex cinema asking patrons ‘how was your experience of our toilets today?’ The contraption, which looked like a child’s toy, displayed a vertical row of plastic buttons divided along a 5-point scale of smiley and not-so-smiley emoticon faces.

HappyOrNot push button feedback devices have taken off in banks, shops and airports around the world, emboldening journalists to claim that smiley faces are ‘changing the way we travel’. The lo-fi smiley-faced gadget that uses green for happy and red for angry is the brainchild of a young Finnish entrepreneur and boasts a reach of more than 4,000 organisations worldwide.

But while offline services take cues from star-ratings employed online, Web and app interfaces have made customer service opaque. Communicating with customer service via the browser can often leave one wondering if it’s a human or chatbot at the other end. Even in cases of human-to-human interaction, responses are so templated that one has to wonder if humans are learning their behaviour from machines, inverting machine learning’s basis in human behaviour.

What happens when you actually face a problem?

Some time ago, despite my conscientiously rating drivers after every trip, Uber abruptly suspended my account without explanation. Confident that I was the accidental victim of a technical glitch and that my account would be reinstated, I mailed its support team. The reply confirmed that a specialists’ review had found the suspension correct and final, and that there was nothing to be done about it.

The team of ‘specialists’ had apparently earned their specialisms in fields other than responding to customer service inquiries. Not missing a window for aggregating driver data, the mail added: ‘If you want to revise your rating given to your driver, you can do that from your email receipt for that trip (bottom right corner).’

Still with no clarity, I received another email the next day from a different customer support representative.

‘It appears that your account has been suspended for activity that violates our Terms and Conditions and will be unavailable for you to use until further notice. We will let you know if we will consider lifting the suspension’. This ‘feedback’, it goes without saying, made me none the wiser. Had I violated all of its terms of agreement, a single one or a combination thereof? Having suffered the indignity of being dumped by Uber, their decision to copy-paste the request to rate my last ride seemed in bad taste.

The response to a subsequent mail added yet another layer of opacity. ‘I am afraid I do not have further info as this is a system generated flag with no manual intervention in the process’, the next customer support representative informed me. They went on to explain that Uber’s security system—operative across 300+ cities—had no way of differentiating between fraudulent and unusual account activity. That the algorithm’s IQ had its limitations was perhaps not surprising. More disconcerting, though, was the conviction with which Uber’s representative disposed of the value of human intervention.

“Had I violated all of its terms of agreement, a single one or a combination thereof? Having suffered the indignity of being dumped by Uber, their decision to copy-paste the request to rate my last ride seemed in bad taste.”

He counseled me that there was no way for the system to establish whether I had engaged in fraudulent activity or had simply swiped my screen at the wrong time and place, and that ‘frankly, there really is no way for anyone else to distinguish the two either’.

Already discombobulated by the deflections of the mounting mails, with their alternating roster of authors each in turn backed up by a ‘team of specialists’, I was advised that, whatever I had done, and regardless of whether I had in fact done it, the system had no method of cross-verification. Neither, I was told, did ‘anyone’ have the ability to interpret whatever it was that had supposedly been done. The nature of what I had performed unbeknownst to myself would remain unknown to the Uber team. That it was unknowable appeared to be the only known fact.

All this unknowing unknown behaviour was moving perilously close to former US Secretary of Defense Donald Rumsfeld’s infamous ‘known unknowns’ and ‘unknown unknowns’. To top it off, Uber’s unsupportive customer support person signed off with: ‘I am afraid I really cannot be of any more help with this than I already have’ and the by now threadbare, copy-pasted request to update driver feedback.

Lack of accountability

There are of course greater and graver injustices, but when everyone else is on Uber, not being able to book a ride feels like a form of exclusion. It also makes apparent a startling lack of accountability on the part of app-based global commercial services. With only one other cab operator of comparable size serving the city and the nearest Metro station 11 km away, the gap between me and standing space on an overcrowded bus in the 35-degree heat had closed by 50%.

“In many cases, support is not forthcoming – customer service representatives provide little more than unsubstantiated allegations of policy violations. One has to ask if there is no applicable law to protect customers from assumed crime and automated punishment.”

The issues aren’t specific to Uber; this is now a common experience across a number of tech platforms. When a friend and Ola customer riding in an ‘Ola Outstation’ in Kerala tried to change the destination of her trip from Munnar to the specific address of her hotel near Munnar, the app, despite offering the option to do so, failed to apply the change. With an aggressive driver on board and without a ‘contact us’ option or customer support number available via the app, she was forced to hit Ola’s Emergency Button to speak to someone.

When it became clear that the support person was also getting nowhere with the driver, and perturbed by his increasing belligerence, she asked him to stop the car so she could get out and terminate the ride. When she explained to the next customer service person that the driver had refused to take her to her hotel and that his intimidating behaviour was making her feel unsafe, she was told that she had violated Ola policy by leaving the cab.


Her concerns around safety and dissatisfaction with paying in full for sub-standard service were met with further disapprobation for not complying with Ola’s policy.

Another colleague’s experience with Ola mirrored my earlier Uber experience. Ola’s customer service people told her that owing to ‘policy violations’ they would be unable to unblock her account. ‘We are sorry for the inconvenience’, they assured her. The conversation then moved to the phone, where it was explained to her that, owing to the automated nature of the system, the block would remain in place even though they were unable to identify the nature of the violation.

This is not an isolated experience. On the evidence of numerous discussion forums, the number of users with similar accounts is rising rapidly. In one recent instance, a customer of Zoomcar was billed Rs 8,000 for damages to the car. When they sent an email asking for details, the bill was instantly reduced to Rs 4,000, but no further information was given. Later, on Twitter, one representative agreed that the damage was minor and waived the charge. When the customer asked why they had been arbitrarily charged Rs 8,000 to begin with, a different representative jumped in (after multiple messages between the customer and Zoomcar) to say, “Thank you for reaching out to us. Please let us know more about your concern to assist you accordingly.”

In many cases, support is not forthcoming – customer service representatives provide little more than unsubstantiated allegations of policy violations. One has to ask if there is no applicable law to protect customers from assumed crime and automated punishment.

In this current devolved state of customer service, it is ironic that ‘Help us serve you better’ is the hyperlinked call to action of choice among app-based and online service providers. Commercial operators ask customers for help in developing their products, all the while telling those customers that they are the ones being helped. Customer feedback is, in this way, a valuable commodity in the data futures market. The customer who asks for help, however, is not seen as equally valuable.
