At Furthermore, we speak to clients and customers a lot about trust: the perceived trustworthiness of brands and of the products and services they offer. Creating effective experiences often relies on whether they appear trustworthy to the person experiencing them, but trust itself rests on a complex mix of emotions and sensory appeal that varies from person to person.
As individuals, we’re judged at lightning speed by others - the way we sound, look or behave influences their reactions before either party has even realised it. Unconscious bias is involved in racial and gender stereotyping from childhood, and our brains scan strangers largely unconsciously, forming judgements within seconds. In this BBC Future article, Melissa Hogenboom discusses how certain accents appear more trustworthy than others.
How does this translate to the work we do as design practitioners? In an age of fake news, deepfakes, project fear, racial profiling, online scamming and privacy concerns, how is trust perceived online? If we’re hard-wired to judge people for trustworthiness in a few seconds, what factors are at play when we engage with a digital interface? And how might this change as technology and computing power advance?
Take the images we see every day online. Kalev Leetaru writes about our blind trust in the imagery and videos we see online and the loss of provenance on today’s web. It seems we have justifiable cause for concern, as published information carries little context for its placement or evidence of its legitimacy. In Lloyds Bank’s 2019 Digital Index report, a quarter of the UK population were unable to assess the trustworthiness of digital content. Nor is this a digital-only issue: information is, alarmingly, being used deliberately to subvert democracy and to confuse and control public opinion.
The most convincing deepfake examples now show us how deep learning technology could be open to abuse; Joseph Foley highlights eight examples of what has become possible as deepfakes have developed. Spotting the telltale signs of an untrustworthy source in today’s fake news era is also becoming more important. Rob Verger writes about MIT’s use of artificial intelligence to spot non-genuine content online, noting that Google has built an AI system which automatically scores the toxicity of reader comments, and that Facebook has turned to AI in its efforts to keep hate speech at bay in Myanmar. But of course even AI can get it wrong: Jake Silberg and James Manyika of McKinsey discuss how minimising the bias of our AI systems will be crucial to their success, and how systems are only as good as the data they are fed.
In a recent user research session of our own, participants repeatedly told us that trust and security were big concerns throughout their experience online. Their concerns with the services we were testing ranged from dark imagery feeling ‘seedy’, to websites with adverts and pop-ups appearing untrustworthy, to sites without a clear proposition feeling confusing and so adding to their distrust. In most cases their concerns were justified, but of course just because something appears trustworthy doesn’t mean it actually is.
It’s also not only individuals who are concerned about trust online. Google’s recent search algorithm updates included a change it calls ‘E-A-T’: organisations must demonstrate ‘expertise’, ‘authoritativeness’ and ‘trustworthiness’ on their websites if they want Google to rank them favourably. In practice, site ownership must be obvious, authors must be experts, sites must adhere to certain levels of security and all content must be credible. Initiatives like this from Google are no surprise when you read that only 21% of visitors feel very safe on retail sites, and 64% of consumers are concerned that their credit card or financial information will be compromised within the next year (IDG Research Services, 2016).
Just like meeting someone for the first time, we make a judgement on the things we see online in a few seconds - but is that sufficient? When browsing news sources, for example, should we spend more time understanding the type of publication or organisation behind the stories before we commit to reading one? With teenagers spending up to six hours a day or more on screens, we should all be mindful of what we trust and how we go about determining validity as we interact with media and brands online. It goes without saying that as designers and makers we must put more emphasis on conveying trust, purpose and meaning in as many ways as possible, from clear brand messaging and the display of an organisation’s values through to consistency of design, reliability, and features that are accessible and inclusive. But of course looking trustworthy and actually being trustworthy are two completely different things.
Take Amazon, for example. It has a recognisable brand, everything on the site works, and it all seems straightforward enough - but according to the Wall Street Journal, Amazon’s own search system has been tweaked to feature more prominently the product listings that are more profitable for the company, namely Amazon’s own products. When you consider that almost two-thirds of all product clicks on Amazon come from the first page of results, you can start to see how the company’s need for profit will always trump its loyalty to your needs and its desire to provide a truly honest shopping experience.
Or take TripAdvisor, the go-to recommendation destination for a night out or for someone on their travels, used by 490 million travellers each month to decide where to visit and where to avoid. Again, the site seems trustworthy - but are the reviews, arguably one of the core features of the site, to be trusted? Charles Goodall bought a derelict pub in 2015; it hadn’t been in business since 2011, yet people were posting reviews of it on TripAdvisor between 2014 and 2016. Impossible, right?
So if big brands like TripAdvisor and Amazon are unable to walk the walk by ensuring their systems, as well as their slick shop-fronts, are genuinely transparent and trustworthy, then consumers will forever view these services - and, by association, other similar services - through a lens of scepticism and confusion. What hope have we got of up-skilling the digitally disadvantaged, or of making the web a more inclusive place, if the organisations running the show are tripping people up either deliberately or by mistake, through their own incompetence?
It seems, for now at least, that it is difficult to know when - or how far - to trust an organisation online. We believe in adopting a customer-centric approach throughout the entire online and offline experience. We start with a set of values that everyone in an organisation can get behind, then ensure they are consistently adhered to and demonstrated across every touch-point with customers.
Are you an organisation looking to reinforce trustworthiness across your service offering? Send us a message - we would love to help!
This little insight was brought to you by Steve Johnson, Managing Partner at Furthermore.
Furthermore are a multi-platform digital product and service design studio based in London. We have one mission: to create innovative digital products that stand out in the landscape, are beautiful, purposeful and a delight for the user. Hot on user experience and user research, we believe good ideas can come at any point in a project, so we utilise agile methodologies. Hypotheses are always tested using prototypes and real users, with improvements being constantly fed back into our user experience and visual designs.
Have an idea you want to discuss? Call us for a free consultation
Get in touch with the team to discuss your idea, project or business.