Why data is the user interface of the future

And machine learning has its limits

Take a moment to think about all the devices through which you interact with your machines: keyboards, mice, touchscreens, microphones, AR or VR goggles, or that brain implant you just ordered from Neuralink. All of these rely heavily on data and machine learning to translate physical inputs into machine instructions. This might seem trite (what's new?), but cutting the one-to-one connection between human input and machine output and replacing it with machine learning approximations has some important consequences.

The insight dawned on me when I attended the 2020 ACM RecSys conference virtually in September of that year. Sitting at the intersection of users, algorithms and data, recommender systems are to a large extent about removing the explicit instruction set from human-machine interactions. To achieve this, they rely on behavioural data from users (implicit feedback), which they combine with deliberate user actions (explicit feedback) and use as input for machine learning models. Through the guesses and approximations made by these models, recommender systems can in theory automate part of the cognitive load of interacting with machines. In doing so, they allow users to interact with machines through behavioural programming: changing the behaviour of the system (the outcomes of its computations) by interacting with it in a certain way. For low-stakes, repetitive tasks, behavioural programming has a lot of potential to improve the quality of user outcomes while reducing the cognitive and physical effort involved.
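To make the distinction concrete, here is a minimal sketch of how implicit and explicit feedback might be blended into a single training signal for a recommender model. The field names and weights are illustrative assumptions for this example, not a description of any real system.

```python
# Minimal illustrative sketch: blending implicit behavioural signals with
# explicit user actions into one preference score a recommender could train on.
# Field names and weights are assumptions made for this example.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Interaction:
    watch_fraction: float    # implicit: share of the item actually consumed (0..1)
    clicked: bool            # implicit: did the user open the item?
    liked: bool              # explicit: did the user actively like it?
    rating: Optional[float]  # explicit: optional 1-5 star rating


def preference_score(x: Interaction) -> float:
    """Blend implicit and explicit feedback into a single training label in [0, 1]."""
    score = 0.4 * x.watch_fraction + 0.1 * float(x.clicked)
    if x.liked:
        score += 0.2
    if x.rating is not None:
        score += 0.3 * (x.rating / 5.0)  # normalise stars to 0..1
    return min(score, 1.0)


# A user who watched 80% of an item and liked it, but never rated it explicitly.
print(preference_score(Interaction(watch_fraction=0.8, clicked=True, liked=True, rating=None)))
```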

Users of an unnamed digital product, ca 2020 CE (image credit: The Mannequin Gallery)

Systems built for behavioural programming learn mainly from system usage data, which serves as a proxy for user preferences. Since the context in which users make decisions is often decoupled from the context in which the machine learning model is applied, applications that learn from data can be very complex to manage. Non-deterministic outcomes are no longer limited to buggy code. Emergent behaviour has become part of the design and intentionality of these systems. And that can be problematic, because they take their behavioural cues from humans. As anyone who has worked on recommender systems or chatbots on the free-ranging Internet can attest, human behaviour can get pretty ugly pretty fast when it is unburdened by social control. Besides behavioural inhibitions, cognition also seems to take a hard left when non-verbal cues aren't available. Even generally positive signals such as physical beauty and personal charm are vulnerable to intentional abuse at the massive scale these platforms enable.

Non-deterministic outcomes are no longer limited to buggy code.

And this is by design. As Sinan Aral points out in his 2020 book The Hype Machine, some of the most widely used recommender systems take their behavioural cues, their explicit feedback, mainly from likes and shares. Amplified across trillions and trillions of signals, this has led to some pretty strange emergent behaviour. Sinan does a great job of discussing the consequences of this particular design choice in his book, as does the 2020 Netflix documentary The Social Dilemma. The focus on likes and shares seems to have been driven not by UX considerations but by profit maximisation: it ties recommender system optimisation directly to business metrics around user growth and engagement, rather than to metrics that measure user experience. This also makes it very hard for the designers and engineers working on these systems to create meaningful user experiences with data. It is a design failure that emanates from the boardrooms down, and it is especially jarring at Facebook: the failure to design a coherent value proposition for social media services, which has resulted in a focus on short- to mid-term KPIs and OKRs. Yes, of course we want products to succeed commercially and our contributions to the business to be profitable, but that should not come at the cost of a lasting negative impact on UX, your user base, or society as a whole.

Online users consuming viral content (cf David Hirshleifer / @4misceldah)

While there are laws against environmental pollution that make it illegal to dump toxic waste on public property, there are somehow still no laws against dumping toxic content on the general public. As a result, some of the brightest computer scientists and machine learning researchers out there are working to optimise dark patterns. Compare, for example, the recommendations provided by Spotify or Netflix, both subscription services, to those on YouTube. It is easy to see the vast difference in quality, relevance and user experience that emerges from these different business models. This has nothing to do with the quality of the people working on these systems, and everything to do with the corporate incentives and optimisation targets driving the teams behind them. One possible fix could be regulation that forces social media services serving over 15% of a national population, which at that point constitute vital communications infrastructure, to pivot from an advertisement-driven to a subscription-based business model. This would leave enough room for new social media platforms to build critical mass, while the incentives and targets for behaviour change at the larger social media companies are redefined. Add to that laws on verifiable digital identities to combat the bot problem, tighter controls on consumer data ownership and monetisation, the cross-platform consumer data portability that Sinan advocates, and better licensing for user-generated content, and perhaps we might end up with social media that is both fun to use and has a net positive impact on society as a whole.

It seems strange, in hindsight of course, that a business model born out of necessity on television is now used to commercialise our interactions with friends and family in a landscape where distribution costs are fixed and bandwidth is (almost) free. Advertising is a great way to promote products and services to potential customers, but it should not invade and seek to shape our social interactions, behaviour and communities. Its presence should be limited to the digital environments where advertising actually makes sense: digital marketplaces, product search, online stores and the like.

Once these business model and organisational design issues are resolved, companies could start building digital services that strike a better balance between consumer and business value. That doesn't mean fixing the business model will automatically lead to great products; from a UX design perspective, there is still a lot to do. Two conflicting objectives often guide the development of digital services. Productivity-enhancing services such as search tend to be optimised as a minimisation problem: the less time it takes a user to complete a task, the better the system performs. At the other end of the spectrum, products that create meaningful connections, content or interactions, serving the relaxation objective, are optimised using metrics formulated as a maximisation problem: time spent, number of interactions, number of connections, and so on. In larger organisations this often leads to a situation in which different teams working on a single product have different objectives, product goals and performance metrics. Since the quality of the system usage data directly impacts the quality of the machine learning products built on that data, and since context is key in setting the stage for data-driven interactions, a large part of building a data-driven digital product is about setting the stage for the algorithms to operate in. That means aligning the teams working on the data products around a single vision and mission of how the product creates value for its users and impacts their daily lives or work environment.
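To make the contrast concrete, the sketch below shows how the same user session can score well under one objective and poorly under the other. The formulas and weights are illustrative assumptions, not the metrics any particular company uses.

```python
# Illustrative sketch of the two competing optimisation targets described above.
# The formulas and the 0.5 weight are assumptions made for this example.

def productivity_objective(seconds_to_complete_task: float) -> float:
    """Search-like products: the less time a task takes, the better (minimisation)."""
    return -seconds_to_complete_task  # maximising the negative means minimising time


def engagement_objective(session_minutes: float, interactions: int) -> float:
    """Feed-like products: more time and more interactions is better (maximisation)."""
    return session_minutes + 0.5 * interactions


# The same five-minute session looks like a failure to a search team
# and a success to a feed team.
session = {"seconds_to_complete_task": 300.0, "session_minutes": 5.0, "interactions": 12}
print(productivity_objective(session["seconds_to_complete_task"]))                # -300.0
print(engagement_objective(session["session_minutes"], session["interactions"]))  # 11.0
```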

Corporate-centric UX design (image credit: The Verge)

A case in point is the Facebook homepage shown above. The recommendations presented on it appear to be pulling users in at least three different directions. In contrast, the image below shows that the team that designed TikTok opted for a design in which content takes centre stage. This has led to a user interface that is aesthetically pleasing, simple and intuitive. Its core strength, however, comes from the fact that this simplicity extends to the implicit and explicit signals the algorithms teams work with. The usage data on which TikTok's content recommendation algorithms are trained comes from the same context in which the recommendations are served. This makes building a well-performing recommendation engine much simpler: the recommendations no longer need to compete with hundreds of other signals presented to the user, and the team gets to work with much cleaner measurements of user preferences. The problem of attributing user behaviour to generated recommendations disappears, and the algorithms team can spend all its time improving the quality of the content recommendations. The result is a product that is both fun to use and delivers on the implicit promise that machine learning would help automate low-stakes decisions to make our lives more enjoyable and interesting.
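As a hedged illustration of why attribution becomes trivial in a single-item, full-screen feed: every behavioural signal maps onto exactly one recommendation, so each event can be turned directly into a per-item preference label. The field names and bonus values below are hypothetical.

```python
# Hedged illustration of the attribution point above: in a full-screen,
# one-item-at-a-time feed, every signal belongs to exactly one recommendation.
# Field names and the 0.5 bonuses are hypothetical.

events = [
    {"item_id": "video_123", "watch_fraction": 0.95, "rewatched": True,  "shared": False},
    {"item_id": "video_456", "watch_fraction": 0.10, "rewatched": False, "shared": False},
]


def implicit_label(event: dict) -> float:
    """Turn raw watch behaviour into a per-item preference label in [0, 1]."""
    label = event["watch_fraction"]
    if event["rewatched"]:
        label += 0.5
    if event["shared"]:
        label += 0.5
    return min(label, 1.0)


# One event, one item, one label: no need to guess which of several
# on-screen modules the user was actually responding to.
print([(e["item_id"], implicit_label(e)) for e in events])  # [('video_123', 1.0), ('video_456', 0.1)]
```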

Content-centric UX design (image credit: Fintory)

There is a catch, though. As machine learning practitioners will be aware, statistical models are reductive in nature. Their ability to represent reality by reducing it to a grid of parameters is exactly what makes them so powerful and elegant. However, once these mathematical representations become part of the human world, human definitions of leisure and productivity start to shift through perspectives generated with data, math and optimisation algorithms. These models actively distort human reality, and the feedback loops built into these systems perpetuate those tiny distortions ad infinitum (or at least until the user moves on to a different platform or service). This happens both by choice, as when a new song recommended by Spotify brightens up your morning, and in a sort of perpetual failure mode (again, YouTube). I am not trying to diminish the engineering and algorithm design headaches that come with massive amounts of user-generated content of unknown quality arriving in near-real time, but I do think that recommendations at both YouTube and Facebook suffer largely from flaws inherent in their recommender system design and business model, rather than from technical issues. And the behaviour changes brought about by these systems are steadily seeping into real-life social interactions, reshaping the human world in the process. Fringe movements, emboldened by a version of reality incubated online in a space devoid of the inhibitions that have guided human interactions since forever, are now at war with a world they see as fake; consensus built over hundreds of thousands of years is slowly being replaced with algorithmic perspectives that have helped shape some truly ridiculous alternative takes.

The digital friends in your portable pocket calculator

Even in less visible cases, the implications of the ongoing digitisation of the human world are immense. The more our mental models are shaped by the non-choices of recommender systems, the more we align our real-world experiences and expectations with the world online. While there is of course nothing wrong with applying behavioural data to improve your lifestyle choices, health or wellbeing, any kind of algorithm that messes with our mental models of other people and society in general is a truly dangerous thing. Cue fascism. This raises a question: if data is the user interface of the future, should it be the lens we apply to society? The only way I think this could work is with better consumer protection through data legislation, and with machine learning techniques such as federated learning and differential privacy that try to strike a balance between privacy and mass personalisation. And even then, the goal of a mass personalisation system should never be personalisation itself, because that gives the teams working on the product too little guidance on how to improve its overall value for users.
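For one of the techniques mentioned above, differential privacy, here is a minimal sketch: the textbook Laplace mechanism applied to an aggregate count. It assumes a simple counting query with sensitivity one and is illustrative only, not how any specific platform implements privacy.

```python
# Minimal sketch of differential privacy via the Laplace mechanism, applied to
# a simple counting query. Assumes sensitivity 1 (one user changes the count
# by at most 1); a textbook example, not a production implementation.
import random


def dp_count(did_action: list, epsilon: float) -> float:
    """Release a noisy count of how many users performed an action."""
    true_count = sum(did_action)
    scale = 1.0 / epsilon  # Laplace scale = sensitivity / epsilon
    # A Laplace sample is the difference of two exponential samples with mean `scale`.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise


# Example: 1000 users, roughly 30% clicked; the released count is noisy enough
# that no individual user's click can be inferred from it.
clicks = [random.random() < 0.3 for _ in range(1000)]
print(dp_count(clicks, epsilon=0.5))
```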

The options a recommender system presents to you as choices are a melee of heuristics and the behaviour of strangers, shaped by algorithm design choices that engineering and design teams made to optimise corporate metrics and KPIs. Since the feedback loops that drive personalisation also drive behaviour change and impact human lives across the world, they should fall squarely within the domain of corporate social responsibility. So rather than tying the metrics of teams working on personalised services directly to profit, these metrics, and the implicit and explicit feedback they are built on, should be shaped by the mission and value proposition of the service or product the company offers consumers. Only then should these metrics and proxies be used to optimise digital services with the data and algorithms at the teams' disposal.

Author: Jonas Braadbaart