
User experience without intentional limitations creates chaos

Simo Herold · Published in UX Collective · 9 min read · Nov 3, 2024

a man standing and wearing a coat in a desert storm
Image by cottonbro studio on Pexels.

Mad Max is an Australian film franchise following a police officer in a dystopian wasteland where society has collapsed and the government can no longer protect its citizens. The main character, Max, is wary of others, constantly weighing whether to help or go his own way. While we are not living on the brink of civilizational collapse, a bad user experience can make it feel that way. Applying intentional limitations to the user experience can help reduce bad behavior.

Constraints in design are often technical, resource-based, legal, or time-based limitations that play a big role in shaping products. Beyond maximizing profits, Corporate Social Responsibility (CSR) has been an integral part of company initiatives for decades: businesses adopt strategies to make a positive impact on the world and take responsibility for the society in which they operate.

These responsibilities are often categorized as environmental, ethical, philanthropic, and economic. CSR can be summarized as the three Ps: profit, people, and planet. Product user responsibility refers to the duties of the person who uses the product, but what about product provider responsibility?

Technology companies are addressing cyberbullying and working to better protect their users with tools, guides, and reporting features, but more is still needed.

User and provider responsibilities

When a business is already forging meaningful relationships with customers, is aware of the constraints around design and development, complies with the law, runs solid CSR initiatives, and designs with the user in mind, are there other duties and responsibilities the product team should consider? Employee well-being is often discussed, but what about user well-being?

UX designers walk in the shoes of the personas they build for, researching users' motivations and behavior, but are they also intentionally protecting and supporting the users' best interests? Yes, but beyond accessibility, problem-solving, ease of use, and enjoyability, there are more invisible factors that shape the experience: the duty and responsibility to design and develop for the user's well-being.

While we can't know the real motivations behind someone's actions with 100% certainty, the provider can still strive to design in a way that protects the user against possible harm, whether physical, mental, financial, or otherwise.

4 different profile images of a man, generated with Adobe Firefly
These 4 different profile images were created with Adobe Firefly 3. Image by the author.

Invisible constraints

If we intentionally apply a constraint to the user experience in favor of the user, would it be perceived as negative or positive? A limitation tends to have a negative connotation, but that is not always the case. Adding a constraint on the kind of images a user can upload as their profile picture can sound like a limiting factor, but it only feels that way if we do not explain why.

AI has advanced rapidly in recent years, which is great, but it also draws attention to how we build for security, prevent fraud, and design the experience around the content we interact with. This is not only about detecting illicit content or flagging certain words to protect the community, but also about determining whether something was created by AI.

However, using AI-generated content as a profile picture is neither unethical nor illegal, and neither is detecting such content and blocking its use, so why would it matter?

Implementing such a limitation may not affect the typical Spotify user listening to music, but it can make a difference on a platform like LinkedIn, where strangers often interact and exchange sensitive data, such as sharing a CV in the hope of employment. Context matters, especially when the user's data is at play.

A LinkedIn post and comment dialogue using AI-generated images
A re-creation of a LinkedIn post/comment dialogue. The profile images were created with Adobe Firefly 3, except the author’s profile image. None of the content is real. Image by the author.

AI detecting AI becomes harder as the technology evolves. Online platforms do have trust and safety measures in place, such as verified identities, and those measures can make users feel more confident when interacting on the platform. However, they are also easy to bypass.

Reported fraud losses in the US topped $10 billion in 2023, and imposter scams were among the most commonly reported categories; digital tools and platforms are making them even easier to run. Data protection acts help users keep their data private and their activity untracked, but what about protection around interaction with a product?

The duty of the business is to prevent wrongdoing as much as possible. One big challenge is how and when to approach these kinds of initiatives. Should one wait until there is a lot of negative feedback, or is this similar to designing for edge cases? Could it simply be part of aiming to create the best possible experience?

A combination of manual and automated checks can help tackle AI misuse through AI authentication methods, such as those the Information Technology Industry Council (ITIC) defines under labeling:

  • Watermarks: embedding a visible or invisible signal carrying information in text or an image, letting the user know the content was made with AI. Steganography, for example, hides information inside the least significant bits of a media file (see the sketch after the figure below).
  • Provenance tracking and metadata authentication: tracing the history, modifications, and quality of a dataset. Content provenance, or Content Credentials, binds provenance information to the media at creation and at each alteration.
  • Human validation: having people verify whether or not content was created by AI.
Content Credentials concept explained
Content Credentials by the Coalition for Content Provenance and Authenticity (C2PA). Source: ITIC. Image recreated by the author.
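To make the watermarking technique concrete, here is a minimal sketch of least-significant-bit (LSB) steganography in Python, assuming the Pillow imaging library is installed. The `MARKER` label and function names are illustrative only; production watermarking schemes are far more robust and tamper-resistant than this.

```python
# A minimal LSB steganography sketch, assuming Pillow (pip install pillow).
# The embedded MARKER is a hypothetical label, not any standard's format.
from PIL import Image

MARKER = "AI-GENERATED"

def embed_lsb(in_path: str, out_path: str, message: str = MARKER) -> None:
    """Hide `message` in the least significant bit of each red channel value."""
    img = Image.open(in_path).convert("RGB")
    pixels = list(img.getdata())
    # Length-prefix the payload so the decoder knows how much to read.
    payload = len(message).to_bytes(2, "big") + message.encode("utf-8")
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    for i, bit in enumerate(bits):
        r, g, b = pixels[i]
        pixels[i] = ((r & ~1) | bit, g, b)  # overwrite the red LSB only
    img.putdata(pixels)
    img.save(out_path, "PNG")  # lossless format, so the hidden bits survive

def extract_lsb(path: str) -> str:
    """Read the hidden message back out of the red channel LSBs."""
    pixels = list(Image.open(path).convert("RGB").getdata())
    bits = [r & 1 for r, _, _ in pixels]

    def read_bytes(start_bit: int, count: int) -> bytes:
        out = bytearray()
        for j in range(count):
            byte = 0
            for bit in bits[start_bit + j * 8 : start_bit + (j + 1) * 8]:
                byte = (byte << 1) | bit
            out.append(byte)
        return bytes(out)

    length = int.from_bytes(read_bytes(0, 2), "big")
    return read_bytes(16, length).decode("utf-8")
```

A real watermark would need to survive resizing, re-encoding, and screenshots, which plain LSB embedding does not; that is why the ITIC pairs labeling with provenance tracking and human review.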

A single technique may not be enough to authenticate AI-generated content, and AI authentication is still developing, but informing users when content is AI-generated is good policy for promoting consumer transparency.

three people looking at each other suspiciously
Image by cottonbro studio on Pexels

I was once contacted by a CEO looking for a designer to create a mobile app design for their real-estate project. The LinkedIn profile had a legitimate-looking photo, name, and bio, 25k followers, and a premium account badge. However, the profile had no activity and no visible number of connections.

The company name was listed on his profile, but the company page was not accessible on LinkedIn. Researching on Google, I found the company had been making millions in annual revenue a few years before being delisted, yet according to LinkedIn, the company had two employees whose profiles shared the same convincing characteristics.

When I visited the same profile three weeks after the contact, the follower count had dropped to 20k and the company information had been removed. Information can seem legitimate at first glance, which makes digital literacy an essential part of using the Internet.

Spotify playlist page view
For comparison, this is a screenshot of a playlist on Spotify (2024), using the same generative AI image as the user profile in the LinkedIn example. The importance of the information displayed on profiles is very different.

Digital literacy and the responsibility of the product

Digital literacy varies across ages and educational backgrounds. A 2021 Stanford study found that fewer than 0.1% of high-school students tested in 2019 could tell an original voting video from a fake one, and policies on digital literacy education vary across geographies. Digital literacy is not only for the young: senior citizens are also partaking in the digital transformation, creating a very diverse user group across multiple life stages.

Older adults often do not learn or use new digital skills because younger family members handle technology for them, because they are reluctant to engage with social media, or because they have not fully accepted technology as part of daily life.

Not everyone is learning, or has learned, digital literacy. Is it the product or service provider's responsibility to educate users, and potential users, on how to make the best of their product?

The flag of the European Union in front of a city view
Image by Vicente Viana Martinez on Pexels.

According to a Eurobarometer survey, 72% of users want to know how their data is processed on social media platforms, and 63% of EU citizens and residents want a secure single digital ID for all online services. With the EU Digital Identity Wallet, the holder can verify their identity without revealing their full identity or other personal details, such as their date of birth.

example user flow for applying for a bank loan with the digital ID wallet
User flow of applying for a bank loan with the digital ID wallet. The user selects the required documents and submits them electronically to the bank. Source: The European Commission. Image by the author.

This ID gives the holder control over the details shared with third parties. The wallet initiative is currently being tested in real-life scenarios and is designed to be more secure and user-friendly than a password.
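To illustrate the selective-disclosure idea, here is a hypothetical sketch in Python. It is not the wallet's actual API; real wallets attest claims cryptographically rather than handing over a dictionary, and every claim name below is made up.

```python
# Hypothetical selective disclosure, loosely in the spirit of the
# EU Digital Identity Wallet; not the wallet's real API or data model.
wallet_claims = {
    "name": "Jane Doe",
    "date_of_birth": "1990-05-01",
    "over_18": True,  # a derived claim, attested without the birth date
    "address": "Example Street 1, Helsinki",
}

def present(requested: list[str]) -> dict:
    """Share only the claims a verifier asked for, and nothing more."""
    return {k: wallet_claims[k] for k in requested if k in wallet_claims}

# A verifier checking age receives the derived claim, not the birth date:
print(present(["over_18"]))  # -> {'over_18': True}
```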

Spam filters are another example of intentional constraints on the experience. Emails about inheriting millions might be a thing of the past, yet it is still hard to determine for certain whether the person you are talking with online is real, made up, both, or neither.

Product and service providers are often the experts in their field, so it should be part of their duty and responsibility to design, develop, and educate for what is best for the user.

Defined constraints

In most modern cars, a constant chime sounds if a seatbelt is unfastened while the vehicle is moving; it is there to make sure you buckle up and avoid fatal injuries. Phones sold in Japan are required to make a shutter sound when taking a picture to alert people nearby, an industry-wide safeguard against nonconsensual photography.

Defined constraints aim to prevent a problem or limit the usage of features in favor of the user and the people around them.

Defined limitations are often applied to product plans, such as the differences between a Design seat and a Viewer role in Figma. Limiting design privileges is sometimes more beneficial and secure than giving everyone full access.
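As a sketch of how such role-based limits work, here is a minimal Python example; the role names echo Figma's seat types, but the permission sets are illustrative assumptions, not Figma's actual model.

```python
# A minimal role-based permission check; roles echo Figma's seat types,
# but the permission sets here are illustrative assumptions.
PERMISSIONS = {
    "design_seat": {"view", "comment", "edit"},
    "viewer": {"view", "comment"},
}

def can(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the action."""
    return action in PERMISSIONS.get(role, set())

assert can("viewer", "view")
assert not can("viewer", "edit")  # the constraint in action
```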

Security constraints kick in when a user types their password wrong multiple times: they are locked out for an interval before they can try again, or are asked to complete 2-step verification.
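A minimal sketch of such a lockout rule might look like the following; the thresholds and the in-memory store are assumptions, as a real system would persist attempts in a database or cache.

```python
# A minimal login-lockout sketch; MAX_ATTEMPTS and LOCKOUT_SECONDS are
# illustrative, and the in-memory dict stands in for a persistent store.
import time

MAX_ATTEMPTS = 5           # failed tries allowed inside the window
LOCKOUT_SECONDS = 15 * 60  # how long failures count against the user

_failures: dict[str, list[float]] = {}  # username -> failure timestamps

def is_locked_out(username: str) -> bool:
    now = time.time()
    recent = [t for t in _failures.get(username, []) if now - t < LOCKOUT_SECONDS]
    _failures[username] = recent  # drop expired failures
    return len(recent) >= MAX_ATTEMPTS

def record_failure(username: str) -> None:
    _failures.setdefault(username, []).append(time.time())

def record_success(username: str) -> None:
    _failures.pop(username, None)  # a correct password resets the counter
```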

Product plan types and password constraints are not imposed externally, whereas the shutter-sound requirement on mobile phones in Japan is. Constraints of the first kind are entirely the responsibility of the business.

Mobile phone with a password input field
Image by the author.

The UK has become the first country to introduce password-security requirements in law, under the Product Security and Telecommunications Infrastructure (PSTI) Act. The law requires every new smart device sold by a manufacturer, importer, or distributor to have a password set at start-up, and bans passwords that are default or too weak, such as "password" or "admin". This helps consumers protect their data and well-being.
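A check in the spirit of that rule could look like the sketch below; the banned list and thresholds are illustrative assumptions, not the Act's actual text.

```python
# A default/weak password check in the spirit of the PSTI Act;
# the banned set and rules here are illustrative, not the law's wording.
COMMON_DEFAULTS = {"password", "admin", "12345678", "letmein", "default"}

def is_acceptable_password(candidate: str) -> bool:
    if candidate.lower() in COMMON_DEFAULTS:
        return False  # reject well-known default passwords outright
    if len(candidate) < 8:
        return False  # reject passwords that are too short
    # Require at least two character classes for minimal complexity.
    classes = sum([
        any(c.islower() for c in candidate),
        any(c.isupper() for c in candidate),
        any(c.isdigit() for c in candidate),
        any(not c.isalnum() for c in candidate),
    ])
    return classes >= 2
```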

Furthermore, providers are required to be more transparent about security updates and vulnerabilities. Companies that fail to comply can face fines of up to $12.5 million or 4% of their global revenue, as well as product recalls.

Besides security and safety, there are also defined limitations on experience. Financial tools, such as leveraged stock trading, often limit services based on the user's experience level. This is not meant to restrict the user's freedom, but to protect them from major financial losses due to inexperience, as defined by the product.
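A trading platform might encode that protection roughly as below; the tiers and leverage caps are illustrative assumptions, not any real broker's policy.

```python
# Experience-gated leverage caps; tier names and multipliers are
# illustrative assumptions, not a real broker's policy.
LEVERAGE_CAPS = {
    "beginner": 1.0,       # no leverage until experience is demonstrated
    "intermediate": 2.0,
    "advanced": 5.0,
}

def max_position(account_balance: float, experience_level: str) -> float:
    """Largest position the user may open, given their experience tier."""
    cap = LEVERAGE_CAPS.get(experience_level, 1.0)  # unknown tier -> safest cap
    return account_balance * cap

print(max_position(10_000.0, "beginner"))  # -> 10000.0, not 50000.0
```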

User well-being

Intentional constraints do not mean restricting the user's freedom; rather, they help, support, and guide users toward what's best for them. Limitations are also part of a company or product strategy.

Applying limitations may not solve a user problem directly, but it can help to avoid one.

Corporate social responsibility has been making an impact for a long time, and as businesses become more location-independent, online well-being may become an even greater responsibility in the future.

Dystopia or utopia: whatever the situation, we make the best of it.


Written by Simo Herold, a Finnish designer based in Tokyo, Japan.
