Reflecting on my time in a Master's program for Interaction Design, I recall a pivotal debate: "Should designers know how to code?" I was tasked with arguing against it, ironically, given how foundational a basic grasp of coding later proved to be in my own UX practice. That debate underscored a fundamental truth: design isn't just about aesthetics; it's about understanding the systems that shape user experiences. Today, the same principle applies to AI. As designers, we can't afford to treat AI as a black box; we must engage with its mechanics to create experiences that are not only functional but also ethical, inclusive, and truly user-centric.
Every day, artificial intelligence shapes the digital experiences of billions of users—from personalized recommendations to automated customer service. According to Gartner [1], by 2025, over 70% of enterprises will have operationalized AI architectures. This shift towards AI-driven design isn't just trendy—it's driven by compelling business outcomes. Companies are increasingly turning to AI for design decisions because it can process vast amounts of user behavior data, identify patterns humans might miss, and adapt interfaces in real-time based on user interactions. McKinsey's research [2] shows that companies that fully absorb AI tools across their workflows and business functions can potentially double their cash flow by 2030.
However, despite these efficiencies, AI lacks the human intuition, ethical reasoning, and emotional intelligence needed to craft truly meaningful and inclusive experiences. Designers provide the critical human perspective—ensuring that AI-driven decisions align with user needs, cultural nuances, and ethical considerations that algorithms alone might overlook.
Yet, as AI’s influence grows, many UX designers are building experiences around technologies they don’t fully understand. This gap is no longer sustainable. To design AI-powered products that are responsible, inclusive, and user-centric, we must go beyond the interface and understand the systems driving them. Otherwise, we risk designing experiences around a “black box,” unable to anticipate its consequences—or control its impact.

The stakes: Why AI literacy matters for UX
When designers lack understanding of AI fundamentals, seemingly minor design decisions can cascade into significant user problems. Research from Microsoft [3] shows that designing appropriate user interfaces for AI systems remains one of the biggest challenges in creating effective human-AI interactions. Consider Netflix's recommendation system evolution, documented in their technical publications [4, 5]. Their initial recommendation approach relied heavily on explicit user ratings, which proved limiting as users often rated what they thought they should like rather than what they actually enjoyed watching.
After enhancing their UX with an AI-informed approach, Netflix implemented a more nuanced preference system that incorporated viewing history, browsing patterns, and time-of-day context. This redesign, created by designers who understood the algorithm's capabilities and limitations, significantly improved content discovery and reduced browsing time. As Gomez-Uribe and Hunt from Netflix explain, their recommendation system "enables us to get the right content in front of our members at the right time" [5]. It is a clear example of how UX designers with strong AI literacy can create interfaces that translate user inputs into algorithm-ready data, measurably improving the experience.
The implications extend far beyond entertainment. AI now powers critical systems in healthcare, finance, and public services. According to research published in the Journal of the American Medical Informatics Association [6], effective design of AI interfaces in healthcare settings is critical for preventing misinterpretation of AI recommendations. Poor design choices in these contexts can reinforce biases, exclude vulnerable users, or even cause direct harm. UX designers serve as the bridge between user needs and technical implementation—we have both the opportunity and responsibility to shape how AI systems interact with humans.
Essential AI concepts for UX designers
Rather than viewing AI as a mystifying black box, designers should understand these key principles that directly impact user experience:
Data quality and collection
The AI systems we design are only as good as the data they learn from. Research from Google [7] has demonstrated that thoughtfully designed data collection interfaces are crucial for producing effective AI systems. This means:
Creating intuitive interfaces for data collection that encourage accurate input: e.g., Airbnb's calendar selection tool that automatically highlights available dates and grays out unavailable ones, reducing errors compared to manual date entry. As documented in a case study by Nielsen Norman Group [8], such thoughtful design patterns can significantly improve data accuracy.
Designing transparent consent mechanisms: e.g., Clear, contextual explanations of how personal data will be used at the moment of collection. Research by Cranor [9] shows that transparent, user-friendly consent mechanisms increase both user trust and willingness to share accurate information.
Implementing progressive disclosure: e.g., Prompt new mobile app users to fill in personal details at relevant steps, such as inputting their location when selecting shipping preferences, rather than front-loading profile creation as an onboarding task. This approach has been shown to increase completion rates while improving data quality [10].
Building inclusive data collection methods: e.g., Voice recognition systems that account for different accents and speech patterns. Microsoft's inclusive design guidelines [11] emphasize the importance of collecting diverse data sets to ensure AI systems work for all users.
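The progressive-disclosure pattern described above can be sketched in a few lines of code: the interface requests only the profile fields that are relevant at the current step and not yet collected, instead of front-loading everything during onboarding. The step names, field names, and mapping below are illustrative assumptions for a hypothetical checkout flow, not a prescription:

```typescript
// Hypothetical sketch: request profile fields only at the step where they
// become relevant, rather than during a monolithic onboarding form.
type Step = "browse" | "shipping" | "payment" | "review";

// Which fields each step actually needs (illustrative mapping).
const fieldsByStep: Record<Step, string[]> = {
  browse: [],
  shipping: ["name", "address", "location"],
  payment: ["cardNumber", "billingAddress"],
  review: [],
};

// Return only the fields that are relevant now and not yet collected,
// so each prompt stays short and contextual.
function fieldsToRequest(step: Step, profile: Record<string, string>): string[] {
  return fieldsByStep[step].filter((field) => !(field in profile));
}
```

A user who has already given their name at the shipping step would only be asked for their address and location, keeping each interruption small and clearly motivated by the task at hand.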
Framework for responsible AI-driven design
To create ethical AI experiences, designers should follow these principles:
Transparency by design
- Make AI presence and capabilities clear to users: Explicitly indicate when AI is being used and what it can and cannot do
- Explain how AI uses their data: Provide clear, accessible explanations of data usage and decision-making processes
- Provide visibility into AI decisions: Show users the key factors that influenced AI recommendations or actions
- Create feedback mechanisms: Design easy ways for users to report AI mistakes or unexpected behaviors
User control and agency
- Allow users to adjust AI behavior: Create intuitive controls for users to influence how the AI system works for them
- Provide meaningful opt-out options: Give users clear choices about AI feature usage without degrading core functionality
- Design override mechanisms: Allow users to easily correct or override AI decisions when needed
- Enable understanding and challenge: Provide ways for users to question and understand AI decisions
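The override principle above has a simple mechanical core: the user's correction always wins over the AI's output, and every correction is logged so it can feed back into the system. The interface and field names below are hypothetical, meant only to show the shape of the pattern:

```typescript
// Hypothetical sketch: user overrides take precedence over AI output,
// and every override is recorded as feedback for the system.
interface Decision {
  value: string;
  source: "ai" | "user";
}

const overrideLog: { field: string; aiValue: string; userValue: string }[] = [];

function resolveDecision(field: string, aiValue: string, userValue?: string): Decision {
  if (userValue !== undefined && userValue !== aiValue) {
    // Log the correction; this is the feedback loop the principles call for.
    overrideLog.push({ field, aiValue, userValue });
    return { value: userValue, source: "user" };
  }
  return { value: aiValue, source: "ai" };
}
```

Surfacing the `source` field in the UI also serves transparency: users can see at a glance whether a value came from the system or from their own correction.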
Inclusive data practices
- Design for diverse user groups: Ensure interfaces work well for users with different abilities, backgrounds, and needs
- Create accessible feedback mechanisms: Make it easy for all users to provide input and corrections
- Build bias safeguards: Implement checks and balances to prevent and identify potential biases
- Regular diverse testing: Continuously test with varied user groups to ensure inclusive experiences
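One concrete form a bias safeguard can take is a routine check that compares positive-outcome rates across user groups and flags the system when the gap exceeds a threshold. This is a minimal sketch, not a substitute for a proper fairness audit; the group labels and the 0.2 threshold are assumptions for illustration:

```typescript
// Hypothetical sketch: flag the system when the positive-outcome rate
// for the best-served group diverges too far from the worst-served one.
interface Outcome { group: string; positive: boolean; }

function flagDisparity(outcomes: Outcome[], maxGap = 0.2): boolean {
  const totals = new Map<string, { pos: number; all: number }>();
  for (const o of outcomes) {
    const t = totals.get(o.group) ?? { pos: 0, all: 0 };
    t.all += 1;
    if (o.positive) t.pos += 1;
    totals.set(o.group, t);
  }
  // Compare per-group rates of positive outcomes.
  const rates = [...totals.values()].map((t) => t.pos / t.all);
  return Math.max(...rates) - Math.min(...rates) > maxGap;
}
```

Even a check this crude, run regularly against real usage data, turns "build bias safeguards" from a slogan into something a team can act on.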

Practical implementation
Here's how these principles translate into concrete design practices:
Onboarding and data collection
- Multiple selection and custom inputs: Allow users to express complex preferences that better reflect their needs
- Choice influence transparency: Clearly show how user selections affect their experience
- Progressive profiling: Gather user preferences over time rather than all at once
- Clear privacy controls: Give users granular control over their data usage
Research from the University of Minnesota [12] found that recommender systems that provide users with more control over their data and preferences lead to higher user satisfaction and engagement levels.
AI-human interaction
When designing chatbots or AI assistants:
- Clearly indicate AI system capabilities and limitations
- Provide seamless escalation to human support
- Design for graceful failure when AI reaches its limits
- Include feedback mechanisms for improving AI responses
Microsoft's guidelines for conversational AI [3] emphasize the importance of setting clear expectations about AI capabilities and providing smooth handoffs to human agents when needed.
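The escalation logic behind "graceful failure" and "seamless handoff" can be sketched concretely: hand the conversation to a human when the assistant's confidence drops below a threshold, or when several turns in a row have failed to resolve the user's problem. The thresholds and the `Turn` shape below are illustrative assumptions:

```typescript
// Hypothetical sketch: escalate to a human agent on low confidence
// or after repeated unresolved turns.
interface Turn { confidence: number; resolved: boolean; }

function shouldEscalate(history: Turn[], minConfidence = 0.5, maxFailures = 2): boolean {
  const last = history[history.length - 1];
  if (last !== undefined && last.confidence < minConfidence) return true;
  // Count consecutive unresolved turns from the end of the conversation.
  let failures = 0;
  for (let i = history.length - 1; i >= 0 && !history[i].resolved; i--) failures++;
  return failures >= maxFailures;
}
```

The design decision worth noting is that escalation is triggered by the conversation's trajectory, not by a single bad answer, so users are not bounced to a human the moment the model hesitates, but are never trapped in a failing loop either.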
Recommendation systems
Personalized experiences demand a delicate balance between algorithmic precision and user autonomy. Effective AI-driven recommendation systems should display confidence levels alongside their suggestions, helping users understand the reliability of each recommendation. By explaining the rationale behind AI decisions, we empower users to make informed choices while building trust in the system. Furthermore, allowing users to adjust algorithm parameters gives them agency over their experience. Perhaps most importantly, recommendation systems should break free from the echo chamber effect by presenting diverse options beyond primary recommendations, encouraging discovery and serendipity.
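Two of the ideas above, confidence labels and breaking the echo chamber, can be sketched together: translate raw scores into plain-language labels, and reserve one recommendation slot for the strongest item from a category the user has not yet been shown. The label wording, thresholds, and re-ranking rule are hypothetical choices for illustration:

```typescript
// Hypothetical sketch: label recommendation confidence in plain language,
// and inject one item from an unrepresented category against echo chambers.
interface Rec { title: string; category: string; confidence: number; }

function label(confidence: number): string {
  return confidence >= 0.75 ? "strong match" : confidence >= 0.5 ? "good match" : "worth exploring";
}

function diversify(recs: Rec[], slots: number): Rec[] {
  const sorted = [...recs].sort((a, b) => b.confidence - a.confidence);
  const picks = sorted.slice(0, slots - 1);
  const seen = new Set(picks.map((r) => r.category));
  // Fill the last slot with the best item from a category not yet shown.
  const wildcard = sorted.find((r) => !seen.has(r.category));
  return wildcard ? [...picks, wildcard] : sorted.slice(0, slots);
}
```

Pairing a "worth exploring" label with the wildcard slot also sets honest expectations: the system admits this suggestion is a deliberate stretch rather than presenting every item with the same implied certainty.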
The path forward
As AI continues to evolve, UX designers must position themselves as advocates for responsible innovation. This evolution requires a fundamental shift in how designers approach their craft. Rather than viewing AI as a separate technical component, designers need to actively immerse themselves in understanding AI capabilities and limitations. This knowledge enables more meaningful collaboration with data scientists and engineers, creating a shared language that bridges the gap between technical possibilities and user needs.
Thorough user research becomes even more critical in AI-driven experiences, as it reveals not just usability issues but also potential biases and ethical concerns. By incorporating ethical considerations into the design process from the outset, designers can help shape AI systems that respect user privacy, promote fairness, and maintain transparency. This proactive approach to ethical design isn't just about avoiding harm—it's about creating AI experiences that actively benefit users and society.
The future of digital experiences will be increasingly AI-driven. By understanding and thoughtfully designing these systems, we can ensure they enhance rather than diminish human experience. The challenge for UX designers isn't just to make AI interfaces usable—it's to make them trustworthy, inclusive, and genuinely beneficial for all users. As we navigate this complex landscape, our success will be measured not by the sophistication of our AI systems, but by how well they serve and empower the humans who use them.
How Hypersolid can contribute to meaningful change
At Hypersolid, we view the intersection of technology and creativity as a powerful space for innovation. Our team of data and AI experts works alongside designers to explore how AI can drive impactful solutions across a business, as we've done with clients like Lotus. Beyond the technical side, we design meaningful brand experiences, such as Polestar's D2C commerce website, which is unlike any other automotive brand's and seamlessly blends a rich brand experience with conversion. We don't believe in the traditional advertising model; instead, we deliver differentiated, targeted campaigns for brands like IMC, Under Armour, and Heineken, leveraging AI both to optimize assets and to surface fast, deep consumer insights that drive creativity.
As AI continues to evolve, we look forward to collaborating with businesses to address specific challenges and explore new possibilities. An initial engagement lets us establish our integrated way of working with internal IT and marketing teams and start demonstrating impact. Our experience with other clients shows that this is usually the start of a longer partnership in which we help internal organizations become more ready for disruption, speed up experimentation, and build a solid technical and brand foundation.
References
1. Gartner. (2022). "Gartner Top Strategic Technology Trends for 2023." Gartner Research.
2. McKinsey & Company. (2023). "The economic potential of generative AI: The next productivity frontier." McKinsey Global Institute.
3. Horvitz, E. (2019). "Human-AI Partnership in Decision Making." Microsoft Research Blog.
4. Basilico, J., & Raimond, Y. (2017). "Recommending for the World." Netflix Technology Blog.
5. Gomez-Uribe, C. A., & Hunt, N. (2016). "The Netflix Recommender System: Algorithms, Business Value, and Innovation." ACM Transactions on Management Information Systems (TMIS), 6(4), 1-19.
6. Sendak, M. P., Gao, M., Brajer, N., & Balu, S. (2020). "Presenting machine learning model information to clinical end users with model facts labels." NPJ Digital Medicine, 3(1), 1-4.
7. Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., ... & Horvitz, E. (2019). "Guidelines for human-AI interaction." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1-13.
8. Nielsen Norman Group. (2021). "UX Guidelines for AI Products." Retrieved from nngroup.com
9. Cranor, L. F. (2012). "Necessary but not sufficient: Standardized mechanisms for privacy notice and choice." Journal of Telecommunications and High Technology Law, 10, 273.
10. Wroblewski, L. (2008). "Web Form Design: Filling in the Blanks." Rosenfeld Media.
11. Microsoft Design. (2022). "Inclusive Design Methodology." Microsoft Design Toolkit.
12. Konstan, J. A., & Riedl, J. (2012). "Recommender systems: from algorithms to user experience." User Modeling and User-Adapted Interaction, 22(1), 101-123.
13. IEEE. (2022). "IEEE Standard for Transparency of Autonomous Systems." IEEE Std 7001-2021.