
Why Generative AI Needs Design


11.03.2023 | By: Steve Vassallo

Mention generative AI in a crowded room, and reactions fall somewhere on a spectrum from worry to wonder. Venture capitalists—particularly those like me who work with founders at the very first stages of company creation—are fond of saying that we’re still in the “early days” of a technology’s life cycle. But with the breakneck advances in AI over the past year, from the launch of ChatGPT last November to the emergence of multimodal models, this statement feels uniquely unexaggerated. The possibilities are equal parts thrilling and disquieting, with so much still uncertain and unbuilt.

Today, most popular generative AI applications rely on an open-ended chat box that reveals little about the underlying system’s immense abilities and limitations. The onus falls on users to express their intent, recall all the relevant context, and evaluate the AI’s answer. Yet, to paraphrase Dieter Rams, good design should reveal a product’s proper use, like a glove that slides effortlessly onto your hand. By taking a human-centered approach, designers can craft intuitive interfaces that enhance usability, foster trust, and allow users to intentionally navigate a model’s vast conceptual interior, rather than aimlessly bumble through it.

With generative AI, computers gain a new set of capabilities: the ability to understand and generate language, code, images, and sounds; learn and represent knowledge; and perform logical reasoning. Together, these serve as a set of building blocks for innovation, all of which promise to become more performant over time. But, when it comes to integrating AI into our lives and professional pursuits, they remain just that: building blocks, not fully formed buildings. Thoughtful, deliberate design and product thinking will be key for humans to harness AI’s full power and ensure that it works in harmony with our values and goals.

Steve Jobs famously likened the computer to a “bicycle for the mind”: a tool that augments and extends the human intellect. Generative AI transforms this bicycle into a supersonic jet. By accelerating toward creative capabilities once deemed exclusively human, this latest wave of AI models compels us to rethink the nature of our relationship with computers. The key question is not “How can AI help me complete this sentence?” but “How can I partner with a novel form of intelligence that writes, sees, hears, speaks, and (increasingly) understands me?”

Along with my friends at Designer Fund, I recently gathered over 50 founders and builders across AI and design to discuss the intersection of these two fields at Foundation Capital’s office in San Francisco. The evening began with two fireside chats—the first between Noah Levin, VP of product design at Figma, and Enrique Allen, co-founder of Designer Fund, followed by my own discussion with Nadim Hossain, VP of product management at Databricks—which opened onto Q&A. From these conversations and the socializing over drinks that followed, four key themes emerged. Let’s unpack each theme in turn.

1. Lowering the Floor, Raising the Ceiling

Generative AI tools like Midjourney, DALL-E 3, and GPT-4 have led many designers to ask: “What does this technology mean for my career?” To put their impact in context, Noah offered a helpful metaphor. On one hand, AI lowers the floor, making design accessible to non-experts. With simple prompts, anyone can translate an idea into a visual prototype that a group can riff on. While imperfect, these AI mockups catalyze the creative process by overcoming the dreaded blank canvas: the first, and perhaps largest, barrier to entry for creative work.

At the same time, AI lifts the ceiling, pushing out the perimeter of what experts can achieve. By automating monotonous, repetitive tasks—nudging boxes, adjusting corner radii, removing backgrounds, and so on—it frees professionals to tackle higher-order challenges, like setting vision and strategy, conceiving systems and frameworks, and crafting end-to-end experiences. By shifting designers’ focus from narrow execution to big-picture conceptualization, AI can help raise their work to the level of human-centered problem-solving—which, as I explore at length in my book The Way to Design, is the core task of design, and the reason why many are drawn to the profession to begin with.

2. From Pixels to Patterns

Designers often fixate on isolated elements, like buttons, text fields, and drop-downs, which they manipulate one by one. By contrast, patterns attend to the user’s full journey through a product or service and consider the connections and transitions between each step. Navigation systems, search interactions, and onboarding flows are all good examples: each combines discrete pixels into intuitive flows that align with the user’s intentions, needs, and goals.

AI can help smooth this transition toward a more holistic approach to design that places the user at its core. Imagine an AI model that’s trained on your company’s design system. Designers could describe their ideas at the conceptual, pattern level, then let the model handle the tedious work of assembling the pixels. This is yet another example of how AI both democratizes and uplevels design, changing both the profile of who designs and the process by which design happens.

3. Beyond the Text Box

Chat interfaces provide a helpful onramp for engaging with AI. They echo the ease and immediacy of human conversation and enable quick turns to address models’ frequent mistakes. Building conversational user interfaces atop large language models (LLMs) is also straightforward: they simply extend the model’s built-in text-completion interface.
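The point that chat UIs merely extend text completion can be made concrete: a conversation is just a transcript flattened into a single prompt string that the model completes. A minimal sketch, where `complete` is a hypothetical stand-in for any LLM completion API (not a real library call):

```python
def complete(prompt: str) -> str:
    # Hypothetical placeholder: a real implementation would call a
    # language model's completion endpoint with this prompt.
    return "Hi there! How can I help?"

def chat_prompt(history: list[tuple[str, str]]) -> str:
    """Flatten a (role, message) history into one completion prompt."""
    lines = [f"{role}: {message}" for role, message in history]
    # Ending with the assistant's role cue asks the model to
    # "complete" the next turn of the conversation.
    lines.append("Assistant:")
    return "\n".join(lines)

history = [("User", "Hello!")]
reply = complete(chat_prompt(history))
```

The entire “chat” abstraction lives in `chat_prompt`: the model itself only ever sees and continues a block of text.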

Yet, while chat allows for rapid, free-form interactions, the blank text box may not be as simple or obvious as it appears. For starters, it provides no affordances to guide effective human-AI interactions. Users may struggle to articulate their intent and ascertain how best to steer the model toward a desired output. What’s more, prompts can only influence models in indirect, opaque ways. It’s akin to operating an incredibly elaborate, sci-fi machine with a binary “on-off” switch. Key elements that shape the model’s responses, such as weights, activations, and temperature, are entirely hidden from the user.
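Temperature, one of the hidden parameters mentioned above, has a simple mathematical form: it rescales the model’s logits before sampling, sharpening the distribution when low and flattening it when high. A minimal sketch of temperature-scaled sampling over a toy vocabulary:

```python
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float) -> str:
    """Sample a token after dividing logits by temperature.

    Low temperature concentrates probability on the top token;
    high temperature spreads it across alternatives.
    """
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]
```

At a very low temperature the gap between logits is magnified, so sampling becomes effectively greedy; raising the temperature lets lower-scoring tokens through. None of this is visible from a chat box.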

Early software tends to expose the most direct interface to a new computing capability. Over time, these interfaces adopt new metaphors that better resonate with how humans think and navigate virtual space. The first personal computers, for example, laid bare the raw power of microprocessors and memory via command-line interfaces. Later evolutions, like Windows and Mac OS, introduced more user-friendly metaphors—desktops, folders, and trash cans—to humanize this power.

While text boxes are certainly a step up from punch cards, they’re much closer to the command line than the modern graphical user interface. Command lines work well for tech enthusiasts and early adopters; less so for the general public. As AI advances, especially toward multimodality, our need for more intuitive, multidimensional interfaces will intensify. The seeming simplicity of chatting with an LLM will become a liability.

Future AI interfaces could collaborate with humans in ways we’ve yet to imagine. The chat box serves as a gateway to information, but it doesn’t help us think. Imagine an interface that guides users through a model’s latent space, highlighting non-obvious connections between concepts and actively participating in ideation rather than simply executing commands. Controls like adjustable sliders, dashboards, and filters could add further nuance, giving users access to the parameters that inform the model’s outputs.

More fluent abstractions for AI capabilities have the potential to unlock massive amounts of utility and joy for everyday users. In the best product experiences, AI will be so seamlessly integrated that we’ll no longer perceive it as “AI” at all.

4. Solving AI’s “Last Mile” Problem

For all their impressive abilities, generative AI models are a far cry from being reliable enough for most real-world applications. This gap between “gee whiz!” demos and dependable deployments is what technologists refer to as the “last mile” problem. Just as self-driving cars blunder on busy streets, AI models struggle with inconsistency, bias, and a tendency to hallucinate. In creative contexts, these features may be harmless or even welcome. But in business, finance, medicine, and other high-stakes use cases, anything short of 99.9% accuracy is often unacceptable.

Innovative design of human-AI systems can help manage these last-mile risks while the technology continues to mature. Confirmation prompts (asking users to verify that generated content is accurate) and uncertainty estimates (indicating the model’s confidence level in a given output) bake human oversight into the generation process. Explainability features can further increase visibility into the AI’s “black box”: for example, using heat maps to visualize which inputs most influenced the model’s output, including citations to external sources, and showing alternative responses the model considered. Interfaces that let users tweak prompts and parameters and observe the effects in real time could further boost transparency into a model’s inner workings.
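The oversight patterns described above can be sketched in a few lines: attach an uncertainty estimate to each output and require human confirmation below a confidence threshold. This is an illustrative sketch, not a production design; the threshold value and labels are assumptions:

```python
def present_output(text: str, confidence: float, threshold: float = 0.9) -> str:
    """Label an AI output with its confidence and flag it for review when low.

    `threshold` is illustrative: outputs below it are gated behind a
    verification prompt, baking human oversight into the flow.
    """
    labeled = f"{text}\n[model confidence: {confidence:.0%}]"
    if confidence < threshold:
        labeled += "\n[please verify before use]"
    return labeled
```

The design choice here is that low-confidence outputs are never presented as finished answers; the interface itself communicates when the system is unsure, shortening the user’s path to calibrated trust.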

Together, design elements like these can help optimize “time to trust”: that pivotal moment when a user gains enough confidence to rely on an AI system for complex, mission-critical tasks.

Doing > Deliberating

Design has always evolved alongside technology, from the printing press to the 3D printer. As with prior breakthroughs, AI won’t replace designers; rather, it will profoundly expand and reshape not just what we design, but who designs and how. To realize this future, we’ll need novel designs for human-AI systems.

Much ink has been spilled over whether incumbents or startups will benefit more from our present AI wave. Incumbents benefit from accumulated data, distribution, and deep coffers, while startups benefit from speed, focus, and the ability to build AI-native products unencumbered by legacy systems.

Hand-wringing aside, my advice for aspiring AI founders is simple: start building. Prototyping reveals insights that no amount of pondering or planning can. The teams that move fastest to ship and iterate with relentless velocity will win. Or, as I sometimes say, “build to think.” Ask the right questions, and answer the hardest ones first. The future is made by those who do, not deliberate.

