In 1996, Ben Shneiderman wrote a paper without a single data table. No graphs. No study participants. Just an idea. And in it a line, repeated ten times, like a mantra. That idea changed how we see much of our digital space.
This week on Seeing Beyond the Dashboard, Pete Wright sits down with Ben, the father of direct manipulation and one of the minds behind the blue hyperlink. You know, the one you’ve clicked a million times. Together, they retrace the evolution of user empowerment through design, from early command lines to Google Maps and the treemap visualizations that turned markets into rectangles. Ben’s famous framework—“Overview first, zoom and filter, then details on demand”—wasn’t just a guideline. It was a philosophy. A lens through which data became human.
But what happens when your ideas win? When every software company starts parroting your mantra? Ben wrestles with that, too, venturing into the murky waters of AI hallucinations, human responsibility, and what it means to remain optimistic in a world where the machines keep getting louder. This is a conversation about the thrill of discovery and the beauty of a well-designed interface.
Transcript
Pete Wright:
Imagine you’re searching for a home, not scrolling through endless listings, not typing ZIP codes into clunky boxes, but sliding a few sliders, dragging a map, watching colors shift in real time as your ideal neighborhood pulses into view. That was the dynamic query home finder, a prototype from the early 1990s. It didn’t just change how people looked for homes. It changed how we look at data.
Today on the show, Ben Shneiderman, a pioneer in human-computer interaction and the mind behind the mantra that has shaped the field of information visualization. Overview first, zoom and filter, then details on demand. This framework is foundational. What happens when the ideas go from radical to routine? Where do we go next when the world adopts your vision? I’m Pete Wright. Welcome to Seeing Beyond the Dashboard from Hopara. Ben, so grateful for your time today.
Ben Shneiderman:
Thank you, Pete. Good to be here.
Pete Wright:
You talk about empowering users. Can we take a step back in time and talk about how this vision for user empowerment met this idea behind information visualization?
Ben Shneiderman:
Absolutely. Yes. I have always thought that the purpose of computing was to give people power, superpowers, I mean. Just as microscopes or telescopes gave them power, so too the computer gives them power over data to explore, to discover, to see patterns and clusters, to see gaps and anomalies and outliers. There are a lot of rich things to see in the large data spaces that we have today.
But the earliest notion goes back to the days where we were still struggling with text-only interfaces and command lines, and we moved to graphic user interfaces, which certainly helped get people to be more comfortable to use their computers. And as we moved to graphic user interfaces, we were developing the ideas of direct manipulation.
It’s a framework that I described and coined in 1981, 1982, and it suggested that the way to do computers, now it seems obvious, was to have a visual representation of the world of objects, the world of the items you’re caring about, and then to have operations so that you could zoom around, move around, click, flick, or whatever you were doing. You could explore that data in a fluid way that was rapid, incremental, and reversible. Those were the principles of direct manipulation. And you did it not by typing at a keyboard, but by moving a mouse or touching on a touch screen.
Pete Wright:
Yeah. The keyboard is already an abstraction here, right? Already. One variable too far.
Ben Shneiderman:
That’s right. I mean, the keyboard was and remains a useful tool for some items. But for exploration, we don’t want to be typing on the keyboard. We want to be flying through the data, if you will, or swimming through the data, if you want. So there’s lots of ways you could think about moving around large data spaces, but the idea of direct manipulation was that you were in control. You operated. You were empowered.
That’s the idea of computing for me, and that’s remained a powerful idea, and it’s continued in the advertising of Apple particularly, which talks about its AI systems as empowering users. That’s the phrase I use: amplify, augment, empower, and enhance people’s performance. So that led me towards the 1990s, when I began to apply the ideas of direct manipulation to large visual spaces.
And we can talk later about the treemap idea which came along, and the idea of exploring, as you described in the dynamic query home finder, that lets you see a neighborhood and find the houses with three or four or five bedrooms, and see how far it is from your workplace, and see what the routes might be, and look for different kinds of houses. So that fluid exploration, where you’re in control, is the essence of it.
And in trying to describe that world, I came up with this notion of the visual information-seeking mantra, which I playfully put into this paper. I was at a conference on visual languages in 1996, and it said, very simply, “Overview first, zoom and filter, then details on demand.” And just to emphasize the mantra-ish quality, that paper listed that phrase on 10 separate lines as a pretty bold graphic.
And to my amazement, this paper, which had no empirical results, which was an opinion piece, has become a steady and still highly referenced paper, with more than 8,000 citations in the literature, a startling number by any account. So that’s been a joy to me.
Pete Wright:
Can I ask just a sidebar question? May it please the court. How long do you feel like you had to workshop that mantra, as you say it, before you feel like you got it right? Did it come easy to you, or were you just sweating over piles of crumpled-up paper?
Ben Shneiderman:
Well, it was an interesting situation. It came from a conference that I spoke at in Italy, the Advanced Visual Languages Conference, and the opening speaker was a dear buddy and colleague, Stuart Card, who had this pretty good idea about what to do and how to explore an interface. But I knew that there was more to what he said, and I was the closing keynote speaker. So I had a few days during the conference to figure that out, but there was pressure on, and I was revising my slides and pushing along, and I just sort of spun it out.
But there were really seven things you needed to do: overview, zoom, filter, details on demand, but I also had three more: relate, view the relationships among items; history, keep a history of the actions to support undo and replay; and then extract, so that you could extract subsets of that data. So all seven were set out at the time, but the first four really resonated with people. The other three, well, okay. So I tried it out at this conference, and I got a very warm reception. So I knew I was onto something. And so, I continued down the path of formulating it, and I put it out as a playful thing in this keynote.
It’s the kind of paper that you could only do as a keynote, because at a rigorous conference, you wouldn’t get through without some kind of evaluation and empirical validation. So it was a thought piece, and it went in very late in the conference cycle, and it appeared… And I’ve been startled ever since and delighted by the many, many people who find it valuable, not just those in computing, but in other related fields, geography and mapmaking and medicine and biology, and many other areas where people are dealing with large, complex sets of data.
Pete Wright:
I want to talk about the user side of the mantra for a bit. At what point did you start to observe that focusing on the visual identity of data wasn’t just about usability, but about agency on the user side?
Ben Shneiderman:
Yeah. It’s always been my point. It wasn’t about having an agent that did the job for you. It was having the agency to do the job yourself, and the fun, the fun of exploring. I mean, it is just a thrill of discovery that you have. If you’ve had a dataset that’s meaningful to you and you can explore it, zoom around, filter, get details, you can see patterns more rapidly. That’s the great power of information visualization, to let you see more clearly into your data and find the patterns and the connections between items.
And as I said, the patterns, the clusters, the gaps, the outliers, the anomalies, these are all the kind of creatures that inhabit the world of large data visualization. And the remarkable thing is that the technology has come along to allow us to do this rapidly, incrementally, and reversibly even on large datasets with millions of items.
I think that’s the contributions of people who followed me. They made this possible not just on fast machines, but on ordinary laptops, on web browsers even, and the idea that you could do this in the web browser really stood out. A fellow named Mike Bostock pushed this along early on, and others, and companies like Hopara have built this up as a theme for their information exploration tools.
Pete Wright:
Well, and that’s, of course, one of the great reasons why we’re talking today, because the paper and the mantra and the developments you contributed have been so deeply influential in how the Hopara technology ends up working. When you think about the large variety of implementations we have out there in various tools, what has surprised you about how others have applied the idea? Do you have a favorite implementation or tool, and how does it let you see data in a new way?
Ben Shneiderman:
Well, I think the most common and popular success story of this idea is Google Maps, where you see an overview. You look at a whole map of the United States, and you zoom in on Washington, D.C., and you zoom in on the University of Maryland, for example, just outside Washington, D.C., and then you zoom back out, or you see how far it is from the University of Maryland down to the White House, for example. Okay? There are lots of things you can see once you have a context settled.
So zooming in on maps was one of the early things that we worked on, and we actually developed an algorithm for these multilayered maps that Google later also built its techniques on. So that was one application, but we were working a lot on libraries, and I worked with the Library of Congress. So how do you explore a vast library of 100 million objects, books and maps and letters and documents of all kinds, as well as recordings and videos, et cetera?
So those became other challenges of how to do it, and most of the early implementations could not do this fast zooming in. And so, you did it step-by-step. So in a hierarchy, you’d say, “Okay. I want a book at the library.” So you see the Library of Congress catalog and you say, “I want physics.” And you click in physics, and you want solid-state physics, and you drill down in a step-by-step way. But the dream is to make it more fluid. And so, for example, you’d like to have a kind of overview. I think the challenge that remains, by the way, is to integrate visual and textual data in a smooth way. Let me just backtrack a moment.
Pete Wright:
Sure.
Ben Shneiderman:
You asked me about favorite implementations. Our own work was a tool called Spotfire, which became a commercial success story. It continues to exist, but friends at Stanford developed Tableau, which became a still larger commercial success, and they still thrive. They’re within Salesforce now, but that’s a huge, successful community that’s done a lot of these things.
And we have a lot of newer companies like Hopara that’s trying to do the next step here, which is to take on the smooth visual exploration. So the things I’d like to see, for example, would be a treemap, let’s say, of the entire Library of Congress, and then to be able to zoom in on some parts, and then you might get a list at some point, or you might get down to a particular book, and then you can actually see the table of contents, and then down to a particular chapter, and you can explore in that way or back out to that book and look for similar books in the same category.
Other applications, I think, would be the stock market. A company called FINVIZ, I have no connection with them, but they have used the treemap idea to show the 500 stocks in the Standard & Poor’s 500, and each one is coded as a rectangle whose area is the market capitalization, and then the color coding shows the change from the last day or week or month or year, whatever you want to set it to. And so, the red stocks are the ones that are falling and the green stocks are the ones that are rising. And that makes a wonderful overview, and you can instantly spot the…
There are 11 industry sectors. So you can see healthcare, for example. You can see computers, and you can see right away which are the big companies by their big blocks, and then you can see the ones that went down or up. And the most interesting days are the ones when most of it’s red, things have gone down, but there’s one bright green in there, and you click on that, and you find out why that company was up on a day when everybody was down. And in these days of stock market turmoil, I’ve been going there a lot to see these patterns, which are just fascinating to me. Sometimes you’ll find a whole sector, like utilities, that goes up when the rest go down.
Pete Wright:
Sure.
Ben Shneiderman:
Sometimes you find individual stocks, and that’s a pretty interesting thing to look at.
Pete Wright:
There’s a movie from the ’80s or ’90s. It’s called Disclosure with Michael Douglas, and it’s a movie about a software company that makes a virtual reality tool where you have to put on the headset and gloves, and you get in a frame that has a treadmill on it, and that’s their access point for large troves of data. And it’s always stuck with me, because if you can imagine, in order to access this data, this was the state-of-the-art imagineering in Hollywood, based on a Michael Crichton book, where you actually had to get in this VR unit and walk the shelves of a virtual library and open virtual drawers and finger through them just like you would on paper.
It was, I think, one of the most inane approaches to information findability that I have ever seen, and it has stuck with me ever since, even though it’s a dramatic visualization. So on the other side of viewing successful treemaps, where do you find you can go wrong applying this sort of visualization model?
Ben Shneiderman:
Yeah. Of course, no idea is the perfect one in every situation, and many people have written articles about exactly that question, and some suggest you go the other way around. Sometimes you start with details, with a particular person, and then you might like to zoom out. And so, it’s reversing it until you see an overview of every person.
So that’s a particular case where sometimes you want to start with a detail and go back out. And so, I’m sure there are many other cases where things would go a different way. I’d say one other issue would be where you don’t have a natural hierarchy, but you have a network. And exploring networks is a particularly tricky issue, and that’s an issue I’ve worked on with a tool called NodeXL, N-O-D-E-X-L. So it’s network exploration based inside Excel.
It’s the most widely used network analysis tool for visualizing social media networks or other networks. So networks are another puzzle place where you have to use different techniques, other than browsing down the hierarchy. You mentioned the other challenge, which is the movie Disclosure, where a VR world is sort of substituted. I did see that movie. I thought it was kind of clunky to pull drawers open that way, but it made for a playful visualization.
Pete Wright:
Yeah.
Ben Shneiderman:
So I could see people would be attracted to that. I think VR has some important capabilities for people. It’s certainly entertaining, and there are some great game applications, and people seem to enjoy it. But I would say, for real work, you don’t want to be in the data. You want to be looking at the data. So you want to be able to see it all from outside. And I’m not sure about a head-mounted device, even as the resolution goes up and latency goes down, and they’ve gotten very good. I prefer the idea of a high-resolution display in front of you with lots of data, what I call information-abundant user interfaces.
So, for example, the Bloomberg Terminal is a pretty good example of how you present a lot of information, possibly two or three large screen displays, where the user, who’s a stock market or bond trader, has arranged the data that they want, in the place they want, in a spatially stable arrangement, but then they can quickly glance from one to the other. So here’s another situation where zooming in and out can be problematic. And if you have a large enough screen display and you can see everything at once without doing any zooming, well, that might be a better alternative.
Pete Wright:
Let’s talk a little bit about what’s next. What extensions or evolutions of the framework excite you today? I know when we started our conversation, you were pointing at human-centered AI, which is obviously a hot topic right now, and I’m interested in your perspective on how the rise of AI and LLMs and immersive tech challenges or reinforces these ideas.
Ben Shneiderman:
Yeah. Absolutely. It’s a new world. It’s a great opportunity. These are powerful tools, and my fundamental comment is that the AI technologies of LLMs and so on are startlingly impressive. I continue to be impressed, but alarmingly flawed. And so, you have to deal with that duality where something magical happens. You get a result either in text or you get graphics output or even data tables or data visualizations, but they may be flawed.
And so, you have a constant wonder, “Is this correct?” and “When can I use it?” So it’s okay if you’re doing playful things, if you just want to find a movie to watch. Not highly consequential. But if you’re doing serious work that’s business-consequential or life-critical, such as air traffic control or medical or military or the transportation applications, I don’t think you want to have the kind of uncertainty of hallucinations that intrude into AI applications.
So my approach, in my recent book, Human-Centered AI, takes the next step: what do we mean if we’re going to amplify, augment, empower, and enhance human performance? How do we do that? How do we build the self-efficacy of the user, support their creativity, clarify their responsibility for what they do, and make it easy for them to share results with people?
So there’s kind of another mantra, if you wish, that might follow on: self-efficacy, creativity, responsibility, and social connectedness. I think the good, powerful tools, most of the data visualization and information visualization technologies, make it easy for you to do most of these things, and to share the results, put them into a spreadsheet, put them into a PowerPoint, or share them over email. These are all ways that they facilitate social connectedness.
And for me, a central issue is responsibility, because when you have business-consequential and life-critical applications, you must have the audit trail of who did what and how decisions were made. And when you make those important, highly consequential business decisions or life-critical ones, the necessity for careful recording of what’s happening is essential.
So that clarifies the responsibility: it’s never the computer that’s responsible; it’s the user. So when the computer goes wrong, when there are AI hallucinations, you’d better watch out. And so, I say rather bluntly, shut it down. If it doesn’t perform reliably and safely, shut it down. It may seem cool to you to use a tool that has all these fun things in it, but I’m much more cautious. And when I’m dealing with life-critical situations, getting 99.9% accuracy is not enough.
Pete Wright:
That sort of leads to a question, or maybe a thought experiment: are we economically incented to do the kinds of things that you’re talking about? Because we are seeing headlines today as we record this. “Don’t come to me with a proposal for a new position until you’ve proven that that job can’t be done by AI.” That’s from a Shopify CEO memo linked just today.
I think that is an interesting, growing narrative that automation is inevitable. And when you bring it back to this mantra that you are foundational to, the meeting of automation and data could create magical, visual, heavily user-centric displays. At what point is that no longer human-centered?
Ben Shneiderman:
The current infatuation with AI is sometimes misleading, because it goes overboard with the idea of autonomous systems. And so, you want to make the right blend, which I think your digital camera is an excellent example of. That is, there’s lots of AI. The AI handles shutter and focus, aperture, color balance; it reduces hand jitter. Okay? Those are all great things, but you’re in control.
You frame, compose the picture. You zoom in, and you wait for your decisive moment when the expression is right on the faces that you’re photographing. You’re responsible. If someone looks terrible in that picture, you should not distribute that, and it’s also easy to share. That’s the message of digital cameras. You can also edit many, many ways. So there’s lots of opportunities for you to be creative before, during, and after the taking of a photo, and those are the things you want to see.
So I would return to the business case here also: you do want to make clear to people that they’re responsible for the work, and you want to design interfaces that ensure human control while increasing the level of automation. You used the word automation. I was pleased by that, rather than autonomy. Some in the AI community are very devoted to the idea of machines doing it on their own. The right design is a tool, or a super tool, as they say, which gives people superpowers. That’s what technology has always done.
It’s a microscope or it’s a telescope, or it’s an airplane. These are things that improve lives, change lives by a factor of a thousand, email, web search. All these things make a huge difference in people’s lives, and those are the things we want to do. I don’t want a machine. I’m not interested in a machine which merely does what a human does. How narrow, how limiting. I want tools that make people a thousand times more powerful than they have ever been.
Pete Wright:
What are you doing to keep busy? You’ve retired. You have so much yet to contribute, Ben. What are you doing?
Ben Shneiderman:
I’m working. I’m working. Yes. I’m happily retired.
Pete Wright:
Okay.
Ben Shneiderman:
So I have the freedom to do as I want, and I’m enjoying times where I’m just out walking or hiking with friends or doing other things, traveling. But yeah, I’m in the book-writing business. It’s been a theme. So the Human-Centered AI book is my latest contribution, and I’m working on the next one. Stay tuned and I’ll fill you in. But it focuses on the idea of empowerment and the notion that people are remarkable, and we should celebrate and appreciate their expertise and their capacities, and we should design tools that facilitate and empower them.
Apple gets that right, but you’ll find other companies stray from the notion that the focus is the user. Apple gets it right in their advertising also, where the advertising is all about you. You can do this. You can do that. It’s not that the magical machine will do it for you. It’s that you can do it. That’s the message of empowerment.
Pete Wright:
And we’ve only just met. You seem to me a delightfully optimistic human being. How do you characterize your optimism in a way that we can make contagious to others?
Ben Shneiderman:
I do convey optimism. Certainly, the times we’re living in now are more challenging than any in my life. The turbulence in the world around us is very troubling to me, and at times, it does register and send me down into melancholy, let’s say. But I rise up as I can, and the optimism is born from the fact that I have the experience of having seen how ideas do change things, and the fact that our early work on direct manipulation and touch screen designs led to the remarkable devices in billions of people’s pockets.
Steve Jobs came to visit my lab and saw the demos, and that was a remarkable moment. He went quickly from one to the next and said, “That’s great. That’s great. That sucks. That’s great. That sucks.” There was nothing in between, and I became a consultant for Apple for five years. So I could see how ideas did get from ideas to products, and the success of the phone is one of those remarkable ones.
I’m pleased. Some of the ideas that I had, the idea of the highlighted selectable link, that was me. That they should be blue, that was our studies. We had studied different colors and different ways of showing where the links were. But the idea of links, that was a published paper. Tim Berners-Lee picked up on it. We built a system that was a modest commercial success in the mid- to late ’80s. And then Tim Berners-Lee, in his spring 1989 manifesto for the web, cited that work that we had done and took over and put the blue links into the web browser.
So that’s nice, or the touch screen, the high-precision touch screen and the little keyboard on your phone. That was originally our work. I mean, at the time, touch screens had to have big buttons, like inch-square buttons. So a keyboard on the screen would be like nine inches wide, and we made a seven-inch and then a five- and a three-inch-wide one that you could touch carefully, and you could be rather precise in your movements, and one of the reviewers of that paper couldn’t believe that we had done this.
We had to produce a video of it to show this skeptical reviewer. So I’ve seen how these kinds of small ideas travel far and have great influence. And the satisfaction of seeing information visualization succeed, by way of the treemap idea and the Spotfire commercial product, those are also satisfying. So my optimism stems from a belief that good ideas can succeed. It’s really difficult. The world of new ideas is very competitive, but I hope that tomorrow is a better day and that there’s work to be done. So let’s get on with it.
Pete Wright:
I’ll tell you, your optimism is contagious to me. I hope it is for our listeners. Thank you so, so much, Ben, for your time. It’s too short. And thank you, everybody, for downloading and listening to this show. I’m going to put notes, links, blue links in the show notes for you to check out. Jump directly to Ben’s book, Human-Centered AI. And to learn more information about the paper, the mantra, we’ll put it all in there for your further study. On behalf of Ben Shneiderman, I’m Pete Wright. We’ll see you next time right here on Seeing Beyond the Dashboard.