In Part One of this post, I mentioned Reach Advisors’ report on visitors’ preferences for computers. In their nationwide surveys of 40,000+ visitors from over 100 museums, they found that only 11% of visitors liked using computers to receive information. In response to the crowing of the Luddites online, I felt compelled to offer up a little analysis of their analysis. Just remember, I’m not a professional evaluator, just a consumer of evaluations. You’ve been warned.
I ended my last post with the question “What’s going on here?” Their “findings” and the way they’ve chosen to present them are pretty sensationalist. But I have some problems with their methodology. The big one is their sample.
40,000 people sounds like an impressive number, and it is. But if you look closely at their report, you see that the response rate across all the museums they worked with hovers around 5%. That’s a tiny fraction of any museum’s visitorship, and that 5% was not a random sample of visitors: it consisted entirely of people who volunteered to fill out a survey from the museum. And if you’ve ever read visitor comment cards from your museum, you know that by and large they are written by people with strong feelings one way or the other. The Museum of Science has been conducting random visitor surveys for some time, and one of the things we’ve learned is that members’ opinions on satisfaction differ greatly from non-members’. Without a random sample, it’s not really possible to make broad claims about museum visitors, or even “core” visitors, as they call them. With a 5% uptake rate, they can’t even say how their sample differs from the museum-going population as a whole.
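To make the self-selection worry concrete, here’s a toy simulation. Every number in it is invented for illustration; none of it comes from Reach Advisors’ data. The one assumption doing the work is that visitors with strong feelings (especially negative ones) are more likely to return a voluntary survey than indifferent visitors.

```python
# Toy simulation of self-selection bias in a voluntary museum survey.
# All rates below are made-up assumptions, NOT Reach Advisors' numbers.
import random

random.seed(42)

POPULATION = 100_000
TRUE_LIKE_RATE = 0.30  # assumed true share of visitors who like computer kiosks

# Assumption: strong-opinion visitors respond far more often than indifferent ones.
P_RESPOND_STRONG = 0.10
P_RESPOND_INDIFFERENT = 0.02

responses = []
for _ in range(POPULATION):
    likes = random.random() < TRUE_LIKE_RATE
    # Assumption: dislikers are more likely to feel strongly than likers.
    strong_opinion = random.random() < (0.2 if likes else 0.5)
    p_respond = P_RESPOND_STRONG if strong_opinion else P_RESPOND_INDIFFERENT
    if random.random() < p_respond:
        responses.append(likes)

response_rate = len(responses) / POPULATION
sample_rate = sum(responses) / len(responses)
print(f"response rate:      {response_rate:.1%}")   # ~5%, like the report
print(f"true 'like' rate:   {TRUE_LIKE_RATE:.0%}")
print(f"survey 'like' rate: {sample_rate:.1%}")     # well below the true rate
```

With these (invented) parameters, roughly 5% of visitors respond, and the survey’s “like” rate lands noticeably below the true 30%, purely because of who chose to answer. That is the whole problem: without a random sample, you can’t tell how much of a gap like this exists in the real data, or in which direction it runs.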
Definitions (or “a car is a car, right?”)
My main beef is the way they’ve defined “computer interactives” as the most boring, vanilla implementation of computers in museums: the lonely kiosk. I agree with their reasoning that it would’ve been too hard to explain to visitors all the ways computing technology might be embedded in an experience that doesn’t look like a computer. For solid results, I think you’d need an evaluator on hand to explain it. But then, I wouldn’t be quite so liberal with the proclamations about the results. I’m not a big fan of kiosks either, but my beef with them is usually a design problem, not some inherent quality of their “computerness,” which is how Reach Advisors’ report can be read.
I’ve always taken this as a given, but maybe it needs saying: in the hierarchy of interactions, the pinnacle has always been person to person. If we could station an interpreter at every exhibit, there’d be no need for people who do what I do. Since we can’t have people everywhere, we settle for interactive exhibits to give visitors as rich an experience as possible. Which takes me to the part of the report I actually love, which I mentioned in the first part of this post.
Looking back over the report, it almost feels as if two different people wrote it. After starting out with some grand statements about visitors and computers, they wind up saying what every exhibit designer or developer hopefully learns on Day One: every situation is unique, and reflective practitioners focus on the ideas and experiences they want the visitor to have, then design an experience that delivers on that, using the best technologies to achieve it. I wrote about this some time ago, in a post called Listen! What’s the Work Telling You?
The takeaway message of the report seems to be, “Be a thoughtful developer and don’t just use a tool or technique for its own sake!” This is terrific advice, and advice more people should take, but it’s not really a conclusion in the classical sense. It doesn’t necessarily flow from their data; it’s just good practice. A thoughtful designer chooses the right tools for the job. Some experiences are best handled by a computer, like manipulating data and displaying dynamic responses to changing inputs. Some are best handled as mechanical interactives. And many only work as staff-mediated experiences. It’s the developer’s job to pick the right one.
Lest you think I’m a complete ingrate, sneering at the huge amount of work Reach Advisors has done and shared for free, be assured that is not the case. I think this report should be a great conversation starter. I’ve already had some great talks with colleagues at other museums about the report itself, about what else we could/should ask visitors, and more. I just wouldn’t use the report as anything more than suggestive data.
What did you think of the report?