Are your product development surveys all about new bells and whistles? That’s critical information—and fun—but it’s never the whole picture.
On the less fun side is asking what irritates your customers. This is also more expensive to research, both because collecting the information is more involved (verbatims, interviews, forum mining), and because digging through both measured and SHOUTED complaints about your products is exhausting.
Why do you care about their pain? Because retaining current customers is far less expensive than acquiring new ones.

Picture a new entrant in the market. Their pitch to a slice of your users might be “We don’t have all the features of your current product, but we’re half the price, and you know those four hours a week you spend rearranging data in Excel? That disappears.” As someone who’s spent quality time rearranging data because of limited import/export options, that utterly un-sexy enhancement sounds like bliss.
Also, while some pain points may involve a major product change, others may have a tiny remedy. Three product irritations I’ve dealt with just this morning:
Sounds silly, right? Things you’d never brag about on a press release, things nobody would report to a customer service line, but they remove a little friction in a customer’s day. I’m still thankful my credit union revised their app’s login screen so the second field and button are no longer covered by my phone keyboard.
What can you tweak today?
What they will know from my responses? I’m cheap.
What they won’t know? Why.
They could assume I’m generically budget conscious. Or they could look at my industry, business consulting, and guess that as a non-creative I’m not their prime market. But they won’t know.

There are two questions I’d have added to the survey for a clearer picture. I know, I know, length is always an issue. However, this was already an extensive questionnaire, and these are less mentally taxing than the product comparisons in the conjoint portion.
How frequently do you use your current subscription?
How well do the features in your subscription suit your needs?
Now we know if someone is purely price sensitive, an infrequent user, or a basic-function user. Each facet offers its own challenges and opportunities for retention, upgrades (to additional apps), or possibly broader market penetration.
I’d also have added my usual “safety net” question: Comments! Yes, they’re costly to analyze, especially on larger studies, but no survey will ever cover every topic and they’re still cheaper than focus groups or interviews.
As the candidates stabilize, voters firm up who they'll vote for (if anyone), and the samples grow more detailed, polls naturally become increasingly accurate. At six months out, they average +/- 5.8%; at four months, it's around 4.8%; and they're down to 1.7% a week before the election.
There are approximately 245 million potential voters in the US. Only ~150 million of those are likely to cast a ballot, and ~58 million of those show up at the primaries to determine who will appear on our ballots. So if you're politically active, encouraging turnout, not arguing for conversion, has the greatest impact.
Our two major parties also have different turnout rates. In lower turnout years, Republicans tend to win (as we saw in the 2014 Senate majority change), while higher turnouts give Democrats an edge. See image (thanks for the share, Chris).
No single methodology is perfect—the best picture tends to come from multiple sources. Sites which track polling:
Many thanks to Chris Jackson at Ipsos/Reuters for an engaging talk at the Puget Sound Research Forum. If you ever have the opportunity to hear a full time pollster talk about their methodology, it's well worth it.
Steve Krug’s “Rocket Surgery Made Easy” is an engaging, pragmatic, and quick read from an experienced expert in Web usability. If you’re considering qualitative studies, or want to do them but are stopped by budget constraints, this is a great introduction to the basics of user testing.
I only have one thing I’d add to his text. It’s easy to focus on finding all the problems—and only on finding problems. When doing any kind of testing or editing, I also try to pay attention to what is working, because:
Krug gets at this a bit via his mandate “When fixing problems, try to do the least you can do”—I just like to give it a smidge more focus.
As always, I’m a fan of my local library, but you can also buy it on Amazon.
Happy testing!
I attended a Tableau event last week, and was delighted to see their software demonstrated in the same way.
On rare occasion there will be a Grand Plan for the reports, with details about exactly what breakdowns decision makers care about, or matching a prior year’s format. If you know your reporting software well enough, you can dive in and build each figure in a single step.

But most of the time, we’re exploring the information as we go, playing with relationships and sub-groups to see what is or isn’t significant. We’re looking for both the insights and the best way to communicate them to others—whether that reader wants a quick dashboard check-in or the nitty-gritty numbers.
Plus, software is complicated as heck, and unless you’re a power user, you’re not going to remember exactly where every control is located. Never beat yourself up over having to poke around.
So be curious, and try letting your data speak to you, rather than arranging it into an expected output.
By the way, I’ll have more about Tableau and how it might work for your surveys in a couple months. I’ve started a business intelligence analytics certificate, and one of the classes will give me a nice grounding in its functions.
As far as your research model goes, the issues are pretty straightforward:
Most survey software provides some flavor of skip logic, so unless you’re on a very basic/free offering, a simple skip should be available. Note this is one of those features where the details can differ widely, so if you’re doing complex surveys, make sure you get specifics before settling on an application.
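Under the hood, a skip is just a lookup from an answer to the next question. Here’s a minimal JavaScript sketch of the idea; the question IDs and routing table are hypothetical, not from any particular package:

    // Hypothetical routing table: maps an answer on one question to the
    // question the respondent should see next.
    const skipRules = {
      q5: { "No": "q9" } // a "No" on Q5 jumps past the school block
    };

    function nextQuestion(currentId, answer, defaultNextId) {
      const rules = skipRules[currentId];
      // Fall through to the normal sequence when no rule matches.
      return (rules && rules[answer]) || defaultNextId;
    }

    nextQuestion("q5", "No", "q6");  // "q9" -- skip taken
    nextQuestion("q5", "Yes", "q6"); // "q6" -- normal flow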
Whenever possible, make the question a useful data point, not simply a throw-away used to trigger the jump. For example, if you’re conducting a community survey, you may ask:
Do you currently have children in school?
Yes
No
Simple, right? That works if you only care about current parents, but often it’s more complex. So instead you could ask:
Are you familiar with our community’s schools? Mark all that apply:
Preschool age children, or planning to have a family
Children in grades K-12 (including home schooling)
Older children who attended in our community
Employed by, or actively involved in our schools
Other, please specify:
Not familiar
Not only did you get more interesting information from the respondent—without having to ask a follow-up question—you also included several groups who may be very involved in your schools, even though they don’t have any children in grades K-12.
Most market research mirrors classic product development (the waterfall model), with a steady progression through needs analysis, research & development, and delivery. It assumes big releases which stay stable for an extended period, which is a good fit for physical projects like manufacturing and construction.
Agile development works in smaller chunks, with products continuously evolving. The idea is to release a good start and keep improving it, rather than making one giant push to create the be-all-end-all specification. You may have noticed many of your smartphone apps update frequently, sometimes as often as every two weeks—this is a reflection of agile methodology.
So if you’re in an agile product environment, why shouldn’t your market research join in?

Imagine your app has a weekly question, which it pops up for users to answer. Don’t make it difficult, just something like this:
How is our application handling your image needs?
More tools, please
The current features are fine
I can add images?
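If your platform doesn’t have a feedback widget built in, a prompt like this needn’t be heavy. Here’s a minimal browser-side sketch; the poll definition, storage key, and endpoint are all hypothetical:

    // Hypothetical weekly micro-poll definition.
    const weeklyPoll = {
      id: "images-week-12",
      question: "How is our application handling your image needs?",
      options: [
        "More tools, please",
        "The current features are fine",
        "I can add images?"
      ]
    };

    // Only prompt if it's been at least a week since the last ask.
    function pollIsDue() {
      const last = Number(localStorage.getItem("lastPollAt") || 0);
      return Date.now() - last > 7 * 24 * 60 * 60 * 1000;
    }

    // Record the answer, and remember when we asked so we don't nag.
    function submitPoll(pollId, choice) {
      localStorage.setItem("lastPollAt", String(Date.now()));
      return fetch("/api/polls", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ pollId, choice })
      });
    }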
Depending on the app, you may be able to provide micropayments for participating in the polls, such as extending subscription periods or gems to spend in a game. Even if the reward is minuscule, or your users are engaged enough to answer without one, it’s a nice “Thank you.”
So as usual, there are trade-offs you’ll have to weigh. But given how many iTunes Store reviews I’ve seen where the user is clearly “speaking” to the publisher about problems and improvements, it’s pretty clear we need some more channels.
Brainstorm every way you might want to slice and dice. A year from now, when you look at your accumulated data and decide it would be nice to compare groups, you'd better already have the breakdown factor. An extra field or three today costs little, but retrofitting values is often impossible.
This is an ongoing struggle. We want to make the survey fit each client—getting them the best information is a good thing. However, the more we tailor to one instance, the less we can compare those results to the accumulated data.
Is it a small enough change that the meaning is the same? If so, should this become the new standard phrasing? Are you documenting your changes, so you'll know which records were asked which version?
If it does significantly change the meaning (sometimes it only takes one word), it's no longer an edit. Instead, you need to treat it as a removal of the old question 27, and the addition of a whole new question which happens to be on the same topic. Note that you'll still be able to roll the new question into aggregate scores for the topic; you just can't treat it as the same question for trending or cross-tabulations.
Maybe the firm rolled out a major training program or product line this year which they want to ask about. Or they have a niche division which is critical to their operations, but not part of 99% of firms. Adding something unique to one instance of the survey is actually less disruptive than edits. You'll still have all the trending questions for comparisons, and nobody will expect that type of comparison on the special section.
If you only ask a handful of firms if they have a Professional Services Group, then you won’t be able to develop much of a benchmark between firms for people in similar roles.
One way to tackle it is to develop a master list of every department you can conceive, from which you pull the sub-set you need for a particular client. The other approach is to have a handful of broad categories which fit every group under the sun.
This all comes back to the question of how you're using the data. If you're looking at divisional performance from year to year within one organization, departments tailored to that firm will be important. If you're trying to give clients a benchmark against your accumulated results, then more generic categories are actually better.
You're unlikely to have a perfect crystal ball. While the trending is valuable, if you notice a problem in the survey instrument, it's more important to fix it and move on than to set flaws in stone. If you're planning a broad distribution, such as my client's survey going to many firms over many months, you may want to view the first few instances as a pilot/beta program.
Remember Y2K? Even minor changes can make major ripples in data sets. Work with your techies to understand what types of changes are easy or hard. And if they start looking spooked, ask what they suggest instead.
We can never know exactly what environment a respondent will be using to complete our web surveys, so how do we add bells and whistles without the forms breaking? (Seriously, never. I've had IT people give me rigid specs on their employees' systems, only to have the server logs show other configurations.)
Keeping a low profile

We've all visited websites where the scripting does the browser equivalent of a musical theatre number, with pop-ups, auto-scrolling, beeps, and more. Unobtrusive JavaScript is the idea that scripts are a quiet helper, there to provide support when needed, but for the most part unnoticed as the user goes through your survey. Think of it as the perfect dining experience, where your glass is always filled and the next course arrives just when you want it, without the server ever interrupting your conversation.
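In practice, “unobtrusive” mostly means keeping behavior out of the markup: no inline onclick attributes, just an external script that quietly attaches its helpers once the page loads, and does nothing if its target isn’t there. A minimal sketch (the phone field is hypothetical):

    // Unobtrusive version of a phone formatter: the HTML stays a plain,
    // working form, and this external script layers the helper on top.
    document.addEventListener("DOMContentLoaded", function () {
      const phone = document.querySelector("input[name='phone']");
      if (!phone) return; // quietly bow out if the field isn't on this page
      phone.addEventListener("blur", function () {
        phone.value = phone.value.replace(/\D/g, ""); // strip non-digits
      });
    });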
In the early days of the Web, browsers would each add support for their own special functions. Some of these eventually made it into the core HTML specification, while others never caught on, so pages wouldn't work consistently in different browsers. Web standards is the idea that we all benefit from using the same set of language specifications, including having browsers interpret them consistently.
While you probably won't be involved in the nitty-gritty of your code, standards are still a good thing to keep in mind. Every time you add an animated scroll bar to a page, or swap out standard checkboxes for extra large ones, you're overriding built-in functions. As tempting as it is to go with the fancy/cool "upgrade," remember you're trying to get respondents to answer a form, not provide them entertainment. And while the standard form controls might seem a little boring, they're also ones the respondents know exactly how to deal with, and which will work reliably in every circumstance.
My phone will display the same web pages as my laptop, but shrinking and panning isn't an ideal experience. Responsive web design is the idea that websites should smoothly accommodate a range of devices. Naturally, this is much simpler to say than to do.
If you have a very simple single-column page, you could just set it to 100% of the browser width. That would be readable on a phone, but awkward on a desktop, because the line lengths would be far too long (45-75 characters per line is a common target). On a site like this, it gets trickier: at small sizes the sidebar needs to go away, with its content either dropped completely or rearranged in a way which works better. Even within the main body column, we start having to deal with scaling images, or perhaps briefer content.
Surveys have their own particular challenges, especially in longer forms with extensive grids, such as employee satisfaction or industry salary surveys. In order to fit your grid on a phone screen, you may need to change from text labels to a 1-5 scale, swap in pull-down lists, rearrange from a grid with radio buttons to the side to a series of questions with the scale below, etc. In addition, when text fields are used, a large portion of the screen is covered by the keyboard. And despite my general dislike of one-page-per-question syndrome, I'm willing to revisit it for tiny screens.
With pages like this site, designers often use CSS media queries to say "If the screen is >x pixels, do y." (These are generally referred to as breakpoints.) That action may be resizing, rearranging, or hiding sections, but it's all acting on a single source page. (This is also used for print-friendly pages.) At other times, designers will create two essentially independent sites, tailored for specific devices, with a switcher script that sends respondents to one site or the other. I've used designated sites for mobile projects because tailoring forms generally takes more than a little styling, but just be sure you think through its impact on pause/resume, one time use passwords, and such.
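For what it's worth, scripts can test the same breakpoints via window.matchMedia, which is handy when a grid needs restructuring rather than just restyling. A sketch, with the 600px breakpoint and class name picked arbitrarily:

    // Mirror a CSS breakpoint in script: @media (max-width: 600px).
    const smallScreen = window.matchMedia("(max-width: 600px)");

    function applyLayout(mq) {
      // Toggle a class so CSS (or further script) can rearrange the grid.
      document.body.classList.toggle("compact-survey", mq.matches);
    }

    applyLayout(smallScreen);                            // once, on load
    smallScreen.addEventListener("change", applyLayout); // on resize/rotate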
If a mobile edition of your survey is important to your research, I recommend starting with that as the target, and then making cosmetic adjustments for larger displays. It's easier to begin with the most constrained situation, than to cut down a survey you've already refined for more flexible environments.
The user has JavaScript disabled or cookies blocked. What now?
You could tell the user what they need to "fix." Most of us have seen notices about installing a plugin or enabling some browser feature to proceed, and many of us have responded with "Bye!" If your scripts will block a non-compliant browser, keep in mind that a respondent has to be (a) willing, and (b) able to remedy the problem—and there are no small requests when we're all scrambling for respondents.
Instead, most sites are designed to proceed even without full support, via either graceful degradation or progressive enhancement. These are the same concept approached from two directions.
With graceful degradation, you design your ideal experience for the "typical" user. Then you go through the scripts, playing what-if in case of older browsers or restrictive environments.
With progressive enhancement, you design for the bare bones, and then consider what icing you could add for people who happen to have the latest and greatest installed.
Whichever way you tackle it, this usually involves an overlap of server-side and client-side scripting. So even if there's no instant JavaScript check on their e-mail address format, it's still OK because you've backed it up with a PHP or ASP function as soon as they click Next/Submit.
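In code, the progressively enhanced version of that e-mail check might look like this: the form submits normally with or without scripting, and the instant warning is layered on only when JavaScript is available (form and field names hypothetical):

    // The form works as plain HTML; this script only adds an early warning.
    const form = document.querySelector("#survey-form");
    const email = document.querySelector("input[name='email']");

    if (form && email) {
      form.addEventListener("submit", function (event) {
        // A rough format check only; the server-side script still has
        // the final say once the page is submitted.
        if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email.value)) {
          event.preventDefault();
          alert("That e-mail address doesn't look right. Please check it.");
        }
      });
    }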
When you access a web survey or other page, you use a browser such as Internet Explorer, Safari, or Firefox. This downloads a collection of HTML, CSS, and scripting files, which are interpreted by your device and browser for display. The combination of device + browser + personal settings (installed plugins, large fonts, security) results in a huge range of possible environments in which a survey needs to function. Which is why widow/orphan line breaks are the last thing we worry about.
HTML includes the text you read, as well as information on how the page is structured. Pages are organized into sections such as the header, body, or sidebar, and the HTML also indicates if something is a title, paragraph, list, table, image, or link.
The radio buttons, checkboxes, pull-down lists, and Submit buttons in a form are part of this core language. The browser also performs some standard behaviors, such as pressing Tab or Enter while you're within a form.
In olden days (12 years ago), formatting instructions such as fonts, colors, and backgrounds were intermixed with the HTML. Over time, this information was pulled out, so the styling of a paragraph is a layer on top of the text itself. This makes it easy to re-style an entire site when your logo color changes, or if you decide you want the sidebar on the left instead of right. It also allows for more dramatic effects, such as creating mobile-friendly or print-friendly versions of pages. While CSS allows for more control over layouts these days, most designers still allow some flexibility to support individual settings such as large fonts.
JavaScript is the most common form of client-side scripting, meaning the script is downloaded to your local device, and runs there. When it comes to forms, there's a wide range of functions you can add via JavaScript, from automatically formatting your phone number to activating a secondary question when you mark the first. Apart from the interactivity, what's great about client-side scripting is it responds instantly as you type or click.
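That "activating a secondary question" pattern, for example, is just a show/hide toggle tied to the first answer. A minimal sketch with hypothetical element IDs:

    // Reveal a follow-up question only while the trigger box is checked.
    const trigger = document.querySelector("#used-support");    // "Did you contact support?"
    const followUp = document.querySelector("#support-rating"); // "How was the experience?"

    if (trigger && followUp) {
      followUp.hidden = !trigger.checked; // set the initial state
      trigger.addEventListener("change", function () {
        followUp.hidden = !trigger.checked;
      });
    }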
There are just two catches. First, since the script runs locally, it may encounter a browser with scripting blocked, or an old version which doesn't properly understand your functions. Second, every bell and whistle is more development time, and even if you have survey software which adds common functions with a click, more features usually mean higher end tools.
Common plugins include Flash, Silverlight, Acrobat, and Java. These allow developers to create a very controlled user experience—assuming the user has the plugin installed. For the vast majority of surveys, I recommend sticking with conventional HTML forms, for the broadest respondent reach. If someone is pitching you a form which requires a plugin, be sure you ask what functions it provides that they can't do in other ways, and decide if the portion of your population who cannot run the application is an acceptable loss.
A cookie is a small bit of data which a script (server-side or client-side) saves to your local system. Cookies can be used to remember respondents, for resuming an abandoned session or preventing a second submit. However, they're not infallible—they can be blocked or deleted, and they only apply to a specific device+user+browser combination, so someone can answer multiple times simply by switching from Internet Explorer to Firefox, or desktop to tablet. Cookies are also a problem in shared computer environments.
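Client-side, a cookie is just a string on document.cookie, which is part of why it's so easy to block or clear. A sketch of the "prevent a second submit" use, with a hypothetical cookie name:

    // Mark this browser as having completed the survey (expires in 30 days).
    function markCompleted() {
      const expires = new Date(Date.now() + 30 * 24 * 60 * 60 * 1000);
      document.cookie = "surveyDone=1; expires=" + expires.toUTCString() + "; path=/";
    }

    // Check for the marker before showing the form. A different browser or
    // device won't have the cookie, so treat this as a courtesy, not a lock.
    function alreadyCompleted() {
      return document.cookie.split("; ").includes("surveyDone=1");
    }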
PHP, ASP, Perl, and Python are common server-side scripting languages. These execute on the server itself, which means you know exactly how they will work every time. Server-side scripts should be the core of any survey processing for functions such as data validation and skips, because they don't depend on what the user is running. So why do we use client-side scripting at all? Because in order for a server-side script to act, the user needs to click a Submit or Next button, or load a page, so the server can receive a request, process it, and then send the result back to the user.
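To make that round trip concrete, here's the same idea sketched in Node.js, standing in for the PHP or ASP mentioned above (route and field names hypothetical):

    // A server-side check runs no matter what the respondent's browser supports.
    const http = require("http");

    http.createServer(function (req, res) {
      if (req.method === "POST" && req.url === "/submit") {
        let body = "";
        req.on("data", function (chunk) { body += chunk; });
        req.on("end", function () {
          const email = new URLSearchParams(body).get("email") || "";
          const ok = /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email);
          // The browser only sees the verdict after the server processes it.
          res.end(ok ? "Thanks!" : "Please go back and check your e-mail address.");
        });
      } else {
        res.end("The survey form would be served here.");
      }
    }).listen(8080);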
Many survey applications will be based on, or connected to, a database. In shared hosting environments, MySQL is very common, while in corporate networks the database may be Microsoft SQL Server. The database server is an important choice for IT professionals, but unlikely to impact your range of survey functions, because there is a range of APIs (application programming interfaces) which let the common databases and programming languages connect.
The lowest level is the web server itself, such as Microsoft IIS, or Apache on most shared hosts. Just like Windows vs. Apple, this may impact the survey applications you can install on your site. There are also occasional functions, such as restricting a folder to certain IP addresses (see more), which can be set via the server configuration.
Want more web jargon? Read on with Programming Philosophies.
For when it is an issue—or for anyone thinking “Why not use it if I have it?”—here we go...
Why random?

Random addresses order biases—the hypothesis/fact that a respondent will answer a question more positively, negatively, or frequently simply based on where it appears. Random doesn’t actually fix anything; it just averages out any effects by changing which item shows up first/fifth/twentieth. Order biases can take several forms:
You can randomize any unordered set of elements, including:
Sometimes you simply have too many questions or combinations to present everything to each respondent—such as with Conjoint analysis. In this case, you may present a random sub-set to each individual. Configuring these surveys gets tricky, as respondents will often answer relative to the question set they see.
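Mechanically, both full randomization and random sub-sets come down to an unbiased shuffle. A minimal JavaScript sketch using the classic Fisher-Yates algorithm (the option lists are stand-ins):

    // Fisher-Yates shuffle: every ordering is equally likely.
    function shuffle(items) {
      const a = items.slice(); // copy, so the master list keeps its order
      for (let i = a.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [a[i], a[j]] = [a[j], a[i]];
      }
      return a;
    }

    // Full randomization: show every option, in random order.
    const options = shuffle(["Brand A", "Brand B", "Brand C", "Brand D"]);

    // Random sub-set (as in a conjoint design): shuffle, then take the first n.
    const allCombinations = ["A1", "A2", "B1", "B2", "C1", "C2", "D1", "D2"];
    const subset = shuffle(allCombinations).slice(0, 4);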
There are two facets here: avoiding usability problems, and not tipping your hand to respondents about a survey’s tricks.
The only way to really determine the degree of bias your project will face is a split test—half random, half fixed order—with a statistically valid sample. However, you can probably make a guess as to whether enough respondents will answer with the same bias, to a strong enough degree, that you’d come to a different business decision based on the random versus fixed results.
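As a sketch of what that split test looks like in practice, reusing the shuffle() helper from the sketch above, with the arm recorded alongside the answers for later comparison:

    // Half the respondents see the fixed order, half see a shuffled order.
    function presentOptions(options) {
      const arm = Math.random() < 0.5 ? "fixed" : "random";
      const shown = arm === "random" ? shuffle(options) : options;
      return { arm, shown }; // store "arm" with each record, then compare arms
    }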
If you're lucky, something dramatic happens to highlight the need for a new measure—such as when Lufthansa developed thinner seats which provided more usable space in less “seat pitch,” throwing a wrench in one of the most common cabin metrics.
But more often, it’s simply an accumulation of technical, marketplace, and fashion shifts which might have you coming up with a slightly different set of metrics—if you thought about it today.
So how long has it been since yours had a check-up?
So no matter how unpleasant the message, it's important to remember that all feedback is just information. What matters is how we evaluate it and what we do next.
1. Filter out the tone.

Most of us wouldn't scream at a salesperson face to face, but it happens in writing, whether through clumsiness or the safety of anonymity. Just because someone is shouting at you about how YOUR SHODDY PRODUCT RUINED TIMMY'S CHRISTMAS MORNING! doesn't mean the complaint isn't valid.
The customer isn't always right, but even when they're wrong there may still be a kernel of truth—so dig before dismissing. If nothing else, you may discover a place where you could more clearly set expectations.
I received chapter markups from one reader who wanted more detail, while another felt I was too wordy. Contradiction doesn't make the opinions invalid—both may be fair depending on usage, department, or some other circumstance.
If a customer or employee writes a paragraph about what's wrong, they're more likely to be a disappointed fan than a fussbudget. Even impractical suggestions may still be great brainstorming fodder.
Before you dismiss a comment, look for similar issues—that outlier may just be a different expression of a more common issue.
Whether through coding verbatims or adding a rating question to the next edition, try to get a tally on your issues.
This got me playing around with it a bit, and I ended up with a largely self-explanatory—because it's annotated—play survey here. (At least for a while; if I eventually bump it for something else, I'll update this post with a PDF or other details :-)
And for your experimenting fun, here's a zip of the SP5 file. For those who don't have SurveyPro 5, you can achieve a similar effect with Answer Piping, which is covered in the Dynamics chapter of your NetCollect User Guide.