The objective of a case study for a web application is threefold:
- Find out who your customer is (e.g., their demographics)
- Collect both qualitative (customer testing & feedback) and quantitative (e.g., Google Analytics) data
- Determine actionable steps to improve the area of focus based on the findings
Get your data -> interpret the data -> extrapolate your interpretation into plans of action. Sounds simple enough, right? If only.
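To make the "get your data" step concrete, here's a minimal sketch of what that first pass can look like. It assumes engagement data has been exported from Google Analytics to a CSV; the file name and column names (`sessions`, `avg_session_duration`, `bounce_rate`) are hypothetical, not the real GA export schema:

```python
# Hypothetical sketch: load an exported Google Analytics CSV and
# summarize the engagement metrics we care about. The column names
# are assumptions for illustration, not the actual GA export schema.
import pandas as pd

def summarize_engagement(csv_path: str) -> pd.Series:
    """Return the mean of each engagement metric in a GA export."""
    df = pd.read_csv(csv_path)
    return df[["sessions", "avg_session_duration", "bounce_rate"]].mean()

if __name__ == "__main__":
    print(summarize_engagement("ga_export_beta.csv"))
```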
At my current job, I'm consulting for a company going through the final stages of a complete website overhaul: a completely rewritten, custom back-end and a redesigned front-end with new functionality. We beta tested the new version last week, and now it's time for me to benchmark last week's data against the legacy version's.
The user engagement metrics are...wait for it...the same (actually a little worse, but I'm blaming bugs for that). The qualitative feedback is...that our new version is terrible. This is where it gets difficult, because I need to decide how heavily to weight those responses. We have some very dedicated and powerful users who provide a lot of value to the community and to our website's value proposition, but should we tailor the site to our current users or to potential new users?
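For the benchmarking itself, one sanity check worth running (sketched below with made-up numbers, not our real data) is a two-proportion z-test on something like return-visitor rate, to see whether "a little worse" is just noise or a real regression:

```python
# Hypothetical sketch: compare legacy vs. beta return-visitor rates.
# All counts below are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

returning = [4200, 3950]    # returning visitors: [legacy, beta]
visitors = [10000, 10000]   # total visitors:     [legacy, beta]

stat, p_value = proportions_ztest(returning, visitors)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
# A large p-value means the beta's dip could plausibly be noise
# (e.g., the bugs), not a genuine engagement regression.
```

If the dip doesn't clear a significance bar like this, it strengthens the case for blaming the bugs rather than the redesign.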
I just read in Re-Work that there are always more people not using your product than people who are, so you should build for the mass market. Let's hope there's a balance to be found between the two groups, so we can appease our power users while skating to where the puck is going to be for the mass market.