Aside from general A/B testing best practices, there are several things to keep in mind when testing ecommerce category pages. Consider what makes them unique within your ecommerce experience.
At this step of the customer journey, the goal is to present the best set of results, and the best products first.
If the result set is too broad, the customer should be able to interact with relevant, usable filter, facet and sort tools. Long category lists should be paginated or auto-loaded gracefully, and product list information should offer enough detail to reduce the need to pogo-stick between product pages and the category page, without cluttering the page.
Key metrics to track
Depending on your test hypothesis and which category page elements you are testing, you may track any of the following success metrics:
- Category page bounce rate
- Category page exit rate
- Engagement with sort and filter tools
- Engagement with pagination links
- Product page click-through rate
- “Add to Cart” rate
- Conversion rate
- Items per order
- Average order value
- Revenue per visitor
- Browse-to-search rate (the percentage of visitors who abandon category browsing to perform a search query within the same visit)
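A few of these metrics can be computed straightforwardly from per-visit data. The sketch below is illustrative only — the `Visit` record and its field names are assumptions, not tied to any particular analytics platform:

```python
from dataclasses import dataclass

# Hypothetical per-visit record; field names are illustrative.
@dataclass
class Visit:
    viewed_category: bool = False
    added_to_cart: bool = False
    searched_after_browse: bool = False  # ran a search query after browsing a category
    revenue: float = 0.0

def category_metrics(visits: list) -> dict:
    """Compute a few of the success metrics listed above for one test variant,
    over visitors who actually viewed a category page."""
    browsers = [v for v in visits if v.viewed_category]
    n = len(browsers) or 1  # guard against an empty segment
    return {
        "add_to_cart_rate": sum(v.added_to_cart for v in browsers) / n,
        "revenue_per_visitor": sum(v.revenue for v in browsers) / n,
        "browse_to_search_rate": sum(v.searched_after_browse for v in browsers) / n,
    }
```

For example, one converting visit at $40 plus one browse-then-search visit yields an add-to-cart rate of 0.5 and revenue per visitor of $20.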
Time on site is a less useful metric in a category test, as both good and poor designs can increase time spent within categories. It takes longer to find a suitable product with a clunky, disorganized category experience, and a great category experience may encourage more site exploration.
Items per order is a more precise metric to track for category tests than average order value, as improved category experiences are assumed to lead to more items selected and purchased. Order values may vary across test segments simply because individual cart totals vary widely.
Make sure to segment desktop and mobile visitors, as design and context are very different between devices — even if your site is responsive.
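In practice, this means tallying results per (variant, device) pair rather than pooling all visitors, since a pooled rate can mask a win on one device and a loss on the other. A minimal sketch, assuming a simple list of result rows (the row shape is hypothetical):

```python
from collections import defaultdict

def conversion_by_segment(rows):
    """Tally conversion rate per (variant, device) pair so desktop and
    mobile results are read separately rather than pooled.
    rows: iterable of (variant, device, converted) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for variant, device, converted in rows:
        key = (variant, device)
        totals[key] += 1
        hits[key] += int(converted)
    return {key: hits[key] / totals[key] for key in totals}
```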
Don’t test the small stuff
While you can test virtually anything in your experience, certain design elements have less influence over customer behavior and will provide less return on your testing investment:
- Showing (#) next to filter and facet items (it’s a no-brainer, just do it!)
- Breadcrumb pixel size (unless it’s increased as part of a “radical redesign” overhaul)
- Image rollover to show additional view (won’t make enough difference to the purchase decision at this stage)
- Add to Bag vs Add to Cart button labels, or color (minimal impact on engagement)
- Showing “favorite” icons vs not showing them (a nice feature, but it won’t significantly affect click-through or purchase rates)
- Back to top links (no brainer)
So, what should you consider testing?
Landing Page Layout
Many ecommerce sites have embraced the trend of designing high-level category landing pages similar to home pages. Rather than present product results, these landing pages display hero banners, featured offers and other graphics and calls to action, often alongside category sub-menus.
For example, Cabela’s, Bass Pro Shops and Orvis have all applied custom merchandising to their Fishing categories:
Cabela’s Fishing category landing page
Bass Pro Shops’ Fishing category landing page
Orvis’ Fishing category landing page
If you choose to experiment with category landing pages, it’s a good idea to first test them against a typical product list. Keep in mind that customers click into category pages intending to browse products, and your featured content may not be relevant to customers in that frame of mind. Measure conversion rate, revenue per visitor, bounce/exit rate and sub-menu engagement at the individual category level.
If you drive enough traffic to support a more complex A/B test with more than two treatments, consider adding simple banners and graphic sub-category tiles to the mix. Plenty of examples of category page designs are covered in Chapter 10 of Ecommerce Illustrated.
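A multi-treatment test needs sticky, even visitor assignment. One common approach (a sketch, not a prescription — the treatment names and salt are illustrative) is to hash the visitor ID so the same visitor always lands in the same bucket:

```python
import hashlib

def assign_treatment(visitor_id: str, treatments: list, salt: str = "category-landing-v1") -> str:
    """Deterministically bucket a visitor into one of N treatments.
    Hashing (salt + id) keeps assignment sticky across visits, and
    changing the salt reshuffles buckets for a new experiment."""
    digest = hashlib.sha256(f"{salt}:{visitor_id}".encode()).hexdigest()
    return treatments[int(digest, 16) % len(treatments)]
```

Note this splits traffic roughly evenly; with three treatments, each arm needs correspondingly more total traffic to reach significance.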
Many ecommerce sites present graphic sub-category tiles in lieu of product list results for high-level categories
Some ecommerce sites include sub-category navigation as a link map, rather than a sidebar menu for high-level categories
Like homepage-style treatments, category hero headers should be tested against not showing them. They are often superfluous and only serve to push products below the “fold” – especially problematic on mobile devices.
Some banners are less disruptive. Harry and David might hypothesize that showing a hero banner with a value proposition for Royal Riviera pears increases pear purchases.
The only way to validate this hypothesis is to test it against not showing it.
Filters and Facets
Visibility and placement
Filters and facets support tighter product results, which increases the likelihood that a customer finds the products she wants. Thus, it’s critical that customers notice these tools exist.
During user testing by Baymard Institute, online shoppers were observed both overlooking filter menus and mistaking sort tools for filters. When shoppers don’t notice left-hand navigation but do see a sort drop-down above product results, they often expect their desired filter options to appear in that drop-down, and assume the options don’t exist on the site when they’re missing from the sort menu.
Placing filter and facet menus horizontally above product results increases their visibility, makes room for more products per row (or larger thumbnails) and reduces the likelihood that shoppers mistake sort for filter. More and more sites, including Shoeline, embrace this approach.
Compare this to the mock-up below of a vertical treatment.
Filter menu placement is a great starter test for your category pages. Keep in mind that if you want to control for the placement variable, both menus should be designed very similarly. A radically redesigned horizontal menu, for example, may win because of a factor other than placement, such as background color, text size or menu behavior.
It’s also a good idea to limit the filter menu placement test to new visitors, as die-hard, long-time customers who have learned your site may initially struggle with the change.
Expand and collapse behavior
The beauty of a vertical filter menu is that it can accommodate as many filters and facets as you desire. But the more you have, the harder it is for customers to digest their options, and the further useful sections are pushed down the page if each section is “expanded” by default.
Expand and collapse helps you control the presentation of filters and facets, and the sections you choose to show expanded by default can influence your customer to apply the most useful refinements first — and this can be crafted on a category-by-category basis, depending on the buying context.
For example, Staples presents all its category facets collapsed by default. A conversion optimization specialist may hypothesize that a) expanding “Holiday Card Theme” is more important to the greeting card purchase decision than brand, rating, price or “auto restock” attributes, and b) showing an expanded section draws more attention to the facet menu, and will increase overall engagement with all facets.
Staples’ default facet presentation (left) vs a hypothetical A/B test treatment (right)
Similarly, PacSun may hypothesize that an expanded color section draws more attention to its faceted navigation menu and increases sales and revenue for fashion items.
PacSun’s default facet presentation (above) vs a hypothetical A/B test treatment (below)
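Because the most useful default-expanded facets differ by category, this choice is naturally expressed as per-category configuration. A minimal sketch — the category and facet names below are hypothetical, not taken from any real catalog:

```python
# Fallback for categories without an override.
DEFAULT_EXPANDED = {"price"}

# Per-category overrides: which facet sections render expanded by default.
EXPANDED_FACETS = {
    "greeting-cards": {"holiday-card-theme"},
    "t-shirts": {"color", "size"},
}

def is_expanded(category: str, facet: str) -> bool:
    """Decide whether a facet section renders expanded or collapsed
    for a given category."""
    return facet in EXPANDED_FACETS.get(category, DEFAULT_EXPANDED)
```

An A/B test treatment then becomes a one-line configuration change per category, rather than a template rewrite.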
Sort tools
Visibility and placement
As with filter and facet menus, sort visibility is key. The sort drop-down (or link) is often not treated as the important call to action it is — styled faintly and tucked away in the top left or top right corner of the product list page.
Sometimes sort is unconventionally placed amongst sidebar filters, making it even harder to spot.
One way to increase sort’s visibility is to expand sort options horizontally. This allows customers to scan options without fiddling with a drop-down menu.
Folica draws even more attention to its sort options by highlighting its default sort in green.
It’s not the peripherals that matter most to category page optimization — it’s the product. Testing default sort is one of the simplest ways to increase click through and revenue metrics, as described in Ecommerce Illustrated’s chapter on category merchandising strategies. Consider testing this on a category-by-category basis if your traffic volume and ecommerce platform supports it.
The tradeoff between showing more products per row and showing larger product thumbnails can be quantified with A/B testing.
Many shops, like Dylan’s Candy Bar, already allow customers to toggle between smaller and larger images. If your shop has this feature, consider testing which view you present by default, as many visitors overlook or don’t understand the toggle icon.
Calling out a spotlight item in a category list can increase engagement and sales for that item. Such a test should be run category by category, with the same product pinned to the first position in both treatments.
Quick View
Though Quick View is used by roughly half of top ecommerce sites, it causes several usability issues (covered in Chapter 12 of Ecommerce Illustrated).
If your site uses Quick View, before even thinking about testing how you design Quick View buttons or modal windows, test your category pages with and without Quick View to validate whether it adds value (you may discover test metrics lift when you remove it).
Many shoppers will engage Quick View solely to view zoomed thumbnail images from the category page. Consider testing Quick View against a category list that supports image zoom upon hover.
Hover effects can also be used to reveal additional product information or context, such as available sizes, colors or brief descriptions. Consider testing hover effects against the Quick View modal window. This is a testing scenario where qualitative user testing (in a lab setting, via recorded remote sessions or through services like UserTesting.com) may be more helpful than a quantitative A/B test. You want to “hear” user feedback on why they click Quick View (is it intentional or accidental?) and what their experience is (was it helpful or detrimental to their evaluation?).
Grid vs List view
The rule of thumb(nail) with grid and list view is to apply the view that best serves the customer’s buying criteria for a given product type. With apparel, gifts, toys and some home furnishings, pictures typically matter most. For wine, electronics, software, hardware and office supplies, thumbnail images are often similar or identical in the product list, and additional product information beyond product name, price and average star rating is helpful.
Many ecommerce sites apply one universal view to all categories, which can negatively impact performance within categories for which the opposite view would be more appropriate.
For example, sticky notes are low-ticket products that are differentiated primarily by color, size and price. Grid view supports quick scanning of product results, and quick side-by-side comparison of color, size (in title) and price.
Comparing these attributes in list view requires slower eye movement and more deliberate thought.
If your ecommerce platform supports it and you can budget for development and implementation, apply the most appropriate view on a category-by-category basis. This is a judgment call you can reasonably make without testing (just think like a customer). Because category-by-category testing is resource intensive and can take much longer for less-trafficked categories, reserve A/B testing of list layout for highly trafficked categories where the optimal presentation could reasonably “go either way.”
Many ecommerce sites present truncated lists of sub-category product results on high-level category pages, with links to “see more.”
While this approach aims to push the customer to a more relevant set of product results, shoppers may assume what they see is all there is, and miss the “see more” links entirely. When taking this approach, testing the prominence, placement and clarity of “see more” calls to action is critical. Consider also testing this layout against the graphic sub-category tile alternative.
Pagination vs auto-load is covered in depth in Chapter 14 of Ecommerce Illustrated.
Though auto-load is trendy and provides a more seamless experience than pagination, it should not be implemented without testing, no matter how long it took your team to build it. Just ask Etsy, which, assuming more results meant a better experience, spent months building infinite scroll for its search pages, only to observe a drop in item clicks and favorites. And while conversion and revenue didn’t suffer, customers “just stopped using search to find these items.”
Even if you just use pagination and plan to stick with it, it’s worthwhile to test the pagination design, with the goal of increasing engagement with paginated links and reducing category page abandonment.
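Whichever pagination design you compare, you need a way to judge whether a difference in engagement is signal or noise. A minimal two-proportion z-test sketch using only the standard library (the counts below are made up; a real program would use your testing platform's stats engine and a pre-computed sample size):

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test comparing engagement (or conversion) rates of
    two variants. Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided tail of the standard normal via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

For example, 120 engaged visitors out of 1,000 against 150 out of 1,000 gives a z-statistic of roughly -1.96, right at the conventional 0.05 significance threshold.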
Maximizing category page performance
As with all A/B testing, optimizing your category pages starts with a strong hypothesis, based on an identified conversion problem or underperforming metric.
Keep in mind the contextual differences between mobile experience and desktop, as well as the differences that may exist within your product categories. You may benefit from category-specific tests if traffic volume supports them.
A review of Ecommerce Illustrated Category Chapters 10 through 17 can help you identify potential conversion problems and design test treatments.
Need help with your ecommerce A/B testing strategy? Drop me a line.
Ecommerce Illustrated is a project of Edgacent, an ecommerce advisory group.