126 | Change the header to a sticky header | Mobile
Last updated: 04 July 2025
First validation: 01 May 2016
Last validation: 02 Sep 2024
Number: 126

A/B test results
Number of A/B tests: 4
Win ratio: 25%
Loss ratio: 50%
Inconclusive ratio: 25%
Average impact KPI: -5.4%
KPI: Transactions
Visitors: 1,291,952
Transactions: 39,160
Device: Mobile
Pagetype: Sitewide
Effort to implement: Low
Industry: Furniture & Home, Automotive, Fashion & Shoes, Pets
Theme: Header
Channel: On-Site
Short description
In this experiment, we tested the effect of changing a non-sticky header to a sticky header on a mobile device.
Tips for applying to your e-commerce site
Do’s
- Reconsider implementing sticky headers.
- Optimize all interactive features: if you make a feature more prominent (e.g. internal search), make sure it works well, otherwise it can hurt the user experience.
- Consider Evidoo number 139: semi-sticky headers.
Don'ts
- Overcrowd the header: don't clutter your sticky header with too much information or too many links. Keep it simple and focus on your users' most common tasks.
- Apply this if your internal search performs poorly: we found that the sticky header increased internal search usage, so a weak search experience will hurt you even more.
Hypothesis
The hypothesis behind this experiment is grounded in the following psychological principles:
- Attract attention: The sticky header constantly keeps important navigation links in the user’s visual field, ensuring that calls to action and other key elements remain prominent.
- Make it easier: The easy access to navigation and other tools provided by the sticky header reduces the effort required to browse and purchase, thus enhancing user ability to complete transactions.
Based on this background, we believe that introducing a sticky header for mobile users will increase their motivation and ability to shop and will attract their attention to key areas, leading to an increase in online transactions. We expect to confirm this when we observe an increase in completed purchases.
The experiment results suggest that making the header sticky did not lead to improved conversion. In some cases it may even harm it.
Possible explanations why the impact was not as hypothesized
- Distraction: The sticky header may have drawn attention away from product content or calls to action.
- Reduced viewport space: On small screens, a sticky header reduces visible product space, increasing the need to scroll.
- Micro KPI analysis: A closer look at the results showed a clear negative effect. The sticky header kept the internal search function permanently visible, which prompted more visitors to use it. Because the search function performed poorly, this created a negative user experience that outweighed the benefits of the sticky header.
General
All A/B tests in Evidoo have been analysed using Bayesian statistics. The most important advantage of Bayesian statistics is that the outcome is easy to interpret: for each test we estimate the probability that the variant performs differently from the control, expressed as a percentage.
An A/B test labeled as ">80%" (winner) indicates that the hypothesis has a high probability (more than 80%) of being true.
An A/B test labeled as "21 - 79%" (inconclusive) indicates that the hypothesis has an intermediate probability of being true. There is still too much uncertainty to categorize it as clearly true or false.
An A/B test labeled as "< 20%" (loser) indicates that the hypothesis has less than a 20% probability of being true, which suggests it is likely false.
If the primary KPI is 'transactions', there is no impact on average order value, unless mentioned in the learnings.
All A/B tests in Evidoo are also analysed on secondary KPIs, for example 'add to carts'. If we found remarkable results on other KPIs, check the 'learnings' tab. We also analysed different segments, for example 'new visitors' and 'returning visitors'. If we found remarkable results for specific segments, these are also listed in the 'learnings' tab.
All A/B tests conducted in Evidoo comply with the following:
- conducting sample size and power checks.
- performing the experiment only after an A/A test has been completed.
- implementing Sample Ratio Mismatch (SRM) checks (a minimal example is sketched after this list).
- maintaining a minimum runtime of 2 weeks.
- measuring A/B tests only when they are visible to the website visitor, such as counting an A/B test in the middle of the product page only if the visitor has scrolled to that point.
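To illustrate the SRM check mentioned in the list above, here is a minimal sketch based on a chi-square goodness-of-fit test. It assumes an intended 50/50 traffic split and uses made-up visitor counts; the actual allocation and significance threshold used by Evidoo are not specified in this document.

```python
# Minimal sketch of a Sample Ratio Mismatch (SRM) check using a chi-square test.
# Assumptions (not from the source): a 50/50 traffic split and made-up visitor counts.
from scipy.stats import chisquare

control_visitors = 645_000
variant_visitors = 646_952
observed = [control_visitors, variant_visitors]

total = sum(observed)
expected = [total * 0.5, total * 0.5]  # intended 50/50 allocation

chi2_stat, p_value = chisquare(f_obs=observed, f_exp=expected)

# A very small p-value (commonly < 0.01) signals an SRM: the observed split
# deviates from the intended allocation, so the test results should not be trusted.
if p_value < 0.01:
    print(f"Possible SRM detected (p = {p_value:.4f})")
else:
    print(f"No SRM detected (p = {p_value:.4f})")
```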