Reverse scoring is sometimes used to help verify the internal validity of an assessment and to "surprise" survey takers so they stay attentive throughout the survey.
So how do you interpret the results of a reverse-scored statement in Comparative Agility?
By default, high scores in CA are a "good thing". That is, the survey items are written so that agreeing strongly with a statement contributes to a high score, which is positive in your overall analysis. For instance, a statement like "I enjoy working on my team" is a positive item where a high score is a good thing.
However, for reverse-coded questions, the situation is the exact opposite. For a statement like "I hate being on my team", a high score is a bad thing. (In other words, responding with "Strongly Agree" to this statement is not good.) To reflect this, the scores for reverse-coded items are also reversed: a response of "Strongly Agree" (normally a 5) is translated into a score of 1. So if teams really do hate being on the team, you will see a low score here.
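As a rough illustration, on a 1-to-5 Likert scale the reversal amounts to subtracting the raw response from 6 (so 5 becomes 1, 4 becomes 2, and so on). The sketch below shows this idea in Python; it is only an illustrative example, not Comparative Agility's actual scoring code, and the function and constant names are made up for this example.

```python
# Minimal sketch of reverse scoring on a 1-5 Likert scale.
# Illustrative only; not Comparative Agility's internal implementation.

LIKERT_MAX = 5  # assumed 5-point scale (1 = Strongly Disagree, 5 = Strongly Agree)

def score_response(raw: int, reverse_coded: bool) -> int:
    """Return the scored value for a single survey response."""
    if not 1 <= raw <= LIKERT_MAX:
        raise ValueError(f"Expected a response between 1 and {LIKERT_MAX}, got {raw}")
    # For reverse-coded items, flip the scale: 5 -> 1, 4 -> 2, ..., 1 -> 5.
    return (LIKERT_MAX + 1 - raw) if reverse_coded else raw

# "Strongly Agree" (5) to "I hate being on my team" scores as 1.
print(score_response(5, reverse_coded=True))   # 1
# "Strongly Agree" (5) to "I enjoy working on my team" scores as 5.
print(score_response(5, reverse_coded=False))  # 5
```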
When we provide our "Insights" analytics, we highlight both the high-scoring items (positives) and the low-scoring items (negatives). In this case, such a low score would be flagged as an area for improvement, which is exactly what it should be.