# Standard Report Calculation Reference Guide

## Product Guides for EchoSpan 360-Degree Feedback

This guide outlines how the calculations in our standard 360-degree feedback PDF reports are generated. Note that if you have enlisted EchoSpan to build custom reports for your company, some or all of the calculation methods below may be overridden by your own rules.

### Item-level Scores

**Item Scores for a "Relationship Group"**

Item scores are calculated for each Rater group by taking the average of all responses from Respondents in a particular group (Peers, Direct Reports, etc.). In order for a score to be calculated, the following criteria must be met for each rating:

- The Rater must be in an "In Progress" or "Finished" status.
- The review question must be a rated item, reverse-scored item or a behavioral anchor item.
- The response must be non-zero (calculations exclude "not observed" or "not applicable" ratings).
- The question cannot be a "hidden" item.
- The number of responses to the item cannot be below the minimum response filter value (optional).
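
As a rough illustration, the filtering rules above can be sketched in Python. This is not EchoSpan's actual code or data model; the field names (`status`, `type`, `response`, `hidden`) and the `min_responses` parameter are hypothetical stand-ins for the criteria listed above.

```python
from statistics import mean

VALID_STATUSES = {"In Progress", "Finished"}
SCORED_TYPES = {"rated", "reverse-scored", "behavioral-anchor"}

def item_score(ratings, min_responses=0):
    """Average the eligible ratings for one item within one Rater group.

    Each rating is a dict like:
      {"status": "Finished", "type": "rated", "response": 4, "hidden": False}
    Returns None when no score can be calculated.
    """
    eligible = [
        r["response"]
        for r in ratings
        if r["status"] in VALID_STATUSES   # Rater in progress or finished
        and r["type"] in SCORED_TYPES      # scored item types only
        and r["response"] != 0             # exclude "not observed"/"N/A"
        and not r["hidden"]                # hidden items are skipped
    ]
    # Optional minimum response filter suppresses low-n scores.
    if len(eligible) < max(min_responses, 1):
        return None
    return mean(eligible)
```

For example, with two eligible responses of 4 and 2, `item_score` returns 3.0; raising `min_responses` above 2 suppresses the score entirely.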

**Item Scores for the "Self-Rater"**

Item scores for the "Self-Rater" are presented as entered and must meet the following criteria:

- The review question must be a rated item, reverse-scored item or a behavioral anchor item.
- The response must be non-zero (calculations exclude "not observed" or "not applicable" ratings).
- The question cannot be a "hidden" item.

**Item Scores for the "Overall" or "All Raters" Group**

Item scores for "All Raters" are calculated by taking the average of all responses from Respondents as if they belonged to a single relationship group. In order for a score to be calculated, the following criteria must be met for each rating:

- The Rater is not a Self-Rater.
- The Rater must be in an "In Progress" or "Finished" status.
- The review question must be a rated item, reverse-scored item or a behavioral anchor item.
- The response must be non-zero (calculations exclude "not observed" or "not applicable" ratings).
- The question cannot be a "hidden" item.

**Item Scores for the "Benchmark"**

Item scores for "Benchmark" are calculated by taking the average of all responses from Respondents as if they belonged to a single relationship group *for all Targets in the review project*. In order for a score to be calculated, the following criteria must be met for each rating:

- The Rater is not a Self-Rater.
- The Rater must be in an "In Progress" or "Finished" status.
- The review question must be a rated item, reverse-scored item or a behavioral anchor item.
- The response must be non-zero (calculations exclude "not observed" or "not applicable" ratings).
- The question cannot be a "hidden" item.

### Competency-level Scores

**Competency Scores for a "Relationship Group"**

Competency scores for each "Relationship Group" are calculated by taking the average of all item scores for that "Relationship Group" (Peers, Direct Reports, etc.). This way, every item score carries the same weight in the competency score, regardless of the number of Respondents to each item. Blank or "0" item scores are omitted. The competency must be a "scored" competency (i.e., not classified as a comments-only review section; this is the default setting, but it should be verified if report output is unexpected).
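
To make the equal-weighting point concrete, here is a minimal Python sketch (not EchoSpan's actual code): item *scores*, not individual responses, are averaged, so an item rated by 2 people counts as much as one rated by 10.

```python
from statistics import mean

def competency_score(item_scores):
    """Average the item scores for a competency, skipping blank or zero
    item scores, so each item counts equally regardless of how many
    Respondents rated it. Illustrative sketch only."""
    scored = [s for s in item_scores if s]  # drops None and 0
    return mean(scored) if scored else None
```

`competency_score([4.0, None, 0, 3.0])` returns 3.5: the blank and zero item scores are skipped.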

**Competency Scores for the "Overall" or "All Raters" Group**

Competency scores for "All Raters" are calculated by taking the average of all item scores for the "Overall" or "All Raters" group. This way, every item score carries the same weight in the competency score, regardless of the number of Respondents to each item. Blank or "0" item scores are omitted. The competency must be a "scored" competency (i.e., not classified as a comments-only review section; this is the default setting, but it should be verified if report output is unexpected).

**Competency Scores for the "Benchmark"**

Competency scores for the "Benchmark" group (all Targets in the review project) are calculated by taking the average of all item scores for the "Benchmark" group. This way, every item score carries the same weight in the competency score, regardless of the number of Respondents to each item. Blank or "0" item scores are omitted. The competency must be a "scored" competency (i.e., not classified as a comments-only review section; this is the default setting, but it should be verified if report output is unexpected).

### Overall Review Score

The Overall Review Score can be calculated for the "Overall" or "All Raters" group, as well as for each individual Rater group. In either case, it is equal to the average of all item scores across the entire review or across a particular respondent group, respectively. Note that the *n* value for the Overall Review Score is equal to the smallest "All Raters" *n* value on any individual review item. So, for example, if 10 Raters respond for a particular Target but only 3 answer a particular question, the *n* value for the Overall Review Score is 3. This highly conservative approach protects Rater anonymity. It is particularly important when using minimum response filters in your reports, as the filter may cause the Overall Review Score not to display in some cases.
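
The overall score and its conservative *n* can be sketched as follows (hypothetical Python, assuming each item contributes a `(score, n)` pair; not EchoSpan's actual implementation):

```python
from statistics import mean

def overall_review_score(item_stats):
    """item_stats: list of (item_score, n_responses) pairs for a group.
    Returns (overall_score, conservative_n); the reported n is the
    smallest item-level n, per the anonymity rule above. Sketch only."""
    scored = [(s, n) for s, n in item_stats if s]  # skip blank/zero items
    if not scored:
        return None, 0
    overall = mean(s for s, _ in scored)
    # e.g. 10 Raters overall, but one item answered by only 3 -> n = 3
    n = min(n for _, n in scored)
    return overall, n
```

`overall_review_score([(4.0, 10), (3.0, 3)])` returns `(3.5, 3)`: the score averages the items, while *n* is the smallest item-level *n*.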

### Super-Competency-level Scores

**Super-Competency Scores for a "Relationship Group"**

Super-Competency scores for each "Relationship Group" are calculated by taking the average of all item scores for that "Relationship Group" (Peers, Direct Reports, etc.) across all of the Competencies that are mapped to the Super-Competency. This way, every item score carries the same weight in the super-competency score, regardless of the number of Respondents to each item. Blank or "0" item scores are omitted. Super-Competencies must be made up of "scored" competencies (i.e., not classified as comments-only review sections; this is the default setting, but it should be verified if report output is unexpected).

**Super-Competency Scores for the "Overall" or "All Raters" Group**

Super-Competency scores for "All Raters" are calculated by taking the average of all item scores for the "Overall" or "All Raters" group across all of the Competencies that are mapped to the Super-Competency. This way, every item score carries the same weight in the super-competency score, regardless of the number of Respondents to each item. Blank or "0" item scores are omitted. Super-Competencies must be made up of "scored" competencies (i.e., not classified as comments-only review sections; this is the default setting, but it should be verified if report output is unexpected).
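
Assuming a simple mapping of Competencies to a Super-Competency, the pooling described above might look like this hypothetical sketch (competency and mapping names are invented for illustration):

```python
from statistics import mean

def super_competency_score(item_scores_by_competency, mapping, super_name):
    """Average all item scores across the Competencies mapped to a
    Super-Competency; every item weighs equally across competencies.
    Illustrative sketch only, not EchoSpan's actual code."""
    pooled = [
        s
        for comp in mapping[super_name]
        for s in item_scores_by_competency[comp]
        if s  # skip blank or "0" item scores
    ]
    return mean(pooled) if pooled else None
```

With a hypothetical mapping of "Leadership" to the "Coaching" and "Vision" competencies, item scores `[4.0, None]` and `[3.0, 5.0]` pool to an average of 4.0.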

### Quartiles

Quartiles rank Targets by segmenting them into four equal groups based on the distribution of a particular rating or score. For projects where quartile rankings are activated, "All Raters" and relationship-specific quartiles for the review, competencies and items are computed as described below. Please note that ranking scores are not compatible with Rater pooling, but can take into account minimum response filters. If you have Rater pooling turned ON when quartiles are also activated, Rater pooling will be automatically disabled when quartiles are computed.

- Quartiles are automatically calculated and stored when the Administrator turns OFF the Feedback Phase under the Project menu option >> Project Phases, or when the Project Autopilot disables feedback. Feedback cannot be received while quartiles are being calculated as the inclusion of additional ratings will invalidate computations.
- The system will verify that the project contains at least 20 Targets to include in the population. Smaller projects will not work with the quartile engine.
- The system will compute the scores to be included in the quartile ranking, subject to minimum response filter settings. These settings can be modified under the Reports menu option >> Rank Calculations.
- The system will reverse rank the Targets in the sample (more favorable rankings are higher) by the score being included in the quartile calculation (review overall score, competency score or item score). When ties are encountered during ranking, all Targets with the same score are assigned the lowest rank position in their tied block. For example, if three Targets score 4.7 on a given item and the next-highest score of 4.8 holds rank position 28, then all three Targets tied at 4.7 are assigned rank position 25.
- The system will segment the sample population into quartiles based on the rank score where higher scores fall into lower quartiles. In the event ties are encountered in the quartile-creation process, the system will defer to the review's Overall Review Score (described above) as a tie-breaker. The quartile-creation process will attempt to divide the population into four equal groups. Sometimes, due to ties that exist even after attempting to resolve by reference to the Overall Review Score, or, due to a small or odd number of Targets, this is not possible and quartiles will not contain equal numbers of Targets.
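
The tie-handling rule in the ranking step above can be illustrated with a small Python sketch (illustrative only; EchoSpan's quartile engine may differ in detail):

```python
def reverse_rank(scores):
    """Reverse rank a list of scores (higher score -> higher, more
    favorable rank number). Tied scores all share the lowest rank
    position in their tied block, matching the 4.7/4.8 example above."""
    ranks = [0] * len(scores)
    ordered = sorted(range(len(scores)), key=lambda i: scores[i])
    pos = 1
    while ordered:
        # Tied scores sort adjacently, so the tied block is at the front.
        tied = [i for i in ordered if scores[i] == scores[ordered[0]]]
        for i in tied:
            ranks[i] = pos  # shared lowest position of the tied block
        pos += len(tied)
        ordered = ordered[len(tied):]
    return ranks
```

`reverse_rank([4.8, 4.7, 4.7, 4.7, 4.5])` returns `[5, 2, 2, 2, 1]`: the three tied Targets share position 2, and the 4.8 scorer still ranks above them at 5.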

### Percentiles

Percentiles rank Targets by segmenting them into 100 groups according to the distribution of a particular rating or score. The percentile score can be thought of as the percent of Targets that scored the same or lower than a given Target did on a particular measure. For projects where percentiles are activated, "All Raters" and relationship-specific percentiles for the review, competencies and items are computed as described below. Please note that ranking scores are not compatible with Rater pooling, but can take into account minimum response filters. If you have Rater pooling turned ON when percentiles are also activated, Rater pooling will be automatically disabled when percentiles are computed.

- Percentiles are automatically calculated and stored when the Administrator turns OFF the Feedback Phase under the Project menu option >> Project Phases, or when the Project Autopilot disables feedback. Feedback cannot be received while percentiles are being calculated as the inclusion of additional ratings will invalidate computations.
- The system will verify that the project contains at least 20 Targets to include in the population. Smaller projects will not work with the percentile engine.
- The system will compute the scores to be included in the percentile ranking, subject to minimum response filter settings. These settings can be modified under the Reports menu option >> Rank Calculations.
- The system will rank the Targets in the sample (more favorable rankings are lower, with 1 being best) by the score being included in the percentile calculation (review overall score, competency score or item score). When ties are encountered during ranking, all Targets with the same score are assigned the lowest rank position in their tied block. For example, if in a sample of 76 Targets the best score of 3.9 is held by 3 individuals, each receives a rank of 1, and the next-best score is ranked 4. Ranks may therefore be nonconsecutive.
- The system will convert the rank to a percentile. The percentile is calculated as the percentage of Targets that have the same or lower score than the Target in question. Continuing the example above, if a particular Target's rank on an item is 3 out of 76, that Target sits in the 97th percentile, as only 2 of the 76 Targets (2.6 percent) scored higher.
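
Based on the rank-3-of-76 example, the rank-to-percentile conversion can be sketched as follows (assuming the reported percentile is rounded down, as the example implies; this is an illustration, not EchoSpan's actual formula):

```python
import math

def percentile(rank, population):
    """Convert a competition rank (1 = best; ties share the lowest rank
    number) into the percent of Targets scoring the same or lower."""
    scored_higher = rank - 1
    return math.floor((population - scored_higher) / population * 100)
```

`percentile(3, 76)` returns 97, matching the example; a sole best scorer (`percentile(1, 76)`) lands in the 100th percentile.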

### Additional Notes

- Standard deviation of ratings for a Target or project is calculated using the MSSQL STDEVP function unless noted otherwise.
- Frequency distributions are computed by counting the ratings from non-self Raters in the "Finished" or "In Progress" status dispositions at the time the report was run.
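
For reference, the population standard deviation that MSSQL's STDEVP computes (dividing by *N* rather than *N* - 1) is equivalent to the following Python:

```python
import math

def stdevp(values):
    """Population standard deviation, matching MSSQL's STDEVP
    (divide by N, not N - 1 as the sample statistic STDEV does)."""
    n = len(values)
    mu = sum(values) / n
    return math.sqrt(sum((v - mu) ** 2 for v in values) / n)
```

`stdevp([2, 4, 4, 4, 5, 5, 7, 9])` returns 2.0 (mean 5, population variance 4).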
