Complexity

Complexity is the number of paths through the code, typically measured at the level of individual methods or functions. The more complex the code, the harder it is to understand its control flow, the harder it is to test, and the less predictable its behavior. This characteristic is used to spot disproportionate complexity so that it can be distributed more evenly among components or, where possible, eliminated.

  • Complexity – The Cyclomatic Complexity, calculated from the number of paths through the code. The more complex the code, the harder it is to understand and test, and the less predictable its behavior.
  • Cognitive Complexity – How hard it is to understand the code's control flow.
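
To make the path-counting idea concrete, here is a minimal sketch that approximates Cyclomatic Complexity for a Python snippet as one plus the number of decision points. The set of counted node types is a simplification for illustration, not the exact rule set a real analyzer uses.

```python
import ast

# Branching constructs that each add one path (an approximation:
# Cyclomatic Complexity = 1 + number of decision points).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate the Cyclomatic Complexity of a code snippet."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, BRANCH_NODES)
                    for node in ast.walk(tree))
    return 1 + decisions

snippet = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"
"""
print(cyclomatic_complexity(snippet))  # two `if` decisions -> 3
```

The `elif` branch parses as a nested `if`, so the snippet contains two decision points and the approximation yields 3.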

Duplications

Duplications indicate the presence of copy-pasted code, which enlarges the code base, typically signals flaws in the software design, and makes the code harder to maintain and adapt. Duplicated blocks are repeated sequences of lines containing the same successive statements, regardless of differences in indentation and literals. Each repetition detected increases the counts of duplicated blocks, files, and lines, as well as the percentage of lines involved in duplications.

  • Duplicated blocks – Number of duplicated blocks of lines.
  • Duplicated files – Number of files involved in duplications.
  • Duplicated lines – Number of lines involved in duplications.
  • Duplicated lines (%) – Percentage of lines involved in duplications.
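
As an illustration of how repeated sequences can be found regardless of indentation and literals, the following sketch normalizes each line (stripping indentation, masking string and numeric literals) and compares fixed-size windows of lines. The three-line window is an arbitrary choice for the example, not a real detector's threshold.

```python
import re
from collections import defaultdict

def normalized(line: str) -> str:
    """Strip indentation and mask literals, as duplicate detection does."""
    line = line.strip()
    line = re.sub(r'"[^"]*"', '"..."', line)  # mask string literals
    line = re.sub(r'\b\d+\b', 'N', line)      # mask numeric literals
    return line

def duplicated_blocks(lines, window=3):
    """Find repeated sequences of `window` normalized lines."""
    seen = defaultdict(list)
    for i in range(len(lines) - window + 1):
        key = tuple(normalized(l) for l in lines[i:i + window])
        seen[key].append(i)
    return {k: v for k, v in seen.items() if len(v) > 1}

code = [
    'total = 0',
    'for x in data:',
    '    total += x * 2',
    '# elsewhere',
    'total = 10',
    'for x in data:',
    '        total += x * 7',   # different indentation and literals
]

dups = duplicated_blocks(code)
# Lines covered by any duplicated block, for the percentage metric.
dup_lines = {i + k for locs in dups.values() for i in locs for k in range(3)}
print(len(dups), sorted(dup_lines))
```

Despite the differing indentation and literal values, the two three-line loops are reported as one duplicated block covering six of the seven lines.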

Issues

The code may contain a number of typical issues that are recognized on the basis of rules and code patterns. They are classified by type as bugs, code smells, and vulnerabilities, and by severity as blocker, critical, major, minor, or info. After developers or reviewers analyze them, issues may be marked as false positives, confirmed, or left open, and they can later be reopened. Resolved issues are closed on the subsequent scan.

  • New issues – Number of issues raised for the first time in the New Code period.
  • New xxx issues – Number of issues of the specified severity raised for the first time in the New Code period, where xxx is one of: blocker, critical, major, minor, info.
  • Issues – Total count of issues in all states.
  • <severity> issues – Total count of issues of the specified severity, where <severity> is one of: blocker, critical, major, minor, info.
  • False positive issues – Total count of issues marked False Positive.
  • Open issues – Total count of issues in the Open state.
  • Confirmed issues – Total count of issues in the Confirmed state.
  • Reopened issues – Total count of issues in the Reopened state.
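
A minimal sketch of how these counts aggregate over issue records; the dictionaries and their field names below are illustrative assumptions, not an actual issue schema or API.

```python
from collections import Counter

# Illustrative issue records; field names are assumptions for the sketch.
issues = [
    {"type": "BUG", "severity": "BLOCKER", "status": "OPEN"},
    {"type": "CODE_SMELL", "severity": "MINOR", "status": "CONFIRMED"},
    {"type": "VULNERABILITY", "severity": "CRITICAL", "status": "REOPENED"},
    {"type": "CODE_SMELL", "severity": "MAJOR", "status": "OPEN"},
    {"type": "BUG", "severity": "MAJOR", "status": "FALSE_POSITIVE"},
]

# "<severity> issues" counts one bucket per severity level.
by_severity = Counter(i["severity"] for i in issues)
# State-based counts: open, confirmed, reopened, false positive.
open_issues = sum(i["status"] == "OPEN" for i in issues)
false_positives = sum(i["status"] == "FALSE_POSITIVE" for i in issues)

print(by_severity["MAJOR"], open_issues, false_positives)  # 2 2 1
```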

Maintainability

Maintainability is based on the number of code smells, i.e. suspicious places in the code that indicate possible weaknesses in design or readability; on technical debt, i.e. the effort to fix all code smells, estimated in minutes or workdays; or on the technical debt ratio, the ratio between the cost to develop the software and the cost to fix it, based on the time cost of the issues and an estimate of the time needed to write the given number of lines of code.

  • Code Smells – Total count of Code Smell issues.
  • New Code Smells – Total count of Code Smell issues raised for the first time in the New Code period.
  • Maintainability Rating – The (SQALE) rating given to the project, derived from its Technical Debt Ratio.
  • Technical Debt – Effort to fix all Code Smells. The measure is stored in minutes in the database. An 8-hour day is assumed when values are shown in days.
  • Technical Debt on New Code – Effort to fix all Code Smells raised for the first time in the New Code period.
  • Technical Debt Ratio – Ratio between the cost to develop the software and the cost to fix it, based on the time cost of the issues and the estimate of the time to write the given number of lines of code.
  • Technical Debt Ratio on New Code – Ratio between the cost to develop the code changed in the New Code period and the cost of the issues linked to it.
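
The relationship between Technical Debt, the Technical Debt Ratio, and the Maintainability Rating can be sketched as follows. The 30-minutes-per-line development cost and the A-E rating thresholds used here are assumed defaults; both are typically configurable.

```python
def technical_debt_ratio(remediation_minutes, lines_of_code,
                         minutes_per_line=30):
    """Remediation cost as a percentage of development cost.

    `minutes_per_line` is the assumed cost to develop one line of code
    (30 minutes is used here as an assumed, configurable default).
    """
    development_cost = lines_of_code * minutes_per_line
    return 100.0 * remediation_minutes / development_cost

def sqale_rating(ratio_percent):
    """Map a debt ratio to a letter grade.

    Assumed default grid: A <= 5%, B <= 10%, C <= 20%, D <= 50%, else E.
    """
    for grade, limit in (("A", 5), ("B", 10), ("C", 20), ("D", 50)):
        if ratio_percent <= limit:
            return grade
    return "E"

# 2400 minutes (5 workdays) of debt on a 10,000-line project.
ratio = technical_debt_ratio(remediation_minutes=2400, lines_of_code=10000)
print(round(ratio, 1), sqale_rating(ratio))  # 0.8 A
```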

Reliability

The primary indication of reliability is the number of bug issues. The difficulty of individual issues, their number, statuses, types, and severities are used to determine reliability rating and reliability remediation effort.

  • Bugs – Number of bug issues.
  • New Bugs – Number of new bug issues.
  • Reliability Rating – A-E, depending on the presence of minor, major, critical, or blocker bugs.
  • Reliability remediation effort – Effort to fix all bug issues. The measure is stored in minutes in the DB. An 8-hour day is assumed when values are shown in days.
  • Reliability remediation effort on new code – Same as Reliability remediation effort but on the code changed in the New Code period.
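
A sketch of how the Reliability Rating can be derived from the worst bug severity present; the severity-to-grade mapping below is an assumed default.

```python
# Assumed default mapping: no bugs -> A; the worst bug present
# determines the grade otherwise. Info-level bugs are ignored here.
SEVERITY_TO_RATING = {
    None: "A",
    "MINOR": "B",
    "MAJOR": "C",
    "CRITICAL": "D",
    "BLOCKER": "E",
}
ORDER = ["MINOR", "MAJOR", "CRITICAL", "BLOCKER"]

def reliability_rating(bug_severities):
    """Return the A-E grade implied by the worst bug severity."""
    worst = None
    for sev in bug_severities:
        if sev in ORDER and (worst is None
                             or ORDER.index(sev) > ORDER.index(worst)):
            worst = sev
    return SEVERITY_TO_RATING[worst]

print(reliability_rating([]))                     # no bugs -> A
print(reliability_rating(["MINOR", "CRITICAL"]))  # worst is critical -> D
```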

Security

This characteristic of the code is based on vulnerability issues, i.e. suspicious places in the code that indicate possible security weaknesses; on security remediation effort, i.e. the effort to fix all vulnerabilities, estimated in minutes or workdays; and on the security rating, which is determined by the presence of vulnerabilities of various severities.

  • Vulnerabilities – Number of vulnerability issues.
  • Vulnerabilities on new code – Number of new vulnerability issues.
  • Security Rating – A-E, depending on the presence of minor, major, critical, or blocker vulnerabilities.
  • Security remediation effort – Effort to fix all vulnerability issues. The measure is stored in minutes in the DB. An 8-hour day is assumed when values are shown in days.
  • Security remediation effort on new code – Same as Security remediation effort but on the code changed in the New Code period.
  • Security Hotspots – Number of Security Hotspots.
  • Security Hotspots on new code – Number of new Security Hotspots in the New Code period.
  • Security Review Rating – A letter grade based on the percentage of Reviewed (Fixed or Safe) Security Hotspots.
  • Security Review Rating on new code – Identical to the Security Review Rating but restricted to new code.
  • Security Hotspots Reviewed – Percentage of Reviewed (Fixed or Safe) Security Hotspots.
  • New Security Hotspots Reviewed – Percentage of Reviewed (Fixed or Safe) Security Hotspots for new code.
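
The Security Review Rating and the reviewed-hotspot percentage relate as sketched below; the letter-grade thresholds (A at 80% or more, B at 70%, C at 50%, D at 30%, otherwise E) are assumed defaults.

```python
def hotspots_reviewed(reviewed, total):
    """Percentage of Security Hotspots in a reviewed (Fixed or Safe) state."""
    return 100.0 * reviewed / total if total else 100.0

def security_review_rating(reviewed_percent):
    """Letter grade from the reviewed percentage (assumed thresholds)."""
    if reviewed_percent >= 80:
        return "A"
    if reviewed_percent >= 70:
        return "B"
    if reviewed_percent >= 50:
        return "C"
    if reviewed_percent >= 30:
        return "D"
    return "E"

# 9 of 12 hotspots reviewed -> 75%, which falls in the B band.
pct = hotspots_reviewed(reviewed=9, total=12)
print(round(pct), security_review_rating(pct))  # 75 B
```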


Size

These metrics describe the size of the code base and how it is commented. They include the number of classes, the number of comment lines (lines containing either comments or commented-out code), the comment density, and the numbers of directories, files, lines, lines of code, functions and methods, and statements.

  • Classes – Number of classes (including nested classes, interfaces, enums and annotations).
  • Comment lines – Number of lines containing either comment or commented-out code.
  • Comments (%) – Density of comment lines, i.e. comment lines as a proportion of comment lines plus lines of code.
  • Directories – Number of directories.
  • Files – Number of files.
  • Lines – Number of physical lines (number of carriage returns).
  • Lines of code – Number of physical lines that contain at least one character which is not whitespace or part of a comment.
  • Functions – Number of functions. Depending on the language, a function is either a function or a method or a paragraph.
  • Projects – Number of projects in a Portfolio.
  • Statements – Number of statements.
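
A rough sketch of how the line-oriented size metrics differ from one another on a Python snippet; it deliberately ignores docstrings, multi-line constructs, and inline comments, so it is an approximation, not a real analyzer.

```python
def size_metrics(source: str):
    """Count physical lines, lines of code, and comment lines (approximate)."""
    physical = source.splitlines()
    comment = [l for l in physical if l.strip().startswith("#")]
    code = [l for l in physical
            if l.strip() and not l.strip().startswith("#")]
    density = 100.0 * len(comment) / (len(code) + len(comment))
    return {"lines": len(physical), "ncloc": len(code),
            "comment_lines": len(comment), "comment_density": density}

sample = """\
# add two numbers
def add(a, b):
    return a + b

# x = add(1, 1)  (commented-out code counts as a comment line)
x = add(2, 3)
"""
m = size_metrics(sample)
print(m["lines"], m["ncloc"], m["comment_lines"])  # 6 3 2
```

Note how the blank line counts toward Lines but toward neither Lines of code nor Comment lines, so the three measures generally differ.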

Tests

These metrics are associated with unit tests. Coverage-related measures include condition coverage, i.e. whether the Boolean expressions in the code were evaluated both to true and to false, reported by line, by condition, and as uncovered conditions; line coverage, i.e. the portion of executable lines covered during the execution of the unit tests; and the number of lines of code not covered by unit tests. Measures directly related to the unit tests themselves include the number of unit tests, the number of skipped unit tests, the time required to execute all the unit tests, the numbers of unit tests that have failed or failed with an unexpected exception, and the percentage of unit tests that passed without errors or failures.

  • Condition coverage – For all lines of code with boolean expressions, whether the expression was evaluated both to true and false.
  • Condition coverage on new code – Identical to Condition coverage but restricted to the new / updated source code.
  • Condition coverage hits – List of covered conditions.
  • Conditions by line – Number of conditions by line.
  • Covered conditions by line – Number of covered conditions by line.
  • Coverage – A mix of Line coverage and Condition coverage, providing a more accurate estimate of how much of the source code has been covered by the unit tests.
  • Coverage on new code – Identical to Coverage but restricted to the new / updated source code.
  • Line coverage – Portion of executable lines covered during the execution of the unit tests.
  • Line coverage on new code – Identical to Line coverage but restricted to the new / updated source code.
  • Line coverage hits – List of covered lines.
  • Lines to cover – Number of lines of code which could be covered by unit tests (for example, blank lines or full comments lines are not considered as lines to cover).
  • Lines to cover on new code – Identical to Lines to cover but restricted to the new / updated source code.
  • Skipped unit tests – Number of skipped unit tests.
  • Uncovered conditions – Number of conditions which are not covered by unit tests.
  • Uncovered conditions on new code – Identical to Uncovered conditions but restricted to the new / updated source code.
  • Uncovered lines – Number of lines of code which are not covered by unit tests.
  • Uncovered lines on new code – Identical to Uncovered lines but restricted to the new / updated source code.
  • Unit tests – Number of unit tests.
  • Unit tests duration – Time required to execute all the unit tests.
  • Unit test errors – Number of unit tests that have failed.
  • Unit test failures – Number of unit tests that have failed with an unexpected exception.
  • Unit test success density (%) – Percentage of unit tests passed without errors or failures.
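
The combined Coverage metric mixes the two underlying measures. A sketch of the arithmetic, using the commonly documented formula Coverage = (CT + CF + LC) / (2*B + EL), where CT and CF are the conditions evaluated at least once to true and to false respectively, LC the covered lines, B the total conditions, and EL the executable lines:

```python
def line_coverage(covered_lines, executable_lines):
    """LC / EL, as a percentage."""
    return 100.0 * covered_lines / executable_lines

def condition_coverage(evaluated_true, evaluated_false, conditions):
    """(CT + CF) / (2*B): each condition contributes two outcomes."""
    return 100.0 * (evaluated_true + evaluated_false) / (2 * conditions)

def coverage(evaluated_true, evaluated_false, covered_lines,
             conditions, executable_lines):
    """Combined coverage = (CT + CF + LC) / (2*B + EL)."""
    return (100.0 * (evaluated_true + evaluated_false + covered_lines)
            / (2 * conditions + executable_lines))

# 8 of 10 executable lines hit; of 4 conditions, all 4 were evaluated
# to true at least once but only 3 were also evaluated to false.
print(round(line_coverage(8, 10), 1))          # 80.0
print(round(condition_coverage(4, 3, 4), 1))   # 87.5
print(round(coverage(4, 3, 8, 4, 10), 1))      # 83.3
```

The combined value lands between the two component measures, which is why it gives a more balanced estimate than either one alone.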