Quality assessment and awarding

Site: EOSC-Synergy
Course: Software Quality Assurance as a Service
Book: Quality assessment and awarding

1. Intro

The Quality Assessment & Awarding (QAA) module analyzes the level of compliance of a given code repository with a set of standards for software.

What does the QAA module bring?

For any given code repository, the QAA performs an assessment through the selection and subsequent execution of the right set of open source tools. The tools to be run for each quality criterion (such as licensing, documentation, unit or security testing) are defined beforehand, and their outputs are parsed in order to certify whether the criterion was fulfilled or not.
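
As a rough illustration of this predefined mapping, the relation between criteria and tools can be thought of as a simple lookup table. The tool names below are only examples of open source tools that could cover each criterion; they are not necessarily the ones the SQAaaS actually selects.

```python
# Illustrative only: a minimal criterion-to-tools lookup, not the actual
# SQAaaS tooling metadata. The tool names are examples of the kind of
# open source tools that could cover each criterion.
CRITERION_TOOLS = {
    "QC.Lic": ["licensee"],        # licensing checks
    "QC.Doc": ["markdownlint"],    # documentation checks
    "QC.Uni": ["pytest"],          # unit testing
    "QC.Sec": ["bandit"],          # security static analysis
}

def tools_for(criterion: str) -> list[str]:
    """Return the predefined set of tools to run for a given criterion."""
    return CRITERION_TOOLS.get(criterion, [])
```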

With this information the QAA provides two main outcomes:

1. A quality report with the results of the assessment. The validity of each quality criterion is computed according to the outputs provided by the tools.
2. A digital badge highlighting the achievements of the software. The SQAaaS supports three types of badges for software which, from lowest to highest level of quality, are: bronze, silver and gold.

What happens under the hood?

Unsurprisingly, the assessment process uses a CI/CD pipeline to execute the complete set of tools that will evaluate the multiple quality attributes covered in the criteria. The pipeline is composed of several stages and defined according to the following requirements:

- each stage in the pipeline executes a single tool,
- the stages are run sequentially, and
- the execution of the pipeline is not interrupted if a stage fails.
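
The behaviour described above can be sketched as a plain sequential loop. This is only a conceptual model of the pipeline (the SQAaaS builds an actual CI/CD pipeline); the stage definitions and commands are hypothetical.

```python
import subprocess

# Hypothetical stage list: one tool per stage, as described above.
STAGES = [
    ("QC.Lic", ["licensee", "detect", "."]),
    ("QC.Doc", ["markdownlint", "docs/"]),
    ("QC.Uni", ["pytest", "--cov"]),
]

def run_pipeline(stages):
    """Run every stage sequentially; a failing stage does not stop the run."""
    results = {}
    for criterion, command in stages:
        # capture_output keeps stdout/stderr for later validation
        completed = subprocess.run(command, capture_output=True, text=True)
        results[criterion] = {
            "exit_code": completed.returncode,
            "stdout": completed.stdout,
        }
        # No early exit: the remaining stages run even if this one failed.
    return results
```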

Certifying the results

The selection and execution of the appropriate tools that take part in the quality assessment process must be accompanied by the validation of their outputs. It is therefore not enough to rely on the exit status of each tool: inspecting its output is the only way to ensure that a given quality attribute has been properly evaluated.
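
As a minimal example of why exit status alone is insufficient: a test run with coverage typically exits with status 0 regardless of how much code is covered, so the reported percentage has to be extracted from the output. The sketch below assumes a coverage summary line in the style printed by coverage.py / pytest-cov.

```python
import re

def coverage_fulfilled(stdout: str, threshold: int = 80) -> bool:
    """Exit code 0 is not enough: parse the reported coverage percentage.

    Assumes a summary line such as 'TOTAL  120  10  92%', as printed
    by coverage.py / pytest-cov.
    """
    match = re.search(r"TOTAL.*?(\d+)%", stdout)
    return bool(match) and int(match.group(1)) >= threshold
```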

Digital badges are the result of the certification process. They are issued using the Badgr platform, which implements the Open Badges specification. Hence, each badge has associated metadata that the SQAaaS uses to store relevant data about the quality assessment process, such as pointers to the standard (with the definition of the quality criteria) or to the build data, using permanent links to the continuous integration (CI) system. The image below shows the metadata, as displayed by Badgr, for an awarded badge:
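
As a rough sketch of the kind of metadata a badge can carry, consider the structure below. The field names and URLs are purely illustrative and do not reproduce the exact Open Badges or Badgr schema.

```python
# Illustrative sketch of badge metadata; field names and URLs are
# examples only, not the exact Open Badges / Badgr schema.
badge_metadata = {
    "badge_class": "SQAaaS software badge (silver)",
    "criteria_url": "https://example.org/sqa-baseline",  # hypothetical link to the standard
    "evidence": [
        "https://ci.example.org/job/assessment/42/",     # hypothetical permanent CI build link
    ],
    "issued_on": "2024-11-21T18:05:00Z",
}
```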

Badge metadata

2. Triggering the Assessment

Running a quality assessment for your code repository is quite straightforward: you just need to provide its URL and click on the "Start Assessment" button, as follows:

The state of the assessment is displayed in the popup that appears once it has started. When it finishes successfully, you will be taken to the results view.

The assessment goes through two main stages: 1) the pipeline creation and execution, and 2) the validation of the results. The former relies on the core functionality provided by the Pipeline as a Service in order to compose and run a pipeline with all the supported quality criteria. The specific tools and commands to run are built upon the SQAaaS tooling metadata.

Once the pipeline results are available, the next step is to validate them. This task is done with the aid of the SQAaaS reporting component, a plugin-based tool that parses each output and estimates whether each quality criterion (as well as the associated subcriteria) has been successfully fulfilled by the code being analysed.
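
A minimal sketch of what such a plugin-based validator could look like is shown below; the plugin interface, registry and parsing rules are hypothetical and do not mirror the actual SQAaaS reporting component.

```python
from typing import Callable, Dict

# Hypothetical plugin registry: each plugin knows how to parse the output
# of one tool and decide whether the associated subcriterion is fulfilled.
PLUGINS: Dict[str, Callable[[int, str], bool]] = {}

def plugin(tool_name: str):
    """Register a validation plugin for a given tool."""
    def register(func):
        PLUGINS[tool_name] = func
        return func
    return register

@plugin("licensee")
def validate_license(exit_code: int, stdout: str) -> bool:
    # Fulfilled only if the tool succeeded and actually detected a license.
    return exit_code == 0 and "License:" in stdout

def validate(tool_name: str, exit_code: int, stdout: str) -> bool:
    handler = PLUGINS.get(tool_name)
    return handler(exit_code, stdout) if handler else False
```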

Settings

Customization currently boils down to specifying a different code repository for the documentation, other than the one that hosts the code. This is a common practice, so if your docs-as-code are not maintained in the same repository as the code, be sure to add its URL by ticking the "External repo for documentation?" checkbox:

3. Analysing the results

The results obtained by the QAA module highlight the achievements that characterize a given code repository and point developers or code owners to the specific parts where quality can be improved. The ultimate goal is to increase the overall quality of the code so that the software product as a whole benefits.

The results view shows a report detailing the validity of the criteria covered during the assessment. This validity is estimated on the basis of the results and criticality of the individual subcriteria: only the subcriteria with the highest level of criticality are considered for the criterion's overall success.
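
This aggregation rule can be sketched as follows; the subcriterion identifiers and criticality labels are illustrative, not the ones used internally by the SQAaaS.

```python
# Illustrative subcriteria results for one criterion: each entry carries a
# criticality level and whether it was fulfilled.
subcriteria = [
    {"id": "QC.Doc01", "criticality": "required",    "valid": True},
    {"id": "QC.Doc02", "criticality": "recommended", "valid": False},
]

LEVELS = {"optional": 0, "recommended": 1, "required": 2}

def criterion_valid(subcriteria):
    """Only the subcriteria with the highest criticality decide the outcome."""
    highest = max(LEVELS[s["criticality"]] for s in subcriteria)
    return all(s["valid"] for s in subcriteria
               if LEVELS[s["criticality"]] == highest)
```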

The codes that identify the subcriteria are aligned, as stated throughout the docs, with the document A set of Common Software Quality Assurance Baseline Criteria for Research Projects.

Awarding (aka Badges)

Reporting is complemented with awarding when the software being analysed reaches a minimum level of quality. This is based on the fact that, similarly to the subcriteria covered above, not all the criteria have the same level of importance. Those levels have been previously established in the Badging in EOSC-Synergy section.

Whenever the assessed code repository has reached any of the required levels of quality, a digital badge will be displayed on top of the report as shown in the next image:

4. Badging in EOSC-Synergy

The approach followed by EOSC-Synergy identifies three levels (gold, silver and bronze) for the three categories or types of badges (software, services and data).

Badges for software

The following table summarizes the software criteria associated with each badge level:

| Criterion | Bronze | Silver | Gold |
|---|---|---|---|
| Accessibility (QC.Acc) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Code Management (QC.Man) | | | :heavy_check_mark: |
| Code Metadata (QC.Met) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Code Style (QC.Sty) | | :heavy_check_mark: | :heavy_check_mark: |
| Code Workflow (QC.Wor) | | | :heavy_check_mark: |
| Delivery (QC.Del) | | | :heavy_check_mark: |
| Documentation (QC.Doc) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Licensing (QC.Lic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Security Static Analysis (QC.Sec) | | :heavy_check_mark: | :heavy_check_mark: |
| Unit Testing (QC.Uni) | | | :heavy_check_mark: |
| Versioning (QC.Ver) | | :heavy_check_mark: | :heavy_check_mark: |

Software criteria baseline

The codes showcased in the table above (e.g. QC.Acc) are defined in the standard to which the current implementation of the SQAaaS is aligned.
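
Reading the table cumulatively (each level includes the criteria of the previous one), the badge awarded for a set of fulfilled criteria could be derived as in the following sketch; the criteria sets mirror the table above, while the function itself is illustrative.

```python
# Criteria required at each badge level, mirroring the table above.
BADGE_CRITERIA = {
    "bronze": {"QC.Acc", "QC.Met", "QC.Doc", "QC.Lic"},
    "silver": {"QC.Acc", "QC.Met", "QC.Doc", "QC.Lic",
               "QC.Sty", "QC.Sec", "QC.Ver"},
    "gold":   {"QC.Acc", "QC.Met", "QC.Doc", "QC.Lic",
               "QC.Sty", "QC.Sec", "QC.Ver",
               "QC.Man", "QC.Wor", "QC.Del", "QC.Uni"},
}

def achieved_badge(fulfilled):
    """Return the highest badge whose criteria are all fulfilled, if any."""
    for level in ("gold", "silver", "bronze"):
        if BADGE_CRITERIA[level] <= fulfilled:
            return level
    return None
```

For instance, `achieved_badge({"QC.Acc", "QC.Met", "QC.Doc", "QC.Lic"})` would return `"bronze"`.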

The images corresponding to each level are shown below:

Badges for services

The following table summarizes the service criteria associated with each badge level:

| Criterion | Bronze | Silver | Gold |
|---|---|---|---|
| Deployment (SvcQC.Dep) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| API Testing (SvcQC.API) | | | :heavy_check_mark: |
| Integration Testing (SvcQC.Int) | | | :heavy_check_mark: |
| Functional Testing (SvcQC.Fun) | | :heavy_check_mark: | :heavy_check_mark: |
| Performance Testing (SvcQC.Per) | | | :heavy_check_mark: |
| Security Dynamic Analysis (SvcQC.Sec) | | :heavy_check_mark: | :heavy_check_mark: |
| Documentation (SvcQC.Doc) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |

Service criteria baseline

The codes showcased in the table above (e.g. SvcQC.Dep) are defined in the standard to which the current implementation of the SQAaaS is aligned.

The images corresponding to each level are shown below: