This is a working document; major changes can be expected at any time.

Introduction

...

  • At half past every hour, metadata acquisition starts on the mds-feed aggregation host and is performed in the following steps:
    • the mds-feed aggregation host downloads the federation metadata feeds using conditional GET (see the download-and-validate sketch after this list)

    • if the conditional GET results in the download of a new metadata file, that file is passed through the local validator instance; if validation succeeds, the downloaded file is used as input for the aggregator, and if it fails, the previous correct copy of the feed is used instead

    • the newest available validated copy of the federation metadata feed is kept for future use
    • the validated metadata files are passed to a pyFF flow (see the pyFF pipeline sketch after this list and [eduGAIN-meta] Metadata combination and collision handling)

    • pyFF aggregates and then signs the resulting feed; currently the signing is done with key files stored on the mds-feed aggregation host

    • the resulting file is analysed, split into individual entities and used to update the edugain-db (see the entity-splitting sketch after this list)

    • the final output is uploaded with sftp to the technical host, using a dedicated user account on that host (see the upload-and-publish sketch after this list).

  • At 45 minutes past every hour the new copy of the eduGAIN metadata aggregate is copied to the final destination directory; once the copy is complete, an mv is performed so that the production file is replaced atomically (see the upload-and-publish sketch after this list)

  • Finally, the new eduGAIN metadata aggregate file is copied to the history repository and compressed (see the history/retention sketch after this list)

  • At midnight (CET), the hourly copies of metadata are deleted from the repository, leaving only a single daily file. These daily files can then be used as a source for various kinds of data analysis.
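
The sketches below illustrate the steps above. They are not the production mds-feed code: all host names, paths, feed URLs and helper functions are assumptions. The first sketch covers the download-and-validate step: a conditional GET per feed, and the cached copy is only replaced when the new download passes validation, otherwise the previous correct copy is kept (the is_valid() check here is only a well-formedness stand-in for the local validator instance).

"""Sketch of the download-and-validate step; feed list, paths and the
validation check are illustrative placeholders, not the mds-feed setup."""
import shutil
import xml.etree.ElementTree as ET
from pathlib import Path

import requests

CACHE_DIR = Path("/var/cache/mds-feed")          # placeholder path
FEEDS = {                                        # placeholder feed list
    "example-fed": "https://md.example-fed.org/metadata.xml",
}


def is_valid(path: Path) -> bool:
    """Stand-in for the local validator instance: only checks well-formedness."""
    try:
        ET.parse(path)
        return True
    except ET.ParseError:
        return False


def fetch_feed(name: str, url: str) -> Path:
    """Conditional GET: download only if the feed changed, keep the last valid copy."""
    valid_copy = CACHE_DIR / f"{name}.xml"
    etag_file = CACHE_DIR / f"{name}.etag"

    headers = {}
    if etag_file.exists():
        headers["If-None-Match"] = etag_file.read_text().strip()

    resp = requests.get(url, headers=headers, timeout=60)
    if resp.status_code == 304:              # not modified: reuse previous valid copy
        return valid_copy
    resp.raise_for_status()

    candidate = CACHE_DIR / f"{name}.new.xml"
    candidate.write_bytes(resp.content)

    if is_valid(candidate):
        shutil.move(candidate, valid_copy)   # newest validated copy kept for future use
    else:
        candidate.unlink()                   # keep the previous correct feed copy
    if "ETag" in resp.headers:
        etag_file.write_text(resp.headers["ETag"])
    return valid_copy


if __name__ == "__main__":
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    for name, url in FEEDS.items():
        print(fetch_feed(name, url))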
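
A minimal aggregate-and-sign sketch in the style of the example pipelines from the pyFF documentation; the input directory, key and certificate paths, output file and validity parameters are placeholders, and the real flow (including collision handling) is the one described in [eduGAIN-meta].

"""Sketch of the pyFF aggregate-and-sign step; all paths are placeholders."""
import subprocess
from pathlib import Path

# Minimal pipeline modelled on the pyFF documentation examples, not the
# production mds-feed pipeline.
PIPELINE = """\
- load:
    - /var/cache/mds-feed          # directory with the validated federation feeds
- select
- finalize:
    cacheDuration: PT6H
    validUntil: P4D
- sign:
    key: /etc/mds-feed/sign.key    # signing key currently stored on the aggregation host
    cert: /etc/mds-feed/sign.crt
- publish: /var/lib/mds-feed/edugain-aggregate.xml
"""


def run_pyff(workdir: Path) -> Path:
    pipeline_file = workdir / "edugain.fd"
    pipeline_file.write_text(PIPELINE)
    # pyFF takes one or more pipeline files as arguments
    subprocess.run(["pyff", str(pipeline_file)], check=True)
    return Path("/var/lib/mds-feed/edugain-aggregate.xml")


if __name__ == "__main__":
    print(run_pyff(Path("/tmp")))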
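
A sketch of splitting the signed aggregate into individual entities using the standard SAML 2.0 metadata namespace; the edugain-db update is represented by a stub, since the database interface is not described in this document.

"""Sketch of splitting the signed aggregate into entities (DB update is a stub)."""
import xml.etree.ElementTree as ET
from pathlib import Path

MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"   # standard SAML metadata namespace


def iter_entities(aggregate: Path):
    """Yield (entityID, serialized EntityDescriptor) pairs from the aggregate."""
    tree = ET.parse(aggregate)
    for ed in tree.getroot().iter(f"{{{MD_NS}}}EntityDescriptor"):
        yield ed.get("entityID"), ET.tostring(ed, encoding="unicode")


def update_edugain_db(entity_id: str, xml_fragment: str) -> None:
    # Placeholder for the real edugain-db update; the actual schema and
    # interface are not part of this document.
    print(f"would update edugain-db entry for {entity_id}")


if __name__ == "__main__":
    for entity_id, fragment in iter_entities(Path("/var/lib/mds-feed/edugain-aggregate.xml")):
        update_edugain_db(entity_id, fragment)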
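
A sketch of the :30 upload and the :45 atomic publish: the aggregate is pushed over SFTP to a staging path under the dedicated account, and on the technical host the new file is first copied into the destination directory and then renamed over the production file so the switch is atomic. Host name, account and paths are placeholders.

"""Sketch of the upload (:30) and atomic publish (:45) steps; hosts, accounts
and paths are placeholders, not the real mds-feed/technical-host configuration."""
import os
import shutil
from pathlib import Path

import paramiko

AGGREGATE = Path("/var/lib/mds-feed/edugain-aggregate.xml")
TECH_HOST = "technical.example.org"          # placeholder technical host
UPLOAD_USER = "mds-upload"                   # dedicated upload account (placeholder)
STAGING_PATH = "/home/mds-upload/incoming/edugain-aggregate.xml"


def upload_aggregate() -> None:
    """Step at :30 on the aggregation host: push the new aggregate over SFTP."""
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect(TECH_HOST, username=UPLOAD_USER)
    try:
        sftp = client.open_sftp()
        sftp.put(str(AGGREGATE), STAGING_PATH)
        sftp.close()
    finally:
        client.close()


def publish_atomically(staging: Path, production: Path) -> None:
    """Step at :45 on the technical host: copy into the destination directory,
    then rename over the production file so the switch is atomic."""
    tmp = production.with_suffix(".tmp")
    shutil.copy2(staging, tmp)        # the copy completes first ...
    os.replace(tmp, production)       # ... then an atomic rename (the "mv" step)


if __name__ == "__main__":
    upload_aggregate()
    # on the technical host:
    # publish_atomically(Path(STAGING_PATH), Path("/var/www/edugain/edugain.xml"))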
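
A sketch of the history step: every hourly aggregate is stored gzip-compressed, and the midnight cleanup keeps only one file per day. The file naming scheme and repository path are assumptions (the production schedule uses CET; the example simply uses local time).

"""Sketch of the history step: compress hourly copies, keep one file per day.
The repository path and file naming scheme are assumptions."""
import gzip
import shutil
from datetime import datetime
from pathlib import Path

HISTORY = Path("/var/lib/mds-feed/history")   # placeholder history repository


def archive_hourly(aggregate: Path) -> Path:
    """Copy the new aggregate into the history repository, gzip-compressed."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M")   # production schedule is CET
    target = HISTORY / f"edugain-{stamp}.xml.gz"
    with aggregate.open("rb") as src, gzip.open(target, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return target


def keep_one_per_day() -> None:
    """Midnight cleanup: delete the hourly copies, leaving a single daily file."""
    by_day: dict[str, list[Path]] = {}
    for f in sorted(HISTORY.glob("edugain-*.xml.gz")):
        day = f.name.split("-")[1]            # the YYYYMMDD part of the file name
        by_day.setdefault(day, []).append(f)
    for copies in by_day.values():
        for old in copies[:-1]:               # keep only the last copy of the day
            old.unlink()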

...