Rugged config

Rugged configuration is loaded in a “layered” fashion, in the following order:

  • Package default configuration (/opt/rugged/rugged/default_config.yaml)
  • System-wide configuration (/etc/rugged/config.yaml)
  • User-specific configuration (~/.config/rugged/config.yaml)

As each config file is loaded, it is merged into the existing config. As such, config loaded later will override entries loaded earlier.
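
For example (the override values here are hypothetical), if the user-specific config contains:

  log_file: /home/admin/rugged.log
  print_host_headers: False

then the merged configuration uses those two user-supplied values, while every other entry keeps the value loaded from the package default or system-wide config.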

Default config

---

# @TODO: Point to fixtures dir for more examples.
# @TODO: Point to commands docs for examples of where some configs are being overridden, etc.

#################
# Global config #
#################

# Define the credentials to connect to the task (Celery/RabbitMQ) queue.
# This should be defined in each worker's system config (e.g. `/etc/rugged/config.yaml`)
# or in the local config on the Rugged Admin's system.
#broker_connection_string: 'pyamqp://<username>:<password>@<host>//'

# The path to the file where the CLI or worker will log its operations.
log_file: /var/log/rugged/rugged.log

# Override the standard log format to indicate which function is being called for each log message.
log_format: '%(asctime)s %(levelname)s (%(module)s.%(funcName)s): %(message)s'
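# With this format, a log line will look roughly like the following (the module,
# function and message shown here are illustrative):
#   2024-05-01 12:00:00,000 INFO (repo.add_targets): Added 1 target(s)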

# The path to the root of the TUF repository.
repo_path: /var/rugged/tuf_repo

# The path that the Targets worker will look in to find new targets to add to the repo.
# N.B. This is where you put targets if you intend to call `add-targets` directly.
# @see: https://rugged.works/reference/commands/add-targets/
inbound_targets_path: /var/rugged/incoming_targets

# The path that the Monitor worker will look in to find new targets to process.
# N.B. This is where your packaging pipeline should put targets, using the appropriate naming convention.
# @see: https://rugged.works/background/architecture/workflows/monitor-workflow/#part-3-post-to-tuf
post_to_tuf_path: /opt/post_to_tuf

# The path where the Targets worker will store targets after adding them to the repo.
repo_targets_path: /var/rugged/tuf_repo/targets

# The path where TUF metadata will be generated.
repo_metadata_path: /var/rugged/tuf_repo/metadata

# A dictionary of roles. Each role entry needs to specify:
#   - threshold: The minimum number of signatures required to consider the role's metadata valid;
#   - expiry: The time (in seconds) that the role's metadata will be considered valid.
roles:
  root:
    threshold: 1
    expiry: 31536000 # 365 days
  timestamp:
    threshold: 1
    expiry: 86400    # 1 day
  snapshot:
    threshold: 1
    expiry: 604800   # 7 days
  targets:
    threshold: 1
    expiry: 604800   # 7 days

# The threshold below which a refresh task will update a worker's metadata expiry periods.
# N.B. This is used in the `refresh-expiry` command, see: https://rugged.works/reference/commands/refresh-expiry/
# N.B. This is also used by the monitor-worker, see: https://rugged.works/background/architecture/environments/monitor-worker/#operation
expiry_refresh_threshold: 43200 # 12 hours

# This feature of TUF has not been implemented yet, so this is just a placeholder at the moment.
consistent_snapshot: False

# Whether to print the name of the host (localhost, or an individual worker) in the output of the `log`, `status`, and `config` commands.
# Disabling this may make the output of these commands easier to parse.
print_host_headers: True


######################
# Hashed-bins config #
######################

# @see: https://rugged.works/api/tuf/hashed_bins.html

# Whether to enable hashed bins.
# @see: TBD
use_hashed_bins: False

# The number of hashed bins to use. This should be a power of 2, as this will allow even distribution of hash
# prefixes across all bins.
# @see: TBD
number_of_bins: 16

# Which key to use for signing hashed bins metadata.
# To set a different key to be used by hashed bins roles, you will also need to
# specify the name as a key in both the 'keys' and 'roles' arrays. For an
# example, see: features/fixtures/config/hashed_bins_with_key.yaml
hashed_bins_key_name: targets
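#
# For instance, a dedicated key (here hypothetically named 'bins'; the fixture
# above is the authoritative example) might be declared roughly as follows:
#
#   keys:
#     bins:
#       - bins
#   roles:
#     bins:
#       threshold: 1
#       expiry: 604800 # 7 days
#   hashed_bins_key_name: bins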


#################
# Worker config #
#################

# Max memory a child worker may consume before being replaced (unit: kilobytes).
# Setting this limits the memory growth of Celery workers over time; if a task exceeds
# the limit, it is allowed to complete before the worker is replaced.
# (https://docs.celeryq.dev/en/latest/userguide/configuration.html#worker-max-memory-per-child)
celery_worker_max_memory_per_child: 0


################
# Admin config #
################

# These should be defined in the local config on the Rugged Admin's system.

# Define the keys to generate and manage.
# A dictionary of keys, indexed by role. Each role entry is a list of key names.
#keys:
#  root:
#    - root
#    - root1
#  snapshot:
#    - snapshot
#  targets:
#    - targets
#  timestamp:
#    - timestamp

# Define the worker names used when dispatching commands to individual workers, mostly for admin commands (e.g. `logs`, `config`).
# A dictionary of worker names, indexed by role. Each role entry is a dictionary
# with the name of the worker responsible for satisfying the TUF role.
#workers:
#  root:
#    name: root-worker
#  snapshot:
#    name: snapshot-worker
#  targets:
#    name: targets-worker
#  timestamp:
#    name: timestamp-worker


#########################
# Targets-worker config #
#########################

# Whether to delete targets once the Targets worker has signed them.
# This can reduce storage requirements if the package artifacts are not being served from the TUF repo.
delete_targets_after_signing: False


#########################
# Monitor-worker config #
#########################

# Configuration for the scheduled 'add-targets' task on the monitor-worker.
# @see: https://rugged.works/background/architecture/workflows/monitor-workflow/#part-1-periodic-scan-for-new-targets
#scheduler_scan_period: 5.0
#scheduler_log_level: 'INFO'

# Configuration for the scheduled 'refresh-expiry' task on the monitor-worker.
# @see: https://rugged.works/api/workers/monitor-worker.html#rugged.workers.monitor-worker.MonitorWorker.refresh_expiry_task
#scheduler_refresh_period: 3600.0  # 1 hour

# Configuration for the scheduled 'reset-semaphores' task on the monitor-worker.
# @see: https://rugged.works/api/workers/monitor-worker.html#rugged.workers.monitor-worker.MonitorWorker.reset_semaphores_task
#scheduler_reset_period: 300.0  # 5 minutes

# Enable test mode on the monitor-worker, to test for resilience to network
# instability. Setting this to True will completely disrupt the
# monitor-worker's ability to add targets. NEVER use this config outside of a
# test scenario tagged '@monitor-resilience'
#monitor_enable_network_instability_resiliency_test_mode: False


############################################
# Inter-process communication (IPC) config #
#                                          #
# These configs are mostly relevant to the #
# Monitor-worker and command-line          #
# interface (CLI).                         #
############################################

# Configuration for the timeout when posting tasks to the Celery queue.
# This is mostly relevant to the CLI and monitor-worker.
task_timeout: 10

# Defaults for `pause-processing` timeout options.
# @see: https://rugged.works/reference/commands/pause-processing/
wait_for_processing_task_to_complete_timeout: 30
wait_for_refreshing_task_to_complete_timeout: 15

# The age at which a semaphore is considered stale.
# N.B. The semaphore names map to the constants used in creating the flags.
# @see: `rugged/lib/constants.py`
stale_semaphore_age_thresholds:
  tuf_paused: 3600            # 1 hour (set manually, no automatic cleanup)
  tuf_processing_: 300        # 5 minutes
  tuf_refreshing_expiry: 60   # 1 minute
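
In practice, each layered config only needs to override a handful of these entries. For example (hypothetical values), a worker's system-wide config (/etc/rugged/config.yaml) might contain little more than the broker credentials described above:

  broker_connection_string: 'pyamqp://rugged:secret@rabbitmq.example.org//'
  log_file: /var/log/rugged/targets-worker.log

All other entries then fall back to the package defaults shown above.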