This tutorial will show you how to initialize a simple TUF repository with Rugged. It expands on the My first TUF repo tutorial to illustrate the process of generating keys and initializing the repository. This assumes you have already created a local Rugged environment and started it:
make start dev
NB This setup is intended for demonstration purposes only. A production deployment of Rugged requires significant security infrastructure and carefully prepared ceremonies to be trustworthy. See the other tutorials and HOWTOs in this documentation, and refer to our Rugged Runbook Templates repository for examples of secure best practice.
Briefly, the steps described below are as follows:
1. Generate keypairs for the root keys and for each of the online roles.
2. Initialize the partial root metadata (1.root.json).
3. Add verification keys for each role to the root metadata.
4. Sign the root metadata with each root key, and add those signatures.
5. Deploy the completed root metadata and initialize the repository.
These make up the core steps to preparing and initializing a new TUF repository for use with Rugged.
Now we need to generate keypairs for each of the roles. The root keys are by far the most important, as they form the root of trust for the whole system. By default, Rugged is configured to expect 2 root keys, with a threshold of 1, which allows for root key rotation without invalidating the TUF repository.
Under standard Rugged operation, the root keys would be generated offline and kept secure to ensure the trustworthiness of the TUF repository. Typically this is done with OpenSSL or a Hardware Security Module (HSM).
For the purposes of this tutorial, we will generate root keys within our local environment. First we set up a directory to hold them temporarily:
export RUGGED_TMP=/var/rugged/tuf_repo/tmp; ddev exec sudo mkdir -p $RUGGED_TMP
Now, using typical OpenSSL commands, we generate a keypair for both of the root keys we’ve configured:
ddev exec sudo openssl genpkey -algorithm ED25519 -out $RUGGED_TMP/root_private.pem ; \
ddev exec sudo openssl pkey -in $RUGGED_TMP/root_private.pem -pubout -out $RUGGED_TMP/root_public.pem ; \
ddev exec sudo openssl genpkey -algorithm ED25519 -out $RUGGED_TMP/root1_private.pem ; \
ddev exec sudo openssl pkey -in $RUGGED_TMP/root1_private.pem -pubout -out $RUGGED_TMP/root1_public.pem
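As a quick sanity check on the keypairs we just generated, OpenSSL can print the key type for each file (an optional step; paths match the commands above, run inside the container, e.g. via ddev exec):

```shell
# Print the algorithm of each root key file; each should report ED25519.
# (Assumes $RUGGED_TMP holds the PEM files generated above.)
openssl pkey -in "$RUGGED_TMP/root_private.pem" -text -noout | head -n 1
openssl pkey -pubin -in "$RUGGED_TMP/root_public.pem" -text -noout | head -n 1
```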
Now we make use of the rugged generate-keys command to create keypairs for the remaining roles:
ddev rugged generate-keys --local --role=snapshot ; \
ddev rugged generate-keys --local --role=targets ; \
ddev rugged generate-keys --local --role=timestamp
You should see output like this:
Generating 'snapshot' keypair for 'snapshot' role.
Signing key: /var/rugged/signing_keys/snapshot/snapshot
Verification key: /var/rugged/verification_keys/snapshot/snapshot.pub
Generating 'targets' keypair for 'targets' role.
Signing key: /var/rugged/signing_keys/targets/targets
Verification key: /var/rugged/verification_keys/targets/targets.pub
Generating 'timestamp' keypair for 'timestamp' role.
Signing key: /var/rugged/signing_keys/timestamp/timestamp
Verification key: /var/rugged/verification_keys/timestamp/timestamp.pub
Once we have a set of keys in place, we can begin initializing the TUF repository metadata. To bootstrap this process, we need a partial root metadata file, containing the signable portion of the root.json metadata file, which we will then combine with a signature from each root key to form a complete root.json metadata file.
The rugged initialize-partial-root-metadata command will initialize a 1.root.json file, which will become our root metadata file for the TUF repo:
$ ddev rugged initialize-partial-root-metadata
You should see output like this:
Initializing partial root metadata at: /var/rugged/tuf_repo/partial/1.root.json
Partial root metadata path not found. Creating: /var/rugged/tuf_repo/partial
Now the rugged show-partial-root-metadata command allows us to see the current status. At this point it should look like this:
$ ddev rugged show-partial-root-metadata
Retrieved partial Root metadata for version 1 (1.root.json).
=== METADATA ===
Expires in 364 days, 23 hours and 59 minutes
Metadata is not valid for deployment
=== SIGNATURES ===
Signatures: 0 (Does not meet threshold)
Threshold: 1
=== KEYS ===
=== ROLES ===
root:
Keys required (to meet signature threshold): 1
Keys provided: 0 (Does not meet threshold)
keyids: No keyids found
snapshot:
Keys required (to meet signature threshold): 1
Keys provided: 0 (Does not meet threshold)
keyids: No keyids found
targets:
Keys required (to meet signature threshold): 1
Keys provided: 0 (Does not meet threshold)
keyids: No keyids found
timestamp:
Keys required (to meet signature threshold): 1
Keys provided: 0 (Does not meet threshold)
keyids: No keyids found
As we can see, there are no keys and no signatures in our root metadata as yet. Let’s start by adding the verification keys for each role:
ddev rugged add-verification-key root /var/rugged/tuf_repo/tmp/root_public.pem --key-type=pem ; \
ddev rugged add-verification-key root /var/rugged/tuf_repo/tmp/root1_public.pem --key-type=pem ; \
ddev rugged add-verification-key snapshot /var/rugged/verification_keys/snapshot/snapshot.pub ; \
ddev rugged add-verification-key targets /var/rugged/verification_keys/targets/targets.pub ; \
ddev rugged add-verification-key timestamp /var/rugged/verification_keys/timestamp/timestamp.pub
Finally, if we run rugged show-partial-root-metadata once again, we'll see that these new keys are now reflected in the root metadata we are building up:
$ ddev rugged show-partial-root-metadata
Retrieved partial Root metadata for version 1 (1.root.json).
=== METADATA ===
Expires in 364 days, 23 hours and 59 minutes
Metadata is not valid for deployment
=== SIGNATURES ===
Signatures: 0 (Does not meet threshold)
Threshold: 1
=== KEYS ===
root 2/2:
type: ed25519
keyid: e8cb7a22f2fac2807accd4c77947b2d886cf38bd602bc25b0b4f69506750a95b
root 1/2:
type: ed25519
keyid: f684a2ec548502271e600fa52b87084cd8e97a8e17542a46a87b195f196a505f
snapshot 1/1:
type: ed25519
keyid: dea07ba9e1186060c2542019d3544b77cdb68024d336cd584bb64c9cad2204c8
targets 1/1:
type: ed25519
keyid: c878f3b6473fcaf6ef01ca10ea2886949ac9a54319317655050e22b1124acb54
timestamp 1/1:
type: ed25519
keyid: aba2d8abda40de829fc82531933681f2aab95495de7ca530d5fb703408a8bcb6
=== ROLES ===
root:
Keys required (to meet signature threshold): 1
Keys provided: 2 (Meets threshold)
keyids: f684a2ec…, e8cb7a22…
snapshot:
Keys required (to meet signature threshold): 1
Keys provided: 1 (Meets threshold)
keyids: dea07ba9…
targets:
Keys required (to meet signature threshold): 1
Keys provided: 1 (Meets threshold)
keyids: c878f3b6…
timestamp:
Keys required (to meet signature threshold): 1
Keys provided: 1 (Meets threshold)
keyids: aba2d8ab…
However, we can also see that there are no signatures attached to our root metadata as yet. We need to sign the partial root metadata with each of our root keys, and then incorporate those signatures into the 1.root.json metadata file.
The next step is to get signatures from each of our root keyholders, who will use their signing keys on the signable portion of the root metadata which Rugged has been building up in /var/rugged/tuf_repo/partial/signable-1.root.json. This signable-1.root.json file is simply the section of the 1.root.json file which will be digitally signed by the root keys, stripped of whitespace and canonicalized to ensure consistency when generating the signature.
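Because every keyholder must sign exactly the same canonical bytes, a common ceremony practice (a suggestion, not a Rugged command) is to compare a digest of the signable file out-of-band before signing:

```shell
# Each keyholder computes the digest of the signable file and compares it
# with the other keyholders via a separate channel; all digests must match.
sha256sum /var/rugged/tuf_repo/partial/signable-1.root.json
```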
Ordinarily, this signing process would take place within a formal ceremony (@TODO: Add link) designed to ensure the security and integrity of the keys and signatures. For demonstration purposes, we will pretend to be both keyholders at once, and generate signatures in our local environment:
export TUF=/var/rugged/tuf_repo ; \
ddev exec sudo openssl pkeyutl -in $TUF/partial/signable-1.root.json -rawin -sign -inkey $TUF/tmp/root_private.pem -out $TUF/tmp/root_signature.bin ; \
ddev exec sudo openssl pkeyutl -in $TUF/partial/signable-1.root.json -rawin -sign -inkey $TUF/tmp/root1_private.pem -out $TUF/tmp/root1_signature.bin ; \
ddev exec sudo chown rugged:rugged $TUF/tmp/\*_signature.bin
Having generated signatures for each of our root keys, we can now add these signatures to the root metadata, thus completing the 1.root.json metadata:
export TMP=/var/rugged/tuf_repo/tmp ; \
ddev rugged add-root-signature $TMP/root_public.pem $TMP/root_signature.bin --key-type=pem ; \
ddev rugged add-root-signature $TMP/root1_public.pem $TMP/root1_signature.bin --key-type=pem
With these steps complete, we can re-run rugged show-partial-root-metadata to see that we now have signatures as well as keys, and that the root metadata is valid and complete (meets all thresholds):
$ ddev rugged show-partial-root-metadata
Retrieved partial Root metadata for version 1 (1.root.json).
=== METADATA ===
Expires in 364 days, 23 hours and 40 minutes
Metadata is valid for deployment
=== SIGNATURES ===
Signatures: 2 (Meets threshold)
Threshold: 1
Signature 1 of 2: signed by root 1/2 -- VALID (keyid: f684a2ec…)
Signature 2 of 2: signed by root 2/2 -- VALID (keyid: e8cb7a22…)
=== KEYS ===
root 1/2:
type: ed25519
keyid: f684a2ec548502271e600fa52b87084cd8e97a8e17542a46a87b195f196a505f
root 2/2:
type: ed25519
keyid: e8cb7a22f2fac2807accd4c77947b2d886cf38bd602bc25b0b4f69506750a95b
snapshot 1/1:
type: ed25519
keyid: dea07ba9e1186060c2542019d3544b77cdb68024d336cd584bb64c9cad2204c8
targets 1/1:
type: ed25519
keyid: c878f3b6473fcaf6ef01ca10ea2886949ac9a54319317655050e22b1124acb54
timestamp 1/1:
type: ed25519
keyid: aba2d8abda40de829fc82531933681f2aab95495de7ca530d5fb703408a8bcb6
=== ROLES ===
root:
Keys required (to meet signature threshold): 1
Keys provided: 2 (Meets threshold)
keyids: f684a2ec…, e8cb7a22…
snapshot:
Keys required (to meet signature threshold): 1
Keys provided: 1 (Meets threshold)
keyids: dea07ba9…
targets:
Keys required (to meet signature threshold): 1
Keys provided: 1 (Meets threshold)
keyids: c878f3b6…
timestamp:
Keys required (to meet signature threshold): 1
Keys provided: 1 (Meets threshold)
keyids: aba2d8ab…
Now we’re ready to deploy our completed root metadata file into the TUF repository to allow us to finish initializing:
ddev exec sudo sudo -u rugged mkdir /var/rugged/tuf_repo/metadata
ddev exec sudo sudo -u rugged cp /var/rugged/tuf_repo/partial/1.root.json /var/rugged/tuf_repo/metadata
The above commands will shortly be replaced by a rugged command. Here we are using sudo to become the rugged user, ensuring the file is copied into place with the correct ownership and permissions.
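Before initializing, we can sanity-check that the deployed root metadata is well-formed JSON (an optional step, not a Rugged command; run inside the container):

```shell
# Confirm the deployed root metadata parses as JSON before initializing.
# python3 -m json.tool exits non-zero on malformed JSON.
python3 -m json.tool /var/rugged/tuf_repo/metadata/1.root.json > /dev/null \
  && echo "1.root.json is well-formed JSON"
```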
With our completed root metadata deployed into the TUF repository, we can complete initializing the repository, generating signatures for the remaining roles:
ddev rugged initialize --local
You should see output like this:
Initializing new TUF repository at /var/rugged/tuf_repo.
warning: Initialized 'root' metadata from disk.
If you did not intend to initialize with existing 'root' metadata then delete '1.root.json' and re-run this command.
Updated root metadata.
Updated targets metadata.
Updated snapshot metadata.
Updated timestamp metadata.
TUF repository initialized.
Note the warning in the output above, which indicates that Rugged has picked up our signed root metadata. If we had run rugged initialize without deploying 1.root.json, it would not emit this warning, and would simply have generated everything from scratch.
We can now run rugged status to show the fully operational TUF repository:
$ ddev rugged status --local
=== Repository status for local operations ===
Targets Total Size
--------- ------------
0 0 Bytes
Role Capability Signatures Version TUF Spec Expires
--------- ------------ ------------ --------- ---------- ---------------------------------
targets Signing 1 / 1 1 1.0.31 6 days, 23 hours and 57 minutes
snapshot Signing 1 / 1 1 1.0.31 6 days, 23 hours and 57 minutes
timestamp Signing 1 / 1 1 1.0.31 23 hours and 57 minutes
root Verification 0 / 1 1 1.0.31 364 days, 23 hours and 57 minutes
Key name Role Key type(s) Scheme Path
---------- --------- --------------- -------- --------------------------------------------
targets targets public, private ed25519 /var/rugged/signing_keys/targets/targets
snapshot snapshot public, private ed25519 /var/rugged/signing_keys/snapshot/snapshot
timestamp timestamp public, private ed25519 /var/rugged/signing_keys/timestamp/timestamp
We have now successfully initialized a functioning TUF repository. To test it, we can add a simple target, and observe Rugged producing signed metadata for it:
echo 'Hello, world' > fixtures/incoming_targets/test.txt
ddev rugged add-targets
You should see output like this:
Added the following targets to the repository:
test.txt
Updated targets metadata.
Updated snapshot metadata.
Updated timestamp metadata.
A rugged status will now reflect the 1 target we've added:
$ ddev rugged status --local
=== Repository status for local operations ===
Targets Total Size
--------- ------------
1 13 Bytes
Role Capability Signatures Version TUF Spec Expires
--------- ------------ ------------ --------- ---------- ---------------------------------
targets Signing 1 / 1 2 1.0.31 6 days, 23 hours and 59 minutes
snapshot Signing 1 / 1 2 1.0.31 6 days, 23 hours and 59 minutes
timestamp Signing 1 / 1 2 1.0.31 23 hours and 59 minutes
root Verification 0 / 1 1 1.0.31 364 days, 23 hours and 54 minutes
Key name Role Key type(s) Scheme Path
---------- --------- --------------- -------- --------------------------------------------
targets targets public, private ed25519 /var/rugged/signing_keys/targets/targets
snapshot snapshot public, private ed25519 /var/rugged/signing_keys/snapshot/snapshot
timestamp timestamp public, private ed25519 /var/rugged/signing_keys/timestamp/timestamp
We can also observe the target being processed by looking at the targets-worker logs:
$ ddev rugged logs --worker=targets-worker
=== Log for targets-worker: /var/log/rugged/rugged.log ===
2025-04-25 17:46:18,712 INFO (targets-worker.add_targets_task): Received add-targets task.
2025-04-25 17:46:18,714 INFO (repo._move_inbound_target_to_targets_dir): Moved inbound target 'test.txt' to targets directory.
2025-04-25 17:46:18,714 INFO (repo.add_target_to_metadata): Added target 'test.txt' to 'targets' role.
2025-04-25 17:46:18,716 INFO (repo.update_targets): Updated targets metadata.
2025-04-25 17:48:48,365 INFO (targets-worker.get_expiring_metadata_task): Received get-expiring-metadata task.
2025-04-25 17:48:58,384 INFO (targets-worker.get_expiring_metadata_task): Received get-expiring-metadata task.
2025-04-25 18:27:19,443 INFO (targets-worker.add_targets_task): Received add-targets task.
2025-04-25 18:27:19,448 INFO (repo._move_inbound_target_to_targets_dir): Moved inbound target 'test.txt' to targets directory.
2025-04-25 18:27:19,448 INFO (repo.add_target_to_metadata): Added target 'test.txt' to 'targets' role.
2025-04-25 18:27:19,450 INFO (repo.update_targets): Updated targets metadata.
Finally, we can look directly at the targets.json metadata file to see the test.txt entry:
cat fixtures/tuf_repo/metadata/targets.json
{
"signatures": [
{
"keyid": "971bc6a249ec884bf6257f3fdaaa034fbf87583f937db06bcad1c9004e681f8c",
"sig": "677fd3d793a81d309c0d42287905eca13e35c53523a607400c952eabe7fb8da8e13922afde77c01d15ce3146d7cabfe63d8a8b21a5c292619e2868d52a530801"
}
],
"signed": {
"_type": "targets",
"expires": "2025-05-02T18:27:19Z",
"spec_version": "1.0.31",
"targets": {
"test.txt": {
"hashes": {
"sha256": "37980c33951de6b0e450c3701b219bfeee930544705f637cd1158b63827bb390"
},
"length": 13
}
},
"version": 2
}
}
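A TUF client decides whether to trust a target by checking its length and hashes against this metadata. We can mimic that check by hand for our test target (an illustrative check, not a Rugged command):

```shell
# Recreate the target content and check it against the metadata above:
# the byte count should match "length" (13), and the sha256sum digest
# should match the "sha256" value recorded in targets.json.
printf 'Hello, world\n' > test.txt
wc -c < test.txt
sha256sum test.txt
```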