-*- markdown -*-
The merge process
=================
- merge-fetch maintains a single file 'fetched' referring to a given
entry in 'logorder', indicating which entries are fetched and
sequenced so far.
- merge-backup reads 'fetched' and pushes these entries to secondary
  merge nodes, maintaining one file per secondary,
  'backup.<secondary>', indicating how many entries have been copied
  to and verified at the secondary in question.
- merge-sth writes a new 'sth' file by reading the
  'backup.<secondary>' files into a list, picking a new tree size by
  sorting the list in descending order and indexing it with the
  'backupquorum' config option. If the new tree size is smaller than
  what the old 'sth' file says, no new STH is created.
- merge-dist distributes 'sth' and missing entries to frontend nodes.
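The tree-size selection done by merge-sth can be sketched as follows.
This is an illustrative sketch, not the actual implementation; the
function name and argument names are made up here:

```python
def pick_tree_size(backup_sizes, backupquorum, old_tree_size):
    """Pick a new STH tree size from per-secondary backup counts.

    backup_sizes: the values read from the 'backup.<secondary>' files.
    backupquorum: index into the descending-sorted list, from config.
    Returns None when no new STH should be created.
    """
    # Sort in descending order; the entry at index 'backupquorum' is
    # the largest size that at least backupquorum+1 secondaries have
    # verified.
    candidates = sorted(backup_sizes, reverse=True)
    new_size = candidates[backupquorum]
    if new_size < old_tree_size:
        return None  # never publish a smaller tree than the old STH
    return new_size
```

For example, with backup counts [100, 90, 80] and backupquorum = 1,
the new tree size is 90, since two secondaries hold at least 90
verified entries.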
TODO
====
- Run the three pieces in parallel.
- Improve merge-fetch by parallelizing it, using one process per
  storage node writing to a "queue info" file (storage-node, hash) and a
  single "queue handling process" reading the queue files and writing to
  the 'fetched' file.
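The proposed parallel design could look roughly like the sketch below.
Since this is a TODO item, everything here is hypothetical: the file
format, function names, and deduplication step are assumptions, not a
description of existing code:

```python
# Sketch: one writer per storage node appends to its own queue file,
# so writers need no locking among themselves; a single queue-handling
# process merges all queue files into 'fetched'.

def write_queue_entry(queue_path, storage_node, entry_hash):
    # Called by the per-storage-node fetch process. Each process owns
    # its queue file exclusively.
    with open(queue_path, "a") as f:
        f.write("%s %s\n" % (storage_node, entry_hash))

def handle_queues(queue_paths, fetched_path):
    # The single queue-handling process: read every queue file,
    # deduplicate by hash (the same entry may exist on several
    # storage nodes), and record sequenced entries in 'fetched'.
    seen = set()
    with open(fetched_path, "a") as out:
        for path in queue_paths:
            with open(path) as f:
                for line in f:
                    node, entry_hash = line.split()
                    if entry_hash not in seen:
                        seen.add(entry_hash)
                        out.write(entry_hash + "\n")
```

The point of the split is that only the queue handler ever writes
'fetched', so the sequencing order stays well-defined even with many
concurrent fetchers.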