Discussion: [PATCH] reinstate ceph cluster_snap support
Alexandre Oliva
2013-08-22 09:10:08 UTC
This patch brings back and updates (for dumpling) the code originally
introduced to support “ceph osd cluster_snap <snap>”, which was disabled
and partially removed before cuttlefish.

Some minimal testing appears to indicate this even works: the modified
mon actually generated an osdmap with the cluster_snap request, and
starting a modified osd that was down and letting it catch up caused
the osd to take the requested snapshot. I see no reason why it
wouldn't have taken it if it was up and running, so... Why was this
feature disabled in the first place?

Signed-off-by: Alexandre Oliva <***@gnu.org>
---
 src/mon/MonCommands.h |  6 ++++--
 src/mon/OSDMonitor.cc | 11 +++++++----
 src/osd/OSD.cc        | 13 +++++++++++++
 3 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/src/mon/MonCommands.h b/src/mon/MonCommands.h
index 8e9c2bb..225c687 100644
--- a/src/mon/MonCommands.h
+++ b/src/mon/MonCommands.h
@@ -431,8 +431,10 @@ COMMAND("osd set " \
 COMMAND("osd unset " \
 	"name=key,type=CephChoices,strings=pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub", \
 	"unset <key>", "osd", "rw", "cli,rest")
-COMMAND("osd cluster_snap", "take cluster snapshot (disabled)", \
-	"osd", "r", "")
+COMMAND("osd cluster_snap " \
+	"name=snap,type=CephString", \
+	"take cluster snapshot", \
+	"osd", "r", "cli")
 COMMAND("osd down " \
 	"type=CephString,name=ids,n=N", \
 	"set osd(s) <id> [<id>...] down", "osd", "rw", "cli,rest")
diff --git a/src/mon/OSDMonitor.cc b/src/mon/OSDMonitor.cc
index 07022ae..9bf9511 100644
--- a/src/mon/OSDMonitor.cc
+++ b/src/mon/OSDMonitor.cc
@@ -3099,10 +3099,13 @@ bool OSDMonitor::prepare_command(MMonCommand *m)
     return prepare_unset_flag(m, CEPH_OSDMAP_NODEEP_SCRUB);
 
   } else if (prefix == "osd cluster_snap") {
-    // ** DISABLE THIS FOR NOW **
-    ss << "cluster snapshot currently disabled (broken implementation)";
-    // ** DISABLE THIS FOR NOW **
-
+    string snap;
+    cmd_getval(g_ceph_context, cmdmap, "snap", snap);
+    pending_inc.cluster_snapshot = snap;
+    ss << "creating cluster snap " << snap;
+    getline(ss, rs);
+    wait_for_finished_proposal(new Monitor::C_Command(mon, m, 0, rs, get_last_committed()));
+    return true;
   } else if (prefix == "osd down" ||
 	     prefix == "osd out" ||
 	     prefix == "osd in" ||
diff --git a/src/osd/OSD.cc b/src/osd/OSD.cc
index 1a77dae..e41a6b3 100644
--- a/src/osd/OSD.cc
+++ b/src/osd/OSD.cc
@@ -5022,6 +5022,19 @@ void OSD::handle_osd_map(MOSDMap *m)
       assert(0 == "MOSDMap lied about what maps it had?");
   }
 
+  // check for cluster snapshots
+  for (epoch_t cur = superblock.current_epoch + 1; cur <= m->get_last(); cur++) {
+    OSDMapRef newmap = get_map(cur);
+    string cluster_snap = newmap->get_cluster_snapshot();
+    if (cluster_snap.length() == 0)
+      continue;
+
+    dout(0) << "creating cluster snapshot '" << cluster_snap << "'" << dendl;
+    int r = store->snapshot(cluster_snap);
+    if (r)
+      dout(0) << "failed to create cluster snapshot: " << cpp_strerror(r) << dendl;
+  }
+
   if (superblock.oldest_map) {
     int num = 0;
     epoch_t min(

--
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to ***@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Sage Weil
2013-08-24 00:17:22 UTC
Sam, do you remember why we disabled this?

I think it happened when the pg threading stuff went in, but I'm not sure
why we can't just take a blanket snapshot of current/.

FWIW Alexandre, this feature was never really complete. For it to work,
we also need to snapshot the monitors, and roll them back as well.

sage
Post by Alexandre Oliva
This patch brings back and updates (for dumpling) the code originally
introduced to support ?ceph osd cluster_snap <snap>?, that was disabled
and partially removed before cuttlefish.
Some minimal testing appears to indicate this even works: the modified
mon actually generated an osdmap with the cluster_snap request, and
starting a modified osd that was down and letting it catch up caused
the osd to take the requested snapshot. I see no reason why it
wouldn't have taken it if it was up and running, so... Why was this
feature disabled in the first place?
Alexandre Oliva
2013-08-24 14:56:35 UTC
Post by Sage Weil
FWIW Alexandre, this feature was never really complete. For it to work,
we also need to snapshot the monitors, and roll them back as well.
That depends on what's expected from the feature, actually.

One use is to roll back a single osd, and for that, the feature works
just fine. Of course, for that one doesn't need the multi-osd snapshots
to be mutually consistent, but it's still convenient to be able to take
a global snapshot with a single command.

Another use is to roll back the entire cluster to an earlier state, and
for that, you *probably* want to roll back the monitors too, although it
doesn't seem like this is strictly necessary, unless some significant
configuration changes occurred in the cluster since the snapshot was
taken, and you want to roll back those too.

In my experience, rolling back only osds has worked just fine, with the
exception of cases in which the snapshot is much too old, and the mons
have already expired osdmaps after the last one the osd got when the
snapshot was taken. For this one case, I have a patch that enables the
osd to rejoin the cluster in spite of the expired osdmaps, which has
always worked for me, but I understand there may be exceptional cluster
reconfigurations in which this wouldn't have worked.


As for snapshotting monitors... I suppose the way to go is to start a
store.db dump in background, instead of taking a btrfs snapshot, since
the store.db is not created as a subvolume. That said, it would make
some sense to make it so, to make it trivially snapshottable.


Anyway, I found a problem in the earlier patch: when I added a new disk
to my cluster this morning, it tried to iterate over osdmaps that were
not available (e.g. the long-gone osdmap 1), and crashed.

Here's a fixed version, that makes sure we don't start the iteration
before m->get_first().
Sage Weil
2013-08-27 22:21:52 UTC
Hi,
Post by Alexandre Oliva
Anyway, I found a problem in the earlier patch: when I added a new disk
to my cluster this morning, it tried to iterate over osdmaps that were
not available (e.g. the long-gone osdmap 1), and crashed.
Here's a fixed version, that makes sure we don't start the iteration
before m->get_first().
In principle, we can add this back in. I think it needs a few changes,
though.

First, FileStore::snapshot() needs to pause and drain the workqueue before
taking the snapshot, similar to what is done with the sync sequence.
Otherwise it isn't a transactionally consistent snapshot and may tear some
update. Because it is draining the work queue, it *might* also need to
drop some locks, but I'm hopeful that that isn't necessary.

Second, the call in handle_osd_map() should probably go in the loop a bit
further down that is consuming maps. It probably won't matter most of the
time, but I'm paranoid about corner conditions. It also avoids iterating
over the new OSDMaps multiple times in the common case where there is no
cluster_snap (a minor win).

Finally, eventually we should make this do a checkpoint on the mons too.
We can add the osd snapping back in first, but before this can/should
really be used the mons need to be snapshotted as well. Probably that's
just adding in a snapshot() method to MonitorStore.h and doing either a
leveldb snap or making a full copy of store.db... I forget what leveldb is
capable of here.

sage
Yan, Zheng
2013-08-28 00:54:31 UTC
Post by Sage Weil
Finally, eventually we should make this do a checkpoint on the mons too.
We can add the osd snapping back in first, but before this can/should
really be used the mons need to be snapshotted as well. Probably that's
just adding in a snapshot() method to MonitorStore.h and doing either a
leveldb snap or making a full copy of store.db... I forget what leveldb is
capable of here.
I think we also need to snapshot the osd journal

Regards
Yan, Zheng
Sage Weil
2013-08-28 04:34:11 UTC
Post by Yan, Zheng
I think we also need to snapshot the osd journal
If the snapshot does a sync (drain op_tp before doing the snap), that puts
the filestore subvol in a consistent state. To actually use it, ceph-osd rolls
back to that point on startup. I didn't check that code, but I think what
it should do is ignore/reset the journal then.

This is annoying code to test, unfortunately.

sage
Alexandre Oliva
2013-12-17 12:14:27 UTC
Post by Sage Weil
In principle, we can add this back in. I think it needs a few changes,
though.
First, FileStore::snapshot() needs to pause and drain the workqueue before
taking the snapshot, similar to what is done with the sync sequence.
Otherwise it isn't a transactionally consistent snapshot and may tear some
update. Because it is draining the work queue, it *might* also need to
drop some locks, but I'm hopeful that that isn't necessary.
Hmm... I don't quite get this. The Filestore implementation of
snapshot already performs a sync_and_flush before calling the backend's
create_checkpoint. Shouldn't that be enough? FWIW, the code I brought
in from argonaut didn't do any such thing; it did drop locks, but that
doesn't seem to be necessary any more:

  // flush here so that the peering code can re-read any pg data off
  // disk that it needs to... say for backlog generation.  (hmm, is
  // this really needed?)
  osd_lock.Unlock();
  if (cluster_snap.length()) {
    dout(0) << "creating cluster snapshot '" << cluster_snap << "'" << dendl;
    int r = store->snapshot(cluster_snap);
    if (r)
      dout(0) << "failed to create cluster snapshot: " << cpp_strerror(r) << dendl;
  } else {
    store->flush();
  }
  osd_lock.Lock();
Post by Sage Weil
Second, the call in handle_osd_map() should probably go in the loop a bit
further down that is consuming maps. It probably won't matter most of the
time, but I'm paranoid about corner conditions. It also avoids iterating
over the new OSDMaps multiple times in the common case where there is no
cluster_snap (a minor win).
I've just moved the cluster snapshot creation down to the loop I think you're
speaking of above. Here's the revised patch, so far untested, just for
reference so that you don't have to refer to the archives to locate the
earlier patch and make sense of the comments in this old thread.
Post by Sage Weil
Finally, eventually we should make this do a checkpoint on the mons too.
We can add the osd snapping back in first, but before this can/should
really be used the mons need to be snapshotted as well. Probably that's
just adding in a snapshot() method to MonitorStore.h and doing either a
leveldb snap or making a full copy of store.db... I forget what leveldb is
capable of here.
I haven't looked into this yet.
Alexandre Oliva
2013-12-17 13:50:11 UTC
Post by Alexandre Oliva
Post by Sage Weil
Finally, eventually we should make this do a checkpoint on the mons too.
We can add the osd snapping back in first, but before this can/should
really be used the mons need to be snapshotted as well. Probably that's
just adding in a snapshot() method to MonitorStore.h and doing either a
leveldb snap or making a full copy of store.db... I forget what leveldb is
capable of here.
I haven't looked into this yet.
I looked a bit at the leveldb interface. It offers a facility to create
Snapshots, but they only last for the duration of one session of the
database. A snapshot can be used to create multiple iterators over the
same state of the db, or to read multiple values from that state, but
not to roll back to a state you had at an earlier session, e.g., after a
monitor restart. So they won't help us.

I thus see a few possibilities (all of them to be done between taking
note of the request for the new snapshot and returning a response to the
requestor that the request was satisfied):

1. take a snapshot, create an iterator out of the snapshot, create a new
database named after the cluster_snap key, and go over all key/value
pairs that the iterator can see, adding each one to this new database.

2. close the database, create a dir named after the cluster_snap key,
create hardlinks to all files in the database tree in the cluster_snap
dir, and then reopen the database

3. flush the leveldb (how? will a write with sync=true do? must we
close it?) and take a btrfs snapshot of the store.db tree, named after
the cluster_snap key, and then reopen the database

None of these are particularly appealing; (1) wastes disk space and cpu
cycles; (2) relies on leveldb internal implementation details such as
the fact that files are never modified after they're first closed, and
(3) requires a btrfs subvol for the store.db. My favorite choice would
be 3, but can we just fail mon snaps when this requirement is not met?
Alexandre Oliva
2013-12-17 14:22:55 UTC
Post by Alexandre Oliva
None of these are particularly appealing; (1) wastes disk space and cpu
cycles; (2) relies on leveldb internal implementation details such as
the fact that files are never modified after they're first closed, and
(3) requires a btrfs subvol for the store.db. My favorite choice would
be 3, but can we just fail mon snaps when this requirement is not met?
Another aspect that needs to be considered is whether to take a snapshot
of the leader only, or of all monitors in the quorum. The fact that the
snapshot operation may take a while to complete (particularly with (1)),
and that monitors may not make progress while taking the snapshot (which
might cause the client and other monitors to assume those monitors have
failed), makes the whole thing quite a bit more complex than what I'd
hoped for.

Another point that may affect the decision is the amount of information
in store.db that may have to be retained. E.g., if it's just a small
amount of information, creating a separate database makes far more sense
than taking a complete copy of the entire database, and it might even
make sense for the leader to include the full snapshot data in the
snapshot-taking message shared with other monitors, so that they all
take exactly the same snapshot, even if they're not in the quorum and
receive the update at a later time. Of course this wouldn't work if the
amount of snapshotted monitor data was more than reasonable for a
monitor message.

Anyway, this is probably more than what I'd be able to undertake myself,
at least in part because, although I can see one place to add the
snapshot-taking code to the leader (assuming it's ok to take the
snapshot just before or right after all monitors agree on it), I have no
idea of where to plug the snapshot-taking behavior into peon and
recovering monitors. Absent a two-phase protocol, it seems to me that
all monitors ought to take snapshots tentatively when they issue or
acknowledge the snapshot-taking proposal, so as to make sure that if it
succeeds we'll have a quorum of snapshots, but if the proposal doesn't
succeed at first, I don't know how to deal with retries (overwrite
existing snapshots? discard the snapshot when its proposal fails?) or
cancellation (say, the client doesn't get confirmation from the leader,
the leader changes, it retries that some times, and eventually it gives
up, but some monitors have already tentatively taken the snapshot in the
mean time).
Gregory Farnum
2013-12-18 19:35:40 UTC
Post by Alexandre Oliva
Hmm... I don't quite get this. The Filestore implementation of
snapshot already performs a sync_and_flush before calling the backend's
create_checkpoint. Shouldn't that be enough? FWIW, the code I brought
in from argonaut didn't do any such thing; it did drop locks, but that
doesn't seem to be necessary any more.
From a quick skim I think you're right about that. The more serious
concern in the OSDs (which motivated removing the cluster snap) is
what Sage mentioned: we used to be able to take a snapshot for which
all PGs were at the same epoch, and we can't do that now. It's
possible that's okay, but it makes the semantics even weirder than
they used to be (you've never been getting a real point-in-time
snapshot, although as long as you didn't use external communication
channels you could at least be sure it contained a causal cut).

And of course that's nothing compared to snapshotting the monitors, as
you've noticed. But making it actually be a cluster snapshot (instead
of something you could basically do by taking a btrfs snapshot
yourself) is something I would want to see before we bring the feature
back into mainline.
Post by Alexandre Oliva
Post by Sage Weil
Finally, eventually we should make this do a checkpoint on the mons too.
We can add the osd snapping back in first, but before this can/should
really be used the mons need to be snapshotted as well. Probably that's
just adding in a snapshot() method to MonitorStore.h and doing either a
leveldb snap or making a full copy of store.db... I forget what leveldb is
capable of here.
Post by Alexandre Oliva
I haven't looked into this yet.
None of these are particularly appealing; (1) wastes disk space and cpu
cycles; (2) relies on leveldb internal implementation details such as
the fact that files are never modified after they're first closed, and
(3) requires a btrfs subvol for the store.db. My favorite choice would
be 3, but can we just fail mon snaps when this requirement is not met?
Another aspect that needs to be considered is whether to take a snapshot
of the leader only, or of all monitors in the quorum. The fact that the
snapshot operation may take a while to complete (particularly (1)), and
monitors may not make progress while taking the snapshot (which might
cause the client and other monitors to assume other monitors have
failed), make the whole thing quite more complex than what I'd have
hoped for.
Another point that may affect the decision is the amount of information
in store.db that may have to be retained. E.g., if it's just a small
amount of information, creating a separate database makes far more sense
than taking a complete copy of the entire database, and it might even
make sense for the leader to include the full snapshot data in the
snapshot-taking message shared with other monitors, so that they all
take exactly the same snapshot, even if they're not in the quorum and
receive the update at a later time. Of course this wouldn't work if the
amount of snapshotted monitor data was more than reasonable for a
monitor message.
Anyway, this is probably more than what I'd be able to undertake myself,
at least in part because, although I can see one place to add the
snapshot-taking code to the leader (assuming it's ok to take the
snapshot just before or right after all monitors agree on it), I have no
idea of where to plug the snapshot-taking behavior into peon and
recovering monitors. Absent a two-phase protocol, it seems to me that
all monitors ought to take snapshots tentatively when they issue or
acknowledge the snapshot-taking proposal, so as to make sure that if it
succeeds we'll have a quorum of snapshots, but if the proposal doesn't
succeed at first, I don't know how to deal with retries (overwrite
existing snapshots? discard the snapshot when its proposal fails?) or
cancellation (say, the client doesn't get confirmation from the leader,
the leader changes, it retries that some times, and eventually it gives
up, but some monitors have already tentatively taken the snapshot in the
mean time).
The best way I can think of in a short time to solve these problems
would be to make snapshots first-class citizens in the monitor. We
could extend the monitor store to handle multiple leveldb instances,
and then a snapshot would be an async operation which does a
leveldb snapshot inline and spins off a thread to clone that data into
a new leveldb instance. When all the up monitors complete, the user
gets a report saying the snapshot was successful and it gets marked
complete in some snapshot map. Any monitors which have to get a full
store sync would also sync any snapshots they don't already have. If
the monitors can't complete a snapshot (all failing at once for some
reason) then they could block the user from doing anything except
deleting them.
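To make the shape of that design concrete, here is a minimal runnable sketch of the "first-class snapshots" flow: an inline point-in-time copy, a background thread that clones it into a separate instance, and a snapshot map that marks the snapshot complete once every up monitor has finished. All names here are illustrative, and std::map stands in for a leveldb-backed store so the control flow can run stand-alone; this is not Ceph's actual MonitorDBStore API.

```cpp
#include <cassert>
#include <map>
#include <mutex>
#include <set>
#include <string>
#include <thread>

// std::map stands in for one leveldb instance.
using Store = std::map<std::string, std::string>;

struct SnapshotRecord {
  Store data;                // the cloned instance
  std::set<int> acked_mons;  // monitors that finished cloning
  bool complete = false;     // marked in the "snapshot map"
};

class MonStore {
  Store live_;
  std::map<std::string, SnapshotRecord> snaps_;  // snapshot map
  std::mutex lock_;

public:
  void put(const std::string& k, const std::string& v) {
    std::lock_guard<std::mutex> g(lock_);
    live_[k] = v;
  }

  // Take the point-in-time copy inline (a leveldb snapshot would make
  // this cheap), then clone it into a new instance on a spun-off thread.
  std::thread take_snapshot(const std::string& name, int mon_rank,
                            int num_up_mons) {
    Store frozen;
    {
      std::lock_guard<std::mutex> g(lock_);
      frozen = live_;  // leveldb would hand us a snapshot iterator here
    }
    return std::thread([this, name, mon_rank, num_up_mons, frozen] {
      std::lock_guard<std::mutex> g(lock_);
      SnapshotRecord& rec = snaps_[name];
      rec.data = frozen;  // clone into the separate instance
      rec.acked_mons.insert(mon_rank);
      if ((int)rec.acked_mons.size() == num_up_mons)
        rec.complete = true;  // report success to the user
    });
  }

  bool snapshot_complete(const std::string& name) {
    std::lock_guard<std::mutex> g(lock_);
    auto it = snaps_.find(name);
    return it != snaps_.end() && it->second.complete;
  }
};
```

A monitor doing a full store sync would additionally fetch any entries of the snapshot map (and their cloned instances) it doesn't already have, which this sketch leaves out.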
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to ***@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Alexandre Oliva
2013-12-19 08:22:56 UTC
Permalink
Post by Gregory Farnum
(you've never been getting a real point-in-time
snapshot, although as long as you didn't use external communication
channels you could at least be sure it contained a causal cut).
I never expected more than a causal cut, really (my wife got a PhD in
consistent checkpointing of distributed systems, so my expectations may
be somewhat better informed than those a random user might have ;-),
although even now I've seldom got a snapshot in which the osd data
differs across replicas (I actually check for that; that's part of my
reason for taking the snapshots in the first place), even when I fail t=
o
explicitly make the cluster quiescent. But that's probably just =E2=80=
=9Cluck=E2=80=9D,
as my cluster usually isn't busy when I take such snapshots ;-)
Post by Gregory Farnum
And of course that's nothing compared to snapshotting the monitors, as
you've noticed
I've given it some more thought, and it occurred to me that, if we make
mons take the snapshot when the snapshot-taking request is committed to
the cluster history, we should have the snapshots taking at the right
time and without the need for rolling them back and taking them again.

The idea is that, if the snapshot-taking is committed, eventually we'll
have a quorum carrying that commit, and thus each of the quorum members
will have taken a snapshot as soon as they got that commit, even if they
did so during recovery, or if they took so long to take the snapshot
that they got kicked out of the quorum for a while. If they get
actually restarted, they will get the commit again and take the snapshot
from the beginning. If all mons in the quorum that accepted the commit
get restarted so that none of them actually records the commit request,
and it doesn't get propagated to other mons that attempt to rejoin,
well, it's as if the request had never been committed. OTOH, if it did
get to other mons, or if any of them survives, the committed request
will make to a quorum and eventually to all monitors, each one taking
its snapshot at the time it gets the commit.
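The invariant being proposed here is simple to model: every monitor applies committed transactions strictly in order, and snapshots its store at the moment the snapshot-request commit is applied, whether it is live or replaying the log during recovery. The sketch below is a toy model of that idea only (names are illustrative, std::map stands in for store.db), not the monitor's real commit path.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// One committed transaction: either an ordinary update or a
// snapshot-taking request that was committed to the cluster history.
struct Commit {
  bool is_snap_request;
  std::string snap_name;   // set when is_snap_request
  std::string key, value;  // ordinary update otherwise
};

struct Mon {
  std::map<std::string, std::string> store;
  std::map<std::string, std::map<std::string, std::string>> snaps;

  // Because commits are applied in the same order everywhere, every
  // monitor snapshots the same cut, no matter when it catches up.
  void apply(const Commit& c) {
    if (c.is_snap_request)
      snaps[c.snap_name] = store;  // snapshot at the commit point
    else
      store[c.key] = c.value;
  }
};
```

Replaying the same log on a monitor that was down during the snapshot request yields a snapshot identical to the one taken by the monitors that were up, which is exactly the property argued for above.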

This should work as long as all mons get recovery info in the same
order, i.e., they won't get into their database history information that
happens-after the snapshot commit before the snapshot commit, nor will
they fail to get information that happened-before the snapshot commit
before getting the snapshot commit. That said, having little idea of
the inner workings of the monitors, I can't tell whether they actually
meet this “as long as” condition ;-(
Post by Gregory Farnum
— but making it actually be a cluster snapshot (instead
of something you could basically do by taking a btrfs snapshot
yourself)
Taking btrfs snapshots manually over several osds on several hosts is
hardly a way to get a causal cut (but you already knew that ;-)

--
Alexandre Oliva, freedom fighter http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/ FSF Latin America board member
Free Software Evangelist Red Hat Brazil Compiler Engineer
Alexandre Oliva
2014-10-21 02:49:52 UTC
Permalink
Post by Sage Weil
Finally, eventually we should make this do a checkpoint on the mons too.
We can add the osd snapping back in first, but before this can/should
really be used the mons need to be snapshotted as well. Probably that's
just adding in a snapshot() method to MonitorStore.h and doing either a
leveldb snap or making a full copy of store.db... I forget what leveldb is
capable of here.
I suppose it might be a bit too late for Giant, but I finally got 'round
to implementing this. I attach the patch that implements it, to be
applied on top of the updated version of the patch I posted before, also
attached.

I have a backport to Firefly too, if there's interest.

I have tested both methods: btrfs snapshotting of store.db (I've
manually turned store.db into a btrfs subvolume), and creating a new db
with all (prefix,key,value) triples. I'm undecided about inserting
multiple transaction commits for the latter case; the mon mem use grew
up a lot as it was, and in a few tests the snapshotting ran twice, but
in the end a dump of all the data in the database created by btrfs
snapshotting was identical to that created by explicit copying. So, the
former is preferred, since it's so incredibly more efficient. I also
considered hardlinking all files in store.db into a separate tree, but I
didn't like the idea of coding that in C++ :-) and I figured it might
not work with other db backends, and maybe even not be guaranteed to
work with leveldb. It's probably not worth much more effort.
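For reference, the explicit-copy method described above reduces to walking every (prefix,key,value) triple and inserting it into a fresh store, and the undecided point is simply how often to commit: batching every N triples bounds the pending transaction (and the mon's memory growth) at the cost of more commits. This is a hedged stand-alone sketch, with std::map standing in for the leveldb-backed store and all names illustrative rather than Ceph's actual MonitorDBStore API.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Key is the (prefix, key) pair; value is the stored blob.
using Triple = std::pair<std::pair<std::string, std::string>, std::string>;
using Store = std::map<std::pair<std::string, std::string>, std::string>;

// Copy src into dst, committing every batch_size triples so the
// in-flight transaction stays bounded. Returns the number of commits.
size_t copy_store(const Store& src, Store& dst, size_t batch_size) {
  std::vector<Triple> txn;  // pending transaction
  size_t commits = 0;
  for (const auto& kv : src) {
    txn.push_back(kv);
    if (txn.size() == batch_size) {  // commit and start a new txn
      for (const auto& t : txn) dst[t.first] = t.second;
      txn.clear();
      ++commits;
    }
  }
  if (!txn.empty()) {  // final partial batch
    for (const auto& t : txn) dst[t.first] = t.second;
    ++commits;
  }
  return commits;
}
```

With batch_size set to the total triple count this degenerates into the single-transaction copy whose memory growth was observed above; a small batch_size trades that for multiple commits.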
