Discussion:
OSD suicide after being down/in for one day as it needs to search a large number of objects
Guang Yang
2014-08-19 06:30:11 UTC
Hi ceph-devel,
David (cc'ed) reported a bug (http://tracker.ceph.com/issues/9128) which we came across in our test cluster during our failure testing. Basically, the way to reproduce it was to leave one OSD daemon down and in for a day while continuing to send write traffic. When the OSD daemon was started again, it hit the suicide timeout and killed itself.

After some analysis (details in the bug), David found that the op thread was busy searching for missing objects, and once the volume of objects to search increases, the thread can end up working for that long; please refer to the bug for detailed logs.

One simple fix is to let the op thread reset the suicide timeout periodically when it is doing long-running work; another fix might be to cut the work into smaller pieces.

Any suggestions are welcome.

Thanks,
Guang
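
(For illustration only: a rough sketch of the second option above, cutting the search into bounded batches so the op thread can come up for air between passes. Everything here, including the names, the batch size, and the requeue convention, is an assumption made for the sketch, not actual Ceph code.)

// Illustrative sketch only, not Ceph source: the "cut the work into
// smaller pieces" idea. Scan at most a bounded number of entries per
// pass and let the caller requeue the remainder, so no single pass runs
// long enough to trip the suicide timeout.
#include <cstddef>
#include <list>
#include <string>

static const std::size_t MAX_ENTRIES_PER_PASS = 1000;  // assumed batch size

// Stand-in for the per-object "is it present locally?" check.
static bool check_object_locally(const std::string &oid) { (void)oid; return true; }

// Returns true if more passes are needed, so the caller can requeue this
// work item and give other ops (and the heartbeat) a chance to run first.
bool scan_some_missing(std::list<std::string> &remaining)
{
  std::size_t done = 0;
  while (!remaining.empty() && done < MAX_ENTRIES_PER_PASS) {
    check_object_locally(remaining.front());
    remaining.pop_front();
    ++done;
  }
  return !remaining.empty();
}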
Gregory Farnum
2014-08-19 22:09:25 UTC
Post by Guang Yang
Hi ceph-devel,
David (cc'ed) reported a bug (http://tracker.ceph.com/issues/9128) which we came across in our test cluster during our failure testing. Basically, the way to reproduce it was to leave one OSD daemon down and in for a day while continuing to send write traffic. When the OSD daemon was started again, it hit the suicide timeout and killed itself.
Post by Guang Yang
After some analysis (details in the bug), David found that the op thread was busy searching for missing objects, and once the volume of objects to search increases, the thread can end up working for that long; please refer to the bug for detailed logs.

Can you talk a little more about what's going on here? At a quick
naive glance, I'm not seeing why leaving an OSD down and in should
require work based on the amount of write traffic. Perhaps if the rest
of the cluster was changing mappings...?
Post by Guang Yang
One simple fix is to let the op thread reset the suicide timeout periodically when it is doing long-running work; another fix might be to cut the work into smaller pieces.

We do both of those things throughout the OSD (although I think the
first is simpler and more common); search for the accesses to
cct->get_heartbeat_map()->reset_timeout.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
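
(For illustration: a minimal sketch of the first pattern Greg points at, a long-running loop periodically resetting its heartbeat timeout. The handle, the grace values, and the reset_timeout signature below are assumptions from memory rather than a real call site; grepping for cct->get_heartbeat_map()->reset_timeout in the tree, as he suggests, shows the real ones.)

// Illustrative sketch only, not Ceph source (assumes the Ceph tree's
// CephContext and common/HeartbeatMap.h). The loop pokes the internal
// heartbeat map on every iteration so the heartbeat code sees progress
// and neither the warning grace nor the suicide grace expires.
void scan_many_objects(CephContext *cct,
                       ceph::heartbeat_handle_d *hb,        // this thread's handle
                       const std::list<hobject_t> &objects,
                       time_t grace, time_t suicide_grace)  // values a real caller already has
{
  for (std::list<hobject_t>::const_iterator p = objects.begin();
       p != objects.end(); ++p) {
    // Renew the grace periods; the exact reset_timeout signature here is
    // from memory and may differ from the real one.
    cct->get_heartbeat_map()->reset_timeout(hb, grace, suicide_grace);
    // ... per-object work here (check whether *p is present locally) ...
  }
}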
Guang Yang
2014-08-20 11:42:31 UTC
Thanks Greg.
Post by Gregory Farnum
Post by Guang Yang
Hi ceph-devel,
David (cc'ed) reported a bug (http://tracker.ceph.com/issues/9128) which we came across in our test cluster during our failure testing. Basically, the way to reproduce it was to leave one OSD daemon down and in for a day while continuing to send write traffic. When the OSD daemon was started again, it hit the suicide timeout and killed itself.
After some analysis (details in the bug), David found that the op thread was busy searching for missing objects, and once the volume of objects to search increases, the thread can end up working for that long; please refer to the bug for detailed logs.
Post by Gregory Farnum
Can you talk a little more about what's going on here? At a quick naive glance, I'm not seeing why leaving an OSD down and in should require work based on the amount of write traffic. Perhaps if the rest of the cluster was changing mappings...?
We increased the down-to-out time interval from 5 minutes to 2 days to avoid migrating data back and forth, which could increase latency; instead we plan to mark OSDs out manually. To validate that, we are testing some boundary cases, such as leaving an OSD down and in for about a day; however, when we try to bring it up again, it always fails because it hits the suicide timeout.
Post by Gregory Farnum
Post by Guang Yang
One simple fix is to let the op thread reset the suicide timeout periodically when it is doing long-running work; another fix might be to cut the work into smaller pieces.
Post by Gregory Farnum
We do both of those things throughout the OSD (although I think the
first is simpler and more common); search for the accesses to
cct->get_heartbeat_map()->reset_timeout.
-Greg
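
(For reference, the down-to-out tuning described above maps to the mon_osd_down_out_interval option; in ceph.conf it might look roughly like the snippet below, with 2 days expressed in seconds.)

[mon]
    # Keep a down OSD "in" for 2 days (172800 s) instead of the roughly
    # 5-minute default, so data is not rebalanced automatically and an
    # operator marks the OSD out by hand when appropriate.
    mon osd down out interval = 172800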
Sage Weil
2014-08-20 15:19:46 UTC
Post by Guang Yang
Thanks Greg.
Post by Gregory Farnum
Post by Guang Yang
Hi ceph-devel,
David (cc'ed) reported a bug (http://tracker.ceph.com/issues/9128) which we came across in our test cluster during our failure testing. Basically, the way to reproduce it was to leave one OSD daemon down and in for a day while continuing to send write traffic. When the OSD daemon was started again, it hit the suicide timeout and killed itself.
After some analysis (details in the bug), David found that the op thread was busy searching for missing objects, and once the volume of objects to search increases, the thread can end up working for that long; please refer to the bug for detailed logs.
Post by Gregory Farnum
Can you talk a little more about what's going on here? At a quick naive glance, I'm not seeing why leaving an OSD down and in should require work based on the amount of write traffic. Perhaps if the rest of the cluster was changing mappings...?
Post by Guang Yang
We increased the down-to-out time interval from 5 minutes to 2 days to avoid migrating data back and forth, which could increase latency; instead we plan to mark OSDs out manually. To validate that, we are testing some boundary cases, such as leaving an OSD down and in for about a day; however, when we try to bring it up again, it always fails because it hits the suicide timeout.
Looking at the log snippet I see the PG had log range

5481'28667,5646'34066

Which is ~5500 log events. The default max is 10k. search_for_missing is
basically going to iterate over this list and check if the object is
present locally.

If that's slow enough to trigger a suicide (which it seems to be), the
fix is simple: as Greg says, we just need to make it probe the internal
heartbeat code to indicate progress. In most contexts this is done by
passing a ThreadPool::TPHandle &handle into each method and then
calling handle.reset_tp_timeout() on each iteration. The same needs to be
done for search_for_missing...

sage
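
(To make that concrete, a rough sketch of the shape of the fix Sage describes; the signature and loop body are simplified and partly assumed, not the actual search_for_missing code.)

// Illustrative sketch only, not the real Ceph code: search_for_missing()
// takes a ThreadPool::TPHandle & and resets the thread-pool timeout on
// every iteration over the PG log, so a long scan keeps renewing its
// grace periods instead of hitting the suicide timeout.
void PG::search_for_missing(const pg_log_t &log,
                            ThreadPool::TPHandle &handle)   // simplified signature
{
  for (std::list<pg_log_entry_t>::const_iterator p = log.log.begin();
       p != log.log.end(); ++p) {
    handle.reset_tp_timeout();  // tell the internal heartbeat we made progress
    // ... check whether the object referenced by *p exists locally and
    //     record it in the missing set if not (details omitted) ...
  }
}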

Guang Yang
2014-08-21 01:42:08 UTC
Thanks Sage. We will provide a patch based on this.

Thanks,
Guang