Discussion: OpTracker optimization
Somnath Roy
2014-09-09 20:33:03 UTC
Hi Sam/Sage,
As we discussed earlier, enabling the present OpTracker code degrades performance severely. For example, in my setup a single OSD node with 10 clients reaches ~103K read IOPS with IO served from memory when optracking is disabled, but with optracker enabled it drops to ~39K IOPS. Running the OSD with OpTracker disabled is probably not an option for many Ceph users.
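(For anyone who wants to reproduce the comparison: op tracking is toggled with the existing osd_enable_op_tracker config option; the exact snippet below is only an illustration.)

[osd]
    # Default is true; setting it to false disables op tracking
    # entirely, which is the configuration behind the ~103K IOPS
    # number above.
    osd enable op tracker = false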
Now, by sharding the OpTracker::ops_in_flight_lock (and thus the xlist ops_in_flight) and removing some other bottlenecks, I am able to match the performance of an OpTracking-enabled OSD with OpTracking disabled, at the expense of ~1 extra CPU core.
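The basic idea looks roughly like this (an illustrative C++ sketch only; the names and structure are my simplification, not the actual code in the pull request):

// Illustrative sketch, not the actual patch: replace the single
// ops_in_flight_lock with N independently locked shards, so that
// registering ops from different threads no longer serializes on
// one tracker-wide mutex.
#include <list>
#include <memory>
#include <mutex>
#include <vector>

struct TrackedOp;                        // stands in for Ceph's TrackedOp

class ShardedOpTracker {
  struct Shard {
    std::mutex ops_in_flight_lock;       // was one global lock
    std::list<TrackedOp*> ops_in_flight; // was one global xlist
  };
  std::vector<std::unique_ptr<Shard>> shards;

public:
  explicit ShardedOpTracker(size_t num_shards) {
    for (size_t i = 0; i < num_shards; ++i)
      shards.push_back(std::make_unique<Shard>());
  }

  // Hash each op to one shard; registration contends only on that
  // shard's lock.
  void register_op(TrackedOp* op, size_t seq) {
    Shard& s = *shards[seq % shards.size()];
    std::lock_guard<std::mutex> l(s.ops_in_flight_lock);
    s.ops_in_flight.push_back(op);
  }

  // Dumping in-flight ops walks the shards one at a time, so no
  // single lock is held across the whole traversal.
  template <typename F>
  void for_each_op(F&& f) {
    for (auto& sp : shards) {
      std::lock_guard<std::mutex> l(sp->ops_in_flight_lock);
      for (TrackedOp* op : sp->ops_in_flight)
        f(op);
    }
  }
};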
In the process I have also fixed the following tracker:

http://tracker.ceph.com/issues/9384

and probably http://tracker.ceph.com/issues/8885 too.

I have created the following pull request for the same. Please review it:

https://github.com/ceph/ceph/pull/2440
Thanks & Regards

Somnath


Samuel Just
2014-09-10 18:16:51 UTC
Added a comment about the approach.
-Sam
Somnath Roy
2014-09-10 20:30:48 UTC
Thanks Sam, I responded back :-)

Samuel Just
2014-09-10 21:36:27 UTC
Responded with cosmetic nonsense. Once you've got that and the other
comments addressed, I can put it in wip-sam-testing.
-Sam
Somnath Roy
2014-09-10 21:38:30 UTC
Thanks Sam.
So, you want me to go with optracker per ShardedOpWQ, right?

Regards
Somnath

Samuel Just
2014-09-10 22:07:55 UTC
I don't quite understand.
-Sam
Somnath Roy
2014-09-10 22:13:08 UTC
As I understand it, you want me to implement the following:

1. Keep this implementation: one sharded optracker for the IOs going through the ms_dispatch path.

2. Additionally, for IOs going through ms_fast_dispatch, implement one optracker (without internal sharding) per OpWQ shard (see the sketch below).

Am I right?
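
In other words, something like this layout (an illustrative sketch of my understanding only; the type names are mine, not from the tree):

#include <vector>

struct OpTrackerPlain   { /* single lock + single in-flight list   */ };
struct OpTrackerSharded { /* N internal lock+list shards, as above */ };

// Fast path: each OpWQ shard owns a private, unsharded tracker; only
// that shard's worker threads touch it, so there is no cross-shard
// contention to begin with.
struct OpWQShard {
  OpTrackerPlain tracker;
  // ... per-shard queue, worker threads, etc.
};

// Slow path: ops arriving via ms_dispatch share one tracker that is
// sharded internally.
struct OSD {
  OpTrackerSharded dispatch_tracker;    // ios via ms_dispatch
  std::vector<OpWQShard> op_wq_shards;  // ios via ms_fast_dispatch
};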

Thanks & Regards
Somnath

Samuel Just
2014-09-10 22:25:13 UTC
Oh, I changed my mind, your approach is fine. I was unclear.
Currently, I just need you to address the other comments.
-Sam
Somnath Roy
2014-09-11 01:52:16 UTC
Sam/Sage,
I have incorporated all of your comments. Please have a look at the same pull request.

https://github.com/ceph/ceph/pull/2440

Thanks & Regards
Somnath

Sage Weil
2014-09-11 03:33:22 UTC
I had two substantive comments on the first patch and then some trivial whitespace nits. Otherwise it looks good!

thanks-
sage
Somnath Roy
2014-09-11 18:30:00 UTC
Sam/Sage,
I have addressed all of your comments and pushed the changes to the same pull request.

https://github.com/ceph/ceph/pull/2440

Thanks & Regards
Somnath

Samuel Just
2014-09-11 18:30:59 UTC
Just added it to wip-sam-testing.
-Sam
Somnath Roy
2014-09-13 08:03:52 UTC
Sam/Sage,
I saw that Giant was forked off today. We need the pull request (https://github.com/ceph/ceph/pull/2440) to be in Giant, so could you please merge it into Giant when it is ready?

Thanks & Regards
Somnath

Alexandre DERUMIER
2014-09-13 09:00:42 UTC
Hi,
As a Ceph user, it would be wonderful to have this in Giant; the optracker performance impact is really huge (see my SSD benchmark on the ceph-users mailing list).

Regards,

Alexandre Derumier

Sage Weil
2014-09-13 14:32:06 UTC
Definitely. More importantly, it resolves a few crashes we've observed. It's going through some testing right now, but once that's done it'll go into Giant.

sage
Somnath Roy
2014-09-13 16:19:46 UTC
Thanks Sage!
