Discussion:
The Async messenger benchmark with latest master
Somnath Roy
2014-10-17 20:37:55 UTC
Permalink
Hi Sage/Haomai,

I did some 4K Random Read benchmarking with latest master having Async messenger changes and result looks promising.

My configuration:
---------------------

1 node, 8 SSDs, 8 OSDs, 3 pools, each with 3 images, ~2000 PGs cluster wide
CPU: Intel(R) Xeon(R) E5-2680 v2 @ 2.80GHz, dual socket, HT enabled, 40 cores.
Used krbd as client.
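
For reference, a minimal fio job along these lines could drive the 4K random read workload against the mapped krbd devices. This is only a sketch: the device paths, queue depth and runtime below are assumptions, not values taken from the original run.

[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=64
runtime=300
time_based=1
group_reporting=1

[rbd0]
filename=/dev/rbd0

[rbd1]
filename=/dev/rbd1

[rbd2]
filename=/dev/rbd2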


1 client node with 3 rbd images on 3 different pools:
-------------------------------------------------------------------
Master :
---------

~203k IOPS, ~90% latencies within 4msec, total read : ~5.2TB

lat (usec) : 4=0.01%, 10=0.01%, 50=0.01%, 100=0.01%, 250=0.03%
lat (usec) : 500=0.86%, 750=3.72%, 1000=7.19%
lat (msec) : 2=39.17%, 4=41.45%, 10=7.26%, 20=0.24%, 50=0.06%
lat (msec) : 100=0.03%, 250=0.01%, 500=0.01%

cpu: ~1-2 % idle

Giant:
------

~196k IOPS, ~89-90% latencies within 4msec, total read : ~5.3TB

lat (usec) : 250=0.01%, 500=0.51%, 750=2.71%, 1000=6.03%
lat (msec) : 2=34.32%, 4=45.74%, 10=10.46%, 20=0.21%, 50=0.02%
lat (msec) : 100=0.01%, 250=0.01%

cpu: ~2% idle

2 clients with 3 rbd images each on 3 different pools:
-------------------------------------------------------------------

Master :
---------

~207K iops, ~70% latencies within 4msec, Total read: ~5.99 TB

lat (usec) : 250=0.03%, 500=0.63%, 750=2.67%, 1000=5.21%
lat (msec) : 2=25.12%, 4=36.19%, 10=24.80%, 20=3.16%, 50=1.34%
lat (msec) : 100=0.66%, 250=0.18%, 500=0.01%

cpu: ~0-1 % idle

Giant:
--------
~199K iops, ~64% latencies within 4msec, Total read: ~5.94 TB

lat (usec) : 250=0.01%, 500=0.25%, 750=1.47%, 1000=3.45%
lat (msec) : 2=21.22%, 4=36.69%, 10=30.63%, 20=4.28%, 50=1.70%
lat (msec) : 100=0.28%, 250=0.02%, 500=0.01%

cpu: ~1% idle


So, in summary, master with the Async messenger has improved in both IOPS and latency.

Thanks & Regards
Somnath


Alexandre DERUMIER
2014-10-18 09:07:34 UTC
Permalink
Hi Somnath,

Results seem promising indeed :)

Can you share your ceph.conf ?


Haomai Wang
2014-10-19 05:14:34 UTC
Permalink
Thanks Somnath!

I have another simple performance test for async messenger:

For 4k object reads, the master branch took 4.46s to complete the test; the Async
Messenger took 3.14s.
For 4k object writes, the master branch took 10.6s; the Async
Messenger took 6.6s!!

Detailed results are below. The 4k object read test is a simple Ceph client
program which reads 5000 objects, and the 4k object write test writes
5000 objects.
I increased "ms_event_op_threads" to 10 from the default of 2.
Maybe Somnath can do the same and test again; I think we can get more
improvements for your tests.
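
(As a rough illustration only: the actual test program is not in this thread, but a
"read 5000 4k objects" loop of this shape could be written against the Python rados
bindings. The pool and object names below are assumptions.)

import rados, time

# connect to the cluster using the same ceph.conf as the tests above
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')          # pool name passed on the command line above

start = time.time()
for i in range(5000):
    ioctx.read('obj_%d' % i, 4096, 0)      # read 4 KB from each object
print("Used Time:%f" % (time.time() - start))

ioctx.close()
cluster.shutdown()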

Master Branch(6fa686c8c42937dd069591f16de92e954d8ed34d):

[***@ceph-test src]# for i in `seq 1 3`; do date &&
~/08.rados_read_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 &&
sleep 3; done

Fri Oct 17 17:10:39 UTC 2014

Used Time:4.461581

Fri Oct 17 17:10:44 UTC 2014

Used Time:4.388572

Fri Oct 17 17:10:48 UTC 2014

Used Time:4.448157

[***@ceph-test src]#

[***@ceph-test src]# for i in `seq 1 3`; do date &&
~/01.rados_write_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 &&
sleep 3; doneFri Oct 17 17:11:23 UTC 2014

Used Time:10.638783

Fri Oct 17 17:11:33 UTC 2014

Used Time:10.793231

Fri Oct 17 17:11:44 UTC 2014

Used Time:10.908003


Master Branch with AsyncMessenger:

[***@ceph-test src]# for i in `seq 1 3`; do date &&
~/08.rados_read_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 &&
sleep 3; done

Sun Oct 19 06:01:50 UTC 2014

Used Time:3.155506

Sun Oct 19 06:01:53 UTC 2014

Used Time:3.134961

Sun Oct 19 06:01:56 UTC 2014

Used Time:3.135814

[***@ceph-test src]# for i in `seq 1 3`; do date &&
~/01.rados_write_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 &&
sleep 3; done

Sun Oct 19 06:02:03 UTC 2014

Used Time:6.536319

Sun Oct 19 06:02:10 UTC 2014

Used Time:6.648738

Sun Oct 19 06:02:16 UTC 2014

Used Time:6.585156
--
Best Regards,

Wheat
Somnath Roy
2014-10-19 06:15:57 UTC
Permalink
Haomai,
Sure, I will change 'ms_event_op_threads' and update my findings.

Alexandre,
Here is my config..

[global]

filestore_xattr_use_omap = true

debug_lockdep = 0/0
debug_context = 0/0
debug_crush = 0/0
debug_buffer = 0/0
debug_timer = 0/0
debug_filer = 0/0
debug_objecter = 0/0
debug_rados = 0/0
debug_rbd = 0/0
debug_journaler = 0/0
debug_objectcatcher = 0/0
debug_client = 0/0
debug_osd = 0/0
debug_optracker = 0/0
debug_objclass = 0/0
debug_filestore = 0/0
debug_journal = 0/0
debug_ms = 0/0
debug_monc = 0/0
debug_tp = 0/0
debug_auth = 0/0
debug_finisher = 0/0
debug_heartbeatmap = 0/0
debug_perfcounter = 0/0
debug_asok = 0/0
debug_throttle = 0/0
debug_mon = 0/0
debug_paxos = 0/0
debug_rgw = 0/0
osd_op_threads = 2
osd_op_num_threads_per_shard = 2
osd_op_num_shards = 12
filestore_op_threads = 4

ms_nocrc = true
filestore_fd_cache_size = 100000
filestore_fd_cache_shards = 10000
cephx sign messages = false
cephx require signatures = false

ms_dispatch_throttle_bytes = 0
throttler_perf_counter = false

osd_pool_default_size = 1
osd_pool_default_min_size = 1

filestore_wbthrottle_enable = false
[osd]

osd_journal_size = 150000
# Execute $ hostname to retrieve the name of your host,
# and replace {hostname} with the name of your host.
# For the monitor, replace {ip-address} with the IP
# address of your host.

osd_client_message_size_cap = 0
osd_client_message_cap = 0
osd_enable_op_tracker = false


Thanks & Regards
Somnath

Alexandre DERUMIER
2014-10-19 18:49:08 UTC
Permalink
Post by Somnath Roy
Alexandre,
Here is my config..
Thanks Somnath !



Alexandre DERUMIER
2014-10-20 06:02:01 UTC
Permalink
Hi,

I'm not sure it's related to this bug
http://tracker.ceph.com/issues/9513

But with this fio-rbd benchmark:

[global]
ioengine=rbd
clientname=admin
pool=test
rbdname=test
invalidate=0
rw=randread
bs=4k
direct=1
numjobs=8
group_reporting=1
size=10G

[rbd_iodepth32]
iodepth=32



I get around

40000 iops (CPU bound on the client) with rbd_cache=false

vs

13000 iops (40% CPU usage on the client) with rbd_cache=true


(Note that these should be direct IOs, so they should bypass the cache.)
It seems to be a lock or something like that, as the CPU usage is a lot lower too.

Is this the expected behavior ?
Somnath Roy
2014-10-20 06:18:48 UTC
Permalink
Hi Alexandre,

Yes, it is related to the defect I filed.
I think with the librbd engine, direct=1 is a noop; the I/O goes through librbd, not through the kernel, so whether it uses the cache or not depends on the rbd_cache flag.
This is supposed to be fixed in the latest code, but I didn't have time to test it out.
It seems the fix is not in 0.86.
Could you please check whether the following commit is there in your branch or not ?

82175ec94acc89dc75da0154f86187fb2e4dbf5e
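
(For anyone reproducing the comparison: the cache flag is typically toggled in the
[client] section of ceph.conf. This is only a minimal sketch, not Alexandre's actual
configuration.)

[client]
        rbd_cache = false        # set to true/false and rerun the fio rbd job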

Thanks & Regards
Somnath

Alexandre DERUMIER
2014-10-20 06:39:12 UTC
Permalink
Post by Somnath Roy
Could you please check whether the following commit is there in your branch or not ?
82175ec94acc89dc75da0154f86187fb2e4dbf5e
I'm using this debian repository:
http://gitbuilder.ceph.com/ceph-deb-wheezy-x86_64-basic/ref/v0.86

I don't know how to check whether the commit is in it or not.
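
(One way to check, assuming a local clone of the ceph git tree; the fetch URL is an
assumption, any ceph remote would do:)

git fetch --tags https://github.com/ceph/ceph.git
git tag --contains 82175ec94acc89dc75da0154f86187fb2e4dbf5e    # lists tags that include the commit
# or: exit status 0 if the commit is an ancestor of the v0.86 tag
git merge-base --is-ancestor 82175ec94acc89dc75da0154f86187fb2e4dbf5e v0.86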


But the good new,
I just tried on last master
http://gitbuilder.ceph.com/ceph-deb-wheezy-x86_64-basic/ref/master


and Indeed, the problem is fixed !


Thanks for the help,

Alexandre


Somnath Roy
2014-10-22 16:02:22 UTC
Permalink
Hi Haomai/Sage,

I just figured out that the Async messenger is not yet in master. So, I didn't really test the async messenger :-(

I see the pull request is here:

https://github.com/ceph/ceph/pull/2768

I want to test this out and I have some questions regarding this.

1. The branch I should test out is the following, right ?

https://github.com/ceph/ceph/tree/wip-msgr

2. Will the Async messenger be enabled by default, or do I need to add some config option for that ?

3. Other than ms_event_op_threads, is there any tunable parameter I should be playing with ?

Thanks & Regards
Somnath

Haomai Wang
2014-10-22 17:02:14 UTC
Permalink
ms_type = async needs to be added to the [global] section in ceph.conf

No other options are needed
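
(A minimal ceph.conf sketch of what that looks like; the ms_event_op_threads value
is only the optional tuning mentioned earlier in this thread, not a requirement.)

[global]
        ms_type = async
        ms_event_op_threads = 10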
--
Best Regards,

Wheat
Somnath Roy
2014-10-22 17:18:29 UTC
Permalink
Thanks Haomai, I will try this branch out.
BTW, will it be compatible with clients that are not using the async messenger, for example krbd ?

Regards
Somnath

Haomai Wang
2014-10-22 17:49:20 UTC
Permalink
It should be, but I'm not fully convinced of the current implementation of
AsyncMessenger. It's possible that some corner cases exist which
may break.
--
Best Regards,

Wheat
Sage Weil
2014-10-22 17:52:38 UTC
Permalink
Post by Haomai Wang
It should be, but I'm not fully convinced of the current implementation of
AsyncMessenger. It's possible that some corner cases exist which
may break.
The 'ms_type = random' option we added should give us some confidence that
this is true (by doing QA on mixes of both messenger implementations).

sage
Somnath Roy
2014-10-22 20:34:34 UTC
Permalink
Sage,
So, with krbd as the client, should I use 'ms_type = random' on the cluster side to test the Async messenger there?
BTW, what does this option do? Will it pick Async first and fall back to the old messenger in case of any problem?

Thanks & Regards
Somnath

-----Original Message-----
From: Sage Weil [mailto:***@newdream.net]
Sent: Wednesday, October 22, 2014 10:53 AM
To: Haomai Wang
Cc: Somnath Roy; ceph-***@vger.kernel.org
Subject: Re: The Async messenger benchmark with latest master
Post by Haomai Wang
It should be but I'm not convinced of the current impl of
AsyncMessenger. It's possible that exists some corner situations which
may break
The 'ms_type = random' option we added should give us some confidnece that this is true (by doing QA on mixes old both messenger implementations).

sage
Post by Haomai Wang
Post by Somnath Roy
Thanks Haomai, I will try this branch out.
BTW, will it be compatible with the client which is not using this async messenger for example krbd ?
Regards
Somnath
-----Original Message-----
Sent: Wednesday, October 22, 2014 10:02 AM
To: Somnath Roy
Subject: Re: The Async messenger benchmark with latest master
ms_type = async is needed to add [global] section in ceph.conf
No other options are needed
Post by Somnath Roy
Hi Haomai/Sage,
I just figured it out that Async messenger is not yet in master.
So, I didn't really test the async messenger :-(
I saw here is the pull request.
https://github.com/ceph/ceph/pull/2768
I want to test this out and I have some questions regarding this.
1. The branch I should test out is the following, right ?
https://github.com/ceph/ceph/tree/wip-msgr
2. Will the Async messenger be enabled by default? Or do I need to add some config option for that?
3. Other than ms_event_op_threads, is there any tunable parameter I should be playing with?
Thanks & Regards
Somnath
-----Original Message-----
Sent: Saturday, October 18, 2014 10:15 PM
To: Somnath Roy
Subject: Re: The Async messenger benchmark with latest master
Thanks Somnath!
For 4k object read, the master branch used 4.46s to complete the tests, while the async Messenger used 3.14s. For 4k object write, the master branch used 10.6s, while the async Messenger used 6.6s!!
Detailed results are below; the 4k object read test is a simple ceph client
program which reads 5000 objects, and the 4k object write test writes
5000 objects.
I increased "ms_event_op_threads" to 10 from the default value of 2.
Maybe Somnath can do the same and test again; I think we can get more improvement in your tests.
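In ceph.conf terms, that tuning would look roughly like this (a sketch; only the option name, the value 10, and the default of 2 come from this thread):

    [global]
        ms_type = async
        # raised from the default of 2 for these tests
        ms_event_op_threads = 10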
~/08.rados_read_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 &&
sleep 3; done
Fri Oct 17 17:10:39 UTC 2014
Used Time:4.461581
Fri Oct 17 17:10:44 UTC 2014
Used Time:4.388572
Fri Oct 17 17:10:48 UTC 2014
Used Time:4.448157
~/01.rados_write_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 &&
sleep 3; done
Fri Oct 17 17:11:23 UTC 2014
Used Time:10.638783
Fri Oct 17 17:11:33 UTC 2014
Used Time:10.793231
Fri Oct 17 17:11:44 UTC 2014
Used Time:10.908003
~/08.rados_read_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 &&
sleep 3; done
Sun Oct 19 06:01:50 UTC 2014
Used Time:3.155506
Sun Oct 19 06:01:53 UTC 2014
Used Time:3.134961
Sun Oct 19 06:01:56 UTC 2014
Used Time:3.135814
~/01.rados_write_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 &&
sleep 3; done
Sun Oct 19 06:02:03 UTC 2014
Used Time:6.536319
Sun Oct 19 06:02:10 UTC 2014
Used Time:6.648738
Sun Oct 19 06:02:16 UTC 2014
Used Time:6.585156
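The 01./08. test programs themselves are not included in this thread; as a rough illustration only, a loop of the same shape can be sketched with the python-rados bindings (pool name 'rbd' and object naming are assumptions):

    import time
    import rados

    # Sketch of the 4k-object write/read pattern described above; the real
    # test programs from this thread are not shown here.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    payload = b'\0' * 4096

    start = time.time()
    for i in range(5000):
        ioctx.write_full('bench_obj_%d' % i, payload)   # 4k object write
    print('Used Time:%f' % (time.time() - start))

    start = time.time()
    for i in range(5000):
        ioctx.read('bench_obj_%d' % i, 4096)            # 4k object read
    print('Used Time:%f' % (time.time() - start))

    ioctx.close()
    cluster.shutdown()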
Post by Somnath Roy
Hi Sage/Haomai,
I did some 4K Random Read benchmarking with latest master having Async messenger changes and result looks promising.
---------------------
1 node, 8 SSDs, 8OSDs, 3 pools , each has 3 images. ~2000 Pg
Used krbd as client.
------------------------------------------------------------------
-
---------
~203k IOPS, ~90% latencies within 4msec, total read : ~5.2TB
lat (usec) : 4=0.01%, 10=0.01%, 50=0.01%, 100=0.01%, 250=0.03%
lat (usec) : 500=0.86%, 750=3.72%, 1000=7.19%
lat (msec) : 2=39.17%, 4=41.45%, 10=7.26%, 20=0.24%, 50=0.06%
lat (msec) : 100=0.03%, 250=0.01%, 500=0.01%
cpu: ~1-2 % idle
------
~196k IOPS, ~89-90% latencies within 4msec, total read : ~5.3TB
lat (usec) : 250=0.01%, 500=0.51%, 750=2.71%, 1000=6.03%
lat (msec) : 2=34.32%, 4=45.74%, 10=10.46%, 20=0.21%, 50=0.02%
lat (msec) : 100=0.01%, 250=0.01%
cpu: ~2% idle
------------------------------------------------------------------
-
---------
~207K iops, ~70% latencies within 4msec, Total read: ~5.99 TB
lat (usec) : 250=0.03%, 500=0.63%, 750=2.67%, 1000=5.21%
lat (msec) : 2=25.12%, 4=36.19%, 10=24.80%, 20=3.16%, 50=1.34%
lat (msec) : 100=0.66%, 250=0.18%, 500=0.01%
cpu: ~0-1 % idle
--------
~199K iops, ~64% latencies within 4msec, Total read: ~5.94 TB
lat (usec) : 250=0.01%, 500=0.25%, 750=1.47%, 1000=3.45%
lat (msec) : 2=21.22%, 4=36.69%, 10=30.63%, 20=4.28%, 50=1.70%
lat (msec) : 100=0.28%, 250=0.02%, 500=0.01%
cpu: ~1% idle
So, in summary the master with Async messenger has improved both in iops and latency.
Thanks & Regards
Somnath
Sage Weil
2014-10-22 21:01:53 UTC
Permalink
Post by Somnath Roy
Sage,
So, with krbd as the client, should I use 'ms_type = random' on the cluster side to test the Async messenger there?
BTW, what does this option do? Will it pick Async first and fall back to the old messenger in case of any problem?
You can just use ms_type = async. random will randomly choose between the
two, the idea being you can spin up a big (test) cluster with ms_type =
random and you'll test interoperability between the two without
manually specifying which daemon uses which implementation.

sage
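On a test cluster, the QA-oriented setup described above might look like this (a sketch, with the same [global] placement assumed):

    [global]
        # each daemon picks one of the two messenger implementations at random,
        # exercising interoperability without configuring daemons individually
        ms_type = random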
Post by Somnath Roy
Thanks & Regards
Somnath
-----Original Message-----
Sent: Wednesday, October 22, 2014 10:53 AM
To: Haomai Wang
Subject: Re: The Async messenger benchmark with latest master
Post by Haomai Wang
It should be, but I'm not convinced of the current impl of
AsyncMessenger. It's possible that there exist some corner situations
which may break it.
The 'ms_type = random' option we added should give us some confidence that this is true (by doing QA on a mix of both messenger implementations).
sage
Post by Haomai Wang
Post by Somnath Roy
Thanks Haomai, I will try this branch out.
BTW, will it be compatible with a client that is not using this async messenger, for example krbd?
Regards
Somnath
-----Original Message-----
Sent: Wednesday, October 22, 2014 10:02 AM
To: Somnath Roy
Subject: Re: The Async messenger benchmark with latest master
ms_type = async needs to be added to the [global] section in ceph.conf.
No other options are needed
Post by Somnath Roy
Hi Haomai/Sage,
I just figured out that the Async messenger is not yet in master.
So, I didn't really test the async messenger :-(
I saw here is the pull request.
https://github.com/ceph/ceph/pull/2768
I want to test this out and I have some questions regarding this.
1. The branch I should test out is the following, right ?
https://github.com/ceph/ceph/tree/wip-msgr
2. Will the Async messenger be enabled by default? Or do I need to add some config option for that?
3. Other than ms_event_op_threads, is there any tunable parameter I should be playing with?
Thanks & Regards
Somnath
-----Original Message-----
Sent: Saturday, October 18, 2014 10:15 PM
To: Somnath Roy
Subject: Re: The Async messenger benchmark with latest master
Thanks Somnath!
For 4k object read, the master branch used 4.46s to complete the tests, while the async Messenger used 3.14s. For 4k object write, the master branch used 10.6s, while the async Messenger used 6.6s!!
Detailed results are below; the 4k object read test is a simple ceph client
program which reads 5000 objects, and the 4k object write test writes
5000 objects.
I increased "ms_event_op_threads" to 10 from the default value of 2.
Maybe Somnath can do the same and test again; I think we can get more improvement in your tests.
~/08.rados_read_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 &&
sleep 3; done
Fri Oct 17 17:10:39 UTC 2014
Used Time:4.461581
Fri Oct 17 17:10:44 UTC 2014
Used Time:4.388572
Fri Oct 17 17:10:48 UTC 2014
Used Time:4.448157
~/01.rados_write_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 &&
sleep 3; done
Fri Oct 17 17:11:23 UTC 2014
Used Time:10.638783
Fri Oct 17 17:11:33 UTC 2014
Used Time:10.793231
Fri Oct 17 17:11:44 UTC 2014
Used Time:10.908003
~/08.rados_read_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 &&
sleep 3; done
Sun Oct 19 06:01:50 UTC 2014
Used Time:3.155506
Sun Oct 19 06:01:53 UTC 2014
Used Time:3.134961
Sun Oct 19 06:01:56 UTC 2014
Used Time:3.135814
~/01.rados_write_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 &&
sleep 3; done
Sun Oct 19 06:02:03 UTC 2014
Used Time:6.536319
Sun Oct 19 06:02:10 UTC 2014
Used Time:6.648738
Sun Oct 19 06:02:16 UTC 2014
Used Time:6.585156
Post by Somnath Roy
Hi Sage/Haomai,
I did some 4K Random Read benchmarking with latest master having Async messenger changes and result looks promising.
---------------------
1 node, 8 SSDs, 8OSDs, 3 pools , each has 3 images. ~2000 Pg
Used krbd as client.
------------------------------------------------------------------
-
---------
~203k IOPS, ~90% latencies within 4msec, total read : ~5.2TB
lat (usec) : 4=0.01%, 10=0.01%, 50=0.01%, 100=0.01%, 250=0.03%
lat (usec) : 500=0.86%, 750=3.72%, 1000=7.19%
lat (msec) : 2=39.17%, 4=41.45%, 10=7.26%, 20=0.24%, 50=0.06%
lat (msec) : 100=0.03%, 250=0.01%, 500=0.01%
cpu: ~1-2 % idle
------
~196k IOPS, ~89-90% latencies within 4msec, total read : ~5.3TB
lat (usec) : 250=0.01%, 500=0.51%, 750=2.71%, 1000=6.03%
lat (msec) : 2=34.32%, 4=45.74%, 10=10.46%, 20=0.21%, 50=0.02%
lat (msec) : 100=0.01%, 250=0.01%
cpu: ~2% idle
------------------------------------------------------------------
-
---------
~207K iops, ~70% latencies within 4msec, Total read: ~5.99 TB
lat (usec) : 250=0.03%, 500=0.63%, 750=2.67%, 1000=5.21%
lat (msec) : 2=25.12%, 4=36.19%, 10=24.80%, 20=3.16%, 50=1.34%
lat (msec) : 100=0.66%, 250=0.18%, 500=0.01%
cpu: ~0-1 % idle
--------
~199K iops, ~64% latencies within 4msec, Total read: ~5.94 TB
lat (usec) : 250=0.01%, 500=0.25%, 750=1.47%, 1000=3.45%
lat (msec) : 2=21.22%, 4=36.69%, 10=30.63%, 20=4.28%, 50=1.70%
lat (msec) : 100=0.28%, 250=0.02%, 500=0.01%
cpu: ~1% idle
So, in summary the master with Async messenger has improved both in iops and latency.
Thanks & Regards
Somnath