Discussion:
Why is ZFS on Ceph unstable?
Nicheal
2014-09-19 09:20:33 UTC
Permalink
Hi developers,

The source code contains OPTION(filestore_zfs_snap, OPT_BOOL, false)
// zfsonlinux is still unstable. So if we turn on filestore_zfs_snap
and skip the journal, as with btrfs, will it be unstable?

As mentioned in the ZFS on Linux community, it is stable enough
to run a ZFS root filesystem on a GNU/Linux installation for your
workstation as something to play around with. It is copy-on-write and
supports compression, deduplication, file atomicity, off-disk caching,
and much more (encryption is not supported). So it seems that all
features are supported except for encryption.
Thus, I am puzzled about what "unstable" means here: is ZFS itself
unstable, or is it now stable on Linux but still unstable when used as
the Ceph FileStore filesystem?

If so, what will happen if we use it: data loss or frequent crashes?
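
For reference, here is a minimal sketch of how I would expect that
option to be enabled in ceph.conf, assuming it belongs in the [osd]
section (the key spelling matches the OPTION() name with the
filestore prefix split off):

  [osd]
    filestore zfs_snap = 1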

Nicheal
Sage Weil
2014-09-19 15:13:44 UTC
Permalink
Post by Nicheal
Hi developers,
The source code contains OPTION(filestore_zfs_snap, OPT_BOOL, false)
// zfsonlinux is still unstable. So if we turn on filestore_zfs_snap
and skip the journal, as with btrfs, will it be unstable?
As mentioned in the ZFS on Linux community, it is stable enough
to run a ZFS root filesystem on a GNU/Linux installation for your
workstation as something to play around with. It is copy-on-write and
supports compression, deduplication, file atomicity, off-disk caching,
and much more (encryption is not supported). So it seems that all
features are supported except for encryption.
Thus, I am puzzled about what "unstable" means here: is ZFS itself
unstable, or is it now stable on Linux but still unstable when used as
the Ceph FileStore filesystem?
If so, what will happen if we use it: data loss or frequent crashes?
At the time the libzfs support was added, zfsonlinux would crash very
quickly under the ceph-osd workload. If that has changed, great! We
haven't tested it, though, since Zheng added the initial support.

sage
Eric Eastman
2014-09-19 15:40:43 UTC
Permalink
Post by Nicheal
Hi developers,
The source code contains OPTION(filestore_zfs_snap, OPT_BOOL, false)
// zfsonlinux is still unstable. So if we turn on filestore_zfs_snap
and skip the journal, as with btrfs, will it be unstable?
As mentioned in the ZFS on Linux community, it is stable enough
to run a ZFS root filesystem on a GNU/Linux installation for your
workstation as something to play around with. It is copy-on-write and
supports compression, deduplication, file atomicity, off-disk caching,
and much more (encryption is not supported). So it seems that all
features are supported except for encryption.
Thus, I am puzzled about what "unstable" means here: is ZFS itself
unstable, or is it now stable on Linux but still unstable when used as
the Ceph FileStore filesystem?
If so, what will happen if we use it: data loss or frequent crashes?
In my testing last year, there were multiple issues with using ZFS as
my OSD backend that would lock up the ZFS file systems and take the
OSD down.

Several of these have been fixed by the ZFS team. See:

https://github.com/zfsonlinux/zfs/issues/1891
https://github.com/zfsonlinux/zfs/issues/1961
https://github.com/zfsonlinux/zfs/issues/2015

The recommendation is to use xattr=sa (a setup sketch follows the list
below), but looking at the current open issues for ZFS, there still
seem to be issues with this option. See:

https://github.com/zfsonlinux/zfs/issues/2700
https://github.com/zfsonlinux/zfs/issues/2717
https://github.com/zfsonlinux/zfs/issues/2663
and others
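
For anyone setting this up, a minimal sketch of applying the xattr=sa
recommendation to an OSD data filesystem; the pool/dataset name
tank/ceph-osd0 is hypothetical:

  # create the OSD data filesystem with SA-based xattrs
  zfs create -o xattr=sa tank/ceph-osd0

  # or switch an existing dataset; xattrs written before the change
  # keep the old on-disk layout
  zfs set xattr=sa tank/ceph-osd0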

Also, per the recent ZFS posting on ClusterHQ, AIO will not be
supported until 0.6.4, so the following needs to be added to your
ceph.conf file:

filestore zfs_snap = 1
journal aio = 0
journal dio = 0

My plan is to retest ZFS as an OSD backend once ZFS version 0.6.4 has
been released.

Please test ZFS with Ceph and submit bugs, as this is how it will get
stable enough to use in production.

Eric
Mark Nelson
2014-09-19 15:50:18 UTC
Permalink
Post by Eric Eastman
Post by Nicheal
Hi developers,
The source code contains OPTION(filestore_zfs_snap, OPT_BOOL, false)
// zfsonlinux is still unstable. So if we turn on filestore_zfs_snap
and skip the journal, as with btrfs, will it be unstable?
As mentioned in the ZFS on Linux community, it is stable enough
to run a ZFS root filesystem on a GNU/Linux installation for your
workstation as something to play around with. It is copy-on-write and
supports compression, deduplication, file atomicity, off-disk caching,
and much more (encryption is not supported). So it seems that all
features are supported except for encryption.
Thus, I am puzzled about what "unstable" means here: is ZFS itself
unstable, or is it now stable on Linux but still unstable when used as
the Ceph FileStore filesystem?
If so, what will happen if we use it: data loss or frequent crashes?
In my testing last year, there were multiple issues with using ZFS as
my OSD backend that would lock up the ZFS file systems and take the
OSD down.
Several of these have been fixed by the ZFS team. See:
https://github.com/zfsonlinux/zfs/issues/1891
https://github.com/zfsonlinux/zfs/issues/1961
https://github.com/zfsonlinux/zfs/issues/2015
The recommendation is to use xattr=sa, but looking at the current open
issues for ZFS, there still seem to be issues with this option. See:
https://github.com/zfsonlinux/zfs/issues/2700
https://github.com/zfsonlinux/zfs/issues/2717
https://github.com/zfsonlinux/zfs/issues/2663
and others
SA xattrs are pretty important from a performance perspective for Ceph
on ZFS based on some testing I did a while back with Brian Behlendorf.
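
As an aside, a quick sanity check that a given OSD dataset actually has
the property set; the dataset name is again hypothetical:

  zfs get xattr tank/ceph-osd0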
Post by Eric Eastman
Also, per the recent ZFS posting on ClusterHQ, AIO will not be
supported until 0.6.4, so the following needs to be added to your
ceph.conf file:
filestore zfs_snap = 1
journal aio = 0
journal dio = 0
My plan is to retest ZFS as an OSD backend once ZFS version 0.6.4 has
been released.
Please test ZFS with Ceph and submit bugs, as this is how it will get
stable enough to use in production.
Eric