Date: Wed, 13 Apr 2016 16:51:56 +0000 (UTC) From: Warren Block <wblock@FreeBSD.org> To: doc-committers@freebsd.org, svn-doc-all@freebsd.org, svn-doc-head@freebsd.org Subject: svn commit: r48612 - head/en_US.ISO8859-1/htdocs/news/status Message-ID: <201604131651.u3DGpuqO017005@repo.freebsd.org>
Author: wblock Date: Wed Apr 13 16:51:56 2016 New Revision: 48612 URL: https://svnweb.freebsd.org/changeset/doc/48612 Log: Add Ceph report from Willem Jan Withagen <wjw@digiware.nl>. Modified: head/en_US.ISO8859-1/htdocs/news/status/report-2016-01-2016-03.xml Modified: head/en_US.ISO8859-1/htdocs/news/status/report-2016-01-2016-03.xml ============================================================================== --- head/en_US.ISO8859-1/htdocs/news/status/report-2016-01-2016-03.xml Wed Apr 13 16:15:00 2016 (r48611) +++ head/en_US.ISO8859-1/htdocs/news/status/report-2016-01-2016-03.xml Wed Apr 13 16:51:56 2016 (r48612) @@ -2013,4 +2013,202 @@ </task> </help> </project> + + <project cat='proj'> + <title>Ceph on FreeBSD</title> + + <contact> + <person> + <name> + <given>Willem Jan</given> + <common>Withagen</common> + </name> + <email>wjw@digiware.nl</email> + </person> + </contact> + + <links> + <url href="http://ceph.com">Ceph main site</url> + <url href="https://github.com/ceph/ceph">Main repository</url> + <url href="https://github.com/wjwithagen/ceph">My Fork</url> + <url href="https://github.com/ceph/ceph/pull/7573">The git PULL with all changes</url> + </links> + + <body> + <p>Ceph is a distributed object store and file system designed + to provide excellent performance, reliability and + scalability.</p> + + <ul> + <li> + <p>Object Storage</p> + + <p>Ceph provides seamless access to objects using native + language bindings or radosgw, a REST interface that is + compatible with applications written for S3 and Swift.</p> + </li> + + <li> + <p>Block Storage</p> + + <p>Ceph's RADOS Block Device (RBD) provides access to block + device images that are striped and replicated across the + entire storage cluster.</p> + </li> + + <li> + <p>File System</p> + + <p>Ceph provides a POSIX-compliant network file system that + aims for high performance, large data storage, and maximum + compatibility with legacy applications.</p> + </li> + </ul> + + <p>I started looking into 
Ceph, because the HAST solution with + CARP and <tt>ggate</tt> did not really do what I wanted. Instead, I am aiming for a Ceph storage cluster of storage + nodes that run ZFS. The end goal is to run + <tt>bhyve</tt> on RBD disks that are stored in Ceph.</p> + + <p>The FreeBSD build compiles most of the tools in Ceph. Note + that the RBD-dependent items will not work, since FreeBSD does + not have RBD yet.</p> + + <p>Build Prerequisites</p> + + <p>Compiling and building Ceph is tested on 11-CURRENT. It uses + the Clang toolset that is available, which needs to be at + least version 3.7. Clang 3.4 (on 10.2-STABLE) does not have all the + required capabilities to compile everything.</p> + + <p>This setup will get things running for FreeBSD:</p> + + <ul> + <li> + <p>Install bash and link it in <tt>/bin</tt> (requires root + privileges):</p> + + <p><tt>sudo pkg install bash</tt></p> + + <p><tt>sudo ln -s /usr/local/bin/bash /bin/bash</tt></p> + </li> + + <li> + <p>Build Ceph:</p> + + <p><tt>./do_freebsd.sh</tt></p> + </li> + </ul> + + <p>Parts Not Yet Included</p> + + <ul> + <li> + <p>RBD</p> + + <p>The RADOS Block Device is implemented in the Linux kernel. + It seems that there was a userspace implementation + first. Perhaps <tt>ggated</tt> could be used as a + template, since it performs some of the same functions, albeit + only between two disks, and it has a userspace + counterpart.</p> + </li> + + <li> + <p>BlueStore</p> + + <p>FreeBSD and Linux have different AIO APIs, which + need to be reconciled.
In addition, there is an ongoing + discussion in FreeBSD about <tt>aio_cancel</tt> not + working for all device types.</p> + </li> + + <li> + <p>CephFS</p> + + <p>Cython tries to access an internal field of <tt>dirent</tt>, which + does not compile.</p> + </li> + </ul> + + <p>Tests that verify the correct operation of the above are also + excluded from the test set.</p> + + <p>Tests Not Yet Included</p> + + <ul> + <li> + <p><tt>ceph-detect-init/run-tox.sh</tt></p> + + <p>The current implementation does not know anything + about the FreeBSD rc init system.</p> + </li> + + <li> + <p>Tests that make use of <tt>nosetests</tt></p> + + <p>Calling these does not really work, since + <tt>nosetests</tt> is not in <tt>/usr/bin</tt>, and + calling it through <tt>/usr/bin/env nosetests</tt> does not + work on FreeBSD.</p> + </li> + + <li> + <p><tt>test/pybind/test_ceph_argparse.py</tt></p> + </li> + + <li> + <p><tt>test/pybind/test_ceph_daemon.py</tt></p> + </li> + </ul> + + <p>Things To Investigate</p> + + <ul> + <li> + <p><tt>ceph-{osd,mon}</tt> need two signals before they + actually terminate.</p> + </li> + + <li> + <p><tt>ceph_erasure_code --debug-osd 20 --plugin_exists jerasure</tt> + crashes with SIGSEGV. A pointer reference + gets modified outside the regular program flow, probably + due to a programming error, but perhaps due to incorrect mixing and + matching of many libraries.</p> + </li> + </ul> + </body> + + <help> + <task> + <p>The first and foremost task is to get the test set to + complete without errors.
This includes fixing several + core dumps.</p> + + <p>Run integration tests to see if the FreeBSD daemons will + work with a Linux Ceph platform.</p> + </task> + + <task> + <p>Get the Python tests that are currently excluded to work, + and verify that they pass.</p> + </task> + + <task> + <p>Compile and test the userspace RBD (RADOS Block + Device).</p> + + <p>Investigate whether an in-kernel RBD device could be + developed à la <tt>ggate</tt>.</p> + </task> + + <task> + <p>Integrate the FreeBSD <tt>/etc/rc.d</tt> init scripts into + the Ceph stack, for testing and for running Ceph on production + machines.</p> + </task> + </help> + </project> </report>
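The build steps described in the report above, collected into a single script. This is only a sketch under the report's stated assumptions: a FreeBSD 11-CURRENT system with Clang 3.7 or newer, and a Ceph source checkout (from the fork linked above) as the current directory.

```shell
#!/bin/sh
# Sketch of the FreeBSD build steps from the report above.
# Assumes: 11-CURRENT, Clang >= 3.7, and that this script is run
# from the top of a Ceph source checkout. Requires root privileges
# for the pkg and ln steps.
set -e

# Ceph's build scripts expect bash to be available as /bin/bash.
sudo pkg install -y bash
[ -e /bin/bash ] || sudo ln -s /usr/local/bin/bash /bin/bash

# Build most of the Ceph tools. Per the report, RBD-dependent
# items will not work, since FreeBSD does not have RBD yet.
./do_freebsd.sh
```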