Date:      Sun, 23 Apr 2017 03:15:49 +0000 (UTC)
From:      Benjamin Kaduk <bjk@FreeBSD.org>
To:        doc-committers@freebsd.org, svn-doc-all@freebsd.org, svn-doc-head@freebsd.org
Subject:   svn commit: r50196 - head/en_US.ISO8859-1/htdocs/news/status
Message-ID:  <201704230315.v3N3Fn8B039846@repo.freebsd.org>

Author: bjk
Date: Sun Apr 23 03:15:49 2017
New Revision: 50196
URL: https://svnweb.freebsd.org/changeset/doc/50196

Log:
  Add 2017Q1 Ceph entry from Willem Jan Withagen

Modified:
  head/en_US.ISO8859-1/htdocs/news/status/report-2017-01-2017-03.xml

Modified: head/en_US.ISO8859-1/htdocs/news/status/report-2017-01-2017-03.xml
==============================================================================
--- head/en_US.ISO8859-1/htdocs/news/status/report-2017-01-2017-03.xml	Sat Apr 22 18:07:25 2017	(r50195)
+++ head/en_US.ISO8859-1/htdocs/news/status/report-2017-01-2017-03.xml	Sun Apr 23 03:15:49 2017	(r50196)
@@ -1045,4 +1045,143 @@
 	etc.</task>
     </help>
   </project>
+
+  <project cat='proj'>
+    <title>Ceph on &os;</title>
+
+    <contact>
+      <person>
+	<name>
+	  <given>Willem Jan</given>
+	  <common>Withagen</common>
+	</name>
+	<email>wjw@digiware.nl</email>
+      </person>
+    </contact>
+
+    <links>
+      <url href="http://ceph.com">Ceph Main Site</url>
+      <url href="https://github.com/ceph/ceph">Main Repository</url>
+      <url href="https://github.com/wjwithagen/ceph">My &os; Fork</url>
+    </links>
+
+    <body>
+      <p>Ceph is a distributed object store and file system designed to provide
+	excellent performance, reliability and scalability.</p>
+
+      <ul>
+	<li><p>Object Storage</p>
+
+	  <p>Ceph provides seamless access to objects using native
+	    language bindings or <tt>radosgw</tt>, a REST interface
+	    that is compatible with applications written for S3 and
+	    Swift.</p></li>
+
+	<li><p>Block Storage</p>
+
+	  <p>Ceph's RADOS Block Device (RBD) provides access to block
+	    device images that are striped and replicated across the
+	    entire storage cluster.</p></li>
+
+	<li><p>File System</p>
+
+	  <p>Ceph provides a POSIX-compliant network file system that
+	    aims for high performance, large data storage, and maximum
+	    compatibility with legacy applications.</p></li>
+      </ul>
+
+      <p>I started looking into Ceph because the HAST solution with
+	CARP and <tt>ggate</tt> did not really do what I was looking
+	for.  My aim is to run a Ceph storage cluster whose storage
+	nodes run ZFS, with user stations running <tt>bhyve</tt> on
+	RBD disks that are stored in Ceph.</p>
+
+      <p>Most of the Ceph tools now build on &os;.</p>
+
+      <p>The most notable progress since the last report:</p>
+
+      <ul>
+	<li>The most important change is that a port has been
+	  submitted: <tt>net/ceph-devel</tt>.  However, it does not
+	  yet contain <tt>ceph-fuse</tt>.</li>
+
+	<li>Regular updates to the <tt>ceph-devel</tt> port are
+	  expected, with the next one coming in April.</li>
+
+	<li><tt>ceph-fuse</tt> works, allowing one to mount a CephFS
+	  filesystem on a &os; system and perform normal operations.</li>
+
+	<li><tt>ceph-disk prepare</tt> and <tt>activate</tt> work for
+	  FileStore on ZFS, allowing for easy creation of OSDs (see the
+	  example after this list).</li>
+
+	<li>RBD now builds and can be used to manage RADOS Block
+	  Devices.</li>
+
+	<li>Most of the awkward dependencies on Linux-isms have been
+	  removed &mdash; only <tt>/bin/bash</tt> is here to stay.</li>
+      </ul>
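+
+      <p>As a concrete illustration (the monitor address, mountpoint,
+	pool, and dataset names below are examples only, not part of
+	the port): a CephFS file system can be mounted with
+	<tt>ceph-fuse -m mymonitor:6789 /mnt/cephfs</tt>, assuming a
+	reachable monitor and a valid <tt>/etc/ceph/ceph.conf</tt>; and
+	a directory-backed FileStore OSD can be set up on a ZFS dataset
+	with <tt>zfs create -o mountpoint=/var/lib/ceph/osd0
+	  zroot/ceph-osd0</tt>, followed by <tt>ceph-disk prepare
+	  /var/lib/ceph/osd0</tt> and <tt>ceph-disk activate
+	  /var/lib/ceph/osd0</tt>.</p>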
+
+      <p>To get things running on a &os; system, run <tt>pkg install
+	  net/ceph-devel</tt> or clone <a
+	  href="https://github.com/wjwithagen/ceph">https://github.com/wjwithagen/ceph</a>
+	and build manually by running <tt>./do_freebsd.sh</tt> in the
+	checkout root.</p>
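+
+      <p>For the manual route, the steps are roughly (the
+	<tt>ceph</tt> directory name is simply what <tt>git</tt>
+	creates by default): <tt>git clone
+	  https://github.com/wjwithagen/ceph</tt>, then <tt>cd
+	  ceph</tt> and <tt>./do_freebsd.sh</tt>.</p>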
+
+      <p>Parts not (yet) included:</p>
+
+      <ul>
+	<li>KRBD: Kernel RADOS Block Devices are implemented in the
+	  Linux kernel, but not yet in the &os; kernel.  It is possible
+	  that <tt>ggated</tt> could be used as a template, since it
+	  does similar things, just between two disks.  It also has a
+	  userspace counterpart, which could ease development.</li>
+
+	<li>BlueStore: &os; and Linux have different AIO APIs, and
+	  that incompatibility needs to be resolved somehow.
+	  Additionally, there is discussion in &os; about
+	  <tt>aio_cancel</tt> not working for all device types.</li>
+
+	<li>CephFS as a native file system: though <tt>ceph-fuse</tt>
+	  works, it can be advantageous to have an in-kernel
+	  implementation for heavy workloads.  At the moment, Cython
+	  tries to access an internal field in <tt>struct
+	    dirent</tt>, which does not compile.</li>
+      </ul>
+    </body>
+
+    <help>
+      <task>Run integration tests to see if the &os; daemons will work
+	with a Linux Ceph platform.</task>
+
+      <task>Compile and test the userspace RBD (RADOS Block Device).
+	This currently works, but testing has been limited.</task>
+
+      <task>Investigate whether an in-kernel RBD device could be
+	developed, akin to <tt>ggate</tt>.</task>
+
+      <task>Investigate the keystore, which can be embedded in the
+	kernel on Linux and currently prevents building CephFS and
+	some other parts.  The first question is whether it is really
+	required, or whether it is only needed for KRBD.</task>
+
+      <task>Scheduler information is not used at the moment because
+	the schedulers work rather differently on Linux and &os;.
+	At some point this will need attention (in
+	<tt>src/common/Thread.cc</tt>).</task>
+
+      <task>Improve the &os; init scripts in the Ceph stack, both for
+	testing purposes and for running Ceph on production machines.
+	Work on <tt>ceph-disk</tt> and <tt>ceph-deploy</tt> to make
+	them more &os;- and ZFS-compatible.</task>
+
+      <task>Build a test cluster and start running some of the
+	teuthology integration tests on it.  Teuthology wants to build
+	its own <tt>libvirt</tt>, and that does not quite work with all
+	the packages &os; already has in place.  There are many
+	details to work out here.</task>
+
+      <task>Design a virtual disk implementation that can be used
+	with <tt>bhyve</tt> and attached to an RBD image.</task>
+    </help>
+  </project>
 </report>


