Date:      Thu, 16 Aug 2012 21:42:30 +0100
From:      Steve O'Hara-Smith <steve@sohara.org>
To:        Paul Schmehl <pschmehl_lists@tx.rr.com>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: Best file system for a busy webserver
Message-ID:  <20120816214230.0f4fb446.steve@sohara.org>
In-Reply-To: <175D3B4E21331C5682EE2148@localhost>
References:  <47AFB706686083E99B3A3F3E@localhost> <20120816180257.6f5d58e5.steve@sohara.org> <175D3B4E21331C5682EE2148@localhost>

On Thu, 16 Aug 2012 13:16:26 -0500
Paul Schmehl <pschmehl_lists@tx.rr.com> wrote:

> --On August 16, 2012 6:02:57 PM +0100 Steve O'Hara-Smith
> <steve@sohara.org> wrote:
> 
> > On Thu, 16 Aug 2012 10:45:25 -0500
> > Paul Schmehl <pschmehl_lists@tx.rr.com> wrote:
> >
> >> Does anyone have any opinions on which file system is best for a busy
> >> webserver (7 million hits/month)?  Is any one system noticeably
> >> better than any other?
> >
> > 	That's an average of about 3 hits per second. If it's static
> > pages then pretty much anything will handle it easily (but please don't
> > use FAT). If it's dynamic then the whole problem is more complex than a
> > simple page rate. If that load is bursty it may make a difference too.
> >
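
	(Showing the arithmetic behind that figure: 7,000,000 hits over a
30 day month is 7,000,000 / (30 x 86,400 s), which is roughly 2.7 hits per
second on average - peaks will of course run higher.)
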
> 
> Thanks for the reply.  It's a combination.  There are many static pages, 
> but there is also a php-mysql forum that generates pages on the fly.  It 
> accounts for about half of the traffic.  I've always used ufs but am 
> wondering if switching to zfs would make sense.
> 
> This stats page might answer some of your questions: 
> <http://www.stovebolt.com/stats/>
> 
> Basically traffic is steady but it's busiest in the evenings (US time
> zones)
> 
> > 	Other considerations may come into play - how big is this
> > filesystem (number of files, maximum number of entries in a directory,
> > volume of data) ? Are there many users needing to be protected from each
> > other ? What about archives ? snapshots ? growth ? churn ? uptime
> > requirements, disaster recovery time ?
> 
> I don't even know where to begin.  There's about 15G of data on the
> server.

	OK I would say there's no pressing reason to consider ZFS for this
purpose. You'd save a bit of time in crash recovery with no fsck going on,
and perhaps the checksum mechanism would give some peace of mind - but
really, in 15GB silent corruption is a very slow process - now if it were
15TB ...
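
	If you do want that checksum peace of mind the cost is just an
occasional scrub - something along these lines from cron or periodic
(the pool name 'tank' below is only a placeholder):

	# walk every block in the pool and verify its checksum
	zpool scrub tank
	# look at the result later - any CKSUM errors show up here
	zpool status -v tank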

> last pid: 40369;  load averages:  0.01,  0.03,  0.00  up 104+09:33:44  13:14:49
> 137 processes: 1 running, 136 sleeping
> CPU:  0.7% user,  0.0% nice,  0.1% system,  0.0% interrupt, 99.2% idle
> Mem: 229M Active, 6108M Inact, 1056M Wired, 15M Cache, 828M Buf, 514M Free
> Swap: 16G Total, 28K Used, 16G Free

	OTOH you have plenty of memory lying around doing nothing much
(6108M inactive) so you can easily support ZFS if you want to play with
its features (the smooth integration of volume management and filesystem
is rather cool).
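
	To give a flavour of that integration: the usual volume manager plus
newfs plus fstab dance collapses into a couple of commands. A sketch only -
the pool, dataset and device names here are made up, adjust to taste:

	# one mirrored pool across two discs, no partitioning or newfs step
	zpool create tank mirror ada1 ada2
	# a dataset for the web content, compressed and mounted in one go
	zfs create -o compression=lzjb -o mountpoint=/usr/local/www tank/www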

> The system is not being stressed.
> 
> If by users, you mean shell accounts, there's two, so that's not really
> an issue.

	OK so no need for fancy quota schemes then.

> Uptime is not an issue.  The owners have repeatedly said if the site is 
> down for two days they don't care.  (The forum users don't feel that way 
> though!)  We've had one "disaster" (hard drive failure and raid failed 
> while I was on vacation), and it took about 36 hours to get back online, 
> but that was 10 years ago.  The site doesn't go down - it's running on 
> FreeBSD. :-)

	It sounds like you have backups or at least some means of restoring
the site in the event of disaster, so that's all good. If there were a
pressing need to be able to get back up fairly quickly and easily I'd be
suggesting ZFS in RAID1 with a hot swap bay in which a third disc goes,
attached as a third mirror: periodically split it off the mirror, take
it off site, and replace it with the one that's been off site.
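
	For the record that rotation is only a handful of commands each time
(pool and device names below are illustrative, not gospel):

	# attach the spare disc as a third side of the mirror and resilver
	zpool attach tank ada1 ada3
	zpool status tank        # wait here until the resilver completes
	# peel it off as its own single-disc pool and export it
	zpool split tank offsite ada3
	zpool export offsite
	# pull ada3, take it off site, slot last week's disc back in its place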

	There's really nothing here that's pushing you in any particular
direction for a filesystem. At 15GB, if performance ever becomes a problem,
a RAID1 of SSDs with UFS would make it fly - probably into the hundreds of
hits per second range.
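
	If it ever does come to that, gmirror plus UFS is about as simple as
it gets. A sketch with made-up device names:

	# load the mirror class and mirror the two SSDs
	gmirror load
	gmirror label -v -b round-robin gm0 /dev/ada1 /dev/ada2
	# soft updates UFS on top of the mirror, then mount it
	newfs -U /dev/mirror/gm0
	mount /dev/mirror/gm0 /usr/local/www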

-- 
Steve O'Hara-Smith                          |   Directable Mirror Arrays
C:>WIN                                      | A better way to focus the sun
The computer obeys and wins.                |    licences available see
You lose and Bill collects.                 |    http://www.sohara.org/


