Date:      Wed, 25 Jul 2001 14:23:13 +0200
From:      Gabriel Ambuehl <gabriel_ambuehl@buz.ch>
To:        Paul Robinson <paul@akita.co.uk>
Cc:        freebsd-isp@freebsd.org
Subject:   Re[2]: Redundant setup on a budget??
Message-ID:  <2411019395.20010725142313@buz.ch>
In-Reply-To: <20010725124353.A6548@jake.akitanet.co.uk>
References:  <510EAC2065C0D311929200A0247252622F7A7B@NETIVITY-FS> <20010724154211.C34017@jake.akitanet.co.uk> <1241681557.20010725114735@buz.ch> <20010725112250.N83511@jake.akitanet.co.uk> <1996903256.20010725131437@buz.ch> <20010725124353.A6548@jake.akitanet.co.uk>

Hello Paul,

Wednesday, July 25, 2001, 1:43:54 PM, you wrote:
>> sometimes suffers a bit of it's extrem multi platform approach,
>> which means that you can't always use the newest release on your
>> box.
> I prefer ipfw simply because I have more experience with it.
> ipfilter is a little bit too Linuxy in it's approach.

Actually, it's more Solaris-y ;-). AFAIK, it works on *BSD, Solaris,
HP-UX and Linux 2.0.x. But the stateful filtering stuff is really
interesting.

> And with distributed RAID you haven't addressed your problem of
> atomic transactions.

ACK. But as I said, I don't care about them at the FS level. The FS
isn't meant to be used as a DB; that's what we have a DBMS installed
for.

> It's all very well having no single point of failure, but I
> can guarantee in every scenario you will have multiple single
> points of failure.

Sure. But you should eliminate those you can.

> "You're using the same OS on every machine?",

Actually, this is a point I've been thinking about for quite some
time, mostly for security reasons. But it's simply impractical to have
two different server OSes doing the same job. Furthermore, many of the
holes are in the daemons (the OSes themselves normally don't have
remotely exploitable ones), and those are cross-platform anyway.

>  "You use the same
> power company and brand of UPS and generator for all your power?",
> etc. It can all come back to single points of failure.

Yeah, sure. But this is where the finger-pointing effect comes into
play. It's the colo facility's job to ensure my servers have power and
that the LAN is up; it's mine to ensure the rest.

> In my experience, I would rather have multiple RAID cards, in a
> beefy box, perhaps doing incremental backups once an hour to a hot
> standby, with hot-swap disks, etc. and know that my file writes are
> being locked properly than take a risk with several boxes informing
> each other of transactions and just hoping it works. The problem
> with your approach, is that it is likely to look as though it's
> working fine at first, but once you put load on, maybe 0.00001% of
> transactions will start suffering. Then 0.000015%, and so on as the
> load increases. It will just look like a weird bug somewhere down
> in the system that will be easy to pass over. The occasional
> screw-up. But as load increases, these problems will rise steadily.

Not necessarily. I don't plan to share data among different systems
with several of them writing to it. Data should be saved on two or
more boxes, but only ONE box will have write access to it under
normal operation. If that box goes down, its twin gets the write
rights, and so on.
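
To make that concrete, here's a rough sketch of the kind of watchdog
I have in mind for the twin; the hostname, port and the become-primary
script are made up, and in reality you'd add things like taking over
the service IP:

#!/usr/bin/env python
# Sketch only: single-writer failover watchdog (hypothetical hosts/commands).
# Runs on the twin box and promotes it to writer only after the primary
# has missed several consecutive health checks.

import os
import socket
import time

PRIMARY = ("www1.example.com", 80)   # assumed primary host and service port
CHECK_INTERVAL = 10                  # seconds between probes
MAX_FAILURES = 3                     # consecutive misses before promotion

def primary_alive(addr, timeout=5):
    """True if we can still open a TCP connection to the primary."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(addr)
        return True
    except socket.error:
        return False
    finally:
        s.close()

def promote_to_writer():
    # Hypothetical script: remount the data read-write, grab the
    # service IP, start the writing daemons, page the admin, ...
    os.system("/usr/local/sbin/become-primary.sh")

failures = 0
while True:
    failures = 0 if primary_alive(PRIMARY) else failures + 1
    if failures >= MAX_FAILURES:
        promote_to_writer()
        break
    time.sleep(CHECK_INTERVAL)

The important bit is simply that the twin never writes until it has
decided the primary is dead, so at any moment only one box has write
access.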

>> every node of your cluster but this isn't the case for us, as we
>> do webhosting which can very nicely be segmented).
> Well, I wasn't going to allow all machines access to all the data.
> If he gets into an SQL server, he gets to mess with SQL data. He
> gets into a web server, he gets access to web data. However, I've
> already had discussions about security in general on this and other
> lists, and I don't want to re-visit them now.

ACK. One can't do much about it anyway.

> Although rpc.lockd isn't fast, you have to ask the question as to
> whether speed is what is imporant to you in this environment. In a
> web-hosting environment, we're talking about a heavy-read setup.
> We're not going to be too worried.

Exactly. And this is why I don't care too much about atomicity of
write operations, as we simply won't guarantee it (besides, most
clients don't even need to know we're running redundant setups). We'll
"guarantee" DB consistency and XX.YY% uptime of the webservers.

> For SQL stuff, we might get concerned if we're doing a lot of
> INSERTs and UPDATEs.

For a DBMS, the only solution I can think of is faster hardware.
A shared DBMS is a big mess.

> For a mail setup, we are definitely going to be
> concerned.
> However, can you take the risk with your customer's mail that
> because you haven't got locking sorted that mail is being written
> to a spool from one machine, but then gets trashed by mail from
> another machine? no? In that case you need locking.

Use the proper MTA. qmail was written to be NFS-safe (and besides, it
saves you from worrying about the security of your mailservers, since
there hasn't been ONE hole in 1.03!).
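
The reason qmail can get away without locking is the Maildir format:
every delivery goes to a uniquely named file in tmp/ and is then
rename()d into new/, so two writers never touch the same file.
Roughly like this (just the idea, not qmail's actual code, and the
unique-name scheme is simplified):

import os
import socket
import time

def maildir_deliver(maildir, message_bytes):
    """Write the message under a unique name in tmp/, then move it to new/."""
    unique = "%d.%d.%s" % (int(time.time()), os.getpid(), socket.gethostname())
    tmp_path = os.path.join(maildir, "tmp", unique)
    new_path = os.path.join(maildir, "new", unique)

    f = open(tmp_path, "wb")
    try:
        f.write(message_bytes)
        f.flush()
        os.fsync(f.fileno())       # make sure the data really hit the disk
    finally:
        f.close()

    os.rename(tmp_path, new_path)  # the move into new/ is what "publishes" it
    return new_path

No lock files, no rpc.lockd; it behaves because the filenames never
collide.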

>> Trunking isn't supported at all by FreeBSD  if I'm not totally
>> mistaken.
> That's why I put a smiley after my statement. Trunking is hard.
> It'd be nice to have, but it's hard. So, off to see what Gigabit
> cards FBSD is supporting now. :-)

I'd rather have FreeBSD support TCP/IP over FireWire ;-)

>> If you got atomic operations on the filesystem, you're doing
>> something
>> wrong, IMO. That's what databases are for.
> And if you have several MySQL servers acting as heads to the same
> database?

Simply don't do it. With MySQL, this is asking for trouble (not to
mention the immense performance penalty). If the DB *server* isn't
fast enough, use MySQL's real-time replication and redirect the
SELECTs to a slave; if that isn't enough, get better hardware. But I
somehow doubt that an Athlon MP 1200 isn't fast enough for 95% of all
people out there. The rest probably run Sun or IBM anyway.
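
Doing the split in the application is trivial: everything that
modifies data goes to the master, the SELECTs can go to a slave.
Something along these lines (DB-API style; the hosts, credentials and
the MySQLdb driver are just assumptions):

import MySQLdb  # assumed Python MySQL driver

master = MySQLdb.connect(host="db-master.example.com", user="app",
                         passwd="secret", db="shop")
slave = MySQLdb.connect(host="db-slave1.example.com", user="app",
                        passwd="secret", db="shop")

def execute(sql, args=None):
    """Route the statement: SELECTs to the slave, everything else to the master."""
    if sql.lstrip().upper().startswith("SELECT"):
        cur = slave.cursor()
        cur.execute(sql, args)
        return cur.fetchall()
    cur = master.cursor()
    cur.execute(sql, args)
    master.commit()
    return None

The one thing to keep in mind is replication lag: a SELECT fired right
after an INSERT may not see the new row on the slave yet, so reads
that must be current should go to the master as well.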

> You need file-level locking if your cluster is to have any write
> operations.

Only if more than one machine is allowed to write to a segment of
data at any given time. My setup doesn't require this.

> To say file servers shouldn't have atomic locking raises the
> question as to why the hell qpopper puts locks in place.

Badly written daemon?

> To me, it's obvious, that servers are EXACTLY where atomic actions
> should be taking place.

Sure. But one can go a long way without ever needing them.

>> If you need real load balancing, use a DBMS, that's what were made
>> for.
> I didn't say I needed it. I just said I was going to build it. You
> can't see the advantage of being able to cluster free SQL servers
> together?

If it works, I can see it. And actually, I've been running MySQL in
replicated master/slave mode since the day the replication feature got
stable enough for production. But I have my doubts about whether I'd
want to rely on a multiple-master setup with MySQL 3.23.


> You can't see how docs on how to get multiple SMTP/POP3/IMAP
> servers all working on the same spools on a big fat RAID is not
> useful?

Sure I can. But I don't see why I should use locking there if my
mailserver was designed to work with NFS without locking.

> it. Others do. I'm planning on doing it because I work in a job
> where I get to do fun things, the way I want to. :-)

Oh, that looks like 50% of my job description, and actually that's
the reason I'm currently working on the failover stuff, as I consider
it one of the most interesting fields of computing ;-).
Well, let's better not talk about the other 50% ;-)

>> More or less ACK. Mostly FTP uploads by the users and some writes
>> to the FS from some badly implemented scripts which I'm not going
>> to babysit. If you
> Whereas we do full-on scripting/e-com/god knows what where we are
> doing read-write all the time. For your setup, what I'm suggesting
> is overkill.  

Oh, I do this kind of stuff myself as well. But since we don't offer
load-balanced servers with atomic FS operations, and never will, as I
consider that the wrong approach to building dynamic websites and our
techies have better things to do than listen to weenies who bitch
about their crappy Perl scripts, I don't see any need to guarantee it.
My personal opinion is that one should shoot people who use flat-file
scripts for anything serious. You need to store records? Use a DBMS,
where, if needed, I'll go to great lengths to ensure your data stays
consistent.

>> needed to replicate the data. It gets hairy, though, if one DB
>> server isn't enough to cope with the update/insert statements but
>> then you should probably spend more money on the DB server.
> I'd rather have 10 boxes costing my 500 quid each than 1 box
> costing me 20k. So would a lot of other companies. Plus, it's more
> scalable. Plus, I'm doing this because it's fun. :-)

Oh, I see, you follow our business model (lots of cheap servers are
much better for your reliability than one expensive one). I just feel
it reaches its limit with loaded DB servers, as it's awfully hard to
have two boxes working on the same table.

If MySQL 4 takes an approach similar to MS SQL Server's Datacenter
Edition (I once saw that beast in action and was surprised, as it
appeared to work pretty well), i.e. plug another server into the
cluster and automatically get more power, I'm all ears. But until that
stuff is ready, I either need to make sure the load can be spread
across different DB servers (it already IS possible to spread anything
down to individual tables over different MySQL servers) or just buy
more hardware.
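
Spreading by table really is just a static map in the application:
every table lives on exactly one MySQL server and the code picks the
connection by table name. A quick sketch (the hostnames, table names
and credentials are made up):

import MySQLdb  # assumed Python MySQL driver

# Which tables live on which server (made-up layout).
LAYOUT = {
    "db1.example.com": ["customers", "orders"],
    "db2.example.com": ["sessions", "hitlog"],
}

# Build a table -> connection map once at startup.
connections = {}
for host, tables in LAYOUT.items():
    conn = MySQLdb.connect(host=host, user="app", passwd="secret", db="shop")
    for table in tables:
        connections[table] = conn

def query(table, sql, args=None):
    """Run the statement on whichever server holds the given table."""
    cur = connections[table].cursor()
    cur.execute(sql, args)
    return cur.fetchall()

The obvious limitation is that you lose joins across tables that live
on different servers, so the split has to follow how the data is
actually used.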

> The problem with replication, is we get into trouble with atomic
> actions again.

Exactly. And that's why I would go for a single-master/multiple-slave
setup, where atomicity is guaranteed. Direct all updates/inserts to
the master, which then propagates them to the slaves, and do as many
selects as you like on the slaves. A multiple-master setup isn't
something I'd want to use with current MySQL versions.

Oh, and last but not least, MySQL is probably not the right choice if
you need bomb-proof reliability anyway (IIRC, it still isn't fully
ACID even with transaction support).





Best regards,
 Gabriel






