Date:      Mon, 02 Jan 2012 21:23:30 -0500
From:      Robert Boyer <rwboyer@mac.com>
To:        Eduardo Morras <nec556@retena.com>
Cc:        "Muhammet S. AYDIN" <whalberg@gmail.com>, freebsd-questions@freebsd.org
Subject:   Re: freebsd server limits question
Message-ID:  <15170C4F-7142-479F-8C61-EC1F2D516441@mac.com>
In-Reply-To: <BF32B73F-CFAB-4682-80E0-D7E4DE2E339A@mac.com>
References:  <CAP28s1DhhsSV%2Bz8BuRDVjHypD%2BpECuXQEH5BKjJRKMorcWL0rw@mail.gmail.com> <0LX600GBUUP8AWE1@ms02044.mac.com> <AD321296-15AC-493D-9885-DE29A70DA33B@mac.com> <BF32B73F-CFAB-4682-80E0-D7E4DE2E339A@mac.com>

Just realized that the MongoDB site now has some recipes up for what you
really need to do to make sure you can handle a lot of incoming new
documents concurrently…

Boy, you had to figure this stuff out yourself just last year - I guess
the mongo community has come a very long way…

Splitting Shard Chunks - MongoDB


enjoy…

RB

On Jan 2, 2012, at 5:38 PM, Robert Boyer wrote:

> Sorry, one more thought and a clarification…
>
>
> I have found that it is best to run mongos with each app server
> instance; most of the mongo interface libraries aren't intelligent about
> the way they distribute requests to available mongos processes. mongos
> processes are also relatively lightweight and need no coordination or
> synchronization with each other - that simplifies things a lot and makes
> any potential bugs/complexity in the app server/mongo db connection
> logic just go away.
>
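A minimal sketch of that layout (hostnames are hypothetical): each app host runs its own local mongos pointed at the shared config servers, and the app simply connects to localhost.

```shell
# Run on each app server; config server hostnames are made up.
# The app then talks to its local mongos on 127.0.0.1:27017.
mongos --configdb cfg1.example.com,cfg2.example.com,cfg3.example.com \
       --port 27017 --fork --logpath /var/log/mongos.log
```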
> It's pretty important when configuring shards to take on the write
> volume that you do your best to pre-allocate chunks and avoid chunk
> migrations during your traffic floods - not hard to do at all. There are
> also about a million different ways to deal with atomicity (if that is a
> word) and a very mongo-specific way of ensuring writes actually "made it
> to disk" somewhere - from your brief description of the app in question,
> it does not sound too critical to ensure "every single solitary piece of
> data persists no matter what", as I am assuming most of it is irrelevant
> and becomes completely irrelevant after the show - or some time
> thereafter. Most of the programming and config examples make the
> opposite assumption, in that they assume each transaction MUST be
> completely durable - if you forgo that, you can get screaming TPS out of
> a mongo shard.
>
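For reference, both knobs look roughly like this from the mongo shell of that era, run against a mongos (database, collection, shard key, and split points are all hypothetical):

```javascript
// Hypothetical database/collection with a numeric shard key.
sh.enableSharding("app")
sh.shardCollection("app.messages", { uid: 1 })

// Pre-split before the flood so writes hit every shard from the
// start, instead of triggering chunk migrations under load.
for (var i = 1; i < 4; i++) {
  db.adminCommand({ split: "app.messages", middle: { uid: i * 250000 } })
}

// Durability trade-off: a plain insert is fire-and-forget; asking
// getLastError for j:1 waits for a journal commit before returning.
db.getSiblingDB("app").messages.insert({ uid: 7, body: "hello" })
db.getSiblingDB("app").runCommand({ getLastError: 1, j: 1 })
```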
> Also, if you do not find what you are looking for via a ruby support
> group - the JS and node JS community may also be of assistance, but they
> tend to have a very narrow view of the world…. ;-)
>
> RB
> On Jan 2, 2012, at 4:21 PM, Robert Boyer wrote:
>
>> To deal with this kind of traffic you will most likely need to set up
>> a mongo db cluster of more than a few instances… much better. There
>> should be A LOT of info on how to scale mongo to the level you are
>> looking for, but most likely you will find that on ruby forums, NOT on
>> *NIX boards….
>>
>> The OS boards/focus will help you with fine tuning, but all the fine
>> tuning in the world will not solve an app architecture issue…
>>
>> I have set up MASSIVE mongo/ruby installs for testing that can do this
>> sort of volume with ease… the stack looks something like this….
>>
>> Nginx
>> Unicorn
>> Sinatra
>> MongoMapper
>> MongoDB
>>
>> With only one Nginx instance you can feed an almost arbitrary number
>> of Unicorn/Sinatra/MongoMapper instances, which can in turn feed a
>> properly configured MongoDB cluster with pre-allocated key distribution
>> so that the incoming inserts are spread evenly across the cluster
>> instances…
>>
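"Pre-allocated key distribution" here just means choosing split points that divide the key space evenly before any data arrives. A tiny sketch, assuming a numeric shard key (the helper name and ranges are made up):

```ruby
# Compute shards-1 evenly spaced split points over a numeric key
# range [lo, hi); each point would then be fed to MongoDB's split
# command so every shard owns a chunk before the traffic starts.
def split_points(lo, hi, shards)
  step = (hi - lo) / shards
  (1...shards).map { |i| lo + i * step }
end

split_points(0, 1_000_000, 4)  # => [250000, 500000, 750000]
```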
>> Even if you do not use ruby, that community will have scads of info on
>> scaling MongoDB.
>>
>> One more comment related to L's advice - true, you DO NOT want more
>> transactions queued up if your back-end resources cannot handle the TPS
>> - this will just make the issue harder to isolate and potentially make
>> recovery more difficult. Better to reject the connection at the
>> front-end than take it and blow up the app/system.
>>
>> The beauty of the Nginx/Unicorn solution (Unicorn is ruby-specific) is
>> that there is no queue that is fed to the workers; when there are no
>> workers, the request is rejected. The unicorn worker model can be
>> reproduced in any other implementation environment (PHP/Perl/C/etc)
>> outside of ruby in about 30 minutes. It's simple, and Nginx is very
>> well suited to low-overhead reverse proxying for this kind of setup.
>>
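The no-queue behavior comes largely from keeping the listen backlog small - a fragment of a hypothetical unicorn.rb (values illustrative):

```ruby
# unicorn.rb -- illustrative values only
worker_processes 8
# A short backlog means that once all workers are busy, further
# connections are refused instead of piling up in the kernel's
# accept queue, so the front-end can fail fast.
listen "/tmp/unicorn.sock", :backlog => 64
timeout 30
```

On the nginx side, unicorn's example configs mark the upstream socket with fail_timeout=0 so nginx keeps trying it and surfaces errors immediately rather than temporarily blacklisting the backend.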
>> Wishing you the best - if I can be of more help, let me know…
>>
>> RB
>>
>> On Jan 2, 2012, at 3:38 PM, Eduardo Morras wrote:
>>
>>> At 20:12 02/01/2012, Muhammet S. AYDIN wrote:
>>>> Hello everyone.
>>>>
>>>> My first post here, and I'd like to thank everyone who's involved
>>>> with the FreeBSD project. We are using FreeBSD on our web servers and
>>>> we are very happy with it.
>>>>
>>>> We have an online messaging application that is using mongodb. Our
>>>> members send messages to "The Voice" show's (Turkish version)
>>>> contestants. Our two mongodb instances ended up on two centos6
>>>> servers. We have failed. So hard. There were announcements and calls
>>>> made live on TV. We had 30K+/sec visitors to the app.
>>>>
>>>> When I looked at the mongodb errors, I had thousands of these:
>>>> http://pastie.org/private/nd681sndos0bednzjea0g. You may be wondering
>>>> why I'm telling you about centos. Well, we are making the switch from
>>>> centos to FreeBSD. I would like to know: what are our limits? How can
>>>> we set it up so our FreeBSD servers can handle at least 20K
>>>> connections (mongodb's connection limit)?
>>>>
>>>> Our two servers have 24-core CPUs and 32 GB of RAM. We are also very
>>>> open to suggestions. Please help me out here so we don't fail this
>>>> badly again.
>>>>
>>>> ps. this question was asked in the forums as well; however, as
>>>> someone suggested there, I am posting it here too.
>>>
>>> Is your app limited by cpu or by i/o? What does vmstat/iostat say
>>> about your hd usage? Perhaps mongodb fails to read/write fast enough,
>>> and making the process thread pool bigger will only make the problem
>>> worse; there will be more threads trying to read/write.
>>>
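On FreeBSD, the numbers being asked for come from something like the following (flags from memory; check the man pages on your release):

```shell
# Extended per-device disk stats every 2 seconds: watch %b
# (percent busy) and the wait/queue figures.
iostat -x -w 2
# VM statistics every 2 seconds: watch pi/po (paging activity)
# and the b column (processes blocked on I/O).
vmstat -w 2
```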
>>> Have you already tuned mongodb?
>>>
>>> Please post more info; several lines (not the first one) of iostat
>>> and vmstat would be a start. Your hd configuration, raid, etc., too.
>>>
>>> L
>>>
>>> _______________________________________________
>>> freebsd-questions@freebsd.org mailing list
>>> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
>>> To unsubscribe, send any mail to
>>> "freebsd-questions-unsubscribe@freebsd.org"
>>



