From owner-freebsd-performance@FreeBSD.ORG Sun Apr 17 14:30:56 2005
Return-Path:
Delivered-To: freebsd-performance@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 7FCB916A4CE for ; Sun, 17 Apr 2005 14:30:56 +0000 (GMT)
Received: from cyrus.watson.org (cyrus.watson.org [204.156.12.53]) by mx1.FreeBSD.org (Postfix) with ESMTP id 42C0043D41 for ; Sun, 17 Apr 2005 14:30:55 +0000 (GMT) (envelope-from rwatson@FreeBSD.org)
Received: from fledge.watson.org (fledge.watson.org [204.156.12.50]) by cyrus.watson.org (Postfix) with ESMTP id 4160246B43 for ; Sun, 17 Apr 2005 10:30:54 -0400 (EDT)
Date: Sun, 17 Apr 2005 15:31:50 +0100 (BST)
From: Robert Watson
X-X-Sender: robert@fledge.watson.org
To: performance@FreeBSD.org
Message-ID: <20050417134448.L85588@fledge.watson.org>
MIME-Version: 1.0
Content-Type: MULTIPART/MIXED; BOUNDARY="0-136826264-1113748310=:85588"
Subject: Memory allocation performance/statistics patches
X-BeenThere: freebsd-performance@freebsd.org
X-Mailman-Version: 2.1.1
Precedence: list
List-Id: Performance/tuning
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sun, 17 Apr 2005 14:30:56 -0000

This message is in MIME format.  The first part should be readable text,
while the remaining parts are likely unreadable without MIME-aware tools.

--0-136826264-1113748310=:85588
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed

Attached please find three patches:

(1) uma.diff, which modifies the UMA slab allocator to use critical
    sections instead of mutexes to protect per-CPU caches.

(2) malloc.diff, which modifies the malloc memory allocator to use
    critical sections and per-CPU data instead of mutexes to store
    per-malloc-type statistics, coalescing for the purposes of the
    sysctl used to generate vmstat -m output.
(3) mbuf.diff, which modifies the mbuf allocator to use per-CPU data and
    critical sections for statistics, instead of synchronization-free
    statistics, which could result in substantial inconsistency on SMP
    systems.

These changes are facilitated by John Baldwin's recent re-introduction
of critical section optimizations that permit critical sections to be
implemented "in software", rather than using the hardware interrupt
disable mechanism, which is quite expensive on modern processors
(especially Xeon P4 CPUs).  While not identical, this is similar to the
softspl behavior in 4.x, and to Linux's preemption disable mechanism
(and those of various other post-VAX systems :-)).  The reason this is
interesting is that it allows synchronization of per-CPU data to be
performed at a much lower cost than previously, and consistently across
UP and SMP systems.  Prior to these changes, the use of critical
sections and per-CPU data as an alternative to mutexes would lead to an
improvement on SMP, but not on UP.

So, that said, here's what I'd like us to look at:

- Patches (1) and (2) are intended to improve performance by reducing
  the overhead of maintaining cache consistency and statistics for UMA
  and malloc(9), and may universally impact performance (in a small way)
  due to the breadth of their use throughout the kernel.

- Patch (3) is intended to restore consistency to statistics in the
  presence of SMP and preemption, at the possible cost of some
  performance.

I'd like to confirm that for the first two patches, for interesting
workloads, performance generally improves, and that stability doesn't
degrade.  For the third patch, I'd like to quantify the cost of the
changes for interesting workloads, and likewise confirm that there is no
loss of stability.  Because these patches will have a relatively small
impact, a fair amount of caution is required in testing.  We may be
talking about a difference of a percent or two, maybe four, in benchmark
performance, and many benchmarks have a higher variance than that.
A couple of observations for those interested:

- The INVARIANTS panic with UMA seen in some earlier patch versions is
  believed to be corrected.

- Right now, because I use arrays of foo[MAXCPUS], I'm concerned that
  different CPUs will be writing to the same cache line, as the entries
  are adjacent in memory.  Moving to per-CPU chunks of memory to hold
  this stuff is desirable, but I think we first need to identify a model
  by which to do that cleanly.  I'm not currently enamored of the
  'struct pcpu' model, since it makes us very sensitive to ABI changes,
  and it offers no clean way for modules to register new per-CPU data.
  I'm also inconsistent about how I dereference into the arrays, and
  intend to move to using 'curcpu' throughout.

- Because mutexes are no longer used in UMA, nor in the other
  allocators, stats read across different CPUs and then coalesced may be
  slightly inconsistent.  I'm not all that concerned about it, but it's
  worth thinking on.

- Malloc stats for realloc() are still broken if you apply this patch.

- High watermarks are no longer maintained for malloc, since they
  require a global notion of "high" that is tracked continuously (i.e.,
  at each change), and there's no longer a global view except when the
  observer kicks in (sysctl).  You can imagine various models to restore
  some notion of a high watermark, but I'm not currently sure which is
  best.  The high watermark notion is desirable, though.

So this is a request for:

(1) Stability testing of these patches.  Put them on a machine, make
    them hurt.  If things go south, try applying the patches one by one
    until it's clear which is the source of the problem.

(2) Performance testing of these patches, subject to the challenges in
    testing them.  If you are interested, please test each patch
    separately to evaluate its impact on your system, then apply them
    all together and see how it evens out.
You may find that the mbuf allocator patch outweighs the benefits of the other two patches, if so, that is interesting and something to work on! I've done some micro-benchmarking using tools like netblast, syscall_timing, etc, but I'm interested particularly in the impact on macrobenchmarks. Thanks! Robert N M Watson --0-136826264-1113748310=:85588 Content-Type: TEXT/PLAIN; charset=US-ASCII; name=uma.diff Content-Transfer-Encoding: BASE64 Content-ID: <20050417153150.I85588@fledge.watson.org> Content-Description: Content-Disposition: attachment; filename=uma.diff LS0tIC8vZGVwb3QvdmVuZG9yL2ZyZWVic2Qvc3JjL3N5cy92bS91bWFfY29y ZS5jCTIwMDUvMDIvMjQgMDY6MzA6MzYNCisrKyAvL2RlcG90L3VzZXIvcndh dHNvbi9wZXJjcHUvc3lzL3ZtL3VtYV9jb3JlLmMJMjAwNS8wNC8wNiAxMDoz MzowMg0KQEAgLTEsNCArMSw1IEBADQogLyotDQorICogQ29weXJpZ2h0IChj KSAyMDA0LTIwMDUgUm9iZXJ0IE4uIE0uIFdhdHNvbg0KICAqIENvcHlyaWdo dCAoYykgMjAwNCwgMjAwNSwNCiAgKiAgICAgQm9za28gTWlsZWtpYyA8Ym1p bGVraWNARnJlZUJTRC5vcmc+LiAgQWxsIHJpZ2h0cyByZXNlcnZlZC4NCiAg KiBDb3B5cmlnaHQgKGMpIDIwMDIsIDIwMDMsIDIwMDQsIDIwMDUsDQpAQCAt MTE5LDkgKzEyMCw2IEBADQogLyogVGhpcyBtdXRleCBwcm90ZWN0cyB0aGUg a2VnIGxpc3QgKi8NCiBzdGF0aWMgc3RydWN0IG10eCB1bWFfbXR4Ow0KIA0K LS8qIFRoZXNlIGFyZSB0aGUgcGNwdSBjYWNoZSBsb2NrcyAqLw0KLXN0YXRp YyBzdHJ1Y3QgbXR4IHVtYV9wY3B1X210eFtNQVhDUFVdOw0KLQ0KIC8qIExp bmtlZCBsaXN0IG9mIGJvb3QgdGltZSBwYWdlcyAqLw0KIHN0YXRpYyBMSVNU X0hFQUQoLHVtYV9zbGFiKSB1bWFfYm9vdF9wYWdlcyA9DQogICAgIExJU1Rf SEVBRF9JTklUSUFMSVpFUigmdW1hX2Jvb3RfcGFnZXMpOw0KQEAgLTM4NCw0 OCArMzgyLDE5IEBADQogem9uZV90aW1lb3V0KHVtYV96b25lX3Qgem9uZSkN CiB7DQogCXVtYV9rZWdfdCBrZWc7DQotCXVtYV9jYWNoZV90IGNhY2hlOw0K IAl1X2ludDY0X3QgYWxsb2M7DQotCWludCBjcHU7DQogDQogCWtlZyA9IHpv bmUtPnV6X2tlZzsNCiAJYWxsb2MgPSAwOw0KIA0KIAkvKg0KLQkgKiBBZ2dy ZWdhdGUgcGVyIGNwdSBjYWNoZSBzdGF0aXN0aWNzIGJhY2sgdG8gdGhlIHpv bmUuDQotCSAqDQotCSAqIFhYWCBUaGlzIHNob3VsZCBiZSBkb25lIGluIHRo ZSBzeXNjdGwgaGFuZGxlci4NCi0JICoNCi0JICogSSBtYXkgcmV3cml0ZSB0 aGlzIHRvIHNldCBhIGZsYWcgaW4gdGhlIHBlciBjcHUgY2FjaGUgaW5zdGVh 
ZCBvZg0KLQkgKiBsb2NraW5nLiAgSWYgdGhlIGZsYWcgaXMgbm90IGNsZWFy ZWQgb24gdGhlIG5leHQgcm91bmQgSSB3aWxsIGhhdmUNCi0JICogdG8gbG9j ayBhbmQgZG8gaXQgaGVyZSBpbnN0ZWFkIHNvIHRoYXQgdGhlIHN0YXRpc3Rp Y3MgZG9uJ3QgZ2V0IHRvbw0KLQkgKiBmYXIgb3V0IG9mIHN5bmMuDQotCSAq Lw0KLQlpZiAoIShrZWctPnVrX2ZsYWdzICYgVU1BX1pGTEFHX0lOVEVSTkFM KSkgew0KLQkJZm9yIChjcHUgPSAwOyBjcHUgPD0gbXBfbWF4aWQ7IGNwdSsr KSB7DQotCQkJaWYgKENQVV9BQlNFTlQoY3B1KSkNCi0JCQkJY29udGludWU7 DQotCQkJQ1BVX0xPQ0soY3B1KTsNCi0JCQljYWNoZSA9ICZ6b25lLT51el9j cHVbY3B1XTsNCi0JCQkvKiBBZGQgdGhlbSB1cCwgYW5kIHJlc2V0ICovDQot CQkJYWxsb2MgKz0gY2FjaGUtPnVjX2FsbG9jczsNCi0JCQljYWNoZS0+dWNf YWxsb2NzID0gMDsNCi0JCQlDUFVfVU5MT0NLKGNwdSk7DQotCQl9DQotCX0N Ci0NCi0JLyogTm93IHB1c2ggdGhlc2Ugc3RhdHMgYmFjayBpbnRvIHRoZSB6 b25lLi4gKi8NCi0JWk9ORV9MT0NLKHpvbmUpOw0KLQl6b25lLT51el9hbGxv Y3MgKz0gYWxsb2M7DQotDQotCS8qDQogCSAqIEV4cGFuZCB0aGUgem9uZSBo YXNoIHRhYmxlLg0KIAkgKg0KIAkgKiBUaGlzIGlzIGRvbmUgaWYgdGhlIG51 bWJlciBvZiBzbGFicyBpcyBsYXJnZXIgdGhhbiB0aGUgaGFzaCBzaXplLg0K IAkgKiBXaGF0IEknbSB0cnlpbmcgdG8gZG8gaGVyZSBpcyBjb21wbGV0ZWx5 IHJlZHVjZSBjb2xsaXNpb25zLiAgVGhpcw0KIAkgKiBtYXkgYmUgYSBsaXR0 bGUgYWdncmVzc2l2ZS4gIFNob3VsZCBJIGFsbG93IGZvciB0d28gY29sbGlz aW9ucyBtYXg/DQogCSAqLw0KLQ0KKwlaT05FX0xPQ0soem9uZSk7DQogCWlm IChrZWctPnVrX2ZsYWdzICYgVU1BX1pPTkVfSEFTSCAmJg0KIAkgICAga2Vn LT51a19wYWdlcyAvIGtlZy0+dWtfcHBlcmEgPj0ga2VnLT51a19oYXNoLnVo X2hhc2hzaXplKSB7DQogCQlzdHJ1Y3QgdW1hX2hhc2ggbmV3aGFzaDsNCkBA IC02MTMsNiArNTgyLDEwIEBADQogLyoNCiAgKiBEcmFpbnMgdGhlIHBlciBj cHUgY2FjaGVzIGZvciBhIHpvbmUuDQogICoNCisgKiBOT1RFOiBUaGlzIG1h eSBvbmx5IGJlIGNhbGxlZCB3aGlsZSB0aGUgem9uZSBpcyBiZWluZyB0dXJu IGRvd24sIGFuZCBub3QNCisgKiBkdXJpbmcgbm9ybWFsIG9wZXJhdGlvbi4g IFRoaXMgaXMgbmVjZXNzYXJ5IGluIG9yZGVyIHRoYXQgd2UgZG8gbm90IGhh dmUNCisgKiB0byBtaWdyYXRlIENQVXMgdG8gZHJhaW4gdGhlIHBlci1DUFUg Y2FjaGVzLg0KKyAqDQogICogQXJndW1lbnRzOg0KICAqCXpvbmUgICAgIFRo ZSB6b25lIHRvIGRyYWluLCBtdXN0IGJlIHVubG9ja2VkLg0KICAqDQpAQCAt NjI2LDEyICs1OTksMjAgQEANCiAJaW50IGNwdTsNCiANCiAJLyoNCi0JICog 
V2UgaGF2ZSB0byBsb2NrIGVhY2ggY3B1IGNhY2hlIGJlZm9yZSBsb2NraW5n IHRoZSB6b25lDQorCSAqIFhYWDogSXQgaXMgc2FmZSB0byBub3QgbG9jayB0 aGUgcGVyLUNQVSBjYWNoZXMsIGJlY2F1c2Ugd2UncmUNCisJICogdGVhcmlu ZyBkb3duIHRoZSB6b25lIGFueXdheS4gIEkuZS4sIHRoZXJlIHdpbGwgYmUg bm8gZnVydGhlciB1c2UNCisJICogb2YgdGhlIGNhY2hlcyBhdCB0aGlzIHBv aW50Lg0KKwkgKg0KKwkgKiBYWFg6IEl0IHdvdWxkIGdvb2QgdG8gYmUgYWJs ZSB0byBhc3NlcnQgdGhhdCB0aGUgem9uZSBpcyBiZWluZw0KKwkgKiB0b3Ju IGRvd24gdG8gcHJldmVudCBpbXByb3BlciB1c2Ugb2YgY2FjaGVfZHJhaW4o KS4NCisJICoNCisJICogWFhYOiBXZSBsb2NrIHRoZSB6b25lIGJlZm9yZSBw YXNzaW5nIGludG8gYnVja2V0X2NhY2hlX2RyYWluKCkgYXMNCisJICogaXQg aXMgdXNlZCBlbHNld2hlcmUuICBTaG91bGQgdGhlIHRlYXItZG93biBwYXRo IGJlIG1hZGUgc3BlY2lhbA0KKwkgKiB0aGVyZSBpbiBzb21lIGZvcm0/DQog CSAqLw0KIAlmb3IgKGNwdSA9IDA7IGNwdSA8PSBtcF9tYXhpZDsgY3B1Kysp IHsNCiAJCWlmIChDUFVfQUJTRU5UKGNwdSkpDQogCQkJY29udGludWU7DQot CQlDUFVfTE9DSyhjcHUpOw0KIAkJY2FjaGUgPSAmem9uZS0+dXpfY3B1W2Nw dV07DQogCQlidWNrZXRfZHJhaW4oem9uZSwgY2FjaGUtPnVjX2FsbG9jYnVj a2V0KTsNCiAJCWJ1Y2tldF9kcmFpbih6b25lLCBjYWNoZS0+dWNfZnJlZWJ1 Y2tldCk7DQpAQCAtNjQ0LDExICs2MjUsNiBAQA0KIAlaT05FX0xPQ0soem9u ZSk7DQogCWJ1Y2tldF9jYWNoZV9kcmFpbih6b25lKTsNCiAJWk9ORV9VTkxP Q0soem9uZSk7DQotCWZvciAoY3B1ID0gMDsgY3B1IDw9IG1wX21heGlkOyBj cHUrKykgew0KLQkJaWYgKENQVV9BQlNFTlQoY3B1KSkNCi0JCQljb250aW51 ZTsNCi0JCUNQVV9VTkxPQ0soY3B1KTsNCi0JfQ0KIH0NCiANCiAvKg0KQEAg LTgyOCw3ICs4MDQsOCBAQA0KIAkgICAgJmZsYWdzLCB3YWl0KTsNCiAJaWYg KG1lbSA9PSBOVUxMKSB7DQogCQlpZiAoa2VnLT51a19mbGFncyAmIFVNQV9a T05FX09GRlBBR0UpDQotCQkJdW1hX3pmcmVlX2ludGVybmFsKGtlZy0+dWtf c2xhYnpvbmUsIHNsYWIsIE5VTEwsIDApOw0KKwkJCXVtYV96ZnJlZV9pbnRl cm5hbChrZWctPnVrX3NsYWJ6b25lLCBzbGFiLCBOVUxMLA0KKwkJCSAgICBT S0lQX05PTkUpOw0KIAkJWk9ORV9MT0NLKHpvbmUpOw0KIAkJcmV0dXJuIChO VUxMKTsNCiAJfQ0KQEAgLTE2NDMsMTAgKzE2MjAsNiBAQA0KICNpZmRlZiBV TUFfREVCVUcNCiAJcHJpbnRmKCJJbml0aWFsaXppbmcgcGNwdSBjYWNoZSBs b2Nrcy5cbiIpOw0KICNlbmRpZg0KLQkvKiBJbml0aWFsaXplIHRoZSBwY3B1 IGNhY2hlIGxvY2sgc2V0IG9uY2UgYW5kIGZvciBhbGwgKi8NCi0JZm9yIChp 
ID0gMDsgaSA8PSBtcF9tYXhpZDsgaSsrKQ0KLQkJQ1BVX0xPQ0tfSU5JVChp KTsNCi0NCiAjaWZkZWYgVU1BX0RFQlVHDQogCXByaW50ZigiQ3JlYXRpbmcg c2xhYiBhbmQgaGFzaCB6b25lcy5cbiIpOw0KICNlbmRpZg0KQEAgLTE3OTMs NiArMTc2Niw5IEBADQogCXVtYV9jYWNoZV90IGNhY2hlOw0KIAl1bWFfYnVj a2V0X3QgYnVja2V0Ow0KIAlpbnQgY3B1Ow0KKyNpZmRlZiBJTlZBUklBTlRT DQorCWludCBjb3VudDsNCisjZW5kaWYNCiAJaW50IGJhZG5lc3M7DQogDQog CS8qIFRoaXMgaXMgdGhlIGZhc3QgcGF0aCBhbGxvY2F0aW9uICovDQpAQCAt MTgyNywxMiArMTgwMywzMyBAQA0KIAkJfQ0KIAl9DQogDQorCS8qDQorCSAq IElmIHBvc3NpYmxlLCBhbGxvY2F0ZSBmcm9tIHRoZSBwZXItQ1BVIGNhY2hl LiAgVGhlcmUgYXJlIHR3bw0KKwkgKiByZXF1aXJlbWVudHMgZm9yIHNhZmUg YWNjZXNzIHRvIHRoZSBwZXItQ1BVIGNhY2hlOiAoMSkgdGhlIHRocmVhZA0K KwkgKiBhY2Nlc3NpbmcgdGhlIGNhY2hlIG11c3Qgbm90IGJlIHByZWVtcHRl ZCBvciB5aWVsZCBkdXJpbmcgYWNjZXNzLA0KKwkgKiBhbmQgKDIpIHRoZSB0 aHJlYWQgbXVzdCBub3QgbWlncmF0ZSBDUFVzIHdpdGhvdXQgc3dpdGNoaW5n IHdoaWNoDQorCSAqIGNhY2hlIGl0IGFjY2Vzc2VzLiAgV2UgcmVseSBvbiBh IGNyaXRpY2FsIHNlY3Rpb24gdG8gcHJldmVudA0KKwkgKiBwcmVlbXB0aW9u IGFuZCBtaWdyYXRpb24uICBXZSByZWxlYXNlIHRoZSBjcml0aWNhbCBzZWN0 aW9uIGluDQorCSAqIG9yZGVyIHRvIGFjcXVpcmUgdGhlIHpvbmUgbXV0ZXgg aWYgd2UgYXJlIHVuYWJsZSB0byBhbGxvY2F0ZSBmcm9tDQorCSAqIHRoZSBj dXJyZW50IGNhY2hlOyB3aGVuIHdlIHJlLWFjcXVpcmUgdGhlIGNyaXRpY2Fs IHNlY3Rpb24sIHdlDQorCSAqIG11c3QgZGV0ZWN0IGFuZCBoYW5kbGUgbWln cmF0aW9uIGlmIGl0IGhhcyBvY2N1cnJlZC4NCisJICovDQorI2lmZGVmIElO VkFSSUFOVFMNCisJY291bnQgPSAwOw0KKyNlbmRpZg0KIHphbGxvY19yZXN0 YXJ0Og0KKwljcml0aWNhbF9lbnRlcigpOw0KIAljcHUgPSBQQ1BVX0dFVChj cHVpZCk7DQotCUNQVV9MT0NLKGNwdSk7DQogCWNhY2hlID0gJnpvbmUtPnV6 X2NwdVtjcHVdOw0KIA0KIHphbGxvY19zdGFydDoNCisjaWZkZWYgSU5WQVJJ QU5UUw0KKwljb3VudCsrOw0KKwlLQVNTRVJUKGNvdW50IDwgMTAsICgidW1h X3phbGxvY19hcmc6IGNvdW50ID09IDEwIikpOw0KKyNlbmRpZg0KKyNpZiAw DQorCWNyaXRpY2FsX2Fzc2VydCgpOw0KKyNlbmRpZg0KIAlidWNrZXQgPSBj YWNoZS0+dWNfYWxsb2NidWNrZXQ7DQogDQogCWlmIChidWNrZXQpIHsNCkBA IC0xODQ1LDEyICsxODQyLDEyIEBADQogCQkJS0FTU0VSVChpdGVtICE9IE5V TEwsDQogCQkJICAgICgidW1hX3phbGxvYzogQnVja2V0IHBvaW50ZXIgbWFu 
Z2xlZC4iKSk7DQogCQkJY2FjaGUtPnVjX2FsbG9jcysrOw0KKwkJCWNyaXRp Y2FsX2V4aXQoKTsNCiAjaWZkZWYgSU5WQVJJQU5UUw0KIAkJCVpPTkVfTE9D Syh6b25lKTsNCiAJCQl1bWFfZGJnX2FsbG9jKHpvbmUsIE5VTEwsIGl0ZW0p Ow0KIAkJCVpPTkVfVU5MT0NLKHpvbmUpOw0KICNlbmRpZg0KLQkJCUNQVV9V TkxPQ0soY3B1KTsNCiAJCQlpZiAoem9uZS0+dXpfY3RvciAhPSBOVUxMKSB7 DQogCQkJCWlmICh6b25lLT51el9jdG9yKGl0ZW0sIHpvbmUtPnV6X2tlZy0+ dWtfc2l6ZSwNCiAJCQkJICAgIHVkYXRhLCBmbGFncykgIT0gMCkgew0KQEAg LTE4ODAsNyArMTg3NywzMyBAQA0KIAkJCX0NCiAJCX0NCiAJfQ0KKwkvKg0K KwkgKiBBdHRlbXB0IHRvIHJldHJpZXZlIHRoZSBpdGVtIGZyb20gdGhlIHBl ci1DUFUgY2FjaGUgaGFzIGZhaWxlZCwgc28NCisJICogd2UgbXVzdCBnbyBi YWNrIHRvIHRoZSB6b25lLiAgVGhpcyByZXF1aXJlcyB0aGUgem9uZSBsb2Nr LCBzbyB3ZQ0KKwkgKiBtdXN0IGRyb3AgdGhlIGNyaXRpY2FsIHNlY3Rpb24s IHRoZW4gcmUtYWNxdWlyZSBpdCB3aGVuIHdlIGdvIGJhY2sNCisJICogdG8g dGhlIGNhY2hlLiAgU2luY2UgdGhlIGNyaXRpY2FsIHNlY3Rpb24gaXMgcmVs ZWFzZWQsIHdlIG1heSBiZQ0KKwkgKiBwcmVlbXB0ZWQgb3IgbWlncmF0ZS4g IEFzIHN1Y2gsIG1ha2Ugc3VyZSBub3QgdG8gbWFpbnRhaW4gYW55DQorCSAq IHRocmVhZC1sb2NhbCBzdGF0ZSBzcGVjaWZpYyB0byB0aGUgY2FjaGUgZnJv bSBwcmlvciB0byByZWxlYXNpbmcNCisJICogdGhlIGNyaXRpY2FsIHNlY3Rp b24uDQorCSAqLw0KKwljcml0aWNhbF9leGl0KCk7DQogCVpPTkVfTE9DSyh6 b25lKTsNCisJY3JpdGljYWxfZW50ZXIoKTsNCisJY3B1ID0gUENQVV9HRVQo Y3B1aWQpOw0KKwljYWNoZSA9ICZ6b25lLT51el9jcHVbY3B1XTsNCisJYnVj a2V0ID0gY2FjaGUtPnVjX2FsbG9jYnVja2V0Ow0KKwlpZiAoYnVja2V0ICE9 IE5VTEwpIHsNCisJCWlmIChidWNrZXQgIT0gTlVMTCAmJiBidWNrZXQtPnVi X2NudCA+IDApIHsNCisJCQlaT05FX1VOTE9DSyh6b25lKTsNCisJCQlnb3Rv IHphbGxvY19zdGFydDsNCisJCX0NCisJCWJ1Y2tldCA9IGNhY2hlLT51Y19m cmVlYnVja2V0Ow0KKwkJaWYgKGJ1Y2tldCAhPSBOVUxMICYmIGJ1Y2tldC0+ dWJfY250ID4gMCkgew0KKwkJCVpPTkVfVU5MT0NLKHpvbmUpOw0KKwkJCWdv dG8gemFsbG9jX3N0YXJ0Ow0KKwkJfQ0KKwl9DQorDQogCS8qIFNpbmNlIHdl IGhhdmUgbG9ja2VkIHRoZSB6b25lIHdlIG1heSBhcyB3ZWxsIHNlbmQgYmFj ayBvdXIgc3RhdHMgKi8NCiAJem9uZS0+dXpfYWxsb2NzICs9IGNhY2hlLT51 Y19hbGxvY3M7DQogCWNhY2hlLT51Y19hbGxvY3MgPSAwOw0KQEAgLTE5MDQs OCArMTkyNyw4IEBADQogCQlaT05FX1VOTE9DSyh6b25lKTsNCiAJCWdvdG8g 
emFsbG9jX3N0YXJ0Ow0KIAl9DQotCS8qIFdlIGFyZSBubyBsb25nZXIgYXNz b2NpYXRlZCB3aXRoIHRoaXMgY3B1ISEhICovDQotCUNQVV9VTkxPQ0soY3B1 KTsNCisJLyogV2UgYXJlIG5vIGxvbmdlciBhc3NvY2lhdGVkIHdpdGggdGhp cyBDUFUuICovDQorCWNyaXRpY2FsX2V4aXQoKTsNCiANCiAJLyogQnVtcCB1 cCBvdXIgdXpfY291bnQgc28gd2UgZ2V0IGhlcmUgbGVzcyAqLw0KIAlpZiAo em9uZS0+dXpfY291bnQgPCBCVUNLRVRfTUFYKQ0KQEAgLTIyMjgsMTAgKzIy NTEsMTAgQEANCiAJdW1hX2J1Y2tldF90IGJ1Y2tldDsNCiAJaW50IGJmbGFn czsNCiAJaW50IGNwdTsNCi0JZW51bSB6ZnJlZXNraXAgc2tpcDsNCisjaWZk ZWYgSU5WQVJJQU5UUw0KKwlpbnQgY291bnQ7DQorI2VuZGlmDQogDQotCS8q IFRoaXMgaXMgdGhlIGZhc3QgcGF0aCBmcmVlICovDQotCXNraXAgPSBTS0lQ X05PTkU7DQogCWtlZyA9IHpvbmUtPnV6X2tlZzsNCiANCiAjaWZkZWYgVU1B X0RFQlVHX0FMTE9DXzENCkBAIC0yMjQwLDI1ICsyMjYzLDUwIEBADQogCUNU UjIoS1RSX1VNQSwgInVtYV96ZnJlZV9hcmcgdGhyZWFkICV4IHpvbmUgJXMi LCBjdXJ0aHJlYWQsDQogCSAgICB6b25lLT51el9uYW1lKTsNCiANCisJaWYg KHpvbmUtPnV6X2R0b3IpDQorCQl6b25lLT51el9kdG9yKGl0ZW0sIGtlZy0+ dWtfc2l6ZSwgdWRhdGEpOw0KKyNpZmRlZiBJTlZBUklBTlRTDQorCVpPTkVf TE9DSyh6b25lKTsNCisJaWYgKGtlZy0+dWtfZmxhZ3MgJiBVTUFfWk9ORV9N QUxMT0MpDQorCQl1bWFfZGJnX2ZyZWUoem9uZSwgdWRhdGEsIGl0ZW0pOw0K KwllbHNlDQorCQl1bWFfZGJnX2ZyZWUoem9uZSwgTlVMTCwgaXRlbSk7DQor CVpPTkVfVU5MT0NLKHpvbmUpOw0KKyNlbmRpZg0KIAkvKg0KIAkgKiBUaGUg cmFjZSBoZXJlIGlzIGFjY2VwdGFibGUuICBJZiB3ZSBtaXNzIGl0IHdlJ2xs IGp1c3QgaGF2ZSB0byB3YWl0DQogCSAqIGEgbGl0dGxlIGxvbmdlciBmb3Ig dGhlIGxpbWl0cyB0byBiZSByZXNldC4NCiAJICovDQotDQogCWlmIChrZWct PnVrX2ZsYWdzICYgVU1BX1pGTEFHX0ZVTEwpDQogCQlnb3RvIHpmcmVlX2lu dGVybmFsOw0KIA0KLQlpZiAoem9uZS0+dXpfZHRvcikgew0KLQkJem9uZS0+ dXpfZHRvcihpdGVtLCBrZWctPnVrX3NpemUsIHVkYXRhKTsNCi0JCXNraXAg PSBTS0lQX0RUT1I7DQotCX0NCi0NCisjaWZkZWYgSU5WQVJJQU5UUw0KKwlj b3VudCA9IDA7DQorI2VuZGlmDQorCS8qDQorCSAqIElmIHBvc3NpYmxlLCBm cmVlIHRvIHRoZSBwZXItQ1BVIGNhY2hlLiAgVGhlcmUgYXJlIHR3bw0KKwkg KiByZXF1aXJlbWVudHMgZm9yIHNhZmUgYWNjZXNzIHRvIHRoZSBwZXItQ1BV IGNhY2hlOiAoMSkgdGhlIHRocmVhZA0KKwkgKiBhY2Nlc3NpbmcgdGhlIGNh Y2hlIG11c3Qgbm90IGJlIHByZWVtcHRlZCBvciB5aWVsZCBkdXJpbmcgYWNj 
ZXNzLA0KKwkgKiBhbmQgKDIpIHRoZSB0aHJlYWQgbXVzdCBub3QgbWlncmF0 ZSBDUFVzIHdpdGhvdXQgc3dpdGNoaW5nIHdoaWNoDQorCSAqIGNhY2hlIGl0 IGFjY2Vzc2VzLiAgV2UgcmVseSBvbiBhIGNyaXRpY2FsIHNlY3Rpb24gdG8g cHJldmVudA0KKwkgKiBwcmVlbXB0aW9uIGFuZCBtaWdyYXRpb24uICBXZSBy ZWxlYXNlIHRoZSBjcml0aWNhbCBzZWN0aW9uIGluDQorCSAqIG9yZGVyIHRv IGFjcXVpcmUgdGhlIHpvbmUgbXV0ZXggaWYgd2UgYXJlIHVuYWJsZSB0byBm cmVlIHRvIHRoZQ0KKwkgKiBjdXJyZW50IGNhY2hlOyB3aGVuIHdlIHJlLWFj cXVpcmUgdGhlIGNyaXRpY2FsIHNlY3Rpb24sIHdlIG11c3QNCisJICogZGV0 ZWN0IGFuZCBoYW5kbGUgbWlncmF0aW9uIGlmIGl0IGhhcyBvY2N1cnJlZC4N CisJICovDQogemZyZWVfcmVzdGFydDoNCisJY3JpdGljYWxfZW50ZXIoKTsN CiAJY3B1ID0gUENQVV9HRVQoY3B1aWQpOw0KLQlDUFVfTE9DSyhjcHUpOw0K IAljYWNoZSA9ICZ6b25lLT51el9jcHVbY3B1XTsNCiANCiB6ZnJlZV9zdGFy dDoNCisjaWZkZWYgSU5WQVJJQU5UUw0KKwljb3VudCsrOw0KKwlLQVNTRVJU KGNvdW50IDwgMTAsICgidW1hX3pmcmVlX2FyZzogY291bnQgPT0gMTAiKSk7 DQorI2VuZGlmDQorI2lmIDANCisJY3JpdGljYWxfYXNzZXJ0KCk7DQorI2Vu ZGlmDQogCWJ1Y2tldCA9IGNhY2hlLT51Y19mcmVlYnVja2V0Ow0KIA0KIAlp ZiAoYnVja2V0KSB7DQpAQCAtMjI3MiwxNSArMjMyMCw3IEBADQogCQkJICAg ICgidW1hX3pmcmVlOiBGcmVlaW5nIHRvIG5vbiBmcmVlIGJ1Y2tldCBpbmRl eC4iKSk7DQogCQkJYnVja2V0LT51Yl9idWNrZXRbYnVja2V0LT51Yl9jbnRd ID0gaXRlbTsNCiAJCQlidWNrZXQtPnViX2NudCsrOw0KLSNpZmRlZiBJTlZB UklBTlRTDQotCQkJWk9ORV9MT0NLKHpvbmUpOw0KLQkJCWlmIChrZWctPnVr X2ZsYWdzICYgVU1BX1pPTkVfTUFMTE9DKQ0KLQkJCQl1bWFfZGJnX2ZyZWUo em9uZSwgdWRhdGEsIGl0ZW0pOw0KLQkJCWVsc2UNCi0JCQkJdW1hX2RiZ19m cmVlKHpvbmUsIE5VTEwsIGl0ZW0pOw0KLQkJCVpPTkVfVU5MT0NLKHpvbmUp Ow0KLSNlbmRpZg0KLQkJCUNQVV9VTkxPQ0soY3B1KTsNCisJCQljcml0aWNh bF9leGl0KCk7DQogCQkJcmV0dXJuOw0KIAkJfSBlbHNlIGlmIChjYWNoZS0+ dWNfYWxsb2NidWNrZXQpIHsNCiAjaWZkZWYgVU1BX0RFQlVHX0FMTE9DDQpA QCAtMjMwNCw5ICsyMzQ0LDMyIEBADQogCSAqDQogCSAqIDEpIFRoZSBidWNr ZXRzIGFyZSBOVUxMDQogCSAqIDIpIFRoZSBhbGxvYyBhbmQgZnJlZSBidWNr ZXRzIGFyZSBib3RoIHNvbWV3aGF0IGZ1bGwuDQorCSAqDQorCSAqIFdlIG11 c3QgZ28gYmFjayB0aGUgem9uZSwgd2hpY2ggcmVxdWlyZXMgYWNxdWlyaW5n IHRoZSB6b25lIGxvY2ssDQorCSAqIHdoaWNoIGluIHR1cm4gbWVhbnMgd2Ug 
bXVzdCByZWxlYXNlIGFuZCByZS1hY3F1aXJlIHRoZSBjcml0aWNhbA0KKwkg KiBzZWN0aW9uLiAgU2luY2UgdGhlIGNyaXRpY2FsIHNlY3Rpb24gaXMgcmVs ZWFzZWQsIHdlIG1heSBiZQ0KKwkgKiBwcmVlbXB0ZWQgb3IgbWlncmF0ZS4g IEFzIHN1Y2gsIG1ha2Ugc3VyZSBub3QgdG8gbWFpbnRhaW4gYW55DQorCSAq IHRocmVhZC1sb2NhbCBzdGF0ZSBzcGVjaWZpYyB0byB0aGUgY2FjaGUgZnJv bSBwcmlvciB0byByZWxlYXNpbmcNCisJICogdGhlIGNyaXRpY2FsIHNlY3Rp b24uDQogCSAqLw0KLQ0KKwljcml0aWNhbF9leGl0KCk7DQogCVpPTkVfTE9D Syh6b25lKTsNCisJY3JpdGljYWxfZW50ZXIoKTsNCisJY3B1ID0gUENQVV9H RVQoY3B1aWQpOw0KKwljYWNoZSA9ICZ6b25lLT51el9jcHVbY3B1XTsNCisJ aWYgKGNhY2hlLT51Y19mcmVlYnVja2V0ICE9IE5VTEwpIHsNCisJCWlmIChj YWNoZS0+dWNfZnJlZWJ1Y2tldC0+dWJfY250IDwNCisJCSAgICBjYWNoZS0+ dWNfZnJlZWJ1Y2tldC0+dWJfZW50cmllcykgew0KKwkJCVpPTkVfVU5MT0NL KHpvbmUpOw0KKwkJCWdvdG8gemZyZWVfc3RhcnQ7DQorCQl9DQorCQlpZiAo Y2FjaGUtPnVjX2FsbG9jYnVja2V0ICE9IE5VTEwgJiYNCisJCSAgICAoY2Fj aGUtPnVjX2FsbG9jYnVja2V0LT51Yl9jbnQgPA0KKwkJICAgIGNhY2hlLT51 Y19mcmVlYnVja2V0LT51Yl9jbnQpKSB7DQorCQkJWk9ORV9VTkxPQ0soem9u ZSk7DQorCQkJZ290byB6ZnJlZV9zdGFydDsNCisJCX0NCisJfQ0KIA0KIAli dWNrZXQgPSBjYWNoZS0+dWNfZnJlZWJ1Y2tldDsNCiAJY2FjaGUtPnVjX2Zy ZWVidWNrZXQgPSBOVUxMOw0KQEAgLTIzMjgsOCArMjM5MSw4IEBADQogCQlj YWNoZS0+dWNfZnJlZWJ1Y2tldCA9IGJ1Y2tldDsNCiAJCWdvdG8gemZyZWVf c3RhcnQ7DQogCX0NCi0JLyogV2UncmUgZG9uZSB3aXRoIHRoaXMgQ1BVIG5v dyAqLw0KLQlDUFVfVU5MT0NLKGNwdSk7DQorCS8qIFdlIGFyZSBubyBsb25n ZXIgYXNzb2NpYXRlZCB3aXRoIHRoaXMgQ1BVLiAqLw0KKwljcml0aWNhbF9l eGl0KCk7DQogDQogCS8qIEFuZCB0aGUgem9uZS4uICovDQogCVpPTkVfVU5M T0NLKHpvbmUpOw0KQEAgLTIzNTMsMjcgKzI0MTYsOSBAQA0KIAkvKg0KIAkg KiBJZiBub3RoaW5nIGVsc2UgY2F1Z2h0IHRoaXMsIHdlJ2xsIGp1c3QgZG8g YW4gaW50ZXJuYWwgZnJlZS4NCiAJICovDQotDQogemZyZWVfaW50ZXJuYWw6 DQorCXVtYV96ZnJlZV9pbnRlcm5hbCh6b25lLCBpdGVtLCB1ZGF0YSwgU0tJ UF9EVE9SKTsNCiANCi0jaWZkZWYgSU5WQVJJQU5UUw0KLQkvKg0KLQkgKiBJ ZiB3ZSBuZWVkIHRvIHNraXAgdGhlIGR0b3IgYW5kIHRoZSB1bWFfZGJnX2Zy ZWUgaW4NCi0JICogdW1hX3pmcmVlX2ludGVybmFsIGJlY2F1c2Ugd2UndmUg YWxyZWFkeSBjYWxsZWQgdGhlIGR0b3INCi0JICogYWJvdmUsIGJ1dCB3ZSBl 
bmRlZCB1cCBoZXJlLCB0aGVuIHdlIG5lZWQgdG8gbWFrZSBzdXJlDQotCSAq IHRoYXQgd2UgdGFrZSBjYXJlIG9mIHRoZSB1bWFfZGJnX2ZyZWUgaW1tZWRp YXRlbHkuDQotCSAqLw0KLQlpZiAoc2tpcCkgew0KLQkJWk9ORV9MT0NLKHpv bmUpOw0KLQkJaWYgKGtlZy0+dWtfZmxhZ3MgJiBVTUFfWk9ORV9NQUxMT0Mp DQotCQkJdW1hX2RiZ19mcmVlKHpvbmUsIHVkYXRhLCBpdGVtKTsNCi0JCWVs c2UNCi0JCQl1bWFfZGJnX2ZyZWUoem9uZSwgTlVMTCwgaXRlbSk7DQotCQla T05FX1VOTE9DSyh6b25lKTsNCi0JfQ0KLSNlbmRpZg0KLQl1bWFfemZyZWVf aW50ZXJuYWwoem9uZSwgaXRlbSwgdWRhdGEsIHNraXApOw0KLQ0KIAlyZXR1 cm47DQogfQ0KIA0KQEAgLTI2NTUsNyArMjcwMCw3IEBADQogCQlzbGFiLT51 c19mbGFncyA9IGZsYWdzIHwgVU1BX1NMQUJfTUFMTE9DOw0KIAkJc2xhYi0+ dXNfc2l6ZSA9IHNpemU7DQogCX0gZWxzZSB7DQotCQl1bWFfemZyZWVfaW50 ZXJuYWwoc2xhYnpvbmUsIHNsYWIsIE5VTEwsIDApOw0KKwkJdW1hX3pmcmVl X2ludGVybmFsKHNsYWJ6b25lLCBzbGFiLCBOVUxMLCBTS0lQX05PTkUpOw0K IAl9DQogDQogCXJldHVybiAobWVtKTsNCkBAIC0yNjY2LDcgKzI3MTEsNyBA QA0KIHsNCiAJdnNldG9iaigodm1fb2Zmc2V0X3Qpc2xhYi0+dXNfZGF0YSwg a21lbV9vYmplY3QpOw0KIAlwYWdlX2ZyZWUoc2xhYi0+dXNfZGF0YSwgc2xh Yi0+dXNfc2l6ZSwgc2xhYi0+dXNfZmxhZ3MpOw0KLQl1bWFfemZyZWVfaW50 ZXJuYWwoc2xhYnpvbmUsIHNsYWIsIE5VTEwsIDApOw0KKwl1bWFfemZyZWVf aW50ZXJuYWwoc2xhYnpvbmUsIHNsYWIsIE5VTEwsIFNLSVBfTk9ORSk7DQog fQ0KIA0KIHZvaWQNCkBAIC0yNzQzLDYgKzI3ODgsNyBAQA0KIAlpbnQgY2Fj aGVmcmVlOw0KIAl1bWFfYnVja2V0X3QgYnVja2V0Ow0KIAl1bWFfY2FjaGVf dCBjYWNoZTsNCisJdV9pbnQ2NF90IGFsbG9jOw0KIA0KIAljbnQgPSAwOw0K IAltdHhfbG9jaygmdW1hX210eCk7DQpAQCAtMjc2NiwxNSArMjgxMiw5IEBA DQogCSAgTElTVF9GT1JFQUNIKHosICZ6ay0+dWtfem9uZXMsIHV6X2xpbmsp IHsNCiAJCWlmIChjbnQgPT0gMCkJLyogbGlzdCBtYXkgaGF2ZSBjaGFuZ2Vk IHNpemUgKi8NCiAJCQlicmVhazsNCi0JCWlmICghKHprLT51a19mbGFncyAm IFVNQV9aRkxBR19JTlRFUk5BTCkpIHsNCi0JCQlmb3IgKGNwdSA9IDA7IGNw dSA8PSBtcF9tYXhpZDsgY3B1KyspIHsNCi0JCQkJaWYgKENQVV9BQlNFTlQo Y3B1KSkNCi0JCQkJCWNvbnRpbnVlOw0KLQkJCQlDUFVfTE9DSyhjcHUpOw0K LQkJCX0NCi0JCX0NCiAJCVpPTkVfTE9DSyh6KTsNCiAJCWNhY2hlZnJlZSA9 IDA7DQorCQlhbGxvYyA9IDA7DQogCQlpZiAoISh6ay0+dWtfZmxhZ3MgJiBV TUFfWkZMQUdfSU5URVJOQUwpKSB7DQogCQkJZm9yIChjcHUgPSAwOyBjcHUg 
PD0gbXBfbWF4aWQ7IGNwdSsrKSB7DQogCQkJCWlmIChDUFVfQUJTRU5UKGNw dSkpDQpAQCAtMjc4NCw5ICsyODI0LDEyIEBADQogCQkJCQljYWNoZWZyZWUg Kz0gY2FjaGUtPnVjX2FsbG9jYnVja2V0LT51Yl9jbnQ7DQogCQkJCWlmIChj YWNoZS0+dWNfZnJlZWJ1Y2tldCAhPSBOVUxMKQ0KIAkJCQkJY2FjaGVmcmVl ICs9IGNhY2hlLT51Y19mcmVlYnVja2V0LT51Yl9jbnQ7DQotCQkJCUNQVV9V TkxPQ0soY3B1KTsNCisJCQkJYWxsb2MgKz0gY2FjaGUtPnVjX2FsbG9jczsN CisJCQkJY2FjaGUtPnVjX2FsbG9jcyA9IDA7DQogCQkJfQ0KIAkJfQ0KKwkJ YWxsb2MgKz0gei0+dXpfYWxsb2NzOw0KKw0KIAkJTElTVF9GT1JFQUNIKGJ1 Y2tldCwgJnotPnV6X2Z1bGxfYnVja2V0LCB1Yl9saW5rKSB7DQogCQkJY2Fj aGVmcmVlICs9IGJ1Y2tldC0+dWJfY250Ow0KIAkJfQ0KQEAgLTI3OTcsNyAr Mjg0MCw3IEBADQogCQkgICAgemstPnVrX21heHBhZ2VzICogemstPnVrX2lw ZXJzLA0KIAkJICAgICh6ay0+dWtfaXBlcnMgKiAoemstPnVrX3BhZ2VzIC8g emstPnVrX3BwZXJhKSkgLSB0b3RhbGZyZWUsDQogCQkgICAgdG90YWxmcmVl LA0KLQkJICAgICh1bnNpZ25lZCBsb25nIGxvbmcpei0+dXpfYWxsb2NzKTsN CisJCSAgICAodW5zaWduZWQgbG9uZyBsb25nKWFsbG9jKTsNCiAJCVpPTkVf VU5MT0NLKHopOw0KIAkJZm9yIChwID0gb2Zmc2V0ICsgMTI7IHAgPiBvZmZz ZXQgJiYgKnAgPT0gJyAnOyAtLXApDQogCQkJLyogbm90aGluZyAqLyA7DQot LS0gLy9kZXBvdC92ZW5kb3IvZnJlZWJzZC9zcmMvc3lzL3ZtL3VtYV9pbnQu aAkyMDA1LzAyLzE2IDIxOjUwOjI5DQorKysgLy9kZXBvdC91c2VyL3J3YXRz b24vcGVyY3B1L3N5cy92bS91bWFfaW50LmgJMjAwNS8wMy8xNSAxOTo1Nzoy NA0KQEAgLTM0MiwxNiArMzQyLDYgQEANCiAjZGVmaW5lCVpPTkVfTE9DSyh6 KQltdHhfbG9jaygoeiktPnV6X2xvY2spDQogI2RlZmluZSBaT05FX1VOTE9D Syh6KQltdHhfdW5sb2NrKCh6KS0+dXpfbG9jaykNCiANCi0jZGVmaW5lCUNQ VV9MT0NLX0lOSVQoY3B1KQkJCQkJXA0KLQltdHhfaW5pdCgmdW1hX3BjcHVf bXR4WyhjcHUpXSwgIlVNQSBwY3B1IiwgIlVNQSBwY3B1IiwJXA0KLQkgICAg TVRYX0RFRiB8IE1UWF9EVVBPSykNCi0NCi0jZGVmaW5lIENQVV9MT0NLKGNw dSkJCQkJCQlcDQotCW10eF9sb2NrKCZ1bWFfcGNwdV9tdHhbKGNwdSldKQ0K LQ0KLSNkZWZpbmUgQ1BVX1VOTE9DSyhjcHUpCQkJCQkJXA0KLQltdHhfdW5s b2NrKCZ1bWFfcGNwdV9tdHhbKGNwdSldKQ0KLQ0KIC8qDQogICogRmluZCBh IHNsYWIgd2l0aGluIGEgaGFzaCB0YWJsZS4gIFRoaXMgaXMgdXNlZCBmb3Ig T0ZGUEFHRSB6b25lcyB0byBsb29rdXANCiAgKiB0aGUgc2xhYiBzdHJ1Y3R1 cmUuDQo= --0-136826264-1113748310=:85588 Content-Type: TEXT/PLAIN; charset=US-ASCII; name=mbuf.diff 
Content-Transfer-Encoding: BASE64 Content-ID: <20050417153150.H85588@fledge.watson.org> Content-Description: Content-Disposition: attachment; filename=mbuf.diff LS0tIC8vZGVwb3QvdmVuZG9yL2ZyZWVic2Qvc3JjL3N5cy9rZXJuL2tlcm5f bWJ1Zi5jCTIwMDUvMDIvMTYgMjE6NTA6MjkNCisrKyAvL2RlcG90L3VzZXIv cndhdHNvbi9wZXJjcHUvc3lzL2tlcm4va2Vybl9tYnVmLmMJMjAwNS8wNC8x NSAxMToxMToyNg0KQEAgLTEsNiArMSw3IEBADQogLyotDQotICogQ29weXJp Z2h0IChjKSAyMDA0LCAyMDA1LA0KLSAqIAlCb3NrbyBNaWxla2ljIDxibWls ZWtpY0BGcmVlQlNELm9yZz4uICBBbGwgcmlnaHRzIHJlc2VydmVkLg0KKyAq IENvcHlyaWdodCAoYykgMjAwNCwgMjAwNSBCb3NrbyBNaWxla2ljIDxibWls ZWtpY0BGcmVlQlNELm9yZz4NCisgKiBDb3B5cmlnaHQgKGMpIDIwMDUgUm9i ZXJ0IE4uIE0uIFdhdHNvbg0KKyAqIEFsbCByaWdodHMgcmVzZXJ2ZWQuDQog ICoNCiAgKiBSZWRpc3RyaWJ1dGlvbiBhbmQgdXNlIGluIHNvdXJjZSBhbmQg YmluYXJ5IGZvcm1zLCB3aXRoIG9yIHdpdGhvdXQNCiAgKiBtb2RpZmljYXRp b24sIGFyZSBwZXJtaXR0ZWQgcHJvdmlkZWQgdGhhdCB0aGUgZm9sbG93aW5n IGNvbmRpdGlvbnMNCkBAIC0zMSw2ICszMiw5IEBADQogI2luY2x1ZGUgIm9w dF9tYWMuaCINCiAjaW5jbHVkZSAib3B0X3BhcmFtLmgiDQogDQorLyogTmVl ZCBtYnN0YXRfcGVyY3B1IGRlZmluaXRpb24gZnJvbSBtYnVmLmguICovDQor I2RlZmluZQlXQU5UX01CU1RBVF9QRVJDUFUNCisNCiAjaW5jbHVkZSA8c3lz L3BhcmFtLmg+DQogI2luY2x1ZGUgPHN5cy9tYWMuaD4NCiAjaW5jbHVkZSA8 c3lzL21hbGxvYy5oPg0KQEAgLTM5LDYgKzQzLDcgQEANCiAjaW5jbHVkZSA8 c3lzL2RvbWFpbi5oPg0KICNpbmNsdWRlIDxzeXMvZXZlbnRoYW5kbGVyLmg+ DQogI2luY2x1ZGUgPHN5cy9rZXJuZWwuaD4NCisjaW5jbHVkZSA8c3lzL3By b2MuaD4NCiAjaW5jbHVkZSA8c3lzL3Byb3Rvc3cuaD4NCiAjaW5jbHVkZSA8 c3lzL3NtcC5oPg0KICNpbmNsdWRlIDxzeXMvc3lzY3RsLmg+DQpAQCAtNzks NyArODQsMTggQEANCiAgKi8NCiANCiBpbnQgbm1iY2x1c3RlcnM7DQorDQor LyoNCisgKiBtYnN0YXQgaXMgdGhlIG1idWYgc3RhdGlzdGljcyBzdHJ1Y3R1 cmUgZXhwb3NlZCB0byB1c2Vyc3BhY2UuDQorICoNCisgKiBtYnN0YXRfcGVy Y3B1IGlzIHRoZSBwZXItQ1BVIHN0YXRpc3RpY3Mgc3RydWN0dXJlIGluIHdo aWNoIG1hbnkgb2YgdGhlDQorICogbWJzdGF0IG1lYXN1cmVtZW50cyBhcmUg Z2F0aGVyZWQgYmVmb3JlIGJlaW5nIGNvbWJpbmVkIGZvciBleHBvc3VyZSB0 bw0KKyAqIHVzZXJzcGFjZS4gIG1ic3RhdF9wZXJjcHUgaXMgcmVhZCBsb2Nr bGVzcywgc28gc3ViamVjdCB0byBzbWFsbA0KKyAqIGNvbnNpc3RlbmN5IHJh 
Y2VzLiAgSXQgaXMgbW9kaWZpZWQgaG9sZGluZyBhIGNyaXRpY2FsIHNlY3Rp b24gdG8gYXZvaWQNCisgKiByZWFkLW1vZGlmeS13cml0ZSByYWNlcyBpbiB0 aGUgcHJlc2VuY2Ugb2YgcHJlZW1wdGlvbi4NCisgKi8NCiBzdHJ1Y3QgbWJz dGF0IG1ic3RhdDsNCitzdHJ1Y3QgbWJzdGF0X3BlcmNwdSBtYnN0YXRfcGVy Y3B1W01BWENQVV07DQogDQogc3RhdGljIHZvaWQNCiB0dW5hYmxlX21iaW5p dCh2b2lkICpkdW1teSkNCkBAIC05MSwxMSArMTA3LDEzIEBADQogfQ0KIFNZ U0lOSVQodHVuYWJsZV9tYmluaXQsIFNJX1NVQl9UVU5BQkxFUywgU0lfT1JE RVJfQU5ZLCB0dW5hYmxlX21iaW5pdCwgTlVMTCk7DQogDQorc3RhdGljIGlu dCBzeXNjdGxfa2Vybl9pcGNfbWJzdGF0KFNZU0NUTF9IQU5ETEVSX0FSR1Mp Ow0KKw0KIFNZU0NUTF9ERUNMKF9rZXJuX2lwYyk7DQogU1lTQ1RMX0lOVChf a2Vybl9pcGMsIE9JRF9BVVRPLCBubWJjbHVzdGVycywgQ1RMRkxBR19SVywg Jm5tYmNsdXN0ZXJzLCAwLA0KICAgICAiTWF4aW11bSBudW1iZXIgb2YgbWJ1 ZiBjbHVzdGVycyBhbGxvd2VkIik7DQotU1lTQ1RMX1NUUlVDVChfa2Vybl9p cGMsIE9JRF9BVVRPLCBtYnN0YXQsIENUTEZMQUdfUkQsICZtYnN0YXQsIG1i c3RhdCwNCi0gICAgIk1idWYgZ2VuZXJhbCBpbmZvcm1hdGlvbiBhbmQgc3Rh dGlzdGljcyIpOw0KK1NZU0NUTF9QUk9DKF9rZXJuX2lwYywgT0lEX0FVVE8s IG1ic3RhdCwgQ1RMRkxBR19SRCwgTlVMTCwgMCwNCisgICAgc3lzY3RsX2tl cm5faXBjX21ic3RhdCwgIiIsICJNYnVmIGdlbmVyYWwgaW5mb3JtYXRpb24g YW5kIHN0YXRpc3RpY3MiKTsNCiANCiAvKg0KICAqIFpvbmVzIGZyb20gd2hp Y2ggd2UgYWxsb2NhdGUuDQpAQCAtMTcwLDggKzE4OCw2OSBAQA0KIAltYnN0 YXQubV9tY2ZhaWwgPSBtYnN0YXQubV9tcGZhaWwgPSAwOw0KIAltYnN0YXQu c2ZfaW9jbnQgPSAwOw0KIAltYnN0YXQuc2ZfYWxsb2N3YWl0ID0gbWJzdGF0 LnNmX2FsbG9jZmFpbCA9IDA7DQorDQorCS8qIG1ic3RhdF9wZXJjcHUgaXMg emVybydkIGJ5IEJTUy4gKi8NCiB9DQogDQorc3RhdGljIGludA0KK3N5c2N0 bF9rZXJuX2lwY19tYnN0YXQoU1lTQ1RMX0hBTkRMRVJfQVJHUykNCit7DQor CXN0cnVjdCBtYnN0YXRfcGVyY3B1ICptYnAsIG1icF9sb2NhbDsNCisJdV9j aGFyIGNwdTsNCisNCisJYnplcm8oJm1icF9sb2NhbCwgc2l6ZW9mKG1icF9s b2NhbCkpOw0KKwlmb3IgKGNwdSA9IDA7IGNwdSA8IE1BWENQVTsgY3B1Kysp IHsNCisJCW1icCA9ICZtYnN0YXRfcGVyY3B1W2NwdV07DQorCQltYnBfbG9j YWwubWJwX21idWZfYWxsb2NzICs9IG1icC0+bWJwX21idWZfYWxsb2NzOw0K KwkJbWJwX2xvY2FsLm1icF9tYnVmX2ZyZWVzICs9IG1icC0+bWJwX21idWZf ZnJlZXM7DQorCQltYnBfbG9jYWwubWJwX21idWZfZmFpbHMgKz0gbWJwLT5t 
YnBfbWJ1Zl9mYWlsczsNCisJCW1icF9sb2NhbC5tYnBfbWJ1Zl9kcmFpbnMg Kz0gbWJwLT5tYnBfbWJ1Zl9kcmFpbnM7DQorCQltYnBfbG9jYWwubWJwX2Ns dXN0X2FsbG9jcyArPSBtYnAtPm1icF9jbHVzdF9hbGxvY3M7DQorCQltYnBf bG9jYWwubWJwX2NsdXN0X2ZyZWVzICs9IG1icC0+bWJwX2NsdXN0X2ZyZWVz Ow0KKw0KKwkJbWJwX2xvY2FsLm1icF9jb3B5X2ZhaWxzICs9IG1icC0+bWJw X2NvcHlfZmFpbHM7DQorCQltYnBfbG9jYWwubWJwX3B1bGx1cF9mYWlscyAr PSBtYnAtPm1icF9wdWxsdXBfZmFpbHM7DQorDQorCQltYnBfbG9jYWwuc2Zw X2lvY250ICs9IG1icC0+c2ZwX2lvY250Ow0KKwkJbWJwX2xvY2FsLnNmcF9h bGxvY19mYWlscyArPSBtYnAtPnNmcF9hbGxvY19mYWlsczsNCisJCW1icF9s b2NhbC5zZnBfYWxsb2Nfd2FpdHMgKz0gbWJwLT5zZnBfYWxsb2Nfd2FpdHM7 DQorCX0NCisNCisJLyoNCisJICogSWYsIGR1ZSB0byByYWNlcywgdGhlIG51 bWJlciBvZiBmcmVlcyBmb3IgbWJ1ZnMgb3IgY2x1c3RlcnMgaXMNCisJICog Z3JlYXRlciB0aGFuIHRoZSBudW1iZXIgb2YgYWxsb2NzLCBhZGp1c3QgYWxs b2Mgc3RhdHMgdG8gMC4gIFRoaXMNCisJICogaXNuJ3QgcXVpdGUgYWNjdXJh dGUsIGJ1dCBmb3IgdGhlIHRpbWUgYmVpbmcsIHdlIGNvbnNpZGVyIHRoZQ0K KwkgKiBwZXJmb3JtYW5jZSB3aW4gb2YgcmFjZXMgd29ydGggdGhlIG9jY2Fz aW9uYWwgaW5hY2N1cmFjeS4NCisJICovDQorCWlmIChtYnBfbG9jYWwubWJw X21idWZfYWxsb2NzID4gbWJwX2xvY2FsLm1icF9tYnVmX2ZyZWVzKQ0KKwkJ bWJzdGF0Lm1fbWJ1ZnMgPSBtYnBfbG9jYWwubWJwX21idWZfYWxsb2NzIC0N CisJCSAgICBtYnBfbG9jYWwubWJwX21idWZfZnJlZXM7DQorCWVsc2UNCisJ CW1ic3RhdC5tX21idWZzID0gMDsNCisNCisJaWYgKG1icF9sb2NhbC5tYnBf Y2x1c3RfYWxsb2NzID4gbWJwX2xvY2FsLm1icF9jbHVzdF9mcmVlcykNCisJ CW1ic3RhdC5tX21jbHVzdHMgPSBtYnBfbG9jYWwubWJwX2NsdXN0X2FsbG9j cyAtDQorCQkgICAgbWJwX2xvY2FsLm1icF9jbHVzdF9mcmVlczsNCisJZWxz ZQ0KKwkJbWJzdGF0Lm1fbWNsdXN0cyA9IDA7DQorDQorCW1ic3RhdC5tX2Ry YWluID0gbWJwX2xvY2FsLm1icF9tYnVmX2RyYWluczsNCisJbWJzdGF0Lm1f bWNmYWlsID0gbWJwX2xvY2FsLm1icF9jb3B5X2ZhaWxzOw0KKwltYnN0YXQu bV9tcGZhaWwgPSBtYnBfbG9jYWwubWJwX3B1bGx1cF9mYWlsczsNCisNCisJ bWJzdGF0LnNmX2lvY250ID0gbWJwX2xvY2FsLnNmcF9pb2NudDsNCisJbWJz dGF0LnNmX2FsbG9jZmFpbCA9IG1icF9sb2NhbC5zZnBfYWxsb2NfZmFpbHM7 DQorCS8qDQorCSAqIHNmX2FsbG9jd2FpdCBpcyBwcm90ZWN0ZWQgYnkgcGVy LWFyY2hpdGVjdHVyZSBtdXRleCBzZl9idWZfbG9jaywNCisJICogd2hpY2gg 
aXMgaGVsZCB3aGVuZXZlciBzZl9hbGxvY3dhaXQgaXMgdXBkYXRlZCwgc28g ZG9uJ3QgdXNlIHRoZQ0KKwkgKiBwZXItY3B1IHZlcnNpb24gaGVyZQ0KKwkg Kg0KKwkgKiBtYnN0YXQuc2ZfYWxsb2N3YWl0ID0gbWJwX2xvY2FsLnNmcF9h bGxvY193YWl0czsNCisJICovDQorDQorCXJldHVybiAoU1lTQ1RMX09VVChy ZXEsICZtYnN0YXQsIHNpemVvZihtYnN0YXQpKSk7DQorfQ0KKw0KIC8qDQog ICogQ29uc3RydWN0b3IgZm9yIE1idWYgbWFzdGVyIHpvbmUuDQogICoNCkBA IC0yMTIsNyArMjkxLDEwIEBADQogI2VuZGlmDQogCX0gZWxzZQ0KIAkJbS0+ bV9kYXRhID0gbS0+bV9kYXQ7DQotCW1ic3RhdC5tX21idWZzICs9IDE7CS8q IFhYWCAqLw0KKw0KKwljcml0aWNhbF9lbnRlcigpOw0KKwltYnN0YXRfcGVy Y3B1W2N1cmNwdV0ubWJwX21idWZfYWxsb2NzKys7DQorCWNyaXRpY2FsX2V4 aXQoKTsNCiAJcmV0dXJuICgwKTsNCiB9DQogDQpAQCAtMjI3LDcgKzMwOSw5 IEBADQogCW0gPSAoc3RydWN0IG1idWYgKiltZW07DQogCWlmICgobS0+bV9m bGFncyAmIE1fUEtUSERSKSAhPSAwKQ0KIAkJbV90YWdfZGVsZXRlX2NoYWlu KG0sIE5VTEwpOw0KLQltYnN0YXQubV9tYnVmcyAtPSAxOwkvKiBYWFggKi8N CisJY3JpdGljYWxfZW50ZXIoKTsNCisJbWJzdGF0X3BlcmNwdVtjdXJjcHVd Lm1icF9tYnVmX2ZyZWVzKys7DQorCWNyaXRpY2FsX2V4aXQoKTsNCiB9DQog DQogLyogWFhYIE9ubHkgYmVjYXVzZSBvZiBzdGF0cyAqLw0KQEAgLTIzNSwx MiArMzE5LDE2IEBADQogbWJfZHRvcl9wYWNrKHZvaWQgKm1lbSwgaW50IHNp emUsIHZvaWQgKmFyZykNCiB7DQogCXN0cnVjdCBtYnVmICptOw0KKwl1X2No YXIgY3B1Ow0KIA0KIAltID0gKHN0cnVjdCBtYnVmICopbWVtOw0KIAlpZiAo KG0tPm1fZmxhZ3MgJiBNX1BLVEhEUikgIT0gMCkNCiAJCW1fdGFnX2RlbGV0 ZV9jaGFpbihtLCBOVUxMKTsNCi0JbWJzdGF0Lm1fbWJ1ZnMgLT0gMTsJLyog WFhYICovDQotCW1ic3RhdC5tX21jbHVzdHMgLT0gMTsJLyogWFhYICovDQor CWNyaXRpY2FsX2VudGVyKCk7DQorCWNwdSA9IGN1cmNwdTsNCisJbWJzdGF0 X3BlcmNwdVtjcHVdLm1icF9tYnVmX2ZyZWVzKys7DQorCW1ic3RhdF9wZXJj cHVbY3B1XS5tYnBfY2x1c3RfZnJlZXMrKzsNCisJY3JpdGljYWxfZXhpdCgp Ow0KIH0NCiANCiAvKg0KQEAgLTI2Myw3ICszNTEsOSBAQA0KIAltLT5tX2V4 dC5leHRfc2l6ZSA9IE1DTEJZVEVTOw0KIAltLT5tX2V4dC5leHRfdHlwZSA9 IEVYVF9DTFVTVEVSOw0KIAltLT5tX2V4dC5yZWZfY250ID0gTlVMTDsJLyog TGF6eSBjb3VudGVyIGFzc2lnbi4gKi8NCi0JbWJzdGF0Lm1fbWNsdXN0cyAr PSAxOwkvKiBYWFggKi8NCisJY3JpdGljYWxfZW50ZXIoKTsNCisJbWJzdGF0 X3BlcmNwdVtjdXJjcHVdLm1icF9jbHVzdF9hbGxvY3MrKzsNCisJY3JpdGlj 
YWxfZXhpdCgpOw0KIAlyZXR1cm4gKDApOw0KIH0NCiANCkBAIC0yNzEsNyAr MzYxLDEwIEBADQogc3RhdGljIHZvaWQNCiBtYl9kdG9yX2NsdXN0KHZvaWQg Km1lbSwgaW50IHNpemUsIHZvaWQgKmFyZykNCiB7DQotCW1ic3RhdC5tX21j bHVzdHMgLT0gMTsJLyogWFhYICovDQorDQorCWNyaXRpY2FsX2VudGVyKCk7 DQorCW1ic3RhdF9wZXJjcHVbY3VyY3B1XS5tYnBfY2x1c3RfZnJlZXMrKzsN CisJY3JpdGljYWxfZXhpdCgpOw0KIH0NCiANCiAvKg0KQEAgLTI4OCw3ICsz ODEsOSBAQA0KIAl1bWFfemFsbG9jX2FyZyh6b25lX2NsdXN0LCBtLCBob3cp Ow0KIAlpZiAobS0+bV9leHQuZXh0X2J1ZiA9PSBOVUxMKQ0KIAkJcmV0dXJu IChFTk9NRU0pOw0KLQltYnN0YXQubV9tY2x1c3RzIC09IDE7CS8qIFhYWCAq Lw0KKwljcml0aWNhbF9lbnRlcigpOw0KKwltYnN0YXRfcGVyY3B1W2N1cmNw dV0ubWJwX2NsdXN0X2ZyZWVzKys7DQorCWNyaXRpY2FsX2V4aXQoKTsNCiAJ cmV0dXJuICgwKTsNCiB9DQogDQpAQCAtMzA0LDcgKzM5OSw5IEBADQogCW0g PSAoc3RydWN0IG1idWYgKiltZW07DQogCXVtYV96ZnJlZV9hcmcoem9uZV9j bHVzdCwgbS0+bV9leHQuZXh0X2J1ZiwgTlVMTCk7DQogCW0tPm1fZXh0LmV4 dF9idWYgPSBOVUxMOw0KLQltYnN0YXQubV9tY2x1c3RzICs9IDE7CS8qIFhY WCAqLw0KKwljcml0aWNhbF9lbnRlcigpOw0KKwltYnN0YXRfcGVyY3B1W2N1 cmNwdV0ubWJwX2NsdXN0X2FsbG9jcysrOw0KKwljcml0aWNhbF9leGl0KCk7 DQogfQ0KIA0KIC8qDQpAQCAtMzIwLDYgKzQxNyw3IEBADQogI2VuZGlmDQog CWludCBmbGFnczsNCiAJc2hvcnQgdHlwZTsNCisJdV9jaGFyIGNwdTsNCiAN CiAJbSA9IChzdHJ1Y3QgbWJ1ZiAqKW1lbTsNCiAJYXJncyA9IChzdHJ1Y3Qg bWJfYXJncyAqKWFyZzsNCkBAIC0zNDgsOCArNDQ2LDExIEBADQogCQkJcmV0 dXJuIChlcnJvcik7DQogI2VuZGlmDQogCX0NCi0JbWJzdGF0Lm1fbWJ1ZnMg Kz0gMTsJLyogWFhYICovDQotCW1ic3RhdC5tX21jbHVzdHMgKz0gMTsJLyog WFhYICovDQorCWNyaXRpY2FsX2VudGVyKCk7DQorCWNwdSA9IGN1cmNwdTsN CisJbWJzdGF0X3BlcmNwdVtjcHVdLm1icF9tYnVmX2FsbG9jcysrOw0KKwlt YnN0YXRfcGVyY3B1W2NwdV0ubWJwX2NsdXN0X2FsbG9jcysrOw0KKwljcml0 aWNhbF9leGl0KCk7DQogCXJldHVybiAoMCk7DQogfQ0KIA0KQEAgLTM2OSw3 ICs0NzAsOSBAQA0KIAlXSVRORVNTX1dBUk4oV0FSTl9HSUFOVE9LIHwgV0FS Tl9TTEVFUE9LIHwgV0FSTl9QQU5JQywgTlVMTCwNCiAJICAgICJtYl9yZWNs YWltKCkiKTsNCiANCi0JbWJzdGF0Lm1fZHJhaW4rKzsNCisJY3JpdGljYWxf ZW50ZXIoKTsNCisJbWJzdGF0X3BlcmNwdVtjdXJjcHVdLm1icF9tYnVmX2Ry YWlucysrOw0KKwljcml0aWNhbF9leGl0KCk7DQogCWZvciAoZHAgPSBkb21h 
aW5zOyBkcCAhPSBOVUxMOyBkcCA9IGRwLT5kb21fbmV4dCkNCiAJCWZvciAo cHIgPSBkcC0+ZG9tX3Byb3Rvc3c7IHByIDwgZHAtPmRvbV9wcm90b3N3TlBS T1RPU1c7IHByKyspDQogCQkJaWYgKHByLT5wcl9kcmFpbiAhPSBOVUxMKQ0K LS0tIC8vZGVwb3QvdmVuZG9yL2ZyZWVic2Qvc3JjL3N5cy9rZXJuL3VpcGNf bWJ1Zi5jCTIwMDUvMDMvMTcgMTk6MzU6MTkNCisrKyAvL2RlcG90L3VzZXIv cndhdHNvbi9wZXJjcHUvc3lzL2tlcm4vdWlwY19tYnVmLmMJMjAwNS8wNC8x NSAxMDo1NTo0NA0KQEAgLTM2LDYgKzM2LDkgQEANCiAjaW5jbHVkZSAib3B0 X3BhcmFtLmgiDQogI2luY2x1ZGUgIm9wdF9tYnVmX3N0cmVzc190ZXN0Lmgi DQogDQorLyogTmVlZCBtYnN0YXRfcGVyY3B1IGRlZmluaXRpb24gZnJvbSBt YnVmLmguICovDQorI2RlZmluZQlXQU5UX01CU1RBVF9QRVJDUFUNCisNCiAj aW5jbHVkZSA8c3lzL3BhcmFtLmg+DQogI2luY2x1ZGUgPHN5cy9zeXN0bS5o Pg0KICNpbmNsdWRlIDxzeXMva2VybmVsLmg+DQpAQCAtNDQsOCArNDcsMTAg QEANCiAjaW5jbHVkZSA8c3lzL21hYy5oPg0KICNpbmNsdWRlIDxzeXMvbWFs bG9jLmg+DQogI2luY2x1ZGUgPHN5cy9tYnVmLmg+DQorI2luY2x1ZGUgPHN5 cy9wY3B1Lmg+DQogI2luY2x1ZGUgPHN5cy9zeXNjdGwuaD4NCiAjaW5jbHVk ZSA8c3lzL2RvbWFpbi5oPg0KKyNpbmNsdWRlIDxzeXMvcHJvYy5oPg0KICNp bmNsdWRlIDxzeXMvcHJvdG9zdy5oPg0KICNpbmNsdWRlIDxzeXMvdWlvLmg+ DQogDQpAQCAtNDI4LDEzICs0MzMsMTggQEANCiAJCW0gPSBtLT5tX25leHQ7 DQogCQlucCA9ICZuLT5tX25leHQ7DQogCX0NCi0JaWYgKHRvcCA9PSBOVUxM KQ0KLQkJbWJzdGF0Lm1fbWNmYWlsKys7CS8qIFhYWDogTm8gY29uc2lzdGVu Y3kuICovDQorCWlmICh0b3AgPT0gTlVMTCkgew0KKwkJY3JpdGljYWxfZW50 ZXIoKTsNCisJCW1ic3RhdF9wZXJjcHVbY3VyY3B1XS5tYnBfY29weV9mYWls cysrOw0KKwkJY3JpdGljYWxfZXhpdCgpOw0KKwl9DQogDQogCXJldHVybiAo dG9wKTsNCiBub3NwYWNlOg0KIAltX2ZyZWVtKHRvcCk7DQotCW1ic3RhdC5t X21jZmFpbCsrOwkvKiBYWFg6IE5vIGNvbnNpc3RlbmN5LiAqLw0KKwljcml0 aWNhbF9lbnRlcigpOw0KKwltYnN0YXRfcGVyY3B1W2N1cmNwdV0ubWJwX2Nv cHlfZmFpbHMrKzsNCisJY3JpdGljYWxfZXhpdCgpOw0KIAlyZXR1cm4gKE5V TEwpOw0KIH0NCiANCkBAIC00OTcsNyArNTA3LDkgQEANCiAJcmV0dXJuIHRv cDsNCiBub3NwYWNlOg0KIAltX2ZyZWVtKHRvcCk7DQotCW1ic3RhdC5tX21j ZmFpbCsrOwkvKiBYWFg6IE5vIGNvbnNpc3RlbmN5LiAqLyANCisJY3JpdGlj YWxfZW50ZXIoKTsNCisJbWJzdGF0X3BlcmNwdVtjdXJjcHVdLm1icF9jb3B5 X2ZhaWxzKys7DQorCWNyaXRpY2FsX2V4aXQoKTsNCiAJcmV0dXJuIChOVUxM 
KTsNCiB9DQogDQpAQCAtNjAwLDcgKzYxMiw5IEBADQogDQogbm9zcGFjZToN CiAJbV9mcmVlbSh0b3ApOw0KLQltYnN0YXQubV9tY2ZhaWwrKzsJLyogWFhY OiBObyBjb25zaXN0ZW5jeS4gKi8NCisJY3JpdGljYWxfZW50ZXIoKTsNCisJ bWJzdGF0X3BlcmNwdVtjdXJjcHVdLm1icF9jb3B5X2ZhaWxzKys7DQorCWNy aXRpY2FsX2V4aXQoKTsNCiAJcmV0dXJuIChOVUxMKTsNCiB9DQogDQpAQCAt NzYyLDcgKzc3Niw5IEBADQogCXJldHVybiAobSk7DQogYmFkOg0KIAltX2Zy ZWVtKG4pOw0KLQltYnN0YXQubV9tcGZhaWwrKzsJLyogWFhYOiBObyBjb25z aXN0ZW5jeS4gKi8NCisJY3JpdGljYWxfZW50ZXIoKTsNCisJbWJzdGF0X3Bl cmNwdVtjdXJjcHVdLm1icF9wdWxsdXBfZmFpbHMrKzsNCisJY3JpdGljYWxf ZXhpdCgpOw0KIAlyZXR1cm4gKE5VTEwpOw0KIH0NCiANCi0tLSAvL2RlcG90 L3ZlbmRvci9mcmVlYnNkL3NyYy9zeXMva2Vybi91aXBjX3N5c2NhbGxzLmMJ MjAwNS8wMy8zMSAwNDozNToxNg0KKysrIC8vZGVwb3QvdXNlci9yd2F0c29u L3BlcmNwdS9zeXMva2Vybi91aXBjX3N5c2NhbGxzLmMJMjAwNS8wNC8xNSAx MDo1NTo0NA0KQEAgLTM5LDYgKzM5LDkgQEANCiAjaW5jbHVkZSAib3B0X2t0 cmFjZS5oIg0KICNpbmNsdWRlICJvcHRfbWFjLmgiDQogDQorLyogTmVlZCBt YnN0YXRfcGVyY3B1IGRlZmluaXRpb24gZnJvbSBtYnVmLmguICovDQorI2Rl ZmluZSBXQU5UX01CU1RBVF9QRVJDUFUNCisNCiAjaW5jbHVkZSA8c3lzL3Bh cmFtLmg+DQogI2luY2x1ZGUgPHN5cy9zeXN0bS5oPg0KICNpbmNsdWRlIDxz eXMva2VybmVsLmg+DQpAQCAtMTkyNiw3ICsxOTI5LDkgQEANCiAJCQl2bV9w YWdlX2lvX2ZpbmlzaChwZyk7DQogCQkJaWYgKCFlcnJvcikNCiAJCQkJVk1f T0JKRUNUX1VOTE9DSyhvYmopOw0KLQkJCW1ic3RhdC5zZl9pb2NudCsrOw0K KwkJCWNyaXRpY2FsX2VudGVyKCk7DQorCQkJbWJzdGF0X3BlcmNwdVtjdXJj cHVdLnNmcF9pb2NudCsrOw0KKwkJCWNyaXRpY2FsX2V4aXQoKTsNCiAJCX0N CiAJDQogCQlpZiAoZXJyb3IpIHsNCkBAIC0xOTU0LDcgKzE5NTksOSBAQA0K IAkJICogYnV0IHRoaXMgd2FpdCBjYW4gYmUgaW50ZXJydXB0ZWQuDQogCQkg Ki8NCiAJCWlmICgoc2YgPSBzZl9idWZfYWxsb2MocGcsIFNGQl9DQVRDSCkp ID09IE5VTEwpIHsNCi0JCQltYnN0YXQuc2ZfYWxsb2NmYWlsKys7DQorCQkJ Y3JpdGljYWxfZW50ZXIoKTsNCisJCQltYnN0YXRfcGVyY3B1W2N1cmNwdV0u c2ZwX2FsbG9jX2ZhaWxzKys7DQorCQkJY3JpdGljYWxfZXhpdCgpOw0KIAkJ CXZtX3BhZ2VfbG9ja19xdWV1ZXMoKTsNCiAJCQl2bV9wYWdlX3Vud2lyZShw ZywgMCk7DQogCQkJaWYgKHBnLT53aXJlX2NvdW50ID09IDAgJiYgcGctPm9i amVjdCA9PSBOVUxMKQ0KLS0tIC8vZGVwb3QvdmVuZG9yL2ZyZWVic2Qvc3Jj 
L3N5cy9zeXMvbWJ1Zi5oCTIwMDUvMDMvMTcgMTk6MzU6MTkNCisrKyAvL2Rl cG90L3VzZXIvcndhdHNvbi9wZXJjcHUvc3lzL3N5cy9tYnVmLmgJMjAwNS8w NC8xNSAxMDo1NTo0NA0KQEAgLTI0Myw2ICsyNDMsMjkgQEANCiAjZGVmaW5l CU1UX05UWVBFUwkxNgkvKiBudW1iZXIgb2YgbWJ1ZiB0eXBlcyBmb3IgbWJ0 eXBlc1tdICovDQogDQogLyoNCisgKiBQZXItQ1BVIG1idWYgYWxsb2NhdG9y IHN0YXRpc3RpY3MsIHdoaWNoIGFyZSBjb2xsYXRlZCB0byBjb25zdHJ1Y3Qg dGhlDQorICogZ2xvYmFsIHN0YXRpc3RpY3MuICBUaGV5IGFyZSByZWFkIGxv Y2tsZXNzLCBidXQgd3JpdHRlbiB0byB3aGlsZSBpbiBhDQorICogY3JpdGlj YWwgc2VjdGlvbiB0byBwcmV2ZW50IHJlYWQtbW9kaWZ5LXdyaXRlIHJhY2Vz Lg0KKyAqDQorICogWFhYUlc6IEFzIHdpdGggY29tbWVudHMgYmVsb3csIG1h eWJlIHNlbmRmaWxlIHN0YXRzIHNob3VsZCBiZSBlbHNlc2V3aGVyZS4NCisg Ki8NCitzdHJ1Y3QgbWJzdGF0X3BlcmNwdSB7DQorCXVfbG9uZwltYnBfbWJ1 Zl9hbGxvY3M7CS8qIG1idWZzIGFsbG9jJ2Qgb24gQ1BVLiAqLw0KKwl1X2xv bmcJbWJwX21idWZfZnJlZXM7CQkvKiBtYnVmcyBmcmVlZCBvbiBDUFUuICov DQorCXVfbG9uZwltYnBfbWJ1Zl9mYWlsczsJCS8qIG1idWYgYWxsb2MgZmFp bHVyZXMgb24gQ1BVLiAqLw0KKwl1X2xvbmcJbWJwX21idWZfZHJhaW5zOwkv KiBtYnVmIGRyYWlucyBvbiBDUFUgLiovDQorCXVfbG9uZwltYnBfY2x1c3Rf YWxsb2NzOwkvKiBjbHVzdGVycyBhbGxvYydkIG9uIENQVS4gKi8NCisJdV9s b25nCW1icF9jbHVzdF9mcmVlczsJLyogY2x1c3RlcnMgZnJlZWQgb24gQ1BV LiAqLw0KKw0KKwl1X2xvbmcJbWJwX2NvcHlfZmFpbHM7CQkvKiBtYnVmIGNv cHkgZmFpbHVyZXMgb24gQ1BVLiAqLw0KKwl1X2xvbmcJbWJwX3B1bGx1cF9m YWlsczsJLyogbWJ1ZiBwdWxsdXAgZmFpbHVyZXMgb24gQ1BVLiAqLw0KKw0K Kwl1X2xvbmcJc2ZwX2lvY250OwkJLyogc2VuZGZpbGUgSS9PJ3Mgb24gQ1BV LiAqLw0KKwl1X2xvbmcJc2ZwX2FsbG9jX2ZhaWxzOwkvKiBzZW5kZmlsZSBh bGxvYyBmYWlsdXJlcyBvbiBDUFUuICovDQorCXVfbG9uZwlzZnBfYWxsb2Nf d2FpdHM7CS8qIHNlbmRmaWxlIGFsbG9jIHdhaXRzIG9uIENQVS4gKi8NCit9 Ow0KKw0KKy8qDQogICogR2VuZXJhbCBtYnVmIGFsbG9jYXRvciBzdGF0aXN0 aWNzIHN0cnVjdHVyZS4NCiAgKi8NCiBzdHJ1Y3QgbWJzdGF0IHsNCkBAIC01 NTAsNiArNTczLDE1IEBADQogZXh0ZXJuCXN0cnVjdCBtYnN0YXQgbWJzdGF0 OwkJLyogR2VuZXJhbCBtYnVmIHN0YXRzL2luZm9zICovDQogZXh0ZXJuCWlu dCBubWJjbHVzdGVyczsJCS8qIE1heGltdW0gbnVtYmVyIG9mIGNsdXN0ZXJz ICovDQogDQorLyoNCisgKiBBdm9pZCBleHBvc2luZyBQRVJDUFUgZGVmaW5p 
dGlvbiBvdXRzaWRlIG9mIGEgdmVyeSBsaW1pdGVkIHNldCBvZiBmaWxlcywN CisgKiBzbyB0aGF0IHRoZSBjb21waWxlLXRpbWUgdmFsdWUgb2YgUEVSQ1BV IGRvZXNuJ3QgYmVjb21lIHBhcnQgb2YgdGhlDQorICogZXhwb3NlZCBrZXJu ZWwgQUJJLg0KKyAqLw0KKyNpZmRlZiBXQU5UX01CU1RBVF9QRVJDUFUNCitl eHRlcm4Jc3RydWN0IG1ic3RhdF9wZXJjcHUgbWJzdGF0X3BlcmNwdVtNQVhD UFVdOw0KKyNlbmRpZg0KKw0KIHN0cnVjdCB1aW87DQogDQogdm9pZAkJIG1f YWRqKHN0cnVjdCBtYnVmICosIGludCk7DQotLS0gLy9kZXBvdC92ZW5kb3Iv ZnJlZWJzZC9zcmMvc3lzL3N5cy9wY3B1LmgJMjAwNS8wMS8wNyAwMjozMjox Ng0KKysrIC8vZGVwb3QvdXNlci9yd2F0c29uL3BlcmNwdS9zeXMvc3lzL3Bj cHUuaAkyMDA1LzA0LzE1IDEwOjU1OjQ0DQpAQCAtODEsNiArODEsNyBAQA0K IGV4dGVybiBzdHJ1Y3QgY3B1aGVhZCBjcHVoZWFkOw0KIA0KICNkZWZpbmUJ Q1VSUFJPQwkJKGN1cnRocmVhZC0+dGRfcHJvYykNCisjZGVmaW5lCWN1cmNw dQkJKGN1cnRocmVhZC0+dGRfb25jcHUpDQogI2RlZmluZQljdXJrc2UJCShj dXJ0aHJlYWQtPnRkX2tzZSkNCiAjZGVmaW5lCWN1cmtzZWdycAkoY3VydGhy ZWFkLT50ZF9rc2VncnApDQogI2RlZmluZQljdXJwcm9jCQkoY3VydGhyZWFk LT50ZF9wcm9jKQ0K --0-136826264-1113748310=:85588 Content-Type: TEXT/PLAIN; charset=US-ASCII; name=malloc.diff Content-Transfer-Encoding: BASE64 Content-ID: <20050417153150.W85588@fledge.watson.org> Content-Description: Content-Disposition: attachment; filename=malloc.diff LS0tIC8vZGVwb3QvdmVuZG9yL2ZyZWVic2Qvc3JjL3N5cy9rZXJuL2tlcm5f bWFsbG9jLmMJMjAwNS8wNC8xMiAyMzo1NTozOA0KKysrIC8vZGVwb3QvdXNl ci9yd2F0c29uL3BlcmNwdS9zeXMva2Vybi9rZXJuX21hbGxvYy5jCTIwMDUv MDQvMTQgMjI6Mzg6MTYNCkBAIC0xLDQgKzEsNSBAQA0KIC8qLQ0KKyAqIENv cHlyaWdodCAoYykgMjAwNSBSb2JlcnQgTi4gTS4gV2F0c29uDQogICogQ29w eXJpZ2h0IChjKSAxOTg3LCAxOTkxLCAxOTkzDQogICoJVGhlIFJlZ2VudHMg b2YgdGhlIFVuaXZlcnNpdHkgb2YgQ2FsaWZvcm5pYS4gIEFsbCByaWdodHMg cmVzZXJ2ZWQuDQogICoNCkBAIC00NCw2ICs0NSw3IEBADQogI2luY2x1ZGUg PHN5cy9tdXRleC5oPg0KICNpbmNsdWRlIDxzeXMvdm1tZXRlci5oPg0KICNp bmNsdWRlIDxzeXMvcHJvYy5oPg0KKyNpbmNsdWRlIDxzeXMvc2J1Zi5oPg0K ICNpbmNsdWRlIDxzeXMvc3lzY3RsLmg+DQogI2luY2x1ZGUgPHN5cy90aW1l Lmg+DQogDQpAQCAtMTMzLDYgKzEzNSwzMyBAQA0KIAl7MCwgTlVMTH0sDQog fTsNCiANCisvKg0KKyAqIFR3byBtYWxsb2MgdHlwZSBzdHJ1Y3R1cmVzIGFy 
ZSBwcmVzZW50OiBtYWxsb2NfdHlwZSwgd2hpY2ggaXMgdXNlZCBieSBhDQor ICogdHlwZSBvd25lciB0byBkZWNsYXJlIHRoZSB0eXBlLCBhbmQgbWFsbG9j X3R5cGVfaW50ZXJuYWwsIHdoaWNoIGhvbGRzDQorICogbWFsbG9jLW93bmVk IHN0YXRpc3RpY3MgYW5kIG90aGVyIEFCSS1zZW5zaXRpdmUgZmllbGRzLCBz dWNoIGFzIHRoZSBzZXQgb2YNCisgKiBtYWxsb2Mgc3RhdGlzdGljcyBpbmRl eGVkIGJ5IHRoZSBjb21waWxlLXRpbWUgTUFYQ1BVIGNvbnN0YW50Lg0KKyAq DQorICogVGhlIG1hbGxvY190eXBlIGtzX25leHQgZmllbGQgaXMgcHJvdGVj dGVkIGJ5IG1hbGxvY19tdHguICBPdGhlciBmaWVsZHMgaW4NCisgKiBtYWxs b2NfdHlwZSBhcmUgc3RhdGljIGFmdGVyIGluaXRpYWxpemF0aW9uIHNvIHVu c3luY2hyb25pemVkLg0KKyAqDQorICogU3RhdGlzdGljcyBpbiBtYWxsb2Nf dHlwZV9zdGF0cyBhcmUgd3JpdHRlbiBvbmx5IHdoZW4gaG9sZGluZyBhIGNy aXRpY2FsDQorICogc2VjdGlvbiwgYnV0IHJlYWQgbG9jay1mcmVlIHJlc3Vs dGluZyBpbiBwb3NzaWJsZSAobWlub3IpIHJhY2VzLCB3aGljaCB0aGUNCisg KiBtb25pdG9yaW5nIGFwcCBzaG91bGQgdGFrZSBpbnRvIGFjY291bnQuDQor ICovDQorc3RydWN0IG1hbGxvY190eXBlX3N0YXRzIHsNCisJdV9sb25nCQlt dHNfbWVtYWxsb2NlZDsJLyogQnl0ZXMgYWxsb2NhdGVkIG9uIENQVS4gKi8N CisJdV9sb25nCQltdHNfbWVtZnJlZWQ7CS8qIEJ5dGVzIGZyZWVkIG9uIENQ VS4gKi8NCisJdV9sb25nCQltdHNfbnVtYWxsb2NzOwkvKiBOdW1iZXIgb2Yg YWxsb2NhdGVzIG9uIENQVS4gKi8NCisJdV9sb25nCQltdHNfbnVtZnJlZXM7 CS8qIE51bWJlciBvZiBmcmVlcyBvbiBDUFUuICovDQorCXVfbG9uZwkJbXRz X3NpemU7CS8qIEJpdG1hc2sgb2Ygc2l6ZXMgYWxsb2NhdGVkIG9uIENQVS4g Ki8NCit9Ow0KKw0KK3N0cnVjdCBtYWxsb2NfdHlwZV9pbnRlcm5hbCB7DQor CXN0cnVjdCBtYWxsb2NfdHlwZV9zdGF0cwkgbXRpX3N0YXRzW01BWENQVV07 DQorfTsNCisNCit1bWFfem9uZV90IG10X3pvbmU7DQorDQogI2lmZGVmIERF QlVHX01FTUdVQVJEDQogdV9pbnQgdm1fbWVtZ3VhcmRfZGl2aXNvcjsNCiBT WVNDVExfVUlOVChfdm0sIE9JRF9BVVRPLCBtZW1ndWFyZF9kaXZpc29yLCBD VExGTEFHX1JELCAmdm1fbWVtZ3VhcmRfZGl2aXNvciwNCkBAIC0xOTcsNDEg KzIyNiw0OCBAQA0KICAqIEFkZCB0aGlzIHRvIHRoZSBpbmZvcm1hdGlvbmFs IG1hbGxvY190eXBlIGJ1Y2tldC4NCiAgKi8NCiBzdGF0aWMgdm9pZA0KLW1h bGxvY190eXBlX3pvbmVfYWxsb2NhdGVkKHN0cnVjdCBtYWxsb2NfdHlwZSAq a3NwLCB1bnNpZ25lZCBsb25nIHNpemUsDQorbWFsbG9jX3R5cGVfem9uZV9h bGxvY2F0ZWQoc3RydWN0IG1hbGxvY190eXBlICp0eXBlLCB1bnNpZ25lZCBs 
b25nIHNpemUsDQogICAgIGludCB6aW5keCkNCiB7DQotCW10eF9sb2NrKCZr c3AtPmtzX210eCk7DQotCWtzcC0+a3NfY2FsbHMrKzsNCisJc3RydWN0IG1h bGxvY190eXBlX2ludGVybmFsICptdGk7DQorCXN0cnVjdCBtYWxsb2NfdHlw ZV9zdGF0cyAqbXRzOw0KKwl1X2NoYXIgY3B1Ow0KKw0KKwljcml0aWNhbF9l bnRlcigpOw0KKwljcHUgPSBjdXJ0aHJlYWQtPnRkX29uY3B1Ow0KKwltdGkg PSAoc3RydWN0IG1hbGxvY190eXBlX2ludGVybmFsICopKHR5cGUtPmtzX2hh bmRsZSk7DQorCW10cyA9ICZtdGktPm10aV9zdGF0c1tjcHVdOw0KKwltdHMt Pm10c19tZW1hbGxvY2VkICs9IHNpemU7DQorCW10cy0+bXRzX251bWFsbG9j cysrOw0KIAlpZiAoemluZHggIT0gLTEpDQotCQlrc3AtPmtzX3NpemUgfD0g MSA8PCB6aW5keDsNCi0JaWYgKHNpemUgIT0gMCkgew0KLQkJa3NwLT5rc19t ZW11c2UgKz0gc2l6ZTsNCi0JCWtzcC0+a3NfaW51c2UrKzsNCi0JCWlmIChr c3AtPmtzX21lbXVzZSA+IGtzcC0+a3NfbWF4dXNlZCkNCi0JCQlrc3AtPmtz X21heHVzZWQgPSBrc3AtPmtzX21lbXVzZTsNCi0JfQ0KLQltdHhfdW5sb2Nr KCZrc3AtPmtzX210eCk7DQorCQltdHMtPm10c19zaXplIHw9IDEgPDwgemlu ZHg7DQorCWNyaXRpY2FsX2V4aXQoKTsNCiB9DQogDQogdm9pZA0KLW1hbGxv Y190eXBlX2FsbG9jYXRlZChzdHJ1Y3QgbWFsbG9jX3R5cGUgKmtzcCwgdW5z aWduZWQgbG9uZyBzaXplKQ0KK21hbGxvY190eXBlX2FsbG9jYXRlZChzdHJ1 Y3QgbWFsbG9jX3R5cGUgKnR5cGUsIHVuc2lnbmVkIGxvbmcgc2l6ZSkNCiB7 DQotCW1hbGxvY190eXBlX3pvbmVfYWxsb2NhdGVkKGtzcCwgc2l6ZSwgLTEp Ow0KKw0KKwltYWxsb2NfdHlwZV96b25lX2FsbG9jYXRlZCh0eXBlLCBzaXpl LCAtMSk7DQogfQ0KIA0KIC8qDQogICogUmVtb3ZlIHRoaXMgYWxsb2NhdGlv biBmcm9tIHRoZSBpbmZvcm1hdGlvbmFsIG1hbGxvY190eXBlIGJ1Y2tldC4N CiAgKi8NCiB2b2lkDQotbWFsbG9jX3R5cGVfZnJlZWQoc3RydWN0IG1hbGxv Y190eXBlICprc3AsIHVuc2lnbmVkIGxvbmcgc2l6ZSkNCittYWxsb2NfdHlw ZV9mcmVlZChzdHJ1Y3QgbWFsbG9jX3R5cGUgKnR5cGUsIHVuc2lnbmVkIGxv bmcgc2l6ZSkNCiB7DQotCW10eF9sb2NrKCZrc3AtPmtzX210eCk7DQotCUtB U1NFUlQoc2l6ZSA8PSBrc3AtPmtzX21lbXVzZSwNCi0JCSgibWFsbG9jKDkp L2ZyZWUoOSkgY29uZnVzaW9uLlxuJXMiLA0KLQkJICJQcm9iYWJseSBmcmVl aW5nIHdpdGggd3JvbmcgdHlwZSwgYnV0IG1heWJlIG5vdCBoZXJlLiIpKTsN Ci0Ja3NwLT5rc19tZW11c2UgLT0gc2l6ZTsNCi0Ja3NwLT5rc19pbnVzZS0t Ow0KLQltdHhfdW5sb2NrKCZrc3AtPmtzX210eCk7DQorCXN0cnVjdCBtYWxs b2NfdHlwZV9pbnRlcm5hbCAqbXRpOw0KKwlzdHJ1Y3QgbWFsbG9jX3R5cGVf 
c3RhdHMgKm10czsNCisJdV9jaGFyIGNwdTsNCisNCisJY3JpdGljYWxfZW50 ZXIoKTsNCisJY3B1ID0gY3VydGhyZWFkLT50ZF9vbmNwdTsNCisJbXRpID0g KHN0cnVjdCBtYWxsb2NfdHlwZV9pbnRlcm5hbCAqKXR5cGUtPmtzX2hhbmRs ZTsNCisJbXRzID0gJm10aS0+bXRpX3N0YXRzW2NwdV07DQorCW10cy0+bXRz X21lbWZyZWVkICs9IHNpemU7DQorCW10cy0+bXRzX251bWZyZWVzKys7DQor CWNyaXRpY2FsX2V4aXQoKTsNCiB9DQogDQogLyoNCkBAIC0zNTEsOSArMzg3 LDYgQEANCiAJfQ0KICNlbmRpZg0KIA0KLQlLQVNTRVJUKHR5cGUtPmtzX21l bXVzZSA+IDAsDQotCQkoIm1hbGxvYyg5KS9mcmVlKDkpIGNvbmZ1c2lvbi5c biVzIiwNCi0JCSAiUHJvYmFibHkgZnJlZWluZyB3aXRoIHdyb25nIHR5cGUs IGJ1dCBtYXliZSBub3QgaGVyZS4iKSk7DQogCXNpemUgPSAwOw0KIA0KIAlz bGFiID0gdnRvc2xhYigodm1fb2Zmc2V0X3QpYWRkciAmICh+VU1BX1NMQUJf TUFTSykpOw0KQEAgLTQwNSw2ICs0MzgsMTEgQEANCiAJaWYgKGFkZHIgPT0g TlVMTCkNCiAJCXJldHVybiAobWFsbG9jKHNpemUsIHR5cGUsIGZsYWdzKSk7 DQogDQorCS8qDQorCSAqIFhYWDogU2hvdWxkIHJlcG9ydCBmcmVlIG9mIG9s ZCBtZW1vcnkgYW5kIGFsbG9jIG9mIG5ldyBtZW1vcnkgdG8NCisJICogcGVy LUNQVSBzdGF0cy4NCisJICovDQorDQogI2lmZGVmIERFQlVHX01FTUdVQVJE DQogLyogWFhYOiBDSEFOR0VNRSEgKi8NCiBpZiAodHlwZSA9PSBNX1NVQlBS T0MpIHsNCkBAIC01NDMsNiArNTgxLDEzIEBADQogDQogCXVtYV9zdGFydHVw MigpOw0KIA0KKwltdF96b25lID0gdW1hX3pjcmVhdGUoIm10X3pvbmUiLCBz aXplb2Yoc3RydWN0IG1hbGxvY190eXBlX2ludGVybmFsKSwNCisjaWZkZWYg SU5WQVJJQU5UUw0KKwkJICAgIG10cmFzaF9jdG9yLCBtdHJhc2hfZHRvciwg bXRyYXNoX2luaXQsIG10cmFzaF9maW5pLA0KKyNlbHNlDQorCQkgICAgTlVM TCwgTlVMTCwgTlVMTCwgTlVMTCwNCisjZW5kaWYNCisJICAgIFVNQV9BTElH Tl9QVFIsIFVNQV9aT05FX01BTExPQyk7DQogCWZvciAoaSA9IDAsIGluZHgg PSAwOyBrbWVtem9uZXNbaW5keF0ua3pfc2l6ZSAhPSAwOyBpbmR4KyspIHsN CiAJCWludCBzaXplID0ga21lbXpvbmVzW2luZHhdLmt6X3NpemU7DQogCQlj aGFyICpuYW1lID0ga21lbXpvbmVzW2luZHhdLmt6X25hbWU7DQpAQCAtNTYy LDEyNyArNjA3LDE0MiBAQA0KIH0NCiANCiB2b2lkDQotbWFsbG9jX2luaXQo dm9pZCAqZGF0YSkNCittYWxsb2NfaW5pdCh2b2lkICp0eXBlKQ0KIHsNCi0J c3RydWN0IG1hbGxvY190eXBlICp0eXBlID0gKHN0cnVjdCBtYWxsb2NfdHlw ZSAqKWRhdGE7DQorCXN0cnVjdCBtYWxsb2NfdHlwZV9pbnRlcm5hbCAqbXRp Ow0KKwlzdHJ1Y3QgbWFsbG9jX3R5cGUgKm10Ow0KIA0KLQltdHhfbG9jaygm 
bWFsbG9jX210eCk7DQotCWlmICh0eXBlLT5rc19tYWdpYyAhPSBNX01BR0lD KQ0KLQkJcGFuaWMoIm1hbGxvYyB0eXBlIGxhY2tzIG1hZ2ljIik7DQorCUtB U1NFUlQoY250LnZfcGFnZV9jb3VudCAhPSAwLCAoIm1hbGxvY19yZWdpc3Rl ciBiZWZvcmUgdm1faW5pdCIpKTsNCiANCi0JaWYgKGNudC52X3BhZ2VfY291 bnQgPT0gMCkNCi0JCXBhbmljKCJtYWxsb2NfaW5pdCBub3QgYWxsb3dlZCBi ZWZvcmUgdm0gaW5pdCIpOw0KKwltdCA9IHR5cGU7DQorCW10aSA9IHVtYV96 YWxsb2MobXRfem9uZSwgTV9XQUlUT0sgfCBNX1pFUk8pOw0KKwltdC0+a3Nf aGFuZGxlID0gbXRpOw0KIA0KLQlpZiAodHlwZS0+a3NfbmV4dCAhPSBOVUxM KQ0KLQkJcmV0dXJuOw0KLQ0KLQl0eXBlLT5rc19uZXh0ID0ga21lbXN0YXRp c3RpY3M7CQ0KKwltdHhfbG9jaygmbWFsbG9jX210eCk7DQorCW10LT5rc19u ZXh0ID0ga21lbXN0YXRpc3RpY3M7DQogCWttZW1zdGF0aXN0aWNzID0gdHlw ZTsNCi0JbXR4X2luaXQoJnR5cGUtPmtzX210eCwgdHlwZS0+a3Nfc2hvcnRk ZXNjLCAiTWFsbG9jIFN0YXRzIiwgTVRYX0RFRik7DQogCW10eF91bmxvY2so Jm1hbGxvY19tdHgpOw0KIH0NCiANCiB2b2lkDQotbWFsbG9jX3VuaW5pdCh2 b2lkICpkYXRhKQ0KK21hbGxvY191bmluaXQodm9pZCAqdHlwZSkNCiB7DQot CXN0cnVjdCBtYWxsb2NfdHlwZSAqdHlwZSA9IChzdHJ1Y3QgbWFsbG9jX3R5 cGUgKilkYXRhOw0KLQlzdHJ1Y3QgbWFsbG9jX3R5cGUgKnQ7DQorCXN0cnVj dCBtYWxsb2NfdHlwZV9pbnRlcm5hbCAqbXRpOw0KKwlzdHJ1Y3QgbWFsbG9j X3R5cGUgKm10LCAqdGVtcDsNCiANCisJbXQgPSB0eXBlOw0KKwlLQVNTRVJU KG10LT5rc19oYW5kbGUgIT0gTlVMTCwgKCJtYWxsb2NfZGVyZWdpc3Rlcjog Y29va2llIE5VTEwiKSk7DQogCW10eF9sb2NrKCZtYWxsb2NfbXR4KTsNCi0J bXR4X2xvY2soJnR5cGUtPmtzX210eCk7DQotCWlmICh0eXBlLT5rc19tYWdp YyAhPSBNX01BR0lDKQ0KLQkJcGFuaWMoIm1hbGxvYyB0eXBlIGxhY2tzIG1h Z2ljIik7DQotDQotCWlmIChjbnQudl9wYWdlX2NvdW50ID09IDApDQotCQlw YW5pYygibWFsbG9jX3VuaW5pdCBub3QgYWxsb3dlZCBiZWZvcmUgdm0gaW5p dCIpOw0KLQ0KLQlpZiAodHlwZSA9PSBrbWVtc3RhdGlzdGljcykNCi0JCWtt ZW1zdGF0aXN0aWNzID0gdHlwZS0+a3NfbmV4dDsNCi0JZWxzZSB7DQotCQlm b3IgKHQgPSBrbWVtc3RhdGlzdGljczsgdC0+a3NfbmV4dCAhPSBOVUxMOyB0 ID0gdC0+a3NfbmV4dCkgew0KLQkJCWlmICh0LT5rc19uZXh0ID09IHR5cGUp IHsNCi0JCQkJdC0+a3NfbmV4dCA9IHR5cGUtPmtzX25leHQ7DQotCQkJCWJy ZWFrOw0KLQkJCX0NCisJbXRpID0gbXQtPmtzX2hhbmRsZTsNCisJbXQtPmtz X2hhbmRsZSA9IE5VTEw7DQorCWlmIChtdCAhPSBrbWVtc3RhdGlzdGljcykg 
ew0KKwkJZm9yICh0ZW1wID0ga21lbXN0YXRpc3RpY3M7IHRlbXAgIT0gTlVM TDsNCisJCSAgICB0ZW1wID0gdGVtcC0+a3NfbmV4dCkgew0KKwkJCWlmICh0 ZW1wLT5rc19uZXh0ID09IG10KQ0KKwkJCQl0ZW1wLT5rc19uZXh0ID0gbXQt PmtzX25leHQ7DQogCQl9DQotCX0NCi0JdHlwZS0+a3NfbmV4dCA9IE5VTEw7 DQotCW10eF9kZXN0cm95KCZ0eXBlLT5rc19tdHgpOw0KKwl9IGVsc2UNCisJ CWttZW1zdGF0aXN0aWNzID0gbXQtPmtzX25leHQ7DQogCW10eF91bmxvY2so Jm1hbGxvY19tdHgpOw0KKwl1bWFfemZyZWUobXRfem9uZSwgdHlwZSk7DQog fQ0KIA0KIHN0YXRpYyBpbnQNCiBzeXNjdGxfa2Vybl9tYWxsb2MoU1lTQ1RM X0hBTkRMRVJfQVJHUykNCiB7DQorCXN0cnVjdCBtYWxsb2NfdHlwZV9zdGF0 cyAqbXRzLCBtdHNfbG9jYWw7DQorCXN0cnVjdCBtYWxsb2NfdHlwZV9pbnRl cm5hbCAqbXRpOw0KKwlsb25nIHRlbXBfYWxsb2NzLCB0ZW1wX2J5dGVzOw0K IAlzdHJ1Y3QgbWFsbG9jX3R5cGUgKnR5cGU7DQogCWludCBsaW5lc2l6ZSA9 IDEyODsNCi0JaW50IGN1cmxpbmU7DQorCXN0cnVjdCBzYnVmIHNidWY7DQog CWludCBidWZzaXplOw0KIAlpbnQgZmlyc3Q7DQogCWludCBlcnJvcjsNCiAJ Y2hhciAqYnVmOw0KLQljaGFyICpwOw0KIAlpbnQgY250Ow0KLQlpbnQgbGVu Ow0KIAlpbnQgaTsNCiANCiAJY250ID0gMDsNCiANCisJLyogR3Vlc3MgYXQg aG93IG11Y2ggcm9vbSBpcyBuZWVkZWQuICovDQogCW10eF9sb2NrKCZtYWxs b2NfbXR4KTsNCiAJZm9yICh0eXBlID0ga21lbXN0YXRpc3RpY3M7IHR5cGUg IT0gTlVMTDsgdHlwZSA9IHR5cGUtPmtzX25leHQpDQogCQljbnQrKzsNCisJ bXR4X3VubG9jaygmbWFsbG9jX210eCk7DQogDQotCW10eF91bmxvY2soJm1h bGxvY19tdHgpOw0KIAlidWZzaXplID0gbGluZXNpemUgKiAoY250ICsgMSk7 DQotCXAgPSBidWYgPSAoY2hhciAqKW1hbGxvYyhidWZzaXplLCBNX1RFTVAs IE1fV0FJVE9LfE1fWkVSTyk7DQorCWJ1ZiA9IChjaGFyICopbWFsbG9jKGJ1 ZnNpemUsIE1fVEVNUCwgTV9XQUlUT0t8TV9aRVJPKTsNCisJc2J1Zl9uZXco JnNidWYsIGJ1ZiwgYnVmc2l6ZSwgU0JVRl9GSVhFRExFTik7DQorDQogCW10 eF9sb2NrKCZtYWxsb2NfbXR4KTsNCiANCi0JbGVuID0gc25wcmludGYocCwg bGluZXNpemUsDQorDQorCXNidWZfcHJpbnRmKCZzYnVmLA0KIAkgICAgIlxu ICAgICAgICBUeXBlICBJblVzZSBNZW1Vc2UgSGlnaFVzZSBSZXF1ZXN0cyAg U2l6ZShzKVxuIik7DQotCXAgKz0gbGVuOw0KLQ0KIAlmb3IgKHR5cGUgPSBr bWVtc3RhdGlzdGljczsgY250ICE9IDAgJiYgdHlwZSAhPSBOVUxMOw0KIAkg ICAgdHlwZSA9IHR5cGUtPmtzX25leHQsIGNudC0tKSB7DQotCQlpZiAodHlw ZS0+a3NfY2FsbHMgPT0gMCkNCisJCW10aSA9IHR5cGUtPmtzX2hhbmRsZTsN 
CisJCWJ6ZXJvKCZtdHNfbG9jYWwsIHNpemVvZihtdHNfbG9jYWwpKTsNCisJ CWZvciAoaSA9IDA7IGkgPCBNQVhDUFU7IGkrKykgew0KKwkJCW10cyA9ICZt dGktPm10aV9zdGF0c1tpXTsNCisJCQltdHNfbG9jYWwubXRzX21lbWFsbG9j ZWQgKz0gbXRzLT5tdHNfbWVtYWxsb2NlZDsNCisJCQltdHNfbG9jYWwubXRz X21lbWZyZWVkICs9IG10cy0+bXRzX21lbWZyZWVkOw0KKwkJCW10c19sb2Nh bC5tdHNfbnVtYWxsb2NzICs9IG10cy0+bXRzX251bWFsbG9jczsNCisJCQlt dHNfbG9jYWwubXRzX251bWZyZWVzICs9IG10cy0+bXRzX251bWZyZWVzOw0K KwkJCW10c19sb2NhbC5tdHNfc2l6ZSB8PSBtdHMtPm10c19zaXplOw0KKwkJ fQ0KKwkJaWYgKG10c19sb2NhbC5tdHNfbnVtYWxsb2NzID09IDApDQogCQkJ Y29udGludWU7DQogDQotCQljdXJsaW5lID0gbGluZXNpemUgLSAyOwkvKiBM ZWF2ZSByb29tIGZvciB0aGUgXG4gKi8NCi0JCWxlbiA9IHNucHJpbnRmKHAs IGN1cmxpbmUsICIlMTNzJTZsdSU2bHVLJTdsdUslOWxsdSIsDQotCQkJdHlw ZS0+a3Nfc2hvcnRkZXNjLA0KLQkJCXR5cGUtPmtzX2ludXNlLA0KLQkJCSh0 eXBlLT5rc19tZW11c2UgKyAxMDIzKSAvIDEwMjQsDQotCQkJKHR5cGUtPmtz X21heHVzZWQgKyAxMDIzKSAvIDEwMjQsDQotCQkJKGxvbmcgbG9uZyB1bnNp Z25lZCl0eXBlLT5rc19jYWxscyk7DQotCQljdXJsaW5lIC09IGxlbjsNCi0J CXAgKz0gbGVuOw0KKwkJLyoNCisJCSAqIER1ZSB0byByYWNlcyBpbiBwZXIt Q1BVIHN0YXRpc3RpY3MgZ2F0aGVyLCBpdCdzIHBvc3NpYmxlIHRvDQorCQkg KiBnZXQgYSBzbGlnaHRseSBuZWdhdGl2ZSBudW1iZXIgaGVyZS4gIElmIHdl IGRvLCBhcHByb3hpbWF0ZQ0KKwkJICogd2l0aCAwLg0KKwkJICovDQorCQlp ZiAobXRzX2xvY2FsLm10c19udW1hbGxvY3MgPiBtdHNfbG9jYWwubXRzX251 bWZyZWVzKQ0KKwkJCXRlbXBfYWxsb2NzID0gbXRzX2xvY2FsLm10c19udW1h bGxvY3MgLQ0KKwkJCSAgICBtdHNfbG9jYWwubXRzX251bWZyZWVzOw0KKwkJ ZWxzZQ0KKwkJCXRlbXBfYWxsb2NzID0gMDsNCisNCisJCS8qDQorCQkgKiBE aXR0byBmb3IgYnl0ZXMgYWxsb2NhdGVkLg0KKwkJICovDQorCQlpZiAobXRz X2xvY2FsLm10c19tZW1hbGxvY2VkID4gbXRzX2xvY2FsLm10c19tZW1mcmVl ZCkNCisJCQl0ZW1wX2J5dGVzID0gbXRzX2xvY2FsLm10c19tZW1hbGxvY2Vk IC0NCisJCQkgICAgbXRzX2xvY2FsLm10c19tZW1mcmVlZDsNCisJCWVsc2UN CisJCQl0ZW1wX2J5dGVzID0gMDsNCisNCisJCXNidWZfcHJpbnRmKCZzYnVm LCAiJTEzcyU2bHUlNmx1SyU3bHVLJTlsdSIsDQorCQkgICAgdHlwZS0+a3Nf c2hvcnRkZXNjLA0KKwkJICAgIHRlbXBfYWxsb2NzLA0KKwkJICAgICh0ZW1w X2J5dGVzICsgMTAyMykgLyAxMDI0LA0KKwkJICAgIDBMLAkJCS8qIFhYWDog 
Tm90IGF2YWlsYWJsZSBjdXJyZW50bHkuICovDQorCQkgICAgbXRzX2xvY2Fs Lm10c19udW1hbGxvY3MpOw0KIA0KIAkJZmlyc3QgPSAxOw0KIAkJZm9yIChp ID0gMDsgaSA8IHNpemVvZihrbWVtem9uZXMpIC8gc2l6ZW9mKGttZW16b25l c1swXSkgLSAxOw0KIAkJICAgIGkrKykgew0KLQkJCWlmICh0eXBlLT5rc19z aXplICYgKDEgPDwgaSkpIHsNCisJCQlpZiAobXRzX2xvY2FsLm10c19zaXpl ICYgKDEgPDwgaSkpIHsNCiAJCQkJaWYgKGZpcnN0KQ0KLQkJCQkJbGVuID0g c25wcmludGYocCwgY3VybGluZSwgIiAgIik7DQorCQkJCQlzYnVmX3ByaW50 Zigmc2J1ZiwgIiAgIik7DQogCQkJCWVsc2UNCi0JCQkJCWxlbiA9IHNucHJp bnRmKHAsIGN1cmxpbmUsICIsIik7DQotCQkJCWN1cmxpbmUgLT0gbGVuOw0K LQkJCQlwICs9IGxlbjsNCi0NCi0JCQkJbGVuID0gc25wcmludGYocCwgY3Vy bGluZSwNCi0JCQkJICAgICIlcyIsIGttZW16b25lc1tpXS5rel9uYW1lKTsN Ci0JCQkJY3VybGluZSAtPSBsZW47DQotCQkJCXAgKz0gbGVuOw0KLQ0KKwkJ CQkJc2J1Zl9wcmludGYoJnNidWYsICIsIik7DQorCQkJCXNidWZfcHJpbnRm KCZzYnVmLCAiJXMiLA0KKwkJCQkgICAga21lbXpvbmVzW2ldLmt6X25hbWUp Ow0KIAkJCQlmaXJzdCA9IDA7DQogCQkJfQ0KIAkJfQ0KLQ0KLQkJbGVuID0g c25wcmludGYocCwgMiwgIlxuIik7DQotCQlwICs9IGxlbjsNCisJCXNidWZf cHJpbnRmKCZzYnVmLCAiXG4iKTsNCiAJfQ0KKwlzYnVmX2ZpbmlzaCgmc2J1 Zik7DQorCW10eF91bmxvY2soJm1hbGxvY19tdHgpOw0KIA0KLQltdHhfdW5s b2NrKCZtYWxsb2NfbXR4KTsNCi0JZXJyb3IgPSBTWVNDVExfT1VUKHJlcSwg YnVmLCBwIC0gYnVmKTsNCisJZXJyb3IgPSBTWVNDVExfT1VUKHJlcSwgc2J1 Zl9kYXRhKCZzYnVmKSwgc2J1Zl9sZW4oJnNidWYpKTsNCiANCisJc2J1Zl9k ZWxldGUoJnNidWYpOw0KIAlmcmVlKGJ1ZiwgTV9URU1QKTsNCiAJcmV0dXJu IChlcnJvcik7DQogfQ0KQEAgLTY5Niw2ICs3NTYsNyBAQA0KIHN5c2N0bF9r ZXJuX21wcm9mKFNZU0NUTF9IQU5ETEVSX0FSR1MpDQogew0KIAlpbnQgbGlu ZXNpemUgPSA2NDsNCisJc3RydWN0IHNidWYgc2J1ZjsNCiAJdWludDY0X3Qg Y291bnQ7DQogCXVpbnQ2NF90IHdhc3RlOw0KIAl1aW50NjRfdCBtZW07DQpA QCAtNzA0LDcgKzc2NSw2IEBADQogCWNoYXIgKmJ1ZjsNCiAJaW50IHJzaXpl Ow0KIAlpbnQgc2l6ZTsNCi0JY2hhciAqcDsNCiAJaW50IGxlbjsNCiAJaW50 IGk7DQogDQpAQCAtNzE0LDM0ICs3NzQsMzAgQEANCiAJd2FzdGUgPSAwOw0K IAltZW0gPSAwOw0KIA0KLQlwID0gYnVmID0gKGNoYXIgKiltYWxsb2MoYnVm c2l6ZSwgTV9URU1QLCBNX1dBSVRPS3xNX1pFUk8pOw0KLQlsZW4gPSBzbnBy aW50ZihwLCBidWZzaXplLA0KKwlidWYgPSAoY2hhciAqKW1hbGxvYyhidWZz 
aXplLCBNX1RFTVAsIE1fV0FJVE9LfE1fWkVSTyk7DQorCXNidWZfbmV3KCZz YnVmLCBidWYsIGJ1ZnNpemUsIFNCVUZfRklYRURMRU4pOw0KKwlzYnVmX3By aW50Zigmc2J1ZiwgDQogCSAgICAiXG4gIFNpemUgICAgICAgICAgICAgICAg ICAgIFJlcXVlc3RzICBSZWFsIFNpemVcbiIpOw0KLQlidWZzaXplIC09IGxl bjsNCi0JcCArPSBsZW47DQotDQogCWZvciAoaSA9IDA7IGkgPCBLTUVNX1pT SVpFOyBpKyspIHsNCiAJCXNpemUgPSBpIDw8IEtNRU1fWlNISUZUOw0KIAkJ cnNpemUgPSBrbWVtem9uZXNba21lbXNpemVbaV1dLmt6X3NpemU7DQogCQlj b3VudCA9IChsb25nIGxvbmcgdW5zaWduZWQpa3JlcXVlc3RzW2ldOw0KIA0K LQkJbGVuID0gc25wcmludGYocCwgYnVmc2l6ZSwgIiU2ZCUyOGxsdSUxMWRc biIsDQotCQkgICAgc2l6ZSwgKHVuc2lnbmVkIGxvbmcgbG9uZyljb3VudCwg cnNpemUpOw0KLQkJYnVmc2l6ZSAtPSBsZW47DQotCQlwICs9IGxlbjsNCisJ CXNidWZfcHJpbnRmKCZzYnVmLCAiJTZkJTI4bGx1JTExZFxuIiwgc2l6ZSwN CisJCSAgICAodW5zaWduZWQgbG9uZyBsb25nKWNvdW50LCByc2l6ZSk7DQog DQogCQlpZiAoKHJzaXplICogY291bnQpID4gKHNpemUgKiBjb3VudCkpDQog CQkJd2FzdGUgKz0gKHJzaXplICogY291bnQpIC0gKHNpemUgKiBjb3VudCk7 DQogCQltZW0gKz0gKHJzaXplICogY291bnQpOw0KIAl9DQotDQotCWxlbiA9 IHNucHJpbnRmKHAsIGJ1ZnNpemUsDQorCXNidWZfcHJpbnRmKCZzYnVmLA0K IAkgICAgIlxuVG90YWwgbWVtb3J5IHVzZWQ6XHQlMzBsbHVcblRvdGFsIE1l bW9yeSB3YXN0ZWQ6XHQlMzBsbHVcbiIsDQogCSAgICAodW5zaWduZWQgbG9u ZyBsb25nKW1lbSwgKHVuc2lnbmVkIGxvbmcgbG9uZyl3YXN0ZSk7DQotCXAg Kz0gbGVuOw0KKwlzYnVmX2ZpbmlzaCgmc2J1Zik7DQogDQotCWVycm9yID0g U1lTQ1RMX09VVChyZXEsIGJ1ZiwgcCAtIGJ1Zik7DQorCWVycm9yID0gU1lT Q1RMX09VVChyZXEsIHNidWZfZGF0YSgmc2J1ZiksIHNidWZfbGVuKCZzYnVm KSk7DQogDQorCXNidWZfZGVsZXRlKCZzYnVmKTsNCiAJZnJlZShidWYsIE1f VEVNUCk7DQogCXJldHVybiAoZXJyb3IpOw0KIH0NCi0tLSAvL2RlcG90L3Zl bmRvci9mcmVlYnNkL3NyYy9zeXMvc3lzL21hbGxvYy5oCTIwMDUvMDEvMDcg MDI6MzI6MTYNCisrKyAvL2RlcG90L3VzZXIvcndhdHNvbi9wZXJjcHUvc3lz L3N5cy9tYWxsb2MuaAkyMDA1LzA0LzE0IDEyOjU0OjAwDQpAQCAtNTAsMjUg KzUwLDUxIEBADQogDQogI2RlZmluZQlNX01BR0lDCQk4Nzc5ODM5NzcJLyog dGltZSB3aGVuIGZpcnN0IGRlZmluZWQgOi0pICovDQogDQorLyoNCisgKiBB QkktY29tcGF0aWJsZSB2ZXJzaW9uIG9mIHRoZSBvbGQgJ3N0cnVjdCBtYWxs b2NfdHlwZScsIG9ubHkgYWxsIHN0YXRzIGFyZQ0KKyAqIG5vdyBtYWxsb2Mt 
bWFuYWdlZCBpbiBtYWxsb2Mtb3duZWQgbWVtb3J5IHJhdGhlciB0aGFuIGlu IGNhbGxlciBtZW1vcnksIHNvDQorICogYXMgdG8gYXZvaWQgQUJJIGlzc3Vl cy4gIFRoZSBrc19uZXh0IHBvaW50ZXIgaXMgcmV1c2VkIGFzIGEgcG9pbnRl ciB0byB0aGUNCisgKiBpbnRlcm5hbCBkYXRhIGhhbmRsZS4NCisgKg0KKyAq IFhYWFJXOiBXaHkgaXMgdGhpcyBub3QgaWZkZWYgX0tFUk5FTD8NCisgKg0K KyAqIFhYWFJXOiBVc2Ugb2Yga3Nfc2hvcnRkZXNjIGhhcyBsZWFrZWQgb3V0 IG9mIGtlcm5fbWFsbG9jLmMuDQorICovDQogc3RydWN0IG1hbGxvY190eXBl IHsNCi0Jc3RydWN0IG1hbGxvY190eXBlICprc19uZXh0OwkvKiBuZXh0IGlu IGxpc3QgKi8NCi0JdV9sb25nIAlrc19tZW11c2U7CS8qIHRvdGFsIG1lbW9y eSBoZWxkIGluIGJ5dGVzICovDQotCXVfbG9uZwlrc19zaXplOwkvKiBzaXpl cyBvZiB0aGlzIHRoaW5nIHRoYXQgYXJlIGFsbG9jYXRlZCAqLw0KLQl1X2xv bmcJa3NfaW51c2U7CS8qICMgb2YgcGFja2V0cyBvZiB0aGlzIHR5cGUgY3Vy cmVudGx5IGluIHVzZSAqLw0KLQl1aW50NjRfdCBrc19jYWxsczsJLyogdG90 YWwgcGFja2V0cyBvZiB0aGlzIHR5cGUgZXZlciBhbGxvY2F0ZWQgKi8NCi0J dV9sb25nCWtzX21heHVzZWQ7CS8qIG1heGltdW0gbnVtYmVyIGV2ZXIgdXNl ZCAqLw0KLQl1X2xvbmcJa3NfbWFnaWM7CS8qIGlmIGl0J3Mgbm90IG1hZ2lj LCBkb24ndCB0b3VjaCBpdCAqLw0KLQljb25zdCBjaGFyICprc19zaG9ydGRl c2M7CS8qIHNob3J0IGRlc2NyaXB0aW9uICovDQotCXN0cnVjdCBtdHgga3Nf bXR4OwkvKiBsb2NrIGZvciBzdGF0cyAqLw0KKwlzdHJ1Y3QgbWFsbG9jX3R5 cGUJKmtzX25leHQ7CS8qIE5leHQgaW4gZ2xvYmFsIGNoYWluLiAqLw0KKwl1 X2xvbmcJCQkgX2tzX3NpemU7CS8qIE5vIGxvbmdlciB1c2VkLiAqLw0KKwl1 X2xvbmcJCQkgX2tzX2ludXNlOwkvKiBObyBsb25nZXIgdXNlZC4gKi8NCisJ dWludDY0X3QJCSBfa3NfY2FsbHM7CS8qIE5vIGxvbmdlciB1c2VkLiAqLw0K Kwl1X2xvbmcJCQkgX2tzX21heHVzZWQ7CS8qIE5vIGxvbmdlciB1c2VkLiAq Lw0KKwl1X2xvbmcJCQkga3NfbWFnaWM7CS8qIERldGVjdCBwcm9ncmFtbWVy IGVycm9yLiAqLw0KKwljb25zdCBjaGFyCQkqa3Nfc2hvcnRkZXNjOwkvKiBQ cmludGFibGUgdHlwZSBuYW1lLiAqLw0KKw0KKwkvKg0KKwkgKiBzdHJ1Y3Qg bWFsbG9jX3R5cGUgd2FzIHRlcm1pbmF0ZWQgd2l0aCBhIHN0cnVjdCBtdHgs IHdoaWNoIGlzIG5vDQorCSAqIGxvbmdlciByZXF1aXJlZC4gIEZvciBBQkkg cmVhc29ucywgY29udGludWUgdG8gZmxlc2ggb3V0IHRoZSBmdWxsDQorCSAq IHNpemUgb2YgdGhlIG9sZCBzdHJ1Y3R1cmUsIGJ1dCByZXVzZSB0aGUgX2xv X2NsYXNzIGZpZWxkIGZvciBvdXINCisJICogaW50ZXJuYWwgZGF0YSBoYW5k 
bGUuDQorCSAqLw0KKwl2b2lkCQkJKmtzX2hhbmRsZTsJLyogUHJpdi4gZGF0 YSwgd2FzIGxvX2NsYXNzLiAqLw0KKwljb25zdCBjaGFyCQkqX2xvX25hbWU7 DQorCWNvbnN0IGNoYXIJCSpfbG9fdHlwZTsNCisJdV9pbnQJCQkgX2xvX2Zs YWdzOw0KKwl2b2lkCQkJKl9sb19saXN0X25leHQ7DQorCXN0cnVjdCB3aXRu ZXNzCQkqX2xvX3dpdG5lc3M7DQorCXVpbnRwdHJfdAkJIF9tdHhfbG9jazsN CisJdV9pbnQJCQkgX210eF9yZWN1cnNlOw0KIH07DQogDQogI2lmZGVmIF9L RVJORUwNCi0jZGVmaW5lCU1BTExPQ19ERUZJTkUodHlwZSwgc2hvcnRkZXNj LCBsb25nZGVzYykgXA0KLQlzdHJ1Y3QgbWFsbG9jX3R5cGUgdHlwZVsxXSA9 IHsgXA0KLQkJeyBOVUxMLCAwLCAwLCAwLCAwLCAwLCBNX01BR0lDLCBzaG9y dGRlc2MsIHt9IH0gXA0KLQl9OyBcDQotCVNZU0lOSVQodHlwZSMjX2luaXQs IFNJX1NVQl9LTUVNLCBTSV9PUkRFUl9TRUNPTkQsIG1hbGxvY19pbml0LCB0 eXBlKTsgXA0KLQlTWVNVTklOSVQodHlwZSMjX3VuaW5pdCwgU0lfU1VCX0tN RU0sIFNJX09SREVSX0FOWSwgbWFsbG9jX3VuaW5pdCwgdHlwZSkNCisjZGVm aW5lCU1BTExPQ19ERUZJTkUodHlwZSwgc2hvcnRkZXNjLCBsb25nZGVzYykJ CQlcDQorCXN0cnVjdCBtYWxsb2NfdHlwZSB0eXBlWzFdID0gewkJCQkJXA0K KwkJeyBOVUxMLCAwLCAwLCAwLCAwLCBNX01BR0lDLCBzaG9ydGRlc2MsIE5V TEwsIE5VTEwsCVwNCisJCSAgICBOVUxMLCAwLCBOVUxMLCBOVUxMLCAwLCAw IH0JCQkJXA0KKwl9OwkJCQkJCQkJXA0KKwlTWVNJTklUKHR5cGUjI19pbml0 LCBTSV9TVUJfS01FTSwgU0lfT1JERVJfU0VDT05ELCBtYWxsb2NfaW5pdCwJ XA0KKwkgICAgdHlwZSk7CQkJCQkJCVwNCisJU1lTVU5JTklUKHR5cGUjI191 bmluaXQsIFNJX1NVQl9LTUVNLCBTSV9PUkRFUl9BTlksCQlcDQorCSAgICBt YWxsb2NfdW5pbml0LCB0eXBlKTsNCiANCiAjZGVmaW5lCU1BTExPQ19ERUNM QVJFKHR5cGUpIFwNCiAJZXh0ZXJuIHN0cnVjdCBtYWxsb2NfdHlwZSB0eXBl WzFdDQpAQCAtMTEyLDYgKzEzOCw3IEBADQogCSAgICBpbnQgZmxhZ3MpOw0K IHZvaWQJKnJlYWxsb2NmKHZvaWQgKmFkZHIsIHVuc2lnbmVkIGxvbmcgc2l6 ZSwgc3RydWN0IG1hbGxvY190eXBlICp0eXBlLA0KIAkgICAgaW50IGZsYWdz KTsNCisNCiAjZW5kaWYgLyogX0tFUk5FTCAqLw0KIA0KICNlbmRpZiAvKiAh X1NZU19NQUxMT0NfSF8gKi8NCg== --0-136826264-1113748310=:85588-- From owner-freebsd-performance@FreeBSD.ORG Sun Apr 17 17:49:15 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 9E7C016A4CE for ; Sun, 17 Apr 2005 17:49:15 +0000 
(GMT) Received: from cyrus.watson.org (cyrus.watson.org [204.156.12.53]) by mx1.FreeBSD.org (Postfix) with ESMTP id 435A243D5D for ; Sun, 17 Apr 2005 17:49:15 +0000 (GMT) (envelope-from rwatson@FreeBSD.org) Received: from fledge.watson.org (fledge.watson.org [204.156.12.50]) by cyrus.watson.org (Postfix) with ESMTP id EB7D246B8A for ; Sun, 17 Apr 2005 13:49:14 -0400 (EDT) Date: Sun, 17 Apr 2005 18:50:12 +0100 (BST) From: Robert Watson X-X-Sender: robert@fledge.watson.org To: performance@FreeBSD.org In-Reply-To: <20050417134448.L85588@fledge.watson.org> Message-ID: <20050417184726.V91149@fledge.watson.org> References: <20050417134448.L85588@fledge.watson.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Subject: Re: Memory allocation performance/statistics patches X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 17 Apr 2005 17:49:15 -0000 On Sun, 17 Apr 2005, Robert Watson wrote:

> I'd like to confirm that for the first two patches, for interesting
> workloads, performance generally improves, and that stability doesn't
> degrade. For the third patch, I'd like to quantify the cost of the
> changes for interesting workloads, and likewise confirm no loss of
> stability.

Just an FYI on some earlier testing done by a couple of people:

- Bosko Milekic has reported that the UMA changes resulted in a performance increase in his high-bandwidth denial of service testing, as well as a decreased occurrence of livelock.

- Scott Long has reported that the UMA changes produced a performance increase in MySQL testing, but that the combined malloc + mbuf + uma patches produced a slight decrease in performance. He has not yet been able to try them split out to see if there's another factor at work, or which elements are causing the problem.
- I've observed clear micro-benchmark improvements with the UMA cache in place for high speed UDP packet generation in userland, as well as syscall tests for the allocation/free of pipes and sockets. Robert N M Watson From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 11:18:55 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id AB3B316A4CE for ; Tue, 19 Apr 2005 11:18:55 +0000 (GMT) Received: from rproxy.gmail.com (rproxy.gmail.com [64.233.170.204]) by mx1.FreeBSD.org (Postfix) with ESMTP id 2CFCA43D49 for ; Tue, 19 Apr 2005 11:18:55 +0000 (GMT) (envelope-from kometen@gmail.com) Received: by rproxy.gmail.com with SMTP id a41so1249462rng for ; Tue, 19 Apr 2005 04:18:54 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition; b=D1osuG4j7ZmDJWUd30I33xHltxY40qVZ570u4L4Wt3MtkuTgQYnBMLZ3kztNt0CLpiZ7N469kmQfeY4Sl2lLLlBqI28EGVVVaItPpv1PuIFgEao/9IVMnyKTK4RcttynqsleSGGIH7EmhGiTji25DcPTa2LDYn6RMG1TOW+7HfE= Received: by 10.38.74.31 with SMTP id w31mr7045431rna; Tue, 19 Apr 2005 04:18:54 -0700 (PDT) Received: by 10.38.149.53 with HTTP; Tue, 19 Apr 2005 04:18:54 -0700 (PDT) Message-ID: Date: Tue, 19 Apr 2005 13:18:54 +0200 From: Claus Guttesen To: freebsd-stable@freebsd.org, freebsd-performance@freebsd.org Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline Subject: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list Reply-To: Claus Guttesen List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 11:18:55 -0000 Hi. 
Sorry for x-posting but the thread was originally meant for freebsd-stable but then a performance-related question slowly crept into the message ;-) Inspired by the nfs-benchmarks by Willem Jan Withagen I ran some simple benchmarks against a FreeBSD 5.4 RC2-server. My seven clients are RC1 and are a mix of i386 and amd64. The purpose of this test was *not* to measure throughput using various r/w-sizes. So all clients were mounted using r/w-sizes of 32768. The only difference was the usage of udp- or tcp-mounts. I only ran the test once. The server has net.isr.enable set to 1 (active), gbit-nic is em. Used 'systat -ifstat 1' to measure throughput. The storage is ide->fiber using a qlogic 2310 hba. It's a dual PIII at 1.3 GHz. I'm rsyncing to and from the nfsserver; the files are a few KB (thumbnails) and at most 1 MB (the image itself). The folder is approx. 1.8 GB. The mix of files very much reflects our load.

           *to* nfs-server    *from* nfs-server
tcp        41 MB/s            100 MB/s
udp        30 MB/s            74 MB/s

In my environment tcp is (quite a bit) faster than udp, so I'll stick to that in the near future. So even though I only made one run, the tcp-times are so much faster (and utilized the cpu more) that I believe doing more runs would only level the score a bit. Q: Will I get better performance upgrading the server from dual PIII to dual Xeon? 
A: regards Claus From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 11:33:34 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 1693A16A4CE; Tue, 19 Apr 2005 11:33:34 +0000 (GMT) Received: from mh2.centtech.com (moat3.centtech.com [207.200.51.50]) by mx1.FreeBSD.org (Postfix) with ESMTP id 1008643D5C; Tue, 19 Apr 2005 11:33:33 +0000 (GMT) (envelope-from anderson@centtech.com) Received: from [10.177.171.220] (neutrino.centtech.com [10.177.171.220]) by mh2.centtech.com (8.13.1/8.13.1) with ESMTP id j3JBXWUl033554; Tue, 19 Apr 2005 06:33:32 -0500 (CDT) (envelope-from anderson@centtech.com) Message-ID: <4264EC60.3020600@centtech.com> Date: Tue, 19 Apr 2005 06:32:48 -0500 From: Eric Anderson User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.7.5) Gecko/20050325 X-Accept-Language: en-us, en MIME-Version: 1.0 To: Claus Guttesen References: In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit cc: freebsd-performance@freebsd.org cc: freebsd-stable@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 11:33:34 -0000 Claus Guttesen wrote: > Hi. > > Sorry for x-posting but the thread was originally meant for > freebsd-stable but then a performance-related question slowly emerged > into the message ;-) > > Inspired by the nfs-benchmarks by Willem Jan Withagen I ran some > simple benchmarks against a FreeBSD 5.4 RC2-server. My seven clients > are RC1 and is a mix of i386 and amd64. > > The purpose of this test was *not* to measure throughput using various > r/w-sizes. So all clients were mounted using r/w-sizes of 32768. 
The > only difference was the usage of udp- or tcp-mounts. I only ran the > test once. > > The server has net.isr.enable set to 1 (active), gbit-nic is em. Used > 'systat -ifstat 1' to measure throughput. The storage is ide->fiber > using a qlogic 2310 hba. It's a dual PIII at 1.3 GHz. > > I'm rsyncing to and from the nfsserver, the files are some KB > (thumbnails) and and at most 1 MB (the image itself). The folder is > approx. 1.8 GB. The mix of files very much reflects our load. > > *to* nfs-server *from* nfs-server > tcp 41 MB/s 100 MB/s > udp 30 MB/s 74 MB/s > > In my environment tcp is (quite) faster than udp, so I'll stick to > that in the near future. So eventhough I only made one run the > tcp-times are so much faster and it utilized the cpu more that I > beleive doing more runs would only level the score a bit. > > Q: > Will I get better performance upgrading the server from dual PIII to dual Xeon? > > A: rsync is CPU intensive, so depending on how much cpu you were using for this, you may or may not gain. How busy was the server during that time? Is this to a single IDE disk? If so, you are probably bottlenecked by that IDE drive. Eric -- ------------------------------------------------------------------------ Eric Anderson Sr. Systems Administrator Centaur Technology A lost ounce of gold may be found, a lost moment of time never. 
------------------------------------------------------------------------ From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 11:43:58 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id A959A16A4CF for ; Tue, 19 Apr 2005 11:43:58 +0000 (GMT) Received: from rproxy.gmail.com (rproxy.gmail.com [64.233.170.206]) by mx1.FreeBSD.org (Postfix) with ESMTP id 146FE43D62 for ; Tue, 19 Apr 2005 11:43:58 +0000 (GMT) (envelope-from kometen@gmail.com) Received: by rproxy.gmail.com with SMTP id a41so1252995rng for ; Tue, 19 Apr 2005 04:43:57 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=pcykgU3mpXLy/3h8wnsH4YrFuH8Ms7Y2dhpI5G7XcHqt5ir4CHbvtgOjXEGRsE6w++xcKqnckaO1OeSUG8bsOQxgVRxkyYAI6661rFf1HkUeUD6PtW55YSOoWMJ02S/jER8qtFniNtpuhVYAl8M4dpQohhZTGOf36w0kRFC9xOo= Received: by 10.38.86.53 with SMTP id j53mr7024978rnb; Tue, 19 Apr 2005 04:43:57 -0700 (PDT) Received: by 10.38.149.53 with HTTP; Tue, 19 Apr 2005 04:43:57 -0700 (PDT) Message-ID: Date: Tue, 19 Apr 2005 13:43:57 +0200 From: Claus Guttesen To: Eric Anderson In-Reply-To: <4264EC60.3020600@centtech.com> Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline References: <4264EC60.3020600@centtech.com> cc: freebsd-performance@freebsd.org cc: freebsd-stable@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list Reply-To: Claus Guttesen List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 11:43:58 -0000 > > Q: > > Will I get better performance upgrading the server from 
dual PIII to dual Xeon? > > A: > > rsync is CPU intensive, so depending on how much cpu you were using for this, > you may or may not gain. How busy was the server during that time? Is this to > a single IDE disk? If so, you are probably bottlenecked by that IDE drive. The storage is ide->fiber. Using tcp-mounts and peaking at 100 MB/s it used just about 100 % cpu. Rsync was only used to copy the folder recursively (-a); it used nfs to transfer the files to the nfs-server. regards Claus From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 11:45:49 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 0578A16A4CE; Tue, 19 Apr 2005 11:45:49 +0000 (GMT) Received: from mh1.centtech.com (moat3.centtech.com [207.200.51.50]) by mx1.FreeBSD.org (Postfix) with ESMTP id 9915443D1F; Tue, 19 Apr 2005 11:45:48 +0000 (GMT) (envelope-from anderson@centtech.com) Received: from [10.177.171.220] (neutrino.centtech.com [10.177.171.220]) by mh1.centtech.com (8.13.1/8.13.1) with ESMTP id j3JBjmjW066290; Tue, 19 Apr 2005 06:45:48 -0500 (CDT) (envelope-from anderson@centtech.com) Message-ID: <4264EF40.3060900@centtech.com> Date: Tue, 19 Apr 2005 06:45:04 -0500 From: Eric Anderson User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.7.5) Gecko/20050325 X-Accept-Language: en-us, en MIME-Version: 1.0 To: Claus Guttesen References: <4264EC60.3020600@centtech.com> In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.82/840/Mon Apr 18 20:42:09 2005 on mh1.centtech.com X-Virus-Status: Clean cc: freebsd-performance@freebsd.org cc: freebsd-stable@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 11:45:49 -0000 Claus Guttesen wrote: >>>Q: >>>Will I get better performance upgrading the server from dual PIII to dual Xeon? >>>A: >> >>rsync is CPU intensive, so depending on how much cpu you were using for this, >>you may or may not gain. How busy was the server during that time? Is this to >>a single IDE disk? If so, you are probably bottlenecked by that IDE drive. > > > The storage is ide->fiber. Using tcp-mounts and peaking 100 MB/s it > used just about 100 % cpu. > > Rsync was only used to copy the folder recursively (-a), it used nfs > to trasnfer the files to the nfs-server. When you say 'ide->fiber' that could mean a lot of things. Is this a single drive, or a RAID subsystem? Eric -- ------------------------------------------------------------------------ Eric Anderson Sr. Systems Administrator Centaur Technology A lost ounce of gold may be found, a lost moment of time never. ------------------------------------------------------------------------ From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 11:55:26 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id C801216A4E5 for ; Tue, 19 Apr 2005 11:55:26 +0000 (GMT) Received: from rproxy.gmail.com (rproxy.gmail.com [64.233.170.200]) by mx1.FreeBSD.org (Postfix) with ESMTP id 2D4EC43D53 for ; Tue, 19 Apr 2005 11:55:23 +0000 (GMT) (envelope-from kometen@gmail.com) Received: by rproxy.gmail.com with SMTP id a41so1254691rng for ; Tue, 19 Apr 2005 04:55:22 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; 
b=I3Sk90l2YVuBeCHauG0ckAIMgdyiqbgkOkryzrT4uYVwg/9pd97O4mcQLrqTipoPCPCw3QiQE1J+bYmYQozG4Bl3/AnxnhlC7FoBxBKG3J6Tqbqr/o3mclhACgSYay5gRS+yNnWBGu27hcErL6Lv4oXw3P3L0AkiGczNVE0MSZs= Received: by 10.38.160.51 with SMTP id i51mr6226806rne; Tue, 19 Apr 2005 04:55:22 -0700 (PDT) Received: by 10.38.149.53 with HTTP; Tue, 19 Apr 2005 04:55:22 -0700 (PDT) Message-ID: Date: Tue, 19 Apr 2005 13:55:22 +0200 From: Claus Guttesen To: Eric Anderson In-Reply-To: <4264EF40.3060900@centtech.com> Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline References: <4264EC60.3020600@centtech.com> <4264EF40.3060900@centtech.com> cc: freebsd-performance@freebsd.org cc: freebsd-stable@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list Reply-To: Claus Guttesen List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 11:55:27 -0000 > When you say 'ide->fiber' that could mean a lot of things. Is this a single > drive, or a RAID subsystem? Yes, I read it differently now ;-) It's a raid 5 with 12 x 400 GB drives split into two volumes (where I performed the test on one of them). 
regards Claus From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 12:25:58 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 0633F16A4CE for ; Tue, 19 Apr 2005 12:25:58 +0000 (GMT) Received: from mh1.centtech.com (moat3.centtech.com [207.200.51.50]) by mx1.FreeBSD.org (Postfix) with ESMTP id 9D34043D2F for ; Tue, 19 Apr 2005 12:25:57 +0000 (GMT) (envelope-from anderson@centtech.com) Received: from [10.177.171.220] (neutrino.centtech.com [10.177.171.220]) by mh1.centtech.com (8.13.1/8.13.1) with ESMTP id j3JCPuvH066543; Tue, 19 Apr 2005 07:25:56 -0500 (CDT) (envelope-from anderson@centtech.com) Message-ID: <4264F8A8.3080405@centtech.com> Date: Tue, 19 Apr 2005 07:25:12 -0500 From: Eric Anderson User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.7.5) Gecko/20050325 X-Accept-Language: en-us, en MIME-Version: 1.0 To: Claus Guttesen References: <4264EC60.3020600@centtech.com> <4264EF40.3060900@centtech.com> In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.82/840/Mon Apr 18 20:42:09 2005 on mh1.centtech.com X-Virus-Status: Clean cc: freebsd-performance@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 12:25:58 -0000 Claus Guttesen wrote: >>When you say 'ide->fiber' that could mean a lot of things. Is this a single >>drive, or a RAID subsystem? > > > Yes, I do read it different now ;-) > > It's a raid 5 with 12 400 GB drives split into two volumes (where I > performed the test on one of them). What does gstat look like on the server when you are doing this? Also - does a dd locally on the server give the same results? 
You should get about double that I would estimate locally direct to disk. What about a dd over NFS? What is the server spending its time doing? (top?) If you are looking for the best performance, you might try a RAID 0+1 (or 10 possibly) instead of RAID 5. Eric -- ------------------------------------------------------------------------ Eric Anderson Sr. Systems Administrator Centaur Technology A lost ounce of gold may be found, a lost moment of time never. ------------------------------------------------------------------------ From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 13:14:45 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id D579116A4CE for ; Tue, 19 Apr 2005 13:14:45 +0000 (GMT) Received: from rproxy.gmail.com (rproxy.gmail.com [64.233.170.202]) by mx1.FreeBSD.org (Postfix) with ESMTP id 57A2143D41 for ; Tue, 19 Apr 2005 13:14:45 +0000 (GMT) (envelope-from kometen@gmail.com) Received: by rproxy.gmail.com with SMTP id a41so1268546rng for ; Tue, 19 Apr 2005 06:14:44 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=Z6qnvX3ImmXAEnOTZRvnxuygkXtwY6IrzTiuX80RwocVAXkYqQIscT2llNwdR4AhK358/W6NNAx5JJBhqn/jn09xy/nZtAPRhyZmjFtDwLovHsTB/2JahzrlZzt247b1j/s9GIXdYAJSMePc8r9nfjcxOf3w4YrtHGbwyymDkrE= Received: by 10.38.67.66 with SMTP id p66mr4912819rna; Tue, 19 Apr 2005 06:14:44 -0700 (PDT) Received: by 10.38.149.53 with HTTP; Tue, 19 Apr 2005 06:14:44 -0700 (PDT) Message-ID: Date: Tue, 19 Apr 2005 15:14:44 +0200 From: Claus Guttesen To: Eric Anderson In-Reply-To: <4264F8A8.3080405@centtech.com> Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline References: 
<4264EC60.3020600@centtech.com> <4264EF40.3060900@centtech.com> <4264F8A8.3080405@centtech.com> cc: freebsd-performance@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list Reply-To: Claus Guttesen List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 13:14:46 -0000 > What does gstat look like on the server when you are doing this? > Also - does a dd locally on the server give the same results? You should get > about double that I would estimate locally direct to disk. What about a dd over > NFS? dd-command: dd if=/dev/zero of=/nfssrv/dd.tst bs=1024 count=1048576 on client: 1073741824 bytes transferred in 24.787112 secs (43318553 bytes/sec) gstat showed approx. 30.000-52.000 KB/s. on nfs-server: 1073741824 bytes transferred in 23.368815 secs (45947637 bytes/sec) gstat showed approx. 45.000-46.000 KB/s. The funny thing is that the output rate fluctuates more dd'ing from the client (remote) and is more consistent dd'ing on the server (locally). > What is the server spending its time doing? (top?) nfsd. > If you are looking for the best performance, you might try a RAID 0+1 (or 10 > possibly) instead of RAID 5. I chose raid 5 to maximize space. 
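For reference, the dd test above can be wrapped in a small script so the local and over-NFS runs are easy to repeat. This is just a sketch: the output path and the shrunken default block count are assumptions; the original run used bs=1024 count=1048576 (1 GB) against /nfssrv/dd.tst.

```shell
#!/bin/sh
# Sketch of the dd throughput test quoted above.
# OUT and the small default COUNT are assumptions for a quick smoke
# test; point OUT at the NFS mount and use COUNT=1048576 to reproduce
# the original 1 GB run.
OUT=${OUT:-/tmp/dd.tst}
BS=1024
COUNT=${COUNT:-1024}    # 1 MB by default here

# dd reports its transfer statistics on stderr; keep only that line.
dd if=/dev/zero of="$OUT" bs="$BS" count="$COUNT" 2>&1 | tail -1
rm -f "$OUT"
```

Running it once on the server against local disk and once on a client against the mount gives the two "bytes/sec" figures being compared in this thread.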
regards Claus From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 13:30:51 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 6DE9016A4CE for ; Tue, 19 Apr 2005 13:30:51 +0000 (GMT) Received: from mh1.centtech.com (moat3.centtech.com [207.200.51.50]) by mx1.FreeBSD.org (Postfix) with ESMTP id 5B2BC43D31 for ; Tue, 19 Apr 2005 13:30:50 +0000 (GMT) (envelope-from anderson@centtech.com) Received: from [10.177.171.220] (neutrino.centtech.com [10.177.171.220]) by mh1.centtech.com (8.13.1/8.13.1) with ESMTP id j3JDUmgN066953; Tue, 19 Apr 2005 08:30:49 -0500 (CDT) (envelope-from anderson@centtech.com) Message-ID: <426507DC.50409@centtech.com> Date: Tue, 19 Apr 2005 08:30:04 -0500 From: Eric Anderson User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.7.5) Gecko/20050325 X-Accept-Language: en-us, en MIME-Version: 1.0 To: Claus Guttesen References: <4264EC60.3020600@centtech.com> <4264EF40.3060900@centtech.com> <4264F8A8.3080405@centtech.com> In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.82/840/Mon Apr 18 20:42:09 2005 on mh1.centtech.com X-Virus-Status: Clean cc: freebsd-performance@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 13:30:51 -0000 Claus Guttesen wrote: >>What does gstat look like on the server when you are doing this? >>Also - does a dd locally on the server give the same results? You should get >>about double that I would estimate locally direct to disk. What about a dd over >>NFS? 
> > > dd-command: > > dd if=/dev/zero of=/nfssrv/dd.tst bs=1024 count=1048576 > > on client: > 1073741824 bytes transferred in 24.787112 secs (43318553 bytes/sec) > gstat showed approx. 30.000-52.000 KB/s. > > on nfs-server: > 1073741824 bytes transferred in 23.368815 secs (45947637 bytes/sec) > gstat showed approx. 45.000-46.000 KB/s. > > The funny thing is that the outputrate fluxuates more dd'ing from the > client (remote) and is more consistent dd'ing on the server (locally). > > >>What is the server spending its time doing? (top?) > > > nfsd. > > >>If you are looking for the best performance, you might try a RAID 0+1 (or 10 >>possibly) instead of RAID 5. > > > I chosed raid 5 to maximize space. What state is nfsd in? Can you send the output of this: ps -auxw|grep nfsd while the server is slammed? I think you are disk bound.. You should not be disk bound at this point with a good RAID controller.. Eric -- ------------------------------------------------------------------------ Eric Anderson Sr. Systems Administrator Centaur Technology A lost ounce of gold may be found, a lost moment of time never. 
------------------------------------------------------------------------ From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 13:55:06 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 8A30B16A4CE for ; Tue, 19 Apr 2005 13:55:06 +0000 (GMT) Received: from rproxy.gmail.com (rproxy.gmail.com [64.233.170.203]) by mx1.FreeBSD.org (Postfix) with ESMTP id 252AE43D46 for ; Tue, 19 Apr 2005 13:55:06 +0000 (GMT) (envelope-from kometen@gmail.com) Received: by rproxy.gmail.com with SMTP id a41so1276865rng for ; Tue, 19 Apr 2005 06:55:05 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=d8+Zr6hAwrtTj/zdEiBXVgKEsfWze9pDbXUJ2N/YAjVlIAwfNUtqWtG13pCf1rStkgJSBeqRIcB+eb090dkNpPXcKRr0ifBq0itPDsm4zo9a0X6NCggD1Jomv1jzFmMLu+g9BdqqMSplxJqKQmU8ep72QeKZRTvFl2XdwTkIdPQ= Received: by 10.38.149.73 with SMTP id w73mr6991426rnd; Tue, 19 Apr 2005 06:55:05 -0700 (PDT) Received: by 10.38.149.53 with HTTP; Tue, 19 Apr 2005 06:55:03 -0700 (PDT) Message-ID: Date: Tue, 19 Apr 2005 15:55:03 +0200 From: Claus Guttesen To: Eric Anderson In-Reply-To: <426507DC.50409@centtech.com> Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline References: <4264EC60.3020600@centtech.com> <4264EF40.3060900@centtech.com> <4264F8A8.3080405@centtech.com> <426507DC.50409@centtech.com> cc: freebsd-performance@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list Reply-To: Claus Guttesen List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 13:55:06 -0000 > 
What state is nfsd in? Can you send the output of this: > ps -auxw|grep nfsd > while the server is slammed?

elin~%>ps -auxw|grep nfsd
root 378 3,7 0,0 1412  732 ?? D  Tor07am  4:08,82 nfsd: server (nfsd)
root 380 3,5 0,0 1412  732 ?? D  Tor07am  1:56,52 nfsd: server (nfsd)
root 379 3,4 0,0 1412  732 ?? D  Tor07am  2:34,96 nfsd: server (nfsd)
root 381 3,0 0,0 1412  732 ?? D  Tor07am  1:31,72 nfsd: server (nfsd)
root 382 3,0 0,0 1412  732 ?? S  Tor07am  1:14,97 nfsd: server (nfsd)
root 377 2,8 0,0 1412  732 ?? D  Tor07am 10:18,51 nfsd: server (nfsd)
root 383 2,2 0,0 1412  732 ?? S  Tor07am  1:03,79 nfsd: server (nfsd)
root 387 2,0 0,0 1412  732 ?? S  Tor07am  0:41,69 nfsd: server (nfsd)
root 388 2,0 0,0 1412  732 ?? D  Tor07am  0:38,09 nfsd: server (nfsd)
root 384 1,9 0,0 1412  732 ?? S  Tor07am  0:55,95 nfsd: server (nfsd)
root 385 1,8 0,0 1412  732 ?? S  Tor07am  0:50,19 nfsd: server (nfsd)
root 389 1,5 0,0 1412  732 ?? S  Tor07am  0:35,06 nfsd: server (nfsd)
root 386 1,4 0,0 1412  732 ?? S  Tor07am  0:45,38 nfsd: server (nfsd)
root 391 1,2 0,0 1412  732 ?? S  Tor07am  0:27,78 nfsd: server (nfsd)
root 394 1,1 0,0 1412  732 ?? S  Tor07am  0:21,18 nfsd: server (nfsd)
root 395 1,1 0,0 1412  732 ?? S  Tor07am  0:20,38 nfsd: server (nfsd)
root 392 1,0 0,0 1412  732 ?? S  Tor07am  0:24,99 nfsd: server (nfsd)
root 390 1,0 0,0 1412  732 ?? S  Tor07am  0:31,39 nfsd: server (nfsd)
root 393 0,9 0,0 1412  732 ?? S  Tor07am  0:22,27 nfsd: server (nfsd)
root 396 0,9 0,0 1412  732 ?? D  Tor07am  0:19,31 nfsd: server (nfsd)
root 376 0,0 0,0 1540 1032 ?? Is Tor07am  0:00,04 nfsd: master (nfsd)

> I think you are disk bound.. You should not be disk bound at this point with a > good RAID controller.. Good point, it's an atabeast from nexsan. 
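A quick way to summarize output like the above is to tally how many nfsd threads sit in disk wait (D) versus sleep (S) with a short awk pass; the field position of the state column (8th in this ps layout) is an assumption about the format shown, so adjust it if your ps prints differently.

```shell
# Count nfsd server threads by process state. Field 8 is the STAT
# column in the ps -auxw layout shown above (D = disk wait, S = sleep);
# substr() keeps only the first state letter (so "Is" counts as "I").
ps -auxw | awk '/nfsd: server/ { n[substr($8, 1, 1)]++ }
                END { for (s in n) print s, n[s] }'
```

A large D count while the CPU is otherwise idle supports the disk-bound reading of these numbers.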
regards Claus From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 14:00:00 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 0663516A4CE for ; Tue, 19 Apr 2005 14:00:00 +0000 (GMT) Received: from mh1.centtech.com (moat3.centtech.com [207.200.51.50]) by mx1.FreeBSD.org (Postfix) with ESMTP id 5DF3B43D39 for ; Tue, 19 Apr 2005 13:59:59 +0000 (GMT) (envelope-from anderson@centtech.com) Received: from [10.177.171.220] (neutrino.centtech.com [10.177.171.220]) by mh1.centtech.com (8.13.1/8.13.1) with ESMTP id j3JDxw7Z067209; Tue, 19 Apr 2005 08:59:58 -0500 (CDT) (envelope-from anderson@centtech.com) Message-ID: <42650EB2.4040409@centtech.com> Date: Tue, 19 Apr 2005 08:59:14 -0500 From: Eric Anderson User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.7.5) Gecko/20050325 X-Accept-Language: en-us, en MIME-Version: 1.0 To: Claus Guttesen References: <4264EC60.3020600@centtech.com> <4264EF40.3060900@centtech.com> <4264F8A8.3080405@centtech.com> <426507DC.50409@centtech.com> In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.82/840/Mon Apr 18 20:42:09 2005 on mh1.centtech.com X-Virus-Status: Clean cc: freebsd-performance@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 14:00:00 -0000 Claus Guttesen wrote: >>What state is nfsd in? Can you send the output of this: >>ps -auxw|grep nfsd >>while the server is slammed? > > > elin~%>ps -auxw|grep nfsd > root 378 3,7 0,0 1412 732 ?? D Tor07am 4:08,82 nfsd: > server (nfsd) > root 380 3,5 0,0 1412 732 ?? D Tor07am 1:56,52 nfsd: > server (nfsd) > root 379 3,4 0,0 1412 732 ?? 
D Tor07am 2:34,96 nfsd: > server (nfsd) > root 381 3,0 0,0 1412 732 ?? D Tor07am 1:31,72 nfsd: > server (nfsd) > root 382 3,0 0,0 1412 732 ?? S Tor07am 1:14,97 nfsd: > server (nfsd) > root 377 2,8 0,0 1412 732 ?? D Tor07am 10:18,51 nfsd: > server (nfsd) > root 383 2,2 0,0 1412 732 ?? S Tor07am 1:03,79 nfsd: > server (nfsd) > root 387 2,0 0,0 1412 732 ?? S Tor07am 0:41,69 nfsd: > server (nfsd) > root 388 2,0 0,0 1412 732 ?? D Tor07am 0:38,09 nfsd: > server (nfsd) > root 384 1,9 0,0 1412 732 ?? S Tor07am 0:55,95 nfsd: > server (nfsd) > root 385 1,8 0,0 1412 732 ?? S Tor07am 0:50,19 nfsd: > server (nfsd) > root 389 1,5 0,0 1412 732 ?? S Tor07am 0:35,06 nfsd: > server (nfsd) > root 386 1,4 0,0 1412 732 ?? S Tor07am 0:45,38 nfsd: > server (nfsd) > root 391 1,2 0,0 1412 732 ?? S Tor07am 0:27,78 nfsd: > server (nfsd) > root 394 1,1 0,0 1412 732 ?? S Tor07am 0:21,18 nfsd: > server (nfsd) > root 395 1,1 0,0 1412 732 ?? S Tor07am 0:20,38 nfsd: > server (nfsd) > root 392 1,0 0,0 1412 732 ?? S Tor07am 0:24,99 nfsd: > server (nfsd) > root 390 1,0 0,0 1412 732 ?? S Tor07am 0:31,39 nfsd: > server (nfsd) > root 393 0,9 0,0 1412 732 ?? S Tor07am 0:22,27 nfsd: > server (nfsd) > root 396 0,9 0,0 1412 732 ?? D Tor07am 0:19,31 nfsd: > server (nfsd) > root 376 0,0 0,0 1540 1032 ?? Is Tor07am 0:00,04 nfsd: > master (nfsd) > > >>I think you are disk bound.. You should not be disk bound at this point with a >>good RAID controller.. > > > Good point, it's an atabeast from nexsan. Looks like they are indeed waiting on disk.. You could try making two 6 disk raid5 in your controller, then striping those with vinum. That might help. Possibly, if your controller supports it, setting up the array as JBOD, and then use vinum to build your raid 5 (not sure if it will be faster or not). Eric -- ------------------------------------------------------------------------ Eric Anderson Sr. Systems Administrator Centaur Technology A lost ounce of gold may be found, a lost moment of time never. 
------------------------------------------------------------------------ From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 18:31:47 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 0103F16A4D6 for ; Tue, 19 Apr 2005 18:31:47 +0000 (GMT) Received: from joshua.stabbursmoen.no (joshua.stabbursmoen.no [80.203.220.148]) by mx1.FreeBSD.org (Postfix) with ESMTP id 6438343D41 for ; Tue, 19 Apr 2005 18:31:45 +0000 (GMT) (envelope-from eivind@stabbursmoen.no) Received: from drift002v60 (drift-100-v60.i.stabbursmoen.no [10.6.0.100]) by joshua.stabbursmoen.no (Stabbursmoen skole) with ESMTP id F18008131 for ; Tue, 19 Apr 2005 20:33:35 +0200 (CEST) From: "Eivind Hestnes" To: Date: Tue, 19 Apr 2005 20:32:38 +0200 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.2527 Thread-Index: AcVFDinp1W5MeDKnQ9SGYX0/nnOOzg== Message-Id: <20050419183335.F18008131@joshua.stabbursmoen.no> X-Virus-Scanned: by Stabbursmoen skole Subject: Performance Intel Pro 1000 MT (PWLA8490MT) X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 18:31:47 -0000 Hi, I have an Intel Pro 1000 MT (PWLA8490MT) NIC (em(4) driver 1.7.35) installed in a Pentium III 500 Mhz with 512 MB RAM (100 Mhz) running FreeBSD 5.4-RC3. The machine is routing traffic between multiple VLANs. Recently I did a benchmark with/without device polling enabled. Without device polling I was able to transfer roughly 180 Mbit/s. The router however was suffering when doing this benchmark. Interrupt load was peaking 100% - overall the system itself was quite unusable (_very_ high system load). 
With device polling enabled the interrupt load kept stable around 40-50% and the max transfer rate was nearly 70 Mbit/s. Not very scientific tests, but they gave me a pointer. However, a Pentium III in combination with a good NIC should in my opinion make a respectable router, but I'm not satisfied with the results. The pf ruleset is minimal, and the kernel is stripped and customized for best performance. Any tweaking tips for making my router perform better? Debug information:

eivind@core-gw:~$ sysctl -a | grep kern.polling
kern.polling.burst: 150
kern.polling.each_burst: 5
kern.polling.burst_max: 150
kern.polling.idle_poll: 0
kern.polling.poll_in_trap: 0
kern.polling.user_frac: 50
kern.polling.reg_frac: 20
kern.polling.short_ticks: 1411
kern.polling.lost_polls: 720
kern.polling.pending_polls: 0
kern.polling.residual_burst: 0
kern.polling.handlers: 0
kern.polling.enable: 1
kern.polling.phase: 0
kern.polling.suspect: 186
kern.polling.stalled: 0
kern.polling.idlepoll_sleeping: 1

eivind@core-gw:~$ cat /etc/sysctl.conf
net.inet.ip.forwarding=1
net.inet.ip.fastforwarding=1
net.inet.carp.preempt=1
kern.polling.enable=1

HZ is set to 1000 as recommended in the README for the em(4) driver. The driver is of course compiled into the kernel. 
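As a starting point for further tuning, a polling setup with a higher tick rate might look like the sketch below. The specific values here are untested assumptions to benchmark against the baseline above, not known-good settings for this hardware.

```shell
# Hypothetical polling tuning sketch for a 5.x router -- every value
# below is a guess to experiment with, not a recommendation.

# Kernel config (requires a kernel rebuild):
#   options DEVICE_POLLING
#   options HZ=2000              # more polls/sec than the README's 1000

# /etc/sysctl.conf additions:
#   kern.polling.enable=1
#   kern.polling.burst_max=300   # allow larger bursts per tick
#   kern.polling.user_frac=30    # give the kernel more of each tick
```

Changing one knob at a time and re-running the same transfer test is the only way to tell which of these actually moves the 70 Mbit/s figure.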
Regards, Eivind Hestnes From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 19:44:16 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id D4D8816A4CE for ; Tue, 19 Apr 2005 19:44:16 +0000 (GMT) Received: from smtp815.mail.sc5.yahoo.com (smtp815.mail.sc5.yahoo.com [66.163.170.1]) by mx1.FreeBSD.org (Postfix) with SMTP id 9252543D39 for ; Tue, 19 Apr 2005 19:44:16 +0000 (GMT) (envelope-from noackjr@alumni.rice.edu) Received: from unknown (HELO optimator.noacks.org) (noacks@swbell.net@70.240.205.64 with login) by smtp815.mail.sc5.yahoo.com with SMTP; 19 Apr 2005 19:14:18 -0000 Received: from localhost (localhost [127.0.0.1]) by optimator.noacks.org (Postfix) with ESMTP id 1BA576143; Tue, 19 Apr 2005 14:14:17 -0500 (CDT) Received: from optimator.noacks.org ([127.0.0.1]) by localhost (optimator.noacks.org [127.0.0.1]) (amavisd-new, port 10024) with LMTP id 20435-15-2; Tue, 19 Apr 2005 14:14:15 -0500 (CDT) Received: from [127.0.0.1] (optimator [192.168.1.11]) by optimator.noacks.org (Postfix) with ESMTP id BEC3760D5; Tue, 19 Apr 2005 14:14:15 -0500 (CDT) Message-ID: <42655887.7060203@alumni.rice.edu> Date: Tue, 19 Apr 2005 14:14:15 -0500 From: Jon Noack User-Agent: Mozilla Thunderbird 1.0.2 (Windows/20050317) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Eivind Hestnes References: <20050419183335.F18008131@joshua.stabbursmoen.no> In-Reply-To: <20050419183335.F18008131@joshua.stabbursmoen.no> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: amavisd-new at noacks.org cc: performance@FreeBSD.org Subject: Re: Performance Intel Pro 1000 MT (PWLA8490MT) X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list Reply-To: noackjr@alumni.rice.edu List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , 
X-List-Received-Date: Tue, 19 Apr 2005 19:44:16 -0000 On 4/19/2005 1:32 PM, Eivind Hestnes wrote: > I have an Intel Pro 1000 MT (PWLA8490MT) NIC (em(4) driver 1.7.35) installed > in a Pentium III 500 Mhz with 512 MB RAM (100 Mhz) running FreeBSD 5.4-RC3. > The machine is routing traffic between multiple VLANs. Recently I did a > benchmark with/without device polling enabled. Without device polling I was > able to transfer roughly 180 Mbit/s. The router however was suffering when > doing this benchmark. Interrupt load was peaking 100% - overall the system > itself was quite unusable (_very_ high system load). With device polling > enabled the interrupt kept stable around 40-50% and max transfer rate was > nearly 70 Mbit/s. Not very scientific tests, but it gave me a pin point. The card is plugged into a 32-bit PCI slot, correct? If so, 180 Mbit/s is decent. I have a gigabit LAN at home using Pro 1000 MTs (in 32-bit PCI slots) and get NFS transfers maxing out around 23 MB/s, which is ~180 Mbit/s. Gigabit performance with 32-bit cards is atrocious. It reminds me of the old 100 Mbit/s ISA cards... > > > HZ set to 1000 as recommended in README for the em(4) driver. Driver is of > cource compiled into kernel. You'll need HZ set to more than 1000 for gigabit; bump it up to at least 2000. That should increase polling throughput a lot. I'm not sure about other polling parameters, however. 
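To see why HZ (together with kern.polling.burst_max) matters so much here, a first-order sketch in Python. This is a simplification — the real kern.polling accounting (each_burst, user_frac, idle polling) is more involved — and the 64-byte small-frame size below is an illustrative assumption, not a figure from the thread:

```python
# Back-of-the-envelope ceiling for device polling throughput.
# Sketch/assumption: each clock tick the driver may pick up at most
# burst_max packets, so the packet rate is bounded by roughly
# HZ * burst_max.  (kern.polling's real accounting is more involved.)

def polling_pps_ceiling(hz, burst_max):
    """Rough upper bound on packets/sec a polled NIC can move."""
    return hz * burst_max

def mbps(pps, frame_bytes):
    """Convert a packet rate to Mbit/s for a given frame size."""
    return pps * frame_bytes * 8 / 1e6

# Eivind's settings: HZ=1000, kern.polling.burst_max=150.
ceiling = polling_pps_ceiling(1000, 150)   # 150000 pps
full_frames = mbps(ceiling, 1500)          # 1800.0 Mbit/s at 1500-byte frames
small_frames = mbps(ceiling, 64)           # 76.8 Mbit/s at 64-byte frames
print(ceiling, full_frames, small_frames)
```

With these settings the ceiling is 150 kpps: at full-size frames that is plenty, but with small frames it lands in the same range as the ~70 Mbit/s measured with polling on — which is why bumping both HZ and the burst limits was suggested.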
Jon From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 21:03:20 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id C71C116A4CE for ; Tue, 19 Apr 2005 21:03:20 +0000 (GMT) Received: from joshua.stabbursmoen.no (joshua.stabbursmoen.no [80.203.220.148]) by mx1.FreeBSD.org (Postfix) with ESMTP id 3022E43D58 for ; Tue, 19 Apr 2005 21:03:20 +0000 (GMT) (envelope-from eivind@stabbursmoen.no) Received: from [10.5.0.116] (vpnclient-116-v50.i.stabbursmoen.no [10.5.0.116]) B074D80D7; Tue, 19 Apr 2005 23:05:10 +0200 (CEST) Message-ID: <4265722F.5000403@stabbursmoen.no> Date: Tue, 19 Apr 2005 23:03:43 +0200 From: Eivind Hestnes X-Accept-Language: en-us, en MIME-Version: 1.0 To: Jerald Von Dipple , performance@freebsd.org References: <20050419183335.F18008131@joshua.stabbursmoen.no> <5c05f1805041911351d2bd98e@mail.gmail.com> In-Reply-To: <5c05f1805041911351d2bd98e@mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: by Stabbursmoen skole Subject: Re: Performance Intel Pro 1000 MT (PWLA8490MT) X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 21:03:20 -0000 Thanks for the advice. Didn't do any difference, though.. Perhaps I should try to increase the polling frequency.. - E. Jerald Von Dipple wrote: >Hey man > >You need to bump > >kern.polling.burst: 150 > >Upto at least 150000 > >Regards, >Jerald Von D. > >On 4/19/05, Eivind Hestnes wrote: > > >>Hi, >> >>I have an Intel Pro 1000 MT (PWLA8490MT) NIC (em(4) driver 1.7.35) installed >>in a Pentium III 500 Mhz with 512 MB RAM (100 Mhz) running FreeBSD 5.4-RC3. >>The machine is routing traffic between multiple VLANs. 
Recently I did a >>benchmark with/without device polling enabled. Without device polling I was >>able to transfer roughly 180 Mbit/s. The router however was suffering when >>doing this benchmark. Interrupt load was peaking 100% - overall the system >>itself was quite unusable (_very_ high system load). With device polling >>enabled the interrupt kept stable around 40-50% and max transfer rate was >>nearly 70 Mbit/s. Not very scientific tests, but it gave me a pin point. >> >>However, a Pentium III in combination with a good NIC should in my opinion >>be a respectful router.. but I'm not satisfied with the results. The pf >>ruleset is like nothing, and the kernel is stripped and customized for best >>performance. >> >>Any tweaking tips for making my router perform better? >> >>Debug information: >>eivind@core-gw:~$ sysctl -a | grep kern.polling >>kern.polling.burst: 150 >>kern.polling.each_burst: 5 >>kern.polling.burst_max: 150 >>kern.polling.idle_poll: 0 >>kern.polling.poll_in_trap: 0 >>kern.polling.user_frac: 50 >>kern.polling.reg_frac: 20 >>kern.polling.short_ticks: 1411 >>kern.polling.lost_polls: 720 >>kern.polling.pending_polls: 0 >>kern.polling.residual_burst: 0 >>kern.polling.handlers: 0 >>kern.polling.enable: 1 >>kern.polling.phase: 0 >>kern.polling.suspect: 186 >>kern.polling.stalled: 0 >>kern.polling.idlepoll_sleeping: 1 >> >>eivind@core-gw:~$ cat /etc/sysctl.conf >>net.inet.ip.forwarding=1 >>net.inet.ip.fastforwarding=1 >>net.inet.carp.preempt=1 >>kern.polling.enable=1 >> >>HZ set to 1000 as recommended in README for the em(4) driver. Driver is of >>cource compiled into kernel. 
>> >>Regards, >>Eivind Hestnes >> >>_______________________________________________ >>freebsd-performance@freebsd.org mailing list >>http://lists.freebsd.org/mailman/listinfo/freebsd-performance >>To unsubscribe, send any mail to "freebsd-performance-unsubscribe@freebsd.org" >> >> >> From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 21:03:42 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 6B6C116A4CF for ; Tue, 19 Apr 2005 21:03:42 +0000 (GMT) Received: from joshua.stabbursmoen.no (joshua.stabbursmoen.no [80.203.220.148]) by mx1.FreeBSD.org (Postfix) with ESMTP id C2C3A43D1F for ; Tue, 19 Apr 2005 21:03:41 +0000 (GMT) (envelope-from eivind@stabbursmoen.no) Received: from [10.5.0.116] (vpnclient-116-v50.i.stabbursmoen.no [10.5.0.116]) B518A81CF; Tue, 19 Apr 2005 23:05:37 +0200 (CEST) Message-ID: <4265724A.1040705@stabbursmoen.no> Date: Tue, 19 Apr 2005 23:04:10 +0200 From: Eivind Hestnes X-Accept-Language: en-us, en MIME-Version: 1.0 To: noackjr@alumni.rice.edu References: <20050419183335.F18008131@joshua.stabbursmoen.no> <42655887.7060203@alumni.rice.edu> In-Reply-To: <42655887.7060203@alumni.rice.edu> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: by Stabbursmoen skole cc: performance@FreeBSD.org Subject: Re: Performance Intel Pro 1000 MT (PWLA8490MT) X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 21:03:42 -0000 It's correct that the card is plugged into a 32-bit 33 MHz PCI slot. If I'm not wrong, 33 MHz PCI slots have a peak transfer rate of 133 MB/s. However, when pushing 180 Mbit/s without polling enabled the system is barely responsive due to the interrupt load. 
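The 133 MB/s figure quoted above can be sanity-checked with a quick sketch. Note this is the theoretical peak only — bus arbitration, address phases, and sharing the bus with other devices reduce it substantially in practice — and the halving for store-and-forward routing is my own assumption, not something stated in the thread:

```python
# Theoretical peak of a 32-bit, 33 MHz PCI bus: 4 bytes per bus clock.
bus_hz = 33_000_000
bytes_per_clock = 4                                # 32-bit data path

peak_bytes_per_sec = bus_hz * bytes_per_clock      # 132000000 B/s
peak_mbyte_per_sec = peak_bytes_per_sec / 1e6      # 132.0 MB/s (often quoted as 133)
peak_mbit_per_sec = peak_bytes_per_sec * 8 / 1e6   # 1056.0 Mbit/s

# Assumption: a router crosses the bus twice per forwarded packet
# (NIC -> RAM, then RAM -> NIC), so the forwarding ceiling is at most
# half the raw peak, before any arbitration/contention overhead.
forwarding_ceiling_mbit = peak_mbit_per_sec / 2    # 528.0 Mbit/s
print(peak_mbyte_per_sec, peak_mbit_per_sec, forwarding_ceiling_mbit)
```

So the raw bus is not the limit at 180 Mbit/s; the interrupt load and per-transaction bus overhead (as Bosko notes later in the thread) are the more plausible bottlenecks.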
I'll try to increase the polling frequency too see if this increases the bandwidth with polling enabled.. Thanks for the advice btw.. - E. Jon Noack wrote: > On 4/19/2005 1:32 PM, Eivind Hestnes wrote: > >> I have an Intel Pro 1000 MT (PWLA8490MT) NIC (em(4) driver 1.7.35) >> installed >> in a Pentium III 500 Mhz with 512 MB RAM (100 Mhz) running FreeBSD >> 5.4-RC3. >> The machine is routing traffic between multiple VLANs. Recently I did a >> benchmark with/without device polling enabled. Without device polling >> I was >> able to transfer roughly 180 Mbit/s. The router however was suffering >> when >> doing this benchmark. Interrupt load was peaking 100% - overall the >> system >> itself was quite unusable (_very_ high system load). With device polling >> enabled the interrupt kept stable around 40-50% and max transfer rate >> was >> nearly 70 Mbit/s. Not very scientific tests, but it gave me a pin point. > > > The card is plugged into a 32-bit PCI slot, correct? If so, 180 > Mbit/s is decent. I have a gigabit LAN at home using Pro 1000 MTs (in > 32-bit PCI slots) and get NFS transfers maxing out around 23 MB/s, > which is ~180 Mbit/s. Gigabit performance with 32-bit cards is > atrocious. It reminds me of the old 100 Mbit/s ISA cards... > >> >> >> HZ set to 1000 as recommended in README for the em(4) driver. Driver >> is of >> cource compiled into kernel. > > > You'll need HZ set to more than 1000 for gigabit; bump it up to at > least 2000. That should increase polling throughput a lot. I'm not > sure about other polling parameters, however. 
> > Jon From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 21:10:27 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 37D5816A4D4 for ; Tue, 19 Apr 2005 21:10:27 +0000 (GMT) Received: from joshua.stabbursmoen.no (joshua.stabbursmoen.no [80.203.220.148]) by mx1.FreeBSD.org (Postfix) with ESMTP id 5AD4943D46 for ; Tue, 19 Apr 2005 21:10:26 +0000 (GMT) (envelope-from eivind@stabbursmoen.no) Received: from [10.5.0.116] (vpnclient-116-v50.i.stabbursmoen.no [10.5.0.116]) A8BCE80D7; Tue, 19 Apr 2005 23:12:16 +0200 (CEST) Message-ID: <426573D9.8080607@stabbursmoen.no> Date: Tue, 19 Apr 2005 23:10:49 +0200 From: Eivind Hestnes X-Accept-Language: en-us, en MIME-Version: 1.0 To: Michael DeMan References: <20050419183335.F18008131@joshua.stabbursmoen.no> <5c05f1805041911351d2bd98e@mail.gmail.com> <42656CA0.9040403@stabbursmoen.no> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: by Stabbursmoen skole cc: Jerald Von Dipple cc: performance@freebsd.org Subject: Re: Performance Intel Pro 1000 MT (PWLA8490MT) X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 21:10:27 -0000 It sounds sensible, but I have also learned that throwing hardware on a problem is not always right.. Compared to shiny boxes from Cisco, HP etc. a 500 Mhz router is for heavy duty networks. I would try some more tweaking before replacing the box with some more spectular hardware. - E. Michael DeMan wrote: > The rule of thumb I have seen on Intel/UNIX based routers is that you > want 1GHz of CPU for every gigabit of throughput. > > Also, on gigabit NICs, make sure you have a 64-bit PCI bus on the > motherboard. > > > > Michael F. 
DeMan > Director of Technology > OpenAccess Network Services > Bellingham, WA 98225 > michael@staff.openaccess.org > 360-647-0785 > On Apr 19, 2005, at 1:40 PM, Eivind Hestnes wrote: > >> Thanks for the advice. Didn't do any difference, though.. Perhaps I >> should try to increase the polling frequency.. >> >> Jerald Von Dipple wrote: >> >>> Hey man >>> >>> You need to bump >>> >>> kern.polling.burst: 150 >>> >>> Upto at least 150000 >>> >>> Regards, >>> Jerald Von D. >>> >>> On 4/19/05, Eivind Hestnes wrote: >>> >>>> Hi, >>>> >>>> I have an Intel Pro 1000 MT (PWLA8490MT) NIC (em(4) driver 1.7.35) >>>> installed >>>> in a Pentium III 500 Mhz with 512 MB RAM (100 Mhz) running FreeBSD >>>> 5.4-RC3. >>>> The machine is routing traffic between multiple VLANs. Recently I >>>> did a >>>> benchmark with/without device polling enabled. Without device >>>> polling I was >>>> able to transfer roughly 180 Mbit/s. The router however was >>>> suffering when >>>> doing this benchmark. Interrupt load was peaking 100% - overall the >>>> system >>>> itself was quite unusable (_very_ high system load). With device >>>> polling >>>> enabled the interrupt kept stable around 40-50% and max transfer >>>> rate was >>>> nearly 70 Mbit/s. Not very scientific tests, but it gave me a pin >>>> point. >>>> >>>> However, a Pentium III in combination with a good NIC should in my >>>> opinion >>>> be a respectful router.. but I'm not satisfied with the results. >>>> The pf >>>> ruleset is like nothing, and the kernel is stripped and customized >>>> for best >>>> performance. >>>> >>>> Any tweaking tips for making my router perform better? 
>>>> >>>> Debug information: >>>> eivind@core-gw:~$ sysctl -a | grep kern.polling >>>> kern.polling.burst: 150 >>>> kern.polling.each_burst: 5 >>>> kern.polling.burst_max: 150 >>>> kern.polling.idle_poll: 0 >>>> kern.polling.poll_in_trap: 0 >>>> kern.polling.user_frac: 50 >>>> kern.polling.reg_frac: 20 >>>> kern.polling.short_ticks: 1411 >>>> kern.polling.lost_polls: 720 >>>> kern.polling.pending_polls: 0 >>>> kern.polling.residual_burst: 0 >>>> kern.polling.handlers: 0 >>>> kern.polling.enable: 1 >>>> kern.polling.phase: 0 >>>> kern.polling.suspect: 186 >>>> kern.polling.stalled: 0 >>>> kern.polling.idlepoll_sleeping: 1 >>>> >>>> eivind@core-gw:~$ cat /etc/sysctl.conf >>>> net.inet.ip.forwarding=1 >>>> net.inet.ip.fastforwarding=1 >>>> net.inet.carp.preempt=1 >>>> kern.polling.enable=1 >>>> >>>> HZ set to 1000 as recommended in README for the em(4) driver. >>>> Driver is of >>>> cource compiled into kernel. >>>> >>>> Regards, >>>> Eivind Hestnes >>>> >>>> _______________________________________________ >>>> freebsd-performance@freebsd.org mailing list >>>> http://lists.freebsd.org/mailman/listinfo/freebsd-performance >>>> To unsubscribe, send any mail to >>>> "freebsd-performance-unsubscribe@freebsd.org" >>>> >>>> >> >> _______________________________________________ >> freebsd-net@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-net >> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" >> > > _______________________________________________ > freebsd-net@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 21:11:37 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id C420616A4CE for ; Tue, 19 Apr 2005 21:11:37 +0000 (GMT) Received: from 
silver.he.iki.fi (helenius.fi [193.64.42.241]) by mx1.FreeBSD.org (Postfix) with ESMTP id 4DD7743D3F for ; Tue, 19 Apr 2005 21:11:36 +0000 (GMT) (envelope-from pete@he.iki.fi) Received: from [193.64.42.134] (h86.vuokselantie10.fi [193.64.42.134]) by silver.he.iki.fi (8.13.1/8.11.4) with ESMTP id j3JLBVEg091714; Wed, 20 Apr 2005 00:11:31 +0300 (EEST) (envelope-from pete@he.iki.fi) Message-ID: <42657420.3040104@he.iki.fi> Date: Wed, 20 Apr 2005 00:12:00 +0300 From: Petri Helenius User-Agent: Mozilla Thunderbird 1.0.2 (Windows/20050317) X-Accept-Language: en-us, en MIME-Version: 1.0 To: Eivind Hestnes References: <20050419183335.F18008131@joshua.stabbursmoen.no> <42655887.7060203@alumni.rice.edu> <4265724A.1040705@stabbursmoen.no> In-Reply-To: <4265724A.1040705@stabbursmoen.no> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit cc: performance@freebsd.org Subject: Re: Performance Intel Pro 1000 MT (PWLA8490MT) X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 21:11:37 -0000 Eivind Hestnes wrote: > It's correct that the card is plugged into a 32-bit 33 Mhz PCI slot. > If i'm not wrong, 33 Mhz PCI slots has a peak transfer rate of 133 > MByte/s. However, when pulling 180 mbit/s without the polling enabled > the system is very little responsive due to the interrupt load. I'll > try to increase the polling frequency too see if this increases the > bandwidth with polling enabled.. Thanks for the advice btw.. > There is something "interesting" going on in the em driver but I haven't had the time to profile it properly and Intel has been less than forthcoming with the specification which makes it more challenging to try to optimize the driver further. Pete > - E. 
> > Jon Noack wrote: > >> On 4/19/2005 1:32 PM, Eivind Hestnes wrote: >> >>> I have an Intel Pro 1000 MT (PWLA8490MT) NIC (em(4) driver 1.7.35) >>> installed >>> in a Pentium III 500 Mhz with 512 MB RAM (100 Mhz) running FreeBSD >>> 5.4-RC3. >>> The machine is routing traffic between multiple VLANs. Recently I did a >>> benchmark with/without device polling enabled. Without device >>> polling I was >>> able to transfer roughly 180 Mbit/s. The router however was >>> suffering when >>> doing this benchmark. Interrupt load was peaking 100% - overall the >>> system >>> itself was quite unusable (_very_ high system load). With device >>> polling >>> enabled the interrupt kept stable around 40-50% and max transfer >>> rate was >>> nearly 70 Mbit/s. Not very scientific tests, but it gave me a pin >>> point. >> >> >> >> The card is plugged into a 32-bit PCI slot, correct? If so, 180 >> Mbit/s is decent. I have a gigabit LAN at home using Pro 1000 MTs >> (in 32-bit PCI slots) and get NFS transfers maxing out around 23 >> MB/s, which is ~180 Mbit/s. Gigabit performance with 32-bit cards is >> atrocious. It reminds me of the old 100 Mbit/s ISA cards... >> >>> >>> >>> HZ set to 1000 as recommended in README for the em(4) driver. Driver >>> is of >>> cource compiled into kernel. >> >> >> >> You'll need HZ set to more than 1000 for gigabit; bump it up to at >> least 2000. That should increase polling throughput a lot. I'm not >> sure about other polling parameters, however. 
>> >> Jon > > > > _______________________________________________ > freebsd-performance@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-performance > To unsubscribe, send any mail to > "freebsd-performance-unsubscribe@freebsd.org" > From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 21:42:13 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 7637E16A4CE for ; Tue, 19 Apr 2005 21:42:13 +0000 (GMT) Received: from stephanie.unixdaemons.com (stephanie.unixdaemons.com [67.18.111.194]) by mx1.FreeBSD.org (Postfix) with ESMTP id EB5AC43D31 for ; Tue, 19 Apr 2005 21:42:12 +0000 (GMT) (envelope-from bmilekic@technokratis.com) Received: from stephanie.unixdaemons.com (bmilekic@localhost.unixdaemons.com [127.0.0.1])j3JLg919004885; Tue, 19 Apr 2005 17:42:09 -0400 (EDT) Received: (from bmilekic@localhost) by stephanie.unixdaemons.com (8.13.4/8.12.1/Submit) id j3JLg9mR004884; Tue, 19 Apr 2005 17:42:09 -0400 (EDT) (envelope-from bmilekic@technokratis.com) X-Authentication-Warning: stephanie.unixdaemons.com: bmilekic set sender to bmilekic@technokratis.com using -f Date: Tue, 19 Apr 2005 17:42:09 -0400 From: Bosko Milekic To: Eivind Hestnes Message-ID: <20050419214209.GA3656@technokratis.com> References: <20050419183335.F18008131@joshua.stabbursmoen.no> <42655887.7060203@alumni.rice.edu> <4265724A.1040705@stabbursmoen.no> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4265724A.1040705@stabbursmoen.no> User-Agent: Mutt/1.4.2.1i cc: performance@freebsd.org Subject: Re: Performance Intel Pro 1000 MT (PWLA8490MT) X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 21:42:13 -0000 On Tue, Apr 19, 2005 at 
11:04:10PM +0200, Eivind Hestnes wrote: > It's correct that the card is plugged into a 32-bit 33 Mhz PCI slot. If > i'm not wrong, 33 Mhz PCI slots has a peak transfer rate of 133 MByte/s. > However, when pulling 180 mbit/s without the polling enabled the system > is very little responsive due to the interrupt load. I'll try to > increase the polling frequency too see if this increases the bandwidth > with polling enabled.. Thanks for the advice btw.. > > - E. You are neglecting bus acquisition cycles as well as bus contention. Likely your 32-bit legacy PCI bus is shared between many devices. 1Gbps for small packets is basically hopeless and you're probably stalling on the bus. Basically, a gigE card in a router you want to perform well in anything but a high-speed PCI-X bus (hopefully little or not contested) has been a terrible waste of money, in my experience. -Bosko From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 21:46:47 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id B10F816A4CE for ; Tue, 19 Apr 2005 21:46:47 +0000 (GMT) Received: from stephanie.unixdaemons.com (stephanie.unixdaemons.com [67.18.111.194]) by mx1.FreeBSD.org (Postfix) with ESMTP id 52A2843D1D for ; Tue, 19 Apr 2005 21:46:47 +0000 (GMT) (envelope-from bmilekic@technokratis.com) Received: from stephanie.unixdaemons.com (bmilekic@localhost.unixdaemons.com [127.0.0.1])j3JLki1P005886; Tue, 19 Apr 2005 17:46:44 -0400 (EDT) Received: (from bmilekic@localhost) by stephanie.unixdaemons.com (8.13.4/8.12.1/Submit) id j3JLkiBC005885; Tue, 19 Apr 2005 17:46:44 -0400 (EDT) (envelope-from bmilekic@technokratis.com) X-Authentication-Warning: stephanie.unixdaemons.com: bmilekic set sender to bmilekic@technokratis.com using -f Date: Tue, 19 Apr 2005 17:46:44 -0400 From: Bosko Milekic To: Petri Helenius Message-ID: <20050419214644.GB3656@technokratis.com> References: 
<20050419183335.F18008131@joshua.stabbursmoen.no> <42655887.7060203@alumni.rice.edu> <4265724A.1040705@stabbursmoen.no> <42657420.3040104@he.iki.fi> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <42657420.3040104@he.iki.fi> User-Agent: Mutt/1.4.2.1i cc: Eivind Hestnes cc: performance@freebsd.org Subject: Re: Performance Intel Pro 1000 MT (PWLA8490MT) X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 21:46:47 -0000 My experience with 6.0-CURRENT has been that I am able to push at least about 400kpps INTO THE KERNEL from a gigE em card on its own 64-bit PCI-X 133MHz bus (i.e., the bus is uncontested) and that's basically out of the box GENERIC on a dual-CPU box with HTT disabled and no debugging options, with small 50-60 byte UDP packets. I haven't measured how many I can push THROUGH to a second card and forward. That will probably reduce numbers. My tests were done without polling so with very high interrupt load and that also sucks when you have a high-traffic scenario. But still, way better than your numbers. Also, make sure you are not bottlenecking on the sender-side. e.g., make sure that your sender can actually push out more PPS than what you appear to be bottlenecking on in the router. -Bosko On Wed, Apr 20, 2005 at 12:12:00AM +0300, Petri Helenius wrote: > Eivind Hestnes wrote: > > >It's correct that the card is plugged into a 32-bit 33 Mhz PCI slot. > >If i'm not wrong, 33 Mhz PCI slots has a peak transfer rate of 133 > >MByte/s. However, when pulling 180 mbit/s without the polling enabled > >the system is very little responsive due to the interrupt load. I'll > >try to increase the polling frequency too see if this increases the > >bandwidth with polling enabled.. Thanks for the advice btw.. 
> > > There is something "interesting" going on in the em driver but I haven't > had the time to profile it properly and Intel has been less than > forthcoming with the specification which makes it more challenging to > try to optimize the driver further. > > Pete > > >- E. > > > >Jon Noack wrote: > > > >>On 4/19/2005 1:32 PM, Eivind Hestnes wrote: > >> > >>>I have an Intel Pro 1000 MT (PWLA8490MT) NIC (em(4) driver 1.7.35) > >>>installed > >>>in a Pentium III 500 Mhz with 512 MB RAM (100 Mhz) running FreeBSD > >>>5.4-RC3. > >>>The machine is routing traffic between multiple VLANs. Recently I did a > >>>benchmark with/without device polling enabled. Without device > >>>polling I was > >>>able to transfer roughly 180 Mbit/s. The router however was > >>>suffering when > >>>doing this benchmark. Interrupt load was peaking 100% - overall the > >>>system > >>>itself was quite unusable (_very_ high system load). With device > >>>polling > >>>enabled the interrupt kept stable around 40-50% and max transfer > >>>rate was > >>>nearly 70 Mbit/s. Not very scientific tests, but it gave me a pin > >>>point. > >> > >> > >> > >>The card is plugged into a 32-bit PCI slot, correct? If so, 180 > >>Mbit/s is decent. I have a gigabit LAN at home using Pro 1000 MTs > >>(in 32-bit PCI slots) and get NFS transfers maxing out around 23 > >>MB/s, which is ~180 Mbit/s. Gigabit performance with 32-bit cards is > >>atrocious. It reminds me of the old 100 Mbit/s ISA cards... > >> > >>> > >>> > >>>HZ set to 1000 as recommended in README for the em(4) driver. Driver > >>>is of > >>>cource compiled into kernel. > >> > >> > >> > >>You'll need HZ set to more than 1000 for gigabit; bump it up to at > >>least 2000. That should increase polling throughput a lot. I'm not > >>sure about other polling parameters, however. 
> >> > >>Jon > > > > > > > >_______________________________________________ > >freebsd-performance@freebsd.org mailing list > >http://lists.freebsd.org/mailman/listinfo/freebsd-performance > >To unsubscribe, send any mail to > >"freebsd-performance-unsubscribe@freebsd.org" > > > > _______________________________________________ > freebsd-performance@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-performance > To unsubscribe, send any mail to > "freebsd-performance-unsubscribe@freebsd.org" -- Bosko Milekic bmilekic@technokratis.com bmilekic@FreeBSD.org From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 22:09:22 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 9431516A4CE for ; Tue, 19 Apr 2005 22:09:22 +0000 (GMT) Received: from web41205.mail.yahoo.com (web41205.mail.yahoo.com [66.218.93.38]) by mx1.FreeBSD.org (Postfix) with SMTP id 2D43B43D31 for ; Tue, 19 Apr 2005 22:09:22 +0000 (GMT) (envelope-from arne_woerner@yahoo.com) Received: (qmail 51880 invoked by uid 60001); 19 Apr 2005 22:09:22 -0000 Comment: DomainKeys? 
See http://antispam.yahoo.com/domainkeys DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; b=dbRNnnDhhN5bCy6ebeP0pZf8obeJINxWi/LUtllHrFH0xym9ZkpDWLb30ASKShOVLb8qM25rXBRF5Nl4XHdQMkNQOLDOWXjGR23YZOKS/E3EiNRi2iblPjI7n0qo9VEfZoN5Kubu3KG0ewLwCHaEDc9Vr4Vn9jdymxwTiFW/pxE= ; Message-ID: <20050419220922.51878.qmail@web41205.mail.yahoo.com> Received: from [83.129.186.139] by web41205.mail.yahoo.com via HTTP; Tue, 19 Apr 2005 15:09:22 PDT Date: Tue, 19 Apr 2005 15:09:22 -0700 (PDT) From: Arne "Wörner" To: performance@freebsd.org In-Reply-To: <20050419214644.GB3656@technokratis.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii cc: Eivind Hestnes Subject: Re: Performance Intel Pro 1000 MT (PWLA8490MT) X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 22:09:22 -0000 I would try to transfer from /dev/zero to /dev/null via the network interface. It might be interesting, 1. if it is a switched network, 2. if there is a lot of concurrency between the network nodes, and 3. if there are really a lot of PCI cards fighting for the bus (btw. when I multiply 33e6, 8 and 32, I get 8.4e9... *sniff*)... There was a similar problem on this list some time ago (maybe that helps?): http://docs.freebsd.org/cgi/getmsg.cgi?fetch=0+0+archive/2004/freebsd-performance/20041121.freebsd-performance http://docs.freebsd.org/cgi/getmsg.cgi?fetch=0+0+archive/2004/freebsd-performance/20041128.freebsd-performance -Arne __________________________________ Do you Yahoo!? Plan great trips with Yahoo! Travel: Now over 17,000 guides! 
http://travel.yahoo.com/p-travelguide From owner-freebsd-performance@FreeBSD.ORG Wed Apr 20 03:19:52 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 95BFD16A4CF for ; Wed, 20 Apr 2005 03:19:52 +0000 (GMT) Received: from mailout2.pacific.net.au (mailout2.pacific.net.au [61.8.0.85]) by mx1.FreeBSD.org (Postfix) with ESMTP id A2E6D43D2F for ; Wed, 20 Apr 2005 03:19:51 +0000 (GMT) (envelope-from bde@zeta.org.au) Received: from mailproxy1.pacific.net.au (mailproxy1.pacific.net.au [61.8.0.86])j3K3Joml021452; Wed, 20 Apr 2005 13:19:50 +1000 Received: from katana.zip.com.au (katana.zip.com.au [61.8.7.246]) j3K3JiIo018373; Wed, 20 Apr 2005 13:19:45 +1000 Date: Wed, 20 Apr 2005 13:19:44 +1000 (EST) From: Bruce Evans X-X-Sender: bde@delplex.bde.org To: Bosko Milekic In-Reply-To: <20050419214644.GB3656@technokratis.com> Message-ID: <20050420123251.A85348@delplex.bde.org> References: <20050419183335.F18008131@joshua.stabbursmoen.no> <42655887.7060203@alumni.rice.edu> <4265724A.1040705@stabbursmoen.no> <42657420.3040104@he.iki.fi> <20050419214644.GB3656@technokratis.com> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed cc: Eivind Hestnes cc: performance@freebsd.org cc: Petri Helenius Subject: Re: Performance Intel Pro 1000 MT (PWLA8490MT) X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 20 Apr 2005 03:19:52 -0000 On Tue, 19 Apr 2005, Bosko Milekic wrote: > My experience with 6.0-CURRENT has been that I am able to push at > least about 400kpps INTO THE KERNEL from a gigE em card on its own > 64-bit PCI-X 133MHz bus (i.e., the bus is uncontested) and that's A 64-bit bus doesn't seem to be essential for reasonable performance. 
I get about 210 kpps (receive) for a bge card on an old Athlon system with a 32-bit PCI 33MHz bus. Overclocking this bus speeds up at least sending almost proportionally to the overclocking :-). This is with my version of an old version of -current, with no mpsafenet, no driver tuning, and no mistuning (no INVARIANTS, etc., no POLLING, no HZ > 100). Sending goes slightly slower (about 200 kpps).

I get about 220 kpps (send) for a much-maligned (last year) sk non-card on a much-maligned newer Athlon (nForce2) system with a 32-bit PCI 33MHz bus. This is with a similar setup but with sending in the driver changed to not use the braindamaged sk interrupt moderation. The changes don't improve the throughput significantly since it is limited by the sk or bus to 4 us per packet, but they reduce interrupt overhead.

> basically out of the box GENERIC on a dual-CPU box with HTT disabled
> and no debugging options, with small 50-60 byte UDP packets.

I used an old version of ttcp for testing. A small packet for me is 5 bytes of UDP data since that is the minimum that ttcp will send, but I repeated the tests with a packet size of 50 for comparison. For the sk, the throughput with a packet size of 5 is only slightly larger (240 kpps).

There are some kernel deficiencies which at best break testing using simple programs like ttcp and at worst reduce throughput:

- when the tx queue fills up, the application should stop sending, at least in the udp case, but there is no way for userland to tell when the queue becomes non-full so that it is useful to try to add to it -- select() doesn't work for this. Applications either have to waste cycles by retrying immediately or waste send slots by retrying after a short sleep. The old version of ttcp that I use uses the latter method, with a sleep interval of 1000 usec. This works poorly, especially with HZ = 100 (which gives an actual sleep interval of 10000 to 20000 usec), or with devices that have a smaller tx queue than sk (511). 
The tx queue always fills up when blasted with packets; it becomes non-full a few usec later after a tx interrupt, and it becomes empty a few usec or msec later, and then the transmitter is idle while ttcp sleeps. With sk and HZ = 100, throughput is reduced to approximately 511 * (1000000 / 15000) = 34066 pps. HZ = 1000 is just large enough for the sleep to always be shorter than the tx draining time (2/HZ seconds = 2 msec < 4 * 511 usec = 2.044 msec), so transmission can stream. Newer versions of ttcp like the one in ports are aware of this problem but can't fix it since it is in the kernel. tools/netrate is less explicitly aware of this problem and can't fix it... However, if you don't care about using the sender for anything else and don't want to measure efficiency of sending, then retrying immediately can be used to generate almost the maximum pps. Parts of netrate do this. - the tx queue length is too small for all drivers, so the tx queue fills up too often. It defaults to IFQ_MAXLEN = 50. This may be right for 1 Mbps ethernet or even for 10 Mbps ethernet, but it is too small for 100 Mbps ethernet and far too small for 1000 Mbps ethernet. Drivers with a larger hardware tx queue length all bump it up to their tx queue length (often, bogusly, less 1), but it needs to be larger for transmission to stream. I use (SK_TX_RING_CNT + imax(2*tick, 10000) / 4) for sk. > My tests were done without polling so with very high interrupt load > and that also sucks when you have a high-traffic scenario. Interrupt load isn't necessarily very high, relevant or reduced by polling. For transmission, with non-broken hardware and software, there should be not many more than (pps / ) tx interrupts per second, and should be small so that there aren't many txintrs/sec. For sk, this gives 240000 / 511 = 489. After reprogramming sk's interrupt handling, I get 539.
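The expected tx interrupt rate above can be sanity-checked with a little arithmetic. This is a sketch, not anything from the original mail: the 511-entry sk tx ring and the 240 kpps send rate come from the text, and the straight division gives roughly 470 interrupts/sec; the slightly higher figures quoted (489 and 539) presumably reflect watermark handling, so fewer than a full ring's worth of packets is drained per interrupt.

```python
# Rough tx-interrupt arithmetic for the sk example in the text:
# with one interrupt per (nearly) full tx ring, the interrupt rate
# is approximately pps / packets_handled_per_interrupt.

def txintr_rate(pps, packets_per_intr):
    """Expected tx interrupts per second (ideal case)."""
    return pps / packets_per_intr

RING = 511          # SK_TX_RING_CNT, from the text
PPS = 240_000       # measured send rate, from the text

print(round(txintr_rate(PPS, RING)))   # ideal case; measured rates are a bit higher
```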
The standard driver used to get 7000+ with the old interrupt moderation timeout of 200 usec (actually 137 usec for Yukon, 200 for Genesis), and now 14000+ with an interrupt moderation timeout of 200 (68.5) usec. The interrupt load for 539 txintrs/sec and 240 kpps is 10% on an AthlonXP2600 (Barton) overclocked. Very little of this is related to interrupts, so the term "interrupt load" is misleading. About 480 packets are handled for every tx interrupt (512 less 32 for watermark stuff). Much more than 90% of the handling is useful work and would have to be done somewhere; it just happens to be done in the interrupt handler, and that is the best place to do it. With polling, it would take longer to do it and the load is poorly reported so it is hard to see. The system load for 539 txintrs/sec and 240 kpps is much larger. It is about 45% (up from 25% in RELENG_4 :-(). [Context almost lost to top posting.] >>>> On 4/19/2005 1:32 PM, Eivind Hestnes wrote: >>>> >>>>> I have an Intel Pro 1000 MT (PWLA8490MT) NIC (em(4) driver 1.7.35) >>>>> installed >>>>> in a Pentium III 500 Mhz with 512 MB RAM (100 Mhz) running FreeBSD >>>>> 5.4-RC3. >>>>> The machine is routing traffic between multiple VLANs. Recently I did a >>>>> benchmark with/without device polling enabled. Without device >>>>> polling I was >>>>> able to transfer roughly 180 Mbit/s. The router however was >>>>> suffering when >>>>> doing this benchmark. Interrupt load was peaking 100% - overall the >>>>> system >>>>> itself was quite unusable (_very_ high system load). I think it is CPU-bound. My Athlon2600 (overclocked) is many times faster than your P3/500 (5-10 times?), but it doesn't have much CPU left over (sending 240000 5-byte udp packets per second from sk takes 60% of the CPU, and sending 53000 1500-byte udp packets per second takes 30% of the CPU; sending tcp packets takes less CPU but goes slower).
Apparently 2 or 3 P3/500's worth of CPU is needed just to keep up with the transmitter (with 100% of the CPU used but no transmission slots missed). RELENG_4 has lower overheads so it might need only 1 or 2 P3/500's worth of CPU to keep up. >>>>> With device >>>>> polling >>>>> enabled the interrupt kept stable around 40-50% and max transfer >>>>> rate was >>>>> nearly 70 Mbit/s. Not very scientific tests, but it gave me a pin >>>>> point. I don't believe in device polling. It's not surprising that it reduces throughput for a device that has large enough hardware queues. It just lets a machine that is too slow to handle 1Gbps ethernet (at least under FreeBSD) sort of work by not using the hardware to its full potential. 70 Mbit/s is still bad -- it's easy to get more than that with a 100Mbps NIC. >>>>> eivind@core-gw:~$ sysctl -a | grep kern.polling >>>>> ... >>>>> kern.polling.idle_poll: 0 Setting this should increase throughput when the system is idle by taking 100% of the CPU then. With just polling every 1 msec (from HZ = 1000), there are the same problems as with ttcp retrying every 10-20 msec, but scaled down by a factor of 10-20. For my ttcp example, the transmitter runs dry every 2.044 msec so the polling interval must be shorter than 2.044 msec, but this is with a full hardware tx queue (511 entries) on a not very fast NIC. If the hardware is just twice as fast or the tx queue is just half as large or half as full, then the hardware tx queue will run dry when polled every 1 msec and hardware capability will be wasted. This problem can be reduced by increasing HZ some more, but I don't believe in increasing it beyond 100, since only software that does too much polling would notice it being larger.
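The streaming condition described above, both for ttcp's retry sleep and for device polling, reduces to comparing the worst-case wakeup interval against the time the tx queue takes to drain. A small sketch of that arithmetic, using only figures quoted in the text (511-entry sk tx ring, ~4 us per packet):

```python
# Can the transmitter stream, or does the tx queue run dry between wakeups?

US_PER_PACKET = 4          # sk/bus limit quoted in the text
QUEUE = 511                # sk tx ring entries

def drain_time_us(queue_entries, us_per_packet=US_PER_PACKET):
    """Time for the hardware to empty a full tx queue."""
    return queue_entries * us_per_packet

def worst_sleep_us(hz):
    """Worst-case duration of a 1-tick sleep (it can take up to 2 ticks)."""
    return 2 * 1_000_000 // hz

# HZ = 1000: worst-case sleep 2000 us < 2044 us drain time, so ttcp can stream.
assert worst_sleep_us(1000) < drain_time_us(QUEUE)

# HZ = 100: ttcp's 1000 usec sleep becomes 10000-20000 usec, and throughput
# collapses to roughly one queue-full per average wakeup interval:
print(QUEUE * 1_000_000 // 15_000)   # ~34066 pps, matching the text

# Polling at HZ = 1000 wakes up every 1000 us; a half-full queue drains in
# ~1022 us, so slightly faster hardware or a smaller queue runs dry.
```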
Bruce From owner-freebsd-performance@FreeBSD.ORG Wed Apr 20 08:17:32 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 7F43316A4CE for ; Wed, 20 Apr 2005 08:17:32 +0000 (GMT) Received: from rproxy.gmail.com (rproxy.gmail.com [64.233.170.205]) by mx1.FreeBSD.org (Postfix) with ESMTP id 1356943D3F for ; Wed, 20 Apr 2005 08:17:32 +0000 (GMT) (envelope-from kometen@gmail.com) Received: by rproxy.gmail.com with SMTP id a41so57721rng for ; Wed, 20 Apr 2005 01:17:31 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=kpWu2mWz5GMsTSIpjBLGqlWvGyMaXWUt/3bVIHom0EBnJUQ3wRapKXAGq9qQpkTtzXQPJPNxWp9E8MthXrKEtoC478eYTQUCfDV4YaSfFVhnahknWHsEIeIbITSFa/VK2jwYDhDAHMl1qsE0JcVXh5Vj3yUqRiKLO2s5NY8df0c= Received: by 10.38.125.45 with SMTP id x45mr753968rnc; Wed, 20 Apr 2005 01:17:31 -0700 (PDT) Received: by 10.38.149.53 with HTTP; Wed, 20 Apr 2005 01:17:31 -0700 (PDT) Message-ID: Date: Wed, 20 Apr 2005 10:17:31 +0200 From: Claus Guttesen To: Eric Anderson In-Reply-To: <42650EB2.4040409@centtech.com> Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline References: <4264EC60.3020600@centtech.com> <4264EF40.3060900@centtech.com> <4264F8A8.3080405@centtech.com> <426507DC.50409@centtech.com> <42650EB2.4040409@centtech.com> cc: freebsd-performance@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list Reply-To: Claus Guttesen List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 20 Apr 2005 08:17:32 -0000 > >>I think you are disk bound.. 
You should not be disk bound at this point with a > >>good RAID controller.. > > Good point, it's an atabeast from nexsan. > Looks like they are indeed waiting on disk.. You could try making two 6 disk > raid5 in your controller, then striping those with vinum. That might help. > Possibly, if your controller supports it, setting up the array as JBOD, and then > use vinum to build your raid 5 (not sure if it will be faster or not). I had a raid 5 volume with 5 disks (1.6 TB) and did the same dd: elin% dd if=/dev/zero of=/nfssrv/dd.tst bs=1024 count=1048576 1048576+0 records in 1048576+0 records out 1073741824 bytes transferred in 21.373114 secs (50237968 bytes/sec) The dd on the other raid-volume (12 disks) did also complete in approx. 22 sec. So no difference when I lower the number of disks in an array. Frame size on the storage-device is 2112 (bytes). regards Claus From owner-freebsd-performance@FreeBSD.ORG Wed Apr 20 08:47:55 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 993FD16A4CE for ; Wed, 20 Apr 2005 08:47:55 +0000 (GMT) Received: from rproxy.gmail.com (rproxy.gmail.com [64.233.170.202]) by mx1.FreeBSD.org (Postfix) with ESMTP id 333A343D53 for ; Wed, 20 Apr 2005 08:47:55 +0000 (GMT) (envelope-from kometen@gmail.com) Received: by rproxy.gmail.com with SMTP id a41so61732rng for ; Wed, 20 Apr 2005 01:47:54 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=en6Grwzyo/+KFiiAm2J/N0BK9cm+WfZx4QdZiL9GS/Eq1qaJ3UllHVd5zMCjkBwCcqpb+7dbKtrPeP1exyHRz9+QC8YQrUstPlzjoU3szBH5KOq7M2uB6VYS6nWQNkVAwaMX/5qPE2kaLdbOuK/91kBNt+ykKZl/VJFmjMTVJRg= Received: by 10.38.74.21 with SMTP id w21mr786807rna; Wed, 20 Apr 2005 01:47:54 -0700 (PDT) Received: by
10.38.149.53 with HTTP; Wed, 20 Apr 2005 01:47:54 -0700 (PDT) Message-ID: Date: Wed, 20 Apr 2005 10:47:54 +0200 From: Claus Guttesen To: Eric Anderson In-Reply-To: Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline References: <4264EF40.3060900@centtech.com> <4264F8A8.3080405@centtech.com> <426507DC.50409@centtech.com> <42650EB2.4040409@centtech.com> cc: freebsd-performance@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list Reply-To: Claus Guttesen List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 20 Apr 2005 08:47:55 -0000 > elin% dd if=/dev/zero of=/nfssrv/dd.tst bs=1024 count=1048576 > 1048576+0 records in > 1048576+0 records out > 1073741824 bytes transferred in 21.373114 secs (50237968 bytes/sec) > Follow-up, did the same dd on a Dell 2850 with an LSI Logic (amr), 6 scsi-disks in a raid 5: frodo~%>dd if=/dev/zero of=dd.tst bs=1024 count=1048576 1048576+0 records in 1048576+0 records out 1073741824 bytes transferred in 8.972321 secs (119672693 bytes/sec) Much faster. FreeBSD 5.4 RC2. Are there any benchmarks comparing the atabeast against other ide->fc-storage-systems in relation to disk-access?
regards Claus From owner-freebsd-performance@FreeBSD.ORG Wed Apr 20 11:54:43 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 0B86F16A4CF for ; Wed, 20 Apr 2005 11:54:43 +0000 (GMT) Received: from mh1.centtech.com (moat3.centtech.com [207.200.51.50]) by mx1.FreeBSD.org (Postfix) with ESMTP id 2011743D48 for ; Wed, 20 Apr 2005 11:54:42 +0000 (GMT) (envelope-from anderson@centtech.com) Received: from [10.177.171.220] (neutrino.centtech.com [10.177.171.220]) by mh1.centtech.com (8.13.1/8.13.1) with ESMTP id j3KBsfrj077918; Wed, 20 Apr 2005 06:54:41 -0500 (CDT) (envelope-from anderson@centtech.com) Message-ID: <426642D4.8000202@centtech.com> Date: Wed, 20 Apr 2005 06:53:56 -0500 From: Eric Anderson User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.7.5) Gecko/20050325 X-Accept-Language: en-us, en MIME-Version: 1.0 To: Claus Guttesen References: <4264EF40.3060900@centtech.com> <4264F8A8.3080405@centtech.com> <426507DC.50409@centtech.com> <42650EB2.4040409@centtech.com> In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.82/842/Tue Apr 19 16:39:01 2005 on mh1.centtech.com X-Virus-Status: Clean cc: freebsd-performance@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 20 Apr 2005 11:54:43 -0000 Claus Guttesen wrote: >>elin% dd if=/dev/zero of=/nfssrv/dd.tst bs=1024 count=1048576 >>1048576+0 records in >>1048576+0 records out >>1073741824 bytes transferred in 21.373114 secs (50237968 bytes/sec) >> > > > Follow-up, did the same dd on a Dell 2850 with a LSI Logic (amr), 6 > scsi-disks in a raid 5: > > frodo~%>dd if=/dev/zero of=dd.tst bs=1024 
count=1048576 > 1048576+0 records in > 1048576+0 records out > 1073741824 bytes transferred in 8.972321 secs (119672693 bytes/sec) > > Must faster. FreeBSD 5.4 RC2. > > Are there any benchmarks comparing the atabeast against other > ide->fc-storage-systems in relation to disk-access? That's about what I expected. RAID 5 depends on fast xor, so a slow processor in a hardware RAID5 box will slow you down a lot. You should try taking the two RAID5's (6 disks each) created on your original controller and striping those together (RAID 50) - this should get you some better performance, probably not as close as the amr device, but I would guess somewhere in the 80-90mb/s range. Eric -- ------------------------------------------------------------------------ Eric Anderson Sr. Systems Administrator Centaur Technology A lost ounce of gold may be found, a lost moment of time never. ------------------------------------------------------------------------ From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 21:18:38 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 4C31316A4CE for ; Tue, 19 Apr 2005 21:18:38 +0000 (GMT) Received: from smtp.openaccess.org (smtp.openaccess.org [216.57.214.76]) by mx1.FreeBSD.org (Postfix) with ESMTP id 9E26C43D53 for ; Tue, 19 Apr 2005 21:18:37 +0000 (GMT) (envelope-from michael@staff.openaccess.org) Received: from [216.57.214.90] (unknown [216.57.214.90]) by smtp.openaccess.org (Postfix) with ESMTP id F2005416B; Tue, 19 Apr 2005 14:18:27 -0700 (PDT) In-Reply-To: <426573D9.8080607@stabbursmoen.no> References: <20050419183335.F18008131@joshua.stabbursmoen.no> <5c05f1805041911351d2bd98e@mail.gmail.com> <42656CA0.9040403@stabbursmoen.no> <426573D9.8080607@stabbursmoen.no> Mime-Version: 1.0 (Apple Message framework v619.2) Content-Type: text/plain; charset=US-ASCII; format=flowed Message-Id: Content-Transfer-Encoding: 7bit 
From: Michael DeMan Date: Tue, 19 Apr 2005 14:18:36 -0700 To: Eivind Hestnes X-Mailer: Apple Mail (2.619.2) X-Mailman-Approved-At: Wed, 20 Apr 2005 12:22:22 +0000 cc: Jerald Von Dipple cc: performance@freebsd.org Subject: Re: Performance Intel Pro 1000 MT (PWLA8490MT) X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 21:18:38 -0000 Yes, It's also important to differentiate between routing and switching needs. Not in the regular layer-3 and layer-2 concept, but in the deployment environment you anticipate. If you really need high throughput ports, nothing will beat a regular switch (layer-2 or layer-3) because Cisco, 3COM, etc, all have dedicated hardware that will give you wire-speed gigabit, and nearly a gigabit at layer-3. Although they use slow CPUs, the bulk of the work is done in the hardware of the ports/blades themselves, not the CPU. On the other hand, with a UNIX-like router, the bulk of the packet processing is done in the CPU, and hence the need for far more CPU power to keep up with the network interfaces. In our case, being an ISP, we actually don't need much on the main backbone links and in many cases use Soekris boxes that are 133MHz and give us 30-40Mbit throughput. However, in office environments, or connecting branch offices with fiber, you may very well want that full gigabit of speed and the appropriate solution is a real switch, not a UNIX-like router. One advantage of the UNIX-like solution is in handling routing tables and such. Since Cisco/3COM/etc typically use slow CPUs and have low limits on the amount of RAM they can take, if you have large routing tables for OSPF or BGP or something, the UNIX-like machines are far cheaper to deploy. Anyway, just my 2-cents from right now. I'm always learning more about this all the time. - mike Michael F.
DeMan Director of Technology OpenAccess Network Services Bellingham, WA 98225 michael@staff.openaccess.org 360-647-0785 On Apr 19, 2005, at 2:10 PM, Eivind Hestnes wrote: > It sounds sensible, but I have also learned that throwing hardware on > a problem is not always right.. Compared to shiny boxes from Cisco, HP > etc. a 500 Mhz router is for heavy duty networks. I would try some > more tweaking before replacing the box with some more spectular > hardware. > > - E. > > Michael DeMan wrote: > >> The rule of thumb I have seen on Intel/UNIX based routers is that you >> want 1GHz of CPU for every gigabit of throughput. >> >> Also, on gigabit NICs, make sure you have a 64-bit PCI bus on the >> motherboard. >> >> >> >> Michael F. DeMan >> Director of Technology >> OpenAccess Network Services >> Bellingham, WA 98225 >> michael@staff.openaccess.org >> 360-647-0785 >> On Apr 19, 2005, at 1:40 PM, Eivind Hestnes wrote: >> >>> Thanks for the advice. Didn't do any difference, though.. Perhaps I >>> should try to increase the polling frequency.. >>> >>> Jerald Von Dipple wrote: >>> >>>> Hey man >>>> >>>> You need to bump >>>> >>>> kern.polling.burst: 150 >>>> >>>> Upto at least 150000 >>>> >>>> Regards, >>>> Jerald Von D. >>>> >>>> On 4/19/05, Eivind Hestnes wrote: >>>> >>>>> Hi, >>>>> >>>>> I have an Intel Pro 1000 MT (PWLA8490MT) NIC (em(4) driver 1.7.35) >>>>> installed >>>>> in a Pentium III 500 Mhz with 512 MB RAM (100 Mhz) running FreeBSD >>>>> 5.4-RC3. >>>>> The machine is routing traffic between multiple VLANs. Recently I >>>>> did a >>>>> benchmark with/without device polling enabled. Without device >>>>> polling I was >>>>> able to transfer roughly 180 Mbit/s. The router however was >>>>> suffering when >>>>> doing this benchmark. Interrupt load was peaking 100% - overall >>>>> the system >>>>> itself was quite unusable (_very_ high system load). 
With device >>>>> polling >>>>> enabled the interrupt kept stable around 40-50% and max transfer >>>>> rate was >>>>> nearly 70 Mbit/s. Not very scientific tests, but it gave me a pin >>>>> point. >>>>> >>>>> However, a Pentium III in combination with a good NIC should in my >>>>> opinion >>>>> be a respectful router.. but I'm not satisfied with the results. >>>>> The pf >>>>> ruleset is like nothing, and the kernel is stripped and customized >>>>> for best >>>>> performance. >>>>> >>>>> Any tweaking tips for making my router perform better? >>>>> >>>>> Debug information: >>>>> eivind@core-gw:~$ sysctl -a | grep kern.polling >>>>> kern.polling.burst: 150 >>>>> kern.polling.each_burst: 5 >>>>> kern.polling.burst_max: 150 >>>>> kern.polling.idle_poll: 0 >>>>> kern.polling.poll_in_trap: 0 >>>>> kern.polling.user_frac: 50 >>>>> kern.polling.reg_frac: 20 >>>>> kern.polling.short_ticks: 1411 >>>>> kern.polling.lost_polls: 720 >>>>> kern.polling.pending_polls: 0 >>>>> kern.polling.residual_burst: 0 >>>>> kern.polling.handlers: 0 >>>>> kern.polling.enable: 1 >>>>> kern.polling.phase: 0 >>>>> kern.polling.suspect: 186 >>>>> kern.polling.stalled: 0 >>>>> kern.polling.idlepoll_sleeping: 1 >>>>> >>>>> eivind@core-gw:~$ cat /etc/sysctl.conf >>>>> net.inet.ip.forwarding=1 >>>>> net.inet.ip.fastforwarding=1 >>>>> net.inet.carp.preempt=1 >>>>> kern.polling.enable=1 >>>>> >>>>> HZ set to 1000 as recommended in README for the em(4) driver. >>>>> Driver is of >>>>> cource compiled into kernel. 
>>>>> >>>>> Regards, >>>>> Eivind Hestnes >>>>> >>>>> _______________________________________________ >>>>> freebsd-performance@freebsd.org mailing list >>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-performance >>>>> To unsubscribe, send any mail to >>>>> "freebsd-performance-unsubscribe@freebsd.org" >>>>> >>>>> >>> >>> _______________________________________________ >>> freebsd-net@freebsd.org mailing list >>> http://lists.freebsd.org/mailman/listinfo/freebsd-net >>> To unsubscribe, send any mail to >>> "freebsd-net-unsubscribe@freebsd.org" >>> >> >> _______________________________________________ >> freebsd-net@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-net >> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" > > > From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 22:23:42 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 06D2916A4CE for ; Tue, 19 Apr 2005 22:23:42 +0000 (GMT) Received: from elvis.mu.org (elvis.mu.org [192.203.228.196]) by mx1.FreeBSD.org (Postfix) with ESMTP id CD91243D49 for ; Tue, 19 Apr 2005 22:23:41 +0000 (GMT) (envelope-from billf@elvis.mu.org) Received: by elvis.mu.org (Postfix, from userid 1098) id C848E5C9FD; Tue, 19 Apr 2005 15:23:41 -0700 (PDT) Date: Tue, 19 Apr 2005 15:23:41 -0700 From: bill fumerola To: Eivind Hestnes Message-ID: <20050419222341.GD31040@elvis.mu.org> References: <20050419183335.F18008131@joshua.stabbursmoen.no> <5c05f1805041911351d2bd98e@mail.gmail.com> <42656CA0.9040403@stabbursmoen.no> <426573D9.8080607@stabbursmoen.no> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <426573D9.8080607@stabbursmoen.no> User-Agent: Mutt/1.4.2.1i X-Operating-System: FreeBSD 4.10-MUORG-20041118 i386 X-PGP-Key: 1024D/7F868268 X-PGP-Fingerprint: 5B2D 908E 4C2B F253 DAEB FC01 8436 B70B 7F86 8268 
X-Mailman-Approved-At: Wed, 20 Apr 2005 12:22:23 +0000 cc: performance@freebsd.org Subject: Re: Performance Intel Pro 1000 MT (PWLA8490MT) X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Apr 2005 22:23:42 -0000 On Tue, Apr 19, 2005 at 11:10:49PM +0200, Eivind Hestnes wrote: > It sounds sensible, but I have also learned that throwing hardware on a > problem is not always right.. Compared to shiny boxes from Cisco, HP > etc. a 500 Mhz router is for heavy duty networks. I would try some more > tweaking before replacing the box with some more spectular hardware. you've confused commodity CPUs with special purpose hardware. alternatively, 'heavy duty networks' is a relative term. -- bill fumerola From owner-freebsd-performance@FreeBSD.ORG Wed Apr 20 00:30:56 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 6A9BD16A4CE for ; Wed, 20 Apr 2005 00:30:56 +0000 (GMT) Received: from www.hotel-accommodation.net (www.hotel-accommodation.net [203.146.102.47]) by mx1.FreeBSD.org (Postfix) with ESMTP id D056143D3F for ; Wed, 20 Apr 2005 00:30:55 +0000 (GMT) (envelope-from cws@miraclenet.co.th) Received: from secure.abatravel.net (localhost [127.0.0.1]) by www.hotel-accommodation.net (Postfix) with ESMTP id 1C31A1DA7B for ; Wed, 20 Apr 2005 07:30:54 +0700 (ICT) Received: from 210.86.179.115 (SquirrelMail authenticated user cws); by secure.abatravel.net with HTTP; Wed, 20 Apr 2005 07:30:54 +0700 (ICT) Message-ID: <13101.210.86.179.115.1113957054.squirrel@210.86.179.115> Date: Wed, 20 Apr 2005 07:30:54 +0700 (ICT) From: "Chatchawan Wongsiriprasert" To: freebsd-performance@freebsd.org User-Agent: SquirrelMail/1.4.3a X-Mailer: SquirrelMail/1.4.3a MIME-Version: 1.0 Content-Type: 
text/plain;charset=iso-8859-1 Content-Transfer-Encoding: 8bit X-Priority: 3 (Normal) Importance: Normal X-Mailman-Approved-At: Wed, 20 Apr 2005 12:22:22 +0000 Subject: Slow realloc X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 20 Apr 2005 00:30:56 -0000 Hi, Last week I got a request from my customer to check why his PHP code runs much slower on FreeBSD than on his Linux machine. After some checking I found that the problem is in the PHP serialize function, which makes a lot of realloc calls with small (128 byte) increments. I had submitted a small patch (http://bugs.php.net/bug.php?id=32727) to fix the problem, which decreases the run-time of a test code from 2 seconds to 0.2 seconds. After checking on Google, I found that this is intentional in the FreeBSD libc design, to increase the performance of malloc at the expense of the less-used realloc (http://phk.freebsd.dk/pubs/malloc.pdf and http://www.freebsd.org/cgi/query-pr.cgi?pr=61691) The question: Is there another way to increase the performance of a program that makes a lot of small realloc calls, other than changing the way the program uses realloc (as I did, and as Poul-Henning Kamp suggests in pr61691)? Because that approach needs a lot of time to find where to fix, and the fix is localized (PHP also uses realloc in other places, such as string operations, which my patch does not fix -- so concatenating a string runs 10 times slower on FreeBSD than on Linux). Moreover, Linux realloc is fast, so many open source developers that use Linux as their main development system will overlook this problem (as in the PHP case).
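The cost difference being described can be illustrated without any libc internals. Under a worst-case model in which realloc relocates the buffer (and copies it) on every call, growing in fixed 128-byte steps copies O(n^2) bytes in total, while geometric (doubling) growth, which is what the PHP patch effectively introduces, copies only O(n). This sketch is a simplified model, not FreeBSD's actual allocator:

```python
def bytes_copied_fixed(total, step=128):
    # Worst case: every realloc relocates the buffer and copies the old
    # contents, as can happen when there is no room to grow in place.
    copied = size = 0
    while size < total:
        copied += size
        size += step
    return copied

def bytes_copied_doubling(total, start=128):
    # Geometric growth: each relocation doubles the capacity, so the
    # total copied is bounded by about one buffer's worth of data.
    copied, size = 0, start
    while size < total:
        copied += size
        size *= 2
    return copied

n = 1 << 20  # grow a buffer to 1 MB
print(bytes_copied_fixed(n))     # ~4.3 GB copied in the worst case
print(bytes_copied_doubling(n))  # ~1 MB copied
```

The three-orders-of-magnitude gap in copied bytes is consistent with the 2 s to 0.2 s improvement reported for the serialize test.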
Regards, Chatchwan Wongsiriprasert From owner-freebsd-performance@FreeBSD.ORG Wed Apr 20 03:55:48 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 2947116A4CE for ; Wed, 20 Apr 2005 03:55:48 +0000 (GMT) Received: from smtp814.mail.sc5.yahoo.com (smtp814.mail.sc5.yahoo.com [66.163.170.84]) by mx1.FreeBSD.org (Postfix) with SMTP id E61EC43D1F for ; Wed, 20 Apr 2005 03:55:47 +0000 (GMT) (envelope-from g_jin@lbl.gov) Received: from unknown (HELO ?192.168.2.11?) (jinmtb@sbcglobal.net@68.127.155.26 with plain) by smtp814.mail.sc5.yahoo.com with SMTP; 20 Apr 2005 03:55:47 -0000 Message-ID: <4265D2D3.9040302@lbl.gov> Date: Tue, 19 Apr 2005 20:56:03 -0700 From: "Jin Guojun [VFFS]" User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.7.5) Gecko/20050108 X-Accept-Language: zh, zh-CN, en MIME-Version: 1.0 To: Bruce Evans References: <20050419183335.F18008131@joshua.stabbursmoen.no> <42655887.7060203@alumni.rice.edu> <4265724A.1040705@stabbursmoen.no> <42657420.3040104@he.iki.fi> <20050419214644.GB3656@technokratis.com> <20050420123251.A85348@delplex.bde.org> In-Reply-To: <20050420123251.A85348@delplex.bde.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Mailman-Approved-At: Wed, 20 Apr 2005 12:22:23 +0000 cc: Eivind Hestnes cc: performance@freebsd.org cc: Bosko Milekic cc: Petri Helenius Subject: Re: Performance Intel Pro 1000 MT (PWLA8490MT) X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 20 Apr 2005 03:55:48 -0000 Bruce Evans wrote: > On Tue, 19 Apr 2005, Bosko Milekic wrote: > >> My experience with 6.0-CURRENT has been that I am able to push at >> least about 400kpps INTO THE KERNEL from a gigE em card on its own >> 64-bit PCI-X 
133MHz bus (i.e., the bus is uncontested) and that's > > > A 64-bit bus doesn't seem to be essential for reasonable performance. > > I get about 210 kpps (receive) for a bge card on an old Athlon system > with a 32-bit PCI 33MHz bus. Overclocking this bus speeds up at least > sending almost proportionally to the overclocking :-). This is with > my version of an old version of -current, with no mpsafenet, no driver > tuning, and no mistuning (no INVARIANTS, etc., no POLLING, no HZ > 100). > Sending goes slightly slower (about 200 kpps). Yes, 64-bit is not essential for getting 400~700 Mbps as long as the system has high enough memory bandwidth, but it is essential to get full Gigabit speed. Simple numbers are in the "Tips" section at the bottom of the following page: http://www-didc.lbl.gov/NCS/generic/ncs-00.html and the details are described in the papers linked. P.S. A question on the unit "kpps" used in the original email: I am not sure what this really means. GigE can produce 400 kpps if the packet size is 300 bytes or less. If the packet size is 1500 bytes, the maximum pps is 83k (83 kpps). But 200-400 kpps is kind of low, maybe I missed some previous emails.
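Jin's pps figures follow directly from the wire rate. A quick check, where the 1 Gbit/s link speed and packet sizes come from the message, and Ethernet framing overhead (preamble, inter-frame gap, CRC) is ignored, so these are slight overestimates:

```python
def max_pps(link_bps, packet_bytes):
    # Upper bound on packets per second for a given packet size,
    # ignoring preamble/IFG/CRC overhead.
    return link_bps // (packet_bytes * 8)

GIGE = 1_000_000_000
print(max_pps(GIGE, 1500))  # -> "83 kpps" for 1500-byte packets
print(max_pps(GIGE, 300))   # roughly 400 kpps at 300 bytes or less
```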
-- ------------ Jin Guojun ----------- v --- jin@george.lbl.gov --- Distributed Systems Department http://www.dsd.lbl.gov/~jin Lawrence Berkeley National Laboratory, Berkeley, CA 94720 From owner-freebsd-performance@FreeBSD.ORG Wed Apr 20 14:03:17 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 8219916A4CE for ; Wed, 20 Apr 2005 14:03:17 +0000 (GMT) Received: from rproxy.gmail.com (rproxy.gmail.com [64.233.170.197]) by mx1.FreeBSD.org (Postfix) with ESMTP id CDBF843D5D for ; Wed, 20 Apr 2005 14:03:16 +0000 (GMT) (envelope-from kometen@gmail.com) Received: by rproxy.gmail.com with SMTP id a41so113274rng for ; Wed, 20 Apr 2005 07:03:16 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=B3dvWptSd1dwRulkLfLp+uVk2qlWQ1Va/4YitJrjctQAtCNi5A+b1kYxUhmz5KPIZREScYLjYlRpgxQW9qwBlUg7QoN+m8DE5rxUygRHJdKLvSWcGyxxLUGpuYedAGhK7HgVB1u0CFlruZuHQlnZNlDkNd/NDnoJs7WJJ2rqvA0= Received: by 10.38.10.66 with SMTP id 66mr1059216rnj; Wed, 20 Apr 2005 07:03:16 -0700 (PDT) Received: by 10.38.149.53 with HTTP; Wed, 20 Apr 2005 07:03:16 -0700 (PDT) Message-ID: Date: Wed, 20 Apr 2005 16:03:16 +0200 From: Claus Guttesen To: Eric Anderson In-Reply-To: <426642D4.8000202@centtech.com> Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline References: <4264F8A8.3080405@centtech.com> <426507DC.50409@centtech.com> <42650EB2.4040409@centtech.com> <426642D4.8000202@centtech.com> cc: freebsd-performance@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list Reply-To: Claus Guttesen List-Id: Performance/tuning List-Unsubscribe: , 
List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 20 Apr 2005 14:03:17 -0000 > That's about what I expected. RAID 5 depends on fast xor, so a slow processor > in a hardware RAID5 box will slow you down a lot. > > You should try taking the two RAID5's (6 disks each) created on your original > controller and striping those together (RAID 50) - this should get you some > better performance, probably not as close as the amr device, but I would guess > somewhere in the 80-90mb/s range. This can't be done in hardware, since atabeast only supports raid 0, 1, 4 and 5. But I will definitely keep this in mind when we get a new storage-system (a different one). Thank you for your guidance. regards Claus From owner-freebsd-performance@FreeBSD.ORG Wed Apr 20 14:13:52 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 60EDF16A4CE for ; Wed, 20 Apr 2005 14:13:52 +0000 (GMT) Received: from mh1.centtech.com (moat3.centtech.com [207.200.51.50]) by mx1.FreeBSD.org (Postfix) with ESMTP id A8D3743D4C for ; Wed, 20 Apr 2005 14:13:51 +0000 (GMT) (envelope-from anderson@centtech.com) Received: from [10.177.171.220] (neutrino.centtech.com [10.177.171.220]) by mh1.centtech.com (8.13.1/8.13.1) with ESMTP id j3KEDodv079349; Wed, 20 Apr 2005 09:13:51 -0500 (CDT) (envelope-from anderson@centtech.com) Message-ID: <42666371.9080800@centtech.com> Date: Wed, 20 Apr 2005 09:13:05 -0500 From: Eric Anderson User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.7.5) Gecko/20050325 X-Accept-Language: en-us, en MIME-Version: 1.0 To: Claus Guttesen References: <4264F8A8.3080405@centtech.com> <426507DC.50409@centtech.com> <42650EB2.4040409@centtech.com> <426642D4.8000202@centtech.com> In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Virus-Scanned: ClamAV 0.82/842/Tue Apr 19
16:39:01 2005 on mh1.centtech.com X-Virus-Status: Clean cc: freebsd-performance@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 20 Apr 2005 14:13:52 -0000 Claus Guttesen wrote: >>That's about what I expected. RAID 5 depends on fast xor, so a slow processor >>in a hardware RAID5 box will slow you down a lot. >> >>You should try taking the two RAID5's (6 disks each) created on your original >>controller and striping those together (RAID 50) - this should get you some >>better performance, probably not as close as the amr device, but I would guess >>somewhere in the 80-90mb/s range. > > > This can't be done in hardware, since atabeast only supports raid 0, > 1, 4 and 5. But I will definitely keep this in mind when we > get a new storage system (a different one). > > Thank you for your guidance. You could use the atabeast to do two raid 5's, then use vinum to stripe those two. Eric -- ------------------------------------------------------------------------ Eric Anderson Sr. Systems Administrator Centaur Technology A lost ounce of gold may be found, a lost moment of time never. 
------------------------------------------------------------------------ From owner-freebsd-performance@FreeBSD.ORG Wed Apr 20 14:53:50 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 417B116A4CE for ; Wed, 20 Apr 2005 14:53:50 +0000 (GMT) Received: from stephanie.unixdaemons.com (stephanie.unixdaemons.com [67.18.111.194]) by mx1.FreeBSD.org (Postfix) with ESMTP id 561BD43D2D for ; Wed, 20 Apr 2005 14:53:49 +0000 (GMT) (envelope-from bmilekic@technokratis.com) Received: from stephanie.unixdaemons.com (bmilekic@localhost.unixdaemons.com [127.0.0.1])j3KErlmh063064; Wed, 20 Apr 2005 10:53:47 -0400 (EDT) Received: (from bmilekic@localhost) by stephanie.unixdaemons.com (8.13.4/8.12.1/Submit) id j3KErkqa063062; Wed, 20 Apr 2005 10:53:46 -0400 (EDT) (envelope-from bmilekic@technokratis.com) X-Authentication-Warning: stephanie.unixdaemons.com: bmilekic set sender to bmilekic@technokratis.com using -f Date: Wed, 20 Apr 2005 10:53:46 -0400 From: Bosko Milekic To: Bruce Evans Message-ID: <20050420145346.GA59707@technokratis.com> References: <20050419183335.F18008131@joshua.stabbursmoen.no> <42655887.7060203@alumni.rice.edu> <4265724A.1040705@stabbursmoen.no> <42657420.3040104@he.iki.fi> <20050419214644.GB3656@technokratis.com> <20050420123251.A85348@delplex.bde.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20050420123251.A85348@delplex.bde.org> User-Agent: Mutt/1.4.2.1i cc: Eivind Hestnes cc: performance@freebsd.org cc: Petri Helenius Subject: Re: Performance Intel Pro 1000 MT (PWLA8490MT) X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 20 Apr 2005 14:53:50 -0000 On Wed, Apr 20, 2005 at 01:19:44PM +1000, Bruce Evans wrote: > 
On Tue, 19 Apr 2005, Bosko Milekic wrote: > > > My experience with 6.0-CURRENT has been that I am able to push at > > least about 400kpps INTO THE KERNEL from a gigE em card on its own > > 64-bit PCI-X 133MHz bus (i.e., the bus is uncontested) and that's > > A 64-bit bus doesn't seem to be essential for reasonable performance. > > I get about 210 kpps (receive) for a bge card on an old Athlon system > with a 32-bit PCI 33MHz bus. Overclocking this bus speeds up at least > sending almost proportionally to the overclocking :-). This is with > my version of an old version of -current, with no mpsafenet, no driver > tuning, and no mistuning (no INVARIANTS, etc., no POLLING, no HZ > 100). > Sending goes slightly slower (about 200 kpps). That is still half as much as I get on the faster bus. Unfortunately, we are not comparing apples with apples, but apples with oranges, since I don't know what "my version of an old version of -current" refers to. :-) > I get about 220 kpps (send) for a much-maligned (last year) sk non-card > on a much-maligned Athlon nForce2 newer Athlon system with a 32-bit > PCI 33MHz bus. This is with a similar setup but with sending in the > driver changed to not use the braindamaged sk interrupt moderation. > The changes don't improve the throughput significantly since it is > limited by the sk or bus to 4 us per packet, but they reduce interrupt > overhead. > > > basically out of the box GENERIC on a dual-CPU box with HTT disabled > > and no debugging options, with small 50-60 byte UDP packets. > > I used an old version of ttcp for testing. A small packet for me is > 5 bytes UDP data since that is the minimum that ttcp will send, but > I repeated the tests with a packet size of 50 for comparison. For > the sk, the throughput with a packet size of 5 is only slightly larger > (240 kpps). 
> > There are some kernel deficiencies which at best break testing using > simple programs like ttcp and at worst reduce throughput: > - when the tx queue fills up, the application should stop sending, at > least in the udp case, but there is no way for userland to tell > when the queue becomes non-full so that it is useful to try to add > to it -- select() doesn't work for this. Applications either have > to waste cycles by retrying immediately or waste send slots by > retrying after a short sleep. > > The old version of ttcp that I use uses the latter method, with a > sleep interval of 1000 usec. This works poorly, especially with HZ > = 100 (which gives an actual sleep interval of 10000 to 20000 usec), > or with devices that have a smaller tx queue than sk (511). The tx > queue always fills up when blasted with packets; it becomes non-full > a few usec later after a tx interrupt, and it becomes empty a few > usec or msec later, and then the transmitter is idle while ttcp > sleeps. With sk and HZ = 100, throughput is reduced to approximately > 511 * (1000000 / 15000) = 34066 pps. HZ = 1000 is just large enough > for the sleep to always be shorter than the tx draining time (2/HZ > seconds = 2 msec < 4 * 511 usec = 2.044 msec), so transmission can > stream. > > Newer versions of ttcp like the one in ports are aware of this problem > but can't fix it since it is in the kernel. tools/netrate is less > explicitly aware of this problem and can't fix it... However, if > you don't care about using the sender for anything else and don't > want to measure efficiency of sending, then retrying immediately can > be used to generate almost the maximum pps. Parts of netrate do this. > > - the tx queue length is too small for all drivers, so the tx queue fills > up too often. It defaults to IFQ_MAXLEN = 50. This may be right for > 1 Mbps ethernet or even for 10 Mbps ethernet, but it is too small for > 100 Mbps ethernet and far too small for 1000 Mbps ethernet. 
Drivers > with a larger hardware tx queue length all bump it up to their tx > queue length (often, bogusly, less 1), but it needs to be larger for > transmission to stream. I use (SK_TX_RING_CNT + imax(2*tick, 10000) / 4) > for sk. Yes, I think em bumps it up. FWIW, I use ng_source(4) with a custom packet crafting tool to craft up arbitrary packets and sequences thereof and feed them into the ng_source node. The ng_source node blasts out as much as the tx queue can handle at every clock tick. It runs out-of-kernel and is very fast. > > My tests were done without polling so with very high interrupt load > > and that also sucks when you have a high-traffic scenario. > > Interrupt load isn't necessarily very high, relevant or reduced by > polling. For transmission, with non-broken hardware and software, > there should be not many more than (pps / ) > tx interrupts per second, and should be > small so that there aren't many txintrs/sec. For sk, this gives 240000 > / 511 = 489. After reprogramming sk's interrupt handling, I get 539. > The standard driver used to get 7000+ with the old interrupt moderation > timeout of 200 usec (actually 137 usec for Yukon, 200 for Genesis), > and now 14000+ with an interrupt moderation timeout of 200 (68.5) > usec. The interrupt load for 539 txintrs/sec and 240 kpps is 10% on an > AthlonXP2600 (Barton) overclocked. Very little of this is related to > interrupts, so the term "interrupt load" is misleading. About 480 > packets are handled for every tx interrupt (512 less 32 for watermark > stuff). Much more than 90% of the handling is useful work and would > have to be done somewhere; it just happens to be done in the interrupt > handler, and that is the best place to do it. With polling, it would > take longer to do it and the load is poorly reported so it is hard to see. > The system load for 539 txintrs/sec and 240 kpps is much larger. It > is about 45% (up from 25% in RELENG_4 :-(). 
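[Editorial note: Bruce's sleep-interval arithmetic above is easy to lose in the flowed text. Here is a hedged sketch of it; the constants (511-entry sk tx ring, 10000-20000 usec actual sleep at HZ = 100, ~4 usec per packet) come from his mail, while the helper name is my own.]

```python
# Sketch of the ttcp sleep-retry throughput limit Bruce describes:
# at best one tx-queue-full of packets is sent per (actual) sleep period.

TX_QUEUE_DEPTH = 511   # sk(4) hardware tx ring entries
US_PER_PACKET = 4      # ~4 us per packet on the sk/bus in question

def pps_when_sleeping(queue_depth: int, actual_sleep_us: int) -> int:
    """Upper bound on packets/sec for a sender that refills the tx
    queue only after sleeping actual_sleep_us microseconds."""
    return int(queue_depth * (1_000_000 / actual_sleep_us))

# HZ = 100: a requested 1000 us sleep really lasts 10000-20000 us (~15000).
print(pps_when_sleeping(TX_QUEUE_DEPTH, 15_000))   # 34066, as in the text

# HZ = 1000: the worst-case sleep is 2/HZ = 2000 us, while the full queue
# takes 511 * 4 us = 2044 us to drain, so the sender wakes in time to stream.
drain_us = TX_QUEUE_DEPTH * US_PER_PACKET
print(2_000 < drain_us)   # True: transmission can stream
```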
Mainly, I was talking about receiver performance and how interrupt load is obviously reduced by polling. That is a fact of polling. Whether this results in better performance or not is the subject of various papers, notably Luigi's original paper on his original 4.x-based polling implementation. It turns out that in many cases the polling model is better, but not because it can or cannot pull more packets out of the card (that is not as relevant), but because it allows for other things to happen and mitigates the live-lock scenario. You note this very fact in your reply below. > [Context almost lost to top posting.] > > >>>>On 4/19/2005 1:32 PM, Eivind Hestnes wrote: > >>>> > >>>>>I have an Intel Pro 1000 MT (PWLA8490MT) NIC (em(4) driver 1.7.35) > >>>>>installed > >>>>>in a Pentium III 500 Mhz with 512 MB RAM (100 Mhz) running FreeBSD > >>>>>5.4-RC3. > >>>>>The machine is routing traffic between multiple VLANs. Recently I did a > >>>>>benchmark with/without device polling enabled. Without device > >>>>>polling I was > >>>>>able to transfer roughly 180 Mbit/s. The router however was > >>>>>suffering when > >>>>>doing this benchmark. Interrupt load was peaking 100% - overall the > >>>>>system > >>>>>itself was quite unusable (_very_ high system load). > > I think it is CPU-bound. My Athlon2600 (overclocked) is many times > faster than your P3/500 (5-10 times?), but it doesn't have much CPU > left over (sending 240000 5-byte udp packets per second from sk takes > 60% of the CPU, and sending 53000 1500-byte udp packets per second > takes 30% of the CPU; sending tcp packets takes less CPU but goes > slower). Apparently 2 or 3 P3/500's worth of CPU is needed just to > keep up with the transmitter (with 100% of the CPU used but no > transmission slots missed). RELENG_4 has lower overheads so it might > need only 1 or 2 P3/500's worth of CPU to keep up. 
> > >>>>>With device > >>>>>polling > >>>>>enabled the interrupt kept stable around 40-50% and max transfer > >>>>>rate was > >>>>>nearly 70 Mbit/s. Not very scientific tests, but it gave me a pin > >>>>>point. > > I don't believe in device polling. It's not surprising that it reduces > throughput for a device that has large enough hardware queues. It just > lets a machine that is too slow to handle 1Gbps ethernet (at least under > FreeBSD) sort of work by not using the hardware to its full potential. > 70 Mbit/s is still bad -- it's easy to get more than that with a 100Mbps > NIC. > > >>>>>eivind@core-gw:~$ sysctl -a | grep kern.polling > >>>>>... > >>>>>kern.polling.idle_poll: 0 > > Setting this should increase throughput when the system is idle by taking > 100% of the CPU then. With just polling every 1 msec (from HZ = 1000), > there are the same problems as with ttcp retrying every 10-20 msec, but > scaled down by a factor of 10-20. For my ttcp example, the transmitter > runs dry every 2.044 msec so the polling interval must be shorter than > 2.044 msec, but this is with a full hardware tx queue (511 entries) on > a not very fast NIC. If the hardware is just twice as fast or the tx > queue is just half as large or half as full, then the hardware tx queue > will run dry when polled every 1 msec and hardware capability will be > wasted. This problem can be reduced by increasing HZ some more, but I > don't believe in increasing it beyond 1000, since only software that > does too much polling would notice it being larger. > > Bruce This last point brings up a whole flurry of thoughts, albeit seemingly unrelated: Have you thought about routing all network device interrupts for a particular network device from the IO APIC to the _same_ Local APIC, always? I don't see an advantage in round-robining them. Do you? 
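[Editorial note: a small sketch of Bruce's polling-rate argument above, with the figures from his mail (511-entry sk tx queue, ~4 usec per packet); the helper name is mine. The polling interval 1/HZ must be shorter than the time the full tx queue takes to drain, or hardware capability is wasted.]

```python
# Minimum polling rate needed so the tx queue is refilled before it drains.

def min_hz_to_stream(queue_entries: int, us_per_packet: int) -> float:
    """Smallest HZ at which per-tick polling refills the tx queue
    before it runs dry (queue drains in queue_entries * us_per_packet us)."""
    drain_us = queue_entries * us_per_packet
    return 1_000_000 / drain_us

# Full 511-entry sk queue at ~4 us/packet drains in 2.044 ms:
print(round(min_hz_to_stream(511, 4)))   # 489 -> HZ = 1000 polling keeps up

# Halve the queue (or double the NIC speed) and the margin nearly vanishes:
print(round(min_hz_to_stream(255, 4)))   # 980 -> HZ = 1000 is marginal
```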
-- Bosko Milekic bmilekic@technokratis.com bmilekic@FreeBSD.org From owner-freebsd-performance@FreeBSD.ORG Wed Apr 20 14:55:22 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 0DA7216A4CE for ; Wed, 20 Apr 2005 14:55:22 +0000 (GMT) Received: from stephanie.unixdaemons.com (stephanie.unixdaemons.com [67.18.111.194]) by mx1.FreeBSD.org (Postfix) with ESMTP id B170B43D39 for ; Wed, 20 Apr 2005 14:55:21 +0000 (GMT) (envelope-from bmilekic@technokratis.com) Received: from stephanie.unixdaemons.com (bmilekic@localhost.unixdaemons.com [127.0.0.1])j3KEtKkm063383; Wed, 20 Apr 2005 10:55:20 -0400 (EDT) Received: (from bmilekic@localhost) by stephanie.unixdaemons.com (8.13.4/8.12.1/Submit) id j3KEtJu0063382; Wed, 20 Apr 2005 10:55:19 -0400 (EDT) (envelope-from bmilekic@technokratis.com) X-Authentication-Warning: stephanie.unixdaemons.com: bmilekic set sender to bmilekic@technokratis.com using -f Date: Wed, 20 Apr 2005 10:55:19 -0400 From: Bosko Milekic To: "Jin Guojun [VFFS]" Message-ID: <20050420145519.GB59707@technokratis.com> References: <20050419183335.F18008131@joshua.stabbursmoen.no> <42655887.7060203@alumni.rice.edu> <4265724A.1040705@stabbursmoen.no> <42657420.3040104@he.iki.fi> <20050419214644.GB3656@technokratis.com> <20050420123251.A85348@delplex.bde.org> <4265D2D3.9040302@lbl.gov> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4265D2D3.9040302@lbl.gov> User-Agent: Mutt/1.4.2.1i cc: Eivind Hestnes cc: performance@freebsd.org cc: Petri Helenius cc: Bruce Evans Subject: Re: Performance Intel Pro 1000 MT (PWLA8490MT) X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 20 Apr 2005 14:55:22 -0000 On Tue, Apr 19, 2005 at 08:56:03PM 
-0700, Jin Guojun [VFFS] wrote: > Bruce Evans wrote: > > >On Tue, 19 Apr 2005, Bosko Milekic wrote: > > > >> My experience with 6.0-CURRENT has been that I am able to push at > >> least about 400kpps INTO THE KERNEL from a gigE em card on its own > >> 64-bit PCI-X 133MHz bus (i.e., the bus is uncontested) and that's > > > > > >A 64-bit bus doesn't seem to be essential for reasonable performance. > > > >I get about 210 kpps (receive) for a bge card on an old Athlon system > >with a 32-bit PCI 33MHz bus. Overclocking this bus speeds up at least > >sending almost proportionally to the overclocking :-). This is with > >my version of an old version of -current, with no mpsafenet, no driver > >tuning, and no mistuning (no INVARIANTS, etc., no POLLING, no HZ > 100). > >Sending goes slightly slower (about 200 kpps). > > Yes, 64-bit is not essential for getting 400~700 Mbps as long as the system > has high enough memory bandwidth, but it is essential to get full Gigabit. > > Simple numbers are in the "Tips" section at the bottom of the following page: > > http://www-didc.lbl.gov/NCS/generic/ncs-00.html > > and the details are described in the papers linked. > > P.S. A question about the unit "kpps" used in the original email. I am not sure > what this really means. > GigE can produce 400 kpps if packet size is 300 > bytes or less. > If packet size is 1500 bytes, the maximum pps is 83k (83kpps). > But 200-400 kpps is kind of low, maybe I missed some previous > emails. Obviously we're talking about small packets. 
:-) -- Bosko Milekic bmilekic@technokratis.com bmilekic@FreeBSD.org From owner-freebsd-performance@FreeBSD.ORG Wed Apr 20 20:28:23 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id C1DE916A4CE for ; Wed, 20 Apr 2005 20:28:23 +0000 (GMT) Received: from rproxy.gmail.com (rproxy.gmail.com [64.233.170.207]) by mx1.FreeBSD.org (Postfix) with ESMTP id 5EFB143D2D for ; Wed, 20 Apr 2005 20:28:23 +0000 (GMT) (envelope-from kometen@gmail.com) Received: by rproxy.gmail.com with SMTP id a41so193095rng for ; Wed, 20 Apr 2005 13:28:22 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=KinUrJxL3sQqPgw0JDZHcB5eEPtT6/AOgPbSN7xR9bFPVnaM4HP+wfyIdUuRVdXxIe1pUYab/FHJcC2i+JYTi9//7u9iVe1ZVv6+okd1mtOpekUshDcoeXpxh1AtKVn/TC6TN0vga5NTf8OwPZN+t7NxPkQmEHBi527BS6FtiLI= Received: by 10.38.24.45 with SMTP id 45mr1290847rnx; Wed, 20 Apr 2005 13:28:11 -0700 (PDT) Received: by 10.38.149.53 with HTTP; Wed, 20 Apr 2005 13:28:11 -0700 (PDT) Message-ID: Date: Wed, 20 Apr 2005 22:28:11 +0200 From: Claus Guttesen To: Eric Anderson In-Reply-To: <42666371.9080800@centtech.com> Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline References: <426507DC.50409@centtech.com> <42650EB2.4040409@centtech.com> <426642D4.8000202@centtech.com> <42666371.9080800@centtech.com> cc: freebsd-performance@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list Reply-To: Claus Guttesen List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 20 Apr 2005 20:28:23 -0000 > You could 
use the atabeast to do two raid 5's, then use vinum to stripe those two. I actually thought of that a while ago (unrelated to this). I read the vinum-page in the handbook, assume this is still valid. I recall a discussion regarding its (re)naming to gvinum, but don't see any mention of it, so I guess gvinum is no longer used. Are there any performance issues doing striping in FreeBSD? I actually have two unused LUNs, each with two 2 TB volumes, so I can stripe the first volume in the first LUN with the first volume in the second LUN, and likewise the second volumes, etc., so I distribute the load on as many disks as possible. As far as I know a vinum volume doesn't have the same 2 TB size limitation that disklabel and newfs have, but will I be able to newfs? The handbook refers to newfs -v when making a new fs, but I can't find -v in man newfs, man 8 vinum says: Just run newfs(8). Use the -v option to state that the device is not divided into partitions... The example uses the -U (enable softupdates) parameter instead: # newfs -U /dev/vinum/mirror regards Claus From owner-freebsd-performance@FreeBSD.ORG Thu Apr 21 03:57:58 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id AA3A316A4CE for ; Thu, 21 Apr 2005 03:57:58 +0000 (GMT) Received: from avscan2.sentex.ca (avscan2.sentex.ca [199.212.134.19]) by mx1.FreeBSD.org (Postfix) with ESMTP id 0B20043D46 for ; Thu, 21 Apr 2005 03:57:58 +0000 (GMT) (envelope-from mike@sentex.net) Received: from localhost (localhost.sentex.ca [127.0.0.1]) by avscan2.sentex.ca (8.12.11/8.12.11) with ESMTP id j3L3vqrS066358; Wed, 20 Apr 2005 23:57:52 -0400 (EDT) (envelope-from mike@sentex.net) Received: from avscan2.sentex.ca ([127.0.0.1]) by localhost (avscan2.sentex.ca [127.0.0.1]) (amavisd-new, port 10024) with LMTP id 65710-07; Wed, 20 Apr 2005 23:57:52 -0400 (EDT) Received: from lava.sentex.ca (pyroxene.sentex.ca [199.212.134.18]) 
by avscan2.sentex.ca (8.12.11/8.12.11) with ESMTP id j3L3vqLP066353; Wed, 20 Apr 2005 23:57:52 -0400 (EDT) (envelope-from mike@sentex.net) Received: from simian.sentex.net (simeon.sentex.ca [192.168.43.27]) by lava.sentex.ca (8.13.3/8.12.11) with ESMTP id j3L3vhFm051880; Wed, 20 Apr 2005 23:57:43 -0400 (EDT) (envelope-from mike@sentex.net) Message-Id: <6.2.1.2.0.20050420235003.04b7bda0@64.7.153.2> X-Mailer: QUALCOMM Windows Eudora Version 6.2.1.2 Date: Wed, 20 Apr 2005 23:57:08 -0400 To: Claus Guttesen , Eric Anderson From: Mike Tancsa In-Reply-To: References: <4264EF40.3060900@centtech.com> <4264F8A8.3080405@centtech.com> <426507DC.50409@centtech.com> <42650EB2.4040409@centtech.com> Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii"; format=flowed X-Virus-Scanned: by amavisd-new X-Virus-Scanned: by amavisd-new at avscan2b cc: freebsd-performance@freebsd.org Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 21 Apr 2005 03:57:58 -0000 At 04:47 AM 20/04/2005, Claus Guttesen wrote: > > elin% dd if=/dev/zero of=/nfssrv/dd.tst bs=1024 count=1048576 > > 1048576+0 records in > > 1048576+0 records out > > 1073741824 bytes transferred in 21.373114 secs (50237968 bytes/sec) > > > >Follow-up, did the same dd on a Dell 2850 with a LSI Logic (amr), 6 >scsi-disks in a raid 5: > >frodo~%>dd if=/dev/zero of=dd.tst bs=1024 count=1048576 >1048576+0 records in >1048576+0 records out >1073741824 bytes transferred in 8.972321 secs (119672693 bytes/sec) > >Much faster. FreeBSD 5.4 RC2. 
FYI, here it is on an Areca SATA in RAID5, 4 disks, 3GHz P4 [nfs]# dd if=/dev/zero of=dd.tst bs=1024 count=1048576 1048576+0 records in 1048576+0 records out 1073741824 bytes transferred in 16.416051 secs (65408047 bytes/sec) [nfs]# [nfs]# dd if=/dev/zero of=dd.tst bs=2048 count=1048576 1048576+0 records in 1048576+0 records out 2147483648 bytes transferred in 30.371264 secs (70707747 bytes/sec) [nfs]# Same box, but to an ad0 partition (similar Seagate drive) [nfs]# dd if=/dev/zero of=dd.tst bs=512 count=1048576 1048576+0 records in 1048576+0 records out 536870912 bytes transferred in 11.096809 secs (48380658 bytes/sec) [nfs]# RELENG_5 from today. From owner-freebsd-performance@FreeBSD.ORG Thu Apr 21 12:45:33 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 6EFFC16A4CE for ; Thu, 21 Apr 2005 12:45:33 +0000 (GMT) Received: from rproxy.gmail.com (rproxy.gmail.com [64.233.170.203]) by mx1.FreeBSD.org (Postfix) with ESMTP id 035F043D45 for ; Thu, 21 Apr 2005 12:45:33 +0000 (GMT) (envelope-from kometen@gmail.com) Received: by rproxy.gmail.com with SMTP id a41so305530rng for ; Thu, 21 Apr 2005 05:45:32 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=pVvMeeZ1HX0UTITR8Zk8Qv8ISN7jyIRfDgbO7Qt8T8IDKyhdEeELd/K3W6sETRxRdWAndh3ez+hENcmbNBQKdFt4euv+yaAKrjMG/r1H1Zif2Cqm4Uv43I46W/EjeSey9wjjb/nf1xy8EUDyxG//kJNva0qXULOuprDLwk2uiGo= Received: by 10.38.67.8 with SMTP id p8mr2254866rna; Thu, 21 Apr 2005 05:45:32 -0700 (PDT) Received: by 10.38.149.53 with HTTP; Thu, 21 Apr 2005 05:45:32 -0700 (PDT) Message-ID: Date: Thu, 21 Apr 2005 14:45:32 +0200 From: Claus Guttesen To: Mike Tancsa In-Reply-To: <6.2.1.2.0.20050420235003.04b7bda0@64.7.153.2> Mime-Version: 1.0 Content-Type: text/plain; 
charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline References: <4264F8A8.3080405@centtech.com> <426507DC.50409@centtech.com> <42650EB2.4040409@centtech.com> <6.2.1.2.0.20050420235003.04b7bda0@64.7.153.2> cc: freebsd-performance@freebsd.org cc: Eric Anderson Subject: Re: some simple nfs-benchmarks on 5.4 RC2 X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list Reply-To: Claus Guttesen List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 21 Apr 2005 12:45:33 -0000 > > > elin% dd if=/dev/zero of=/nfssrv/dd.tst bs=1024 count=1048576 > > > 1048576+0 records in > > > 1048576+0 records out > > > 1073741824 bytes transferred in 21.373114 secs (50237968 bytes/sec) > > > > > > >Follow-up, did the same dd on a Dell 2850 with a LSI Logic (amr), 6 > >scsi-disks in a raid 5: > > > >frodo~%>dd if=/dev/zero of=dd.tst bs=1024 count=1048576 > >1048576+0 records in > >1048576+0 records out > >1073741824 bytes transferred in 8.972321 secs (119672693 bytes/sec) > > > >Much faster. FreeBSD 5.4 RC2. > > FYI, here it is on an Areca SATA in RAID5, 4 disks, 3GHz P4 > > [nfs]# dd if=/dev/zero of=dd.tst bs=1024 count=1048576 > 1048576+0 records in > 1048576+0 records out > 1073741824 bytes transferred in 16.416051 secs (65408047 bytes/sec) > [nfs]# > > [nfs]# dd if=/dev/zero of=dd.tst bs=2048 count=1048576 > 1048576+0 records in > 1048576+0 records out > 2147483648 bytes transferred in 30.371264 secs (70707747 bytes/sec) > [nfs]# > > Same box, but to an ad0 partition (similar Seagate drive) > [nfs]# dd if=/dev/zero of=dd.tst bs=512 count=1048576 > 1048576+0 records in > 1048576+0 records out > 536870912 bytes transferred in 11.096809 secs (48380658 bytes/sec) > [nfs]# > > RELENG_5 from today. Thank you. 
regards Claus From owner-freebsd-performance@FreeBSD.ORG Fri Apr 22 15:32:15 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id CFDDA16A4CE for ; Fri, 22 Apr 2005 15:32:15 +0000 (GMT) Received: from rproxy.gmail.com (rproxy.gmail.com [64.233.170.194]) by mx1.FreeBSD.org (Postfix) with ESMTP id 4B7DF43D1D for ; Fri, 22 Apr 2005 15:32:15 +0000 (GMT) (envelope-from seancody@gmail.com) Received: by rproxy.gmail.com with SMTP id z35so733966rne for ; Fri, 22 Apr 2005 08:32:14 -0700 (PDT) DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition; b=k3oLDHXL9IcLryNZsPBEKgfA4HLKdEX/eb+CJCkFgSzdU6aSkzuJ9+1YBqIsbIn2hc/FNKxnMliUaUCFViZEoIJuc7YRq8nVZ8c/4+C5a65AX2tZRoJ0ZwH6e+lTAF0NF2c0dkJYMOEwJLDlAMnIyQNFflGvWx9mcWumyz1c9zA= Received: by 10.38.75.59 with SMTP id x59mr3630433rna; Fri, 22 Apr 2005 08:32:14 -0700 (PDT) Received: by 10.38.181.13 with HTTP; Fri, 22 Apr 2005 08:32:14 -0700 (PDT) Message-ID: <136272710504220832793dfc3d@mail.gmail.com> Date: Fri, 22 Apr 2005 10:32:14 -0500 From: Sean To: freebsd-performance@freebsd.org Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline Subject: Channel bonding. X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list Reply-To: Sean List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 22 Apr 2005 15:32:16 -0000 I've been experimenting with the idea of doing channel bonding as a means of improving the performance of some heavily used file servers. Currently I am using a single Intel 1000MT interface on each file server and it has rather lackluster performance. 
I've set two ports of my switch to 'shared' (an Extreme BlackDiamond 6800) and am using an Intel 1000MT Dual Port for the bonding interfaces. The performance increase I see is marginally better than just the one interface (70MB/s [bonded] vs 60MB/s [single]) which is slightly disappointing. I am using ifstat and iostat (for disk throughput, 30MB/s on a 3ware 7500-12 yet again disappointing) to monitor and a variant of tcpblast to generate traffic. I'm using 4 other machines (on the same blade on the switch) to generate the traffic to the bonded interface; all are similar hardware with varying versions of FreeBSD. In order to get the numbers as high as I have I've enabled polling (some stability issues when used under SMP). Before I dropped everything and moved over to trying out ng_fec I wanted to get a few opinions on other things I can check or try. These servers typically have anywhere between 20-100 clients reading and writing many large files as fast as they can. So far the machines only perform well when there are fewer than 20 clients. The whole point of the experiment is to increase performance of our current resources instead of buying more servers. I really don't know what to expect (in terms of performance) from this but just based on the 'ratings' on the individual parts this machine is not performing very well. In case anyone has any ideas I've included the 'specs' of the hardware below. 
Hardware: Dual Intel Xeon CPU 2.66GHz Intel Server SE7501BR2 Motherboard 2X 512 MB Registered ECC DDR RAM 3ware 7500-12 (12x120GB, RAID-5) Intel PRO/1000 MT Dual Port (em0,1) Intel PRO/1000 MT (On board) (em2) Switch: Extreme Black Diamond 6800 Gigabit Blade: G24T^3 51052 Kernel: FreeBSD phoenix 5.3-RELEASE FreeBSD 5.3-RELEASE #1: Wed Apr 20 13:33:09 CDT 2005 root@phoenix.franticfilms.com:/usr/src/sys/i386/compile/SMP i386 Channel Bonding commands used: ifconfig em0 up ifconfig em1 up kldload ng_ether.ko ngctl mkpeer em0: one2many upper one ngctl connect em0: em0:upper lower many0 ngctl connect em1: em0:upper lower many1 echo Allow em1 to xmit/recv em0 frames ngctl msg em1: setpromisc 1 ngctl msg em1: setautosrc 0 ngctl msg em0:upper setconfig "{ xmitAlg=1 failAlg=1 enabledLinks=[ 1 1 ] }" ifconfig em0 A.B.C.D netmask 255.255.255.0 Contents of /etc/sysctl.conf: net.inet.tcp.inflight_enable=1 net.inet.tcp.sendspace=32767 net.inet.tcp.recvspace=32767 net.inet.tcp.delayed_ack=0 vfs.hirunningspace=10485760 vfs.lorunningspace=10485760 net.inet.tcp.local_slowstart_flightsize=32767 net.inet.tcp.rfc1323=1 kern.maxfilesperproc=2048 vfs.vmiodirenable=1 kern.ipc.somaxconn=4096 kern.maxfiles=65536 kern.polling.enable=1 -- Sean From owner-freebsd-performance@FreeBSD.ORG Fri Apr 22 16:06:47 2005 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 762A516A4CE for ; Fri, 22 Apr 2005 16:06:47 +0000 (GMT) Received: from mh2.centtech.com (moat3.centtech.com [207.200.51.50]) by mx1.FreeBSD.org (Postfix) with ESMTP id 1A30F43D48 for ; Fri, 22 Apr 2005 16:06:46 +0000 (GMT) (envelope-from anderson@centtech.com) Received: from [10.177.171.220] (neutrino.centtech.com [10.177.171.220]) by mh2.centtech.com (8.13.1/8.13.1) with ESMTP id j3MG6ivU071696; Fri, 22 Apr 2005 11:06:45 -0500 (CDT) (envelope-from 
anderson@centtech.com) Message-ID: <426920E5.4070806@centtech.com> Date: Fri, 22 Apr 2005 11:05:57 -0500 From: Eric Anderson User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.7.5) Gecko/20050325 X-Accept-Language: en-us, en MIME-Version: 1.0 To: Sean References: <136272710504220832793dfc3d@mail.gmail.com> In-Reply-To: <136272710504220832793dfc3d@mail.gmail.com> Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit cc: freebsd-performance@freebsd.org Subject: Re: Channel bonding. X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.1 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 22 Apr 2005 16:06:47 -0000 Sean wrote: > I've been experimenting with the idea of doing channel bonding as a > means of improving the performance of some heavily used file servers. > Currently I am using a single Intel 1000MT interface on each file > server and it has rather lackluster performance. [..snip..] > In case anyone has any ideas I've included the 'specs' of the hardware > below. > > Hardware: > Dual Intel Xeon CPU 2.66GHz > Intel Server SE7501BR2 Motherboard > 2X 512 MB Registered ECC DDR RAM > 3ware 7500-12 (12x120GB, RAID-5) > Intel PRO/1000 MT Dual Port (em0,1) > Intel PRO/1000 MT (On board) (em2) > > Switch: > Extreme Black Diamond 6800 > Gigabit Blade: G24T^3 51052 [..snip..] Are the gig NICs in 64-bit slots? 32-bit slots can slow you down a bunch. Also, I've seen some cases where the PCI bus itself is the bottleneck with multiple high-IO boards installed on the same bus. Eric -- ------------------------------------------------------------------------ Eric Anderson Sr. Systems Administrator Centaur Technology A lost ounce of gold may be found, a lost moment of time never. 
From owner-freebsd-performance@FreeBSD.ORG Fri Apr 22 16:35:48 2005
From: Dean Strik <dean@stack.nl>
Date: Fri, 22 Apr 2005 18:33:40 +0200
To: Sean
Cc: freebsd-performance@freebsd.org
Subject: Re: Channel bonding.
Sean wrote:
> I've been experimenting with the idea of doing channel bonding as a
> means of improving the performance of some heavily used file servers.
> Currently I am using a single Intel 1000MT interface on each file
> server and it has rather lackluster performance.
>
> I've set two ports of my switch to 'shared' (an Extreme
> BlackDiamond 6800) and am using an Intel 1000MT Dual Port for
> the bonding interfaces.
>
> The performance increase I see is marginally better than
> just the one interface (70MB/s [bonded] vs 60MB/s [single]), which
> is slightly disappointing. I am using ifstat and iostat (for disk
> throughput, 30MB/s on a 3ware 7500-12, again disappointing) to
> monitor, and a variant of tcpblast to generate traffic. I'm using
> 4 other machines (on the same blade on the switch) to generate the
> traffic to the bonded interface; all are similar hardware with
> varying versions of FreeBSD. In order to get the numbers as high
> as I have, I've enabled polling (some stability issues when
> used under SMP).

If I understand you correctly, you are not doing any load sharing from
the FreeBSD box to the BD6800, right?

Also, it's likely the BD6800 uses the lsb of the source MAC xor the
destination MAC. If you have only four clients, a marginal increase in
performance could well be because src^dst often returns the same value
(e.g. with 3 out of 4 clients having an even MAC, and 1 out of 4 an odd
MAC). Try making this 50/50 by changing a MAC address of a client using
'ifconfig ether'.

-- 
Dean C. Strik          Eindhoven University of Technology
dean@stack.nl | dean@ipnet6.org | http://www.ipnet6.org/
"This isn't right. This isn't even wrong."
                -- Wolfgang Pauli
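Dean's hashing theory is easy to sanity-check. The sketch below uses
made-up MAC addresses and the *guessed* lsb-of-(src xor dst) hash; it is
not a confirmed BlackDiamond 6800 algorithm:

```python
# Guessed BD6800 trunk-port choice: low bit of (src MAC xor dst MAC).
# All MAC addresses here are hypothetical.
def link_for(src_mac: str, dst_mac: str) -> int:
    """Return 0 or 1: which of the two trunk ports this flow hashes to."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) & 1

server = "00:07:e9:12:34:56"      # last bit even
clients = [
    "00:07:e9:aa:bb:02",          # even
    "00:07:e9:aa:bb:04",          # even
    "00:07:e9:aa:bb:06",          # even
    "00:07:e9:aa:bb:07",          # odd
]
print([link_for(server, c) for c in clients])  # [0, 0, 0, 1]
```

With three of the four clients hashing to the same trunk port, the
aggregate barely beats a single link, which would fit the 70 vs 60 MB/s
observation; flipping the last MAC bit on one even-MAC client would give
a 2/2 split.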
From owner-freebsd-performance@FreeBSD.ORG Fri Apr 22 16:50:14 2005
From: Sean <seancody@gmail.com>
Date: Fri, 22 Apr 2005 11:50:13 -0500
To: Eric Anderson
Cc: freebsd-performance@freebsd.org
Subject: Re: Channel bonding.

> Are the gig NICs in 64-bit slots? 32-bit slots can slow you down a bunch.
> Also, I've seen some cases where the PCI bus itself is the bottleneck
> with multiple high-I/O boards installed on the same bus.

Yes. The 3ware card and the dual-port em are both in the PCI-X 100 slots
(the first two). There are no 32-bit cards in the machine.

Is there some method I can use to determine whether the PCI bus is the
bottleneck, or even the level of contention on the bus? I've been looking
at the newer Opteron boards with multiple PCI buses, but that's a lot of
money to spend on a hunch.

-- 
Sean

From owner-freebsd-performance@FreeBSD.ORG Fri Apr 22 17:12:00 2005
From: "Steven Hartland" <killing@multiplay.co.uk>
Date: Fri, 22 Apr 2005 18:11:29 +0100
To: "Sean", "Eric Anderson"
Cc: freebsd-performance@freebsd.org
Subject: Re: Channel bonding.
I will be putting together a dual Opteron this weekend with the hope of
testing network throughput. Spec will be: dual Opteron 244, 2 GB RAM,
5x400 GB SATA RAID 5 on a HighPoint 1820a, a Broadcom 5705, an Intel GbE,
and an Intel dual port (32-bit PCI) for comparison. Will let you know
the results.

Steve

----- Original Message -----
From: "Sean"
To: "Eric Anderson"
Cc: freebsd-performance@freebsd.org
Sent: 22 April 2005 17:50
Subject: Re: Channel bonding.

> Are the gig NICs in 64-bit slots? 32-bit slots can slow you down a bunch.
> Also, I've seen some cases where the PCI bus itself is the bottleneck
> with multiple high-IO boards installed on the same bus.

Yes. The 3ware card and the dual-port em are both in the PCI-X 100 slots
(the first two). There are no 32-bit cards in the machine.

Is there some method I can use to determine if the PCI bus is the
bottleneck or even to determine the level of contention on the bus? I've
been looking at the newer Opteron boards with multiple PCI buses but
that's a lot of money to spend on my hunch.

--
Sean