From owner-freebsd-java Thu Jul 29 9:54:41 1999
Delivered-To: freebsd-java@freebsd.org
Received: from aurora.rg.iupui.edu (aurora.rg.iupui.edu [134.68.31.122])
	by hub.freebsd.org (Postfix) with ESMTP id 6305A155DD
	for ; Thu, 29 Jul 1999 09:54:24 -0700 (PDT)
	(envelope-from gunther@aurora.rg.iupui.edu)
Received: (from gunther@localhost) by aurora.rg.iupui.edu (8.8.7/8.8.7)
	id MAA28086 for java@FreeBSD.ORG; Thu, 29 Jul 1999 12:01:45 -0500 (EST)
	(envelope-from gunther)
Date: Thu, 29 Jul 1999 12:01:45 -0500 (EST)
From: Gunther Schadow
Message-Id: <199907291701.MAA28086@aurora.rg.iupui.edu>
To: java@FreeBSD.ORG
Subject: JVM optimizations?
Sender: owner-freebsd-java@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.org

Hi,

I am wondering whether someone has looked into the JVM code to figure
out whether it could be optimized at a specific point. I have read
that the two most costly operations in Java are object creation and
array creation. Because all the fancy patterns and designs of the OO
purists spend heavily on many little objects and helper objects
(factories, iterators, etc.), fast object allocation in the JVM could
speed things up quite significantly.

For numbers, here is a comparison of the costs of actions in Java,
taken from Bruce Eckel's "Thinking in Java", pp. 831f:

Operation                   Example                       Normalized time
--------------------------  ----------------------------  ---------------
Local assignment            i = n;                                    1.0
Instance assignment         this.i = n;                               1.2
int increment               i++;                                      1.5
byte increment              b++;                                      2.0
short increment             s++;                                      2.0
float increment             f++;                                      2.0
double increment            d++;                                      2.0
Empty loop                  while(true) n++;                          2.0
Ternary expression          (x < 0) ? -x : x                          2.2
Math call                   Math.abs(x);                              2.5
Array assignment            a[0] = n;                                 2.7
long increment              l++;                                      3.5
Method call                 funct();                                  5.9
throw and catch exception   try { throw e; } catch(e) {}              320
synchronized method call    synchMethod();                            570
New object                  new Object();                             980
New array                   new int[10];                             3100

As you can see, a new object is *very* expensive (not to mention the
new array, whose cost may depend on the costly object creation
anyway). I wonder whether this is a law of nature or whether it is
optimizable. Imagine what impact a reduction of object allocation time
by only 30% would have!!

From my old "Anatomy of LISP" school, we used to manage the heap with
a free list, and an allocation from the free list was totally cheap:

(DE CONS (A D)
    ((LAMBDA (F)
         (SETQ *FREELIST* (CDR *FREELIST*))   ; pop the free list
         (RPLACA F A)                         ; fill in the new cell
         (RPLACD F D)
         F)                                   ; and return it
     (CAR *FREELIST*)))

in other words

class FreeList {
    Node head;                        // first free node, chained via tail

    Node allocateNode(Node newHead, Node newTail) {
        Node n = this.head;           // pop a node off the free list
        this.head = n.getTail();
        n.setHead(newHead);
        n.setTail(newTail);
        return n;
    }
    // ...
}

Now, I don't know enough about heap management in FreeBSD and Java,
but I would guess that the added difficulty is variable-size memory
blocks. But does it have to be so expensive? I doubt that the call to
malloc is the hog here. What the hell is the JVM doing here? I have no
JVM source code, or I would look into it sometime. Is the source
redistributable now? (It wasn't a year or so ago, when you had to do a
special signup with Sun.)

just curious,
-Gunther

Gunther Schadow ----------------------------------- http://aurora.rg.iupui.edu
Regenstrief Institute for Health Care
1001 W 10th Street RG5, Indianapolis IN 46202, Phone: (317) 630 7960
schadow@aurora.rg.iupui.edu ---------------------- #include

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-java" in the body of the message
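[Editorial note: the free-list scheme sketched in the message above can be
written out as a small self-contained Java object pool. All class and method
names here (Node, NodePool, cons, release) are hypothetical, invented for
illustration; this is a sketch of the technique, not anyone's actual JVM
code. The point is that once nodes are pre-allocated, "allocation" is just a
couple of pointer updates, at the price of managing lifetimes yourself.]

```java
// Hypothetical free-list allocator, mirroring the LISP CONS example:
// allocation pops the free list, release pushes a node back on it.
final class Node {
    Object head, tail;   // the cell's payload (car/cdr)
    Node next;           // links free nodes together while on the list
}

final class NodePool {
    private Node free;   // head of the free list

    NodePool(int size) {
        // Pre-allocate all nodes once, up front.
        for (int i = 0; i < size; i++) {
            Node n = new Node();
            n.next = free;
            free = n;
        }
    }

    /** Pop a node off the free list and fill it in -- no 'new' here. */
    Node cons(Object head, Object tail) {
        Node n = free;
        if (n == null) throw new IllegalStateException("pool exhausted");
        free = n.next;
        n.head = head;
        n.tail = tail;
        n.next = null;
        return n;
    }

    /** Return a node to the free list so it can be recycled. */
    void release(Node n) {
        n.head = n.tail = null;
        n.next = free;
        free = n;
    }
}

public class PoolDemo {
    public static void main(String[] args) {
        NodePool pool = new NodePool(2);
        Node a = pool.cons("A", null);
        pool.release(a);
        Node b = pool.cons("B", null);
        // The pool hands back the same recycled object:
        System.out.println(a == b);   // prints "true"
    }
}
```

Whether such pooling beats the JVM's own allocator is exactly the question
the message raises; this sketch only shows why the free-list path itself is
cheap.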