From owner-freebsd-gecko@FreeBSD.ORG Sun Apr 4 01:50:07 2010 Return-Path: Delivered-To: gecko@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 834EA106566B; Sun, 4 Apr 2010 01:50:07 +0000 (UTC) (envelope-from e@cube2.se) Received: from fg-out-1718.google.com (fg-out-1718.google.com [72.14.220.154]) by mx1.freebsd.org (Postfix) with ESMTP id EA2FF8FC08; Sun, 4 Apr 2010 01:50:06 +0000 (UTC) Received: by fg-out-1718.google.com with SMTP id l26so342200fgb.13 for ; Sat, 03 Apr 2010 18:50:04 -0700 (PDT) MIME-Version: 1.0 Received: by 10.102.215.6 with HTTP; Sat, 3 Apr 2010 18:50:04 -0700 (PDT) X-Originating-IP: [85.229.46.8] In-Reply-To: <20100403165639.75a69a8b@ernst.jennejohn.org> References: <20100403165639.75a69a8b@ernst.jennejohn.org> Date: Sun, 4 Apr 2010 03:50:04 +0200 Received: by 10.102.12.17 with SMTP id 17mr2133297mul.36.1270345804592; Sat, 03 Apr 2010 18:50:04 -0700 (PDT) Message-ID: From: Edward Tjörnhammar To: gary.jennejohn@freenet.de Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: Garrett Cooper , freebsd-ports@freebsd.org, gecko@freebsd.org Subject: Re: Bug Report: Firefox 3.6.2 crashes on JavaScript heavy pages X-BeenThere: freebsd-gecko@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 Apr 2010 01:50:07 -0000 Thank you for your replies! I didn't have any special make.conf but tried using -fno-strict-aliasing as suggested by Garrett and now it works. Regards On Sat, Apr 3, 2010 at 4:56 PM, Gary Jennejohn wrote: > On Sat, 3 Apr 2010 04:24:05 -0700 > Garrett Cooper wrote: > >> On Sat, Apr 3, 2010 at 3:30 AM, Edward Tjörnhammar wrote: >> > The browser crashes on JavaScript intensive pages.
I've tried to debug >> > the behavior, but since Firebug crashes with Firefox this is a no-go. >> > I've tried disabling optimizations in the firefox makefile, but it >> > makes no difference. >> > >> > I don't believe it matters, but my hardware is a Dell XPS m1210 laptop >> > with Intel graphics. >> > >> > What system: >> > pylon# uname -a >> > FreeBSD pylon 8.0-RELEASE FreeBSD 8.0-RELEASE #0: Sat Mar 13 18:50:29 >> > CET 2010 root@pylon:/usr/obj/usr/src/sys/PYLON i386 >> > >> > What makefile: >> > pylon# grep Makefile,v /usr/ports/www/firefox/Makefile >> > # $FreeBSD: ports/www/firefox/Makefile,v 1.221 2010/03/23 08:54:22 beat Exp $ >> > >> > How to reproduce: >> > I can reproduce this crash every time by removing my .mozilla and then visiting: >> > http://review.source.android.com/11633 >> > Most pages are OK. >> > >> > Error message: >> > Floating point exception (core dumped) >> > >> > Well, the processor supports SSE intrinsics, so perhaps this is a SIMD >> > optimization bug somewhere else; I remember I enabled that for some >> > other port. >> > >> > Other notable behaviour: >> > If I run firefox3 under truss -f, firefox3 does not crash on the page. >> > I could probably dive deeper into this. >> >> Doesn't crash for me on Firefox 3.5.8: >> >> # /etc/make.conf: >> CFLAGS += -fno-strict-aliasing -pipe >> CXXFLAGS += -fno-strict-aliasing -pipe >> # ... >> CPUTYPE := nocona >> >> Do you have a custom make.conf file? If so, what's your processor >> and your CPUTYPE / C*FLAGS? >> > > The referenced page works for me using FF 3.6.2. I don't have any > special CFLAGS values set in src.conf/make.conf.
> > I also have > CPU: AMD Athlon(tm) Dual Core Processor 4850e (2505.35-MHz K8-class CPU) > Features=0x178bfbff MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2,HTT> > Features2=0x2001 > > -- > Gary Jennejohn > From owner-freebsd-gecko@FreeBSD.ORG Sun Apr 4 02:48:12 2010 Return-Path: Delivered-To: gecko@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 35C2C106564A; Sun, 4 Apr 2010 02:48:12 +0000 (UTC) (envelope-from yanefbsd@gmail.com) Received: from qw-out-2122.google.com (qw-out-2122.google.com [74.125.92.24]) by mx1.freebsd.org (Postfix) with ESMTP id CA5818FC14; Sun, 4 Apr 2010 02:48:11 +0000 (UTC) Received: by qw-out-2122.google.com with SMTP id 3so1081129qwe.7 for ; Sat, 03 Apr 2010 19:48:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:received:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=tibMNQX02YasAdFel6TxXf2vJEQgkKSSgHvckGYbLkY=; b=c5v6OVYmkiAHPhL21nUJVddzHF9XRQl8+t5X7fT/vS+aU6oUxiAZbb8WlBbuyr4+Cm aBZ7i5Mtmg3fDmad4RAFLLCgEqKUe9cuOKxwr7sVVPrKroebBCkQtfubPR+10zoNjEws ++U7ecfZKswqvNgW4aZL5UX/WIFI8yHo9oQ4g= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=JBU/sU8pMCnbEClcN3HhRJ8A+BYMo65rhVOJkzMMsUxFYjIJ4ZGrGhp4Ex0sFAsr9A 6GBLPieELklZNsMQY0ggMhvfMQ0IzcF+1JTRIWmGPA9BAsbLoSDk6VZ9cinvNyrxFQcF 0bCzdLY+6+NCsCubAHSHqjy1Z2v/p0WCDowPg= MIME-Version: 1.0 Received: by 10.229.33.72 with HTTP; Sat, 3 Apr 2010 19:48:09 -0700 (PDT) In-Reply-To: References: <20100403165639.75a69a8b@ernst.jennejohn.org> Date: Sat, 3 Apr 2010 19:48:09 -0700 Received: by 10.229.190.133 with SMTP id di5mr4445303qcb.23.1270349289788; Sat, 03 Apr 2010 19:48:09 -0700 (PDT) Message-ID: From: Garrett Cooper To: Edward Tjörnhammar
Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: gecko@freebsd.org, gary.jennejohn@freenet.de, freebsd-ports@freebsd.org Subject: Re: Bug Report: Firefox 3.6.2 crashes on JavaScript heavy pages X-BeenThere: freebsd-gecko@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 Apr 2010 02:48:12 -0000 On Sat, Apr 3, 2010 at 6:50 PM, Edward Tjörnhammar wrote: > Thank you for your replies! > > I didn't have any special make.conf but tried using > -fno-strict-aliasing as suggested by Garrett and now it works. > > Regards > > On Sat, Apr 3, 2010 at 4:56 PM, Gary Jennejohn wrote: >> [original report and make.conf discussion quoted above, trimmed] >> >> The referenced page works for me using FF 3.6.2. I don't have any >> special CFLAGS values set in src.conf/make.conf. >> >> I also have >> CPU: AMD Athlon(tm) Dual Core Processor 4850e (2505.35-MHz K8-class CPU) >> Features=0x178bfbff MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2,HTT> >> Features2=0x2001 It might be a bug then in how they're type-casting in firefox; if this is reproducible, a bug should definitely be submitted upstream.
Thanks, -Garrett From owner-freebsd-gecko@FreeBSD.ORG Sun Apr 4 02:49:16 2010 Return-Path: Delivered-To: gecko@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 991AE1065674; Sun, 4 Apr 2010 02:49:16 +0000 (UTC) (envelope-from yanefbsd@gmail.com) Received: from mail-qy0-f195.google.com (mail-qy0-f195.google.com [209.85.221.195]) by mx1.freebsd.org (Postfix) with ESMTP id 300728FC12; Sun, 4 Apr 2010 02:49:15 +0000 (UTC) Received: by qyk33 with SMTP id 33so3261474qyk.28 for ; Sat, 03 Apr 2010 19:49:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:received:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=7dEGRHAxSd+i/DZZ1DGrH45SiWPpKvbCeL+vwrP7fPI=; b=eOBIh9nm+9xX3VHUvPPSEfC3eN9tlw5G+Hzjap3tfym2rAI4chUgXNvBQsiuVZCMmB TVvDFSGAbnjd3aMvua3sbHgxYUTQP3C7if45ODoNSHZxby0dt6LqWiZHiJbESC2jL0vO weLY2yH8RVSTFAOLPbP5fUF5EG936qstAa8Pc= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=SQa+AxnvXmYN8YSxFQ6vRrm9LVXNNpPEmBrKZ3anmnLhR22y0Npyezv1ll9hSSN7Px 2l3sHnEZf+1jTUwIGePuSe1iuRsEW0WgYY1X2ti8/mdbTW0wUP2ag9/3faItJfbkttNj /TtjLU1T182w7iJBiQJCq69QADG/BlDr5nb48= MIME-Version: 1.0 Received: by 10.229.33.72 with HTTP; Sat, 3 Apr 2010 19:49:13 -0700 (PDT) In-Reply-To: References: <20100403165639.75a69a8b@ernst.jennejohn.org> Date: Sat, 3 Apr 2010 19:49:13 -0700 Received: by 10.229.99.77 with SMTP id t13mr6738106qcn.80.1270349354163; Sat, 03 Apr 2010 19:49:14 -0700 (PDT) Message-ID: From: Garrett Cooper To: =?ISO-8859-1?Q?Edward_Tj=F6rnhammar?= Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: gecko@freebsd.org, gary.jennejohn@freenet.de, freebsd-ports@freebsd.org Subject: Re: Bug Report: Firefox 3.6.2 
crashes on JavaScript heavy pages X-BeenThere: freebsd-gecko@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 Apr 2010 02:49:16 -0000 On Sat, Apr 3, 2010 at 7:48 PM, Garrett Cooper wrote: > On Sat, Apr 3, 2010 at 6:50 PM, Edward Tjörnhammar wrote: >> Thank you for your replies! >> >> I didn't have any special make.conf but tried using >> -fno-strict-aliasing as suggested by Garrett and now it works. >> >> Regards >> >> [earlier quoted discussion trimmed] > > It might be a bug then in how they're type-casting in firefox; if > this is reproducible, a bug should definitely be submitted upstream. Another possibility -- if you're using -O2, etc., it might be a compiler optimization bug on your CPU architecture (less likely).
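[Editorial note: the workaround Edward confirmed amounts to the following sketch. It is not a transcript from the thread; it assumes the standard FreeBSD locations of /etc/make.conf and the ports tree.]

```shell
# Build ports without strict-aliasing assumptions by appending the
# workaround flags to the system-wide make.conf (standard path assumed).
cat >> /etc/make.conf <<'EOF'
CFLAGS+=	-fno-strict-aliasing -pipe
CXXFLAGS+=	-fno-strict-aliasing -pipe
EOF

# Rebuild and reinstall the port so the new flags take effect.
cd /usr/ports/www/firefox
make deinstall reinstall clean
```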
Cheers, -Garrett From owner-freebsd-gecko@FreeBSD.ORG Sun Apr 4 18:50:06 2010 Return-Path: Delivered-To: gecko@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 331B3106566B for ; Sun, 4 Apr 2010 18:50:06 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 012358FC0C for ; Sun, 4 Apr 2010 18:50:06 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o34Io5FZ050725 for ; Sun, 4 Apr 2010 18:50:05 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o34Io505050705; Sun, 4 Apr 2010 18:50:05 GMT (envelope-from gnats) Date: Sun, 4 Apr 2010 18:50:05 GMT Message-Id: <201004041850.o34Io505050705@freefall.freebsd.org> To: gecko@FreeBSD.org From: Kenneth Lind Cc: Subject: Re: ports/143288: www/mplayer-plugin does not build with libxul X-BeenThere: freebsd-gecko@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Kenneth Lind List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 Apr 2010 18:50:06 -0000 The following reply was made to PR ports/143288; it has been noted by GNATS. From: Kenneth Lind To: bug-followup@FreeBSD.org Cc: "Beat Gaetzi" Subject: Re: ports/143288: www/mplayer-plugin does not build with libxul Date: Sun, 4 Apr 2010 20:48:28 +0200 --Signature=_Sun__4_Apr_2010_20_48_29_+0200_QMfC8A/HNtYBgd/o Content-Type: text/plain; charset=US-ASCII Content-Disposition: inline Content-Transfer-Encoding: quoted-printable Hi! Indeed, I had firefox2 installed. It was apparently installed as a dependency of something or other, but that seems no longer to be the case. Removing firefox2 solved the matter elegantly.
I do feel slightly silly now :) Thanks for the help! Case closed ;) Regards, Kenneth Lind -- Politically, we are still stuck in the systems of thought of the Greek and Roman slave states, no matter how much we rant about "democracy." Wilhelm Reich - The Mass Psychology of Fascism Homepage: http://www.sasurai.se/ Cellphone .se: +46-(0)73 218 1937 --Signature=_Sun__4_Apr_2010_20_48_29_+0200_QMfC8A/HNtYBgd/o Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (FreeBSD) iEYEARECAAYFAku43v0ACgkQh2Se1CB7M2GJZQCfeXheOtUVnfPF0558HedqDKmp Mj4An2jqIGpzTWQ6qsA0MzDSxkYVhnN2 =BGFU -----END PGP SIGNATURE----- --Signature=_Sun__4_Apr_2010_20_48_29_+0200_QMfC8A/HNtYBgd/o-- From owner-freebsd-gecko@FreeBSD.ORG Mon Apr 5 11:07:00 2010 Return-Path: Delivered-To: freebsd-gecko@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id AEA5C106564A for ; Mon, 5 Apr 2010 11:07:00 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 832238FC15 for ; Mon, 5 Apr 2010 11:07:00 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o35B70f5027798 for ; Mon, 5 Apr 2010 11:07:00 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o35B6xCB027796 for freebsd-gecko@FreeBSD.org; Mon, 5 Apr 2010 11:06:59 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 5 Apr 2010 11:06:59 GMT Message-Id: <201004051106.o35B6xCB027796@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-gecko@FreeBSD.org Cc: Subject: Current problem reports assigned to freebsd-gecko@FreeBSD.org X-BeenThere: freebsd-gecko@freebsd.org 
X-Mailman-Version: 2.1.5 Precedence: list List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 11:07:00 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases.

S Tracker      Resp.      Description
--------------------------------------------------------------------------------
o ports/134121 gecko      www/firefox3-devel packaging errors
o ports/103529 gecko      www/seamonkey: enable SVG and Pango font rendering sup

2 problems total.

From owner-freebsd-gecko@FreeBSD.ORG Mon Apr 5 11:07:20 2010 Return-Path: Delivered-To: gecko@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0A93010656A8 for ; Mon, 5 Apr 2010 11:07:20 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id EE2D48FC27 for ; Mon, 5 Apr 2010 11:07:19 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o35B7Jwt028062 for ; Mon, 5 Apr 2010 11:07:19 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o35B7Jva028060 for gecko@FreeBSD.org; Mon, 5 Apr 2010 11:07:19 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 5 Apr 2010 11:07:19 GMT Message-Id: <201004051107.o35B7Jva028060@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: gecko@FreeBSD.org Cc: Subject: Current problem reports assigned to gecko@FreeBSD.org X-BeenThere:
freebsd-gecko@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 11:07:20 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases.

S Tracker      Resp.      Description
--------------------------------------------------------------------------------
o ports/144734 gecko      www/firefox: Firefox-3.6 doesn't terminate it's proces
o ports/144145 gecko      www/firefox: Firefox 3.6 does not allow any https-conn
o ports/144078 gecko      www/firefox: firefox-3.6,1 https connections stall
o ports/144044 gecko      [PATCH] www/firefox: Makefile.webplugins target order
f ports/143288 gecko      www/mplayer-plugin does not build with libxul
o ports/142807 gecko      www/firefox35-i18n does not depend on www/firefox35
o ports/142736 gecko      [PATCH]www/firefox3*: unbreak with recent versions of

7 problems total.
From owner-freebsd-gecko@FreeBSD.ORG Mon Apr 5 16:22:37 2010 Return-Path: Delivered-To: gecko@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1E1411065670; Mon, 5 Apr 2010 16:22:37 +0000 (UTC) (envelope-from beat@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id E86BE8FC18; Mon, 5 Apr 2010 16:22:36 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o35GMZlN006586; Mon, 5 Apr 2010 16:22:35 GMT (envelope-from beat@freefall.freebsd.org) Received: (from beat@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o35GMZUa006582; Mon, 5 Apr 2010 16:22:35 GMT (envelope-from beat) Date: Mon, 5 Apr 2010 16:22:35 GMT Message-Id: <201004051622.o35GMZUa006582@freefall.freebsd.org> To: exil@sasurai.se, beat@FreeBSD.org, gecko@FreeBSD.org From: beat@FreeBSD.org Cc: Subject: Re: ports/143288: www/mplayer-plugin does not build with libxul X-BeenThere: freebsd-gecko@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 16:22:37 -0000 Synopsis: www/mplayer-plugin does not build with libxul State-Changed-From-To: feedback->closed State-Changed-By: beat State-Changed-When: Mon Apr 5 16:22:34 UTC 2010 State-Changed-Why: Closed on submitters request. Thanks for your feedback! 
http://www.freebsd.org/cgi/query-pr.cgi?pr=143288 From owner-freebsd-gecko@FreeBSD.ORG Tue Apr 6 07:24:49 2010 Return-Path: Delivered-To: freebsd-gecko@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BE108106564A for ; Tue, 6 Apr 2010 07:24:49 +0000 (UTC) (envelope-from svn-freebsd-gecko@chruetertee.ch) Received: from trillian.chruetertee.ch (trillian.chruetertee.ch [217.150.245.56]) by mx1.freebsd.org (Postfix) with ESMTP id B93708FC19 for ; Tue, 6 Apr 2010 07:24:48 +0000 (UTC) Received: from trillian.chruetertee.ch (trillian [217.150.245.56]) by trillian.chruetertee.ch (8.14.3/8.14.3) with ESMTP id o367OlKg013347 for ; Tue, 6 Apr 2010 07:24:47 GMT (envelope-from svn-freebsd-gecko@chruetertee.ch) Received: (from www@localhost) by trillian.chruetertee.ch (8.14.3/8.14.3/Submit) id o367OfBZ011271 for freebsd-gecko@freebsd.org; Tue, 6 Apr 2010 07:24:41 GMT (envelope-from svn-freebsd-gecko@chruetertee.ch) Date: Tue, 6 Apr 2010 07:24:41 GMT Message-Id: <201004060724.o367OfBZ011271@trillian.chruetertee.ch> X-Authentication-Warning: trillian.chruetertee.ch: www set sender to svn-freebsd-gecko@chruetertee.ch using -f From: svn-freebsd-gecko@chruetertee.ch To: freebsd-gecko@freebsd.org MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Subject: [SVN-Commit] r301 - branches/experimental/www/firefox branches/experimental/www/firefox-i18n trunk/www/firefox trunk/www/firefox-i18n X-BeenThere: freebsd-gecko@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: freebsd-gecko@freebsd.org List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 06 Apr 2010 07:24:49 -0000 Author: flo Date: Tue Apr 6 07:24:41 2010 New Revision: 301 Log: - Update to 3.6.3 Modified: branches/experimental/www/firefox-i18n/Makefile branches/experimental/www/firefox-i18n/Makefile.lang 
branches/experimental/www/firefox-i18n/distinfo branches/experimental/www/firefox/Makefile branches/experimental/www/firefox/distinfo trunk/www/firefox-i18n/Makefile trunk/www/firefox-i18n/distinfo trunk/www/firefox/Makefile trunk/www/firefox/distinfo Modified: branches/experimental/www/firefox-i18n/Makefile ============================================================================== --- branches/experimental/www/firefox-i18n/Makefile Tue Mar 30 19:13:01 2010 (r300) +++ branches/experimental/www/firefox-i18n/Makefile Tue Apr 6 07:24:41 2010 (r301) @@ -6,7 +6,7 @@ # PORTNAME= firefox-i18n -PORTVERSION= 3.6.2 +PORTVERSION= 3.6.3 CATEGORIES= www MASTER_SITE_SUBDIR= firefox/releases/${PORTVERSION}/linux-i686/xpi PKGNAMEPREFIX= Modified: branches/experimental/www/firefox-i18n/Makefile.lang ============================================================================== --- branches/experimental/www/firefox-i18n/Makefile.lang Tue Mar 30 19:13:01 2010 (r300) +++ branches/experimental/www/firefox-i18n/Makefile.lang Tue Apr 6 07:24:41 2010 (r301) @@ -209,9 +209,6 @@ .ifndef WITHOUT_LANG_SV FIREFOX_I18N+= sv-SE .endif -.ifndef WITHOUT_LANG_TL -FIREFOX_I18N+= tl -.endif .ifndef WITHOUT_LANG_TA FIREFOX_I18N+= ta .endif Modified: branches/experimental/www/firefox-i18n/distinfo ============================================================================== --- branches/experimental/www/firefox-i18n/distinfo Tue Mar 30 19:13:01 2010 (r300) +++ branches/experimental/www/firefox-i18n/distinfo Tue Apr 6 07:24:41 2010 (r301) @@ -1,219 +1,216 @@ -MD5 (xpi/firefox-i18n-3.6.2/af.xpi) = 6098ee3000db24f8630f6d8481ab6738 -SHA256 (xpi/firefox-i18n-3.6.2/af.xpi) = 3b62d09ecd22b017bb773718d8b59368259c0f3840e228d8f2762b20dd23437c -SIZE (xpi/firefox-i18n-3.6.2/af.xpi) = 121622 -MD5 (xpi/firefox-i18n-3.6.2/ar.xpi) = 99621a07d3625d2b48ca8f6ec3e542bb -SHA256 (xpi/firefox-i18n-3.6.2/ar.xpi) = f960361bdda9ed2fb2f2aaa93cd74736a073eaf395e391011d785c2fa1c7cd60 -SIZE (xpi/firefox-i18n-3.6.2/ar.xpi) = 
131701 -MD5 (xpi/firefox-i18n-3.6.2/as.xpi) = 5144667ac67180ee5a802029afcacb22 -SHA256 (xpi/firefox-i18n-3.6.2/as.xpi) = 8744d622d2e08df28edba5e82bcda17100dbbc203818b6f2d3940cd8996d0009 -SIZE (xpi/firefox-i18n-3.6.2/as.xpi) = 134296 -MD5 (xpi/firefox-i18n-3.6.2/be.xpi) = 3dddf0d91249f7ec753b414168910655 -SHA256 (xpi/firefox-i18n-3.6.2/be.xpi) = 755864ca7a0fbc752c4ab863b002e8cdbe66363cc0f8ccfda2cb242f83b4f99a -SIZE (xpi/firefox-i18n-3.6.2/be.xpi) = 121983 -MD5 (xpi/firefox-i18n-3.6.2/bg.xpi) = e3d7230d167d3dfa8ba904158a6d536f -SHA256 (xpi/firefox-i18n-3.6.2/bg.xpi) = e084d6095a88f0122a162ba0fba100f5c1a0ddcf9965c16933fd560706cb0063 -SIZE (xpi/firefox-i18n-3.6.2/bg.xpi) = 134072 -MD5 (xpi/firefox-i18n-3.6.2/bn-BD.xpi) = 0781de0249e275d3f46d9d57db07d5e8 -SHA256 (xpi/firefox-i18n-3.6.2/bn-BD.xpi) = 9a44cd4dbbe9bba5f940f5c88a78f359947137d72383690b4c56c70f1b5dc3d0 -SIZE (xpi/firefox-i18n-3.6.2/bn-BD.xpi) = 140822 -MD5 (xpi/firefox-i18n-3.6.2/bn-IN.xpi) = 5292991a5baf49dc238c57555a3e1d23 -SHA256 (xpi/firefox-i18n-3.6.2/bn-IN.xpi) = 245838fba89bd54f8ca42dad44da04628ecefa41a9ce0c974f41fb57ee339ab8 -SIZE (xpi/firefox-i18n-3.6.2/bn-IN.xpi) = 159545 -MD5 (xpi/firefox-i18n-3.6.2/ca.xpi) = f8978814c17699e464a61c99cf279988 -SHA256 (xpi/firefox-i18n-3.6.2/ca.xpi) = 09b43b56e67fc8dd3212b2804e73135222f0f9af41a44a10d496e246b4a997a9 -SIZE (xpi/firefox-i18n-3.6.2/ca.xpi) = 127570 -MD5 (xpi/firefox-i18n-3.6.2/cs.xpi) = 2390954d3d3899b31559dec854b692c8 -SHA256 (xpi/firefox-i18n-3.6.2/cs.xpi) = ca5b98f0e3e29729a7bae27ed0efae3d942e24872df0e699e18ebb2c79b50857 -SIZE (xpi/firefox-i18n-3.6.2/cs.xpi) = 126416 -MD5 (xpi/firefox-i18n-3.6.2/cy.xpi) = 9f41c969931d8e559d056a054b1844d8 -SHA256 (xpi/firefox-i18n-3.6.2/cy.xpi) = 09be8bbdcffcb7d5ca06e3bdf53993e35d70071083f3b9736dc937de89fc5ce3 -SIZE (xpi/firefox-i18n-3.6.2/cy.xpi) = 121979 -MD5 (xpi/firefox-i18n-3.6.2/da.xpi) = 45ad66651c5f26aef42f5f9d6a5bd1c7 -SHA256 (xpi/firefox-i18n-3.6.2/da.xpi) = 
de3ccd091253621c8a8ade95af5a93edd8f14714566d1709161eb4d703c26dd0 -SIZE (xpi/firefox-i18n-3.6.2/da.xpi) = 125224 -MD5 (xpi/firefox-i18n-3.6.2/de.xpi) = b701f8f81573aeb3bfddb63955c0f0b8 -SHA256 (xpi/firefox-i18n-3.6.2/de.xpi) = 07306a30f910bfb0309cdde3f498cd18e333a8c054c39e7a780d4fc29e7f969c -SIZE (xpi/firefox-i18n-3.6.2/de.xpi) = 128614 -MD5 (xpi/firefox-i18n-3.6.2/el.xpi) = 1e6915d27a9183bc0070e54332bb5182 -SHA256 (xpi/firefox-i18n-3.6.2/el.xpi) = c8d319f7faa4d8698771a12aee905f098a3195ee2f8fda3a0c5a7c3a21748136 -SIZE (xpi/firefox-i18n-3.6.2/el.xpi) = 134869 -MD5 (xpi/firefox-i18n-3.6.2/en-GB.xpi) = 01c4dccd21e5b31e8ae2bee4925a0846 -SHA256 (xpi/firefox-i18n-3.6.2/en-GB.xpi) = 3a7e4a51e9ae4e65107b9a929e69456c6a578702f9fccb641d35c1322510df28 -SIZE (xpi/firefox-i18n-3.6.2/en-GB.xpi) = 118873 -MD5 (xpi/firefox-i18n-3.6.2/eo.xpi) = e12a528c72e1e8f13e604f48cc7e1e09 -SHA256 (xpi/firefox-i18n-3.6.2/eo.xpi) = c3d9f500cef18b8d0b871a68690633c215bdad583845919c63d882d44a570d22 -SIZE (xpi/firefox-i18n-3.6.2/eo.xpi) = 121574 -MD5 (xpi/firefox-i18n-3.6.2/es-AR.xpi) = 58a325ac2c88c7aeff6d5e5f427e5f8b -SHA256 (xpi/firefox-i18n-3.6.2/es-AR.xpi) = 675f9b963c3a642b4854b5f389b965cfa00eeb7b019d716fb2f7e7b9e51bdbc2 -SIZE (xpi/firefox-i18n-3.6.2/es-AR.xpi) = 127429 -MD5 (xpi/firefox-i18n-3.6.2/es-CL.xpi) = 1a35436da22ccb593667905dbca847b7 -SHA256 (xpi/firefox-i18n-3.6.2/es-CL.xpi) = 7461e8c30c212f587213e33803b2d83b193d7b68a024da06d4c9d7afe1a8e4c4 -SIZE (xpi/firefox-i18n-3.6.2/es-CL.xpi) = 129070 -MD5 (xpi/firefox-i18n-3.6.2/es-ES.xpi) = 244a5969458464ad37ba000dec0e8617 -SHA256 (xpi/firefox-i18n-3.6.2/es-ES.xpi) = 51cf8ee8fe9f6d0c5a6a73ab31e510b71e643748a8d6515f9eb4ffaf2a128e81 -SIZE (xpi/firefox-i18n-3.6.2/es-ES.xpi) = 108478 -MD5 (xpi/firefox-i18n-3.6.2/es-MX.xpi) = 27d1541dd6b77259752543f251a0079b -SHA256 (xpi/firefox-i18n-3.6.2/es-MX.xpi) = 19523599211bbf460af604f331a6a9b762b6559a94e56780214c2af7865d14dc -SIZE (xpi/firefox-i18n-3.6.2/es-MX.xpi) = 127427 -MD5 
(xpi/firefox-i18n-3.6.2/et.xpi) = f0e589e072a784651f99e92f042277ea -SHA256 (xpi/firefox-i18n-3.6.2/et.xpi) = 0680530ac2e9fdad9e4c8132c9f73c592b038134c902ff6b0106eb91618728f2 -SIZE (xpi/firefox-i18n-3.6.2/et.xpi) = 126969 -MD5 (xpi/firefox-i18n-3.6.2/eu.xpi) = db5c050b3c2a021e7ac3d3994e854f81 -SHA256 (xpi/firefox-i18n-3.6.2/eu.xpi) = f80873bee8136116d18f7124b7208d220ee262d4785a9a2108c129c01f7056e6 -SIZE (xpi/firefox-i18n-3.6.2/eu.xpi) = 121518 -MD5 (xpi/firefox-i18n-3.6.2/fa.xpi) = 8965f1f0f91d5ddf4a4a13d33a68427d -SHA256 (xpi/firefox-i18n-3.6.2/fa.xpi) = 1f12d6aafcead1002e721e53dfb2009fb93c7b697e712c03719167a8062b7285 -SIZE (xpi/firefox-i18n-3.6.2/fa.xpi) = 138048 -MD5 (xpi/firefox-i18n-3.6.2/fi.xpi) = 24a92bdeab95870a3d731344983b2521 -SHA256 (xpi/firefox-i18n-3.6.2/fi.xpi) = f6afd11c4a1efd7d00fa4b43c9805bd1c9facdc9c0f6d2b83f5ce7d82283469a -SIZE (xpi/firefox-i18n-3.6.2/fi.xpi) = 126440 -MD5 (xpi/firefox-i18n-3.6.2/fr.xpi) = 833fed7b317b75143c0b6ebee4de725e -SHA256 (xpi/firefox-i18n-3.6.2/fr.xpi) = ae4e436fbdbcff9cf8989e356df87e0c691ba75b3236f0d38ed507b7b54d5b3c -SIZE (xpi/firefox-i18n-3.6.2/fr.xpi) = 128534 -MD5 (xpi/firefox-i18n-3.6.2/fy-NL.xpi) = 220f25796907e358dc2f4606a85832bc -SHA256 (xpi/firefox-i18n-3.6.2/fy-NL.xpi) = 16b1b7868f5089362e30ec34039c6f8fd2ea4590afc1ac9e0a052a21567af72d -SIZE (xpi/firefox-i18n-3.6.2/fy-NL.xpi) = 128850 -MD5 (xpi/firefox-i18n-3.6.2/ga-IE.xpi) = a245d68cb35089fda224be7a434b5422 -SHA256 (xpi/firefox-i18n-3.6.2/ga-IE.xpi) = 52ce5e2a24569e380c112c996e5d780c7e42dc03129b906f85724cfb99b4d659 -SIZE (xpi/firefox-i18n-3.6.2/ga-IE.xpi) = 127359 -MD5 (xpi/firefox-i18n-3.6.2/gl.xpi) = 2d209aad5bf5b666f3db4ae093acec5c -SHA256 (xpi/firefox-i18n-3.6.2/gl.xpi) = 0e201d84dfb3835c0794033d290b839cd759fa2c80ae8854b70181a078d667ec -SIZE (xpi/firefox-i18n-3.6.2/gl.xpi) = 124830 -MD5 (xpi/firefox-i18n-3.6.2/gu-IN.xpi) = dfb18d72a2ef86571225ac093f382be6 -SHA256 (xpi/firefox-i18n-3.6.2/gu-IN.xpi) = 
c9859901aba7f12862a2fe1294935852bb3e36fd4b6d38cc9899865b6d464ee5 -SIZE (xpi/firefox-i18n-3.6.2/gu-IN.xpi) = 145873 -MD5 (xpi/firefox-i18n-3.6.2/he.xpi) = f7d6481aab59ec9072a9d0f3b5e3bb89 -SHA256 (xpi/firefox-i18n-3.6.2/he.xpi) = a7e045f18e94574cf2f42cc00001e18c9178cdb924d92dc8f770b9898226e0d2 -SIZE (xpi/firefox-i18n-3.6.2/he.xpi) = 122707 -MD5 (xpi/firefox-i18n-3.6.2/hi-IN.xpi) = eb57d5908b08fbaf381baf02351a8ef6 -SHA256 (xpi/firefox-i18n-3.6.2/hi-IN.xpi) = 3e35ba3cc4a9604a0bda5f3e2056aa3f48406f583d7ad7823d428dd57af47a89 -SIZE (xpi/firefox-i18n-3.6.2/hi-IN.xpi) = 141483 -MD5 (xpi/firefox-i18n-3.6.2/hr.xpi) = 0de65ab8a1301a3f41c41dcc7306092a -SHA256 (xpi/firefox-i18n-3.6.2/hr.xpi) = 40d257413114d22f03ae59b72c9fafc4670865a36236b44e8069385795047a96 -SIZE (xpi/firefox-i18n-3.6.2/hr.xpi) = 127062 -MD5 (xpi/firefox-i18n-3.6.2/hu.xpi) = 4e75f24d3db20fd823af8d2c57c142aa -SHA256 (xpi/firefox-i18n-3.6.2/hu.xpi) = 18fd022c7fa481734d30483c76ab8f5f91acc8032f4e9431d0d6bfbd0670fd1a -SIZE (xpi/firefox-i18n-3.6.2/hu.xpi) = 131550 -MD5 (xpi/firefox-i18n-3.6.2/id.xpi) = afe450c6774123dad769babddd7ee327 -SHA256 (xpi/firefox-i18n-3.6.2/id.xpi) = 93988991f335a392fdb95e64235457c82c86fb83f84d4562d67e88549cf23dcb -SIZE (xpi/firefox-i18n-3.6.2/id.xpi) = 103926 -MD5 (xpi/firefox-i18n-3.6.2/is.xpi) = 93dc41092e55ecf69fa7f1727fc6de17 -SHA256 (xpi/firefox-i18n-3.6.2/is.xpi) = 42cdd6b55cc70702581f84cd898dda1699bd0b6a2bce5d57cffb9938830e0823 -SIZE (xpi/firefox-i18n-3.6.2/is.xpi) = 123006 -MD5 (xpi/firefox-i18n-3.6.2/it.xpi) = e7ee390f0b20f8bb53a2bc8dfe8c0f51 -SHA256 (xpi/firefox-i18n-3.6.2/it.xpi) = dc14fc6f3a962d567f2bf3eae2c9517629ea6cef3bf2dd420dcf4d4922160e67 -SIZE (xpi/firefox-i18n-3.6.2/it.xpi) = 107706 -MD5 (xpi/firefox-i18n-3.6.2/ja.xpi) = fd78f12fdcdf692f6898f10893924aba -SHA256 (xpi/firefox-i18n-3.6.2/ja.xpi) = 595c8582a79573754a59d0a7af27e0d6539ef0d609033c355460e4ca0500b86f -SIZE (xpi/firefox-i18n-3.6.2/ja.xpi) = 136586 -MD5 (xpi/firefox-i18n-3.6.2/ka.xpi) = 
5d218fd81f72fd09d206e38289c55fe8 -SHA256 (xpi/firefox-i18n-3.6.2/ka.xpi) = 79c9642231e7d860f76f92980083be41b2e9232fc6a5b387df8ef45d7cb3e980 -SIZE (xpi/firefox-i18n-3.6.2/ka.xpi) = 154148 -MD5 (xpi/firefox-i18n-3.6.2/kk.xpi) = 290af18dbd8ee444ef90f38cfda48f9d -SHA256 (xpi/firefox-i18n-3.6.2/kk.xpi) = f730b6ea829eb9fa72f18d8a91494b10b457cdb198dc53075f557de4a1f51cb8 -SIZE (xpi/firefox-i18n-3.6.2/kk.xpi) = 137724 -MD5 (xpi/firefox-i18n-3.6.2/kn.xpi) = 62a87e158a14ce330468bdb4bb63fd98 -SHA256 (xpi/firefox-i18n-3.6.2/kn.xpi) = 95478d9e44438e7a7af7c83cdfe9c8b91a92280f8758f15787f89c2cf445c3b2 -SIZE (xpi/firefox-i18n-3.6.2/kn.xpi) = 148421 -MD5 (xpi/firefox-i18n-3.6.2/ko.xpi) = 16d0e0896eff7983c80111db17124550 -SHA256 (xpi/firefox-i18n-3.6.2/ko.xpi) = 67d9109ed8a3e30b3d600d3c99de494c76a10f7e2027e3504ddbac7c137fc9dd -SIZE (xpi/firefox-i18n-3.6.2/ko.xpi) = 115383 -MD5 (xpi/firefox-i18n-3.6.2/ku.xpi) = 76b2fb3445b35dc4a03fa9fe3f91bcff -SHA256 (xpi/firefox-i18n-3.6.2/ku.xpi) = 1e773c66deab6089e064ea66c7ce4730c1bf59702caa43961ca9b95ee51400cb -SIZE (xpi/firefox-i18n-3.6.2/ku.xpi) = 130378 -MD5 (xpi/firefox-i18n-3.6.2/lt.xpi) = 6f185736ebfc5c0f975cc065de5b30a3 -SHA256 (xpi/firefox-i18n-3.6.2/lt.xpi) = c45a0e92c1c7ef08a8fa63c1452ad135ef4a189092e5f880f1236f6d1691d38c -SIZE (xpi/firefox-i18n-3.6.2/lt.xpi) = 139307 -MD5 (xpi/firefox-i18n-3.6.2/lv.xpi) = fafe715b149a643c274883c0ba0d1b89 -SHA256 (xpi/firefox-i18n-3.6.2/lv.xpi) = fdae7a3e4d82e3cbfc6f617d8966171dc6c16abaf553f7e3ab561dc674877b2d -SIZE (xpi/firefox-i18n-3.6.2/lv.xpi) = 124827 -MD5 (xpi/firefox-i18n-3.6.2/mk.xpi) = 7abdea5fbad108a3241fe047553de73e -SHA256 (xpi/firefox-i18n-3.6.2/mk.xpi) = c4b2bda44d8ebc2b349701670169f509287d366a628e0bda13aec4151a306f73 -SIZE (xpi/firefox-i18n-3.6.2/mk.xpi) = 127945 -MD5 (xpi/firefox-i18n-3.6.2/ml.xpi) = 93508bbfdecce59a5104668176cade6e -SHA256 (xpi/firefox-i18n-3.6.2/ml.xpi) = c704c981d6071154288d686e2a0fd79c3cc8df79c4de4d85192b3d29d5c6af54 -SIZE (xpi/firefox-i18n-3.6.2/ml.xpi) = 151880 -MD5 
(xpi/firefox-i18n-3.6.2/mr.xpi) = aac86676f5e4fe881990b54d184d869a -SHA256 (xpi/firefox-i18n-3.6.2/mr.xpi) = b0d2cdcf5606aa2d271652fbfdc584721d89be21eeceda1061d289485d132380 -SIZE (xpi/firefox-i18n-3.6.2/mr.xpi) = 140488 -MD5 (xpi/firefox-i18n-3.6.2/nb-NO.xpi) = 5c7926cacdce54e2218a839743496e01 -SHA256 (xpi/firefox-i18n-3.6.2/nb-NO.xpi) = d0237f0b0836e7e0ad0c0995dc4a5d066b5219ac8484977bd2312f32bf1557d1 -SIZE (xpi/firefox-i18n-3.6.2/nb-NO.xpi) = 123728 -MD5 (xpi/firefox-i18n-3.6.2/nl.xpi) = 354395ae42e4cfecced4294775f4186a -SHA256 (xpi/firefox-i18n-3.6.2/nl.xpi) = f566e9bf2e992d9e43bcd145187d83a12c9dd9218c484f20fcf0095894d45c6e -SIZE (xpi/firefox-i18n-3.6.2/nl.xpi) = 126068 -MD5 (xpi/firefox-i18n-3.6.2/nn-NO.xpi) = 0a7dd2f0da27235be30420b81bbf10bc -SHA256 (xpi/firefox-i18n-3.6.2/nn-NO.xpi) = e9859857cbc81837f8ab937851b094e69a6b5a8747c32cc02dd0f5fc3419ceca -SIZE (xpi/firefox-i18n-3.6.2/nn-NO.xpi) = 125121 -MD5 (xpi/firefox-i18n-3.6.2/oc.xpi) = 2e3762813f17e4467716a2040e4c30b5 -SHA256 (xpi/firefox-i18n-3.6.2/oc.xpi) = 42e877f3087f70362b5fea71cf5df0adbdfcfe543b42bd704cd8a2d41ed4b956 -SIZE (xpi/firefox-i18n-3.6.2/oc.xpi) = 108343 -MD5 (xpi/firefox-i18n-3.6.2/or.xpi) = 092ac7f81e134f04cc465145b60d5002 -SHA256 (xpi/firefox-i18n-3.6.2/or.xpi) = fe8dfd8c3f6e030c124f8ee5ce6808fe8274329a790acd0ee83863c1239694d6 -SIZE (xpi/firefox-i18n-3.6.2/or.xpi) = 142200 -MD5 (xpi/firefox-i18n-3.6.2/pa-IN.xpi) = 8e11f068479917bd5247c56537abf8b3 -SHA256 (xpi/firefox-i18n-3.6.2/pa-IN.xpi) = 5c398d0cae9eaee87f219cdccf91884fa5d7b48e81ba4de4ac77fe16e98850d8 -SIZE (xpi/firefox-i18n-3.6.2/pa-IN.xpi) = 141718 -MD5 (xpi/firefox-i18n-3.6.2/pl.xpi) = c4693a24c4724cef4c12013778098f7a -SHA256 (xpi/firefox-i18n-3.6.2/pl.xpi) = 9bc3d7cc648d95d649ea862ba0d803f0613fd5245d4cc9c30ce45e65e8a0fe40 -SIZE (xpi/firefox-i18n-3.6.2/pl.xpi) = 130634 -MD5 (xpi/firefox-i18n-3.6.2/pt-BR.xpi) = 36d5443654f9d7bbaa44432c9be62c9a -SHA256 (xpi/firefox-i18n-3.6.2/pt-BR.xpi) = 
6581bc855b5109eb68b3d87d8edb7596f90c31bbe8f2b4981506dc1e7add119c -SIZE (xpi/firefox-i18n-3.6.2/pt-BR.xpi) = 124989 -MD5 (xpi/firefox-i18n-3.6.2/pt-PT.xpi) = 55e812a2384ffda162e6672310166262 -SHA256 (xpi/firefox-i18n-3.6.2/pt-PT.xpi) = c98f8fb1e6794dcb697c8c40fb991743373310dcf4da94f095c9d4aef4391180 -SIZE (xpi/firefox-i18n-3.6.2/pt-PT.xpi) = 124942 -MD5 (xpi/firefox-i18n-3.6.2/rm.xpi) = a239a06d074c3dc2a4a4737b276cfaad -SHA256 (xpi/firefox-i18n-3.6.2/rm.xpi) = a8f40d629f8db7f2cca2768bfb1f341c29a3ad81824b9d2b417181fcff92716d -SIZE (xpi/firefox-i18n-3.6.2/rm.xpi) = 127026 -MD5 (xpi/firefox-i18n-3.6.2/ro.xpi) = 78d506254e977cfc596258665c4f344b -SHA256 (xpi/firefox-i18n-3.6.2/ro.xpi) = f8fe0411d9feae51030c11e983855e4d8f29311450b02ded8ef56ce3fd6ed87f -SIZE (xpi/firefox-i18n-3.6.2/ro.xpi) = 128725 -MD5 (xpi/firefox-i18n-3.6.2/ru.xpi) = e0a3f60f143663acb1c58eb74c4eb585 -SHA256 (xpi/firefox-i18n-3.6.2/ru.xpi) = fc9a855583f039daf5c0a2517d31dea21ff1104b43aae019cd374af605f69d38 -SIZE (xpi/firefox-i18n-3.6.2/ru.xpi) = 124574 -MD5 (xpi/firefox-i18n-3.6.2/si.xpi) = a82f43d6341924bb46ac31702cb80787 -SHA256 (xpi/firefox-i18n-3.6.2/si.xpi) = cc82daec384df0f6eb418a5afdeaa9ad50e62c3e2c1a2d52f1a1d41afad318c9 -SIZE (xpi/firefox-i18n-3.6.2/si.xpi) = 142189 -MD5 (xpi/firefox-i18n-3.6.2/sk.xpi) = 1ac96364b0d62767d1b57b6ad78cf9eb -SHA256 (xpi/firefox-i18n-3.6.2/sk.xpi) = 3f8c15866709e557f3580918445b8496c216dfb0775659f552063d2ba4fa58cf -SIZE (xpi/firefox-i18n-3.6.2/sk.xpi) = 130718 -MD5 (xpi/firefox-i18n-3.6.2/sl.xpi) = 74c60b6cfc632d17cca35a9d79884c7a -SHA256 (xpi/firefox-i18n-3.6.2/sl.xpi) = 2e22e278c0f50ef874d1a0b987c2439d7ac9b7314617dddc6e397a372db47429 -SIZE (xpi/firefox-i18n-3.6.2/sl.xpi) = 123902 -MD5 (xpi/firefox-i18n-3.6.2/sq.xpi) = b68f88ea07722834f5a9dde82581d3fc -SHA256 (xpi/firefox-i18n-3.6.2/sq.xpi) = 2042370ee6b33213e8be5edf757db79305c2521d76f35fa23231ad41648473e9 -SIZE (xpi/firefox-i18n-3.6.2/sq.xpi) = 120934 -MD5 (xpi/firefox-i18n-3.6.2/sr.xpi) = 
896a7e7629b61d933846ef9efe288ada -SHA256 (xpi/firefox-i18n-3.6.2/sr.xpi) = 6db567a5eae9c19c6393ef11b83e9e7ee11b136470bbd05741bd49ad266f3597 -SIZE (xpi/firefox-i18n-3.6.2/sr.xpi) = 136930 -MD5 (xpi/firefox-i18n-3.6.2/sv-SE.xpi) = b8dc0908e82f5d338b325d1aed5b6c71 -SHA256 (xpi/firefox-i18n-3.6.2/sv-SE.xpi) = f818535af1cded2c3d55bbc537d1bb4338f9abf7cef14fc561b6a0effae85153 -SIZE (xpi/firefox-i18n-3.6.2/sv-SE.xpi) = 128075 -MD5 (xpi/firefox-i18n-3.6.2/ta-LK.xpi) = 94e5d7ea9b0a11c06ebce481ee0cb1a7 -SHA256 (xpi/firefox-i18n-3.6.2/ta-LK.xpi) = 6beb2a0369d3ecc990c033a1c9a25dced083c666669a989c90eeeb32d3a93761 -SIZE (xpi/firefox-i18n-3.6.2/ta-LK.xpi) = 149330 -MD5 (xpi/firefox-i18n-3.6.2/ta.xpi) = 6c4ee60e1c8a5c7eefa843f7b98a5db0 -SHA256 (xpi/firefox-i18n-3.6.2/ta.xpi) = 4ceb3b58e1e544994345eb3704115eabb19020419e8da693938c1dcad886c66b -SIZE (xpi/firefox-i18n-3.6.2/ta.xpi) = 136804 -MD5 (xpi/firefox-i18n-3.6.2/te.xpi) = 37dc154cff01eb19c9d835831ac5a4a6 -SHA256 (xpi/firefox-i18n-3.6.2/te.xpi) = 713a8596ca847d4b2f9b7375058f34c500066f5fa7cfc4384865891347c5c6f7 -SIZE (xpi/firefox-i18n-3.6.2/te.xpi) = 147402 -MD5 (xpi/firefox-i18n-3.6.2/th.xpi) = 12351748dbc10d3b618123f9ada1284a -SHA256 (xpi/firefox-i18n-3.6.2/th.xpi) = dfe374f3a7dec81da46e61c15fb7dc3879bcca87c356ed6cd5ded339a5b89095 -SIZE (xpi/firefox-i18n-3.6.2/th.xpi) = 134016 -MD5 (xpi/firefox-i18n-3.6.2/tr.xpi) = 3d391c58fd52a7ce9e8005fcc4fefc09 -SHA256 (xpi/firefox-i18n-3.6.2/tr.xpi) = 47f9564fcf4fdf89ba1bde11e87a334bbd605e0adba38c30e55bd0051812beef -SIZE (xpi/firefox-i18n-3.6.2/tr.xpi) = 125029 -MD5 (xpi/firefox-i18n-3.6.2/uk.xpi) = 3fd942c68feb4018cdad30b1dce321cd -SHA256 (xpi/firefox-i18n-3.6.2/uk.xpi) = e1246c0a024f0787d20ea860062a73ce4d3532d855824f3d78a4e62007f76dda -SIZE (xpi/firefox-i18n-3.6.2/uk.xpi) = 142824 -MD5 (xpi/firefox-i18n-3.6.2/vi.xpi) = d25c99e708652201910b19cf8e007cd3 -SHA256 (xpi/firefox-i18n-3.6.2/vi.xpi) = 7502579cbd898d5e75d64d7c52c958150a6afa395180cf0630c2ae747c12f0bb -SIZE 
(xpi/firefox-i18n-3.6.2/vi.xpi) = 129056 -MD5 (xpi/firefox-i18n-3.6.2/zh-CN.xpi) = 36163243e98213ecdff881e54945cc4b -SHA256 (xpi/firefox-i18n-3.6.2/zh-CN.xpi) = 047bb0d4ac721fb012c1c4a5aefd368d5080d8cfff475aff78117548c5112954 -SIZE (xpi/firefox-i18n-3.6.2/zh-CN.xpi) = 126695 -MD5 (xpi/firefox-i18n-3.6.2/zh-TW.xpi) = d03068915380d6b4fb6f736b71884435 -SHA256 (xpi/firefox-i18n-3.6.2/zh-TW.xpi) = acc73dad235e877baa9cfcf5b3182448d7dcb729d49cd5ee589e7b30315a6067 -SIZE (xpi/firefox-i18n-3.6.2/zh-TW.xpi) = 128259 +MD5 (xpi/firefox-i18n-3.6.3/af.xpi) = 9f1e7a8f015fa9c6673e7a5fd74655f5 +SHA256 (xpi/firefox-i18n-3.6.3/af.xpi) = ca32bb28a1751604f657562f4a4569d514a1fba2d2ea517a7c95ce2c3ecd7afa +SIZE (xpi/firefox-i18n-3.6.3/af.xpi) = 121643 +MD5 (xpi/firefox-i18n-3.6.3/ar.xpi) = 5f6e53b5087e24b91d18ba64c4ffe185 +SHA256 (xpi/firefox-i18n-3.6.3/ar.xpi) = 29bc998e93dc46598412ff1b9dabe104993efefce8e13172b82a9f9c8d815dbf +SIZE (xpi/firefox-i18n-3.6.3/ar.xpi) = 131701 +MD5 (xpi/firefox-i18n-3.6.3/as.xpi) = dfb526adb3247201856e1cbfc55d55b9 +SHA256 (xpi/firefox-i18n-3.6.3/as.xpi) = 28dfa84fbd98f1a49fad4a109519ac96ef6137fa9651a9dcf5745d532c17a9a7 +SIZE (xpi/firefox-i18n-3.6.3/as.xpi) = 134298 +MD5 (xpi/firefox-i18n-3.6.3/be.xpi) = 82b3c8566c2266a9f9e41f9bd00a9a9d +SHA256 (xpi/firefox-i18n-3.6.3/be.xpi) = b362258673a9ddf79453a13cc2ce2bed6509a900e9345b5b185b8733f43fd5e5 +SIZE (xpi/firefox-i18n-3.6.3/be.xpi) = 121938 +MD5 (xpi/firefox-i18n-3.6.3/bg.xpi) = 5c2e2ecaaedde6aedcd9238d036b5026 +SHA256 (xpi/firefox-i18n-3.6.3/bg.xpi) = fc9e6c67db1d794f8cef71f83c0c79fc46dd8fd6059ed88664559a5044bde3be +SIZE (xpi/firefox-i18n-3.6.3/bg.xpi) = 134105 +MD5 (xpi/firefox-i18n-3.6.3/bn-BD.xpi) = d961018ade9bb29edaf6b4d9bf6cc7b6 +SHA256 (xpi/firefox-i18n-3.6.3/bn-BD.xpi) = 6d4dd085b38f14ac52c60967950a5d8eff99343aa5233195ce176b0349887f41 +SIZE (xpi/firefox-i18n-3.6.3/bn-BD.xpi) = 140833 +MD5 (xpi/firefox-i18n-3.6.3/bn-IN.xpi) = 00646001d91a81888d349d1526e10607 +SHA256 (xpi/firefox-i18n-3.6.3/bn-IN.xpi) = 
859a6bae81bc886409a186184087f6ac3ab08cb158f6195f85a92eaa10136d6f +SIZE (xpi/firefox-i18n-3.6.3/bn-IN.xpi) = 159580 +MD5 (xpi/firefox-i18n-3.6.3/ca.xpi) = 7ff08e6bb1019a758bd0b4be5a83a805 +SHA256 (xpi/firefox-i18n-3.6.3/ca.xpi) = 2085fd9ccb6262d17b9b56a44a0b097b26ce15844f47583c41efa7e7941886c0 +SIZE (xpi/firefox-i18n-3.6.3/ca.xpi) = 127598 +MD5 (xpi/firefox-i18n-3.6.3/cs.xpi) = d977d0011392ab0ad1bc89731c5294e8 +SHA256 (xpi/firefox-i18n-3.6.3/cs.xpi) = 99505e15f51dedbd15d071d07d065e5126c1073a32bc3d8d889a86329a741337 +SIZE (xpi/firefox-i18n-3.6.3/cs.xpi) = 126390 +MD5 (xpi/firefox-i18n-3.6.3/cy.xpi) = ed20f0a63e8bb0e4f06efaf3372ef0ac +SHA256 (xpi/firefox-i18n-3.6.3/cy.xpi) = 402446abcc8d527111410e05746f5075b1c8546d1cf5f1e2b276106f45eb9422 +SIZE (xpi/firefox-i18n-3.6.3/cy.xpi) = 121977 +MD5 (xpi/firefox-i18n-3.6.3/da.xpi) = 7a29d0ca6e1b2aafd43dadb0bdebc7fc +SHA256 (xpi/firefox-i18n-3.6.3/da.xpi) = 071fa099e23d4f8567d1f411e3a47f5369b7c94f9ee4e342ccaa038ad0b616fd +SIZE (xpi/firefox-i18n-3.6.3/da.xpi) = 125229 +MD5 (xpi/firefox-i18n-3.6.3/de.xpi) = 37d7c2f80c50f7081ae30655b9af8723 +SHA256 (xpi/firefox-i18n-3.6.3/de.xpi) = 83c2275e32bec807f877ea889a0b64eae4f68cb2bbde1d4d7ec54b187bc5e3fa +SIZE (xpi/firefox-i18n-3.6.3/de.xpi) = 128584 +MD5 (xpi/firefox-i18n-3.6.3/el.xpi) = 5ae2e84b06c058b1c30742d65ae35cee +SHA256 (xpi/firefox-i18n-3.6.3/el.xpi) = d50bf7587d2ef25eb688a1b7df78fcd22f21970477f7db3b2af0e1351376e243 +SIZE (xpi/firefox-i18n-3.6.3/el.xpi) = 134908 +MD5 (xpi/firefox-i18n-3.6.3/en-GB.xpi) = 76ba151eb259d36ce8fd5c6cc1a23f51 +SHA256 (xpi/firefox-i18n-3.6.3/en-GB.xpi) = b2d5aff06bf25d234b0aaf3b07b267484f0d91ed6a79deeb127bec5761652db5 +SIZE (xpi/firefox-i18n-3.6.3/en-GB.xpi) = 118905 +MD5 (xpi/firefox-i18n-3.6.3/eo.xpi) = fb8a232ac6334c59b59df247fbacd8dc +SHA256 (xpi/firefox-i18n-3.6.3/eo.xpi) = e4dee773569d2b5a54d8038b21b07a6986896c40c70f8f2e822af4815285c708 +SIZE (xpi/firefox-i18n-3.6.3/eo.xpi) = 121549 +MD5 (xpi/firefox-i18n-3.6.3/es-AR.xpi) = 
c1b577ff734865137e51358388e124a8 +SHA256 (xpi/firefox-i18n-3.6.3/es-AR.xpi) = d21dbe4cc9bd44b364f42699622c8b15923231888e40dbda1913e5981c768971 +SIZE (xpi/firefox-i18n-3.6.3/es-AR.xpi) = 127417 +MD5 (xpi/firefox-i18n-3.6.3/es-CL.xpi) = 542004b1893ee3be31d2aa81eccd1b2d +SHA256 (xpi/firefox-i18n-3.6.3/es-CL.xpi) = 270a02a4c724c5f7bc491482349306e56a69bff2ae07e2fa0614c61cd871da9c +SIZE (xpi/firefox-i18n-3.6.3/es-CL.xpi) = 129098 +MD5 (xpi/firefox-i18n-3.6.3/es-ES.xpi) = 95a40275aa67f8eea18d6681a163db61 +SHA256 (xpi/firefox-i18n-3.6.3/es-ES.xpi) = 0e5c6aadc0115b87f637017af22495d0d83c73e8e33bbc35ed6aa0249d1b5e7b +SIZE (xpi/firefox-i18n-3.6.3/es-ES.xpi) = 108457 +MD5 (xpi/firefox-i18n-3.6.3/es-MX.xpi) = 7444e7ada0a3bb07ca9a9b97894c7075 +SHA256 (xpi/firefox-i18n-3.6.3/es-MX.xpi) = b2dfdf8d3cae300971347ea092c348c37c708dd78113121d6939b6c9b63876c4 +SIZE (xpi/firefox-i18n-3.6.3/es-MX.xpi) = 127428 +MD5 (xpi/firefox-i18n-3.6.3/et.xpi) = 5972b1f8158ba24311042eec32103e98 +SHA256 (xpi/firefox-i18n-3.6.3/et.xpi) = 3d485226f963bb81a85733f7c2e6b7996804f408c28b2cc983313a28abafd169 +SIZE (xpi/firefox-i18n-3.6.3/et.xpi) = 126949 +MD5 (xpi/firefox-i18n-3.6.3/eu.xpi) = b3393f5c33af179ca8ad86d98c3c38a9 +SHA256 (xpi/firefox-i18n-3.6.3/eu.xpi) = 8a94576607075ef7374317092a7115e1256e3b5c331e3292e3cbff3519c71d65 +SIZE (xpi/firefox-i18n-3.6.3/eu.xpi) = 121541 +MD5 (xpi/firefox-i18n-3.6.3/fa.xpi) = 33ecf5ebaf0959e7bc7ac684c3594298 +SHA256 (xpi/firefox-i18n-3.6.3/fa.xpi) = f8ecd8e292d3052968bd3061234f559469a59ad79376606ed6fc9a026a47700a +SIZE (xpi/firefox-i18n-3.6.3/fa.xpi) = 138018 +MD5 (xpi/firefox-i18n-3.6.3/fi.xpi) = 166d3ebfcac6c4e86f9f18bf4f2652e4 +SHA256 (xpi/firefox-i18n-3.6.3/fi.xpi) = ff5b76a3026ee2a64ab9b694c5a08a3db0493ae99a33f2a875a8b15ad52b26ea +SIZE (xpi/firefox-i18n-3.6.3/fi.xpi) = 126447 +MD5 (xpi/firefox-i18n-3.6.3/fr.xpi) = 47647b52ed1503d6a29bc2a58f7de996 +SHA256 (xpi/firefox-i18n-3.6.3/fr.xpi) = 74fab75828a8c1e913c7602aeca448961f5a8088f2b6ee173828a0994ddbbb8a +SIZE 
(xpi/firefox-i18n-3.6.3/fr.xpi) = 128542 +MD5 (xpi/firefox-i18n-3.6.3/fy-NL.xpi) = 5fd9beb2b74ea00165cc42aa73f019ac +SHA256 (xpi/firefox-i18n-3.6.3/fy-NL.xpi) = 2065ffbe1c08f319f70064960a5f08a465bb4c3c484352b8e5ecfffc4bf8eae7 +SIZE (xpi/firefox-i18n-3.6.3/fy-NL.xpi) = 128880 +MD5 (xpi/firefox-i18n-3.6.3/ga-IE.xpi) = 03cf568156201821e8ca9f77b7fd2166 +SHA256 (xpi/firefox-i18n-3.6.3/ga-IE.xpi) = e233387bc4edfa4a2241b3b1c772b59208ffe11266c70d61ca1b69ff90766642 +SIZE (xpi/firefox-i18n-3.6.3/ga-IE.xpi) = 127381 +MD5 (xpi/firefox-i18n-3.6.3/gl.xpi) = 295b524b77e7faeb768025eca9253989 +SHA256 (xpi/firefox-i18n-3.6.3/gl.xpi) = 6e6538359da4b6070f87ee3b6c4c059cbe2e847ab844ef1ea379bc99f4e408dc +SIZE (xpi/firefox-i18n-3.6.3/gl.xpi) = 124820 +MD5 (xpi/firefox-i18n-3.6.3/gu-IN.xpi) = 7aebe5e8b398ab54444e0a8fb915bbea +SHA256 (xpi/firefox-i18n-3.6.3/gu-IN.xpi) = 68fb8a9dccb96ec1d6e9a6c2881a59822c3114675ae1bcbaeb8d052a21f7faec +SIZE (xpi/firefox-i18n-3.6.3/gu-IN.xpi) = 145912 +MD5 (xpi/firefox-i18n-3.6.3/he.xpi) = c21073d11f5abe89fffd98a4f67449ce +SHA256 (xpi/firefox-i18n-3.6.3/he.xpi) = 4867971272dfb56bf95c2c073c0c04427928270fc2698d8a186572441658861c +SIZE (xpi/firefox-i18n-3.6.3/he.xpi) = 122744 +MD5 (xpi/firefox-i18n-3.6.3/hi-IN.xpi) = c3adf95f031bedbeb1b06a58133b2a1f +SHA256 (xpi/firefox-i18n-3.6.3/hi-IN.xpi) = e1ce7a007d431087da54c9d0f05c96ebf7bf7e593991f1c45e9538e7dc7022e7 +SIZE (xpi/firefox-i18n-3.6.3/hi-IN.xpi) = 141464 +MD5 (xpi/firefox-i18n-3.6.3/hr.xpi) = b45855fd805a81c8d6fff63b3dbddf2e +SHA256 (xpi/firefox-i18n-3.6.3/hr.xpi) = e9104adc49d537d4d72064809186e347d9af4b0bb64031ed43d22be7b6978cc0 +SIZE (xpi/firefox-i18n-3.6.3/hr.xpi) = 127072 +MD5 (xpi/firefox-i18n-3.6.3/hu.xpi) = 20afeceaebcb56cc10089f3054303441 +SHA256 (xpi/firefox-i18n-3.6.3/hu.xpi) = 2c4e83a54088bfff0f1e6fd62b0e93b4d7dfeeb96b74d0d06400b5f056b2879d +SIZE (xpi/firefox-i18n-3.6.3/hu.xpi) = 131573 +MD5 (xpi/firefox-i18n-3.6.3/id.xpi) = 5d9a0e02c7480400e86c90619517a686 +SHA256 (xpi/firefox-i18n-3.6.3/id.xpi) = 
29c9a219e3413d35feffae93a3dc00d8734356ff2d5c73049deec5b719051195 +SIZE (xpi/firefox-i18n-3.6.3/id.xpi) = 103923 +MD5 (xpi/firefox-i18n-3.6.3/is.xpi) = d94b56636da5804b4b92103029f7ab6a +SHA256 (xpi/firefox-i18n-3.6.3/is.xpi) = 81287e28183540a78a9541415f4d5ce6cd346d5d2d58ee97e6746b95e74f8ee1 +SIZE (xpi/firefox-i18n-3.6.3/is.xpi) = 123008 +MD5 (xpi/firefox-i18n-3.6.3/it.xpi) = 55bf6baa6b00e395c9c899f513fa5fde +SHA256 (xpi/firefox-i18n-3.6.3/it.xpi) = 27159a29b4b49827d90821846f521856131ae594256b53f5db81c91fd8de4c2e +SIZE (xpi/firefox-i18n-3.6.3/it.xpi) = 107727 +MD5 (xpi/firefox-i18n-3.6.3/ja.xpi) = 9d4e0d5e0e8396b1c6c60305d4a57ef8 +SHA256 (xpi/firefox-i18n-3.6.3/ja.xpi) = f9e4f10b41d2d84d735c0e0f54f73a80801ca8f7025f3c7f6eb584a801cd215f +SIZE (xpi/firefox-i18n-3.6.3/ja.xpi) = 136587 +MD5 (xpi/firefox-i18n-3.6.3/ka.xpi) = b56e5d155b16b450801b069134d54171 +SHA256 (xpi/firefox-i18n-3.6.3/ka.xpi) = 05f00be54b578c28811f65e4de859a272451c298cd37adaff60dc4739a794055 +SIZE (xpi/firefox-i18n-3.6.3/ka.xpi) = 154154 +MD5 (xpi/firefox-i18n-3.6.3/kk.xpi) = edf4e7d9f79138c48071c544e367efd7 +SHA256 (xpi/firefox-i18n-3.6.3/kk.xpi) = 70e93bd2fa45c392fe5d73afddbf9bfda8dfa8eeb9da7e1f4ae82de75475c056 +SIZE (xpi/firefox-i18n-3.6.3/kk.xpi) = 137720 +MD5 (xpi/firefox-i18n-3.6.3/kn.xpi) = b64aa0463db089f8755ba9e4e477e9ef +SHA256 (xpi/firefox-i18n-3.6.3/kn.xpi) = 92feb2e513e6e1b1809eeafccea5a27ae87c86039ffcf3fd9442c6595159843c +SIZE (xpi/firefox-i18n-3.6.3/kn.xpi) = 148392 +MD5 (xpi/firefox-i18n-3.6.3/ko.xpi) = 4b623a76f0e2f51884d3a1d4399f0028 +SHA256 (xpi/firefox-i18n-3.6.3/ko.xpi) = 848a7c91c4c7934580f17c23a7164bf1def79e25da77cb6662ad2a7fb95c3877 +SIZE (xpi/firefox-i18n-3.6.3/ko.xpi) = 115398 +MD5 (xpi/firefox-i18n-3.6.3/ku.xpi) = 2f097580a7f7680933a5b235946e735a +SHA256 (xpi/firefox-i18n-3.6.3/ku.xpi) = 5e568bd8576e44b03393d768ccf575e27935de699aeb6d07d88baab1a9878045 +SIZE (xpi/firefox-i18n-3.6.3/ku.xpi) = 130363 +MD5 (xpi/firefox-i18n-3.6.3/lt.xpi) = d555eec00dd9c9436974daa2d46cda79 +SHA256 
(xpi/firefox-i18n-3.6.3/lt.xpi) = 71bd91e6bdf45ea107fefb948bf9ded8aecc1c263f0e8bb01ce3b7e0a3f2a6ee +SIZE (xpi/firefox-i18n-3.6.3/lt.xpi) = 139307 +MD5 (xpi/firefox-i18n-3.6.3/lv.xpi) = b2754566f884f1d615fe1129524ae1ff +SHA256 (xpi/firefox-i18n-3.6.3/lv.xpi) = b1123b1a5e7f09ec78ad7b3c373d30bbad9950f6a8f851041064d6c8fd3fce2b +SIZE (xpi/firefox-i18n-3.6.3/lv.xpi) = 124804 +MD5 (xpi/firefox-i18n-3.6.3/mk.xpi) = 7fe0e89f7e3fb7c75d4ace3ee1f17d79 +SHA256 (xpi/firefox-i18n-3.6.3/mk.xpi) = 599fab5d14e05d8bf9fdc0046dc03224dcfa07b689282a2adc16ec1c17142c08 +SIZE (xpi/firefox-i18n-3.6.3/mk.xpi) = 127944 +MD5 (xpi/firefox-i18n-3.6.3/ml.xpi) = 750703cb440d417c8fdf2cce332df4e9 +SHA256 (xpi/firefox-i18n-3.6.3/ml.xpi) = 8104081fe1457b90bd8a5b6bd55033b4070368c3e3f80ff480a4a917c80a1bfa +SIZE (xpi/firefox-i18n-3.6.3/ml.xpi) = 151842 +MD5 (xpi/firefox-i18n-3.6.3/mr.xpi) = 63458b4fdb0fe795059ed7a1f1d7e7ca +SHA256 (xpi/firefox-i18n-3.6.3/mr.xpi) = 75df7a30c373ed45f89c0deeb34405a9a44859fafa450c934a99c7385353bf7e +SIZE (xpi/firefox-i18n-3.6.3/mr.xpi) = 140489 +MD5 (xpi/firefox-i18n-3.6.3/nb-NO.xpi) = d62276990caf1add8607bf2a13f80425 +SHA256 (xpi/firefox-i18n-3.6.3/nb-NO.xpi) = b3437fdf88ebb6fdb24d40c43061ce2d917c1031e79511cc1512ff923cee1f55 +SIZE (xpi/firefox-i18n-3.6.3/nb-NO.xpi) = 123730 +MD5 (xpi/firefox-i18n-3.6.3/nl.xpi) = 2e28e9d322306d4cd62ad0d6edde384b +SHA256 (xpi/firefox-i18n-3.6.3/nl.xpi) = de0f8840f0482d07077e3bf2a5b73673759458f727fe3e47cf0e2d8e33754613 +SIZE (xpi/firefox-i18n-3.6.3/nl.xpi) = 126099 +MD5 (xpi/firefox-i18n-3.6.3/nn-NO.xpi) = 545762e4e6a20cf9137ceffaea2de293 +SHA256 (xpi/firefox-i18n-3.6.3/nn-NO.xpi) = 29b9e5fe8888ce329eef969575559dd6b05e0f41d3ad21eb9f90c12e6a4313b6 +SIZE (xpi/firefox-i18n-3.6.3/nn-NO.xpi) = 125130 +MD5 (xpi/firefox-i18n-3.6.3/oc.xpi) = db28c06c28760160fa54f55944bf19c0 +SHA256 (xpi/firefox-i18n-3.6.3/oc.xpi) = b94ff51b71d6f7da4e9cf999686a9d1989cebb4aaf46d64d58a3ee13bb3d521e +SIZE (xpi/firefox-i18n-3.6.3/oc.xpi) = 108342 +MD5 
(xpi/firefox-i18n-3.6.3/or.xpi) = 86c940e7b554564b559421dd44ee6abd +SHA256 (xpi/firefox-i18n-3.6.3/or.xpi) = beb0a9b4df191257af5bb475d4d7a1bb39caf2136e0c2383af8a7c6b96b7fec8 +SIZE (xpi/firefox-i18n-3.6.3/or.xpi) = 142167 +MD5 (xpi/firefox-i18n-3.6.3/pa-IN.xpi) = b1f5a0236e87fc9f245a551e217ad13f +SHA256 (xpi/firefox-i18n-3.6.3/pa-IN.xpi) = 4e720c212cdf9e36452513c580d8b53a263247d8b67d41b71339acceaa25239b +SIZE (xpi/firefox-i18n-3.6.3/pa-IN.xpi) = 141676 +MD5 (xpi/firefox-i18n-3.6.3/pl.xpi) = 7830e4e59a54dd293cd485cbb5ca034a +SHA256 (xpi/firefox-i18n-3.6.3/pl.xpi) = c1d8486a16f96316cccd5a9d8f3a83e6ab2196cd902083f0522d6150390e5fa5 +SIZE (xpi/firefox-i18n-3.6.3/pl.xpi) = 130640 +MD5 (xpi/firefox-i18n-3.6.3/pt-BR.xpi) = c208c9d466dc3329be7f213dca022ad2 +SHA256 (xpi/firefox-i18n-3.6.3/pt-BR.xpi) = 3a270939c13db8f9c88aab341801a525df7dde8f306c05926a6a14860687929b +SIZE (xpi/firefox-i18n-3.6.3/pt-BR.xpi) = 124975 +MD5 (xpi/firefox-i18n-3.6.3/pt-PT.xpi) = 6ff5989286f33b06ce689a017d107f41 +SHA256 (xpi/firefox-i18n-3.6.3/pt-PT.xpi) = 269750bc5ed13a4709620b0200cdec59c74a390cd78ae0d58f0d39d8215e9f9e +SIZE (xpi/firefox-i18n-3.6.3/pt-PT.xpi) = 124948 +MD5 (xpi/firefox-i18n-3.6.3/rm.xpi) = 212f3cbc2b1d3c7c3d5c1919cb7cdea0 +SHA256 (xpi/firefox-i18n-3.6.3/rm.xpi) = 85cd2158b4f4a884dee01b68613aca0d065a6cb43378758c771734d0d09c1c24 +SIZE (xpi/firefox-i18n-3.6.3/rm.xpi) = 127029 +MD5 (xpi/firefox-i18n-3.6.3/ro.xpi) = 24e50bb4c20d40f199363262935f7d03 +SHA256 (xpi/firefox-i18n-3.6.3/ro.xpi) = b37d78a5034be7369a6fe001c9772f56419132f1bd35b8334701a5f4b46bee19 +SIZE (xpi/firefox-i18n-3.6.3/ro.xpi) = 128706 +MD5 (xpi/firefox-i18n-3.6.3/ru.xpi) = e85f86833efffa0ebbb67f66c55853a0 +SHA256 (xpi/firefox-i18n-3.6.3/ru.xpi) = 4058ae8e4401d4ae45f678fbd010ef18583ab5b7eb0db22312f1c10e260996b7 +SIZE (xpi/firefox-i18n-3.6.3/ru.xpi) = 124611 +MD5 (xpi/firefox-i18n-3.6.3/si.xpi) = 00ad3e7ff392995e0e578b7210ed0eac +SHA256 (xpi/firefox-i18n-3.6.3/si.xpi) = 
9c4e730517c8d4cc3bc98da80be8193bc54f1c3d2c666a95a2208e88a3c3c973 +SIZE (xpi/firefox-i18n-3.6.3/si.xpi) = 142225 +MD5 (xpi/firefox-i18n-3.6.3/sk.xpi) = 4a80b1e2c7d0a0c253379d466a5dea32 +SHA256 (xpi/firefox-i18n-3.6.3/sk.xpi) = 5040fa868adc970ca3aa036172598aba0ae74b06bb48b3bbf6e7176701ddaaf0 +SIZE (xpi/firefox-i18n-3.6.3/sk.xpi) = 130723 +MD5 (xpi/firefox-i18n-3.6.3/sl.xpi) = 82708e7ca784e894531a76e4b760cd63 +SHA256 (xpi/firefox-i18n-3.6.3/sl.xpi) = eb92aa2aa370b31c96df276a84f92aace1fa94c2c2e4c32eea74620981e64c81 +SIZE (xpi/firefox-i18n-3.6.3/sl.xpi) = 123929 +MD5 (xpi/firefox-i18n-3.6.3/sq.xpi) = 7bad1d0dd6022ddecc0c3a9a7e7065dd +SHA256 (xpi/firefox-i18n-3.6.3/sq.xpi) = 9897289f002025dc082f9dbbc2f13878aec6c765eff28a57ff65fd8a6c2e8ed6 +SIZE (xpi/firefox-i18n-3.6.3/sq.xpi) = 120917 +MD5 (xpi/firefox-i18n-3.6.3/sr.xpi) = 6c53cf2ff925a321b6d6076fe5b40fcf +SHA256 (xpi/firefox-i18n-3.6.3/sr.xpi) = 08aa2f5230ce557ea8abd56bc178bb8e8d776f5724e341f2d494ae578f42ffe6 +SIZE (xpi/firefox-i18n-3.6.3/sr.xpi) = 136898 +MD5 (xpi/firefox-i18n-3.6.3/sv-SE.xpi) = 210407c722919c59747a4c599e27d0cc +SHA256 (xpi/firefox-i18n-3.6.3/sv-SE.xpi) = ab5dc44a7f5f908ffc8ba622a222b892a98b9cea5abc6fb8ef02b66796c4c28b +SIZE (xpi/firefox-i18n-3.6.3/sv-SE.xpi) = 128080 +MD5 (xpi/firefox-i18n-3.6.3/ta.xpi) = 7f6d205ee070313768588ccb299caff9 +SHA256 (xpi/firefox-i18n-3.6.3/ta.xpi) = 6423a6287aec9f0c2a44d4e510f8ba24318fb3d3ecb9eab80471ea5a352ee26e +SIZE (xpi/firefox-i18n-3.6.3/ta.xpi) = 136772 +MD5 (xpi/firefox-i18n-3.6.3/te.xpi) = 5bb3d9c0a70c8fb1791251578e646adb +SHA256 (xpi/firefox-i18n-3.6.3/te.xpi) = ad0b5766f21bde9e8a091266c3c905c3e2d89cc0028db808a2abb79589d57e0c +SIZE (xpi/firefox-i18n-3.6.3/te.xpi) = 147410 +MD5 (xpi/firefox-i18n-3.6.3/th.xpi) = 72ac1e0a9f590febc6486b2e507d0b79 +SHA256 (xpi/firefox-i18n-3.6.3/th.xpi) = 2ae26f1a52414c5afffc26f202afbafeeb5597a0038d56898f85f1ad7abb548a +SIZE (xpi/firefox-i18n-3.6.3/th.xpi) = 133997 +MD5 (xpi/firefox-i18n-3.6.3/tr.xpi) = 
b0cb9dea29cf0198cb768b661e536331 +SHA256 (xpi/firefox-i18n-3.6.3/tr.xpi) = 194996a06fe19901d543f9f014a28b220f8a771deaac6706e08e82cfbbaf7310 +SIZE (xpi/firefox-i18n-3.6.3/tr.xpi) = 125048 +MD5 (xpi/firefox-i18n-3.6.3/uk.xpi) = 328a06b4ce0ef75d19a382d2f95fefb7 +SHA256 (xpi/firefox-i18n-3.6.3/uk.xpi) = 24b10df39eb8ffcaffd747dc7f1009131581ae685d74ebcd74749aabdfb6bd81 +SIZE (xpi/firefox-i18n-3.6.3/uk.xpi) = 142861 +MD5 (xpi/firefox-i18n-3.6.3/vi.xpi) = c6e7fd3a4f289269298f1a447d3c6924 +SHA256 (xpi/firefox-i18n-3.6.3/vi.xpi) = 36b68aa047f2a7c9e40bae09d8351bf161208913407be39581dd421d52783ba2 +SIZE (xpi/firefox-i18n-3.6.3/vi.xpi) = 129025 +MD5 (xpi/firefox-i18n-3.6.3/zh-CN.xpi) = e79500e81decbe294de6ad650416e7cd +SHA256 (xpi/firefox-i18n-3.6.3/zh-CN.xpi) = 3946ad88851e19ee26a2828949c1eac611dc2bb6926ef5224d67edfaf29a63d8 +SIZE (xpi/firefox-i18n-3.6.3/zh-CN.xpi) = 126678 +MD5 (xpi/firefox-i18n-3.6.3/zh-TW.xpi) = 1836f39bf72e9d10668c2a9e6668bf18 +SHA256 (xpi/firefox-i18n-3.6.3/zh-TW.xpi) = 5ea447d1511feb269ae1b46dd14919472d37301cc98208c0c49979fdaa755920 +SIZE (xpi/firefox-i18n-3.6.3/zh-TW.xpi) = 128220 Modified: branches/experimental/www/firefox/Makefile ============================================================================== --- branches/experimental/www/firefox/Makefile Tue Mar 30 19:13:01 2010 (r300) +++ branches/experimental/www/firefox/Makefile Tue Apr 6 07:24:41 2010 (r301) @@ -6,7 +6,7 @@ # PORTNAME= firefox -DISTVERSION= 3.6.2 +DISTVERSION= 3.6.3 PORTEPOCH= 1 CATEGORIES= www ipv6 MASTER_SITES= ${MASTER_SITE_MOZILLA} Modified: branches/experimental/www/firefox/distinfo ============================================================================== --- branches/experimental/www/firefox/distinfo Tue Mar 30 19:13:01 2010 (r300) +++ branches/experimental/www/firefox/distinfo Tue Apr 6 07:24:41 2010 (r301) @@ -1,3 +1,3 @@ -MD5 (firefox-3.6.2.source.tar.bz2) = a1972a2216ac7139b92b7378a328ec93 -SHA256 (firefox-3.6.2.source.tar.bz2) = 
0f8398efabb14da83e14c7fbabdff99a619752cadb43348b27adf302382046f0 -SIZE (firefox-3.6.2.source.tar.bz2) = 48919538 +MD5 (firefox-3.6.3.source.tar.bz2) = 5e4541ab23084b7b14ba228291ce2688 +SHA256 (firefox-3.6.3.source.tar.bz2) = 35a0b35d79baa502b6c5781ed3e9c4b5eb623bdaf1940460dcf465192848a2d0 +SIZE (firefox-3.6.3.source.tar.bz2) = 48633061 Modified: trunk/www/firefox-i18n/Makefile ============================================================================== --- trunk/www/firefox-i18n/Makefile Tue Mar 30 19:13:01 2010 (r300) +++ trunk/www/firefox-i18n/Makefile Tue Apr 6 07:24:41 2010 (r301) @@ -6,7 +6,7 @@ # PORTNAME= firefox-i18n -PORTVERSION= 3.6.2 +PORTVERSION= 3.6.3 CATEGORIES= www MASTER_SITE_SUBDIR= firefox/releases/${PORTVERSION}/linux-i686/xpi PKGNAMEPREFIX= Modified: trunk/www/firefox-i18n/distinfo ============================================================================== --- trunk/www/firefox-i18n/distinfo Tue Mar 30 19:13:01 2010 (r300) +++ trunk/www/firefox-i18n/distinfo Tue Apr 6 07:24:41 2010 (r301) @@ -1,219 +1,216 @@ -MD5 (xpi/firefox-i18n-3.6.2/af.xpi) = 6098ee3000db24f8630f6d8481ab6738 -SHA256 (xpi/firefox-i18n-3.6.2/af.xpi) = 3b62d09ecd22b017bb773718d8b59368259c0f3840e228d8f2762b20dd23437c -SIZE (xpi/firefox-i18n-3.6.2/af.xpi) = 121622 -MD5 (xpi/firefox-i18n-3.6.2/ar.xpi) = 99621a07d3625d2b48ca8f6ec3e542bb -SHA256 (xpi/firefox-i18n-3.6.2/ar.xpi) = f960361bdda9ed2fb2f2aaa93cd74736a073eaf395e391011d785c2fa1c7cd60 -SIZE (xpi/firefox-i18n-3.6.2/ar.xpi) = 131701 -MD5 (xpi/firefox-i18n-3.6.2/as.xpi) = 5144667ac67180ee5a802029afcacb22 -SHA256 (xpi/firefox-i18n-3.6.2/as.xpi) = 8744d622d2e08df28edba5e82bcda17100dbbc203818b6f2d3940cd8996d0009 -SIZE (xpi/firefox-i18n-3.6.2/as.xpi) = 134296 -MD5 (xpi/firefox-i18n-3.6.2/be.xpi) = 3dddf0d91249f7ec753b414168910655 -SHA256 (xpi/firefox-i18n-3.6.2/be.xpi) = 755864ca7a0fbc752c4ab863b002e8cdbe66363cc0f8ccfda2cb242f83b4f99a -SIZE (xpi/firefox-i18n-3.6.2/be.xpi) = 121983 -MD5 (xpi/firefox-i18n-3.6.2/bg.xpi) = 
e3d7230d167d3dfa8ba904158a6d536f -SHA256 (xpi/firefox-i18n-3.6.2/bg.xpi) = e084d6095a88f0122a162ba0fba100f5c1a0ddcf9965c16933fd560706cb0063 -SIZE (xpi/firefox-i18n-3.6.2/bg.xpi) = 134072 -MD5 (xpi/firefox-i18n-3.6.2/bn-BD.xpi) = 0781de0249e275d3f46d9d57db07d5e8 -SHA256 (xpi/firefox-i18n-3.6.2/bn-BD.xpi) = 9a44cd4dbbe9bba5f940f5c88a78f359947137d72383690b4c56c70f1b5dc3d0 -SIZE (xpi/firefox-i18n-3.6.2/bn-BD.xpi) = 140822 -MD5 (xpi/firefox-i18n-3.6.2/bn-IN.xpi) = 5292991a5baf49dc238c57555a3e1d23 -SHA256 (xpi/firefox-i18n-3.6.2/bn-IN.xpi) = 245838fba89bd54f8ca42dad44da04628ecefa41a9ce0c974f41fb57ee339ab8 -SIZE (xpi/firefox-i18n-3.6.2/bn-IN.xpi) = 159545 -MD5 (xpi/firefox-i18n-3.6.2/ca.xpi) = f8978814c17699e464a61c99cf279988 -SHA256 (xpi/firefox-i18n-3.6.2/ca.xpi) = 09b43b56e67fc8dd3212b2804e73135222f0f9af41a44a10d496e246b4a997a9 -SIZE (xpi/firefox-i18n-3.6.2/ca.xpi) = 127570 -MD5 (xpi/firefox-i18n-3.6.2/cs.xpi) = 2390954d3d3899b31559dec854b692c8 -SHA256 (xpi/firefox-i18n-3.6.2/cs.xpi) = ca5b98f0e3e29729a7bae27ed0efae3d942e24872df0e699e18ebb2c79b50857 -SIZE (xpi/firefox-i18n-3.6.2/cs.xpi) = 126416 -MD5 (xpi/firefox-i18n-3.6.2/cy.xpi) = 9f41c969931d8e559d056a054b1844d8 -SHA256 (xpi/firefox-i18n-3.6.2/cy.xpi) = 09be8bbdcffcb7d5ca06e3bdf53993e35d70071083f3b9736dc937de89fc5ce3 -SIZE (xpi/firefox-i18n-3.6.2/cy.xpi) = 121979 -MD5 (xpi/firefox-i18n-3.6.2/da.xpi) = 45ad66651c5f26aef42f5f9d6a5bd1c7 -SHA256 (xpi/firefox-i18n-3.6.2/da.xpi) = de3ccd091253621c8a8ade95af5a93edd8f14714566d1709161eb4d703c26dd0 -SIZE (xpi/firefox-i18n-3.6.2/da.xpi) = 125224 -MD5 (xpi/firefox-i18n-3.6.2/de.xpi) = b701f8f81573aeb3bfddb63955c0f0b8 -SHA256 (xpi/firefox-i18n-3.6.2/de.xpi) = 07306a30f910bfb0309cdde3f498cd18e333a8c054c39e7a780d4fc29e7f969c -SIZE (xpi/firefox-i18n-3.6.2/de.xpi) = 128614 -MD5 (xpi/firefox-i18n-3.6.2/el.xpi) = 1e6915d27a9183bc0070e54332bb5182 -SHA256 (xpi/firefox-i18n-3.6.2/el.xpi) = c8d319f7faa4d8698771a12aee905f098a3195ee2f8fda3a0c5a7c3a21748136 -SIZE 
(xpi/firefox-i18n-3.6.2/el.xpi) = 134869 -MD5 (xpi/firefox-i18n-3.6.2/en-GB.xpi) = 01c4dccd21e5b31e8ae2bee4925a0846 -SHA256 (xpi/firefox-i18n-3.6.2/en-GB.xpi) = 3a7e4a51e9ae4e65107b9a929e69456c6a578702f9fccb641d35c1322510df28 -SIZE (xpi/firefox-i18n-3.6.2/en-GB.xpi) = 118873 -MD5 (xpi/firefox-i18n-3.6.2/eo.xpi) = e12a528c72e1e8f13e604f48cc7e1e09 -SHA256 (xpi/firefox-i18n-3.6.2/eo.xpi) = c3d9f500cef18b8d0b871a68690633c215bdad583845919c63d882d44a570d22 -SIZE (xpi/firefox-i18n-3.6.2/eo.xpi) = 121574 -MD5 (xpi/firefox-i18n-3.6.2/es-AR.xpi) = 58a325ac2c88c7aeff6d5e5f427e5f8b -SHA256 (xpi/firefox-i18n-3.6.2/es-AR.xpi) = 675f9b963c3a642b4854b5f389b965cfa00eeb7b019d716fb2f7e7b9e51bdbc2 -SIZE (xpi/firefox-i18n-3.6.2/es-AR.xpi) = 127429 -MD5 (xpi/firefox-i18n-3.6.2/es-CL.xpi) = 1a35436da22ccb593667905dbca847b7 -SHA256 (xpi/firefox-i18n-3.6.2/es-CL.xpi) = 7461e8c30c212f587213e33803b2d83b193d7b68a024da06d4c9d7afe1a8e4c4 -SIZE (xpi/firefox-i18n-3.6.2/es-CL.xpi) = 129070 -MD5 (xpi/firefox-i18n-3.6.2/es-ES.xpi) = 244a5969458464ad37ba000dec0e8617 -SHA256 (xpi/firefox-i18n-3.6.2/es-ES.xpi) = 51cf8ee8fe9f6d0c5a6a73ab31e510b71e643748a8d6515f9eb4ffaf2a128e81 -SIZE (xpi/firefox-i18n-3.6.2/es-ES.xpi) = 108478 -MD5 (xpi/firefox-i18n-3.6.2/es-MX.xpi) = 27d1541dd6b77259752543f251a0079b -SHA256 (xpi/firefox-i18n-3.6.2/es-MX.xpi) = 19523599211bbf460af604f331a6a9b762b6559a94e56780214c2af7865d14dc -SIZE (xpi/firefox-i18n-3.6.2/es-MX.xpi) = 127427 -MD5 (xpi/firefox-i18n-3.6.2/et.xpi) = f0e589e072a784651f99e92f042277ea -SHA256 (xpi/firefox-i18n-3.6.2/et.xpi) = 0680530ac2e9fdad9e4c8132c9f73c592b038134c902ff6b0106eb91618728f2 -SIZE (xpi/firefox-i18n-3.6.2/et.xpi) = 126969 -MD5 (xpi/firefox-i18n-3.6.2/eu.xpi) = db5c050b3c2a021e7ac3d3994e854f81 -SHA256 (xpi/firefox-i18n-3.6.2/eu.xpi) = f80873bee8136116d18f7124b7208d220ee262d4785a9a2108c129c01f7056e6 -SIZE (xpi/firefox-i18n-3.6.2/eu.xpi) = 121518 -MD5 (xpi/firefox-i18n-3.6.2/fa.xpi) = 8965f1f0f91d5ddf4a4a13d33a68427d -SHA256 
(xpi/firefox-i18n-3.6.2/fa.xpi) = 1f12d6aafcead1002e721e53dfb2009fb93c7b697e712c03719167a8062b7285 -SIZE (xpi/firefox-i18n-3.6.2/fa.xpi) = 138048 -MD5 (xpi/firefox-i18n-3.6.2/fi.xpi) = 24a92bdeab95870a3d731344983b2521 -SHA256 (xpi/firefox-i18n-3.6.2/fi.xpi) = f6afd11c4a1efd7d00fa4b43c9805bd1c9facdc9c0f6d2b83f5ce7d82283469a -SIZE (xpi/firefox-i18n-3.6.2/fi.xpi) = 126440 -MD5 (xpi/firefox-i18n-3.6.2/fr.xpi) = 833fed7b317b75143c0b6ebee4de725e -SHA256 (xpi/firefox-i18n-3.6.2/fr.xpi) = ae4e436fbdbcff9cf8989e356df87e0c691ba75b3236f0d38ed507b7b54d5b3c -SIZE (xpi/firefox-i18n-3.6.2/fr.xpi) = 128534 -MD5 (xpi/firefox-i18n-3.6.2/fy-NL.xpi) = 220f25796907e358dc2f4606a85832bc -SHA256 (xpi/firefox-i18n-3.6.2/fy-NL.xpi) = 16b1b7868f5089362e30ec34039c6f8fd2ea4590afc1ac9e0a052a21567af72d -SIZE (xpi/firefox-i18n-3.6.2/fy-NL.xpi) = 128850 -MD5 (xpi/firefox-i18n-3.6.2/ga-IE.xpi) = a245d68cb35089fda224be7a434b5422 -SHA256 (xpi/firefox-i18n-3.6.2/ga-IE.xpi) = 52ce5e2a24569e380c112c996e5d780c7e42dc03129b906f85724cfb99b4d659 -SIZE (xpi/firefox-i18n-3.6.2/ga-IE.xpi) = 127359 -MD5 (xpi/firefox-i18n-3.6.2/gl.xpi) = 2d209aad5bf5b666f3db4ae093acec5c -SHA256 (xpi/firefox-i18n-3.6.2/gl.xpi) = 0e201d84dfb3835c0794033d290b839cd759fa2c80ae8854b70181a078d667ec -SIZE (xpi/firefox-i18n-3.6.2/gl.xpi) = 124830 -MD5 (xpi/firefox-i18n-3.6.2/gu-IN.xpi) = dfb18d72a2ef86571225ac093f382be6 -SHA256 (xpi/firefox-i18n-3.6.2/gu-IN.xpi) = c9859901aba7f12862a2fe1294935852bb3e36fd4b6d38cc9899865b6d464ee5 -SIZE (xpi/firefox-i18n-3.6.2/gu-IN.xpi) = 145873 -MD5 (xpi/firefox-i18n-3.6.2/he.xpi) = f7d6481aab59ec9072a9d0f3b5e3bb89 -SHA256 (xpi/firefox-i18n-3.6.2/he.xpi) = a7e045f18e94574cf2f42cc00001e18c9178cdb924d92dc8f770b9898226e0d2 -SIZE (xpi/firefox-i18n-3.6.2/he.xpi) = 122707 -MD5 (xpi/firefox-i18n-3.6.2/hi-IN.xpi) = eb57d5908b08fbaf381baf02351a8ef6 -SHA256 (xpi/firefox-i18n-3.6.2/hi-IN.xpi) = 3e35ba3cc4a9604a0bda5f3e2056aa3f48406f583d7ad7823d428dd57af47a89 -SIZE (xpi/firefox-i18n-3.6.2/hi-IN.xpi) = 141483 -MD5 
(xpi/firefox-i18n-3.6.2/hr.xpi) = 0de65ab8a1301a3f41c41dcc7306092a -SHA256 (xpi/firefox-i18n-3.6.2/hr.xpi) = 40d257413114d22f03ae59b72c9fafc4670865a36236b44e8069385795047a96 -SIZE (xpi/firefox-i18n-3.6.2/hr.xpi) = 127062 -MD5 (xpi/firefox-i18n-3.6.2/hu.xpi) = 4e75f24d3db20fd823af8d2c57c142aa -SHA256 (xpi/firefox-i18n-3.6.2/hu.xpi) = 18fd022c7fa481734d30483c76ab8f5f91acc8032f4e9431d0d6bfbd0670fd1a -SIZE (xpi/firefox-i18n-3.6.2/hu.xpi) = 131550 -MD5 (xpi/firefox-i18n-3.6.2/id.xpi) = afe450c6774123dad769babddd7ee327 -SHA256 (xpi/firefox-i18n-3.6.2/id.xpi) = 93988991f335a392fdb95e64235457c82c86fb83f84d4562d67e88549cf23dcb -SIZE (xpi/firefox-i18n-3.6.2/id.xpi) = 103926 -MD5 (xpi/firefox-i18n-3.6.2/is.xpi) = 93dc41092e55ecf69fa7f1727fc6de17 -SHA256 (xpi/firefox-i18n-3.6.2/is.xpi) = 42cdd6b55cc70702581f84cd898dda1699bd0b6a2bce5d57cffb9938830e0823 -SIZE (xpi/firefox-i18n-3.6.2/is.xpi) = 123006 -MD5 (xpi/firefox-i18n-3.6.2/it.xpi) = e7ee390f0b20f8bb53a2bc8dfe8c0f51 -SHA256 (xpi/firefox-i18n-3.6.2/it.xpi) = dc14fc6f3a962d567f2bf3eae2c9517629ea6cef3bf2dd420dcf4d4922160e67 -SIZE (xpi/firefox-i18n-3.6.2/it.xpi) = 107706 -MD5 (xpi/firefox-i18n-3.6.2/ja.xpi) = fd78f12fdcdf692f6898f10893924aba -SHA256 (xpi/firefox-i18n-3.6.2/ja.xpi) = 595c8582a79573754a59d0a7af27e0d6539ef0d609033c355460e4ca0500b86f -SIZE (xpi/firefox-i18n-3.6.2/ja.xpi) = 136586 -MD5 (xpi/firefox-i18n-3.6.2/ka.xpi) = 5d218fd81f72fd09d206e38289c55fe8 -SHA256 (xpi/firefox-i18n-3.6.2/ka.xpi) = 79c9642231e7d860f76f92980083be41b2e9232fc6a5b387df8ef45d7cb3e980 -SIZE (xpi/firefox-i18n-3.6.2/ka.xpi) = 154148 -MD5 (xpi/firefox-i18n-3.6.2/kk.xpi) = 290af18dbd8ee444ef90f38cfda48f9d -SHA256 (xpi/firefox-i18n-3.6.2/kk.xpi) = f730b6ea829eb9fa72f18d8a91494b10b457cdb198dc53075f557de4a1f51cb8 -SIZE (xpi/firefox-i18n-3.6.2/kk.xpi) = 137724 -MD5 (xpi/firefox-i18n-3.6.2/kn.xpi) = 62a87e158a14ce330468bdb4bb63fd98 -SHA256 (xpi/firefox-i18n-3.6.2/kn.xpi) = 95478d9e44438e7a7af7c83cdfe9c8b91a92280f8758f15787f89c2cf445c3b2 -SIZE 
(xpi/firefox-i18n-3.6.2/kn.xpi) = 148421 -MD5 (xpi/firefox-i18n-3.6.2/ko.xpi) = 16d0e0896eff7983c80111db17124550 -SHA256 (xpi/firefox-i18n-3.6.2/ko.xpi) = 67d9109ed8a3e30b3d600d3c99de494c76a10f7e2027e3504ddbac7c137fc9dd -SIZE (xpi/firefox-i18n-3.6.2/ko.xpi) = 115383 -MD5 (xpi/firefox-i18n-3.6.2/ku.xpi) = 76b2fb3445b35dc4a03fa9fe3f91bcff -SHA256 (xpi/firefox-i18n-3.6.2/ku.xpi) = 1e773c66deab6089e064ea66c7ce4730c1bf59702caa43961ca9b95ee51400cb -SIZE (xpi/firefox-i18n-3.6.2/ku.xpi) = 130378 -MD5 (xpi/firefox-i18n-3.6.2/lt.xpi) = 6f185736ebfc5c0f975cc065de5b30a3 -SHA256 (xpi/firefox-i18n-3.6.2/lt.xpi) = c45a0e92c1c7ef08a8fa63c1452ad135ef4a189092e5f880f1236f6d1691d38c -SIZE (xpi/firefox-i18n-3.6.2/lt.xpi) = 139307 -MD5 (xpi/firefox-i18n-3.6.2/lv.xpi) = fafe715b149a643c274883c0ba0d1b89 -SHA256 (xpi/firefox-i18n-3.6.2/lv.xpi) = fdae7a3e4d82e3cbfc6f617d8966171dc6c16abaf553f7e3ab561dc674877b2d -SIZE (xpi/firefox-i18n-3.6.2/lv.xpi) = 124827 -MD5 (xpi/firefox-i18n-3.6.2/mk.xpi) = 7abdea5fbad108a3241fe047553de73e -SHA256 (xpi/firefox-i18n-3.6.2/mk.xpi) = c4b2bda44d8ebc2b349701670169f509287d366a628e0bda13aec4151a306f73 -SIZE (xpi/firefox-i18n-3.6.2/mk.xpi) = 127945 -MD5 (xpi/firefox-i18n-3.6.2/ml.xpi) = 93508bbfdecce59a5104668176cade6e -SHA256 (xpi/firefox-i18n-3.6.2/ml.xpi) = c704c981d6071154288d686e2a0fd79c3cc8df79c4de4d85192b3d29d5c6af54 -SIZE (xpi/firefox-i18n-3.6.2/ml.xpi) = 151880 -MD5 (xpi/firefox-i18n-3.6.2/mr.xpi) = aac86676f5e4fe881990b54d184d869a -SHA256 (xpi/firefox-i18n-3.6.2/mr.xpi) = b0d2cdcf5606aa2d271652fbfdc584721d89be21eeceda1061d289485d132380 -SIZE (xpi/firefox-i18n-3.6.2/mr.xpi) = 140488 -MD5 (xpi/firefox-i18n-3.6.2/nb-NO.xpi) = 5c7926cacdce54e2218a839743496e01 -SHA256 (xpi/firefox-i18n-3.6.2/nb-NO.xpi) = d0237f0b0836e7e0ad0c0995dc4a5d066b5219ac8484977bd2312f32bf1557d1 -SIZE (xpi/firefox-i18n-3.6.2/nb-NO.xpi) = 123728 -MD5 (xpi/firefox-i18n-3.6.2/nl.xpi) = 354395ae42e4cfecced4294775f4186a -SHA256 (xpi/firefox-i18n-3.6.2/nl.xpi) = 
f566e9bf2e992d9e43bcd145187d83a12c9dd9218c484f20fcf0095894d45c6e -SIZE (xpi/firefox-i18n-3.6.2/nl.xpi) = 126068 -MD5 (xpi/firefox-i18n-3.6.2/nn-NO.xpi) = 0a7dd2f0da27235be30420b81bbf10bc -SHA256 (xpi/firefox-i18n-3.6.2/nn-NO.xpi) = e9859857cbc81837f8ab937851b094e69a6b5a8747c32cc02dd0f5fc3419ceca -SIZE (xpi/firefox-i18n-3.6.2/nn-NO.xpi) = 125121 -MD5 (xpi/firefox-i18n-3.6.2/oc.xpi) = 2e3762813f17e4467716a2040e4c30b5 -SHA256 (xpi/firefox-i18n-3.6.2/oc.xpi) = 42e877f3087f70362b5fea71cf5df0adbdfcfe543b42bd704cd8a2d41ed4b956 -SIZE (xpi/firefox-i18n-3.6.2/oc.xpi) = 108343 -MD5 (xpi/firefox-i18n-3.6.2/or.xpi) = 092ac7f81e134f04cc465145b60d5002 -SHA256 (xpi/firefox-i18n-3.6.2/or.xpi) = fe8dfd8c3f6e030c124f8ee5ce6808fe8274329a790acd0ee83863c1239694d6 -SIZE (xpi/firefox-i18n-3.6.2/or.xpi) = 142200 -MD5 (xpi/firefox-i18n-3.6.2/pa-IN.xpi) = 8e11f068479917bd5247c56537abf8b3 -SHA256 (xpi/firefox-i18n-3.6.2/pa-IN.xpi) = 5c398d0cae9eaee87f219cdccf91884fa5d7b48e81ba4de4ac77fe16e98850d8 -SIZE (xpi/firefox-i18n-3.6.2/pa-IN.xpi) = 141718 -MD5 (xpi/firefox-i18n-3.6.2/pl.xpi) = c4693a24c4724cef4c12013778098f7a -SHA256 (xpi/firefox-i18n-3.6.2/pl.xpi) = 9bc3d7cc648d95d649ea862ba0d803f0613fd5245d4cc9c30ce45e65e8a0fe40 -SIZE (xpi/firefox-i18n-3.6.2/pl.xpi) = 130634 -MD5 (xpi/firefox-i18n-3.6.2/pt-BR.xpi) = 36d5443654f9d7bbaa44432c9be62c9a -SHA256 (xpi/firefox-i18n-3.6.2/pt-BR.xpi) = 6581bc855b5109eb68b3d87d8edb7596f90c31bbe8f2b4981506dc1e7add119c -SIZE (xpi/firefox-i18n-3.6.2/pt-BR.xpi) = 124989 -MD5 (xpi/firefox-i18n-3.6.2/pt-PT.xpi) = 55e812a2384ffda162e6672310166262 -SHA256 (xpi/firefox-i18n-3.6.2/pt-PT.xpi) = c98f8fb1e6794dcb697c8c40fb991743373310dcf4da94f095c9d4aef4391180 -SIZE (xpi/firefox-i18n-3.6.2/pt-PT.xpi) = 124942 -MD5 (xpi/firefox-i18n-3.6.2/rm.xpi) = a239a06d074c3dc2a4a4737b276cfaad -SHA256 (xpi/firefox-i18n-3.6.2/rm.xpi) = a8f40d629f8db7f2cca2768bfb1f341c29a3ad81824b9d2b417181fcff92716d -SIZE (xpi/firefox-i18n-3.6.2/rm.xpi) = 127026 -MD5 (xpi/firefox-i18n-3.6.2/ro.xpi) = 
78d506254e977cfc596258665c4f344b -SHA256 (xpi/firefox-i18n-3.6.2/ro.xpi) = f8fe0411d9feae51030c11e983855e4d8f29311450b02ded8ef56ce3fd6ed87f -SIZE (xpi/firefox-i18n-3.6.2/ro.xpi) = 128725 -MD5 (xpi/firefox-i18n-3.6.2/ru.xpi) = e0a3f60f143663acb1c58eb74c4eb585 -SHA256 (xpi/firefox-i18n-3.6.2/ru.xpi) = fc9a855583f039daf5c0a2517d31dea21ff1104b43aae019cd374af605f69d38 -SIZE (xpi/firefox-i18n-3.6.2/ru.xpi) = 124574 -MD5 (xpi/firefox-i18n-3.6.2/si.xpi) = a82f43d6341924bb46ac31702cb80787 -SHA256 (xpi/firefox-i18n-3.6.2/si.xpi) = cc82daec384df0f6eb418a5afdeaa9ad50e62c3e2c1a2d52f1a1d41afad318c9 -SIZE (xpi/firefox-i18n-3.6.2/si.xpi) = 142189 -MD5 (xpi/firefox-i18n-3.6.2/sk.xpi) = 1ac96364b0d62767d1b57b6ad78cf9eb -SHA256 (xpi/firefox-i18n-3.6.2/sk.xpi) = 3f8c15866709e557f3580918445b8496c216dfb0775659f552063d2ba4fa58cf -SIZE (xpi/firefox-i18n-3.6.2/sk.xpi) = 130718 -MD5 (xpi/firefox-i18n-3.6.2/sl.xpi) = 74c60b6cfc632d17cca35a9d79884c7a -SHA256 (xpi/firefox-i18n-3.6.2/sl.xpi) = 2e22e278c0f50ef874d1a0b987c2439d7ac9b7314617dddc6e397a372db47429 -SIZE (xpi/firefox-i18n-3.6.2/sl.xpi) = 123902 -MD5 (xpi/firefox-i18n-3.6.2/sq.xpi) = b68f88ea07722834f5a9dde82581d3fc -SHA256 (xpi/firefox-i18n-3.6.2/sq.xpi) = 2042370ee6b33213e8be5edf757db79305c2521d76f35fa23231ad41648473e9 -SIZE (xpi/firefox-i18n-3.6.2/sq.xpi) = 120934 -MD5 (xpi/firefox-i18n-3.6.2/sr.xpi) = 896a7e7629b61d933846ef9efe288ada -SHA256 (xpi/firefox-i18n-3.6.2/sr.xpi) = 6db567a5eae9c19c6393ef11b83e9e7ee11b136470bbd05741bd49ad266f3597 -SIZE (xpi/firefox-i18n-3.6.2/sr.xpi) = 136930 -MD5 (xpi/firefox-i18n-3.6.2/sv-SE.xpi) = b8dc0908e82f5d338b325d1aed5b6c71 -SHA256 (xpi/firefox-i18n-3.6.2/sv-SE.xpi) = f818535af1cded2c3d55bbc537d1bb4338f9abf7cef14fc561b6a0effae85153 -SIZE (xpi/firefox-i18n-3.6.2/sv-SE.xpi) = 128075 -MD5 (xpi/firefox-i18n-3.6.2/ta-LK.xpi) = 94e5d7ea9b0a11c06ebce481ee0cb1a7 -SHA256 (xpi/firefox-i18n-3.6.2/ta-LK.xpi) = 6beb2a0369d3ecc990c033a1c9a25dced083c666669a989c90eeeb32d3a93761 -SIZE 
(xpi/firefox-i18n-3.6.2/ta-LK.xpi) = 149330 -MD5 (xpi/firefox-i18n-3.6.2/ta.xpi) = 6c4ee60e1c8a5c7eefa843f7b98a5db0 -SHA256 (xpi/firefox-i18n-3.6.2/ta.xpi) = 4ceb3b58e1e544994345eb3704115eabb19020419e8da693938c1dcad886c66b -SIZE (xpi/firefox-i18n-3.6.2/ta.xpi) = 136804 -MD5 (xpi/firefox-i18n-3.6.2/te.xpi) = 37dc154cff01eb19c9d835831ac5a4a6 -SHA256 (xpi/firefox-i18n-3.6.2/te.xpi) = 713a8596ca847d4b2f9b7375058f34c500066f5fa7cfc4384865891347c5c6f7 -SIZE (xpi/firefox-i18n-3.6.2/te.xpi) = 147402 -MD5 (xpi/firefox-i18n-3.6.2/th.xpi) = 12351748dbc10d3b618123f9ada1284a -SHA256 (xpi/firefox-i18n-3.6.2/th.xpi) = dfe374f3a7dec81da46e61c15fb7dc3879bcca87c356ed6cd5ded339a5b89095 -SIZE (xpi/firefox-i18n-3.6.2/th.xpi) = 134016 -MD5 (xpi/firefox-i18n-3.6.2/tr.xpi) = 3d391c58fd52a7ce9e8005fcc4fefc09 -SHA256 (xpi/firefox-i18n-3.6.2/tr.xpi) = 47f9564fcf4fdf89ba1bde11e87a334bbd605e0adba38c30e55bd0051812beef -SIZE (xpi/firefox-i18n-3.6.2/tr.xpi) = 125029 -MD5 (xpi/firefox-i18n-3.6.2/uk.xpi) = 3fd942c68feb4018cdad30b1dce321cd -SHA256 (xpi/firefox-i18n-3.6.2/uk.xpi) = e1246c0a024f0787d20ea860062a73ce4d3532d855824f3d78a4e62007f76dda -SIZE (xpi/firefox-i18n-3.6.2/uk.xpi) = 142824 -MD5 (xpi/firefox-i18n-3.6.2/vi.xpi) = d25c99e708652201910b19cf8e007cd3 -SHA256 (xpi/firefox-i18n-3.6.2/vi.xpi) = 7502579cbd898d5e75d64d7c52c958150a6afa395180cf0630c2ae747c12f0bb -SIZE (xpi/firefox-i18n-3.6.2/vi.xpi) = 129056 -MD5 (xpi/firefox-i18n-3.6.2/zh-CN.xpi) = 36163243e98213ecdff881e54945cc4b -SHA256 (xpi/firefox-i18n-3.6.2/zh-CN.xpi) = 047bb0d4ac721fb012c1c4a5aefd368d5080d8cfff475aff78117548c5112954 -SIZE (xpi/firefox-i18n-3.6.2/zh-CN.xpi) = 126695 -MD5 (xpi/firefox-i18n-3.6.2/zh-TW.xpi) = d03068915380d6b4fb6f736b71884435 -SHA256 (xpi/firefox-i18n-3.6.2/zh-TW.xpi) = acc73dad235e877baa9cfcf5b3182448d7dcb729d49cd5ee589e7b30315a6067 -SIZE (xpi/firefox-i18n-3.6.2/zh-TW.xpi) = 128259 +MD5 (xpi/firefox-i18n-3.6.3/af.xpi) = 9f1e7a8f015fa9c6673e7a5fd74655f5 +SHA256 (xpi/firefox-i18n-3.6.3/af.xpi) = 
ca32bb28a1751604f657562f4a4569d514a1fba2d2ea517a7c95ce2c3ecd7afa +SIZE (xpi/firefox-i18n-3.6.3/af.xpi) = 121643 +MD5 (xpi/firefox-i18n-3.6.3/ar.xpi) = 5f6e53b5087e24b91d18ba64c4ffe185 +SHA256 (xpi/firefox-i18n-3.6.3/ar.xpi) = 29bc998e93dc46598412ff1b9dabe104993efefce8e13172b82a9f9c8d815dbf +SIZE (xpi/firefox-i18n-3.6.3/ar.xpi) = 131701 +MD5 (xpi/firefox-i18n-3.6.3/as.xpi) = dfb526adb3247201856e1cbfc55d55b9 +SHA256 (xpi/firefox-i18n-3.6.3/as.xpi) = 28dfa84fbd98f1a49fad4a109519ac96ef6137fa9651a9dcf5745d532c17a9a7 +SIZE (xpi/firefox-i18n-3.6.3/as.xpi) = 134298 +MD5 (xpi/firefox-i18n-3.6.3/be.xpi) = 82b3c8566c2266a9f9e41f9bd00a9a9d +SHA256 (xpi/firefox-i18n-3.6.3/be.xpi) = b362258673a9ddf79453a13cc2ce2bed6509a900e9345b5b185b8733f43fd5e5 +SIZE (xpi/firefox-i18n-3.6.3/be.xpi) = 121938 +MD5 (xpi/firefox-i18n-3.6.3/bg.xpi) = 5c2e2ecaaedde6aedcd9238d036b5026 +SHA256 (xpi/firefox-i18n-3.6.3/bg.xpi) = fc9e6c67db1d794f8cef71f83c0c79fc46dd8fd6059ed88664559a5044bde3be +SIZE (xpi/firefox-i18n-3.6.3/bg.xpi) = 134105 +MD5 (xpi/firefox-i18n-3.6.3/bn-BD.xpi) = d961018ade9bb29edaf6b4d9bf6cc7b6 +SHA256 (xpi/firefox-i18n-3.6.3/bn-BD.xpi) = 6d4dd085b38f14ac52c60967950a5d8eff99343aa5233195ce176b0349887f41 +SIZE (xpi/firefox-i18n-3.6.3/bn-BD.xpi) = 140833 +MD5 (xpi/firefox-i18n-3.6.3/bn-IN.xpi) = 00646001d91a81888d349d1526e10607 +SHA256 (xpi/firefox-i18n-3.6.3/bn-IN.xpi) = 859a6bae81bc886409a186184087f6ac3ab08cb158f6195f85a92eaa10136d6f +SIZE (xpi/firefox-i18n-3.6.3/bn-IN.xpi) = 159580 +MD5 (xpi/firefox-i18n-3.6.3/ca.xpi) = 7ff08e6bb1019a758bd0b4be5a83a805 +SHA256 (xpi/firefox-i18n-3.6.3/ca.xpi) = 2085fd9ccb6262d17b9b56a44a0b097b26ce15844f47583c41efa7e7941886c0 +SIZE (xpi/firefox-i18n-3.6.3/ca.xpi) = 127598 +MD5 (xpi/firefox-i18n-3.6.3/cs.xpi) = d977d0011392ab0ad1bc89731c5294e8 +SHA256 (xpi/firefox-i18n-3.6.3/cs.xpi) = 99505e15f51dedbd15d071d07d065e5126c1073a32bc3d8d889a86329a741337 +SIZE (xpi/firefox-i18n-3.6.3/cs.xpi) = 126390 +MD5 (xpi/firefox-i18n-3.6.3/cy.xpi) = 
ed20f0a63e8bb0e4f06efaf3372ef0ac +SHA256 (xpi/firefox-i18n-3.6.3/cy.xpi) = 402446abcc8d527111410e05746f5075b1c8546d1cf5f1e2b276106f45eb9422 +SIZE (xpi/firefox-i18n-3.6.3/cy.xpi) = 121977 +MD5 (xpi/firefox-i18n-3.6.3/da.xpi) = 7a29d0ca6e1b2aafd43dadb0bdebc7fc +SHA256 (xpi/firefox-i18n-3.6.3/da.xpi) = 071fa099e23d4f8567d1f411e3a47f5369b7c94f9ee4e342ccaa038ad0b616fd +SIZE (xpi/firefox-i18n-3.6.3/da.xpi) = 125229 +MD5 (xpi/firefox-i18n-3.6.3/de.xpi) = 37d7c2f80c50f7081ae30655b9af8723 +SHA256 (xpi/firefox-i18n-3.6.3/de.xpi) = 83c2275e32bec807f877ea889a0b64eae4f68cb2bbde1d4d7ec54b187bc5e3fa +SIZE (xpi/firefox-i18n-3.6.3/de.xpi) = 128584 +MD5 (xpi/firefox-i18n-3.6.3/el.xpi) = 5ae2e84b06c058b1c30742d65ae35cee +SHA256 (xpi/firefox-i18n-3.6.3/el.xpi) = d50bf7587d2ef25eb688a1b7df78fcd22f21970477f7db3b2af0e1351376e243 +SIZE (xpi/firefox-i18n-3.6.3/el.xpi) = 134908 +MD5 (xpi/firefox-i18n-3.6.3/en-GB.xpi) = 76ba151eb259d36ce8fd5c6cc1a23f51 +SHA256 (xpi/firefox-i18n-3.6.3/en-GB.xpi) = b2d5aff06bf25d234b0aaf3b07b267484f0d91ed6a79deeb127bec5761652db5 +SIZE (xpi/firefox-i18n-3.6.3/en-GB.xpi) = 118905 +MD5 (xpi/firefox-i18n-3.6.3/eo.xpi) = fb8a232ac6334c59b59df247fbacd8dc +SHA256 (xpi/firefox-i18n-3.6.3/eo.xpi) = e4dee773569d2b5a54d8038b21b07a6986896c40c70f8f2e822af4815285c708 +SIZE (xpi/firefox-i18n-3.6.3/eo.xpi) = 121549 +MD5 (xpi/firefox-i18n-3.6.3/es-AR.xpi) = c1b577ff734865137e51358388e124a8 +SHA256 (xpi/firefox-i18n-3.6.3/es-AR.xpi) = d21dbe4cc9bd44b364f42699622c8b15923231888e40dbda1913e5981c768971 +SIZE (xpi/firefox-i18n-3.6.3/es-AR.xpi) = 127417 +MD5 (xpi/firefox-i18n-3.6.3/es-CL.xpi) = 542004b1893ee3be31d2aa81eccd1b2d +SHA256 (xpi/firefox-i18n-3.6.3/es-CL.xpi) = 270a02a4c724c5f7bc491482349306e56a69bff2ae07e2fa0614c61cd871da9c +SIZE (xpi/firefox-i18n-3.6.3/es-CL.xpi) = 129098 +MD5 (xpi/firefox-i18n-3.6.3/es-ES.xpi) = 95a40275aa67f8eea18d6681a163db61 +SHA256 (xpi/firefox-i18n-3.6.3/es-ES.xpi) = 0e5c6aadc0115b87f637017af22495d0d83c73e8e33bbc35ed6aa0249d1b5e7b +SIZE 
(xpi/firefox-i18n-3.6.3/es-ES.xpi) = 108457 +MD5 (xpi/firefox-i18n-3.6.3/es-MX.xpi) = 7444e7ada0a3bb07ca9a9b97894c7075 +SHA256 (xpi/firefox-i18n-3.6.3/es-MX.xpi) = b2dfdf8d3cae300971347ea092c348c37c708dd78113121d6939b6c9b63876c4 +SIZE (xpi/firefox-i18n-3.6.3/es-MX.xpi) = 127428 +MD5 (xpi/firefox-i18n-3.6.3/et.xpi) = 5972b1f8158ba24311042eec32103e98 +SHA256 (xpi/firefox-i18n-3.6.3/et.xpi) = 3d485226f963bb81a85733f7c2e6b7996804f408c28b2cc983313a28abafd169 +SIZE (xpi/firefox-i18n-3.6.3/et.xpi) = 126949 +MD5 (xpi/firefox-i18n-3.6.3/eu.xpi) = b3393f5c33af179ca8ad86d98c3c38a9 +SHA256 (xpi/firefox-i18n-3.6.3/eu.xpi) = 8a94576607075ef7374317092a7115e1256e3b5c331e3292e3cbff3519c71d65 +SIZE (xpi/firefox-i18n-3.6.3/eu.xpi) = 121541 +MD5 (xpi/firefox-i18n-3.6.3/fa.xpi) = 33ecf5ebaf0959e7bc7ac684c3594298 +SHA256 (xpi/firefox-i18n-3.6.3/fa.xpi) = f8ecd8e292d3052968bd3061234f559469a59ad79376606ed6fc9a026a47700a +SIZE (xpi/firefox-i18n-3.6.3/fa.xpi) = 138018 +MD5 (xpi/firefox-i18n-3.6.3/fi.xpi) = 166d3ebfcac6c4e86f9f18bf4f2652e4 +SHA256 (xpi/firefox-i18n-3.6.3/fi.xpi) = ff5b76a3026ee2a64ab9b694c5a08a3db0493ae99a33f2a875a8b15ad52b26ea +SIZE (xpi/firefox-i18n-3.6.3/fi.xpi) = 126447 +MD5 (xpi/firefox-i18n-3.6.3/fr.xpi) = 47647b52ed1503d6a29bc2a58f7de996 +SHA256 (xpi/firefox-i18n-3.6.3/fr.xpi) = 74fab75828a8c1e913c7602aeca448961f5a8088f2b6ee173828a0994ddbbb8a +SIZE (xpi/firefox-i18n-3.6.3/fr.xpi) = 128542 +MD5 (xpi/firefox-i18n-3.6.3/fy-NL.xpi) = 5fd9beb2b74ea00165cc42aa73f019ac +SHA256 (xpi/firefox-i18n-3.6.3/fy-NL.xpi) = 2065ffbe1c08f319f70064960a5f08a465bb4c3c484352b8e5ecfffc4bf8eae7 +SIZE (xpi/firefox-i18n-3.6.3/fy-NL.xpi) = 128880 +MD5 (xpi/firefox-i18n-3.6.3/ga-IE.xpi) = 03cf568156201821e8ca9f77b7fd2166 +SHA256 (xpi/firefox-i18n-3.6.3/ga-IE.xpi) = e233387bc4edfa4a2241b3b1c772b59208ffe11266c70d61ca1b69ff90766642 +SIZE (xpi/firefox-i18n-3.6.3/ga-IE.xpi) = 127381 +MD5 (xpi/firefox-i18n-3.6.3/gl.xpi) = 295b524b77e7faeb768025eca9253989 +SHA256 (xpi/firefox-i18n-3.6.3/gl.xpi) = 
6e6538359da4b6070f87ee3b6c4c059cbe2e847ab844ef1ea379bc99f4e408dc +SIZE (xpi/firefox-i18n-3.6.3/gl.xpi) = 124820 +MD5 (xpi/firefox-i18n-3.6.3/gu-IN.xpi) = 7aebe5e8b398ab54444e0a8fb915bbea +SHA256 (xpi/firefox-i18n-3.6.3/gu-IN.xpi) = 68fb8a9dccb96ec1d6e9a6c2881a59822c3114675ae1bcbaeb8d052a21f7faec +SIZE (xpi/firefox-i18n-3.6.3/gu-IN.xpi) = 145912 +MD5 (xpi/firefox-i18n-3.6.3/he.xpi) = c21073d11f5abe89fffd98a4f67449ce +SHA256 (xpi/firefox-i18n-3.6.3/he.xpi) = 4867971272dfb56bf95c2c073c0c04427928270fc2698d8a186572441658861c +SIZE (xpi/firefox-i18n-3.6.3/he.xpi) = 122744 +MD5 (xpi/firefox-i18n-3.6.3/hi-IN.xpi) = c3adf95f031bedbeb1b06a58133b2a1f +SHA256 (xpi/firefox-i18n-3.6.3/hi-IN.xpi) = e1ce7a007d431087da54c9d0f05c96ebf7bf7e593991f1c45e9538e7dc7022e7 +SIZE (xpi/firefox-i18n-3.6.3/hi-IN.xpi) = 141464 +MD5 (xpi/firefox-i18n-3.6.3/hr.xpi) = b45855fd805a81c8d6fff63b3dbddf2e +SHA256 (xpi/firefox-i18n-3.6.3/hr.xpi) = e9104adc49d537d4d72064809186e347d9af4b0bb64031ed43d22be7b6978cc0 +SIZE (xpi/firefox-i18n-3.6.3/hr.xpi) = 127072 +MD5 (xpi/firefox-i18n-3.6.3/hu.xpi) = 20afeceaebcb56cc10089f3054303441 +SHA256 (xpi/firefox-i18n-3.6.3/hu.xpi) = 2c4e83a54088bfff0f1e6fd62b0e93b4d7dfeeb96b74d0d06400b5f056b2879d +SIZE (xpi/firefox-i18n-3.6.3/hu.xpi) = 131573 +MD5 (xpi/firefox-i18n-3.6.3/id.xpi) = 5d9a0e02c7480400e86c90619517a686 +SHA256 (xpi/firefox-i18n-3.6.3/id.xpi) = 29c9a219e3413d35feffae93a3dc00d8734356ff2d5c73049deec5b719051195 +SIZE (xpi/firefox-i18n-3.6.3/id.xpi) = 103923 +MD5 (xpi/firefox-i18n-3.6.3/is.xpi) = d94b56636da5804b4b92103029f7ab6a +SHA256 (xpi/firefox-i18n-3.6.3/is.xpi) = 81287e28183540a78a9541415f4d5ce6cd346d5d2d58ee97e6746b95e74f8ee1 +SIZE (xpi/firefox-i18n-3.6.3/is.xpi) = 123008 +MD5 (xpi/firefox-i18n-3.6.3/it.xpi) = 55bf6baa6b00e395c9c899f513fa5fde +SHA256 (xpi/firefox-i18n-3.6.3/it.xpi) = 27159a29b4b49827d90821846f521856131ae594256b53f5db81c91fd8de4c2e +SIZE (xpi/firefox-i18n-3.6.3/it.xpi) = 107727 +MD5 (xpi/firefox-i18n-3.6.3/ja.xpi) = 
9d4e0d5e0e8396b1c6c60305d4a57ef8 +SHA256 (xpi/firefox-i18n-3.6.3/ja.xpi) = f9e4f10b41d2d84d735c0e0f54f73a80801ca8f7025f3c7f6eb584a801cd215f +SIZE (xpi/firefox-i18n-3.6.3/ja.xpi) = 136587 +MD5 (xpi/firefox-i18n-3.6.3/ka.xpi) = b56e5d155b16b450801b069134d54171 +SHA256 (xpi/firefox-i18n-3.6.3/ka.xpi) = 05f00be54b578c28811f65e4de859a272451c298cd37adaff60dc4739a794055 +SIZE (xpi/firefox-i18n-3.6.3/ka.xpi) = 154154 +MD5 (xpi/firefox-i18n-3.6.3/kk.xpi) = edf4e7d9f79138c48071c544e367efd7 +SHA256 (xpi/firefox-i18n-3.6.3/kk.xpi) = 70e93bd2fa45c392fe5d73afddbf9bfda8dfa8eeb9da7e1f4ae82de75475c056 +SIZE (xpi/firefox-i18n-3.6.3/kk.xpi) = 137720 +MD5 (xpi/firefox-i18n-3.6.3/kn.xpi) = b64aa0463db089f8755ba9e4e477e9ef +SHA256 (xpi/firefox-i18n-3.6.3/kn.xpi) = 92feb2e513e6e1b1809eeafccea5a27ae87c86039ffcf3fd9442c6595159843c +SIZE (xpi/firefox-i18n-3.6.3/kn.xpi) = 148392 +MD5 (xpi/firefox-i18n-3.6.3/ko.xpi) = 4b623a76f0e2f51884d3a1d4399f0028 +SHA256 (xpi/firefox-i18n-3.6.3/ko.xpi) = 848a7c91c4c7934580f17c23a7164bf1def79e25da77cb6662ad2a7fb95c3877 +SIZE (xpi/firefox-i18n-3.6.3/ko.xpi) = 115398 +MD5 (xpi/firefox-i18n-3.6.3/ku.xpi) = 2f097580a7f7680933a5b235946e735a +SHA256 (xpi/firefox-i18n-3.6.3/ku.xpi) = 5e568bd8576e44b03393d768ccf575e27935de699aeb6d07d88baab1a9878045 +SIZE (xpi/firefox-i18n-3.6.3/ku.xpi) = 130363 +MD5 (xpi/firefox-i18n-3.6.3/lt.xpi) = d555eec00dd9c9436974daa2d46cda79 +SHA256 (xpi/firefox-i18n-3.6.3/lt.xpi) = 71bd91e6bdf45ea107fefb948bf9ded8aecc1c263f0e8bb01ce3b7e0a3f2a6ee +SIZE (xpi/firefox-i18n-3.6.3/lt.xpi) = 139307 +MD5 (xpi/firefox-i18n-3.6.3/lv.xpi) = b2754566f884f1d615fe1129524ae1ff +SHA256 (xpi/firefox-i18n-3.6.3/lv.xpi) = b1123b1a5e7f09ec78ad7b3c373d30bbad9950f6a8f851041064d6c8fd3fce2b +SIZE (xpi/firefox-i18n-3.6.3/lv.xpi) = 124804 +MD5 (xpi/firefox-i18n-3.6.3/mk.xpi) = 7fe0e89f7e3fb7c75d4ace3ee1f17d79 +SHA256 (xpi/firefox-i18n-3.6.3/mk.xpi) = 599fab5d14e05d8bf9fdc0046dc03224dcfa07b689282a2adc16ec1c17142c08 +SIZE (xpi/firefox-i18n-3.6.3/mk.xpi) = 127944 +MD5 
(xpi/firefox-i18n-3.6.3/ml.xpi) = 750703cb440d417c8fdf2cce332df4e9 +SHA256 (xpi/firefox-i18n-3.6.3/ml.xpi) = 8104081fe1457b90bd8a5b6bd55033b4070368c3e3f80ff480a4a917c80a1bfa +SIZE (xpi/firefox-i18n-3.6.3/ml.xpi) = 151842 +MD5 (xpi/firefox-i18n-3.6.3/mr.xpi) = 63458b4fdb0fe795059ed7a1f1d7e7ca +SHA256 (xpi/firefox-i18n-3.6.3/mr.xpi) = 75df7a30c373ed45f89c0deeb34405a9a44859fafa450c934a99c7385353bf7e +SIZE (xpi/firefox-i18n-3.6.3/mr.xpi) = 140489 +MD5 (xpi/firefox-i18n-3.6.3/nb-NO.xpi) = d62276990caf1add8607bf2a13f80425 +SHA256 (xpi/firefox-i18n-3.6.3/nb-NO.xpi) = b3437fdf88ebb6fdb24d40c43061ce2d917c1031e79511cc1512ff923cee1f55 +SIZE (xpi/firefox-i18n-3.6.3/nb-NO.xpi) = 123730 +MD5 (xpi/firefox-i18n-3.6.3/nl.xpi) = 2e28e9d322306d4cd62ad0d6edde384b +SHA256 (xpi/firefox-i18n-3.6.3/nl.xpi) = de0f8840f0482d07077e3bf2a5b73673759458f727fe3e47cf0e2d8e33754613 +SIZE (xpi/firefox-i18n-3.6.3/nl.xpi) = 126099 +MD5 (xpi/firefox-i18n-3.6.3/nn-NO.xpi) = 545762e4e6a20cf9137ceffaea2de293 +SHA256 (xpi/firefox-i18n-3.6.3/nn-NO.xpi) = 29b9e5fe8888ce329eef969575559dd6b05e0f41d3ad21eb9f90c12e6a4313b6 +SIZE (xpi/firefox-i18n-3.6.3/nn-NO.xpi) = 125130 +MD5 (xpi/firefox-i18n-3.6.3/oc.xpi) = db28c06c28760160fa54f55944bf19c0 +SHA256 (xpi/firefox-i18n-3.6.3/oc.xpi) = b94ff51b71d6f7da4e9cf999686a9d1989cebb4aaf46d64d58a3ee13bb3d521e +SIZE (xpi/firefox-i18n-3.6.3/oc.xpi) = 108342 +MD5 (xpi/firefox-i18n-3.6.3/or.xpi) = 86c940e7b554564b559421dd44ee6abd +SHA256 (xpi/firefox-i18n-3.6.3/or.xpi) = beb0a9b4df191257af5bb475d4d7a1bb39caf2136e0c2383af8a7c6b96b7fec8 +SIZE (xpi/firefox-i18n-3.6.3/or.xpi) = 142167 +MD5 (xpi/firefox-i18n-3.6.3/pa-IN.xpi) = b1f5a0236e87fc9f245a551e217ad13f +SHA256 (xpi/firefox-i18n-3.6.3/pa-IN.xpi) = 4e720c212cdf9e36452513c580d8b53a263247d8b67d41b71339acceaa25239b +SIZE (xpi/firefox-i18n-3.6.3/pa-IN.xpi) = 141676 +MD5 (xpi/firefox-i18n-3.6.3/pl.xpi) = 7830e4e59a54dd293cd485cbb5ca034a +SHA256 (xpi/firefox-i18n-3.6.3/pl.xpi) = 
c1d8486a16f96316cccd5a9d8f3a83e6ab2196cd902083f0522d6150390e5fa5 +SIZE (xpi/firefox-i18n-3.6.3/pl.xpi) = 130640 +MD5 (xpi/firefox-i18n-3.6.3/pt-BR.xpi) = c208c9d466dc3329be7f213dca022ad2 +SHA256 (xpi/firefox-i18n-3.6.3/pt-BR.xpi) = 3a270939c13db8f9c88aab341801a525df7dde8f306c05926a6a14860687929b +SIZE (xpi/firefox-i18n-3.6.3/pt-BR.xpi) = 124975 +MD5 (xpi/firefox-i18n-3.6.3/pt-PT.xpi) = 6ff5989286f33b06ce689a017d107f41 +SHA256 (xpi/firefox-i18n-3.6.3/pt-PT.xpi) = 269750bc5ed13a4709620b0200cdec59c74a390cd78ae0d58f0d39d8215e9f9e +SIZE (xpi/firefox-i18n-3.6.3/pt-PT.xpi) = 124948 +MD5 (xpi/firefox-i18n-3.6.3/rm.xpi) = 212f3cbc2b1d3c7c3d5c1919cb7cdea0 +SHA256 (xpi/firefox-i18n-3.6.3/rm.xpi) = 85cd2158b4f4a884dee01b68613aca0d065a6cb43378758c771734d0d09c1c24 +SIZE (xpi/firefox-i18n-3.6.3/rm.xpi) = 127029 +MD5 (xpi/firefox-i18n-3.6.3/ro.xpi) = 24e50bb4c20d40f199363262935f7d03 +SHA256 (xpi/firefox-i18n-3.6.3/ro.xpi) = b37d78a5034be7369a6fe001c9772f56419132f1bd35b8334701a5f4b46bee19 +SIZE (xpi/firefox-i18n-3.6.3/ro.xpi) = 128706 +MD5 (xpi/firefox-i18n-3.6.3/ru.xpi) = e85f86833efffa0ebbb67f66c55853a0 +SHA256 (xpi/firefox-i18n-3.6.3/ru.xpi) = 4058ae8e4401d4ae45f678fbd010ef18583ab5b7eb0db22312f1c10e260996b7 +SIZE (xpi/firefox-i18n-3.6.3/ru.xpi) = 124611 +MD5 (xpi/firefox-i18n-3.6.3/si.xpi) = 00ad3e7ff392995e0e578b7210ed0eac +SHA256 (xpi/firefox-i18n-3.6.3/si.xpi) = 9c4e730517c8d4cc3bc98da80be8193bc54f1c3d2c666a95a2208e88a3c3c973 +SIZE (xpi/firefox-i18n-3.6.3/si.xpi) = 142225 +MD5 (xpi/firefox-i18n-3.6.3/sk.xpi) = 4a80b1e2c7d0a0c253379d466a5dea32 +SHA256 (xpi/firefox-i18n-3.6.3/sk.xpi) = 5040fa868adc970ca3aa036172598aba0ae74b06bb48b3bbf6e7176701ddaaf0 +SIZE (xpi/firefox-i18n-3.6.3/sk.xpi) = 130723 +MD5 (xpi/firefox-i18n-3.6.3/sl.xpi) = 82708e7ca784e894531a76e4b760cd63 +SHA256 (xpi/firefox-i18n-3.6.3/sl.xpi) = eb92aa2aa370b31c96df276a84f92aace1fa94c2c2e4c32eea74620981e64c81 +SIZE (xpi/firefox-i18n-3.6.3/sl.xpi) = 123929 +MD5 (xpi/firefox-i18n-3.6.3/sq.xpi) = 
7bad1d0dd6022ddecc0c3a9a7e7065dd +SHA256 (xpi/firefox-i18n-3.6.3/sq.xpi) = 9897289f002025dc082f9dbbc2f13878aec6c765eff28a57ff65fd8a6c2e8ed6 +SIZE (xpi/firefox-i18n-3.6.3/sq.xpi) = 120917 +MD5 (xpi/firefox-i18n-3.6.3/sr.xpi) = 6c53cf2ff925a321b6d6076fe5b40fcf +SHA256 (xpi/firefox-i18n-3.6.3/sr.xpi) = 08aa2f5230ce557ea8abd56bc178bb8e8d776f5724e341f2d494ae578f42ffe6 +SIZE (xpi/firefox-i18n-3.6.3/sr.xpi) = 136898 +MD5 (xpi/firefox-i18n-3.6.3/sv-SE.xpi) = 210407c722919c59747a4c599e27d0cc +SHA256 (xpi/firefox-i18n-3.6.3/sv-SE.xpi) = ab5dc44a7f5f908ffc8ba622a222b892a98b9cea5abc6fb8ef02b66796c4c28b +SIZE (xpi/firefox-i18n-3.6.3/sv-SE.xpi) = 128080 +MD5 (xpi/firefox-i18n-3.6.3/ta.xpi) = 7f6d205ee070313768588ccb299caff9 +SHA256 (xpi/firefox-i18n-3.6.3/ta.xpi) = 6423a6287aec9f0c2a44d4e510f8ba24318fb3d3ecb9eab80471ea5a352ee26e +SIZE (xpi/firefox-i18n-3.6.3/ta.xpi) = 136772 +MD5 (xpi/firefox-i18n-3.6.3/te.xpi) = 5bb3d9c0a70c8fb1791251578e646adb +SHA256 (xpi/firefox-i18n-3.6.3/te.xpi) = ad0b5766f21bde9e8a091266c3c905c3e2d89cc0028db808a2abb79589d57e0c +SIZE (xpi/firefox-i18n-3.6.3/te.xpi) = 147410 +MD5 (xpi/firefox-i18n-3.6.3/th.xpi) = 72ac1e0a9f590febc6486b2e507d0b79 +SHA256 (xpi/firefox-i18n-3.6.3/th.xpi) = 2ae26f1a52414c5afffc26f202afbafeeb5597a0038d56898f85f1ad7abb548a +SIZE (xpi/firefox-i18n-3.6.3/th.xpi) = 133997 +MD5 (xpi/firefox-i18n-3.6.3/tr.xpi) = b0cb9dea29cf0198cb768b661e536331 +SHA256 (xpi/firefox-i18n-3.6.3/tr.xpi) = 194996a06fe19901d543f9f014a28b220f8a771deaac6706e08e82cfbbaf7310 +SIZE (xpi/firefox-i18n-3.6.3/tr.xpi) = 125048 +MD5 (xpi/firefox-i18n-3.6.3/uk.xpi) = 328a06b4ce0ef75d19a382d2f95fefb7 +SHA256 (xpi/firefox-i18n-3.6.3/uk.xpi) = 24b10df39eb8ffcaffd747dc7f1009131581ae685d74ebcd74749aabdfb6bd81 +SIZE (xpi/firefox-i18n-3.6.3/uk.xpi) = 142861 +MD5 (xpi/firefox-i18n-3.6.3/vi.xpi) = c6e7fd3a4f289269298f1a447d3c6924 +SHA256 (xpi/firefox-i18n-3.6.3/vi.xpi) = 36b68aa047f2a7c9e40bae09d8351bf161208913407be39581dd421d52783ba2 +SIZE (xpi/firefox-i18n-3.6.3/vi.xpi) = 
129025 +MD5 (xpi/firefox-i18n-3.6.3/zh-CN.xpi) = e79500e81decbe294de6ad650416e7cd +SHA256 (xpi/firefox-i18n-3.6.3/zh-CN.xpi) = 3946ad88851e19ee26a2828949c1eac611dc2bb6926ef5224d67edfaf29a63d8 +SIZE (xpi/firefox-i18n-3.6.3/zh-CN.xpi) = 126678 +MD5 (xpi/firefox-i18n-3.6.3/zh-TW.xpi) = 1836f39bf72e9d10668c2a9e6668bf18 +SHA256 (xpi/firefox-i18n-3.6.3/zh-TW.xpi) = 5ea447d1511feb269ae1b46dd14919472d37301cc98208c0c49979fdaa755920 +SIZE (xpi/firefox-i18n-3.6.3/zh-TW.xpi) = 128220 Modified: trunk/www/firefox/Makefile ============================================================================== --- trunk/www/firefox/Makefile Tue Mar 30 19:13:01 2010 (r300) +++ trunk/www/firefox/Makefile Tue Apr 6 07:24:41 2010 (r301) @@ -6,7 +6,7 @@ # PORTNAME= firefox -DISTVERSION= 3.6.2 +DISTVERSION= 3.6.3 PORTEPOCH= 1 CATEGORIES= www ipv6 MASTER_SITES= ${MASTER_SITE_MOZILLA} Modified: trunk/www/firefox/distinfo ============================================================================== --- trunk/www/firefox/distinfo Tue Mar 30 19:13:01 2010 (r300) +++ trunk/www/firefox/distinfo Tue Apr 6 07:24:41 2010 (r301) @@ -1,3 +1,3 @@ -MD5 (firefox-3.6.2.source.tar.bz2) = a1972a2216ac7139b92b7378a328ec93 -SHA256 (firefox-3.6.2.source.tar.bz2) = 0f8398efabb14da83e14c7fbabdff99a619752cadb43348b27adf302382046f0 -SIZE (firefox-3.6.2.source.tar.bz2) = 48919538 +MD5 (firefox-3.6.3.source.tar.bz2) = 5e4541ab23084b7b14ba228291ce2688 +SHA256 (firefox-3.6.3.source.tar.bz2) = 35a0b35d79baa502b6c5781ed3e9c4b5eb623bdaf1940460dcf465192848a2d0 +SIZE (firefox-3.6.3.source.tar.bz2) = 48633061 From owner-freebsd-gecko@FreeBSD.ORG Tue Apr 6 08:19:57 2010 Return-Path: Delivered-To: freebsd-gecko@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9682D106566B for ; Tue, 6 Apr 2010 08:19:57 +0000 (UTC) (envelope-from svn-freebsd-gecko@chruetertee.ch) Received: from trillian.chruetertee.ch (trillian.chruetertee.ch [217.150.245.56]) by 
mx1.freebsd.org (Postfix) with ESMTP id 41F908FC1A for ; Tue, 6 Apr 2010 08:19:56 +0000 (UTC) Received: from trillian.chruetertee.ch (trillian [217.150.245.56]) by trillian.chruetertee.ch (8.14.3/8.14.3) with ESMTP id o368Juk6039306 for ; Tue, 6 Apr 2010 08:19:56 GMT (envelope-from svn-freebsd-gecko@chruetertee.ch) Received: (from www@localhost) by trillian.chruetertee.ch (8.14.3/8.14.3/Submit) id o368Jp7u037852 for freebsd-gecko@freebsd.org; Tue, 6 Apr 2010 08:19:51 GMT (envelope-from svn-freebsd-gecko@chruetertee.ch) Date: Tue, 6 Apr 2010 08:19:51 GMT Message-Id: <201004060819.o368Jp7u037852@trillian.chruetertee.ch> X-Authentication-Warning: trillian.chruetertee.ch: www set sender to svn-freebsd-gecko@chruetertee.ch using -f From: svn-freebsd-gecko@chruetertee.ch To: freebsd-gecko@freebsd.org MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Subject: [SVN-Commit] r302 - branches/experimental/www/firefox-i18n trunk/www/firefox-i18n X-BeenThere: freebsd-gecko@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: freebsd-gecko@freebsd.org List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 06 Apr 2010 08:19:57 -0000 Author: beat Date: Tue Apr 6 08:19:50 2010 New Revision: 302 Log: - Sync with ports: Chase tl to ta-LK renaming Modified: branches/experimental/www/firefox-i18n/Makefile branches/experimental/www/firefox-i18n/Makefile.lang branches/experimental/www/firefox-i18n/Makefile.option branches/experimental/www/firefox-i18n/distinfo trunk/www/firefox-i18n/Makefile trunk/www/firefox-i18n/Makefile.lang trunk/www/firefox-i18n/Makefile.option trunk/www/firefox-i18n/distinfo Modified: branches/experimental/www/firefox-i18n/Makefile ============================================================================== --- branches/experimental/www/firefox-i18n/Makefile Tue Apr 6 07:24:41 2010 (r301) +++ 
branches/experimental/www/firefox-i18n/Makefile Tue Apr 6 08:19:50 2010 (r302) @@ -2,7 +2,7 @@ # Date created: 28 May 2006 # Whom: Andrew Pantyukhin # -# $FreeBSD: ports/www/firefox-i18n/Makefile,v 1.35 2010/02/08 17:43:31 beat Exp $ +# $FreeBSD: ports/www/firefox-i18n/Makefile,v 1.37 2010/04/06 08:14:02 beat Exp $ # PORTNAME= firefox-i18n Modified: branches/experimental/www/firefox-i18n/Makefile.lang ============================================================================== --- branches/experimental/www/firefox-i18n/Makefile.lang Tue Apr 6 07:24:41 2010 (r301) +++ branches/experimental/www/firefox-i18n/Makefile.lang Tue Apr 6 08:19:50 2010 (r302) @@ -2,7 +2,7 @@ # Date created: 30 Nov 2008 # Whom: Koji Yokota # -# $FreeBSD: ports/www/firefox-i18n/Makefile.lang,v 1.1 2010/02/08 17:43:32 beat Exp $ +# $FreeBSD: ports/www/firefox-i18n/Makefile.lang,v 1.2 2010/04/06 08:14:02 beat Exp $ # # Currently, the following 74 languages are supported. # @@ -212,6 +212,9 @@ .ifndef WITHOUT_LANG_TA FIREFOX_I18N+= ta .endif +.ifndef WITHOUT_LANG_LK +FIREFOX_I18N+= ta-LK +.endif .ifndef WITHOUT_LANG_TE FIREFOX_I18N+= te .endif Modified: branches/experimental/www/firefox-i18n/Makefile.option ============================================================================== --- branches/experimental/www/firefox-i18n/Makefile.option Tue Apr 6 07:24:41 2010 (r301) +++ branches/experimental/www/firefox-i18n/Makefile.option Tue Apr 6 08:19:50 2010 (r302) @@ -2,7 +2,7 @@ # Date created: 30 Nov 2008 # Whom: Koji Yokota # -# $FreeBSD: ports/www/firefox-i18n/Makefile.option,v 1.1 2010/02/08 17:43:32 beat Exp $ +# $FreeBSD: ports/www/firefox-i18n/Makefile.option,v 1.2 2010/04/06 08:14:02 beat Exp $ # OPTIONS= LANG_AF "Africaans" off \ LANG_SQ "Albanian" off \ @@ -71,7 +71,7 @@ LANG_SV "Swedish" off \ LANG_TW "Taiwanese" off \ LANG_TA "Tamil" off \ - LANG_TL "Tamil (Sri Lanka)" off \ + LANG_LK "Tamil (Sri Lanka)" off \ LANG_TE "Telugu" on \ LANG_TH "Thai" off \ LANG_TR "Turkish" off \ 
Modified: branches/experimental/www/firefox-i18n/distinfo
==============================================================================
--- branches/experimental/www/firefox-i18n/distinfo	Tue Apr 6 07:24:41 2010	(r301)
+++ branches/experimental/www/firefox-i18n/distinfo	Tue Apr 6 08:19:50 2010	(r302)
@@ -190,6 +190,9 @@
 MD5 (xpi/firefox-i18n-3.6.3/sv-SE.xpi) = 210407c722919c59747a4c599e27d0cc
 SHA256 (xpi/firefox-i18n-3.6.3/sv-SE.xpi) = ab5dc44a7f5f908ffc8ba622a222b892a98b9cea5abc6fb8ef02b66796c4c28b
 SIZE (xpi/firefox-i18n-3.6.3/sv-SE.xpi) = 128080
+MD5 (xpi/firefox-i18n-3.6.3/ta-LK.xpi) = 7c09b11224ac7fbfbc599e102eb3e444
+SHA256 (xpi/firefox-i18n-3.6.3/ta-LK.xpi) = a867049d41fb0deeda6cdc516246e612591385dc8c7e4fffb12597d10e56dd56
+SIZE (xpi/firefox-i18n-3.6.3/ta-LK.xpi) = 149313
 MD5 (xpi/firefox-i18n-3.6.3/ta.xpi) = 7f6d205ee070313768588ccb299caff9
 SHA256 (xpi/firefox-i18n-3.6.3/ta.xpi) = 6423a6287aec9f0c2a44d4e510f8ba24318fb3d3ecb9eab80471ea5a352ee26e
 SIZE (xpi/firefox-i18n-3.6.3/ta.xpi) = 136772

Modified: trunk/www/firefox-i18n/Makefile
==============================================================================
--- trunk/www/firefox-i18n/Makefile	Tue Apr 6 07:24:41 2010	(r301)
+++ trunk/www/firefox-i18n/Makefile	Tue Apr 6 08:19:50 2010	(r302)
@@ -2,7 +2,7 @@
 # Date created: 28 May 2006
 # Whom: Andrew Pantyukhin
 #
-# $FreeBSD: ports/www/firefox-i18n/Makefile,v 1.35 2010/02/08 17:43:31 beat Exp $
+# $FreeBSD: ports/www/firefox-i18n/Makefile,v 1.37 2010/04/06 08:14:02 beat Exp $
 #
 PORTNAME= firefox-i18n

Modified: trunk/www/firefox-i18n/Makefile.lang
==============================================================================
--- trunk/www/firefox-i18n/Makefile.lang	Tue Apr 6 07:24:41 2010	(r301)
+++ trunk/www/firefox-i18n/Makefile.lang	Tue Apr 6 08:19:50 2010	(r302)
@@ -2,7 +2,7 @@
 # Date created: 30 Nov 2008
 # Whom: Koji Yokota
 #
-# $FreeBSD: ports/www/firefox-i18n/Makefile.lang,v 1.1 2010/02/08 17:43:32 beat Exp $
+# $FreeBSD: ports/www/firefox-i18n/Makefile.lang,v 1.2 2010/04/06 08:14:02 beat Exp $
 #
 # Currently, the following 74 languages are supported.
 #
@@ -209,12 +209,12 @@
 .ifndef WITHOUT_LANG_SV
 FIREFOX_I18N+= sv-SE
 .endif
-.ifndef WITHOUT_LANG_TL
-FIREFOX_I18N+= tl
-.endif
 .ifndef WITHOUT_LANG_TA
 FIREFOX_I18N+= ta
 .endif
+.ifndef WITHOUT_LANG_LK
+FIREFOX_I18N+= ta-LK
+.endif
 .ifndef WITHOUT_LANG_TE
 FIREFOX_I18N+= te
 .endif

Modified: trunk/www/firefox-i18n/Makefile.option
==============================================================================
--- trunk/www/firefox-i18n/Makefile.option	Tue Apr 6 07:24:41 2010	(r301)
+++ trunk/www/firefox-i18n/Makefile.option	Tue Apr 6 08:19:50 2010	(r302)
@@ -2,7 +2,7 @@
 # Date created: 30 Nov 2008
 # Whom: Koji Yokota
 #
-# $FreeBSD: ports/www/firefox-i18n/Makefile.option,v 1.1 2010/02/08 17:43:32 beat Exp $
+# $FreeBSD: ports/www/firefox-i18n/Makefile.option,v 1.2 2010/04/06 08:14:02 beat Exp $
 #
 OPTIONS= LANG_AF "Africaans" off \
 	LANG_SQ "Albanian" off \
@@ -71,7 +71,7 @@
 	LANG_SV "Swedish" off \
 	LANG_TW "Taiwanese" off \
 	LANG_TA "Tamil" off \
-	LANG_TL "Tamil (Sri Lanka)" off \
+	LANG_LK "Tamil (Sri Lanka)" off \
 	LANG_TE "Telugu" on \
 	LANG_TH "Thai" off \
 	LANG_TR "Turkish" off \

Modified: trunk/www/firefox-i18n/distinfo
==============================================================================
--- trunk/www/firefox-i18n/distinfo	Tue Apr 6 07:24:41 2010	(r301)
+++ trunk/www/firefox-i18n/distinfo	Tue Apr 6 08:19:50 2010	(r302)
@@ -190,6 +190,9 @@
 MD5 (xpi/firefox-i18n-3.6.3/sv-SE.xpi) = 210407c722919c59747a4c599e27d0cc
 SHA256 (xpi/firefox-i18n-3.6.3/sv-SE.xpi) = ab5dc44a7f5f908ffc8ba622a222b892a98b9cea5abc6fb8ef02b66796c4c28b
 SIZE (xpi/firefox-i18n-3.6.3/sv-SE.xpi) = 128080
+MD5 (xpi/firefox-i18n-3.6.3/ta-LK.xpi) = 7c09b11224ac7fbfbc599e102eb3e444
+SHA256 (xpi/firefox-i18n-3.6.3/ta-LK.xpi) = a867049d41fb0deeda6cdc516246e612591385dc8c7e4fffb12597d10e56dd56
+SIZE (xpi/firefox-i18n-3.6.3/ta-LK.xpi) = 149313
 MD5 (xpi/firefox-i18n-3.6.3/ta.xpi) = 7f6d205ee070313768588ccb299caff9
 SHA256 (xpi/firefox-i18n-3.6.3/ta.xpi) = 6423a6287aec9f0c2a44d4e510f8ba24318fb3d3ecb9eab80471ea5a352ee26e
 SIZE (xpi/firefox-i18n-3.6.3/ta.xpi) = 136772

From owner-freebsd-gecko@FreeBSD.ORG Wed Apr 7 08:00:26 2010
Return-Path:
Delivered-To: gecko@hub.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 924311065672; Wed, 7 Apr 2010 08:00:26 +0000 (UTC) (envelope-from edwin@FreeBSD.org)
Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 697D68FC16; Wed, 7 Apr 2010 08:00:26 +0000 (UTC)
Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o3780Q1i029059; Wed, 7 Apr 2010 08:00:26 GMT (envelope-from edwin@freefall.freebsd.org)
Received: (from edwin@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o3780QKZ029048; Wed, 7 Apr 2010 08:00:26 GMT (envelope-from edwin)
Date: Wed, 7 Apr 2010 08:00:26 GMT
Message-Id: <201004070800.o3780QKZ029048@freefall.freebsd.org>
To: edwin@FreeBSD.org, freebsd-ports-bugs@FreeBSD.org, gecko@FreeBSD.org
From: edwin@FreeBSD.org
Cc:
Subject: Re: ports/145450: [ERROR] cannot compile www/firefox version 3.6.3
X-BeenThere: freebsd-gecko@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Gecko Rendering Engine issues
X-List-Received-Date: Wed, 07 Apr 2010 08:00:26 -0000

Synopsis: [ERROR] cannot compile www/firefox version 3.6.3

Responsible-Changed-From-To: freebsd-ports-bugs->gecko
Responsible-Changed-By: edwin
Responsible-Changed-When: Wed Apr 7 08:00:24 UTC 2010
Responsible-Changed-Why: Over to maintainer (via the GNATS Auto Assign Tool)

http://www.freebsd.org/cgi/query-pr.cgi?pr=145450

From owner-freebsd-gecko@FreeBSD.ORG Wed Apr 7 08:20:06 2010
Return-Path:
Delivered-To:
gecko@hub.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6C5C0106564A for ; Wed, 7 Apr 2010 08:20:06 +0000 (UTC) (envelope-from gnats@FreeBSD.org)
Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 429628FC14 for ; Wed, 7 Apr 2010 08:20:06 +0000 (UTC)
Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o378K5JM048196 for ; Wed, 7 Apr 2010 08:20:05 GMT (envelope-from gnats@freefall.freebsd.org)
Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o378K5g6048195; Wed, 7 Apr 2010 08:20:05 GMT (envelope-from gnats)
Date: Wed, 7 Apr 2010 08:20:05 GMT
Message-Id: <201004070820.o378K5g6048195@freefall.freebsd.org>
To: gecko@FreeBSD.org
From: Florian Smeets
Cc:
Subject: Re: ports/145450: [ERROR] cannot compile www/firefox version 3.6.3
X-BeenThere: freebsd-gecko@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
Reply-To: Florian Smeets
List-Id: Gecko Rendering Engine issues
X-List-Received-Date: Wed, 07 Apr 2010 08:20:06 -0000

The following reply was made to PR ports/145450; it has been noted by GNATS.

From: Florian Smeets
To: bug-followup@FreeBSD.org, andy.lavr@gmail.com
Cc:
Subject: Re: ports/145450: [ERROR] cannot compile www/firefox version 3.6.3
Date: Wed, 07 Apr 2010 10:16:51 +0200

Hi,

show me the output of python -V and pkg_info|grep python?

Thanks,
Florian

From owner-freebsd-gecko@FreeBSD.ORG Wed Apr 7 08:26:55 2010
Return-Path:
Delivered-To: gecko@FreeBSD.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0C7351065740 for ; Wed, 7 Apr 2010 08:26:55 +0000 (UTC) (envelope-from flo@smeets.im)
Received: from mail.solomo.de (mail.solomo.de [85.214.124.163]) by mx1.freebsd.org (Postfix) with ESMTP id BBB658FC18 for ; Wed, 7 Apr 2010 08:26:54 +0000 (UTC)
Received: from mail.solomo.de (localhost [127.0.0.1]) by mail.solomo.de (Postfix) with ESMTP id 9B15561CAD; Wed, 7 Apr 2010 10:26:53 +0200 (CEST)
X-Virus-Scanned: amavisd-new at vistream.de
Received: from mail.solomo.de ([127.0.0.1]) by mail.solomo.de (mail.solomo.de [127.0.0.1]) (amavisd-new, port 10024) with LMTP id cEFqax4woqGx; Wed, 7 Apr 2010 10:26:51 +0200 (CEST)
Received: from nibbler.vistream.local (relay3.vistream.de [87.139.10.28]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by mail.solomo.de (Postfix) with ESMTPSA id 27BD761C9B; Wed, 7 Apr 2010 10:26:51 +0200 (CEST)
Message-ID: <4BBC41CA.2030001@smeets.im>
Date: Wed, 07 Apr 2010 10:26:50 +0200
From: Florian Smeets
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.1.9) Gecko/20100317 Thunderbird/3.0.4
MIME-Version: 1.0
To: andy.lavr@gmail.com
References: <201004070820.o378K5g6048195@freefall.freebsd.org>
In-Reply-To: <201004070820.o378K5g6048195@freefall.freebsd.org>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Cc: gecko@FreeBSD.org
Subject: Re: ports/145450: [ERROR] cannot compile www/firefox version 3.6.3
X-BeenThere: freebsd-gecko@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Gecko Rendering Engine issues
X-List-Received-Date: Wed, 07 Apr 2010 08:26:55 -0000

On 4/7/10 10:20 AM, Florian Smeets wrote:
> The following reply was made to PR ports/145450; it has been noted by GNATS.
>
> From: Florian Smeets
> To: bug-followup@FreeBSD.org, andy.lavr@gmail.com
> Cc:
> Subject: Re: ports/145450: [ERROR] cannot compile www/firefox version 3.6.3
> Date: Wed, 07 Apr 2010 10:16:51 +0200
>
> Hi,
>
> show me the output of python -V and pkg_info|grep python?
>

Sorry, there is a "can you please" missing. I did not want to sound as harsh as that may have seemed.

So, can you please show me the output? :-)

Thanks,
Florian

From owner-freebsd-gecko@FreeBSD.ORG Wed Apr 7 08:40:06 2010
Return-Path:
Delivered-To: gecko@hub.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 27AB2106564A for ; Wed, 7 Apr 2010 08:40:06 +0000 (UTC) (envelope-from gnats@FreeBSD.org)
Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 16E8F8FC08 for ; Wed, 7 Apr 2010 08:40:06 +0000 (UTC)
Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o378e5d6071304 for ; Wed, 7 Apr 2010 08:40:05 GMT (envelope-from gnats@freefall.freebsd.org)
Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o378e5Bo071301; Wed, 7 Apr 2010 08:40:05 GMT (envelope-from gnats)
Date: Wed, 7 Apr 2010 08:40:05 GMT
Message-Id: <201004070840.o378e5Bo071301@freefall.freebsd.org>
To: gecko@FreeBSD.org
From: "Andrei V. Lavreniyuk"
Cc:
Subject: Re: ports/145450: [ERROR] cannot compile www/firefox version 3.6.3
X-BeenThere: freebsd-gecko@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
Reply-To: "Andrei V. Lavreniyuk"
List-Id: Gecko Rendering Engine issues
X-List-Received-Date: Wed, 07 Apr 2010 08:40:06 -0000

The following reply was made to PR ports/145450; it has been noted by GNATS.

From: "Andrei V. Lavreniyuk"
To: Florian Smeets
Cc: bug-followup@FreeBSD.org
Subject: Re: ports/145450: [ERROR] cannot compile www/firefox version 3.6.3
Date: Wed, 07 Apr 2010 11:31:06 +0300

07.04.2010 11:16, Florian Smeets wrote:
> show me the output of python -V and pkg_info|grep python?

# python -V
Python 2.6.4
# pkg_info|grep python
boost-python-libs-1.41.0 Framework for interfacing Python and C++
diffuse-0.4.1_3 A graphical N-way diff and merge tool written in python
py26-notify-0.1.1_7 A python bindings for libnotify
python26-2.6.4 An interpreted object-oriented programming language

--
Best regards, Andrei V. Lavreniyuk.

From owner-freebsd-gecko@FreeBSD.ORG Wed Apr 7 08:50:05 2010
Return-Path:
Delivered-To: gecko@hub.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B71DA106564A for ; Wed, 7 Apr 2010 08:50:05 +0000 (UTC) (envelope-from gnats@FreeBSD.org)
Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id A02598FC13 for ; Wed, 7 Apr 2010 08:50:05 +0000 (UTC)
Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o378o5mG085643 for ; Wed, 7 Apr 2010 08:50:05 GMT (envelope-from gnats@freefall.freebsd.org)
Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o378o5ff085642; Wed, 7 Apr 2010 08:50:05 GMT (envelope-from gnats)
Date: Wed, 7 Apr 2010 08:50:05 GMT
Message-Id: <201004070850.o378o5ff085642@freefall.freebsd.org>
To: gecko@FreeBSD.org
From: "Andrei V. Lavreniyuk"
X-Mailman-Approved-At: Wed, 07 Apr 2010 11:38:52 +0000
Cc:
Subject: Re: ports/145450: [ERROR] cannot compile www/firefox version 3.6.3
X-BeenThere: freebsd-gecko@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
Reply-To: "Andrei V.
Lavreniyuk"
List-Id: Gecko Rendering Engine issues
X-List-Received-Date: Wed, 07 Apr 2010 08:50:05 -0000

The following reply was made to PR ports/145450; it has been noted by GNATS.

From: "Andrei V. Lavreniyuk"
To: bug-followup@FreeBSD.org
Cc: gecko@FreeBSD.org, Florian Smeets
Subject: Re: ports/145450: [ERROR] cannot compile www/firefox version 3.6.3
Date: Wed, 07 Apr 2010 11:48:12 +0300

This is a multi-part message in MIME format.
--------------080004000200030807030507
Content-Type: text/plain; charset=KOI8-R; format=flowed
Content-Transfer-Encoding: 8bit

07.04.2010 11:26, Florian Smeets wrote:
>> show me the output of python -V and pkg_info|grep python?
>>
>
> Sorry, there is a "can you please" missing. Did not want to sound as
> harsh as it seems.
>
> So, can you please show me the output? :-)

:)

Workaround:

# portupgrade -f boost-python-libs-1.41.0 py26-notify-0.1.1_7 python26-2.6.4

+ Attached patch.

--
Best regards, Andrei V. Lavreniyuk.

--------------080004000200030807030507
Content-Type: text/plain; name="patch-header.py"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="patch-header.py"

--- xpcom/idl-parser/header.py.orig	2010-04-02 19:03:13.000000000 +0300
+++ xpcom/idl-parser/header.py	2010-04-07 11:23:45.088650024 +0300
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/local/bin/python
 # header.py - Generate C++ header files from IDL.
 #
 # ***** BEGIN LICENSE BLOCK *****
--------------080004000200030807030507
Content-Type: text/plain; name="patch-ply"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="patch-ply"

diff -ruN ply.orig/lex.py ply/lex.py
--- other-licenses/ply/ply.orig/lex.py	2010-04-02 19:02:56.000000000 +0300
+++ other-licenses/ply/ply/lex.py	2009-08-27 04:28:42.000000000 +0300
@@ -1,45 +1,61 @@
 # -----------------------------------------------------------------------------
 # ply: lex.py
 #
-# Author: David M.
Beazley (dave@dabeaz.com) +# Copyright (C) 2001-2009, +# David M. Beazley (Dabeaz LLC) +# All rights reserved. # -# Copyright (C) 2001-2008, David M. Beazley +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright notice, +# this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright notice, +# this list of conditions and the following disclaimer in the documentation +# and/or other materials provided with the distribution. +# * Neither the name of the David Beazley or Dabeaz LLC may be used to +# endorse or promote products derived from this software without +# specific prior written permission. # -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA -# -# See the file COPYING for a complete copy of the LGPL. +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # ----------------------------------------------------------------------------- -__version__ = "2.5" -__tabversion__ = "2.4" # Version of table file used +__version__ = "3.3" +__tabversion__ = "3.2" # Version of table file used import re, sys, types, copy, os -# This regular expression is used to match valid token names -_is_identifier = re.compile(r'^[a-zA-Z0-9_]+$') - -# _INSTANCETYPE sets the valid set of instance types recognized -# by PLY when lexers are defined by a class. In order to maintain -# backwards compatibility with Python-2.0, we have to check for -# the existence of ObjectType. - +# This tuple contains known string types try: - _INSTANCETYPE = (types.InstanceType, types.ObjectType) + # Python 2.6 + StringTypes = (types.StringType, types.UnicodeType) except AttributeError: - _INSTANCETYPE = types.InstanceType - class object: pass # Note: needed if no new-style classes present + # Python 3.0 + StringTypes = (str, bytes) + +# Extract the code attribute of a function. Different implementations +# are for Python 2/3 compatibility. + +if sys.version_info[0] < 3: + def func_code(f): + return f.func_code +else: + def func_code(f): + return f.__code__ + +# This regular expression is used to match valid token names +_is_identifier = re.compile(r'^[a-zA-Z0-9_]+$') # Exception thrown when invalid token encountered and no default error # handler is defined. 
@@ -49,35 +65,50 @@ self.args = (message,) self.text = s -# An object used to issue one-time warning messages for various features - -class LexWarning(object): - def __init__(self): - self.warned = 0 - def __call__(self,msg): - if not self.warned: - sys.stderr.write("ply.lex: Warning: " + msg+"\n") - self.warned = 1 - -_SkipWarning = LexWarning() # Warning for use of t.skip() on tokens - # Token class. This class is used to represent the tokens produced. class LexToken(object): def __str__(self): return "LexToken(%s,%r,%d,%d)" % (self.type,self.value,self.lineno,self.lexpos) def __repr__(self): return str(self) - def skip(self,n): - self.lexer.skip(n) - _SkipWarning("Calling t.skip() on a token is deprecated. Please use t.lexer.skip()") + +# This object is a stand-in for a logging object created by the +# logging module. + +class PlyLogger(object): + def __init__(self,f): + self.f = f + def critical(self,msg,*args,**kwargs): + self.f.write((msg % args) + "\n") + + def warning(self,msg,*args,**kwargs): + self.f.write("WARNING: "+ (msg % args) + "\n") + + def error(self,msg,*args,**kwargs): + self.f.write("ERROR: " + (msg % args) + "\n") + + info = critical + debug = critical + +# Null logger is used when no output is generated. Does nothing. +class NullLogger(object): + def __getattribute__(self,name): + return self + def __call__(self,*args,**kwargs): + return self # ----------------------------------------------------------------------------- -# Lexer class +# === Lexing Engine === # -# This class encapsulates all of the methods and data associated with a lexer. +# The following Lexer class implements the lexer runtime. 
There are only +# a few public methods and attributes: # # input() - Store a new string in the lexer # token() - Get the next token +# clone() - Clone the lexer +# +# lineno - Current line number +# lexpos - Current position in the input string # ----------------------------------------------------------------------------- class Lexer: @@ -105,7 +136,6 @@ self.lexliterals = "" # Literal characters that can be passed through self.lexmodule = None # Module self.lineno = 1 # Current line number - self.lexdebug = 0 # Debugging mode self.lexoptimize = 0 # Optimized mode def clone(self,object=None): @@ -145,6 +175,7 @@ filename = os.path.join(outputdir,basetabfilename)+".py" tf = open(filename,"w") tf.write("# %s.py. This file automatically created by PLY (version %s). Don't edit!\n" % (tabfile,__version__)) + tf.write("_tabversion = %s\n" % repr(__version__)) tf.write("_lextokens = %s\n" % repr(self.lextokens)) tf.write("_lexreflags = %s\n" % repr(self.lexreflags)) tf.write("_lexliterals = %s\n" % repr(self.lexliterals)) @@ -184,7 +215,16 @@ if isinstance(tabfile,types.ModuleType): lextab = tabfile else: - exec "import %s as lextab" % tabfile + if sys.version_info[0] < 3: + exec("import %s as lextab" % tabfile) + else: + env = { } + exec("import %s as lextab" % tabfile, env,env) + lextab = env['lextab'] + + if getattr(lextab,"_tabversion","0.0") != __version__: + raise ImportError("Inconsistent PLY version") + self.lextokens = lextab._lextokens self.lexreflags = lextab._lexreflags self.lexliterals = lextab._lexliterals @@ -196,7 +236,7 @@ titem = [] txtitem = [] for i in range(len(lre)): - titem.append((re.compile(lre[i][0],lextab._lexreflags),_names_to_funcs(lre[i][1],fdict))) + titem.append((re.compile(lre[i][0],lextab._lexreflags | re.VERBOSE),_names_to_funcs(lre[i][1],fdict))) txtitem.append(lre[i][0]) self.lexstatere[key] = titem self.lexstateretext[key] = txtitem @@ -211,8 +251,8 @@ def input(self,s): # Pull off the first character to see if s looks like a string 
c = s[:1] - if not (isinstance(c,types.StringType) or isinstance(c,types.UnicodeType)): - raise ValueError, "Expected a string" + if not isinstance(c,StringTypes): + raise ValueError("Expected a string") self.lexdata = s self.lexpos = 0 self.lexlen = len(s) @@ -221,8 +261,8 @@ # begin() - Changes the lexing state # ------------------------------------------------------------ def begin(self,state): - if not self.lexstatere.has_key(state): - raise ValueError, "Undefined state" + if not state in self.lexstatere: + raise ValueError("Undefined state") self.lexre = self.lexstatere[state] self.lexretext = self.lexstateretext[state] self.lexignore = self.lexstateignore.get(state,"") @@ -255,7 +295,7 @@ self.lexpos += n # ------------------------------------------------------------ - # token() - Return the next token from the Lexer + # opttoken() - Return the next token from the Lexer # # Note: This function has been carefully implemented to be as fast # as possible. Don't make changes unless you really know what @@ -299,10 +339,6 @@ lexpos = m.end() - # if func not callable, it means it's an ignored token - if not callable(func): - break - # If token is processed by a function, call it tok.lexer = self # Set additional attributes useful in token rules @@ -319,9 +355,9 @@ # Verify type of the token. If not in the token map, raise an error if not self.lexoptimize: - if not self.lextokens.has_key(newtok.type): - raise LexError, ("%s:%d: Rule '%s' returned an unknown token type '%s'" % ( - func.func_code.co_filename, func.func_code.co_firstlineno, + if not newtok.type in self.lextokens: + raise LexError("%s:%d: Rule '%s' returned an unknown token type '%s'" % ( + func_code(func).co_filename, func_code(func).co_firstlineno, func.__name__, newtok.type),lexdata[lexpos:]) return newtok @@ -348,60 +384,60 @@ newtok = self.lexerrorf(tok) if lexpos == self.lexpos: # Error method didn't change text position at all. This is an error. - raise LexError, ("Scanning error. 
Illegal character '%s'" % (lexdata[lexpos]), lexdata[lexpos:]) + raise LexError("Scanning error. Illegal character '%s'" % (lexdata[lexpos]), lexdata[lexpos:]) lexpos = self.lexpos if not newtok: continue return newtok self.lexpos = lexpos - raise LexError, ("Illegal character '%s' at index %d" % (lexdata[lexpos],lexpos), lexdata[lexpos:]) + raise LexError("Illegal character '%s' at index %d" % (lexdata[lexpos],lexpos), lexdata[lexpos:]) self.lexpos = lexpos + 1 if self.lexdata is None: - raise RuntimeError, "No input string given with input()" + raise RuntimeError("No input string given with input()") return None + # Iterator interface + def __iter__(self): + return self + + def next(self): + t = self.token() + if t is None: + raise StopIteration + return t + + __next__ = next + # ----------------------------------------------------------------------------- -# _validate_file() +# ==== Lex Builder === # -# This checks to see if there are duplicated t_rulename() functions or strings -# in the parser input file. This is done using a simple regular expression -# match on each line in the given file. If the file can't be located or opened, -# a true result is returned by default. +# The functions and classes below are used to collect lexing information +# and build a Lexer object from it. # ----------------------------------------------------------------------------- -def _validate_file(filename): - import os.path - base,ext = os.path.splitext(filename) - if ext != '.py': return 1 # No idea what the file is. Return OK +# ----------------------------------------------------------------------------- +# get_caller_module_dict() +# +# This function returns a dictionary containing all of the symbols defined within +# a caller further down the call stack. This is used to get the environment +# associated with the yacc() call if none was provided. 
+# ----------------------------------------------------------------------------- +def get_caller_module_dict(levels): try: - f = open(filename) - lines = f.readlines() - f.close() - except IOError: - return 1 # Couldn't find the file. Don't worry about it - - fre = re.compile(r'\s*def\s+(t_[a-zA-Z_0-9]*)\(') - sre = re.compile(r'\s*(t_[a-zA-Z_0-9]*)\s*=') - - counthash = { } - linen = 1 - noerror = 1 - for l in lines: - m = fre.match(l) - if not m: - m = sre.match(l) - if m: - name = m.group(1) - prev = counthash.get(name) - if not prev: - counthash[name] = linen - else: - print >>sys.stderr, "%s:%d: Rule %s redefined. Previously defined on line %d" % (filename,linen,name,prev) - noerror = 0 - linen += 1 - return noerror + raise RuntimeError + except RuntimeError: + e,b,t = sys.exc_info() + f = t.tb_frame + while levels > 0: + f = f.f_back + levels -= 1 + ldict = f.f_globals.copy() + if f.f_globals != f.f_locals: + ldict.update(f.f_locals) + + return ldict # ----------------------------------------------------------------------------- # _funcs_to_names() @@ -466,7 +502,7 @@ lexindexfunc[i] = (None, toknames[f]) return [(lexre,lexindexfunc)],[regex],[lexindexnames] - except Exception,e: + except Exception: m = int(len(relist)/2) if m == 0: m = 1 llist, lre, lnames = _form_master_re(relist[:m],reflags,ldict,toknames) @@ -486,319 +522,446 @@ nonstate = 1 parts = s.split("_") for i in range(1,len(parts)): - if not names.has_key(parts[i]) and parts[i] != 'ANY': break + if not parts[i] in names and parts[i] != 'ANY': break if i > 1: states = tuple(parts[1:i]) else: states = ('INITIAL',) if 'ANY' in states: - states = tuple(names.keys()) + states = tuple(names) tokenname = "_".join(parts[i:]) return (states,tokenname) + # ----------------------------------------------------------------------------- -# lex(module) +# LexerReflect() # -# Build all of the regular expression rules from definitions in the supplied module +# This class represents information needed to build a 
lexer as extracted from a +# user's input file. # ----------------------------------------------------------------------------- -def lex(module=None,object=None,debug=0,optimize=0,lextab="lextab",reflags=0,nowarn=0,outputdir=""): - global lexer - ldict = None - stateinfo = { 'INITIAL' : 'inclusive'} - error = 0 - files = { } - lexobj = Lexer() - lexobj.lexdebug = debug - lexobj.lexoptimize = optimize - global token,input +class LexerReflect(object): + def __init__(self,ldict,log=None,reflags=0): + self.ldict = ldict + self.error_func = None + self.tokens = [] + self.reflags = reflags + self.stateinfo = { 'INITIAL' : 'inclusive'} + self.files = {} + self.error = 0 - if nowarn: warn = 0 - else: warn = 1 + if log is None: + self.log = PlyLogger(sys.stderr) + else: + self.log = log - if object: module = object + # Get all of the basic information + def get_all(self): + self.get_tokens() + self.get_literals() + self.get_states() + self.get_rules() + + # Validate all of the information + def validate_all(self): + self.validate_tokens() + self.validate_literals() + self.validate_rules() + return self.error + + # Get the tokens map + def get_tokens(self): + tokens = self.ldict.get("tokens",None) + if not tokens: + self.log.error("No token list is defined") + self.error = 1 + return - if module: - # User supplied a module object. - if isinstance(module, types.ModuleType): - ldict = module.__dict__ - elif isinstance(module, _INSTANCETYPE): - _items = [(k,getattr(module,k)) for k in dir(module)] - ldict = { } - for (i,v) in _items: - ldict[i] = v - else: - raise ValueError,"Expected a module or instance" - lexobj.lexmodule = module + if not isinstance(tokens,(list, tuple)): + self.log.error("tokens must be a list or tuple") + self.error = 1 + return + + if not tokens: + self.log.error("tokens is empty") + self.error = 1 + return - else: - # No module given. We might be able to get information from the caller. 
- try: - raise RuntimeError - except RuntimeError: - e,b,t = sys.exc_info() - f = t.tb_frame - f = f.f_back # Walk out to our calling function - if f.f_globals is f.f_locals: # Collect global and local variations from caller - ldict = f.f_globals - else: - ldict = f.f_globals.copy() - ldict.update(f.f_locals) + self.tokens = tokens - if optimize and lextab: + # Validate the tokens + def validate_tokens(self): + terminals = {} + for n in self.tokens: + if not _is_identifier.match(n): + self.log.error("Bad token name '%s'",n) + self.error = 1 + if n in terminals: + self.log.warning("Token '%s' multiply defined", n) + terminals[n] = 1 + + # Get the literals specifier + def get_literals(self): + self.literals = self.ldict.get("literals","") + + # Validate literals + def validate_literals(self): try: - lexobj.readtab(lextab,ldict) - token = lexobj.token - input = lexobj.input - lexer = lexobj - return lexobj + for c in self.literals: + if not isinstance(c,StringTypes) or len(c) > 1: + self.log.error("Invalid literal %s. Must be a single character", repr(c)) + self.error = 1 + continue - except ImportError: - pass + except TypeError: + self.log.error("Invalid literals specification. literals must be a sequence of characters") + self.error = 1 + + def get_states(self): + self.states = self.ldict.get("states",None) + # Build statemap + if self.states: + if not isinstance(self.states,(tuple,list)): + self.log.error("states must be defined as a tuple or list") + self.error = 1 + else: + for s in self.states: + if not isinstance(s,tuple) or len(s) != 2: + self.log.error("Invalid state specifier %s. 
Must be a tuple (statename,'exclusive|inclusive')",repr(s)) + self.error = 1 + continue + name, statetype = s + if not isinstance(name,StringTypes): + self.log.error("State name %s must be a string", repr(name)) + self.error = 1 + continue + if not (statetype == 'inclusive' or statetype == 'exclusive'): + self.log.error("State type for state %s must be 'inclusive' or 'exclusive'",name) + self.error = 1 + continue + if name in self.stateinfo: + self.log.error("State '%s' already defined",name) + self.error = 1 + continue + self.stateinfo[name] = statetype + + # Get all of the symbols with a t_ prefix and sort them into various + # categories (functions, strings, error functions, and ignore characters) + + def get_rules(self): + tsymbols = [f for f in self.ldict if f[:2] == 't_' ] + + # Now build up a list of functions and a list of strings + + self.toknames = { } # Mapping of symbols to token names + self.funcsym = { } # Symbols defined as functions + self.strsym = { } # Symbols defined as strings + self.ignore = { } # Ignore strings by state + self.errorf = { } # Error functions by state + + for s in self.stateinfo: + self.funcsym[s] = [] + self.strsym[s] = [] + + if len(tsymbols) == 0: + self.log.error("No rules of the form t_rulename are defined") + self.error = 1 + return - # Get the tokens, states, and literals variables (if any) + for f in tsymbols: + t = self.ldict[f] + states, tokname = _statetoken(f,self.stateinfo) + self.toknames[f] = tokname - tokens = ldict.get("tokens",None) - states = ldict.get("states",None) - literals = ldict.get("literals","") + if hasattr(t,"__call__"): + if tokname == 'error': + for s in states: + self.errorf[s] = t + elif tokname == 'ignore': + line = func_code(t).co_firstlineno + file = func_code(t).co_filename + self.log.error("%s:%d: Rule '%s' must be defined as a string",file,line,t.__name__) + self.error = 1 + else: + for s in states: + self.funcsym[s].append((f,t)) + elif isinstance(t, StringTypes): + if tokname == 
'ignore': + for s in states: + self.ignore[s] = t + if "\\" in t: + self.log.warning("%s contains a literal backslash '\\'",f) + + elif tokname == 'error': + self.log.error("Rule '%s' must be defined as a function", f) + self.error = 1 + else: + for s in states: + self.strsym[s].append((f,t)) + else: + self.log.error("%s not defined as a function or string", f) + self.error = 1 - if not tokens: - raise SyntaxError,"lex: module does not define 'tokens'" + # Sort the functions by line number + for f in self.funcsym.values(): + if sys.version_info[0] < 3: + f.sort(lambda x,y: cmp(func_code(x[1]).co_firstlineno,func_code(y[1]).co_firstlineno)) + else: + # Python 3.0 + f.sort(key=lambda x: func_code(x[1]).co_firstlineno) - if not (isinstance(tokens,types.ListType) or isinstance(tokens,types.TupleType)): - raise SyntaxError,"lex: tokens must be a list or tuple." + # Sort the strings by regular expression length + for s in self.strsym.values(): + if sys.version_info[0] < 3: + s.sort(lambda x,y: (len(x[1]) < len(y[1])) - (len(x[1]) > len(y[1]))) + else: + # Python 3.0 + s.sort(key=lambda x: len(x[1]),reverse=True) - # Build a dictionary of valid token names - lexobj.lextokens = { } - if not optimize: - for n in tokens: - if not _is_identifier.match(n): - print >>sys.stderr, "lex: Bad token name '%s'" % n - error = 1 - if warn and lexobj.lextokens.has_key(n): - print >>sys.stderr, "lex: Warning. Token '%s' multiply defined." 
% n - lexobj.lextokens[n] = None - else: - for n in tokens: lexobj.lextokens[n] = None + # Validate all of the t_rules collected + def validate_rules(self): + for state in self.stateinfo: + # Validate all rules defined by functions + + + + for fname, f in self.funcsym[state]: + line = func_code(f).co_firstlineno + file = func_code(f).co_filename + self.files[file] = 1 - if debug: - print "lex: tokens = '%s'" % lexobj.lextokens.keys() + tokname = self.toknames[fname] + if isinstance(f, types.MethodType): + reqargs = 2 + else: + reqargs = 1 + nargs = func_code(f).co_argcount + if nargs > reqargs: + self.log.error("%s:%d: Rule '%s' has too many arguments",file,line,f.__name__) + self.error = 1 + continue - try: - for c in literals: - if not (isinstance(c,types.StringType) or isinstance(c,types.UnicodeType)) or len(c) > 1: - print >>sys.stderr, "lex: Invalid literal %s. Must be a single character" % repr(c) - error = 1 + if nargs < reqargs: + self.log.error("%s:%d: Rule '%s' requires an argument", file,line,f.__name__) + self.error = 1 continue - except TypeError: - print >>sys.stderr, "lex: Invalid literals specification. literals must be a sequence of characters." - error = 1 - - lexobj.lexliterals = literals - - # Build statemap - if states: - if not (isinstance(states,types.TupleType) or isinstance(states,types.ListType)): - print >>sys.stderr, "lex: states must be defined as a tuple or list." - error = 1 - else: - for s in states: - if not isinstance(s,types.TupleType) or len(s) != 2: - print >>sys.stderr, "lex: invalid state specifier %s. 
Must be a tuple (statename,'exclusive|inclusive')" % repr(s) - error = 1 - continue - name, statetype = s - if not isinstance(name,types.StringType): - print >>sys.stderr, "lex: state name %s must be a string" % repr(name) - error = 1 - continue - if not (statetype == 'inclusive' or statetype == 'exclusive'): - print >>sys.stderr, "lex: state type for state %s must be 'inclusive' or 'exclusive'" % name - error = 1 - continue - if stateinfo.has_key(name): - print >>sys.stderr, "lex: state '%s' already defined." % name - error = 1 - continue - stateinfo[name] = statetype - - # Get a list of symbols with the t_ or s_ prefix - tsymbols = [f for f in ldict.keys() if f[:2] == 't_' ] - - # Now build up a list of functions and a list of strings - - funcsym = { } # Symbols defined as functions - strsym = { } # Symbols defined as strings - toknames = { } # Mapping of symbols to token names - - for s in stateinfo.keys(): - funcsym[s] = [] - strsym[s] = [] - - ignore = { } # Ignore strings by state - errorf = { } # Error functions by state - - if len(tsymbols) == 0: - raise SyntaxError,"lex: no rules of the form t_rulename are defined." 
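The rule-collection code above relies on a `_statetoken()` helper to split a symbol such as `t_comment_END` into its state prefix and token name. That helper is not part of this hunk, so the sketch below is a hypothetical re-implementation of the naming convention, not the actual PLY code:

```python
# Hypothetical re-implementation of the _statetoken() helper used above to
# split a rule name such as "t_comment_END" into (states, tokenname).
# The real helper is not part of this hunk, so details here are assumptions.
def split_statetoken(name, stateinfo):
    parts = name.split("_")[1:]        # drop the "t_" prefix
    states = []
    i = 0
    while i < len(parts) and parts[i] in stateinfo:
        states.append(parts[i])
        i += 1
    if not states:                     # no state prefix -> INITIAL state
        states = ["INITIAL"]
    return tuple(states), "_".join(parts[i:])

stateinfo = {"INITIAL": "inclusive", "comment": "exclusive"}
print(split_statetoken("t_NUMBER", stateinfo))       # (('INITIAL',), 'NUMBER')
print(split_statetoken("t_comment_END", stateinfo))  # (('comment',), 'END')
```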
- - for f in tsymbols: - t = ldict[f] - states, tokname = _statetoken(f,stateinfo) - toknames[f] = tokname - - if callable(t): - for s in states: funcsym[s].append((f,t)) - elif (isinstance(t, types.StringType) or isinstance(t,types.UnicodeType)): - for s in states: strsym[s].append((f,t)) - else: - print >>sys.stderr, "lex: %s not defined as a function or string" % f - error = 1 + if not f.__doc__: + self.log.error("%s:%d: No regular expression defined for rule '%s'",file,line,f.__name__) + self.error = 1 + continue - # Sort the functions by line number - for f in funcsym.values(): - f.sort(lambda x,y: cmp(x[1].func_code.co_firstlineno,y[1].func_code.co_firstlineno)) - - # Sort the strings by regular expression length - for s in strsym.values(): - s.sort(lambda x,y: (len(x[1]) < len(y[1])) - (len(x[1]) > len(y[1]))) + try: + c = re.compile("(?P<%s>%s)" % (fname,f.__doc__), re.VERBOSE | self.reflags) + if c.match(""): + self.log.error("%s:%d: Regular expression for rule '%s' matches empty string", file,line,f.__name__) + self.error = 1 + except re.error: + _etype, e, _etrace = sys.exc_info() + self.log.error("%s:%d: Invalid regular expression for rule '%s'. %s", file,line,f.__name__,e) + if '#' in f.__doc__: + self.log.error("%s:%d. 
Make sure '#' in rule '%s' is escaped with '\\#'",file,line, f.__name__) + self.error = 1 + + # Validate all rules defined by strings + for name,r in self.strsym[state]: + tokname = self.toknames[name] + if tokname == 'error': + self.log.error("Rule '%s' must be defined as a function", name) + self.error = 1 + continue - regexs = { } + if not tokname in self.tokens and tokname.find("ignore_") < 0: + self.log.error("Rule '%s' defined for an unspecified token %s",name,tokname) + self.error = 1 + continue - # Build the master regular expressions - for state in stateinfo.keys(): - regex_list = [] + try: + c = re.compile("(?P<%s>%s)" % (name,r),re.VERBOSE | self.reflags) + if (c.match("")): + self.log.error("Regular expression for rule '%s' matches empty string",name) + self.error = 1 + except re.error: + _etype, e, _etrace = sys.exc_info() + self.log.error("Invalid regular expression for rule '%s'. %s",name,e) + if '#' in r: + self.log.error("Make sure '#' in rule '%s' is escaped with '\\#'",name) + self.error = 1 - # Add rules defined by functions first - for fname, f in funcsym[state]: - line = f.func_code.co_firstlineno - file = f.func_code.co_filename - files[file] = None - tokname = toknames[fname] - - ismethod = isinstance(f, types.MethodType) - - if not optimize: - nargs = f.func_code.co_argcount - if ismethod: + if not self.funcsym[state] and not self.strsym[state]: + self.log.error("No rules defined for state '%s'",state) + self.error = 1 + + # Validate the error function + efunc = self.errorf.get(state,None) + if efunc: + f = efunc + line = func_code(f).co_firstlineno + file = func_code(f).co_filename + self.files[file] = 1 + + if isinstance(f, types.MethodType): reqargs = 2 else: reqargs = 1 + nargs = func_code(f).co_argcount if nargs > reqargs: - print >>sys.stderr, "%s:%d: Rule '%s' has too many arguments." 
% (file,line,f.__name__) - error = 1 - continue + self.log.error("%s:%d: Rule '%s' has too many arguments",file,line,f.__name__) + self.error = 1 if nargs < reqargs: - print >>sys.stderr, "%s:%d: Rule '%s' requires an argument." % (file,line,f.__name__) - error = 1 - continue + self.log.error("%s:%d: Rule '%s' requires an argument", file,line,f.__name__) + self.error = 1 - if tokname == 'ignore': - print >>sys.stderr, "%s:%d: Rule '%s' must be defined as a string." % (file,line,f.__name__) - error = 1 - continue + for f in self.files: + self.validate_file(f) - if tokname == 'error': - errorf[state] = f - continue - if f.__doc__: - if not optimize: - try: - c = re.compile("(?P<%s>%s)" % (fname,f.__doc__), re.VERBOSE | reflags) - if c.match(""): - print >>sys.stderr, "%s:%d: Regular expression for rule '%s' matches empty string." % (file,line,f.__name__) - error = 1 - continue - except re.error,e: - print >>sys.stderr, "%s:%d: Invalid regular expression for rule '%s'. %s" % (file,line,f.__name__,e) - if '#' in f.__doc__: - print >>sys.stderr, "%s:%d. Make sure '#' in rule '%s' is escaped with '\\#'." % (file,line, f.__name__) - error = 1 - continue + # ----------------------------------------------------------------------------- + # validate_file() + # + # This checks to see if there are duplicated t_rulename() functions or strings + # in the parser input file. This is done using a simple regular expression + # match on each line in the given file. + # ----------------------------------------------------------------------------- + + def validate_file(self,filename): + import os.path + base,ext = os.path.splitext(filename) + if ext != '.py': return # No idea what the file is. Return OK - if debug: - print "lex: Adding rule %s -> '%s' (state '%s')" % (f.__name__,f.__doc__, state) + try: + f = open(filename) + lines = f.readlines() + f.close() + except IOError: + return # Couldn't find the file. Don't worry about it - # Okay. The regular expression seemed okay. 
Let's append it to the master regular - # expression we're building + fre = re.compile(r'\s*def\s+(t_[a-zA-Z_0-9]*)\(') + sre = re.compile(r'\s*(t_[a-zA-Z_0-9]*)\s*=') - regex_list.append("(?P<%s>%s)" % (fname,f.__doc__)) - else: - print >>sys.stderr, "%s:%d: No regular expression defined for rule '%s'" % (file,line,f.__name__) + counthash = { } + linen = 1 + for l in lines: + m = fre.match(l) + if not m: + m = sre.match(l) + if m: + name = m.group(1) + prev = counthash.get(name) + if not prev: + counthash[name] = linen + else: + self.log.error("%s:%d: Rule %s redefined. Previously defined on line %d",filename,linen,name,prev) + self.error = 1 + linen += 1 + +# ----------------------------------------------------------------------------- +# lex(module) +# +# Build all of the regular expression rules from definitions in the supplied module +# ----------------------------------------------------------------------------- +def lex(module=None,object=None,debug=0,optimize=0,lextab="lextab",reflags=0,nowarn=0,outputdir="", debuglog=None, errorlog=None): + global lexer + ldict = None + stateinfo = { 'INITIAL' : 'inclusive'} + lexobj = Lexer() + lexobj.lexoptimize = optimize + global token,input - # Now add all of the simple rules - for name,r in strsym[state]: - tokname = toknames[name] + if errorlog is None: + errorlog = PlyLogger(sys.stderr) + + if debug: + if debuglog is None: + debuglog = PlyLogger(sys.stderr) - if tokname == 'ignore': - if "\\" in r: - print >>sys.stderr, "lex: Warning. 
%s contains a literal backslash '\\'" % name - ignore[state] = r - continue + # Get the module dictionary used for the lexer + if object: module = object - if not optimize: - if tokname == 'error': - raise SyntaxError,"lex: Rule '%s' must be defined as a function" % name - error = 1 - continue + if module: + _items = [(k,getattr(module,k)) for k in dir(module)] + ldict = dict(_items) + else: + ldict = get_caller_module_dict(2) - if not lexobj.lextokens.has_key(tokname) and tokname.find("ignore_") < 0: - print >>sys.stderr, "lex: Rule '%s' defined for an unspecified token %s." % (name,tokname) - error = 1 - continue - try: - c = re.compile("(?P<%s>%s)" % (name,r),re.VERBOSE | reflags) - if (c.match("")): - print >>sys.stderr, "lex: Regular expression for rule '%s' matches empty string." % name - error = 1 - continue - except re.error,e: - print >>sys.stderr, "lex: Invalid regular expression for rule '%s'. %s" % (name,e) - if '#' in r: - print >>sys.stderr, "lex: Make sure '#' in rule '%s' is escaped with '\\#'." 
% name + # Collect parser information from the dictionary + linfo = LexerReflect(ldict,log=errorlog,reflags=reflags) + linfo.get_all() + if not optimize: + if linfo.validate_all(): + raise SyntaxError("Can't build lexer") - error = 1 - continue - if debug: - print "lex: Adding rule %s -> '%s' (state '%s')" % (name,r,state) + if optimize and lextab: + try: + lexobj.readtab(lextab,ldict) + token = lexobj.token + input = lexobj.input + lexer = lexobj + return lexobj - regex_list.append("(?P<%s>%s)" % (name,r)) + except ImportError: + pass - if not regex_list: - print >>sys.stderr, "lex: No rules defined for state '%s'" % state - error = 1 + # Dump some basic debugging information + if debug: + debuglog.info("lex: tokens = %r", linfo.tokens) + debuglog.info("lex: literals = %r", linfo.literals) + debuglog.info("lex: states = %r", linfo.stateinfo) - regexs[state] = regex_list + # Build a dictionary of valid token names + lexobj.lextokens = { } + for n in linfo.tokens: + lexobj.lextokens[n] = 1 + # Get literals specification + if isinstance(linfo.literals,(list,tuple)): + lexobj.lexliterals = type(linfo.literals[0])().join(linfo.literals) + else: + lexobj.lexliterals = linfo.literals - if not optimize: - for f in files.keys(): - if not _validate_file(f): - error = 1 + # Get the stateinfo dictionary + stateinfo = linfo.stateinfo + + regexs = { } + # Build the master regular expressions + for state in stateinfo: + regex_list = [] + + # Add rules defined by functions first + for fname, f in linfo.funcsym[state]: + line = func_code(f).co_firstlineno + file = func_code(f).co_filename + regex_list.append("(?P<%s>%s)" % (fname,f.__doc__)) + if debug: + debuglog.info("lex: Adding rule %s -> '%s' (state '%s')",fname,f.__doc__, state) - if error: - raise SyntaxError,"lex: Unable to build lexer." 
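For context, the per-state `regex_list` being assembled here is later joined into one master pattern of named groups, so a single `match()` identifies which rule fired via `lastgroup`. A minimal standalone sketch of that idea (the real `_form_master_re()`, not shown in this hunk, also handles overlong group lists, which is omitted):

```python
import re

# Standalone sketch of the master-regex construction: every rule becomes a
# named group "(?P<NAME>pattern)" and the groups are OR-ed together, so
# m.lastgroup identifies which rule fired. The real _form_master_re() (not
# shown in this hunk) also splits overlong group lists; that is omitted here.
regex_list = [r"(?P<NUMBER>\d+)", r"(?P<PLUS>\+)", r"(?P<ID>[A-Za-z_]\w*)"]
master = re.compile("|".join(regex_list))

found = [(m.lastgroup, m.group()) for m in master.finditer("x1 + 42")]
print(found)   # [('ID', 'x1'), ('PLUS', '+'), ('NUMBER', '42')]
```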
+ # Now add all of the simple rules + for name,r in linfo.strsym[state]: + regex_list.append("(?P<%s>%s)" % (name,r)) + if debug: + debuglog.info("lex: Adding rule %s -> '%s' (state '%s')",name,r, state) - # From this point forward, we're reasonably confident that we can build the lexer. - # No more errors will be generated, but there might be some warning messages. + regexs[state] = regex_list # Build the master regular expressions - for state in regexs.keys(): - lexre, re_text, re_names = _form_master_re(regexs[state],reflags,ldict,toknames) + if debug: + debuglog.info("lex: ==== MASTER REGEXS FOLLOW ====") + + for state in regexs: + lexre, re_text, re_names = _form_master_re(regexs[state],reflags,ldict,linfo.toknames) lexobj.lexstatere[state] = lexre lexobj.lexstateretext[state] = re_text lexobj.lexstaterenames[state] = re_names if debug: for i in range(len(re_text)): - print "lex: state '%s'. regex[%d] = '%s'" % (state, i, re_text[i]) + debuglog.info("lex: state '%s' : regex[%d] = '%s'",state, i, re_text[i]) - # For inclusive states, we need to add the INITIAL state - for state,type in stateinfo.items(): - if state != "INITIAL" and type == 'inclusive': + # For inclusive states, we need to add the regular expressions from the INITIAL state + for state,stype in stateinfo.items(): + if state != "INITIAL" and stype == 'inclusive': lexobj.lexstatere[state].extend(lexobj.lexstatere['INITIAL']) lexobj.lexstateretext[state].extend(lexobj.lexstateretext['INITIAL']) lexobj.lexstaterenames[state].extend(lexobj.lexstaterenames['INITIAL']) @@ -806,30 +969,30 @@ lexobj.lexstateinfo = stateinfo lexobj.lexre = lexobj.lexstatere["INITIAL"] lexobj.lexretext = lexobj.lexstateretext["INITIAL"] + lexobj.lexreflags = reflags # Set up ignore variables - lexobj.lexstateignore = ignore + lexobj.lexstateignore = linfo.ignore lexobj.lexignore = lexobj.lexstateignore.get("INITIAL","") # Set up error functions - lexobj.lexstateerrorf = errorf - lexobj.lexerrorf = errorf.get("INITIAL",None) 
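The "escape '#' with '\\#'" warnings issued during validation exist because rule patterns are compiled with `re.VERBOSE`, where an unescaped `#` starts a comment and silently truncates the pattern. A quick demonstration of the pitfall:

```python
import re

# Why the validators above insist on escaping '#': rule patterns are compiled
# with re.VERBOSE, where an unescaped '#' starts a comment and silently
# truncates the pattern.
bad  = re.compile(r"[a-z]+#tag",  re.VERBOSE)  # '#tag' parsed as a comment
good = re.compile(r"[a-z]+\#tag", re.VERBOSE)

print(bad.match("abc#tag").group())    # 'abc'      -- pattern lost '#tag'
print(good.match("abc#tag").group())   # 'abc#tag'
```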
- if warn and not lexobj.lexerrorf: - print >>sys.stderr, "lex: Warning. no t_error rule is defined." + lexobj.lexstateerrorf = linfo.errorf + lexobj.lexerrorf = linfo.errorf.get("INITIAL",None) + if not lexobj.lexerrorf: + errorlog.warning("No t_error rule is defined") # Check state information for ignore and error rules for s,stype in stateinfo.items(): if stype == 'exclusive': - if warn and not errorf.has_key(s): - print >>sys.stderr, "lex: Warning. no error rule is defined for exclusive state '%s'" % s - if warn and not ignore.has_key(s) and lexobj.lexignore: - print >>sys.stderr, "lex: Warning. no ignore rule is defined for exclusive state '%s'" % s + if not s in linfo.errorf: + errorlog.warning("No error rule is defined for exclusive state '%s'", s) + if not s in linfo.ignore and lexobj.lexignore: + errorlog.warning("No ignore rule is defined for exclusive state '%s'", s) elif stype == 'inclusive': - if not errorf.has_key(s): - errorf[s] = errorf.get("INITIAL",None) - if not ignore.has_key(s): - ignore[s] = ignore.get("INITIAL","") - + if not s in linfo.errorf: + linfo.errorf[s] = linfo.errorf.get("INITIAL",None) + if not s in linfo.ignore: + linfo.ignore[s] = linfo.ignore.get("INITIAL","") # Create global versions of the token() and input() functions token = lexobj.token @@ -856,7 +1019,7 @@ data = f.read() f.close() except IndexError: - print "Reading from standard input (type EOF to end):" + sys.stdout.write("Reading from standard input (type EOF to end):\n") data = sys.stdin.read() if lexer: @@ -872,8 +1035,7 @@ while 1: tok = _token() if not tok: break - print "(%s,%r,%d,%d)" % (tok.type, tok.value, tok.lineno,tok.lexpos) - + sys.stdout.write("(%s,%r,%d,%d)\n" % (tok.type, tok.value, tok.lineno,tok.lexpos)) # ----------------------------------------------------------------------------- # @TOKEN(regex) @@ -884,7 +1046,7 @@ def TOKEN(r): def set_doc(f): - if callable(r): + if hasattr(r,"__call__"): f.__doc__ = r.__doc__ else: f.__doc__ = r diff -ruN 
ply.orig/yacc.py ply/yacc.py --- other-licenses/ply/ply.orig/yacc.py 2010-04-02 19:02:56.000000000 +0300 +++ other-licenses/ply/ply/yacc.py 2009-09-02 17:27:23.000000000 +0300 @@ -1,26 +1,35 @@ -#----------------------------------------------------------------------------- +# ----------------------------------------------------------------------------- # ply: yacc.py # -# Author(s): David M. Beazley (dave@dabeaz.com) -# -# Copyright (C) 2001-2008, David M. Beazley -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA -# -# See the file COPYING for a complete copy of the LGPL. -# +# Copyright (C) 2001-2009, +# David M. Beazley (Dabeaz LLC) +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright notice, +# this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright notice, +# this list of conditions and the following disclaimer in the documentation +# and/or other materials provided with the distribution. 
+# * Neither the name of the David Beazley or Dabeaz LLC may be used to +# endorse or promote products derived from this software without +# specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +# ----------------------------------------------------------------------------- # # This implements an LR parser that is constructed from grammar rules defined # as Python functions. The grammer is specified by supplying the BNF inside @@ -50,8 +59,8 @@ # own risk! # ---------------------------------------------------------------------------- -__version__ = "2.5" -__tabversion__ = "2.4" # Table version +__version__ = "3.3" +__tabversion__ = "3.2" # Table version #----------------------------------------------------------------------------- # === User configurable parameters === @@ -71,24 +80,83 @@ yaccdevel = 0 # Set to True if developing yacc. This turns off optimized # implementations of certain functions. -import re, types, sys, cStringIO, md5, os.path +resultlimit = 40 # Size limit of results when running in debug mode. 
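One recurring change in this patch is that every direct `f.func_code` access is funneled through a `func_code()` shim, because the attribute was renamed to `f.__code__` in Python 3. A standalone copy of the shim, applied to a typical `t_` rule for illustration:

```python
import sys

# The patch funnels every f.func_code access through a func_code() shim
# because the attribute was renamed to f.__code__ in Python 3. A standalone
# copy of the shim, applied to a typical t_ rule:
if sys.version_info[0] < 3:
    def func_code(f):
        return f.func_code
else:
    def func_code(f):
        return f.__code__

def t_NUMBER(t):
    r'\d+'                       # the docstring doubles as the rule's regex
    return t

print(func_code(t_NUMBER).co_name)   # 't_NUMBER'
```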
-# Exception raised for yacc-related errors -class YaccError(Exception): pass +pickle_protocol = 0 # Protocol to use when writing pickle files -# Exception raised for errors raised in production rules -class SyntaxError(Exception): pass +import re, types, sys, os.path +# Compatibility function for python 2.6/3.0 +if sys.version_info[0] < 3: + def func_code(f): + return f.func_code +else: + def func_code(f): + return f.__code__ -# Available instance types. This is used when parsers are defined by a class. -# it's a little funky because I want to preserve backwards compatibility -# with Python 2.0 where types.ObjectType is undefined. - +# Compatibility try: - _INSTANCETYPE = (types.InstanceType, types.ObjectType) + MAXINT = sys.maxint except AttributeError: - _INSTANCETYPE = types.InstanceType - class object: pass # Note: needed if no new-style classes present + MAXINT = sys.maxsize + +# Python 2.x/3.0 compatibility. +def load_ply_lex(): + if sys.version_info[0] < 3: + import lex + else: + import ply.lex as lex + return lex + +# This object is a stand-in for a logging object created by the +# logging module. PLY will use this by default to create things +# such as the parser.out file. If a user wants more detailed +# information, they can create their own logging object and pass +# it into PLY. + +class PlyLogger(object): + def __init__(self,f): + self.f = f + def debug(self,msg,*args,**kwargs): + self.f.write((msg % args) + "\n") + info = debug + + def warning(self,msg,*args,**kwargs): + self.f.write("WARNING: "+ (msg % args) + "\n") + + def error(self,msg,*args,**kwargs): + self.f.write("ERROR: " + (msg % args) + "\n") + + critical = debug + +# Null logger is used when no output is generated. Does nothing. 
+class NullLogger(object): + def __getattribute__(self,name): + return self + def __call__(self,*args,**kwargs): + return self + +# Exception raised for yacc-related errors +class YaccError(Exception): pass + +# Format the result message that the parser produces when running in debug mode. +def format_result(r): + repr_str = repr(r) + if '\n' in repr_str: repr_str = repr(repr_str) + if len(repr_str) > resultlimit: + repr_str = repr_str[:resultlimit]+" ..." + result = "<%s @ 0x%x> (%s)" % (type(r).__name__,id(r),repr_str) + return result + + +# Format stack entries when the parser is running in debug mode +def format_stack_entry(r): + repr_str = repr(r) + if '\n' in repr_str: repr_str = repr(repr_str) + if len(repr_str) < 16: + return repr_str + else: + return "<%s @ 0x%x>" % (type(r).__name__,id(r)) #----------------------------------------------------------------------------- # === LR Parsing Engine === @@ -142,6 +210,9 @@ def lineno(self,n): return getattr(self.slice[n],"lineno",0) + def set_lineno(self,n,lineno): + self.slice[n].lineno = lineno + def linespan(self,n): startline = getattr(self.slice[n],"lineno",0) endline = getattr(self.slice[n],"endlineno",startline) @@ -159,27 +230,18 @@ raise SyntaxError -# The LR Parsing engine. This is defined as a class so that multiple parsers -# can exist in the same process. A user never instantiates this directly. -# Instead, the global yacc() function should be used to create a suitable Parser -# object. - -class Parser: - def __init__(self,magic=None): - - # This is a hack to keep users from trying to instantiate a Parser - # object directly. - - if magic != "xyzzy": - raise YaccError, "Can't directly instantiate Parser. Use yacc() instead." 
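The `format_result()` helper introduced above can be exercised on its own; this is a standalone copy showing how long or multi-line reprs are clipped to `resultlimit` characters so the parser's debug log stays readable:

```python
# Standalone copy of the format_result() truncation introduced above: long
# or multi-line reprs are clipped to resultlimit characters so the debug
# log stays readable.
resultlimit = 40

def format_result(r):
    repr_str = repr(r)
    if '\n' in repr_str:
        repr_str = repr(repr_str)      # flatten multi-line reprs
    if len(repr_str) > resultlimit:
        repr_str = repr_str[:resultlimit] + " ..."
    return "<%s @ 0x%x> (%s)" % (type(r).__name__, id(r), repr_str)

print(format_result(5))               # short repr shown in full
print(format_result(list(range(30)))) # long repr clipped with " ..."
```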
- - # Reset internal state - self.productions = None # List of productions - self.errorfunc = None # Error handling function - self.action = { } # LR Action table - self.goto = { } # LR goto table - self.require = { } # Attribute require table - self.method = "Unknown LR" # Table construction method used +# ----------------------------------------------------------------------------- +# == LRParser == +# +# The LR Parsing engine. +# ----------------------------------------------------------------------------- + +class LRParser: + def __init__(self,lrtab,errorf): + self.productions = lrtab.lr_productions + self.action = lrtab.lr_action + self.goto = lrtab.lr_goto + self.errorfunc = errorf def errok(self): self.errorok = 1 @@ -194,6 +256,8 @@ def parse(self,input=None,lexer=None,debug=0,tracking=0,tokenfunc=None): if debug or yaccdevel: + if isinstance(debug,int): + debug = PlyLogger(sys.stderr) return self.parsedebug(input,lexer,debug,tracking,tokenfunc) elif tracking: return self.parseopt(input,lexer,debug,tracking,tokenfunc) @@ -215,7 +279,7 @@ # # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! - def parsedebug(self,input=None,lexer=None,debug=0,tracking=0,tokenfunc=None): + def parsedebug(self,input=None,lexer=None,debug=None,tracking=0,tokenfunc=None): lookahead = None # Current lookahead symbol lookaheadstack = [ ] # Stack of lookahead symbols actions = self.action # Local reference to action table (to avoid lookup on self.) @@ -223,12 +287,16 @@ prod = self.productions # Local reference to production list (to avoid lookup on self.) pslice = YaccProduction(None) # Production object passed to grammar rules errorcount = 0 # Used during error recovery - endsym = "$end" # End symbol + + # --! DEBUG + debug.info("PLY: PARSE DEBUG START") + # --! 
DEBUG + # If no lexer was given, we will try to use the lex module if not lexer: - import lex + lex = load_ply_lex() lexer = lex.lexer - + # Set up the lexer and parser objects on pslice pslice.lexer = lexer pslice.parser = self @@ -257,7 +325,7 @@ statestack.append(0) sym = YaccSymbol() - sym.type = endsym + sym.type = "$end" symstack.append(sym) state = 0 while 1: @@ -266,8 +334,8 @@ # the next token off of the lookaheadstack or from the lexer # --! DEBUG - if debug > 1: - print 'state', state + debug.debug('') + debug.debug('State : %s', state) # --! DEBUG if not lookahead: @@ -277,35 +345,25 @@ lookahead = lookaheadstack.pop() if not lookahead: lookahead = YaccSymbol() - lookahead.type = endsym + lookahead.type = "$end" # --! DEBUG - if debug: - errorlead = ("%s . %s" % (" ".join([xx.type for xx in symstack][1:]), str(lookahead))).lstrip() + debug.debug('Stack : %s', + ("%s . %s" % (" ".join([xx.type for xx in symstack][1:]), str(lookahead))).lstrip()) # --! DEBUG # Check the action table ltype = lookahead.type t = actions[state].get(ltype) - # --! DEBUG - if debug > 1: - print 'action', t - # --! DEBUG - if t is not None: if t > 0: # shift a symbol on the stack - if ltype is endsym: - # Error, end of input - sys.stderr.write("yacc: Parse error. EOF\n") - return statestack.append(t) state = t # --! DEBUG - if debug > 1: - sys.stderr.write("%-60s shift state %s\n" % (errorlead, t)) + debug.debug("Action : Shift and goto state %s", t) # --! DEBUG symstack.append(lookahead) @@ -327,8 +385,11 @@ sym.value = None # --! DEBUG - if debug > 1: - sys.stderr.write("%-60s reduce %d\n" % (errorlead, -t)) + if plen: + debug.info("Action : Reduce rule [%s] with %s and goto state %d", p.str, "["+",".join([format_stack_entry(_v.value) for _v in symstack[-plen:]])+"]",-t) + else: + debug.info("Action : Reduce rule [%s] with %s and goto state %d", p.str, [],-t) + # --! 
DEBUG if plen: @@ -355,9 +416,12 @@ try: # Call the grammar rule with our special slice object - p.func(pslice) del symstack[-plen:] del statestack[-plen:] + p.callable(pslice) + # --! DEBUG + debug.info("Result : %s", format_result(pslice[0])) + # --! DEBUG symstack.append(sym) state = goto[statestack[-1]][pname] statestack.append(state) @@ -393,7 +457,10 @@ try: # Call the grammar rule with our special slice object - p.func(pslice) + p.callable(pslice) + # --! DEBUG + debug.info("Result : %s", format_result(pslice[0])) + # --! DEBUG symstack.append(sym) state = goto[statestack[-1]][pname] statestack.append(state) @@ -412,13 +479,18 @@ if t == 0: n = symstack[-1] - return getattr(n,"value",None) + result = getattr(n,"value",None) + # --! DEBUG + debug.info("Done : Returning %s", format_result(result)) + debug.info("PLY: PARSE DEBUG END") + # --! DEBUG + return result if t == None: # --! DEBUG - if debug: - sys.stderr.write(errorlead + "\n") + debug.error('Error : %s', + ("%s . %s" % (" ".join([xx.type for xx in symstack][1:]), str(lookahead))).lstrip()) # --! DEBUG # We have some kind of parsing error here. To handle @@ -435,13 +507,15 @@ errorcount = error_count self.errorok = 0 errtoken = lookahead - if errtoken.type is endsym: + if errtoken.type == "$end": errtoken = None # End of file! if self.errorfunc: global errok,token,restart errok = self.errok # Set some special functions available in error recovery token = get_token restart = self.restart + if errtoken and not hasattr(errtoken,'lexer'): + errtoken.lexer = lexer tok = self.errorfunc(errtoken) del errok, token, restart # Delete special functions @@ -471,7 +545,7 @@ # entire parse has been rolled back and we're completely hosed. The token is # discarded and we just keep going. - if len(statestack) <= 1 and lookahead.type is not endsym: + if len(statestack) <= 1 and lookahead.type != "$end": lookahead = None errtoken = None state = 0 @@ -483,7 +557,7 @@ # at the end of the file. 
nuke the top entry and generate an error token # Start nuking entries on the stack - if lookahead.type is endsym: + if lookahead.type == "$end": # Whoa. We're really hosed here. Bail out return @@ -509,7 +583,7 @@ continue # Call an error function here - raise RuntimeError, "yacc: internal parser error!!!\n" + raise RuntimeError("yacc: internal parser error!!!\n") # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # parseopt(). @@ -531,7 +605,7 @@ # If no lexer was given, we will try to use the lex module if not lexer: - import lex + lex = load_ply_lex() lexer = lex.lexer # Set up the lexer and parser objects on pslice @@ -586,10 +660,6 @@ if t is not None: if t > 0: # shift a symbol on the stack - if ltype == '$end': - # Error, end of input - sys.stderr.write("yacc: Parse error. EOF\n") - return statestack.append(t) state = t @@ -635,9 +705,9 @@ try: # Call the grammar rule with our special slice object - p.func(pslice) del symstack[-plen:] del statestack[-plen:] + p.callable(pslice) symstack.append(sym) state = goto[statestack[-1]][pname] statestack.append(state) @@ -673,7 +743,7 @@ try: # Call the grammar rule with our special slice object - p.func(pslice) + p.callable(pslice) symstack.append(sym) state = goto[statestack[-1]][pname] statestack.append(state) @@ -717,6 +787,8 @@ errok = self.errok # Set some special functions available in error recovery token = get_token restart = self.restart + if errtoken and not hasattr(errtoken,'lexer'): + errtoken.lexer = lexer tok = self.errorfunc(errtoken) del errok, token, restart # Delete special functions @@ -784,7 +856,7 @@ continue # Call an error function here - raise RuntimeError, "yacc: internal parser error!!!\n" + raise RuntimeError("yacc: internal parser error!!!\n") # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # parseopt_notrack(). 
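The hunks above replace identity tests such as `lookahead.type is endsym` with plain string comparison against `"$end"`. A minimal sketch of why `==` is the robust choice for string-valued token types; the `Token` class and names here are illustrative stand-ins, not PLY's own:

```python
# Why the patch compares token types with '==' rather than 'is':
# a type string built at runtime is equal to, but not necessarily the
# same object as, the "$end" sentinel, so identity tests can fail.
# Token is a hypothetical stand-in class, not PLY's own.

END = "$end"

class Token:
    def __init__(self, type_):
        self.type = type_

# Build "$end" at runtime so it is a distinct string object.
tok = Token("".join(["$", "end"]))

print(tok.type == END)   # True  - equality compares contents
print(tok.type is END)   # False - identity compares objects
```

Since CPython only interns identifier-like strings, a `"$end"` produced at runtime is a fresh object, which is exactly the failure mode the patch removes.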
@@ -805,7 +877,7 @@ # If no lexer was given, we will try to use the lex module if not lexer: - import lex + lex = load_ply_lex() lexer = lex.lexer # Set up the lexer and parser objects on pslice @@ -860,10 +932,6 @@ if t is not None: if t > 0: # shift a symbol on the stack - if ltype == '$end': - # Error, end of input - sys.stderr.write("yacc: Parse error. EOF\n") - return statestack.append(t) state = t @@ -898,9 +966,9 @@ try: # Call the grammar rule with our special slice object - p.func(pslice) del symstack[-plen:] del statestack[-plen:] + p.callable(pslice) symstack.append(sym) state = goto[statestack[-1]][pname] statestack.append(state) @@ -930,7 +998,7 @@ try: # Call the grammar rule with our special slice object - p.func(pslice) + p.callable(pslice) symstack.append(sym) state = goto[statestack[-1]][pname] statestack.append(state) @@ -974,6 +1042,8 @@ errok = self.errok # Set some special functions available in error recovery token = get_token restart = self.restart + if errtoken and not hasattr(errtoken,'lexer'): + errtoken.lexer = lexer tok = self.errorfunc(errtoken) del errok, token, restart # Delete special functions @@ -1041,169 +1111,172 @@ continue # Call an error function here - raise RuntimeError, "yacc: internal parser error!!!\n" - + raise RuntimeError("yacc: internal parser error!!!\n") # ----------------------------------------------------------------------------- -# === Parser Construction === +# === Grammar Representation === # -# The following functions and variables are used to implement the yacc() function -# itself. This is pretty hairy stuff involving lots of error checking, -# construction of LR items, kernels, and so forth. Although a lot of -# this work is done using global variables, the resulting Parser object -# is completely self contained--meaning that it is safe to repeatedly -# call yacc() with different grammars in the same application. 
+# The following functions, classes, and variables are used to represent and +# manipulate the rules that make up a grammar. # ----------------------------------------------------------------------------- -# ----------------------------------------------------------------------------- -# validate_file() -# -# This function checks to see if there are duplicated p_rulename() functions -# in the parser module file. Without this function, it is really easy for -# users to make mistakes by cutting and pasting code fragments (and it's a real -# bugger to try and figure out why the resulting parser doesn't work). Therefore, -# we just do a little regular expression pattern matching of def statements -# to try and detect duplicates. -# ----------------------------------------------------------------------------- - -def validate_file(filename): - base,ext = os.path.splitext(filename) - if ext != '.py': return 1 # No idea. Assume it's okay. +import re - try: - f = open(filename) - lines = f.readlines() - f.close() - except IOError: - return 1 # Oh well - - # Match def p_funcname( - fre = re.compile(r'\s*def\s+(p_[a-zA-Z_0-9]*)\(') - counthash = { } - linen = 1 - noerror = 1 - for l in lines: - m = fre.match(l) - if m: - name = m.group(1) - prev = counthash.get(name) - if not prev: - counthash[name] = linen - else: - sys.stderr.write("%s:%d: Function %s redefined. Previously defined on line %d\n" % (filename,linen,name,prev)) - noerror = 0 - linen += 1 - return noerror - -# This function looks for functions that might be grammar rules, but which don't have the proper p_suffix. -def validate_dict(d): - for n,v in d.items(): - if n[0:2] == 'p_' and type(v) in (types.FunctionType, types.MethodType): continue - if n[0:2] == 't_': continue - - if n[0:2] == 'p_': - sys.stderr.write("yacc: Warning. 
'%s' not defined as a function\n" % n) - if 1 and isinstance(v,types.FunctionType) and v.func_code.co_argcount == 1: - try: - doc = v.__doc__.split(" ") - if doc[1] == ':': - sys.stderr.write("%s:%d: Warning. Possible grammar rule '%s' defined without p_ prefix.\n" % (v.func_code.co_filename, v.func_code.co_firstlineno,n)) - except StandardError: - pass +# regex matching identifiers +_is_identifier = re.compile(r'^[a-zA-Z0-9_-]+$') # ----------------------------------------------------------------------------- -# === GRAMMAR FUNCTIONS === +# class Production: # -# The following global variables and functions are used to store, manipulate, -# and verify the grammar rules specified by the user. -# ----------------------------------------------------------------------------- - -# Initialize all of the global variables used during grammar construction -def initialize_vars(): - global Productions, Prodnames, Prodmap, Terminals - global Nonterminals, First, Follow, Precedence, UsedPrecedence, LRitems - global Errorfunc, Signature, Requires - - Productions = [None] # A list of all of the productions. The first - # entry is always reserved for the purpose of - # building an augmented grammar - - Prodnames = { } # A dictionary mapping the names of nonterminals to a list of all - # productions of that nonterminal. +# This class stores the raw information about a single production or grammar rule. +# A grammar rule refers to a specification such as this: +# +# expr : expr PLUS term +# +# Here are the basic attributes defined on all productions +# +# name - Name of the production. For example 'expr' +# prod - A list of symbols on the right side ['expr','PLUS','term'] +# prec - Production precedence level +# number - Production number. +# func - Function that executes on reduce +# file - File where production function is defined +# lineno - Line number where production function is defined +# +# The following attributes are defined or optional. 
+# +# len - Length of the production (number of symbols on right hand side) +# usyms - Set of unique symbols found in the production +# ----------------------------------------------------------------------------- + +class Production(object): + reduced = 0 + def __init__(self,number,name,prod,precedence=('right',0),func=None,file='',line=0): + self.name = name + self.prod = tuple(prod) + self.number = number + self.func = func + self.callable = None + self.file = file + self.line = line + self.prec = precedence - Prodmap = { } # A dictionary that is only used to detect duplicate - # productions. + # Internal settings used during table construction + + self.len = len(self.prod) # Length of the production - Terminals = { } # A dictionary mapping the names of terminal symbols to a - # list of the rules where they are used. + # Create a list of unique production symbols used in the production + self.usyms = [ ] + for s in self.prod: + if s not in self.usyms: + self.usyms.append(s) + + # List of all LR items for the production + self.lr_items = [] + self.lr_next = None - Nonterminals = { } # A dictionary mapping names of nonterminals to a list - # of rule numbers where they are used. + # Create a string representation + if self.prod: + self.str = "%s -> %s" % (self.name," ".join(self.prod)) + else: + self.str = "%s -> " % self.name - First = { } # A dictionary of precomputed FIRST(x) symbols + def __str__(self): + return self.str - Follow = { } # A dictionary of precomputed FOLLOW(x) symbols + def __repr__(self): + return "Production("+str(self)+")" - Precedence = { } # Precedence rules for each terminal. Contains tuples of the - # form ('right',level) or ('nonassoc', level) or ('left',level) + def __len__(self): + return len(self.prod) - UsedPrecedence = { } # Precedence rules that were actually used by the grammer. - # This is only used to provide error checking and to generate - # a warning about unused precedence rules. 
+ def __nonzero__(self): + return 1 - LRitems = [ ] # A list of all LR items for the grammar. These are the - # productions with the "dot" like E -> E . PLUS E + def __getitem__(self,index): + return self.prod[index] + + # Return the nth lr_item from the production (or None if at the end) + def lr_item(self,n): + if n > len(self.prod): return None + p = LRItem(self,n) - Errorfunc = None # User defined error handler + # Precompute the list of productions immediately following. Hack. Remove later + try: + p.lr_after = Prodnames[p.prod[n+1]] + except (IndexError,KeyError): + p.lr_after = [] + try: + p.lr_before = p.prod[n-1] + except IndexError: + p.lr_before = None - Signature = md5.new() # Digital signature of the grammar rules, precedence - # and other information. Used to determined when a - # parsing table needs to be regenerated. + return p - Signature.update(__tabversion__) + # Bind the production function name to a callable + def bind(self,pdict): + if self.func: + self.callable = pdict[self.func] + +# This class serves as a minimal standin for Production objects when +# reading table data from files. It only contains information +# actually used by the LR parsing engine, plus some additional +# debugging information. 
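The `bind()` methods added above resolve a stored function *name* to a callable only when the parser is (re)constructed, so cached table files need not pickle function objects. A minimal sketch of that pattern, under the assumption that the name is looked up in a dictionary such as a module's `globals()`; the `Rule` class and `p_expr` here are illustrative, not PLY's own code:

```python
# Name-to-callable late binding, mirroring Production.bind() /
# MiniProduction.bind() in the patch above. The table records only the
# function's name; bind() resolves it against a supplied dictionary.

class Rule:
    def __init__(self, funcname):
        self.func = funcname      # name recorded in the cached table
        self.callable = None      # resolved lazily by bind()

    def bind(self, pdict):
        if self.func:
            self.callable = pdict[self.func]

def p_expr(p):
    return "reduced expr"

rule = Rule("p_expr")
rule.bind(globals())              # resolve the stored name to the function
print(rule.callable(None))        # -> reduced expr
```

The design choice is that a `MiniProduction` read back from a table file stays inert until `bind()` is called with the user's rule dictionary.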
+class MiniProduction(object): + def __init__(self,str,name,len,func,file,line): + self.name = name + self.len = len + self.func = func + self.callable = None + self.file = file + self.line = line + self.str = str + def __str__(self): + return self.str + def __repr__(self): + return "MiniProduction(%s)" % self.str - Requires = { } # Requires list + # Bind the production function name to a callable + def bind(self,pdict): + if self.func: + self.callable = pdict[self.func] - # File objects used when creating the parser.out debugging file - global _vf, _vfc - _vf = cStringIO.StringIO() - _vfc = cStringIO.StringIO() # ----------------------------------------------------------------------------- -# class Production: +# class LRItem # -# This class stores the raw information about a single production or grammar rule. -# It has a few required attributes: +# This class represents a specific stage of parsing a production rule. For +# example: # -# name - Name of the production (nonterminal) -# prod - A list of symbols making up its production -# number - Production number. +# expr : expr . PLUS term # -# In addition, a few additional attributes are used to help with debugging or -# optimization of table generation. +# In the above, the "." represents the current location of the parse. Here +# basic attributes: # -# file - File where production action is defined. -# lineno - Line number where action is defined -# func - Action function -# prec - Precedence level -# lr_next - Next LR item. Example, if we are ' E -> E . PLUS E' -# then lr_next refers to 'E -> E PLUS . E' -# lr_index - LR item index (location of the ".") in the prod list. +# name - Name of the production. For example 'expr' +# prod - A list of symbols on the right side ['expr','.', 'PLUS','term'] +# number - Production number. +# +# lr_next Next LR item. Example, if we are ' expr -> expr . PLUS term' +# then lr_next refers to 'expr -> expr PLUS . 
term' +# lr_index - LR item index (location of the ".") in the prod list. # lookaheads - LALR lookahead symbols for this item -# len - Length of the production (number of symbols on right hand side) +# len - Length of the production (number of symbols on right hand side) +# lr_after - List of all productions that immediately follow +# lr_before - Grammar symbol immediately before # ----------------------------------------------------------------------------- -class Production: - def __init__(self,**kw): - for k,v in kw.items(): - setattr(self,k,v) - self.lr_index = -1 - self.lr0_added = 0 # Flag indicating whether or not added to LR0 closure - self.lr1_added = 0 # Flag indicating whether or not added to LR1 - self.usyms = [ ] +class LRItem(object): + def __init__(self,p,n): + self.name = p.name + self.prod = list(p.prod) + self.number = p.number + self.lr_index = n self.lookaheads = { } - self.lk_added = { } - self.setnumbers = [ ] + self.prod.insert(n,".") + self.prod = tuple(self.prod) + self.len = len(self.prod) + self.usyms = p.usyms def __str__(self): if self.prod: @@ -1213,932 +1286,597 @@ return s def __repr__(self): - return str(self) - - # Compute lr_items from the production - def lr_item(self,n): - if n > len(self.prod): return None - p = Production() - p.name = self.name - p.prod = list(self.prod) - p.number = self.number - p.lr_index = n - p.lookaheads = { } - p.setnumbers = self.setnumbers - p.prod.insert(n,".") - p.prod = tuple(p.prod) - p.len = len(p.prod) - p.usyms = self.usyms - - # Precompute list of productions immediately following - try: - p.lrafter = Prodnames[p.prod[n+1]] - except (IndexError,KeyError),e: - p.lrafter = [] - try: - p.lrbefore = p.prod[n-1] - except IndexError: - p.lrbefore = None - - return p - -class MiniProduction: - pass + return "LRItem("+str(self)+")" -# regex matching identifiers -_is_identifier = re.compile(r'^[a-zA-Z0-9_-]+$') +# ----------------------------------------------------------------------------- +# 
rightmost_terminal() +# +# Return the rightmost terminal from a list of symbols. Used in add_production() +# ----------------------------------------------------------------------------- +def rightmost_terminal(symbols, terminals): + i = len(symbols) - 1 + while i >= 0: + if symbols[i] in terminals: + return symbols[i] + i -= 1 + return None # ----------------------------------------------------------------------------- -# add_production() +# === GRAMMAR CLASS === # -# Given an action function, this function assembles a production rule. -# The production rule is assumed to be found in the function's docstring. -# This rule has the general syntax: -# -# name1 ::= production1 -# | production2 -# | production3 -# ... -# | productionn -# name2 ::= production1 -# | production2 -# ... -# ----------------------------------------------------------------------------- - -def add_production(f,file,line,prodname,syms): - - if Terminals.has_key(prodname): - sys.stderr.write("%s:%d: Illegal rule name '%s'. Already defined as a token.\n" % (file,line,prodname)) - return -1 - if prodname == 'error': - sys.stderr.write("%s:%d: Illegal rule name '%s'. 
error is a reserved word.\n" % (file,line,prodname)) - return -1 - - if not _is_identifier.match(prodname): - sys.stderr.write("%s:%d: Illegal rule name '%s'\n" % (file,line,prodname)) - return -1 - - for x in range(len(syms)): - s = syms[x] - if s[0] in "'\"": - try: - c = eval(s) - if (len(c) > 1): - sys.stderr.write("%s:%d: Literal token %s in rule '%s' may only be a single character\n" % (file,line,s, prodname)) - return -1 - if not Terminals.has_key(c): - Terminals[c] = [] - syms[x] = c - continue - except SyntaxError: - pass - if not _is_identifier.match(s) and s != '%prec': - sys.stderr.write("%s:%d: Illegal name '%s' in rule '%s'\n" % (file,line,s, prodname)) - return -1 - - # See if the rule is already in the rulemap - map = "%s -> %s" % (prodname,syms) - if Prodmap.has_key(map): - m = Prodmap[map] - sys.stderr.write("%s:%d: Duplicate rule %s.\n" % (file,line, m)) - sys.stderr.write("%s:%d: Previous definition at %s:%d\n" % (file,line, m.file, m.line)) - return -1 - - p = Production() - p.name = prodname - p.prod = syms - p.file = file - p.line = line - p.func = f - p.number = len(Productions) - - - Productions.append(p) - Prodmap[map] = p - if not Nonterminals.has_key(prodname): - Nonterminals[prodname] = [ ] - - # Add all terminals to Terminals - i = 0 - while i < len(p.prod): - t = p.prod[i] - if t == '%prec': - try: - precname = p.prod[i+1] - except IndexError: - sys.stderr.write("%s:%d: Syntax error. Nothing follows %%prec.\n" % (p.file,p.line)) - return -1 - - prec = Precedence.get(precname,None) - if not prec: - sys.stderr.write("%s:%d: Nothing known about the precedence of '%s'\n" % (p.file,p.line,precname)) - return -1 - else: - p.prec = prec - UsedPrecedence[precname] = 1 - del p.prod[i] - del p.prod[i] - continue - - if Terminals.has_key(t): - Terminals[t].append(p.number) - # Is a terminal. 
We'll assign a precedence to p based on this - if not hasattr(p,"prec"): - p.prec = Precedence.get(t,('right',0)) - else: - if not Nonterminals.has_key(t): - Nonterminals[t] = [ ] - Nonterminals[t].append(p.number) - i += 1 - - if not hasattr(p,"prec"): - p.prec = ('right',0) - - # Set final length of productions - p.len = len(p.prod) - p.prod = tuple(p.prod) - - # Calculate unique syms in the production - p.usyms = [ ] - for s in p.prod: - if s not in p.usyms: - p.usyms.append(s) +# The following class represents the contents of the specified grammar along +# with various computed properties such as first sets, follow sets, LR items, etc. +# This data is used for critical parts of the table generation process later. +# ----------------------------------------------------------------------------- - # Add to the global productions list - try: - Prodnames[p.name].append(p) - except KeyError: - Prodnames[p.name] = [ p ] - return 0 - -# Given a raw rule function, this function rips out its doc string -# and adds rules to the grammar - -def add_function(f): - line = f.func_code.co_firstlineno - file = f.func_code.co_filename - error = 0 +class GrammarError(YaccError): pass - if isinstance(f,types.MethodType): - reqdargs = 2 - else: - reqdargs = 1 +class Grammar(object): + def __init__(self,terminals): + self.Productions = [None] # A list of all of the productions. 
The first + # entry is always reserved for the purpose of + # building an augmented grammar - if f.func_code.co_argcount > reqdargs: - sys.stderr.write("%s:%d: Rule '%s' has too many arguments.\n" % (file,line,f.__name__)) - return -1 - - if f.func_code.co_argcount < reqdargs: - sys.stderr.write("%s:%d: Rule '%s' requires an argument.\n" % (file,line,f.__name__)) - return -1 - - if f.__doc__: - # Split the doc string into lines - pstrings = f.__doc__.splitlines() - lastp = None - dline = line - for ps in pstrings: - dline += 1 - p = ps.split() - if not p: continue - try: - if p[0] == '|': - # This is a continuation of a previous rule - if not lastp: - sys.stderr.write("%s:%d: Misplaced '|'.\n" % (file,dline)) - return -1 - prodname = lastp - if len(p) > 1: - syms = p[1:] - else: - syms = [ ] - else: - prodname = p[0] - lastp = prodname - assign = p[1] - if len(p) > 2: - syms = p[2:] - else: - syms = [ ] - if assign != ':' and assign != '::=': - sys.stderr.write("%s:%d: Syntax error. Expected ':'\n" % (file,dline)) - return -1 + self.Prodnames = { } # A dictionary mapping the names of nonterminals to a list of all + # productions of that nonterminal. + self.Prodmap = { } # A dictionary that is only used to detect duplicate + # productions. - e = add_production(f,file,dline,prodname,syms) - error += e + self.Terminals = { } # A dictionary mapping the names of terminal symbols to a + # list of the rules where they are used. + for term in terminals: + self.Terminals[term] = [] - except StandardError: - sys.stderr.write("%s:%d: Syntax error in rule '%s'\n" % (file,dline,ps)) - error -= 1 - else: - sys.stderr.write("%s:%d: No documentation string specified in function '%s'\n" % (file,line,f.__name__)) - return error + self.Terminals['error'] = [] + self.Nonterminals = { } # A dictionary mapping names of nonterminals to a list + # of rule numbers where they are used. 
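The `rightmost_terminal()` helper introduced earlier in this chunk scans a rule's symbols from the right and returns the first terminal it finds; this is what supplies a rule's default precedence when no `%prec` is given. A small usage sketch (the toy terminal set is invented for illustration):

```python
# rightmost_terminal() as defined in the patch: return the rightmost
# symbol that appears in the terminals mapping, or None if there is none.
def rightmost_terminal(symbols, terminals):
    i = len(symbols) - 1
    while i >= 0:
        if symbols[i] in terminals:
            return symbols[i]
        i -= 1
    return None

# Hypothetical terminal set for a toy expression grammar.
terminals = {"PLUS": [], "TIMES": [], "NUMBER": []}

print(rightmost_terminal(["expr", "PLUS", "term"], terminals))    # PLUS
print(rightmost_terminal(["expr", "TIMES", "NUMBER"], terminals)) # NUMBER
print(rightmost_terminal(["expr", "term"], terminals))            # None
```

So a rule like `expr : expr PLUS term` inherits the precedence of `PLUS`, since `term` is a nonterminal and is skipped.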
-# Cycle checking code (Michael Dyck) + self.First = { } # A dictionary of precomputed FIRST(x) symbols -def compute_reachable(): - ''' - Find each symbol that can be reached from the start symbol. - Print a warning for any nonterminals that can't be reached. - (Unused terminals have already had their warning.) - ''' - Reachable = { } - for s in Terminals.keys() + Nonterminals.keys(): - Reachable[s] = 0 - - mark_reachable_from( Productions[0].prod[0], Reachable ) - - for s in Nonterminals.keys(): - if not Reachable[s]: - sys.stderr.write("yacc: Symbol '%s' is unreachable.\n" % s) - -def mark_reachable_from(s, Reachable): - ''' - Mark all symbols that are reachable from symbol s. - ''' - if Reachable[s]: - # We've already reached symbol s. - return - Reachable[s] = 1 - for p in Prodnames.get(s,[]): - for r in p.prod: - mark_reachable_from(r, Reachable) - -# ----------------------------------------------------------------------------- -# compute_terminates() -# -# This function looks at the various parsing rules and tries to detect -# infinite recursion cycles (grammar rules where there is no possible way -# to derive a string of only terminals). -# ----------------------------------------------------------------------------- -def compute_terminates(): - ''' - Raise an error for any symbols that don't terminate. - ''' - Terminates = {} - - # Terminals: - for t in Terminals.keys(): - Terminates[t] = 1 - - Terminates['$end'] = 1 - - # Nonterminals: - - # Initialize to false: - for n in Nonterminals.keys(): - Terminates[n] = 0 - - # Then propagate termination until no change: - while 1: - some_change = 0 - for (n,pl) in Prodnames.items(): - # Nonterminal n terminates iff any of its productions terminates. - for p in pl: - # Production p terminates iff all of its rhs symbols terminate. - for s in p.prod: - if not Terminates[s]: - # The symbol s does not terminate, - # so production p does not terminate. 
- p_terminates = 0 - break - else: - # didn't break from the loop, - # so every symbol s terminates - # so production p terminates. - p_terminates = 1 - - if p_terminates: - # symbol n terminates! - if not Terminates[n]: - Terminates[n] = 1 - some_change = 1 - # Don't need to consider any more productions for this n. - break - - if not some_change: - break - - some_error = 0 - for (s,terminates) in Terminates.items(): - if not terminates: - if not Prodnames.has_key(s) and not Terminals.has_key(s) and s != 'error': - # s is used-but-not-defined, and we've already warned of that, - # so it would be overkill to say that it's also non-terminating. - pass - else: - sys.stderr.write("yacc: Infinite recursion detected for symbol '%s'.\n" % s) - some_error = 1 + self.Follow = { } # A dictionary of precomputed FOLLOW(x) symbols - return some_error + self.Precedence = { } # Precedence rules for each terminal. Contains tuples of the + # form ('right',level) or ('nonassoc', level) or ('left',level) -# ----------------------------------------------------------------------------- -# verify_productions() -# -# This function examines all of the supplied rules to see if they seem valid. -# ----------------------------------------------------------------------------- -def verify_productions(cycle_check=1): - error = 0 - for p in Productions: - if not p: continue + self.UsedPrecedence = { } # Precedence rules that were actually used by the grammar. + # This is only used to provide error checking and to generate + # a warning about unused precedence rules. 
- for s in p.prod: - if not Prodnames.has_key(s) and not Terminals.has_key(s) and s != 'error': - sys.stderr.write("%s:%d: Symbol '%s' used, but not defined as a token or a rule.\n" % (p.file,p.line,s)) - error = 1 - continue + self.Start = None # Starting symbol for the grammar - unused_tok = 0 - # Now verify all of the tokens - if yaccdebug: - _vf.write("Unused terminals:\n\n") - for s,v in Terminals.items(): - if s != 'error' and not v: - sys.stderr.write("yacc: Warning. Token '%s' defined, but not used.\n" % s) - if yaccdebug: _vf.write(" %s\n"% s) - unused_tok += 1 - - # Print out all of the productions - if yaccdebug: - _vf.write("\nGrammar\n\n") - for i in range(1,len(Productions)): - _vf.write("Rule %-5d %s\n" % (i, Productions[i])) - - unused_prod = 0 - # Verify the use of all productions - for s,v in Nonterminals.items(): - if not v: - p = Prodnames[s][0] - sys.stderr.write("%s:%d: Warning. Rule '%s' defined, but not used.\n" % (p.file,p.line, s)) - unused_prod += 1 - - - if unused_tok == 1: - sys.stderr.write("yacc: Warning. There is 1 unused token.\n") - if unused_tok > 1: - sys.stderr.write("yacc: Warning. There are %d unused tokens.\n" % unused_tok) - - if unused_prod == 1: - sys.stderr.write("yacc: Warning. There is 1 unused rule.\n") - if unused_prod > 1: - sys.stderr.write("yacc: Warning. 
There are %d unused rules.\n" % unused_prod) - - if yaccdebug: - _vf.write("\nTerminals, with rules where they appear\n\n") - ks = Terminals.keys() - ks.sort() - for k in ks: - _vf.write("%-20s : %s\n" % (k, " ".join([str(s) for s in Terminals[k]]))) - _vf.write("\nNonterminals, with rules where they appear\n\n") - ks = Nonterminals.keys() - ks.sort() - for k in ks: - _vf.write("%-20s : %s\n" % (k, " ".join([str(s) for s in Nonterminals[k]]))) - - if (cycle_check): - compute_reachable() - error += compute_terminates() -# error += check_cycles() - return error - -# ----------------------------------------------------------------------------- -# build_lritems() -# -# This function walks the list of productions and builds a complete set of the -# LR items. The LR items are stored in two ways: First, they are uniquely -# numbered and placed in the list _lritems. Second, a linked list of LR items -# is built for each production. For example: -# -# E -> E PLUS E -# -# Creates the list -# -# [E -> . E PLUS E, E -> E . PLUS E, E -> E PLUS . E, E -> E PLUS E . ] -# ----------------------------------------------------------------------------- - -def build_lritems(): - for p in Productions: - lastlri = p - lri = p.lr_item(0) - i = 0 - while 1: - lri = p.lr_item(i) - lastlri.lr_next = lri - if not lri: break - lri.lr_num = len(LRitems) - LRitems.append(lri) - lastlri = lri - i += 1 - # In order for the rest of the parser generator to work, we need to - # guarantee that no more lritems are generated. Therefore, we nuke - # the p.lr_item method. (Only used in debugging) - # Production.lr_item = None + def __len__(self): + return len(self.Productions) -# ----------------------------------------------------------------------------- -# add_precedence() -# -# Given a list of precedence rules, add to the precedence table. 
-# ----------------------------------------------------------------------------- + def __getitem__(self,index): + return self.Productions[index] -def add_precedence(plist): - plevel = 0 - error = 0 - for p in plist: - plevel += 1 - try: - prec = p[0] - terms = p[1:] - if prec != 'left' and prec != 'right' and prec != 'nonassoc': - sys.stderr.write("yacc: Invalid precedence '%s'\n" % prec) - return -1 - for t in terms: - if Precedence.has_key(t): - sys.stderr.write("yacc: Precedence already specified for terminal '%s'\n" % t) - error += 1 - continue - Precedence[t] = (prec,plevel) - except: - sys.stderr.write("yacc: Invalid precedence table.\n") - error += 1 + # ----------------------------------------------------------------------------- + # set_precedence() + # + # Sets the precedence for a given terminal. assoc is the associativity such as + # 'left','right', or 'nonassoc'. level is a numeric level. + # + # ----------------------------------------------------------------------------- - return error + def set_precedence(self,term,assoc,level): + assert self.Productions == [None],"Must call set_precedence() before add_production()" + if term in self.Precedence: + raise GrammarError("Precedence already specified for terminal '%s'" % term) + if assoc not in ['left','right','nonassoc']: + raise GrammarError("Associativity must be one of 'left','right', or 'nonassoc'") + self.Precedence[term] = (assoc,level) + + # ----------------------------------------------------------------------------- + # add_production() + # + # Given an action function, this function assembles a production rule and + # computes its precedence level. + # + # The production rule is supplied as a list of symbols. For example, + # a rule such as 'expr : expr PLUS term' has a production name of 'expr' and + # symbols ['expr','PLUS','term']. + # + # Precedence is determined by the precedence of the right-most non-terminal + # or the precedence of a terminal specified by %prec. 
+ # + # A variety of error checks are performed to make sure production symbols + # are valid and that %prec is used correctly. + # ----------------------------------------------------------------------------- + + def add_production(self,prodname,syms,func=None,file='',line=0): + + if prodname in self.Terminals: + raise GrammarError("%s:%d: Illegal rule name '%s'. Already defined as a token" % (file,line,prodname)) + if prodname == 'error': + raise GrammarError("%s:%d: Illegal rule name '%s'. error is a reserved word" % (file,line,prodname)) + if not _is_identifier.match(prodname): + raise GrammarError("%s:%d: Illegal rule name '%s'" % (file,line,prodname)) + + # Look for literal tokens + for n,s in enumerate(syms): + if s[0] in "'\"": + try: + c = eval(s) + if (len(c) > 1): + raise GrammarError("%s:%d: Literal token %s in rule '%s' may only be a single character" % (file,line,s, prodname)) + if not c in self.Terminals: + self.Terminals[c] = [] + syms[n] = c + continue + except SyntaxError: + pass + if not _is_identifier.match(s) and s != '%prec': + raise GrammarError("%s:%d: Illegal name '%s' in rule '%s'" % (file,line,s, prodname)) + + # Determine the precedence level + if '%prec' in syms: + if syms[-1] == '%prec': + raise GrammarError("%s:%d: Syntax error. Nothing follows %%prec" % (file,line)) + if syms[-2] != '%prec': + raise GrammarError("%s:%d: Syntax error. 
%%prec can only appear at the end of a grammar rule" % (file,line)) + precname = syms[-1] + prodprec = self.Precedence.get(precname,None) + if not prodprec: + raise GrammarError("%s:%d: Nothing known about the precedence of '%s'" % (file,line,precname)) + else: + self.UsedPrecedence[precname] = 1 + del syms[-2:] # Drop %prec from the rule + else: + # If no %prec, precedence is determined by the rightmost terminal symbol + precname = rightmost_terminal(syms,self.Terminals) + prodprec = self.Precedence.get(precname,('right',0)) + + # See if the rule is already in the rulemap + map = "%s -> %s" % (prodname,syms) + if map in self.Prodmap: + m = self.Prodmap[map] + raise GrammarError("%s:%d: Duplicate rule %s. " % (file,line, m) + + "Previous definition at %s:%d" % (m.file, m.line)) + + # From this point on, everything is valid. Create a new Production instance + pnumber = len(self.Productions) + if not prodname in self.Nonterminals: + self.Nonterminals[prodname] = [ ] + + # Add the production number to Terminals and Nonterminals + for t in syms: + if t in self.Terminals: + self.Terminals[t].append(pnumber) + else: + if not t in self.Nonterminals: + self.Nonterminals[t] = [ ] + self.Nonterminals[t].append(pnumber) + + # Create a production and add it to the list of productions + p = Production(pnumber,prodname,syms,prodprec,func,file,line) + self.Productions.append(p) + self.Prodmap[map] = p -# ----------------------------------------------------------------------------- -# check_precedence() -# -# Checks the use of the Precedence tables. 
This makes sure all of the symbols -# are terminals or were used with %prec -# ----------------------------------------------------------------------------- + # Add to the global productions list + try: + self.Prodnames[prodname].append(p) + except KeyError: + self.Prodnames[prodname] = [ p ] + return 0 -def check_precedence(): - error = 0 - for precname in Precedence.keys(): - if not (Terminals.has_key(precname) or UsedPrecedence.has_key(precname)): - sys.stderr.write("yacc: Precedence rule '%s' defined for unknown symbol '%s'\n" % (Precedence[precname][0],precname)) - error += 1 - return error + # ----------------------------------------------------------------------------- + # set_start() + # + # Sets the starting symbol and creates the augmented grammar. Production + # rule 0 is S' -> start where start is the start symbol. + # ----------------------------------------------------------------------------- + + def set_start(self,start=None): + if not start: + start = self.Productions[1].name + if start not in self.Nonterminals: + raise GrammarError("start symbol %s undefined" % start) + self.Productions[0] = Production(0,"S'",[start]) + self.Nonterminals[start].append(0) + self.Start = start -# ----------------------------------------------------------------------------- -# augment_grammar() -# -# Compute the augmented grammar. This is just a rule S' -> start where start -# is the starting symbol. -# ----------------------------------------------------------------------------- + # ----------------------------------------------------------------------------- + # find_unreachable() + # + # Find all of the nonterminal symbols that can't be reached from the starting + # symbol. Returns a list of nonterminals that can't be reached. 
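An aside on the unreachable-symbol check being introduced here: it boils down to a recursive mark phase from the start symbol, then sweeping for unmarked rules. A minimal standalone sketch (the dict-of-productions shape and function name are illustrative, not PLY's actual internals):

```python
# Hypothetical toy layout: nonterminal -> list of right-hand sides.
# Mirrors the recursive mark phase of find_unreachable(); a sketch only.
def unreachable(prodnames, start):
    reachable = set()

    def mark(sym):
        if sym in reachable:
            return
        reachable.add(sym)
        for rhs in prodnames.get(sym, []):   # terminals have no productions
            for s in rhs:
                mark(s)

    mark(start)
    return [n for n in prodnames if n not in reachable]

g = {'expr': [['expr', 'PLUS', 'term'], ['term']],
     'term': [['NUM']],
     'orphan': [['NUM']]}   # defined but never referenced from 'expr'
```

With this toy grammar, `unreachable(g, 'expr')` reports only `orphan`.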
+ # ----------------------------------------------------------------------------- -def augment_grammar(start=None): - if not start: - start = Productions[1].name - Productions[0] = Production(name="S'",prod=[start],number=0,len=1,prec=('right',0),func=None) - Productions[0].usyms = [ start ] - Nonterminals[start].append(0) + def find_unreachable(self): + + # Mark all symbols that are reachable from a symbol s + def mark_reachable_from(s): + if reachable[s]: + # We've already reached symbol s. + return + reachable[s] = 1 + for p in self.Prodnames.get(s,[]): + for r in p.prod: + mark_reachable_from(r) + + reachable = { } + for s in list(self.Terminals) + list(self.Nonterminals): + reachable[s] = 0 + mark_reachable_from( self.Productions[0].prod[0] ) -# ------------------------------------------------------------------------- -# first() -# -# Compute the value of FIRST1(beta) where beta is a tuple of symbols. -# -# During execution of compute_first1, the result may be incomplete. -# Afterward (e.g., when called from compute_follow()), it will be complete. -# ------------------------------------------------------------------------- -def first(beta): + return [s for s in list(self.Nonterminals) + if not reachable[s]] + + # ----------------------------------------------------------------------------- + # infinite_cycles() + # + # This function looks at the various parsing rules and tries to detect + # infinite recursion cycles (grammar rules where there is no possible way + # to derive a string of only terminals). + # ----------------------------------------------------------------------------- - # We are computing First(x1,x2,x3,...,xn) - result = [ ] - for x in beta: - x_produces_empty = 0 + def infinite_cycles(self): + terminates = {} - # Add all the non- symbols of First[x] to the result. 
- for f in First[x]: - if f == '': - x_produces_empty = 1 - else: - if f not in result: result.append(f) + # Terminals: + for t in self.Terminals: + terminates[t] = 1 - if x_produces_empty: - # We have to consider the next x in beta, - # i.e. stay in the loop. - pass - else: - # We don't have to consider any further symbols in beta. - break - else: - # There was no 'break' from the loop, - # so x_produces_empty was true for all x in beta, - # so beta produces empty as well. - result.append('') + terminates['$end'] = 1 - return result + # Nonterminals: + # Initialize to false: + for n in self.Nonterminals: + terminates[n] = 0 -# FOLLOW(x) -# Given a non-terminal. This function computes the set of all symbols -# that might follow it. Dragon book, p. 189. - -def compute_follow(start=None): - # Add '$end' to the follow list of the start symbol - for k in Nonterminals.keys(): - Follow[k] = [ ] - - if not start: - start = Productions[1].name - - Follow[start] = [ '$end' ] - - while 1: - didadd = 0 - for p in Productions[1:]: - # Here is the production set - for i in range(len(p.prod)): - B = p.prod[i] - if Nonterminals.has_key(B): - # Okay. 
We got a non-terminal in a production - fst = first(p.prod[i+1:]) - hasempty = 0 - for f in fst: - if f != '' and f not in Follow[B]: - Follow[B].append(f) - didadd = 1 - if f == '': - hasempty = 1 - if hasempty or i == (len(p.prod)-1): - # Add elements of follow(a) to follow(b) - for f in Follow[p.name]: - if f not in Follow[B]: - Follow[B].append(f) - didadd = 1 - if not didadd: break - - if 0 and yaccdebug: - _vf.write('\nFollow:\n') - for k in Nonterminals.keys(): - _vf.write("%-20s : %s\n" % (k, " ".join([str(s) for s in Follow[k]]))) - -# ------------------------------------------------------------------------- -# compute_first1() -# -# Compute the value of FIRST1(X) for all symbols -# ------------------------------------------------------------------------- -def compute_first1(): - - # Terminals: - for t in Terminals.keys(): - First[t] = [t] - - First['$end'] = ['$end'] - First['#'] = ['#'] # what's this for? - - # Nonterminals: - - # Initialize to the empty set: - for n in Nonterminals.keys(): - First[n] = [] - - # Then propagate symbols until no change: - while 1: - some_change = 0 - for n in Nonterminals.keys(): - for p in Prodnames[n]: - for f in first(p.prod): - if f not in First[n]: - First[n].append( f ) - some_change = 1 - if not some_change: - break - - if 0 and yaccdebug: - _vf.write('\nFirst:\n') - for k in Nonterminals.keys(): - _vf.write("%-20s : %s\n" % - (k, " ".join([str(s) for s in First[k]]))) - -# ----------------------------------------------------------------------------- -# === SLR Generation === -# -# The following functions are used to construct SLR (Simple LR) parsing tables -# as described on p.221-229 of the dragon book. 
-# ----------------------------------------------------------------------------- - -# Global variables for the LR parsing engine -def lr_init_vars(): - global _lr_action, _lr_goto, _lr_method - global _lr_goto_cache, _lr0_cidhash - - _lr_action = { } # Action table - _lr_goto = { } # Goto table - _lr_method = "Unknown" # LR method used - _lr_goto_cache = { } - _lr0_cidhash = { } - - -# Compute the LR(0) closure operation on I, where I is a set of LR(0) items. -# prodlist is a list of productions. - -_add_count = 0 # Counter used to detect cycles - -def lr0_closure(I): - global _add_count - - _add_count += 1 - prodlist = Productions - - # Add everything in I to J - J = I[:] - didadd = 1 - while didadd: - didadd = 0 - for j in J: - for x in j.lrafter: - if x.lr0_added == _add_count: continue - # Add B --> .G to J - J.append(x.lr_next) - x.lr0_added = _add_count - didadd = 1 - - return J - -# Compute the LR(0) goto function goto(I,X) where I is a set -# of LR(0) items and X is a grammar symbol. This function is written -# in a way that guarantees uniqueness of the generated goto sets -# (i.e. the same goto set will never be returned as two different Python -# objects). With uniqueness, we can later do fast set comparisons using -# id(obj) instead of element-wise comparison. 
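The uniqueness guarantee described in the comment above is worth illustrating: if equal goto sets are always returned as the same Python object, later comparisons can use `id()` instead of element-wise equality. PLY keys a nested cache on item identities; the frozenset-keyed cache below is a deliberate simplification of that idea, with illustrative names:

```python
# Interning sketch: route every set through a cache keyed on its contents,
# so equal sets come back as the *same* object and id() comparison is valid.
# This is a simplification of lr0_goto's cache, not PLY's actual code.
_intern_cache = {}

def intern_set(items):
    key = frozenset(items)
    if key not in _intern_cache:
        _intern_cache[key] = tuple(sorted(items))
    return _intern_cache[key]

a = intern_set(['x', 'y'])
b = intern_set(['y', 'x'])
assert a is b   # identity comparison now suffices
```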
- -def lr0_goto(I,x): - # First we look for a previously cached entry - g = _lr_goto_cache.get((id(I),x),None) - if g: return g - - # Now we generate the goto set in a way that guarantees uniqueness - # of the result - - s = _lr_goto_cache.get(x,None) - if not s: - s = { } - _lr_goto_cache[x] = s - - gs = [ ] - for p in I: - n = p.lr_next - if n and n.lrbefore == x: - s1 = s.get(id(n),None) - if not s1: - s1 = { } - s[id(n)] = s1 - gs.append(n) - s = s1 - g = s.get('$end',None) - if not g: - if gs: - g = lr0_closure(gs) - s['$end'] = g - else: - s['$end'] = gs - _lr_goto_cache[(id(I),x)] = g - return g - -_lr0_cidhash = { } + # Then propagate termination until no change: + while 1: + some_change = 0 + for (n,pl) in self.Prodnames.items(): + # Nonterminal n terminates iff any of its productions terminates. + for p in pl: + # Production p terminates iff all of its rhs symbols terminate. + for s in p.prod: + if not terminates[s]: + # The symbol s does not terminate, + # so production p does not terminate. + p_terminates = 0 + break + else: + # didn't break from the loop, + # so every symbol s terminates + # so production p terminates. + p_terminates = 1 + + if p_terminates: + # symbol n terminates! + if not terminates[n]: + terminates[n] = 1 + some_change = 1 + # Don't need to consider any more productions for this n. + break -# Compute the LR(0) sets of item function -def lr0_items(): + if not some_change: + break - C = [ lr0_closure([Productions[0].lr_next]) ] - i = 0 - for I in C: - _lr0_cidhash[id(I)] = i - i += 1 + infinite = [] + for (s,term) in terminates.items(): + if not term: + if not s in self.Prodnames and not s in self.Terminals and s != 'error': + # s is used-but-not-defined, and we've already warned of that, + # so it would be overkill to say that it's also non-terminating. 
+ pass + else: + infinite.append(s) - # Loop over the items in C and each grammar symbols - i = 0 - while i < len(C): - I = C[i] - i += 1 + return infinite - # Collect all of the symbols that could possibly be in the goto(I,X) sets - asyms = { } - for ii in I: - for s in ii.usyms: - asyms[s] = None - for x in asyms.keys(): - g = lr0_goto(I,x) - if not g: continue - if _lr0_cidhash.has_key(id(g)): continue - _lr0_cidhash[id(g)] = len(C) - C.append(g) + # ----------------------------------------------------------------------------- + # undefined_symbols() + # + # Find all symbols that were used the grammar, but not defined as tokens or + # grammar rules. Returns a list of tuples (sym, prod) where sym in the symbol + # and prod is the production where the symbol was used. + # ----------------------------------------------------------------------------- + def undefined_symbols(self): + result = [] + for p in self.Productions: + if not p: continue - return C + for s in p.prod: + if not s in self.Prodnames and not s in self.Terminals and s != 'error': + result.append((s,p)) + return result -# ----------------------------------------------------------------------------- -# ==== LALR(1) Parsing ==== -# -# LALR(1) parsing is almost exactly the same as SLR except that instead of -# relying upon Follow() sets when performing reductions, a more selective -# lookahead set that incorporates the state of the LR(0) machine is utilized. -# Thus, we mainly just have to focus on calculating the lookahead sets. -# -# The method used here is due to DeRemer and Pennelo (1982). -# -# DeRemer, F. L., and T. J. Pennelo: "Efficient Computation of LALR(1) -# Lookahead Sets", ACM Transactions on Programming Languages and Systems, -# Vol. 4, No. 4, Oct. 1982, pp. 615-649 -# -# Further details can also be found in: -# -# J. Tremblay and P. Sorenson, "The Theory and Practice of Compiler Writing", -# McGraw-Hill Book Company, (1985). 
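The `undefined_symbols()` check added in this hunk is simple enough to demonstrate standalone. A sketch with hypothetical data shapes (plain `(name, rhs)` pairs rather than PLY's `Production` objects):

```python
# Toy version of the undefined_symbols() check: flag any symbol that is
# neither a defined rule nor a declared token, paired with the rule using it.
def undefined_symbols(productions, prodnames, terminals):
    result = []
    for name, rhs in productions:
        for s in rhs:
            if s not in prodnames and s not in terminals and s != 'error':
                result.append((s, name))
    return result

# 'term' is used in 'expr' but never defined as a rule or token.
prods = [('expr', ['term', 'PLUS', 'NUM'])]
bad = undefined_symbols(prods, {'expr'}, {'PLUS', 'NUM'})
```

Here `bad` comes back as `[('term', 'expr')]`.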
-# -# Note: This implementation is a complete replacement of the LALR(1) -# implementation in PLY-1.x releases. That version was based on -# a less efficient algorithm and it had bugs in its implementation. -# ----------------------------------------------------------------------------- + # ----------------------------------------------------------------------------- + # unused_terminals() + # + # Find all terminals that were defined, but not used by the grammar. Returns + # a list of all symbols. + # ----------------------------------------------------------------------------- + def unused_terminals(self): + unused_tok = [] + for s,v in self.Terminals.items(): + if s != 'error' and not v: + unused_tok.append(s) -# ----------------------------------------------------------------------------- -# compute_nullable_nonterminals() -# -# Creates a dictionary containing all of the non-terminals that might produce -# an empty production. -# ----------------------------------------------------------------------------- + return unused_tok -def compute_nullable_nonterminals(): - nullable = {} - num_nullable = 0 - while 1: - for p in Productions[1:]: - if p.len == 0: - nullable[p.name] = 1 - continue - for t in p.prod: - if not nullable.has_key(t): break - else: - nullable[p.name] = 1 - if len(nullable) == num_nullable: break - num_nullable = len(nullable) - return nullable + # ------------------------------------------------------------------------------ + # unused_rules() + # + # Find all grammar rules that were defined, but not used (maybe not reachable) + # Returns a list of productions. 
+ # ------------------------------------------------------------------------------ + + def unused_rules(self): + unused_prod = [] + for s,v in self.Nonterminals.items(): + if not v: + p = self.Prodnames[s][0] + unused_prod.append(p) + return unused_prod -# ----------------------------------------------------------------------------- -# find_nonterminal_trans(C) -# -# Given a set of LR(0) items, this functions finds all of the non-terminal -# transitions. These are transitions in which a dot appears immediately before -# a non-terminal. Returns a list of tuples of the form (state,N) where state -# is the state number and N is the nonterminal symbol. -# -# The input C is the set of LR(0) items. -# ----------------------------------------------------------------------------- + # ----------------------------------------------------------------------------- + # unused_precedence() + # + # Returns a list of tuples (term,precedence) corresponding to precedence + # rules that were never used by the grammar. term is the name of the terminal + # on which precedence was applied and precedence is a string such as 'left' or + # 'right' corresponding to the type of precedence. + # ----------------------------------------------------------------------------- + + def unused_precedence(self): + unused = [] + for termname in self.Precedence: + if not (termname in self.Terminals or termname in self.UsedPrecedence): + unused.append((termname,self.Precedence[termname][0])) + + return unused -def find_nonterminal_transitions(C): - trans = [] - for state in range(len(C)): - for p in C[state]: - if p.lr_index < p.len - 1: - t = (state,p.prod[p.lr_index+1]) - if Nonterminals.has_key(t[1]): - if t not in trans: trans.append(t) - state = state + 1 - return trans + # ------------------------------------------------------------------------- + # _first() + # + # Compute the value of FIRST1(beta) where beta is a tuple of symbols. 
+    #
+    # During execution of compute_first(), the result may be incomplete.
+    # Afterward (e.g., when called from compute_follow()), it will be complete.
+    # -------------------------------------------------------------------------
+    def _first(self,beta):
+
+        # We are computing First(x1,x2,x3,...,xn)
+        result = [ ]
+        for x in beta:
+            x_produces_empty = 0
+
+            # Add all the non-empty symbols of First[x] to the result.
+            for f in self.First[x]:
+                if f == '':
+                    x_produces_empty = 1
+                else:
+                    if f not in result: result.append(f)
-# -----------------------------------------------------------------------------
-# dr_relation()
-#
-# Computes the DR(p,A) relationships for non-terminal transitions. The input
-# is a tuple (state,N) where state is a number and N is a nonterminal symbol.
-#
-# Returns a list of terminals.
-# -----------------------------------------------------------------------------
+            if x_produces_empty:
+                # We have to consider the next x in beta,
+                # i.e. stay in the loop.
+                pass
+            else:
+                # We don't have to consider any further symbols in beta.
+                break
+        else:
+            # There was no 'break' from the loop,
+            # so x_produces_empty was true for all x in beta,
+            # so beta produces empty as well.
+ result.append('') -def dr_relation(C,trans,nullable): - dr_set = { } - state,N = trans - terms = [] + return result - g = lr0_goto(C[state],N) - for p in g: - if p.lr_index < p.len - 1: - a = p.prod[p.lr_index+1] - if Terminals.has_key(a): - if a not in terms: terms.append(a) + # ------------------------------------------------------------------------- + # compute_first() + # + # Compute the value of FIRST1(X) for all symbols + # ------------------------------------------------------------------------- + def compute_first(self): + if self.First: + return self.First - # This extra bit is to handle the start state - if state == 0 and N == Productions[0].prod[0]: - terms.append('$end') + # Terminals: + for t in self.Terminals: + self.First[t] = [t] - return terms + self.First['$end'] = ['$end'] -# ----------------------------------------------------------------------------- -# reads_relation() -# -# Computes the READS() relation (p,A) READS (t,C). -# ----------------------------------------------------------------------------- + # Nonterminals: -def reads_relation(C, trans, empty): - # Look for empty transitions - rel = [] - state, N = trans + # Initialize to the empty set: + for n in self.Nonterminals: + self.First[n] = [] - g = lr0_goto(C[state],N) - j = _lr0_cidhash.get(id(g),-1) - for p in g: - if p.lr_index < p.len - 1: - a = p.prod[p.lr_index + 1] - if empty.has_key(a): - rel.append((j,a)) + # Then propagate symbols until no change: + while 1: + some_change = 0 + for n in self.Nonterminals: + for p in self.Prodnames[n]: + for f in self._first(p.prod): + if f not in self.First[n]: + self.First[n].append( f ) + some_change = 1 + if not some_change: + break + + return self.First - return rel + # --------------------------------------------------------------------- + # compute_follow() + # + # Computes all of the follow sets for every non-terminal symbol. The + # follow set is the set of all symbols that might follow a given + # non-terminal. 
See the Dragon book, 2nd Ed. p. 189. + # --------------------------------------------------------------------- + def compute_follow(self,start=None): + # If already computed, return the result + if self.Follow: + return self.Follow + + # If first sets not computed yet, do that first. + if not self.First: + self.compute_first() + + # Add '$end' to the follow list of the start symbol + for k in self.Nonterminals: + self.Follow[k] = [ ] -# ----------------------------------------------------------------------------- -# compute_lookback_includes() -# -# Determines the lookback and includes relations -# -# LOOKBACK: -# -# This relation is determined by running the LR(0) state machine forward. -# For example, starting with a production "N : . A B C", we run it forward -# to obtain "N : A B C ." We then build a relationship between this final -# state and the starting state. These relationships are stored in a dictionary -# lookdict. -# -# INCLUDES: -# -# Computes the INCLUDE() relation (p,A) INCLUDES (p',B). -# -# This relation is used to determine non-terminal transitions that occur -# inside of other non-terminal transition states. (p,A) INCLUDES (p', B) -# if the following holds: -# -# B -> LAT, where T -> epsilon and p' -L-> p -# -# L is essentially a prefix (which may be empty), T is a suffix that must be -# able to derive an empty string. State p' must lead to state p with the string L. -# -# ----------------------------------------------------------------------------- + if not start: + start = self.Productions[1].name -def compute_lookback_includes(C,trans,nullable): + self.Follow[start] = [ '$end' ] - lookdict = {} # Dictionary of lookback relations - includedict = {} # Dictionary of include relations + while 1: + didadd = 0 + for p in self.Productions[1:]: + # Here is the production set + for i in range(len(p.prod)): + B = p.prod[i] + if B in self.Nonterminals: + # Okay. 
We got a non-terminal in a production + fst = self._first(p.prod[i+1:]) + hasempty = 0 + for f in fst: + if f != '' and f not in self.Follow[B]: + self.Follow[B].append(f) + didadd = 1 + if f == '': + hasempty = 1 + if hasempty or i == (len(p.prod)-1): + # Add elements of follow(a) to follow(b) + for f in self.Follow[p.name]: + if f not in self.Follow[B]: + self.Follow[B].append(f) + didadd = 1 + if not didadd: break + return self.Follow - # Make a dictionary of non-terminal transitions - dtrans = {} - for t in trans: - dtrans[t] = 1 - # Loop over all transitions and compute lookbacks and includes - for state,N in trans: - lookb = [] - includes = [] - for p in C[state]: - if p.name != N: continue + # ----------------------------------------------------------------------------- + # build_lritems() + # + # This function walks the list of productions and builds a complete set of the + # LR items. The LR items are stored in two ways: First, they are uniquely + # numbered and placed in the list _lritems. Second, a linked list of LR items + # is built for each production. For example: + # + # E -> E PLUS E + # + # Creates the list + # + # [E -> . E PLUS E, E -> E . PLUS E, E -> E PLUS . E, E -> E PLUS E . ] + # ----------------------------------------------------------------------------- - # Okay, we have a name match. We now follow the production all the way - # through the state machine until we get the . 
on the right hand side + def build_lritems(self): + for p in self.Productions: + lastlri = p + i = 0 + lr_items = [] + while 1: + if i > len(p): + lri = None + else: + lri = LRItem(p,i) + # Precompute the list of productions immediately following + try: + lri.lr_after = self.Prodnames[lri.prod[i+1]] + except (IndexError,KeyError): + lri.lr_after = [] + try: + lri.lr_before = lri.prod[i-1] + except IndexError: + lri.lr_before = None + + lastlri.lr_next = lri + if not lri: break + lr_items.append(lri) + lastlri = lri + i += 1 + p.lr_items = lr_items + +# ----------------------------------------------------------------------------- +# == Class LRTable == +# +# This basic class represents a basic table of LR parsing information. +# Methods for generating the tables are not defined here. They are defined +# in the derived class LRGeneratedTable. +# ----------------------------------------------------------------------------- + +class VersionError(YaccError): pass + +class LRTable(object): + def __init__(self): + self.lr_action = None + self.lr_goto = None + self.lr_productions = None + self.lr_method = None - lr_index = p.lr_index - j = state - while lr_index < p.len - 1: - lr_index = lr_index + 1 - t = p.prod[lr_index] + def read_table(self,module): + if isinstance(module,types.ModuleType): + parsetab = module + else: + if sys.version_info[0] < 3: + exec("import %s as parsetab" % module) + else: + env = { } + exec("import %s as parsetab" % module, env, env) + parsetab = env['parsetab'] - # Check to see if this symbol and state are a non-terminal transition - if dtrans.has_key((j,t)): - # Yes. 
Okay, there is some chance that this is an includes relation - # the only way to know for certain is whether the rest of the - # production derives empty + if parsetab._tabversion != __tabversion__: + raise VersionError("yacc table file version is out of date") - li = lr_index + 1 - while li < p.len: - if Terminals.has_key(p.prod[li]): break # No forget it - if not nullable.has_key(p.prod[li]): break - li = li + 1 - else: - # Appears to be a relation between (j,t) and (state,N) - includes.append((j,t)) + self.lr_action = parsetab._lr_action + self.lr_goto = parsetab._lr_goto - g = lr0_goto(C[j],t) # Go to next set - j = _lr0_cidhash.get(id(g),-1) # Go to next state + self.lr_productions = [] + for p in parsetab._lr_productions: + self.lr_productions.append(MiniProduction(*p)) - # When we get here, j is the final state, now we have to locate the production - for r in C[j]: - if r.name != p.name: continue - if r.len != p.len: continue - i = 0 - # This look is comparing a production ". A B C" with "A B C ." 
- while i < r.lr_index: - if r.prod[i] != p.prod[i+1]: break - i = i + 1 - else: - lookb.append((j,r)) - for i in includes: - if not includedict.has_key(i): includedict[i] = [] - includedict[i].append((state,N)) - lookdict[(state,N)] = lookb + self.lr_method = parsetab._lr_method + return parsetab._lr_signature - return lookdict,includedict + def read_pickle(self,filename): + try: + import cPickle as pickle + except ImportError: + import pickle + + in_f = open(filename,"rb") + + tabversion = pickle.load(in_f) + if tabversion != __tabversion__: + raise VersionError("yacc table file version is out of date") + self.lr_method = pickle.load(in_f) + signature = pickle.load(in_f) + self.lr_action = pickle.load(in_f) + self.lr_goto = pickle.load(in_f) + productions = pickle.load(in_f) + + self.lr_productions = [] + for p in productions: + self.lr_productions.append(MiniProduction(*p)) + + in_f.close() + return signature + + # Bind all production function names to callable objects in pdict + def bind_callables(self,pdict): + for p in self.lr_productions: + p.bind(pdict) + +# ----------------------------------------------------------------------------- +# === LR Generator === +# +# The following classes and functions are used to generate LR parsing tables on +# a grammar. +# ----------------------------------------------------------------------------- # ----------------------------------------------------------------------------- # digraph() @@ -2181,715 +1919,1358 @@ for a in F.get(y,[]): if a not in F[x]: F[x].append(a) if N[x] == d: - N[stack[-1]] = sys.maxint + N[stack[-1]] = MAXINT F[stack[-1]] = F[x] element = stack.pop() while element != x: - N[stack[-1]] = sys.maxint + N[stack[-1]] = MAXINT F[stack[-1]] = F[x] element = stack.pop() +class LALRError(YaccError): pass + # ----------------------------------------------------------------------------- -# compute_read_sets() +# == LRGeneratedTable == # -# Given a set of LR(0) items, this function computes the read sets. 
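The hunk above also swaps `sys.maxint` for a `MAXINT` constant inside `digraph()`, for Python 3 compatibility. For reference, the traversal that sentinel guards can be run standalone; a sketch with illustrative inputs, computing F(x) = FP(x) ∪ ⋃{F(y) : x R y} while collapsing cycles so every member of a strongly connected component shares one result:

```python
import sys

# Sketch of the digraph() set-union traversal (DeRemer/Pennello style).
# sys.maxsize plays the role of the patch's MAXINT sentinel.
def digraph(X, R, FP):
    N = dict((x, 0) for x in X)
    stack = []
    F = {}

    def traverse(x):
        stack.append(x)
        d = len(stack)
        N[x] = d
        F[x] = FP(x)
        for y in R(x):
            if N[y] == 0:
                traverse(y)
            N[x] = min(N[x], N[y])
            for a in F.get(y, []):
                if a not in F[x]:
                    F[x].append(a)
        if N[x] == d:                   # x is the root of an SCC
            N[stack[-1]] = sys.maxsize
            F[stack[-1]] = F[x]
            element = stack.pop()
            while element != x:
                N[stack[-1]] = sys.maxsize
                F[stack[-1]] = F[x]
                element = stack.pop()

    for x in X:
        if N[x] == 0:
            traverse(x)
    return F

rel = {'a': ['b'], 'b': ['a']}          # a and b form a cycle
F = digraph(['a', 'b'], lambda x: rel[x], lambda x: [x.upper()])
```

Because `a` and `b` are in one cycle, both end up with the union of their base sets.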
-# -# Inputs: C = Set of LR(0) items -# ntrans = Set of nonterminal transitions -# nullable = Set of empty transitions -# -# Returns a set containing the read sets +# This class implements the LR table generation algorithm. There are no +# public methods except for write() # ----------------------------------------------------------------------------- -def compute_read_sets(C, ntrans, nullable): - FP = lambda x: dr_relation(C,x,nullable) - R = lambda x: reads_relation(C,x,nullable) - F = digraph(ntrans,R,FP) - return F +class LRGeneratedTable(LRTable): + def __init__(self,grammar,method='LALR',log=None): + if method not in ['SLR','LALR']: + raise LALRError("Unsupported method %s" % method) + + self.grammar = grammar + self.lr_method = method + + # Set up the logger + if not log: + log = NullLogger() + self.log = log + + # Internal attributes + self.lr_action = {} # Action table + self.lr_goto = {} # Goto table + self.lr_productions = grammar.Productions # Copy of grammar Production array + self.lr_goto_cache = {} # Cache of computed gotos + self.lr0_cidhash = {} # Cache of closures + + self._add_count = 0 # Internal counter used to detect cycles + + # Diagonistic information filled in by the table generator + self.sr_conflict = 0 + self.rr_conflict = 0 + self.conflicts = [] # List of conflicts + + self.sr_conflicts = [] + self.rr_conflicts = [] + + # Build the tables + self.grammar.build_lritems() + self.grammar.compute_first() + self.grammar.compute_follow() + self.lr_parse_table() + + # Compute the LR(0) closure operation on I, where I is a set of LR(0) items. 
+ + def lr0_closure(self,I): + self._add_count += 1 + + # Add everything in I to J + J = I[:] + didadd = 1 + while didadd: + didadd = 0 + for j in J: + for x in j.lr_after: + if getattr(x,"lr0_added",0) == self._add_count: continue + # Add B --> .G to J + J.append(x.lr_next) + x.lr0_added = self._add_count + didadd = 1 + + return J + + # Compute the LR(0) goto function goto(I,X) where I is a set + # of LR(0) items and X is a grammar symbol. This function is written + # in a way that guarantees uniqueness of the generated goto sets + # (i.e. the same goto set will never be returned as two different Python + # objects). With uniqueness, we can later do fast set comparisons using + # id(obj) instead of element-wise comparison. + + def lr0_goto(self,I,x): + # First we look for a previously cached entry + g = self.lr_goto_cache.get((id(I),x),None) + if g: return g + + # Now we generate the goto set in a way that guarantees uniqueness + # of the result + + s = self.lr_goto_cache.get(x,None) + if not s: + s = { } + self.lr_goto_cache[x] = s -# ----------------------------------------------------------------------------- -# compute_follow_sets() -# -# Given a set of LR(0) items, a set of non-terminal transitions, a readset, -# and an include set, this function computes the follow sets -# -# Follow(p,A) = Read(p,A) U U {Follow(p',B) | (p,A) INCLUDES (p',B)} -# -# Inputs: -# ntrans = Set of nonterminal transitions -# readsets = Readset (previously computed) -# inclsets = Include sets (previously computed) -# -# Returns a set containing the follow sets -# ----------------------------------------------------------------------------- + gs = [ ] + for p in I: + n = p.lr_next + if n and n.lr_before == x: + s1 = s.get(id(n),None) + if not s1: + s1 = { } + s[id(n)] = s1 + gs.append(n) + s = s1 + g = s.get('$end',None) + if not g: + if gs: + g = self.lr0_closure(gs) + s['$end'] = g + else: + s['$end'] = gs + self.lr_goto_cache[(id(I),x)] = g + return g -def 
compute_follow_sets(ntrans,readsets,inclsets): - FP = lambda x: readsets[x] - R = lambda x: inclsets.get(x,[]) - F = digraph(ntrans,R,FP) - return F + # Compute the LR(0) sets of item function + def lr0_items(self): -# ----------------------------------------------------------------------------- -# add_lookaheads() -# -# Attaches the lookahead symbols to grammar rules. -# -# Inputs: lookbacks - Set of lookback relations -# followset - Computed follow set -# -# This function directly attaches the lookaheads to productions contained -# in the lookbacks set -# ----------------------------------------------------------------------------- + C = [ self.lr0_closure([self.grammar.Productions[0].lr_next]) ] + i = 0 + for I in C: + self.lr0_cidhash[id(I)] = i + i += 1 + + # Loop over the items in C and each grammar symbols + i = 0 + while i < len(C): + I = C[i] + i += 1 -def add_lookaheads(lookbacks,followset): - for trans,lb in lookbacks.items(): - # Loop over productions in lookback - for state,p in lb: - if not p.lookaheads.has_key(state): - p.lookaheads[state] = [] - f = followset.get(trans,[]) - for a in f: - if a not in p.lookaheads[state]: p.lookaheads[state].append(a) + # Collect all of the symbols that could possibly be in the goto(I,X) sets + asyms = { } + for ii in I: + for s in ii.usyms: + asyms[s] = None + + for x in asyms: + g = self.lr0_goto(I,x) + if not g: continue + if id(g) in self.lr0_cidhash: continue + self.lr0_cidhash[id(g)] = len(C) + C.append(g) -# ----------------------------------------------------------------------------- -# add_lalr_lookaheads() -# -# This function does all of the work of adding lookahead information for use -# with LALR parsing -# ----------------------------------------------------------------------------- + return C + + # ----------------------------------------------------------------------------- + # ==== LALR(1) Parsing ==== + # + # LALR(1) parsing is almost exactly the same as SLR except that instead of + # relying upon 
Follow() sets when performing reductions, a more selective + # lookahead set that incorporates the state of the LR(0) machine is utilized. + # Thus, we mainly just have to focus on calculating the lookahead sets. + # + # The method used here is due to DeRemer and Pennelo (1982). + # + # DeRemer, F. L., and T. J. Pennelo: "Efficient Computation of LALR(1) + # Lookahead Sets", ACM Transactions on Programming Languages and Systems, + # Vol. 4, No. 4, Oct. 1982, pp. 615-649 + # + # Further details can also be found in: + # + # J. Tremblay and P. Sorenson, "The Theory and Practice of Compiler Writing", + # McGraw-Hill Book Company, (1985). + # + # ----------------------------------------------------------------------------- + + # ----------------------------------------------------------------------------- + # compute_nullable_nonterminals() + # + # Creates a dictionary containing all of the non-terminals that might produce + # an empty production. + # ----------------------------------------------------------------------------- + + def compute_nullable_nonterminals(self): + nullable = {} + num_nullable = 0 + while 1: + for p in self.grammar.Productions[1:]: + if p.len == 0: + nullable[p.name] = 1 + continue + for t in p.prod: + if not t in nullable: break + else: + nullable[p.name] = 1 + if len(nullable) == num_nullable: break + num_nullable = len(nullable) + return nullable -def add_lalr_lookaheads(C): - # Determine all of the nullable nonterminals - nullable = compute_nullable_nonterminals() + # ----------------------------------------------------------------------------- + # find_nonterminal_trans(C) + # + # Given a set of LR(0) items, this functions finds all of the non-terminal + # transitions. These are transitions in which a dot appears immediately before + # a non-terminal. Returns a list of tuples of the form (state,N) where state + # is the state number and N is the nonterminal symbol. + # + # The input C is the set of LR(0) items. 
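The nullable computation added just above is a classic fixed-point iteration; it can be exercised on a toy grammar. A sketch, with an illustrative `(name, rhs)` list in place of PLY's `Production` objects:

```python
# Standalone analogue of compute_nullable_nonterminals(): iterate to a fixed
# point, marking any rule whose whole right-hand side is already nullable
# (an empty rhs is trivially nullable).
def nullable_nonterminals(productions):
    nullable = set()
    while True:
        before = len(nullable)
        for name, rhs in productions:
            if all(s in nullable for s in rhs):
                nullable.add(name)
        if len(nullable) == before:
            return nullable

prods = [('opt_sign', []),                    # nullable directly
         ('num', ['opt_sign', 'DIGITS']),     # blocked by terminal DIGITS
         ('pair', ['opt_sign', 'opt_sign'])]  # nullable transitively
```

This yields `{'opt_sign', 'pair'}`; `num` stays out because `DIGITS` can never vanish.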
+ # ----------------------------------------------------------------------------- - # Find all non-terminal transitions - trans = find_nonterminal_transitions(C) + def find_nonterminal_transitions(self,C): + trans = [] + for state in range(len(C)): + for p in C[state]: + if p.lr_index < p.len - 1: + t = (state,p.prod[p.lr_index+1]) + if t[1] in self.grammar.Nonterminals: + if t not in trans: trans.append(t) + state = state + 1 + return trans - # Compute read sets - readsets = compute_read_sets(C,trans,nullable) + # ----------------------------------------------------------------------------- + # dr_relation() + # + # Computes the DR(p,A) relationships for non-terminal transitions. The input + # is a tuple (state,N) where state is a number and N is a nonterminal symbol. + # + # Returns a list of terminals. + # ----------------------------------------------------------------------------- - # Compute lookback/includes relations - lookd, included = compute_lookback_includes(C,trans,nullable) + def dr_relation(self,C,trans,nullable): + dr_set = { } + state,N = trans + terms = [] + + g = self.lr0_goto(C[state],N) + for p in g: + if p.lr_index < p.len - 1: + a = p.prod[p.lr_index+1] + if a in self.grammar.Terminals: + if a not in terms: terms.append(a) + + # This extra bit is to handle the start state + if state == 0 and N == self.grammar.Productions[0].prod[0]: + terms.append('$end') - # Compute LALR FOLLOW sets - followsets = compute_follow_sets(trans,readsets,included) + return terms - # Add all of the lookaheads - add_lookaheads(lookd,followsets) + # ----------------------------------------------------------------------------- + # reads_relation() + # + # Computes the READS() relation (p,A) READS (t,C). 
+ # ----------------------------------------------------------------------------- -# ----------------------------------------------------------------------------- -# lr_parse_table() -# -# This function constructs the parse tables for SLR or LALR -# ----------------------------------------------------------------------------- -def lr_parse_table(method): - global _lr_method - goto = _lr_goto # Goto array - action = _lr_action # Action array - actionp = { } # Action production array (temporary) + def reads_relation(self,C, trans, empty): + # Look for empty transitions + rel = [] + state, N = trans + + g = self.lr0_goto(C[state],N) + j = self.lr0_cidhash.get(id(g),-1) + for p in g: + if p.lr_index < p.len - 1: + a = p.prod[p.lr_index + 1] + if a in empty: + rel.append((j,a)) - _lr_method = method + return rel - n_srconflict = 0 - n_rrconflict = 0 + # ----------------------------------------------------------------------------- + # compute_lookback_includes() + # + # Determines the lookback and includes relations + # + # LOOKBACK: + # + # This relation is determined by running the LR(0) state machine forward. + # For example, starting with a production "N : . A B C", we run it forward + # to obtain "N : A B C ." We then build a relationship between this final + # state and the starting state. These relationships are stored in a dictionary + # lookdict. + # + # INCLUDES: + # + # Computes the INCLUDE() relation (p,A) INCLUDES (p',B). + # + # This relation is used to determine non-terminal transitions that occur + # inside of other non-terminal transition states. (p,A) INCLUDES (p', B) + # if the following holds: + # + # B -> LAT, where T -> epsilon and p' -L-> p + # + # L is essentially a prefix (which may be empty), T is a suffix that must be + # able to derive an empty string. State p' must lead to state p with the string L. 
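Both compute_read_sets() and compute_follow_sets() below hand their relations to a digraph() helper that is defined elsewhere in yacc.py (not shown in this hunk). A sketch of that DeRemer-Pennello style traversal, which unions the FP values along the relation R and collapses strongly connected components so their members share one result; the exact shape here is an assumption, not a verbatim copy:

```python
MAXINT = 2 ** 31 - 1

def digraph(X, R, FP):
    """For each x in X, compute F(x) = FP(x) unioned with F(y) for every
    y with y in R(x), merging cycles into a single shared result."""
    N = dict((x, 0) for x in X)
    stack, F = [], {}

    def traverse(x):
        stack.append(x)
        d = len(stack)
        N[x] = d
        F[x] = FP(x)[:]              # F(x) starts as a copy of FP(x)
        for y in R(x):
            if N[y] == 0:
                traverse(y)
            N[x] = min(N[x], N[y])
            for a in F.get(y, []):
                if a not in F[x]:
                    F[x].append(a)
        if N[x] == d:                # x is the root of an SCC: unwind it
            while True:
                z = stack.pop()
                N[z] = MAXINT
                F[z] = F[x]
                if z == x:
                    break

    for x in X:
        if N[x] == 0:
            traverse(x)
    return F
```

The SCC merging is what makes this linear-time: in a cycle such as b -> c -> b, both nodes end up with the same follow/read set rather than being recomputed.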
+ # + # ----------------------------------------------------------------------------- - if yaccdebug: - sys.stderr.write("yacc: Generating %s parsing table...\n" % method) - _vf.write("\n\nParsing method: %s\n\n" % method) + def compute_lookback_includes(self,C,trans,nullable): - # Step 1: Construct C = { I0, I1, ... IN}, collection of LR(0) items - # This determines the number of states + lookdict = {} # Dictionary of lookback relations + includedict = {} # Dictionary of include relations - C = lr0_items() + # Make a dictionary of non-terminal transitions + dtrans = {} + for t in trans: + dtrans[t] = 1 + + # Loop over all transitions and compute lookbacks and includes + for state,N in trans: + lookb = [] + includes = [] + for p in C[state]: + if p.name != N: continue + + # Okay, we have a name match. We now follow the production all the way + # through the state machine until we get the . on the right hand side + + lr_index = p.lr_index + j = state + while lr_index < p.len - 1: + lr_index = lr_index + 1 + t = p.prod[lr_index] + + # Check to see if this symbol and state are a non-terminal transition + if (j,t) in dtrans: + # Yes. Okay, there is some chance that this is an includes relation + # the only way to know for certain is whether the rest of the + # production derives empty + + li = lr_index + 1 + while li < p.len: + if p.prod[li] in self.grammar.Terminals: break # No forget it + if not p.prod[li] in nullable: break + li = li + 1 + else: + # Appears to be a relation between (j,t) and (state,N) + includes.append((j,t)) + + g = self.lr0_goto(C[j],t) # Go to next set + j = self.lr0_cidhash.get(id(g),-1) # Go to next state + + # When we get here, j is the final state, now we have to locate the production + for r in C[j]: + if r.name != p.name: continue + if r.len != p.len: continue + i = 0 + # This look is comparing a production ". A B C" with "A B C ." 
+ while i < r.lr_index: + if r.prod[i] != p.prod[i+1]: break + i = i + 1 + else: + lookb.append((j,r)) + for i in includes: + if not i in includedict: includedict[i] = [] + includedict[i].append((state,N)) + lookdict[(state,N)] = lookb - if method == 'LALR': - add_lalr_lookaheads(C) + return lookdict,includedict + # ----------------------------------------------------------------------------- + # compute_read_sets() + # + # Given a set of LR(0) items, this function computes the read sets. + # + # Inputs: C = Set of LR(0) items + # ntrans = Set of nonterminal transitions + # nullable = Set of empty transitions + # + # Returns a set containing the read sets + # ----------------------------------------------------------------------------- + + def compute_read_sets(self,C, ntrans, nullable): + FP = lambda x: self.dr_relation(C,x,nullable) + R = lambda x: self.reads_relation(C,x,nullable) + F = digraph(ntrans,R,FP) + return F - # Build the parser table, state by state - st = 0 - for I in C: - # Loop over each production in I - actlist = [ ] # List of actions - st_action = { } - st_actionp = { } - st_goto = { } - if yaccdebug: - _vf.write("\nstate %d\n\n" % st) + # ----------------------------------------------------------------------------- + # compute_follow_sets() + # + # Given a set of LR(0) items, a set of non-terminal transitions, a readset, + # and an include set, this function computes the follow sets + # + # Follow(p,A) = Read(p,A) U U {Follow(p',B) | (p,A) INCLUDES (p',B)} + # + # Inputs: + # ntrans = Set of nonterminal transitions + # readsets = Readset (previously computed) + # inclsets = Include sets (previously computed) + # + # Returns a set containing the follow sets + # ----------------------------------------------------------------------------- + + def compute_follow_sets(self,ntrans,readsets,inclsets): + FP = lambda x: readsets[x] + R = lambda x: inclsets.get(x,[]) + F = digraph(ntrans,R,FP) + return F + + # 
----------------------------------------------------------------------------- + # add_lookaheads() + # + # Attaches the lookahead symbols to grammar rules. + # + # Inputs: lookbacks - Set of lookback relations + # followset - Computed follow set + # + # This function directly attaches the lookaheads to productions contained + # in the lookbacks set + # ----------------------------------------------------------------------------- + + def add_lookaheads(self,lookbacks,followset): + for trans,lb in lookbacks.items(): + # Loop over productions in lookback + for state,p in lb: + if not state in p.lookaheads: + p.lookaheads[state] = [] + f = followset.get(trans,[]) + for a in f: + if a not in p.lookaheads[state]: p.lookaheads[state].append(a) + + # ----------------------------------------------------------------------------- + # add_lalr_lookaheads() + # + # This function does all of the work of adding lookahead information for use + # with LALR parsing + # ----------------------------------------------------------------------------- + + def add_lalr_lookaheads(self,C): + # Determine all of the nullable nonterminals + nullable = self.compute_nullable_nonterminals() + + # Find all non-terminal transitions + trans = self.find_nonterminal_transitions(C) + + # Compute read sets + readsets = self.compute_read_sets(C,trans,nullable) + + # Compute lookback/includes relations + lookd, included = self.compute_lookback_includes(C,trans,nullable) + + # Compute LALR FOLLOW sets + followsets = self.compute_follow_sets(trans,readsets,included) + + # Add all of the lookaheads + self.add_lookaheads(lookd,followsets) + + # ----------------------------------------------------------------------------- + # lr_parse_table() + # + # This function constructs the parse tables for SLR or LALR + # ----------------------------------------------------------------------------- + def lr_parse_table(self): + Productions = self.grammar.Productions + Precedence = self.grammar.Precedence + goto = 
self.lr_goto # Goto array + action = self.lr_action # Action array + log = self.log # Logger for output + + actionp = { } # Action production array (temporary) + + log.info("Parsing method: %s", self.lr_method) + + # Step 1: Construct C = { I0, I1, ... IN}, collection of LR(0) items + # This determines the number of states + + C = self.lr0_items() + + if self.lr_method == 'LALR': + self.add_lalr_lookaheads(C) + + # Build the parser table, state by state + st = 0 + for I in C: + # Loop over each production in I + actlist = [ ] # List of actions + st_action = { } + st_actionp = { } + st_goto = { } + log.info("") + log.info("state %d", st) + log.info("") for p in I: - _vf.write(" (%d) %s\n" % (p.number, str(p))) - _vf.write("\n") + log.info(" (%d) %s", p.number, str(p)) + log.info("") - for p in I: - try: - if p.len == p.lr_index + 1: - if p.name == "S'": - # Start symbol. Accept! - st_action["$end"] = 0 - st_actionp["$end"] = p - else: - # We are at the end of a production. Reduce! - if method == 'LALR': - laheads = p.lookaheads[st] + for p in I: + if p.len == p.lr_index + 1: + if p.name == "S'": + # Start symbol. Accept! + st_action["$end"] = 0 + st_actionp["$end"] = p else: - laheads = Follow[p.name] - for a in laheads: - actlist.append((a,p,"reduce using rule %d (%s)" % (p.number,p))) - r = st_action.get(a,None) - if r is not None: - # Whoa. Have a shift/reduce or reduce/reduce conflict - if r > 0: - # Need to decide on shift or reduce here - # By default we favor shifting. Need to add - # some precedence rules here. - sprec,slevel = Productions[st_actionp[a].number].prec - rprec,rlevel = Precedence.get(a,('right',0)) - if (slevel < rlevel) or ((slevel == rlevel) and (rprec == 'left')): - # We really need to reduce here. - st_action[a] = -p.number - st_actionp[a] = p - if not slevel and not rlevel: - _vfc.write("shift/reduce conflict in state %d resolved as reduce.\n" % st) - _vf.write(" ! 
shift/reduce conflict for %s resolved as reduce.\n" % a) - n_srconflict += 1 - elif (slevel == rlevel) and (rprec == 'nonassoc'): - st_action[a] = None - else: - # Hmmm. Guess we'll keep the shift - if not rlevel: - _vfc.write("shift/reduce conflict in state %d resolved as shift.\n" % st) - _vf.write(" ! shift/reduce conflict for %s resolved as shift.\n" % a) - n_srconflict +=1 - elif r < 0: - # Reduce/reduce conflict. In this case, we favor the rule - # that was defined first in the grammar file - oldp = Productions[-r] - pp = Productions[p.number] - if oldp.line > pp.line: - st_action[a] = -p.number - st_actionp[a] = p - # sys.stderr.write("Reduce/reduce conflict in state %d\n" % st) - n_rrconflict += 1 - _vfc.write("reduce/reduce conflict in state %d resolved using rule %d (%s).\n" % (st, st_actionp[a].number, st_actionp[a])) - _vf.write(" ! reduce/reduce conflict for %s resolved using rule %d (%s).\n" % (a,st_actionp[a].number, st_actionp[a])) - else: - sys.stderr.write("Unknown conflict in state %d\n" % st) + # We are at the end of a production. Reduce! + if self.lr_method == 'LALR': + laheads = p.lookaheads[st] else: - st_action[a] = -p.number - st_actionp[a] = p - else: - i = p.lr_index - a = p.prod[i+1] # Get symbol right after the "." - if Terminals.has_key(a): - g = lr0_goto(I,a) - j = _lr0_cidhash.get(id(g),-1) - if j >= 0: - # We are in a shift state - actlist.append((a,p,"shift and go to state %d" % j)) - r = st_action.get(a,None) - if r is not None: - # Whoa have a shift/reduce or shift/shift conflict - if r > 0: - if r != j: - sys.stderr.write("Shift/shift conflict in state %d\n" % st) - elif r < 0: - # Do a precedence check. - # - if precedence of reduce rule is higher, we reduce. - # - if precedence of reduce is same and left assoc, we reduce. 
- # - otherwise we shift - rprec,rlevel = Productions[st_actionp[a].number].prec - sprec,slevel = Precedence.get(a,('right',0)) - if (slevel > rlevel) or ((slevel == rlevel) and (rprec == 'right')): - # We decide to shift here... highest precedence to shift - st_action[a] = j - st_actionp[a] = p - if not rlevel: - n_srconflict += 1 - _vfc.write("shift/reduce conflict in state %d resolved as shift.\n" % st) - _vf.write(" ! shift/reduce conflict for %s resolved as shift.\n" % a) - elif (slevel == rlevel) and (rprec == 'nonassoc'): - st_action[a] = None + laheads = self.grammar.Follow[p.name] + for a in laheads: + actlist.append((a,p,"reduce using rule %d (%s)" % (p.number,p))) + r = st_action.get(a,None) + if r is not None: + # Whoa. Have a shift/reduce or reduce/reduce conflict + if r > 0: + # Need to decide on shift or reduce here + # By default we favor shifting. Need to add + # some precedence rules here. + sprec,slevel = Productions[st_actionp[a].number].prec + rprec,rlevel = Precedence.get(a,('right',0)) + if (slevel < rlevel) or ((slevel == rlevel) and (rprec == 'left')): + # We really need to reduce here. + st_action[a] = -p.number + st_actionp[a] = p + if not slevel and not rlevel: + log.info(" ! shift/reduce conflict for %s resolved as reduce",a) + self.sr_conflicts.append((st,a,'reduce')) + Productions[p.number].reduced += 1 + elif (slevel == rlevel) and (rprec == 'nonassoc'): + st_action[a] = None + else: + # Hmmm. Guess we'll keep the shift + if not rlevel: + log.info(" ! shift/reduce conflict for %s resolved as shift",a) + self.sr_conflicts.append((st,a,'shift')) + elif r < 0: + # Reduce/reduce conflict. 
In this case, we favor the rule + # that was defined first in the grammar file + oldp = Productions[-r] + pp = Productions[p.number] + if oldp.line > pp.line: + st_action[a] = -p.number + st_actionp[a] = p + chosenp,rejectp = pp,oldp + Productions[p.number].reduced += 1 + Productions[oldp.number].reduced -= 1 + else: + chosenp,rejectp = oldp,pp + self.rr_conflicts.append((st,chosenp,rejectp)) + log.info(" ! reduce/reduce conflict for %s resolved using rule %d (%s)", a,st_actionp[a].number, st_actionp[a]) else: - # Hmmm. Guess we'll keep the reduce - if not slevel and not rlevel: - n_srconflict +=1 - _vfc.write("shift/reduce conflict in state %d resolved as reduce.\n" % st) - _vf.write(" ! shift/reduce conflict for %s resolved as reduce.\n" % a) + raise LALRError("Unknown conflict in state %d" % st) + else: + st_action[a] = -p.number + st_actionp[a] = p + Productions[p.number].reduced += 1 + else: + i = p.lr_index + a = p.prod[i+1] # Get symbol right after the "." + if a in self.grammar.Terminals: + g = self.lr0_goto(I,a) + j = self.lr0_cidhash.get(id(g),-1) + if j >= 0: + # We are in a shift state + actlist.append((a,p,"shift and go to state %d" % j)) + r = st_action.get(a,None) + if r is not None: + # Whoa have a shift/reduce or shift/shift conflict + if r > 0: + if r != j: + raise LALRError("Shift/shift conflict in state %d" % st) + elif r < 0: + # Do a precedence check. + # - if precedence of reduce rule is higher, we reduce. + # - if precedence of reduce is same and left assoc, we reduce. + # - otherwise we shift + rprec,rlevel = Productions[st_actionp[a].number].prec + sprec,slevel = Precedence.get(a,('right',0)) + if (slevel > rlevel) or ((slevel == rlevel) and (rprec == 'right')): + # We decide to shift here... highest precedence to shift + Productions[st_actionp[a].number].reduced -= 1 + st_action[a] = j + st_actionp[a] = p + if not rlevel: + log.info(" ! 
shift/reduce conflict for %s resolved as shift",a) + self.sr_conflicts.append((st,a,'shift')) + elif (slevel == rlevel) and (rprec == 'nonassoc'): + st_action[a] = None + else: + # Hmmm. Guess we'll keep the reduce + if not slevel and not rlevel: + log.info(" ! shift/reduce conflict for %s resolved as reduce",a) + self.sr_conflicts.append((st,a,'reduce')) + else: + raise LALRError("Unknown conflict in state %d" % st) else: - sys.stderr.write("Unknown conflict in state %d\n" % st) - else: - st_action[a] = j - st_actionp[a] = p + st_action[a] = j + st_actionp[a] = p - except StandardError,e: - print sys.exc_info() - raise YaccError, "Hosed in lr_parse_table" - - # Print the actions associated with each terminal - if yaccdebug: - _actprint = { } - for a,p,m in actlist: - if st_action.has_key(a): - if p is st_actionp[a]: - _vf.write(" %-15s %s\n" % (a,m)) - _actprint[(a,m)] = 1 - _vf.write("\n") - for a,p,m in actlist: - if st_action.has_key(a): - if p is not st_actionp[a]: - if not _actprint.has_key((a,m)): - _vf.write(" ! %-15s [ %s ]\n" % (a,m)) + # Print the actions associated with each terminal + _actprint = { } + for a,p,m in actlist: + if a in st_action: + if p is st_actionp[a]: + log.info(" %-15s %s",a,m) _actprint[(a,m)] = 1 + log.info("") + # Print the actions that were not used. (debugging) + not_used = 0 + for a,p,m in actlist: + if a in st_action: + if p is not st_actionp[a]: + if not (a,m) in _actprint: + log.debug(" ! 
%-15s [ %s ]",a,m) + not_used = 1 + _actprint[(a,m)] = 1 + if not_used: + log.debug("") + + # Construct the goto table for this state + + nkeys = { } + for ii in I: + for s in ii.usyms: + if s in self.grammar.Nonterminals: + nkeys[s] = None + for n in nkeys: + g = self.lr0_goto(I,n) + j = self.lr0_cidhash.get(id(g),-1) + if j >= 0: + st_goto[n] = j + log.info(" %-30s shift and go to state %d",n,j) + + action[st] = st_action + actionp[st] = st_actionp + goto[st] = st_goto + st += 1 - # Construct the goto table for this state - if yaccdebug: - _vf.write("\n") - nkeys = { } - for ii in I: - for s in ii.usyms: - if Nonterminals.has_key(s): - nkeys[s] = None - for n in nkeys.keys(): - g = lr0_goto(I,n) - j = _lr0_cidhash.get(id(g),-1) - if j >= 0: - st_goto[n] = j - if yaccdebug: - _vf.write(" %-30s shift and go to state %d\n" % (n,j)) - - action[st] = st_action - actionp[st] = st_actionp - goto[st] = st_goto - - st += 1 - - if yaccdebug: - if n_srconflict == 1: - sys.stderr.write("yacc: %d shift/reduce conflict\n" % n_srconflict) - if n_srconflict > 1: - sys.stderr.write("yacc: %d shift/reduce conflicts\n" % n_srconflict) - if n_rrconflict == 1: - sys.stderr.write("yacc: %d reduce/reduce conflict\n" % n_rrconflict) - if n_rrconflict > 1: - sys.stderr.write("yacc: %d reduce/reduce conflicts\n" % n_rrconflict) - -# ----------------------------------------------------------------------------- -# ==== LR Utility functions ==== -# ----------------------------------------------------------------------------- -# ----------------------------------------------------------------------------- -# _lr_write_tables() -# -# This function writes the LR parsing tables to a file -# ----------------------------------------------------------------------------- - -def lr_write_tables(modulename=tab_module,outputdir=''): - if isinstance(modulename, types.ModuleType): - print >>sys.stderr, "Warning module %s is inconsistent with the grammar (ignored)" % modulename - return + # 
----------------------------------------------------------------------------- + # write() + # + # This function writes the LR parsing tables to a file + # ----------------------------------------------------------------------------- - basemodulename = modulename.split(".")[-1] - filename = os.path.join(outputdir,basemodulename) + ".py" - try: - f = open(filename,"w") + def write_table(self,modulename,outputdir='',signature=""): + basemodulename = modulename.split(".")[-1] + filename = os.path.join(outputdir,basemodulename) + ".py" + try: + f = open(filename,"w") - f.write(""" + f.write(""" # %s # This file is automatically generated. Do not edit. +_tabversion = %r -_lr_method = %s +_lr_method = %r -_lr_signature = %s -""" % (filename, repr(_lr_method), repr(Signature.digest()))) +_lr_signature = %r + """ % (filename, __tabversion__, self.lr_method, signature)) - # Change smaller to 0 to go back to original tables - smaller = 1 + # Change smaller to 0 to go back to original tables + smaller = 1 - # Factor out names to try and make smaller - if smaller: - items = { } - - for s,nd in _lr_action.items(): - for name,v in nd.items(): - i = items.get(name) - if not i: - i = ([],[]) - items[name] = i - i[0].append(s) - i[1].append(v) - - f.write("\n_lr_action_items = {") - for k,v in items.items(): - f.write("%r:([" % k) - for i in v[0]: - f.write("%r," % i) - f.write("],[") - for i in v[1]: - f.write("%r," % i) + # Factor out names to try and make smaller + if smaller: + items = { } - f.write("]),") - f.write("}\n") + for s,nd in self.lr_action.items(): + for name,v in nd.items(): + i = items.get(name) + if not i: + i = ([],[]) + items[name] = i + i[0].append(s) + i[1].append(v) + + f.write("\n_lr_action_items = {") + for k,v in items.items(): + f.write("%r:([" % k) + for i in v[0]: + f.write("%r," % i) + f.write("],[") + for i in v[1]: + f.write("%r," % i) - f.write(""" + f.write("]),") + f.write("}\n") + + f.write(""" _lr_action = { } for _k, _v in 
_lr_action_items.items(): for _x,_y in zip(_v[0],_v[1]): - if not _lr_action.has_key(_x): _lr_action[_x] = { } + if not _x in _lr_action: _lr_action[_x] = { } _lr_action[_x][_k] = _y del _lr_action_items """) - else: - f.write("\n_lr_action = { "); - for k,v in _lr_action.items(): - f.write("(%r,%r):%r," % (k[0],k[1],v)) - f.write("}\n"); - - if smaller: - # Factor out names to try and make smaller - items = { } - - for s,nd in _lr_goto.items(): - for name,v in nd.items(): - i = items.get(name) - if not i: - i = ([],[]) - items[name] = i - i[0].append(s) - i[1].append(v) - - f.write("\n_lr_goto_items = {") - for k,v in items.items(): - f.write("%r:([" % k) - for i in v[0]: - f.write("%r," % i) - f.write("],[") - for i in v[1]: - f.write("%r," % i) + else: + f.write("\n_lr_action = { "); + for k,v in self.lr_action.items(): + f.write("(%r,%r):%r," % (k[0],k[1],v)) + f.write("}\n"); + + if smaller: + # Factor out names to try and make smaller + items = { } + + for s,nd in self.lr_goto.items(): + for name,v in nd.items(): + i = items.get(name) + if not i: + i = ([],[]) + items[name] = i + i[0].append(s) + i[1].append(v) + + f.write("\n_lr_goto_items = {") + for k,v in items.items(): + f.write("%r:([" % k) + for i in v[0]: + f.write("%r," % i) + f.write("],[") + for i in v[1]: + f.write("%r," % i) - f.write("]),") - f.write("}\n") + f.write("]),") + f.write("}\n") - f.write(""" + f.write(""" _lr_goto = { } for _k, _v in _lr_goto_items.items(): for _x,_y in zip(_v[0],_v[1]): - if not _lr_goto.has_key(_x): _lr_goto[_x] = { } + if not _x in _lr_goto: _lr_goto[_x] = { } _lr_goto[_x][_k] = _y del _lr_goto_items """) - else: - f.write("\n_lr_goto = { "); - for k,v in _lr_goto.items(): - f.write("(%r,%r):%r," % (k[0],k[1],v)) - f.write("}\n"); - - # Write production table - f.write("_lr_productions = [\n") - for p in Productions: - if p: - if (p.func): - f.write(" (%r,%d,%r,%r,%d),\n" % (p.name, p.len, p.func.__name__,p.file,p.line)) - else: - f.write(" 
(%r,%d,None,None,None),\n" % (p.name, p.len)) else: - f.write(" None,\n") - f.write("]\n") - - f.close() - - except IOError,e: - print >>sys.stderr, "Unable to create '%s'" % filename - print >>sys.stderr, e - return - -def lr_read_tables(module=tab_module,optimize=0): - global _lr_action, _lr_goto, _lr_productions, _lr_method - try: - if isinstance(module,types.ModuleType): - parsetab = module - else: - exec "import %s as parsetab" % module + f.write("\n_lr_goto = { "); + for k,v in self.lr_goto.items(): + f.write("(%r,%r):%r," % (k[0],k[1],v)) + f.write("}\n"); + + # Write production table + f.write("_lr_productions = [\n") + for p in self.lr_productions: + if p.func: + f.write(" (%r,%r,%d,%r,%r,%d),\n" % (p.str,p.name, p.len, p.func,p.file,p.line)) + else: + f.write(" (%r,%r,%d,None,None,None),\n" % (str(p),p.name, p.len)) + f.write("]\n") + f.close() + + except IOError: + e = sys.exc_info()[1] + sys.stderr.write("Unable to create '%s'\n" % filename) + sys.stderr.write(str(e)+"\n") + return - if (optimize) or (Signature.digest() == parsetab._lr_signature): - _lr_action = parsetab._lr_action - _lr_goto = parsetab._lr_goto - _lr_productions = parsetab._lr_productions - _lr_method = parsetab._lr_method - return 1 - else: - return 0 - except (ImportError,AttributeError): - return 0 + # ----------------------------------------------------------------------------- + # pickle_table() + # + # This function pickles the LR parsing tables to a supplied file object + # ----------------------------------------------------------------------------- + def pickle_table(self,filename,signature=""): + try: + import cPickle as pickle + except ImportError: + import pickle + outf = open(filename,"wb") + pickle.dump(__tabversion__,outf,pickle_protocol) + pickle.dump(self.lr_method,outf,pickle_protocol) + pickle.dump(signature,outf,pickle_protocol) + pickle.dump(self.lr_action,outf,pickle_protocol) + pickle.dump(self.lr_goto,outf,pickle_protocol) + + outp = [] + for p in 
self.lr_productions: + if p.func: + outp.append((p.str,p.name, p.len, p.func,p.file,p.line)) + else: + outp.append((str(p),p.name,p.len,None,None,None)) + pickle.dump(outp,outf,pickle_protocol) + outf.close() # ----------------------------------------------------------------------------- -# yacc(module) +# === INTROSPECTION === # -# Build the parser module +# The following functions and classes are used to implement the PLY +# introspection features followed by the yacc() function itself. # ----------------------------------------------------------------------------- -def yacc(method=default_lr, debug=yaccdebug, module=None, tabmodule=tab_module, start=None, check_recursion=1, optimize=0,write_tables=1,debugfile=debug_file,outputdir=''): - global yaccdebug - yaccdebug = debug +# ----------------------------------------------------------------------------- +# get_caller_module_dict() +# +# This function returns a dictionary containing all of the symbols defined within +# a caller further down the call stack. This is used to get the environment +# associated with the yacc() call if none was provided. 
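pickle_table() above writes one object per pickle.dump() call, so a consumer has to load them back in exactly the same order. A sketch of a matching writer/reader pair; the field order mirrors the code above, while the version string and table contents here are made-up stand-ins, not PLY's real values:

```python
import io
import pickle

def write_tables(outf, method, signature, action, goto, productions):
    # One dump() per field, mirroring pickle_table()'s write order.
    for obj in ("tab-version-stub", method, signature,
                action, goto, productions):
        pickle.dump(obj, outf)

def read_tables(inf):
    # load() calls must match the dump() order exactly.
    fields = [pickle.load(inf) for _ in range(6)]
    version, method, signature, action, goto, productions = fields
    return {"version": version, "method": method, "signature": signature,
            "action": action, "goto": goto, "productions": productions}

buf = io.BytesIO()
write_tables(buf, "LALR", b"sig", {0: {"$end": 0}}, {0: {}}, [])
buf.seek(0)
tables = read_tables(buf)
```

Streaming one object at a time keeps the writer simple, but it makes the on-disk layout part of the format: reordering the dump() calls silently corrupts every reader.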
+# ----------------------------------------------------------------------------- - initialize_vars() - files = { } - error = 0 +def get_caller_module_dict(levels): + try: + raise RuntimeError + except RuntimeError: + e,b,t = sys.exc_info() + f = t.tb_frame + while levels > 0: + f = f.f_back + levels -= 1 + ldict = f.f_globals.copy() + if f.f_globals != f.f_locals: + ldict.update(f.f_locals) + + return ldict + +# ----------------------------------------------------------------------------- +# parse_grammar() +# +# This takes a raw grammar rule string and parses it into production data +# ----------------------------------------------------------------------------- +def parse_grammar(doc,file,line): + grammar = [] + # Split the doc string into lines + pstrings = doc.splitlines() + lastp = None + dline = line + for ps in pstrings: + dline += 1 + p = ps.split() + if not p: continue + try: + if p[0] == '|': + # This is a continuation of a previous rule + if not lastp: + raise SyntaxError("%s:%d: Misplaced '|'" % (file,dline)) + prodname = lastp + syms = p[1:] + else: + prodname = p[0] + lastp = prodname + syms = p[2:] + assign = p[1] + if assign != ':' and assign != '::=': + raise SyntaxError("%s:%d: Syntax error. Expected ':'" % (file,dline)) + + grammar.append((file,dline,prodname,syms)) + except SyntaxError: + raise + except Exception: + raise SyntaxError("%s:%d: Syntax error in rule '%s'" % (file,dline,ps.strip())) + + return grammar + +# ----------------------------------------------------------------------------- +# ParserReflect() +# +# This class represents information extracted for building a parser including +# start symbol, error function, tokens, precedence list, action functions, +# etc. 
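A quick check of the rule-string format parse_grammar() above accepts. This is a condensed reimplementation so the example is runnable on its own; the real function above additionally wraps each rule in try/except to report malformed input:

```python
def parse_grammar(doc, file, line):
    """Condensed version of the parser above: each non-empty line is
    'name : sym sym ...' (':' or '::='), or a '|' continuation."""
    grammar, lastp, dline = [], None, line
    for ps in doc.splitlines():
        dline += 1
        p = ps.split()
        if not p:
            continue
        if p[0] == '|':
            if not lastp:
                raise SyntaxError("%s:%d: Misplaced '|'" % (file, dline))
            prodname, syms = lastp, p[1:]
        else:
            if len(p) < 2 or p[1] not in (':', '::='):
                raise SyntaxError("%s:%d: Expected ':'" % (file, dline))
            prodname, syms = p[0], p[2:]
            lastp = prodname
        grammar.append((file, dline, prodname, syms))
    return grammar

rules = parse_grammar("expression : expression PLUS term\n"
                      "           | term", "calc.py", 10)
```

Both lines attach to 'expression': the '|' continuation inherits the rule name from the line before it, and the per-line dline counter keeps error messages pointing at the right docstring line.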
+# ----------------------------------------------------------------------------- +class ParserReflect(object): + def __init__(self,pdict,log=None): + self.pdict = pdict + self.start = None + self.error_func = None + self.tokens = None + self.files = {} + self.grammar = [] + self.error = 0 + if log is None: + self.log = PlyLogger(sys.stderr) + else: + self.log = log - # Add parsing method to signature - Signature.update(method) + # Get all of the basic information + def get_all(self): + self.get_start() + self.get_error_func() + self.get_tokens() + self.get_precedence() + self.get_pfunctions() + + # Validate all of the information + def validate_all(self): + self.validate_start() + self.validate_error_func() + self.validate_tokens() + self.validate_precedence() + self.validate_pfunctions() + self.validate_files() + return self.error - # If a "module" parameter was supplied, extract its dictionary. - # Note: a module may in fact be an instance as well. + # Compute a signature over the grammar + def signature(self): + try: + from hashlib import md5 + except ImportError: + from md5 import md5 + try: + sig = md5() + if self.start: + sig.update(self.start.encode('latin-1')) + if self.prec: + sig.update("".join(["".join(p) for p in self.prec]).encode('latin-1')) + if self.tokens: + sig.update(" ".join(self.tokens).encode('latin-1')) + for f in self.pfuncs: + if f[3]: + sig.update(f[3].encode('latin-1')) + except (TypeError,ValueError): + pass + return sig.digest() - if module: - # User supplied a module object. 
-        if isinstance(module, types.ModuleType):
-            ldict = module.__dict__
-        elif isinstance(module, _INSTANCETYPE):
-            _items = [(k,getattr(module,k)) for k in dir(module)]
-            ldict = { }
-            for i in _items:
-                ldict[i[0]] = i[1]
-        else:
-            raise ValueError,"Expected a module"
+    # -----------------------------------------------------------------------------
+    # validate_file()
+    #
+    # This method checks to see if there are duplicated p_rulename() functions
+    # in the parser module file.  Without this function, it is really easy for
+    # users to make mistakes by cutting and pasting code fragments (and it's a real
+    # bugger to try and figure out why the resulting parser doesn't work).  Therefore,
+    # we just do a little regular expression pattern matching of def statements
+    # to try and detect duplicates.
+    # -----------------------------------------------------------------------------
+
+    def validate_files(self):
+        # Match def p_funcname(
+        fre = re.compile(r'\s*def\s+(p_[a-zA-Z_0-9]*)\(')
+
+        for filename in self.files.keys():
+            base,ext = os.path.splitext(filename)
+            if ext != '.py': return 1          # No idea. Assume it's okay.
-    else:
-        # No module given.  We might be able to get information from the caller.
-        # Throw an exception and unwind the traceback to get the globals
+            try:
+                f = open(filename)
+                lines = f.readlines()
+                f.close()
+            except IOError:
+                continue
-        try:
-            raise RuntimeError
-        except RuntimeError:
-            e,b,t = sys.exc_info()
-            f = t.tb_frame
-            f = f.f_back           # Walk out to our calling function
-            if f.f_globals is f.f_locals:   # Collect global and local variations from caller
-                ldict = f.f_globals
-            else:
-                ldict = f.f_globals.copy()
-                ldict.update(f.f_locals)
+            counthash = { }
+            for linen,l in enumerate(lines):
+                linen += 1
+                m = fre.match(l)
+                if m:
+                    name = m.group(1)
+                    prev = counthash.get(name)
+                    if not prev:
+                        counthash[name] = linen
+                    else:
+                        self.log.warning("%s:%d: Function %s redefined. Previously defined on line %d", filename,linen,name,prev)
-    # Add starting symbol to signature
-    if not start:
-        start = ldict.get("start",None)
-    if start:
-        Signature.update(start)
+    # Get the start symbol
+    def get_start(self):
+        self.start = self.pdict.get('start')
+
+    # Validate the start symbol
+    def validate_start(self):
+        if self.start is not None:
+            if not isinstance(self.start,str):
+                self.log.error("'start' must be a string")

     # Look for error handler
-    ef = ldict.get('p_error',None)
-    if ef:
-        if isinstance(ef,types.FunctionType):
-            ismethod = 0
-        elif isinstance(ef, types.MethodType):
-            ismethod = 1
-        else:
-            raise YaccError,"'p_error' defined, but is not a function or method."
-        eline = ef.func_code.co_firstlineno
-        efile = ef.func_code.co_filename
-        files[efile] = None
-
-        if (ef.func_code.co_argcount != 1+ismethod):
-            raise YaccError,"%s:%d: p_error() requires 1 argument." % (efile,eline)
-        global Errorfunc
-        Errorfunc = ef
-    else:
-        print >>sys.stderr, "yacc: Warning. no p_error() function is defined."
+    def get_error_func(self):
+        self.error_func = self.pdict.get('p_error')
+
+    # Validate the error function
+    def validate_error_func(self):
+        if self.error_func:
+            if isinstance(self.error_func,types.FunctionType):
+                ismethod = 0
+            elif isinstance(self.error_func, types.MethodType):
+                ismethod = 1
+            else:
+                self.log.error("'p_error' defined, but is not a function or method")
+                self.error = 1
+                return
+
+            eline = func_code(self.error_func).co_firstlineno
+            efile = func_code(self.error_func).co_filename
+            self.files[efile] = 1
+
+            if (func_code(self.error_func).co_argcount != 1+ismethod):
+                self.log.error("%s:%d: p_error() requires 1 argument",efile,eline)
+                self.error = 1
+
+    # Get the tokens map
+    def get_tokens(self):
+        tokens = self.pdict.get("tokens",None)
+        if not tokens:
+            self.log.error("No token list is defined")
+            self.error = 1
+            return
+
+        if not isinstance(tokens,(list, tuple)):
+            self.log.error("tokens must be a list or tuple")
+            self.error = 1
+            return
+
+        if not tokens:
+            self.log.error("tokens is empty")
+            self.error = 1
+            return
+
+        self.tokens = tokens
+
+    # Validate the tokens
+    def validate_tokens(self):
+        # Validate the tokens.
+        if 'error' in self.tokens:
+            self.log.error("Illegal token name 'error'. Is a reserved word")
+            self.error = 1
+            return
+
+        terminals = {}
+        for n in self.tokens:
+            if n in terminals:
+                self.log.warning("Token '%s' multiply defined", n)
+            terminals[n] = 1
+
+    # Get the precedence map (if any)
+    def get_precedence(self):
+        self.prec = self.pdict.get("precedence",None)
+
+    # Validate and parse the precedence map
+    def validate_precedence(self):
+        preclist = []
+        if self.prec:
+            if not isinstance(self.prec,(list,tuple)):
+                self.log.error("precedence must be a list or tuple")
+                self.error = 1
+                return
+            for level,p in enumerate(self.prec):
+                if not isinstance(p,(list,tuple)):
+                    self.log.error("Bad precedence table")
+                    self.error = 1
+                    return
-    # If running in optimized mode.  We're going to read tables instead
+                if len(p) < 2:
+                    self.log.error("Malformed precedence entry %s. Must be (assoc, term, ..., term)",p)
+                    self.error = 1
+                    return
+                assoc = p[0]
+                if not isinstance(assoc,str):
+                    self.log.error("precedence associativity must be a string")
+                    self.error = 1
+                    return
+                for term in p[1:]:
+                    if not isinstance(term,str):
+                        self.log.error("precedence items must be strings")
+                        self.error = 1
+                        return
+                    preclist.append((term,assoc,level+1))
+        self.preclist = preclist
-    if (optimize and lr_read_tables(tabmodule,1)):
-        # Read parse table
-        del Productions[:]
-        for p in _lr_productions:
-            if not p:
-                Productions.append(None)
+    # Get all p_functions from the grammar
+    def get_pfunctions(self):
+        p_functions = []
+        for name, item in self.pdict.items():
+            if name[:2] != 'p_': continue
+            if name == 'p_error': continue
+            if isinstance(item,(types.FunctionType,types.MethodType)):
+                line = func_code(item).co_firstlineno
+                file = func_code(item).co_filename
+                p_functions.append((line,file,name,item.__doc__))
+
+        # Sort all of the actions by line number
+        p_functions.sort()
+        self.pfuncs = p_functions
+
+
+    # Validate all of the p_functions
+    def validate_pfunctions(self):
+        grammar = []
+        # Check for non-empty symbols
+        if len(self.pfuncs) == 0:
+            self.log.error("no rules of the form p_rulename are defined")
+            self.error = 1
+            return
+
+        for line, file, name, doc in self.pfuncs:
+            func = self.pdict[name]
+            if isinstance(func, types.MethodType):
+                reqargs = 2
+            else:
+                reqargs = 1
+            if func_code(func).co_argcount > reqargs:
+                self.log.error("%s:%d: Rule '%s' has too many arguments",file,line,func.__name__)
+                self.error = 1
+            elif func_code(func).co_argcount < reqargs:
+                self.log.error("%s:%d: Rule '%s' requires an argument",file,line,func.__name__)
+                self.error = 1
+            elif not func.__doc__:
+                self.log.warning("%s:%d: No documentation string specified in function '%s' (ignored)",file,line,func.__name__)
             else:
-                m = MiniProduction()
-                m.name = p[0]
-                m.len = p[1]
-                m.file = p[3]
-                m.line = p[4]
-                if p[2]:
-                    m.func = ldict[p[2]]
-                Productions.append(m)
+                try:
+                    parsed_g = parse_grammar(doc,file,line)
+                    for g in parsed_g:
+                        grammar.append((name, g))
+                except SyntaxError:
+                    e = sys.exc_info()[1]
+                    self.log.error(str(e))
+                    self.error = 1
+
+                # Looks like a valid grammar rule
+                # Mark the file in which defined.
+                self.files[file] = 1
+
+        # Secondary validation step that looks for p_ definitions that are not functions
+        # or functions that look like they might be grammar rules.
+
+        for n,v in self.pdict.items():
+            if n[0:2] == 'p_' and isinstance(v, (types.FunctionType, types.MethodType)): continue
+            if n[0:2] == 't_': continue
+            if n[0:2] == 'p_' and n != 'p_error':
+                self.log.warning("'%s' not defined as a function", n)
+            if ((isinstance(v,types.FunctionType) and func_code(v).co_argcount == 1) or
+                (isinstance(v,types.MethodType) and func_code(v).co_argcount == 2)):
+                try:
+                    doc = v.__doc__.split(" ")
+                    if doc[1] == ':':
+                        self.log.warning("%s:%d: Possible grammar rule '%s' defined without p_ prefix",
+                                         func_code(v).co_filename, func_code(v).co_firstlineno,n)
+                except Exception:
+                    pass
-    else:
-        # Get the tokens map
-        if (module and isinstance(module,_INSTANCETYPE)):
-            tokens = getattr(module,"tokens",None)
-        else:
-            tokens = ldict.get("tokens",None)
+        self.grammar = grammar
-        if not tokens:
-            raise YaccError,"module does not define a list 'tokens'"
-        if not (isinstance(tokens,types.ListType) or isinstance(tokens,types.TupleType)):
-            raise YaccError,"tokens must be a list or tuple."
-
-        # Check to see if a requires dictionary is defined.
-        requires = ldict.get("require",None)
-        if requires:
-            if not (isinstance(requires,types.DictType)):
-                raise YaccError,"require must be a dictionary."
+# -----------------------------------------------------------------------------
+# yacc(module)
+#
+# Build a parser
+# -----------------------------------------------------------------------------
-        for r,v in requires.items():
-            try:
-                if not (isinstance(v,types.ListType)):
-                    raise TypeError
-                v1 = [x.split(".") for x in v]
-                Requires[r] = v1
-            except StandardError:
-                print >>sys.stderr, "Invalid specification for rule '%s' in require. Expected a list of strings" % r
-
-
-        # Build the dictionary of terminals.  We a record a 0 in the
-        # dictionary to track whether or not a terminal is actually
-        # used in the grammar
-
-        if 'error' in tokens:
-            print >>sys.stderr, "yacc: Illegal token 'error'.  Is a reserved word."
-            raise YaccError,"Illegal token name"
-
-        for n in tokens:
-            if Terminals.has_key(n):
-                print >>sys.stderr, "yacc: Warning. Token '%s' multiply defined." % n
-            Terminals[n] = [ ]
-
-        Terminals['error'] = [ ]
-
-        # Get the precedence map (if any)
-        prec = ldict.get("precedence",None)
-        if prec:
-            if not (isinstance(prec,types.ListType) or isinstance(prec,types.TupleType)):
-                raise YaccError,"precedence must be a list or tuple."
-            add_precedence(prec)
-            Signature.update(repr(prec))
-
-        for n in tokens:
-            if not Precedence.has_key(n):
-                Precedence[n] = ('right',0)         # Default, right associative, 0 precedence
-
-        # Get the list of built-in functions with p_ prefix
-        symbols = [ldict[f] for f in ldict.keys()
-                   if (type(ldict[f]) in (types.FunctionType, types.MethodType) and ldict[f].__name__[:2] == 'p_'
-                       and ldict[f].__name__ != 'p_error')]
+def yacc(method='LALR', debug=yaccdebug, module=None, tabmodule=tab_module, start=None,
+         check_recursion=1, optimize=0, write_tables=1, debugfile=debug_file,outputdir='',
+         debuglog=None, errorlog = None, picklefile=None):
-        # Check for non-empty symbols
-        if len(symbols) == 0:
-            raise YaccError,"no rules of the form p_rulename are defined."
+    global parse      # Reference to the parsing method of the last built parser
-        # Sort the symbols by line number
-        symbols.sort(lambda x,y: cmp(x.func_code.co_firstlineno,y.func_code.co_firstlineno))
+    # If pickling is enabled, table files are not created
-        # Add all of the symbols to the grammar
-        for f in symbols:
-            if (add_function(f)) < 0:
-                error += 1
-            else:
-                files[f.func_code.co_filename] = None
+    if picklefile:
+        write_tables = 0
-        # Make a signature of the docstrings
-        for f in symbols:
-            if f.__doc__:
-                Signature.update(f.__doc__)
+    if errorlog is None:
+        errorlog = PlyLogger(sys.stderr)
-        lr_init_vars()
+    # Get the module dictionary used for the parser
+    if module:
+        _items = [(k,getattr(module,k)) for k in dir(module)]
+        pdict = dict(_items)
+    else:
+        pdict = get_caller_module_dict(2)
-        if error:
-            raise YaccError,"Unable to construct parser."
+    # Collect parser information from the dictionary
+    pinfo = ParserReflect(pdict,log=errorlog)
+    pinfo.get_all()
-        if not lr_read_tables(tabmodule):
+    if pinfo.error:
+        raise YaccError("Unable to build parser")
-            # Validate files
-            for filename in files.keys():
-                if not validate_file(filename):
-                    error = 1
+    # Check signature against table files (if any)
+    signature = pinfo.signature()
-            # Validate dictionary
-            validate_dict(ldict)
+    # Read the tables
+    try:
+        lr = LRTable()
+        if picklefile:
+            read_signature = lr.read_pickle(picklefile)
+        else:
+            read_signature = lr.read_table(tabmodule)
+        if optimize or (read_signature == signature):
+            try:
+                lr.bind_callables(pinfo.pdict)
+                parser = LRParser(lr,pinfo.error_func)
+                parse = parser.parse
+                return parser
+            except Exception:
+                e = sys.exc_info()[1]
+                errorlog.warning("There was a problem loading the table file: %s", repr(e))
+    except VersionError:
+        e = sys.exc_info()
+        errorlog.warning(str(e))
+    except Exception:
+        pass
+
+    if debuglog is None:
+        if debug:
+            debuglog = PlyLogger(open(debugfile,"w"))
+        else:
+            debuglog = NullLogger()
-            if start and not Prodnames.has_key(start):
-                raise YaccError,"Bad starting symbol '%s'" % start
+    debuglog.info("Created by PLY version %s (http://www.dabeaz.com/ply)", __version__)
-            augment_grammar(start)
-            error = verify_productions(cycle_check=check_recursion)
-            otherfunc = [ldict[f] for f in ldict.keys()
-                         if (type(f) in (types.FunctionType,types.MethodType) and ldict[f].__name__[:2] != 'p_')]
-            # Check precedence rules
-            if check_precedence():
-                error = 1
+    errors = 0
-            if error:
-                raise YaccError,"Unable to construct parser."
+    # Validate the parser information
+    if pinfo.validate_all():
+        raise YaccError("Unable to build parser")
+
+    if not pinfo.error_func:
+        errorlog.warning("no p_error() function is defined")
-            build_lritems()
-            compute_first1()
-            compute_follow(start)
+    # Create a grammar object
+    grammar = Grammar(pinfo.tokens)
-            if method in ['SLR','LALR']:
-                lr_parse_table(method)
-            else:
-                raise YaccError, "Unknown parsing method '%s'" % method
+    # Set precedence level for terminals
+    for term, assoc, level in pinfo.preclist:
+        try:
+            grammar.set_precedence(term,assoc,level)
+        except GrammarError:
+            e = sys.exc_info()[1]
+            errorlog.warning("%s",str(e))
+
+    # Add productions to the grammar
+    for funcname, gram in pinfo.grammar:
+        file, line, prodname, syms = gram
+        try:
+            grammar.add_production(prodname,syms,funcname,file,line)
+        except GrammarError:
+            e = sys.exc_info()[1]
+            errorlog.error("%s",str(e))
+            errors = 1
-        if write_tables:
-            lr_write_tables(tabmodule,outputdir)
+    # Set the grammar start symbols
+    try:
+        if start is None:
+            grammar.set_start(pinfo.start)
+        else:
+            grammar.set_start(start)
+    except GrammarError:
+        e = sys.exc_info()[1]
+        errorlog.error(str(e))
+        errors = 1
+
+    if errors:
+        raise YaccError("Unable to build parser")
+
+    # Verify the grammar structure
+    undefined_symbols = grammar.undefined_symbols()
+    for sym, prod in undefined_symbols:
+        errorlog.error("%s:%d: Symbol '%s' used, but not defined as a token or a rule",prod.file,prod.line,sym)
+        errors = 1
+
+    unused_terminals = grammar.unused_terminals()
+    if unused_terminals:
+        debuglog.info("")
+        debuglog.info("Unused terminals:")
+        debuglog.info("")
+        for term in unused_terminals:
+            errorlog.warning("Token '%s' defined, but not used", term)
+            debuglog.info("    %s", term)
+
+    # Print out all productions to the debug log
+    if debug:
+        debuglog.info("")
+        debuglog.info("Grammar")
+        debuglog.info("")
+        for n,p in enumerate(grammar.Productions):
+            debuglog.info("Rule %-5d %s", n, p)
+
+    # Find unused non-terminals
+    unused_rules = grammar.unused_rules()
+    for prod in unused_rules:
+        errorlog.warning("%s:%d: Rule '%s' defined, but not used", prod.file, prod.line, prod.name)
+
+    if len(unused_terminals) == 1:
+        errorlog.warning("There is 1 unused token")
+    if len(unused_terminals) > 1:
+        errorlog.warning("There are %d unused tokens", len(unused_terminals))
+
+    if len(unused_rules) == 1:
+        errorlog.warning("There is 1 unused rule")
+    if len(unused_rules) > 1:
+        errorlog.warning("There are %d unused rules", len(unused_rules))
+
+    if debug:
+        debuglog.info("")
+        debuglog.info("Terminals, with rules where they appear")
+        debuglog.info("")
+        terms = list(grammar.Terminals)
+        terms.sort()
+        for term in terms:
+            debuglog.info("%-20s : %s", term, " ".join([str(s) for s in grammar.Terminals[term]]))
+
+        debuglog.info("")
+        debuglog.info("Nonterminals, with rules where they appear")
+        debuglog.info("")
+        nonterms = list(grammar.Nonterminals)
+        nonterms.sort()
+        for nonterm in nonterms:
+            debuglog.info("%-20s : %s", nonterm, " ".join([str(s) for s in grammar.Nonterminals[nonterm]]))
+        debuglog.info("")
+
+    if check_recursion:
+        unreachable = grammar.find_unreachable()
+        for u in unreachable:
+            errorlog.warning("Symbol '%s' is unreachable",u)
+
+        infinite = grammar.infinite_cycles()
+        for inf in infinite:
+            errorlog.error("Infinite recursion detected for symbol '%s'", inf)
+            errors = 1
+
+    unused_prec = grammar.unused_precedence()
+    for term, assoc in unused_prec:
+        errorlog.error("Precedence rule '%s' defined for unknown symbol '%s'", assoc, term)
+        errors = 1
-    if yaccdebug:
-        try:
-            f = open(os.path.join(outputdir,debugfile),"w")
-            f.write(_vfc.getvalue())
-            f.write("\n\n")
-            f.write(_vf.getvalue())
-            f.close()
-        except IOError,e:
-            print >>sys.stderr, "yacc: can't create '%s'" % debugfile,e
-
-    # Made it here.   Create a parser object and set up its internal state.
-    # Set global parse() method to bound method of parser object.
-
-    p = Parser("xyzzy")
-    p.productions = Productions
-    p.errorfunc = Errorfunc
-    p.action = _lr_action
-    p.goto = _lr_goto
-    p.method = _lr_method
-    p.require = Requires
-
-    global parse
-    parse = p.parse
-
-    global parser
-    parser = p
-
-    # Clean up all of the globals we created
-    if (not optimize):
-        yacc_cleanup()
-    return p
-
-# yacc_cleanup function.  Delete all of the global variables
-# used during table construction
-
-def yacc_cleanup():
-    global _lr_action, _lr_goto, _lr_method, _lr_goto_cache
-    del _lr_action, _lr_goto, _lr_method, _lr_goto_cache
-
-    global Productions, Prodnames, Prodmap, Terminals
-    global Nonterminals, First, Follow, Precedence, UsedPrecedence, LRitems
-    global Errorfunc, Signature, Requires
-
-    del Productions, Prodnames, Prodmap, Terminals
-    del Nonterminals, First, Follow, Precedence, UsedPrecedence, LRitems
-    del Errorfunc, Signature, Requires
-
-    global _vf, _vfc
-    del _vf, _vfc
-
-
-# Stub that raises an error if parsing is attempted without first calling yacc()
-def parse(*args,**kwargs):
-    raise YaccError, "yacc: No parser built with yacc()"
+    if errors:
+        raise YaccError("Unable to build parser")
+
+    # Run the LRGeneratedTable on the grammar
+    if debug:
+        errorlog.debug("Generating %s tables", method)
+
+    lr = LRGeneratedTable(grammar,method,debuglog)
+
+    if debug:
+        num_sr = len(lr.sr_conflicts)
+
+        # Report shift/reduce and reduce/reduce conflicts
+        if num_sr == 1:
+            errorlog.warning("1 shift/reduce conflict")
+        elif num_sr > 1:
+            errorlog.warning("%d shift/reduce conflicts", num_sr)
+
+        num_rr = len(lr.rr_conflicts)
+        if num_rr == 1:
+            errorlog.warning("1 reduce/reduce conflict")
+        elif num_rr > 1:
+            errorlog.warning("%d reduce/reduce conflicts", num_rr)
+
+    # Write out conflicts to the output file
+    if debug and (lr.sr_conflicts or lr.rr_conflicts):
+        debuglog.warning("")
+        debuglog.warning("Conflicts:")
+        debuglog.warning("")
+
+        for state, tok, resolution in lr.sr_conflicts:
+            debuglog.warning("shift/reduce conflict for %s in state %d resolved as %s", tok, state, resolution)
+
+        already_reported = {}
+        for state, rule, rejected in lr.rr_conflicts:
+            if (state,id(rule),id(rejected)) in already_reported:
+                continue
+            debuglog.warning("reduce/reduce conflict in state %d resolved using rule (%s)", state, rule)
+            debuglog.warning("rejected rule (%s) in state %d", rejected,state)
+            errorlog.warning("reduce/reduce conflict in state %d resolved using rule (%s)", state, rule)
+            errorlog.warning("rejected rule (%s) in state %d", rejected, state)
+            already_reported[state,id(rule),id(rejected)] = 1
+
+        warned_never = []
+        for state, rule, rejected in lr.rr_conflicts:
+            if not rejected.reduced and (rejected not in warned_never):
+                debuglog.warning("Rule (%s) is never reduced", rejected)
+                errorlog.warning("Rule (%s) is never reduced", rejected)
+                warned_never.append(rejected)
+
+    # Write the table file if requested
+    if write_tables:
+        lr.write_table(tabmodule,outputdir,signature)
+
+    # Write a pickled version of the tables
+    if picklefile:
+        lr.pickle_table(picklefile,signature)
+
+    # Build the parser
+    lr.bind_callables(pinfo.pdict)
+    parser = LRParser(lr,pinfo.error_func)
+
+    parse = parser.parse
+    return parser

--------------080004000200030807030507
Content-Type: text/plain;
 name="patch-qsgen.py"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="patch-qsgen.py"

--- js/src/xpconnect/src/qsgen.py.orig	2010-04-02 19:02:30.000000000 +0300
+++ js/src/xpconnect/src/qsgen.py	2010-04-07 11:05:56.201435457 +0300
@@ -1,4 +1,4 @@
-#!/usr/bin/env/python
+#!/usr/local/bin/python
 # qsgen.py - Generate XPConnect quick stubs.
 #
 # ***** BEGIN LICENSE BLOCK *****
--------------080004000200030807030507
Content-Type: text/plain;
 name="patch-xpidl.py"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="patch-xpidl.py"

--- xpcom/idl-parser/xpidl.py.orig	2010-04-02 19:03:13.000000000 +0300
+++ xpcom/idl-parser/xpidl.py	2010-04-07 11:23:53.544027464 +0300
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/local/bin/python
 # xpidl.py - A parser for cross-platform IDL (XPIDL) files.
 #
 # ***** BEGIN LICENSE BLOCK *****
--------------080004000200030807030507--

From owner-freebsd-gecko@FreeBSD.ORG Wed Apr 7 09:11:37 2010
Return-Path:
Delivered-To: gecko@FreeBSD.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5C4D7106564A for ; Wed, 7 Apr 2010 09:11:37 +0000 (UTC) (envelope-from andy.lavr@reactor-xg.kiev.ua)
Received: from mail.reactor-xg.kiev.ua (reactor-xg.kiev.ua [86.111.64.45]) by mx1.freebsd.org (Postfix) with ESMTP id CEBAC8FC1B for ; Wed, 7 Apr 2010 09:11:35 +0000 (UTC)
Received: from mail.reactor-xg.kiev.ua (mail.reactor-xg.kiev.ua [86.111.64.45]) by mail.reactor-xg.kiev.ua (Reactor-XG Mailer System) with ESMTP id o378mCEX009152; Wed, 7 Apr 2010 11:48:12 +0300 (EEST) (envelope-from andy.lavr@reactor-xg.kiev.ua)
Authentication-Results: mail.reactor-xg.kiev.ua; sender-id=pass header.from=andy.lavr@reactor-xg.kiev.ua; auth=pass (PLAIN); spf=pass smtp.mfrom=andy.lavr@reactor-xg.kiev.ua
Message-ID: <4BBC46CC.3020406@reactor-xg.kiev.ua>
Date: Wed, 07 Apr 2010 11:48:12 +0300
From: "Andrei V. Lavreniyuk"
Organization: Technica-03, Inc.
User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; ru; rv:1.9.1.9) Gecko/20100331 Thunderbird/3.0.4
MIME-Version: 1.0
To: bug-followup@FreeBSD.org
References: <201004070820.o378K5g6048195@freefall.freebsd.org> <4BBC41CA.2030001@smeets.im>
In-Reply-To: <4BBC41CA.2030001@smeets.im>
Content-Type: multipart/mixed; boundary="------------080004000200030807030507"
X-Mailman-Approved-At: Wed, 07 Apr 2010 11:39:03 +0000
Cc: gecko@FreeBSD.org
Subject: Re: ports/145450: [ERROR] cannot compile www/firefox version 3.6.3
X-BeenThere: freebsd-gecko@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
Reply-To: andy.lavr@reactor-xg.kiev.ua
List-Id: Gecko Rendering Engine issues
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Wed, 07 Apr 2010 09:11:37 -0000

This is a multi-part message in MIME format.
--------------080004000200030807030507
Content-Type: text/plain; charset=KOI8-R; format=flowed
Content-Transfer-Encoding: 8bit

On 07.04.2010 11:26, Florian Smeets wrote:
>> show me the output of python -V and pkg_info|grep python?
>>
>
> Sorry, there is a "can you please" missing. Did not want to sound as
> harsh as it seems.
>
> So, can you please show me the output? :-)

:)

Workaround:

# portupgrade -f boost-python-libs-1.41.0 py26-notify-0.1.1_7 python26-2.6.4

+ Attached patch.

-- 
Best regards, Andrei V. Lavreniyuk.


--------------080004000200030807030507
Content-Type: text/plain;
 name="patch-header.py"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="patch-header.py"

--- xpcom/idl-parser/header.py.orig	2010-04-02 19:03:13.000000000 +0300
+++ xpcom/idl-parser/header.py	2010-04-07 11:23:45.088650024 +0300
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/local/bin/python
 # header.py - Generate C++ header files from IDL.
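[Editor's sketch, not part of any attachment: the patched ply/yacc.py above adds a validate_files() method that catches copy-pasted duplicate p_rulename() functions with a regular-expression scan. The helper name find_duplicate_rules and the sample grammar text below are hypothetical, but the regex is the one from the patch.]

```python
import re

def find_duplicate_rules(source):
    """Return {rule_name: [line numbers]} for p_* defs seen more than once."""
    # Same pattern the patched validate_files() uses to match "def p_funcname("
    fre = re.compile(r'\s*def\s+(p_[a-zA-Z_0-9]*)\(')
    seen = {}
    for linen, line in enumerate(source.splitlines(), 1):
        m = fre.match(line)
        if m:
            seen.setdefault(m.group(1), []).append(linen)
    # Only names defined on more than one line are a problem
    return {name: lines for name, lines in seen.items() if len(lines) > 1}

sample = (
    "def p_expr(p):\n"
    "    'expr : expr PLUS term'\n"
    "def p_expr(p):\n"          # pasted twice -- the mistake being detected
    "    'expr : term'\n"
)
print(find_duplicate_rules(sample))   # {'p_expr': [1, 3]}
```

Because PLY discovers rules by reflection over the module dictionary, a second p_expr silently shadows the first; scanning the source text is the only way to notice.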
 #
 # ***** BEGIN LICENSE BLOCK *****
--------------080004000200030807030507
Content-Type: text/plain;
 name="patch-ply"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="patch-ply"

diff -ruN ply.orig/lex.py ply/lex.py
--- other-licenses/ply/ply.orig/lex.py	2010-04-02 19:02:56.000000000 +0300
+++ other-licenses/ply/ply/lex.py	2009-08-27 04:28:42.000000000 +0300
@@ -1,45 +1,61 @@
 # -----------------------------------------------------------------------------
 # ply: lex.py
 #
-# Author: David M. Beazley (dave@dabeaz.com)
+# Copyright (C) 2001-2009,
+# David M. Beazley (Dabeaz LLC)
+# All rights reserved.
 #
-# Copyright (C) 2001-2008, David M. Beazley
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+# * Redistributions of source code must retain the above copyright notice,
+#   this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright notice,
+#   this list of conditions and the following disclaimer in the documentation
+#   and/or other materials provided with the distribution.
+# * Neither the name of the David Beazley or Dabeaz LLC may be used to
+#   endorse or promote products derived from this software without
+#   specific prior written permission.
 #
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-#
-# See the file COPYING for a complete copy of the LGPL.
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 # -----------------------------------------------------------------------------

-__version__ = "2.5"
-__tabversion__ = "2.4"       # Version of table file used
+__version__ = "3.3"
+__tabversion__ = "3.2"       # Version of table file used

 import re, sys, types, copy, os

-# This regular expression is used to match valid token names
-_is_identifier = re.compile(r'^[a-zA-Z0-9_]+$')
-
-# _INSTANCETYPE sets the valid set of instance types recognized
-# by PLY when lexers are defined by a class. In order to maintain
-# backwards compatibility with Python-2.0, we have to check for
-# the existence of ObjectType.
-
+# This tuple contains known string types
 try:
-    _INSTANCETYPE = (types.InstanceType, types.ObjectType)
+    # Python 2.6
+    StringTypes = (types.StringType, types.UnicodeType)
 except AttributeError:
-    _INSTANCETYPE = types.InstanceType
-    class object: pass       # Note: needed if no new-style classes present
+    # Python 3.0
+    StringTypes = (str, bytes)
+
+# Extract the code attribute of a function. Different implementations
+# are for Python 2/3 compatibility.
+
+if sys.version_info[0] < 3:
+    def func_code(f):
+        return f.func_code
+else:
+    def func_code(f):
+        return f.__code__
+
+# This regular expression is used to match valid token names
+_is_identifier = re.compile(r'^[a-zA-Z0-9_]+$')

 # Exception thrown when invalid token encountered and no default error
 # handler is defined.
@@ -49,35 +65,50 @@
         self.args = (message,)
         self.text = s

-# An object used to issue one-time warning messages for various features
-
-class LexWarning(object):
-    def __init__(self):
-        self.warned = 0
-    def __call__(self,msg):
-        if not self.warned:
-            sys.stderr.write("ply.lex: Warning: " + msg+"\n")
-            self.warned = 1
-
-_SkipWarning = LexWarning()  # Warning for use of t.skip() on tokens
-
 # Token class.  This class is used to represent the tokens produced.
 class LexToken(object):
     def __str__(self):
         return "LexToken(%s,%r,%d,%d)" % (self.type,self.value,self.lineno,self.lexpos)
     def __repr__(self):
         return str(self)
-    def skip(self,n):
-        self.lexer.skip(n)
-        _SkipWarning("Calling t.skip() on a token is deprecated.  Please use t.lexer.skip()")
+
+# This object is a stand-in for a logging object created by the
+# logging module.
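[Editor's sketch: the func_code() shim added in the hunk above is the patch's Python 2/3 bridge. Python 2 exposes a function's code object as f.func_code, Python 3 as f.__code__; PLY wraps the difference once so reflection code such as co_firstlineno/co_argcount lookups stays version-neutral. The p_error function below is only a demonstration input.]

```python
import sys

# Same shape as the shim introduced by the patch: pick the accessor once,
# at import time, based on the interpreter's major version.
if sys.version_info[0] < 3:
    def func_code(f):
        return f.func_code
else:
    def func_code(f):
        return f.__code__

def p_error(p):
    pass

code = func_code(p_error)
print(code.co_name, code.co_argcount)   # p_error 1
```

This is exactly the kind of attribute PLY's validators inspect: validate_error_func() in the yacc.py hunk checks func_code(self.error_func).co_argcount to insist that p_error takes one argument.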
+
+class PlyLogger(object):
+    def __init__(self,f):
+        self.f = f
+    def critical(self,msg,*args,**kwargs):
+        self.f.write((msg % args) + "\n")
+
+    def warning(self,msg,*args,**kwargs):
+        self.f.write("WARNING: "+ (msg % args) + "\n")
+
+    def error(self,msg,*args,**kwargs):
+        self.f.write("ERROR: " + (msg % args) + "\n")
+
+    info = critical
+    debug = critical
+
+# Null logger is used when no output is generated. Does nothing.
+class NullLogger(object):
+    def __getattribute__(self,name):
+        return self
+    def __call__(self,*args,**kwargs):
+        return self

 # -----------------------------------------------------------------------------
-# Lexer class
+# === Lexing Engine ===
 #
-# This class encapsulates all of the methods and data associated with a lexer.
+# The following Lexer class implements the lexer runtime.  There are only
+# a few public methods and attributes:
 #
 # input()  - Store a new string in the lexer
 # token()  - Get the next token
+# clone()  - Clone the lexer
+#
+# lineno   - Current line number
+# lexpos   - Current position in the input string
 # -----------------------------------------------------------------------------

 class Lexer:
@@ -105,7 +136,6 @@
         self.lexliterals = ""         # Literal characters that can be passed through
         self.lexmodule = None         # Module
         self.lineno = 1               # Current line number
-        self.lexdebug = 0             # Debugging mode
         self.lexoptimize = 0          # Optimized mode

     def clone(self,object=None):
@@ -145,6 +175,7 @@
         filename = os.path.join(outputdir,basetabfilename)+".py"
         tf = open(filename,"w")
         tf.write("# %s.py. This file automatically created by PLY (version %s). Don't edit!\n" % (tabfile,__version__))
+        tf.write("_tabversion = %s\n" % repr(__version__))
         tf.write("_lextokens = %s\n" % repr(self.lextokens))
         tf.write("_lexreflags = %s\n" % repr(self.lexreflags))
         tf.write("_lexliterals = %s\n" % repr(self.lexliterals))
@@ -184,7 +215,16 @@
         if isinstance(tabfile,types.ModuleType):
             lextab = tabfile
         else:
-            exec "import %s as lextab" % tabfile
+            if sys.version_info[0] < 3:
+                exec("import %s as lextab" % tabfile)
+            else:
+                env = { }
+                exec("import %s as lextab" % tabfile, env,env)
+                lextab = env['lextab']
+
+        if getattr(lextab,"_tabversion","0.0") != __version__:
+            raise ImportError("Inconsistent PLY version")
+
         self.lextokens = lextab._lextokens
         self.lexreflags = lextab._lexreflags
         self.lexliterals = lextab._lexliterals
@@ -196,7 +236,7 @@
             titem = []
             txtitem = []
             for i in range(len(lre)):
-                titem.append((re.compile(lre[i][0],lextab._lexreflags),_names_to_funcs(lre[i][1],fdict)))
+                titem.append((re.compile(lre[i][0],lextab._lexreflags | re.VERBOSE),_names_to_funcs(lre[i][1],fdict)))
                 txtitem.append(lre[i][0])
             self.lexstatere[key] = titem
             self.lexstateretext[key] = txtitem
@@ -211,8 +251,8 @@
     def input(self,s):
         # Pull off the first character to see if s looks like a string
         c = s[:1]
-        if not (isinstance(c,types.StringType) or isinstance(c,types.UnicodeType)):
-            raise ValueError, "Expected a string"
+        if not isinstance(c,StringTypes):
+            raise ValueError("Expected a string")
         self.lexdata = s
         self.lexpos = 0
         self.lexlen = len(s)
@@ -221,8 +261,8 @@
     # begin() - Changes the lexing state
     # ------------------------------------------------------------
     def begin(self,state):
-        if not self.lexstatere.has_key(state):
-            raise ValueError, "Undefined state"
+        if not state in self.lexstatere:
+            raise ValueError("Undefined state")
         self.lexre = self.lexstatere[state]
         self.lexretext = self.lexstateretext[state]
         self.lexignore = self.lexstateignore.get(state,"")
@@ -255,7 +295,7 @@
         self.lexpos += n

     # ------------------------------------------------------------
-    # token() - Return the next token from the Lexer
+    # opttoken() - Return the next token from the Lexer
     #
     # Note: This function has been carefully implemented to be as fast
     # as possible.  Don't make changes unless you really know what
@@ -299,10 +339,6 @@
                     lexpos = m.end()

-                    # if func not callable, it means it's an ignored token
-                    if not callable(func):
-                        break
-
                     # If token is processed by a function, call it
                     tok.lexer = self      # Set additional attributes useful in token rules
@@ -319,9 +355,9 @@
                     # Verify type of the token.  If not in the token map, raise an error
                     if not self.lexoptimize:
-                        if not self.lextokens.has_key(newtok.type):
-                            raise LexError, ("%s:%d: Rule '%s' returned an unknown token type '%s'" % (
-                                func.func_code.co_filename, func.func_code.co_firstlineno,
+                        if not newtok.type in self.lextokens:
+                            raise LexError("%s:%d: Rule '%s' returned an unknown token type '%s'" % (
+                                func_code(func).co_filename, func_code(func).co_firstlineno,
                                 func.__name__, newtok.type),lexdata[lexpos:])
                     return newtok
@@ -348,60 +384,60 @@
                 newtok = self.lexerrorf(tok)
                 if lexpos == self.lexpos:
                     # Error method didn't change text position at all. This is an error.
-                    raise LexError, ("Scanning error. Illegal character '%s'" % (lexdata[lexpos]), lexdata[lexpos:])
+                    raise LexError("Scanning error. Illegal character '%s'" % (lexdata[lexpos]), lexdata[lexpos:])
                 lexpos = self.lexpos
                 if not newtok: continue
                 return newtok

             self.lexpos = lexpos
-            raise LexError, ("Illegal character '%s' at index %d" % (lexdata[lexpos],lexpos), lexdata[lexpos:])
+            raise LexError("Illegal character '%s' at index %d" % (lexdata[lexpos],lexpos), lexdata[lexpos:])

         self.lexpos = lexpos + 1
         if self.lexdata is None:
-            raise RuntimeError, "No input string given with input()"
+            raise RuntimeError("No input string given with input()")
         return None

+    # Iterator interface
+    def __iter__(self):
+        return self
+
+    def next(self):
+        t = self.token()
+        if t is None:
+            raise StopIteration
+        return t
+
+    __next__ = next
+
 # -----------------------------------------------------------------------------
-# _validate_file()
+# ==== Lex Builder ===
 #
-# This checks to see if there are duplicated t_rulename() functions or strings
-# in the parser input file.  This is done using a simple regular expression
-# match on each line in the given file.  If the file can't be located or opened,
-# a true result is returned by default.
+# The functions and classes below are used to collect lexing information
+# and build a Lexer object from it.
 # -----------------------------------------------------------------------------

-def _validate_file(filename):
-    import os.path
-    base,ext = os.path.splitext(filename)
-    if ext != '.py': return 1        # No idea what the file is. Return OK
+# -----------------------------------------------------------------------------
+# get_caller_module_dict()
+#
+# This function returns a dictionary containing all of the symbols defined within
+# a caller further down the call stack.  This is used to get the environment
+# associated with the yacc() call if none was provided.
+# -----------------------------------------------------------------------------

+def get_caller_module_dict(levels):
     try:
-        f = open(filename)
-        lines = f.readlines()
-        f.close()
-    except IOError:
-        return 1                     # Couldn't find the file.  Don't worry about it
-
-    fre = re.compile(r'\s*def\s+(t_[a-zA-Z_0-9]*)\(')
-    sre = re.compile(r'\s*(t_[a-zA-Z_0-9]*)\s*=')
-
-    counthash = { }
-    linen = 1
-    noerror = 1
-    for l in lines:
-        m = fre.match(l)
-        if not m:
-            m = sre.match(l)
-        if m:
-            name = m.group(1)
-            prev = counthash.get(name)
-            if not prev:
-                counthash[name] = linen
-            else:
-                print >>sys.stderr, "%s:%d: Rule %s redefined. Previously defined on line %d" % (filename,linen,name,prev)
-                noerror = 0
-        linen += 1
-    return noerror
+        raise RuntimeError
+    except RuntimeError:
+        e,b,t = sys.exc_info()
+        f = t.tb_frame
+        while levels > 0:
+            f = f.f_back
+            levels -= 1
+        ldict = f.f_globals.copy()
+        if f.f_globals != f.f_locals:
+            ldict.update(f.f_locals)
+
+    return ldict

 # -----------------------------------------------------------------------------
 # _funcs_to_names()
@@ -466,7 +502,7 @@
             lexindexfunc[i] = (None, toknames[f])
         return [(lexre,lexindexfunc)],[regex],[lexindexnames]
-    except Exception,e:
+    except Exception:
         m = int(len(relist)/2)
         if m == 0: m = 1
         llist, lre, lnames = _form_master_re(relist[:m],reflags,ldict,toknames)
@@ -486,319 +522,446 @@
     nonstate = 1
     parts = s.split("_")
     for i in range(1,len(parts)):
-        if not names.has_key(parts[i]) and parts[i] != 'ANY': break
+        if not parts[i] in names and parts[i] != 'ANY': break
     if i > 1:
         states = tuple(parts[1:i])
     else:
         states = ('INITIAL',)

     if 'ANY' in states:
-        states = tuple(names.keys())
+        states = tuple(names)

     tokenname = "_".join(parts[i:])
     return (states,tokenname)
+
 # -----------------------------------------------------------------------------
-# lex(module)
+# LexerReflect()
 #
-# Build all of the regular expression rules from definitions in the supplied module
+# This class represents information needed to build a
lexer as extracted from a +# user's input file. # ----------------------------------------------------------------------------- -def lex(module=None,object=None,debug=0,optimize=0,lextab="lextab",reflags=0,nowarn=0,outputdir=""): - global lexer - ldict = None - stateinfo = { 'INITIAL' : 'inclusive'} - error = 0 - files = { } - lexobj = Lexer() - lexobj.lexdebug = debug - lexobj.lexoptimize = optimize - global token,input +class LexerReflect(object): + def __init__(self,ldict,log=None,reflags=0): + self.ldict = ldict + self.error_func = None + self.tokens = [] + self.reflags = reflags + self.stateinfo = { 'INITIAL' : 'inclusive'} + self.files = {} + self.error = 0 - if nowarn: warn = 0 - else: warn = 1 + if log is None: + self.log = PlyLogger(sys.stderr) + else: + self.log = log - if object: module = object + # Get all of the basic information + def get_all(self): + self.get_tokens() + self.get_literals() + self.get_states() + self.get_rules() + + # Validate all of the information + def validate_all(self): + self.validate_tokens() + self.validate_literals() + self.validate_rules() + return self.error + + # Get the tokens map + def get_tokens(self): + tokens = self.ldict.get("tokens",None) + if not tokens: + self.log.error("No token list is defined") + self.error = 1 + return - if module: - # User supplied a module object. - if isinstance(module, types.ModuleType): - ldict = module.__dict__ - elif isinstance(module, _INSTANCETYPE): - _items = [(k,getattr(module,k)) for k in dir(module)] - ldict = { } - for (i,v) in _items: - ldict[i] = v - else: - raise ValueError,"Expected a module or instance" - lexobj.lexmodule = module + if not isinstance(tokens,(list, tuple)): + self.log.error("tokens must be a list or tuple") + self.error = 1 + return + + if not tokens: + self.log.error("tokens is empty") + self.error = 1 + return - else: - # No module given. We might be able to get information from the caller. 
- try: - raise RuntimeError - except RuntimeError: - e,b,t = sys.exc_info() - f = t.tb_frame - f = f.f_back # Walk out to our calling function - if f.f_globals is f.f_locals: # Collect global and local variations from caller - ldict = f.f_globals - else: - ldict = f.f_globals.copy() - ldict.update(f.f_locals) + self.tokens = tokens - if optimize and lextab: + # Validate the tokens + def validate_tokens(self): + terminals = {} + for n in self.tokens: + if not _is_identifier.match(n): + self.log.error("Bad token name '%s'",n) + self.error = 1 + if n in terminals: + self.log.warning("Token '%s' multiply defined", n) + terminals[n] = 1 + + # Get the literals specifier + def get_literals(self): + self.literals = self.ldict.get("literals","") + + # Validate literals + def validate_literals(self): try: - lexobj.readtab(lextab,ldict) - token = lexobj.token - input = lexobj.input - lexer = lexobj - return lexobj + for c in self.literals: + if not isinstance(c,StringTypes) or len(c) > 1: + self.log.error("Invalid literal %s. Must be a single character", repr(c)) + self.error = 1 + continue - except ImportError: - pass + except TypeError: + self.log.error("Invalid literals specification. literals must be a sequence of characters") + self.error = 1 + + def get_states(self): + self.states = self.ldict.get("states",None) + # Build statemap + if self.states: + if not isinstance(self.states,(tuple,list)): + self.log.error("states must be defined as a tuple or list") + self.error = 1 + else: + for s in self.states: + if not isinstance(s,tuple) or len(s) != 2: + self.log.error("Invalid state specifier %s. 
Must be a tuple (statename,'exclusive|inclusive')",repr(s)) + self.error = 1 + continue + name, statetype = s + if not isinstance(name,StringTypes): + self.log.error("State name %s must be a string", repr(name)) + self.error = 1 + continue + if not (statetype == 'inclusive' or statetype == 'exclusive'): + self.log.error("State type for state %s must be 'inclusive' or 'exclusive'",name) + self.error = 1 + continue + if name in self.stateinfo: + self.log.error("State '%s' already defined",name) + self.error = 1 + continue + self.stateinfo[name] = statetype + + # Get all of the symbols with a t_ prefix and sort them into various + # categories (functions, strings, error functions, and ignore characters) + + def get_rules(self): + tsymbols = [f for f in self.ldict if f[:2] == 't_' ] + + # Now build up a list of functions and a list of strings + + self.toknames = { } # Mapping of symbols to token names + self.funcsym = { } # Symbols defined as functions + self.strsym = { } # Symbols defined as strings + self.ignore = { } # Ignore strings by state + self.errorf = { } # Error functions by state + + for s in self.stateinfo: + self.funcsym[s] = [] + self.strsym[s] = [] + + if len(tsymbols) == 0: + self.log.error("No rules of the form t_rulename are defined") + self.error = 1 + return - # Get the tokens, states, and literals variables (if any) + for f in tsymbols: + t = self.ldict[f] + states, tokname = _statetoken(f,self.stateinfo) + self.toknames[f] = tokname - tokens = ldict.get("tokens",None) - states = ldict.get("states",None) - literals = ldict.get("literals","") + if hasattr(t,"__call__"): + if tokname == 'error': + for s in states: + self.errorf[s] = t + elif tokname == 'ignore': + line = func_code(t).co_firstlineno + file = func_code(t).co_filename + self.log.error("%s:%d: Rule '%s' must be defined as a string",file,line,t.__name__) + self.error = 1 + else: + for s in states: + self.funcsym[s].append((f,t)) + elif isinstance(t, StringTypes): + if tokname == 
'ignore': + for s in states: + self.ignore[s] = t + if "\\" in t: + self.log.warning("%s contains a literal backslash '\\'",f) + + elif tokname == 'error': + self.log.error("Rule '%s' must be defined as a function", f) + self.error = 1 + else: + for s in states: + self.strsym[s].append((f,t)) + else: + self.log.error("%s not defined as a function or string", f) + self.error = 1 - if not tokens: - raise SyntaxError,"lex: module does not define 'tokens'" + # Sort the functions by line number + for f in self.funcsym.values(): + if sys.version_info[0] < 3: + f.sort(lambda x,y: cmp(func_code(x[1]).co_firstlineno,func_code(y[1]).co_firstlineno)) + else: + # Python 3.0 + f.sort(key=lambda x: func_code(x[1]).co_firstlineno) - if not (isinstance(tokens,types.ListType) or isinstance(tokens,types.TupleType)): - raise SyntaxError,"lex: tokens must be a list or tuple." + # Sort the strings by regular expression length + for s in self.strsym.values(): + if sys.version_info[0] < 3: + s.sort(lambda x,y: (len(x[1]) < len(y[1])) - (len(x[1]) > len(y[1]))) + else: + # Python 3.0 + s.sort(key=lambda x: len(x[1]),reverse=True) - # Build a dictionary of valid token names - lexobj.lextokens = { } - if not optimize: - for n in tokens: - if not _is_identifier.match(n): - print >>sys.stderr, "lex: Bad token name '%s'" % n - error = 1 - if warn and lexobj.lextokens.has_key(n): - print >>sys.stderr, "lex: Warning. Token '%s' multiply defined." 
% n - lexobj.lextokens[n] = None - else: - for n in tokens: lexobj.lextokens[n] = None + # Validate all of the t_rules collected + def validate_rules(self): + for state in self.stateinfo: + # Validate all rules defined by functions + + + + for fname, f in self.funcsym[state]: + line = func_code(f).co_firstlineno + file = func_code(f).co_filename + self.files[file] = 1 - if debug: - print "lex: tokens = '%s'" % lexobj.lextokens.keys() + tokname = self.toknames[fname] + if isinstance(f, types.MethodType): + reqargs = 2 + else: + reqargs = 1 + nargs = func_code(f).co_argcount + if nargs > reqargs: + self.log.error("%s:%d: Rule '%s' has too many arguments",file,line,f.__name__) + self.error = 1 + continue - try: - for c in literals: - if not (isinstance(c,types.StringType) or isinstance(c,types.UnicodeType)) or len(c) > 1: - print >>sys.stderr, "lex: Invalid literal %s. Must be a single character" % repr(c) - error = 1 + if nargs < reqargs: + self.log.error("%s:%d: Rule '%s' requires an argument", file,line,f.__name__) + self.error = 1 continue - except TypeError: - print >>sys.stderr, "lex: Invalid literals specification. literals must be a sequence of characters." - error = 1 - - lexobj.lexliterals = literals - - # Build statemap - if states: - if not (isinstance(states,types.TupleType) or isinstance(states,types.ListType)): - print >>sys.stderr, "lex: states must be defined as a tuple or list." - error = 1 - else: - for s in states: - if not isinstance(s,types.TupleType) or len(s) != 2: - print >>sys.stderr, "lex: invalid state specifier %s. 
Must be a tuple (statename,'exclusive|inclusive')" % repr(s) - error = 1 - continue - name, statetype = s - if not isinstance(name,types.StringType): - print >>sys.stderr, "lex: state name %s must be a string" % repr(name) - error = 1 - continue - if not (statetype == 'inclusive' or statetype == 'exclusive'): - print >>sys.stderr, "lex: state type for state %s must be 'inclusive' or 'exclusive'" % name - error = 1 - continue - if stateinfo.has_key(name): - print >>sys.stderr, "lex: state '%s' already defined." % name - error = 1 - continue - stateinfo[name] = statetype - - # Get a list of symbols with the t_ or s_ prefix - tsymbols = [f for f in ldict.keys() if f[:2] == 't_' ] - - # Now build up a list of functions and a list of strings - - funcsym = { } # Symbols defined as functions - strsym = { } # Symbols defined as strings - toknames = { } # Mapping of symbols to token names - - for s in stateinfo.keys(): - funcsym[s] = [] - strsym[s] = [] - - ignore = { } # Ignore strings by state - errorf = { } # Error functions by state - - if len(tsymbols) == 0: - raise SyntaxError,"lex: no rules of the form t_rulename are defined." 
- - for f in tsymbols: - t = ldict[f] - states, tokname = _statetoken(f,stateinfo) - toknames[f] = tokname - - if callable(t): - for s in states: funcsym[s].append((f,t)) - elif (isinstance(t, types.StringType) or isinstance(t,types.UnicodeType)): - for s in states: strsym[s].append((f,t)) - else: - print >>sys.stderr, "lex: %s not defined as a function or string" % f - error = 1 + if not f.__doc__: + self.log.error("%s:%d: No regular expression defined for rule '%s'",file,line,f.__name__) + self.error = 1 + continue - # Sort the functions by line number - for f in funcsym.values(): - f.sort(lambda x,y: cmp(x[1].func_code.co_firstlineno,y[1].func_code.co_firstlineno)) - - # Sort the strings by regular expression length - for s in strsym.values(): - s.sort(lambda x,y: (len(x[1]) < len(y[1])) - (len(x[1]) > len(y[1]))) + try: + c = re.compile("(?P<%s>%s)" % (fname,f.__doc__), re.VERBOSE | self.reflags) + if c.match(""): + self.log.error("%s:%d: Regular expression for rule '%s' matches empty string", file,line,f.__name__) + self.error = 1 + except re.error: + _etype, e, _etrace = sys.exc_info() + self.log.error("%s:%d: Invalid regular expression for rule '%s'. %s", file,line,f.__name__,e) + if '#' in f.__doc__: + self.log.error("%s:%d. 
Make sure '#' in rule '%s' is escaped with '\\#'",file,line, f.__name__) + self.error = 1 + + # Validate all rules defined by strings + for name,r in self.strsym[state]: + tokname = self.toknames[name] + if tokname == 'error': + self.log.error("Rule '%s' must be defined as a function", name) + self.error = 1 + continue - regexs = { } + if not tokname in self.tokens and tokname.find("ignore_") < 0: + self.log.error("Rule '%s' defined for an unspecified token %s",name,tokname) + self.error = 1 + continue - # Build the master regular expressions - for state in stateinfo.keys(): - regex_list = [] + try: + c = re.compile("(?P<%s>%s)" % (name,r),re.VERBOSE | self.reflags) + if (c.match("")): + self.log.error("Regular expression for rule '%s' matches empty string",name) + self.error = 1 + except re.error: + _etype, e, _etrace = sys.exc_info() + self.log.error("Invalid regular expression for rule '%s'. %s",name,e) + if '#' in r: + self.log.error("Make sure '#' in rule '%s' is escaped with '\\#'",name) + self.error = 1 - # Add rules defined by functions first - for fname, f in funcsym[state]: - line = f.func_code.co_firstlineno - file = f.func_code.co_filename - files[file] = None - tokname = toknames[fname] - - ismethod = isinstance(f, types.MethodType) - - if not optimize: - nargs = f.func_code.co_argcount - if ismethod: + if not self.funcsym[state] and not self.strsym[state]: + self.log.error("No rules defined for state '%s'",state) + self.error = 1 + + # Validate the error function + efunc = self.errorf.get(state,None) + if efunc: + f = efunc + line = func_code(f).co_firstlineno + file = func_code(f).co_filename + self.files[file] = 1 + + if isinstance(f, types.MethodType): reqargs = 2 else: reqargs = 1 + nargs = func_code(f).co_argcount if nargs > reqargs: - print >>sys.stderr, "%s:%d: Rule '%s' has too many arguments." 
% (file,line,f.__name__) - error = 1 - continue + self.log.error("%s:%d: Rule '%s' has too many arguments",file,line,f.__name__) + self.error = 1 if nargs < reqargs: - print >>sys.stderr, "%s:%d: Rule '%s' requires an argument." % (file,line,f.__name__) - error = 1 - continue + self.log.error("%s:%d: Rule '%s' requires an argument", file,line,f.__name__) + self.error = 1 - if tokname == 'ignore': - print >>sys.stderr, "%s:%d: Rule '%s' must be defined as a string." % (file,line,f.__name__) - error = 1 - continue + for f in self.files: + self.validate_file(f) - if tokname == 'error': - errorf[state] = f - continue - if f.__doc__: - if not optimize: - try: - c = re.compile("(?P<%s>%s)" % (fname,f.__doc__), re.VERBOSE | reflags) - if c.match(""): - print >>sys.stderr, "%s:%d: Regular expression for rule '%s' matches empty string." % (file,line,f.__name__) - error = 1 - continue - except re.error,e: - print >>sys.stderr, "%s:%d: Invalid regular expression for rule '%s'. %s" % (file,line,f.__name__,e) - if '#' in f.__doc__: - print >>sys.stderr, "%s:%d. Make sure '#' in rule '%s' is escaped with '\\#'." % (file,line, f.__name__) - error = 1 - continue + # ----------------------------------------------------------------------------- + # validate_file() + # + # This checks to see if there are duplicated t_rulename() functions or strings + # in the parser input file. This is done using a simple regular expression + # match on each line in the given file. + # ----------------------------------------------------------------------------- + + def validate_file(self,filename): + import os.path + base,ext = os.path.splitext(filename) + if ext != '.py': return # No idea what the file is. Return OK - if debug: - print "lex: Adding rule %s -> '%s' (state '%s')" % (f.__name__,f.__doc__, state) + try: + f = open(filename) + lines = f.readlines() + f.close() + except IOError: + return # Couldn't find the file. Don't worry about it - # Okay. The regular expression seemed okay. 
Let's append it to the master regular - # expression we're building + fre = re.compile(r'\s*def\s+(t_[a-zA-Z_0-9]*)\(') + sre = re.compile(r'\s*(t_[a-zA-Z_0-9]*)\s*=') - regex_list.append("(?P<%s>%s)" % (fname,f.__doc__)) - else: - print >>sys.stderr, "%s:%d: No regular expression defined for rule '%s'" % (file,line,f.__name__) + counthash = { } + linen = 1 + for l in lines: + m = fre.match(l) + if not m: + m = sre.match(l) + if m: + name = m.group(1) + prev = counthash.get(name) + if not prev: + counthash[name] = linen + else: + self.log.error("%s:%d: Rule %s redefined. Previously defined on line %d",filename,linen,name,prev) + self.error = 1 + linen += 1 + +# ----------------------------------------------------------------------------- +# lex(module) +# +# Build all of the regular expression rules from definitions in the supplied module +# ----------------------------------------------------------------------------- +def lex(module=None,object=None,debug=0,optimize=0,lextab="lextab",reflags=0,nowarn=0,outputdir="", debuglog=None, errorlog=None): + global lexer + ldict = None + stateinfo = { 'INITIAL' : 'inclusive'} + lexobj = Lexer() + lexobj.lexoptimize = optimize + global token,input - # Now add all of the simple rules - for name,r in strsym[state]: - tokname = toknames[name] + if errorlog is None: + errorlog = PlyLogger(sys.stderr) + + if debug: + if debuglog is None: + debuglog = PlyLogger(sys.stderr) - if tokname == 'ignore': - if "\\" in r: - print >>sys.stderr, "lex: Warning. 
%s contains a literal backslash '\\'" % name - ignore[state] = r - continue + # Get the module dictionary used for the lexer + if object: module = object - if not optimize: - if tokname == 'error': - raise SyntaxError,"lex: Rule '%s' must be defined as a function" % name - error = 1 - continue + if module: + _items = [(k,getattr(module,k)) for k in dir(module)] + ldict = dict(_items) + else: + ldict = get_caller_module_dict(2) - if not lexobj.lextokens.has_key(tokname) and tokname.find("ignore_") < 0: - print >>sys.stderr, "lex: Rule '%s' defined for an unspecified token %s." % (name,tokname) - error = 1 - continue - try: - c = re.compile("(?P<%s>%s)" % (name,r),re.VERBOSE | reflags) - if (c.match("")): - print >>sys.stderr, "lex: Regular expression for rule '%s' matches empty string." % name - error = 1 - continue - except re.error,e: - print >>sys.stderr, "lex: Invalid regular expression for rule '%s'. %s" % (name,e) - if '#' in r: - print >>sys.stderr, "lex: Make sure '#' in rule '%s' is escaped with '\\#'." 
% name + # Collect parser information from the dictionary + linfo = LexerReflect(ldict,log=errorlog,reflags=reflags) + linfo.get_all() + if not optimize: + if linfo.validate_all(): + raise SyntaxError("Can't build lexer") - error = 1 - continue - if debug: - print "lex: Adding rule %s -> '%s' (state '%s')" % (name,r,state) + if optimize and lextab: + try: + lexobj.readtab(lextab,ldict) + token = lexobj.token + input = lexobj.input + lexer = lexobj + return lexobj - regex_list.append("(?P<%s>%s)" % (name,r)) + except ImportError: + pass - if not regex_list: - print >>sys.stderr, "lex: No rules defined for state '%s'" % state - error = 1 + # Dump some basic debugging information + if debug: + debuglog.info("lex: tokens = %r", linfo.tokens) + debuglog.info("lex: literals = %r", linfo.literals) + debuglog.info("lex: states = %r", linfo.stateinfo) - regexs[state] = regex_list + # Build a dictionary of valid token names + lexobj.lextokens = { } + for n in linfo.tokens: + lexobj.lextokens[n] = 1 + # Get literals specification + if isinstance(linfo.literals,(list,tuple)): + lexobj.lexliterals = type(linfo.literals[0])().join(linfo.literals) + else: + lexobj.lexliterals = linfo.literals - if not optimize: - for f in files.keys(): - if not _validate_file(f): - error = 1 + # Get the stateinfo dictionary + stateinfo = linfo.stateinfo + + regexs = { } + # Build the master regular expressions + for state in stateinfo: + regex_list = [] + + # Add rules defined by functions first + for fname, f in linfo.funcsym[state]: + line = func_code(f).co_firstlineno + file = func_code(f).co_filename + regex_list.append("(?P<%s>%s)" % (fname,f.__doc__)) + if debug: + debuglog.info("lex: Adding rule %s -> '%s' (state '%s')",fname,f.__doc__, state) - if error: - raise SyntaxError,"lex: Unable to build lexer." 
+ # Now add all of the simple rules + for name,r in linfo.strsym[state]: + regex_list.append("(?P<%s>%s)" % (name,r)) + if debug: + debuglog.info("lex: Adding rule %s -> '%s' (state '%s')",name,r, state) - # From this point forward, we're reasonably confident that we can build the lexer. - # No more errors will be generated, but there might be some warning messages. + regexs[state] = regex_list # Build the master regular expressions - for state in regexs.keys(): - lexre, re_text, re_names = _form_master_re(regexs[state],reflags,ldict,toknames) + if debug: + debuglog.info("lex: ==== MASTER REGEXS FOLLOW ====") + + for state in regexs: + lexre, re_text, re_names = _form_master_re(regexs[state],reflags,ldict,linfo.toknames) lexobj.lexstatere[state] = lexre lexobj.lexstateretext[state] = re_text lexobj.lexstaterenames[state] = re_names if debug: for i in range(len(re_text)): - print "lex: state '%s'. regex[%d] = '%s'" % (state, i, re_text[i]) + debuglog.info("lex: state '%s' : regex[%d] = '%s'",state, i, re_text[i]) - # For inclusive states, we need to add the INITIAL state - for state,type in stateinfo.items(): - if state != "INITIAL" and type == 'inclusive': + # For inclusive states, we need to add the regular expressions from the INITIAL state + for state,stype in stateinfo.items(): + if state != "INITIAL" and stype == 'inclusive': lexobj.lexstatere[state].extend(lexobj.lexstatere['INITIAL']) lexobj.lexstateretext[state].extend(lexobj.lexstateretext['INITIAL']) lexobj.lexstaterenames[state].extend(lexobj.lexstaterenames['INITIAL']) @@ -806,30 +969,30 @@ lexobj.lexstateinfo = stateinfo lexobj.lexre = lexobj.lexstatere["INITIAL"] lexobj.lexretext = lexobj.lexstateretext["INITIAL"] + lexobj.lexreflags = reflags # Set up ignore variables - lexobj.lexstateignore = ignore + lexobj.lexstateignore = linfo.ignore lexobj.lexignore = lexobj.lexstateignore.get("INITIAL","") # Set up error functions - lexobj.lexstateerrorf = errorf - lexobj.lexerrorf = errorf.get("INITIAL",None) 
- if warn and not lexobj.lexerrorf: - print >>sys.stderr, "lex: Warning. no t_error rule is defined." + lexobj.lexstateerrorf = linfo.errorf + lexobj.lexerrorf = linfo.errorf.get("INITIAL",None) + if not lexobj.lexerrorf: + errorlog.warning("No t_error rule is defined") # Check state information for ignore and error rules for s,stype in stateinfo.items(): if stype == 'exclusive': - if warn and not errorf.has_key(s): - print >>sys.stderr, "lex: Warning. no error rule is defined for exclusive state '%s'" % s - if warn and not ignore.has_key(s) and lexobj.lexignore: - print >>sys.stderr, "lex: Warning. no ignore rule is defined for exclusive state '%s'" % s + if not s in linfo.errorf: + errorlog.warning("No error rule is defined for exclusive state '%s'", s) + if not s in linfo.ignore and lexobj.lexignore: + errorlog.warning("No ignore rule is defined for exclusive state '%s'", s) elif stype == 'inclusive': - if not errorf.has_key(s): - errorf[s] = errorf.get("INITIAL",None) - if not ignore.has_key(s): - ignore[s] = ignore.get("INITIAL","") - + if not s in linfo.errorf: + linfo.errorf[s] = linfo.errorf.get("INITIAL",None) + if not s in linfo.ignore: + linfo.ignore[s] = linfo.ignore.get("INITIAL","") # Create global versions of the token() and input() functions token = lexobj.token @@ -856,7 +1019,7 @@ data = f.read() f.close() except IndexError: - print "Reading from standard input (type EOF to end):" + sys.stdout.write("Reading from standard input (type EOF to end):\n") data = sys.stdin.read() if lexer: @@ -872,8 +1035,7 @@ while 1: tok = _token() if not tok: break - print "(%s,%r,%d,%d)" % (tok.type, tok.value, tok.lineno,tok.lexpos) - + sys.stdout.write("(%s,%r,%d,%d)\n" % (tok.type, tok.value, tok.lineno,tok.lexpos)) # ----------------------------------------------------------------------------- # @TOKEN(regex) @@ -884,7 +1046,7 @@ def TOKEN(r): def set_doc(f): - if callable(r): + if hasattr(r,"__call__"): f.__doc__ = r.__doc__ else: f.__doc__ = r diff -ruN 
ply.orig/yacc.py ply/yacc.py --- other-licenses/ply/ply.orig/yacc.py 2010-04-02 19:02:56.000000000 +0300 +++ other-licenses/ply/ply/yacc.py 2009-09-02 17:27:23.000000000 +0300 @@ -1,26 +1,35 @@ -#----------------------------------------------------------------------------- +# ----------------------------------------------------------------------------- # ply: yacc.py # -# Author(s): David M. Beazley (dave@dabeaz.com) -# -# Copyright (C) 2001-2008, David M. Beazley -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA -# -# See the file COPYING for a complete copy of the LGPL. -# +# Copyright (C) 2001-2009, +# David M. Beazley (Dabeaz LLC) +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright notice, +# this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright notice, +# this list of conditions and the following disclaimer in the documentation +# and/or other materials provided with the distribution. 
+# * Neither the name of the David Beazley or Dabeaz LLC may be used to +# endorse or promote products derived from this software without +# specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +# ----------------------------------------------------------------------------- # # This implements an LR parser that is constructed from grammar rules defined # as Python functions. The grammer is specified by supplying the BNF inside @@ -50,8 +59,8 @@ # own risk! # ---------------------------------------------------------------------------- -__version__ = "2.5" -__tabversion__ = "2.4" # Table version +__version__ = "3.3" +__tabversion__ = "3.2" # Table version #----------------------------------------------------------------------------- # === User configurable parameters === @@ -71,24 +80,83 @@ yaccdevel = 0 # Set to True if developing yacc. This turns off optimized # implementations of certain functions. -import re, types, sys, cStringIO, md5, os.path +resultlimit = 40 # Size limit of results when running in debug mode. 
-# Exception raised for yacc-related errors -class YaccError(Exception): pass +pickle_protocol = 0 # Protocol to use when writing pickle files -# Exception raised for errors raised in production rules -class SyntaxError(Exception): pass +import re, types, sys, os.path +# Compatibility function for python 2.6/3.0 +if sys.version_info[0] < 3: + def func_code(f): + return f.func_code +else: + def func_code(f): + return f.__code__ -# Available instance types. This is used when parsers are defined by a class. -# it's a little funky because I want to preserve backwards compatibility -# with Python 2.0 where types.ObjectType is undefined. - +# Compatibility try: - _INSTANCETYPE = (types.InstanceType, types.ObjectType) + MAXINT = sys.maxint except AttributeError: - _INSTANCETYPE = types.InstanceType - class object: pass # Note: needed if no new-style classes present + MAXINT = sys.maxsize + +# Python 2.x/3.0 compatibility. +def load_ply_lex(): + if sys.version_info[0] < 3: + import lex + else: + import ply.lex as lex + return lex + +# This object is a stand-in for a logging object created by the +# logging module. PLY will use this by default to create things +# such as the parser.out file. If a user wants more detailed +# information, they can create their own logging object and pass +# it into PLY. + +class PlyLogger(object): + def __init__(self,f): + self.f = f + def debug(self,msg,*args,**kwargs): + self.f.write((msg % args) + "\n") + info = debug + + def warning(self,msg,*args,**kwargs): + self.f.write("WARNING: "+ (msg % args) + "\n") + + def error(self,msg,*args,**kwargs): + self.f.write("ERROR: " + (msg % args) + "\n") + + critical = debug + +# Null logger is used when no output is generated. Does nothing. 
+class NullLogger(object): + def __getattribute__(self,name): + return self + def __call__(self,*args,**kwargs): + return self + +# Exception raised for yacc-related errors +class YaccError(Exception): pass + +# Format the result message that the parser produces when running in debug mode. +def format_result(r): + repr_str = repr(r) + if '\n' in repr_str: repr_str = repr(repr_str) + if len(repr_str) > resultlimit: + repr_str = repr_str[:resultlimit]+" ..." + result = "<%s @ 0x%x> (%s)" % (type(r).__name__,id(r),repr_str) + return result + + +# Format stack entries when the parser is running in debug mode +def format_stack_entry(r): + repr_str = repr(r) + if '\n' in repr_str: repr_str = repr(repr_str) + if len(repr_str) < 16: + return repr_str + else: + return "<%s @ 0x%x>" % (type(r).__name__,id(r)) #----------------------------------------------------------------------------- # === LR Parsing Engine === @@ -142,6 +210,9 @@ def lineno(self,n): return getattr(self.slice[n],"lineno",0) + def set_lineno(self,n,lineno): + self.slice[n].lineno = lineno + def linespan(self,n): startline = getattr(self.slice[n],"lineno",0) endline = getattr(self.slice[n],"endlineno",startline) @@ -159,27 +230,18 @@ raise SyntaxError -# The LR Parsing engine. This is defined as a class so that multiple parsers -# can exist in the same process. A user never instantiates this directly. -# Instead, the global yacc() function should be used to create a suitable Parser -# object. - -class Parser: - def __init__(self,magic=None): - - # This is a hack to keep users from trying to instantiate a Parser - # object directly. - - if magic != "xyzzy": - raise YaccError, "Can't directly instantiate Parser. Use yacc() instead." 
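The `PlyLogger` shim added earlier in this hunk is what lets users pass their own logging object into PLY: anything with `debug`/`info`/`warning`/`error`/`critical` methods will do. A standalone re-creation of the shim (copied from the patch above), driven against an in-memory buffer to show the exact line format it produces:

```python
import io

# Minimal copy of the PlyLogger shim from the patch: it quacks like a
# logging.Logger but just writes %-formatted lines to a file object.
class PlyLogger(object):
    def __init__(self, f):
        self.f = f

    def debug(self, msg, *args, **kwargs):
        self.f.write((msg % args) + "\n")
    info = debug                      # info shares debug's formatting

    def warning(self, msg, *args, **kwargs):
        self.f.write("WARNING: " + (msg % args) + "\n")

    def error(self, msg, *args, **kwargs):
        self.f.write("ERROR: " + (msg % args) + "\n")

    critical = debug                  # critical also shares debug's formatting

buf = io.StringIO()
log = PlyLogger(buf)
log.info("State : %s", 0)
log.warning("Token '%s' defined, but not used", "FOO")
log.error("Error : %s", "unexpected token")
print(buf.getvalue())
```

Because the interface is duck-typed, a real `logging.Logger` can be substituted wherever PLY expects one of these.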
- - # Reset internal state - self.productions = None # List of productions - self.errorfunc = None # Error handling function - self.action = { } # LR Action table - self.goto = { } # LR goto table - self.require = { } # Attribute require table - self.method = "Unknown LR" # Table construction method used +# ----------------------------------------------------------------------------- +# == LRParser == +# +# The LR Parsing engine. +# ----------------------------------------------------------------------------- + +class LRParser: + def __init__(self,lrtab,errorf): + self.productions = lrtab.lr_productions + self.action = lrtab.lr_action + self.goto = lrtab.lr_goto + self.errorfunc = errorf def errok(self): self.errorok = 1 @@ -194,6 +256,8 @@ def parse(self,input=None,lexer=None,debug=0,tracking=0,tokenfunc=None): if debug or yaccdevel: + if isinstance(debug,int): + debug = PlyLogger(sys.stderr) return self.parsedebug(input,lexer,debug,tracking,tokenfunc) elif tracking: return self.parseopt(input,lexer,debug,tracking,tokenfunc) @@ -215,7 +279,7 @@ # # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! - def parsedebug(self,input=None,lexer=None,debug=0,tracking=0,tokenfunc=None): + def parsedebug(self,input=None,lexer=None,debug=None,tracking=0,tokenfunc=None): lookahead = None # Current lookahead symbol lookaheadstack = [ ] # Stack of lookahead symbols actions = self.action # Local reference to action table (to avoid lookup on self.) @@ -223,12 +287,16 @@ prod = self.productions # Local reference to production list (to avoid lookup on self.) pslice = YaccProduction(None) # Production object passed to grammar rules errorcount = 0 # Used during error recovery - endsym = "$end" # End symbol + + # --! DEBUG + debug.info("PLY: PARSE DEBUG START") + # --! 
DEBUG + # If no lexer was given, we will try to use the lex module if not lexer: - import lex + lex = load_ply_lex() lexer = lex.lexer - + # Set up the lexer and parser objects on pslice pslice.lexer = lexer pslice.parser = self @@ -257,7 +325,7 @@ statestack.append(0) sym = YaccSymbol() - sym.type = endsym + sym.type = "$end" symstack.append(sym) state = 0 while 1: @@ -266,8 +334,8 @@ # the next token off of the lookaheadstack or from the lexer # --! DEBUG - if debug > 1: - print 'state', state + debug.debug('') + debug.debug('State : %s', state) # --! DEBUG if not lookahead: @@ -277,35 +345,25 @@ lookahead = lookaheadstack.pop() if not lookahead: lookahead = YaccSymbol() - lookahead.type = endsym + lookahead.type = "$end" # --! DEBUG - if debug: - errorlead = ("%s . %s" % (" ".join([xx.type for xx in symstack][1:]), str(lookahead))).lstrip() + debug.debug('Stack : %s', + ("%s . %s" % (" ".join([xx.type for xx in symstack][1:]), str(lookahead))).lstrip()) # --! DEBUG # Check the action table ltype = lookahead.type t = actions[state].get(ltype) - # --! DEBUG - if debug > 1: - print 'action', t - # --! DEBUG - if t is not None: if t > 0: # shift a symbol on the stack - if ltype is endsym: - # Error, end of input - sys.stderr.write("yacc: Parse error. EOF\n") - return statestack.append(t) state = t # --! DEBUG - if debug > 1: - sys.stderr.write("%-60s shift state %s\n" % (errorlead, t)) + debug.debug("Action : Shift and goto state %s", t) # --! DEBUG symstack.append(lookahead) @@ -327,8 +385,11 @@ sym.value = None # --! DEBUG - if debug > 1: - sys.stderr.write("%-60s reduce %d\n" % (errorlead, -t)) + if plen: + debug.info("Action : Reduce rule [%s] with %s and goto state %d", p.str, "["+",".join([format_stack_entry(_v.value) for _v in symstack[-plen:]])+"]",-t) + else: + debug.info("Action : Reduce rule [%s] with %s and goto state %d", p.str, [],-t) + # --! 
DEBUG if plen: @@ -355,9 +416,12 @@ try: # Call the grammar rule with our special slice object - p.func(pslice) del symstack[-plen:] del statestack[-plen:] + p.callable(pslice) + # --! DEBUG + debug.info("Result : %s", format_result(pslice[0])) + # --! DEBUG symstack.append(sym) state = goto[statestack[-1]][pname] statestack.append(state) @@ -393,7 +457,10 @@ try: # Call the grammar rule with our special slice object - p.func(pslice) + p.callable(pslice) + # --! DEBUG + debug.info("Result : %s", format_result(pslice[0])) + # --! DEBUG symstack.append(sym) state = goto[statestack[-1]][pname] statestack.append(state) @@ -412,13 +479,18 @@ if t == 0: n = symstack[-1] - return getattr(n,"value",None) + result = getattr(n,"value",None) + # --! DEBUG + debug.info("Done : Returning %s", format_result(result)) + debug.info("PLY: PARSE DEBUG END") + # --! DEBUG + return result if t == None: # --! DEBUG - if debug: - sys.stderr.write(errorlead + "\n") + debug.error('Error : %s', + ("%s . %s" % (" ".join([xx.type for xx in symstack][1:]), str(lookahead))).lstrip()) # --! DEBUG # We have some kind of parsing error here. To handle @@ -435,13 +507,15 @@ errorcount = error_count self.errorok = 0 errtoken = lookahead - if errtoken.type is endsym: + if errtoken.type == "$end": errtoken = None # End of file! if self.errorfunc: global errok,token,restart errok = self.errok # Set some special functions available in error recovery token = get_token restart = self.restart + if errtoken and not hasattr(errtoken,'lexer'): + errtoken.lexer = lexer tok = self.errorfunc(errtoken) del errok, token, restart # Delete special functions @@ -471,7 +545,7 @@ # entire parse has been rolled back and we're completely hosed. The token is # discarded and we just keep going. - if len(statestack) <= 1 and lookahead.type is not endsym: + if len(statestack) <= 1 and lookahead.type != "$end": lookahead = None errtoken = None state = 0 @@ -483,7 +557,7 @@ # at the end of the file. 
nuke the top entry and generate an error token # Start nuking entries on the stack - if lookahead.type is endsym: + if lookahead.type == "$end": # Whoa. We're really hosed here. Bail out return @@ -509,7 +583,7 @@ continue # Call an error function here - raise RuntimeError, "yacc: internal parser error!!!\n" + raise RuntimeError("yacc: internal parser error!!!\n") # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # parseopt(). @@ -531,7 +605,7 @@ # If no lexer was given, we will try to use the lex module if not lexer: - import lex + lex = load_ply_lex() lexer = lex.lexer # Set up the lexer and parser objects on pslice @@ -586,10 +660,6 @@ if t is not None: if t > 0: # shift a symbol on the stack - if ltype == '$end': - # Error, end of input - sys.stderr.write("yacc: Parse error. EOF\n") - return statestack.append(t) state = t @@ -635,9 +705,9 @@ try: # Call the grammar rule with our special slice object - p.func(pslice) del symstack[-plen:] del statestack[-plen:] + p.callable(pslice) symstack.append(sym) state = goto[statestack[-1]][pname] statestack.append(state) @@ -673,7 +743,7 @@ try: # Call the grammar rule with our special slice object - p.func(pslice) + p.callable(pslice) symstack.append(sym) state = goto[statestack[-1]][pname] statestack.append(state) @@ -717,6 +787,8 @@ errok = self.errok # Set some special functions available in error recovery token = get_token restart = self.restart + if errtoken and not hasattr(errtoken,'lexer'): + errtoken.lexer = lexer tok = self.errorfunc(errtoken) del errok, token, restart # Delete special functions @@ -784,7 +856,7 @@ continue # Call an error function here - raise RuntimeError, "yacc: internal parser error!!!\n" + raise RuntimeError("yacc: internal parser error!!!\n") # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # parseopt_notrack(). 
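As the `parse()` wrapper earlier in this patch shows, the engine keeps three near-identical implementations and picks one per call: `parsedebug()` when debugging (or `yaccdevel` is set), `parseopt()` when only position tracking is wanted, and `parseopt_notrack()` otherwise. A standalone sketch of that dispatch (the final `else` branch is inferred from the `parseopt_notrack()` section that follows; names mirror the patch):

```python
# Sketch of LRParser.parse()'s implementation selection, as shown in the
# hunks above. A truthy `debug` (an int or a logger object) forces the
# fully traced path; `tracking` alone selects the position-tracking path.
def pick_implementation(debug, tracking, yaccdevel=False):
    if debug or yaccdevel:
        return "parsedebug"        # full tracing, slowest
    elif tracking:
        return "parseopt"          # line/position tracking, no tracing
    else:
        return "parseopt_notrack"  # fastest path

print(pick_implementation(debug=1, tracking=0))
print(pick_implementation(debug=0, tracking=1))
print(pick_implementation(debug=0, tracking=0))
```

Duplicating the loop three times instead of branching inside it is a deliberate trade-off: the hot path avoids per-iteration `if debug:` checks.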
@@ -805,7 +877,7 @@ # If no lexer was given, we will try to use the lex module if not lexer: - import lex + lex = load_ply_lex() lexer = lex.lexer # Set up the lexer and parser objects on pslice @@ -860,10 +932,6 @@ if t is not None: if t > 0: # shift a symbol on the stack - if ltype == '$end': - # Error, end of input - sys.stderr.write("yacc: Parse error. EOF\n") - return statestack.append(t) state = t @@ -898,9 +966,9 @@ try: # Call the grammar rule with our special slice object - p.func(pslice) del symstack[-plen:] del statestack[-plen:] + p.callable(pslice) symstack.append(sym) state = goto[statestack[-1]][pname] statestack.append(state) @@ -930,7 +998,7 @@ try: # Call the grammar rule with our special slice object - p.func(pslice) + p.callable(pslice) symstack.append(sym) state = goto[statestack[-1]][pname] statestack.append(state) @@ -974,6 +1042,8 @@ errok = self.errok # Set some special functions available in error recovery token = get_token restart = self.restart + if errtoken and not hasattr(errtoken,'lexer'): + errtoken.lexer = lexer tok = self.errorfunc(errtoken) del errok, token, restart # Delete special functions @@ -1041,169 +1111,172 @@ continue # Call an error function here - raise RuntimeError, "yacc: internal parser error!!!\n" - + raise RuntimeError("yacc: internal parser error!!!\n") # ----------------------------------------------------------------------------- -# === Parser Construction === +# === Grammar Representation === # -# The following functions and variables are used to implement the yacc() function -# itself. This is pretty hairy stuff involving lots of error checking, -# construction of LR items, kernels, and so forth. Although a lot of -# this work is done using global variables, the resulting Parser object -# is completely self contained--meaning that it is safe to repeatedly -# call yacc() with different grammars in the same application. 
+# The following functions, classes, and variables are used to represent and +# manipulate the rules that make up a grammar. # ----------------------------------------------------------------------------- -# ----------------------------------------------------------------------------- -# validate_file() -# -# This function checks to see if there are duplicated p_rulename() functions -# in the parser module file. Without this function, it is really easy for -# users to make mistakes by cutting and pasting code fragments (and it's a real -# bugger to try and figure out why the resulting parser doesn't work). Therefore, -# we just do a little regular expression pattern matching of def statements -# to try and detect duplicates. -# ----------------------------------------------------------------------------- - -def validate_file(filename): - base,ext = os.path.splitext(filename) - if ext != '.py': return 1 # No idea. Assume it's okay. +import re - try: - f = open(filename) - lines = f.readlines() - f.close() - except IOError: - return 1 # Oh well - - # Match def p_funcname( - fre = re.compile(r'\s*def\s+(p_[a-zA-Z_0-9]*)\(') - counthash = { } - linen = 1 - noerror = 1 - for l in lines: - m = fre.match(l) - if m: - name = m.group(1) - prev = counthash.get(name) - if not prev: - counthash[name] = linen - else: - sys.stderr.write("%s:%d: Function %s redefined. Previously defined on line %d\n" % (filename,linen,name,prev)) - noerror = 0 - linen += 1 - return noerror - -# This function looks for functions that might be grammar rules, but which don't have the proper p_suffix. -def validate_dict(d): - for n,v in d.items(): - if n[0:2] == 'p_' and type(v) in (types.FunctionType, types.MethodType): continue - if n[0:2] == 't_': continue - - if n[0:2] == 'p_': - sys.stderr.write("yacc: Warning. 
'%s' not defined as a function\n" % n) - if 1 and isinstance(v,types.FunctionType) and v.func_code.co_argcount == 1: - try: - doc = v.__doc__.split(" ") - if doc[1] == ':': - sys.stderr.write("%s:%d: Warning. Possible grammar rule '%s' defined without p_ prefix.\n" % (v.func_code.co_filename, v.func_code.co_firstlineno,n)) - except StandardError: - pass +# regex matching identifiers +_is_identifier = re.compile(r'^[a-zA-Z0-9_-]+$') # ----------------------------------------------------------------------------- -# === GRAMMAR FUNCTIONS === +# class Production: # -# The following global variables and functions are used to store, manipulate, -# and verify the grammar rules specified by the user. -# ----------------------------------------------------------------------------- - -# Initialize all of the global variables used during grammar construction -def initialize_vars(): - global Productions, Prodnames, Prodmap, Terminals - global Nonterminals, First, Follow, Precedence, UsedPrecedence, LRitems - global Errorfunc, Signature, Requires - - Productions = [None] # A list of all of the productions. The first - # entry is always reserved for the purpose of - # building an augmented grammar - - Prodnames = { } # A dictionary mapping the names of nonterminals to a list of all - # productions of that nonterminal. +# This class stores the raw information about a single production or grammar rule. +# A grammar rule refers to a specification such as this: +# +# expr : expr PLUS term +# +# Here are the basic attributes defined on all productions +# +# name - Name of the production. For example 'expr' +# prod - A list of symbols on the right side ['expr','PLUS','term'] +# prec - Production precedence level +# number - Production number. +# func - Function that executes on reduce +# file - File where production function is defined +# lineno - Line number where production function is defined +# +# The following attributes are defined or optional. 
+# +# len - Length of the production (number of symbols on right hand side) +# usyms - Set of unique symbols found in the production +# ----------------------------------------------------------------------------- + +class Production(object): + reduced = 0 + def __init__(self,number,name,prod,precedence=('right',0),func=None,file='',line=0): + self.name = name + self.prod = tuple(prod) + self.number = number + self.func = func + self.callable = None + self.file = file + self.line = line + self.prec = precedence - Prodmap = { } # A dictionary that is only used to detect duplicate - # productions. + # Internal settings used during table construction + + self.len = len(self.prod) # Length of the production - Terminals = { } # A dictionary mapping the names of terminal symbols to a - # list of the rules where they are used. + # Create a list of unique production symbols used in the production + self.usyms = [ ] + for s in self.prod: + if s not in self.usyms: + self.usyms.append(s) + + # List of all LR items for the production + self.lr_items = [] + self.lr_next = None - Nonterminals = { } # A dictionary mapping names of nonterminals to a list - # of rule numbers where they are used. + # Create a string representation + if self.prod: + self.str = "%s -> %s" % (self.name," ".join(self.prod)) + else: + self.str = "%s -> " % self.name - First = { } # A dictionary of precomputed FIRST(x) symbols + def __str__(self): + return self.str - Follow = { } # A dictionary of precomputed FOLLOW(x) symbols + def __repr__(self): + return "Production("+str(self)+")" - Precedence = { } # Precedence rules for each terminal. Contains tuples of the - # form ('right',level) or ('nonassoc', level) or ('left',level) + def __len__(self): + return len(self.prod) - UsedPrecedence = { } # Precedence rules that were actually used by the grammer. - # This is only used to provide error checking and to generate - # a warning about unused precedence rules. 
+ def __nonzero__(self): + return 1 - LRitems = [ ] # A list of all LR items for the grammar. These are the - # productions with the "dot" like E -> E . PLUS E + def __getitem__(self,index): + return self.prod[index] + + # Return the nth lr_item from the production (or None if at the end) + def lr_item(self,n): + if n > len(self.prod): return None + p = LRItem(self,n) - Errorfunc = None # User defined error handler + # Precompute the list of productions immediately following. Hack. Remove later + try: + p.lr_after = Prodnames[p.prod[n+1]] + except (IndexError,KeyError): + p.lr_after = [] + try: + p.lr_before = p.prod[n-1] + except IndexError: + p.lr_before = None - Signature = md5.new() # Digital signature of the grammar rules, precedence - # and other information. Used to determined when a - # parsing table needs to be regenerated. + return p - Signature.update(__tabversion__) + # Bind the production function name to a callable + def bind(self,pdict): + if self.func: + self.callable = pdict[self.func] + +# This class serves as a minimal standin for Production objects when +# reading table data from files. It only contains information +# actually used by the LR parsing engine, plus some additional +# debugging information. 
+class MiniProduction(object): + def __init__(self,str,name,len,func,file,line): + self.name = name + self.len = len + self.func = func + self.callable = None + self.file = file + self.line = line + self.str = str + def __str__(self): + return self.str + def __repr__(self): + return "MiniProduction(%s)" % self.str - Requires = { } # Requires list + # Bind the production function name to a callable + def bind(self,pdict): + if self.func: + self.callable = pdict[self.func] - # File objects used when creating the parser.out debugging file - global _vf, _vfc - _vf = cStringIO.StringIO() - _vfc = cStringIO.StringIO() # ----------------------------------------------------------------------------- -# class Production: +# class LRItem # -# This class stores the raw information about a single production or grammar rule. -# It has a few required attributes: +# This class represents a specific stage of parsing a production rule. For +# example: # -# name - Name of the production (nonterminal) -# prod - A list of symbols making up its production -# number - Production number. +# expr : expr . PLUS term # -# In addition, a few additional attributes are used to help with debugging or -# optimization of table generation. +# In the above, the "." represents the current location of the parse. Here +# basic attributes: # -# file - File where production action is defined. -# lineno - Line number where action is defined -# func - Action function -# prec - Precedence level -# lr_next - Next LR item. Example, if we are ' E -> E . PLUS E' -# then lr_next refers to 'E -> E PLUS . E' -# lr_index - LR item index (location of the ".") in the prod list. +# name - Name of the production. For example 'expr' +# prod - A list of symbols on the right side ['expr','.', 'PLUS','term'] +# number - Production number. +# +# lr_next Next LR item. Example, if we are ' expr -> expr . PLUS term' +# then lr_next refers to 'expr -> expr PLUS . 
term' +# lr_index - LR item index (location of the ".") in the prod list. # lookaheads - LALR lookahead symbols for this item -# len - Length of the production (number of symbols on right hand side) +# len - Length of the production (number of symbols on right hand side) +# lr_after - List of all productions that immediately follow +# lr_before - Grammar symbol immediately before # ----------------------------------------------------------------------------- -class Production: - def __init__(self,**kw): - for k,v in kw.items(): - setattr(self,k,v) - self.lr_index = -1 - self.lr0_added = 0 # Flag indicating whether or not added to LR0 closure - self.lr1_added = 0 # Flag indicating whether or not added to LR1 - self.usyms = [ ] +class LRItem(object): + def __init__(self,p,n): + self.name = p.name + self.prod = list(p.prod) + self.number = p.number + self.lr_index = n self.lookaheads = { } - self.lk_added = { } - self.setnumbers = [ ] + self.prod.insert(n,".") + self.prod = tuple(self.prod) + self.len = len(self.prod) + self.usyms = p.usyms def __str__(self): if self.prod: @@ -1213,932 +1286,597 @@ return s def __repr__(self): - return str(self) - - # Compute lr_items from the production - def lr_item(self,n): - if n > len(self.prod): return None - p = Production() - p.name = self.name - p.prod = list(self.prod) - p.number = self.number - p.lr_index = n - p.lookaheads = { } - p.setnumbers = self.setnumbers - p.prod.insert(n,".") - p.prod = tuple(p.prod) - p.len = len(p.prod) - p.usyms = self.usyms - - # Precompute list of productions immediately following - try: - p.lrafter = Prodnames[p.prod[n+1]] - except (IndexError,KeyError),e: - p.lrafter = [] - try: - p.lrbefore = p.prod[n-1] - except IndexError: - p.lrbefore = None - - return p - -class MiniProduction: - pass + return "LRItem("+str(self)+")" -# regex matching identifiers -_is_identifier = re.compile(r'^[a-zA-Z0-9_-]+$') +# ----------------------------------------------------------------------------- +# 
rightmost_terminal() +# +# Return the rightmost terminal from a list of symbols. Used in add_production() +# ----------------------------------------------------------------------------- +def rightmost_terminal(symbols, terminals): + i = len(symbols) - 1 + while i >= 0: + if symbols[i] in terminals: + return symbols[i] + i -= 1 + return None # ----------------------------------------------------------------------------- -# add_production() +# === GRAMMAR CLASS === # -# Given an action function, this function assembles a production rule. -# The production rule is assumed to be found in the function's docstring. -# This rule has the general syntax: -# -# name1 ::= production1 -# | production2 -# | production3 -# ... -# | productionn -# name2 ::= production1 -# | production2 -# ... -# ----------------------------------------------------------------------------- - -def add_production(f,file,line,prodname,syms): - - if Terminals.has_key(prodname): - sys.stderr.write("%s:%d: Illegal rule name '%s'. Already defined as a token.\n" % (file,line,prodname)) - return -1 - if prodname == 'error': - sys.stderr.write("%s:%d: Illegal rule name '%s'. 
error is a reserved word.\n" % (file,line,prodname)) - return -1 - - if not _is_identifier.match(prodname): - sys.stderr.write("%s:%d: Illegal rule name '%s'\n" % (file,line,prodname)) - return -1 - - for x in range(len(syms)): - s = syms[x] - if s[0] in "'\"": - try: - c = eval(s) - if (len(c) > 1): - sys.stderr.write("%s:%d: Literal token %s in rule '%s' may only be a single character\n" % (file,line,s, prodname)) - return -1 - if not Terminals.has_key(c): - Terminals[c] = [] - syms[x] = c - continue - except SyntaxError: - pass - if not _is_identifier.match(s) and s != '%prec': - sys.stderr.write("%s:%d: Illegal name '%s' in rule '%s'\n" % (file,line,s, prodname)) - return -1 - - # See if the rule is already in the rulemap - map = "%s -> %s" % (prodname,syms) - if Prodmap.has_key(map): - m = Prodmap[map] - sys.stderr.write("%s:%d: Duplicate rule %s.\n" % (file,line, m)) - sys.stderr.write("%s:%d: Previous definition at %s:%d\n" % (file,line, m.file, m.line)) - return -1 - - p = Production() - p.name = prodname - p.prod = syms - p.file = file - p.line = line - p.func = f - p.number = len(Productions) - - - Productions.append(p) - Prodmap[map] = p - if not Nonterminals.has_key(prodname): - Nonterminals[prodname] = [ ] - - # Add all terminals to Terminals - i = 0 - while i < len(p.prod): - t = p.prod[i] - if t == '%prec': - try: - precname = p.prod[i+1] - except IndexError: - sys.stderr.write("%s:%d: Syntax error. Nothing follows %%prec.\n" % (p.file,p.line)) - return -1 - - prec = Precedence.get(precname,None) - if not prec: - sys.stderr.write("%s:%d: Nothing known about the precedence of '%s'\n" % (p.file,p.line,precname)) - return -1 - else: - p.prec = prec - UsedPrecedence[precname] = 1 - del p.prod[i] - del p.prod[i] - continue - - if Terminals.has_key(t): - Terminals[t].append(p.number) - # Is a terminal. 
We'll assign a precedence to p based on this - if not hasattr(p,"prec"): - p.prec = Precedence.get(t,('right',0)) - else: - if not Nonterminals.has_key(t): - Nonterminals[t] = [ ] - Nonterminals[t].append(p.number) - i += 1 - - if not hasattr(p,"prec"): - p.prec = ('right',0) - - # Set final length of productions - p.len = len(p.prod) - p.prod = tuple(p.prod) - - # Calculate unique syms in the production - p.usyms = [ ] - for s in p.prod: - if s not in p.usyms: - p.usyms.append(s) +# The following class represents the contents of the specified grammar along +# with various computed properties such as first sets, follow sets, LR items, etc. +# This data is used for critical parts of the table generation process later. +# ----------------------------------------------------------------------------- - # Add to the global productions list - try: - Prodnames[p.name].append(p) - except KeyError: - Prodnames[p.name] = [ p ] - return 0 - -# Given a raw rule function, this function rips out its doc string -# and adds rules to the grammar - -def add_function(f): - line = f.func_code.co_firstlineno - file = f.func_code.co_filename - error = 0 +class GrammarError(YaccError): pass - if isinstance(f,types.MethodType): - reqdargs = 2 - else: - reqdargs = 1 +class Grammar(object): + def __init__(self,terminals): + self.Productions = [None] # A list of all of the productions. 
The first + # entry is always reserved for the purpose of + # building an augmented grammar - if f.func_code.co_argcount > reqdargs: - sys.stderr.write("%s:%d: Rule '%s' has too many arguments.\n" % (file,line,f.__name__)) - return -1 - - if f.func_code.co_argcount < reqdargs: - sys.stderr.write("%s:%d: Rule '%s' requires an argument.\n" % (file,line,f.__name__)) - return -1 - - if f.__doc__: - # Split the doc string into lines - pstrings = f.__doc__.splitlines() - lastp = None - dline = line - for ps in pstrings: - dline += 1 - p = ps.split() - if not p: continue - try: - if p[0] == '|': - # This is a continuation of a previous rule - if not lastp: - sys.stderr.write("%s:%d: Misplaced '|'.\n" % (file,dline)) - return -1 - prodname = lastp - if len(p) > 1: - syms = p[1:] - else: - syms = [ ] - else: - prodname = p[0] - lastp = prodname - assign = p[1] - if len(p) > 2: - syms = p[2:] - else: - syms = [ ] - if assign != ':' and assign != '::=': - sys.stderr.write("%s:%d: Syntax error. Expected ':'\n" % (file,dline)) - return -1 + self.Prodnames = { } # A dictionary mapping the names of nonterminals to a list of all + # productions of that nonterminal. + self.Prodmap = { } # A dictionary that is only used to detect duplicate + # productions. - e = add_production(f,file,dline,prodname,syms) - error += e + self.Terminals = { } # A dictionary mapping the names of terminal symbols to a + # list of the rules where they are used. + for term in terminals: + self.Terminals[term] = [] - except StandardError: - sys.stderr.write("%s:%d: Syntax error in rule '%s'\n" % (file,dline,ps)) - error -= 1 - else: - sys.stderr.write("%s:%d: No documentation string specified in function '%s'\n" % (file,line,f.__name__)) - return error + self.Terminals['error'] = [] + self.Nonterminals = { } # A dictionary mapping names of nonterminals to a list + # of rule numbers where they are used. 
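The `Grammar.__init__` shown above replaces the old module-level `initialize_vars()` globals with per-instance state. One detail worth noting: every declared token gets an empty usage list up front, and `error` is always registered as a terminal. A trimmed standalone sketch of just that bookkeeping:

```python
# Sketch of the terminal bookkeeping in Grammar.__init__ (above); only the
# Terminals/Nonterminals tables are reproduced here.
class Grammar(object):
    def __init__(self, terminals):
        self.Terminals = {}          # terminal name -> rule numbers using it
        for term in terminals:
            self.Terminals[term] = []
        self.Terminals['error'] = [] # 'error' is always a valid terminal
        self.Nonterminals = {}       # nonterminal name -> rule numbers using it

g = Grammar(['PLUS', 'NUMBER'])
print(sorted(g.Terminals))  # ['NUMBER', 'PLUS', 'error']
```

Pre-registering `error` is what lets grammars contain error-recovery productions (e.g. `statement : error SEMI`) without declaring `error` as a token.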
-# Cycle checking code (Michael Dyck) + self.First = { } # A dictionary of precomputed FIRST(x) symbols -def compute_reachable(): - ''' - Find each symbol that can be reached from the start symbol. - Print a warning for any nonterminals that can't be reached. - (Unused terminals have already had their warning.) - ''' - Reachable = { } - for s in Terminals.keys() + Nonterminals.keys(): - Reachable[s] = 0 - - mark_reachable_from( Productions[0].prod[0], Reachable ) - - for s in Nonterminals.keys(): - if not Reachable[s]: - sys.stderr.write("yacc: Symbol '%s' is unreachable.\n" % s) - -def mark_reachable_from(s, Reachable): - ''' - Mark all symbols that are reachable from symbol s. - ''' - if Reachable[s]: - # We've already reached symbol s. - return - Reachable[s] = 1 - for p in Prodnames.get(s,[]): - for r in p.prod: - mark_reachable_from(r, Reachable) - -# ----------------------------------------------------------------------------- -# compute_terminates() -# -# This function looks at the various parsing rules and tries to detect -# infinite recursion cycles (grammar rules where there is no possible way -# to derive a string of only terminals). -# ----------------------------------------------------------------------------- -def compute_terminates(): - ''' - Raise an error for any symbols that don't terminate. - ''' - Terminates = {} - - # Terminals: - for t in Terminals.keys(): - Terminates[t] = 1 - - Terminates['$end'] = 1 - - # Nonterminals: - - # Initialize to false: - for n in Nonterminals.keys(): - Terminates[n] = 0 - - # Then propagate termination until no change: - while 1: - some_change = 0 - for (n,pl) in Prodnames.items(): - # Nonterminal n terminates iff any of its productions terminates. - for p in pl: - # Production p terminates iff all of its rhs symbols terminate. - for s in p.prod: - if not Terminates[s]: - # The symbol s does not terminate, - # so production p does not terminate. 
- p_terminates = 0 - break - else: - # didn't break from the loop, - # so every symbol s terminates - # so production p terminates. - p_terminates = 1 - - if p_terminates: - # symbol n terminates! - if not Terminates[n]: - Terminates[n] = 1 - some_change = 1 - # Don't need to consider any more productions for this n. - break - - if not some_change: - break - - some_error = 0 - for (s,terminates) in Terminates.items(): - if not terminates: - if not Prodnames.has_key(s) and not Terminals.has_key(s) and s != 'error': - # s is used-but-not-defined, and we've already warned of that, - # so it would be overkill to say that it's also non-terminating. - pass - else: - sys.stderr.write("yacc: Infinite recursion detected for symbol '%s'.\n" % s) - some_error = 1 + self.Follow = { } # A dictionary of precomputed FOLLOW(x) symbols - return some_error + self.Precedence = { } # Precedence rules for each terminal. Contains tuples of the + # form ('right',level) or ('nonassoc', level) or ('left',level) -# ----------------------------------------------------------------------------- -# verify_productions() -# -# This function examines all of the supplied rules to see if they seem valid. -# ----------------------------------------------------------------------------- -def verify_productions(cycle_check=1): - error = 0 - for p in Productions: - if not p: continue + self.UsedPrecedence = { } # Precedence rules that were actually used by the grammer. + # This is only used to provide error checking and to generate + # a warning about unused precedence rules. 
- for s in p.prod: - if not Prodnames.has_key(s) and not Terminals.has_key(s) and s != 'error': - sys.stderr.write("%s:%d: Symbol '%s' used, but not defined as a token or a rule.\n" % (p.file,p.line,s)) - error = 1 - continue + self.Start = None # Starting symbol for the grammar - unused_tok = 0 - # Now verify all of the tokens - if yaccdebug: - _vf.write("Unused terminals:\n\n") - for s,v in Terminals.items(): - if s != 'error' and not v: - sys.stderr.write("yacc: Warning. Token '%s' defined, but not used.\n" % s) - if yaccdebug: _vf.write(" %s\n"% s) - unused_tok += 1 - - # Print out all of the productions - if yaccdebug: - _vf.write("\nGrammar\n\n") - for i in range(1,len(Productions)): - _vf.write("Rule %-5d %s\n" % (i, Productions[i])) - - unused_prod = 0 - # Verify the use of all productions - for s,v in Nonterminals.items(): - if not v: - p = Prodnames[s][0] - sys.stderr.write("%s:%d: Warning. Rule '%s' defined, but not used.\n" % (p.file,p.line, s)) - unused_prod += 1 - - - if unused_tok == 1: - sys.stderr.write("yacc: Warning. There is 1 unused token.\n") - if unused_tok > 1: - sys.stderr.write("yacc: Warning. There are %d unused tokens.\n" % unused_tok) - - if unused_prod == 1: - sys.stderr.write("yacc: Warning. There is 1 unused rule.\n") - if unused_prod > 1: - sys.stderr.write("yacc: Warning. 
There are %d unused rules.\n" % unused_prod) - - if yaccdebug: - _vf.write("\nTerminals, with rules where they appear\n\n") - ks = Terminals.keys() - ks.sort() - for k in ks: - _vf.write("%-20s : %s\n" % (k, " ".join([str(s) for s in Terminals[k]]))) - _vf.write("\nNonterminals, with rules where they appear\n\n") - ks = Nonterminals.keys() - ks.sort() - for k in ks: - _vf.write("%-20s : %s\n" % (k, " ".join([str(s) for s in Nonterminals[k]]))) - - if (cycle_check): - compute_reachable() - error += compute_terminates() -# error += check_cycles() - return error - -# ----------------------------------------------------------------------------- -# build_lritems() -# -# This function walks the list of productions and builds a complete set of the -# LR items. The LR items are stored in two ways: First, they are uniquely -# numbered and placed in the list _lritems. Second, a linked list of LR items -# is built for each production. For example: -# -# E -> E PLUS E -# -# Creates the list -# -# [E -> . E PLUS E, E -> E . PLUS E, E -> E PLUS . E, E -> E PLUS E . ] -# ----------------------------------------------------------------------------- - -def build_lritems(): - for p in Productions: - lastlri = p - lri = p.lr_item(0) - i = 0 - while 1: - lri = p.lr_item(i) - lastlri.lr_next = lri - if not lri: break - lri.lr_num = len(LRitems) - LRitems.append(lri) - lastlri = lri - i += 1 - # In order for the rest of the parser generator to work, we need to - # guarantee that no more lritems are generated. Therefore, we nuke - # the p.lr_item method. (Only used in debugging) - # Production.lr_item = None + def __len__(self): + return len(self.Productions) -# ----------------------------------------------------------------------------- -# add_precedence() -# -# Given a list of precedence rules, add to the precedence table. 
-# ----------------------------------------------------------------------------- + def __getitem__(self,index): + return self.Productions[index] -def add_precedence(plist): - plevel = 0 - error = 0 - for p in plist: - plevel += 1 - try: - prec = p[0] - terms = p[1:] - if prec != 'left' and prec != 'right' and prec != 'nonassoc': - sys.stderr.write("yacc: Invalid precedence '%s'\n" % prec) - return -1 - for t in terms: - if Precedence.has_key(t): - sys.stderr.write("yacc: Precedence already specified for terminal '%s'\n" % t) - error += 1 - continue - Precedence[t] = (prec,plevel) - except: - sys.stderr.write("yacc: Invalid precedence table.\n") - error += 1 + # ----------------------------------------------------------------------------- + # set_precedence() + # + # Sets the precedence for a given terminal. assoc is the associativity such as + # 'left','right', or 'nonassoc'. level is a numeric level. + # + # ----------------------------------------------------------------------------- - return error + def set_precedence(self,term,assoc,level): + assert self.Productions == [None],"Must call set_precedence() before add_production()" + if term in self.Precedence: + raise GrammarError("Precedence already specified for terminal '%s'" % term) + if assoc not in ['left','right','nonassoc']: + raise GrammarError("Associativity must be one of 'left','right', or 'nonassoc'") + self.Precedence[term] = (assoc,level) + + # ----------------------------------------------------------------------------- + # add_production() + # + # Given an action function, this function assembles a production rule and + # computes its precedence level. + # + # The production rule is supplied as a list of symbols. For example, + # a rule such as 'expr : expr PLUS term' has a production name of 'expr' and + # symbols ['expr','PLUS','term']. + # + # Precedence is determined by the precedence of the right-most non-terminal + # or the precedence of a terminal specified by %prec. 
+ # + # A variety of error checks are performed to make sure production symbols + # are valid and that %prec is used correctly. + # ----------------------------------------------------------------------------- + + def add_production(self,prodname,syms,func=None,file='',line=0): + + if prodname in self.Terminals: + raise GrammarError("%s:%d: Illegal rule name '%s'. Already defined as a token" % (file,line,prodname)) + if prodname == 'error': + raise GrammarError("%s:%d: Illegal rule name '%s'. error is a reserved word" % (file,line,prodname)) + if not _is_identifier.match(prodname): + raise GrammarError("%s:%d: Illegal rule name '%s'" % (file,line,prodname)) + + # Look for literal tokens + for n,s in enumerate(syms): + if s[0] in "'\"": + try: + c = eval(s) + if (len(c) > 1): + raise GrammarError("%s:%d: Literal token %s in rule '%s' may only be a single character" % (file,line,s, prodname)) + if not c in self.Terminals: + self.Terminals[c] = [] + syms[n] = c + continue + except SyntaxError: + pass + if not _is_identifier.match(s) and s != '%prec': + raise GrammarError("%s:%d: Illegal name '%s' in rule '%s'" % (file,line,s, prodname)) + + # Determine the precedence level + if '%prec' in syms: + if syms[-1] == '%prec': + raise GrammarError("%s:%d: Syntax error. Nothing follows %%prec" % (file,line)) + if syms[-2] != '%prec': + raise GrammarError("%s:%d: Syntax error. 
%%prec can only appear at the end of a grammar rule" % (file,line)) + precname = syms[-1] + prodprec = self.Precedence.get(precname,None) + if not prodprec: + raise GrammarError("%s:%d: Nothing known about the precedence of '%s'" % (file,line,precname)) + else: + self.UsedPrecedence[precname] = 1 + del syms[-2:] # Drop %prec from the rule + else: + # If no %prec, precedence is determined by the rightmost terminal symbol + precname = rightmost_terminal(syms,self.Terminals) + prodprec = self.Precedence.get(precname,('right',0)) + + # See if the rule is already in the rulemap + map = "%s -> %s" % (prodname,syms) + if map in self.Prodmap: + m = self.Prodmap[map] + raise GrammarError("%s:%d: Duplicate rule %s. " % (file,line, m) + + "Previous definition at %s:%d" % (m.file, m.line)) + + # From this point on, everything is valid. Create a new Production instance + pnumber = len(self.Productions) + if not prodname in self.Nonterminals: + self.Nonterminals[prodname] = [ ] + + # Add the production number to Terminals and Nonterminals + for t in syms: + if t in self.Terminals: + self.Terminals[t].append(pnumber) + else: + if not t in self.Nonterminals: + self.Nonterminals[t] = [ ] + self.Nonterminals[t].append(pnumber) + + # Create a production and add it to the list of productions + p = Production(pnumber,prodname,syms,prodprec,func,file,line) + self.Productions.append(p) + self.Prodmap[map] = p -# ----------------------------------------------------------------------------- -# check_precedence() -# -# Checks the use of the Precedence tables. 
This makes sure all of the symbols -# are terminals or were used with %prec -# ----------------------------------------------------------------------------- + # Add to the global productions list + try: + self.Prodnames[prodname].append(p) + except KeyError: + self.Prodnames[prodname] = [ p ] + return 0 -def check_precedence(): - error = 0 - for precname in Precedence.keys(): - if not (Terminals.has_key(precname) or UsedPrecedence.has_key(precname)): - sys.stderr.write("yacc: Precedence rule '%s' defined for unknown symbol '%s'\n" % (Precedence[precname][0],precname)) - error += 1 - return error + # ----------------------------------------------------------------------------- + # set_start() + # + # Sets the starting symbol and creates the augmented grammar. Production + # rule 0 is S' -> start where start is the start symbol. + # ----------------------------------------------------------------------------- + + def set_start(self,start=None): + if not start: + start = self.Productions[1].name + if start not in self.Nonterminals: + raise GrammarError("start symbol %s undefined" % start) + self.Productions[0] = Production(0,"S'",[start]) + self.Nonterminals[start].append(0) + self.Start = start -# ----------------------------------------------------------------------------- -# augment_grammar() -# -# Compute the augmented grammar. This is just a rule S' -> start where start -# is the starting symbol. -# ----------------------------------------------------------------------------- + # ----------------------------------------------------------------------------- + # find_unreachable() + # + # Find all of the nonterminal symbols that can't be reached from the starting + # symbol. Returns a list of nonterminals that can't be reached. 
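The reachability pass described above marks every symbol derivable from the start symbol and reports the rest. The same idea on a toy grammar, as a self-contained sketch (the `unreachable` helper and the sample productions are illustrative, not part of the patch's API):

```python
# Toy version of the find_unreachable() pass: prods maps each nonterminal
# to the list of its right-hand sides. Anything not marked by the
# recursive walk from the start symbol is unreachable.
def unreachable(prods, start):
    reachable = set()
    def mark(s):
        if s in reachable:
            return
        reachable.add(s)
        for rhs in prods.get(s, []):
            for r in rhs:
                mark(r)
    mark(start)
    return [n for n in prods if n not in reachable]

prods = {'expr':   [['expr', 'PLUS', 'term'], ['term']],
         'term':   [['NUMBER']],
         'orphan': [['NUMBER']]}     # never referenced from 'expr'
```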
+ # ----------------------------------------------------------------------------- -def augment_grammar(start=None): - if not start: - start = Productions[1].name - Productions[0] = Production(name="S'",prod=[start],number=0,len=1,prec=('right',0),func=None) - Productions[0].usyms = [ start ] - Nonterminals[start].append(0) + def find_unreachable(self): + + # Mark all symbols that are reachable from a symbol s + def mark_reachable_from(s): + if reachable[s]: + # We've already reached symbol s. + return + reachable[s] = 1 + for p in self.Prodnames.get(s,[]): + for r in p.prod: + mark_reachable_from(r) + + reachable = { } + for s in list(self.Terminals) + list(self.Nonterminals): + reachable[s] = 0 + mark_reachable_from( self.Productions[0].prod[0] ) -# ------------------------------------------------------------------------- -# first() -# -# Compute the value of FIRST1(beta) where beta is a tuple of symbols. -# -# During execution of compute_first1, the result may be incomplete. -# Afterward (e.g., when called from compute_follow()), it will be complete. -# ------------------------------------------------------------------------- -def first(beta): + return [s for s in list(self.Nonterminals) + if not reachable[s]] + + # ----------------------------------------------------------------------------- + # infinite_cycles() + # + # This function looks at the various parsing rules and tries to detect + # infinite recursion cycles (grammar rules where there is no possible way + # to derive a string of only terminals). + # ----------------------------------------------------------------------------- - # We are computing First(x1,x2,x3,...,xn) - result = [ ] - for x in beta: - x_produces_empty = 0 + def infinite_cycles(self): + terminates = {} - # Add all the non- symbols of First[x] to the result. 
- for f in First[x]: - if f == '': - x_produces_empty = 1 - else: - if f not in result: result.append(f) + # Terminals: + for t in self.Terminals: + terminates[t] = 1 - if x_produces_empty: - # We have to consider the next x in beta, - # i.e. stay in the loop. - pass - else: - # We don't have to consider any further symbols in beta. - break - else: - # There was no 'break' from the loop, - # so x_produces_empty was true for all x in beta, - # so beta produces empty as well. - result.append('') + terminates['$end'] = 1 - return result + # Nonterminals: + # Initialize to false: + for n in self.Nonterminals: + terminates[n] = 0 -# FOLLOW(x) -# Given a non-terminal. This function computes the set of all symbols -# that might follow it. Dragon book, p. 189. - -def compute_follow(start=None): - # Add '$end' to the follow list of the start symbol - for k in Nonterminals.keys(): - Follow[k] = [ ] - - if not start: - start = Productions[1].name - - Follow[start] = [ '$end' ] - - while 1: - didadd = 0 - for p in Productions[1:]: - # Here is the production set - for i in range(len(p.prod)): - B = p.prod[i] - if Nonterminals.has_key(B): - # Okay. 
We got a non-terminal in a production - fst = first(p.prod[i+1:]) - hasempty = 0 - for f in fst: - if f != '' and f not in Follow[B]: - Follow[B].append(f) - didadd = 1 - if f == '': - hasempty = 1 - if hasempty or i == (len(p.prod)-1): - # Add elements of follow(a) to follow(b) - for f in Follow[p.name]: - if f not in Follow[B]: - Follow[B].append(f) - didadd = 1 - if not didadd: break - - if 0 and yaccdebug: - _vf.write('\nFollow:\n') - for k in Nonterminals.keys(): - _vf.write("%-20s : %s\n" % (k, " ".join([str(s) for s in Follow[k]]))) - -# ------------------------------------------------------------------------- -# compute_first1() -# -# Compute the value of FIRST1(X) for all symbols -# ------------------------------------------------------------------------- -def compute_first1(): - - # Terminals: - for t in Terminals.keys(): - First[t] = [t] - - First['$end'] = ['$end'] - First['#'] = ['#'] # what's this for? - - # Nonterminals: - - # Initialize to the empty set: - for n in Nonterminals.keys(): - First[n] = [] - - # Then propagate symbols until no change: - while 1: - some_change = 0 - for n in Nonterminals.keys(): - for p in Prodnames[n]: - for f in first(p.prod): - if f not in First[n]: - First[n].append( f ) - some_change = 1 - if not some_change: - break - - if 0 and yaccdebug: - _vf.write('\nFirst:\n') - for k in Nonterminals.keys(): - _vf.write("%-20s : %s\n" % - (k, " ".join([str(s) for s in First[k]]))) - -# ----------------------------------------------------------------------------- -# === SLR Generation === -# -# The following functions are used to construct SLR (Simple LR) parsing tables -# as described on p.221-229 of the dragon book. 
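The FIRST-set computation above iterates to a fixed point: sets grow on each pass until a full pass adds nothing. A compact standalone version of that loop for an epsilon-free toy grammar (simplified sketch; the patched `Grammar.compute_first()` additionally handles empty productions via `_first()`):

```python
# Fixed-point computation of FIRST sets for an epsilon-free toy grammar.
# Terminals are their own FIRST set; nonterminal sets grow until stable.
def first_sets(prods, terminals):
    first = {t: {t} for t in terminals}
    for n in prods:
        first[n] = set()
    changed = True
    while changed:
        changed = False
        for n, rhss in prods.items():
            for rhs in rhss:
                # Epsilon-free, so FIRST of a rule is FIRST of its first symbol.
                add = first[rhs[0]] - first[n]
                if add:
                    first[n] |= add
                    changed = True
    return first

prods = {'expr': [['term'], ['LPAREN', 'expr', 'RPAREN']],
         'term': [['NUMBER']]}
fs = first_sets(prods, {'NUMBER', 'LPAREN', 'RPAREN'})
```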
-# ----------------------------------------------------------------------------- - -# Global variables for the LR parsing engine -def lr_init_vars(): - global _lr_action, _lr_goto, _lr_method - global _lr_goto_cache, _lr0_cidhash - - _lr_action = { } # Action table - _lr_goto = { } # Goto table - _lr_method = "Unknown" # LR method used - _lr_goto_cache = { } - _lr0_cidhash = { } - - -# Compute the LR(0) closure operation on I, where I is a set of LR(0) items. -# prodlist is a list of productions. - -_add_count = 0 # Counter used to detect cycles - -def lr0_closure(I): - global _add_count - - _add_count += 1 - prodlist = Productions - - # Add everything in I to J - J = I[:] - didadd = 1 - while didadd: - didadd = 0 - for j in J: - for x in j.lrafter: - if x.lr0_added == _add_count: continue - # Add B --> .G to J - J.append(x.lr_next) - x.lr0_added = _add_count - didadd = 1 - - return J - -# Compute the LR(0) goto function goto(I,X) where I is a set -# of LR(0) items and X is a grammar symbol. This function is written -# in a way that guarantees uniqueness of the generated goto sets -# (i.e. the same goto set will never be returned as two different Python -# objects). With uniqueness, we can later do fast set comparisons using -# id(obj) instead of element-wise comparison. 
- -def lr0_goto(I,x): - # First we look for a previously cached entry - g = _lr_goto_cache.get((id(I),x),None) - if g: return g - - # Now we generate the goto set in a way that guarantees uniqueness - # of the result - - s = _lr_goto_cache.get(x,None) - if not s: - s = { } - _lr_goto_cache[x] = s - - gs = [ ] - for p in I: - n = p.lr_next - if n and n.lrbefore == x: - s1 = s.get(id(n),None) - if not s1: - s1 = { } - s[id(n)] = s1 - gs.append(n) - s = s1 - g = s.get('$end',None) - if not g: - if gs: - g = lr0_closure(gs) - s['$end'] = g - else: - s['$end'] = gs - _lr_goto_cache[(id(I),x)] = g - return g - -_lr0_cidhash = { } + # Then propagate termination until no change: + while 1: + some_change = 0 + for (n,pl) in self.Prodnames.items(): + # Nonterminal n terminates iff any of its productions terminates. + for p in pl: + # Production p terminates iff all of its rhs symbols terminate. + for s in p.prod: + if not terminates[s]: + # The symbol s does not terminate, + # so production p does not terminate. + p_terminates = 0 + break + else: + # didn't break from the loop, + # so every symbol s terminates + # so production p terminates. + p_terminates = 1 + + if p_terminates: + # symbol n terminates! + if not terminates[n]: + terminates[n] = 1 + some_change = 1 + # Don't need to consider any more productions for this n. + break -# Compute the LR(0) sets of item function -def lr0_items(): + if not some_change: + break - C = [ lr0_closure([Productions[0].lr_next]) ] - i = 0 - for I in C: - _lr0_cidhash[id(I)] = i - i += 1 + infinite = [] + for (s,term) in terminates.items(): + if not term: + if not s in self.Prodnames and not s in self.Terminals and s != 'error': + # s is used-but-not-defined, and we've already warned of that, + # so it would be overkill to say that it's also non-terminating. 
+                    pass
+                else:
+                    infinite.append(s)

-    # Loop over the items in C and each grammar symbols
-    i = 0
-    while i < len(C):
-        I = C[i]
-        i += 1
+        return infinite

-        # Collect all of the symbols that could possibly be in the goto(I,X) sets
-        asyms = { }
-        for ii in I:
-            for s in ii.usyms:
-                asyms[s] = None
-        for x in asyms.keys():
-            g = lr0_goto(I,x)
-            if not g: continue
-            if _lr0_cidhash.has_key(id(g)): continue
-            _lr0_cidhash[id(g)] = len(C)
-            C.append(g)
+    # -----------------------------------------------------------------------------
+    # undefined_symbols()
+    #
+    # Find all symbols that were used in the grammar, but not defined as tokens or
+    # grammar rules.  Returns a list of tuples (sym, prod) where sym is the symbol
+    # and prod is the production where the symbol was used.
+    # -----------------------------------------------------------------------------
+    def undefined_symbols(self):
+        result = []
+        for p in self.Productions:
+            if not p: continue

-    return C
+            for s in p.prod:
+                if not s in self.Prodnames and not s in self.Terminals and s != 'error':
+                    result.append((s,p))
+        return result

-# -----------------------------------------------------------------------------
-# ==== LALR(1) Parsing ====
-#
-# LALR(1) parsing is almost exactly the same as SLR except that instead of
-# relying upon Follow() sets when performing reductions, a more selective
-# lookahead set that incorporates the state of the LR(0) machine is utilized.
-# Thus, we mainly just have to focus on calculating the lookahead sets.
-#
-# The method used here is due to DeRemer and Pennelo (1982).
-#
-# DeRemer, F. L., and T. J. Pennelo: "Efficient Computation of LALR(1)
-# Lookahead Sets", ACM Transactions on Programming Languages and Systems,
-# Vol. 4, No. 4, Oct. 1982, pp. 615-649
-#
-# Further details can also be found in:
-#
-# J. Tremblay and P. Sorenson, "The Theory and Practice of Compiler Writing",
-# McGraw-Hill Book Company, (1985).
-# -# Note: This implementation is a complete replacement of the LALR(1) -# implementation in PLY-1.x releases. That version was based on -# a less efficient algorithm and it had bugs in its implementation. -# ----------------------------------------------------------------------------- + # ----------------------------------------------------------------------------- + # unused_terminals() + # + # Find all terminals that were defined, but not used by the grammar. Returns + # a list of all symbols. + # ----------------------------------------------------------------------------- + def unused_terminals(self): + unused_tok = [] + for s,v in self.Terminals.items(): + if s != 'error' and not v: + unused_tok.append(s) -# ----------------------------------------------------------------------------- -# compute_nullable_nonterminals() -# -# Creates a dictionary containing all of the non-terminals that might produce -# an empty production. -# ----------------------------------------------------------------------------- + return unused_tok -def compute_nullable_nonterminals(): - nullable = {} - num_nullable = 0 - while 1: - for p in Productions[1:]: - if p.len == 0: - nullable[p.name] = 1 - continue - for t in p.prod: - if not nullable.has_key(t): break - else: - nullable[p.name] = 1 - if len(nullable) == num_nullable: break - num_nullable = len(nullable) - return nullable + # ------------------------------------------------------------------------------ + # unused_rules() + # + # Find all grammar rules that were defined, but not used (maybe not reachable) + # Returns a list of productions. 
+ # ------------------------------------------------------------------------------ + + def unused_rules(self): + unused_prod = [] + for s,v in self.Nonterminals.items(): + if not v: + p = self.Prodnames[s][0] + unused_prod.append(p) + return unused_prod -# ----------------------------------------------------------------------------- -# find_nonterminal_trans(C) -# -# Given a set of LR(0) items, this functions finds all of the non-terminal -# transitions. These are transitions in which a dot appears immediately before -# a non-terminal. Returns a list of tuples of the form (state,N) where state -# is the state number and N is the nonterminal symbol. -# -# The input C is the set of LR(0) items. -# ----------------------------------------------------------------------------- + # ----------------------------------------------------------------------------- + # unused_precedence() + # + # Returns a list of tuples (term,precedence) corresponding to precedence + # rules that were never used by the grammar. term is the name of the terminal + # on which precedence was applied and precedence is a string such as 'left' or + # 'right' corresponding to the type of precedence. + # ----------------------------------------------------------------------------- + + def unused_precedence(self): + unused = [] + for termname in self.Precedence: + if not (termname in self.Terminals or termname in self.UsedPrecedence): + unused.append((termname,self.Precedence[termname][0])) + + return unused -def find_nonterminal_transitions(C): - trans = [] - for state in range(len(C)): - for p in C[state]: - if p.lr_index < p.len - 1: - t = (state,p.prod[p.lr_index+1]) - if Nonterminals.has_key(t[1]): - if t not in trans: trans.append(t) - state = state + 1 - return trans + # ------------------------------------------------------------------------- + # _first() + # + # Compute the value of FIRST1(beta) where beta is a tuple of symbols. 
+    #
+    # During execution of compute_first(), the result may be incomplete.
+    # Afterward (e.g., when called from compute_follow()), it will be complete.
+    # -------------------------------------------------------------------------
+    def _first(self,beta):
+
+        # We are computing First(x1,x2,x3,...,xn)
+        result = [ ]
+        for x in beta:
+            x_produces_empty = 0
+
+            # Add all the non-empty symbols of First[x] to the result.
+            for f in self.First[x]:
+                if f == '':
+                    x_produces_empty = 1
+                else:
+                    if f not in result: result.append(f)

-# -----------------------------------------------------------------------------
-# dr_relation()
-#
-# Computes the DR(p,A) relationships for non-terminal transitions.  The input
-# is a tuple (state,N) where state is a number and N is a nonterminal symbol.
-#
-# Returns a list of terminals.
-# -----------------------------------------------------------------------------

+            if x_produces_empty:
+                # We have to consider the next x in beta,
+                # i.e. stay in the loop.
+                pass
+            else:
+                # We don't have to consider any further symbols in beta.
+                break
+        else:
+            # There was no 'break' from the loop,
+            # so x_produces_empty was true for all x in beta,
+            # so beta produces empty as well.
+ result.append('') -def dr_relation(C,trans,nullable): - dr_set = { } - state,N = trans - terms = [] + return result - g = lr0_goto(C[state],N) - for p in g: - if p.lr_index < p.len - 1: - a = p.prod[p.lr_index+1] - if Terminals.has_key(a): - if a not in terms: terms.append(a) + # ------------------------------------------------------------------------- + # compute_first() + # + # Compute the value of FIRST1(X) for all symbols + # ------------------------------------------------------------------------- + def compute_first(self): + if self.First: + return self.First - # This extra bit is to handle the start state - if state == 0 and N == Productions[0].prod[0]: - terms.append('$end') + # Terminals: + for t in self.Terminals: + self.First[t] = [t] - return terms + self.First['$end'] = ['$end'] -# ----------------------------------------------------------------------------- -# reads_relation() -# -# Computes the READS() relation (p,A) READS (t,C). -# ----------------------------------------------------------------------------- + # Nonterminals: -def reads_relation(C, trans, empty): - # Look for empty transitions - rel = [] - state, N = trans + # Initialize to the empty set: + for n in self.Nonterminals: + self.First[n] = [] - g = lr0_goto(C[state],N) - j = _lr0_cidhash.get(id(g),-1) - for p in g: - if p.lr_index < p.len - 1: - a = p.prod[p.lr_index + 1] - if empty.has_key(a): - rel.append((j,a)) + # Then propagate symbols until no change: + while 1: + some_change = 0 + for n in self.Nonterminals: + for p in self.Prodnames[n]: + for f in self._first(p.prod): + if f not in self.First[n]: + self.First[n].append( f ) + some_change = 1 + if not some_change: + break + + return self.First - return rel + # --------------------------------------------------------------------- + # compute_follow() + # + # Computes all of the follow sets for every non-terminal symbol. The + # follow set is the set of all symbols that might follow a given + # non-terminal. 
See the Dragon book, 2nd Ed. p. 189. + # --------------------------------------------------------------------- + def compute_follow(self,start=None): + # If already computed, return the result + if self.Follow: + return self.Follow + + # If first sets not computed yet, do that first. + if not self.First: + self.compute_first() + + # Add '$end' to the follow list of the start symbol + for k in self.Nonterminals: + self.Follow[k] = [ ] -# ----------------------------------------------------------------------------- -# compute_lookback_includes() -# -# Determines the lookback and includes relations -# -# LOOKBACK: -# -# This relation is determined by running the LR(0) state machine forward. -# For example, starting with a production "N : . A B C", we run it forward -# to obtain "N : A B C ." We then build a relationship between this final -# state and the starting state. These relationships are stored in a dictionary -# lookdict. -# -# INCLUDES: -# -# Computes the INCLUDE() relation (p,A) INCLUDES (p',B). -# -# This relation is used to determine non-terminal transitions that occur -# inside of other non-terminal transition states. (p,A) INCLUDES (p', B) -# if the following holds: -# -# B -> LAT, where T -> epsilon and p' -L-> p -# -# L is essentially a prefix (which may be empty), T is a suffix that must be -# able to derive an empty string. State p' must lead to state p with the string L. -# -# ----------------------------------------------------------------------------- + if not start: + start = self.Productions[1].name -def compute_lookback_includes(C,trans,nullable): + self.Follow[start] = [ '$end' ] - lookdict = {} # Dictionary of lookback relations - includedict = {} # Dictionary of include relations + while 1: + didadd = 0 + for p in self.Productions[1:]: + # Here is the production set + for i in range(len(p.prod)): + B = p.prod[i] + if B in self.Nonterminals: + # Okay. 
We got a non-terminal in a production + fst = self._first(p.prod[i+1:]) + hasempty = 0 + for f in fst: + if f != '' and f not in self.Follow[B]: + self.Follow[B].append(f) + didadd = 1 + if f == '': + hasempty = 1 + if hasempty or i == (len(p.prod)-1): + # Add elements of follow(a) to follow(b) + for f in self.Follow[p.name]: + if f not in self.Follow[B]: + self.Follow[B].append(f) + didadd = 1 + if not didadd: break + return self.Follow - # Make a dictionary of non-terminal transitions - dtrans = {} - for t in trans: - dtrans[t] = 1 - # Loop over all transitions and compute lookbacks and includes - for state,N in trans: - lookb = [] - includes = [] - for p in C[state]: - if p.name != N: continue + # ----------------------------------------------------------------------------- + # build_lritems() + # + # This function walks the list of productions and builds a complete set of the + # LR items. The LR items are stored in two ways: First, they are uniquely + # numbered and placed in the list _lritems. Second, a linked list of LR items + # is built for each production. For example: + # + # E -> E PLUS E + # + # Creates the list + # + # [E -> . E PLUS E, E -> E . PLUS E, E -> E PLUS . E, E -> E PLUS E . ] + # ----------------------------------------------------------------------------- - # Okay, we have a name match. We now follow the production all the way - # through the state machine until we get the . 
on the right hand side + def build_lritems(self): + for p in self.Productions: + lastlri = p + i = 0 + lr_items = [] + while 1: + if i > len(p): + lri = None + else: + lri = LRItem(p,i) + # Precompute the list of productions immediately following + try: + lri.lr_after = self.Prodnames[lri.prod[i+1]] + except (IndexError,KeyError): + lri.lr_after = [] + try: + lri.lr_before = lri.prod[i-1] + except IndexError: + lri.lr_before = None + + lastlri.lr_next = lri + if not lri: break + lr_items.append(lri) + lastlri = lri + i += 1 + p.lr_items = lr_items + +# ----------------------------------------------------------------------------- +# == Class LRTable == +# +# This basic class represents a basic table of LR parsing information. +# Methods for generating the tables are not defined here. They are defined +# in the derived class LRGeneratedTable. +# ----------------------------------------------------------------------------- + +class VersionError(YaccError): pass + +class LRTable(object): + def __init__(self): + self.lr_action = None + self.lr_goto = None + self.lr_productions = None + self.lr_method = None - lr_index = p.lr_index - j = state - while lr_index < p.len - 1: - lr_index = lr_index + 1 - t = p.prod[lr_index] + def read_table(self,module): + if isinstance(module,types.ModuleType): + parsetab = module + else: + if sys.version_info[0] < 3: + exec("import %s as parsetab" % module) + else: + env = { } + exec("import %s as parsetab" % module, env, env) + parsetab = env['parsetab'] - # Check to see if this symbol and state are a non-terminal transition - if dtrans.has_key((j,t)): - # Yes. 
Okay, there is some chance that this is an includes relation - # the only way to know for certain is whether the rest of the - # production derives empty + if parsetab._tabversion != __tabversion__: + raise VersionError("yacc table file version is out of date") - li = lr_index + 1 - while li < p.len: - if Terminals.has_key(p.prod[li]): break # No forget it - if not nullable.has_key(p.prod[li]): break - li = li + 1 - else: - # Appears to be a relation between (j,t) and (state,N) - includes.append((j,t)) + self.lr_action = parsetab._lr_action + self.lr_goto = parsetab._lr_goto - g = lr0_goto(C[j],t) # Go to next set - j = _lr0_cidhash.get(id(g),-1) # Go to next state + self.lr_productions = [] + for p in parsetab._lr_productions: + self.lr_productions.append(MiniProduction(*p)) - # When we get here, j is the final state, now we have to locate the production - for r in C[j]: - if r.name != p.name: continue - if r.len != p.len: continue - i = 0 - # This look is comparing a production ". A B C" with "A B C ." 
- while i < r.lr_index: - if r.prod[i] != p.prod[i+1]: break - i = i + 1 - else: - lookb.append((j,r)) - for i in includes: - if not includedict.has_key(i): includedict[i] = [] - includedict[i].append((state,N)) - lookdict[(state,N)] = lookb + self.lr_method = parsetab._lr_method + return parsetab._lr_signature - return lookdict,includedict + def read_pickle(self,filename): + try: + import cPickle as pickle + except ImportError: + import pickle + + in_f = open(filename,"rb") + + tabversion = pickle.load(in_f) + if tabversion != __tabversion__: + raise VersionError("yacc table file version is out of date") + self.lr_method = pickle.load(in_f) + signature = pickle.load(in_f) + self.lr_action = pickle.load(in_f) + self.lr_goto = pickle.load(in_f) + productions = pickle.load(in_f) + + self.lr_productions = [] + for p in productions: + self.lr_productions.append(MiniProduction(*p)) + + in_f.close() + return signature + + # Bind all production function names to callable objects in pdict + def bind_callables(self,pdict): + for p in self.lr_productions: + p.bind(pdict) + +# ----------------------------------------------------------------------------- +# === LR Generator === +# +# The following classes and functions are used to generate LR parsing tables on +# a grammar. +# ----------------------------------------------------------------------------- # ----------------------------------------------------------------------------- # digraph() @@ -2181,715 +1919,1358 @@ for a in F.get(y,[]): if a not in F[x]: F[x].append(a) if N[x] == d: - N[stack[-1]] = sys.maxint + N[stack[-1]] = MAXINT F[stack[-1]] = F[x] element = stack.pop() while element != x: - N[stack[-1]] = sys.maxint + N[stack[-1]] = MAXINT F[stack[-1]] = F[x] element = stack.pop() +class LALRError(YaccError): pass + # ----------------------------------------------------------------------------- -# compute_read_sets() +# == LRGeneratedTable == # -# Given a set of LR(0) items, this function computes the read sets. 
-# -# Inputs: C = Set of LR(0) items -# ntrans = Set of nonterminal transitions -# nullable = Set of empty transitions -# -# Returns a set containing the read sets +# This class implements the LR table generation algorithm. There are no +# public methods except for write() # ----------------------------------------------------------------------------- -def compute_read_sets(C, ntrans, nullable): - FP = lambda x: dr_relation(C,x,nullable) - R = lambda x: reads_relation(C,x,nullable) - F = digraph(ntrans,R,FP) - return F +class LRGeneratedTable(LRTable): + def __init__(self,grammar,method='LALR',log=None): + if method not in ['SLR','LALR']: + raise LALRError("Unsupported method %s" % method) + + self.grammar = grammar + self.lr_method = method + + # Set up the logger + if not log: + log = NullLogger() + self.log = log + + # Internal attributes + self.lr_action = {} # Action table + self.lr_goto = {} # Goto table + self.lr_productions = grammar.Productions # Copy of grammar Production array + self.lr_goto_cache = {} # Cache of computed gotos + self.lr0_cidhash = {} # Cache of closures + + self._add_count = 0 # Internal counter used to detect cycles + + # Diagonistic information filled in by the table generator + self.sr_conflict = 0 + self.rr_conflict = 0 + self.conflicts = [] # List of conflicts + + self.sr_conflicts = [] + self.rr_conflicts = [] + + # Build the tables + self.grammar.build_lritems() + self.grammar.compute_first() + self.grammar.compute_follow() + self.lr_parse_table() + + # Compute the LR(0) closure operation on I, where I is a set of LR(0) items. 
+ + def lr0_closure(self,I): + self._add_count += 1 + + # Add everything in I to J + J = I[:] + didadd = 1 + while didadd: + didadd = 0 + for j in J: + for x in j.lr_after: + if getattr(x,"lr0_added",0) == self._add_count: continue + # Add B --> .G to J + J.append(x.lr_next) + x.lr0_added = self._add_count + didadd = 1 + + return J + + # Compute the LR(0) goto function goto(I,X) where I is a set + # of LR(0) items and X is a grammar symbol. This function is written + # in a way that guarantees uniqueness of the generated goto sets + # (i.e. the same goto set will never be returned as two different Python + # objects). With uniqueness, we can later do fast set comparisons using + # id(obj) instead of element-wise comparison. + + def lr0_goto(self,I,x): + # First we look for a previously cached entry + g = self.lr_goto_cache.get((id(I),x),None) + if g: return g + + # Now we generate the goto set in a way that guarantees uniqueness + # of the result + + s = self.lr_goto_cache.get(x,None) + if not s: + s = { } + self.lr_goto_cache[x] = s -# ----------------------------------------------------------------------------- -# compute_follow_sets() -# -# Given a set of LR(0) items, a set of non-terminal transitions, a readset, -# and an include set, this function computes the follow sets -# -# Follow(p,A) = Read(p,A) U U {Follow(p',B) | (p,A) INCLUDES (p',B)} -# -# Inputs: -# ntrans = Set of nonterminal transitions -# readsets = Readset (previously computed) -# inclsets = Include sets (previously computed) -# -# Returns a set containing the follow sets -# ----------------------------------------------------------------------------- + gs = [ ] + for p in I: + n = p.lr_next + if n and n.lr_before == x: + s1 = s.get(id(n),None) + if not s1: + s1 = { } + s[id(n)] = s1 + gs.append(n) + s = s1 + g = s.get('$end',None) + if not g: + if gs: + g = self.lr0_closure(gs) + s['$end'] = g + else: + s['$end'] = gs + self.lr_goto_cache[(id(I),x)] = g + return g -def 
compute_follow_sets(ntrans,readsets,inclsets): - FP = lambda x: readsets[x] - R = lambda x: inclsets.get(x,[]) - F = digraph(ntrans,R,FP) - return F + # Compute the LR(0) sets of item function + def lr0_items(self): -# ----------------------------------------------------------------------------- -# add_lookaheads() -# -# Attaches the lookahead symbols to grammar rules. -# -# Inputs: lookbacks - Set of lookback relations -# followset - Computed follow set -# -# This function directly attaches the lookaheads to productions contained -# in the lookbacks set -# ----------------------------------------------------------------------------- + C = [ self.lr0_closure([self.grammar.Productions[0].lr_next]) ] + i = 0 + for I in C: + self.lr0_cidhash[id(I)] = i + i += 1 + + # Loop over the items in C and each grammar symbols + i = 0 + while i < len(C): + I = C[i] + i += 1 -def add_lookaheads(lookbacks,followset): - for trans,lb in lookbacks.items(): - # Loop over productions in lookback - for state,p in lb: - if not p.lookaheads.has_key(state): - p.lookaheads[state] = [] - f = followset.get(trans,[]) - for a in f: - if a not in p.lookaheads[state]: p.lookaheads[state].append(a) + # Collect all of the symbols that could possibly be in the goto(I,X) sets + asyms = { } + for ii in I: + for s in ii.usyms: + asyms[s] = None + + for x in asyms: + g = self.lr0_goto(I,x) + if not g: continue + if id(g) in self.lr0_cidhash: continue + self.lr0_cidhash[id(g)] = len(C) + C.append(g) -# ----------------------------------------------------------------------------- -# add_lalr_lookaheads() -# -# This function does all of the work of adding lookahead information for use -# with LALR parsing -# ----------------------------------------------------------------------------- + return C + + # ----------------------------------------------------------------------------- + # ==== LALR(1) Parsing ==== + # + # LALR(1) parsing is almost exactly the same as SLR except that instead of + # relying upon 
Follow() sets when performing reductions, a more selective + # lookahead set that incorporates the state of the LR(0) machine is utilized. + # Thus, we mainly just have to focus on calculating the lookahead sets. + # + # The method used here is due to DeRemer and Pennelo (1982). + # + # DeRemer, F. L., and T. J. Pennelo: "Efficient Computation of LALR(1) + # Lookahead Sets", ACM Transactions on Programming Languages and Systems, + # Vol. 4, No. 4, Oct. 1982, pp. 615-649 + # + # Further details can also be found in: + # + # J. Tremblay and P. Sorenson, "The Theory and Practice of Compiler Writing", + # McGraw-Hill Book Company, (1985). + # + # ----------------------------------------------------------------------------- + + # ----------------------------------------------------------------------------- + # compute_nullable_nonterminals() + # + # Creates a dictionary containing all of the non-terminals that might produce + # an empty production. + # ----------------------------------------------------------------------------- + + def compute_nullable_nonterminals(self): + nullable = {} + num_nullable = 0 + while 1: + for p in self.grammar.Productions[1:]: + if p.len == 0: + nullable[p.name] = 1 + continue + for t in p.prod: + if not t in nullable: break + else: + nullable[p.name] = 1 + if len(nullable) == num_nullable: break + num_nullable = len(nullable) + return nullable -def add_lalr_lookaheads(C): - # Determine all of the nullable nonterminals - nullable = compute_nullable_nonterminals() + # ----------------------------------------------------------------------------- + # find_nonterminal_trans(C) + # + # Given a set of LR(0) items, this functions finds all of the non-terminal + # transitions. These are transitions in which a dot appears immediately before + # a non-terminal. Returns a list of tuples of the form (state,N) where state + # is the state number and N is the nonterminal symbol. + # + # The input C is the set of LR(0) items. 
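The `compute_nullable_nonterminals()` fixpoint added above can be sketched standalone. The `(name, rhs)` tuple representation and the function name here are illustrative stand-ins, not PLY's `Production` objects:

```python
# Fixpoint sketch: a nonterminal is nullable if some production for it
# has an RHS made entirely of nullable symbols (an empty RHS qualifies).
def nullable_nonterminals(productions):
    nullable = set()
    while True:
        changed = False
        for name, rhs in productions:
            if name in nullable:
                continue
            if all(sym in nullable for sym in rhs):
                nullable.add(name)
                changed = True
        if not changed:
            return nullable

# A : B C ; B : <empty> ; C : <empty> | 'x'  =>  A, B and C are all nullable
prods = [('A', ['B', 'C']), ('B', []), ('C', []), ('C', ['x'])]
print(sorted(nullable_nonterminals(prods)))   # ['A', 'B', 'C']
```

Note the loop must iterate to a fixpoint: `A` only becomes nullable on the second pass, after `B` and `C` have been discovered.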
+ # ----------------------------------------------------------------------------- - # Find all non-terminal transitions - trans = find_nonterminal_transitions(C) + def find_nonterminal_transitions(self,C): + trans = [] + for state in range(len(C)): + for p in C[state]: + if p.lr_index < p.len - 1: + t = (state,p.prod[p.lr_index+1]) + if t[1] in self.grammar.Nonterminals: + if t not in trans: trans.append(t) + state = state + 1 + return trans - # Compute read sets - readsets = compute_read_sets(C,trans,nullable) + # ----------------------------------------------------------------------------- + # dr_relation() + # + # Computes the DR(p,A) relationships for non-terminal transitions. The input + # is a tuple (state,N) where state is a number and N is a nonterminal symbol. + # + # Returns a list of terminals. + # ----------------------------------------------------------------------------- - # Compute lookback/includes relations - lookd, included = compute_lookback_includes(C,trans,nullable) + def dr_relation(self,C,trans,nullable): + dr_set = { } + state,N = trans + terms = [] + + g = self.lr0_goto(C[state],N) + for p in g: + if p.lr_index < p.len - 1: + a = p.prod[p.lr_index+1] + if a in self.grammar.Terminals: + if a not in terms: terms.append(a) + + # This extra bit is to handle the start state + if state == 0 and N == self.grammar.Productions[0].prod[0]: + terms.append('$end') - # Compute LALR FOLLOW sets - followsets = compute_follow_sets(trans,readsets,included) + return terms - # Add all of the lookaheads - add_lookaheads(lookd,followsets) + # ----------------------------------------------------------------------------- + # reads_relation() + # + # Computes the READS() relation (p,A) READS (t,C). 
+ # ----------------------------------------------------------------------------- -# ----------------------------------------------------------------------------- -# lr_parse_table() -# -# This function constructs the parse tables for SLR or LALR -# ----------------------------------------------------------------------------- -def lr_parse_table(method): - global _lr_method - goto = _lr_goto # Goto array - action = _lr_action # Action array - actionp = { } # Action production array (temporary) + def reads_relation(self,C, trans, empty): + # Look for empty transitions + rel = [] + state, N = trans + + g = self.lr0_goto(C[state],N) + j = self.lr0_cidhash.get(id(g),-1) + for p in g: + if p.lr_index < p.len - 1: + a = p.prod[p.lr_index + 1] + if a in empty: + rel.append((j,a)) - _lr_method = method + return rel - n_srconflict = 0 - n_rrconflict = 0 + # ----------------------------------------------------------------------------- + # compute_lookback_includes() + # + # Determines the lookback and includes relations + # + # LOOKBACK: + # + # This relation is determined by running the LR(0) state machine forward. + # For example, starting with a production "N : . A B C", we run it forward + # to obtain "N : A B C ." We then build a relationship between this final + # state and the starting state. These relationships are stored in a dictionary + # lookdict. + # + # INCLUDES: + # + # Computes the INCLUDE() relation (p,A) INCLUDES (p',B). + # + # This relation is used to determine non-terminal transitions that occur + # inside of other non-terminal transition states. (p,A) INCLUDES (p', B) + # if the following holds: + # + # B -> LAT, where T -> epsilon and p' -L-> p + # + # L is essentially a prefix (which may be empty), T is a suffix that must be + # able to derive an empty string. State p' must lead to state p with the string L. 
+ # + # ----------------------------------------------------------------------------- - if yaccdebug: - sys.stderr.write("yacc: Generating %s parsing table...\n" % method) - _vf.write("\n\nParsing method: %s\n\n" % method) + def compute_lookback_includes(self,C,trans,nullable): - # Step 1: Construct C = { I0, I1, ... IN}, collection of LR(0) items - # This determines the number of states + lookdict = {} # Dictionary of lookback relations + includedict = {} # Dictionary of include relations - C = lr0_items() + # Make a dictionary of non-terminal transitions + dtrans = {} + for t in trans: + dtrans[t] = 1 + + # Loop over all transitions and compute lookbacks and includes + for state,N in trans: + lookb = [] + includes = [] + for p in C[state]: + if p.name != N: continue + + # Okay, we have a name match. We now follow the production all the way + # through the state machine until we get the . on the right hand side + + lr_index = p.lr_index + j = state + while lr_index < p.len - 1: + lr_index = lr_index + 1 + t = p.prod[lr_index] + + # Check to see if this symbol and state are a non-terminal transition + if (j,t) in dtrans: + # Yes. Okay, there is some chance that this is an includes relation + # the only way to know for certain is whether the rest of the + # production derives empty + + li = lr_index + 1 + while li < p.len: + if p.prod[li] in self.grammar.Terminals: break # No forget it + if not p.prod[li] in nullable: break + li = li + 1 + else: + # Appears to be a relation between (j,t) and (state,N) + includes.append((j,t)) + + g = self.lr0_goto(C[j],t) # Go to next set + j = self.lr0_cidhash.get(id(g),-1) # Go to next state + + # When we get here, j is the final state, now we have to locate the production + for r in C[j]: + if r.name != p.name: continue + if r.len != p.len: continue + i = 0 + # This look is comparing a production ". A B C" with "A B C ." 
+ while i < r.lr_index: + if r.prod[i] != p.prod[i+1]: break + i = i + 1 + else: + lookb.append((j,r)) + for i in includes: + if not i in includedict: includedict[i] = [] + includedict[i].append((state,N)) + lookdict[(state,N)] = lookb - if method == 'LALR': - add_lalr_lookaheads(C) + return lookdict,includedict + # ----------------------------------------------------------------------------- + # compute_read_sets() + # + # Given a set of LR(0) items, this function computes the read sets. + # + # Inputs: C = Set of LR(0) items + # ntrans = Set of nonterminal transitions + # nullable = Set of empty transitions + # + # Returns a set containing the read sets + # ----------------------------------------------------------------------------- + + def compute_read_sets(self,C, ntrans, nullable): + FP = lambda x: self.dr_relation(C,x,nullable) + R = lambda x: self.reads_relation(C,x,nullable) + F = digraph(ntrans,R,FP) + return F - # Build the parser table, state by state - st = 0 - for I in C: - # Loop over each production in I - actlist = [ ] # List of actions - st_action = { } - st_actionp = { } - st_goto = { } - if yaccdebug: - _vf.write("\nstate %d\n\n" % st) + # ----------------------------------------------------------------------------- + # compute_follow_sets() + # + # Given a set of LR(0) items, a set of non-terminal transitions, a readset, + # and an include set, this function computes the follow sets + # + # Follow(p,A) = Read(p,A) U U {Follow(p',B) | (p,A) INCLUDES (p',B)} + # + # Inputs: + # ntrans = Set of nonterminal transitions + # readsets = Readset (previously computed) + # inclsets = Include sets (previously computed) + # + # Returns a set containing the follow sets + # ----------------------------------------------------------------------------- + + def compute_follow_sets(self,ntrans,readsets,inclsets): + FP = lambda x: readsets[x] + R = lambda x: inclsets.get(x,[]) + F = digraph(ntrans,R,FP) + return F + + # 
----------------------------------------------------------------------------- + # add_lookaheads() + # + # Attaches the lookahead symbols to grammar rules. + # + # Inputs: lookbacks - Set of lookback relations + # followset - Computed follow set + # + # This function directly attaches the lookaheads to productions contained + # in the lookbacks set + # ----------------------------------------------------------------------------- + + def add_lookaheads(self,lookbacks,followset): + for trans,lb in lookbacks.items(): + # Loop over productions in lookback + for state,p in lb: + if not state in p.lookaheads: + p.lookaheads[state] = [] + f = followset.get(trans,[]) + for a in f: + if a not in p.lookaheads[state]: p.lookaheads[state].append(a) + + # ----------------------------------------------------------------------------- + # add_lalr_lookaheads() + # + # This function does all of the work of adding lookahead information for use + # with LALR parsing + # ----------------------------------------------------------------------------- + + def add_lalr_lookaheads(self,C): + # Determine all of the nullable nonterminals + nullable = self.compute_nullable_nonterminals() + + # Find all non-terminal transitions + trans = self.find_nonterminal_transitions(C) + + # Compute read sets + readsets = self.compute_read_sets(C,trans,nullable) + + # Compute lookback/includes relations + lookd, included = self.compute_lookback_includes(C,trans,nullable) + + # Compute LALR FOLLOW sets + followsets = self.compute_follow_sets(trans,readsets,included) + + # Add all of the lookaheads + self.add_lookaheads(lookd,followsets) + + # ----------------------------------------------------------------------------- + # lr_parse_table() + # + # This function constructs the parse tables for SLR or LALR + # ----------------------------------------------------------------------------- + def lr_parse_table(self): + Productions = self.grammar.Productions + Precedence = self.grammar.Precedence + goto = 
self.lr_goto # Goto array + action = self.lr_action # Action array + log = self.log # Logger for output + + actionp = { } # Action production array (temporary) + + log.info("Parsing method: %s", self.lr_method) + + # Step 1: Construct C = { I0, I1, ... IN}, collection of LR(0) items + # This determines the number of states + + C = self.lr0_items() + + if self.lr_method == 'LALR': + self.add_lalr_lookaheads(C) + + # Build the parser table, state by state + st = 0 + for I in C: + # Loop over each production in I + actlist = [ ] # List of actions + st_action = { } + st_actionp = { } + st_goto = { } + log.info("") + log.info("state %d", st) + log.info("") for p in I: - _vf.write(" (%d) %s\n" % (p.number, str(p))) - _vf.write("\n") + log.info(" (%d) %s", p.number, str(p)) + log.info("") - for p in I: - try: - if p.len == p.lr_index + 1: - if p.name == "S'": - # Start symbol. Accept! - st_action["$end"] = 0 - st_actionp["$end"] = p - else: - # We are at the end of a production. Reduce! - if method == 'LALR': - laheads = p.lookaheads[st] + for p in I: + if p.len == p.lr_index + 1: + if p.name == "S'": + # Start symbol. Accept! + st_action["$end"] = 0 + st_actionp["$end"] = p else: - laheads = Follow[p.name] - for a in laheads: - actlist.append((a,p,"reduce using rule %d (%s)" % (p.number,p))) - r = st_action.get(a,None) - if r is not None: - # Whoa. Have a shift/reduce or reduce/reduce conflict - if r > 0: - # Need to decide on shift or reduce here - # By default we favor shifting. Need to add - # some precedence rules here. - sprec,slevel = Productions[st_actionp[a].number].prec - rprec,rlevel = Precedence.get(a,('right',0)) - if (slevel < rlevel) or ((slevel == rlevel) and (rprec == 'left')): - # We really need to reduce here. - st_action[a] = -p.number - st_actionp[a] = p - if not slevel and not rlevel: - _vfc.write("shift/reduce conflict in state %d resolved as reduce.\n" % st) - _vf.write(" ! 
shift/reduce conflict for %s resolved as reduce.\n" % a) - n_srconflict += 1 - elif (slevel == rlevel) and (rprec == 'nonassoc'): - st_action[a] = None - else: - # Hmmm. Guess we'll keep the shift - if not rlevel: - _vfc.write("shift/reduce conflict in state %d resolved as shift.\n" % st) - _vf.write(" ! shift/reduce conflict for %s resolved as shift.\n" % a) - n_srconflict +=1 - elif r < 0: - # Reduce/reduce conflict. In this case, we favor the rule - # that was defined first in the grammar file - oldp = Productions[-r] - pp = Productions[p.number] - if oldp.line > pp.line: - st_action[a] = -p.number - st_actionp[a] = p - # sys.stderr.write("Reduce/reduce conflict in state %d\n" % st) - n_rrconflict += 1 - _vfc.write("reduce/reduce conflict in state %d resolved using rule %d (%s).\n" % (st, st_actionp[a].number, st_actionp[a])) - _vf.write(" ! reduce/reduce conflict for %s resolved using rule %d (%s).\n" % (a,st_actionp[a].number, st_actionp[a])) - else: - sys.stderr.write("Unknown conflict in state %d\n" % st) + # We are at the end of a production. Reduce! + if self.lr_method == 'LALR': + laheads = p.lookaheads[st] else: - st_action[a] = -p.number - st_actionp[a] = p - else: - i = p.lr_index - a = p.prod[i+1] # Get symbol right after the "." - if Terminals.has_key(a): - g = lr0_goto(I,a) - j = _lr0_cidhash.get(id(g),-1) - if j >= 0: - # We are in a shift state - actlist.append((a,p,"shift and go to state %d" % j)) - r = st_action.get(a,None) - if r is not None: - # Whoa have a shift/reduce or shift/shift conflict - if r > 0: - if r != j: - sys.stderr.write("Shift/shift conflict in state %d\n" % st) - elif r < 0: - # Do a precedence check. - # - if precedence of reduce rule is higher, we reduce. - # - if precedence of reduce is same and left assoc, we reduce. 
- # - otherwise we shift - rprec,rlevel = Productions[st_actionp[a].number].prec - sprec,slevel = Precedence.get(a,('right',0)) - if (slevel > rlevel) or ((slevel == rlevel) and (rprec == 'right')): - # We decide to shift here... highest precedence to shift - st_action[a] = j - st_actionp[a] = p - if not rlevel: - n_srconflict += 1 - _vfc.write("shift/reduce conflict in state %d resolved as shift.\n" % st) - _vf.write(" ! shift/reduce conflict for %s resolved as shift.\n" % a) - elif (slevel == rlevel) and (rprec == 'nonassoc'): - st_action[a] = None + laheads = self.grammar.Follow[p.name] + for a in laheads: + actlist.append((a,p,"reduce using rule %d (%s)" % (p.number,p))) + r = st_action.get(a,None) + if r is not None: + # Whoa. Have a shift/reduce or reduce/reduce conflict + if r > 0: + # Need to decide on shift or reduce here + # By default we favor shifting. Need to add + # some precedence rules here. + sprec,slevel = Productions[st_actionp[a].number].prec + rprec,rlevel = Precedence.get(a,('right',0)) + if (slevel < rlevel) or ((slevel == rlevel) and (rprec == 'left')): + # We really need to reduce here. + st_action[a] = -p.number + st_actionp[a] = p + if not slevel and not rlevel: + log.info(" ! shift/reduce conflict for %s resolved as reduce",a) + self.sr_conflicts.append((st,a,'reduce')) + Productions[p.number].reduced += 1 + elif (slevel == rlevel) and (rprec == 'nonassoc'): + st_action[a] = None + else: + # Hmmm. Guess we'll keep the shift + if not rlevel: + log.info(" ! shift/reduce conflict for %s resolved as shift",a) + self.sr_conflicts.append((st,a,'shift')) + elif r < 0: + # Reduce/reduce conflict. 
In this case, we favor the rule + # that was defined first in the grammar file + oldp = Productions[-r] + pp = Productions[p.number] + if oldp.line > pp.line: + st_action[a] = -p.number + st_actionp[a] = p + chosenp,rejectp = pp,oldp + Productions[p.number].reduced += 1 + Productions[oldp.number].reduced -= 1 + else: + chosenp,rejectp = oldp,pp + self.rr_conflicts.append((st,chosenp,rejectp)) + log.info(" ! reduce/reduce conflict for %s resolved using rule %d (%s)", a,st_actionp[a].number, st_actionp[a]) else: - # Hmmm. Guess we'll keep the reduce - if not slevel and not rlevel: - n_srconflict +=1 - _vfc.write("shift/reduce conflict in state %d resolved as reduce.\n" % st) - _vf.write(" ! shift/reduce conflict for %s resolved as reduce.\n" % a) + raise LALRError("Unknown conflict in state %d" % st) + else: + st_action[a] = -p.number + st_actionp[a] = p + Productions[p.number].reduced += 1 + else: + i = p.lr_index + a = p.prod[i+1] # Get symbol right after the "." + if a in self.grammar.Terminals: + g = self.lr0_goto(I,a) + j = self.lr0_cidhash.get(id(g),-1) + if j >= 0: + # We are in a shift state + actlist.append((a,p,"shift and go to state %d" % j)) + r = st_action.get(a,None) + if r is not None: + # Whoa have a shift/reduce or shift/shift conflict + if r > 0: + if r != j: + raise LALRError("Shift/shift conflict in state %d" % st) + elif r < 0: + # Do a precedence check. + # - if precedence of reduce rule is higher, we reduce. + # - if precedence of reduce is same and left assoc, we reduce. + # - otherwise we shift + rprec,rlevel = Productions[st_actionp[a].number].prec + sprec,slevel = Precedence.get(a,('right',0)) + if (slevel > rlevel) or ((slevel == rlevel) and (rprec == 'right')): + # We decide to shift here... highest precedence to shift + Productions[st_actionp[a].number].reduced -= 1 + st_action[a] = j + st_actionp[a] = p + if not rlevel: + log.info(" ! 
shift/reduce conflict for %s resolved as shift",a) + self.sr_conflicts.append((st,a,'shift')) + elif (slevel == rlevel) and (rprec == 'nonassoc'): + st_action[a] = None + else: + # Hmmm. Guess we'll keep the reduce + if not slevel and not rlevel: + log.info(" ! shift/reduce conflict for %s resolved as reduce",a) + self.sr_conflicts.append((st,a,'reduce')) + else: + raise LALRError("Unknown conflict in state %d" % st) else: - sys.stderr.write("Unknown conflict in state %d\n" % st) - else: - st_action[a] = j - st_actionp[a] = p + st_action[a] = j + st_actionp[a] = p - except StandardError,e: - print sys.exc_info() - raise YaccError, "Hosed in lr_parse_table" - - # Print the actions associated with each terminal - if yaccdebug: - _actprint = { } - for a,p,m in actlist: - if st_action.has_key(a): - if p is st_actionp[a]: - _vf.write(" %-15s %s\n" % (a,m)) - _actprint[(a,m)] = 1 - _vf.write("\n") - for a,p,m in actlist: - if st_action.has_key(a): - if p is not st_actionp[a]: - if not _actprint.has_key((a,m)): - _vf.write(" ! %-15s [ %s ]\n" % (a,m)) + # Print the actions associated with each terminal + _actprint = { } + for a,p,m in actlist: + if a in st_action: + if p is st_actionp[a]: + log.info(" %-15s %s",a,m) _actprint[(a,m)] = 1 + log.info("") + # Print the actions that were not used. (debugging) + not_used = 0 + for a,p,m in actlist: + if a in st_action: + if p is not st_actionp[a]: + if not (a,m) in _actprint: + log.debug(" ! 
%-15s [ %s ]",a,m) + not_used = 1 + _actprint[(a,m)] = 1 + if not_used: + log.debug("") + + # Construct the goto table for this state + + nkeys = { } + for ii in I: + for s in ii.usyms: + if s in self.grammar.Nonterminals: + nkeys[s] = None + for n in nkeys: + g = self.lr0_goto(I,n) + j = self.lr0_cidhash.get(id(g),-1) + if j >= 0: + st_goto[n] = j + log.info(" %-30s shift and go to state %d",n,j) + + action[st] = st_action + actionp[st] = st_actionp + goto[st] = st_goto + st += 1 - # Construct the goto table for this state - if yaccdebug: - _vf.write("\n") - nkeys = { } - for ii in I: - for s in ii.usyms: - if Nonterminals.has_key(s): - nkeys[s] = None - for n in nkeys.keys(): - g = lr0_goto(I,n) - j = _lr0_cidhash.get(id(g),-1) - if j >= 0: - st_goto[n] = j - if yaccdebug: - _vf.write(" %-30s shift and go to state %d\n" % (n,j)) - - action[st] = st_action - actionp[st] = st_actionp - goto[st] = st_goto - - st += 1 - - if yaccdebug: - if n_srconflict == 1: - sys.stderr.write("yacc: %d shift/reduce conflict\n" % n_srconflict) - if n_srconflict > 1: - sys.stderr.write("yacc: %d shift/reduce conflicts\n" % n_srconflict) - if n_rrconflict == 1: - sys.stderr.write("yacc: %d reduce/reduce conflict\n" % n_rrconflict) - if n_rrconflict > 1: - sys.stderr.write("yacc: %d reduce/reduce conflicts\n" % n_rrconflict) - -# ----------------------------------------------------------------------------- -# ==== LR Utility functions ==== -# ----------------------------------------------------------------------------- -# ----------------------------------------------------------------------------- -# _lr_write_tables() -# -# This function writes the LR parsing tables to a file -# ----------------------------------------------------------------------------- - -def lr_write_tables(modulename=tab_module,outputdir=''): - if isinstance(modulename, types.ModuleType): - print >>sys.stderr, "Warning module %s is inconsistent with the grammar (ignored)" % modulename - return + # 
----------------------------------------------------------------------------- + # write() + # + # This function writes the LR parsing tables to a file + # ----------------------------------------------------------------------------- - basemodulename = modulename.split(".")[-1] - filename = os.path.join(outputdir,basemodulename) + ".py" - try: - f = open(filename,"w") + def write_table(self,modulename,outputdir='',signature=""): + basemodulename = modulename.split(".")[-1] + filename = os.path.join(outputdir,basemodulename) + ".py" + try: + f = open(filename,"w") - f.write(""" + f.write(""" # %s # This file is automatically generated. Do not edit. +_tabversion = %r -_lr_method = %s +_lr_method = %r -_lr_signature = %s -""" % (filename, repr(_lr_method), repr(Signature.digest()))) +_lr_signature = %r + """ % (filename, __tabversion__, self.lr_method, signature)) - # Change smaller to 0 to go back to original tables - smaller = 1 + # Change smaller to 0 to go back to original tables + smaller = 1 - # Factor out names to try and make smaller - if smaller: - items = { } - - for s,nd in _lr_action.items(): - for name,v in nd.items(): - i = items.get(name) - if not i: - i = ([],[]) - items[name] = i - i[0].append(s) - i[1].append(v) - - f.write("\n_lr_action_items = {") - for k,v in items.items(): - f.write("%r:([" % k) - for i in v[0]: - f.write("%r," % i) - f.write("],[") - for i in v[1]: - f.write("%r," % i) + # Factor out names to try and make smaller + if smaller: + items = { } - f.write("]),") - f.write("}\n") + for s,nd in self.lr_action.items(): + for name,v in nd.items(): + i = items.get(name) + if not i: + i = ([],[]) + items[name] = i + i[0].append(s) + i[1].append(v) + + f.write("\n_lr_action_items = {") + for k,v in items.items(): + f.write("%r:([" % k) + for i in v[0]: + f.write("%r," % i) + f.write("],[") + for i in v[1]: + f.write("%r," % i) - f.write(""" + f.write("]),") + f.write("}\n") + + f.write(""" _lr_action = { } for _k, _v in 
_lr_action_items.items(): for _x,_y in zip(_v[0],_v[1]): - if not _lr_action.has_key(_x): _lr_action[_x] = { } + if not _x in _lr_action: _lr_action[_x] = { } _lr_action[_x][_k] = _y del _lr_action_items """) - else: - f.write("\n_lr_action = { "); - for k,v in _lr_action.items(): - f.write("(%r,%r):%r," % (k[0],k[1],v)) - f.write("}\n"); - - if smaller: - # Factor out names to try and make smaller - items = { } - - for s,nd in _lr_goto.items(): - for name,v in nd.items(): - i = items.get(name) - if not i: - i = ([],[]) - items[name] = i - i[0].append(s) - i[1].append(v) - - f.write("\n_lr_goto_items = {") - for k,v in items.items(): - f.write("%r:([" % k) - for i in v[0]: - f.write("%r," % i) - f.write("],[") - for i in v[1]: - f.write("%r," % i) + else: + f.write("\n_lr_action = { "); + for k,v in self.lr_action.items(): + f.write("(%r,%r):%r," % (k[0],k[1],v)) + f.write("}\n"); + + if smaller: + # Factor out names to try and make smaller + items = { } + + for s,nd in self.lr_goto.items(): + for name,v in nd.items(): + i = items.get(name) + if not i: + i = ([],[]) + items[name] = i + i[0].append(s) + i[1].append(v) + + f.write("\n_lr_goto_items = {") + for k,v in items.items(): + f.write("%r:([" % k) + for i in v[0]: + f.write("%r," % i) + f.write("],[") + for i in v[1]: + f.write("%r," % i) - f.write("]),") - f.write("}\n") + f.write("]),") + f.write("}\n") - f.write(""" + f.write(""" _lr_goto = { } for _k, _v in _lr_goto_items.items(): for _x,_y in zip(_v[0],_v[1]): - if not _lr_goto.has_key(_x): _lr_goto[_x] = { } + if not _x in _lr_goto: _lr_goto[_x] = { } _lr_goto[_x][_k] = _y del _lr_goto_items """) - else: - f.write("\n_lr_goto = { "); - for k,v in _lr_goto.items(): - f.write("(%r,%r):%r," % (k[0],k[1],v)) - f.write("}\n"); - - # Write production table - f.write("_lr_productions = [\n") - for p in Productions: - if p: - if (p.func): - f.write(" (%r,%d,%r,%r,%d),\n" % (p.name, p.len, p.func.__name__,p.file,p.line)) - else: - f.write(" 
(%r,%d,None,None,None),\n" % (p.name, p.len)) else: - f.write(" None,\n") - f.write("]\n") - - f.close() - - except IOError,e: - print >>sys.stderr, "Unable to create '%s'" % filename - print >>sys.stderr, e - return - -def lr_read_tables(module=tab_module,optimize=0): - global _lr_action, _lr_goto, _lr_productions, _lr_method - try: - if isinstance(module,types.ModuleType): - parsetab = module - else: - exec "import %s as parsetab" % module + f.write("\n_lr_goto = { "); + for k,v in self.lr_goto.items(): + f.write("(%r,%r):%r," % (k[0],k[1],v)) + f.write("}\n"); + + # Write production table + f.write("_lr_productions = [\n") + for p in self.lr_productions: + if p.func: + f.write(" (%r,%r,%d,%r,%r,%d),\n" % (p.str,p.name, p.len, p.func,p.file,p.line)) + else: + f.write(" (%r,%r,%d,None,None,None),\n" % (str(p),p.name, p.len)) + f.write("]\n") + f.close() + + except IOError: + e = sys.exc_info()[1] + sys.stderr.write("Unable to create '%s'\n" % filename) + sys.stderr.write(str(e)+"\n") + return - if (optimize) or (Signature.digest() == parsetab._lr_signature): - _lr_action = parsetab._lr_action - _lr_goto = parsetab._lr_goto - _lr_productions = parsetab._lr_productions - _lr_method = parsetab._lr_method - return 1 - else: - return 0 - except (ImportError,AttributeError): - return 0 + # ----------------------------------------------------------------------------- + # pickle_table() + # + # This function pickles the LR parsing tables to a supplied file object + # ----------------------------------------------------------------------------- + def pickle_table(self,filename,signature=""): + try: + import cPickle as pickle + except ImportError: + import pickle + outf = open(filename,"wb") + pickle.dump(__tabversion__,outf,pickle_protocol) + pickle.dump(self.lr_method,outf,pickle_protocol) + pickle.dump(signature,outf,pickle_protocol) + pickle.dump(self.lr_action,outf,pickle_protocol) + pickle.dump(self.lr_goto,outf,pickle_protocol) + + outp = [] + for p in 
self.lr_productions: + if p.func: + outp.append((p.str,p.name, p.len, p.func,p.file,p.line)) + else: + outp.append((str(p),p.name,p.len,None,None,None)) + pickle.dump(outp,outf,pickle_protocol) + outf.close() # ----------------------------------------------------------------------------- -# yacc(module) +# === INTROSPECTION === # -# Build the parser module +# The following functions and classes are used to implement the PLY +# introspection features followed by the yacc() function itself. # ----------------------------------------------------------------------------- -def yacc(method=default_lr, debug=yaccdebug, module=None, tabmodule=tab_module, start=None, check_recursion=1, optimize=0,write_tables=1,debugfile=debug_file,outputdir=''): - global yaccdebug - yaccdebug = debug +# ----------------------------------------------------------------------------- +# get_caller_module_dict() +# +# This function returns a dictionary containing all of the symbols defined within +# a caller further down the call stack. This is used to get the environment +# associated with the yacc() call if none was provided. 
+# ----------------------------------------------------------------------------- - initialize_vars() - files = { } - error = 0 +def get_caller_module_dict(levels): + try: + raise RuntimeError + except RuntimeError: + e,b,t = sys.exc_info() + f = t.tb_frame + while levels > 0: + f = f.f_back + levels -= 1 + ldict = f.f_globals.copy() + if f.f_globals != f.f_locals: + ldict.update(f.f_locals) + + return ldict + +# ----------------------------------------------------------------------------- +# parse_grammar() +# +# This takes a raw grammar rule string and parses it into production data +# ----------------------------------------------------------------------------- +def parse_grammar(doc,file,line): + grammar = [] + # Split the doc string into lines + pstrings = doc.splitlines() + lastp = None + dline = line + for ps in pstrings: + dline += 1 + p = ps.split() + if not p: continue + try: + if p[0] == '|': + # This is a continuation of a previous rule + if not lastp: + raise SyntaxError("%s:%d: Misplaced '|'" % (file,dline)) + prodname = lastp + syms = p[1:] + else: + prodname = p[0] + lastp = prodname + syms = p[2:] + assign = p[1] + if assign != ':' and assign != '::=': + raise SyntaxError("%s:%d: Syntax error. Expected ':'" % (file,dline)) + + grammar.append((file,dline,prodname,syms)) + except SyntaxError: + raise + except Exception: + raise SyntaxError("%s:%d: Syntax error in rule '%s'" % (file,dline,ps.strip())) + + return grammar + +# ----------------------------------------------------------------------------- +# ParserReflect() +# +# This class represents information extracted for building a parser including +# start symbol, error function, tokens, precedence list, action functions, +# etc. 
+# ----------------------------------------------------------------------------- +class ParserReflect(object): + def __init__(self,pdict,log=None): + self.pdict = pdict + self.start = None + self.error_func = None + self.tokens = None + self.files = {} + self.grammar = [] + self.error = 0 + if log is None: + self.log = PlyLogger(sys.stderr) + else: + self.log = log - # Add parsing method to signature - Signature.update(method) + # Get all of the basic information + def get_all(self): + self.get_start() + self.get_error_func() + self.get_tokens() + self.get_precedence() + self.get_pfunctions() + + # Validate all of the information + def validate_all(self): + self.validate_start() + self.validate_error_func() + self.validate_tokens() + self.validate_precedence() + self.validate_pfunctions() + self.validate_files() + return self.error - # If a "module" parameter was supplied, extract its dictionary. - # Note: a module may in fact be an instance as well. + # Compute a signature over the grammar + def signature(self): + try: + from hashlib import md5 + except ImportError: + from md5 import md5 + try: + sig = md5() + if self.start: + sig.update(self.start.encode('latin-1')) + if self.prec: + sig.update("".join(["".join(p) for p in self.prec]).encode('latin-1')) + if self.tokens: + sig.update(" ".join(self.tokens).encode('latin-1')) + for f in self.pfuncs: + if f[3]: + sig.update(f[3].encode('latin-1')) + except (TypeError,ValueError): + pass + return sig.digest() - if module: - # User supplied a module object. 
- if isinstance(module, types.ModuleType): - ldict = module.__dict__ - elif isinstance(module, _INSTANCETYPE): - _items = [(k,getattr(module,k)) for k in dir(module)] - ldict = { } - for i in _items: - ldict[i[0]] = i[1] - else: - raise ValueError,"Expected a module" + # ----------------------------------------------------------------------------- + # validate_file() + # + # This method checks to see if there are duplicated p_rulename() functions + # in the parser module file. Without this function, it is really easy for + # users to make mistakes by cutting and pasting code fragments (and it's a real + # bugger to try and figure out why the resulting parser doesn't work). Therefore, + # we just do a little regular expression pattern matching of def statements + # to try and detect duplicates. + # ----------------------------------------------------------------------------- + + def validate_files(self): + # Match def p_funcname( + fre = re.compile(r'\s*def\s+(p_[a-zA-Z_0-9]*)\(') + + for filename in self.files.keys(): + base,ext = os.path.splitext(filename) + if ext != '.py': return 1 # No idea. Assume it's okay. - else: - # No module given. We might be able to get information from the caller. - # Throw an exception and unwind the traceback to get the globals + try: + f = open(filename) + lines = f.readlines() + f.close() + except IOError: + continue - try: - raise RuntimeError - except RuntimeError: - e,b,t = sys.exc_info() - f = t.tb_frame - f = f.f_back # Walk out to our calling function - if f.f_globals is f.f_locals: # Collect global and local variations from caller - ldict = f.f_globals - else: - ldict = f.f_globals.copy() - ldict.update(f.f_locals) + counthash = { } + for linen,l in enumerate(lines): + linen += 1 + m = fre.match(l) + if m: + name = m.group(1) + prev = counthash.get(name) + if not prev: + counthash[name] = linen + else: + self.log.warning("%s:%d: Function %s redefined. 
Previously defined on line %d", filename,linen,name,prev) - # Add starting symbol to signature - if not start: - start = ldict.get("start",None) - if start: - Signature.update(start) + # Get the start symbol + def get_start(self): + self.start = self.pdict.get('start') + + # Validate the start symbol + def validate_start(self): + if self.start is not None: + if not isinstance(self.start,str): + self.log.error("'start' must be a string") # Look for error handler - ef = ldict.get('p_error',None) - if ef: - if isinstance(ef,types.FunctionType): - ismethod = 0 - elif isinstance(ef, types.MethodType): - ismethod = 1 - else: - raise YaccError,"'p_error' defined, but is not a function or method." - eline = ef.func_code.co_firstlineno - efile = ef.func_code.co_filename - files[efile] = None - - if (ef.func_code.co_argcount != 1+ismethod): - raise YaccError,"%s:%d: p_error() requires 1 argument." % (efile,eline) - global Errorfunc - Errorfunc = ef - else: - print >>sys.stderr, "yacc: Warning. no p_error() function is defined." 
+ def get_error_func(self): + self.error_func = self.pdict.get('p_error') + + # Validate the error function + def validate_error_func(self): + if self.error_func: + if isinstance(self.error_func,types.FunctionType): + ismethod = 0 + elif isinstance(self.error_func, types.MethodType): + ismethod = 1 + else: + self.log.error("'p_error' defined, but is not a function or method") + self.error = 1 + return + + eline = func_code(self.error_func).co_firstlineno + efile = func_code(self.error_func).co_filename + self.files[efile] = 1 + + if (func_code(self.error_func).co_argcount != 1+ismethod): + self.log.error("%s:%d: p_error() requires 1 argument",efile,eline) + self.error = 1 + + # Get the tokens map + def get_tokens(self): + tokens = self.pdict.get("tokens",None) + if not tokens: + self.log.error("No token list is defined") + self.error = 1 + return + + if not isinstance(tokens,(list, tuple)): + self.log.error("tokens must be a list or tuple") + self.error = 1 + return + + if not tokens: + self.log.error("tokens is empty") + self.error = 1 + return + + self.tokens = tokens + + # Validate the tokens + def validate_tokens(self): + # Validate the tokens. + if 'error' in self.tokens: + self.log.error("Illegal token name 'error'. Is a reserved word") + self.error = 1 + return + + terminals = {} + for n in self.tokens: + if n in terminals: + self.log.warning("Token '%s' multiply defined", n) + terminals[n] = 1 + + # Get the precedence map (if any) + def get_precedence(self): + self.prec = self.pdict.get("precedence",None) + + # Validate and parse the precedence map + def validate_precedence(self): + preclist = [] + if self.prec: + if not isinstance(self.prec,(list,tuple)): + self.log.error("precedence must be a list or tuple") + self.error = 1 + return + for level,p in enumerate(self.prec): + if not isinstance(p,(list,tuple)): + self.log.error("Bad precedence table") + self.error = 1 + return - # If running in optimized mode. 
We're going to read tables instead + if len(p) < 2: + self.log.error("Malformed precedence entry %s. Must be (assoc, term, ..., term)",p) + self.error = 1 + return + assoc = p[0] + if not isinstance(assoc,str): + self.log.error("precedence associativity must be a string") + self.error = 1 + return + for term in p[1:]: + if not isinstance(term,str): + self.log.error("precedence items must be strings") + self.error = 1 + return + preclist.append((term,assoc,level+1)) + self.preclist = preclist - if (optimize and lr_read_tables(tabmodule,1)): - # Read parse table - del Productions[:] - for p in _lr_productions: - if not p: - Productions.append(None) + # Get all p_functions from the grammar + def get_pfunctions(self): + p_functions = [] + for name, item in self.pdict.items(): + if name[:2] != 'p_': continue + if name == 'p_error': continue + if isinstance(item,(types.FunctionType,types.MethodType)): + line = func_code(item).co_firstlineno + file = func_code(item).co_filename + p_functions.append((line,file,name,item.__doc__)) + + # Sort all of the actions by line number + p_functions.sort() + self.pfuncs = p_functions + + + # Validate all of the p_functions + def validate_pfunctions(self): + grammar = [] + # Check for non-empty symbols + if len(self.pfuncs) == 0: + self.log.error("no rules of the form p_rulename are defined") + self.error = 1 + return + + for line, file, name, doc in self.pfuncs: + func = self.pdict[name] + if isinstance(func, types.MethodType): + reqargs = 2 + else: + reqargs = 1 + if func_code(func).co_argcount > reqargs: + self.log.error("%s:%d: Rule '%s' has too many arguments",file,line,func.__name__) + self.error = 1 + elif func_code(func).co_argcount < reqargs: + self.log.error("%s:%d: Rule '%s' requires an argument",file,line,func.__name__) + self.error = 1 + elif not func.__doc__: + self.log.warning("%s:%d: No documentation string specified in function '%s' (ignored)",file,line,func.__name__) else: - m = MiniProduction() - m.name = p[0] - 
m.len = p[1] - m.file = p[3] - m.line = p[4] - if p[2]: - m.func = ldict[p[2]] - Productions.append(m) + try: + parsed_g = parse_grammar(doc,file,line) + for g in parsed_g: + grammar.append((name, g)) + except SyntaxError: + e = sys.exc_info()[1] + self.log.error(str(e)) + self.error = 1 + + # Looks like a valid grammar rule + # Mark the file in which defined. + self.files[file] = 1 + + # Secondary validation step that looks for p_ definitions that are not functions + # or functions that look like they might be grammar rules. + + for n,v in self.pdict.items(): + if n[0:2] == 'p_' and isinstance(v, (types.FunctionType, types.MethodType)): continue + if n[0:2] == 't_': continue + if n[0:2] == 'p_' and n != 'p_error': + self.log.warning("'%s' not defined as a function", n) + if ((isinstance(v,types.FunctionType) and func_code(v).co_argcount == 1) or + (isinstance(v,types.MethodType) and func_code(v).co_argcount == 2)): + try: + doc = v.__doc__.split(" ") + if doc[1] == ':': + self.log.warning("%s:%d: Possible grammar rule '%s' defined without p_ prefix", + func_code(v).co_filename, func_code(v).co_firstlineno,n) + except Exception: + pass - else: - # Get the tokens map - if (module and isinstance(module,_INSTANCETYPE)): - tokens = getattr(module,"tokens",None) - else: - tokens = ldict.get("tokens",None) + self.grammar = grammar - if not tokens: - raise YaccError,"module does not define a list 'tokens'" - if not (isinstance(tokens,types.ListType) or isinstance(tokens,types.TupleType)): - raise YaccError,"tokens must be a list or tuple." - - # Check to see if a requires dictionary is defined. - requires = ldict.get("require",None) - if requires: - if not (isinstance(requires,types.DictType)): - raise YaccError,"require must be a dictionary." 
+# ----------------------------------------------------------------------------- +# yacc(module) +# +# Build a parser +# ----------------------------------------------------------------------------- - for r,v in requires.items(): - try: - if not (isinstance(v,types.ListType)): - raise TypeError - v1 = [x.split(".") for x in v] - Requires[r] = v1 - except StandardError: - print >>sys.stderr, "Invalid specification for rule '%s' in require. Expected a list of strings" % r - - - # Build the dictionary of terminals. We a record a 0 in the - # dictionary to track whether or not a terminal is actually - # used in the grammar - - if 'error' in tokens: - print >>sys.stderr, "yacc: Illegal token 'error'. Is a reserved word." - raise YaccError,"Illegal token name" - - for n in tokens: - if Terminals.has_key(n): - print >>sys.stderr, "yacc: Warning. Token '%s' multiply defined." % n - Terminals[n] = [ ] - - Terminals['error'] = [ ] - - # Get the precedence map (if any) - prec = ldict.get("precedence",None) - if prec: - if not (isinstance(prec,types.ListType) or isinstance(prec,types.TupleType)): - raise YaccError,"precedence must be a list or tuple." - add_precedence(prec) - Signature.update(repr(prec)) - - for n in tokens: - if not Precedence.has_key(n): - Precedence[n] = ('right',0) # Default, right associative, 0 precedence - - # Get the list of built-in functions with p_ prefix - symbols = [ldict[f] for f in ldict.keys() - if (type(ldict[f]) in (types.FunctionType, types.MethodType) and ldict[f].__name__[:2] == 'p_' - and ldict[f].__name__ != 'p_error')] +def yacc(method='LALR', debug=yaccdebug, module=None, tabmodule=tab_module, start=None, + check_recursion=1, optimize=0, write_tables=1, debugfile=debug_file,outputdir='', + debuglog=None, errorlog = None, picklefile=None): - # Check for non-empty symbols - if len(symbols) == 0: - raise YaccError,"no rules of the form p_rulename are defined." 
+ global parse # Reference to the parsing method of the last built parser - # Sort the symbols by line number - symbols.sort(lambda x,y: cmp(x.func_code.co_firstlineno,y.func_code.co_firstlineno)) + # If pickling is enabled, table files are not created - # Add all of the symbols to the grammar - for f in symbols: - if (add_function(f)) < 0: - error += 1 - else: - files[f.func_code.co_filename] = None + if picklefile: + write_tables = 0 - # Make a signature of the docstrings - for f in symbols: - if f.__doc__: - Signature.update(f.__doc__) + if errorlog is None: + errorlog = PlyLogger(sys.stderr) - lr_init_vars() + # Get the module dictionary used for the parser + if module: + _items = [(k,getattr(module,k)) for k in dir(module)] + pdict = dict(_items) + else: + pdict = get_caller_module_dict(2) - if error: - raise YaccError,"Unable to construct parser." + # Collect parser information from the dictionary + pinfo = ParserReflect(pdict,log=errorlog) + pinfo.get_all() - if not lr_read_tables(tabmodule): + if pinfo.error: + raise YaccError("Unable to build parser") - # Validate files - for filename in files.keys(): - if not validate_file(filename): - error = 1 + # Check signature against table files (if any) + signature = pinfo.signature() - # Validate dictionary - validate_dict(ldict) + # Read the tables + try: + lr = LRTable() + if picklefile: + read_signature = lr.read_pickle(picklefile) + else: + read_signature = lr.read_table(tabmodule) + if optimize or (read_signature == signature): + try: + lr.bind_callables(pinfo.pdict) + parser = LRParser(lr,pinfo.error_func) + parse = parser.parse + return parser + except Exception: + e = sys.exc_info()[1] + errorlog.warning("There was a problem loading the table file: %s", repr(e)) + except VersionError: + e = sys.exc_info() + errorlog.warning(str(e)) + except Exception: + pass + + if debuglog is None: + if debug: + debuglog = PlyLogger(open(debugfile,"w")) + else: + debuglog = NullLogger() - if start and not 
Prodnames.has_key(start): - raise YaccError,"Bad starting symbol '%s'" % start + debuglog.info("Created by PLY version %s (http://www.dabeaz.com/ply)", __version__) - augment_grammar(start) - error = verify_productions(cycle_check=check_recursion) - otherfunc = [ldict[f] for f in ldict.keys() - if (type(f) in (types.FunctionType,types.MethodType) and ldict[f].__name__[:2] != 'p_')] - # Check precedence rules - if check_precedence(): - error = 1 + errors = 0 - if error: - raise YaccError,"Unable to construct parser." + # Validate the parser information + if pinfo.validate_all(): + raise YaccError("Unable to build parser") + + if not pinfo.error_func: + errorlog.warning("no p_error() function is defined") - build_lritems() - compute_first1() - compute_follow(start) + # Create a grammar object + grammar = Grammar(pinfo.tokens) - if method in ['SLR','LALR']: - lr_parse_table(method) - else: - raise YaccError, "Unknown parsing method '%s'" % method + # Set precedence level for terminals + for term, assoc, level in pinfo.preclist: + try: + grammar.set_precedence(term,assoc,level) + except GrammarError: + e = sys.exc_info()[1] + errorlog.warning("%s",str(e)) + + # Add productions to the grammar + for funcname, gram in pinfo.grammar: + file, line, prodname, syms = gram + try: + grammar.add_production(prodname,syms,funcname,file,line) + except GrammarError: + e = sys.exc_info()[1] + errorlog.error("%s",str(e)) + errors = 1 - if write_tables: - lr_write_tables(tabmodule,outputdir) + # Set the grammar start symbols + try: + if start is None: + grammar.set_start(pinfo.start) + else: + grammar.set_start(start) + except GrammarError: + e = sys.exc_info()[1] + errorlog.error(str(e)) + errors = 1 + + if errors: + raise YaccError("Unable to build parser") + + # Verify the grammar structure + undefined_symbols = grammar.undefined_symbols() + for sym, prod in undefined_symbols: + errorlog.error("%s:%d: Symbol '%s' used, but not defined as a token or a rule",prod.file,prod.line,sym) + 
errors = 1 + + unused_terminals = grammar.unused_terminals() + if unused_terminals: + debuglog.info("") + debuglog.info("Unused terminals:") + debuglog.info("") + for term in unused_terminals: + errorlog.warning("Token '%s' defined, but not used", term) + debuglog.info(" %s", term) + + # Print out all productions to the debug log + if debug: + debuglog.info("") + debuglog.info("Grammar") + debuglog.info("") + for n,p in enumerate(grammar.Productions): + debuglog.info("Rule %-5d %s", n, p) + + # Find unused non-terminals + unused_rules = grammar.unused_rules() + for prod in unused_rules: + errorlog.warning("%s:%d: Rule '%s' defined, but not used", prod.file, prod.line, prod.name) + + if len(unused_terminals) == 1: + errorlog.warning("There is 1 unused token") + if len(unused_terminals) > 1: + errorlog.warning("There are %d unused tokens", len(unused_terminals)) + + if len(unused_rules) == 1: + errorlog.warning("There is 1 unused rule") + if len(unused_rules) > 1: + errorlog.warning("There are %d unused rules", len(unused_rules)) + + if debug: + debuglog.info("") + debuglog.info("Terminals, with rules where they appear") + debuglog.info("") + terms = list(grammar.Terminals) + terms.sort() + for term in terms: + debuglog.info("%-20s : %s", term, " ".join([str(s) for s in grammar.Terminals[term]])) + + debuglog.info("") + debuglog.info("Nonterminals, with rules where they appear") + debuglog.info("") + nonterms = list(grammar.Nonterminals) + nonterms.sort() + for nonterm in nonterms: + debuglog.info("%-20s : %s", nonterm, " ".join([str(s) for s in grammar.Nonterminals[nonterm]])) + debuglog.info("") + + if check_recursion: + unreachable = grammar.find_unreachable() + for u in unreachable: + errorlog.warning("Symbol '%s' is unreachable",u) + + infinite = grammar.infinite_cycles() + for inf in infinite: + errorlog.error("Infinite recursion detected for symbol '%s'", inf) + errors = 1 + + unused_prec = grammar.unused_precedence() + for term, assoc in unused_prec: + 
errorlog.error("Precedence rule '%s' defined for unknown symbol '%s'", assoc, term) + errors = 1 - if yaccdebug: - try: - f = open(os.path.join(outputdir,debugfile),"w") - f.write(_vfc.getvalue()) - f.write("\n\n") - f.write(_vf.getvalue()) - f.close() - except IOError,e: - print >>sys.stderr, "yacc: can't create '%s'" % debugfile,e - - # Made it here. Create a parser object and set up its internal state. - # Set global parse() method to bound method of parser object. - - p = Parser("xyzzy") - p.productions = Productions - p.errorfunc = Errorfunc - p.action = _lr_action - p.goto = _lr_goto - p.method = _lr_method - p.require = Requires - - global parse - parse = p.parse - - global parser - parser = p - - # Clean up all of the globals we created - if (not optimize): - yacc_cleanup() - return p - -# yacc_cleanup function. Delete all of the global variables -# used during table construction - -def yacc_cleanup(): - global _lr_action, _lr_goto, _lr_method, _lr_goto_cache - del _lr_action, _lr_goto, _lr_method, _lr_goto_cache - - global Productions, Prodnames, Prodmap, Terminals - global Nonterminals, First, Follow, Precedence, UsedPrecedence, LRitems - global Errorfunc, Signature, Requires - - del Productions, Prodnames, Prodmap, Terminals - del Nonterminals, First, Follow, Precedence, UsedPrecedence, LRitems - del Errorfunc, Signature, Requires - - global _vf, _vfc - del _vf, _vfc - - -# Stub that raises an error if parsing is attempted without first calling yacc() -def parse(*args,**kwargs): - raise YaccError, "yacc: No parser built with yacc()" + if errors: + raise YaccError("Unable to build parser") + + # Run the LRGeneratedTable on the grammar + if debug: + errorlog.debug("Generating %s tables", method) + + lr = LRGeneratedTable(grammar,method,debuglog) + + if debug: + num_sr = len(lr.sr_conflicts) + + # Report shift/reduce and reduce/reduce conflicts + if num_sr == 1: + errorlog.warning("1 shift/reduce conflict") + elif num_sr > 1: + errorlog.warning("%d 
shift/reduce conflicts", num_sr) + + num_rr = len(lr.rr_conflicts) + if num_rr == 1: + errorlog.warning("1 reduce/reduce conflict") + elif num_rr > 1: + errorlog.warning("%d reduce/reduce conflicts", num_rr) + + # Write out conflicts to the output file + if debug and (lr.sr_conflicts or lr.rr_conflicts): + debuglog.warning("") + debuglog.warning("Conflicts:") + debuglog.warning("") + + for state, tok, resolution in lr.sr_conflicts: + debuglog.warning("shift/reduce conflict for %s in state %d resolved as %s", tok, state, resolution) + + already_reported = {} + for state, rule, rejected in lr.rr_conflicts: + if (state,id(rule),id(rejected)) in already_reported: + continue + debuglog.warning("reduce/reduce conflict in state %d resolved using rule (%s)", state, rule) + debuglog.warning("rejected rule (%s) in state %d", rejected,state) + errorlog.warning("reduce/reduce conflict in state %d resolved using rule (%s)", state, rule) + errorlog.warning("rejected rule (%s) in state %d", rejected, state) + already_reported[state,id(rule),id(rejected)] = 1 + + warned_never = [] + for state, rule, rejected in lr.rr_conflicts: + if not rejected.reduced and (rejected not in warned_never): + debuglog.warning("Rule (%s) is never reduced", rejected) + errorlog.warning("Rule (%s) is never reduced", rejected) + warned_never.append(rejected) + + # Write the table file if requested + if write_tables: + lr.write_table(tabmodule,outputdir,signature) + + # Write a pickled version of the tables + if picklefile: + lr.pickle_table(picklefile,signature) + + # Build the parser + lr.bind_callables(pinfo.pdict) + parser = LRParser(lr,pinfo.error_func) + + parse = parser.parse + return parser --------------080004000200030807030507 Content-Type: text/plain; name="patch-qsgen.py" Content-Transfer-Encoding: 7bit Content-Disposition: attachment; filename="patch-qsgen.py" --- js/src/xpconnect/src/qsgen.py.orig 2010-04-02 19:02:30.000000000 +0300 +++ js/src/xpconnect/src/qsgen.py 2010-04-07 
11:05:56.201435457 +0300 @@ -1,4 +1,4 @@ -#!/usr/bin/env/python +#!/usr/local/bin/python # qsgen.py - Generate XPConnect quick stubs. # # ***** BEGIN LICENSE BLOCK ***** --------------080004000200030807030507 Content-Type: text/plain; name="patch-xpidl.py" Content-Transfer-Encoding: 7bit Content-Disposition: attachment; filename="patch-xpidl.py" --- xpcom/idl-parser/xpidl.py.orig 2010-04-02 19:03:13.000000000 +0300 +++ xpcom/idl-parser/xpidl.py 2010-04-07 11:23:53.544027464 +0300 @@ -1,4 +1,4 @@ -#!/usr/bin/env python +#!/usr/local/bin/python # xpidl.py - A parser for cross-platform IDL (XPIDL) files. # # ***** BEGIN LICENSE BLOCK ***** --------------080004000200030807030507-- From owner-freebsd-gecko@FreeBSD.ORG Wed Apr 7 18:17:10 2010 Return-Path: Delivered-To: gecko@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0FAAB106566C for ; Wed, 7 Apr 2010 18:17:10 +0000 (UTC) (envelope-from flo@smeets.im) Received: from mail.solomo.de (mail.solomo.de [85.214.124.163]) by mx1.freebsd.org (Postfix) with ESMTP id BE4FC8FC0A for ; Wed, 7 Apr 2010 18:17:09 +0000 (UTC) Received: from mail.solomo.de (localhost [127.0.0.1]) by mail.solomo.de (Postfix) with ESMTP id 752E061D75; Wed, 7 Apr 2010 20:17:08 +0200 (CEST) X-Virus-Scanned: amavisd-new at vistream.de Received: from mail.solomo.de ([127.0.0.1]) by mail.solomo.de (mail.solomo.de [127.0.0.1]) (amavisd-new, port 10024) with LMTP id Hcv7059cA3u3; Wed, 7 Apr 2010 20:17:06 +0200 (CEST) Received: from [192.168.0.100] (p509176E3.dip.t-dialin.net [80.145.118.227]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by mail.solomo.de (Postfix) with ESMTPSA id 18FFC61D6E; Wed, 7 Apr 2010 20:17:06 +0200 (CEST) Message-ID: <4BBCCC20.8040505@smeets.im> Date: Wed, 07 Apr 2010 20:17:04 +0200 From: Florian Smeets User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.1.9) Gecko/20100317 
Thunderbird/3.0.4 MIME-Version: 1.0 To: "Andrei V. Lavreniyuk" References: <201004070850.o378o5ff085642@freefall.freebsd.org> In-Reply-To: <201004070850.o378o5ff085642@freefall.freebsd.org> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 8bit Cc: gecko@FreeBSD.org Subject: Re: ports/145450: [ERROR] cannot compile www/firefox version 3.6.3 X-BeenThere: freebsd-gecko@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 07 Apr 2010 18:17:10 -0000 On 4/7/10 10:50 AM, Andrei V. Lavreniyuk wrote: > Hi Andrei, > 07.04.2010 11:26, Florian Smeets writes: > > >> show me the output of python -V and pkg_info|grep python? > >> > > > > Sorry, there is a "can you please" missing. Did not want to sound as > > harsh as it seems. > > > > So, can you please show me the output? :-) > > > :) > > > > Workaround: > > > # portupgrade -f boost-python-libs-1.41.0 py26-notify-0.1.1_7 python26-2.6.4 > > + > > Attached patch. > The python warning is only a warning and the build should not stop because of it. Do you have special CFLAGS set in make.conf? Also, could you try to build stock firefox from ports and remove or comment MAKE_JOBS_SAFE in the Makefile? I have a suspicion that we do not see the real error in your log. Regarding your patches, can you explain how you created the patch-ply file? The other 3 patches should not be necessary, as env python should use your installed python binary.
Cheers, Florian From owner-freebsd-gecko@FreeBSD.ORG Thu Apr 8 06:00:00 2010 Return-Path: Delivered-To: gecko@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 51F10106564A for ; Thu, 8 Apr 2010 06:00:00 +0000 (UTC) (envelope-from andy.lavr@reactor-xg.kiev.ua) Received: from mail.reactor-xg.kiev.ua (reactor-xg.kiev.ua [86.111.64.45]) by mx1.freebsd.org (Postfix) with ESMTP id CDBAA8FC26 for ; Thu, 8 Apr 2010 05:59:59 +0000 (UTC) Received: from mail.reactor-xg.kiev.ua (mail.reactor-xg.kiev.ua [86.111.64.45]) by mail.reactor-xg.kiev.ua (Reactor-XG Mailer System) with ESMTP id o385xv0f012027; Thu, 8 Apr 2010 08:59:58 +0300 (EEST) (envelope-from andy.lavr@reactor-xg.kiev.ua) Authentication-Results: mail.reactor-xg.kiev.ua; sender-id=pass header.from=andy.lavr@reactor-xg.kiev.ua; auth=pass (PLAIN); spf=pass smtp.mfrom=andy.lavr@reactor-xg.kiev.ua Message-ID: <4BBD70DD.6000100@reactor-xg.kiev.ua> Date: Thu, 08 Apr 2010 08:59:57 +0300 From: "Andrei V. Lavreniyuk" Organization: Technica-03, Inc. User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; ru; rv:1.9.1.9) Gecko/20100331 Thunderbird/3.0.4 MIME-Version: 1.0 To: Florian Smeets References: <201004070850.o378o5ff085642@freefall.freebsd.org> <4BBCCC20.8040505@smeets.im> In-Reply-To: <4BBCCC20.8040505@smeets.im> Content-Type: text/plain; charset=KOI8-R; format=flowed Content-Transfer-Encoding: 8bit Cc: gecko@FreeBSD.org Subject: Re: ports/145450: [ERROR] cannot compile www/firefox version 3.6.3 X-BeenThere: freebsd-gecko@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: andy.lavr@reactor-xg.kiev.ua List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 08 Apr 2010 06:00:00 -0000 07.04.2010 21:17, Florian Smeets writes: >> >> show me the output of python -V and pkg_info|grep python? >> >> >> >
Did not want to sound as >> > harsh as it seems. >> > >> > So, can you please show me the output? :-) >> >> >> :) >> >> >> >> Workaround: >> >> >> # portupgrade -f boost-python-libs-1.41.0 py26-notify-0.1.1_7 python26-2.6.4 >> >> + >> >> Attached patch. >> > > The python warning is only a warning and the build should not stop > because of it. Do you have special CFLAGS set in make.conf? # default build settings for ports collection .if ${.CURDIR:M*/ports/*} CFLAGS+= -O2 -D_FORTIFY_SOURCE=2 -pipe -march=native -mtune=native -fno-delete-null-pointer-checks -fstack-protector -fno-strict-aliasing -Wformat -Wformat-security -s CXXFLAGS= -fconserve-space MAKEOPTS="-j6" .endif > Also could you try to build stock firefox from ports and remove or > comment MAKE_JOBS_SAFE in the Makefile? I have a suspicion that we do > not see the real error in your log. None of my patches need to be applied. For me the problem was fixed by the rebuild (# portupgrade -f boost-python-libs-1.41.0 py26-notify-0.1.1_7 python26-2.6.4). I think only patch-ply is useful. > Regarding your patches, can you explain how you created the patch-ply > file? ports/devel/py-ply > The other 3 patches should not be necessary, as env python should > use your installed python binary. I agree with you. -- Best regards, Andrei V. Lavreniyuk.
From owner-freebsd-gecko@FreeBSD.ORG Thu Apr 8 08:08:39 2010 Return-Path: Delivered-To: gecko@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 001F4106564A for ; Thu, 8 Apr 2010 08:08:38 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id C9A0C8FC16 for ; Thu, 8 Apr 2010 08:08:38 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o3888c2m029583 for ; Thu, 8 Apr 2010 08:08:38 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o3888c7N029582; Thu, 8 Apr 2010 08:08:38 GMT (envelope-from gnats) Date: Thu, 8 Apr 2010 08:08:38 GMT Message-Id: <201004080808.o3888c7N029582@freefall.freebsd.org> To: gecko@FreeBSD.org From: Florian Smeets Cc: Subject: Re: ports/144145: www/firefox: Firefox 3.6 does not allow any https-connection X-BeenThere: freebsd-gecko@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Florian Smeets List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 08 Apr 2010 08:08:39 -0000 The following reply was made to PR ports/144145; it has been noted by GNATS. From: Florian Smeets To: bug-followup@FreeBSD.org, nkoch@gmx.de Cc: beat@freebsd.org Subject: Re: ports/144145: www/firefox: Firefox 3.6 does not allow any https-connection Date: Thu, 08 Apr 2010 09:21:43 +0200 Hi, could you please send us the output of pkg_info and of ldd /usr/local/lib/firefox3/firefox-bin ? I have been discussing this with Beat lately and we have absolutely no clue what is causing this, yet. Nor are we able to reproduce this. Any additional information could help us. 
Thanks, Florian From owner-freebsd-gecko@FreeBSD.ORG Thu Apr 8 09:18:49 2010 Return-Path: Delivered-To: gecko@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 27049106566B for ; Thu, 8 Apr 2010 09:18:49 +0000 (UTC) (envelope-from flo@smeets.im) Received: from mail.solomo.de (mail.solomo.de [85.214.124.163]) by mx1.freebsd.org (Postfix) with ESMTP id D40778FC13 for ; Thu, 8 Apr 2010 09:18:48 +0000 (UTC) Received: from mail.solomo.de (localhost [127.0.0.1]) by mail.solomo.de (Postfix) with ESMTP id 5DE4C61CCA; Thu, 8 Apr 2010 11:18:47 +0200 (CEST) X-Virus-Scanned: amavisd-new at vistream.de Received: from mail.solomo.de ([127.0.0.1]) by mail.solomo.de (mail.solomo.de [127.0.0.1]) (amavisd-new, port 10024) with LMTP id ka4bL9JlwGMK; Thu, 8 Apr 2010 11:18:37 +0200 (CEST) Received: from nibbler.vistream.local (relay3.vistream.de [87.139.10.28]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by mail.solomo.de (Postfix) with ESMTPSA id 6DA1E61CAD; Thu, 8 Apr 2010 11:18:37 +0200 (CEST) Message-ID: <4BBD9F6C.5090904@smeets.im> Date: Thu, 08 Apr 2010 11:18:36 +0200 From: Florian Smeets User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.1.9) Gecko/20100317 Thunderbird/3.0.4 MIME-Version: 1.0 To: andy.lavr@reactor-xg.kiev.ua References: <201004070850.o378o5ff085642@freefall.freebsd.org> <4BBCCC20.8040505@smeets.im> <4BBD70DD.6000100@reactor-xg.kiev.ua> In-Reply-To: <4BBD70DD.6000100@reactor-xg.kiev.ua> Content-Type: text/plain; charset=KOI8-R Content-Transfer-Encoding: 8bit Cc: gecko@FreeBSD.org Subject: Re: ports/145450: [ERROR] cannot compile www/firefox version 3.6.3 X-BeenThere: freebsd-gecko@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 08 Apr 2010 09:18:49 -0000 On 4/8/10 
7:59 AM, Andrei V. Lavreniyuk wrote:
> 07.04.2010 21:17, Florian Smeets wrote:
>
>>> >> show me the output of python -V and pkg_info|grep python?
>>>
>>> Workaround:
>>>
>>> # portupgrade -f boost-python-libs-1.41.0 py26-notify-0.1.1_7
>>> python26-2.6.4
>>>
>>> +
>>>
>>> Attached patch.
>
> None of my patches need to be applied. For me the problem was fixed by a
> rebuild (# portupgrade -f boost-python-libs-1.41.0 py26-notify-0.1.1_7
> python26-2.6.4). I think only patch-ply is useful.

So the issue was fixed by doing the portupgrade, and the patches were optional? Do I understand correctly that this PR can be closed?

>> Regarding your patches, can you explain how you created the patch-ply
>> file?
>
> ports/devel/py-ply

As firefox 3.7a3 still ships with the same version of py-ply as 3.6.3, I don't think we should diverge from what is shipped with firefox if there is no need.

Thank you for your efforts,
Florian

From owner-freebsd-gecko@FreeBSD.ORG Thu Apr 8 18:54:32 2010 Return-Path: Delivered-To: gecko@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B8B7B1065670 for ; Thu, 8 Apr 2010 18:54:32 +0000 (UTC) (envelope-from andy.lavr@reactor-xg.kiev.ua) Received: from mail.reactor-xg.kiev.ua (reactor-xg.kiev.ua [86.111.64.45]) by mx1.freebsd.org (Postfix) with ESMTP id 3F2958FC1B for ; Thu, 8 Apr 2010 18:54:31 +0000 (UTC) Received: from mail.reactor-xg.kiev.ua (mail.reactor-xg.kiev.ua [86.111.64.45]) by mail.reactor-xg.kiev.ua (Reactor-XG Mailer System) with ESMTP id o38IsTTS024214; Thu, 8 Apr 2010 21:54:30 +0300 (EEST) (envelope-from andy.lavr@reactor-xg.kiev.ua) Authentication-Results: mail.reactor-xg.kiev.ua; sender-id=pass header.from=andy.lavr@reactor-xg.kiev.ua; auth=pass (PLAIN); spf=pass smtp.mfrom=andy.lavr@reactor-xg.kiev.ua Message-ID: <4BBE2665.8000103@reactor-xg.kiev.ua> Date: Thu, 08 Apr 2010 21:54:29 +0300 From: "Andrei V.
Lavreniyuk" Organization: Technica-03, Inc. User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; ru; rv:1.9.1.9) Gecko/20100331 Thunderbird/3.0.4 MIME-Version: 1.0 To: Florian Smeets References: <201004070850.o378o5ff085642@freefall.freebsd.org> <4BBCCC20.8040505@smeets.im> <4BBD70DD.6000100@reactor-xg.kiev.ua> <4BBD9F6C.5090904@smeets.im> In-Reply-To: <4BBD9F6C.5090904@smeets.im> Content-Type: text/plain; charset=KOI8-R; format=flowed Content-Transfer-Encoding: 8bit Cc: gecko@FreeBSD.org Subject: Re: ports/145450: [ERROR] cannot compile www/firefox version 3.6.3 X-BeenThere: freebsd-gecko@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: andy.lavr@reactor-xg.kiev.ua List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 08 Apr 2010 18:54:32 -0000

08.04.2010 12:18, Florian Smeets wrote:
>>>> >> show me the output of python -V and pkg_info|grep python?
>>>>
>>>> Workaround:
>>>>
>>>> # portupgrade -f boost-python-libs-1.41.0 py26-notify-0.1.1_7
>>>> python26-2.6.4
>>>>
>>>> +
>>>>
>>>> Attached patch.
>>
>> None of my patches need to be applied. For me the problem was fixed by a
>> rebuild (# portupgrade -f boost-python-libs-1.41.0 py26-notify-0.1.1_7
>> python26-2.6.4). I think only patch-ply is useful.
>
> So the issue was fixed by doing the portupgrade, and the patches were
> optional? Do I understand correctly that this PR can be closed?

Yes. Thanks.

>>> Regarding your patches, can you explain how you created the patch-ply
>>> file?
>>
>> ports/devel/py-ply
>
> As firefox 3.7a3 still ships with the same version of py-ply as 3.6.3, I
> don't think we should diverge from what is shipped with firefox if there
> is no need.

Ok.

--
Best regards, Andrei V. Lavreniyuk.
From owner-freebsd-gecko@FreeBSD.ORG Fri Apr 9 07:17:16 2010 Return-Path: Delivered-To: gecko@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 727BA1065672; Fri, 9 Apr 2010 07:17:16 +0000 (UTC) (envelope-from beat@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 49E918FC08; Fri, 9 Apr 2010 07:17:16 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o397HGOs034970; Fri, 9 Apr 2010 07:17:16 GMT (envelope-from beat@freefall.freebsd.org) Received: (from beat@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o397HGBd034966; Fri, 9 Apr 2010 07:17:16 GMT (envelope-from beat) Date: Fri, 9 Apr 2010 07:17:16 GMT Message-Id: <201004090717.o397HGBd034966@freefall.freebsd.org> To: andy.lavr@gmail.com, beat@FreeBSD.org, gecko@FreeBSD.org From: beat@FreeBSD.org Cc: Subject: Re: ports/145450: cannot compile www/firefox version 3.6.3 X-BeenThere: freebsd-gecko@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 09 Apr 2010 07:17:16 -0000 Synopsis: cannot compile www/firefox version 3.6.3 State-Changed-From-To: open->closed State-Changed-By: beat State-Changed-When: Fri Apr 9 07:17:15 UTC 2010 State-Changed-Why: Fixed by doing the portupgrade. Thanks for your feedback and thanks to Florian for tracking this down. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=145450 From owner-freebsd-gecko@FreeBSD.ORG Fri Apr 9 21:46:34 2010 Return-Path: Delivered-To: freebsd-gecko@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0CBE81065674 for ; Fri, 9 Apr 2010 21:46:34 +0000 (UTC) (envelope-from svn-freebsd-gecko@chruetertee.ch) Received: from trillian.chruetertee.ch (trillian.chruetertee.ch [217.150.245.56]) by mx1.freebsd.org (Postfix) with ESMTP id 998EC8FC18 for ; Fri, 9 Apr 2010 21:46:33 +0000 (UTC) Received: from trillian.chruetertee.ch (trillian [217.150.245.56]) by trillian.chruetertee.ch (8.14.3/8.14.3) with ESMTP id o39LkWfn097977 for ; Fri, 9 Apr 2010 21:46:32 GMT (envelope-from svn-freebsd-gecko@chruetertee.ch) Received: (from www@localhost) by trillian.chruetertee.ch (8.14.3/8.14.3/Submit) id o39LkRAp097624 for freebsd-gecko@freebsd.org; Fri, 9 Apr 2010 21:46:27 GMT (envelope-from svn-freebsd-gecko@chruetertee.ch) Date: Fri, 9 Apr 2010 21:46:27 GMT Message-Id: <201004092146.o39LkRAp097624@trillian.chruetertee.ch> X-Authentication-Warning: trillian.chruetertee.ch: www set sender to svn-freebsd-gecko@chruetertee.ch using -f From: svn-freebsd-gecko@chruetertee.ch To: freebsd-gecko@freebsd.org MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Subject: [SVN-Commit] r303 - branches/experimental/www/seamonkey2 trunk/www/seamonkey2 X-BeenThere: freebsd-gecko@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: freebsd-gecko@freebsd.org List-Id: Gecko Rendering Engine issues List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 09 Apr 2010 21:46:34 -0000 Author: beat Date: Fri Apr 9 21:46:26 2010 New Revision: 303 Log: - Restore USE_MOZILLA dependencies as they were changed accidentally with the last commit.
Modified:
   branches/experimental/www/seamonkey2/Makefile
   trunk/www/seamonkey2/Makefile

Modified: branches/experimental/www/seamonkey2/Makefile
==============================================================================
--- branches/experimental/www/seamonkey2/Makefile  Tue Apr  6 08:19:50 2010  (r302)
+++ branches/experimental/www/seamonkey2/Makefile  Fri Apr  9 21:46:26 2010  (r303)
@@ -39,7 +39,7 @@
 		--enable-svg-renderer=cairo \
 		--enable-system-cairo \
 		--enable-canvas
-USE_MOZILLA=	-png dbm -jpeg -xft
+USE_MOZILLA=	-png -dbm -jpeg -xft
 MOZ_TOOLKIT=	cairo-gtk2
 MOZILLA_PLIST_DIRS=	bin lib
 MOZ_PKGCONFIG_FILES=

Modified: trunk/www/seamonkey2/Makefile
==============================================================================
--- trunk/www/seamonkey2/Makefile  Tue Apr  6 08:19:50 2010  (r302)
+++ trunk/www/seamonkey2/Makefile  Fri Apr  9 21:46:26 2010  (r303)
@@ -39,7 +39,7 @@
 		--enable-svg-renderer=cairo \
 		--enable-system-cairo \
 		--enable-canvas
-USE_MOZILLA=	-png dbm -jpeg -xft
+USE_MOZILLA=	-png -dbm -jpeg -xft
 MOZ_TOOLKIT=	cairo-gtk2
 MOZILLA_PLIST_DIRS=	bin lib
 MOZ_PKGCONFIG_FILES=