From owner-freebsd-embedded@FreeBSD.ORG Sun Jan 23 19:30:34 2011
Date: Sun, 23 Jan 2011 21:30:23 +0200
From: Aleksandr Rybalko <ray@ddteam.net>
To: Pawel Jakub Dawidek <pjd@FreeBSD.org>
Cc: freebsd-embedded@freebsd.org, freebsd-geom@freebsd.org
Subject: GEOM_UNCOMPRESS (was: GEOM_LZMA)
Message-Id: <20110123213023.304ff042.ray@ddteam.net>
In-Reply-To: <20110123175731.GA14458@garage.freebsd.pl>
References: <20110120084955.GD1716@garage.freebsd.pl>
	<20110120122644.9a38974c.ray@dlink.ua>
	<20110121154636.f10529d8.ray@dlink.ua>
	<20110121154303.GG1698@garage.freebsd.pl>
	<20110123003013.90378231.ray@ddteam.net>
	<20110123022226.0567c2ff.ray@ddteam.net>
	<20110123175731.GA14458@garage.freebsd.pl>
X-Mailer: Sylpheed 3.0.3 (GTK+ 2.22.1; i386-portbld-freebsd8.1)
Mime-Version: 1.0
Content-Type: multipart/mixed;
 boundary="Multipart=_Sun__23_Jan_2011_21_30_23_+0200_nBQzY5lRGbnEEKiY"

This is a multi-part message in MIME format.

--Multipart=_Sun__23_Jan_2011_21_30_23_+0200_nBQzY5lRGbnEEKiY
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Sun, 23 Jan 2011 18:57:31 +0100
Pawel Jakub Dawidek <pjd@FreeBSD.org> wrote:

> On Sun, Jan 23, 2011 at 02:22:26AM +0200, Aleksandr Rybalko wrote:
> > On Sun, 23 Jan 2011 00:12:30 +0100
> > Ivan Voras wrote:
> >
> > > On 22 January 2011 23:30, Aleksandr Rybalko wrote:
> > >
> > > > So as a result of a small discussion (thanks, Ivan and Pawel
> > > > Jakub) I think the best name is GEOM_UCOMPRESS (the module only
> > > > UN-compresses).
> > >
> > > It's a good name, I'd accept it. But since you already have a
> > > descriptive name, why not add the missing "N" and make it
> > > GEOM_UNCOMPRESS? It's more user-friendly in case future users
> > > start wondering what the "U" means :)
> >
> > I'd like full names like UNCOMPRESS, but I found UNIX historical
> > examples like umount.
> >
> > Anyway, I agree with you and will rename it to GEOM_UNCOMPRESS :)
>
> Could you point me at the final patch?

Yea, with pleasure:
http://my.ddteam.net/files/geom_uncompress.patch
http://my.ddteam.net/files/mkulzma.patch
http://my.ddteam.net/files/add_contrib_xz-embedded.patch

> --
> Pawel Jakub Dawidek                       http://www.wheelsystems.com
> pjd@FreeBSD.org                           http://www.FreeBSD.org
> FreeBSD committer                         Am I Evil? Yes, I Am!
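For those who want to try it, the patches should apply with something
like the following (a rough sketch only; the fetch step and the -p
level are illustrative, not commands taken from the thread):

  # apply against a /usr/src checkout (svn-style paths, hence -p0)
  cd /usr/src
  fetch http://my.ddteam.net/files/add_contrib_xz-embedded.patch
  fetch http://my.ddteam.net/files/geom_uncompress.patch
  fetch http://my.ddteam.net/files/mkulzma.patch
  patch -p0 < add_contrib_xz-embedded.patch
  patch -p0 < geom_uncompress.patch
  patch -p0 < mkulzma.patch
  # then enable the class in the kernel config, as the module
  # comment in g_uncompress.c says:
  #   device geom_uncompress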
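Once built, the intended workflow looks roughly like this (names are
examples only: the ".uncompress" provider suffix follows
g_uncompress_taste() in the patch, "geom_uncompress" as a loadable
module name follows DECLARE_GEOM_CLASS(), and md0 assumes the first
free md unit):

  # compress a cd9660 image; the output defaults to image.iso.ulzma
  mkulzma -v -s 16384 image.iso

  # attach the compressed file and mount the transparent view
  kldload geom_uncompress
  mdconfig -a -t vnode -f image.iso.ulzma   # prints e.g. md0
  mount_cd9660 /dev/md0.uncompress /mnt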
-- Aleksandr Rybalko --Multipart=_Sun__23_Jan_2011_21_30_23_+0200_nBQzY5lRGbnEEKiY Content-Type: text/x-diff; name="geom_uncompress.patch" Content-Disposition: attachment; filename="geom_uncompress.patch" Content-Transfer-Encoding: 7bit Index: sys/conf/files =================================================================== --- sys/conf/files (revision 217759) +++ sys/conf/files (working copy) @@ -2089,6 +2089,22 @@ geom/raid3/g_raid3_ctl.c optional geom_raid3 geom/shsec/g_shsec.c optional geom_shsec geom/stripe/g_stripe.c optional geom_stripe +geom/uncompress/g_uncompress.c optional geom_uncompress +contrib/xz-embedded/xz_malloc.c optional geom_uncompress \ + dependency "$S/contrib/xz-embedded/*.[ch]" \ + compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/" +contrib/xz-embedded/xz_crc32.c optional geom_uncompress \ + dependency "$S/contrib/xz-embedded/*.[ch]" \ + compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/" +contrib/xz-embedded/xz_dec_bcj.c optional geom_uncompress \ + dependency "$S/contrib/xz-embedded/*.[ch]" \ + compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/" +contrib/xz-embedded/xz_dec_lzma2.c optional geom_uncompress \ + dependency "$S/contrib/xz-embedded/*.[ch]" \ + compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/" +contrib/xz-embedded/xz_dec_stream.c optional geom_uncompress \ + dependency "$S/contrib/xz-embedded/*.[ch]" \ + compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/" geom/uzip/g_uzip.c optional geom_uzip geom/virstor/binstream.c optional geom_virstor geom/virstor/g_virstor.c optional geom_virstor @@ -2452,7 +2468,7 @@ net/vnet.c optional vimage net/zlib.c optional crypto | geom_uzip | ipsec | \ mxge | netgraph_deflate | \ - ddb_ctf | gzio + ddb_ctf | gzio | geom_uncompress net80211/ieee80211.c optional wlan net80211/ieee80211_acl.c optional wlan wlan_acl net80211/ieee80211_action.c optional wlan Index: sys/geom/uncompress/g_uncompress.c =================================================================== --- sys/geom/uncompress/g_uncompress.c (revision 0) +++ sys/geom/uncompress/g_uncompress.c (revision 0) @@ -0,0 +1,662 @@ +/*- + * Copyright (c) 2010,2011 Aleksandr Rybalko + * Copyright (c) 2004 Max Khon + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. 
+ */
+
+/*
+ * GEOM UNCOMPRESS module - simple decompression module for use with
+ * read-only compressed images made by the mkuzip(8) or mkulzma(8)
+ * utilities.
+ *
+ * To enable this module in the kernel config, add this line:
+ *	device geom_uncompress
+ */
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include <sys/param.h>
+#include <sys/systm.h>
+#include <sys/kernel.h>
+#include <sys/bio.h>
+#include <sys/endian.h>
+#include <sys/errno.h>
+#include <sys/lock.h>
+#include <sys/mutex.h>
+#include <sys/malloc.h>
+
+#include <geom/geom.h>
+
+#include <net/zlib.h>
+#include <xz.h>
+
+#ifdef GEOM_UNCOMPRESS_DEBUG
+#define DPRINTF(a)	printf a
+extern int g_debugflags;
+#else
+#define DPRINTF(a)
+#endif
+
+MALLOC_DEFINE(M_GEOM_UNCOMPRESS, "geom_uncompress", "GEOM UNCOMPRESS data structures");
+
+#define UNCOMPRESS_CLASS_NAME "UNCOMPRESS"
+#define GEOM_UZIP_MAJVER '2'
+#define GEOM_ULZMA_MAJVER '3'
+
+/*
+ * Maximum allowed valid block size (to prevent foot-shooting)
+ */
+#define MAX_BLKSZ (MAXPHYS)
+
+/*
+ * Integer values (block size, number of blocks, offsets) are stored
+ * in big-endian (network) order on disk and in struct cloop_header,
+ * and in native order in struct g_uncompress_softc.
+ */
+
+#define CLOOP_MAGIC_LEN 128
+static char CLOOP_MAGIC_START[] = "#!/bin/sh\n";
+
+struct cloop_header {
+	char magic[CLOOP_MAGIC_LEN];	/* cloop magic */
+	uint32_t blksz;			/* block size */
+	uint32_t nblocks;		/* number of blocks */
+};
+
+struct g_uncompress_softc {
+	uint32_t blksz;			/* block size */
+	uint32_t nblocks;		/* number of blocks */
+	uint64_t *offsets;
+	enum {
+		GEOM_UZIP = 1,
+		GEOM_ULZMA
+	} type;
+
+	struct mtx last_mtx;
+	uint32_t last_blk;		/* last blk no */
+	char *last_buf;			/* last blk data */
+	int req_total;			/* total requests */
+	int req_cached;			/* cached requests */
+
+	/* XZ decoder structs */
+	struct xz_buf *b;
+	struct xz_dec *s;
+	z_stream *zs;
+};
+
+static void
+g_uncompress_softc_free(struct g_uncompress_softc *sc, struct g_geom *gp)
+{
+	if (gp != NULL) {
+		printf("%s: %d requests, %d cached\n",
+		    gp->name, sc->req_total, sc->req_cached);
+	}
+	if (sc->offsets != NULL) {
+		free(sc->offsets, M_GEOM_UNCOMPRESS);
+		sc->offsets = 0;
+	}
+
+	switch (sc->type) {
+	case GEOM_ULZMA:
+		if (sc->b) {
+			free(sc->b, M_GEOM_UNCOMPRESS);
+			sc->b = 0;
+		}
+		if (sc->s) {
+			xz_dec_end(sc->s);
+			sc->s = 0;
+		}
+		break;
+	case GEOM_UZIP:
+		if (sc->zs) {
+			inflateEnd(sc->zs);
+			free(sc->zs, M_GEOM_UNCOMPRESS);
+			sc->zs = 0;
+		}
+		break;
+	}
+
+	mtx_destroy(&sc->last_mtx);
+	free(sc->last_buf, M_GEOM_UNCOMPRESS);
+	free(sc, M_GEOM_UNCOMPRESS);
+}
+
+static void *
+z_alloc(void *nil, u_int type, u_int size)
+{
+	void *ptr;
+
+	ptr = malloc(type * size, M_GEOM_UNCOMPRESS, M_NOWAIT);
+	return ptr;
+}
+
+static void
+z_free(void *nil, void *ptr)
+{
+	free(ptr, M_GEOM_UNCOMPRESS);
+}
+
+static void
+g_uncompress_done(struct bio *bp)
+{
+	int err = 0;
+	struct bio *bp2;
+	struct g_provider *pp, *pp2;
+	struct g_consumer *cp;
+	struct g_geom *gp;
+	struct g_uncompress_softc *sc;
+	off_t pos, upos;
+	uint32_t start_blk, i;
+	size_t bsize;
+
+	bp2 = bp->bio_parent;
+	pp = bp2->bio_to;
+	gp = pp->geom;
+	cp = LIST_FIRST(&gp->consumer);
+	pp2 = cp->provider;
+	sc = gp->softc;
+	DPRINTF(("%s: done\n", gp->name));
+
+	bp2->bio_error = bp->bio_error;
+	if (bp2->bio_error != 0)
+		goto done;
+
+	/*
+	 * Example:
+	 * Uncompressed block size = 65536
+	 * User request: 65540-261632
+	 * (4 uncompressed blocks -4B at start, -512B at end)
+	 *
+	 * We have 512(secsize)*63(nsec) = 32256B at offset 1024
+	 * From: 1024	bp->bio_offset = 1024
+	 * To: 33280	bp->bio_length = 33280 - 1024 = 32256
+	 * Compressed blocks: 0-1020, 1020-1080, 1080-4845, 4845-12444,
+	 * 12444-33210, 33210-44100, ...
+ * + * Get start_blk from user request: + * start_blk = bp2->bio_offset / 65536 = 65540/65536 = 1 + * bsize (block size of parent) = pp2->sectorsize (Now is 4B) + * pos(in stream from parent) = sc->offsets[start_blk] % bsize = + * = sc->offsets[1] % 4 = 1020 % 4 = 0 + */ + + + /* + * Uncompress data. + */ + start_blk = bp2->bio_offset / sc->blksz; + bsize = pp2->sectorsize; + pos = sc->offsets[start_blk] % bsize; + upos = 0; + + DPRINTF(("%s: done: bio_length %lld bio_completed %lld start_blk %d, " + "pos %lld, upos %lld (%lld, %d, %d)\n", + gp->name, bp->bio_length, bp->bio_completed, start_blk, pos, upos, + bp2->bio_offset, sc->blksz, bsize)); + + for (i = start_blk; upos < bp2->bio_length; i++) { + off_t len, dlen, ulen, uoff; + + uoff = i == start_blk ? bp2->bio_offset % sc->blksz : 0; + ulen = MIN(sc->blksz - uoff, bp2->bio_length - upos); + dlen = len = sc->offsets[i + 1] - sc->offsets[i]; + + DPRINTF(("%s: done: inflate block %d, start %lld, end %lld " + "len %lld\n", + gp->name, i, sc->offsets[i], sc->offsets[i + 1], len)); + + if (len == 0) { + /* All zero block: no cache update */ + bzero(bp2->bio_data + upos, ulen); + upos += ulen; + bp2->bio_completed += ulen; + continue; + } + + mtx_lock(&sc->last_mtx); + +#ifdef GEOM_UNCOMPRESS_DEBUG + if (g_debugflags & 32) + hexdump(bp->bio_data + pos, dlen, 0, 0); +#endif + + switch (sc->type) { + case GEOM_ULZMA: + sc->b->in = bp->bio_data + pos; + sc->b->out = sc->last_buf; + sc->b->in_pos = sc->b->out_pos = 0; + sc->b->in_size = dlen; + sc->b->out_size = (size_t)-1; + + err = (xz_dec_run(sc->s, sc->b) != XZ_STREAM_END)?1:0; + /* TODO decoder recovery, if needed */ + break; + case GEOM_UZIP: + sc->zs->next_in = bp->bio_data + pos; + sc->zs->avail_in = dlen; + sc->zs->next_out = sc->last_buf; + sc->zs->avail_out = sc->blksz; + + err = (inflate(sc->zs, Z_FINISH) != Z_STREAM_END)?1:0; + if ((err) && (inflateReset(sc->zs) != Z_OK)) + printf("%s: UZIP decoder reset failed\n", + gp->name); + break; + } + + if (err) { + sc->last_blk = -1; + mtx_unlock(&sc->last_mtx); + DPRINTF(("%s: done: inflate failed, code=%d\n", + gp->name, err)); + bp2->bio_error = EIO; + goto done; + } + +#ifdef GEOM_UNCOMPRESS_DEBUG + if (g_debugflags & 32) + hexdump(sc->last_buf, sc->b->out_size, 0, 0); +#endif + + sc->last_blk = i; + DPRINTF(("%s: done: inflated \n", gp->name)); + memcpy(bp2->bio_data + upos, sc->last_buf + uoff, ulen); + mtx_unlock(&sc->last_mtx); + + pos += len; + upos += ulen; + bp2->bio_completed += ulen; + } + +done: + /* + * Finish processing the request. 
+ */ + DPRINTF(("%s: done: (%d, %lld, %ld)\n", + gp->name, bp2->bio_error, bp2->bio_completed, bp2->bio_resid)); + free(bp->bio_data, M_GEOM_UNCOMPRESS); + g_destroy_bio(bp); + g_io_deliver(bp2, bp2->bio_error); +} + +static void +g_uncompress_start(struct bio *bp) +{ + struct bio *bp2; + struct g_provider *pp, *pp2; + struct g_geom *gp; + struct g_consumer *cp; + struct g_uncompress_softc *sc; + uint32_t start_blk, end_blk; + size_t bsize; + + + pp = bp->bio_to; + gp = pp->geom; + DPRINTF(("%s: start (%s) to %s off=%lld len=%lld\n", gp->name, + (bp->bio_cmd==BIO_READ)?"BIO_READ":"BIO_WRITE*", + pp->name, bp->bio_offset, bp->bio_length)); + + if (bp->bio_cmd != BIO_READ) { + g_io_deliver(bp, EOPNOTSUPP); + return; + } + + cp = LIST_FIRST(&gp->consumer); + pp2 = cp->provider; + sc = gp->softc; + + start_blk = bp->bio_offset / sc->blksz; + end_blk = howmany(bp->bio_offset + bp->bio_length, sc->blksz); + KASSERT(start_blk < sc->nblocks, + ("start_blk out of range")); + KASSERT(end_blk <= sc->nblocks, + ("end_blk out of range")); + + sc->req_total++; + if (start_blk + 1 == end_blk) { + mtx_lock(&sc->last_mtx); + if (start_blk == sc->last_blk) { + off_t uoff; + + uoff = bp->bio_offset % sc->blksz; + KASSERT(bp->bio_length <= sc->blksz - uoff, + ("cached data error")); + memcpy(bp->bio_data, sc->last_buf + uoff, + bp->bio_length); + sc->req_cached++; + mtx_unlock(&sc->last_mtx); + + DPRINTF(("%s: start: cached 0 + %lld, " + "%lld + 0 + %lld\n", + gp->name, bp->bio_length, uoff, bp->bio_length)); + bp->bio_completed = bp->bio_length; + g_io_deliver(bp, 0); + return; + } + mtx_unlock(&sc->last_mtx); + } + + bp2 = g_clone_bio(bp); + if (bp2 == NULL) { + g_io_deliver(bp, ENOMEM); + return; + } + DPRINTF(("%s: start (%d..%d), %s: %d + %llu, %s: %d + %llu\n", + gp->name, start_blk, end_blk, + pp->name, pp->sectorsize, pp->mediasize, + pp2->name, pp2->sectorsize, pp2->mediasize)); + + bsize = pp2->sectorsize; + + bp2->bio_done = g_uncompress_done; + bp2->bio_offset = rounddown(sc->offsets[start_blk],bsize); + bp2->bio_length = roundup(sc->offsets[end_blk],bsize) - bp2->bio_offset; + bp2->bio_data = malloc(bp2->bio_length, M_GEOM_UNCOMPRESS, M_NOWAIT); + + DPRINTF(("%s: start %lld + %lld -> %lld + %lld -> %lld + %lld\n", + gp->name, + bp->bio_offset, bp->bio_length, + sc->offsets[start_blk], + sc->offsets[end_blk] - sc->offsets[start_blk], + bp2->bio_offset, bp2->bio_length)); + + if (bp2->bio_data == NULL) { + g_destroy_bio(bp2); + g_io_deliver(bp, ENOMEM); + return; + } + + g_io_request(bp2, cp); + DPRINTF(("%s: start ok\n", gp->name)); +} + +static void +g_uncompress_orphan(struct g_consumer *cp) +{ + struct g_geom *gp; + + g_trace(G_T_TOPOLOGY, "g_uncompress_orphan(%p/%s)", cp, cp->provider->name); + g_topology_assert(); + KASSERT(cp->provider->error != 0, + ("g_uncompress_orphan with error == 0")); + + gp = cp->geom; + g_uncompress_softc_free(gp->softc, gp); + gp->softc = NULL; + g_wither_geom(gp, cp->provider->error); +} + +static int +g_uncompress_access(struct g_provider *pp, int dr, int dw, int de) +{ + struct g_geom *gp; + struct g_consumer *cp; + + gp = pp->geom; + cp = LIST_FIRST(&gp->consumer); + KASSERT (cp != NULL, ("g_uncompress_access but no consumer")); + + if (cp->acw + dw > 0) + return EROFS; + + return (g_access(cp, dr, dw, de)); +} + +static void +g_uncompress_spoiled(struct g_consumer *cp) +{ + struct g_geom *gp; + + gp = cp->geom; + g_trace(G_T_TOPOLOGY, "g_uncompress_spoiled(%p/%s)", cp, gp->name); + g_topology_assert(); + + g_uncompress_softc_free(gp->softc, gp); + gp->softc = 
NULL; + g_wither_geom(gp, ENXIO); +} + +static struct g_geom * +g_uncompress_taste(struct g_class *mp, struct g_provider *pp, int flags) +{ + int error; + uint32_t i, total_offsets, offsets_read, type; + uint8_t *buf; + struct cloop_header *header; + struct g_consumer *cp; + struct g_geom *gp; + struct g_provider *pp2; + struct g_uncompress_softc *sc; + + g_trace(G_T_TOPOLOGY, "g_uncompress_taste(%s,%s)", mp->name, pp->name); + g_topology_assert(); + + /* Skip providers that are already open for writing. */ + if (pp->acw > 0) + return (NULL); + + buf = NULL; + + /* + * Create geom instance. + */ + gp = g_new_geomf(mp, "%s.uncompress", pp->name); + cp = g_new_consumer(gp); + error = g_attach(cp, pp); + if (error == 0) + error = g_access(cp, 1, 0, 0); + if (error) { + g_detach(cp); + g_destroy_consumer(cp); + g_destroy_geom(gp); + return (NULL); + } + g_topology_unlock(); + + /* + * Read cloop header, look for CLOOP magic, perform + * other validity checks. + */ + DPRINTF(("%s: media sectorsize %u, mediasize %lld\n", + gp->name, pp->sectorsize, pp->mediasize)); + + i = roundup(sizeof(struct cloop_header), pp->sectorsize); + buf = g_read_data(cp, 0, i, NULL); + if (buf == NULL) + goto err; + + header = (struct cloop_header *) buf; + if (strncmp(header->magic, CLOOP_MAGIC_START, + sizeof(CLOOP_MAGIC_START) - 1) != 0) { + DPRINTF(("%s: no CLOOP magic\n", gp->name)); + goto err; + } + + switch (header->magic[0x0b]) { + case 'L': + type = GEOM_ULZMA; + if (header->magic[0x0c] < GEOM_ULZMA_MAJVER) { + DPRINTF(("%s: image version too old\n", gp->name)); + goto err; + } + printf("%s: GEOM_ULZMA image found\n", gp->name); + break; + case 'V': + type = GEOM_UZIP; + if (header->magic[0x0c] < GEOM_UZIP_MAJVER) { + DPRINTF(("%s: image version too old\n", gp->name)); + goto err; + } + printf("%s: GEOM_UZIP image found\n", gp->name); + break; + default: + DPRINTF(("%s: unsupported image type\n", gp->name)); + goto err; + } + + DPRINTF(("%s: found CLOOP magic\n", gp->name)); + /* + * Initialize softc and read offsets. 
+	 */
+	sc = malloc(sizeof(*sc), M_GEOM_UNCOMPRESS, M_WAITOK | M_ZERO);
+	gp->softc = sc;
+	sc->type = type;
+	sc->blksz = ntohl(header->blksz);
+	sc->nblocks = ntohl(header->nblocks);
+	if (sc->blksz % 4 != 0) {
+		printf("%s: block size (%u) should be multiple of 4.\n",
+		    gp->name, sc->blksz);
+		goto err;
+	}
+	if (sc->blksz > MAX_BLKSZ) {
+		printf("%s: block size (%u) should not be larger than %d.\n",
+		    gp->name, sc->blksz, MAX_BLKSZ);
+		goto err;
+	}
+	total_offsets = sc->nblocks + 1;
+	if (sizeof(struct cloop_header) +
+	    total_offsets * sizeof(uint64_t) > pp->mediasize) {
+		printf("%s: media too small for %u blocks\n",
+		    gp->name, sc->nblocks);
+		goto err;
+	}
+	sc->offsets = malloc(
+	    total_offsets * sizeof(uint64_t), M_GEOM_UNCOMPRESS, M_WAITOK);
+	offsets_read = MIN(total_offsets,
+	    (pp->sectorsize - sizeof(*header)) / sizeof(uint64_t));
+	for (i = 0; i < offsets_read; i++)
+		sc->offsets[i] = be64toh(((uint64_t *) (header + 1))[i]);
+	DPRINTF(("%s: %u offsets in the first sector\n",
+	    gp->name, offsets_read));
+
+	free(buf, M_GEOM);
+	i = roundup((sizeof(struct cloop_header) +
+	    total_offsets * sizeof(uint64_t)), pp->sectorsize);
+	buf = g_read_data(cp, 0, i, NULL);
+	if (buf == NULL)
+		goto err;
+	for (i = 0; i < total_offsets; i++) {
+		sc->offsets[i] =
+		    be64toh(((uint64_t *) (buf + sizeof(struct cloop_header)))[i]);
+	}
+	DPRINTF(("%s: done reading offsets\n", gp->name));
+	mtx_init(&sc->last_mtx, "geom_uncompress cache", NULL, MTX_DEF);
+	sc->last_blk = -1;
+	sc->last_buf = malloc(sc->blksz, M_GEOM_UNCOMPRESS, M_WAITOK);
+	sc->req_total = 0;
+	sc->req_cached = 0;
+
+	switch (sc->type) {
+	case GEOM_ULZMA:
+		xz_crc32_init();
+		sc->s = xz_dec_init(XZ_SINGLE, 0);
+		sc->b = (struct xz_buf *)malloc(sizeof(struct xz_buf),
+		    M_GEOM_UNCOMPRESS, M_WAITOK);
+		break;
+	case GEOM_UZIP:
+		sc->zs = (z_stream *)malloc(sizeof(z_stream),
+		    M_GEOM_UNCOMPRESS, M_WAITOK);
+		sc->zs->zalloc = z_alloc;
+		sc->zs->zfree = z_free;
+		if (inflateInit(sc->zs) != Z_OK) {
+			goto err;
+		}
+		break;
+	}
+
+	g_topology_lock();
+	pp2 = g_new_providerf(gp, "%s", gp->name);
+	pp2->sectorsize = 512;
+	pp2->mediasize = (off_t)sc->nblocks * sc->blksz;
+	pp2->flags = pp->flags & G_PF_CANDELETE;
+	if (pp->stripesize > 0) {
+		pp2->stripesize = pp->stripesize;
+		pp2->stripeoffset = pp->stripeoffset;
+	}
+	g_error_provider(pp2, 0);
+	free(buf, M_GEOM);
+	g_access(cp, -1, 0, 0);
+
+	DPRINTF(("%s: taste ok (%d, %lld), (%d, %d), %x\n",
+	    gp->name,
+	    pp2->sectorsize, pp2->mediasize,
+	    pp2->stripeoffset, pp2->stripesize, pp2->flags));
+	printf("%s: %u x %u blocks\n",
+	    gp->name, sc->nblocks, sc->blksz);
+	return (gp);
+
+err:
+	g_topology_lock();
+	g_access(cp, -1, 0, 0);
+	if (buf != NULL)
+		free(buf, M_GEOM);
+	if (gp->softc != NULL) {
+		g_uncompress_softc_free(gp->softc, NULL);
+		gp->softc = NULL;
+	}
+	g_detach(cp);
+	g_destroy_consumer(cp);
+	g_destroy_geom(gp);
+	return (NULL);
+}
+
+static int
+g_uncompress_destroy_geom(struct gctl_req *req, struct g_class *mp, struct g_geom *gp)
+{
+	struct g_provider *pp;
+
+	g_trace(G_T_TOPOLOGY, "g_uncompress_destroy_geom(%s, %s)", mp->name, gp->name);
+	g_topology_assert();
+
+	if (gp->softc == NULL) {
+		printf("%s(%s): gp->softc == NULL\n", __func__, gp->name);
+		return (ENXIO);
+	}
+
+	KASSERT(gp != NULL, ("NULL geom"));
+	pp = LIST_FIRST(&gp->provider);
+	KASSERT(pp != NULL, ("NULL provider"));
+	if (pp->acr > 0 || pp->acw > 0 || pp->ace > 0)
+		return (EBUSY);
+
+	g_uncompress_softc_free(gp->softc, gp);
+	gp->softc = NULL;
+	g_wither_geom(gp, ENXIO);
+	return (0);
+}
+
+static struct g_class
g_uncompress_class = {
+	.name = UNCOMPRESS_CLASS_NAME,
+	.version = G_VERSION,
+	.taste = g_uncompress_taste,
+	.destroy_geom = g_uncompress_destroy_geom,
+
+	.start = g_uncompress_start,
+	.orphan = g_uncompress_orphan,
+	.access = g_uncompress_access,
+	.spoiled = g_uncompress_spoiled,
+};
+
+DECLARE_GEOM_CLASS(g_uncompress_class, g_uncompress);
+

Property changes on: sys/geom/uncompress/g_uncompress.c
___________________________________________________________________
Added: svn:mime-type
   + text/plain
Added: svn:keywords
   + FreeBSD=%H

--Multipart=_Sun__23_Jan_2011_21_30_23_+0200_nBQzY5lRGbnEEKiY
Content-Type: text/x-diff;
 name="mkulzma.patch"
Content-Disposition: attachment;
 filename="mkulzma.patch"
Content-Transfer-Encoding: 7bit

Index: usr.bin/mkulzma/mkulzma.c
===================================================================
--- usr.bin/mkulzma/mkulzma.c	(revision 0)
+++ usr.bin/mkulzma/mkulzma.c	(revision 0)
@@ -0,0 +1,323 @@
+/*
+ * ----------------------------------------------------------------------------
+ * Rewritten from mkuzip.c by Aleksandr Rybalko <ray@ddteam.net>
+ * ----------------------------------------------------------------------------
+ * "THE BEER-WARE LICENSE" (Revision 42):
+ * <sobomax@FreeBSD.org> wrote this file. As long as you retain this notice you
+ * can do whatever you want with this stuff. If we meet some day, and you think
+ * this stuff is worth it, you can buy me a beer in return. Maxim Sobolev
+ * ----------------------------------------------------------------------------
+ *
+ * $FreeBSD$
+ *
+ */
+
+#include <sys/types.h>
+#include <sys/disk.h>
+#include <sys/endian.h>
+#include <sys/param.h>
+#include <sys/stat.h>
+#include <sys/uio.h>
+#include <netinet/in.h>
+#include <err.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+
+#include <lzma.h>
+
+
+#define CLSTSIZE	16384
+#define DEFAULT_SUFX	".ulzma"
+
+#define USED_BLOCKSIZE DEV_BSIZE
+
+
+#define CLOOP_MAGIC_LEN 128
+/* Format L3.0, since we move to XZ API */
+static char CLOOP_MAGIC_START[] = "#!/bin/sh\n#L3.0 Format\n"
+    "m=geom_ulzma\n(kldstat -m $m 2>&-||kldload $m)>&-&&"
+    "mount_cd9660 /dev/`mdconfig -af $0`.ulzma $1\nexit $?\n";
+
+static char *readblock(int, char *, u_int32_t);
+static void usage(void);
+static void *safe_malloc(size_t);
+static void cleanup(void);
+
+static char *cleanfile = NULL;
+
+int main(int argc, char **argv)
+{
+	char *iname, *oname, *obuf, *ibuf;
+	int fdr, fdw, i, opt, verbose, tmp;
+	struct iovec iov[2];
+	struct stat sb;
+	uint32_t destlen;
+	uint64_t offset;
+	uint64_t *toc;
+	lzma_filter filters[2];
+	lzma_options_lzma opt_lzma;
+	lzma_ret ret;
+	lzma_stream strm = LZMA_STREAM_INIT;
+	struct cloop_header {
+		char magic[CLOOP_MAGIC_LEN];	/* cloop magic */
+		uint32_t blksz;			/* block size */
+		uint32_t nblocks;		/* number of blocks */
+	} hdr;
+
+	memset(&hdr, 0, sizeof(hdr));
+	hdr.blksz = CLSTSIZE;
+	strcpy(hdr.magic, CLOOP_MAGIC_START);
+	oname = NULL;
+	verbose = 0;
+
+	while((opt = getopt(argc, argv, "o:s:v")) != -1) {
+		switch(opt) {
+		case 'o':
+			oname = optarg;
+			break;
+
+		case 's':
+			tmp = atoi(optarg);
+			if (tmp <= 0) {
+				errx(1, "invalid cluster size specified: %s",
+				    optarg);
+				/* Not reached */
+			}
+			if (tmp % USED_BLOCKSIZE != 0) {
+				errx(1, "cluster size should be multiple of %d",
+				    USED_BLOCKSIZE);
+				/* Not reached */
+			}
+			if (tmp > MAXPHYS) {
+				errx(1, "cluster size is too large");
+				/* Not reached */
+			}
+			hdr.blksz = tmp;
+			break;
+
+		case 'v':
+			verbose = 1;
+			break;
+
+		default:
+			usage();
+			/* Not reached */
+		}
+	}
+	argc -= optind;
+	argv += optind;
+
+	if (argc != 1) {
+		usage();
+		/* Not reached */
+	}
+
+	iname = argv[0];
+	if (oname == NULL) {
asprintf(&oname, "%s%s", iname, DEFAULT_SUFX); + if (oname == NULL) { + err(1, "can't allocate memory"); + /* Not reached */ + } + } + + obuf = safe_malloc(hdr.blksz*2); + ibuf = safe_malloc(hdr.blksz); + + signal(SIGHUP, exit); + signal(SIGINT, exit); + signal(SIGTERM, exit); + signal(SIGXCPU, exit); + signal(SIGXFSZ, exit); + atexit(cleanup); + + fdr = open(iname, O_RDONLY); + if (fdr < 0) { + err(1, "open(%s)", iname); + /* Not reached */ + } + if (fstat(fdr, &sb) != 0) { + err(1, "fstat(%s)", iname); + /* Not reached */ + } + if (S_ISCHR(sb.st_mode)) { + off_t ms; + + if (ioctl(fdr, DIOCGMEDIASIZE, &ms) < 0) { + err(1, "ioctl(DIOCGMEDIASIZE)"); + /* Not reached */ + } + sb.st_size = ms; + } else if (!S_ISREG(sb.st_mode)) { + fprintf(stderr, "%s: not a character device or regular file\n", + iname); + exit(1); + } + hdr.nblocks = sb.st_size / hdr.blksz; + if ((sb.st_size % hdr.blksz) != 0) { + if (verbose != 0) + fprintf(stderr, "file size is not multiple " + "of %d, padding data\n", hdr.blksz); + hdr.nblocks++; + } + toc = safe_malloc((hdr.nblocks + 1) * sizeof(*toc)); + + fdw = open(oname, O_WRONLY | O_TRUNC | O_CREAT, + S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH); + if (fdw < 0) { + err(1, "open(%s)", oname); + /* Not reached */ + } + cleanfile = oname; + + /* Prepare header that we will write later when we have index ready. */ + iov[0].iov_base = (char *)&hdr; + iov[0].iov_len = sizeof(hdr); + iov[1].iov_base = (char *)toc; + iov[1].iov_len = (hdr.nblocks + 1) * sizeof(*toc); + offset = iov[0].iov_len + iov[1].iov_len; + + /* Reserve space for header */ + lseek(fdw, offset, SEEK_SET); + + if (verbose != 0) + fprintf(stderr, "data size %ju bytes, number of clusters " + "%u, index length %zu bytes\n", sb.st_size, + hdr.nblocks, iov[1].iov_len); + + /* Init lzma encoder */ + if (lzma_lzma_preset(&opt_lzma, LZMA_PRESET_DEFAULT)) + errx(1, "Error loading LZMA preset"); + + filters[0].id = LZMA_FILTER_LZMA2; + filters[0].options = &opt_lzma; + filters[1].id = LZMA_VLI_UNKNOWN; + + for(i = 0; i == 0 || ibuf != NULL; i++) { + ibuf = readblock(fdr, ibuf, hdr.blksz); + if (ibuf != NULL) { + destlen = hdr.blksz*2; + + ret = lzma_stream_encoder(&strm, filters, + LZMA_CHECK_CRC32); + if (ret != LZMA_OK) { + if (ret == LZMA_MEMLIMIT_ERROR) + errx(1, "can't compress data: " + "LZMA_MEMLIMIT_ERROR"); + + errx(1, "can't compress data: " + "LZMA compressor ERROR"); + } + + strm.next_in = ibuf; + strm.avail_in = hdr.blksz; + strm.next_out = obuf; + strm.avail_out = hdr.blksz*2; + + ret = lzma_code(&strm, LZMA_FINISH); + + if (ret != LZMA_STREAM_END) { + /* Error */ + errx(1, "lzma_code FINISH failed, code=%d, " + "pos(in=%d, out=%d)", + ret, + (hdr.blksz - strm.avail_in), + (hdr.blksz*2 - strm.avail_out)); + } + + destlen -= strm.avail_out; + + lzma_end(&strm); + + if (verbose != 0) + fprintf(stderr, "cluster #%d, in %u bytes, " + "out %u bytes\n", i, hdr.blksz, destlen); + } else { + destlen = USED_BLOCKSIZE - (offset % USED_BLOCKSIZE); + memset(obuf, 0, destlen); + if (verbose != 0) + fprintf(stderr, "padding data with %u bytes so" + " that file size is multiple of %d\n", + destlen, + USED_BLOCKSIZE); + } + if (write(fdw, obuf, destlen) < 0) { + err(1, "write(%s)", oname); + /* Not reached */ + } + toc[i] = htobe64(offset); + offset += destlen; + } + close(fdr); + + if (verbose != 0) + fprintf(stderr, "compressed data to %ju bytes, saved %lld " + "bytes, %.2f%% decrease.\n", offset, + (long long)(sb.st_size - offset), + 100.0 * (long long)(sb.st_size - offset) / (float)sb.st_size); + + /* 
Convert to big endian */
+	hdr.blksz = htonl(hdr.blksz);
+	hdr.nblocks = htonl(hdr.nblocks);
+	/* Write headers into pre-allocated space */
+	lseek(fdw, 0, SEEK_SET);
+	if (writev(fdw, iov, 2) < 0) {
+		err(1, "writev(%s)", oname);
+		/* Not reached */
+	}
+	cleanfile = NULL;
+	close(fdw);
+
+	exit(0);
+}
+
+static char *
+readblock(int fd, char *ibuf, u_int32_t clstsize)
+{
+	int numread;
+
+	bzero(ibuf, clstsize);
+	numread = read(fd, ibuf, clstsize);
+	if (numread < 0) {
+		err(1, "read() failed");
+		/* Not reached */
+	}
+	if (numread == 0) {
+		return NULL;
+	}
+	return ibuf;
+}
+
+static void
+usage(void)
+{
+
+	fprintf(stderr, "usage: mkulzma [-v] [-o outfile] [-s cluster_size] "
+	    "infile\n");
+	exit(1);
+}
+
+static void *
+safe_malloc(size_t size)
+{
+	void *retval;
+
+	retval = malloc(size);
+	if (retval == NULL) {
+		err(1, "can't allocate memory");
+		/* Not reached */
+	}
+	return retval;
+}
+
+static void
+cleanup(void)
+{
+
+	if (cleanfile != NULL)
+		unlink(cleanfile);
+}

Property changes on: usr.bin/mkulzma/mkulzma.c
___________________________________________________________________
Added: svn:mime-type
   + text/plain
Added: svn:keywords
   + FreeBSD=%H
Added: svn:eol-style
   + native

Index: usr.bin/mkulzma/mkulzma.8
===================================================================
--- usr.bin/mkulzma/mkulzma.8	(revision 0)
+++ usr.bin/mkulzma/mkulzma.8	(revision 0)
@@ -0,0 +1,108 @@
+.\" ----------------------------------------------------------------------------
+.\" Rewritten from mkuzip.8 by Aleksandr Rybalko <ray@ddteam.net>
+.\" ----------------------------------------------------------------------------
+.\" "THE BEER-WARE LICENSE" (Revision 42):
+.\" <sobomax@FreeBSD.org> wrote this file. As long as you retain this notice you
+.\" can do whatever you want with this stuff. If we meet some day, and you think
+.\" this stuff is worth it, you can buy me a beer in return. Maxim Sobolev
+.\" ----------------------------------------------------------------------------
+.\"
+.\" $FreeBSD$
+.\"
+.Dd March 17, 2006
+.Dt MKULZMA 8
+.Os
+.Sh NAME
+.Nm mkulzma
+.Nd compress disk image for use with
+.Xr geom_ulzma 4
+class
+.Sh SYNOPSIS
+.Nm
+.Op Fl v
+.Op Fl o Ar outfile
+.Op Fl s Ar cluster_size
+.Ar infile
+.Sh DESCRIPTION
+The
+.Nm
+utility compresses a disk image file so that the
+.Xr geom_ulzma 4
+class will be able to decompress the resulting image at run-time.
+This allows for a significant reduction of the size of the disk image
+at the expense of some CPU time required to decompress the data each
+time it is read.
+The
+.Nm
+utility
+works in two phases:
+.Bl -enum
+.It
+An
+.Ar infile
+image is split into clusters; each cluster is compressed using the
+LZMA2 algorithm from liblzma.
+.It
+The resulting set of compressed clusters along with headers that allow
+locating each individual cluster is written to the output file.
+.El
+.Pp
+The options are:
+.Bl -tag -width indent
+.It Fl o Ar outfile
+Name of the output file
+.Ar outfile .
+The default is to use the input name with the suffix
+.Pa .ulzma .
+.It Fl s Ar cluster_size
+Split the image into clusters of
+.Ar cluster_size
+bytes, 16384 bytes by default.
+The
+.Ar cluster_size
+should be a multiple of 512 bytes.
+.It Fl v
+Display verbose messages.
+.El
+.Sh NOTES
+The compression ratio largely depends on the cluster size used.
+.\" The following two sentences are unclear: how can gzip(1) be
+.\" used in a comparable fashion, and wouldn't a gzip-compressed
+.\" image suffer from larger cluster sizes as well?
+For large cluster sizes (16K and higher), typical compression ratios
+are only 1-2% less than those achieved with
+.Xr gzip 1 .
+However, it should be kept in mind that larger cluster
+sizes lead to higher overhead in the
+.Xr geom_ulzma 4
+class, as the class has to decompress the whole cluster even if
+only a few bytes from that cluster have to be read.
+.Pp
+The
+.Nm
+utility
+inserts a short shell script at the beginning of the generated image,
+which makes it possible to
+.Dq run
+the image just like any other shell script.
+The script tries to load the
+.Xr geom_ulzma 4
+class if it is not loaded, configure the image as an
+.Xr md 4
+disk device using
+.Xr mdconfig 8 ,
+and automatically mount it using
+.Xr mount_cd9660 8
+on the mount point provided as the first argument to the script.
+.Sh EXIT STATUS
+.Ex -std
+.Sh SEE ALSO
+.Xr gzip 1 ,
+.Xr zlib 3 ,
+.Xr geom 4 ,
+.Xr geom_ulzma 4 ,
+.Xr md 4 ,
+.Xr mdconfig 8 ,
+.Xr mount_cd9660 8
+.Sh AUTHORS
+.An Maxim Sobolev Aq sobomax@FreeBSD.org

Property changes on: usr.bin/mkulzma/mkulzma.8
___________________________________________________________________
Added: svn:mime-type
   + text/plain
Added: svn:keywords
   + FreeBSD=%H
Added: svn:eol-style
   + native

Index: usr.bin/mkulzma/Makefile
===================================================================
--- usr.bin/mkulzma/Makefile	(revision 0)
+++ usr.bin/mkulzma/Makefile	(revision 0)
@@ -0,0 +1,8 @@
+# $FreeBSD$
+
+PROG=	mkulzma
+MAN=	mkulzma.8
+DPADD=	${LIBLZMA}
+LDADD=	-llzma
+
+.include <bsd.prog.mk>

Property changes on: usr.bin/mkulzma/Makefile
___________________________________________________________________
Added: svn:mime-type
   + text/plain
Added: svn:keywords
   + FreeBSD=%H
Added: svn:eol-style
   + native

--Multipart=_Sun__23_Jan_2011_21_30_23_+0200_nBQzY5lRGbnEEKiY
Content-Type: text/x-diff;
 name="add_contrib_xz-embedded.patch"
Content-Disposition: attachment;
 filename="add_contrib_xz-embedded.patch"
Content-Transfer-Encoding: 7bit

Index: sys/contrib/xz-embedded/xz_private.h
===================================================================
--- sys/contrib/xz-embedded/xz_private.h	(revision 0)
+++ sys/contrib/xz-embedded/xz_private.h	(revision 0)
@@ -0,0 +1,156 @@
+/*
+ * Private includes and definitions
+ *
+ * Author: Lasse Collin <lasse.collin@tukaani.org>
+ *
+ * This file has been put into the public domain.
+ * You can do whatever you want with this file.
+ */
+
+#ifndef XZ_PRIVATE_H
+#define XZ_PRIVATE_H
+
+#ifdef __KERNEL__
+#	include <linux/xz.h>
+#	include <linux/kernel.h>
+#	include <asm/unaligned.h>
+	/* XZ_PREBOOT may be defined only via decompress_unxz.c. */
+#	ifndef XZ_PREBOOT
+#		include <linux/slab.h>
+#		include <linux/vmalloc.h>
+#		include <linux/string.h>
+#		ifdef CONFIG_XZ_DEC_X86
+#			define XZ_DEC_X86
+#		endif
+#		ifdef CONFIG_XZ_DEC_POWERPC
+#			define XZ_DEC_POWERPC
+#		endif
+#		ifdef CONFIG_XZ_DEC_IA64
+#			define XZ_DEC_IA64
+#		endif
+#		ifdef CONFIG_XZ_DEC_ARM
+#			define XZ_DEC_ARM
+#		endif
+#		ifdef CONFIG_XZ_DEC_ARMTHUMB
+#			define XZ_DEC_ARMTHUMB
+#		endif
+#		ifdef CONFIG_XZ_DEC_SPARC
+#			define XZ_DEC_SPARC
+#		endif
+#		define memeq(a, b, size) (memcmp(a, b, size) == 0)
+#		define memzero(buf, size) memset(buf, 0, size)
+#	endif
+#	define get_le32(p) le32_to_cpup((const uint32_t *)(p))
+#else
+	/*
+	 * For userspace builds, use a separate header to define the required
+	 * macros and functions. This makes it easier to adapt the code into
+	 * different environments and avoids clutter in the Linux kernel tree.
+	 */
+#	include "xz_config.h"
+#endif
+
+/* If no specific decoding mode is requested, enable support for all modes.
*/ +#if !defined(XZ_DEC_SINGLE) && !defined(XZ_DEC_PREALLOC) \ + && !defined(XZ_DEC_DYNALLOC) +# define XZ_DEC_SINGLE +# define XZ_DEC_PREALLOC +# define XZ_DEC_DYNALLOC +#endif + +/* + * The DEC_IS_foo(mode) macros are used in "if" statements. If only some + * of the supported modes are enabled, these macros will evaluate to true or + * false at compile time and thus allow the compiler to omit unneeded code. + */ +#ifdef XZ_DEC_SINGLE +# define DEC_IS_SINGLE(mode) ((mode) == XZ_SINGLE) +#else +# define DEC_IS_SINGLE(mode) (false) +#endif + +#ifdef XZ_DEC_PREALLOC +# define DEC_IS_PREALLOC(mode) ((mode) == XZ_PREALLOC) +#else +# define DEC_IS_PREALLOC(mode) (false) +#endif + +#ifdef XZ_DEC_DYNALLOC +# define DEC_IS_DYNALLOC(mode) ((mode) == XZ_DYNALLOC) +#else +# define DEC_IS_DYNALLOC(mode) (false) +#endif + +#if !defined(XZ_DEC_SINGLE) +# define DEC_IS_MULTI(mode) (true) +#elif defined(XZ_DEC_PREALLOC) || defined(XZ_DEC_DYNALLOC) +# define DEC_IS_MULTI(mode) ((mode) != XZ_SINGLE) +#else +# define DEC_IS_MULTI(mode) (false) +#endif + +/* + * If any of the BCJ filter decoders are wanted, define XZ_DEC_BCJ. + * XZ_DEC_BCJ is used to enable generic support for BCJ decoders. + */ +#ifndef XZ_DEC_BCJ +# if defined(XZ_DEC_X86) || defined(XZ_DEC_POWERPC) \ + || defined(XZ_DEC_IA64) || defined(XZ_DEC_ARM) \ + || defined(XZ_DEC_ARM) || defined(XZ_DEC_ARMTHUMB) \ + || defined(XZ_DEC_SPARC) +# define XZ_DEC_BCJ +# endif +#endif + +/* + * Allocate memory for LZMA2 decoder. xz_dec_lzma2_reset() must be used + * before calling xz_dec_lzma2_run(). + */ +XZ_EXTERN struct xz_dec_lzma2 *xz_dec_lzma2_create(enum xz_mode mode, + uint32_t dict_max); + +/* + * Decode the LZMA2 properties (one byte) and reset the decoder. Return + * XZ_OK on success, XZ_MEMLIMIT_ERROR if the preallocated dictionary is not + * big enough, and XZ_OPTIONS_ERROR if props indicates something that this + * decoder doesn't support. + */ +XZ_EXTERN enum xz_ret xz_dec_lzma2_reset(struct xz_dec_lzma2 *s, + uint8_t props); + +/* Decode raw LZMA2 stream from b->in to b->out. */ +XZ_EXTERN enum xz_ret xz_dec_lzma2_run(struct xz_dec_lzma2 *s, + struct xz_buf *b); + +/* Free the memory allocated for the LZMA2 decoder. */ +XZ_EXTERN void xz_dec_lzma2_end(struct xz_dec_lzma2 *s); + +#ifdef XZ_DEC_BCJ +/* + * Allocate memory for BCJ decoders. xz_dec_bcj_reset() must be used before + * calling xz_dec_bcj_run(). + */ +XZ_EXTERN struct xz_dec_bcj *xz_dec_bcj_create(bool single_call); + +/* + * Decode the Filter ID of a BCJ filter. This implementation doesn't + * support custom start offsets, so no decoding of Filter Properties + * is needed. Returns XZ_OK if the given Filter ID is supported. + * Otherwise XZ_OPTIONS_ERROR is returned. + */ +XZ_EXTERN enum xz_ret xz_dec_bcj_reset(struct xz_dec_bcj *s, uint8_t id); + +/* + * Decode raw BCJ + LZMA2 stream. This must be used only if there actually is + * a BCJ filter in the chain. If the chain has only LZMA2, xz_dec_lzma2_run() + * must be called directly. + */ +XZ_EXTERN enum xz_ret xz_dec_bcj_run(struct xz_dec_bcj *s, + struct xz_dec_lzma2 *lzma2, + struct xz_buf *b); + +/* Free the memory allocated for the BCJ filters. 
*/ +#define xz_dec_bcj_end(s) kfree(s) +#endif + +#endif Property changes on: sys/contrib/xz-embedded/xz_private.h ___________________________________________________________________ Added: svn:mime-type + text/plain Added: svn:keywords + FreeBSD=%H Added: svn:eol-style + native Index: sys/contrib/xz-embedded/xz_dec_lzma2.c =================================================================== --- sys/contrib/xz-embedded/xz_dec_lzma2.c (revision 0) +++ sys/contrib/xz-embedded/xz_dec_lzma2.c (revision 0) @@ -0,0 +1,1171 @@ +/* + * LZMA2 decoder + * + * Authors: Lasse Collin + * Igor Pavlov + * + * This file has been put into the public domain. + * You can do whatever you want with this file. + */ + +#include "xz_private.h" +#include "xz_lzma2.h" + +/* + * Range decoder initialization eats the first five bytes of each LZMA chunk. + */ +#define RC_INIT_BYTES 5 + +/* + * Minimum number of usable input buffer to safely decode one LZMA symbol. + * The worst case is that we decode 22 bits using probabilities and 26 + * direct bits. This may decode at maximum of 20 bytes of input. However, + * lzma_main() does an extra normalization before returning, thus we + * need to put 21 here. + */ +#define LZMA_IN_REQUIRED 21 + +/* + * Dictionary (history buffer) + * + * These are always true: + * start <= pos <= full <= end + * pos <= limit <= end + * + * In multi-call mode, also these are true: + * end == size + * size <= size_max + * allocated <= size + * + * Most of these variables are size_t to support single-call mode, + * in which the dictionary variables address the actual output + * buffer directly. + */ +struct dictionary { + /* Beginning of the history buffer */ + uint8_t *buf; + + /* Old position in buf (before decoding more data) */ + size_t start; + + /* Position in buf */ + size_t pos; + + /* + * How full dictionary is. This is used to detect corrupt input that + * would read beyond the beginning of the uncompressed stream. + */ + size_t full; + + /* Write limit; we don't write to buf[limit] or later bytes. */ + size_t limit; + + /* + * End of the dictionary buffer. In multi-call mode, this is + * the same as the dictionary size. In single-call mode, this + * indicates the size of the output buffer. + */ + size_t end; + + /* + * Size of the dictionary as specified in Block Header. This is used + * together with "full" to detect corrupt input that would make us + * read beyond the beginning of the uncompressed stream. + */ + uint32_t size; + + /* + * Maximum allowed dictionary size in multi-call mode. + * This is ignored in single-call mode. + */ + uint32_t size_max; + + /* + * Amount of memory currently allocated for the dictionary. + * This is used only with XZ_DYNALLOC. (With XZ_PREALLOC, + * size_max is always the same as the allocated size.) + */ + uint32_t allocated; + + /* Operation mode */ + enum xz_mode mode; +}; + +/* Range decoder */ +struct rc_dec { + uint32_t range; + uint32_t code; + + /* + * Number of initializing bytes remaining to be read + * by rc_read_init(). + */ + uint32_t init_bytes_left; + + /* + * Buffer from which we read our input. It can be either + * temp.buf or the caller-provided input buffer. + */ + const uint8_t *in; + size_t in_pos; + size_t in_limit; +}; + +/* Probabilities for a length decoder. 
*/ +struct lzma_len_dec { + /* Probability of match length being at least 10 */ + uint16_t choice; + + /* Probability of match length being at least 18 */ + uint16_t choice2; + + /* Probabilities for match lengths 2-9 */ + uint16_t low[POS_STATES_MAX][LEN_LOW_SYMBOLS]; + + /* Probabilities for match lengths 10-17 */ + uint16_t mid[POS_STATES_MAX][LEN_MID_SYMBOLS]; + + /* Probabilities for match lengths 18-273 */ + uint16_t high[LEN_HIGH_SYMBOLS]; +}; + +struct lzma_dec { + /* Distances of latest four matches */ + uint32_t rep0; + uint32_t rep1; + uint32_t rep2; + uint32_t rep3; + + /* Types of the most recently seen LZMA symbols */ + enum lzma_state state; + + /* + * Length of a match. This is updated so that dict_repeat can + * be called again to finish repeating the whole match. + */ + uint32_t len; + + /* + * LZMA properties or related bit masks (number of literal + * context bits, a mask dervied from the number of literal + * position bits, and a mask dervied from the number + * position bits) + */ + uint32_t lc; + uint32_t literal_pos_mask; /* (1 << lp) - 1 */ + uint32_t pos_mask; /* (1 << pb) - 1 */ + + /* If 1, it's a match. Otherwise it's a single 8-bit literal. */ + uint16_t is_match[STATES][POS_STATES_MAX]; + + /* If 1, it's a repeated match. The distance is one of rep0 .. rep3. */ + uint16_t is_rep[STATES]; + + /* + * If 0, distance of a repeated match is rep0. + * Otherwise check is_rep1. + */ + uint16_t is_rep0[STATES]; + + /* + * If 0, distance of a repeated match is rep1. + * Otherwise check is_rep2. + */ + uint16_t is_rep1[STATES]; + + /* If 0, distance of a repeated match is rep2. Otherwise it is rep3. */ + uint16_t is_rep2[STATES]; + + /* + * If 1, the repeated match has length of one byte. Otherwise + * the length is decoded from rep_len_decoder. + */ + uint16_t is_rep0_long[STATES][POS_STATES_MAX]; + + /* + * Probability tree for the highest two bits of the match + * distance. There is a separate probability tree for match + * lengths of 2 (i.e. MATCH_LEN_MIN), 3, 4, and [5, 273]. + */ + uint16_t dist_slot[DIST_STATES][DIST_SLOTS]; + + /* + * Probility trees for additional bits for match distance + * when the distance is in the range [4, 127]. + */ + uint16_t dist_special[FULL_DISTANCES - DIST_MODEL_END]; + + /* + * Probability tree for the lowest four bits of a match + * distance that is equal to or greater than 128. + */ + uint16_t dist_align[ALIGN_SIZE]; + + /* Length of a normal match */ + struct lzma_len_dec match_len_dec; + + /* Length of a repeated match */ + struct lzma_len_dec rep_len_dec; + + /* Probabilities of literals */ + uint16_t literal[LITERAL_CODERS_MAX][LITERAL_CODER_SIZE]; +}; + +struct lzma2_dec { + /* Position in xz_dec_lzma2_run(). */ + enum lzma2_seq { + SEQ_CONTROL, + SEQ_UNCOMPRESSED_1, + SEQ_UNCOMPRESSED_2, + SEQ_COMPRESSED_0, + SEQ_COMPRESSED_1, + SEQ_PROPERTIES, + SEQ_LZMA_PREPARE, + SEQ_LZMA_RUN, + SEQ_COPY + } sequence; + + /* Next position after decoding the compressed size of the chunk. */ + enum lzma2_seq next_sequence; + + /* Uncompressed size of LZMA chunk (2 MiB at maximum) */ + uint32_t uncompressed; + + /* + * Compressed size of LZMA chunk or compressed/uncompressed + * size of uncompressed chunk (64 KiB at maximum) + */ + uint32_t compressed; + + /* + * True if dictionary reset is needed. This is false before + * the first chunk (LZMA or uncompressed). + */ + bool need_dict_reset; + + /* + * True if new LZMA properties are needed. This is false + * before the first LZMA chunk. 
+ */ + bool need_props; +}; + +struct xz_dec_lzma2 { + /* + * The order below is important on x86 to reduce code size and + * it shouldn't hurt on other platforms. Everything up to and + * including lzma.pos_mask are in the first 128 bytes on x86-32, + * which allows using smaller instructions to access those + * variables. On x86-64, fewer variables fit into the first 128 + * bytes, but this is still the best order without sacrificing + * the readability by splitting the structures. + */ + struct rc_dec rc; + struct dictionary dict; + struct lzma2_dec lzma2; + struct lzma_dec lzma; + + /* + * Temporary buffer which holds small number of input bytes between + * decoder calls. See lzma2_lzma() for details. + */ + struct { + uint32_t size; + uint8_t buf[3 * LZMA_IN_REQUIRED]; + } temp; +}; + +/************** + * Dictionary * + **************/ + +/* + * Reset the dictionary state. When in single-call mode, set up the beginning + * of the dictionary to point to the actual output buffer. + */ +static void dict_reset(struct dictionary *dict, struct xz_buf *b) +{ + if (DEC_IS_SINGLE(dict->mode)) { + dict->buf = b->out + b->out_pos; + dict->end = b->out_size - b->out_pos; + } + + dict->start = 0; + dict->pos = 0; + dict->limit = 0; + dict->full = 0; +} + +/* Set dictionary write limit */ +static void dict_limit(struct dictionary *dict, size_t out_max) +{ + if (dict->end - dict->pos <= out_max) + dict->limit = dict->end; + else + dict->limit = dict->pos + out_max; +} + +/* Return true if at least one byte can be written into the dictionary. */ +static inline bool dict_has_space(const struct dictionary *dict) +{ + return dict->pos < dict->limit; +} + +/* + * Get a byte from the dictionary at the given distance. The distance is + * assumed to valid, or as a special case, zero when the dictionary is + * still empty. This special case is needed for single-call decoding to + * avoid writing a '\0' to the end of the destination buffer. + */ +static inline uint32_t dict_get(const struct dictionary *dict, uint32_t dist) +{ + size_t offset = dict->pos - dist - 1; + + if (dist >= dict->pos) + offset += dict->end; + + return dict->full > 0 ? dict->buf[offset] : 0; +} + +/* + * Put one byte into the dictionary. It is assumed that there is space for it. + */ +static inline void dict_put(struct dictionary *dict, uint8_t byte) +{ + dict->buf[dict->pos++] = byte; + + if (dict->full < dict->pos) + dict->full = dict->pos; +} + +/* + * Repeat given number of bytes from the given distance. If the distance is + * invalid, false is returned. On success, true is returned and *len is + * updated to indicate how many bytes were left to be repeated. + */ +static bool dict_repeat(struct dictionary *dict, uint32_t *len, uint32_t dist) +{ + size_t back; + uint32_t left; + + if (dist >= dict->full || dist >= dict->size) + return false; + + left = min_t(size_t, dict->limit - dict->pos, *len); + *len -= left; + + back = dict->pos - dist - 1; + if (dist >= dict->pos) + back += dict->end; + + do { + dict->buf[dict->pos++] = dict->buf[back++]; + if (back == dict->end) + back = 0; + } while (--left > 0); + + if (dict->full < dict->pos) + dict->full = dict->pos; + + return true; +} + +/* Copy uncompressed data as is from input to dictionary and output buffers. 
*/ +static void dict_uncompressed(struct dictionary *dict, struct xz_buf *b, + uint32_t *left) +{ + size_t copy_size; + + while (*left > 0 && b->in_pos < b->in_size + && b->out_pos < b->out_size) { + copy_size = min(b->in_size - b->in_pos, + b->out_size - b->out_pos); + if (copy_size > dict->end - dict->pos) + copy_size = dict->end - dict->pos; + if (copy_size > *left) + copy_size = *left; + + *left -= copy_size; + + memcpy(dict->buf + dict->pos, b->in + b->in_pos, copy_size); + dict->pos += copy_size; + + if (dict->full < dict->pos) + dict->full = dict->pos; + + if (DEC_IS_MULTI(dict->mode)) { + if (dict->pos == dict->end) + dict->pos = 0; + + memcpy(b->out + b->out_pos, b->in + b->in_pos, + copy_size); + } + + dict->start = dict->pos; + + b->out_pos += copy_size; + b->in_pos += copy_size; + } +} + +/* + * Flush pending data from dictionary to b->out. It is assumed that there is + * enough space in b->out. This is guaranteed because caller uses dict_limit() + * before decoding data into the dictionary. + */ +static uint32_t dict_flush(struct dictionary *dict, struct xz_buf *b) +{ + size_t copy_size = dict->pos - dict->start; + + if (DEC_IS_MULTI(dict->mode)) { + if (dict->pos == dict->end) + dict->pos = 0; + + memcpy(b->out + b->out_pos, dict->buf + dict->start, + copy_size); + } + + dict->start = dict->pos; + b->out_pos += copy_size; + return copy_size; +} + +/***************** + * Range decoder * + *****************/ + +/* Reset the range decoder. */ +static void rc_reset(struct rc_dec *rc) +{ + rc->range = (uint32_t)-1; + rc->code = 0; + rc->init_bytes_left = RC_INIT_BYTES; +} + +/* + * Read the first five initial bytes into rc->code if they haven't been + * read already. (Yes, the first byte gets completely ignored.) + */ +static bool rc_read_init(struct rc_dec *rc, struct xz_buf *b) +{ + while (rc->init_bytes_left > 0) { + if (b->in_pos == b->in_size) + return false; + + rc->code = (rc->code << 8) + b->in[b->in_pos++]; + --rc->init_bytes_left; + } + + return true; +} + +/* Return true if there may not be enough input for the next decoding loop. */ +static inline bool rc_limit_exceeded(const struct rc_dec *rc) +{ + return rc->in_pos > rc->in_limit; +} + +/* + * Return true if it is possible (from point of view of range decoder) that + * we have reached the end of the LZMA chunk. + */ +static inline bool rc_is_finished(const struct rc_dec *rc) +{ + return rc->code == 0; +} + +/* Read the next input byte if needed. */ +static __always_inline void rc_normalize(struct rc_dec *rc) +{ + if (rc->range < RC_TOP_VALUE) { + rc->range <<= RC_SHIFT_BITS; + rc->code = (rc->code << RC_SHIFT_BITS) + rc->in[rc->in_pos++]; + } +} + +/* + * Decode one bit. In some versions, this function has been splitted in three + * functions so that the compiler is supposed to be able to more easily avoid + * an extra branch. In this particular version of the LZMA decoder, this + * doesn't seem to be a good idea (tested with GCC 3.3.6, 3.4.6, and 4.3.3 + * on x86). Using a non-splitted version results in nicer looking code too. + * + * NOTE: This must return an int. Do not make it return a bool or the speed + * of the code generated by GCC 3.x decreases 10-15 %. (GCC 4.3 doesn't care, + * and it generates 10-20 % faster code than GCC 3.x from this file anyway.) 
+ */ +static __always_inline int rc_bit(struct rc_dec *rc, uint16_t *prob) +{ + uint32_t bound; + int bit; + + rc_normalize(rc); + bound = (rc->range >> RC_BIT_MODEL_TOTAL_BITS) * *prob; + if (rc->code < bound) { + rc->range = bound; + *prob += (RC_BIT_MODEL_TOTAL - *prob) >> RC_MOVE_BITS; + bit = 0; + } else { + rc->range -= bound; + rc->code -= bound; + *prob -= *prob >> RC_MOVE_BITS; + bit = 1; + } + + return bit; +} + +/* Decode a bittree starting from the most significant bit. */ +static __always_inline uint32_t rc_bittree(struct rc_dec *rc, + uint16_t *probs, uint32_t limit) +{ + uint32_t symbol = 1; + + do { + if (rc_bit(rc, &probs[symbol])) + symbol = (symbol << 1) + 1; + else + symbol <<= 1; + } while (symbol < limit); + + return symbol; +} + +/* Decode a bittree starting from the least significant bit. */ +static __always_inline void rc_bittree_reverse(struct rc_dec *rc, + uint16_t *probs, + uint32_t *dest, uint32_t limit) +{ + uint32_t symbol = 1; + uint32_t i = 0; + + do { + if (rc_bit(rc, &probs[symbol])) { + symbol = (symbol << 1) + 1; + *dest += 1 << i; + } else { + symbol <<= 1; + } + } while (++i < limit); +} + +/* Decode direct bits (fixed fifty-fifty probability) */ +static inline void rc_direct(struct rc_dec *rc, uint32_t *dest, uint32_t limit) +{ + uint32_t mask; + + do { + rc_normalize(rc); + rc->range >>= 1; + rc->code -= rc->range; + mask = (uint32_t)0 - (rc->code >> 31); + rc->code += rc->range & mask; + *dest = (*dest << 1) + (mask + 1); + } while (--limit > 0); +} + +/******** + * LZMA * + ********/ + +/* Get pointer to literal coder probability array. */ +static uint16_t *lzma_literal_probs(struct xz_dec_lzma2 *s) +{ + uint32_t prev_byte = dict_get(&s->dict, 0); + uint32_t low = prev_byte >> (8 - s->lzma.lc); + uint32_t high = (s->dict.pos & s->lzma.literal_pos_mask) << s->lzma.lc; + return s->lzma.literal[low + high]; +} + +/* Decode a literal (one 8-bit byte) */ +static void lzma_literal(struct xz_dec_lzma2 *s) +{ + uint16_t *probs; + uint32_t symbol; + uint32_t match_byte; + uint32_t match_bit; + uint32_t offset; + uint32_t i; + + probs = lzma_literal_probs(s); + + if (lzma_state_is_literal(s->lzma.state)) { + symbol = rc_bittree(&s->rc, probs, 0x100); + } else { + symbol = 1; + match_byte = dict_get(&s->dict, s->lzma.rep0) << 1; + offset = 0x100; + + do { + match_bit = match_byte & offset; + match_byte <<= 1; + i = offset + match_bit + symbol; + + if (rc_bit(&s->rc, &probs[i])) { + symbol = (symbol << 1) + 1; + offset &= match_bit; + } else { + symbol <<= 1; + offset &= ~match_bit; + } + } while (symbol < 0x100); + } + + dict_put(&s->dict, (uint8_t)symbol); + lzma_state_literal(&s->lzma.state); +} + +/* Decode the length of the match into s->lzma.len. */ +static void lzma_len(struct xz_dec_lzma2 *s, struct lzma_len_dec *l, + uint32_t pos_state) +{ + uint16_t *probs; + uint32_t limit; + + if (!rc_bit(&s->rc, &l->choice)) { + probs = l->low[pos_state]; + limit = LEN_LOW_SYMBOLS; + s->lzma.len = MATCH_LEN_MIN; + } else { + if (!rc_bit(&s->rc, &l->choice2)) { + probs = l->mid[pos_state]; + limit = LEN_MID_SYMBOLS; + s->lzma.len = MATCH_LEN_MIN + LEN_LOW_SYMBOLS; + } else { + probs = l->high; + limit = LEN_HIGH_SYMBOLS; + s->lzma.len = MATCH_LEN_MIN + LEN_LOW_SYMBOLS + + LEN_MID_SYMBOLS; + } + } + + s->lzma.len += rc_bittree(&s->rc, probs, limit) - limit; +} + +/* Decode a match. The distance will be stored in s->lzma.rep0. 
*/ +static void lzma_match(struct xz_dec_lzma2 *s, uint32_t pos_state) +{ + uint16_t *probs; + uint32_t dist_slot; + uint32_t limit; + + lzma_state_match(&s->lzma.state); + + s->lzma.rep3 = s->lzma.rep2; + s->lzma.rep2 = s->lzma.rep1; + s->lzma.rep1 = s->lzma.rep0; + + lzma_len(s, &s->lzma.match_len_dec, pos_state); + + probs = s->lzma.dist_slot[lzma_get_dist_state(s->lzma.len)]; + dist_slot = rc_bittree(&s->rc, probs, DIST_SLOTS) - DIST_SLOTS; + + if (dist_slot < DIST_MODEL_START) { + s->lzma.rep0 = dist_slot; + } else { + limit = (dist_slot >> 1) - 1; + s->lzma.rep0 = 2 + (dist_slot & 1); + + if (dist_slot < DIST_MODEL_END) { + s->lzma.rep0 <<= limit; + probs = s->lzma.dist_special + s->lzma.rep0 + - dist_slot - 1; + rc_bittree_reverse(&s->rc, probs, + &s->lzma.rep0, limit); + } else { + rc_direct(&s->rc, &s->lzma.rep0, limit - ALIGN_BITS); + s->lzma.rep0 <<= ALIGN_BITS; + rc_bittree_reverse(&s->rc, s->lzma.dist_align, + &s->lzma.rep0, ALIGN_BITS); + } + } +} + +/* + * Decode a repeated match. The distance is one of the four most recently + * seen matches. The distance will be stored in s->lzma.rep0. + */ +static void lzma_rep_match(struct xz_dec_lzma2 *s, uint32_t pos_state) +{ + uint32_t tmp; + + if (!rc_bit(&s->rc, &s->lzma.is_rep0[s->lzma.state])) { + if (!rc_bit(&s->rc, &s->lzma.is_rep0_long[ + s->lzma.state][pos_state])) { + lzma_state_short_rep(&s->lzma.state); + s->lzma.len = 1; + return; + } + } else { + if (!rc_bit(&s->rc, &s->lzma.is_rep1[s->lzma.state])) { + tmp = s->lzma.rep1; + } else { + if (!rc_bit(&s->rc, &s->lzma.is_rep2[s->lzma.state])) { + tmp = s->lzma.rep2; + } else { + tmp = s->lzma.rep3; + s->lzma.rep3 = s->lzma.rep2; + } + + s->lzma.rep2 = s->lzma.rep1; + } + + s->lzma.rep1 = s->lzma.rep0; + s->lzma.rep0 = tmp; + } + + lzma_state_long_rep(&s->lzma.state); + lzma_len(s, &s->lzma.rep_len_dec, pos_state); +} + +/* LZMA decoder core */ +static bool lzma_main(struct xz_dec_lzma2 *s) +{ + uint32_t pos_state; + + /* + * If the dictionary was reached during the previous call, try to + * finish the possibly pending repeat in the dictionary. + */ + if (dict_has_space(&s->dict) && s->lzma.len > 0) + dict_repeat(&s->dict, &s->lzma.len, s->lzma.rep0); + + /* + * Decode more LZMA symbols. One iteration may consume up to + * LZMA_IN_REQUIRED - 1 bytes. + */ + while (dict_has_space(&s->dict) && !rc_limit_exceeded(&s->rc)) { + pos_state = s->dict.pos & s->lzma.pos_mask; + + if (!rc_bit(&s->rc, &s->lzma.is_match[ + s->lzma.state][pos_state])) { + lzma_literal(s); + } else { + if (rc_bit(&s->rc, &s->lzma.is_rep[s->lzma.state])) + lzma_rep_match(s, pos_state); + else + lzma_match(s, pos_state); + + if (!dict_repeat(&s->dict, &s->lzma.len, s->lzma.rep0)) + return false; + } + } + + /* + * Having the range decoder always normalized when we are outside + * this function makes it easier to correctly handle end of the chunk. + */ + rc_normalize(&s->rc); + + return true; +} + +/* + * Reset the LZMA decoder and range decoder state. Dictionary is nore reset + * here, because LZMA state may be reset without resetting the dictionary. + */ +static void lzma_reset(struct xz_dec_lzma2 *s) +{ + uint16_t *probs; + size_t i; + + s->lzma.state = STATE_LIT_LIT; + s->lzma.rep0 = 0; + s->lzma.rep1 = 0; + s->lzma.rep2 = 0; + s->lzma.rep3 = 0; + + /* + * All probabilities are initialized to the same value. This hack + * makes the code smaller by avoiding a separate loop for each + * probability array. 
+	 *
+	 * This could be optimized so that only the part of the literal
+	 * probabilities that is actually required would be initialized;
+	 * in the common case we would write 12 KiB less.
+	 */
+	probs = s->lzma.is_match[0];
+	for (i = 0; i < PROBS_TOTAL; ++i)
+		probs[i] = RC_BIT_MODEL_TOTAL / 2;
+
+	rc_reset(&s->rc);
+}
+
+/*
+ * Decode and validate LZMA properties (lc/lp/pb) and calculate the bit masks
+ * from the decoded lp and pb values. On success, the LZMA decoder state is
+ * reset and true is returned.
+ */
+static bool lzma_props(struct xz_dec_lzma2 *s, uint8_t props)
+{
+	if (props > (4 * 5 + 4) * 9 + 8)
+		return false;
+
+	s->lzma.pos_mask = 0;
+	while (props >= 9 * 5) {
+		props -= 9 * 5;
+		++s->lzma.pos_mask;
+	}
+
+	s->lzma.pos_mask = (1 << s->lzma.pos_mask) - 1;
+
+	s->lzma.literal_pos_mask = 0;
+	while (props >= 9) {
+		props -= 9;
+		++s->lzma.literal_pos_mask;
+	}
+
+	s->lzma.lc = props;
+
+	if (s->lzma.lc + s->lzma.literal_pos_mask > 4)
+		return false;
+
+	s->lzma.literal_pos_mask = (1 << s->lzma.literal_pos_mask) - 1;
+
+	lzma_reset(s);
+
+	return true;
+}
+
+/*********
+ * LZMA2 *
+ *********/
+
+/*
+ * The LZMA decoder assumes that if the input limit (s->rc.in_limit) hasn't
+ * been exceeded, it is safe to read up to LZMA_IN_REQUIRED bytes. This
+ * wrapper function takes care of making the LZMA decoder's assumption safe.
+ *
+ * As long as there is plenty of input left to be decoded in the current LZMA
+ * chunk, we decode directly from the caller-supplied input buffer until
+ * there are LZMA_IN_REQUIRED bytes left. Those remaining bytes are copied
+ * into s->temp.buf, which (hopefully) gets filled on the next call to this
+ * function. We decode a few bytes from the temporary buffer so that we can
+ * continue decoding from the caller-supplied input buffer again.
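+ *
+ * In effect, s->temp.buf is a small sliding window over the chunk
+ * boundary: at most 2 * LZMA_IN_REQUIRED - s->temp.size fresh bytes
+ * are staged there, so one decoder iteration always has enough
+ * look-ahead without over-reading the caller's buffer.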
+ */ +static bool lzma2_lzma(struct xz_dec_lzma2 *s, struct xz_buf *b) +{ + size_t in_avail; + uint32_t tmp; + + in_avail = b->in_size - b->in_pos; + if (s->temp.size > 0 || s->lzma2.compressed == 0) { + tmp = 2 * LZMA_IN_REQUIRED - s->temp.size; + if (tmp > s->lzma2.compressed - s->temp.size) + tmp = s->lzma2.compressed - s->temp.size; + if (tmp > in_avail) + tmp = in_avail; + + memcpy(s->temp.buf + s->temp.size, b->in + b->in_pos, tmp); + + if (s->temp.size + tmp == s->lzma2.compressed) { + memzero(s->temp.buf + s->temp.size + tmp, + sizeof(s->temp.buf) + - s->temp.size - tmp); + s->rc.in_limit = s->temp.size + tmp; + } else if (s->temp.size + tmp < LZMA_IN_REQUIRED) { + s->temp.size += tmp; + b->in_pos += tmp; + return true; + } else { + s->rc.in_limit = s->temp.size + tmp - LZMA_IN_REQUIRED; + } + + s->rc.in = s->temp.buf; + s->rc.in_pos = 0; + + if (!lzma_main(s) || s->rc.in_pos > s->temp.size + tmp) + return false; + + s->lzma2.compressed -= s->rc.in_pos; + + if (s->rc.in_pos < s->temp.size) { + s->temp.size -= s->rc.in_pos; + memmove(s->temp.buf, s->temp.buf + s->rc.in_pos, + s->temp.size); + return true; + } + + b->in_pos += s->rc.in_pos - s->temp.size; + s->temp.size = 0; + } + + in_avail = b->in_size - b->in_pos; + if (in_avail >= LZMA_IN_REQUIRED) { + s->rc.in = b->in; + s->rc.in_pos = b->in_pos; + + if (in_avail >= s->lzma2.compressed + LZMA_IN_REQUIRED) + s->rc.in_limit = b->in_pos + s->lzma2.compressed; + else + s->rc.in_limit = b->in_size - LZMA_IN_REQUIRED; + + if (!lzma_main(s)) + return false; + + in_avail = s->rc.in_pos - b->in_pos; + if (in_avail > s->lzma2.compressed) + return false; + + s->lzma2.compressed -= in_avail; + b->in_pos = s->rc.in_pos; + } + + in_avail = b->in_size - b->in_pos; + if (in_avail < LZMA_IN_REQUIRED) { + if (in_avail > s->lzma2.compressed) + in_avail = s->lzma2.compressed; + + memcpy(s->temp.buf, b->in + b->in_pos, in_avail); + s->temp.size = in_avail; + b->in_pos += in_avail; + } + + return true; +} + +/* + * Take care of the LZMA2 control layer, and forward the job of actual LZMA + * decoding or copying of uncompressed chunks to other functions. + */ +XZ_EXTERN enum xz_ret xz_dec_lzma2_run(struct xz_dec_lzma2 *s, + struct xz_buf *b) +{ + uint32_t tmp; + + while (b->in_pos < b->in_size || s->lzma2.sequence == SEQ_LZMA_RUN) { + switch (s->lzma2.sequence) { + case SEQ_CONTROL: + /* + * LZMA2 control byte + * + * Exact values: + * 0x00 End marker + * 0x01 Dictionary reset followed by + * an uncompressed chunk + * 0x02 Uncompressed chunk (no dictionary reset) + * + * Highest three bits (s->control & 0xE0): + * 0xE0 Dictionary reset, new properties and state + * reset, followed by LZMA compressed chunk + * 0xC0 New properties and state reset, followed + * by LZMA compressed chunk (no dictionary + * reset) + * 0xA0 State reset using old properties, + * followed by LZMA compressed chunk (no + * dictionary reset) + * 0x80 LZMA chunk (no dictionary or state reset) + * + * For LZMA compressed chunks, the lowest five bits + * (s->control & 1F) are the highest bits of the + * uncompressed size (bits 16-20). + * + * A new LZMA2 stream must begin with a dictionary + * reset. The first LZMA chunk must set new + * properties and reset the LZMA state. + * + * Values that don't match anything described above + * are invalid and we return XZ_DATA_ERROR. 
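+			 *
+			 * For example, a control byte of 0xE4 requests
+			 * a dictionary reset plus new properties and a
+			 * state reset, with 0x04 as bits 16-20 of the
+			 * uncompressed size; the next two header bytes
+			 * supply bits 8-15 and 0-7 (plus one), so one
+			 * chunk can describe up to 2 MiB of output.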
+ */ + tmp = b->in[b->in_pos++]; + + if (tmp >= 0xE0 || tmp == 0x01) { + s->lzma2.need_props = true; + s->lzma2.need_dict_reset = false; + dict_reset(&s->dict, b); + } else if (s->lzma2.need_dict_reset) { + return XZ_DATA_ERROR; + } + + if (tmp >= 0x80) { + s->lzma2.uncompressed = (tmp & 0x1F) << 16; + s->lzma2.sequence = SEQ_UNCOMPRESSED_1; + + if (tmp >= 0xC0) { + /* + * When there are new properties, + * state reset is done at + * SEQ_PROPERTIES. + */ + s->lzma2.need_props = false; + s->lzma2.next_sequence + = SEQ_PROPERTIES; + + } else if (s->lzma2.need_props) { + return XZ_DATA_ERROR; + + } else { + s->lzma2.next_sequence + = SEQ_LZMA_PREPARE; + if (tmp >= 0xA0) + lzma_reset(s); + } + } else { + if (tmp == 0x00) + return XZ_STREAM_END; + + if (tmp > 0x02) + return XZ_DATA_ERROR; + + s->lzma2.sequence = SEQ_COMPRESSED_0; + s->lzma2.next_sequence = SEQ_COPY; + } + + break; + + case SEQ_UNCOMPRESSED_1: + s->lzma2.uncompressed + += (uint32_t)b->in[b->in_pos++] << 8; + s->lzma2.sequence = SEQ_UNCOMPRESSED_2; + break; + + case SEQ_UNCOMPRESSED_2: + s->lzma2.uncompressed + += (uint32_t)b->in[b->in_pos++] + 1; + s->lzma2.sequence = SEQ_COMPRESSED_0; + break; + + case SEQ_COMPRESSED_0: + s->lzma2.compressed + = (uint32_t)b->in[b->in_pos++] << 8; + s->lzma2.sequence = SEQ_COMPRESSED_1; + break; + + case SEQ_COMPRESSED_1: + s->lzma2.compressed + += (uint32_t)b->in[b->in_pos++] + 1; + s->lzma2.sequence = s->lzma2.next_sequence; + break; + + case SEQ_PROPERTIES: + if (!lzma_props(s, b->in[b->in_pos++])) + return XZ_DATA_ERROR; + + s->lzma2.sequence = SEQ_LZMA_PREPARE; + + case SEQ_LZMA_PREPARE: + if (s->lzma2.compressed < RC_INIT_BYTES) + return XZ_DATA_ERROR; + + if (!rc_read_init(&s->rc, b)) + return XZ_OK; + + s->lzma2.compressed -= RC_INIT_BYTES; + s->lzma2.sequence = SEQ_LZMA_RUN; + + case SEQ_LZMA_RUN: + /* + * Set dictionary limit to indicate how much we want + * to be encoded at maximum. Decode new data into the + * dictionary. Flush the new data from dictionary to + * b->out. Check if we finished decoding this chunk. + * In case the dictionary got full but we didn't fill + * the output buffer yet, we may run this loop + * multiple times without changing s->lzma2.sequence. 
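+			 *
+			 * The limit passed to dict_limit() below is the
+			 * smaller of the free space in b->out and the
+			 * bytes still expected from this chunk, so
+			 * neither the output buffer nor the declared
+			 * uncompressed size can be overrun.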
+ */ + dict_limit(&s->dict, min_t(size_t, + b->out_size - b->out_pos, + s->lzma2.uncompressed)); + if (!lzma2_lzma(s, b)) + return XZ_DATA_ERROR; + + s->lzma2.uncompressed -= dict_flush(&s->dict, b); + + if (s->lzma2.uncompressed == 0) { + if (s->lzma2.compressed > 0 || s->lzma.len > 0 + || !rc_is_finished(&s->rc)) + return XZ_DATA_ERROR; + + rc_reset(&s->rc); + s->lzma2.sequence = SEQ_CONTROL; + + } else if (b->out_pos == b->out_size + || (b->in_pos == b->in_size + && s->temp.size + < s->lzma2.compressed)) { + return XZ_OK; + } + + break; + + case SEQ_COPY: + dict_uncompressed(&s->dict, b, &s->lzma2.compressed); + if (s->lzma2.compressed > 0) + return XZ_OK; + + s->lzma2.sequence = SEQ_CONTROL; + break; + } + } + + return XZ_OK; +} + +XZ_EXTERN struct xz_dec_lzma2 *xz_dec_lzma2_create(enum xz_mode mode, + uint32_t dict_max) +{ + struct xz_dec_lzma2 *s = kmalloc(sizeof(*s), GFP_KERNEL); + if (s == NULL) + return NULL; + + s->dict.mode = mode; + s->dict.size_max = dict_max; + + if (DEC_IS_PREALLOC(mode)) { + s->dict.buf = vmalloc(dict_max); + if (s->dict.buf == NULL) { + kfree(s); + return NULL; + } + } else if (DEC_IS_DYNALLOC(mode)) { + s->dict.buf = NULL; + s->dict.allocated = 0; + } + + return s; +} + +XZ_EXTERN enum xz_ret xz_dec_lzma2_reset(struct xz_dec_lzma2 *s, uint8_t props) +{ + /* This limits dictionary size to 3 GiB to keep parsing simpler. */ + if (props > 39) + return XZ_OPTIONS_ERROR; + + s->dict.size = 2 + (props & 1); + s->dict.size <<= (props >> 1) + 11; + + if (DEC_IS_MULTI(s->dict.mode)) { + if (s->dict.size > s->dict.size_max) + return XZ_MEMLIMIT_ERROR; + + s->dict.end = s->dict.size; + + if (DEC_IS_DYNALLOC(s->dict.mode)) { + if (s->dict.allocated < s->dict.size) { + vfree(s->dict.buf); + s->dict.buf = vmalloc(s->dict.size); + if (s->dict.buf == NULL) { + s->dict.allocated = 0; + return XZ_MEM_ERROR; + } + } + } + } + + s->lzma.len = 0; + + s->lzma2.sequence = SEQ_CONTROL; + s->lzma2.need_dict_reset = true; + + s->temp.size = 0; + + return XZ_OK; +} + +XZ_EXTERN void xz_dec_lzma2_end(struct xz_dec_lzma2 *s) +{ + if (DEC_IS_MULTI(s->dict.mode)) + vfree(s->dict.buf); + + kfree(s); +} Property changes on: sys/contrib/xz-embedded/xz_dec_lzma2.c ___________________________________________________________________ Added: svn:mime-type + text/plain Added: svn:keywords + FreeBSD=%H Added: svn:eol-style + native Index: sys/contrib/xz-embedded/xz_stream.h =================================================================== --- sys/contrib/xz-embedded/xz_stream.h (revision 0) +++ sys/contrib/xz-embedded/xz_stream.h (revision 0) @@ -0,0 +1,62 @@ +/* + * Definitions for handling the .xz file format + * + * Author: Lasse Collin + * + * This file has been put into the public domain. + * You can do whatever you want with this file. + */ + +#ifndef XZ_STREAM_H +#define XZ_STREAM_H + +#if defined(__KERNEL__) && !XZ_INTERNAL_CRC32 +# include +# undef crc32 +# define xz_crc32(buf, size, crc) \ + (~crc32_le(~(uint32_t)(crc), buf, size)) +#endif + +/* + * See the .xz file format specification at + * http://tukaani.org/xz/xz-file-format.txt + * to understand the container format. + */ + +#define STREAM_HEADER_SIZE 12 + +#define HEADER_MAGIC "\3757zXZ" +#define HEADER_MAGIC_SIZE 6 + +#define FOOTER_MAGIC "YZ" +#define FOOTER_MAGIC_SIZE 2 + +/* + * Variable-length integer can hold a 63-bit unsigned integer or a special + * value indicating that the value is unknown. + * + * Experimental: vli_type can be defined to uint32_t to save a few bytes + * in code size (no effect on speed). 
Doing so limits the uncompressed and
+ * compressed size of the file to less than 256 MiB and may also weaken
+ * error detection slightly.
+ */
+typedef uint64_t vli_type;
+
+#define VLI_MAX		((vli_type)-1 / 2)
+#define VLI_UNKNOWN	((vli_type)-1)
+
+/* Maximum encoded size of a VLI */
+#define VLI_BYTES_MAX	(sizeof(vli_type) * 8 / 7)
+
+/* Integrity Check types */
+enum xz_check {
+	XZ_CHECK_NONE = 0,
+	XZ_CHECK_CRC32 = 1,
+	XZ_CHECK_CRC64 = 4,
+	XZ_CHECK_SHA256 = 10
+};
+
+/* Maximum possible Check ID */
+#define XZ_CHECK_MAX	15
+
+#endif

Property changes on: sys/contrib/xz-embedded/xz_stream.h
___________________________________________________________________
Added: svn:mime-type
   + text/plain
Added: svn:keywords
   + FreeBSD=%H
Added: svn:eol-style
   + native

Index: sys/contrib/xz-embedded/xz.h
===================================================================
--- sys/contrib/xz-embedded/xz.h	(revision 0)
+++ sys/contrib/xz-embedded/xz.h	(revision 0)
@@ -0,0 +1,268 @@
+/*
+ * XZ decompressor
+ *
+ * Authors: Lasse Collin
+ *          Igor Pavlov
+ *
+ * This file has been put into the public domain.
+ * You can do whatever you want with this file.
+ */
+
+#ifndef XZ_H
+#define XZ_H
+
+# include
+# include
+
+#ifdef __cplusplus
extern "C" {
+#endif
+
+/* In Linux, this is used to make extern functions static when needed. */
+#ifndef XZ_EXTERN
+#	define XZ_EXTERN extern
+#endif
+
+/**
+ * enum xz_mode - Operation mode
+ *
+ * @XZ_SINGLE:		Single-call mode. This uses less RAM than
+ *			the multi-call modes, because the LZMA2
+ *			dictionary doesn't need to be allocated as
+ *			part of the decoder state. All required data
+ *			structures are allocated at initialization,
+ *			so xz_dec_run() cannot return XZ_MEM_ERROR.
+ * @XZ_PREALLOC:	Multi-call mode with preallocated LZMA2
+ *			dictionary buffer. All data structures are
+ *			allocated at initialization, so xz_dec_run()
+ *			cannot return XZ_MEM_ERROR.
+ * @XZ_DYNALLOC:	Multi-call mode. The LZMA2 dictionary is
+ *			allocated once the required size has been
+ *			parsed from the stream headers. If the
+ *			allocation fails, xz_dec_run() will return
+ *			XZ_MEM_ERROR.
+ *
+ * It is possible to enable support only for a subset of the above
+ * modes at compile time by defining XZ_DEC_SINGLE, XZ_DEC_PREALLOC,
+ * or XZ_DEC_DYNALLOC. The xz_dec kernel module is always compiled
+ * with support for all operation modes, but the preboot code may
+ * be built with fewer features to minimize code size.
+ */
+enum xz_mode {
+	XZ_SINGLE,
+	XZ_PREALLOC,
+	XZ_DYNALLOC
+};
+
+/**
+ * enum xz_ret - Return codes
+ * @XZ_OK:			Everything is OK so far. More input or more
+ *				output space is required to continue. This
+ *				return code is possible only in multi-call mode
+ *				(XZ_PREALLOC or XZ_DYNALLOC).
+ * @XZ_STREAM_END:		Operation finished successfully.
+ * @XZ_UNSUPPORTED_CHECK:	Integrity check type is not supported. Decoding
+ *				is still possible in multi-call mode by simply
+ *				calling xz_dec_run() again.
+ *				Note that this return value is used only if
+ *				XZ_DEC_ANY_CHECK was defined at build time,
+ *				which is not used in the kernel. Unsupported
+ *				check types return XZ_OPTIONS_ERROR if
+ *				XZ_DEC_ANY_CHECK was not defined at build time.
+ * @XZ_MEM_ERROR:		Allocating memory failed. This return code is
+ *				possible only if the decoder was initialized
+ *				with XZ_DYNALLOC. The amount of memory that the
+ *				decoder tried to allocate was no more than the
+ *				dict_max argument given to xz_dec_init().
+ * @XZ_MEMLIMIT_ERROR: A bigger LZMA2 dictionary would be needed than + * allowed by the dict_max argument given to + * xz_dec_init(). This return value is possible + * only in multi-call mode (XZ_PREALLOC or + * XZ_DYNALLOC); the single-call mode (XZ_SINGLE) + * ignores the dict_max argument. + * @XZ_FORMAT_ERROR: File format was not recognized (wrong magic + * bytes). + * @XZ_OPTIONS_ERROR: This implementation doesn't support the requested + * compression options. In the decoder this means + * that the header CRC32 matches, but the header + * itself specifies something that we don't support. + * @XZ_DATA_ERROR: Compressed data is corrupt. + * @XZ_BUF_ERROR: Cannot make any progress. Details are slightly + * different between multi-call and single-call + * mode; more information below. + * + * In multi-call mode, XZ_BUF_ERROR is returned when two consecutive calls + * to XZ code cannot consume any input and cannot produce any new output. + * This happens when there is no new input available, or the output buffer + * is full while at least one output byte is still pending. Assuming your + * code is not buggy, you can get this error only when decoding a compressed + * stream that is truncated or otherwise corrupt. + * + * In single-call mode, XZ_BUF_ERROR is returned only when the output buffer + * is too small or the compressed input is corrupt in a way that makes the + * decoder produce more output than the caller expected. When it is + * (relatively) clear that the compressed input is truncated, XZ_DATA_ERROR + * is used instead of XZ_BUF_ERROR. + */ +enum xz_ret { + XZ_OK, + XZ_STREAM_END, + XZ_UNSUPPORTED_CHECK, + XZ_MEM_ERROR, + XZ_MEMLIMIT_ERROR, + XZ_FORMAT_ERROR, + XZ_OPTIONS_ERROR, + XZ_DATA_ERROR, + XZ_BUF_ERROR +}; + +/** + * struct xz_buf - Passing input and output buffers to XZ code + * @in: Beginning of the input buffer. This may be NULL if and only + * if in_pos is equal to in_size. + * @in_pos: Current position in the input buffer. This must not exceed + * in_size. + * @in_size: Size of the input buffer + * @out: Beginning of the output buffer. This may be NULL if and only + * if out_pos is equal to out_size. + * @out_pos: Current position in the output buffer. This must not exceed + * out_size. + * @out_size: Size of the output buffer + * + * Only the contents of the output buffer from out[out_pos] onward, and + * the variables in_pos and out_pos are modified by the XZ code. + */ +struct xz_buf { + const uint8_t *in; + size_t in_pos; + size_t in_size; + + uint8_t *out; + size_t out_pos; + size_t out_size; +}; + +/** + * struct xz_dec - Opaque type to hold the XZ decoder state + */ +struct xz_dec; + +/** + * xz_dec_init() - Allocate and initialize a XZ decoder state + * @mode: Operation mode + * @dict_max: Maximum size of the LZMA2 dictionary (history buffer) for + * multi-call decoding. This is ignored in single-call mode + * (mode == XZ_SINGLE). LZMA2 dictionary is always 2^n bytes + * or 2^n + 2^(n-1) bytes (the latter sizes are less common + * in practice), so other values for dict_max don't make sense. + * In the kernel, dictionary sizes of 64 KiB, 128 KiB, 256 KiB, + * 512 KiB, and 1 MiB are probably the only reasonable values, + * except for kernel and initramfs images where a bigger + * dictionary can be fine and useful. + * + * Single-call mode (XZ_SINGLE): xz_dec_run() decodes the whole stream at + * once. The caller must provide enough output space or the decoding will + * fail. 
The output space is used as the dictionary buffer, which is why + * there is no need to allocate the dictionary as part of the decoder's + * internal state. + * + * Because the output buffer is used as the workspace, streams encoded using + * a big dictionary are not a problem in single-call mode. It is enough that + * the output buffer is big enough to hold the actual uncompressed data; it + * can be smaller than the dictionary size stored in the stream headers. + * + * Multi-call mode with preallocated dictionary (XZ_PREALLOC): dict_max bytes + * of memory is preallocated for the LZMA2 dictionary. This way there is no + * risk that xz_dec_run() could run out of memory, since xz_dec_run() will + * never allocate any memory. Instead, if the preallocated dictionary is too + * small for decoding the given input stream, xz_dec_run() will return + * XZ_MEMLIMIT_ERROR. Thus, it is important to know what kind of data will be + * decoded to avoid allocating excessive amount of memory for the dictionary. + * + * Multi-call mode with dynamically allocated dictionary (XZ_DYNALLOC): + * dict_max specifies the maximum allowed dictionary size that xz_dec_run() + * may allocate once it has parsed the dictionary size from the stream + * headers. This way excessive allocations can be avoided while still + * limiting the maximum memory usage to a sane value to prevent running the + * system out of memory when decompressing streams from untrusted sources. + * + * On success, xz_dec_init() returns a pointer to struct xz_dec, which is + * ready to be used with xz_dec_run(). If memory allocation fails, + * xz_dec_init() returns NULL. + */ +XZ_EXTERN struct xz_dec *xz_dec_init(enum xz_mode mode, uint32_t dict_max); + +/** + * xz_dec_run() - Run the XZ decoder + * @s: Decoder state allocated using xz_dec_init() + * @b: Input and output buffers + * + * The possible return values depend on build options and operation mode. + * See enum xz_ret for details. + * + * Note that if an error occurs in single-call mode (return value is not + * XZ_STREAM_END), b->in_pos and b->out_pos are not modified and the + * contents of the output buffer from b->out[b->out_pos] onward are + * undefined. This is true even after XZ_BUF_ERROR, because with some filter + * chains, there may be a second pass over the output buffer, and this pass + * cannot be properly done if the output buffer is truncated. Thus, you + * cannot give the single-call decoder a too small buffer and then expect to + * get that amount valid data from the beginning of the stream. You must use + * the multi-call decoder if you don't want to uncompress the whole stream. + */ +XZ_EXTERN enum xz_ret xz_dec_run(struct xz_dec *s, struct xz_buf *b); + +/** + * xz_dec_reset() - Reset an already allocated decoder state + * @s: Decoder state allocated using xz_dec_init() + * + * This function can be used to reset the multi-call decoder state without + * freeing and reallocating memory with xz_dec_end() and xz_dec_init(). + * + * In single-call mode, xz_dec_reset() is always called in the beginning of + * xz_dec_run(). Thus, explicit call to xz_dec_reset() is useful only in + * multi-call mode. + */ +XZ_EXTERN void xz_dec_reset(struct xz_dec *s); + +/** + * xz_dec_end() - Free the memory allocated for the decoder state + * @s: Decoder state allocated using xz_dec_init(). If s is NULL, + * this function does nothing. 
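+ *
+ * A typical multi-call sequence is thus: xz_dec_init(), xz_dec_run()
+ * in a loop for as long as it returns XZ_OK (refilling b->in and
+ * draining b->out between calls), and finally xz_dec_end().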
+ */ +XZ_EXTERN void xz_dec_end(struct xz_dec *s); + +/* + * Standalone build (userspace build or in-kernel build for boot time use) + * needs a CRC32 implementation. For normal in-kernel use, kernel's own + * CRC32 module is used instead, and users of this module don't need to + * care about the functions below. + */ +#ifndef XZ_INTERNAL_CRC32 +# ifdef __KERNEL__ +# define XZ_INTERNAL_CRC32 0 +# else +# define XZ_INTERNAL_CRC32 1 +# endif +#endif + +#if XZ_INTERNAL_CRC32 +/* + * This must be called before any other xz_* function to initialize + * the CRC32 lookup table. + */ +XZ_EXTERN void xz_crc32_init(void); + +/* + * Update CRC32 value using the polynomial from IEEE-802.3. To start a new + * calculation, the third argument must be zero. To continue the calculation, + * the previously returned value is passed as the third argument. + */ +XZ_EXTERN uint32_t xz_crc32(const uint8_t *buf, size_t size, uint32_t crc); +#endif + +#ifdef __cplusplus +} +#endif + +#endif Property changes on: sys/contrib/xz-embedded/xz.h ___________________________________________________________________ Added: svn:mime-type + text/plain Added: svn:keywords + FreeBSD=%H Added: svn:eol-style + native Index: sys/contrib/xz-embedded/COPYING =================================================================== --- sys/contrib/xz-embedded/COPYING (revision 0) +++ sys/contrib/xz-embedded/COPYING (revision 0) @@ -0,0 +1,10 @@ + +Licensing of XZ Embedded +======================== + + All the files in this package have been written by Lasse Collin + and/or Igor Pavlov. All these files have been put into the + public domain. You can do whatever you want with these files. + + As usual, this software is provided "as is", without any warranty. + Index: sys/contrib/xz-embedded/xz_dec_stream.c =================================================================== --- sys/contrib/xz-embedded/xz_dec_stream.c (revision 0) +++ sys/contrib/xz-embedded/xz_dec_stream.c (revision 0) @@ -0,0 +1,821 @@ +/* + * .xz Stream decoder + * + * Author: Lasse Collin + * + * This file has been put into the public domain. + * You can do whatever you want with this file. + */ + +#include "xz_private.h" +#include "xz_stream.h" + +/* Hash used to validate the Index field */ +struct xz_dec_hash { + vli_type unpadded; + vli_type uncompressed; + uint32_t crc32; +}; + +struct xz_dec { + /* Position in dec_main() */ + enum { + SEQ_STREAM_HEADER, + SEQ_BLOCK_START, + SEQ_BLOCK_HEADER, + SEQ_BLOCK_UNCOMPRESS, + SEQ_BLOCK_PADDING, + SEQ_BLOCK_CHECK, + SEQ_INDEX, + SEQ_INDEX_PADDING, + SEQ_INDEX_CRC32, + SEQ_STREAM_FOOTER + } sequence; + + /* Position in variable-length integers and Check fields */ + uint32_t pos; + + /* Variable-length integer decoded by dec_vli() */ + vli_type vli; + + /* Saved in_pos and out_pos */ + size_t in_start; + size_t out_start; + + /* CRC32 value in Block or Index */ + uint32_t crc32; + + /* Type of the integrity check calculated from uncompressed data */ + enum xz_check check_type; + + /* Operation mode */ + enum xz_mode mode; + + /* + * True if the next call to xz_dec_run() is allowed to return + * XZ_BUF_ERROR. + */ + bool allow_buf_error; + + /* Information stored in Block Header */ + struct { + /* + * Value stored in the Compressed Size field, or + * VLI_UNKNOWN if Compressed Size is not present. + */ + vli_type compressed; + + /* + * Value stored in the Uncompressed Size field, or + * VLI_UNKNOWN if Uncompressed Size is not present. 
+ */ + vli_type uncompressed; + + /* Size of the Block Header field */ + uint32_t size; + } block_header; + + /* Information collected when decoding Blocks */ + struct { + /* Observed compressed size of the current Block */ + vli_type compressed; + + /* Observed uncompressed size of the current Block */ + vli_type uncompressed; + + /* Number of Blocks decoded so far */ + vli_type count; + + /* + * Hash calculated from the Block sizes. This is used to + * validate the Index field. + */ + struct xz_dec_hash hash; + } block; + + /* Variables needed when verifying the Index field */ + struct { + /* Position in dec_index() */ + enum { + SEQ_INDEX_COUNT, + SEQ_INDEX_UNPADDED, + SEQ_INDEX_UNCOMPRESSED + } sequence; + + /* Size of the Index in bytes */ + vli_type size; + + /* Number of Records (matches block.count in valid files) */ + vli_type count; + + /* + * Hash calculated from the Records (matches block.hash in + * valid files). + */ + struct xz_dec_hash hash; + } index; + + /* + * Temporary buffer needed to hold Stream Header, Block Header, + * and Stream Footer. The Block Header is the biggest (1 KiB) + * so we reserve space according to that. buf[] has to be aligned + * to a multiple of four bytes; the size_t variables before it + * should guarantee this. + */ + struct { + size_t pos; + size_t size; + uint8_t buf[1024]; + } temp; + + struct xz_dec_lzma2 *lzma2; + +#ifdef XZ_DEC_BCJ + struct xz_dec_bcj *bcj; + bool bcj_active; +#endif +}; + +#ifdef XZ_DEC_ANY_CHECK +/* Sizes of the Check field with different Check IDs */ +static const uint8_t check_sizes[16] = { + 0, + 4, 4, 4, + 8, 8, 8, + 16, 16, 16, + 32, 32, 32, + 64, 64, 64 +}; +#endif + +/* + * Fill s->temp by copying data starting from b->in[b->in_pos]. Caller + * must have set s->temp.pos to indicate how much data we are supposed + * to copy into s->temp.buf. Return true once s->temp.pos has reached + * s->temp.size. + */ +static bool fill_temp(struct xz_dec *s, struct xz_buf *b) +{ + size_t copy_size = min_t(size_t, + b->in_size - b->in_pos, s->temp.size - s->temp.pos); + + memcpy(s->temp.buf + s->temp.pos, b->in + b->in_pos, copy_size); + b->in_pos += copy_size; + s->temp.pos += copy_size; + + if (s->temp.pos == s->temp.size) { + s->temp.pos = 0; + return true; + } + + return false; +} + +/* Decode a variable-length integer (little-endian base-128 encoding) */ +static enum xz_ret dec_vli(struct xz_dec *s, const uint8_t *in, + size_t *in_pos, size_t in_size) +{ + uint8_t byte; + + if (s->pos == 0) + s->vli = 0; + + while (*in_pos < in_size) { + byte = in[*in_pos]; + ++*in_pos; + + s->vli |= (vli_type)(byte & 0x7F) << s->pos; + + if ((byte & 0x80) == 0) { + /* Don't allow non-minimal encodings. */ + if (byte == 0 && s->pos != 0) + return XZ_DATA_ERROR; + + s->pos = 0; + return XZ_STREAM_END; + } + + s->pos += 7; + if (s->pos == 7 * VLI_BYTES_MAX) + return XZ_DATA_ERROR; + } + + return XZ_OK; +} + +/* + * Decode the Compressed Data field from a Block. Update and validate + * the observed compressed and uncompressed sizes of the Block so that + * they don't exceed the values possibly stored in the Block Header + * (validation assumes that no integer overflow occurs, since vli_type + * is normally uint64_t). Update the CRC32 if presence of the CRC32 + * field was indicated in Stream Header. + * + * Once the decoding is finished, validate that the observed sizes match + * the sizes possibly stored in the Block Header. Update the hash and + * Block count, which are later used to validate the Index field. 
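+ *
+ * (The Index records every Block's Unpadded Size and Uncompressed
+ * Size, so keeping a running CRC32 hash over those values here lets
+ * dec_index() verify the whole Index without storing per-Block
+ * records.)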
+ */ +static enum xz_ret dec_block(struct xz_dec *s, struct xz_buf *b) +{ + enum xz_ret ret; + + s->in_start = b->in_pos; + s->out_start = b->out_pos; + +#ifdef XZ_DEC_BCJ + if (s->bcj_active) + ret = xz_dec_bcj_run(s->bcj, s->lzma2, b); + else +#endif + ret = xz_dec_lzma2_run(s->lzma2, b); + + s->block.compressed += b->in_pos - s->in_start; + s->block.uncompressed += b->out_pos - s->out_start; + + /* + * There is no need to separately check for VLI_UNKNOWN, since + * the observed sizes are always smaller than VLI_UNKNOWN. + */ + if (s->block.compressed > s->block_header.compressed + || s->block.uncompressed + > s->block_header.uncompressed) + return XZ_DATA_ERROR; + + if (s->check_type == XZ_CHECK_CRC32) + s->crc32 = xz_crc32(b->out + s->out_start, + b->out_pos - s->out_start, s->crc32); + + if (ret == XZ_STREAM_END) { + if (s->block_header.compressed != VLI_UNKNOWN + && s->block_header.compressed + != s->block.compressed) + return XZ_DATA_ERROR; + + if (s->block_header.uncompressed != VLI_UNKNOWN + && s->block_header.uncompressed + != s->block.uncompressed) + return XZ_DATA_ERROR; + + s->block.hash.unpadded += s->block_header.size + + s->block.compressed; + +#ifdef XZ_DEC_ANY_CHECK + s->block.hash.unpadded += check_sizes[s->check_type]; +#else + if (s->check_type == XZ_CHECK_CRC32) + s->block.hash.unpadded += 4; +#endif + + s->block.hash.uncompressed += s->block.uncompressed; + s->block.hash.crc32 = xz_crc32( + (const uint8_t *)&s->block.hash, + sizeof(s->block.hash), s->block.hash.crc32); + + ++s->block.count; + } + + return ret; +} + +/* Update the Index size and the CRC32 value. */ +static void index_update(struct xz_dec *s, const struct xz_buf *b) +{ + size_t in_used = b->in_pos - s->in_start; + s->index.size += in_used; + s->crc32 = xz_crc32(b->in + s->in_start, in_used, s->crc32); +} + +/* + * Decode the Number of Records, Unpadded Size, and Uncompressed Size + * fields from the Index field. That is, Index Padding and CRC32 are not + * decoded by this function. + * + * This can return XZ_OK (more input needed), XZ_STREAM_END (everything + * successfully decoded), or XZ_DATA_ERROR (input is corrupt). + */ +static enum xz_ret dec_index(struct xz_dec *s, struct xz_buf *b) +{ + enum xz_ret ret; + + do { + ret = dec_vli(s, b->in, &b->in_pos, b->in_size); + if (ret != XZ_STREAM_END) { + index_update(s, b); + return ret; + } + + switch (s->index.sequence) { + case SEQ_INDEX_COUNT: + s->index.count = s->vli; + + /* + * Validate that the Number of Records field + * indicates the same number of Records as + * there were Blocks in the Stream. + */ + if (s->index.count != s->block.count) + return XZ_DATA_ERROR; + + s->index.sequence = SEQ_INDEX_UNPADDED; + break; + + case SEQ_INDEX_UNPADDED: + s->index.hash.unpadded += s->vli; + s->index.sequence = SEQ_INDEX_UNCOMPRESSED; + break; + + case SEQ_INDEX_UNCOMPRESSED: + s->index.hash.uncompressed += s->vli; + s->index.hash.crc32 = xz_crc32( + (const uint8_t *)&s->index.hash, + sizeof(s->index.hash), + s->index.hash.crc32); + --s->index.count; + s->index.sequence = SEQ_INDEX_UNPADDED; + break; + } + } while (s->index.count > 0); + + return XZ_STREAM_END; +} + +/* + * Validate that the next four input bytes match the value of s->crc32. + * s->pos must be zero when starting to validate the first byte. 
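+ *
+ * The stored CRC32 is little endian, so the loop compares one byte
+ * per step, ((s->crc32 >> s->pos) & 0xFF), with s->pos advancing in
+ * steps of 8 until it reaches 32.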
+ */ +static enum xz_ret crc32_validate(struct xz_dec *s, struct xz_buf *b) +{ + do { + if (b->in_pos == b->in_size) + return XZ_OK; + + if (((s->crc32 >> s->pos) & 0xFF) != b->in[b->in_pos++]) + return XZ_DATA_ERROR; + + s->pos += 8; + + } while (s->pos < 32); + + s->crc32 = 0; + s->pos = 0; + + return XZ_STREAM_END; +} + +#ifdef XZ_DEC_ANY_CHECK +/* + * Skip over the Check field when the Check ID is not supported. + * Returns true once the whole Check field has been skipped over. + */ +static bool check_skip(struct xz_dec *s, struct xz_buf *b) +{ + while (s->pos < check_sizes[s->check_type]) { + if (b->in_pos == b->in_size) + return false; + + ++b->in_pos; + ++s->pos; + } + + s->pos = 0; + + return true; +} +#endif + +/* Decode the Stream Header field (the first 12 bytes of the .xz Stream). */ +static enum xz_ret dec_stream_header(struct xz_dec *s) +{ + if (!memeq(s->temp.buf, HEADER_MAGIC, HEADER_MAGIC_SIZE)) + return XZ_FORMAT_ERROR; + + if (xz_crc32(s->temp.buf + HEADER_MAGIC_SIZE, 2, 0) + != get_le32(s->temp.buf + HEADER_MAGIC_SIZE + 2)) + return XZ_DATA_ERROR; + + if (s->temp.buf[HEADER_MAGIC_SIZE] != 0) + return XZ_OPTIONS_ERROR; + + /* + * Of integrity checks, we support only none (Check ID = 0) and + * CRC32 (Check ID = 1). However, if XZ_DEC_ANY_CHECK is defined, + * we will accept other check types too, but then the check won't + * be verified and a warning (XZ_UNSUPPORTED_CHECK) will be given. + */ + s->check_type = s->temp.buf[HEADER_MAGIC_SIZE + 1]; + +#ifdef XZ_DEC_ANY_CHECK + if (s->check_type > XZ_CHECK_MAX) + return XZ_OPTIONS_ERROR; + + if (s->check_type > XZ_CHECK_CRC32) + return XZ_UNSUPPORTED_CHECK; +#else + if (s->check_type > XZ_CHECK_CRC32) + return XZ_OPTIONS_ERROR; +#endif + + return XZ_OK; +} + +/* Decode the Stream Footer field (the last 12 bytes of the .xz Stream) */ +static enum xz_ret dec_stream_footer(struct xz_dec *s) +{ + if (!memeq(s->temp.buf + 10, FOOTER_MAGIC, FOOTER_MAGIC_SIZE)) + return XZ_DATA_ERROR; + + if (xz_crc32(s->temp.buf + 4, 6, 0) != get_le32(s->temp.buf)) + return XZ_DATA_ERROR; + + /* + * Validate Backward Size. Note that we never added the size of the + * Index CRC32 field to s->index.size, thus we use s->index.size / 4 + * instead of s->index.size / 4 - 1. + */ + if ((s->index.size >> 2) != get_le32(s->temp.buf + 4)) + return XZ_DATA_ERROR; + + if (s->temp.buf[8] != 0 || s->temp.buf[9] != s->check_type) + return XZ_DATA_ERROR; + + /* + * Use XZ_STREAM_END instead of XZ_OK to be more convenient + * for the caller. + */ + return XZ_STREAM_END; +} + +/* Decode the Block Header and initialize the filter chain. */ +static enum xz_ret dec_block_header(struct xz_dec *s) +{ + enum xz_ret ret; + + /* + * Validate the CRC32. We know that the temp buffer is at least + * eight bytes so this is safe. + */ + s->temp.size -= 4; + if (xz_crc32(s->temp.buf, s->temp.size, 0) + != get_le32(s->temp.buf + s->temp.size)) + return XZ_DATA_ERROR; + + s->temp.pos = 2; + + /* + * Catch unsupported Block Flags. We support only one or two filters + * in the chain, so we catch that with the same test. 
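+	 * Bits 0-1 of the Block Flags hold "number of filters minus
+	 * one", so masking with 0x3E still permits the value 0x01 (two
+	 * filters) when BCJ support is compiled in, while 0x3F rejects
+	 * everything except a single filter.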
+ */ +#ifdef XZ_DEC_BCJ + if (s->temp.buf[1] & 0x3E) +#else + if (s->temp.buf[1] & 0x3F) +#endif + return XZ_OPTIONS_ERROR; + + /* Compressed Size */ + if (s->temp.buf[1] & 0x40) { + if (dec_vli(s, s->temp.buf, &s->temp.pos, s->temp.size) + != XZ_STREAM_END) + return XZ_DATA_ERROR; + + s->block_header.compressed = s->vli; + } else { + s->block_header.compressed = VLI_UNKNOWN; + } + + /* Uncompressed Size */ + if (s->temp.buf[1] & 0x80) { + if (dec_vli(s, s->temp.buf, &s->temp.pos, s->temp.size) + != XZ_STREAM_END) + return XZ_DATA_ERROR; + + s->block_header.uncompressed = s->vli; + } else { + s->block_header.uncompressed = VLI_UNKNOWN; + } + +#ifdef XZ_DEC_BCJ + /* If there are two filters, the first one must be a BCJ filter. */ + s->bcj_active = s->temp.buf[1] & 0x01; + if (s->bcj_active) { + if (s->temp.size - s->temp.pos < 2) + return XZ_OPTIONS_ERROR; + + ret = xz_dec_bcj_reset(s->bcj, s->temp.buf[s->temp.pos++]); + if (ret != XZ_OK) + return ret; + + /* + * We don't support custom start offset, + * so Size of Properties must be zero. + */ + if (s->temp.buf[s->temp.pos++] != 0x00) + return XZ_OPTIONS_ERROR; + } +#endif + + /* Valid Filter Flags always take at least two bytes. */ + if (s->temp.size - s->temp.pos < 2) + return XZ_DATA_ERROR; + + /* Filter ID = LZMA2 */ + if (s->temp.buf[s->temp.pos++] != 0x21) + return XZ_OPTIONS_ERROR; + + /* Size of Properties = 1-byte Filter Properties */ + if (s->temp.buf[s->temp.pos++] != 0x01) + return XZ_OPTIONS_ERROR; + + /* Filter Properties contains LZMA2 dictionary size. */ + if (s->temp.size - s->temp.pos < 1) + return XZ_DATA_ERROR; + + ret = xz_dec_lzma2_reset(s->lzma2, s->temp.buf[s->temp.pos++]); + if (ret != XZ_OK) + return ret; + + /* The rest must be Header Padding. */ + while (s->temp.pos < s->temp.size) + if (s->temp.buf[s->temp.pos++] != 0x00) + return XZ_OPTIONS_ERROR; + + s->temp.pos = 0; + s->block.compressed = 0; + s->block.uncompressed = 0; + + return XZ_OK; +} + +static enum xz_ret dec_main(struct xz_dec *s, struct xz_buf *b) +{ + enum xz_ret ret; + + /* + * Store the start position for the case when we are in the middle + * of the Index field. + */ + s->in_start = b->in_pos; + + while (true) { + switch (s->sequence) { + case SEQ_STREAM_HEADER: + /* + * Stream Header is copied to s->temp, and then + * decoded from there. This way if the caller + * gives us only little input at a time, we can + * still keep the Stream Header decoding code + * simple. Similar approach is used in many places + * in this file. + */ + if (!fill_temp(s, b)) + return XZ_OK; + + /* + * If dec_stream_header() returns + * XZ_UNSUPPORTED_CHECK, it is still possible + * to continue decoding if working in multi-call + * mode. Thus, update s->sequence before calling + * dec_stream_header(). + */ + s->sequence = SEQ_BLOCK_START; + + ret = dec_stream_header(s); + if (ret != XZ_OK) + return ret; + + case SEQ_BLOCK_START: + /* We need one byte of input to continue. */ + if (b->in_pos == b->in_size) + return XZ_OK; + + /* See if this is the beginning of the Index field. */ + if (b->in[b->in_pos] == 0) { + s->in_start = b->in_pos++; + s->sequence = SEQ_INDEX; + break; + } + + /* + * Calculate the size of the Block Header and + * prepare to decode it. 
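+			 *
+			 * The first header byte stores
+			 * (real size / 4) - 1, hence the
+			 * (value + 1) * 4 below; possible sizes are
+			 * 8-1024 bytes, which always fits s->temp.buf.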
+ */ + s->block_header.size + = ((uint32_t)b->in[b->in_pos] + 1) * 4; + + s->temp.size = s->block_header.size; + s->temp.pos = 0; + s->sequence = SEQ_BLOCK_HEADER; + + case SEQ_BLOCK_HEADER: + if (!fill_temp(s, b)) + return XZ_OK; + + ret = dec_block_header(s); + if (ret != XZ_OK) + return ret; + + s->sequence = SEQ_BLOCK_UNCOMPRESS; + + case SEQ_BLOCK_UNCOMPRESS: + ret = dec_block(s, b); + if (ret != XZ_STREAM_END) + return ret; + + s->sequence = SEQ_BLOCK_PADDING; + + case SEQ_BLOCK_PADDING: + /* + * Size of Compressed Data + Block Padding + * must be a multiple of four. We don't need + * s->block.compressed for anything else + * anymore, so we use it here to test the size + * of the Block Padding field. + */ + while (s->block.compressed & 3) { + if (b->in_pos == b->in_size) + return XZ_OK; + + if (b->in[b->in_pos++] != 0) + return XZ_DATA_ERROR; + + ++s->block.compressed; + } + + s->sequence = SEQ_BLOCK_CHECK; + + case SEQ_BLOCK_CHECK: + if (s->check_type == XZ_CHECK_CRC32) { + ret = crc32_validate(s, b); + if (ret != XZ_STREAM_END) + return ret; + } +#ifdef XZ_DEC_ANY_CHECK + else if (!check_skip(s, b)) { + return XZ_OK; + } +#endif + + s->sequence = SEQ_BLOCK_START; + break; + + case SEQ_INDEX: + ret = dec_index(s, b); + if (ret != XZ_STREAM_END) + return ret; + + s->sequence = SEQ_INDEX_PADDING; + + case SEQ_INDEX_PADDING: + while ((s->index.size + (b->in_pos - s->in_start)) + & 3) { + if (b->in_pos == b->in_size) { + index_update(s, b); + return XZ_OK; + } + + if (b->in[b->in_pos++] != 0) + return XZ_DATA_ERROR; + } + + /* Finish the CRC32 value and Index size. */ + index_update(s, b); + + /* Compare the hashes to validate the Index field. */ + if (!memeq(&s->block.hash, &s->index.hash, + sizeof(s->block.hash))) + return XZ_DATA_ERROR; + + s->sequence = SEQ_INDEX_CRC32; + + case SEQ_INDEX_CRC32: + ret = crc32_validate(s, b); + if (ret != XZ_STREAM_END) + return ret; + + s->temp.size = STREAM_HEADER_SIZE; + s->sequence = SEQ_STREAM_FOOTER; + + case SEQ_STREAM_FOOTER: + if (!fill_temp(s, b)) + return XZ_OK; + + return dec_stream_footer(s); + } + } + + /* Never reached */ +} + +/* + * xz_dec_run() is a wrapper for dec_main() to handle some special cases in + * multi-call and single-call decoding. + * + * In multi-call mode, we must return XZ_BUF_ERROR when it seems clear that we + * are not going to make any progress anymore. This is to prevent the caller + * from calling us infinitely when the input file is truncated or otherwise + * corrupt. Since zlib-style API allows that the caller fills the input buffer + * only when the decoder doesn't produce any new output, we have to be careful + * to avoid returning XZ_BUF_ERROR too easily: XZ_BUF_ERROR is returned only + * after the second consecutive call to xz_dec_run() that makes no progress. + * + * In single-call mode, if we couldn't decode everything and no error + * occurred, either the input is truncated or the output buffer is too small. + * Since we know that the last input byte never produces any output, we know + * that if all the input was consumed and decoding wasn't finished, the file + * must be corrupt. Otherwise the output buffer has to be too small or the + * file is corrupt in a way that decoding it produces too big output. + * + * If single-call decoding fails, we reset b->in_pos and b->out_pos back to + * their original values. 
This is because with some filter chains there won't
+ * be any valid uncompressed data in the output buffer unless the decoding
+ * actually succeeds (that's the price to pay for using the output buffer as
+ * the workspace).
+ */
+XZ_EXTERN enum xz_ret xz_dec_run(struct xz_dec *s, struct xz_buf *b)
+{
+	size_t in_start;
+	size_t out_start;
+	enum xz_ret ret;
+
+	if (DEC_IS_SINGLE(s->mode))
+		xz_dec_reset(s);
+
+	in_start = b->in_pos;
+	out_start = b->out_pos;
+	ret = dec_main(s, b);
+
+	if (DEC_IS_SINGLE(s->mode)) {
+		if (ret == XZ_OK)
+			ret = b->in_pos == b->in_size
+					? XZ_DATA_ERROR : XZ_BUF_ERROR;
+
+		if (ret != XZ_STREAM_END) {
+			b->in_pos = in_start;
+			b->out_pos = out_start;
+		}
+
+	} else if (ret == XZ_OK && in_start == b->in_pos
+			&& out_start == b->out_pos) {
+		if (s->allow_buf_error)
+			ret = XZ_BUF_ERROR;
+
+		s->allow_buf_error = true;
+	} else {
+		s->allow_buf_error = false;
+	}
+
+	return ret;
+}
+
+XZ_EXTERN struct xz_dec *xz_dec_init(enum xz_mode mode, uint32_t dict_max)
+{
+	struct xz_dec *s = kmalloc(sizeof(*s), GFP_KERNEL);
+	if (s == NULL)
+		return NULL;
+
+	s->mode = mode;
+
+#ifdef XZ_DEC_BCJ
+	s->bcj = xz_dec_bcj_create(DEC_IS_SINGLE(mode));
+	if (s->bcj == NULL)
+		goto error_bcj;
+#endif
+
+	s->lzma2 = xz_dec_lzma2_create(mode, dict_max);
+	if (s->lzma2 == NULL)
+		goto error_lzma2;
+
+	xz_dec_reset(s);
+	return s;

+error_lzma2:
+#ifdef XZ_DEC_BCJ
+	xz_dec_bcj_end(s->bcj);
+error_bcj:
+#endif
+	kfree(s);
+	return NULL;
+}
+
+XZ_EXTERN void xz_dec_reset(struct xz_dec *s)
+{
+	s->sequence = SEQ_STREAM_HEADER;
+	s->allow_buf_error = false;
+	s->pos = 0;
+	s->crc32 = 0;
+	memzero(&s->block, sizeof(s->block));
+	memzero(&s->index, sizeof(s->index));
+	s->temp.pos = 0;
+	s->temp.size = STREAM_HEADER_SIZE;
+}
+
+XZ_EXTERN void xz_dec_end(struct xz_dec *s)
+{
+	if (s != NULL) {
+		xz_dec_lzma2_end(s->lzma2);
+#ifdef XZ_DEC_BCJ
+		xz_dec_bcj_end(s->bcj);
+#endif
+		kfree(s);
+	}
+}

Property changes on: sys/contrib/xz-embedded/xz_dec_stream.c
___________________________________________________________________
Added: svn:mime-type
   + text/plain
Added: svn:keywords
   + FreeBSD=%H
Added: svn:eol-style
   + native

Index: sys/contrib/xz-embedded/xz_malloc.c
===================================================================
--- sys/contrib/xz-embedded/xz_malloc.c	(revision 0)
+++ sys/contrib/xz-embedded/xz_malloc.c	(revision 0)
@@ -0,0 +1,22 @@
+#include <sys/param.h>
+#include <sys/malloc.h>
+#include "xz_malloc.h"
+
+/* Wrapper for the XZ decompressor memory pool */
+
+MALLOC_DEFINE(XZ_DEC, "XZ_DEC", "XZ decompressor data");
+
+void *
+xz_malloc(unsigned long size)
+{
+	void *addr;
+
+	addr = malloc(size, XZ_DEC, M_NOWAIT);
+	return (addr);
+}
+
+void
+xz_free(void *addr)
+{
+	free(addr, XZ_DEC);
+}

Property changes on: sys/contrib/xz-embedded/xz_malloc.c
___________________________________________________________________
Added: svn:mime-type
   + text/plain
Added: svn:keywords
   + FreeBSD=%H
Added: svn:eol-style
   + native

Index: sys/contrib/xz-embedded/xz_dec_bcj.c
===================================================================
--- sys/contrib/xz-embedded/xz_dec_bcj.c	(revision 0)
+++ sys/contrib/xz-embedded/xz_dec_bcj.c	(revision 0)
@@ -0,0 +1,561 @@
+/*
+ * Branch/Call/Jump (BCJ) filter decoders
+ *
+ * Authors: Lasse Collin
+ *          Igor Pavlov
+ *
+ * This file has been put into the public domain.
+ * You can do whatever you want with this file.
+ */
+
+#include "xz_private.h"
+
+/*
+ * The rest of the file is inside this ifdef. It makes things a little more
+ * convenient when building without support for any BCJ filters.
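+ *
+ * The BCJ filters below rewrite relative branch and call targets in
+ * machine code into absolute values before LZMA2 sees the data (and
+ * back again on decompression), which makes executable code compress
+ * noticeably better.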
+ */ +#ifdef XZ_DEC_BCJ + +struct xz_dec_bcj { + /* Type of the BCJ filter being used */ + enum { + BCJ_X86 = 4, /* x86 or x86-64 */ + BCJ_POWERPC = 5, /* Big endian only */ + BCJ_IA64 = 6, /* Big or little endian */ + BCJ_ARM = 7, /* Little endian only */ + BCJ_ARMTHUMB = 8, /* Little endian only */ + BCJ_SPARC = 9 /* Big or little endian */ + } type; + + /* + * Return value of the next filter in the chain. We need to preserve + * this information across calls, because we must not call the next + * filter anymore once it has returned XZ_STREAM_END. + */ + enum xz_ret ret; + + /* True if we are operating in single-call mode. */ + bool single_call; + + /* + * Absolute position relative to the beginning of the uncompressed + * data (in a single .xz Block). We care only about the lowest 32 + * bits so this doesn't need to be uint64_t even with big files. + */ + uint32_t pos; + + /* x86 filter state */ + uint32_t x86_prev_mask; + + /* Temporary space to hold the variables from struct xz_buf */ + uint8_t *out; + size_t out_pos; + size_t out_size; + + struct { + /* Amount of already filtered data in the beginning of buf */ + size_t filtered; + + /* Total amount of data currently stored in buf */ + size_t size; + + /* + * Buffer to hold a mix of filtered and unfiltered data. This + * needs to be big enough to hold Alignment + 2 * Look-ahead: + * + * Type Alignment Look-ahead + * x86 1 4 + * PowerPC 4 0 + * IA-64 16 0 + * ARM 4 0 + * ARM-Thumb 2 2 + * SPARC 4 0 + */ + uint8_t buf[16]; + } temp; +}; + +#ifdef XZ_DEC_X86 +/* + * This is used to test the most significant byte of a memory address + * in an x86 instruction. + */ +static inline int bcj_x86_test_msbyte(uint8_t b) +{ + return b == 0x00 || b == 0xFF; +} + +static size_t bcj_x86(struct xz_dec_bcj *s, uint8_t *buf, size_t size) +{ + static const bool mask_to_allowed_status[8] + = { true, true, true, false, true, false, false, false }; + + static const uint8_t mask_to_bit_num[8] = { 0, 1, 2, 2, 3, 3, 3, 3 }; + + size_t i; + size_t prev_pos = (size_t)-1; + uint32_t prev_mask = s->x86_prev_mask; + uint32_t src; + uint32_t dest; + uint32_t j; + uint8_t b; + + if (size <= 4) + return 0; + + size -= 4; + for (i = 0; i < size; ++i) { + if ((buf[i] & 0xFE) != 0xE8) + continue; + + prev_pos = i - prev_pos; + if (prev_pos > 3) { + prev_mask = 0; + } else { + prev_mask = (prev_mask << (prev_pos - 1)) & 7; + if (prev_mask != 0) { + b = buf[i + 4 - mask_to_bit_num[prev_mask]]; + if (!mask_to_allowed_status[prev_mask] + || bcj_x86_test_msbyte(b)) { + prev_pos = i; + prev_mask = (prev_mask << 1) | 1; + continue; + } + } + } + + prev_pos = i; + + if (bcj_x86_test_msbyte(buf[i + 4])) { + src = get_unaligned_le32(buf + i + 1); + while (true) { + dest = src - (s->pos + (uint32_t)i + 5); + if (prev_mask == 0) + break; + + j = mask_to_bit_num[prev_mask] * 8; + b = (uint8_t)(dest >> (24 - j)); + if (!bcj_x86_test_msbyte(b)) + break; + + src = dest ^ (((uint32_t)1 << (32 - j)) - 1); + } + + dest &= 0x01FFFFFF; + dest |= (uint32_t)0 - (dest & 0x01000000); + put_unaligned_le32(dest, buf + i + 1); + i += 4; + } else { + prev_mask = (prev_mask << 1) | 1; + } + } + + prev_pos = i - prev_pos; + s->x86_prev_mask = prev_pos > 3 ? 
0 : prev_mask << (prev_pos - 1); + return i; +} +#endif + +#ifdef XZ_DEC_POWERPC +static size_t bcj_powerpc(struct xz_dec_bcj *s, uint8_t *buf, size_t size) +{ + size_t i; + uint32_t instr; + + for (i = 0; i + 4 <= size; i += 4) { + instr = get_unaligned_be32(buf + i); + if ((instr & 0xFC000003) == 0x48000001) { + instr &= 0x03FFFFFC; + instr -= s->pos + (uint32_t)i; + instr &= 0x03FFFFFC; + instr |= 0x48000001; + put_unaligned_be32(instr, buf + i); + } + } + + return i; +} +#endif + +#ifdef XZ_DEC_IA64 +static size_t bcj_ia64(struct xz_dec_bcj *s, uint8_t *buf, size_t size) +{ + static const uint8_t branch_table[32] = { + 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, + 4, 4, 6, 6, 0, 0, 7, 7, + 4, 4, 0, 0, 4, 4, 0, 0 + }; + + /* + * The local variables take a little bit stack space, but it's less + * than what LZMA2 decoder takes, so it doesn't make sense to reduce + * stack usage here without doing that for the LZMA2 decoder too. + */ + + /* Loop counters */ + size_t i; + size_t j; + + /* Instruction slot (0, 1, or 2) in the 128-bit instruction word */ + uint32_t slot; + + /* Bitwise offset of the instruction indicated by slot */ + uint32_t bit_pos; + + /* bit_pos split into byte and bit parts */ + uint32_t byte_pos; + uint32_t bit_res; + + /* Address part of an instruction */ + uint32_t addr; + + /* Mask used to detect which instructions to convert */ + uint32_t mask; + + /* 41-bit instruction stored somewhere in the lowest 48 bits */ + uint64_t instr; + + /* Instruction normalized with bit_res for easier manipulation */ + uint64_t norm; + + for (i = 0; i + 16 <= size; i += 16) { + mask = branch_table[buf[i] & 0x1F]; + for (slot = 0, bit_pos = 5; slot < 3; ++slot, bit_pos += 41) { + if (((mask >> slot) & 1) == 0) + continue; + + byte_pos = bit_pos >> 3; + bit_res = bit_pos & 7; + instr = 0; + for (j = 0; j < 6; ++j) + instr |= (uint64_t)(buf[i + j + byte_pos]) + << (8 * j); + + norm = instr >> bit_res; + + if (((norm >> 37) & 0x0F) == 0x05 + && ((norm >> 9) & 0x07) == 0) { + addr = (norm >> 13) & 0x0FFFFF; + addr |= ((uint32_t)(norm >> 36) & 1) << 20; + addr <<= 4; + addr -= s->pos + (uint32_t)i; + addr >>= 4; + + norm &= ~((uint64_t)0x8FFFFF << 13); + norm |= (uint64_t)(addr & 0x0FFFFF) << 13; + norm |= (uint64_t)(addr & 0x100000) + << (36 - 20); + + instr &= (1 << bit_res) - 1; + instr |= norm << bit_res; + + for (j = 0; j < 6; j++) + buf[i + j + byte_pos] + = (uint8_t)(instr >> (8 * j)); + } + } + } + + return i; +} +#endif + +#ifdef XZ_DEC_ARM +static size_t bcj_arm(struct xz_dec_bcj *s, uint8_t *buf, size_t size) +{ + size_t i; + uint32_t addr; + + for (i = 0; i + 4 <= size; i += 4) { + if (buf[i + 3] == 0xEB) { + addr = (uint32_t)buf[i] | ((uint32_t)buf[i + 1] << 8) + | ((uint32_t)buf[i + 2] << 16); + addr <<= 2; + addr -= s->pos + (uint32_t)i + 8; + addr >>= 2; + buf[i] = (uint8_t)addr; + buf[i + 1] = (uint8_t)(addr >> 8); + buf[i + 2] = (uint8_t)(addr >> 16); + } + } + + return i; +} +#endif + +#ifdef XZ_DEC_ARMTHUMB +static size_t bcj_armthumb(struct xz_dec_bcj *s, uint8_t *buf, size_t size) +{ + size_t i; + uint32_t addr; + + for (i = 0; i + 4 <= size; i += 2) { + if ((buf[i + 1] & 0xF8) == 0xF0 + && (buf[i + 3] & 0xF8) == 0xF8) { + addr = (((uint32_t)buf[i + 1] & 0x07) << 19) + | ((uint32_t)buf[i] << 11) + | (((uint32_t)buf[i + 3] & 0x07) << 8) + | (uint32_t)buf[i + 2]; + addr <<= 1; + addr -= s->pos + (uint32_t)i + 4; + addr >>= 1; + buf[i + 1] = (uint8_t)(0xF0 | ((addr >> 19) & 0x07)); + buf[i] = (uint8_t)(addr >> 11); + buf[i + 3] = (uint8_t)(0xF8 | ((addr >> 8) & 
0x07)); + buf[i + 2] = (uint8_t)addr; + i += 2; + } + } + + return i; +} +#endif + +#ifdef XZ_DEC_SPARC +static size_t bcj_sparc(struct xz_dec_bcj *s, uint8_t *buf, size_t size) +{ + size_t i; + uint32_t instr; + + for (i = 0; i + 4 <= size; i += 4) { + instr = get_unaligned_be32(buf + i); + if ((instr >> 22) == 0x100 || (instr >> 22) == 0x1FF) { + instr <<= 2; + instr -= s->pos + (uint32_t)i; + instr >>= 2; + instr = ((uint32_t)0x40000000 - (instr & 0x400000)) + | 0x40000000 | (instr & 0x3FFFFF); + put_unaligned_be32(instr, buf + i); + } + } + + return i; +} +#endif + +/* + * Apply the selected BCJ filter. Update *pos and s->pos to match the amount + * of data that got filtered. + * + * NOTE: This is implemented as a switch statement to avoid using function + * pointers, which could be problematic in the kernel boot code, which must + * avoid pointers to static data (at least on x86). + */ +static void bcj_apply(struct xz_dec_bcj *s, + uint8_t *buf, size_t *pos, size_t size) +{ + size_t filtered; + + buf += *pos; + size -= *pos; + + switch (s->type) { +#ifdef XZ_DEC_X86 + case BCJ_X86: + filtered = bcj_x86(s, buf, size); + break; +#endif +#ifdef XZ_DEC_POWERPC + case BCJ_POWERPC: + filtered = bcj_powerpc(s, buf, size); + break; +#endif +#ifdef XZ_DEC_IA64 + case BCJ_IA64: + filtered = bcj_ia64(s, buf, size); + break; +#endif +#ifdef XZ_DEC_ARM + case BCJ_ARM: + filtered = bcj_arm(s, buf, size); + break; +#endif +#ifdef XZ_DEC_ARMTHUMB + case BCJ_ARMTHUMB: + filtered = bcj_armthumb(s, buf, size); + break; +#endif +#ifdef XZ_DEC_SPARC + case BCJ_SPARC: + filtered = bcj_sparc(s, buf, size); + break; +#endif + default: + /* Never reached but silence compiler warnings. */ + filtered = 0; + break; + } + + *pos += filtered; + s->pos += filtered; +} + +/* + * Flush pending filtered data from temp to the output buffer. + * Move the remaining mixture of possibly filtered and unfiltered + * data to the beginning of temp. + */ +static void bcj_flush(struct xz_dec_bcj *s, struct xz_buf *b) +{ + size_t copy_size; + + copy_size = min_t(size_t, s->temp.filtered, b->out_size - b->out_pos); + memcpy(b->out + b->out_pos, s->temp.buf, copy_size); + b->out_pos += copy_size; + + s->temp.filtered -= copy_size; + s->temp.size -= copy_size; + memmove(s->temp.buf, s->temp.buf + copy_size, s->temp.size); +} + +/* + * The BCJ filter functions are primitive in sense that they process the + * data in chunks of 1-16 bytes. To hide this issue, this function does + * some buffering. + */ +XZ_EXTERN enum xz_ret xz_dec_bcj_run(struct xz_dec_bcj *s, + struct xz_dec_lzma2 *lzma2, + struct xz_buf *b) +{ + size_t out_start; + + /* + * Flush pending already filtered data to the output buffer. Return + * immediatelly if we couldn't flush everything, or if the next + * filter in the chain had already returned XZ_STREAM_END. + */ + if (s->temp.filtered > 0) { + bcj_flush(s, b); + if (s->temp.filtered > 0) + return XZ_OK; + + if (s->ret == XZ_STREAM_END) + return XZ_STREAM_END; + } + + /* + * If we have more output space than what is currently pending in + * temp, copy the unfiltered data from temp to the output buffer + * and try to fill the output buffer by decoding more data from the + * next filter in the chain. Apply the BCJ filter on the new data + * in the output buffer. If everything cannot be filtered, copy it + * to temp and rewind the output buffer position accordingly. 
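+	 * The rewind is what keeps not-yet-filterable bytes out of the
+	 * caller's view: they wait in temp until enough look-ahead
+	 * arrives for the filter to translate them.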
+	 */
+	if (s->temp.size < b->out_size - b->out_pos) {
+		out_start = b->out_pos;
+		memcpy(b->out + b->out_pos, s->temp.buf, s->temp.size);
+		b->out_pos += s->temp.size;
+
+		s->ret = xz_dec_lzma2_run(lzma2, b);
+		if (s->ret != XZ_STREAM_END
+				&& (s->ret != XZ_OK || s->single_call))
+			return s->ret;
+
+		bcj_apply(s, b->out, &out_start, b->out_pos);
+
+		/*
+		 * As an exception, if the next filter returned XZ_STREAM_END,
+		 * we can do that too, since the last few bytes that remain
+		 * unfiltered are meant to remain unfiltered.
+		 */
+		if (s->ret == XZ_STREAM_END)
+			return XZ_STREAM_END;
+
+		s->temp.size = b->out_pos - out_start;
+		b->out_pos -= s->temp.size;
+		memcpy(s->temp.buf, b->out + b->out_pos, s->temp.size);
+	}
+
+	/*
+	 * If we have unfiltered data in temp, try to fill by decoding more
+	 * data from the next filter. Apply the BCJ filter on temp. Then we
+	 * hopefully can fill the actual output buffer by copying filtered
+	 * data from temp. A mix of filtered and unfiltered data may be left
+	 * in temp; it will be taken care of on the next call to this
+	 * function.
+	 */
+	if (s->temp.size > 0) {
+		/* Make b->out{,_pos,_size} temporarily point to s->temp. */
+		s->out = b->out;
+		s->out_pos = b->out_pos;
+		s->out_size = b->out_size;
+		b->out = s->temp.buf;
+		b->out_pos = s->temp.size;
+		b->out_size = sizeof(s->temp.buf);
+
+		s->ret = xz_dec_lzma2_run(lzma2, b);
+
+		s->temp.size = b->out_pos;
+		b->out = s->out;
+		b->out_pos = s->out_pos;
+		b->out_size = s->out_size;
+
+		if (s->ret != XZ_OK && s->ret != XZ_STREAM_END)
+			return s->ret;
+
+		bcj_apply(s, s->temp.buf, &s->temp.filtered, s->temp.size);
+
+		/*
+		 * If the next filter returned XZ_STREAM_END, we mark that
+		 * everything is filtered, since the last unfiltered bytes
+		 * of the stream are meant to be left as is.
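+		 * Without this, the trailing bytes buffered in temp
+		 * could wait forever for look-ahead that will never
+		 * arrive.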
+ */ + if (s->ret == XZ_STREAM_END) + s->temp.filtered = s->temp.size; + + bcj_flush(s, b); + if (s->temp.filtered > 0) + return XZ_OK; + } + + return s->ret; +} + +XZ_EXTERN struct xz_dec_bcj *xz_dec_bcj_create(bool single_call) +{ + struct xz_dec_bcj *s = kmalloc(sizeof(*s), GFP_KERNEL); + if (s != NULL) + s->single_call = single_call; + + return s; +} + +XZ_EXTERN enum xz_ret xz_dec_bcj_reset(struct xz_dec_bcj *s, uint8_t id) +{ + switch (id) { +#ifdef XZ_DEC_X86 + case BCJ_X86: +#endif +#ifdef XZ_DEC_POWERPC + case BCJ_POWERPC: +#endif +#ifdef XZ_DEC_IA64 + case BCJ_IA64: +#endif +#ifdef XZ_DEC_ARM + case BCJ_ARM: +#endif +#ifdef XZ_DEC_ARMTHUMB + case BCJ_ARMTHUMB: +#endif +#ifdef XZ_DEC_SPARC + case BCJ_SPARC: +#endif + break; + + default: + /* Unsupported Filter ID */ + return XZ_OPTIONS_ERROR; + } + + s->type = id; + s->ret = XZ_OK; + s->pos = 0; + s->x86_prev_mask = 0; + s->temp.filtered = 0; + s->temp.size = 0; + + return XZ_OK; +} + +#endif Property changes on: sys/contrib/xz-embedded/xz_dec_bcj.c ___________________________________________________________________ Added: svn:mime-type + text/plain Added: svn:keywords + FreeBSD=%H Added: svn:eol-style + native Index: sys/contrib/xz-embedded/xz_config.h =================================================================== --- sys/contrib/xz-embedded/xz_config.h (revision 0) +++ sys/contrib/xz-embedded/xz_config.h (revision 0) @@ -0,0 +1,47 @@ + +#ifndef __XZ_CONFIH_H__ +#define __XZ_CONFIH_H__ + + +#include +#include +#include + +#include +#include "xz_malloc.h" + +#define XZ_DEC_SINGLE 1 +#define XZ_PREBOOT 1 +#undef XZ_EXTERN +#define XZ_EXTERN extern +#undef STATIC +#define STATIC +#undef INIT +#define INIT + +#undef bool +#undef true +#undef false +#define bool int +#define true 1 +#define false 0 + +#define kmalloc(size, flags) xz_malloc(size) +#define kfree(ptr) xz_free(ptr) +#define vmalloc(size) xz_malloc(size) +#define vfree(ptr) xz_free(ptr) + +#define memeq(a, b, size) (memcmp(a, b, size) == 0) +#define memzero(buf, size) bzero(buf, size) + +#ifndef min +# define min(x, y) ((x) < (y) ? (x) : (y)) +#endif +#define min_t(type, x, y) min(x, y) + +#define get_le32(ptr) le32toh(*(const uint32_t *)(ptr)) + +#endif /* __XZ_CONFIH_H__ */ + + + Property changes on: sys/contrib/xz-embedded/xz_config.h ___________________________________________________________________ Added: svn:mime-type + text/plain Added: svn:keywords + FreeBSD=%H Added: svn:eol-style + native Index: sys/contrib/xz-embedded/xz_lzma2.h =================================================================== --- sys/contrib/xz-embedded/xz_lzma2.h (revision 0) +++ sys/contrib/xz-embedded/xz_lzma2.h (revision 0) @@ -0,0 +1,204 @@ +/* + * LZMA2 definitions + * + * Authors: Lasse Collin + * Igor Pavlov + * + * This file has been put into the public domain. + * You can do whatever you want with this file. + */ + +#ifndef XZ_LZMA2_H +#define XZ_LZMA2_H + +/* Range coder constants */ +#define RC_SHIFT_BITS 8 +#define RC_TOP_BITS 24 +#define RC_TOP_VALUE (1 << RC_TOP_BITS) +#define RC_BIT_MODEL_TOTAL_BITS 11 +#define RC_BIT_MODEL_TOTAL (1 << RC_BIT_MODEL_TOTAL_BITS) +#define RC_MOVE_BITS 5 + +/* + * Maximum number of position states. A position state is the lowest pb + * number of bits of the current uncompressed offset. In some places there + * are different sets of probabilities for different position states. + */ +#define POS_STATES_MAX (1 << 4) + +/* + * This enum is used to track which LZMA symbols have occurred most recently + * and in which order. 
Index: sys/contrib/xz-embedded/xz_lzma2.h
===================================================================
--- sys/contrib/xz-embedded/xz_lzma2.h	(revision 0)
+++ sys/contrib/xz-embedded/xz_lzma2.h	(revision 0)
@@ -0,0 +1,204 @@
+/*
+ * LZMA2 definitions
+ *
+ * Authors: Lasse Collin <lasse.collin@tukaani.org>
+ *          Igor Pavlov <http://7-zip.org/>
+ *
+ * This file has been put into the public domain.
+ * You can do whatever you want with this file.
+ */
+
+#ifndef XZ_LZMA2_H
+#define XZ_LZMA2_H
+
+/* Range coder constants */
+#define RC_SHIFT_BITS 8
+#define RC_TOP_BITS 24
+#define RC_TOP_VALUE (1 << RC_TOP_BITS)
+#define RC_BIT_MODEL_TOTAL_BITS 11
+#define RC_BIT_MODEL_TOTAL (1 << RC_BIT_MODEL_TOTAL_BITS)
+#define RC_MOVE_BITS 5
+
+/*
+ * Maximum number of position states. A position state is the lowest pb
+ * number of bits of the current uncompressed offset. In some places there
+ * are different sets of probabilities for different position states.
+ */
+#define POS_STATES_MAX (1 << 4)
+
+/*
+ * This enum is used to track which LZMA symbols have occurred most recently
+ * and in which order. This information is used to predict the next symbol.
+ *
+ * Symbols:
+ *  - Literal: One 8-bit byte
+ *  - Match: Repeat a chunk of data at some distance
+ *  - Long repeat: Multi-byte match at a recently seen distance
+ *  - Short repeat: One-byte repeat at a recently seen distance
+ *
+ * The symbol names are in the form STATE_oldest_older_previous. REP means
+ * either a short or a long repeated match, and NONLIT means any non-literal.
+ */
+enum lzma_state {
+	STATE_LIT_LIT,
+	STATE_MATCH_LIT_LIT,
+	STATE_REP_LIT_LIT,
+	STATE_SHORTREP_LIT_LIT,
+	STATE_MATCH_LIT,
+	STATE_REP_LIT,
+	STATE_SHORTREP_LIT,
+	STATE_LIT_MATCH,
+	STATE_LIT_LONGREP,
+	STATE_LIT_SHORTREP,
+	STATE_NONLIT_MATCH,
+	STATE_NONLIT_REP
+};
+
+/* Total number of states */
+#define STATES 12
+
+/* The lowest 7 states indicate that the previous state was a literal. */
+#define LIT_STATES 7
+
+/* Indicate that the latest symbol was a literal. */
+static inline void lzma_state_literal(enum lzma_state *state)
+{
+	if (*state <= STATE_SHORTREP_LIT_LIT)
+		*state = STATE_LIT_LIT;
+	else if (*state <= STATE_LIT_SHORTREP)
+		*state -= 3;
+	else
+		*state -= 6;
+}
+
+/* Indicate that the latest symbol was a match. */
+static inline void lzma_state_match(enum lzma_state *state)
+{
+	*state = *state < LIT_STATES ? STATE_LIT_MATCH : STATE_NONLIT_MATCH;
+}
+
+/* Indicate that the latest symbol was a long repeated match. */
+static inline void lzma_state_long_rep(enum lzma_state *state)
+{
+	*state = *state < LIT_STATES ? STATE_LIT_LONGREP : STATE_NONLIT_REP;
+}
+
+/* Indicate that the latest symbol was a short repeated match. */
+static inline void lzma_state_short_rep(enum lzma_state *state)
+{
+	*state = *state < LIT_STATES ? STATE_LIT_SHORTREP : STATE_NONLIT_REP;
+}
+
+/* Test if the previous symbol was a literal. */
+static inline bool lzma_state_is_literal(enum lzma_state state)
+{
+	return state < LIT_STATES;
+}
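To make the transition helpers above concrete: starting from two literals
(STATE_LIT_LIT), decoding a match, a literal, and then a long repeat walks
the state machine as follows (values follow directly from the enum ordering
and the helpers; snippet is illustrative only):

	enum lzma_state st = STATE_LIT_LIT;

	lzma_state_match(&st);		/* -> STATE_LIT_MATCH */
	lzma_state_literal(&st);	/* -> STATE_MATCH_LIT */
	lzma_state_long_rep(&st);	/* STATE_MATCH_LIT is still one of the
					   LIT_STATES, so -> STATE_LIT_LONGREP */
	/* lzma_state_is_literal(st) is now false. */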
+
+/*
+ * Each literal coder is divided into three sections:
+ *   - 0x001-0x0FF: Without match byte
+ *   - 0x101-0x1FF: With match byte; match bit is 0
+ *   - 0x201-0x2FF: With match byte; match bit is 1
+ *
+ * A match byte is used when the previous LZMA symbol was something other
+ * than a literal (that is, it was some kind of match).
+ */
+#define LITERAL_CODER_SIZE 0x300
+
+/* Maximum number of literal coders */
+#define LITERAL_CODERS_MAX (1 << 4)
+
+/* Minimum length of a match is two bytes. */
+#define MATCH_LEN_MIN 2
+
+/*
+ * Match length is encoded with 4, 5, or 10 bits.
+ *
+ * Length   Bits
+ *  2-9      4 = Choice=0 + 3 bits
+ * 10-17     5 = Choice=1 + Choice2=0 + 3 bits
+ * 18-273   10 = Choice=1 + Choice2=1 + 8 bits
+ */
+#define LEN_LOW_BITS 3
+#define LEN_LOW_SYMBOLS (1 << LEN_LOW_BITS)
+#define LEN_MID_BITS 3
+#define LEN_MID_SYMBOLS (1 << LEN_MID_BITS)
+#define LEN_HIGH_BITS 8
+#define LEN_HIGH_SYMBOLS (1 << LEN_HIGH_BITS)
+#define LEN_SYMBOLS (LEN_LOW_SYMBOLS + LEN_MID_SYMBOLS + LEN_HIGH_SYMBOLS)
+
+/*
+ * Maximum length of a match is 273, which is a result of the encoding
+ * described above.
+ */
+#define MATCH_LEN_MAX (MATCH_LEN_MIN + LEN_SYMBOLS - 1)
+
+/*
+ * Different sets of probabilities are used for match distances that have
+ * very short match length: lengths of 2, 3, and 4 bytes each have a
+ * separate set of probabilities. Matches with a longer length use a
+ * shared set of probabilities.
+ */
+#define DIST_STATES 4
+
+/*
+ * Get the index of the appropriate probability array for decoding
+ * the distance slot.
+ */
+static inline uint32_t lzma_get_dist_state(uint32_t len)
+{
+	return len < DIST_STATES + MATCH_LEN_MIN
+	    ? len - MATCH_LEN_MIN : DIST_STATES - 1;
+}
+
+/*
+ * The highest two bits of a 32-bit match distance are encoded using six bits.
+ * This six-bit value is called a distance slot. This way encoding a 32-bit
+ * value takes 6-36 bits, larger values taking more bits.
+ */
+#define DIST_SLOT_BITS 6
+#define DIST_SLOTS (1 << DIST_SLOT_BITS)
+
+/*
+ * Match distances up to 127 are fully encoded using probabilities. Since
+ * the highest two bits (distance slot) are always encoded using six bits,
+ * the distances 0-3 don't need any additional bits to encode, since the
+ * distance slot itself is the same as the actual distance. DIST_MODEL_START
+ * indicates the first distance slot where at least one additional bit is
+ * needed.
+ */
+#define DIST_MODEL_START 4
+
+/*
+ * Match distances greater than 127 are encoded in three pieces:
+ *   - distance slot: the highest two bits
+ *   - direct bits: 2-26 bits below the highest two bits
+ *   - alignment bits: four lowest bits
+ *
+ * Direct bits don't use any probabilities.
+ *
+ * The distance slot value of 14 is for distances 128-191.
+ */
+#define DIST_MODEL_END 14
+
+/* Distance slots that indicate a distance <= 127. */
+#define FULL_DISTANCES_BITS (DIST_MODEL_END / 2)
+#define FULL_DISTANCES (1 << FULL_DISTANCES_BITS)
+
+/*
+ * For match distances greater than 127, only the highest two bits and the
+ * lowest four bits (alignment) are encoded using probabilities.
+ */
+#define ALIGN_BITS 4
+#define ALIGN_SIZE (1 << ALIGN_BITS)
+#define ALIGN_MASK (ALIGN_SIZE - 1)
+
+/* Total number of all probability variables */
+#define PROBS_TOTAL (1846 + LITERAL_CODERS_MAX * LITERAL_CODER_SIZE)
+
+/*
+ * LZMA remembers the four most recent match distances. Reusing these
+ * distances tends to take less space than re-encoding the actual
+ * distance value.
+ */
+#define REPS 4
+
+#endif

Property changes on: sys/contrib/xz-embedded/xz_lzma2.h
___________________________________________________________________
Added: svn:mime-type
   + text/plain
Added: svn:keywords
   + FreeBSD=%H
Added: svn:eol-style
   + native
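A quick worked check of the length and distance-state macros above. This
test program is ours, not part of the patch; it compiles stand-alone in
userspace since xz_lzma2.h only needs <stdint.h> and <stdbool.h>:

	#include <assert.h>
	#include <stdbool.h>
	#include <stdint.h>

	#include "xz_lzma2.h"

	int
	main(void)
	{
		/* Lengths 2, 3 and 4 each get a private distance-probability
		 * set; longer matches share the fourth set. */
		assert(lzma_get_dist_state(2) == 0);
		assert(lzma_get_dist_state(4) == 2);
		assert(lzma_get_dist_state(5) == DIST_STATES - 1);
		assert(lzma_get_dist_state(MATCH_LEN_MAX) == DIST_STATES - 1);

		/* 2 + (8 + 8 + 256) - 1 = 273, matching the table above. */
		assert(MATCH_LEN_MAX == 273);
		return 0;
	}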
Index: sys/contrib/xz-embedded/README
===================================================================
--- sys/contrib/xz-embedded/README	(revision 0)
+++ sys/contrib/xz-embedded/README	(revision 0)
@@ -0,0 +1,127 @@
+
+XZ Embedded
+===========
+
+    XZ Embedded is a relatively small, limited implementation of the .xz
+    file format. Currently only decoding is implemented.
+
+    XZ Embedded was written for use in the Linux kernel, but the code can
+    be easily used in other environments too, including regular userspace
+    applications.
+
+    This README contains information that is useful only when the copy
+    of XZ Embedded isn't part of the Linux kernel tree. You should also
+    read linux/Documentation/xz.txt even if you aren't using XZ Embedded
+    as part of Linux; the information in that file is not repeated in
+    this README.
+
+Compiling the Linux kernel module
+
+    The xz_dec module depends on the crc32 module, so make sure that you
+    have it enabled (CONFIG_CRC32).
+
+    Building the xz_dec and xz_dec_test modules without support for BCJ
+    filters:
+
+        cd linux/lib/xz
+        make -C /path/to/kernel/source \
+                KCPPFLAGS=-I"$(pwd)/../../include" M="$(pwd)" \
+                CONFIG_XZ_DEC=m CONFIG_XZ_DEC_TEST=m
+
+    Building the xz_dec and xz_dec_test modules with support for BCJ
+    filters:
+
+        cd linux/lib/xz
+        make -C /path/to/kernel/source \
+                KCPPFLAGS=-I"$(pwd)/../../include" M="$(pwd)" \
+                CONFIG_XZ_DEC=m CONFIG_XZ_DEC_TEST=m CONFIG_XZ_DEC_BCJ=y \
+                CONFIG_XZ_DEC_X86=y CONFIG_XZ_DEC_POWERPC=y \
+                CONFIG_XZ_DEC_IA64=y CONFIG_XZ_DEC_ARM=y \
+                CONFIG_XZ_DEC_ARMTHUMB=y CONFIG_XZ_DEC_SPARC=y
+
+    If you want only one or a few of the BCJ filters, omit the appropriate
+    variables. CONFIG_XZ_DEC_BCJ=y is always required to build the support
+    code shared between all BCJ filters.
+
+    Most people don't need the xz_dec_test module. You can skip building
+    it by omitting CONFIG_XZ_DEC_TEST=m from the make command line.
+
+Compiler requirements
+
+    XZ Embedded should compile as either GNU-C89 (used in the Linux
+    kernel) or with any C99 compiler. Getting the code to compile with
+    a non-GNU C89 compiler or a C++ compiler should be quite easy as
+    long as there is a data type for an unsigned 64-bit integer (or the
+    code is modified not to support large files, which needs somewhat
+    more care than just using a 32-bit integer instead of a 64-bit one).
+
+    If you use GCC, try to use a recent version. For example, on x86-32,
+    xz_dec_lzma2.c compiled with GCC 3.3.6 is 15-25 % slower than when
+    compiled with GCC 4.3.3.
+
+Embedding into userspace applications
+
+    To embed the XZ decoder, copy the following files into a single
+    directory in your source code tree:
+
+        linux/include/linux/xz.h
+        linux/lib/xz/xz_crc32.c
+        linux/lib/xz/xz_dec_lzma2.c
+        linux/lib/xz/xz_dec_stream.c
+        linux/lib/xz/xz_lzma2.h
+        linux/lib/xz/xz_private.h
+        linux/lib/xz/xz_stream.h
+        userspace/xz_config.h
+
+    Alternatively, xz.h may be placed into a different directory, but then
+    that directory must be in the compiler include path when compiling
+    the .c files.
+
+    Your code should use only the functions declared in xz.h. The rest of
+    the .h files are meant only for internal use in XZ Embedded.
+
+    You may want to modify xz_config.h to be more suitable for your build
+    environment. You should probably at least skim through it even if the
+    default file works as is.
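Since only xz.h is public API, a minimal single-call decode sketch may help
readers wire this up. Caveats: xz_dec_init()'s exact signature has varied
between XZ Embedded snapshots (the variant assumed here takes an enum
xz_mode plus a dictionary-size limit), and decode_buf() is a made-up wrapper
name, not anything from the patch:

	#include <stddef.h>
	#include <stdint.h>

	#include "xz.h"

	/* Returns the number of bytes produced, or -1 on failure. */
	long
	decode_buf(const uint8_t *in, size_t in_size,
	    uint8_t *out, size_t out_size)
	{
		struct xz_dec *s;
		struct xz_buf b;
		enum xz_ret ret;

		xz_crc32_init();		/* fill the CRC32 table once */

		/* XZ_SINGLE: whole input and output in one xz_dec_run(). */
		s = xz_dec_init(XZ_SINGLE, 0);
		if (s == NULL)
			return -1;

		b.in = in;    b.in_pos = 0;  b.in_size = in_size;
		b.out = out;  b.out_pos = 0; b.out_size = out_size;

		ret = xz_dec_run(s, &b);
		xz_dec_end(s);

		return ret == XZ_STREAM_END ? (long)b.out_pos : -1;
	}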
+BCJ filter support
+
+    If you want support for one or more BCJ filters, you need to also copy
+    linux/lib/xz/xz_dec_bcj.c into your application, and use appropriate
+    #defines in xz_config.h or in compiler flags. You don't need these
+    #defines in the code that just uses XZ Embedded via xz.h, but having
+    them always #defined doesn't hurt either.
+
+        #define            Instruction set   BCJ filter endianness
+        XZ_DEC_X86         x86-32 or x86-64  Little endian only
+        XZ_DEC_POWERPC     PowerPC           Big endian only
+        XZ_DEC_IA64        Itanium (IA-64)   Big or little endian
+        XZ_DEC_ARM         ARM               Little endian only
+        XZ_DEC_ARMTHUMB    ARM-Thumb         Little endian only
+        XZ_DEC_SPARC       SPARC             Big or little endian
+
+    While some architectures are (partially) bi-endian, the endianness
+    setting doesn't change the endianness of the instructions on all
+    architectures. That's why the Itanium and SPARC filters work for both
+    big and little endian executables (Itanium has little endian
+    instructions and SPARC has big endian instructions).
+
+    There currently is no filter for little endian PowerPC or big endian
+    ARM or ARM-Thumb. Implementing filters for them can be considered if
+    there is a need for such filters in real-world applications.
+
+Notes about shared libraries
+
+    If you are including XZ Embedded in a shared library, you should very
+    probably rename the xz_* functions to prevent symbol conflicts in case
+    your library is linked against some other library or application that
+    also has XZ Embedded in it (which may even be a different version of
+    XZ Embedded). TODO: Provide an easy way to do this.
+
+    Please don't create a shared library of XZ Embedded itself unless it
+    is fine to rebuild everything depending on that shared library every
+    time you upgrade to a newer version of XZ Embedded. There are no API
+    or ABI stability guarantees between different versions of XZ Embedded.
+

Index: sys/contrib/xz-embedded/xz_malloc.h
===================================================================
--- sys/contrib/xz-embedded/xz_malloc.h	(revision 0)
+++ sys/contrib/xz-embedded/xz_malloc.h	(revision 0)
@@ -0,0 +1,8 @@
+#ifndef __XZ_MALLOC_H__
+#define __XZ_MALLOC_H__
+
+void *xz_malloc(unsigned long size);
+void xz_free(void *addr);
+
+#endif /* __XZ_MALLOC_H__ */
+

Property changes on: sys/contrib/xz-embedded/xz_malloc.h
___________________________________________________________________
Added: svn:mime-type
   + text/plain
Added: svn:keywords
   + FreeBSD=%H
Added: svn:eol-style
   + native

Index: sys/contrib/xz-embedded/xz_crc32.c
===================================================================
--- sys/contrib/xz-embedded/xz_crc32.c	(revision 0)
+++ sys/contrib/xz-embedded/xz_crc32.c	(revision 0)
@@ -0,0 +1,59 @@
+/*
+ * CRC32 using the polynomial from IEEE-802.3
+ *
+ * Authors: Lasse Collin <lasse.collin@tukaani.org>
+ *          Igor Pavlov <http://7-zip.org/>
+ *
+ * This file has been put into the public domain.
+ * You can do whatever you want with this file.
+ */
+
+/*
+ * This is not the fastest implementation, but it is pretty compact.
+ * The fastest versions of xz_crc32() on modern CPUs without a hardware
+ * accelerated CRC instruction are 3-5 times as fast as this version,
+ * but they are bigger and use more memory for the lookup table.
+ */
+
+#include "xz_private.h"
+
+/*
+ * STATIC_RW_DATA is used in the pre-boot environment on some architectures.
+ * See <linux/decompress/unxz.c> for details.
+ */
+#ifndef STATIC_RW_DATA
+#	define STATIC_RW_DATA static
+#endif
+
+STATIC_RW_DATA uint32_t xz_crc32_table[256];
+
+XZ_EXTERN void xz_crc32_init(void)
+{
+	const uint32_t poly = 0xEDB88320;
+
+	uint32_t i;
+	uint32_t j;
+	uint32_t r;
+
+	for (i = 0; i < 256; ++i) {
+		r = i;
+		for (j = 0; j < 8; ++j)
+			r = (r >> 1) ^ (poly & ~((r & 1) - 1));
+
+		xz_crc32_table[i] = r;
+	}
+
+	return;
+}
+
+XZ_EXTERN uint32_t xz_crc32(const uint8_t *buf, size_t size, uint32_t crc)
+{
+	crc = ~crc;
+
+	while (size != 0) {
+		crc = xz_crc32_table[*buf++ ^ (crc & 0xFF)] ^ (crc >> 8);
+		--size;
+	}
+
+	return ~crc;
+}

Property changes on: sys/contrib/xz-embedded/xz_crc32.c
___________________________________________________________________
Added: svn:mime-type
   + text/plain
Added: svn:keywords
   + FreeBSD=%H
Added: svn:eol-style
   + native

--Multipart=_Sun__23_Jan_2011_21_30_23_+0200_nBQzY5lRGbnEEKiY--