From: Conrad Meyer <cem@FreeBSD.org>
Date: Mon, 22 Oct 2018 19:45:18 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-vendor@freebsd.org
Subject: svn commit: r339610 - in vendor/zstd/dist: . contrib/adaptive-compression contrib/long_distance_matching contrib/meson doc doc/images lib lib/common lib/compress lib/decompress lib/deprecated lib/d...
Message-Id: <201810221945.w9MJjIMU075839@repo.freebsd.org>
X-SVN-Group: vendor
X-SVN-Commit-Author: cem
X-SVN-Commit-Paths: in vendor/zstd/dist: . contrib/adaptive-compression contrib/long_distance_matching contrib/meson doc doc/images lib lib/common lib/compress lib/decompress lib/deprecated lib/dictBuilder lib/legacy pro...
X-SVN-Commit-Revision: 339610 X-SVN-Commit-Repository: base MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-src-vendor@freebsd.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: SVN commit messages for the vendor work area tree List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 22 Oct 2018 19:45:20 -0000 Author: cem Date: Mon Oct 22 19:45:18 2018 New Revision: 339610 URL: https://svnweb.freebsd.org/changeset/base/339610 Log: import zstd 1.3.3 Added: vendor/zstd/dist/contrib/adaptive-compression/ vendor/zstd/dist/contrib/adaptive-compression/Makefile (contents, props changed) vendor/zstd/dist/contrib/adaptive-compression/README.md vendor/zstd/dist/contrib/adaptive-compression/adapt.c (contents, props changed) vendor/zstd/dist/contrib/adaptive-compression/datagencli.c (contents, props changed) vendor/zstd/dist/contrib/adaptive-compression/test-correctness.sh (contents, props changed) vendor/zstd/dist/contrib/adaptive-compression/test-performance.sh (contents, props changed) vendor/zstd/dist/contrib/long_distance_matching/ vendor/zstd/dist/contrib/long_distance_matching/Makefile (contents, props changed) vendor/zstd/dist/contrib/long_distance_matching/README.md vendor/zstd/dist/contrib/long_distance_matching/ldm.c (contents, props changed) vendor/zstd/dist/contrib/long_distance_matching/ldm.h (contents, props changed) vendor/zstd/dist/contrib/long_distance_matching/ldm_common.c (contents, props changed) vendor/zstd/dist/contrib/long_distance_matching/ldm_params.h (contents, props changed) vendor/zstd/dist/contrib/long_distance_matching/main.c (contents, props changed) vendor/zstd/dist/doc/images/ldmCspeed.png (contents, props changed) vendor/zstd/dist/doc/images/ldmDspeed.png (contents, props changed) vendor/zstd/dist/doc/images/linux-4.7-12-compress.png (contents, props changed) vendor/zstd/dist/doc/images/linux-4.7-12-decompress.png (contents, props changed) vendor/zstd/dist/doc/images/linux-git-compress.png (contents, props changed) vendor/zstd/dist/doc/images/linux-git-decompress.png (contents, props changed) vendor/zstd/dist/doc/images/zstd_logo86.png (contents, props changed) vendor/zstd/dist/lib/compress/zstd_compress_internal.h (contents, props changed) vendor/zstd/dist/tests/seqgen.c (contents, props changed) vendor/zstd/dist/tests/seqgen.h (contents, props changed) Deleted: vendor/zstd/dist/lib/compress/zstd_compress.h Modified: vendor/zstd/dist/Makefile vendor/zstd/dist/NEWS vendor/zstd/dist/README.md vendor/zstd/dist/circle.yml vendor/zstd/dist/contrib/meson/meson.build vendor/zstd/dist/doc/zstd_compression_format.md vendor/zstd/dist/doc/zstd_manual.html vendor/zstd/dist/lib/BUCK vendor/zstd/dist/lib/common/bitstream.h vendor/zstd/dist/lib/common/mem.h vendor/zstd/dist/lib/common/pool.c vendor/zstd/dist/lib/common/zstd_common.c vendor/zstd/dist/lib/common/zstd_internal.h vendor/zstd/dist/lib/compress/zstd_compress.c vendor/zstd/dist/lib/compress/zstd_double_fast.c vendor/zstd/dist/lib/compress/zstd_double_fast.h vendor/zstd/dist/lib/compress/zstd_fast.c vendor/zstd/dist/lib/compress/zstd_fast.h vendor/zstd/dist/lib/compress/zstd_lazy.c vendor/zstd/dist/lib/compress/zstd_lazy.h vendor/zstd/dist/lib/compress/zstd_ldm.h vendor/zstd/dist/lib/compress/zstd_opt.c vendor/zstd/dist/lib/compress/zstd_opt.h vendor/zstd/dist/lib/compress/zstdmt_compress.c vendor/zstd/dist/lib/compress/zstdmt_compress.h vendor/zstd/dist/lib/decompress/zstd_decompress.c 
vendor/zstd/dist/lib/deprecated/zbuff_compress.c vendor/zstd/dist/lib/dictBuilder/zdict.c vendor/zstd/dist/lib/legacy/zstd_v01.c vendor/zstd/dist/lib/legacy/zstd_v02.c vendor/zstd/dist/lib/legacy/zstd_v03.c vendor/zstd/dist/lib/legacy/zstd_v04.c vendor/zstd/dist/lib/legacy/zstd_v05.c vendor/zstd/dist/lib/legacy/zstd_v06.c vendor/zstd/dist/lib/legacy/zstd_v07.c vendor/zstd/dist/lib/zstd.h vendor/zstd/dist/programs/BUCK vendor/zstd/dist/programs/Makefile vendor/zstd/dist/programs/bench.c vendor/zstd/dist/programs/bench.h vendor/zstd/dist/programs/dibio.c vendor/zstd/dist/programs/fileio.c vendor/zstd/dist/programs/fileio.h vendor/zstd/dist/programs/platform.h vendor/zstd/dist/programs/util.h vendor/zstd/dist/programs/zstd.1 vendor/zstd/dist/programs/zstd.1.md vendor/zstd/dist/programs/zstdcli.c vendor/zstd/dist/tests/Makefile vendor/zstd/dist/tests/decodecorpus.c vendor/zstd/dist/tests/fullbench.c vendor/zstd/dist/tests/fuzzer.c vendor/zstd/dist/tests/paramgrill.c vendor/zstd/dist/tests/playTests.sh vendor/zstd/dist/tests/zbufftest.c vendor/zstd/dist/tests/zstreamtest.c vendor/zstd/dist/zlibWrapper/BUCK vendor/zstd/dist/zlibWrapper/examples/zwrapbench.c vendor/zstd/dist/zlibWrapper/zstd_zlibwrapper.c Modified: vendor/zstd/dist/Makefile ============================================================================== --- vendor/zstd/dist/Makefile Mon Oct 22 19:39:20 2018 (r339609) +++ vendor/zstd/dist/Makefile Mon Oct 22 19:45:18 2018 (r339610) @@ -72,9 +72,12 @@ zstdmt: zlibwrapper: $(MAKE) -C $(ZWRAPDIR) test +.PHONY: check +check: shortest + .PHONY: test shortest test shortest: - $(MAKE) -C $(PRGDIR) allVariants + $(MAKE) -C $(PRGDIR) allVariants MOREFLAGS="-g -DZSTD_DEBUG=1" $(MAKE) -C $(TESTDIR) $@ .PHONY: examples @@ -127,11 +130,6 @@ uninstall: travis-install: $(MAKE) install PREFIX=~/install_test_dir -.PHONY: gppbuild -gppbuild: clean - g++ -v - CC=g++ $(MAKE) -C programs all CFLAGS="-O3 -Wall -Wextra -Wundef -Wshadow -Wcast-align -Werror" - .PHONY: gcc5build gcc5build: clean gcc-5 -v @@ -163,7 +161,7 @@ aarch64build: clean CC=aarch64-linux-gnu-gcc CFLAGS="-Werror" $(MAKE) allzstd ppcbuild: clean - CC=powerpc-linux-gnu-gcc CLAGS="-m32 -Wno-attributes -Werror" $(MAKE) allzstd + CC=powerpc-linux-gnu-gcc CFLAGS="-m32 -Wno-attributes -Werror" $(MAKE) allzstd ppc64build: clean CC=powerpc-linux-gnu-gcc CFLAGS="-m64 -Werror" $(MAKE) allzstd Modified: vendor/zstd/dist/NEWS ============================================================================== --- vendor/zstd/dist/NEWS Mon Oct 22 19:39:20 2018 (r339609) +++ vendor/zstd/dist/NEWS Mon Oct 22 19:45:18 2018 (r339610) @@ -1,3 +1,15 @@ +v1.3.3 +perf: faster zstd_opt strategy (levels 17-19) +fix : bug #944 : multithreading with shared ditionary and large data, reported by @gsliepen +cli : fix : content size written in header by default +cli : fix : improved LZ4 format support, by @felixhandte +cli : new : hidden command `-S`, to benchmark multiple files while generating one result per file +api : fix : support large skippable frames, by @terrelln +api : fix : streaming interface was adding a useless 3-bytes null block to small frames +api : change : when setting `pledgedSrcSize`, use `ZSTD_CONTENTSIZE_UNKNOWN` macro value to mean "unknown" +build: fix : compilation under rhel6 and centos6, reported by @pixelb +build: added `check` target + v1.3.2 new : long range mode, using --long command, by Stella Lau (@stellamplau) new : ability to generate and decode magicless frames (#591) Modified: vendor/zstd/dist/README.md 
============================================================================== --- vendor/zstd/dist/README.md Mon Oct 22 19:39:20 2018 (r339609) +++ vendor/zstd/dist/README.md Mon Oct 22 19:45:18 2018 (r339610) @@ -1,15 +1,16 @@ - __Zstandard__, or `zstd` as short version, is a fast lossless compression algorithm, - targeting real-time compression scenarios at zlib-level and better compression ratios. +

[Zstandard logo]

-It is provided as an open-source BSD-licensed **C** library, -and a command line utility producing and decoding `.zst` and `.gz` files. -For other programming languages, -you can consult a list of known ports on [Zstandard homepage](http://www.zstd.net/#other-languages). +__Zstandard__, or `zstd` as short version, is a fast lossless compression algorithm, +targeting real-time compression scenarios at zlib-level and better compression ratios. +It's backed by a very fast entropy stage, provided by [Huff0 and FSE library](https://github.com/Cyan4973/FiniteStateEntropy). -| dev branch status | -|-------------------| -| [![Build Status][travisDevBadge]][travisLink] [![Build status][AppveyorDevBadge]][AppveyorLink] [![Build status][CircleDevBadge]][CircleLink] +The project is provided as an open-source BSD-licensed **C** library, +and a command line utility producing and decoding `.zst`, `.gz`, `.xz` and `.lz4` files. +Should your project require another programming language, +a list of known ports and bindings is provided on [Zstandard homepage](http://www.zstd.net/#other-languages). +Development branch status : [![Build Status][travisDevBadge]][travisLink] [![Build status][AppveyorDevBadge]][AppveyorLink] [![Build status][CircleDevBadge]][CircleLink] + [travisDevBadge]: https://travis-ci.org/facebook/zstd.svg?branch=dev "Continuous Integration test suite" [travisLink]: https://travis-ci.org/facebook/zstd [AppveyorDevBadge]: https://ci.appveyor.com/api/projects/status/xt38wbdxjk5mrbem/branch/dev?svg=true "Windows test suite" @@ -17,8 +18,9 @@ you can consult a list of known ports on [Zstandard ho [CircleDevBadge]: https://circleci.com/gh/facebook/zstd/tree/dev.svg?style=shield "Short test suite" [CircleLink]: https://circleci.com/gh/facebook/zstd +### Benchmarks -As a reference, several fast compression algorithms were tested and compared +For reference, several fast compression algorithms were tested and compared on a server running Linux Debian (`Linux version 4.8.0-1-amd64`), with a Core i7-6700K CPU @ 4.0GHz, using [lzbench], an open-source in-memory benchmark by @inikep @@ -43,7 +45,9 @@ on the [Silesia compression corpus]. [LZ4]: http://www.lz4.org/ Zstd can also offer stronger compression ratios at the cost of compression speed. -Speed vs Compression trade-off is configurable by small increments. Decompression speed is preserved and remains roughly the same at all settings, a property shared by most LZ compression algorithms, such as [zlib] or lzma. +Speed vs Compression trade-off is configurable by small increments. +Decompression speed is preserved and remains roughly the same at all settings, +a property shared by most LZ compression algorithms, such as [zlib] or lzma. The following tests were run on a server running Linux Debian (`Linux version 4.8.0-1-amd64`) @@ -56,8 +60,8 @@ Compression Speed vs Ratio | Decompression Speed ---------------------------|-------------------- ![Compression Speed vs Ratio](doc/images/Cspeed4.png "Compression Speed vs Ratio") | ![Decompression Speed](doc/images/Dspeed4.png "Decompression Speed") -Several algorithms can produce higher compression ratios, but at slower speeds, falling outside of the graph. -For a larger picture including very slow modes, [click on this link](doc/images/DCspeed5.png) . +A few other algorithms can produce higher compression ratios at slower speeds, falling outside of the graph. +For a larger picture including slow modes, [click on this link](doc/images/DCspeed5.png). 
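The configurable speed/ratio trade-off described in the README maps onto a single integer level in the one-shot API of `lib/zstd.h`. A minimal sketch, assuming the program is linked against libzstd (`-lzstd`); the buffer sizing, level choice, and messages are illustrative, not taken from the imported sources:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zstd.h>   /* ZSTD_compress, ZSTD_compressBound, ZSTD_isError */

int main(void)
{
    const char src[] = "example payload: the level argument trades speed for ratio";
    size_t const srcSize = strlen(src);
    size_t const dstCapacity = ZSTD_compressBound(srcSize);  /* worst-case compressed size */
    void* const dst = malloc(dstCapacity);
    if (dst == NULL) return 1;

    /* Level 1 is fastest; higher levels (up to ZSTD_maxCLevel()) compress more. */
    size_t const cSize = ZSTD_compress(dst, dstCapacity, src, srcSize, 3);
    if (ZSTD_isError(cSize)) {
        fprintf(stderr, "compression failed: %s\n", ZSTD_getErrorName(cSize));
        free(dst);
        return 1;
    }
    printf("compressed %zu bytes into %zu bytes at level 3\n", srcSize, cSize);
    free(dst);
    return 0;
}
```

Decompression via `ZSTD_decompress()` does not depend on the level used at compression time, which is the property the README points at when it says decompression speed stays roughly the same across settings.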
### The case for Small Data compression @@ -84,7 +88,7 @@ Training works if there is some correlation in a famil Hence, deploying one dictionary per type of data will provide the greatest benefits. Dictionary gains are mostly effective in the first few KB. Then, the compression algorithm will gradually use previously decoded content to better compress the rest of the file. -#### Dictionary compression How To : +#### Dictionary compression How To: 1) Create the dictionary @@ -99,19 +103,16 @@ Dictionary gains are mostly effective in the first few `zstd -D dictionaryName --decompress FILE.zst` -### Build +### Build instructions -Once you have the repository cloned, there are multiple ways provided to build Zstandard. - #### Makefile -If your system is compatible with a standard `make` (or `gmake`) binary generator, -you can simply run it at the root directory. -It will generate `zstd` within root directory. +If your system is compatible with standard `make` (or `gmake`), +invoking `make` in root directory will generate `zstd` cli in root directory. -Other available options include : -- `make install` : create and install zstd binary, library and man page -- `make test` : create and run `zstd` and test tools on local platform +Other available options include: +- `make install` : create and install zstd cli, library and man pages +- `make check` : create and run `zstd`, tests its behavior on local platform #### cmake @@ -125,9 +126,9 @@ A Meson project is provided within `contrib/meson`. #### Visual Studio (Windows) -Going into `build` directory, you will find additional possibilities : -- Projects for Visual Studio 2005, 2008 and 2010 - + VS2010 project is compatible with VS2012, VS2013 and VS2015 +Going into `build` directory, you will find additional possibilities: +- Projects for Visual Studio 2005, 2008 and 2010. + + VS2010 project is compatible with VS2012, VS2013 and VS2015. - Automated build scripts for Visual compiler by @KrzysFR , in `build/VS_scripts`, which will build `zstd` cli and `libzstd` library without any need to open Visual Studio solution. @@ -143,11 +144,7 @@ Zstandard is dual-licensed under [BSD](LICENSE) and [G ### Contributing -The "dev" branch is the one where all contributions will be merged before reaching "master". -If you plan to propose a patch, please commit into the "dev" branch or its own feature branch. +The "dev" branch is the one where all contributions are merged before reaching "master". +If you plan to propose a patch, please commit into the "dev" branch, or its own feature branch. Direct commit to "master" are not permitted. For more information, please read [CONTRIBUTING](CONTRIBUTING.md). - -### Miscellaneous - -Zstd entropy stage is provided by [Huff0 and FSE, from Finite State Entropy library](https://github.com/Cyan4973/FiniteStateEntropy). Modified: vendor/zstd/dist/circle.yml ============================================================================== --- vendor/zstd/dist/circle.yml Mon Oct 22 19:39:20 2018 (r339609) +++ vendor/zstd/dist/circle.yml Mon Oct 22 19:45:18 2018 (r339610) @@ -3,13 +3,11 @@ dependencies: - sudo dpkg --add-architecture i386 - sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test; sudo apt-get -y -qq update - sudo apt-get -y install gcc-powerpc-linux-gnu gcc-arm-linux-gnueabi libc6-dev-armel-cross gcc-aarch64-linux-gnu libc6-dev-arm64-cross - - sudo apt-get -y install libstdc++-7-dev clang gcc g++ gcc-5 gcc-6 gcc-7 zlib1g-dev liblzma-dev - - sudo apt-get -y install linux-libc-dev:i386 libc6-dev-i386 test: override: - ? 
| - if [[ "$CIRCLE_NODE_INDEX" == "0" ]] ; then cc -v; make all && make clean && make -C lib libzstd-nomt && make clean; fi && + if [[ "$CIRCLE_NODE_INDEX" == "0" ]] ; then cc -v; CFLAGS="-O0 -Werror" make all && make clean; fi && if [[ "$CIRCLE_NODE_TOTAL" < "2" ]] || [[ "$CIRCLE_NODE_INDEX" == "1" ]]; then make gnu90build && make clean; fi : parallel: true @@ -20,32 +18,17 @@ test: parallel: true - ? | if [[ "$CIRCLE_NODE_INDEX" == "0" ]] ; then make c11build && make clean; fi && - if [[ "$CIRCLE_NODE_TOTAL" < "2" ]] || [[ "$CIRCLE_NODE_INDEX" == "1" ]]; then make cmakebuild && make clean; fi + if [[ "$CIRCLE_NODE_TOTAL" < "2" ]] || [[ "$CIRCLE_NODE_INDEX" == "1" ]]; then make ppc64build && make clean; fi : parallel: true - ? | - if [[ "$CIRCLE_NODE_INDEX" == "0" ]] ; then make gppbuild && make clean; fi && - if [[ "$CIRCLE_NODE_TOTAL" < "2" ]] || [[ "$CIRCLE_NODE_INDEX" == "1" ]]; then make gcc5build && make clean; fi - : - parallel: true - - ? | - if [[ "$CIRCLE_NODE_INDEX" == "0" ]] ; then make gcc6build && make clean; fi && - if [[ "$CIRCLE_NODE_TOTAL" < "2" ]] || [[ "$CIRCLE_NODE_INDEX" == "1" ]]; then make clangbuild && make clean; fi - : - parallel: true - - ? | - if [[ "$CIRCLE_NODE_INDEX" == "0" ]] ; then make m32build && make clean; fi && - if [[ "$CIRCLE_NODE_TOTAL" < "2" ]] || [[ "$CIRCLE_NODE_INDEX" == "1" ]]; then make armbuild && make clean; fi - : - parallel: true - - ? | if [[ "$CIRCLE_NODE_INDEX" == "0" ]] ; then make aarch64build && make clean; fi && if [[ "$CIRCLE_NODE_TOTAL" < "2" ]] || [[ "$CIRCLE_NODE_INDEX" == "1" ]]; then make ppcbuild && make clean; fi : parallel: true - ? | - if [[ "$CIRCLE_NODE_INDEX" == "0" ]] ; then make ppc64build && make clean; fi && - if [[ "$CIRCLE_NODE_TOTAL" < "2" ]] || [[ "$CIRCLE_NODE_INDEX" == "1" ]]; then make gcc7build && make clean; fi + if [[ "$CIRCLE_NODE_INDEX" == "0" ]] ; then make -j regressiontest && make clean; fi && + if [[ "$CIRCLE_NODE_TOTAL" < "2" ]] || [[ "$CIRCLE_NODE_INDEX" == "1" ]]; then make armbuild && make clean; fi : parallel: true - ? | @@ -54,8 +37,8 @@ test: : parallel: true - ? 
| - if [[ "$CIRCLE_NODE_INDEX" == "0" ]] ; then make -j regressiontest && make clean; fi && - if [[ "$CIRCLE_NODE_TOTAL" < "2" ]] || [[ "$CIRCLE_NODE_INDEX" == "1" ]]; then true; fi # Could add another test here + if [[ "$CIRCLE_NODE_INDEX" == "0" ]] ; then make cxxtest && make clean; fi && + if [[ "$CIRCLE_NODE_TOTAL" < "2" ]] || [[ "$CIRCLE_NODE_INDEX" == "1" ]]; then make -C lib libzstd-nomt && make clean; fi : parallel: true Added: vendor/zstd/dist/contrib/adaptive-compression/Makefile ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ vendor/zstd/dist/contrib/adaptive-compression/Makefile Mon Oct 22 19:45:18 2018 (r339610) @@ -0,0 +1,76 @@ + +ZSTDDIR = ../../lib +PRGDIR = ../../programs +ZSTDCOMMON_FILES := $(ZSTDDIR)/common/*.c +ZSTDCOMP_FILES := $(ZSTDDIR)/compress/*.c +ZSTDDECOMP_FILES := $(ZSTDDIR)/decompress/*.c +ZSTD_FILES := $(ZSTDDECOMP_FILES) $(ZSTDCOMMON_FILES) $(ZSTDCOMP_FILES) + +MULTITHREAD_LDFLAGS = -pthread +DEBUGFLAGS= -g -DZSTD_DEBUG=1 +CPPFLAGS += -I$(ZSTDDIR) -I$(ZSTDDIR)/common -I$(ZSTDDIR)/compress \ + -I$(ZSTDDIR)/dictBuilder -I$(ZSTDDIR)/deprecated -I$(PRGDIR) +CFLAGS ?= -O3 +CFLAGS += -Wall -Wextra -Wcast-qual -Wcast-align -Wshadow \ + -Wstrict-aliasing=1 -Wswitch-enum -Wdeclaration-after-statement \ + -Wstrict-prototypes -Wundef -Wformat-security \ + -Wvla -Wformat=2 -Winit-self -Wfloat-equal -Wwrite-strings \ + -Wredundant-decls +CFLAGS += $(DEBUGFLAGS) +CFLAGS += $(MOREFLAGS) +FLAGS = $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) $(MULTITHREAD_LDFLAGS) + +all: adapt datagen + +adapt: $(ZSTD_FILES) adapt.c + $(CC) $(FLAGS) $^ -o $@ + +adapt-debug: $(ZSTD_FILES) adapt.c + $(CC) $(FLAGS) -DDEBUG_MODE=2 $^ -o adapt + +datagen : $(PRGDIR)/datagen.c datagencli.c + $(CC) $(FLAGS) $^ -o $@ + +test-adapt-correctness: datagen adapt + @./test-correctness.sh + @echo "test correctness complete" + +test-adapt-performance: datagen adapt + @./test-performance.sh + @echo "test performance complete" + +clean: + @$(RM) -f adapt datagen + @$(RM) -rf *.dSYM + @$(RM) -f tmp* + @$(RM) -f tests/*.zst + @$(RM) -f tests/tmp* + @echo "finished cleaning" + +#----------------------------------------------------------------------------- +# make install is validated only for Linux, OSX, BSD, Hurd and Solaris targets +#----------------------------------------------------------------------------- +ifneq (,$(filter $(shell uname),Linux Darwin GNU/kFreeBSD GNU OpenBSD FreeBSD NetBSD DragonFly SunOS)) + +ifneq (,$(filter $(shell uname),SunOS)) +INSTALL ?= ginstall +else +INSTALL ?= install +endif + +PREFIX ?= /usr/local +DESTDIR ?= +BINDIR ?= $(PREFIX)/bin + +INSTALL_PROGRAM ?= $(INSTALL) -m 755 + +install: adapt + @echo Installing binaries + @$(INSTALL) -d -m 755 $(DESTDIR)$(BINDIR)/ + @$(INSTALL_PROGRAM) adapt $(DESTDIR)$(BINDIR)/zstd-adaptive + @echo zstd-adaptive installation completed + +uninstall: + @$(RM) $(DESTDIR)$(BINDIR)/zstd-adaptive + @echo zstd-adaptive programs successfully uninstalled +endif Added: vendor/zstd/dist/contrib/adaptive-compression/README.md ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ vendor/zstd/dist/contrib/adaptive-compression/README.md Mon Oct 22 19:45:18 2018 (r339610) @@ -0,0 +1,91 @@ +### Summary + +`adapt` is a new compression tool targeted at optimizing performance across network connections and pipelines. 
The tool is aimed at sensing network speeds and adapting compression level based on network or pipe speeds.
+In situations where the compression level does not appropriately match the network/pipe speed, compression may be bottlenecking the entire pipeline or the files may not be compressed as much as they potentially could be, therefore losing efficiency. It also becomes quite impractical to manually measure and set an optimal compression level (which could potentially change over time).
+
+### Using `adapt`
+
+In order to build and use the tool, you can simply run `make adapt` in the `adaptive-compression` directory under `contrib`. This will generate an executable available for use. Another possible method of installation is running `make install`, which will create and install the binary as the command `zstd-adaptive`.
+
+Similar to many other compression utilities, `zstd-adaptive` can be invoked by using the following format:
+
+`zstd-adaptive [options] [file(s)]`
+
+Supported options for the above format are described below.
+
+`zstd-adaptive` also supports reading from `stdin` and writing to `stdout`, which is potentially more useful. By default, if no files are given, `zstd-adaptive` reads from and writes to standard I/O. Therefore, you can simply insert it within a pipeline like so:
+
+`cat FILE | zstd-adaptive | ssh "cat - > tmp.zst"`
+
+If a file is provided, it is also possible to force writing to stdout using the `-c` flag like so:
+
+`zstd-adaptive -c FILE | ssh "cat - > tmp.zst"`
+
+Several options described below can be used to control the behavior of `zstd-adaptive`. More specifically, using the `-l#` and `-u#` flags will set upper and lower bounds so that the compression level will always be within that range. The `-i#` flag can also be used to change the initial compression level. If an initial compression level is not provided, the initial compression level will be chosen such that it is within the appropriate range (it becomes equal to the lower bound).
+
+### Options
+`-oFILE` : write output to `FILE`
+
+`-i#` : provide initial compression level (must be within the appropriate bounds)
+
+`-h` : display help/information
+
+`-f` : force the compression level to stay constant
+
+`-c` : force write to `stdout`
+
+`-p` : hide progress bar
+
+`-q` : quiet mode -- do not show progress bar or other information
+
+`-l#` : set a lower bound on the compression level (default is 1)
+
+`-u#` : set an upper bound on the compression level (default is 22)
+### Benchmarking / Test results
+#### Artificial Tests
+These artificial tests were run by using the `pv` command line utility in order to limit pipe speeds (25 MB/s read and 5 MB/s write limits were chosen to mimic severe throughput constraints). A 40 GB backup file was sent through a pipeline, compressed, and written out to a file. Compression time, size, and ratio were computed. Data for `zstd -15` was excluded from these tests because the test runs quite long.
+
**25 MB/s read limit**
+ +| Compressor Name | Ratio | Compressed Size | Compression Time | +|:----------------|------:|----------------:|-----------------:| +| zstd -3 | 2.108 | 20.718 GB | 29m 48.530s | +| zstd-adaptive | 2.230 | 19.581 GB | 29m 48.798s | + +
+ + + + +
**5 MB/s write limit**
+ +| Compressor Name | Ratio | Compressed Size | Compression Time | +|:----------------|------:|----------------:|-----------------:| +| zstd -3 | 2.108 | 20.718 GB | 1h 10m 43.076s | +| zstd-adaptive | 2.249 | 19.412 GB | 1h 06m 15.577s | + +
+ +The commands used for this test generally followed the form: + +`cat FILE | pv -L 25m -q | COMPRESSION | pv -q > tmp.zst # impose 25 MB/s read limit` + +`cat FILE | pv -q | COMPRESSION | pv -L 5m -q > tmp.zst # impose 5 MB/s write limit` + +#### SSH Tests + +The following tests were performed by piping a relatively large backup file (approximately 80 GB) through compression and over SSH to be stored on a server. The test data includes statistics for time and compressed size on `zstd` at several compression levels, as well as `zstd-adaptive`. The data highlights the potential advantages that `zstd-adaptive` has over using a low static compression level and the negative imapcts that using an excessively high static compression level can have on +pipe throughput. + +| Compressor Name | Ratio | Compressed Size | Compression Time | +|:----------------|------:|----------------:|-----------------:| +| zstd -3 | 2.212 | 32.426 GB | 1h 17m 59.756s | +| zstd -15 | 2.374 | 30.213 GB | 2h 56m 59.441s | +| zstd-adaptive | 2.315 | 30.993 GB | 1h 18m 52.860s | + +The commands used for this test generally followed the form: + +`cat FILE | COMPRESSION | ssh dev "cat - > tmp.zst"` Added: vendor/zstd/dist/contrib/adaptive-compression/adapt.c ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ vendor/zstd/dist/contrib/adaptive-compression/adapt.c Mon Oct 22 19:45:18 2018 (r339610) @@ -0,0 +1,1137 @@ +/* + * Copyright (c) 2017-present, Facebook, Inc. + * All rights reserved. + * + * This source code is licensed under both the BSD-style license (found in the + * LICENSE file in the root directory of this source tree) and the GPLv2 (found + * in the COPYING file in the root directory of this source tree). + */ + +#include /* fprintf */ +#include /* malloc, free */ +#include /* pthread functions */ +#include /* memset */ +#include "zstd_internal.h" +#include "util.h" + +#define DISPLAY(...) fprintf(stderr, __VA_ARGS__) +#define PRINT(...) fprintf(stdout, __VA_ARGS__) +#define DEBUG(l, ...) 
{ if (g_displayLevel>=l) { DISPLAY(__VA_ARGS__); } } +#define FILE_CHUNK_SIZE 4 << 20 +#define MAX_NUM_JOBS 2 +#define stdinmark "/*stdin*\\" +#define stdoutmark "/*stdout*\\" +#define MAX_PATH 256 +#define DEFAULT_DISPLAY_LEVEL 1 +#define DEFAULT_COMPRESSION_LEVEL 6 +#define MAX_COMPRESSION_LEVEL_CHANGE 2 +#define CONVERGENCE_LOWER_BOUND 5 +#define CLEVEL_DECREASE_COOLDOWN 5 +#define CHANGE_BY_TWO_THRESHOLD 0.1 +#define CHANGE_BY_ONE_THRESHOLD 0.65 + +#ifndef DEBUG_MODE +static int g_displayLevel = DEFAULT_DISPLAY_LEVEL; +#else +static int g_displayLevel = DEBUG_MODE; +#endif + +static unsigned g_compressionLevel = DEFAULT_COMPRESSION_LEVEL; +static UTIL_time_t g_startTime; +static size_t g_streamedSize = 0; +static unsigned g_useProgressBar = 1; +static UTIL_freq_t g_ticksPerSecond; +static unsigned g_forceCompressionLevel = 0; +static unsigned g_minCLevel = 1; +static unsigned g_maxCLevel; + +typedef struct { + void* start; + size_t size; + size_t capacity; +} buffer_t; + +typedef struct { + size_t filled; + buffer_t buffer; +} inBuff_t; + +typedef struct { + buffer_t src; + buffer_t dst; + unsigned jobID; + unsigned lastJobPlusOne; + size_t compressedSize; + size_t dictSize; +} jobDescription; + +typedef struct { + pthread_mutex_t pMutex; + int noError; +} mutex_t; + +typedef struct { + pthread_cond_t pCond; + int noError; +} cond_t; + +typedef struct { + unsigned compressionLevel; + unsigned numJobs; + unsigned nextJobID; + unsigned threadError; + + /* + * JobIDs for the next jobs to be created, compressed, and written + */ + unsigned jobReadyID; + unsigned jobCompressedID; + unsigned jobWriteID; + unsigned allJobsCompleted; + + /* + * counter for how many jobs in a row the compression level has not changed + * if the counter becomes >= CONVERGENCE_LOWER_BOUND, the next time the + * compression level tries to change (by non-zero amount) resets the counter + * to 1 and does not apply the change + */ + unsigned convergenceCounter; + + /* + * cooldown counter in order to prevent rapid successive decreases in compression level + * whenever compression level is decreased, cooldown is set to CLEVEL_DECREASE_COOLDOWN + * whenever adaptCompressionLevel() is called and cooldown != 0, it is decremented + * as long as cooldown != 0, the compression level cannot be decreased + */ + unsigned cooldown; + + /* + * XWaitYCompletion + * Range from 0.0 to 1.0 + * if the value is not 1.0, then this implies that thread X waited on thread Y to finish + * and thread Y was XWaitYCompletion finished at the time of the wait (i.e. compressWaitWriteCompletion=0.5 + * implies that the compression thread waited on the write thread and it was only 50% finished writing a job) + */ + double createWaitCompressionCompletion; + double compressWaitCreateCompletion; + double compressWaitWriteCompletion; + double writeWaitCompressionCompletion; + + /* + * Completion values + * Range from 0.0 to 1.0 + * Jobs are divided into mini-chunks in order to measure completion + * these values are updated each time a thread finishes its operation on the + * mini-chunk (i.e. finishes writing out, compressing, etc. this mini-chunk). 
+ */ + double compressionCompletion; + double writeCompletion; + double createCompletion; + + mutex_t jobCompressed_mutex; + cond_t jobCompressed_cond; + mutex_t jobReady_mutex; + cond_t jobReady_cond; + mutex_t allJobsCompleted_mutex; + cond_t allJobsCompleted_cond; + mutex_t jobWrite_mutex; + cond_t jobWrite_cond; + mutex_t compressionCompletion_mutex; + mutex_t createCompletion_mutex; + mutex_t writeCompletion_mutex; + mutex_t compressionLevel_mutex; + size_t lastDictSize; + inBuff_t input; + jobDescription* jobs; + ZSTD_CCtx* cctx; +} adaptCCtx; + +typedef struct { + adaptCCtx* ctx; + FILE* dstFile; +} outputThreadArg; + +typedef struct { + FILE* srcFile; + adaptCCtx* ctx; + outputThreadArg* otArg; +} fcResources; + +static void freeCompressionJobs(adaptCCtx* ctx) +{ + unsigned u; + for (u=0; unumJobs; u++) { + jobDescription job = ctx->jobs[u]; + free(job.dst.start); + free(job.src.start); + } +} + +static int destroyMutex(mutex_t* mutex) +{ + if (mutex->noError) { + int const ret = pthread_mutex_destroy(&mutex->pMutex); + return ret; + } + return 0; +} + +static int destroyCond(cond_t* cond) +{ + if (cond->noError) { + int const ret = pthread_cond_destroy(&cond->pCond); + return ret; + } + return 0; +} + +static int freeCCtx(adaptCCtx* ctx) +{ + if (!ctx) return 0; + { + int error = 0; + error |= destroyMutex(&ctx->jobCompressed_mutex); + error |= destroyCond(&ctx->jobCompressed_cond); + error |= destroyMutex(&ctx->jobReady_mutex); + error |= destroyCond(&ctx->jobReady_cond); + error |= destroyMutex(&ctx->allJobsCompleted_mutex); + error |= destroyCond(&ctx->allJobsCompleted_cond); + error |= destroyMutex(&ctx->jobWrite_mutex); + error |= destroyCond(&ctx->jobWrite_cond); + error |= destroyMutex(&ctx->compressionCompletion_mutex); + error |= destroyMutex(&ctx->createCompletion_mutex); + error |= destroyMutex(&ctx->writeCompletion_mutex); + error |= destroyMutex(&ctx->compressionLevel_mutex); + error |= ZSTD_isError(ZSTD_freeCCtx(ctx->cctx)); + free(ctx->input.buffer.start); + if (ctx->jobs){ + freeCompressionJobs(ctx); + free(ctx->jobs); + } + free(ctx); + return error; + } +} + +static int initMutex(mutex_t* mutex) +{ + int const ret = pthread_mutex_init(&mutex->pMutex, NULL); + mutex->noError = !ret; + return ret; +} + +static int initCond(cond_t* cond) +{ + int const ret = pthread_cond_init(&cond->pCond, NULL); + cond->noError = !ret; + return ret; +} + +static int initCCtx(adaptCCtx* ctx, unsigned numJobs) +{ + ctx->compressionLevel = g_compressionLevel; + { + int pthreadError = 0; + pthreadError |= initMutex(&ctx->jobCompressed_mutex); + pthreadError |= initCond(&ctx->jobCompressed_cond); + pthreadError |= initMutex(&ctx->jobReady_mutex); + pthreadError |= initCond(&ctx->jobReady_cond); + pthreadError |= initMutex(&ctx->allJobsCompleted_mutex); + pthreadError |= initCond(&ctx->allJobsCompleted_cond); + pthreadError |= initMutex(&ctx->jobWrite_mutex); + pthreadError |= initCond(&ctx->jobWrite_cond); + pthreadError |= initMutex(&ctx->compressionCompletion_mutex); + pthreadError |= initMutex(&ctx->createCompletion_mutex); + pthreadError |= initMutex(&ctx->writeCompletion_mutex); + pthreadError |= initMutex(&ctx->compressionLevel_mutex); + if (pthreadError) return pthreadError; + } + ctx->numJobs = numJobs; + ctx->jobReadyID = 0; + ctx->jobCompressedID = 0; + ctx->jobWriteID = 0; + ctx->lastDictSize = 0; + + + ctx->createWaitCompressionCompletion = 1; + ctx->compressWaitCreateCompletion = 1; + ctx->compressWaitWriteCompletion = 1; + ctx->writeWaitCompressionCompletion = 1; + 
ctx->createCompletion = 1; + ctx->writeCompletion = 1; + ctx->compressionCompletion = 1; + ctx->convergenceCounter = 0; + ctx->cooldown = 0; + + ctx->jobs = calloc(1, numJobs*sizeof(jobDescription)); + + if (!ctx->jobs) { + DISPLAY("Error: could not allocate space for jobs during context creation\n"); + return 1; + } + + /* initializing jobs */ + { + unsigned jobNum; + for (jobNum=0; jobNumjobs[jobNum]; + job->src.start = malloc(2 * FILE_CHUNK_SIZE); + job->dst.start = malloc(ZSTD_compressBound(FILE_CHUNK_SIZE)); + job->lastJobPlusOne = 0; + if (!job->src.start || !job->dst.start) { + DISPLAY("Could not allocate buffers for jobs\n"); + return 1; + } + job->src.capacity = FILE_CHUNK_SIZE; + job->dst.capacity = ZSTD_compressBound(FILE_CHUNK_SIZE); + } + } + + ctx->nextJobID = 0; + ctx->threadError = 0; + ctx->allJobsCompleted = 0; + + ctx->cctx = ZSTD_createCCtx(); + if (!ctx->cctx) { + DISPLAY("Error: could not allocate ZSTD_CCtx\n"); + return 1; + } + + ctx->input.filled = 0; + ctx->input.buffer.capacity = 2 * FILE_CHUNK_SIZE; + + ctx->input.buffer.start = malloc(ctx->input.buffer.capacity); + if (!ctx->input.buffer.start) { + DISPLAY("Error: could not allocate input buffer\n"); + return 1; + } + return 0; +} + +static adaptCCtx* createCCtx(unsigned numJobs) +{ + + adaptCCtx* const ctx = calloc(1, sizeof(adaptCCtx)); + if (ctx == NULL) { + DISPLAY("Error: could not allocate space for context\n"); + return NULL; + } + { + int const error = initCCtx(ctx, numJobs); + if (error) { + freeCCtx(ctx); + return NULL; + } + return ctx; + } +} + +static void signalErrorToThreads(adaptCCtx* ctx) +{ + ctx->threadError = 1; + pthread_mutex_lock(&ctx->jobReady_mutex.pMutex); + pthread_cond_signal(&ctx->jobReady_cond.pCond); + pthread_mutex_unlock(&ctx->jobReady_mutex.pMutex); + + pthread_mutex_lock(&ctx->jobCompressed_mutex.pMutex); + pthread_cond_broadcast(&ctx->jobCompressed_cond.pCond); + pthread_mutex_unlock(&ctx->jobReady_mutex.pMutex); + + pthread_mutex_lock(&ctx->jobWrite_mutex.pMutex); + pthread_cond_signal(&ctx->jobWrite_cond.pCond); + pthread_mutex_unlock(&ctx->jobWrite_mutex.pMutex); + + pthread_mutex_lock(&ctx->allJobsCompleted_mutex.pMutex); + pthread_cond_signal(&ctx->allJobsCompleted_cond.pCond); + pthread_mutex_unlock(&ctx->allJobsCompleted_mutex.pMutex); +} + +static void waitUntilAllJobsCompleted(adaptCCtx* ctx) +{ + if (!ctx) return; + pthread_mutex_lock(&ctx->allJobsCompleted_mutex.pMutex); + while (ctx->allJobsCompleted == 0 && !ctx->threadError) { + pthread_cond_wait(&ctx->allJobsCompleted_cond.pCond, &ctx->allJobsCompleted_mutex.pMutex); + } + pthread_mutex_unlock(&ctx->allJobsCompleted_mutex.pMutex); +} + +/* map completion percentages to values for changing compression level */ +static unsigned convertCompletionToChange(double completion) +{ + if (completion < CHANGE_BY_TWO_THRESHOLD) { + return 2; + } + else if (completion < CHANGE_BY_ONE_THRESHOLD) { + return 1; + } + else { + return 0; + } +} + +/* + * Compression level is changed depending on which part of the compression process is lagging + * Currently, three theads exist for job creation, compression, and file writing respectively. 
+ * adaptCompressionLevel() increments or decrements compression level based on which of the threads is lagging + * job creation or file writing lag => increased compression level + * compression thread lag => decreased compression level + * detecting which thread is lagging is done by keeping track of how many calls each thread makes to pthread_cond_wait + */ +static void adaptCompressionLevel(adaptCCtx* ctx) +{ + double createWaitCompressionCompletion; + double compressWaitCreateCompletion; + double compressWaitWriteCompletion; + double writeWaitCompressionCompletion; + double const threshold = 0.00001; + unsigned prevCompressionLevel; + + pthread_mutex_lock(&ctx->compressionLevel_mutex.pMutex); + prevCompressionLevel = ctx->compressionLevel; + pthread_mutex_unlock(&ctx->compressionLevel_mutex.pMutex); + + + if (g_forceCompressionLevel) { + pthread_mutex_lock(&ctx->compressionLevel_mutex.pMutex); + ctx->compressionLevel = g_compressionLevel; + pthread_mutex_unlock(&ctx->compressionLevel_mutex.pMutex); + return; + } + + + DEBUG(2, "adapting compression level %u\n", prevCompressionLevel); + + /* read and reset completion measurements */ + pthread_mutex_lock(&ctx->compressionCompletion_mutex.pMutex); + DEBUG(2, "createWaitCompressionCompletion %f\n", ctx->createWaitCompressionCompletion); + DEBUG(2, "writeWaitCompressionCompletion %f\n", ctx->writeWaitCompressionCompletion); + createWaitCompressionCompletion = ctx->createWaitCompressionCompletion; + writeWaitCompressionCompletion = ctx->writeWaitCompressionCompletion; + pthread_mutex_unlock(&ctx->compressionCompletion_mutex.pMutex); + + pthread_mutex_lock(&ctx->writeCompletion_mutex.pMutex); + DEBUG(2, "compressWaitWriteCompletion %f\n", ctx->compressWaitWriteCompletion); + compressWaitWriteCompletion = ctx->compressWaitWriteCompletion; + pthread_mutex_unlock(&ctx->writeCompletion_mutex.pMutex); + + pthread_mutex_lock(&ctx->createCompletion_mutex.pMutex); + DEBUG(2, "compressWaitCreateCompletion %f\n", ctx->compressWaitCreateCompletion); + compressWaitCreateCompletion = ctx->compressWaitCreateCompletion; + pthread_mutex_unlock(&ctx->createCompletion_mutex.pMutex); + DEBUG(2, "convergence counter: %u\n", ctx->convergenceCounter); + + assert(g_minCLevel <= prevCompressionLevel && g_maxCLevel >= prevCompressionLevel); + + /* adaptation logic */ + if (ctx->cooldown) ctx->cooldown--; + + if ((1-createWaitCompressionCompletion > threshold || 1-writeWaitCompressionCompletion > threshold) && ctx->cooldown == 0) { + /* create or write waiting on compression */ + /* use whichever one waited less because it was slower */ + double const completion = MAX(createWaitCompressionCompletion, writeWaitCompressionCompletion); + unsigned const change = convertCompletionToChange(completion); + unsigned const boundChange = MIN(change, prevCompressionLevel - g_minCLevel); + if (ctx->convergenceCounter >= CONVERGENCE_LOWER_BOUND && boundChange != 0) { + /* reset convergence counter, might have been a spike */ + ctx->convergenceCounter = 0; + DEBUG(2, "convergence counter reset, no change applied\n"); + } + else if (boundChange != 0) { + pthread_mutex_lock(&ctx->compressionLevel_mutex.pMutex); + ctx->compressionLevel -= boundChange; + pthread_mutex_unlock(&ctx->compressionLevel_mutex.pMutex); + ctx->cooldown = CLEVEL_DECREASE_COOLDOWN; + ctx->convergenceCounter = 1; + + DEBUG(2, "create or write threads waiting on compression, tried to decrease compression level by %u\n\n", boundChange); + } + } + else if (1-compressWaitWriteCompletion > threshold || 
1-compressWaitCreateCompletion > threshold) { + /* compress waiting on write */ + double const completion = MIN(compressWaitWriteCompletion, compressWaitCreateCompletion); + unsigned const change = convertCompletionToChange(completion); + unsigned const boundChange = MIN(change, g_maxCLevel - prevCompressionLevel); + if (ctx->convergenceCounter >= CONVERGENCE_LOWER_BOUND && boundChange != 0) { + /* reset convergence counter, might have been a spike */ + ctx->convergenceCounter = 0; + DEBUG(2, "convergence counter reset, no change applied\n"); + } + else if (boundChange != 0) { + pthread_mutex_lock(&ctx->compressionLevel_mutex.pMutex); + ctx->compressionLevel += boundChange; + pthread_mutex_unlock(&ctx->compressionLevel_mutex.pMutex); + ctx->cooldown = 0; + ctx->convergenceCounter = 1; + + DEBUG(2, "compress waiting on write or create, tried to increase compression level by %u\n\n", boundChange); + } + + } + + pthread_mutex_lock(&ctx->compressionLevel_mutex.pMutex); + if (ctx->compressionLevel == prevCompressionLevel) { + ctx->convergenceCounter++; + } + pthread_mutex_unlock(&ctx->compressionLevel_mutex.pMutex); +} + +static size_t getUseableDictSize(unsigned compressionLevel) +{ + ZSTD_parameters const params = ZSTD_getParams(compressionLevel, 0, 0); + unsigned const overlapLog = compressionLevel >= (unsigned)ZSTD_maxCLevel() ? 0 : 3; + size_t const overlapSize = 1 << (params.cParams.windowLog - overlapLog); + return overlapSize; +} + +static void* compressionThread(void* arg) +{ + adaptCCtx* const ctx = (adaptCCtx*)arg; + unsigned currJob = 0; + for ( ; ; ) { + unsigned const currJobIndex = currJob % ctx->numJobs; + jobDescription* const job = &ctx->jobs[currJobIndex]; + DEBUG(2, "starting compression for job %u\n", currJob); + + { + /* check if compression thread will have to wait */ + unsigned willWaitForCreate = 0; + unsigned willWaitForWrite = 0; + + pthread_mutex_lock(&ctx->jobReady_mutex.pMutex); + if (currJob + 1 > ctx->jobReadyID) willWaitForCreate = 1; + pthread_mutex_unlock(&ctx->jobReady_mutex.pMutex); + + pthread_mutex_lock(&ctx->jobWrite_mutex.pMutex); + if (currJob - ctx->jobWriteID >= ctx->numJobs) willWaitForWrite = 1; + pthread_mutex_unlock(&ctx->jobWrite_mutex.pMutex); + + + pthread_mutex_lock(&ctx->createCompletion_mutex.pMutex); + if (willWaitForCreate) { + DEBUG(2, "compression will wait for create on job %u\n", currJob); + ctx->compressWaitCreateCompletion = ctx->createCompletion; + DEBUG(2, "create completion %f\n", ctx->compressWaitCreateCompletion); + + } + else { + ctx->compressWaitCreateCompletion = 1; + } + pthread_mutex_unlock(&ctx->createCompletion_mutex.pMutex); + + pthread_mutex_lock(&ctx->writeCompletion_mutex.pMutex); + if (willWaitForWrite) { + DEBUG(2, "compression will wait for write on job %u\n", currJob); + ctx->compressWaitWriteCompletion = ctx->writeCompletion; + DEBUG(2, "write completion %f\n", ctx->compressWaitWriteCompletion); + } + else { + ctx->compressWaitWriteCompletion = 1; + } + pthread_mutex_unlock(&ctx->writeCompletion_mutex.pMutex); + + } + + /* wait until job is ready */ + pthread_mutex_lock(&ctx->jobReady_mutex.pMutex); + while (currJob + 1 > ctx->jobReadyID && !ctx->threadError) { + pthread_cond_wait(&ctx->jobReady_cond.pCond, &ctx->jobReady_mutex.pMutex); + } + pthread_mutex_unlock(&ctx->jobReady_mutex.pMutex); + + /* wait until job previously in this space is written */ + pthread_mutex_lock(&ctx->jobWrite_mutex.pMutex); + while (currJob - ctx->jobWriteID >= ctx->numJobs && !ctx->threadError) { + 
pthread_cond_wait(&ctx->jobWrite_cond.pCond, &ctx->jobWrite_mutex.pMutex); + } + pthread_mutex_unlock(&ctx->jobWrite_mutex.pMutex); + /* reset compression completion */ + pthread_mutex_lock(&ctx->compressionCompletion_mutex.pMutex); + ctx->compressionCompletion = 0; + pthread_mutex_unlock(&ctx->compressionCompletion_mutex.pMutex); + + /* adapt compression level */ + if (currJob) adaptCompressionLevel(ctx); + + pthread_mutex_lock(&ctx->compressionLevel_mutex.pMutex); + DEBUG(2, "job %u compressed with level %u\n", currJob, ctx->compressionLevel); + pthread_mutex_unlock(&ctx->compressionLevel_mutex.pMutex); + + /* compress the data */ + { + size_t const compressionBlockSize = ZSTD_BLOCKSIZE_MAX; /* 128 KB */ + unsigned cLevel; + unsigned blockNum = 0; + size_t remaining = job->src.size; + size_t srcPos = 0; + size_t dstPos = 0; + + pthread_mutex_lock(&ctx->compressionLevel_mutex.pMutex); + cLevel = ctx->compressionLevel; + pthread_mutex_unlock(&ctx->compressionLevel_mutex.pMutex); + + /* reset compressed size */ + job->compressedSize = 0; + DEBUG(2, "calling ZSTD_compressBegin()\n"); + /* begin compression */ + { + size_t const useDictSize = MIN(getUseableDictSize(cLevel), job->dictSize); + size_t const dictModeError = ZSTD_setCCtxParameter(ctx->cctx, ZSTD_p_forceRawDict, 1); + ZSTD_parameters params = ZSTD_getParams(cLevel, 0, useDictSize); + params.cParams.windowLog = 23; + { *** DIFF OUTPUT TRUNCATED AT 1000 LINES *** From owner-svn-src-vendor@freebsd.org Mon Oct 22 19:46:36 2018 Return-Path: Delivered-To: svn-src-vendor@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 1283C105ECC1; Mon, 22 Oct 2018 19:46:36 +0000 (UTC) (envelope-from cem@FreeBSD.org) Received: from mxrelay.nyi.freebsd.org (mxrelay.nyi.freebsd.org [IPv6:2610:1c1:1:606c::19:3]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "mxrelay.nyi.freebsd.org", Issuer "Let's Encrypt Authority X3" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id BD72C8F4BF; Mon, 22 Oct 2018 19:46:35 +0000 (UTC) (envelope-from cem@FreeBSD.org) Received: from repo.freebsd.org (repo.freebsd.org [IPv6:2610:1c1:1:6068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mxrelay.nyi.freebsd.org (Postfix) with ESMTPS id 9E06824DCA; Mon, 22 Oct 2018 19:46:35 +0000 (UTC) (envelope-from cem@FreeBSD.org) Received: from repo.freebsd.org ([127.0.1.37]) by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id w9MJkZ10075938; Mon, 22 Oct 2018 19:46:35 GMT (envelope-from cem@FreeBSD.org) Received: (from cem@localhost) by repo.freebsd.org (8.15.2/8.15.2/Submit) id w9MJkZgK075937; Mon, 22 Oct 2018 19:46:35 GMT (envelope-from cem@FreeBSD.org) Message-Id: <201810221946.w9MJkZgK075937@repo.freebsd.org> X-Authentication-Warning: repo.freebsd.org: cem set sender to cem@FreeBSD.org using -f From: Conrad Meyer Date: Mon, 22 Oct 2018 19:46:35 +0000 (UTC) To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-vendor@freebsd.org Subject: svn commit: r339611 - vendor/zstd/1.3.3 X-SVN-Group: vendor X-SVN-Commit-Author: cem X-SVN-Commit-Paths: vendor/zstd/1.3.3 X-SVN-Commit-Revision: 339611 X-SVN-Commit-Repository: base MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-src-vendor@freebsd.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: SVN commit messages for the vendor work area tree List-Unsubscribe: 
Author: cem
Date: Mon Oct 22 19:46:35 2018
New Revision: 339611
URL: https://svnweb.freebsd.org/changeset/base/339611

Log:
  tag import of zstd 1.3.3

Added:
  vendor/zstd/1.3.3/
     - copied from r339610, vendor/zstd/dist/

From: Conrad Meyer <cem@FreeBSD.org>
Date: Mon, 22 Oct 2018 19:50:43 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-vendor@freebsd.org
Subject: svn commit: r339612 - in vendor/zstd/dist: . contrib/adaptive-compression contrib/gen_html contrib/long_distance_matching contrib/meson contrib/seekable_format doc doc/images lib lib/common lib/com...
Message-Id: <201810221950.w9MJohXr076158@repo.freebsd.org>
X-SVN-Group: vendor
X-SVN-Commit-Author: cem
X-SVN-Commit-Paths: in vendor/zstd/dist: . contrib/adaptive-compression contrib/gen_html contrib/long_distance_matching contrib/meson contrib/seekable_format doc doc/images lib lib/common lib/compress lib/decompress lib/...
X-SVN-Commit-Revision: 339612 X-SVN-Commit-Repository: base MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-src-vendor@freebsd.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: SVN commit messages for the vendor work area tree List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 22 Oct 2018 19:50:46 -0000 Author: cem Date: Mon Oct 22 19:50:43 2018 New Revision: 339612 URL: https://svnweb.freebsd.org/changeset/base/339612 Log: import zstd 1.3.4 Added: vendor/zstd/dist/doc/images/CSpeed2.png (contents, props changed) vendor/zstd/dist/doc/images/DSpeed3.png (contents, props changed) vendor/zstd/dist/doc/images/linux-4.7-12-mt-compress.png (contents, props changed) vendor/zstd/dist/doc/images/linux-git-mt-compress.png (contents, props changed) vendor/zstd/dist/lib/common/cpu.h (contents, props changed) vendor/zstd/dist/tests/checkTag.c (contents, props changed) Deleted: vendor/zstd/dist/contrib/long_distance_matching/ vendor/zstd/dist/doc/images/Cspeed4.png vendor/zstd/dist/doc/images/Dspeed4.png vendor/zstd/dist/tests/namespaceTest.c Modified: vendor/zstd/dist/Makefile vendor/zstd/dist/NEWS vendor/zstd/dist/README.md vendor/zstd/dist/appveyor.yml vendor/zstd/dist/contrib/adaptive-compression/adapt.c vendor/zstd/dist/contrib/gen_html/Makefile vendor/zstd/dist/contrib/meson/meson.build vendor/zstd/dist/contrib/meson/meson_options.txt vendor/zstd/dist/contrib/seekable_format/zstdseek_compress.c vendor/zstd/dist/contrib/seekable_format/zstdseek_decompress.c vendor/zstd/dist/doc/README.md vendor/zstd/dist/doc/images/dict-cr.png vendor/zstd/dist/doc/images/dict-cs.png vendor/zstd/dist/doc/images/dict-ds.png vendor/zstd/dist/doc/zstd_compression_format.md vendor/zstd/dist/doc/zstd_manual.html vendor/zstd/dist/lib/BUCK vendor/zstd/dist/lib/README.md vendor/zstd/dist/lib/common/bitstream.h vendor/zstd/dist/lib/common/compiler.h vendor/zstd/dist/lib/common/error_private.c vendor/zstd/dist/lib/common/fse.h vendor/zstd/dist/lib/common/fse_decompress.c vendor/zstd/dist/lib/common/huf.h vendor/zstd/dist/lib/common/pool.c vendor/zstd/dist/lib/common/pool.h vendor/zstd/dist/lib/common/threading.h vendor/zstd/dist/lib/common/zstd_errors.h vendor/zstd/dist/lib/common/zstd_internal.h vendor/zstd/dist/lib/compress/fse_compress.c vendor/zstd/dist/lib/compress/huf_compress.c vendor/zstd/dist/lib/compress/zstd_compress.c vendor/zstd/dist/lib/compress/zstd_compress_internal.h vendor/zstd/dist/lib/compress/zstd_double_fast.c vendor/zstd/dist/lib/compress/zstd_double_fast.h vendor/zstd/dist/lib/compress/zstd_fast.c vendor/zstd/dist/lib/compress/zstd_fast.h vendor/zstd/dist/lib/compress/zstd_lazy.c vendor/zstd/dist/lib/compress/zstd_lazy.h vendor/zstd/dist/lib/compress/zstd_ldm.c vendor/zstd/dist/lib/compress/zstd_ldm.h vendor/zstd/dist/lib/compress/zstd_opt.c vendor/zstd/dist/lib/compress/zstd_opt.h vendor/zstd/dist/lib/compress/zstdmt_compress.c vendor/zstd/dist/lib/compress/zstdmt_compress.h vendor/zstd/dist/lib/decompress/huf_decompress.c vendor/zstd/dist/lib/decompress/zstd_decompress.c vendor/zstd/dist/lib/dictBuilder/cover.c vendor/zstd/dist/lib/dictBuilder/zdict.c vendor/zstd/dist/lib/dictBuilder/zdict.h vendor/zstd/dist/lib/legacy/zstd_legacy.h vendor/zstd/dist/lib/legacy/zstd_v04.c vendor/zstd/dist/lib/legacy/zstd_v06.c vendor/zstd/dist/lib/legacy/zstd_v07.c vendor/zstd/dist/lib/zstd.h vendor/zstd/dist/programs/Makefile vendor/zstd/dist/programs/README.md vendor/zstd/dist/programs/bench.c 
vendor/zstd/dist/programs/bench.h vendor/zstd/dist/programs/fileio.c vendor/zstd/dist/programs/fileio.h vendor/zstd/dist/programs/platform.h vendor/zstd/dist/programs/util.h vendor/zstd/dist/programs/zstd.1 vendor/zstd/dist/programs/zstd.1.md vendor/zstd/dist/programs/zstdcli.c vendor/zstd/dist/tests/.gitignore vendor/zstd/dist/tests/Makefile vendor/zstd/dist/tests/fullbench.c vendor/zstd/dist/tests/fuzz/zstd_helpers.c vendor/zstd/dist/tests/fuzzer.c vendor/zstd/dist/tests/legacy.c vendor/zstd/dist/tests/paramgrill.c vendor/zstd/dist/tests/playTests.sh vendor/zstd/dist/tests/roundTripCrash.c vendor/zstd/dist/tests/zstreamtest.c vendor/zstd/dist/zlibWrapper/examples/zwrapbench.c Modified: vendor/zstd/dist/Makefile ============================================================================== --- vendor/zstd/dist/Makefile Mon Oct 22 19:46:35 2018 (r339611) +++ vendor/zstd/dist/Makefile Mon Oct 22 19:50:43 2018 (r339612) @@ -27,7 +27,7 @@ endif default: lib-release zstd-release .PHONY: all -all: | allmost examples manual +all: | allmost examples manual contrib .PHONY: allmost allmost: allzstd @@ -72,14 +72,18 @@ zstdmt: zlibwrapper: $(MAKE) -C $(ZWRAPDIR) test -.PHONY: check -check: shortest +.PHONY: test +test: + $(MAKE) -C $(PRGDIR) allVariants MOREFLAGS+="-g -DZSTD_DEBUG=1" + $(MAKE) -C $(TESTDIR) $@ -.PHONY: test shortest -test shortest: - $(MAKE) -C $(PRGDIR) allVariants MOREFLAGS="-g -DZSTD_DEBUG=1" +.PHONY: shortest +shortest: $(MAKE) -C $(TESTDIR) $@ +.PHONY: check +check: shortest + .PHONY: examples examples: CPPFLAGS=-I../lib LDFLAGS=-L../lib $(MAKE) -C examples/ all @@ -88,6 +92,12 @@ examples: manual: $(MAKE) -C contrib/gen_html $@ +.PHONY: contrib +contrib: lib + $(MAKE) -C contrib/pzstd all + $(MAKE) -C contrib/seekable_format/examples all + $(MAKE) -C contrib/adaptive-compression all + .PHONY: cleanTabs cleanTabs: cd contrib; ./cleanTabs @@ -100,6 +110,9 @@ clean: @$(MAKE) -C $(ZWRAPDIR) $@ > $(VOID) @$(MAKE) -C examples/ $@ > $(VOID) @$(MAKE) -C contrib/gen_html $@ > $(VOID) + @$(MAKE) -C contrib/pzstd $@ > $(VOID) + @$(MAKE) -C contrib/seekable_format/examples $@ > $(VOID) + @$(MAKE) -C contrib/adaptive-compression $@ > $(VOID) @$(RM) zstd$(EXT) zstdmt$(EXT) tmp* @$(RM) -r lz4 @echo Cleaning completed @@ -231,31 +244,31 @@ msanregressiontest: # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63303 usan: clean - $(MAKE) test CC=clang MOREFLAGS="-g -fno-sanitize-recover=all -fsanitize-recover=signed-integer-overflow -fsanitize=undefined" + $(MAKE) test CC=clang MOREFLAGS="-g -fno-sanitize-recover=all -fsanitize-recover=signed-integer-overflow -fsanitize=undefined -Werror" asan: clean - $(MAKE) test CC=clang MOREFLAGS="-g -fsanitize=address" + $(MAKE) test CC=clang MOREFLAGS="-g -fsanitize=address -Werror" asan-%: clean - LDFLAGS=-fuse-ld=gold MOREFLAGS="-g -fno-sanitize-recover=all -fsanitize=address" $(MAKE) -C $(TESTDIR) $* + LDFLAGS=-fuse-ld=gold MOREFLAGS="-g -fno-sanitize-recover=all -fsanitize=address -Werror" $(MAKE) -C $(TESTDIR) $* msan: clean - $(MAKE) test CC=clang MOREFLAGS="-g -fsanitize=memory -fno-omit-frame-pointer" HAVE_LZMA=0 # datagen.c fails this test for no obvious reason + $(MAKE) test CC=clang MOREFLAGS="-g -fsanitize=memory -fno-omit-frame-pointer -Werror" HAVE_LZMA=0 # datagen.c fails this test for no obvious reason msan-%: clean - LDFLAGS=-fuse-ld=gold MOREFLAGS="-g -fno-sanitize-recover=all -fsanitize=memory -fno-omit-frame-pointer" FUZZER_FLAGS=--no-big-tests $(MAKE) -C $(TESTDIR) HAVE_LZMA=0 $* + LDFLAGS=-fuse-ld=gold MOREFLAGS="-g 
-fno-sanitize-recover=all -fsanitize=memory -fno-omit-frame-pointer -Werror" FUZZER_FLAGS=--no-big-tests $(MAKE) -C $(TESTDIR) HAVE_LZMA=0 $* asan32: clean $(MAKE) -C $(TESTDIR) test32 CC=clang MOREFLAGS="-g -fsanitize=address" uasan: clean - $(MAKE) test CC=clang MOREFLAGS="-g -fno-sanitize-recover=all -fsanitize-recover=signed-integer-overflow -fsanitize=address,undefined" + $(MAKE) test CC=clang MOREFLAGS="-g -fno-sanitize-recover=all -fsanitize-recover=signed-integer-overflow -fsanitize=address,undefined -Werror" uasan-%: clean - LDFLAGS=-fuse-ld=gold MOREFLAGS="-g -fno-sanitize-recover=all -fsanitize-recover=signed-integer-overflow -fsanitize=address,undefined" $(MAKE) -C $(TESTDIR) $* + LDFLAGS=-fuse-ld=gold MOREFLAGS="-g -fno-sanitize-recover=all -fsanitize-recover=signed-integer-overflow -fsanitize=address,undefined -Werror" $(MAKE) -C $(TESTDIR) $* tsan-%: clean - LDFLAGS=-fuse-ld=gold MOREFLAGS="-g -fno-sanitize-recover=all -fsanitize=thread" $(MAKE) -C $(TESTDIR) $* FUZZER_FLAGS=--no-big-tests + LDFLAGS=-fuse-ld=gold MOREFLAGS="-g -fno-sanitize-recover=all -fsanitize=thread -Werror" $(MAKE) -C $(TESTDIR) $* FUZZER_FLAGS=--no-big-tests apt-install: sudo apt-get -yq --no-install-suggests --no-install-recommends --force-yes install $(APT_PACKAGES) @@ -278,6 +291,9 @@ libc6install: gcc6install: apt-add-repo APT_PACKAGES="libc6-dev-i386 gcc-multilib gcc-6 gcc-6-multilib" $(MAKE) apt-install + +gcc7install: apt-add-repo + APT_PACKAGES="libc6-dev-i386 gcc-multilib gcc-7 gcc-7-multilib" $(MAKE) apt-install gpp6install: apt-add-repo APT_PACKAGES="libc6-dev-i386 g++-multilib gcc-6 g++-6 g++-6-multilib" $(MAKE) apt-install Modified: vendor/zstd/dist/NEWS ============================================================================== --- vendor/zstd/dist/NEWS Mon Oct 22 19:46:35 2018 (r339611) +++ vendor/zstd/dist/NEWS Mon Oct 22 19:50:43 2018 (r339612) @@ -1,5 +1,23 @@ +v1.3.4 +perf: faster speed (especially decoding speed) on recent cpus (haswell+) +perf: much better performance associating --long with multi-threading, by @terrelln +perf: better compression at levels 13-15 +cli : asynchronous compression by default, for faster experience (use --single-thread for former behavior) +cli : smoother status report in multi-threading mode +cli : added command --fast=#, for faster compression modes +cli : fix crash when not overwriting existing files, by Pádraig Brady (@pixelb) +api : `nbThreads` becomes `nbWorkers` : 1 triggers asynchronous mode +api : compression levels can be negative, for even more speed +api : ZSTD_getFrameProgression() : get precise progress status of ZSTDMT anytime +api : ZSTDMT can accept new compression parameters during compression +api : implemented all advanced dictionary decompression prototypes +build: improved meson recipe, by Shawn Landden (@shawnl) +build: VS2017 scripts, by @HaydnTrigg +misc: all /contrib projects fixed +misc: added /contrib/docker script by @gyscos + v1.3.3 -perf: faster zstd_opt strategy (levels 17-19) +perf: faster zstd_opt strategy (levels 16-19) fix : bug #944 : multithreading with shared ditionary and large data, reported by @gsliepen cli : fix : content size written in header by default cli : fix : improved LZ4 format support, by @felixhandte Modified: vendor/zstd/dist/README.md ============================================================================== --- vendor/zstd/dist/README.md Mon Oct 22 19:46:35 2018 (r339611) +++ vendor/zstd/dist/README.md Mon Oct 22 19:50:43 2018 (r339612) @@ -1,4 +1,4 @@ -
Zstandard
__Zstandard__, or `zstd` as short version, is a fast lossless compression algorithm, targeting real-time compression scenarios at zlib-level and better compression ratios. @@ -21,24 +21,25 @@ Development branch status : [![Build Status][travisDev ### Benchmarks For reference, several fast compression algorithms were tested and compared -on a server running Linux Debian (`Linux version 4.8.0-1-amd64`), +on a server running Linux Debian (`Linux version 4.14.0-3-amd64`), with a Core i7-6700K CPU @ 4.0GHz, using [lzbench], an open-source in-memory benchmark by @inikep -compiled with GCC 6.3.0, +compiled with [gcc] 7.3.0, on the [Silesia compression corpus]. [lzbench]: https://github.com/inikep/lzbench [Silesia compression corpus]: http://sun.aei.polsl.pl/~sdeor/index.php?page=silesia +[gcc]: https://gcc.gnu.org/ | Compressor name | Ratio | Compression| Decompress.| | --------------- | ------| -----------| ---------- | -| **zstd 1.1.3 -1** | 2.877 | 430 MB/s | 1110 MB/s | -| zlib 1.2.8 -1 | 2.743 | 110 MB/s | 400 MB/s | -| brotli 0.5.2 -0 | 2.708 | 400 MB/s | 430 MB/s | +| **zstd 1.3.4 -1** | 2.877 | 470 MB/s | 1380 MB/s | +| zlib 1.2.11 -1 | 2.743 | 110 MB/s | 400 MB/s | +| brotli 1.0.2 -0 | 2.701 | 410 MB/s | 430 MB/s | | quicklz 1.5.0 -1 | 2.238 | 550 MB/s | 710 MB/s | | lzo1x 2.09 -1 | 2.108 | 650 MB/s | 830 MB/s | -| lz4 1.7.5 | 2.101 | 720 MB/s | 3600 MB/s | -| snappy 1.1.3 | 2.091 | 500 MB/s | 1650 MB/s | +| lz4 1.8.1 | 2.101 | 750 MB/s | 3700 MB/s | +| snappy 1.1.4 | 2.091 | 530 MB/s | 1800 MB/s | | lzf 3.6 -1 | 2.077 | 400 MB/s | 860 MB/s | [zlib]:http://www.zlib.net/ @@ -50,15 +51,15 @@ Decompression speed is preserved and remains roughly t a property shared by most LZ compression algorithms, such as [zlib] or lzma. The following tests were run -on a server running Linux Debian (`Linux version 4.8.0-1-amd64`) +on a server running Linux Debian (`Linux version 4.14.0-3-amd64`) with a Core i7-6700K CPU @ 4.0GHz, using [lzbench], an open-source in-memory benchmark by @inikep -compiled with GCC 6.3.0, +compiled with [gcc] 7.3.0, on the [Silesia compression corpus]. Compression Speed vs Ratio | Decompression Speed ---------------------------|-------------------- -![Compression Speed vs Ratio](doc/images/Cspeed4.png "Compression Speed vs Ratio") | ![Decompression Speed](doc/images/Dspeed4.png "Decompression Speed") +![Compression Speed vs Ratio](doc/images/CSpeed2.png "Compression Speed vs Ratio") | ![Decompression Speed](doc/images/DSpeed3.png "Decompression Speed") A few other algorithms can produce higher compression ratios at slower speeds, falling outside of the graph. For a larger picture including slow modes, [click on this link](doc/images/DCspeed5.png). @@ -128,8 +129,8 @@ A Meson project is provided within `contrib/meson`. Going into `build` directory, you will find additional possibilities: - Projects for Visual Studio 2005, 2008 and 2010. - + VS2010 project is compatible with VS2012, VS2013 and VS2015. -- Automated build scripts for Visual compiler by @KrzysFR , in `build/VS_scripts`, + + VS2010 project is compatible with VS2012, VS2013, VS2015 and VS2017. +- Automated build scripts for Visual compiler by [@KrzysFR](https://github.com/KrzysFR), in `build/VS_scripts`, which will build `zstd` cli and `libzstd` library without any need to open Visual Studio solution. 
Modified: vendor/zstd/dist/appveyor.yml ============================================================================== --- vendor/zstd/dist/appveyor.yml Mon Oct 22 19:46:35 2018 (r339611) +++ vendor/zstd/dist/appveyor.yml Mon Oct 22 19:50:43 2018 (r339612) @@ -2,14 +2,13 @@ version: 1.0.{build} branches: only: - - dev - master environment: matrix: - COMPILER: "gcc" HOST: "mingw" PLATFORM: "x64" - SCRIPT: "make allzstd MOREFLAGS=-static && make -C tests test-symbols fullbench-dll fullbench-lib" + SCRIPT: "make allzstd MOREFLAGS=-static && make -C tests test-symbols fullbench-lib" ARTIFACT: "true" BUILD: "true" - COMPILER: "gcc" @@ -80,12 +79,22 @@ SET "LDFLAGS=../../zlib/libz.a" && sh -c "%SCRIPT%" && ( if [%COMPILER%]==[gcc] if [%ARTIFACT%]==[true] + ECHO Creating artifacts && + ECHO %cd% && lib\dll\example\build_package.bat && make -C programs DEBUGFLAGS= clean zstd && cd programs\ && 7z a -tzip -mx9 zstd-win-binary-%PLATFORM%.zip zstd.exe && appveyor PushArtifact zstd-win-binary-%PLATFORM%.zip && cp zstd.exe ..\bin\zstd.exe && - cd ..\bin\ && 7z a -tzip -mx9 zstd-win-release-%PLATFORM%.zip * && + git clone --depth 1 --branch master https://github.com/facebook/zstd && + cd zstd && + git archive --format=tar master -o zstd-src.tar && + ..\zstd -19 zstd-src.tar && + appveyor PushArtifact zstd-src.tar.zst && + certUtil -hashfile zstd-src.tar.zst SHA256 > zstd-src.tar.zst.sha256.sig && + appveyor PushArtifact zstd-src.tar.zst.sha256.sig && + cd ..\..\bin\ && + 7z a -tzip -mx9 zstd-win-release-%PLATFORM%.zip * && appveyor PushArtifact zstd-win-release-%PLATFORM%.zip ) ) Modified: vendor/zstd/dist/contrib/adaptive-compression/adapt.c ============================================================================== --- vendor/zstd/dist/contrib/adaptive-compression/adapt.c Mon Oct 22 19:46:35 2018 (r339611) +++ vendor/zstd/dist/contrib/adaptive-compression/adapt.c Mon Oct 22 19:50:43 2018 (r339612) @@ -40,7 +40,6 @@ static unsigned g_compressionLevel = DEFAULT_COMPRESSI static UTIL_time_t g_startTime; static size_t g_streamedSize = 0; static unsigned g_useProgressBar = 1; -static UTIL_freq_t g_ticksPerSecond; static unsigned g_forceCompressionLevel = 0; static unsigned g_minCLevel = 1; static unsigned g_maxCLevel; @@ -576,13 +575,12 @@ static void* compressionThread(void* arg) /* begin compression */ { size_t const useDictSize = MIN(getUseableDictSize(cLevel), job->dictSize); - size_t const dictModeError = ZSTD_setCCtxParameter(ctx->cctx, ZSTD_p_forceRawDict, 1); ZSTD_parameters params = ZSTD_getParams(cLevel, 0, useDictSize); params.cParams.windowLog = 23; { size_t const initError = ZSTD_compressBegin_advanced(ctx->cctx, job->src.start + job->dictSize - useDictSize, useDictSize, params, 0); - size_t const windowSizeError = ZSTD_setCCtxParameter(ctx->cctx, ZSTD_p_forceWindow, 1); - if (ZSTD_isError(dictModeError) || ZSTD_isError(initError) || ZSTD_isError(windowSizeError)) { + size_t const windowSizeError = ZSTD_CCtx_setParameter(ctx->cctx, ZSTD_p_forceMaxWindow, 1); + if (ZSTD_isError(initError) || ZSTD_isError(windowSizeError)) { DISPLAY("Error: something went wrong while starting compression\n"); signalErrorToThreads(ctx); return arg; @@ -644,21 +642,17 @@ static void* compressionThread(void* arg) static void displayProgress(unsigned cLevel, unsigned last) { - UTIL_time_t currTime; - UTIL_getTime(&currTime); + UTIL_time_t currTime = UTIL_getTime(); if (!g_useProgressBar) return; - { - double const timeElapsed = (double)(UTIL_getSpanTimeMicro(g_ticksPerSecond, g_startTime, currTime) / 1000.0); + { 
double const timeElapsed = (double)(UTIL_getSpanTimeMicro(g_startTime, currTime) / 1000.0); double const sizeMB = (double)g_streamedSize / (1 << 20); double const avgCompRate = sizeMB * 1000 / timeElapsed; fprintf(stderr, "\r| Comp. Level: %2u | Time Elapsed: %7.2f s | Data Size: %7.1f MB | Avg Comp. Rate: %6.2f MB/s |", cLevel, timeElapsed/1000.0, sizeMB, avgCompRate); if (last) { fprintf(stderr, "\n"); - } - else { + } else { fflush(stderr); - } - } + } } } static void* outputThread(void* arg) @@ -971,7 +965,6 @@ static int compressFilename(const char* const srcFilen { int ret = 0; fcResources fcr = createFileCompressionResources(srcFilename, dstFilenameOrNull); - UTIL_getTime(&g_startTime); g_streamedSize = 0; ret |= performCompression(fcr.ctx, fcr.srcFile, fcr.otArg); ret |= freeFileCompressionResources(&fcr); @@ -1043,8 +1036,6 @@ int main(int argCount, const char* argv[]) int argNum; filenameTable[0] = stdinmark; g_maxCLevel = ZSTD_maxCLevel(); - - UTIL_initTimer(&g_ticksPerSecond); if (filenameTable == NULL) { DISPLAY("Error: could not allocate sapce for filename table.\n"); Modified: vendor/zstd/dist/contrib/gen_html/Makefile ============================================================================== --- vendor/zstd/dist/contrib/gen_html/Makefile Mon Oct 22 19:46:35 2018 (r339611) +++ vendor/zstd/dist/contrib/gen_html/Makefile Mon Oct 22 19:50:43 2018 (r339612) @@ -7,10 +7,10 @@ # in the COPYING file in the root directory of this source tree). # ################################################################ -CFLAGS ?= -O3 -CFLAGS += -Wall -Wextra -Wcast-qual -Wcast-align -Wshadow -Wstrict-aliasing=1 -Wswitch-enum -Wno-comment -CFLAGS += $(MOREFLAGS) -FLAGS = $(CPPFLAGS) $(CFLAGS) $(CXXFLAGS) $(LDFLAGS) +CXXFLAGS ?= -O3 +CXXFLAGS += -Wall -Wextra -Wcast-qual -Wcast-align -Wshadow -Wstrict-aliasing=1 -Wswitch-enum -Wno-comment +CXXFLAGS += $(MOREFLAGS) +FLAGS = $(CPPFLAGS) $(CXXFLAGS) $(CXXFLAGS) $(LDFLAGS) ZSTDAPI = ../../lib/zstd.h ZSTDMANUAL = ../../doc/zstd_manual.html Modified: vendor/zstd/dist/contrib/meson/meson.build ============================================================================== --- vendor/zstd/dist/contrib/meson/meson.build Mon Oct 22 19:46:35 2018 (r339611) +++ vendor/zstd/dist/contrib/meson/meson.build Mon Oct 22 19:50:43 2018 (r339612) @@ -38,21 +38,45 @@ libzstd_srcs = [ libzstd_includes = [include_directories(common_dir, dictbuilder_dir, compress_dir, lib_dir)] -if get_option('legacy_support') - message('Enabling legacy support') - libzstd_cflags = ['-DZSTD_LEGACY_SUPPORT=4'] +legacy = get_option('legacy_support') +if legacy == '0' + legacy = 'false' +endif +if legacy != 'false' + if legacy == 'true' + legacy = '1' + endif + #See ZSTD_LEGACY_SUPPORT of programs/README.md + message('Enabling legacy support back to version 0.' 
+ legacy) + legacy_int = legacy.to_int() + if legacy_int > 7 + legacy_int = 7 + endif + libzstd_cflags = ['-DZSTD_LEGACY_SUPPORT=' + legacy] legacy_dir = join_paths(lib_dir, 'legacy') libzstd_includes += [include_directories(legacy_dir)] - libzstd_srcs += [ - join_paths(legacy_dir, 'zstd_v01.c'), - join_paths(legacy_dir, 'zstd_v02.c'), - join_paths(legacy_dir, 'zstd_v03.c'), - join_paths(legacy_dir, 'zstd_v04.c'), - join_paths(legacy_dir, 'zstd_v05.c'), - join_paths(legacy_dir, 'zstd_v06.c'), - join_paths(legacy_dir, 'zstd_v07.c') - ] + if legacy_int <= 1 + libzstd_srcs += join_paths(legacy_dir, 'zstd_v01.c') + endif + if legacy_int <= 2 + libzstd_srcs += join_paths(legacy_dir, 'zstd_v02.c') + endif + if legacy_int <= 3 + libzstd_srcs += join_paths(legacy_dir, 'zstd_v03.c') + endif + if legacy_int <= 4 + libzstd_srcs += join_paths(legacy_dir, 'zstd_v04.c') + endif + if legacy_int <= 5 + libzstd_srcs += join_paths(legacy_dir, 'zstd_v05.c') + endif + if legacy_int <= 6 + libzstd_srcs += join_paths(legacy_dir, 'zstd_v06.c') + endif + if legacy_int <= 7 + libzstd_srcs += join_paths(legacy_dir, 'zstd_v07.c') + endif else libzstd_cflags = [] endif @@ -70,7 +94,9 @@ libzstd = library('zstd', include_directories: libzstd_includes, c_args: libzstd_cflags, dependencies: libzstd_deps, - install: true) + install: true, + soversion: '1', + ) programs_dir = join_paths('..', '..', 'programs') Modified: vendor/zstd/dist/contrib/meson/meson_options.txt ============================================================================== --- vendor/zstd/dist/contrib/meson/meson_options.txt Mon Oct 22 19:46:35 2018 (r339611) +++ vendor/zstd/dist/contrib/meson/meson_options.txt Mon Oct 22 19:50:43 2018 (r339612) @@ -1,2 +1,3 @@ option('multithread', type: 'boolean', value: false) -option('legacy_support', type: 'boolean', value: false) +option('legacy_support', type: 'string', value: '4', + description: 'True or false, or 7 to 1 for v0.7+ to v0.1+.') Modified: vendor/zstd/dist/contrib/seekable_format/zstdseek_compress.c ============================================================================== --- vendor/zstd/dist/contrib/seekable_format/zstdseek_compress.c Mon Oct 22 19:46:35 2018 (r339611) +++ vendor/zstd/dist/contrib/seekable_format/zstdseek_compress.c Mon Oct 22 19:50:43 2018 (r339612) @@ -147,7 +147,7 @@ size_t ZSTD_seekable_initCStream(ZSTD_seekable_CStream /* make sure maxFrameSize has a reasonable value */ if (maxFrameSize > ZSTD_SEEKABLE_MAX_FRAME_DECOMPRESSED_SIZE) { - return ERROR(compressionParameter_unsupported); + return ERROR(frameParameter_unsupported); } zcs->maxFrameSize = maxFrameSize Modified: vendor/zstd/dist/contrib/seekable_format/zstdseek_decompress.c ============================================================================== --- vendor/zstd/dist/contrib/seekable_format/zstdseek_decompress.c Mon Oct 22 19:46:35 2018 (r339611) +++ vendor/zstd/dist/contrib/seekable_format/zstdseek_decompress.c Mon Oct 22 19:50:43 2018 (r339612) @@ -125,7 +125,7 @@ static int ZSTD_seekable_seek_buff(void* opaque, S64 o newOffset = (unsigned long long)buff->size - offset; break; } - if (newOffset < 0 || newOffset > buff->size) { + if (newOffset > buff->size) { return -1; } buff->pos = newOffset; @@ -145,7 +145,7 @@ typedef struct { int checksumFlag; } seekTable_t; -#define SEEKABLE_BUFF_SIZE ZSTD_BLOCKSIZE_ABSOLUTEMAX +#define SEEKABLE_BUFF_SIZE ZSTD_BLOCKSIZE_MAX struct ZSTD_seekable_s { ZSTD_DStream* dstream; Modified: vendor/zstd/dist/doc/README.md 
============================================================================== --- vendor/zstd/dist/doc/README.md Mon Oct 22 19:46:35 2018 (r339611) +++ vendor/zstd/dist/doc/README.md Mon Oct 22 19:50:43 2018 (r339612) @@ -2,19 +2,24 @@ Zstandard Documentation ======================= This directory contains material defining the Zstandard format, -as well as for help using the `zstd` library. +as well as detailed instructions to use `zstd` library. +__`zstd_manual.html`__ : Documentation of `zstd.h` API, in html format. +Click on this link: [http://zstd.net/zstd_manual.html](http://zstd.net/zstd_manual.html) +to display documentation of latest release in readable format within a browser. + __`zstd_compression_format.md`__ : This document defines the Zstandard compression format. Compliant decoders must adhere to this document, and compliant encoders must generate data that follows it. +Should you look for ressources to develop your own port of Zstandard algorithm, +you may find the following ressources useful : + __`educational_decoder`__ : This directory contains an implementation of a Zstandard decoder, compliant with the Zstandard compression format. It can be used, for example, to better understand the format, -or as the basis for a separate implementation a Zstandard decoder/encoder. +or as the basis for a separate implementation of Zstandard decoder. -__`zstd_manual.html`__ : Documentation on the functions found in `zstd.h`. -See [http://zstd.net/zstd_manual.html](http://zstd.net/zstd_manual.html) for -the manual released with the latest official `zstd` release. - - +[__`decode_corpus`__](https://github.com/facebook/zstd/tree/dev/tests#decodecorpus---tool-to-generate-zstandard-frames-for-decoder-testing) : +This tool, stored in `/tests` directory, is able to generate random valid frames, +which is useful if you wish to test your decoder and verify it fully supports the specification. Added: vendor/zstd/dist/doc/images/CSpeed2.png ============================================================================== Binary file. No diff available. Added: vendor/zstd/dist/doc/images/DSpeed3.png ============================================================================== Binary file. No diff available. Modified: vendor/zstd/dist/doc/images/dict-cr.png ============================================================================== Binary file (source and/or target). No diff available. Modified: vendor/zstd/dist/doc/images/dict-cs.png ============================================================================== Binary file (source and/or target). No diff available. Modified: vendor/zstd/dist/doc/images/dict-ds.png ============================================================================== Binary file (source and/or target). No diff available. Added: vendor/zstd/dist/doc/images/linux-4.7-12-mt-compress.png ============================================================================== Binary file. No diff available. Added: vendor/zstd/dist/doc/images/linux-git-mt-compress.png ============================================================================== Binary file. No diff available. 
Modified: vendor/zstd/dist/doc/zstd_compression_format.md ============================================================================== --- vendor/zstd/dist/doc/zstd_compression_format.md Mon Oct 22 19:46:35 2018 (r339611) +++ vendor/zstd/dist/doc/zstd_compression_format.md Mon Oct 22 19:50:43 2018 (r339612) @@ -257,7 +257,7 @@ a decoder is allowed to reject a compressed frame which requests a memory size beyond decoder's authorized range. For improved interoperability, -decoders are recommended to be compatible with `Window_Size >= 8 MB`, +decoders are recommended to be compatible with `Window_Size <= 8 MB`, and encoders are recommended to not request more than 8 MB. It's merely a recommendation though, decoders are free to support larger or lower limits, Modified: vendor/zstd/dist/doc/zstd_manual.html ============================================================================== --- vendor/zstd/dist/doc/zstd_manual.html Mon Oct 22 19:46:35 2018 (r339611) +++ vendor/zstd/dist/doc/zstd_manual.html Mon Oct 22 19:50:43 2018 (r339612) @@ -1,17 +1,17 @@ -zstd 1.3.3 Manual +zstd 1.3.4 Manual -

zstd 1.3.3 Manual

+

zstd 1.3.4 Manual


Contents

  1. Introduction
  2. Version
  3. Simple API
  4. -
  5. Explicit memory management
  6. +
  7. Explicit context
  8. Simple dictionary API
  9. Bulk processing dictionary API
  10. Streaming
  11. @@ -19,17 +19,16 @@
  12. Streaming decompression - HowTo
  13. START OF ADVANCED AND EXPERIMENTAL FUNCTIONS
  14. Advanced types
  15. -
  16. Custom memory allocation functions
  17. -
  18. Frame size functions
  19. -
  20. Context memory usage
  21. -
  22. Advanced compression functions
  23. -
  24. Advanced decompression functions
  25. -
  26. Advanced streaming functions
  27. -
  28. Buffer-less and synchronous inner streaming functions
  29. -
  30. Buffer-less streaming compression (synchronous mode)
  31. -
  32. Buffer-less streaming decompression (synchronous mode)
  33. -
  34. New advanced API (experimental)
  35. -
  36. Block level API
  37. +
  38. Frame size functions
  39. +
  40. Memory management
  41. +
  42. Advanced compression functions
  43. +
  44. Advanced decompression functions
  45. +
  46. Advanced streaming functions
  47. +
  48. Buffer-less and synchronous inner streaming functions
  49. +
  50. Buffer-less streaming compression (synchronous mode)
  51. +
  52. Buffer-less streaming decompression (synchronous mode)
  53. +
  54. New advanced API (experimental)
  55. +
  56. Block level API

Introduction

@@ -40,11 +39,11 @@
   Levels >= 20, labeled `--ultra`, should be used with caution, as they require more memory.
   Compression can be done in:
     - a single step (described as Simple API)
-    - a single step, reusing a context (described as Explicit memory management)
+    - a single step, reusing a context (described as Explicit context)
     - unbounded multiple steps (described as Streaming compression)
   The compression ratio achievable on small data can be highly improved using a dictionary in:
     - a single step (described as Simple dictionary API)
-    - a single step, reusing a dictionary (described as Fast dictionary API)
+    - a single step, reusing a dictionary (described as Bulk-processing dictionary API)
 
   Advanced experimental functions can be accessed using #define ZSTD_STATIC_LINKING_ONLY before including zstd.h.
   Advanced experimental APIs shall never be used with a dynamic library.
@@ -103,22 +102,20 @@ unsigned long long ZSTD_getFrameContentSize(const void
 
 
unsigned long long ZSTD_getDecompressedSize(const void* src, size_t srcSize);
 

NOTE: This function is now obsolete, in favor of ZSTD_getFrameContentSize(). - Both functions work the same way, - but ZSTD_getDecompressedSize() blends - "empty", "unknown" and "error" results in the same return value (0), - while ZSTD_getFrameContentSize() distinguishes them. - - 'src' is the start of a zstd compressed frame. - @return : content size to be decompressed, as a 64-bits value _if known and not empty_, 0 otherwise. + Both functions work the same way, but ZSTD_getDecompressedSize() blends + "empty", "unknown" and "error" results to the same return value (0), + while ZSTD_getFrameContentSize() gives them separate return values. + `src` is the start of a zstd compressed frame. + @return : content size to be decompressed, as a 64-bits value _if known and not empty_, 0 otherwise.
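As an illustration of the distinction described above, here is a minimal sketch (not taken from the imported sources; the helper name is hypothetical) that sizes a destination buffer with ZSTD_getFrameContentSize() and treats the "unknown" and "error" results as separate cases:

    #include <stdio.h>
    #include <stdlib.h>
    #include <zstd.h>

    /* Decompress one frame whose content size is stored in its header.
     * Returns a malloc'd buffer (caller frees) or NULL on failure. */
    static void* decompress_whole_frame(const void* src, size_t srcSize, size_t* dstSize)
    {
        unsigned long long const contentSize = ZSTD_getFrameContentSize(src, srcSize);
        if (contentSize == ZSTD_CONTENTSIZE_ERROR) return NULL;    /* not a zstd frame */
        if (contentSize == ZSTD_CONTENTSIZE_UNKNOWN) return NULL;  /* header omits the size: use the streaming API instead */
        void* const dst = malloc(contentSize ? (size_t)contentSize : 1);
        if (dst == NULL) return NULL;
        size_t const dSize = ZSTD_decompress(dst, (size_t)contentSize, src, srcSize);
        if (ZSTD_isError(dSize)) { free(dst); return NULL; }
        *dstSize = dSize;
        return dst;
    }

With ZSTD_getDecompressedSize(), the two early-return cases and an empty frame would all collapse into the single return value 0.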


Helper functions

#define ZSTD_COMPRESSBOUND(srcSize)   ((srcSize) + ((srcSize)>>8) + (((srcSize) < (128<<10)) ? (((128<<10) - (srcSize)) >> 11) /* margin, from 64 to 0 */ : 0))  /* this formula ensures that bound(A) + bound(B) <= bound(A+B) as long as A and B >= 128 KB */
-size_t      ZSTD_compressBound(size_t srcSize); /*!< maximum compressed size in worst case scenario */
+size_t      ZSTD_compressBound(size_t srcSize); /*!< maximum compressed size in worst case single-pass scenario */
 unsigned    ZSTD_isError(size_t code);          /*!< tells if a `size_t` function result is an error code */
 const char* ZSTD_getErrorName(size_t code);     /*!< provides readable string from an error code */
 int         ZSTD_maxCLevel(void);               /*!< maximum compression level available */
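A minimal usage sketch for these helpers (illustrative only, not part of the imported sources): allocate the worst-case destination with ZSTD_compressBound() and test the size_t result with ZSTD_isError():

    #include <stdio.h>
    #include <stdlib.h>
    #include <zstd.h>

    /* One-shot compression into a freshly allocated worst-case buffer.
     * Returns the buffer (caller frees) and stores its used size in *cSize. */
    static void* compress_buffer(const void* src, size_t srcSize, size_t* cSize)
    {
        size_t const bound = ZSTD_compressBound(srcSize);   /* worst-case single-pass size */
        void* const dst = malloc(bound);
        if (dst == NULL) return NULL;
        size_t const r = ZSTD_compress(dst, bound, src, srcSize, 3);
        if (ZSTD_isError(r)) {                               /* size_t results may encode an error */
            fprintf(stderr, "zstd: %s\n", ZSTD_getErrorName(r));
            free(dst);
            return NULL;
        }
        *cSize = r;
        return dst;
    }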
 

-

Explicit memory management


+

Explicit context


 
 

Compression context

  When compressing many times,
   it is recommended to allocate a context just once, and re-use it for each successive compression operation.
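For instance, one context can serve a whole batch of independent buffers; a sketch with hypothetical names, not from the imported sources:

    #include <stddef.h>
    #include <zstd.h>

    /* Compress n independent buffers, re-using a single ZSTD_CCtx as recommended above.
     * Each result is overwritten in dst; a real caller would copy or write it out. */
    static int compress_batch(const void* const* srcs, const size_t* srcSizes, size_t n,
                              void* dst, size_t dstCapacity, int level)
    {
        ZSTD_CCtx* const cctx = ZSTD_createCCtx();
        if (cctx == NULL) return -1;
        for (size_t i = 0; i < n; i++) {
            size_t const cSize = ZSTD_compressCCtx(cctx, dst, dstCapacity,
                                                   srcs[i], srcSizes[i], level);
            if (ZSTD_isError(cSize)) { ZSTD_freeCCtx(cctx); return -1; }
            /* ... consume (dst, cSize) for buffer i ... */
        }
        ZSTD_freeCCtx(cctx);
        return 0;
    }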
@@ -347,11 +344,18 @@ size_t ZSTD_decompressStream(ZSTD_DStream* zds, ZSTD_o
     ZSTD_frameParameters fParams;
 } ZSTD_parameters;
 

-

Custom memory allocation functions


-
-
typedef struct { ZSTD_allocFunction customAlloc; ZSTD_freeFunction customFree; void* opaque; } ZSTD_customMem;
+
typedef enum {
+    ZSTD_dct_auto=0,      /* dictionary is "full" when starting with ZSTD_MAGIC_DICTIONARY, otherwise it is "rawContent" */
+    ZSTD_dct_rawContent,  /* ensures dictionary is always loaded as rawContent, even if it starts with ZSTD_MAGIC_DICTIONARY */
+    ZSTD_dct_fullDict     /* refuses to load a dictionary if it does not respect Zstandard's specification */
+} ZSTD_dictContentType_e;
 

-

Frame size functions


+
typedef enum {
+    ZSTD_dlm_byCopy = 0, /**< Copy dictionary content internally */
+    ZSTD_dlm_byRef,      /**< Reference dictionary content -- the dictionary buffer must outlive its users. */
+} ZSTD_dictLoadMethod_e;
+

+

Frame size functions


 
 
size_t ZSTD_findFrameCompressedSize(const void* src, size_t srcSize);
 

`src` should point to the start of a ZSTD encoded frame or skippable frame @@ -390,7 +394,7 @@ size_t ZSTD_decompressStream(ZSTD_DStream* zds, ZSTD_o @return : size of the Frame Header
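For illustration, a sketch (not from the imported sources; built with ZSTD_STATIC_LINKING_ONLY since this function sits in the advanced section) that uses ZSTD_findFrameCompressedSize() to walk a buffer holding several concatenated frames:

    #define ZSTD_STATIC_LINKING_ONLY
    #include <zstd.h>

    /* Count complete zstd (or skippable) frames concatenated in one buffer.
     * Returns -1 on malformed or truncated input. */
    static int count_frames(const void* src, size_t srcSize)
    {
        const char* p = (const char*)src;
        size_t remaining = srcSize;
        int nbFrames = 0;
        while (remaining > 0) {
            size_t const frameSize = ZSTD_findFrameCompressedSize(p, remaining);
            if (ZSTD_isError(frameSize)) return -1;
            p += frameSize;
            remaining -= frameSize;
            nbFrames++;
        }
        return nbFrames;
    }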


-

Context memory usage


+

Memory management


 
 
size_t ZSTD_sizeof_CCtx(const ZSTD_CCtx* cctx);
 size_t ZSTD_sizeof_DCtx(const ZSTD_DCtx* dctx);
@@ -399,7 +403,7 @@ size_t ZSTD_sizeof_DStream(const ZSTD_DStream* zds);
 size_t ZSTD_sizeof_CDict(const ZSTD_CDict* cdict);
 size_t ZSTD_sizeof_DDict(const ZSTD_DDict* ddict);
 

These functions give the current memory usage of selected object. - Object memory usage can evolve when re-used multiple times. + Object memory usage can evolve when re-used.


size_t ZSTD_estimateCCtxSize(int compressionLevel);
@@ -412,8 +416,8 @@ size_t ZSTD_estimateDCtxSize(void);
   It will also consider src size to be arbitrarily "large", which is worst case.
   If srcSize is known to always be small, ZSTD_estimateCCtxSize_usingCParams() can provide a tighter estimation.
   ZSTD_estimateCCtxSize_usingCParams() can be used in tandem with ZSTD_getCParams() to create cParams from compressionLevel.
-  ZSTD_estimateCCtxSize_usingCCtxParams() can be used in tandem with ZSTD_CCtxParam_setParameter(). Only single-threaded compression is supported. This function will return an error code if ZSTD_p_nbThreads is > 1.
-  Note : CCtx estimation is only correct for single-threaded compression 
+  ZSTD_estimateCCtxSize_usingCCtxParams() can be used in tandem with ZSTD_CCtxParam_setParameter(). Only single-threaded compression is supported. This function will return an error code if ZSTD_p_nbWorkers is >= 1.
+  Note : CCtx size estimation is only correct for single-threaded compression. 
 


size_t ZSTD_estimateCStreamSize(int compressionLevel);
@@ -425,8 +429,8 @@ size_t ZSTD_estimateDStreamSize_fromFrame(const void* 
   It will also consider src size to be arbitrarily "large", which is worst case.
   If srcSize is known to always be small, ZSTD_estimateCStreamSize_usingCParams() can provide a tighter estimation.
   ZSTD_estimateCStreamSize_usingCParams() can be used in tandem with ZSTD_getCParams() to create cParams from compressionLevel.
-  ZSTD_estimateCStreamSize_usingCCtxParams() can be used in tandem with ZSTD_CCtxParam_setParameter(). Only single-threaded compression is supported. This function will return an error code if ZSTD_p_nbThreads is set to a value > 1.
-  Note : CStream estimation is only correct for single-threaded compression.
+  ZSTD_estimateCStreamSize_usingCCtxParams() can be used in tandem with ZSTD_CCtxParam_setParameter(). Only single-threaded compression is supported. This function will return an error code if ZSTD_p_nbWorkers is >= 1.
+  Note : CStream size estimation is only correct for single-threaded compression.
   ZSTD_DStream memory budget depends on window Size.
   This information can be passed manually, using ZSTD_estimateDStreamSize,
   or deducted from a valid frame Header, using ZSTD_estimateDStreamSize_fromFrame();
@@ -435,83 +439,59 @@ size_t ZSTD_estimateDStreamSize_fromFrame(const void* 
          In this case, get total size by adding ZSTD_estimate?DictSize 
 


-
typedef enum {
-    ZSTD_dlm_byCopy = 0,     /**< Copy dictionary content internally */
-    ZSTD_dlm_byRef,          /**< Reference dictionary content -- the dictionary buffer must outlive its users. */
-} ZSTD_dictLoadMethod_e;
-

size_t ZSTD_estimateCDictSize(size_t dictSize, int compressionLevel);
 size_t ZSTD_estimateCDictSize_advanced(size_t dictSize, ZSTD_compressionParameters cParams, ZSTD_dictLoadMethod_e dictLoadMethod);
 size_t ZSTD_estimateDDictSize(size_t dictSize, ZSTD_dictLoadMethod_e dictLoadMethod);
 

ZSTD_estimateCDictSize() will bet that src size is relatively "small", and content is copied, like ZSTD_createCDict(). - ZSTD_estimateCStreamSize_advanced_usingCParams() makes it possible to control precisely compression parameters, like ZSTD_createCDict_advanced(). - Note : dictionary created by reference using ZSTD_dlm_byRef are smaller + ZSTD_estimateCDictSize_advanced() makes it possible to control compression parameters precisely, like ZSTD_createCDict_advanced(). + Note : dictionaries created by reference (`ZSTD_dlm_byRef`) are logically smaller.


-

Advanced compression functions


-
-
ZSTD_CCtx* ZSTD_createCCtx_advanced(ZSTD_customMem customMem);
-

Create a ZSTD compression context using external alloc and free functions +

ZSTD_CCtx*    ZSTD_initStaticCCtx(void* workspace, size_t workspaceSize);
+ZSTD_CStream* ZSTD_initStaticCStream(void* workspace, size_t workspaceSize);    /**< same as ZSTD_initStaticCCtx() */
+

Initialize an object using a pre-allocated fixed-size buffer. + workspace: The memory area to emplace the object into. + Provided pointer *must be 8-bytes aligned*. + Buffer must outlive object. + workspaceSize: Use ZSTD_estimate*Size() to determine + how large workspace must be to support target scenario. + @return : pointer to object (same address as workspace, just different type), + or NULL if error (size too small, incorrect alignment, etc.) + Note : zstd will never resize nor malloc() when using a static buffer. + If the object requires more memory than available, + zstd will just error out (typically ZSTD_error_memory_allocation). + Note 2 : there is no corresponding "free" function. + Since workspace is allocated externally, it must be freed externally too. + Note 3 : cParams : use ZSTD_getCParams() to convert a compression level + into its associated cParams. + Limitation 1 : currently not compatible with internal dictionary creation, triggered by + ZSTD_CCtx_loadDictionary(), ZSTD_initCStream_usingDict() or ZSTD_initDStream_usingDict(). + Limitation 2 : static cctx currently not compatible with multi-threading. + Limitation 3 : static dctx is incompatible with legacy support. +
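A minimal sketch of the pattern just described (illustrative, not from the imported sources): size the workspace with ZSTD_estimateCCtxSize(), emplace the context with ZSTD_initStaticCCtx(), and free only the workspace afterwards.

    #define ZSTD_STATIC_LINKING_ONLY   /* static-allocation API is experimental */
    #include <stdlib.h>
    #include <zstd.h>

    /* One-shot compression in which zstd itself never allocates.
     * Returns the compressed size, or 0 on failure. */
    static size_t compress_with_static_cctx(void* dst, size_t dstCapacity,
                                            const void* src, size_t srcSize, int level)
    {
        size_t const wkspSize = ZSTD_estimateCCtxSize(level);        /* worst-case, single-threaded */
        void* const wksp = malloc(wkspSize);                          /* malloc() is suitably aligned */
        if (wksp == NULL) return 0;
        ZSTD_CCtx* const cctx = ZSTD_initStaticCCtx(wksp, wkspSize);  /* same address, different type */
        size_t result = 0;
        if (cctx != NULL) {
            size_t const cSize = ZSTD_compressCCtx(cctx, dst, dstCapacity, src, srcSize, level);
            if (!ZSTD_isError(cSize)) result = cSize;
        }
        free(wksp);   /* no ZSTD_freeCCtx() counterpart for a static cctx */
        return result;
    }

The same shape applies to the DCtx/CStream/DStream variants, each paired with its own ZSTD_estimate*Size() function.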


-
ZSTD_CCtx* ZSTD_initStaticCCtx(void* workspace, size_t workspaceSize);
-

workspace: The memory area to emplace the context into. - Provided pointer must 8-bytes aligned. - It must outlive context usage. - workspaceSize: Use ZSTD_estimateCCtxSize() or ZSTD_estimateCStreamSize() - to determine how large workspace must be to support scenario. - @return : pointer to ZSTD_CCtx* (same address as workspace, but different type), - or NULL if error (typically size too small) - Note : zstd will never resize nor malloc() when using a static cctx. - If it needs more memory than available, it will simply error out. - Note 2 : there is no corresponding "free" function. - Since workspace was allocated externally, it must be freed externally too. - Limitation 1 : currently not compatible with internal CDict creation, such as - ZSTD_CCtx_loadDictionary() or ZSTD_initCStream_usingDict(). - Limitation 2 : currently not compatible with multi-threading +

ZSTD_DStream* ZSTD_initStaticDStream(void* workspace, size_t workspaceSize);    /**< same as ZSTD_initStaticDCtx() */
+

+
typedef void* (*ZSTD_allocFunction) (void* opaque, size_t size);
+typedef void  (*ZSTD_freeFunction) (void* opaque, void* address);
+typedef struct { ZSTD_allocFunction customAlloc; ZSTD_freeFunction customFree; void* opaque; } ZSTD_customMem;
+static ZSTD_customMem const ZSTD_defaultCMem = { NULL, NULL, NULL };  /**< this constant defers to stdlib's functions */
+

These prototypes make it possible to pass your own allocation/free functions. + ZSTD_customMem is provided at creation time, using ZSTD_create*_advanced() variants listed below. + All allocation/free operations will be completed using these custom variants instead of regular ones.
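For example, a sketch with a hypothetical byte-counting allocator (not from the imported sources):

    #define ZSTD_STATIC_LINKING_ONLY   /* ZSTD_customMem / ZSTD_createCCtx_advanced() are experimental */
    #include <stdio.h>
    #include <stdlib.h>
    #include <zstd.h>

    static void* counting_alloc(void* opaque, size_t size)
    {
        *(size_t*)opaque += size;          /* tally every byte zstd asks for */
        return malloc(size);
    }
    static void counting_free(void* opaque, void* address)
    {
        (void)opaque;
        free(address);
    }

    static void demo_custom_allocator(void)
    {
        size_t requested = 0;
        ZSTD_customMem const cmem = { counting_alloc, counting_free, &requested };
        ZSTD_CCtx* const cctx = ZSTD_createCCtx_advanced(cmem);
        /* ... use cctx exactly as a regular context ... */
        ZSTD_freeCCtx(cctx);
        printf("zstd requested %zu bytes\n", requested);
    }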


+

Advanced compression functions


+
 
ZSTD_CDict* ZSTD_createCDict_byReference(const void* dictBuffer, size_t dictSize, int compressionLevel);
 

Create a digested dictionary for compression Dictionary content is simply referenced, and therefore stays in dictBuffer. It is important that dictBuffer outlives CDict, it must remain read accessible throughout the lifetime of CDict
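A usage sketch (illustrative, not from the imported sources): because the dictionary is only referenced, dictBuffer must stay readable for as long as the CDict is used.

    #define ZSTD_STATIC_LINKING_ONLY   /* ZSTD_createCDict_byReference() is experimental */
    #include <zstd.h>

    /* Compress one sample against a referenced dictionary.
     * Returns the compressed size, or 0 on failure. */
    static size_t compress_with_dict(const void* dictBuffer, size_t dictSize,
                                     void* dst, size_t dstCapacity,
                                     const void* src, size_t srcSize, int level)
    {
        ZSTD_CDict* const cdict = ZSTD_createCDict_byReference(dictBuffer, dictSize, level);
        ZSTD_CCtx*  const cctx  = ZSTD_createCCtx();
        size_t result = 0;
        if (cdict != NULL && cctx != NULL) {
            size_t const cSize = ZSTD_compress_usingCDict(cctx, dst, dstCapacity, src, srcSize, cdict);
            if (!ZSTD_isError(cSize)) result = cSize;
        }
        ZSTD_freeCCtx(cctx);
        ZSTD_freeCDict(cdict);   /* does not free dictBuffer, which the caller still owns */
        return result;
    }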


-
typedef enum { ZSTD_dm_auto=0,        /* dictionary is "full" if it starts with ZSTD_MAGIC_DICTIONARY, otherwise it is "rawContent" */
-               ZSTD_dm_rawContent,    /* ensures dictionary is always loaded as rawContent, even if it starts with ZSTD_MAGIC_DICTIONARY */
-               ZSTD_dm_fullDict       /* refuses to load a dictionary if it does not respect Zstandard's specification */
-} ZSTD_dictMode_e;
-

-
ZSTD_CDict* ZSTD_createCDict_advanced(const void* dict, size_t dictSize,
-                                      ZSTD_dictLoadMethod_e dictLoadMethod,
-                                      ZSTD_dictMode_e dictMode,
-                                      ZSTD_compressionParameters cParams,
-                                      ZSTD_customMem customMem);
-

Create a ZSTD_CDict using external alloc and free, and customized compression parameters -


- -
ZSTD_CDict* ZSTD_initStaticCDict(
-                void* workspace, size_t workspaceSize,
-          const void* dict, size_t dictSize,
-                ZSTD_dictLoadMethod_e dictLoadMethod, ZSTD_dictMode_e dictMode,
-                ZSTD_compressionParameters cParams);
-

Generate a digested dictionary in provided memory area. - workspace: The memory area to emplace the dictionary into. - Provided pointer must 8-bytes aligned. - It must outlive dictionary usage. - workspaceSize: Use ZSTD_estimateCDictSize() - to determine how large workspace must be. - cParams : use ZSTD_getCParams() to transform a compression level - into its relevants cParams. - @return : pointer to ZSTD_CDict* (same address as workspace, but different type), - or NULL if error (typically, size too small). - Note : there is no corresponding "free" function. - Since workspace was allocated externally, it must be freed externally. - -


-
ZSTD_compressionParameters ZSTD_getCParams(int compressionLevel, unsigned long long estimatedSrcSize, size_t dictSize);
 

@return ZSTD_compressionParameters structure for a selected compression level and estimated srcSize. `estimatedSrcSize` value is optional, select 0 if not known @@ -546,7 +526,7 @@ size_t ZSTD_estimateDDictSize(size_t dictSize, ZSTD_di

Same as ZSTD_compress_usingCDict(), with fine-tune control over frame parameters


-

Advanced decompression functions


+

Advanced decompression functions


 
 
unsigned ZSTD_isFrame(const void* buffer, size_t size);
 

Tells if the content of `buffer` starts with a valid Frame Identifier. @@ -555,28 +535,6 @@ size_t ZSTD_estimateDDictSize(size_t dictSize, ZSTD_di Note 3 : Skippable Frame Identifiers are considered valid.
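For instance, an illustrative sketch (not from the imported sources):

    #define ZSTD_STATIC_LINKING_ONLY   /* ZSTD_isFrame() is in the advanced section */
    #include <zstd.h>

    /* Cheap pre-check before attempting decompression.
     * Skippable frames also pass, as noted above. */
    static int looks_like_zstd(const void* buffer, size_t size)
    {
        return size >= 4 && ZSTD_isFrame(buffer, size);
    }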


-
ZSTD_DCtx* ZSTD_createDCtx_advanced(ZSTD_customMem customMem);
-

Create a ZSTD decompression context using external alloc and free functions -


- -
ZSTD_DCtx* ZSTD_initStaticDCtx(void* workspace, size_t workspaceSize);
-

workspace: The memory area to emplace the context into. - Provided pointer must 8-bytes aligned. - It must outlive context usage. - workspaceSize: Use ZSTD_estimateDCtxSize() or ZSTD_estimateDStreamSize() - to determine how large workspace must be to support scenario. - @return : pointer to ZSTD_DCtx* (same address as workspace, but different type), - or NULL if error (typically size too small) - Note : zstd will never resize nor malloc() when using a static dctx. - If it needs more memory than available, it will simply error out. - Note 2 : static dctx is incompatible with legacy support - Note 3 : there is no corresponding "free" function. - Since workspace was allocated externally, it must be freed externally. - Limitation : currently not compatible with internal DDict creation, - such as ZSTD_initDStream_usingDict(). - -


-
ZSTD_DDict* ZSTD_createDDict_byReference(const void* dictBuffer, size_t dictSize);
 

Create a digested dictionary, ready to start decompression operation without startup delay. Dictionary content is referenced, and therefore stays in dictBuffer. @@ -584,27 +542,6 @@ size_t ZSTD_estimateDDictSize(size_t dictSize, ZSTD_di it must remain read accessible throughout the lifetime of DDict
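A decompression-side sketch mirroring the CDict example above (illustrative, not from the imported sources):

    #define ZSTD_STATIC_LINKING_ONLY   /* ZSTD_createDDict_byReference() is experimental */
    #include <zstd.h>

    /* Decompress one frame produced with the referenced dictionary.
     * Returns the decompressed size, or 0 on failure (or for an empty frame). */
    static size_t decompress_with_dict(const void* dictBuffer, size_t dictSize,
                                       void* dst, size_t dstCapacity,
                                       const void* src, size_t srcSize)
    {
        ZSTD_DDict* const ddict = ZSTD_createDDict_byReference(dictBuffer, dictSize);
        ZSTD_DCtx*  const dctx  = ZSTD_createDCtx();
        size_t result = 0;
        if (ddict != NULL && dctx != NULL) {
            size_t const dSize = ZSTD_decompress_usingDDict(dctx, dst, dstCapacity, src, srcSize, ddict);
            if (!ZSTD_isError(dSize)) result = dSize;
        }
        ZSTD_freeDCtx(dctx);
        ZSTD_freeDDict(ddict);
        return result;
    }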


-
ZSTD_DDict* ZSTD_createDDict_advanced(const void* dict, size_t dictSize,
-                                      ZSTD_dictLoadMethod_e dictLoadMethod,
-                                      ZSTD_customMem customMem);
-

Create a ZSTD_DDict using external alloc and free, optionally by reference -


- -
ZSTD_DDict* ZSTD_initStaticDDict(void* workspace, size_t workspaceSize,
-                                 const void* dict, size_t dictSize,
-                                 ZSTD_dictLoadMethod_e dictLoadMethod);
-

Generate a digested dictionary in provided memory area. - workspace: The memory area to emplace the dictionary into. - Provided pointer must 8-bytes aligned. - It must outlive dictionary usage. - workspaceSize: Use ZSTD_estimateDDictSize() - to determine how large workspace must be. - @return : pointer to ZSTD_DDict*, or NULL if error (size too small) - Note : there is no corresponding "free" function. - Since workspace was allocated externally, it must be freed externally. - -


-
unsigned ZSTD_getDictID_fromDict(const void* dict, size_t dictSize);
 

Provides the dictID stored within dictionary. if @return == 0, the dictionary is not conformant with Zstandard specification. @@ -629,11 +566,9 @@ size_t ZSTD_estimateDDictSize(size_t dictSize, ZSTD_di When identifying the exact failure cause, it's possible to use ZSTD_getFrameHeader(), which will provide a more precise error code.
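These two dictID helpers combine naturally; a sketch (illustrative, not from the imported sources):

    #define ZSTD_STATIC_LINKING_ONLY   /* dictID helpers are in the advanced section of this release */
    #include <zstd.h>

    /* 1 if `frame` declares the same dictID as `dict`, 0 if they differ,
     * -1 when the information is unavailable (raw-content dictionary, dictID omitted, ...). */
    static int frame_matches_dict(const void* frame, size_t frameSize,
                                  const void* dict, size_t dictSize)
    {
        unsigned const dictID  = ZSTD_getDictID_fromDict(dict, dictSize);
        unsigned const frameID = ZSTD_getDictID_fromFrame(frame, frameSize);
        if (dictID == 0 || frameID == 0) return -1;
        return dictID == frameID;
    }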


-

Advanced streaming functions


+

Advanced streaming functions


 
-

Advanced Streaming compression functions

ZSTD_CStream* ZSTD_createCStream_advanced(ZSTD_customMem customMem);
-ZSTD_CStream* ZSTD_initStaticCStream(void* workspace, size_t workspaceSize);    /**< same as ZSTD_initStaticCCtx() */
-size_t ZSTD_initCStream_srcSize(ZSTD_CStream* zcs, int compressionLevel, unsigned long long pledgedSrcSize);   /**< pledgedSrcSize must be correct. If it is not known at init time, use ZSTD_CONTENTSIZE_UNKNOWN. Note that, for compatibility with older programs, "0" also disables frame content size field. It may be enabled in the future. */
+

Advanced Streaming compression functions

size_t ZSTD_initCStream_srcSize(ZSTD_CStream* zcs, int compressionLevel, unsigned long long pledgedSrcSize);   /**< pledgedSrcSize must be correct. If it is not known at init time, use ZSTD_CONTENTSIZE_UNKNOWN. Note that, for compatibility with older programs, "0" also disables frame content size field. It may be enabled in the future. */
 size_t ZSTD_initCStream_usingDict(ZSTD_CStream* zcs, const void* dict, size_t dictSize, int compressionLevel); /**< creates of an internal CDict (incompatible with static CCtx), except if dict == NULL or dictSize < 8, in which case no dict is used. Note: dict is loaded with ZSTD_dm_auto (treated as a full zstd dictionary if it begins with ZSTD_MAGIC_DICTIONARY, else as raw content) and ZSTD_dlm_byCopy.*/
 size_t ZSTD_initCStream_advanced(ZSTD_CStream* zcs, const void* dict, size_t dictSize,
                                              ZSTD_parameters params, unsigned long long pledgedSrcSize);  /**< pledgedSrcSize must be correct. If srcSize is not known at init time, use value ZSTD_CONTENTSIZE_UNKNOWN. dict is loaded with ZSTD_dm_auto and ZSTD_dlm_byCopy. */
@@ -647,26 +582,30 @@ size_t ZSTD_initCStream_usingCDict_advanced(ZSTD_CStre
   If pledgedSrcSize is not known at reset time, use macro ZSTD_CONTENTSIZE_UNKNOWN.
   If pledgedSrcSize > 0, its value must be correct, as it will be written in header, and controlled at the end.
   For the time being, pledgedSrcSize==0 is interpreted as "srcSize unknown" for compatibility with older programs,
-  but it may change to mean "empty" in some future version, so prefer using macro ZSTD_CONTENTSIZE_UNKNOWN.
+  but it will change to mean "empty" in future version, so use macro ZSTD_CONTENTSIZE_UNKNOWN instead.
  @return : 0, or an error code (which can be tested using ZSTD_isError()) 
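A streaming sketch built on these init variants (illustrative, not from the imported sources): pass the real input size when it is known, otherwise ZSTD_CONTENTSIZE_UNKNOWN, never a guess.

    #define ZSTD_STATIC_LINKING_ONLY   /* ZSTD_initCStream_srcSize() is in the advanced section */
    #include <stdio.h>
    #include <zstd.h>

    /* Stream-compress fin into fout. pledgedSrcSize is either the exact input
     * size (then written into the frame header) or ZSTD_CONTENTSIZE_UNKNOWN. */
    static int stream_compress(FILE* fin, FILE* fout,
                               unsigned long long pledgedSrcSize, int level)
    {
        char inBuf[1 << 14], outBuf[1 << 14];
        ZSTD_CStream* const zcs = ZSTD_createCStream();
        if (zcs == NULL) return -1;
        if (ZSTD_isError(ZSTD_initCStream_srcSize(zcs, level, pledgedSrcSize)))
            { ZSTD_freeCStream(zcs); return -1; }

        size_t readSize;
        while ((readSize = fread(inBuf, 1, sizeof(inBuf), fin)) > 0) {
            ZSTD_inBuffer input = { inBuf, readSize, 0 };
            while (input.pos < input.size) {
                ZSTD_outBuffer output = { outBuf, sizeof(outBuf), 0 };
                if (ZSTD_isError(ZSTD_compressStream(zcs, &output, &input)))
                    { ZSTD_freeCStream(zcs); return -1; }
                fwrite(outBuf, 1, output.pos, fout);
            }
        }
        size_t remaining;
        do {   /* write the frame epilogue */
            ZSTD_outBuffer output = { outBuf, sizeof(outBuf), 0 };
            remaining = ZSTD_endStream(zcs, &output);
            if (ZSTD_isError(remaining)) { ZSTD_freeCStream(zcs); return -1; }
            fwrite(outBuf, 1, output.pos, fout);
        } while (remaining > 0);
        ZSTD_freeCStream(zcs);
        return 0;
    }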
 


-

Advanced Streaming decompression functions

ZSTD_DStream* ZSTD_createDStream_advanced(ZSTD_customMem customMem);
-ZSTD_DStream* ZSTD_initStaticDStream(void* workspace, size_t workspaceSize);    /**< same as ZSTD_initStaticDCtx() */
-typedef enum { DStream_p_maxWindowSize } ZSTD_DStreamParameter_e;
+
typedef struct {
+    unsigned long long ingested;
+    unsigned long long consumed;
+    unsigned long long produced;
+} ZSTD_frameProgression;
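A polling sketch for the structure just defined (illustrative, not from the imported sources; ZSTD_getFrameProgression() is the matching accessor added in this release):

    #define ZSTD_STATIC_LINKING_ONLY   /* experimental API */
    #include <stdio.h>
    #include <zstd.h>

    /* Print a progress line for an ongoing (possibly multi-threaded) compression. */
    static void report_progress(const ZSTD_CCtx* cctx)
    {
        ZSTD_frameProgression const fp = ZSTD_getFrameProgression(cctx);
        fprintf(stderr, "\ringested %llu B, consumed %llu B, produced %llu B",
                fp.ingested, fp.consumed, fp.produced);
    }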
+

+

Advanced Streaming decompression functions

typedef enum { DStream_p_maxWindowSize } ZSTD_DStreamParameter_e;
 size_t ZSTD_setDStreamParameter(ZSTD_DStream* zds, ZSTD_DStreamParameter_e paramType, unsigned paramValue);   /* obsolete : this API will be removed in a future version */
 size_t ZSTD_initDStream_usingDict(ZSTD_DStream* zds, const void* dict, size_t dictSize); /**< note: no dictionary will be used if dict == NULL or dictSize < 8 */
 size_t ZSTD_initDStream_usingDDict(ZSTD_DStream* zds, const ZSTD_DDict* ddict);  /**< note : ddict is referenced, it must outlive decompression session */
 size_t ZSTD_resetDStream(ZSTD_DStream* zds);  /**< re-use decompression parameters from previous init; saves dictionary loading */
 

-

Buffer-less and synchronous inner streaming functions

+

Buffer-less and synchronous inner streaming functions

   This is an advanced API, giving full control over buffer management, for users which need direct control over memory.
   But it's also a complex one, with several restrictions, documented below.
   Prefer normal streaming API for an easier experience.
  
 
-

Buffer-less streaming compression (synchronous mode)

+

Buffer-less streaming compression (synchronous mode)

   A ZSTD_CCtx object is required to track streaming operations.
   Use ZSTD_createCCtx() / ZSTD_freeCCtx() to manage resource.
   ZSTD_CCtx object can be re-used multiple times within successive compression operations.
@@ -702,7 +641,7 @@ size_t ZSTD_compressBegin_usingCDict(ZSTD_CCtx* cctx, 
 size_t ZSTD_compressBegin_usingCDict_advanced(ZSTD_CCtx* const cctx, const ZSTD_CDict* const cdict, ZSTD_frameParameters const fParams, unsigned long long const pledgedSrcSize);   /* compression parameters are already set within cdict. pledgedSrcSize must be correct. If srcSize is not known, use macro ZSTD_CONTENTSIZE_UNKNOWN */
 size_t ZSTD_copyCCtx(ZSTD_CCtx* cctx, const ZSTD_CCtx* preparedCCtx, unsigned long long pledgedSrcSize); /**<  note: if pledgedSrcSize is not known, use ZSTD_CONTENTSIZE_UNKNOWN */
 

-

Buffer-less streaming decompression (synchronous mode)

+

Buffer-less streaming decompression (synchronous mode)

   A ZSTD_DCtx object is required to track streaming operations.
   Use ZSTD_createDCtx() / ZSTD_freeDCtx() to manage it.
   A ZSTD_DCtx object can be re-used multiple times.
@@ -788,15 +727,15 @@ size_t ZSTD_decodingBufferSize_min(unsigned long long 
 

typedef enum { ZSTDnit_frameHeader, ZSTDnit_blockHeader, ZSTDnit_block, ZSTDnit_lastBlock, ZSTDnit_checksum, ZSTDnit_skippableFrame } ZSTD_nextInputType_e;
 

-

New advanced API (experimental)


+

New advanced API (experimental)


 
 
typedef enum {
-    /* Question : should we have a format ZSTD_f_auto ?
-     * For the time being, it would mean exactly the same as ZSTD_f_zstd1.
-     * But, in the future, should several formats be supported,
+    /* Opened question : should we have a format ZSTD_f_auto ?
+     * Today, it would mean exactly the same as ZSTD_f_zstd1.
+     * But, in the future, should several formats become supported,
      * on the compression side, it would mean "default format".
-     * On the decompression side, it would mean "multi format",
-     * and ZSTD_f_zstd1 could be reserved to mean "accept *only* zstd frames".
+     * On the decompression side, it would mean "automatic format detection",
+     * so that ZSTD_f_zstd1 would mean "accept *only* zstd frames".
      * Since meaning is a little different, another option could be to define different enums for compression and decompression.
      * This question could be kept for later, when there are actually multiple formats to support,
      * but there is also the question of pinning enum values, and pinning value `0` is especially important */
@@ -814,43 +753,77 @@ size_t ZSTD_decodingBufferSize_min(unsigned long long 
     /* compression parameters */
     ZSTD_p_compressionLevel=100, /* Update all compression parameters according to pre-defined cLevel table
                               * Default level is ZSTD_CLEVEL_DEFAULT==3.
-                              * Special: value 0 means "do not change cLevel". */
+                              * Special: value 0 means "do not change cLevel".
+                              * Note 1 : it's possible to pass a negative compression level by casting it to unsigned type.
+                              * Note 2 : setting a level sets all default values of other compression parameters.
+                              * Note 3 : setting compressionLevel automatically updates ZSTD_p_compressLiterals. */
     ZSTD_p_windowLog,        /* Maximum allowed back-reference distance, expressed as power of 2.
                               * Must be clamped between ZSTD_WINDOWLOG_MIN and ZSTD_WINDOWLOG_MAX.
-                              * Special: value 0 means "do not change windowLog".
+                              * Special: value 0 means "use default windowLog".
                               * Note: Using a window size greater than ZSTD_MAXWINDOWSIZE_DEFAULT (default: 2^27)
-                              * requires setting the maximum window size at least as large during decompression. */
+                              *       requires explicitly allowing such window size during decompression stage. */
     ZSTD_p_hashLog,          /* Size of the probe table, as a power of 2.
                               * Resulting table size is (1 << (hashLog+2)).
                               * Must be clamped between ZSTD_HASHLOG_MIN and ZSTD_HASHLOG_MAX.
                               * Larger tables improve compression ratio of strategies <= dFast,
                               * and improve speed of strategies > dFast.
-                              * Special: value 0 means "do not change hashLog". */
+                              * Special: value 0 means "use default hashLog". */
     ZSTD_p_chainLog,         /* Size of the full-search table, as a power of 2.
                               * Resulting table size is (1 << (chainLog+2)).
                               * Larger tables result in better and slower compression.
                               * This parameter is useless when using "fast" strategy.
-                              * Special: value 0 means "do not change chainLog". */
+                              * Special: value 0 means "use default chainLog". */
     ZSTD_p_searchLog,        /* Number of search attempts, as a power of 2.
                               * More attempts result in better and slower compression.
                               * This parameter is useless when using "fast" and "dFast" strategies.
-                              * Special: value 0 means "do not change searchLog". */
+                              * Special: value 0 means "use default searchLog". */
     ZSTD_p_minMatch,         /* Minimum size of searched matches (note : repCode matches can be smaller).
                               * Larger values make faster compression and decompression, but decrease ratio.
                               * Must be clamped between ZSTD_SEARCHLENGTH_MIN and ZSTD_SEARCHLENGTH_MAX.
                               * Note that currently, for all strategies < btopt, effective minimum is 4.
-                              * Note that currently, for all strategies > fast, effective maximum is 6.
-                              * Special: value 0 means "do not change minMatchLength". */
-    ZSTD_p_targetLength,     /* Only useful for strategies >= btopt.
-                              * Length of Match considered "good enough" to stop search.
-                              * Larger values make compression stronger and slower.
-                              * Special: value 0 means "do not change targetLength". */
+                              *                    , for all strategies > fast, effective maximum is 6.
+                              * Special: value 0 means "use default minMatchLength". */
+    ZSTD_p_targetLength,     /* Impact of this field depends on strategy.
+                              * For strategies btopt & btultra:
+                              *     Length of Match considered "good enough" to stop search.
+                              *     Larger values make compression stronger, and slower.
+                              * For strategy fast:
+                              *     Distance between match sampling.
+                              *     Larger values make compression faster, and weaker.
+                              * Special: value 0 means "use default targetLength". */
     ZSTD_p_compressionStrategy, /* See ZSTD_strategy enum definition.
                               * Cast selected strategy as unsigned for ZSTD_CCtx_setParameter() compatibility.
                               * The higher the value of selected strategy, the more complex it is,
                               * resulting in stronger and slower compression.
-                              * Special: value 0 means "do not change strategy". */
+                              * Special: value 0 means "use default strategy". */
 
+    ZSTD_p_enableLongDistanceMatching=160, /* Enable long distance matching.
+                                         * This parameter is designed to improve compression ratio
+                                         * for large inputs, by finding large matches at long distance.
+                                         * It increases memory usage and window size.
+                                         * Note: enabling this parameter increases ZSTD_p_windowLog to 128 MB
+                                         * except when expressly set to a different value. */
+    ZSTD_p_ldmHashLog,       /* Size of the table for long distance matching, as a power of 2.
+                              * Larger values increase memory usage and compression ratio,
+                              * but decrease compression speed.
+                              * Must be clamped between ZSTD_HASHLOG_MIN and ZSTD_HASHLOG_MAX
+                              * default: windowlog - 7.
+                              * Special: value 0 means "automatically determine hashlog". */
+    ZSTD_p_ldmMinMatch,      /* Minimum match size for long distance matcher.
+                              * Larger/too small values usually decrease compression ratio.
+                              * Must be clamped between ZSTD_LDM_MINMATCH_MIN and ZSTD_LDM_MINMATCH_MAX.
+                              * Special: value 0 means "use default value" (default: 64). */
+    ZSTD_p_ldmBucketSizeLog, /* Log size of each bucket in the LDM hash table for collision resolution.
+                              * Larger values improve collision resolution but decrease compression speed.
+                              * The maximum value is ZSTD_LDM_BUCKETSIZELOG_MAX .
+                              * Special: value 0 means "use default value" (default: 3). */
+    ZSTD_p_ldmHashEveryLog,  /* Frequency of inserting/looking up entries in the LDM hash table.
+                              * Must be clamped between 0 and (ZSTD_WINDOWLOG_MAX - ZSTD_HASHLOG_MIN).
+                              * Default is MAX(0, (windowLog - ldmHashLog)), optimizing hash table usage.
+                              * Larger values improve compression speed.
+                              * Deviating far from default value will likely result in a compression ratio decrease.
+                              * Special: value 0 means "automatically determine hashEveryLog". */
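Taken together, the new parameter API replaces the fixed ZSTD_parameters style with incremental setters. A sketch (illustrative, not from the imported sources; it assumes a multithread-enabled libzstd for the nbWorkers line, and dst at least ZSTD_compressBound(srcSize) so the single ZSTD_e_end loop can finish):

    #define ZSTD_STATIC_LINKING_ONLY   /* new advanced API is experimental in this release */
    #include <zstd.h>

    /* Configure long-distance matching and worker threads, then compress in one pass.
     * Error checks on the setters are omitted for brevity. */
    static size_t compress_long_input(ZSTD_CCtx* cctx,
                                      void* dst, size_t dstCapacity,
                                      const void* src, size_t srcSize)
    {
        ZSTD_CCtx_setParameter(cctx, ZSTD_p_compressionLevel, 19);
        ZSTD_CCtx_setParameter(cctx, ZSTD_p_enableLongDistanceMatching, 1); /* also widens the window */
        ZSTD_CCtx_setParameter(cctx, ZSTD_p_windowLog, 27);
        ZSTD_CCtx_setParameter(cctx, ZSTD_p_nbWorkers, 4);  /* >= 1 selects asynchronous / multi-threaded mode */
        {   ZSTD_outBuffer out = { dst, dstCapacity, 0 };
            ZSTD_inBuffer  in  = { src, srcSize, 0 };
            size_t remaining;
            do {
                remaining = ZSTD_compress_generic(cctx, &out, &in, ZSTD_e_end);
                if (ZSTD_isError(remaining)) return remaining;
            } while (remaining != 0);   /* 0 => frame fully flushed into dst */
            return out.pos;
        }
    }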

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***

From owner-svn-src-vendor@freebsd.org  Mon Oct 22 19:55:19 2018
Return-Path: 
Delivered-To: svn-src-vendor@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 47837106B2F6;
 Mon, 22 Oct 2018 19:55:19 +0000 (UTC) (envelope-from cem@FreeBSD.org)
Received: from mxrelay.nyi.freebsd.org (mxrelay.nyi.freebsd.org
 [IPv6:2610:1c1:1:606c::19:3])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client CN "mxrelay.nyi.freebsd.org",
 Issuer "Let's Encrypt Authority X3" (verified OK))
 by mx1.freebsd.org (Postfix) with ESMTPS id F20F28FBE1;
 Mon, 22 Oct 2018 19:55:18 +0000 (UTC) (envelope-from cem@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mxrelay.nyi.freebsd.org (Postfix) with ESMTPS id D2E7E24F60;
 Mon, 22 Oct 2018 19:55:18 +0000 (UTC) (envelope-from cem@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id w9MJtIfr081247;
 Mon, 22 Oct 2018 19:55:18 GMT (envelope-from cem@FreeBSD.org)
Received: (from cem@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id w9MJtI2I081246;
 Mon, 22 Oct 2018 19:55:18 GMT (envelope-from cem@FreeBSD.org)
Message-Id: <201810221955.w9MJtI2I081246@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: cem set sender to cem@FreeBSD.org
 using -f
From: Conrad Meyer 
Date: Mon, 22 Oct 2018 19:55:18 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
 svn-src-vendor@freebsd.org
Subject: svn commit: r339613 - vendor/zstd/1.3.4
X-SVN-Group: vendor
X-SVN-Commit-Author: cem
X-SVN-Commit-Paths: vendor/zstd/1.3.4
X-SVN-Commit-Revision: 339613
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-vendor@freebsd.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: SVN commit messages for the vendor work area tree
 
List-Unsubscribe: , 
 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
 
X-List-Received-Date: Mon, 22 Oct 2018 19:55:19 -0000

Author: cem
Date: Mon Oct 22 19:55:18 2018
New Revision: 339613
URL: https://svnweb.freebsd.org/changeset/base/339613

Log:
  tag import of zstd 1.3.4

Added:
  vendor/zstd/1.3.4/
     - copied from r339612, vendor/zstd/dist/

From owner-svn-src-vendor@freebsd.org  Mon Oct 22 20:00:35 2018
Return-Path: 
Delivered-To: svn-src-vendor@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id EBD6E106B7A1;
 Mon, 22 Oct 2018 20:00:34 +0000 (UTC) (envelope-from cem@FreeBSD.org)
Received: from mxrelay.nyi.freebsd.org (mxrelay.nyi.freebsd.org
 [IPv6:2610:1c1:1:606c::19:3])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client CN "mxrelay.nyi.freebsd.org",
 Issuer "Let's Encrypt Authority X3" (verified OK))
 by mx1.freebsd.org (Postfix) with ESMTPS id 9F97970041;
 Mon, 22 Oct 2018 20:00:34 +0000 (UTC) (envelope-from cem@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mxrelay.nyi.freebsd.org (Postfix) with ESMTPS id 9A12C24F76;
 Mon, 22 Oct 2018 20:00:34 +0000 (UTC) (envelope-from cem@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id w9MK0YKJ081583;
 Mon, 22 Oct 2018 20:00:34 GMT (envelope-from cem@FreeBSD.org)
Received: (from cem@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id w9MK0UWE081563;
 Mon, 22 Oct 2018 20:00:30 GMT (envelope-from cem@FreeBSD.org)
Message-Id: <201810222000.w9MK0UWE081563@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: cem set sender to cem@FreeBSD.org
 using -f
From: Conrad Meyer 
Date: Mon, 22 Oct 2018 20:00:30 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
 svn-src-vendor@freebsd.org
Subject: svn commit: r339614 - in vendor/zstd/dist: .
 contrib/adaptive-compression contrib/gen_html contrib/meson contrib/pzstd
 contrib/seekable_format contrib/seekable_format/examples doc doc/images lib
 li...
X-SVN-Group: vendor
X-SVN-Commit-Author: cem
X-SVN-Commit-Paths: in vendor/zstd/dist: . contrib/adaptive-compression
 contrib/gen_html contrib/meson contrib/pzstd contrib/seekable_format
 contrib/seekable_format/examples doc doc/images lib lib/common lib/compress
 lib...
X-SVN-Commit-Revision: 339614
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-vendor@freebsd.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: SVN commit messages for the vendor work area tree
 
List-Unsubscribe: , 
 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
 
X-List-Received-Date: Mon, 22 Oct 2018 20:00:35 -0000

Author: cem
Date: Mon Oct 22 20:00:30 2018
New Revision: 339614
URL: https://svnweb.freebsd.org/changeset/base/339614

Log:
  import zstd 1.3.7

Added:
  vendor/zstd/dist/CODE_OF_CONDUCT.md
  vendor/zstd/dist/doc/images/cdict_v136.png   (contents, props changed)
  vendor/zstd/dist/doc/images/zstd_cdict_v1_3_5.png   (contents, props changed)
  vendor/zstd/dist/lib/common/debug.c   (contents, props changed)
  vendor/zstd/dist/lib/common/debug.h   (contents, props changed)
  vendor/zstd/dist/lib/compress/hist.c   (contents, props changed)
  vendor/zstd/dist/lib/compress/hist.h   (contents, props changed)
  vendor/zstd/dist/lib/dictBuilder/cover.h   (contents, props changed)
  vendor/zstd/dist/lib/dictBuilder/fastcover.c   (contents, props changed)
  vendor/zstd/dist/programs/zstdgrep.1   (contents, props changed)
  vendor/zstd/dist/programs/zstdgrep.1.md
  vendor/zstd/dist/programs/zstdless.1   (contents, props changed)
  vendor/zstd/dist/programs/zstdless.1.md
  vendor/zstd/dist/tests/libzstd_partial_builds.sh   (contents, props changed)
  vendor/zstd/dist/tests/rateLimiter.py   (contents, props changed)
Deleted:
  vendor/zstd/dist/circle.yml
  vendor/zstd/dist/doc/images/ldmCspeed.png
  vendor/zstd/dist/doc/images/ldmDspeed.png
  vendor/zstd/dist/doc/images/linux-4.7-12-compress.png
  vendor/zstd/dist/doc/images/linux-4.7-12-decompress.png
  vendor/zstd/dist/doc/images/linux-4.7-12-mt-compress.png
  vendor/zstd/dist/doc/images/linux-git-compress.png
  vendor/zstd/dist/doc/images/linux-git-decompress.png
  vendor/zstd/dist/doc/images/linux-git-mt-compress.png
Modified:
  vendor/zstd/dist/.gitattributes
  vendor/zstd/dist/Makefile
  vendor/zstd/dist/NEWS
  vendor/zstd/dist/README.md
  vendor/zstd/dist/TESTING.md
  vendor/zstd/dist/appveyor.yml
  vendor/zstd/dist/contrib/adaptive-compression/Makefile
  vendor/zstd/dist/contrib/gen_html/Makefile
  vendor/zstd/dist/contrib/meson/meson.build
  vendor/zstd/dist/contrib/pzstd/Makefile
  vendor/zstd/dist/contrib/pzstd/Options.cpp
  vendor/zstd/dist/contrib/pzstd/Pzstd.cpp
  vendor/zstd/dist/contrib/seekable_format/examples/Makefile
  vendor/zstd/dist/contrib/seekable_format/examples/seekable_compression.c
  vendor/zstd/dist/contrib/seekable_format/examples/seekable_decompression.c
  vendor/zstd/dist/contrib/seekable_format/zstd_seekable.h
  vendor/zstd/dist/contrib/seekable_format/zstdseek_decompress.c
  vendor/zstd/dist/doc/zstd_compression_format.md
  vendor/zstd/dist/doc/zstd_manual.html
  vendor/zstd/dist/lib/BUCK
  vendor/zstd/dist/lib/Makefile
  vendor/zstd/dist/lib/README.md
  vendor/zstd/dist/lib/common/bitstream.h
  vendor/zstd/dist/lib/common/compiler.h
  vendor/zstd/dist/lib/common/cpu.h
  vendor/zstd/dist/lib/common/entropy_common.c
  vendor/zstd/dist/lib/common/fse.h
  vendor/zstd/dist/lib/common/fse_decompress.c
  vendor/zstd/dist/lib/common/huf.h
  vendor/zstd/dist/lib/common/mem.h
  vendor/zstd/dist/lib/common/pool.c
  vendor/zstd/dist/lib/common/pool.h
  vendor/zstd/dist/lib/common/xxhash.c
  vendor/zstd/dist/lib/common/zstd_common.c
  vendor/zstd/dist/lib/common/zstd_internal.h
  vendor/zstd/dist/lib/compress/fse_compress.c
  vendor/zstd/dist/lib/compress/huf_compress.c
  vendor/zstd/dist/lib/compress/zstd_compress.c
  vendor/zstd/dist/lib/compress/zstd_compress_internal.h
  vendor/zstd/dist/lib/compress/zstd_double_fast.c
  vendor/zstd/dist/lib/compress/zstd_double_fast.h
  vendor/zstd/dist/lib/compress/zstd_fast.c
  vendor/zstd/dist/lib/compress/zstd_fast.h
  vendor/zstd/dist/lib/compress/zstd_lazy.c
  vendor/zstd/dist/lib/compress/zstd_lazy.h
  vendor/zstd/dist/lib/compress/zstd_ldm.c
  vendor/zstd/dist/lib/compress/zstd_ldm.h
  vendor/zstd/dist/lib/compress/zstd_opt.c
  vendor/zstd/dist/lib/compress/zstd_opt.h
  vendor/zstd/dist/lib/compress/zstdmt_compress.c
  vendor/zstd/dist/lib/compress/zstdmt_compress.h
  vendor/zstd/dist/lib/decompress/huf_decompress.c
  vendor/zstd/dist/lib/decompress/zstd_decompress.c
  vendor/zstd/dist/lib/dictBuilder/cover.c
  vendor/zstd/dist/lib/dictBuilder/divsufsort.c
  vendor/zstd/dist/lib/dictBuilder/zdict.c
  vendor/zstd/dist/lib/dictBuilder/zdict.h
  vendor/zstd/dist/lib/legacy/zstd_v01.c
  vendor/zstd/dist/lib/legacy/zstd_v02.c
  vendor/zstd/dist/lib/legacy/zstd_v03.c
  vendor/zstd/dist/lib/legacy/zstd_v04.c
  vendor/zstd/dist/lib/legacy/zstd_v05.c
  vendor/zstd/dist/lib/legacy/zstd_v06.c
  vendor/zstd/dist/lib/legacy/zstd_v07.c
  vendor/zstd/dist/lib/zstd.h
  vendor/zstd/dist/programs/Makefile
  vendor/zstd/dist/programs/README.md
  vendor/zstd/dist/programs/bench.c
  vendor/zstd/dist/programs/bench.h
  vendor/zstd/dist/programs/datagen.c
  vendor/zstd/dist/programs/dibio.c
  vendor/zstd/dist/programs/dibio.h
  vendor/zstd/dist/programs/fileio.c
  vendor/zstd/dist/programs/fileio.h
  vendor/zstd/dist/programs/platform.h
  vendor/zstd/dist/programs/util.h
  vendor/zstd/dist/programs/zstd.1
  vendor/zstd/dist/programs/zstd.1.md
  vendor/zstd/dist/programs/zstdcli.c
  vendor/zstd/dist/tests/.gitignore
  vendor/zstd/dist/tests/Makefile
  vendor/zstd/dist/tests/README.md
  vendor/zstd/dist/tests/decodecorpus.c
  vendor/zstd/dist/tests/fullbench.c
  vendor/zstd/dist/tests/fuzz/fuzz.h
  vendor/zstd/dist/tests/fuzz/fuzz.py
  vendor/zstd/dist/tests/fuzz/regression_driver.c
  vendor/zstd/dist/tests/fuzz/zstd_helpers.c
  vendor/zstd/dist/tests/fuzzer.c
  vendor/zstd/dist/tests/gzip/Makefile
  vendor/zstd/dist/tests/legacy.c
  vendor/zstd/dist/tests/longmatch.c
  vendor/zstd/dist/tests/paramgrill.c
  vendor/zstd/dist/tests/playTests.sh
  vendor/zstd/dist/tests/poolTests.c
  vendor/zstd/dist/tests/roundTripCrash.c
  vendor/zstd/dist/tests/symbols.c
  vendor/zstd/dist/tests/test-zstd-versions.py
  vendor/zstd/dist/tests/zstreamtest.c
  vendor/zstd/dist/zlibWrapper/examples/minigzip.c
  vendor/zstd/dist/zlibWrapper/examples/zwrapbench.c
  vendor/zstd/dist/zlibWrapper/gzguts.h
  vendor/zstd/dist/zlibWrapper/gzlib.c
  vendor/zstd/dist/zlibWrapper/gzwrite.c

Modified: vendor/zstd/dist/.gitattributes
==============================================================================
--- vendor/zstd/dist/.gitattributes	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/.gitattributes	Mon Oct 22 20:00:30 2018	(r339614)
@@ -19,6 +19,3 @@
 # Windows
 *.bat text eol=crlf
 *.cmd text eol=crlf
-
-# .travis.yml merging
-.travis.yml merge=ours

Added: vendor/zstd/dist/CODE_OF_CONDUCT.md
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ vendor/zstd/dist/CODE_OF_CONDUCT.md	Mon Oct 22 20:00:30 2018	(r339614)
@@ -0,0 +1,5 @@
+# Code of Conduct
+
+Facebook has adopted a Code of Conduct that we expect project participants to adhere to.
+Please read the [full text](https://code.fb.com/codeofconduct/)
+so that you can understand what actions will and will not be tolerated.

Modified: vendor/zstd/dist/Makefile
==============================================================================
--- vendor/zstd/dist/Makefile	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/Makefile	Mon Oct 22 20:00:30 2018	(r339614)
@@ -23,20 +23,19 @@ else
 EXT =
 endif
 
+## default: Build lib-release and zstd-release
 .PHONY: default
 default: lib-release zstd-release
 
 .PHONY: all
-all: | allmost examples manual contrib
+all: allmost examples manual contrib
 
 .PHONY: allmost
-allmost: allzstd
-	$(MAKE) -C $(ZWRAPDIR) all
+allmost: allzstd zlibwrapper
 
-#skip zwrapper, can't build that on alternate architectures without the proper zlib installed
+# skip zwrapper, can't build that on alternate architectures without the proper zlib installed
 .PHONY: allzstd
-allzstd:
-	$(MAKE) -C $(ZSTDDIR) all
+allzstd: lib
 	$(MAKE) -C $(PRGDIR) all
 	$(MAKE) -C $(TESTDIR) all
 
@@ -45,58 +44,62 @@ all32:
 	$(MAKE) -C $(PRGDIR) zstd32
 	$(MAKE) -C $(TESTDIR) all32
 
-.PHONY: lib
-lib:
+.PHONY: lib lib-release libzstd.a
+lib lib-release :
 	@$(MAKE) -C $(ZSTDDIR) $@
 
-.PHONY: lib-release
-lib-release:
-	@$(MAKE) -C $(ZSTDDIR)
-
-.PHONY: zstd
-zstd:
+.PHONY: zstd zstd-release
+zstd zstd-release:
 	@$(MAKE) -C $(PRGDIR) $@
 	cp $(PRGDIR)/zstd$(EXT) .
 
-.PHONY: zstd-release
-zstd-release:
-	@$(MAKE) -C $(PRGDIR)
-	cp $(PRGDIR)/zstd$(EXT) .
-
 .PHONY: zstdmt
 zstdmt:
 	@$(MAKE) -C $(PRGDIR) $@
 	cp $(PRGDIR)/zstd$(EXT) ./zstdmt$(EXT)
 
 .PHONY: zlibwrapper
-zlibwrapper:
-	$(MAKE) -C $(ZWRAPDIR) test
+zlibwrapper: lib
+	$(MAKE) -C $(ZWRAPDIR) all
 
+## test: run long-duration tests
 .PHONY: test
+test: MOREFLAGS += -g -DDEBUGLEVEL=1 -Werror
 test:
-	$(MAKE) -C $(PRGDIR) allVariants MOREFLAGS+="-g -DZSTD_DEBUG=1"
+	MOREFLAGS="$(MOREFLAGS)" $(MAKE) -j -C $(PRGDIR) allVariants
 	$(MAKE) -C $(TESTDIR) $@
 
+## shortest: same as `make check`
 .PHONY: shortest
 shortest:
 	$(MAKE) -C $(TESTDIR) $@
 
+## check: run basic tests for `zstd` cli
 .PHONY: check
 check: shortest
 
+## examples: build all examples in `/examples` directory
 .PHONY: examples
-examples:
+examples: lib
 	CPPFLAGS=-I../lib LDFLAGS=-L../lib $(MAKE) -C examples/ all
 
+## manual: generate API documentation in html format
 .PHONY: manual
 manual:
 	$(MAKE) -C contrib/gen_html $@
 
+## man: generate man page
+.PHONY: man
+man:
+	$(MAKE) -C programs $@
+
+## contrib: build all supported projects in `/contrib` directory
 .PHONY: contrib
 contrib: lib
 	$(MAKE) -C contrib/pzstd all
 	$(MAKE) -C contrib/seekable_format/examples all
 	$(MAKE) -C contrib/adaptive-compression all
+	$(MAKE) -C contrib/largeNbDicts all
 
 .PHONY: cleanTabs
 cleanTabs:
@@ -113,21 +116,39 @@ clean:
 	@$(MAKE) -C contrib/pzstd $@ > $(VOID)
 	@$(MAKE) -C contrib/seekable_format/examples $@ > $(VOID)
 	@$(MAKE) -C contrib/adaptive-compression $@ > $(VOID)
+	@$(MAKE) -C contrib/largeNbDicts $@ > $(VOID)
 	@$(RM) zstd$(EXT) zstdmt$(EXT) tmp*
 	@$(RM) -r lz4
 	@echo Cleaning completed
 
 #------------------------------------------------------------------------------
-# make install is validated only for Linux, OSX, Hurd and some BSD targets
+# make install is validated only for Linux, macOS, Hurd and some BSD targets
 #------------------------------------------------------------------------------
-ifneq (,$(filter $(shell uname),Linux Darwin GNU/kFreeBSD GNU FreeBSD DragonFly NetBSD MSYS_NT))
+ifneq (,$(filter $(shell uname),Linux Darwin GNU/kFreeBSD GNU OpenBSD FreeBSD DragonFly NetBSD MSYS_NT Haiku))
 
 HOST_OS = POSIX
-CMAKE_PARAMS = -DZSTD_BUILD_CONTRIB:BOOL=ON -DZSTD_BUILD_STATIC:BOOL=ON -DZSTD_BUILD_TESTS:BOOL=ON -DZSTD_ZLIB_SUPPORT:BOOL=ON -DZSTD_LZMA_SUPPORT:BOOL=ON
+CMAKE_PARAMS = -DZSTD_BUILD_CONTRIB:BOOL=ON -DZSTD_BUILD_STATIC:BOOL=ON -DZSTD_BUILD_TESTS:BOOL=ON -DZSTD_ZLIB_SUPPORT:BOOL=ON -DZSTD_LZMA_SUPPORT:BOOL=ON -DCMAKE_BUILD_TYPE=Release
 
+EGREP = egrep --color=never
+
+# Print a two column output of targets and their description. To add a target description, put a
+# comment in the Makefile with the format "## : ".  For example:
+#
+## list: Print all targets and their descriptions (if provided)
 .PHONY: list
 list:
-	@$(MAKE) -pRrq -f $(lastword $(MAKEFILE_LIST)) : 2>/dev/null | awk -v RS= -F: '/^# File/,/^# Finished Make data base/ {if ($$1 !~ "^[#.]") {print $$1}}' | sort | egrep -v -e '^[^[:alnum:]]' -e '^$@$$' | xargs
+	@TARGETS=$$($(MAKE) -pRrq -f $(lastword $(MAKEFILE_LIST)) : 2>/dev/null \
+		| awk -v RS= -F: '/^# File/,/^# Finished Make data base/ {if ($$1 !~ "^[#.]") {print $$1}}' \
+		| $(EGREP) -v  -e '^[^[:alnum:]]' | sort); \
+	{ \
+	    printf "Target Name\tDescription\n"; \
+	    printf "%0.s-" {1..16}; printf "\t"; printf "%0.s-" {1..40}; printf "\n"; \
+	    for target in $$TARGETS; do \
+	        line=$$($(EGREP) "^##[[:space:]]+$$target:" $(lastword $(MAKEFILE_LIST))); \
+	        description=$$(echo $$line | awk '{i=index($$0,":"); print substr($$0,i+1)}' | xargs); \
+	        printf "$$target\t$$description\n"; \
+	    done \
+	} | column -t -s $$'\t'
 
 .PHONY: install clangtest armtest usan asan uasan
 install:
@@ -183,6 +204,7 @@ armfuzz: clean
 	CC=arm-linux-gnueabi-gcc QEMU_SYS=qemu-arm-static MOREFLAGS="-static" FUZZER_FLAGS=--no-big-tests $(MAKE) -C $(TESTDIR) fuzztest
 
 aarch64fuzz: clean
+	ld -v
 	CC=aarch64-linux-gnu-gcc QEMU_SYS=qemu-aarch64-static MOREFLAGS="-static" FUZZER_FLAGS=--no-big-tests $(MAKE) -C $(TESTDIR) fuzztest
 
 ppcfuzz: clean
@@ -206,7 +228,7 @@ gcc6test: clean
 
 clangtest: clean
 	clang -v
-	$(MAKE) all CXX=clang-++ CC=clang MOREFLAGS="-Werror -Wconversion -Wno-sign-conversion -Wdocumentation"
+	$(MAKE) all CXX=clang++ CC=clang MOREFLAGS="-Werror -Wconversion -Wno-sign-conversion -Wdocumentation"
 
 armtest: clean
 	$(MAKE) -C $(TESTDIR) datagen   # use native, faster
@@ -295,6 +317,9 @@ gcc6install: apt-add-repo
 gcc7install: apt-add-repo
 	APT_PACKAGES="libc6-dev-i386 gcc-multilib gcc-7 gcc-7-multilib" $(MAKE) apt-install
 
+gcc8install: apt-add-repo
+	APT_PACKAGES="libc6-dev-i386 gcc-multilib gcc-8 gcc-8-multilib" $(MAKE) apt-install
+
 gpp6install: apt-add-repo
 	APT_PACKAGES="libc6-dev-i386 g++-multilib gcc-6 g++-6 g++-6-multilib" $(MAKE) apt-install
 
@@ -326,23 +351,23 @@ cmakebuild:
 
 c90build: clean
 	$(CC) -v
-	CFLAGS="-std=c90" $(MAKE) allmost  # will fail, due to missing support for `long long`
+	CFLAGS="-std=c90 -Werror" $(MAKE) allmost  # will fail, due to missing support for `long long`
 
 gnu90build: clean
 	$(CC) -v
-	CFLAGS="-std=gnu90" $(MAKE) allmost
+	CFLAGS="-std=gnu90 -Werror" $(MAKE) allmost
 
 c99build: clean
 	$(CC) -v
-	CFLAGS="-std=c99" $(MAKE) allmost
+	CFLAGS="-std=c99 -Werror" $(MAKE) allmost
 
 gnu99build: clean
 	$(CC) -v
-	CFLAGS="-std=gnu99" $(MAKE) allmost
+	CFLAGS="-std=gnu99 -Werror" $(MAKE) allmost
 
 c11build: clean
 	$(CC) -v
-	CFLAGS="-std=c11" $(MAKE) allmost
+	CFLAGS="-std=c11 -Werror" $(MAKE) allmost
 
 bmix64build: clean
 	$(CC) -v
@@ -356,7 +381,10 @@ bmi32build: clean
 	$(CC) -v
 	CFLAGS="-O3 -mbmi -m32 -Werror" $(MAKE) -C $(TESTDIR) test
 
-staticAnalyze: clean
+# static analyzer test uses clang's scan-build
+# does not analyze zlibWrapper, due to detected issues in zlib source code
+staticAnalyze: SCANBUILD ?= scan-build
+staticAnalyze:
 	$(CC) -v
-	CPPFLAGS=-g scan-build --status-bugs -v $(MAKE) all
+	CC=$(CC) CPPFLAGS=-g $(SCANBUILD) --status-bugs -v $(MAKE) allzstd examples contrib
 endif

Modified: vendor/zstd/dist/NEWS
==============================================================================
--- vendor/zstd/dist/NEWS	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/NEWS	Mon Oct 22 20:00:30 2018	(r339614)
@@ -1,3 +1,39 @@
+v1.3.7
+perf: slightly better decompression speed on clang (depending on hardware target)
+fix : performance of dictionary compression for small input < 4 KB at levels 9 and 10
+build: no longer build backtrace by default in release mode; restrict further automatic mode
+build: control backtrace support through build macro BACKTRACE
+misc: added man pages for zstdless and zstdgrep, by @samrussell
+
+v1.3.6
+perf: much faster dictionary builder, by @jenniferliu
+perf: faster dictionary compression on small data when using multiple contexts, by @felixhandte
+perf: faster dictionary decompression when using a very large number of dictionaries simultaneously
+cli : fix : no longer overwrites destination when source does not exist (#1082)
+cli : new command --adapt, for automatic compression level adaptation
+api : fix : block api can be streamed with > 4 GB, reported by @catid
+api : reduced ZSTD_DDict size by 2 KB
+api : minimum negative compression level is defined, and can be queried using ZSTD_minCLevel().
+build: support Haiku target, by @korli
+build: Read Legacy format is limited to v0.5+ by default. Can be changed at compile time with macro ZSTD_LEGACY_SUPPORT.
+doc : zstd_compression_format.md updated to match wording in IETF RFC 8478
+misc: tests/paramgrill, a parameter optimizer, by @GeorgeLu97
+
+v1.3.5
+perf: much faster dictionary compression, by @felixhandte
+perf: small quality improvement for dictionary generation, by @terrelln
+perf: slightly improved high compression levels (notably level 19)
+mem : automatic memory release for long duration contexts
+cli : fix : overlapLog can be manually set
+cli : fix : decoding invalid lz4 frames
+api : fix : performance degradation for dictionary compression when using advanced API, by @terrelln
+api : change : clarify ZSTD_CCtx_reset() vs ZSTD_CCtx_resetParameters(), by @terrelln
+build: select custom libzstd scope through control macros, by @GeorgeLu97
+build: OpenBSD patch, by @bket
+build: make and make all are compatible with -j
+doc : clarify zstd_compression_format.md, updated for IETF RFC process
+misc: pzstd compatible with reproducible compilation, by @lamby
+
 v1.3.4
 perf: faster speed (especially decoding speed) on recent cpus (haswell+)
 perf: much better performance associating --long with multi-threading, by @terrelln
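
Among the v1.3.6 notes above, the ZSTD_minCLevel() addition is easy to probe; a minimal
sketch, assuming the function may still sit behind ZSTD_STATIC_LINKING_ONLY in this
snapshot (defining the macro is harmless if it is already in the stable API):

    #define ZSTD_STATIC_LINKING_ONLY
    #include <stdio.h>
    #include <zstd.h>

    int main(void)
    {
        /* Print the supported compression-level range; negative levels trade
         * ratio for speed, and the lower bound is queryable since v1.3.6. */
        printf("libzstd %s: compression levels %d .. %d\n",
               ZSTD_VERSION_STRING, ZSTD_minCLevel(), ZSTD_maxCLevel());
        return 0;
    }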

Modified: vendor/zstd/dist/README.md
==============================================================================
--- vendor/zstd/dist/README.md	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/README.md	Mon Oct 22 20:00:30 2018	(r339614)
@@ -4,7 +4,7 @@ __Zstandard__, or `zstd` as short version, is a fast l
 targeting real-time compression scenarios at zlib-level and better compression ratios.
 It's backed by a very fast entropy stage, provided by [Huff0 and FSE library](https://github.com/Cyan4973/FiniteStateEntropy).
 
-The project is provided as an open-source BSD-licensed **C** library,
+The project is provided as an open-source dual [BSD](LICENSE) and [GPLv2](COPYING) licensed **C** library,
 and a command line utility producing and decoding `.zst`, `.gz`, `.xz` and `.lz4` files.
 Should your project require another programming language,
 a list of known ports and bindings is provided on [Zstandard homepage](http://www.zstd.net/#other-languages).
@@ -120,6 +120,8 @@ Other available options include:
 A `cmake` project generator is provided within `build/cmake`.
 It can generate Makefiles or other build scripts
 to create `zstd` binary, and `libzstd` dynamic and static libraries.
+
+By default, `CMAKE_BUILD_TYPE` is set to `Release`.
 
 #### Meson
 

Modified: vendor/zstd/dist/TESTING.md
==============================================================================
--- vendor/zstd/dist/TESTING.md	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/TESTING.md	Mon Oct 22 20:00:30 2018	(r339614)
@@ -41,4 +41,4 @@ They consist of the following tests:
 - `pzstd` with asan and tsan, as well as in 32-bits mode
 - Testing `zstd` with legacy mode off
 - Testing `zbuff` (old streaming API)
-- Entire test suite and make install on OS X
+- Entire test suite and make install on macOS

Modified: vendor/zstd/dist/appveyor.yml
==============================================================================
--- vendor/zstd/dist/appveyor.yml	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/appveyor.yml	Mon Oct 22 20:00:30 2018	(r339614)
@@ -181,15 +181,15 @@
     - COMPILER: "gcc"
       HOST:     "mingw"
       PLATFORM: "x64"
-      SCRIPT:   "make allzstd"
+      SCRIPT:   "CPPFLAGS=-DDEBUGLEVEL=2 CFLAGS=-Werror make -j allzstd DEBUGLEVEL=2"
     - COMPILER: "gcc"
       HOST:     "mingw"
       PLATFORM: "x86"
-      SCRIPT:   "make allzstd"
+      SCRIPT:   "CFLAGS=-Werror make -j allzstd"
     - COMPILER: "clang"
       HOST:     "mingw"
       PLATFORM: "x64"
-      SCRIPT:   "MOREFLAGS='--target=x86_64-w64-mingw32 -Werror -Wconversion -Wno-sign-conversion' make allzstd"
+      SCRIPT:   "CFLAGS='--target=x86_64-w64-mingw32 -Werror -Wconversion -Wno-sign-conversion' make -j allzstd"
 
     - COMPILER: "visual"
       HOST:     "visual"

Modified: vendor/zstd/dist/contrib/adaptive-compression/Makefile
==============================================================================
--- vendor/zstd/dist/contrib/adaptive-compression/Makefile	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/contrib/adaptive-compression/Makefile	Mon Oct 22 20:00:30 2018	(r339614)
@@ -48,7 +48,7 @@ clean:
 	@echo "finished cleaning"
 
 #-----------------------------------------------------------------------------
-# make install is validated only for Linux, OSX, BSD, Hurd and Solaris targets
+# make install is validated only for Linux, macOS, BSD, Hurd and Solaris targets
 #-----------------------------------------------------------------------------
 ifneq (,$(filter $(shell uname),Linux Darwin GNU/kFreeBSD GNU OpenBSD FreeBSD NetBSD DragonFly SunOS))
 

Modified: vendor/zstd/dist/contrib/gen_html/Makefile
==============================================================================
--- vendor/zstd/dist/contrib/gen_html/Makefile	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/contrib/gen_html/Makefile	Mon Oct 22 20:00:30 2018	(r339614)
@@ -10,7 +10,7 @@
 CXXFLAGS ?= -O3
 CXXFLAGS += -Wall -Wextra -Wcast-qual -Wcast-align -Wshadow -Wstrict-aliasing=1 -Wswitch-enum -Wno-comment
 CXXFLAGS += $(MOREFLAGS)
-FLAGS   = $(CPPFLAGS) $(CXXFLAGS) $(CXXFLAGS) $(LDFLAGS)
+FLAGS   = $(CPPFLAGS) $(CXXFLAGS) $(LDFLAGS)
 
 ZSTDAPI = ../../lib/zstd.h
 ZSTDMANUAL = ../../doc/zstd_manual.html

Modified: vendor/zstd/dist/contrib/meson/meson.build
==============================================================================
--- vendor/zstd/dist/contrib/meson/meson.build	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/contrib/meson/meson.build	Mon Oct 22 20:00:30 2018	(r339614)
@@ -18,6 +18,7 @@ libzstd_srcs = [
     join_paths(common_dir, 'error_private.c'),
     join_paths(common_dir, 'xxhash.c'),
     join_paths(compress_dir, 'fse_compress.c'),
+    join_paths(compress_dir, 'hist.c'),
     join_paths(compress_dir, 'huf_compress.c'),
     join_paths(compress_dir, 'zstd_compress.c'),
     join_paths(compress_dir, 'zstd_fast.c'),
@@ -130,6 +131,7 @@ test('fuzzer', fuzzer)
 if target_machine.system() != 'windows'
     paramgrill = executable('paramgrill',
                             datagen_c, join_paths(tests_dir, 'paramgrill.c'),
+                            join_paths(programs_dir, 'bench.c'),
                             include_directories: test_includes,
                             link_with: libzstd,
                             dependencies: libm)

Modified: vendor/zstd/dist/contrib/pzstd/Makefile
==============================================================================
--- vendor/zstd/dist/contrib/pzstd/Makefile	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/contrib/pzstd/Makefile	Mon Oct 22 20:00:30 2018	(r339614)
@@ -42,7 +42,7 @@ PZSTD_LDFLAGS   =
 EXTRA_FLAGS     =
 ALL_CFLAGS      = $(EXTRA_FLAGS) $(CPPFLAGS) $(PZSTD_CPPFLAGS) $(CFLAGS)   $(PZSTD_CFLAGS)
 ALL_CXXFLAGS    = $(EXTRA_FLAGS) $(CPPFLAGS) $(PZSTD_CPPFLAGS) $(CXXFLAGS) $(PZSTD_CXXFLAGS)
-ALL_LDFLAGS     = $(EXTRA_FLAGS) $(LDFLAGS) $(PZSTD_LDFLAGS)
+ALL_LDFLAGS     = $(EXTRA_FLAGS) $(CXXFLAGS) $(LDFLAGS) $(PZSTD_LDFLAGS)
 
 
 # gtest libraries need to go before "-lpthread" because they depend on it.
@@ -50,7 +50,7 @@ GTEST_LIB  = -L googletest/build/googlemock/gtest
 LIBS       =
 
 # Compilation commands
-LD_COMMAND  = $(CXX) $^          $(ALL_LDFLAGS) $(LIBS) -lpthread -o $@
+LD_COMMAND  = $(CXX) $^          $(ALL_LDFLAGS) $(LIBS) -pthread -o $@
 CC_COMMAND  = $(CC)  $(DEPFLAGS) $(ALL_CFLAGS)   -c $<  -o $@
 CXX_COMMAND = $(CXX) $(DEPFLAGS) $(ALL_CXXFLAGS) -c $<  -o $@
 

Modified: vendor/zstd/dist/contrib/pzstd/Options.cpp
==============================================================================
--- vendor/zstd/dist/contrib/pzstd/Options.cpp	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/contrib/pzstd/Options.cpp	Mon Oct 22 20:00:30 2018	(r339614)
@@ -18,17 +18,6 @@
 #include 
 #include 
 
-#if defined(MSDOS) || defined(OS2) || defined(WIN32) || defined(_WIN32) ||     \
-    defined(__CYGWIN__)
-#include <io.h>  /* _isatty */
-#define IS_CONSOLE(stdStream) _isatty(_fileno(stdStream))
-#elif defined(_POSIX_C_SOURCE) || defined(_XOPEN_SOURCE) || defined(_POSIX_SOURCE) || (defined(__APPLE__) && defined(__MACH__)) || \
-      defined(__DragonFly__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)  /* https://sourceforge.net/p/predef/wiki/OperatingSystems/ */
-#include <unistd.h>  /* isatty */
-#define IS_CONSOLE(stdStream) isatty(fileno(stdStream))
-#else
-#define IS_CONSOLE(stdStream) 0
-#endif
 
 namespace pzstd {
 
@@ -85,7 +74,7 @@ void usage() {
   std::fprintf(stderr, "Usage:\n");
   std::fprintf(stderr, "  pzstd [args] [FILE(s)]\n");
   std::fprintf(stderr, "Parallel ZSTD options:\n");
-  std::fprintf(stderr, "  -p, --processes   #    : number of threads to use for (de)compression (default:%d)\n", defaultNumThreads());
+  std::fprintf(stderr, "  -p, --processes   #    : number of threads to use for (de)compression (default:)\n");
 
   std::fprintf(stderr, "ZSTD options:\n");
   std::fprintf(stderr, "  -#                     : # compression level (1-%d, default:%d)\n", kMaxNonUltraCompressionLevel, kDefaultCompressionLevel);

Modified: vendor/zstd/dist/contrib/pzstd/Pzstd.cpp
==============================================================================
--- vendor/zstd/dist/contrib/pzstd/Pzstd.cpp	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/contrib/pzstd/Pzstd.cpp	Mon Oct 22 20:00:30 2018	(r339614)
@@ -6,6 +6,7 @@
  * LICENSE file in the root directory of this source tree) and the GPLv2 (found
  * in the COPYING file in the root directory of this source tree).
  */
+#include "platform.h"   /* Large Files support, SET_BINARY_MODE */
 #include "Pzstd.h"
 #include "SkippableFrame.h"
 #include "utils/FileSystem.h"
@@ -21,14 +22,6 @@
 #include 
 #include 
 
-#if defined(MSDOS) || defined(OS2) || defined(WIN32) || defined(_WIN32) || defined(__CYGWIN__)
-#  include <fcntl.h>     /* _O_BINARY */
-#  include <io.h>        /* _setmode, _isatty */
-#  define SET_BINARY_MODE(file) { if (_setmode(_fileno(file), _O_BINARY) == -1) perror("Cannot set _O_BINARY"); }
-#else
-#  include <unistd.h>    /* isatty */
-#  define SET_BINARY_MODE(file)
-#endif
 
 namespace pzstd {
 

Modified: vendor/zstd/dist/contrib/seekable_format/examples/Makefile
==============================================================================
--- vendor/zstd/dist/contrib/seekable_format/examples/Makefile	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/contrib/seekable_format/examples/Makefile	Mon Oct 22 20:00:30 2018	(r339614)
@@ -9,19 +9,25 @@
 
 # This Makefile presumes libzstd is built, using `make` in / or /lib/
 
-LDFLAGS += ../../../lib/libzstd.a
+ZSTDLIB_PATH = ../../../lib
+ZSTDLIB_NAME = libzstd.a
+ZSTDLIB = $(ZSTDLIB_PATH)/$(ZSTDLIB_NAME)
+
 CPPFLAGS += -I../ -I../../../lib -I../../../lib/common
 
 CFLAGS ?= -O3
 CFLAGS += -g
 
-SEEKABLE_OBJS = ../zstdseek_compress.c ../zstdseek_decompress.c
+SEEKABLE_OBJS = ../zstdseek_compress.c ../zstdseek_decompress.c $(ZSTDLIB)
 
 .PHONY: default all clean test
 
 default: all
 
 all: seekable_compression seekable_decompression parallel_processing
+
+$(ZSTDLIB):
+	make -C $(ZSTDLIB_PATH) $(ZSTDLIB_NAME)
 
 seekable_compression : seekable_compression.c $(SEEKABLE_OBJS)
 	$(CC) $(CPPFLAGS) $(CFLAGS) $^ $(LDFLAGS) -o $@

Modified: vendor/zstd/dist/contrib/seekable_format/examples/seekable_compression.c
==============================================================================
--- vendor/zstd/dist/contrib/seekable_format/examples/seekable_compression.c	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/contrib/seekable_format/examples/seekable_compression.c	Mon Oct 22 20:00:30 2018	(r339614)
@@ -101,7 +101,7 @@ static void compressFile_orDie(const char* fname, cons
     free(buffOut);
 }
 
-static const char* createOutFilename_orDie(const char* filename)
+static char* createOutFilename_orDie(const char* filename)
 {
     size_t const inL = strlen(filename);
     size_t const outL = inL + 5;
@@ -109,7 +109,7 @@ static const char* createOutFilename_orDie(const char*
     memset(outSpace, 0, outL);
     strcat(outSpace, filename);
     strcat(outSpace, ".zst");
-    return (const char*)outSpace;
+    return (char*)outSpace;
 }
 
 int main(int argc, const char** argv) {
@@ -124,8 +124,9 @@ int main(int argc, const char** argv) {
     {   const char* const inFileName = argv[1];
         unsigned const frameSize = (unsigned)atoi(argv[2]);
 
-        const char* const outFileName = createOutFilename_orDie(inFileName);
+        char* const outFileName = createOutFilename_orDie(inFileName);
         compressFile_orDie(inFileName, outFileName, 5, frameSize);
+        free(outFileName);
     }
 
     return 0;

Modified: vendor/zstd/dist/contrib/seekable_format/examples/seekable_decompression.c
==============================================================================
--- vendor/zstd/dist/contrib/seekable_format/examples/seekable_decompression.c	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/contrib/seekable_format/examples/seekable_decompression.c	Mon Oct 22 20:00:30 2018	(r339614)
@@ -84,7 +84,7 @@ static void fseek_orDie(FILE* file, long int offset, i
 }
 
 
-static void decompressFile_orDie(const char* fname, unsigned startOffset, unsigned endOffset)
+static void decompressFile_orDie(const char* fname, off_t startOffset, off_t endOffset)
 {
     FILE* const fin  = fopen_orDie(fname, "rb");
     FILE* const fout = stdout;
@@ -129,8 +129,8 @@ int main(int argc, const char** argv)
 
     {
         const char* const inFilename = argv[1];
-        unsigned const startOffset = (unsigned) atoi(argv[2]);
-        unsigned const endOffset = (unsigned) atoi(argv[3]);
+        off_t const startOffset = atoll(argv[2]);
+        off_t const endOffset = atoll(argv[3]);
         decompressFile_orDie(inFilename, startOffset, endOffset);
     }
 

Modified: vendor/zstd/dist/contrib/seekable_format/zstd_seekable.h
==============================================================================
--- vendor/zstd/dist/contrib/seekable_format/zstd_seekable.h	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/contrib/seekable_format/zstd_seekable.h	Mon Oct 22 20:00:30 2018	(r339614)
@@ -6,8 +6,10 @@ extern "C" {
 #endif
 
 #include 
+#include "zstd.h"   /* ZSTDLIB_API */
 
-static const unsigned ZSTD_seekTableFooterSize = 9;
+
+#define ZSTD_seekTableFooterSize 9
 
 #define ZSTD_SEEKABLE_MAGICNUMBER 0x8F92EAB1
 

Modified: vendor/zstd/dist/contrib/seekable_format/zstdseek_decompress.c
==============================================================================
--- vendor/zstd/dist/contrib/seekable_format/zstdseek_decompress.c	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/contrib/seekable_format/zstdseek_decompress.c	Mon Oct 22 20:00:30 2018	(r339614)
@@ -24,7 +24,7 @@
 #endif
 
 /* ************************************************************
-* Avoid fseek()'s 2GiB barrier with MSVC, MacOS, *BSD, MinGW
+* Avoid fseek()'s 2GiB barrier with MSVC, macOS, *BSD, MinGW
 ***************************************************************/
 #if defined(_MSC_VER) && _MSC_VER >= 1400
 #   define LONG_SEEK _fseeki64
@@ -56,6 +56,7 @@
 
 #include <stdlib.h> /* malloc, free */
 #include <stdio.h>   /* FILE* */
+#include <assert.h>
 
 #define XXH_STATIC_LINKING_ONLY
 #define XXH_NAMESPACE ZSTD_
@@ -88,7 +89,7 @@ static int ZSTD_seekable_read_FILE(void* opaque, void*
     return 0;
 }
 
-static int ZSTD_seekable_seek_FILE(void* opaque, S64 offset, int origin)
+static int ZSTD_seekable_seek_FILE(void* opaque, long long offset, int origin)
 {
     int const ret = LONG_SEEK((FILE*)opaque, offset, origin);
     if (ret) return ret;
@@ -110,9 +111,9 @@ static int ZSTD_seekable_read_buff(void* opaque, void*
     return 0;
 }
 
-static int ZSTD_seekable_seek_buff(void* opaque, S64 offset, int origin)
+static int ZSTD_seekable_seek_buff(void* opaque, long long offset, int origin)
 {
-    buffWrapper_t* buff = (buffWrapper_t*) opaque;
+    buffWrapper_t* const buff = (buffWrapper_t*) opaque;
     unsigned long long newOffset;
     switch (origin) {
     case SEEK_SET:
@@ -124,6 +125,8 @@ static int ZSTD_seekable_seek_buff(void* opaque, S64 o
     case SEEK_END:
         newOffset = (unsigned long long)buff->size - offset;
         break;
+    default:
+        assert(0);  /* not possible */
     }
     if (newOffset > buff->size) {
         return -1;
@@ -197,7 +200,7 @@ size_t ZSTD_seekable_free(ZSTD_seekable* zs)
  *  Performs a binary search to find the last frame with a decompressed offset
  *  <= pos
  *  @return : the frame's index */
-U32 ZSTD_seekable_offsetToFrameIndex(ZSTD_seekable* const zs, U64 pos)
+U32 ZSTD_seekable_offsetToFrameIndex(ZSTD_seekable* const zs, unsigned long long pos)
 {
     U32 lo = 0;
     U32 hi = zs->seekTable.tableLen;
@@ -222,13 +225,13 @@ U32 ZSTD_seekable_getNumFrames(ZSTD_seekable* const zs
     return zs->seekTable.tableLen;
 }
 
-U64 ZSTD_seekable_getFrameCompressedOffset(ZSTD_seekable* const zs, U32 frameIndex)
+unsigned long long ZSTD_seekable_getFrameCompressedOffset(ZSTD_seekable* const zs, U32 frameIndex)
 {
     if (frameIndex >= zs->seekTable.tableLen) return ZSTD_SEEKABLE_FRAMEINDEX_TOOLARGE;
     return zs->seekTable.entries[frameIndex].cOffset;
 }
 
-U64 ZSTD_seekable_getFrameDecompressedOffset(ZSTD_seekable* const zs, U32 frameIndex)
+unsigned long long ZSTD_seekable_getFrameDecompressedOffset(ZSTD_seekable* const zs, U32 frameIndex)
 {
     if (frameIndex >= zs->seekTable.tableLen) return ZSTD_SEEKABLE_FRAMEINDEX_TOOLARGE;
     return zs->seekTable.entries[frameIndex].dOffset;
@@ -294,7 +297,6 @@ static size_t ZSTD_seekable_loadSeekTable(ZSTD_seekabl
         {   /* Allocate an extra entry at the end so that we can do size
              * computations on the last element without special case */
             seekEntry_t* entries = (seekEntry_t*)malloc(sizeof(seekEntry_t) * (numFrames + 1));
-            const BYTE* tableBase = zs->inBuff + ZSTD_skippableHeaderSize;
 
             U32 idx = 0;
             U32 pos = 8;
@@ -311,8 +313,8 @@ static size_t ZSTD_seekable_loadSeekTable(ZSTD_seekabl
             /* compute cumulative positions */
             for (; idx < numFrames; idx++) {
                 if (pos + sizePerEntry > SEEKABLE_BUFF_SIZE) {
-                    U32 const toRead = MIN(remaining, SEEKABLE_BUFF_SIZE);
                     U32 const offset = SEEKABLE_BUFF_SIZE - pos;
+                    U32 const toRead = MIN(remaining, SEEKABLE_BUFF_SIZE - offset);
                     memmove(zs->inBuff, zs->inBuff + pos, offset); /* move any data we haven't read yet */
                     CHECK_IO(src.read(src.opaque, zs->inBuff+offset, toRead));
                     remaining -= toRead;
@@ -372,7 +374,7 @@ size_t ZSTD_seekable_initAdvanced(ZSTD_seekable* zs, Z
     return 0;
 }
 
-size_t ZSTD_seekable_decompress(ZSTD_seekable* zs, void* dst, size_t len, U64 offset)
+size_t ZSTD_seekable_decompress(ZSTD_seekable* zs, void* dst, size_t len, unsigned long long offset)
 {
     U32 targetFrame = ZSTD_seekable_offsetToFrameIndex(zs, offset);
     do {

Added: vendor/zstd/dist/doc/images/cdict_v136.png
==============================================================================
Binary file. No diff available.

Added: vendor/zstd/dist/doc/images/zstd_cdict_v1_3_5.png
==============================================================================
Binary file. No diff available.

Modified: vendor/zstd/dist/doc/zstd_compression_format.md
==============================================================================
--- vendor/zstd/dist/doc/zstd_compression_format.md	Mon Oct 22 19:55:18 2018	(r339613)
+++ vendor/zstd/dist/doc/zstd_compression_format.md	Mon Oct 22 20:00:30 2018	(r339614)
@@ -16,7 +16,7 @@ Distribution of this document is unlimited.
 
 ### Version
 
-0.2.6 (19/08/17)
+0.3.0 (25/09/18)
 
 
 Introduction
@@ -27,6 +27,8 @@ that is independent of CPU type, operating system,
 file system and character set, suitable for
 file compression, pipe and streaming compression,
 using the [Zstandard algorithm](http://www.zstandard.org).
+The text of the specification assumes a basic background in programming
+at the level of bits and other primitive data representations.
 
 The data can be produced or consumed,
 even for an arbitrarily long sequentially presented input data stream,
@@ -39,11 +41,6 @@ for detection of data corruption.
 The data format defined by this specification
 does not attempt to allow random access to compressed data.
 
-This specification is intended for use by implementers of software
-to compress data into Zstandard format and/or decompress data from Zstandard format.
-The text of the specification assumes a basic background in programming
-at the level of bits and other primitive data representations.
-
 Unless otherwise indicated below,
 a compliant compressor must produce data sets
 that conform to the specifications presented here.
@@ -57,6 +54,12 @@ Whenever it does not support a parameter defined in th
 it must produce a non-ambiguous error code and associated error message
 explaining which parameter is unsupported.
 
+This specification is intended for use by implementers of software
+to compress data into Zstandard format and/or decompress data from Zstandard format.
+The Zstandard format is supported by an open source reference implementation,
+written in portable C, and available at : https://github.com/facebook/zstd .
+
+
 ### Overall conventions
 In this document:
 - square brackets i.e. `[` and `]` are used to indicate optional fields or parameters.
@@ -69,7 +72,7 @@ A frame is completely independent, has a defined begin
 and a set of parameters which tells the decoder how to decompress it.
 
 A frame encapsulates one or multiple __blocks__.
-Each block can be compressed or not,
+Each block contains arbitrary content, which is described by its header,
 and has a guaranteed maximum content size, which depends on frame parameters.
 Unlike frames, each block depends on previous blocks for proper decoding.
 However, each block can be decompressed without waiting for its successor,
@@ -92,14 +95,14 @@ Overview
 Frames
 ------
 Zstandard compressed data is made of one or more __frames__.
-Each frame is independent and can be decompressed indepedently of other frames.
+Each frame is independent and can be decompressed independently of other frames.
 The decompressed content of multiple concatenated frames is the concatenation of
 each frame decompressed content.
 
 There are two frame formats defined by Zstandard:
   Zstandard frames and Skippable frames.
 Zstandard frames contain compressed data, while
-skippable frames contain no data and can be used for metadata.
+skippable frames contain custom user metadata.
 
 ## Zstandard frames
 The structure of a single Zstandard frame is following:
@@ -112,6 +115,11 @@ __`Magic_Number`__
 
 4 Bytes, __little-endian__ format.
 Value : 0xFD2FB528
+Note: This value was selected to be less probable to find at the beginning of some random file.
+It avoids trivial patterns (0x00, 0xFF, repeated bytes, increasing bytes, etc.),
+contains byte values outside of ASCII range,
+and doesn't map into UTF8 space.
+It reduces the chances that a text file represents this value by accident.
 
 __`Frame_Header`__
 
@@ -171,8 +179,8 @@ according to the following table:
 |`FCS_Field_Size`| 0 or 1 |  2  |  4  |  8  |
 
 When `Flag_Value` is `0`, `FCS_Field_Size` depends on `Single_Segment_flag` :
-if `Single_Segment_flag` is set, `Field_Size` is 1.
-Otherwise, `Field_Size` is 0 : `Frame_Content_Size` is not provided.
+if `Single_Segment_flag` is set, `FCS_Field_Size` is 1.
+Otherwise, `FCS_Field_Size` is 0 : `Frame_Content_Size` is not provided.
 
 __`Single_Segment_flag`__
 
@@ -196,10 +204,10 @@ depending on local limitations.
 
 __`Unused_bit`__
 
-The value of this bit should be set to zero.
-A decoder compliant with this specification version shall not interpret it.
-It might be used in a future version,
-to signal a property which is not mandatory to properly decode the frame.
+A decoder compliant with this specification version shall not interpret this bit.
+It might be used in any future version,
+to signal a property which is transparent to properly decode the frame.
+An encoder compliant with this specification version must set this bit to zero.
 
 __`Reserved_bit`__
 
@@ -218,11 +226,11 @@ __`Dictionary_ID_flag`__
 
 This is a 2-bits flag (`= FHD & 3`),
 telling if a dictionary ID is provided within the header.
-It also specifies the size of this field as `Field_Size`.
+It also specifies the size of this field as `DID_Field_Size`.
 
-|`Flag_Value`|  0  |  1  |  2  |  3  |
-| ---------- | --- | --- | --- | --- |
-|`Field_Size`|  0  |  1  |  2  |  4  |
+|`Flag_Value`    |  0  |  1  |  2  |  3  |
+| -------------- | --- | --- | --- | --- |
+|`DID_Field_Size`|  0  |  1  |  2  |  4  |
 
 #### `Window_Descriptor`
 
@@ -249,6 +257,9 @@ Window_Size = windowBase + windowAdd;
 The minimum `Window_Size` is 1 KB.
 The maximum `Window_Size` is `(1<<41) + 7*(1<<38)` bytes, which is 3.75 TB.
 
+In general, larger `Window_Size` tend to improve compression ratio,
+but at the cost of memory usage.
+
 To properly decode compressed data,
 a decoder will need to allocate a buffer of at least `Window_Size` bytes.
 
@@ -257,8 +268,8 @@ a decoder is allowed to reject a compressed frame
 which requests a memory size beyond decoder's authorized range.
 
 For improved interoperability,
-decoders are recommended to be compatible with `Window_Size <= 8 MB`,
-and encoders are recommended to not request more than 8 MB.
+it's recommended for decoders to support `Window_Size` of up to 8 MB,
+and it's recommended for encoders to not generate frames requiring `Window_Size` larger than 8 MB.
 It's merely a recommendation though,
 decoders are free to support larger or lower limits,
 depending on local limitations.
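
The frame-header hunks above describe a purely byte-level layout; a small standalone
sketch (not taken from the zstd sources, and relying on bit positions from parts of the
spec not quoted in this diff) shows how the FCS/DID field sizes and Window_Size fall out
of two header bytes:

    #include <stdint.h>

    /* Frame_Header_Descriptor bit layout: bits 6-7 = Frame_Content_Size_flag,
     * bit 5 = Single_Segment_flag, bits 0-1 = Dictionary_ID_flag (= FHD & 3). */
    static void zstd_fhd_field_sizes(uint8_t fhd,
                                     unsigned* fcsFieldSize,
                                     unsigned* didFieldSize)
    {
        static const unsigned fcs[4] = { 0, 2, 4, 8 };   /* FCS_Field_Size table above */
        static const unsigned did[4] = { 0, 1, 2, 4 };   /* DID_Field_Size table above */
        unsigned const fcsFlag       = fhd >> 6;
        unsigned const singleSegment = (fhd >> 5) & 1;
        *fcsFieldSize = (fcsFlag == 0 && singleSegment) ? 1 : fcs[fcsFlag];
        *didFieldSize = did[fhd & 3];
    }

    /* Window_Size from the optional 1-byte Window_Descriptor:
     * Exponent = WD >> 3, Mantissa = WD & 7, windowBase = 1 << (10 + Exponent),
     * windowAdd = (windowBase / 8) * Mantissa, Window_Size = windowBase + windowAdd. */
    static uint64_t zstd_window_size(uint8_t windowDescriptor)
    {
        unsigned const exponent   = windowDescriptor >> 3;
        unsigned const mantissa   = windowDescriptor & 7;
        uint64_t const windowBase = 1ULL << (10 + exponent);
        uint64_t const windowAdd  = (windowBase / 8) * mantissa;
        return windowBase + windowAdd;   /* 1 KB .. (1<<41) + 7*(1<<38), per the text above */
    }
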
@@ -268,9 +279,10 @@ depending on local limitations.
 This is a variable size field, which contains
 the ID of the dictionary required to properly decode the frame.
 `Dictionary_ID` field is optional. When it's not present,
-it's up to the decoder to make sure it uses the correct dictionary.
+it's up to the decoder to know which dictionary to use.
 
-Field size depends on `Dictionary_ID_flag`.
+`Dictionary_ID` field size is provided by `DID_Field_Size`.
+`DID_Field_Size` is directly derived from value of `Dictionary_ID_flag`.
 1 byte can represent an ID 0-255.
 2 bytes can represent an ID 0-65535.
 4 bytes can represent an ID 0-4294967295.
@@ -280,13 +292,21 @@ It's allowed to represent a small ID (for example `13`
 with a large 4-bytes dictionary ID, even if it is less efficient.
 
 _Reserved ranges :_
-If the frame is going to be distributed in a private environment,
-any dictionary ID can be used.
-However, for public distribution of compressed frames using a dictionary,
-the following ranges are reserved and shall not be used :
+Within private environments, any `Dictionary_ID` can be used.
+
+However, for frames and dictionaries distributed in public space,
+`Dictionary_ID` must be attributed carefully.
+Rules for public environment are not yet decided,
+but the following ranges are reserved for some future registrar :
 - low range  : `<= 32767`
 - high range : `>= (1 << 31)`
 
+Outside of these ranges, any value of `Dictionary_ID`
+which is both `>= 32768` and `< (1<<31)` can be used freely,
+even in public environment.
+
+
+
 #### `Frame_Content_Size`
 
 This is the original (uncompressed) size. This information is optional.
@@ -359,22 +379,23 @@ There are 4 block types :
 
 - `Reserved` - this is not a block.
   This value cannot be used with current version of this specification.
+  If such a value is present, it is considered corrupted data.
 
 __`Block_Size`__
 
 The upper 21 bits of `Block_Header` represent the `Block_Size`.
+`Block_Size` is the size of the block excluding the header.
+A block can contain any number of bytes (even zero), up to
+`Block_Maximum_Decompressed_Size`, which is the smallest of:
+-  Window_Size
+-  128 KB
 
-Block sizes must respect a few rules :
-- For `Compressed_Block`, `Block_Size` is always strictly less than decompressed size.
-- Block decompressed size is always <= `Window_Size`
-- Block decompressed size is always <= 128 KB.
+A `Compressed_Block` has the extra restriction that `Block_Size` is always
+strictly less than the decompressed size.
+If this condition cannot be respected,
+the block must be sent uncompressed instead (`Raw_Block`).
 
-A block can contain any number of bytes (even empty),
-up to `Block_Maximum_Decompressed_Size`, which is the smallest of :
-- `Window_Size`
-- 128 KB
 
-
 Compressed Blocks
 -----------------
 To decompress a compressed block, the compressed size must be provided
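
A sketch of the 3-byte Block_Header referenced above (little-endian; bit 0 = Last_Block,
bits 1-2 = Block_Type, upper 21 bits = Block_Size); not taken from the zstd sources:

    #include <stdint.h>

    enum { RAW_BLOCK = 0, RLE_BLOCK = 1, COMPRESSED_BLOCK = 2, RESERVED_BLOCK = 3 };

    /* Decode one Block_Header.  A Block_Type of 3 (Reserved) means the data
     * is considered corrupted, as stated in the hunk above. */
    static void parse_block_header(const uint8_t hdr[3],
                                   unsigned* lastBlock, unsigned* blockType,
                                   uint32_t* blockSize)
    {
        uint32_t const raw = (uint32_t)hdr[0]
                           | ((uint32_t)hdr[1] << 8)
                           | ((uint32_t)hdr[2] << 16);
        *lastBlock = raw & 1;
        *blockType = (raw >> 1) & 3;
        *blockSize = raw >> 3;   /* bounded by min(Window_Size, 128 KB) as described above */
    }
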
@@ -390,11 +411,17 @@ data in [Sequence Execution](#sequence-execution)
 #### Prerequisites
 To decode a compressed block, the following elements are necessary :
 - Previous decoded data, up to a distance of `Window_Size`,
-  or all previously decoded data when `Single_Segment_flag` is set.
+  or beginning of the Frame, whichever is smaller.
 - List of "recent offsets" from previous `Compressed_Block`.
-- Decoding tables of previous `Compressed_Block` for each symbol type
-  (literals, literals lengths, match lengths, offsets).
+- The previous Huffman tree, required by `Treeless_Literals_Block` type
+- Previous FSE decoding tables, required by `Repeat_Mode`
+  for each symbol type (literals lengths, match lengths, offsets)
 
+Note that decoding tables aren't always from the previous `Compressed_Block`.
+
+- Every decoding table can come from a dictionary.
+- The Huffman tree comes from the previous `Compressed_Literals_Block`.
+
 Literals Section
 ----------------
 All literals are regrouped in the first part of the block.
@@ -405,11 +432,11 @@ Literals can be stored uncompressed or compressed usin
 When compressed, an optional tree description can be present,
 followed by 1 or 4 streams.
 
-| `Literals_Section_Header` | [`Huffman_Tree_Description`] | Stream1 | [Stream2] | [Stream3] | [Stream4] |
-| ------------------------- | ---------------------------- | ------- | --------- | --------- | --------- |
+| `Literals_Section_Header` | [`Huffman_Tree_Description`] | [jumpTable] | Stream1 | [Stream2] | [Stream3] | [Stream4] |
+| ------------------------- | ---------------------------- | ----------- | ------- | --------- | --------- | --------- |
 
 
-#### `Literals_Section_Header`
+### `Literals_Section_Header`
 
 Header is in charge of describing how literals are packed.
 It's a byte-aligned variable-size bitfield, ranging from 1 to 5 bytes,
@@ -460,18 +487,21 @@ For values spanning several bytes, convention is __lit
 
 __`Size_Format` for `Raw_Literals_Block` and `RLE_Literals_Block`__ :
 
-- Value ?0 : `Size_Format` uses 1 bit.
+`Size_Format` uses 1 _or_ 2 bits.
+Its value is : `Size_Format = (Literals_Section_Header[0]>>2) & 3`
+
+- `Size_Format` == 00 or 10 : `Size_Format` uses 1 bit.
                `Regenerated_Size` uses 5 bits (0-31).
-               `Literals_Section_Header` has 1 byte.
-               `Regenerated_Size = Header[0]>>3`
-- Value 01 : `Size_Format` uses 2 bits.
+               `Literals_Section_Header` uses 1 byte.
+               `Regenerated_Size = Literals_Section_Header[0]>>3`
+- `Size_Format` == 01 : `Size_Format` uses 2 bits.
                `Regenerated_Size` uses 12 bits (0-4095).
-               `Literals_Section_Header` has 2 bytes.
-               `Regenerated_Size = (Header[0]>>4) + (Header[1]<<4)`
-- Value 11 : `Size_Format` uses 2 bits.
+               `Literals_Section_Header` uses 2 bytes.
+               `Regenerated_Size = (Literals_Section_Header[0]>>4) + (Literals_Section_Header[1]<<4)`
+- `Size_Format` == 11 : `Size_Format` uses 2 bits.
                `Regenerated_Size` uses 20 bits (0-1048575).
-               `Literals_Section_Header` has 3 bytes.
-               `Regenerated_Size = (Header[0]>>4) + (Header[1]<<4) + (Header[2]<<12)`
+               `Literals_Section_Header` uses 3 bytes.
+               `Regenerated_Size = (Literals_Section_Header[0]>>4) + (Literals_Section_Header[1]<<4) + (Literals_Section_Header[2]<<12)`
 
 Only Stream1 is present for these cases.
 Note : it's allowed to represent a short value (for example `13`)
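
The Size_Format rules rewritten above map directly onto a tiny decoder for the Raw/RLE
literals case; a sketch, not taken from the zstd sources:

    #include <stdint.h>

    /* Decode Regenerated_Size for Raw_Literals_Block / RLE_Literals_Block.
     * `lsh` points at the start of the Literals_Section_Header;
     * returns the header size in bytes (1, 2 or 3). */
    static unsigned raw_rle_literals_header(const uint8_t* lsh, uint32_t* regeneratedSize)
    {
        unsigned const sizeFormat = (lsh[0] >> 2) & 3;
        if (sizeFormat == 1) {   /* 2-byte header, 12-bit size */
            *regeneratedSize = (lsh[0] >> 4) + ((uint32_t)lsh[1] << 4);
            return 2;
        }
        if (sizeFormat == 3) {   /* 3-byte header, 20-bit size */
            *regeneratedSize = (lsh[0] >> 4) + ((uint32_t)lsh[1] << 4) + ((uint32_t)lsh[2] << 12);
            return 3;
        }
        /* sizeFormat 00 or 10: 1-byte header, 5-bit size */
        *regeneratedSize = lsh[0] >> 3;
        return 1;
    }
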
@@ -479,66 +509,74 @@ using a long format, even if it's less efficient.
 
 __`Size_Format` for `Compressed_Literals_Block` and `Treeless_Literals_Block`__ :
 
-- Value 00 : _A single stream_.
+`Size_Format` always uses 2 bits.
+
+- `Size_Format` == 00 : _A single stream_.
                Both `Regenerated_Size` and `Compressed_Size` use 10 bits (0-1023).
-               `Literals_Section_Header` has 3 bytes.
-- Value 01 : 4 streams.
+               `Literals_Section_Header` uses 3 bytes.
+- `Size_Format` == 01 : 4 streams.
                Both `Regenerated_Size` and `Compressed_Size` use 10 bits (0-1023).
-               `Literals_Section_Header` has 3 bytes.
-- Value 10 : 4 streams.
+               `Literals_Section_Header` uses 3 bytes.
+- `Size_Format` == 10 : 4 streams.
                Both `Regenerated_Size` and `Compressed_Size` use 14 bits (0-16383).
-               `Literals_Section_Header` has 4 bytes.
-- Value 11 : 4 streams.
+               `Literals_Section_Header` uses 4 bytes.
+- `Size_Format` == 11 : 4 streams.
                Both `Regenerated_Size` and `Compressed_Size` use 18 bits (0-262143).
-               `Literals_Section_Header` has 5 bytes.
+               `Literals_Section_Header` uses 5 bytes.
 
 Both `Compressed_Size` and `Regenerated_Size` fields follow __little-endian__ convention.
 Note: `Compressed_Size` __includes__ the size of the Huffman Tree description
 _when_ it is present.
 
-### Raw Literals Block
+#### Raw Literals Block
 The data in Stream1 is `Regenerated_Size` bytes long,
 it contains the raw literals data to be used during [Sequence Execution].
 
-### RLE Literals Block
+#### RLE Literals Block
 Stream1 consists of a single byte which should be repeated `Regenerated_Size` times
 to generate the decoded literals.
 
-### Compressed Literals Block and Treeless Literals Block
+#### Compressed Literals Block and Treeless Literals Block
 Both of these modes contain Huffman encoded data.
-`Treeless_Literals_Block` does not have a `Huffman_Tree_Description`.
 
-#### `Huffman_Tree_Description`
+For `Treeless_Literals_Block`,
+the Huffman table comes from previously compressed literals block,
+or from a dictionary.
+
+
+### `Huffman_Tree_Description`
 This section is only present when `Literals_Block_Type` type is `Compressed_Literals_Block` (`2`).
 The format of the Huffman tree description can be found at [Huffman Tree description](#huffman-tree-description).
 The size of `Huffman_Tree_Description` is determined during decoding process,
 it must be used to determine where streams begin.
 `Total_Streams_Size = Compressed_Size - Huffman_Tree_Description_Size`.
 
-For `Treeless_Literals_Block`,
-the Huffman table comes from previously compressed literals block.
 
-Huffman compressed data consists of either 1 or 4 Huffman-coded streams.
+### Jump Table
+The Jump Table is only present when there are 4 Huffman-coded streams.
 
+Reminder : Huffman compressed data consists of either 1 or 4 Huffman-coded streams.
+
 If only one stream is present, it is a single bitstream occupying the entire
 remaining portion of the literals block, encoded as described within
 [Huffman-Coded Streams](#huffman-coded-streams).
 
-If there are four streams, the literals section header only provides enough
-information to know the decompressed and compressed sizes of all four streams _combined_.

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***

From owner-svn-src-vendor@freebsd.org  Mon Oct 22 20:00:44 2018
Return-Path: 
Delivered-To: svn-src-vendor@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 57CDF106B7BC;
 Mon, 22 Oct 2018 20:00:44 +0000 (UTC) (envelope-from cem@FreeBSD.org)
Received: from mxrelay.nyi.freebsd.org (mxrelay.nyi.freebsd.org
 [IPv6:2610:1c1:1:606c::19:3])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client CN "mxrelay.nyi.freebsd.org",
 Issuer "Let's Encrypt Authority X3" (verified OK))
 by mx1.freebsd.org (Postfix) with ESMTPS id 0EAE67013A;
 Mon, 22 Oct 2018 20:00:44 +0000 (UTC) (envelope-from cem@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mxrelay.nyi.freebsd.org (Postfix) with ESMTPS id E480824F80;
 Mon, 22 Oct 2018 20:00:43 +0000 (UTC) (envelope-from cem@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id w9MK0hql081632;
 Mon, 22 Oct 2018 20:00:43 GMT (envelope-from cem@FreeBSD.org)
Received: (from cem@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id w9MK0h0Z081631;
 Mon, 22 Oct 2018 20:00:43 GMT (envelope-from cem@FreeBSD.org)
Message-Id: <201810222000.w9MK0h0Z081631@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: cem set sender to cem@FreeBSD.org
 using -f
From: Conrad Meyer 
Date: Mon, 22 Oct 2018 20:00:43 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
 svn-src-vendor@freebsd.org
Subject: svn commit: r339615 - vendor/zstd/1.3.7
X-SVN-Group: vendor
X-SVN-Commit-Author: cem
X-SVN-Commit-Paths: vendor/zstd/1.3.7
X-SVN-Commit-Revision: 339615
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-vendor@freebsd.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: SVN commit messages for the vendor work area tree
 
List-Unsubscribe: , 
 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
 
X-List-Received-Date: Mon, 22 Oct 2018 20:00:44 -0000

Author: cem
Date: Mon Oct 22 20:00:43 2018
New Revision: 339615
URL: https://svnweb.freebsd.org/changeset/base/339615

Log:
  tag import of zstd 1.3.7

Added:
  vendor/zstd/1.3.7/
     - copied from r339614, vendor/zstd/dist/

From owner-svn-src-vendor@freebsd.org  Tue Oct 23 10:58:11 2018
Return-Path: 
Delivered-To: svn-src-vendor@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 6F5C81074F1E;
 Tue, 23 Oct 2018 10:58:11 +0000 (UTC) (envelope-from mm@FreeBSD.org)
Received: from mxrelay.nyi.freebsd.org (mxrelay.nyi.freebsd.org
 [IPv6:2610:1c1:1:606c::19:3])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client CN "mxrelay.nyi.freebsd.org",
 Issuer "Let's Encrypt Authority X3" (verified OK))
 by mx1.freebsd.org (Postfix) with ESMTPS id 2366175CC5;
 Tue, 23 Oct 2018 10:58:11 +0000 (UTC) (envelope-from mm@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mxrelay.nyi.freebsd.org (Postfix) with ESMTPS id 1DFE86F4A;
 Tue, 23 Oct 2018 10:58:11 +0000 (UTC) (envelope-from mm@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id w9NAwBcT046131;
 Tue, 23 Oct 2018 10:58:11 GMT (envelope-from mm@FreeBSD.org)
Received: (from mm@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id w9NAw8MQ046116;
 Tue, 23 Oct 2018 10:58:08 GMT (envelope-from mm@FreeBSD.org)
Message-Id: <201810231058.w9NAw8MQ046116@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: mm set sender to mm@FreeBSD.org
 using -f
From: Martin Matuska <mm@FreeBSD.org>
Date: Tue, 23 Oct 2018 10:58:08 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
 svn-src-vendor@freebsd.org
Subject: svn commit: r339640 - in vendor/libarchive/dist: . cpio libarchive
 libarchive/test tar test_utils
X-SVN-Group: vendor
X-SVN-Commit-Author: mm
X-SVN-Commit-Paths: in vendor/libarchive/dist: . cpio libarchive
 libarchive/test tar test_utils
X-SVN-Commit-Revision: 339640
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-vendor@freebsd.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: SVN commit messages for the vendor work area tree
 
List-Unsubscribe: , 
 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
 
X-List-Received-Date: Tue, 23 Oct 2018 10:58:11 -0000

Author: mm
Date: Tue Oct 23 10:58:07 2018
New Revision: 339640
URL: https://svnweb.freebsd.org/changeset/base/339640

Log:
  Update vendor/libarchive/dist to git d5f35a90a4cb1eeb918213bff9d78e8b0471dc0a
  
  Relevant vendor changes:
    PR #1013: Add missing h_base offset when performing absolute seeks in
              xar decompression
    PR #1061: Add support for extraction of RAR v5 archives
    PR #1066: Fix out of bounds read on empty string filename for gnutar, pax
              and v7tar
    PR #1067: Fix temporary file path buffer overflow in tests
    IS #1068: Correctly process and verify integer arguments passed to
              bsdcpio and bsdtar
    PR #1070: Don't default XAR entry atime/mtime to the current time

Added:
  vendor/libarchive/dist/libarchive/archive_blake2.h   (contents, props changed)
  vendor/libarchive/dist/libarchive/archive_blake2_impl.h   (contents, props changed)
  vendor/libarchive/dist/libarchive/archive_blake2s_ref.c   (contents, props changed)
  vendor/libarchive/dist/libarchive/archive_blake2sp_ref.c   (contents, props changed)
  vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c   (contents, props changed)
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5.c   (contents, props changed)
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_arm.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_blake2.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_compressed.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_multiarchive.part01.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_multiarchive.part02.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_multiarchive.part03.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_multiarchive.part04.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_multiarchive.part05.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_multiarchive.part06.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_multiarchive.part07.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_multiarchive.part08.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_multiarchive_solid.part01.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_multiarchive_solid.part02.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_multiarchive_solid.part03.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_multiarchive_solid.part04.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_multiple_files.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_multiple_files_solid.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_solid.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_stored.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_stored_manyfiles.rar.uu
  vendor/libarchive/dist/libarchive/test/test_read_format_rar5_win32.rar.uu
Modified:
  vendor/libarchive/dist/CMakeLists.txt
  vendor/libarchive/dist/COPYING
  vendor/libarchive/dist/Makefile.am
  vendor/libarchive/dist/NEWS
  vendor/libarchive/dist/README.md
  vendor/libarchive/dist/configure.ac
  vendor/libarchive/dist/cpio/cpio.c
  vendor/libarchive/dist/libarchive/CMakeLists.txt
  vendor/libarchive/dist/libarchive/archive.h
  vendor/libarchive/dist/libarchive/archive_read_support_format_all.c
  vendor/libarchive/dist/libarchive/archive_read_support_format_by_code.c
  vendor/libarchive/dist/libarchive/archive_read_support_format_xar.c
  vendor/libarchive/dist/libarchive/archive_write_set_format_gnutar.c
  vendor/libarchive/dist/libarchive/archive_write_set_format_pax.c
  vendor/libarchive/dist/libarchive/archive_write_set_format_v7tar.c
  vendor/libarchive/dist/libarchive/test/CMakeLists.txt
  vendor/libarchive/dist/libarchive/test/test_read_format_xar.c
  vendor/libarchive/dist/tar/bsdtar.c
  vendor/libarchive/dist/test_utils/test_main.c

Modified: vendor/libarchive/dist/CMakeLists.txt
==============================================================================
--- vendor/libarchive/dist/CMakeLists.txt	Tue Oct 23 08:55:16 2018	(r339639)
+++ vendor/libarchive/dist/CMakeLists.txt	Tue Oct 23 10:58:07 2018	(r339640)
@@ -179,6 +179,7 @@ include(CTest)
 
 OPTION(ENABLE_NETTLE "Enable use of Nettle" ON)
 OPTION(ENABLE_OPENSSL "Enable use of OpenSSL" ON)
+OPTION(ENABLE_LIBB2 "Enable the use of the system LIBB2 library if found" ON)
 OPTION(ENABLE_LZ4 "Enable the use of the system LZ4 library if found" ON)
 OPTION(ENABLE_LZO "Enable the use of the system LZO library if found" OFF)
 OPTION(ENABLE_LZMA "Enable the use of the system LZMA library if found" ON)
@@ -507,6 +508,33 @@ IF(LZO2_FOUND)
 ENDIF(LZO2_FOUND)
 MARK_AS_ADVANCED(CLEAR LZO2_INCLUDE_DIR)
 MARK_AS_ADVANCED(CLEAR LZO2_LIBRARY)
+#
+# Find libb2
+#
+IF(ENABLE_LIBB2)
+  IF (LIBB2_INCLUDE_DIR)
+    # Already in cache, be silent
+    SET(LIBB2_FIND_QUIETLY TRUE)
+  ENDIF (LIBB2_INCLUDE_DIR)
+
+  FIND_PATH(LIBB2_INCLUDE_DIR blake2.h)
+  FIND_LIBRARY(LIBB2_LIBRARY NAMES b2 libb2)
+  INCLUDE(FindPackageHandleStandardArgs)
+  FIND_PACKAGE_HANDLE_STANDARD_ARGS(LIBB2 DEFAULT_MSG LIBB2_LIBRARY LIBB2_INCLUDE_DIR)
+ELSE(ENABLE_LIBB2)
+  SET(LIBB2_FOUND FALSE) # Override cached value
+ENDIF(ENABLE_LIBB2)
+IF(LIBB2_FOUND)
+  SET(HAVE_LIBB2 1)
+  SET(HAVE_BLAKE2_H 1)
+  SET(ARCHIVE_BLAKE2 FALSE)
+  LIST(APPEND ADDITIONAL_LIBS ${LIBB2_LIBRARY})
+  SET(CMAKE_REQUIRED_LIBRARIES ${LIBB2_LIBRARY})
+  SET(CMAKE_REQUIRED_INCLUDES ${LIBB2_INCLUDE_DIR})
+  CHECK_FUNCTION_EXISTS(blake2sp_init HAVE_LIBB2)
+ELSE(LIBB2_FOUND)
+  SET(ARCHIVE_BLAKE2 TRUE)
+ENDIF(LIBB2_FOUND)
 #
 # Find LZ4
 #

Modified: vendor/libarchive/dist/COPYING
==============================================================================
--- vendor/libarchive/dist/COPYING	Tue Oct 23 08:55:16 2018	(r339639)
+++ vendor/libarchive/dist/COPYING	Tue Oct 23 10:58:07 2018	(r339640)
@@ -23,6 +23,13 @@ the actual statements in the files are controlling.
 * The following source files are in the public domain:
    libarchive/archive_getdate.c
 
+* The following source files are triple-licensed with the ability to choose
+  from CC0 1.0 Universal, OpenSSL or Apache 2.0 licenses:
+   libarchive/archive_blake2.h
+   libarchive/archive_blake2_impl.h
+   libarchive/archive_blake2s_ref.c
+   libarchive/archive_blake2sp_ref.c
+
 * The build files---including Makefiles, configure scripts,
   and auxiliary scripts used as part of the compile process---have
   widely varying licensing terms.  Please check individual files before
@@ -34,7 +41,7 @@ do use the license below.  The varying licensing of th
 seems to be an unavoidable mess.
 
 
-Copyright (c) 2003-2009 
+Copyright (c) 2003-2018 
 All rights reserved.
 
 Redistribution and use in source and binary forms, with or without

Modified: vendor/libarchive/dist/Makefile.am
==============================================================================
--- vendor/libarchive/dist/Makefile.am	Tue Oct 23 08:55:16 2018	(r339639)
+++ vendor/libarchive/dist/Makefile.am	Tue Oct 23 10:58:07 2018	(r339640)
@@ -179,6 +179,7 @@ libarchive_la_SOURCES= \
 	libarchive/archive_read_support_format_lha.c \
 	libarchive/archive_read_support_format_mtree.c \
 	libarchive/archive_read_support_format_rar.c \
+	libarchive/archive_read_support_format_rar5.c \
 	libarchive/archive_read_support_format_raw.c \
 	libarchive/archive_read_support_format_tar.c \
 	libarchive/archive_read_support_format_warc.c \
@@ -251,6 +252,12 @@ libarchive_la_SOURCES+= \
 	libarchive/filter_fork_windows.c
 endif
 
+if INC_BLAKE2
+libarchive_la_SOURCES+= \
+	libarchive/archive_blake2s_ref.c \
+	libarchive/archive_blake2sp_ref.c
+endif
+
 if INC_LINUX_ACL
 libarchive_la_SOURCES+= libarchive/archive_disk_acl_linux.c
 else
@@ -485,6 +492,7 @@ libarchive_test_SOURCES= \
 	libarchive/test/test_read_format_rar_encryption_partially.c \
 	libarchive/test/test_read_format_rar_encryption_header.c \
 	libarchive/test/test_read_format_rar_invalid1.c \
+	libarchive/test/test_read_format_rar5.c \
 	libarchive/test/test_read_format_raw.c \
 	libarchive/test/test_read_format_tar.c \
 	libarchive/test/test_read_format_tar_concatenated.c \

Modified: vendor/libarchive/dist/NEWS
==============================================================================
--- vendor/libarchive/dist/NEWS	Tue Oct 23 08:55:16 2018	(r339639)
+++ vendor/libarchive/dist/NEWS	Tue Oct 23 10:58:07 2018	(r339640)
@@ -1,3 +1,5 @@
+Oct 06, 2018: RAR 5.0 reader
+
 Sep 03, 2018: libarchive 3.3.3 released
 
 Jul 19, 2018: Avoid super-linear slowdown on malformed mtree files

Modified: vendor/libarchive/dist/README.md
==============================================================================
--- vendor/libarchive/dist/README.md	Tue Oct 23 08:55:16 2018	(r339639)
+++ vendor/libarchive/dist/README.md	Tue Oct 23 10:58:07 2018	(r339640)
@@ -86,7 +86,7 @@ Currently, the library automatically detects and reads
   * 7-Zip archives
   * Microsoft CAB format
   * LHA and LZH archives
-  * RAR archives (with some limitations due to RAR's proprietary status)
+  * RAR and RAR 5.0 archives (with some limitations due to RAR's proprietary status)
   * XAR archives
 
 The library also detects and handles any of the following before evaluating the archive:

Modified: vendor/libarchive/dist/configure.ac
==============================================================================
--- vendor/libarchive/dist/configure.ac	Tue Oct 23 08:55:16 2018	(r339639)
+++ vendor/libarchive/dist/configure.ac	Tue Oct 23 10:58:07 2018	(r339640)
@@ -340,6 +340,16 @@ if test "x$with_bz2lib" != "xno"; then
   esac
 fi
 
+AC_ARG_WITH([libb2],
+  AS_HELP_STRING([--without-libb2], [Don't build support for BLAKE2 through libb2]))
+
+if test "x$with_libb2" != "xno"; then
+  AC_CHECK_HEADERS([blake2.h])
+  AC_CHECK_LIB(b2,blake2sp_init)
+fi
+
+AM_CONDITIONAL([INC_BLAKE2], [test "x$ac_cv_lib_b2_blake2sp_init" != "xyes"])
+
 AC_ARG_WITH([iconv],
   AS_HELP_STRING([--without-iconv], [Don't try to link against iconv]))
 

Modified: vendor/libarchive/dist/cpio/cpio.c
==============================================================================
--- vendor/libarchive/dist/cpio/cpio.c	Tue Oct 23 08:55:16 2018	(r339639)
+++ vendor/libarchive/dist/cpio/cpio.c	Tue Oct 23 10:58:07 2018	(r339640)
@@ -134,8 +134,9 @@ main(int argc, char *argv[])
 	struct cpio _cpio; /* Allocated on stack. */
 	struct cpio *cpio;
 	const char *errmsg;
+	char *tptr;
 	int uid, gid;
-	int opt;
+	int opt, t;
 
 	cpio = &_cpio;
 	memset(cpio, 0, sizeof(*cpio));
@@ -204,9 +205,15 @@ main(int argc, char *argv[])
 			cpio->add_filter = opt;
 			break;
 		case 'C': /* NetBSD/OpenBSD */
-			cpio->bytes_per_block = atoi(cpio->argument);
-			if (cpio->bytes_per_block <= 0)
-				lafe_errc(1, 0, "Invalid blocksize %s", cpio->argument);
+			errno = 0;
+			tptr = NULL;
+			t = (int)strtol(cpio->argument, &tptr, 10);
+			if (errno || t <= 0 || *(cpio->argument) == '\0' ||
+			    tptr == NULL || *tptr != '\0') {
+				lafe_errc(1, 0, "Invalid blocksize: %s",
+				    cpio->argument);
+			}
+			cpio->bytes_per_block = t;
 			break;
 		case 'c': /* POSIX 1997 */
 			cpio->format = "odc";
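
The cpio.c hunk above replaces the unchecked atoi() call with a strtol() conversion that rejects empty arguments, trailing garbage, out-of-range values and non-positive block sizes (IS #1068). The same pattern as a standalone sketch, with a hypothetical parse_blocksize() helper that is not part of libarchive:

  #include <errno.h>
  #include <limits.h>
  #include <stdlib.h>

  /* Returns the block size, or -1 if `arg` is not a plain positive integer. */
  static int parse_blocksize(const char *arg)
  {
      char *end = NULL;
      long t;

      if (arg == NULL || *arg == '\0')
          return -1;                      /* empty argument */
      errno = 0;
      t = strtol(arg, &end, 10);
      if (errno != 0 || end == NULL || *end != '\0')
          return -1;                      /* overflow or trailing junk */
      if (t <= 0 || t > INT_MAX)
          return -1;                      /* not a usable block size */
      return (int)t;
  }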

Modified: vendor/libarchive/dist/libarchive/CMakeLists.txt
==============================================================================
--- vendor/libarchive/dist/libarchive/CMakeLists.txt	Tue Oct 23 08:55:16 2018	(r339639)
+++ vendor/libarchive/dist/libarchive/CMakeLists.txt	Tue Oct 23 10:58:07 2018	(r339640)
@@ -100,6 +100,7 @@ SET(libarchive_SOURCES
   archive_read_support_format_lha.c
   archive_read_support_format_mtree.c
   archive_read_support_format_rar.c
+  archive_read_support_format_rar5.c
   archive_read_support_format_raw.c
   archive_read_support_format_tar.c
   archive_read_support_format_warc.c
@@ -214,6 +215,11 @@ IF(WIN32 AND NOT CYGWIN)
   LIST(APPEND libarchive_SOURCES archive_write_disk_windows.c)
   LIST(APPEND libarchive_SOURCES filter_fork_windows.c)
 ENDIF(WIN32 AND NOT CYGWIN)
+
+IF(ARCHIVE_BLAKE2)
+  LIST(APPEND libarchive_SOURCES archive_blake2sp_ref.c)
+  LIST(APPEND libarchive_SOURCES archive_blake2s_ref.c)
+ENDIF(ARCHIVE_BLAKE2)
 
 IF(ARCHIVE_ACL_DARWIN)
   LIST(APPEND libarchive_SOURCES archive_disk_acl_darwin.c)

Modified: vendor/libarchive/dist/libarchive/archive.h
==============================================================================
--- vendor/libarchive/dist/libarchive/archive.h	Tue Oct 23 08:55:16 2018	(r339639)
+++ vendor/libarchive/dist/libarchive/archive.h	Tue Oct 23 10:58:07 2018	(r339640)
@@ -338,6 +338,7 @@ typedef const char *archive_passphrase_callback(struct
 #define	ARCHIVE_FORMAT_LHA			0xB0000
 #define	ARCHIVE_FORMAT_CAB			0xC0000
 #define	ARCHIVE_FORMAT_RAR			0xD0000
+#define	ARCHIVE_FORMAT_RAR_V5			(ARCHIVE_FORMAT_RAR | 1)
 #define	ARCHIVE_FORMAT_7ZIP			0xE0000
 #define	ARCHIVE_FORMAT_WARC			0xF0000
 
@@ -449,6 +450,7 @@ __LA_DECL int archive_read_support_format_iso9660(stru
 __LA_DECL int archive_read_support_format_lha(struct archive *);
 __LA_DECL int archive_read_support_format_mtree(struct archive *);
 __LA_DECL int archive_read_support_format_rar(struct archive *);
+__LA_DECL int archive_read_support_format_rar5(struct archive *);
 __LA_DECL int archive_read_support_format_raw(struct archive *);
 __LA_DECL int archive_read_support_format_tar(struct archive *);
 __LA_DECL int archive_read_support_format_warc(struct archive *);
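
With ARCHIVE_FORMAT_RAR_V5 and archive_read_support_format_rar5() added to archive.h above, the new reader is enabled through the usual libarchive read API. A minimal listing sketch, assuming a libarchive built from this import (the archive file name is illustrative):

  #include <stdio.h>
  #include <archive.h>
  #include <archive_entry.h>

  int main(void)
  {
      struct archive *a = archive_read_new();
      struct archive_entry *entry;

      archive_read_support_format_rar5(a);          /* new in this import */
      archive_read_support_filter_all(a);
      if (archive_read_open_filename(a, "sample.rar", 10240) != ARCHIVE_OK) {
          fprintf(stderr, "%s\n", archive_error_string(a));
          archive_read_free(a);
          return 1;
      }
      while (archive_read_next_header(a, &entry) == ARCHIVE_OK) {
          printf("%s\n", archive_entry_pathname(entry));
          archive_read_data_skip(a);                /* list headers only */
      }
      archive_read_free(a);
      return 0;
  }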

Added: vendor/libarchive/dist/libarchive/archive_blake2.h
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ vendor/libarchive/dist/libarchive/archive_blake2.h	Tue Oct 23 10:58:07 2018	(r339640)
@@ -0,0 +1,194 @@
+/*
+   BLAKE2 reference source code package - reference C implementations
+
+   Copyright 2012, Samuel Neves .  You may use this under the
+   terms of the CC0, the OpenSSL Licence, or the Apache Public License 2.0, at
+   your option.  The terms of these licenses can be found at:
+
+   - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
+   - OpenSSL license   : https://www.openssl.org/source/license.html
+   - Apache 2.0        : http://www.apache.org/licenses/LICENSE-2.0
+
+   More information about the BLAKE2 hash function can be found at
+   https://blake2.net.
+*/
+#ifndef BLAKE2_H
+#define BLAKE2_H
+
+#include <stddef.h>
+#include <stdint.h>
+
+#if defined(_MSC_VER)
+#define BLAKE2_PACKED(x) __pragma(pack(push, 1)) x __pragma(pack(pop))
+#else
+#define BLAKE2_PACKED(x) x __attribute__((packed))
+#endif
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+  enum blake2s_constant
+  {
+    BLAKE2S_BLOCKBYTES = 64,
+    BLAKE2S_OUTBYTES   = 32,
+    BLAKE2S_KEYBYTES   = 32,
+    BLAKE2S_SALTBYTES  = 8,
+    BLAKE2S_PERSONALBYTES = 8
+  };
+
+  enum blake2b_constant
+  {
+    BLAKE2B_BLOCKBYTES = 128,
+    BLAKE2B_OUTBYTES   = 64,
+    BLAKE2B_KEYBYTES   = 64,
+    BLAKE2B_SALTBYTES  = 16,
+    BLAKE2B_PERSONALBYTES = 16
+  };
+
+  typedef struct blake2s_state__
+  {
+    uint32_t h[8];
+    uint32_t t[2];
+    uint32_t f[2];
+    uint8_t  buf[BLAKE2S_BLOCKBYTES];
+    size_t   buflen;
+    size_t   outlen;
+    uint8_t  last_node;
+  } blake2s_state;
+
+  typedef struct blake2b_state__
+  {
+    uint64_t h[8];
+    uint64_t t[2];
+    uint64_t f[2];
+    uint8_t  buf[BLAKE2B_BLOCKBYTES];
+    size_t   buflen;
+    size_t   outlen;
+    uint8_t  last_node;
+  } blake2b_state;
+
+  typedef struct blake2sp_state__
+  {
+    blake2s_state S[8][1];
+    blake2s_state R[1];
+    uint8_t       buf[8 * BLAKE2S_BLOCKBYTES];
+    size_t        buflen;
+    size_t        outlen;
+  } blake2sp_state;
+
+  typedef struct blake2bp_state__
+  {
+    blake2b_state S[4][1];
+    blake2b_state R[1];
+    uint8_t       buf[4 * BLAKE2B_BLOCKBYTES];
+    size_t        buflen;
+    size_t        outlen;
+  } blake2bp_state;
+
+  BLAKE2_PACKED(struct blake2s_param__
+  {
+    uint8_t  digest_length; /* 1 */
+    uint8_t  key_length;    /* 2 */
+    uint8_t  fanout;        /* 3 */
+    uint8_t  depth;         /* 4 */
+    uint32_t leaf_length;   /* 8 */
+    uint32_t node_offset;  /* 12 */
+    uint16_t xof_length;    /* 14 */
+    uint8_t  node_depth;    /* 15 */
+    uint8_t  inner_length;  /* 16 */
+    /* uint8_t  reserved[0]; */
+    uint8_t  salt[BLAKE2S_SALTBYTES]; /* 24 */
+    uint8_t  personal[BLAKE2S_PERSONALBYTES];  /* 32 */
+  });
+
+  typedef struct blake2s_param__ blake2s_param;
+
+  BLAKE2_PACKED(struct blake2b_param__
+  {
+    uint8_t  digest_length; /* 1 */
+    uint8_t  key_length;    /* 2 */
+    uint8_t  fanout;        /* 3 */
+    uint8_t  depth;         /* 4 */
+    uint32_t leaf_length;   /* 8 */
+    uint32_t node_offset;   /* 12 */
+    uint32_t xof_length;    /* 16 */
+    uint8_t  node_depth;    /* 17 */
+    uint8_t  inner_length;  /* 18 */
+    uint8_t  reserved[14];  /* 32 */
+    uint8_t  salt[BLAKE2B_SALTBYTES]; /* 48 */
+    uint8_t  personal[BLAKE2B_PERSONALBYTES];  /* 64 */
+  });
+
+  typedef struct blake2b_param__ blake2b_param;
+
+  typedef struct blake2xs_state__
+  {
+    blake2s_state S[1];
+    blake2s_param P[1];
+  } blake2xs_state;
+
+  typedef struct blake2xb_state__
+  {
+    blake2b_state S[1];
+    blake2b_param P[1];
+  } blake2xb_state;
+
+  /* Padded structs result in a compile-time error */
+  enum {
+    BLAKE2_DUMMY_1 = 1/(sizeof(blake2s_param) == BLAKE2S_OUTBYTES),
+    BLAKE2_DUMMY_2 = 1/(sizeof(blake2b_param) == BLAKE2B_OUTBYTES)
+  };
+
+  /* Streaming API */
+  int blake2s_init( blake2s_state *S, size_t outlen );
+  int blake2s_init_key( blake2s_state *S, size_t outlen, const void *key, size_t keylen );
+  int blake2s_init_param( blake2s_state *S, const blake2s_param *P );
+  int blake2s_update( blake2s_state *S, const void *in, size_t inlen );
+  int blake2s_final( blake2s_state *S, void *out, size_t outlen );
+
+  int blake2b_init( blake2b_state *S, size_t outlen );
+  int blake2b_init_key( blake2b_state *S, size_t outlen, const void *key, size_t keylen );
+  int blake2b_init_param( blake2b_state *S, const blake2b_param *P );
+  int blake2b_update( blake2b_state *S, const void *in, size_t inlen );
+  int blake2b_final( blake2b_state *S, void *out, size_t outlen );
+
+  int blake2sp_init( blake2sp_state *S, size_t outlen );
+  int blake2sp_init_key( blake2sp_state *S, size_t outlen, const void *key, size_t keylen );
+  int blake2sp_update( blake2sp_state *S, const void *in, size_t inlen );
+  int blake2sp_final( blake2sp_state *S, void *out, size_t outlen );
+
+  int blake2bp_init( blake2bp_state *S, size_t outlen );
+  int blake2bp_init_key( blake2bp_state *S, size_t outlen, const void *key, size_t keylen );
+  int blake2bp_update( blake2bp_state *S, const void *in, size_t inlen );
+  int blake2bp_final( blake2bp_state *S, void *out, size_t outlen );
+
+  /* Variable output length API */
+  int blake2xs_init( blake2xs_state *S, const size_t outlen );
+  int blake2xs_init_key( blake2xs_state *S, const size_t outlen, const void *key, size_t keylen );
+  int blake2xs_update( blake2xs_state *S, const void *in, size_t inlen );
+  int blake2xs_final(blake2xs_state *S, void *out, size_t outlen);
+
+  int blake2xb_init( blake2xb_state *S, const size_t outlen );
+  int blake2xb_init_key( blake2xb_state *S, const size_t outlen, const void *key, size_t keylen );
+  int blake2xb_update( blake2xb_state *S, const void *in, size_t inlen );
+  int blake2xb_final(blake2xb_state *S, void *out, size_t outlen);
+
+  /* Simple API */
+  int blake2s( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen );
+  int blake2b( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen );
+
+  int blake2sp( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen );
+  int blake2bp( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen );
+
+  int blake2xs( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen );
+  int blake2xb( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen );
+
+  /* This is simply an alias for blake2b */
+  int blake2( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen );
+
+#if defined(__cplusplus)
+}
+#endif
+
+#endif
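
archive_blake2.h above declares both a one-shot and a streaming BLAKE2s interface. A short sketch of driving the two and checking that they agree, assuming it is compiled together with the reference sources added in this commit (this is libarchive-internal code, not an installed API):

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>
  #include "archive_blake2.h"

  int main(void)
  {
      const char msg[] = "hello, blake2";
      uint8_t one_shot[BLAKE2S_OUTBYTES], streamed[BLAKE2S_OUTBYTES];
      blake2s_state S;

      /* Simple API: hash the whole buffer at once, no key. */
      blake2s(one_shot, sizeof(one_shot), msg, strlen(msg), NULL, 0);

      /* Streaming API: same digest, fed in two pieces. */
      blake2s_init(&S, sizeof(streamed));
      blake2s_update(&S, msg, 5);
      blake2s_update(&S, msg + 5, strlen(msg) - 5);
      blake2s_final(&S, streamed, sizeof(streamed));

      printf("digests %s\n",
          memcmp(one_shot, streamed, sizeof(one_shot)) == 0 ? "match" : "differ");
      return 0;
  }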

Added: vendor/libarchive/dist/libarchive/archive_blake2_impl.h
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ vendor/libarchive/dist/libarchive/archive_blake2_impl.h	Tue Oct 23 10:58:07 2018	(r339640)
@@ -0,0 +1,160 @@
+/*
+   BLAKE2 reference source code package - reference C implementations
+
+   Copyright 2012, Samuel Neves .  You may use this under the
+   terms of the CC0, the OpenSSL Licence, or the Apache Public License 2.0, at
+   your option.  The terms of these licenses can be found at:
+
+   - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
+   - OpenSSL license   : https://www.openssl.org/source/license.html
+   - Apache 2.0        : http://www.apache.org/licenses/LICENSE-2.0
+
+   More information about the BLAKE2 hash function can be found at
+   https://blake2.net.
+*/
+#ifndef BLAKE2_IMPL_H
+#define BLAKE2_IMPL_H
+
+#include <stdint.h>
+#include <string.h>
+
+#if !defined(__cplusplus) && (!defined(__STDC_VERSION__) || __STDC_VERSION__ < 199901L)
+  #if   defined(_MSC_VER)
+    #define BLAKE2_INLINE __inline
+  #elif defined(__GNUC__)
+    #define BLAKE2_INLINE __inline__
+  #else
+    #define BLAKE2_INLINE
+  #endif
+#else
+  #define BLAKE2_INLINE inline
+#endif
+
+static BLAKE2_INLINE uint32_t load32( const void *src )
+{
+#if defined(NATIVE_LITTLE_ENDIAN)
+  uint32_t w;
+  memcpy(&w, src, sizeof w);
+  return w;
+#else
+  const uint8_t *p = ( const uint8_t * )src;
+  return (( uint32_t )( p[0] ) <<  0) |
+         (( uint32_t )( p[1] ) <<  8) |
+         (( uint32_t )( p[2] ) << 16) |
+         (( uint32_t )( p[3] ) << 24) ;
+#endif
+}
+
+static BLAKE2_INLINE uint64_t load64( const void *src )
+{
+#if defined(NATIVE_LITTLE_ENDIAN)
+  uint64_t w;
+  memcpy(&w, src, sizeof w);
+  return w;
+#else
+  const uint8_t *p = ( const uint8_t * )src;
+  return (( uint64_t )( p[0] ) <<  0) |
+         (( uint64_t )( p[1] ) <<  8) |
+         (( uint64_t )( p[2] ) << 16) |
+         (( uint64_t )( p[3] ) << 24) |
+         (( uint64_t )( p[4] ) << 32) |
+         (( uint64_t )( p[5] ) << 40) |
+         (( uint64_t )( p[6] ) << 48) |
+         (( uint64_t )( p[7] ) << 56) ;
+#endif
+}
+
+static BLAKE2_INLINE uint16_t load16( const void *src )
+{
+#if defined(NATIVE_LITTLE_ENDIAN)
+  uint16_t w;
+  memcpy(&w, src, sizeof w);
+  return w;
+#else
+  const uint8_t *p = ( const uint8_t * )src;
+  return ( uint16_t )((( uint32_t )( p[0] ) <<  0) |
+                      (( uint32_t )( p[1] ) <<  8));
+#endif
+}
+
+static BLAKE2_INLINE void store16( void *dst, uint16_t w )
+{
+#if defined(NATIVE_LITTLE_ENDIAN)
+  memcpy(dst, &w, sizeof w);
+#else
+  uint8_t *p = ( uint8_t * )dst;
+  *p++ = ( uint8_t )w; w >>= 8;
+  *p++ = ( uint8_t )w;
+#endif
+}
+
+static BLAKE2_INLINE void store32( void *dst, uint32_t w )
+{
+#if defined(NATIVE_LITTLE_ENDIAN)
+  memcpy(dst, &w, sizeof w);
+#else
+  uint8_t *p = ( uint8_t * )dst;
+  p[0] = (uint8_t)(w >>  0);
+  p[1] = (uint8_t)(w >>  8);
+  p[2] = (uint8_t)(w >> 16);
+  p[3] = (uint8_t)(w >> 24);
+#endif
+}
+
+static BLAKE2_INLINE void store64( void *dst, uint64_t w )
+{
+#if defined(NATIVE_LITTLE_ENDIAN)
+  memcpy(dst, &w, sizeof w);
+#else
+  uint8_t *p = ( uint8_t * )dst;
+  p[0] = (uint8_t)(w >>  0);
+  p[1] = (uint8_t)(w >>  8);
+  p[2] = (uint8_t)(w >> 16);
+  p[3] = (uint8_t)(w >> 24);
+  p[4] = (uint8_t)(w >> 32);
+  p[5] = (uint8_t)(w >> 40);
+  p[6] = (uint8_t)(w >> 48);
+  p[7] = (uint8_t)(w >> 56);
+#endif
+}
+
+static BLAKE2_INLINE uint64_t load48( const void *src )
+{
+  const uint8_t *p = ( const uint8_t * )src;
+  return (( uint64_t )( p[0] ) <<  0) |
+         (( uint64_t )( p[1] ) <<  8) |
+         (( uint64_t )( p[2] ) << 16) |
+         (( uint64_t )( p[3] ) << 24) |
+         (( uint64_t )( p[4] ) << 32) |
+         (( uint64_t )( p[5] ) << 40) ;
+}
+
+static BLAKE2_INLINE void store48( void *dst, uint64_t w )
+{
+  uint8_t *p = ( uint8_t * )dst;
+  p[0] = (uint8_t)(w >>  0);
+  p[1] = (uint8_t)(w >>  8);
+  p[2] = (uint8_t)(w >> 16);
+  p[3] = (uint8_t)(w >> 24);
+  p[4] = (uint8_t)(w >> 32);
+  p[5] = (uint8_t)(w >> 40);
+}
+
+static BLAKE2_INLINE uint32_t rotr32( const uint32_t w, const unsigned c )
+{
+  return ( w >> c ) | ( w << ( 32 - c ) );
+}
+
+static BLAKE2_INLINE uint64_t rotr64( const uint64_t w, const unsigned c )
+{
+  return ( w >> c ) | ( w << ( 64 - c ) );
+}
+
+/* prevents compiler optimizing out memset() */
+static BLAKE2_INLINE void secure_zero_memory(void *v, size_t n)
+{
+  static void *(*const volatile memset_v)(void *, int, size_t) = &memset;
+  memset_v(v, 0, n);
+}
+
+#endif

Added: vendor/libarchive/dist/libarchive/archive_blake2s_ref.c
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ vendor/libarchive/dist/libarchive/archive_blake2s_ref.c	Tue Oct 23 10:58:07 2018	(r339640)
@@ -0,0 +1,367 @@
+/*
+   BLAKE2 reference source code package - reference C implementations
+
+   Copyright 2012, Samuel Neves .  You may use this under the
+   terms of the CC0, the OpenSSL Licence, or the Apache Public License 2.0, at
+   your option.  The terms of these licenses can be found at:
+
+   - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
+   - OpenSSL license   : https://www.openssl.org/source/license.html
+   - Apache 2.0        : http://www.apache.org/licenses/LICENSE-2.0
+
+   More information about the BLAKE2 hash function can be found at
+   https://blake2.net.
+*/
+
+#include <stdint.h>
+#include <string.h>
+#include <stdio.h>
+
+#include "archive_blake2.h"
+#include "archive_blake2_impl.h"
+
+static const uint32_t blake2s_IV[8] =
+{
+  0x6A09E667UL, 0xBB67AE85UL, 0x3C6EF372UL, 0xA54FF53AUL,
+  0x510E527FUL, 0x9B05688CUL, 0x1F83D9ABUL, 0x5BE0CD19UL
+};
+
+static const uint8_t blake2s_sigma[10][16] =
+{
+  {  0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15 } ,
+  { 14, 10,  4,  8,  9, 15, 13,  6,  1, 12,  0,  2, 11,  7,  5,  3 } ,
+  { 11,  8, 12,  0,  5,  2, 15, 13, 10, 14,  3,  6,  7,  1,  9,  4 } ,
+  {  7,  9,  3,  1, 13, 12, 11, 14,  2,  6,  5, 10,  4,  0, 15,  8 } ,
+  {  9,  0,  5,  7,  2,  4, 10, 15, 14,  1, 11, 12,  6,  8,  3, 13 } ,
+  {  2, 12,  6, 10,  0, 11,  8,  3,  4, 13,  7,  5, 15, 14,  1,  9 } ,
+  { 12,  5,  1, 15, 14, 13,  4, 10,  0,  7,  6,  3,  9,  2,  8, 11 } ,
+  { 13, 11,  7, 14, 12,  1,  3,  9,  5,  0, 15,  4,  8,  6,  2, 10 } ,
+  {  6, 15, 14,  9, 11,  3,  0,  8, 12,  2, 13,  7,  1,  4, 10,  5 } ,
+  { 10,  2,  8,  4,  7,  6,  1,  5, 15, 11,  9, 14,  3, 12, 13 , 0 } ,
+};
+
+static void blake2s_set_lastnode( blake2s_state *S )
+{
+  S->f[1] = (uint32_t)-1;
+}
+
+/* Some helper functions, not necessarily useful */
+static int blake2s_is_lastblock( const blake2s_state *S )
+{
+  return S->f[0] != 0;
+}
+
+static void blake2s_set_lastblock( blake2s_state *S )
+{
+  if( S->last_node ) blake2s_set_lastnode( S );
+
+  S->f[0] = (uint32_t)-1;
+}
+
+static void blake2s_increment_counter( blake2s_state *S, const uint32_t inc )
+{
+  S->t[0] += inc;
+  S->t[1] += ( S->t[0] < inc );
+}
+
+static void blake2s_init0( blake2s_state *S )
+{
+  size_t i;
+  memset( S, 0, sizeof( blake2s_state ) );
+
+  for( i = 0; i < 8; ++i ) S->h[i] = blake2s_IV[i];
+}
+
+/* init2 xors IV with input parameter block */
+int blake2s_init_param( blake2s_state *S, const blake2s_param *P )
+{
+  const unsigned char *p = ( const unsigned char * )( P );
+  size_t i;
+
+  blake2s_init0( S );
+
+  /* IV XOR ParamBlock */
+  for( i = 0; i < 8; ++i )
+    S->h[i] ^= load32( &p[i * 4] );
+
+  S->outlen = P->digest_length;
+  return 0;
+}
+
+
+/* Sequential blake2s initialization */
+int blake2s_init( blake2s_state *S, size_t outlen )
+{
+  blake2s_param P[1];
+
+  /* Move interval verification here? */
+  if ( ( !outlen ) || ( outlen > BLAKE2S_OUTBYTES ) ) return -1;
+
+  P->digest_length = (uint8_t)outlen;
+  P->key_length    = 0;
+  P->fanout        = 1;
+  P->depth         = 1;
+  store32( &P->leaf_length, 0 );
+  store32( &P->node_offset, 0 );
+  store16( &P->xof_length, 0 );
+  P->node_depth    = 0;
+  P->inner_length  = 0;
+  /* memset(P->reserved, 0, sizeof(P->reserved) ); */
+  memset( P->salt,     0, sizeof( P->salt ) );
+  memset( P->personal, 0, sizeof( P->personal ) );
+  return blake2s_init_param( S, P );
+}
+
+int blake2s_init_key( blake2s_state *S, size_t outlen, const void *key, size_t keylen )
+{
+  blake2s_param P[1];
+
+  if ( ( !outlen ) || ( outlen > BLAKE2S_OUTBYTES ) ) return -1;
+
+  if ( !key || !keylen || keylen > BLAKE2S_KEYBYTES ) return -1;
+
+  P->digest_length = (uint8_t)outlen;
+  P->key_length    = (uint8_t)keylen;
+  P->fanout        = 1;
+  P->depth         = 1;
+  store32( &P->leaf_length, 0 );
+  store32( &P->node_offset, 0 );
+  store16( &P->xof_length, 0 );
+  P->node_depth    = 0;
+  P->inner_length  = 0;
+  /* memset(P->reserved, 0, sizeof(P->reserved) ); */
+  memset( P->salt,     0, sizeof( P->salt ) );
+  memset( P->personal, 0, sizeof( P->personal ) );
+
+  if( blake2s_init_param( S, P ) < 0 ) return -1;
+
+  {
+    uint8_t block[BLAKE2S_BLOCKBYTES];
+    memset( block, 0, BLAKE2S_BLOCKBYTES );
+    memcpy( block, key, keylen );
+    blake2s_update( S, block, BLAKE2S_BLOCKBYTES );
+    secure_zero_memory( block, BLAKE2S_BLOCKBYTES ); /* Burn the key from stack */
+  }
+  return 0;
+}
+
+#define G(r,i,a,b,c,d)                      \
+  do {                                      \
+    a = a + b + m[blake2s_sigma[r][2*i+0]]; \
+    d = rotr32(d ^ a, 16);                  \
+    c = c + d;                              \
+    b = rotr32(b ^ c, 12);                  \
+    a = a + b + m[blake2s_sigma[r][2*i+1]]; \
+    d = rotr32(d ^ a, 8);                   \
+    c = c + d;                              \
+    b = rotr32(b ^ c, 7);                   \
+  } while(0)
+
+#define ROUND(r)                    \
+  do {                              \
+    G(r,0,v[ 0],v[ 4],v[ 8],v[12]); \
+    G(r,1,v[ 1],v[ 5],v[ 9],v[13]); \
+    G(r,2,v[ 2],v[ 6],v[10],v[14]); \
+    G(r,3,v[ 3],v[ 7],v[11],v[15]); \
+    G(r,4,v[ 0],v[ 5],v[10],v[15]); \
+    G(r,5,v[ 1],v[ 6],v[11],v[12]); \
+    G(r,6,v[ 2],v[ 7],v[ 8],v[13]); \
+    G(r,7,v[ 3],v[ 4],v[ 9],v[14]); \
+  } while(0)
+
+static void blake2s_compress( blake2s_state *S, const uint8_t in[BLAKE2S_BLOCKBYTES] )
+{
+  uint32_t m[16];
+  uint32_t v[16];
+  size_t i;
+
+  for( i = 0; i < 16; ++i ) {
+    m[i] = load32( in + i * sizeof( m[i] ) );
+  }
+
+  for( i = 0; i < 8; ++i ) {
+    v[i] = S->h[i];
+  }
+
+  v[ 8] = blake2s_IV[0];
+  v[ 9] = blake2s_IV[1];
+  v[10] = blake2s_IV[2];
+  v[11] = blake2s_IV[3];
+  v[12] = S->t[0] ^ blake2s_IV[4];
+  v[13] = S->t[1] ^ blake2s_IV[5];
+  v[14] = S->f[0] ^ blake2s_IV[6];
+  v[15] = S->f[1] ^ blake2s_IV[7];
+
+  ROUND( 0 );
+  ROUND( 1 );
+  ROUND( 2 );
+  ROUND( 3 );
+  ROUND( 4 );
+  ROUND( 5 );
+  ROUND( 6 );
+  ROUND( 7 );
+  ROUND( 8 );
+  ROUND( 9 );
+
+  for( i = 0; i < 8; ++i ) {
+    S->h[i] = S->h[i] ^ v[i] ^ v[i + 8];
+  }
+}
+
+#undef G
+#undef ROUND
+
+int blake2s_update( blake2s_state *S, const void *pin, size_t inlen )
+{
+  const unsigned char * in = (const unsigned char *)pin;
+  if( inlen > 0 )
+  {
+    size_t left = S->buflen;
+    size_t fill = BLAKE2S_BLOCKBYTES - left;
+    if( inlen > fill )
+    {
+      S->buflen = 0;
+      memcpy( S->buf + left, in, fill ); /* Fill buffer */
+      blake2s_increment_counter( S, BLAKE2S_BLOCKBYTES );
+      blake2s_compress( S, S->buf ); /* Compress */
+      in += fill; inlen -= fill;
+      while(inlen > BLAKE2S_BLOCKBYTES) {
+        blake2s_increment_counter(S, BLAKE2S_BLOCKBYTES);
+        blake2s_compress( S, in );
+        in += BLAKE2S_BLOCKBYTES;
+        inlen -= BLAKE2S_BLOCKBYTES;
+      }
+    }
+    memcpy( S->buf + S->buflen, in, inlen );
+    S->buflen += inlen;
+  }
+  return 0;
+}
+
+int blake2s_final( blake2s_state *S, void *out, size_t outlen )
+{
+  uint8_t buffer[BLAKE2S_OUTBYTES] = {0};
+  size_t i;
+
+  if( out == NULL || outlen < S->outlen )
+    return -1;
+
+  if( blake2s_is_lastblock( S ) )
+    return -1;
+
+  blake2s_increment_counter( S, ( uint32_t )S->buflen );
+  blake2s_set_lastblock( S );
+  memset( S->buf + S->buflen, 0, BLAKE2S_BLOCKBYTES - S->buflen ); /* Padding */
+  blake2s_compress( S, S->buf );
+
+  for( i = 0; i < 8; ++i ) /* Output full hash to temp buffer */
+    store32( buffer + sizeof( S->h[i] ) * i, S->h[i] );
+
+  memcpy( out, buffer, outlen );
+  secure_zero_memory(buffer, sizeof(buffer));
+  return 0;
+}
+
+int blake2s( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen )
+{
+  blake2s_state S[1];
+
+  /* Verify parameters */
+  if ( NULL == in && inlen > 0 ) return -1;
+
+  if ( NULL == out ) return -1;
+
+  if ( NULL == key && keylen > 0) return -1;
+
+  if( !outlen || outlen > BLAKE2S_OUTBYTES ) return -1;
+
+  if( keylen > BLAKE2S_KEYBYTES ) return -1;
+
+  if( keylen > 0 )
+  {
+    if( blake2s_init_key( S, outlen, key, keylen ) < 0 ) return -1;
+  }
+  else
+  {
+    if( blake2s_init( S, outlen ) < 0 ) return -1;
+  }
+
+  blake2s_update( S, ( const uint8_t * )in, inlen );
+  blake2s_final( S, out, outlen );
+  return 0;
+}
+
+#if defined(SUPERCOP)
+int crypto_hash( unsigned char *out, unsigned char *in, unsigned long long inlen )
+{
+  return blake2s( out, BLAKE2S_OUTBYTES, in, inlen, NULL, 0 );
+}
+#endif
+
+#if defined(BLAKE2S_SELFTEST)
+#include 
+#include "blake2-kat.h"
+int main( void )
+{
+  uint8_t key[BLAKE2S_KEYBYTES];
+  uint8_t buf[BLAKE2_KAT_LENGTH];
+  size_t i, step;
+
+  for( i = 0; i < BLAKE2S_KEYBYTES; ++i )
+    key[i] = ( uint8_t )i;
+
+  for( i = 0; i < BLAKE2_KAT_LENGTH; ++i )
+    buf[i] = ( uint8_t )i;
+
+  /* Test simple API */
+  for( i = 0; i < BLAKE2_KAT_LENGTH; ++i )
+  {
+    uint8_t hash[BLAKE2S_OUTBYTES];
+    blake2s( hash, BLAKE2S_OUTBYTES, buf, i, key, BLAKE2S_KEYBYTES );
+
+    if( 0 != memcmp( hash, blake2s_keyed_kat[i], BLAKE2S_OUTBYTES ) )
+    {
+      goto fail;
+    }
+  }
+
+  /* Test streaming API */
+  for(step = 1; step < BLAKE2S_BLOCKBYTES; ++step) {
+    for (i = 0; i < BLAKE2_KAT_LENGTH; ++i) {
+      uint8_t hash[BLAKE2S_OUTBYTES];
+      blake2s_state S;
+      uint8_t * p = buf;
+      size_t mlen = i;
+      int err = 0;
+
+      if( (err = blake2s_init_key(&S, BLAKE2S_OUTBYTES, key, BLAKE2S_KEYBYTES)) < 0 ) {
+        goto fail;
+      }
+
+      while (mlen >= step) {
+        if ( (err = blake2s_update(&S, p, step)) < 0 ) {
+          goto fail;
+        }
+        mlen -= step;
+        p += step;
+      }
+      if ( (err = blake2s_update(&S, p, mlen)) < 0) {
+        goto fail;
+      }
+      if ( (err = blake2s_final(&S, hash, BLAKE2S_OUTBYTES)) < 0) {
+        goto fail;
+      }
+
+      if (0 != memcmp(hash, blake2s_keyed_kat[i], BLAKE2S_OUTBYTES)) {
+        goto fail;
+      }
+    }
+  }
+
+  puts( "ok" );
+  return 0;
+fail:
+  puts("error");
+  return -1;
+}
+#endif

Added: vendor/libarchive/dist/libarchive/archive_blake2sp_ref.c
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ vendor/libarchive/dist/libarchive/archive_blake2sp_ref.c	Tue Oct 23 10:58:07 2018	(r339640)
@@ -0,0 +1,359 @@
+/*
+   BLAKE2 reference source code package - reference C implementations
+
+   Copyright 2012, Samuel Neves .  You may use this under the
+   terms of the CC0, the OpenSSL Licence, or the Apache Public License 2.0, at
+   your option.  The terms of these licenses can be found at:
+
+   - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
+   - OpenSSL license   : https://www.openssl.org/source/license.html
+   - Apache 2.0        : http://www.apache.org/licenses/LICENSE-2.0
+
+   More information about the BLAKE2 hash function can be found at
+   https://blake2.net.
+*/
+
+#include 
+#include 
+#include 
+
+#if defined(_OPENMP)
+#include <omp.h>
+#endif
+
+#include "archive_blake2.h"
+#include "archive_blake2_impl.h"
+
+#define PARALLELISM_DEGREE 8
+
+/*
+  blake2sp_init_param defaults to setting the expecting output length
+  from the digest_length parameter block field.
+
+  In some cases, however, we do not want this, as the output length
+  of these instances is given by inner_length instead.
+*/
+static int blake2sp_init_leaf_param( blake2s_state *S, const blake2s_param *P )
+{
+  int err = blake2s_init_param(S, P);
+  S->outlen = P->inner_length;
+  return err;
+}
+
+static int blake2sp_init_leaf( blake2s_state *S, size_t outlen, size_t keylen, uint32_t offset )
+{
+  blake2s_param P[1];
+  P->digest_length = (uint8_t)outlen;
+  P->key_length = (uint8_t)keylen;
+  P->fanout = PARALLELISM_DEGREE;
+  P->depth = 2;
+  store32( &P->leaf_length, 0 );
+  store32( &P->node_offset, offset );
+  store16( &P->xof_length, 0 );
+  P->node_depth = 0;
+  P->inner_length = BLAKE2S_OUTBYTES;
+  memset( P->salt, 0, sizeof( P->salt ) );
+  memset( P->personal, 0, sizeof( P->personal ) );
+  return blake2sp_init_leaf_param( S, P );

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***

From owner-svn-src-vendor@freebsd.org  Tue Oct 23 11:34:16 2018
Return-Path: 
Delivered-To: svn-src-vendor@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 59DAA1076045;
 Tue, 23 Oct 2018 11:34:16 +0000 (UTC) (envelope-from mm@FreeBSD.org)
Received: from mxrelay.nyi.freebsd.org (mxrelay.nyi.freebsd.org
 [IPv6:2610:1c1:1:606c::19:3])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client CN "mxrelay.nyi.freebsd.org",
 Issuer "Let's Encrypt Authority X3" (verified OK))
 by mx1.freebsd.org (Postfix) with ESMTPS id F0BBD76E12;
 Tue, 23 Oct 2018 11:34:15 +0000 (UTC) (envelope-from mm@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mxrelay.nyi.freebsd.org (Postfix) with ESMTPS id E5F9975E0;
 Tue, 23 Oct 2018 11:34:15 +0000 (UTC) (envelope-from mm@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id w9NBYFAm066595;
 Tue, 23 Oct 2018 11:34:15 GMT (envelope-from mm@FreeBSD.org)
Received: (from mm@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id w9NBYFD4066594;
 Tue, 23 Oct 2018 11:34:15 GMT (envelope-from mm@FreeBSD.org)
Message-Id: <201810231134.w9NBYFD4066594@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: mm set sender to mm@FreeBSD.org
 using -f
From: Martin Matuska <mm@FreeBSD.org>
Date: Tue, 23 Oct 2018 11:34:15 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
 svn-src-vendor@freebsd.org
Subject: svn commit: r339641 - vendor/libarchive/dist/libarchive
X-SVN-Group: vendor
X-SVN-Commit-Author: mm
X-SVN-Commit-Paths: vendor/libarchive/dist/libarchive
X-SVN-Commit-Revision: 339641
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-vendor@freebsd.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: SVN commit messages for the vendor work area tree
 
List-Unsubscribe: , 
 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
 
X-List-Received-Date: Tue, 23 Oct 2018 11:34:16 -0000

Author: mm
Date: Tue Oct 23 11:34:15 2018
New Revision: 339641
URL: https://svnweb.freebsd.org/changeset/base/339641

Log:
  Update vendor/libarchive/dist to git 58ae9e02093aa47dc6eb27a66d4e95b05e9e672e
  
  Relevant vendor changes:
    RAR5 reader: declare some constants static

Modified:
  vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c

Modified: vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c
==============================================================================
--- vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c	Tue Oct 23 10:58:07 2018	(r339640)
+++ vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c	Tue Oct 23 11:34:15 2018	(r339641)
@@ -75,10 +75,10 @@
  *
  * The array itself is decrypted in `rar5_init` function. */
 
-unsigned char rar5_signature[] = { 243, 192, 211, 128, 187, 166, 160, 161 };
-const ssize_t rar5_signature_size = sizeof(rar5_signature);
-const size_t g_unpack_buf_chunk_size = 1024;
-const size_t g_unpack_window_size = 0x20000;
+static unsigned char rar5_signature[] = { 243, 192, 211, 128, 187, 166, 160, 161 };
+static const ssize_t rar5_signature_size = sizeof(rar5_signature);
+static const size_t g_unpack_buf_chunk_size = 1024;
+static const size_t g_unpack_window_size = 0x20000;
 
 struct file_header {
     ssize_t bytes_remaining;
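
The change above gives the RAR5 reader's file-scope constants internal linkage: with `static` they no longer appear as exported symbols and cannot collide with identically named definitions in other translation units. A tiny illustration with made-up names (not libarchive code):

  #include <stddef.h>

  /* With `static`, these objects are private to this .c file, so another
   * translation unit in the same library can define its own `signature`
   * or `window_size` without a duplicate-symbol error at link time. */
  static const unsigned char signature[8] =
      { 243, 192, 211, 128, 187, 166, 160, 161 };
  static const size_t window_size = 0x20000;

  size_t reader_window_size(void)
  {
      (void)signature;        /* reference it so the compiler does not flag it unused */
      return window_size;
  }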

From owner-svn-src-vendor@freebsd.org  Tue Oct 23 12:54:18 2018
Return-Path: 
Delivered-To: svn-src-vendor@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 503B0FD6E3B;
 Tue, 23 Oct 2018 12:54:18 +0000 (UTC) (envelope-from mm@FreeBSD.org)
Received: from mxrelay.nyi.freebsd.org (mxrelay.nyi.freebsd.org
 [IPv6:2610:1c1:1:606c::19:3])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client CN "mxrelay.nyi.freebsd.org",
 Issuer "Let's Encrypt Authority X3" (verified OK))
 by mx1.freebsd.org (Postfix) with ESMTPS id F30507AAFF;
 Tue, 23 Oct 2018 12:54:17 +0000 (UTC) (envelope-from mm@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mxrelay.nyi.freebsd.org (Postfix) with ESMTPS id EDCE6102C2;
 Tue, 23 Oct 2018 12:54:17 +0000 (UTC) (envelope-from mm@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id w9NCsHZD010793;
 Tue, 23 Oct 2018 12:54:17 GMT (envelope-from mm@FreeBSD.org)
Received: (from mm@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id w9NCsH03010792;
 Tue, 23 Oct 2018 12:54:17 GMT (envelope-from mm@FreeBSD.org)
Message-Id: <201810231254.w9NCsH03010792@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: mm set sender to mm@FreeBSD.org
 using -f
From: Martin Matuska <mm@FreeBSD.org>
Date: Tue, 23 Oct 2018 12:54:17 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
 svn-src-vendor@freebsd.org
Subject: svn commit: r339644 - vendor/libarchive/dist/libarchive
X-SVN-Group: vendor
X-SVN-Commit-Author: mm
X-SVN-Commit-Paths: vendor/libarchive/dist/libarchive
X-SVN-Commit-Revision: 339644
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-vendor@freebsd.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: SVN commit messages for the vendor work area tree
 
List-Unsubscribe: , 
 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
 
X-List-Received-Date: Tue, 23 Oct 2018 12:54:18 -0000

Author: mm
Date: Tue Oct 23 12:54:17 2018
New Revision: 339644
URL: https://svnweb.freebsd.org/changeset/base/339644

Log:
  Update vendor/libarchive/dist to git b1dc8bb16e192d71442a94fdcd0096ba9e2946b4
  
  Relevant vendor changes:
    RAR5 reader: comment out unused constant

Modified:
  vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c

Modified: vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c
==============================================================================
--- vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c	Tue Oct 23 12:53:09 2018	(r339643)
+++ vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c	Tue Oct 23 12:54:17 2018	(r339644)
@@ -77,7 +77,7 @@
 
 static unsigned char rar5_signature[] = { 243, 192, 211, 128, 187, 166, 160, 161 };
 static const ssize_t rar5_signature_size = sizeof(rar5_signature);
-static const size_t g_unpack_buf_chunk_size = 1024;
+/* static const size_t g_unpack_buf_chunk_size = 1024; */
 static const size_t g_unpack_window_size = 0x20000;
 
 struct file_header {

From owner-svn-src-vendor@freebsd.org  Thu Oct 25 23:10:07 2018
Return-Path: 
Delivered-To: svn-src-vendor@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4472C10CC870;
 Thu, 25 Oct 2018 23:10:07 +0000 (UTC) (envelope-from mm@FreeBSD.org)
Received: from mxrelay.nyi.freebsd.org (mxrelay.nyi.freebsd.org
 [IPv6:2610:1c1:1:606c::19:3])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client CN "mxrelay.nyi.freebsd.org",
 Issuer "Let's Encrypt Authority X3" (verified OK))
 by mx1.freebsd.org (Postfix) with ESMTPS id EECA0777E5;
 Thu, 25 Oct 2018 23:10:06 +0000 (UTC) (envelope-from mm@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mxrelay.nyi.freebsd.org (Postfix) with ESMTPS id B60B61891D;
 Thu, 25 Oct 2018 23:10:06 +0000 (UTC) (envelope-from mm@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id w9PNA6v7012456;
 Thu, 25 Oct 2018 23:10:06 GMT (envelope-from mm@FreeBSD.org)
Received: (from mm@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id w9PNA6L5012455;
 Thu, 25 Oct 2018 23:10:06 GMT (envelope-from mm@FreeBSD.org)
Message-Id: <201810252310.w9PNA6L5012455@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: mm set sender to mm@FreeBSD.org
 using -f
From: Martin Matuska <mm@FreeBSD.org>
Date: Thu, 25 Oct 2018 23:10:06 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
 svn-src-vendor@freebsd.org
Subject: svn commit: r339750 - vendor/libarchive/dist/libarchive
X-SVN-Group: vendor
X-SVN-Commit-Author: mm
X-SVN-Commit-Paths: vendor/libarchive/dist/libarchive
X-SVN-Commit-Revision: 339750
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-vendor@freebsd.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: SVN commit messages for the vendor work area tree
 
List-Unsubscribe: , 
 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
 
X-List-Received-Date: Thu, 25 Oct 2018 23:10:07 -0000

Author: mm
Date: Thu Oct 25 23:10:06 2018
New Revision: 339750
URL: https://svnweb.freebsd.org/changeset/base/339750

Log:
  Update vendor/libarchive/dist to git 1266f6d281a6d7c6604a8c14cdad14dc83ea4b88
  
  Relevant vendor changes:
    RAR5 reader: FreeBSD build platform fixes for powerpc(64), mips(64),
                 sparc64 and riscv64

Modified:
  vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c

Modified: vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c
==============================================================================
--- vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c	Thu Oct 25 22:55:18 2018	(r339749)
+++ vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c	Thu Oct 25 23:10:06 2018	(r339750)
@@ -737,11 +737,11 @@ static void dist_cache_push(struct rar5* rar, int valu
     q[0] = value;
 }
 
-static int dist_cache_touch(struct rar5* rar, int index) {
+static int dist_cache_touch(struct rar5* rar, int idx) {
     int* q = rar->cstate.dist_cache;
-    int i, dist = q[index];
+    int i, dist = q[idx];
 
-    for(i = index; i > 0; i--)
+    for(i = idx; i > 0; i--)
         q[i] = q[i - 1];
 
     q[0] = dist;
@@ -1500,10 +1500,10 @@ static int process_head_main(struct archive_read* a, s
     (void) entry;
 
     int ret;
-    size_t extra_data_size,
-        extra_field_size,
-        extra_field_id,
-        archive_flags;
+    size_t extra_data_size = 0;
+    size_t extra_field_size = 0;
+    size_t extra_field_id = 0;
+    size_t archive_flags = 0;
 
     if(block_flags & HFL_EXTRA_DATA) {
         if(!read_var_sized(a, &extra_data_size, NULL))
@@ -1528,7 +1528,7 @@ static int process_head_main(struct archive_read* a, s
     rar->main.solid = (archive_flags & SOLID) > 0;
 
     if(archive_flags & VOLUME_NUMBER) {
-        size_t v;
+        size_t v = 0;
         if(!read_var_sized(a, &v, NULL)) {
             return ARCHIVE_EOF;
         }
@@ -1644,7 +1644,8 @@ static int process_base_block(struct archive_read* a,
     struct rar5* rar = get_context(a);
     uint32_t hdr_crc, computed_crc;
     size_t raw_hdr_size, hdr_size_len, hdr_size;
-    size_t header_id, header_flags;
+    size_t header_id = 0;
+    size_t header_flags = 0;
     const uint8_t* p;
     int ret;
 
@@ -2529,8 +2530,8 @@ static int do_uncompress_block(struct archive_read* a,
 
             continue;
         } else if(num < 262) {
-            const int index = num - 258;
-            const int dist = dist_cache_touch(rar, index);
+            const int idx = num - 258;
+            const int dist = dist_cache_touch(rar, idx);
 
             uint16_t len_slot;
             int len;
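
The hunks above (and the r339792 follow-up below) quiet -Wmaybe-uninitialized on riscv64 and the other listed platforms by giving the size_t locals a defined value before read_var_sized() can fail and trigger an early return. A self-contained sketch of the pattern, using a hypothetical read_var() stand-in rather than libarchive's helper:

  #include <stddef.h>
  #include <stdio.h>

  /* Hypothetical stand-in for libarchive's read_var_sized(): writes *out and
   * returns 1 on success, 0 on a short read (leaving *out untouched, as the
   * real helper does on EOF). */
  static int read_var(size_t *out, int fail)
  {
      if (fail)
          return 0;
      *out = 0x24;
      return 1;
  }

  static int process_header(int fail)
  {
      size_t archive_flags = 0;   /* the r339750/r339792 fix: defined even on
                                   * the failure path, so the compiler cannot
                                   * see a possibly uninitialized read */

      if (!read_var(&archive_flags, fail))
          return -1;              /* early return before the value is used */

      return (archive_flags & 0x1) ? 1 : 0;
  }

  int main(void)
  {
      printf("ok=%d eof=%d\n", process_header(0), process_header(1));
      return 0;
  }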

From owner-svn-src-vendor@freebsd.org  Fri Oct 26 21:15:37 2018
Return-Path: 
Delivered-To: svn-src-vendor@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 0646E1088B2D;
 Fri, 26 Oct 2018 21:15:37 +0000 (UTC) (envelope-from mm@FreeBSD.org)
Received: from mxrelay.nyi.freebsd.org (mxrelay.nyi.freebsd.org
 [IPv6:2610:1c1:1:606c::19:3])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client CN "mxrelay.nyi.freebsd.org",
 Issuer "Let's Encrypt Authority X3" (verified OK))
 by mx1.freebsd.org (Postfix) with ESMTPS id A41216A544;
 Fri, 26 Oct 2018 21:15:36 +0000 (UTC) (envelope-from mm@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mxrelay.nyi.freebsd.org (Postfix) with ESMTPS id 7CF792655C;
 Fri, 26 Oct 2018 21:15:36 +0000 (UTC) (envelope-from mm@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id w9QLFaqp096579;
 Fri, 26 Oct 2018 21:15:36 GMT (envelope-from mm@FreeBSD.org)
Received: (from mm@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id w9QLFa4I096578;
 Fri, 26 Oct 2018 21:15:36 GMT (envelope-from mm@FreeBSD.org)
Message-Id: <201810262115.w9QLFa4I096578@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: mm set sender to mm@FreeBSD.org
 using -f
From: Martin Matuska <mm@FreeBSD.org>
Date: Fri, 26 Oct 2018 21:15:36 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
 svn-src-vendor@freebsd.org
Subject: svn commit: r339792 - vendor/libarchive/dist/libarchive
X-SVN-Group: vendor
X-SVN-Commit-Author: mm
X-SVN-Commit-Paths: vendor/libarchive/dist/libarchive
X-SVN-Commit-Revision: 339792
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-vendor@freebsd.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: SVN commit messages for the vendor work area tree
 
List-Unsubscribe: , 
 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
 
X-List-Received-Date: Fri, 26 Oct 2018 21:15:37 -0000

Author: mm
Date: Fri Oct 26 21:15:36 2018
New Revision: 339792
URL: https://svnweb.freebsd.org/changeset/base/339792

Log:
  Update vendor/libarchive/dist to git d661131393def793a9919d1e3fd54c9992888bd6
  
  Relevant vendor changes:
    RAR5 reader: more maybe-uninitialized size_t fixes for riscv64
                 FreeBSD build

Modified:
  vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c

Modified: vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c
==============================================================================
--- vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c	Fri Oct 26 21:04:17 2018	(r339791)
+++ vendor/libarchive/dist/libarchive/archive_read_support_format_rar5.c	Fri Oct 26 21:15:36 2018	(r339792)
@@ -1281,8 +1281,12 @@ static int process_head_file(struct archive_read* a, s
         struct archive_entry* entry, size_t block_flags)
 {
     ssize_t extra_data_size = 0;
-    size_t data_size, file_flags, file_attr, compression_info, host_os,
-           name_size;
+    size_t data_size = 0;
+    size_t file_flags = 0;
+    size_t file_attr = 0;
+    size_t compression_info = 0;
+    size_t host_os = 0;
+    size_t name_size = 0;
     uint64_t unpacked_size;
     uint32_t mtime = 0, crc;
     int c_method = 0, c_version = 0, is_dir;
@@ -1297,7 +1301,7 @@ static int process_head_file(struct archive_read* a, s
     }
 
     if(block_flags & HFL_EXTRA_DATA) {
-        size_t edata_size;
+        size_t edata_size = 0;
         if(!read_var_sized(a, &edata_size, NULL))
             return ARCHIVE_EOF;