Date:      Sat, 18 Jan 2014 19:29:53 +0000 (UTC)
From:      Raphael Kubo da Costa <rakuco@FreeBSD.org>
To:        ports-committers@freebsd.org, svn-ports-all@freebsd.org, svn-ports-head@freebsd.org
Subject:   svn commit: r340207 - in head/devel/kdevplatform: . files
Message-ID:  <201401181929.s0IJTrAi002777@svn.freebsd.org>

Author: rakuco
Date: Sat Jan 18 19:29:53 2014
New Revision: 340207
URL: http://svnweb.freebsd.org/changeset/ports/340207
QAT: https://qat.redports.org/buildarchive/r340207/

Log:
  - Fix the build with libc++ by backporting some upstream commits.
  
    In short, kdevplatform used to ship a copy of Google's sparsehash library,
    but with a hardcoded config.h that assumed libstdc++ and its non-standard
    hash_fun.h header were in use. kdevplatform master got rid of the
    sparsehash copy altogether and replaced it with some much simpler code.
    These changes were backported by the developers after we reported our
    packaging/build issues. [1]
  
  - Remove patch-util__kdevplatform_shell_environment.sh, use USES=shebangfix
    instead and also fix kdev_format_source.
  
  PR:		ports/185859
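
For reference, the rewrite that USES=shebangfix performs on each file listed
in SHEBANG_FILES boils down to substituting a stock interpreter path with the
one installed under the ports prefix. A rough sketch (the demo file name,
shebang, and /usr/local prefix below are illustrative, not the port's actual
values; the real logic lives in the ports framework):

```shell
# Sketch of a shebangfix-style rewrite: map a hardcoded interpreter path
# to the ports-prefix one (${LOCALBASE}, /usr/local by default).
demo=$(mktemp)
printf '#!/bin/bash\necho hello\n' > "$demo"
# Rewrite only the first line, and only if it is the stock bash shebang.
sed -e '1s|^#!/bin/bash$|#!/usr/local/bin/bash|' "$demo" > "$demo.fixed"
fixed_shebang=$(head -n 1 "$demo.fixed")
echo "$fixed_shebang"
```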

Added:
  head/devel/kdevplatform/files/patch-git_ad31296_a4ed2fc   (contents, props changed)
Deleted:
  head/devel/kdevplatform/files/patch-util__kdevplatform_shell_environment.sh
  head/devel/kdevplatform/files/patch-util_google_sparsehash_sparsehashconfig.h
Modified:
  head/devel/kdevplatform/Makefile

Modified: head/devel/kdevplatform/Makefile
==============================================================================
--- head/devel/kdevplatform/Makefile	Sat Jan 18 19:07:01 2014	(r340206)
+++ head/devel/kdevplatform/Makefile	Sat Jan 18 19:29:53 2014	(r340207)
@@ -3,6 +3,7 @@
 
 PORTNAME=	kdevplatform
 PORTVERSION=	${KDEVELOP_VERSION:S/4./1./}
+PORTREVISION=	1
 CATEGORIES=	devel kde
 MASTER_SITES=	${MASTER_SITE_KDE}
 MASTER_SITE_SUBDIR=	${KDEVELOP_BRANCH}/kdevelop/${KDEVELOP_VERSION}/src
@@ -20,9 +21,11 @@ USE_KDE4=	kate_run kdehier kdelibs kdepr
 USE_QT4=	qmake_build moc_build uic_build rcc_build \
 		corelib designer_build gui webkit
 USE_XZ=		yes
-USES=		cmake
+USES=		cmake shebangfix
 USE_LDCONFIG=	yes
 
+SHEBANG_FILES=	util/kdev_format_source util/kdevplatform_shell_environment.sh
+
 PLIST_SUB+=	SHLIB_VER=7.0.0 \
 		SHLIB_SHVER=7
 

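The backported patch that follows replaces the dense_hash_map-based cycle
detection in DUContextDynamicData::imports with a caller-provided QSet used
as a recursion guard. A minimal sketch of that pattern (std::unordered_set
stands in for QSet so it builds without Qt; the Context type and
importsContext name are illustrative, not KDevelop's actual API):

```cpp
#include <cassert>
#include <unordered_set>
#include <vector>

// Illustrative stand-in for a DUContext-like node with direct imports.
struct Context {
    std::vector<const Context*> imports;
};

// Returns true if 'target' is reachable from 'from' through imports, on
// any level. The guard set records visited nodes so that import cycles
// terminate instead of recursing forever -- the role the removed
// google::dense_hash_map played in importsSafeButSlow.
bool importsContext(const Context* from, const Context* target,
                    std::unordered_set<const Context*>* guard) {
    if (from == target)
        return true;
    if (guard->count(from))
        return false;  // already visited: break the cycle
    guard->insert(from);
    for (const Context* imported : from->imports)
        if (importsContext(imported, target, guard))
            return true;
    return false;
}
```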
Added: head/devel/kdevplatform/files/patch-git_ad31296_a4ed2fc
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/devel/kdevplatform/files/patch-git_ad31296_a4ed2fc	Sat Jan 18 19:29:53 2014	(r340207)
@@ -0,0 +1,7205 @@
+The following commits from the 1.6 kdevplatform branch are necessary to fix
+the build when libc++ is used (there is no hash_fun.h header in that case,
+and no __gnu_cxx namespace, both of which sparseconfig.h assumed).
+
+commit a4ed2fcdb5ec4daa22ba432b1aa57f3af1a2e402
+Author: Milian Wolff <mail@milianw.de>
+Date:   Sat Dec 7 16:46:19 2013 +0100
+
+    Minor: remove dead code.
+    
+    Conflicts:
+    	language/duchain/duchain.cpp
+
+commit ad312969bdfcd79e45cfd22e7a16d51c31c834c4
+Author: Milian Wolff <mail@milianw.de>
+Date:   Sat Dec 7 17:33:41 2013 +0100
+
+    Remove google sparse/dense hash code.
+    
+    Replace the single place where it was used with a simple QSet.
+    Also cleanup the code, as this is not a hot-path thus the performance
+    is really not that important here.
+    
+    Comparing before/after showed that DUContext::imports is roughly being
+    called 10k times when importing GammaRay as a project with a clean
+    cache. Parsing takes ca. 40s, of which 0.08s where spent in ::imports
+    with the old code. The new simpler QSet based variant takes ca. 0.1s
+    which is just as fine - esp. if we can get rid of so much code!
+    
+    This cherry-pick was requested by FreeBSD packager rakuco.
+    
+    (cherry picked from commit 64fe266b53a4655621609bd5bafeabbf76d10287)
+    
+    Conflicts:
+    	language/duchain/ducontext.cpp
+    	language/duchain/ducontextdynamicdata.h
+
+diff --git a/language/CMakeLists.txt b/language/CMakeLists.txt
+index 2f777e4..c54eb72 100644
+--- language/CMakeLists.txt
++++ language/CMakeLists.txt
+@@ -355,5 +355,3 @@ install(FILES
+     checks/controlflownode.h
+     DESTINATION ${INCLUDE_INSTALL_DIR}/kdevplatform/language/checks COMPONENT Devel
+ )
+-
+-
+diff --git a/language/duchain/duchain.cpp b/language/duchain/duchain.cpp
+index 091a20e..ef25132 100644
+--- language/duchain/duchain.cpp
++++ language/duchain/duchain.cpp
+@@ -42,8 +42,6 @@
+ #include <interfaces/foregroundlock.h>
+ #include <interfaces/isession.h>
+ 
+-#include <util/google/dense_hash_map>
+-
+ #include "../interfaces/ilanguagesupport.h"
+ #include "../interfaces/icodehighlighting.h"
+ #include "../backgroundparser/backgroundparser.h"
+
+diff --git a/cleanup_includeguards.sh b/cleanup_includeguards.sh
+index f8fb683..7b57555 100644
+--- cleanup_includeguards.sh
++++ cleanup_includeguards.sh
+@@ -8,6 +8,5 @@
+ # Use carrefully and precautious review needed as it won't match
+ # if there's as example, trailing whitespace, #define FOO_H 1, or define FOO_H_
+ # It can also match some non wanted pattern like #undef HAVE_STDINT_H
+-# in util/google/sparsehash/sparseconfig_windows.h 
+ find . -name '*.h' -print | xargs sed -i 's/[A-Z].*_H$/KDEVPLATFORM_&/'
+ #find plugins/ -name '*.h' -print | xargs sed -i 's/[A-Z].*_H$/PLUGIN_&/'
+diff --git a/format_sources b/format_sources
+index bda9c75..051f289 100644
+--- format_sources
++++ format_sources
+@@ -1,5 +1,5 @@
+ # Ignore copied-in code, by matching it with an empty formatting command
+-plugins/patchreview/libdiff2/* plugins/patchreview/settings/* plugins/quickopen/expandingtree/* util/google/* : 
++plugins/patchreview/libdiff2/* plugins/patchreview/settings/* plugins/quickopen/expandingtree/* :
+ 
+ # 2-space indentation
+ plugins/classbrowser/*.cpp plugins/classbrowser/*.h plugins/contextbrowser/*.cpp plugins/contextbrowser/*.h plugins/quickopen/*.cpp plugins/quickopen/*.h language/*.h language/*.cpp : uncrustify -l CPP -f $TMPFILE -c format.config.uncrustify.2_spaces -o $TMPFILE
+diff --git a/language/duchain/ducontext.cpp b/language/duchain/ducontext.cpp
+index 9166cab..53566a1 100644
+--- language/duchain/ducontext.cpp
++++ language/duchain/ducontext.cpp
+@@ -144,40 +144,20 @@ void DUContextDynamicData::scopeIdentifier(bool includeClasses, QualifiedIdentif
+     target += m_context->d_func()->m_scopeIdentifier;
+ }
+ 
+-bool DUContextDynamicData::importsSafeButSlow(const DUContext* context, const TopDUContext* source,
+-                                              ImportsHash& checked) const
+-{
+-  if( this == context->m_dynamicData )
+-    return true;
+-
+-  if(checked.find(this) != checked.end())
+-    return false;
+-  checked.insert(std::make_pair(this, true));
+-
+-  FOREACH_FUNCTION( const DUContext::Import& ctx, m_context->d_func()->m_importedContexts ) {
+-    DUContext* import = ctx.context(source);
+-    if(import == context || (import && import->m_dynamicData->importsSafeButSlow(context, source, checked)))
+-      return true;
+-  }
+-
+-  return false;
+-}
+-
+ bool DUContextDynamicData::imports(const DUContext* context, const TopDUContext* source,
+-                                   int maxDepth) const
++                                   QSet<const DUContextDynamicData*>* recursionGuard) const
+ {
+   if( this == context->m_dynamicData )
+     return true;
+ 
+-  if(maxDepth == 0) {
+-    ImportsHash checked(500);
+-    checked.set_empty_key(0);
+-    return importsSafeButSlow(context, source, checked);
++  if (recursionGuard->contains(this)) {
++    return false;
+   }
++  recursionGuard->insert(this);
+ 
+   FOREACH_FUNCTION( const DUContext::Import& ctx, m_context->d_func()->m_importedContexts ) {
+     DUContext* import = ctx.context(source);
+-    if(import == context || (import && import->m_dynamicData->imports(context, source, maxDepth-1)))
++    if(import == context || (import && import->m_dynamicData->imports(context, source, recursionGuard)))
+       return true;
+   }
+ 
+@@ -854,7 +834,9 @@ bool DUContext::imports(const DUContext* origin, const CursorInRevision& /*posit
+ {
+   ENSURE_CAN_READ
+ 
+-  return m_dynamicData->imports(origin, topContext(), 4);
++  QSet<const DUContextDynamicData*> recursionGuard;
++  recursionGuard.reserve(8);
++  return m_dynamicData->imports(origin, topContext(), &recursionGuard);
+ }
+ 
+ bool DUContext::addIndirectImport(const DUContext::Import& import) {
+diff --git a/language/duchain/ducontextdynamicdata.h b/language/duchain/ducontextdynamicdata.h
+index 6884f70..78a8ca9 100644
+--- language/duchain/ducontextdynamicdata.h
++++ language/duchain/ducontextdynamicdata.h
+@@ -23,7 +23,6 @@
+ #define DUCONTEXTDYNAMICDATA_H
+ 
+ #include "ducontextdata.h"
+-#include <util/google/dense_hash_map>
+ 
+ namespace KDevelop {
+ 
+@@ -201,21 +200,8 @@ public:
+   /**
+    * Returns true if this context is imported by the given one, on any level.
+    * */
+-  bool imports(const DUContext* context, const TopDUContext* source, int maxDepth) const;
+-
+-  /**
+-   * This can deal with endless recursion
+-   */
+-  
+-  struct ImportsHash_Op {
+-    size_t operator() (const DUContextDynamicData* data) const {
+-      return (size_t)data;
+-    }
+-  };
+-  
+-  typedef google::dense_hash_map<const DUContextDynamicData*, bool, ImportsHash_Op> ImportsHash;
+-  
+-  bool importsSafeButSlow(const DUContext* context, const TopDUContext* source, ImportsHash& checked) const;
++  bool imports(const DUContext* context, const TopDUContext* source,
++               QSet<const DUContextDynamicData*>* recursionGuard) const;
+ };
+ 
+ }
+diff --git a/util/google/dense_hash_map b/util/google/dense_hash_map
+deleted file mode 100644
+index 7522b6e..0000000
+--- util/google/dense_hash_map
++++ /dev/null
+@@ -1,369 +0,0 @@
+-// Copyright (c) 2005, Google Inc.
+-// All rights reserved.
+-//
+-// Redistribution and use in source and binary forms, with or without
+-// modification, are permitted provided that the following conditions are
+-// met:
+-//
+-//     * Redistributions of source code must retain the above copyright
+-// notice, this list of conditions and the following disclaimer.
+-//     * Redistributions in binary form must reproduce the above
+-// copyright notice, this list of conditions and the following disclaimer
+-// in the documentation and/or other materials provided with the
+-// distribution.
+-//     * Neither the name of Google Inc. nor the names of its
+-// contributors may be used to endorse or promote products derived from
+-// this software without specific prior written permission.
+-//
+-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+-// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+-// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+-// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+-// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+-// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+-// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+-// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+-// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+-// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+-
+-// ----
+-//
+-// This is just a very thin wrapper over densehashtable.h, just
+-// like sgi stl's stl_hash_map is a very thin wrapper over
+-// stl_hashtable.  The major thing we define is operator[], because
+-// we have a concept of a data_type which stl_hashtable doesn't
+-// (it only has a key and a value).
+-//
+-// NOTE: this is exactly like sparse_hash_map.h, with the word
+-// "sparse" replaced by "dense", except for the addition of
+-// set_empty_key().
+-//
+-//   YOU MUST CALL SET_EMPTY_KEY() IMMEDIATELY AFTER CONSTRUCTION.
+-//
+-// Otherwise your program will die in mysterious ways.  (Note if you
+-// use the constructor that takes an InputIterator range, you pass in
+-// the empty key in the constructor, rather than after.  As a result,
+-// this constructor differs from the standard STL version.)
+-//
+-// In other respects, we adhere mostly to the STL semantics for
+-// hash-map.  One important exception is that insert() may invalidate
+-// iterators entirely -- STL semantics are that insert() may reorder
+-// iterators, but they all still refer to something valid in the
+-// hashtable.  Not so for us.  Likewise, insert() may invalidate
+-// pointers into the hashtable.  (Whether insert invalidates iterators
+-// and pointers depends on whether it results in a hashtable resize).
+-// On the plus side, delete() doesn't invalidate iterators or pointers
+-// at all, or even change the ordering of elements.
+-//
+-// Here are a few "power user" tips:
+-//
+-//    1) set_deleted_key():
+-//         If you want to use erase() you *must* call set_deleted_key(),
+-//         in addition to set_empty_key(), after construction.
+-//         The deleted and empty keys must differ.
+-//
+-//    2) resize(0):
+-//         When an item is deleted, its memory isn't freed right
+-//         away.  This allows you to iterate over a hashtable,
+-//         and call erase(), without invalidating the iterator.
+-//         To force the memory to be freed, call resize(0).
+-//         For tr1 compatibility, this can also be called as rehash(0).
+-//
+-//    3) min_load_factor(0.0)
+-//         Setting the minimum load factor to 0.0 guarantees that
+-//         the hash table will never shrink.
+-//
+-// Roughly speaking:
+-//   (1) dense_hash_map: fastest, uses the most memory unless entries are small
+-//   (2) sparse_hash_map: slowest, uses the least memory
+-//   (3) hash_map / unordered_map (STL): in the middle
+-//
+-// Typically I use sparse_hash_map when I care about space and/or when
+-// I need to save the hashtable on disk.  I use hash_map otherwise.  I
+-// don't personally use dense_hash_set ever; some people use it for
+-// small sets with lots of lookups.
+-//
+-// - dense_hash_map has, typically, about 78% memory overhead (if your
+-//   data takes up X bytes, the hash_map uses .78X more bytes in overhead).
+-// - sparse_hash_map has about 4 bits overhead per entry.
+-// - sparse_hash_map can be 3-7 times slower than the others for lookup and,
+-//   especially, inserts.  See time_hash_map.cc for details.
+-//
+-// See /usr/(local/)?doc/sparsehash-*/dense_hash_map.html
+-// for information about how to use this class.
+-
+-#ifndef _DENSE_HASH_MAP_H_
+-#define _DENSE_HASH_MAP_H_
+-
+-#include <google/sparsehash/sparseconfig.h>
+-#include <algorithm>                        // needed by stl_alloc
+-#include <functional>                       // for equal_to<>, select1st<>, etc
+-#include <memory>                           // for alloc
+-#include <utility>                          // for pair<>
+-#include <google/sparsehash/densehashtable.h>        // IWYU pragma: export
+-#include <google/sparsehash/libc_allocator_with_realloc.h>
+-#include HASH_FUN_H                 // for hash<>
+-_START_GOOGLE_NAMESPACE_
+-
+-template <class Key, class T,
+-          class HashFcn = SPARSEHASH_HASH<Key>,   // defined in sparseconfig.h
+-          class EqualKey = std::equal_to<Key>,
+-          class Alloc = libc_allocator_with_realloc<std::pair<const Key, T> > >
+-class dense_hash_map {
+- private:
+-  // Apparently select1st is not stl-standard, so we define our own
+-  struct SelectKey {
+-    typedef const Key& result_type;
+-    const Key& operator()(const std::pair<const Key, T>& p) const {
+-      return p.first;
+-    }
+-  };
+-  struct SetKey {
+-    void operator()(std::pair<const Key, T>* value, const Key& new_key) const {
+-      *const_cast<Key*>(&value->first) = new_key;
+-      // It would be nice to clear the rest of value here as well, in
+-      // case it's taking up a lot of memory.  We do this by clearing
+-      // the value.  This assumes T has a zero-arg constructor!
+-      value->second = T();
+-    }
+-  };
+-  // For operator[].
+-  struct DefaultValue {
+-    std::pair<const Key, T> operator()(const Key& key) {
+-      return std::make_pair(key, T());
+-    }
+-  };
+-
+-  // The actual data
+-  typedef dense_hashtable<std::pair<const Key, T>, Key, HashFcn, SelectKey,
+-                          SetKey, EqualKey, Alloc> ht;
+-  ht rep;
+-
+- public:
+-  typedef typename ht::key_type key_type;
+-  typedef T data_type;
+-  typedef T mapped_type;
+-  typedef typename ht::value_type value_type;
+-  typedef typename ht::hasher hasher;
+-  typedef typename ht::key_equal key_equal;
+-  typedef Alloc allocator_type;
+-
+-  typedef typename ht::size_type size_type;
+-  typedef typename ht::difference_type difference_type;
+-  typedef typename ht::pointer pointer;
+-  typedef typename ht::const_pointer const_pointer;
+-  typedef typename ht::reference reference;
+-  typedef typename ht::const_reference const_reference;
+-
+-  typedef typename ht::iterator iterator;
+-  typedef typename ht::const_iterator const_iterator;
+-  typedef typename ht::local_iterator local_iterator;
+-  typedef typename ht::const_local_iterator const_local_iterator;
+-
+-  // Iterator functions
+-  iterator begin()                               { return rep.begin(); }
+-  iterator end()                                 { return rep.end(); }
+-  const_iterator begin() const                   { return rep.begin(); }
+-  const_iterator end() const                     { return rep.end(); }
+-
+-
+-  // These come from tr1's unordered_map. For us, a bucket has 0 or 1 elements.
+-  local_iterator begin(size_type i)              { return rep.begin(i); }
+-  local_iterator end(size_type i)                { return rep.end(i); }
+-  const_local_iterator begin(size_type i) const  { return rep.begin(i); }
+-  const_local_iterator end(size_type i) const    { return rep.end(i); }
+-
+-  // Accessor functions
+-  allocator_type get_allocator() const           { return rep.get_allocator(); }
+-  hasher hash_funct() const                      { return rep.hash_funct(); }
+-  hasher hash_function() const                   { return hash_funct(); }
+-  key_equal key_eq() const                       { return rep.key_eq(); }
+-
+-
+-  // Constructors
+-  explicit dense_hash_map(size_type expected_max_items_in_table = 0,
+-                          const hasher& hf = hasher(),
+-                          const key_equal& eql = key_equal(),
+-                          const allocator_type& alloc = allocator_type())
+-    : rep(expected_max_items_in_table, hf, eql, SelectKey(), SetKey(), alloc) {
+-  }
+-
+-  template <class InputIterator>
+-  dense_hash_map(InputIterator f, InputIterator l,
+-                 const key_type& empty_key_val,
+-                 size_type expected_max_items_in_table = 0,
+-                 const hasher& hf = hasher(),
+-                 const key_equal& eql = key_equal(),
+-                 const allocator_type& alloc = allocator_type())
+-    : rep(expected_max_items_in_table, hf, eql, SelectKey(), SetKey(), alloc) {
+-    set_empty_key(empty_key_val);
+-    rep.insert(f, l);
+-  }
+-  // We use the default copy constructor
+-  // We use the default operator=()
+-  // We use the default destructor
+-
+-  void clear()                        { rep.clear(); }
+-  // This clears the hash map without resizing it down to the minimum
+-  // bucket count, but rather keeps the number of buckets constant
+-  void clear_no_resize()              { rep.clear_no_resize(); }
+-  void swap(dense_hash_map& hs)       { rep.swap(hs.rep); }
+-
+-
+-  // Functions concerning size
+-  size_type size() const              { return rep.size(); }
+-  size_type max_size() const          { return rep.max_size(); }
+-  bool empty() const                  { return rep.empty(); }
+-  size_type bucket_count() const      { return rep.bucket_count(); }
+-  size_type max_bucket_count() const  { return rep.max_bucket_count(); }
+-
+-  // These are tr1 methods.  bucket() is the bucket the key is or would be in.
+-  size_type bucket_size(size_type i) const    { return rep.bucket_size(i); }
+-  size_type bucket(const key_type& key) const { return rep.bucket(key); }
+-  float load_factor() const {
+-    return size() * 1.0f / bucket_count();
+-  }
+-  float max_load_factor() const {
+-    float shrink, grow;
+-    rep.get_resizing_parameters(&shrink, &grow);
+-    return grow;
+-  }
+-  void max_load_factor(float new_grow) {
+-    float shrink, grow;
+-    rep.get_resizing_parameters(&shrink, &grow);
+-    rep.set_resizing_parameters(shrink, new_grow);
+-  }
+-  // These aren't tr1 methods but perhaps ought to be.
+-  float min_load_factor() const {
+-    float shrink, grow;
+-    rep.get_resizing_parameters(&shrink, &grow);
+-    return shrink;
+-  }
+-  void min_load_factor(float new_shrink) {
+-    float shrink, grow;
+-    rep.get_resizing_parameters(&shrink, &grow);
+-    rep.set_resizing_parameters(new_shrink, grow);
+-  }
+-  // Deprecated; use min_load_factor() or max_load_factor() instead.
+-  void set_resizing_parameters(float shrink, float grow) {
+-    rep.set_resizing_parameters(shrink, grow);
+-  }
+-
+-  void resize(size_type hint)         { rep.resize(hint); }
+-  void rehash(size_type hint)         { resize(hint); }      // the tr1 name
+-
+-  // Lookup routines
+-  iterator find(const key_type& key)                 { return rep.find(key); }
+-  const_iterator find(const key_type& key) const     { return rep.find(key); }
+-
+-  data_type& operator[](const key_type& key) {       // This is our value-add!
+-    // If key is in the hashtable, returns find(key)->second,
+-    // otherwise returns insert(value_type(key, T()).first->second.
+-    // Note it does not create an empty T unless the find fails.
+-    return rep.template find_or_insert<DefaultValue>(key).second;
+-  }
+-
+-  size_type count(const key_type& key) const         { return rep.count(key); }
+-
+-  std::pair<iterator, iterator> equal_range(const key_type& key) {
+-    return rep.equal_range(key);
+-  }
+-  std::pair<const_iterator, const_iterator> equal_range(const key_type& key)
+-      const {
+-    return rep.equal_range(key);
+-  }
+-
+-
+-  // Insertion routines
+-  std::pair<iterator, bool> insert(const value_type& obj) {
+-    return rep.insert(obj);
+-  }
+-  template <class InputIterator> void insert(InputIterator f, InputIterator l) {
+-    rep.insert(f, l);
+-  }
+-  void insert(const_iterator f, const_iterator l) {
+-    rep.insert(f, l);
+-  }
+-  // Required for std::insert_iterator; the passed-in iterator is ignored.
+-  iterator insert(iterator, const value_type& obj) {
+-    return insert(obj).first;
+-  }
+-
+-  // Deletion and empty routines
+-  // THESE ARE NON-STANDARD!  I make you specify an "impossible" key
+-  // value to identify deleted and empty buckets.  You can change the
+-  // deleted key as time goes on, or get rid of it entirely to be insert-only.
+-  void set_empty_key(const key_type& key)   {           // YOU MUST CALL THIS!
+-    rep.set_empty_key(value_type(key, data_type()));    // rep wants a value
+-  }
+-  key_type empty_key() const {
+-    return rep.empty_key().first;                       // rep returns a value
+-  }
+-
+-  void set_deleted_key(const key_type& key)   { rep.set_deleted_key(key); }
+-  void clear_deleted_key()                    { rep.clear_deleted_key(); }
+-  key_type deleted_key() const                { return rep.deleted_key(); }
+-
+-  // These are standard
+-  size_type erase(const key_type& key)               { return rep.erase(key); }
+-  void erase(iterator it)                            { rep.erase(it); }
+-  void erase(iterator f, iterator l)                 { rep.erase(f, l); }
+-
+-
+-  // Comparison
+-  bool operator==(const dense_hash_map& hs) const    { return rep == hs.rep; }
+-  bool operator!=(const dense_hash_map& hs) const    { return rep != hs.rep; }
+-
+-
+-  // I/O -- this is an add-on for writing hash map to disk
+-  //
+-  // For maximum flexibility, this does not assume a particular
+-  // file type (though it will probably be a FILE *).  We just pass
+-  // the fp through to rep.
+-
+-  // If your keys and values are simple enough, you can pass this
+-  // serializer to serialize()/unserialize().  "Simple enough" means
+-  // value_type is a POD type that contains no pointers.  Note,
+-  // however, we don't try to normalize endianness.
+-  typedef typename ht::NopointerSerializer NopointerSerializer;
+-
+-  // serializer: a class providing operator()(OUTPUT*, const value_type&)
+-  //    (writing value_type to OUTPUT).  You can specify a
+-  //    NopointerSerializer object if appropriate (see above).
+-  // fp: either a FILE*, OR an ostream*/subclass_of_ostream*, OR a
+-  //    pointer to a class providing size_t Write(const void*, size_t),
+-  //    which writes a buffer into a stream (which fp presumably
+-  //    owns) and returns the number of bytes successfully written.
+-  //    Note basic_ostream<not_char> is not currently supported.
+-  template <typename ValueSerializer, typename OUTPUT>
+-  bool serialize(ValueSerializer serializer, OUTPUT* fp) {
+-    return rep.serialize(serializer, fp);
+-  }
+-
+-  // serializer: a functor providing operator()(INPUT*, value_type*)
+-  //    (reading from INPUT and into value_type).  You can specify a
+-  //    NopointerSerializer object if appropriate (see above).
+-  // fp: either a FILE*, OR an istream*/subclass_of_istream*, OR a
+-  //    pointer to a class providing size_t Read(void*, size_t),
+-  //    which reads into a buffer from a stream (which fp presumably
+-  //    owns) and returns the number of bytes successfully read.
+-  //    Note basic_istream<not_char> is not currently supported.
+-  // NOTE: Since value_type is std::pair<const Key, T>, ValueSerializer
+-  // may need to do a const cast in order to fill in the key.
+-  template <typename ValueSerializer, typename INPUT>
+-  bool unserialize(ValueSerializer serializer, INPUT* fp) {
+-    return rep.unserialize(serializer, fp);
+-  }
+-};
+-
+-// We need a global swap as well
+-template <class Key, class T, class HashFcn, class EqualKey, class Alloc>
+-inline void swap(dense_hash_map<Key, T, HashFcn, EqualKey, Alloc>& hm1,
+-                 dense_hash_map<Key, T, HashFcn, EqualKey, Alloc>& hm2) {
+-  hm1.swap(hm2);
+-}
+-
+-_END_GOOGLE_NAMESPACE_
+-
+-#endif /* _DENSE_HASH_MAP_H_ */
+diff --git a/util/google/dense_hash_set b/util/google/dense_hash_set
+deleted file mode 100644
+index 1c11a2c..0000000
+--- util/google/dense_hash_set
++++ /dev/null
+@@ -1,338 +0,0 @@
+-// Copyright (c) 2005, Google Inc.
+-// All rights reserved.
+-//
+-// Redistribution and use in source and binary forms, with or without
+-// modification, are permitted provided that the following conditions are
+-// met:
+-//
+-//     * Redistributions of source code must retain the above copyright
+-// notice, this list of conditions and the following disclaimer.
+-//     * Redistributions in binary form must reproduce the above
+-// copyright notice, this list of conditions and the following disclaimer
+-// in the documentation and/or other materials provided with the
+-// distribution.
+-//     * Neither the name of Google Inc. nor the names of its
+-// contributors may be used to endorse or promote products derived from
+-// this software without specific prior written permission.
+-//
+-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+-// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+-// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+-// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+-// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+-// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+-// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+-// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+-// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+-// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+-
+-// ---
+-//
+-// This is just a very thin wrapper over densehashtable.h, just
+-// like sgi stl's stl_hash_set is a very thin wrapper over
+-// stl_hashtable.  The major thing we define is operator[], because
+-// we have a concept of a data_type which stl_hashtable doesn't
+-// (it only has a key and a value).
+-//
+-// This is more different from dense_hash_map than you might think,
+-// because all iterators for sets are const (you obviously can't
+-// change the key, and for sets there is no value).
+-//
+-// NOTE: this is exactly like sparse_hash_set.h, with the word
+-// "sparse" replaced by "dense", except for the addition of
+-// set_empty_key().
+-//
+-//   YOU MUST CALL SET_EMPTY_KEY() IMMEDIATELY AFTER CONSTRUCTION.
+-//
+-// Otherwise your program will die in mysterious ways.  (Note if you
+-// use the constructor that takes an InputIterator range, you pass in
+-// the empty key in the constructor, rather than after.  As a result,
+-// this constructor differs from the standard STL version.)
+-//
+-// In other respects, we adhere mostly to the STL semantics for
+-// hash-map.  One important exception is that insert() may invalidate
+-// iterators entirely -- STL semantics are that insert() may reorder
+-// iterators, but they all still refer to something valid in the
+-// hashtable.  Not so for us.  Likewise, insert() may invalidate
+-// pointers into the hashtable.  (Whether insert invalidates iterators
+-// and pointers depends on whether it results in a hashtable resize).
+-// On the plus side, delete() doesn't invalidate iterators or pointers
+-// at all, or even change the ordering of elements.
+-//
+-// Here are a few "power user" tips:
+-//
+-//    1) set_deleted_key():
+-//         If you want to use erase() you must call set_deleted_key(),
+-//         in addition to set_empty_key(), after construction.
+-//         The deleted and empty keys must differ.
+-//
+-//    2) resize(0):
+-//         When an item is deleted, its memory isn't freed right
+-//         away.  This allows you to iterate over a hashtable,
+-//         and call erase(), without invalidating the iterator.
+-//         To force the memory to be freed, call resize(0).
+-//         For tr1 compatibility, this can also be called as rehash(0).
+-//
+-//    3) min_load_factor(0.0)
+-//         Setting the minimum load factor to 0.0 guarantees that
+-//         the hash table will never shrink.
+-//
+-// Roughly speaking:
+-//   (1) dense_hash_set: fastest, uses the most memory unless entries are small
+-//   (2) sparse_hash_set: slowest, uses the least memory
+-//   (3) hash_set / unordered_set (STL): in the middle
+-//
+-// Typically I use sparse_hash_set when I care about space and/or when
+-// I need to save the hashtable on disk.  I use hash_set otherwise.  I
+-// don't personally use dense_hash_set ever; some people use it for
+-// small sets with lots of lookups.
+-//
+-// - dense_hash_set has, typically, about 78% memory overhead (if your
+-//   data takes up X bytes, the hash_set uses .78X more bytes in overhead).
+-// - sparse_hash_set has about 4 bits overhead per entry.
+-// - sparse_hash_set can be 3-7 times slower than the others for lookup and,
+-//   especially, inserts.  See time_hash_map.cc for details.
+-//
+-// See /usr/(local/)?doc/sparsehash-*/dense_hash_set.html
+-// for information about how to use this class.
+-
+-#ifndef _DENSE_HASH_SET_H_
+-#define _DENSE_HASH_SET_H_
+-
+-#include <google/sparsehash/sparseconfig.h>
+-#include <algorithm>                        // needed by stl_alloc
+-#include <functional>                       // for equal_to<>, select1st<>, etc
+-#include <memory>                           // for alloc
+-#include <utility>                          // for pair<>
+-#include <google/sparsehash/densehashtable.h>        // IWYU pragma: export
+-#include <google/sparsehash/libc_allocator_with_realloc.h>
+-#include HASH_FUN_H                 // for hash<>
+-_START_GOOGLE_NAMESPACE_
+-
+-template <class Value,
+-          class HashFcn = SPARSEHASH_HASH<Value>,   // defined in sparseconfig.h
+-          class EqualKey = std::equal_to<Value>,
+-          class Alloc = libc_allocator_with_realloc<Value> >
+-class dense_hash_set {
+- private:
+-  // Apparently identity is not stl-standard, so we define our own
+-  struct Identity {
+-    typedef const Value& result_type;
+-    const Value& operator()(const Value& v) const { return v; }
+-  };
+-  struct SetKey {
+-    void operator()(Value* value, const Value& new_key) const {
+-      *value = new_key;
+-    }
+-  };
+-
+-  // The actual data
+-  typedef dense_hashtable<Value, Value, HashFcn, Identity, SetKey,
+-                          EqualKey, Alloc> ht;
+-  ht rep;
+-
+- public:
+-  typedef typename ht::key_type key_type;
+-  typedef typename ht::value_type value_type;
+-  typedef typename ht::hasher hasher;
+-  typedef typename ht::key_equal key_equal;
+-  typedef Alloc allocator_type;
+-
+-  typedef typename ht::size_type size_type;
+-  typedef typename ht::difference_type difference_type;
+-  typedef typename ht::const_pointer pointer;
+-  typedef typename ht::const_pointer const_pointer;
+-  typedef typename ht::const_reference reference;
+-  typedef typename ht::const_reference const_reference;
+-
+-  typedef typename ht::const_iterator iterator;
+-  typedef typename ht::const_iterator const_iterator;
+-  typedef typename ht::const_local_iterator local_iterator;
+-  typedef typename ht::const_local_iterator const_local_iterator;
+-
+-
+-  // Iterator functions -- recall all iterators are const
+-  iterator begin() const                  { return rep.begin(); }
+-  iterator end() const                    { return rep.end(); }
+-
+-  // These come from tr1's unordered_set. For us, a bucket has 0 or 1 elements.
+-  local_iterator begin(size_type i) const { return rep.begin(i); }
+-  local_iterator end(size_type i) const   { return rep.end(i); }
+-
+-
+-  // Accessor functions
+-  allocator_type get_allocator() const    { return rep.get_allocator(); }
+-  hasher hash_funct() const               { return rep.hash_funct(); }
+-  hasher hash_function() const            { return hash_funct(); }  // tr1 name
+-  key_equal key_eq() const                { return rep.key_eq(); }
+-
+-
+-  // Constructors
+-  explicit dense_hash_set(size_type expected_max_items_in_table = 0,
+-                          const hasher& hf = hasher(),
+-                          const key_equal& eql = key_equal(),
+-                          const allocator_type& alloc = allocator_type())
+-      : rep(expected_max_items_in_table, hf, eql, Identity(), SetKey(), alloc) {
+-  }
+-
+-  template <class InputIterator>
+-  dense_hash_set(InputIterator f, InputIterator l,
+-                 const key_type& empty_key_val,
+-                 size_type expected_max_items_in_table = 0,
+-                 const hasher& hf = hasher(),
+-                 const key_equal& eql = key_equal(),
+-                 const allocator_type& alloc = allocator_type())
+-      : rep(expected_max_items_in_table, hf, eql, Identity(), SetKey(), alloc) {
+-    set_empty_key(empty_key_val);
+-    rep.insert(f, l);
+-  }
+-  // We use the default copy constructor
+-  // We use the default operator=()
+-  // We use the default destructor
+-
+-  void clear()                        { rep.clear(); }
+-  // This clears the hash set without resizing it down to the minimum
+-  // bucket count, but rather keeps the number of buckets constant
+-  void clear_no_resize()              { rep.clear_no_resize(); }
+-  void swap(dense_hash_set& hs)       { rep.swap(hs.rep); }
+-
+-
+-  // Functions concerning size
+-  size_type size() const              { return rep.size(); }
+-  size_type max_size() const          { return rep.max_size(); }
+-  bool empty() const                  { return rep.empty(); }
+-  size_type bucket_count() const      { return rep.bucket_count(); }
+-  size_type max_bucket_count() const  { return rep.max_bucket_count(); }
+-
+-  // These are tr1 methods.  bucket() is the bucket the key is or would be in.
+-  size_type bucket_size(size_type i) const    { return rep.bucket_size(i); }
+-  size_type bucket(const key_type& key) const { return rep.bucket(key); }
+-  float load_factor() const {
+-    return size() * 1.0f / bucket_count();
+-  }
+-  float max_load_factor() const {
+-    float shrink, grow;
+-    rep.get_resizing_parameters(&shrink, &grow);
+-    return grow;
+-  }
+-  void max_load_factor(float new_grow) {
+-    float shrink, grow;
+-    rep.get_resizing_parameters(&shrink, &grow);
+-    rep.set_resizing_parameters(shrink, new_grow);
+-  }
+-  // These aren't tr1 methods but perhaps ought to be.
+-  float min_load_factor() const {
+-    float shrink, grow;
+-    rep.get_resizing_parameters(&shrink, &grow);
+-    return shrink;
+-  }
+-  void min_load_factor(float new_shrink) {
+-    float shrink, grow;
+-    rep.get_resizing_parameters(&shrink, &grow);
+-    rep.set_resizing_parameters(new_shrink, grow);
+-  }
+-  // Deprecated; use min_load_factor() or max_load_factor() instead.
+-  void set_resizing_parameters(float shrink, float grow) {
+-    rep.set_resizing_parameters(shrink, grow);
+-  }
+-
+-  void resize(size_type hint)         { rep.resize(hint); }
+-  void rehash(size_type hint)         { resize(hint); }     // the tr1 name
+-
+-  // Lookup routines
+-  iterator find(const key_type& key) const           { return rep.find(key); }
+-
+-  size_type count(const key_type& key) const         { return rep.count(key); }
+-
+-  std::pair<iterator, iterator> equal_range(const key_type& key) const {
+-    return rep.equal_range(key);
+-  }
+-
+-
+-  // Insertion routines
+-  std::pair<iterator, bool> insert(const value_type& obj) {
+-    std::pair<typename ht::iterator, bool> p = rep.insert(obj);
+-    return std::pair<iterator, bool>(p.first, p.second);   // const to non-const
+-  }
+-  template <class InputIterator> void insert(InputIterator f, InputIterator l) {
+-    rep.insert(f, l);
+-  }
+-  void insert(const_iterator f, const_iterator l) {
+-    rep.insert(f, l);
+-  }
+-  // Required for std::insert_iterator; the passed-in iterator is ignored.
+-  iterator insert(iterator, const value_type& obj)   {
+-    return insert(obj).first;
+-  }
+-
+-  // Deletion and empty routines
+-  // THESE ARE NON-STANDARD!  I make you specify an "impossible" key
+-  // value to identify deleted and empty buckets.  You can change the
+-  // deleted key as time goes on, or get rid of it entirely to be insert-only.
+-  void set_empty_key(const key_type& key)     { rep.set_empty_key(key); }
+-  key_type empty_key() const                  { return rep.empty_key(); }
+-
+-  void set_deleted_key(const key_type& key)   { rep.set_deleted_key(key); }
+-  void clear_deleted_key()                    { rep.clear_deleted_key(); }
+-  key_type deleted_key() const                { return rep.deleted_key(); }
+-
+-  // These are standard
+-  size_type erase(const key_type& key)               { return rep.erase(key); }
+-  void erase(iterator it)                            { rep.erase(it); }
+-  void erase(iterator f, iterator l)                 { rep.erase(f, l); }
+-
+-
+-  // Comparison
+-  bool operator==(const dense_hash_set& hs) const    { return rep == hs.rep; }
+-  bool operator!=(const dense_hash_set& hs) const    { return rep != hs.rep; }
+-
+-
+-  // I/O -- this is an add-on for writing metainformation to disk
+-  //
+-  // For maximum flexibility, this does not assume a particular
+-  // file type (though it will probably be a FILE *).  We just pass
+-  // the fp through to rep.
+-
+-  // If your keys and values are simple enough, you can pass this
+-  // serializer to serialize()/unserialize().  "Simple enough" means
+-  // value_type is a POD type that contains no pointers.  Note,
+-  // however, we don't try to normalize endianness.
+-  typedef typename ht::NopointerSerializer NopointerSerializer;
+-
+-  // serializer: a class providing operator()(OUTPUT*, const value_type&)
+-  //    (writing value_type to OUTPUT).  You can specify a
+-  //    NopointerSerializer object if appropriate (see above).
+-  // fp: either a FILE*, OR an ostream*/subclass_of_ostream*, OR a
+-  //    pointer to a class providing size_t Write(const void*, size_t),
+-  //    which writes a buffer into a stream (which fp presumably
+-  //    owns) and returns the number of bytes successfully written.
+-  //    Note basic_ostream<not_char> is not currently supported.
+-  template <typename ValueSerializer, typename OUTPUT>
+-  bool serialize(ValueSerializer serializer, OUTPUT* fp) {
+-    return rep.serialize(serializer, fp);
+-  }
+-
+-  // serializer: a functor providing operator()(INPUT*, value_type*)
+-  //    (reading from INPUT and into value_type).  You can specify a
+-  //    NopointerSerializer object if appropriate (see above).
+-  // fp: either a FILE*, OR an istream*/subclass_of_istream*, OR a
+-  //    pointer to a class providing size_t Read(void*, size_t),
+-  //    which reads into a buffer from a stream (which fp presumably
+-  //    owns) and returns the number of bytes successfully read.
+-  //    Note basic_istream<not_char> is not currently supported.
+-  template <typename ValueSerializer, typename INPUT>
+-  bool unserialize(ValueSerializer serializer, INPUT* fp) {
+-    return rep.unserialize(serializer, fp);
+-  }
+-};
+-
+-template <class Val, class HashFcn, class EqualKey, class Alloc>
+-inline void swap(dense_hash_set<Val, HashFcn, EqualKey, Alloc>& hs1,
+-                 dense_hash_set<Val, HashFcn, EqualKey, Alloc>& hs2) {
+-  hs1.swap(hs2);
+-}
+-
+-_END_GOOGLE_NAMESPACE_
+-
+-#endif /* _DENSE_HASH_SET_H_ */
+diff --git a/util/google/sparse_hash_map b/util/google/sparse_hash_map
+deleted file mode 100644
+index adda509..0000000
+--- util/google/sparse_hash_map
++++ /dev/null
+@@ -1,363 +0,0 @@
+-// Copyright (c) 2005, Google Inc.
+-// All rights reserved.
+-//
+-// Redistribution and use in source and binary forms, with or without
+-// modification, are permitted provided that the following conditions are
+-// met:
+-//
+-//     * Redistributions of source code must retain the above copyright
+-// notice, this list of conditions and the following disclaimer.
+-//     * Redistributions in binary form must reproduce the above
+-// copyright notice, this list of conditions and the following disclaimer
+-// in the documentation and/or other materials provided with the
+-// distribution.
+-//     * Neither the name of Google Inc. nor the names of its
+-// contributors may be used to endorse or promote products derived from
+-// this software without specific prior written permission.
+-//
+-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+-// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+-// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+-// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+-// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+-// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+-// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+-// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+-// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+-// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+-
+-// ---
+-//
+-// This is just a very thin wrapper over sparsehashtable.h, just
+-// like sgi stl's stl_hash_map is a very thin wrapper over
+-// stl_hashtable.  The major thing we define is operator[], because
+-// we have a concept of a data_type which stl_hashtable doesn't
+-// (it only has a key and a value).
+-//
+-// We adhere mostly to the STL semantics for hash-map.  One important
+-// exception is that insert() may invalidate iterators entirely -- STL
+-// semantics are that insert() may reorder iterators, but they all
+-// still refer to something valid in the hashtable.  Not so for us.
+-// Likewise, insert() may invalidate pointers into the hashtable.
+-// (Whether insert invalidates iterators and pointers depends on
+-// whether it results in a hashtable resize).  On the plus side,
+-// delete() doesn't invalidate iterators or pointers at all, or even
+-// change the ordering of elements.
+-//
+-// Here are a few "power user" tips:
+-//
+-//    1) set_deleted_key():
+-//         Unlike STL's hash_map, if you want to use erase() you
+-//         *must* call set_deleted_key() after construction.
+-//
+-//    2) resize(0):
+-//         When an item is deleted, its memory isn't freed right
+-//         away.  This is what allows you to iterate over a hashtable
+-//         and call erase() without invalidating the iterator.
+-//         To force the memory to be freed, call resize(0).
+-//         For tr1 compatibility, this can also be called as rehash(0).
+-//
+-//    3) min_load_factor(0.0)
+-//         Setting the minimum load factor to 0.0 guarantees that
+-//         the hash table will never shrink.
+-//
+-// Roughly speaking:
+-//   (1) dense_hash_map: fastest, uses the most memory unless entries are small
+-//   (2) sparse_hash_map: slowest, uses the least memory
+-//   (3) hash_map / unordered_map (STL): in the middle
+-//

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***