Date:      Fri, 14 Aug 2020 15:46:31 -0400
From:      Aryeh Friedman <aryeh.friedman@gmail.com>
To:        Tim Daneliuk <tundra@tundraware.com>
Cc:        FreeBSD Mailing List <freebsd-questions@freebsd.org>
Subject:   Re: OT: Dealing with a hosting company with it's head up it's rear end
Message-ID:  <CAGBxaXndmZ8qrhjQ%2B54YALsTQjLBSe6DivK5wXvMpJot7z1JcA@mail.gmail.com>
In-Reply-To: <97fd6d35-ef35-8583-5ef2-3ea761c36c12@tundraware.com>
References:  <CAGBxaXmg0DGSEYtWBZcbmQbqc2vZFtpHrmW68txBck0nKJak=w@mail.gmail.com> <CAGBxaX=XbbFLyZm5-BO=6jCCrU%2BV%2BjubxAkTMYKnZZZq=XK50A@mail.gmail.com> <CALeGphwfr7j-xgSwMdiXeVxUPOP-Wb8WFs95tT_%2Ba8jig_Skxw@mail.gmail.com> <CAGBxaX=CXbZq-k6=udNaXTj2m%2BgnpDCB%2Bui4wgvtrzyHhjGeSw@mail.gmail.com> <40xvq0.qf0q3x.1hge1ap-qmf@smtp.boon.family> <CAGBxaX=9asO=X32RucVyNz5kppPhbZc9Ayx-pyiXMBi85BeJ6w@mail.gmail.com> <20200814004312.bb0dd9f1.freebsd@edvax.de> <20200814065701.2b390145ac6d189161bc31b4@sohara.org> <173ed205550.27bc.0b331fcf0b21179f1640bd439e3f4a1e@tundraware.com> <CAGBxaX=gs57EXsm028%2B6Var89MUoGh-7d1gfPdGmbm5gPBnufA@mail.gmail.com> <4d320acd-a995-7a35-5c0e-c2c22e7e6f96@radel.com> <CAGBxaXnjDAnZPjx_nksb_ed-f%2BX=PowLTUYMX706oMScd8HDaw@mail.gmail.com> <df55f102-228f-021d-62ba-b26520e78740@radel.com> <CAGBxaXkYpjUGwFwR-WZo9Ud0b_ZwmP7QVY74QH3vyt0Z12NmXQ@mail.gmail.com> <97fd6d35-ef35-8583-5ef2-3ea761c36c12@tundraware.com>

On Fri, Aug 14, 2020 at 2:48 PM Tim Daneliuk <tundra@tundraware.com> wrote:

> On 8/14/20 12:49 PM, Aryeh Friedman wrote:
> > If the controls can be circumvented they are essentially useless and
> > shouldn't be in place in the first place.   Besides anyone who knows what
> > RDP or SSH is would also know how to circumvent controls designed for
> > non-technical people so that makes the blocking of them even more short
> > sighted.   This is what I meant by security by obfuscation (i.e. hiding
> > obvious truths that everyone with any knowledge knows).
>
> I am not taking a position on whether or not blocking ssh is always good,
> bad, or irrelevant.  However, I pretty fundamentally disagree with the
> position above as written. It is absolutely possible to dramatically
> reduce the technical attack surface by limiting what ports can be
> accessed on a given machine.
>

The question was not about blocking incoming ports; it was about
blocking outgoing ones.


> For example, suppose I have some batch process  that ingests data and
> produces some sort of results.  Assume that I only permit the inbound
> data and outbound results to be made available over a single mechanism -
> let's use an MQ system if you like. No other ports of any kind are open
> beyond the TCP/IP interface to the MQ system.
>

The issue was not the very idea of limiting ports in general (which I
agree can be useful up to a point), but the fact that the hosting
company's *NEW* policy is to limit ports to what *THEY* think you need,
not what you actually need, and then to refuse to open the ones you
actually need.

Also, IMO, the only legitimate reason to block outbound ports is to stop
malware/spyware from silently phoning home.  I *DO NOT* agree with or
support the idea that humans should be blocked from doing anything
(anybody who really wants to get something out will find some way, even
if it is only what is between their ears).
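
For what it's worth, a quick way to see which outbound ports the provider
is actually filtering is a plain connect test; something like the rough
Python sketch below (the host names and port list are placeholders for
whatever you actually need to reach):

#!/usr/bin/env python3
"""Rough sketch: probe which outbound TCP ports the provider lets through."""
import socket

# Placeholder targets; substitute the endpoints you actually need.
PROBES = [
    ("example.org", 22),    # SSH
    ("example.org", 3389),  # RDP
    ("example.org", 443),   # HTTPS, usually open, useful as a control
]

def can_connect(host, port, timeout=5):
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in PROBES:
        state = "open" if can_connect(host, port) else "blocked/filtered"
        print(f"{host}:{port} -> {state}")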

BTW, message queues rest on a fundamentally flawed assumption in many
application domains, including the one I am dealing with.  They make it
impossible for third-party applications to be developed that interface
directly with the DB, a problem you cannot avoid when your magic message
queue is closed source and only works with one fixed configuration (which
is the case in many such areas).  They give a false sense of having
solved the concurrency issues when no such solution is actually in place
(the only real solution is true record locking).  And they give the
developers of any such system the false impression that they do not need
to worry about concurrency at all.

That last point is the *ROOT CAUSE* of why all the hosting issues came up
in the first place: the other vendor I mentioned only in passing built
exactly such a system, and due to high turnover no one in their org has
any idea what concurrency issues, if any, exist in their app.  That
forces us to be paranoid about backups, which is what turned the hosting
provider's flaws into obvious, major hurdles.
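
To be concrete about what I mean by true record locking: the usual
pattern is to lock the exact rows you are about to change inside a
transaction, so that two writers cannot trample each other.  A rough
sketch, assuming PostgreSQL and the psycopg2 driver (the table, columns,
and connection string are made up for illustration):

"""Rough sketch: record-level locking instead of trusting a queue to
serialize writes.  Assumes PostgreSQL + psycopg2; 'accounts' is made up."""
import psycopg2

def debit(conn, account_id, amount):
    """Debit an account while holding a row lock so concurrent writers serialize."""
    with conn:                       # commit on success, roll back on error
        with conn.cursor() as cur:
            # FOR UPDATE locks this row until the transaction ends; a second
            # session running the same statement blocks here instead of
            # reading a stale balance.
            cur.execute(
                "SELECT balance FROM accounts WHERE id = %s FOR UPDATE",
                (account_id,),
            )
            (balance,) = cur.fetchone()      # assumes the row exists
            if balance < amount:
                raise ValueError("insufficient funds")
            cur.execute(
                "UPDATE accounts SET balance = balance - %s WHERE id = %s",
                (amount, account_id),
            )

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=example user=example")  # placeholder DSN
    debit(conn, account_id=1, amount=100)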

Every other message-queue-based system I have seen, OpenStack included,
is a disaster waiting to happen (OpenStack even admits as much when they
say the worst possible disaster for a cloud is a power failure?!).




> Let's further suppose that access to the MQ system, in- or outbound,
> is narrowly limited in time with dynamic firewalling/network rules.
> And let's harden this even more by making those inbound- and outbound
> payloads encrypted using one-time pad asymmetric keys.
>

That is essentially the system the law requires of us, and I can tell
you from first-hand experience that it is nowhere near secure; anyone who
says it is has never actually tried to use such a system.  The exception
is the one-time pad, since no such thing exists in practice (and the
hosting company's idiotic idea of using TOTP certainly does not count as
one).
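
To spell out why TOTP is nothing like a one-time pad: every code is a
pure function of a long-lived shared secret that both ends have to store,
so anyone who steals the secret once can mint valid codes forever.  A
rough sketch of the RFC 6238 derivation using only the Python standard
library (the secret below is obviously a placeholder):

"""Rough sketch of RFC 6238 TOTP.  Every 6-digit code is derived from a
*stored* shared secret plus the clock, which is nothing like a one-time pad."""
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6):
    """Return the current TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Placeholder secret; in a real deployment it sits in the provider's
    # database and on the user's phone, which is exactly the problem.
    print(totp("JBSWY3DPEHPK3PXP"))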


> Can that system NEVER be compromised?  Of course it can,  but the
> compromise has to happen either at the physical server (or, by proxy,
> the hosting entity's console interface... OR it has to happens somewhere
> *outside* the server itself.
>
> Think about what an attack on this system would entail:
>
> - Hacking access into the private network where all this runs.
>

Which, in a datacenter that has public components, is so much easier than
you think.


> - Figuring out how to compromise access to the MQ system at the moments
>   in time it was handling traffic to/from the server AND showing up
>   as a legitimate subscriber to those topics.
>

Completely trivial on most message queues.  The mere fact that the queue
holds the message at all makes it vulnerable.


> - Figuring out how to crack into an one-time pad encoded payload -
>   something known to be computationally impossible in reasonable time
>   for a sufficiently good key - at least until quantum cell phones are
>   available.
>

A system with many moving parts is always less secure than one with
fewer, better-designed parts.  This solution has far too many moving
parts, and frankly that is the main source of the idiocy of the hosting
provider this thread is asking about (see the other replies in the
thread, beyond mine, for why).
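
On the one-time pad point in the quote above: a literal OTP is indeed
unbreakable and trivial to implement, which is exactly why the hard part
is everything around it: the key has to be truly random, as long as the
payload, used exactly once, and moved out of band.  A rough Python sketch
just to make that concrete:

"""Rough sketch of a literal one-time pad.  The cipher is trivial; the
impossible part in practice is generating, moving, and destroying a truly
random key as long as every payload, exactly once."""
import secrets

def otp_encrypt(plaintext):
    key = secrets.token_bytes(len(plaintext))   # key as long as the message
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key, ciphertext):
    return bytes(c ^ k for c, k in zip(ciphertext, key))

if __name__ == "__main__":
    key, ct = otp_encrypt(b"the quarterly report")
    assert otp_decrypt(key, ct) == b"the quarterly report"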



>
> Is the risk zero?  No.  And certainly the same set of concerns have to
> be extended to the surrounding infrastructure (network, MQ series, key
> management and distribution system ...)  But the system as described
> above, and built with proper rigor and skill, is really, really, REALLY
> hard to break into, in large part because the only place where the
> plain data lives is in a server that has only very brief connection
> with anything and then only over a very narrow mechanism.
>

The system above increases (not reduces) your attack surface exponentially.


> My point is that the "principle of least privilege" is very much a
> proper construct for designing security hardened systems. So not
> allowing ssh on a system with a web server isn't security by obscurity.
> It's just limiting the attack surface ... a very reasonable decision
> for some applications.
>

Yes, the principle is sound, but the application you are making of it is
not, nor is any attempt to externally limit what can and cannot be done
(passive firewalls excepted).


>
> In general, security has to be seen as a risk management activity, not
> a technical one.  The amount of security focus on, say, the nuclear
> launch codes, had jolly well be exponential greater than protecting the
> grocery list on your cell phone.  But *if* you need great protection,
> reduction of access is entirely legit.
>

Security is first and foremost a technical issue, and it is a huge
mistake to say otherwise.  If you cannot afford the right security, or
the right security makes the system unusable and you need to loosen it
for that reason, only then does it become non-technical, in that you must
decide where to compromise.


> The truth is that the single greatest weakness in the design above has
> nothing to do with the technology at all.  It has to do with the
> recipient of the
>

The technical aspects of it *ARE* its single biggest weakness, because
those aspects are fundamentally flawed, starting with the mindset behind
them (i.e. "I know better than the mere mortals who actually have to use
it, because they are all idiots").  That mindset makes it impossible to
secure things with the only thing in the data universe that is 100%
secure: what is between my ears.  [It is impossible to force someone who
would rather die than give out their password to ever give it up, but
once you write it down you have lost that last line of defense.]  This
assumes, of course, that the person has had proper training in not
falling for social engineering (which no truly paranoid person would fall
for anyway).



> report generated by our mythical server.  If that recipient is a
> person, the risk is that they will "leak" the report outside the
> organization in a stupid or malevolent manner. THAT is what Data Loss
> Prevention systems are supposedly
>

If you don't trust someone to do things right in the first place, *DON'T*
hire them.  Once you have hired someone you don't trust, no amount of
safeguards will prevent data loss (if nothing else, there is always
what's between their ears).


> addressing (often poorly in my experience).  Most companies try to
> materially reduce this particular threat by turning off USB access on
> laptops, eliminating any form of remote access outside their own
> networks, dividing their networks into separate, hardened subnets,
> doing deep scans and audits on email traffic, and so forth.  And yet,
> even when done with almost infinite money and endless security
> paranoia, this remains one of the most intractable problems in
> information security. Two words: Edward Snowden
>

Like I said: if you don't trust someone, don't hire them.  And if your
management can't be trusted not to piss off its employees so badly that
they might turn against your org, then that is an organizational problem,
not a security problem.


-- 
Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org


