Date: Tue, 7 Mar 2006 08:53:32 -0500
From: Bart Silverstrim <bsilver@chrononomicon.com>
To: "Noel Jones" <noeldude@gmail.com>
Cc: freebsd-questions Questions list <freebsd-questions@freebsd.org>
Subject: Re: awk question
Message-ID: <551fa2ce1b8832dd3370d0e781c5b301@chrononomicon.com>
In-Reply-To: <cce506b0603061345n4de96301sd9b8a8dd17deeac1@mail.gmail.com>
References: <75a11e816bee8f2664ae1ccbd618dca7@athensasd.org> <cce506b0603061345n4de96301sd9b8a8dd17deeac1@mail.gmail.com>
On Mar 6, 2006, at 4:45 PM, Noel Jones wrote:

> On 3/6/06, Bart Silverstrim <bsilverstrim@athensasd.org> wrote:
>> I'm totally drawing a blank on where to start out on this.
>>
>> If I have a list of URLs like
>> http://www.happymountain.com/archive/digest.gif
>>
>> How could I use Awk or Sed to strip everything after the .com? Or is
>> there a "better" way to do it? I'd like to just pipe the information
>> from the logs to this mini-script and end up with a list of URLs
>> consisting of just the domain (http://www.happymountain.com).
>>
>
> | cut -d / -f 1-3

Oh boy, was that one easy. It was a BAD mental hiccup. I'll add a sort
and uniq and it should be all ready to go.

Thanks!
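For anyone finding this in the archive, here's a rough sketch of the full pipeline (the sample URLs below are made up, just to show what it does). Splitting on "/" makes field 1 "http:", field 2 the empty string between the slashes, and field 3 the host, so keeping fields 1-3 rejoins them as the bare domain URL:

```shell
# Feed some sample log URLs through the pipeline:
# cut keeps "http:" + "" + host, rejoined with "/"; sort | uniq dedupes.
printf '%s\n' \
  'http://www.happymountain.com/archive/digest.gif' \
  'http://www.happymountain.com/images/banner.jpg' \
  'http://www.example.org/a/b/c.html' \
  | cut -d / -f 1-3 | sort | uniq
# prints:
# http://www.example.org
# http://www.happymountain.com
```

In a real setup you'd replace the printf with whatever extracts the URL column from your logs (e.g. an awk '{print $7}' on an access log, depending on the format). `sort -u` would also do the job of `sort | uniq` in one step.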