c0d3 :: j0rg3

A collection of tips, tricks and snips. A proud Blosxom weblog. All code. No cruft.

Mon, 02 Jan 2017

Securing a new server

Happy new year! New year means new servers, right?

That provides its own set of interesting circumstances!

The server we’re investigating in this scenario was chosen for being a dedicated box in a country that has quite tight privacy laws. And it was a great deal offered on LEB.

So herein is the fascinating bit. The rig took a few days for the provider to set up and, upon completion, the password for SSHing into the root account was emailed out. (o_0)

From a security standpoint, that means there was a window of opportunity for bad guys to start guessing the password before its owner even tuned in, and that window stays open until the server is better secured. Luckily, there was a nice interface for reinstalling the OS that lets the purchaser choose a password.

My preferred approach was to script the basic lock-down so that we can reinstall the base OS and immediately start closing gaps.


In order:

  • Set up SSH keys (scripted)
  • Disable password usage for root (scripted)
  • Install and configure ipset (scripted; details in next post)
  • Install and configure fail2ban
  • Install and configure PortSentry

In this post, we’re focused on the first two steps.


    The tasks to be handled are:

  • Generate keys
  • Configure local SSH to use key
  • Transmit key to target server
  • Disable usage of password for ‘root’ account

    We’ll use ssh-keygen to generate a key — and stick with RSA for ease. If you’d prefer ECC then you’re probably reading the wrong blog but feel encouraged to contact me privately.

    The code:

    #!/bin/bash
    #configure variables
    remote_host="myserver.com"
    remote_user="j0rg3"
    remote_pass="thisisaratheraquitecomplicatedpasswordbatterystaple" # https://xkcd.com/936/
    local_user=`whoami`
    local_host=`hostname`
    local_date=`date -I`
    local_filename=~/.ssh/id_rsa@$remote_host

    #generate an RSA key with no passphrase
    ssh-keygen -t rsa -b 4096 -P "" -C "$local_user@$local_host-$local_date" -f "$local_filename"

    #add a reference to the generated key to the local SSH configuration
    printf '%s\n' "Host $remote_host" "IdentityFile $local_filename" >> ~/.ssh/config

    #copy the new key to the remote host
    sshpass -p "$remote_pass" ssh-copy-id -i "$local_filename" "$remote_user@$remote_host"

    #disable root login on the remote host and reload sshd so it takes effect
    ssh "$remote_user@$remote_host" "cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak && sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config && service ssh reload"

    We just run this script as soon as the OS is reinstalled and we’re substantially safer. Since it’s a Deb8 install, quickly pulling down fail2ban and PortSentry tightens things up quite a lot more.
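    A quick sanity check, using the myserver.com placeholder from the script above: force a password-only login attempt as root and make sure it is refused, even when given the correct password.

    # this should now fail with 'Permission denied', even with the right password
    ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password root@myserver.com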

    In another post, we’ll visit the 2017 version of making a DIY script to batten the hatches using a variety of publicly provided blocklists.

    Download here:
        ssh_quick_fix.sh


    Tags: , , , ,
    Permalink: 20170102.securing.a.new.server

    Sun, 13 Jul 2014

    Simple Protection with iptables, ipset and Blacklists

    Seems I’ve always just a few more things going on than I can comfortably handle. One of those is an innocent little server holding the beginnings of a new project.

    If you expose a server to the Internet, your ports are very quickly getting scanned and tested. If you run an SSH server, there are going to be attempts to log in as ‘root’, which is why it is ubiquitously advised that you disable root login, and why many advise against allowing passwords at all.

    We could talk for days about improvements; it’s usually not difficult to introduce some form of two-factor authentication (2FA) for sensitive points of entry such as SSH. You can install monitoring software like Logwatch, which can summarize important points from your logs, such as who has logged in via SSH, how many times root was used, etc.

    DenyHosts and Fail2ban are both great ways to secure things, according to your needs.

    DenyHosts works primarily with SSH and asks very little from you in the way of configuration, especially if you’re using a package manager to install a version that is configured for the distribution on which you’re working. If you’re installing from source you may need to tell it where your SSH logs are (e.g., /var/log/secure, /var/log/auth.log). It’s also extremely easy to set up DenyHosts to synchronize, so that you’re automatically blocking widely-known offenders whether or not they’re after your server.

    In contrast, Fail2ban is going to take more work to get set up. However, it is extremely configurable and works with any log file you point it toward, which means it can watch just about anything (e.g., FTP, web traffic, mail traffic). You define your own jails, which means you can ban problematic IP addresses according to preference: ban bad HTTP attempts from HTTP only, or stick their noses in the virtual corner and refuse all of their traffic until they’ve served their time-out. You can even have Fail2ban scan its own logs, so repeat offenders get locked out for longer.

    Today we’re going to assume that you’ve a new server that shouldn’t be seeing any traffic except from you and any others involved in the project. In that case, you probably want to block traffic pretty aggressively. If you’ve physical access to the server (or the ability to work with staff at the datacenter) then it’s better to err in the direction of accidentally blocking good guys than trying to be overly fault-tolerant.

    The server we’re working on today is a Debian Wheezy system. It has become a common misconception that Ubuntu and Debian are, for all intents and purposes, interchangeable. They’re similar in many respects, and Ubuntu is great preparation for using Debian, but they are not the same. I don’t think the differences will matter for this exercise, but bear in mind that it was written on Wheezy.

    Several minutes after bringing my new server online, I started seeing noise in the logs. I was still getting set up and really didn’t want to stop and take protective measures but there’s no point in securing a server after it’s been compromised. The default Fail2ban configuration was too forgiving for my use: it was scanning for 10 minutes and banning for 10 minutes. Since only a few people should be accessing this server, there’s no reason for anyone to be trying a different password every 15 minutes (for hours).
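    For what it’s worth, tightening those defaults is just a couple of lines in /etc/fail2ban/jail.local. The numbers below are purely illustrative, and the [ssh] jail name is the one shipped with the Wheezy-era fail2ban packages:

    # /etc/fail2ban/jail.local -- illustrative values only
    [DEFAULT]
    # consider the last hour of log entries
    findtime = 3600
    # ban offenders for a week
    bantime = 604800
    maxretry = 3

    [ssh]
    enabled = true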

    I found a ‘close-enough’ script and modified it. Here, we’ll deal with a simplified version.

    First, let’s create a name for these ne’er-do-wells in iptables:
    iptables -N bad_traffic
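    A user-defined chain does nothing on its own; INPUT has to jump to it, so we also need something like the rule below. Traffic that doesn’t match any DROP rule in bad_traffic simply returns to INPUT and carries on.
    iptables -A INPUT -j bad_traffic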

    For this one, we’ll use Perl. We’ll look at our Apache log files to find people sniffing ‘round and we’ll block their traffic. Specifically, we’re going to check Apache’s ‘error.log’ for the phrases ‘File does not exist’ and ‘client denied by server configuration’ and block people causing those errors. This would be excessive for servers intended to serve the general populace. For a personal project, it works just fine as a ‘DO NOT DISTURB’ sign.


    #!/usr/bin/env perl
    use strict;
    use POSIX qw(strftime);

    my $log   = ($ARGV[0] ? $ARGV[0] : "/var/log/apache2/error.log");
    my $chain = ($ARGV[1] ? $ARGV[1] : "bad_traffic");

    # IPs generating the suspicious errors in Apache's error log
    my @bad  = `grep -iE 'File does not exist|client denied by server configuration' $log | cut -f8 -d" " | sed 's/]//' | sort -u`;
    # IPs already DROPped in our chain (note the escaped \$4 so awk, rather than Perl, interprets it)
    my @ablk = `/sbin/iptables -S $chain | grep DROP | awk '{print \$4}' | cut -d"/" -f1`;

    foreach my $ip (@bad) {
        if (!grep $_ eq $ip, @ablk) {
            chomp $ip;
            `/sbin/iptables -A $chain -s $ip -j DROP`;
            print strftime("%b %d %T", localtime(time)) . " badht: blocked bad HTTP traffic from: $ip\n";
        }
    }
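
    To keep the blocks current, I just let cron run it. A crontab line along these lines works; the path and interval here are arbitrary examples:

    */5 * * * * /usr/local/sbin/bad_traffic.pl >> /var/log/bad_traffic.log 2>&1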

    That gives us some great, utterly unforgiving, blockage. Looking at the IP addresses attempting to pry, I noticed that most of them were on at least one of the popular block-lists.

    So let’s make use of some of those block-lists! I found a program intended to apply those lists locally but, of course, it didn’t work for me. Here’s a similar program; this one will use ipset for managing the block-list though only minor changes would be needed to use iptables as above:

    #!/bin/bash
    IP_TMP=ip.tmp
    IP_BLACKLIST_TMP=ip-blacklist.tmp

    IP_BLACKLIST=ip-blacklist.conf

    WIZ_LISTS="chinese nigerian russian lacnic exploited-servers"

    BLACKLISTS=(
        "http://danger.rulez.sk/projects/bruteforceblocker/blist.php" # BruteForceBlocker IP List
        "http://rules.emergingthreats.net/blockrules/compromised-ips.txt" # Emerging Threats - Compromised IPs
        "http://www.spamhaus.org/drop/drop.txt" # Spamhaus Don't Route Or Peer List (DROP)
        "http://www.spamhaus.org/drop/edrop.txt" # Spamhaus Don't Route Or Peer List (DROP) Extended
        "http://cinsscore.com/list/ci-badguys.txt" # C.I. Army Malicious IP List
        "http://www.openbl.org/lists/base.txt" # OpenBL.org 90 day List
        "http://www.autoshun.org/files/shunlist.csv" # Autoshun Shun List
        "http://lists.blocklist.de/lists/all.txt" # blocklist.de attackers
    )

    for address in "${BLACKLISTS[@]}"
    do
        echo -e "\nFetching $address\n"
        curl "$address" >> $IP_TMP
    done

    for list in $WIZ_LISTS
    do
        wget "http://www.wizcrafts.net/$list-iptables-blocklist.html" -O - >> $IP_TMP
    done

    wget 'http://wget-mirrors.uceprotect.net/rbldnsd-all/dnsbl-3.uceprotect.net.gz' -O - | gunzip | tee -a $IP_TMP

    grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}[/][0-9]\{1,3\}' $IP_TMP | tee -a $IP_BLACKLIST_TMP
    grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}[^/]' $IP_TMP | tee -a $IP_BLACKLIST_TMP

    sed -i 's/\t//g' $IP_BLACKLIST_TMP
    sort -u $IP_BLACKLIST_TMP | tee $IP_BLACKLIST

    rm $IP_TMP
    rm $IP_BLACKLIST_TMP

    wc -l $IP_BLACKLIST

    if hash ipset 2>/dev/null
    then
        ipset flush bloxlist
        while IFS= read -r ip
        do
            ipset add bloxlist $ip
        done < $IP_BLACKLIST
    else
        echo -e '\nipset not found\n'
        echo -e "\nYour bloxlist file is: $IP_BLACKLIST\n"
    fi
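
    Note that the script expects an ipset set named bloxlist to exist already, and nothing above tells netfilter to consult it, so the one-time setup on my end looks something like this:

    # create the set (hash:net holds single addresses as well as CIDR ranges)
    ipset create bloxlist hash:net
    # drop anything whose source address appears in the set
    iptables -I INPUT -m set --match-set bloxlist src -j DROP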


    Download here:
        bad_traffic.pl
        bloxlist.sh


    Tags: , , , , , , , , , ,
    Permalink: 20140713.simple.protection.with.iptables.ipset.and.blacklilsts

    Thu, 06 Jun 2013

    Managing to use man pages through simple CLI tips

    Recently, an author I admire and time-honored spinner of the Interwebs, Tony Lawrence, emphasized the value of using man pages (manual pages, the documentation available right from the command line; e.g., man ls) as a sanity check before getting carried away with powerful commands. I didn’t know about this one but he has written about a situation in which killall could produce some shocking, and potentially quite unpleasant, results.

    Personally, I often quickly check man pages to be certain that I am using the correct flags or, as in the above case, anticipating results that bear some resemblance to what is actually likely to happen. Yet, it seems many people flock toward SERPs (Search Engine Results Pages, a tasteful replacement for mentioning any particular search-engine by name; also useful as a verb: “I dunno. You’ll have to SERP it.”) for this information.

    Perhaps the most compelling reason to head for the web is that you can leave the cursor right where it is in the line you’re working on, without disturbing the command. SERPing the command, however, could easily lead you to information about a variant that is more common than the one available to you. More importantly, the information retrieved from the search engine is almost certainly written by someone who did read the man page, and may even come with the admonishment that you RTFM (Read The F#!$!*#’n Manual) as a testament to the importance of developing this habit.

    This can be made easier with just a few CLI shortcuts.

    <CTRL+u> to cut what you have typed so far and <CTRL+y> to paste it back.

    That is, you press <CTRL+u> and the line will be cleared, so you can then type man {command} and read the documentation. Don’t hesitate to jot quick notes of which flags you intend to use, if needed. Then exit the man page, press <CTRL+y> and finish typing right where you left off.

    This is another good use for screen or tmux but, let’s face it, there are times when you don’t want the overhead of opening another window for a quick look-up, and even instances when these tools aren’t available.

    A few other tips to make life easier when building complex commands:

    Use the command fc to open up an editor in which you can build your complex command and, optionally, even save it as a shell script for future reuse.
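    fc is also handy for simply glancing at recent history. Do bear in mind that whatever is left in the buffer when you quit the editor gets executed, so delete everything if you change your mind:

    > fc -l -5    # list the last several commands
    > fc          # open the most recent command in $EDITOR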

    Repeat the last word from the previous command (often a filename) with <ALT+.> or use an item from the last command by position, in reverse order:
    > ls -lahtr *archive*
    <ALT+1+.> : *archive*
    <ALT+2+.> : -lahtr
    <ALT+3+.> : ls

    You can also use Word Designators to use items from history, such as adding sudo to the last command typed by:
    sudo !!

    This allows for tricks like replacing bits of a previous command:
    !:s/misspelled/corrected/
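    A quick example of that substitution in action; the corrected command is echoed and then run:
    > cat /etc/hosst
    cat: /etc/hosst: No such file or directory
    > !:s/hosst/hosts/
    cat /etc/hosts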

    Lastly, if you need a command that was typed earlier, you can search history by pressing <CTRL+r> and start typing an identifying portion of the command.

    (Note: I have used these in Zsh and Bash, specifically. They can, however, be missing or overwritten; if a feature you want isn’t working, you can bind keys in a configuration file. Don’t just write it off: once you’ve solved the problem, it will never again be an intimidating one.)
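    For example, the reverse history search above is normally bound already, but if some other tool has stolen the keystroke, restoring it is a one-liner in the appropriate config file:

    # Zsh (~/.zshrc)
    bindkey '^R' history-incremental-search-backward

    # Bash, via readline (~/.inputrc)
    "\C-r": reverse-search-history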

    Happy hacking!


    Tags: , , , , , , , ,
    Permalink: 20130606.managing.to.use.man.pages

    Mon, 13 May 2013

    Zsh and hash

    Documentation for this one seems a bit hard to come by but it is one of the things I love about Zsh.

    I’ve seen many .bashrc files that have things like:
    alias www='cd /var/www'
    alias music='cd /home/j0rg3/music'

    And that’s a perfectly sensible way to make life a little easier, especially if the paths are very long.

    In Zsh, however, we can use the hash command, and the shortcut we get from it works anywhere a path would. In other words, using the alias version above, if we want to edit ‘index.html’ in the ‘www’ directory, we have to issue the shortcut to get there and then edit the file, in two steps:
    > www
    > vim index.html

    The improved version in .zshrc would look like:
    hash www=/var/www
    hash -d www=/var/www

    Then, at any time, you can use tilde (~) and your shortcut in place of the path.
    > vim ~www/index.html

    Even better, it integrates with Zsh’s robust completions so you can, for example, type cd ~www/ and then use the tab key to cycle through subdirectories and files.

    On this system, I’m using something like this:
    (.zshrc)
    hash posts=/home/j0rg3/weblog/posts
    hash -d posts=/home/j0rg3/weblog/posts

    Then we can make a function to create a new post, to paste into .zshrc. Since we want to be able to edit and save while we are working, without partial posts becoming visible, we’ll use an extra .tmp extension at the end:
    post() { vim ~posts/`date +%Y-%m`/`date +%Y%m%d`.$1.txt.tmp }

    [ In-line date command unfamiliar? See earlier explanation ]
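
    Calling it is as simple as:
    > post zsh.and.hash
    which, today, drops me into Vim editing ~posts/2013-05/20130513.zsh.and.hash.txt.tmp.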

    But, surely there is going to be a point when we need to save a post and finish it later. For now, let’s assume that only a single post will be in limbo at any time. We definitely don’t want to have to remember the exact name of the post, and we don’t want to have to hunt it down every time.

    We can make those things easier like this (note the single quotes, which keep the find from running until the moment the alias is actually used):
    alias resume='vim `find ~posts/ -name "*.txt.tmp"`'

    Now, we can just enter resume and the system will go find the post we were working on and open it up for us to finish. The file will need the extension renamed from .txt.tmp to only .txt to publish the post but, for the sake of brevity, we’ll think about that (and having multiple posts in editing) on another day.


    Tags: , , , ,
    Permalink: 20130513.zsh.and.hash

    Tue, 07 May 2013

    Welcome, traveler.

    Thanks for visiting my little spot on the web. This is a Blosxom ‘blog which, for those who don’t know, is a CGI written in Perl using the file-system (rather than a database).

    To the CLI-addicted, this is an awesome little product. Accepting, of course, that you’re going to get under the hood if you’re going to make it the product you want. After some modules and hacking, I’m pleased with the result.

    My posts are just text files, meaning I start a new one like: vim ~posts/`date +%Y%m%d`.brief.subject.txt

    Note: the back-ticks (`) tell the system that you want to execute the command between ticks, and dynamically insert its output into the command. In this case, the command date with these parameters:
    1. (+) we’re going to specify a format
    2. (%Y) four-digit year
    3. (%m) two-digit month
    4. (%d) two-digit day
    That means the command above will use Vim to edit a text file named ‘20130507.brief.subject.txt’ in the directory I have assigned to the hash of ‘posts’. (using hash this way is a function of Zsh that I’ll cover in another post)
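    To see the expansion in isolation (using the directory that my ‘posts’ hash points at):
    > date +%Y%m%d
    20130507
    > echo ~posts/`date +%Y%m%d`.brief.subject.txt
    /home/j0rg3/weblog/posts/20130507.brief.subject.txt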

    In my CLI-oriented ‘blog, I can sprinkle in my own HTML or use common notation like wrapping a word in underscores to have it underlined, forward-slashes for italics and asterisks for bold.

    Toss in a line that identifies tags and, since Perl is the beast of Regex, we pick up the tags and make them links, meta-tags, etc.

    Things here are likely to change a lot at first, while I twiddle with CSS and hack away at making a Blosxom that perfectly fits my tastes — so don’t be too alarmed if you visit and things look a tad wonky. It just means that I’m tinkering.

    Once the saw-horses have been tucked away, I’m going to take the various notes I’ve made during my years in IT and write them out, in a very simple breakdown, aimed at sharing these with people who know little about how to negotiate the command line. The assumption here is that you have an interest in *nix/BSD. If you’ve that and the CLI is not a major part of your computing experience, it probably will be at some point. If you’re working on systems remotely, graphical interfaces often just impede you.

    Once you’ve started working on remote machines, the rest is inevitable. You can either remember how to do everything two ways, through a graphical interface and CLI — or just start using the CLI for everything.

    So let’s take a little journey through the kinds of things that make me love the CLI.


    Tags: , , , , , , , , ,
    Permalink: 20130507.greetings