c0d3 :: j0rg3

A collection of tips, tricks and snips. A proud Blosxom weblog. All code. No cruft.

Sat, 04 Mar 2017

Official(ish) deep dark onion code::j0rg3 mirror

Recently I decided that I wanted my blog to be available inside of the Deep, Dark Onion (Tor).

First time around, I set up a proxy, modified to access only the clear-web version of the blog, and made it available inside Tor as a ‘hidden service’.

My blog is hosted on equipment provided by the kind folk at insomnia247.nl and I found that, within a week or so, the address of my proxy was blocked. It’s safe for us to assume that it was simply because of the outrageous popularity it received inside Tor.

By “safe for us to assume” I mean that it is highly probable that no significant harm would come from making that assumption. It would not be a correct assumption, though.

What’s more true is that within Tor things are pretty durn anonymous. Your logs will show Tor traffic coming from 127.0.0.1 only. This is a great situation for parties that would like to scan sites repeatedly looking for vulnerabilities — because you can’t block them. They can scan your site over and over and over. And the more features you have (e.g., comments, searches, any form of user input), the more attack vectors are plausible.

So why not scan endlessly? They do. Every minute of every hour.

Since insomnia247 is a provider of free shells, it’s entirely reasonable that they don’t want to take the hit for that volume of traffic. They’re providing this service to untold numbers of other users, blogs and projects.

For that reason, I decided to set up a dedicated mirror.

Works like this: my blog lives here. I have a machine at home which uses rsync to make a local copy of this blog. Immediately thereafter, it rsyncs any newly fetched data up to the mirror in onionland.
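
A minimal sketch of that loop, with SSH host aliases and paths that are my own stand-ins (the real ones aren’t published):
#!/bin/bash
# pull the live blog down from the clear-web host
rsync -az --delete insomnia:public_html/ ~/blog-mirror/
# then push anything new up to the disposable onion-side VPS
rsync -az --delete ~/blog-mirror/ onionvps:/var/www/mirror/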

After consideration, I realized that this was also a better choice just in case there is something exploitable in my blog. Instead of even risking the possibility that an attacker could get access to insomnia247, they can only get to my completely disposable VPS which has hardly anything on it except this blog and a few scripts to which I’ve already opened the source code.

I’ve not finished combing through it but I’ve taken pains to ensure it doesn’t link back to the clear web. To be clear, there’s nothing inherently wrong with that. Tor users will only appear as the IP address of their exit node and should still remain anonymous. To me, it’s just onion etiquette. You let the end-user decide when they want to step outside.

To that end, the Tor mirror does not have the buttons to share to Facebook, Twitter, LinkedIn, Google Plus.

That being said, if you’re a lurker of those Internet back-alleys then you can find the mirror at: http://aacnshdurq6ihmcs.onion

Happy hacking, friends!


Permalink: 20170304.deep.dark.onion

Sat, 18 Feb 2017

The making of a Docker: Part II - Wickr: with bonus analysis

Recently, I read a rather excited, attention-catching piece about how Wickr is the super-secure version of Slack. It caught my attention in part because I feel like Wickr has been around for a while, yet I’d not seen anyone raving about its security in the places where I normally interact with those who are highly informed about such subjects.

The good: it seems the folk at Wickr did a fine job of making sure valuable data aren’t left behind.
The bad: closed source, not subject to independent review, and crazy marketin’-fancy-talk without a thorough description of how it does what is claimed.
Any time I’m looking at a product or service that boasts security, I rather expect to see a threat model.

[ Update: At the time I was working on this project, the folk at Wickr were, evidently, opening their source. That’s spectacular news! Check it out on Github. ]

This began as an exercise to provide another piece of security-ish software in a Docker container. Anyone who has used a live distro (e.g., Kali, TAILS) with any regularity knows the ritual of installing favorite tools at each boot, data stores on removable media.

For me, there is tremendous appeal in reducing that to something like:
git clone https://github.com/georgeglarson/docker-wickr
cd docker-wickr
./install.sh
wickr

Let’s dig in!

Having created a number of Docker containers, my workflow is to queue up the base OS and step through getting the software running while keeping careful notes. In this case, I had originally tried to install Wickr on a current copy of Kali. It was already known that Wickr, built against Ubuntu 14.04, needed an older Unicode library (libicu52). So we begin with Ubuntu 14.04.

Grab a copy of Wickr and see what’s required:
dpkg -I wickr-me_2.6.0_amd64.deb

new debian package, version 2.0.
size 78890218 bytes: control archive=4813 bytes.
558 bytes, 14 lines control
10808 bytes, 140 lines md5sums
Package: wickr-me
Architecture: amd64
Section: net
Priority: optional
Version: 2.6.0-4
Replaces: wickr
Conflicts: wickr
Depends: libsqlcipher0, libuuid1, libicu52, libavutil52|libavutil54, libc6, libssl1.0.0, libx264-142, libglib2.0-0, libpulse0, libxrender1, libgl1-mesa-glx
Recommends: libnotify-bin, gstreamer-plugins0.10-good, gstreamer-plugins0.10-bad, gstreamer-plugins0.10-ugly
Maintainer: Wickr Inc.
Installed-Size: 200000
Description: Secure Internet Chat and Media Exchange agent
Wickr is a secure communications client

Okay. The CLI should do most of the work for us, giving a formatted list of dependencies.
dpkg -I wickr-me_2.6.0_amd64.deb | grep -E "^ Depends: | Recommends: " | sed -e "s/ Depends: //" -e "s/ Recommends: //" -e "s/,//g" -e "s/ / \\\ \n/g"

libsqlcipher0 \
libuuid1 \
libicu52 \
libavutil54 \
libc6 \
libssl1.0.0 \
libx264-142 \
libglib2.0-0 \
libpulse0 \
libxrender1 \
libgl1-mesa-glx
libnotify-bin \
gstreamer-plugins0.10-good \
gstreamer-plugins0.10-bad \
gstreamer-plugins0.10-ugly \

Attempting to get those with apt-get reports that it cannot find the gstreamer bits.

Let’s find:
apt-cache search gstreamer | grep -i plugin | grep -E "good|bad|ugly"

gstreamer0.10-plugins-good - GStreamer plugins from the "good" set
...
gstreamer0.10-plugins-bad - GStreamer plugins from the "bad" set
...
gstreamer0.10-plugins-ugly - GStreamer plugins from the "ugly" set

So, there’s the format we need to get the gstreamer dependencies. We know that we’ll also want SSH and wget. That should be enough for our Dockerfile.

We’ll pull down Wickr:
wget https://dls.wickr.com/Downloads/wickr-me_2.6.0_amd64.deb

Then install:
dpkg -i wickr-me_2.6.0_amd64.deb

Okay! We are, in theory, ready to run Wickr. We’re about to see we aren’t yet there — but these sorts of problems are pretty commonplace.
wickr-me

wickr-me: error while loading shared libraries: libxslt.so.1: cannot open shared object file: No such file or directory

Huh! We need libxslt. Let’s fix that: apt-get install libxslt1-dev

Now we can run it.
wickr-me

This application failed to start because it could not find or load the Qt platform plugin "xcb".

Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, xcb.

Reinstalling the application may fix this problem.
Aborted (core dumped)

One more: apt-get install xcb

Okay. That really was the last one. Now we have a complete list of dependencies for our Dockerfile:
RUN apt-get update && apt-get install -y \
gstreamer0.10-plugins-good \
gstreamer0.10-plugins-bad \
gstreamer0.10-plugins-ugly \
libsqlcipher0 \
libuuid1 \
libicu52 \
libavutil52 \
libc6 \
libssl1.0.0 \
libx264-142 \
libglib2.0-0 \
libpulse0 \
libxrender1 \
libxslt1-dev \
libgl1-mesa-glx \
libnotify-bin \
ssh \
wget \
xcb \
&& apt-get clean
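
Pieced together from the steps above, the whole Dockerfile plausibly looks something like this; treat it as a sketch, since the actual file published on Github may differ:
FROM ubuntu:14.04

RUN apt-get update && apt-get install -y \
    gstreamer0.10-plugins-good \
    gstreamer0.10-plugins-bad \
    gstreamer0.10-plugins-ugly \
    libsqlcipher0 libuuid1 libicu52 libavutil52 libc6 libssl1.0.0 \
    libx264-142 libglib2.0-0 libpulse0 libxrender1 libxslt1-dev \
    libgl1-mesa-glx libnotify-bin ssh wget xcb \
    && apt-get clean

# fetch and install the Wickr package itself
RUN wget https://dls.wickr.com/Downloads/wickr-me_2.6.0_amd64.deb \
    && dpkg -i wickr-me_2.6.0_amd64.deb \
    && rm wickr-me_2.6.0_amd64.deb

CMD ["wickr-me"]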

We now have Wickr in a Docker container and, because we are the curious sort, need to peek into what’s happening.

A natural first step is to set Wireshark atop Wickr. At a glance, it seems to be communicating with a single IP address (204.232.166.114) via HTTPS.

Unsurprisingly, the client communicates with the server whenever a message is sent. Further, it appears to poll the same address periodically, asking for new messages. We see that the address resolves to Rackspace in San Antonio, TX.

We can easily establish the link between this IP address, Rackspace and the application.

Well, that’s enough. Right?

Good!

Wait.

What?

We’re still a little curious.

Aren’t we?

I mean, what’s the big question here? What happens if there’s a man in the middle? People so eagerly connect to any free WiFi that it’s clearly a plausible scenario. Well… One way to find out!
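
One plausible way to stage that experiment is a transparent TLS-intercepting proxy on a gateway box we control. This is a sketch, not what was actually run: mitmproxy is assumed, its flags vary by version (older releases used -T --host instead of --mode transparent), and interception only works cleanly if the client will accept our CA.
# route the test client through this box, steering HTTPS into the proxy
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8080
# watch the decrypted exchanges interactively
mitmproxy --mode transparent --showhost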

Here’s what we learned. Server-side, the application is written in PHP. The IP address resolves from the hostname ‘secex.info’.

When we send, the client calls ‘postMessage.php’.

When we receive, it calls ‘downloadMessage.php’.

And it calls ‘newMessageCheck.php’ to, y’know, check for new messages.

Other analyses have forensically examined artefacts left behind; there are published descriptions of the encryption methods used for the local database connection. We didn’t go into more aggressive efforts such as disassembly because we are too lazy for that jazz!

In my opinion, we didn’t learn anything wildly unexpected. Overall, Wickr seems an okay solution for convenient encrypted messaging. That’s always the trade: convenience vs. security. At least we ended up with a Docker container for the software!

Github | Docker


Permalink: 20170218.making.a.docker.wickr

Fri, 17 Feb 2017

The making of a Docker: Part I - Bitmessage GUI with SSH X forwarding

Lately, I’ve been doing a lot of work from a laptop running Kali. Engaged in pursuit of a new job, I’m brushing up on some old tools and skills, exploring some bits that have changed.

My primary desktop rig is currently running Arch because I love the fine-grained control and the aggressive releases. Over the years, I’ve Gentoo’d and Slacked, Crunchbanged, BSD’d, Solarised, et cet. And I’ve a fondness for all of them, especially the security-minded focus of OpenBSD. But, these days, we’re usually on Arch or Kali. Initially, I went with Black Arch on the laptop but I felt the problems I was fixing were too specific to my situation to make good material for posts.

Anyway, I wanted to get Bitmessage running, corresponding to another post I have in drafts. On Kali, it wasn’t going well so I put it on the Arch box and just ran it over the network. A reasonable solution if you’re in my house but also the sort of solution that will keep a hacker up at night.

If you’re lucky, there’s someone maintaining a package for the piece of software that you want to run. However, that’s often not the case.

If I correctly recall, to “fix” the problem with Bitmessage on Kali would’ve required manually installing older versions of libraries that were already present. Those libraries should, in fact, be all ebony and ivory, living together in harmony. However, I just didn’t love the idea of that solution. I wanted an approach that would be useful on a broader scale.

Enter containerization/virtualization!

Wanting the lightest solution, I quickly went to Docker and realized something: I have not before built a Docker container for a GUI application. And Bitmessage’s CLI/daemon mode doesn’t provide the fluid UX that I wanted. Well, the easy way to get a GUI out of a Docker container is to forward DISPLAY as an environment variable (i.e., docker run -e DISPLAY=$DISPLAY). Splendid!

Except that it doesn’t work on current Kali, which is using QT4. There’s a known issue when graphical apps are run as root and, though it is fixed in QT5, we are using current Kali. And that means we are, by default, uid 0 and on QT4.

I saw a bunch of workarounds that seemed to have spotty (at best) rates of success, including setting QT’s graphics system to Native and giving Xorg over to root. They, mostly, seemed to be cargo-cult solutions.

What made the most sense to my (generally questionable) mind was to use X forwarding. Since I had already been running Bitmessage over X forwarding from my Arch box, I knew it should work just the same.

To be completely truthful, the first pass I took at this was with Vagrant mostly because it’s SO easy. Bring up your Vagrant Box and then:
vagrant ssh -- -X
Voilà!

Having proof of concept, I wanted a Docker container. The reason for this is practical. Vagrant, while completely awesome, has substantially more overhead than Docker because it virtualizes an entire machine, separate kernel included. We don’t want a separate kernel running for each application. Therefore Docker is the better choice for this project.

Also, we want this whole thing to be seamless. We want to run the command bitmessage and have it fire up with minimal awkwardness and hopefully no extra steps. That is, we do not want to run the Docker container, then SSH into it, then execute Bitmessage as individual steps. Even though that’s going to be how we begin.

The Bitmessage wiki accurately describes how to install the software so we’ll focus on the SSH setup. Though, when we build the Dockerfile, we will need to add SSH to the list from the wiki (a sketch follows below).

We’re going to want the container to start so that the SSH daemon is ready. Until then we can’t SSH (with X forwarding) into the container. Then we’ll want to use SSH to kick off the Bitmessage application, drawing the graphical interface using our host system’s X11.
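
Roughly, then, the Dockerfile needs the Bitmessage dependencies from the wiki plus openssh-server. Here’s a sketch; the base image tag is my choice and the published image may arrange sshd differently:
FROM ubuntu:16.04

# Bitmessage dependencies (per the wiki) plus SSH for the X forwarding trick
RUN apt-get update && apt-get install -y \
    python2.7 python-qt4 python-msgpack python-setuptools \
    git openssh-server \
    && apt-get clean

RUN git clone https://github.com/Bitmessage/PyBitmessage.git /root/PyBitmessage

# key-only root login; /root/.ssh gets volume-mounted at run time
RUN mkdir /var/run/sshd \
    && sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin without-password/' /etc/ssh/sshd_config

EXPOSE 8444 22
CMD ["/usr/sbin/sshd", "-D"]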

We’re going to take advantage of Docker’s -v --volume option which allows us to specify a directory on our host system to be mounted inside our container. Using this feature, we’ll generate our SSH keys on the host and make them automatically available inside the container. We’ll tuck the keys inside the directory that Bitmessage uses for storing its configuration and data. That way Bitmessage’s configuration and stored messages can be persistent between runs — and all of your pieces are kept in a single place.

When we generate the container, /etc/ssh/sshd_config is configured to allow root login by key only (no passwords). So here’s how we’ll get this done:
mkdir -p ~/.config/PyBitmessage/keys #Ensure that our data directories exist
cd ~/.config/PyBitmessage/keys
ssh-keygen -b 4096 -P "" -C "$(whoami)@$(hostname)-$(date -I)" -f docker-bitmessage-keys #Generate our SSH keys
ln -fs docker-bitmessage-keys.pub authorized_keys #for container to see pubkey

Build our container (sources available at Github and Docker) and we’ll make a script to handle Bitmessage to our preferences.

#!/bin/bash
# filename: bitmessage
set -euxo pipefail

# open Docker container:
# port 8444 available, sharing local directories for SSH and Bitmessage data
# detached, interactive, pseudo-tty (-dit)
# record container ID in $DID (Docker ID)
DID=$(docker run -p 8444:8444 -v ~/.config/PyBitmessage/:/root/.config/PyBitmessage -v ~/.config/PyBitmessage/keys/:/root/.ssh/ -dit j0rg3/bitmessage-gui bash)

# find IP address of new container, record in $DIP (Docker IP)
DIP=$(docker inspect $DID | grep IPAddress | cut -d '"' -f 4)

# pause for one second to allow container's SSHD to come online
sleep 1

# SSH into container and execute Bitmessage
ssh -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oIdentityFile=~/.config/PyBitmessage/keys/docker-bitmessage-keys -X $DIP ./PyBitmessage/src/bitmessagemain.py

# close container if Bitmessage is closed
docker kill $DID

Okay, let’s make it executable: chmod +x bitmessage

Put a link to it where it can be picked up system-wide: ln -fs ~/docker-bitmessage/bitmessage /usr/local/bin/bitmessage

There we have it! We now have a functional Bitmessage inside a Docker container. \o/

In a future post we’ll look at using eCryptfs to further protect our Bitmessage data stores.

  Project files: Github and Docker


Permalink: 20170217.making.a.docker.bitmessage

Tue, 10 Jan 2017

[-] Auxiliary failed: Msf::OptionValidateError The following options failed to validate: RHOSTS.

Mucking about with a fresh copy of Kali brings to attention that it’s packaged with an Armitage that doesn’t work correctly.

I know what you’re thinking… Good. Type the commands into Msfconsole like a real man, y’uh lazy good-fer-naught! And, in practice, that was my immediate solution. But I can’t resist a good tinker when things are misbehaving.

I was anticipating that the problem would be thoroughly solved when I ixquicked it. That was partially correct. I was surprised, however, when apt-get update && apt-get upgrade didn’t fix the issue. More surprised at the age of the issue. Most surprised that I could see lots of evidence that users have been plagued by this issue — but no clear workarounds were quickly found.

Guess what we’re doing today?

Okay. The issue is quite minor but just enough to be heartbreaking to the fledgling pentester trying to get a VM off the ground.

In brief, the owner of Armitage’s Github explains:

The MSF Scans feature in Armitage parses output from Metasploit’s portscan/tcp module and uses these results to build a list of targets it should run various Metasploit auxiliary modules against. A recent-ish update to the Metasploit Framework changed the format of the portscan/tcp module output. A patch to fix this issue just needs to account for the new format of the portscan/tcp module.

That is, a colon makes it into the input for the Msfconsole command to define RHOSTS. I.e.: set RHOSTS 172.16.223.150: - 172.16.223.150

Another kind coder tweaked the regex and submitted the patch and pull request, which was successfully incorporated into the project.

Sadly, things have stalled out there. So if this problem is crippling your rig, let’s fix it!

We just want a fresh copy of the project.
root@kali:~/armitage# git clone https://github.com/rsmudge/armitage

Cloning into ‘armitage’…
remote: Counting objects: 7564, done.
remote: Total 7564 (delta 0), reused 0 (delta 0), pack-reused 7564
Receiving objects: 100% (7564/7564), 47.12 MiB | 2.91 MiB/s, done.
Resolving deltas: 100% (5608/5608), done.

Kali is Debian-based and we’re going to need Apache Ant:
root@kali:~/armitage# apt-get install ant

Then, we’ll build our new fella:
root@kali:~/armitage# cd armitage
root@kali:~/armitage# ./package.sh

Buildfile: /root/test/armitage/build.xml

clean:

BUILD SUCCESSFUL
Total time: 0 seconds
Buildfile: /root/test/armitage/build.xml

init:
[mkdir] Created dir: /root/test/armitage/bin

compile:
[javac] Compiling 111 source files to /root/test/armitage/bin
[javac] depend attribute is not supported by the modern compiler
[javac] Note: /root/test/armitage/src/ui/MultiFrame.java uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

BUILD SUCCESSFUL
Total time: 2 seconds
Buildfile: /root/test/armitage/build.xml

init:

compile:

jar:
[unzip] Expanding: /root/test/armitage/lib/sleep.jar into /root/test/armitage/bin
[unzip] Expanding: /root/test/armitage/lib/jgraphx.jar into /root/test/armitage/bin
[unzip] Expanding: /root/test/armitage/lib/msgpack-0.6.12-devel.jar into /root/test/armitage/bin
[unzip] Expanding: /root/test/armitage/lib/postgresql-9.1-901.jdbc4.jar into /root/test/armitage/bin
[unzip] Expanding: /root/test/armitage/lib/javassist-3.15.0-GA.jar into /root/test/armitage/bin
[copy] Copying 4 files to /root/test/armitage/bin/scripts-cortana
[jar] Building jar: /root/test/armitage/armitage.jar
[jar] Building jar: /root/test/armitage/cortana.jar

BUILD SUCCESSFUL
Total time: 1 second
armitage/
armitage/readme.txt
armitage/teamserver
armitage/cortana.jar
armitage/armitage.jar
armitage/armitage-logo.png
armitage/armitage
armitage/whatsnew.txt
adding: readme.txt (deflated 55%)
adding: armitage.exe (deflated 49%)
adding: cortana.jar (deflated 5%)
adding: armitage.jar (deflated 5%)
adding: whatsnew.txt (deflated 65%)
armitage/
armitage/readme.txt
armitage/teamserver
armitage/cortana.jar
armitage/armitage.jar
armitage/armitage-logo.png
armitage/armitage
armitage/whatsnew.txt
Archive: ../../armitage.zip
inflating: readme.txt
inflating: armitage.exe
inflating: cortana.jar
inflating: armitage.jar
inflating: whatsnew.txt

And here, best I can guess from messages read, is where a lot of people are running into trouble. We have successfully produced our new working copy of armitage. However, it is in our own local directory and will not be run if we just enter the command: armitage

Let’s review how to figure out what we want to do about that.

First, we want to verify what happens when we run the command armitage.
root@kali:~/armitage# which armitage

/usr/bin/armitage

Good! Let’s check and see what that does!
root@kali:~/armitage# head /usr/bin/armitage

#!/bin/sh

cd /usr/share/armitage/
exec ./armitage "$@"

Almost there! It’s running /usr/share/armitage/armitage with whatever variables we’ve passed in. We’ll check that out.
root@kali:~/armitage# head /usr/share/armitage/armitage

#!/bin/sh
java -XX:+AggressiveHeap -XX:+UseParallelGC -jar armitage.jar $@

We have enough information to assemble a solution.

I trust that the people behind Kali and Armitage will get this corrected so I don’t want to suggest a solution that would replace the armitage command and prevent an updated version from running later. So, let’s just make a temporary replacement?

root@kali:~/armitage# echo -e '#!/bin/sh\njava -XX:+AggressiveHeap -XX:+UseParallelGC -jar ~/armitage/armitage.jar $@' > /usr/bin/tmparmitage
root@kali:~/armitage# chmod +x /usr/bin/tmparmitage

Hereafter, we can use the command ‘tmparmitage’ (either CLI or ALT-F2) to run our fresh version until things catch up.

And, of course, to save you the time, weary hacker:

Download here:
    armitage_quick_fix.sh


Permalink: 20170110.armitage.not.working.in.kali

Sun, 13 Jul 2014

Simple Protection with iptables, ipset and Blacklists

Seems I’ve always just a few more things going on than I can comfortably handle. One of those is an innocent little server holding the beginnings of a new project.

If you expose a server to the Internet, very quickly your ports are getting scanned and tested. If you’ve an SSH server, there are going to be attempts to log in as ‘root’, which is why it is ubiquitously advised that you disable root login. It’s also why many advise against allowing passwords at all.

We could talk for days about improvements; it’s usually not difficult to introduce some form of two-factor authentication (2FA) for sensitive points of entry such as SSH. You can install monitoring software like Logwatch, which can summarize important points from your logs, such as who has logged in via SSH, how many times root was used, etc.

DenyHosts and Fail2ban are both great ways to secure things, according to your needs.

DenyHosts works primarily with SSH and asks very little from you in way of configuration, especially if you’re using a package manager to install a version that is configured for the distribution on which you’re working. If you’re installing from source you may need to find where are your SSH logs (e.g., /var/log/secure, /var/log/auth.log). It’s extremely easy to set up DenyHosts to synchronize so that you’re automatically blocking widely-known offenders whether or not they’re after your server.

In contrast, Fail2ban takes more work to set up. However, it is extremely configurable and works with any log file you point it toward, which means it can watch anything (e.g., FTP, web traffic, mail traffic). You define your own jails, which means you can ban problematic IP addresses according to preference: ban bad HTTP attempts from HTTP only, or stick offenders’ noses in the virtual corner and refuse all of their traffic until they’ve served their time-out. You can even use Fail2ban to scan its own logs, so repeat offenders can be locked out for longer.
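
To give a flavor of a jail, here’s an illustrative jail.local entry built on the stock apache-badbots filter; the log path and thresholds here are my own picks, not gospel:
[apache-badbots]
enabled  = true
port     = http,https
filter   = apache-badbots
logpath  = /var/log/apache2/access.log
findtime = 600
bantime  = 86400
maxretry = 3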

Today we’re going to assume that you’ve a new server that shouldn’t be seeing any traffic except from you and any others involved in the project. In that case, you probably want to block traffic pretty aggressively. If you’ve physical access to the server (or the ability to work with staff at the datacenter) then it’s better to err in the direction of accidentally blocking good guys than trying to be overly fault-tolerant.

The server we’re working on today is a Debian Wheezy system. It has become a common misconception that Ubuntu and Debian are, for all intents and purposes, interchangeable. They’re similar in many respects and Ubuntu is great preparation for using Debian but they are not the same. The differences, I think, won’t matter for this exercise but I can’t be certain because this was written using Wheezy.

Several minutes after bringing my new server online, I started seeing noise in the logs. I was still getting set up and really didn’t want to stop and take protective measures but there’s no point in securing a server after it’s been compromised. The default Fail2ban configuration was too forgiving for my use: it was scanning for 10 minutes and banning for 10 minutes. Since only a few people should be accessing this server, there’s no reason for anyone to be trying a different password every 15 minutes (for hours).

I found a ‘close-enough’ script and modified it. Here, we’ll deal with a simplified version.

First, let’s create a name for these ne’er-do-wells in iptables:
iptables -N bad_traffic
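
A user-defined chain only filters packets that are sent through it, so somewhere your ruleset needs a jump from INPUT into our new chain:
iptables -A INPUT -j bad_traffic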

For this one, we’ll use Perl. We’ll look at our Apache log files to find people sniffing ‘round and we’ll block their traffic. Specifically, we’re going to check Apache’s ‘error.log’ for the phrases ‘File does not exist’ and ‘client denied by server configuration’ and block people causing those errors. This would be excessive for servers intended to serve the general populace. For a personal project, it works just fine as a ‘DO NOT DISTURB’ sign.


#!/usr/bin/env perl
use strict;
use POSIX qw(strftime);

my $log = ($ARGV[0] ? $ARGV[0] : "/var/log/apache2/error.log");
my $chain = ($ARGV[1] ? $ARGV[1] : "bad_traffic");

my @bad = `grep -iE 'File does not exist|client denied by server configuration' $log |cut -f8 -d" " | sed 's/]//' | sort -u`;
my @ablk = `/sbin/iptables -S $chain|grep DROP|awk '{print \$4}'|cut -d"/" -f1`; # \$4 escaped so Perl doesn't interpolate it before the shell runs awk

foreach my $ip (@bad) {
    if (!grep $_ eq $ip, @ablk) {
        chomp $ip;
        `/sbin/iptables -A $chain -s $ip -j DROP`;
        print strftime("%b %d %T",localtime(time))." badht: blocked bad HTTP traffic from: $ip\n";
    }
}

That gives us some great, utterly unforgiving, blockage. Looking at the IP addresses attempting to pry, I noticed that most of them were on at least one of the popular block-lists.

So let’s make use of some of those block-lists! I found a program intended to apply those lists locally but, of course, it didn’t work for me. Here’s a similar program; this one will use ipset for managing the block-list though only minor changes would be needed to use iptables as above:

#!/bin/bash
IP_TMP=ip.tmp
IP_BLACKLIST_TMP=ip-blacklist.tmp

IP_BLACKLIST=ip-blacklist.conf

WIZ_LISTS="chinese nigerian russian lacnic exploited-servers"

BLACKLISTS=(
"http://danger.rulez.sk/projects/bruteforceblocker/blist.php" # BruteForceBlocker IP List
"http://rules.emergingthreats.net/blockrules/compromised-ips.txt" # Emerging Threats - Compromised IPs
"http://www.spamhaus.org/drop/drop.txt" # Spamhaus Don't Route Or Peer List (DROP)
"http://www.spamhaus.org/drop/edrop.txt" # Spamhaus Don't Route Or Peer List (DROP) Extended
"http://cinsscore.com/list/ci-badguys.txt" # C.I. Army Malicious IP List
"http://www.openbl.org/lists/base.txt" # OpenBL.org 90 day List
"http://www.autoshun.org/files/shunlist.csv" # Autoshun Shun List
"http://lists.blocklist.de/lists/all.txt" # blocklist.de attackers
)

for address in "${BLACKLISTS[@]}"
do
    echo -e "\nFetching $address\n"
    curl "$address" >> $IP_TMP
done

for list in $WIZ_LISTS
do
    wget "http://www.wizcrafts.net/$list-iptables-blocklist.html" -O - >> $IP_TMP
done

wget 'http://wget-mirrors.uceprotect.net/rbldnsd-all/dnsbl-3.uceprotect.net.gz' -O - | gunzip | tee -a $IP_TMP

grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}[/][0-9]\{1,3\}' $IP_TMP | tee -a $IP_BLACKLIST_TMP
grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}[^/]' $IP_TMP | tee -a $IP_BLACKLIST_TMP

sed -i 's/\t//g' $IP_BLACKLIST_TMP
sort -u $IP_BLACKLIST_TMP | tee $IP_BLACKLIST

rm $IP_TMP
rm $IP_BLACKLIST_TMP

wc -l $IP_BLACKLIST

if hash ipset 2>/dev/null
then
    ipset flush bloxlist
    while IFS= read -r ip
    do
        ipset add bloxlist $ip
    done < $IP_BLACKLIST
else
    echo -e '\nipset not found\n'
    echo -e "\nYour bloxlist file is: $IP_BLACKLIST\n"
fi
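
Note that the script only fills the set; nothing is actually blocked until iptables consults it. The set also has to exist before that first flush. Both are one-time steps:
ipset create bloxlist hash:net -exist
iptables -I INPUT -m set --match-set bloxlist src -j DROP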


Download here:
    bad_traffic.pl
    bloxlist.sh


Permalink: 20140713.simple.protection.with.iptables.ipset.and.blacklilsts

Tue, 18 Mar 2014

Random datums with Random.org

I was reading about the vulnerability in the ‘random’ number generator in iOS 7 (http://threatpost.com/weak-random-number-generator-threatens-ios-7-kernel-exploit-mitigations/104757) and thought I would share a method that I’ve used. I certainly can’t help with any version of iOS but I can, at least, help in GNU/Linux and the BSDs.

What if we want to get some random numbers or strings? We need a salt or something and, in the interest of best practices, we’re trying to restrain the trust given to any single participant in the chain. That is, we do not want to generate the random data from the server that we are on. Let’s make it somewhere else!

YEAH! I know, right? Where to get random data can be a headache-inducing challenge. Alas! The noble people at Random.org are using atmospheric noise to provide randomness for us regular folk!

Let’s head over and ask for some randomness:
https://www.random.org/strings/?num=1&len=16&digits=on&loweralpha=on&unique=on&format=html&rnd=new

Great! Still feels a little plain. What if big brother saw the string coming over and knew what I was doing? I mean, aside from the fact that I could easily wrap the string with other data of my choosing.

Random.org is really generous with the random data, so why don’t we take advantage of that? Instead of asking for a single string, let’s ask for several. Nobunny will know which one I picked, except for me!

We’ll do that by increasing the 'num' part of the request that we’re sending:
https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=html&rnd=new

They were also thinking of CLI geeks like us with the 'format' variable. We set that to 'plain' and all the fancy formatting for human eyes is dropped and we get a simple list that we don’t need to use any clever tricks to parse!
curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new'

Now we’ve got 25 strings at the CLI. We can pick one to copy and paste. But what if we don’t want to do the picking? Well, we can have the local system do that by picking a number between 1 and 25. RANDOM % 25 is zero-indexed, yielding 0 through 24, so we increment by 1 so that our number is really between 1 and 25.
echo $(( (RANDOM % 25)+1 ))

Next, we’ll send our list to head with our random number. Head is going to give us the first X lines of what we send to it. So if our random number is 17, then it will give us the first 17 lines from Random.org
curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new' | head -$(( (RANDOM % 25)+1 ))

After that, we’ll use the friend of head: tail. We’ll tell tail that we want only the last line of this pseudo-randomly sized list of random strings.
curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new' | head -$(( (RANDOM % 25)+1 )) | tail -1
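
Wrapped up as a throwaway shell function (the name randstr is my own), the whole trick becomes one command:
randstr() {
    curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new' \
        | head -$(( (RANDOM % 25)+1 )) | tail -1
}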

And we can walk away feeling pretty good about having gotten some properly random data for our needs!


Permalink: 20140318.random.data

Mon, 17 Feb 2014

Installing INN’s Project Largo in a Docker container

Prerequisites: Docker, Git, SSHFS.

Today we’re going to look at using Docker to create a WordPress installation with the Project Largo parent theme and a child theme stub for us to play with.

Hart Hoover has established an image for getting a WordPress installation up and running using Docker. For whatever reason, it didn’t work for me out of the box but we’re going to use his work to get started.

Let’s make a place to work and move into that directory:
cd ~
mkdir project.largo.wordpress.docker
cd project.largo.wordpress.docker

We’ll clone the Docker/Wordpress project. For me, it couldn’t untar the latest WordPress. So we’ll download it outside the container, untar it and modify the Dockerfile to simply pull in a copy:
git clone https://github.com/hhoover/docker-wordpress.git
cd docker-wordpress/
ME=$(whoami)
wget http://wordpress.org/latest.tar.gz
tar xvf latest.tar.gz
sed -i 's/ADD http:\/\/wordpress.org\/latest.tar.gz \/wordpress.tar.gz/ADD \.\/wordpress \/wordpress/' Dockerfile
sed -i '/RUN tar xvzf \/wordpress\.tar\.gz/d' Dockerfile

Then, build the project which may take some time.
sudo docker build -t $ME/wordpress .

If you’ve not the images ready for Docker, the process should begin with something like:
Step 0 : FROM boxcar/raring
Pulling repository boxcar/raring
32737f8072d0: Downloading [> ] 2.228 MB/149.7 MB 12m29s

And end something like:
Step 20 : CMD ["/bin/bash", "/start.sh"]
---> Running in db53e215e2fc
---> 3f3f6489c700
Successfully built 3f3f6489c700

Once the project is built, we will start it and forward ports from the container to the host system, so that the Docker container’s site can be accessed through port 8000 of the host system. So, if you want to see it from the computer that you’ve installed it on, you could go to ‘HTTP://127.0.0.1:8000’. Alternatively, if your host system is already running a webserver, we could use SSHFS to mount the container’s files within the web-space of the host system.

In this example, however, we’ll just forward the ports and mount the project locally (using SSHFS) so we can easily edit the files perhaps using a graphical IDE such as NetBeans or Eclipse.

Okay, time to start our Docker image and find its IP address (so we can mount its files):
DID=$(docker run -p 8000:80 -d $ME/wordpress)
DIP=$(docker inspect $DID | grep IPAddress | cut -d '"' -f 4)
docker logs $DID| grep 'ssh user password:' --color

Copy the SSH password and we will make a local directory to access the WordPress installation of our container.
cd ~
mkdir largo.mount.from.docker.container
sshfs user@$DIP:/var/www $HOME/largo.mount.from.docker.container
cd largo.mount.from.docker.container
PROJECT=$(pwd -P)

Now, we can visit the WordPress installation and finish setting up. From the host machine, it should be ‘HTTP://127.0.0.1:8000’. There you can configure Title, Username, Password, et cet. and finish installing WordPress.

Now, let’s get us some Largo! Since this is a test project, we’ll sacrifice security to make things easy. Our Docker WordPress site isn’t ready for us to easily install the Largo parent theme, so we’ll make the web directory writable by everybody. Generally, this is not a practice I would condone. It’s okay while we’re experimenting but permissions are very important on live systems!

Lastly, we’ll download and install Largo and the Largo child theme stub.
ssh user@$DIP 'sudo chmod -R 777 /var/www'
wget https://github.com/INN/Largo/archive/master.zip -O $PROJECT/wp-content/themes/largo.zip
unzip $PROJECT/wp-content/themes/largo.zip -d $PROJECT/wp-content/themes/
mv $PROJECT/wp-content/themes/Largo-master $PROJECT/wp-content/themes/largo
wget http://largoproject.wpengine.netdna-cdn.com/wp-content/uploads/2012/08/largo-child.zip -O $PROJECT/wp-content/themes/largo-child.zip
unzip $PROJECT/wp-content/themes/largo-child.zip -d $PROJECT/wp-content/themes
rm -rf $PROJECT/wp-content/themes/__MACOSX/

We are now ready to customize our Project Largo child theme!


Permalink: 20140217.project.largo.docker

Wed, 26 Jun 2013

Terminal suddenly Chinese

The other day, I was updating one of my systems and I noticed that it had decided to communicate with me in Chinese. Since I don’t know a lick of Chinese, it made for a clumsy exchange.

It was Linux Mint (an Ubuntu variant), so a snip of the output from an ‘apt-get upgrade’ looked like this:

[screenshot: terminal output rendered in Chinese characters]

I’m pretty sure I caused it — but there’s no telling what I was working on and how it slipped past me. Anyway, it’s not a difficult problem to fix but I imagine it could look like big trouble.

So, here’s what I did:
> locale

The important part of the output was this:
LANG=en_US.UTF-8
LANGUAGE=zh_CN.UTF-8

If you want to set your system to use a specific editor, you can set EDITOR=vi, and then you’re going to learn that some programs expect the configuration to be set in VISUAL, so you’ll need to change it there too.

In a similar way, many things were using the en_US.UTF-8 set in LANG, but other things were looking to LANGUAGE and determining that I wanted Chinese.

Having identified the problem, the fix was simple. Firstly, I just changed it in my local environment:
> LANGUAGE=en_US.UTF-8

That solved the immediate problem but, sooner or later, I would reboot the machine and the Chinese setting would come back. I needed to record the change somewhere for the system to know about it in the future.

> vim /etc/default/locale

Therein was the more permanent record, so I changed LANGUAGE there also, giving the result:

LANG=en_US.UTF-8
LANGUAGE=en_US.UTF-8
LC_CTYPE=en_US.UTF-8
LC_NUMERIC=en_US.UTF-8
LC_TIME=en_US.UTF-8
LC_COLLATE="en_US.UTF-8"
LC_MONETARY=en_US.UTF-8
LC_MESSAGES="en_US.UTF-8"
LC_PAPER=en_US.UTF-8
LC_NAME=en_US.UTF-8
LC_ADDRESS=en_US.UTF-8
LC_TELEPHONE=en_US.UTF-8
LC_MEASUREMENT=en_US.UTF-8
LC_IDENTIFICATION=en_US.UTF-8
LC_ALL=

And now, the computer is back to using characters that I (more-or-less) understand.
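
As an aside, Debian-family systems ship an update-locale utility that writes /etc/default/locale for you, so the permanent fix can also be a one-liner:
update-locale LANGUAGE=en_US.UTF-8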


Permalink: 20130626.terminal.suddenly.chinese

Thu, 23 May 2013

GNU Screen: Roll your own system monitor

Working on remote servers, some tools are practically ubiquitous — while others are harder to come by. Even if you’ve the authority to install your preferred tools on every server you visit, it’s not always something you want to do. If you’ve hopped on to a friend’s server just to troubleshoot a problem, there is little reason to install tools that your friend is not in the habit of using. Some servers, for security reasons, are very tightly locked down to include only a core set of tools, to complicate the job of any prying intruders. Or perhaps it is a machine that you normally use through a graphical interface but on this occasion you need to work from the CLI.

These are very compelling reasons to get comfortable, at the very least, with tools like Vim, mail, grep and sed. Eventually, you’re likely to encounter a situation where only the classic tools are available. If you aren’t competent with those tools, you’ll end up facing the obstacle of how to get files from the server to your local environment where you can work and, subsequently, how to get the files back when you’re done. In a secured environment, this may not be possible without violating protocols.

Let’s take a look at how we can build a makeshift system monitor using some common tools. This particular configuration is for a server running PHP and MySQL, with the tools Htop and mytop installed. These can easily be replaced with top and a small script to SHOW FULL PROCESSLIST, if needed. The point here is illustrative: a template to be modified according to each specific environment.

(Note: I generally prefer tmux to Gnu Screen but screen is the tool more likely to be already installed, so we’ll use it for this example.)

We’re going to make a set of windows, via a configuration file, to help us keep tabs on what is happening in this system. In so doing, we’ll be using the well-known tools less and watch. More specifically, less +F, which tells less to “scroll forward”. In other words, less will continue to read the file, making sure any new lines are added to the display. You can exit this mode with CTRL+c, search the file (/), quit (q) or get back into scroll-forward mode with another uppercase F.

Using watch, we’ll include the “-d” flag which tells watch we want to highlight any changes (differences).

We will create a configuration file for screen by typing:

> vim monitor.screenrc

In the file, paste the following:

# Screen setup for system monitoring
# screen -c monitor.screenrc
hardstatus alwayslastline
hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{=kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B}%Y-%m-%d %{W}%c %{g}]'

screen -t htop 0 htop
screen -t mem 1 watch -d "free -t -m"
screen -t mpstat 2 watch -d "mpstat -A"
screen -t iostat 3 watch -d "iostat"
screen -t w 4 watch -d "w"
screen -t messages 5 less +F /var/log/messages
screen -t warn 6 less +F /var/log/warn
screen -t database 7 less +F /srv/www/log/db_error
screen -t mytop 8 mytop
screen -t php 9 less +F /srv/www/log/php_error

(Note: -t sets the title, then the window number, followed by the command running in that window)

Save the file (:wq) or, if you’d prefer, you can grab a copy by right-clicking and saving this file.

Then we will execute screen using this configuration, as noted in the comment:

> screen -c monitor.screenrc

Then you can switch between windows using CTRL+a, n (next) or CTRL+a, p (previous).

I use this technique on my own computers, running in a TTY different from the one used by X. If the graphical interface should get flaky, I can simply switch to that TTY (e.g., CTRL+ALT+F5) to see what things are going on — and take corrective actions, if needed.
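
And since I admitted preferring tmux: the same idea translates roughly like so. This is an untested sketch, the file name is mine and command syntax can vary between tmux versions:

# monitor.tmux.conf -- start with: tmux -f monitor.tmux.conf attach
new-session -s monitor -n htop htop
new-window -n mem 'watch -d "free -t -m"'
new-window -n mpstat 'watch -d "mpstat -A"'
new-window -n messages 'less +F /var/log/messages'
new-window -n mytop mytop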


Permalink: 20130523.gnu.screen.system.monitor

Wed, 08 May 2013

Deleting backup files left behind by Vim

It’s generally a great idea to have Vim keep backups. Once in a while, they can really save your bacon.

The other side of that coin, though, is that they can get left behind here and there, eventually causing aggravation.

Here’s a snippet to find and eliminate those files from the current directory down:

find ./ -name '*~' -exec rm '{}' \; -print -or -name ".*~" -exec rm {} \; -print
This uses find, from the current directory down (./), to execute an rm on every file whose name ends in a tilde (~).
Alternatively, you could just store your backups elsewhere. In Vim, see :help backupdir for more information.
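
If you take the backupdir route, make the directory once with mkdir -p ~/.vim/backup, then point Vim at it from your ~/.vimrc:
" keep backups, tucked into one out-of-the-way directory
set backup
set backupdir=~/.vim/backup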


Permalink: 20130508.delete.vim.backups