c0d3 :: j0rg3

A collection of tips, tricks and snips. A proud Blosxom weblog. All code. No cruft.

Thu, 13 Jul 2017

Improved Anonymity on Kali Linux

I’m not entirely certain when BackTrack/Kali began behaving more like a regular desktop distro but I seem to recall that originally, networking subsystems were down when you booted up into Run Level 3. It was up to you to turn on the interfaces and fire up a GUI if one was desired. IMO, that’s precisely how it should be. I get it. Most of us won’t ever find ourselves in a clandestine lot, inside of a snack-and-caffeine-filled, nondescript conversion van with a Yagi pointed at the bubble-window, ready to pilfer innocent datums just trying to get by in this lossy-protocoled, collision-rife world.

Rather, very many of us just want the stinking box online so we can run through our tutorials and hack our own intentionally vulnerable VMs. A thorough taste of hacking’s unglamorous underbelly is quite enough for many.

I’m confident that the BT fora were inundated with fledgling hackers complaining that their fresh install couldn’t find WiFi or didn’t load the desktop. However, I feel that distros dedicated to the Red Team should try to instill good habits. Having your machine boot and activate an interface, announcing your presence and spewing out its MAC address and hostname, is bad for business. Booting into a (comparatively) heavy GUI is also not where I want to begin.

Let’s imagine that we’re trying to crack into a thing. Don’t we want to apply maximal CPU resources to the task, rather than spending cycles on GUI elements that bring little beyond cost? Notice that very many of the related tools still live on the CLI. The typical course of development (e.g., Nmap, Metasploit) is that the CLI version is thoroughly developed before someone drops a GUI atop it (respectively: Zenmap, Armitage).


So let’s take our Kali and make a few quick changes. We want to boot up in text/CLI mode and we want networking left off until we choose to make noise. Further, we want to randomize our MAC address and hostname at every boot.

We’ll use iwconfig to enumerate our wireless interfaces.
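> iwconfig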
lo        no wireless extensions.

wlan1     IEEE 802.11 ESSID:"ESSID"
          Mode:Managed Frequency:2.412 GHz Access Point: 17:23:53:96:BE:67
          Bit Rate=72.2 Mb/s Tx-Power=20 dBm
          Retry short limit:7 RTS thr:off Fragment thr:off
          Encryption key:off
          Power Management:off
          Link Quality=70/70 Signal level=-21 dBm
          Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
          Tx excessive retries:253 Invalid misc:400 Missed beacon:0

eth0      no wireless extensions.

wlan0     IEEE 802.11 ESSID:off/any
          Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm
          Retry short limit:7 RTS thr:off Fragment thr:off
          Encryption key:off
          Power Management:on

We have two wireless interfaces: wlan0 and wlan1.

Okay, first let’s configure the system to start up in text mode:
> systemctl set-default multi-user.target
Created symlink /etc/systemd/system/default.target → /lib/systemd/system/multi-user.target.

Traditionally from text mode, we bring up the GUI desktop with the command startx. Since we don’t yet have that command, let’s create it:
> echo "systemctl start gdm3.service" > /usr/sbin/startx && chmod +x /usr/sbin/startx

Disable network-manager autostart:
> systemctl disable network-manager.service
> sed -i 's/5min/30sec/' /etc/systemd/system/network-online.target.wants/networking.service
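Networking will now stay quiet until we ask for it. When we do choose to make noise:
> systemctl start network-manager.service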

Now, let’s randomize our hostname and MAC addresses at every boot by adding some cronjobs:
> crontab -e

We’ll add two jobs to randomize the MAC addresses and one for our hostname:
@reboot macchanger -r wlan0
@reboot macchanger -r wlan1
@reboot hostname `strings /dev/urandom | grep -o '[[:alnum:]]' | head -n 30 | tr -d '\n'`
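After a reboot, it’s worth confirming that the randomization took:
> macchanger -s wlan0
> hostname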

We’re good! We’ve improved efficiency by deferring the GUI until we genuinely want it and improved anonymity by randomizing some common identifiers of the rig.


Permalink: 2017-07-10.improved.anonymity.on.kali.linux

Tue, 07 Mar 2017

Privacy Part II: VPN/IPVanish - Install IPVanish on Kali Linux

Okay, so you’re running Whonix, Tails or, at least, TorBrowser.

What’s next? You may wish to consider using a VPN. In simple terms, it’s somewhat similar to what Tor offers. That is: you connect to the VPN and your connection passes through them such that the site that you are visiting will see the VPN’s IP address rather than yours. Of course, that means that you can chain them.

That is: (You)->VPN->Tor->Exit node->Web site

The reason that you might feel compelled to take this step is that a party which is able to see your traffic into and out of Tor could still identify you. The thinking is that the parties who wish to interfere with your privacy could be compelled to run Tor bridges, relays and exit nodes. If traffic from your IP address could be matched to requests coming from the Tor exit node then you could, effectively, be identified.

Some people hold that using a VPN to access Tor does not improve your anonymity. I am not among them. In particular, you will find that IPVanish offers VPN service for under $7 per month and is popular among users of the Tor network, which means that in addition to IPVanish not logging your traffic, there’s an excellent chance that other users are going from IPVanish into Tor, helping to reduce the uniqueness of your traffic.

By the way, I’d suggest poking around the web a little bit. While their prices are already great, you can find some even deeper discounts: https://signup.ipvanish.com/?aff=vpnfan-promo

IPVanish’s site offers instructions for installing the VPN in Ubuntu so we’re going to take a look at using IPVanish in Kali — including an interesting and unanticipated snag (and, of course, how to fix it).

Let’s grab the OpenVPN configuration:
wget http://www.ipvanish.com/software/configs/ca.ipvanish.com.crt
wget http://www.ipvanish.com/software/configs/ipvanish-US-New-York-nyc-a01.ovpn

We will need the OpenVPN package for Gnome:
apt install network-manager-openvpn-gnome

Click on the tray in the upper right corner, then the wrench/screwdriver icon:

Select the ‘Network’ folder icon:

We’re choosing ‘Wired’ (even though we’re using the wlan0 interface):

We’re setting up a VPN, of course:

Import from file:

Choose the configuration file that we downloaded previously:

Enter ‘User name’ and ‘Password’:

We are connected!

Verified at IPVanish’s site: https://www.ipvanish.com/checkIP.php

And this is where I had anticipated the installation instructions would end.

I just wanted to check a few more things. And I would love to tell you that it was simply my thoroughness and unbridled CLI-fu that led me to discover that I was still making IPv6 connections outside of the VPN. It seems this wasn’t noticed by the test at IPVanish because they deal only in IPv4. I was able to confirm my IPv6 address and geolocation by using: http://whatismyipaddress.com/

Further, a quick check establishes that the test at IPVanish is not IPv6-compatible.
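One way to see this (an assumption on my part that the host publishes no AAAA record; their setup may change) is to force curl over IPv6:

curl -6 https://www.ipvanish.com/checkIP.php

If there’s no IPv6 address to resolve, curl simply fails to connect; a checker that can’t be reached over IPv6 can’t see your IPv6 traffic.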

The easy fix here is to disable IPv6 locally. It is plausible that this could cause unintended consequences and, to be thorough, it would be best to handle your VPN at the firewall. With support for OpenVPN, you’ll be able to get this running with a huge variety of routing/firewall solutions. You can grab any number of tiny computers and build a professional-quality firewall with something like pfSense. Maybe we’ll take a look at getting that configured in a future post.

But, for now, let’s shut down IPv6 in a way that doesn’t involve any grandiose hand-waving magic (i.e., unexplained commands which probably should work) and then test to gain confidence in our results.

Let’s use sysctl to find our IPv6 kernel bits and turn them off. Then we’ll load our configuration changes. As a safety, it wouldn’t be a bad idea to look in /etc/sysctl.conf to verify that there aren’t any IPv6 configs already in there.

We’ll back up our config file, then turn off everything IPv6 by listing every flag containing the words ‘ipv6’ and ‘disable’ and flipping its value:
cp /etc/sysctl.conf /etc/$(date +%Y-%m-%d.%H-%M-%S).sysctl.conf.bak && \
sysctl -a | grep -i ipv6 | grep disable | sed 's/0$/1/' >> /etc/sysctl.conf && \
sysctl -p

To explain what we’re doing:
List all kernel flags; show only those containing the string ‘ipv6’; of those that remain, show only those that contain the string ‘disable’:
sysctl -a | grep -i ipv6 | grep disable
Replace the trailing 0 values with 1, to turn ON the disabling (anchoring on the end of the line so we don’t clobber the 0 in interface names like wlan0), by piping output to:
sed 's/0$/1/'
That all gets stuck on the end of ‘sysctl.conf’ by redirecting stdout to append to the end of that file:
>> /etc/sysctl.conf
Then we reload with:
sysctl -p
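As a quick sanity check that the flags took hold:

> sysctl net.ipv6.conf.all.disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 1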

Then, as a final sanity-check, we’ll make sure we can’t find any IPv6 packets sneaking about:
tcpdump -t -n -i wlan0 -s 256 -vv ip6

At this point, assuming our tcpdump doesn’t show any traffic, we should be IPv6-free with all of our IPv4 traffic shipped off nicely through IPVanish!


Permalink: 20170307.privacy.vpn.ipvanish

Sun, 13 Jul 2014

Simple Protection with iptables, ipset and Blacklists

Seems I’ve always got just a few more things going on than I can comfortably handle. One of them is an innocent little server holding the beginnings of a new project.

If you expose a server to the Internet, your ports will very quickly be scanned and tested. If you’ve an SSH server, there are going to be attempts to log in as ‘root’, which is why it is ubiquitously advised that you disable root login. It’s also why many advise against allowing passwords at all.

We could talk for days about improvements; it’s usually not difficult to introduce some form of two-factor authentication (2FA) for sensitive points of entry such as SSH. You can install monitoring software like Logwatch, which can summarize important points from your logs, such as who has logged in via SSH, how many times root was used, etc.

DenyHosts and Fail2ban are great ways to secure things, according to your needs.

DenyHosts works primarily with SSH and asks very little from you in the way of configuration, especially if you’re using a package manager to install a version that is configured for the distribution on which you’re working. If you’re installing from source you may need to find where your SSH logs are (e.g., /var/log/secure, /var/log/auth.log). It’s extremely easy to set up DenyHosts to synchronize, so that you’re automatically blocking widely-known offenders whether or not they’re after your server.

In contrast, Fail2ban is going to take more work to get set up. However, it is extremely configurable and works with any log file you point it toward, which means that it can watch anything (e.g., FTP, web traffic, mail traffic). You define your own jails, which means you can ban problematic IP addresses according to preference. Ban bad HTTP attempts from HTTP only, or stick their noses in the virtual corner and refuse all traffic from them until they’ve served their time-out. You can even use Fail2ban to scan its own logs, so repeat offenders can be locked out for longer.

Today we’re going to assume that you’ve a new server that shouldn’t be seeing any traffic except from you and any others involved in the project. In that case, you probably want to block traffic pretty aggressively. If you’ve physical access to the server (or the ability to work with staff at the datacenter) then it’s better to err in the direction of accidentally blocking good guys than trying to be overly fault-tolerant.

The server we’re working on today is a Debian Wheezy system. It has become a common misconception that Ubuntu and Debian are, for all intents and purposes, interchangeable. They’re similar in many respects, and Ubuntu is great preparation for using Debian, but they are not the same. The differences, I think, won’t matter for this exercise, but I can’t be certain because this was written using Wheezy.

Several minutes after bringing my new server online, I started seeing noise in the logs. I was still getting set up and really didn’t want to stop and take protective measures but there’s no point in securing a server after it’s been compromised. The default Fail2ban configuration was too forgiving for my use: it scanned for 10 minutes and banned for 10 minutes. Since only a few people should be accessing this server, there’s no reason for anyone to be trying a different password every 15 minutes (for hours).

I found a ‘close-enough’ script and modified it. Here, we’ll deal with a simplified version.

First, let’s create a chain for these ne’er-do-wells in iptables:
iptables -N bad_traffic
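Note that a user-defined chain sees no packets until a built-in chain jumps to it. If your ruleset doesn’t already reference bad_traffic, a minimal hook (a sketch, assuming you filter inbound traffic in INPUT) is:

iptables -A INPUT -j bad_traffic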

For this one, we’ll use Perl. We’ll look at our Apache log files to find people sniffing ‘round and we’ll block their traffic. Specifically, we’re going to check Apache’s ‘error.log’ for the phrases ‘File does not exist’ and ‘client denied by server configuration’ and block people causing those errors. This would be excessive for servers intended to serve the general populace. For a personal project, it works just fine as a ‘DO NOT DISTURB’ sign.


#!/usr/bin/env perl
use strict;
use warnings;
use POSIX qw(strftime);

# Log file and iptables chain, overridable from the command line.
my $log = ($ARGV[0] ? $ARGV[0] : "/var/log/apache2/error.log");
my $chain = ($ARGV[1] ? $ARGV[1] : "bad_traffic");

# Offending IPs harvested from Apache's error log.
my @bad = `grep -iE 'File does not exist|client denied by server configuration' $log | cut -f8 -d" " | sed 's/]//' | sort -u`;
# IPs already DROPped in our chain; note the escaped \$4 so awk, not Perl, expands it.
my @ablk = `/sbin/iptables -S $chain | grep DROP | awk '{print \$4}' | cut -d"/" -f1`;
chomp(@bad, @ablk);

foreach my $ip (@bad) {
    if (!grep { $_ eq $ip } @ablk) {
        `/sbin/iptables -A $chain -s $ip -j DROP`;
        print strftime("%b %d %T", localtime(time)) . " badht: blocked bad HTTP traffic from: $ip\n";
    }
}
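Dropped into root’s crontab (the script path here is hypothetical; adjust to wherever you keep it), it can sweep for new offenders every few minutes:

*/5 * * * * /usr/local/sbin/bad_traffic.pl >> /var/log/bad_traffic.log 2>&1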

That gives us some great, utterly unforgiving, blockage. Looking at the IP addresses attempting to pry, I noticed that most of them were on at least one of the popular block-lists.

So let’s make use of some of those block-lists! I found a program intended to apply those lists locally but, of course, it didn’t work for me. Here’s a similar program; this one will use ipset for managing the block-list though only minor changes would be needed to use iptables as above:

#!/bin/bash
IP_TMP=ip.tmp
IP_BLACKLIST_TMP=ip-blacklist.tmp

IP_BLACKLIST=ip-blacklist.conf

WIZ_LISTS="chinese nigerian russian lacnic exploited-servers"

BLACKLISTS=(
"http://danger.rulez.sk/projects/bruteforceblocker/blist.php" # BruteForceBlocker IP List
"http://rules.emergingthreats.net/blockrules/compromised-ips.txt" # Emerging Threats - Compromised IPs
"http://www.spamhaus.org/drop/drop.txt" # Spamhaus Don't Route Or Peer List (DROP)
"http://www.spamhaus.org/drop/edrop.txt" # Spamhaus Don't Route Or Peer List (DROP) Extended
"http://cinsscore.com/list/ci-badguys.txt" # C.I. Army Malicious IP List
"http://www.openbl.org/lists/base.txt" # OpenBL.org 90 day List
"http://www.autoshun.org/files/shunlist.csv" # Autoshun Shun List
"http://lists.blocklist.de/lists/all.txt" # blocklist.de attackers
)

for address in "${BLACKLISTS[@]}"
do
echo -e "\nFetching $address\n"
curl "$address" >> $IP_TMP
done

for list in $WIZ_LISTS
do
wget "http://www.wizcrafts.net/$list-iptables-blocklist.html" -O - >> $IP_TMP
done

wget 'http://wget-mirrors.uceprotect.net/rbldnsd-all/dnsbl-3.uceprotect.net.gz' -O - | gunzip | tee -a $IP_TMP

grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}[/][0-9]\{1,3\}' $IP_TMP | tee -a $IP_BLACKLIST_TMP
grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}[^/]' $IP_TMP | tee -a $IP_BLACKLIST_TMP

sed -i 's/\t//g' $IP_BLACKLIST_TMP
sort -u $IP_BLACKLIST_TMP | tee $IP_BLACKLIST

rm $IP_TMP
rm $IP_BLACKLIST_TMP

wc -l $IP_BLACKLIST

if hash ipset 2>/dev/null
then
    # Create the set on first run, then repopulate it from scratch.
    ipset list bloxlist >/dev/null 2>&1 || ipset create bloxlist hash:net
    ipset flush bloxlist
    while IFS= read -r ip
    do
        ipset add bloxlist "$ip"
    done < $IP_BLACKLIST
else
    echo -e '\nipset not found\n'
    echo -e "\nYour bloxlist file is: $IP_BLACKLIST\n"
fi
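Note that populating the set doesn’t block anything by itself; iptables still has to consult it. Assuming the set name above, a typical rule (a sketch, not part of the original script) would be:

iptables -I INPUT -m set --match-set bloxlist src -j DROP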


Download here:
    bad_traffic.pl
    bloxlist.sh


Permalink: 20140713.simple.protection.with.iptables.ipset.and.blacklilsts

Thu, 06 Jun 2013

Managing to use man pages through simple CLI tips

Recently, an author I admire and time-honored spinner of the Interwebs, Tony Lawrence, emphasized the value of using man pages (manual pages: the documentation available from the command line, e.g., man ls) as a sanity check before getting carried away with powerful commands. I didn’t know about this one but he has written about a situation in which killall could produce some shocking, and potentially quite unpleasant, results.

Personally, I often quickly check man pages to be certain that I am using the correct flags or, as in the above case, anticipating results that bear some resemblance to what is actually likely to happen. Yet, it seems many people flock toward SERPs (Search Engine Results Pages: a tasteful replacement for mentioning any particular search-engine by name; also useful as a verb, as in “I dunno. You’ll have to SERP it.”) for this information.

Perhaps the most compelling reason to head for the web is that your cursor is sitting amid the line you’re working on and you don’t want to disturb the command. SERPing the command, however, could easily lead you to information about a variant that is more common than the one available to you. More importantly, the information retrieved from the search engine is almost certainly written by someone who did read the man page, and may even come with the admonishment that you RTFM (Read The F#!$!*#’n Manual) as a testament to the importance of developing this habit.

This can be made easier with just a few CLI shortcuts.

<CTRL+u> to cut what you have typed so far and <CTRL+y> to paste it back.

That is, you press <CTRL+u> and the line will be cleared, so you can then type man {command} and read the documentation. Don’t hesitate to jot quick notes of which flags you intend to use, if needed. Then exit the man page, press <CTRL+y> and finish typing right where you left off.

This is another good use for screen or tmux but, let’s face it, there are times when you don’t want the overhead of opening another window for a quick look-up, and even instances when these tools aren’t available.

A few other tips to make life easier when building complex commands:

Use the command fc to open up an editor in which you can build your complex command and, optionally, even save it as a shell script for future reuse.

Repeat the last word from the previous command (often a filename) with <ALT+.> or grab an item from the last command by position, counting from 0 (the command itself):
> ls -lahtr *archive*
<ALT+0+.> : ls
<ALT+1+.> : -lahtr
<ALT+2+.> : *archive*

You can also use Word Designators to pull items from history, such as adding sudo to the front of the last command typed:
sudo !!

This allows for tricks like replacing bits of a previous command:
!:s/misspelled/corrected/
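Bash also offers quick-substitution as a shorthand for the same trick:

^misspelled^corrected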

Lastly, if you need a command that was typed earlier, you can search history by pressing <CTRL+r> and start typing an identifying portion of the command.

(Note: I have used these in Zsh and Bash, specifically. They can, however, be missing or overwritten; if a feature you want isn’t working, you can bind keys in a configuration file. Don’t just write it off: once you’ve solved the problem, it will never again be an intimidating one.)

Happy hacking!


Permalink: 20130606.managing.to.use.man.pages

Wed, 15 May 2013

Git: an untracked mess?

There may be times when you find your Git repository burdened with scads of untracked files left aside while twiddling, testing bug patches, or what-have-youse.

For the especially scatter-brained among us, these things can go unchecked until a day when the useful bits of a git status scroll off the screen due to utterly unimportant stuff. Well, hopefully unimportant.

But we’d better not just cleave away everything that we haven’t checked in. You wonder:
What if there’s something important in one of those files?

You are so right!

Let’s fix this!

Firstly, we want a solution that’s reproducible. Only want to invent this wheel once, right?

Let’s begin with the play-by-play:

Git, we want a list of what isn’t tracked: git ls-files -o --exclude-standard -z

We’ll back these files up in our home directory (~) using cpio, but we don’t want a poorly-named directory, or finding anything will become its own obstacle. So we’ll take the current date (date +%Y-%m-%d), directory (pwd) and branch we’re using (git branch) and twist all of it into a meaningful, but appropriate, directory name using sed: git ls-files -o --exclude-standard -z | cpio -0pmdu ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/

Then tell Git to remove the untracked files and directories: git clean -d -f
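If you’re at all unsure, preview what would be deleted with a dry-run first: git clean -d -n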

Ahhhh… Much better. Is there anything left out? Perhaps. What if we decide that moving these files away was a mistake? The kind of mistake that breaks something. If we realize right away, it’s easily-enough undone. But what if we break something and don’t notice for a week or two? It’d probably be best if we had an automated script to put things back the way they were. Let’s do that.

Simple enough. We’ll just take the opposite commands and echo them into a script to be used in case of emergency.

Create the restore script (restore.sh), to excuse faulty memory: echo "(cd ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/; find . -type f \( ! -iname 'restore.sh' \) | cpio -pdm `pwd`)" > ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/restore.sh

Make the restore script executable: chmod u+x ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/restore.sh

Lastly, the magic, compressed into one line that will stop if any command does not report success: a='untracked-git-backup-'`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`; git ls-files -o --exclude-standard -z | cpio -0pmdu ~/$a/ && git clean -d -f && echo "(cd ~/$a/; find . -type f \( ! -iname 'restore.sh' \) | cpio -pdm `pwd`)" > ~/$a/restore.sh && chmod +x ~/$a/restore.sh; unset a


Permalink: 20130515.git.untracked.mess