c0d3 :: j0rg3

A collection of tips, tricks and snips. A proud Blosxom weblog. All code. No cruft.

Thu, 13 Jul 2017

Improved Anonymity on Kali Linux

I’m not entirely certain when BackTrack/Kali began behaving more like a regular desktop distro but I seem to recall that originally, networking subsystems were down when you booted up into Run Level 3. It was up to you to turn on the interfaces and fire up a GUI if such was desired. IMO, that’s precisely how it should be. I get it. Most of us won’t ever find ourselves in a clandestine lot, inside of a snack-and-caffeine-filled, nondescript conversion van with a Yagi pointed at the bubble-window, ready to pilfer innocent datums just trying to get by in this lossy-protocoled, collision-rife world.

Rather, very many of us just want the stinking box online so we can run through our tutorials and hack our own intentionally vulnerable VMs. A thorough taste of hacking’s un-glamorous underbelly is quite enough for many.

I’m confident that the BT fora were inundated with fledgling hackers complaining that their fresh install couldn’t find WiFi or didn’t load the desktop. However, I feel that distros dedicated to the Red Team should try to instill good habits. Having your machine boot and activate an interface announcing your presence and spewing out MAC and hostname is bad for business. Booting into a (comparatively) heavy GUI is also not where I want to begin.

Let’s imagine that we’re trying to crack into a thing. Don’t we want to apply maximal CPU resources to that task, rather than spending them on GUI elements that bring little beyond cost? If you notice, very many of the related tools still live on the CLI. The typical course of development (e.g., Nmap, Metasploit) is that the CLI version is thoroughly developed before someone drops a GUI atop it (respectively: Zenmap, Armitage).


So let’s take our Kali and make a few quick changes. We want to boot up in text/CLI mode and we want networking left off until we choose to make noise. Further, we want to randomize our MAC address and hostname at every boot.

We’ll use iwconfig to enumerate our wireless interfaces.
lo        no wireless extensions.

wlan1     IEEE 802.11 ESSID:"ESSID"
          Mode:Managed Frequency:2.412 GHz Access Point: 17:23:53:96:BE:67
          Bit Rate=72.2 Mb/s Tx-Power=20 dBm
          Retry short limit:7 RTS thr:off Fragment thr:off
          Encryption key:off
          Power Management:off
          Link Quality=70/70 Signal level=-21 dBm
          Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
          Tx excessive retries:253 Invalid misc:400 Missed beacon:0

eth0      no wireless extensions.

wlan0     IEEE 802.11 ESSID:off/any
          Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm
          Retry short limit:7 RTS thr:off Fragment thr:off
          Encryption key:off
          Power Management:on

We have two wireless interfaces: wlan0, wlan1

Okay, first let’s configure to start up in text mode:
> systemctl set-default multi-user.target
Created symlink /etc/systemd/system/default.target → /lib/systemd/system/multi-user.target.

Traditionally from text mode, we bring up the GUI desktop with the command startx. Since we don’t yet have that command, let’s create it:
> echo "systemctl start gdm3.service" > /usr/sbin/startx && chmod +x /usr/sbin/startx

Disable network-manager autostart:
> systemctl disable network-manager.service
> sed -i 's/5min/30sec/' /etc/systemd/system/network-online.target.wants/networking.service
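
When we do decide to make some noise, we can bring things back up by hand, either with the same service we just disabled or by raising an interface directly with iproute2:
> systemctl start network-manager.service
> ip link set wlan0 up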

Now, let’s randomize our hostname and MAC addresses at every boot by adding some cronjobs:
> crontab -e

We’ll add two jobs to randomize the MAC addresses and one for our hostname:
@reboot macchanger -r wlan0
@reboot macchanger -r wlan1
@reboot hostname `strings /dev/urandom | grep -o '[[:alnum:]]' | head -n 30 | tr -d '\n'`
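
After a reboot, a quick check confirms the randomization took (macchanger’s -s flag simply shows the current MAC):
> macchanger -s wlan0
> hostname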

We’re good! We’ve improved efficiency by saving the GUI for when we genuinely want it and improved anonymity by randomizing some of the common ways of identifying the rig.


Permalink: 2017-07-10.improved.anonymity.on.kali.linux

Tue, 07 Mar 2017

Privacy Part II: VPN/IPVanish - Install IPVanish on Kali Linux

Okay, so you’re running Whonix, Tails or, at least, TorBrowser.

What’s next? You may wish to consider using a VPN. In simple terms, it’s somewhat similar to what Tor offers. That is: you connect to the VPN and your connection passes through them such that the site that you are visiting will see the VPN’s IP address rather than yours. Of course, that means that you can chain them.

That is: (You)->VPN->Tor->Exit node->Web site

The reason that you might feel compelled to take this step is that a party which is able to see your traffic into and out of Tor could still identify you. The thinking is that the parties who wish to interfere with your privacy could be compelled to run Tor bridges, relays and exit nodes. If traffic from your IP address could be matched to requests coming from the Tor exit node then you could, effectively, be identified.

Some people hold that using a VPN to access Tor does not improve your anonymity. I am not among them. In particular, you will find that IPVanish offers VPN service for under $7 per month and is popular among users of the Tor network. That popularity means that, in addition to IPVanish not logging your traffic, there’s an excellent chance that other users are also going from IPVanish into Tor, which helps reduce the uniqueness of your traffic.

By the way, I’d suggest poking around the web a little bit. While their prices are already great you can find some even deeper discounts: https://signup.ipvanish.com/?aff=vpnfan-promo

IPVanish’s site offers instructions for installing the VPN in Ubuntu so we’re going to take a look at using IPVanish in Kali — including an interesting and unanticipated snag (and, of course, how to fix it).

Let’s grab the OpenVPN configuration:
wget http://www.ipvanish.com/software/configs/ca.ipvanish.com.crt; wget http://www.ipvanish.com/software/configs/ipvanish-US-New-York-nyc-a01.ovpn

We will need the OpenVPN package for Gnome:
apt install network-manager-openvpn-gnome

Click on the tray in the upper right corner, then the wrench/screwdriver icon:

Select the ‘Network’ folder icon:

We’re choosing ‘Wired’ (even though we’re using the wlan0 interface):

We’re setting up a VPN, of course:

Import from file:

Choose the configuration file that we downloaded previously:

Enter ‘User name’ and ‘Password’:

We are connected!

Verified at IPVanish’s site: https://www.ipvanish.com/checkIP.php

And this is where I had anticipated the installation instructions would end.

I just wanted to check a few more things. And I would love to tell you that it was simply my thoroughness and unbridled CLI-fu that led me to discover that I was still making ipv6 connections outside of the VPN. It seems that this wasn’t noticed by the test at IPVanish because they deal only in ipv4. I was able to confirm my ipv6 address and geolocation by using: http://whatismyipaddress.com/

Further, a quick test establishes that the IP check at IPVanish is not ipv6-aware.

The easy fix here is to disable ipv6 locally. It is plausible that this could cause unintended consequences and, to be thorough, it would be best to handle your VPN at the firewall. Since OpenVPN is so widely supported, you’ll be able to get this running with a huge variety of routing/firewall solutions. You can grab any number of tiny computers and build a professional-quality firewall solution with something like pfSense. Maybe we’ll take a look at getting that configured in a future post.

But, for now, let’s shut down ipv6 in a way that doesn’t involve any grandiose hand-waving magic (i.e., unexplained commands which probably should work) and then test to get confidence in our results.

Let’s use sysctl to find our ipv6 kernel bits and turn them off. Then we’ll load our configuration changes. As a safety, it wouldn’t be a bad idea to look in /etc/sysctl.conf to verify that there aren’t any ipv6 configs in there.

We’ll back up our config file then turn off everything ipv6 by listing everything with the words ‘ipv6’ and ‘disable’:
cp /etc/sysctl.conf /etc/$(date +%Y-%m-%d.%H-%M-%S).sysctl.conf.bak && \
sysctl -a | grep -i ipv6 | grep disable | sed 's/ = 0$/ = 1/' >> /etc/sysctl.conf && \
sysctl -p

To explain what we’re doing:
List all kernel flags; show only those containing the string ‘ipv6’; of those that remain, show only those that contain the string ‘disable’:
sysctl -a | grep -i ipv6 | grep disable
Replace the trailing 0 values with 1, to turn ON the disabling (anchoring on the end of the line so that interface names like wlan0 are left untouched), by piping output to:
sed 's/ = 0$/ = 1/'
That all gets stuck on the end of ‘sysctl.conf’ by redirecting stdout to append to the end of that file:
>> /etc/sysctl.conf
Then we reload with:
sysctl -p
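
As a quicker spot-check, assuming the usual /proc layout, the ‘all’ disable flag should now read 1:
cat /proc/sys/net/ipv6/conf/all/disable_ipv6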

Then as a final sanity-check we’ll make sure we can’t find any ipv6 packets sneaking about:
tcpdump -t -n -i wlan0 -s 256 -vv ip6

At this point, assuming our tcpdump doesn’t show any traffic, we should be ipv6-free with all of our ipv4 traffic shipped-off nicely through IPVanish!


Permalink: 20170307.privacy.vpn.ipvanish

Sat, 04 Mar 2017

Official(ish) deep dark onion code::j0rg3 mirror

Recently I decided that I wanted my blog to be available inside of the Deep, Dark Onion (Tor).

First time around, I set up a proxy that I modified to access only the clear web version of the blog and to make that available inside Tor as a ‘hidden service’.

My blog is hosted on equipment provided by the kind folk at insomnia247.nl and I found that, within a week or so, the address of my proxy was blocked. It’s safe for us to assume that it was simply because of the outrageous popularity it received inside Tor.

By “safe for us to assume” I mean that it is highly probable that no significant harm would come from making that assumption. It would not be a correct assumption, though.

What’s more true is that within Tor things are pretty durn anonymous. Your logs will show Tor traffic coming from 127.0.0.1 only. This is a great situation for parties that would like to scan sites repeatedly looking for vulnerabilities — because you can’t block them. They can scan your site over and over and over. And the more features you have (e.g., comments, searches, any form of user input), the more attack vectors are plausible.

So why not scan endlessly? They do. Every minute of every hour.

Since insomnia247 is a provider of free shells, it is incredibly reasonable that they don’t want to take the hit for that volume of traffic. They’re providing this service to untold numbers of other users, blogs and projects.

For that reason, I decided to set up a dedicated mirror.

It works like this: my blog lives here. I have a machine at home which uses rsync to make a local copy of this blog. Immediately thereafter, it rsyncs any newly fetched data up to the mirror in onionland.
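
A rough sketch of that two-hop sync (the hostnames and paths here are placeholders rather than the real ones):
rsync -az --delete shelluser@insomnia247.nl:weblog/ ~/mirror/weblog/ && \
rsync -az --delete ~/mirror/weblog/ mirroruser@onion-vps:/var/www/weblog/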

After consideration, I realized that this was also a better choice just in case there is something exploitable in my blog. Instead of even risking the possibility that an attacker could get access to insomnia247, they can only get to my completely disposable VPS which has hardly anything on it except this blog and a few scripts to which I’ve already opened the source code.

I’ve not finished combing through but I’ve taken pains to ensure it doesn’t link back to the clear web. To be clear, there’s nothing inherently wrong with that. Tor users will only appear as the IP address of their exit node and should still remain anonymous. To me, it’s just onion etiquette. You let the end-user decide when they want to step outside.

To that end, the Tor mirror does not have the buttons to share to Facebook, Twitter, LinkedIn, Google Plus.

That being said, if you’re a lurker of those Internet back-alleys then you can find the mirror at: http://aacnshdurq6ihmcs.onion

Happy hacking, friends!


Permalink: 20170304.deep.dark.onion

Tue, 18 Mar 2014

Random datums with Random.org

I was reading about the vulnerability in the ‘random’ number generator in iOS 7 (http://threatpost.com/weak-random-number-generator-threatens-ios-7-kernel-exploit-mitigations/104757) and thought I would share a method that I’ve used. I certainly can’t speak to any version of iOS but, at least, I can help in GNU/Linux and the BSDs.

What if we want to get some random numbers or strings? We need a salt or something and, in the interest of best practices, we’re trying to limit the trust given to any single participant in the chain. That is, we do not want to generate the random data on the server that we are using. Let’s make it somewhere else!

YEAH! I know, right? Where to get random data can be a headache-inducing challenge. Fear not! The noble people at Random.org are using atmospheric noise to provide randomness for us regular folk!

Let’s head over and ask for some randomness:
https://www.random.org/strings/?num=1&len=16&digits=on&loweralpha=on&unique=on&format=html&rnd=new

Great! Still feels a little plain. What if big brother saw the string coming over and knew what I was doing? I mean, aside from the fact that I could easily wrap the string with other data of my choosing.

Random.org is really generous with the random data, so why don’t we take advantage of that? Instead of asking for a single string, let’s ask for several. Nobunny will know which one I picked, except for me!

We’ll do that by increasing the 'num' part of the request that we’re sending:
https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=html&rnd=new

They were also thinking of CLI geeks like us with the 'format' variable. We set that to 'plain' and all the fancy formatting for human eyes is dropped and we get a simple list that we don’t need to use any clever tricks to parse!
curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new'

Now we’ve got 25 strings at the CLI. We can pick one to copy and paste. But what if we don’t want to do the picking? Well, we can use the local system to do that. We’ll have it pick a number between 1 and 25. The modulo gives us 0 through 24, so we add 1 so that our number really does land between 1 and 25.
echo $(( (RANDOM % 25)+1 ))

Next, we’ll send our list to head with our random number. Head is going to give us the first X lines of what we send to it. So if our random number is 17, then it will give us the first 17 lines from Random.org.
curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new' | head -$(( (RANDOM % 25)+1 ))

After that, we’ll use the friend of head: tail. We’ll tell tail that we want only the last line of that pseudo-randomly sized list of random strings.
curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new' | head -$(( (RANDOM % 25)+1 )) | tail -1
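
If this becomes a habit, the whole pipeline tucks neatly into a small shell function (the name is just my own convenience):
randstr() {
  curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new' \
    | head -$(( (RANDOM % 25)+1 )) | tail -1
}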

And we can walk away feeling pretty good about having gotten some properly random data for our needs!


Permalink: 20140318.random.data

Wed, 15 May 2013

Git: an untracked mess?

There may be times when you find your Git repository burdened with scads of untracked files left aside while twiddling, testing bug patches, or what-have-youse.

For the especially scatter-brained among us, these things can go unchecked until a day when the useful bits of a git status scroll off the screen due to utterly unimportant stuff. Well, hopefully unimportant.

But we’d better not just cleave away everything that we haven’t checked in. You wonder:
What if there’s something important in one of those files?

You are so right!

Let’s fix this!

Firstly, we want a solution that’s reproducible. Only want to invent this wheel once, right?

Let’s begin with the play-by-play:

Git, we want a list of what isn’t tracked: git ls-files -o --exclude-standard -z

We’ll back these files up in our home directory (~), using CPIO, but we don’t want a poorly-named directory or finding anything will become its own obstacle. So we’ll use the current date (date +%Y-%m-%d), directory (pwd) and branch we’re using (git branch) and we’ll twist all of it into a meaningful, but appropriate, directory name using sed. git ls-files -o --exclude-standard -z | cpio -pmdu ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/

Then tell Git to remove the untracked files and directories: git clean -d -f

Ahhhh… Much better. Is there anything left out? Perhaps. What if we decide that moving these files away was a mistake? The kind of mistake that breaks something. If we realize right away, it’s easily-enough undone. But what if we break something and don’t notice for a week or two? It’d probably be best if we had an automated script to put things back the way they were. Let’s do that.

Simple enough. We’ll just take the opposite commands and echo them into a script to be used in case of emergency.

Create the restore script (restore.sh), to excuse faulty memory: echo "(cd ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/; find . -type f \( ! -iname 'restore.sh' \) | cpio -pdm `pwd`)" > ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/restore.sh

Make the restore script executable: chmod u+x ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/restore.sh

Lastly, the magic, compressed into one line that will stop if any command does not report success: a='untracked-git-backup-'`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`; git ls-files -o --exclude-standard -z | cpio -pmdu ~/$a/ && git clean -d -f && echo "(cd ~/$a/; find . -type f \( ! -iname 'restore.sh' \) | cpio -pdm `pwd`)" > ~/$a/restore.sh && chmod +x ~/$a/restore.sh; unset a
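
And, should it turn out that those files were load-bearing after all, recovery is just a matter of running the generated script (the date, directory and branch in the name will be whatever was current when the backup was made; this one is only an example): sh ~/untracked-git-backup-2013-05-15.myproject.master/restore.sh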


Permalink: 20130515.git.untracked.mess

Mon, 13 May 2013

Zsh and hash

Documentation for this one seems a bit hard to come by but it is one of the things I love about Zsh.

I’ve seen many .bashrc files that have things like:
alias www='cd /var/www'
alias music='cd /home/j0rg3/music'

And that’s a perfectly sensible way to make life a little easier, especially if the paths are very long.

In Zsh, however, we can use the hash command and the shortcut we get from it works anywhere a path would. In other words, using the alias version above, if we want to edit ‘index.html’ in the ‘www’ directory, we would have to issue the shortcut to get there and then edit the file, in two steps:
> www
> vim index.html

The improved version in .zshrc would look like:
hash www=/var/www
hash -d www=/var/www

Then, at any time, you can use tilde (~) and your shortcut in place of path.
> vim ~www/index.html

Even better, it integrates with Zsh’s robust completions so you can, for example, type cd ~www/ and then use the tab key to cycle through subdirectories and files.

On this system, I’m using something like this:
(.zshrc)
hash posts=/home/j0rg3/weblog/posts
hash -d posts=/home/j0rg3/weblog/posts

Then we can make a function to create a new post, to paste into .zshrc. Since we want to be able to edit and save without partial posts becoming visible while we are working, we’ll use an extra .tmp extension at the end:
post() { vim ~posts/`date +%Y-%m`/`date +%Y%m%d`.$1.txt.tmp }

[ In-line date command unfamiliar? See earlier explanation ]
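
So starting this very post (the slug being whatever you pass as the first argument) is just:
> post zsh.and.hash
which opens ~posts/2013-05/20130513.zsh.and.hash.txt.tmp in Vim.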

But, surely there is going to be a point when we need to save a post and finish it later. For now, let’s assume that only a single post will be in limbo at any time. We definitely don’t want to have to remember the exact name of the post — and we don’t want to have to hunt it down every time.

We can make those things easier like this (note the single quotes, so the find runs when the alias is used rather than when .zshrc is loaded):
alias resume='vim `find ~posts/ -name "*.txt.tmp"`'

Now, we can just enter resume and the system will go find the post we were working on and open it up for us to finish. The file will need the extension renamed from .txt.tmp to only .txt to publish the post but, for the sake of brevity, we’ll think about that (and having multiple posts in editing) on another day.


Permalink: 20130513.zsh.and.hash

Tue, 07 May 2013

Welcome, traveler.

Thanks for visiting my little spot on the web. This is a Blosxom ‘blog which, for those who don’t know, is a CGI written in Perl using the file-system (rather than a database).

To the CLI-addicted, this is an awesome little product. Granted, you’re going to have to get under the hood if you’re going to make it the product you want. After some modules and hacking, I’m pleased with the result.

My posts are just text files, meaning I start a new one like: vim ~posts/`date +%Y%m%d`.brief.subject.txt

Note: the back-ticks (`) tell the system that you want to execute the command between ticks, and dynamically insert its output into the command. In this case, the command date with these parameters:
  1. (+) we’re going to specify a format
  2. (%Y) four-digit year
  3. (%m) two-digit month
  4. (%d) two-digit day
That means the command above will use Vim to edit a text file named ‘20130507.brief.subject.txt’ in the directory I have assigned to the hash of ‘posts’. (using hash this way is a function of Zsh that I’ll cover in another post)

In my CLI-oriented ‘blog, I can sprinkle in my own HTML or use common notation like wrapping a word in underscores to have it underlined, forward-slashes for italics and asterisks for bold.
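
So a line in a post like:
This word is _underlined_, this one is /italicized/ and this one is *bold*.
comes out the other side with the matching markup.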

Toss in a line that identifies tags and, since Perl is the beast of Regex, we pick up the tags and make them links, meta-tags, etc.

Things here are likely to change a lot at first, while I twiddle with CSS and hack away at making a Blosxom that perfectly fits my tastes — so don’t be too alarmed if you visit and things look a tad wonky. It just means that I’m tinkering.

Once the saw-horses have been tucked away, I’m going to take the various notes I’ve made during my years in IT and write them out, in a very simple breakdown, aimed at sharing these with people who know little about how to negotiate the command line. The assumption here is that you have an interest in *nix/BSD. If you do, and the CLI is not yet a major part of your computing experience, it probably will be at some point. If you’re working on systems remotely, graphical interfaces often just impede you.

Once you’ve started working on remote machines, the rest is inevitable. You can either remember how to do everything two ways, through a graphical interface and CLI — or just start using the CLI for everything.

So let’s take a little journey through the kinds of things that make me love the CLI.


Permalink: 20130507.greetings