c0d3 :: j0rg3

A collection of tips, tricks and snips. A proud Blosxom weblog. All code. No cruft.

Sat, 04 Mar 2017

Official(ish) deep dark onion code::j0rg3 mirror

Recently I decided that I wanted my blog to be available inside of the Deep, Dark Onion (Tor).

First time around, I set up a proxy that I modified to access only the clear-web version of the blog and make it available inside Tor as a ‘hidden service’.
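
On the Tor side, exposing a site as a hidden service only takes a couple of lines of torrc. A minimal sketch (the directory and ports here are assumptions, not the values I actually use):
# /etc/tor/torrc -- minimal hidden service stanza (directory and ports assumed)
HiddenServiceDir /var/lib/tor/blog/
HiddenServicePort 80 127.0.0.1:80
Tor then writes the generated .onion address into the hostname file inside HiddenServiceDir.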

My blog is hosted on equipment provided by the kind folk at insomnia247.nl and I found that, within a week or so, the address of my proxy was blocked. It’s safe for us to assume that it was simply because of the outrageous popularity it received inside Tor.

By “safe for us to assume” I mean that it is highly probable that no significant harm would come from making that assumption. It would not be a correct assumption, though.

What’s more true is that within Tor things are pretty durn anonymous. Your logs will show Tor traffic coming from 127.0.0.1 only. This is a great situation for parties that would like to scan sites repeatedly looking for vulnerabilities — because you can’t block them. They can scan your site over and over and over. And the more features you have (e.g., comments, searches, any form of user input), the more attack vectors are plausible.

So why not scan endlessly? They do. Every minute of every hour.

Since insomnia247 is a provider of free shells, it is incredibly reasonable that they don’t want to take the hit for that volume of traffic. They’re providing this service to untold numbers of other users, blogs and projects.

For that reason, I decided to set up a dedicated mirror.

Works like this: my blog lives here. I have a machine at home which uses rsync to make a local copy of this blog. Immediately thereafter, it rsyncs any newly fetched data up to the mirror in onionland.
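
The machine at home runs something like this from cron (a sketch; the host aliases and paths are made up):
#!/bin/bash
# pull-then-push mirror job (host aliases and paths are hypothetical)
set -euo pipefail

# pull the current blog down from insomnia247 into the local copy
rsync -az --delete insomnia247:weblog/ ~/mirror/weblog/

# push whatever changed up to the disposable VPS that serves the onion site
rsync -az --delete ~/mirror/weblog/ onion-vps:/var/www/weblog/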

After consideration, I realized that this was also a better choice just in case there is something exploitable in my blog. Instead of even risking the possibility that an attacker could get access to insomnia247, they can only get to my completely disposable VPS which has hardly anything on it except this blog and a few scripts to which I’ve already opened the source code.

I’ve not finished combing through, but I’ve taken efforts to ensure it doesn’t link back to the clear web. To be clear, there’s nothing inherently wrong with that: Tor users will only appear as the IP address of their exit node and should still remain anonymous. To me, it’s just onion etiquette. You let the end-user decide when they want to step outside.

To that end, the Tor mirror does not have the buttons to share to Facebook, Twitter, LinkedIn, or Google Plus.

That being said, if you’re a lurker of those Internet back-alleys then you can find the mirror at: http://aacnshdurq6ihmcs.onion

Happy hacking, friends!


Permalink: 20170304.deep.dark.onion

Fri, 17 Feb 2017

The making of a Docker: Part I - Bitmessage GUI with SSH X forwarding

Lately, I’ve been doing a lot of work from a laptop running Kali. Engaged in pursuit of a new job, I’m brushing up on some old tools and skills, exploring some bits that have changed.

My primary desktop rig is currently running Arch because I love the fine-grained control and the aggressive releases. Over the years, I’ve Gentoo’d and Slacked, Crunchbanged, BSD’d, Solarised, et cet. And I’ve a fondness for all of them, especially the security-minded focus of OpenBSD. But, these days we’re usually on Arch or Kali. Initially, I went with Black Arch on the laptop, but I felt the problems I was hitting and the ways I was fixing them were too specific to my situation to be good material for posts.

Anyway, I wanted to get Bitmessage running, corresponding to another post I have in drafts. On Kali, it wasn’t going well so I put it on the Arch box and just ran it over the network. A reasonable solution if you’re in my house but also the sort of solution that will keep a hacker up at night.

If you’re lucky, there’s someone maintaining a package for the piece of software that you want to run. However, that’s often not the case.

If I correctly recall, to “fix” the problem with Bitmessage on Kali would’ve required the manual installation of an older version of libraries that were already present. Those libraries should, in fact, be all ebony and ivory, living together in harmony. However, I just didn’t love the idea of that solution. I wanted to find an approach that would be useful on a broader scale.

Enter containerization/virtualization!

Wanting the lightest solution, I quickly went to Docker and realized something: I had not built a Docker container for a GUI application before. And Bitmessage’s CLI/daemon mode doesn’t provide the fluid UX that I wanted. Well, the easy way to get a GUI out of a Docker container is to forward DISPLAY as an environment variable (i.e., docker run -e DISPLAY=$DISPLAY). Splendid!
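
In its simplest form that looks something like the following (a sketch; it assumes the host’s X socket is shared into the container, and the image name and command are placeholders):
# forward the host display and X socket straight into the container
# (image name and command are placeholders)
docker run --rm \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    some-image ./PyBitmessage/src/bitmessagemain.py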

Except that it doesn’t work on current Kali, which is using QT4. There’s a known issue when graphical apps are run as root and, though it is fixed in QT5, we are using current Kali. And that means we are, by default, uid 0 and QT4.

I saw a bunch of workarounds that seemed to have spotty (at best) rates of success, including setting QT’s graphics system to Native and giving Xorg over to root. They mostly seemed to be cargo-cult solutions.

What made the most sense to my (generally questionable) mind was to use X forwarding. Since I had already been running Bitmessage over X forwarding from my Arch box, I knew it should work just the same.

To be completely truthful, the first pass I took at this was with Vagrant mostly because it’s SO easy. Bring up your Vagrant Box and then:
vagrant ssh -- -X
Voilà!

Having proof of concept, I wanted a Docker container. The reason for this is practical: Vagrant, while completely awesome, has substantially more overhead than Docker because it virtualizes an entire machine, kernel and all, whereas Docker containers share the host’s kernel. We don’t want a separate kernel running for each application. Therefore Docker is the better choice for this project.

Also, we want this whole thing to be seamless. We want to run the command bitmessage and it should fire up with minimal awkwardness and hopefully no extra steps. That is, we do not want to run the Docker container, then SSH into it, then execute Bitmessage as individual steps. Even though that’s going to be how we begin.

The Bitmessage wiki accurately describes how to install the software so we’ll focus on the SSH setup. Though when we build the Dockerfile we will need to add SSH to the list from the wiki.

We’re going to want the container to come up with the SSH daemon ready, since until it is we can’t SSH (with X forwarding) into the container. Then we’ll want to use SSH to kick off the Bitmessage application, drawing the graphical interface using our host system’s X11.
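
The SSH-related additions to the Dockerfile come out roughly like this (a sketch only; the published Dockerfile on Github is the real reference, and the base image is an assumption):
# sketch of the SSH bits only; base image and exact packages are assumptions
FROM debian:jessie

# install the Bitmessage dependencies from the wiki here, plus the SSH daemon
RUN apt-get update && apt-get install -y openssh-server && mkdir -p /var/run/sshd

# root may log in with keys only; authorized_keys arrives via the mounted volume
RUN sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin without-password/' /etc/ssh/sshd_config

# Bitmessage's port; the image must also arrange for sshd to be running once the container starts
EXPOSE 8444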

We’re going to take advantage of Docker’s -v --volume option which allows us to specify a directory on our host system to be mounted inside our container. Using this feature, we’ll generate our SSH keys on the host and make them automatically available inside the container. We’ll tuck the keys inside the directory that Bitmessage uses for storing its configuration and data. That way Bitmessage’s configuration and stored messages can be persistent between runs — and all of your pieces are kept in a single place.

When we build the container, /etc/ssh/sshd_config is set up to allow root login with keys only (no password authentication). So here’s how we’ll get this done:
mkdir -p ~/.config/PyBitmessage/keys #Ensure that our data directories exist
cd ~/.config/PyBitmessage/keys
ssh-keygen -b 4096 -P "" -C $"$(whoami)@$(hostname)-$(date -I)" -f docker-bitmessage-keys #Generate our SSH keys
ln -fs docker-bitmessage-keys.pub authorized_keys #for container to see pubkey

Build our container (sources available at Github and Docker) and we’ll write the script that handles Bitmessage to our preferences.

#!/bin/bash
# filename: bitmessage
set -euxo pipefail

# open Docker container:
# port 8444 available, sharing local directories for SSH and Bitmessage data
# detached, interactive, pseudo-tty (-dit)
# record container ID in $DID (Docker ID)
DID=$(docker run -p 8444:8444 -v ~/.config/PyBitmessage/:/root/.config/PyBitmessage -v ~/.config/PyBitmessage/keys/:/root/.ssh/ -dit j0rg3/bitmessage-gui bash)

# find IP address of new container, record in $DIP (Docker IP)
DIP=$(docker inspect $DID | grep IPAddress | cut -d '"' -f 4)

# pause for one second to allow container's SSHD to come online
sleep 1

# SSH into container and execute Bitmessage
ssh -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oIdentityFile=~/.config/PyBitmessage/keys/docker-bitmessage-keys -X $DIP ./PyBitmessage/src/bitmessagemain.py

# close container if Bitmessage is closed
docker kill $DID

Okay, let’s make it executable: chmod +x bitmessage

Put a link to it where it can be picked up system-wide: ln -fs ~/docker-bitmessage/bitmessage /usr/local/bin/bitmessage

There we have it! We now have a functional Bitmessage inside a Docker container. \o/

In a future post we’ll look at using eCryptfs to further protect our Bitmessage data stores.

  Project files: Github and Docker


Permalink: 20170217.making.a.docker.bitmessage

Mon, 17 Feb 2014

Installing INN’s Project Largo in a Docker container

Prerequisites: Docker, Git, SSHFS.

Today we’re going to look at using Docker to create a WordPress installation with the Project Largo parent theme and a child theme stub for us to play with.

Hart Hoover has established an image for getting a WordPress installation up and running using Docker. For whatever reason, it didn’t work for me out of the box, but we’re going to use his work to get started.

Let’s make a place to work and move into that directory:
cd ~
mkdir project.largo.wordpress.docker
cd project.largo.wordpress.docker

We’ll clone the Docker/WordPress project. For me, it couldn’t untar the latest WordPress. So we’ll download it outside the container, untar it and modify the Dockerfile to simply pull in a copy:
git clone https://github.com/hhoover/docker-wordpress.git
cd docker-wordpress/
ME=$(whoami)
wget http://wordpress.org/latest.tar.gz
tar xvf latest.tar.gz
sed -i 's/ADD http:\/\/wordpress.org\/latest.tar.gz \/wordpress.tar.gz/ADD \.\/wordpress \/wordpress/' Dockerfile
sed -i '/RUN tar xvzf \/wordpress\.tar\.gz/d' Dockerfile

Then, build the project which may take some time.
sudo docker build -t $ME/wordpress .

If you don’t already have the images Docker needs, the process should begin with something like:
Step 0 : FROM boxcar/raring
Pulling repository boxcar/raring
32737f8072d0: Downloading [> ] 2.228 MB/149.7 MB 12m29s

And end something like:
Step 20 : CMD ["/bin/bash", "/start.sh"]
---> Running in db53e215e2fc
---> 3f3f6489c700
Successfully built 3f3f6489c700

Once the project is built, we will start it and forward ports from the container to the host, so that the container’s site can be accessed through port 8000 of the host system. So, if you want to see it from the computer that you’ve installed it on, you could go to http://127.0.0.1:8000. Alternatively, if your host system is already running a webserver, we could use SSHFS to mount the container’s files within the web-space of the host system.

In this example, however, we’ll just forward the ports and mount the project locally (using SSHFS) so we can easily edit the files perhaps using a graphical IDE such as NetBeans or Eclipse.

Okay, time to start our Docker image and find its IP address (so we can mount its files):
DID=$(docker run -p 8000:80 -d $ME/wordpress)
DIP=$(docker inspect $DID | grep IPAddress | cut -d '"' -f 4)
docker logs $DID| grep 'ssh user password:' --color

Copy the SSH password and we will make a local directory to access the WordPress installation of our container.
cd ~
mkdir largo.mount.from.docker.container
sshfs user@$DIP:/var/www $HOME/largo.mount.from.docker.container
cd largo.mount.from.docker.container
PROJECT=$(pwd -P)

Now, we can visit the WordPress installation and finish setting up. From the host machine, it should be http://127.0.0.1:8000. There you can configure Title, Username, Password, et cet. and finish installing WordPress.

Now, let’s get us some Largo! Since this is a test project, we’ll sacrifice security to make things easy. Our Docker WordPress site isn’t ready for us to easily install the Largo parent theme, so we’ll make the web directory writable by everybody. Generally, this is not a practice I would condone. It’s okay while we’re experimenting but permissions are very important on live systems!

Lastly, we’ll download and install Largo and the Largo child theme stub.
ssh user@$DIP 'sudo chmod -R 777 /var/www'
wget https://github.com/INN/Largo/archive/master.zip -O $PROJECT/wp-content/themes/largo.zip
unzip $PROJECT/wp-content/themes/largo.zip -d $PROJECT/wp-content/themes/
mv $PROJECT/wp-content/themes/Largo-master $PROJECT/wp-content/themes/largo
wget http://largoproject.wpengine.netdna-cdn.com/wp-content/uploads/2012/08/largo-child.zip -O $PROJECT/wp-content/themes/largo-child.zip
unzip $PROJECT/wp-content/themes/largo-child.zip -d $PROJECT/wp-content/themes
rm -rf $PROJECT/wp-content/themes/__MACOSX/

We are now ready to customize our Project Largo child theme!


Permalink: 20140217.project.largo.docker

Tue, 04 Jun 2013

Painless protection with Yubico’s Yubikey

Recently, I ordered a Yubikey and, in the comments section of the order, I promised to write about the product. At the time, I assumed that there was going to be something about which to write: (at least a few) steps of setting up and configuration or a registration process. They’ve made the task of writing about it difficult, by making the process of using it so easy.

Plug it in. The light turns solid green and you push the button when you need to enter the key. That’s the whole thing!

Physically, the device has a hole for a keychain or it can slip easily into your wallet. It draws power from the USB port on the computer, so there’s none stored in the device, meaning it should be completely unfazed if you accidentally get it wet.

Let’s take a look at the device.

> lsusb | grep Yubico

Bus 005 Device 004: ID 1050:0010 Yubico.com Yubikey

We see that it is on Bus 5, Device 4. How about a closer look?

> lsusb -v -s5:4

Bus 005 Device 004: ID 1050:0010 Yubico.com Yubikey
Couldn't open device, some information will be missing
Device Descriptor:
  bLength                18
  bDescriptorType         1
  bcdUSB               2.00
  bDeviceClass            0 (Defined at Interface level)
  bDeviceSubClass         0 
  bDeviceProtocol         0 
  bMaxPacketSize0         8
  idVendor           0x1050 Yubico.com
  idProduct          0x0010 Yubikey
  bcdDevice            2.41
  iManufacturer           1 
  iProduct                2 
  iSerial                 0 
  bNumConfigurations      1
  Configuration Descriptor:
    bLength                 9
    bDescriptorType         2
    wTotalLength           34
    bNumInterfaces          1
    bConfigurationValue     1
    iConfiguration          0 
    bmAttributes         0x80
      (Bus Powered)
    MaxPower               30mA
    Interface Descriptor:
      bLength                 9
      bDescriptorType         4
      bInterfaceNumber        0
      bAlternateSetting       0
      bNumEndpoints           1
      bInterfaceClass         3 Human Interface Device
      bInterfaceSubClass      1 Boot Interface Subclass
      bInterfaceProtocol      1 Keyboard
      iInterface              0 
        HID Device Descriptor:
          bLength                 9
          bDescriptorType        33
          bcdHID               1.11
          bCountryCode            0 Not supported
          bNumDescriptors         1
          bDescriptorType        34 Report
          wDescriptorLength      71
         Report Descriptors: 
           ** UNAVAILABLE **
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x81  EP 1 IN
        bmAttributes            3
          Transfer Type            Interrupt
          Synch Type               None
          Usage Type               Data
        wMaxPacketSize     0x0008  1x 8 bytes
        bInterval              10

There’s not a great deal to be seen here. As it tells you right on Yubico’s site, the device presents as a keyboard and it “types” out its key when you press the button, adding another long and complex password to combine with the long and complex password that you’re already using.
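
For illustration, in its default one-time-password mode each press emits a 44-character string in Yubico’s modhex alphabet, the first 12 characters of which are the key’s public identity. A made-up example (not a real OTP):
ccccccbchvlndbenfrtgijklhuvtrdcbghijklnrtuvc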

Keep in mind that this device is unable to protect you from keyloggers, some of which are hardware-based. It’s critically important that you are very, very careful about where you’re sticking your Yubikey. Even Yubico cannot protect us from ourselves.


Permalink: 20130604.yay.yubico.yubikey

Thu, 30 May 2013

Making ixquick your default search engine

In this writer’s opinion, it is vitally important that we take reasonable measures now to help ensure anonymity, lest we create a situation where privacy no longer exists and the simple want of it becomes suspicious.

Here’s how to configure your browser to automatically use a search engine that respects your privacy.

Chrome:

  1. Click Settings.
  2. Click “Set pages” in the “On startup” section.
  3. Enter https://ixquick.com/eng/ in the “Add a new page” text field.
  4. Click OK.
  5. Click “Manage search engines…”
  6. At the bottom of the “Search Engines” dialog, click in the “Add a new search engine” field.
  7. Enter
    ixquick
    ixquick.com
    https://ixquick.com/do/search?lui=english&language=english&cat=web&query=%s
  8. Click “Make Default”.
  9. Click “Done”.

Firefox:

  1. Click the Tools Menu.
  2. Click Options.
  3. Click the General tab.
  4. In “When Firefox Starts” dropdown, select “Show my home page”.
  5. Enter https://ixquick.com/eng/ in the “Home Page” text field.
  6. Click one of the English options here.
  7. Check box for “Start using it right away.”
  8. Click “Add”.

Opera:

  1. Click “Manage Search Engines”
  2. Click “Add”
  3. Enter
    Name: ixquick
    Keyword: x
    Address: https://ixquick.com/do/search?lui=english&language=english&cat=web&query=%s
  4. Check “Use as default search engine”
  5. Click “OK”

Internet Explorer:

      _     ___  _ __        ___   _ _____ ___ 
     | |   / _ \| |\ \      / / | | |_   _|__ \
     | |  | | | | | \ \ /\ / /| | | | | |   / /
     | |__| |_| | |__\ V  V / | |_| | | |  |_| 
     |_____\___/|_____\_/\_/   \___/  |_|  (_) 
    
    
    (This is not a good strategy for privacy.)

Congratulations!

\o/

You are now one step closer to not having every motion on the Internet recorded.

This is a relatively small measure, though. You can improve your resistance to prying eyes (e.g., browser fingerprinting) by using the Torbrowser Bundle, or even better, Tails, and routing your web usage through Tor, i2p, or FreeNet.

If you would like more on subjects like anonymizing, privacy and security, then drop me a line via email or Bitmessage me: BM-2D9tDkYEJSTnEkGDKf7xYA5rUj2ihETxVR


Permalink: 20130530.hey.you.get.offa.my.data

Wed, 15 May 2013

Git: an untracked mess?

There may be times when you find your Git repository burdened with scads of untracked files left aside while twiddling, testing bug patches, or what-have-youse.

For the especially scatter-brained among us, these things can go unchecked until a day when the useful bits of a git status scroll off the screen due to utterly unimportant stuff. Well, hopefully unimportant.

But we’d better not just cleave away everything that we haven’t checked in. You wonder:
What if there’s something important in one of those files?

You are so right!

Let’s fix this!

Firstly, we want a solution that’s reproducible. Only want to invent this wheel once, right?

Let’s begin with the play-by-play:

Git, we want a list of what isn’t tracked: git ls-files -o --exclude-standard -z

We’ll back these files up in our home directory (~) using cpio, but we don’t want a poorly-named directory or finding anything later will become its own obstacle. So we’ll use the current date (date +%Y-%m-%d), directory (pwd) and branch we’re on (git branch), and we’ll twist all of it into a meaningful, but appropriate, directory name using sed: git ls-files -o --exclude-standard -z | cpio -pmdu ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/

Then tell Git to remove the untracked files and directories: git clean -d -f

Ahhhh… Much better. Is there anything left out? Perhaps. What if we decide that moving these files away was a mistake? The kind of mistake that breaks something. If we realize right away, it’s easily enough undone. But what if we break something and don’t notice for a week or two? It’d probably be best if we had an automated script to put things back the way they were. Let’s do that.

Simple enough. We’ll just take the opposite commands and echo them into a script to be used in case of emergency.

Create the restore script (restore.sh), to excuse faulty memory: echo "(cd ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/; find . -type f \( ! -iname 'restore.sh' \) | cpio -pdm `pwd`)" > ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/restore.sh

Make the restore script executable: chmod u+x ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/restore.sh

Lastly, the magic, compressed into one line that will stop if any command does not report success: a='untracked-git-backup-'`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`; git ls-files -o --exclude-standard -z | cpio -pmdu ~/$a/ && git clean -d -f && echo "(cd ~/$a/; find . -type f \( ! -iname 'restore.sh' \) | cpio -pdm `pwd`)" > ~/$a/restore.sh && chmod +x ~/$a/restore.sh; unset a
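
For the less terse among us, here is the same sequence spelled out as a small script (a sketch; it assumes GNU cpio and adds --null so the NUL-separated list from git ls-files -z is read safely):
#!/bin/sh
# stash-untracked.sh -- the one-liner above, spelled out (a sketch, assuming GNU cpio)
set -e

stamp=$(date +%Y-%m-%d)
dir=$(basename "$(pwd)")
branch=$(git branch | grep '^\*' | sed 's/^\* //')
backup=$HOME/untracked-git-backup-$stamp.$dir.$branch

# copy every untracked (but not ignored) file into the backup directory;
# --null pairs with the NUL-separated list from git ls-files -z
git ls-files -o --exclude-standard -z | cpio --null -pmdu "$backup/"

# now it is safe to let Git sweep the untracked files away
git clean -d -f

# leave behind a script that puts everything back where it came from
printf '#!/bin/sh\n(cd "%s" && find . -type f ! -iname "restore.sh" | cpio -pdm "%s")\n' \
    "$backup" "$(pwd)" > "$backup/restore.sh"
chmod u+x "$backup/restore.sh"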


Permalink: 20130515.git.untracked.mess

Mon, 13 May 2013

Zsh and hash

Documentation for this one seems a bit hard to come by but it is one of the things I love about Zsh.

I’ve seen many .bashrc files that have things like:
alias www='cd /var/www'
alias music='cd /home/j0rg3/music'

And that’s a perfectly sensible way to make life a little easier, especially if the paths are very long.

In Zsh, however, we can use the hash command, and the shortcut we get from it works anywhere a path would. In other words, using the alias version above, if we want to edit ‘index.html’ in the ‘www’ directory, we have to issue the shortcut to get there and then edit the file, in two steps:
> www
> vim index.html

The improved version in .zshrc would look like:
hash www=/var/www
hash -d www=/var/www

Then, at any time, you can use tilde (~) and your shortcut in place of path.
> vim ~www/index.html

Even better, it integrates with Zsh’s robust completions so you can, for example, type cd ~www/ and then use the tab key to cycle through subdirectories and files.

On this system, I’m using something like this:
(.zshrc)
hash posts=/home/j0rg3/weblog/posts
hash -d posts=/home/j0rg3/weblog/posts

Then we can make a function to create a new post, to paste into .zshrc. Since we want to be able to edit and save without partial posts becoming visible while we are working, we’ll use an extra .tmp extension at the end:
post() { vim ~posts/`date +%Y-%m`/`date +%Y%m%d`.$1.txt.tmp }

[ In-line date command unfamiliar? See earlier explanation ]

But, surely there is going to be a point when we need to save a post and finish it later. For now, let’s assume that only a single post will be in limbo at any time. We definitely don’t want to have to remember the exact name of the post — and we don’t want to have to hunt it down every time.

We can make those things easier like this:
alias resume='vim $(find ~posts/ -name "*.txt.tmp")'

Now, we can just enter resume and the system will go find the post we were working on and open it up for us to finish. The file will need the extension renamed from .txt.tmp to only .txt to publish the post but, for the sake of brevity, we’ll think about that (and having multiple posts in editing) on another day.


Permalink: 20130513.zsh.and.hash

Wed, 08 May 2013

Deleting backup files left behind by Vim

It’s generally a great idea to have Vim keep backups. Once in a while, they can really save your bacon.

The other side of that coin, though, is that they can get left behind here and there, eventually causing aggravation.

Here’s a snippet to find and eliminate those files from the current directory down:

find ./ -name '*~' -exec rm '{}' \; -print -or -name ".*~" -exec rm {} \; -print
This uses find, from the current directory down (./), to execute an rm on every file whose name ends in a tilde (~), including hidden files.
Alternatively, you could just store your backups elsewhere. In Vim, use :help backupdir for more information.
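
For example, a couple of lines like these in ~/.vimrc keep every backup corralled in one out-of-the-way directory (a sketch; create the directory first with mkdir -p ~/.vim/backup):
" keep backups, but write them all to a single directory instead of alongside the originals
set backup
set backupdir=~/.vim/backup//
" (in newer Vims, the trailing // encodes each file's full path into its backup's name)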


Permalink: 20130508.delete.vim.backups