c0d3 :: j0rg3

A collection of tips, tricks and snips. A proud Blosxom weblog. All code. No cruft.

Tue, 10 Jan 2017

[-] Auxiliary failed: Msf::OptionValidateError The following options failed to validate: RHOSTS.

Mucking about with a fresh copy of Kali brings to attention that it’s packaged with an Armitage that doesn’t work correctly.

I know what you’re thinking… Good. Type the commands into Msfconsole like a real man, y’uh lazy good-fer-naught! And, in practice, that was my immediate solution. But I can’t resist a good tinker when things are misbehaving.

I was anticipating that the problem would be thoroughly solved by the time I ixquicked it. That was partially correct. I was surprised, however, when apt-get update && apt-get upgrade didn’t fix the issue. More surprised at the age of the issue. Most surprised that I could see lots of evidence of users being plagued by this issue — but no clear workarounds were quickly found.

Guess what we’re doing today?

Okay. The issue is quite minor but just enough to be heartbreaking to the fledgling pentester trying to get a VM off the ground.

In brief, the owner of Armitage’s Github explains:

The MSF Scans feature in Armitage parses output from Metasploit’s portscan/tcp module and uses these results to build a list of targets it should run various Metasploit auxiliary modules against. A recent-ish update to the Metasploit Framework changed the format of the portscan/tcp module output. A patch to fix this issue just needs to account for the new format of the portscan/tcp module.

That is, a stray colon makes it into the input for the Msfconsole command that defines RHOSTS. I.e.: set RHOSTS 172.16.223.150: - 172.16.223.150
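For comparison (this is my illustration of the expected format, not a line from the patch), Msfconsole wants something more like: set RHOSTS 172.16.223.150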

Another kind coder tweaked the regex and submitted a patch and pull request, which was successfully incorporated into the project.

Sadly, things have stalled out there. So if this problem is crippling your rig, let’s fix it!

We just want a fresh copy of the project.
root@kali:~# git clone https://github.com/rsmudge/armitage

Cloning into 'armitage'...
remote: Counting objects: 7564, done.
remote: Total 7564 (delta 0), reused 0 (delta 0), pack-reused 7564
Receiving objects: 100% (7564/7564), 47.12 MiB | 2.91 MiB/s, done.
Resolving deltas: 100% (5608/5608), done.

Kali is Debian-based and we’re going to need Apache Ant:
root@kali:~# apt-get install ant

Then, we’ll build our new fella:
root@kali:~# cd armitage
root@kali:~/armitage# ./package.sh

Buildfile: /root/test/armitage/build.xml

clean:

BUILD SUCCESSFUL
Total time: 0 seconds
Buildfile: /root/test/armitage/build.xml

init:
[mkdir] Created dir: /root/test/armitage/bin

compile:
[javac] Compiling 111 source files to /root/test/armitage/bin
[javac] depend attribute is not supported by the modern compiler
[javac] Note: /root/test/armitage/src/ui/MultiFrame.java uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

BUILD SUCCESSFUL
Total time: 2 seconds
Buildfile: /root/test/armitage/build.xml

init:

compile:

jar:
[unzip] Expanding: /root/test/armitage/lib/sleep.jar into /root/test/armitage/bin
[unzip] Expanding: /root/test/armitage/lib/jgraphx.jar into /root/test/armitage/bin
[unzip] Expanding: /root/test/armitage/lib/msgpack-0.6.12-devel.jar into /root/test/armitage/bin
[unzip] Expanding: /root/test/armitage/lib/postgresql-9.1-901.jdbc4.jar into /root/test/armitage/bin
[unzip] Expanding: /root/test/armitage/lib/javassist-3.15.0-GA.jar into /root/test/armitage/bin
[copy] Copying 4 files to /root/test/armitage/bin/scripts-cortana
[jar] Building jar: /root/test/armitage/armitage.jar
[jar] Building jar: /root/test/armitage/cortana.jar

BUILD SUCCESSFUL
Total time: 1 second
armitage/
armitage/readme.txt
armitage/teamserver
armitage/cortana.jar
armitage/armitage.jar
armitage/armitage-logo.png
armitage/armitage
armitage/whatsnew.txt
adding: readme.txt (deflated 55%)
adding: armitage.exe (deflated 49%)
adding: cortana.jar (deflated 5%)
adding: armitage.jar (deflated 5%)
adding: whatsnew.txt (deflated 65%)
armitage/
armitage/readme.txt
armitage/teamserver
armitage/cortana.jar
armitage/armitage.jar
armitage/armitage-logo.png
armitage/armitage
armitage/whatsnew.txt
Archive: ../../armitage.zip
inflating: readme.txt
inflating: armitage.exe
inflating: cortana.jar
inflating: armitage.jar
inflating: whatsnew.txt

And here, best I can guess from the messages I’ve read, is where a lot of people are running into trouble. We have successfully produced our new working copy of Armitage. However, it lives in our own local directory and is not what will run if we just enter the command: armitage

Let’s review how to figure out what we want to do about that.

First, we want to verify what happens when we run the command armitage.
root@kali:~/armitage# which armitage

/usr/bin/armitage

Good! Let’s check and see what that does!
root@kali:~/armitage# head /usr/bin/armitage

#!/bin/sh

cd /usr/share/armitage/
exec ./armitage "$@"

Almost there! It’s running /usr/share/armitage/armitage with whatever arguments we’ve passed in. We’ll check that out.
root@kali:~/armitage# head /usr/share/armitage/armitage

#!/bin/sh
java -XX:+AggressiveHeap -XX:+UseParallelGC -jar armitage.jar $@

We have enough information to assemble a solution.

I trust that the people behind Kali and Armitage will get this corrected so I don’t want to suggest a solution that would replace the armitage command and prevent an updated version from running later. So, let’s just make a temporary replacement?

root@kali:~/armitage# echo -e '#!/bin/sh\njava -XX:+AggressiveHeap -XX:+UseParallelGC -jar ~/armitage/armitage.jar $@' > /usr/bin/tmparmitage
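One extra step worth calling out: echo creates that file with ordinary permissions, so we need to mark it executable before the shell will run it by name:
root@kali:~/armitage# chmod +x /usr/bin/tmparmitage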

Hereafter, we can use the command ‘tmparmitage’ (either CLI or ALT-F2) to run our fresh version until things catch up.

And, of course, to save you the time, weary hacker:

Download here:
    armitage_quick_fix.sh


Tags: , , , , , , ,
Permalink: 20170110.armitage.not.working.in.kali

Sun, 13 Jul 2014

Simple Protection with iptables, ipset and Blacklists

Seems I’ve always got just a few more things going on than I can comfortably handle. One of those is an innocent little server holding the beginnings of a new project.

If you expose a server to the Internet, very quickly your ports are getting scanned and tested. If you’ve an SSH server, there are going to be attempts to log in as ‘root’, which is why it is ubiquitously advised that you disable root login. It’s also why many advise against allowing passwords at all.

We could talk for days about improvements; it’s usually not difficult to introduce some form of two-factor authentication (2FA) for sensitive points of entry such as SSH. You can install monitoring software like Logwatch, which can summarize important points from your logs, such as: who has logged in via SSH, how many times root was used, etc.

DenyHosts and Fail2ban are great ways to secure things, according to your needs.

DenyHosts works primarily with SSH and asks very little from you in the way of configuration, especially if you’re using a package manager to install a version that is configured for the distribution on which you’re working. If you’re installing from source, you may need to find where your SSH logs are (e.g., /var/log/secure, /var/log/auth.log). It’s extremely easy to set up DenyHosts to synchronize, so that you’re automatically blocking widely-known offenders whether or not they’re after your server.

In contrast, Fail2ban is going to take more work to set up. However, it is extremely configurable and works with any log file you point it toward, which means it can watch anything (e.g., FTP, web traffic, mail traffic). You define your own jails, which means you can ban problematic IP addresses according to preference. Ban bad HTTP attempts from HTTP only, or stick their noses in the virtual corner and refuse all of their traffic until they’ve served their time-out. You can even have Fail2ban scan its own logs, so repeat offenders can be locked out for longer.

Today we’re going to assume that you’ve a new server that shouldn’t be seeing any traffic except from you and any others involved in the project. In that case, you probably want to block traffic pretty aggressively. If you’ve physical access to the server (or the ability to work with staff at the datacenter) then it’s better to err in the direction of accidentally blocking good guys than trying to be overly fault-tolerant.

The server we’re working on today is a Debian Wheezy system. It has become a common misconception that Ubuntu and Debian are, for all intents and purposes, interchangeable. They’re similar in many respects, and Ubuntu is great preparation for using Debian, but they are not the same. I think the differences won’t matter for this exercise, but I can’t promise that because this was written using Wheezy.

Several minutes after bringing my new server online, I started seeing noise in the logs. I was still getting set up and really didn’t want to stop and take protective measures, but there’s no point in securing a server after it’s been compromised. The default Fail2ban configuration was too forgiving for my use: it watched a 10-minute window and banned for 10 minutes. Since only a few people should be accessing this server, there’s no reason for anyone to be trying a different password every 15 minutes (for hours).

I found a ‘close-enough’ script and modified it. Here, we’ll deal with a simplified version.

First, let’s create an iptables chain for these ne’er-do-wells:
iptables -N bad_traffic
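That chain won’t see any packets until something sends traffic through it. Assuming you want it applied to all inbound traffic, a one-line hookup from the INPUT chain does the trick (adjust if your ruleset is more elaborate):
iptables -I INPUT -j bad_traffic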

For this one, we’ll use Perl. We’ll look at our Apache log files to find people sniffing ‘round and we’ll block their traffic. Specifically, we’re going to check Apache’s ‘error.log’ for the phrases ‘File does not exist’ and ‘client denied by server configuration’ and block people causing those errors. This would be excessive for servers intended to serve the general populace. For a personal project, it works just fine as a ‘DO NOT DISTURB’ sign.


#!/usr/bin/env perl
use strict;
use POSIX qw(strftime);

# Apache error log to scan and the iptables chain to append to
my $log = ($ARGV[0] ? $ARGV[0] : "/var/log/apache2/error.log");
my $chain = ($ARGV[1] ? $ARGV[1] : "bad_traffic");

# Addresses that triggered the tell-tale errors
my @bad = `grep -iE 'File does not exist|client denied by server configuration' $log | cut -f8 -d" " | sed 's/]//' | sort -u`;
# Addresses already DROPped in our chain (note the escaped \$4 so Perl doesn't interpolate it away)
my @ablk = `/sbin/iptables -S $chain | grep DROP | awk '{print \$4}' | cut -d"/" -f1`;

foreach my $ip (@bad) {
    if (!grep $_ eq $ip, @ablk) {
        chomp $ip;
        `/sbin/iptables -A $chain -s $ip -j DROP`;
        print strftime("%b %d %T", localtime(time))." badht: blocked bad HTTP traffic from: $ip\n";
    }
}
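To keep the blocks current, the script can run on a schedule. A crontab entry along these lines would do it (the path and the five-minute interval are just my example):
*/5 * * * * /usr/local/sbin/bad_traffic.pl >> /var/log/bad_traffic.log 2>&1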

That gives us some great, utterly unforgiving, blockage. Looking at the IP addresses attempting to pry, I noticed that most of them were on at least one of the popular block-lists.

So let’s make use of some of those block-lists! I found a program intended to apply those lists locally but, of course, it didn’t work for me. Here’s a similar program; this one will use ipset for managing the block-list though only minor changes would be needed to use iptables as above:

#!/bin/bash
IP_TMP=ip.tmp
IP_BLACKLIST_TMP=ip-blacklist.tmp

IP_BLACKLIST=ip-blacklist.conf

WIZ_LISTS="chinese nigerian russian lacnic exploited-servers"

BLACKLISTS=(
    "http://danger.rulez.sk/projects/bruteforceblocker/blist.php" # BruteForceBlocker IP List
    "http://rules.emergingthreats.net/blockrules/compromised-ips.txt" # Emerging Threats - Compromised IPs
    "http://www.spamhaus.org/drop/drop.txt" # Spamhaus Don't Route Or Peer List (DROP)
    "http://www.spamhaus.org/drop/edrop.txt" # Spamhaus Don't Route Or Peer List (DROP) Extended
    "http://cinsscore.com/list/ci-badguys.txt" # C.I. Army Malicious IP List
    "http://www.openbl.org/lists/base.txt" # OpenBL.org 90 day List
    "http://www.autoshun.org/files/shunlist.csv" # Autoshun Shun List
    "http://lists.blocklist.de/lists/all.txt" # blocklist.de attackers
)

for address in "${BLACKLISTS[@]}"
do
    echo -e "\nFetching $address\n"
    curl "$address" >> $IP_TMP
done

for list in $WIZ_LISTS
do
    wget "http://www.wizcrafts.net/$list-iptables-blocklist.html" -O - >> $IP_TMP
done

wget 'http://wget-mirrors.uceprotect.net/rbldnsd-all/dnsbl-3.uceprotect.net.gz' -O - | gunzip | tee -a $IP_TMP

grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}[/][0-9]\{1,3\}' $IP_TMP | tee -a $IP_BLACKLIST_TMP
grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}[^/]' $IP_TMP | tee -a $IP_BLACKLIST_TMP

sed -i 's/\t//g' $IP_BLACKLIST_TMP
sort -u $IP_BLACKLIST_TMP | tee $IP_BLACKLIST

rm $IP_TMP
rm $IP_BLACKLIST_TMP

wc -l $IP_BLACKLIST

if hash ipset 2>/dev/null
then
    ipset flush bloxlist
    while IFS= read -r ip
    do
        ipset add bloxlist $ip
    done < $IP_BLACKLIST
else
    echo -e '\nipset not found\n'
    echo -e "\nYour bloxlist file is: $IP_BLACKLIST\n"
fi
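One assumption the script makes is that an ipset set named bloxlist already exists and that iptables actually consults it. If you’re starting from nothing, a setup roughly like this (the hash:net type and the INPUT rule are my suggestion, not part of the script above) wires it up:
ipset create bloxlist hash:net
iptables -I INPUT -m set --match-set bloxlist src -j DROP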


Download here:
    bad_traffic.pl
    bloxlist.sh


Tags: , , , , , , , , , ,
Permalink: 20140713.simple.protection.with.iptables.ipset.and.blacklilsts

Tue, 18 Mar 2014

Random datums with Random.org

I was reading about the vulnerability in the ‘random’ number generator in iOS 7 (http://threatpost.com/weak-random-number-generator-threatens-ios-7-kernel-exploit-mitigations/104757) and thought I would share a method that I’ve used. I certainly wasn’t on any version of iOS, but I can at least help in GNU/Linux and the BSDs.

What if we want to get some random numbers or strings? We need a salt or something and, in the interest of best practices, we’re trying to limit the trust given to any single participant in the chain. That is, we do not want to generate the random data on the server that we’re on. Let’s make it somewhere else!

YEAH! I know, right? Where to get random data can be a headache-inducing challenge. Alas! The noble people at Random.org are using atmospheric noise to provide randomness for us regular folk!

Let’s head over and ask for some randomness:
https://www.random.org/strings/?num=1&len=16&digits=on&loweralpha=on&unique=on&format=html&rnd=new

Great! Still feels a little plain. What if big brother saw the string coming over and knew what I was doing? I mean, aside from the fact that I could easily wrap the string with other data of my choosing.

Random.org is really generous with the random data, so why don’t we take advantage of that? Instead of asking for a single string, let’s ask for several. Nobunny will know which one I picked, except for me!

We’ll do that by increasing the 'num' part of the request that we’re sending:
https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=html&rnd=new

They were also thinking of CLI geeks like us with the 'format' variable. We set that to 'plain' and all the fancy formatting for human eyes is dropped; we get a simple list that we don’t need any clever tricks to parse!
curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new'

Now we’ve got 25 strings at the CLI. We can pick one to copy and paste. But what if we don’t want to do the picking? Well, we can use the local system to do that. We’ll have it pick a number between 1 and 25: RANDOM % 25 gives a number from 0 to 24, so we add 1 to land in the 1-to-25 range.
echo $(( (RANDOM % 25)+1 ))

Next, we’ll send our list to head with our random number. Head is going to give us the first X lines of what we send to it. So if our random number is 17, then it will give us the first 17 lines from Random.org.
curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new' | head -$(( (RANDOM % 25)+1 ))

After that, we’ll use the friend of head: tail. We’ll tell tail that we want only the last line of that pseudo-randomly sized list of random strings.
curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new' | head -$(( (RANDOM % 25)+1 )) | tail -1
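If this becomes a habit, the whole pipeline can live in a small shell function; the name randstr is just my example:
randstr() {
  curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new' | head -$(( (RANDOM % 25)+1 )) | tail -1
}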

And we can walk away feeling pretty good about having gotten some properly random data for our needs!


Tags: ,
Permalink: 20140318.random.data

Thu, 13 Jun 2013

Blogitechture continued… Simplify with Vim

When last we discussed the structure and design of your own CLI-centric blog platform, we had some crude methods of starting and resuming posts before publishing.

Today, let’s explore a little further into setting up a blogging-friendly environment, because we either need to make the experience of blogging easy or we’ll grow tired of the hassle and lose interest.

We can reasonably anticipate that we won’t want to be beleaguered with repetitious typing of HTML bits. If we’re going to apply paragraph tags, hyperlinks, codeblocks, etc. with any frequency, that task is best simplified. Using Vim as our preferred editor, we will use Tim Pope’s brilliant plug-ins ‘surround’ and ‘repeat’, combined with abbreviations, to take away the tedium.

The plug-ins just need to be dropped into your Vim plugin directory (~/.vim/plugin/). The directory may not exist if you don’t have any plug-ins yet. That’s no problem, though. Let’s grab the plug-ins:

cd ~/.vim/
wget "http://www.vim.org/scripts/download_script.php?src_id=19287" -O surround.zip
wget "http://www.vim.org/scripts/download_script.php?src_id=19285" -O repeat.zip

Expand the archives into the appropriate directories:

unzip surround.zip
unzip repeat.zip

Ta-da! Your Vim is now configured to quickly wrap (surround) text in any variety of markup. When working on a blog, you might use <p> tags a lot: put your cursor amid the paragraph and type yss<p>. The plug-in will wrap it with opening and closing paragraph tags. Move to your next paragraph and then press . to repeat.

That out of the way, let’s take advantage of Vim’s abbreviations for some customization. In our .vimrc file, we can define a few characters that Vim will expand according to their definition. For example, you might use:
ab <gclb> <code class="prettyprint lang-bsh linenums:1">
Then, any time you type <gclb> and press <enter>, you’ll get:
<code class="prettyprint lang-bsh linenums:1">

The next time that we take a look at blogitechture, we will focus on making the posts convenient to manage from our CLI.


Tags: , , , ,
Permalink: 20130613.blogitechture.continued

Tue, 04 Jun 2013

Painless protection with Yubico’s Yubikey

Recently, I ordered a Yubikey and, in the comments section of the order, I promised to write about the product. At the time, I assumed that there was going to be something about which to write: (at least a few) steps of setup and configuration, or a registration process. They’ve made the task of writing about it difficult by making the process of using it so easy.

Plug it in. The light turns solid green and you push the button when you need to enter the key. That’s the whole thing!

Physically, the device has a hole for a keychain, or it can slip easily into your wallet. It draws its power from the computer’s USB port, so none is stored in the device, meaning it should be completely unfazed if you accidentally get it wet.

Let’s take a look at the device.

> lsusb | grep Yubico

Bus 005 Device 004: ID 1050:0010 Yubico.com Yubikey

We see that it is on Bus 5, Device 4. How about a closer look?

> lsusb -v -s5:4

Bus 005 Device 004: ID 1050:0010 Yubico.com Yubikey
Couldn't open device, some information will be missing
Device Descriptor:
  bLength                18
  bDescriptorType         1
  bcdUSB               2.00
  bDeviceClass            0 (Defined at Interface level)
  bDeviceSubClass         0 
  bDeviceProtocol         0 
  bMaxPacketSize0         8
  idVendor           0x1050 Yubico.com
  idProduct          0x0010 Yubikey
  bcdDevice            2.41
  iManufacturer           1 
  iProduct                2 
  iSerial                 0 
  bNumConfigurations      1
  Configuration Descriptor:
    bLength                 9
    bDescriptorType         2
    wTotalLength           34
    bNumInterfaces          1
    bConfigurationValue     1
    iConfiguration          0 
    bmAttributes         0x80
      (Bus Powered)
    MaxPower               30mA
    Interface Descriptor:
      bLength                 9
      bDescriptorType         4
      bInterfaceNumber        0
      bAlternateSetting       0
      bNumEndpoints           1
      bInterfaceClass         3 Human Interface Device
      bInterfaceSubClass      1 Boot Interface Subclass
      bInterfaceProtocol      1 Keyboard
      iInterface              0 
        HID Device Descriptor:
          bLength                 9
          bDescriptorType        33
          bcdHID               1.11
          bCountryCode            0 Not supported
          bNumDescriptors         1
          bDescriptorType        34 Report
          wDescriptorLength      71
         Report Descriptors: 
           ** UNAVAILABLE **
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x81  EP 1 IN
        bmAttributes            3
          Transfer Type            Interrupt
          Synch Type               None
          Usage Type               Data
        wMaxPacketSize     0x0008  1x 8 bytes
        bInterval              10

There’s not a great deal to be seen here. As it tells you right on Yubico’s site, the device presents as a keyboard and it “types” out its key when you press the button, adding another long and complex password to combine with the long and complex password that you’re already using.

Keep in mind that this device is unable to protect you from keyloggers, some of which are hardware-based. It’s critically important that you are very, very careful about where you’re sticking your Yubikey. Even Yubico cannot protect us from ourselves.


Tags: , , , ,
Permalink: 20130604.yay.yubico.yubikey

Thu, 23 May 2013

GNU Screen: Roll your own system monitor

When you’re working on remote servers, some tools are practically ubiquitous — while others are harder to come by. Even if you’ve the authority to install your preferred tools on every server you visit, it’s not always something you want to do. If you’ve hopped on to a friend’s server just to troubleshoot a problem, there is little reason to install tools that your friend is not in the habit of using. Some servers, for security reasons, are very tightly locked down to include only a core set of tools, to complicate the job of any prying intruders. Or perhaps it is a machine that you normally use through a graphical interface but on this occasion you need to work from the CLI.

These are very compelling reasons to get comfortable, at the very least, with tools like Vim, mail, grep and sed. Eventually, you’re likely to encounter a situation where only the classic tools are available. If you aren’t competent with those tools, you’ll end up facing the obstacle of how to get files from the server to your local environment where you can work and, subsequently, how to get the files back when you’re done. In a secured environment, this may not be possible without violating protocols.

Let’s take a look at how we can build a makeshift system monitor using some common tools. This particular configuration is for a server that runs PHP and MySQL and has the tools Htop and mytop installed. These can easily be replaced with top and a small script to SHOW FULL PROCESSLIST, if needed. The point here is illustrative: to provide a template to be modified according to each specific environment.
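If mytop isn’t available on the box, a stand-in can be improvised from the same common tools; this one-liner assumes the MySQL credentials are supplied via ~/.my.cnf or similar:
watch -d 'mysql -e "SHOW FULL PROCESSLIST"'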

(Note: I generally prefer tmux to Gnu Screen but screen is the tool more likely to be already installed, so we’ll use it for this example.)

We’re going to define a set of windows, via a configuration file, to help us keep tabs on what is happening in this system. In so doing, we’ll be using the well-known tools less and watch. More specifically, less +F, which tells less to “scroll forward”. In other words, less will continue to read the file, making sure any new lines are added to the display. You can exit this mode with CTRL+c, search the file (/), quit (q) or get back into scroll-forward mode with another uppercase F.

Using watch, we’ll include the “-d” flag which tells watch we want to highlight any changes (differences).

We will create a configuration file for screen by typing:

> vim monitor.screenrc

In the file, paste the following:

# Screen setup for system monitoring
# screen -c monitor.screenrc
hardstatus alwayslastline
hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{=kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B}%Y-%m-%d %{W}%c %{g}]'

screen -t htop 0 htop
screen -t mem 1 watch -d "free -t -m"
screen -t mpstat 2 watch -d "mpstat -A"
screen -t iostat 3 watch -d "iostat"
screen -t w 4 watch -d "w"
screen -t messages 5 less +F /var/log/messages
screen -t warn 6 less +F /var/log/warn
screen -t database 7 less +F /srv/www/log/db_error
screen -t mytop 8 mytop
screen -t php 9 less +F /srv/www/log/php_error

(Note: -t sets the title, then the window number, followed by the command running in that window)

Save the file (:wq) or, if you’d prefer, you can grab a copy by right-clicking and saving this file.

Then we will execute screen using this configuration, as noted in the comment:

> screen -c monitor.screenrc

Then you can switch between windows using CTRL+a, n (next) or CTRL+a, p (previous).

I use this technique on my own computers, running in a TTY different from the one used by X. If the graphical interface should get flaky, I can simply switch to that TTY (e.g., CTRL+ALT+F5) to see what is going on — and take corrective action, if needed.


Tags: , , , , , , , , , ,
Permalink: 20130523.gnu.screen.system.monitor

Mon, 13 May 2013

Zsh and hash

Documentation for this one seems a bit hard to come by but it is one of the things I love about Zsh.

I’ve seen many .bashrc files that have things like:
alias www='cd /var/www'
alias music='cd /home/j0rg3/music'

And that’s a perfectly sensible way to make life a little easier, especially if the paths are very long.

In Zsh, however, we can use the hash command, and the shortcut we get from it works fully as a path. In other words, using the aliases above, if we want to edit ‘index.html’ in the ‘www’ directory, we have to issue the shortcut to get there and then edit the file, in two steps:
> www
> vim index.html

The improved version in .zshrc would look like:
hash www=/var/www
hash -d www=/var/www

Then, at any time, you can use tilde (~) and your shortcut in place of the path.
> vim ~www/index.html

Even better, it integrates with Zsh’s robust completions so you can, for example, type cd ~www/ and then use the tab key to cycle through subdirectories and files.

On this system, I’m using something like this:
(.zshrc)
hash posts=/home/j0rg3/weblog/posts
hash -d posts=/home/j0rg3/weblog/posts

Then we can make a function to create a new post, to paste into .zshrc. Since we want to be able to edit and save while we work, without partial posts becoming visible, we’ll use an extra .tmp extension at the end:
post() { vim ~posts/`date +%Y-%m`/`date +%Y%m%d`.$1.txt.tmp }

[ In-line date command unfamiliar? See earlier explanation ]
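One caveat: if the month’s subdirectory doesn’t exist yet, the save from Vim will fail. A variant that creates it on the fly (same idea, just with mkdir -p added) might look like:
post() { mkdir -p ~posts/`date +%Y-%m`; vim ~posts/`date +%Y-%m`/`date +%Y%m%d`.$1.txt.tmp }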

But, surely there is going to be a point when we need to save a post and finish it later. For now, let’s assume that only a single post will be in limbo at any time. We definitely don’t want to have to remember the exact name of the post — and we don’t want to have to hunt it down every time.

We can make those things easier like this:
alias resume='vim $(find ~posts/ -name "*.txt.tmp")'

Now, we can just enter resume and the system will go find the post we were working on and open it up for us to finish. The file will need the extension renamed from .txt.tmp to only .txt to publish the post but, for the sake of brevity, we’ll think about that (and having multiple posts in editing) on another day.


Tags: , , , ,
Permalink: 20130513.zsh.and.hash

Tue, 07 May 2013

Welcome, traveler.

Thanks for visiting my little spot on the web. This is a Blosxom ‘blog which, for those who don’t know, is a CGI written in Perl using the file-system (rather than a database).

To the CLI-addicted, this is an awesome little product. Accepting, of course, that you’re going to get under the hood if you’re going to make it the product you want. After some modules and hacking, I’m pleased with the result.

My posts are just text files, meaning I start a new one like: vim ~posts/`date +%Y%m%d`.brief.subject.txt

Note: the back-ticks (`) tell the system that you want to execute the command between ticks, and dynamically insert its output into the command. In this case, the command date with these parameters:
  1. (+) we’re going to specify a format
  2. (%Y) four-digit year
  3. (%m) two-digit month
  4. (%d) two-digit day
That means the command above will use Vim to edit a text file named ‘20130507.brief.subject.txt’ in the directory I have assigned to the hash of ‘posts’. (using hash this way is a function of Zsh that I’ll cover in another post)
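If that inline date format isn’t obvious at a glance, it’s easy to check (run on the day of this post):
> date +%Y%m%d
20130507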

In my CLI-oriented ‘blog, I can sprinkle in my own HTML or use common notation like wrapping a word in underscores to have it underlined, forward-slashes for italics and asterisks for bold.

Toss in a line that identifies tags and, since Perl is the beast of Regex, we pick up the tags and make them links, meta-tags, etc.
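As a rough sketch of the idea (the real work in Blosxom happens in a Perl plug-in, so treat this as illustration only), the tag line can be pulled out of a post with nothing fancier than grep and sed:
grep -m1 '^Tags:' 20130507.brief.subject.txt | sed 's/^Tags: *//'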

Things here are likely to change a lot at first, while I twiddle with CSS and hack away at making a Blosxom that perfectly fits my tastes — so don’t be too alarmed if you visit and things look a tad wonky. It just means that I’m tinkering.

Once the saw-horses have been tucked away, I’m going to take the various notes I’ve made during my years in IT and write them out, in a very simple breakdown, aimed at sharing these with people who know little about how to negotiate the command line. The assumption here is that you have an interest in *nix/BSD. If you’ve that and the CLI is not a major part of your computing experience, it probably will be at some point. If you’re working on systems remotely, graphical interfaces often just impede you.

Once you’ve started working on remote machines, the rest is inevitable. You can either remember how to do everything two ways, through a graphical interface and CLI — or just start using the CLI for everything.

So let’s take a little journey through the kinds of things that make me love the CLI.


Tags: , , , , , , , , ,
Permalink: 20130507.greetings