c0d3 :: j0rg3

A collection of tips, tricks and snips. A proud Blosxom weblog. All code. No cruft.

Thu, 13 Jul 2017

Improved Anonymity on Kali Linux

I’m not entirely certain when BackTrack/Kali began behaving more like a regular desktop distro, but I seem to recall that originally the networking subsystems were down when you booted up into Run Level 3. It was up to you to turn on the interfaces and fire up a GUI if one was desired. IMO, that’s precisely how it should be. I get it. Most of us won’t ever find ourselves in a clandestine parking lot, inside a snack- and caffeine-filled, nondescript conversion van with a Yagi pointed at the bubble-window, ready to pilfer innocent datums just trying to get by in this lossy-protocoled, collision-rife world.

Rather, very many of us just want the stinking box online so we can run through our tutorials and hack our own intentionally vulnerable VMs. A thorough taste of hacking’s un-glamorous underbelly is quite enough for many.

I’m confident that the BT fora were inundated with fledgling hackers complaining that their fresh install couldn’t find WiFi or didn’t load the desktop. However, I feel that distros dedicated to the Red Team should try to instill good habits. Having your machine boot up and activate an interface that announces your presence, spewing out your MAC address and hostname, is bad for business. Booting into a (comparatively) heavy GUI is also not where I want to begin.

Let’s imagine that we’re trying to crack into a thing. Don’t we want to apply maximal CPU resources to the task, rather than spending them on GUI elements that bring little beyond cost? If you notice, very many of the relevant tools still live on the CLI. The typical course of development (e.g., Nmap, Metasploit) is that the CLI version is thoroughly developed before someone drops a GUI on top (respectively: Zenmap, Armitage).


So let’s take our Kali and make a few quick changes. We want to boot up in text/CLI mode and we want networking left off until we choose to make noise. Further, we want to randomize our MAC address and hostname at every boot.

We’ll use iwconfig to enumerate our wireless interfaces:
> iwconfig
lo        no wireless extensions.

wlan1     IEEE 802.11 ESSID:"ESSID"
          Mode:Managed Frequency:2.412 GHz Access Point: 17:23:53:96:BE:67
          Bit Rate=72.2 Mb/s Tx-Power=20 dBm
          Retry short limit:7 RTS thr:off Fragment thr:off
          Encryption key:off
          Power Management:off
          Link Quality=70/70 Signal level=-21 dBm
          Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
          Tx excessive retries:253 Invalid misc:400 Missed beacon:0

eth0      no wireless extensions.

wlan0     IEEE 802.11 ESSID:off/any
          Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm
          Retry short limit:7 RTS thr:off Fragment thr:off
          Encryption key:off
          Power Management:on

We have two wireless interfaces: wlan0, wlan1

Okay, first let’s configure the system to start up in text mode:
> systemctl set-default multi-user.target
Created symlink /etc/systemd/system/default.target → /lib/systemd/system/multi-user.target.
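
Should you ever want the old behavior back, the same mechanism restores a graphical boot; a quick sketch, assuming the stock graphical.target is still in place:
> systemctl set-default graphical.target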

Traditionally from text mode, we bring up the GUI desktop with the command startx. Since we don’t yet have that command, let’s create it:
> echo "systemctl start gdm3.service" > /usr/sbin/startx && chmod +x /usr/sbin/startx
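
If you’d prefer the stand-in to be an explicit script with an interpreter line, a minimal sketch to the same effect (gdm3.service assumes you kept Kali’s default GNOME display manager):
> printf '#!/bin/sh\nexec systemctl start gdm3.service\n' > /usr/sbin/startx
> chmod +x /usr/sbin/startx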

Next, disable NetworkManager’s autostart and shorten the boot-time wait for the networking service:
> systemctl disable network-manager.service
> sed -i 's/5min/30sec/' /etc/systemd/system/network-online.target.wants/networking.service
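
With autostart disabled, nothing comes up until we ask for it. When we do decide to make noise, a minimal sketch (assuming NetworkManager is installed and merely disabled) is simply:
> systemctl start network-manager.service
> nmcli device wifi list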

Now, let’s randomize our hostname and MAC addresses at every boot by adding some cronjobs:
> crontab -e

We’ll add two jobs to randomize the MAC addresses and one for our hostname:
@reboot macchanger -r wlan0
@reboot macchanger -r wlan1
@reboot hostname `strings /dev/urandom | grep -o '[[:alnum:]]' | head -n 30 | tr -d '\n'`
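
After the next reboot, it’s worth a quick sanity check that the randomization took; a sketch:
> macchanger -s wlan0
> macchanger -s wlan1
> hostname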

We’re good! We’ve improved efficiency by deferring the GUI until we genuinely want it and improved anonymity by randomizing some of the common identifiers that give the rig away.


Permalink: 2017-07-10.improved.anonymity.on.kali.linux

Sat, 04 Mar 2017

Official(ish) deep dark onion code::j0rg3 mirror

Recently I decided that I wanted my blog to be available inside of the Deep, Dark Onion (Tor).

The first time around, I set up a proxy that I modified to access only the clear-web version of the blog and to make it available inside Tor as a ‘hidden service’.

My blog is hosted on equipment provided by the kind folk at insomnia247.nl and I found that, within a week or so, the address of my proxy was blocked. It’s safe for us to assume that it was simply because of the outrageous popularity it received inside Tor.

By “safe for us to assume” I mean that it is highly probable that no significant harm would come from making that assumption. It would not be a correct assumption, though.

What’s more true is that within Tor things are pretty durn anonymous. Your logs will show Tor traffic coming from 127.0.0.1 only. This is a great situation for parties that would like to scan sites repeatedly looking for vulnerabilities — because you can’t block them. They can scan your site over and over and over. And the more features you have (e.g., comments, searches, any form of user input), the more attack vectors are plausible.

So why not scan endlessly? They do. Every minute of every hour.

Since insomnia247 is a provider of free shells, it is incredibly reasonable that they don’t want to take the hit for that volume of traffic. They’re providing this service to untold numbers of other users, blogs and projects.

For that reason, I decided to set up a dedicated mirror.

It works like this: my blog lives here. I have a machine at home which uses rsync to make a local copy of this blog. Immediately thereafter, it rsyncs any newly fetched data up to the mirror in onionland.
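
The relay itself needs nothing fancier than two rsync calls in a cron job or small script; a sketch, where the host aliases and paths are purely illustrative stand-ins:
> rsync -az shellhost:public_html/blog/ ~/blog-mirror/
> rsync -az ~/blog-mirror/ onion-vps:/var/www/blog/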

After consideration, I realized that this was also a better choice just in case there is something exploitable in my blog. Instead of even risking the possibility that an attacker could get access to insomnia247, they can only get to my completely disposable VPS which has hardly anything on it except this blog and a few scripts to which I’ve already opened the source code.

I’ve not finished combing through it, but I’ve taken pains to ensure it doesn’t link back to the clear web. To be clear, there’s nothing inherently wrong with that: Tor users will only appear as the IP address of their exit node and should still remain anonymous. To me, it’s just onion etiquette. You let the end-user decide when they want to step outside.
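
Automating that comb-through is straightforward enough; a quick sketch, assuming the local copy lives in ~/blog-mirror/ as above, that lists any lingering non-onion links:
> grep -rE 'https?://' ~/blog-mirror/ | grep -v '\.onion'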

To that end, the Tor mirror does not have the buttons to share to Facebook, Twitter, LinkedIn, Google Plus.

That being said, if you’re a lurker of those Internet back-alleys then you can find the mirror at: http://aacnshdurq6ihmcs.onion

Happy hacking, friends!


Permalink: 20170304.deep.dark.onion

Tue, 18 Mar 2014

Random datums with Random.org

I was reading about the vulnerability in the ‘random’ number generator in iOS 7 (http://threatpost.com/weak-random-number-generator-threatens-ios-7-kernel-exploit-mitigations/104757) and thought I would share a method that I’ve used. Though I certainly wasn’t working on any version of iOS, I can at least help on GNU/Linux and the BSDs.

What if we want to get some random numbers or strings? We need a salt or something and, in the interest of best practices, we’re trying to limit the trust given to any single participant in the chain. That is, we do not want to generate the random data on the server that we are using. Let’s make it somewhere else!

YEAH! I know, right? Where to get random data can be a headache-inducing challenge. Alas! The noble people at Random.org are using atmospheric noise to provide randomness for us regular folk!

Let’s head over and ask for some randomness:
https://www.random.org/strings/?num=1&len=16&digits=on&loweralpha=on&unique=on&format=html&rnd=new

Great! Still feels a little plain. What if big brother saw the string coming over and knew what I was doing? I mean, aside from the fact that I could easily wrap the string with other data of my choosing.

Random.org is really generous with the random data, so why don’t we take advantage of that? Instead of asking for a single string, let’s ask for several. Nobunny will know which one I picked, except for me!

We’ll do that by increasing the 'num' part of the request that we’re sending:
https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=html&rnd=new

They were also thinking of CLI geeks like us with the 'format' variable. We set that to 'plain' and all the fancy formatting for human eyes is dropped, leaving a simple list that we don’t need any clever tricks to parse!
curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new'

Now we’ve got 25 strings at the CLI. We can pick one to copy and paste. But what if we don’t want to do the picking? Well, we can use the local system to do that. We’ll have it pick a number between 1 and 25. The modulo gives us 0 through 24, so we increment by 1 to land in the range of 1 through 25.
echo $(( (RANDOM % 25)+1 ))

Next, we’ll send our list to head with our random number. Head is going to give us the first X lines of what we send to it. So if our random number is 17, then it will give us the first 17 lines from Random.org.
curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new' | head -$(( (RANDOM % 25)+1 ))

After that, we’ll use the friend of head: tail. We’ll tell tail that we want only the last record of this pseudo-randomly sized list of random strings.
curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new' | head -$(( (RANDOM % 25)+1 )) | tail -1
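
If this becomes a habit, the whole pipeline wraps neatly into a shell function; a sketch, with the function name being my own invention:
getrandom() {
    curl -s 'https://www.random.org/strings/?num=25&len=16&digits=on&loweralpha=on&unique=on&format=plain&rnd=new' \
        | head -$(( (RANDOM % 25)+1 )) | tail -1
}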

And we can walk away feeling pretty good about having gotten some properly random data for our needs!


Permalink: 20140318.random.data

Thu, 06 Jun 2013

Managing to use man pages through simple CLI tips

Recently, an author I admire and time-honored spinner of the Interwebs, Tony Lawrence, emphasized the value of using man pages (manual pages: documentation available from the command line)
> man ls
as a sanity check before getting carried away with powerful commands. I didn’t know about this one but he has written about a situation in which killall could produce some shocking, and potentially quite unpleasant, results.

Personally, I often quickly check man pages to be certain that I am using the correct flags or, as in the above case, anticipating results that bear some resemblance to what is actually likely to happen. Yet, it seems many people flock toward SERPs (Search Engine Results Pages: a tasteful replacement for mentioning any particular search engine by name; also useful as a verb, as in “I dunno. You’ll have to SERP it.”) for this information.

Perhaps the most compelling reason to head for the web is that it leaves the cursor amid the line you’re working on, without disturbing the command. SERPing the command, however, could easily lead you to information about a variant that is more common than the one available to you. More importantly, the information retrieved from the search engine is almost certainly written by someone who did read the man page — and may even come with the admonishment that you RTFM (Read The F#!$!*#’n Manual) as a testament to the importance of developing this habit.

This can be made easier with just a few CLI shortcuts.

<CTRL+u> to cut what you have typed so far and <CTRL+y> to paste it back.

That is, you press <CTRL+u> and the line will be cleared, so you can then type man {command} and read the documentation. Don’t hesitate to jot quick notes of which flags you intend to use, if needed. Then exit the man page, press <CTRL+y> and finish typing right where you left off.

This is another good use for screen or tmux but let’s face it: there are times when you don’t want the overhead of opening another window for a quick look-up, and even instances when these tools aren’t available.

A few other tips to make life easier when building complex commands:

Use the command fc to open up an editor in which you can build your complex command and, optionally, even save it as a shell script for future reuse.
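
A quick sketch of how that plays out; fc with no arguments opens your most recent command in $EDITOR and runs it when you save and quit, while -l simply lists recent history:
> fc -l -5     # list the last five commands without editing
> fc           # edit the previous command, then execute it on exit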

Repeat the last word from the previous command (often a filename) with <ALT+.> or use an item from the last command by position, in reverse order:
> ls -lahtr *archive*
<ALT+1+.> : *archive*
<ALT+2+.> : -lahtr
<ALT+3+.> : ls

You can also use Word Designators to pull items from history, such as adding sudo to the last command typed:
sudo !!

This allows for tricks like replacing bits of a previous command:
!:s/misspelled/corrected/

Lastly, if you need a command that was typed earlier, you can search history by pressing <CTRL+r> and start typing an identifying portion of the command.

(Note: I have used these in Zsh and Bash, specifically. They can, however, be missing or overwritten — if a feature you want isn’t working, you can bind keys in a configuration file. Don’t just write it off; once you’ve solved the problem, it will never again be an intimidating one.)
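
For example, a minimal sketch of re-binding the last-word trick from above (the ESC-. sequence here is just the usual default, shown for illustration) looks like this in ~/.zshrc:
bindkey '\e.' insert-last-word
and like this in ~/.inputrc for Bash’s readline:
"\e.": yank-last-arg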

Happy hacking!


Permalink: 20130606.managing.to.use.man.pages