Thoughts on Open Source Software Development

28 10 2013

The last year I did a lot of work with some small Open Source projects (Nova, Honeyd, Neighbor-Cache Fingerprinter, Node.js Github Issue Bot…). I’ve also used Linux for all of my development and have used a lot of Open Source projects in that time. In some ways I’ve come out being more of an Open Source advocate than ever, and in other ways I’ve come out a bit jaded. What does Open Sourcing a project get you?

Good thing: free feedback on features and project direction

Unless you’re Steve Jobs, you probably don’t know what customers want. If you’re an engineer like most people reading this blog, you really probably don’t know what customers want. Open Sourcing the project can provide free user feedback. If you’re writing a business application, people will tell you they want pretty graphs generated for data that you never thought would be important. If you’re writing something with dependencies, users will tell you they want you to support multiple versions of potentially incompatible libraries that you would never have bothered with on your own.

If you’ve got an IRC channel, you’ll occasionally find a person who’s more than willing to chat about his or her opinions on the project and what features they think would be useful, in addition to the occasional issue tickets and emails.

The Open Source community can be your customers when you don’t have any real customers yet.

Good thing: free testing

Everyone who downloads and uses the project becomes someone that can help with the testing effort. All software has bugs, and if they’re annoying enough, people will report them. I’ve tried to make small contributions to bigger Open Source projects by reporting issues I’ve found in things like Node.js, Express, Backtrack, Gimp, cvv8… As a result, code becomes better tested and more stable.

Good thing: free marketing

Open Sourcing the project, at least in theory, means people will use it. They’ll talk about it to their friends, they’ll write articles and reviews about it, and if the project is actually useful it’ll start gaining popularity.

Misconception: you’ll get herds of developers willing to work on your project for free

I’ve reported dozens of bugs in big Open Source projects. I’ve modified the source code of Nmap and Apache for various data collection reasons. I’ve never submitted a patch bigger than about 3 lines of code to someone else’s Open Source project. That’s depressing to admit, but it’s the norm. People will file bug tickets, sometimes offer suggestions on features, but don’t expect a herd of developers working for free and flocking to your project to make it better. Even the most hardcore Open Source advocates have their own pet projects they would rather work on than fixing bugs or writing features into yours. Not only that, the effort to fix a bug in a foreign code base is significantly higher than the effort required for the original developer of the code to fix it. Why spend 3 hours setting up the development environment and trying to fix a bug, when you can file a ticket and the guy that wrote the code can probably fix it in 3 minutes?

There are large Open Source projects (Linux, Open Office, Apache…) that have a bunch of dedicated developers. They’re the exceptions. From what I’ve seen, most Open Source projects are run by one person or a small core group.

Misconception: the community will take over maintaining projects even if the core developer leaves

We used a Node.js library called Nowjs quite a lot. It’s a wonderful package that takes away all the tedium of manual AJAX or socket.io work and makes Javascript RPC amazingly easy. It has over 2,000 followers on Github, and probably ten times that many people using it. One day the developer decided to abandon the project to work on other things; not that unusual for a pet project. Sadly, that was the death of the project. Github makes it trivial to clone the project: with a single press of a button, someone can make a copy of the repository and take over maintaining and extending it. Dozens of people initially made forks of the project in order to do that, and dozens more made forks to fix bugs they found.

What’s left? A mess consisting of dozens of Github forks of the project, all with different bugs being fixed or features added, and the “official” project left abandoned in such a way no one can figure out which fork they should use. There’s no one left to merge in patches or to make project direction decisions. New users can’t figure out which fork to use and old users that actually write patches don’t know where to submit them anymore.

The developer of Nowjs moved on to develop Bridge-js. Then Bridge-js got abandoned too.

Bridge is still open source but the engineers behind Bridge have returned to school.

This pattern is almost an epidemic in Node.js. Someone creates a really amazing module, publishes it to Github and NPM, and then abandons it. Dozens of people try to take over the development, but in the end all fail (partly because Github provides no way to mark which fork of a project is “official”, and partly because of the deeper Open Source problem that there is no “official” fork). A dozen new people create the same basic module from scratch, most of which never become popular, and most of which also become abandoned… You see the picture.

If you sense a hint of frustration, you’d be right. On multiple occasions I had to dig through dozens of half abandoned projects trying to figure out which library I wanted to use to do something as common as SQL in Node.js.

The reason it’s an epidemic with Node is that no one is really sure what they want yet, and projects haven’t become popular enough to have the momentum to continue after abandonment by their original authors. Hopefully at some point projects will acquire a big enough user base and enough developers that they can sustain themselves.

Fork is a four letter word

Even the biggest projects aren’t immune to the anarchy of forks. LibreOffice and OpenOffice, GNU Emacs vs XEmacs, the list goes on. For the end user of these software suites, this is mainly annoying. I’ve switched between LibreOffice and OpenOffice more than once now, because I keep finding bugs in one but not the other.

Sometimes forks break out for ridiculous reasons. The popular IM client Pidgin was forked into the Carrier project. Why?

As of version 2.4 and later, the ability to manually resize the text input box of conversations has been altered—Pidgin now automatically resizes between a number of lines set in ‘Preferences’ and 50% of the window depending on how much is typed. Some users find this an annoyance rather than a feature and find this solution unacceptable. The inability to manually resize the input area eventually led to a fork, Carrier (originally Funpidgin). – https://en.wikipedia.org/wiki/Pidgin_(software)

You can view the 300+ post argument about the issue on the original Pidgin ticket here.

The fact that there’s no single “official” version of a project, and the sometimes trivial reasons that forks break out, cause a lot of inefficiency: bugs are fixed in some forks but not others, and eventually the code bases diverge so much that features end up being developed in one fork or another.

Misconception: people outside of software development understand Open Source

I once heard someone ask in confusion how Open Source software can possibly be secure, because can’t anyone upload backdoors into it? They thought Open Source projects were like Wikipedia, where anyone could edit the code and their changes would be somehow instantly applied without review. After all, people keep telling them, “Open Source is great, anyone can modify the code!”.

A half dozen times, ranging from family members to customers and business people, I’ve had to try and explain how Open Source security products can work even though the code is available. If people can see the code, they can figure out how to break and bypass it, right? Ugh…

And don’t even get me started on the people that will start comparing Open Source to communism.

Concluding thoughts

I believe Open Source software has plenty of advantages, but I also think there’s a lot of hype surrounding it. The vast majority of Open Source projects are hacked together as hobby projects and abandoned shortly after. A study of Sourceforge projects showed that less than 17% of projects actually become successful; the other 83% are abandoned in the early stages. Most projects only have a few core developers and the project won’t outlive their interest in it. The community may submit occasional patches, but are unlikely to do serious feature development.

Why release Open Source software then? I think the answer often becomes, “why not?”. Plenty of developers write code in their spare time. They don’t plan to make money directly from it: selling software is hard. They do it to sharpen their saws. They do it for fun, self improvement, learning, future career opportunities, and to release something into the world that just might be useful and make people’s lives better. If you’re writing code with no intention of making money off it, there’s really no reason not to release it as Open Source.

What if you do want to make money off it? Well, why not dual license your code and have a free Open Source trial version along with an Enterprise version? You’ll get the advantages of free marketing, testing, and user feedback. There is the risk that someone will come along and extend the Open Source trial version into a program that has all of the same features as your Enterprise version, or even more, and this is something that needs to be considered. However, as I mentioned before, it’s hard to find people that will take over the development and maintenance of Open Source projects. I think it’s more likely that someone will steal your idea and create their own implementation than bother with trying to extend your trial version code, but I don’t have any proof or evidence of that.





Neighbor Cache Fingerprinter: Operating System Version Detection with ARP

30 12 2012

I’ve released the first prototype (written in C++) of an Open Source tool called the Neighbor Cache Fingerprinter on Github today. A few months ago, I was watching the output of a lightweight honeypot in a Wireshark capture and noticed that although it had the capability to fool nmap’s operating system scanner into thinking it was a certain operating system, there were subtle differences in the ARP behavior that could be detected. This gave me the idea to explore the possibility of doing OS version detection with nothing except ARP. The holidays provided a perfect time to destroy my sleep schedule and get some work done on this little side project (see commit punchcard, note best work done Sunday at 2:00am).

(commit punchcard image)

The tool is currently capable of uniquely identifying the following operating systems,

Windows 7
Windows XP (fingerprint from Service Pack 3)
Linux 3.x (fingerprint from Ubuntu 12.10)
Linux 2.6 (fingerprint from Century Link Q1000 DSL Router)
Linux 2.6 (newer than 2.6.24) (fingerprint from Ubuntu 8.04)
Linux 2.6 (older than 2.6.24) (fingerprint from Knoppix 5)
Linux 2.4 (fingerprint from Damn Small Linux 4.4.10)
FreeBSD 9.0-RELEASE
Android 4.0.4
Android 3.2
Minix 3.2
ReactOS 0.3.13

More operating systems should follow as I get around to spinning up more installs on Virtual Machines and adding to the fingerprints file. Although it’s still a fairly early prototype, I believe it’s already useful enough to be worth trying, so install it and let me know via the Github issues page if you find any bugs. There’s very little existing research on this; arp-fingerprint (a perl script that uses arp-scan) is the only thing remotely close, and it attempts to identify the OS only by looking at responses to ARP REQUEST packets. The Neighbor Cache Fingerprinter focuses on sending different types of ARP REPLY packets as well as analyzing several other behavioral quirks of ARP discussed in the notes below.

The main advantage of the Neighbor Cache Fingerprinter versus an Nmap OS scan is that the tool can do OS version detection on a machine that has all closed ports. The next big feature I’m working on is expanding the probe types to allow it to work on machines that respond to ICMP pings, OR have open TCP ports, OR have closed TCP ports, OR have closed UDP ports. The tool just needs the ability to elicit a reply from the target being scanned, and a pong, TCP/RST, TCP/ACK, or ICMP unreachable message will all provide that.

The following are my notes taken from the README file,

Introduction

What is the Neighbor Cache? The Neighbor Cache is an operating system’s mapping of network addresses to link layer addresses, maintained and updated via the protocol ARP (Address Resolution Protocol) in IPv4 or NDP (Neighbor Discovery Protocol) in IPv6. The Neighbor Cache can be anything from a simple lookup table updated every time an ARP or NDP reply is seen, to a complex cache with multiple timeout values for each entry that are updated based on positive feedback from higher level protocols and on how the operating system’s applications use that entry, along with restrictions on malformed or unsolicited update packets.

This tool provides a mechanism for remote operating system detection by extrapolating characteristics of the target system’s underlying Neighbor Cache and general ARP behavior. Given that no standard specification exists for how the Neighbor Cache should behave, there are several differences in operating system network stack implementations that can be used for unique identification.

Traditional operating system fingerprinting tools such as Nmap and Xprobe2 rely on creating fingerprints from higher level protocols such as TCP, UDP, and ICMP. The downside of these tools is that they usually require open TCP ports and responses to ICMP probes. This tool works by sending a TCP SYN packet to a port which can be either open or closed. The target machine will respond with either a SYN/ACK packet or a RST packet, but either way it must first discover the MAC address to send the reply to via its ARP Neighbor Cache, issuing ARP requests if no entry exists. This allows for fingerprinting on target machines that have nothing but closed TCP ports and give no ICMP responses.
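
To make the core trick concrete, here is a rough sketch of it in Python with scapy rather than the tool’s actual C++ implementation; the interface, target address, and port below are placeholder assumptions,

# Sketch only (not the tool's C++): requires root and scapy; the interface and
# addresses below are made-up placeholders.
import time
from scapy.all import IP, TCP, ARP, send, AsyncSniffer

IFACE = "eth0"
TARGET_IP = "192.168.0.50"

# Start listening for ARP requests coming from the target before we poke it.
sniffer = AsyncSniffer(iface=IFACE,
                       lfilter=lambda p: p.haslayer(ARP) and p[ARP].op == 1
                                         and p[ARP].psrc == TARGET_IP)
sniffer.start()
time.sleep(0.3)   # give the sniffer a moment to come up

# A SYN to a (probably closed) port: before the target can answer with a RST,
# it has to resolve our MAC address, unless we are already in its cache.
send(IP(dst=TARGET_IP) / TCP(dport=4444, flags="S"), iface=IFACE, verbose=False)

time.sleep(5)
sniffer.stop()
print("ARP requests seen from the target:", len(sniffer.results))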

The main disadvantage of this tool versus traditional fingerprinting is that because it’s based on a Layer 2 protocol instead of a Layer 3 protocol, the target machine that is being tested must reside on the same Ethernet broadcast domain (usually the same physical network). It also has the disadvantage of being fairly slow compared to other OS scanners (a scan can take ~5 minutes).

Fingerprint Technique: Number of ARP Requests

When an operating system performs an ARP query it will often resend the request multiple times in case the request or the reply was lost. A simple count of the number of requests that are sent can provide a fingerprint feature. In addition, there can be differences in the number of requests depending on whether the probed port is open or closed, due to retries by the higher level protocols, and sending the same probe multiple times can also result in different numbers of ARP requests (Android will initially send 2 ARP requests, but the second time it will only send 1).

For example,

Windows XP: Sends 1 request

Windows 7: Sends 3 if probe to closed port (9 if probe to open port)

Linux: Sends 3 requests

Android 3: Sends 2 requests the first probe, then 1 request after
A minimum and maximum number of requests seen is recorded in the fingerprint.
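
One way to observe the retries (again a scapy sketch with made-up addresses, not the real implementation) is to probe from a spare IP address that nothing on the network will answer ARP for, so every retry the target makes stays visible,

# Sketch: "eth0" and all addresses are placeholder assumptions; run as root.
import time
from scapy.all import Ether, IP, TCP, ARP, sendp, AsyncSniffer, getmacbyip

IFACE = "eth0"
TARGET_IP = "192.168.0.50"
FAKE_IP = "192.168.0.77"   # an unused address, so nothing answers the target's ARP

target_mac = getmacbyip(TARGET_IP)
sniffer = AsyncSniffer(iface=IFACE,
                       lfilter=lambda p: p.haslayer(ARP) and p[ARP].op == 1
                                         and p[ARP].pdst == FAKE_IP)
sniffer.start()
time.sleep(0.3)

# A SYN that appears to come from the unused address forces the target to ARP
# for it; since nothing replies, every retry it makes can be counted.
sendp(Ether(dst=target_mac) / IP(src=FAKE_IP, dst=TARGET_IP) / TCP(dport=4444, flags="S"),
      iface=IFACE, verbose=False)

time.sleep(10)
sniffer.stop()
print("ARP requests for the fake address:", len(sniffer.results))
times = [p.time for p in sniffer.results]   # reused by the timing sketch below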

Fingerprint Technique: Timing of ARP Request Retries

On hosts that retry ARP requests, the timing values can be used to deduce more information. Linux hosts generally have a constant retry time of 1 second, while Windows hosts generally back off on the timing, sending their first retry after between 500ms and 1s, and their second retry after 1 second.

The fingerprint contains the minimum time difference between requests seen, maximum time difference, and a boolean value indicating if the time differences are constant or changing.
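
Turning those timestamps (the times list from the sketch above) into fingerprint values is then just arithmetic; the tolerance below is an arbitrary assumption of mine, not the tool’s actual threshold,

# Given a list of packet timestamps, compute min gap, max gap, and whether the
# retry interval looks constant (the 100ms tolerance is a guess, not the tool's).
def retry_timing_fingerprint(times, tolerance=0.1):
    gaps = [b - a for a, b in zip(times, times[1:])]
    if not gaps:
        return None
    return {"min_gap": min(gaps),
            "max_gap": max(gaps),
            "constant": (max(gaps) - min(gaps)) < tolerance}

# Linux-like spacing -> constant; Windows-like backoff -> not constant.
print(retry_timing_fingerprint([0.0, 1.0, 2.0]))    # constant 1s retries
print(retry_timing_fingerprint([0.0, 0.6, 1.6]))    # first retry sooner, then 1s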

Fingerprint Technique: Time before cache entry expires

After a proper request/reply ARP exchange, the Neighbor Cache gets an entry put in it for the IP address and for a certain amount of time communication will continue without additional ARP requests. At some point, the operating system will decide the entry in the cache is stale and make an attempt to update it by sending a new ARP request.

To test this, a SYN packet is sent, an ARP exchange happens, and then SYN packets are sent once per second until another ARP request is seen.

Operating system response examples,

Windows XP : Timeout after 10 minutes (if referred to)

Windows 7/Vista/Server 2008 : Timeout between 15 seconds and 45 seconds

FreeBSD : Timeout after 20 minutes

Linux : Timeout usually around 30 seconds
More research needs to be done on the best way to capture the values of delay_first_probe_time and differences between stale timing and actually falling out of the table and being gc’ed in Linux.

Waiting 20 minutes to finish the OS scan is unfeasible in most cases, so the fingerprinting mode only waits about 60 seconds. This may be changed later to make it easier to detect an oddity in older Windows targets where cache entries expire faster if they aren’t used (TODO).
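
In sketch form (scapy with placeholder addresses, rather than the tool’s C++), the staleness test looks roughly like,

# Sketch: keep a conversation going with one SYN per second and measure how long
# until the target decides its cache entry for us is stale and re-ARPs.
import time
from scapy.all import IP, TCP, ARP, send, AsyncSniffer

IFACE = "eth0"
TARGET_IP = "192.168.0.50"
MY_IP = "192.168.0.10"       # our real address, so the initial exchange is genuine
MAX_WAIT = 60                # the fingerprinting mode only waits about a minute

sniffer = AsyncSniffer(iface=IFACE,
                       lfilter=lambda p: p.haslayer(ARP) and p[ARP].op == 1
                                         and p[ARP].psrc == TARGET_IP
                                         and p[ARP].pdst == MY_IP)
sniffer.start()
time.sleep(0.3)
start = time.time()
while time.time() - start < MAX_WAIT:
    send(IP(dst=TARGET_IP) / TCP(dport=4444, flags="S"), iface=IFACE, verbose=False)
    time.sleep(1)
sniffer.stop()

reqs = sniffer.results
# The first request is the initial resolution (assuming it was answered right
# away and not retried); the second marks the entry going stale.
if len(reqs) >= 2:
    print("Entry went stale after roughly %.1f seconds" % (reqs[1].time - reqs[0].time))
else:
    print("No refresh observed within %d seconds" % MAX_WAIT)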

Fingerprint Technique: Response to Gratuitous ARP Replies

A gratuitous or unsolicited ARP reply is an ARP reply for which there was no request. The usual use case is notifying machines on the network of IP address changes or of systems coming online. The problem for implementers is that several of the fields in the ARP packet no longer make much sense.

What should the Target Protocol Address of the ARP packet be? The broadcast address? Zero? The specification surprisingly says neither: the Target Protocol Address should be the same IP address as the Sender Protocol Address.

When there’s no specific target for the ARP packet, the Target Hardware Address also becomes a confusing field. The specification says its value shouldn’t matter, but should be set to zero. However, most implementations will use the Ethernet broadcast address of FF:FF:FF:FF:FF:FF instead, because internally they have some function to send an ARP reply that only takes one argument for the destination MAC address (which is put in both the Ethernet frame destination and the ARP packet’s Target Hardware Address). We can also experiment with setting the Target Hardware Address to the same thing as the Sender Hardware Address (the same method the spec says to use for the Target Protocol Address field).

Even the ARP opcode becomes confusing in the case of unsolicited ARP packets. Is it a “request” for other machines to update their cache? Or is it a “reply”, even though it isn’t a reply to anything? Most operating systems will update their cache no matter the opcode.

There are several variations of the gratuitous ARP packet that can be generated by changing the following fields,

Ethernet Frame Destination Address : Bcast or the MAC of our target

ARP Target Hardware Address : 0, bcast, or the MAC of our target

ARP Target Protocol Address : 0 or the IP address of our target

ARP Opcode : REPLY or REQUEST
This results in 36 different gratuitous packet permutations.
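
As a rough sketch of how one might enumerate the permutations (in Python/scapy rather than the tool’s C++; the particular field value choices here are my assumption, and all addresses are placeholders),

# Sketch: build candidate gratuitous ARP packets; the field choices are assumptions.
from itertools import product
from scapy.all import Ether, ARP

TARGET_IP  = "192.168.0.50"            # placeholders throughout
TARGET_MAC = "00:11:22:33:44:55"
CLAIM_IP   = "192.168.0.77"            # the address we pretend to own
CLAIM_MAC  = "de:ad:be:ef:00:01"

ether_dsts = ["ff:ff:ff:ff:ff:ff", TARGET_MAC]
arp_thas   = ["00:00:00:00:00:00", "ff:ff:ff:ff:ff:ff", TARGET_MAC]
arp_tpas   = ["0.0.0.0", TARGET_IP, CLAIM_IP]   # CLAIM_IP gives the tpa == spa variant
opcodes    = [1, 2]                             # REQUEST and REPLY

packets = [Ether(dst=dst, src=CLAIM_MAC) /
           ARP(op=op, hwsrc=CLAIM_MAC, psrc=CLAIM_IP, hwdst=tha, pdst=tpa)
           for dst, tha, tpa, op in product(ether_dsts, arp_thas, arp_tpas, opcodes)]
print(len(packets), "gratuitous ARP variations")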

Most operating systems have the interesting behavior that they will ignore gratuitous ARP packets if the sender is not in the Neighbor Cache already, but if the sender is in the Neighbor Cache, they will update the MAC address, and in some operating systems also update the timeouts.
The following sequence shows the testing technique for this feature,

1. Send an ARP packet that is known to update most caches with srcmac = srcMacArg
2. Send the gratuitous ARP packet that is currently being tested with srcmac = srcMacArg + 1
3. Send a probe packet with a source MAC address of srcMacArg in the Ethernet frame

The first packet attempts to get the cache entry into a known state: up to date and storing the source MAC address that is our default or the command line argument --srcmac. The following ARP packet is the actual probe permutation that’s being tested.

If the reply to the probe packet is to (srcMacArg + 1), then we know the gratuitous packet successfully updated the cache entry. If the reply to the probe is just (srcMacArg), then we know the cache was not updated and still contains the old value.

The reason the Ethernet frame source MAC address in the probe is set to the original srcMacArg is to ensure the target isn’t just replying to the MAC address it sees packets coming from, and is really pulling the entry out of its ARP cache.
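
Sketched with scapy (placeholder addresses; the seed packet below is just a stand-in for whatever packet the tool already knows will update the target’s cache),

# Sketch only: seed the entry, fire the permutation under test claiming a new MAC,
# then probe with the old MAC and see which MAC the reply is addressed to.
import time
from scapy.all import Ether, ARP, IP, TCP, sendp, AsyncSniffer, getmacbyip

IFACE     = "eth0"
TARGET_IP = "192.168.0.50"
SRC_IP    = "192.168.0.77"          # the IP whose cache entry we manipulate
SRC_MAC   = "de:ad:be:ef:00:01"     # stands in for --srcmac
SRC_MAC_2 = "de:ad:be:ef:00:02"     # stands in for srcMacArg + 1

def test_variant(grat_pkt):
    target_mac = getmacbyip(TARGET_IP)
    # Step 1: a plain reply as a stand-in for the known-good cache update.
    sendp(Ether(dst=target_mac, src=SRC_MAC) /
          ARP(op=2, hwsrc=SRC_MAC, psrc=SRC_IP, hwdst=target_mac, pdst=TARGET_IP),
          iface=IFACE, verbose=False)
    time.sleep(0.5)
    # Step 2: the gratuitous permutation under test, claiming SRC_IP is at SRC_MAC_2.
    sendp(grat_pkt, iface=IFACE, verbose=False)
    time.sleep(0.5)
    # Step 3: probe with the Ethernet source still set to the old MAC, then look
    # at which MAC the target addresses its reply to.
    watcher = AsyncSniffer(iface=IFACE,
                           lfilter=lambda p: p.haslayer(IP) and p[IP].src == TARGET_IP
                                             and p[IP].dst == SRC_IP)
    watcher.start()
    time.sleep(0.3)
    sendp(Ether(dst=target_mac, src=SRC_MAC) /
          IP(src=SRC_IP, dst=TARGET_IP) / TCP(dport=4444, flags="S"),
          iface=IFACE, verbose=False)
    time.sleep(3)
    watcher.stop()
    if not watcher.results:
        return "no reply"
    return "cache updated" if watcher.results[0][Ether].dst == SRC_MAC_2 else "cache not updated"

# e.g. test a broadcast REPLY claiming SRC_IP is now at SRC_MAC_2
print(test_variant(Ether(dst="ff:ff:ff:ff:ff:ff", src=SRC_MAC_2) /
                   ARP(op=2, hwsrc=SRC_MAC_2, psrc=SRC_IP,
                       hwdst="00:00:00:00:00:00", pdst=SRC_IP)))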

Sometimes the Neighbor Cache entry will get into a state that makes it ignore gratuitous packets even though, given a normal state, it would accept them and update the entry. This can result in some timing related result changes. For now I haven’t made an attempt to fix this as it’s actually useful as a fingerprinting method in itself.

Fingerprint Technique: Can we get put into the cache with a gratuitous packet?

As mentioned in the last section, most operating systems won’t add a new entry to the cache given a gratuitous ARP packet, but they will update existing entries. One of the few differences between Windows XP and FreeBSD’s fingerprint is that we can place an entry in the cache by sending a certain gratuitous packet to a FreeBSD machine, and test if it was in the cache by seeing if a probe gets a response or not.

Fingerprint Technique: ARP Flood Prevention (Ignored rapid ARP replies)

RFC1122 (Requirements for Internet Hosts) states,

“A mechanism to prevent ARP flooding (repeatedly sending an ARP Request for the same IP address, at a high rate) MUST be included. The recommended maximum rate is 1 per second per destination.”

Linux will not only ignore duplicate REQUEST packets within a certain time, but also duplicate REPLY packets. We can test this by sending a set of unsolicited ARP replies within a short time range, with a different MAC address being reported by each reply. Sending a probe afterwards will reveal, via the destination MAC address of the probe response, whether the host kept the first MAC address we ARPed or the last one, indicating whether it ignored the later rapid replies.
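
A quick sketch of that test, assuming the entry for the spoofed address already exists in the target’s cache (placeholder addresses again),

# Sketch: send several unsolicited replies for the same IP back-to-back, each
# claiming a different MAC, then check which MAC the target actually kept.
import time
from scapy.all import Ether, ARP, IP, TCP, sendp, AsyncSniffer, getmacbyip

IFACE, TARGET_IP, SRC_IP = "eth0", "192.168.0.50", "192.168.0.77"
MACS = ["de:ad:be:ef:00:%02x" % i for i in range(1, 4)]

target_mac = getmacbyip(TARGET_IP)
for mac in MACS:
    sendp(Ether(dst=target_mac, src=mac) /
          ARP(op=2, hwsrc=mac, psrc=SRC_IP, hwdst=target_mac, pdst=TARGET_IP),
          iface=IFACE, verbose=False)
    time.sleep(0.05)   # well under the 1 per second rate from RFC 1122

watcher = AsyncSniffer(iface=IFACE,
                       lfilter=lambda p: p.haslayer(IP) and p[IP].src == TARGET_IP
                                         and p[IP].dst == SRC_IP)
watcher.start()
time.sleep(0.3)
sendp(Ether(dst=target_mac, src=MACS[0]) /
      IP(src=SRC_IP, dst=TARGET_IP) / TCP(dport=4444, flags="S"),
      iface=IFACE, verbose=False)
time.sleep(3)
watcher.stop()
if watcher.results:
    kept = watcher.results[0][Ether].dst
    print("Target kept", "the first MAC" if kept == MACS[0] else "the last MAC")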

Fingerprint Technique: Correct Reply to RFC5227 ARP Probe

This test sends an “ARP Probe” as defined by RFC 5227 (IPv4 Address Conflict Detection) and checks the response to see if it conforms to the specification. The point of the ARP Probe is to check if an IP address is being used without the risk of accidentally causing someone’s ARP cache to update with your own MAC address when it sees your query. Given that you’re likely trying to tell if an IP address is in use because you want to claim it, you likely don’t have an IP address of your own yet, so the Sender Protocol Address field is set to 0 in the ARP REQUEST.

The RFC specifies the response as,

“(the probed host) MAY elect to attempt to defend its address by … broadcasting one single ARP Announcement, giving its own IP and hardware addresses as the sender addresses of the ARP, with the ‘target IP address’ set to its own IP address, and the ‘target hardware address’ set to all zeroes.”

But any Linux kernel older than 2.6.24 and some other operating systems will respond incorrectly, with a packet that has tpa == spa and tha == sha. Checking if tpa == 0 has proven sufficient for a boolean fingerprint feature.
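
Building the probe and checking the tpa of whatever comes back is simple enough; a scapy sketch with placeholder addresses,

# Sketch: send an RFC 5227 style ARP Probe (sender protocol address all zeroes)
# and check whether the responder sets tpa correctly; addresses are placeholders.
import time
from scapy.all import Ether, ARP, sendp, AsyncSniffer

IFACE = "eth0"
TARGET_IP = "192.168.0.50"
MY_MAC = "de:ad:be:ef:00:01"

watcher = AsyncSniffer(iface=IFACE,
                       lfilter=lambda p: p.haslayer(ARP) and p[ARP].psrc == TARGET_IP)
watcher.start()
time.sleep(0.3)
sendp(Ether(src=MY_MAC, dst="ff:ff:ff:ff:ff:ff") /
      ARP(op=1, hwsrc=MY_MAC, psrc="0.0.0.0",
          hwdst="00:00:00:00:00:00", pdst=TARGET_IP),
      iface=IFACE, verbose=False)
time.sleep(3)
watcher.stop()
if watcher.results:
    tpa = watcher.results[0][ARP].pdst
    print("non-conforming (tpa == 0)" if tpa == "0.0.0.0" else "conforming response")
else:
    print("no response to the ARP Probe")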

TODO RESEARCH IN PROGRESS Fingerprint Technique

Feedback from higher protocols extending timeout values

Linux has the ability to extend timeout values if there’s positive feedback from higher level protocols, such as a 3 way TCP handshake. Need to write tests for this and do some source diving in the kernel to see what else counts besides a 3 way handshake for positive feedback.

TODO RESEARCH IN PROGRESS Fingerprint Technique

Infer Neighbor Cache size by flooding to cause entry dumping

Can we fill the ARP table with garbage entries in order for it to start dumping old ones? Can we reliably use this to infer the table size, even with Linux’s near random cache garbage collection rules? Can we do this on class A networks, or do we really need class B network subnets in order to make this a viable test?





All about network configuration in Ubuntu Server 12.04/12.10

1 11 2012

Network configuration in Linux can be confusing; this post traces through the layers from top to bottom in order to take away some of the confusion and provide some detailed insight into the network initialization and configuration process in Ubuntu Server 12.04 and 12.10.

Initial System Startup

In Ubuntu, upstart is gradually replacing traditional init scripts that start and stop based primarily on run levels. The upstart script for basic networking is located in /etc/init/networking.conf,

# networking - configure virtual network devices
#
# This task causes virtual network devices that do not have an associated
# kernel object to be started on boot.

description "configure virtual network devices"

emits static-network-up
emits net-device-up

start on (local-filesystems
and (stopped udevtrigger or container))

task

pre-start exec mkdir -p /run/network

exec ifup -a

The ‘local-filesystems’ event is triggered when all file systems have finished being mounted, and the ‘stopped udevtrigger or container’ line is to ensure that the /run folder is ready to be used (it contains process IDs, locks, and other information programs want to temporarily store while they’re running). The task keyword tells upstart that this is a task that should end in a finite amount of time (rather than a service, which has daemon-like behavior). The important thing to take away from this is that the command "ifup -a" is called when the system starts up.

Configuring ifup

The configuration file for ifup is located in /etc/network/interfaces and this is the file you’ll want to modify for a basic network configuration. The ifup tool allows configuring your network interfaces and will attempt to serially go through and bring them up one at a time when ifup -a is called, if the ‘auto’ keyword is specified. An example configuration follows,

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet static
address 192.168.0.42
netmask 255.255.255.0
network 192.168.0.0
broadcast 192.168.0.255
gateway 192.168.0.1
dns-nameservers 192.168.0.1 8.8.8.8

 

This configuration starts out with the loopback adapter, which should be there by default. The next entry for eth0 will attempt to use DHCP, and the entry for eth1 will use a static IP and configuration. Something to keep in mind is that this will ONLY be run when the system starts up. For a simple desktop machine that is always plugged into the network, this configuration will probably be all you need. But what happens if you’re unplugging and plugging things in a lot, such as on a laptop? You will run into two problems: the first is that if you have interfaces set to DHCP and they aren’t plugged in when you’re booting, you’ll likely see a "waiting for network configuration" message followed by "waiting up to 60 more seconds for network configuration..", which can slow your boot time by several minutes. The second problem is that once the system is booted, plugging in an Ethernet cable won’t actually cause a DHCP request to be sent, since ifup -a is only called once when the system is booting. If you want to avoid these problems, you’ll have to use a tool like ifplugd.

Using ifplugd to handle interfaces that are unplugged a lot

From the man page,

ifplugd is a daemon which will automatically configure your ethernet device when a cable is plugged in and automatically unconfigure it if the cable is pulled. This is useful on laptops with on-board network adapters, since it will only configure the interface when a cable is really connected.

 

Installing and configuring ifplugd is easy. First, go into /etc/network/interfaces and change the ‘auto eth0’ settings to ‘allow-hotplug eth0’. Now ifup -a will not activate this interface, but will instead allow the ifplugd daemon to bring it up. The configuration information for the interface will still be used from /etc/network/interfaces. To install and configure ifplugd run the following,

 

sudo apt-get install ifplugd

sudo dpkg-reconfigure ifplugd

Enter the names of all the interfaces that you want ifplugd to configure when the link status changes and the reconfigure tool will update /etc/default/ifplugd. Instead of using upstart, ifplugd currently uses the old style init.d scripts, and is launched from /etc/init.d/ifplugd.

Now you should be able to unplug and plug in Ethernet cables and DHCP requests will be sent each time!

Note for Ubuntu Desktop Users

Ubuntu Desktop uses the Network Manager GNOME tool to configure the network, and most of the time you should be able to configure everything graphically using it. This is specifically for Ubuntu Server or an Ubuntu version without Network Manager.





NOVA: Network Antireconnaissance with Defensive Honeypots

7 06 2012

Knowledge is power, especially when it comes to computer and information security. From the standpoint of a hacker, knowledge about the victim’s network is essential, and the first step in any sort of attack is reconnaissance. Every little piece of seemingly innocent information can be gathered and combined to form a profile of the victim’s network, and each bit of information can help discover vulnerabilities that can be exploited to get in. What operating systems are being used? What services are running? What are the IP and MAC addresses of the machines on a network? How many machines are on the network? What firewalls and routers are in place? What’s the overall network architecture? What are the uptime statistics for the machines?

Since network reconnaissance is the first step in attacking, it follows that antireconnaissance should be the first line of defense against attacks. What can be done to prevent information gathering?

The first step in making information difficult to gather is simply to not release it. This is the realm of authentication and firewalls, where data is restricted to subsets of authorized users and groups. This doesn’t stop the gathering of information that, by its nature, must be to some extent publicly available for things to function. Imagine the real life analogy of a license plate. The license plate number of the car you drive is a mostly harmless piece of information, but hiding it isn’t an option. It’s a unique identifier for your car whose entire point is to be displayed to the world. But how harmless is it really? Your license plate could be used for tracking your location: imagine a camera at a parking garage that keeps logs of all the cars that go in and out. What if someone makes a copy of your license plate for their car and uses it to get free parking at places where you have authorized parking? What if someone copies the plate and uses it while speeding through red light cameras or committing other crimes? What if someone created a massive online database of every license plate they’ve ever seen, along with where they saw it and the car and driver’s information?

Although a piece of information may seem harmless by itself, it can be combined with others to build a more in depth picture of things and potentially become a source of exploitation. Like a license plate, there are many things on a network that are required to be publicly accessible in order for the network to function. Since you can’t just block access to this information with a firewall, what’s the next step in preventing and slowing down reconnaissance? This is where NOVA comes in.

Since hiding information on a LAN isn’t an option, Datasoft’s NOVA (Network Obfuscation and Virtualized Anti-reconnaissance) instead tries to slow down and detect attackers by making them go through huge amounts of fake information in the form of virtual honeypots (created with honeyd). Imagine an nmap scan on a typical corporate network. You might discover that there are 50 computers on the network, all running Windows XP and living on a single subnet. All of your attacks could then target Windows XP services and vulnerabilities. You might find a router and a printer on the network too, and spend a lot of time manually poking at them attempting to find a weakness. With NOVA and Honeyd running on the network, the same nmap scan could see hundreds of computers on the network with multiple operating systems, dozens of services running, and multiple routers. The attacker could spend hours or even days attempting to get into the decoy machines. Meanwhile, all of the traffic to these machines is being logged and analyzed by machine learning algorithms to determine if it appears hostile (matches hostile training data of past network scans, intrusion attempts, etc).

At the moment NOVA is still a bit rough around the edges, but it’s an open source C++ Linux project in a usable state that could really use some more users and contributors (shameless plug). There’s currently a QT GUI and a web interface (nodejs server with cvv8 to bind C++ to Javascript) that should provide rudimentary control of it. Given the lack of user input we’ve gotten, there are bound to be things that make perfect sense to us but are confusing to a new user, so if you download it feel free to jump on our IRC channel #nova on irc.oftc.net or post some issues on the github repository.





Xmonad Configuration for DVORAK

30 03 2012

I’ve been using Xmonad for a couple of months now, and I really quite like it for software development. I would say it’s most useful with large dual monitors, but I’ve even tried it on a netbook (with limited usability success). Before using Xmonad, I would constantly lose track of windows. In Linux it was terminal windows. Being a command line guru, I would pull up a terminal to do everything from editing a text file in vim to just using it as a launcher to quickly type ‘firefox &’ or some other application. Having to alt+tab through the inevitable piles of terminals I would have up was annoyingly painful, and instead of finding the one I wanted I would likely just launch a new one and add to the mess.

Now I’ve gotten my Xmonad habits and workflow down.

Workspaces 1 and 2 are used on the first monitor

Workspaces 3 and 4+ are used on the second monitor

 

Workspace 1: Browser and a terminal on the bottom for quick trivial commands

Workspace 2: Eclipse or other IDE

Workspace 3: 2-3 terminals and IRC window (most terminal related work done here)

Workspace 4: Usually a full screen application I’m testing

Workspace 4+: Misc usages as needed

 

And, of course, I’ve got all of the keyboard shortcuts optimized for the DVORAK homerow. Here’s my xmonad.hs configuration file if anyone wants to try my shortcut scheme.

 

import XMonad
import XMonad.Config.Gnome
import XMonad.Hooks.ManageHelpers
import XMonad.Layout.Gaps
import XMonad.Actions.FloatKeys
import XMonad.Actions.CycleWS
import XMonad.Hooks.ManageDocks
import XMonad.Hooks.DynamicLog
import XMonad.Util.EZConfig
import XMonad.Util.Run
import XMonad.Layout.NoBorders
import XMonad.Layout.ResizableTile
import XMonad.Actions.DwmPromote
import System.Exit

import qualified System.IO.UTF8
import qualified XMonad.StackSet as W
import qualified Data.Map as M

myManageHook = composeAll (
[ manageHook gnomeConfig
, className =? "Unity-2d-panel" --> doIgnore
, className =? "Unity-2d-launcher" --> doIgnore
, className =? "Gimp" --> doFloat
, className =? "novagui" --> doFloat
, isFullscreen --> doFullFloat
])

myKeys = \c -> mkKeymap c $
[ ("M-S-<Return>", spawn "gnome-terminal")

-- launch programs
, ("M-r f f", spawn "firefox")
, ("M-r M-c", spawn "chromium-browser")
, ("M-r M-r", spawn "grun")
, ("M-r h a l t", spawn "sudo shutdown -h now")
, ("M-r s s", spawn "scrot")
, ("M-r s S-s", spawn "scrot -s")
, ("M-r v", spawn "gvim")

-- Rotate through the available layout algorithms
, ("M-<Space>", sendMessage NextLayout)
, ("M-S-<Space>", sendMessage FirstLayout)

-- close focused window
, ("M-w", kill)
-- Resize viewed windows to the correct size
, ("M-S-r", refresh)

-- Screen lock
, ("M-l", spawn $ "gnome-screensaver-command -l")

-- Toggle float
, ("M-d", withFocused $ windows . W.sink)

-- These are all DVORAK optimized navigation keys

-- Move window focus with right/left index fingers
, ("M-u", windows W.focusDown)
, ("M-h", windows W.focusUp )
, ("M-<Return>", dwmpromote )
-- Swap window
, ("M-S-u", windows W.swapDown >> windows W.focusDown)
, ("M-S-h", windows W.swapUp >> windows W.focusUp)

-- Resize the master area with right/left middle fingers
, ("M-t", sendMessage Expand)
, ("M-e", sendMessage Shrink)
, ("M-S-e", sendMessage MirrorShrink)
, ("M-S-t", sendMessage MirrorExpand)

-- Change windows in the master area with right/left ring fingers
, ("M-n", sendMessage (IncMasterN 1))
, ("M-o", sendMessage (IncMasterN (-1)))

, ("M-s", nextScreen)
, ("M-a", prevScreen)
, ("M-S-s", shiftNextScreen >> nextScreen)
, ("M-S-a", shiftPrevScreen >> prevScreen)

-- Quit xmonad
, ("M-S-q", io (exitWith ExitSuccess))

-- Restart xmonad
, ("M-q", restart "xmonad" True)
] ++
-- mod-[1..9], Switch to workspace N
-- mod-shift-[1..9], Move client to workspace N
[(m ++ (show k), windows $ f i)
| (i, k) <- zip (XMonad.workspaces c) [1 .. 9]
, (f, m) <- [(W.greedyView, "M-"), (W.shift, "M-S-")]
] ++

-- moving floating window with key
[(c ++ m ++ k, withFocused $ f (d x))
| (d, k) <- zip [\a->(a, 0), \a->(0, a), \a->(0-a, 0), \a->(0, 0-a)] ["<Right>", "<Down>", "<Left>", "<Up>"]
, (f, m) <- zip [keysMoveWindow, \d -> keysResizeWindow d (0, 0)] ["M-", "M-S-"]
, (c, x) <- zip ["", "C-"] [20, 2]
]

myLayouts = gaps [(U, 24)] $ layoutHook gnomeConfig

main = xmonad gnomeConfig {
manageHook = myManageHook
, layoutHook = myLayouts
, borderWidth = 2
, terminal = "gnome-terminal"
, normalBorderColor = "#000099"
, focusedBorderColor = "#009900"
, modMask = mod4Mask
, keys = myKeys }





Linux Tip: Arrow keys not working for command input? RLFE to the rescue.

26 01 2012

I’m always wandering across tools in Linux that don’t support line history (up arrow) or the ability to edit lines/move around with the arrow keys. Side note: if you write a Linux tool that takes user commands, stop being lazy and just go link it to the GNU readline library so your command line interface doesn’t make people hate you. There’s nothing more annoying than trying to go back to fix a typo with your arrow keys and getting a pile of gibberish instead of a moving cursor. For example in tclsh,

% puts “stuff goes herr”^[[D^[[D^[[D^[[A^[[C^[[B^[[D <- (typo, right arrow, right arrow, RAAAGE)

The solution? rlfe: the read line front-end processor. It’s got a few bugs, but it works great for things like telnet and tclsh that by default don’t have line history and arrow key navigation.

$ sudo apt-get install rlfe
$ rlfe tclsh

Replace tclsh with practically any command line tool and get back to typing without fear of typos. Plus, you don’t have to keep retyping/copy pasting things when you want to run them again. The rlfe process will stick around after you close the application, so you really only need to run it once with rlfe.





Building a MiniITX File Server for under $300

20 10 2011

Despite the fact that an old HP desktop I got used at a swap-meet for $10 is still going strong, I’ve wanted to replace it for a while. Having a pile of computers on your desk makes you start to appreciate the little things that are important in a small home file server.

  • Backups. If the hard drive in my old HP died today, I’d lose gigabytes of pictures, music, code, and random writings. I could throw a second drive in it, but then you still risk having the drive die as it’s getting just as much on-time as the other one.
  • Noise. There are currently 3 full size desktops running on my desk (regular use desktop, the HP server, and a friend’s desktop that acts as off-site storage for him). Most of the time, fan noise doesn’t bother me; in fact it helps me sleep easier. You know you’ve been around computers too much when the gentle blowing of computer fans and occasional hard drive clicks lulls you to sleep like those sounds-of-nature CDs they sell on late night TV commercials. There are still occasions when I prefer a bit of silence though.
  • Heat. At one point, I was running 4 monitors (2 of which were CRTs) in addition to the three desktops previously mentioned, plus a router and piles of chargers and other electronic devices. With everything on, I could literally feel the temperature change walking into my room (probably 2-5 degrees). Piles of servers are great to play with, but unless you have a dedicated server room, cooling becomes an issue sooner than you may think.

 

The MiniITX form factor is a great size for a cheap and small computer. Specifications for my new file server,

 

The downside of this build: I hate the case. Despite having decent reviews, it was just one thing after another when putting it together. First, there are 4 plastic snaps that hold the front panel on. The very first time I tried to remove it they were so brittle that one of them broke off. Not a huge deal, still stays on fine, but if you do have the misfortune of getting this case, be careful with the plastic snaps. The second fail was when I tried to install my 5.25″ to 3.5″ hard drive bay for the second hard drive. I vastly prefer cases that have the drive bays extended to the front of the case, and instead this case has about a 1″ gap with a hole for the CD to come out. I removed the flap and CD extender button, but the hole is about 1cm too small to fully remove my external drive. I’ve considered cutting a slice out of it, but I haven’t gotten the courage to mangle it up yet. It’ll be hard to make it look decent after that. Finally, when I got everything installed and powered it up, I found that despite being a small 150 Watt power supply, the thing is loud. The 80mm fan isn’t terribly quiet either, and the case has no sound proofing whatsoever. The power supply also generates a decent amount of heat for the size. I’m considering replacing both the PSU and the fan, but for now I’ll live with it. Basically: don’t get this case. Spend the extra $50 and find one that’s quiet and has a more efficient power supply, the total cost will still be below $300.

 

For software I put Xubuntu on it, though I’m considering swapping it out for something else. No hardware problems though, everything worked fine after install. I’ll post some updates on software later.

 

Build pictures





Dual Booting NB205: Try 2

4 04 2011

Over a year ago I attempted to install Ubuntu to my Toshiba NB205 and failed miserably. I would recommend extreme caution if you attempt to do this, but I finally got it working (though it took most of a day, spent simultaneously browsing and watching Deep Space 9 reruns). Problems/solutions below. I’m not going into details since frankly, if you’re not already very familiar with Linux, it’s a bad idea to try this.

I’m quite happy with my current Linux install, but I don’t know if I’d actually call it Ubuntu anymore. Calling it Ubuntu is like taking a VW bug, taking off the body, rebuilding the engine, turning it into a dune buggy, and then calling it a VW bug still. What I finally ended up with is a very small kernel running Fluxbox for a GUI (with conky, transparent aterm, and all the other fancy Fluxbox features).

Problem: failure to boot without pressing keys nonstop. It’s like the kernel just falls asleep while you’re booting, maybe waiting to probe some hardware, and pressing a key (shift, enter, whatever) seems to wake it back up.

Solution: I compiled a custom stripped down and Atom optimized kernel. The instructions for compiling your own kernel are too long and complex to put here, but they can be found on the internet. Basically, I stripped out anything that wasn’t needed for my hardware, and I really have no idea which of the dozens/hundreds of things I stripped out actually fixed the problem. An easy workaround is simply to press the shift key repeatedly while booting so it’ll stop pausing.

Problem: Unable to boot with error “ALERT! /dev/disk/by-uuid/84b7f9ae-e9b3-44a5-8709-37f5bfb7d8e6 does not exist.”

Solution: When grub loads up, type ‘e’ on the Ubuntu entry and change the root=/dev/disk/blah/blah/blah to /dev/sda3, or whatever your Linux partition is. For some reason using the UUID instead of the actual partition file was buggy after I installed. I haven’t gotten around to figuring out how to configure the new version of grub to stop using UUIDs and go back to the old scheme.

Problem: Once Ubuntu is installed, Windows XP will blue screen of death with a STOP error when booting.

Solution: Use a Windows CD and get to the recovery console or command line, and then run “chkdsk C: /R” in order to fix the corrupted NTFS partition. Resizing the partition with gparted is the cause of this error, and it happens every time on this model of Toshiba for some reason.

Problem: Ubuntu netbook remix is too slow to be usable on the NB205.

Solution: For some reason, the NB205 just can’t seem to handle Gnome, and it actually seemed to perform worse with the netbook remix GUI than with the standard Gnome GUI. It’s not unusable, but it is annoying. The solution is to switch to a lightweight window manager or GUI. I’m using Fluxbox right now, but you might prefer Xfce if you still want a lot of functionality.

Problem: Battery life is terrible.

Solution: When recompiling your kernel, change the default CPU governor to "ondemand" instead of "performance". This will let the kernel use the Intel SpeedStep technology in the Atom and lower the clock speed when it’s idle, increasing battery life to something that almost rivals Windows. I’m sure there’s a way to load the needed modules and change the CPU governor without recompiling the kernel, but you’ll have to resort to Google for that one.





Bootable VM Images?

16 01 2010

Virtual Machines are great for testing things on and running software your native operating system won’t support. Sometimes, though, you actually want to be able to boot into them rather than run them on top of another operating system. If you run Linux for a main OS and Windows for gaming, a VM isn’t going to work that well for you due to all the extra overhead of running two operating systems at once, not to mention poor graphics frame rates on many VMs. Shouldn’t it be possible to create an image that you could both boot into like a regular partition, but also run inside a virtual machine? So far I’ve found nothing that can do this…





Back to Slack? Part 2

16 11 2009

I got the dual booting to work finally, as well as the wireless. For the dual boot, I reinstalled the Windows 7 bootloader as I said last time, then used easyBCD to add an entry for Linux. Finally I booted Slack up using my install DVD and ran liloconfig, this time installing lilo locally rather than to the MBR. Windows 7 boot loader now lets me chainload into lilo.

As for the wireless, an lspci gives,

0c:00.0 Network controller: Broadcom Corporation BCM4311 802.11b/g WLAN (rev 01)

The B43 module loads fine, but to make it work you need to grab the firmware from here: http://linuxwireless.org/en/users/Drivers/b43. The instructions are,

wget http://bu3sch.de/b43/fwcutter/b43-fwcutter-012.tar.bz2
tar xjf b43-fwcutter-012.tar.bz2
cd b43-fwcutter-012
make
cd ..
export FIRMWARE_INSTALL_DIR="/lib/firmware"
wget http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2
tar xjf broadcom-wl-4.150.10.5.tar.bz2
cd broadcom-wl-4.150.10.5/driver
sudo ../../b43-fwcutter-012/b43-fwcutter -w "$FIRMWARE_INSTALL_DIR" wl_apsta_mimo.o

Simple enough. According to that page, the firmware can’t be distributed due to copyright restrictions. Oddly though, this all works just fine in Ubuntu or Backtrack right after the install…

As for the actual wireless configuration, Slackware 13 still doesn’t have a decent GUI configuration utility for wireless. For WPA, the easiest way is to just edit /etc/rc.d/rc.local and add,

iwconfig wlan0 essid "Your ESSID"
wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf &
dhcpcd wlan0

Then go and edit /etc/wpa_supplicant.conf and make sure it looks something like,

ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=0
eapol_version=1
ap_scan=0
fast_reauth=1

# WPA protected network, supply your own ESSID and WPAPSK here:
network={
scan_ssid=1
ssid="Your ESSID"
proto=WPA RSN
key_mgmt=WPA-PSK
pairwise=CCMP TKIP
group=CCMP TKIP WEP104 WEP40
psk="Your WPA Pass Phrase"
priority=10
}

Make sure for both the ESSID and Pass Phrase you include the quotes. The only problem left is that in dmesg I keep seeing,

b43-phy0 ERROR: PHY transmission error

This isn’t new though; I had the same error all the time with Ubuntu. My home connection seems to work fine, but at some access points I’ve gotten random connection drops along with the above error. Considering I don’t take this laptop anywhere much now that I have my netbook, I’m not too worried about it right now.

Now to go configure Slackware the way I like it. So far, even this is being difficult…

root@vostro:/home/pherricoxide# xorgsetup
Only root can configure X.
root@vostro:/home/pherricoxide# whoami
root