Thoughts on Open Source Software Development

28 10 2013

Over the last year I did a lot of work with some small Open Source projects (Nova, Honeyd, Neighbor-Cache Fingerprinter, Node.js Github Issue Bot…). I’ve also used Linux for all of my development and have used a lot of Open Source projects in that time. In some ways I’ve come out more of an Open Source advocate than ever, and in other ways I’ve come out a bit jaded. What does Open Sourcing a project get you?

Good thing: free feedback on features and project direction

Unless you’re Steve Jobs, you probably don’t know what customers want. If you’re an engineer like most people reading this blog, you really probably don’t know what customers want. Open Sourcing the project can provide free user feedback. If you’re writing a business application, people will tell you they want pretty graphs generated for data that you never thought would be important. If you’re writing something with dependencies, users will tell you they want you to support multiple versions of potentially incompatible libraries that you would never have bothered with on your own.

If you’ve got an IRC channel, you’ll occasionally find people who are more than willing to chat about their opinions on the project and what features they think would be useful, in addition to the occasional issue tickets and emails.

The Open Source community can be your customers when you don’t have any real customers yet.

Good thing: free testing

Everyone who downloads and uses the project becomes someone that can help with the testing effort. All software has bugs, and if they’re annoying enough, people will report them. I’ve tried to make small contributions to bigger Open Source projects by reporting issues I’ve found in things like Node.js, Express, Backtrack, Gimp, cvv8… As a result, code becomes better tested and more stable.

Good thing: free marketing

Open Sourcing the project, at least in theory, means people will use it. They’ll talk about it to their friends, they’ll write articles and reviews about it, and if the project is actually useful it’ll start gaining popularity.

Misconception: you’ll get herds of developers willing to work on your project for free

I’ve reported dozens of bugs in big Open Source projects. I’ve modified the source code of Nmap and Apache for various data collection reasons. I’ve never submitted a patch bigger than about 3 lines of code to someone else’s Open Source project. That’s depressing to admit, but it’s the norm. People will file bug tickets, sometimes offer suggestions on features, but don’t expect a herd of developers working for free and flocking to your project to make it better. Even the most hardcore Open Source advocates have their own pet projects they would rather work on than fixing bugs or writing features into yours. Not only that, the effort to fix a bug in a foreign code base is significantly higher than the effort required for the original developer of the code to fix it. Why spend 3 hours setting up the development environment and trying to fix a bug, when you can file a ticket and the guy that wrote the code can probably fix it in 3 minutes?

There are large Open Source projects (Linux, Open Office, Apache…) that have a bunch of dedicated developers. They’re the exceptions. From what I’ve seen, most Open Source projects are run by one person or a small core group.

Misconception: the community will take over maintaining projects even if the core developer leaves

We used a Node.js library called Nowjs quite a lot. It’s a wonderful package that takes away all the tedium of manual AJAX work and makes Javascript RPC amazingly easy. It has over 2,000 followers on Github, and probably ten times that many people using it. One day the developer decided to abandon the project to work on other things; not that unusual for a pet project. Sadly, that was the death of the project. Github makes it trivial to clone a project: with a single press of a button, someone can make a copy of the repository and take over maintaining and extending it. Dozens of people initially made forks of the project in order to do that, and dozens more made forks to fix bugs they found.

What’s left? A mess consisting of dozens of Github forks of the project, all with different bugs being fixed or features added, and the “official” project left abandoned in such a way that no one can figure out which fork they should use. There’s no one left to merge in patches or to make project direction decisions. New users can’t figure out which fork to use, and old users that actually write patches don’t know where to submit them anymore.

The developer of Nowjs moved on to develop Bridge-js. Then Bridge-js got abandoned too.

Bridge is still open source but the engineers behind Bridge have returned to school.

This pattern is almost an epidemic in Node.js. Someone creates a really amazing module, publishes it to Github and NPM, and then abandons it. Dozens of people try to take over the development, but in the end all fail (partly because Github has no way to mark which fork of a project is “official”, and partly because of the deeper Open Source problem that there is no “official” fork). A dozen new people create the same basic module from scratch, most of which never become popular, and most of which also become abandoned… You see the picture.

If you sense a hint of frustration, you’d be right. On multiple occasions I had to dig through dozens of half abandoned projects trying to figure out which library I wanted to use to do something as common as SQL in Node.js.

The reason it’s an epidemic with Node is that no one is really sure what they want yet, and projects haven’t become popular enough to have the momentum to continue after abandonment by their original authors. Hopefully at some point projects will acquire a big enough user base and enough developers that they can sustain themselves.

Fork is a four-letter word

Even the biggest projects aren’t immune to the anarchy of forks. LibreOffice and OpenOffice, GNU Emacs vs XEmacs, the list goes on. For the end user of these software suites, this is mainly annoying. I’ve switched between LibreOffice and OpenOffice more than once now, because I keep finding bugs in one but not the other.

Sometimes forks break out for ridiculous reasons. The popular IM client Pidgin was forked into the Carrier project. Why?

As of version 2.4 and later, the ability to manually resize the text input box of conversations has been altered—Pidgin now automatically resizes between a number of lines set in ‘Preferences’ and 50% of the window depending on how much is typed. Some users find this an annoyance rather than a feature and find this solution unacceptable. The inability to manually resize the input area eventually led to a fork, Carrier (originally Funpidgin). –

You can view the 300+ post argument about the issue on the original Pidgin ticket here.

The fact that there’s no single “official” version of a project, and the sometimes trivial reasons that forks break out, cause a lot of inefficiency: bugs are fixed in some forks but not others, and eventually the code bases diverge so much that fixes and features developed in one fork can no longer easily be shared with another.

Misconception: people outside of software development understand Open Source

I once heard someone ask in confusion how Open Source software can possibly be secure, because can’t anyone upload backdoors into it? They thought Open Source projects were like Wikipedia, where anyone could edit the code and their changes would be somehow instantly applied without review. After all, people keep telling them, “Open Source is great, anyone can modify the code!”.

A half dozen times, ranging from family members to customers and business people, I’ve had to try to explain how Open Source security products can work even though the code is available. If people can see the code, they can figure out how to break and bypass it, right? Ugh…

And don’t even get me started on the people that will start comparing Open Source to communism.

Concluding thoughts

I believe Open Source software has plenty of advantages, but I also think there’s a lot of hype surrounding it. The vast majority of Open Source projects are hacked together as hobby projects and abandoned shortly after. A study of Sourceforge projects showed that less than 17% of projects actually become successful; the other 83% are abandoned in the early stages. Most projects have only a few core developers, and the project won’t outlive their interest in it. The community may submit occasional patches, but it is unlikely to do serious feature development.

Why release Open Source software then? I think the answer often becomes, “why not?”. Plenty of developers write code in their spare time. They don’t plan to make money directly from it: selling software is hard. They do it to sharpen their saws. They do it for fun, self improvement, learning, future career opportunities, and to release something into the world that just might be useful and make people’s lives better. If you’re writing code with no intention of making money off it, there’s really no reason not to release it as Open Source.

What if you do want to make money off it? Well, why not dual license your code and have a free Open Source trial version along with an Enterprise version? You’ll get the advantages of free marketing, testing, and user feedback. There is the risk that someone will come along and extend the Open Source trial version into a program that has all of the same features as your Enterprise version, or even more, and this is something that needs to be considered. However, as I mentioned before, it’s hard to find people that will take over the development and maintenance of Open Source projects. I think it’s more likely that someone will steal your idea and create their own implementation than bother with trying to extend your trial version code, but I don’t have any proof or evidence of that.

Neighbor Cache Fingerprinter: Operating System Version Detection with ARP

30 12 2012

I’ve released the first prototype (written in C++) of an Open Source tool called the Neighbor Cache Fingerprinter on Github today. A few months ago, I was watching the output of a lightweight honeypot in a Wireshark capture and noticed that although it had the capability to fool nmap’s operating system scanner into thinking it was a certain operating system, there were subtle differences in the ARP behavior that could be detected. This gave me the idea to explore the possibility of doing OS version detection with nothing except ARP. The holidays provided a perfect time to destroy my sleep schedule and get some work done on this little side project (see commit punchcard, note best work done Sunday at 2:00am).


The tool is currently capable of uniquely identifying the following operating systems,

Windows 7
Windows XP (fingerprint from Service Pack 3)
Linux 3.x (fingerprint from Ubuntu 12.10)
Linux 2.6 (fingerprint from Century Link Q1000 DSL Router)
Linux 2.6 (newer than 2.6.24) (fingerprint from Ubuntu 8.04)
Linux 2.6 (older than 2.6.24) (fingerprint from Knoppix 5)
Linux 2.4 (fingerprint from Damn Small Linux 4.4.10)
Android 4.0.4
Android 3.2
Minix 3.2
ReactOS 0.3.13

More operating systems should follow as I get around to spinning up more installs on Virtual Machines and adding to the fingerprints file. Although it’s still a fairly early prototype, I believe it’s already useful enough to be worth trying, so install it and let me know via the Github issues page if you find any bugs. There’s very little existing research on this; arp-fingerprint (a perl script that uses arp-scan) is the only thing remotely close, and it attempts to identify the OS only by looking at responses to ARP REQUEST packets. The Neighbor Cache Fingerprinter focuses on sending different types of ARP REPLY packets as well as analyzing several other behavioral quirks of ARP discussed in the notes below.

The main advantage of the Neighbor Cache Fingerprinter versus an Nmap OS scan is that the tool can do OS version detection on a machine that has all closed ports. The next big feature I’m working on is expanding the probe types to allow it to work on machines that respond to ICMP pings, OR have open TCP ports, OR have closed TCP ports, OR have closed UDP ports. The tool just needs the ability to elicit a reply from the target being scanned, and a pong, TCP/RST, TCP/ACK, or ICMP unreachable message will all provide that.

The following are my notes taken from the README file,


What is the Neighbor Cache? The Neighbor Cache is an operating system’s mapping of network addresses to link layer addresses, maintained and updated via the protocol ARP (Address Resolution Protocol) in IPv4 or NDP (Neighbor Discovery Protocol) in IPv6. The neighbor cache can range from something as simple as a lookup table updated every time an ARP or NDP reply is seen, to something as complex as a cache with multiple timeout values for each entry, updated based on positive feedback from higher level protocols and usage characteristics of that entry by the operating system’s applications, along with restrictions on malformed or unsolicited update packets.

This tool provides a mechanism for remote operating system detection by extrapolating characteristics of the target system’s underlying Neighbor Cache and general ARP behavior. Given the non-existence of any standard specification for how the Neighbor Cache should behave, there are several differences in operating system network stack implementations that can be used for unique identification.

Traditional operating system fingerprinting tools such as Nmap and Xprobe2 rely on creating fingerprints from higher level protocols such as TCP, UDP, and ICMP. The downside of these tools is that they usually require open TCP ports and responses to ICMP probes. This tool works by sending a TCP SYN packet to a port which can be either open or closed. The target machine will respond with either a SYN/ACK or a RST packet, but either way it must first discover the MAC address to send the reply to via queries to the ARP Neighbor Cache. This allows for fingerprinting on target machines that have nothing but closed TCP ports and give no ICMP responses.

The main disadvantage of this tool versus traditional fingerprinting is that because it’s based on a Layer 2 protocol instead of a Layer 3 protocol, the target machine that is being tested must reside on the same Ethernet broadcast domain (usually the same physical network). It also has the disadvantage of being fairly slow compared to other OS scanners (a scan can take ~5 minutes).

Fingerprint Technique: Number of ARP Requests

When an operating system performs an ARP query it will often resend the request multiple times in case the request or the reply was lost. A simple count of the number of requests that are sent can provide a fingerprint feature. In addition, there can be differences in the number of responses to open and closed ports due to multiple retries on the higher level protocols, and attempting to send a probe multiple times can result in different numbers of ARP requests (Android will initially send 2 ARP requests, but the second time it will only send 1).

For example,

Windows XP: Sends 1 request

Windows 7: Sends 3 if probe to closed port (9 if probe to open port)

Linux: Sends 3 requests

Android 3: Sends 2 requests on the first probe, then 1 request after

A minimum and maximum number of requests seen is recorded in the fingerprint.
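As a rough illustration of how such a feature might be matched (the table values come from the examples above; the function and its name are my own sketch, not code from the tool):

```python
# Illustrative (min, max) ARP request counts per OS, from the examples above.
FINGERPRINTS = {
    "Windows XP": (1, 1),
    "Windows 7":  (3, 9),
    "Linux":      (3, 3),
    "Android 3":  (1, 2),
}

def match_request_count(observed):
    """Return the OS names whose fingerprint range covers the observed count."""
    return [name for name, (lo, hi) in FINGERPRINTS.items()
            if lo <= observed <= hi]
```

A real scan would of course combine this feature with all of the others below before declaring a match.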

Fingerprint Technique: Timing of ARP Request Retries

On hosts that retry ARP requests, the timing values can be used to deduce more information. Linux hosts generally have a constant retry time of 1 second, while Windows hosts generally back off on the timing, sending their first retry after between 500ms and 1s, and their second retry after 1 second.

The fingerprint contains the minimum time difference between requests seen, maximum time difference, and a boolean value indicating if the time differences are constant or changing.
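Extracting those three fingerprint fields from captured request timestamps might look like this (my own illustration; the 100 ms tolerance is an assumption, not the tool's actual threshold):

```python
def retry_timing_features(timestamps):
    """Given capture times of a host's ARP request retries, return
    (min_delta, max_delta, constant), where constant is True when all
    inter-request gaps are equal within a tolerance."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    lo, hi = min(deltas), max(deltas)
    return lo, hi, (hi - lo) < 0.1  # 100 ms tolerance: an assumption
```

Linux's fixed 1-second retries would yield something like (1.0, 1.0, True), while Windows' backoff would look more like (0.5, 1.0, False).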

Fingerprint Technique: Time before cache entry expires

After a proper request/reply ARP exchange, the Neighbor Cache gets an entry put in it for the IP address and for a certain amount of time communication will continue without additional ARP requests. At some point, the operating system will decide the entry in the cache is stale and make an attempt to update it by sending a new ARP request.

To test this, a SYN packet is sent, an ARP exchange happens, and then SYN packets are sent once per second until another ARP request is seen.

Operating system response examples,

Windows XP : Timeout after 10 minutes (if referred to)

Windows 7/Vista/Server 2008 : Timeout between 15 seconds and 45 seconds

Freebsd : Timeout after 20 minutes

Linux : Timeout usually around 30 seconds

More research needs to be done on the best way to capture the value of delay_first_probe_time and the differences between an entry going stale and actually falling out of the table and being gc’ed in Linux.

Waiting 20 minutes to finish the OS scan is infeasible in most cases, so the fingerprinting mode only waits about 60 seconds. This may be changed later to make it easier to detect an oddity in older Windows targets where cache entries expire faster if they aren’t used (TODO).
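The measurement itself reduces to something like the following (an illustrative sketch; the real tool works from live packet captures):

```python
def estimate_entry_lifetime(arp_request_times):
    """Given the capture times of the target's ARP requests for our IP
    while we send one SYN probe per second, the gap between the initial
    request and the refresh request approximates the cache entry's
    lifetime. Returns None if no refresh was seen in the scan window."""
    if len(arp_request_times) < 2:
        return None
    return arp_request_times[1] - arp_request_times[0]
```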

Fingerprint Technique: Response to Gratuitous ARP Replies

A gratuitous or unsolicited ARP reply is an ARP reply for which there was no request. The usual use case is notifying machines on the network of IP changes or of systems coming online. The problem for implementers is that several of the fields in the ARP packet no longer make much sense.

Who is the Target Protocol Address for the ARP packet? The broadcast address? Zero? The specification surprisingly says neither: the target Protocol address should be the same IP address as the Sender Protocol Address.

When there’s no specific target for the ARP packet, the Target Hardware Address also becomes a confusing field. The specification says its value shouldn’t matter, but should be set to zero. However, most implementations will use the Ethernet broadcast address of FF:FF:FF:FF:FF:FF instead, because internally they have some function to send an ARP reply that only takes one argument for the destination MAC address (which is put in both the Ethernet frame destination and the ARP packet’s Target Hardware Address). We can also experiment with setting the Target Hardware Address to the same thing as the Sender Hardware Address (the same method the spec says to use for the Target Protocol Address field).

Even the ARP opcode becomes confusing in the case of unsolicited ARP packets. Is it a “request” for other machines to update their cache? Or is it a “reply”, even though it isn’t a reply to anything? Most operating systems will update their cache no matter the opcode.

There are several variations of the gratuitous ARP packet that can be generated by changing the following fields,

Ethernet Frame Destination Address : Bcast or the MAC of our target

ARP Target Hardware Address : 0, bcast, or the MAC of our target

ARP Target Protocol Address : 0 or the IP address of our target

This results in 36 different gratuitous packet permutations.

Most operating systems have the interesting behavior that they will ignore gratuitous ARP packets if the sender is not in the Neighbor Cache already, but if the sender is in the Neighbor Cache, they will update the MAC address, and in some operating systems also update the timeouts.

The following sequence shows the testing technique for this feature,

1. Send an ARP packet that is known to update most caches with srcmac = srcMacArg
2. Send the gratuitous ARP packet that is currently being tested with srcmac = srcMacArg + 1
3. Send a probe packet with a source MAC address of srcMacArg in the Ethernet frame

The first packet attempts to get the cache entry into a known state: up to date and storing the source MAC address that is our default or the command line argument --srcmac. The second packet is the actual probe permutation that’s being tested.

If the reply to the probe packet is to (srcMacArg + 1), then we know the gratuitous packet successfully updated the cache entry. If the reply to the probe is just (srcMacArg), then we know the cache was not updated and still contains the old value.

The reason the Ethernet frame source MAC address in the probe is set to the original srcMacArg is to ensure the target isn’t just replying to the MAC address it sees packets coming from, and is really pulling the entry out of its ARP cache.
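Interpreting the result boils down to comparing the reply's destination MAC against the two candidates (a pure-Python sketch of the logic described above; the helper names are mine, not the tool's):

```python
def mac_plus_one(mac):
    """Increment a colon-separated MAC address by one."""
    value = int(mac.replace(":", ""), 16) + 1
    raw = "%012x" % value
    return ":".join(raw[i:i + 2] for i in range(0, 12, 2))

def cache_was_updated(reply_dst_mac, src_mac_arg):
    """True if the probe reply went to srcMacArg + 1, meaning the
    gratuitous ARP permutation under test overwrote the cache entry."""
    return reply_dst_mac.lower() == mac_plus_one(src_mac_arg).lower()
```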

Sometimes the Neighbor Cache entry will get into a state that makes it ignore gratuitous packets even though, given a normal state, it would accept them and update the entry. This can result in some timing related result changes. For now I haven’t made an attempt to fix this as it’s actually useful as a fingerprinting method in itself.

Fingerprint Technique: Can we get put into the cache with a gratuitous packet?

As mentioned in the last section, most operating systems won’t add a new entry to the cache given a gratuitous ARP packet, but they will update existing entries. One of the few differences between Windows XP and FreeBSD’s fingerprint is that we can place an entry in the cache by sending a certain gratuitous packet to a FreeBSD machine, and test if it was in the cache by seeing if a probe gets a response or not.

Fingerprint Technique: ARP Flood Prevention (Ignored rapid ARP replies)

RFC1122 (Requirements for Internet Hosts) states,

“A mechanism to prevent ARP flooding (repeatedly sending an ARP Request for the same IP address, at a high rate) MUST be included. The recommended maximum rate is 1 per second per destination.”

Linux will not only ignore duplicate REQUEST packets within a certain time, but also duplicate REPLY packets. We can test this by sending a set of unsolicited ARP replies within a short time range, with a different MAC address being reported by each reply. The destination MAC address of the response to a follow-up probe then reveals whether the host kept the first MAC address we advertised or the last, indicating whether it ignored the later rapid replies.

Fingerprint Technique: Correct Reply to RFC5227 ARP Probe

This test sends an “ARP Probe” as defined by RFC 5227 (IPv4 Address Conflict Detection) and checks the response to see if it conforms to the specification. The point of the ARP Probe is to check if an IP address is being used without the risk of accidentally causing someone’s ARP cache to update with your own MAC address when it sees your query. Given that you’re likely trying to tell if an IP address is being used because you want to claim it, you likely don’t have an IP address of your own yet, so the Sender Protocol Address field is set to 0 in the ARP REQUEST.

The RFC specifies the response as,

“(the probed host) MAY elect to attempt to defend its address by … broadcasting one single ARP Announcement, giving its own IP and hardware addresses as the sender addresses of the ARP, with the ‘target IP address’ set to its own IP address, and the ‘target hardware address’ set to all zeroes.”

But any Linux kernel older than 2.6.24 and some other operating systems will respond incorrectly, with a packet that has tpa == spa and tha == sha. Checking if tpa == 0 has proven sufficient for a boolean fingerprint feature.
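The resulting boolean feature can be sketched like this (my own pure-Python illustration of the check, not the tool's C++ code):

```python
def reply_is_conformant(reply_tpa):
    """Classify a host's answer to an RFC 5227 ARP Probe.

    The probe carries 0.0.0.0 as its Sender Protocol Address. A host that
    treats it as an ordinary request echoes that zero back as the reply's
    target protocol address; a conformant defender announces with its own
    IP there instead. So tpa == 0.0.0.0 flags the non-conformant
    (e.g. pre-2.6.24 Linux) behavior."""
    return reply_tpa != "0.0.0.0"
```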


Feedback from higher protocols extending timeout values

Linux has the ability to extend timeout values if there’s positive feedback from higher level protocols, such as a 3 way TCP handshake. Need to write tests for this and do some source diving in the kernel to see what else counts besides a 3 way handshake for positive feedback.


Infer Neighbor Cache size by flooding to cause entry dumping

Can we fill the ARP table with garbage entries in order for it to start dumping old ones? Can we reliably use this to infer the table size, even with Linux’s near random cache garbage collection rules? Can we do this on class A networks, or do we really need class B network subnets in order to make this a viable test?

NOVA: Network Antireconnaissance with Defensive Honeypots

7 06 2012

Knowledge is power, especially when it comes to computer and information security. From the standpoint of a hacker, knowledge about the victim’s network is essential, and the first step in any sort of attack is reconnaissance. Every little piece of seemingly innocent information can be gathered and combined to form a profile of the victim’s network, and each bit of information can help discover vulnerabilities that can be exploited to get in. What operating systems are being used? What services are running? What are the IP and MAC addresses of the machines on a network? How many machines are on the network? What firewalls and routers are in place? What’s the overall network architecture? What are the uptime statistics for the machines?

Since network reconnaissance is the first step in attacking, it follows that antireconnaissance should be the first line of defense against attacks. What can be done to prevent information gathering?

The first step in making information difficult to gather is simply to not release it. This is the realm of authentication and firewalls, where data is restricted to subsets of authorized users and groups. This doesn’t stop the gathering of information that, by its nature, must be to some extent publicly available for things to function. Imagine the real life analogy of a license plate. The license plate number of the car you drive is a mostly harmless piece of information, but hiding it isn’t an option. It’s a unique identifier for your car whose entire point is to be displayed to the world. But how harmless is it really? Your license plate could be used for tracking your location: imagine a camera at a parking garage that keeps logs of all the cars that go in and out. What if someone makes a copy of your license plate for their car and uses it to get free parking at places where you have authorized parking? What if someone copies the plate and uses it while speeding through red light cameras or committing other crimes? What if someone created a massive online database of every license plate they’ve ever seen, along with where they saw it and the car and driver’s information?

Although a piece of information may seem harmless by itself, it can be combined with others to build a more in depth picture of things and potentially become a source of exploitation. Like a license plate, there are many things on a network that are required to be publicly accessible in order for the network to function. Since you can’t just block access to this information with a firewall, what’s the next step in preventing and slowing down reconnaissance? This is where NOVA comes in.

Since hiding information on a LAN isn’t an option, Datasoft’s NOVA (Network Obfuscation and Virtualized Anti-reconnaissance) instead tries to slow down and detect attackers by making them dig through huge amounts of fake information in the form of virtual honeypots (created with honeyd). Imagine an nmap scan on a typical corporate network. You might discover that there are 50 computers on the network, all running Windows XP and living on a single subnet. All of your attacks could then target Windows XP services and vulnerabilities. You might find a router and a printer on the network too, and spend a lot of time manually poking at them attempting to find a weakness. With NOVA and Honeyd running on the network, the same nmap scan could see hundreds of computers on the network with multiple operating systems, dozens of services running, and multiple routers. The attacker could spend hours or even days attempting to get into the decoy machines. Meanwhile, all of the traffic to these machines is being logged and analyzed by machine learning algorithms to determine if it appears hostile (matches hostile training data of past network scans, intrusion attempts, etc).
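For a flavor of what those decoys look like under the hood, here’s a minimal honeyd configuration sketch (the template name, ports, and addresses are illustrative; the personality string has to match an entry in nmap’s fingerprint database):

```
create winxp
set winxp personality "Microsoft Windows XP Professional SP1"
set winxp default tcp action reset
add winxp tcp port 135 open
add winxp tcp port 139 open
add winxp tcp port 445 open
bind 192.168.1.201 winxp
```

An nmap scan of 192.168.1.201 would then report an XP box with those ports open, even though nothing is really there.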

At the moment NOVA is still a bit rough around the edges, but it’s an open source C++ Linux project in a usable state that could really use some more users and contributors (shameless plug). There’s currently a QT GUI and a web interface (a nodejs server with cvv8 to bind C++ to Javascript) that should provide rudimentary control of it. Given the lack of user input we’ve gotten, there are bound to be things that make perfect sense to us but are confusing to a new user, so if you download it, feel free to jump on our IRC channel #nova or post some issues on the Github repository.

Building a MiniITX File Server for under $300

20 10 2011

Despite an old HP desktop I got used at a swap-meet for $10 still going strong, I’ve wanted to replace it for a while. Having a pile of computers on your desk makes you start to appreciate the little things that are important in a small home file server.

  • Backups. If the hard drive in my old HP died today, I’d lose gigabytes of pictures, music, code, and random writings. I could throw a second drive in it, but then you still risk having that drive die as well, since it’s getting just as much on-time as the other one.
  • Noise. There are currently 3 full size desktops running on my desk (my regular desktop, the HP server, and a friend’s desktop that acts as off-site storage for him). Most of the time, fan noise doesn’t bother me; in fact, it helps me sleep easier. You know you’ve been around computers too much when the gentle blowing of computer fans and occasional hard drive clicks lulls you to sleep like those sounds-of-nature CDs they sell in late night TV commercials. There are still occasions when I prefer a bit of silence though.
  • Heat. At one point, I was running 4 monitors (2 of which were CRTs) in addition to the three desktops previously mentioned, plus a router and piles of chargers and other electronic devices. With everything on, I could literally feel the temperature change walking into my room (probably 2-5 degrees). Piles of servers are great to play with, but unless you have a dedicated server room, cooling becomes an issue sooner than you may think.


The MiniITX form factor makes for a great cheap and small computer. Specifications for my new file server,


The downside of this build: I hate the case. Despite having decent reviews, it was just one thing after another when putting it together. First, there are 4 plastic snaps that hold the front panel on. The very first time I tried to remove it they were so brittle that one of them broke off. Not a huge deal, still stays on fine, but if you do have the misfortune of getting this case, be careful with the plastic snaps. The second fail was when I tried to install my 5.25″ to 3.5″ hard drive bay for the second hard drive. I vastly prefer cases that have the drive bays extended to the front of the case, and instead this case has about a 1″ gap with a hole for the CD to come out. I removed the flap and CD extender button, but the hole is about 1cm too small to fully remove my external drive. I’ve considered cutting a slice out of it, but I haven’t gotten the courage to mangle it up yet. It’ll be hard to make it look decent after that. Finally, when I got everything installed and powered it up, I found that despite being a small 150 Watt power supply, the thing is loud. The 80mm fan isn’t terribly quiet either, and the case has no sound proofing whatsoever. The power supply also generates a decent amount of heat for the size. I’m considering replacing both the PSU and the fan, but for now I’ll live with it. Basically: don’t get this case. Spend the extra $50 and find one that’s quiet and has a more efficient power supply, the total cost will still be below $300.


For software I put Xubuntu on it, though I’m considering swapping it out for something else. No hardware problems though, everything worked fine after install. I’ll post some updates on software later.


Build pictures

Ubuntu User Security (or lack of)

8 03 2009

The other day was the first time I actually set up a user account for someone other than myself on my Ubuntu laptop. Something rather odd that I noticed: by default, the new /home/user directory has its file permissions set so anyone can read or execute the files. If I’m not mistaken, last time I made a user on Slackware or Gentoo it was set so only the owner could access and read the files located in his /home directory… This discovery was followed by a “chmod -R go-rwx /home/user” on all my accounts, something that’s a good thing to do every now and then anyway if you’re security paranoid and in a multiuser environment. In the future, to make users created with adduser have more secure permissions, run “sudo dpkg-reconfigure adduser”.

Then select No at the prompt asking if you want system-wide readable home directories.
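For reference, on Debian/Ubuntu the default comes from the DIR_MODE setting in /etc/adduser.conf (0755 out of the box, if I remember right), which is the value dpkg-reconfigure is changing for you. The chmod fix itself is easy to sanity-check; this sketch uses a scratch directory as a stand-in for a real /home/user so nothing important gets touched:

```shell
# Stand-in for /home/user (a scratch directory, so nothing real is modified).
home=$(mktemp -d)
chmod 755 "$home"                # mimic Ubuntu's default world-readable mode
mkdir "$home/documents"
touch "$home/documents/secret.txt"

chmod -R go-rwx "$home"          # strip all group/other access, recursively

stat -c '%a' "$home"             # → 700
```

Afterwards the directory is 700 (owner-only) and the files inside drop to 600, which is what you want in a multiuser environment.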

Ubuntu Keyring Password Change

22 02 2009

Due to some recent security problems with a server I had used a common password on, I took the time today to change all my passwords. I reset my Ubuntu password, tried to log on later in the day, and was greeted by a prompt asking me to enter my keyring password in order to connect to my wireless. After trying a few passwords, I quickly found out that the password it wanted was my old one.

The keyring stores all your WEP keys, WPA keys, and any other passwords you let it. Truthfully, I’ve never liked the idea of storing all my passwords in one place, so I was only using it to store my wireless keys. To keep this annoying little application from prompting you for your old password every time you boot, you’ll have to blow away the keyring file and then start from scratch entering all your wireless keys. I’ve found no other solution after much Googling, so I’ll also show you how to stop using this application altogether if you’d rather not go through this again when you change passwords.

To reset the keyring, remove its files with this command:

rm ~/.gnome2/keyrings/*.keyring
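If you’re nervous about deleting the keyring outright, here’s a variant that stashes a backup first. The sketch below runs against a scratch directory standing in for your real home, so it’s safe to try; the actual path is ~/.gnome2/keyrings/:

```shell
# Scratch stand-in for $HOME so this is safe to run anywhere.
fake_home=$(mktemp -d)
mkdir -p "$fake_home/.gnome2/keyrings"
touch "$fake_home/.gnome2/keyrings/default.keyring"

# Back up every keyring file, then remove the originals.
mkdir -p "$fake_home/keyring-backup"
cp "$fake_home"/.gnome2/keyrings/*.keyring "$fake_home/keyring-backup/"
rm -f "$fake_home"/.gnome2/keyrings/*.keyring

ls "$fake_home/keyring-backup"   # → default.keyring
```

If the new keyring works out, you can delete the backup; if not, copying the file back restores the old (old-password-protected) keyring.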

Reboot the computer.

You should be greeted by this prompt when you try to use your wifi (nm-applet):

Now you have two options: either input your new password, or leave it blank. If you input a new password, either after this prompt or on the next reboot there will be a checkbox to make the application automatically unlock your keyring when you log in, and life will go back to normal. If you go with option two and leave it blank, you’ll be greeted with this:

Select “Use Unsafe Storage” and your wireless keys will just be stored in plain text; gnome-keyring won’t bother you anymore. This IS less secure if someone can read the files on your computer. Let’s face it though: if someone is already far enough into your system to read your files, they probably have root access, and you’ve got worse things to worry about than your wireless keys.

Sharing Firefox Profiles on Dual Boot Systems

27 01 2009

A quick fix for sharing your Firefox profiles/bookmarks on a dual-boot system is shown here. I’m dual booting XP and Ubuntu 8.10, but the operating systems shouldn’t matter much. Simply create a partition that all your operating systems can read (FAT32 format in my case) and make sure all of them mount it. This is always a good idea when dual booting anyway, to keep your files available to every operating system. To auto-mount that partition in Ubuntu, edit /etc/fstab and add something like the following line:

what_to_mount    where_to_mount    vfat    auto,users,rw,exec,uid=username,gid=groupname,umask=017    0    0

For example,

/dev/sda8    /mnt/storage    vfat    auto,users,rw,exec,uid=pherricoxide,gid=admin,umask=017    0    0
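In case you’re wondering what umask=017 actually grants: for vfat the effective mode is 0777 with the umask bits knocked out (use the fmask=/dmask= options instead if you want files and directories treated differently). A quick sanity check of the arithmetic:

```shell
# vfat applies mode = 0777 & ~umask to everything on the mount.
umask_opt=017
printf 'mode: %o\n' "$(( 0777 & ~0$umask_opt ))"   # → mode: 760
```

So 017 gives the owner full access, the group read/write, and others nothing at all, which is why the gid=groupname part of the line matters.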

Then, copy one of your Firefox profiles to that partition. In Linux, the profiles are located in /home/user/.mozilla/firefox/; in Windows, look under c:\documents and settings\user\application data. It should be the only directory there if you never set up multiple profiles: something with a lot of random-looking numbers and letters in its name.

Once that’s done, open up a terminal or command prompt and run “firefox -profilemanager”. In my case this wasn’t in the XP path, so I had to cd to “program files\mozilla firefox\” before running it. After it comes up, click create new profile, hit next, and then choose to change the directory. Browse to the profile you copied onto your FAT32 partition, then just hit next and stick with the default profile name. That’s it; Firefox is now using the profile in that directory. Do that in all your operating systems and you should be set.

Note: I’m not sure how well sharing a profile works across multiple versions of Firefox. It’s likely best to update all your versions before you get started.

Note 2: This messes up some of the more complex plugins, and Firefox will often complain that it has installed new plugins when you switch to the other OS.
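If you’d rather skip the profile-manager GUI, all it really does is write ~/.mozilla/firefox/profiles.ini (the same file lives under the Windows profile directory). Here’s a sketch of what a shared, absolute-path entry looks like; the mount point and profile name are examples, and the temp file stands in for the real profiles.ini:

```shell
ini=$(mktemp)        # stand-in for ~/.mozilla/firefox/profiles.ini

cat > "$ini" <<'EOF'
[General]
StartWithLastProfile=1

[Profile0]
Name=shared
IsRelative=0
Path=/mnt/storage/firefox-profile
EOF

grep '^Path=' "$ini"   # → Path=/mnt/storage/firefox-profile
```

The key setting is IsRelative=0, which tells Firefox that Path is an absolute path (like your shared partition) rather than a directory under the profiles folder.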

Linux Ignorance in Public School

10 12 2008

I think I just lost all faith in the public school system. Oh wait, being told that I didn’t know anything about computer security and that I caused a “mutiny” back when I attended Compuhigh (an online high school program) already did that, but this certainly didn’t help.

In recent news, a middle school teacher in Texas confiscated Linux CDs from a student. Apparently this has happened several times before, with teachers at various schools confiscating Ubuntu CDs and even suspending students for “exchanging pirated software.” This particular teacher decided to email the developer behind Helios, Mr. Starks, and tell him what she thinks.

“…observed one of my students with a group of other children gathered around his laptop. Upon looking at his computer, I saw he was giving a demonstration of some sort. The student was showing the ability of the laptop and handing out Linux disks. After confiscating the disks I called a confrence with the student and that is how I came to discover you and your organization. Mr. Starks, I am sure you strongly believe in what you are doing but I cannot either support your efforts or allow them to happen in my classroom. At this point, I am not sure what you are doing is legal. No software is free and spreading that misconception is harmful. These children look up to adults for guidance and discipline. I will research this as time allows and I want to assure you, if you are doing anything illegal, I will pursue charges as the law allows. Mr. Starks, I along with many others tried Linux during college and I assure you, the claims you make are grossly over-stated and hinge on falsehoods. I admire your attempts in getting computers in the hands of disadvantaged people but putting linux on these machines is holding our kids back… I am sure if you contacted Microsoft, they would be more than happy to supply you with copies of an older verison of Windows and that way, your computers would actually be of service to those receiving them…” –

I don’t even know where to begin. She confiscated a student’s property when he wasn’t doing anything illegal. If she had actually tried Linux in college she would know it’s free software, and so would even a little of the research she says she’ll do. She thinks Microsoft would give out free copies of Windows to disadvantaged people? How about being a little open-minded, especially for someone in the educational field? Mr. Starks of Helios requested a meeting with the school district’s superintendent, who agreed after seeing this email forwarded to him. Hopefully they’ll have an interesting talk about this teacher. Let’s hope she’s just the PE teacher.

EDIT: The following is a fictional story, but after the above it wouldn’t surprise me if it could happen.

Topeka, KS – High school sophomore Brett Tyson was suspended today after teachers learned he may be using PHP. “A teacher overheard him say that he was using PHP, and as part of our zero-tolerance policy against drug use, he was immediately suspended. No questions asked,” said Principal Clyde Thurlow. “We’re not quite sure what PHP is, but we suspect it may be a derivative of PCP, or maybe a new designer drug like GHB.”

Ubuntu 8.10 Tweaks

11 11 2008

As you saw in my last post, I installed the newest version of Ubuntu. A number of things annoyed me, a number of other things needed improvement, and a number of things I just felt like toggling, so here is a list of what I’ve done so far. I’m not going into detail on how to do each one; there are plenty of tutorials on the Internet, so there’s no reason to reinvent the wheel. Just use the wonderful thing known as Google if you need directions for any of these tweaks.

Must have programs installed:

  • X-chat
  • Adobe Flash
  • Sun Java
  • Eclipse IDE (make sure to configure to use Sun’s Java)
  • Jedit
  • MP3 Codecs
  • Microsoft Windows Fonts (msttcorefonts package)
  • Advanced Compiz Configuration Manager

Things to stop running (because I won’t use them) in Preferences -> Sessions:

  • Bluetooth Manager
  • Evolution Alarm Notifier
  • Check for new hardware drivers
  • Update Notifier (I do manual updates when I feel like it)
  • Visual Assistance
  • Tracker
  • Gnome Login Sound
  • Pulse Audio (see last post, using ALSA only)

Things to stop running (because I won’t use them) in Administration -> Services:

  • Bluetooth Manager

Keyboard shortcuts to add:

  • Alt+T Terminal
  • Alt+L Lock Screen

Enable a real root account:

  • sudo passwd root
  • usermod -U root (use if you get an error saying the account has expired)

Must-change things in Compiz, because the alt+tab behavior was annoying me:

  • Disable Application Switcher and enable Static Application Switcher
  • Static Application Switcher -> Appearance -> Selected Window Highlight -> None

Disable Ubuntu splash screen:

  • I hate splash screens that hide what’s going on in the background behind a little loading bar, and Ubuntu 8.10 hides EVERYTHING. Edit /boot/grub/menu.lst and remove the “quiet splash” options from the kernel lines of your Ubuntu entries. Text-based boot FTW!

Disable IPV6 in about:config of Firefox.

Disable the drive icons on the desktop

And finally, make a sane shortcut scheme for all the pretty Compiz functions. Well.. somewhat sane.

For switching desktops on the 3D cube,

  • ctrl+alt+left move to left desktop
  • ctrl+alt+right move to right desktop
  • ctrl+alt+down unfold desktop cube
  • ctrl+alt+up move to desktop #1 (main desktop)
  • ctrl+alt+shift+left move to left desktop dragging current application
  • ctrl+alt+shift+right move to right desktop dragging current application

For switching applications,

  • Windows key + right: switch to right application
  • Windows key + left: switch to left application
  • Ctrl + Windows key + right: switch to right application (across desktops)
  • Ctrl + Windows key + left: switch to left application (across desktops)
  • Windows key + up: application picker (local desktop)
  • Ctrl + Windows key + up: application picker (across desktops)
  • Alt+tab: ring switcher next (local desktop)
  • Ctrl+alt+tab: ring switcher next (across desktops)

Windows Frustration

15 09 2008

Every once in a while I actually begin to like using Windows. Why not? You install software, and it actually runs. No fighting with kernel modules, package managers, and DHCP settings. Sure, it runs a little slower than my Linux install on the other partition, but I’ve got 2GB of RAM and CPU speed to burn. But then, one double-click of an EXE, and it all comes back to me why I dislike Windows. Let me start from the beginning and highlight the annoyances along the way, in parentheses and bold text (such as here).

The main Windows desktop that my parents use crashed (reason to hate Windows, instability), for no apparent reason. It would blue screen on boot, with or without safe mode, debug mode, or anything else. We tried the recovery CD with no results. So we pulled off an old backup copy of the partition and installed it. After a few hours of updates it was back to the way it should be, and everyone was happy. Unfortunately, I forgot to update Internet Explorer (reason to hate Windows, Internet Explorer). My parents then proceeded to browse the web, and within a week located a page with a browser exploit that downloaded a pile of trojans to the machine (reason to hate Windows, viruses). I try to kill the running program manually, but end up getting nowhere. Where’s the killall command in XP (reason to hate Windows, lack of killall, grep, and other useful tools)? Perhaps there is such a thing hidden somewhere, but I don’t know it off the top of my head. The virus kept spawning more copies of itself faster than I could kill them. I boot up my laptop, stupidly into XP instead of Gentoo, and begin Googling for a fix. I find some files that look promising, scan them with McAfee just in case they’re viruses themselves, and then click on one to see if it’s a demo or a full version of this virus-removal software.
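(As an aside: XP does ship a rough killall analogue, taskkill, e.g. “taskkill /F /IM a.exe”, at least on Professional; I’m not sure Home gets it. On Linux the whole job is a one-liner. A minimal sketch of force-killing a process and confirming it’s gone, with a harmless sleep standing in for the malware:)

```shell
# A harmless background `sleep` stands in for the malicious process.
sleep 300 &
pid=$!

kill -9 "$pid"             # SIGKILL; killall/pkill do the same thing by name
wait "$pid" 2>/dev/null    # reap it so the PID is truly gone

if kill -0 "$pid" 2>/dev/null; then echo alive; else echo dead; fi   # → dead
```

For a respawning virus you’d wrap the kill in a loop keyed on pgrep, since it comes back faster than you can click End Process in Task Manager.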

Nothing happens. A chill runs down my spine; the last time nothing happened when I clicked an executable file in Windows, bad things followed. My assumption turned out to be correct: as I loaded up the task manager I saw processes such as a.exe, b.exe, c.exe, and video1080.cfg.exe sprouting up and consuming resources on my laptop. A moment later, a spoof of the Windows Security Center launched itself and began telling me that it was an unregistered version, and that to remove infected files I’d need to purchase some software. My first response was to disable the wifi as fast as possible. That likely saved me from an even bigger hassle, as most viruses I’ve gotten have a habit of going out and downloading more viruses. Safety in numbers? Perhaps they just get lonely. Either way, I killed this one’s connection to the outside world before it could download all its little buddies, or upload all my personal files to some hacker.

I managed to kill all the running processes, disable them in msconfig, and track down where they were running from. I then Googled a bit and found some registry keys and DLL files that the virus had modified. After fixing all that, things appear normal, but I’m still not sure there isn’t some residue of the virus lurking in the corners of my registry (reason to hate Windows, the registry). After spending hours hunting down every trace of the virus on my laptop, I don’t even feel like getting started on the desktop again… Note: both computers had up-to-date copies of McAfee installed and running. Neither detected the virus.

So there you have it: from wandering the Internet in Windows bliss to spending hours trying to remove viruses and hoping all your data isn’t destroyed, virus-ridden, or being uploaded to someone in Turkey. I think I’ll go back to Gentoo… right after I finish this full system scan to make sure nothing’s left of the virus on my laptop.