Windows are a great user interface convention. They allow you to have multiple tasks on the go at once. You can even put different windows on different displays, such as e-mail or IRC on one monitor, so you can see when someone needs to talk to you, while still being able to focus on your work on a different display.

You will want to make the most of the space available, so you will resize your windows to exactly the size you need.

Such layouts are convenient because you can assign contexts spatially, so that movement at the left edge of your vision means a new e-mail has arrived.

This is an improvement over having all your windows stacked on top of each other, and it is a better way of receiving notifications that a window needs attention: glancing at an already visible window and deciding it isn't important is a much smaller context switch than switching windows, re-adjusting your view, searching for whatever the message is about, and then deciding it's not important.

However, it is possible to become dependent on this layout, to the point where you avoid any action that might mess it up, such as rebooting, or detaching the second display from your laptop so you can take it to a meeting.

There are a few ways of trying to maintain this window placement:

  1. Use a desktop environment that preserves the locations of all your windows between all the different contexts that can cause your windows to be re-arranged, and hope that it leaves you something usable in between.

    The desktop environments of Apple Macs and Xfce attempt this with varying degrees of success.

  2. Use a tool to place your windows for you according to your programmed preferences.

    Tools such as the venerable Devilspie and Devilspie2 allow you to programmatically define where you want windows to appear (see the sketch after this list).

  3. Use a desktop environment that helps you put windows in specific places.

    This is about making it quicker to lay out your windows the way you want them, rather than keeping them where you put them, but it works fairly well.

    The desktop environment provided by Windows 7 lets you drag a window to a pre-defined part of the screen, such as the left edge, and have it resized to take up the left half of that display.

  4. Use a tiling window manager, so that the default behaviour is to make optimal use of your available screen real estate.

    When you open a window in a tiling window manager, it maximises itself. Opening an extra window will resize the original and move it next to the new window, with both having an equal proportion of the available display.

    This is my preferred option, since it's more flexible than option 3, where you have to exert effort to make your windows tile, and I find it more reliable than option 1.

    It doesn't hurt that some tiling window managers also allow you to configure them to place windows where you prefer, like you were using option 2.

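To give a flavour of option 2, a Devilspie2 configuration is just a Lua script that is run whenever a new window appears. The following is only a sketch: the application names and geometry are examples, not a recommendation.

    -- Illustrative Devilspie2 script; application names and sizes are examples.
    if get_application_name() == "Thunderbird" then
        -- x, y, width, height: park the mail client on the left half
        -- of a 1920x1080 display
        set_window_geometry(0, 0, 960, 1080)
    elseif get_application_name() == "XChat" then
        -- banish IRC to the second workspace
        set_window_workspace(2)
    end
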
The regular editors of Yakking all use tiling window managers. I use AwesomeWM, other editors use XMonad. There are others, such as dwm, Ion and the Shellshape extension to GNOME Shell.

AwesomeWM and XMonad are functionally similar, both offering a highly configurable solution by being fully programmable.

AwesomeWM is configured in Lua, and XMonad is configured in Haskell. My primary reason for using AwesomeWM is that I do not understand Haskell.
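
As a taste of what that configuration looks like, here is the sort of window-placement rule you might add to AwesomeWM's rc.lua. This is a sketch that assumes the default configuration's tags table, and the window classes are only examples.

    -- Illustrative rc.lua fragment; assumes the default configuration's
    -- "tags" table, and the window classes are examples.
    awful.rules.rules = {
        -- mail client on the first tag of the first screen
        { rule = { class = "Thunderbird" },
          properties = { tag = tags[1][1] } },
        -- IRC client on the second screen, if there is one
        { rule = { class = "XChat" },
          properties = { screen = screen.count() > 1 and 2 or 1 } },
    }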

Using an alternative window manager has its pitfalls, such as poor integration with existing desktop environments. For this reason I maintain instructions on how to make them integrate more nicely on my own blog, mostly for my own benefit when I have to re-install a system.

An alternative approach is to integrate tiling into an existing window manager, which is what the Shellshape GNOME Shell extension does.

This has the benefit of being more likely to continue to work as the desktop environment changes.

Posted Wed Sep 3 11:00:09 2014

In a previous article, Daniel covered Basics of the command line. Here are a few extra shell editing keystrokes I use often that might be useful.

  • C-t exchanges the character at the cursor with the one before the cursor. This is handy for fixing typos.
  • M-t (or ESC t) is similar, but works on words instead of characters.
  • M-. (or ESC .) inserts the last word of the preceding command line. This is handy when you, say, first look at a file with less and then want to remove it: you can type rm followed by ESC . for the second command.

There's a lot more. See the bash manual page for details, search for "Readline key bindings".
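
If you want to see what else is on offer, bash can list its current Readline bindings for you; for example, the command below picks out the two transpose commands mentioned above (the exact output depends on your configuration).

    # List all Readline bindings in human-readable form and narrow
    # the list down to the transpose commands.
    bind -P | grep -i transpose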

Posted Wed Sep 10 11:00:06 2014

I recently watched an interesting video called What we know about software development and read an interesting blog post called Norris Numbers.

One of the interesting results in the video is that we can't usefully review more than about 200 lines of code in one sitting.

The blog post describes various thresholds for the manageability of a project: around 2,000 lines of code for something that's quickly hacked together and in a bit of a mess, and around 20,000 for something with well-designed internal APIs.

So, to paraphrase the software commandment number 15, write less code.

Good ways of doing this are:

  1. Pick a language appropriate to the task.

    If you're tying simple programs together, and the most complicated data structure you need to care about is a string, shell is a good choice (see the sketch after this list). C isn't, because of all the complexities of allocating memory, juggling file descriptors and putting command line arguments together.

  2. Use appropriate libraries.

    Code from a library with a well-defined API doesn't count towards your project's total, as you only need to worry about using the API correctly, rather than about how the library works.

  3. In extreme cases, use a Domain Specific Language.

    Relational databases tend to be accessed by SQL queries. These let you manipulate data and retrieve results, without generally having to worry about how it's stored.

    The benefit of this is that you can avoid having to worry about locking to make concurrent access safe, how it's stored on disk, indexing your data to make it faster to retrieve, caching results so you can re-use them, or spreading your database across multiple machines, so if one machine goes down, you can still access your data.

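To give a flavour of the first point, here is the kind of job shell is good at: counting the lines of C source under the current directory. It is only a sketch, but the equivalent C program would spend most of its length on memory and file-descriptor juggling.

    # Count the lines of C source under the current directory.
    find . -name '*.c' -print0 | xargs -0 cat | wc -l
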
As a case study, I'm going to look at the NetSurf web browser.

So let's look at how NetSurf fares against the three suggested approaches for reducing lines of code.

Choice of language

NetSurf is primarily written in C. This is not ideal for reducing the amount of code, but it is appropriate, as NetSurf needs to run on a variety of platforms, some of which haven't got a lot of CPU power or RAM.

The verbosity of the language is offset by other approaches to reduce the amount of code.

Using appropriate libraries

NetSurf initially used existing libraries, but for various reasons has written its own libraries to replace them, and split out code from the main project into new libraries.

The following was generated by the sloccount tool.

SLOC    Directory   SLOC-by-Language (Sorted)
187047  netsurf         ansic=171269,objc=8341,cpp=5716,perl=980,sh=447,
                        asm=288,php=6
111597  libdom          xml=81901,ansic=28064,perl=1269,sh=250,python=113
37841   libcss          ansic=37773,perl=68
13802   libhubbub       ansic=12531,jsp=1156,perl=97,sh=10,python=8
11178   libnsfb         ansic=11168,sh=10
5577    nsgenbind       ansic=3564,yacc=1509,lex=479,sh=25
5218    libparserutils  ansic=5099,perl=119
3175    librufl         ansic=3153,perl=22
2645    libsvgtiny      ansic=2645
1621    buildsystem     perl=1492,sh=108,ansic=21
1076    libnsbmp        ansic=1076
1015    libnsgif        ansic=935,perl=80
887     libpencil       ansic=887
857     librosprite     ansic=857
624     libwapcaplet    ansic=624

It shows NetSurf hovering around the 190,000 lines of code mark, with a lot of support libraries.

Domain specific languages

The above list doesn't include everything, since NetSurf uses a Domain Specific Language for binding its JavaScript engine to its Document Object Model (DOM).

There's 2,641 lines of .bnd code in the netsurf project, which is parsed by the nsgenbind program to produce 13,597 lines of C code.

This comes out a little ahead in terms of total lines of code, since nsgenbind is 5,577 lines. However, I am assured that in the future there will be greater gains as the bindings increase in size.

It also has other benefits: binding code is both tricky and dull, so it is best left to automation; and it allows NetSurf to support 2 different JavaScript APIs, with a third on the way.

Summary

Your homework is to take a project you work on and see how many lines of code it has, using sloccount or cloc. If it's too big, think about how you could split it up to be more manageable.
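
Both tools are happy to be pointed at the top of a source tree; for example (the cloc option shown just keeps version-control internals out of the count):

    # Per-language line counts for the current directory.
    sloccount .
    cloc --exclude-dir=.git .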

Posted Wed Sep 17 11:00:09 2014

Many F/LOSS hackers, as they reach a certain level of hackerdom (I think it's usually level 14 or 15), start to want services on their local network which are either poorly served by traditional "routers" or are simply beyond their usual functionality set. If you're that kind of hacker then bear with me while I go over a few common network services and how you can set up a Linux box to provide them for you, giving you control and diagnostics beyond those of a traditional router.

For those of you who are perhaps level 26 or above, you might be annoyed that I don't do much in terms of treatment of IPv6 here, but I promise I'll do something on IPv6 services another time.

Rather than linking things all over the place, I'll give you one top level link here, to the Linux Home Networking website and hope you are capable of finding suitable tutorials for the parts of this article which interest you. If you're not a Linux person then I'm sure your chosen platform will have similar resources for you to exploit. Apart from the firewalling/routing, this should all be applicable no matter your chosen platform.

DHCP

The most basic service you need on a network is the service which lets a computer which joins the network get an address (and possibly name). The dynamic host configuration protocol, or DHCP, is the protocol for that. DHCP allows new hosts to query the network for what IP range it is in, what IP address the new machine should use, where the router is, what the naming scheme is inside the network and where other services such as DNS (see later) can be found.

You might use server software such as the ISC DHCP Daemon or perhaps something smaller such as DNSMasq, but whatever you use, you will need to decide on an IP address range for your network, and allocate at least one address statically to your new router. Normally home networks will use RFC1918 address ranges such as 192.168.x.y/24 or 10.x.y.z/8-31 and what you select will typically be a combination of the size of your home network, any other networks you might want to route or bridge to in the future, and also how lazy you're feeling. It's common practice to allocate either the bottom (e.g. 192.168.100.1) or the top (e.g. 192.168.77.254) address to the router. I tend to prefer the bottom address.
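
For the ISC DHCP daemon the corresponding configuration is only a few lines. The sketch below assumes a 192.168.100.0/24 network with the router at the bottom address and an invented home.lan domain.

    # Fragment of /etc/dhcp/dhcpd.conf (illustrative)
    subnet 192.168.100.0 netmask 255.255.255.0 {
        range 192.168.100.50 192.168.100.150;
        option routers 192.168.100.1;
        option domain-name-servers 192.168.100.1;
        option domain-name "home.lan";
    }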

DNS

Once you have selected and configured the IP address range for your network, you will typically need to set up a DNS resolver. You can get away with getting your DHCP server to tell your devices to use a public DNS server, or that of your ISP, but in general it's good to have a local DNS server. It can both cache query results to reduce your bandwidth consumption, and also serve a DNS zone for your local network, so that you can refer to your devices by their names rather than trying to remember IP addresses for everything.

You might choose to use the full-power ISC BIND or you can opt to use DNSMasq as before. Wherever you choose to run the DNS server, remember to configure your DHCP server to hand out the right address. I tend to run the DHCP and DNS on the same system.
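
If you go the DNSMasq route, a handful of directives covers both the caching and the local zone. Again this is a sketch, assuming the same invented home.lan domain; the upstream resolver is whichever one you prefer.

    # Fragment of /etc/dnsmasq.conf (illustrative)
    # Answer names in home.lan from DHCP leases and /etc/hosts only,
    # and append the domain to bare host names.
    domain=home.lan
    local=/home.lan/
    expand-hosts
    # Forward everything else to an upstream resolver.
    server=8.8.8.8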

Routing

Of course, a network router is pointless without some kind of routing support. Commonly Linux-based routers combine the routing and firewalling into one service. Linux (currently) does all of this via a combination of a few simple sysctls and a tool called iptables.

There are any number of ways to configure the sysctls and iptables involved in setting up a basic router/firewall, but I shall name a few simple options for you. I personally use a tool called firehol which I find pretty easy to set up, but I have also used ufw in the past (UFW does need some extra help to be a router though). If you're interested in more complex and interesting routing setups then Shorewall is very competent and capable.
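
Whichever tool you pick, the result it generates boils down to something like the following sketch, which assumes eth0 is your WAN interface and eth1 your LAN.

    # Turn on IPv4 forwarding.
    sysctl -w net.ipv4.ip_forward=1
    # Masquerade traffic leaving via the WAN interface.
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    # Let established connections back in, let the LAN out,
    # and drop anything else that tries to cross the router.
    iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
    iptables -P FORWARD DROP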

File server

In the bad old days, the only network filesystem supported by *NIX which was widely compatible and well implemented tended to be NFS. Unfortunately most random consumer devices don't tend to speak NFS, preferring instead to be compatible with Windows and similar devices by using CIFS. There is a competent free software implementation of a CIFS server called Samba, and there are plenty of CIFS clients, in both userland and kernelspace.

These days if you have a heterogeneous network (and really, who among us doesn't?) you are better off going with CIFS/Samba for sharing your file server's storage space.
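
A minimal Samba share is only a few lines of smb.conf. This sketch assumes the files live under /srv/share and that you have added users with smbpasswd.

    # Fragment of /etc/samba/smb.conf (illustrative)
    [share]
        path = /srv/share
        read only = no
        guest ok = no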

Printing

As anyone in a household with more than one user and only one printer will tell you, managing access to the printer can be a pain. These days, many printers can be networked, yet all this does is shift the contention point from the computer the printer is plugged into to the printer itself. Many printers get mightily confused if more than one person connects and sends a document at the same time.

Print servers exist, but as is the way of the world, printing is possibly the hardest thing to make work reliably and stably on a network. Most people these days use CUPS, and if it's installed properly you can administer it from your web browser by pointing it at https://ROUTER_IP:631/.
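
Out of the box CUPS usually only serves localhost, so to use it as a print server you will want something along the lines of the sketch below; check cupsctl(8) on your system for the exact options it supports.

    # Share locally attached printers on the network and allow
    # administration from other machines (for example, your web browser).
    cupsctl --share-printers --remote-admin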

Posted Wed Sep 24 11:00:08 2014