Most Unix shells let you customize the command prompt. The traditional prompt is just a $ or #, for normal users and the root user respectively. A short prompt made sense in the days of Teletype terminals running at 110 bit/s, or about 10 characters per second, when a longer prompt could take a second or two to print.

These days, the length of a prompt is no longer a speed issue, and you can even have colours. This article gives a taste of the possibilities for the Bash shell; for other shells, look into their documentation. The Zsh shell is especially powerful in this area.

In the Bash shell, you change the prompt by setting the PS1 variable to the value you want. You can use various escape sequences for dynamic content. For example:

\h     the hostname up to the first `.'
\t     the current time in 24-hour HH:MM:SS format
\w     the current working directory, with $HOME
       abbreviated with a tilde

See the bash manual page for the full list (search for "PROMPTING").

As an example, set PS1 like this:

PS1='\u@\h:\w\$ '

After this, the prompt will look something like this:

liw@exolobe1:~/.../yakking.branchable.com$ 

Here, \u is expanded to the username of the user running the shell, \h to the hostname, \w to the current working directory (with some shortening logic, as shown above), and \$ to # if the user is root and $ otherwise.

You can also set the prompt even more dynamically, by setting the PROMPT_COMMAND variable. Bash executes the contents of that variable as a shell command just before printing each prompt, and that command can, for example, set PS1 to contain the currently checked out branch of a git repository. (Doing that is left as a tantalising exercise to the reader.)
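
As a simpler illustration (leaving the git exercise to the reader), here is a sketch of a PROMPT_COMMAND that rebuilds PS1 before every prompt so that it shows the exit status of the previous command; the function name set_prompt is just an example:

set_prompt() {
    local last_status=$?              # exit status of the command the user just ran
    # Double quotes expand $last_status; single quotes keep the \$ escape intact.
    PS1="[$last_status] "'\u@\h:\w\$ '
}
PROMPT_COMMAND=set_prompt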

Posted Wed Oct 1 11:00:10 2014

In a previous article I discussed the importance of customising where you work to minimise context switching, so this week I thought I'd include examples of what I do.

My Workstation

At work I have a laptop, so I can take it to meetings or off-site.

I attach a USB mouse as it's more ergonomic than the touchpad; my attempts to work only via the keyboard failed, partly due to learned behaviours, and partly because copying text between windows is a chore without a pointing device.

I have a monitor on my desk to serve as a second display, or as the primary display of a development board.

I run Debian Wheezy on my laptop, with AwesomeWM as my window manager.

I used to have some complicated customisations to AwesomeWM, but regressions in the packaged versions that stopped my customisations from loading led me to learn the defaults instead, so even though the regressions are now fixed, I no longer customise AwesomeWM.

I have an irssi window and a mutt window on my second display, always open so I can see when there's something that requires more immediate attention.

I have different tabs for my web browser and all my work terminals on my primary display, which sits in front of the secondary display, so I'm not always focussing on IRC or e-mail, but I can glance up to see if there are any changes I need to be aware of.

I have a second browser open on my secondary display with Google Music running, as I find it helps reduce distractions from office noise.

My Hackstation

I run Ubuntu 14.04 on my personal laptop, as I also use it to play games, and the required drivers were not available in Debian Wheezy.

I have a USB docking station with a built-in VGA port, which is driven from my laptop via DisplayLink. This means I can have an extra monitor, a keyboard and a mouse just by plugging in one USB cable.

The convenience of using just one USB cable is the difference between being bothered about using an extra monitor and not.

I'm considering upgrading my docking station to a UD-3900, since it includes an ethernet port and isn't limited to VGA displays, though it can't be powered over USB and it claims not to support Linux.

In addition to the docking station, I have two USB touchscreens, a Lilliput UM-80 and a Mimo UM-720, taking the physical separation of contexts further.

The touch functionality doesn't work so well in a multi-monitor setup, as the touch coordinates are assumed to be relative to the whole virtual desktop rather than to the individual display. However, after fixing the quirks that make them treat the input as upside-down, they can be used in multi-seat as extra desktop sessions.
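
If you want to try something similar under X11, the following is a rough sketch of how I'd approach it with xinput; the device name "Mimo Touch" and the output name DP-1 are placeholders, and the exact matrix depends on your screen's orientation:

# Restrict a touchscreen's coordinates to one output instead of the
# whole virtual desktop ("Mimo Touch" and DP-1 are placeholder names).
xinput map-to-output "Mimo Touch" DP-1

# Alternatively, if the input arrives upside-down (rotated 180 degrees),
# set the device's coordinate transformation matrix by hand; note that
# this overrides whatever map-to-output configured.
xinput set-prop "Mimo Touch" "Coordinate Transformation Matrix" \
    -1 0 1  0 -1 1  0 0 1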

Apart from the different Linux distribution, I run mostly the same software at home as at work.

The exceptions are that I play my music on my Chromecast instead of on my laptop through headphones, and that I occasionally use cool-old-term as my terminal emulator for fun.

I have my work terminals on my laptop display. I put a web browser on my second monitor, and on my little USB touchscreens I run my irssi and mutt windows.

Posted Wed Oct 8 11:00:07 2014
Software testing

A while ago, in the truisms series, I talked about how, if you are not testing your software regularly and consistently, then you can't have confidence that it works. At the time, I didn't go into a lot of detail about ways to do software testing; I talked about some types of testing, but not a lot, and I left a large amount of it to you to research and learn.

Today I'd like to take some time to go into a little bit more detail about mechanisms and techniques for software testing, and to show that just because modern techniques seem to focus on managed or scripting languages doesn't mean they can't apply to C or similar. Wikipedia has a very good set of articles about software testing and I recommend you read them, but before you do, let me highlight a few things you might want to consider, based on my experience with testing C/C++, Lua, Python, Haskell and shell programs.


The first thing I'd like to highlight is that any testing you do must be a live part of your project. Tests are not write-once-worry-never code. They should evolve with your project, and for the most part test code ought to be the very highest quality code in your project. If your tests are bad, they might be giving you false confidence (or a false lack of confidence) in your project, resulting in either undiscovered bugs or wasted effort.

Tests should be simple, they should be clear to people who don't necessarily know the codebase intimately, they should be well documented, and perhaps most importantly of all, they should be obviously correct. This is a tall order for any code, and for test code it can be even harder, but if you strive for these attributes in your test code, you'll appreciate your tests more and more over time.

The second point I'd like to labour is that there is no such thing as a right or wrong testing technique. Different techniques are good at testing different kinds of surface and exposing different kinds of bugs or issues. Don't be reluctant to add different kinds of tests to your codebase, and also don't be afraid to test the same thing in multiple different ways.

The third point follows from the second: test the surfaces you expose for the features you support. Different surfaces lend themselves to different kinds of tests, and so you will implicitly start to gather different test types. For instance, your project might involve a library (internal or otherwise) which supports a command line application. Testing internal surfaces of the library where appropriate (e.g. simple single-purpose utility classes or equivalent) and testing the external surface of the command line application are probably the two places to best concentrate your efforts. If your library is also exposed to others (e.g. if the command line application is just one example client of the library, as apt-get is) then you should also explicitly test the API you are claiming to support.
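
To make the external-surface idea concrete, here is a minimal sketch of such a test as a shell script; the program name frobnicate and its expected behaviour are invented for the example:

#!/bin/sh
# Sketch of a test against a command line surface: run the program the
# way a user would, then check its exit status and output. "frobnicate"
# is a made-up example program.
set -eu

output="$(frobnicate --version)"

case "$output" in
    "frobnicate "*) ;;   # output starts with the program name, as expected
    *) echo "unexpected --version output: $output" >&2; exit 1 ;;
esac

echo "PASS"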


Testing techniques that I have benefitted from, and believe you will benefit strongly from learning about and applying in your projects are:

  • Unit testing for library APIs and similar
    • There's such a multitude of unit testing frameworks I'm not going to suggest a particular one.
    • Interestingly though, this overlaps with scenario testing to some extent thanks to tools like cucumber.
  • Scenario testing for higher level surfaces such as tools or services
    • Here I will make a partisan recommendation for the yarn tool which I helped to design and Lars wrote.
    • If you try no other method of testing, try this one. Yarn re-kindled my personal joy in testing software. (A rough sketch of what a scenario looks like follows this list.)
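
For a flavour of scenario testing, here is a rough sketch of what a yarn scenario might look like; the program, step wording and file names are invented for the example, so check the yarn documentation for the precise syntax and file layout:

SCENARIO greeting the user
WHEN the user runs the greeter with the name Alice
THEN the output contains Hello Alice

IMPLEMENTS WHEN the user runs the greeter with the name (\S+)
# Shell implementation of the step; yarn supplies MATCH_1 and DATADIR.
greeter "$MATCH_1" > "$DATADIR/output"

IMPLEMENTS THEN the output contains (.+)
grep -q "$MATCH_1" "$DATADIR/output"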

I have written my own test frameworks, I have used frameworks which exist already, and I cannot say which I prefer. Using existing frameworks certainly saves a lot of effort when you want features such as running each test in a subprocess, but sometimes nothing is as elegant as writing your own framework in 20 lines of Lua when you want it.

All in all, my strong recommendation is to learn to test, and to learn to love to test too. Now get on and read the Wikipedia software testing page, and also Martin Fowler's article index on the matter, and get busy.

Posted Wed Oct 15 11:00:08 2014

One way of looking at a computer program is in terms of the resources it acquires and then uses to perform some useful computation.

You acquire memory, do some computations with it, then release it again.

Resource handling is so important that it is a major task of your operating system.

You use system calls to allocate memory, create files, open file handles, write data to them, and remove files when they are no longer needed.

Many resources are cleaned up automatically when your program exits. Any memory your program allocated is released back to the operating system, and any file descriptors are closed.

Not all resources are automatically cleaned up though. Files on disk need to be unlink(2)ed before the space they occupied can be re-used.

In Linux, a file is only removed once its link count reaches zero, so that it can't be found in the file system tree any more, and no processes have open file handles to it.

Terminating your process is not the only way to close open file handles, though, and you shouldn't rely on it, as there is a limit to the number of file handles a process can have open at once.

For this reason, the close(2) system call exists, so when you no longer need a file to be open, you can close it.
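
For example, in the shell you can open and close file descriptors explicitly with the exec builtin; this short sketch also shows that a file removed while a descriptor is still open stays readable until that descriptor is closed (the file name is just an example):

exec 3> /tmp/example.log      # open fd 3 for writing
echo "some data" >&3          # use the open file handle
exec 3>&-                     # close fd 3 when no longer needed

exec 4< /tmp/example.log      # open the file again, for reading
rm /tmp/example.log           # unlink it: the name is gone...
cat <&4                       # ...but the data is still readable via fd 4
exec 4<&-                     # only now is the file really removed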

Because of this, programs are generally structured around the acquisition of resources, operating on them, then releasing them again.

For example, in shell scripts you often need to allocate temporary files and directories. These are not automatically cleaned up when your process exits, so you need cleanup code. The trap shell builtin is the most useful tool available for this.

td="$(mktemp -d")
trap 'rm -rf "$td"' 0 # remove tempdir on exit
cd "$td"
…

Trap is limited to being given a single command to run on a signal (or on exit), which falls apart quickly when you have multiple resources to handle. At that point you should probably be using a more full-featured programming language.
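
One way to stretch trap a little further, sketched below, is to register a single cleanup function and have it release every resource the script has acquired so far; the variable names here are invented for the example:

cleanup() {
    # Remove whatever has been acquired so far; the tests make it safe
    # to call even if some of the variables were never set.
    if [ -n "${lockfile:-}" ]; then rm -f "$lockfile"; fi
    if [ -n "${workdir:-}" ]; then rm -rf "$workdir"; fi
}
trap cleanup 0

workdir="$(mktemp -d)"
lockfile="$(mktemp)"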

Subsequent articles will cover appropriate resource cleanup idioms in other programming languages.

Posted Wed Oct 22 11:00:08 2014

Memory is one resource in your computer which your programs will use. Many languages ease the management of memory by being 'garbage collected', which means that rather than the software author having to keep track of memory and release it when they are finished with it, the language itself keeps track and does that on their behalf. Garbage collection schemes are both old (the original LISP had GC) and the subject of modern research.

There are many schemes for garbage collection of memory, from simple but effective ones such as Perl 5's reference counting or Python's use of both reference counting and also cycle cleanup, through to the wider field of tracing garbage collectors.

Many programming languages include one or more garbage collection approaches in their memory management runtime; indeed, almost every language except C, C++ and Objective-C includes some form of automatic memory management by default.

However, garbage collection is not a panacea. Indeed, it can introduce problems to your software which you would not normally encounter if you were managing everything explicitly yourself. For example, depending on the scheme in use, your program may pause for unpredictable periods at unpredictable intervals. If you need your program to always respond smoothly to external events, this kind of behaviour can be a show-stopper. Of course, there are plenty of mechanisms in place to mitigate this kind of problem, and your chosen language will have interfaces to the garbage collector to allow you to tell it when you want it to do work, but still, it can be a pain if you're not expecting it.

Garbage collection schemes can also mean that your program ends up "using" much more RAM than it is, in fact, using. On large computers this might not be an issue, but if you're writing for more resource-constrained situations, or working with super-large datasets, then this might turn out to be a problem too. Again, working with your language runtime's tweaking APIs can help you mitigate this problem.

Posted Wed Oct 29 12:00:06 2014