The Yakking Staff Yakking is on hold for now

After 242 articles and almost five years of almost weekly posts, we've run out of steam. Yakking is going to take an indefinite vacation. There will be no new articles, at least for now.

We might start again later. There might be occasional articles at irregular intervals, if we feel energetic. Or not.

We've enjoyed writing Yakking. We hope you've enjoyed reading Yakking. The site will remain, and the archives are still there for reading.

Happy hacking. May the farce be with you. Be well.

Daniel, Richard, and Lars.

Posted Tue May 15 12:00:10 2018 Tags:

Unless your program is compiled into one big binary lump, it will typically need to load other assets on program start.

These are usually libraries, though other assets may be required too.

Your programming environment will define some standard locations (see hier(7) for some examples), but will normally have a way to specify more.

  • C programs will look for libraries in the directories listed, as a :-separated string, in the LD_LIBRARY_PATH environment variable.
  • Python programs will look in PYTHONPATH.
  • Lua programs will look in LUA_PATH, LUA_CPATH and other environment variables based on the version of the Lua interpreter.
  • Java will look in its class path, which can be set with the -classpath option.
  • Executables will be sought in the PATH environment variable.
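The search-path mechanisms above all extend in the same way: prepend a directory to a :-separated variable. A minimal sketch, using PATH and an invented directory and script name:

```shell
# Create a hypothetical directory holding an executable asset.
mkdir -p /tmp/demo-bin
cat > /tmp/demo-bin/hello <<'EOF'
#!/bin/sh
echo hello from demo
EOF
chmod +x /tmp/demo-bin/hello

# Prepend the directory so executables there are found first;
# LD_LIBRARY_PATH, PYTHONPATH etc. are extended the same way.
PATH="/tmp/demo-bin:$PATH"
hello
```

The same prepending trick works for the other variables listed above, since each is searched left to right.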

If you only need assets in the standard locations then you won't normally need to change anything.

However, you're not always able to stick to distribution-provided software alone.

In that case you may need to use software which has "bundled" its dependencies alongside itself.

Linux ELF binaries can make use of their "RPATH" to add extra search paths, but most executable formats don't have a direct equivalent.

In which case we can instead specify the new locations with a wrapper script. The standard trick is to use $0 for the name of the script, dirname(1) to get the directory the script is located in, and readlink(1) -f on the result to turn it into an absolute path.

# Absolute path of the directory containing this script
D="$(dirname "$(readlink -f "$0")")"
# Join the bundled jars into a :-separated classpath
cp="$(set -- "$D/support/jars/"*.jar; IFS=:; printf %s "$*")"
exec java -classpath "$cp" com.myapplication.Main "$@"

This works for running the script in the directory the assets are stored in, but it can be convenient to add the program to a directory in PATH.

If written as a bash script you can use $BASH_SOURCE, which is guaranteed to be the path of the script; in circumstances I can no longer reproduce, I needed to use it instead of $0.

D="$(dirname "$(readlink -f "${BASH_SOURCE}")")"
cp="$(set -- "$D/support/jars/"*.jar; IFS=:; printf %s "$*")"
exec java -classpath "$cp" com.myapplication.Main "$@"
Posted Wed Apr 11 12:00:08 2018 Tags:
Richard Maw Text To Speech

What do you do if you need to be present on a call but you've lost your voice? Why, write an 11-line shell script to replace it, of course!

Our first port of call is of course Google! So what is the first result for "linux text to speech"? (Well, for me it's RPi Text to Speech (Speech Synthesis).) And deep down there, after Cepstral, it's Festival of course!

So how do we use that?

First we need to install it. Since I wrote this on an Ubuntu system I do:

$ apt install festival

This installs the festival command, which has a convenient --tts option!

$ echo hello world | festival --tts

This however has two problems:

  1. It is fatiguing on the fingers to tweak the parameters and run the command.
  2. The output of the command is to the speakers rather than a microphone.

We can fix problem 1 with a trivial shell script to produce output after every line instead.

while read -r REPLY; do
        printf %s "$REPLY" | festival --tts
done

The problem of outputting to a microphone is somewhat more involved.

It's possible to loop your speaker output through a recording device in Pulse Audio by setting the recording device to a "Monitor" device.

It's no doubt possible to drive this from the command-line, but since my chat software is graphical I've no problem using pavucontrol.

Once the chat software is running, open the "Recording" tab and change the input device of the application.

This works, but is unsatisfying: you need a second output device, otherwise other sounds will be broadcast too, and it is prone to causing feedback.

What we need is some kind of virtual microphone. As usual, the likes of Google and StackOverflow come to hand, and a virtual microphone is what Pulse Audio calls a "null sink".

We can create a null sink and give it a recognisable name by running:

pacmd load-module module-null-sink sink_name=Festival
pacmd update-sink-proplist Festival device.description=Festival

Then we can remove it again by running:

pacmd unload-module module-null-sink

So how do we get festival to play its output to that?

We can't start the command and then tweak the parameters in pavucontrol, because it doesn't run long enough for us to change them before it starts playing.

We can play audio to a specified device with the paplay command, but how do we get Festival to output through it?

Fortunately Festival lets you set some parameters in its scripting language.

We need to pick a common audio format that paplay can read and festival can produce. We can set this with:

(Parameter.set 'Audio_Required_Format 'aiff)

We need to tell festival to play audio through a specified Pulse Audio device. The best way I could find to do this was setting Audio_Method to Audio_Command and Audio_Command to a paplay command.

(Parameter.set 'Audio_Method 'Audio_Command)
(Parameter.set 'Audio_Command "paplay $FILE --client-name=Festival --stream-name=Speech --device=Festival")

Festival lets us run commands on its command-line so the final script we get is:

pacmd load-module module-null-sink sink_name=Festival
pacmd update-sink-proplist Festival device.description=Festival
while read -r REPLY; do
        festival --batch \
                '(Parameter.set '\''Audio_Required_Format '\''aiff)' \
                '(Parameter.set '\''Audio_Method '\''Audio_Command)' \
                '(Parameter.set '\''Audio_Command "paplay $FILE --client-name=Festival --stream-name=Speech --device=Festival")' \
                '(SayText "'"$REPLY"'")'
done
pacmd unload-module module-null-sink

To use it:
  1. Run that in a terminal window.
  2. Start your chat program.
  3. Start pavucontrol and change the input device of your program to Festival.
  4. Type lines of text into the terminal to speak.

Since this was a project to achieve the goal of being able to participate in a group chat without being able to speak, development stopped there.

Should further development be warranted other changes could include:

  1. The module load and unload process is pretty fragile. Would need to use an API that is tied to process lifetime or at least unload by ID rather than name.
  2. No escaping mechanism for $REPLY. Would need to learn string escaping in lisp.
  3. Lots of work done per line of text. Festival has a server mode which could reduce the amount of work per line.
  4. Investigate a way to pipe audio directly between Festival and Pulse Audio. text2wave exists to write to a file, possibly standard output, and pacat exists to take audio from standard input and put it to speakers, but I couldn't get it to work at the time.
  5. Replace festival entirely. It is in need of maintainership, and has been broken in Fedora releases, so replacing the voice generation with pyttsx, espeak or flite could help.
Posted Wed Apr 4 15:23:39 2018 Tags:
Daniel Silverstone Coming back to a project

We've spoken about how you are not finished yet with your project, how to avoid burning bridges in conversation, how to tell if a project is dead, or merely resting, and on knowing when to retire from a project. Today I'd like to talk about coming back to a project which you have taken a hiatus from.

Whether your break from a project was planned or unplanned, whether you were fundamental to a project's progress, or "just" a helper, your absence will have been felt by others associated with the project, even if they've not said anything. It's always a nice thing to let anyone who ought to know, know that you have returned. Depending on your level of integration into that project's community and the particulars of your absence, your own sense of privacy around any reasons, etc., it can be worth letting the community know a little of why you were away, and why that means that now you are back.

If your time away was unannounced, unplanned, abrupt, or otherwise disruptive to the community, it can also help to mend things if you are prepared to apologise for your absence. NOTE I am not saying you have to apologise or excuse the reasons why you were absent, merely to note to the community that you are sorry to have been away and that you look forward to reintegrating.

Just like I'd recommend when you join a community, make a point of discussing what you intend to achieve in the short term, in case there's someone who wants to assist you or has been working on something similar. Also make it clear whether you've kept up passively with the community in your time away, or whether the period of your absence is essentially a blank to you. That can help others to decide if they need to tell you about things which might be important, or otherwise relevant, to your interests and plans.

If you were specifically, and solely, responsible for certain things when you were previously part of the project, you should, ideally, go back and check if anyone has picked them up in your absence. If so, make a point of thanking them, and asking them whether they wish to continue with the work without you, share the burden with you, or pass the work back to you entirely. If that responsibility was part of why you had a hiatus, ensure that you don't end up in a similar position again.

There are many more suggestions I might make, but I'll leave you with one final one. Don't expect the project you re-join to be the same project you left. In your absence things will have progressed with respect to the project, others in the community, and yourself. Don't be saddened by this, instead rejoice in the diversity of life and dig back in with gusto.

Posted Wed Mar 28 13:17:44 2018
Lars Wirzenius Famous bugs

The history of computing has a number of famous bugs. It can be amusing to read about them. Here's a start:

What's your favourite bug? Please tell us in comments.

Posted Wed Mar 21 12:00:06 2018 Tags:
Daniel Silverstone So you think you are finished?

It's not often that we just link to a single article, but sometimes that article is simply "worth it". In this instance I recently read a posting by Loup Vaillant which made me think "This needs to be distributed to the yakking community" and so here we go…

Loup wrote about what needs to happen when you think you're finished writing your free/open source project's code, and I think you should run over there right now and read it: After your project is done — Loup Vaillant

If you're still here afterwards, well done. Now go and look over any projects of your own which you consider "finished" and see if there's anything Loup highlighted which still needs to be done to them. Maybe make a next-action to do something about it.

Posted Wed Mar 14 12:00:07 2018
Lars Wirzenius Cycles in development

Software development tends to happen in cycles and it's good to be aware of this.

The innermost cycle is the edit-build-test-debug loop. You write or change some code, you build the software (assuming a compiled language), and you run the software to test it, either manually or by using automated tests.

Being the innermost cycle, it's probably where most of development time happens. It tends to pay to optimise it to make it as fast as possible. Each of the parts can be optimised. For editing, use an editor you like and are comfortable with. Also, use a keyboard you can type on efficiently. For building, use incremental building, and maybe turn off optimisation to make the build go faster. For testing, maybe run only the relevant tests, and make those run quickly.

Other cycles are:

  • The TDD cycle: add a new test, add or change code to make the test pass, refactor. This tends to contain the edit-build-test-debug loop. Usually at most minutes in length.

  • Adding a minimal user-visible change: a new feature is broken into the smallest increment that makes the user's life better, and this is then developed, often using a number of TDD cycles.

  • A "sprint" in Agile development: often a week or two or three, often adds an entire user-visible feature or other change, or several.

  • A release cycle: often many weeks or months, adds significant new features. The set of added features or other changes, and the length of the release cycle, are often determined by business interests and strategy, and often ridiculed by developers. This sometimes happens for open source projects too, however: for example, a Linux distribution might synchronise its own release schedule with those of multiple major, critical components it includes.

  • The maintenance cycle: after the software has been "finished", and put into production, bugs and other misfeatures get fixed and once there's enough of them, a new release is made and put into production.

In each case, it is useful to know the intended and expected length of the cycle, and what needs to happen during the cycle, and what the intended and expected result should be. It is also useful to try to identify and remove unnecessary clutter from the cycles, to make things go smoothly.

Posted Wed Mar 7 12:03:07 2018 Tags:

The title of this article is intentionally provocative.

Git is a flexible tool that allows many kinds of workflow for using it. Here is the workflow I favour for teams:

  • The master branch is meant to be always releasable.

  • Every commit in master MUST pass the full test suite, though not all commits in merged change sets need to do that.

  • Changes are done in dedicated branches, which get merged to master frequently - avoid long-lived branches, since they tend to result in much effort having to be spent on resolving merge conflicts.

    • If frequent merging is, for some reason, not an option, at least rebase the branch onto current master frequently: at least daily. This keeps conflicts fairly small.
  • Before merging a branch into master, rebase it onto master and resolve any conflicts - also rebase the branch so it tells a clean story of the change.

    • git rebase -i master is a very powerful tool. Learn it.

    • A clean story doesn't have commits that fix mistakes earlier in the branch-to-be-merged, and introduces changes within the branch in chunks of a suitable size, and in an order that makes sense to the reader. Clean up "Fix typo in previous commit" type of commits.

  • Update the NEWS file when merging into master. Also Debian packaging files, if those are included in the source tree.

  • Tag releases using PGP signed, annotated tags. I use a tool called bumper, which updates NEWS and debian/changelog, tags a release, and updates the files again with +git appended to the version number.

    • Review and update NEWS and debian/changelog before running bumper, to make sure they're up to date.
  • Name branches and tags with a prefix foo/ where foo is your username, handle, or other identifier.

  • If master is broken, fixing it has highest priority for the project.

  • If there is a need for the project to support older releases, create a branch for each such, when needed, starting from the release's tag. Treat release branches as master for that release.
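The branch-rebase-merge flow above can be sketched as follows, using a throwaway repository and an invented branch name (in a real project you'd rebase interactively and run the test suite before merging):

```shell
set -e
# Throwaway repository for demonstration only.
dir="$(mktemp -d)"
cd "$dir"
git init -q
git symbolic-ref HEAD refs/heads/master   # make sure the branch is called master
git config user.email you@example.com
git config user.name "You"
echo one > file
git add file
git commit -qm 'Initial commit'

# Changes happen in a dedicated, prefixed branch, never directly on master.
git checkout -qb lars/fix-frobnicator
echo two >> file
git commit -qam 'Fix the frobnicator'

# Before merging: rebase onto current master (a no-op here, but it keeps
# conflicts small and the story clean), then merge back.
git rebase -q master
git checkout -q master
git merge -q --no-ff -m 'Merge lars/fix-frobnicator' lars/fix-frobnicator
git log --oneline
```

The --no-ff merge preserves the branch as a unit in history, while the rebase beforehand keeps master's first-parent history linear and releasable.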

Posted Wed Feb 21 12:00:11 2018 Tags:
Lars Wirzenius Don't burn that bridge!

You may be familiar with some variant of this scenario:

You're on a mailing list (or web forum or Google group or whatever), where some topic you're interested in is being discussed. You see someone saying something you think is wrong. You fire off a quick reply telling them they're wrong, and move on to the next topic.

Later on, you get a reply, and for some reason they are upset at you telling them they're wrong, and you get upset at how rude they are, so you send another quick reply, putting them in their place. Who do they think they are, spouting off falsehoods and being rude about it?

The disagreement spirals and becomes hotter and more vicious each iteration. What common ground there was in the beginning is soon ruined by trenches, bomb craters, and barbed wire. Any bridges between the parties are on fire. There's no hope for peace.

This is called a flame war. It's not a good thing, but it's not uncommon in technical discussions on the Internet. Why does it happen and how can you avoid it?

As someone covered in scars of many a flame war, here are my observations (entirely unsubstantiated by sources):

  • Flame wars happen because people try to be seen as being more correct than others, or to be seen to win a disagreement. This often happens online because the communication medium lacks emotional bandwidth. It is difficult to express subtle emotions and cues over a text-only channel, especially a one-way one.

    Disagreements spiral out of control more rarely in person, because in-person communication contains a lot of unspoken parts, which signal things like someone being upset before the thing blows up entirely. In text-only communication, one needs to express such cues more explicitly, and be careful when reading to spot the more subtle cues.

  • In online discussions around free software there are also often no prior personal bonds between participants. Basically, they don't know each other. This makes it harder to understand each other.

  • The hottest flame wars tend to happen in contexts where the participants have the least to lose.

Some advice (again, no sources):

  • Try hard to understand the other parties in a disagreement. The technical term is empathy. You don't need to agree with them, but you need to try to understand why they say what they say and how they feel. As an example, I was once in a meeting where a co-worker arrived badly late, and the boss was quite angry. It was quickly spiralling into a real-life flame war, until someone pointed out that the boss was upset because he needed to get us developers do certain things, and people being late was making that harder to achieve, and at the same time the co-worker who was late was mourning his dog who'd been poorly for years and had recently committed suicide by forcing open a 6th floor window and jumping out.

  • Try even harder to not express anger and other unconstructive feelings, especially by attacking the other parties. Instead of "you're wrong, and you're so stupid that the only reason you don't suffocate is because breathing is an autonomous action that doesn't require the brain, go jump into a frozen lake", say something like "I don't agree with you, and I'm upset about this discussion so I'm going to stop participating, at least for a while". And then don't participate further.

  • Do express your emotions explicitly, if you think that'll mean others will understand you better.

  • Try to find at least something constructive to say, and some common ground. Just because someone is wrong about what the colour of the bike shed should be, doesn't mean you have to disagree about whether a bike shed is useful.

  • Realise that shutting up doesn't mean you agree with the other parties in a disagreement, and it doesn't mean you "lose" the argument.

  • Apply rule 6 vigorously: write angry responses if it helps you deal with your emotions, but don't send them. You can then spend the rest of your life being smug about how badly other people have been humiliated and shown to be wrong.

Your homework for this week, should you choose to accept it, is to find an old flame war, read through it, and see where the participants could've said something different and defused the situation. You get bonus points if it's one which you've participated in yourself.

Posted Wed Feb 7 12:00:09 2018 Tags:
Daniel Silverstone Processing input

Computer programs typically need some input on which to perform their purpose. In order to ascribe meaning to the input, programs will perform a process called parsing. Depending on exactly how the author chooses to develop their program, there are a number of fundamentally different ways to convert a byte sequence to something with more semantic information layered on top.

Lexing and Parsing

Lexical analysis is the process by which a program takes a stream of bytes and converts it to a stream of tokens. Tokens have a little more meaning, such as taking the byte sequence "Hello" and representing it as a token of the form STRING whose value is Hello. Once a byte stream has been turned into a token stream, the program can then parse the token stream.
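As a toy illustration, and staying in shell, even grep can act as a crude lexical analyser: each regular expression alternative names a token class, and each match becomes one token. The token classes and input here are invented for the example:

```shell
# Recognise identifiers, double-quoted string literals,
# and single-character punctuation, emitting one token per line.
lex() {
        printf '%s' "$1" | grep -oE '[A-Za-z_][A-Za-z_0-9]*|"[^"]*"|[();]'
}

lex 'println("Hello");'
```

This prints the five tokens println, (, "Hello", ), and ; on separate lines. A real lexer would also classify each token and report positions for error messages, which this sketch omits.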

Typically, the parsing process consumes the token stream and produces as its output something like an abstract syntax tree. This AST layers enough semantic meaning onto the input to allow the program to make use of the input properly. As an example, in the right context, a parser might take a token stream of the form STRING(println) '(' STRING(Hello) ')' ';' and turn it into an AST node of the form FunctionInvocation("println", [ "Hello" ]). As you can see, that would be far more useful if the program in question is a compiler.

Parsing in this way is commonly applied when the language grammar in question meets certain rules which allow it to be expressed in such a way that a token stream can be unambiguously converted to the AST with no more than one "look-ahead" token. Such grammars can be parsed "left-to-right", i.e. unidirectionally along the token stream, and we usually call those languages LALR(1).

To facilitate easy lexical analysis and the generation of LALR(1) parsers, there exist a number of generator programs such as flex and bison, or re2c and lemon. Indeed such generators are available for non-C languages such as alex and happy for Haskell, or PLY for Python.

Parsing Expression Grammars

PEGs are a type of grammar which typically ends up implemented as a recursive descent parser. PEGs sometimes allow for a parser to be represented in a way which is more natural for the language definer. Further, there is effectively infinite capability for look-ahead when using PEGs, allowing them to parse grammars which a more traditional LALR(1) parser would be unable to.

Combinatory Parsing

Parser combinators take advantage of higher order functions in programming languages to allow a parser to be built up by combining smaller parsers together into more complex parsers, until a full parser for the input can be built. The lowest level building blocks of such parsers are often called terminal recognisers and they recognise the smallest possible building block of the input (which could be a token from a lexical analyser or could be a byte or unicode character). Most parser combinator libraries offer a number of standard combinators, such as one which will recognise one or more of the passed in parser, returning the recognised elements as a list.

Sadly, due to the strong functional programming nature of combinators, it's often very hard to statically analyse the parser to check for ambiguities or inconsistencies in the grammar. These issues only tend to become obvious at runtime, meaning that if you're using parser combinators to build your parser, it's recommended that you carefully write your grammar first, and convert it to code second.


Find a program which you use which consumes input in a form specific to the program itself (or find a library which is meant to parse some format), and take a deep look at how it performs lexical analysis and parsing.

Posted Thu Feb 1 12:00:09 2018