A lot of people keep a diary. This is a time-tested way of ensuring future historians have juicy rumours to read and write books about. A journal is like a diary, but it's meant to be more serious and less saucy.

I've been keeping a journal for some years now. It's been good in several ways:

  • It's a private place where I can vent entirely without any inhibition. As long as I don't leak the content, I can write anything there, things I couldn't share even with my closest friends and loved ones. In my journal I don't need to be fair or balanced or diplomatic; if calling my boss a poophead helps me, I can do that in my journal. More importantly, apart from name-calling, I can be open in my journal about my hopes and dreams, and speculate as freely as I need to about all sorts of crazy ideas. If I want to fantasize about writing my own Debian installer, my journal is where I'll do it. In my journal I don't have to worry about people misunderstanding me, or attacking my vague, half-developed crazy ideas and ridiculing me about them for the next several years.

    My journal is a safe place. (This is one of the reasons why my backups are encrypted.)

  • It's a place to keep an external memory. One of the things I put in my journal is a stream of consciousness while developing. This allows me to answer questions of the form "what the bleeding heck was I thinking when designing this software" with quotes from my younger self. Perhaps more usefully, this can be applied to debugging as well: tricky problems often involve keeping track of a lot of data to fully understand what's going on, and a written journal is a better place for that than the brain.

    A bug tracker is usually not a good place for this, or at least not for the full stream of consciousness. Most of that stream is necessary for the process, but other people shouldn't be flooded with all of it, only with the actually relevant parts.

  • I also put all sorts of other bits of information into my journal. In fact, over the years it has developed into a personal knowledge base, where I can find a bunch of things that are relevant to me, but not necessarily easy to find online. As an example, what's the command to do TOTP on Linux that works with Amazon? (oathtool --base32 --totp, not too difficult to find but easier in my journal.)

    Some information is private in nature, such as who the people were that I had that interesting lunch with at that conference that one time.

There are a myriad of tools for keeping a journal, or personal knowledge base. There are wikis of various flavors, plain text files in git, online document services, etc. You can keep it on paper as well, though that makes grepping harder. As I prefer not to pour my heart and soul into a service run by other people, my journal is an ikiwiki instance that I run on my laptop, which renders static HTML that is served via Apache on my laptop (and only to my laptop). This satisfies my needs for ease of use and privacy.

Using a wiki engine for this is nice, because linking adds a lot of power and can make finding relevant information faster. Using the ikiwiki inline directive, which builds a page by collecting other pages matching a pattern, I get pages for particular people (I link to the person, and the person's page pulls in every page that links to them), topics (e.g., projects), tags, and more. Quite nifty, and I'm afraid I can't show you.

Keeping a journal takes a bit of effort, of course. It also takes time for a journal to become useful: having diary entries for a week probably doesn't help much. Having them from a decade changes this in a qualitative, not just a quantitative way. Do you remember what you got your loved ones as presents seven years ago? I don't even remember what I got as a present last year.

Give it a try. It doesn't have to be perfect, but you need to keep doing it. Report back in a comment below four years from now and tell us if it was helpful to you.

Posted Wed Jun 21 12:00:08 2017
Richard Maw Piecemeal development

Daniel recently discussed the Minimum Viable Product development approach.

As discussed in his article, it has the disadvantage that it results in a lot of code that will be thrown away after being deemed insufficient.

An alternative approach is to do some analysis up-front and determine the full set of functionality and split the work into sub-projects, so that each component is a project of value by itself.

For example, you may be writing a daemon in Python which needs to be able to safely create and read PID files.

You would then need to investigate which projects exist for doing so, evaluate whether they are sufficient, and if not either develop a replacement or contribute to an existing project.
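To make the example concrete, here is a minimal sketch of what such a PID-file helper might look like in Python; the function names and the staleness check are invented for illustration, not taken from any particular existing project:

    import os

    def write_pidfile(path):
        # Atomically claim the PID file; fail if a live process already owns it.
        try:
            # O_EXCL makes creation atomic: we either create the file or fail.
            fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
        except FileExistsError:
            if not pidfile_is_stale(path):
                raise RuntimeError("daemon already running")
            os.unlink(path)  # previous owner is gone; reclaim the file
            fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
        with os.fdopen(fd, "w") as f:
            f.write("%d\n" % os.getpid())

    def pidfile_is_stale(path):
        # Return True if the recorded PID no longer refers to a live process.
        try:
            with open(path) as f:
                pid = int(f.read().strip())
        except (OSError, ValueError):
            return True  # unreadable or garbled: treat as stale
        try:
            os.kill(pid, 0)  # signal 0 checks for existence without killing
        except ProcessLookupError:
            return True
        except PermissionError:
            return False  # the PID exists, it just belongs to another user
        return False

Even this small sketch glosses over races (the recorded PID may have been reused by an unrelated process, and two daemons could reclaim a stale file at the same time), which is exactly why it is worth evaluating existing projects first.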

For an ambitious project this can provide more value in the short term, since while it will take a while before the project's main goal is achieved, some of the component parts are already of value.

This can be better for motivation than the Minimum Viable Product approach if the developers involved tend towards obsessing over minutiae and having the "correct" solution for every little part of the project.

However, it is particularly vulnerable to a loss of motivation when the current state of the project is compared with its goal, since it takes longer to get to something demonstrable if feedback on the individual components is not readily available.

This may be viewed as waterfall rather than agile development, since it requires a large amount of analysis up-front to determine which components are required and which should be split out into separate projects; on the other hand, those individual components may be reusable.

Similarly, if it turns out that the goal of the project was bad, then at least the component parts of the project are salvageable.

If the project does eventually fail, then at least lessons will have been learned for future attempts.

Posted Wed Jun 14 12:00:07 2017
Daniel Silverstone Minimal Viable Project

Quite a while ago now, Lars spoke about writing the scaffolding of your projects as a first step. We've covered all sorts of things, like how to lay out your codebase, and recently the motivation for projects in the first place.

And all this is good. You've decided you want to write something awesome, and you have a design which you think will lead to an amazing program that will wow the world. But it's going to take you about six months of engineering before you can get to the result you've designed for. That's an awfully long time before you know if you're doing the right thing.

Lars spoke of TDD (test driven development) in his scaffolding article and that's an excellent way of ensuring that everything you write is validated. Sadly it does nothing to guarantee that everything you write is useful or desirable in implementing your final solution. There are a number of ways of bridging this gap and today I'm going to talk about one of them.

The "Minimal Viable Product" is a process whereby you develop the least you possibly can to give yourself an edge or unique selling point; and the same thing can be applied to software engineering. You can shorten the feedback loop, reduce the chance of developing something undesirable, and get to a program which is of use to you as soon as possible; if you focus your early development effort on a minimal viable project.

You should take time, after you've done your high-level architecture and design discussions, to gather by a whiteboard and come up with the absolute bare minimum of your feature set which you could implement and still produce something which can actually be used for a real-world use-case.

Then, after you've implemented your MVP, you've likely had to amend your design documentation a good amount (possibly even fundamentally). Fortunately for you, you come out of the process with something which does something useful (and thus can be effectively evaluated), and every change you make to your design as a result has a grounding in the reality of implementation.

Your MVP could be thought of as a prototype. Indeed some developers prefer, at this point, to throw away the MVP and reimplement it with the insights gained during the implementation. What is important about an MVP, though, is that once you've been around the prototype loop a few times, you keep the MVP code and evolve your final project out of that codebase. This critical difference between an MVP and a pure prototype allows you to bring continuous testing, and continuous integration and deployment if appropriate, online the moment you have your MVP ready (or even before). And remember, if it's not tested, it doesn't work.

Your homework this week is to look at some of the designs or READMEs you've written of late, and think about what an MVP might be which could lead you to greater insights into the problem, or even toward your final solution.

Posted Wed Jun 7 12:00:06 2017

Useful, secure, finished. Pick two.

I've just spent a long time writing about how systemd is the solution to all your process management reliability woes.

As with everything though, as I've alluded to in the subtitle, there are trade-offs.

What's the catch?

It is arguable that increasing the responsibilities for init, historically a very simple daemon, is a dangerous thing to do.

I believe these changes have been warranted, since the traditional UNIX process model assumes processes are well-written and benign.

Security updates

To accommodate the changing world, init is now sufficiently complicated that it requires security updates.

This is a problem because you can only have one init process, so you can't just kill the old version and start a new one.

systemd has to work around this by re-executing /sbin/init when it, or any of its dependent libraries, have been updated.

This mechanism should not be relied upon: it can fail, and if it does, recovery requires a reboot. If you need to be prepared to reboot on update anyway, why not just reboot the system whenever an update is required?

Rebooting woes

Rebooting is also further complicated by init being extended.

If a library that a process depends on is removed as part of an update then the running process may keep a copy of it open until the process re-executes or terminates.

This means file systems will refuse to be remounted as read-only until the process stops using certain files. This is hugely problematic if the filesystem is the root file system and the process is init, since init will want to remount the file system before terminating and the file system will want init to terminate before remounting.

Previously the approach would be to shut down without remounting the file system read-only, but this doesn't cleanly unmount the file system and so was a source of file system corruption.

The solution to this employed by systemd is for the init process to execute a systemd-shutdown binary.

So why not move the complicated bits out of PID 1?

PID 1 is complex, and this is a problem. Therefore either systemd's developers don't consider the problems important or there are good reasons why it can't be otherwise.

So, what responsibilities does PID 1 have, and why do they have to be in PID 1?

Process reaping

When a process terminates before reaping its child processes, all those children are adopted by PID 1, which is then responsible for reaping them.

PR_SET_CHILD_SUBREAPER was added to prctl(2), which allows a process elsewhere in the hierarchy to mark itself as a subreaper, so that it gets to reap orphaned subprocesses instead of PID 1.

However PID 1 still needs to be able to reap the subreapers themselves, so PID 1 needs the same reaping logic, and the two implementations need to be either shared or maintained separately, at which point it's less difficult to just rely on PID 1 doing it.
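As a rough illustration of the subreaper mechanism itself (not of anything systemd does internally), a process can mark itself as a subreaper from Python with a direct prctl(2) call via ctypes; the constant 36 is PR_SET_CHILD_SUBREAPER from linux/prctl.h:

    import ctypes
    import os
    import signal

    PR_SET_CHILD_SUBREAPER = 36  # value from <linux/prctl.h>

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    # Orphaned descendants are now re-parented to us instead of to PID 1.
    if libc.prctl(PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

    # Reap whatever gets re-parented to us as it exits.
    def reap_children(signum, frame):
        while True:
            try:
                pid, _status = os.waitpid(-1, os.WNOHANG)
            except ChildProcessError:
                return  # no children left at all
            if pid == 0:
                return  # children exist but none have exited yet

    signal.signal(signal.SIGCHLD, reap_children)

The reaping loop at the end is the same logic every subreaper needs, which is exactly the duplication described above.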

Traditional init systems perform this function, so it is not controversial for systemd to perform this.

Spawning processes

There are no special requirements for spawning subprocesses, so a separate process could be started to do the spawning.

Unfortunately this has the same bootstrapping problem, where PID 1 needs the same logic for starting its helper as is needed for starting arbitrary code in the rest of the system.

Traditional init systems perform this function, so it is not controversial for systemd to perform this.

Managing cgroups

Because processes can't be trusted to not escape, cgroups are required to contain them.

A single process is required to manage them.

If services started by init are to be contained by cgroups, then the cgroup management service must either be the init process or must be started by the init process and have special logic to contain itself first.

This is tractable, but if it's a separate process, then some form of IPC is required, which adds extra latency, complexity and points of failure.

A similar concern exists in the form of journald, which is a separate service that systemd needs to communicate with to get it to log the output of new services to a file.

This complexity already causes systemd trouble, as a crashing journald can bring the whole system to a halt, so similar complications should be avoided.

Communicating via DBus

The init process needs some form of IPC to instruct it to do things.

Historically this was just telinit writing to the /dev/initctl FIFO, so was a pretty trivial form of IPC.

However we've established that init now requires new responsibilities, so requires a much richer form of IPC.

Rather than inventing some bespoke IPC mechanism, systemd uses DBus.

systemd also participates in the system bus, once the DBus daemon has been started, which adds extra complexity since the DBus daemon is started by systemd.

This is handled by systemd also speaking point-to-point DBus. Attempts have been made to move DBus into the kernel, in the form of AF_BUS, kdbus and most recently bus1, and there has also been discussion of whether systemd itself should be a DBus daemon, to break this circular dependency.

Summary

The traditional UNIX process model wasn't designed to support a complex init, because it assumed that programs would be benign and well written.

Because you can't trust processes to clean up after themselves properly, you need to make init more complicated to cope with it.

Because init is complicated it needs to be able to be updated.

Because the UNIX process model doesn't have a way to safely replace init, you have to allow for it failing and needing a reboot, so you can't safely perform live updates.

Alternative ways of structuring init would make it even more complex, giving more opportunity for things to go wrong.

Posted Wed May 31 12:00:05 2017

Today's posting is by a guest author, Jon Dowland.


I wanted to refresh my Haskell skills recently so I picked up an old idea I'd had of writing a successor to WadC in Haskell. WadC is a LOGO-like language for making 2D drawings (Doom maps). You write a sequence of instructions that move a "pen" around and draw lines, place things, etc. It looks a bit like this:

...
movestep(32,32)
thing
straight(256)
rotright
straight(512)
rotright
...

The driving force behind writing a successor to WadC is to work around one of its most serious limitations: you do not have direct access to the data structure that you are building. Once you place a line, it's defined and you can't change it, or even query the existing drawn lines to find out whether one exists. The appeal of using Haskell would be to give you access to the data structure that's "behind the scenes" as well as Haskell's rich features and libraries.

WadC's syntax, like LOGO, resembles an ordered sequence of operations that manipulate some hidden state relating to the pen: its orientation, whether it's on the "paper" or not, what colour line is being drawn, etc. I thought this would look particularly nice in Haskell's "do notation" syntax, e.g.:

blah = do
    down
    straight 64
    rotright
    straight 128
    rotright
...

Ignoring the syntax issue for a minute and thinking just about types, the most natural type to me for the atomic operations would be something like Context -> Context: simply put, each function would receive the current state of the world, and return a new state. For example Context could be something like

data Context = Context { location :: Point          -- (x,y)
                       , orientation :: Orientation -- N/E/S/W
                       , linedefs :: [Line]         -- Line = from/to (x,y)/(x,y)
                       , penstate :: PenState       -- up or down
                       ...

An example of a simple low-level function that would work with this:

    down c = c { penstate = Down }

The immediate advantage here over WadC is that the functions have access to all of the existing context: they could not only append lines, say, but replace any existing lines too. I was envisaging that you might, for example, adjust all drawn lines as if you were using a completely different geometric model to get some weird effects (such as one axis being warped towards a single point). Or perhaps you would super-impose two separate drawings and have fine control over how overlapping regions have their properties combined (one overrides the other, or blend them, etc.)

The problem is in uniting the simple types with using do-notation, which requires the routines to be "wrapped up" in Monads. I got a prototype working by writing a custom Monad, but it required me to either modify the type of each of the low-level functions, or wrap each invocation of them inside the do-block. If you imagine the wrap function is called wadL, that means either

down = wadL down' where
    down' c = c { penstate = Down }

or

blah = do
    wadL $ down
    wadL $ straight 64
    wadL $ rotright

Both approaches are pretty ugly. The former gets ugly fast when the functions concerned are more complicated in the first place; the latter pretty much throws away the niceness of using do-notation at all.

An alternative solution is one that the Diagrams package uses: define a new infix operator which is just function composition (.) backwards:

down        &
straight 64 &
rotright

(Diagrams uses # but I prefer &)

By playing around with this I've achieved my original goal of refreshing my Haskell skills. Unfortunately I don't have the time or inclination to continue with this side-project right now so I haven't published a working Haskell-powered Doom map editor.

If I return to this again, I might explore writing a completely distinct language (like WadC is) with the compiler/interpreter likely written using Haskell.

Posted Wed May 24 12:00:06 2017
Daniel Silverstone Whiteboarding your project

Recently I spent time with our very own Lars Wirzenius, in Helsinki. While there, I had the distinct pleasure of standing in front of a whiteboard with Lars and hashing out aspects of two different Free Software projects which we are both involved with. We have talked about IRC in the past and we've talked about being gracious when dealing with your community. In general we've only really spoken about online interactions with your project (excepting when we spoke of conferences), but today I'd like to introduce you to the inestimable value of whiteboarding with your collaborators.

While the value of having written discussion is very high, a log of what was said by whom and in response to what is very helpful, and nicely written design documentation can be critical to a project's success; the incredible bandwidth of face-to-face discussions is hard to beat. Conferences are, as previously discussed, one of the ways to achieve this. But for the raw base design work for a project, whiteboarding on your feet (or your rear-end I suppose) is one of the best ways I find to get the ideas flowing.

Whiteboarding is best done in a small group. Ideally two to four people. You need to have a room to yourselves because you will be making a lot of noise; talking over each other as the ideas pop up; and generally making a lot of mess on the whiteboard. If you can have multiple whiteboards in one room then larger projects can be easier to deal with. I have successfully whiteboarded with a fold-up whiteboard on a café table, though that was a lot of work.

The second thing that you really must have for whiteboarding is a camera. You might get lucky and have a "smart" whiteboard which you can extract an image from, though I generally find those to be less effective than a quick snap from a camera phone before wiping the whiteboard down.

Don't be afraid to write stuff up on the board which you're going to immediately say is wrong, because seeing it written up can spark ideas in others. Whiteboards give you the ability to sketch out diagrams as well as APIs with equal ease - I find that sometimes the best way to work out how a state machine works is to draw it up, despite the fact that I am almost entirely incapable of thinking in images at all.

Once you've done your design work and come to agreements, ensure that photographs of your whiteboards are circulated among the attendees. At this point the most important step of all happens -- you all should write up what was discussed and the decisions which were arrived at. Then you can all review each others' memories of what happened and come to a consensus as to the final decisions as a group. Interpreting the whiteboards after the fact gives the attendees further chance to jog an idea which may have been fermenting in their minds since the session.

I hope that you'll take this on board and try out whiteboarding with one of your projects just as soon as you can. However, since it can be hard to just magic up a whiteboard, I won't set you homework this week. Enjoy your respite.

Posted Wed May 17 12:00:06 2017

We previously discussed the traditional UNIX mechanisms for service management, and how they assumed benign and well written software.

Fortunately Linux provides more than just traditional UNIX system calls, so offers some features that can be used to track processes more completely.

Intercepting processes with ptrace(2)

If you could run some code when a process creates a subprocess or exits then you could use this to track which processes are active and where they came from.

Debuggers like gdb(1) also need to know this information since you might want to set a breakpoint for subprocesses too.

So it would be possible to do this using the same mechanism as debuggers.

This is what Upstart does to work out which process to track for double-forking daemons.

Unfortunately a process cannot be traced by multiple processes, so if Upstart is tracing a process to track its subprocesses then a debugger cannot be attached to the process.

In Upstart's case, it detaches the debugger after it has worked out the main PID, so there is only a small window during which the process is undebuggable, and thus it's only a problem for debugging faults during startup; but detaching after the double-fork means it can't trace any further subprocesses.

Continuing to trace subprocesses adds a noticeable performance impact though, so it's for the best that it stops tracing after the double-fork.

Store process in a cgroup

cgroups are a Linux virtual filesystem that lets you create hierarchies to organise processes, and apply resource controls at each level.

cgroups were created to address a deficiency of traditional UNIX resource control system calls such as setrlimit(2), which only apply to a single process and can be thwarted by creating subprocesses, since, while a process inherits the limits of its parent process, it does not share them with it.

Subprocesses of a process in a cgroup on the other hand are part of the same cgroup and share the same resource limits.

In each cgroup directory there is a cgroup.procs virtual file, which lists the process IDs of every process in the cgroup, making it effectively a kernel-maintained PIDfile.

This is what systemd uses for its services, and you can request a cgroup for your own processes by asking systemd (via systemd-run(1) or the DBus interface) or cgmanager (via cgm(1) or the DBus interface) to do so on your behalf.
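As a rough sketch of the systemd-run route (the unit name, command, and cgroup path below are invented for the example, and the exact path depends on which cgroup hierarchy your system uses):

    import subprocess

    # Ask systemd to put the command in its own transient unit (and hence
    # its own cgroup), rather than trying to manage cgroups ourselves.
    subprocess.run(
        ["systemd-run", "--unit=example-worker", "/usr/bin/example-worker"],
        check=True,
    )

    # The unit's cgroup.procs file is effectively a kernel-maintained PIDfile.
    procs = "/sys/fs/cgroup/system.slice/example-worker.service/cgroup.procs"
    with open(procs) as f:
        pids = [int(line) for line in f if line.strip()]

    print("processes in example-worker.service:", pids)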

Why can't I mount my own cgroupfs?

Unfortunately you can only safely have one process using a cgroup tree at a time, and you can only have one cgroupfs mounted at a time, so you always need to ask some daemon to manage cgroups on your behalf.

See Changes coming for systemd and control groups for why a single writer and a single hierarchy are required.

Conclusion

It is necessary to track all the subprocesses of a service somehow. Using ptrace(2) prevents it being used for debugging; cgroups are an interface designed for this purpose, but technical limitations mean you need to ask another service to manage them for you.

So I would recommend writing a systemd service if your processes are a per-system or per-user service, or to use the DBus API to create cgroups if not.

Thus cgroups allow us to know our processes are running, and currently the best way to use cgroups is via systemd. The implications of relying on systemd to do this are best left as the subject of another article.

If you are interested in learning more about cgroups, I recommend reading Neil Brown's excellent series on LWN.

Posted Wed May 10 12:00:06 2017

…or how to take a bad analogy far too far; without any goal in sight.

Sometimes you just need a break from all the bits and bytes; all the integers and floats and strings. Sometimes you just want to step away from the keyboard and do something completely different, such as bake a cake. There's nothing so different from writing software as baking a cake, right?

Right?

Wrong!

When you cook anything, you're essentially executing a program. If you're the sort of person who follows the recipe slavishly then you're executing a defined algorithm for turning ingredients into something else; if you're the sort of person who thinks recipes are essentially a guideline, then you're still running a program in your head, but it's more of a heuristic than an algorithm.

In the process of running this program, you use tools, process inputs, and produce outputs. You even likely have a test suite (albeit a small, simplistic, and likely poorly defined one) to ensure that you produce the right outputs: namely, simply tasting things.

What's even worse (assuming you were trying to get away from producing software rather than simply wanting dinner) is that as you go, even if you weren't expecting to, you are likely debugging and bugfixing your program as you learn more about how to transform your inputs into outputs effectively and efficiently.

Okay, so if programming and cooking are just more of the same, then why this article at all?

In brief, everyone needs something to do when the code just doesn't flow. I sometimes switch to reading a book, watching videos online, or going for a walk; but sometimes cooking is exactly the right thing to do. At least a few other programmers must agree, since there are any number of programmer-oriented cooking-related tools out there, a YAML syntax or three for defining recipes, and some desktop environments which shall remain unnamed even consider recipe tools to be worth mentioning in their release notes :-)

All this applies to those of us who slavishly try and write articles for you to enjoy every week. As such, sometimes we just need to have fun. Thus, your homework this week, assuming you're not allergic to any of the following, is to:

  1. Take two eggs and weigh them (shell on)
  2. Sift the same mass of self-raising flour and set aside.
  3. Melt the same mass of butter and beat the eggs (shell off), butter, and the same mass of sugar together.
  4. Gently fold the flour into the wet mix, trying to not knock the air back out of it. (Or just use an electric whisk to keep it floofy)
  5. Divide the mix between two pre-greased and floured cake tins, and bake in the centre of a pre-heated oven at (approximately) 170°C for 20-25 minutes (check the centre with a skewer)
  6. Allow the cakes to stand for 10 minutes before turning out onto a wire rack to cool thoroughly.
  7. Beat 150g to 250g of butter until smooth and soft (I prefer slightly salted butter, but unsalted is fine too)
  8. Mix icing sugar into the butter, a spoonful at a time, beating it until smooth each time. Repeat until the icing tastes sweet rather than savoury.
  9. Spread a small bit of icing on the centre of a plate, and use that to stick one of your sponges down. Ice the top of the sponge and then pop the second layer on top. If you have enough icing left, smear it all over the cake, remembering that presentation is far less important than deliciousness.

Now serve slices of your delicious cake with cups of delicious tea or coffee.

Also, please do share your favourite recipes in the comments below, before you take your coffee and cake and get on with coding up your latest masterpiece.

Posted Wed May 3 12:00:06 2017

We previously discussed issues with using PIDfiles.

One issue we encountered was that we need a way to handle multiple processes.

Process groups

If you've ever started a program in the background in a shell you might have noticed it gave you a "Job ID" to refer to it rather than a process ID.

This is not just to give you a memorable number for each task, but because jobs may contain multiple processes, which is how a pipeline of multiple processes may be a single job.

This is accomplished in Linux and traditional UNIXes with the setpgrp(2) system call, which assigns a new process group to a process; the new group is then inherited by its subprocesses.

This entire process group may be killed by passing the negation of the process group ID to the kill(2) system call.

A process may only be part of one process group though, so if you have processes that may call setpgrp(2) themselves then it is not possible to use process groups to manage terminating a whole process tree of a service.
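Before moving on, here is a small sketch of the basic mechanism in Python, using the standard library wrappers around these calls (the command being run is just an example):

    import os
    import signal
    import subprocess
    import time

    # Put the child (and therefore its own children) into a new process group.
    job = subprocess.Popen(["sh", "-c", "sleep 60 & sleep 60 & wait"],
                           preexec_fn=os.setpgrp)

    time.sleep(1)  # give the shell a moment to start its background jobs

    # Signalling the negated process group ID reaches the whole group;
    # os.killpg(pgid, sig) is the library spelling of kill(-pgid, sig).
    pgid = os.getpgid(job.pid)
    os.killpg(pgid, signal.SIGTERM)

If any of those children called setpgrp(2) themselves, they would escape this signal, which is exactly the limitation just described.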

UNIX Sessions

You may be wondering how anything can possibly work if you can't use process groups to track a user's processes.

The answer to this is that UNIX has a concept of sessions.

Every process is a part of a session, and each session has a "controlling TTY", which can be accessed via /dev/tty.

When a process creates a new session with setsid(2) it becomes the session leader.

If the session leader process is terminated then the entire session receives the SIGHUP signal, which by default terminates the process.

The controlling TTY was traditionally a virtual terminal, which emulates the old teletype terminals on modern computers. Terminal windows in graphical interfaces use pseudo terminals, which make it possible to use sessions for grouping processes that aren't attached to a physical device.

This is typically done by getty and login(1), a terminal emulator, or sshd, which also update utmp(5) to include the controlling TTY and session ID, in order to track the currently active sessions.

There are a number of issues with using UNIX sessions for tracking processes.

  1. utmp(5) is an awkward interface, requiring multiple processes to access a single file without trampling over each other, which means file range locking, and that can't be done portably and in a thread-safe manner.

    I consider this to be analogous to /etc/mtab, which was an old, manually maintained file, which had to be replaced with a more reliable, kernel-provided interface.

  2. setsid(2) describes sessions and process groups as a strict two-level hierarchy.

    The implication of this is that any process can escape with setsid(2), so bypassing utmp and inspecting the contents of the sessionid file in /proc won't work.

  3. Escaping session cleanup is by necessity a well-documented procedure, since traditional daemons are started by detaching from the current session rather than asking the init process to start the daemon.

    See nohup(1) for details about how to escape session cleanup; a rough sketch of the detaching dance follows this list.
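For reference, the traditional detaching dance looks roughly like this in Python (a deliberately bare sketch; real daemons also redirect their standard streams, change directory, reset the umask, and so on):

    import os
    import sys

    def detach():
        # Escape the caller's session in the traditional double-fork style.
        if os.fork() > 0:
            sys.exit(0)   # the parent returns to the shell
        os.setsid()       # become the leader of a brand new session
        if os.fork() > 0:
            sys.exit(0)   # the session leader exits; the grandchild can
                          # never reacquire a controlling terminal
        # From here on, the init system has no easy way to find this
        # process, which is exactly the problem described above.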

Conclusion

The traditional UNIX system calls came from a time when it was believed you could trust programs to be well written and benign.

We do not live in this world, so we need a better approach to track which processes we run on our computers, which we will discuss in a future article.

Posted Wed Apr 26 12:00:07 2017
Daniel Silverstone Find your motivation

A while ago I wrote about ensuring that you know why you're writing something in order that you keep focussed on that goal while you code. My focus at that point was on the specific project you were looking to undertake, but it may behoove us to look at the wider picture from time to time.

What motivates you?

It's important that you can answer this question, ideally without hesitation or backtracking. For each of us the answer will be different, and no one's answer is any less "right" than anyone else's. For myself, it took several years to be in a position to answer the question confidently, quickly, and consistently. That's not to say that my answer won't change in the future, but at least for now I know what motivates me and how that manifests in my day-to-day hacking.

I have had a word with Richard and he has explained his motivation to me, and so for your perusal and criticism, here's what motivates us both.

Daniel

For me, the primary motivation for writing free software is that I enjoy making it possible for other people to achieve things. I am, as it were, an "enabler" or "facilitator". This manifests itself in an interest in processes, meta-programming, and tooling. I find myself writing libraries, services, and test tooling; and I enjoy reading papers and architecture designs, thinking of new ways to solve old problems, and novel problems to solve. (And of course, I write articles on this here blog :-) )

Richard

My motivation in general is to learn something such that it can be applied to something which in some way may be construed as being to the betterment of society. Or indeed to those things which may improve society directly. In the free-software world, this has manifested in the topic of reliability and also in freeing people from vendor lock-in.

(* note, I kinda paraphrased what Richard said)

Homework

You didn't think I'd let you get away with no homework this week did you? Hah! I'd like you to sit down, consider your motivation in the free software world and a few ways in which that manifests into projects you work on or with. If you're feeling super-enthusiastic about it, why not post a comment on this post and share your motivation with the rest of us?

Posted Wed Apr 19 11:24:16 2017