You don't often think about how your computer starts up or shuts down, until something goes wrong, and then it becomes important.
Given how well standardised the PC boot sequence is, and how widely used computers that follow it are, I am going to describe the sequence from that perspective. Other systems follow roughly the same structure, but may differ in important details.
BIOS/EFI
When your computer starts, either after being powered on or after being restarted, it executes a program stored on the motherboard. This is the BIOS or, if your computer is only a few years old, the EFI program.
This is responsible for performing some basic hardware initialisation so it may then load a more general purpose program.
This is generally a program stored at the beginning of your "first" hard-disk, though GPT-partitioned disks are increasingly common and allow the program to be stored elsewhere, and the BIOS/EFI usually provides a way to select a device other than the "first" hard-disk.
This first program is usually the bootloader, but sometimes this step is skipped and the Operating System's kernel is loaded directly.
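On a BIOS/MBR system you can inspect that first program yourself: the boot code lives in the first 512-byte sector of the disk, and the last two bytes of that sector are the 0x55 0xAA boot signature. This is a read-only peek, but it assumes your first disk really is /dev/sda, so adjust the device name to match your machine.
$ sudo dd if=/dev/sda bs=512 count=1 2>/dev/null | od -A x -t x1 | tail -n 2
If a boot sector is present, the last data line of the dump ends in 55 aa.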
Bootloader
The bootloader is responsible for discovering and loading the next stage, while being small enough, quick enough and smart enough to do so according to the user's wishes.
Bootloaders often need to re-discover what the appropriate boot target is, but this gives you the flexibility of having your operating system stored somewhere more complicated to access than the BIOS/EFI can cope with.
The bootloader must, either by a clever heuristic or sufficient user configuration, locate and load the kernel of your operating system.
At this point it must also locate and load other code: typically the initramfs and the device tree. Both are optional in certain circumstances; when running Linux on a PC, the initramfs is almost always used and the device tree is almost never used.
Finally, the bootloader must set up registers pointing to the configuration before starting the kernel. This configuration usually points to the initramfs or device tree, and it also includes the kernel command-line.
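To make that concrete, on a PC booting with GRUB all three of these are named in a single menu entry; the file names and the root= value below are hypothetical placeholders rather than values from a real installation.
menuentry 'Linux' {
    linux  /vmlinuz-linux root=UUID=PLACEHOLDER ro quiet
    initrd /initramfs-linux.img
}
Everything after the kernel image name on the linux line becomes the kernel command-line, and the initrd line tells GRUB where to load the initramfs from.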
Kernel initialisation
At this point the Linux kernel sets up the essential hardware and tries to work out where the root file system containing its userland can be found.
If the rootfs is stored somewhere easy to recognise, then this information can be passed on the kernel command-line, and there is no need to use an initramfs.
This is often not the case on PC hardware however, as the hardware topology is complicated, so an initramfs needs to be used to decide which storage to use as the rootfs.
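You can see exactly what your bootloader passed to the running kernel, including any root= option, by reading /proc/cmdline.
$ cat /proc/cmdline
The output varies from system to system, but a root= entry (often root=UUID=...) is what tells the kernel, or the initramfs, where the rootfs should come from.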
Early userland in the initramfs
Normally the initramfs is only responsible for finding the rootfs, but it is also possible to run your whole system out of the initramfs and never use a rootfs.
At this point you are running a functional Linux system, which can inspect the kernel command-line to locate and mount the rootfs.
On Linux desktop systems this usually means locating the partition which contains the rootfs by its UUID, but it also allows arbitrarily complicated storage setups, like encrypted storage spread across multiple devices, some of which may even be network-connected on a server that needs the user to provide an authentication key.
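To see what the initramfs has to choose between, you can list your block devices and their UUIDs; lsblk is part of util-linux so it should be available on most systems, though the output is of course machine-specific.
$ lsblk -o NAME,FSTYPE,UUID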
After the rootfs has been appropriately mounted, the initramfs pivots into the rootfs and executes the init binary, which for the purposes of this article is systemd.
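Stripped of all error handling, the end of an initramfs /init looks roughly like the following sketch; the UUID and the init path are hypothetical, and real initramfs generators such as dracut or initramfs-tools do considerably more work around these two steps.
# mount the rootfs found via the kernel command-line, then hand over to init
mount -o ro /dev/disk/by-uuid/PLACEHOLDER-UUID /sysroot
exec switch_root /sysroot /sbin/init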
Userland service initialisation
Here is where all the software that the user cares about is started. From the perspective of this article this is very boring, only existing to keep the user happy before they instruct the computer to shut down.
Still, other file systems are mounted here, so the user's files can be stored on a different hard-disk to that of the operating system, making it easier to back them up separately.
Shutting down is managed by an appropriately privileged client using some form of IPC to request that the init binary shut down, or by calling the reboot system call directly to skip ahead to kernel finalisation.
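With systemd that IPC is usually a D-Bus request, made for you by systemctl (possibly via logind); either of the following asks for the orderly shutdown described below rather than calling reboot directly.
$ systemctl poweroff
$ systemctl reboot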
Userland service shutdown
init requests that userland services shut themselves down nicely, but after a timeout period it mercilessly kills them.
File systems also need to be cleanly unmounted so that they may write their data back to disk in time.
Not every file system may be cleanly unmounted however, as init is running from one of them. Traditionally this is handled by remounting the rootfs as read-only and hoping for the best.
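What the shutdown logic effectively runs at this point is no more exotic than the following; note that the remount can fail if files are still open for writing, which is exactly the "hoping for the best" part.
mount -o remount,ro /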
systemd improved matters by executing a shutdown binary, which makes remounting as read-only more reliable, as it handles the case where the init binary was updated.
Traditionally, at this point the reboot system call is used to jump into kernel finalisation, but with systemd there is another option if /run/initramfs is populated.
Late userland in the shutdownramfs
A semi-recent development (2011) in the boot and shut-down story is that systemd added shutdownramfs support so that the root file system can be cleanly unmounted.
This works by the same pivot facility as we used to go from the initramfs into the rootfs, but instead of /init being run to set up the rootfs, /shutdown is expected to unmount it.
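You can check whether your distribution populates it; systemd looks for an executable /run/initramfs/shutdown before attempting the pivot, so a quick probe (whether the file exists depends on your initramfs generator) is:
$ test -x /run/initramfs/shutdown && echo shutdownramfs available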
After this userland is definitely finished, and kernel finalisation is entered with the reboot system call.
Kernel finalisation
Some hardware needs to be shut down nicely, rather than having the power yanked or another program attempting to make use of it, so at this point the kernel does the necessary tidying up to prevent your computer exploding.
After this is complete, your computer is either shut down or rebooted, depending on what was passed to the reboot system call.
Message queues, at their heart, are ways to get little packets of information (messages) from one place to another in some sort of order. Queues are often used to communicate between processes (and in some cases between systems). They vary from the simplest POSIX message queues to the much more powerful AMQP implementations such as RabbitMQ, Apache Apollo, or Windows Azure Service Bus. There is also the brokerless ØMQ, or its lightweight counterpart nanomsg, for those who like the AMQP behaviour but want something which doesn't need a broker.
The varying levels of complexity of message bus extend from simply passing messages in the order they were added from sender to receiver (like a socket might), through priority ordering, to the more complex PUBSUB, REQREP, or PIPELINE patterns. The pattern which works best will depend on your workload, and the appropriate complexity of solution will depend on your expected scalability needs and deployment methodology. Whatever your problem, and whatever implementation language you've chosen, there are bindings to such a wide variety of message queue implementations that, before you think about writing your own job queue, it is worth taking a look at one of those which already exist.
In the previous article on interfaces, we added a virtual ethernet pair with the following command.
$ sudo ip link add left type veth peer name right
We can inspect the addresses owned by these interfaces with the ip address command.
$ ip address show dev left
6: left: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 56:9e:e6:2d:34:e6 brd ff:ff:ff:ff:ff:ff
inet6 fe80::549e:e6ff:fe2d:34e6/64 scope link
valid_lft forever preferred_lft forever
This shows that we currently don't have an inet (IPv4) address, but we have the default inet6 (IPv6) link-local address.
We can add any address we want to this interface, but first we need to invent a subnet for the virtual ethernet device.
Subnets
The IPv4 address space is sub-divided into subnets: every address that starts with a specified prefix belongs to that subnet.
So the subnet 127.0.0.0/8 says that every address starting with 127. is in that subnet. The /8 says how long the prefix is, so that a prefix can contain a 0 component without it being ambiguous whether that 0 is just padding or a genuine 0 byte in the prefix. The number after the / is how many bits long the prefix is, starting from the left.
So the network 127.0.0.0/8 has 2^24 (16,777,216) addresses in it, from 127.0.0.0 to 127.255.255.255. It just happens that usually only 127.0.0.1 is allocated, and only on the loopback interface, but you can configure more addresses for your loopback interface than you will feasibly ever need.
You should only ever see the 127.0.0.0/8 subnet on your loopback interface, as it is reserved for this purpose, so that you can't confuse a global address for a local-only loopback address.
There are other such reserved ranges, such as the private subnet only ranges:
192.168.0.0/16
10.0.0.0/8
172.16.0.0/12
These ranges are reserved so that you can have private addresses for machines reachable only on your subnet.
These ranges can be subdivided as much as you want, to provide further subnetting.
For the rest of the examples here we are going to use the subnet 10.248.179.0/24. Please note that this subnet was selected randomly; check whether addresses in it are already allocated on your local network before using it.
For a more detailed explanation of subnets please read this article on subnets.
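If you would rather not do the prefix arithmetic in your head, Python's standard ipaddress module (assuming python3 is installed) will do it for you; here it confirms the range and size of the subnet we are about to use.
$ python3 -c "import ipaddress; n = ipaddress.ip_network('10.248.179.0/24'); print(n.network_address, n.broadcast_address, n.num_addresses)"
10.248.179.0 10.248.179.255 256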
Assigning addresses
When you assign an address to an interface, you assign a given address in a given subnet. So 10.248.179.1 is not sufficient information to configure the address. You also need the prefix length, so if we want to assign 10.248.179.1 to the left virtual interface, we can use this command:
$ sudo ip address add 10.248.179.1/24 dev left
Now we can see that our left interface has this address.
$ ip address show dev left
6: left: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 56:9e:e6:2d:34:e6 brd ff:ff:ff:ff:ff:ff
inet 10.248.179.1/24 scope global left
valid_lft forever preferred_lft forever
inet6 fe80::549e:e6ff:fe2d:34e6/64 scope link
valid_lft forever preferred_lft forever
We can prove this address works with a pair of netcat commands.
$ nc -l 10.248.179.1 1234 >out.txt &
$ echo hello | nc 10.248.179.1 1234
$ cat out.txt
hello
Because left is paired with right, if we configure an address for the right end, we can send messages through the virtual ethernet link.
$ sudo ip address add 10.248.179.2/24 dev right
$ nc -l 10.248.179.1 1234 >out.txt &
$ echo hello | nc -s 10.248.179.2 10.248.179.1 1234
To make these addresses persist across a reboot, you can use networkd .network files.
$ sudo install -m644 /dev/stdin /etc/systemd/network/left.network <<'EOF'
[Match]
Name=left
[Network]
Address=10.248.179.1/24
EOF
$ sudo install -m644 /dev/stdin /etc/systemd/network/right.network <<'EOF'
[Match]
Name=right
[Network]
Address=10.248.179.2/24
EOF
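These files only take effect if systemd-networkd is actually running, which is not the default on every distribution; you may need to enable it first (and be aware that NetworkManager, which manages wlan0 in these examples, may also try to manage interfaces unless told otherwise).
$ sudo systemctl enable --now systemd-networkd
$ networkctl status left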
Routes
Because we configured this new address with an address prefix, we now have a route configured.
$ ip route
default via 192.168.1.254 dev wlan0 proto static
10.248.179.0/24 dev right proto kernel scope link src 10.248.179.2
10.248.179.0/24 dev left proto kernel scope link src 10.248.179.1
192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.94 metric 9
The first and last lines were added earlier by NetworkManager, but the ip address add commands added the two rules in the middle.
These routing rules are consulted when you need to make an IPv4 connection. Routing rules are selected most-specific first.
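You can also ask the kernel directly which rule it would use for a given destination; the address here is just an arbitrary host in our test subnet, and the reply (which names the outgoing device and source address) will differ between machines.
$ ip route get 10.248.179.42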
For our earlier call to nc -s 10.248.179.2 10.248.179.1 1234 we said we want to connect to 10.248.179.1 from 10.248.179.2. While there's a default rule for connecting everything through wlan0, there is a more specific rule for connections coming from 10.248.179.2 and connecting to an address in the 10.248.179.0/24 subnet, so the following rule matched:
10.248.179.0/24 dev right proto kernel scope link src 10.248.179.2
This means that the connection must go through the right interface. Since we bound a service to port 1234 at address 10.248.179.1, which is assigned to the left interface, the connection succeeded, as there was a service waiting for connections.
We can make this matching rule more generic by running:
$ sudo ip route change 10.248.179.0/24 dev right
So now all connections to addresses in 10.248.179.0/24 will go through the right interface.
We can make this configuration persist across reboots by replacing the .network file for our interface with this one:
$ sudo install -m644 /dev/stdin /etc/systemd/network/right.network <<'EOF'
[Match]
Name=right
[Network]
Address=10.248.179.2/24
[Route]
Destination=10.248.179.0/24
EOF
In our virtual set-up this doesn't visibly change anything, as we could connect to 10.248.179.1 without going through right, because it is one of our own addresses. However, if this were a real interface, this would allow us to connect to other machines on the network that had chosen addresses in the 10.248.179.0/24 subnet.
Homework
- Find two computers not connected to any networks.
- Plug them into each others' ethernet ports.
- Configure the ethernet interfaces with addresses on the same subnet so that they can talk to each other (one possible set of commands is sketched below).
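If you get stuck, here is a minimal sketch, assuming the wired interface is called eth0 on both machines and that nothing else is using 10.248.179.0/24. On the first machine:
$ sudo ip link set eth0 up
$ sudo ip address add 10.248.179.1/24 dev eth0
and on the second:
$ sudo ip link set eth0 up
$ sudo ip address add 10.248.179.2/24 dev eth0
after which ping 10.248.179.2 from the first machine should get replies.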
Sadly, there still exist a large number of people in the world who don't fully appreciate what Free/Libre software is all about. You're here because you're one of the lucky few who realise that it's important to know about these things. So today I'm going to try and arm you with some answers to fallacies that the poor deluded masses often cling to, regarding Free/Libre Open Source Software.
You get what you pay for, so Free Software must be crap
In today's high-consumption, low-cost society, we have a perception that paying more for something increases its worth to us. However, this is a very capitalist approach. Free Software introduces the dreaded 'c' word, as discussed in our article about project organisation, and that often muddies the waters, particularly for big business. The main way to combat this thinking is to shift the perception of the value of the software away from what you pay for it (nothing) and onto what you didn't have to pay for (potentially millions and millions of dollars, if you believe the COCOMO model).
A corollary to this fallacy is:
If it costs me nothing to get, then it must have cost nothing to create.
Again, this harks back to a capitalist belief that monetary cost is indicative of value. In this case the trick is to get people to understand that, at a very basic level, it costs essentially the same to develop software to give away as it does to develop software that you charge for. The gain is in the freedom to do what you want with the software, rather than in trying specifically to attract particular users.
Of course, this leads on to:
If I invested this huge amount of money, I need to charge for the software.
This one only tends to apply to companies creating free software, but it's useful to remember and think about since you can use the arguments here as backup for the other points. You only need to charge for the software if you cannot monetise the project in any other way. A number of large companies, such as RedHat, quite successfully sell consultancy around a fundamentally free product. Other companies simply produce free software as a side-effect of their day-to-day business in another market sector. Free Software can be an effective marketing tool for you, if you care enough.
If everyone can see the code, then it can hardly be secure.
Some people, sadly, still subscribe to the idea that security through obscurity is a good idea. However, the obvious and effective counter to this is that many eyes make bugs shallow.
Also, if you're arguing for opening software produced by your company then Linus's Law is also a handy carrot of the form "other people will help QA our software".
But, don't fall into this:
If I give it to you for free, you are duty bound to like it, use it, and help improve it.
A lot of companies think that opening their software will immediately spawn an effective, engaged, and enthusiastic community of geeks ready and willing to help at no cost. If only that were true, but sadly those of us who love Free Software do have lives which we need to lead too. Just as a patch supplied to a project can be considered a burden as well as a gift, so can software provided to a community.
It's all written for geeks, so it's not for me.
This certainly used to be the case. Linux on the desktop is almost a running joke; indeed PC World went as far as stating that 2015 is the year of Linux everywhere but the desktop. It's quite possible that the particular Microsoft-centric (or Apple-centric) workflows you have won't be directly transferable to Linux-based systems, but frankly most people just use a web browser, email, and write "Word documents", all of which is easily done with Free Software.
If I'm not paying for support, how can I expect it to be any good?
It is very much the case that if you're not paying for something you can't in any sense have a service level agreement in place. However it is not the case that you cannot get professional support for using Free Software. Above I mentioned RedHat, but there are plenty of other companies who will help you with your Free Software for money.
What if I need to sue somebody?
Liability is a very real problem in a large number of arenas and sometimes companies really do need to know who to sue if they get sued. In this instance some of the companies mentioned in the previous point will offer appropriate agreements and warranties on otherwise utterly unwarrantied software. Alternatively you should be saying to yourself: "If I need to ask who I get to sue, perhaps I should move to a yurt in a forest and live more simply".