Sometimes you need to do something asynchronously. Of course, the standard .NET asynchronous programming pattern using Begin/End methods is often the right choice, but sometimes a running thread must be instructed to do something and return a result in response. Consider the following simplified scenario:

The main thread starts a thread that waits for the sProvidedDataAvailable variable to be set. The main thread then initializes the sProvidedData variable and tells the started thread that there is data to process by setting sProvidedDataAvailable. The started thread then grabs the provided value, multiplies it by two and assigns the result to the sProducedData variable. After doing so it sets the sProducedDataAvailable variable to notify the main thread that the result is ready.

The flow of execution is quite simple, but there is a pitfall. I’ve seen it a couple of times in different guises during code reviews. The compiler tries to optimize the code and might reorder read/write operations to make better use of the available resources: read operations may be executed earlier and write operations may be delayed. The compiler is not able to detect dependencies between read/write operations that are executed by different threads. It only applies optimizations that make the code run faster without changing the behavior of the code when executed by a single thread. The compiler cannot consider issues induced by multiple threads executing the code in parallel!

In the example above the compiler might set the sProvidedDataAvailable/sProducedDataAvailable flag before writing the sProvidedData/sProducedData variable. This can cause unexpected behavior, since a thread may read stale data. The problem can be fixed simply by adding the volatile modifier to the variables indicating completion.
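A minimal sketch of the whole exchange with the fix applied. I’m using Java here, whose volatile modifier has comparable acquire/release semantics to the .NET one discussed in this post; the class name and the exchange method are mine, only the s-prefixed field names come from the description above:

```java
public class DataExchange {
    // The data fields themselves stay plain...
    static int sProvidedData;
    static int sProducedData;
    // ...only the completion flags need the volatile modifier, so a
    // flag can never be observed before the data it guards is written.
    static volatile boolean sProvidedDataAvailable;
    static volatile boolean sProducedDataAvailable;

    static int exchange(int value) {
        Thread worker = new Thread(() -> {
            while (!sProvidedDataAvailable) { /* spin until data arrives */ }
            sProducedData = sProvidedData * 2; // grab the value, double it
            sProducedDataAvailable = true;     // then publish the result
        });
        worker.start();
        sProvidedData = value;          // write the data first...
        sProvidedDataAvailable = true;  // ...then set the flag
        while (!sProducedDataAvailable) { /* spin until the result is ready */ }
        return sProducedData;
    }

    public static void main(String[] args) {
        System.out.println(exchange(21)); // prints 42
    }
}
```

Without the volatile modifier on the two flags, either spin loop could run forever (the flag read may be hoisted out of the loop) or read stale data.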

But the compiler is not the only one that is able to reorder instructions. The processor may do so as well in order to make optimal use of its resources. That is why the volatile modifier does not only tell the compiler not to apply such optimizations, but also inserts additional instructions (memory barriers) that force the processor to keep the order of reads/writes. A volatile read has acquire semantics: a barrier keeps subsequent operations from moving before it. A volatile write has release semantics: a barrier keeps preceding operations from moving after it.

Declaring a variable volatile only ensures that the variable is read/written exactly where the code says so. It does not introduce any kind of synchronization. If you try to declare a double variable as volatile, the compiler will refuse, since it cannot ensure that a 64-bit value is read/written atomically on a 32-bit system.

Increasing server security with containers (chrooted environments)


Usually every server administrator is keen on increasing the security of their system. One of the simplest ways to increase overall system security is to isolate different services from each other, which reduces the damage an attacker can cause if they manage to compromise a service.

Of course, the easiest way is to buy new hardware, install a fresh operating system and be happy. But this approach has several drawbacks: new hardware costs a lot of money while adding another point of failure, and multiple servers have a higher power consumption while most likely not using their hardware efficiently. A more efficient way is to virtualize the servers and run several virtual servers on the same hardware. This way the available hardware is used efficiently, since all servers really share the same CPU, RAM and HDDs. Virtualization can be done in a variety of ways. The most efficient one on Linux is operating-system-level (software) virtualization: compared to traditional hardware virtualization it adds only little overhead, while hardware virtualization can eat a lot of resources. Software virtualization on Linux is a cost-effective way many providers use to offer cheap virtual servers to their customers.

If you own real server hardware you can install a virtualization platform like OpenVZ, create several containers and let your services run within the containers. This is both effective and secure. One container for handling mail, one for providing web pages, etc.

If you do not own real server hardware, but one of the cheap virtual servers we were just talking about, then OpenVZ is not a good idea: you would be running an OpenVZ container inside a container that is itself hosted by another OpenVZ instance. That doesn’t sound very attractive, does it? A way to create an isolated execution environment on such a virtual server is to install a minimal Linux in a chrooted environment. This is not as secure as a virtual server or even a real server, since it is not impossible to break out of a chrooted installation, but all known ways to break out require root permissions. If you do not give your services root permissions, your server is quite secure, at least in this regard.

This article is about how to install Debian Linux 6.0 (codename squeeze) on top of another Debian Linux installation to create a complete Linux installation in an isolated environment. Although I’m using Debian 6.0 as the host operating system here, the same instructions should also work for Ubuntu, as Ubuntu uses the same package management system.

Preparing the Host System


First of all you need to install the debootstrap package, which is needed to create a minimal Debian installation. This can be done easily using the Debian package manager:
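For example (the package name is the same on Debian and Ubuntu):

```shell
apt-get install debootstrap
```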

Create a directory that will keep the containers and one for the first container below it. Let’s assume the containers will be stored in /containers and the first container (let’s call it debian for simplicity) will be installed in the /containers/debian folder:
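For example:

```shell
mkdir -p /containers/debian
```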

Create basic environment

Now you can start installing a basic Debian system using debootstrap. Depending on the architecture (32/64 bit) you want to use within the chrooted environment, you have the choice to install Debian for the i386 or the amd64 platform. If you omit the --arch parameter debootstrap will use the same architecture as used by the host system.

For a 32bit operating system use:
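A sketch using the paths from this article (the mirror URL is just an example, pick one close to you):

```shell
debootstrap --arch i386 squeeze /containers/debian http://ftp.debian.org/debian
```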

For a 64bit operating system use:
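Again with an example mirror URL:

```shell
debootstrap --arch amd64 squeeze /containers/debian http://ftp.debian.org/debian
```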

Configuration of the installed environment

Now it’s time to enter the freshly installed Linux:
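For example:

```shell
chroot /containers/debian /bin/bash
```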

Check the settings in /etc/apt/sources.list now and adapt it to your needs, if necessary. By default the list only contains the Debian stable packages, but neither security updates nor packages that are updated between Debian releases. So it’s a good idea to change the file to the following:
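A possible sources.list for squeeze covering stable packages, security updates and the squeeze-updates archive (mirror URLs are examples, choose mirrors close to you):

```
deb http://ftp.debian.org/debian squeeze main
deb http://security.debian.org/ squeeze/updates main
deb http://ftp.debian.org/debian squeeze-updates main
```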

Update the package index files to synchronize your local package index with the archives specified in sources.list:
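For example:

```shell
apt-get update
```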

Edit /etc/mtab to enable file system tools like df and mount to work properly. This pretends that a root filesystem as well as the proc and sysfs filesystems are mounted.
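A sketch of such an /etc/mtab (the root entry is a placeholder; device and filesystem type will differ on your system):

```
/dev/root / ext3 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
```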

Install and configure the locales package depending on the locale you intend to use:
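For example (which locales to generate is selected interactively):

```shell
apt-get install locales
dpkg-reconfigure locales
```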

Configure the timezone used within the chrooted environment. You can manipulate /etc/timezone and /etc/localtime manually or more easily use the following:
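For example:

```shell
dpkg-reconfigure tzdata
```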

At this point the installation contains a very basic set of packages necessary to run Debian. Since we intend to use it in a container on top of an already running system, we neither need to install a kernel nor a bootloader.

Things that make life easier

Changing Command Prompt

Sometimes you will wonder whether you are working on your “real” system or within the container. A simple, but effective countermeasure is to change the command prompt as soon as you enter the chrooted installation. Assuming that you administrate the container as root you can edit /root/.bashrc and add the following line:
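For example (the prompt text is just a suggestion):

```shell
export PS1="(container) \u@\h:\w\$ "
```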

Just leave the container now and enter it again to see if it works:
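For example:

```shell
exit
chroot /containers/debian /bin/bash
```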

You should see the changed command prompt now.


If you’re a real purist, then the command shell is probably your best friend, but most people will appreciate some tools that make life easier. I would recommend installing at least the Midnight Commander, a quite powerful file manager.
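Inside the container:

```shell
apt-get install mc
```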

Installing and configuring services

Now it’s time to install the services that should run within the container. What exactly you want to install is, of course, up to you :-)

Integrating the container into the startup procedure

You most likely want to start the services in the container at the time your server is starting up. The startup procedure of a container is something like the following:

  • Bind the /dev folder of the host operating system into the container
  • Mount the proc filesystem in the container
  • Mount the sysfs filesystem in the container
  • Chroot into the container and start installed services

Let’s write two small scripts to accomplish these tasks. The first script goes to /usr/local/sbin/start-container on your host system (not the chrooted environment):
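A sketch of such a script, using the paths from this article:

```shell
#!/bin/sh
# /usr/local/sbin/start-container: bind the host's /dev into the
# container, mount proc and sysfs, then chroot and run the
# container's init script.
CONTAINER=/containers/debian

mount --bind /dev "$CONTAINER/dev"
mount -t proc proc "$CONTAINER/proc"
mount -t sysfs sysfs "$CONTAINER/sys"

chroot "$CONTAINER" /etc/init.d/container start
```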

The second script has to be put directly into the container. In our case we put it into /containers/debian/etc/init.d/container. This script calls the init scripts needed to get the container up and running:
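A sketch; the services listed here are placeholders, replace them with the init scripts of whatever you installed in the container:

```shell
#!/bin/sh
# /etc/init.d/container: called from the host's start-container
# script with "start"; forwards the action to each service.
/etc/init.d/rsyslog "$1"
/etc/init.d/cron "$1"
```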

Make the scripts executable:
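On the host system:

```shell
chmod +x /usr/local/sbin/start-container
chmod +x /containers/debian/etc/init.d/container
```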

To start the container when the host system is starting up, add the following to your /etc/rc.local:
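A single line before the final exit 0 is enough:

```shell
/usr/local/sbin/start-container
```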

Now everything should be done to get the container up and running automatically after startup.