Windows Server 2008 Remote Desktop access from Ubuntu client

I tried for weeks and weeks to connect to a Windows 2008 Server via remote desktop from my Xubuntu 12.04 laptop, to no avail. There were several steps involved, like setting up a VPN with a fixed IP, finding a remote desktop client and trying to connect to the server, so there were a lot of potential problem sources.

To cut straight to the point:
sudo aptitude install remmina
solved the problem.

The fixed-IP VPN was correctly configured, but the Ubuntu default remote desktop client rdesktop was refused by the 2008 server. What made the problem not so obvious was that the connection to another remote desktop at the same institution worked without problems, but that was an XP machine. No one was aware of this, and even the IT support suggested rdesktop.

On top of that, Remmina comes with a graphical user interface, and in Xubuntu an item in the system tray makes connecting to a once correctly configured remote desktop a one-click affair.

Hat tip to Jonathan Moeller and his The Ubuntu Beginner’s Guide.

A short howto on Remmina can be found online.


Data Backup in the AWS Cloud with rsync

After admitting that, of all things, Microsoft offers 25 GB of cloud storage to its Windows Live subscribers, I will walk through my latest preliminary experiments with backing up important data using Amazon Web Services (AWS). The storage is not free but quite cheap at around $0.10 per GB per month.

If you use Windows and MS Office a lot, use SkyDrive and don’t read on 😉 There are posts which describe how to map SkyDrive like a local hard disk using MS Word.

In the long run I would like to mount an EBS volume like a local file tree, probably using WebDAV, but this is my first successful preliminary solution. s3cmd does not work for me.

Under Ubuntu/Linux, rsync is a well-established, reliable and easy-to-use tool for keeping data in sync between locations. The following post marries rsync with an Elastic Compute Cloud (EC2) server instance for an hour or so. One has to set up the so-called rsync daemon and attach a persistent Elastic Block Store (EBS) volume.

Some of these steps deserve posts of their own, which I will link to later. There are some holes in this tutorial; only the direct configuration of the rsync daemon (including the small script at the end) is complete and working. I filled in some hints on how to get to this stage and will write follow-ups on the rest.

System Out provided a nice tutorial on how to run rsync in daemon mode on a server which listens for clients to sync their data.

Here is my version of it, with a short script at the end which should do the job.


Of course you need rsync on both machines (the server and the client); since both run Ubuntu, this is the case.

I will write another post on how to start the server; it is entirely possible and quite intuitive to do in the Amazon web interface. When the server is running and an extra EBS disk is attached, you have to connect to the server using ssh.

Mount the persistent drive

There are some posts about the advantages of the xfs filesystem, so I stuck with it. Alestic recommends it for all persistent EC2 cloud disks, and I trust they know what they are doing. But xfs is not included by default in the Ubuntu micro instance I use for my backups. That said, in the SSH shell:

sudo apt-get install -y xfsprogs
sudo modprobe xfs

If the backup volume is newly created then format it:
sudo mkfs.xfs /dev/xvdb
Note: only the first time! Otherwise you wipe your data, of course. Note also the device name: I attached the volume as /dev/sdb, but it showed up in the Ubuntu Oneiric i386 t1.micro instance as /dev/xvdb.

Now mount the volume
echo "/dev/xvdb /media/backup xfs noatime 0 0" | sudo tee -a /etc/fstab
sudo mkdir /media/backup
sudo mount /media/backup
sudo chown ubuntu:ubuntu /media/backup
sudo chmod 777 /media/backup
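Before trusting the mount, it is worth a quick check with mountpoint(1) and df. A minimal sketch, shown against the root filesystem so it runs anywhere; on the server you would substitute /media/backup:

```shell
# Verify that a path really is a mountpoint and show its free space.
# "/" is used here for illustration; use /media/backup on the EC2 instance.
mountpoint /
df -h / | tail -n 1
```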

Configuration files

On the server machine you need to set up a daemon that runs in the background and hosts the rsync service.

Before you start the daemon you need to create some rsync daemon configuration files in the /etc directory.

Three files are necessary:

  1. /etc/rsyncd.conf, the actual configuration file,
  2. /etc/rsyncd.motd, Message Of The Day file (the contents of this file will be displayed by the server when a client machine connects) and
  3. /etc/rsyncd.scrt, the username and password pairs.

To create the files on the server:
sudo nano /etc/rsyncd.conf

Now enter the following information into the rsyncd.conf file:

motd file = /etc/rsyncd.motd

[backup]
path = /media/backup
comment = the backup directory on the server
uid = ubuntu
gid = ubuntu
read only = false
auth users = ubuntu
secrets file = /etc/rsyncd.scrt

Hit Ctrl-o to save and Ctrl-x to close nano.

The uid, gid and auth users are users on the server; in the ssh session on the EC2 instance the user is ubuntu.

The format of the /etc/rsyncd.scrt file is one username:password pair per line.

Use nano to put some arbitrary text into /etc/rsyncd.motd.

Now you should have all the configuration information necessary, all that’s left to do is open the rsync port and start the daemon.

To open the port, open the /etc/default/rsync file, i.e.,

sudo nano /etc/default/rsync

and set RSYNC_ENABLE=true.

Here you might also specify a port other than the default 873. Remember to open the port in the security group, either via the AWS web interface in your browser or in the shell using the ec2-api-tools:
ec2-authorize default -p 873

Now to start the daemon,
sudo /etc/init.d/rsync restart
and exit the SSH session.

Syncing a folder

Now you can use your local shell to push folders or files to the server. Update the server side from a client machine with the ec2-api-tools installed:
EXIP=`ec2din | grep INSTANCE | grep -v terminated |awk '{print $4}'`
rsync -auv /home/rforge/articles ubuntu@$EXIP::backup/

The first line extracts the public address of the running, non-terminated instance from the output of ec2din (ec2-describe-instances) and stores it in $EXIP; the second passes it to rsync.

Otherwise you have to look up the address of your instance in the web interface and substitute it yourself:
rsync -auv /PATH/TO/FOLDER/ ubuntu@YOUR.SERVER.ADDRESS::backup/

::backup has to match the [backup] module name in the /etc/rsyncd.conf file. You will see the rsyncd.motd message and be prompted for the password stored in rsyncd.scrt; then rsync starts the upload.

A Script

The following script should do the daemon setup after connecting to the server via ssh and mounting the volume. Keep me posted if something does not work.

echo "motd file = /etc/rsyncd.motd

[backup]
path = /media/backup
comment = the backup directory on the server
uid = ubuntu
gid = ubuntu
read only = false
auth users = ubuntu
secrets file = /etc/rsyncd.scrt" > rsyncd.conf
sudo mv rsyncd.conf /etc/
echo "Greetings! Give me the right password! Me want's it!" > rsyncd.motd
sudo mv rsyncd.motd /etc/
echo "ubuntu:YourSecretPassword" > rsyncd.scrt
sudo mv rsyncd.scrt /etc/
sudo chmod 640 /etc/rsyncd.*
sudo chown root:root /etc/rsyncd.*
## enable daemon mode in the /etc/default/rsync file
sed 's/RSYNC_ENABLE=false/RSYNC_ENABLE=true/g' /etc/default/rsync > rsync
sudo mv rsync /etc/default/
sudo chown root:root /etc/default/rsync
sudo chmod 644 /etc/default/rsync
sudo /etc/init.d/rsync restart # start the daemon

Slow Filetransfer under Linux

After all, Bill Gates also has an issue to prey on: Linux seems to be quite inefficient at transferring files from one partition (or hard disk or USB disk) to another.

I asked for an explanation on the Linux Mint Forum.

‘optimize me’ came up with an interesting answer, which I just quote:

There is some long standing problems with how file transfers work. I never had a problem with it until kernel 2.6.24-19 came out for Ubuntu Hardy Heron back in June of 2008. At first, all I noticed is that file transfers to USB devices (flash drives, external HDDs) started moving slower than molasses in winter. CPU usage stayed below 15% or so, but transfers never went faster than 5MB/s and the speed always degraded to tens of K/s.

Research I did shows that this has been a problem on various distros going as far back as 2005. Over time, I’ve discovered that this bug is not limited to just moving files to USB devices, but also affects moving files between partitions on the same hard disk, and also affects network file transfers (both SMB & NFS).

If you look at your system monitor in the panel (I assume that’s what you’re using), and change the color scheme of your CPU monitor(s) to highly contrasting colors (Red, Blue, Yellow & Green, for example), you’ll see that what’s termed as “I/O Wait Time” is what’s eating up all your power. I’ve been up and down a million forums, been in contact with kernel and module developers, and spent countless hours researching the problem.

I’ve got zilch.

All I can tell you is that the problem doesn’t effect everyone – only, it seems, a small minority – so it’s not at all a priority for the developers to fix. You can do a google search for “slow+usb+linux” or “slow+usb+ubuntu” and you’ll probably find all my forums posts and all the same info I found. I’m not a programmer, I’m not a developer, and I don’t know jack about where to even begin tackling this problem. That’s where that stands.

I inserted some linebreaks for readability. Thanks a lot ‘optimize me’.

No Sound on HP EliteBook with Ubuntu Jaunty

Sound did not work on an HP EliteBook 2730p; it seemed the sound did not reach the speakers. I found a solution for the EliteBook 2530p and a different Linux distribution. It worked on the 2730p as well:

Open alsa-base.conf with any text editor (here ‘gedit’) by typing
sudo gedit /etc/modprobe.d/alsa-base.conf
in the terminal.

Then add a line
options snd-hda-intel enable_msi=1 single_cmd=1 model=laptop
at the end of the file.

Save it and restart the computer.
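The same edit can be done non-interactively. A sketch on a scratch copy of the file (on the real system the target is /etc/modprobe.d/alsa-base.conf and needs sudo):

```shell
# Append the workaround line to a scratch copy of alsa-base.conf.
cp /etc/modprobe.d/alsa-base.conf /tmp/alsa-base.conf 2>/dev/null || touch /tmp/alsa-base.conf
echo "options snd-hda-intel enable_msi=1 single_cmd=1 model=laptop" >> /tmp/alsa-base.conf
tail -n 1 /tmp/alsa-base.conf
```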

PS: This is one of those newbie-with-Linux problems which get solved by semi-competent replication of some geeky fiddling with foo.conf files in /etc or /boot or /bar folders … I have no idea what alsa-base.conf is or why/how this works. I just found the fix with Google at the site cited above, and it works…

.dmrc Being Ignored

After installing the Fluxbox desktop in addition to the default Xfce4 in Xubuntu, a persistent error message started to show up:

User's $home/.dmrc file is being ignored. This prevents the default session and language from being saved. File should be owned by user and have 644 permission. User's $home directory must be owned by user and not writable by other user's.

It seems to be a bug in Ubuntu. The same error message appeared on an Ubuntu Jaunty (Gnome) fresh install and on my Linux Mint 7 (based on Ubuntu Jaunty) fresh install. From time to time it shows up, and I have not figured out how to (re)produce it.

chmod 644 ~/.dmrc
chmod o-w /home/USER

does the job, where USER has to be replaced by your username.

Reinstalling Applications after a Fresh Install

Once in a while I am tempted to upgrade my OS or try another flavor. Now I try them on cheap 8–16 GB USB disks, so I no longer need to mess up my working system…

The problem always is that after using an OS for some months, a lot of applications have been installed and configured. This took a lot of time. It is always a lot of work to get them all in place again, and often I forget about them until I need them, preferably in a situation without an internet connection, so there is no way to “sudo aptitude install” …

I was already about to create a script, manually punching in everything I found necessary, but then I found a preconfigured solution.

According to the great Ubuntu Guide:

If you upgrade your Ubuntu system with a fresh install, it is possible to mark the packages and services installed on your old system (prior to the upgrade) and save the settings (“markings”) into a file. Then install the new version of Ubuntu and allow the system to reinstall packages and services using the settings saved in the “markings” file. For instructions, see this Ubuntu forum thread. In brief:

  • On the old system: Synaptic Package Manager -> File -> Save Markings
  • Save the markings file to an external medium, such as USB drive.
  • Complete the backup of your system’s other important files (e.g. the /home directory) before the fresh install of the new system.
  • In the freshly installed new system, again open Synaptic Package Manager -> File -> Read markings and load the file on your USB drive (or other external storage) previously saved.

Note: many packages, dependencies and compatibilities change between versions of Ubuntu, so this method does not always work. The automated upgrade remains the recommended method.
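The same idea also works from the command line with dpkg instead of Synaptic; this is a common alternative, not part of the Ubuntu Guide recipe above. On the old system you dump the selection list, on the new one you feed it back:

```shell
# On the old system: dump the package selection list ("markings").
dpkg --get-selections > /tmp/package-selections.txt
wc -l /tmp/package-selections.txt
# On the new system one would restore with (needs sudo, shown for reference):
#   sudo dpkg --set-selections < /tmp/package-selections.txt
#   sudo apt-get dselect-upgrade
```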

Encrypt Home Partition with cryptsetup & LUKS

The first step is to back up all necessary data; if something goes wrong, your data will be lost in the process if it’s not backed up. Also note that your home folder needs to be located on a partition separate from your root partition; if it is not, see #How to make partitions.

Second, install necessary software:

  sudo apt-get install cryptsetup

Insert the new module, dm-crypt into the kernel:

  sudo modprobe dm-crypt

Check to see what encryption schemes are available:

  cat /proc/crypto
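The raw /proc/crypto output is verbose; to see just the registered algorithm names, you can filter it, for example:

```shell
# List the unique algorithm names the kernel currently exposes.
grep '^name' /proc/crypto | sort -u
```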

If only MD5 is listed, try inserting the appropriate modules into the kernel:

  sudo modprobe serpent

The above is just an example; it could also be twofish, blowfish or any other crypto module that you would like to use.

The following commands will assume that your home partition is /dev/sda1, please change it to match your own configuration.

In the next step we use cryptsetup to format the partition with the luksFormat option. This command will cause you to lose all data on /dev/sda1.

  sudo cryptsetup luksFormat -c algorithm -y -s size /dev/sda1

where algorithm is the algorithm you chose above, such as serpent, aes, etc.

Size is the key size for encryption, generally 128 or 256. Without specifying the algorithm or the size, I believe it defaults to AES-256; more information and additional options can be found in the man page. The above step will ask you to choose a password and verify it. Do not forget this password.

We can then use the luksOpen option to open the encrypted drive.

  sudo cryptsetup luksOpen /dev/sda1 home

home is a nickname which cryptsetup uses to refer to /dev/sda1. It also creates the device /dev/mapper/home; this is what you actually mount to access the filesystem. If you specify a name other than home, it will create the device /dev/mapper/[name], where [name] is the nickname cryptsetup will use. This step asks for your LUKS passphrase, the password you created in the previous step.

Next, we create the actual filesystem on the device. I use reiserfs, but it could just as well be ext3.

  sudo mkreiserfs /dev/mapper/home

or

  sudo mkfs.ext3 /dev/mapper/home

The next step is to mount your encrypted device and copy your files back to your home directory; run the copy from the directory where your backup lives.

  mkdir new_home
  sudo mount /dev/mapper/home new_home
  cp -r * new_home

Now we have to set everything up so that it’s ready to go at boot; we need to tell the system that there are encrypted disks we want mounted.

  gksudo gedit /etc/crypttab

Enter the following as one line at the end of the file.

  home       /dev/sda1       none       luks,tries=3

Remember, home can be any name you want; it maps to /dev/mapper/[name]. The option tries=3 allows three attempts before a reboot is required and the disk is left undecrypted.

Next, enter the device info in fstab so the filesystem is mounted on boot.

  gksudo gedit /etc/fstab

Enter the information as one line at the end of the file.

  /dev/mapper/home       /home       reiserfs       defaults       0       0

Remember to substitute /dev/mapper/home with your device /dev/mapper/[name]. /home is the mount point, since this is our home directory; reiserfs is the filesystem type (put ext3 if you formatted it as ext3). For now the default options should be good; change them if you need something else. Also, now is a good time to remove the old /dev/sda1 entry so that fstab doesn’t try to mount it at boot; either comment out the /dev/sda1 line or delete it.

The final step is to make sure that the proper modules are loaded at boot time.

  gksudo gedit /etc/modules

Now add dm-crypt and the crypto module that you used earlier, such as serpent, aes, etc. Each needs to be on its own line.
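Equivalently, the two module names can be appended non-interactively. A sketch on a scratch file (the real target is /etc/modules and needs root); serpent stands in for whichever cipher module you used:

```shell
# Append dm-crypt and the chosen cipher module, one name per line.
printf 'dm-crypt\nserpent\n' >> /tmp/etc-modules.demo
cat /tmp/etc-modules.demo
```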


That should be it; all that’s required is a reboot. During the boot process the computer will say “Starting early crypto disks” and ask for your passphrase. If the passphrase is accepted, it will unlock the encrypted partition and mount it at your specified mount point.

Manage Amazon S3 Buckets

Yeah, delight!

I was using crappy development scripts to fiddle with S3 buckets on Amazon Web Services (AWS). Creating, listing and deleting buckets and so on was not that straightforward, and I found it not well documented… I have a growing suspicion that I am just not capable of web searches…

OK, there is an easy way: a graphical user interface. Unfortunately it refused to work with Firefox on Ubuntu, but did work in IE5 on Windows XP.

OK, another tool I just found is the S3 manager add-on for Firefox. This finally turned out to be the easiest way to connect to Amazon’s Web Services, create online storage (“buckets”), and edit or delete them.