about:benjie

Random learnings and other thoughts from an unashamed geek

Ubuntu NFS Home Directory Issues



If you choose to mount just one of your users’ home directories (e.g. /home/jem) over NFS under Ubuntu, then you may come across issues such as failure to log in, the screen freezing (though the mouse still moves), loss of configuration data (e.g. icons in your panels), being told that your login session lasted under 10 seconds, and general instability.

The reason for this is that, in Ubuntu’s rush to get you to the Desktop quickly, it loads up GDM (and possibly auto-logs you in) *before* your home directory is mounted over NFS (Network File System). This is a simple issue of priorities. However, if you log in before the home directory has been mounted, then gconfd-2 and other similar apps will load (or save) settings to the (supposedly empty) /home/jem on your local hard drive. When you give up and log out (e.g. Control-Alt-Backspace, or a proper logout) and log back in again, these programs will still be accessing the wrong settings, because they continue to run in the background.

The solution is to abort Ubuntu’s Windows-like behaviour of letting you log in before everything has finished starting at boot time - change GDM’s priority from 14 to 80 (or some other suitably high number). I chose to do this the lazy way, using “bum”. BUM, the Boot Up Manager, is a simple way to change all things related to booting. It is easy to use, though it does take quite a while (a few minutes!) to start the first time you run it, and it must be run in a graphical environment. Simply tick the advanced box, go to the third tab, find gdm, and change its priority up to 80. Save, exit and reboot, and all is well again in the world… though you may have to restore your settings from a backup, or go through the long process of re-configuring your desktop the way you like it. Good luck!
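
For reference, here is a rough command-line equivalent of what BUM does, assuming the SysV-style init links that Ubuntu used at the time (run as root; the 80 matches the priority chosen above):

# Remove the existing gdm rc links, then recreate them at priority 80.
# You can check the result with: ls /etc/rc2.d/ | grep gdm
update-rc.d -f gdm remove
update-rc.d gdm defaults 80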

MythWeb Aspect Ratio


[Image: The MythTV menu (default blue theme)]

It has bothered me a little for a while that MythWeb (part of the fantastic MythTV package for Linux) has its aspect ratio hard-coded to 4:3. Most of the TV that I watch (received over Freeview (DVB-T) in the UK) is in 16:9, so watching it back in 4:3 is a bit of a pain. Thus I was motivated to change the hard-coding to 16:9. The process is quite simple (a scripted sketch of these edits follows the list):

  1. Modify line 102 of mythweb/modules/stream/handler.pl - change “3/4” to “9/16”.
  2. Modify lines 35 and 37 of mythweb/modules/mythweb/tmpl/default/set_flvplayer.php - change “3/4” to “9/16” and “4:3” to “16:9” respectively.
  3. Modify line 505 of mythweb/modules/tv/tmpl/default/detail.php - change “3/4” to “9/16”.
  4. Optional: I also appended .' -aspect '.shell_escape("16:9") to the end of line 165 of mythweb/modules/stream/handler.pl (the line that builds the ${width}x$height arguments) - make sure you include the full stop at the beginning! I am not sure whether this modification is necessary or even beneficial.
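
If you would rather script the edits, something along these lines is a starting point - a sketch only, since the line numbers (and the surrounding code) vary between MythWeb versions, so verify each substitution by hand before trusting it:

# Sketch: apply the substitutions at the line numbers listed above.
# Back up the files first, and check the results with a diff!
sed -i '102s|3/4|9/16|' mythweb/modules/stream/handler.pl
sed -i '35s|3/4|9/16|; 37s|4:3|16:9|' mythweb/modules/mythweb/tmpl/default/set_flvplayer.php
sed -i '505s|3/4|9/16|' mythweb/modules/tv/tmpl/default/detail.php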

There is now a minor bug where the player does not initially show the control bar at the bottom properly, but a click on the preview picture solves this.

I’m currently working on modifications to stream the video in 3gp format to my mobile (a Nokia 6120 Classic); however, this seems a lot harder, as I would have to implement an RTSP server and re-encode all recordings in advance (using a MythTV User Job), which is not quite what I am after. There is a page about it on the MythTV wiki. I wonder if I can find a cunning way around it…

If this helps you, please let me know in the comments!

More Partitioning - RAID6 This Time!


[Image: RAID 6 with five disks (0-4) - each group of blocks has two parity blocks distributed across the five disks.]

I told you about moving my data over to a RAID1+0 and RAID5 system in this previous post, but, as expected, I never got round to finishing it. Until now, that is…

I went to London on business for a few days, and came back to find MythTV had stopped working. The disk was full, so it had given up, and wouldn’t even let me in to view the recordings - so I couldn’t delete some to get it working again! (I wonder why Auto-Expire wasn’t working.) Anyway, this spurred me on to finally finish partitioning and setting up my drives. The process was quite simple, really. The following steps generally assume you are root already (sudo su), because I am a “bad” sysadmin and don’t believe in all this constant sudo malarkey. Following my instructions is, as always, at your own risk. I highly recommend that you read the relevant documentation before proceeding (such as this).

  1. Reboot into the Ubuntu LiveCD - don’t forget to upgrade the software on the LiveCD to prevent any issues!
  2. Disable swapspace (swapoff /dev/sd[abcd]1).
  3. Use cfdisk to finish formatting the drives (remembering to change partition types to the hexadecimal “fd” - Linux RAID Autodetect). Be careful NOT to modify ANY of the details of the partitions you are already using for data or you will probably lose data!
  4. Reboot back into your real system (not LiveCD - minimizes downtime).
  5. Optional: add hot spares to current RAID5 devices (mdadm /dev/md1 -a /dev/sdf5).
  6. Create the new RAID6 devices (mdadm -C /dev/md3 -l6 -n6 /dev/sd[abcdef]7).
  7. Optional: wait for the devices to finish resyncing (watch cat /proc/mdstat).
  8. Turn the new RAID devices into LVM (Logical Volume Manager) physical volumes (pvcreate /dev/md3).
  9. Stop any services that depend on /data (/etc/init.d/mythtv-backend stop; /etc/init.d/mpd stop).
  10. Unmount the data drive (umount /data).
  11. Add the new physical volumes to the current LVM volume group, “raid5” (vgextend raid5 /dev/md3).
  12. Expand the logical volume to the full size of the volume group (use pvdisplay to find out the size [411.05GB], and then run lvextend -L+411.05G /dev/raid5/data).
  13. Expand the filesystem [ext3] to the full size of the logical volume (e2fsck -f /dev/raid5/data; resize2fs /dev/raid5/data) - running e2fsck on a 600GB drive does take a while… Took about 30 minutes for me with little else running.
  14. Remount the data drive (mount /data).
  15. Restart the services you stopped earlier (/etc/init.d/mythtv-backend start; /etc/init.d/mpd start).
  16. Remember to update (I prefer to re-create) your initramfs and update grub (update-initramfs -k all -c; update-grub) - see the note after this list.
  17. All done!
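
One extra check is worth making before step 16: make sure the new array is actually listed in /etc/mdadm/mdadm.conf, otherwise the freshly built initramfs may not assemble it at boot. A minimal sketch (review the appended line rather than trusting it blindly):

# Show ARRAY lines for all running arrays, then record the new one.
mdadm --detail --scan
mdadm --detail --scan | grep '/dev/md3' >> /etc/mdadm/mdadm.conf
update-initramfs -k all -c
update-grub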

I chose RAID6 over six disks rather than RAID5 over five disks with one hot spare because it has better redundancy and similar performance. RAID6 support was added to the kernel at the end of 2003, so I think it should be fairly stable by now.

This process was not too complicated, and all of it can be done with very little downtime (if you are clever/daring, you can even re-partition without rebooting, but that was too risky for me!). You can even do the LVM stuff without taking /data offline - I wouldn’t advise it, though.

If this post helps you, please let me know in the comments.

Snow, Snow, Everywhere…


… well, not any more - most of it has melted now - but it was this morning! Snow during April, what is the world coming to? I have a few theories:

  1. Global warming is starting to take hold.
  2. There are actually 53 weeks in a year, not 52, and thus our calendars are wrong and winter comes a week later each year.
  3. Government conspiracy.
  4. It’s always been this way, but we’ve had a few years of warmth that have made us forget the snow in March/April.

Anyway, enjoy the pictures from Gosport!

[Images: Snow]

Synce-gnomevfs Install on Ubuntu



Yesterday I tried to install the latest version of synce in order to get Jem’s Dad’s Windows Mobile 6 phone to share files with Linux (Ubuntu Gutsy Gibbon in this case). After managing to get the software installed, I have been very impressed with it; however, actually installing it was a bit of a challenge, though the solution is quite simple and I share it with you now.

  1. Uninstall everything synce related before starting.
  2. Follow the Synce with Ubuntu instructions.
  3. pls should work at this time.
  4. Follow the SynceVfs instructions. Use ./configure --prefix=/usr; make; sudo make install
  5. Here’s the important bit: cp /usr/etc/gnome-vfs-2.0/modules/synce-module.conf /etc/gnome-vfs-2.0/modules/
  6. killall gnome-vfs-daemon

I think that you can do step 5 alternatively by adding --sysconfdir=/etc to your ./configure command in step 4; however, I have not tested this.
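
For reference, that untested alternative would look something like this (a sketch, assuming a standard autotools build of synce-gnomevfs):

# Untested alternative to step 5: install the module config straight into /etc.
./configure --prefix=/usr --sysconfdir=/etc
make
sudo make install
killall gnome-vfs-daemon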

Once this is done you should be able to just plug your phone (or other Windows Mobile device) into the USB, and type synce:/// into Nautilus’ address bar. Simple!
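
If you prefer to test it from a terminal rather than Nautilus, the gnome-vfs command-line tools should (I believe) be able to browse the same location - a sketch only, and the “Storage Card” path is just an illustrative example:

gnomevfs-ls synce:///                                          # list the top level of the device
gnomevfs-copy 'synce:///Storage Card/photo.jpg' ./photo.jpg    # illustrative path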

Moving Blog Software - Serendipity to Wordpress


[Image: Wordpress screenshot]

I moved my blog (in fact my entire website!) over to Wordpress a couple of days ago. The move was not without its challenges - for a start, I remembered that Wordpress likes to have a well-defined hostname, and I didn’t want any downtime. To get around this, I placed an entry in my /etc/hosts file for www.benjiegillam.com, pointing at the new server; this way I could set up the new Wordpress blog privately (no-one else would know where it was) under the correct domain name, whilst still having access to the old blog to copy content over from.
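
For example, the temporary entry looked something like this (the IP address here is a placeholder - use your new server’s real address):

# Append the temporary hosts entry, and remove it again once DNS is switched over.
echo '192.0.2.10    www.benjiegillam.com' | sudo tee -a /etc/hosts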

My first issue was how to transfer the posts from the old blog to the new one. I achieved this by making a few minor hacks to Serendipity and using the export function (where you can export all posts as an RSS feed). To do this, I had to disable the “extended body” feature (i.e. make sure it was output as part of the feed), as explained in solution, part 1, here. Make sure your browser is not caching at this stage!

Once I had acquired the RSS file, I then had to convert it into a format that wordpress would understand. I cheated and wrote a very bad PHP file, here:

Convert s9y RSS into Wordpress-compatible format (convert_s9y_rss.phps)
<?php
//Import the feed
$rss = file_get_contents('s9y.rss');
//Opening <![CDATA[s
$rss = str_replace("<content:encoded>","<content:encoded><![CDATA[",$rss);
//Closing  ]]>s
$rss = str_replace("</content:encoded>","]]></content:encoded>",$rss);
//Now replace all newline characters with a " " (this will BREAK any preformatted tags, but will stop wordpress putting <br />s everywhere)
$rss = str_replace(array("\n","\r")," ",$rss);
//Finally remove all the htmlentities from the file and output to STDOUT, which you can then redirect to a file
echo html_entity_decode($rss,ENT_COMPAT,'UTF-8');

// I called this as convert_s9y_rss.php > wordpress.rss

I then used Wordpress’ RSS importer to import the posts (no comments, unfortunately). I then copied all of the uploaded files into the same file structure on the new site. The next thing to do was to go back through and edit all the posts and update their links. Only joking, I really couldn’t be bothered to do that! Instead, I made a folder called “serendipity” in the webroot (all of my posts were /serendipity/archives/… previously), and placed in it the following two files:

(s9y_htaccess)
RewriteEngine On
#Direct *EVERYTHING* to the index.php file
RewriteRule .* index.php [L]
(s9y_index.phps)
<?php
//What URI was I accessed as?
$uri = $_SERVER['REQUEST_URI'];

//Remove everything except the last section
$uri = explode("/",$uri);
$uri = array_pop($uri);

//Convert to lower case (as in Wordpress)
$uri = strtolower($uri);

//Remove the post id from the beginning of the post
$uri = explode("-",$uri);
array_shift($uri);
$uri = implode("-",$uri);

//Remove the extension (.html)
$uri = explode(".",$uri);
array_pop($uri);
$uri = implode(".",$uri);


// Now send a 301 Moved Permanently and the new location
header("Location: /$uri",TRUE,301);
exit();

These caused all post links to be rewritten to a guess at the page name, and thankfully Wordpress was clever enough to work out what was meant. I am not sure if it worked for all posts, but it did for all that I tested. I hope this helps someone; if so, leave me a comment (please! I lost all my old comments in the move!).

How I Converted My 4-disk RAID5 Into 6 Disk Super-Storage


[Image: A hard disk drive with the disks and spindle motor hub removed, showing the spindle motor, the actuator arm with its read-write head, and the ribbon cable connecting the head to the controller board.]

On 5th February, one of my hard drives failed, and my computer started to choke. My root partition was on the failed drive, so only programs that were already in RAM could continue to run, though most of my storage was on a RAID5 across four 320GB disks, so that was still intact. Fortunately, thanks to my 6GB of RAM, swap was not in use either, so this did not crash my PC, but it did leave me in quite a bad position, unable to run many of the built-in system tools - in particular the tools from smartmontools. I did what I could to copy the important details over to Jem’s PC whilst my system was still running, albeit crippled.

It turned out that, upon rebooting the system, everything was fine and the disk worked, but it didn’t half give me the willies! I vowed then to get my root drive onto some form of redundant storage, and to have a hot spare always on hand. To that end I bought an Icy Dock 5-in-3 SATA cage and two 500GB SATAII drives. Unfortunately, due to the amount of time I was spending working on buzzspotr.com with i-together, I was unable to use these immediately. I finally got round to incorporating them into my system last weekend. It was quite a challenge, so I thought I would document it for future reference.

The first thing I did was delete as much data as I could. The main things I focussed on were:

  • Old MythTV programs that I had recorded and seen, or that I was not going to watch
  • My ripped DVD library (I rip my DVDs to make it easy to get them to play from MythTV without having to get out of the sofa! I could always re-rip them later)
  • Old duplicate backups (for example, I had backups every 30 minutes for Blog Friends, which summed to almost 50 GB! I removed all of these except for those from 11:30pm each night)
  • Duplicate files
  • Caches

After removing all this data I reduced the “valuable” data on my computer to somewhere around 650GB.

I decided that the best way to lay out my computer data would be to have the following:

  • First 0.5GB of each drive - swap space and /boot partition. I chose not to make the swap redundant as it is rarely used (and I don’t mind if the computer crashes if it HAS to!).
  • Next 39.5GB of each drive - RAID1+0 for /, totalling 117GB of fast redundant storage (theoretical peak bandwidth: 1800Mbps read, 900Mbps write). High-priority data lives here - the root, my home directory and desktop, the mysql databases, the webroots of apache, etc. Ultimately everything where speed and redundancy are the highest priority. This setup allows the loss of any one, and possibly up to three, drives, and has a lower probability of total failure than RAID0+1. I did *NOT* use the kernel raid10 driver (see the sketch after this list).
  • Then 80GB stripes over 5 disks of RAID5, with one hot spare (on the 6th disk), up to the capacity of the 320GB drives, which would all be combined through LVM into one huge partition for lower priority data - music/TV/etc. Note that RAID5 write speed is not great.
  • The rest of the 500GB drives are currently unpartitioned, but I might use them as overflow for MythTV, or as hot spares for the RAID1+0
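
As a sketch of what that RAID1+0 looks like in practice (RAID0 striped over three RAID1 pairs, rather than the kernel raid10 driver) - the device names, partition numbers and md numbers here are illustrative only:

# Illustrative only: three mirrored pairs, then a stripe across them.
mdadm -C /dev/md10 -l1 -n2 /dev/sda5 /dev/sdd5
mdadm -C /dev/md11 -l1 -n2 /dev/sdb5 /dev/sde5
mdadm -C /dev/md12 -l1 -n2 /dev/sdc5 /dev/sdf5
mdadm -C /dev/md0 -l0 -n3 /dev/md10 /dev/md11 /dev/md12
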

This seemed to me to be the best way to lay out my filesystem, but how on earth could I move my current data over to the new system, and be assured that it would still boot?

My previous setup was (4 320GB drives, remember):

  • First 0.5GB swap and /boot
  • Then 30GB stripes for RAID5, up to the last 20GB. Combined with LVM, two logical volumes.
  • Last 20GB was used for /home on one drive, 64-bit / on another, and a 32-bit / on another. The final drive was blank.

My first issue was how to boot into the Ubuntu LiveCD, and still have volume management. I found the best way to do this was the following:

  1. Boot the Gutsy Gibbon LiveCD, remembering to set screen resolution and keymap (for some reason, it crashes for me if I don’t…)
  2. Open up synaptic
  3. Edit the software sources (repositories) - tick all the boxes, and all the updates boxes (gparted is broken on the LiveCD if you try and use it on a completely raw (fresh from manufacturer) drive)
  4. Install all the updates (you can leave out obvious things like OpenOffice.org if you want)
  5. Install mdadm and lvm2 packages
  6. (Optionally?) Run modprobe raid0; modprobe raid1; modprobe raid5
  7. Then run mdadm -A -s --no-degraded
  8. (Optionally?) Run modprobe dm-mod
  9. Run vgchange -ay
  10. Now you should have all your RAID and LVM partitions up and running (the commands are consolidated in the sketch after this list)
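
Consolidated, the command-line part of that (steps 6-9) looks like this, run in a root terminal in the live session:

# After installing mdadm and lvm2 in the live session:
modprobe raid0
modprobe raid1
modprobe raid5
modprobe dm-mod
mdadm -A -s --no-degraded    # assemble every array found by scanning
vgchange -ay                 # activate all LVM volume groups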

If you don’t understand the commands, I highly recommend that you read their man pages to ensure that these are the right commands for you. You CAN lose data if you mess this up! I always have to check each command 4 or 5 times before I run it when this much data is involved!

Once I had done that, I had to take the plunge. First I checked each of the NEW UNFORMATTED disks with a long read-write test (actually, I did this before ever rebooting), by running badblocks -s -w /dev/sde. This is a destructive command, please be careful using it! It will erase any data already on the drive.

Once I was convinced that the drives would withstand the two days during which they would be the critical data point, I partitioned them. Both got the standard 0.5GB and 39.5GB partitions at the front; then sde got three 80GB partitions, with the rest (220GB) turned into another partition, whilst sdf got the rest (460GB) turned into one large partition. I then copied everything over to these drives (starting with sdf, and then working backwards through the partitions on sde). I then had to take the jump and make my RAID1+0 (formed by striping the pairs sda-sdd, sdb-sde, sdc-sdf).

It was at this point that I thought I would be clever. If I just deleted the partitions of sda, then the RAID5 would still be holding the data, and I could make my severely crippled RAID1+0 (really only a RAID0 at this stage) by combining sda5, sde5 and sdf5. I could then copy the data over and check that it booted, whilst still having lots of redundancy for my data. Unfortunately, the system would not boot like this (I guessed it was because I had two md0s - one for the RAID1+0, and one from the old RAID5 - though I was later proven wrong), so I had to give up and take the risk. I deleted the partitions from the other drives, formed my RAID1+0, and tried to boot into it. It still would not boot. I even chrooted into the new environment and ran update-initramfs and update-grub, but still nothing.

At this point I was a little frustrated, and spent a long time researching. In the end I discovered that the initramfs was not actually being updated, and it still contained the old /etc/mdadm/mdadm.conf. Upon deleting and regenerating the initramfs, I could boot into the system. I quickly rebooted into the LiveCD and made all the other necessary changes (setting up the RAID5 across the five available disks, leaving out the last disk with the data on; copying the data from sdf onto the new RAID5; updating the fstab; etc.). I then rebooted into the system and (not surprisingly) had to make some quite considerable changes due to the amount of data I had moved to new, “better”, locations. And finally, just 2 days later(!), I had my ultra-fast and acceptably redundant system online.
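
For anyone who hits the same wall, the fix boiled down to something like this from the live environment - a sketch only, with illustrative device names; in particular, overwriting mdadm.conf wholesale deserves a careful review before you reboot:

# Illustrative: /dev/md0 is the new root array here.
mount /dev/md0 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
# ...now inside the chroot:
mdadm --detail --scan > /etc/mdadm/mdadm.conf   # replace the stale config (review it!)
update-initramfs -k all -c                      # re-create the initramfs rather than update it
update-grub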

I’m very glad that I took the time to do this, though I still have not got round to formatting sdf and setting it up as a hot spare… it still has most of the old data on it as a duplicate copy!

Icy Dock MB-455SPF Review


[Image: A 7-pin Serial ATA data cable (image from Wikipedia)]

My Icy Dock MB-455SPF 5-in-3 SATAII cage arrived today from Aria. I was quite excited, and wanted to fit it straight away. I shut my computer down, took the case apart and set about fitting it. The first issue I came across was that the drive rails for my Antec P180 case would not fit properly - I had to fit only the bottom two, as fitting all three meant that they were all too close together to squeeze onto the P180’s rails. Once I had this sorted, I set about trying to fit the power and data cables. ARGH! :@ There just wasn’t enough space! And to make things worse, the connectors on the back of the cage were the wrong way round for the L-shaped SATA cables (see image) to fit! After over an hour’s struggling, swearing, plugging, unplugging and general annoyance, I finally got the damned thing into my case. Breathing a sigh of relief, I set about the long task of attaching my drives to the mounting rails of the cage. I then slotted them in, and rebooted.

WOW!

WHAT THE HELL?!

THAT’S SO NOISY!

They rattled like anything. I quickly turned the machine off, tightened (really, really over-tightened, if you ask me!) the screws as much as I could, and then slipped the drives back in and powered up. This time the rattle was gone, but it was still very noisy from all the vibrations. The Icy Dock does not have any anti-vibration mounting built in, making it much noisier than I was used to (I normally mount the drives using the rubber grommets that come with the P180 case). The fan was also very noisy - so much so that I had to change the warning-temperature setting on the back from 45°C to 55°C.

Credit where credit’s due, the cage does keep my drives nice and cool, and it is very helpful to be able to see the individual hard drive status LEDs. It also has the obvious advantage of letting you quickly and easily swap a drive in and out in the event of a disk failure, without all the effort of having to go inside your case (and possibly knock a cable out of another disk drive without realising!).

Generally, though, I would sum up my feelings thus:

I recommend NOT buying an Icy Dock 5-in-3 SATA cage. Was hard to install & my computer is *miles* louder now + resonates. Pretty lights though…

By the way, this is the first post that I have written with Zemanta (my first “zemified” post). I’m quite impressed by its ease of use, and I will continue to use it for that reason. Towards the beginning of writing a post, the articles it brings up tend to be quite random though… what does this post have to do with celebrity babies, for example? I really like the gallery feature, and the links are especially helpful (though I am surprised that “Zemanta” isn’t among the link detection)! Great work, guys!

SimplePie Memory Leak Update


It seems quite a few people are still having trouble with SimplePie’s memory leaks, so I thought I would write a new post to explain how I have modified SimplePie. If you aren’t a PHP programmer, you probably don’t want to read this post… For more background, you probably want to read my other post.

It is possible that version 1.1 of SimplePie has fixed this issue, though Aman left a comment on my other post telling me that it didn’t. I currently use a heavily patched version of revision 901 from the SVN (I think… I may have updated since then…) so I can’t really tell you… I wish I had time to update and re-patch everything!

To fix the memory leak issues I was experiencing, I replaced the SimplePie::__destruct() method with a more “hardcore” version, hopefully forcing PHP to clear all references, and thus allowing it to clear memory:

(32.phps)
<?php
// Replacement for SimplePie::__destruct(): explicitly destruct and unset
// everything held in $this->data (and the sanitizer) so PHP can free the memory.
function __destruct(){
  foreach ($this->data as $k=>&$v) {
    if (is_array($this->data[$k])) {
      // Destruct and unset each object stored one level down
      foreach($this->data[$k] as $l=>&$w) {
        if (is_object($this->data[$k][$l])) $this->data[$k][$l]->__destruct();
        unset($this->data[$k][$l]);
      }
    } else if (is_object($this->data[$k])) {
      $this->data[$k]->__destruct();
    }
    unset($this->data[$k]);
  }
  unset($this->data);
  // Release the sanitizer object too
  $this->sanitize->__destruct();
  unset($this->sanitize);
  return true;
}
?>

It seems to work for me, let me know if it works for you!