Amahi DNS not working

Amahi has proven itself to me as a solid and reliable home server solution. However, occasionally I’ve had a problem where DNS stops working, sometimes after OS updates have been installed and the server restarted. It’s only happened twice over the last year or two, and the helpful Amahi folks on the IRC channel came to the rescue both times, but I thought I’d document the solution here.

Symptom: You run the Amahi network troubleshooter and find you can’t ping hda from your hda box, i.e. name resolution is not working.

Solution 1: If you are running a desktop version of linux, or have installed certain packages (like Jenkins) that pull in the libvirtd service, you need to disable that service. Unfortunately libvirtd grabs the default DNS port, so Amahi, which uses dnsmasq, can’t work. This is documented here, and the solution is to use:

sudo systemctl disable libvirtd
sudo reboot
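If you want to confirm that libvirtd (or something else) really has grabbed the DNS port before rebooting, ss should show whatever is bound to port 53 – for example:

# show which processes are listening on TCP/UDP port 53
sudo ss -lntup | grep ':53 '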

Solution 2: If you can now ping hda but still can’t ping www.google.com, it means the upstream DNS provider that Amahi uses is not reachable. Amahi supports multiple DNS providers: OpenDNS, OpenNIC and Google Public DNS. Mine was set to OpenDNS, which I think is the default; the solution was to change it to Google Public DNS.
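A quick way to see whether a given upstream provider is reachable at all is to query it directly from the hda box (208.67.222.222 is OpenDNS, 8.8.8.8 is Google Public DNS; dig is in the bind-utils package on Fedora):

# query OpenDNS directly
dig @208.67.222.222 www.google.com
# query Google Public DNS directly
dig @8.8.8.8 www.google.com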

If you haven’t tried Amahi, and you have a home network with a bunch of computers, I highly recommend checking it out!

Flushing Amahi (dnsmasq) dns cache

The latest version of Amahi uses dnsmasq for network management, including DNS caching, which works really well. However, somehow my Windows 8 laptop’s wifi was enabled at the same time as its ethernet LAN connection, so the laptop ended up with two IP addresses. This is probably due to some bug in Windows 8, but even after I disabled the wifi adapter and rebooted both my Amahi server and the laptop, the DNS cache still retained the now-unused IP address for the disabled wifi adapter. In the end I had to manually edit the dnsmasq.leases file to fix this – here’s how (as root on Fedora 21):

1. View your dnsmasq leases to confirm there’s a bogus address you want to remove:

[root@hda-com]# cat /var/lib/dnsmasq/dnsmasq.leases

2. Stop the dnsmasq service

[root@hda-com]# service dnsmasq stop

3. Edit the dnsmasq.leases file to remove the bogus address(es):

[root@hda-com]# nano -w /var/lib/dnsmasq/dnsmasq.leases

4. Start the dnsmasq service

[root@hda-com]# service dnsmasq start

Then renew your client’s IP address (e.g. using ipconfig /renew on Windows) and you’re done.
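If you already know the bogus address, steps 2–4 boil down to something like the following (192.168.1.123 is just a placeholder for the stale lease):

service dnsmasq stop
# delete any lease line containing the stale address
sed -i '/192\.168\.1\.123/d' /var/lib/dnsmasq/dnsmasq.leases
service dnsmasq start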

Missing Maximize / Minimize in Gnome

Gnome 3 continues to be abhorrent. Not only can I not get to a terminal easily anymore, but there are no maximize / minimize buttons on windows? Who designed this, and have they used graphical user interfaces for more than a few minutes or hours? Anyway, to turn this basic functionality back on in Fedora, try (thanks to this thread):

sudo yum install gnome-tweak-tool

Then search for ‘tweak’ in the Gnome 3 Applications menu, open the ‘Tweak Tool’ app, click the ‘Windows’ tab and enable the titlebar buttons as follows:

[screenshot: Tweak Tool ‘Windows’ tab with the Maximize and Minimize titlebar buttons enabled]
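If you’d rather skip the GUI, the same buttons can also be restored from a terminal – a sketch, assuming your GNOME 3 version still honours the button-layout key:

gsettings set org.gnome.desktop.wm.preferences button-layout ':minimize,maximize,close'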

Geez.

VNC to Linux PITA

A stubborn or short-sighted developer, it seems, is the cause of many people’s grief and effort when trying to get VNC working to a linux box from any of the standard VNC clients (TigerVNC, RealVNC etc.). As of Fedora 19+, the code in vino-server, the default linux VNC server, has changed such that its encryption mechanism is incompatible with ALL standard VNC viewers. You’ll see something like ‘No matching security types’:

[screenshot: TigerVNC viewer showing the ‘No matching security types’ error]

The only option for the average user is to disable encryption. Pretty stupid, but I’ve seen this attitude before: a developer, ‘David King’, says “My code is right, everyone else is wrong” and marks the bug as ‘NOTABUG’. How sad. Anyway, until someone takes another look at this from a different perspective, make sure you only use VNC to linux on a secure LAN environment and then do this:

gsettings set org.gnome.Vino require-encryption false

Make sure you do it as your standard (not sudo) user. It sucks to disable security – but if complexity is a barrier to entry, that is the result. 🙁
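If you do need to get at the box from outside a trusted LAN, a common compromise is to leave Vino’s encryption off but tunnel the VNC session through SSH – a sketch, assuming Vino is listening on its default port 5900 and sshd is running on the server:

# forward local port 5901 to the server's VNC port over SSH
ssh -L 5901:localhost:5900 user@your-linux-box
# then point your VNC viewer at localhost:5901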

Mime types with Gnome Commander

One of the things that mime types are used for is to specify the default program to associate with a file extension. Unfortunately, this is currently broken in Gnome Commander (as at v1.2.8.17). To fix it, you’ll need to add a line to the following file:

~/.local/share/applications/defaults.list

Create that file if it doesn’t exist and add [Default Applications] on the first line. Then add the mime type (as shown in the Gnome Commander error dialog when you try to open your file) and specify the program to use. For example, for the PNG file extension you might want to use Eye of Gnome (eog), so you’d add:

image/x-apple-ios-png=eog.desktop

[screenshot: Gnome Commander error dialog showing the unhandled mime type]
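Putting it together, the whole file ends up looking something like this – the PNG line is just the example above, and you add one line per mime type you want to map:

[Default Applications]
image/x-apple-ios-png=eog.desktop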

Save the file and you’re done. If it still doesn’t work, you should check this thread on the Ubuntu forums. Tested with Zorin OS 9. Hopefully this is fixed in a future version.

Opening files from Gnome Commander in foreground

Gnome Commander is my preferred file commander on linux – it’s simple and it works. However, I usually map F3 to my preferred editor (gedit) rather than use the internal viewer, and when you do that, gedit is not brought to the foreground if it is already open. Which is totally lame. Luckily there’s an easy way to fix it. Create a script named e.g. gedit-foreground.sh with the following content:


#!/bin/sh
# open the file passed in by Gnome Commander...
gedit "$1"
# ...then raise the (possibly already running) gedit window to the front
wmctrl -a gedit

Obviously you can modify this to open any app you prefer. Save it in your home directory somewhere and then make it executable using:

chmod a+x gedit-foreground.sh

Then install wmctrl using:

sudo apt-get install wmctrl

Now in Gnome Commander go to Settings->Options->Programs and set the viewer to:

/path/to/your/scripts/gedit-foreground.sh %s

Now when you press F3 or whatever your key for ‘open in external viewer’ is, your editor will be launched and be brought to the foreground instantly!

Dual boot Zorin 9 with Windows 8.1

I got myself an Alienware 13 laptop after years of building desktop PCs, and I have to say I’m impressed with it so far. I intend to use it for both Android and Windows development, and I prefer to develop Android on Linux, so I needed to set up dual-boot. The linux distro I use is Zorin OS, which is by far the best desktop linux distro I have used yet. However, with the UEFI firmware that all computers come with these days, dual-booting linux and Windows 8+ is not as straightforward as it was in the days of a regular old BIOS. Here’s how I got it to work, based on Nehal Wani’s excellent YouTube video (but without the need to use an Ubuntu live disk):

  1. The laptop came with Windows 8.1 OEM pre-installed on a 256GB drive, so if you haven’t installed Windows already, do that first.
  2. You MUST shrink your OS partition to make room for linux, which is easy to do using the built-in Disk Management utility in Windows (diskmgmt.msc). Just right-click your OS partition, choose ‘Shrink’, then enter the amount of space you want to reserve for linux – I chose 60GB.
    [screenshot: Windows Disk Management ‘Shrink’ dialog]
  3. Disable ‘Fast Startup’ in windows 8 (via Power Options -> choose what the power button does).
    [screenshot: Windows Power Options with ‘Fast Startup’ unticked]
  4. Go into your UEFI setup (I had to press F2 at boot time, but you may need to use Shift+restart to access it via Windows 8 logon screen).
  5. Disable ‘secure boot’, but you can leave everything else related to booting alone (i.e. you don’t need to enable ‘legacy boot’).
  6. Create a UEFI-enabled bootable USB key from the Zorin OS 9 ISO file using Rufus, making sure to select ‘GPT partition scheme for UEFI’.
    [screenshot: Rufus with ‘GPT partition scheme for UEFI’ selected]
  7. Make sure you are connected to the internet with a wired ethernet cable at this point, or the Zorin installation can fail because it needs to download updated packages for the UEFI-enabled GRUB bootloader.
  8. Plug in the USB key and, when booting, press F10 or whatever key you need to access the boot options. It should give you the option to boot from the UEFI-enabled USB key at this point – select that.
  9. The Zorin OS 9 menu should appear; either install directly or boot to the live environment and install from there – it makes no difference.
  10. When the installer gives you the option to download updates during installation, make sure you tick that checkbox.
  11. When the installer asks you where to install, it will NOT detect Windows 8 and therefore will NOT give you an option to install ‘alongside’. That’s fine, we will still achieve this regardless. Choose ‘Something else’ to specify partitioning manually.
  12. Now select the free space you reserved for linux in step 2 and create the following 3 partitions (using the plus button – credit to Nehal Wani for the screenshots from his YouTube video):
  13. First create a 5MB primary partition located at the end of this space for use as ‘Reserved BIOS boot area’:
    [screenshot: creating the 5MB ‘Reserved BIOS boot area’ partition]
  14. Next create a 2043MB primary partition located at the end of this space for use as ‘swap area’:
    [screenshot: creating the 2043MB swap partition]
  15. Finally create a primary partition located at the beginning of this space for use as ‘Ext4 journaling file system’ and mount point ‘/’:
    [screenshot: creating the Ext4 root (‘/’) partition]
  16. Proceed with the rest of the install as normal. When you are done and you restart, you should hit the Zorin GRUB-based bootloader, which should give you the option to boot Zorin (as the first preference) and also an option for the Windows bootloader, which will boot your existing install of Windows 8.1.

That’s it!
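Once you’re booted into Zorin, you can sanity-check that the firmware sees both loaders with efibootmgr – a quick check, assuming the efibootmgr package is installed (it usually is on UEFI installs):

# list UEFI boot entries; expect both the GRUB entry (usually 'ubuntu') and 'Windows Boot Manager'
sudo efibootmgr -v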

Note that this laptop comes with wifi hardware that is currently only supported in the very latest builds of linux, and is not supported by Zorin OS 9, which is based on Ubuntu 14.x. It may work with Zorin OS 10, which is based on Ubuntu 15.x, but I haven’t tested it and I’m not interested in an OS where the security updates last only a few months. The bug for this is here. I guess that’s still life when using linux on new hardware.

Can’t scp using git-shell

I recently created a new Android app on my local dev box, but when I went to copy it to my git server using:

scp -r -P 9876 AppProjectFolder.git git@myserver.address.com:/opt/git

I got this:

fatal: unrecognized command 'scp -r -t /opt/git'
lost connection

After researching, I found it is due to the git user on my server using git-shell (which restricts the commands that can be used). However, I’m sure I used to be able to use scp even with git-shell. I couldn’t find the root cause, so I just modified /etc/passwd to set the git user’s shell to /bin/bash temporarily, then switched it back after the files were copied. My server is running Fedora 19 – if anyone knows what can cause this and/or how to fix it, please post a comment.
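For reference, the temporary shell switch boils down to something like the following – the git-shell path is what I’d expect on Fedora, so check yours with ‘which git-shell’ first:

# give the git user a normal shell temporarily
sudo chsh -s /bin/bash git
# ...do the scp copy, then lock it back down
sudo chsh -s /usr/bin/git-shell git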

Auto-Complete not working in Eclipse

After upgrading from Linux Mint 13 to 17 (and Eclipse from Juno to Luna) I noticed that auto-complete (also called content assist), normally invoked with Ctrl+Space, was not working anymore. After searching I found it is not actually caused by Eclipse; instead, the IBus (keyboard) preferences eat Ctrl+Space so Eclipse never receives it! This is a very poor choice of global shortcut and it has been reported as a bug here. You can confirm Eclipse is not receiving it by going into Window->Preferences->General->Keys and attempting to enter Ctrl+Space for a key binding.

Until it is fixed, you can work around it by right-clicking the keyboard icon in the system tray (next to the clock), choosing ‘Preferences’ and selecting a different shortcut for ‘Next Input Method’.
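On some setups you can apparently make the same change from a terminal instead – a sketch, assuming the org.freedesktop.ibus.general.hotkey gsettings schema exists in your IBus version (run the get command first to check):

# see what IBus currently has bound as the input-method switch
gsettings get org.freedesktop.ibus.general.hotkey triggers
# move 'Next Input Method' off Ctrl+Space (example binding only)
gsettings set org.freedesktop.ibus.general.hotkey triggers "['<Super>space']"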

Why I ditched RAID and Greyhole for MHDDFS

Yes it’s a mouthful, but in my opinion, mhddfs is far and away the most beautiful and elegant solution for large data storage. It has taken me 10+ years of searching and trying, but now I’m finally at peace with my home-cooked NAS setup. In this article, I will explain how you too can have large amounts of easily expandable and redundant storage available on your network for the cheapest price in the simplest way possible.

1. The Beginning: RAID

When I realised I had enough data and devices to justify a server, the natural option for storage was of course RAID. As I was a cheapskate (and didn’t want to depend on a hardware RAID card that could itself fail), I used software RAID level 5 on Gentoo with 5 drives. Although this worked, it was a pain and I didn’t sleep well:

  • If any drive died (which several did), I would have to re-learn the commands to remove the drive from the RAID, shut down the server, install a replacement drive, re-learn the commands to add the drive back to the RAID and then wait nervously for the re-sync to complete, which usually took several days due to the size of my data. This was a horrible process because it happened just infrequently enough that I never got enough practice doing it, so every time it was a matter of googling and praying (the dance went roughly like the sketch after this list).
  • Expanding the array when space ran out was a similarly infrequent task that also required some re-learning each time, and hence was a nerve-racking exercise. In addition, drive size and type ideally had to match the existing drives, which made sourcing replacements a risk.
  • Since I was using RAID 5, if 2 drives died, BAM, all data was gone. This was always on my mind, and made point (1) even more stressful. Yes, I could have used other RAID levels, but 5 was the right balance between speed and redundancy each time I weighed up the options.
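For the record, the drive-replacement dance I kept having to re-learn looked roughly like this – the device and array names are examples only, not a recipe to follow blindly:

# mark the dead disk as failed and pull it out of the array
mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1
# after physically swapping in the replacement disk, add it back
mdadm --manage /dev/md0 --add /dev/sdc1
# then watch the re-sync grind along for a few days
cat /proc/mdstat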

2. The Middle: Greyhole

When building a home server, linux is usually the best choice, but getting your network set up right does require some linux know-how, and when you start trying to configure a firewall, you’d better hope you read the right blog or forum posts, or who knows whether you got it right. A few years ago I found out about Amahi, which is kind of a pre-packaged home server (based on Fedora, and now Ubuntu) that automatically sets up a computer with everything a typical home server would normally need, out of the box. And it gives you a great web-based dashboard / control panel that allows you to further configure and monitor your system. But mostly it just works, and I’m still using it today.

What especially interested me is that it is bundled with a thing called Greyhole, which is used to provide data storage via Samba for network clients. Greyhole is great in concept: it allows you to take a bunch of disks, of ANY size, format and location (local or network), and logically combine their storage capacity to create a single larger store which clients see as one volume. Unfortunately, the implementation appears to be severely flawed, as I found out the hard way after using Greyhole for about 6 months. Greyhole works by subscribing to writes/renames/deletes on the Samba share, which it records in a SQL database. Later, it ‘processes’ those actions by spreading files out across the different physical drives that are part of the storage pool you have created. Depending on how you configured redundancy in your pool, your files might end up on one, two, three or all physical drives. This is great in that you get quite good redundancy, you can easily expand the storage pool with any new disk you have lying around, and if any drive dies, you only lose the files on that drive, since individual files are not split across multiple drives.

The problem comes when you have a large number of small files and/or you perform a lot of operations on your file system, which Greyhole just can’t keep up with. This results in Greyhole falling behind on its tasks, which means your files stop getting copied / moved to the right places and, in the worst case, actually go missing (yes, this happened to me). Finally, Greyhole filled up my entire dropzone with millions of tiny log files, which killed my server completely after I ran out of inodes. At that point I was done with Greyhole.

3. The End: MHDDFS

Finally, after googling again, I saw mention of a small linux util, mhddfs, that seemed like it might just fit the bill. It is not heavily advertised, which is risky when dealing with file systems, but I’ve been using it for 2+ years and it has performed beautifully (zero data loss). There is only one blog post that explains how it works, and I will not repeat that here, so you should read this: Intro to MHDDFS.

Once you’ve read that, you’ll see it’s a simple matter of running a single linux command (or editing your fstab) to create your storage pool on boot-up of your server. Once created, you can simply share out your pool as a Samba share for your network, and mhddfs will take care of ensuring that when one drive in the pool is full, it seamlessly starts writing to the next drive. So clients just see one huge volume with lots of available space. Adding drives is as simple as editing your fstab, and you can pull out a drive at any time and access all the files on it directly (since you choose your own file system). Files are not split across drives.
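For example, a pool can be created by hand with a single command before committing it to fstab – a sketch using two of the mount points from my setup below:

# combine two already-mounted disks into one virtual volume at /mnt/media
mhddfs /mnt/mediaA,/mnt/mediaB /mnt/media -o allow_other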

Performance

Since mhddfs is a FUSE-based file system (i.e. it runs in user space), you may question its performance. I tested read/write speeds over a Gigabit network to locations in the storage pool and outside it, and can confirm we are talking about a very small performance degradation, something like 5% slower, which for me was more than acceptable.

Redundancy

mhddfs does not provide any redundancy feature, which is actually nice, since it does one job and does it well. This leaves you with lots of options to choose your own redundancy solution. Mine was simply to have a backup computer with the same storage capacity, and use mhddfs on that computer to create a ‘backup’ mirror storage pool. Then I simply use rsync as a nightly scheduled task to keep the two pools in sync.

Configuration

Here are my relevant fstab entries:

UUID=60933834-6e2e-snip /mnt/mediaA ext4    defaults        1 2
UUID=a21d2e76-e58b-snip /mnt/mediaB ext4    defaults        1 2
UUID=e53b4fef-600e-snip /mnt/mediaC ext4    defaults        1 2
UUID=b94100c4-2926-snip /mnt/mediaD ext4    defaults        1 2
UUID=a10c3249-ae19-snip /mnt/mediaE ext4    defaults        1 2
UUID=4309390b-399f-snip /mnt/mediaF ext4    defaults        1 2
mhddfs#/mnt/mediaA,/mnt/mediaB,/mnt/mediaC,/mnt/mediaD,/mnt/mediaE,/mnt/mediaF /mnt/media fuse nonempty,allow_other 0 0

So /mnt/media becomes my storage pool share, which you can see easily using df -h:

$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             1.8T  1.7T   18G  99% /mnt/mediaA
/dev/sdb1             1.8T  1.7T   18G  99% /mnt/mediaB
/dev/sdc1             1.8T  1.7T   11G 100% /mnt/mediaC
/dev/sdd1             1.8T  1.7T   12G 100% /mnt/mediaD
/dev/sde1             1.8T  342G  1.4T  20% /mnt/mediaE
/dev/sdf1             1.8T  196M  1.7T   1% /mnt/mediaF
/mnt/mediaA;/mnt/mediaB;/mnt/mediaC;/mnt/mediaD;/mnt/mediaE;/mnt/mediaF
11T  7.1T  3.2T  70% /mnt/media

My rsync command runs as a scheduled task (cron job) at 5:30am every day:

rsync -r -t -v --progress -s -e "ssh -p 1234" /mnt/media/coding user@backup:/mnt/media
rsync -r -t -v --progress -s -e "ssh -p 1234" /mnt/media/projects user@backup:/mnt/media
rsync -r -t -v --progress -s -e "ssh -p 1234" /mnt/media/graphics user@backup:/mnt/media

Note that the rsync command is executed for each individual root folder in the share, so I can choose which folders I want to make redundant. Also, I do not include the --delete option, so that if I accidentally delete something, I can recover it from the backup server at any time. Then periodically I can use Beyond Compare to compare the two storage pools and remove anything I truly don’t need. The first time you set up the backup storage pool, it will take quite a while for the rsync to complete (like several days), but thereafter it is amazingly quick at finding just the diffs and replicating only those. Yes, everything you heard about rsync is true – it is awesome.
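For completeness, the cron entry driving this looks something like the following – the wrapper script name is hypothetical, it just contains the rsync lines above:

# run the nightly mirror at 5:30am and keep a log of what was copied
30 5 * * * /root/bin/nightly-backup.sh >> /var/log/nightly-backup.log 2>&1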

So that’s my data storage problem solved. If you are looking for something that is scalable, powerful, flexible and, most importantly, simple, I recommend mhddfs. And for redundancy, rsync is about as simple as it gets.

UPDATE:
If you are seeing a ‘transport endpoint not connected’ error randomly with your mhddfs storage pool, you’ll want to install this forked version:

https://github.com/vdudouyt/mhddfs-nosegfault

Hopefully this is fixed in the maintainer’s version soon!