Win Library Tool [revised for Windows 8]

Posted in Software, Windows 7 on November 3rd, 2009 by admin – 367 Comments

UPDATE: This tool has been updated and now works on Windows 8. However, from my limited testing, the new Metro interface has extremely limited support for libraries. Using this tool I couldn’t find a way to make photos added from a network drive appear in the ‘Photos’ Metro app, even though they appear in the library in Windows File Explorer. Windows 8 is astonishingly bad.

Download Executable | Source Code

Windows libraries (introduced in Windows 7) could have been a really useful feature of Windows, but unfortunately they arrived in a slightly cut-down form out of the box.  Microsoft decided against exposing some really useful capabilities to users, such as adding network locations, which was pretty much the first thing I tried to do.  You get this message:

windows7libraryerror

Luckily, you can add network locations (and any other un-indexed locations), but it must be done programmatically.  MS supply a command-line utility, slutil.exe, a candidate for the worst-named executable in history.  I’m pretty sure it stands for shell_library_util.  Anyway, I decided to write a tool to make it easy to add network locations, and added a few other features as well:

  • Add network (UNC or mapped drive) and any other un-indexed folders to libraries.
  • Backup library configuration, such that a saved set of libraries can be instantly restored at any point (like after a re-install of the OS or for transfer between multiple computers).
  • Create a mirror of all libraries (using symbolic links) in [SystemDrive]:\libraries.  This means you can reference all your files using a much shorter path, and also provides another entry-point to your files in many places in the Operating System (e.g. file open/save dialogs).
  • Change a library’s icon.
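The backup feature is straightforward to reason about: library definitions live on disk as .library-ms files (on a default install, under %APPDATA%\Microsoft\Windows\Libraries), so backing them up is essentially a file copy. Here is a minimal sketch of that idea in Python; the paths and function names are my own illustration, not the tool’s actual implementation:

```python
import glob
import os
import shutil

def backup_libraries(libraries_dir, backup_dir):
    """Copy every .library-ms definition file to a backup folder."""
    os.makedirs(backup_dir, exist_ok=True)
    copied = []
    for path in glob.glob(os.path.join(libraries_dir, "*.library-ms")):
        shutil.copy2(path, backup_dir)
        copied.append(os.path.basename(path))
    return sorted(copied)

def restore_libraries(backup_dir, libraries_dir):
    """Restore a saved set of library definitions (the reverse copy)."""
    return backup_libraries(backup_dir, libraries_dir)
```

On a real system, `libraries_dir` would be something like `os.path.expandvars(r"%APPDATA%\Microsoft\Windows\Libraries")`; restoring is just the same copy in the other direction, which is why a saved set can be dropped onto a fresh install or a second machine.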

Hopefully it’s easy enough to use, so I don’t have to explain it :)

You can download it for free below.  (Note: This will only run on >= Windows 7.)

Download Installer | Source Code

I must give credit to Josh Smith for his TreeView CodeProject article, upon which this solution is modelled.

The application uses the Microsoft Windows API Code Pack to manipulate libraries, which I encourage you to check out if you are writing software to integrate with / take advantage of new features in Windows 7.

If you want to learn why and how libraries were introduced in Windows 7, including diving into the .library-ms file format, you can read this MSDN article.
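As that article explains, the .library-ms format is plain XML: each library lists its folders as URL entries nested inside location elements. As a rough illustration of reading one, here is a Python sketch; the sample document below is simplified and the element names are from my reading of the schema, so treat it as approximate rather than authoritative:

```python
import xml.etree.ElementTree as ET

# A cut-down, hypothetical .library-ms document with two folder locations.
SAMPLE = """<?xml version="1.0" encoding="utf-8"?>
<libraryDescription xmlns="http://schemas.microsoft.com/windows/2009/library">
  <searchConnectorDescriptionList>
    <searchConnectorDescription>
      <simpleLocation><url>C:\\Users\\me\\Pictures</url></simpleLocation>
    </searchConnectorDescription>
    <searchConnectorDescription>
      <simpleLocation><url>\\\\server\\photos</url></simpleLocation>
    </searchConnectorDescription>
  </searchConnectorDescriptionList>
</libraryDescription>"""

def library_folders(xml_text):
    """Return the folder paths referenced by a .library-ms document."""
    root = ET.fromstring(xml_text)
    # Match <url> elements regardless of the XML namespace.
    return [el.text for el in root.iter()
            if el.tag == "url" or el.tag.endswith("}url")]
```

Note the second location is a UNC path: the file format itself has no problem describing network folders, which is what makes the restriction in the Explorer UI so frustrating.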

Now featured on Tekzilla!

Run unit tests in a batch loop

Posted in Programming, Testing on February 16th, 2015 by admin – Comments Off

We had a case where a test was randomly crashing the MSTEST agent process, leaving no trace of the root cause. To reproduce it, we ran the test in a loop in both the MSTEST and xUnit frameworks using the following scripts, posted here for future reference:

rem Run unit tests in an assembly in a loop using xUnit (build 1705).
FOR /L %%i IN (1,1,5000) DO (
xunit.console.clr4 UnitTests.dll /xml C:\xunit-results\output%%i.xml
)

rem Run unit tests in an assembly in a loop using MSTEST.
FOR /L %%i IN (1,1,5000) DO (
mstest /testcontainer:UnitTests.dll /resultsfile:c:\mstest-results\output%%i.xml
)

CSR bluetooth dongle killed by Windows update

Posted in Bluetooth, Hardware, Windows 7 on January 20th, 2015 by admin – Comments Off

An optional update for my USB bluetooth dongle (which I used to connect my Logitech bluetooth keyboard) recently appeared on one of my Windows 7 machines. Trustingly, I proceeded with the update, only to find that afterwards my keyboard no longer worked! And there was no bluetooth icon in the system tray to inspect the status of bluetooth devices. After googling, I found out that this update has killed many people’s bluetooth hardware, and the solution is to uninstall it and go back to the default Windows drivers. The solution that worked for me (and others) is detailed on this thread, which I will summarise (paste) here:

To accomplish this, select “Update Driver Software” on “CSR BlueCore Nanosira” (the working device), then select “Browse my computer for driver software” and then “Let me pick from a list of device drivers on my computer”. In my case Windows presents the new driver from CSR (“CSR BlueCore Nanosira”) and two generic drivers, as you can see in screenshot 2.

I selected “Generic Bluetooth Radio” from the list and clicked OK. Then I had “Generic Bluetooth Radio” and “Microsoft Bluetooth Enumerator” back under “Bluetooth Radios”, and the Bluetooth icon appeared in the taskbar again. After that I was able to use my bluetooth mouse and keyboard again (without even pairing them again).

Thanks to davewebb8211!

Then you will want to hide that nasty windows update as follows:

bluetooth-update

No sound in Kodi / XBMC with ASUS Xonar

Posted in Hardware, Music, Software, Windows 7 on January 20th, 2015 by admin – Comments Off

I recently replaced the motherboard in my media centre due to on-going bluescreens, and I unwittingly selected a refurbished board with no on-board audio (ASUS Rampage Extreme II). The two main PCIE sound card manufacturers appear to be ASUS and Creative. I selected ASUS Xonar PCIE 7.1 DX for two reasons:

  1. I’ve used plenty of Creative hardware before and they are getting worse over time.
  2. It was the only one they had in the shop!

Anyway, the first problem was that the card didn’t physically fit in my PCIE x1_1 slot due to the CPU heatsink placement! Luckily, I found out that you can put a smaller PCIE card in any larger PCIE slot, so I was able to install it in my second PCIE x16 slot.

The second problem was that when I went to install the drivers from ASUS, the driver installation didn’t detect the card and just hung. I forced a reboot, ran the installation again, and amazingly it worked the second time around; sound in Windows 7 was now working.

When I fired up Kodi, however, there was no sound. Looking into the log file, I saw this:

CAESinkDirectSound::Initialize: cannot create secondary buffer (DSERR_UNSUPPORTED)

And after googling that, I saw that many people were having problems with ASUS Xonar cards in XBMC / Kodi. The main solution was to go into System -> Audio settings and change the output from DirectSound to WASAPI. This worked for me; however, it means that while Kodi is running, no other application can output sound, i.e. Kodi has exclusive access to the audio hardware. While not optimal, this is at least a workable solution. But I probably won’t be buying ASUS Xonar sound cards in the future.

Starting a legacy animation with delay from script in Unity

Posted in Programming, Unity on December 21st, 2014 by admin – Comments Off

Applies to: Unity 4.5.5f1 (and possibly other versions)

Unity has introduced a new animation system called Mecanim, but the existing animation system is still available. If you are using the old animation system, you’ll be using an ‘Animation’ component rather than an ‘Animator’ component on your GameObject. When you do so, you’ll likely see at least one of these warnings:

The AnimationClip ‘x’ used by the Animation component ‘y’ must be marked as Legacy.
Default clip could not be found in attached animations list.

There are two ways to solve this, depending on whether you have a model (e.g. via a prefab) or not.

You have a model
Select the model and on the Rig tab of the import settings, change the animation type to ‘legacy’.

rig-tab

You don’t have a model
In this case, select the animation and then in the inspector, RIGHT-click the ‘Inspector’ tab and choose ‘Debug’.

inspector-debug

Then change the animation type value from 2 to 1. Then change back to ‘Normal’ mode on the Inspector tab and when you run your app, this warning should be gone.

You can apply your legacy animation to a GameObject simply by adding an ‘Animation’ component and dragging the animation onto that component. By default, the animation will then play on startup of your app. But what if you want to start the animation after some delay, via a script? To do this, uncheck ‘Play Automatically’ on the Animation component, then add a new script component to your GameObject. The following code starts the animation after a 3 second delay, and also demonstrates how to achieve a fixed delay from a script.

using UnityEngine;
using System.Collections;

public class startanim : MonoBehaviour
{
	// Use this for initialization.
	void Start()
	{
		// Kick off the delayed animation as soon as the script initialises.
		StartCoroutine(Wait(3.0f));
		Debug.Log("This line runs *immediately* after starting the coroutine");
	}

	private IEnumerator Wait(float seconds)
	{
		yield return new WaitForSeconds(seconds);
		Debug.Log("3 seconds have passed, the animation is now starting.");
		this.animation.Play();
	}
}

Finally, the debug output can be viewed at the bottom of the Unity window:

debug-output

Auto-Complete not working in Eclipse

Posted in Android, Linux, Programming on November 2nd, 2014 by admin – Comments Off

After upgrading from Linux Mint 13 to 17 (and Eclipse from Juno to Luna), I noticed that auto-complete (also called content assist), which is normally invoked using Ctrl+Space, was no longer working. After searching, I found it is not actually caused by Eclipse; instead, the IBus (keyboard) preferences eat Ctrl+Space, so Eclipse never receives it! This is a very poor choice of global shortcut and it has been reported as a bug here. You can confirm Eclipse is not receiving it by going into Window->Preferences->General->Keys and attempting to enter Ctrl+Space for a key.

Until it is fixed, you can work around it by right-clicking the keyboard icon in the system tray (next to the clock), choosing ‘Preferences’, and selecting a different shortcut for ‘Next Input Method’.

Unknown Hard Error

Posted in Testing, Windows 7 on June 20th, 2014 by admin – Comments Off

This stupid error plagued me for almost a year before a colleague of mine found a fix. If you see this:

unknown-hardware-error

You can make the following registry change to avoid it. This was happening on machines that were running UI tests, so this top-most dialog was causing tests to fail, which is pretty horrible considering there is no explanation as to the cause. Apply this registry change and move on:

HKLM\SYSTEM\CurrentControlSet\Control\Windows\ErrorMode = 2
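If you need to apply this on many test machines, the same change can be saved as a .reg file and imported (this fragment is my reconstruction of the key above in the standard Registry Editor export format; back up the key first):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Windows]
"ErrorMode"=dword:00000002
```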

For reference see:

http://support.microsoft.com/kb/124873/en

Dispatch to the UI thread in WPF

Posted in Programming on June 18th, 2014 by admin – Comments Off

How to run code on the UI thread from a background thread in WPF:

System.Windows.Application.Current.Dispatcher.Invoke((Action)(() =>
{
// Your UI code goes here.
}));

Command history in Android Terminal

Posted in Android on May 15th, 2014 by admin – Comments Off

Here’s a tip if you are using the Terminal Emulator app by Jack Palevich in the Play Store:

To recall your previous command use VolumeUp + w (lower case) and VolumeUp + s (lower case) to move forward in the command history.

Because typing on a tablet soft keyboard sucks.

USB Driver for Nexus 5 on Windows 7

Posted in Android, Programming, Windows 7 on April 30th, 2014 by admin – Comments Off

When I plugged my Nexus 5 into my Windows 7 PC, it was not recognised, even after I used the Android SDK Manager to install the latest Google USB Driver. I finally worked out that there is a manual step you must take: check the tooltip to find out where the USB driver was downloaded to (I found one under the Program Files folder that was not installable, so make sure you refer to the tooltip), then use Device Manager to update the driver, pointing it to this folder. Then you can debug on your Nexus using ADB.

usb-driver

Why I ditched RAID and Greyhole for MHDDFS

Posted in Data Management, Linux on April 27th, 2014 by admin – 2 Comments

Yes it’s a mouthful, but in my opinion, mhddfs is far-and-away the most beautiful and elegant solution for large data storage. It has taken me 10+ years of searching and trying, but now I’m finally at peace with my home-cooked NAS setup. In this article, I will explain how you too can have large amounts of easily expandable and redundant storage available on your network for the cheapest price in the simplest way possible.

1. The Beginning: RAID

When I realised I had enough data and devices to justify a server, the natural option for storage was of course RAID. As I was a cheapskate (and I didn’t want to risk hardware failure), I used software raid level 5 on Gentoo with 5 drives. Although this worked, it was a pain and I didn’t sleep well:

  • If any drive died (which several did), I would have to re-learn the commands to remove the drive from the raid, shut down the server, install a replacement drive, re-learn the commands to add the drive back to the raid, and then wait nervously for the re-sync to complete, which usually took several days due to the size of my data.  This was a horrible process because it happened just infrequently enough that I never got enough practice doing it, so every time it was a matter of googling and praying.
  • Expanding the array when space ran out was a similarly infrequent task that also required some re-learning each time, and hence was a nerve-wracking exercise.  In addition, drive size and type ideally had to match the existing drives, which was a risk to source.
  • Since I was using RAID 5, if 2 drives died, BAM, all data was gone.  This was always on my mind, and made point (1) even more stressful.  Yes, I could have used other RAID levels, but 5 was the right balance between speed and redundancy each time I weighed up the options.

2. The Middle: Greyhole

When building a home server, Linux is usually the best choice, but getting your network set up right does require some Linux know-how, and when you start trying to configure the firewall, you’d better hope you read the right blog or forum posts, or who knows whether you got it right.  A few years ago I found out about Amahi, which is kind of a pre-packaged home server (based on Fedora, and now Ubuntu) that automatically sets up a computer with everything a typical home server would normally need, out of the box.  And it gives you a great web-based dashboard / control panel that allows you to further configure and monitor your system.  But mostly it just works, and I’m still using it today.

What especially interested me is that it is bundled with a thing called Greyhole, which is used to provide data storage via Samba for network clients.  Greyhole is great in concept: it allows you to take a bunch of disks, of ANY size, format and location (local or network), and logically combine all their storage capacity to create a single larger store which clients see as a single volume.  Unfortunately, the implementation appears to be severely flawed, as I found out the hard way after using Greyhole for about 6 months.

Greyhole works by subscribing to writes/renames/deletes on the Samba share, which it records in a SQL database.  Later, it ‘processes’ those actions by spreading files out across the different physical drives that are part of the storage pool you have created.  Depending on how you configured redundancy in your pool, your files might end up on one, two, three or all physical drives.  This is great in that you get quite good redundancy, you can easily expand the storage pool with any new disk you have lying around, and if any drive dies, you only lose the files on that drive, since individual files are not split across multiple drives.

The problem comes when you have a large number of small files and/or you perform a lot of operations on your file system, which Greyhole just can’t keep up with.  Greyhole falls behind on its tasks, your files stop getting copied / moved to the right places and, in the worst case, actually go missing (yes, this happened to me).  Finally, Greyhole filled up my entire dropzone with millions of tiny log files, which killed my server completely after I ran out of inodes.  At that point I was done with Greyhole.

3. The End: MHDDFS

Finally, after googling again, I saw mention of a small Linux util, mhddfs, that seemed like it might just fit the bill.  It is not heavily advertised, which is a risk when dealing with file systems, but I’ve been using it for 2+ years and it has performed beautifully (zero data loss).  There is only one blog post that explains how it works, and I will not repeat it here, so you should read this: Intro to MHDDFS.

Once you’ve read that, you’ll see it’s a simple matter of running a single Linux command (or editing your fstab) to create your storage pool on boot-up of your server.  Once created, you can simply share out your pool as a Samba share for your network, and mhddfs will ensure that when one drive in the pool is full, it seamlessly starts writing to the next drive.  So clients just see one huge volume with lots of available space.  Adding drives is as simple as editing your fstab, and you can pull a drive out at any time and access all the files on it directly (since you choose your own file system).  Files are not split across drives.
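To give a feel for the behaviour: as I understand mhddfs’s default policy, a new file goes to the first drive in the pool that has free space above a threshold (the mlimit option, 4 GB by default), falling back to the drive with the most free space when every drive is below it. Here is a rough Python sketch of that selection logic; this is my own illustration, not mhddfs code, so check the man page for the authoritative rules:

```python
MLIMIT = 4 * 2**30  # mhddfs's default free-space threshold (4 GB)

def choose_drive(drives, mlimit=MLIMIT):
    """Pick the mount point a new file would land on.

    drives: list of (mount_point, free_bytes) tuples, in pool order.
    """
    for mount, free in drives:
        if free >= mlimit:
            # First drive with comfortable headroom wins.
            return mount
    # No drive has mlimit spare: fall back to the emptiest drive.
    return max(drives, key=lambda d: d[1])[0]
```

This is why, in the df output further down, the first few nearly-full drives stop receiving new files and writes flow on to the emptier drives at the end of the pool.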

Performance

Since MHDDFS is a FUSE-based file system (i.e. it runs in user-space), you may question its performance.  I tested read/write speeds over a Gigabit network to locations both in the storage pool and outside it, and can confirm the performance degradation is very small, something like 5% slower, which for me was more than acceptable.

Redundancy

MHDDFS does not provide any redundancy feature, which is actually nice, since it does one job and does it well.  This leaves you with lots of options to choose your own redundancy solution.  Mine was simply to have a backup computer with the same storage capacity, and use MHDDFS on that computer to create a ‘backup’ mirror storage pool.  Then I simply use rsync as a nightly scheduled task to keep the two pools in sync.

Configuration

Here are my relevant fstab entries:

UUID=60933834-6e2e-snip /mnt/mediaA ext4    defaults        1 2
UUID=a21d2e76-e58b-snip /mnt/mediaB ext4    defaults        1 2
UUID=e53b4fef-600e-snip /mnt/mediaC ext4    defaults        1 2
UUID=b94100c4-2926-snip /mnt/mediaD ext4    defaults        1 2
UUID=a10c3249-ae19-snip /mnt/mediaE ext4    defaults        1 2
UUID=4309390b-399f-snip /mnt/mediaF ext4    defaults        1 2
mhddfs#/mnt/mediaA,/mnt/mediaB,/mnt/mediaC,/mnt/mediaD,/mnt/mediaE,/mnt/mediaF /mnt/media fuse nonempty,allow_other 0 0

So /mnt/media becomes my storage pool share, which you can see easily using df -h:

$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             1.8T  1.7T   18G  99% /mnt/mediaA
/dev/sdb1             1.8T  1.7T   18G  99% /mnt/mediaB
/dev/sdc1             1.8T  1.7T   11G 100% /mnt/mediaC
/dev/sdd1             1.8T  1.7T   12G 100% /mnt/mediaD
/dev/sde1             1.8T  342G  1.4T  20% /mnt/mediaE
/dev/sdf1             1.8T  196M  1.7T   1% /mnt/mediaF
/mnt/mediaA;/mnt/mediaB;/mnt/mediaC;/mnt/mediaD;/mnt/mediaE;/mnt/mediaF
11T  7.1T  3.2T  70% /mnt/media

My rsync command runs as a scheduled task (cron job) at 5:30am every day:

rsync -r -t -v --progress -s -e "ssh -p 1234" /mnt/media/coding user@backup:/mnt/media
rsync -r -t -v --progress -s -e "ssh -p 1234" /mnt/media/projects user@backup:/mnt/media
rsync -r -t -v --progress -s -e "ssh -p 1234" /mnt/media/graphics user@backup:/mnt/media

Note that the rsync command is executed for each individual root folder in the share, so I can choose which folders I want to make redundant.  Also, I do not include the --delete option, so if I accidentally delete something, I can recover it from the backup server at any time.  Then periodically I can use Beyond Compare to compare the two storage pools and remove anything I truly don’t need.  The first time you set up the backup storage pool, it will take quite a while for the rsync to complete (like several days), but thereafter it is amazingly quick, finding just the diffs and replicating only those.  Yes, everything you heard about rsync is true: it is awesome.

So that’s my data storage problem solved.  If you are looking for something that is scalable, powerful, flexible and, most importantly, simple, I recommend mhddfs.  And for redundancy, rsync is about as simple as it gets.