Win Library Tool [revised for Windows 8]

Posted in Software, Windows 7 on November 3rd, 2009 by admin – 367 Comments

UPDATE: This tool has been updated and now works on Windows 8; however, from my limited testing, the new Metro interface has extremely limited support for libraries. Unfortunately, using this tool I couldn't find a way to make photos added from a network drive appear in the 'Photos' Metro app, even though they appear in the library in Windows File Explorer. Windows 8 is astonishingly bad.

Download Executable | Source Code

Windows libraries (introduced in Windows 7) could have been a really useful feature of Windows, but unfortunately they arrived in a slightly cut-down form out of the box.  Microsoft decided against exposing some really useful capabilities to users, like adding network locations, which was pretty much the first thing I tried to do.  You get this message:

windows7libraryerror

Luckily, you can add network locations (and any other un-indexed locations), but it must be done programmatically.  MS supplies a command-line utility, slutil.exe, a candidate for the worst-named executable in history.  Pretty sure it stands for shell_library_util.  Anyway, I decided to write a tool to make it easy to add network locations, and added a few other features as well:

  • Add network (UNC or mapped drive) and any other un-indexed folders to libraries.
  • Backup library configuration, such that a saved set of libraries can be instantly restored at any point (like after a re-install of the OS or for transfer between multiple computers).
  • Create a mirror of all libraries (using symbolic links) in [SystemDrive]:\libraries.  This means you can reference all your files using a much shorter path, and also provides another entry-point to your files in many places in the Operating System (e.g. file open/save dialogs).
  • Change a library’s icon.

Hopefully it’s easy enough to use, so I don’t have to explain it :)

You can download it for free below.  (Note: This will only run on Windows 7 or later.)

Download Installer | Source Code

I must give credit to Josh Smith for his TreeView CodeProject article, upon which this solution is modelled.

The application uses the Windows API Code Pack to manipulate libraries, which I encourage you to check out if you are writing software to integrate with / take advantage of new features in Windows 7.
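
As a taste of what the Code Pack offers, here is a minimal, untested sketch of adding an un-indexed UNC share to an existing library via its ShellLibrary class (the library name and share path below are just placeholders):

using Microsoft.WindowsAPICodePack.Shell;

class AddNetworkFolder
{
    static void Main()
    {
        // Open the existing "Documents" library for writing (false = not read-only),
        // then add a network share that the Explorer UI would normally refuse.
        using (ShellLibrary library = ShellLibrary.Load("Documents", false))
        {
            library.Add(@"\\server\share\docs");   // placeholder UNC path
        }
    }
}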

If you want to learn why and how libraries were introduced in Windows 7, including diving into the .library-ms file format, you can read this MSDN article.

Now featured on Tekzilla!

Starting a legacy animation with delay from script in Unity

Posted in Programming, Unity on December 21st, 2014 by admin – Be the first to comment

Applies to: Unity 4.5.5f1 (and possibly other versions)

Unity has introduced a new animation system called Mecanim, but the existing animation system is still available to use. If you are using the old animation system, you'll be using an 'Animation' component rather than an 'Animator' component on your GameObject. When you do, you'll likely see at least one of these warnings:

The AnimationClip ‘x’ used by the Animation component ‘y’ must be marked as Legacy.
Default clip could not be found in attached animations list.

There are two ways to solve this, depending on whether you have a model (e.g. via a prefab) or not.

You have a model
Select the model and on the Rig tab of the import settings, change the animation type to ‘legacy’.

rig-tab

You don’t have a model
In this case, select the animation and then in the inspector, RIGHT-click the ‘Inspector’ tab and choose ‘Debug’.

inspector-debug

Change the animation type value from 2 to 1, then switch the Inspector tab back to 'Normal' mode; when you run your app, the warning should be gone.

Applying your legacy animation to a GameObject is simple: add an 'Animation' component and drag the animation you want to play onto that component. In this case, the animation will play on startup of your app by default. But what if you want to start the animation after some delay from a script? To do this, uncheck 'Play Automatically' on the Animation component, then add a new script component to your GameObject. You can use the following code to start the animation after a 3 second delay. This also demonstrates how to achieve a fixed delay from a script.

using UnityEngine;
using System.Collections;

public class startanim : MonoBehaviour {

	private bool started = false;

	// Use this for initialization
	void Start () 
	{
	}
	
	// Update is called once per frame
	void Update ()
	{
		if (!started)
		{
			executeWait();
			started = true;
		}
	}

	private void executeWait()
	{
		StartCoroutine(Wait(3.0f));
		Debug.Log("This line runs *immediately* after starting the Coroutine");
	}
	
	private IEnumerator Wait(float seconds)
	{
		yield return new WaitForSeconds(seconds);
		Debug.Log("3 seconds has passed, animation is now starting.");
		// 'animation' is the shortcut to the legacy Animation component on this GameObject.
		this.animation.Play ();
	}
}

Finally, the debug output can be viewed at the bottom of the Unity window:

debug-output

Auto-Complete not working in Eclipse

Posted in Android, Linux, Programming on November 2nd, 2014 by admin – Comments Off

After upgrading from Linux Mint 13 to 17 (and Eclipse from Juno to Luna) I noticed that auto-complete (also called content assist), which is normally invoked using Ctrl+Space, was no longer working. After searching, I found it is not actually caused by Eclipse; instead, the IBus (keyboard) preferences swallow Ctrl+Space so Eclipse never receives it! This is a very poor choice of global shortcut and it has been reported as a bug here. You can confirm Eclipse is not receiving it by going into Window->Preferences->General->Keys and attempting to enter Ctrl+Space for a key.

Until it is fixed, you can work around it by right-clicking the keyboard icon in the system tray (next to the clock), choosing 'Preferences' and selecting a different shortcut for 'Next Input Method'.

Unknown Hard Error

Posted in Testing, Windows 7 on June 20th, 2014 by admin – Comments Off

This stupid error plagued me for almost a year before a colleague of mine found a fix. If you see this:

unknown-hardware-error

You can make the following registry change to avoid this. It was happening on machines that were running UI tests, so this top-most dialog was causing tests to fail, which is pretty horrible considering the dialog gives no explanation of the cause. Apply this reg change and move on:

HKLM\SYSTEM\CurrentControlSet\Control\Windows\ErrorMode = 2  (REG_DWORD)
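
If you need to roll this out across a fleet of test machines, a small C# sketch along these lines should work (untested; it just writes the value above, needs to run as administrator, and a reboot may be required before it takes effect):

using Microsoft.Win32;

class SetErrorMode
{
    static void Main()
    {
        // ErrorMode = 2 suppresses hard error pop-ups system-wide (see the KB article below).
        // The key already exists on a standard install; SetValue just adds/updates the value.
        Registry.SetValue(
            @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Windows",
            "ErrorMode",
            2,
            RegistryValueKind.DWord);
    }
}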

For reference see:

http://support.microsoft.com/kb/124873/en

Dispatch to the UI thread in WPF

Posted in Programming on June 18th, 2014 by admin – Comments Off

How to run code on the UI thread from a background thread in WPF:

System.Windows.Application.Current.Dispatcher.Invoke((Action)(() =>
{
    // Your UI code goes here.
}));
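
For context, here's a minimal sketch of how this typically gets used from a window's code-behind: a hypothetical button handler kicks off some slow work on a background task and then marshals the result back to the UI thread (the handler name and the window-title update are placeholders for your own code):

using System;
using System.Threading.Tasks;
using System.Windows;

public partial class MainWindow : Window
{
    private void OnRefreshClick(object sender, RoutedEventArgs e)
    {
        Task.Run(() =>
        {
            // Pretend this is some long-running work on a background thread.
            string result = DateTime.Now.ToLongTimeString();

            Application.Current.Dispatcher.Invoke((Action)(() =>
            {
                // Back on the UI thread, so touching UI elements is safe.
                Title = "Last refreshed at " + result;
            }));
        });
    }
}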

Command history in Android Terminal

Posted in Android on May 15th, 2014 by admin – Comments Off

Here's a tip if you are using the Terminal Emulator app by Jack Palevich from the Play Store:

To recall your previous command, use Volume Up + w (lower case); use Volume Up + s (lower case) to move forward in the command history.

Because typing on a tablet soft keyboard sucks.

USB Driver for Nexus 5 on Windows 7

Posted in Android, Programming, Windows 7 on April 30th, 2014 by admin – Comments Off

When I plugged my Nexus 5 into my Windows 7 PC, it was not recognised, even after I used the Android SDK Manager to install the latest Google USB Driver. I finally worked out that there is a manual step you must take: check the tooltip in the SDK Manager to find out where the USB driver was downloaded to (I found one under the Program Files folder that was not installable, so make sure you use the path from the tooltip), then use Device Manager to update the driver, pointing it at that folder. After that you can debug on your Nexus using ADB.

usb-driver

Why I ditched RAID and Greyhole for MHDDFS

Posted in Data Management, Linux on April 27th, 2014 by admin – 2 Comments

Yes it’s a mouthful, but in my opinion, mhddfs is far-and-away the most beautiful and elegant solution for large data storage. It has taken me 10+ years of searching and trying, but now I’m finally at peace with my home-cooked NAS setup. In this article, I will explain how you too can have large amounts of easily expandable and redundant storage available on your network for the cheapest price in the simplest way possible.

1. The Beginning: RAID

When I realised I had enough data and devices to justify a server, the natural option for storage was of course RAID. As I was a cheapskate (and I didn't want to risk hardware failure), I used software RAID level 5 on Gentoo with 5 drives. Although this worked, it was a pain and I didn't sleep well:

  • If any drive died (which several did), I would have to re-learn the commands to remove the drive from the RAID, shut down the server, install a replacement drive, re-learn the commands to add the drive back to the RAID, and then wait nervously for the re-sync to complete, which usually took several days due to the size of my data.  This was a horrible process because it happened just infrequently enough that I never got enough practice doing it, so every time it was a matter of googling and praying.
  • Expanding the array when space ran out was a similarly infrequent task that also required some re-learning each time and hence was a nerve-racking exercise.  In addition, drive size and type ideally had to match the existing drives, which made sourcing replacements a risk.
  • Since I was using RAID 5, if 2 drives died, BAM, all data was gone.  This was always on my mind, and made point (1) even more stressful.  Yes I could have used other RAID levels, but 5 was the right balance between speed and redundancy each time I weighed up the options.

2. The Middle: Greyhole

When building a home server, Linux is usually the best choice, but getting your network set up right does require some Linux know-how, and when you start trying to configure the firewall, you'd better hope you read the right blog or forum posts, or who knows whether you got it right.  A few years ago I found out about Amahi, which is kind of a pre-packaged home server (based on Fedora, and now Ubuntu) that automatically sets up a computer with everything a typical home server would normally need, out of the box.  And it gives you a great web-based dashboard / control panel that allows you to further configure and monitor your system.  But mostly it just works, and I'm still using it today.

What especially interested me is that it is bundled with a thing called Greyhole, which is used to provide data storage via Samba for network clients.  Greyhole is great in concept: it allows you to take a bunch of disks, of ANY size, format and location (local or network), and logically combine their storage capacity to create a single larger store which clients see as a single volume.  Unfortunately, the implementation appears to be severely flawed, as I found out the hard way after using Greyhole for about 6 months.

Greyhole works by subscribing to writes/renames/deletes on the Samba share, which it records in a SQL database.  Later, it 'processes' those actions by spreading files out across the different physical drives that are part of the storage pool you have created.  Depending on how you configured redundancy in your pool, your files might end up on one, two, three or all physical drives.  This is great in that you get quite good redundancy, you can easily expand the storage pool with any new disk you have lying around, and if any drive dies, you only lose the files on that drive, since individual files are not split across multiple drives.

The problem comes when you have a large number of small files and/or you perform a lot of operations on your file system, which Greyhole just can't keep up with.  Greyhole basically falls behind on its tasks, which means your files stop getting copied / moved to the right places and, in the worst case, actually go missing (yes, this happened to me).  Finally, Greyhole filled up my entire dropzone with millions of tiny log files, which killed my server completely after I ran out of inodes.  At that point I was done with Greyhole.

3. The End: MHDDFS

Finally, after googling again, I saw mention of a small Linux util, mhddfs, that seemed like it might just fit the bill.  It is not heavily advertised, which is risky when dealing with file systems, but I've been using it for 2+ years and it has performed beautifully (zero data loss).  There is only one blog post that explains how it works, and I will not repeat it here, so you should read this: Intro to MHDDFS.

Once you've read that, you'll see it's a simple matter of running a single Linux command (or editing your fstab) to create your storage pool on boot-up of your server.  Once created, you can simply share out your pool as a Samba share for your network, and mhddfs will take care of ensuring that when one drive in the pool is full, it seamlessly starts writing to the next drive.  So clients just see one huge volume with lots of available space.  Adding drives is as simple as editing your fstab, and you can pull out a drive at any time and access all the files on it directly (since you choose your own file system).  Files are not split across drives.

Performance

Since MHDDFS is a FUSE-based file system (i.e. it runs in user space), you may question its performance.  I tested read/write speeds over a Gigabit network to locations inside the storage pool and outside of it, and can confirm we are talking about a very small performance degradation, something like 5% slower, which for me was more than acceptable.

Redundancy

MHDDFS does not provide any redundancy feature, which is actually nice, since it does one job and does it well.  This leaves you with lots of options to choose your own redundancy solution.  Mine was simply to have a backup computer with the same storage capacity, and use MHDDFS on that computer to create a ‘backup’ mirror storage pool.  Then I simply use rsync as a nightly scheduled task to keep the two pools in sync.

Configuration

Here are my relevant fstab entries:

UUID=60933834-6e2e-snip /mnt/mediaA ext4    defaults        1 2
UUID=a21d2e76-e58b-snip /mnt/mediaB ext4    defaults        1 2
UUID=e53b4fef-600e-snip /mnt/mediaC ext4    defaults        1 2
UUID=b94100c4-2926-snip /mnt/mediaD ext4    defaults        1 2
UUID=a10c3249-ae19-snip /mnt/mediaE ext4    defaults        1 2
UUID=4309390b-399f-snip /mnt/mediaF ext4    defaults        1 2
mhddfs#/mnt/mediaA,/mnt/mediaB,/mnt/mediaC,/mnt/mediaD,/mnt/mediaE,/mnt/mediaF /mnt/media fuse nonempty,allow_other 0 0

So /mnt/media becomes my storage pool share, which you can see easily using df -h:

$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             1.8T  1.7T   18G  99% /mnt/mediaA
/dev/sdb1             1.8T  1.7T   18G  99% /mnt/mediaB
/dev/sdc1             1.8T  1.7T   11G 100% /mnt/mediaC
/dev/sdd1             1.8T  1.7T   12G 100% /mnt/mediaD
/dev/sde1             1.8T  342G  1.4T  20% /mnt/mediaE
/dev/sdf1             1.8T  196M  1.7T   1% /mnt/mediaF
/mnt/mediaA;/mnt/mediaB;/mnt/mediaC;/mnt/mediaD;/mnt/mediaE;/mnt/mediaF
11T  7.1T  3.2T  70% /mnt/media

My rsync command runs as a scheduled task (cron job) at 5:30am every day:

rsync -r -t -v --progress -s -e "ssh -p 1234" /mnt/media/coding user@backup:/mnt/media
rsync -r -t -v --progress -s -e "ssh -p 1234" /mnt/media/projects user@backup:/mnt/media
rsync -r -t -v --progress -s -e "ssh -p 1234" /mnt/media/graphics user@backup:/mnt/media

Note that the rsync command is executed for each individual root folder in the share, so I can choose which folders I want to make redundant.  Also, I do not include the --delete option, so that if I accidentally delete something, I can recover it from the backup server at any time.  Then periodically I can use Beyond Compare to compare the two storage pools and remove anything I truly don't need.  The first time you set up the backup storage pool, it will take quite a while for the rsync to complete (like several days), but thereafter it is amazingly quick at finding just the diffs and replicating only those.  Yes, everything you heard about rsync is true: it is awesome.

So that's my data storage problem solved. If you are looking for something that is scalable, powerful, flexible and, most importantly, simple, I recommend mhddfs.  And then for redundancy, rsync is about as simple as it gets.

Disabling NumLock Key

Posted in Linux, Windows 7 on April 27th, 2014 by admin – Comments Off

For me, the NumLock and CapsLock keys are relics of the past, and are now the two most annoying keys on the standard keyboard. Every time I want to type in a number with more than, say, 3 digits, I find it easier to use the number pad, but I am forced to look down and check the num lock light and/or type something to see whether num lock is activated, or otherwise risk having to re-type everything because it was off. Which, surprisingly, is most of the time for some reason, even though I ensure num lock is activated on boot-up in the BIOS. The caps lock key is not as bad, as I don't seem to bump it on as much as num lock, but heck, when was the last time you actually wanted to write an entire sentence in ALL CAPS? I hope not recently. Anyway, there's an easy way to disable these keys on Windows and Linux as follows:

Linux (Mint)
1. Open Keyboard Layout preferences
2. On the Layouts tab, press ‘Options’, then you can configure caps lock and numlock to behave more sensibly as shown in the following two screenshots:

linux-caps-lock

linux-num-lock

Windows 7
1. Download SharpKeys (you can just get the portable EXE version).
2. Configure the app to disable num-lock and perhaps map caps lock to shift or calculator app.
3. Before pressing ‘Write Registry’, make sure NUM LOCK is in the state you want it (for me – ON), then press ‘Write Registry’ and reboot.

Ctrl+Alt+F7 on Linux

Posted in Linux on October 24th, 2013 by admin – Comments Off

Apparently the Ctrl+Alt+Fx keys on Linux switch console sessions, and #7 happens to be the main windowing session. So pressing this turns your screen black and pauses all your apps with no explanation. Which is pretty frightening really. Luckily you can get back by pressing Ctrl+Alt+F8. Hopefully you have a second computer or tablet/phone handy to find this blog post if that happens :) BTW I'm using Linux Mint, YMMV with other distros.

Confusing A .Net Assembly

Posted in Programming, Security on August 1st, 2013 by admin – Comments Off

UPDATE: I no longer recommend using Confuser (or any other obfuscator tool), after discovering that many anti-virus programs flag obfuscated assemblies as false positives. If that is a deal-breaker for you (as it is for me) then it seems there is no good way to secure (ok, make it slightly more difficult to reverse engineer) a .Net app. Which sucks.

Recently I had to obfuscate a .Net (C# WPF) project that was a license generator, and after researching it seems that the best free tool to use for this is Confuser. It comes in both a GUI and console version, so you can try it, and then integrate it as a post-build step if you're happy. And I was. ILSpy showed pretty much everything was replaced with foreign / non-printable characters. FYI the console version (Confuser.Console.exe) takes a .crproj file, which is created by first running the GUI app and then pressing save in that tool. Then it's a simple matter of adding a post-build step to call the console version, passing the path to the .crproj file, and you will have your obfuscated assembly created in a subfolder called 'Confused'.

A few gotchas:

  1. The generated CRPROJ file contains absolute paths, which of course need to be replaced (manually) with relative paths. Remember that the current directory will be the build output directory. My CRPROJ file looks like this:
    
    <?xml version="1.0" encoding="utf-8"?>
    <project outputDir=".\Confused" 
       snKey="" 
       preset="normal" 
       xmlns="http://confuser.codeplex.com">
       <assembly path=".\LicenceGenerator.exe" isMain="true" />
    </project>
    
  2. The generated CRPROJ file contains the path to the executable to obfuscate, which usually includes a debug / release identifier. To fix this, you can simply call the post-build step conditionally, like this:

    if $(ConfigurationName) == Release $(SolutionDir)Confuser\Confuser.Console.exe $(SolutionDir)$(SolutionName).Confusor.crproj

  3. Note that this post-build step needs to be all on one line in your project properties field in Visual Studio.

Happy with that!