Tuesday, December 29, 2009

Playing DVDs in Windows 7

Oh, how I pine for the blissful days of Windows XP, when menus were still in fashion, the start menu didn't contain a tree view, applications didn't require a half-dozen clicks to launch (to expand the tree view, you see), and the user's colour theme choices were respected. Better days, when the customer was still king.

Lately I've been having difficulty playing back DVDs on my Windows 7 PC. My preferred media player, mplayerc aka Media Player Classic - and its close cousin, the more recently maintained Media Player Classic Home Cinema - failed with the error message "DVD: copy-protection failed". (I quote the message and versions in detail to increase the chances a search will find this page.) Windows Media Player has similar issues, though since they've hidden the menus and generally "simplified" the WMP UI, I can no longer figure out how to use it.

From my research and experimentation, it appears that ever since Windows Vista, RPC1 (aka region-free) DVD drives are no longer welcome. Windows detects the drive as being region-free, and prevents DVD video playback. The only commonly-found player that still works is VLC, if you can stand its broken UI. I cannot: for example, try full-screen playback (F11) with hidden controls (Ctrl+H) in VLC; you'll find that hiding the controls breaks the full-screen toggle, F11! I've had to flash my DVD drive with region-locked firmware just to play DVDs. Workarounds exist, such as using auto-reset firmware if it exists for the drive, or - what I'm probably going to do - using a separate DVD drive for each region, since DVD drives are exceedingly cheap - less than 20 GBP - and I only need two regions, Europe (2) and North America (1). (I note with slight puzzlement that DVD drives seem to be slightly cheaper in the UK than in the US; that's not normally the case.)

One thing really does puzzle me, though. All the error messages say that copy protection failed, when as far as I know region locking has nothing whatsoever to do with copy protection; it's about inhibiting global trade. With some irony, one of the workarounds I used to watch DVDs was to rip them and play them back from the file server across the network.

Monday, December 14, 2009

Commonly Confused Tidbits re .NET Garbage Collector

I recently read a less-than-informed blog post purporting to describe the .NET garbage collector, but when pressed to point to a better one, I couldn't find a single concise article that covered everything I thought should have been covered.

Type and Memory Safety

We all know the gist of GC: you allocate new objects, but don't have to worry about freeing them. Combined with type safety in the language, this gives you another kind of guarantee, memory safety: it's impossible to create a dangling pointer (a pointer to freed memory). Type safety's "safety" is compromised in the absence of memory safety. For example, in a type-safe yet non-GC language, if you allocate an object, make a copy of the reference, then free the object, you're left with a dangling pointer. If you then allocate another object of a similar size, depending on the memory allocator's implementation, you may have actually created a type hole - the dangling pointer may now point to the newly allocated object, but with the wrong type. This extra safety is one of the reasons why GC programs often have fewer bugs than ones using manual allocation.
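Safe, verifiable C# won't let you reproduce this, but the unsafe escape hatch can mimic a non-GC language. A minimal sketch (compile with /unsafe; the names and types are mine, chosen for illustration):

    using System;
    using System.Runtime.InteropServices;

    unsafe class DanglingDemo
    {
        static void Main()
        {
            // Allocate unmanaged memory and keep two pointers to it.
            int* p = (int*)Marshal.AllocHGlobal(sizeof(int));
            int* alias = p;
            *p = 42;

            Marshal.FreeHGlobal((IntPtr)p); // free through one pointer...

            // ...leaving 'alias' dangling. A fresh allocation of similar
            // size may land at the same address:
            double* q = (double*)Marshal.AllocHGlobal(sizeof(double));
            *q = 3.14;

            // If it did, 'alias' now reads the bytes of a double as an
            // int - a type hole. The output is allocator-dependent.
            Console.WriteLine(*alias);

            Marshal.FreeHGlobal((IntPtr)q);
        }
    }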

Precise compacting GC

.NET has a precise compacting GC. It's precise because .NET deals only in type-safe and memory-safe references into the managed heap, so it can know the precise layout of every allocated object. The opposite is conservative GC: a conservative GC doesn't necessarily know the types of variables stored on the stack or in registers, or the layouts of those types, so if a random number on the stack, in a register, or in the middle of an object happens to coincide with a valid object address, it can't collect that object. It's compacting because during a collection, it moves live (non-garbage) objects closer together, squeezing out the gaps left by dead objects which no longer have any references to them. This eliminates the problem of memory fragmentation, at least as far as the managed heap is concerned.

However, thinking of a precise compacting GC as "collecting garbage" is not always the best mental model. When looking at the asymptotic performance of GC, it's helpful to turn things around, and consider that the GC collects live objects and brings them together. Garbage is just the stuff that got overwritten during moving, or the stuff left at the end after everything has been compacted together. Why is this important? Because the cost of a precise compacting garbage collection is proportional to the number of live objects it has to trace through and move, rather than the amount of garbage it needs to collect. Compare this with manual allocation, where the cost of freeing memory is usually proportional to the number of objects freed (i.e. the amount of garbage), and something should become clear: the longer you can delay GC - i.e. the more garbage you can build up - the cheaper GC becomes, per unit of garbage, relative to manual allocation.

An example: imagine a singly linked list with 1000 elements, with a single variable in the program referring to the head of the list. Consider the problem of truncating the list, leaving only the head behind. With manual memory allocation, all 999 trailing items will need to be disposed of, with bookkeeping for each one. With garbage collection, only the first element need be moved to the start of the heap, and the free pointer (where new allocations begin) set to begin directly after it. The 999 other elements got "collected" for free.
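In C#, the whole truncation is a single store; a sketch (the Node type is mine):

    class Node
    {
        public int Value;
        public Node Next;
    }

    class Program
    {
        static void Main()
        {
            // Build the 1000-element list.
            Node head = new Node { Value = 0 };
            Node tail = head;
            for (int i = 1; i < 1000; i++)
                tail = tail.Next = new Node { Value = i };

            // Truncate. Under GC this is the whole job: the 999 trailing
            // nodes become unreachable, and the collector never does any
            // per-node work for them - it only ever touches live objects.
            head.Next = null;

            // Under manual allocation, the equivalent would be a loop
            // walking the tail and freeing all 999 nodes, one by one.
        }
    }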

This is why the efficiency of precise compacting GC is proportional to the amount of garbage it's allowed to build up between collections, and hence to the amount of memory it's allowed to consume. It's also the reason why GC programs tend to use more memory than manually allocated ones, and why small single-use objects, such as function return values, are very cheap and shouldn't be unduly avoided in GC languages. Congruent with this, code in a programming language with GC is often more naturally written, easier to read and easier to write, because of the freedom with which one can return freshly allocated objects without concern for who is going to collect them, or for the performance ramifications. Understanding this point is key to understanding why GC is so productivity-enhancing.

Resources

The garbage collector is for memory, not resources. I've written about this at length in the past, so I shall quote myself:

Disposal of resources in a long-running application using GC is not a performance issue. It's a correctness issue.

Garbage collection knows about memory. With most collectors, GC is only invoked when the GC is asked for more memory than is immediately available, taking specific tuning parameters into account. In other words, the GC is only sensitive to memory pressure. It doesn't know about resources that it sees only as pointer-sized handles, it doesn't know how much they "cost", and indeed those resources might be on a different machine or even spread across many different machines.

More critically, garbage collection gets its performance surplus over and above manual memory management by not collecting until as late as reasonably possible.

In .NET, the deterministic disposal mechanism is via the IDisposable interface, and in languages like C#, the using keyword.
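For instance (using a FileStream here purely as a familiar example of a disposable resource):

    using System;
    using System.IO;

    class Program
    {
        static void Main()
        {
            // Dispose is called when the using block exits, whether
            // normally or via an exception.
            using (FileStream stream = new FileStream("data.bin", FileMode.Open))
            {
                Console.WriteLine(stream.ReadByte());
            }
            // stream.Dispose() has run by here: the file handle is released
            // deterministically, not at some arbitrarily later GC.
        }
    }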

Finalizer vs IDisposable

Finalizers in .NET are overrides of the method System.Object::Finalize(void). In C#, the syntax looks like a C++ destructor, but other .NET languages have other syntaxes. If this method is overridden, the .NET GC treats the object specially. After it has become garbage, rather than just overwriting it during the compaction phase of GC, the GC will delay collection until the Finalize method of the instance has been executed by the finalizer thread. The GC makes a best effort to call the method; if the finalizer thread is blocked (e.g. some other finalizer hung), then the finalizer might never get called. This is another reason why deterministic disposal is preferred to relying on finalizers.
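For concreteness, the C# syntax (the class and handle names are illustrative):

    using System;
    using System.Runtime.InteropServices;

    // Hypothetical wrapper around a raw Win32 handle.
    class RawHandle
    {
        [DllImport("kernel32.dll")]
        static extern bool CloseHandle(IntPtr handle);

        IntPtr handle;

        public RawHandle(IntPtr h) { handle = h; }

        // Looks like a C++ destructor, but compiles to a protected
        // override of System.Object.Finalize. It runs on the finalizer
        // thread some time after the object becomes garbage - a safety
        // net, nothing more.
        ~RawHandle()
        {
            if (handle != IntPtr.Zero)
                CloseHandle(handle);
        }
    }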

Finalizers should only be used with objects that directly encapsulate an unmanaged resource, such as a file handle, a window handle, a socket, etc. A file stream, or a window object, or a network stream, should not normally implement a finalizer. IDisposable is a different matter. IDisposable should be implemented by every class with a finalizer, and by every class that logically owns (transitively) any other object that implements IDisposable. Calling Dispose on the root of the ownership tree should ripple through and call Dispose on everything beneath it; ownership trees and other deterministic lifetime approaches still exist with GC precisely because GC is just for memory, not resources. Most objects neither own nor encapsulate resources, however, so much of the burden is still lifted.
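A sketch of that rule (again, the names are mine): the class encapsulating the unmanaged resource gets both a finalizer and Dispose; its owner, which holds no unmanaged resource directly, gets Dispose alone and ripples the call down.

    using System;

    // Directly encapsulates an unmanaged resource: finalizer + Dispose.
    class HandleWrapper : IDisposable
    {
        IntPtr handle; // the unmanaged resource

        public void Dispose()
        {
            Release();
            GC.SuppressFinalize(this); // already cleaned up; skip finalization
        }

        ~HandleWrapper() { Release(); }

        void Release()
        {
            if (handle != IntPtr.Zero)
            {
                // close the OS handle here
                handle = IntPtr.Zero;
            }
        }
    }

    // Merely owns a disposable object: Dispose only, no finalizer.
    class LogFile : IDisposable
    {
        HandleWrapper handle = new HandleWrapper();

        public void Dispose()
        {
            // Disposal ripples down the ownership tree.
            handle.Dispose();
        }
    }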

Generations

I've read many posts that describe how generational GC works: very loosely, you have three generations, gen0 through gen2; allocations come from gen0; gen0 is collected if it's full; gen1 is collected if gen0 collection didn't free enough space; gen2 is collected if gen1 didn't free enough; more memory is allocated if gen2 didn't free enough; and if no more memory is available, an out-of-memory exception is thrown.

They also describe the hypothesis behind generational collections: that young objects are more likely to die (become garbage) sooner rather than later, while old objects are likely to survive longer.
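You can watch objects age through the generations with GC.GetGeneration (GC.Collect is used here only to force collections for the demonstration; don't do this in production code):

    using System;

    class Program
    {
        static void Main()
        {
            object o = new object();
            Console.WriteLine(GC.GetGeneration(o)); // 0: freshly allocated

            GC.Collect(); // o survives, so it is promoted
            Console.WriteLine(GC.GetGeneration(o)); // 1

            GC.Collect(); // survives again
            Console.WriteLine(GC.GetGeneration(o)); // 2: as old as it gets

            GC.Collect();
            Console.WriteLine(GC.GetGeneration(o)); // still 2

            GC.KeepAlive(o); // keep o reachable throughout the demo
        }
    }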

None of this, however, captures the intuition behind why there are three generations, rather than four, or five, or some other number. Let me explain.

Most applications have a roughly constant, steady state of total allocated memory. It's made up of two kinds of objects: objects associated with currently executing code (let's call them ephemeral objects), and objects linked into a global object graph which is shared across multiple instances and iterations of executing code (let's call them persistent objects). We want to avoid tracing through the persistent objects as much as possible, as it's unlikely that any objects which have survived for a long time have become garbage. Actually, making old objects garbage is a pathology to be avoided in .NET programs, and has been described as the mid-life crisis problem.

The ephemeral objects, on the other hand, are likely to have a high proportion of garbage. But we have a problem: when we perform a garbage collection on the ephemeral objects, there'll be some live objects left over, currently being used by executing code - but we still don't want them in with the persistent objects. To solve this problem, there needs to be a generation in the middle, a half-way house, before resorting to increasing the persistent object set.

This combination of three generations is almost ideally suited to server applications which have relatively little variance in total per-request allocation and tend towards an average per-request allocation which is less than the size of gen0+gen1 per thread. If one could time per-thread collections of gen0 and gen1 correctly, in between requests, then in theory GC would be almost completely free, as there would be almost no live objects. There are complications in practice (we can't manually invoke a GC for just a single thread, for one thing), but when designing applications and servers, it's very much worth keeping the generational design of the GC in mind and working with its assumptions, rather than against them.

For example, this means that caching objects can turn out to be a pessimization rather than an optimization, because keeping them around moves them into gen2, where it is more expensive to collect them when they get evicted from the cache than it was to create them in the first place. Depending on how expensive the objects are to create versus to cache, they may need to be cached permanently and updated as needed, or referred to through a WeakReference which won't keep them alive, or at worst managed manually one way or another, such as via unmanaged memory, or slots or ranges in an array, etc.
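A sketch of the WeakReference approach (Expensive is a hypothetical costly-to-build type):

    using System;

    class Expensive
    {
        // imagine an expensive construction here
    }

    class Cache
    {
        // The WeakReference doesn't keep its target alive: the GC remains
        // free to collect the cached object, so it never clogs up gen2.
        WeakReference cached = new WeakReference(null);

        public Expensive Get()
        {
            Expensive value = (Expensive)cached.Target;
            if (value == null)
            {
                value = new Expensive(); // rebuild on demand
                cached.Target = value;
            }
            return value;
        }
    }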