There was a post on the Delphi newsgroups that stuck in my head for some reason, and I felt I had to write a reply. The reply ended up being a lot longer than I originally intended, because I felt I had to justify my stance. I'm reposting it here in edited form.
"Virtual machine" has acquired pejorative overtones due to historical and social reasons that are probably too emotive to go into. Suffice it to say that I think it's another case of "good ideas don't win, proponents of bad ideas die out instead".
The way I see it, a virtual machine (in the context of programming language implementations) is a software implementation of an abstract machine with a closed-by-default set of semantics.
Let's take that definition apart:
- software implementation: Here, I don't mean that the machine cannot be implemented in hardware. Rather, I mean that if it's going to be "virtual", it is usually implemented in software, and that gives rise to certain characteristics, which in turn imbue "virtual machine" with extra shades of meaning. In practice, software implementation beats hardware implementation, largely because of the flexibility it affords.
- abstract machine: Every programming language has an abstract machine, implicit or explicit, in its definition; otherwise its promised semantics are meaningless - you need a machine at some point to actually do things and have effects. So the abstract machine bit isn't controversial; it's its qualities that matter. Note that I differentiate between two different abstract machine concepts: a language's abstract machine, which it uses to model effectful operations, and a platform as an abstract machine. A CPU (+ memory + etc.) specification is an abstract machine, and a platform; the physical device, however, is a real machine, running on the laws of physics.
- closed-by-default semantics: Here, I mean that at the abstraction level of the abstract machine in question, undefined behaviour is outlawed. In defining our machine, we humbly accept our human frailties, and do our best to prevent "unknown unknowns" becoming a problem by reducing the scope of the problem domain. We limit the power of the machine, in other words.
Since we do, eventually, want to be able to talk to hardware, legacy software and the rest of the real world, there do need to be carefully controlled holes and conduits built-in. But they're opt-in, not opt-out.
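Those opt-in conduits can be sketched with Python's ctypes module (a stand-in example of my choosing, not something from the original post): native code is reachable from inside the managed world, but only through an explicit, deliberately imported escape hatch, never by default.

```python
# Opt-in conduit to native code: nothing native is reachable until the
# program explicitly imports the FFI machinery and names what it wants.
import ctypes
import ctypes.util

# Locate and load the C standard library (POSIX-only illustration).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the foreign function's signature before crossing the boundary.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]

print(libc.abs(-42))  # calls the native C abs(), prints 42
```

Everything outside this declared hole remains governed by the virtual machine's closed semantics.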
Let's look at some of the ramifications of this conception of VMs.
- Software implementation delivers a tremendous amount of flexibility. Some examples: runtime metaprogramming (e.g. runtime code generation, eval); dynamic live optimization (e.g. the HotSpot JVM); auto-tuning garbage collection; run-time type-aware linking (solving the template instantiation code-bloat problem); rich error diagnostics (e.g. breaking into a REPL in dynamic languages).
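The first of those examples, runtime code generation, can be sketched in a few lines of Python (a toy of my own construction, not a real optimizer): the program builds source text at run time, compiles it, and runs it inside the same machine.

```python
# Runtime code generation: build a specialized function from source text
# at run time, then compile and execute it in the current VM.
def make_adder(n):
    src = f"def adder(x):\n    return x + {n}\n"
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["adder"]

add5 = make_adder(5)
print(add5(10))  # 15
```

A hardware target fixed at load time has no comparable hook; a software-implemented machine gets this essentially for free.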
- Abstract machine: Developments in programming language fashion have brought object orientation to the fore (perhaps even too far to the fore). However, our physical machines map much more closely to procedural code, with a strict separation between code and data, than the trends in language design would suggest.
In other words, the platforms that historically popular type-unsafe languages (like C++ and Delphi) have targeted aren't a close match for those languages' abstract machines. When they want to interoperate, either with other modules or with modules written in different languages, they face barriers, because their common denominator is the abstraction of the physical CPU. Hence C-level APIs being de facto industry standards, along with limited attempts to raise the abstraction level with COM (largely defined at the binary level in terms of C, explicitly referring to vtable concepts that are otherwise just hidden implementation details of other languages).
So, moving the abstraction level of the target machine closer to the average language abstract machine makes compiler implementation easier, reduces interoperation barriers, and provides more semantic content for the (typically) software implementation to work its flexibility magic.
- Closed-by-default eliminates whole categories of bugs. Type-safety can be guaranteed by the platform: never again does a random memory overwrite show up as a crash 5 minutes or 5 hours later. It also improves security by having a well-defined whitelist of operations, rather than trying to wall things in with blacklists and conventions ("this structure is opaque, only pass it to these methods", etc.).
 Some notable optimizations that become feasible when the program is running live include virtual method inlining, lock hoisting and removal, redundant null-check removal (think about argument-checking at different levels of abstraction), etc. Steve Yegge's latest blog post, while rambling, covers many optimizations that apply equally to static languages running in a virtual machine and to dynamic languages (but of course he's interested in promoting them as they apply to dynamic languages).
 Any language with dynamic memory allocation that it expects to reclaim (i.e. no infinite memory), but no GC, isn't type-safe. A single dangling pointer to deallocated memory kills your type safety: if a value of a different type gets allocated at the same location, you have a type violation.
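The dangling-pointer scenario can be made concrete with a toy allocator in Python (my own illustration: indices play the role of pointers, and a list plays the role of the heap), showing how slot reuse after a manual free turns a stale pointer into a type violation.

```python
# Toy manual-memory model: slots are "addresses", indices are "pointers".
heap = [None] * 4
free_list = [0, 1, 2, 3]

def alloc(value):
    slot = free_list.pop(0)   # grab the first free slot
    heap[slot] = value
    return slot               # hand back a "pointer"

def free(slot):
    heap[slot] = None
    free_list.insert(0, slot) # slot is immediately reusable

p = alloc({"kind": "Person", "name": "Ada"})  # p points at a record
free(p)                                       # p is now dangling
q = alloc([1, 2, 3])                          # a list lands in p's slot

# Dereferencing the stale pointer yields a value of a different type:
# the type system has been silently violated.
print(type(heap[p]))  # <class 'list'>, not a dict
```

A GC prevents exactly this: as long as p is reachable, the slot it points at cannot be reused for a value of another type.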
 Unfortunately, RAM may occasionally flip bits due to cosmic rays etc. So, we want to use ECC RAM and checksum critical structures when it matters. Edge case nit.
 IMO, the capability-based security model is the best of those available, ideally including eliminating ambient authority.
Guess what: you need a type-safe virtual machine to make some strong guarantees about capabilities; otherwise someone could come along and steal all your capabilities by scanning your memory.
See Capability Myths Demolished for more info.
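The object-capability idea can be sketched in Python (a hypothetical FileReadCap class of my own devising, and a deliberately oversimplified model): authority travels as an unforgeable object reference, so code holds exactly the powers it was handed, and type safety is what stops anyone minting a capability out of raw memory.

```python
import os
import tempfile

class FileReadCap:
    """Capability granting read access to one specific path."""
    def __init__(self, path):
        self._path = path
    def read(self):
        with open(self._path) as f:
            return f.read()

def untrusted_code(cap):
    # This code can read only what it was explicitly handed. In a real
    # object-capability system it would also be denied ambient authority
    # (here, Python's built-in open() would have to be withheld too).
    return cap.read()

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("secret")
    path = f.name

cap = FileReadCap(path)      # the only route to the file's contents
result = untrusted_code(cap)
print(result)                # prints: secret
os.unlink(path)
```

In a type-unsafe language this guarantee evaporates: code that can scan memory can fish the path (or the capability itself) out of the heap, which is exactly the point about needing a type-safe virtual machine underneath.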