I was at Google Developer Day here in London yesterday. Most of it was pretty light on technical details, but there was some good info on V8, Google's new JavaScript engine, which is used in Google Chrome.
The specific sessions I went to were Intro to Android, Intro to Android SDK, Google Data API mashups, and V8 - "the Chrome Engine" (sic). The heavy tilt towards Android was chiefly driven by curiosity after Mike Jennings took out a pre-production OHA phone and demoed it in the keynote. It looks reasonably neat, borrowing some nifty effects from Apple's iPhone; no pinch-zooming though.
Intro to Android was a mix of business talk and very lightweight technical detail about the phone. Apparently, Google has twisted some arms in back rooms to get buy-in from operators and OEMs, so anything carrying the OHA / Android branding should meet some minimum level of openness. That should be a welcome relief from the vice-grips of Apple. There wasn't much description of the Dalvik VM running behind the scenes, other than that you take your .class files from javac and run them through a post-processor to get them running on the device, and that the guy Mike talks to about the VM is "very bright". That was mentioned a number of times, so it must be important.
The Android SDK intro was given by a guy (Carl-Gustaf Harroch) from a local startup, not an actual Google guy. He very roughly described content providers, and also briefly outlined some entries in the application manifests, which are the mechanism by which the Android OS figures out what events your application is interested in (phone call arrived, moved a certain distance according to GPS, that kind of thing). There was a laboured comparison of content providers with REST, in that there are methods corresponding to CRUD operations, but apparently there are other concerns, such as observability, which make them not as simple as REST (and thus, in my view, an invalid comparison). Apparently content stored on the phone and exposed to other applications is heavily skewed towards the assumption that it lives in an SQLite DB.
The GData mashup session won't have been interesting to anyone who has interacted with Google's REST / AtomPub APIs even trivially. Once upon a time I wrote a Blogger posting app, so I didn't learn much.
Oh, and if you are writing a client to GData etc., I recommend that you don't start by trying to grok any of the Google API libraries unless you need deep integration. I didn't like the look of them last time (Java-itis, factories etc. everywhere), and I'm fairly sure they haven't improved.
Finally, I went to the V8 talk by Kevin Millikin. This was the best and most technical by a long, long way; to be honest, if it hadn't been for this talk, the day would have been a waste, on net. He described some V8 implementation details.
Roughly speaking, the main approach V8 takes is to shoe-horn classes into JavaScript's dynamic type system, so that other, more classical dynamic object-oriented optimizations can be applied in the future. It works using hidden classes. Every freshly new'd up object obj gets the same class, call it class_0. The first time you add a property, call it 'x', to obj, a new class class_1 is created which has a single property called 'x', and a class transition edge labelled 'x' is added from class_0 to class_1. This means that all objects which are freshly new'd up and then have a single property called 'x' added to them will share the same class. So, in theory, every object in the system which has had the same properties added in the same order will share the same class. The class transitions described above form a tree (I asked); they can't converge to form a DAG, because if objects built by adding properties in different orders shared a class, property enumeration order would change. (I believe enumeration order is unspecified by ECMA, so people shouldn't be relying on it; however, it's extant practice that the V8 folks found they couldn't break.) This in turn means that if you do have conditional branches in your JavaScript constructors, you should assign all properties in the same order, if that's feasible.
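To make that concrete, here's a hypothetical sketch (the constructor names are mine, not from the talk) of what property-assignment order means for hidden classes:

```javascript
// Every instance takes the same transition path:
// class_0 --'x'--> class_1 --'y'--> class_2,
// so all PointGood objects share one hidden class.
function PointGood(flag) {
  this.x = 0;      // transition labelled 'x'
  this.y = 0;      // transition labelled 'y'
  if (flag) {
    this.y = 1;    // reassignment only; no new transition
  }
}

// Here the two branches add 'x' and 'y' in different orders, so the
// transition tree forks and the resulting objects end up with different
// hidden classes, even though they have the same properties.
function PointBad(flag) {
  if (flag) {
    this.x = 0;
    this.y = 0;
  } else {
    this.y = 0;    // transition labelled 'y' first: a different branch
    this.x = 0;
  }
}
```

Both constructors produce semantically identical objects; the difference only shows up in which hidden classes V8 assigns, and therefore in how well access sites can be optimized.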
Anyhow, the use of classes as indicated above means that object use sites can now be optimized based on the runtime class of the object. Here's a specific example: whenever you access a property in JS running on V8, the access site is a little stub function in one of 4 states: uninitialized, pre-monomorphic, fast monomorphic, and megamorphic. The first time the access site is invoked, the runtime class of the object is inspected and noted, and the stub moves to the pre-monomorphic state. The second time it's invoked, the runtime class is checked against the previously noted class; if it's the same, the stub moves to the fast monomorphic state and is rewritten to be very simple: compare the object's class against the expected one; if it matches, dereference to the object's storage and load the property at a specific offset (recorded in the class, but inlined as machine code into the access site). If the class wasn't as expected, the stub is changed to the megamorphic state. Finally, the megamorphic state is the slow path that falls back to hash-based lookup, just like most other JS implementations.
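A small hypothetical example of what drives those state transitions (you can't observe the stub states from JS itself; only the performance differs):

```javascript
// The property access 'p.x' inside getX compiles to one of the stubs
// described above.
function getX(p) { return p.x; }

// Monomorphic case: both objects are built with properties in the same
// order, so they share a hidden class and the stub settles into the fast
// monomorphic state (a class check plus a fixed-offset load).
const mono = [{ x: 1, y: 2 }, { x: 3, y: 4 }];
let monoSum = 0;
for (const p of mono) monoSum += getX(p);

// Megamorphic case: the second literal adds its properties in a different
// order, so it has a different hidden class, and the stub degrades to the
// slow hash-based lookup.
const mega = [{ x: 1, y: 2 }, { y: 4, x: 3 }];
let megaSum = 0;
for (const p of mega) megaSum += getX(p);
```

Both loops compute the same sums; the semantics never change, only the speed of the access site.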
Since arrays in JS are semantically just objects with particular patterns of numeric keys, the same property-access approach could be taken, but it wouldn't necessarily be fast. Apparently the V8 folks discovered that a lot of artificial JS benchmarks revolve around array manipulation, so they put a little work in here, but they're not finished in this area (as far as I could make out). In any case, array access strategy is governed by a heuristic: for small, densely packed arrays accessed with integer indices, a direct lookup can be made; for larger, sparse arrays, the generic property access mechanism is used.
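A sketch of the two shapes that heuristic distinguishes (the exact thresholds are internal to V8 and not something I know; this just illustrates packed versus sparse):

```javascript
// Small and densely packed: indices 0..3 all present, so an access like
// packed[2] can plausibly be a direct indexed load into backing storage.
const packed = [10, 20, 30, 40];

// Large and sparse: almost every index between 0 and 1000000 is a hole,
// so storing this densely would waste memory; accesses go through the
// generic (dictionary-like) property mechanism instead.
const sparse = [];
sparse[0] = 10;
sparse[1000000] = 20;
```

Both behave identically at the language level; the difference is purely in the storage and lookup strategy the engine picks.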
Variable capture in closures (a topic dear to my heart at this point) also works along similar lines to property creation, from what Kevin described. There is a caveat: if you are using the "with" statement, or there is an eval() call between the variable definition and the access site in your closure, then the access can't be optimized; these constructs fall back to hash-based lookup, because they effectively require dynamic scoping. So, don't create closures that way unless you're prepared for the speed bump.
The majority of the JS standard library (things like Array.prototype.join) is implemented in JavaScript itself in V8, using a special %AddProperty function to make the methods non-enumerable. V8 also uses a freeze-dried heap mechanism to reduce the cost of initializing the standard library: it can essentially save and restore the heap, with appropriate relocations etc. as required, so that it doesn't have to reparse everything on startup.
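The observable effect of that non-enumerability, plus a userland analogue (Object.defineProperty is an ES5 API, so it post-dates this talk; I'm using it only to illustrate the attribute involved):

```javascript
// for-in walks enumerable properties, including inherited ones. Because the
// built-in methods like join are installed as non-enumerable, only the
// index keys show up here.
const seen = [];
for (const k in [1, 2, 3]) seen.push(k);
// seen holds "0", "1", "2" -- not "join", "push", etc.

// Userland analogue: a property defined with enumerable: false exists and
// is readable, but is skipped by enumeration.
const obj = { visible: 1 };
Object.defineProperty(obj, 'hidden', { value: 42, enumerable: false });
```

Without this, every for-in loop over an array or object would be polluted by the standard library's own methods.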
The garbage collector for V8 looks less interesting as a source of performance wins. It's almost certainly an improvement on what other JS implementations are using for GC, but I think it's some way from the last word on the topic. It has only 2 generations, so intermediate-lifetime objects that get promoted during new-space collections will eventually force a costly old-space collection. Kevin didn't say whether they use write barriers to avoid scanning old-space for pointers into new-space, but grepping the V8 sources turns up some write-barrier hits, so maybe they do. V8's GC is definitely better than the reference counting implemented in IE of older days, of course.
Since the main approach taken thus far is just to give values a class and use that to optimize property access, there is still a lot of scope for optimization in V8. I wouldn't be surprised to see significant (2-5x) performance improvements in the not-too-distant future as more techniques are integrated. They're currently going straight from the JS AST to machine code, with no inlining of the aforementioned property access stubs (AFAICT - there was a 'ret' at the end of the demo access site). The main thing is that (a) they have objects pinned down to types now, and (b) hopefully, as JS developers learn how to make code run fast under this paradigm, objects will look even more type-ful and thus increase the scope for other optimizations.