The specific sessions I went to were Intro to Android, Intro to Android SDK, Google Data API mashups, and V8 - "the Chrome Engine" (sic). The heavy tilt towards Android was chiefly driven by curiosity after Mike Jennings took out a pre-production OHA phone and demoed it in the keynote. It looks reasonably neat, borrowing some nifty effects from Apple's iPhone; no pinch-zooming though.
Intro to Android was a mix of business talk and very lightweight technical detail about the phone. Apparently, Google has twisted some arms in back-rooms to get buy-in from operators & OEMs, so anything carrying the OHA / Android branding should have some minimum level of openness. Should be a welcome relief from the vice-grips of Apple. There wasn't much description of the Dalvik VM running behind the scenes, other than that you take your .class files from javac and run them through a processor to get them running on the device, and that the guy Mike talks to about the VM is "very bright". That was mentioned a number of times, so it must be important.
The Android SDK intro was given by a guy (Carl-Gustaf Harroch) from a local startup, not an actual Google guy. He very roughly described content providers, and also briefly outlined some entries in the application manifest, which is the mechanism by which the Android OS figures out what events your application is interested in (phone call arrived, moved a certain distance according to GPS, that kind of thing). There was a laboured comparison of content providers with REST, in that they expose methods corresponding to the CRUD operations, but apparently there are other concerns, such as observability, which make them not as simple as REST (and thus an invalid comparison, in my view). Content stored on the phone and exposed to other applications is apparently heavily skewed towards the assumption that it lives in an SQLite DB.
The GData mashup session wasn't interesting to anyone who has interacted with Google REST / AtomPub APIs even trivially. Once upon a time I wrote a blogger post app, so I didn't learn much.
Oh, and if you are writing a client to GData etc., I recommend that you don't start by trying to grok any of the Google API libraries unless you need deep integration. I didn't like the look of them last time (Java-itis, factories etc. everywhere), and I'm fairly sure they haven't improved.
Finally, I went to the V8 talk by Kevin Millikin. This was the best and most technical by a long, long way; to be honest, if it hadn't been for this talk, the day would have been a waste, on net. He described some V8 implementation details, the central one being that V8 assigns every object a hidden class based on its property layout, so objects constructed the same way share a class.
Anyhow, the use of classes as indicated above means that object use sites can now be optimized based on the runtime class of the object. Here's a specific example: whenever you access a property in JS running on V8, the access site is a little stub function in one of 4 states: uninitialized, pre-monomorphic, fast-monomorphic, and megamorphic. The first time the access site is invoked, the runtime class of the object is inspected and noted, and the stub moves to the pre-monomorphic state. The second time it's invoked, the runtime class is checked against the previously-noted class; if it's the same, the stub moves to the fast-monomorphic state and is rewritten to be very simple: compare the object's class against the expected one, and if it matches, dereference to the object's storage and load the property at a specific offset (the offset lives in the class, but is inlined as machine code in the access site). If the class wasn't as expected, the stub moves to the megamorphic state, the slow path that falls back to hash-based lookup, just like most other JS implementations.
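To make that concrete, here's a sketch (mine, not from the talk) of the kind of JS that exercises those states, assuming hidden classes are determined by which properties are added and in what order:

```javascript
// Two objects built by the same constructor, with properties added in the
// same order, should end up with the same hidden class.
function Point(x, y) {
  this.x = x; // same insertion order => same hidden class for all Points
  this.y = y;
}

// This property access site sees only one class: after the second call it
// should settle into fast-monomorphic ("check class, load at fixed offset").
function getX(p) {
  return p.x;
}

const a = new Point(1, 2);
const b = new Point(3, 4);
getX(a); // uninitialized -> pre-monomorphic
getX(b); // same class observed again -> fast-monomorphic

// Feeding the same site an object with a different layout (different
// property insertion order => different class) would push it onto the
// megamorphic slow path, i.e. hash-based lookup.
getX({ y: 5, x: 6 });
```

The language-level result is identical in all three calls, of course; only the machine code behind the access site differs.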
Since arrays in JS are semantically just objects with particular key patterns, the same approach could be taken, but it wouldn't necessarily be fast. Apparently the V8 folks discovered that a lot of artificial JS benchmarks revolve around array manipulation, so they put a little work into this, but they're not finished in this area (as far as I could make out). In any case, array access strategy is governed by a heuristic: for small, densely packed arrays indexed with integers, a direct lookup can be made; for larger, sparse arrays, the general property access mechanism is used.
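A quick illustration of the two cases the heuristic has to distinguish (the language semantics are the same either way; only the backing representation the engine can choose differs):

```javascript
// Small and densely packed: an engine can back this with a contiguous
// buffer and index it directly, like a C array.
const packed = [10, 20, 30];

// Huge and sparse: a flat buffer of a million slots for one element would
// be absurd, so the engine has to fall back to property-style (dictionary)
// lookup for something shaped like this.
const sparse = [];
sparse[1000000] = 42;

// Both still behave like ordinary JS arrays at the language level.
console.log(packed[1] + sparse[1000000]); // 62
```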
The garbage collector for V8 looks like a less interesting source of performance wins. It's almost certainly an improvement on what other JS implementations are using for GC, but I think it's some way from the last word on the topic. It has only 2 generations, so intermediate-lifetime objects that get promoted when collecting new-space will eventually force a costly old-space collection. Kevin didn't say whether they use write barriers to reduce the need to scan old-space for pointers into new-space, but grepping the V8 sources turns up some write-barrier hits, so maybe they do. V8's GC is definitely better than reference counting as implemented in the IE of older days, of course.
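For readers who haven't met write barriers: the idea can be sketched as a toy simulation (entirely my own illustration — every name here is hypothetical, and none of it is V8's actual code):

```javascript
// Toy model of a two-generation heap. The remembered set records old-space
// objects holding pointers into new-space, so a minor (new-space) collection
// only needs to scan the roots plus the remembered set, rather than walking
// all of old-space looking for such pointers.
const newSpace = new Set();
const oldSpace = new Set();
const rememberedSet = new Set();

function alloc(obj) { newSpace.add(obj); return obj; }
function promote(obj) { newSpace.delete(obj); oldSpace.add(obj); }

// Every pointer store goes through the barrier.
function writeField(obj, field, value) {
  obj[field] = value;
  if (oldSpace.has(obj) && newSpace.has(value)) {
    rememberedSet.add(obj); // old -> new pointer: remember this object
  }
}

const oldObj = alloc({});
promote(oldObj); // survived a collection, now lives in old-space
const young = alloc({});
writeField(oldObj, 'child', young);

// A minor GC can now find the old->new pointer via the remembered set
// without scanning the rest of old-space.
console.log(rememberedSet.has(oldObj)); // true
```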
Since the main approach taken thus far has been just to give values a class and use that to optimize property access, there is still a lot of scope for optimization in V8. I wouldn't be surprised to see significant (2-5x) performance improvements in the not-too-distant future as more techniques are integrated. They're currently going straight from the JS AST to machine code, with no inlining of the aforementioned property access sites (AFAICT - there was a 'ret' at the end of the demo access site). The main thing is that (a) they have objects pinned down to types now, and (b) hopefully, as JS developers learn how to make code run fast under this paradigm, objects will look even more type-ful and thus widen the scope for other optimizations.
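By "type-ful" I mean something like the contrast below — a guess at the style this paradigm rewards, assuming class stability is what the optimizer wants:

```javascript
// Class-stable style: every property initialized in the constructor, in a
// fixed order, so all instances keep one hidden class for their lifetime.
function Vec(x, y) {
  this.x = x;
  this.y = y;
}

function lengthSq(v) {
  return v.x * v.x + v.y * v.y; // both access sites can stay monomorphic
}

// Class-unstable style: bolting on properties after construction, and
// conditionally at that, means otherwise-identical objects diverge in
// layout and access sites keep seeing different classes.
function makeUnstable(flag) {
  const o = {};
  o.x = 1;
  if (flag) o.tag = 'extra';
  return o;
}

console.log(lengthSq(new Vec(3, 4))); // 25
```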