AVMGlue for Redtamarin and other targets


Among the many subjects such as “HTML Target for AIR”, converting AS3 to JS, converting SWF to HTML, etc., one of the big things that people underestimate is AVMGlue.

What is AVMGlue?

You can see the first mention of “avmglue” in the Tamarin project (see avmplus); while this project was actively developed, a need surfaced for a bit more than what the ActionScript 3.0 builtins provide.

The builtins are things like the Object class, the Number class, etc. that are shared with ECMA-262 3rd edition.

The avmglue classes are what we can consider the Flash API and the Adobe AIR API classes (you may have also seen me call this the FPAPI, e.g. the Flash Platform API).

Some are intrinsically linked to the AVM2 runtime, like the ByteArray class or the Domain / ApplicationDomain class, as they rely heavily on native code (C++); others are a mix of native code and ActionScript 3.0 code, and some are mostly pure ActionScript 3.0 code.

That’s why it is called “glue”: it is not entirely native code, it is there to glue things together between the native side (AVM2) and the ActionScript 3.0 environment.

You can see a glimpse of it in the official Flash AS3 language reference, the
Adobe ActionScript® 3 (AS3) API Reference.

In the flash.utils package, with functions such as describeType(), getQualifiedClassName() and getQualifiedSuperclassName().

Normally, in a playerglobal.swc you should be able to access avmplus::describeType; you can see the class implementation here: DescribeType.as.

The visible implementation flash.utils.describeType() is then simply defined as

package flash.utils
{
    import avmplus.*;

    public function describeType( value:* ):XML
    {
        return avmplus.describeType( value, FLASH10_FLAGS );
    }
}

In the AVM2 it works quite simply: some definitions use the keyword native and are implemented in C++, but because they are declared on the AS3 side you can use them as if they were plain AS3 definitions.

Why is it underestimated?

Let’s say you wanted to produce a Flash API for the JavaScript platform, and to do that in TypeScript.

As long as you used only builtins, it would be OK; there are some syntax and rule differences, but they map one-to-one to each other.
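
Since the builtins are plain ECMAScript, that mapping really is mechanical; a trivial illustration in TypeScript (standard builtins only, nothing assumed here):

```typescript
// AS3:  var n:Number = Number("3.14");  var s:String = n.toString(10);
// The very same builtins exist, with the same behaviour, in TypeScript/JS:
const n: number = Number("3.14");
const s: string = n.toString(10);
// Both environments follow ECMA-262 semantics for Number and toString.
```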

The problem would arise when you would want to implement something like flash.media.Sound.

To do such an implementation for JavaScript, you would need to rely on what is available on this platform; in short, you would need to use the HTML5 API to implement the Flash API.

To be able to do something like

var music:Sound = new Sound( "path/tofile.mp3" );

your implementation would look something like

class Sound
{
    private url:string;
    private _context = new AudioContext();
    private _buffer:AudioBuffer;

    constructor( url:string )
    {
        this.url = url;
        this._load( url );
    }
    private _load( url:string ):void
    {
        // fetch the sound data and decode it into _buffer, then play it
        fetch( url )
            .then( response => response.arrayBuffer() )
            .then( data => this._context.decodeAudioData( data ) )
            .then( buffer => { this._buffer = buffer; this._play(); } );
    }
    private _play():void
    {
        var source = this._context.createBufferSource();
        source.buffer = this._buffer;
        source.connect( this._context.destination );
        source.start( 0 );
    }
}

and you would have used stuff like AudioContext, createBufferSource(), decodeAudioData(), etc.

The first problem you would encounter would be loading the sound; then figuring out the right setup for TypeScript to use the Web Audio API, then learning the API, then writing the TS class, so that all of it ends up producing JS that more or less works everywhere.

Now, another way to implement that would be to use ActionScript 3.0 instead of TypeScript, using FlexJS with the help of dts2as.

You would face the same problem of finding a Web Audio API definition that works with TypeScript, so you can convert the waa.d.ts to a waa.swc.

But with the AS3 syntax and the SWC helping, writing the code would perhaps have been a little less painful.

My point here is that even something as trivial as “loading and playing a sound” can already become a few-days to maybe few-weeks project when you want to port this Flash API to JS; yep, even with the latest things like the Web Audio API.

Now imagine you have to port everything audio-related in the Flash API: Sound, SoundChannel, SoundCodec, SoundLoaderContext, SoundMixer, SoundTransform, etc., and soon this little project becomes a big one.

That’s why AVMGlue is underestimated: the Flash API was not built in a few months, it was developed over a decade, from AS1 to AS2 to AS3, to become this API that we think is easy.

Sure, the API is easy to use, but not that easy to implement, even less so when you have to deal with an HTML/JS “way of doing things” fighting the Flash API logic, which was designed in the AVM2 context.

Now, let’s make things even more complex: let’s say it’s not playing sound you are after but using sockets, so you decide to implement flash.net.Socket and any other socket-related Flash API, and you want this to work not only in the HTML5/JS context of the browser, but also in the JS context of something like Node.js and other JS hosts like ChakraHost.

There you would face many problems, as the Web API does not have this notion of connecting directly with sockets; instead it uses something higher level like XMLHttpRequest. Or, if you reuse some non-standard stuff, you will have to make a special case when the thing is running in Chrome to use things like chrome.socket or sockets.udp / sockets.tcp, or worse, use something like WebSocket, which smells like sockets but does not work exactly like sockets.
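
To make the mismatch concrete, here is a hypothetical sketch (the class and method names merely mirror the Flash API; this is an illustration, not a real implementation) of what a flash.net.Socket-like facade could look like on Node.js, where the net module does give you real TCP sockets:

```typescript
import * as net from "net";

// Hypothetical sketch of a flash.net.Socket-like facade over Node's net module.
// Only a tiny subset; the real Flash Socket has events, endianness, binary I/O, etc.
class FlashSocket {
    private _socket: net.Socket | null = null;
    private _connected = false;

    connect(host: string, port: number): void {
        // net.connect opens a real TCP connection; no standard browser equivalent
        this._socket = net.connect(port, host, () => { this._connected = true; });
    }

    get connected(): boolean { return this._connected; }

    writeUTFBytes(value: string): void {
        if (this._socket === null) throw new Error("Socket is not connected");
        this._socket.write(value, "utf8");
    }

    close(): void {
        this._socket?.end();
        this._socket = null;
        this._connected = false;
    }
}
```

In a browser, none of this runs, which is exactly the point: the same facade would need a completely different body there.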

Still not impossible to manage, but then, to support those different implementation use cases, you would definitely need something like “conditional compilation”, which sadly is not available in TypeScript; that is probably another incentive to use FlexJS.
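
Without conditional compilation, the usual TypeScript workaround is runtime environment detection, which means every target’s code ships in the same bundle; a minimal sketch (the helper names are mine, not a standard API):

```typescript
// Runtime detection instead of compile-time selection:
// the check happens on every run, and all branches end up in the output JS.
function isNode(): boolean {
    const p = (globalThis as any).process;
    return !!p && typeof p.versions === "object" && typeof p.versions.node === "string";
}

// Pick a transport name depending on the host; a real shim would
// instantiate net.Socket or WebSocket here instead of returning a string.
function pickTransport(): string {
    return isNode() ? "net.Socket" : "WebSocket";
}
```

With COMPILE::NODEJS-style flags, each target would instead get only its own branch at compile time.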

As mentioned in FlexJS Component Source Code Patterns

Conditional Compilation
We use conditional compilation to specify platform-specific code so that both the SWF and JS versions can be compiled from the same .as file. Thus there is only one org.apache.flex.html.Button.as that gets compiled into a SWF/SWC and cross-compiled into a .JS (and included in the same SWC for use by FalconJX).
We have two conditional compilation flags: COMPILE::SWF and COMPILE::JS

That would allow us to have things like COMPILE::NODEJS, COMPILE::CHROMEJS, etc.

Still… I would argue that implementing the whole AVMGlue, even if you do it in AS3 and with FlexJS, and even if you can isolate parts of the implementation for specific JS environments with conditional compilation, is bound to be a big headache.

Why implement AVMGlue in Redtamarin first?

As a development platform, Redtamarin is the closest we have to an AVM2 runtime without the dependencies of Flash or AIR.

Sure, there are other dependencies like the Redtamarin native implementations (CLIB, RNLIB), but at least this allows us to provide a clean “reference implementation” using the ActionScript 3.0 language.

Once you fully master how to write the Sound or Socket API in pure AS3 (and I mean here without the browser API bullshit), you will damn well know what is missing (or not) when it comes to providing an implementation for another “target” platform such as the web browser or Node.js etc.

In fact, looking at how the TypeScript compiler manages this kind of difference can help quite a bit; see the post The case when you don’t want Node.js, the part about the System interface and how it ends up defining the ChakraHost object.

Also, because Redtamarin is about the command line and does not do any GUI rendering, you will not be tempted to bet everything on WebGL or Canvas or whatever else.

No, you will focus purely on the logic while you mock/abstract the rendering, and once you have a Flash display list logic that works, then it is a matter of deciding whether you actually do the rendering with Canvas or WebGL etc.
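
That separation can be sketched in a few lines; the interface and class names below are assumptions for illustration, not Flash or Starling code:

```typescript
// The display-list logic (the tree) is pure logic and fully portable;
// only the Renderer implementation is target-specific.
interface Renderer {
    drawRect(x: number, y: number, w: number, h: number): void;
}

class DisplayObject {
    constructor(public x = 0, public y = 0) {}
    render(r: Renderer): void {
        r.drawRect(this.x, this.y, 0, 0);
    }
}

class DisplayObjectContainer extends DisplayObject {
    private children: DisplayObject[] = [];

    addChild(child: DisplayObject): DisplayObject {
        this.children.push(child);
        return child;
    }

    get numChildren(): number { return this.children.length; }

    render(r: Renderer): void {
        for (const child of this.children) child.render(r);
    }
}

// A mock renderer lets you test the whole tree logic with no Canvas/WebGL at all.
class MockRenderer implements Renderer {
    calls = 0;
    drawRect(): void { this.calls++; }
}
```

Swap MockRenderer for a Canvas, WebGL or Stage3D renderer later; the tree code does not change.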

Now you may ask “why implement AVMGlue in the first place?”

Because it is what is underestimated when people approach the problem of converting “Flash to HTML” or “Flash to whatever”.

Here are some projects.

Here is what those projects have in common:

  • they try to replicate the Flash API but only a part of it
  • they rely on their end-target too much
  • they don’t see that as reusable logic

But they all agree that devs would like to reuse that Flash API somewhere else, and that’s also what should be a motivation to implement AVMGlue: being able to reuse a familiar API.

So let’s take another example: Starling,
the part where they implement their own display list.

What do they do?

They replicate the display list logic and instead of doing the rendering with the display list itself they do the rendering with Stage3D.

A good chunk of this display list logic could be reused to do the same but using WebGL or Canvas as the rendering target, or even something specific to .NET or Java.

It’s not that obvious in Starling, simply because it is an AS3 project meant to be used with AS3 for an AS3 target, but technically they have been bitten by the same problem I mentioned above about underestimating the AVMGlue.

They did plan to port Starling to JS, and then they “killed” it:
Starling JS flocking to the Away Foundation

I don’t know the internal details, but my bet is that even if they ported most of the source code to JS, they faced the problem of “not having AVMGlue”; it is not just a port from one language to another, it is a bit more complicated than that.

Things like the event system, the rendering system, etc. are quite different, and when your context is not AS3 anymore, well… you cannot rely on that old trusty AVMGlue, so either you port it too or you stop developing the project because one big piece of the puzzle is missing.

At the beginning, starting to write your own AVMGlue is not very difficult; if you focus on only a small part of it you can make progress and even start to build stuff with it, like all the projects mentioned above, but at some point you will inevitably hit a wall.

Like “oh… to do that I need ByteArray”, or “to make it work… I need a URL loader”, and many other things that seem “little” like that but are not.
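
Even the “little” ByteArray is a good example; a hypothetical few-method sketch over DataView already has to get Flash’s big-endian default right, and the real class adds endianness switching, compression, AMF serialization, etc.:

```typescript
// Hypothetical, tiny subset of a ByteArray-like class over ArrayBuffer/DataView.
class ByteArraySketch {
    private view: DataView;
    position = 0;

    constructor(size: number = 1024) {
        this.view = new DataView(new ArrayBuffer(size));
    }

    writeInt(value: number): void {
        // DataView defaults to big-endian, matching Flash's Endian.BIG_ENDIAN default
        this.view.setInt32(this.position, value);
        this.position += 4;
    }

    readInt(): number {
        const value = this.view.getInt32(this.position);
        this.position += 4;
        return value;
    }
}
```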

And guess what? This is also the very same problem that an “HTML Target for AIR” will face, if such a project ever sees the light of day.

That’s also why, some time ago, I tried to gather some interest in a Flash/AIR API AVMGlue and maybe do a Kickstarter around it, which of course never interested anyone.

I’m pretty sure that people do not see the relation between an implementation of AVMGlue made in Redtamarin and something like an “HTML Target for AIR”; I get it, they just want the final result, e.g. that magic button that converts their Flash to HTML.

Same for StarlingJS: everyone was very excited about it because they could already imagine magically publishing their Starling game to HTML/JS without too much effort.

Here is what people do not realise:

  • we don’t have a reference implementation of the Flash API
    we have documentation, the signatures, but not what’s in it
  • even if you want to build AVMGlue, you need a lot of eyes to catch silly bugs
    e.g. you want your AVMGlue implementation to behave more or less like the Flash API
  • when Adobe updates their own closed source code (the Flash API)
    even they have to patch bugs that break old sites, because
    developers expect things to work the same no matter what
  • unless you hope hard that Adobe is gonna do it for you,
    the only feasible way is a community effort
  • using ActionScript 3.0 to implement that Flash API
    makes the most sense, as the most interested devs will be flash-devs
    already knowing AS3, the tools, the compilers, etc.
  • it is most likely flash-devs who would be the first to catch that an API
    behaves differently (or other bugs) compared to what they are used to (the Flash API)
  • it would be easier for flash-devs to review code in AS3
    and even propose patches and other contributions using AS3

So I changed my approach: in itself, Redtamarin does not badly need this AVMGlue, it is just a convenience so devs used to working with Flash or AIR can reuse the Flash or AIR API when they are working with Redtamarin.

Let’s say you wrote or reused twitter-lib.swc; as a dev, you probably want to just use it “as is”, which is fine.

That’s why I decided to implement this AVMGlue, in AS3 and based on Redtamarin, and I also planned to do it in such a way that it can work in cross-compilation with something like FlexJS, so you can use it in projects for an HTML5/JS client or even Node.js.

But because I’m so disappointed with the community in general, I’m not gonna do it like a full-blown open source project.

Some parts may be open source, in the sense that you will be able to see the sources;
some other parts will be binary-like, obfuscated, e.g. not available in clear view.

It will be free though, and will provide libraries to be used with HTML/JS targets like chrome browser, electron, node.js etc.

All that I explained above, about using TypeScript then FlexJS, using conditional compilation, etc., is basically how I plan to produce it, and since I’m 100% sure nobody would contribute, I will consider it a closed source project; after all, Google got away with it with Swiffy, I can probably do the same.

For Redtamarin specifically, it would mean adding an avmglue.abc and/or avmglue.swc to your project and you would have the Flash API available; the only catch would be the event system, as you would have to use function callbacks instead of “real” events. Those would be available in v0.5.0, which is about 1 year away in terms of implementation.

For devs interested purely in JS, e.g. reusing the AVMGlue in a JS context, either the Chrome browser, a Node.js server, etc., it would be like adding an avmglue-chrome.js or avmglue-node.js library; the library will be packed, minified and so slightly obfuscated (not really easy to read), but should be able to load in TypeScript and provide syntax completion etc.