Cracking Up The Event Loop



In the past I have struggled a lot to explain simply why it is not easy to build such an event loop
(see a bit of history).

Event loop, let’s crack this egg wide open.


Code Execution

Basically, you write ActionScript 3 source code, you compile it to bytecode, and then that bytecode is interpreted by the runtime.

That runtime “code engine” is the ActionScript Virtual Machine 2 (aka AVM2, AVM+ or Avmplus);
here is a short list of such runtimes:

  • the Flash Player
  • Adobe AIR
  • Redtamarin shell (eg. redshell)

ActionScript 3 is a single threaded language: when the AVM2 interprets the code,
it cannot execute other code in parallel.

Let’s say you define an AS3 function: when you execute that function, it keeps executing until it exits.
In other words, you cannot interrupt the execution flow of that function with other AS3 code.

This is what we call synchronous or blocking: when the AVM2 runs your code, it executes one operation at a time, one after the other, meaning that as long as one operation is executing, it cannot be interrupted by other AS3 code.

The execution flow is uninterruptible, in other words synchronous or blocking.

Here is a basic example:

function aBigLoop():void
{
    for( var i:uint = 0; i < 100000; i++ )
    {
        _doSomethingSynchronously(); // cannot be interrupted
    }
}

Simple Event Loop

If you have worked with Flash or AIR you are probably familiar with the concept of frames.

You define code on frame 1 that will execute before the code on frame 2, which will execute before the code on frame 3, etc.

If you tell the player to go back to frame 1, the code of that frame is executed again, then frame 2, then frame 3, etc.

And if for some reason you wanted to use only one frame, and at the end of the code execution you told the player to go back to frame 1, you would expect the code to repeat over and over.

That’s what we call the execution model: it is sequenced, which just means it follows a particular order of execution.

And if you read the excellent post Updated ‘Elastic Racetrack’ for Flash 9 and AVM2,
you can find out what happens in detail.

This is only one frame running on one thread;
all the differently colored sections are uninterruptible code (synchronous/blocking).

The blue parts are the code you defined in AS3, the other parts (green and red) are native code defined in the runtime.

So let’s imagine for 10 seconds that you wanted to create your own event loop without reusing the native event loop.

To make things simpler, we would remove anything that renders or invalidates, eg. we would keep only the blue parts.

Oh look here: we have a darker blue part that deals with events, and a lighter blue part that deals with executing code. So to emulate an event system we would just need to put everything in frame 1, and at the end of this frame tell it to go back to frame 1. A pseudo code like this would work:

// ---- ---- ---- ----
// our super oversimplified event system

if( events == null )
{
    var events:Array = [];
}

function loop():void
{
    while( events.length > 0 )
    {
        var event:String = events.shift();
        dispatch( event );
    }
}

function addEvent( message:String ):void
{
    events.push( message );
}

function dispatch( message:String ):void
{
    trace( message );
}

// execute the event loop
loop();

// ---- ---- ---- ----
// start user code here

if( n == null )
{
    var n:uint = 0;
}

addEvent( "this is frame " + n );

n++;

// end user code
// ---- ---- ---- ----
gotoAndPlay( "frame1" );

It would rely entirely on the frame system, pass very simple string messages, etc.,
but yes, it would work without using the native event system of either Flash or AIR.

You would decompose it like this:

  • execute all the events
  • execute all the code of frame 1
  • then go back to play frame 1
  • rinse and repeat

Yes, we are cheating because we are reusing the native frame system, but it is worth it to illustrate the logic with “frames”.


Simple Event Loop When You Don’t Have Frames

When you only have the AVM2, like in Redtamarin, you have neither a native event system nor a frame system, but you can emulate them :slight_smile:.

In the native part that executes the code, we just added a small native function that executes “a loop function” at the end.

Before:

  • the program starts
  • code is executed
  • then the program terminates

After:

  • execute part
    • the program starts
    • code is executed
    • then the program terminates
  • handlers part
    • execute the “loop handler”

OK… it is a bit more complicated than that; in detail it goes like this:

  • all the code executes in sequence
    • the program starts
    • code is executed
  • handlers part
    • then we execute the “loop handler”
      • which prevents reaching the “exit handler”
    • finally we execute the “exit handler”
      • then the program actually terminates

As long as the loop runs, it prevents the program from terminating or exiting,
but if we were to call an exit() function it would force the loop to break and the program to terminate.

Also, if we stop the loop, we fall through to the “exit handler” and so exit the program.

Our pseudo code for the “loop handler” looks like this:

while( flag )          // as long as the flag is true, the program can not exit
{
    processStuff();    // dispatch queued events, consume messages, etc.
    sleep( interval ); // block until the next iteration
}

Yes, it is that stupidly simple: we use a blocking loop.
It even allows us to emulate those frames, eg. on each loop iteration, frame++, as in the sketch below.
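For illustration, here is a slightly fuller sketch of that idea (hypothetical pseudo code: flag, interval, processStuff() and sleep() are placeholders, not a real Redtamarin API):

var flag:Boolean  = true; // the loop runs as long as this is true
var interval:uint = 42;   // sleep time in milliseconds (roughly 24 "fps")
var frame:uint    = 0;    // our "virtual frame" counter

function exit():void
{
    flag = false; // breaking the loop lets us fall through to the "exit handler"
}

function loopHandler():void
{
    while( flag )
    {
        processStuff();    // dispatch queued events, consume worker messages, etc.
        frame++;           // one more "virtual frame" has elapsed
        sleep( interval ); // block until the next iteration
    }
}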

All in all it is a very simple, straightforward architecture. It is like the blue parts described above,
but the darker blue is at the end instead of the beginning; also we don’t really have real frames of code
that we can re-execute, our frames are more like “virtual frames” (we just keep count of them).

We can set the sleep interval “shorter” or “longer”, but ultimately it is a variable frame rate,
as any code that executes in the event loop will block for some amount of time. In fact you could
have code that executes for so long that it drops your fps to 0, but it is not a big issue:

  1. we don’t do graphic rendering,
    so we don’t care about keeping a constant FPS
  2. the runtime can “block forever”,
    eg. we don’t really have that “N seconds max” execution limit per frame
  3. we have far fewer “media events” to deal with;
    in fact most events would be user generated events

We don’t do it yet, but we could time the code execution within the loop
and subtract that execution time from the sleep interval; again, not a big deal.
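Continuing the earlier sketch, that compensation could look roughly like this (hypothetical; it assumes a millisecond clock such as flash.utils.getTimer() and the same placeholder processStuff()/sleep()):

import flash.utils.getTimer; // assumed available; any millisecond clock would do

while( flag )
{
    var before:int  = getTimer();
    processStuff();                        // run this "virtual frame"
    var elapsed:int = getTimer() - before; // how long the frame actually took

    if( elapsed < interval )
    {
        sleep( interval - elapsed );       // only sleep the remaining time
    }
    // if elapsed >= interval we are already late, so skip the sleep entirely
}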


Are We Asynchronous Yet?

All of the above executes purely and strictly on a single thread, and so it is synchronous (or blocking).

See!!! We can have events without being asynchronous.
But we can not have all the events :smile:.

That’s the part where developers try to explain to you that events are synchronous in Flash/AIR,
because they execute in an ordered sequence and are predictable.

Yes, in a way, but not all the way.

Simply put, if you stay single threaded your event system will be close to useless;
to really benefit from that event system you want it to go asynchronous.

Here is a very basic example:
you want to read a big file, and just reading that file into memory will take 30 seconds.

If you are single threaded and set a 24 fps loop, then while you read the file
it will block everything else; no other code will be able to execute.

It will work, it will not blow up in your face, but if you wanted a timer event
running at the same time, well… it will be shifted by those 30 seconds it takes to read the file,
and that would definitely kill your 24 fps.
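To make that concrete, here is a hypothetical sketch of the single threaded case (readBigFileSynchronously() and processStuff() are placeholders, not a real API):

// somewhere inside one iteration of our single threaded loop
var data:String = readBigFileSynchronously( "big-file.txt" ); // blocks for ~30 seconds

// only now does the loop continue: any timer event that became due during
// those 30 seconds is dispatched late, and the 24 fps target is long gone
processStuff();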

Now, if you could read that file in another thread, your main thread running the event loop
would keep a tighter loop and would certainly have a better chance of preserving that 24 fps.

That’s what people want when they say “asynchronous”: they want to execute code in parallel so it does not slow down the main thread, they want multi-threading.


Multiple Single Threaded Execution

When you run a runtime using the AVM2, by default you are running the main isolate,
also known as the primordial worker: the first instance of the VM, your main thread.

With the AVM2 you can create other isolates, also known as child workers (they run on different threads).
Those isolates/workers do not share memory; they are separate instances of the VM and they
can communicate with each other by passing messages (not events).

When you spawn a child worker, the code executing inside it is still single threaded in the context of that worker.

That’s it: you can execute code on multiple threads, but each one of those threads is its own instance of the AVM2 and executes single threaded until its code terminates.

So, if you want your event loop to work and be useful, you also have to run it in the child workers.
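In Flash/AIR, spawning a child worker typically looks like the sketch below (a common pattern where the SWF loads itself as the worker; it assumes the code lives in the main document class, and runTheLongBlockingTask() is a placeholder; Redtamarin differs in the details):

import flash.system.Worker;
import flash.system.WorkerDomain;

if( Worker.current.isPrimordial )
{
    // main isolate: spawn a child worker from this SWF's own bytes
    var child:Worker = WorkerDomain.current.createWorker( this.loaderInfo.bytes );
    child.start();
    // ...and keep running the main event loop here
}
else
{
    // the same code, but this copy runs inside the child worker,
    // still single threaded within that worker
    runTheLongBlockingTask();
}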


An Event Loop for Everyone

This is where things get more complicated, but I will try to keep the explanation simple.

Imagine you can use the Worker class but you don’t have access to the MessageChannel class: how would you go about replicating it?

You would try to do something like this:

  • your program starts, this is the primordial worker
  • it will spawn a child worker
  • the primordial worker would loop and wait
  • in the child worker
    • it will start to read that big file for 30 seconds or so
    • the thread of that child worker would block
      but the thread of the main worker would still loop
    • when the file is completely read
    • the child worker would send a message
  • in the primordial worker loop
  • it would read the value of a shared variable to get the message from the child worker
  • then it would write a message back to the child worker: “I read it”
  • the child worker, with its own event loop,
    will know its previous message has been read and will terminate (stop its event loop)
  • the main worker loop will continue to execute, or stop, etc.
  • ultimately the main worker loop will stop
  • program terminates

Without the MessageChannel class you would use getSharedProperty()/setSharedProperty()
to write those messages from the child worker to the main worker (and the other way around),
and use a Mutex to make the access exclusive.

For example (continuing our previous simple event loop):

// imports for the Flash/AIR concurrency API
// (assumed here; adjust them to your runtime if needed)
import flash.system.Worker;
import flash.system.WorkerDomain;
import flash.concurrent.Mutex;

function initChild():void
{
    var self:Worker = Worker.current;
    if( !self.isPrimordial )
    {
        // each child worker stores its own mutex and its own messages queue
        self.setSharedProperty( "avm2_child_mutex", new Mutex() );
        self.setSharedProperty( "avm2_messages_queue", [] );
    }
}

function sendMessage( message:String ):void
{
    // runs inside the child worker: lock our own mutex, then queue the message
    var self:Worker = Worker.current;
    var mutex:Mutex = self.getSharedProperty( "avm2_child_mutex" ) as Mutex;
    mutex.lock();
    var messages:Array = self.getSharedProperty( "avm2_messages_queue" ) as Array;
        messages.push( message );
    mutex.unlock();
}

function pendingMessages():uint
{
    // runs inside the child worker: how many of our messages
    // have not been consumed by the main worker yet?
    var pending:uint = 0;
    var self:Worker  = Worker.current;
    var mutex:Mutex  = self.getSharedProperty( "avm2_child_mutex" ) as Mutex;
    mutex.lock();
    var messages:Array = self.getSharedProperty( "avm2_messages_queue" ) as Array;
    pending = messages.length;
    mutex.unlock();

    return pending;
}

function consumeWorkers():void
{
    var list:Vector.<Worker> = WorkerDomain.current.listWorkers();
    if( list.length > 0 )
    {
        for( var i:uint = 0; i < list.length; i++ )
        {
            var child:Worker = list[i];

            if( !child.isPrimordial )
            {
                var childMutex:Mutex = child.getSharedProperty( "avm2_child_mutex" ) as Mutex;
                childMutex.lock();
                var messages:Array = child.getSharedProperty( "avm2_messages_queue" ) as Array;
                // read all messages
                while( messages.length > 0 )
                {
                    var message:String = messages.shift();
                    dispatch( message );
                }

                child.setSharedProperty( "avm2_messages_queue", [] ); // reset messages queue
                childMutex.unlock();
            }
        }
    }
}

In a child worker you would start with:

initChild();

The mutex is specific to that child, eg. it is not a global mutex shared by all workers.

Within the child worker, when you want to send a message, simply use:

sendMessage( "big file read complete" );

Finally, in the child event loop:

function loop():void
{
    while( events.length > 0 )
    {
        var event:String = events.shift();
        dispatch( event );
    }

    // once our local events are drained and the main worker has
    // consumed all our messages, this child loop can stop
    if( pendingMessages() == 0 )
    {
        exit(); // exit loop
    }
}

And in the main worker loop:

function loop():void
{
    consumeWorkers(); // pull the messages of all child workers into our events

    while( events.length > 0 )
    {
        var event:String = events.shift();
        dispatch( event );
    }
}

OK, so that’s one way of doing it. Basically:

  • each child worker manages its own array of messages
  • each child worker has its own Mutex
  • the main worker loops through all the child workers
    to consume those messages
  • depending on Worker.current.isPrimordial
    you can decide which event loop to run (see the sketch below)

You could do it differently, but that’s the principle of it, for simple illustration purposes.
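As a rough sketch of that last point (hypothetical glue code: readBigFile() is a placeholder, and each worker would really only define its own version of loop()):

if( Worker.current.isPrimordial )
{
    // main worker: its loop() consumes child messages and dispatches events
    loop();
}
else
{
    // child worker: do the blocking "big task", report back,
    // then loop until the main worker has consumed the message
    initChild();
    readBigFile();                           // placeholder for the 30 seconds read
    sendMessage( "big file read complete" );
    loop();
}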

With that principle you could delegate any “big task” that would slow down your main worker to a child worker, and so achieve concurrency.

Note:
if you look at the MessageChannel documentation
you will see it is all about passing messages and the state of those messages,
for example:
“Code in the worker that creates the MessageChannel object can use it to send one-way messages to the Worker object specified as the receiver argument”
“Indicates whether the MessageChannel has one or more messages from the sending worker in its internal message queue.”
etc.
In fact, if we were pushing it, we could pretty much emulate those message channels
with Redtamarin (except the part where a MessageChannel instance can be shared with workers);
for comparison, a sketch of the real MessageChannel usage follows the quote below,
eg. from the setSharedProperty() documentation:
There are five types of objects that are an exception to the rule that objects aren’t shared between workers:

  • Worker
  • MessageChannel
  • shareable ByteArray (a ByteArray object with its shareable property set to true)
  • Mutex
  • Condition
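For comparison, here is a minimal sketch of the real MessageChannel usage in Flash/AIR (not available in Redtamarin; workerBytes is a placeholder and the two worker contexts are collapsed into one snippet):

import flash.system.MessageChannel;
import flash.system.Worker;
import flash.system.WorkerDomain;

// in the primordial worker: create a child-to-main channel and hand it to the child
var child:Worker = WorkerDomain.current.createWorker( workerBytes );
var channel:MessageChannel = child.createMessageChannel( Worker.current );
child.setSharedProperty( "toMain", channel );
child.start();

// in the child worker: retrieve the channel and send a message through it
var toMain:MessageChannel = Worker.current.getSharedProperty( "toMain" ) as MessageChannel;
toMain.send( "big file read complete" );

// back in the primordial worker: poll it (or listen for Event.CHANNEL_MESSAGE)
if( channel.messageAvailable )
{
    trace( channel.receive() );
}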

All that will make your event system asynchronous, able to manage

  • I/O
  • timers
  • streams
  • etc.

And you could also define specific rules like

  • exit a child loop only if the messages queue is empty
  • exit after a 5 second timeout
  • force exit a child worker if the task lasts longer than N seconds
  • etc.

Here we are using simple string messages, but it could be any serialisable object;
for example you could pass the ByteArray of the big file you read, a uint for the number of seconds it took to read, etc.

Simply put, by doing all this we have defined an architecture.


Different Architectures

From the excellent post Understanding Flash Player with Adobe Scout
(archive here so you get the nice illustrations)

we can see the Flash Player Architecture,

and now I can show you the Redtamarin Shell Architecture.

As you can see, what we are doing in Redtamarin is much, much, much simpler, but it does work :wink:
(anything in dotted lines is optional)


Here is a little summary of the differences:

  • Redtamarin

    • synchronous by default, asynchronous is optional
    • events work everywhere (sync and async),
      but some events will not occur if not async (for ex: timers)
    • frames are virtual
    • no rendering occurs
    • you can exit/terminate at any time
      C.stdlib exit(), Program.exit(), Runtime.loop.stop()
    • you can pause the event loop
      Runtime.loop.stop()/Runtime.loop.start()
    • you can delegate the control of the event loop
      Runtime.loop.runOnce()
      eg. when you have your own loop in a socket server (see the sketch after this list)
  • Flash/AIR

    • always asynchronous
    • events work everywhere all the time;
      if some events can not be consumed in 1 frame, they are delegated to the next frame
    • frames are real (eg. if you move the playhead you can re-execute a frame)
    • rendering occurs on each frame
    • you can almost exit/terminate at any time
      System.exit() (Flash Player Debugger only)
      NativeApplication.exit() (AIR only)
    • you can almost pause the event loop
      System.pause()/System.resume() (Flash Player Debugger or AIR Debug Launcher only)
    • you can not delegate the control of the event loop
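As a rough illustration of that last Redtamarin point (hypothetical server code: only Runtime.loop.runOnce() comes from the list above, serverRunning and acceptAndHandleOneClient() are placeholders):

// your own blocking-ish server loop keeps control,
// and regularly hands one iteration over to the Redtamarin event loop
while( serverRunning )
{
    acceptAndHandleOneClient(); // placeholder: your socket server work
    Runtime.loop.runOnce();     // let the runtime process its queued events once
}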

Things you can do with Redtamarin that you can not really do with Flash/AIR:

  • read from standard input, eg. stdin
  • write to standard error, eg. stderr
  • catch signals: SIGINT, SIGHUP, etc.
    (in asynchronous mode only)

All that leads to the next step: blocking vs non-blocking programs.


Blocking vs Non-Blocking

In Redtamarin, the default behaviour is to be synchronous or blocking:

it will behave exactly like any other command-line executable (an exe compiled from C, a shell script, etc.)
and you will be able to manage synchronous events (anything in your user code using EventDispatcher).
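To see what “synchronous events” means here, a small sketch with the standard EventDispatcher (dispatchEvent() calls the listeners inline, before it returns):

import flash.events.Event;
import flash.events.EventDispatcher;

var dispatcher:EventDispatcher = new EventDispatcher();

dispatcher.addEventListener( "done", function( e:Event ):void
{
    trace( "the listener runs synchronously" );
} );

dispatcher.dispatchEvent( new Event( "done" ) ); // blocks until all listeners returned
trace( "this traces only after the listener" );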


If you activate the asynchronous or non-blocking mode using `Runtime.goAsync()`:

it will behave almost like a regular command-line executable, but it will “loop forever” waiting for events,
until your user code explicitly exits the program (or an external signal like SIGKILL does); you will be able to manage both synchronous and asynchronous events (like timers, signals, etc.).
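A minimal sketch of that switch (assuming the Redtamarin shell classes exposing the calls mentioned above; the import paths are an assumption):

import shell.Runtime; // assumed package for the Runtime class mentioned above
import shell.Program; // assumed package for the Program class mentioned above

// by default the program is synchronous/blocking and exits when the code ends
trace( "doing synchronous work..." );

// opt in to the asynchronous / non-blocking mode:
// the program now "loops forever", waiting for events (timers, signals, etc.)
Runtime.goAsync();

// later, from user code or an event handler, end the loop and the program
// Program.exit();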


In conclusion, all that is very well explained in the official doc:
ActionScript 3.0 Developer’s Guide
Understanding workers and concurrency

When an application doesn’t use workers, the application’s code executes in a single linear block of executing steps known as an execution thread. The thread executes the code that a developer writes. It also executes much of the code that’s part of the runtime, most notably the code that updates the screen when display objects’ properties change. Although code is written in chunks as methods and classes, at run time the code executes one line at a time as though it were written in a single long series of steps. Consider this hypothetical example of the steps that an application executes:

  1. Enter frame: The runtime calls any enterFrame event handlers and runs their code one at a time

  2. Mouse event: The user moves the mouse, and the runtime calls any mouse event handlers as the various rollover and rollout events happen

  3. Load complete event: A request to load an xml file from a url returns with the loaded file data. The event handler is called and runs its steps, reading the xml content and creating a set of objects from the xml data.

  4. Mouse event: The mouse has moved again, so the runtime calls the relevant mouse event handlers

  5. Rendering: No more events are waiting, so the runtime updates the screen based on any changes made to display objects

  6. Enter frame: The cycle begins again

As described in the example, the hypothetical steps 1-5 run in sequence within a single block of time called a frame. Because they run in sequence in a single thread, the runtime can’t interrupt one step of the process to run another one. At a frame rate of 30 frames-per-second, the runtime has less than one thirtieth of a second to execute all those operations. In many cases that is enough time for the code to run, and the runtime simply waits during the remaining time. However, suppose the xml data that loads in step 3 is a very large, deeply nested xml structure. As the code loops over the xml and creates objects, it might conceivably take longer than one thirtieth of a second to do that work. In that case, the later steps (responding to the mouse and redrawing the screen) do not happen as soon as they should. This causes the screen to freeze and stutter as the screen isn’t redrawn fast enough in response to the user moving the mouse.

If all the code executes in the same thread, there is only one way to avoid occasional stutters and freezes. This is to not do long-running operations such as looping over a large set of data. ActionScript workers provide another solution. Using a worker, you can execute long-running code in a separate worker. Each worker runs in a separate thread, so the background worker performs the long-running operation in its own thread. That frees up the main worker’s execution thread to redraw the screen each frame without being blocked by other work.

The ability to run multiple code operations at the same time in this way is known as concurrency. When the background worker finishes its work, or at “progress” points along the way, you can send the main worker notifications and data. In this way, you can write code that performs complex or time consuming operations but avoid the bad user experience of having the screen freeze.

Workers are useful because they decrease the chances of the frame rate dropping due to the main rendering thread being blocked by other code. However, workers require additional system memory and CPU use, which can be costly to overall application performance. Because each worker uses its own instance of the runtime virtual machine, even the overhead of a trivial worker can be large. When using workers, test your code across all your target platforms to ensure that the demands on the system are not too large. Adobe recommends that you do not use more than one or two background workers in a typical scenario.