Ambulant design, overall

Last updated for Ambulant version 1.1.

Design process

The design consists of a mixed bag of technologies.

If possible, all design documents will carry a notice stating for which version of Ambulant they were last updated, so it is relatively easy to spot outdated (or potentially outdated) documents.

Global structure

To be able to use the code to create, say, a plugin SMIL renderer for use in a browser, we need a global "playback engine" object that has all the others hanging off it (plus factory methods to create them, etc.). To support this we have a global player object that is the controller of all aspects of playback of a single SMIL document.
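A minimal sketch of the control interface such a player object could present to the embedding application (the method names here are illustrative, not necessarily the actual ones):

class player {
  public:
    virtual ~player() {}
    virtual void start() = 0;          // begin playback of the document
    virtual void stop() = 0;           // abort playback and release resources
    virtual void pause() = 0;
    virtual void resume() = 0;
    virtual bool is_done() const = 0;  // has the document finished playing?
};

The embedder creates one such object per open document and forwards GUI commands to it.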

In addition we use factories for creating things like renderers, file readers, parsers and windows. These factories are populated by the main program and then used during document playback.

With a structure like this the application itself becomes basically a skeleton embedder: it is responsible for the GUI (handling open/open URL/quit/etc.), it creates playback engine objects when needed, and it provides a small number of callbacks for "create window" and such.

In addition, because the main program is responsible for creating all the factories it should be possible to create a different main program that does not actually render anything, but only prints on stdout what should happen at what time, or any other form of symbolic execution.
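For example, such a non-rendering player could be obtained by plugging trace-only implementations into the factories. The sketch below shows the idea for the "create window" callback; the interface and class names are invented for this illustration and do not match the real Ambulant factories:

#include <cstdio>

class window {                            // placeholder output area
  public:
    virtual ~window() {}
};

class window_factory {                    // the "create window" callback interface
  public:
    virtual ~window_factory() {}
    virtual window *new_window(int width, int height) = 0;
};

// A factory that renders nothing, but prints what would have happened.
class trace_window_factory : public window_factory {
  public:
    window *new_window(int width, int height) {
        std::printf("create window of %dx%d\n", width, height);
        return new window();              // dummy window, nothing is drawn
    }
};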

Replaceable components

AmbulantPlayer is intended to be a research system, and therefore all components should be easy to replace without interfering with the rest of the system. This allows a researcher to concentrate on one issue, such as network protocols or scheduling algorithms, while the rest of the system is usable as-is.

This replaceability is incorporated in the design through two means: abstract interfaces for the various components, and the factories (populated by the main program) through which concrete implementations of those interfaces are selected.

In the current implementation there are two main objects for which this has not been done (the DOM tree and the low-level event scheduler); this will be addressed at a later stage. All other functionality (media playback, data retrieval, parsing, layout, scheduling) does follow this pattern.

Run-time system

Because we have a C++ implementation we cannot rely on reference counting or garbage collection in the underlying runtime. lib/refcount.h has a simple refcounting implementation that serves as our form of garbage collection.

Ref-counting should only be used when it is absolutely needed: it adds overhead and some complexity, since it cannot happen automatically. On the other hand there are some cases where it simplifies the code a lot. Judging from my own code and the code of others I have seen, only very few objects need ref-counting. Objects owned by a single class, and nearly all objects are, never need to be ref-counted: you just delete them. Ref-counting is needed when objects containing references are shared by completely independent components.
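As an illustration, an intrusive reference count along these lines could look as follows. This is a minimal sketch: the names are modeled on, but not necessarily identical to, what lib/refcount.h provides, and a real implementation would have to protect the counter with a lock for multi-threaded use.

#include <cassert>

class ref_counted {
  public:
    ref_counted() : m_refcount(1) {}     // the creator holds the first reference
    virtual ~ref_counted() {}
    void add_ref() { ++m_refcount; }     // another component keeps a reference
    void release() {                     // drop a reference, delete at zero
        assert(m_refcount > 0);
        if (--m_refcount == 0) delete this;
    }
  private:
    long m_refcount;                     // a real version would lock around this
};

// Example: a datasource shared by an independent reader and a renderer would
// inherit from ref_counted; an object owned by a single parent would not.
class shared_datasource : public ref_counted {
    // ... data access methods ...
};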

The architecture is fairly tightly coupled. The original idea of allowing the high-level scheduler to live on a different machine, precomputing schedules and sending these to a low-level scheduler, isn't going to work for SMIL without putting almost all SMIL complexity in the low-level scheduler.

Main loop

The basic architecture is event-driven, with a small number of worker threads picking up events from the event queue. The alternative is to use multiple threads all over, but event-driven seems the better choice: an object that wants to use multiple threads can do so more easily on top of an event infrastructure than the other way around, and such threads are "somebody else's problem", hidden from the rest of the architecture.

At first glance it appears that some objects, such as a renderer, would benefit from a threaded architecture, but it turns out this isn't really so. The naive threaded implementation:

while ((data = read_data()) != NULL)
    render_data(data);

will not work, because many other things can also happen, such as a user-initiated event, or the timeline for the renderer being torn down. So, the naive loop sketched here will become hairy anyway, and look like:

while ((event = wait_for_some_interesting_event()) != NULL) {
    switch (event->type) {
    case DATA: render_data(event->data); break;
    case STOP: close_resources_and_exit(); break;
    ...
    }
}

so we might as well split this out in the architecture.

The event handler architecture needs an elaborate priority scheme, one expressive enough that the correct execution order of events that happen "at the same time" follows automatically.
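A minimal sketch of such an event processor, assuming a simple three-level priority scheme (an illustration of the idea, not the actual Ambulant scheduler):

#include <queue>

class event {
  public:
    virtual ~event() {}
    virtual void fire() = 0;             // the actual work, run by a worker thread
};

class event_processor {
  public:
    enum priority { high = 0, med = 1, low = 2 };
    void add_event(event *e, priority p) { m_queue[p].push(e); }
    void serve_one_event() {             // called repeatedly by the worker threads
        for (int p = high; p <= low; p++) {
            if (!m_queue[p].empty()) {
                event *e = m_queue[p].front();
                m_queue[p].pop();
                e->fire();               // e.g. render_data() or close_resources()
                delete e;
                return;
            }
        }
    }
  private:
    std::queue<event*> m_queue[3];       // one FIFO per priority; locking omitted
};

Within one priority level events simply run in FIFO order; whether a fixed set of levels like this is expressive enough is exactly the question raised above.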

Factory pattern

There are various factories that follow a common pattern. There is a client interface, called something like playable_factory, that can be used to create playable objects. If the factory cannot create the object it returns NULL.

This interface is implemented by all the providers of objects that have the playable interface. For example, the implementation of a video renderer for Cocoa on MacOSX will consist of a cocoa_video_playable implementation and a cocoa_video_playable_factory implementation.

The core also has a provider interface, usually called global_playable_factory. This interface has a method add_playable_factory that the playable provider uses to register its playable_factory. Then, when a client uses the global_playable_factory to create a playable it will iterate over all playable factories until one is found that can create the object.
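Schematically, with the creation method reduced to a minimal signature (the real interfaces carry more arguments and context), the pattern looks like this:

#include <cstddef>
#include <vector>

class playable {                               // the object the renderer implements
  public:
    virtual ~playable() {}
};

class playable_factory {                       // client interface, one per provider
  public:
    virtual ~playable_factory() {}
    // Return a new playable for this media item, or NULL if we cannot handle it.
    virtual playable *new_playable(const char *tag, const char *url) = 0;
};

class global_playable_factory : public playable_factory {
  public:
    void add_playable_factory(playable_factory *pf) { m_factories.push_back(pf); }
    playable *new_playable(const char *tag, const char *url) {
        // Try each registered provider in turn until one can create the object.
        for (std::size_t i = 0; i < m_factories.size(); i++) {
            playable *p = m_factories[i]->new_playable(tag, url);
            if (p) return p;
        }
        return NULL;                           // no provider could handle this item
    }
  private:
    std::vector<playable_factory*> m_factories;
};

A provider such as cocoa_video_playable_factory would implement new_playable to check whether it can handle the item, and return NULL otherwise.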

These global_playable_factory objects should be singletons, but in the current implementation this isn't always true. Also, the global_ naming convention isn't strictly followed for all factories.

Machine-dependent code

The general way to handle machine-dependency is to create a machine-independent abstract base class, plus machine-dependent subclasses. Then there is a factory function that creates a machine-dependent instance and returns it cast to its machine-independent base class.

With this scheme we can handle machine-dependent extensions to the base class easily: modules using these extensions declare objects of the subclass and call the initializer directly instead of through the factory function.
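A sketch of this pattern, using an invented datasource class as the example (the real datasource interfaces are richer; the preprocessor symbols are the ones described below):

class datasource {                       // machine-independent base class
  public:
    virtual ~datasource() {}
    virtual int read(char *buf, int size) = 0;
};

#ifdef AMBULANT_PLATFORM_WIN32
class win32_datasource : public datasource {
  public:
    int read(char *buf, int size) { /* use the WinInet cache, etc. */ return 0; }
};
#else
class unix_datasource : public datasource {
  public:
    int read(char *buf, int size) { /* use read(2), curl, etc. */ return 0; }
};
#endif

// Factory function: machine-independent clients only ever see "datasource".
datasource *new_datasource(const char *url)
{
#ifdef AMBULANT_PLATFORM_WIN32
    return new win32_datasource();
#else
    return new unix_datasource();
#endif
}

A module that needs a win32-specific extension simply constructs win32_datasource directly instead of going through new_datasource().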

The scheme does not work for all objects, however: it breaks if we want to create static copies of the objects. For classes where this is the case, such as the critical_section object, we declare an abstract class base_critical_section in lib/abstract_mtsync.h, subclass it as PLATFORM::critical_section in lib/PLATFORM/PLATFORM_mtsync.h, conditionally include that in lib/mtsync.h, and there create an empty subclass critical_section of it.
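Condensed into one self-contained sketch (the real code spreads this over the three headers mentioned above, puts the platform class in its own namespace, and of course also has a win32 variant):

#include <pthread.h>

class base_critical_section {                    // lib/abstract_mtsync.h
  public:
    virtual ~base_critical_section() {}
    virtual void enter() = 0;
    virtual void leave() = 0;
};

class unix_critical_section : public base_critical_section {   // one platform variant
  public:
    unix_critical_section()  { pthread_mutex_init(&m_lock, 0); }
    ~unix_critical_section() { pthread_mutex_destroy(&m_lock); }
    void enter() { pthread_mutex_lock(&m_lock); }
    void leave() { pthread_mutex_unlock(&m_lock); }
  private:
    pthread_mutex_t m_lock;
};

// lib/mtsync.h conditionally picks the platform class and wraps it in an
// empty subclass, so clients can declare critical_section members and
// locals (static copies), which a factory function could not provide.
class critical_section : public unix_critical_section {};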

In case you really need a preprocessor define to trigger machine-dependent code: ambulant/config.h defines a number of macros like AMBULANT_PLATFORM_UNIX, AMBULANT_PLATFORM_LINUX, AMBULANT_PLATFORM_MACOS, AMBULANT_PLATFORM_WIN32 and AMBULANT_PLATFORM_WIN32_WCE. Use of these is preferred over platform-native defines.
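For example, preferring the Ambulant macros over _WIN32 or similar platform-native defines (a trivial sketch):

#include "ambulant/config.h"

#if defined(AMBULANT_PLATFORM_WIN32_WCE)
    // Windows CE specific code
#elif defined(AMBULANT_PLATFORM_WIN32)
    // desktop Windows code
#elif defined(AMBULANT_PLATFORM_MACOS)
    // MacOSX specific code
#elif defined(AMBULANT_PLATFORM_UNIX)
    // generic unix (including Linux) code
#endif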

Integrating third-party tools

We need to be able to use existing toolkits that take work out of our hands. Think of QuickTime and DirectX, where you basically pass a URL, say "play", and have nothing to worry about anymore. Also, existing URL access libraries (such as the caching infrastructure on Windows) and a third-party RTSP library need to be used. This also means we do not have to handle firewalls and the like ourselves.

An informal dataflow diagram is available showing the various ways in which media bits can go from their source (far away on the net) to the screen.

We would like to be able to re-use existing (Explorer, Netscape) plugins when applicable, but no work on this has been done yet.

Other languages

It is our intention to eventually allow bridging of Ambulant Player to other languages such as Python or Java. An effort has been made to keep the APIs clean enough to allow this, and work has started on a Python API, but this is not finished yet.

Where next?

A good place to continue reading is the walkthrough document, which gives a terse description of how a SMIL document is opened, parsed and played. Then continue with the objects document which describes the main objects in more detail. Or go back to the main documentation index.