Last updated for Ambulant version 1.1.
This document describes the function of the more important objects and interfaces in the Ambulant Player. If you haven't already done so, it is probably a good idea to read the Overall design and walkthrough documents first. The first explains the design principles and some of the choices made; the second is a brief walkthrough of how the player loads, parses and plays a SMIL document.
The nitty-gritty details of these objects, at a level of interest to developers, are also available in the API documentation. That documentation may need to be regenerated; see the README file in the documentation directory if it does not seem to exist.
You may notice that the core of the player, the SMIL 2.0 scheduler, is not mentioned here. Unfortunately it is not documented yet.
There are also additional low-level objects like thread and critical_section that are not described here. See the API documentation for details.
The refcounting protocol is contained in the file lib/refcount.h. It needs to be implemented only by objects that are truly shared, i.e. any object whose lifetime is not predetermined by some other object. New instances of refcounted objects are created using operator new. Any object that needs to share a particular instance calls add_ref() on that instance. The creator of the refcounted object and every sharer are responsible for calling its release() method when they no longer need the object.
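A minimal sketch of the protocol follows; the shared_thing and sharer classes are invented for this example, and the real (thread-safe) base classes live in lib/refcount.h.

    // Stand-in for any refcounted Ambulant object; the real base classes in
    // lib/refcount.h provide the same add_ref()/release() protocol, with a
    // thread-safe counter.
    class shared_thing {
      public:
        shared_thing() : m_refcount(1) {}                    // the creator holds one reference
        void add_ref() { ++m_refcount; }
        void release() { if (--m_refcount == 0) delete this; }
      private:
        ~shared_thing() {}                                   // only release() may destroy it
        int m_refcount;
    };

    // Any object that wants to share the instance takes its own reference.
    class sharer {
      public:
        sharer(shared_thing *obj) : m_obj(obj) { m_obj->add_ref(); }
        ~sharer() { m_obj->release(); }
      private:
        shared_thing *m_obj;
    };

    void example() {
        shared_thing *obj = new shared_thing();              // refcount == 1 (creator)
        sharer *s = new sharer(obj);                         // refcount == 2
        obj->release();                                      // creator is done, refcount == 1
        delete s;                                            // last release(): obj is destroyed
    }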
The player is the top-level object. When it is created you pass it a DOM tree, a factories structure containing references to the playable_factory, datasource_factory and window_factory, and an embedder object used for callbacks to the GUI (on state changes, opening of external documents, etc.).
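A sketch of how a hosting application might set this up; the create_player() helper, the factories type name, the header paths and the exact namespaces are assumptions for this example (see the player headers for the real factory function and signatures).

    #include "ambulant/lib/document.h"
    #include "ambulant/common/player.h"

    using namespace ambulant;

    void host_play(lib::document *doc, common::factories *fact, common::embedder *app)
    {
        // The player is handed the parsed DOM tree, the factories bundle
        // (playable, datasource and window factories) and the embedder object
        // it will use for callbacks into the GUI.
        common::player *p = create_player(doc, fact, app);   // hypothetical helper
        p->start();                   // start playing the document
        // ... later, e.g. from a GUI menu command:
        p->stop();
        // dispose of the player according to the ownership rules of the implementation
    }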
There is a UML diagram for player showing how the player relates to various other objects.
There are currently two implementations of the player interface: smil_player and mms_player. The first one is the all-singing-all-dancing SMIL 2.0 player, the second one can play MMS documents, which use a very restricted subset of SMIL 1.0.
A concise walkthrough of how the smil_player operates is given in the walkthrough document.
The XML parser roughly follows a SAX interface. To use it, you provide it with objects implementing the sax_content_handler and sax_error_handler interfaces. You then feed your document to the parser, and it calls back through those interfaces.
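To illustrate the callback pattern, here is a toy content handler. The tag_handler interface is an invented stand-in for sax_content_handler, whose real method names and signatures are in the API documentation.

    #include <cstdio>
    #include <string>

    // Invented stand-in playing the role of sax_content_handler: the parser
    // calls these methods while it consumes the document.
    class tag_handler {
      public:
        virtual ~tag_handler() {}
        virtual void start_element(const std::string& tag) = 0;
        virtual void end_element(const std::string& tag) = 0;
    };

    // Example client: print the element structure with indentation.
    // A real parser object is constructed with the handler and then fed the
    // document text; it calls back into the handler as it encounters elements.
    class printing_handler : public tag_handler {
      public:
        printing_handler() : m_depth(0) {}
        void start_element(const std::string& tag) {
            std::printf("%*s<%s>\n", 2*m_depth++, "", tag.c_str());
        }
        void end_element(const std::string& tag) {
            std::printf("%*s</%s>\n", 2*--m_depth, "", tag.c_str());
        }
      private:
        int m_depth;
    };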
There is a UML diagram for parser showing how these classes relate to each other.
There are currently two parser implementations. expat_parser uses James Clark's Expat parser, which is a fast and lean no-frills parser. xerces_parser uses the Apache Xerces library, which is at the other end of the spectrum: it can do document validation with both DTDs and Schemas, and lots of other wonderful things, but this comes at the price of a rather hefty memory footprint.
There are actually a couple of these interfaces, but they are similar. Their function is to implement URL retrieval schemes or file I/O and get external data to media handlers and other modules requiring data access.
The general interface is that a datasource is acquired through a datasource_factory interface, which passes the URL to the various implementations until one is found that can handle it and returns a datasource object. The client then calls the start() method on this object, passing a callback routine, and the datasource arranges for the callback to be called as soon as data is available. No new callbacks are made until start() is called again, and the datasource has a buffer that can be limited, so this design allows for flow control over the net if required.
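A sketch of a typical consumer follows. The factory and accessor names used here (new_raw_datasource, get_read_ptr, size, readdone, end_of_file) and the no_arg_callback helper are assumptions based on the description above; check the datasource headers for the real signatures.

    using namespace ambulant;

    class my_consumer {
      public:
        my_consumer(net::datasource_factory *df, const net::url& u, lib::event_processor *evp)
        :   m_src(df->new_raw_datasource(u)), m_evp(evp)
        {
            request_more();
        }
        void data_avail() {
            // Called (once) when data has arrived in the datasource buffer.
            char *buf = m_src->get_read_ptr();
            size_t n = m_src->size();
            // ... consume n bytes starting at buf ...
            m_src->readdone(n);               // free buffer space, enabling flow control
            if (!m_src->end_of_file())
                request_more();               // no further callback until we ask again
        }
      private:
        void request_more() {
            m_src->start(m_evp, new lib::no_arg_callback<my_consumer>(this, &my_consumer::data_avail));
        }
        net::datasource *m_src;
        lib::event_processor *m_evp;
    };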
There are specialised datasource interfaces for audio and video, which can handle extra things like converting audio from MP3 format to linear samples or demultiplexing an audio/video stream.
There is a UML diagram for datasource showing how these classes relate to each other.
There is also a UML state diagram for datasource showing the state machine that a datasource should adhere to.
The playable interface is implemented by what are usually called media handlers or media renderers: it is this interface the scheduler uses to make things appear on the screen (or sound come out of the speakers, or otherwise do their thing).
Playables are created through the global_playable_factory, which has references to all playable implementations and asks them in turn whether they can handle playback of the specific DOM node, until one matches.
When the playable is started it is provided with a playable_notification object (implemented by the scheduler), which is where it can send its status messages (such as stopped() when the media is finished, or clicked() when the user clicks the mouse over the media item).
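A skeleton of a trivial playable follows. The constructor arguments, the start()/stop() signatures and the cookie argument to stopped() are assumptions based on the description above; the real interface has additional methods for pausing, seeking and the like.

    using namespace ambulant;

    // In the real code this would derive from common::playable; only the
    // calls that illustrate the notification protocol are shown here.
    class null_playable {
      public:
        null_playable(const lib::node *n, common::playable_notification *ctx, int cookie)
        :   m_node(n), m_context(ctx), m_cookie(cookie) {}

        void start(double t) {
            // A real renderer would start producing output here. This item has
            // nothing to show, so report to the scheduler that it is finished
            // immediately; the scheduler then continues with the timing graph.
            m_context->stopped(m_cookie, 0);
        }
        void stop() { /* nothing to tear down */ }

      private:
        const lib::node *m_node;
        common::playable_notification *m_context;
        int m_cookie;   // the real code uses a cookie_type typedef
    };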
Most playables have an accompanying interface, renderer, which controls where the media item is rendered (non-rendering items such as SMIL animations are the exception to this rule).
There is a UML diagram for playable showing how these classes relate to each other.
There is also a UML state diagram for playable showing the state machine that a playable should adhere to.
While some media handlers implement the playable interface from scratch (an example is the aforementioned SMIL animation handler), there are a number of convenience classes, such as renderer_playable, that implement functionality shared by many media handlers.
There is a UML diagram for renderer_playable showing how these classes relate to each other.
The layout manager determines where media items appear, and also governs things like z-ordering, background colors for regions and such.
The central interface is the surface, which is the object passed to a renderer. Whenever a renderer has something new to show, it calls need_redraw() on this interface. Whenever it is time to actually redraw something, the surface calls redraw() on the renderer.
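A sketch of the handshake from the renderer's point of view; the no-argument form of need_redraw() used here and the omitted arguments of redraw() (the real methods pass information such as the dirty rectangle) are simplifications for this example.

    using namespace ambulant;

    // In the real code this would derive from common::renderer.
    class my_renderer {
      public:
        void new_frame_available() {
            // The renderer never paints on its own initiative; it only tells
            // its surface that (part of) it is out of date.
            m_dest->need_redraw();
        }
        void redraw(/* dirty rect, window */) {
            // Called back via the surface/GUI machinery when it is actually
            // time to paint: draw the current frame here.
        }
      private:
        common::surface *m_dest;
    };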
The layout_manager interface maps DOM nodes to the surface objects on which they should play back.
There are two more auxiliary interfaces that are not strictly necessary but are used by the layout implementation for historical reasons: surface_template and surface_factory. These interfaces are used to create subregions and toplevel windows, respectively.
There is a UML diagram for layout showing how these classes relate to each other.
The SMIL 2.0 implementations of surface, surface_template and surface_factory are the classes passive_region and passive_root_layout.
There is a UML diagram for region showing how these classes relate to each other.
This is the abstract interface used to create new windows and tie them to the layout implementation. The implementation is machine-dependent, obviously, and usually supplied by the hosting application.
In addition there is the gui_events interface, which goes the other way: it is exposed by the layout implementation and used by the machine-dependent window implementation to communicate things like redraw requests.
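A sketch of the host-application side follows. The my_os_window and my_window_factory classes are invented placeholders, the simplified new_window() and redraw() calls are approximations, and the real interfaces (window_factory, gui_window, gui_events) are described in the layout headers.

    #include <string>

    using namespace ambulant;

    // Invented placeholder for a platform window. It keeps the gui_events
    // handler exposed by the layout code and forwards OS paint events to it.
    class my_os_window {
      public:
        my_os_window(common::gui_events *handler) : m_handler(handler) {}
        void os_paint_event() {
            m_handler->redraw();   // simplified: the real call passes the dirty area etc.
        }
      private:
        common::gui_events *m_handler;
    };

    // In the real code this would derive from common::window_factory; the
    // new_window() signature shown is an approximation.
    class my_window_factory {
      public:
        my_os_window *new_window(const std::string &name, common::gui_events *handler) {
            // Create a platform window for the named toplevel region and tie
            // it to the layout implementation via the gui_events handler.
            return new my_os_window(handler);
        }
    };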
There is a UML diagram for window showing how these classes relate to each other.
The animation interfaces are animation_destination and animation_notification. The SMIL 2.0 playable uses these interfaces to change parameters and send notification of those changes, respectively.
There is a UML diagram for animation showing how these classes relate to each other.
All clocks adhere to the abstract_timer interface. This interface allows you to get the current time and set the speed of the clock.
There is a companion interface abstract_timer_client (which is actually a base class of abstract_timer) that allows objects to get notification of changes in timer speed.
Currently there are two implementations of the abstract_timer interface: the operating-system-specific realtime clock (whose speed cannot be set) and timer, which implements a new zero-based clock on top of another abstract_timer. Its speed can be set with set_speed(), while it stays tightly synchronized with its parent clock.
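A sketch of building a document clock; the realtime_timer_factory() helper, the timer constructor arguments and the elapsed() accessor used here are assumptions (check lib/timer.h for the actual names).

    using namespace ambulant;

    void clock_example() {
        lib::abstract_timer *sys_clock = lib::realtime_timer_factory();  // OS clock; speed is fixed
        lib::timer *doc_clock = new lib::timer(sys_clock);               // zero-based clock on top of it
        doc_clock->set_speed(0.5);                                       // play at half speed
        lib::abstract_timer::time_type now = doc_clock->elapsed();       // current document time
        (void)now;
    }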
Eventually there may be other implementations of timer, such as clocks that are allowed to slip synchronization, or whatever other semantics SMIL requires.
There is a UML diagram for clocks showing how these classes relate to each other.
The event_processor is the low-level scheduler of the system. It is a priority run queue with methods to add callbacks, optionally with a delay before the callback becomes eligible to run.
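A sketch of scheduling a delayed callback; the add_event() signature (event pointer plus delay) and the no_arg_callback helper are assumptions based on the description above.

    using namespace ambulant;

    class poller {
      public:
        poller(lib::event_processor *evp) : m_evp(evp) {}
        void schedule() {
            // Ask the event processor to call poll() after roughly 100 ms.
            lib::event *e = new lib::no_arg_callback<poller>(this, &poller::poll);
            m_evp->add_event(e, 100);
        }
        void poll() {
            // ... do the periodic work, then re-arm:
            schedule();
        }
      private:
        lib::event_processor *m_evp;
    };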
There is a UML diagram for event processor showing how these classes relate to each other.
The document class contains the DOM tree and some auxiliary data.
There is a UML diagram for document showing how the document class relates to various other objects.
The node class represents a node in the DOM tree. Actually, our tree isn't 100% DOM-compatible, but it is close enough. The node objects store the tag, attributes and data pertaining to the XML node. There are basic methods to access the parent, next sibling and first child, to insert nodes into or remove them from a tree, and more.
In the actual code there is a compile-time switch, WITH_EXTERNAL_DOM, that governs whether the node class is abstract or not. If it is defined, node is equivalent to node_interface, the abstract API, and node_impl is one possible implementation of it. If it is not defined, node is equivalent to node_impl, which leads to a more efficient implementation with inline methods and such. This helps on low-power machines.
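A sketch of walking the tree; the accessor names down(), next() and get_attribute() are assumptions based on the description above (see lib/node.h for the exact interface).

    using namespace ambulant;

    // Recursively count the elements in a subtree that carry an "id" attribute.
    int count_ids(const lib::node *n) {
        int count = n->get_attribute("id") ? 1 : 0;
        for (const lib::node *child = n->down(); child; child = child->next())
            count += count_ids(child);
        return count;
    }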
There are two auxiliary classes that augment the node functionality using only these interfaces.
There is a UML diagram for node showing how these classes relate to each other.
The implementation of SMIL transitions is fairly complex, because there are very many transition types and they also need to be implemented efficiently on multiple platforms. The current implementation is fully based on inheritance; a model with delegation would probably result in a cleaner design.
The central object is the transition_engine, which also supplies the interface used by clients. It has begin(), end(), step() and next_step_delay() methods, which the client (usually a media renderer) uses to control the transition. next_step_delay() needs an explanation: it returns the delay until the engine would like to receive the next call to step() from the renderer.
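A sketch of how a renderer might drive the engine; only the four method names come from the description above, while the time arguments (milliseconds here) and the omitted includes, namespace qualifiers and engine setup are assumptions for this example.

    // 'engine' points to a transition_engine already set up for the desired
    // transition type; includes and namespace qualifiers are omitted here.
    void drive_transition(transition_engine *engine, long now)
    {
        engine->begin(now);                       // the transition starts at the current time
        // ... then, each time our step callback fires:
        engine->step(now);                        // render the next stage of the transition
        long delay = engine->next_step_delay();   // when the engine wants the next step()
        // re-arm a timer/event for 'delay' ms (e.g. via the event_processor)
        // ... and when the transition duration has elapsed:
        engine->end();
    }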
On top of this central object there is a multiple-inheritance graph. One leg is machine-dependent and does the actual bitblit operation to combine two images; the other leg is machine-independent but depends on the actual transition type, and computes the parameters for the bitblit. These two then come together in a stub class that has all the functionality for a specific transition type on a specific platform.
There is a UML diagram for transitions showing how these classes relate to each other.