AAX SDK 2.4.1 (Avid Audio Extensions Development Kit)
Details about parameter timing and how to keep parameter updates in sync.
At any given moment, a plug-in may be asked to handle events from multiple locations on the timeline, and each module in an AAX plug-in may be updated using a different timeline position.
In this article, we will refer to two timeline locations in particular: the Automation time location, where the plug-in's data model receives parameter updates during automation playback, and the Render time location, where the plug-in's algorithm is currently processing audio.
As an AAX plug-in developer, you don't usually need to worry about the fact that your plug-in's data model and algorithm may each represent a different point in the timeline; the AAX packet system handles all of the necessary synchronization between these two locations.
This works seamlessly in a normal AAX plug-in because the real-time algorithm is fully decoupled from the plug-in's data model. Since all of the state information for the algorithm is delivered through its context structure, the host can simply swap in the correct context data for each call to the processing callback. The plug-in does not require any special handling code to synchronize between the two timeline locations, and, as a bonus, AAX plug-ins can achieve deterministic, accurate automation playback without doing any extra work to handle time-stamped parameter update queues or other overhead.
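To make the decoupling concrete, here is a minimal sketch of the idea. The types below (`GainCoefs`, `Context`, `ProcessProc`) are hypothetical stand-ins, not actual AAX SDK declarations; in a real plug-in the host owns the context structure and fills in its coefficient pointers before each invocation of the processing callback.

```cpp
#include <cassert>
#include <cstdint>

// Coefficients delivered through the packet system (one packet payload).
struct GainCoefs { float gain; };

// The algorithm context: every piece of state the render callback needs.
struct Context {
    const GainCoefs* coefs;   // the host swaps this pointer per callback
    const float*     input;
    float*           output;
    int32_t          numSamples;
};

// The render callback reads ONLY from its context; it never touches the
// data model directly, so the host is free to hand it coefficient data
// from whichever timeline position is correct for this buffer.
void ProcessProc(Context* ctx) {
    for (int32_t i = 0; i < ctx->numSamples; ++i)
        ctx->output[i] = ctx->input[i] * ctx->coefs->gain;
}
```

Because the callback has no other source of state, the host can point `coefs` at one packet's data for this buffer and a newer packet's data for the next, and the algorithm stays in sync with the timeline automatically.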
When playing back automation, the AAX host calls UpdateParameterNormalizedValue() to update the data model state, then calls GenerateCoefficients() to trigger the generation of new packets. See Basic parameter update sequences for a full description of this sequence.
Before the host calls GenerateCoefficients() to generate packets for an automation breakpoint, it records the timeline position of the breakpoint (AAX_IController::GetCurrentAutomationTimestamp() provides this value as a sample offset from the beginning of playback). Every packet that is posted during execution of GenerateCoefficients() is tagged with this timestamp when it is queued for delivery.
As the playhead advances and sample buffers are queued for processing, the host tracks the location of the next time-stamped packet in the packet queue. As the render time location for a Native plug-in processing chain approaches the next packet time-stamp for a plug-in in the chain, the host divides the plug-in's processing buffers into smaller buffers. When the render time location is as close as possible to the packet's time-stamp, the host delivers the packet. The packet data is available to the algorithm in its context the next time it is executed.
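The scheduling idea can be sketched as follows. This is an illustrative model of the host-side behavior described above, not actual AAX host code; the function name and the details of the chunking policy are assumptions, with only the 32-sample minimum (AAX_eAudioBufferLengthNative_Min) taken from the text.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Minimum sub-buffer size for Native processing chains
// (AAX_eAudioBufferLengthNative_Min).
constexpr int64_t kMinChunk = 32;

// Number of samples to render before delivering the next packet: the gap
// between the render position and the packet timestamp, rounded down to a
// multiple of the minimum chunk size and clamped to the current buffer.
// Once this returns 0, the render position is as close as possible to the
// timestamp and the packet is delivered.
int64_t SamplesBeforeDelivery(int64_t renderPos, int64_t bufferLen,
                              int64_t packetTimestamp) {
    const int64_t until = std::max<int64_t>(0, packetTimestamp - renderPos);
    const int64_t chunked = (until / kMinChunk) * kMinChunk;
    return std::min(chunked, bufferLen);
}
```

In this model a packet stamped for sample 100 causes a 1024-sample buffer to be split after 96 samples; the residual error (4 samples here) is always less than one minimum chunk, which is where the within-32-samples guarantee in the next paragraph comes from.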
Because the host may divide native processing buffers down to a minimum size of AAX_eAudioBufferLengthNative_Min (32 samples), it can guarantee that all automation playback will take effect within 32 samples of the actual automation breakpoint location. With the help of some extra internal bookkeeping, AAX hosts also guarantee that the exact sample where an automation breakpoint is applied is deterministic and will not change between different playback passes.
The packet delivery system for AAX DSP plug-ins works similarly to the system for AAX Native plug-ins. AAX DSP plug-ins use a fixed buffer size, so the host is not able to divide their playback buffers into smaller units: the plug-in will receive each data packet in the fixed-size playback buffer which most closely corresponds to the location of the automation event which triggered the packet.
An AAX DSP plug-in which declares an AAX_eProperty_DSP_AudioBufferLength value of N will be guaranteed to receive data packets within N/2 samples of the actual automation event position on the timeline. Since the default buffer size for an AAX DSP plug-in is 4 samples, this yields extremely accurate automation playback with no extra work required in the plug-in algorithm.
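The N/2 bound follows from simple arithmetic: with fixed buffers of length N, the nearest buffer boundary is never more than N/2 samples from the automation event. The helper below is an illustrative model of that rounding, not SDK code.

```cpp
#include <cassert>
#include <cstdint>

// Sample position at which a packet lands in the fixed-buffer model: the
// boundary of the fixed-size buffer closest to the event position. The
// distance from eventPos to the returned boundary is at most bufferLen / 2.
int64_t NearestBufferBoundary(int64_t eventPos, int64_t bufferLen) {
    const int64_t lower = (eventPos / bufferLen) * bufferLen;
    const int64_t upper = lower + bufferLen;
    return (eventPos - lower <= upper - eventPos) ? lower : upper;
}
```

With the 4-sample default buffer described above, this places every packet within 2 samples of its automation event.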
The packet system works perfectly to synchronize the states of the plug-in data model and algorithm, but only when the plug-in algorithm is fully decoupled from the data model. If the algorithm directly shares data with the data model then the algorithm will immediately start using any new data model state without waiting for the corresponding coefficient delivery.
Figure 3 shows one kind of problem that can arise when a plug-in uses the same state for both its data model and its algorithm. In this case, the plug-in applied a volume trim (shown in the automation lane at the top of the image) to its algorithm as soon as the parameter update was applied to its data model, even though the algorithm was not yet processing the audio at the Automation time location. As a result, the audio trim was applied several hundred samples too early.
Plug-ins that share data directly between their data model and algorithm are referred to as monolithic. All plug-ins that inherit from the AAX_CMonolithicParameters helper class are monolithic.
All monolithic plug-ins must include special handling code to reconcile the plug-in's automation time state with its render time state.
There are many possible solutions for the timing errors that arise when a plug-in combines data from different time locations. Ultimately, the plug-in must separate the state that is represented at different time locations.
In most cases, this requires deferring data model state changes from being applied to the algorithm until the relevant samples are being processed in the render callback. One easy way to accomplish this separation is to take advantage of the synchronization provided by the AAX packet delivery system. This approach benefits from the fact that it emulates the design of a normal, decoupled AAX plug-in.
After a packet is queued with a call to PostPacket(), the packet delivery system will wait to update the algorithm's context structure with the packet's data until the Render time location is very close to the automation event (see above.) This provides an appropriate mechanism for deferring state changes in the plug-in's data model until the Render time location has "caught up" to the correct sample.
Figure 4 shows the same scenario as Figure 3, but now the plug-in has been updated to defer data model updates from the automation time location so that they are applied as coefficients in the algorithm when the render time location has reached the correct point on the timeline.
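The deferral pattern can be sketched with a toy mock. The class below is hypothetical (it is not AAX_CMonolithicParameters, and the method bodies stand in for host behavior): the data-model side queues its pending state change with the automation timestamp rather than applying it immediately, and the render side applies a change only once the render position reaches the buffer containing that timestamp.

```cpp
#include <cassert>
#include <cstdint>
#include <deque>

// One queued state change, tagged with its automation timestamp; stands in
// for a data packet in the host's packet queue.
struct Packet { int64_t timestamp; float pendingGain; };

struct MonolithicPlugIn {
    float renderGain = 1.0f;         // state the render callback actually uses
    std::deque<Packet> packetQueue;  // stands in for the host packet queue

    // Data-model side, called at the Automation time location: do NOT touch
    // renderGain here; queue the change with its timestamp instead.
    void GenerateCoefficients(int64_t automationTimestamp, float newGain) {
        packetQueue.push_back({automationTimestamp, newGain});
    }

    // Render side: apply only the packets whose timestamps fall within the
    // buffer being rendered, then process audio with the updated state.
    void RenderAudio(int64_t renderPos, int64_t numSamples) {
        while (!packetQueue.empty() &&
               packetQueue.front().timestamp < renderPos + numSamples) {
            renderGain = packetQueue.front().pendingGain;
            packetQueue.pop_front();
        }
        // ... process numSamples of audio using renderGain ...
    }
};
```

An update queued for sample 512 is ignored while buffers at samples 0 and 256 are rendered, and takes effect only when the Render time location reaches 512, which is exactly the behavior shown in Figure 4.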
Here is one way to use the packet delivery system to defer changes to the data model state:
This approach is incorporated directly into the design of AAX_CMonolithicParameters. If your plug-in data model is a subclass of AAX_CMonolithicParameters then you can follow these steps to ensure accurate parameter update timing in your plug-in:
NOTES
For reference, see DemoMIDI_Synth and the other example instrument plug-ins. All of the instrument examples in the AAX SDK use these facilities to achieve deterministic, accurate playback for automated parameters.
One benefit of this approach is that it provides a compatible interface for monolithic plug-in objects that are designed to work across multiple plug-in formats. For example, the set of parameter updates provided to AAX_CMonolithicParameters::RenderAudio() can be forwarded to plug-in objects that require a queue of time-stamped parameter updates for each audio render callback.
Of course, the approach described in this section is just one possible solution. The timestamp section below provides some alternatives to using the packet queue system for synchronization. Ultimately, the best design for your plug-in will depend on the facilities that are available in the plug-in's monolithic state object, the size of this object, its interface, the number of parameters representing its state, and other internal details.
Here are some additional factors to consider when using the packet queue system for time location synchronization of parameter updates:
The AAX packet queue provides a host-managed system for applying parameter updates at the correct location without requiring any special knowledge about the timeline. However, in some situations a plug-in may need to know the absolute sample position of a parameter change.
For example, a plug-in that synchronizes parameter changes with some external system, and wants to forward those changes to the external system as early as possible, needs to know the sample position of a coefficient update when the update is first triggered by a call to GenerateCoefficients().
In these situations it is not suitable to simply use a method like AAX_ITransport::GetCurrentNativeSampleLocation() which returns the current position of the audio render thread. The parameter update may be occurring at a different location on the timeline from the current render position, so using the current render position for the update would result in timeline offset problems similar to those described above.
AAX provides a variety of information that can be used for timeline synchronization. This information is provided through a combination of AAX_ITransport, AAX_IController, and MIDI beat clock data. Here is a summary of the relevant ways that a plug-in can get information about the timeline and timing synchronization data:
Each of the available methods for getting information about the timeline position has a particular purpose. No single interface method can be used to directly determine the sample location for a parameter update, but it is possible to determine this value by combining information from a few of the available methods.
Here are some possible approaches for determining the timeline position of a parameter update:
NOTES
The reason that this approach yields an approximate value is that the TOD location and current playback location are both given in terms of the real-time audio workers, and these values continue to progress simultaneously with execution of methods on the automation update thread. As a result, this approach will yield an absolute timestamp that is "late" by between zero and one hardware buffer.
NOTES
You can refine the approach described above by using MBC events to detect the location of playback start.