RackAFX generates several processAudio() methods:
As I understood from reading the comments, processAudioFrame() is offered to make things easier but is less performant, while processRackAFXAudioBuffer() is more efficient. processVSTAudioBuffer(), on the other hand, must be used for VST compatibility but also works for other environments.
- Are my assumptions right?
- Why two processAudioBuffer() methods?
- processRackAFXAudioBuffer() is never called in my project, even though m_bWantBuffers is set to true.
- What's the difference between the VST and the RackAFX method?
- Do they differ performance-wise?
Glad you asked. Here are the details:
processAudioFrame() is the easiest to understand and code but the least efficient.
processVSTAudioBuffer() uses the same buffer processing as both VST and AU; originally this was for teaching my students the concept in case they wanted to work directly in VST or AU plugin code. The details are in my book, Appendix A (and in numerous places on the net). Some readers (and students) are afraid of buffers of pointers to buffers, etc., so that is why this is not the default behavior. However, for maximum compatibility it is good practice to use it if you intend to cut and paste code. The AU and VST exports default to this mechanism.

In this case, you get a buffer of pointers to the input/output buffers. The left and right channels have their own buffers. On the client (RAFX or VST/AU), it requires some extra overhead to split the data into those buffers, since the original audio files (WAV, AIFF, etc.) are interleaved. However, it is flexible, since any channel count is easily supported (for a 5.1 WAV file there would be six buffers). But RackAFX only supports mono and stereo as a client. That's the other reason for implementing this: if you compile as a VST and test in other VST clients, you can support more than just mono/stereo.
processRackAFXAudioBuffer() was the original function I wrote for this several years ago. In this scenario, you receive one pointer for the input buffer and one pointer for the output buffer. It is up to you to de-interleave the buffers: stereo is L/R/L/R, so you have to de-interleave, process, then re-interleave into the output buffer. As long as you know the channel count, this is actually easy to do.

The original reason I used this function was that in both AU and VST, the buffer sizes are not guaranteed to be a certain size. In AU you can try to set the buffer size by recommending one, but there is no guarantee the buffers will be that size. In VST you can't request anything. Both are designed so that the buffer size is dynamic and may change due to system overhead. In many cases, you'd like the buffers to be a particular size; for example, if you are doing FFT processing, it would be nice if the input and output sample counts were identical to the FFT size. You can do that with this function. Choose Audio/MIDI->Setup Audio Buffers (or click the toolbar button that looks like a green I.C.) and you can select the buffer size. This sets the buffer size that RAFX will always use, regardless of system overhead. In addition, on the client side there is no splitting out or recombining of buffers for I/O, so it is the most efficient method, in RAFX only. On the downside, it isn't exactly compatible with VST/AU.
The flags work like this: if you have BOTH set to true,
m_bWantVSTBuffers = true;
m_bWantBuffers = true;
then the VST flag overrides the RAFX buffer flag (which is most likely what you are experiencing) and only the VST function will get called. In no scenario will more than one of the functions be called.
So here is the rundown:
processAudioFrame() - easy to understand, default for my book, least efficient
processRackAFXAudioBuffer() - requires that you de-interleave/re-interleave the data if multi-channel, but is the fastest in RAFX AND you can force the buffers to a certain size for faster FFT or other buffer-based processing.
processVSTAudioBuffer() - more complicated because of double-pointers but universal to both AU and VST so code can be duplicated later if you are working in those environments natively. More efficient than processAudioFrame() but less efficient than processRackAFXAudioBuffer() when working in RAFX.
Let me know if you have any other questions on this or need more explaining/information - you are the first to really dig into this (other than my students) so I am happy to get this documented in the Forum here!
I'm using RackAFX to make a pitch shifter based on FFT analysis/synthesis. First I tried to implement it inside processAudioFrame(), but it was not possible to stream the processed buffer with that method, so I finally got here and read about my issue.
As I understand it, I need to use processRackAFXAudioBuffer() or processVSTAudioBuffer() to make it work properly, but certain things are still not clear to me:
- processRackAFXAudioBuffer() seems easier to use since the buffer size is fixed, which is perfect for doing FFT. Does that mean that if I want to export my project as a final usable VST/AU, I shouldn't use that method?
- In both cases, I guess I receive a pointer to the audio stream, so I have to de-interleave samples (no problem with this) in order to process, but do I receive a pointer to the I/O audio stream or to the fixed buffer? If it's the second case, do I have to take care of copying incoming samples into another buffer for further processing?
Thanks a lot for your time and help
Before getting into buffer-based FFT processing, please take a look at the new App Note I just put up on the site:
This App Note was written by one of my undergraduate students, Andrew Stockton. It shows how to do real-time frequency-domain processing and gives 4 example algorithms. It does not use multi-threading or buffer processing! Everything is done in processAudioFrame() using super-fast FFT functions that he wrote. The audio buffer is zero-padded to begin with, and one new sample is added on each sample interval. The FFT is called on each sample interval with little CPU overhead! This means there is no need to deal with partial buffers, as you would with processVSTAudioBuffer(). It is also portable to MacOS (remove the #include "windows.h" from the ASXTransform.h file -- it is not needed).
This great RackAFX project shows how to implement windowing and overlap-and-add as well!
If you still want to do buffer processing (which would give you even more efficiency as the FFT would only need to be called once per buffer), my advice would be to start with this project, then implement the processing in processVSTAudioBuffer() -- there is already code in there for pass-through operation you can start with to understand how the buffer pointers work. But, you would still need to deal with the possibility of partial buffers, since there is no guarantee of buffer size during operation.
Try out Andrew's project - the pitch-shifting and magnitude/phase-swapper algorithms are simple and interesting sounding.
All the best,
Thanks for your answer. I'm checking Stockton's project; it seems quite interesting! Anyway, I'd like to use the code I already wrote for the pitch shifter, so I'll probably try both ways.
I think the best way to deal with partial buffers would be forcing the buffer size up to the next power of 2 and filling the remainder with zeros.
I'm going to check Stockton's solution, think how to make it work and I'll let you know. Even if finally I don't use this method I'm pretty sure it will give me some nice ideas.
Sadly, Stockton's project doesn't fit my needs, so I'm finally using buffer processing. Anyway, I'll look into it in depth in the future; it seems a very interesting approach.
I only have 2 simple questions about the buffers in processVSTAudioBuffer(); here they are:
1. Is the argument called "inFramesToProcess" the actual number of samples in the buffer, even if it's a partial one?
2. I guess that using buffer methods I don't have to care about threads and keep getting samples, am I right?
Thanks a lot for your time and answers
For #2, yes: with buffer processing you don't need to worry about multi-threading, as long as your CPU is fast enough to process in real time.
The reason I say "partial buffers" is that in the original VST spec, the size of the buffers was not guaranteed to be the same from one function call to the next, depending on system overhead. In addition, you can't dictate the buffer size - you have to take whatever it gives you. The same is true in AU, where you can "suggest" a buffer size, but it is not guaranteed that you will get it. So, "partial buffer" does not mean a partially filled buffer, it means a partial FFT buffer for you.
Suppose you are processing 1024 point FFTs, and a buffer comes in that is only 512 points - that is what I call a "partial buffer" - you have to be able to store that and handle that possibility. For example, if the next buffer is 32,768 samples then you need to take 512 of them for the FFT, then deal with the fact that you will end up with a half-filled FFT buffer at the end of the function call.
In VST (and RackAFX) a "frame" refers to a set of samples for one sample period - for stereo, a frame is 2 samples, left and right. For 5.1, a frame is six samples. This is why the pass-through code looks like this - for stereo, you process samples from the left and right buffers:
while (--inFramesToProcess >= 0)
{
    // Left channel processing
    *pOutputL++ = *pInputL++;

    // If there is a right channel, process it and advance the pointers
    if (pInputR)
        *pOutputR++ = *pInputR++;
}
In RackAFX, processRackAFXAudioBuffer() delivers a single buffer of mono or stereo (interleaved) samples. See the comment block above the function. Unlike VST/AU, RackAFX will let you force the buffer size to whatever you like. If you are doing 1024-point FFTs, set the RackAFX buffer size to 1024. If you have a mono WAV file, you will get one buffer of 1024 points per function call. If you are doing stereo processing, the buffer will be 2048 points. This makes FFT buffer processing really simple, and this is the reason I included the processRackAFXAudioBuffer() function, since many students want to do easy FFT processing.
You might want to use this function first for prototyping and debugging your code, since you never have to worry about the buffer size changing. Then, once everything works, port the code into processVSTAudioBuffer(), knowing that you cannot predict the buffer size and so may have partial FFT buffers to deal with. processVSTAudioBuffer() will port to VST2/3 and AU.
Ok, pretty clear explanation.
I think the important issue is whether we can know the range of possible buffer sizes; that way we could store all the incoming samples in a pre-buffer, whatever the number of them, and pull from it in fixed-size chunks. At first I thought about debugging and checking the value of inFramesToProcess, but since the size is actually fixed when working in RackAFX, that doesn't seem possible.
I'm going to make the prototype and think about how to deal with the buffer issue.
Thanks a lot!
The largest I've seen is 32,768 samples, back in the VST2 days using the old Cubasis SW. However, my current version of Cubase 7 delivers 10% of the sample rate, or 441 samples at 44.1kHz, for VST2/3.
If you use Make VST, you can compile and debug your project right inside your VST client. In Visual Studio, set the debugger to your client:
Properties->Debugging->Command (browse for your VST2/3 client's .exe) and then choose YES for Attach.
Start the client, then start the debugger. Set some breakpoints, which will be grey. When you load your plugin, the breakpoints will turn red. Then, you can see what the client is delivering to you. Check the process() function, and look at the ProcessData's numSamples member variable.
Oh - one more thing, again - because your RackAFX DLL is also a native VST2/3 plugin, you can also debug the RackAFX DLL inside your VST client. And, since I give you the .pdb files in your project folder, you can even step into the Sock2VST3 library that makes it possible to run RackAFX plugins as VST2/3.
Just do the same thing in the debugger and switch the Properties->Debugging->Command from .../RackAFX.exe to your VST client's .exe, then follow the same procedure above. When you use m_bWantVSTBuffers and put your code in the processVSTAudioBuffer() function, you will get the buffer pointers directly from the VST client - all the library does is pass the same pointers to your function (it is a super-thin wrapper), so you are actually getting the very pointers the client is delivering; there is no manipulation or sub-frame grouping of samples as there is with processAudioFrame(). Remember to reset the debugger Command back to RackAFX.exe if you want to debug using RackAFX as the client.
Just thought you'd like to know so you won't have to do the Make VST export if you don't want to.
I've been a bit busy (I started a new job 2 weeks ago, working with VS by the way), so I haven't made any progress. Yesterday I started implementing processRackAFXAudioBuffer() (I'm finally using this one for my degree work; in the future I'll try processVSTAudioBuffer()): I did the de-interleaving and started with some basic DSP, put some breakpoints to see what happens, but... it never enters the buffer function.
m_bWantBuffers is set to true, and m_bWantVSTBuffers to false. Nothing happens. I did some tests with m_bWantVSTBuffers set to true, but no luck.
What am I missing?
I did just verify that processRackAFXAudioBuffer() is working for all cases: WAV, oscillator and sound-adapter. Not sure what happened in my earlier experiments, but I think I had the plugin unloaded at some point, so the breakpoint did not get hit (I debug plugins through the RackAFX debugger so I can step through both RackAFX and plugin code, and the breakpoints stay active (red)). I have removed the old reply.
When you set m_bWantBuffers = true, does processAudioFrame() get called instead? If your plugin is working and you hear the effect, then one of the process methods is being called, so you should be able to break on that.
Thanks for your fast answer.
I'm actually using a WAV file with m_bWantBuffers set to true in the plugin constructor in plugin.cpp. I also tried forcing the flag to true in my class constructor, but still no luck. Putting breakpoints inside processRackAFXAudioBuffer() doesn't even pause the program when debugging.
No idea what it can be.
It says "cannot find or open the PDB file". I can't do step debugging.
Now I have to leave, but tomorrow I'm going to start a fresh project and copy the code over, in case there's a corrupt or missing file. I'll let you know if there's any news, and I'll check the forum for new ideas in case you come up with something.
Thanks a lot and good night (evening there, I guess)
I've been trying to make it work with no luck. As I said, first I made a new project from scratch and copied over the DSP code I'd already written, but the behaviour is the same as last time. I put some breakpoints, loaded the plugin into RackAFX and started the debugger from VS, but the execution ends without stopping at any breakpoint.
The output looks like this: