I've been building the SynthLab projects and reading the Designing Software Synthesizer Plugins in C++ book. I'm able to build the SynthLab projects, but I want to use the SynthEngine in my own project using JUCE. JUCE's audio processing object is called "AudioProcessor". For this project I'm following SynthLab's tutorial "Creating an Additive Oscillator": https://www.willpirkle.com/synthlab/docs/html/create_osc.html. I'm able to add the oscillator instance to my JUCE project, but I'm not sure how to send its samples to my output buffer so I can hear anything. JUCE has a method called "processBlock" that does the processing of audio buffers. Can anyone assist me with this? Below is a code snippet.
void AudioProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midiMessages)
{
    auto totalNumInputChannels  = getTotalNumInputChannels();
    auto totalNumOutputChannels = getTotalNumOutputChannels();

    // In case we have more outputs than inputs, clear any output channels that
    // didn't contain input data; these aren't guaranteed to be empty and may
    // contain garbage. This avoids screaming feedback on a freshly compiled
    // plugin; you don't need it if your algorithm always overwrites all outputs.
    for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
        buffer.clear (i, 0, buffer.getNumSamples());

    // --- SynthLab: get pointers to the oscillator's output buffers
    float* leftOutBuffer  = addOsc->getAudioBuffers()->getOutputBuffer(SynthLab::LEFT_CHANNEL);
    float* rightOutBuffer = addOsc->getAudioBuffers()->getOutputBuffer(SynthLab::RIGHT_CHANNEL);

    for (uint32_t i = 0; i < blockSize; i++)
    {
        float leftSample  = leftOutBuffer[i];
        float rightSample = rightOutBuffer[i];

        // --- send samples to your output buffer
        // <your framework output buffer>[channel][sample] = synthOutputs[channel][i];
    }

    // JUCE template: this is where the guts of the plugin's audio processing
    // would normally go...
    for (int channel = 0; channel < totalNumInputChannels; ++channel)
    {
        auto* channelData = buffer.getWritePointer (channel);
        // ...
    }
}
I'm answering in case there is a question about getting audio from the SynthLab module.
The details for an oscillator are in the tutorial page you linked.
Since the docs are detailed, and you have the book too, I won't repeat them here. The sequence is pretty straightforward:
- you run the module (or module core) in standalone mode; this is exactly why I put that mode in place
- you reset the module with the sample rate in the JUCE function that establishes the sample rate (prepareToPlay)
- you need to send a MIDI note-on message to the oscillator, as per the documentation link above, in response to the JUCE MIDI message handler; you can also send fake MIDI messages, since the module won't know the difference, so you can test with hard-coded MIDI message fakes
- the JUCE processing function will want you to render some number of samples into its output buffer(s). In theory, you could render all the samples in one call, but that would make the MIDI responsiveness very choppy. Instead, render data in blocks of 64 samples as per the book, so that MIDI messages are processed on each 64-sample boundary (again, as per the book's discussion of this). You will need to create a sub-processing loop that renders these blocks. I did not see the call to the render function in your code.
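Putting the steps above together, here is a minimal sketch of the flow. Note that `StubOscillator` and its `reset`/`noteOn`/`render` members are hypothetical stand-ins I'm using to keep the example self-contained; the real calls come from SynthLab (module reset, note-on handling, 64-sample render) and JUCE (`getWritePointer` on the output buffer), so check the SynthLab docs and JUCE API for the exact signatures.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

constexpr uint32_t kBlockSize = 64;            // SynthLab's render block size, per the book
constexpr double   kTwoPi     = 6.283185307179586;

// Hypothetical stand-in for a SynthLab module running in standalone mode.
struct StubOscillator
{
    double sampleRate = 0.0;
    double phase      = 0.0;                   // normalized [0, 1) phase
    double freqHz     = 440.0;
    float  left[kBlockSize]  {};
    float  right[kBlockSize] {};

    void reset (double fs)  { sampleRate = fs; phase = 0.0; } // ~ module reset with Fs
    void noteOn (double hz) { freqHz = hz; }                  // ~ (fake) MIDI note-on
    void render (uint32_t n)                                  // ~ module render call
    {
        for (uint32_t i = 0; i < n; ++i)
        {
            float s = (float) std::sin (kTwoPi * phase);
            left[i] = right[i] = s;
            phase += freqHz / sampleRate;
            if (phase >= 1.0) phase -= 1.0;
        }
    }
};

// The shape of a JUCE processBlock: render sub-blocks of up to 64 samples and
// copy each chunk into the host's output channels at the right offset.
void processBlock (StubOscillator& osc, std::vector<float>& outL, std::vector<float>& outR)
{
    const uint32_t numSamples = (uint32_t) outL.size();
    for (uint32_t start = 0; start < numSamples; start += kBlockSize)
    {
        const uint32_t samplesThisBlock = std::min (kBlockSize, numSamples - start);
        // In real code, you would also handle any MIDI events due at 'start'
        // here, so note messages take effect on 64-sample boundaries.
        osc.render (samplesThisBlock);
        for (uint32_t i = 0; i < samplesThisBlock; ++i)
        {
            outL[start + i] = osc.left[i];     // ~ buffer.getWritePointer(0)[start + i]
            outR[start + i] = osc.right[i];    // ~ buffer.getWritePointer(1)[start + i]
        }
    }
}
```

The oscillator keeps its phase as member state, so the waveform stays continuous across sub-block boundaries even though each render call reuses the same 64-sample scratch buffers.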
Assuming you rendered a block, you correctly get pointers to the oscillator's output buffers and set up one sub-loop.
You need to write those values into the JUCE output buffers, and there should be plenty of documentation on that in the JUCE materials. That part is framework-specific and not part of SynthLab.
Assuming you know how to write samples into the JUCE buffers (if not, please ask at the JUCE forum), the main issue is rendering in sub-blocks, which requires you to write data in chunks to the JUCE buffers, one chunk per SynthLab render function call (64 samples).
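To make the chunking concrete (this is just the offset arithmetic, not SynthLab code): each render call covers up to 64 samples, so a host buffer decomposes into a series of (offset, length) chunks, where only the last chunk may be short, since JUCE's buffer size need not be a multiple of 64.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Split a host buffer of numSamples into (offset, length) render chunks.
// Every chunk is blockSize samples except possibly the last one.
std::vector<std::pair<uint32_t, uint32_t>> makeChunks (uint32_t numSamples, uint32_t blockSize = 64)
{
    std::vector<std::pair<uint32_t, uint32_t>> chunks;
    for (uint32_t start = 0; start < numSamples; start += blockSize)
        chunks.push_back ({ start, std::min (blockSize, numSamples - start) });
    return chunks;
}
```

For a 512-sample host buffer this yields eight full 64-sample chunks; for a 100-sample buffer it yields one chunk of 64 and one of 36, and the offset is where you write into the JUCE buffer for that chunk.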
Hope that helps. SynthLab was designed to be agnostic to the various APIs so that you can use its modules, voices, or even the engines in any framework; SynthLab is pure C++ code. If you have framework-specific issues, then those questions should go to the JUCE forum.