Generating BLEP Tables
April 25, 2015, 12:58 am
WilliamR

Hi,

I'm working on a little app to generate BLEP tables, but I ran into a problem.
Currently, I use the following code to generate a sinc(x) table (called mSINCBuffer) and a BLEP table (mBLEPBuffer):

// pointNum: number of zero crossings per side (the distance between them is PI)
// bufferSize: 1024, 2048, 4096 and so on

// sinc function
for (int i = 0; i < bufferSize; i++) {
    // calculate x; I use bufferSize - 1 to get my outer points exactly at the zero crossings
    double x = mPI * pointNum * 2 * (((double)i / (bufferSize - 1)) - 0.5);
    double val;
    if (x != 0)
    {
        val = sin(x) / x;
    }
    else
    {
        val = 1.0; // limit of sin(x)/x as x -> 0
    }
    mSINCBuffer[i] = val;
}

// integrate
// factor to scale the whole thing
double factor = ((double)pointNum * 4.0) / (bufferSize - 1);
mBLEPBuffer[0] = mSINCBuffer[0];

for (int i = 1; i < bufferSize; i++) {
    mBLEPBuffer[i] = mSINCBuffer[i] + mBLEPBuffer[i - 1];
}

for (int i = 0; i < bufferSize; i++) {
    mBLEPBuffer[i] *= factor;
}

My issue is the following: after integrating, the step never settles at y = 2, where it should, but bounces around y = 2 (depending on the number of points). With an increasing number of zero crossings the value comes closer but never reaches it. My question is whether I missed something or whether this is expected (due to the mathematics or something).
Here is an example with only one point, where the error is quite harsh:
http://wrodewald.de/others/blep.png

For now, I corrected the offset manually by subtracting the residual difference (linearly blending, so sample 0 gets no correction and sample n-1 gets the full correction).

I will have some other questions later, but I want to see how far I get on my own. ;)

April 27, 2015, 6:35 pm
W Pirkle (Admin)

Hi William

The first issue involves the center value when x = 0 -- you do not need/want to store this value. It represents the sample-of-discontinuity. In addition, since your arrays are even numbers of samples, storing this value makes the table lopsided by one sample and makes the correction asymmetrical.

My process for generating the tables I used in the Synth Book is:

1) generate the sinc() table with the x = 0 value omitted producing a symmetrical table

2) window the table (optional, see the book)

3) integrate the table; I use the bilinear integrator (Figure 4.7 from Synth Book) instead of reverse Euler that you are using (but, my grad student Francisco Valencia just completed his thesis on BLEP improvements and he uses reverse Euler)

4) normalize the table; I use the built-in RackAFX function normalizeBuffer() which normalizes to 1.0

5) normalize again, this time dividing each value by the last value in the table (tableLen - 1); this forces the last values in the table to 1.0

6) convert the table from unipolar to bipolar (I use the unipolarToBipolar() function in synthfunctions.h)

7) subtract the unit-step producing the final residual table
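As a sketch only, the seven steps might look like this in C++ (the function name makeBlepResidual is an assumption; a standard Blackman window and a trapezoidal integrator stand in for the book's window and bilinear integrator, and the x = 0 sample is omitted by sampling at half-sample offsets):

```cpp
#include <cmath>
#include <vector>

// Sketch of steps 1-7 above; not the book's actual code.
std::vector<double> makeBlepResidual(int tableLen, int pointsPerSide, bool applyWindow)
{
    const double PI = 3.14159265358979323846;
    std::vector<double> sincTab(tableLen);

    // 1) symmetric sinc table with the x = 0 sample omitted: the half-sample
    //    offset keeps every x away from zero and keeps the table symmetrical
    for (int i = 0; i < tableLen; ++i) {
        double x = PI * pointsPerSide * 2.0 * ((i + 0.5) / tableLen - 0.5);
        sincTab[i] = std::sin(x) / x;
    }

    // 2) optional window (Blackman here, as a stand-in)
    if (applyWindow) {
        for (int i = 0; i < tableLen; ++i) {
            double w = 0.42 - 0.5  * std::cos(2.0 * PI * i / (tableLen - 1))
                            + 0.08 * std::cos(4.0 * PI * i / (tableLen - 1));
            sincTab[i] *= w;
        }
    }

    // 3) integrate (trapezoidal rule as a stand-in for the bilinear integrator)
    std::vector<double> tab(tableLen);
    tab[0] = sincTab[0];
    for (int i = 1; i < tableLen; ++i)
        tab[i] = tab[i - 1] + 0.5 * (sincTab[i] + sincTab[i - 1]);

    // 4) normalize so the maximum magnitude is 1.0
    double maxVal = 0.0;
    for (double v : tab)
        if (std::fabs(v) > maxVal) maxVal = std::fabs(v);
    for (double& v : tab) v /= maxVal;

    // 5) divide by the value at tableLen - 1, forcing the last value to 1.0
    const double last = tab[tableLen - 1];
    for (double& v : tab) v /= last;

    // 6) unipolar -> bipolar
    for (double& v : tab) v = 2.0 * v - 1.0;

    // 7) subtract the unit step (-1 on the left half, +1 on the right),
    //    leaving only the residual
    for (int i = 0; i < tableLen; ++i)
        tab[i] -= (i < tableLen / 2) ? -1.0 : 1.0;

    return tab;
}
```

With tableLen = 4096 and pointsPerSide = 4, the residual starts and ends at (nearly) zero and jumps between roughly +1 and -1 at the table center, which is the shape a BLEP residual should have.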

Also, as Francisco notes, since the table is symmetrical you can get away with only calculating half of it, though this makes the implementation a bit trickier.

Lastly, I pumped the table out to the status window in RackAFX, then used the ->Clipboard function to copy the table to the clipboard, then used a Google spreadsheet to both examine and plot the data. All the plots in the book are Google spreadsheet plots taken directly from the output of RackAFX. Plotting the data in the spreadsheet really helped the development of the proper table.

Hope that helps -

Will

April 29, 2015, 1:25 am
WilliamR

Hi,

thanks for the help. I've noticed some of my mistakes already, but I will include the other tips in my BLEP application, which I made just for fun. ;) It generates BLEP tables with different sizes (points per side), solutions, and windowing functions, and also exports them as a C++ array. :D

I still have a question. I noticed a big misunderstanding of mine when I loaded one of the given tables into my app: the number of points per side the table corrects (4, or 8 in total) isn't equal to the number of zero crossings in the sinc function. That's what I thought, and that's why my tables were not usable. ;) However, I still don't know how these values (zero crossings in the sinc function and "to-be-corrected" points in the BLEP curve) relate to each other. Can you help me with that? :)

April 30, 2015, 4:11 am
maquirri91

Hi William,

When you are implementing the original BLEP method, one zero crossing on each side corresponds to exactly one BLEP correction point. The reason for this is that the zero crossings in the sinc function are equally spaced. When you integrate it, each zero of the sinc (a point where the integrated function has slope = 0) becomes a local maximum (peak) or local minimum (valley) of the integrated function. These points are the ones you are using for the BLEP correction, because BLEP tries to imitate the ripples of the additive-synthesis trivial waveform around the discontinuity. In conclusion, the number of zero crossings equals the number of peaks and valleys together in the integrated bipolar step, which equals the number of correction points. Note that this analysis does not depend on the size of the table.
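The zero-crossing/extremum correspondence described above is easy to check numerically; this little sketch (all names and sizes are my own choices, not from the thread) counts sign changes in a sampled sinc and extrema in its running integral:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Count (a) sign changes of a sampled sinc and (b) local maxima/minima of its
// running (trapezoidal) integral; the claim above is that the counts match.
std::pair<int, int> countCrossingsAndExtrema(int tableLen, int crossingsPerSide)
{
    const double PI = 3.14159265358979323846;
    std::vector<double> s(tableLen), integ(tableLen);
    for (int i = 0; i < tableLen; ++i) {
        // half-sample offset keeps x away from 0
        double x = PI * crossingsPerSide * 2.0 * ((i + 0.5) / tableLen - 0.5);
        s[i] = std::sin(x) / x;
    }
    integ[0] = s[0];
    for (int i = 1; i < tableLen; ++i)
        integ[i] = integ[i - 1] + 0.5 * (s[i] + s[i - 1]);

    int crossings = 0, extrema = 0;
    for (int i = 1; i < tableLen; ++i)
        if ((s[i - 1] < 0.0) != (s[i] < 0.0)) ++crossings;
    for (int i = 1; i < tableLen - 1; ++i)
        if ((integ[i] - integ[i - 1]) * (integ[i + 1] - integ[i]) < 0.0) ++extrema;
    return std::make_pair(crossings, extrema);
}
```

With 4 crossings per side, the half-sample offset means the sampled range stops just short of the outermost zeros, so both counts come out to 6 (three interior zero crossings per side).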

In the recent approach to the BLEP method, we discovered that when you apply a window (e.g. Blackman) to the sinc function before integrating (which improves the aliasing correction), the windowed sinc function has very shallow side-lobes, which after integration do not show local maximum or minimum points as clearly as before. In this case, the number of correction points is not necessarily the same as the number of zero crossings. Analyzing several cases, we found that when you use only one zero crossing and a different number of correction points (e.g. 3, 4, or 5), you get better aliasing correction but a worse harmonic decay. When you use more zero crossings (e.g. 4) and a different number of correction points (e.g. 3 or 5), you get a better harmonic envelope but not as good aliasing correction. When you use the same number of zero crossings and correction points, you get something in between in terms of aliasing and harmonic decay. Note that this analysis also does not depend on the size of the table.

In conclusion, start with the same number of zero crossings and correction points, and then play around with windowing and with changing the number of points. The implementation in the book for generating the BLEP waveform lets you input any table and any number of correction points; it does not depend on the number of zero crossings of the sinc function. One of the best BLEP tables I created using the genetic algorithm had 4 zero crossings and 5 correction points, using a very narrow window I generated. Remember, changing the table size does not affect this analysis, but a larger table will usually give you a better correction because of precision.

Best,

Francisco

April 30, 2015, 6:23 pm
W Pirkle (Admin)

Thanks for the detailed reply, Francisco.

FYI: Francisco's Master's Thesis was on optimizing the BLEP residual to eliminate perceptual aliasing while trying to maintain the purest harmonic envelope. We should have a PDF version on the University of Miami Music Engineering website between now and the beginning of Fall 2015 semester. There is a link to the Music Engineering site on the lower left sidebar of this site.

- Will

May 1, 2015, 10:32 pm
WilliamR

Hi,

thanks for the reply, it helps a lot. :) I am not really sure what you mean by harmonic envelope / decay, but I guess you mean the frequency response, right? I already played around with the number of correction points, and some combinations had a bigger impact on the frequency response below the actual cutoff point (which should be 0.5 * Fs).

So if I would like to create an oscillator with really good antialiasing, it isn't really the best solution to just use bigger and bigger BLEPs (meaning 8, 16, 32... zero crossings / correction points)?

Cheers

May 2, 2015, 12:52 am
W Pirkle (Admin)

Harmonic envelope refers to the shape of the spectrum, or the relative amplitudes of the harmonics compared to the fundamental. If you look in my Synth book chapter 5, you can compare the harmonic envelopes of several windowed BLEP sawtooth oscillators with the ideal additive version as well as the analog oscillators from the Korg MS-20 and Volca Keys. The more severely windowed versions, which produce much less aliasing, also reduce the high frequency components significantly. So there is a tradeoff between them. Francisco's thesis finds the optimum BLEP residual as well as number of correction points for perceptually alias-free sawtooth oscillators.

Regarding the number of correction points, there are a couple of issues. First, as pointed out in Chapter 5, we can only easily use 4-points-per-side correction up to 1/4 Nyquist (~5kHz at 44.1kHz fs). Above that we have to switch to 2 points per side. This is because 1/4 Nyquist has 8 points per cycle, or 4 points per side. If you don't switch to 2-point correction above 1/4 Nyquist, you will have to apply BLEP multiple times per point, since each point will fall in multiple transition-region locations. This is not only challenging, but results in the BLEP function being called many times. This is also outlined in detail in the Leary/Bright patent that the book uses/references, in the hard-sync details, where the problem is exacerbated even more. Increasing to 8, 16, 32... points will only make this issue worse.
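The 1/4-Nyquist rule above can be sketched as a small helper (the function name pointsPerSideFor is my own, not from the book):

```cpp
// 1/4 Nyquist = fs/8: at that frequency one cycle spans 8 samples, i.e.
// 4 samples on each side of a discontinuity. Above it, fall back to
// 2-point-per-side correction as described above.
int pointsPerSideFor(double oscFreqHz, double sampleRateHz)
{
    const double quarterNyquist = sampleRateHz / 8.0; // ~5512 Hz at 44.1 kHz
    return (oscFreqHz <= quarterNyquist) ? 4 : 2;
}
```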

Secondly, Francisco's research showed that a larger number of correction points is not necessarily better (this surprised us both) for his perceptually alias free solution.

All the best,
Will

May 3, 2015, 2:08 am
WilliamR

Ok, so I was right with my assumption. :) I've already done some tests and noticed this behavior as well.

Thanks a lot for all the help! :)
