November 2, 2015
So I am working on a plugin that convolves the input with HRTF impulse responses to place the output sound in 3D space. This means I need a way to import the impulse .wav files into my code, and for this purpose I have used the 'libsndfile' library (my C++ skills are unfortunately not up to parsing the files manually, so the use of this library is unavoidable for me).
After successfully prototyping this in RackAFX, I am now attempting to export my plugin as a VST for use in Ableton Live 9.5. In order to link with the external library, I have done the following in the Visual Studio project generated by 'Make VST' and placed in the VST3 SDK, which are the same steps I used to link the library in the original RackAFX VS project:
- Edited 'Linker -> General -> Additional Library Directories' to be the 'lib' folder of the libsndfile installation on my machine.
- Added the libsndfile .lib file to 'Linker -> Input -> Additional Dependencies'
- Added 'sndfile.h' (the libsndfile header file) to the 'source' folder of the VST VS project
- Added the libsndfile .lib, .dll and .def files to the source folder of the VST VS project (unsure which of these were necessary each time so just added all)
After doing all this, my project compiles successfully into a .vst3 file. I then rename its extension to .dll to make it a VST2 plugin that Ableton Live can load, and copy it into my Ableton plugins folder. However, when scanning for new plugins, Ableton fails to detect mine, which implies something is wrong with this .dll.
I have previously compiled a much simpler plugin as a VST and run it in Ableton, and a quick recompile just now of this simple plugin, with the exact same VS settings (aside from the linking) as my HRTF plugin, has been similarly successful.
Have I missed any steps / made any incorrect steps here, or does anyone have any other ideas as to what I may be doing wrong?
Thanks for your time in advance!
P.S. On an unrelated note - Will, if you're reading this: this work is for a final-year undergraduate university project I am currently undertaking, and my co-supervisor is an ex-student of yours named Felipe Espic - he said if I was ever to post on this forum I should tell you he says hi 🙂
January 29, 2017
Your approach looks correct to me (RackAFX uses libsndfile too), and I don't think VST validation would fail because of it, so I'm not sure what to tell you there. In RackAFX, I place the libsndfile.lib file in the same folder as the source code, but I also need to place the libsndfile.dll file in the same folder as the RackAFX executable (you can see it in the Program Files (x86)/RackAFX folder). Without the .dll, RackAFX will not start, and that might be your issue as well.
Typically, the problem I see is that a client does not recognize the DLL because I've compiled it for the wrong architecture (my Ableton is 64-bit and it won't recognize 32-bit DLLs).
I think using libsndfile may be overkill, though. If your WAV files are uncompressed PCM audio, then you may be able to avoid it entirely by using the CWaveData object that is built into RackAFX instead. This is demoed in the ConvolveIt project here:
The CWaveData object allows you to easily import WAV data into your plugin by either hardcoding the .WAV file paths, or allowing the user to browse for them. This is documented on the page above. The CWaveData object opens WAV files and extracts the audio "guts" into a floating point buffer. For stereo files, the channels are interleaved L/R/L/R/etc...
And, since you are using external files with your plugin, you will need to ship them with the plugin - typically you place them in the VST3 folder alongside your plugin. RackAFX has a platform-independent function called getMyDLLDirectory() that will return the path where your DLL is located, then you can append your .WAV file names to that.
Here is a video I just put up last week for my students, who are doing a class project that involves morphing between two IRs. I use the CWaveData object to browse for, and then open, the WAV files with IRs:
This video also shows the new RackAFX GUI that is in beta testing - the theme shown is "Light Ableton" but there are other color themes as well (watch in HD).
Hopefully, you can use that object instead and not have to deal with libsndfile.
PS: Felipe was a great student! Please give him my best!
November 2, 2015
Thanks for your very thorough response! By the sounds of things, the CWaveData object performs exactly the operation I am currently using libsndfile for, so I will investigate it now. The 64-bit/32-bit thing has stung me before, so I have already triple-checked that, to no avail unfortunately.
The issue of shipping the WAV files actually occurred to me the other day; I managed to solve it using GetModuleFileNameW() and a ton of arguments I don't really understand (Stack Overflow is wonderful). I definitely should have enquired about these RackAFX functions sooner rather than trying to do everything the hard way!
The new GUI looks great - as an Ableton user, I approve! Looking forward to the new version.
Thanks again for your help, I will let you know how I get on!
November 2, 2015
In your video above you're demonstrating interpolation between impulse responses - would the method you're employing there work with HRTF impulses, by any chance? Felipe and I looked into true interpolation between these for my project, but we got bogged down on the issue of phase unwrapping, and I found that a cruder method of mixing between nearby impulses actually sounded okay, so I have only come back to the issue now that I have more time on my hands...
If you don't want to post anything which would give too much away to your students for their project then I understand!
Sorry for resurrecting an old post for an unrelated question, figured this was less ungainly than making a new topic but referencing what you said here!
January 29, 2017
Yes, the interpolation method will work for any sets of IRs as long as they are all the same length (which is surely the case).
In the project (which was due many weeks ago) you use linear interpolation to morph between the two IRs - you interpolate between each point using the built-in dLinTerp() method over 128 steps. Each pair of IR points sets up the start/end point pair (x1,y1 and x2,y2). Pretty simple. It is not exactly the same as just blending the outputs of the convolutions, and the benefit is that you only do one convolution.
If you search for papers by Dave Rossum you can find the origins of this - it goes back to the 1990s with E-mu's "z-plane morphing filters."